IBM Common Data Provider for z Systems
User Guide
Version 1 Release 1

© Copyright IBM Corp. 2016, 2018
Figures

1. Flow of operational data among IBM Common Data Provider for z Systems components to multiple analytics platforms
Tables

1. Required authorizations and associated information for each component
2. Working directories for IBM Common Data Provider for z Systems components
3. Data gatherer configuration of Data Streamer port number
4. Target libraries for IBM Common Data Provider for z Systems components
5. Configuration reference information for managing policies
6. Icons on each data stream node in a policy
7. Icons on each transform node in a policy
8. Icons on each subscriber node in a policy
9. Data stream names that IBM Common Data Provider for z Systems uses to collect SMF data
10. Fields in the SMF_110_1_KPI data stream
11. Correlation between the sources from which the Log Forwarder gathers data and the data streams that can be defined for those sources
12. Common target destinations with the required streaming protocols and associated information
13. Mapping of the prefix that is used in a Logstash configuration file name to the content of the file
14. User exits for collecting z/OS SYSLOG data, with associated MVS installation exits and usage notes
15. Example System Data Engine interval values that are a factor of the total time in one day
16. z/OS console commands for starting, stopping, or viewing status or configuration information for individual Log Forwarder data streams
17. Headers for data that is sent by using the Data Transfer Protocol
18. Unsplit payload format
19. Split payload format
20. Metadata keywords and values
21. IBM Tivoli Decision Support for z/OS lookup table members to customize
22. Sample jobs for adding tables to IBM Db2 Analytics Accelerator for z/OS
23. Sample jobs for moving lookup table contents to IBM Db2 Analytics Accelerator for z/OS
24. IBM Common Data Provider for z Systems lookup table members
25. Sample jobs for generating DB2 UNLOAD format
26. Sample jobs for loading data into IBM Db2 Analytics Accelerator
27. Sample jobs for enabling tables for acceleration in IBM Db2 Analytics Accelerator
28. Sample jobs that are provided by IBM Tivoli Decision Support for z/OS for removing tables from IBM Db2 Analytics Accelerator for z/OS
29. IBM Tivoli Decision Support for z/OS analytics components that can be loaded by the System Data Engine
30. Tables for Analytics - z/OS Performance component of IBM Tivoli Decision Support for z/OS, with corresponding base component tables
31. Tables for Analytics - DB2 component of IBM Tivoli Decision Support for z/OS, with corresponding base component tables
32. Tables for Analytics - KPM CICS component of IBM Tivoli Decision Support for z/OS, with corresponding base component tables
33. Tables for Analytics - KPM DB2 component of IBM Tivoli Decision Support for z/OS, with corresponding base component tables
34. Tables for Analytics - KPM z/OS component of IBM Tivoli Decision Support for z/OS, with corresponding base component tables
35. IBM Tivoli Decision Support for z/OS analytics component views that are based on multiple tables
Contents

Figures
Tables
Conventions used in this documentation
Common Data Provider for z Systems overview
    Operational data
    Analytics platforms
    Components of Common Data Provider for z Systems
Planning to use Common Data Provider for z Systems
    z/OS system requirements
    Data Receiver system requirements
    Required authorizations for Common Data Provider components
    Working directory definitions
    Data Streamer port definition
Installing Common Data Provider for z Systems
Configuring Common Data Provider for z Systems
    Getting started with the Configuration Tool
        Setting up a working directory for the Configuration Tool
        Installing the Configuration Tool
        Uninstalling the Configuration Tool
        Enabling the Configuration Tool to support SMF data destined for IBM Operations Analytics for z Systems
        Running the Configuration Tool
        Output from the Configuration Tool
    Managing policies
        Subscribers to a data stream or transform
        Creating a policy
        Updating a policy
        Adding a subscriber for a data stream or transform
        Exporting and importing subscribers
        Configuration reference for managing policies
            Global properties that you can define for all data streams in a policy
                SYSTEM properties: Defining alternative host names for source systems
                z/OS LOG FORWARDER properties: Defining your Log Forwarder environment
                SDE properties: Defining your System Data Engine environment
                SCHEDULES properties: Defining time intervals for filtering operational data
            Icons on each node in a policy
            SMF data stream reference
            SMF_110_1_KPI data stream content
            Data stream configuration for data gathered by Log Forwarder
                Generic VSAM Cluster data stream
                Generic ZFS File data stream
                Generic z/OS Job Output data stream
                CICS EYULOG data stream
                CICS EYULOG DMY data stream
                CICS EYULOG YMD data stream
                CICS User Messages data stream
                CICS User Messages DMY data stream
                CICS User Messages YMD data stream
                NetView Netlog data stream
                USS Syslogd data stream
                WebSphere HPEL data stream
                WebSphere SYSOUT data stream
                WebSphere SYSPRINT data stream
                WebSphere USS Sysprint data stream
                z/OS SYSLOG data stream
                Data collection from a rolling z/OS UNIX log
            Data stream configuration for data gathered by System Data Engine
            Transform configuration
                TRANSCRIBE transform
                CRLF Splitter transform
                SYSLOG Splitter transform
                SyslogD Splitter transform
                NetView Splitter transform
                EYULOG MDY Splitter transform
                EYULOG DMY Splitter transform
                EYULOG YMD Splitter transform
                CICS MSGUSR MDY Splitter transform
                CICS MSGUSR DMY Splitter transform
                CICS MSGUSR YMD Splitter transform
                WAS for zOS SYSOUT Splitter transform
                WAS for zOS SYSPRINT Splitter transform
                WAS HPEL Splitter transform
                WAS SYSTEMOUT Splitter transform
                FixedLength Splitter transform
                Regex Filter transform
                Time Filter transform
            Subscriber configuration
    Securing communications between the Data Streamer and its subscribers
    Preparing the target destinations to receive data from the Data Streamer
        Preparing to send data to Splunk
        Preparing to send data to Elasticsearch
        Configuring the Data Receiver
            Setting up a working directory and an output directory for the Data Receiver
            Copying the Data Receiver files to the target system
            Updating the Data Receiver properties
            Data Receiver process for managing disk space
        Configuring a Logstash receiver
    Configuring the Data Streamer
    Configuring the data gatherer components
        Configuring the Log Forwarder
            Creating the Log Forwarder started task
                Requirements for the Log Forwarder user ID
            Copying the Log Forwarder configuration files to the ENVDIR directory
            Installing the user exit for collecting z/OS SYSLOG data
                manageUserExit utility for managing the installed user exit
            Configuring the z/OS NetView message provider for collecting NetView messages
        Configuring the System Data Engine
            Authorizing the System Data Engine with APF
            Deciding which method to use for collecting SMF data
            Creating the System Data Engine started task for streaming SMF data
                Requirements for the System Data Engine user ID
                Installing the SMF user exit
                Uninstalling the SMF user exit
            Writing IMS records to SMF for processing by the System Data Engine
                Installing the IMS LOGWRT user exit
                Running the HBOPIMS utility
            Creating the job for loading SMF data in batch
    Verifying the search order for the TCP/IP resolver configuration file
Operating Common Data Provider for z Systems
    Running the Data Receiver
    Running the Data Streamer
    Running the Log Forwarder
    Running the System Data Engine
Implementing the Open Streaming API for sending user application data to the Data Streamer
    Defining data streams for the user application data
        Stream definition example
    Sending the user application data to the Data Streamer
        Data Transfer Protocol
        Sending data by using the Java API
        Sending data by using the REXX API
Loading data to IBM Db2 Analytics Accelerator for target destination IBM Tivoli Decision Support for z/OS
    Configuring IBM Tivoli Decision Support for z/OS for loading the data
    Running the System Data Engine to write data in DB2 UNLOAD format
    Loading data to IBM Db2 Analytics Accelerator
    Removing tables from IBM Db2 Analytics Accelerator
    IBM Tivoli Decision Support for z/OS analytics components that can be loaded by the System Data Engine
        Analytics component tables
        Analytics component views that are based on multiple tables
Notices
    Trademarks
    Terms and conditions for product documentation
Conventions used in this documentation
This information describes conventions that are used in the IBM® Common Data
Provider for z Systems® documentation.
hlq
    In the context of the hlq.SHBOSAMP library, for example, hlq is the high-level qualifier that is used according to your site requirements.
Common Data Provider for z Systems overview
IBM Common Data Provider for z Systems provides the infrastructure for
accessing IT operational data from z/OS® systems and streaming it to the analytics
platform of your choice in a consumable format. It is a single data provider for
sources of both structured and unstructured data, and it can provide a near
real-time data feed of z/OS log data and System Management Facilities (SMF) data
to your analytics platform.
IBM Common Data Provider for z Systems automatically monitors z/OS log data
and SMF data and forwards it to the configured destination.
In each logical partition (LPAR) from which you want to analyze z/OS log data or
SMF data, a unique instance of IBM Common Data Provider for z Systems must be
installed and configured to specify the type of data to be gathered and the
destination for that data, which is called a subscriber.
IBM Common Data Provider for z Systems includes a web-based configuration tool
that is provided as a plug-in for IBM z/OS Management Facility (z/OSMF).
Flow of operational data to your analytics platform
As illustrated in Figure 1 on page 4, operational data (such as SMF record type 30
or z/OS SYSLOG data) is gathered by data gatherers, such as the System Data
Engine or the Log Forwarder, and can be streamed to multiple subscribers.
The data gatherers send the data to the Data Streamer, which transforms the data
before it sends the data to the subscribers.
The flow of data is controlled by a policy that you define in the IBM Common
Data Provider for z Systems Configuration Tool.
policy
    In IBM Common Data Provider for z Systems, a set of rules that define what operational data to collect and where to send that data.
subscriber
    In the IBM Common Data Provider for z Systems configuration, the software that you define to receive operational data. You can have both on-platform and off-platform subscribers.
on-platform subscriber
    A subscriber that is on the same z/OS system, or in the same logical partition (LPAR), as the source from which the operational data originates.
off-platform subscriber
    A subscriber that is not on the same z/OS system, or in the same logical partition (LPAR), as the source from which the operational data originates.
Figure 1. Flow of operational data among IBM Common Data Provider for z Systems components to multiple analytics platforms (graphic not reproduced). The figure shows the data gatherers (the System Data Engine for SMF data, the Log Forwarder for log data, and the Open Streaming API for user application data) sending data to the Data Streamer, which streams it to on-platform subscribers (IBM Db2 Analytics Accelerator and IBM Tivoli Decision Support for z/OS) and, through Logstash or the Data Receiver, to off-platform subscribers such as IBM Operations Analytics for z Systems (on premises), Elasticsearch, and Kafka. The Configuration Tool, running in z/OSMF and secured by RACF, produces the configuration files that control the data feed.
Operational data
Operational data is data that is generated by the z/OS system as it runs. This data
describes the health of the system and the actions that are taking place on the
system. The analysis of operational data by analytics platforms and cognitive
agents can produce insights and recommended actions for making the system
work more efficiently and for resolving, or preventing, problems.
IBM Common Data Provider for z Systems can collect the following types of
operational data:
• System Management Facilities (SMF) data
• Log data from the following sources:
  – Job log, which is output that is written to a data definition (DD) by a running job
  – z/OS UNIX log file, including the UNIX System Services system log (syslogd)
  – Entry-sequenced Virtual Storage Access Method (VSAM) cluster
  – z/OS system log (SYSLOG)
  – IBM Tivoli® NetView® for z/OS messages
  – IBM WebSphere® Application Server for z/OS High Performance Extensible Logging (HPEL) log
• User application data, which is operational data from your own applications
Analytics platforms
An analytics platform is a software program, or group of dedicated systems and
software, that is configured to receive, store, and analyze large volumes of
operational data.
The following analytics platforms are examples:
• IBM Db2® Analytics Accelerator for z/OS, a database application that provides query-based reporting
• IBM Operations Analytics for z Systems, an on-premises product that can receive large volumes of operational data for analysis and can provide insights and recommended actions to the system owners, which are based on expert knowledge about z Systems and applications
• Platforms such as Elasticsearch, Apache Hadoop, and Splunk that can receive and store operational data for analysis. These platforms do not include expert knowledge about z Systems and applications, but users can create or import their own analytics to run against the data.
Tip: Apache Kafka is an open-source, high-performance, distributed streaming platform that can be used to publish and subscribe to streams of records. It can be used as a path to data consumers such as Apache Hadoop, Apache Spark Streaming, and others.
Components of Common Data Provider for z Systems
IBM Common Data Provider for z Systems includes the following basic
components: 1) a Configuration Tool for defining the sources from which you want
to collect operational data, 2) the data gatherer components (System Data Engine
and Log Forwarder) for gathering different types of operational data, and 3) a Data
Streamer for streaming all data to its destination.
Other components include the Open Streaming API for gathering operational data
from your own applications, and a Data Receiver that acts as a target subscriber
for operational data if the intended subscriber cannot directly ingest the data feed.
The components are illustrated in Figure 1 on page 4.
Basic components
Configuration Tool
The IBM Common Data Provider for z Systems Configuration Tool is a
web-based user interface that is provided as a plug-in for IBM z/OS
Management Facility (z/OSMF). In the tool, you specify the configuration
information as part of creating a policy for streaming operational data to its
destination.
In the policy definition, you must define a data stream for each source from
which you want to collect operational data. A stream of data is a set of
data that is sent from a common source in a standard format, is routed to,
and transformed by, the Data Streamer in a predictable way, and is
delivered to one or more subscribers.
You must specify the following information in each data stream definition:
• The source (such as SMF record type 30 or z/OS SYSLOG)
• The format to which to transform the operational data so that it is consumable by the analytics platform. In the transcribe transform, for example, you specify the character encoding (such as UTF-8).
• The subscriber or subscribers for the operational data that is output by IBM Common Data Provider for z Systems. For example, subscribers might include analytics platforms such as IBM Operations Analytics for z Systems, Elasticsearch, Apache Hadoop, Splunk, and others.
Data gatherer components
Each of the following components gathers a different type of data:
System Data Engine
The System Data Engine gathers System Management Facilities
(SMF) data and IBM Information Management System (IMS™) log
data in near real time. It can also gather SMF data in batch.
The System Data Engine can process all commonly used SMF
record types from the following sources:
• SMF archive (which is processed only in batch)
• SMF in-memory resource (by using the SMF real-time interface)
• SMF user exit HBOSMFEX
• SMF log stream
It can also convert SMF records into a consumable format, such as
a comma-separated values (CSV) file, or into DB2® UNLOAD
format for loading in batch.
The System Data Engine can also be installed as a stand-alone
utility to feed SMF data into IBM Db2 Analytics Accelerator for
z/OS (IDAA) for use by IBM Tivoli Decision Support for z/OS.
Log Forwarder
The Log Forwarder gathers z/OS log data from the following
sources:
• Job log, which is output that is written to a data definition (DD) by a running job
• z/OS UNIX log file, including the UNIX System Services system log (syslogd)
• Entry-sequenced Virtual Storage Access Method (VSAM) cluster
• z/OS system log (SYSLOG)
• IBM Tivoli NetView for z/OS messages
• IBM WebSphere Application Server for z/OS High Performance Extensible Logging (HPEL) log
To reduce general CPU usage and costs, you can run the Log
Forwarder on z Systems Integrated Information Processors (zIIPs).
Data Streamer
The Data Streamer streams operational data to configured subscribers in
the appropriate format. It receives the data from the data gatherers, splits it
into individual messages if required (for z/OS SYSLOG data, for example),
transforms the data into the appropriate format (such as UTF-8) for the
subscriber, and sends the data to the subscriber.
The Data Streamer can stream data to both on-platform and off-platform
subscribers. To reduce general CPU usage and costs, you can run the Data
Streamer on z Systems Integrated Information Processors (zIIPs).
Other components
Depending on your environment, you might also want to use one, or both, of
the following components:
Open Streaming API
The Open Streaming API provides an efficient way to gather operational
data from your own applications by enabling your applications to be data
gatherers. You can use the API to define your own data streams for
sending your application data to the Data Streamer and streaming it to
analytics platforms.
Data Receiver
The Data Receiver acts as a target subscriber if the intended subscriber of a
data stream cannot directly ingest the data feed from IBM Common Data
Provider for z Systems. The Data Receiver writes any data that it receives
to disk files, which can then be ingested into an analytics platform such as
Splunk.
The Data Receiver can run on a distributed platform, a z/OS system, or
both.
Planning to use Common Data Provider for z Systems
Review the system and security requirements for using IBM Common Data
Provider for z Systems to provide z/OS operational data. Also, review the
information about the Data Streamer port definition and about the working
directories for IBM Common Data Provider for z Systems components.
z/OS system requirements
Verify that your z/OS system meets the requirements for running IBM Common Data Provider for z Systems. You must run IBM Common Data Provider for z Systems in each z/OS logical partition (LPAR) from which you want to gather z/OS operational data.
These requirements apply to the z/OS system where you are running the IBM
Common Data Provider for z Systems Data Streamer, Log Forwarder, and System
Data Engine.
IBM Common Data Provider for z Systems must be run with the following
software:
• IBM z/OS V2.1 or V2.2 (product number 5655-ZOS)
• One of the following products:
  – IBM z/OS Management Facility V2.1 (product number 5610-A01), with APAR/PTF PI52426/UI36314
  – IBM z/OS Management Facility V2.2 (product number 5650-ZOS), with APAR/PTF PI52426/UI36315
• One of the following Java™ libraries:
  – IBM 31-bit SDK for z/OS Java Technology Edition V7.0.1 (product number 5655-W43)
  – IBM 64-bit SDK for z/OS Java Technology Edition V7.0.1 (product number 5655-W44)
  – IBM 31-bit SDK for z/OS Java Technology Edition V8 (product number 5655-DGG)
  – IBM 64-bit SDK for z/OS Java Technology Edition V8 (product number 5655-DGH)

  Important considerations:
  – Use the latest available service release of the version of IBM SDK for z/OS, Java Technology Edition, that you choose, and apply fix packs as soon as possible after they are released. To find the latest service release or fix pack, see IBM Java Standard Edition Products on z/OS.
  – Although IBM Common Data Provider for z Systems runs on IBM SDK for z/OS, Java Technology Edition, Versions 7 and 8, some tests indicate that CPU usage might be higher when you run it on Version 7. If CPU usage is a concern, consider running IBM Common Data Provider for z Systems on IBM SDK for z/OS, Java Technology Edition, Version 8.
• z/OS Communications Server
• Optional requirements:
  – For collecting System Management Facilities (SMF) data from SMF in-memory resources, APAR OA49263 of IBM z/OS V2.1 or V2.2
  – For loading data to IBM Db2 Analytics Accelerator for z/OS, IBM Db2 Analytics Accelerator Loader for z/OS V2.1 (product number 5639-OLE)
Data Receiver system requirements
If you plan to use the IBM Common Data Provider for z Systems Data Receiver,
verify that the system on which you plan to install the Data Receiver meets the
requirements for running the Data Receiver.
The Data Receiver can run on a Linux, Windows, or z/OS system. It requires Java Runtime Environment (JRE) 7 or later.
If your target destination is Splunk, also see the Splunk system requirements.
CPU usage, disk usage, and memory usage vary depending on the amount of data
that is processed.
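To check which Java level is available on the system where you plan to run the Data Receiver, you can, for example, issue the following command from a command prompt:

java -version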
Required authorizations for Common Data Provider components
Different authorizations are required for installing and configuring IBM Common
Data Provider for z Systems components and for accessing component-related
libraries and configuration files during run time.
Table 1 references the information about the required authorizations for each IBM
Common Data Provider for z Systems component. The authorization requirements
for installation of the components are described in the Program Directories.
Table 1. Required authorizations and associated information for each component

Configuration Tool
    • User ID for running the setup script: “Setting up a working directory for the Configuration Tool” on page 15
    • User ID for installing the tool: “Installing the Configuration Tool” on page 16
    • User ID for uninstalling the tool: “Uninstalling the Configuration Tool” on page 17
    • User ID for running the tool: “Running the Configuration Tool” on page 19

Data Streamer
    • User ID that is associated with the Data Streamer started task: “Configuring the Data Streamer” on page 109

Log Forwarder
    • User ID that is associated with the Log Forwarder started task: “Requirements for the Log Forwarder user ID” on page 113
    • Security software updates to permit the Log Forwarder started task to run in your environment: “Creating the Log Forwarder started task” on page 112, step 4 on page 113

System Data Engine
    • APF authorization: “Authorizing the System Data Engine with APF” on page 120
    • User ID that is associated with the System Data Engine started task: “Requirements for the System Data Engine user ID” on page 123
Working directory definitions
When you configure some IBM Common Data Provider for z Systems components,
you must define a working directory for the component. To avoid possible
conflicts, do not define the same directory as the working directory for multiple
components.
Table 2 indicates where you can find information about the working directories
that must be defined.
Table 2. Working directories for IBM Common Data Provider for z Systems components

Configuration Tool
    “Setting up a working directory for the Configuration Tool” on page 15

Data Receiver
    “Setting up a working directory and an output directory for the Data Receiver” on page 105

Data Streamer
    “Configuring the Data Streamer” on page 109

Log Forwarder
    “Log Forwarder properties configuration” on page 28
Data Streamer port definition
When you configure the IBM Common Data Provider for z Systems Data Streamer,
you define the port number on which the Data Streamer listens for data from the
data gatherers. All data gatherers must send data to the Data Streamer through this
port. If you update this port in the Data Streamer configuration, you must also
update it in the configuration for all data gatherers.
For information about how to update this port for the Data Streamer, see
“Configuring the Data Streamer” on page 109.
For each data gatherer, Table 3 indicates where you can find information about the
Data Streamer port number configuration.
Table 3. Data gatherer configuration of Data Streamer port number

Log Forwarder
    “Log Forwarder properties configuration” on page 28

System Data Engine
    “Creating the System Data Engine started task for streaming SMF data” on page 121
Installing Common Data Provider for z Systems
Install IBM Common Data Provider for z Systems by using SMP/E for z/OS
(SMP/E). For installation instructions, see the two IBM Common Data Provider for
z Systems Program Directories.
About this task
If you want to stream z/OS operational data to off-platform subscribers, such as
those that are described and illustrated in “Common Data Provider for z Systems
overview” on page 3, you must install all IBM Common Data Provider for z
Systems components, according to the instructions in the following Program
Directories:
• Common Data Handler Program Directory
• System Data Engine Program Directory

Tip: The term Common Data Handler is a generic term that covers the following IBM Common Data Provider for z Systems components:
• Configuration Tool
• Data Streamer
• Log Forwarder
If you plan only to load data in batch mode to, for example, an on-platform
subscriber such as IBM Db2 Analytics Accelerator for z/OS, for use by IBM Tivoli
Decision Support for z/OS, you need to install only the System Data Engine,
according to the instructions in the System Data Engine Program Directory.
Table 4 lists the target libraries.
Table 4. Target libraries for IBM Common Data Provider for z Systems components

Configuration Tool
    /usr/lpp/IBM/cdpz/v1r1m0/UI/LIB

Data Streamer
    /usr/lpp/IBM/cdpz/v1r1m0/DS/LIB

Log Forwarder
    /usr/lpp/IBM/zscala/V3R1

System Data Engine
    The following libraries, in which you customize the high-level qualifier (hlq) according to site requirements:
    • hlq.SHBOCNTL
    • hlq.SHBOLOAD
    • hlq.SHBODEFS
    • hlq.SHBOSAMP
Configuring Common Data Provider for z Systems
To configure IBM Common Data Provider for z Systems, you must set up the
Configuration Tool, use the Configuration Tool to create your policies for streaming
data, prepare the target destinations to receive data from the Data Streamer,
configure the Data Streamer, and configure the primary data gatherer components,
which are the Log Forwarder and the System Data Engine.
Getting started with the Configuration Tool
The IBM Common Data Provider for z Systems Configuration Tool is a web-based
user interface that is provided as a plug-in for IBM z/OS Management Facility
(z/OSMF). You use the tool to specify what data you want to collect from your
z/OS system, where you want that data to be sent, and what form you want the
data to arrive in at its destination. This configuration information is contained in a
policy.
Before you begin
For information about how to set up z/OSMF so that you can use the IBM
Common Data Provider for z Systems Configuration Tool, see the z/OSMF setup
documentation for IBM Common Data Provider for z Systems.
About this task
The Configuration Tool helps you create and manage policies for streaming
operational data to its destination. The Data Streamer needs this policy information
to know what to do with the data that it receives from the data gatherers (such as
the System Data Engine and the Log Forwarder).
Each policy definition is stored on the host and secured by the IBM Resource
Access Control Facility (RACF®).
Setting up a working directory for the Configuration Tool
Before you install the IBM Common Data Provider for z Systems Configuration
Tool, you must set up a working directory where the tool can store policy
definition files. A setup script (savingpolicy.sh) is provided to automate this
process.
About this task
Guidelines for the working directory
Use the following guidelines to help you decide which directory to use as
the working directory:
• The directory must be readable and writable by the user ID that runs the Configuration Tool.
• To avoid possible conflicts, do not use a directory that is defined as the Data Streamer working directory (CDP_HOME) or the Log Forwarder working directory (ZLF_WORK).
• The setup script prompts you for input. You can enter a blank value to accept the default directory that is shown in the prompt.
User ID criteria for running the setup script
To run the setup script, you must be logged in to the z/OS system with a
user ID that meets the following criteria:
• Because IBM z/OS Management Facility (z/OSMF) administrators need to write updates to the policy definition files, the user ID must be in z/OSMF administrator group 1, which is the UNIX group to which z/OSMF administrators are added. By default, z/OSMF administrator group 1 is IZUADMIN.
• The user ID must be a TSO ID that has the UID 0 attribute.
Procedure
1. To set up the working directory, run the following command with a user ID that meets the criteria that are specified in About this task:
sh /usr/lpp/IBM/cdpz/v1r1m0/UI/LIB/savingpolicy.sh
The script creates the following path and file:
/u/userid/cdpConfig/HBOCDEUI/v1r1m0/LIB path
This path is a symbolic link to target libraries. The
cdpConfig.properties file is in this path, and it must be imported into
z/OSMF when you install the Configuration Tool. For more
information about the installation steps, see “Installing the
Configuration Tool.”
/u/userid/cdpConfig/HBOCDEUI/v1r1m0/LIB/cdpConfig.json file
This file includes the variable configPath that defines the working
directory for the Configuration Tool.
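For illustration only, if you accept a working directory of /u/userid/cdpConfig, the configPath variable in the cdpConfig.json file might be set as shown in the following sketch (the file can contain other variables as well):

{
  "configPath": "/u/userid/cdpConfig"
}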
2. For reference as you create policies, a sample policy is provided in the
/usr/lpp/IBM/cdpz/v1r1m0/UI/LIB/Sample.policy file. To use this sample
policy, copy the following files to your working directory for the Configuration
Tool:
• Sample.policy
• Sample.layout
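For example, assuming that your working directory is /u/userid/cdpConfig and that Sample.layout is in the same directory as Sample.policy, you might copy the files from UNIX System Services as follows:

cp /usr/lpp/IBM/cdpz/v1r1m0/UI/LIB/Sample.policy /u/userid/cdpConfig/
cp /usr/lpp/IBM/cdpz/v1r1m0/UI/LIB/Sample.layout /u/userid/cdpConfig/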
Installing the Configuration Tool
To install the IBM Common Data Provider for z Systems Configuration Tool, you
must log in to the IBM z/OS Management Facility (z/OSMF), and import the
cdpConfig.properties file.
Before you begin
Before you install the Configuration Tool, the following tasks must be complete:
1. IBM Common Data Provider for z Systems is installed, and all SMP/E tasks are
complete.
2. System Management Facilities (SMF) log streaming is configured.
3. z/OSMF is installed, configured, and running.
Tip: z/OSMF is running if the started tasks CFZCIM, IZUANG1, and IZUSVR1
are active.
4. The following z/OSMF program temporary fixes (PTFs) are installed:
• UA82682
• UI31615
5. The working directory for the Configuration Tool must be set up. For more
information, see “Setting up a working directory for the Configuration Tool” on
page 15.
About this task
To install the Configuration Tool, you must be logged in to z/OSMF with a TSO
user ID that is in z/OSMF administrator group 1, which is the UNIX group to
which z/OSMF administrators are added. By default, z/OSMF administrator group
1 is IZUADMIN.
Procedure
To install the Configuration Tool, complete the following steps:
1. Open z/OSMF in a web browser, and log in with your TSO ID.
Tips:
• The web address, such as https://host/zosmf/, depends on the configuration of your z/OSMF installation.
• If you cannot log in to z/OSMF, verify that the z/OSMF WebSphere Liberty Profile server is started. The default procedure is IZUSVR1.
2. In the left navigation pane, expand z/OSMF Administration, and click Import
Manager.
3. Type the path and name of the cdpConfig.properties file, which is in the path
/u/userid/cdpConfig/HBOCDEUI/v1r1m0/LIB, and click Import.
The import can take several seconds. When it is complete, the following
message is shown:
Plug-in "Common Data Provider" with tasks
"Common Data Provider" was added to z/OSMF.
To control access, define SAF resource profiles in the
"ZMFAPLA" class for the following SAF resources:
"IZUDFLT.ZOSMF.IBM_CDP.CONFIG.CDPConfiguration".
Tip: If you click this resulting message, you are directed to documentation that
your systems programmer might need to grant permissions for accessing the
plug-in, as indicated in the next step.
4. Have your systems programmer run the following command to grant
permissions for the z/OSMF administrator group 1 (default is IZUADMIN
group) to access the plug-in:
RDEFINE ZMFAPLA +
 (IZUDFLT.ZOSMF.IBM_CDP.CONFIG.CDPConfiguration) UACC(NONE)
PERMIT +
 IZUDFLT.ZOSMF.IBM_CDP.CONFIG.CDPConfiguration +
 CLASS(ZMFAPLA) ID(IZUADMIN) ACCESS(CONTROL)
PERMIT +
 IZUDFLT.ZOSMF.IBM_CDP.CONFIG.CDPConfiguration +
 CLASS(ZMFAPLA) ID(IZUUSER) ACCESS(READ)
Uninstalling the Configuration Tool
To uninstall the IBM Common Data Provider for z Systems Configuration Tool,
you must log in to the IBM z/OS Management Facility (z/OSMF), and import the
cdpConfig.remove.properties file.
About this task
To uninstall the Configuration Tool, you must be logged in to z/OSMF with a TSO
user ID that is in z/OSMF administrator group 1, which is the UNIX group to
which z/OSMF administrators are added. By default, z/OSMF administrator group
1 is IZUADMIN.
Procedure
To uninstall the plug-in, complete the following steps:
1. Open z/OSMF in a web browser, and log in with your TSO ID.
2. In the left navigation pane, expand z/OSMF Administration, and click Import
Manager.
3. Type the path and name of the cdpConfig.remove.properties file, which is in
the path /u/userid/cdpConfig/HBOCDEUI/v1r1m0/LIB, and click Import.
Enabling the Configuration Tool to support SMF data destined
for IBM Operations Analytics for z Systems
If you plan to send System Management Facilities (SMF) data to IBM Operations
Analytics for z Systems, you must enable the IBM Common Data Provider for z
Systems Configuration Tool to support the SMF record types that are destined for
IBM Operations Analytics for z Systems.
About this task
You must complete this task before you create any policies that define SMF data
streams with IBM Operations Analytics for z Systems as the target destination.
Procedure
1. Copy the glasmf.streams.json file from the IBM Operations Analytics for z
Systems installation directory to the working directory for the IBM Common
Data Provider for z Systems Configuration Tool.
Tips:
• For information about the working directory for the IBM Common Data Provider for z Systems Configuration Tool, see “Setting up a working directory for the Configuration Tool” on page 15.
• From UNIX System Services, use the following sample command, with the appropriate values for your environment, to copy the glasmf.streams.json file:
cp /usr/lpp/IBM/zscala/V3R1/samples/glasmf.streams.json config_tool_workdir
2. Verify that the concats.json file is in the working directory for the IBM
Common Data Provider for z Systems Configuration Tool. Also, verify that the
file contains the appropriate values for your installation, as described in the
following example:
"CDP" : "CDP.SHBODEFS"
    A reference to the SHBODEFS data set that is installed with IBM Common Data Provider for z Systems. The value in quotation marks (in this example, CDP.SHBODEFS) must be the data set name for your installation.

"IOAz" : "ZSCALA.V3R1M0.SGLASAMP"
    A reference to the SGLASAMP data set that is installed with IBM Operations Analytics for z Systems. The value in quotation marks (in this example, ZSCALA.V3R1M0.SGLASAMP) must be the data set name for your installation.
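Assembling the two lines that are described above, a minimal concats.json file with those example data set names might look like the following sketch; substitute the data set names for your installation:

{
  "CDP" : "CDP.SHBODEFS",
  "IOAz" : "ZSCALA.V3R1M0.SGLASAMP"
}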
Results
In the IBM Common Data Provider for z Systems Configuration Tool, you can now
create policies that define SMF data streams with IBM Operations Analytics for z
Systems as the target destination.
Running the Configuration Tool
To run the IBM Common Data Provider for z Systems Configuration Tool, you
must log in to the IBM z/OS Management Facility (z/OSMF).
Before you begin
Install the Configuration Tool, as described in “Installing the Configuration Tool”
on page 16.
About this task
To run the Configuration Tool, you must be logged in to z/OSMF with a TSO user
ID that is in z/OSMF administrator group 1, which is the UNIX group to which
z/OSMF administrators are added. By default, z/OSMF administrator group 1 is
IZUADMIN.
Procedure
1. Open z/OSMF in a web browser, and log in with your TSO ID.
2. In the left navigation pane, expand Configuration, and click Common Data
Provider.
The Common Data Provider tab opens, and any predefined policies are shown.
From this page, you can manage your policies for streaming z/OS operational
data to subscribers.
Tip: The sample policy is shown on this page if it was copied into the
Configuration Tool working directory. For more information, see “Setting up a
working directory for the Configuration Tool” on page 15.
Output from the Configuration Tool
For each policy that you save, the IBM Common Data Provider for z Systems
Configuration Tool creates several files in its working directory.
For example, if you create and save a policy that is named Sample1, the following
files are created in the Configuration Tool working directory:
• Sample1.policy
• Sample1.layout
• Sample1.sde
• Sample1.zlf.conf
• Sample1.config.properties
Important: Do not edit the files that the Configuration Tool creates.
The following descriptions explain the purpose of each output file, based on the
file name extension:
.policy file
Contains configuration information for the Data Streamer.
.layout file
Contains information about how a policy definition is visually presented in
the Configuration Tool.
.sde file
Contains configuration information for the System Data Engine.
.zlf.conf file
Contains environment variables for the Log Forwarder.
.config.properties file
Contains configuration information for the Log Forwarder.
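For example, if your Configuration Tool working directory is /u/userid/cdpConfig, a listing of that directory after you save the Sample1 policy would include the generated files, along with the files for any other policies that you saved:

ls /u/userid/cdpConfig
Sample1.config.properties  Sample1.layout  Sample1.policy  Sample1.sde  Sample1.zlf.conf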
Managing policies
From the IBM Common Data Provider for z Systems Configuration Tool, you can
manage (for example, create, update, search for, and sort) your policies for
streaming z/OS operational data to subscribers.
Before you begin
For information about how to run the Configuration Tool, see “Running the
Configuration Tool” on page 19.
About this task
A policy includes the following components, which are shown in the Configuration
Tool as interconnected nodes in a graph so that you can more easily see how data
flows through your system:
Data streams
You must define a data stream for each source (such as SMF record type 30
or z/OS SYSLOG) from which you want to collect operational data.
Transforms
In a transform, you specify the format to apply to the operational data in
the stream so that the data is consumable at its target destination. For
example, you can specify UTF-8 encoding as the format to be applied to a
data stream with EBCDIC encoding.
Subscribers
You must define the subscriber or subscribers for each data stream. A
subscriber can be subscribed to multiple data streams.
Tip: Only one policy can be active in each logical partition (LPAR).
Subscribers to a data stream or transform
A subscriber is typically intermediary software (such as Logstash) through which
operational data can be sent to its target destination, which can include analytics
platforms such as IBM Operations Analytics for z Systems, Splunk, Elasticsearch,
and others. When you create a policy, you can define any of the following
subscribers for your data streams: Logstash, the IBM Common Data Provider for z
Systems Data Receiver, or a generic HTTP or HTTPS subscriber.
Streaming protocols for sending data from the Data Streamer to
its subscribers
If the subscriber is Logstash or the Data Receiver, the Data Streamer uses a
persistent TCP socket connection to send data to the subscriber.
If the subscriber is the generic HTTP or HTTPS subscriber, the Data Streamer uses
HTTP or HTTPS to send data to the subscriber.
When you configure a subscriber in a policy, the streaming protocol that you select
in the configuration (in the Protocol field) defines the following characteristics for
sending the data:
• The intermediary software (such as Logstash) through which operational data is to be sent to its target destination
• The communications protocol, such as a persistent TCP socket connection, HTTP, or HTTPS
• Whether secure communications is used
For more information about the streaming protocols, see “Subscriber configuration”
on page 95.
Logstash
Logstash is an open source data collection engine that, in near real time, can dynamically unify data from disparate sources and normalize the data into the destinations of your choice for analysis and visualization.
Because IBM Common Data Provider for z Systems cannot forward data directly to
IBM Operations Analytics for z Systems, for example, the z/OS log data and SMF
data that is gathered by IBM Common Data Provider for z Systems must be
forwarded to a Logstash instance. The Logstash instance then forwards the data to
IBM Operations Analytics for z Systems. Through Logstash, you can also forward
data to other destinations, such as Elasticsearch, or route data through a message
broker, such as Apache Kafka.
For more information about Logstash, see the Logstash documentation.
Data Receiver
The IBM Common Data Provider for z Systems Data Receiver is software that runs
on a distributed platform and acts as a target subscriber when the intended
subscriber of a data stream cannot directly ingest the data feed from IBM Common
Data Provider for z Systems. The Data Receiver writes any data that it receives to
disk into files that are sorted by the data source type. These files can then be
ingested by an analytics platform.
Depending on the target destinations for the operational data in your environment,
you might want to use the IBM Common Data Provider for z Systems Data
Receiver. For example, you must use the Data Receiver as the subscriber if the
target destination for your operational data is Splunk.
If the input to the Data Receiver is defined in comma-separated values (CSV)
format, the Data Receiver writes data in CSV format. If the input is defined in
JavaScript Object Notation (JSON) format, it writes data in JSON format. Each
record that the Data Receiver writes contains the following fields:
sysplex
The name of the sysplex from which the data originates, as it is defined to
the z/OS system.
sysname
The name of the system from which the data originates, as it is defined to
the z/OS system.
hostname
The TCP/IP host name for the system from which the data originates.
path
The virtual path of the data, which can be a real file path, such as
/home/etc/cdpConfig.log or a virtual file path, such as SMF/SMF_030.
sourceType
The type of the data.
sourceName
A name that is associated with the data stream.
timezone
The base time zone offset for the time stamps in the data.
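For illustration only, a record that the Data Receiver writes in JSON format might look like the following sketch. All field values here are invented examples (the actual values depend on your systems and policy), and the record also carries the data itself in addition to the fields that are shown:

{
  "sysplex": "PLEX1",
  "sysname": "SYS1",
  "hostname": "abc.host.com",
  "path": "SMF/SMF_030",
  "sourceType": "zOS-SMF_030",
  "sourceName": "SMF_030",
  "timezone": "-05:00"
}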
Generic HTTP subscriber
A generic HTTP subscriber is an application that can receive data that is sent by
using an HTTP POST request. An example of a generic HTTP subscriber is a
Logstash event processing pipeline that is configured with an HTTP input that
uses the JavaScript Object Notation (JSON) codec.
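For example, a minimal Logstash pipeline of this kind might be configured as follows. This is an illustrative sketch rather than a product-supplied configuration, and the port number is an arbitrary example:

input {
  # Accept HTTP POST requests, such as those sent to a generic HTTP subscriber
  http {
    port => 8080
    codec => json
  }
}
output {
  # Write the resulting events to standard output for verification
  stdout { }
}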
The body of the HTTP POST request is a JSON object with the following members:
host
The TCP/IP host name of the system from which the data originates.
path
The virtual path of the data, which can be a real file path, such as
/home/etc/cdpConfig.log or a virtual file path, such as SMF/SMF_030.
sourceType
The type of the data.
sourceName
A name that is associated with the data stream.
timezone
The base time zone offset for the time stamps in the data.
systemName
The name of the system from which the data originates, as it is defined to
the z/OS system.
sysplexName
The name of the sysplex from which the data originates, as it is defined to
the z/OS system.
message
The data itself. Depending on whether the data was split, and on the
number of records that were in the data, this value might be one string or
an array of strings.
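Putting these members together, the body of such an HTTP POST request might look like the following sketch. All values are invented examples, and message is shown as an array of strings, as it would be when the data was split into multiple records:

{
  "host": "abc.host.com",
  "path": "SMF/SMF_030",
  "sourceType": "zOS-SMF_030",
  "sourceName": "SMF_030",
  "timezone": "-05:00",
  "systemName": "SYS1",
  "sysplexName": "PLEX1",
  "message": ["first example record", "second example record"]
}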
Creating a policy
From the Common Data Provider tab in the IBM Common Data Provider for z
Systems Configuration Tool, you can create a policy.
About this task
In a policy, you must define at least one data stream with at least one subscriber.
For each data stream, you can also define one or more transforms and multiple
subscribers. A subscriber can also be subscribed to multiple data streams and
transforms.
“Icons on each node in a policy” on page 31 describes the icons that are shown on
each data stream, transform, and subscriber node that you define in a policy.
Procedure
To create a policy, complete the following steps:
1. Click the Create a new policy box.
2. In the resulting Policy Profile Edit window, type or update the required policy
name and, optionally, a policy description.
3. Define any global properties, which are properties that apply to all data
streams in the policy. For information about what you can define, see “Global
properties that you can define for all data streams in a policy” on page 26.
4. Click the Add Data Stream icon. The “Select data stream” window is shown with a list of categorized data streams. You can expand the categories to view the possible data streams that you can define for this policy.
5. Select one or more data streams and the encoding (if any) into which you want
to convert the data streams (such as UTF-8 encoding), and click Select.
Important: By default, the Transcribe selected streams to check box is selected.
If you do not want to convert the data streams into another encoding, clear this
check box. Otherwise, select the encoding value that you want.
After you click Select, each data stream that you chose is shown as a node in
the graph. If you chose to transform the data stream, a transform node is
connected to the data stream in the graph. Each node includes the icons that
are described in “Icons on each node in a policy” on page 31.
6. Depending on what you want to define in the policy, use the Configure, Transform, and Subscribe icons on each node to complete the policy definition. If you want to add more data streams to the policy, use the Add Data Stream icon.
Tips:
• “Icons on each node in a policy” on page 31 indicates where you can find more information about configuring data streams, transforms, and subscribers, including information about the configuration values.
• To remove a data stream, transform, or subscriber node from a policy definition, click the Remove icon (X mark) on the node. When you remove a data stream node or a transform node, any connected transform nodes are also removed.
7. To save the policy, click Save. The box for the new policy is then shown on the
page.
Updating a policy
From the Common Data Provider tab in the IBM Common Data Provider for z
Systems Configuration Tool, you can update a policy, which can include editing,
renaming, duplicating, or deleting the policy.
Procedure
Complete one or more of the following steps, depending on what you want to do:
• To edit a policy, click the box that has the policy name, make changes in the resulting Policy Profile Edit window, and click Save to save your changes.
• To rename a policy, click the Rename icon on the box for the policy.
• To duplicate a policy, click the Duplicate icon on the box for the policy.
• To delete a policy, click the Delete icon on the box for the policy.
Tip: “Output from the Configuration Tool” on page 19 describes the files that
the Configuration Tool creates for each policy that you save. For recovery
purposes, when you delete a policy, the file extension .hidden is appended as a
suffix to the standard file extension of all the associated policy files. To recover a
deleted policy, rename each of the associated policy files to remove .hidden from
the file extension.
Adding a subscriber for a data stream or transform
When you click the Subscribe icon on a data stream or transform node, a window opens where you can select a previously defined subscriber, or define a new subscriber to include in the selection list. This procedure focuses on how to define a new subscriber.
Before you begin
For information about the types of subscribers that you can choose, see
“Subscribers to a data stream or transform” on page 20.
Procedure
To define a new subscriber for a data stream or transform node, complete the
following steps:
1. In the “Subscribe to a data stream” or “Subscribe to a transform” window, click the Add Subscriber icon.
2. In the resulting “Add subscriber” window, update the associated configuration
values, and click OK to save the subscriber. For more information about the
configuration values, see “Subscriber configuration” on page 95.
3. In the “Subscribe to a data stream” or “Subscribe to a transform” window,
select one or more subscribers, and click Update Subscriptions. The subscribers
that you chose are then shown on the graph.
Exporting and importing subscribers
You can export a subscriber with all transforms and data streams to which it is
subscribed. The resulting subscriber file can then be imported into another policy.
About this task
Each subscriber node includes the Export icon that you can use to export the subscriber and its associated data stream and transform nodes.
Procedure
To export a subscriber and import it into another policy, complete the following
steps:
1. On the subscriber node, click the Export icon.
In the resulting Export window, the following check boxes are shown:
Omit layout
By default, an exported subscriber file includes information about how
a policy definition is visually presented in the Configuration Tool. This
layout information is used to reproduce the positioning of the
subscriber node and its associated data stream and transform nodes.
If you do not want to save the layout information, select this check box
to omit it. A new layout is then generated when the subscriber is
imported.
Export as template
By default, an exported subscriber file includes configuration
information that was previously provided (for example, ports, IP
addresses, and user names).
If you do not want to save this configuration information, select this
check box to export the subscriber file as a template in which all
configurable information is reset to the default values.
2. After you optionally select one or both check boxes, click Export to download
the policy information, which is in a file with the extension .subscriber. The
exported policy information in the .subscriber file can then be imported into
another policy.
3. To import the preconfigured subscriber into a policy, complete either of the
following actions:
• Drag and drop the .subscriber file onto the policy graph.
• Click the Import button at the top of the Policy Profile Edit window to browse for the .subscriber file to import.
The existing and imported graphs are then merged.
Configuration reference for managing policies
This reference contains information that is useful in creating and updating policies.
It includes information about the global properties that you can define for a policy,
the icons on each node in a policy, the correlation between SMF record types and
the associated SMF data stream names, and the configuration values that you can
update for each data stream, transform, or subscriber.
Table 5. Configuration reference information for managing policies

The following list pairs each item of reference information with the area of the Configuration Tool where the relevant configuration is done:

“Global properties that you can define for all data streams in a policy” on page 26
    Policy Profile Edit window, in the Global Properties section

“Icons on each node in a policy” on page 31
    Policy Profile Edit window, in the graph

“SMF data stream reference” on page 32
    Window that is shown when you click the Add Data Stream icon in the Policy Profile Edit window

“SMF_110_1_KPI data stream content” on page 45
    Window that is shown when you click the Add Data Stream icon in the Policy Profile Edit window

“Data stream configuration for data gathered by Log Forwarder” on page 49
    Window that is shown when you click the Configure icon on a data stream node for data that is gathered by the Log Forwarder

“Data stream configuration for data gathered by System Data Engine” on page 79
    Window that is shown when you click the Configure icon on a data stream node for data that is gathered by the System Data Engine

“Transform configuration” on page 79
    Window that is shown when you click the Transform icon on a data stream or transform node

“Subscriber configuration” on page 95
    Window that is shown when you click a Subscribe icon on a data stream or transform node
Global properties that you can define for all data streams in a
policy
When you create or edit a policy in the IBM Common Data Provider for z Systems
Configuration Tool, the buttons SYSTEM, z/OS LOG FORWARDER, SDE, and
SCHEDULES are shown in the Global Properties section of the Policy Profile Edit
window. You can use these buttons to define properties that apply to all data
streams (or all data streams from a certain type of data gatherer) in the policy.
Tips:
v The z/OS LOG FORWARDER button is available only after you define a data
stream for z/OS log data, which is gathered by the Log Forwarder. Use this
button to set, or verify, the Log Forwarder properties.
v The SDE button is available only after you define a data stream for SMF data,
which is gathered by the System Data Engine. Use this button to set, or verify,
the System Data Engine properties.
SYSTEM properties: Defining alternative host names for source systems:
When you create or edit a policy in the IBM Common Data Provider for z Systems
Configuration Tool, you can use the SYSTEM button to define alternative host
names for the source systems from which IBM Common Data Provider for z
Systems collects data. The Data Streamer then uses these alternative host names in
the associated data records that it sends to subscribers.
About this task
Example of using alternative host names
If the host name for a source system is abc.host.com, and you define an
alternative host name of def.host.com for this source system, the Data
Streamer changes the host name in the associated data records to
def.host.com before it sends the records to the subscriber.
Reasons why you might want to define alternative host names
The host name for a source system can sometimes change. If you know, for
example, that a source system interchangeably uses ghi.host.com and
jkl.host.com as its host name, you can define ghi.host.com to be the
alternative host name for jkl.host.com. Then, the host name is always
reported as ghi.host.com so that the data at the target destination can
easily be correlated to the correct source system.
You might also have other reasons for defining alternative host names.
Procedure
To define alternative host names, complete the following steps:
1. In the Global Properties section of the Policy Profile Edit window, click
SYSTEM.
2. Click ADD SYSTEM.
3. In the System name field, type the current host name.
4. In the Remapped host name field, type the alternative host name.
5. Repeat steps 2 to 4 for each source system for which you want to define
alternative host names.
6. Click OK.
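The effect of the remapping can be pictured with a small sketch (illustrative
Python only, not product code; the record field name and host names are
hypothetical examples):

    # Hypothetical remap table: "System name" -> "Remapped host name"
    alternative_host_names = {
        "abc.host.com": "def.host.com",
        "jkl.host.com": "ghi.host.com",
    }

    def remap_host(record):
        """Replace the source host name in a data record if an
        alternative host name is defined for it."""
        host = record["sourceHost"]  # field name is illustrative only
        record["sourceHost"] = alternative_host_names.get(host, host)
        return record

    print(remap_host({"sourceHost": "jkl.host.com", "message": "..."}))
    # -> {'sourceHost': 'ghi.host.com', 'message': '...'}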
z/OS LOG FORWARDER properties: Defining your Log Forwarder
environment:
In the IBM Common Data Provider for z Systems Configuration Tool, after you
define a data stream for z/OS log data (which is gathered by the Log Forwarder),
use the z/OS LOG FORWARDER button to set the configuration values for your
Log Forwarder environment.
About this task
For more information about the Log Forwarder configuration values, see “Log
Forwarder properties configuration” on page 28.
For more information about configuring the Log Forwarder, see “Configuring the
Log Forwarder” on page 111.
Procedure
To define your Log Forwarder environment, complete the following steps:
1. In the Global Properties section of the Policy Profile Edit window, click z/OS
LOG FORWARDER.
2. In the “Configure z/OS Log Forwarder properties” window, update the
configuration values for your environment, and click OK.
Log Forwarder properties configuration:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder properties” window of the IBM Common Data Provider for z
Systems Configuration Tool.
Port
The port on which the Data Streamer listens for data from the Log
Forwarder.
Tip: For more information about the Data Streamer port, see “Configuring
the Data Streamer” on page 109.
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams.
The value must be an integer in the range 1 - 5. The default value is 1.
Pattern Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for new data sources that match wildcard
specifications. This value applies to all data streams from the Log
Forwarder, although it can be overridden on some individual streams.
The value must be an integer in the range 0 - 60. The default value is 1.
JRELIB
The fully qualified path to a set of native libraries that are required by the
Java Runtime Environment (31-bit).
JRELIB64
The fully qualified path to a set of native libraries that are required by the
Java Runtime Environment (64-bit).
REGJAR
The fully qualified path to the ifaedjreg.jar file, which provides access to
z/OS product registration services.
RESOLVER_CONFIG
The TCP/IP resolver configuration file that the Log Forwarder must use.
The Log Forwarder is a z/OS UNIX System Services program. It uses
TCP/IP functions that require access to the TCP/IP resolver configuration
file.
For more information, see “Verifying the search order for the TCP/IP
resolver configuration file” on page 129.
TZ
The time zone offset for the Log Forwarder and all data streams from the
Log Forwarder.
ZLF_JAVA_HOME
The Java installation directory.
ZLF_HOME
The Log Forwarder installation directory.
ZLF_WORK
The Log Forwarder working directory, which contains files that are created
and used during the operation of the Log Forwarder. For example, it
includes files that contain information about the state of the Log Forwarder
and its progress in collecting data.
Guidelines for the working directory
Use the following guidelines to help you decide which directory to
use as the working directory:
v The working directory must be in a different physical location
from the working directory for any other Log Forwarder
instance.
v The directory must be readable and writable by the user ID that
runs the Log Forwarder.
v To avoid possible conflicts, do not use a directory that is defined
as the Configuration Tool working directory.
Important: Do not update, delete, or move the files in the Log
Forwarder working directory.
ZLF_LOG
The directory for the logging.properties file.
ZLF_WAS_PLUGINS_ROOT
The IBM WebSphere Application Server installation root directory for Web
Server Plug-ins. This directory contains the com.ibm.hpel.logging.jar file
that is used to retrieve log data from High Performance Extensible Logging
(HPEL).
ZLF_GATHERER
The directory that is used by data gatherers from third-party organizations.
Transport Affinity (environment variable _BPXK_SETIBMOPT_TRANSPORT)
The TCP/IP stack to which the Log Forwarder must have affinity. If no
value is specified, the Log Forwarder has affinity to the default TCP/IP
stack.
SDE properties: Defining your System Data Engine environment:
In the IBM Common Data Provider for z Systems Configuration Tool, after you
define a data stream for SMF data (which is gathered by the System Data Engine),
use the SDE button to set the configuration values for your System Data Engine
environment.
About this task
For more information about configuring the System Data Engine, see “Configuring
the System Data Engine” on page 119.
Procedure
1. In the Global Properties section of the Policy Profile Edit window, click SDE.
2. In the “Configure SDE properties” window, update the following configuration
values for your environment, and click OK.
CDP Concatenation
This value must be the name of the SHBODEFS data set that is installed
with IBM Common Data Provider for z Systems in your environment.
This data set is also referenced in the concats.json file, which is in the
working directory for the IBM Common Data Provider for z Systems
Configuration Tool.
IOAz Concatenation
This value is relevant only if you are using IBM Operations Analytics
for z Systems. It is required as part of enabling the Configuration Tool
to support SMF data that is destined for IBM Operations Analytics for z
Systems. For more information, see “Enabling the Configuration Tool to
support SMF data destined for IBM Operations Analytics for z
Systems” on page 18.
The value must be the name of the SGLASAMP data set that is installed
with IBM Operations Analytics for z Systems in your environment. This
data set is also referenced in the concats.json file, which is in the
working directory for the IBM Common Data Provider for z Systems
Configuration Tool.
SCHEDULES properties: Defining time intervals for filtering operational data:
When you create or edit a policy in the IBM Common Data Provider for z Systems
Configuration Tool, you can use the SCHEDULES button to define time intervals
for filtering the operational data that IBM Common Data Provider for z Systems
collects. For example, you might want to define time intervals to filter data
according to the expected peak demand for your applications.
About this task
To define a time interval for filtering the data, you must first define a schedule,
which can contain one or more time interval definitions. You can define multiple
schedules.
Important: The schedules that you define are used in filtering data streams only if,
when you configure the data streams, you select the Time Filter transform in the
“Transform data stream” window. For more information about transform types, see
“Transform configuration” on page 79.
Procedure
In the Global Properties section of the Policy Profile Edit window, click
SCHEDULES, and complete one or more of the following actions, depending on
what you want to do. Any previously defined schedules are shown in the
Schedule list.
Create or edit a schedule
    To edit a schedule, select it from the Schedule list.
    To define a new time interval in a schedule, click ADD, and complete the
    following steps:
    1. In the Edit name field, type the name for the schedule that you want
       to contain this time interval.
    2. To set the time interval for this schedule, either type the time
       information in the From and to fields, or use the slider to adjust
       the time.
    3. To add another time interval for this schedule, click ADD WINDOW, and
       repeat the previous step.
    4. To save the schedule, click APPLY.

Delete a schedule
    Select the schedule from the Schedule list, and click DELETE.
    Restriction: The DELETE button is not available if a schedule is assigned
    to a transform for a data stream.
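Conceptually, a schedule is a set of time windows, and a data stream that uses
the Time Filter transform forwards only the records whose time stamps fall
inside one of the windows. The following sketch (illustrative Python, not the
product implementation; the window values are hypothetical, and windows are
assumed not to cross midnight) shows the idea:

    from datetime import time

    # A hypothetical schedule with two time intervals (peak periods).
    schedule = [(time(8, 0), time(11, 30)), (time(13, 0), time(17, 0))]

    def passes_time_filter(record_time, windows=schedule):
        """Keep a record only if its time falls inside any window."""
        return any(start <= record_time <= end for start, end in windows)

    print(passes_time_filter(time(9, 15)))   # True  (inside 08:00-11:30)
    print(passes_time_filter(time(12, 0)))   # False (between the windows)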
Icons on each node in a policy
This reference describes the icons that are shown on each data stream, transform,
and subscriber node that you define in a policy. It also indicates where you can
find more information about configuring data streams, transforms, and subscribers.
Data stream node

Table 6. Icons on each data stream node in a policy

Configure icon
    Window that opens when you click the icon: One of the following windows,
    depending on whether the data is gathered by the Log Forwarder or the
    System Data Engine:
    v Configure z/OS Log Forwarder data stream
    v Configure SDE data stream
    More information:
    v “Data stream configuration for data gathered by Log Forwarder” on page 49
    v “Data stream configuration for data gathered by System Data Engine” on page 79

Transform icon
    Window that opens when you click the icon: Transform data stream
    More information: “Transform configuration” on page 79

Subscribe icon
    Window that opens when you click the icon: Subscribe to a data stream
    More information: “Adding a subscriber for a data stream or transform” on
    page 24, and “Subscriber configuration” on page 95
Transform node

Table 7. Icons on each transform node in a policy

Configure icon
    Window that opens when you click the icon: Configure transform
    More information: “Transform configuration” on page 79

Transform icon
    Window that opens when you click the icon: Transform data stream
    More information: “Transform configuration” on page 79

Subscribe icon
    Window that opens when you click the icon: Subscribe to a transform
    More information: “Adding a subscriber for a data stream or transform” on
    page 24, and “Subscriber configuration” on page 95
Subscriber node

Table 8. Icons on each subscriber node in a policy

Configure icon
    Window that opens when you click the icon: Configure subscriber
    More information: “Adding a subscriber for a data stream or transform” on
    page 24, and “Subscriber configuration” on page 95

Export icon
    Window that opens when you click the icon: Export
    More information: “Exporting and importing subscribers” on page 24
SMF data stream reference
For each System Management Facilities (SMF) record type, this reference lists the
name of the data stream that IBM Common Data Provider for z Systems uses to
collect the data and includes a brief description of the data stream content. In the
Configuration Tool, these SMF data stream names are shown in the “Select data
stream” window, which opens when you click the Add Data Stream icon
in the Policy Profile Edit window.
Table 9 on page 33 lists, for each SMF record type and subtype, the name of
the data stream to which the SMF data is written and a brief description of
the content of the data stream. In either of the following situations, no
subtype is indicated:
v The stream applies to all subtypes of the respective SMF record.
v The respective SMF record has no subtypes.

Table 9. Data stream names that IBM Common Data Provider for z Systems uses to collect SMF data

Type 0
    SMF_000              IPL
Type 2
    SMF_002              Dump header
Type 3
    SMF_003              Dump trailer
Type 4
    SMF_004              Step termination
    SMF_004_DEVICE       Step termination device data
Type 5
    SMF_005              Job termination
    SMF_005_ACCOUNTING   Job termination accounting data
Type 6
    SMF_006              JES2/JES3/PSF/External writer
Type 7
    SMF_007              SMF data lost
Type 8
    SMF_008              I/O configuration at IPL
    SMF_008_ONLINE       Data for online devices at IPL
Type 9
    SMF_009              VARY device ONLINE
    SMF_009_DEVICE       Data for each device varied online
Type 10
    SMF_010              Allocation recovery
    SMF_010_DEVICE       Data for each device made available
Type 11
    SMF_011              VARY device OFFLINE
    SMF_011_DEVICE       Data for each device varied offline
Type 14
    SMF_014              INPUT or RDBACK data set activity
    SMF_014_UCB          UCB information
Type 15
    SMF_015              OUTPUT; UPDAT; INOUT; or OUTIN data set activity
    SMF_015_UCB          UCB information
Type 16
    SMF_016              DFSORT statistics
    SMF_016_SORTIN       SORTIN data set information
    SMF_016_SORTOUT      SORTOUT data set information
    SMF_016_OUTFIL       OUTFIL data set information
Type 17
    SMF_017              Scratch data set status
    SMF_017_VOLUMEXT     Volume information
Type 18
    SMF_018              Rename data set status
    SMF_018_VOLUMEXT     Volume information
Type 19
    SMF_019              Direct access volume
Type 20
    SMF_020              Job initiation
    SMF_020_ACCOUNTING   Job accounting information
Type 21
    SMF_021              Error statistics by volume
Type 22
    SMF_022              Configuration
Type 23
    SMF_023              SMF status
Type 24
    SMF_024              JES2 spool offload
    SMF_024_PRODUCT      JES2 product information
    SMF_024_SPOOLOFF     Statistics for spool offload devices
    SMF_024_JOBSEL       Job selection criteria
    SMF_024_SYSSEL       SYSOUT selection criteria
    SMF_024_SYSAFF       System affinity information
Type 25
    SMF_025              JES3 device allocation
Type 26
    SMF_026              JES2/JES3 job purge
Type 28
    SMF_028              NPM statistics
Type 30
    SMF_030              Common address space work
    SMF_030_EXCP         I/O information for a specific DD Name/Device address pair for the address space
    SMF_030_ACCOUNTING   User accounting information for the address space
    SMF_030_OPENMVS      z/OS UNIX process information
    SMF_030_ARM          Information related to a batch job or started task that registers as an element of automatic restart management
    SMF_030_USAGE        Product ID information and usage data
    SMF_030_ENCLAVE      Remote system data for each system that executed work under a multisystem enclave
    SMF_030_COUNTER      Hardware Instrumentation Services (HIS) counters
Type 31
    SMF_031              TIOC initialization
Type 32
    SMF_032              TSO user work accounting
    SMF_032_IDENTIF      Identification section
    SMF_032_TSOCOMMAND   TSO/E command segment
Type 33, subtype 1
    SMF_033              APPC/MVS™ TP accounting
    SMF_033_ACS          TP usage accounting
    SMF_033_TPS          TP usage scheduler data
Type 34
    SMF_034              TS-step termination
    SMF_034_DEVICE       EXCP section
Type 35
    SMF_035              LOGOFF
    SMF_035_ACCOUNT      Accounting information
Type 36
    SMF_036              Integrated Catalog Facility Catalog
Type 37
    SMF_037_HW           NetView Hardware Monitor
    SMF_037_ETHERNET     Ethernet LAN data
    SMF_037_TEXT         Text message data
Type 39, subtypes 1-7
    SMF_039_1_TO_7       NetView Session Monitor
Type 39, subtype 8
    SMF_039_8            NetView Session Monitor
Type 40
    SMF_040              Dynamic DD
    SMF_040_DEVICE       EXCP section
Type 41, subtypes 1-3
    SMF_041              Data-in-virtual Access/Unaccess
    SMF_041_VLF          VLF statistics
Type 42, subtype 1
    SMF_042_1            DFSMS - BMF performance statistics
    SMF_042_STOR_CLASS   Storage class summary
Type 42, subtype 2
    SMF_042_2            DFSMS - DFP cache control unit statistics
    SMF_042_UNIT_CACHE   Control unit cache section
    SMF_042_VOL_STATUS   Volume status section
Type 42, subtype 3
    SMF_042_3            DFSMS - DFP SMS configuration statistics
    SMF_042_EVNT_AUDIT   Event audit section
Type 42, subtype 4
    SMF_042_4            DFSMS - DFP concurrent copy session statistics
    SMF_042_CONC_COPY    Concurrent copy session section
    SMF_042_EAVCC_VOL    EAV concurrent copy volume section
Type 42, subtype 5
    SMF_042_5            DFP Storage Class statistics
    SMF_042_STOR_RESP    Storage class response time
Type 42, subtype 6
    SMF_042_6            DFP Data Set statistics
Type 42, subtype 11
    SMF_042_11           DFP Extended Remote Copy (XRC) Session Statistics
Type 42, subtype 14
    SMF_042_14           ADSM Server statistics
Type 43, subtype 2
    SMF_043_JES2         JES2 start
Type 43, subtype 5
    SMF_043_JES3         JES3 start
Type 45, subtype 2
    SMF_045_JES2         JES2 withdrawal
Type 45, subtype 5
    SMF_045_JES3         JES3 stop
Type 47, subtype 2
    SMF_047_JES2         JES2 SIGNON/start line (BSC only)
Type 47, subtype 5
    SMF_047_JES3         JES3 SIGNON/start line/LOGON
Type 48, subtype 2
    SMF_048_JES2         JES2 SIGNOFF/stop line (BSC only)
Type 48, subtype 5
    SMF_048_JES3         JES3 SIGNOFF/stop line/LOGOFF
Type 49, subtype 2
    SMF_049_JES2         JES2 integrity (BSC only)
Type 49, subtype 5
    SMF_049_JES3         JES3 integrity
Type 50
    SMF_050              ACF/VTAM® tuning statistics
Type 52
    SMF_052              JES2 LOGON/start line (SNA only)
Type 53
    SMF_053              JES2 LOGOFF/stop line (SNA only)
Type 54
    SMF_054              JES2 integrity (SNA only)
Type 55
    SMF_055              JES2 network SIGNON
Type 56
    SMF_056              JES2 network integrity
Type 57, subtype 2
    SMF_057_JES2         JES2 network SYSOUT transmission
Type 57, subtype 5
    SMF_057_JES3         JES3 networking transmission
Type 58
    SMF_058              JES2 network SIGNOFF
Type 59
    SMF_059              MVS/BDT file-to-file transmission
Type 60
    SMF_060              VSAM volume data set updated
Type 61
    SMF_061              Integrated Catalog Facility Define Activity
Type 62
    SMF_062              VSAM component or cluster opened
    SMF_062_ONLINE       Online volume information
Type 63
    SMF_063              VSAM entry defined
Type 64
    SMF_064              VSAM component or cluster status
    SMF_064_EXTENT       Extent information section
Type 65
    SMF_065              Integrated Catalog Facility Delete Activity
Type 66
    SMF_066              Integrated Catalog Facility Alter Activity
Type 67
    SMF_067              VSAM entry delete
Type 68
    SMF_068              VSAM entry renamed
Type 69
    SMF_069              VSAM data space defined; extended; or deleted
Type 70, subtype 1
    SMF_070              RMF™ CPU activity
    SMF_070_CPU          CPU data section
    SMF_070_BCT          PR/SM™ partition data section
    SMF_070_BPD          PR/SM logical processor data section
    SMF_070_INS          CPU identification section
    SMF_070_LCD          Logical core data section
Type 70, subtype 2
    SMF_070_2            RMF Cryptographic Hardware Activity
    SMF_070_PCICC        Cryptographic CCA coprocessor data section
    SMF_070_PKCS11       Cryptographic PKCS11 coprocessor data section
Type 71, subtype 1
    SMF_071              RMF paging activity
    SMF_071_SWAP         Swap placement section
Type 72, subtype 1
    SMF_072_1            RMF workload activity
    SMF_072_PGP          Performance Group Period data section
Type 72, subtype 2
    SMF_072_2            RMF storage data
    SMF_072_2_DATA       Performance Group data section
    SMF_072_2_SWAP_RSN   Swap reason data section
Type 72, subtype 3
    SMF_072_3            RMF goalmode workload activity
    SMF_072_SSS          Service class served data section
    SMF_072_SCS          Service/Report Class period data section
    SMF_072_WRS          Work Manager/Resource Manager state section
    SMF_072_DNS          Resource delay type names section
Type 72, subtype 4
    SMF_072_4            RMF Goalmode delay and storage frame data
    SMF_072_4_DATA       Service class period data section
    SMF_072_4_SWAP_RSN   Swap reason data section
Type 72, subtype 5
    SMF_072_5            RMF system suspend locks and GRS data
    SMF_072_CMS_LOCK     CMS lock data section
    SMF_072_ENQ_LOCK     CMS Enqueue/Dequeue lock data section
    SMF_072_LATCH_LOCK   CMS latch lock data section
    SMF_072_SMF_LOCK     CMS SMF lock data section
    SMF_072_LOCK_TYPE    Local lock data section
    SMF_072_LOCK_OWNER   CML lock owner data section
    SMF_072_LOCK_RQSTR   CML lock requestor data section
    SMF_072_LATCH_CRTR   Latch creator data section
    SMF_072_OFFSET_LR    Latch requestor data section
    SMF_072_GRS_ENQ      GRS Enqueue step data section
    SMF_072_ENQ_SYS      GRS Enqueue system data section
    SMF_072_ENQ_SYSS     GRS Enqueue systems data section
    SMF_072_GRS_QSCAN    GRS QScan statistics data section
Type 73, subtype 1
    SMF_073              RMF channel path activity
    SMF_073_CHAN_PATH    Channel path data section
    SMF_073_EXT_CHAN     Extended channel path data section
Type 74, subtype 1
    SMF_074_1            RMF device activity
    SMF_074_DEV_DATA     Device data section
Type 74, subtype 2
    SMF_074_2            RMF XCF activity
    SMF_074_SYS_DATA     System data section
    SMF_074_PATH_DATA    Path data section
    SMF_074_MBR_DATA     Member data section
Type 74, subtype 3
    SMF_074_3            RMF OPENMVS kernel activity
    SMF_074_OMVS_DATA    Control data section
Type 74, subtype 4
    SMF_074_4            RMF XES/CF activity
    SMF_074_CONN_DATA    Connectivity data section
    SMF_074_STRUC_DATA   Structure data section
    SMF_074_RQST_DATA    Request data section
    SMF_074_PROC_DATA    Processor utilization data section
    SMF_074_CACHE_DATA   Cache data section
    SMF_074_REMOTE_FAC   Remote facility data section
    SMF_074_CHAN_PATH    Channel path data section
    SMF_074_SC_MEMDATA   Storage class memory data section
Type 74, subtype 5
    SMF_074_5            RMF Cache activity
    SMF_074_CACHE_DEV    Cache device data section
    SMF_074_XDEV         Cache device data section extension
    SMF_074_CCU_STATUS   Cache control unit status section
    SMF_074_RAID_RANK    RAID Rank/Extent Pool data section
Type 74, subtype 6
    SMF_074_6            RMF Hierarchical file system activity
    SMF_074_HFS_GLOBAL   HFS global data section
    SMF_074_HFS_BUFFER   HFS global buffer section
    SMF_074_HFS          HFS file system section
Type 74, subtype 7
    SMF_074_7            RMF FICON® Director Statistics
    SMF_074_FCD_GLOBAL   FCD global data section
    SMF_074_FCD_PORT     FCD port data section
    SMF_074_FCD_CONN     FCD connector data section
Type 74, subtype 8
    SMF_074_8            RMF Enterprise Storage Server® (ESS) Link Statistics
    SMF_074_ESS_LINK     Link statistics section
    SMF_074_EXT_POOL     Extent pool statistics section
    SMF_074_RANK_STATS   Rank statistics section
    SMF_074_RANK_ARRAY   Rank array data section
Type 74, subtype 9
    SMF_074_9            RMF Monitor III PCIE Statistics
    SMF_074_PCIE_FUNC    PCIE function data section
    SMF_074_DMA_DATA     PCIE function type data section
    SMF_074_HWAC         Hardware accelerator data section
    SMF_074_HWAC_COMP    Hardware accelerator compression data section
Type 75, subtype 1
    SMF_075              RMF page/swap data set activity
    SMF_075_PAGE_SWAP    Page Data Set data section
Type 76
    SMF_076              RMF trace activity
    SMF_076_PRODUCT      RMF product section
    SMF_076_TRCCTRL      Trace control section
    SMF_076_TRCDATA      Trace data section
    SMF_076_VARDATA      Variable trace data section
Type 77, subtype 1
    SMF_077              RMF enqueue activity
    SMF_077_ENQ          Enqueue data section
Type 78, subtype 1
    SMF_078_1            RMF I/O queuing activity for the 308x; 908x; and 4381 processors
    SMF_078_1_IOQDATA    I/O Queuing data section for 308x
Type 78, subtype 2
    SMF_078_2            RMF virtual storage activity
    SMF_078_VSPA         Virtual Storage Private Area data section
    SMF_078_VSPASS       Virtual Storage Private Area subpool section
Type 78, subtype 3
    SMF_078_3            RMF I/O queuing activity for the 3090; 9021; 9121; and 9221 processors
    SMF_078_HYPERPAV     HyperPAV data section
    SMF_078_IOQDATA      I/O Queuing data section
Type 79
    SMF_079              RMF Monitor II activity
    SMF_079_ASDDATA      ASD and ASDJ data section
    SMF_079_ARDDATA      ARD and ARDJ data section
    SMF_079_SRCSDATA     SRCS data section
    SMF_079_SPAGDATA     SPAG data section
    SMF_079_ASRMDATA     ASRM and ASRMJ data section
    SMF_079_SENQRDATA    SENQR data section
    SMF_079_SENQDATA     SENQ data section
    SMF_079_TRXDATA      TRX data section
    SMF_079_DEVICEDATA   Device data section
    SMF_079_DDMNDATA     DDMN data section
    SMF_079_PGSPDATA     PGSP control section
    SMF_079_PGSP_DATA    PGSP data set section
    SMF_079_CHANNELCTL   Channel path control section
    SMF_079_CHPA_DATA    Channel path data section
    SMF_079_IOCONFIG     I/O Queuing configuration control section for 308x
    SMF_079_IOQ_DATA     I/O Queuing configuration data section for 308x
    SMF_079_IOQUEDTA     I/O Queuing data section for 308x
    SMF_079_IOCONFIGQ    I/O Queuing global section
    SMF_079_IOQ_DATAS    I/O Queuing data section
    SMF_079_LONG_LOCK    IMS long lock data section
Type 80 (for RACF)
    SMF_080              RACF processing
    SMF_080_RELOCATE     RACF relocate section
    SMF_080_XRELOCATE    RACF extended relocate section
Type 80 (for CA Top Secret)
    SMF_080_CA_16        CA Top Secret security-related activity
    SMF_080_CA_REL       CA Top Secret security-related audit and logging information
Type 81
    SMF_081              RACF initialization
    SMF_081_RELOCATE     RACF relocate section
Type 82, subtype 1
    SMF_082_1            PCF (Programmed Cryptographic Facility) record
    SMF_082_1_GENERAL    PCF repeated section
Type 82, subtype 2
    SMF_082_2            CUSP (Cryptographic Unit Support Program) record
    SMF_082_2_GENERAL    CUSP repeated section
Type 83
    SMF_083              RACF audit record for data sets
Type 84, subtype 1
    SMF_084_1            JES3 monitoring facility (JMF) FCT (Function Control Table) analysis
Type 84, subtype 2
    SMF_084_2            JES3 monitoring facility (JMF) FCT summary and highlight
    SMF_084_JES3_WAIT    JES3 wait analysis section
Type 84, subtype 3
    SMF_084_3            JES3 monitoring facility (JMF) Spool data management
Type 84, subtype 4
    SMF_084_4            JES3 monitoring facility (JMF) Resqueue cellpool; JCT and control block utilization
Type 84, subtype 5
    SMF_084_5            JES3 monitoring facility (JMF) Job analysis
Type 84, subtype 6
    SMF_084_6            JES3 monitoring facility (JMF) JES3 hot spot analysis
Type 84, subtype 7
    SMF_084_7            JES3 monitoring facility (JMF) JES3 internal reader DSP analysis
Type 84, subtype 8
    SMF_084_8            JES3 monitoring facility (JMF) JES3 SSI response time analysis
Type 84, subtype 9
    SMF_084_9            JES3 monitoring facility (JMF) JES3 SSI destination queue analysis
Type 84, subtype 10
    SMF_084_10           JES3 monitoring facility (JMF) JES3 Workload Manager Analysis
Type 85
    SMF_085              OAM record
    SMF_085_ARRAY        Volume array section
Type 88
    SMF_088              System logger
Type 89
    SMF_089              Product Usage Data
    SMF_089_USAGE_DATA   Usage data section
    SMF_089_PROD_ISECT   Product intersection data section
    SMF_089_STATE_DATA   State data section
Type 90
    SMF_090              System status
    SMF_090_SMFDATASET   SMF data set section
    SMF_090_SUBSYSTEM    Subsystem record section
    SMF_090_SUBPARM      Subsystem parameter section
Type 92
    SMF_092              OpenMVS File System Activity
Type 94, subtype 1
    SMF_094              34xx tape library data server statistics
Type 94, subtype 2
    SMF_094_2            Volume Pool Statistics
Type 99
    SMF_099              System resource manager decisions
    SMF_099_REASM_INFO   Reassembly area information
    SMF_099_AAT          Trace table entry section
    SMF_099_SS           System state information section
    SMF_099_PP           System paging plot information section
    SMF_099_PT           Priority table entry section
    SMF_099_RG           Resource group entry section
    SMF99_S1_GENRES      Generic resource entry section
    SMF99_S1_SL          Software licensing information
    SMF99_S1_SLT         Software licensing table information
    SMF99_S1_ZE          ZE information section
    SMF99_S1_BP          Buffer pool section
    SMF99_S2_CLS         Class data section
    SMF99_S2_XMEM        Cross memory delay entry section
    SMF99_S2_SERVER      Server data entry section
    SMF99_S2_SDATA       Server sample data entry section
    SMF99_S2_QDATA       Queue server data entry section
    SMF99_S2_ASESP       Address space expanded storage access policy section
    SMF99_S3_CLS         Class data section
    SMF99_S3_PPRP        Period paging rate plot section
    SMF99_S3_RUA         Ready user average plot section
    SMF99_S3_SWP         Swap delay plot section
    SMF99_S3_PAS         Proportional aggregate speed plot section
    SMF99_S3_QMPLP       Queue delay plot section
    SMF99_S3_QRUAP       Queue ready user average plot section
    SMF99_S3_AINS        Active server instances plot section
    SMF99_S3_ASTR        VS plot for active server instances section
    SMF99_S3_TSTR        VS plot for total server instances section
    SMF99_S3_QSTP        Queue service time plot section
    SMF99_S4_IOPT        Device cluster priority table section
    SMF99_S4_IOPLOT      I/O plot information section
    SMF99_S5_MON         Monitored address space information
    SMF99_S6_PDS         Period data section
    SMF99_S7_PAV         PAV device section
    SMF99_S8_LPAR        LPAR data entry section
    SMF99_S8_PT          Priority table entry section
    SMF99_S8_IOSUB       I/O subsystems samples data section
    SMF99_S8_ICPU        LPAR CPU data for a partition in an LPAR cluster section
    SMF99_S8_SYSH        SYSH CPU plot section
    SMF99_S9_SUBS        Channel path data entry section
    SMF99_S9_PLOT        I/O subsystem plot section
    SMF99_S9_CHAN        Channel path data entry section
    SMF99_SA_CPUD        CPU data section
    SMF99_SA_PCHGO       Processor speed change (old)
    SMF99_SA_PCHGN       Processor speed change (new)
    SMF99_SB_DATA        Capacity group data section
    SMF99_SB_CECS        CEC service data section
Type 100, subtype 0
    SMF_100_0            DB2 statistics; system services
    SMF_100_ADDR_SPACE   Address space data section
    SMF_100_DEST         Instrumentation destination data section
    SMF_100_INST         Instrumentation data section
    SMF_100_LATCH_MGR    Latch manager data section
    SMF_100_STRGE_MGR9   Storage manager data section (DB2 V9 and below)
    SMF_100_STRGE_MGR    Storage manager data section (DB2 V10 and above)
    SMF_100_DDF9         Distributed data facility section (DB2 V9 and below)
    SMF_100_DDF          Distributed data facility section (DB2 V10 and above)
Type 100, subtype 1
    SMF_100_1            DB2 statistics - database services
    SMF_100_BIND         Bind data section (DSNDQTST)
    SMF_100_BUFF_MGR9    Buffer manager data section (DSNDQBST - DB2 V9 and below)
    SMF_100_BUFF_MGR     Buffer manager data section (DSNDQBST - DB2 V10 and above)
    SMF_100_DATA_MGR9    Data manager data section (DSNDQIST - DB2 V9 and below)
    SMF_100_DATA_MGR     Data manager data section (DSNDQIST - DB2 V10 and above)
    SMF_100_BUFF_POOL9   Buffer manager group buffer pool (DB2 V9 and below)
    SMF_100_BUFF_POOL    Buffer manager group buffer pool (DB2 V10 and above)
    SMF_100_SERV_CNTL    Service controller locking statistics
    SMF_100_IDAA_DATA    IDAA data section
Type 100, subtype 2
    SMF_100_2            DB2 statistics - Dynamic ZPARMS
Type 100, subtype 3
    SMF_100_3            DB2 statistics - Buffer Manager Group Buffer Pool
    SMF_100_3BUFF_POOL   Buffer manager group buffer pool
Type 100, subtype 4
    SMF_100_4            DB2 System Storage® Usage
    SMF_100_DB2_STRGE9   DB2 system storage usage (DB2 V9 and below)
    SMF_100_DB2_STRGE    DB2 system storage usage (DB2 V10 and above)
    SMF_100_STOR_THRD    Thread information (QW02252)
    SMF_100_STOR_CMN     Shared and common storage summary (QW02253)
    SMF_100_STOR_STMT    Statement cache and shareable statement detail (QW02254)
    SMF_100_STOR_POOL    Pool details (QW02255)
    SMF_100_STOR_IRLM    IRLM storage information (QW02256)
Type 101, subtype 0
    SMF_101              DB2 accounting
    SMF_101_BUFFER_31    Buffer manager accounting block (DSNDQBAC)
    SMF_101_DIST9        Distributed data facility statistics (DSNDQLAC - DB2 V9 and below)
    SMF_101_DIST         Distributed data facility statistics (DSNDQLAC - DB2 V10 and above)
    SMF_101_QMDA         Distributed QMD accounting data (DSNDQMDA)
    SMF_101_IFI          Distributed IFI accounting data (DSNDQIFA)
    SMF_101_PACKAGE      Package accounting data (DSNDQPAC)
    SMF_101_QWAR         Rollup accounting correlation block (DSNDQWAR)
    SMF_101_BUFFER_MGR   Buffer manager group buffer pool accounting information (DSNDQBGA)
    SMF_101_GLBL_LOCK    Service controller global locking accounting block (DSNDQTGA)
    SMF_101_DATA_SHARE   Data sharing accounting data (DSNDQWDA)
    SMF_101_IDAA_ACCT    Accelerator services accounting block (DSNDQ8AC)
Type 101, subtype 1
    SMF_101_1            DB2 accounting IFCID 239
    SMF_101_1_PACKAGE    Package accounting data (DSNDQPAC)
    SMF_101_SQL_ACC      SQL accounting data (DSNDQXPK)
    SMF_101_BUFMGR_ACC   Buffer manager accounting block (DSNDQBAC)
    SMF_101_LOCK_ACC     Lock manager accounting block (DSNDQTXA)
Type 102
    SMF_102              DB2 system initialization parameters
    SMF_102_SYS_PARM     System initialization parameters (DSNDQWPZ)
    SMF_102_INI_PARM     Log initialization parameters (DSNDQWPZ)
    SMF_102_ARCH_PARM    Archive initialization parameters (DSNDQWPZ)
    SMF_102_SYS_PARM8    System parameters (DSNDQWPZ - DB2 V8)
    SMF_102_SYS_PARM9    System parameters (DSNDQWPZ - DB2 V9)
    SMF_102_SYS_PARMA    System parameters (DSNDQWPZ - DB2 V10)
    SMF_102_SYS_PARMB    System parameters (DSNDQWPZ - DB2 V11)
    SMF_102_DDF_START    DDF start control information (DSNDQWPZ)
    SMF_102_DATA_SHARE   Group initialization parameters for data sharing (DSNDQWPZ)
    SMF_102_DSNHDECP     DSNHDECP parameters (DSNDQWPZ)
Type 103, subtype 1
    SMF_103_01           Internet Connection Secure Server configuration record
Type 103, subtype 2
    SMF_103_02           Internet Connection Secure Server performance record
Type 104
    SMF_104              RMF Distributed Platform Performance Data
    SMF_104_METRICS      Metric section
Type 108, subtype 1
    SMF_108_01           Domino® Statistics Server Load
    SMF_108_TRAN         Transaction section
    SMF_108_PORT_ACT     Port activity section
Type 108, subtype 2
    SMF_108_02           Domino Statistics User Activity
    SMF_108_USER_ACT     User activity server load section
Type 108, subtype 3
    SMF_108_03           Domino Statistics Monitoring and Tuning
Type 108, subtype 6
    SMF_108_06           Domino Statistics Data Base Activity
    SMF_108_DB_ACT       Database activity data section
Type 110, subtype 0
    SMF_110_0            CICS®/ESA monitoring record
Type 110, subtype 1
    SMF_110_1            CICS Transaction Server for z/OS monitoring data
    SMF_110_1_FIELD      Field connectors
    SMF_110_1_DICT       Dictionary data
    SMF_110_1_5          CICS/MVS transaction data
    SMF_110_1_6          CTS Monitoring identity data
    SMF_110_E            Monitoring exception data
Type 110, subtype 2
    SMF_110_2            CICS statistics
Type 110, subtype 3
    SMF_110_3            CICS statistics
Type 110, subtype 4
    SMF_110_4            CICS/TS Coupling Facility statistics
Type 110, subtype 5
    SMF_110_5            CICS/TS Named server sequence statistics
Type 111
    SMF_111              CICS TS for z/OS Statistics
Type 112, subtype 203
    SMF_112_203          OMEGAMON® CICS
Type 113, subtype 1
    SMF_113_1            Hardware capacity delta statistics
    SMF_113_1_SCDS       Short counters section
    SMF_113_1_LCDS       Long counters section
Type 113, subtype 2
    SMF_113_2            Hardware capacity reporting and statistics
    SMF_113_2_CSS        Counter set section
    SMF_113_2_CDS        Counter data section
Type 114, subtype 1
    SMF_114_1            System Automation Tracking
Type 115, subtype 1
    MQS_115_1            MQSeries® log manager statistics
    MQS_115_QSST         Storage manager statistics section
    MQS_115_QJST         Log manager statistics section
Type 115, subtype 2
    MQS_115_2            MQSeries information statistics
    MQS_115_QMST         Message manager statistics section
    MQS_115_QPST         Buffer manager statistics section
    MQS_115_QLST         Lock manager statistics section
    MQS_115_Q5ST         DB2 manager statistics section
    MQS_115_QTST         Data manager statistics section
    MQS_115_QESD         Shared message data sets section
Type 116, subtype 0
    SMF_116              MQSeries accounting statistics
    MQS_116_QWHS         Message manager section
    MQS_116_QMAC         Message manager accounting section
Type 116, subtype 1
    SMF_116_1            MQSeries thread and queue level accounting statistics
    MQS_116_WTAS         Task-related statistics section
    MQS_116_WQST1        Queue-level accounting statistics section
Type 116, subtype 2
    SMF_116_2            MQSeries queue level accounting statistics
    MQS_116_QWHS2        Common MQSeries SMF header
    MQS_116_WQST2        Queue-level accounting statistics section
Type 117
    SMF_117              WebSphere Message Broker
    SMF_117_T1_THREAD    Thread data
    SMF_117_T2_NODE      Node data
    SMF_117_T2_TERM      Terminal data
Type 118, subtypes 1-2
    SMF_118_1            TCP/IP API calls
Type 118, subtype 3
    SMF_118_3            TCP/IP FTP Client Calls
Type 118, subtype 4
    SMF_118_4            TCP/IP TELNET Client Calls record
Type 118, subtype 5
    SMF_118_5            TCP/IP General Stats Record
    SMF_118_5_2          TCP/IP General Stats Record
Type 118, subtypes 20-21
    SMF_118_20           TCP/IP TELNET Server Record
Type 118, subtypes 70-75
    SMF_118_70           TCP/IP FTP Server
Type 119, subtype 1
    SMF_119_1            TCP/IP Connection Initiation Record
Type 119, subtype 2
    SMF_119_2            TCP/IP Connection Termination Record
Type 119, subtype 3
    SMF_119_3            TCP/IP Client Transfer Completion
Type 119, subtype 4
    SMF_119_4            TCP/IP Profile Information Record
Type 119, subtype 5
    SMF_119_5            TCP/IP Statistics
Type 119, subtype 6
    SMF_119_6            TCP/IP Interface Statistics
    SMF_119_INTERFACE    Interface statistics
    SMF_119_HOME_IP      Home IP address section
Type 119, subtype 7
    SMF_119_7            TCP/IP Server Port Statistics
    SMF_119_TCP_PORT     TCP server port statistics section
    SMF_119_UDP_PORT     UDP server port statistics section
Type 119, subtype 8
    SMF_119_8            TCP/IP Stack Start/Stop
Type 119, subtype 10
    SMF_119_10           UDP Socket Close Record
Type 119, subtype 20
    SMF_119_20           TN3270 Server SNA Session Initiation
Type 119, subtype 21
    SMF_119_21           TN3270 Server SNA Session Termination
Type 119, subtype 22
    SMF_119_22           TSO Telnet Client Connection Initiation
Type 119, subtype 23
    SMF_119_23           TSO Telnet Client Connection Termination
Type 119, subtype 32
    SMF_119_32           DVIPA Status Change Record
Type 119, subtype 33
    SMF_119_33           DVIPA Removed Record
Type 119, subtype 34
    SMF_119_34           DVIPA Target Added Record
Type 119, subtype 35
    SMF_119_35           DVIPA Target Removed Record
Type 119, subtype 36
    SMF_119_36           DVIPA Target Server Started Record
Type 119, subtype 37
    SMF_119_37           DVIPA Target Server Ended Record
Type 119, subtype 48
    SMF_119_48           CSSMTP Configuration record
Type 119, subtype 49
    SMF_119_49           CSSMTP Connection Record
Type 119, subtype 50
    SMF_119_50           CSSMTP Mail Record
Type 119, subtype 51
    SMF_119_51           CSSMTP Spool File Record
Type 119, subtype 52
    SMF_119_52           CSSMTP Statistical Record
Type 119, subtype 70
    SMF_119_70           FTP Server Transfer Completion
Type 119, subtype 72
    SMF_119_72           FTP Server Logon Failure
Type 119, subtype 73
    SMF_119_73           IPSec IKE Tunnel Activation/Refresh
Type 119, subtype 74
    SMF_119_74           IPSec IKE Tunnel Deactivation/Expire
Type 119, subtypes 75-80
    SMF_119_75_80        IPSec Dynamic/Manual Tunnel Activation/Refresh/Deactivate Add/Remove
Type 120, subtype 9
    SMF_120_9            WAS Request Activity Record
    SMF_120_9_CLA        Classification data section
    SMF_120_9_SEC        Security data section
    SMF_120_9_CPU        CPU usage breakdown section
    SMF_120_9_USR        User data section
Type 127
    SMF_IMS_07           IMS program termination
    SMF_IMS_08           IMS program start
    SMF_IMS_0A07         IMS CPI-CI program termination
    SMF_IMS_0A08         IMS CPI-CI program start
    SMF_IMS_10           IMS security violation
    SMF_IMS_56FA         IMS transaction level statistics
    SMF_IMS_CA01         IMS Transaction Index
    SMF_IMS_CA20         IMS Connect Transaction Index
Type 194
    SMF_194              Definition of TS7700 Virtualization Engine Statistics Record
Type 230
    SMF_230_CA_16        CA ACF2 security-related activity
    SMF_230_CA_16_T1     For CA ACF2 record version 0 or 1, part of command trace information
    SMF_230_CA_16_T2     For CA ACF2 record version 2, part of command trace information
    SMF_230_CA_16_SNEN   Part of CA ACF2 distributed database sense information
    Restriction: The following four fields, which have a length that is
    greater than 4096 bytes, are not included in the streams: ACEMFREC,
    ACLMFARE, ACRMFRUL, and ACWMFDATA.
Type 231
    SMF_231_CA_16        CA Top Secret security events for UNIX System Services
    SMF_231_CA_EXT       CA Top Secret audit records for UNIX System Services
    Restriction: The only supported event code type is SMF80EVT = 70, which
    is OMVS TRACE (70).
Type 245
    SMF_245_3_CACHE      Cache RMF Reporter (3990 model 03)
    SMF_245_3_DEV        Caching subsystem device status entries
    SMF_245_6_CACHE      Cache RMF Reporter (3990 model 06)
    SMF_245_6_DEV        Caching subsystem device status entries
    SMF_245_13_CACHE     Cache RMF Reporter (3880 model 13)
    SMF_245_23_CACHE     Cache RMF Reporter (3880 model 23)
SMF_110_1_KPI data stream content
SMF_110_1_KPI records in the SMF_110_1_KPI data stream contain information
about key performance indicators (KPIs) for CICS Transaction Server for z/OS
monitoring.
Data stream definition in Configuration Tool
To select the SMF_110_1_KPI data stream in the IBM Common Data Provider for z
Systems Configuration Tool, complete the following steps:
1. In the “Select data stream” window, expand KPI Data.
2. Expand Base Streams.
3. Expand CICS.
4. Select the SMF_110_1_KPI check box.
Fields in the SMF_110_1_KPI data stream
In the following list, the name of the SMF field that corresponds to each
field name in the data stream is shown in parentheses after the field name.

Table 10. Fields in the SMF_110_1_KPI data stream

Time (SMF field SMFMNTME)
    The time that the record was written to SMF.
Date (SMF field SMFMNDTE)
    The date that the record was written to SMF.
MVS_SYSTEM_ID (SMF field SMFMNSID)
    The system ID, which is also known as the SMF ID.
START_TIMESTAMP (SMF field START)
    The start time of the transaction. The field name corresponds to the
    CICS Transaction Server for z/OS Dictionary nickname.
STOP_TIMESTAMP (SMF field STOP)
    The stop time of the transaction. The field name corresponds to the CICS
    Transaction Server for z/OS Dictionary nickname.
ELAPSED_TIME (corresponding SMF field: not applicable)
    The elapsed time of the transaction, which is derived by the System Data
    Engine (the stop time minus the start time).
CICS_SPEC_APPLID (SMF field SMFMNSPN)
    CICS Transaction Server for z/OS specific application ID.
CICS_GEN_APPLID (SMF field SMFMNPRN)
    CICS Transaction Server for z/OS generic application ID.
JOB_NAME (SMF field SMFMNJBN)
    CICS Transaction Server for z/OS job name.
PGM_NAME (SMF field PGMNAME)
    CICS Transaction Server for z/OS program name. The field name
    corresponds to the CICS Transaction Server for z/OS Dictionary nickname.
TRANSACTION_ID (SMF field TRAN)
    CICS Transaction Server for z/OS transaction ID. The field name
    corresponds to the CICS Transaction Server for z/OS Dictionary nickname.
TRANSACTION_NUM (SMF field TRANNUM)
    CICS Transaction Server for z/OS transaction number. The field name
    corresponds to the CICS Transaction Server for z/OS Dictionary nickname.
ORIG_ABEND_CODR (SMF field ABCODEO)
    Original CICS Transaction Server for z/OS abend code. The field name
    corresponds to the CICS Transaction Server for z/OS Dictionary nickname.
CURR_ABEND_CODE (SMF field ABCODEC)
    Current CICS Transaction Server for z/OS abend code. The field name
    corresponds to the CICS Transaction Server for z/OS Dictionary nickname.
CICS_USER (SMF field USERID)
    Current CICS Transaction Server for z/OS user ID. The field name
    corresponds to the CICS Transaction Server for z/OS Dictionary nickname.
SYNCPOINTS (SMF field SPSYNCCT)
    The total number of syncpoint requests that are issued by the user task.
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
TERM_WAIT (SMF field TCIOWTT)
    After the user task issued a RECEIVE request, the elapsed time during
    which the user task waited for input from the terminal operator. The
    field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
DISPATCH_TIME (SMF field USRDISPT)
    The total elapsed time during which the user task was dispatched on each
    CICS task control block (TCB) under which the task ran. The TCB modes
    that are managed by the CICS dispatcher are: QR, RO, CO, FO, SZ, RP, SL,
    SP, SO, EP, J8, J9, L8, L9, S8, TP, T8, X8, X9, JM, and D2. The field
    name corresponds to the CICS Transaction Server for z/OS Dictionary
    nickname.
CPU_TIME (SMF field USRCPUT)
    The total processor time during which the user task was dispatched by
    the CICS dispatcher domain on each CICS TCB under which the task ran.
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
RLS_CPU_TIME (SMF field RLSCPUT)
    The amount of CPU time in which the transaction was processing
    record-level sharing (RLS) file requests.
    Tip: For a measurement of the total CPU time that is used by a
    transaction, add this RLS_CPU_TIME value to the CPU_TIME value (SMF
    field USRCPUT).
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
SUSP_TIME (SMF field SUSPTIME)
    The time in which the user task was suspended by the CICS dispatcher.
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
SYNCTIME (SMF field SYNCTIME)
    The time in which the user task was dispatched and was processing
    syncpoint requests. The field name corresponds to the CICS Transaction
    Server for z/OS Dictionary nickname.
DISP_CICS_USER (SMF field KY8DISPT)
    The dispatch time that CICS Transaction Server for z/OS gives to a user
    task, which is the total elapsed time during which the user task is
    dispatched by the CICS dispatcher domain on a CICS Key 8 mode TCB. The
    field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
JAVA_CPU_TIME (SMF field J8CPUT)
    This field is a composite field that indicates one of the following
    elements:
    v The amount of CPU time that this task used when it was dispatched on
      the J8 TCB Mode
    v The number of times that this task was dispatched on the J8 TCB Mode
    v An indication that the mode is used by Java applications
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
L8_TCB_DISP_TIME (SMF field L8CPUT)
    This field is a composite field that indicates one of the following
    elements:
    v The amount of CPU time that this task used when it was dispatched on
      the L8 TCB Mode
    v The number of times that this task was dispatched on the L8 TCB Mode
    v An indication that the mode is used by programs that are defined with
      CONCURRENCY=THREADSAFE when they issue DB2 requests
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
S8_TCB_DISP_TIME (SMF field S8CPUT)
    This field is a composite field that indicates one of the following
    elements:
    v The amount of CPU time that this task used when it was dispatched on
      the S8 TCB Mode
    v The number of times that this task was dispatched on the S8 TCB Mode
    v An indication that the mode is used for making secure socket calls
    The field name corresponds to the CICS Transaction Server for z/OS
    Dictionary nickname.
RMI_TIME (SMF field RMITIME)
    The time in which the task was external to CICS Transaction Server for
    z/OS (for example, in DB2 or MQ). The field name corresponds to the CICS
    Transaction Server for z/OS Dictionary nickname.
RMI_SUSP_TIME (SMF field RMISUSP)
    The time in which the user task was suspended by the CICS dispatcher
    while it was in the CICS Resource Manager Interface (RMI). The field
    name corresponds to the CICS Transaction Server for z/OS Dictionary
    nickname.
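As the RLS_CPU_TIME tip notes, the total CPU time for a transaction is the
sum of CPU_TIME (SMF field USRCPUT) and RLS_CPU_TIME (SMF field RLSCPUT). A
minimal sketch of that arithmetic (illustrative Python, not product code; the
values are hypothetical):

    record = {"CPU_TIME": 0.042, "RLS_CPU_TIME": 0.007}  # seconds (hypothetical)
    total_cpu = record["CPU_TIME"] + record["RLS_CPU_TIME"]
    print(f"Total transaction CPU time: {total_cpu:.3f} seconds")  # 0.049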
Data stream configuration for data gathered by Log Forwarder
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window. The fields that are shown in this
window are based on the source from which the Log Forwarder collects data for
the data stream.
The Log Forwarder gathers z/OS log data from the following sources:
v Job log, which is output that is written to a data definition (DD) by a running
job
v z/OS UNIX log file, including the UNIX System Services system log (syslogd)
v Entry-sequenced Virtual Storage Access Method (VSAM) cluster
v z/OS system log (SYSLOG)
v IBM Tivoli NetView for z/OS messages
v IBM WebSphere Application Server for z/OS High Performance Extensible
Logging (HPEL) log
Table 11 summarizes which data streams come from which sources.
Table 11. Correlation between the sources from which the Log Forwarder
gathers data and the data streams that can be defined for those sources

Job log
    v “Generic z/OS Job Output data stream” on page 52
    v “CICS EYULOG data stream” on page 54
    v “CICS EYULOG DMY data stream” on page 57
    v “CICS EYULOG YMD data stream” on page 59
    v “CICS User Messages data stream” on page 61
    v “CICS User Messages DMY data stream” on page 63
    v “CICS User Messages YMD data stream” on page 65
    v “WebSphere SYSOUT data stream” on page 70
    v “WebSphere SYSPRINT data stream” on page 72

z/OS UNIX log file
    v “Generic ZFS File data stream” on page 51
    v “USS Syslogd data stream” on page 68
    v “WebSphere USS Sysprint data stream” on page 74

Entry-sequenced VSAM cluster
    v “Generic VSAM Cluster data stream” on page 50

z/OS SYSLOG
    v “z/OS SYSLOG data stream” on page 75

IBM Tivoli NetView for z/OS messages
    v “NetView Netlog data stream” on page 67

IBM WebSphere Application Server for z/OS HPEL log
    v “WebSphere HPEL data stream” on page 69
Generic VSAM Cluster data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the Generic VSAM Cluster data
stream. It also describes why you might want to define paired data sets for this
data stream.
Data collection from paired data sets
For the Generic VSAM Cluster data stream, the Log Forwarder can gather log
data from a logical pair of data sets, called paired data sets. The use of paired data
sets prevents an individual data set from getting too large and makes the process
of pruning old log data from the system much easier.
With paired data sets, data is logged to only one data set in the pair at a time.
When that data set exceeds some threshold (for example, the data set surpasses a
specified size, or a specified time interval passes), the data in the other data set is
deleted, and logging switches to that other data set. This switching between each
data set in the pair is repeated continuously as each threshold is exceeded.
When you define a Generic VSAM Cluster data stream, you can specify either a
single data set (in the Data Set Name field) or two data sets (one in the Data Set
Name field, and the other in the Paired Data Set Name field) that are logically
paired. If you specify two data sets, the contents of both data sets are associated
with the same data stream. Both data sets must be entry-sequenced Virtual Storage
Access Method (VSAM) clusters. At least one of the data sets must be allocated
before the Log Forwarder is started.
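The switching behavior can be summarized with the following sketch
(illustrative Python pseudologic, not the Log Forwarder's implementation; the
data set names and the threshold test are hypothetical):

    def erase(data_set_name):
        """Placeholder for pruning all log records from a data set."""
        print(f"deleting old log data in {data_set_name}")

    active, inactive = "LOG.DATA.SET1", "LOG.DATA.SET2"  # hypothetical pair

    def switch_when_threshold_exceeded():
        """When the active data set exceeds its threshold (for example, a
        size limit or a time interval), delete the contents of the inactive
        data set and switch logging to it."""
        global active, inactive
        erase(inactive)
        active, inactive = inactive, active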
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Data Set Name
The name of the entry-sequenced VSAM cluster that contains the data to
be gathered. This name must be in the format x.y.z.
Paired Data Set Name
The name of the entry-sequenced VSAM cluster that, together with the
cluster that is specified in the Data Set Name field, contains the data to be
gathered. This name must be in the format x.y.z.
Tip: For more information about the use of paired data sets, see “Data
collection from paired data sets.”
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
Data Source Type
A value that the subscriber can use to uniquely identify the type and
format of the streamed data.
File Path
A unique identifier that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:

    Coordinated Universal Time (UTC)    +0000
    5 hours west of UTC                 -0500
    8 hours east of UTC                 +0800
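As a quick check of the plus_or_minusHHMM format, the following sketch
(illustrative Python, not part of the product) converts such a value to an
offset in minutes:

    import re

    def offset_minutes(tz):
        """Parse a +HHMM or -HHMM time zone value, e.g. '-0500' -> -300."""
        match = re.fullmatch(r"([+-])(\d{2})(\d{2})", tz)
        if not match:
            raise ValueError("expected format +HHMM or -HHMM")
        sign = 1 if match.group(1) == "+" else -1
        return sign * (int(match.group(2)) * 60 + int(match.group(3)))

    print(offset_minutes("+0800"))  # 480  (8 hours east of UTC)
    print(offset_minutes("-0500"))  # -300 (5 hours west of UTC)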
Generic ZFS File data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the Generic ZFS File data stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
File Path
A unique identifier that represents the data origin. The identifier must be
the absolute path, including the file name, of a log file that contains the
relevant data.
Tip: If you are gathering log data from a rolling z/OS UNIX log, see “Data
collection from a rolling z/OS UNIX log” on page 76 for more information,
including how to specify this file path value for a rolling log.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
Data Source Type
A value that the subscriber can use to uniquely identify the type and
format of the streamed data.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:

    Coordinated Universal Time (UTC)    +0000
    5 hours west of UTC                 -0500
    8 hours east of UTC                 +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Generic z/OS Job Output data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the Generic z/OS Job Output data
stream. It also describes how to use wildcard characters in the Job Name field for
this data stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 53.
DD Name
The data definition (DD) name for the job log.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
Data Source Type
A value that the subscriber can use to uniquely identify the type and
format of the streamed data.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is ABCD????, and the JES spool contains the
following jobs, two data streams are created, one for job name ABCD1234 and one
for job name ABCDE567:
JOBNAME       JobID
ABCD1234      STC00735
DEFG1234      STC00746
ABCDE567      STC00798
DEFG5678      STC00775
ABCD123       STC00772
DEFG456       STC00794
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_ddName appended to that value. The jobName
                      is the discovered job name, and the ddName is the DD name
                      for the job log.
File Path             The value of the File Path field in the template, with
                      /jobName/ddName appended to that value. The jobName is
                      the discovered job name, and the ddName is the DD name
                      for the job log.
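The ? and * semantics that are described above match ordinary glob semantics,
so the discovery step can be illustrated with a standard library. The
following minimal sketch, in Python, is not product code; it assumes that
Python fnmatch semantics (? matches exactly one character, * matches any
sequence, including an empty one) mirror the Log Forwarder's matching, and it
reuses the sample spool from the earlier example:

from fnmatch import fnmatchcase

# Job names from the sample JES spool listing above.
spool = ["ABCD1234", "DEFG1234", "ABCDE567", "DEFG5678", "ABCD123",
         "DEFG456", "HBODSPRO", "GLAPROC", "SYSLOG"]

# Each unique matching job name would get its own data stream.
pattern = "ABCD????"
matches = [name for name in spool if fnmatchcase(name, pattern)]
print(matches)   # ['ABCD1234', 'ABCDE567'] -- ABCD123 is one character short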
CICS EYULOG data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the CICS EYULOG data stream. It
also describes how to use wildcard characters in the Job Name field for this data
stream. The source for the CICS EYULOG data stream uses the date format
“month day year” (MDY) in the time stamp.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field.”
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is CMAS5*, and the JES spool contains the
following jobs, two data streams are created, one for job name CMAS53 and one for
job name CMAS5862:
JOBNAME       JobID
CMAS43        STC00586
CMAS482       STC00588
CMAS53        STC00587
CMAS5862      STC00589
CMAS61        STC00590
CMAS62        STC00600
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_EYULOG appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/EYULOG appended to that value. The jobName is
                      the discovered job name.
CICS EYULOG DMY data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the CICS EYULOG DMY data
stream. It also describes how to use wildcard characters in the Job Name field for
this data stream. The source for the CICS EYULOG DMY data stream uses the
date format “day month year” (DMY) in the time stamp.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 58.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is CMAS5*, and the JES spool contains the
following jobs, two data streams are created, one for job name CMAS53 and one for
job name CMAS5862:
JOBNAME       JobID
CMAS43        STC00586
CMAS482       STC00588
CMAS53        STC00587
CMAS5862      STC00589
CMAS61        STC00590
CMAS62        STC00600
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_EYULOG appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/EYULOG appended to that value. The jobName is
                      the discovered job name.
CICS EYULOG YMD data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the CICS EYULOG YMD data
stream. It also describes how to use wildcard characters in the Job Name field for
this data stream. The source for the CICS EYULOG YMD data stream uses the
date format “year month day” (YMD) in the time stamp.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 60.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is CMAS5*, and the JES spool contains the
following jobs, two data streams are created, one for job name CMAS53 and one for
job name CMAS5862:
JOBNAME       JobID
CMAS43        STC00586
CMAS482       STC00588
CMAS53        STC00587
CMAS5862      STC00589
CMAS61        STC00590
CMAS62        STC00600
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_EYULOG appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/EYULOG appended to that value. The jobName is
                      the discovered job name.
CICS User Messages data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the CICS User Messages data
stream. It also describes how to use wildcard characters in the Job Name field for
this data stream. The source for the CICS User Messages data stream uses the date
format “month day year” (MDY) in the time stamp.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 62.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is CMAS5*, and the JES spool contains the
following jobs, two data streams are created, one for job name CMAS53 and one for
job name CMAS5862:
JOBNAME       JobID
CMAS43        STC00586
CMAS482       STC00588
CMAS53        STC00587
CMAS5862      STC00589
CMAS61        STC00590
CMAS62        STC00600
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_MSGUSR appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/MSGUSR appended to that value. The jobName is
                      the discovered job name.
CICS User Messages DMY data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the CICS User Messages DMY
data stream. It also describes how to use wildcard characters in the Job Name field
for this data stream. The source for the CICS User Messages DMY data stream
uses the date format “day month year” (DMY) in the time stamp.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 64.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is CMAS5*, and the JES spool contains the
following jobs, two data streams are created, one for job name CMAS53 and one for
job name CMAS5862:
JOBNAME       JobID
CMAS43        STC00586
CMAS482       STC00588
CMAS53        STC00587
CMAS5862      STC00589
CMAS61        STC00590
CMAS62        STC00600
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_MSGUSR appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/MSGUSR appended to that value. The jobName is
                      the discovered job name.
CICS User Messages YMD data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the CICS User Messages YMD
data stream. It also describes how to use wildcard characters in the Job Name field
for this data stream. The source for the CICS User Messages YMD data stream
uses the date format “year month day” (YMD) in the time stamp.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field.”
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is CMAS5*, and the JES spool contains the
following jobs, two data streams are created, one for job name CMAS53 and one for
job name CMAS5862:
JOBNAME       JobID
CMAS43        STC00586
CMAS482       STC00588
CMAS53        STC00587
CMAS5862      STC00589
CMAS61        STC00590
CMAS62        STC00600
HBODSPRO      STC00623
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_MSGUSR appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/MSGUSR appended to that value. The jobName is
                      the discovered job name.
NetView Netlog data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the NetView Netlog data stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Domain Name
The name of the NetView domain from which to gather data.
Important: If you define multiple NetView Netlog data streams, do not
define the same NetView domain name for multiple streams. Each stream
must reference a unique domain name.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier that represents the data origin.
USS Syslogd data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the USS Syslogd Admin, USS
Syslogd Debug, and USS Syslogd Error data streams.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier that represents the data origin. The identifier must be
the absolute path, including the file name, of a log file that contains the
relevant data.
Tip: If you are gathering log data from a rolling z/OS UNIX log, see “Data
collection from a rolling z/OS UNIX log” on page 76 for more information,
including how to specify this file path value for a rolling log.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
WebSphere HPEL data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the WebSphere HPEL data stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Log Directory
The HPEL log directory for an application server that you are collecting
data from. This HPEL log directory must have a logdata subdirectory, and
the HPEL log files must be present in the logdata subdirectory.
If you are collecting only trace data, do not specify a value in the Log
Directory field. A Log Directory value is required only when no value is
specified in the Trace Directory field.
Trace Directory
The WebSphere Application Server for z/OS HPEL trace directory for an
application server that you are collecting data from. This HPEL trace
directory must have a tracedata subdirectory, and the HPEL trace files
must be present in the tracedata subdirectory.
If you are collecting only log data, do not specify a value in the Trace
Directory field. A Trace Directory value is required only when no value is
specified in the Log Directory field. In other words, at least one of the
Log Directory and Trace Directory fields must contain a value. (A validation
sketch follows this field list.)
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier that represents the data origin. The identifier must be a
virtual or physical path that represents the HPEL log data.
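The directory rule above reduces to the following check: specify at least one
of the two directories, and each specified directory must contain its
expected subdirectory. The following minimal sketch, in Python, is
hypothetical (the function name and messages are not from the Configuration
Tool); it only restates the documented constraints:

import os
from typing import Optional

def check_hpel_dirs(log_dir: Optional[str], trace_dir: Optional[str]) -> None:
    # At least one of Log Directory and Trace Directory must be specified.
    if not log_dir and not trace_dir:
        raise ValueError("Specify a Log Directory, a Trace Directory, or both")
    # A log directory must contain a logdata subdirectory with the HPEL log files.
    if log_dir and not os.path.isdir(os.path.join(log_dir, "logdata")):
        raise ValueError(log_dir + " has no logdata subdirectory")
    # A trace directory must contain a tracedata subdirectory with the HPEL trace files.
    if trace_dir and not os.path.isdir(os.path.join(trace_dir, "tracedata")):
        raise ValueError(trace_dir + " has no tracedata subdirectory")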
WebSphere SYSOUT data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the WebSphere SYSOUT data
stream. It also describes how to use wildcard characters in the Job Name field for
this data stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 71.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is BBOS???S, and the JES spool contains the
following jobs, two data streams are created, one for job name BBOSABCS and one
for job name BBOSDEFS:
JOBNAME       JobID
BBODMGR       STC00586
BBODMGRS      STC00588
BBODMNC       STC00587
BBON001       STC00589
BBOSABC       STC00590
BBOSABC       STC00600
BBOSABCS      STC00592
BBOSABCS      STC00602
BBOSDEF       STC00594
BBOSDEFS      STC00596
BBOSDEFS      STC00598
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_SYSOUT appended to that value. The jobName
                      is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/SYSOUT appended to that value. The jobName is
                      the discovered job name.
WebSphere SYSPRINT data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the WebSphere SYSPRINT data
stream. It also describes how to use wildcard characters in the Job Name field for
this data stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Job Name
The name of the server job from which to gather data. This value can
contain wildcard characters.
For information about the use of wildcard characters, see “Use of wildcard
characters in the Job Name field” on page 73.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier, such as jobName/ddName, that represents the data origin.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
Discovery Interval
In the process of streaming data, the number of minutes that the Log
Forwarder waits before it checks for a new log file in the data stream. This
value applies to all data streams from the Log Forwarder, although it can
be overridden on some individual streams, such as this one.
The value must be an integer in the range 1 - 5. The default value is the
value that is defined in the Log Forwarder properties, as described in “Log
Forwarder properties configuration” on page 28.
Use of wildcard characters in the Job Name field
In the Job Name field for this data stream, you can use the following wildcard
characters:
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters, including an empty sequence
If you use wildcard characters in the job name, the job name value becomes a
pattern, and the data stream definition becomes a template. When the Log
Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that
match the pattern, and it creates a separate data stream for each unique job name
that it discovers. After the Log Forwarder initialization is complete, the Log
Forwarder continues to monitor the job names on the JES spool. As it discovers
new job names that match the pattern, it uses the same template to create more
data streams.
For example, if the job name value is BBOS???S, and the JES spool contains the
following jobs, two data streams are created, one for job name BBOSABCS and one
for job name BBOSDEFS:
JOBNAME       JobID
BBODMGR       STC00586
BBODMGRS      STC00588
BBODMNC       STC00587
BBON001       STC00589
BBOSABC       STC00590
BBOSABC       STC00600
BBOSABCS      STC00592
BBOSABCS      STC00602
BBOSDEF       STC00594
BBOSDEFS      STC00596
BBOSDEFS      STC00598
GLAPROC       STC00661
SYSLOG        STC00552
Tips:
v To avoid gathering data from job logs that you do not intend to gather from, use
a job name pattern that is not too broad.
v The Log Forwarder might discover jobs from other systems if spool is shared
between systems or if JES multi-access spool is enabled. Although the data
stream does not include data for the jobs that run on other systems, the Log
Forwarder creates a data stream for that data. Therefore, ensure that the
wildcard pattern does not match jobs that run on other systems.
Each resulting data stream is based on the template and has the same
configuration values as the template, with the exception of the following values:
Template field        Value
Job Name              The discovered job name
Data Source Name      The value of the Data Source Name field in the template,
                      with _jobName_SYSPRINT appended to that value. The
                      jobName is the discovered job name.
File Path             The value of the File Path field in the template, with
                      /jobName/SYSPRINT appended to that value. The jobName is
                      the discovered job name.
WebSphere USS Sysprint data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the WebSphere USS Sysprint data
stream.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier that represents the data origin. The identifier must be
the absolute path, including the file name, of a log file that contains the
relevant data.
Tip: If you are gathering log data from a rolling z/OS UNIX log, see “Data
collection from a rolling z/OS UNIX log” on page 76 for more information,
including how to specify this file path value for a rolling log.
Time Zone
If the time stamps in the collected data do not include a time zone, this
value specifies a time zone to the target destination. Specify this value if
the time zone is different from the system time zone, which is defined in
the Log Forwarder properties, as described in “Log Forwarder properties
configuration” on page 28.
The value must be in the format plus_or_minusHHMM, where plus_or_minus
represents the + or - sign, HH represents two digits for the hour, and MM
represents two digits for the minute.
Examples:
If you want this time zone            Specify this value
Coordinated Universal Time (UTC)      +0000
5 hours west of UTC                   -0500
8 hours east of UTC                   +0800
z/OS SYSLOG data stream:
This reference lists the configuration values that you can update in the “Configure
z/OS Log Forwarder data stream” window for the z/OS SYSLOG (from user exit)
and z/OS SYSLOG from OPERLOG data streams.
Configuration values that you can update
Name The name that uniquely identifies the data stream to the Configuration
Tool. If you want to add more data streams of the same type, you must
first rename the last stream that you added.
Data Source Name
The name that uniquely identifies the data source to subscribers.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
File Path
A unique identifier that represents the data origin.
Data collection from a rolling z/OS UNIX log:
For data streams that come from z/OS UNIX log file sources, IBM Common Data
Provider for z Systems can gather log data from rolling z/OS UNIX logs. The use
of a rolling log prevents any one log file from getting too large and simplifies the
process of pruning older log data from the system.
Tip: The following data streams come from z/OS UNIX log file sources:
v “Generic ZFS File data stream” on page 51
v “USS Syslogd data stream” on page 68
v “WebSphere USS Sysprint data stream” on page 74
Use of a rolling log
A rolling log is a dynamic, sequential set of files that contains a continuous stream
of log data. A new file is added whenever a previous file exceeds some threshold
(for example, the file surpasses a specified size, or a specified time interval passes).
Sometimes, older files are pruned (automatically or manually) so that only a
defined number of files is retained.
For example, with a rolling log, a new file might be created once a day, or at
specified times. The log is a set of logically grouped log files, rather than only one
log file. Individual files are differentiated by an index or a time stamp in the file
name.
Important: IBM Common Data Provider for z Systems does not gather log data
from a rolling log if the following events occurred when the log was rolled:
v A log file was renamed.
v The contents of a log file were removed.
File path pattern for a rolling log
IBM Common Data Provider for z Systems uses a file path pattern with one or
more wildcard characters to identify the log files that must be logically grouped
into one logical log (a rolling log) and mapped to the same data source name.
You must determine the appropriate file path pattern for each set of log files that
are gathered, and specify this pattern in the File Path field when you configure a
data stream that comes from a z/OS UNIX log file source. The file path pattern
must be as specific as possible so that only the appropriate log files are included.
The following wildcard characters are valid in a file path pattern (in the File Path
field for a data stream that comes from a z/OS UNIX log file source):
Wildcard character    What the character represents
?                     Any single character
*                     Any sequence of characters
Example of how to specify the file path pattern: Assume that a rolling log uses
the following file naming scheme, where the integer n is incremented for each new
log file:
v /u/myLogDir/myLogFile.n.log
For example, n is 1 for the first file, 2 for the second file, and 3 for the third file.
In this example, the following file path pattern matches all of the file path names:
v /u/myLogDir/myLogFile.*.log
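Standard glob resolution gives a feel for how such a pattern groups files
into one logical log. The following minimal sketch, in Python, is
illustrative only; the directory and file names are the invented ones from
this example, and Python glob is used as an analogy for the Log Forwarder's
own matching:

import glob

# Resolve the pattern to the files that make up one rolling (logical) log.
pattern = "/u/myLogDir/myLogFile.*.log"
for path in sorted(glob.glob(pattern)):
    print(path)

# With three rolled files present, the output would be:
#   /u/myLogDir/myLogFile.1.log
#   /u/myLogDir/myLogFile.2.log
#   /u/myLogDir/myLogFile.3.log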
The following scenarios provide more examples:
v “Sample scenario that uses date and time substitution in the JCL cataloged
procedure” on page 78
v “Sample scenario that uses the redirect_server_output_dir environment variable”
on page 78
File path pattern utility for verifying file path values for rolling logs
IBM Common Data Provider for z Systems includes a file path pattern utility to
help you verify the file path values for any rolling logs. The utility determines
which files on the current system are included by each file path pattern.
To run the utility, issue the following command in the logical partition (LPAR)
where IBM Common Data Provider for z Systems runs:
checkFilePattern.sh configuration_directory
The variable configuration_directory represents the directory that contains both the
data configuration file and the environment configuration file.
The following example further illustrates how to issue the command and includes
sample values:
/usr/lpp/IBM/zscala/V2R2/samples/checkFilePattern.sh /usr/lpp/IBM/zscala/V2R2
Optionally, a data stream identifier can be specified so that the file path for only
the specified data stream is checked. The following example shows that the data
stream identifier 9 is specified:
/usr/lpp/IBM/zscala/V2R2/samples/checkFilePattern.sh /usr/lpp/IBM/zscala/V2R2 9
The command response is written to standard output (STDOUT). As shown in the
following example, it contains a list of all files that match each file path value:
INFO: GLAB021I The file path pattern
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.????????.SR.??????.??????.SYSPRINT.txt
for data gatherer identifier 5 resolves to the following files:
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.STC00036.SR.140929.170703.SYSPRINT.txt
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.STC00158.SR.140929.193451.SYSPRINT.txt
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.STC00252.SR.141006.134949.SYSPRINT.txt
INFO: GLAB021I The file path pattern
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.????????.SR.??????.??????.SYSOUT.txt
for data gatherer identifier 7 resolves to the following files:
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.STC00036.SR.140929.170703.SYSOUT.txt
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.STC00158.SR.140929.193451.SYSOUT.txt
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.STC00252.SR.141006.134949.SYSOUT.txt
The following example shows the command response that is written for the data
stream if no files match a pattern:
WARNING: GLAB022W The file path pattern
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.????????.SR.??????.??????.SYSPRINT.txt
for data gatherer identifier 6 resolves to no files.
Sample scenario that uses date and time substitution in the JCL cataloged procedure:
Job logs can be redirected to z/OS UNIX files. They can then be rolled by using
date and time substitution in the JCL cataloged procedure that is used to start the
job. Each time that the job is restarted, a new file is created.
In this scenario, the following SYSOUT DD statement is from a JCL cataloged
procedure that is used to start a job:
//SYSOUT DD PATH='/u/myLogDir/myLog.&LYYMMDD..&LHHMMSS..log',
//         PATHOPTS=(OWRONLY,OCREAT),PATHMODE=SIRWXU
The variable &LYYMMDD. is replaced by the local date on which the job was
started, and the date is in YYMMDD format. Similarly, the variable &LHHMMSS.
is replaced by the local time at which the job was started, and the time is
in HHMMSS format.
To convert a path with date and time variables into a file path pattern for IBM
Common Data Provider for z Systems configuration, replace the date and time
variables with one or more wildcard characters.
For example, in this scenario, replace &LYYMMDD. with ?????? because the date
format YYMMDD is always six characters. Similarly, replace &LHHMMSS. with
?????? because the time format HHMMSS is always six characters.
File path pattern for this scenario
Use the following file path pattern for this scenario:
/u/myLogDir/myLog.??????.??????.log
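Because the date and time variables have fixed widths, the substitution can
be expressed mechanically. The following Python sketch is illustrative only
and is not part of the product:

# Illustrative sketch only: replace the JCL local date and time variables
# with '?' wildcards of the same fixed width (YYMMDD and HHMMSS are both
# six characters).
def to_file_path_pattern(jcl_path):
    return (jcl_path
            .replace("&LYYMMDD.", "??????")
            .replace("&LHHMMSS.", "??????"))

# Produces /u/myLogDir/myLog.??????.??????.log
print(to_file_path_pattern("/u/myLogDir/myLog.&LYYMMDD..&LHHMMSS..log"))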
Sample scenario that uses the redirect_server_output_dir environment variable:
WebSphere Application Server for z/OS SYSOUT and SYSPRINT logs can also be
redirected to z/OS UNIX files and rolled by using the WebSphere environment
variable redirect_server_output_dir.
A new set of files for SYSOUT and SYSPRINT is created for each server region at
the following times:
v Each time that the server job is restarted.
v Each time that the modify command is issued with the ROLL_LOGS parameter.
The new files are created in the directory that is specified by the
redirect_server_output_dir environment variable.
The following file naming conventions are used for the redirected files:
cellName.nodeName.serverName.jobName.jobId.asType.date.time.SYSOUT.txt
cellName.nodeName.serverName.jobName.jobId.asType.date.time.SYSPRINT.txt
For each server region, the cell name, node name, server name, job name, and
address space type are constant. Only the job ID, date, and time are variable.
To convert one of these file naming conventions into a file path pattern for IBM
Common Data Provider for z Systems configuration, complete the following steps:
1. Add the absolute path, which is specified in the WebSphere environment
variable redirect_server_output_dir, to the beginning of the file path pattern.
2. Replace cellName, nodeName, serverName, and jobName with the appropriate
values.
3. Replace asType with CTL (for controller), SR (for servant), or CRA (for adjunct).
4. If you are using JES2, replace jobId with ????????, which matches any eight
characters.
If you are using JES3, replace jobId with *, which matches any sequence of
characters. In JES3, jobId is sometimes incorrectly populated with the job name
rather than the job ID.
5. Replace date with ??????, which matches any six characters.
6. Replace time with ??????, which matches any six characters.
File path pattern for this scenario
The following file path pattern is an example of the pattern to use for SYSPRINT
files for the BBOSAPP server that is using JES2:
/u/myLogDir/BBOCELL.BBONODE.BBOSAPP.BBOSAPPS.????????.SR.??????.??????.SYSPRINT.txt
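The conversion steps can be summarized in a small helper. The following
Python sketch is illustrative only; the function name and parameters are
assumptions, not part of the product:

# Illustrative sketch only: build a file path pattern from the WebSphere
# redirected-output naming convention that is described above.
def was_file_path_pattern(output_dir, cell, node, server, job_name,
                          as_type, log, jes2=True):
    job_id = "????????" if jes2 else "*"   # JES2: 8 chars; JES3: any sequence
    parts = [cell, node, server, job_name, job_id, as_type,
             "??????", "??????", log, "txt"]  # date and time: 6 chars each
    return output_dir + "/" + ".".join(parts)

# Produces the SYSPRINT pattern shown above for the BBOSAPP server (JES2).
print(was_file_path_pattern("/u/myLogDir", "BBOCELL", "BBONODE",
                            "BBOSAPP", "BBOSAPPS", "SR", "SYSPRINT"))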
Data stream configuration for data gathered by System Data
Engine
This reference lists the configuration values that you can update in the “Configure
SDE data stream” window.
dataSourceName
The data source name. This value is sent to a Logstash receiver as the
Source Name field.
Tip: If you use the Auto-Qualify field in the subscriber configuration to
fully qualify the data source name, this dataSourceName value is
automatically updated with the fully qualified data source name. For more
information about the values that you can select in the Auto-Qualify field,
see “Subscriber configuration” on page 95.
Flavor This field is not available for all SMF record types. If it is available, it
specifies a filter (which you can select in the field) to restrict the forwarded
records to those for a specific set of applications, such as CICS Transaction
Server for z/OS or Db2 for z/OS applications.
Transform configuration
This reference lists and describes the transforms that you can select in the
“Transform data stream” window. For each transform, it also lists and describes
the field values that you can update in the “Configure transform” window.
The three categories of transforms are transcribe transforms, splitter transforms,
and filter transforms.
Transcribe transforms
A transcribe transform converts data from one format to another.
Transforms in this category
v “TRANSCRIBE transform” on page 80
Splitter transforms
Based on specified criteria, a splitter transform splits data that is received
as one message into multiple messages.
Transforms in this category
v “CRLF Splitter transform” on page 81
v “SYSLOG Splitter transform” on page 82
v “SyslogD Splitter transform” on page 83
v “NetView Splitter transform” on page 83
v “EYULOG MDY Splitter transform” on page 84
v “EYULOG DMY Splitter transform” on page 85
v “EYULOG YMD Splitter transform” on page 85
v “CICS MSGUSR MDY Splitter transform” on page 86
v “CICS MSGUSR DMY Splitter transform” on page 87
v “CICS MSGUSR YMD Splitter transform” on page 88
v “WAS for zOS SYSOUT Splitter transform” on page 88
v “WAS for zOS SYSPRINT Splitter transform” on page 89
v “WAS HPEL Splitter transform” on page 90
v “WAS SYSTEMOUT Splitter transform” on page 91
v “FixedLength Splitter transform” on page 91
Filter transforms
Based on specified criteria, a filter transform discards messages from the
data stream.
Transforms in this category
v “Regex Filter transform” on page 92
v “Time Filter transform” on page 94
TRANSCRIBE transform:
A TRANSCRIBE transform converts the data stream into a different encoding,
such as UTF-8 encoding.
Configuration values that you can update
For the TRANSCRIBE transform, you can update the following field values in the
“Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Output Encoding
Specifies the encoding, such as UTF-8, into which you want to convert the
data stream.
CRLF Splitter transform:
A CRLF Splitter transform splits a single message in a packet into multiple
messages, based on occurrences of a carriage return (CR) character, a line feed (LF)
character, or any contiguous string of these two characters. The transform also
considers the packet encoding as it determines whether characters in the message
are carriage return or line feed characters.
The transform splits data according to the following delimiters, among others:
v CR
v LF
v CRLF
v LFCR
v CRCR
v LFLF
v CRLFCRLF
v LFCRLFCR
Configuration values that you can update
For the CRLF Splitter transform, you can update the following field values in the
“Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
Ignore Character
Specifies a character that, if found at the beginning of a data record, causes
the record to be ignored and not included in the outgoing data packet.
This field is optional and is blank by default.
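To make the Emit and Ignore Character behavior concrete, the following
Python sketch (illustrative only; not how the Data Streamer is implemented)
repackages the messages of one incoming data packet:

# Illustrative sketch only: apply the documented Ignore Character and
# Emit semantics to the list of messages from one incoming data packet.
def repackage(messages, emit=0, ignore_char=None):
    if ignore_char:
        messages = [m for m in messages if not m.startswith(ignore_char)]
    if emit <= 0:              # 0 means one outgoing packet with everything
        return [messages]
    return [messages[i:i + emit] for i in range(0, len(messages), emit)]

packets = repackage([f"msg{n}" for n in range(15)], emit=6)
print([len(p) for p in packets])   # [6, 6, 3], as in the example above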
SYSLOG Splitter transform:
The SYSLOG Splitter transform is used only by the z/OS SYSLOG (from the user
exit or from OPERLOG) data stream. It splits z/OS SYSLOG records that are
received as a single message into individual log records.
This transform discards any log records that do not comply with the format of a
z/OS SYSLOG record.
Configuration values that you can update
For the SYSLOG Splitter transform, you can update the following field values in
the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
SyslogD Splitter transform:
The SyslogD Splitter transform is used only by the USS Syslogd Admin, USS
Syslogd Debug, and USS Syslogd Error data streams. It splits UNIX System
Services system log (syslogd) records that are received as a single message into
individual log records.
Configuration values that you can update
For the SyslogD Splitter transform, you can update the following field values in
the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
NetView Splitter transform:
The NetView Splitter transform is used only by the NetView Netlog data stream.
It splits NetView netlog records that are received as a single message into
individual log records.
Configuration values that you can update
For the NetView Splitter transform, you can update the following field values in
the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
EYULOG MDY Splitter transform:
The EYULOG MDY Splitter transform is used only by the CICS EYULOG data
stream. It splits CICS Transaction Server for z/OS EYULOG data that is in MDY
(month, day, year) format into individual log records.
Configuration values that you can update
For the EYULOG MDY Splitter transform, you can update the following field
values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
EYULOG DMY Splitter transform:
The EYULOG DMY Splitter transform is used only by the CICS EYULOG DMY data
stream. It splits CICS Transaction Server for z/OS EYULOG data that is in DMY
(day, month, year) format into individual log records.
Configuration values that you can update
For the EYULOG DMY Splitter transform, you can update the following field
values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
EYULOG YMD Splitter transform:
The EYULOG YMD Splitter transform is used only by the CICS EYULOG YMD
data stream. It splits CICS Transaction Server for z/OS EYULOG data that is in
YMD (year, month, day) format into individual log records.
Configuration values that you can update
For the EYULOG YMD Splitter transform, you can update the following field
values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
CICS MSGUSR MDY Splitter transform:
The CICS MSGUSR MDY Splitter transform is used only by the CICS User
Messages data stream. It splits CICS Transaction Server for z/OS MSGUSR data
that is in MDY (month, day, year) format into individual log records.
Configuration values that you can update
For the CICS MSGUSR MDY Splitter transform, you can update the following
field values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
CICS MSGUSR DMY Splitter transform:
The CICS MSGUSR DMY Splitter transform is used only by the CICS User
Messages DMY data stream. It splits CICS Transaction Server for z/OS MSGUSR
data that is in DMY (day, month, year) format into individual log records.
Configuration values that you can update
For the CICS MSGUSR DMY Splitter transform, you can update the following
field values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
CICS MSGUSR YMD Splitter transform:
The CICS MSGUSR YMD Splitter transform is used only by the CICS User Messages YMD
data stream. It splits CICS Transaction Server for z/OS MSGUSR data that is in
YMD (year, month, day) format into individual log records.
Configuration values that you can update
For the CICS MSGUSR YMD Splitter transform, you can update the following field
values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
WAS for zOS SYSOUT Splitter transform:
The WAS for zOS SYSOUT Splitter transform is used only by the WebSphere
SYSOUT data stream. It splits WebSphere Application Server for z/OS SYSOUT
data into individual messages.
This transform discards BBOO0408I messages, which indicate that output was
redirected to, or from, another file.
Configuration values that you can update
For the WAS for zOS SYSOUT Splitter transform, you can update the following
field values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
WAS for zOS SYSPRINT Splitter transform:
The WAS for zOS SYSPRINT Splitter transform is used only by the WebSphere
SYSPRINT data stream. It splits WebSphere Application Server for z/OS
SYSPRINT data into individual messages.
This transform discards messages that are not written by the Java logging APIs.
Configuration values that you can update
For the WAS for zOS SYSPRINT Splitter transform, you can update the following
field values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
WAS HPEL Splitter transform:
The WAS HPEL Splitter transform is used only by the WebSphere HPEL data
stream. It splits WebSphere Application Server for z/OS High Performance
Extensible Logging (HPEL) data into individual messages.
Configuration values that you can update
For the WAS HPEL Splitter transform, you can update the following field values
in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
WAS SYSTEMOUT Splitter transform:
The WAS SYSTEMOUT Splitter transform is used only by the WebSphere USS
Sysprint data stream. It splits WebSphere Application Server for z/OS SYSPRINT
data that is in distributed format into individual messages. Use this splitter if you
configure your WebSphere Application Server for z/OS environment to use a
distributed logging format.
This transform discards messages with a length that is greater than 1 MB.
Configuration values that you can update
For the WAS SYSTEMOUT Splitter transform, you can update the following field
values in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
FixedLength Splitter transform:
The FixedLength Splitter transform splits data records that have a fixed record
length into multiple messages, based on configuration values that you provide.
Configuration values that you can update
For the FixedLength Splitter transform, you can update the following field values
in the “Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Start Offset
Specifies the starting point of each data record. This value is required.
Fixed Length
Specifies the expected length of the incoming data record. This value is
required.
Skip
Specifies the number of bytes from the incoming data record to skip, which
means that these bytes are excluded from the output message. This value is
required.
Emit
Specifies the maximum number of messages to be included in an outgoing
data packet. For example, if an incoming data packet has 15 messages, and
this Emit value is set to 6, each outgoing data packet must have no more
than 6 messages. In this example, the incoming data packet must be split
into three outgoing data packets, where two data packets have 6 messages,
and one data packet has 3 messages.
The default value is 0, which indicates that all messages in an incoming
data packet are included in the outgoing data packet.
Ignore Character
Specifies a character that, if found at the beginning of a data record, causes
the record to be ignored and not included in the outgoing data packet.
This field is optional and is blank by default.
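A fixed-length split can be pictured as follows. This Python sketch is
illustrative only; the real transform operates on data packets inside the
Data Streamer, and Start Offset and Skip are interpreted here as assumptions
(the offset of the first record in the buffer, and the leading bytes removed
from each record):

# Illustrative sketch only: split a buffer of fixed-length records by
# using the Start Offset, Fixed Length, and Skip values described above.
def split_fixed_length(data, start_offset, fixed_length, skip):
    messages = []
    pos = start_offset                   # assumed: where the first record begins
    while pos + fixed_length <= len(data):
        record = data[pos:pos + fixed_length]
        messages.append(record[skip:])   # assumed: Skip removes leading bytes
        pos += fixed_length
    return messages

# Three 8-byte records; skip the first 2 bytes of each.
print(split_fixed_length(b"00AAAAAA11BBBBBB22CCCCCC", 0, 8, 2))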
Regex Filter transform:
The Regex Filter transform filters messages in the data stream according to a
regular expression (regex) pattern, which you can define. You also define the filter
to either accept or deny incoming messages based on the regular expression. For
example, if an incoming message contains the regular expression, and you define
the filter to deny incoming messages based on the regular expression, the filter
then discards any incoming messages that contain the regular expression.
Tip: If an incoming message is first processed by a splitter transform, this filter
transform processes each individual message that results from the splitter
transform. Otherwise, the incoming message is processed as a single message.
To use this transform, you must know how to use regular expressions. The Oracle
documentation about regular expressions is one source of reference information.
Important: The use of complex regular expressions can result in increased usage of
system resources.
Configuration values that you can update
For the Regex Filter transform, you can update the following field values in the
“Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Regex Specifies one or more valid regular expressions. At least one regular
expression must be defined for this transform. You can also select the
check box for any of the following expression flags:
Case Insensitive
Enables case-insensitive matching, in which only characters in the
US-ASCII character set are matched.
To enable Unicode-aware, case-insensitive matching, select both the
Unicode Case flag and the Case Insensitive flag.
Comments
Permits white space and comments in the regular expression. In
this mode, white space is ignored, and any embedded comment
that starts with the number sign character (#) is ignored.
Dotall Enables dotall mode in which the “dot” expression (.) matches any
character, including a line terminator.
Multi Line
Enables multiline mode in which the caret expression (^) and the
dollar sign expression ($) match immediately after, or immediately
before, a line terminator or the end of the message.
Unicode Case
Enables Unicode-aware case folding.
To enable Unicode-aware, case-insensitive matching, select both the
Unicode Case flag and the Case Insensitive flag.
Unix Lines
Enables UNIX lines mode in which the “dot” expression (.), the
caret expression (^), and the dollar sign expression ($) are
interpreted only as the line feed (LF) line terminator.
To define one or more regular expressions in the Regex field, complete the
following steps:
1. Type a regular expression in the Regex field, and optionally, select one
or more check boxes to define the matching modes.
2. To add another regular expression, click ADD REGEX, and repeat the
previous step.
Filter Type
Specifies whether the filter keeps or discards incoming messages that
contain the regular expression.
You can choose either of the following values. The default value is Accept.
Accept Specifies that any messages that contain the regular expression are
kept in the data stream.
Deny
Specifies that any messages that contain the regular expression are
discarded from the data stream.
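The flag names above appear to correspond to the java.util.regex pattern
flags of the same names, which is consistent with the Oracle documentation
reference. The following Python sketch approximates the Accept and Deny
behavior; it is illustrative only, and the sample patterns and messages are
assumptions:

# Illustrative sketch only: approximate the Accept and Deny filter types
# by searching each message for any of the configured regular expressions.
import re

def regex_filter(messages, patterns, filter_type="Accept"):
    # Flags from the check boxes would map to re module flags such as
    # re.IGNORECASE, re.MULTILINE, re.DOTALL, and re.VERBOSE.
    compiled = [re.compile(p) for p in patterns]
    def contains_match(message):
        return any(c.search(message) for c in compiled)
    if filter_type == "Accept":          # keep messages that contain a match
        return [m for m in messages if contains_match(m)]
    return [m for m in messages if not contains_match(m)]   # Deny: discard them

messages = ["IEF403I JOB STARTED", "HELLO", "IEF404I JOB ENDED"]
print(regex_filter(messages, [r"IEF40\dI"]))          # keeps both IEF messages
print(regex_filter(messages, [r"IEF40\dI"], "Deny"))  # keeps only HELLO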
Time Filter transform:
The Time Filter transform filters messages according to a specified schedule, which
you can define.
This filter discards messages that are not received within a time interval (or time
window) that is defined in the schedule.
Configuration values that you can update
For the Time Filter transform, you can update the following field values in the
“Configure transform” window:
Inspect
Specifies whether, and at what stage, data packets in the data stream are to
be inspected. For example, during transform processing, the data packets
can be inspected by printing them to the z/OS console at the input stage,
the output stage, or both stages.
You can choose any of the following values. The default value is None. To
prevent the sending of large volumes of data to the z/OS console and to
the IBM Common Data Provider for z Systems Data Streamer job log, use
the default value, unless you are instructed by IBM Software Support to
change this value for troubleshooting purposes.
None
Specifies that data packets are not inspected.
Input
Specifies that data packets are printed to the z/OS console before
they are processed by the transform.
Output Specifies that data packets are printed to the z/OS console after
they are processed by the transform.
Both
Specifies that data packets are printed to the z/OS console both
before and after they are processed by the transform.
Schedule
To define a new schedule with one or more time intervals, complete the
following steps:
1. For this field value, select Create a new schedule, and click OK.
2. In the Edit name field of the resulting Schedules window, type the
name for the schedule that you want to contain this time interval.
3. To set the time interval for this schedule, either type the time
information in the From and to fields, or use the slider to adjust the
time.
4. To add another time interval for this schedule, click ADD WINDOW,
and repeat the previous step.
5. To save the schedule, click APPLY.
For more information about how to define or update schedules in a policy,
see “SCHEDULES properties: Defining time intervals for filtering
operational data” on page 30.
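Conceptually, the filter keeps only messages that are received inside one of
the schedule's time windows. The following Python sketch is illustrative
only; the data shapes are assumptions, not the product's internal model:

# Illustrative sketch only: discard messages that are not received within
# any time window that is defined in the schedule.
from datetime import time

def time_filter(messages, windows):
    """messages: list of (received_at, text) tuples, received_at a datetime.time.
    windows: list of (from_time, to_time) tuples from the schedule."""
    def in_window(t):
        return any(start <= t <= end for start, end in windows)
    return [text for received_at, text in messages if in_window(received_at)]

windows = [(time(8, 0), time(17, 0))]             # one 08:00-17:00 window
messages = [(time(9, 30), "kept"), (time(22, 15), "discarded")]
print(time_filter(messages, windows))             # ['kept']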
Subscriber configuration
This reference lists the configuration values that you can update in the “Configure
subscriber” window.
Name The name of the subscriber.
Description
An optional description for the subscriber.
File Buffer Time
The length of time to keep unsent data in the Data Streamer file buffer
before discarding it.
Overview of Data Streamer file buffer process
When a subscriber that is defined in a policy becomes unreachable,
the Data Streamer cannot send data to the subscriber until the
connection is re-established. During this period, the Data Streamer
begins the file buffer process by writing the data to the buffer file
in its working directory (CDP_HOME directory). The Data
Streamer tries to reconnect to the subscriber every 5 minutes, while
it continues to write any incoming data to file.
If a reconnection attempt is successful before the File Buffer Time
value is reached, the Data Streamer reads the buffer file and sends
its contents to the subscriber.
If reconnection attempts are unsuccessful, and the File Buffer Time
value is reached, all unsent data in the buffer file, and any
incoming data, are discarded.
Considerations for setting this value
If this value is set to Always, the file buffer does not stop writing
unsent data to file until the file system is full, or the connection to
the subscriber is re-established.
If this value is set to None, and the subscriber becomes unreachable,
data is discarded.
If you want to select another value, the following information
might be useful:
v The file buffer writes to a z/OS file system (zFS) with a 16 TB
capacity that is shared among all subscribers.
v Depending on the number and size of the files on the zFS, the
available space might be significantly less than the 16 TB limit.
v If the data traffic is high, and the File Buffer Time value is set
to a longer period of time, such as several days, the buffer file
can quickly fill the file system.
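The buffering decision described above can be summarized as follows. This
Python sketch is illustrative only and simplifies the actual Data Streamer
behavior; the numeric units for File Buffer Time are an assumption here:

# Illustrative sketch only: decide what happens to unsent data while a
# subscriber is unreachable, based on the File Buffer Time setting.
def handle_unsent(elapsed, file_buffer_time):
    if file_buffer_time == "None":
        return "discard"               # no buffering at all
    if file_buffer_time == "Always":
        return "buffer"                # buffer until the file system fills
    if elapsed <= file_buffer_time:    # assumed: a numeric limit from the policy
        return "buffer"                # replayed when reconnection succeeds
    return "discard"                   # limit reached: buffered data is dropped

print(handle_unsent(30, 60))           # 'buffer'
print(handle_unsent(90, 60))           # 'discard'
print(handle_unsent(90, "Always"))     # 'buffer'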
Protocol
The streaming protocol that the Data Streamer uses to send data to the
subscriber.
You can choose any of the following values, which are organized under the
applicable subscriber:
Logstash
CDP Logstash
The protocol for a Logstash subscriber, without encryption.
CDP Logstash SSL
If you want to have secure communications between the
Data Streamer and Logstash, use this value. You must also
complete the relevant configuration steps that are described
in “Securing communications between the Data Streamer
and its subscribers” on page 98.
Data Receiver
CDP Data Receiver
The protocol for a Data Receiver subscriber, without
encryption.
CDP Data Receiver SSL
If you want to have secure communications between the
Data Streamer and Data Receiver, use this value. You must
also complete the relevant configuration steps that are
described in “Securing communications between the Data
Streamer and its subscribers” on page 98.
Generic HTTP or HTTPS subscriber
CDP Generic HTTP
The protocol for a generic HTTP subscriber, which does not
provide encryption.
CDP Generic HTTPS
The protocol for a generic HTTPS subscriber, which
provides encryption. You must also complete the relevant
configuration steps that are described in “Securing
communications between the Data Streamer and its
subscribers” on page 98.
Tip: For more information about preparing your target destinations to
receive data from the IBM Common Data Provider for z Systems Data
Streamer, see “Preparing the target destinations to receive data from the
Data Streamer” on page 101.
Host
The host name or IP address of the subscriber.
Port
The port on which the subscriber listens for data from the Data Streamer.
Auto-Qualify
A specification of whether to prepend system names and sysplex names to
data source names in the data streams that are sent to the subscriber. The
data source name is the value of the dataSourceName field in the data
stream configuration.
If you use the same policy file for multiple systems within one sysplex, the
data source names must be unique across all systems in that sysplex. If
you use the same policy file for multiple sysplexes, the data source names
must be unique across all systems in all sysplexes. You can use this field to
fully qualify these data source names.
You can choose any of the following values. The default value is None.
None
Indicates that the data source name from the dataSourceName
field in the data stream configuration is used.
System Specifies that the system name and the data source name are used
in the following format:
systemName-dataSourceName
systemName represents the name of the system on which the IBM
Common Data Provider for z Systems runs.
If you use the same policy file for multiple systems within one
sysplex, you might want to use the System value.
Sysplex
Specifies that the sysplex name, system name, and data source
name are used in the following format:
sysplexName-systemName-dataSourceName
systemName represents the name of the system on which the IBM
Common Data Provider for z Systems runs. sysplexName represents
the name of the sysplex in which the IBM Common Data Provider
for z Systems runs.
If you use the same policy file for multiple sysplexes, you might
want to use the Sysplex value.
For more information about the dataSourceName field in the data stream
configuration, see the following topics:
v “Data stream configuration for data gathered by Log Forwarder” on
page 49
v “Data stream configuration for data gathered by System Data Engine”
on page 79
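The three Auto-Qualify values produce the qualified names shown above. A
minimal Python sketch, illustrative only (the sample names are assumptions):

# Illustrative sketch only: build the data source name that is sent to the
# subscriber for each Auto-Qualify value.
def qualify(data_source_name, system_name, sysplex_name, auto_qualify="None"):
    if auto_qualify == "System":
        return f"{system_name}-{data_source_name}"
    if auto_qualify == "Sysplex":
        return f"{sysplex_name}-{system_name}-{data_source_name}"
    return data_source_name            # None: the configured name is used as-is

print(qualify("SYSLOG", "SYS1", "PLEX1", "Sysplex"))   # PLEX1-SYS1-SYSLOG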
URL Path
This field is available only if the subscriber is a generic HTTP or HTTPS
subscriber. It specifies the path that is used to create the URL for the
subscriber. For example, if the subscriber Host value is logstash.myco.com,
the Port value is 8080, and the URL Path value is /myapp/upload/data, the
following URL is created for the subscriber:
http://logstash.myco.com:8080/myapp/upload/data
Send As
A specification of how the Data Streamer sends data to a Logstash
subscriber. The following values are possible:
Unsplit
Directs the Data Streamer to send the data that is contained in a
packet as a single message to the subscriber, regardless of whether
any of the incoming data streams were previously split. Use this
value if the target destination is IBM Operations Analytics for z
Systems or any other target destination that includes its own
splitting functions.
Tip: If the target destination is IBM Operations Analytics for z
Systems, you must manually change the value to Unsplit.
Split
Directs the Data Streamer to send the data that is contained in a
packet as separate messages to the subscriber. If any of the
incoming data streams were previously split (such as those that
were previously processed by a splitter transform), Split is
preselected as the value, and you cannot change it.
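The difference between Unsplit and Split can be pictured as follows. This
Python sketch is illustrative only; the newline join is an assumption, and
the actual payload formats are defined by the Data Streamer protocols:

# Illustrative sketch only: Unsplit sends the whole packet as one message;
# Split sends each message in the packet separately.
def to_outgoing(packet_messages, send_as="Unsplit"):
    if send_as == "Unsplit":
        # Joined with a newline here for illustration only.
        return ["\n".join(packet_messages)]
    return list(packet_messages)       # Split: one message per record

print(to_outgoing(["line1", "line2"], "Unsplit"))  # ['line1\nline2']
print(to_outgoing(["line1", "line2"], "Split"))    # ['line1', 'line2']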
Securing communications between the Data Streamer and its
subscribers
To secure communications between the IBM Common Data Provider for z Systems
Data Streamer and its subscribers, you must choose a streaming protocol that
supports Transport Layer Security (TLS) when you configure a subscriber in a
policy. You must also configure the Data Streamer and its subscribers to use TLS.
Before you begin
For more information about the streaming protocols, see “Subscriber configuration”
on page 95. The streaming protocols that support TLS contain either SSL or HTTPS
in the name. For example, to secure communications between the Data Streamer
and the Data Receiver, select the streaming protocol CDP Data Receiver SSL rather
than CDP Data Receiver.
Tip: The Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols
are cryptographic protocols that can provide authentication and data encryption.
SSL is the predecessor of TLS.
About this task
The TLS protocol is provided by the IBM Java Runtime Environment that is
installed on the z/OS system where the IBM Common Data Provider for z Systems
runs, and on the distributed system where the Data Receiver runs. Use Java 8
because by default, it uses the TLS 1.2 protocol, which is the most recent TLS
protocol version.
The following scripts for configuring secure communications are provided in the
target library /usr/lpp/IBM/cdpz/v1r1m0/DS/LIB:
v setupDataStreamerSSL.sh
v setupDataReceiverSSL.sh
v setupDataReceiverSSL.bat
v importCertificate.sh
Procedure
To configure the Data Streamer and its subscribers to use TLS, complete the
following steps:
1. On each Data Streamer system that must use secure communications with
subscribers, set the following environment variables:
JAVA_HOME
The Java installation directory on the Data Streamer system.
CDP_HOME
The Data Streamer working directory that is described in “Configuring
the Data Streamer” on page 109.
2. On each Data Streamer system that must use secure communications with
subscribers, run the script setupDataStreamerSSL.sh, as shown in the following
command, where keystore_password represents the password that you want to
use for the Data Streamer keystore. This script configures the Data Streamer to
use TLS to communicate with subscribers.
/usr/lpp/IBM/cdpz/v1r1m0/DS/LIB/setupDataStreamerSSL.sh keystore_password
Tip: You are prompted for answers to several questions. For the question What
is your first and last name?, respond with the fully qualified host name of
the Data Streamer system.
The following files are created in the CDP_HOME directory:
passStore
Contains a secret key for password encryption.
cdp.properties
Contains the encrypted password for the Data Streamer truststore.
cdp.jks
Keystore to contain the public certificates for the subscribers.
3. For each Data Receiver that must use secure communications with the Data
Streamer, complete the following steps on the Data Receiver system.
a. Set the following environment variables:
JAVA_HOME
The Java installation directory on the Data Receiver system.
CDPDR_HOME
The Data Receiver working directory that is described in “Setting
up a working directory and an output directory for the Data
Receiver” on page 105.
b. Download the setupDataReceiverSSL.sh (for Linux systems) or
setupDataReceiverSSL.bat (for Windows systems) file from the IBM
Common Data Provider for z Systems system by using a binary protocol.
c. Move or copy the setupDataReceiverSSL.sh or setupDataReceiverSSL.bat
file into the CDPDR_HOME directory.
d. Run the script setupDataReceiverSSL.sh or setupDataReceiverSSL.bat, as
shown in the following command. This script configures the Data Receiver
to use TLS to communicate with the Data Streamer.
For Linux systems
cd CDPDR_HOME
./setupDataReceiverSSL.sh datareceiver_hostname datareceiver_ip_address datareceiver_cert_alias keystore_password
For Windows systems
cd CDPDR_HOME
setupDataReceiverSSL.bat datareceiver_hostname datareceiver_ip_address datareceiver_cert_alias keystore_password
The following variables are used in the command:
datareceiver_hostname
The fully qualified host name of the Data Receiver.
datareceiver_ip_address
The IP address of the Data Receiver.
datareceiver_cert_alias
The alias name for the public certificate of the Data Receiver. This
name must be used in importing the Data Receiver public certificate
to the Data Streamer truststore.
keystore_password
The password that you want to use for the Data Receiver keystore.
Tip: You are prompted for answers to several questions. For the question
What is your first and last name?, respond with the fully qualified host
name of the Data Receiver system.
The following files are created in the CDPDR_HOME directory:
passStore
Contains a secret key for password encryption.
cdp.properties
Contains the encrypted password for the Data Receiver keystore.
cdp.jks
Contains the public certificate and private key pair for the Data
Receiver.
cdp.cert
The Data Receiver public certificate, which must be imported to the
Data Streamer truststore.
4. For each Logstash or Generic HTTPS subscriber that must use secure
communications with the Data Streamer, generate a public certificate and
private key pair, and configure the subscriber to use them. For information
about how to do this configuration, see the Logstash or other third-party
documentation.
5. Transfer the public certificate files from each subscriber system to the Data
Streamer systems that send data to those subscribers. Use a binary transfer
protocol.
Important: If a Data Streamer is to send data to more than one subscriber, the
public certificate file names must be unique to avoid conflict.
6. On each Data Streamer system, run the script importCertificate.sh for each
subscriber public certificate, as shown in the following command. This script
imports the public certificate for the subscriber into the Data Streamer
truststore.
/usr/lpp/IBM/cdpz/v1r1m0/DS/LIB/importCertificate.sh cdp.cert subscriber_cert_alias
The following variables are used in the command:
cdp.cert
The fully qualified path (including the file name) for the file where the
public certificate for the subscriber is stored.
subscriber_cert_alias
The alias name for the public certificate of the subscriber. Use the same
alias name that was used when the public certificate for the subscriber
was originally generated.
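For example, the following hypothetical invocation imports a Data Receiver
certificate that was transferred to /u/cdpadmin/certs/dr1.cert, by using the
alias dr1cert that was specified when the certificate was generated (both
values are placeholders):
       /usr/lpp/IBM/cdpz/v1r1m0/DS/LIB/importCertificate.sh /u/cdpadmin/certs/dr1.cert dr1cert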
Preparing the target destinations to receive data from the Data
Streamer
You must prepare your target destinations to receive the z/OS operational data
from IBM Common Data Provider for z Systems Data Streamer. The preparation
steps differ depending on the target destination.
About this task
For the Data Streamer to stream data to a target destination, you must define the
streaming protocol for that target destination in the policy. Table 12 lists common
target destinations with the required streaming protocols and associated
information for preparing the target destination to receive data from the Data
Streamer. For more information about defining streaming protocols for sending
data from the Data Streamer to its subscribers, see “Subscribers to a data stream or
transform” on page 20.
Table 12. Common target destinations with the required streaming protocols and associated
information

IBM Operations Analytics for z Systems
       Streaming protocol that must be defined in the policy: CDP Logstash or
       CDP Logstash SSL.
       Steps for preparing the target destination to receive data from the
       Data Streamer: Install the ioaz Logstash output plugin and the Logstash
       version that are provided with IBM Operations Analytics for z Systems.
       The Logstash version that is provided with IBM Operations Analytics for
       z Systems is optimized for use with Linux on z Systems. For more
       information, see the IBM Operations Analytics for z Systems Version
       3.1.0 documentation.

Splunk
       Streaming protocol that must be defined in the policy: CDP Data
       Receiver or CDP Data Receiver SSL.
       Steps for preparing the target destination to receive data from the
       Data Streamer: Complete the steps that are described in “Preparing to
       send data to Splunk” on page 102.

Elasticsearch
       Streaming protocol that must be defined in the policy: CDP Logstash or
       CDP Logstash SSL.
       Steps for preparing the target destination to receive data from the
       Data Streamer: Complete the steps that are described in “Preparing to
       send data to Elasticsearch” on page 103.
For other target destinations to which you want to stream z/OS operational data,
you must use one of the following three subscribers:
v Data Receiver, as described in “Configuring the Data Receiver” on page 105
v Logstash receiver, as described in “Configuring a Logstash receiver” on page 108
v Generic HTTP subscriber, as described in “Subscribers to a data stream or
transform” on page 20
Preparing to send data to Splunk
To send data from IBM Common Data Provider for z Systems to Splunk, configure
and run an IBM Common Data Provider for z Systems Data Receiver on the
system where the Splunk Enterprise server or heavy forwarder is installed. In
Splunk, you must also install the IBM Common Data Provider for z Systems
Buffered Splunk Ingestion App.
Procedure
In preparation for sending data to Splunk, complete the following steps:
1. Configure the Data Receiver, as described in “Configuring the Data Receiver”
on page 105.
Important: The Data Receiver environment variables must also be available to
Splunk. Verify that the Data Receiver working directory is assigned to the
environment variable CDPDR_HOME, and that the Data Receiver output
directory is assigned to the environment variable CDPDR_PATH, as described
in “Setting up a working directory and an output directory for the Data
Receiver” on page 105.
2. Start the Data Receiver, as described in “Running the Data Receiver” on page
131.
3. Define a policy with the Data Receiver as the subscriber. For more information,
see “Subscribers to a data stream or transform” on page 20.
4. From the IBM Common Data Provider for z Systems /usr/lpp/IBM/cdpz/
v1r1m0/DS/LIB directory, download the IBM Common Data Provider for z
Systems Buffered Splunk Ingestion App (which is a part of your SMP/E
installation package) in binary mode. The following files contain the App.
Platform on which Splunk runs    File name for Buffered Splunk Ingestion App
UNIX                             ibm_cdpz_buffer_nix.spl
Windows                          ibm_cdpz_buffer_win.spl
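For example, you might download the App from the z/OS system with sftp, which
transfers files in binary mode; the user ID and host name are placeholders:
       sftp cdpadmin@zoshost.example.com
       sftp> cd /usr/lpp/IBM/cdpz/v1r1m0/DS/LIB
       sftp> get ibm_cdpz_buffer_nix.spl
       sftp> quit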
5. To install the Buffered Splunk Ingestion App in Splunk, complete the following
steps:
a. Log in to Splunk.
b. Click the gear icon that is next to the word “Apps.”
c. Select Install app from file.
d. Browse for the file that you downloaded in step 4, and select that file.
e. When you are prompted, select Enable now.
If you are using a Splunk heavy forwarder, you do not have to index the data
locally. You can use the system, sysplex, and host attributes to route the data to
an appropriate indexer.
If you want to split the indexing locally, you can refine the monitor stanzas in
the inputs.conf file by extending them to add the sysplex component of the
file name. Then, duplicate the monitor stanza for each sysplex from which you
want to ingest data, and change the index value on the monitor stanzas to
indicate the index in which the data is to be kept. These indexes must be
created within Splunk. If you update the IBM Common Data Provider for z
Systems Buffered Splunk Ingestion App, this customization is deleted.
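As a sketch only, a refined pair of monitor stanzas in inputs.conf might look
like the following example. The monitored paths depend on your Data Receiver
output directory and on the stanza layout that the App provides, and the
sysplex names and index names here are placeholders:
       [monitor:///u/cdpdr/output/PRODPLEX-*]
       index = zosprod

       [monitor:///u/cdpdr/output/TESTPLEX-*]
       index = zostest
Both indexes (zosprod and zostest in this sketch) must already be created in
Splunk.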
Results
You can see the data that is loaded into Splunk by using a simple search. For
example, the following search shows you all ingested z/OS SYSLOG events in the
zosdex index:
index=zosdex sourcetype=zOS-SYSLOG-Console
If you expand an event, you can see the individual fields for which extraction rules
are set.
The following search example shows you the z/OS SYSLOG messages that are
issued by the CICS35 job that is running on your production sysplex and are in the
zosdex index:
index=zosdex sysplex=PRODPLEX jobname=CICS35 sourcetype=zOS-SYSLOG-Console
You can also use Splunk analytics tools to analyze the data, or write your own
deep analysis tools.
Preparing to send data to Elasticsearch
To send data from IBM Common Data Provider for z Systems to Elasticsearch,
configure Logstash by using the Logstash configuration files that are provided by
IBM Common Data Provider for z Systems.
Before you begin
The data streams that are sent to Elasticsearch must be split. Depending on the
format in which you configure Logstash to receive data, you might also need to
configure the TRANSCRIBE transform in the policy to convert the data streams to
UTF-8 encoding before they are sent to Elasticsearch.
About this task
The IBM Common Data Provider for z Systems Elasticsearch ingestion kit contains
the Logstash configuration files that are provided by IBM Common Data Provider
for z Systems.
Tip: The Elastic Stack (formerly known as the ELK Stack) is a collection of the
popular open source software tools Elasticsearch, Logstash, and Kibana. The IBM
Common Data Provider for z Systems Elasticsearch ingestion kit is supported on
Elastic Stack Versions 5.1.2 through 5.2.1.
Procedure
In preparation for sending data to Elasticsearch, complete the following steps:
1. From the IBM Common Data Provider for z Systems /usr/lpp/IBM/cdpz/
v1r1m0/DS/LIB directory, download the IBM Common Data Provider for z
Systems Elasticsearch ingestion kit, which is in the ibm_cdpz_ELK.tar.gz file, in
binary mode.
2. Extract the Elasticsearch ingestion kit to access the Logstash configuration files.
3. Copy the Logstash configuration files that you need for your environment to
your Logstash configuration directory.
Table 13 indicates the prefixes that are used in the file names for the Logstash
configuration files in the Elasticsearch ingestion kit. The file name prefix is an
indication of the configuration file content.
Table 13. Mapping of the prefix that is used in a Logstash configuration file name to the
content of the file

Prefix in file name of Logstash configuration file    Content of configuration file
B_                                                    Input stage
E_                                                    Preparation stage
H_                                                    Field name annotation stage
N_                                                    Timestamp resolution stage
Q_                                                    Output stage
The following descriptions further explain the Logstash configuration files in
the Elasticsearch ingestion kit:
B_CDPz_Input.lsh file
This file contains the input stage that specifies the TCP/IP port on
which Logstash listens for data from the Data Streamer. Copy this file
to your Logstash configuration directory. You might need to edit the
port number after you copy the file.
E_CDPz_Input.lsh file
This file contains the preparation stage. Copy this file to your Logstash
configuration directory.
Files with H_ prefix in file name
Each of these files contains a unique field name annotation stage that
maps to a unique data stream that IBM Common Data Provider for z
Systems can send to Logstash. To your Logstash configuration directory,
copy the H_ files for only the data streams that you want to send to
Elasticsearch.
Files with N_ prefix in file name
Each of these files contains a unique timestamp resolution stage that
maps to a unique data stream that IBM Common Data Provider for z
Systems can send to Logstash. To your Logstash configuration directory,
copy the N_ files for only the data streams that you want to send to
Elasticsearch.
Q_CDPz_Elastic.lsh file
This file contains an output stage that sends all records to a single
Elasticsearch server. Copy this file to your Logstash configuration
directory.
After you copy the file, edit it to add the name of the host to which the
stage is sending the indexing call. The default name is localhost,
which indexes the data on the server that is running the ingestion
processing. Change the value of the hosts parameter rather than the
value of the index parameter. The index value is assigned during
ingestion so that the data for each source type is sent to a different
index. The host determines the Elasticsearch farm in which the data is
indexed. The index determines the index in which the data is held.
To split data according to sysplex, you can use the [sysplex] field in an
if statement that surrounds an appropriate Elasticsearch output stage.
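The following sketch shows one way to code such an if statement. The host
names are placeholders, and the remaining output settings (such as the index
assignment) should be carried over from the Q_CDPz_Elastic.lsh file that is
provided with the ingestion kit:
       output {
         if [sysplex] == "PRODPLEX" {
           elasticsearch {
             hosts => ["es-prod.example.com:9200"]
             # copy the remaining settings (such as index) from Q_CDPz_Elastic.lsh
           }
         } else {
           elasticsearch {
             hosts => ["localhost:9200"]
             # copy the remaining settings (such as index) from Q_CDPz_Elastic.lsh
           }
         }
       }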
4. In the script for starting Logstash, specify your Logstash configuration
directory.
5. Define a policy with the Logstash as the subscriber. For more information, see
“Subscribers to a data stream or transform” on page 20.
6. Start Logstash and Elasticsearch. If the activation is successful, IBM Common
Data Provider for z Systems starts sending data to Elasticsearch.
Configuring the Data Receiver
Before you can use the Data Receiver as a subscriber, you must configure it.
Before you begin
For more information about the Data Receiver, see “Subscribers to a data stream or
transform” on page 20.
Setting up a working directory and an output directory for the
Data Receiver
You must set up both a working directory and an output directory for the IBM
Common Data Provider for z Systems Data Receiver. You must assign the working
directory to the environment variable CDPDR_HOME, and assign the output
directory to the environment variable CDPDR_PATH.
About this task
The Data Receiver working directory contains files that are created and used
during the operation of the Data Receiver, including the Data Receiver properties
and security-related files. The Data Receiver output directory contains output files
that the Data Receiver generates based on the data that it receives.
Guidelines for both directories
Use the following guidelines to help you decide which directories to use as
the working directory and the output directory:
v The directories must be readable and writable.
v To avoid possible conflicts, do not use the same directory as both the
working directory and the output directory for the Data Receiver.
v If you are running multiple Data Receivers in your environment, each
Data Receiver must be assigned its own working directory and output
directory.
Important: Do not update, delete, or move the files in the CDPDR_HOME
directory.
Procedure
To set up the working directory and the output directory, complete the step that
applies to the platform on which you plan to run the Data Receiver.
Linux
       Add the following lines either to the system profile or to the profile
       of the user that starts the Data Receiver:
       export CDPDR_HOME=/dr_working_directory
       export CDPDR_PATH=/dr_output_directory

Windows
       Create system or user environment variables that reference the
       directories, for example:
       CDPDR_HOME=C:\dr_working_directory
       CDPDR_PATH=C:\dr_output_directory
Copying the Data Receiver files to the target system
After you set up a working directory for the IBM Common Data Provider for z
Systems Data Receiver, you must copy the Data Receiver files to the target system,
which is the system on which you plan to run the Data Receiver.
Before you begin
The Data Receiver is in the DataReceiver.jar file. The Data Receiver properties are
defined in the cdpdr.properties sample file. Both of these files are in the /DS/LIB
directory for IBM Common Data Provider for z Systems.
Procedure
1. Download the DataReceiver.jar file and the cdpdr.properties sample file
from the z/OS system by using a binary protocol. The cdpdr.properties
sample file must remain in UTF-8 encoding.
2. Move or copy the cdpdr.properties sample file into the Data Receiver working
directory (CDPDR_HOME directory).
Updating the Data Receiver properties
After you copy the IBM Common Data Provider for z Systems Data Receiver files
to the target system, you must update the cdpdr.properties sample file in the
Data Receiver working directory (CDPDR_HOME directory).
About this task
In the cdpdr.properties sample file, you can customize the following Data
Receiver properties:
port
The port on which the Data Receiver listens for data from the Data
Streamer. This port must be the same as the port that is defined for the
subscriber in the policy file. For more information, see “Subscriber
configuration” on page 95.
cycle
The number of output files that can simultaneously exist in the Data
Receiver output directory (CDPDR_PATH directory). The minimum value
is 3.
The cycle property is related to how the Data Receiver manages disk
space. For more information about how the Data Receiver manages disk
space, see “Data Receiver process for managing disk space” on page 107.
ssl
       A y or n specification of whether to use the Transport Layer Security
       (TLS) protocol for Data Receiver communication with the Data Streamer.
       If a value other than uppercase Y or lowercase y is used for this
       property, the Data Receiver disables TLS.
trace
A y or n specification of whether to activate tracing for the Data Receiver.
If a value other than uppercase Y or lowercase y is used for this property,
the Data Receiver disables tracing. Typically, you activate tracing only at
the request of IBM Support.
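As an illustration, a cdpdr.properties file that listens on port 8080, keeps
three output files, and disables both TLS and tracing might contain the
following lines; the port value is a placeholder and must match the subscriber
definition in your policy:
       port=8080
       cycle=3
       ssl=n
       trace=n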
Procedure
To update the Data Receiver properties, complete the following steps:
1. In the cdpdr.properties file in the CDPDR_HOME directory, update the property
values with your configuration preferences.
2. If you choose to use the TLS protocol for Data Receiver communication with
the Data Streamer (ssl=y in the cdpdr.properties file), also complete the
appropriate configuration steps in “Securing communications between the Data
Streamer and its subscribers” on page 98.
Data Receiver process for managing disk space:
The IBM Common Data Provider for z Systems Data Receiver limits the number of
output files that can simultaneously exist in its output directory (CDPDR_PATH
directory). To manage these output files, the Data Receiver uses a cyclic process
with rolling files, which are a dynamic, sequential set of files that contain a
continuous stream of data.
How the process works
The cycle property in the Data Receiver properties file defines the number of
output files that can simultaneously exist in the Data Receiver output directory.
When the number of output files in the output directory equals the value of the
cycle property, and a new file is written, then the oldest file is deleted. Each file
contains 1 hour of data. Therefore, if the value of the cycle property is set to 3 (3
hours), no more than 3 hours of data (in 3 output files) is on disk at a time.
The following points further illustrate this example of how the Data Receiver
manages the output files:
v With the value of the cycle property set to 3, the suffixes -0, -1, and -2 are
appended to the names of the output files.
v At the beginning of each hour, the Data Receiver erases the old data in the next
file in the sequence (if it exists) and writes new data to the file (for example, if it
last wrote data to the -0 file, it erases the old data in the -1 file and writes new
data to the -1 file).
v One hour later, the Data Receiver erases the old data in the -2 file and writes
new data to the -2 file.
Important: The target destination must read the output data in a timely manner. In
this example, if the target destination does not read the data within 2 hours of
when it is written, the data is lost because it is deleted.
Tip: If you want to stream the z/OS operational data to an environment that uses
volume-based ingestion, and you want to avoid the ingestion of an unexpectedly
large volume of data, you can take the following actions:
1. When you start the Data Receiver, observe the volume of data that it generates
for each data stream that it receives.
2. Let the Data Receiver run for a while, or increase the cycle length, or
both, so that you can measure the data volume for an extended period of time,
such as a day or a week.
3. When you move your configuration to production, decrease the cycle length to
minimize the disk space that is used for the output files.
Configuring a Logstash receiver
If you are using a Logstash receiver for target destinations other than IBM
Operations Analytics for z Systems or Elasticsearch, you must install and configure
Logstash on your own. This procedure summarizes how to configure Logstash in
this situation.
Before you begin
Remember: If you are sending data to IBM Operations Analytics for z Systems or
Elasticsearch, the information in this topic does not apply. Instead, complete the
Logstash configuration steps as described in Table 12 on page 101.
About this task
On the distributed Logstash system where you want to send z/OS operational
data, you must configure the TCP input plug-in to specify the port on which
Logstash listens for data from the Data Streamer.
Procedure
1. To configure the TCP input plug-in, use the following JavaScript Object
Notation (JSON) format:
input {
  tcp {
    port => 8080
    codec => "json"
  }
}
2. The plug-in expects the data that is sent to it to be in UTF-8 encoding. If the
data is sent in another encoding (for example, if it is converted to another
format by the TRANSCRIBE transform in the policy), or if the data contains
characters that are not represented properly in the UTF-8 encoding, add a
charset parameter to the JSON codec, as shown in the following example, to
specify the other encoding for Logstash:
input {
  tcp {
    port => 8080
    codec => json { charset => "UTF-16BE" }
  }
}
3. If you want to configure a secure data connection for streaming operational
data from IBM Common Data Provider for z Systems to Logstash, see the
Logstash documentation for information about how to set up Transport Layer
Security for the TCP input plug-in.
Tip: The Transport Layer Security (TLS) and Secure Sockets Layer (SSL)
protocols are cryptographic protocols that can provide authentication and data
encryption. SSL is the predecessor of TLS.
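As a sketch only, a TLS-enabled TCP input stage for a Logstash 5.x level might
resemble the following example. The certificate and key paths are
placeholders, and the option names can differ between Logstash versions, so
verify them against the documentation for your level of the TCP input plug-in:
       input {
         tcp {
           port => 8080
           codec => "json"
           ssl_enable => true
           ssl_cert => "/etc/logstash/logstash.crt"
           ssl_key => "/etc/logstash/logstash.key"
         }
       }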
4. This step applies only if, in the subscriber configuration, the Send As value
is set to Split. IBM Common Data Provider for z Systems sends the data to
Logstash in JSON format, where the message field contains the data in
comma-separated values (CSV) format. To complete the process of separating
the data into individual events for Logstash, configure the Logstash split filter
plug-in, as shown in the following example:
filter {
  split { }
}
5. Configure the output plug-in as appropriate for your environment.
Configuring the Data Streamer
The IBM Common Data Provider for z Systems Data Streamer streams operational
data to configured subscribers in the appropriate format. It receives the data from
the data gatherers (System Data Engine, Log Forwarder, or Open Streaming API),
splits it into individual messages if required (for z/OS SYSLOG data, for example),
transforms the data into the appropriate format (such as UTF-8) for the subscriber,
and sends the data to the subscriber.
Before you begin
The Data Streamer can stream data to both on-platform and off-platform
subscribers. To reduce general CPU usage and costs, you can run the Data
Streamer on z Systems Integrated Information Processors (zIIPs).
About this task
To configure the Data Streamer, you must create the Data Streamer started task by
copying the sample procedure HBODSPRO in the hlq.SHBOSAMP library, and updating
the copy.
If you want to run the Data Streamer as a job rather than a procedure, use the
sample job HBODS001 in the hlq.SHBOSAMP library rather than procedure HBODSPRO.
The user ID that is associated with the Data Streamer started task must have the
appropriate authority to access the IBM Common Data Provider for z Systems
program files, which include the installation files and the policy file. It must also
have read/execute permissions to the Java libraries in the UNIX System Services
file system.
Procedure
To create the started task, complete the following steps:
1. Copy the procedure HBODSPRO in the hlq.SHBOSAMP library to a user procedure
library.
Tip: You can rename this procedure according to your installation conventions.
When the name HBODSPRO is used in the IBM Common Data Provider for z
Systems documentation, including in the messages, it means the Data Streamer
started task.
2. In your copy of the procedure HBODSPRO, customize the following parameter
values for your environment:
/usr/lpp/IBM/cdpz/v1r1m0/DS/LIB
Replace this value with the directory where the Data Streamer is
installed in your environment. This directory contains the startup.sh
script for the Data Streamer.
/usr/lpp/IBM/cdpz/v1r1m0/UI/LIB/Sample.policy
Replace this value with the policy file path and name for your
environment.
nnnnn
Replace this value with the port number on which the Data Streamer
listens for data from the data gatherers. The default port on which the
Data Streamer listens for data is 51401.
Important: All data gatherers must send data to the Data Streamer
through this port. If you update this port in the Data Streamer
configuration, you must also update it in the configuration for all data
gatherers. For more information, see “Data Streamer port definition” on
page 11.
3. In your copy of the procedure HBODSPRO, set the following environment
variables for your environment:
JAVA_HOME
Specify the Java installation directory.
CDP_HOME
Specify the location of the Data Streamer working directory. The Data
Streamer working directory contains files that are created and used
during the operation of the Data Streamer, including the Data Streamer
truststore and file buffers.
Guidelines for the working directory
Use the following guidelines to help you decide which
directory to use as the working directory:
v The directory must be readable and writable by the user ID
that runs the Data Streamer.
v To avoid possible conflicts, do not use a directory that is
defined as the Configuration Tool working directory.
Important: Do not update, delete, or move the files in the
CDP_HOME directory.
TZ
Specify the time zone offset for the Data Streamer. For more
information, see the information about the format of the TZ
environment variable in the z/OS product documentation in the IBM
Knowledge Center.
Important: If the value of the TZ environment variable is incorrect, the
time interval that is set in the Time Filter transform is directly affected.
For more information about the Time Filter transform, see “Time Filter
transform” on page 94.
RESOLVER_CONFIG
If a TCP/IP resolver must be explicitly provided, uncomment the
RESOLVER_CONFIG environment variable, and specify the correct
TCP/IP resolver. The Data Streamer must have access to a TCP/IP
resolver. For more information, see “Verifying the search order for the
TCP/IP resolver configuration file” on page 129.
_BPXK_SETIBMOPT_TRANSPORT
If you want the Data Streamer to have affinity to a certain TCP/IP
stack, uncomment the _BPXK_SETIBMOPT_TRANSPORT environment
variable, and specify that TCP/IP stack.
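For illustration, the environment variable settings in your copy of HBODSPRO
might resolve to values like the following ones; all paths and names are
placeholders for your environment:
       JAVA_HOME=/usr/lpp/java/J8.0_64
       CDP_HOME=/u/cdpadmin/cdp_work
       TZ=EST5EDT
       _BPXK_SETIBMOPT_TRANSPORT=TCPIP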
4. Update your security software, such as the Resource Access Control Facility
(RACF), to permit the Data Streamer started task to run in your environment.
Configuring the data gatherer components
The Log Forwarder and the System Data Engine are the primary data gatherer
components of IBM Common Data Provider for z Systems.
About this task
The z/OS File System (zFS) file systems that contain the IBM Common Data
Provider for z Systems program files (installation files, configuration files,
and working files) can be shared among multiple instances of IBM Common Data
Provider for z Systems.
If a single directory contains the Log Forwarder configuration files for more than
one system, or logical partition (LPAR), each configuration file name must include
the names of the sysplex and the system (LPAR) to which the file applies. The file
names must use the following conventions, where SYSNAME is the name of the
system (LPAR) where the Log Forwarder runs, and SYSPLEX is the name of the
sysplex (or monoplex) in which that system is located. The values of both SYSPLEX
and SYSNAME must be in all uppercase.
v SYSPLEX.SYSNAME.zlf.conf
v SYSPLEX.SYSNAME.config.properties
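For example, for a system named SYS1 in a sysplex named PLEXA (both
placeholder names), the configuration files would be named
PLEXA.SYS1.zlf.conf and PLEXA.SYS1.config.properties.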
If one file system contains the working directories for multiple instances of IBM
Common Data Provider for z Systems, the working directory for each Data
Streamer or Log Forwarder instance must be uniquely named.
Configuring the Log Forwarder
Before you run the IBM Common Data Provider for z Systems Log Forwarder to
gather z/OS log data, you must configure it.
Before you begin
Before you configure the Log Forwarder, the following policy definition tasks,
which are done in the Configuration Tool, must be complete:
1. In the Configuration Tool, create one or more policies that include one or more
data streams for z/OS log data.
In the Configuration Tool, when you click the Configure icon on a data stream
node for data that is gathered by the Log Forwarder, the “Configure z/OS Log
Forwarder data stream” window is shown. “Data stream configuration for data
gathered by Log Forwarder” on page 49 lists the configuration values that you
can update in this window.
2. After you configure the data streams for z/OS log data, click the z/OS LOG
FORWARDER button, which is in the Global Properties section of the Policy
Profile Edit window, to set the configuration values for your Log Forwarder
environment, as described in “z/OS LOG FORWARDER properties: Defining
your Log Forwarder environment” on page 27.
3. “Output from the Configuration Tool” on page 19 describes the output from the
Configuration Tool, which includes the following Log Forwarder files:
.zlf.conf file
Contains environment variables for the Log Forwarder.
.config.properties file
Contains configuration information for the Log Forwarder.
About this task
To configure the Log Forwarder, you must complete the following tasks:
1. Create the Log Forwarder started task, as described in “Creating the Log
Forwarder started task.”
2. Copy the Log Forwarder configuration files to the ENVDIR directory, as
described in “Copying the Log Forwarder configuration files to the ENVDIR
directory” on page 114.
3. If appropriate for your environment, install the user exit for collecting z/OS
SYSLOG data, as described in “Installing the user exit for collecting z/OS
SYSLOG data” on page 114.
4. If appropriate for your environment, configure the z/OS NetView message
provider for collecting NetView messages, as described in “Configuring the
z/OS NetView message provider for collecting NetView messages” on page
118.
Creating the Log Forwarder started task
You must create the started task for the IBM Common Data Provider for z Systems
Log Forwarder by copying the sample procedure GLAPROC in the hlq.SGLASAMP
library, and updating the copy.
Procedure
To create the started task, complete the following steps:
1. Copy the procedure GLAPROC in the hlq.SGLASAMP library to a user procedure
library.
Tip: You can rename this procedure according to your installation conventions.
When the name GLAPROC is used in the IBM Common Data Provider for z
Systems documentation, including in the messages, it means the Log Forwarder
started task.
2. Update your copy of the GLAPROC procedure, according to the comments in the
sample.
Update the following variables:
ENVDIR procedure variable
Specifies the directory where the Log Forwarder configuration files are
located. To indicate the variable, the option identifier -e precedes the
directory specification, as shown in the following example:
’-e /etc/IBM/zscala/V3R1’
The following directory is the default directory that is used if the
ENVDIR procedure variable is not specified:
/usr/lpp/IBM/zscala/V3R1/samples
Important: You must copy the Log Forwarder configuration files to this
ENVDIR directory, as described in “Copying the Log Forwarder
configuration files to the ENVDIR directory” on page 114.
GLABASE procedure variable
Specifies the directory where the startup.sh script is located.
The following directory is the default installation directory for the
startup.sh script:
/usr/lpp/IBM/zscala/V3R1/samples
Change the value if a different installation directory was used during
the SMP/E installation.
3. Verify that the user ID that is associated with the Log Forwarder started task
has the required authorities, as described in “Requirements for the Log
Forwarder user ID.”
4. Update your security software, such as the Resource Access Control Facility
(RACF), to permit the Log Forwarder started task to run in your environment.
Requirements for the Log Forwarder user ID:
The user ID that is associated with the Log Forwarder started task must have the
required authorities for file access and for issuing z/OS console messages.
The following information further describes the required authorities:
v “File access authority”
v “Authority to issue z/OS console messages”
Tip: The Log Forwarder user ID does not require any special MVS authority to run
the Log Forwarder.
File access authority
The Log Forwarder user ID must have the appropriate authority to access the Log
Forwarder program files, which include the installation files, the configuration
files, and the files in the working directory.
Installation file access
The Log Forwarder user ID must have read and execute permissions to
the Log Forwarder installation files in the UNIX System Services file
system.
Configuration file access
The Log Forwarder user ID must have read permission to the Log
Forwarder configuration files in the UNIX System Services file system.
Important: The user ID that configures the Log Forwarder must have
read/write permission to the configuration files.
Working directory access
The Log Forwarder user ID must have read and write permissions to the
Log Forwarder working directory, which is described in “Log Forwarder
properties configuration” on page 28.
Authority to issue z/OS console messages
The Log Forwarder user ID must have the authority to issue z/OS console
messages.
If you are using the Resource Access Control Facility (RACF) as your System
Authorization Facility (SAF) product, complete one of the following options to
assign this authority:
Option 1 if you are using RACF
You can use the GLARACF procedure in the SGLASAMP library to create a user
ID for the Log Forwarder started task (GLAPROC procedure) and associate
that user ID with the started task. The documentation that is provided in
the GLARACF sample includes more information, and the following steps
outline this process:
1. Copy the GLARACF procedure to a user job library.
2. To define a user ID and associate it with the Log Forwarder started task
(GLAPROC procedure), update your copy of the GLARACF procedure
according to the comments in the sample and the following
instructions:
v If the user ID exists, comment out the ADDUSER statement.
v If a user ID other than GLALGF is to be associated with the GLAPROC
procedure, change the USER value on the STDATA parameter.
3. Submit your updated copy of the GLARACF procedure.
Option 2 if you are using RACF
Complete the following steps:
1. In RACF, add the BPX.CONSOLE resource to the class FACILITY by using
the General Resource Profiles option in the RACF Services Option
Menu.
2. In the BPX.CONSOLE profile that was created (or updated) in the
preceding step, add the user ID that the Log Forwarder started task is
associated with, and assign read access to the user ID.
3. Issue the following command to activate your changes:
SETROPTS RACLIST(FACILITY) REFRESH
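If you prefer to define the profile with commands rather than the panels, the
following sketch shows equivalent RACF commands; GLALGF is used here as a
placeholder for your Log Forwarder user ID:
       RDEFINE FACILITY BPX.CONSOLE UACC(NONE)
       PERMIT BPX.CONSOLE CLASS(FACILITY) ID(GLALGF) ACCESS(READ)
       SETROPTS RACLIST(FACILITY) REFRESH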
Tips:
v The user ID that the GLARACF procedure creates is named GLALGF. The Log
Forwarder started task does not require the user ID to be GLALGF. This user ID is
provided only as a convenience.
v If the SAF product for your environment is not RACF, use the GLARACF sample
procedure and the SAF product documentation to create the appropriate
definitions in the SAF product.
Copying the Log Forwarder configuration files to the ENVDIR
directory
After you create the started task for the IBM Common Data Provider for z Systems
Log Forwarder, copy the Log Forwarder configuration files from the Configuration
Tool working directory to the directory that is specified by the ENVDIR procedure
variable in the Log Forwarder started task.
Procedure
Copy the .zlf.conf and .config.properties files from the Configuration Tool
working directory to the ENVDIR directory. The names of the files in the ENVDIR
directory must be zlf.conf and config.properties. For more information about
these files, see “Output from the Configuration Tool” on page 19.
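For example, assuming the Configuration Tool working directory is
/u/cdpadmin/cdpConfig and the ENVDIR directory is /etc/IBM/zscala/V3R1 (both
placeholder paths, as are the sysplex and system names in the file names), you
might issue the following z/OS UNIX commands:
       cp /u/cdpadmin/cdpConfig/PLEXA.SYS1.zlf.conf /etc/IBM/zscala/V3R1/zlf.conf
       cp /u/cdpadmin/cdpConfig/PLEXA.SYS1.config.properties /etc/IBM/zscala/V3R1/config.properties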
Installing the user exit for collecting z/OS SYSLOG data
If you configure a z/OS SYSLOG data stream for gathering z/OS SYSLOG data
from a user exit, you must install either the GLASYSG or GLAMDBG user exit. If
you configure only a z/OS SYSLOG from OPERLOG data stream for gathering
z/OS SYSLOG data, the installation of a user exit is not necessary.
About this task
The GLASYSG and GLAMDBG user exits, and other modules that are used by
these user exits, are provided with IBM Common Data Provider for z Systems and
are in the SGLALPA product library.
All modules in the SGLALPA library must be added to the system link pack area
(LPA). For more information about the LPA, see the z/OS MVS Initialization and
Tuning Guide.
The following modules are in the SGLALPA library:
v GLADSRAW (a program call module)
v GLAGDSDL (a program call module)
v GLAGLMSG (a program call module)
v GLAMDBG
v GLASYSG
v GLAUERQ (a program call module)
You must install the GLASYSG or GLAMDBG user exit on the appropriate MVS
installation exit. Table 14 indicates the MVS installation exit on which to install
each user exit and describes how to choose which user exit to install.
Both user exits allocate a data space with a minimum size of 100 MB and a
maximum size of 500 MB. The data space is used to store z/OS SYSLOG data for
retrieval by the Log Forwarder.
Table 14. User exits for collecting z/OS SYSLOG data, with associated MVS installation
exits and usage notes

GLASYSG
       MVS installation exit on which to install the user exit:
       CNZ_MSGTOSYSLOG
       Usage note: If your z/OS system is not running JES3 with the DLOG
       option enabled, install this user exit.

GLAMDBG
       MVS installation exit on which to install the user exit:
       CNZ_WTOMDBEXIT
       Usage note: If your z/OS system is running JES3 with the DLOG option
       enabled, install this user exit.
Procedure
To install the user exit, complete the following steps:
1. To add the load modules to an LPA, complete one of the following actions:

   Add the SGLALPA library to the pageable link pack area (PLPA) at system IPL
          Add the following statement to an LPALSTxx member, where
          zscala.v3r1 is the target library high-level qualifier that is used
          to install IBM Common Data Provider for z Systems, and volume is
          the volume identifier of the data set:
          zscala.v3r1.SGLALPA(volume)

   Add the individual modules in the SGLALPA library to the dynamic LPA after the system IPL
          Issue the following MVS system commands:
          SETPROG LPA,ADD,MODNAME=GLASYSG,DSNAME=zscala.v3r1.SGLALPA
          SETPROG LPA,ADD,MODNAME=GLAMDBG,DSNAME=zscala.v3r1.SGLALPA
          SETPROG LPA,ADD,MODNAME=GLADSRAW,DSNAME=zscala.v3r1.SGLALPA
          SETPROG LPA,ADD,MODNAME=GLAGDSDL,DSNAME=zscala.v3r1.SGLALPA
          SETPROG LPA,ADD,MODNAME=GLAGLMSG,DSNAME=zscala.v3r1.SGLALPA
          SETPROG LPA,ADD,MODNAME=GLAUERQ,DSNAME=zscala.v3r1.SGLALPA
2. To install the exit, complete one of the following actions:

   Install the user exit on an MVS installation exit at system IPL
          Add one of the following statements to a PROGxx member:
          v EXIT ADD EXITNAME(CNZ_MSGTOSYSLOG) MODNAME(GLASYSG)
          v EXIT ADD EXITNAME(CNZ_WTOMDBEXIT) MODNAME(GLAMDBG)

   Dynamically install the user exit after the system IPL
          Issue one of the following MVS commands:
          v SETPROG EXIT,ADD,EXITNAME=CNZ_MSGTOSYSLOG,MODNAME=GLASYSG
          v SETPROG EXIT,ADD,EXITNAME=CNZ_WTOMDBEXIT,MODNAME=GLAMDBG
manageUserExit utility for managing the installed user exit:
The GLASYSG and GLAMDBG user exits create system resources that might need
to be managed while they are in operation. The manageUserExit utility is a shell
script that can be used to manage the system resources. The utility is included in
the product samples directory in the hierarchical file system.
The following system resources might need to be managed:
v A data space, which is used to store z/OS SYSLOG data for retrieval by the Log
Forwarder.
v Program call modules, which are loaded by the user exit and made available to
other programs (such as the Log Forwarder and the manageUserExit utility) for
interacting with the data space.
manageUserExit.sh description
This utility manages the data space and program call modules that are controlled
by the user exit. For example, you can use the utility to complete the following
management actions:
v Refresh the data space.
v Refresh the program call modules.
v Delete the data space, unload the program call modules, and uninstall the user
exit from the MVS installation exit.
Important: Before you run the manageUserExit.sh utility, stop any instances of the
Log Forwarder that are gathering z/OS SYSLOG data. This action prevents the Log
Forwarder from trying to access or call a system resource that is being deleted. An
abend might occur if the Log Forwarder accesses a non-existent data space or calls
a non-existent program call module.
manageUserExit.sh details
Format
manageUserExit.sh -p[d] [environment_configuration_directory]
manageUserExit.sh -d[p] [environment_configuration_directory]
manageUserExit.sh -u [environment_configuration_directory]
Parameters
-d
Refreshes the data space by deleting and re-creating it.
For normal operations, refreshing the data space is not needed.
However, for example, if you are requested to refresh the data
space by IBM Software Support, use this parameter to delete and
re-create the data space. All z/OS SYSLOG data that is in the data
space before deletion is lost.
-p
Refreshes the program call modules by unloading and reloading
from the LPA.
Refreshing the program call modules might be necessary when
maintenance is applied. Updates to the modules in the SGLALPA
library must be reloaded by the user exit. Use this parameter to
unload the previously loaded program call modules and load the
new program call modules.
Tips:
1. Before you refresh the program call modules, the modules must
be loaded dynamically into the system LPA. If the program call
modules are currently in the dynamic LPA, the user exit must
be uninstalled, and the old program call modules must be
deleted from the dynamic LPA before the new modules can be
reloaded. The user exit must then be reinstalled on the MVS
installation exit.
2. If the application of maintenance requires a refresh of the
program call modules, the maintenance information specifies
that a refresh is necessary.
-u
Deletes the data space, unloads the program call modules, and
uninstalls the user exit.
Examples
manageUserExit.sh -pd /etc/IBM/zscala/V3R1
This command refreshes both the data space and program call
modules. In this example, the directory /etc/IBM/zscala/V3R1
contains the environment configuration file.
manageUserExit.sh -u
This command uses the ZLF_CONF environment variable to find
the directory that contains the environment configuration file. It
also deletes the data space, unloads the program call modules, and
uninstalls the user exit.
Exit values
0
Successful completion
-1
Did not complete successfully
Messages
The utility issues messages to standard output. The messages have the
prefix GLAK.
manageUserExit.sh usage notes
The following information describes some tips for using the manageUserExit.sh
utility:
v To run the manageUserExit.sh utility, you must specify at least one parameter.
v Specification of the environment configuration directory is optional. However, if
this directory is not specified, the ZLF_CONF environment variable must be set,
and its value must be the working directory that contains the zlf.conf file that
is used by the Log Forwarder.
For example, if the zlf.conf file is in /etc/IBM/zscala/V3R1, either the
environment configuration directory or the value of the ZLF_CONF environment
variable must be this directory.
v The -p and -d parameters cannot be used with the -u parameter.
v The utility requests operations by using a system common storage area. The
requested operation does not complete until the user exit is called by a system
console message. The requested operations are not run synchronously by the
utility.
v The utility can be run even if the user exit is not active or installed. The
requested operations are completed when the user exit is activated and is called
by a system console message.
v When the utility completes successfully, it indicates only that it made a request
of the user exit. A system console message is issued by the user exit when it
performs the requested operations.
Configuring the z/OS NetView message provider for collecting
NetView messages
If you configure a NetView Netlog data stream for gathering NetView for z/OS
message data, you must also configure the NetView message provider to monitor
and forward NetView for z/OS messages to the Log Forwarder. The NetView
message provider is defined in the REXX module GLANETV in the SGLACLST
data set.
About this task
The NetView message provider must be associated with a NetView autotask and
can be run as a long-running command to get NetView for z/OS messages. The
NetView autotask to which you associate the NetView message provider must
have the following permissions:
v Permission to access and edit the configuration directory for the NetView
message provider by using the queued sequential access method (QSAM).
v Permission to issue CZR messages by using the PIPE and CNMECZFS commands
v Permission to use the LISTVAR and PPI commands
Procedure
To prepare the NetView message provider for use, complete the following steps:
1. Ensure that the GLANETV module is placed in the DSICLD data set that is
defined in the NetView procedure. Also, ensure that the NetView autotask has
access to the GLANETV module.
2. In the CNMSTYLE member, specify the following information by using common
variables:

   Indication of whether to start the NetView message provider in cold or warm start mode
          How to specify: Specify either of the following values for the
          COMMON.GLANETV.START variable:
          v C for cold start mode
          v W for warm start mode, which is the default mode
          Example entry in CNMSTYLE member:
          COMMON.GLANETV.START = W

   Configuration directory for the NetView message provider
          How to specify: For the configuration directory, specify a
          partitioned data set (PDS) where the Log Forwarder can store some
          information to keep track of its progress in reading log data.
          Specify the data set as the value of the
          COMMON.GLANETV.CONFIG.DIR variable. The default value is
          USER.CLIST.
          Example entry in CNMSTYLE member:
          COMMON.GLANETV.CONFIG.DIR = USER.CLIST
Configuring the System Data Engine
If you want to gather System Management Facilities (SMF) data, you must
authorize the IBM Common Data Provider for z Systems System Data Engine with
the authorized program facility (APF), and configure the System Data Engine to
run either as a started task for streaming SMF data, or as a job for loading SMF
data in batch.
Before you begin
Before you configure the System Data Engine, the following policy definition tasks,
which are done in the Configuration Tool, must be complete:
1. In the Configuration Tool, create one or more policies that include one or more
data streams for SMF data.
In the Configuration Tool, when you click the Configure icon on a data stream
node for data that is gathered by the System Data Engine, the “Configure SDE
data stream” window is shown. “Data stream configuration for data gathered by
System Data Engine” on page 79 lists the configuration values that you can
update in this window.
2. After you configure the data streams for SMF data, click the SDE button, which
is in the Global Properties section of the Policy Profile Edit window, to set the
configuration values for your System Data Engine environment, as described in
“SDE properties: Defining your System Data Engine environment” on page 29.
3. “Output from the Configuration Tool” on page 19 describes the output from the
Configuration Tool, which includes the following System Data Engine file:
.sde file
Contains configuration information for the System Data Engine.
Procedure
To configure the System Data Engine, complete the following steps:
1. Authorize the System Data Engine with the authorized program facility (APF),
as described in “Authorizing the System Data Engine with APF.”
2. Complete the following steps, depending on whether you want to stream the
SMF data, or load the SMF data in batch:
   Stream SMF data
          1. Decide which method to use for collecting SMF data, as described
             in “Deciding which method to use for collecting SMF data” on
             page 121.
          2. Create the System Data Engine started task, as described in
             “Creating the System Data Engine started task for streaming SMF
             data” on page 121.
          3. If you want to collect SMF data from the SMF user exit, install
             the user exit, as described in “Installing the SMF user exit” on
             page 123.
          4. If you want to collect IMS data, you must write IMS records to
             SMF for processing by the System Data Engine. For more
             information, see “Writing IMS records to SMF for processing by
             the System Data Engine” on page 125.

   Load SMF data in batch
          Create the job for loading SMF data in batch, as described in
          “Creating the job for loading SMF data in batch” on page 127.
Tip: You must use the z/OS SYS1.PARMLIB member SMFPRMxx (or its equivalent)
to enable the collection of each SMF record type that you want to gather.
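For example, a hypothetical SMFPRMxx excerpt that enables record types 30, 70
through 79, and 110 might include the following statement; if you define any
subsystems with the SUBSYS parameter, the types must also be listed there, as
noted in “Installing the SMF user exit” on page 123:
       SYS(TYPE(30,70:79,110))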
Authorizing the System Data Engine with APF
For the System Data Engine to gather System Management Facilities (SMF) data,
the SHBOLOAD library must be authorized with the authorized program facility
(APF).
About this task
To authorize the SHBOLOAD library, a library name and volume ID must be in the list
of authorized libraries in the PROGxx member of the SYS1.PARMLIB library.
Procedure
Use one of the following methods to authorize the SHBOLOAD library:
v To include the SHBOLOAD library in APF at system IPL, add the following
statement to a PROGxx member:
APF ADD DSNAME(hlq.SHBOLOAD) VOLUME(volname)
v To dynamically add the SHBOLOAD library to APF after system IPL, issue the
following MVS command:
SETPROG APF,ADD,DSNAME=hlq.SHBOLOAD,VOLUME=volname
Deciding which method to use for collecting SMF data
IBM Common Data Provider for z Systems can collect System Management
Facilities (SMF) data from any one of the following three sources: an SMF
in-memory resource (by using the SMF real-time interface), the SMF user exit
HBOSMFEX, or the SMF log stream. You must decide which method you want to
use, and do the appropriate configuration for that method.
Before you begin
For more information about the SMF user exit, see “Installing the SMF user exit”
on page 123.
About this task
Review the following tips, and decide which method you want to use for
collecting SMF data:
v If SMF is running in log stream recording mode, collect SMF data from an SMF
in-memory resource by using the SMF real-time interface.
If you are running z/OS V2R1 or V2R2, APAR OA49263 must be applied to use
the SMF real-time interface.
If you cannot apply APAR OA49263 to z/OS V2R1 or V2R2, use the SMF user
exit to collect SMF data.
v If SMF is running in data set recording mode, consider changing the mode to
log stream recording mode and collecting SMF data from an SMF in-memory
resource by using the SMF real-time interface.
If you cannot run SMF in log stream recording mode, use the SMF user exit to
collect SMF data.
Creating the System Data Engine started task for streaming SMF
data
To have the IBM Common Data Provider for z Systems System Data Engine stream
SMF data to the Data Streamer, you must create the started task for the System
Data Engine by copying the sample procedure HBOSMF in the hlq.SHBOCNTL library,
and updating the copy.
Procedure
To create the started task, complete the following steps:
1. Copy the procedure HBOSMF in the hlq.SHBOCNTL library to a user procedure
library.
Tip: You can rename this procedure according to your installation conventions.
When the name HBOSMF is used in the IBM Common Data Provider for z
Systems documentation, including in the messages, it means the System Data
Engine started task.
2. Update the high-level qualifier to the one for your IBM Common Data Provider
for z Systems target libraries that were installed by using SMP/E.
3. If appropriate for your environment, update the interval value (in minutes or
seconds) for IBM_SDE_INTERVAL, which controls how often the System Data
Engine processes data.
At regular intervals, the System Data Engine queries the appropriate sources
for new data. For example, it queries one of the following sources:
v SMF in-memory resource
v Shared storage to which the SMF user exit writes
v SMF log stream
The default interval for this querying is 1 minute, and the minimum interval is
1 second. After each interval, the System Data Engine sends the new SMF
records to the Data Streamer.
This collect processing interval is set on the EVERY clause of the COLLECT
statement.
Guidelines for determining the interval value: Changing the interval value
can affect the resource consumption of IBM Common Data Provider for z
Systems. Either use the default value, which is 1 minute, or use the following
guidelines to help you determine an appropriate interval value:
v Use a large interval value to minimize processing overhead.
v Use a small interval value to minimize memory usage.
v The interval value must be small enough to produce data as often as it is
required by the subscriber.
v Use an interval value that is a factor of the total time in one day. Table 15
lists some example values.
v If you want to use an interval value that is equal to or greater than 60
seconds, specify that value as a whole number, and specify the time unit in
minutes. For example, if you want to set the interval value to 120 seconds,
instead set it to 2 minutes. Table 15 lists some example values.
Table 15. Example System Data Engine interval values that are a factor of the total time in
one day

Time unit      Example values
Seconds        1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27,
               30, 32, 36, 40, 45, 48, 50, 54
Minutes        1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15
4. Update the port value for IBM_UPDATE_TARGET to specify the TCP/IP port that is
configured for the Data Streamer.
Tip: For more information about the Data Streamer port, see “Configuring the
Data Streamer” on page 109.
5. Replace the value /u/userid/cdpConfig/Sample1.sde with the policy file path
and name for your environment.
6. Update the value for IBM_RESOURCE to specify only one of the following values
as the source from which SMF data is to be gathered.
v The in-memory resource name
v The keyword EXIT, which indicates that SMF data is to be gathered from the
SMF user exit HBOSMFEX.
v The SMF log stream name
Remember: If you want to collect SMF data from the SMF user exit, you must
install the user exit, as described in “Installing the SMF user exit” on page 123.
7. Verify that the user ID that is associated with the System Data Engine started
task has the required authorities, as described in “Requirements for the System
Data Engine user ID” on page 123.
Requirements for the System Data Engine user ID:
If you are collecting SMF data from an in-memory resource or log stream, the user
ID that is associated with the System Data Engine started task must have authority
to read the SMF in-memory resource or log stream. Also, if you are collecting SMF
data from a log stream, the user ID must have update access to the RACF profile
MVS.SWITCH.SMF in the OPERCMDS RACF class.
If you are collecting SMF data from the SMF user exit, there are no other
requirements for the user ID.
The following information further describes the required authorities:
Authority to read the SMF in-memory resource or log stream
For example, if you are using the Resource Access Control Facility (RACF)
as your System Authorization Facility (SAF) product, you must give the
System Data Engine user ID read authority to the profile that you set up to
secure your SMF in-memory resource or log stream. In the following
examples, IFASMF.resource represents the name of the SMF in-memory
resource or log stream that is being used to gather SMF records, and userid
represents the System Data Engine user ID.
Tip: IFASMF.resource is also described in step 6 on page 122 of “Creating
the System Data Engine started task for streaming SMF data” on page 121.
In-memory resource example
PERMIT IFA.IFASMF.resource CLASS(FACILITY) ACCESS(READ) ID(userid)
Log stream example
PERMIT IFASMF.resource CLASS(LOGSTRM) ACCESS(READ) ID(userid)
Update access to the RACF profile MVS.SWITCH.SMF in the OPERCMDS RACF class
(only if you are collecting SMF data from a log stream)
This authority is not required to process data from an SMF in-memory
resource.
Update access to the RACF profile MVS.SWITCH.SMF in the OPERCMDS RACF
class is required only if you are collecting SMF data from a log stream so
that the user ID can issue the MVS SWITCH SMF command. The System Data
Engine periodically issues the MVS SWITCH SMF command to verify that it is
accessing the most up-to-date data from the log stream. To grant the user
ID update access to this RACF profile, issue the following commands:
PERMIT MVS.SWITCH.SMF CLASS(OPERCMDS) ACCESS(UPDATE) ID(userid)
SETROPTS RACLIST(OPERCMDS) REFRESH
Installing the SMF user exit
You can configure the IBM Common Data Provider for z Systems System Data
Engine to collect System Management Facilities (SMF) data from the SMF user exit
HBOSMFEX, which is provided with IBM Common Data Provider for z Systems.
By using the SMF user exit, you can collect streaming SMF data independently of
whether SMF is running in log stream recording mode or data set recording mode.
Before you begin
Important: The SMF types that you define on the SYS parameter in z/OS
SYS1.PARMLIB member SMFPRMxx (or its equivalent) do not take effect if you also
have SUBSYS parameter definitions. Therefore, if you define any subsystems, you
must define the associated SMF types for each subsystem on the SUBSYS parameter.
About this task
If you want to use the user exit to collect SMF data, install the HBOSMFEX user
exit on the following MVS installation exits:
v IEFU83
v IEFU84
v IEFU85
For more information about MVS installation exits, see the z/OS MVS installation
exits documentation.
An MVS installation exit does not receive control for records when the writing of
the record is suppressed either because of a system failure or because of options
that were selected at IPL time or by using the SET SMF command.
The HBOSMFEX module is required by the HBOSMFEX user exit and is in the
SHBOLPA library.
All modules in the SHBOLPA library must be added to the system link pack area
(LPA). For more information about the LPA, see the z/OS MVS initialization and
tuning documentation.
Procedure
To install the SMF user exit HBOSMFEX, complete the following steps:
1. To add the load modules to an LPA, complete one of the following actions:
Add the SHBOLPA library to the pageable link pack area (PLPA) at system IPL
    Add the following statement to an LPALSTxx member, but replace hlq with the target library high-level qualifier that is used to install IBM Common Data Provider for z Systems, and replace volume with the volume identifier of the data set:
    hlq.SHBOLPA(volume)
Add the individual modules in the SHBOLPA library to the dynamic LPA after the system IPL
    Issue the following MVS system command:
    SETPROG LPA,ADD,MODNAME=HBOSMFEX,DSNAME=hlq.SHBOLPA
2. To install the exit, complete one of the following actions:
Install the user exit on an MVS installation exit at system IPL
    Add the following statements to a PROGxx member of library SYS1.PARMLIB:
    EXIT ADD EXITNAME(SYS.IEFU83) MODNAME(HBOSMFEX)
    EXIT ADD EXITNAME(SYS.IEFU84) MODNAME(HBOSMFEX)
    EXIT ADD EXITNAME(SYS.IEFU85) MODNAME(HBOSMFEX)
Dynamically install the user exit after the system IPL
    Issue the following MVS commands:
    v SETPROG EXIT,ADD,EXITNAME=SYS.IEFU83,MODNAME=HBOSMFEX
    v SETPROG EXIT,ADD,EXITNAME=SYS.IEFU84,MODNAME=HBOSMFEX
    v SETPROG EXIT,ADD,EXITNAME=SYS.IEFU85,MODNAME=HBOSMFEX
Troubleshooting tips:
v To display the status of the SMF user exit, use the following commands:
– D PROG,EXIT,EXITNAME=SYS.IEFU83
– D PROG,EXIT,EXITNAME=SYS.IEFU84
– D PROG,EXIT,EXITNAME=SYS.IEFU85
v If you need to uninstall the user exit, see “Uninstalling the SMF user exit.”
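v Tip: To verify that the HBOSMFEX module is present in the LPA, you can use the standard MVS display command (a general z/OS check, shown here for convenience rather than taken from this guide):
  D PROG,LPA,MODNAME=HBOSMFEX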
Uninstalling the SMF user exit:
To uninstall the SMF user exit HBOSMFEX from a system, complete this
procedure.
Procedure
1. Remove the SMF user exit from the MVS installation exits by issuing the
following MVS commands:
SETPROG EXIT,DELETE,EXITNAME=SYS.IEFU83,MODNAME=HBOSMFEX
SETPROG EXIT,DELETE,EXITNAME=SYS.IEFU84,MODNAME=HBOSMFEX
SETPROG EXIT,DELETE,EXITNAME=SYS.IEFU85,MODNAME=HBOSMFEX
2. Remove the SMF user exit from the system link pack area (LPA) by issuing the
following MVS command:
SETPROG LPA,DELETE,MODNAME=HBOSMFEX,FORCE=YES
3. Stop the System Data Engine.
4. Stop the Data Streamer.
5. To free the 2G above-the-bar storage, and other storage spaces that are used by
the SMF user exit, run the sample job HBODSPCE.
Writing IMS records to SMF for processing by the System Data
Engine
To collect IBM Information Management System (IMS) log data, you must write
IMS records to System Management Facilities (SMF) for processing by the System
Data Engine.
Before you begin
Before you complete the steps in this procedure, you must complete the
configuration steps for SMF data collection. For example, in the System Data
Engine started task, verify that the COLLECT statement specifies the correct source of
the SMF records (for example, the SMF in-memory resource or the SMF user exit).
Also, if you are collecting SMF data by using the SMF user exit, install the SMF
user exit, as described in “Installing the SMF user exit” on page 123.
About this task
IBM Common Data Provider for z Systems provides the following methods for
writing IMS log records to SMF, depending on the type of IMS log records that
you want to collect:
IMS Log Write (LOGWRT) user exit
For writing all IMS log records to SMF, except for IMS Performance
Analyzer Transaction Index records.
HBOPIMS utility
For writing IMS Performance Analyzer Transaction Index records to SMF.
For example, if you want to collect all IMS log records, including the IMS
Performance Analyzer Transaction Index records, you must use both the IMS
LOGWRT user exit and the HBOPIMS utility.
Procedure
To write IMS records to SMF for processing by the System Data Engine, complete
the following configuration steps:
1. Update the z/OS SYS1.PARMLIB member SMFPRMxx (or its equivalent) to enable
the collection of SMF record type 127.
2. If you are collecting SMF data from the SMF in-memory resource, create a new,
or update an existing, SMF in-memory resource to include SMF record type
127.
Important: Do not include SMF record type 127 in any SMF log stream
definitions.
3. Depending on the type of IMS log records that you want to collect, choose one
or both of the following methods for writing IMS log records to SMF, and
complete the associated configuration steps:
IMS LOGWRT user exit
    For writing all IMS log records, except for IMS Performance Analyzer Transaction Index records, install the IMS LOGWRT user exit, as described in “Installing the IMS LOGWRT user exit.”
HBOPIMS utility
    For writing IMS Performance Analyzer Transaction Index records, run the HBOPIMS utility, as described in “Running the HBOPIMS utility” on page 127.
Installing the IMS LOGWRT user exit:
IBM Common Data Provider for z Systems provides the IMS LOGWRT user exit to
write IMS log records to SMF. The System Data Engine reads the IMS log records
either from an SMF in-memory resource or from storage that is created by the SMF
user exit.
Before you begin
The IMS LOGWRT user exit supports IMS Version 13 or later.
Procedure
If you want to use the IMS LOGWRT user exit, choose one of the following
installation options, depending on your IMS configuration.
To install the user exit, complete the steps that apply for your installation option.
IMS multi-user exit
    1. Add the hlq.SHBOLOAD data set to the STEPLIB concatenation of the IMS Control Region.
    2. Add the following LOGWRT user exit definition to the IMS PROCLIB member DFSDFxxx:
       EXITDEF=(TYPE=LOGWRT,EXITS=(HBOFLGX0))
    3. After the IMS Control Region JCL is updated, recycle the IMS system to activate the LOGWRT user exit.
IMS tools
    If IMS tools are implemented for the IMS environment, install the LOGWRT user exit by using the distributed module HBOFLGX0 that is in the SHBOLOAD library. This module is specified as EXITNAME(HBOFLGX0). IMS Tools does not require the load library to be inserted into the IMS Control Region STEPLIB JCL.
    1. Add an IMS tools user exit definition to the IMS PROCLIB member GLXEXIT0, as shown in the following example:
       EXITDEF(TYPE(LOGR) EXITNAME(HBOFLGX0) LOADLIB(hlq.SHBOLOAD))
    2. To activate the LOGWRT user exit, recycle the IMS system.
Stand-alone exit
    The SHBOLOAD library contains member DFSFLGX0, which is the member name that IMS searches for during startup. The DFSFLGX0 module loads HBOFLGX0, which writes IMS log records to SMF.
    1. Add the SHBOLOAD library to the STEPLIB concatenation of the IMS Control Region, and verify that the DFSFLGX0 module in this library is concatenated before any other module of the same name.
    2. After the IMS Control Region JCL is updated, recycle the IMS system to activate the LOGWRT user exit.
When the LOGWRT user exit initializes successfully, the following message is
written to the z/OS console:
HBO8101I CDP IMS LOGWRT EXIT ACTIVATED FOR IMSID=iiii
Running the HBOPIMS utility:
IMS Performance Analyzer batch reporting can create specialized extract files for
IMS Transaction Index and IMS Connect Transaction Index records. IBM Common
Data Provider for z Systems provides the HBOPIMS utility for reading IMS
Transaction Index and IMS Connect Transaction Index records from the extract files
and writing the records to SMF for processing by the System Data Engine.
Procedure
To write the IMS Transaction Index records (x'CA01') or IMS Connect Transaction
Index records (x'CA20') to SMF record type 127, subtype 1000, customize and run
the HBOJIMS JCL in the hlq.SHBOCNTL data set on the system where the System
Data Engine is running. The comments in the JCL job include instructions for
customizing and running the job.
Creating the job for loading SMF data in batch
To run the IBM Common Data Provider for z Systems System Data Engine in batch
mode so that it writes its output to a file, rather than streaming it to the Data
Streamer, you must create the job for loading SMF data in batch. You can create
this job by using the sample job HBOJBCOL in the hlq.SHBOCNTL library, and
updating the copy.
Procedure
To create the job, complete the following steps:
1. Copy the job HBOJBCOL in the hlq.SHBOCNTL library to a user job library.
2. Update the job card according to your site standards.
3. Update the following STEPLIB DD statement to refer to the hlq.SHBOLOAD data
set:
//STEPLIB  DD DISP=SHR,DSN=HBOvrm.LOAD
4. For each SMF record type that you want to collect, update the following control
statements, which are provided by the HBOIN DD statement, and change the
variable nnn to the appropriate SMF record type value, for example, 030, 080,
or 110.
//HBOIN    DD DISP=SHR,DSN=HBOvrm.SHBODEFS(HBOCCSV)
//         DD DISP=SHR,DSN=HBOvrm.SHBODEFS(HBOLLSMF)
//         DD DISP=SHR,DSN=HBOvrm.SHBODEFS(HBORSnnn)
//         DD DISP=SHR,DSN=HBOvrm.SHBODEFS(HBOUSnnn)
Most of the control statements that are required to run the System Data Engine
are provided in the hlq.SHBODEFS data set and must not be changed.
Each member in the HBOIN DD concatenation specifies a task that the System
Data Engine must do. The last statement in the HBOIN DD concatenation must be
a COLLECT control statement, which initiates the processing of the input data by
the System Data Engine.
The following example shows the control statements for SMF record types 80
and SMF_110_1_KPI:
//* CONTROL STATEMENTS
//*
//HBOIN DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBOCCSV)
//      DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBOLLSMF)
//      DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBOTCIFI)   for type 110_1_KPI
//      DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBORS110)   for type 110_1_KPI
//      DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBOU110I)   for type 110_1_KPI
//      DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBORS080)   for type 80
//      DD DISP=SHR,DSN=hlq.mlq.SHBODEFS(HBOUS080)   for type 80
5. For each SMF record type that you specify for collection, add a DD statement,
such as the following statement, to receive the output, and change the variable
nnn to the appropriate SMF record type value, for example, 030, 080, or 110.
//SMFnnn   DD SYSOUT=*,RECFM=V,LRECL=32756
The following example shows the DD statements for receiving the output for
SMF record types 80 and SMF_110_1_KPI:
//* Sample COLLECT statement for processing log stream data
//*
//         DD *
  COLLECT SMF
  BUFFER SIZE 1000 M;
/*
//HBOLOG   DD DISP=SHR,DSN=stored.smfdata
//HBOOUT   DD SYSOUT=*
//HBODUMP  DD SYSOUT=*
//SMF080   DD SYSOUT=*                    for type 80
//SMF110   DD SYSOUT=*                    for type 110_1_KPI
//SMF11001 DD SYSOUT=*                    for type 110_1_KPI
//SMF110FC DD SYSOUT=*                    for type 110_1_KPI
//SMF110TX DD SYSOUT=*                    for type 110_1_KPI
//SMF1101I DD SYSOUT=*                    for type 110_1_KPI
Verifying the search order for the TCP/IP resolver configuration file
Before you start IBM Common Data Provider for z Systems, verify that the z/OS
environment is set up correctly so that IBM Common Data Provider for z Systems
can access the TCP/IP resolver configuration file.
About this task
The IBM Common Data Provider for z Systems Data Streamer and Log Forwarder
are z/OS UNIX System Services programs. They use TCP/IP functions that require
access to the TCP/IP resolver configuration file. This access is provided by using a
resolver search order. The resolver search order for z/OS UNIX System Services
programs is documented in the topic about resolver configuration files in the z/OS
Communications Server: IP Configuration Guide.
The following list summarizes the resolver search order:
1. GLOBALTCPIPDATA statement
2. The RESOLVER_CONFIG environment variable in the Data Streamer procedure
or job and in the Log Forwarder properties (which are part of the global
properties that you can define for data streams in a policy).
Tip: For information about this environment variable configuration, see the
following topics:
v “Configuring the Data Streamer” on page 109
v “Log Forwarder properties configuration” on page 28
3. /etc/resolv.conf file
4. SYSTCPD DD statement in the Log Forwarder started task
5. userid.TCPIP.DATA, where userid is the user ID that is associated with the Log
Forwarder started task
6. SYS1.TCPPARMS(TCPDATA)
7. DEFAULTTCPIPDATA
8. TCPIP.TCPIP.DATA
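For example, to use the RESOLVER_CONFIG mechanism (item 2 in the search order), the environment variable can point to a resolver data set. The data set name in the following sketch is an assumption; use the resolver data set that your installation maintains:
export RESOLVER_CONFIG="//'SYS1.TCPPARMS(TCPDATA)'"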
Procedure
Verify that the resolver configuration file is available to the Data Streamer and the
Log Forwarder by using one of the search order mechanisms.
Operating Common Data Provider for z Systems
To operate IBM Common Data Provider for z Systems, you must know how to run
(for example, start, stop, or update) key components, such as the data gatherer
components (Log Forwarder and System Data Engine), the Data Streamer, and the
Data Receiver.
Before you begin
Before you start IBM Common Data Provider for z Systems, verify that the z/OS
environment is set up correctly for IBM Common Data Provider for z Systems to
do the following tasks:
v Access the TCP/IP resolver configuration file.
v Resolve host names.
Search order for the TCP/IP resolver configuration file
For more information, see “Verifying the search order for the TCP/IP
resolver configuration file” on page 129.
Host name resolution
To operate, IBM Common Data Provider for z Systems must determine the
fully qualified domain name (FQDN) of the system on which it is running.
Therefore, activate the networking and name resolution services that are
configured in the system for use by IBM Common Data Provider for z
Systems before you start IBM Common Data Provider for z Systems.
About this task
The following lists indicate the best practice for the order in which to start or stop
the components:
Order in which to start the components
1. Start the data receiving components, such as the Data Receiver or
Logstash.
2. Start the Data Streamer.
3. Start the data gatherer components, such as the Log Forwarder and
System Data Engine.
Order in which to stop the components
1. Stop the data gatherer components, such as the Log Forwarder and
System Data Engine.
2. Stop the Data Streamer.
Running the Data Receiver
To start the IBM Common Data Provider for z Systems Data Receiver, you run a
Java command with multiple parameters. A best practice is to create a shell or
batch file for starting the Data Receiver.
Before you begin
To run the Data Receiver, Java Runtime Environment (JRE) 8 must be installed.
When it runs, the Data Receiver uses the configuration values of the Data Receiver
properties in the cdpdr.properties file in the Data Receiver working directory
(CDPDR_HOME directory). For more information about these configuration values, see
“Updating the Data Receiver properties” on page 106.
About this task
When you run the Data Receiver, you can override the values in the
cdpdr.properties file by using the following command-line parameters:
-p port
Overrides the port on which the Data Receiver listens for data from the
Data Streamer.
Example
-p 8888
-c cycle
Overrides the number of output files that can simultaneously exist in the
Data Receiver output directory (CDPDR_PATH directory).
Example
-c 5
-s ssl
Overrides the specification of whether to use the Transport Layer Security
(TLS) protocol for Data Receiver communication with the Data Streamer.
Example
-s y
-t trace
Overrides the specification of whether to activate tracing for the Data
Receiver.
Example
-t y
Procedure
To start the Data Receiver, run the following command:
java -jar -Dfile.encoding=UTF-8 DataReceiver.jar
The following example shows how you can use command-line parameters to
override the values of the configuration properties in the cdpdr.properties file:
java -jar -Dfile.encoding=UTF-8 DataReceiver.jar -p 6767 -c 10
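As noted in “Before you begin,” a best practice is to wrap this command in a shell or batch file. The following minimal shell sketch uses assumed values for the working directory and parameters; substitute the values for your environment:
#!/bin/sh
# Start the Data Receiver from its working directory (path is an example)
export CDPDR_HOME=/u/cdpdr
cd "$CDPDR_HOME"
java -jar -Dfile.encoding=UTF-8 DataReceiver.jar -p 6767 -c 10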
Running the Data Streamer
To start the IBM Common Data Provider for z Systems Data Streamer, you use the
Data Streamer started task. When a policy is updated, you must stop and restart
the Data Streamer to make the updated definitions take effect.
Before you begin
Create the Data Streamer started task, as described in “Configuring the Data
Streamer” on page 109.
About this task
You use z/OS console commands to control the operation of the Data Streamer
and to view information about the current policy.
Troubleshooting tip: After the Data Streamer is started, it should not stop until
you stop it. If it does stop without your stopping it explicitly, review the Data
Streamer job log output for possible errors.
Procedure
To run the Data Streamer, issue the following console commands, where procname
represents the name of the started task (such as HBODSPRO).
Start the Data Streamer
    START procname
Stop the Data Streamer
    STOP procname
View information about the current policy
    MODIFY procname,APPL=DISPLAY,POLICY
    The following message is sample output from the command:
    HBO6076I The current policy is /usr/lpp/IBM/cdpz/v1r1m0/UI/LIB/Sample.policy.
Running the Log Forwarder
To start the IBM Common Data Provider for z Systems Log Forwarder, you use the
Log Forwarder started task. If you are collecting NetView for z/OS message data,
the NetView message provider must also be active. The NetView message provider
is started as a started task by using the REXX module GLANETV.
Before you begin
Create the Log Forwarder started task, as described in “Creating the Log
Forwarder started task” on page 112.
If you configure a NetView Netlog data stream for gathering NetView for z/OS
message data, also configure the NetView message provider to monitor and
forward NetView for z/OS messages to the Log Forwarder, as described in
“Configuring the z/OS NetView message provider for collecting NetView
messages” on page 118. You can start the REXX module GLANETV from the
command line of an existing NetView user ID, or create a new NetView user ID to
support the running of this REXX module. Always start the NetView message
provider after the Log Forwarder is started for the first time.
If you configure a z/OS SYSLOG data stream for gathering z/OS SYSLOG data
from a user exit, you must install either the GLASYSG or GLAMDBG user exit, as
described in “Installing the user exit for collecting z/OS SYSLOG data” on page
114. The GLASYSG and GLAMDBG user exits create system resources that might
need to be managed while they are in operation. The manageUserExit utility is a
shell script that can be used to manage the system resources. For more information
about this utility, see “manageUserExit utility for managing the installed user exit”
on page 116.
About this task
You use z/OS console commands to control the operation of the Log Forwarder,
including to start, stop, or view status or configuration information for Log
Forwarder data streams.
For more information about Log Forwarder data streams, including the correlation
between the sources from which the Log Forwarder gathers data and the data
streams that can be defined for those sources, see “Data stream configuration for
data gathered by Log Forwarder” on page 49.
Troubleshooting tip: After the Log Forwarder is started, it should not stop until
you stop it. If it does stop without your stopping it explicitly, review the Log
Forwarder job log output for possible errors.
Procedure
1. To run the Log Forwarder, issue the following console commands, where
procname represents the name of the started task (such as GLAPROC).
Start the Log Forwarder
    Issue one of the following console commands:
    Warm start
        START procname
        A warm start resumes data collection where it previously stopped.
    Cold start
        START procname,OPT=-C
        A cold start starts data collection anew. Any operational data that was written while the Log Forwarder was stopped is not collected.
    Tip: If the Log Forwarder is shut down for more than a few minutes, you might want to use cold start mode to avoid having to wait for the Log Forwarder to collect accumulated data.
Stop the Log Forwarder
    STOP procname
View the status of all known Log Forwarder data streams
    MODIFY procname,APPL=DISPLAY,GATHERER,LIST
Start, stop, or view the status or configuration information for an individual data stream
    Table 16 on page 135 lists the commands, which vary depending on the source of the data stream.
Table 16. z/OS console commands for starting, stopping, or viewing status or configuration information for individual
Log Forwarder data streams
Job log
Start the data stream
MODIFY procname,APPL=START,GATHERER,JOBNAME=jobname,DDNAME=ddname
Stop the data stream
MODIFY procname,APPL=STOP,GATHERER,JOBNAME=jobname,DDNAME=ddname
View the status of the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,JOBNAME=jobname,DDNAME=ddname
View the configuration information for the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,JOBNAME=jobname,DDNAME=ddname
Usage note: If you used wildcard characters in the job name when you defined the data
stream, the values of the JOBNAME and DDNAME parameters in these commands must
reference the specific instance of the job log data stream. For example, you must specify
JOB0011 or JOB0021 rather than JOB*1.
z/OS UNIX log file
Start the data stream
MODIFY procname,APPL=START,GATHERER,UNIXFILEPATH=’UNIXfilepath’
Stop the data stream
MODIFY procname,APPL=STOP,GATHERER,UNIXFILEPATH=’UNIXfilepath’
View the status of the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,UNIXFILEPATH=’UNIXfilepath’
View the configuration information for the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,UNIXFILEPATH=’UNIXfilepath’
Usage note: To prevent an error message, the UNIX file path must be enclosed in quotation
marks.
Entry-sequenced VSAM cluster
Start the data stream
MODIFY procname,APPL=START,GATHERER,DATASET=dataset
Stop the data stream
MODIFY procname,APPL=STOP,GATHERER,DATASET=dataset
View the status of the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,DATASET=dataset
View the configuration information for the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,DATASET=dataset
Usage note: dataset represents the name of the data set.
z/OS SYSLOG
Start the data stream
v From the user exit:
MODIFY procname,APPL=START,GATHERER,SYSLOG
v From OPERLOG:
MODIFY procname,APPL=START,GATHERER,OPERLOG
Stop the data stream
v From the user exit:
MODIFY procname,APPL=STOP,GATHERER,SYSLOG
v From OPERLOG:
MODIFY procname,APPL=STOP,GATHERER,OPERLOG
View the status of the data stream
v From the user exit:
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,SYSLOG
v From OPERLOG:
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,OPERLOG
View the configuration information for the data stream
v From the user exit:
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,SYSLOG
v From OPERLOG:
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,OPERLOG
IBM Tivoli NetView for z/OS messages
Start the data stream
MODIFY procname,APPL=START,GATHERER,DOMAIN=domain
Stop the data stream
MODIFY procname,APPL=STOP,GATHERER,DOMAIN=domain
View the status of the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,DOMAIN=domain
View the configuration information for the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,DOMAIN=domain
Usage note: domain represents the name of the NetView domain.
IBM WebSphere Application Server for z/OS HPEL log
Start the data stream
MODIFY procname,APPL=START,GATHERER,HPELDIRECTORY=’hpeldirectory’
Stop the data stream
MODIFY procname,APPL=STOP,GATHERER,HPELDIRECTORY=’hpeldirectory’
View the status of the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,STATUS,HPELDIRECTORY=’hpeldirectory’
View the configuration information for the data stream
MODIFY procname,APPL=DISPLAY,GATHERER,CONFIG,HPELDIRECTORY=’hpeldirectory’
Usage note: hpeldirectory represents the High Performance Extensible Logging (HPEL) log
directory. To prevent an error message, the directory must be enclosed in quotation marks.
2. To run the NetView message provider, complete the following actions as
appropriate.
Start the NetView message provider
    Specify either C (cold start) or W (warm start) as the value for the COMMON.GLANETV.START variable in the CNMSTYLE member, as shown in the following example:
    COMMON.GLANETV.START = W
Stop the NetView message provider
    Change the value of the GLANETV.STOP variable in the CGED panel to YES.
Running the System Data Engine
To start the IBM Common Data Provider for z Systems System Data Engine to
have it stream SMF data to the Data Streamer, you use the System Data Engine
started task.
Before you begin
Create the System Data Engine started task, as described in “Creating the System
Data Engine started task for streaming SMF data” on page 121.
About this task
You use z/OS console commands to query the status of the System Data Engine
and control its operation.
Troubleshooting tip: After the System Data Engine is started, it should not stop
until you stop it. If it does stop without your stopping it explicitly, review the
System Data Engine job log output for possible errors.
Procedure
To run the System Data Engine, issue the following console commands, where
procname represents the name of the started task (such as HBOSMF).
Start the System Data Engine
    START procname
Stop the System Data Engine
    STOP procname
View System Data Engine status
    MODIFY procname,DISPLAY STATUS
    The status of the System Data Engine is written to the System Data Engine HBOOUT file.
Implementing the Open Streaming API for sending user
application data to the Data Streamer
The IBM Common Data Provider for z Systems Open Streaming API provides an
efficient way to gather operational data from your own applications by enabling
your applications to be data gatherers. You can use the API to define your own
data streams for sending your application data to the Data Streamer and streaming
it to analytics platforms.
About this task
You must provide a stream definition for each type of user application data
(analogous to types such as SMF record type 30 or z/OS SYSLOG) from which you
want to stream operational data to the Data Streamer.
Your stream definitions are used to populate the Configuration Tool with the
possible data streams that you can configure in a policy for the user application
data. For example, in the Policy Profile Edit window of the Configuration Tool,
when you click the Add Data Stream icon to add a data stream to your policy, the “Select data stream” window is shown with a list of
categorized data streams. You can expand the categories to view the possible data
streams that you can define for the policy. After you provide your stream
definitions, your user application data streams are included in this categorized list.
Defining data streams for the user application data
To create your stream definitions, you must create a JavaScript Object Notation
(JSON) file with the file name extension .streams.json.
Before you begin
The data in your data streams must be organized into records that have a
consistent structure and format. A single data packet can contain one or more of
these records.
About this task
Multiple streams can be defined in the same .streams.json file. The .streams.json
file must be in the Configuration Tool working directory, which is described in
“Setting up a working directory for the Configuration Tool” on page 15.
Each stream definition includes metadata that IBM Common Data Provider for z
Systems uses to map incoming data to its associated data stream.
Procedure
Create your stream definitions by using the example in “Stream definition
example” on page 140.
Stream definition example
An example of a stream definition for a type of user application data is shown,
with the names and descriptions of the JavaScript Object Notation (JSON) objects
that must be defined.
“JSON objects to define” lists the names and descriptions of the JSON objects, with
corresponding code excerpts from the complete stream definition example in
“Complete stream definition example” on page 142.
JSON objects to define
For the stream definition in “Complete stream definition example” on page 142,
you must define the following JSON objects:
groupings
In the Configuration Tool, data streams are organized in groups. The
groupings object specifies how your data stream is organized in the groups
that are listed in the Configuration Tool. You can add your data stream to
existing groups, or you can add new groups for your data stream.
In this example, YourDataStreamGroup is the higher level group in the
hierarchy, YourDataStreamSubgroup is the lower level group, and
YourDataStream is the name of the data stream.
"groupings": {
"YourDataStreamGroup": {
"YourDataStreamSubgroup": [
"YourDataStream"
]
}
},
definitions
For each data stream that you list in the groupings object, you must specify
one definitions object, which contains the definition of the data stream.
"definitions": [
{
"category": "zLF",
"name": "YourDataStream",
"tags": [
"YourTag1",
"YourTag2"
],
"type": "YourDataStream",
"parms": [
{
"displayName": "Data Source Name",
"name": "dataSourceName",
"edit": "Required",
"description": "Name of data source sent to subscribers",
"unique": true
},
{
"displayName": "Data Source Type",
"name": "dataSourceType",
"defaultValue": "YourDataStream",
"edit": "Protected",
"description": "Type of data source sent to subscribers"
},
{
"displayName": "File Path",
"name": "filePath",
"edit": "Required",
"description": "Virtual file path of log",
"unique": true
}
]
}
]
category
The value of the category object must be "zLF".
"category": "zLF",
name
The value of the name object must be the same as the data stream
name that is specified in the groupings object.
"name": "YourDataStream",
tags
As the value of the tags object, you can specify tags to remind you
of the source or format of the data in the data stream. Any tags
that you specify are shown in the Configuration Tool on the data
stream and transform nodes.
If the data is being sent in split format, specify the tag "Split".
If the data is in CSV format, specify the tag "CSV".
"tags": [
"YourTag1",
"YourTag2"
],
type
The value of the type object must be the same as the value of the
name object.
"type": "YourDataStream",
parms
    For each data stream definition, you must specify the following three parameters:
"Data Source Name"
Specify the object exactly as shown in this example.
"Data Source Type"
Specify the object as shown in this example, except for the
defaultValue object, specify the same data stream name
that you used in the groupings object. This name in the
defaultValue object is sent to subscribers in the metadata
for each packet.
"File Path"
Specify the object exactly as shown in this example.
"parms": [
{
"displayName": "Data Source Name",
"name": "dataSourceName",
"edit": "Required",
"description": "Name of data source sent to subscribers",
"unique": true
},
{
"displayName": "Data Source Type",
"name": "dataSourceType",
"defaultValue": "YourDataStream",
"edit": "Protected",
"description": "Type of data source sent to subscribers"
},
{
"displayName": "File Path",
"name": "filePath",
"edit": "Required",
"description": "Virtual file path of log",
"unique": true
}
]
Complete stream definition example
{
"groupings": {
"YourDataStreamGroup": {
"YourDataStreamSubgroup": [
"YourDataStream"
]
}
},
"definitions": [
{
"category": "zLF",
"name": "YourDataStream",
"tags": [
"YourTag1",
"YourTag2"
],
"type": "YourDataStream",
"parms": [
{
"displayName": "Data Source Name",
"name": "dataSourceName",
"edit": "Required",
"description": "Name of data source sent to subscribers",
"unique": true
},
{
"displayName": "Data Source Type",
"name": "dataSourceType",
"defaultValue": "YourDataStream",
"edit": "Protected",
"description": "Type of data source sent to subscribers"
},
{
"displayName": "File Path",
"name": "filePath",
"edit": "Required",
"description": "Virtual file path of log",
"unique": true
}
]
}
]
}
Sending the user application data to the Data Streamer
For sending your application data to the Data Streamer, IBM Common Data
Provider for z Systems provides the Data Transfer Protocol, and Java and REXX
APIs that implement the Data Transfer Protocol. When the Data Streamer receives
a data packet, it processes and sends the data to subscribers, based on the policy
that you define in the Configuration Tool.
About this task
The Data Streamer has an open TCP/IP port on which it accepts connections. It
accepts connections only from data gatherers that are running on the same system,
and using the same TCP/IP stack.
Tip: For more information about the Data Streamer port, see “Data Streamer port
definition” on page 11.
Data Transfer Protocol
The Data Transfer Protocol is used to transfer data among IBM Common Data
Provider for z Systems components. It uses a binary, self-describing format that is
delivered over TCP/IP.
Data that is sent by using the Data Transfer Protocol can be split or unsplit.
Split data
Split data is divided into records and is sent as an ordered list of
individual text strings. Each individual text string represents an individual
record from the data source.
Unsplit data
Unsplit data is sent as a single text string and is not divided into records.
Header information for data that is sent by using the Data
Transfer Protocol
The transmission of data is preceded by the following information, in the following
order:
1. A header that has a length of 96 bytes. Headers are listed and described in
Table 17.
2. One of the following payload structures, which is described in the header:
v “Unsplit payload” on page 144
v “Split payload” on page 144
Table 17. Headers for data that is sent by using the Data Transfer Protocol
Endianness (4 bytes, binary)
    Value: 0x12345678
Header encoding (8 bytes, text)
    The name of an encoding to use for all other text in the header, and for metadata in the payload. The encoding must be supported by Java. The name should be padded with blanks to 8 characters and encoded in UTF-8.
Eye catcher (8 bytes, text)
    HBOCDP (encoded in the header encoding)
Sender identifier (8 bytes, text)
    A unique identifier for the sender (encoded in the header encoding)
Version (4 bytes, binary)
    0x00000001
Reserved (8 bytes)
Payload type (4 bytes, binary)
    v For unsplit data, 0x00000001
    v For split data, 0x00000002
Payload length (4 bytes, binary)
    The number of bytes in the payload. The maximum value is 2000000000.
Reserved (48 bytes)
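To make the header layout concrete, the following Java sketch builds a 96-byte header with ByteBuffer. It is an illustration only: the class name, the choice of UTF-8 as the header encoding, and the zeroed reserved fields are assumptions beyond what Table 17 states.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DtpHeaderSketch {
    // Builds the 96-byte Data Transfer Protocol header described in Table 17.
    // Assumes all text fields are 8 characters or fewer before padding.
    static byte[] build(String senderId, int payloadType, int payloadLength) {
        ByteBuffer b = ByteBuffer.allocate(96);                  // header is 96 bytes
        b.putInt(0x12345678);                                    // endianness marker
        b.put(pad("UTF-8").getBytes(StandardCharsets.UTF_8));    // header encoding, blank-padded to 8
        b.put(pad("HBOCDP").getBytes(StandardCharsets.UTF_8));   // eye catcher
        b.put(pad(senderId).getBytes(StandardCharsets.UTF_8));   // sender identifier
        b.putInt(0x00000001);                                    // version
        b.position(b.position() + 8);                            // 8 reserved bytes
        b.putInt(payloadType);                                   // 1 = unsplit, 2 = split
        b.putInt(payloadLength);                                 // payload length in bytes
        return b.array();                                        // final 48 bytes remain zero (reserved)
    }
    static String pad(String s) {
        return String.format("%-8s", s);                         // blank-pad to 8 characters
    }
}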
Unsplit payload
To transmit unsplit data, use the payload format that is shown in Table 18.
Table 18. Unsplit payload format
Number of metadata values (4 bytes, binary)
Metadata keyword and value lengths and offsets (16 bytes times the number of metadata values, binary)
    For each metadata value, the following information is included:
    v 4 bytes, which is the offset from the beginning of the payload to the metadata keyword
    v 4 bytes, which is the length of the metadata keyword
    v 4 bytes, which is the offset from the beginning of the payload to the metadata value
    v 4 bytes, which is the length of the metadata value
Offset from the beginning of the payload to the data (4 bytes, binary)
Length of data (4 bytes, binary)
Metadata keywords and values (variable length, text)
    Tip: The lengths and offsets of these keywords and values are previously described in this payload. For more information about the metadata keywords and values, see Table 20 on page 145.
The data that is being transmitted (variable length, text)
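As a worked illustration with assumed numbers: for an unsplit payload that carries two metadata values, the fixed binary fields occupy 4 + (16 x 2) + 4 + 4 = 44 bytes, so the offset of the first metadata keyword would typically be 44.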
Split payload
To transmit split data, use the payload format that is shown in Table 19.
Table 19. Split payload format
Number of metadata values (4 bytes, binary)
Metadata keyword and value lengths and offsets (16 bytes times the number of metadata values, binary)
    For each metadata value, the following information is included:
    v 4 bytes, which is the offset from the beginning of the payload to the metadata keyword
    v 4 bytes, which is the length of the metadata keyword
    v 4 bytes, which is the offset from the beginning of the payload to the metadata value
    v 4 bytes, which is the length of the metadata value
Number of records in the data (4 bytes, binary)
Record offsets and lengths (8 bytes times the number of records in the data, binary)
    For each record, the following information is included:
    v 4 bytes, which is the offset from the beginning of the payload to the record
    v 4 bytes, which is the length of the record
Offset from the beginning of the payload to the data (4 bytes, binary)
Length of data (4 bytes, binary)
Metadata keywords and values (variable length, text)
    Tip: The lengths and offsets of these keywords and values are previously described in this payload. For more information about the metadata keywords and values, see Table 20.
The data that is being transmitted (variable length, text)
    Tip: Based on the lengths and offsets, you can have data in this field that is not included in any record.
Metadata keywords and values
Table 20 lists and describes the expected metadata keywords and values.
Table 20. Metadata keywords and values
encoding (required)
    The character encoding of the data, which is typically one of the following values:
    v IBM037
    v IBM1047
    v UTF-8
sourcetype (required)
    The source type of the data. This value must be the same as the name of the data source type that is specified in the parms object in the .streams.json file.
sourcename (required)
    The source name of the data. This value must be the same as the data source name that is specified in the parms object in the .streams.json file.
timezone (optional)
    If the time stamps in the data do not include a time zone, this value specifies a time zone to the target destination. Specify this value if the time zone is different from the system time zone.
    This value must be in the following format, where plus_or_minus represents the + or - sign, HH represents two digits for the hour, and MM represents two digits for the minute:
    plus_or_minusHHMM
    For example, a value of -0500 indicates a zone 5 hours west of UTC.
Sending data by using the Java API
The IBM Common Data Provider for z Systems Java API is a set of Java classes
that IBM Common Data Provider for z Systems uses to exchange data internally.
You can use these classes to write Java applications that send data to the Data
Streamer.
About this task
To send data, the API must have the port number on which the Data Streamer
listens for data.
Tip: For more information about the Data Streamer port, see “Data Streamer port
definition” on page 11.
Procedure
To use the Java API to send data to the Data Streamer, complete the following
steps:
1. Extract the /DS/LIB/CDPzLibraries.tar file, and add the CdpCommon.jar and
CdpProtocol.jar files to the Java build path. Java API documentation is
included in the TAR file.
2. As shown in the following example, define a Java class for the sender, where
port is the port number on which the Data Streamer listens for data:
CDPSender sender = new CDPSender("localhost",port);
3. As shown in the following example, define a variable for identifying the origin
of the data in traces and dumps. The value of senderName, which must have a
maximum length of 8 characters, is included in the header to identify the origin
of the data.
String senderName = "SAMPSNDR";
4. As shown in the following example, define a Java class for containing the
metadata for the data. This table must contain the metadata keywords and
values that are described in “Metadata keywords and values” on page 145.
HashMap<String, String> metadata = new HashMap<String, String>(4);
metadata.put(Dictionary.encoding.name(), "IBM1047");
metadata.put(Dictionary.sourcename.name(), "mySourceName");
metadata.put(Dictionary.sourcetype.name(), "mySourceType");
metadata.put(Dictionary.timezone.name(), "+0000");
5. To send the data to the Data Streamer, complete the following steps that apply,
depending on whether you are sending split or unsplit data:
Split data
    1. Define a Java class for containing the records to be sent, and for adding records as they are collected, as shown in the following example:
       List<String> records = new ArrayList<String>();
       records.add(someRecord);
    2. Send the data to the Data Streamer, as shown in the following example:
       sender.sendType2(senderName, metadata, records);
Unsplit data
    1. Send the data to the Data Streamer, as shown in the following example:
       String data = someData;
       sender.sendType1(senderName, metadata, data);
Important: The following Java exceptions are thrown by the sendType2 and
sendType1 methods and must be caught:
IOException
Thrown if an I/O error occurs when connecting to or sending data to
the Data Streamer.
IllegalArgumentException
Thrown when the metadata does not contain a value for encoding, or
when the length of the sender name is greater than 8 characters.
UnsupportedEncodingException
Thrown when the encoding that is provided in the metadata is not
supported by Java.
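For example, a send call might be wrapped as follows. This is a sketch only; printing the stack trace is a placeholder for your own error handling. Note that UnsupportedEncodingException extends IOException, so it is caught first:
try {
    sender.sendType2(senderName, metadata, records);
} catch (java.io.UnsupportedEncodingException e) {
    e.printStackTrace();   // encoding in the metadata is not supported by Java
} catch (java.io.IOException e) {
    e.printStackTrace();   // I/O error while connecting or sending to the Data Streamer
} catch (IllegalArgumentException e) {
    e.printStackTrace();   // missing encoding value, or sender name longer than 8 characters
}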
Sending data by using the REXX API
The IBM Common Data Provider for z Systems REXX API is a set of REstructured
eXtended eXecutor (REXX) language functions that can be used to send data to the
Data Streamer.
About this task
The sample REXX program HBORS001 in the hlq.SHBOSAMP library illustrates how to
use the REXX API as described in the following procedure.
To send data, the API must have the port number on which the Data Streamer
listens for data.
Tip: For more information about the Data Streamer port, see “Data Streamer port
definition” on page 11.
Procedure
To use the REXX API to send data to the Data Streamer, complete the following
steps:
1. In your REXX program, include the REXX procedures from the HBORDAPI
sample program, which is in the hlq.SHBOSAMP library.
2. As shown in the following example, define your metadata in a stem variable
that is named “Meta.”. This table must contain the metadata keywords and
values that are described in “Metadata keywords and values” on page 145,
with one value for each entry in keyword=value format.
Meta.0 = 4
Meta.1 = 'encoding=IBM1047'
Meta.2 = 'sourcetype=mySourceType'
Meta.3 = 'sourcename=mySourceName'
Meta.4 = 'timezone=+0000'
3. As shown in the following example, define your data in a stem variable that is
named “Data.”:
Data.0 = 3
Data.1 = 'Record 1'
Data.2 = 'Record 2'
Data.3 = 'Record 3'
4. To send data to the Data Streamer, complete the following steps that apply,
depending on whether you are sending data in a single transmission or
multiple transmissions:
Single transmission
    To connect to the Data Streamer, send the data, and disconnect from the Data Streamer, call HBO_Send_Data, as shown in the following example:
    Call HBO_Send_Data port, type, sender
Multiple transmissions
    If you have a long running program, you can open a connection to the Data Streamer before you call HBO_Send_Data so that the connection remains open, and you do not have to reconnect to send more data.
    1. Call HBO_Open, as shown in the following example:
       Call HBO_Open port
    2. Call HBO_Send_Data, as shown in the following example, which sends the data without connecting to, or disconnecting from, the Data Streamer:
       Call HBO_Send_Data port, type, sender
       Tip: In this call, the value for port is ignored because the connection to the Data Streamer is already open.
    3. When the program completes the sending of data, call HBO_Close to disconnect from the Data Streamer.
    Tip: If you make these calls from multiple, different REXX subroutines, ensure that any procedure statements expose the following variables:
    v HBO_Socket
    v hbo.
    v ecpref
    v ecname
The following information describes the variables that are used in the calls:
port
The port number on which the local Data Streamer listens for data.
type
A value of 1 indicates unsplit data, and a value of 2 indicates split data.
sender An eye catcher, with a maximum length of 8 characters, for identifying
the origin of the data in traces and dumps.
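Putting the pieces together, a minimal end-to-end flow might look like the following sketch, which builds on the HBORDAPI procedures described above. The port number, sender name, and record contents are assumptions for illustration:
/* REXX: hypothetical minimal sender using the HBORDAPI procedures */
port = 51401                              /* Data Streamer port (assumed) */
Meta.0 = 3
Meta.1 = 'encoding=IBM1047'
Meta.2 = 'sourcetype=mySourceType'
Meta.3 = 'sourcename=mySourceName'
Data.0 = 1
Data.1 = 'Hello from the REXX API'
Call HBO_Send_Data port, 2, 'SAMPSNDR'    /* type 2 = split data */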
Loading data to IBM Db2 Analytics Accelerator for target
destination IBM Tivoli Decision Support for z/OS
If your target destination is IBM Tivoli Decision Support for z/OS, you must load
the z/OS operational data in batch mode from IBM Common Data Provider for z
Systems to IBM Db2 Analytics Accelerator for z/OS for use by IBM Tivoli Decision
Support for z/OS.
About this task
IBM Db2 Analytics Accelerator for z/OS is a high-performance component that is
tightly integrated with Db2 for z/OS. It delivers high-speed processing for
complex Db2 queries to support business-critical reporting and analytics
workloads.
IBM Common Data Provider for z Systems can send System Management Facilities
(SMF) data directly to IBM Db2 Analytics Accelerator for z/OS for storage,
analytics, and reporting. The data is stored in IBM Db2 Analytics Accelerator for
z/OS by using a database schema from IBM Tivoli Decision Support for z/OS
analytics components.
The IBM Common Data Provider for z Systems System Data Engine converts SMF
data into data sets that contain the IBM Tivoli Decision Support for z/OS analytics
component tables in DB2 UNLOAD format. The IBM Db2 Analytics Accelerator
Loader for z/OS is then used to load the data sets directly into IBM Db2 Analytics
Accelerator for z/OS.
By sending data directly to IBM Db2 Analytics Accelerator for z/OS, you gain the
following advantages:
v The need to store data in Db2 for z/OS is eliminated.
v More detailed timestamp level records can be stored.
v More CPU work is eliminated from the z/OS system.
v Reporting functions benefit from the high query speeds of IBM Db2 Analytics
Accelerator for z/OS.
Configuring IBM Tivoli Decision Support for z/OS for loading the data
You must configure IBM Tivoli Decision Support for z/OS in preparation for
loading the z/OS operational data in batch mode from IBM Common Data
Provider for z Systems to IBM Db2 Analytics Accelerator for z/OS.
Before you begin
Apply the following updates for the following prerequisite software:
IBM Tivoli Decision Support for z/OS Version 1.8.2
APAR PI70968
IBM Db2 Analytics Accelerator for z/OS Version 5.1
One of the following two sets of PTFs are required (either PTF-2 level or
PTF-3 level), as indicated:
v PTF-2 level with the following PTFs applied:
– UI30285
– UI30337
– UI30740
– UI31021
– UI31148
– UI31287
– UI31302
– UI31507
– UI31571
– UI31739
– UI32368
– UI32588
– UI32707
– UI32810
– UI35006
– UI35007
– UI35008
– UI35009
– UI35010
– UI35011
– UI35012
– UI37271
– UI37783
– UI37784
– UI37785
– UI37786
– UI37793
– UI37794
– UI37795
– UI37796
– UI38702
v PTF-3 level with the following PTFs applied:
– UI33493
– UI33603
– UI33797
– UI35501
– UI36461
– UI37053
– UI37534
– UI39653
– UI39921
– UI40892
– UI41378
– UI42327
– UI42328
– UI42329
IBM Db2 Analytics Accelerator Loader for z/OS Version 2.1
The following PTFs are required:
v UI18415
v UI20963
v UI21883
v UI22759
v UI23712
v UI26834
v UI27815
v UI33956
v UI35108
v UI36231
v UI36343
v UI38008
v UI38201
v UI38202
v UI38810
v UI38811
v UI38939
v UI38943
v UI38973
v UI39437
v UI39451
v UI39454
IBM Common Data Provider for z Systems Version 1.1.0
The following PTFs are required:
v UA91431
v UA91450
v UA91451
v UA91452
About this task
IBM Tivoli Decision Support for z/OS includes an analytics component for each set
of tables that are supported in IBM Db2 Analytics Accelerator for z/OS. “IBM
Tivoli Decision Support for z/OS analytics components that can be loaded by the
System Data Engine” on page 157 lists these analytics components with their
subcomponents and the names of the corresponding base components in IBM
Tivoli Decision Support for z/OS.
Procedure
To configure IBM Tivoli Decision Support for z/OS for loading the data, complete
the following steps:
1. Bind the DB2 plan that is used by IBM Tivoli Decision Support for z/OS by
specifying one of the following BIND options:
v QUERYACCELERATION(ELIGIBLE)
v QUERYACCELERATION(ENABLE)
For example, if you use the default plan name DRLPLAN, the following BIND
PACKAGE is used to set the query acceleration register as eligible:
//SYSTSIN DD *
 DSN SYSTEM(DSN)
 BIND PACKAGE(DRLPLAN) OWNER(authid) MEMBER(DRLPSQLX) ACTION(REPLACE) ISOLATION(CS) ENCODING(EBCDIC) -
   QUERYACCELERATION(ELIGIBLE)
 BIND PLAN(DRLPLAN) OWNER(authid) PKLIST(*.DRLPLAN.*) ACTION(REPLACE) RETAIN
 RUN PROGRAM(DSNTIAD) PLAN(DSNTIAxx) LIB('xxxx.RUNLIB.LOAD')
 END
The SDRLCNTL(DRLJDBIN) job includes sample instructions for binding the plan
with QUERYACCELERATION specified.
2. Modify DRLFPROF, which is the IBM Tivoli Decision Support for z/OS data set
that contains user-modified parameters, to reflect the settings to apply when
installing new analytics components. The following parameters in DRLFPROF
provide support for the IBM Db2 Analytics Accelerator for z/OS:
def_useaot = "YES" | "NO"
"YES"
Means that the table is created as an Accelerator Only table.
"NO"
Means that the table is created in DB2 and can be used either
as a DB2 table or as an IDAA_ONLY table. The default value is
"NO".
def_accelerator = "xxxxxxxx"
"xxxxxxxx"
The name of the accelerator where the table resides.
def_timeint = "H" | "S" | "T"
"H"
The time stamp for tables is rounded to an hourly interval
(similar to tables with a suffix of _H in other components).
"S"
The time stamp for tables is rounded to a seconds interval
(similar to tables with a time field rather than a time stamp in
other components).
"T"
The time stamp for tables is the actual time stamp in the SMF
record (similar to tables with suffix _T). The default value is
"T".
3. If you are using IBM Tivoli Decision Support for z/OS to collect and populate
the component tables in Db2 for z/OS, or if you are using IBM Tivoli Decision
Support for z/OS reporting, customize each new lookup table in the IBM Tivoli
Decision Support for z/OS analytics components to reflect the contents of any
existing lookup tables in IBM Tivoli Decision Support for z/OS. For example,
insert the same rows that are currently in the DB2_APPLICATION table into the
A_DB2_APPLICATION table. Table 21 on page 153 lists the lookup table members
to customize.
Tip: If you are collecting data only into IBM Db2 Analytics Accelerator for
z/OS rather than having the data reside in Db2 for z/OS, the lookup tables are
configured in IBM Common Data Provider for z Systems, as described in
“Running the System Data Engine to write data in DB2 UNLOAD format” on
page 154.
Table 21. IBM Tivoli Decision Support for z/OS lookup table members to customize
Member     Base component table   Analytics component table
DRLTA2AP   DB2_APPLICATION        A_DB2_APPLICATION
DRLTA2AC   DB2_ACCUMAC            A_DB2_ACCUMAC
DRLTALUG   USER_GROUP             A_USER_GROUP
DRLTALKP   KPM_THRESHOLDS         A_KPM_THRESHOLDS_L
DRLTALW2   MVS_WORKLOAD2_TYPE     A_WORKLOAD2_L
DRLTALDA   MVSPM_DEVICE_ADDR      A_DEVICE_ADDR_L
DRLTALUT   MVSPM_UNIT_TYPE        A_UNIT_TYPE_L
DRLTALMI   MVS_MIPS_T             A_MIPS_L
DRLTALSP   MVS_SYSPLEX            A_SYSPLEX_L
DRLTALWL   MVS_WORKLOAD_TYPE      A_WORKLOAD_L
DRLTALW2   MVS_WORKLOAD2_TYPE     A_WORKLOAD2_L
DRLTALTR   MVSPM_TIME_RES         A_TIME_RES_L
4. Install the IBM Tivoli Decision Support for z/OS analytics components that you
want to use into IBM Tivoli Decision Support for z/OS. For information about
how to install components into IBM Tivoli Decision Support for z/OS, see the
IBM Tivoli Decision Support for z/OS administration documentation in the
IBM Knowledge Center.
5. After the IBM Tivoli Decision Support for z/OS analytics components and their
associated tables are created in IBM Tivoli Decision Support for z/OS, add
them to IBM Db2 Analytics Accelerator for z/OS by using the Data Studio
Eclipse application or by using stored procedures. Table 22 lists sample jobs for
adding tables to IBM Db2 Analytics Accelerator for z/OS.
Table 22. Sample jobs for adding tables to IBM Db2 Analytics Accelerator for z/OS
Analytics component            SDRLCNTL member
Analytics - z/OS Performance   DRLJAPMA
Analytics - DB2                DRLJA2DA
Analytics - KPM CICS           DRLJAKCA
Analytics - KPM DB2            DRLJAKDA
Analytics - KPM z/OS           DRLJAKZA
6. To move the contents of the lookup tables into the IBM Db2 Analytics
Accelerator for z/OS, modify and submit the SDRLCNTL members that are
listed in Table 23.
Table 23. Sample jobs for moving lookup table contents to IBM Db2 Analytics Accelerator for z/OS
Analytics component            SDRLCNTL member
Analytics - z/OS Performance   DRLJAPMK
Analytics - KPM DB2            DRLJAKDK
Analytics - KPM z/OS           DRLJAKZK
Running the System Data Engine to write data in DB2 UNLOAD format
The IBM Common Data Provider for z Systems System Data Engine converts
System Management Facilities (SMF) data into data sets that contain the IBM Tivoli
Decision Support for z/OS analytics component tables in DB2 UNLOAD format.
The IBM Db2 Analytics Accelerator Loader for z/OS is then used to load the data
sets directly into IBM Db2 Analytics Accelerator for z/OS.
Procedure
To run the System Data Engine to write data in DB2 UNLOAD format, complete
the following steps:
1. Copy and customize the IBM Common Data Provider for z Systems lookup
definition members in Table 24 to reflect the contents of the corresponding IBM
Tivoli Decision Support for z/OS lookup tables. For example, insert the same
rows that are currently in the DB2_APPLICATION table into the A_DB2_APPLICATION
table. These lookup tables are used by the System Data Engine when
generating the DB2 UNLOAD format for each table. The System Data Engine
lookup tables that are defined in these members have the same names as the
IBM Tivoli Decision Support for z/OS analytics component lookup tables.
Table 24. IBM Common Data Provider for z Systems lookup table members
HBOvrm.SHBODEFS member   Analytics component lookup table   Base component lookup table
HBOTA2AP                 A_DB2_APPLICATION                  DB2_APPLICATION
HBOTA2AC                 A_DB2_ACCUMAC                      DB2_ACCUMAC
HBOTALUG                 A_USER_GROUP                       USER_GROUP
HBOTALKP                 A_KPM_THRESHOLDS_L                 KPM_THRESHOLDS
HBOTALWL                 A_WORKLOAD2_L                      MVS_WORKLOAD2_TYPE
HBOTALMI                 A_MIPS_L                           MVS_MIPS_T
HBOTALSP                 A_SYSPLEX_L                        MVS_SYSPLEX
HBOTALWL                 A_WORKLOAD_L                       MVS_WORKLOAD_TYPE
HBOTALW2                 A_WORKLOAD2_L                      MVS_WORKLOAD2_TYPE
HBOTALDA                 A_DEVICE_ADDR_L                    MVSPM_DEVICE_ADDR
HBOTALUT                 A_UNIT_TYPE_L                      MVSPM_UNIT_TYPE
HBOTALTR                 A_TIME_RES_L                       MVSPM_TIME_RES
2. Run the System Data Engine to generate DB2 UNLOAD format for the tables
that are created for the IBM Db2 Analytics Accelerator by the IBM Tivoli
Decision Support for z/OS analytics components.
The HBOvrm.SHBOCNTL members that are listed in Table 25 include sample JCL
jobs for generating DB2 UNLOAD format for each of the analytics component
tables.
Table 25. Sample jobs for generating DB2 UNLOAD format

HBOvrm.SHBOCNTL member    Analytics component
HBOAPMUN                  Analytics - z/OS Performance
HBOA2DUN                  Analytics - DB2
HBOAKCUN                  Analytics - KPM CICS
HBOAKDUN                  Analytics - KPM DB2
HBOAKZUN                  Analytics - KPM z/OS
Each sample includes two steps. The first step deletes output files that were
created by a prior run; the second step, the COLLECT step, allocates the output
files and generates DB2 UNLOAD format from a data set that contains SMF
records. The COLLECT step uses the following DD names (a hedged JCL sketch of
this step follows the list):
v HBOIN provides the control statement input to the System Data Engine. It
  references the following members of HBOvrm.SHBODEFS:
  – HBOLLSMF contains control statements that define the SMF log as input.
  – HBORS* members contain control statements for extracting data from SMF
    records.
  – HBOT* members contain control statements that define the lookup tables
    that are used by the System Data Engine.
  – HBOUA* members contain control statements that store the SMF data in
    DB2 UNLOAD format.
  After these members, an in-stream COLLECT statement initiates System Data
  Engine processing.
v HBOLOG provides the input to the System Data Engine, which must be a
  data set that contains SMF records.
v The UA* DD names refer to the output files that are written in DB2
  UNLOAD format. By convention, the DD name of a file matches the name of
  the HBOvrm.SHBODEFS member that produces it, without the HBO prefix, followed
  by a sequence number (such as 1, 2, 3, or 4) because one definition member
  can produce several files. For example, DD name UA2D11 refers to table
  A_DB2_SYS_PARM_I in DB2 UNLOAD format, which is the first file that is output
  by definition member HBOUA2D1.
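The following is a hedged sketch of such a COLLECT step, not a definitive
sample: it assumes the System Data Engine batch program HBOPDE, and the data
set names, the HBORSxxx record-definition member, and the HBOUA2D1 output
definition member are illustrative placeholders that you replace with the
members for the tables that you generate.

//* Sketch of the COLLECT step that is described above. HBOPDE is
//* the System Data Engine program; all data set names and the
//* HBORSxxx member are placeholders.
//COLLECT  EXEC PGM=HBOPDE
//STEPLIB  DD DISP=SHR,DSN=HBO.V1R1M0.SHBOLOAD
//HBOIN    DD DISP=SHR,DSN=HBO.V1R1M0.SHBODEFS(HBOLLSMF)
//         DD DISP=SHR,DSN=HBO.V1R1M0.SHBODEFS(HBORSxxx)
//         DD DISP=SHR,DSN=HBO.V1R1M0.SHBODEFS(HBOTA2AP)
//         DD DISP=SHR,DSN=HBO.V1R1M0.SHBODEFS(HBOUA2D1)
//         DD *
 COLLECT SMF;
/*
//HBOLOG   DD DISP=SHR,DSN=YOUR.SMF.DUMP
//UA2D11   DD DSN=YOUR.UA2D11.UNLOAD,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,50))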
Loading data to IBM Db2 Analytics Accelerator
The IBM Db2 Analytics Accelerator Loader for z/OS is used to load the data that
is output from the IBM Common Data Provider for z Systems System Data Engine
directly into IBM Db2 Analytics Accelerator for z/OS.
Procedure
Run the IBM Db2 Analytics Accelerator Loader for z/OS by using the DB2 LOAD
utility with the following updates:
v A DD statement that enables the loader to intercept the DB2 LOAD utility:
//HLODUMMY DD DUMMY
v A statement that directs the loader to load data into the IBM Db2 Analytics
  Accelerator. This statement names the accelerator and indicates that the
  target is an IDAA_ONLY table, as shown in the following example:
//SYSIN    DD *
  LOAD DATA RESUME YES LOG NO INDDN input_data_set_ddname
    IDAA_ONLY ON accelerator-name
    INTO TABLE DRLxx.KPMZ_WORKLOAD_T FORMAT INTERNAL;
A sketch that combines these updates into a single load step follows.
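The following is a hedged sketch only, which assumes the standard DSNUPROC
utility procedure; DSN1, ACCEL1, and the input data set name are placeholders,
and the target table is the first Analytics - DB2 table from the earlier
UA2D11 example.

//* Sketch only: DB2 LOAD with the Loader intercept active. DSN1,
//* ACCEL1, and YOUR.UA2D11.UNLOAD are placeholders.
//LOAD     EXEC DSNUPROC,SYSTEM=DSN1,UID='IDAALOAD'
//HLODUMMY DD DUMMY
//UNLDDN   DD DISP=SHR,DSN=YOUR.UA2D11.UNLOAD
//SYSIN    DD *
  LOAD DATA RESUME YES LOG NO INDDN UNLDDN
    IDAA_ONLY ON ACCEL1
    INTO TABLE DRLxx.A_DB2_SYS_PARM_I FORMAT INTERNAL;
/*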
The DRLvrm.SDRLCNTL members that are listed in Table 26 include sample JCL jobs
for loading DB2 UNLOAD format data for each of the analytics component tables
into IBM Db2 Analytics Accelerator.
Table 26. Sample jobs for loading data into IBM Db2 Analytics Accelerator

DRLvrm.SDRLCNTL member    Analytics component
DRLJAPMD                  Analytics - z/OS Performance
DRLJA2DD                  Analytics - DB2
DRLJAKCD                  Analytics - KPM CICS
DRLJAKDD                  Analytics - KPM DB2
DRLJAKZD                  Analytics - KPM z/OS
After the first load into an IDAA_ONLY table completes, the table must be
enabled for acceleration in IBM Db2 Analytics Accelerator for z/OS. Tables can
be enabled for acceleration by using the Data Studio Eclipse application or by
using stored procedures, as in the hedged sketch that follows.
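As a hedged illustration of the stored-procedure route, the following call
uses the accelerator stored procedure SYSPROC.ACCEL_SET_TABLES_ACCELERATION.
ACCEL1 and DRLxx are placeholders, and the table-set XML argument is
abbreviated here; its exact schema is described in the IBM Db2 Analytics
Accelerator documentation.

  -- Sketch only: enable one table for acceleration.
  CALL SYSPROC.ACCEL_SET_TABLES_ACCELERATION(
    'ACCEL1',                                  -- accelerator name (placeholder)
    'ON',                                      -- turn acceleration on
    '<tableSet... DRLxx.KPMZ_WORKLOAD_T ...>', -- table-set XML, abbreviated
    ?);                                        -- output message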
The DRLvrm.SDRLCNTL members that are listed in Table 27 include sample JCL jobs
for using stored procedures to enable tables for acceleration.
Table 27. Sample jobs for enabling tables for acceleration in IBM Db2 Analytics Accelerator

DRLvrm.SDRLCNTL member    Analytics component
DRLJAPME                  Analytics - z/OS Performance
DRLJA2DE                  Analytics - DB2
DRLJAKCE                  Analytics - KPM CICS
DRLJAKDE                  Analytics - KPM DB2
DRLJAKZE                  Analytics - KPM z/OS
Removing tables from IBM Db2 Analytics Accelerator
If you want to uninstall an IBM Tivoli Decision Support for z/OS component
whose tables were added to IBM Db2 Analytics Accelerator for z/OS (by using a
DRLJxxxA job), you must first remove those tables from IBM Db2 Analytics
Accelerator for z/OS.
Procedure
To remove tables from IBM Db2 Analytics Accelerator for z/OS, customize and
submit one or more of the jobs in Table 28.
Table 28. Sample jobs that are provided by IBM Tivoli Decision Support for z/OS for
removing tables from IBM Db2 Analytics Accelerator for z/OS

DRLvrm.SDRLCNTL member    Analytics component
DRLJAPMR                  Analytics - z/OS Performance
DRLJA2DR                  Analytics - DB2
DRLJAKCR                  Analytics - KPM CICS
DRLJAKDR                  Analytics - KPM DB2
DRLJAKZR                  Analytics - KPM z/OS
IBM Tivoli Decision Support for z/OS analytics components that can be
loaded by the System Data Engine
This reference lists the analytics components of IBM Tivoli Decision Support for
z/OS that can be loaded by the IBM Common Data Provider for z Systems System
Data Engine and used for storing data directly in the IBM Db2 Analytics
Accelerator for z/OS.
Table 29 lists the analytics components with their subcomponents and the names of
the corresponding base components in IBM Tivoli Decision Support for z/OS.
Table 29. IBM Tivoli Decision Support for z/OS analytics components that can be loaded by
the System Data Engine

Analytics - z/OS Performance
  Corresponding base component in IBM Tivoli Decision Support for z/OS: MVSPM
  Subcomponents:
  v Coupling Facility (CF)
  v Cross System Coupling Facility (XCF)
  v Open MVS (OMVS)
  v System
  v Workload
  v I/O
  v Global Storage
  v Virtual Storage
  v Device
  v Cryptography
  v Application

Analytics - DB2
  Corresponding base component in IBM Tivoli Decision Support for z/OS: DB2
  Subcomponents:
  v Initialization
  v Address Space
  v Buffer Pool
  v Accnt and RespTime
  v Package
  v Data Sharing
  v DDF
  v Storage

Analytics - KPM CICS
  Corresponding base component in IBM Tivoli Decision Support for z/OS: CICS
  Key Performance Metrics
  Subcomponents:
  v Monitoring

Analytics - KPM DB2
  Corresponding base component in IBM Tivoli Decision Support for z/OS: DB2
  Key Performance Metrics
  Subcomponents:
  v DB2 Accounting Level
  v DB2 Package
  v DB2 System Level

Analytics - KPM z/OS
  Corresponding base component in IBM Tivoli Decision Support for z/OS: z/OS
  Key Performance Metrics
  Subcomponents:
  v Address Space
  v LPAR
  v Storage
  v Workload
  v Capture Ratio Workload/LPAR
  v Channel
  v Coupling Facility
  v Hardware Capacity
  v Problem Determination
Analytics component tables
For each IBM Tivoli Decision Support for z/OS analytics component that can be
loaded by the IBM Common Data Provider for z Systems System Data Engine, this
reference lists the associated tables, with the corresponding IBM Tivoli Decision
Support for z/OS base component tables.
The tables are listed for the following analytics components:
v “Analytics - z/OS Performance component”
v “Analytics - DB2 component”
v “Analytics - KPM CICS component”
v “Analytics - KPM DB2 component”
v “Analytics - KPM z/OS component”
Analytics - z/OS Performance component
Table 30. Tables for Analytics - z/OS Performance component of IBM Tivoli Decision
Support for z/OS, with corresponding base component tables

Table                 Corresponding base component table
A_PM_CF_I             MVSPM_CF_H
A_PM_CF_LINK_I        MVSPM_CF_LINK_H
A_PM_CF_PROC_I        MVSPM_CF_PROC_H
A_PM_CF_REQ_I         MVSPM_CF_REQUEST_H
A_PM_CF_CF_I          MVSPM_CF_TO_CF_H
A_PM_XCF_MEMBER_I     MVSPM_XCF_MEMBER_H
A_PM_XCF_PATH_I       MVSPM_XCF_PATH_H
A_PM_XCF_SYS_I        MVSPM_XCF_SYS_H
A_PM_OMVS_BUF_I       MVSPM_OMVS_BUF_H
A_PM_OMVS_FILE_I      MVSPM_OMVS_FILE_H
A_PM_OMVS_GHFS_I      MVSPM_OMVS_GHFS_H
A_PM_OMVS_HFS_I       MVSPM_OMVS_HFS_H
A_PM_OMVS_KERN_I      MVSPM_OMVS_KERN_H
A_PM_OMVS_MOUNT_I     MVSPM_OMVS_MOUNT_H
A_PM_SYS_CLUST_I      MVSPM_CLUSTER_H
A_PM_SYS_CPU_I        MVSPM_CPU_H
A_PM_SYS_CPUMT_I      MVSPM_CPUMT_H
A_PM_SYS_ENQ_I        MVSPM_ENQUEUE_H
A_PM_SYS_LPAR_I       MVSPM_LPAR_H
A_PM_SYS_SYS_I        MVSPM_SYSTEM_H
A_PM_SYS_PROD_I       MVSPM_PROD_T
A_PM_SYS_PRDINT_I     MVSPM_PROD_INT_T
A_PM_SYS_MSU_I        MVSPM_LPAR_MSU_T
A_PM_WL_GOAL_I        MVSPM_GOAL_ACT_H
A_PM_WL_SERVED_I      MVSPM_WLM_SERVED_H
A_PM_WL_STATE_I       MVSPM_WLM_STATE_H
A_PM_WL_WKLD_I        MVSPM_WORKLOAD_H
A_PM_WL_WKLD2_I       MVSPM_WORKLOAD2_H
A_PM_IO_DATASET_I     MVSPM_DATASET_H
A_PM_IO_VOLUME_I      MVSPM_VOLUME_H
A_PM_IO_LCU_I         MVSPM_LCU_IO_H
A_PM_GS_BMF_I         MVSPM_BMF_H
A_PM_GS_CACHE_I       MVSPM_CACHE_H
A_PM_GS_PAGEDS_I      MVSPM_PAGE_DS_H
A_PM_GS_PAGING_I      MVSPM_PAGING_H
A_PM_GS_STORAGE_I     MVSPM_STORAGE_H
A_PM_GS_STORCLS_I     MVSPM_STORCLASS_H
A_PM_GS_SWAP_I        MVSPM_SWAP_H
A_PM_GS_CACHESS_I     MVSPM_CACHE_ESS_H
A_PM_VS_VLF_I         MVSPM_VLF_H
A_PM_VS_CSASQA_I      MVSPM_VS_CSASQA_H
A_PM_VS_PRIVATE_I     MVSPM_VS_PRIVATE_H
A_PM_VS_SUBPOOL_I     MVSPM_VS_SUBPOOL_H
A_PM_DEV_CHAN_I       MVSPM_CHANNEL_H
A_PM_DEV_HSCHAN_I     MVSPM_HS_CHAN_H
A_PM_DEV_AP_I         MVSPM_DEVICE_AP_H
A_PM_DEV_DEVICE_I     MVSPM_DEVICE_H
A_PM_DEV_FICON_I      MVSPM_FICON_H
A_PM_DEV_RAID_I       MVSPM_RAID_RANK_H
A_PM_DEV_ESSLNK_I     MVSPM_ESSLINK_H
A_PM_DEV_ESSEXT_I     MVSPM_ESS_EXTENT_H
A_PM_DEV_ESSRNK_I     MVSPM_ESS_RANK_H
A_PM_DEV_PCIE_I       MVSPM_PCIE_H
A_PM_CRYP_PCI_I       MVSPM_CRYPTO_PCI_H
A_PM_CRYP_CCF_I       MVSPM_CRYPTO_CCF_H
A_PM_APP_APPL_I       MVSPM_APPL_H
Analytics - DB2 component
Table 31. Tables for Analytics - DB2 component of IBM Tivoli Decision Support for z/OS,
with corresponding base component tables

Table                 Corresponding base component table
A_DB2_SYS_PARM_I      DB2_SYS_PARAMETER
A_DB2_DB_I            DB2_DATABASE_T
A_DB2_DB_BIND_I       DB2_DATABASE_T
A_DB2_DB_QIST_I       DB2_DATABASE_T
A_DB2_DB_SYS_I        DB2_SYSTEM_T
A_DB2_BP_I            DB2_BUFFER_POOL_T
A_DB2_USERTRAN_I      DB2_USER_TRAN_H
A_DB2_UT_BP_I         DB2_USER_TRAN_H
A_DB2_UT_SACC_I       DB2_USER_TRAN_H
A_DB2_UT_IDAA_I       DB2_USER_TRAN_H
A_DB2_IDAA_STAT_I     DB2_IDAA_STAT_H
A_DB2_IDAA_ACC_I      DB2_IDAA_ACC_H
A_DB2_IDAA_ST_A_I     DB2_IDAA_STAT_A_H
A_DB2_IDAA_ST_S_I     DB2_IDAA_STAT_S_H
A_DB2_PACK_I          DB2_PACKAGE_H
A_DB2_SHR_BP_I        DB2_BP_SHARING_T
A_DB2_SHR_BPAT_I      DB2_BPATTR_SHR_T
A_DB2_SHR_LOCK_I      DB2_LOCK_SHARING_T
A_DB2_SHR_INIT_I      DB2_SHARING_INIT
A_DB2_SHR_TRAN_I      DB2_US_TRAN_SHAR_H
A_DB2_DDF_I           DB2_USER_DIST_H
A_DB2_SYSTEM_I        DB2_SYSTEM_DIST_T
A_DB2_STORAGE_I       DB2_STORAGE_T
A_DB2_TRAN_IV         DB2_TRANSACTION_D
A_DB2_DATABASE_IV     DB2_DATABASE_T
Analytics - KPM CICS component
Table 32. Tables for Analytics - KPM CICS component of IBM Tivoli Decision Support for
z/OS, with corresponding base component tables

Table                 Corresponding base component table
A_KC_MON_TRAN_I       KPMC_MON_TRAN_H
Analytics - KPM DB2 component
Table 33. Tables for Analytics - KPM DB2 component of IBM Tivoli Decision Support for
z/OS, with corresponding base component tables

Table                 Corresponding base component table
A_KD_UT_I             KPM_DB2_USERTRAN_H
A_KD_UT_BP_I          KPM_DB2_USERTRAN_H
A_KD_EU_I             KPM_DB2_ENDUSER_H
A_KD_EU_BP_I          KPM_DB2_ENDUSER_H
A_KD_PACKAGE_I        KPM_DB2_PACKAGE_H
A_KD_SYS_IO_I         KPM_DB2_SYSTEM_T
A_KD_SYS_TCBSRB_I     KPM_DB2_SYSTEM_T
A_KD_SYS_LATCH_I      KPM_DB2_LATCH_T
A_KD_SYS_BP_I         KPM_DB2_BP_T
A_KD_SYS_BP_SHR_I     KPM_DB2_BP_SHR_T
A_KD_SYS_ST_DBM_I     KPM_DB2_STORAGE_T
A_KD_SYS_ST_DST_I     KPM_DB2_STORAGE_T
A_KD_SYS_ST_COM_I     KPM_DB2_STORAGE_T
A_DB_SYS_DB_WF_I      KPM_DB2_DATABASE_T
A_DB_SYS_DB_EDM_I     KPM_DB2_DATABASE_T
A_DB_SYS_DB_SET_I     KPM_DB2_DATABASE_T
A_DB_SYS_DB_LOCK_I    KPM_DB2_LOCK_T
Analytics - KPM z/OS component
Table 34. Tables for Analytics - KPM z/OS component of IBM Tivoli Decision Support for
z/OS, with corresponding base component tables

Table                 Corresponding base component table
A_KPM_EXCEPTION_I     KPM_EXCEPTION_T
A_KZ_JOB_INT_I        KPMZ_JOB_INT_T
A_KZ_JOB_STEP_I       KPMZ_JOB_STEP_T
A_KZ_LPAR_I           KPMZ_LPAR_T
A_KZ_STORAGE_I        KPMZ_STORAGE_T
A_KZ_WORKLOAD_I       KPMZ_WORKLOAD_T
A_KZ_CHANNEL_I        KPMZ_CHANNEL_T
A_KZ_CF_I             KPMZ_CF_T
A_KZ_CF_STRUC_I       KPMZ_CF_STRUCTR_T
A_KZ_CPUMF_I          KPMZ_CPUMF_T
A_KZ_CPUMF1_I         KPMZ_CPUMF1_T
A_KZ_CPUMF_PT_I       KPMZ_CPUMF_PT_T
A_KZ_CPUMF1_PT_I      KPMZ_CPUMF1_PT_T
A_KZ_SRM_WKLD_I       KPMZ_SRM_WKLD_T
Analytics component views that are based on multiple tables
In some cases, multiple tables from an IBM Tivoli Decision Support for z/OS
analytics component are combined into a single view. In these cases, the resulting
view matches a table from an IBM Tivoli Decision Support for z/OS base
component. This reference lists these analytics component views that are based on
multiple tables.
Table 35. IBM Tivoli Decision Support for z/OS analytics component views that are based on multiple tables

A_DB2_USERTRAN_IV (Analytics - DB2)
  Analytics component tables that are used in view: A_DB2_USERTRAN_I,
  A_DB2_UT_BP_I, A_DB2_UT_SACC_I, A_DB2_UT_IDAA_I
  Base component table on which view is based: DB2_USER_TRAN_H

A_DB2_DATABASE_IV (Analytics - DB2)
  Analytics component tables that are used in view: A_DB2_DB_I,
  A_DB2_DB_BIND_I, A_DB2_DB_QIST_I
  Base component table on which view is based: DB2_DATABASE_T

A_KD_USERTRAN_IV (Analytics - KPM DB2)
  Analytics component tables that are used in view: A_KD_UT_I, A_KD_UT_BP_I
  Base component table on which view is based: KPM_DB2_USERTRAN_H

A_KD_ENDUSER_IV (Analytics - KPM DB2)
  Analytics component tables that are used in view: A_KD_EU_I, A_KD_EU_BP_I
  Base component table on which view is based: KPM_DB2_ENDUSER_H

A_KD_SYSTEM_IV (Analytics - KPM DB2)
  Analytics component tables that are used in view: A_KD_SYS_IO_I,
  A_KD_SYS_TCBSRB_I
  Base component table on which view is based: KPM_DB2_SYSTEM_T

A_KD_STORAGE_IV (Analytics - KPM DB2)
  Analytics component tables that are used in view: A_KD_SYS_ST_DBM_I,
  A_KD_SYS_ST_DST_I, A_KD_SYS_ST_COM_I
  Base component table on which view is based: KPM_DB2_STORAGE_T

A_KD_DATABASE_IV (Analytics - KPM DB2)
  Analytics component tables that are used in view: A_DB_SYS_DB_WF_I,
  A_DB_SYS_DB_EDM_I, A_DB_SYS_DB_SET_I
  Base component table on which view is based: KPM_DB2_DATABASE_T
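The following is a purely illustrative sketch of the pattern that Table 35
describes: a view joins its interval tables on their shared key columns to
reproduce the shape of one base component table. The key and data column
names (DATE, TIME, MVS_SYSTEM_ID, BP_GETPAGES) are hypothetical; the shipped
view definitions may differ.

  -- Hypothetical sketch of a combined view; all column names are
  -- illustrative, and DRLxx is a placeholder for the table prefix.
  CREATE VIEW DRLxx.A_KD_USERTRAN_IV AS
    SELECT UT.*, BP.BP_GETPAGES
    FROM DRLxx.A_KD_UT_I UT
    JOIN DRLxx.A_KD_UT_BP_I BP
      ON  UT.DATE = BP.DATE
      AND UT.TIME = BP.TIME
      AND UT.MVS_SYSTEM_ID = BP.MVS_SYSTEM_ID;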
Notices
This information was developed for products and services offered in the US. This
material might be available from IBM in other languages. However, you may be
required to own a copy of the product or product version in that language in order
to access it.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US
For license inquiries regarding double-byte character set (DBCS) information,
contact the IBM Intellectual Property Department in your country or send
inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may
not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for
convenience only and do not in any manner serve as an endorsement of those
websites. The materials at those websites are not part of the materials for this IBM
product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US
Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
The performance data and client examples cited are presented for illustrative
purposes only. Actual performance results may vary depending on specific
configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.
Statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which
illustrate programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not
been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. The sample
programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at "Copyright and
trademark information" at http://www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
Linux is a registered trademark of Linus Torvalds in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Windows is a trademark of Microsoft Corporation in the United States, other
countries, or both.
Terms and conditions for product documentation
Permissions for the use of these publications are granted subject to the following
terms and conditions.
Applicability
These terms and conditions are in addition to any terms of use for the IBM
website.
Personal use
You may reproduce these publications for your personal, noncommercial use
provided that all proprietary notices are preserved. You may not distribute, display
or make derivative work of these publications, or any portion thereof, without the
express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make
derivative works of these publications, or reproduce, distribute or display these
publications or any portion thereof outside your enterprise, without the express
consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses or
rights are granted, either express or implied, to the publications or any
information, data, software or other intellectual property contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE
PUBLICATIONS. THE PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING
BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,
NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
IBM®
Printed in USA