Netcool/Impact
Version 7.1.0.5
Solutions Guide
IBM
SC27-4923-04
Note
Before using this information and the product it supports, read the information in “Notices”.
Edition notice
This edition applies to version 7.1.0.5 of IBM Tivoli Netcool/Impact and to all subsequent releases and
modifications until otherwise indicated in new editions.
References in content to IBM products, software, programs, services or associated technologies do not imply that
they will be available in all countries in which IBM operates. Content, including any plans contained in content,
may change at any time at IBM's sole discretion, based on market opportunities or other factors, and is not
intended to be a commitment to future content, including product or feature availability, in any way. Statements
regarding IBM's future direction or intent are subject to change or withdrawal without notice and represent goals
and objectives only. Please refer to the developerWorks terms of use for more information.
© Copyright IBM Corporation 2006, 2016.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
About this publication . . . . . . . . vii
Intended audience . . . . . . . . . . . . vii
Publications . . . . . . . . . . . . . . vii
Netcool/Impact library . . . . . . . . . vii
Accessing terminology online . . . . . . . vii
Accessing publications online . . . . . . . vii
Ordering publications . . . . . . . . . viii
Accessibility . . . . . . . . . . . . . . viii
Tivoli technical training . . . . . . . . . . viii
Support for problem solving . . . . . . . . viii
Obtaining fixes . . . . . . . . . . . . viii
Receiving weekly support updates . . . . . . ix
Contacting IBM Software Support . . . . . . ix
Conventions used in this publication . . . . . . xi
Typeface conventions . . . . . . . . . . xi
PDF code examples with single quotation marks xii
Operating system-dependent variables and paths xii
Chapter 1. Using Netcool/Impact Solutions . . . . . . . . . . . . . . 1
Solution components . . . . . . . . . . . . 1
  Data models . . . . . . . . . . . . . . 1
  Working with services . . . . . . . . . . 1
  Policies . . . . . . . . . . . . . . . 1
Solution types . . . . . . . . . . . . . . 1
  Event enrichment solution . . . . . . . . 2
  X events in Y time solution . . . . . . . 2
  Event notification solution . . . . . . . 2
  Event gateway solution . . . . . . . . . 2
Setting up a solution . . . . . . . . . . . 3
  Creating a data model . . . . . . . . . . 3
  Setting up services . . . . . . . . . . 3
  Creating policies . . . . . . . . . . . 3
Running a solution . . . . . . . . . . . . 3
Chapter 2. Working with data models . . 5
Data model components . . . . . . . . . . . 5
  Data sources . . . . . . . . . . . . . 5
  Configuring data types . . . . . . . . . 5
  Working with data items . . . . . . . . . 6
  Working with links . . . . . . . . . . . 6
Setting up a data model . . . . . . . . . . 6
Data model architecture . . . . . . . . . . 7
Data model examples . . . . . . . . . . . 7
  Enterprise service model . . . . . . . . 8
  Web hosting model . . . . . . . . . . . 9
Working with data sources . . . . . . . . . 10
  Data sources overview . . . . . . . . . 10
  Data source categories . . . . . . . . . 10
  Data source architecture . . . . . . . . 12
  Setting up data sources . . . . . . . . 13
Working with data types . . . . . . . . . . 14
  Data types overview . . . . . . . . . . 14
  Data type categories . . . . . . . . . . 14
  Data type fields . . . . . . . . . . . 17
  Data type keys . . . . . . . . . . . . 18
  Setting up data types . . . . . . . . . 18
  Data type caching . . . . . . . . . . . 20
Working with event sources . . . . . . . . 21
  Event sources overview . . . . . . . . . 21
  ObjectServer event sources . . . . . . . 21
  Non-ObjectServer event sources . . . . . . 22
  Event source architecture . . . . . . . . 22
  Setting up ObjectServer event sources . . . 23
Chapter 3. Working with services . . . 25
Services overview . . . . . . . . . . . . 25
Predefined services . . . . . . . . . . . 25
User-defined services . . . . . . . . . . 26
OMNIbus event reader service . . . . . . . 26
  OMNIbus event reader architecture . . . . . 26
  OMNIbus event reader process . . . . . . . 27
  Managing the OMNIbusEventReader with an ObjectServer pair for New Events or Inserts . . 30
  Configuring the OMNIbusEventReader with an ObjectServer pair for New Events or Inserts . . 31
  Additional customization using the ReturnEvent function to update one or more fields in Netcool/OMNIbus . . 32
  Additional customization using a field other than ImpactFlag . . 33
  Stopping the OMNIbus event reader retrying a connection on hitting an error . . 33
  Handling Serial rollover . . . . . . . . 33
  Scenario: Using a correlation policy and the OMNIbus ObjectServer event reader service to solve flood events . . 34
Database event listener service . . . . . . 36
  Setting up the database server . . . . . . 37
  Configuring the database event listener service . . 41
  Sending database events . . . . . . . . 41
  Writing database event policies . . . . . 55
OMNIbus event listener service . . . . . . . 57
  Setting up the OMNIbus event listener service . . 57
  How to check the OMNIbus event listener service logs . . 58
  Creating Triggers . . . . . . . . . . . 58
  Using the ReturnEvent function . . . . . . 59
  Subscribing to individual channels . . . . 60
  Controlling which events get sent over from OMNIbus to Netcool/Impact using Spid . . 60
Chapter 4. Handling events . . . . . . 63
Events overview . . . . . . . . . . . . 63
Event container and event variables . . . . . 63
  Using the dot notation to access the event fields . . 64
  Using the @ notation to access the event fields . . 64
Updating event fields . . . . . . . . . . 64
Adding journal entries to events . . . . . . 64
  Assigning the JournalEntry variable . . . . 65
Sending new events . . . . . . . . . . . 65
Deleting events . . . . . . . . . . . . 66
  Examples of deleting an incoming event from the event source . . 66
Chapter 5. Handling data . . . . . . . 69
Working with data items . . . . . . . . . 69
  Field variables . . . . . . . . . . . 69
  DataItem and DataItems variables . . . . . 69
Retrieving data by filter . . . . . . . . . 69
  Working with filters . . . . . . . . . 69
  Retrieving data by filter in a policy . . . 73
Retrieving data by key . . . . . . . . . . 75
  Keys . . . . . . . . . . . . . . . 75
  Key expressions . . . . . . . . . . . 75
  Retrieving data by key in a policy . . . . 76
Retrieving data by link . . . . . . . . . 77
  Links overview . . . . . . . . . . . . 77
  Retrieving data by link in a policy . . . . 77
Adding data . . . . . . . . . . . . . . 78
  Example of adding a data item to a data type . . 79
Updating data . . . . . . . . . . . . . 79
  Example of updating single data items . . . 80
  Example of updating multiple data items . . . 80
Deleting data . . . . . . . . . . . . . 81
  Example of deleting single data items . . . 81
  Example of deleting data items by filter . . 82
  Example of deleting data items by item . . . 82
Calling database functions . . . . . . . . 82
Chapter 6. Setting up instant messaging . . . . . . . . . . . . . 85
Netcool/Impact IM . . . . . . . . . . . . 85
  Netcool/Impact IM components . . . . . . . 85
  Netcool/Impact IM process . . . . . . . . 85
    Message listening . . . . . . . . . . 85
    Message sending . . . . . . . . . . . 86
Setting up Netcool/Impact IM . . . . . . . 86
Writing instant messaging policies . . . . . 86
  Handling incoming messages . . . . . . . 86
  Sending messages . . . . . . . . . . . 86
  Example . . . . . . . . . . . . . . 86
Chapter 7. Event enrichment tutorial . . 89
Tutorial overview . . . . . . . . . . . 89
Understanding the Netcool/Impact installation . . 89
Understanding the business data . . . . . . 90
Analyzing the workflow . . . . . . . . . . 90
Creating the project . . . . . . . . . . 90
Setting up the data model . . . . . . . . . 91
  Creating the event source . . . . . . . . 91
  Creating the data sources . . . . . . . . 92
  Creating the data types . . . . . . . . 92
  Creating a dynamic link . . . . . . . . 93
  Reviewing the data model . . . . . . . . 94
Setting up services . . . . . . . . . . . 95
  Creating the event reader . . . . . . . . 95
  Reviewing the services . . . . . . . . . 95
Writing the policy . . . . . . . . . . . 95
  Looking up device information . . . . . . 95
  Looking up business departments . . . . . 96
  Increasing the alert severity . . . . . . 97
  Reviewing the policy . . . . . . . . . 98
Running the solution . . . . . . . . . . 98
Chapter 8. Working with the Netcool/Impact UI data provider . . . 101
Getting started with the UI data provider . . . 101
  UI data provider components . . . . . . . 102
  Configuring user authentication . . . . . 102
  Data types and the UI data provider . . . . 103
  Integrating chart widgets and the UI data provider . . 103
  Names reserved for the UI data provider . . . 104
  General steps for integrating the UI data provider and the console . . 104
Accessing data from Netcool/Impact policies . . 107
  Configuring policy settings . . . . . . . 107
  Accessing Netcool/Impact object variables in a policy . . 109
  Accessing data types output by the GetByFilter function . . 110
  Accessing data types output by the DirectSQL function . . 111
  Accessing an array of Impact objects with the UI data provider . . 112
UI data provider and the IBM Dashboard Application Services Hub . . 114
  Filtering data in the console . . . . . . 114
  Integrating the tree widget with an Impact object or an array of Impact objects . . 115
  Integrating data from a policy with the topology widget . . 118
  Displaying status and percentage in a widget . . 120
  Controlling node images in the topology widget . . 123
  Setting the topology node to have multiple parents in the topology widget . . 125
Visualizing data from the UI data provider in the console . . 128
  Example scenario overview . . . . . . . 129
Visualizing data from the Netcool/Impact self service dashboards . . 157
  Installing the Netcool/Impact Self Service Dashboard widgets . . 157
  Editing an Input Form widget . . . . . . 159
  Editing a Button widget . . . . . . . . 159
Reference topics . . . . . . . . . . . . 162
  Large data model support for the UI data provider . . 162
  UI data provider customization . . . . . . 166
  Column labels in the browser locale inside the console . . 168
  Customizing tooltips to display field value descriptions . . 170
  Accessing the Netcool/Impact UI data provider . . 171
  Running policies and accessing output parameters . . 172
  Mapping field types to the correct output parameters in UI Data Provider . . 172
  Customizing a topology widget layout . . . 172
Chapter 9. Working with OSLC for Netcool/Impact . . . . . . . . . . . 179
Introducing OSLC . . . . . . . . . . . . 180
  OSLC resources and identifiers . . . . . . 181
  OSLC roles . . . . . . . . . . . . . 181
Working with data types and OSLC . . . . . . 182
  Accessing Netcool/Impact data types as OSLC resources . . 182
  Retrieving OSLC resources that represent Netcool/Impact data items . . 183
  Displaying results for unique key identifier . . 184
  OSLC resource shapes for data types . . . . 185
  Configuring custom URIs for data types and user output parameters . . 187
Working with the OSLC service provider . . . . 189
  Creating OSLC service providers in Netcool/Impact . . 189
  Registering OSLC service providers with Netcool/Impact . . 191
  Registering OSLC resources . . . . . . . 193
Working with Netcool/Impact policies and OSLC . . 202
  Accessing policy output parameters as OSLC resources . . 202
  Configuring custom URIs for policy results and variables . . 212
  Passing argument values to a policy . . . . 214
  Configuring hover previews for OSLC resources . . 215
  Hover preview properties for OSLC resources . . 217
Example scenario: Using OSLC with Netcool/Impact policies . . 218
OSLC reference topics . . . . . . . . . . 220
  OSLC urls . . . . . . . . . . . . . 220
  OSLC pagination . . . . . . . . . . . 220
  Support for OSLC query syntax . . . . . . 221
  RDF functions . . . . . . . . . . . . 228
Chapter 10. Service Level Objectives (SLO) Reporting . . . . . . . . . . 241
SLO terminology overview . . . . . . . . . 242
SLO reporting prerequisites . . . . . . . . 243
SLO reporting migration considerations . . . . 243
  Updating the SLORPRT database schema . . . . 243
  Importing the projects for SLO . . . . . . 244
  SLO sample report . . . . . . . . . . 246
Installing and enabling SLO report package . . . 246
Defining service definition properties . . . . 247
  Service definition properties file . . . . 248
  Configuring the time zone . . . . . . . 252
Configuring business calendars . . . . . . . 253
  Creating common properties in business calendars . . 254
  Business calendar properties file . . . . . 255
Retrieving SLA metric data . . . . . . . . 257
  SLO reporting policies . . . . . . . . 257
  SLO reporting policy functions . . . . . . 257
  Using the getDataFromTBSMAvailability sample policy . . 260
  Configuring getDataFromTBSMAvailability . . . 261
Reports . . . . . . . . . . . . . . . 261
Example SLO reporting configuration . . . . . 262
Properties files examples . . . . . . . . 264
  Operational hours service level example . . . 264
  Single SLA example . . . . . . . . . . 265
  Time zone example . . . . . . . . . . 265
  Simple service definition example . . . . . 266
  Multiple identities in a service definition example . . 267
  Common US calendar properties . . . . . . 267
  US Calendar example . . . . . . . . . 268
  Common calendar properties file example . . . 268
  Canada calendar example . . . . . . . . 269
SLO Utility Functions . . . . . . . . . . 269
  Maintaining the reporting data in the SLORPRT database . . 270
  Removing service, SLA, and calendar definitions . . 270
  Exporting service and calendar definitions . . 271
  Removing specific outage data . . . . . . 272
  Restoring outage data . . . . . . . . . 273
  Setting SLO configuration values . . . . . 274
Chapter 11. Configuring Maintenance Window Management . . . . . . . . 277
Activating MWM in a Netcool/Impact cluster . . 277
  Configure the MWM_Properties policy . . . . 277
  Configuring MWMActivator service properties . . 278
Logging on to Maintenance Window Management . . 279
About MWM maintenance windows . . . . . . . 279
Chapter 12. Configuring Event Isolation and Correlation . . . . . . 283
Overview . . . . . . . . . . . . . . . 283
Installing Netcool/Impact and the DB2 database . . 284
Installing the Discovery Library Toolkit . . . 284
  Updating the RCA views . . . . . . . . 285
Event Isolation and Correlation policies . . . 287
Event Isolation and Correlation Services . . . 288
Event Isolation and Correlation operator views . . 288
Configuring Event Isolation and Correlation data sources . . 288
Configuring Event Isolation and Correlation data types . . 289
Configuring policies for Event Isolation and Correlation . . 290
Creating, editing, and deleting event rules . . 291
  Creating an event rule . . . . . . . . 291
Configuring WebGUI to add a new launch point . . 292
Launching the Event Isolation and Correlation analysis page . . 293
Viewing the Event Analysis . . . . . . . . 293
Visualizing Event Isolation and Correlation results in a topology widget . . 294
Reference information for the Service Component Repository API . . 296
  Components of the SCR API . . . . . . . 296
  Creating a Netcool/Impact policy using the SCR API . . 304
Chapter 13. Simple email notification 305
Installation . . . . . . . . . . . . . 305
  Creating the database table . . . . . . . 305
  Importing Netcool/Impact Artifacts . . . . 306
  Updating the email service . . . . . . . 306
  Importing the exported page into the IBM Dashboard Application Services Hub . . 306
  Creating a Remote Connection . . . . . . 307
  Updating the Object Server data source configuration . . 307
Email notification GUI . . . . . . . . . 307
  Creating an email notification . . . . . . 308
  Viewing configured email notifications and sample results . . 308
  Activating email notifications . . . . . . 308
Using DB2 as the database . . . . . . . . 309
  Creating the database and the table . . . . 309
Updating the notificationConstants policy . . . 309
Index . . . . . . . . . . . . . . . 311
About this publication
The Solutions Guide contains end-to-end information about using features in
Netcool/Impact.
Intended audience
This publication is for users who are responsible for creating Netcool/Impact data
models, writing Netcool/Impact policies, and running Netcool/Impact services.
Publications
This section lists publications in the Netcool/Impact library and related
documents. The section also describes how to access Tivoli® publications online
and how to order Tivoli publications.
Netcool/Impact library
v Quick Start Guide, CN1LAML
Provides concise information about installing and running Netcool/Impact for
the first time.
v Administration Guide, SC27491804
Provides information about installing, running and monitoring the product.
v Policy Reference Guide, SC27492104
Contains complete description and reference information for the Impact Policy
Language (IPL).
v DSA Reference Guide, SC27491904
Provides information about data source adaptors (DSAs).
v Operator View Guide, SC27492004
Provides information about creating operator views.
v Solutions Guide, SC27492304
Provides end-to-end information about using features of Netcool/Impact.
Accessing terminology online
The IBM® Terminology Web site consolidates the terminology from IBM product
libraries in one convenient location. You can access the Terminology Web site at the
following Web address:
http://www.ibm.com/software/globalization/terminology
Accessing publications online
Publications are available from the following locations:
v The Quick Start DVD contains the Quick Start Guide. Refer to the readme file on
the DVD for instructions on how to access the documentation.
v IBM Knowledge Center web site at http://publib.boulder.ibm.com/infocenter/
tivihelp/v8r1/topic/com.ibm.netcoolimpact.doc6.1.1/welcome.html. IBM posts
publications for all Tivoli products to this site as they become available and
whenever they are updated.
Note: If you print PDF documents on paper other than letter-sized paper, set
the option in the File → Print window that allows Adobe Reader to print
letter-sized pages on your local paper.
v Tivoli Documentation Central at http://www.ibm.com/tivoli/documentation.
You can access publications of the previous and current versions of
Netcool/Impact from Tivoli Documentation Central.
v The Netcool/Impact wiki contains additional short documents and additional
information and is available at https://www.ibm.com/developerworks/
mydeveloperworks/wikis/home?lang=en#/wiki/Tivoli%20Netcool%20Impact.
Ordering publications
You can order many Tivoli publications online at http://
www.elink.ibmlink.ibm.com/publications/servlet/pbi.wss.
You can also order by telephone by calling one of these numbers:
v In the United States: 800-879-2755
v In Canada: 800-426-4968
In other countries, contact your software account representative to order Tivoli
publications. To locate the telephone number of your local representative, perform
the following steps:
1. Go to http://www.elink.ibmlink.ibm.com/publications/servlet/pbi.wss.
2. Select your country from the list and click Go.
3. Click About this site in the main panel to see an information page that
includes the telephone number of your local representative.
Accessibility
Accessibility features help users with a physical disability, such as restricted
mobility or limited vision, to use software products successfully. In this release, the
Netcool/Impact console does not meet all the accessibility requirements.
Tivoli technical training
For Tivoli technical training information, refer to the following IBM Tivoli
Education Web site at http://www.ibm.com/software/tivoli/education.
Support for problem solving
If you have a problem with your IBM software, you want to resolve it quickly. This
section describes the following options for obtaining support for IBM software
products:
v “Obtaining fixes”
v “Receiving weekly support updates” on page ix
v “Contacting IBM Software Support” on page ix
Obtaining fixes
A product fix might be available to resolve your problem. To determine which
fixes are available for your Tivoli software product, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/
support.
2. Navigate to the Downloads page.
3. Follow the instructions to locate the fix you want to download.
4. If there is no Download heading for your product, supply a search term, error
code, or APAR number in the search field.
For more information about the types of fixes that are available, see the IBM
Software Support Handbook at http://www14.software.ibm.com/webapp/set2/sas/
f/handbook/home.html.
Receiving weekly support updates
To receive weekly e-mail notifications about fixes and other software support news,
follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/
support.
2. Click My IBM in the toolbar. Click My technical support.
3. If you have already registered for My technical support, sign in and skip to
the next step. If you have not registered, click register now. Complete the
registration form using your e-mail address as your IBM ID and click Submit.
4. The Edit profile tab is displayed.
5. In the first list under Products, select Software. In the second list, select a
product category (for example, Systems and Asset Management). In the third
list, select a product sub-category (for example, Application Performance &
Availability or Systems Performance). A list of applicable products is
displayed.
6. Select the products for which you want to receive updates.
7. Click Add products.
8. After selecting all products that are of interest to you, click Subscribe to email
on the Edit profile tab.
9. In the Documents list, select Software.
10. Select Please send these documents by weekly email.
11. Update your e-mail address as needed.
12. Select the types of documents you want to receive.
13. Click Update.
If you experience problems with the My technical support feature, you can obtain
help in one of the following ways:
Online
Send an e-mail message to erchelp@u.ibm.com, describing your problem.
By phone
Call 1-800-IBM-4You (1-800-426-4409).
World Wide Registration Help desk
For worldwide support information, check the details in the following link:
https://www.ibm.com/account/profile/us?page=reghelpdesk
Contacting IBM Software Support
Before contacting IBM Software Support, your company must have an active IBM
software maintenance contract, and you must be authorized to submit problems to
IBM. The type of software maintenance contract that you need depends on the
type of product you have:
v For IBM distributed software products (including, but not limited to, Tivoli,
Lotus®, and Rational® products, and DB2® and WebSphere® products that run on
Windows or UNIX operating systems), enroll in Passport Advantage® in one of
the following ways:
Online
Go to the Passport Advantage Web site at http://www-306.ibm.com/
software/howtobuy/passportadvantage/pao_customers.htm .
By phone
For the phone number to call in your country, go to the IBM Worldwide
IBM Registration Helpdesk Web site at https://www.ibm.com/account/
profile/us?page=reghelpdesk.
v For customers with Subscription and Support (S & S) contracts, go to the
Software Service Request Web site at https://techsupport.services.ibm.com/ssr/
login.
v For customers with IBMLink, CATIA, Linux, OS/390®, iSeries, pSeries, zSeries,
and other support agreements, go to the IBM Support Line Web site at
http://www.ibm.com/services/us/index.wss/so/its/a1000030/dt006.
v For IBM eServer™ software products (including, but not limited to, DB2 and
WebSphere products that run in zSeries, pSeries, and iSeries environments), you
can purchase a software maintenance agreement by working directly with an
IBM sales representative or an IBM Business Partner. For more information
about support for eServer software products, go to the IBM Technical Support
Advantage Web site at http://www.ibm.com/servers/eserver/techsupport.html.
If you are not sure what type of software maintenance contract you need, call
1-800-IBMSERV (1-800-426-7378) in the United States. From other countries, go to
the contacts page of the IBM Software Support Handbook on the Web at
http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and
click the name of your geographic region for phone numbers of people who
provide support for your location.
To contact IBM Software support, follow these steps:
1. “Determining the business impact”
2. “Describing problems and gathering information” on page xi
3. “Submitting problems” on page xi
Determining the business impact
When you report a problem to IBM, you are asked to supply a severity level. Use
the following criteria to understand and assess the business impact of the problem
that you are reporting:
Severity 1
The problem has a critical business impact. You are unable to use the
program, resulting in a critical impact on operations. This condition
requires an immediate solution.
Severity 2
The problem has a significant business impact. The program is usable, but
it is severely limited.
Severity 3
The problem has some business impact. The program is usable, but less
significant features (not critical to operations) are unavailable.
Severity 4
The problem has minimal business impact. The problem causes little impact
on operations, or a reasonable circumvention to the problem was
implemented.
Describing problems and gathering information
When describing a problem to IBM, be as specific as possible. Include all relevant
background information so that IBM Software Support specialists can help you
solve the problem efficiently. To save time, know the answers to these questions:
v Which software versions were you running when the problem occurred?
v Do you have logs, traces, and messages that are related to the problem
symptoms? IBM Software Support is likely to ask for this information.
v Can you re-create the problem? If so, what steps were performed to re-create the
problem?
v Did you make any changes to the system? For example, did you make changes
to the hardware, operating system, or networking software?
v Are you currently using a workaround for the problem? If so, be prepared to
explain the workaround when you report the problem.
Submitting problems
You can submit your problem to IBM Software Support in one of two ways:
Online
Click Submit and track problems on the IBM Software Support site at
http://www.ibm.com/software/support/probsub.html. Type your
information into the appropriate problem submission form.
By phone
For the phone number to call in your country, go to the contacts page of
the IBM Software Support Handbook at http://www14.software.ibm.com/
webapp/set2/sas/f/handbook/home.html and click the name of your
geographic region.
If the problem you submit is for a software defect or for missing or inaccurate
documentation, IBM Software Support creates an Authorized Program Analysis
Report (APAR). The APAR describes the problem in detail. Whenever possible,
IBM Software Support provides a workaround that you can implement until the
APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
Software Support Web site daily, so that other users who experience the same
problem can benefit from the same resolution.
Conventions used in this publication
This publication uses several conventions for special terms and actions, operating
system-dependent commands and paths, and margin graphics.
Typeface conventions
This publication uses the following typeface conventions:
Bold
v Lowercase commands and mixed case commands that are otherwise
difficult to distinguish from surrounding text
v Interface controls (check boxes, push buttons, radio buttons, spin
buttons, fields, folders, icons, list boxes, items inside list boxes,
multicolumn lists, containers, menu choices, menu names, tabs, property
sheets), labels (such as Tip:, and Operating system considerations:)
v Keywords and parameters in text
Italic
v Citations examples: titles of publications, diskettes, and CDs
v Words defined in text (example: a nonswitched line is called a
point-to-point line)
v Emphasis of words and letters (words as words example: "Use the word
that to introduce a restrictive clause."; letters as letters example: "The
LUN address must start with the letter L.")
v New terms in text (except in a definition list): a view is a frame in a
workspace that contains data.
v Variables and values you must provide: ... where myname represents....
Monospace
v Examples and code examples
v File names, programming keywords, and other elements that are difficult
to distinguish from surrounding text
v Message text and prompts addressed to the user
v Text that the user must type
v Values for arguments or command options
PDF code examples with single quotation marks
How to resolve issues with code examples that contain single quotation marks
when they are copied from the PDF documentation.
Throughout the documentation, there are code examples that you can copy and
paste into the product. When a code or policy example that contains single
quotation marks is copied from the PDF documentation, the single quotation
marks are not preserved and you must correct them manually. To avoid this issue,
copy and paste the code example content from the HTML version of the
documentation.
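The copy-and-paste issue described above can also be worked around in an editor or script. The following JavaScript sketch is an illustration only (the fixQuotes helper is not part of the product): it replaces the typographic single quotation marks that a PDF copy often produces with plain single quotes.

```javascript
// Hypothetical helper, not shipped with the product: normalize the
// typographic (curly) single quotation marks that copying from a PDF
// can introduce into a code example.
function fixQuotes(example) {
  // U+2018 and U+2019 are the left and right curly single quotes.
  return example.replace(/[\u2018\u2019]/g, "'");
}

// A fragment as it might arrive from a PDF copy, with curly quotes:
var pasted = "Log(\u2018hello\u2019);";
console.log(fixQuotes(pasted)); // Log('hello');
```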
Operating system-dependent variables and paths
This publication uses the UNIX convention for specifying environment variables
and for directory notation.
When you use the Windows command line, replace the $variable with the
%variable% for environment variables and replace each forward slash (/) with a
backslash (\) in directory paths. The names of environment variables are not
always the same in the Windows and UNIX environments. For example, %TEMP%
in Windows environments is equivalent to $TMPDIR in UNIX environments.
Note: If you are using the bash shell on a Windows system, you can use the UNIX
conventions.
v On UNIX systems, the default installation directory is /opt/IBM/tivoli/impact.
v On Windows systems, the default installation directory is C:\Program
Files\IBM\Tivoli\impact.
Windows information, steps, and processes are documented when they differ from
UNIX systems.
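The two substitution rules above can be applied mechanically. The following JavaScript sketch is an illustration only (the function name and the $IMPACT_HOME variable are examples, not product artifacts); note that it does not account for variables whose names differ between platforms, such as %TEMP% and $TMPDIR.

```javascript
// Illustrative sketch: convert a UNIX-style variable-and-path string to
// its Windows command-line equivalent by applying the two rules above.
function toWindowsStyle(unixPath) {
  return unixPath
    // Rule 1: $variable becomes %variable%.
    .replace(/\$(\w+)/g, "%$1%")
    // Rule 2: each forward slash becomes a backslash.
    .replace(/\//g, "\\");
}

console.log(toWindowsStyle("$IMPACT_HOME/etc/server.props"));
// %IMPACT_HOME%\etc\server.props
```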
xii
Netcool/Impact: Solutions Guide
Chapter 1. Using Netcool/Impact Solutions
A solution is an implementation of Netcool/Impact that provides a specific type of
event management functionality.
This section contains information about creating data sources, data types, services
and policies to set up event management. It also contains end-to-end information
about the following features in Netcool/Impact:
v Event enrichment
v Netcool/Impact as a UI data provider
v Visualizing data from the UI data provider in the IBM Dashboard Application
Services Hub in Jazz for Service Management
v Visualizing data from the Netcool/Impact self service dashboards in the IBM
Dashboard Application Services Hub in Jazz for Service Management
v Working with OSLC and Netcool/Impact
v Setting up Service Level Objectives (SLO) Reporting
v Configuring Event Isolation and Correlation
v Configuring Maintenance Window Management
v SLO reports
Solution components
The components of a solution are a data model, services, and policies.
Most solutions use a combination of these three components.
Data models
A data model is a model of the business and metadata used in a Netcool/Impact
solution.
A data model consists of data sources, data types, data items, links, and event
sources.
Working with services
Services are runnable components of the Impact Server that you start and stop
by using either the GUI or the CLI.
Policies
A policy is a set of operations that you want Netcool/Impact to perform.
These operations are specified in one of two programming languages: JavaScript
or the Netcool/Impact policy language (IPL).
Solution types
You can use Netcool/Impact to implement a wide variety of solution types. Some
common types are event enrichment, X events in Y time, event notification, and
event gateways.
© Copyright IBM Corp. 2006, 2016
1
Event enrichment solution
Event enrichment is the process by which Netcool/Impact monitors an event
source for new events, looks up information related to them in an external data
source and then adds the information to them.
An event enrichment solution consists of the following components:
v A data model that represents the data you want to add to events
v An OMNIbus event reader service that monitors the event source
v One or more event enrichment policies that look up information related to the
events and add the information to them
For a sample event enrichment solution, see Chapter 7, “Event enrichment
tutorial,” on page 89.
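As an illustration, the lookup-and-update step of an enrichment policy might look like the following IPL sketch. The Customer data type, its Node and Owner fields, and the @Owner event field are assumptions for this example, not part of any shipped solution:

```
// Look up rows in the (hypothetical) Customer data type whose Node
// field matches the Node field of the current event.
MyCustomers = GetByFilter("Customer", "Node = '" + @Node + "'", false);

// Num is set by GetByFilter. If a match is found, copy data from the
// first matching data item into the event, then return the modified
// event to the event source.
if (Num > 0) {
  @Owner = MyCustomers[0].Owner;
  ReturnEvent(EventContainer);
}
```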
X events in Y time solution
X events in Y time is the process in which Netcool/Impact monitors an event
source for groups of events that occur together and takes the appropriate action
based on the event information.
An X events in Y time solution consists of the following components:
v A data model that contains internal data types that are used to store metadata
for the solution.
v An OMNIbus event reader service that monitors the event source.
v The hibernation activator service, which wakes hibernating policies at timed
intervals.
v One or more policies that check the event source to see if a specified group of
events is occurring and then take the appropriate action.
For information about removing disassociated files that result from XinY policy, see
the Troubleshooting section.
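Schematically, the detection step of such a policy can be sketched in IPL as follows. The AlertsTable data type name, the 300-second window, and the threshold of five events are illustrative assumptions; a production policy would also use the hibernation functions described in the Policy Reference Guide:

```
// Count events from the same node in the last five minutes by running
// a count-only query against the (hypothetical) AlertsTable data type.
TimeWindow = 300;
Threshold = 5;
Filter = "Node = '" + @Node + "' AND FirstOccurrence > " + (GetDate() - TimeWindow);
GetByFilter("AlertsTable", Filter, true);

// Num is set by GetByFilter; take action only if the group threshold is met.
if (Num >= Threshold) {
  Log("X events in Y time condition met for node: " + @Node);
}
```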
Event notification solution
Event notification is the process by which Netcool/Impact monitors an event
source for new events and then notifies an administrator or users when a certain
event or combination of events occurs.
Event notification is often part of a more complicated event management
automation that includes aspects of Netcool/Impact functionality.
An event notification solution has the following components:
v An event reader service that monitors the event source
v An e-mail sender service that sends e-mail to administrators or users, or the
JRExec server used to launch an external notification program
v One or more policies that perform the event notification
Event gateway solution
An event gateway is an implementation of Netcool/Impact in which you send
event information from the ObjectServer to a third-party application for processing.
An event gateway solution has the following components:
v A data model that includes a data source and data type representing the
third-party application
v An OMNIbus event reader service that monitors the event source
v One or more policies that send event information to the third-party application
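For example, a gateway policy might copy selected event fields into a data type backed by the third-party application. The TroubleTicket data type and its fields are assumptions for this sketch:

```
// Build a new data item containing the event information to forward.
Ticket = NewObject();
Ticket.Node = @Node;
Ticket.Summary = @Summary;
Ticket.Severity = @Severity;

// Insert the item into the (hypothetical) TroubleTicket data type,
// which writes it to the third-party application through its data source.
AddDataItem("TroubleTicket", Ticket);
```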
Setting up a solution
To set up a Netcool/Impact solution, you create a data model, set up services, and
create policies.
For more information, see “Setting up a solution.”
Creating a data model
While it is possible to design a solution that does not require a data model, almost
all uses of Netcool/Impact require the ability to handle internal or external data of
some sort.
To create a data model, you create a data source for each real world source of data
that you want to use. Then, you create a data type for each structural element (for
example, a database table) that contains the data you want to use.
Optionally, you can create dynamic links between data types or static links
between data items that make it easier to traverse the data programmatically
from within a policy.
Setting up services
Different types of solutions require different sets of services, but most solutions
require an OMNIbus event reader.
Solutions that use hibernations also require the hibernating policy activator.
Solutions that receive or send email require the email reader and email sender
services.
Services fall into two categories. The first category is built-in services,
such as the event processor and the command-line service manager; you can have
only a single instance of this type of service in Netcool/Impact. The second
category is services such as the event reader and policy activator; you can
create and configure multiple instances of this type of service.
Creating policies
You create policies in the GUI Server, which contains a policy editor, a
syntax checker, and other tools that you need to write, run, test, and debug
your policies.
For more information, see the Managing Policies section in IBM Knowledge
Center or the Netcool/Impact Policy Reference Guide.
Running a solution
To start a solution, you start each of the service components.
Start the components in the following order:
v Hibernating policy activator, e-mail sender, and command execution manager.
v Event processor
v Event reader, event listener, e-mail reader, or policy activator
You can configure services to run automatically at startup, or you can start them
manually using the GUI and CLI. By default, services that run automatically at
startup run in the proper order. If all other services are already running, starting
services like the event processor that trigger policies effectively starts the solution.
To stop a solution, you stop any services, like the event processor, that trigger your
policies.
Chapter 2. Working with data models
You set up a data model once, when you first design your Netcool/Impact
solution.
After that, you do not need to actively manage the data model unless you change
the solution design. You can view, create, edit, and delete the components of a data
model in the GUI Server.
Data model components
A data model is made up of components that represent real world sources of data
and the actual data inside them.
Data sources
Data sources are elements of the data model that represent real world
sources of data in your environment.
Data types
Data types are elements of the data model that represent sets of data
stored in a data source.
Data items
Data items are elements of the data model that represent actual units of
data stored in a data source.
Links
Links are elements of the data model that define relationships between
data types and data items.
Event sources
Event sources are special types of data sources. Each event source
represents an application that stores and manages events.
Data sources
Data sources are elements of the data model that represent real world sources of
data in your environment.
These sources of data include third-party SQL databases, LDAP directory servers,
or other applications such as messaging systems and network inventory
applications.
Data sources contain the information that you need to connect to the external data.
You create a data source for each physical source of data that you want to use in
your Netcool/Impact solution. When you create an SQL database, LDAP, or
Mediator data type, you associate it with the data source that you created. All
associated data types are listed under the data source in the Data Sources and
Types task pane.
Configuring data types
Data types are elements of the data model that represent sets of data stored in a
data source.
The structure of data types depends on the category of data source where it is
stored. For example, if the data source is an SQL database, each data type
corresponds to a database table. If the data source is an LDAP server, each data
type corresponds to a type of node in the LDAP hierarchy.
Working with data items
Data items are elements of the data model that represent actual units of data stored
in a data source.
The structure of this unit of data depends on the category of the associated data
source. For example, if the data source is an SQL database data type, each data
item corresponds to a row in a database table. If the data source is an LDAP
server, each data item corresponds to a node in the LDAP hierarchy.
Working with links
Links are elements of the data model that define relationships between data types
and data items.
You set up links after you create the data types that are required by your solution.
Static links define relationships between data items, and dynamic links define
relationships between data types. Links are an optional component of the
Netcool/Impact data model. When you write policies, you can use the GetByLinks
function to traverse the links and retrieve data items that are linked to other data
items.
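For example, assuming the dynamic links and data type names used in the enterprise service model later in this chapter, a policy could traverse from a department to its linked file servers. The Name field and the filter value are illustrative assumptions:

```
// Retrieve the Department data items matching a (hypothetical) Name field.
Depts = GetByFilter("Department", "Name = 'Payments'", false);

// Follow the dynamic links from those departments to the linked
// Fileserver data items, retrieving at most 1000 of them.
Servers = GetByLinks({"Fileserver"}, "", 1000, Depts);
```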
Setting up a data model
To set up a data model, you must first determine what data you need to use in
your solution and where that data is stored. Then, you create a data source for
each real world source of data and create a data type for each structural element
that contains the data you need.
Procedure
1. Create data sources
Identify the data you want to use and where it is stored. Then, you create one
data source for each real world source of data. For example, if the data is
stored in one MySQL database and one LDAP server, you must create one
MySQL and one LDAP data source.
2. Create data types
After you have set up the data sources, you create the required data types. You
must create one data type for each database table (or other data element,
depending on the data source) that contains data you want to use. For example,
if the data is stored in two tables in an Oracle database, you must create one
data type for each table.
3. Optional: Create data items
For most data types, the best practice is to create data items using the native
tools supplied by the data source. For example, if your data source is an Oracle
database, you can add any required data to the database using the native
Oracle tools. If the data source is the internal data repository, you must create
data items using the GUI.
4. Optional: Create links
After you create data types, you can define linking relationships between them
using dynamic links. You can also define linking relationships between internal
data items using static links. That makes it easier to traverse the data
programmatically from within a policy. Use of links is optional.
5. Create event sources
In most solutions, events are retrieved from a Netcool/OMNIbus ObjectServer.
The ObjectServer is represented in the data model as an event source.
Data model architecture
This diagram shows the relationship between data sources, data types, and data
items in a Netcool/Impact solution.
Figure 1. Data Model Architecture
Data model examples
The examples provided here are scaled-down versions of the data models you
might be required to implement in the real world.
They are designed to give you an idea of how all the different parts of a data
model work together, rather than provide a realistic sampling of every type of data
you might access with Netcool/Impact.
If you are uncertain about the definition of major concepts mentioned in these
examples, such as data sources or data types, you can skip ahead to the next four
chapters of this book, which provide detailed information about the various
components of the data model. Once you have a better understanding of these
concepts, you can return to this section.
Enterprise service model
The enterprise service model is a data model that is designed for use in an
enterprise service environment.
The enterprise service environment is one of the most common network
management scenarios for the Netcool product suite. While the data model
described in this section is relatively simple, real world enterprise environments
can often rival a small telecommunications or ISP environment in complexity.
The goal of the data model in this example is to provide the means to access a set
of business data that has been previously collected and stored in an external
database. This business data contains information about the users, departments,
locations, and servers in the enterprise. If you were designing a complete solution
for this environment, you would tap into this data model from within policies
whenever you needed to access this data.
The enterprise service environment in this example consists of 125 users in five
business departments, spread over three locations. Each user in the environment
has a desktop computer and uses it to connect to a file server and an email server.
The solution proposed to manage this environment is designed to monitor the
file servers and email servers for uptime. When a file server goes down, it
notifies the on-call administrator through email with a service request
message. It also determines which business units are served by the file server
and sends an email to each user in the unit with a service interruption
message. When an email server goes down, it notifies the on-call administrator
through a pager.
All the data used by this solution is stored in a MySQL database. This database
has six tables, named USER, ADMIN, DEPT, LOC, FILESERVER, and EMAILSERVER.
Enterprise service model elements
The enterprise service model consists of data sources, data types, data items, links,
and event sources.
Data sources
Because all the data needed is stored in a single MySQL database, this data
model only requires one data source. For the purposes of this example, the
data source is named MYSQL_01.
Data types
Each table in the MySQL database is represented by a single SQL database
data type. For the purposes of this example, the data types are named
User, Admin, Department, Location, Fileserver, and Emailserver. In this
case, the names of the data types are the same as the table names.
Data items
Because the data is stored in an SQL database, the data items in the model
are rows in the corresponding database tables.
Links
Links represent relationships between data types. The relationship between
the data types in this data model can be described as a set of the following
dynamic links:
v User -> Department
v User -> Location
v Location -> Emailserver
v Department -> Fileserver
v Emailserver -> Location
v Fileserver -> Department
v Admin -> Location
Event sources
This data model has a single event source, which represents the
Netcool/OMNIbus ObjectServer that stores events related to activity in
the environment.
Web hosting model
The Web hosting model is a data model designed for use in a Web hosting
environment.
The Web hosting environment is another common network management scenario
for the Netcool product suite. Managing a Web hosting environment presents some
unique challenges. It requires you to assure the uptime of services, such as the
availability of customer websites, that consist of groups of interrelated software
and hardware devices, in addition to assuring the uptime of the devices
themselves. As with the other examples in the chapter, the web services hosting
environment that is described here is scaled down from what you might encounter
in the real world.
The goal of the data model in this example is to provide the means to access a set
of device inventory and service management data that is generated and updated in
real time by a set of third-party applications. This data contains information about
the server hardware that is in racks in the hosting facility and various other data
that describes how instances of HTTP and email server software is installed and
configured on the hardware. Policies that are developed for use with this
information would tap into this data model whenever they needed to access this
data.
The Web services hosting model in this example consists of 10 HTTP server
clusters and three email server clusters, spread over 20 machines. Each HTTP
cluster and each email cluster consists of one primary server and one backup
server. This environment serves 15 customers whose use is distributed across
one or more clusters, depending on their service agreements.
The solution that manages this environment is designed to monitor the uptime of
the HTTP and email services. When a problem occurs with one of these services, it
determines the identity of the cluster that is causing the problem and the hardware
where the component server instances are installed. It then modifies the original
alert data in Netcool/OMNIbus to reflect this information. This solution also
determines the customer that is associated with the service failure and sets the
priority of the alert to reflect the customer's service agreement.
The data in this model is stored in two separate Oracle databases. The first
database has five tables that are named Node, HTTPInstance, HTTPCluster,
EmailInstance, and EmailCluster. The second database is a customer service
database that has, among other tables, one named Customer.
Web hosting model elements
The Web hosting model consists of data sources, data types, data items, and links.
Data sources
Because this model has two real world sources of data, it requires two data
sources. For this example, these sources are called ORACLE_01 and
ORACLE_02.
Data types
Each table in the two Oracle databases is represented by a single SQL database
data type. For the purposes of this example, the data types are named
Node, HTTPInstance, HTTPCluster, EmailInstance, EmailCluster, and
Customer.
Data items
Because the data is stored in an SQL database, the data items in the model
are rows in the corresponding database tables.
Links
The relationship between the data types in this data model can be
described as a set of the following dynamic links:
v HTTPInstance -> Node
v EmailInstance -> Node
v HTTPInstance -> HTTPCluster
v EmailInstance -> EmailCluster
v Customer -> HTTPCluster
v Customer -> HTTPInstance
Working with data sources
A data source is an element of the data model that represents a real world source
of data in your environment.
Data sources overview
Data sources provide an abstraction layer between Netcool/Impact and the real
world sources of data.
Internally, data sources provide connection and other information that
Netcool/Impact uses to access the data. When you create a data model, you must
create one data source for every real world source of data you want to access in a
policy.
The internal data repository of Netcool/Impact can also be used as a data source.
Data source categories
Netcool/Impact supports four categories of data sources.
SQL database data sources
An SQL database data source represents a relational database or another
source of data that can be accessed using an SQL database DSA.
LDAP data sources
Lightweight Directory Access Protocol (LDAP) data sources represent
LDAP directory servers.
Mediator data sources
Mediator data sources represent third-party applications that are integrated
with Netcool/Impact through the DSA Mediator.
JMS data sources
A Java™ Message Service (JMS) data source abstracts the information that is
required to connect to a JMS implementation.
SQL database data sources
An SQL database data source represents a relational database or another source of
data that can be accessed using an SQL database DSA.
A wide variety of commercial relational databases are supported, such as Oracle,
Sybase, and Microsoft SQL Server. In addition, freely available databases such
as MySQL and PostgreSQL are supported. The Netcool/OMNIbus ObjectServer
is also supported as an SQL data source.
The configuration properties for the data source specify connection information for
the underlying source of data. Some examples of SQL database data sources are:
v A DB2 database
v A MySQL database
v An application that provides a generic ODBC interface
v A character-delimited text file
You create SQL database data sources using the GUI. You must create one such
data source for each database that you want to access. When you create an SQL
database data source, you need to specify such properties as the host name and
port where the database server is running, and the name of the database. For the
flat file DSA and other SQL database DSAs that do not connect to a database
server, you must specify additional configuration properties.
Note that SQL database data sources are associated with databases rather than
database servers. For example, an Oracle database server can host one or a dozen
individual databases. Each SQL database data source can be associated with one
and only one database.
LDAP data sources
Lightweight Directory Access Protocol (LDAP) data sources represent LDAP
directory servers.
Netcool/Impact supports the OpenLDAP and Microsoft Active Directory servers.
You create LDAP data sources in the GUI Server. You must create one data source
for each LDAP server that you want to access. The configuration properties for the
data source specify connection information for the LDAP server and any required
security or authentication information.
Mediator data sources
Mediator data sources represent third-party applications that are integrated with
Netcool/Impact through the DSA Mediator.
These data sources include a wide variety of network inventory, network
provisioning, and messaging system software. In addition, providers of XML and
SNMP data can also be used as mediator data sources.
Typically Mediator DSA data sources and their data types are installed when you
install a Mediator DSA. The data sources are available for viewing and, if
necessary, for creating or editing.
Attention: For a complete list of supported data sources, see your IBM
account manager.
Internal data repository
The internal data repository is a built-in data source for Netcool/Impact.
The primary responsibility of the internal data repository is to store system data.
Restriction: You must use internal data types solely for testing and demonstrating
Netcool/Impact, or for low load tasks.
JMS data source
A Java Message Service (JMS) data source abstracts the information that is
required to connect to a JMS implementation.
This data source is used by the JMSMessageListener service and by the
SendJMSMessage and ReceiveJMSMessage functions.
Data source architecture
This diagram shows the relationship between Netcool/Impact, data sources, and
the real world source of data in your environment.
Figure 2. Data Source Architecture
Setting up data sources
When you create a Netcool/Impact data model, you must set up a data source for
each real world source of data in your environment.
You set up data sources using the GUI. To set up a data source, you need to get
the connection information for the data source, and then use the GUI to create and
configure the data source.
Creating data sources
Before you begin
Before you create a data source, you must get the connection information for
the underlying application. The connection information that you need varies
depending on the type of data source. For most SQL database data sources, this
information is the host name and port where the application is running, and a
valid user name and password. For LDAP and Mediator data sources, see the DSA
Reference Guide for the required connection information.
Use this procedure to create a user-defined data source.
Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want
to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar.
Select a template for the data source that you want to create. The tab for the
data source opens.
4. Complete the information, and click Save to create the data source.
Working with data types
Data types are elements of the data model that represent sets of data stored
in a data source.
Data types overview
Data types describe the content and structure of the data in the data source table
and summarize this information so that it can be accessed during the execution of
a policy.
Data types provide an abstract layer between Netcool/Impact and the associated
set of data in a data source. Data types are used to locate the data you want to use
in a policy. For each table or other data structure in your data source that contains
information you want to use in a policy, you must create one data type. To use a
data source in policies, you must create data types for it.
Attention: Some system data types are not displayed in the GUI. You can manage
these data types by using the Command Line Interface (CLI).
The structure of the data that is stored in a data source depends on the category of
the data source where the data is stored. For example, if the data source is an SQL
database, each data type corresponds to a database table. If the data source is an
LDAP server, each data type corresponds to a type of node in the LDAP hierarchy.
A data type definition contains the following information:
v The name of the underlying table or other structural element in the data source
v A list of fields that represent columns in the underlying table or another
structural element (for example, a type of attribute in an LDAP node)
v Settings that define how Netcool/Impact caches data in the data type
Data type categories
Netcool/Impact supports four categories of data types.
SQL database data types
SQL database data types represent data stored in a database table.
LDAP data types
LDAP data types represent data stored at a certain base context level of an
LDAP hierarchy.
Mediator data types
Mediator data types represent data that is managed by third-party
applications such as a network inventory manager or a messaging service.
Internal data types
You use internally stored data types to model data that does not exist, or
cannot be easily created, in external databases.
SQL database data types
SQL database data types represent data stored in a database table.
Each data item in an SQL database data type corresponds to a row in the table.
Each field in the data item corresponds to a column. An SQL database data type
can include all the columns in a table or just a subset of the columns.
LDAP data types
LDAP data types represent data stored at a certain base context level of an LDAP
hierarchy.
Each data item in an LDAP data type corresponds to an LDAP node that exists at
that level and each field corresponds to an LDAP attribute. LDAP data types are
read-only, which means that you cannot add, update or delete data items in an
LDAP data type.
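For example, data items in an LDAP data type can be retrieved from a policy with GetByFilter, using an LDAP-style search filter string. The LdapUser data type and the uid and cn attributes are assumptions for this sketch:

```
// Retrieve nodes from a (hypothetical) LDAP data type using an
// LDAP search filter. The returned data items are read-only.
Users = GetByFilter("LdapUser", "(uid=jdoe)", false);

// Num is set by GetByFilter.
if (Num > 0) {
  Log("Found user entry: " + Users[0].cn);
}
```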
Mediator data types
Mediator data types represent data that is managed by third-party applications
such as a network inventory manager or a messaging service.
Usually the data types, and their associated data sources are installed when you
install the Mediator DSA (CORBA or Direct), so you do not have to create them.
The installed data types are available for viewing and, if necessary, for editing.
Typically, Mediator data types do not represent data stored in database
tables. Rather, they represent collections of data that are stored and
provided by the data source in various other formats, for example, as sets of
data objects or as messages.
These data types are typically created using scripts or other tools provided by the
corresponding DSA. For more information about the mediator data types used
with a particular DSA, see the DSA Reference Guide.
Internal data types
You use internally stored data types to model data that does not exist, or
cannot be easily created, in external databases.
This includes working data used by policies, which can contain copies of external
data or intermediate values of data. This data is stored directly in a data repository,
and you can use it as a data source. To create and access this data you define
internal data types.
Netcool/Impact provides the following categories of internal data types:
System data types
System data types are used to store and manage data used internally by
Netcool/Impact.
Predefined internal data types
Pre-defined data types are special data types that are stored in the global
repository.
User-defined internal data types
Internal data types that you create are user-defined internal data types.
Restriction: Use internal data types only for prototyping and demonstrating
Netcool/Impact.
System data types:
System data types are used to store and manage data used internally by
Netcool/Impact.
These types include Policy, Service, and Hibernation. In most cases, you do not
directly access the data in these data types. However, there are some occasions in
which you can use them in a policy. Some examples are when you start a policy
from within another policy or work with hibernating policies.
Predefined internal data types:
Pre-defined data types are special data types that are stored in the global
repository.
The following predefined internal data types are provided:
v Schedule
v TimeRangeGroup
v Document
You use Schedule and TimeRangeGroup data types to manage Netcool/Impact
scheduling. You can use the Document data type to store information about URLs
located on your intranet.
Predefined data types are special data types that are stored in Netcool/Impact. The
non-editable pre-defined data types are:
v TimeRangeGroup
v LinkType
v Hibernation
The following predefined data types can be edited to add new fields:
v Schedule
v Document
v FailedEvent
v ITNM
Restriction: You cannot edit or delete existing fields. None of the pre-defined data
types can be deleted.
User-defined internal data types:
Internal data types that you create are user-defined internal data types.
The data items in these data types are stored in the internal data repository, rather
than in an external data source. User-defined data types function in much the same
way as SQL database data types. You must use internal data types solely for
testing and demonstrating Netcool/Impact, or for low load tasks. User-defined
internal data types are slower than external SQL database data types.
Data type fields
A field is a unit of data as defined within a data type. The nature of this unit of
data depends on the category of the data type that contains it.
If the data type corresponds to a table in an SQL database, each field corresponds
to a table column. If the data type corresponds to a base context in an LDAP
server, each field corresponds to a type of LDAP attribute.
When you set up an SQL database data type, the fields are auto-populated from
the underlying table by Netcool/Impact. For other data types, you must manually
define the fields when the data type is created.
ID
By default, the ID field is the same as the name of the data element that
corresponds to the field in the underlying data source. For example, if the
data type is an SQL database data type, the underlying field corresponds
to a column in the table. By default, the field ID is the same as the column
name in the database.
You can change the ID to any other unique name. For example, if the
underlying column names in the data source are unreadable, or are
difficult to type and remember, you can use the ID field to provide a more
easy-to-use alias for the field.
The field ID overrides the actual name and display name attributes for the
field in all cases.
Field Name
The field name attribute is the name of the corresponding data element in
the underlying data source. Although you can use the GUI to freely edit
this field, it must be identical to how it displays in the data source. If these
fields are not identical, an error occurs when the data type is accessed.
Format
The format is the data format of the field. For SQL database data types,
Netcool/Impact auto-discovers the columns in the underlying table and
automatically deduces the data format for each field when you set up the
data type. For other data types, you must manually specify the format for
each field that you create.
STRING
Represents text strings up to 4 KB in length.
INTEGER
Represents whole numbers.
LONG
Represents long whole numbers.
FLOAT
Represents floating point decimal numbers.
DOUBLE
Represents double-precision floating point decimal numbers.
DATE
Represents formatted date/time strings.
TIMESTAMP
Represents a time stamp in the following format: YYYY-MM-DD
HH:MM:SS.
Restriction: The Microsoft SQL server table treats the
TIMESTAMP field as a non-date time field. The JDBC driver
returns the TIMESTAMP field as a row version binary data type,
which is discovered as STRING in the Microsoft SQL server data
type. To resolve this issue, in the Microsoft SQL Server table, use
DATETIME to display the proper time format instead of
TIMESTAMP.
BOOLEAN
Represents Boolean values of true and false.
CLOB
Represents large-format binary data.
LONG_STRING
Represents text strings up to 16 KB in length (internal data types
only).
PASSWORD_STRING
Represents password strings (internal data types only). The
password shows in the GUI as a string of asterisks, rather than the
actual password text.
Display Name
You can use the display name attribute to specify a label for the field that
is displayed only when you browse data items in the GUI. This attribute
does not otherwise affect the functions of the data type.
You can use this field to select a field from the menu to label data items
according to the field value. Choose a field that contains a unique value
that can be used to identify the data item, for example, ID. To view the
values on the data item, you must go to View Data Items for the data type
and select the Links icon. Click the data item to display the details.
Description
You can use the description attribute to specify a short description for the
field. This description is only visible when you use the GUI to edit the
data type. Like the display name, it does not otherwise affect the functions
of the data type.
Data type keys
Key fields are fields whose value or combination of values can be used to identify
unique data items in a data type.
For SQL database data types, you must specify at least one key field for each data
type you create. Most often, the key field that you specify is a key field in the
underlying data source. Internal data items contain a default field named KEY that
is automatically used as the data type key.
You can use the policy function called GetByKey to retrieve data from the data type
using the key field value as a query condition. Keys are also used when you create
GetByKey dynamic links between data types.
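The key-based lookup that GetByKey performs can be sketched in Python (an illustrative model only, not Netcool/Impact policy code; the data items and KEY values here are invented):

```python
# Minimal model of key-based data item retrieval (illustrative only).
# Each internal data item carries a default KEY field, as described above.
data_items = [
    {"KEY": "NODE01", "Location": "New York"},
    {"KEY": "NODE02", "Location": "New Jersey"},
]

def get_by_key(items, key_value, max_num=1):
    """Return up to max_num data items whose KEY field equals key_value."""
    matches = [item for item in items if item["KEY"] == key_value]
    return matches[:max_num]

result = get_by_key(data_items, "NODE02")  # one-item list for NODE02
```

The same key fields drive GetByKey dynamic links: the key value of one data item is used as the lookup condition against another data type.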
Setting up data types
When you create a data model, you must set up a data type for each structural
element in a data source whose data you want to use.
For example, if you are using an SQL database data source, you must set up a data
type for each table that contains the data. If you are using an LDAP data source,
you must set up a data type for each base context in the LDAP hierarchy that
contains nodes that you want to access. You set up data types using the GUI.
To set up a data type, you get the name of the structural element (for example, the
table) where the data is located, and then use the GUI to create and configure the
data type.
Getting the name of the structural element
If the data type is an SQL database data type, you must know the fully qualified
name of the underlying table in the database before you can set it up.
This name consists of the database name and the table name. Some databases use
case-sensitive table names, so make sure that you record the proper case when you
get this information. If the data type is an LDAP data type, you must know the
name of the base context level of the LDAP hierarchy where the nodes you want to
access are located.
Auto-populating the data type fields
After you have specified the name of the database and table, the next step is to
auto-populate the data type fields.
You can also specify the fields manually in the same way that you do for internal
data types, but in most cases, using the auto-populate feature saves time and
ensures that the field names are accurate.
When you auto-populate data type fields, the table description is retrieved from
the underlying data source, and a field in the data type is created for each column
in the table. The ID, actual name, and display name for the fields are defined using
the exact column name as it appears in the table.
A set of built-in rules is used to determine the data format for each of the
auto-populated fields. Columns in the database that contain text data, such as
varchar, are represented as string fields. Columns that contain whole numbers,
such as int and integer, are represented as integer fields. Columns that contain
decimal numbers are represented as float fields. Generally, you can automatically
assign the formats for data type fields without having to manually attempt to
recreate the database data formats in the data type.
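A simplified sketch of the kind of built-in rules described above, assuming a reduced mapping table (the product's actual rule set is internal and more complete):

```python
# Simplified sketch of rules that map SQL column types to data type
# field formats. The real rule set is internal to Netcool/Impact;
# this table is an illustrative assumption.
TYPE_RULES = {
    "varchar": "STRING",
    "char": "STRING",
    "int": "INTEGER",
    "integer": "INTEGER",
    "float": "FLOAT",
    "decimal": "FLOAT",
}

def deduce_format(column_type):
    # Text data falls back to STRING when no rule matches.
    return TYPE_RULES.get(column_type.lower(), "STRING")
```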
If you only want a subset of the fields in a table to be represented in the data type,
you can manually remove the unwanted fields after auto-population. Removing
unwanted fields can speed the performance of a data type.
To auto-populate data type fields, you click the Refresh button in the Table
Description area of the Data Type tab. The table description is retrieved from the
data source, and the fields are populated. The fields are displayed in the Table
Description area.
After you auto-populate the data type fields, you can manually change the
attributes of any field definition. Do not change the value of the actual name
attribute. If you change this value, errors will be reported when you try to retrieve
data from the data type.
Specifying a data item filter
The data item filter specifies which rows in the underlying database table can be
accessed as data items in the data type.
This filter is an optional setting. The syntax for the data item filter is the same as
the contents of the WHERE clause in the SQL SELECT statement that is supported by
the underlying database.
For example, if you want to specify that only rows where the Location field is New
York are accessible through this data type, you can use the following data item
filter:
Location = ’New York’
If you want to specify that only rows where the Location is either New York or New
Jersey, you can use the following expression:
Location = ’New York’ OR Location = ’New Jersey’
Make sure that you enclose any strings in single quotation marks.
To specify the data item filter, type the filter string in the Filter text box in the
Data Filter and Ordering area of the data type editor.
Specifying data item ordering
Data item ordering defines the order in which data items are retrieved from the
data type.
The order settings are used both when you retrieve data items using the
GetByFilter function in a policy and when you browse data items using the GUI.
You can order data items in ascending or descending alphanumeric order by any
data type field. Data item ordering is an optional part of the data type
configuration.
You specify data item ordering in the data type configuration as a
comma-separated list of fields, where each field is accompanied with the ASC or
DESC keyword.
For example, to retrieve data items in ascending order by the Name field, you use
the following ordering string:
Name ASC
To retrieve data items in descending order first by the Location field and then in
ascending order by Name, you use the following string:
Location DESC,Name ASC
To specify data item ordering:
1. In the Data Type Editor, scroll down so that the Data Filtering and Ordering
area is visible.
2. Type the data item ordering string in the Order By field.
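The effect of an ordering string can be sketched in Python (an illustrative model, not product code; the field values are invented):

```python
# Sketch: applying an Order By string such as "Location DESC,Name ASC"
# to a list of data items (illustrative only).
def order_items(items, order_by):
    # Apply sort keys in reverse so the first field has highest precedence;
    # Python's stable sort preserves earlier orderings on ties.
    for clause in reversed(order_by.split(",")):
        field, direction = clause.strip().split()
        items = sorted(items, key=lambda item: item[field],
                       reverse=(direction.upper() == "DESC"))
    return items

rows = [
    {"Location": "New Jersey", "Name": "beta"},
    {"Location": "New York", "Name": "alpha"},
    {"Location": "New York", "Name": "zeta"},
]
ordered = order_items(rows, "Location DESC,Name ASC")
```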
Data type caching
You can use data type caching to reduce the total number of queries that are made
against a data source for performance or other reasons.
Caching helps you to decrease the load on the external databases used by
Netcool/Impact. Data caching also increases system performance by allowing you
to temporarily store data items that have been retrieved from a data source.
Important: Caching works best for static data sources and for data sources where
the data does not change often.
Caching works when data is retrieved during the processing of a policy. When you
view data items in the GUI, cached data is retrieved rather than data directly from
the data source.
You can specify caching for external data types to control the number of data items
temporarily stored while policies are processing data. Many data items in the cache
use significant memory but can save bandwidth and time if the same data is
referenced frequently.
Important: Data type caching works with SQL database and LDAP data types.
Internal data types do not require data type caching.
You configure caching on a per data type basis within the GUI. If you do not
specify caching for the data type, each data item is reloaded from the external data
source every time it is accessed.
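The benefit of caching can be sketched with a minimal Python model (illustrative only; the real cache is internal to Netcool/Impact and is configured per data type in the GUI):

```python
# Sketch of per-data-type caching: without a cache, every access hits the
# data source; with one, repeated lookups are served from memory.
class DataTypeCache:
    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn   # loads a data item from the data source
        self.cache = {}
        self.source_hits = 0       # counts queries against the data source

    def get(self, key):
        if key not in self.cache:
            self.source_hits += 1
            self.cache[key] = self.fetch_fn(key)
        return self.cache[key]

cache = DataTypeCache(lambda key: {"KEY": key})
cache.get("NODE01")
cache.get("NODE01")   # served from the cache, no second source query
cache.get("NODE02")
```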
Working with event sources
When you design your solution, you must create one event source for each
application that you want to monitor for events, then you can create event reader
services and associate them with the event source.
Typically, a solution uses a single event source. This event source is most often an
ObjectServer database.
Event sources overview
An event source is a special type of data source that represents an application that
stores and manages events, the most common such application being the
ObjectServer database.
An event is a set of data that represents a status or an activity on a network. The
structure and content of an event varies depending on the device, system, or
application that generated the event but in most cases, events are
Netcool/OMNIbus alerts.
The installer automatically creates a default ObjectServer event source,
defaultobjectserver. This event source is configured using information you provide
during the installation. You can also use other applications as non-ObjectServer
event sources.
After you have set an event source, you do not need to actively manage it unless
you change the solution design but, if necessary, you can use the GUI to modify or
delete event sources.
ObjectServer event sources
The most common event sources are ObjectServer event sources, which represent
instances of the Netcool/OMNIbus ObjectServer database.
ObjectServer events are alerts stored in the alerts.status table of the database.
These alerts have a predefined set of alert fields that can be supplemented by
additional fields that you define.
ObjectServer event sources are monitored using an OMNIbus event reader service.
The event reader service queries the ObjectServer at intervals and retrieves any
new, updated, or deleted events that match its predefined filter conditions. The
event reader then passes each event to the policy engine for processing.
Non-ObjectServer event sources
Non-ObjectServer event sources represent instances of other applications, such as
external databases or messaging systems, that provide events to Netcool/Impact.
Non-ObjectServer events can take a wide variety of forms, depending on the
nature of the event source. For SQL database event sources, an event might be the
contents of a row in a table. For a messaging system event source, an event might
be the contents of a message.
Non-ObjectServer event sources are monitored using an event listener service. The
event listener service passively receives events from the event source and then
passes them to the policy engine for processing.
The DatabaseEventReader service monitors non-ObjectServer data sources. The
Database Event Reader service queries the SQL data source at intervals and
retrieves any new or updated events that match its predefined filter conditions.
The Database Event Reader passes each event to the policy engine for processing.
Event source architecture
This diagram shows how event readers and event listeners interact with their
underlying event management applications.
Figure 3. Event source architecture
Setting up ObjectServer event sources
Use this procedure to set up an ObjectServer event source.
Procedure
v Get the connection information for the ObjectServer.
This information is the host name or IP address of the ObjectServer host system
and the port number. The default port number for the ObjectServer is 4100.
v Create and configure the event source.
For more information, see Configuring the default ObjectServer data source in the
Administration Guide.
v After you create the event source, you can then create and configure an
associated event reader service.
For more information about creating and configuring an event reader service,
see the online help.
Chapter 3. Working with services
You work with services by configuring predefined services, and by creating and
configuring user-defined services.
Services overview
Services perform much of the functionality associated with the Impact Server,
including monitoring event sources, sending and receiving e-mail, and triggering
policies.
The most important service is the OMNIbus event reader, which you can use to
monitor an ObjectServer for new, updated, or deleted events. The event processor,
which processes the events retrieved from the readers and listeners, is also
important to the functioning of Netcool/Impact.
Internal services control the application's standard processes, and coordinate the
performed tasks, for example:
v Receiving events from the ObjectServer and other external databases
v Executing policies
v Responding to and prioritizing alerts
v Sending and receiving e-mail and instant messages
v Handling errors
Some internal services have defaults that you can enable instead of, or in
addition to, configuring your own services. For some of the basic
internal services, it is only necessary to specify whether to write the service log to
a file. For other services, you need to add information such as the port, host, and
startup data.
User defined services are services that you can create for use with a specific policy.
Generally, you set up services once, when you first design your solution. After
that, you do not need to actively manage the services unless you change the
solution design.
To set up services, you must first determine what service functionality you need to
use in your solution. Then, you create and configure the required services using
the GUI. After you have set up the services, you can start and stop them, and
manage the service logs.
Predefined services
Predefined services are services that are created automatically when you install
Netcool/Impact. You can configure predefined services, but you cannot create new
instances of the predefined services and you cannot delete existing ones.
These services are predefined:
v Event processor
v E-mail sender
v Hibernating policy activator
v Policy logger
v Command-line manager
User-defined services
User-defined services are services that you can create, modify, and delete. You can
also use the default instance of these services that are created at installation.
You can create user-defined services by using the defaults that are stored in the
global repository or select them from a list in the services task pane in the
navigation panel. All user-defined services are also listed in the services panel
where you can start them and stop them, just as you do the internal services. You
can add these services to a project as project members.
These services are user-defined:
v Event readers
v Event listeners
v E-mail readers
v Policy activators
OMNIbus event reader service
OMNIbus event readers are services that monitor a Netcool/OMNIbus
ObjectServer event source for new, updated, and deleted alerts, and then run
policies when the alert information matches filter conditions that you define.
The event reader service uses the host and port information of a specified
ObjectServer data source so that it can connect to an ObjectServer to poll for new
and updated events and store them in a queue. The event processor service
requests events from the event reader. When an event reader discovers new,
updated, or deleted alerts in the ObjectServer, it retrieves the alert and sends it to
an event queue. Here, the event waits to be handled by the event processor.
You configure this service by defining a number of restriction filters that match the
incoming events, and passing the matching events to the appropriate policies. The
service can contain multiple restriction filters, each one triggering a different policy
from the same event stream, or it can trigger a single policy.
You can configure an event reader service to chain multiple policies together to be
run sequentially when triggered by an event from the event reader.
Important: Before you create an OMNIbus event reader service, you must have a
valid ObjectServer data source to which the event reader will connect to poll for
new and updated events.
OMNIbus event reader architecture
This diagram shows the relationship between Netcool/Impact, an OMNIbus event
reader, and an ObjectServer.
Figure 4. Event reader architecture
OMNIbus event reader process
The phases of the OMNIbus event reader process are startup, event polling, event
querying, deleted event notification, and event queueing.
Startup
When the event reader is started, it reads events using the StateChange or
Serial value that it used before being shut down. To read all the events on
start-up, click Clear State.
Event Polling
During the event polling phase, the OMNIbus event reader queries the
ObjectServer at intervals for all new and unprocessed events. You set the
polling interval when you configure the event reader.
Event Querying
When the OMNIbus event reader queries the ObjectServer, either at
startup, or when polling for events at intervals, it reads the state file,
retrieves new or updated events, and records the state file. For more
information, see “Event querying.”
Deleted Event Notification
If the OMNIbus event reader is configured to run a policy when an event
is deleted from the ObjectServer, it listens to the ObjectServer through the
IDUC interface for notification of deleted alerts. The IDUC delete
notification includes the event field data for the deleted alert.
Event Queueing
After it retrieves new or updated events, or has received events through
delete notification, the OMNIbus event reader compares the field data in
the events to its set of filters. For more information, see “Event queuing”
on page 28.
Event querying
When the OMNIbus event reader queries the ObjectServer, either at startup, or
when polling for events at intervals, it reads the state file, retrieves new or
updated events, and records the state file.
Reading the state file
The state file is a text file used by the OMNIbus event reader to cache state
information about the last event read from the ObjectServer. The event
reader reads the state file to find the Serial or StateChange value of the
last read event. For more information, see “Reading the state file.”
Retrieving new or updated events
The event reader connects to the ObjectServer and retrieves new or
updated events that have occurred since the last read event. During this
phase, the event reader retrieves all the new or updated events from the
ObjectServer, using information from the state file to specify the correct
subset of events.
Recording the state file
After the event reader retrieves the events from the ObjectServer, it caches
the Serial or StateChange value of the last processed event.
Reading the state file:
The state file is a text file used by the OMNIbus event reader to cache state
information about the last event read from the ObjectServer.
If the event reader is configured to get only new events from the ObjectServer, the
state file contains the Serial value of the last event read from the ObjectServer. If
the event reader is configured to get both new and updated events from the
ObjectServer, the file contains the StateChange value of the last read event.
The event reader reads the contents of the state file whenever it polls the
ObjectServer and passes the Serial or StateChange value as part of the query.
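The read-query-record cycle can be sketched in Python (an illustrative model; the file name and query text are assumptions, not the product's actual implementation):

```python
# Sketch of the state-file cycle: read the last StateChange (or Serial)
# value, query only newer events, then record the new high-water mark.
import tempfile

def read_state(path, default=0):
    """Return the cached value, or the default if no state file exists yet."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return default

def write_state(path, value):
    """Cache the Serial or StateChange value of the last processed event."""
    with open(path, "w") as f:
        f.write(str(value))

with tempfile.TemporaryDirectory() as d:
    state_path = d + "/reader.state"
    last = read_state(state_path)           # 0 on first start (no state yet)
    query = f"select * from alerts.status where StateChange > {last}"
    write_state(state_path, 12345)          # record the last processed value
    resumed = read_state(state_path)        # used on the next poll
```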
Event queuing
After it retrieves new or updated events, or has received events through delete
notification, the OMNIbus event reader compares the field data in the events to its
set of filters.
If the event matches one or more of its filters, the event reader places the event in
the event queue with a pointer to the corresponding policy. After the events are in
the event queue, they can be picked up by the event processor service. The event
processor passes the events to the corresponding policies to the policy engine for
processing.
Mappings
Event mappings allow you to specify which policies you want to be run when
certain events are retrieved.
Each mapping consists of a filter that specifies the type of event and a policy
name. You must specify at least one event mapping for the event reader to work.
The syntax for the filter is the same as the WHERE clause in an SQL SELECT
statement. This clause consists of one or more comparisons that must be true in
order for the specified policy to be executed. For more information about the SQL
filter syntax, see the Policy Reference Guide.
The following examples show event mapping filters.
AlertKey = ’Node not responding’
AlertKey = ’Node not reachable by network ping’ AND Node = ’ORA_Host_01’
Event matching
You can specify whether to run only the first matching policy in the event
mappings or to run every policy that matches.
If you choose to run every policy that matches, the OMNIbus event reader places
a duplicate of the event in the event queue for every matching policy. The event
is processed as many times as there are matching filters in the event
reader.
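The difference between the two matching modes can be sketched in Python (illustrative only; the filters and policy names are invented):

```python
# Sketch of "run first matching policy" vs "run every matching policy".
# With every-match, one copy of the event is queued per matching filter.
mappings = [
    (lambda e: e["AlertKey"] == "Node not responding", "PolicyA"),
    (lambda e: e["Node"] == "ORA_Host_01", "PolicyB"),
]

def queue_event(event, mappings, match_all):
    queued = []
    for matches, policy in mappings:
        if matches(event):
            queued.append((event, policy))
            if not match_all:
                break  # first-match mode stops at the first hit
    return queued

event = {"AlertKey": "Node not responding", "Node": "ORA_Host_01"}
first_only = queue_event(event, mappings, match_all=False)  # 1 queue entry
every = queue_event(event, mappings, match_all=True)        # 2 queue entries
```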
Actions
By default, the event reader monitors the ObjectServer for new alerts, but you can
also configure it to monitor for updated alerts and to be notified when an alert is
deleted.
In addition, you can configure it to get all the unprocessed alerts from the
ObjectServer at startup.
Event locking
Event locking allows a multithreaded event reader to categorize incoming alerts
based on the values of specified alert fields and then to process them within a
category one at a time in the order that they were sent to the ObjectServer.
Event locking locks the order in which the event reader processes alerts within
each category.
Remember: When event locking is enabled in the reader, the events read by it are
only processed in the primary server of the cluster.
You use event locking in situations where you want to preserve the order in which
incoming alerts are processed, or in situations where you want to prevent a
multithreaded event processor from attempting to access a single resource from
more than one instance of a policy running simultaneously.
You specify the way the event reader categorizes incoming alerts using an
expression called a locking expression. The locking expression consists of one or
more alert field names concatenated with a plus sign (+) as follows:
field[+field...]
Where field is the name of an alert field in the alerts.status table of the
ObjectServer.
When an event reader retrieves alerts from the ObjectServer, it evaluates the
locking expression for each incoming alert and categorizes it according to the
contents of the alert fields in the expression.
For example, when using the locking expression Node, the event reader categorizes
all incoming alerts based on the value of the Node alert field and then processes
them within a category one at a time in the order that they were sent to the
ObjectServer.
In the following example:
Node+AlertKey
The event reader categorizes all incoming alerts based on the concatenated values
of the Node and AlertKey fields. In this example, an alert whose Node value is Node1
and AlertKey value is 123456 is categorized separately from alerts with other
combinations of Node and AlertKey values.
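Evaluating a locking expression can be sketched in Python (an illustrative model; the alert field values are invented):

```python
# Sketch: evaluating a locking expression such as "Node+AlertKey" to
# derive the category under which an alert is serialized.
def locking_category(alert, expression):
    fields = expression.split("+")
    return "+".join(str(alert[f]) for f in fields)

a1 = {"Node": "Node1", "AlertKey": "123456"}
a2 = {"Node": "Node1", "AlertKey": "654321"}

# Same category under "Node", different categories under "Node+AlertKey".
same_node = locking_category(a1, "Node") == locking_category(a2, "Node")
same_pair = locking_category(a1, "Node+AlertKey") == locking_category(a2, "Node+AlertKey")
```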
Event order
The reader first sorts by StateChange or Serial value, depending on whether the
Get updated events option is used.
When sorting by Serial, each event has a unique Serial value, so the Order By
field is ignored. When sorting by StateChange, if more than one event has the
same StateChange value, the reader uses the Order By field to sort those events
after they are sorted in ascending order of StateChange.
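This two-level sort can be sketched in Python (illustrative only; here the Order By field is assumed to be Serial ASC):

```python
# Sketch: events sorted first by StateChange, then by the Order By field
# (Serial ASC here) to break ties between equal StateChange values.
events = [
    {"Serial": 3, "StateChange": 100},
    {"Serial": 1, "StateChange": 100},
    {"Serial": 2, "StateChange": 90},
]
ordered = sorted(events, key=lambda e: (e["StateChange"], e["Serial"]))
```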
Managing the OMNIbusEventReader with an ObjectServer pair
for New Events or Inserts
The OMNIbusEventReader service uses the StateChange field by default when
querying Netcool®/OMNIbus so that Netcool/Impact can fetch both New and
Updated events. To configure the OMNIbusEventReader to receive only new
events or inserts, you can clear the Get updated events check box in the
OMNIbusEventReader configuration pane. Then, the query that is issued to
Netcool/OMNIbus uses Serial instead of StateChange.
The limitation with using Serial is that in a Netcool/OMNIbus failover
configuration each ObjectServer has its own unique values for Serial. During
failover and failback the query that is issued by Netcool/Impact can result in
events not being read or the same events being read again. The Serial value used
in the OMNIbusEventReader query does not consider the unique value Serial has
for each ObjectServer instance.
Scenario
The two ObjectServer instances are NCOMSA and NCOMSB. For NCOMSA, the
current Serial is 9000. For NCOMSB, the current Serial is 7000.
When Netcool/Impact is connected to NCOMSA, the query is:
select top 1000 * from alerts.status where Class != 10500 AND Serial >
9000 order by Serial;
NCOMSA goes down and Netcool/Impact connects to the secondary ObjectServer
which is NCOMSB.
When connected, Netcool/Impact issues a select statement:
select top 1000 * from alerts.status where Class != 10500 AND Serial >
9000 order by Serial;
NCOMSB has Serial 7000 and any new events that are inserted into NCOMSB
would have Serial values as 7001, 7002, and so on. However, Netcool/Impact does
not maintain a Serial value per ObjectServer instance and continues to look for
events that are based on its last internal check pointed value, which was 9000.
As a result, Netcool/Impact does not pick up the new events 7001, 7002, 7003,
and so on. It misses reading 2000 events, from Serial 7001 to 9000.
Once the inserted event gets a Serial value of 9001, Netcool/Impact starts fetching
those events.
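The gap can be illustrated with a short Python sketch (the checkpoint value comes from the scenario above; the number of events present on NCOMSB is an assumption for illustration):

```python
# Sketch of the failover gap: the reader's checkpoint (9000) came from
# NCOMSA, so on NCOMSB only events with Serial > 9000 are fetched and
# the Serial range 7001-9000 on NCOMSB is skipped.
checkpoint = 9000                      # last Serial read from NCOMSA
ncomsb_serials = range(7001, 9501)     # events assumed present on NCOMSB
fetched = [s for s in ncomsb_serials if s > checkpoint]
missed = [s for s in ncomsb_serials if s <= checkpoint]
```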
Configuring the OMNIbusEventReader with an ObjectServer
pair for New Events or Inserts
About this task
To configure an OMNIbusEventReader to work in an ObjectServer failover or
failback configuration, you must add properties to the eventreader.props file.
Adding properties to the eventreader.props file overrides selecting or clearing
the Get updated events check box in the UI.
Procedure
1. Identify the OMNIbusEventReader that is to query the Netcool/OMNIbus
failover/failback pair. A new Netcool/Impact installation provides a reader that
is called OMNIbusEventReader but you can create more instances in the
Services UI.
2. Stop the Impact Server. In a Netcool/Impact clustered environment, stop all the
servers.
3. Go to your $NCHOME/etc directory and look for the .props file that corresponds
to your OMNIbusEventReader. The file is in the following format:
<servername>_<readernameinlowercase>.props
where <servername> is the name of your Impact Server instance.
For example, if your Impact Server instance is NCI1 and your reader
is OMNIbusEventReader, the file is NCI1_omnibuseventreader.props.
4. Edit the .props file and add the following lines:
impact.<readernameinlowercase>.objectserver.queryfornewevents=true
impact.<readernameinlowercase>.objectserver.updatefornewevents=true
For example:
impact.omnibuseventreader.objectserver.queryfornewevents=true
impact.omnibuseventreader.objectserver.updatefornewevents=true
Note:
The queryfornewevents property configures the event reader to add
"ImpactFlag=0" to the SQL query. The updatefornewevents property configures
the event reader to automatically set ImpactFlag to 1 after the event is
processed.
If you have two or more readers that are processing the same events, you
should use different flags for each reader so that one reader does not set the
flag before the other readers get an opportunity to pick up the event. See
“Additional customization using a field other than ImpactFlag” on page 33.
5. Copy the sql file nci_omnibus_update.sql in the $NCHOME/install/dbcore/
OMNIbus folder to the machines where the primary and secondary instances of
the ObjectServer are running. This script adds the Integer field ImpactFlag to
the alerts.status schema. Run this script against both the primary and
secondary ObjectServer pairs.
v For UNIX based platforms, you can run the SQL script:
cat <path to nci_omnibus_update.sql> | ./nco_sql -server ’<servername>’
-user ’<username>’ -password ’<password>’
For example, if the nci_omnibus_update.sql is placed in the /opt/scripts
folder and you want to run this script against the ObjectServer instance NCOMS,
connecting as the root user with no password, the script can be run as:
cat /opt/scripts/nci_omnibus_update.sql | ./nco_sql -server ’NCOMS’
-user ’root’ -password “”
v For Windows platforms, you can run the SQL script as:
type <path to nci_omnibus_update.sql> | isql.bat -S <servername>
-U <username> -P <password>
For example, place the nci_omnibus_update.sql file in the OMNIHOME/bin
folder and run this script against ObjectServer instance NCOMS, connecting as
root with no password:
type nci_omnibus_update.sql | isql.bat -S NCOMS -U root -P
Make sure that -P is the last option. You can omit the password and
enter it when prompted instead. For information about Netcool/
OMNIbus, see the IBM Tivoli Netcool/OMNIbus Administration Guide available
from the following website:
http://www.ibm.com/tivoli/documentation.
6. When the script is completed, an Integer field ImpactFlag is added to the
alerts.status schema. Check that the integer field called ImpactFlag is added
for each ObjectServer instance, primary and secondary.
Important: The ImpactFlag field must be included in the OMNIbus bidirectional
gateway mapping file.
7. Start your Netcool/Impact server and the OMNIbusEventReader. In a clustered
setup, start the primary server first followed by all the secondary servers.
8. You can check the OMNIbusEventReader logs and verify that the query now
has ImpactFlag = 0 as a condition. Once Netcool/Impact finishes processing
events that match the ImpactFlag = 0 criteria, it automatically updates the
alerts.status table so that ImpactFlag is set to 1. This prevents the
OMNIbusEventReader from reading the same events again, so Netcool/Impact
reads and processes only new events and inserts that it has not read before.
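As an illustration only (the exact SQL that the reader generates can differ by
version and configuration), the query that appears in the reader log now
resembles the following, where 12345 is a hypothetical Serial checkpoint:

select * from alerts.status where Serial > 12345 and ImpactFlag = 0;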
Additional customization using the ReturnEvent function to
update one or more fields in Netcool/OMNIbus
If your policy is already using the ReturnEvent function to update one or more
fields in Netcool/OMNIbus, you can add the ImpactFlag field in the update
statement to avoid Netcool/Impact internally issuing an update to set the
ImpactFlag.
Procedure
1. For the policy to handle the ImpactFlag field, check that @ImpactFlag = 1; is
added before the ReturnEvent gets issued.
@ImpactFlag = 1;
ReturnEvent(EventContainer);
2. To prevent Netcool/Impact from internally issuing the update, either set
impact.<readernameinlowercase>.objectserver.updatefornewevents=false, or
do not add the
impact.<readernameinlowercase>.objectserver.updatefornewevents
property to the .props file of the reader, since its default value is false.
Remember: Change values in the .props file only when the server is shut
down. If you are running a cluster, make sure all the servers are down.
Change the .props file of the primary server only; the change replicates to the
secondary servers when they are started.
32
Netcool/Impact: Solutions Guide
Additional customization using a field other than ImpactFlag
If you cannot use the ImpactFlag field for the setup because a field called
ImpactFlag already exists, you can customize the field using the following
procedure.
Procedure
1. Customize the field Netcool/Impact uses by adding the following additional
properties:
impact.<readernameinlowercase>.objectserver.queryforneweventsexpr=<select expression>
impact.<readernameinlowercase>.objectserver.updateforneweventsexpr=<update expression>
For example, if your reader name is OMNIbusEventReader and you want to use
an Integer field called Impacted for this reader instead of ImpactFlag, the
properties become:
impact.omnibuseventreader.objectserver.queryfornewevents=true
impact.omnibuseventreader.objectserver.updatefornewevents=true
impact.omnibuseventreader.objectserver.queryforneweventsexpr=Impacted = 0
impact.omnibuseventreader.objectserver.updateforneweventsexpr=Impacted = 1
2. Make sure that the field Impacted in this example exists in both the primary
and backup Netcool/OMNIbus instances configured for the data source. You
can edit the nci_omnibus_update.sql script and change ImpactFlag to Impacted.
3. Then run the script against Netcool/OMNIbus using steps 5 and 6 on page 32
in Configuring the OMNIbusEventReader with an ObjectServer pair for New
Events or Inserts.
Stopping the OMNIbus event reader from retrying a
connection after an error
By default, from Fix Pack 5, Impact automatically tries to reconnect to OMNIbus if
the connection dies. This behavior can be overridden if required.
This automatic reconnection is controlled by the
impact.omnibuseventreader.retry.connection.enabled parameter in the back-end
Impact server <SERVERNAME>_server.props file.
You can override this option by setting the
impact.omnibuseventreader.retry.connection.enabled parameter to false.
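For example, to disable automatic reconnection for the default reader, the
<SERVERNAME>_server.props file would contain:

impact.omnibuseventreader.retry.connection.enabled=false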
Handling Serial rollover
How to set up Serial rollover with Netcool/Impact.
Symptoms
If you are not using the Get Updates option in the OMNIbus reader service,
Netcool/Impact uses the Serial field to query Netcool/OMNIbus. Serial is an auto
increment field in Netcool/OMNIbus and has a maximum limit before it rolls over
and resets.
Resolution
Complete the following steps to set up Netcool/Impact to handle Serial rollover:
1. Identify the OMNIbusEventReader that queries the Netcool/OMNIbus
failover/failback pair. A Netcool/Impact installation provides a reader called
OMNIbusEventReader but you can create more instances in the Services GUI.
2. Stop the Impact Server. In a Netcool/Impact clustered environment, stop all the
servers.
3. Copy the sql file serialrotation.sql in the $NCHOME/install/dbcore/OMNIbus
folder to the machines where the primary and secondary instances of the
ObjectServer are running. This script creates a table called serialtrack in the
alerts database and also creates a trigger called newSerial to default_triggers.
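The shipped serialrotation.sql script defines the actual objects. Purely as an
illustration of the idea, and not the contents of the shipped script, a
serial-tracking table and trigger written in ObjectServer SQL might look like the
following, where the TrackId and LastSerial column names are hypothetical:

create table alerts.serialtrack persistent
(
TrackId integer primary key,
LastSerial integer
);
go

create or replace trigger newSerial
group default_triggers
priority 10
after insert on alerts.status
for each row
begin
-- record the most recently assigned Serial value
update alerts.serialtrack set LastSerial = new.Serial;
end;
go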
4. Run this script against both the primary and secondary ObjectServer pairs.
v For UNIX-based operating systems:
cat <path_to_serialrotation.sql> | ./nco_sql -server '<servername>'
-user '<username>' -password '<password>'
For example, if serialrotation.sql is placed in the /opt/scripts folder and
you want to run the script against the ObjectServer instance NCOMS,
connecting as the root user with no password, run:
cat /opt/scripts/serialrotation.sql | ./nco_sql -server 'NCOMS'
-user 'root' -password ""
v For Windows operating systems:
type <path to serialrotation.sql> | isql.bat -S <servername>
-U <username> -P <password>
For example, place the serialrotation.sql file in the OMNIHOME/bin folder
and run this script against the ObjectServer instance NCOMS, connecting as
the root user with no password:
type serialrotation.sql | isql.bat -S NCOMS -U root -P
Make sure that -P is the last option. You can omit the password and enter it
when prompted instead. For information about Netcool/OMNIbus, see the
IBM Tivoli Netcool/OMNIbus Administration Guide available from the
following website: https://www.ibm.com/
developerworks/wikis/display/tivolidoccentral/OMNIbus.
Further steps
When the script completes, make sure that you enable the newSerial trigger.
1. Start your Netcool/Impact server and the OMNIbusEventReader. In a clustered
setup, start the primary server first followed by all the secondary servers.
2. Log in to the Netcool/Impact GUI and create an instance of the
DefaultPolicyActivator service. In the Configuration, select the policy to trigger
as SerialRollover and provide an interval at which that policy gets triggered.
3. The SerialRollover policy assumes that the data source used to access
Netcool/OMNIbus is the defaultobjectserver and the event reader that
accesses Netcool/OMNIbus is the OMNIbusEventReader. If you are using a
different data source or event reader, you must update the DataSource_Name
and Reader_Name variables in the policy accordingly.
4. Start the instance of the DefaultPolicyActivator service that you created.
Scenario: Using a correlation policy and the OMNIbus
ObjectServer event reader service to resolve event floods
How to use the Netcool/Impact policy EventCorrelationUsingXinYExample.ipl
and the OMNIbus ObjectServer event reader service to perform event correlation
to resolve event floods.
In Netcool/Impact, you create an OMNIbus event reader service that is based on a
specific filter that executes the EventCorrelationUsingXinYExample.ipl policy. The
policy queries the OMNIbus ObjectServer again, based on the same filter as the
reader or a different one, to check whether there are older events within a
threshold and how many there are.
The scenario uses a simple X in Y correlation example, where X is the number of
events that occurred in a specified time window threshold Y, for example 50 events
in the past 120 seconds.
This specific scenario focuses on an IBM Tivoli Monitoring Tivoli Enterprise
Monitoring Server that sends a flood of events that are tagged as MS_Offline.
MS_Offline events are sent when the Tivoli Enterprise Monitoring Server agents
detect that servers are down or restarted. For example, if IBM Tivoli Monitoring
Tivoli Enterprise Monitoring Server sends 3 events per second per agent for 5
agents until the agents are responsive, it would result in:
3 events * 5 agents * (5 * 60 seconds) = 4500 events in 5 minutes.
Because the 4500 events are coming from the same source, they should be
correlated by either updating the new incoming events or deleting them. In this
example, the events are updated.
IBM Tivoli Monitoring Tivoli Enterprise Monitoring Server sends events to the
OMNIbus ObjectServer table with updated fields such as:
v Summary Like 'MS_Offline',
v ITMHostname='TEMS hostname',
v Agent = 'ITM'
The fields are used to query the ObjectServer.
This particular scenario is using a standard Netcool/Impact policy and an
OMNIbus ObjectServer Event Reader service for Version 5.x and up.
Updating the policy to match the filter
Procedure
1. The correlation policy is EventCorrelationUsingXinYExample.ipl and is in the
Global project folder.
2. Update the policy to match a specific filter. Each section of the policy has a
description. Also, note that the policy uses @ITMHostname for ITM TEMS
because this example is specifically for MS_Offline event floods. Make sure to
update the filter accordingly.
a. Setting the thresholds for the time window and number of existing events.
/*Threshold time window in seconds:*/
CorrelationThreshold =120;
/*
* number of older events that must exist in the database before the incoming event
*/
numberOfEventsToCheck=5;
Log("LastOccurrence : " + @LastOccurrence );
/**
*DiffTime can be calculated using DiffTime = GetDate() - Int(CorrelationThreshold)
*Using GetDate() instead of @LastOccurrence makes sure that the
*policy checks period of time from "now time" - Threshold
*which keeps the time constant to check instead of using
*relative timestamp from LastOccurrence
*/
//DiffTime=@LastOccurrence - Int(CorrelationThreshold);
DiffTime=GetDate() - Int(CorrelationThreshold);
Log("DiffTime: " + DiffTime);
b. Filter
/*The following filter is used to correlate the events. It can be changed as needed.
*This specific example filters events to handle ITM MS_Offline event floods.
*/
CorrelationFilter="ITMHostname='" + @ITMHostname + "' AND Summary Like 'MS_Offline'
AND Severity = 5 AND Serial != " + @Serial ;
CorrelationFilter = CorrelationFilter + " AND LastOccurrence <= " + DiffTime ;
/*ORDER BY can be used to rank the events and check which one came in first
*/
CorrelationOrderBy = "ORDER BY LastOccurrence ASC";
c. Number of Events (X)
/* The following is to get COUNT(*) as EventCount
*from the same ObjectServer data source used by
*the reader
*/
CorrelationFields="COUNT(*) AS EventCount";
/*form the correlation query including the Threshold filter*/
SQLQuery = "SELECT " + CorrelationFields + " FROM status WHERE " + CorrelationFilter ;
Log("Reader Policy Query: " + SQLQuery);
Log("Check older events...");
Nodes=DirectSQL('defaultobjectserver',SQLQuery,NULL);
Log("Number of Old Events: " + Nodes[0].EventCount );
/*The following if condition checks whether X events occurred in the threshold
*window. The default is numberOfEventsToCheck (default 5) events older than the
*incoming event that was picked up by the reader.
* If there are older events, the incoming event is correlated by updating the
* Severity and SuppressEscl fields.
*/
if (Nodes[0].EventCount > numberOfEventsToCheck) {
Log("Found older events correlating this event: " + @Serial);
@Severity=2;
@SuppressEscl=6;
//the event can be deleted if the following is uncommented:
//@DeleteEvent=true;
ReturnEvent(EventContainer);
} else {
Log("No older events found greater than the number of events to check
( " +numberOfEventsToCheck +") ...");
}
3. Create an ObjectServer Event Reader service that runs the
EventCorrelationUsingXinYExample policy. See Configuring the OMNIbus event
reader service in the online help. For example, the following filter is used in the
Event Mapping.
Summary Like 'MS_Offline' AND Severity = 5 AND ITMHostname <> '' AND
Agent ='ITM'
4. Run the Event Reader service and send some test events.
Results
When the Event Reader finds a matching event, it runs the correlation policy. The
policy queries the same ObjectServer by using the same filter (or a different one,
based on the configuration) and adds a threshold and time window (Y) as well as
the number of events found. If the number of events found in the threshold is
greater than the count required (X), the incoming event is correlated by updating
the Severity and the SuppressEscl fields.
Database event listener service
The database event listener service monitors an Oracle event source for new,
updated, and deleted events.
This service works only with Oracle databases. When the service receives the data,
it evaluates the event against filters and policies that are specified for the service
and sends the event to the matching policies. The service listens asynchronously
for events that are generated by an Oracle database server and then runs one or
more policies in response.
You configure the service by using the GUI. Use the configuration properties to
specify one or more policies that are to be run when the listener receives incoming
events from the database server.
The database event listener agent is unable to communicate with the Name Server
through SSL.
Setting up the database server
Before you can use the database event listener, you must configure the database
client and install it into the Oracle database server.
Before you begin
The Java classes contained within the client package require Oracle to be running
a JVM of version 1.5 or higher.
About this task
The database client is the component that sends events from the database server to
Netcool/Impact. It consists of a set of Oracle Java schema objects and related
properties files. When you install the Impact Server, the installer copies a
compressed client package that contains the client program files to the local
system.
Perform these steps to set up the database server:
Procedure
1. Copy the client package to the system where Oracle is running and extract its
contents.
a. Copy the relevant client package from Netcool/Impact into a temporary
directory on the system where Oracle is running.
On UNIX systems, copy $NCHOME/install/agents/oracleclient-1.0.0.tar.
On Windows, copy $NCHOME/install/agents/oracleclient-1.0.0.zip.
b. Extract the package contents by using the UNIX tar command or a
Windows archive utility like WinZip. The package contents include three
files: oracleclient.jar, impactdblistener.props, and commons-codec-1.3.jar.
2. Edit the Name Server properties file on the database client side.
a. Copy the nameserver.props file from the Impact Server to the
$ORACLE_HOME/bin directory, where the loadjava program is also located.
b. Update the impact.nameserver.password property with the unencrypted
password of the user name that is specified in the impact.nameserver.userid property.
3. Edit the listener properties file.
The client tar file contains the impactdblistener.props with additional settings
for the database client. Copy the impactdblistener.props file to the
$ORACLE_HOME/bin directory, where the loadjava program is also located. For
information about configuring this file, see “Editing the listener properties file”
on page 39.
4. Install the client files into the database server by using the Oracle loadjava
utility.
Oracle provides the $ORACLE_HOME/bin/loadjava utility that you can use to
install the client files into the database server. For information about installing
the client files into the database server, see “Installing the client files into
Oracle” on page 39.
5. Grant database permissions.
You must grant a certain set of permissions in the Oracle database server in
order for the database event listener to function. For more information about
granting database permissions, see "Granting database permissions" on page
40.
Editing the nameserver.props file for the database client
The nameserver.props file is copied from the Impact Server to the
$ORACLE_HOME/bin directory. The database client uses the nameserver.props file to
determine the Name Server connection details.
The database client uses the name server to find and connect to the primary
instance of the Impact Server.
Restriction: In clustering configurations of Netcool/Impact, the database event
listener only runs in the primary server.
The following example shows a sample of the nameserver.props file that the
database client can use to connect to a single-server configuration of the Name
Server.
impact.nameserver.0.host=NCI1
impact.nameserver.0.port=9080
impact.nameserver.0.location=/nameserver/services
impact.nameserver.userid=impactadmin
impact.nameserver.password=impactpass
impact.nameserver.count=1
In this example, the Name Server is located on the NCI1 Impact Server, and is
running on the default port, 9080. The Name Server user and password have
default values, impactadmin, and impactpass.
The following example shows a sample of the nameserver.props file that the
database client can use to connect to a cluster that consists of two NameServer
instances.
impact.nameserver.0.host=NCI1
impact.nameserver.0.port=9080
impact.nameserver.0.location=/nameserver/services
impact.nameserver.1.host=NCI2
impact.nameserver.1.port=9080
impact.nameserver.1.location=/nameserver/services
impact.nameserver.userid=impactadmin
impact.nameserver.password=impactpass
impact.nameserver.count=2
In this example, the Name Servers are located on the NCI1 and NCI2 Impact
Servers, and are running on the default port, 9080.
Editing the listener properties file
The client tar file contains the impactdblistener.props with additional settings for
the database client.
Edit this file so that it contains the correct name for the Impact Server cluster. You
can also change debug and delimiter properties.
Table 1 shows the properties in the listener properties file:
Table 1. Database client listener properties file
Property
Description
impact.cluster.name
Name of the Impact Server cluster where the
database event listener is running. The
default value for this property is NCICLUSTER.
impact.dblistener.debug
Specifies whether to run the database client
in debug mode. The default value for this
property is true.
impact.dblistener.delim
Specifies the delimiter character that
separates name/value pairs in the VARRAY
sent by Java stored procedures to the
database client. The default value for this
property is the pipe character (|). You
cannot use the colon (:) as a delimiter.
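As an illustration, an impactdblistener.props file for a cluster named
NCICLUSTER that turns off debug logging and keeps the default delimiter might
contain:

impact.cluster.name=NCICLUSTER
impact.dblistener.debug=false
impact.dblistener.delim=|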
Installing the client files into Oracle
Oracle provides the $ORACLE_HOME/bin/loadjava utility that you can use to install
the client files into the database server.
Before you begin
If you are migrating to a newer version of Netcool/Impact, remove any preexisting
Java archive (JAR) and properties files. To remove the preexisting files, use the
following command:
dropjava -user username/password <file>
where username and password are a valid user name and password for a user whose
schema contains the database resources where the Java stored procedures are run.
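For example, with a hypothetical user scott and password tiger, the previously
loaded files could be removed as follows:

dropjava -user scott/tiger oracleclient.jar
dropjava -user scott/tiger commons-codec-1.3.jar
dropjava -user scott/tiger nameserver.props
dropjava -user scott/tiger impactdblistener.props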
Procedure
1. Log in as the oracle user and set the environment variables ORACLE_HOME and
ORACLE_SID.
2. Go to the ORACLE_HOME/bin directory.
3. Install the client jar and property files. The property files must be in the same
$ORACLE_HOME/bin directory as the loadjava utility, with no directory path
prefixed to the property file names.
a. Use the following command to install the commons-codec-1.3.jar file:
loadjava -user username/password -resolve dir/commons-codec-1.3.jar
b. Use the following command to install the oracleclient.jar file:
loadjava -user username/password -resolve dir/oracleclient.jar
c. Use the following command to install the nameserver.props file:
loadjava -user username/password nameserver.props
d. Use the following command to install the impactdblistener.props file:
loadjava -user username/password impactdblistener.props
Important: You must follow this order of installation; otherwise, loadjava
cannot resolve external references between files, and reports errors during
installation.
Granting database permissions
You must grant a certain set of permissions in the Oracle database server in order
for the database event listener to function.
Procedure
1. Log in to sqlplus as the sysdba user: sqlplus "sys as sysdba"
2. Grant the permissions by entering the following commands at an Oracle
command line:
exec dbms_java.grant_permission( 'SCHEMA','SYS:java.net.SocketPermission',
'hostname:port','connect,resolve' )
/
exec dbms_java.grant_permission( 'SCHEMA','SYS:java.net.SocketPermission',
'hostname:listener_port','connect,resolve' )
/
exec dbms_java.grant_permission( 'SCHEMA', 'SYS:java.lang.RuntimePermission',
'shutdownHooks' , '');
/
exec dbms_java.grant_permission( 'SCHEMA','SYS:java.util.logging.LoggingPermission'
,'control', '' );
/
exec dbms_java.grant_permission('SCHEMA', 'SYS:java.util.PropertyPermission',
'*', 'read, write')
/
exec dbms_java.grant_permission( 'SCHEMA', 'SYS:java.lang.RuntimePermission',
'getClassLoader', '' )
/
exec dbms_java.grant_permission( 'SCHEMA','SYS:java.net.SocketPermission',
'hostname:40000','connect,resolve' );
/
exec dbms_java.grant_permission( 'SCHEMA','SYS:java.lang.RuntimePermission',
'getClassLoader', '' )
SCHEMA
Is the name of the user creating the triggers and procedures.
hostname
Is the name of the host where you are running the Impact Server.
port
Is the HTTP port on the server.
listener_port
Is the port that is used by the database event listener.
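For example, with a hypothetical schema SCOTT, an Impact Server host
nci1.example.com, and the default HTTP port 9080, the first grant command would
read:

exec dbms_java.grant_permission( 'SCOTT','SYS:java.net.SocketPermission',
'nci1.example.com:9080','connect,resolve' )
/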
3. Grant socket permissions to specific ports, or all ports, for Oracle.
v Use the next two port numbers in the allocation sequence to connect to the
database event listener service. You can adjust the communication port on
the Impact Server so that the Oracle client can grant permissions to connect
to the Impact Server on that port by using the impact.server.rmiport
property in the $IMPACT_HOME/etc/<servername>_server.props file. For example:
impact.server.rmiport=50000
Grant the permission to connect to this port (port 50000 in the example) in
your Oracle database. Otherwise, the Impact Server starts on a random port
and you must grant permissions for a different port each time the Impact
Server is restarted.
v Grant the permission to all ports for the Impact Server by using the
command: exec dbms_java.grant_permission(
'CMDBUSR','SYS:java.net.SocketPermission', 'hostname:*',
'connect,resolve' ), where hostname is the host name or IP address of the
Impact Server.
Configuring the database event listener service
You configure the database event listener service by setting events to trigger
policies when they match a filter.
Table 2. Event mapping settings for database event listener service configuration window
Window element
Description
Test events with all filters
Click this button if, when an event matches
more than one filter, you want to trigger all
policies that match the filtering criteria.
Stop testing after first match
Click this button if you want to trigger only
the first matching policy.
You can choose to test events with all filters
and run any matching policies or to stop
testing after the first matching policy.
New Mapping: New
Click the New button to create an event
filter.
Analyze Event Mapping Table
Click this icon to view any conflicts with
filter mappings that you set for this service.
Starts automatically when server starts
Select to automatically start the service when
the server starts. You can also start and stop
the service from the GUI.
Service log (Write to file)
Select to write log information to a file.
Sending database events
Perform these tasks to configure the database to send events.
v Create a call spec that publishes the sendEvent() function from the database
client library.
v Create triggers that call the resulting stored procedure.
Before you create these objects in the database, you must understand what
database events you want to send and what conditions cause them to be sent. For
example, if you want to send an event to Netcool/Impact every time a row is
inserted into a table, you must know:
v The identity of the table
v The subset of row information to send as part of the event
v The name of the condition (for example, after insert) that triggers the operation.
It is easier to create the triggers and call specs in SQL files and then call the SQL
files within SQL*Plus, by using @dir/filename. For example:
@/tmp/createTrigger.sql
For more information about Java stored procedures, call specs, and triggers, see the
Oracle Java Stored Procedure Developer's Guide.
Creating the call spec
The database client exposes a function named sendEvent() that allows Oracle
schema objects (in this case, triggers) to send events to Netcool/Impact.
The sendEvent() function is located in the class
com.micromuse.response.service.listener.database.DatabaseListenerClient,
which you compiled and loaded when you installed the client into the database
server.
The function has the following syntax:
sendEvent(java.sql.Array x)
Where each element in array x is a string that contains a name/value pair in the
event.
Here, db_varray_type is a user-defined VARRAY that can be created
by using the following statement:
CREATE TYPE db_varray_type AS VARRAY(30) OF VARCHAR2(100);
In order for Oracle objects to call this function, you must create a call spec that
publishes it to the database as a stored procedure. The following example shows a
call spec that publishes sendEvent() as a procedure named test_varray_proc:
CREATE OR REPLACE PROCEDURE test_varray_proc(v_array_inp db_varray_type)
AS LANGUAGE JAVA
NAME
'com.micromuse.response.service.listener.database.DatabaseListenerClient.
sendEvent(java.sql.Array)';
/
This call spec and VARRAY type are used in examples elsewhere in this chapter.
When you call the procedure published with this call spec, you pass it an Oracle
VARRAY in which each element is a string that contains a name/value pair in the
event. The name and value in the string are separated using the pipe character (|)
or another character as specified when you configured the database client.
Do not create the procedures and triggers for the event listener as the sys
user.
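As a quick check that the call spec is published correctly, you can call the
procedure from an anonymous PL/SQL block. The field values in this sketch are
illustrative only:

DECLARE
v_event db_varray_type := db_varray_type(
'EVENTSOURCE | SCOTT.DEPT',
'DEPTNO | 10',
'DNAME | SALES');
BEGIN
test_varray_proc(v_event);
END;
/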
Creating triggers
You can create triggers for DML events, DDL events, system events, and user
events.
DML events triggers:
DML events are sent to Netcool/Impact when the database performs operations
that change rows in a table.
These include the standard SQL INSERT, UPDATE, and DELETE commands.
You configure the database to send DML events by creating triggers that are
associated with these operations. Most often, these triggers take field data from the
rows under current change and pass it to the database client using the call spec
you previously created. In this way, the database reports the inserts, updates, and
deletes to Netcool/Impact for processing as events.
When the database client receives the field data from the trigger, it performs a
SELECT operation on the table to determine the underlying data type of each field.
Because the corresponding row is currently under change, Oracle is likely to report
a mutating table error (ORA-04091) when the database client performs the SELECT.
To avoid receiving this error, your DML triggers must create a copy of the row
data first and then use this copy when sending the event.
The following example contains table type declarations, variable declarations, and
trigger definitions that create a temporary copy of row data. You can modify this
example for your own use. This example uses the type db_varray_type described
in the previous section. The triggers in the example run in response to changes
made to a table named dept.
This example contains:
v Type declaration for deptTable, which is a nested table of db_varray_type.
v Variable declaration for dept1, which is a table of type deptTable. This table
stores the copy of the row data.
v Variable declaration for emptyDept, which is a second table of type deptTable.
This table is empty and is used to reset dept1.
v Trigger definition for dept_reset, which is used to reset dept1.
v Trigger definition for dept_after_row, which populates dept1 with field data
from the changed rows.
v Trigger definition for dept_after_stmt, which loops through the copied rows
and sends the field data to the database client using the call spec defined in the
previous section.
The trigger definition for dept_after_row is intentionally left incomplete in this
example, because it varies depending on whether you are handling INSERT, UPDATE
or DELETE operations.
This is an example definition for this trigger:
CREATE OR REPLACE PACKAGE dept_pkg AS
/* deptTable is a nested table of VARRAYs that will be sent */
/* to the database client */
TYPE deptTable IS TABLE OF db_varray_type INDEX BY BINARY_INTEGER;
/* dept1 will store the actual VARRAYs */
dept1 deptTable;
/* emptyDept is used for initializing dept1 */
emptyDept deptTable;
end;
/
CREATE OR REPLACE TRIGGER dept_reset
BEFORE INSERT OR UPDATE OR DELETE ON dept
BEGIN
/* Initialize dept1 */
dept_pkg.dept1 := dept_pkg.emptyDept;
end;
/
CREATE OR REPLACE TRIGGER dept_after_row
AFTER INSERT OR UPDATE OR DELETE ON dept
FOR EACH ROW
BEGIN
/* This trigger intentionally left incomplete. */
/* See examples in following sections of this chapter. */
end;
/
CREATE OR REPLACE TRIGGER dept_after_stmt
AFTER INSERT OR UPDATE OR DELETE ON dept
BEGIN
/* Loop through rows in dept1 and send field data to database client */
/* using call proc defined in previous section of this chapter */
for i in 1 .. dept_pkg.dept1.count loop
test_varray_proc(dept_pkg.dept1(i));
end loop;
end;
/
Insert events triggers:
To send an event to Netcool/Impact when Oracle performs an INSERT operation,
you must first create a trigger that copies the inserted row data to a temporary
table.
You then use another trigger as shown in the example to loop through the
temporary table and send the row data to the database client for processing.
A typical insert trigger contains a statement that populates a VARRAY with the
wanted field data and then assigns the VARRAY as a row in the temporary table.
Each element in the VARRAY must contain a character-delimited set of
name/value pairs that the database client converts to event format before sending
it to Netcool/Impact. The default delimiter character is the pipe symbol (|).
The VARRAY must contain an element for a field named EVENTSOURCE. This field is
used by the database client to determine the table where the database event
originated.
The following example shows a typical VARRAY for insert events:
db_varray_type('EVENTSOURCE | SCOTT.DEPT',
'DEPTNO | '||:NEW.DEPTNO, 'LOC | '||:NEW.LOC,
'DNAME | '||:NEW.DNAME, 'IMPACTED | '||:NEW.IMPACTED);
In this example, the VARRAY contains an EVENTSOURCE field and fields that contain
values derived from the inserted row, as contained in the NEW pseudo-record
passed to the trigger. The value of the EVENTSOURCE field in this example is the dept
table in the Oracle SCOTT schema.
The following example shows a complete trigger that copies new row data to the
temporary table dept1 in package dept_pkg.
CREATE OR REPLACE TRIGGER dept_after_row
AFTER INSERT ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT', 'DEPTNO | '||:NEW.DEPTNO,
'LOC | '||:NEW.LOC, 'DNAME | '||:NEW.DNAME,
'IMPACTED | '||:NEW.IMPACTED);
end;
/
For a complete example that shows how to send an insert event, see “Insert event
trigger example” on page 47.
Update and delete events triggers:
You can send update and delete events using the same technique you use to send
insert events.
When you send update and delete events, however, you must obtain the row
values using the OLD pseudo-record instead of NEW.
The following example shows a trigger that copies updated row data to the
temporary table dept1 in package dept_pkg.
CREATE OR REPLACE TRIGGER dept_after_row
AFTER UPDATE ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT',
'DEPTNO | '||:OLD.DEPTNO, 'LOC | '||:OLD.LOC,
'DNAME | '||:OLD.DNAME, 'IMPACTED | '||:OLD.IMPACTED);
end;
/
The following example shows a trigger that copies deleted row data to the
temporary table dept1.
CREATE OR REPLACE TRIGGER dept_after_row
AFTER DELETE ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT',
'DEPTNO | '||:OLD.DEPTNO, 'LOC | '||:OLD.LOC,
'DNAME | '||:OLD.DNAME, 'IMPACTED | '||:OLD.IMPACTED);
end;
/
DDL events triggers:
DDL events are sent to Netcool/Impact when the database performs an action that
changes a schema object.
These actions include the SQL CREATE, ALTER, and DROP commands.
To send DDL events, you create a trigger that populates a VARRAY with data that
describes the DDL action and the database object that is changed by the operation.
Then, you pass the VARRAY element to the database client for processing. As with
DML events, the VARRAY contains a character-delimited set of name/value pairs
that the database client converts to event format before sending to Netcool/Impact.
DDL events require two VARRAY elements: EVENTSOURCE, as described in the
previous section, and TRIGGEREVENT. Typically, you populate the TRIGGEREVENT
element with the current value of Sys.sysevent.
The following example shows a typical VARRAY for DDL events.
db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
The following example shows a complete trigger that sends an event to
Netcool/Impact before Oracle executes a CREATE command.
CREATE OR REPLACE TRIGGER ddl_before_create
BEFORE CREATE
ON SCOTT.schema
DECLARE
my_before_create_varray db_varray_type;
BEGIN
my_before_create_varray :=
db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name,
'USERNAME | '||Sys.login_user, 'INSTANCENUM | '||Sys.instance_num,
'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_before_create_varray);
end;
/
System events triggers:
System events are sent to Netcool/Impact when the Oracle server starts up, shuts
down, or reports a system level error.
System events work only if the user who owns the corresponding triggers has
SYSDBA privileges (for example, the SYS user).
Some versions of Oracle do not support invoking custom Java code from a system
event trigger. Verify that your version of Oracle supports invoking custom Java
code from a system event trigger before you configure a system event trigger for
the Netcool/Impact database event listener.
To send system events, you create a trigger that populates a VARRAY with data
that describes the system action. Then, you pass the VARRAY element to the
database client for processing. As with DDL events, system events require the
TRIGGEREVENT element to be populated, typically with the value of Sys.sysevent.
The following example shows a typical VARRAY for system events.
db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
' OBJECTNAME | '||Sys.database_name,
' USER_NAME | '||Sys.login_user,
' INSTANCE_NUM | '||Sys.instance_num);
The following example shows a complete trigger that sends an event to
Netcool/Impact at Oracle startup.
CREATE OR REPLACE TRIGGER databasestartuptrigger
AFTER STARTUP
ON database
DECLARE
v_array_inp db_varray_type;
BEGIN
v_array_inp := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
' OBJECTNAME | '||Sys.database_name,
' USER_NAME | '||Sys.login_user, ' INSTANCE_NUM | '||Sys.instance_num);
test_varray_proc(v_array_inp);
end;
/
User events triggers:
User events are sent to Netcool/Impact when a user logs in to or out of Oracle.
Some versions of Oracle do not support invoking custom Java code from a user
event trigger. Verify that your version of Oracle supports invoking custom Java
code from a user event trigger before you configure a user event trigger for the
Netcool/Impact database event listener.
To send user events, you create a trigger that populates a VARRAY with data that
describes the user action. Then, you pass the VARRAY element to the database client
for processing. As with system events, user events require the TRIGGEREVENT
element to be populated, typically with the value of Sys.sysevent. If you do not
specify a value for the EVENTSOURCE element, the database client uses the name of
the database.
The following example shows a typical VARRAY for user events.
db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE |'||Sys.database_name, 'LOGINUSER | ' ||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'TRIGGERNAME | USER_LOGIN');
The following example shows a complete trigger that sends an event to
Netcool/Impact when a user logs in.
CREATE OR REPLACE TRIGGER user_login
AFTER logon
on schema
DECLARE
my_login_varray db_varray_type;
BEGIN
my_login_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE |'||Sys.database_name, 'LOGINUSER | ' ||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'TRIGGERNAME | USER_LOGIN');
test_varray_proc(my_login_varray);
end;
/
Insert event trigger example:
This example shows how to create a set of Oracle triggers that send an insert event
to Netcool/Impact.
CREATE OR REPLACE PACKAGE dept_pkg AS
TYPE deptTable IS TABLE OF db_varray_type INDEX BY BINARY_INTEGER;
dept1 deptTable;
emptyDept deptTable;
end;
/
CREATE OR REPLACE TRIGGER dept_reset
BEFORE INSERT OR UPDATE OR DELETE ON dept
BEGIN
dept_pkg.dept1 := dept_pkg.emptyDept;
end;
/
CREATE OR REPLACE TRIGGER dept_after_row
AFTER INSERT ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT', 'DEPTNO | '||:NEW.DEPTNO,
'LOC | '||:NEW.LOC, 'DNAME | '||:NEW.DNAME, 'IMPACTED | '||:NEW.IMPACTED);
end;
/
CREATE OR REPLACE TRIGGER dept_after_stmt
AFTER INSERT OR UPDATE OR DELETE ON dept
BEGIN
for i in 1 .. dept_pkg.dept1.count loop
test_varray_proc(dept_pkg.dept1(i));
end loop;
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
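The examples in this guide assume that db_varray_type and test_varray_proc already exist in the database. Definitions along the following lines would back them; the VARRAY size, element length, and the Java class and method names shown here are illustrative assumptions, not values mandated by Netcool/Impact:

```
-- Assumed definition of the VARRAY type used throughout these examples.
-- The size (32) and element length (4000) are illustrative only.
CREATE OR REPLACE TYPE db_varray_type AS VARRAY(32) OF VARCHAR2(4000);
/
-- Assumed call spec that publishes the database client's sendEvent()
-- function. The Java class name below is a hypothetical placeholder.
CREATE OR REPLACE PROCEDURE test_varray_proc (event_data IN db_varray_type)
AS LANGUAGE JAVA
NAME 'DatabaseClient.sendEvent(oracle.sql.ARRAY)';
/
```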
Update event trigger example:
This example shows how to create a set of Oracle triggers that send an update
event to Netcool/Impact.
CREATE OR REPLACE PACKAGE dept_pkg AS
TYPE deptTable IS TABLE OF db_varray_type INDEX BY BINARY_INTEGER;
dept1 deptTable;
emptyDept deptTable;
end;
/
CREATE OR REPLACE TRIGGER dept_reset
BEFORE INSERT OR UPDATE OR DELETE ON dept
BEGIN
dept_pkg.dept1 := dept_pkg.emptyDept;
end;
/
CREATE OR REPLACE TRIGGER dept_after_row
AFTER UPDATE ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT', 'DEPTNO | '||:OLD.DEPTNO,
'LOC | '||:OLD.LOC, 'DNAME | '||:OLD.DNAME, 'IMPACTED | '||:OLD.IMPACTED);
end;
/
CREATE OR REPLACE TRIGGER dept_after_stmt
AFTER INSERT OR UPDATE OR DELETE ON dept
BEGIN
for i in 1 .. dept_pkg.dept1.count loop
test_varray_proc(dept_pkg.dept1(i));
end loop;
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Delete event trigger example:
This example shows how to create a set of Oracle triggers that send a delete event
to Netcool/Impact.
CREATE OR REPLACE PACKAGE dept_pkg AS
TYPE deptTable IS TABLE OF db_varray_type INDEX BY BINARY_INTEGER;
dept1 deptTable;
emptyDept deptTable;
end;
/
CREATE OR REPLACE TRIGGER dept_reset
BEFORE INSERT OR UPDATE OR DELETE ON dept
BEGIN
dept_pkg.dept1 := dept_pkg.emptyDept;
end;
/
CREATE OR REPLACE TRIGGER dept_after_row
AFTER DELETE ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT', 'DEPTNO | '||:OLD.DEPTNO,
'LOC | '||:OLD.LOC, 'DNAME | '||:OLD.DNAME, 'IMPACTED | '||:OLD.IMPACTED);
end;
/
CREATE OR REPLACE TRIGGER dept_after_stmt
AFTER INSERT OR UPDATE OR DELETE ON dept
BEGIN
for i in 1 .. dept_pkg.dept1.count loop
test_varray_proc(dept_pkg.dept1(i));
end loop;
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Before create event trigger example:
This example shows how to create a trigger that sends an event before Oracle
executes a CREATE command.
CREATE OR REPLACE TRIGGER ddl_before_create
BEFORE CREATE
ON SCOTT.schema
DECLARE
my_before_create_varray db_varray_type;
BEGIN
my_before_create_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_before_create_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
After create event trigger example:
This example shows how to create a trigger that sends an event after Oracle
executes a CREATE command.
CREATE OR REPLACE TRIGGER ddl_after_create
AFTER CREATE
ON SCOTT.schema
DECLARE
my_after_create_varray db_varray_type;
BEGIN
my_after_create_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_after_create_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Before alter event trigger example:
This example shows how to create a trigger that sends an event before Oracle
executes an ALTER command.
CREATE OR REPLACE TRIGGER ddl_before_alter
BEFORE ALTER
ON SCOTT.schema
DECLARE
my_before_alter_varray db_varray_type;
BEGIN
my_before_alter_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_before_alter_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
After alter event trigger example:
This example shows how to create a trigger that sends an event after Oracle
executes an ALTER command.
CREATE OR REPLACE TRIGGER ddl_after_alter
AFTER ALTER
ON SCOTT.schema
DECLARE
my_after_alter_varray db_varray_type;
BEGIN
my_after_alter_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_after_alter_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Before drop event trigger example:
This example shows how to create a trigger that sends an event before Oracle
executes a DROP command.
CREATE OR REPLACE TRIGGER ddl_before_drop
BEFORE DROP
ON SCOTT.schema
DECLARE
my_before_drop_varray db_varray_type;
BEGIN
my_before_drop_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_before_drop_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
After drop event trigger example:
This example shows how to create a trigger that sends an event after Oracle
executes a DROP command.
CREATE OR REPLACE TRIGGER ddl_after_drop
AFTER DROP
ON SCOTT.schema
DECLARE
my_after_drop_varray db_varray_type;
BEGIN
my_after_drop_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'USERNAME | '||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'OBJECTTYPE | '||Sys.dictionary_obj_type,
'OBJECTOWNER | '||Sys.dictionary_obj_owner);
test_varray_proc(my_after_drop_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Server startup event trigger example:
This example shows how to create a trigger that sends an event to Netcool/Impact
at Oracle startup.
The following example requires that the Oracle server is able to invoke custom
Java code from a system event trigger. Refer to Oracle documentation to determine
whether your version of Oracle supports invoking custom Java code from a system
event trigger.
CREATE OR REPLACE TRIGGER databasestartuptrigger
AFTER STARTUP
ON database
DECLARE
v_array_inp db_varray_type;
BEGIN
v_array_inp := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
' OBJECTNAME | '||Sys.database_name, ' USER_NAME | '||Sys.login_user,
' INSTANCE_NUM | '||Sys.instance_num);
test_varray_proc(v_array_inp);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Server shutdown event trigger example:
This example shows how to create a trigger that sends an event to Netcool/Impact
at Oracle shutdown.
The following example requires that the Oracle server is able to invoke custom
Java code from a system event trigger. Refer to Oracle documentation to determine
whether your version of Oracle supports invoking custom Java code from a system
event trigger.
CREATE OR REPLACE TRIGGER databaseshutdowntrigger
BEFORE SHUTDOWN
ON database
DECLARE
v_array_inp db_varray_type;
BEGIN
v_array_inp := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
' OBJECTNAME | '||Sys.database_name, ' USER_NAME | '||Sys.login_user,
' INSTANCE_NUM | '||Sys.instance_num);
test_varray_proc(v_array_inp);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Server error event trigger example:
This example shows how to create a trigger that sends an event to Netcool/Impact
when Oracle encounters a server error.
The following example requires that the Oracle server is able to invoke custom
Java code from a system event trigger. Refer to Oracle documentation to determine
whether your version of Oracle supports invoking custom Java code from a system
event trigger.
CREATE OR REPLACE TRIGGER server_error_trigger_database
AFTER SERVERERROR
ON database
DECLARE
my_varray db_varray_type;
BEGIN
my_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE | '||Sys.database_name, 'INSTANCENUM | ' ||Sys.instance_num,
'LOGINUSER | '||Sys.login_user, 'ERRORNUM | '||Sys.server_error(1));
test_varray_proc(my_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Logon event trigger example:
This example shows how to create a trigger that sends an event to Netcool/Impact
when a user logs in to the database.
The following example requires that the Oracle server is able to invoke custom
Java code from a user event logon trigger. Refer to Oracle documentation to
determine whether your version of Oracle supports invoking custom Java code
from a user event logon trigger.
CREATE OR REPLACE TRIGGER user_login
AFTER logon
on schema
DECLARE
my_login_varray db_varray_type;
BEGIN
my_login_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE |'||Sys.database_name, 'LOGINUSER | ' ||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'TRIGGERNAME | USER_LOGIN');
test_varray_proc(my_login_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Logoff event trigger example:
This example shows how to create a trigger that sends an event to Netcool/Impact
when a user logs out of the database.
The following example requires that the Oracle server is able to invoke custom
Java code from a user event logoff trigger. Refer to Oracle documentation to
determine whether your version of Oracle supports invoking custom Java code
from a user event logoff trigger.
CREATE OR REPLACE TRIGGER user_logoff
BEFORE logoff
on schema
DECLARE
my_logoff_varray db_varray_type;
BEGIN
my_logoff_varray := db_varray_type('TRIGGEREVENT | '||Sys.sysevent,
'EVENTSOURCE |'||Sys.database_name, 'LOGINUSER | ' ||Sys.login_user,
'INSTANCENUM | '||Sys.instance_num, 'TRIGGERNAME | USER_LOGOFF');
test_varray_proc(my_logoff_varray);
end;
/
In this example, test_varray_proc is a call spec that publishes the sendEvent()
function exposed by the database client. The type db_varray_type is a user-defined
data type that represents an Oracle VARRAY. The example uses the Oracle SCOTT
sample schema.
Writing database event policies
Policies that work with database events can handle incoming events, and return
events to the database.
Handling incoming database events
The database event listener passes incoming events to Netcool/Impact using the
built-in EventContainer variable.
When the database event listener receives an event from the database, it populates
the EventContainer member variables with the values sent by the database trigger
using the Oracle VARRAY. You can access the values of EventContainer using the
@ or dot notations in the same way you access the field values in any other type of
event.
The following example shows how to handle an incoming database event. In this
example, the event was generated using the example trigger described in “Insert
events triggers” on page 44.
// Log incoming event values
Log("Department number: " + @DEPTNO);
Log("Location: " + @LOC);
Log("Database name: " + @DNAME);
Log("Impacted: " + @IMPACTED);
The example prints the field values in the event to the policy log.
Returning events to the database
The database event listener supports the use of the ReturnEvent function in a
policy to update or delete events. To use ReturnEvent in a database event policy,
you must perform the following tasks:
Procedure
v Make sure that the database trigger that sends the event populates a special set
of connection event fields.
v Call the ReturnEvent function in the policy that handles the events.
Populating the connection event fields:
For the policy that handles events to return them to the event source, you must
populate a special set of event fields in the database trigger.
These fields specify connection information for the database server. The database
event listener uses this information to connect to the database when you return an
updated or deleted event.
Table 3 on page 56 shows the event fields that you must populate in the trigger.
Table 3. Database trigger connection event fields

RETURNEVENT
   You must set a value of TRUE in this event field.
USERNAME
   User name to use when connecting to the Oracle database server.
PASSWORD
   Password to use when connecting to the Oracle database server.
HOST
   Host name or IP address of the system where Oracle is running.
PORT
   Connection port for the Oracle database server.
SID
   Oracle server ID.
KEYFIELD
   Key field in the database table, or any other field that uniquely
   identifies a table row.
When the database client sends the event to Netcool/Impact, it encrypts the
connection information (including the database user name and password) specified
in the event fields. The connection information is then decrypted when it is
received by Netcool/Impact.
The following example shows a trigger that sends an event to Netcool/Impact
when a new row is inserted into the dept table. In this example, you populate the
connection event fields by specifying elements in the Oracle VARRAY that you
pass to the database.
CREATE OR REPLACE TRIGGER dept_after_row
AFTER INSERT ON dept
FOR EACH ROW
BEGIN
dept_pkg.dept1(dept_pkg.dept1.count + 1) :=
db_varray_type('EVENTSOURCE | SCOTT.DEPT', 'DEPTNO | '||:NEW.DEPTNO,
'LOC | '||:NEW.LOC, 'DNAME | '||:NEW.DNAME, 'IMPACTED | '||:NEW.IMPACTED,
'RETURNEVENT | TRUE', 'USERNAME | ora_user', 'PASSWORD | ora_passwd',
'HOST | ora_host', 'PORT | 4100', 'SID | ora_01', 'KEYFIELD | DEPTNO');
end;
/
Returning events to the database:
You can send updated or deleted events to the database server by using the
ReturnEvent function.
The ReturnEvent function sends the event information to the database event
listener, which assembles an UPDATE or DELETE command by using the information.
The database event listener then sends the command to the database server for
processing. The UPDATE or DELETE command updates or deletes the row that
corresponds to the original sent event. For more information about ReturnEvent,
see the Policy Reference Guide.
The following policy example shows how to return an updated event to the
database.
// Log incoming event values
Log("Department number: " + @DEPTNO);
Log("Location: " + @LOC);
Log("Database name: " + @DNAME);
Log("Impacted: " + @IMPACTED);
// Update the value of the Location field
@LOC = "New York City";
// Return the event to the database
ReturnEvent(EventContainer);
The following example shows how to delete an event from the database.
// Set the value of the DeleteEvent variable to true
@DeleteEvent = true; // @DeleteEvent name is case-sensitive
// Set the event field variables required by the database event listener
// in order to connect to Netcool/Impact
// Return the event to the database
ReturnEvent(EventContainer);
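As a rough model of this assembly, the listener builds a statement keyed on the KEYFIELD value from the returned event fields. The sketch below is illustrative only; the listener's actual quoting, type handling, and connection logic differ:

```python
def build_return_sql(event, table, keyfield, delete=False):
    """Assemble a simplified UPDATE or DELETE statement for a returned event.
    All values are quoted as strings here; the real listener handles types."""
    key_value = event[keyfield]
    if delete:
        return "DELETE FROM %s WHERE %s = '%s'" % (table, keyfield, key_value)
    # Update every returned field except the key that identifies the row.
    assignments = ", ".join(
        "%s = '%s'" % (name, value)
        for name, value in event.items()
        if name != keyfield
    )
    return "UPDATE %s SET %s WHERE %s = '%s'" % (
        table, assignments, keyfield, key_value)
```

For the update example above, the returned event would produce a statement such as UPDATE SCOTT.DEPT SET LOC = 'New York City' WHERE DEPTNO = '10'.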
To commit the updates to the rows, enter the command set autocommit on in
Oracle.
If you are updating or deleting the events from Netcool/Impact, you must modify
two triggers, dept_reset and dept_after_stmt, to trigger only on insert, not on
update or delete.
OMNIbus event listener service
The OMNIbus event listener service is used to integrate with Netcool/OMNIbus
and receive immediate notifications of fast track events.
The OMNIbus event listener is used to get fast track notifications from
Netcool/OMNIbus through the Accelerated Event Notification feature of
Netcool/OMNIbus. It receives notifications through the Insert, Delete, Update, or
Control (IDUC) channel. To set up the OMNIbus event listener, you must set its
configuration properties through the GUI. You can use the configuration properties
to specify one or more channels for which events get processed and also one or
more policies that are to be run in response to events received from
Netcool/OMNIbus.
Important:
v The OMNIbus event listener service works with Netcool/OMNIbus 7.3 and later
to monitor ObjectServer events.
v If the Impact Server and OMNIbus server are in different network domains, for
the OMNIbus event listener service to work correctly, you must set the
Iduc.ListeningHostname property in the OMNIbus server. This property must
contain the IP address or fully qualified host name of the OMNIbus server.
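For example, an entry along the following lines in the ObjectServer properties file sets the property; the host name shown is a placeholder, and the exact quoting follows the conventions of your properties file:

```
Iduc.ListeningHostname: 'omni-host.example.com'
```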
For more information about Netcool/OMNIbus triggers and accelerated event
notification, and the Iduc.ListeningHostname property in the OMNIbus server, see
the Netcool/OMNIbus Administration Guide available from this website:
Tivoli Documentation Central
Setting up the OMNIbus event listener service
Use this procedure to create the OMNIbus event listener service.
Procedure
1. Click Services to open the Services tab.
2. If required, select a cluster from the Cluster list.
3. Click the Create New Service icon in the toolbar and select
OMNIbusEventListener to open the configuration window.
4. Enter the required information in the configuration window.
5. Click the Save icon in the toolbar to create the service.
6. Start the service to establish a connection to the ObjectServer and subscribe to
one or more IDUC channels to get notifications for inserts, updates, and
deletes.
How to check the OMNIbus event listener service logs
Starting the OMNIbus event listener service establishes a connection between
Netcool/Impact and the ObjectServer.
To ensure that the OMNIbus event listener service started successfully, check the
service logs. A message like the following example is displayed if the service
started:
Initializing Service
Connecting to the Data Source: defaultobjectserver
Service Started
Attempting to connect for IDUC notifications
Established connection to the Data Source defaultobjectserver
IDUC Connection: Established:
Iduc Hostname : nc050094
Iduc Port : 58003
Iduc Spid : 2
Creating Triggers
You must create triggers before Netcool/Impact can receive accelerated events
from Netcool/OMNIbus.
Triggers notify Netcool/Impact of accelerated events. For more information about
creating triggers, see the IBM Tivoli Netcool/OMNIbus Administration Guide available
from the following website:
https://www.ibm.com/developerworks/wikis/display/tivolidoccentral/OMNIbus.
This example shows how to create a trigger to immediately notify Netcool/Impact
when there is an alert with a severity of 5 from Node called ImpactNode:
create or replace trigger ft_insert1
group trigger_group1
priority 1
after insert on alerts.status
for each row
begin
if (new.Severity = 5 AND new.Node = 'ImpactNode')
then
iduc evtft 'default' , insert, new
end if;
end;
Another example shows how to create a trigger that sends an accelerated event to
Netcool/Impact when an event with Customer internet_banking is deleted:
create or replace trigger ft_delete1
group trigger_group1
priority 1
before delete on alerts.status
for each row
begin
if (old.Customer = 'internet_banking')
then
iduc evtft 'default' , delete, old
end if;
end;
The following example shows how to create a trigger that immediately notifies
Netcool/Impact if a reinsertion of the event with the Node as New York is received:
create or replace trigger ft_reinsert1
group trigger_group1
priority 1
after reinsert on alerts.status
for each row
begin
if (new.Node = 'New York')
then
iduc evtft 'default' , insert, new
end if;
end;
The following example shows how to create a signal trigger that notifies you when
a gateway connection is established with the ObjectServer:
create or replace trigger notify_isqlconn
group trigger_group1
priority 1
on signal connect
begin
if( %signal.process = 'GATEWAY' )
then
iduc sndmsg 'default', 'Gateway Connection from '
+ %signal.node + ' from user ' + %signal.username + ' at ' +
to_char(%signal.at)
end if;
end;
Yet another example shows how to create a signal trigger that notifies you when
an isql connection is disconnected:
create or replace trigger notify_isqldisconn
group trigger_group1
priority 1
on signal disconnect
begin
if( %signal.process = 'isql' )
then
iduc sndmsg 'default', 'ISQL Disconnect from ' + %signal.node +
' from user ' + %signal.username + ' at ' + to_char(%signal.at)
end if;
end;
Using the ReturnEvent function
You can use the ReturnEvent function to insert, update, or delete events that
Netcool/Impact receives from Netcool/OMNIbus. To read more about the
ReturnEvent function, see the Policy Reference Guide.
This example shows how to use the ReturnEvent function to set the Node to
Impacted and to increment the Severity by 1:
@Node = 'Impacted';
@Severity = @Severity + 1;
ReturnEvent(EventContainer);
Another example shows how to delete the event from alerts.status by using the
ReturnEvent function:
@DeleteEvent = TRUE;
ReturnEvent(EventContainer);
Subscribing to individual channels
You can subscribe to one or more OMNIbus channels for which Netcool/Impact
processes events received from Netcool/OMNIbus.
Procedure
1. In the OMNIbusEventListener configuration window, in One or more Channels
field, add the channel from which Netcool/Impact processes events.
2. To subscribe to more than one channel, add a comma between each channel
name.
3. To change a channel name, or to add or remove one or more entries, make the
changes and restart the OMNIbusEventListener service to implement the
changes.
Results
When Netcool/Impact receives a Fast Track event from a channel that matches one
of the configured channels, the OMNIbusEventListener service log displays the
following message:
Received Fast Track Message from channel: <channel_name>
When Netcool/Impact receives a Fast Track event that does not match any
configured channels, the OMNIbusEventListener service log displays the following
message:
Fast Track Message from channel:
<channel name> did not match any configured channels.
Restriction: Filtering messages by channel is only supported for Fast Track
messages that are sent by using the iduc evtft command. For signal messages sent
by using the iduc sndmsg command, Netcool/Impact does not filter the messages
by which channel they originated from. For information about these commands,
see the IBM Tivoli Netcool/OMNIbus Administration Guide available from the
following website:
https://www.ibm.com/developerworks/wikis/display/tivolidoccentral/OMNIbus.
Controlling which events get sent over from OMNIbus to
Netcool/Impact using Spid
You can use the Spid instead of a channel name to control which events get sent
over to Netcool/Impact.
When the OMNIbusEventListener Service starts, it displays the details of the
connection in the $IMPACT_HOME/log/<servername>_omnibuseventlistener.log,
including the connection Spid. In the following example, the Spid is 2:
21 Feb 2012 11:16:07,363: Initializing Service
21 Feb 2012 11:16:07,363: Connecting to the Data Source: defaultobjectserver
21 Feb 2012 11:16:07,405: Service Started
21 Feb 2012 11:16:07,522: Attempting to connect for IDUC notifications
21 Feb 2012 11:16:07,919: Established connection to the Data Source defaultobjectserver
21 Feb 2012 11:16:08,035: IDUC Connection: Established:
21 Feb 2012 11:16:08,036: Iduc Hostname : nc050094
21 Feb 2012 11:16:08,036: Iduc Port : 60957
21 Feb 2012 11:16:08,036: Iduc Spid : 2
Knowing that Netcool/Impact is connected with Spid 2, you can use this client
connection ID to configure the trigger to send the Accelerated Event Notification
only to the client with Spid=2 (Netcool/Impact). An OMNIbus trigger has the
following syntax:
IDUC EVTFT destination, action_type, row
Where:
v destination = spid | iduc_channel
– spid = integer_expression (The literal client connection ID)
– iduc_channel = string_expression (Channel name)
v action_type = INSERT | UPDATE | DELETE
v row = variable (Variable name reference of a row in the automation)
For example, the following trigger would tell OMNIbus to send notifications only
to Spid=2, which in this case is Netcool/Impact:
create or replace trigger ft_insert1
group trigger_group1
priority 1
after insert on alerts.status
for each row
begin
if (new.Severity >= 5)
then
iduc evtft 2, insert, new
end if;
end;
For more information about OMNIbus triggers and accelerated event notification,
see the OMNIbus Administration Guide available from this website:
https://www.ibm.com/developerworks/wikis/display/tivolidoccentral/OMNIbus
Chapter 4. Handling events
From within an IPL policy, you can access and update field values of incoming
events; add journal entries to events; send new events to the event source; and
delete events in the event source.
Events overview
An event is a set of data that represents a status or an activity on a network. The
structure and content of an event varies depending on the device, system, or
application that generated the event, but in most cases, events are
Netcool/OMNIbus alerts.
These events are generated by Netcool probes and monitors, and are stored in the
ObjectServer database. Events are obtained using event reader, event listener,
and email reader services.
Incoming event data is stored using the built-in EventContainer variable. This
variable is passed to the policy engine as part of the context when a policy is
executed. When you write a policy, you can access the fields in the event using the
member variables of EventContainer.
Event container and event variables
The event container is a native Netcool/Impact data type used to store event data.
The event container consists of a set of event field and event state variables.
Event Container variable
The EventContainer is a built-in variable that stores the field data for
incoming events. Each time an event is passed to the policy engine for
processing, it creates an instance of EventContainer, populates the event
field variables, and stores it in the policy context. You can then access the
values of the event fields from within the policy.
Event field variables
Event field variables are member variables of an event container that store
the field values in an event.
Event state variables
Event state variables are a set of predefined member variables that you can
use to specify the state of an event when you send it to the event source
by using the ReturnEvent function. Two event state variables are used:
JournalEntry and DeleteEvent. For information about using JournalEntry,
see “Adding journal entries to events” on page 64. For information about
using DeleteEvent, see “Deleting events” on page 66.
User-defined event container variables
User-defined event container variables are variables that you create by
using the NewEvent function. You use these variables when you send new
events to the event source, or when you want to temporarily store event
data within a policy.
Using the dot notation to access the event fields
You use the dot notation to access the value of event fields in the same way you
access the values of member variables in a struct in languages like C and C++.
The following policy shows how to use the dot notation to access the value of the
Node, Severity, and Summary fields in an incoming event and print them to the
policy log:
Log(EventContainer.Node);
Log(EventContainer.Severity);
Log(EventContainer.Summary);
Using the @ notation to access the event fields
If you are using IPL, you can use the @ notation to access event fields.
The @ notation is shorthand that you can use to reference the event fields in the
built-in EventContainer variable without having to spell out the EventContainer
name. If you are using JavaScript you must use EventContainer.Identifier.
The following policy shows how to use the @ notation to access the value of the
Node, Severity, and Summary fields in an incoming event and print them to the
policy log:
Log(@Node);
Log(@Severity);
Log(@Summary);
Updating event fields
To update fields in an incoming event, you assign new values to event field
variables in the EventContainer.
An event with a new value assigned to a field variable is not updated in the event
source until you call the ReturnEvent function.
The following examples show how to update the Summary and Severity fields in an
incoming event.
@Summary = "Node down";
@Summary = @Summary + ": Updated by Netcool/Impact";
@Severity = 3;
@Severity = @Severity + 1;
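Putting the two steps together, the following minimal IPL sketch updates the fields and then calls ReturnEvent so that the changes reach the event source:

```
// Update the event field variables in the EventContainer
@Summary = @Summary + ": Updated by Netcool/Impact";
@Severity = 3;
// The event source is only updated when ReturnEvent is called
ReturnEvent(EventContainer);
```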
Adding journal entries to events
You can use IPL and JavaScript to add journal entries to existing
Netcool/OMNIbus events.
About this task
You can only add journal entries to events that exist in the ObjectServer database.
You cannot add journal entries to new events that you have created using the
NewEvent function in the currently running policy. Follow these steps to add a
journal entry to an event.
Procedure
1. Assign the journal text to the JournalEntry variable.
JournalEntry is an event state variable used to add new journal entries to an
existing event. For more information, see “Assigning the JournalEntry
variable.”
2. Send the event to the event source using the ReturnEvent function.
Call ReturnEvent and pass the event container as an input parameter, in the
following manner:
ReturnEvent(EventContainer);
Example
The following example shows how to add a new journal entry to an incoming
event.
// Assign the journal entry text to the JournalEntry variable
@JournalEntry = 'Modified on ' + LocalTime(GetDate()) + "\r\n" +
'Modified by Netcool/Impact.';
// Send the event to the event source using ReturnEvent
ReturnEvent(EventContainer);
Assigning the JournalEntry variable
JournalEntry is an event state variable used to add new journal entries to an
existing event.
Netcool/Impact uses special rules for interpreting string literals assigned to
JournalEntry. Text stored in JournalEntry must be assigned using single quotation
marks, except for special characters such as \r, \n and \t, which must be assigned
using double quotation marks. If you want to use both kinds of text in a single
entry, you must specify them separately and then concatenate the string using the
+ operator.
To embed a line break in a journal entry, you use an \r\n string.
The following examples show how to assign journal text to the JournalEntry
variable.
@JournalEntry = 'Modified by Netcool/Impact';
@JournalEntry = 'Modified on ' + LocalTime(GetDate());
@JournalEntry = 'Modified on ' + LocalTime(GetDate()) + "\r\n" +
'Modified by Netcool/Impact';
Sending new events
Use this procedure to send new events to an event source.
Procedure
1. Create an event container using the NewEvent function
To create an event container, you call the NewEvent function and pass the name
of the event reader associated with the event source, in the following manner:
MyEvent = NewEvent("OMNIbusEventReader");
The function returns an empty event container.
2. Set the EventReaderName member variable:
MyEvent.EventReaderName = "OMNIbusEventReader";
In these examples, the event reader is OMNIbusEventReader.
3. Populate the event fields by assigning values to its event field variables.
For example:
MyEvent.EventReaderName = "OMNIbusEventReader";
MyEvent.Node = "192.168.1.1";
MyEvent.Summary = "Node down";
MyEvent.Severity = 5;
MyEvent.AlertKey = MyEvent.Node + ":" + MyEvent.Summary;
4. Send the event to the data source using the ReturnEvent function.
Call the ReturnEvent function, and pass the new event container as an input
parameter, in the following manner:
ReturnEvent(MyEvent);
Example
The following example shows how to create, populate, and send a new event to an
event source.
// Create a new event container
MyEvent = NewEvent("OMNIbusEventReader");
// Populate the event container member variables
MyEvent.EventReaderName = "OMNIbusEventReader";
MyEvent.Node = "192.168.1.1";
MyEvent.Summary = "Node down";
MyEvent.Severity = 5;
MyEvent.AlertKey = MyEvent.Node + ":" + MyEvent.Summary;
// Add a journal entry (optional)
MyEvent.JournalEntry = 'Modified on ' + LocalTime(GetDate()) + "\r\n" +
'Modified by Netcool/Impact';
// Send the event to the event source
ReturnEvent(MyEvent);
Deleting events
Use this procedure to delete an incoming event from the event source.
Procedure
1. Set the DeleteEvent variable in the event container.
The DeleteEvent variable is an event state variable that you use to specify that
an event is to be deleted when it is sent back to the event source. You must set
the value of DeleteEvent to True in order for an event to be deleted. For
example:
@DeleteEvent = True;
2. Send the event to the event source using the ReturnEvent function.
For example:
ReturnEvent(EventContainer);
Examples of deleting an incoming event from the event
source
These examples show how to delete an incoming event from the event source
using IPL and JavaScript.
v Impact Policy Language:
// Set the DeleteEvent Variable
@DeleteEvent = True;
// Send the event to the event source
ReturnEvent(EventContainer);
v JavaScript:
// Set the DeleteEvent Variable
EventContainer.DeleteEvent = true;
// Send the event to the event source
ReturnEvent(EventContainer);
Chapter 5. Handling data
You can handle data in a policy.
From within a policy you can retrieve data from a data source by filter, by key, or
by link; delete, or add data to a data source; update data in a data source; and call
database functions, or stored procedures.
You can access data stored in a wide variety of data sources. These include many
commercial databases, such as Oracle, Sybase, and Microsoft SQL Server. You can
also access data stored in LDAP data source and data stored by various third-party
applications, including network inventory managers and messaging systems.
Working with data items
Data items are elements of the data model that represent actual units of data stored
in a data source.
The structure of this unit of data depends on the category of the associated data
source. For example, if the data source is an SQL database data type, each data
item corresponds to a row in a database table. If the data source is an LDAP
server, each data item corresponds to a node in the LDAP hierarchy.
Field variables
Field variables are member variables in a data item. There is one field variable for
each data item field. Field variable names are the same as the names in the
underlying data item fields. For example, if you have a data item with two fields
named UserID and UserName, it will also have two field variables named UserID
and UserName.
DataItem and DataItems variables
The DataItems variable is a built-in variable of type array that is used by default to
store data items returned by GetByFilter, GetByKey, GetByLinks or other functions
that retrieve data items. If you do not specify a return variable when you call these
functions, Netcool/Impact assigns the retrieved data items to the DataItems
variable.
The DataItem variable references the first item (index 0) in the DataItems array.
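As a sketch, assuming a data type named Node, the following IPL fragment relies on the default assignment to DataItems and then reads the first returned data item through DataItem:

```
// No return variable: the retrieved data items are assigned to DataItems
GetByFilter("Node", "Location = 'New York'", False);
// DataItem is the first element (index 0) of the DataItems array,
// so these two calls log the same value
Log(DataItem.Location);
Log(DataItems[0].Location);
```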
Retrieving data by filter
Retrieving data by filter means that you are getting data items from a data type
where you already know the value of one or more of the fields.
When you retrieve data by filter, you are saying: "Give me all the data items in this
type, where certain fields contain these values."
Working with filters
A filter is a text string that sets out the conditions under which Netcool/Impact
retrieves the data items.
The use of filters with internal, SQL, LDAP, and some Mediator data types is
supported. The format of the filter string varies depending on the category of the
data type.
SQL filters
SQL filters are text strings that you use to specify a subset of the data items in an
internal or SQL database data type.
For SQL database and internal data types, the filter is an SQL WHERE clause that
provides a set of comparisons that must be true in order for a data item to be
returned. These comparisons are typically between field names and their
corresponding values.
Syntax
For SQL database data types, the syntax of the SQL filter is specified by the
underlying data source. The SQL filter is the contents of an SQL WHERE clause
specified in the format provided by the underlying database. When the data items
are retrieved from the data source, this filter is passed directly to the underlying
database for processing.
For internal data types, the SQL filter is processed internally by the policy engine.
For internal data types, the syntax is as follows:
Field Operator Value [AND | OR | NOT (Field Operator Value) ...]
where Field is the name of a data type field, Operator is a comparative operator,
and Value is the field value.
Attention: Note that for both internal and SQL data types, any string literals in
an SQL filter must be enclosed in single quotation marks. The policy engine
interprets double quotation marks before it processes the SQL filter. Using double
quotation marks inside an SQL filter causes parsing errors.
Operators
The type of comparison is specified by one of the standard comparison operators.
The SQL filter syntax supports the following comparative operators:
v >
v <
v =
v <=
v >=
v !=
v LIKE
Restriction: You can use the LIKE operator with regular expressions as
supported by the underlying data source.
The SQL filter syntax supports the AND, OR and NOT boolean operators.
Tip: Multiple comparisons can be used together with the AND, OR, and NOT
operators.
Order of operation
You can specify the order in which expressions in an SQL filter are evaluated by
using parentheses.
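For example, because AND typically binds more tightly than OR in SQL, these two filters can match different rows; the parentheses in the second filter force the OR comparison to be evaluated first:

```
Location = 'New York' OR Location = 'New Jersey' AND Level = 3
(Location = 'New York' OR Location = 'New Jersey') AND Level = 3
```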
Examples
Here are some examples of SQL filters:
Location = 'NYC'
Location LIKE 'NYC.*'
Facility = 'Wandsworth' AND Facility = 'Putney'
Facility = 'Wall St.' OR Facility = 'Midtown'
NodeID >= 123345
NodeID != 123234
You can use this filter to get all data items where the value of the Location field is
New York:
Location = 'New York'
Using this filter you get all data items where the value of the Location field is New
York or New Jersey:
Location = 'New York' OR Location = 'New Jersey'
To get all data items where the value of the Location field is New York or New
Jersey and the value of the Level field is 3:
(Location = 'New York' OR Location = 'New Jersey') AND Level = 3
LDAP filters
LDAP filters are filter strings that you use to specify a subset of data items in an
LDAP data type.
The underlying LDAP data source processes the LDAP filters. You use LDAP filters
when you do the following tasks:
v Retrieve data items from an LDAP data type using GetByFilter.
v Retrieve a subset of linked LDAP data items using GetByLinks.
v Delete individual data items from an LDAP data type.
v Specify which data items appear when you browse an LDAP data type in the
GUI.
Syntax
An LDAP filter consists of one or more boolean expressions, with logical operators
prefixed to the expression list. The boolean expressions use the following format:
Attribute Operator Value
where Attribute is the LDAP attribute name, Operator is a comparison operator,
and Value is the attribute value.
The filter syntax supports the =, ~=, <, <=, >, >=, and ! operators, and provides
limited substring matching using the * operator. In addition, the syntax also
supports calls to matching extensions defined in the LDAP data source. White
space is not used as a separator between attribute, operator, and value, and those
string values are not specified using quotation marks.
For more information on LDAP filter syntax, see Internet RFC 2254.
Operators
As with SQL filters, LDAP filters provide a set of comparisons that must be true in
order for a data item to be returned. These comparisons are typically between field
names and their corresponding values. The comparison operators supported in
LDAP filters are:
v =
v ~=
v <
v <=
v >
v >=
v !
One difference between LDAP filters and SQL filters is that any Boolean operators
used to specify multiple comparisons must be prefixed to the expression. Another
difference is that string literals are not specified using quotation marks.
Examples
Here are some examples of LDAP filters:
(cn=Mahatma Gandhi)
(!(location=NYC*))
(&(facility=Wandsworth)(facility=Putney))
(|(facility=Wall St.)(facility=Midtown)(facility=Jersey City))
(nodeid>=12345)
You can use this example to get all data items where the common name value is
Mahatma Gandhi:
(cn=Mahatma Gandhi)
Using this example you get all data items where the value of the location attribute
does not begin with the string NYC:
(!(location=NYC*))
To get all data items where the value of the facility attribute is Wandsworth or
Putney:
(|(facility=Wandsworth)(facility=Putney))
Mediator filters
You use Mediator filters with the GetByFilter function to retrieve data items from
some Mediator data types.
The syntax for Mediator filters varies depending on the underlying DSA. For more
information about the Mediator syntax for a particular DSA, see the DSA
documentation.
Retrieving data by filter in a policy
To retrieve data by filter, you call the GetByFilter function and pass the name of
the data type and the filter string.
The function returns an array of data items that match the conditions in the filter.
If you do not specify a return variable, GetByFilter assigns the array to the
built-in variable DataItems.
Example of retrieving data from an SQL database data type
These examples show how to retrieve data from an SQL database data type.
In the first example, you get all the data items from a data type named Node where
the value of the Location field is New York and the value of the TypeID field is
012345.
Then, you print the data item fields and values to the policy log using the Log and
CurrentContext functions.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "Node";Filter = "Location = ’New York’ AND TypeID = 012345";
CountOnly = False;
MyNodes = GetByFilter(DataType, Filter, CountOnly);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
MyNodes = GetByFilter("Node", "Location = ’New York’ AND TypeID = 012345", False);
Log(CurrentContext());
In the second example, you get all the data items from a data type named Node
where the value of the IPAddress field equals the value of the Node field in an
incoming event. As above, you print the fields and values in the data items to the
policy log.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "Node";
Filter = "IPAddress = ’" + @Node + "’";
CountOnly = False;
MyNodes = GetByFilter(DataType, Filter, CountOnly);
// Log the data item field values.
Log(CurrentContext());
Make sure that you understand the filter syntax used in the sample code. When
using the value of a variable inside an SQL filter string, the value must be
encapsulated in single quotation marks. This is because Netcool/Impact processes
the filter string in two stages. During the first stage, it evaluates the variable.
During the second stage, it concatenates the filter string and sends it to the data
source for processing.
A shorter version of this example is as follows:
MyNodes = GetByFilter("Node", "Location = ’" + @Node + "’", False);
Log(CurrentContext());
Example of retrieving data from an LDAP data type
These examples show how to retrieve data from an LDAP data type.
In the first example, you get any data items from a data type named User where
the value of the cn (common name) field is Brian Huang. Then, you print the data
item fields and values to the policy log using the Log and CurrentContext
functions.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "User";
Filter = "(cn=Brian Huang)";
CountOnly = False;
MyUsers = GetByFilter(DataType, Filter, CountOnly);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
MyUsers = GetByFilter("User", "(cn=Brian Huang)", False);
Log(CurrentContext());
In the second example, you get all data items from a data type named Node where
the value of the Location field is New York or New Jersey. As above, you print the
fields and values in the data items to the policy log.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "Node";
Filter = "(|(Location=NewYork)(Location=New Jersey))";
CountOnly = False;
MyNodes = GetByFilter(DataType, Filter, CountOnly);
// Log the data item field values
Log(CurrentContext());
A shorter version of this example is as follows:
MyNodes = GetByFilter("Node", "(|(Location=New York)(Location=New Jersey))", False);
Log(CurrentContext());
Example of looking up data from a Smallworld DSA Mediator
data type
The following example shows how to look up data from a Smallworld DSA
Mediator data type.
Smallworld is a network inventory manager developed by GE Network Solutions.
Netcool/Impact provides a Mediator DSA and a set of predefined data types that
allow you to read network data from the Smallworld NIS.
In this example, you get all the data items from the SWNetworkElement data type
where the value of ne_name is DSX1 PNL-01 (ORP). Then, you print the data item
fields and values to the policy log using the Log and CurrentContext functions.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "SWNetworkElement";
Filter = "ne_name = ’DSX1 PNL-01 (ORP)’";
CountOnly = False;
MyElements = GetByFilter(DataType, Filter, CountOnly);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
MyElements = GetByFilter("SWNetworkElement", \
"ne_name = ’NSX1 PNL-01 (ORP)’", False);
Log(CurrentContext());
Retrieving data by key
Retrieving data by key means that you are getting data items from a data type
where you already know the value of one or more key fields.
When you retrieve data items by key, you are saying, "Give me a certain number
of data items in this type, where the key fields equal these values." Because key
fields typically designate a unique data item, the number of data items returned is
typically one.
Keys
A key is a special field in a data type that uniquely identifies a data item.
You specify key fields when you create a data type. Most commonly, a key field in
the data type identifies a key field in the underlying data source. For more
information about data type keys, see “Data type keys” on page 18.
Key expressions
The key expression is a value or array of values that key fields in the data item
must equal in order to be returned.
The following key expressions are supported:
Single key expressions
A single key expression is an integer, float, or string that specifies the value
that the key field in a data item must match in order to be retrieved.
Multiple key expressions
A multiple key expression is an array of values that the key fields in a data
item must match in order to be retrieved. For more information, see
“Multiple key expressions.”
Multiple key expressions
A multiple key expression is an array of values that the key fields in a data item
must match in order to be retrieved.
Netcool/Impact determines if the key field values match by comparing each value
in the array with the corresponding key field on a one-by-one basis. For example,
if you have a data type with two key fields named Key_01 and Key_02, and you
use a key expression of {"KEY_12345", "KEY_93832"}, the function compares
KEY_12345 with the value of Key_01 and KEY_93832 with the value of Key_02. If both
fields match the specified values, the function returns the data item. If only one
field or no fields match, the data item is not returned.
Retrieving data by key in a policy
To retrieve data by key, you call the GetByKey function and pass the name of the
data type, the key expression, and the maximum number of data items to return.
The function returns an array of data items whose key fields match the key
expression. If you do not specify a return variable, GetByKey assigns the array to
the built-in variable DataItems.
Example of returning data from a data type using a single key
expression
In this example, you retrieve a data item from a data type called Node where the
value of the key field is ID-00001.
Then, you print the data item fields and values to the policy log using the Log and
CurrentContext functions.
// Call GetByKey and pass the name of the data type
// and the key expression.
DataType = "Node";
Key = "ID-00001";
MaxNum = 1;
MyNodes = GetByKey(DataType, Key, MaxNum);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
MyNodes = GetByKey("Node", "ID-00001", 1);
Log(CurrentContext());
Example of returning data by key using a multiple key
expression
In this example, you retrieve a data item from a data type called Customer where
the values of its key fields are R12345 and D98776.
You print the fields and values in the data items to the policy log.
// Call GetByKey and pass the name of the data type
// and the key expression.
Type = "Customer";
Key = {"R12345", "D98776"};
MaxNum = 1;
MyCustomers = GetByKey(Type, Key, MaxNum);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
MyCustomers = GetByKey("Customer", {"R12345", "D98776"}, 1);
Log(CurrentContext());
Retrieving data by link
Retrieving data by link means that you are getting data items from data types that
are linked to one or more data items that you have previously retrieved.
When you retrieve data items by link, you are saying: "Give me data items in these
data types that are linked to these data items that I already have." The data items
that you already have are called the source data items. The data items that you
want to retrieve are known as the targets.
Links overview
Links are an element of the data model that defines relationships between data
items and between data types.
They can save time during the development of policies because you can define a
data relationship once and then reuse it several times when you need to find data
related to other data in a policy. Links are an optional part of a data model.
Dynamic links and static links are supported.
Netcool/Impact provides two categories of links.
Static links
Static links define a relationship between data items in internal data types.
Dynamic links
Dynamic links define a relationship between data types.
Retrieving data by link in a policy
To retrieve data items by link, you must first retrieve source data items using the
GetByFilter or GetByKey functions.
Then, you call GetByLinks and pass an array of target data types and the sources.
The function returns an array of data items in the target data types that are linked
to the source data items. Optionally, you can specify a filter that defines a subset of
target data items to return. You can also specify the maximum number of returned
data items.
Example of retrieving data by link
These examples show how to retrieve data by link.
In the first example, you call GetByFilter and retrieve a data item from the Node
data type whose Hostname value matches the Node field in an incoming event. Then
you call GetByLinks to retrieve all the data items in the Customers data type that
are linked to the Node. In this example, you print the fields and values in the data
items to the policy log before exiting.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "Node";
Filter = "Hostname = ’" + @Node + "’";
CountOnly = False;
MyNodes = GetByFilter(DataType, Filter, CountOnly);
// Call GetByLinks and pass the target data type,
// the maximum number of data items to retrieve and
// the source data item.
DataTypes = {"Customer"};
Filter = "";
MaxNum = "10000";
DataItems = MyNodes;
MyCustomers = GetByLinks(DataTypes, Filter, MaxNum, DataItems);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is:
MyNodes = GetByFilter("Node", "Hostname = ’" + @Node + "’", False");
MyCustomers = GetByLinks({"Customer"}, "", 10000, MyNodes);
Log(CurrentContext());
In the second example, you use a link filter to specify a subset of data items in the
target data type to return. As above, you call GetByFilter and retrieve a data item
from the Node data type whose Hostname value matches the Node field in an
incoming event. Then you call GetByLinks to retrieve all the data items in the
Customers data type whose Location is New York that are linked to the Node. You
then print the fields and values in the data items to the policy log before exiting.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "Node";
Filter = "Hostname = ’" + @Node + "’";
CountOnly = False;
MyNodes = GetByFilter(DataType, Filter, CountOnly);
// Call GetByLinks and pass the target data type,
// the maximum number of data items to retrieve and
// the source data item.
DataTypes = {"Customer"};
Filter = "Location = ’New York’";
MaxNum = "10000";
DataItems = MyNodes;
MyCustomers = GetByLinks(DataTypes, Filter, MaxNum, DataItems);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is:
MyNodes = GetByFilter("Node", "Hostname = ’" + @Node + "’", False");
MyCustomers = GetByLinks({"Customer"}, "Location = ’New York’", 10000, MyNodes);
Log(CurrentContext());
Adding data
Use this procedure to add a data item to a data type.
Procedure
1. Create a context using the NewObject function.
The following example shows how to create a context named MyNode.
MyNode = NewObject();
2. Populate the member variables in the context with data that corresponds to the
values you want to set in the new data item.
The name of each member variable must be exactly as it appears in the data
type definition, as in the following example:
MyNode.Name = "Achilles";
MyNode.IPAddress = "192.168.1.1";
MyNode.Location = "London";
3. Add the data item.
You can add the data item to the data type by calling the AddDataItem function
and passing the name of the data type and the context as input parameters.
The following example shows how to add the data item to a data type.
AddDataItem("Node", MyNode);
Example of adding a data item to a data type
In this example, the data type is named User.
The User data type contains the following fields: Name, Location, and ID.
// Create new context.
MyUser = NewObject();
// Populate the member variables in the context.
MyUser.ID = "00001";
MyUser.Name = "Jennifer Mehta";
MyUser.Location = "New York";
// Call AddDataItem and pass the name of the data type
// and the context.
DataType = "User";
AddDataItem(DataType, MyUser);
A shorter version of this example would be as follows:
MyUser=NewObject();
MyUser.ID = "00001";
MyUser.Name = "Jennifer Mehta";
MyUser.Location = "New York";
AddDataItem("User", MyUser);
Updating data
You can update single data items or multiple data items.
To update a single data item, you must first retrieve it from the data type
using GetByFilter, GetByKey, or GetByLinks. Then you can update the data item
fields by changing the values of the corresponding field variables.
When you change the value of the field variables, the values in the underlying
data source are updated in real time. This means that every time you set a new
field value, Netcool/Impact requests an update at the data source level.
To update multiple data items in a data type, you call the BatchUpdate function
and pass the name of the data type, a filter string that specifies which data items
to update, and an update expression. Netcool/Impact updates all the matching
data items with the specified values.
The update expression uses the same syntax as the SET clause in the UPDATE
statement supported by the underlying data source. This clause consists of a
comma-separated list of the fields and values to be updated.
Updating multiple data items is only supported for SQL database data types.
Example of updating single data items
In this example, you call GetByFilter and retrieve a data item from a data type
called Node.
Then you change the value of the corresponding field variables.
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "Node";
Filter = "Location = '" + @Node + "'";
CountOnly = False;
MyNodes = GetByFilter(DataType, Filter, CountOnly);
MyNode = MyNodes[0];
// Update the values of the field variables in MyNode
// Updates are made in real time in the data source
MyNode.Name = "Host_01";
MyNode.ID = "00001";
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
MyNodes = GetByFilter("Node", "Location = '" + @Node + "'", False);
MyNodes[0].Name = "Host_01";
MyNodes[0].ID = "00001";
Log(CurrentContext());
Example of updating multiple data items
In this example, you update all the data items in the Customer data type whose
Location is New York.
The update changes the values of the Location and Node fields. Then, you retrieve
the same data items using GetByFilter to verify the update. Before exiting, you
print the data item field values to the policy log.
// Call BatchUpdate and pass the name of the data type,
// the filter string and an update expression
DataType = "Customer";
Filter = "Location = 'New York'";
UpdateExpression = "Location = 'London', Node = 'Host_02'";
BatchUpdate(DataType, Filter, UpdateExpression);
// Call GetByFilter and pass the name of the data type
// and a filter string
DataType = "Customer";
Filter = "Location = 'London'";
CountOnly = False;
MyCustomers = GetByFilter(DataType, Filter, CountOnly);
// Log the data item field values.
Log(CurrentContext());
A shorter version of this example is as follows:
BatchUpdate("Customer", "Location = 'New York'", "Location = 'London', Node = 'Host_02'");
MyCustomers = GetByFilter("Customer", "Location = 'London'", False);
Log(CurrentContext());
Deleting data
You can delete single data items or multiple data items.
Before you can delete a single data item from a data type, you must first retrieve it
from the data source. You can retrieve the data item using the GetByFilter,
GetByKey or GetByLinks functions. After you have retrieved the data item, you can
call the DeleteDataItem function and pass the data item as an input parameter.
To delete multiple data items, you call the BatchDelete function and pass it the
name of the data type, and either a filter or the data items you want to delete.
When you delete data items by filter, you are saying: "Delete all data items in this
type, where certain fields contain these values."
The filter is a text string that sets out the conditions that a data item must
match in order for it to be deleted. The syntax for the filter is that of an
SQL WHERE clause that provides a set of comparisons that must be true in order
for a data item to be deleted. This syntax is specified by the underlying data
source. When Netcool/Impact deletes the data items, it passes this filter
directly to the data source for processing.
Deleting data items by filter is only supported for SQL database data types.
You can also delete data items by passing them directly to the BatchDelete
function as an array.
Example of deleting single data items
In this example, you delete a data item from a data type named User where the
value of the Name field is John Rodriguez.
Because the data type (in this case) only contains one matching data item, you can
reference it as MyUsers[0].
// Call GetByFilter and pass the name of the data type
// and the filter string.
DataType = "User";
Filter = "Name = 'John Rodriguez'";
CountOnly = False;
MyUsers = GetByFilter(DataType, Filter, CountOnly);
MyUser = MyUsers[0];
// Call DeleteDataItem and pass the data item.
DeleteDataItem(MyUser);
A shorter version of this example is as follows:
MyUsers = GetByFilter("User", "Name = 'John Rodriguez'", False);
DeleteDataItem(MyUsers[0]);
Example of deleting data items by filter
In this example, you delete all the data items in a data type named Node, where the
value of Location is New York.
// Call BatchDelete and pass the name of the data type
// and a filter string that specifies which data items to delete
DataType = "Node";
Filter = "Location = 'New York'";
DataItems = NULL;
BatchDelete(DataType, Filter, DataItems);
A shorter version of this example is as follows:
BatchDelete("Node", "Location = 'New York'", NULL);
Example of deleting data items by item
The following example shows how to delete multiple data items by passing them
directly to BatchDelete.
In this example, you delete all the data items in a data type named Customer,
where the value of Location is London.
// Call GetByFilter and pass the name of the data type
// and a filter string
DataType = "Customer";
Filter = "Location = 'London'";
CountOnly = False;
MyCustomers = GetByFilter(DataType, Filter, CountOnly);
// Call BatchDelete and pass the array
// returned by GetByFilter
BatchDelete(DataType, NULL, MyCustomers);
A shorter version of this example is as follows:
MyCustomers = GetByFilter("Customer", "Location = 'London'", False);
BatchDelete("Customer", NULL, MyCustomers);
Calling database functions
You can call functions that are defined in the underlying data source of an SQL
database data type.
These functions allow you to obtain such useful data as the number of rows in the
database that match a specified filter. To call a database function, you call
CallDBFunction and pass the name of the data type, a filter string, and the function
expression. CallDBFunction then returns the results of the function.
CallDBFunction uses the same SQL filter syntax as GetByFilter and BatchDelete.
Complete syntax and additional examples for SQL filters are provided in the Policy
Reference Guide.
The following example shows how to call the database COUNT function within a
policy. In this example, you count the number of data items in the Node data type,
where the value of the Location field is New York. Then, you print the number of
items counted to the policy log.
// Call CallDBFunction and pass the name of the data type,
// a filter string and the function expression.
DataType = "Node";
Filter = "Location = 'New York'";
MyFunction = "COUNT(Location)";
NumItems = CallDBFunction(DataType, Filter, MyFunction);
// Print the number of counted items to the policy log.
Log(NumItems);
A shorter version of this example is as follows:
NumItems = CallDBFunction("Node", "Location = 'New York'", "COUNT(Location)");
Log(NumItems);
Chapter 6. Setting up instant messaging
Instant Messaging (IM) is a network service that allows two participants to
communicate through text in real time. The most widely used IM services are
ICQ, AOL Instant Messenger (AIM), Yahoo! Messenger, and Microsoft Messenger.
You can send and receive instant messages from within an
Impact policy.
Netcool/Impact IM
Netcool/Impact IM is a feature that you can use to send and receive instant
messages from within a policy.
Using this feature, Netcool/Impact can monitor an IM account on any of the most
widely used services for incoming messages and perform operations when specific
messages are received. Netcool/Impact can also send instant messages to any other
IM account. You can use this feature to send an instant message to notify
administrators, operators, and other users when certain events occur in your
environment.
Netcool/Impact IM uses Jabber to send and receive instant messages. Jabber is a
set of protocols and technologies that provide the means for two software entities
to exchange streaming data over a network. For more information, see the Jabber
Web site at http://www.jabber.org.
Netcool/Impact IM components
Netcool/Impact has two types of services that work together with your policies to
provide IM functionality.
The Jabber reader service listens for incoming instant messages and then runs a
specified policy when a new message is received. The Jabber service sends
messages to other IM accounts.
Netcool/Impact requires access to a Jabber server in order to send and receive
instant messages. A list of public Jabber servers is available from the Jabber Web
site at http://www.jabber.org/.
Netcool/Impact IM process
The Netcool/Impact IM process has two phases, message listening and message
sending.
Message listening
During the message listening phase, the Jabber reader service listens for new
messages from one or more IM accounts.
When a new message is received, the Jabber reader creates a new EventContainer
and populates it with the contents of the incoming message. Then, the Jabber
reader starts the policy specified in its configuration settings and passes it the
EventContainer. Netcool/Impact then processes the policy.
Message sending
Message sending is the phase during which Netcool/Impact sends new messages
through the Jabber service. Message sending occurs during the execution of a
policy when Netcool/Impact encounters a call to the SendInstantMessage function.
When Netcool/Impact processes a call to SendInstantMessage, it passes the
message content, recipient and other information to the Jabber service. The Jabber
service then assembles the message and sends it to a Jabber server where it is
routed to the specified recipient.
Setting up Netcool/Impact IM
Before you can send and receive instant messages using a policy, you must set up
the Jabber service and the Jabber reader service as described in the online help.
After you have set up these services, you can start writing instant messaging
policies using the information in “Writing instant messaging policies.”
Writing instant messaging policies
You use instant messages in a Netcool/Impact policy to send messages and to
handle incoming messages.
Handling incoming messages
When the Jabber reader receives an incoming message, it starts the policy that is
specified in the Jabber reader service configuration and passes the contents of the
message to the policy using the EventContainer variable.
About this task
The policy can then handle the incoming message in the same way it handles
information that is passed in an incoming event.
When the Jabber reader receives an incoming message, it populates the following
fields in the EventContainer variable: From and Body. The From field contains the
user name of the account from which the message was sent. Body contains the
contents of the message. You can access the contents of these fields using either the
dot notation or the @ notation.
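For example, a minimal handling policy might log both fields, using each notation in turn. This is a sketch only; the log text is illustrative:
// The Jabber reader populates the EventContainer with the incoming message.
// The same field can be read with @ notation or with dot notation.
Sender = @From;
Content = EventContainer.Body;
Log("Received instant message from " + Sender);
Log("Message contents: " + Content);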
Sending messages
You send instant messages from within a policy using the SendInstantMessage
function.
About this task
This function requires you to specify the recipient and the body content of the
message. You can also specify a subject, a chat room ID, and whether to send the
message directly or put it on the message queue for processing by the command
execution manager service. For a complete description of this function, see the
Policy Reference Guide.
Example
The following example shows how to send and receive instant messages using
Netcool/Impact IM.
In this example, the Jabber reader service calls the policy whenever an incoming
message is received. The policy then confirms receipt of the message and performs
a different set of actions, depending on whether the message sender is
NetcoolAdmin or NetcoolOps.
// Call SendInstantMessage and pass the name of the recipient and the content
// of the message as message parameters
To = @From; // Recipient is sender of the original message
TextMessage = "Message receipt confirmed.";
SendInstantMessage(To, NULL, NULL, TextMessage, False);
If (@From == "NetcoolAdmin") {
  Log("Message received from user NetcoolAdmin.");
  Log("Message contents: " + @Body);
} ElseIf (@From == "NetcoolOps") {
  Log("Message received from user NetcoolOps.");
  Log("Message contents: " + @Body);
} Else {
  Log("Message received from unrecognized user.");
  Log("Message contents: " + @Body);
}
Chapter 7. Event enrichment tutorial
The goal of this tutorial is to develop an event enrichment solution to enhance the
value of an existing Netcool/Impact installation.
This solution automates common tasks performed manually by the network
operators and helps to integrate related business data with alerts in the
ObjectServer.
Tutorial overview
This tutorial uses a sample environment that provides the background for
understanding various event enrichment concepts and tasks.
The environment is a network operations center for a large enterprise where the
company has installed and configured Netcool/OMNIbus and is currently using it
to manage devices on its network. The sample environment is a scaled down
representation of what you might actually find in a real world operations center. It
contains only the network elements and business data needed for this tutorial.
This tutorial leads you through the following steps:
v Understanding the Netcool/Impact installation
v Understanding the business data
v Analyzing the workflow in the environment
v Creating a project
v Setting up a data model
v Setting up services
v Writing an event enrichment policy
v Configuring the OMNIbus event reader to run the policy
v Running the complete solution
Understanding the Netcool/Impact installation
The first step in this tutorial is to understand the current Netcool installation.
Generally, before you start developing any Netcool solution, you must find out
which products in the Netcool suite you installed and which devices, systems, or
applications are being monitored in the environment.
The Netcool installation in the sample environment consists of Netcool/OMNIbus
and a collection of probes that monitor devices on the network. This installation
uses two instances of an ObjectServer database named NCOMS that is set up in a
backup/failover configuration. These ObjectServers are located on host systems
named NCO_HOST_01 and NCO_HOST_02, and run on the default port of 4100.
The probes in this installation monitor various network devices. The details of the
devices are not important in this tutorial, but each probe sends the basic set of
alert fields to the ObjectServer database, including the Node, Summary, Severity,
AlertKey, and Identifier fields.
Understanding the business data
The next step in this tutorial is to understand the location and structure of the
business data in your environment.
In the sample environment, the company uses instances of the Oracle database to
store network inventory information, customer service information, and general
organizational information about the business.
The information that you want to use is stored in two databases named ORA_01 and
ORA_02. ORA_01 is a network inventory database that stores information about the
devices in the network, including their technical specification, facility locations,
and rack numbers. ORA_01 is located on a system named ORA_HOST_01. ORA_02 is a
database that contains information about the various departments in the business.
ORA_02 is located on a system named ORA_HOST_02. Both databases run on port 1521.
Analyzing the workflow
After you find the location and structure of the business data, the next step is to
analyze the current event management workflow in your environment.
The tutorial work environment is a network operations center. In this center, a
number of operators are on duty at all times. They sit in an open work area and
each one has access to a console that displays a Netcool/OMNIbus event list. On
large projector screens on one wall of the operation center are large map
visualizations that provide geographical views into the current network status.
As alerts flow to the ObjectServer from the various Netcool probes and monitors
that are installed in the environment, they are displayed in the event lists available
to the operators. Depending on the severity of the alerts, the operators manually
perform a set of tasks using the event list tools, third-party applications, and
typical office tools like cell phones and email.
For the sake of this tutorial, we assume that, among other tasks, the operators
perform the following actions for each high severity alert. The operators:
v Manually acknowledge the alert using the event list.
v Use an in-house database tool to find information about the device causing the
alert. This tool runs a query against the network inventory database and returns
technical specifications, the location, and other information.
v Use another in-house tool to look up the business department being served by
the device that caused the alert.
v If the business department is part of a mission critical business function, they
increase the severity of the alert and update it in the ObjectServer database.
The operators might perform other actions, like looking up the administrators on
call at the facility where the device is located and contacting them by phone or
pager. After the problem that caused the alert is addressed, the operators might
also record the resolution in a problem log and delete the alert from the
ObjectServer. For this tutorial, however, only use the workflow tasks listed.
Creating the project
After you finish analyzing the workflow, the next step is to create a project in the
GUI.
About this task
You can use this project to store the data model, services, and policies that are used
in this solution. The name of this project is NCI_TUT_01.
Procedure
1. Open the GUI in a web browser and log in.
2. Click one of the links, for example Data Model, to view the project and cluster
selection lists on the Data Model tab.
3. Select a cluster from the Cluster list. From the Project list, select Global.
4. Click the New Project icon on the toolbar to open the New Project window.
5. Use the New Project window to configure your new project.
6. In the Project Name field, type NCI_TUT_01.
7. Click OK then click Close.
Setting up the data model
After you create a project for this tutorial, the next step is to set up a
Netcool/Impact data model.
This data model consists of the event sources, data sources, and data types that are
required by the event enrichment solution. It also consists of a dynamic link that is
used to define the relationship between the data types.
You use the GUI to perform all the tasks in this step.
To set up the data model, you perform the following tasks:
v Create the event source
v Create the data sources
v Create the data types
v Create the dynamic link
Creating the event source
The first task in setting up the data model is to create the event source. As you
learned when you investigated the details of the Netcool installation, the example
environment has one event source, an ObjectServer named NCOMS.
About this task
Because you want to tap into the alerts that are stored in this ObjectServer, you
must create an event source that represents it in Netcool/Impact.
An event source is a special type of data source that Netcool/Impact can use to
represent a physical source of event data in the environment. Since your source of
event data is an ObjectServer database, you must create an ObjectServer data
source and configure it with the connection information you discovered when you
investigated the details of the Netcool installation.
To create the event source:
Procedure
1. Click Data Model to open the Data Model tab.
2. Select a cluster from the Cluster list. From the Project list, select
NCI_TUT_01.
3. Click the New Data Source icon and select ObjectServer from the list. The
New Data Source opens.
4. Type NCOMS in the Data Source Name field.
5. Type the name and password of an ObjectServer user in the Username and
Password fields.
6. Type NCO_HOST_01 in the Primary Host Name field.
7. Type 4100 in the Primary Port field.
8. Click Test Connection to test the ObjectServer connection.
9. Type NCO_HOST_02 in the Backup Host Name field.
10. Type 4100 in the Backup Port field.
11. Click Test Connection to test the ObjectServer connection.
12. Click OK.
Creating the data sources
The next task in setting up the data model is to create the data sources.
About this task
As you learned when you discovered the location and structure of the business
data in your environment, the data you want to use in this solution is in two
Oracle databases named ORA_01 and ORA_02. Since you want to access these
databases, you must create a data source that corresponds to each one.
To create the data sources:
Procedure
1. Click Data Model to open the Data Model tab.
2. Click the New Data Source icon and select Oracle from the list. The New Data
Source window opens.
3. Type ORACLE_01 in the Data Source Name field.
4. Type an Oracle user name and password in the Username and Password fields.
5. Type ORA_HOST_01 in the Primary Host Name field.
6. Type 1521 in the Primary Port field.
7. Type ORA_01 in the SID field.
8. Click Test Connection to test the Oracle connection.
9. Click OK.
Results
Repeat these steps to create another data source that corresponds to the ORA_02
database. Name this data source ORACLE_02.
Creating the data types
The next task in setting up the data model is to create the data types.
About this task
As you learned when you discovered the location and structure of the business
data in your environment, the data that you want to use is contained in two tables.
The first table is called Device and is in the ORA_01 database. This table contains
information about each device on the network. Columns in this table include
Hostname, DeviceID, HardwareID, Facility, and RackNumber.
The second table is called Department and is in the ORA_02 database. This table
contains information about each functional department in the business. Columns in
this table include DeptName, DeptID, and Location.
Since you want to access the data in both of these tables, you must create a data
type for each. Name these data types Device and Department.
To create the data types:
Procedure
1. Click Data Model to open the Data Model tab.
2. Select ORACLE_01 from the data sources list.
3. Click the New Data Type icon.
A new Data Type Editor tab opens.
4. Type Device in the Data Type Name field.
5. Select ORACLE_01 from the Data Source Name drop down menu.
6. Ensure that the Enabled check box is selected. It is selected by default.
7. Scroll down the Data Type Editor tab so that the Table Description area is
visible.
8. Select Device from the Base Table list.
9. Click Refresh.
Netcool/Impact queries the Oracle database and populates the Table
Description browser with the names of each column in the Device table.
10. Specify that the DeviceID field is the key field for the data type by selecting
the Key option in the DeviceID row.
11. Select Hostname from the Display Name Field list.
12. Enter a name for this data type in the Data Type Name field.
13. Click Save in the Data Type Editor tab.
14. Click Close in the Data Type Editor tab.
Results
Repeat these steps to create another data type that corresponds to the Department
table in the ORA_02 database. Name this data type Department.
Creating a dynamic link
The next step is to create a dynamic link between the Device and Department data
types.
About this task
One property of the business data that you are using in this solution is that there is
a relationship between devices in the environment and departments in the
business. All the devices that are located in a certain facility serve the business
departments in the same location. You can make this relationship part of the data
model by creating a dynamic link between the Device and Department data types.
After you create the dynamic link, you can use the GetByLinks function to
traverse it within a policy.
In this relationship, Device is the source data type and Department is the target
data type. When you create the link between the two data types, you can define it
using the following syntax:
Location = '%Facility%'
This filter tells Netcool/Impact that Device data items are linked to Department
data items if the value of the Location field in the Department is equal to the value
of the Facility field in the Device.
To create the dynamic link:
Procedure
1. Click Data Model to open the Data Model tab.
2. Click the name of the Device data type.
A new Data Type Editor tab opens in the Main Work panel of the GUI. This
editor displays configuration information for the Device data type.
3. Select the Dynamic Links tab in the editor.
The Links From This Data Type area opens in the editor.
4. Click the New Link By Filter button to open the Link By Filter window.
5. Select Department from the Target Data Type list.
6. In the Filter field, type the filter string that defines the relationship between
the Device and Department data types. As noted in the description of this task,
the filter string is Location = '%Facility%'. This means that you want Device
data items to be linked to Department data items if the Location field in the
Department is the same as the Facility field in the Device.
7. Click OK.
8. Click the Save button in the Data Type Editor tab.
9. Click the Close button in the Data Type Editor tab.
Reviewing the data model
After you create the dynamic links, you can review the data model using the GUI
to verify that you have performed all the tasks correctly.
About this task
You can review the data model by opening the Data Source and Data Type task
panes in the Navigation panel, and by making sure that the event source, data
sources, and data types that you created are visible.
Setting up services
The next step in this tutorial is to set up the OMNIbus event reader required by
the solution.
Creating the event reader
The OMNIbus event reader for this solution must check the NCOMS ObjectServer
every 3 seconds and retrieve any new events.
Procedure
1. Click Services to open the Services tab.
2. Click the Create New Service icon and select OMNIbus Event Reader from the
list.
3. Type TUT_READER_01 in the Service Name field.
4. Select NCOMS from the Data Source list.
5. Type 3000 in the Polling Interval field.
6. Select the Startup option. This option specifies whether the service starts
automatically when you run Netcool/Impact.
7. Click Save.
Reviewing the services
After you create the event reader, you can use the GUI to verify that you
completed all the tasks correctly.
About this task
To review the service that you created, click the Services tab and make sure that
the TUT_READER_01 OMNIbus event reader is visible.
Writing the policy
After you set up the OMNIbus event reader service, the next step is to write the
policy for the solution.
This policy is named EnrichEvent and it automatically performs the tasks that you
discovered when you analyzed the workflow in the environment.
You can use the EnrichEvent policy to complete the following tasks:
v Look up information about the device that is causing the alert.
v Look up the business departments that are served by the device.
v If one of the business departments is part of a mission critical business function,
the policy increases the severity of the alert to critical.
This section assumes that you already know how to create, edit, and save a policy
using the policy editor tools in the GUI. For more information about these tools,
see the online help.
Looking up device information
The first task that you want the policy to perform is to look up device information
that is related to the alert in the network inventory database.
About this task
Specifically, you want the policy to retrieve technical specifications for the device
that is causing the alert, and information about the facility and the rack number
where the device is located.
To do this, the policy must perform a SELECT at the database level on the table that
contains the device data and return those rows that are related to the incoming
alert. Viewed from the data model perspective, the policy must get data items from
the Device data type where the value of the Hostname field is the same as the value
of the Node field in the alert.
To retrieve the data items, you type the following code into the Netcool/Impact
policy editor tab:
DataType = "Device";
Filter = "Hostname = '" + @Node + "'";
CountOnly = False;
MyDevices = GetByFilter(DataType, Filter, CountOnly);
If (Length(MyDevices) < 1) { Log("No matching device found."); }
If (Length(MyDevices) > 1) { Log("More than one matching device found."); }
MyDevice = MyDevices[0];
Here, GetByFilter is retrieving data items from the Device data type where the
value of the Hostname field is equal to the value of the Node field in the incoming
alert. The data items are stored in an array named MyDevices.
Although GetByFilter is able to return more than one data item in the array, you
only expect the array to contain one data item in this situation, as each device in
the database has a unique Hostname. The first element of the MyDevices array is
assigned to the MyDevice variable so that MyDevice can be used as shorthand later
in the policy.
Because you want to retrieve exactly one data item from the data type, the
policy also prints an error message to the policy log if GetByFilter retrieves
fewer than or more than one.
Looking up business departments
The next task that you want the policy to perform is to look up the business
departments that are served by the device that caused the alert.
About this task
When you set up the data model for this solution, you created a dynamic link.
This link defined the relationship between the devices in the environment and
departments in the business. To look up the business departments that are served
by the device, the policy must take the data item that it previously retrieved from
the Device data type and traverse the links between it and the Department data
type.
To retrieve the Department data items that are linked to the Device, type the
following text into the policy editor below the code you entered previously:
DataTypes = {"Department"};
Filter = NULL;
MaxNum = 10000;
MyDepts = GetByLinks(DataTypes, Filter, MaxNum, MyDevices);
If (Length(MyDepts) < 1) { Log("No linked departments found."); }
Here, GetByLinks retrieves up to 10,000 Department data items that are linked to
data items in the MyDevices array. Since you are certain that the business has
fewer than 10,000 departments, you can use a large value such as this one to make sure
that all Department data items are returned.
The returned data items are stored in the MyDepts array. Because you want at least
one data item from the data type, the policy also prints an error message to the
policy log if GetByLinks does not return any.
Increasing the alert severity
The final task that you want the policy to perform is to increase the severity of the
alert.
About this task
You want to increase the severity of an alert if a department that it affects has a
mission-critical function in the business. For the purposes of this tutorial, the
departments in the business whose function is mission critical are the data center
and transaction processing units.
To perform this task, the policy must iterate through each of the Department data
items that are retrieved in the previous step. For each Department, it must test the
value of the Name field against the names of the two departments in the business
that have mission critical functions. If the Department name is that of one of the
two departments, the policy must increase the severity of the alert to Critical.
Count = Length(MyDepts);
While (Count > 0) {
  Index = Count - 1;
  MyDept = MyDepts[Index];
  If (MyDept.Name == "Data Center" || MyDept.Name == "Transaction Processing") {
    @Severity = 5;
  }
  Count = Count - 1;
}
Here, you use a While loop to iterate through the elements in the MyDepts array.
MyDepts is the array of Department data items that were returned previously in the
policy by a call to GetByLinks.
Before the While loop begins, you set the value of the Count variable to the number
of elements in the MyDepts array. Each time the loop runs, it tests the value of
Count. If Count is greater than zero, the statements inside the loop are executed. If
Count is less than or equal to zero, the statements are not executed. Because Count
is decremented by one each time the loop is performed, the While loop runs once
for each data item in MyDepts.
A variable named Index is used to refer to the current element in the array. The value
of Index is the value of Count minus one, as Netcool/Impact arrays are zero-based
structures whose first element is counted as zero instead of one.
Chapter 7. Event enrichment tutorial
97
Inside the loop, the policy uses an If statement to test the name of the current
Department in the array against the name of the two mission-critical business
departments. If the name of the current Department matches the mission-critical
departments, the policy sets the value of the Severity field in the alert to 5, which
signifies a critical severity.
Reviewing the policy
After you finish writing the policy, you can review it for accuracy and
completeness.
About this task
The following example shows the entire text of this policy.
// Look up device information
DataType = "Device";
Filter = "Hostname = '" + @Node + "'";
CountOnly = False;
MyDevices = GetByFilter(DataType, Filter, CountOnly);
MyDevice = MyDevices[0];
If (Length(MyDevices) < 1) { Log("No matching device found."); }

// Look up business departments
DataTypes = {"Department"};
Filter = NULL;
MaxNum = 10000;
MyDepts = GetByLinks(DataTypes, Filter, MaxNum, MyDevices);
If (Length(MyDepts) < 1) { Log("No linked departments found."); }

// If department is mission-critical, update severity of alert
Count = Length(MyDepts);
While (Count > 0) {
  Index = Count - 1;
  MyDept = MyDepts[Index];
  If (MyDept.Name == "Data Center" || MyDept.Name == "Transaction Processing") {
    @Severity = 5;
  }
  Count = Count - 1;
}
Running the solution
The final step in this tutorial is to run the event enrichment solution.
Before you begin
Before you run the solution, you must configure the TUT_READER_01 OMNIbus
event reader service so that it triggers the EnrichEvent policy. To configure
TUT_READER_01:
1. Open Netcool/Impact and click Services to open the Services tab.
2. Select the TUT_READER_01 service and click Edit.
3. Click the Event Mapping tab.
4. To create a mapping, click the New Mapping button.
5. If you want to trigger the EnrichEvent policy for all events, leave the Filter
Expression field empty. If you want to trigger the EnrichEvent policy for
specific events, enter the values for these events.
6. Select EnrichEvent in the Policy to Run field.
7. Click the Active check box.
8. To save the configuration, click OK.
9. To save the changes to the TUT_READER_01 service, click the save icon.
Procedure
To start the solution, you simply start the OMNIbus event reader service.
The event reader then begins to monitor the ObjectServer and retrieves any new
events that appear. When a new event appears, the event reader brings it back into
Netcool/Impact, where it is processed by running the EnrichEvent policy.
Chapter 8. Working with the Netcool/Impact UI data provider
You can use the UI data provider in Netcool/Impact to provide data to UI data
provider compatible clients.
Integration with Netcool/Impact
The UI data provider accesses data from data sources, data types, and policies in
Netcool/Impact.
The console is a component of Jazz for Service Management called IBM Dashboard
Application Services Hub. The IBM Dashboard Application Services Hub is
referred to as the console in the rest of this documentation.
You can use the UI data provider to visualize data from Netcool/Impact in the
console. You can use the console to create your own widgets or you can use one of
the customizable self service dashboards.
Jazz for Service Management
The UI data provider requires Netcool/Impact and Jazz for Service Management.
Jazz for Service Management is available for download from the same page as
Netcool/Impact. You use the installer that is provided with Jazz for Service
Management to install it separately.
We recommend that you install Netcool/Impact and Jazz for Service Management
on separate servers. If you do install Netcool/Impact and Jazz for Service
Management on the same server, you must change the default port numbers to
avoid a conflict between the GUI Server and the IBM Dashboard Application
Services Hub, referred to as the console in this documentation. For example, a GUI
Server installed uses port 16310 as the default port. The console dashboards in Jazz
for Service Management use the same port. In this case, you must change the port
that is used by the console dashboards in Jazz for Service Management, for
example to 18310.
For more information about Jazz for Service Management, see
http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.psc.doc_1.1.0/
psc_ic-homepage.html
Getting started with the UI data provider
Before you can use the Netcool/Impact UI data provider, you must complete the
prerequisites.
Prerequisites
Check that the correct components are installed.
You must configure the user authorization.
Create the data source and data types or the policy that you want to use to
provide data to the UI data provider.
Visualizing data in the console
To visualize data in the console, you must complete the following tasks:
1. Create the remote connection between the UI data provider and the console.
2. Create the data model or policy that provides the data from Netcool/Impact.
3. Create a page in the console.
4. Create a widget on the page in the console.
UI data provider components
Before you can use the UI data provider, you must ensure that you install all the
required components.
Required Jazz for Service Management components
Before you can use the UI data provider, you must install the IBM Dashboard
Application Services Hub (the console) component of Jazz for Service Management.
If you want to use the Netcool/Impact self service dashboard (SSD) widgets, you
must install the SSD widgets on the Dashboard Application Services Hub Server.
For more information, see “Installing the Netcool/Impact Self Service Dashboard
widgets” on page 157.
Component overview
A typical distributed deployment can consist of two or more instances of the
Impact Server installed on separate systems and configured as part of a server
cluster, and an instance of the GUI Server installed on a system that is configured
to allow users to access the GUI through a web browser. Server clustering provides
failover and load-balancing functionality for the Impact Server instances.
The custom dashboard server uses the widgets that are created in Jazz for Service
Management to connect to the GUI Server. The custom dashboard uses the
Registry Services component that is provided by Jazz for Service Management to
connect to a DB2 database. For more information about the Registry Services
component, see http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/
com.ibm.psc.doc_1.1.0/psc_ic-homepage.html.
Configuring user authentication
Before you can use the UI data provider, you must assign the appropriate role.
About this task
User authentication is controlled by the impactUIDataProviderUser role. This role is
administered in the GUI Server.
Procedure
You must assign one of the following roles to users to enable access to the UI data
provider:
v iscadmins
v impactFullAccessUser
v impactUIDataProviderUser
For more information about this role, see the information about the
impactUIDataProviderUser role in the working with roles section of the
Administration Guide.
Data types and the UI data provider
To ensure that the data type can send data to the UI data provider, you must
ensure that the following settings exist. Internal and SNMP data types require
additional settings.
When you create a data type, you must consider the following settings:
v For all data types, except the internal data type, you must also select the Key
Field check box for the key field. The key field identifies the uniqueness of the
data that is displayed by the widget in the console.
v You must enable the data type so that it can send data to the UI data provider.
To ensure that the data type can be accessed by the UI data provider, open the
data type editor and select the Access the data through UI data provider:
Enabled check box. After the data model refreshes, the data type is available as
a data provider source. The default refresh rate is 5 minutes. However, this
setting can be changed. For more information about how to change the refresh
rate, see “UI data provider customization” on page 166.
v You must select a display name that does not contain any special characters or
spaces. To select the display name, click Data Model to open the Data Model
tab. Expand the data source that the data type belongs to and click the data type
that you want to use. Select the field that you want to use as the display name
from the Display Name Field list.
v If the data type key field value contains quotation marks ("), the console cannot
support the widget that is based on the data type. You cannot click the row that
is based on the key field, use it to send an event to another widget, or use it to
provide hover preview information. You must use a key field that does not
contain quotation marks.
Internal data types
If you use Internal data types, the UI data provider uses the first field in the
Additional Fields configuration as the item identifier. You must choose a unique
value for the key field; otherwise, the UI data provider overwrites your chosen key
with the most recent value.
SNMP data types
If you use SNMP data types, you must define a value in the Field Name field in
the data type editor. The UI data provider uses the value from the Field Name
field as the item identifier. If more than one entry for the same value in Field
Name field exists, the UI data provider uses the entry that was created most
recently. If you want the UI data provider to use an item identifier that is unique,
enter a unique value in the Field Name field for the data type.
Integrating chart widgets and the UI data provider
If you use the pie or line chart widgets to visualize data in the console, you must
change the number of items per page from All to a number to ensure that the
console can display the data.
Procedure
1. Open a page in the console or create a new one. See “Creating a page in the
console” on page 106.
2. Select a widget. To select a widget, click All and drag the widget into the
content area.
3. Configure the widget data. To configure the widget data, click it in the content
area and click the Down arrow icon > Edit. The Select a dataset window is
displayed.
4. Select the data set that you want to use to provide the data. To search for the
data type, enter the data type name and click the Search button. To display a
list of all the available data types, click the Show All icon.
5. Click Settings. In the Items per page list, change the number of items per
page from All to a number. For example, change it to 50.
6. Click OK to save the widget.
Names reserved for the UI data provider
The following names are reserved for use by the UI data provider. You cannot use
these names in your policies and databases.
The UI data provider uses the comma (,) and ampersand (&) characters in the UI
and in the URL for policies that use the DirectSQL policy function. You can use AS
instead of ampersand (&) in policies.
The UI data provider uses the UIObjectId field to index key fields. You cannot use
the UIObjectId field in any of your policies.
Netcool/Impact uses the AS UIDPROWNUM field in the query that is used for DB2 and
Oracle databases. You cannot use UIDPROWNUM as a field name for any of the
connected DB2 and Oracle databases.
The topology and tree widgets use the UITreeNodeId field. The UITreeNodeId field
is reserved for use by the UI data provider, together with the following fields,
which are also reserved:
v UITreeNodeId
v UITreeNodeParent
v UITreeNodeStatus
v UITreeNodeLabel
v UITreeNodeType
General steps for integrating the UI data provider and the
console
You can use a variation of the general steps that are listed here to visualize data
from the Netcool/Impact UI data provider in the console.
The exact steps differ depending on whether you use a data type or policy to
provide the data. However, in general, to integrate the UI data provider and the
console, you must complete the following activities:
1. Create the remote connection.
If you want to connect to the local UI data provider by using the UI data
provider data source with an SSL enabled connection, the signed certificate
must be exchanged between the GUI Server and Impact Server. For more
information see Configuring SSL with scripts in the Security section of the
documentation.
2. Create the information provider.
If you want to visualize data directly from a DB2 table or other data source,
you must set up the data model in Netcool/Impact.
If you want to visualize data directly from a policy in Netcool/Impact, you
must create the policy. For example, if you want to use a policy to mash up
data from two different sources, you must create a policy in Netcool/Impact
that summarizes the data.
3. Create a page and widget in the console.
Setting up the remote connection between the UI data provider
and the console
Before you can visualize data from the UI data provider in the console, you must
configure the remote connection between the UI data provider and the console.
Procedure
1. Open the console.
2. To open the Connections window, click Settings > Connections. The
Connections window lists all the available data providers.
3. To create a new remote provider to represent the Netcool/Impact UI data
provider, click the Create new remote provider icon and complete the
following settings:
a. Select the protocol from the list. For example, HTTPS.
b. Enter the host name. For example, the IP address of the GUI Server.
c. Enter the port number. For example, for HTTP-SSL the value is 16311.
d. Enter the user name and password that you used when you installed the
Impact Server.
e. Select the data provider that you created. To view all the available data
providers, click Search. After you select the data provider, the Name and
Provider ID fields are automatically populated. If you use multiple servers
in a clustered environment, a connection is displayed for each server. For
example, if you use Netcool/Impact in a cluster with TBSM, a connection is
displayed for both members of the cluster.
4. To create the remote provider connection, click OK.
Creating the data model
Before you can integrate the UI data provider and the console, you must create a
data source and data type to provide data.
Before you begin
Before you create a data type, there are a number of specific settings that are
required to facilitate the integration with the UI data provider. For more
information about these settings, see “Data types and the UI data provider” on
page 103.
Procedure
1. Create a data source.
2. Create a data type.
Results
The changes are only visible after the refresh interval. The refresh rate is 5 minutes
by default. For more information about how to change this setting, see “UI data
provider customization” on page 166.
Creating a page in the console
Before you can create a widget to visualize the data from the UI data provider, you
must create a page in the console.
Procedure
1. Open the console.
2. To create a page, click the Console Settings icon at the bottom left side of the
screen. Choose Pages > New Page.
3. Enter the name of the page.
4. Click OK to save the page.
Results
The page is created. You can now create a widget to visualize the data.
Creating a widget on a page in the console
To visualize data from Netcool/Impact on a page in the console, you must create a
widget.
About this task
Before you create a widget, you must create a page in the console. For more
information, see “Creating a page in the console.”
Tip: When you create multiple output parameters, remember that each policy
output parameter that you create generates its own data set. When you assign a data set
to a widget, only those tasks that are associated with the specific output parameter
are run.
Procedure
1. Open the page that you want to use for this widget in the console.
2. Select a widget. To select a widget, click All and drag the widget into the
content area.
For example, click All, select the table widget, and drag the widget into the
content area.
3. Configure the widget data. To configure the widget data, click it in the content
area and click the Down arrow icon > Edit. The Select a dataset window is
displayed.
4. Select the data set that you want to use to provide the data. To search for the
data source, data type, or policy that provides the data, enter the data type
name and click the Search button. If you use a data type from Netcool/Impact
to provide data for the widget, you can search for either the data source or data
type name. If you use a policy from Netcool/Impact, you can search for the
policy name or the output parameter name.
If you configured any policy-related actions on a policy that is used with the UI
data provider, the policy-related actions are displayed when you create the widget
and right-click an action in the widget.
The data type is only displayed after the defined refresh interval. The default is
5 minutes. If you use a data type that you just created, you must wait 5
minutes before the data type displays.
5. If you want to use the line or pie chart widget, you must change the number of
items per page from All to a number. To do so, click Settings. In the Items per
page list, change the number of items per page from All to a number.
6. To save the new widget, click OK.
Results
You can now use the new widget to visualize the data from the specified data type
in the console.
Accessing data from Netcool/Impact policies
You can use the UI data provider to access data from Netcool/Impact policies.
You must create user output parameters for each policy that you want to use with
the UI data provider. You can also configure policies related to the UI data provider
to be enabled for specific actions. When the policy actions are enabled, you can
right-click an action in the widget in the console and the list of policy actions is
displayed. When a policy action is changed in the policy editor, the list on the
widget in the console is updated automatically.
In addition, if you use an Impact object, an array of Impact objects, the DirectSQL,
or the GetByFilter policy function, you must be aware of certain special
requirements. These special cases are described and, where required, an example is
provided.
If your policy retrieves data from a database where the key field contains
quotation marks ("), the console cannot support the widget that is based on the
data that is provided by the policy. You cannot click the row that is based on the
key field, use it to send an event to another widget or to provide hover preview
information. You must use a key field that does not contain quotation marks.
Configuring policy settings
To use the UI data provider or OSLC with your Netcool/Impact policies, you must
configure policy input or output parameters to make the policy results compatible
with the UI data provider or available as OSLC resources. You can also enable
policy actions for use with the UI data provider.
About this task
You can create either policy input parameters or policy output parameters. Policy
input parameters represent the input parameters that you define in policies. For
example, you can use a policy input parameter to pass values from one policy to
another in a data mashup.
Policy output parameters represent the parameters that are output by policies. For
example, the UI data provider uses policy output parameters to visualize data
from policies in the console.
You can also configure policies related to the UI data provider to be enabled for
specific actions. When the policy actions are enabled, you can right-click an
action in the widget in the console and the list of policy actions is displayed.
When a policy action is updated in the policy editor, the list on the widget in the
console is updated automatically.
Procedure
1. Click the Policies tab. Create a JavaScript or an IPL policy. To open the Policy
Settings Editor in the policy editor toolbar, click the Configure Policy Settings
icon.
2. To create a policy output parameter, click New Output Parameter:New. To
create a policy input parameter, click New Input Parameter:New. Mandatory
fields are denoted by an asterisk (*). You must enter a unique name in the
Name field.
3. Define the custom schemas for the output parameters if required.
If you are using the DirectSQL policy function with OSLC, you must define the
custom schema for it. You need to create an output parameter for this policy to
create a UI Data Provider policy-related action.
If you are using DirectSQL, Impact Object, or Array of Impact Object with
the UI data provider or the chart widget, you must define the custom schema
for these values.
For more information, see “Creating custom schema values for output
parameters” on page 111.
4. Click New to create a UI Data Provider policy-related action. You can use this
option to enable a policy action on a widget in the console (the Dashboard
Application Services Hub) in Jazz for Service Management. For more
information, see “Creating a widget on a page in the console” on page 106.
a. In the Name field, add a name for the action. The name that you add
displays in the widget in the console when you right-click on an action in
the specified widget.
b. In the Policy Name menu, select the policy that you want the action to
relate to.
c. In the Output Parameter menu, select the output parameter that will be
associated with this action. If you select the All output parameters option,
the action becomes available for all output parameters for the selected
policy.
5. To enable a policy to run with a UI data provider, select the Enable policy for
UI Data Provider actions check box.
6. To enable a policy to run with the Event Isolation and Correlation
capabilities, select the Enable Policy for Event Isolation and Correlation
Actions check box.
7. To save the changes to the parameters and close the window, click OK.
Example
This example demonstrates how to create output parameters for a policy. First, you
define a simple policy, like:
first_name = "Mark";
zip_code = 12345;
Log("Hello " + first_name + " living at " + zip_code);
Next, define the output parameters for this policy. In this case, there are two
output parameters. You enter the following information:
Table 4. PolicyDT1 output parameter

Field                  User entry
Name                   Enter a unique name. For example, PolicyDT1.
Policy variable name   first_name
Format                 String

Table 5. PolicyDT2 output parameter

Field                  User entry
Name                   Enter a unique name. For example, PolicyDT2.
Policy variable name   zip_code
Format                 Integer
Accessing Netcool/Impact object variables in a policy
You can use the NewObject function to create Netcool/Impact objects in a policy. If
you want to access these objects from the UI data provider, you must create a
policy output parameter.
Procedure
1. To open the policy user parameter editor, click the Configure Policy Settings
icon in the policy editor toolbar. You can create policy input and output
parameters. Click New to open the Create a New Policy Output Parameter
window.
2. Select Impact Object from the Format list. The Schema Definition form control
appears on the form.
3. Beside the Schema Definition form control, click the Open Schema Definition
Editor icon and add a field. For example, enter the policy object name in the
Policy Variable Name field.
Example
The following example demonstrates how to make an Impact object variable
available to the UI data provider. First, you create the following policy, called
Test_Policy2:
MyObject = NewObject();
MyObject.fname = 'Sam';
MyObject.age = 25;
MyObject.bmi = 24.5;
Define the output parameters for the policy as follows:
Table 6. PolicyObject1 output parameter

Field                  User entry
Name                   Enter a unique name. For example, PolicyObject1.
Policy variable name   MyObject
Format                 Impact Object
Schema definition
Data source name
Data type name
Accessing data types output by the GetByFilter function
If you want to access the results from the GetByFilter function, you must create
output parameters for the UI data provider.
Procedure
1. To open the policy settings editor, click the Configure Policy Settings icon in
the policy editor toolbar. You can create policy input and output parameters for
the policy. To open the Create a New Policy Output Parameter window, click
New.
2. Select data type as the format.
3. Enter the name of the data item to which the output of the GetByFilter
function is assigned in the Policy Variable Name field.
4. Enter the name of the data source in the Data Source Name field.
5. Enter the name of the data type in the Data Type Name field.
Example
This example demonstrates how to make the output from the GetByFilter function
available to the Netcool/Impact UI data provider.
You created a data type that is called ALERTS that belongs to the
defaultobjectserver data source. This data type belongs to Netcool/OMNIbus and
it points to alerts.status. The key field is Identifier. The following four rows of
data are associated with the key field:
v Event1
v Event2
v Event3
v Event4
Create the following policy, called Test_Policy3:
MyAlerts = GetByFilter("ALERTS", "Severity > 0", false);
Define the output parameters for the policy as follows:
Table 7. PolicyData1 output parameter

Field                  User entry
Name                   PolicyData1
Policy variable name   MyAlerts
Format                 Data type
Data source name       defaultobjectserver
Data type name         ALERTS
Select the output parameter as the data set for the widget that you are using to
visualize the data.
1. Open the console and open a page.
2. Drag a widget into the content area.
3. To configure the widget data, click it. Click the down arrow icon and click Edit.
The Select a dataset window is displayed.
4. Select PolicyData1 as the data type that belongs to the defaultobjectserver data
source.
5. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click OK.
Accessing data types output by the DirectSQL function
If you want to access the results from the DirectSQL policy function, you must
create output parameters for the UI data provider.
About this task
The comma (,) and ampersand (&) characters are reserved as special characters for
the user interface and the URL. You cannot use these characters in policies that are
accessed by the DirectSQL policy function. You can use AS instead of ampersand
(&) in policies as required.
For example, consider the following policy:
SELECT "My&Test" AS My_Test FROM test_table
This policy returns the field name My_Test instead of My&Test.
Procedure
1. To open the policy user parameter editor, click the Configure Policy Settings
icon in the policy editor toolbar. You can create policy input and output
parameters. To open the Create a New Policy Output Parameter window, click
New.
2. Select DirectSQL / UI Provider Datatype as the format.
3. Enter a name for the output parameter.
4. Enter the name of the data item to which the output of the DirectSQL function
is assigned in the Policy Variable Name field.
5. To define the DirectSQL format values, click the Open the schema definition
editor icon. For detailed information about how to create custom schema
values, see “Creating custom schema values for output parameters”.
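For illustration, the following sketch shows a policy variable that such an output parameter can reference. The data source name MYDB and the table test_table are hypothetical; the third argument of DirectSQL is the CountOnly flag:

// MYDB and test_table are hypothetical names used for this sketch
MyRows = DirectSQL("MYDB", "SELECT city AS City, zip AS ZIP FROM test_table", false);

You would then enter MyRows in the Policy Variable Name field for the output parameter.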
Creating custom schema values for output parameters
When you define output parameters that use the DirectSQL, Array of Impact
Object, or Impact Object format in the user output parameters editor, you also
must specify a name and a format for each field that is contained in the
DirectSQL, Array of Impact Object, or Impact Object objects.
About this task
Custom schema definitions are used by Netcool/Impact to visualize data in the
console and to pass values to the UI data provider and OSLC. You create the
custom schemas and select the format that is based on the values for each field
that is contained in the object. For example, you create a policy that contains two
fields in an object:
O1 = NewObject();
O1.city = "NY";
O1.ZIP = 07002;
You define the following custom schemas values for this policy:
Table 8. Custom schema values for City

Field    Entry
Name     City
Format   String

Table 9. Custom schema values for ZIP

Field    Entry
Name     ZIP
Format   Integer
If you use the DirectSQL policy function with the UI data provider or OSLC, you
must define a custom schema value for each DirectSQL value that you use.
If you want to use the chart widget to visualize data from an Impact object or an
array of Impact objects with the UI data provider and the console, you define
custom schema values for the fields that are contained in the objects. The custom
schemas help to create descriptors for columns in the chart during initialization.
However, the custom schemas are not technically required. If you do not define
values for either of these formats, the system later rediscovers each Impact object
when it creates additional fields such as the key field, UIObjectId, or the field for
the tree widget, UITreeNodeId. You do not need to define these values for OSLC.
Procedure
1. In the Policy Settings Editor, select DirectSQL, Impact Object, or Array of
Impact Object in the Format field.
2. The system shows the Open the Schema Definition Editor icon beside the
Schema Definition field. To open the editor, click the icon.
3. You can edit an existing entry or you can create a new one. To define a new
entry, click New. Enter a name and select an appropriate format.
To edit an existing entry, click the Edit icon beside the entry that you want to
edit.
4. To mark an entry as a key field, select the check box in the Key Field column.
You do not have to define the key field for Impact objects or an array of Impact
objects. The system uses the UIObjectId as the key field instead.
5. To delete an entry, select the entry and click Delete.
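The key-field behavior described in step 4 can be sketched in plain JavaScript, outside of Impact. The indexObjects helper below is illustrative only, not part of the product: it mimics the idea that objects without a defined key field end up indexed by a sequential UIObjectId.

```javascript
// Assign a sequential UIObjectId to each object that lacks one.
// This only illustrates the indexing idea described above; the real
// provider performs this internally.
function indexObjects(objects) {
  objects.forEach(function (obj, i) {
    if (obj.UIObjectId === undefined) {
      obj.UIObjectId = i;
    }
  });
  return objects;
}

// Two objects with no key field defined get ids 0 and 1.
var items = indexObjects([{ city: "NY" }, { city: "LA" }]);
```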
Accessing an array of Impact objects with the UI data provider
Before you can use the UI data provider to access an array of Impact objects, you
must create an output parameter that represents the array of Impact objects.
About this task
Netcool/Impact uses the field UIObjectId to index the key fields. As a result,
UIObjectId is a reserved field name. You must not use UIObjectId as a custom
field in any of your policies.
Procedure
1. To define an output parameter for the array of Impact objects, click the
Configure Policy Settings icon in the policy editor toolbar. To open the Create
a New Policy Output Parameter window, click New. Create the output
parameter as outlined in the following table:
Table 10. Output parameter for a policy that contains the Array of Impact Objects array
Field                  Instructions
Name                   Enter a name for the output parameter.
Policy Variable Name   Enter a name that is identical to the name of the array of
                       Netcool/Impact objects in the policy that you want to reference.
Format                 Select Array of Impact Objects.
2. After you create the output parameter, you define the custom schema values
for the array of Impact objects. For more information, see “Creating custom
schema values for output parameters” on page 111.
3. To display all the fields and values that are associated with the array, use the
following URL:
https://<hostname>:<port>/ibm/tivoli/rest/providers/Impact_NCICLUSTER/
datasources/<datasourceid>/datasets/<outputparametername>/items?properties=all
where <outputparametername> is the name of the parameter that is defined in
the previous step.
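As a sketch, the URL pattern above can be assembled with a small helper. This is plain JavaScript for illustration only; buildItemsUrl and the host, port, and data source values are assumptions, not part of the product API.

```javascript
// Hypothetical helper that assembles the UI data provider items URL
// following the documented path pattern. Substitute your own host,
// port, data source id, and output parameter name.
function buildItemsUrl(hostname, port, datasourceId, outputParameterName) {
  return "https://" + hostname + ":" + port +
    "/ibm/tivoli/rest/providers/Impact_NCICLUSTER" +
    "/datasources/" + datasourceId +
    "/datasets/" + outputParameterName +
    "/items?properties=all";
}

// Example: request all fields and values for the MyArrayObj1 parameter.
var url = buildItemsUrl("impact.example.com", 16311, "myDataSource", "MyArrayObj1");
```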
Example
For example, consider the following Netcool/Impact objects:
MyObject1=NewObject();
MyObject1.firstname="first_name";
MyObject1.lastname="last_name";
MyObject2=NewObject();
MyObject2.city="mycity";
MyObject2.state="mystate";
An Impact Policy Language (IPL) policy references the array as follows:
MyArrayOfObjects={MyObject1,MyObject2};
A JavaScript policy references the array as follows:
MyArrayOfObjects=[MyObject1,MyObject2];
To map MyArrayOfObjects to the output parameters, create the output parameter
for the array of objects as follows:
Table 11. Output parameters for MyArrayObj1
Field                  User entry
Name                   MyArrayObj1
Policy Variable Name   MyArrayOfObjects
Format                 Array of Impact Objects
To map the values that are contained in the array, create the custom schema values
as follows:
Table 12. first_name custom schema values
Field    User entry
Name     first_name
Format   String

Table 13. last_name custom schema values
Field    User entry
Name     last_name
Format   String

Table 14. mycity custom schema values
Field    User entry
Name     mycity
Format   String

Table 15. mystate custom schema values
Field    User entry
Name     mystate
Format   String
Use the following URL to view the fields and values for MyArrayOfObjects:
https://<hostname>:<port>/ibm/tivoli/rest/providers/Impact_NCICLUSTER/
datasources/<datasourceid>/datasets/MyArrayObj1/items?properties=all
UI data provider and the IBM Dashboard Application Services Hub
To create visualizations and mashups of Netcool/Impact data from sources such as
Netcool/Impact policies and database tables in the IBM Dashboard Application
Services Hub, referred to as the console throughout this section, you can integrate
the Netcool/Impact UI data provider with the console.
Filtering data in the console
You can use the console to filter data based on input parameters. This data can be
derived from Netcool/Impact policies or other data types. To filter data in the
console, you configure the widget settings in the console.
About this task
You make these settings in the table widget UI in the console. To make the settings
that are described here, open the table widget UI and click Edit.
Procedure
v To filter data provided by Netcool/Impact policies, you must select the
executePolicy check box to include the executePolicy Boolean parameter in the
policy. The executePolicy parameter ensures that the policy runs when the user
opens the widget. The system then populates the widget with the required data
from the policy.
If you want to enter values for the policy input parameters in the console, you
can enter these values under Configure Optional Dataset Parameters. The
system passes the values to the input parameters in the policy while the policy
is running.
Attention: The input parameters must be already defined in the policy. If the
input parameters are not defined in the policy, the system cannot pass the values
for the input parameters to the policy. For more information about how to create
policy input parameters, see “Configuring policy settings” on page 107.
v To filter data from other sources, such as data derived from a database table,
users can enter values for the filter parameters in the Configure Optional
Dataset Parameters section in the console. Netcool/Impact uses the values that
are entered here to filter the results that are displayed in the console.
Example: filtering data based on a database table
For example, you want to configure a console widget to display the rows from a
database table that contain a value of 192.168.1.119 for the Device field. In the
console under Configure Optional Dataset Parameters, enter 192.168.1.119 in the
Device field. The widget returns only the data that contains this value in the
Device field.
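The idea of passing optional dataset parameters as filters can be sketched as follows. This is plain JavaScript for illustration only; the console builds the actual request internally, so appendDatasetParams and the query-string format shown here are assumptions, not the product wire format.

```javascript
// Illustrative only: append optional dataset parameters, such as
// Device=192.168.1.119, to a dataset request URL as query parameters.
// The real console performs this internally; the format is assumed.
function appendDatasetParams(baseUrl, params) {
  var parts = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + "=" + encodeURIComponent(params[k]);
  });
  return baseUrl + (baseUrl.indexOf("?") >= 0 ? "&" : "?") + parts.join("&");
}

// Filter the dataset on the Device field, as in the example above.
var filtered = appendDatasetParams("https://host:16311/items?properties=all",
                                   { Device: "192.168.1.119" });
```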
Integrating the tree widget with an Impact object or an array
of Impact objects
To integrate a policy that contains an Impact object or an array of Impact objects
with the tree widget that is available in the console, you must first specify certain
fields in the policy and create the required custom schema definitions.
Procedure
1. If the object is a parent with an indexed ID, add the UITreeNodeId field for the
object. If the object is a child, add the UITreeNodeParent field for the object. If
you do not add these values, the object is not displayed as a tree hierarchy. The
following conditions also apply:
v The first object in the hierarchy cannot be a child, as is the case for all
hierarchies.
v You must specify the parent object before the child.
v A child object cannot use the same ID for itself and its parent.
v The parent ID is an indexed ID and it must start with 0. If you skip a parent
ID, the remaining IDs must still follow this sequential order.
v The schema of each object must be the same, as is the case for all objects that
use the tree widget. In other words, an object can use fewer schema elements
than its parent object but these elements must be defined in the parent object.
A child object cannot use additional schema elements that are not defined in
the parent object.
The UITreeNodeId and UITreeNodeParent fields are not displayed in the console.
2. Create a policy output parameter for the Impact object or the array of Impact
objects. To create a policy output parameter, click the Configure Policy Settings
icon in the policy editor toolbar. To open the Create a New Policy Output
Parameter window, click New. Create the following entries:
Table 16. User output parameters for Impact object or array of Impact objects
Field                  Instructions
Name                   Enter a name for the output parameter.
Policy Variable Name   Enter a name that is identical to the name of the Impact
                       object or the array of Impact objects that is specified in
                       the policy that you want to reference.
Format                 Choose Impact Object or Array of Impact Objects.
3. Create the custom schema values for the values that you want to display as
columns in the console. You must specify a custom schema value for the Impact
object or the objects that are contained in an array of Impact objects. The
schema values that you define can be displayed as columns in the console. You
only need to specify custom schema values for the values that you want to
display. Values such as UITreeNodeId are displayed as properties unless you
specify them as custom schema values.
To create a custom schema value in the policy editor, select Impact Object or
Array of Impact Objects in the Format field. To open the editor, click the Open
the Schema Definition Editor icon. Define the custom schema values as
outlined in the following table:
Table 17. Custom schema values for Impact object or array of Impact objects
Field    Instructions
Name     Enter the name of the custom schema value. For example, this could be
         the name of a field in the Impact object or in the objects in the
         array of Impact objects.
Format   Choose the format of the custom schema value. For example, if the
         parameter is a string, choose String.
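The hierarchy conditions listed in step 1 can be sketched as a small validation routine. This is plain JavaScript for illustration only; validateTree is not a product function. It checks three of the rules: the first object cannot be a child, a parent must be declared before its children, and a node cannot be its own parent.

```javascript
// Validate an array of tree nodes against the documented hierarchy rules.
// Returns "ok" or a short description of the first violation found.
function validateTree(nodes) {
  var seenIds = {};
  for (var i = 0; i < nodes.length; i++) {
    var n = nodes[i];
    if (n.UITreeNodeId !== undefined) {
      seenIds[n.UITreeNodeId] = true;
    }
    if (n.UITreeNodeParent !== undefined) {
      if (i === 0) return "first node cannot be a child";
      if (n.UITreeNodeParent === n.UITreeNodeId) return "node is its own parent";
      if (!seenIds[n.UITreeNodeParent]) return "parent not declared before child";
    }
  }
  return "ok";
}
```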
Example
The following example demonstrates how to integrate an array of Impact objects
and the tree widget.
1. Create a policy that contains an array of Impact objects and the additional
fields that are required for the tree widget, UITreeNodeId and
UITreeNodeParent.
Log("Array of objects with same fields....");
O1=NewObject();
O1.UITreeNodeId=0;
O1.fname="o1fname";
O1.lname="o1lname";
O1.dob="o1dob";
O2=NewObject();
O2.UITreeNodeId=1;
O2.UITreeNodeParent=0;
O2.fname="o2fname";
O2.lname="o2lname";
O2.dob="o2dob";
O3=NewObject();
O3.UITreeNodeId=2;
O3.UITreeNodeParent=1;
O3.fname="o3fname";
O3.lname="o3lname";
O3.dob="o3odb";
O4=NewObject();
O4.UITreeNodeId=3;
O4.UITreeNodeParent=20;
O4.fname="o4fname";
O4.lname="o4lname";
O4.dob="o4odb";
O5=NewObject();
O5.UITreeNodeId=4;
O5.fname="o5fname";
O5.lname="o5lname";
O5.dob="o5odb";
O6=NewObject();
O6.UITreeNodeParent=4;
O6.fname="o6fname";
O6.lname="o6lname";
O6.dob="o6odb";
O7=NewObject();
O7.UITreeNodeParent=4;
O7.fname="o7fname";
O7.lname="o7lname";
O7.dob="o7odb";
O8=NewObject();
O8.UITreeNodeParent=4;
O8.fname="o8fname";
O8.lname="o8lname";
O8.dob="o8odb";
O9=NewObject();
O9.fname="o9fname";
O9.lname="o9lname";
O9.dob="o9odb";
O10=NewObject();
O10.fname="NJ";
O10.lname="Bayonne";
O10.dob="April 1st 2011";
O11=NewObject();
O11.UITreeNodeParent=11;
O11.fname="o11fname";
O11.lname="o11lname";
O11.dob="o11odb";
O12=NewObject();
O12.UITreeNodeId=11;
O12.UITreeNodeParent=0;
O12.fname="o12fname";
O12.lname="o12lname";
O12.dob="o12odb";
Oa=NewObject();
Oa.UITreeNodeId=12;
Oa.UITreeNodeParent=2;
Oa.fname="oafname";
Oa.lname="oalname";
Oa.dob="oaodb";
Ob=NewObject();
Ob.UITreeNodeId=13;
Ob.UITreeNodeParent=12;
Ob.fname="obfname";
Ob.lname="oblname";
Ob.dob="obodb";
Oc=NewObject();
Oc.UITreeNodeId=14;
Oc.UITreeNodeParent=14;
Oc.fname="ocfname";
Oc.lname="oclname";
Oc.dob="ocodb";
Oe=NewObject();
Oe.UITreeNodeParent=14;
Oe.fname="oefname";
Oe.lname="oelname";
Oe.dob="obedb";
Os={O1,O2,O3,O4,O5,O6,O7,O8,O9,O10,O12,O11,Oa,Ob,Oc,Oe};
log("os " + Os);
2. In the policy editor, create the following user output parameter:
Table 18. ArrayofNewObject user output parameter
Field                  User input
Name                   ArrayofNewObject
Policy Variable Name   Os
Format                 Array of Impact Objects
3. In the policy editor, create the following custom schema value definitions for
the array of Impact objects:
Table 19. fname custom schema definition
Field    User input
Name     fname
Format   String

Table 20. lname custom schema definition
Field    User input
Name     lname
Format   String
Integrating data from a policy with the topology widget
Before you can use the topology widget to visualize data from a Netcool/Impact
policy in the console, you need to specify certain fields in the policy.
About this task
The topology widget is intended for use with the tree widget. You use the fields
described here with the fields that you specify for the tree widget. For more
information, see “Integrating the tree widget with an Impact object or an array of
Impact objects” on page 115.
Generally, the nodes are connected in a hierarchy. However, this connection is not
technically required. If you define a node that is not part of a hierarchy, it is
displayed as a stand-alone node that is not part of any other hierarchy.
Procedure
v You must include the following statement in the first object in the policy:
ObjectName.UITreeNodeType= <Node Type>;
where <Node Type> is either GRAPH or TREE. If you do not specify a value, the
default value is TREE.
v You must specify a label for each object. If you do not, the system displays No
label was specified for the label and tooltip. To specify a label, add the
following statement for each object:
ObjectName.UITreeNodeLabel=<Tooltip text>;
where <Tooltip text> is the text that is used for the label and tooltip.
v Define the status for each node. This status is not mandatory. If you do not add
this statement, the status is unknown. If you want to display the status for each
object, add the following statement for each node:
ObjectName.UITreeNodeStatus=<Status>;
where <Status> is the status. The table lists the supported values and the
numbers that represent those values. You can use either the number or the word
to represent the status.
Table 21. Tree node status
Status     Number
Critical   5
Major      4
Minor      3
Warning    2
Normal     0
Unknown
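The mapping in Table 21 can be sketched as a plain JavaScript lookup. This is illustrative only; toStatusNumber is not a product function. It accepts either the word or the number, as described above, and treats anything else as Unknown.

```javascript
// Status-to-number mapping from Table 21, as a simple lookup table.
var STATUS_NUMBERS = {
  Critical: 5,
  Major: 4,
  Minor: 3,
  Warning: 2,
  Normal: 0
};

// Accept a number directly, or map a known word; anything else is
// treated as Unknown and mapped to null here.
function toStatusNumber(status) {
  if (typeof status === "number") return status;
  return STATUS_NUMBERS.hasOwnProperty(status) ? STATUS_NUMBERS[status] : null;
}
```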
Example
The following example illustrates how to define a policy that you want to use with
the tree and topology widgets. This example policy includes a multi-level topology
and a node status that represents severity.
Log("Test Topo");
O0=NewObject();
O0.UITreeNodeType="GRAPH";
O0.UITreeNodeId=0;
O0.UITreeNodeLabel="NJ-Bayonne";
O0.UITreeNodeStatus="Warning";
O0.state="NJ";
O0.city="Bayonne";
O1=NewObject();
O1.UITreeNodeId=1;
O1.UITreeNodeStatus="Normal";
O1.UITreeNodeParent=0;
O1.UITreeNodeLabel="NY-Queens";
O1.state="NY";
O1.city="Queens";
O2=NewObject();
O2.UITreeNodeId=2;
O2.UITreeNodeStatus="Critical";
O2.UITreeNodeParent=1;
O2.UITreeNodeLabel="NC-Raleigh";
O2.state="NC";
O2.city="Raleigh";
O3=NewObject();
O3.UITreeNodeId=3;
O3.UITreeNodeParent=0;
O3.UITreeNodeStatus="Warning";
O3.UITreeNodeLabel="CA-Los Angeles";
O3.state="CA";
O3.city="Los Angeles";
O4=NewObject();
O4.UITreeNodeId=4;
O4.UITreeNodeParent=3;
O4.UITreeNodeStatus="Normal";
O4.UITreeNodeLabel="CO-Denver";
O4.state="CO";
O4.city="Denver";
O5=NewObject();
O5.UITreeNodeId=5;
O5.UITreeNodeStatus="Critical";
O5.UITreeNodeParent=4;
O5.UITreeNodeLabel="MA-Main";
O5.state="MA";
O5.city="Main";
O6=NewObject();
O6.UITreeNodeId=6;
O6.UITreeNodeParent=0;
O6.UITreeNodeStatus="Warning";
O6.UITreeNodeLabel="NH-New Hampshire";
O6.state="NH";
O6.city="New Hampshire";
O7=NewObject();
O7.UITreeNodeId=7;
O7.UITreeNodeParent=6;
O7.UITreeNodeStatus="Normal";
O7.UITreeNodeLabel="TX-Houston";
O7.state="TX";
O7.city="Houston";
O8=NewObject();
O8.UITreeNodeId=8;
O8.UITreeNodeParent=7;
O8.UITreeNodeStatus="Critical";
O8.UITreeNodeLabel="VA-Virginia Beach";
O8.state="VA";
O8.city="Virginia Beach";
Obs={O0,O1,O2,O3,O4,O5,O6,O7,O8};
After you implement the policy, you need to create the output parameters and
custom schema values. For more information about how to do this, see
“Configuring policy settings” on page 107.
Displaying status and percentage in a widget
You can show status and percentage in topology, tree, table, and list widgets by
using policies or data types. To show status and percentages in a widget, you must
create a script in JavaScript format in the data type if the policy uses the
GetByFilter function.
About this task
For data types, the SQL, SNMP, and internal data types are supported. For
policies, the GetByFilter and DirectSQL functions, and the Impact Object and
Array of Impact Objects formats, are supported.
1. Create the data type.
2. In the data type configuration window, add the script to the Define Custom
Types and Values (JavaScript) area.
Restriction: Not all functions that are provided by JavaScript are supported. If
you get a syntax error for a known JavaScript function, check that the function
is supported in an Impact policy.
3. Click the Check Syntax and Preview Sample Result button to preview the
results and to check the syntax of the script.
For DirectSQL, Impact Object, and Array of Impact Objects, the Status and
Percentage can be specified when you create the schema definition. For policies,
you can use IPL or JavaScript for the DirectSQL or GetByFilter functions.
The script uses the following syntax for data types and for policies that use the
GetByFilter function.
ImpactUICustomValues.put("FieldName,Type",VariableName);
where Type is either Percentage or Status, and VariableName can be a variable or
a hardcoded value. Always cast the value to a String to avoid errors, even if the
value is numeric. See the following examples:
ImpactUICustomValues.put("MyField,Percentage",""+VariableName);
ImpactUICustomValues.put("MyField,Percentage","120");
ImpactUICustomValues.put("FieldName,Percentage",""+(field1/40));
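The put syntax above can be sketched with a plain JavaScript stand-in for the real map that the product provides. The ImpactUICustomValues object below is a mock for illustration only; it shows the "FieldName,Type" key convention and the cast-to-String rule.

```javascript
// Mock of the ImpactUICustomValues map for illustration; the product
// supplies the real object. The key combines the field name and the
// type (Percentage or Status); the value is always stored as a string.
var ImpactUICustomValues = {
  put: function (key, value) {
    this[key] = "" + value; // always a string, even for numeric values
  }
};

var field1 = 80;
ImpactUICustomValues.put("MyField,Percentage", "" + (field1 / 40));
ImpactUICustomValues.put("Standing,Status", "Major");
```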
The status field expects the value to be similar to the Topology widget
configuration:
Table 22. Status field values
Status                          Number
Critical                        5
Major                           4
Minor                           3
Warning                         2
Intermediate or Indeterminate   1

Notes on the Intermediate or Indeterminate status:
v Either status is available when the connection to Netcool/Impact uses https.
v If the connection is https, go to $IMPACT_HOME/etc/server.props and set the
property impact.uidataprovider.useiconfromprovider=true.
v For all examples, you can replace Intermediate with Indeterminate when needed.
There is no limit to how many fields you can put in the variable
ImpactUICustomValues. The variable must be at the very end of the script. Anything
before the variable must be valid JavaScript and can be any code, provided that
the variable ImpactUICustomValues is populated correctly.
Example 1:
This example assigns the field name from the table to be the status or the
percentage and assigns the field value. It assigns SHAREUP, SHAREDOWN, and
PROFIT as percentages, and STANDING as the status.
ImpactUICustomValues.put("SHAREUP,Percentage",SHAREUP);
ImpactUICustomValues.put("SHAREDOWN,Percentage",SHAREDOWN);
ImpactUICustomValues.put("PROFIT,Percentage",PROFIT);
ImpactUICustomValues.put("STANDING,Status",STANDING);
Example 2:
This example has an extra calculation to determine the value of the percentage or
status fields. The percentage assumes the maximum value to use is 100. A factor
is then used to scale the values, based on the maximum value that is expected by
the user. The status and percentage are scaled based on this factor.
var status = "Normal";
var down = 0;
var up = 0;
var factor = ( TOTAL / 100);
down = (DOWN / factor);
up = (UP / factor);
var statusFactor = (DOWN / TOTAL) * 100;
if ( statusFactor >= 50) {
status = "Critical";
} else if ( statusFactor >= 30 ) {
status = "Major";
} else if (statusFactor >= 20) {
status = "Minor";
} else if (statusFactor >= 10 ) {
status = "Warning";
} else {
status = "Normal";
}
ImpactUICustomValues.put("DownPercentage,Percentage",""+down);
ImpactUICustomValues.put("UpPercentage,Percentage",""+up);
ImpactUICustomValues.put("NetworkStatus,Status",""+status);
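The threshold logic of Example 2 can be extracted into a small, testable function. This is plain JavaScript for illustration; networkStatus is a hypothetical name, and DOWN and TOTAL follow the same assumed variable names as the example above.

```javascript
// Map a DOWN/TOTAL ratio to a status word, using the same thresholds
// as Example 2: >=50% Critical, >=30% Major, >=20% Minor,
// >=10% Warning, otherwise Normal.
function networkStatus(DOWN, TOTAL) {
  var statusFactor = (DOWN / TOTAL) * 100;
  if (statusFactor >= 50) return "Critical";
  if (statusFactor >= 30) return "Major";
  if (statusFactor >= 20) return "Minor";
  if (statusFactor >= 10) return "Warning";
  return "Normal";
}
```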
Example 3:
This example uses extra fields that do not exist in the table as the Status and
Percentage fields. The values are the exact values that come from fields that do
exist in the table. A calculation can be used to assign different values:
ImpactUICustomValues.put("CPUPercentUsage,Percentage",CPUUsage);
ImpactUICustomValues.put("RAMPercentUsage,Percentage",RAMUsage);
ImpactUICustomValues.put("DiskPercentUsage,Percentage",DiskUsage);
ImpactUICustomValues.put("NetworkAvailability,Status",NetworkStatus);
Tip: If the Table or List widget shows duplicate entries or missing data when
you compare the data to the data type data items, check the data source to
ensure that all keys are unique.
Tip: If you use a policy function to create a dynamic filter, you might get a
message in the policy log that states that the filter variable is not defined in
the policy, and no eventing occurs between widgets. Check that you are not
using any special characters in the custom value in the data type, for example,
ImpactUICustomValues.put("CPU%,Percentage",""+value). The widgets do not
support special characters in field names.
Tip: If a data type field is incorrectly defined, for example, the field is
defined as an integer but contains float values, the widget fails to load. The
widget shows a message similar to this example:
Failed to load
To resolve the issue, edit the data type field and select the correct data type,
in this example, float.
Controlling node images in the topology widget
You can change the image for topology widget nodes to enhance node
representation in the console.
Before you begin
Integrate policy data with the topology widget so that you can use the topology
widget to visualize data from a Netcool/Impact policy in the console. For more
information, see “Integrating data from a policy with the topology widget” on
page 118.
About this task
To change the images for topology widget nodes, you need to modify the policy.
The image links, within the policy properties, can direct the widget to use any of
the existing icons in the JazzSM server. In the policy, under the status and label
properties, add the following properties.
v Mandatory properties:
Object.UITreeNodeSmallImage=<link to the image>. For example,
Object.UITreeNodeSmallImage="[widget]/resources/common_assets/common_resource_icons_16/mainframe.png";
Object.UITreeNodeScalarImage=<link to the swf file>. For example,
Object.UITreeNodeScalarImage="[widget]/resources/common_assets/common_resource_icons/re64_mainframe1.swf";
v Optional properties:
Object.UITreeNodeLargeImage=<link to the large image>. For example,
Object.UITreeNodeLargeImage="[widget]/resources/common_assets/common_resource_icons/re64_mainframe1_8.png";
The following example is a policy that demonstrates the images by using the
widget relative path:
Log("Customize Links");
worldObject = {};
Obj = NewObject();
Obj.Node = "World";
Obj.UITreeNodeId = 0;
Obj.UITreeNodeStatus = "Critical";
Obj.NetworkStatus = Obj.UITreeNodeStatus;
Obj.UITreeNodeLabel = "World";
Obj.LastUpdate= LocalTime(GetDate()*90);
worldObject = worldObject + Obj;
Obj1 = NewObject();
Obj1.Node = "Africa";
Obj1.UITreeNodeId = 1;
Obj1.UITreeNodeLabel = "Africa";
Obj1.UITreeNodeParent = 0;
Obj1.UITreeNodeStatus = "Critical";
Obj1.NetworkStatus = Obj1.UITreeNodeStatus;
Obj1.UITreeNodeRelation="Network in Africa is Down";
Obj1.LastUpdate= LocalTime(GetDate()+300*1);
worldObject = worldObject + Obj1;
Obj2 = NewObject();
Obj2.Node = "Asia";
Obj2.UITreeNodeId = 2;
Obj2.UITreeNodeLabel = "Asia";
Obj2.UITreeNodeParent = 0;
Obj2.UITreeNodeStatus = "Major";
Obj2.NetworkStatus = Obj2.UITreeNodeStatus;
Obj2.UITreeNodeRelation="Network in Asia is having issues";
// This text is shown when the user hovers over the link.
Obj2.UITreeNodeSmallImage = "[widget]/resources/common_assets/
common_resource_icons/re64_hypervisor1_8.png";
Obj2.UITreeNodeScalarImage = "[widget]/resources/common_assets/
common_resource_icons/re64_hypervisor1.swf";
Obj2.LastUpdate= LocalTime(GetDate()+300*3);
worldObject =worldObject + Obj2;
Obj3 = NewObject();
Obj3.Node = "North America";
Obj3.UITreeNodeId = 3;
Obj3.UITreeNodeLabel = "North America";
Obj3.UITreeNodeParent = 0;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in North America is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/
common_resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/
common_resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject = worldObject + Obj3;
Obj4 = NewObject();
Obj4.Node = "South America";
Obj4.UITreeNodeId = 2;
Obj4.UITreeNodeLabel = "South America";
Obj4.UITreeNodeParent = 0;
Obj4.UITreeNodeStatus = "Normal";
Obj4.NetworkStatus = Obj4.UITreeNodeStatus;
Obj4.UITreeNodeRelation="Network in South America is Normal";
Obj4.UITreeNodeSmallImage = "[widget]/resources/common_assets/
common_resource_icons/re64_hypervisor1_8.png";
Obj4.UITreeNodeScalarImage = "[widget]/resources/common_assets/
common_resource_icons/re64_hypervisor1.swf";
Obj4.LastUpdate= LocalTime(GetDate()+300*9);
worldObject =worldObject + Obj4;
Obj5 = NewObject();
Obj5.Node = "Brazil";
Obj5.UITreeNodeId = 5;
Obj5.UITreeNodeLabel = "Brazil";
Obj5.UITreeNodeParent = 2;
Obj5.UITreeNodeStatus = "Normal";
Obj5.NetworkStatus = Obj5.UITreeNodeStatus;
Obj5.UITreeNodeRelation="Network in Brazil is Normal";
Obj5.UITreeNodeSmallImage = "[widget]/resources/common_assets/
common_resource_icons/re64_jobinstance1_8.png";
Obj5.UITreeNodeScalarImage = "[widget]/resources/common_assets/
common_resource_icons/re64_jobinstance1.swf";
Obj5.LastUpdate= LocalTime(GetDate()+300*100);
worldObject =worldObject + Obj5;
Obj6 = NewObject();
Obj6.Node = "Egypt";
Obj6.UITreeNodeId = 6;
Obj6.UITreeNodeLabel = "Egypt";
Obj6.UITreeNodeParent = 1;
Obj6.UITreeNodeStatus = "Intermediate";
Obj6.NetworkStatus = Obj6.UITreeNodeStatus;
Obj6.UITreeNodeRelation="Network in Egypt is down";
Obj6.UITreeNodeSmallImage = "[widget]/resources/common_assets/
common_resource_icons/re64_jobinstance1_8.png";
Obj6.UITreeNodeScalarImage = "[widget]/resources/common_assets/
common_resource_icons/re64_jobinstance1.swf";
Obj6.LastUpdate= LocalTime(GetDate()+300*80);
worldObject =worldObject + Obj6;
Log(worldObject);
Setting the topology node to have multiple parents in the
topology widget
Topology widget nodes can be set to have multiple parents.
Before you begin
Integrate policy data with the topology widget so you can use the topology widget
to visualize data from a Netcool/Impact policy in the console. For more
information about integrating with topology widgets, see “Integrating data from a
policy with the topology widget” on page 118.
About this task
To set a topology node with multiple parents or to set the topology widget node
with circular relationship, or to do both, complete the following procedure. Within
the procedure, the first example is for multiple parents, the second example is for
circular relationships and the third example is for multiple parents and circular
relationships in a tree table where first node is a parent node.
Procedure
v To set a topology node with multiple parents, configure UITreeNodeParent with
parent IDs separated by commas. For example,
object.UITreeNodeParent="1,5,10", where 1, 5, and 10 are the UITreeNodeId
values of the parent objects. See the following example.
worldObject = {};
Obj = NewObject();
Obj.Node = "World";
Obj.UITreeNodeId = 0;
Obj.UITreeNodeStatus = "Critical";
Obj.NetworkStatus = Obj.UITreeNodeStatus;
Obj.UITreeNodeLabel = "World";
Obj.LastUpdate= LocalTime(GetDate()*90);
worldObject = worldObject + Obj;
Obj1 = NewObject();
Obj1.Node = "Africa";
Obj1.UITreeNodeId = 1;
Obj1.UITreeNodeLabel = "Africa";
Obj1.UITreeNodeParent = 0;
Obj1.UITreeNodeStatus = "Critical";
Obj1.NetworkStatus = Obj1.UITreeNodeStatus;
Obj1.UITreeNodeRelation="Network in Africa is Down";
Obj1.LastUpdate= LocalTime(GetDate()+300*1);
worldObject = worldObject + Obj1;
Obj2 = NewObject();
Obj2.Node = "Asia";
Obj2.UITreeNodeId = 2;
Obj2.UITreeNodeLabel = "Asia";
Obj2.UITreeNodeParent = 0;
Obj2.UITreeNodeStatus = "Major";
Obj2.NetworkStatus = Obj2.UITreeNodeStatus;
Obj2.UITreeNodeRelation="Network in Asia is having issues";
// This text is shown when the user hovers over the link.
Obj2.UITreeNodeSmallImage = "[widget]/resources/
common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj2.UITreeNodeScalarImage = "[widget]/resources/
common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj2.LastUpdate= LocalTime(GetDate()+300*3);
worldObject =worldObject + Obj2;
Obj2 = NewObject();
Obj2.Node = "India";
Obj2.UITreeNodeId = 12;
Obj2.UITreeNodeLabel = "India";
Obj2.UITreeNodeParent = "2,1";
Obj2.UITreeNodeStatus = "Major";
Obj2.NetworkStatus = Obj2.UITreeNodeStatus;
Obj2.UITreeNodeRelation="Network in India is having issues";
// This text is shown when the user hovers over the link.
Obj2.UITreeNodeSmallImage = "[widget]/resources/
common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj2.UITreeNodeScalarImage = "[widget]/resources/
common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj2.LastUpdate= LocalTime(GetDate()+300*3);
worldObject =worldObject + Obj2;
v To set the topology widget node with a circular relationship, note that a
circular relationship can be used only in the topology widget and not in the
tree widget. See the following example.
worldObject =[];
index = 0;
Obj3 = NewObject();
Obj3.Node = "Link1";
Obj3.UITreeNodeId = 13;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 16;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link2";
Obj3.UITreeNodeId = 14;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 13;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link3";
Obj3.UITreeNodeId = 15;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 14;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link4";
Obj3.UITreeNodeId = 16;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 15;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_
resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
Log(""+worldObject);
v To set the topology widget node with a circular relationship in a tree table,
note that a circular relationship can be used only in the topology widget and
not in the tree widget, unless there is a parent node for the first node. See
the following example.
worldObject =[];
index = 0;
Obj3 = NewObject();
Obj3.Node = "Parent";
Obj3.UITreeNodeId = 12;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link1";
Obj3.UITreeNodeId = 13;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = "16,12";
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link2";
Obj3.UITreeNodeId = 14;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 13;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link3";
Obj3.UITreeNodeId = 15;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 14;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
index = index + 1;
Obj3 = NewObject();
Obj3.Node = "Link4";
Obj3.UITreeNodeId = 16;
Obj3.UITreeNodeLabel = Obj3.Node;
Obj3.UITreeNodeParent = 15;
Obj3.UITreeNodeStatus = "Minor";
Obj3.NetworkStatus = Obj3.UITreeNodeStatus;
Obj3.UITreeNodeRelation="Network in " + Obj3.Node + " is under maintenance";
Obj3.UITreeNodeSmallImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1_8.png";
Obj3.UITreeNodeScalarImage = "[widget]/resources/common_assets/common_resource_icons/re64_hypervisor1.swf";
Obj3.LastUpdate= LocalTime(GetDate()-300*60);
worldObject[index] = Obj3;
Log(""+worldObject);
Visualizing data from the UI data provider in the console
You can use the Netcool/Impact UI data provider to visualize data in the IBM
Dashboard Application Services Hub, referred to as the console throughout this
section.
You can visualize data from Netcool/Impact in the console. You can use data types
or Netcool/Impact policies to provide this data. You can also use Netcool/Impact
policies to create mashups of data from multiple sources.
The example scenarios are intended to help you when you visualize your own
data in the console.
Before you can implement any of the examples below, you must set up the remote
connection between Netcool/Impact and the console. For more information, see
“Setting up the remote connection between the UI data provider and the console”
on page 105.
Note: If you use the line or pie chart widget to visualize data from a DB2 data
type, you must change the number of items per page from All to a number. For
example, change it to 50.
Example scenario overview
Read the following example scenarios to get an overview of the possible ways to
integrate the UI data provider and the console.
For more scenarios and examples visit the Netcool/Impact developerWorks wiki
Scenarios and examples page available from the following URL:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/
home?lang=en#/wiki/Tivoli%20Netcool%20Impact/page/Scenarios%20and
%20examples
Visualizing data from a DB2 database table in a line chart
You can use the console to visualize data that is retrieved directly from a data type
in Netcool/Impact.
About this task
This example uses a line chart to visualize data. You can use the same process to
visualize the data in a bar, column, or line chart.
Procedure
1. Create a DB2 data source.
a. Enter NewDataSource in the Data Source Name field.
b. Enter the user name and password for the database.
c. Complete the other fields as required.
d. Save the data source.
2. Create a data type for the DB2 data source.
a. Enter NewUIDPDT as the name and complete the required fields.
b. To ensure that the data type is compatible with the UI data provider, select
the UI data provider: enabled check box.
c. Select the key fields for the data type.
d. Save the data type.
3. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for DB2 in the Page Name field.
d. Save the page.
4. Create a widget in the console.
a. Open the Page for DB2 page that you created.
b. Drag the Line Chart widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the NewUIDPDT data type that belongs to the
NewDataSource data source. The data type is only displayed after the
defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. Enter the values that you want
to use for the y-axis. You can select multiple lines. You can also select text
to display as a tooltip. Click OK.
f. To ensure that the console can display all the items, click Settings and, in
the Items per page list, change the number of items per page from All to a
number. For example, change it to 50.
g. To save the widget, click the Save and exit button on the toolbar.
Results
When you display the widget, the data is retrieved directly from the data type in
Netcool/Impact and displayed in a line chart.
Visualizing data from a Netcool/Impact policy in a pie chart
You can use the pie chart widget in the console to visualize data from the UI data
provider.
Procedure
1. Create a policy to provide the data for the pie chart widget. The policy must
group all the items from a database table into a single Impact object.
For example, define the following policy, called Policyforpiechart, that gathers
the rows in the database table into a single Impact object:
Obj = NewObject ();
Obj.name = ’Internet Banking’;
Obj.availabilityduringoperationalhours=99.9;
Obj.availabilityduringnonoperationalhours=95;
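In this example the values are hard-coded. If the figures come from a database table, a policy along the following lines could gather a row into the single object. This is a hypothetical sketch: the data type name AvailabilityDT and the field names are assumptions, not names from this guide.

```
// Hypothetical sketch: read one row from a data type and gather its
// values into a single Impact object for the pie chart widget.
// 'AvailabilityDT' and the field names are assumed.
filter = "SERVICENAME = 'Internet Banking'";
rows = GetByFilter('AvailabilityDT', filter, false);
Obj = NewObject();
Obj.name = rows[0].SERVICENAME;
Obj.availabilityduringoperationalhours = rows[0].OPHOURSAVAIL;
Obj.availabilityduringnonoperationalhours = rows[0].NONOPHOURSAVAIL;
```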
2. Create the user output parameters for the policy.
a. In the policy editor, click the Configure Policy Settings icon to create the
output parameter.
Table 23. Output parameters for MyNewObject

Field                  User entry
Name                   MyNewObject
Policy Variable Name   Obj
Format                 Impact Object
You must enter the name of the Impact object exactly as it is defined in the
policy in the Policy Variable Name field.
b. You must create the custom schema values for the fields in the object. In
this example, the Impact object contains two fields that are integers.
After you select Impact Object in the Format field, the system displays the
Open the Schema Definition Editor icon beside the Schema Definition field.
To open the editor, click the icon.
You define the following custom schema definitions for the policy.
Table 24. Custom schema for operational and non-operational hours

Name                                    Format
availabilityduringoperationalhours      Integer
availabilityduringnonoperationalhours   Integer
3. Create a page.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for Pie Chart in the Page Name field.
d. Save the page.
4. Create a pie chart widget.
a. Open the Page for Pie Chart page that you created.
b. Drag the Pie chart widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set. Select the MyNewObject data type that belongs to the
Policyforpiechart data source. The data set represents the user output
parameter that you defined previously. The data set is only displayed after
the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. To ensure that the policy runs
when the widget is displayed, select the executePolicy check box.
f. Select the values that you want to use for the pie chart. In this example,
select obj.availabilityduringoperationalhours and
obj.availabilityduringnonoperationalhours. To save your selection, click OK.
g. To ensure that the console can display all the items, click Settings and, in
the Items per page list, change the number of items per page from All to a
number.
h. To save the widget, click the Save and exit button on the toolbar.
Results
The data from the UI data provider is displayed as a pie chart in the console.
Visualizing a policy action in a widget
You can configure policies that are related to the UI data provider to be
enabled for specific actions. When the policy actions are enabled, you can
right-click an item in the
widget in the console and perform one of the following actions: execute a policy,
open a URL in a new tab, or open a new page tab in the DASH widget. When a
policy action is updated in the policy editor, the list on the widget in the console is
updated automatically.
Before you begin
You must integrate the UI data provider and the console. See “General steps for
integrating the UI data provider and the console” on page 104 and “Setting up the
remote connection between the UI data provider and the console” on page 105.
Procedure
1. Create a policy called WorldNetworkActionPolicy to provide the data for the
policy action.
Policy example:
Log("Received an Action From Widget");
Log("Node : " + Node);
// the policy logs the Node value of the right-clicked item.
2. To enable a policy to run with a UI data provider, click the Configure Policy
Settings icon and select the Enable Policy for UI Data Provider Actions check box.
3. Create the following policy WorldNetwork:
Log("Customize Links");
worldObject = {};
Obj = NewObject();
Obj.Node = "World";
Obj.UITreeNodeId = 0;
Obj.UITreeNodeStatus = "Critical";
Obj.UITreeNodeLabel = "World";
worldObject = worldObject + Obj;
Obj1 = NewObject();
Obj1.Node = "Africa";
Obj1.UITreeNodeId = 1;
Obj1.UITreeNodeLabel = "Africa";
Obj1.UITreeNodeParent = 0;
Obj1.UITreeNodeStatus = "Critical";
Obj1.UITreeNodeRelation="Network in Africa is Down";
worldObject = worldObject + Obj1;
Obj2 = NewObject();
Obj2.Node = "Asia";
Obj2.UITreeNodeId = 2;
Obj2.UITreeNodeLabel = "Asia";
Obj2.UITreeNodeParent = 0;
Obj2.UITreeNodeStatus = "Major";
Obj2.UITreeNodeRelation="Network in Asia is having issue";
//this text will show up when a user hovers on the link
worldObject =worldObject + Obj2;
Obj3 = NewObject();
Obj3.Node = "North America";
Obj3.UITreeNodeId = 3;
Obj3.UITreeNodeLabel = "North America";
Obj3.UITreeNodeParent = 0;
Obj3.UITreeNodeStatus = "Minor";
Obj3.UITreeNodeRelation="Network in North America is under maintaenance";
worldObject = worldObject + Obj3;
Obj4 = NewObject();
Obj4.Node = "South America";
Obj4.UITreeNodeId = 4;
Obj4.UITreeNodeLabel = "South America";
Obj4.UITreeNodeParent = 0;
Obj4.UITreeNodeStatus = "Normal";
Obj4.UITreeNodeRelation="Network in South America is Normal";
worldObject =worldObject + Obj4;
Log(worldObject);
4. Create the policy output parameters for the policy in the WorldNetwork policy.
a. In the policy editor toolbar, click the Configure Policy Settings icon to
create the output parameters.
b. To create a policy output parameter, click the Policy output parameter: New
button.
Table 25. Output parameters for WorldNetwork policy

Field                  User entry
Name                   WorldNetwork
Policy Variable Name   worldObject
Format                 Array Of Impact Object
You must create the custom schema value for the fields in the object. In this
example, the Impact object contains one field, which is a node.
After you select Array Of Impact Object in the Format field, the system
displays the Open the Schema Definition Editor icon beside the Schema
Definition field. To open the editor, click the icon.
You define the following custom schema definitions for the policy.
Table 26. Custom schema value for Array Of Impact Object

Name   Format
Node   String
5. Create the UI Data Provider policy related actions.
a. In the UI Data Provider Policy Related Actions section, click New.
b. In the Name field, add a name for the action. The name that you add
displays in the widget in the console when you right-click on an item in the
specified widget.
c. In the Policy Name menu, select the policy WorldNetworkActionPolicy that
you want the action to relate to.
d. In the Output Parameter menu, select the output parameter that is
associated with this action, WorldNetwork. If you select the All Output
Parameters option, the action becomes available for all output parameters
of the policy that you are creating the action for.
e. Click OK.
6. Create a page.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page World Network in the Page Name field.
d. To save the page, click Ok.
7. Create a topology widget.
a. Open the Page World Network page that you created.
b. Drag the Topology widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set. Select the data type that belongs to the WorldNetwork
policy data source. The data set information is only displayed after the
defined refresh interval. The default is 5 minutes.
e. The Visualization Settings window is displayed.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click Save and exit on the toolbar.
8. Configure the right-click action of the widget.
a. Create a properties file in the IMPACT_HOME/uiproviderconfig/properties
directory by using the following naming convention:
Policy_<PolicyName>_<output parameter name>.properties
Where:
<PolicyName> is the name of the policy for which you are creating a right-click
action.
<output parameter name> is the name that you selected for the output
parameter when configuring the policy.
b. Add the following lines to the properties file:
actions.numactions=<number of actions>
actions.0.actiontype=<action type>
actions.0.actionname=<action name>
actions.0.action=<action>
Where:
<number of actions> specifies the number of actions defined in the
properties file. The actions are numbered sequentially starting from 0 and
each has a type and a name.
<action type> is url, dashpage, or policy.
<action name> is a descriptive name for the action.
<action> specifies the action performed.
For url, <action> must be a full URL including protocol, for example
http://www.ibm.com.
For dashpage, <action> must be the page ID (the unique name) of the DASH
page to be opened. The page ID is the value displayed in the unique name
field after saving the DASH page.
For policy, <action> must be the name of the policy to be executed.
The following example creates three right-click menu actions:
actions.numactions=3
actions.0.actiontype=dashpage
actions.0.actionname=Open Test Page Rec
actions.0.action=com.ibm.isclite.admin.Freeform.navigationElement.pagelayoutA.modified.SXxNbLNnpq3SoBksMd6nEYw1434578685588
actions.1.actiontype=url
actions.1.actionname=Go to IBM Site
actions.1.action=http://www.ibm.com
actions.2.actiontype=policy
actions.2.actionname=Execute Test Policy
actions.2.action=TestPolicy
When you right-click an item in the DASH widget, the menu displays the
available actions. The previous example defines three actions that allow you
to perform one of the following operations:
v dashpage: Open a new page tab within the DASH widget. The data from the
right-clicked item that is passed to the policy as input parameters is also
passed to the DASH page as input parameters.
v url: Open a new tab in the browser. The data from the right-clicked item
that is passed to the policy as input parameters is appended to the URL as
URL parameters, for example:
http://www.ibm.com?param1=value&param2=value,...
v policy: Run the specified policy.
Results
The UI Data Provider policy related actions that you configure in the policy editor
display automatically in the widget in the console when you right-click on an item.
Any changes that you make to the UI Data Provider policy-related actions in the
policy editor are updated automatically on the widget.
When you run the action, the policy runs in the back end and logs information
similar to the following example:
Log("Received an Action From Widget");
Log("Node : " + Node);
// the policy will log the node value of the right clicked item.
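For example, if you run the action against the Africa node from the WorldNetwork policy, the policy log would contain entries along these lines (the exact log prefix depends on your logging configuration):

```
Received an Action From Widget
Node : Africa
```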
Visualizing data mashups from two web services in a table
You can use Netcool/Impact policies to create mashups of data from different
sources such as web services. You can also use the console and the UI data
provider to visualize the data from these mashups.
About this task
The following example uses a policy that is created in Netcool/Impact to retrieve
data from two different web services. The policy uses an array to group the results.
After you create the policy, you can use a table widget to visualize the data
mashup.
Procedure
1. In the policy editor, create a policy that is called
TestArrayOfObjectsWebService. This policy is based on the WSDL at
http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL. Create the policy as
follows:
a. Define the package that was defined when the WSDL file was compiled in
Netcool/Impact.
WSSetDefaultPKGName('weather');
b. Specify the parameters.
GetCityWeatherByZIPDocument=WSNewObject("com.cdyne.ws.weatherws.GetCityWeatherByZIPDocument");
_GetCityWeatherByZIP=WSNewSubObject(GetCityWeatherByZIPDocument, "GetCityWeatherByZIP");
_ZIP = '07002';
_GetCityWeatherByZIP['ZIP'] = _ZIP;
WSParams = {GetCityWeatherByZIPDocument};
c. Specify the web service name, end point, and method.
WSService = 'Weather';
WSEndPoint = 'http://wsf.cdyne.com/WeatherWS/Weather.asmx';
WSMethod = 'GetCityWeatherByZIP';
d. Use the GetbyXpath policy function to get the value for the element that you
want.
nsMapping= NewObject();
nsMapping.tns = "http://ws.cdyne.com/WeatherWS/";
nsMapping.xsd="http://www.w3.org/2001/XMLSchema" ;
nsMapping.soap="http://www.w3.org/2003/05/soap-envelope" ;
nsMapping.xsi="http://www.w3.org/2001/XMLSchema-instance";
log("About to invoke Web Service call GetCityWeatherByZIP ......");
WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams);
log("Web Service call GetCityWeatherByZIP return result: " +WSInvokeDLResult);
xPathExpr = "//tns:State/text() |//tns:City/text() | //tns:Temperature/text()";
Result1=GetByXPath(""+WSInvokeDLResult, nsMapping, xPathExpr);
Object1=NewObject();
Object1.City=Result1.Result.City[0];
Object1.State=Result1.Result.State[0];
Object1.Temperature=Result1.Result.Temperature[0];
e. Start the WebService call.
log("About to invoke Web Service call GetCityWeatherByZIP ......");
WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams);
log("Web Service call GetCityWeatherByZIP return result: " +WSInvokeDLResult);
f. Define another call.
_ZIP = '23455';
_GetCityWeatherByZIP['ZIP'] = _ZIP;
WSParams = {GetCityWeatherByZIPDocument};
log("About to invoke Web Service call GetCityWeatherByZIP ......");
WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams);
log("Web Service call GetCityWeatherByZIP return result: " +WSInvokeDLResult);
//Retrieve the element values and assign them to an object
xPathExpr = "//tns:State/text() |//tns:City/text() | //tns:Temperature/text()";
g. Define and assign values to the Impact Objects.
xPathExpr = "//tns:State/text() |//tns:City/text() | //tns:Temperature/text()";
Result2=GetByXPath(""+WSInvokeDLResult, nsMapping, xPathExpr);
Object2=NewObject();
Object2.City=Result2.Result.City[0];
Object2.State=Result2.Result.State[0];
Object2.Temperature=Result2.Result.Temperature[0];
CustomObjs= {Object1,Object2};
log(CustomObjs);
2. Define the user output parameters for the array of objects.
You must create the following user output parameters for the array that is
contained in the policy that you created. In the policy editor, click the
Configure Policy Settings icon to create the output parameters for the array of
objects.
Table 27. Output parameters for MyArrayofCustomObjects

Field                  User entry
Name                   MyArrayofCustomObjects
Policy Variable Name   CustomObjs
Format                 Array of Impact Object
You must enter the exact name of the array as it is in the policy in the Policy
Variable Name field.
3. Use the following schema definition values.
v City - String
v State - String
v Temperature - String
4. Create a page.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for Array of Objects in the Page Name field.
d. To save the page, click Ok.
5. Create a table widget.
a. Open the Page for Array of Objects page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the dataset. Select the MyArrayofCustomObjects data type that
belongs to the TestArrayOfObjectsWebService data source. The dataset
information is only displayed after the defined refresh interval. The default
is 5 minutes.
e. The Visualization Settings UI is displayed. The system displays all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click the Save and exit button on the toolbar.
Results
When you open the table widget, the data from two different sources is displayed
in a table.
Visualizing data mashups with an array of Impact objects
You can use Netcool/Impact policies to create mashups of data from the DirectSQL
policy function and other sources. You can use the console and the UI data
provider to visualize the data from these mashups in a table.
Procedure
1. In the policy editor, create a policy that is called MultipleObjectsPolicy. The
policy uses the DirectSQL policy function to retrieve the data from the
defaultobjectserver data source. The policy also retrieves data from one other
source.
Create the following policy called MultipleObjectsPolicy.
Log("Test Policy with Multiple different objects..");
NodesVarJS=DirectSQL('defaultobjectserver', "SELECT Node, Identifier, Severity from alerts.status", null);
Log("ClassOf() " + ClassOf(NodesVarJS));
Obj1=NewObject();
Obj1.fname="MyFirstName";
Obj1.lname="MyLastName";
Obj1.city="MyCity";
MyObjsAll={Obj1};
i=0;
while(i < length(NodesVarJS) ) {
O = NewObject();
O.Node=NodesVarJS[i].Node;
O.Identifier=NodesVarJS[i].Identifier;
O.Severity=NodesVarJS[i].Severity;
MyObjsAll = MyObjsAll + {O};
i = i +1;
}
Log("MyObjs is " + MyObjsAll);
2. Define the user output parameters for the array of objects in the policy.
In the policy editor, click the Configure Policy Settings icon to create the
output parameters for the array of objects.
Table 28. Output parameters for MyObjsAll

Field                  User entry
Name                   MyArrayofObjects
Policy Variable Name   MyObjsAll
Format                 Array of Impact Object
You must enter the exact name of the array as it is in the policy in the Policy
Variable Name field.
3. Use the following schema definition values.
v fname = String
v lname = String
v city = String
v Node = String
v Identifier = String
v Severity = Integer
4. Create a page.
v Open the console.
v To create a page, click Settings > New Page.
v Enter Page for Table in the Page Name field.
v Save the page.
5. Create a table widget.
a. Open the Page for Table page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the MyArrayofObjects data type that belongs to
the MultipleObjectsPolicy data source. The dataset information is only
displayed after the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. The system displays all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click the Save and exit button on the toolbar.
Results
When you display the table widget, the data from the DirectSQL policy function
and the other source is displayed in a table.
Visualizing data output by the GetByFilter policy function in a
list
You can use the list widget in the console to visualize data from a
Netcool/Impact policy that contains the GetByFilter policy function.
Procedure
1. In the Netcool/Impact policy editor, create a Netcool/Impact policy.
Create a policy that is called IPLGetByFilterPolicy that includes the
GetByFilter policy function. You want to visualize the data that is output by
the policy function. In this example, the filter is defined statically within the
policy. In a real-world situation, you might want to pass the values
dynamically from the UI data provider to the policy function.
Log("Executing IPL Impact Policy");
filter = "SERVICEREQUESTIDENTIFIER = 1";
GetByFilter('dataTypeforDemoUIDP', filter, false);
Note: When you use the GetByFilter function to access or view data for a UI
data provider data type, the filter condition must match the UI data provider
standard. See the following two example filters.
AGG_NAME starts 'DayTrader'&param_SourceToken=fit-vm15-23:TO
(AGG_NAME starts 'DayTrader' OR AGG_NAME starts 'Client')
&param_SourceToken=fit-vm15-23:TO
The token before the & is used as a condition parameter that filters data in
the UI data provider, and the tokens after the & are sent as request
parameters. For example, the second example filter is sent to the UI data
provider as:
condition=(AGG_NAME starts 'DayTrader' OR AGG_NAME starts 'Client')
param_SourceToken=fit-vm15-23:TO
For more information about format of Jazz for Service Management filters, see
“Format of Jazz for Service Management filters” on page 140.
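To pass the filter value dynamically rather than defining it statically, the policy can build the filter string from a variable that is passed in to the policy. The following sketch assumes an input parameter named RequestId; the parameter name is illustrative, not part of this guide:

```
// Hypothetical sketch: build the filter from a value that is passed to
// the policy instead of hard-coding it. 'RequestId' is an assumed
// input parameter name.
filter = "SERVICEREQUESTIDENTIFIER = " + RequestId;
GetByFilter('dataTypeforDemoUIDP', filter, false);
```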
2. Define the output parameter for the data items. This parameter is made
available to the UI data provider and is displayed as a data type in the
console.
In the policy editor, click the Configure Policy Settings icon and the Policy
output parameter:New button to create the output parameters for the data
items.
Table 29. User output parameters for data items

Field                  Entry
Name                   DataFromPolicy
Policy Variable Name   DataItems
Format                 Datatype
Data Source Name       localDB2UIDPTest
Data Type Name         dataTypeforUIDPdemo
3. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for List in the Page Name field.
d. To save the page, click OK.
4. Create a list widget.
a. Open the Page for List page that you created.
b. Drag the List widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the DataFromPolicy data type that belongs to the
IPLGetByFilterPolicy data source. The dataset information is only
displayed after the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. You must select values for the
label, status, description, and time stamp. You can also configure a number
of optional settings such as the size of the page.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click the Save and exit button on the toolbar.
Results
When you display the widget, the policy is run. The information that is contained
in the policy results is added to the widget and is displayed as a list.
Format of Jazz for Service Management filters:
Filter conditions must match the UI data provider standard. The following
operators are supported for three commonly used property types.
String data type properties
   contains, !contains, starts, !starts, ends, !ends, isnull, !isnull, =, !=
Numeric and date properties
   =, !=, <, <=, >, >=
Boolean and enumerated properties
   =, !=
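For example, the following filters are valid under these rules; the property names used here are hypothetical:

```
NODENAME contains 'router'        (string property)
SEVERITY >= 4                     (numeric property)
LASTOCCURRENCE < 1434578685       (numeric or date property)
ISMANAGED = true                  (Boolean property)
```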
Visualizing data output by the DirectSQL policy function in an
analog gauge
You can use the analog gauge widget in the console to visualize data from a
Netcool/Impact policy that contains the DirectSQL policy function.
Procedure
1. In the policy editor in Netcool/Impact, create a policy that uses the DirectSQL
policy function.
Create the following policy, called TestDirectSQL, that includes the DirectSQL
policy function:
Log("TestDirectSQL");
query = "select SUM(VALUE) as sumvalue, HISTORYRESOURCENAME, METRICNAME, (HISTORYRESOURCENAME || METRICNAME) as key from TBSMHISTORY.HISTORY_VIEW_RESOURCE_METRIC_VALUE group by HISTORYRESOURCENAME, METRICNAME";
DirectSQL('directSQLSample', query, false);
This policy accesses data from a database table and sums a particular column
in the table to create a sum value. The policy also groups a number of columns.
2. Define the output parameters for the policy.
To define the output parameters for the policy, click the Configure user output
parameters icon.
Table 30. Output parameters for IPLDirectSQL policy

Field                  Entry
Name                   IPLDirectSQL
Policy Variable Name   DataItems
Format                 DirectSQL
You must also create new custom schema values to represent the values that
are contained in the fields of the DirectSQL policy function. After you select
DirectSQL in the Format field, the system displays the Open the Schema
Definition Editor icon. To create a value, click the icon. You must enter a name
for each new value and select a format. For this example, create the following
custom schema values:
Table 31. Custom schema values for IPLDirectSQL output parameter

Name                   Format
HISTORYRESOURCENAME    String
METRICNAME             String
sumvalue               Float
3. Create a page.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for Analog Gauge in the Page Name field.
d. To save the page, click OK.
4. Create an analog gauge widget.
a. Open the Page for Analog Gauge page that you created.
b. Drag the Analog Gauge widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the IPLDirectSQL datatype that belongs to the
TestDirectSQL data source. The dataset information is only displayed after
the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. Select the value that you want
to display in the gauge in the Value field. In this example, you enter
SUMVALUE in the field. You can also select a number of other optional values
for the gauge such as minimum value, maximum value and unit of
measure.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click the Save and exit button on the toolbar.
Chapter 8. Working with the Netcool/Impact UI data provider
141
Results
When you display the widget, the policy is run and the information that is
contained in the policy results is displayed as a gauge.
Visualizing data with the tree and topology widgets
You can use the tree and topology widgets with the UI data provider to visualize
hierarchical and topological data from a Netcool/Impact policy in the console.
About this task
You use the tree and topology widgets to visualize a hierarchy with topological data
in the console. This example demonstrates how to do so for a specific policy. For
more information about the requirements for using these widgets, see “Integrating
the tree widget with an Impact object or an array of Impact objects” on page 115
and “Integrating data from a policy with the topology widget” on page 118.
Procedure
1. In the policy editor in Netcool/Impact, create a policy that is called
TestTreeTopoPolicy.
The policy uses the ArrayofImpactObjects policy function to retrieve
information about addresses. This information is hierarchical. Entries are
ordered by number, city, and country. You also want to add topological
information about the status of an entry to the policy.
MyObject1=NewObject();
MyObject1.country="United States";
MyObject1.city="New York";
MyObject2=NewObject();
MyObject2.country="United States";
MyObject2.city="Philadelphia";
MyObject3=NewObject();
MyObject3.country="England";
MyObject3.city="London";
MyArrayOfObjects={MyObject1,MyObject2,MyObject3};
2. Make the policy compatible with the tree widget by adding the
UITreeNodeId and UITreeNodeParent parameters to the policy.
MyObject1=NewObject();
MyObject1.UITreeNodeId=0;
MyObject1.country="United States";
MyObject1.city="New York";
MyObject2=NewObject();
MyObject2.UITreeNodeId=1;
MyObject2.UITreeNodeParent=0;
MyObject2.country="United States";
MyObject2.city="Philadelphia";
MyObject3=NewObject();
MyObject3.UITreeNodeId=2;
MyObject3.UITreeNodeParent=1;
MyObject3.country="England";
MyObject3.city="London";
MyArrayOfObjects={MyObject1,MyObject2,MyObject3};
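The UITreeNodeId and UITreeNodeParent values define the parent-child relationships that the tree widget draws. A Python sketch of how such a hierarchy resolves (the dictionaries are stand-ins for the Impact objects above, not product code):

```python
# Each dict stands in for one of the Impact objects in the policy above.
nodes = [
    {"UITreeNodeId": 0, "city": "New York"},
    {"UITreeNodeId": 1, "UITreeNodeParent": 0, "city": "Philadelphia"},
    {"UITreeNodeId": 2, "UITreeNodeParent": 1, "city": "London"},
]

# Build a parent-id -> child-cities map; a node without a
# UITreeNodeParent value (here, id 0) is a root of the tree.
children = {}
roots = []
for node in nodes:
    parent = node.get("UITreeNodeParent")
    if parent is None:
        roots.append(node["city"])
    else:
        children.setdefault(parent, []).append(node["city"])

print(roots)     # ['New York']
print(children)  # {0: ['Philadelphia'], 1: ['London']}
```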
3. Make the policy compatible with the topology widget by adding the
UITreeNodeType, UITreeNodeLabel, and UITreeNodeStatus fields to the
policy.
MyObject1=NewObject();
MyObject1.UITreeNodeId=0;
MyObject1.country="United States";
MyObject1.city="New York";
MyObject1.UITreeNodeType="GRAPH";
MyObject1.UITreeNodeLabel="NY";
MyObject1.UITreeNodeStatus="Major";
MyObject2=NewObject();
MyObject2.UITreeNodeId=1;
MyObject2.UITreeNodeParent=0;
MyObject2.country="United States";
MyObject2.city="Philadelphia";
MyObject2.UITreeNodeLabel="PA";
MyObject2.UITreeNodeStatus="Minor";
MyObject3=NewObject();
MyObject3.UITreeNodeId=2;
MyObject3.UITreeNodeParent=1;
MyObject3.country="England";
MyObject3.city="London";
MyObject3.UITreeNodeLabel="LN";
MyObject3.UITreeNodeStatus="Warning";
MyArrayOfObjects={MyObject1,MyObject2,MyObject3};
4. Define the user output parameters for the array of objects in the policy.
Table 32. User output parameters for MyObjArray

Field                   Entry
Name                    MyObjArray
Policy Variable Name    MyArrayOfObjects
Format                  Array of Impact Object
5. Use the following schema definition values.
v UITreeNodeId = Integer
v UITreeNodeParent = Integer
v UITreeNodeLabel = String
v UITreeNodeStatus = String
v UITreeNodeType = String
v country = String
v city = String
6. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for Tree and Topology in the Page Name field.
d. To save the page, click Ok.
7. Create a tree and a topology widget in the console.
a. Open the Page for Tree and Topology page that you created.
b. Drag the Tree widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the MyObjArray data type that belongs to the
TestTreeTopoPolicy data source. The dataset information is only displayed
after the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings window is displayed. To ensure that the policy
runs when the widget is displayed, select the executePolicy check box.
Click Ok.
f. To save the widget, click the Save and exit button on the toolbar
g. Drag the Topology widget into the content area.
h. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
i. Select the dataset. Select the MyObjArray data type that belongs to the
TestTreeTopoPolicy data source. The dataset information is only displayed
after the defined refresh interval. The default is 5 minutes.
j. The Visualization Settings window is displayed. To ensure that the policy
runs when the widget is displayed, select the executePolicy check box. Click
Ok.
k. To save the widget, click the Save and exit button on the toolbar.
Results
The data from the UI data provider is displayed in a hierarchy in the console
alongside the status.
Visualizing customized links in a topology widget
You can use Netcool/Impact policies and UI data provider to customize the
display of links and status information in a topology widget.
About this task
You can customize links, color, status, and relationship names in a topology
widget by modifying the $IMPACT_HOME/etc/server.props file. In a split
installation environment, the server.props file is available in the GUI Server
installation.
The default behavior is that the links are colored and have a status icon that is
based on the child node. The relationship name between the child and the parent
is specified in the policy by using the following property in the child node:
Obj2.UITreeNodeRelation="Network in Asia is having issue"
When you hover over the link, this value is displayed. The property must be
added to every child relationship in the policy.
Procedure
1. To implement this feature, open the $IMPACT_HOME/etc/server.props file and
identify the following section, which shows the default settings.
impact.uidataprovider.linkstyle.showstatusicon=true
impact.uidataprovider.linkstyle.usecoloredlinks=true
impact.uidataprovider.linkstyle.relationshipname=IMPACT_NODES_RELATIONSHIP
The properties are enabled by default, and remain so unless you customize
them.
Tip: If you disable colored links by using the following syntax:
impact.uidataprovider.linkstyle.usecoloredlinks=false
this setting overrides the option to show the status icon on the links in the
widget.
2. To show the colors and status icons in the topology widget:
impact.uidataprovider.linkstyle.showstatusicon=true
To change the colors and style of the links in the topology widget:
impact.uidataprovider.linkstyle.linestyle=<VALUE>
where <VALUE> can be SOLID, DASHED, or ALTERNATE.
To customize the relationship name in a topology widget:
impact.uidataprovider.linkstyle.relationshipname=<some label>
If you do not provide a relationship name, the default value is
IMPACT_NODES_RELATIONSHIP.
Remember: Obj2.UITreeNodeRelation in the policy supersedes the property
impact.uidataprovider.linkstyle.relationshipname in the server.props file.
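The precedence rule above can be sketched in Python (a simplified illustration, not the product's implementation; CONNECTED_TO is an invented example value):

```python
# Simplified sketch of the relationship-name precedence described above:
# a UITreeNodeRelation value set in the policy supersedes the
# impact.uidataprovider.linkstyle.relationshipname server.props value,
# which in turn falls back to IMPACT_NODES_RELATIONSHIP.
def relationship_label(node, server_props):
    if "UITreeNodeRelation" in node:
        return node["UITreeNodeRelation"]
    return server_props.get(
        "impact.uidataprovider.linkstyle.relationshipname",
        "IMPACT_NODES_RELATIONSHIP",
    )

# CONNECTED_TO is a hypothetical customized relationship name.
props = {"impact.uidataprovider.linkstyle.relationshipname": "CONNECTED_TO"}
print(relationship_label(
    {"UITreeNodeRelation": "Network in Asia is having issue"}, props))
print(relationship_label({}, props))  # CONNECTED_TO
print(relationship_label({}, {}))     # IMPACT_NODES_RELATIONSHIP
```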
3. Restart the Impact Server to implement the changes.
4. In the policy editor in Netcool/Impact, create the following policy that is called
WorldNetwork. The policy example uses the ArrayofImpactObjects policy
function to retrieve information about the status of networks around the world.
This information is hierarchical. Entries are ordered by location, status, and
node label.
For more information about the hierarchy of topology widgets, see “Visualizing
data with the tree and topology widgets” on page 142
Log("Customize Links");
worldObject = {};
Obj = NewObject();
Obj.Node = "World";
Obj.UITreeNodeId = 0;
Obj.UITreeNodeStatus = "Critical";
Obj.UITreeNodeLabel = "World";
worldObject = worldObject + Obj;
Obj1 = NewObject();
Obj1.Node = "Africa";
Obj1.UITreeNodeId = 1;
Obj1.UITreeNodeLabel = "Africa";
Obj1.UITreeNodeParent = 0;
Obj1.UITreeNodeStatus = "Critical";
Obj1.UITreeNodeRelation="Network in Africa is Down";
worldObject = worldObject + Obj1;
Obj2 = NewObject();
Obj2.Node = "Asia";
Obj2.UITreeNodeId = 2;
Obj2.UITreeNodeLabel = "Asia";
Obj2.UITreeNodeParent = 0;
Obj2.UITreeNodeStatus = "Major";
Obj2.UITreeNodeRelation="Network in Asia is having issue";
//this text will show up when a user hovers on the link
worldObject =worldObject + Obj2;
Obj3 = NewObject();
Obj3.Node = "North America";
Obj3.UITreeNodeId = 3;
Obj3.UITreeNodeLabel = "North America";
Obj3.UITreeNodeParent = 0;
Obj3.UITreeNodeStatus = "Minor";
Obj3.UITreeNodeRelation="Network in North America is under maintenance";
worldObject = worldObject + Obj3;
Obj4 = NewObject();
Obj4.Node = "South America";
Obj4.UITreeNodeId = 4;
Obj4.UITreeNodeLabel = "South America";
Obj4.UITreeNodeParent = 0;
Obj4.UITreeNodeStatus = "Normal";
Obj4.UITreeNodeRelation="Network in South America is Normal";
worldObject =worldObject + Obj4;
Log(worldObject);
Optionally, you can configure the node links to have their own status by
adding the following property to the object:
Obj2.UITreeLinkStatus = "Critical";
If the node has more than one parent, add a comma-separated list of the
parent nodes and a matching list of link statuses, for example:
Obj2.UITreeNodeParent = "2,3";
Obj2.UITreeLinkStatus = "Normal,Critical";
Note: Without this property, the link's status is the same as the child node
status: Obj2.UITreeNodeStatus.
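The comma-separated values pair up positionally: the n-th parent ID takes the n-th link status. A Python sketch of the pairing (illustrative only; the dictionary stands in for the Impact object):

```python
# Sketch of how the comma-separated multi-parent values pair up:
# the n-th parent id goes with the n-th link status.
node = {"UITreeNodeParent": "2,3", "UITreeLinkStatus": "Normal,Critical"}

parents = node["UITreeNodeParent"].split(",")
statuses = node["UITreeLinkStatus"].split(",")
links = dict(zip(parents, statuses))

print(links)  # {'2': 'Normal', '3': 'Critical'}
```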
5. Define the user output parameters for the array of objects in the policy.
Table 33. Output parameters for WorldNetwork policy

Field                   User entry
Name                    WorldNetwork
Policy Variable Name    worldObject
Format                  Array Of Impact Object
You must create the custom schema value for the fields in the object. In this
example, the Impact object contains one string.
After you select Array Of Impact Object in the Format field, the system shows
the Open the Schema Definition Editor icon beside the Schema Definition field.
To open the editor, click the icon.
You define the following custom schema definitions for the policy.
Table 34. Custom schema value for Array Of Impact Object

Name                    Format
Node                    String
6. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for Topology World Network in the Page Name field.
d. To save the page, click OK.
7. Create a topology widget.
a. Open the Page for Topology World Network page that you created.
b. Drag the Topology widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set. Select the data type that belongs to WorldNetwork policy
data source. The data set information is displayed only after the defined
refresh interval. The default is 5 minutes.
e. The Visualization Settings window is displayed.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click Save and exit on the toolbar.
Results
The data from the UI data provider is displayed in the topology widget with the
customized links and status. When a user hovers on the link, the status
information is displayed in the console.
Filtering data output by a policy in the console
If you visualize data from a policy that contains input parameters, you can use the
input parameters to filter the values that are displayed in the console.
Procedure
1. Create a policy that contains input parameters.
In this example, create a policy that is called TestKey that contains the GetByKey
policy function. The input parameter is the Key parameter.
DataType = "<DATA_TYPE_NAME>";
MaxNum = 1;
MyCustomers = GetByKey(DataType, Key, MaxNum);
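The key-based lookup that GetByKey performs can be sketched in Python (the customer rows and Key values are invented for illustration; the real function matches on the data type's key field):

```python
# Sketch of the key-based lookup that GetByKey performs, using
# hypothetical customer rows; max_num caps how many matches return,
# mirroring the MaxNum argument in the policy above.
customers = [
    {"Key": "R12345", "Name": "Acme"},
    {"Key": "R67890", "Name": "Globex"},
    {"Key": "R12345", "Name": "Acme Europe"},
]

def get_by_key(rows, key, max_num):
    matches = [r for r in rows if r["Key"] == key]
    return matches[:max_num]

print(get_by_key(customers, "R12345", 1))
# [{'Key': 'R12345', 'Name': 'Acme'}]
```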
2. Create the Key user input parameter. In the policy editor, click the Configure
Policy Settings icon to create the user input parameter.
Table 35. Node user input parameter

Field                   Entry
Name                    Node
Policy Variable Name    Key
Format                  String
3. Create an output parameter called MyCustomers.
4. Create a page.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for Bar Chart in the Page Name field.
d. Save the page.
5. Create a bar chart widget.
a. Open the Page for Bar Chart page that you created.
b. Drag the Bar Chart widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the DataFromPolicy data type that belongs to the
TestKey data source. The dataset information is only displayed after the
defined refresh interval. The data set is the MyCustomers output parameter
that you created earlier. The default is 5 minutes.
e. The Visualization Settings UI is displayed.
f. The input parameters that are available for the policy are listed under
Configure Optional Dataset. You can enter the values that you want to filter
for here. In this example, the system displays a field that is called Key. If
you enter a value here, for example, R12345, only the data from the rows
that contain the key field value R12345 is displayed. In this way, you can
filter the values for the input parameters in the console.
g. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
h. To save the widget, click the Save and exit button on the toolbar.
Passing parameter values from a widget to a policy
You can use widgets to pass values as input parameters to policies in
Netcool/Impact.
About this task
In this example, you want to be able to send an email that contains contextual
information from a table widget. The information is passed as an input parameter
value from the widget in the console to the policy in Netcool/Impact.
This example uses the wire widget. You may experience problems with dragging
and dropping the widget. This is a known issue with the GUI. For more
information, see http://www-01.ibm.com/support/docview.wss?uid=swg21626092.
Procedure
1. Create a policy. For example, create the following policy that is called
DB2Emailpolicy that retrieves values for some rows in a DB2 table and sends
these values to an email address:
Address = "srodriguez@example.com";
Subject = "Netcool/Impact Notification";
Message = EventContainer.Node + " has reported the following error condition: "
+ EventContainer.Summary;
Sender = "impact";
ExecuteOnQueue = false;
SendEmail(null, Address, Subject, Message, Sender, ExecuteOnQueue);
2. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Pageforbutton in the Page Name field.
d. To save the page, click Ok.
3. Create a table widget that visualizes data from the DB2 database.
a. Open the Pageforbutton page.
b. Drag the Table widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit.
d. Select the dataset. Use the search box or the Show All button to find the
dataset that represents the DB2 database table.
e. To save the widget, click the Save button.
4. Create a button widget. Enter the name of the button and specify the parameter
that you would like to display in the console UI.
a. Drag the button widget that you created into the content area.
b. Enter a name. In this example, enter Emailbutton.
c. Select the dataset. In this example, Use the search box or the Show All
button to find the dataset that represents the policy that you created and
select it.
d. To ensure that the policy is run when the user clicks the button, select the
executePolicy check box.
e. To save the button widget, click Save.
5. Create a wire.
a. Click the Show Wire icon and click the New Wire button.
b. Select the table widget as the source event for the new wire. Select Table >
NodeClickOn.
c. Select the target for the wire. Select the button widget that you created.
d. Select None for the transformation.
e. To save the wire, click Ok and click the Save button on the toolbar on the
page.
Results
After you complete this task, the Send button is displayed on the console. When a
user clicks the Send button, the information that is contained in the row in the
table is sent as an email to the address specified by the policy.
Passing parameter values from a table to a gauge
This example demonstrates how you can use Netcool/Impact policies to pass
variables as input parameters from a widget to a policy and on to another widget.
About this task
In this example, users want to select rows in a table and display the status for the
row in a gauge widget. You create two Netcool/Impact policies to facilitate this.
One policy is the publisher policy and provides data in an output parameter to the
second policy, the subscriber policy. The subscriber policy receives data from the
publisher policy in a policy input parameter and it outputs the results as an output
parameter. The data contained in the output parameter is then visualized as a
gauge in the console.
The publisher policy retrieves the SiteStatus data from the ObjectServer. The
subscriber policy retrieves related data from the DB2 database.
Procedure
1. Create the publisher policy.
The following policy is called PolicyEventingPublisher and uses the DirectSQL
policy function to retrieve data from the defaultobjectserver data source. You
need to create a new integer field called SiteStatus in the ObjectServer if you
have not done so already.
Log("Policies Eventing From OS...");
DataFromOS=DirectSQL('defaultobjectserver',
"SELECT SiteStatus,Node,Identifier from alerts.status",false);
Create the output parameter so that the widget can visualize the data.
Table 36. Output parameter for DatafromOS

Field                   Entry
Name                    DatafromOS
Policy Variable Name    DatafromOS
Format                  DirectSQL / UI provider datatype
Create the custom schema values for the fields that you want to display in the
console. You must create three custom schema values. You also must select the
Key Field check box for the SiteStatus value.
Table 37. Custom schema value for SiteStatus

Field                   Entry
Name                    SiteStatus
Format                  Integer

Table 38. Custom schema value for Node

Field                   Entry
Name                    Node
Format                  String

Table 39. Custom schema value for Identifier

Field                   Entry
Name                    Identifier
Format                  String
2. Create the subscriber policy.
The following policy is called PolicyEventingSubscriber and uses the
GetByFilter function to retrieve the value for SiteStatus that is output by the
publisher policy. The publisher policy retrieves the SiteStatus data from the
ObjectServer. The subscriber policy retrieves related data from the DB2
database.
Log("Demo Policies Eventing From DB2...");
Filter="SiteStatus="+ SiteStatus;
DataFromDB2=GetByFilter('MachineInfo',Filter,false);
Log(DataFromDB2);
Create the policy input parameter.
Table 40. SiteStatus input parameter

Field                   Entry
Name                    SiteStatus
Policy Variable Name    SiteStatus
Format                  DirectSQL / UI provider datatype
Create the policy output parameter.
Table 41. DatafromDB2 output parameter

Field                   Entry
Name                    DatafromDB2
Policy Variable Name    DatafromDB2
Format                  Datatype
Data Source Name        DB2Source
Data Type Name          MachineInfo
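The handoff from publisher to subscriber can be sketched in Python (hypothetical MachineInfo rows; the real GetByFilter evaluates the filter against the DB2 data type):

```python
# Sketch of the publisher -> subscriber handoff: the SiteStatus value
# from the selected row becomes the filter for the subscriber lookup.
# The rows are hypothetical stand-ins for the MachineInfo data type.
machine_info = [
    {"SiteStatus": 1, "Node": "db2host01"},
    {"SiteStatus": 2, "Node": "db2host02"},
]

def get_by_filter(rows, site_status):
    # Mirrors Filter = "SiteStatus=" + SiteStatus in the subscriber policy.
    filter_expr = "SiteStatus=" + str(site_status)
    field, _, value = filter_expr.partition("=")
    return [r for r in rows if r[field] == int(value)]

print(get_by_filter(machine_info, 2))
# [{'SiteStatus': 2, 'Node': 'db2host02'}]
```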
3. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter PageforMachineInfo in the Page Name field.
d. Save the page.
4. Create a table widget.
a. Open the PageforMachineInfo page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the DatafromOS datatype that belongs to the
PolicyEventingPublisher data source. The datatype is only displayed after
the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. The system displays all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click the Save and exit button on the toolbar.
5. Create a gauge widget.
a. Open the PageforMachineInfo page that you created.
b. Drag the Analog Gauge widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the datatype DatafromDB2 that belongs to the
PolicyEventingSubscriber data source. The datatype is only displayed after
the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. Select the value that you want
to display in the gauge in the Value field. In this example, you select
SiteStatus from the list. You can also select a number of other optional
values for the gauge such as minimum value, maximum value and unit of
measure.
f. To ensure that the policy runs when the widget is displayed, select the
executePolicy check box. Click Ok.
g. To save the widget, click the Save and exit button on the toolbar.
Results
When a user selects a row in the table widget, the SiteStatus is displayed in the
gauge widget.
Visualizing a data mashup from two IBM Tivoli Monitoring
sources
You can use Netcool/Impact to visualize data from two different sources in IBM
Tivoli Monitoring.
Before you begin
Find the configuration details for each UI data provider that you want to use. For
example, if you want to connect to an IBM Tivoli Monitoring system, you must
retrieve the following information for the Tivoli Enterprise Monitoring Server UI
data provider:
v User
v Password
v Port
v Base URL
About this task
Tip: You can also use this procedure to obtain data from Tivoli Monitoring 6.3 for
event management purposes. The only difference is that you omit the step that
describes how to create the user output parameters for the policies.
Procedure
1. Use the UI data provider DSA to create two data sources. Each data source
connects to a different IBM Tivoli Monitoring system. For example, create the
following data sources:
v Data source 1 retrieves data from ITM1
v Data source 2 retrieves data from ITM2
a. Create the data source that retrieves data from ITM1:
1) Enter ITM_DS1 in the Data Source Name field.
2) Enter the user name and password for the database.
3) Complete the other fields as required.
4) Save the data source.
b. Create the data source that retrieves data from ITM2:
1) Enter ITM_DS2 in the Data Source Name field.
2) Enter the user name and password for the database.
3) Complete the other fields as required.
4) Save the data source.
For more information about how to create a data source for the UI data
provider DSA, see the section about creating a UI data provider data source in
the Netcool/Impact DSA Guide.
2. Create two data types for each data source that you created in the previous
step. Later, you combine the data from the two data types that belong to the
same data source into a single object. Then, you combine the data from the two
objects so that the data from the different systems is merged. For example,
create the following data types:
v Datatype1A - select Tivoli Enterprise Monitoring Agent
v Datatype1B - select Tivoli Enterprise Monitoring Agent
v Datatype2A - select Tivoli Enterprise Monitoring Agent
v Datatype2B - select Tivoli Enterprise Monitoring Agent
a. Create the data types as follows, changing the name for each data type:
1) Enter Datatype1A as the name and complete the required fields.
2) To enable the data type, select the Enabled check box.
3) Select the key fields for the data type.
4) Save the data type.
For more information about how to create a data type for the UI data provider
DSA, see the section about creating a UI data provider data type in the
Netcool/Impact DSA Guide.
3. To combine the data from the different sources, create a policy in
Netcool/Impact that uses the GetByFilter function. For this example, you must
create the following arrays to combine the data from the different sources:
v Array1A = GetByFilter()
v Array1B = GetByFilter()
v Array2A = GetByFilter()
v Array2B = GetByFilter()
For example, the following policy uses the GetByFilter function to combine the
data from the ITM_DS1 data source into a single object.
a. The policy initializes its output parameter with cpuLinuxITM={};.
b. Datatype1A retrieves data from a Tivoli Enterprise Monitoring Agent and it
also retrieves the IP address data for each node:
ipaddress01="";
DataType="datatype1A";
Filter="&param_SourceToken=paramValue";
iparray=GetByFilter(DataType, Filter, false);
count=0;
while(count<Length(iparray)){
if( (iparray[count].IPVERSION != "IPv6")&&(iparray[count].
IPADDRESS!="127.0.0.1")){
ipaddress01= iparray[count].IPADDRESS;
}
count = count +1;
}
c. Datatype1B retrieves data from a Tivoli Enterprise Monitoring Agent and it
also provides processor usage data for each node. The policy creates an
array of metrics for each monitored node. It also enhances this information
with the IP address:
DataType="datatype1B";
Filter="&param_SourceToken=paramValue&sort=BUSYCPU";
MyFilteredItems = GetByFilter( DataType, Filter, false );
index = 0;
Num = Length(MyFilteredItems);
if(Num > index){
while(index<Num){
cpu=NewObject();
cpu.TIMESTAMP= MyFilteredItems[index].TIMESTAMP;
cpu.ORIGINNODE= MyFilteredItems[index].ORIGINNODE;
cpu.BUSYCPU= MyFilteredItems[index].BUSYCPU;
cpu.IPADDRESS=ipaddress01;
cpuLinuxITM = cpuLinuxITM+{cpu};
index=index+1;
}
}
Log("Finished collecting cpu usage from Metrics Agent: " + DataType);
For more information about using the GetByFilter function with the UI data
provider, see the topic about accessing data types output by the GetByFilter
Function in the Netcool/Impact Solutions Guide.
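Steps b and c above can be sketched in Python (the agent rows are invented stand-ins; the real data comes from the Tivoli Enterprise Monitoring Agent data types):

```python
# Sketch of steps b and c: first pick the node's non-loopback,
# non-IPv6 address, then attach it to each metric row.
ip_rows = [
    {"IPVERSION": "IPv6", "IPADDRESS": "::1"},
    {"IPVERSION": "IPv4", "IPADDRESS": "127.0.0.1"},
    {"IPVERSION": "IPv4", "IPADDRESS": "192.0.2.10"},
]
metric_rows = [
    {"ORIGINNODE": "linux01", "BUSYCPU": 42.5},
]

# Mirrors the while loop over iparray in the policy.
ipaddress01 = ""
for row in ip_rows:
    if row["IPVERSION"] != "IPv6" and row["IPADDRESS"] != "127.0.0.1":
        ipaddress01 = row["IPADDRESS"]

# Mirrors the loop that enriches each metric row with the IP address.
cpu_linux_itm = []
for m in metric_rows:
    enriched = dict(m)
    enriched["IPADDRESS"] = ipaddress01
    cpu_linux_itm.append(enriched)

print(cpu_linux_itm)
# [{'ORIGINNODE': 'linux01', 'BUSYCPU': 42.5, 'IPADDRESS': '192.0.2.10'}]
```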
4. Create the user output parameters for the policies. In this example, cpuLinuxITM
is the output parameter that is defined in the policy. You must create an output
parameter for cpuLinuxITM as outlined in the table. To create a user output
parameter, open the policy editor and click the Configure Policy Settings icon
and the Policy output parameter:New button.
Table 42. ITM1 output parameter

Field                   Entry
Name                    ITM1
Policy Variable Name    cpuLinuxITM
Format                  Array of Impact Objects
This parameter ensures that the policy is exposed as part of the
Netcool/Impact UI data provider.
For more information about how to configure policy settings, see configure
policy settings in the Netcool/Impact Solutions Guide.
5. Use the following schema definition values.
v TIMESTAMP = String
v ORIGINNODE = String
v BUSYCPU = Float
v IPADDRESS = String
6. Create a data source and data type that are based on the policy. In this
example, create the data source as follows:
a. Select ITM_mashup_policy from the list in the Data Source Name field.
b. Enter the user name and password for the database.
c. Select the Netcool/Impact UI data provider, Impact_NCICLUSTER, as the
provider.
d. Complete the other fields as required.
e. Save the data source.
Create the data type as follows:
a. Enter ITM_mashup_dt as the name and complete the required fields.
b. To ensure that the data type is compatible with the UI data provider, select
the UI data provider: enabled check box.
c. Select the key fields for the data type.
d. Save the data type.
7. To confirm that the policy returns the correct data when it runs, right-click the
data type and select View Data Items. Enter &executePolicy=true in the filter
and refresh.
8. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter Page for ITM mashup in the Page Name field.
d. Save the page.
9. Create a table widget that visualizes data from the policy's data type.
a. Open the Page for ITM mashup page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the dataset. Select the ITM_mashup_dt data type that belongs to the
ITM_mashup_policy data source. The data type is only displayed after the
defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. Enter the values that you want
to use. You can select multiple lines. You can also select text to display as
a tooltip. Click Ok.
f. To save the widget, click the Save and exit button on the toolbar.
Results
After you create the table widget, the same data that was displayed in step 6 in
Netcool/Impact is displayed in the console.
Visualizing Event Isolation and Correlation results in a topology
widget
How to view Event Isolation and Correlation results in a topology widget.
Before you begin
By default, the following data types and policy parameters are available to the UI
data provider for creating widgets. Use the following information to check the
configuration for the data types and policy.
v Check that the data types EVENTRULES and EIC_TopologyVisualization in the
EventIsolationAndCorrelation project have the Access the data through UI data
provider option selected. In the Data Model, select the data type, then select
Edit to view the settings.
v Check that the policy EIC_TopologyVisualization is configured with an output
parameter EIC_Relationship, which shows the topology in the topology widget.
The EICAffectedEvents output parameter shows only the ObjectServer events
that are related to the resources in the topology.
v Check that the policy EIC_PrimaryEvents includes an output parameter
EICPrimaryEvents with a filter to get the primary events for the specific rule.
About this task
In Jazz for Service Management, in the IBM Dashboard Application Services Hub,
create a page and use two table widgets and one Topology widget.
Procedure
1. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > Pages > New Page.
c. Enter Page for Event Isolation and Correlation in the Page Name field.
d. To save the page, click Ok.
2. Configure one table widget called Events Rules for the data type
EVENTRULES to show all the rules in the database. This table widget is the
main widget that drives the second table widget.
a. Open the Page for Event Isolation and Correlation page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set. Select the EVENTRULES data type that belongs to the
EventrulesDB data source. The data set information is only displayed after
the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. The system shows all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To save the widget, click Save and exit on the toolbar.
3. Configure another table widget called Primary Events for the policy output
parameter EICPrimaryEvents.
a. Open the Page for Event Isolation and Correlation page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set EICPrimaryEvents. The data set information is only
displayed after the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. The system shows all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To ensure that the policy runs when the widget is displayed, select the
ExecutePolicy check box. Click OK.
g. To save the widget, click Save and exit on the toolbar.
4. Configure a Topology widget called Resource Topology for the output
parameter EIC_Relationship.
a. Open the Page for Event Isolation and Correlation page that you created.
b. Drag the Topology widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the data set. Select the EIC_Relationship data set. The data set
information is only displayed after the defined refresh interval. The default
is 5 minutes.
e. The Visualization Settings window is displayed. To ensure that the policy
runs when the widget is displayed, select the executePolicy check box.
Click OK.
f. To save the widget, click Save and exit on the toolbar.
5. Configure a direct wire from the EVENTRULES table widget to the
EICPrimaryEvents table widget.
a. Edit the page.
b. Click the Wires button.
c. Click New Wire.
d. Select the source as the Events Rules widget OnNodeClicked event and click OK.
e. Select the target as the Primary Events widget.
f. Repeat and select the target as Resource Topology to clear the topology.
6. Configure a direct wire from the EICPrimaryEvents table widget to the
EIC_Relationship topology widget.
a. Edit the page.
b. Click the Wires button.
c. Click New Wire.
d. Select the source as the Primary Events widget OnNodeClicked and click
OK.
e. Select the target as the Resource Topology widget.
Tip: If you have not yet saved the page, you can locate the Resource Topology
widget in the Select Target for New Wire dialog under Console Settings /
This page.
f. Save the page.
Results
When you click an item in the EVENTRULES table widget, it populates the
EICPrimaryEvents table with the primary events from the objects. When you click
an item in the EICPrimaryEvents table widget, it populates the Topology widget
when the Event Isolation and Correlation operation is completed. Optionally, you
can create a table widget instead of the topology widget to view the related events
only. The table widget is configured for the EICAffectedEvents output parameter.
Visualizing data from the Netcool/Impact self service dashboards
Netcool/Impact self service dashboard widgets are designed to enable users or
administrators to create dashboards. The self service dashboards can accept
user input through a customizable Input Form widget and can drive user actions
through a Button widget. Both of these widgets interact with Netcool/Impact
through the Netcool/Impact REST API interface.
Netcool/Impact 7.1.0.5 includes a UI data provider, which makes Netcool/Impact
data available for consumption by dashboard widgets in the IBM Dashboard
Applications Services Hub. The UI data provider works well for dashboards that
are intended for read-only usage, but it cannot interact dynamically with
Netcool/Impact policies. For example, you can create a dashboard with a table
widget that displays all the trouble tickets that are managed by Netcool/Impact.
However, you cannot interact with the trouble tickets, for example to create a ticket.
When the Netcool/Impact Self Service dashboard widgets are installed on the
console, you can add to the existing visualizations: you can update the data by
running Netcool/Impact policies and take certain actions on a data set.
Installing the Netcool/Impact Self Service Dashboard widgets
To create custom dashboards for Netcool/Impact to view in Jazz™ for Service
Management, you can add Netcool/Impact specific widgets to enhance the
capabilities of the dashboards you create.
Before you begin
Before you install the Netcool/Impact Impact_SSD_Dashlet.war file on the
Dashboard Application Services Hub Server, you must have the following
environment.
v A server with Netcool/Impact 7.1.0.5 installed.
v A server with IBM Dashboard Applications Services Hub installed and
configured with a data connection to the Netcool/Impact 7.1.0.5 server. For
information about setting up a connection between the Impact server and Jazz
for Service Management, see “Setting up the remote connection between the UI
data provider and the console” on page 105.
For more information about Jazz for Service Management, see the Jazz for Service
Management Knowledge Center available from the following URL:
http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=%2Fcom.ibm.psc.doc_1.1.0%2Fpsc_ic-homepage.html.
Procedure
1. Log on to the Impact server.
2. Go to the add-ons directory, $NCHOME/add-ons/ssd.
3. Copy the Impact_SSD_Dashlet.war file to the Dashboard Application
Services Hub Server. Note the location that you copy the file to. For
example, C:\build\Impact_SSD_Dashlet.war.
4. On the Dashboard Application Services Hub Server, run the wsadmin tool by
using one of the following commands:
v UNIX: INSTALL/JazzSM/profile/bin/wsadmin.sh
Where INSTALL is the installed location of Jazz for Service Management.
5. Run the following command all on one line to install the
Impact_SSD_Dashlet.war file.
$AdminApp update isc modulefile
{-operation addupdate -contents "<ImpactSSDWar>"
-custom paavalidation=true
-contenturi Impact_SSD_Dashlet.war
-usedefaultbindings
-contextroot /Impact_SSD_Dashlet
-MapWebModToVH {{.* .* default_host}}}
Where <ImpactSSDWar> is the location of the copied war file. For example,
C:\build\Impact_SSD_Dashlet.war.
6. If the wsadmin command succeeds without any errors, use the following
command to save the changes:
$AdminConfig save
7. Use one of the following commands to restart the Dashboard Application
Services Hub Server:
v UNIX:
INSTALL/JazzSM/profile/bin/stopServer.sh server1
INSTALL/JazzSM/profile/bin/startServer.sh server1
Uninstalling the Netcool/Impact Self Service Dashboard widgets
Use this procedure to uninstall the self service dashboard widgets feature from
Jazz for Service Management.
Before you begin
You must remove any dependencies on the Netcool/Impact self service dashboard
widgets from any existing pages or portlets in Jazz for Service Management. Then,
delete any instances of the Netcool/Impact dashboard widgets from the page or
portlets in Jazz for Service Management. For information about how to delete
pages or portlets see the IBM® Dashboard Application Services Hub online help.
Procedure
1. On the Dashboard Application Services Hub Server, use one of the following
commands to run the wsadmin tool:
v UNIX: INSTALL/JazzSM/profile/bin/wsadmin.sh
2. Run the following command to uninstall the Impact_SSD_Dashlet.war file.
$AdminApp update isc modulefile
{-operation delete -contenturi Impact_SSD_Dashlet.war }
3. If the wsadmin command succeeds without errors, run the following command
to save the changes:
$AdminConfig save
4. Use one of the following commands to restart the Dashboard Application
Services Hub Server.
v UNIX:
INSTALL/JazzSM/profile/bin/stopServer.sh server1
INSTALL/JazzSM/profile/bin/startServer.sh server1
Editing an Input Form widget
The Input Form widget is a form control which can run Netcool/Impact policies
with user-defined input parameters. The Input Form widget dynamically generates
a form with a set of input fields which correspond to the input parameters of the
policy.
When you submit the form, the associated policy is run with input parameters
from the form input fields.
1. In the title bar, click the Edit options icon and select Edit.
2. Choose the Netcool/Impact policy that you want to run. You can search for the
policy by completing the search field with either a full or partial name and
clicking Search. You can also view the complete list of available data sets by
clicking the Show All button in the center of the results panel.
3. Select that data set, and click the right arrow to show the Visualization
Settings page. In the Visualization Settings page, you can configure the button
title and Netcool/Impact policy parameters.
4. In the Required Settings section, select the executePolicy option.
5. Optional. Select Optional Settings, and add the name of the form to the Title
field. The title is also used as the label of the form submission button.
6. Optional. Configure Optional Dataset Parameters: You can also set default
values for any input parameters that are attached to the Netcool/Impact policy.
The Input Form widget populates the input parameter form with any default
values you set here.
7. Click OK to implement the changes, or Cancel to discard the changes.
8. On the dashboard, click the button on the widget to run the policy. The input
parameters are passed to the policy as policy input parameters. The results are
displayed in the widget.
Tip: In the Input Form, you can manually change the values in the form fields,
click the button, and run the policy again and show the results in the dashboard.
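As a minimal sketch of a policy that an Input Form widget might run (the policy name and the parameter names here are hypothetical, not part of the product), the form field values arrive in the policy as input parameter variables:

```
// Hypothetical policy for an Input Form widget.
// ticketSummary and ticketPriority are input parameters that are defined
// in the policy settings; the widget maps one form field to each parameter.
Log("Form submitted: summary=" + ticketSummary + ", priority=" + ticketPriority);
```

When the form is submitted, each field value is assigned to the matching input parameter before the policy body runs.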
Editing a Button widget
The Button widget can be used in an Operator View to run a policy. You can edit
the Button widget to use the input parameters that are set by the policy that is
attached to the button.
1. In the title bar, click the Edit options icon and select Edit.
2. Choose the Netcool/Impact policy that you want to run. You can search for the
policy by completing the search field with either a full or partial name and
clicking Search. You can also view the complete list of available data sets by
clicking the Show All button in the center of the results panel.
3. Select that data set, and click the right arrow to show the Visualization
Settings page. In the Visualization Settings page, you can configure the button
title and Netcool/Impact policy parameters.
4. In the Required Settings section, select the executePolicy option.
5. Optional. Select Optional Settings, and add the name of the button to the Title
field. The title is also used as the label of the form submission button.
6. Optional. Configure Optional Dataset Parameters: You can also set default
values for any input parameters that are attached to the Netcool/Impact policy.
7. Click OK to implement the changes, or Cancel to discard the changes.
8. On the dashboard, click the button on the widget to run the policy. The input
parameters are passed to the policy as policy input parameters. The results are
displayed in the widget.
Tip: To change the values of the input parameters, you must edit the Button
widget and change the parameters in the Visualization Settings page before you
run the policy again from the dashboard.
Configuring the Button widget to receive data from other widgets
The Button widget can receive data from other widgets. For example, in the
console you can create a wire between the Button widget and a Table widget. Wires
are connections between widgets that share information and open pages in
context. When you click a row in the Table widget, the Button widget processes the
data. When you click the button on the Button widget, the data from the table
row is processed and then sent to the policy as input parameters.
About this task
This example uses a Button widget and a table widget. You can use the same
process with a Button widget and any other widget.
Procedure
1. In Netcool/Impact, create a DB2 data source.
a. Enter NewDataSource in the Data Source Name field.
b. Enter the user name and password for the database.
c. Complete the other fields as required.
d. Save the data source.
2. Create a data type for the DB2 data source.
a. Enter NewDataType as the name and complete the required fields.
b. To ensure that the data type is compatible with the UI data provider, select
the UI data provider: enabled check box.
c. Select the key fields for the data type.
d. Save the data type.
3. Create the target policy.
a. In the Netcool/Impact policy editor, create a Netcool/Impact policy.
b. Define the following policy:
Log("Executing IPL Impact Policy");
filter = "SERVICEREQUESTIDENTIFIER = " + inputParamID;
GetByFilter('NewDataType', filter, false);
c. Save the policy and name it PolicyForButtonWidget.
4. Create the input parameter for the policy.
a. In the policy editor, click the Configure Policy Settings icon to create the
input parameter.
b. Name the input parameter inputParamID.
c. Specify Long for the Format field.
d. Click OK.
e. Save the policy.
5. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > New Page.
c. Enter a name for the page in the Page Name field.
d. Save the page.
6. Create a Button widget in the console.
a. Open the new page that you created.
b. Open the Impact widget folder and drag the Button widget into the content
area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a data set window is displayed.
d. Select the data set. Select the PolicyForButtonWidget policy that you
created earlier.
e. The Visualization Settings UI is displayed. Click OK.
f. To save the button widget, click the Save button on the page toolbar.
7. Create a Table widget in the console.
a. Open the new page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the data set. Select the NewDataType data type that belongs to the
NewDataSource data source. The data type is only displayed after the
defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. Click OK.
f. To save the Table widget, click Save on the page toolbar.
8. Create a Wire between the Button and Table widgets.
a. To open the Wires wizard, click the Show Wires button on the page toolbar.
b. Click the New Wire button.
c. On the Select Source Event for New Wire page, identify the Table widget in
the list of Available source events and select the NodeClickedOn event.
d. Click OK.
e. On the Select Target for New Wire page, select the Button widget from the
list of Available targets.
f. Click OK on the next two pages to create a wire between the Button and
Table widgets.
Results
When you click a row in the Table widget, a NodeClickedOn event is generated.
The Button widget processes the event by extracting the data for the clicked table
row. When you click the button on the Button widget, it runs the policy that is
configured in the widget. The data passes from the table row to the policy as
policy input parameters. The GetByFilter function runs and uses the input
parameter that is provided by the table row.
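For example, if the clicked table row carries a SERVICEREQUESTIDENTIFIER of 1001 (a hypothetical value), the policy from step 3 effectively builds and runs the following filter:

```
// 1001 is passed from the table row into the inputParamID input parameter.
filter = "SERVICEREQUESTIDENTIFIER = 1001";
GetByFilter('NewDataType', filter, false);
```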
Reference topics
You can use custom URLs and the UI data provider to access data directly. You can
also customize the UI data provider and enable large data model support.
Large data model support for the UI data provider
You can use the UI data provider with large data models based on the supported
databases.
Large data model support is used to facilitate the integration of the UI data
provider with database tables that use filtering and paging to limit the number of
rows.
The following databases and associated data types are supported, and large data
model integration is enabled by default:
v DB2
v Derby
v HSQLDB
v Informix
v MySQL
v MS-SQLServer
v Oracle
v PostgreSQL
For information about how to enable and disable large data models, see “Disabling
and enabling large data models” on page 163.
Restrictions
v Sybase databases and associated data types are not supported. You can limit the
data by using the input filters in the data type or in the widget.
Important: If you access a Netcool/OMNIbus or Sybase data type that has a
large number of rows, for example, more than 10,000 rows, you can potentially
run out of memory. Out of memory issues can occur when the number of rows
that are stored in memory exceeds the heap memory that is allocated to the Java
virtual machine. If you plan to access large amounts of data for a
Netcool/OMNIbus or Sybase data type, consider increasing the heap memory
settings for the GUI Server and Impact Server from their default values. For
information about changing heap memory settings, see the Memory status
monitoring section of the Administration Guide.
v If you want to integrate the UI data provider with large data models, you must
not use UIDPROWNUM as a field name in the Oracle or DB2 database. The AS
UIDPROWNUM field is added to the query for Oracle and DB2 databases. As a
result, this field is reserved for use by the query that is associated with these
types of databases.
v The MS-SQLServer database uses the first field of the data type in the query.
There are no reserved fields for this type of database.
v If pagination is enabled in the GUI or in the URL, Netcool/Impact does not
store this information in the memory. Instead, Netcool/Impact retrieves the rows
directly from the database to avoid any adverse effects on performance and
memory.
Disabling and enabling large data models
You can disable large data model support so that the UI data provider does not
use it, and enable it again later.
About this task
Large data model integration is enabled by default for the following databases:
v DB2
v Derby
v HSQLDB
v Informix®
v MySQL
v MS-SQLServer
v Oracle
v PostgreSQL
Procedure
1. To disable the integration of the UI data provider and large data models,
change impact.uidataprovider.largetablemodel=true to
impact.uidataprovider.largetablemodel=false in the server.props file.
Tip: If this parameter does not exist, you can add it to the server.props file.
2. Restart the GUI Server.
Enabling and disabling the large data model for the ObjectServer:
Large data model integration is disabled by default for the ObjectServer and is not
supported unless you use Netcool/OMNIbus version 7.4.0 fix pack 1. This fix
pack has functions that can be used to perform paging in the ObjectServer.
Before you begin
You must stop the GUI Server.
On UNIX operating systems, enter the following command in the command line.
$IMPACT_HOME/bin/stopGUIServer.sh
[-username adminuser -password adminpassword]
On Windows operating systems, you can use the Services Extension in the
Microsoft Management Console.
1. In the Start menu, select Control Panel > Administrative Tools > Services.
2. Right-click Tivoli Netcool/Impact GUI Server.
3. Select Properties. In the Properties dialog box, click Stop and then click OK.
About this task
For information about Netcool/OMNIbus version 7.4.0 fix pack 1, see
http://www-01.ibm.com/support/knowledgecenter/SSSHTQ/landingpage/
NetcoolOMNIbus.html.
Procedure
1. To enable the integration of the UI data provider and large data models for the
ObjectServer, change the property
impact.uidataprovider.largetablemodel.objectserver=false to
impact.uidataprovider.largetablemodel.objectserver=true in the
server.props file.
If the property does not exist, you can create it.
2. Restart the GUI Server.
v On UNIX systems, enter the following command at the command line.
$IMPACT_HOME/bin/startGUIServer.sh
[-username adminuser -password adminpassword]
v On Windows systems, use the following command:
%IMPACT_HOME%\bin\startGUIServer.bat
You can also use the Services Extension in the Microsoft Management
Console to start the GUI Server from the Services panel.
a. In the Start menu, select Control Panel > Administrative Tools >
Services.
b. Right-click Tivoli Netcool/Impact GUI Server.
c. Select Properties. In the Properties dialog box, click Start and then click
OK.
Enabling large data model support for UI data provider policies
Enabling large data model support for UI data provider policies ensures faster
responses from policies that return large amounts of data. By default, large data
model support for UI data provider policies is disabled.
About this task
If a policy retrieves large amounts of data, an out of memory error can occur.
Enabling large data model support for policies allows paging through the data by
using start and count variables, which helps to manage memory. To enable large
data model support for UI data provider policies, complete the following steps.
Procedure
1. Edit the policy property file and output parameters. If the policy name is
GetSQLNodes and the output parameter is result, then the policy property file
to be edited is Policy_GetSQLNodes_Result.properties.
a. Put the policy property file in <UI Data Provider Config Base
Directory>/<UI configuration directory>/<CLUSTER NAME>/properties
UI Data Provider Config Base Directory
By default this directory name is the same as the $IMPACT_HOME
installation directory. To use a different directory name, go to
$IMPACT_HOME/etc/server.props and add or update the following
property impact.uidataprovider.config.directory=full path to an
existing directory.
UI configuration directory
By default this directory name is uiproviderconfig. To use a different
directory name, go to $IMPACT_HOME/etc/server.props and add or
update the following property
impact.uidataprovider.customprops.directory=directory name. The
directory name must be a single directory name without special
characters.
CLUSTER NAME
Use this parameter when multiple clusters connect to the same UI data
provider. To enable use of a cluster, go to $IMPACT_HOME/etc/
server.props and add the following property:
impact.uidataprovider.customprops.usecluster=true
b. In the policy property file, add or update the property
largemodel.enabled=true.
2. Netcool/Impact sends the following parameters to the policy to tell
the policy call to return specific data:
GET_NUMBEROFROWS
The value is a string that instructs the policy to return only the total
number of rows. The policy puts the row count into the policy
variable NUMBEROFROWS.
3. Netcool/Impact runs the policy twice: first to get NUMBEROFROWS as the
total count, then again to get the actual rows. Netcool/Impact sends the input
parameters start and count, and these numeric variables are used to limit the
query to that range of rows.
Remember:
The ObjectServer with Netcool/OMNIbus version 7.4.0 fix pack 1 supports
paging by using the SKIP and TOP keywords in the SQL.
In a Dashboard Applications Services Hub page, you see the following results.
v When you scroll down the page, Netcool/Impact continues to send different
start and count parameters to the policy to get only this set of data.
v When you do a column sort, Netcool/Impact sends the sorting query in the
parameter SORTING_CLAUSE. The value of the SORTING_CLAUSE parameter must
be used in combination with ORDER BY to run the query.
v When you filter, the filter is passed to the policy in the CUSTOM_FILTER
parameter, by using the format that the database requires.
Remember:
If the policy uses the directSQL output parameter, the data source name field in
the output parameter configuration page must be updated with the data source
name that is used in the DirectSQL function. This naming ensures that the filter
is constructed based on the database type.
For an array of objects, the filter is returned in generic format. Unique columns
in the schema are marked as keys inside the user output parameter.
Example
Here is an example policy that pages an ObjectServer that contains more than
50,000 rows:
Log("Test Direct SQL with large model");
Log("CurrentContext : " + CurrentContext());
sorting="";
if (SORTING_CLAUSE != NULL) {
  sorting = SORTING_CLAUSE;
}
SQL="SELECT * FROM status";
if (GET_NUMBEROFROWS != NULL) {
  Log("Getting count only");
  SQL="SELECT COUNT(*) AS NumOfEvents FROM status ";
} else {
  if (start == NULL) {
    start = 0;
  }
  if (count == NULL) {
    count = 25;
  }
  SQL="SELECT SKIP " + start + " TOP " + count + " * FROM status ";
  if (CUSTOM_FILTER != NULL) {
    SQL = SQL + " WHERE " + CUSTOM_FILTER;
  }
  if (sorting != "") {
    SQL = SQL + " ORDER BY " + sorting;
  }
}
Log("SQL is : " + SQL);
Nodes=DirectSQL("ObjectServerSource",SQL,null);
NUMBEROFROWS=0;
if (Nodes != NULL && Length(Nodes) > 0) {
  result = Nodes[0].NumOfEvents;
  if (result != NULL) {
    NUMBEROFROWS=result;
  }
}
Log("NUMBEROFROWS: " + NUMBEROFROWS);
Log("Num is : " + Num);
SQL statement results:
SELECT SKIP 0 TOP 25 * FROM status WHERE (( to_char(Serial) LIKE '.*test.*')
or ( to_char(Severity) LIKE '.*test.*') or (Summary LIKE '.*test.*')
or (Identifier LIKE '.*test.*') or (Node LIKE '.*test.*')
or (BSM_Identity LIKE '.*test.*')) ORDER BY Identifier ASC
Important: The STATUS and PERCENTAGE data types are treated as integers.
UI data provider customization
You can customize the UI data provider by changing the refresh rate, initializing
all SQL data items, and enabling multiple Netcool/Impact clusters that access the
same GUI provider.
Refresh rate
You can configure how often the UI data provider is refreshed. By default, this
interval is set to 5 minutes. To change this setting, add the following statement
to the server.props file that is in the $IMPACT_HOME/etc/ folder:
impact.uidataprovider.refreshrate=<refresh_rate_in_milliseconds>
For example, add the following statement to change the refresh interval to 3
minutes:
impact.uidataprovider.refreshrate=180000
Initialization of SQL data items
You can configure Netcool/Impact so that the SQL data items are initialized
during startup. To do so, add the following statement to the server.props file:
impact.uidataprovider.sql.initializenodes=true
Restriction: This setting can have an adverse effect on performance and memory
usage, depending on the amount of data that is held by the data type. For this
reason, the setting is disabled by default.
Enable multiple servers
Your deployment can include multiple UI data provider servers in a server cluster
that access the same GUI provider. To integrate the UI data provider with this type
of deployment, you must configure the navigational model load so that it regularly
refreshes the data from each UI data provider server. To do so, add the following
statement to the server.props file:
impact.uidataprovider.refreshclusters=true
Character encoding
By default, Netcool/Impact uses UTF-8 character encoding to parse parameter
values and to send these values to the UI data provider. You can change this setting if,
for example, you want to use Chinese characters alongside the UI data provider. To
change this setting, shut down your GUI server and add the following statement
to the server.props file that is in the IMPACT_HOME/etc folder:
impact.uidataprovider.encoding=<charset>
Start your GUI server. Netcool/Impact uses the encoding that is defined in the
charset variable to parse parameter values.
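For example, to parse parameter values with a Chinese-capable charset (GB18030 is an illustrative value; use the charset that your locale requires), you might add:

```
impact.uidataprovider.encoding=GB18030
```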
Disabling the UI data provider
By default, the UI data provider is enabled in the GUI Server. To disable the UI
data provider, add the following statement to the server.props file in the
$IMPACT_HOME/etc folder. You must shut down the GUI Server before you add the
statement.
impact.uidataprovider.enable=false
Note: In a split installation, add the statement to the server.props file in the GUI
Server.
To complete the change, restart the GUI Server.
Translating date filters for connected databases
Netcool/Impact must translate filter values from the console into a format that is
compatible with the queries used for the various databases. The translation is
required because the console uses milliseconds as the generic format to send dates.
This translation is controlled by the impact.uidataprovider.dateformat property in
the server.props file in the $IMPACT_HOME/etc folder. The default pattern is
yyyy-MM-dd HH:mm:ss.SSS. For example, if you filter for January 1st 2012,
Netcool/Impact translates the filter value into 2012-01-01 00:00:00.000.
To change the default pattern, change the impact.uidataprovider.dateformat
property in the server.props file in the $IMPACT_HOME/etc folder.
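For example, to use a day-first pattern (an illustrative value; the pattern must match the date format that your database queries expect), you might set:

```
impact.uidataprovider.dateformat=dd-MM-yyyy HH:mm:ss.SSS
```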
Changing the case sensitivity of filtered data
By default, the CUSTOM_FILTER for policies and the data filter for data types is
case-sensitive. If you want to change policy or data type filters to be case
insensitive, add the line filter.casesensitive=false to the UI data provider
configuration properties file in the IMPACT_HOME/uiproviderconfig/properties
directory. If the properties file does not exist, you can create it, but the file
name must be in the following format.
For Netcool/Impact data types, the format is DataType_<Data Type
Name>_<encoding>.properties.
For Netcool/Impact policies, the format is Policy_<Policy Name>_<Output
Parameter Name>_<encoding>.properties.
Also, by default the case sensitivity of the SQL function that is used to convert the
column name and value is set as filter.casesensitive.function=lower. To change
the value to be case insensitive, enter filter.casesensitive.function=false. If
this function does not work for your database type, you might need to change this
function to be either lower or false.
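As a sketch, the contents of a UI data provider configuration properties file for a data type might look as follows (the combination shown here is illustrative; verify the function value against your database type):

```
# Make the data filter case insensitive for this data type.
filter.casesensitive=false
# SQL function that is used to convert the column name and value.
filter.casesensitive.function=lower
```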
Column labels in the browser locale inside the console
The column labels that are displayed in the IBM Dashboard Applications Services
Hub are provided by the Netcool/Impact UI data provider. These column labels
are available in the data type configuration or policy output parameter
configuration. You configure the UI data provider so that the column labels can be
displayed in the locale of the browser.
Translation file location
Translating a new column label requires a translation file to be created for every
new column label that is used in an output parameter. The translation file must be
in the <UI Data Provider Config Base>/<translation> directory.
<UI Data Provider Config Base>
By default, this is the same as the $IMPACT_HOME installation directory. To use a
different directory name, edit the etc/server.props property file and add or
update the property impact.uidataprovider.config.directory=<full path to
an existing directory>
<translation>
By default, this directory name is translation. To use a different directory
name, edit the etc/server.props property file and add or update the property
impact.uidataprovider.translation.directory=<directory name>. The
directory name must be a single name without any special characters.
Enablement or disablement of translation
By default, translation of new column labels is enabled. To disable the translation,
edit the etc/server.props property file and add or update the property
impact.uidataprovider.translation.enabled=false.
Translation file name
The translation file name must be in one of the following formats:
v DataType_<Data Type Name>_<encoding>.properties for Netcool/Impact data
types.
v Policy_<Policy Name>_<Output Parameter Name>_<encoding>.properties for
Netcool/Impact policies.
<Data Type Name>
Is the name of the Netcool/Impact data type that is enabled for UI data
provider.
<Policy Name>
Is the name of the Netcool/Impact policy.
<Output Parameter Name>
Is the name of an output parameter in the policy.
<encoding>
Is one of the supported encodings in the browser locales. For example, zh_CN
or fr. If the translation file is provided without <encoding>, the translations
are applied to the column labels during initialization and are used as the
display names regardless of locale. The <encoding> must be in the form
<language>_<COUNTRY> (two lowercase characters, an underscore, and two
uppercase characters), but for a language-only locale such as fr, just fr is
required.
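The naming convention above can be sketched as a small helper; the function name and its use are illustrative assumptions, not part of Netcool/Impact:

```python
def translation_file_name(kind, name, encoding=None, output_param=None):
    """Build the expected translation file name.

    kind is "datatype" or "policy"; encoding is a browser locale such as
    "zh_CN" or "fr", or None for a locale-independent file.
    """
    suffix = "_" + encoding if encoding else ""
    if kind == "datatype":
        return "DataType_%s%s.properties" % (name, suffix)
    if kind == "policy":
        return "Policy_%s_%s%s.properties" % (name, output_param, suffix)
    raise ValueError("kind must be 'datatype' or 'policy'")

print(translation_file_name("datatype", "MyDataType", "zh_CN"))
print(translation_file_name("policy", "Test_Policy", "fr", output_param="Result"))
```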
Translation file format
v To translate a column label, the translation file must have the format of <column
name>=<translated label>.
<column name>
Is the actual column name.
<translated label>
Is the translation of the <column name> into a different locale.
v To translate a display value, the translation file must have the format of
<Fieldname>.displayValue.<actual value>=<display value>. For example:
Location.displayvalue.NY=New York
Location.displayvalue.New\ Jersey =NJ
By default, display values are supported for the data types String, Numeric
(Float, Double, Integer), Status, and Percentage. If you want to use a display
value for data types like Date, Timestamp, or other data types that are supported
by Netcool/Impact, add the numeric type number to the $IMPACT_HOME/etc/
server.props file as impact.uidataprovider.displayvalue.types=<numeric
value for each type, separated by commas> and restart the UI server. For
example, the Date numeric value is 13:
impact.uidataprovider.displayvalue.types=13
Multiple values are comma-separated:
impact.uidataprovider.displayvalue.types=13,14,...
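The display-value property format can be sketched as a small parser; the helper and its handling of escaped spaces are illustrative assumptions about the .properties syntax, not product code:

```python
def parse_display_values(lines):
    """Parse <Field>.displayvalue.<actual>=<display> properties into a
    {field: {actual_value: display_value}} mapping."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, display = line.split("=", 1)
        # Accept either capitalization, as both appear in examples.
        key = key.replace(".displayValue.", ".displayvalue.")
        if ".displayvalue." not in key:
            continue
        field, actual = key.split(".displayvalue.", 1)
        actual = actual.replace("\\ ", " ").strip()  # unescape "\ " in keys
        mapping.setdefault(field, {})[actual] = display.strip()
    return mapping

props = ["Location.displayvalue.NY=New York",
         "Location.displayvalue.New\\ Jersey =NJ"]
print(parse_display_values(props))
```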
Chapter 8. Working with the Netcool/Impact UI data provider
169
Translation example
To translate the column label that is called REVIEWED to Chinese as Mark as
Reviewed, you must complete the following steps:
1. Create or edit the translation file <UI Data Provider Config Base
Directory>/<translation directory>/DataType_MyDataType_zh_CN.properties
2. Add the property REVIEWED=\u6807\u8bb0\u4e3a\u5df2\u8bfb. This property is
displayed in Chinese and reads as "Mark as Reviewed".
3. Set the Chinese locale to be first in the locale list in the browser.
4. Log out and log back in.
5. The column label is displayed as "标记为已读" instead of REVIEWED.
Customizing tooltips to display field value descriptions
The UI data provider supplies a tooltip property to enable viewing of field values
when a mouse hovers over a column.
About this task
Complete the following procedure to customize the tooltip property and enable
viewing of field value descriptions when a mouse hovers over a field.
Procedure
1. Enable the tooltip property.
a. Enable the policy and data type for both non-locale and locale descriptions.
v For non-locale:
DataType_Status.properties
Policy_<policyname>_<output_parameter_name>.properties
v For a French locale:
DataType_Status_fr.properties
Policy_<policyname>_<output_parameter_name>_fr.properties
b. Create or edit the field name properties file IMPACT_HOME/uiproviderconfig/
properties/<DataType_Status>.properties.
c. Add the property <Field_Name>.tooltip.enabled=true. For example, for
Field Name: Severity you add Severity.tooltip.enabled=true.
2. Create or edit the translation properties file. For more information about
translation properties file, see “Column labels in the browser locale inside the
console” on page 168.
v For a data type, edit the translation properties file IMPACT_HOME/
uidataprovider/translation/DataType_<DataType_Name>_[optional
locale].properties
v For a policy, edit the translation properties file IMPACT_HOME/uidataprovider/
translation/Policy_<Policy_Name>_<output parameter name>_[optional
locale].properties
3. Within the translation properties file, for any field for which you want to add a
tooltip, add this property <FieldName>.tooltip.<value>=<tooltip text>. This
example applies a tooltip to the Severity field, for two locales:
Data Type: Status
Field Name: Severity
Create a file:
No Locale: IMPACT_HOME/uidataprovider/translation/DataType_Status.properties
SEVERITY.tooltip.Critical=This is a critical event.
SEVERITY.tooltip.Major=This is a major event.
SEVERITY.tooltip.Normal=This is a normal event.
SEVERITY.tooltip.Minor=This is a minor event.
SEVERITY.tooltip.Warning=This is a warning.
SEVERITY.tooltip.Intermediate=This is an intermediate event.
SEVERITY.tooltip.Unknown=The event value is unknown or incorrect.
With French Locale: IMPACT_HOME/uidataprovider/translation/
DataType_Status_fr.properties
SEVERITY.tooltip.Critique=Ceci est un événement critique.
SEVERITY.tooltip.Major=Ceci est un événement majeur.
SEVERITY.tooltip.Normale=Ceci est un événement normal.
SEVERITY.tooltip.Minor=Ceci est un événement mineur.
SEVERITY.tooltip.Warning=Ceci est un événement d'avertissement.
SEVERITY.tooltip.Intermediate=Ceci est un événement intermédiaire.
SEVERITY.tooltip.Unknown=C'est un événement inconnu ou incorrect.
When a user hovers on the Severity field that has a value of 5 and depending
on the locale in use, the tooltip displays either Critical or Critique.
For policies
Policies: Status
Field Name: Severity
Create a file:
No Locale: IMPACT_HOME/uidataprovider/translation/DataType_Status.properties
SEVERITY.tooltip.5=This is a critical event.
SEVERITY.tooltip.2=This is a normal event.
With French Locale: IMPACT_HOME/uidataprovider/translation/
DataType_Status_fr.properties
SEVERITY.tooltip.5=Ceci est un événement critique.
SEVERITY.tooltip.2=Ceci est un événement normal.
Results
Translation takes precedence over <fieldname>.tooltip.enabled=true.
However, if you add Severity.tooltip.enabled=true but do not add
<FieldName>.tooltip.<value>=<tooltip text>, then the tooltip displays the
<value>.
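The precedence described above can be sketched as a lookup; the function name and the simplified inputs are assumptions for illustration only:

```python
def resolve_tooltip(value, enabled, tooltips):
    """Mirror the documented precedence: a translated tooltip for the value
    wins; otherwise, if tooltips are enabled for the field, the raw value
    itself is shown; otherwise no tooltip."""
    if value in tooltips:
        return tooltips[value]
    return str(value) if enabled else None

tips = {"Critical": "This is a critical event."}
print(resolve_tooltip("Critical", True, tips))  # translated tooltip text
print(resolve_tooltip("Minor", True, tips))     # falls back to the value
```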
Accessing the Netcool/Impact UI data provider
You can use a URL to access the UI data provider data that is provided by Netcool/Impact.
Procedure
Use the following URL to access the Netcool/Impact UI data provider:
https://<hostname>:<port>/ibm/tivoli/rest/providers/
providername
where hostname is the machine where the GUI is running, and port is the HTTPS
port of the GUI; the default value is 16311.
Note:
The Netcool/Impact UI data provider registers the name Impact_NCICLUSTER by
default. If you registered another cluster name during installation, the UI data
provider registers this name as Impact_<clustername>.
Example
For example, you can use the following URL to access the UI data provider:
https://example.com:16311/ibm/tivoli/rest/providers/Impact_NCICLUSTER
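As a sketch, the URL above can be assembled with a small helper; the host, port, and function name are placeholders, not part of the product:

```python
def provider_url(hostname, port, cluster):
    """Assemble the UI data provider URL described above."""
    return ("https://%s:%d/ibm/tivoli/rest/providers/Impact_%s"
            % (hostname, port, cluster))

print(provider_url("example.com", 16311, "NCICLUSTER"))
```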
Running policies and accessing output parameters
You can use a URL to run a policy and to make the output parameters of that
policy such as variables, objects, or variables output by the GetByFilter function
available to the UI data provider.
Procedure
To run a policy and make the output parameters available to the UI data provider,
add executePolicy=true to the following URL:
https://<hostname>:<port>/ibm/tivoli/rest/providers/Impact_NCICLUSTER/
datasources/IMPACT_POLICY_<policyname>/datasets/
<policyname>_policy_variables/items?executePolicy=true
Example
You can use a URL to run a policy and to make the output parameters available to
the UI data provider. You create a policy called Test_Policy.
You add executePolicy=true to the following URL to run the Test_Policy policy
and make the output parameters available to the UI data provider:
https://example.com:16311/ibm/tivoli/rest/providers/
Impact_NCICLUSTER/datasources/IMPACT_POLICY_Test_Policy/
datasets/Test_Policy_policy_variables/items?executePolicy=true
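As an illustrative sketch, the run-and-fetch URL can be assembled programmatically; the helper name, host, and port are placeholders:

```python
def policy_items_url(hostname, port, cluster, policy, execute=True):
    """Build the dataset items URL for a policy's output parameters;
    executePolicy=true triggers a policy run, per the procedure above."""
    url = ("https://%s:%d/ibm/tivoli/rest/providers/Impact_%s"
           "/datasources/IMPACT_POLICY_%s/datasets/%s_policy_variables/items"
           % (hostname, port, cluster, policy, policy))
    return url + "?executePolicy=true" if execute else url

print(policy_items_url("example.com", 16311, "NCICLUSTER", "Test_Policy"))
```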
Mapping field types to the correct output parameters in UI
Data Provider
You must always map the field types to the correct output parameters to avoid
misconfiguration in the policy and problems in future releases. Numeric fields
must always be numeric, such as Integer, Float, or Double; String fields must
always be String, and so on.
If, for any reason, you want to use a numeric field as a string in a UI Data
Provider output parameter, the value must be cast to String in the policy.
Example
MyObject=NewObject();
MyObject.IntField=String(""+value);
Here, value is the actual integer value in the policy.
Customizing a topology widget layout
A topology widget can be configured to display a layout view other than the
default tree layout.
About this task
Complete the following procedure to set an alternative layout for a policy.
Procedure
1. Create a properties file for the topology widget using the following file naming
convention (unless a properties file already exists for the policy):
$IMPACT_HOME/uiproviderconfig/properties/Policy_<PolicyName>_<Output
Parameter name>.properties
For example, for a policy named TestTopology and an output parameter named
Result, the properties file would be:
$IMPACT_HOME/uiproviderconfig/properties/
Policy_TestTopology_Result.properties
2. Edit the file and add the following properties:
topology.layout=hierarchical
topology.layout.params={flowDirection: "bottom", globalLinkStyle: "orthogonal"}
In this example, the layout has been set to hierarchical. You can set
topology.layout to one of the following values:
v tree
v hierarchical
v grid
v forceDirected
v shortLink
v longLink
Each layout is configured by a set of parameters. The parameters are specified
by setting topology.layout.params to a comma-separated list of key:value
pairs. The following tables show the parameters that are available for each of
the topology.layout values.
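The params string shown in step 2 can be generated from a dictionary; this serializer is a convenience sketch, not a product API:

```python
def layout_params(params):
    """Serialize a dict into the topology.layout.params value format:
    comma-separated key:value pairs in braces, quoting string values."""
    body = ", ".join(
        '%s: "%s"' % (k, v) if isinstance(v, str) else "%s: %s" % (k, v)
        for k, v in params.items()
    )
    return "{" + body + "}"

print(layout_params({"flowDirection": "bottom", "globalLinkStyle": "orthogonal"}))
```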
Table 43. Parameters available with tree.
layoutMode: free, level, tip over, tip leaves over, tip roots over, tip roots and leaves over, radial, alternating radial
flowDirection: right, left, top, bottom
globalLinkStyle: straight, no reshape, orthogonal, mixed
connectorStyle: automatic, centered, evenly spaced
globalAlignment: center, border center, east, west, tip over, tip over west, tip over east west, tip over both sides, mixed
levelAlignment: center, north, south
aspectRatio: 1, 10
Table 44. Parameters available with hierarchical.
flowDirection: right, left, top, bottom
levelJustification: center, left, right, top, bottom
levelingStrategy: optimal, semi optimal, higher levels, lower levels, spread out
globalLinkStyle: polyline, orthogonal, straight, mixed, no reshape
connectorStyle: automatic, centered, evenly spaced
horizontalNodeOffset: 0, 100
horizontalLinkOffset: 0, 100
horizontalNodeLinkOffset: 0, 100
verticalNodeOffset: 0, 100
verticalLinkOffset: 0, 100
verticalNodeLinkOffset: 0, 100
Table 45. Parameters available with grid.
layoutMode: tile to grid fixed height, tile to grid fixed width, tile to columns, tile to rows
globalHorizontalAlignment: center, left, right
globalVerticalAlignment: center, top, bottom
maxNumberOfNodesPerRowOrColumn: 1, 10
compareMode: order by index, order alphabetically
Table 46. Parameters available with forceDirected.
layoutMode: fast multilevel, incremental, non incremental
preferredLinksLength: 25, 45
linkStyle: straight, no reshape
maxAllowedMaxPerIteration: 5, 25
allowedNumberOfIterations: 250, 1250
convergenceThreshold: 0.1, 3.5
Table 47. Parameters available with shortLink.
globalLinkStyle: direct, orthogonal, mixed, no reshape
globalSelfLinkStyle: two bends orthogonal, three bends orthogonal
globalConnectorStyle: automatic, fixed offset, evenly spaced, mixed
linkOffset: 0, 20
minFinalSegmentLength: 10, 100
flowDirection: right, left, top, bottom
levelJustification: center, left, right, top, bottom
levelingStrategy: optimal, semi optimal, higher levels, lower levels, spread out
connectorStyle: automatic, centered, evenly spaced
Table 48. Parameters available with longLink.
globalLinkStyle: direct, orthogonal, mixed, no reshape
verticalMinOffset: 1, 20
horizontalMinOffset: 1, 20
flowDirection: right, left, top, bottom
levelJustification: center, left, right, top, bottom
levelingStrategy: optimal, semi optimal, higher levels, lower levels, spread out
connectorStyle: automatic, centered, evenly spaced
Note:
Changes to the properties file do not appear in the widget immediately. You must
wait for the automatic refresh that Netcool/Impact performs in the backend, or
restart the Impact UI server.
Chapter 9. Working with OSLC for Netcool/Impact
You can use Open Services for Lifecycle Collaboration (OSLC) for Netcool/Impact
to integrate Netcool/Impact with other OSLC providers and clients.
Netcool/Impact also functions as a client of OSLC data. You can use these
capabilities to integrate Netcool/Impact with compatible products and data.
Netcool/Impact 7.1.0.5 contains an implementation of the Open Services for
Lifecycle Collaboration (OSLC) Core Specification version 2.0. For more
information about OSLC, see the OSLC Core Specification (http://open-services.net/bin/view/Main/OslcCoreSpecification).
Netcool/Impact does not support delegated UI dialogs or creation factories.
Netcool/Impact supports only the RDF/XML representation of OSLC and the
following aspects of the OSLC Core Specification v2:
v OSLC Service Provider
v OSLC Query Capability
v OSLC Resource Shape
v OSLC Resource
Usage Scenarios
Netcool/Impact is able to act as an OSLC provider and an OSLC client. You can
use Netcool/Impact as a generic OSLC adapter for other OSLC and non-OSLC
service providers.
Response Formats
Netcool/Impact uses the RDF/XML format for all OSLC responses, as required by
the OSLC Core Specification v2.
Important: When viewed in some web browsers, such as Mozilla Firefox, the raw
RDF/XML is automatically translated into the abbreviated RDF/XML format,
which omits blank nodes. For more information, see the RDF/XML Syntax
Specification (http://www.w3.org/TR/REC-rdf-syntax/#section-Syntax-blanknodes)
The raw RDF/XML and the abbreviated version are semantically identical. You can
use Internet Explorer, Mozilla Firefox, the Netcool/Impact GetHTTP function, or the
Linux curl utility to retrieve the raw RDF/XML.
Note: If you are working with the Netcool/Impact GUI in the Mozilla Firefox
browser and you simultaneously open a second instance to view an OSLC URL,
the system logs you out of the first instance. To prevent this problem, you must
create a second profile in Mozilla Firefox for viewing OSLC URLs. For more
information about how to do so, see the help section about profiles on the Mozilla
website (http://support.mozilla.org/en-US/kb/profile-manager-create-and-remove-firefox-profiles).
Jazz for Service Management
OSLC requires Netcool/Impact and Jazz for Service Management. You use the
installer that is provided with Jazz for Service Management to install it separately.
Before you can use OSLC, you must install the Registry Services component of Jazz
for Service Management. For more information, see http://pic.dhe.ibm.com/
infocenter/tivihelp/v3r1/topic/com.ibm.psc.doc_1.1.0/psc_ic-homepage.html.
Introducing OSLC
Before you use OSLC for Netcool/Impact, read this information about the specifics
of this implementation.
The following graphic outlines an example of a typical system architecture for
OSLC:
[Figure: Typical OSLC system architecture. A custom dashboard in IBM Dashboard Application Services Hub, running on the Jazz for Service Management server, gets the list of service providers from Registry Services, gets the hover preview URL from the Impact server, and displays operator views from the Netcool/Impact GUI server. The Impact server registers with Registry Services and connects to its data sources and policy library.]
The installation that is illustrated in the graphic shows how Netcool/Impact uses
the registry services from Jazz for Service Management to provide hover preview
support to Dashboard Application Services Hub (DASH). DASH retrieves the list
of service providers for a resource from the server where the Registry Services
component of Jazz for Service Management is installed. DASH connects to
the specified service provider in the backend of Netcool/Impact where the service
provider was created. The backend is connected to the data sources and policies
that provide data and generate the hover preview window information, including
the URL that is used to retrieve the hover preview content. The URL can point to
the frontend of Netcool/Impact, which uses the operator views to render the actual
content inside the hover preview window.
OSLC resources and identifiers
OSLC for Netcool/Impact follows the OSLC specifications with regard to OSLC
resources and identifiers. However, there are some important things to consider
before you start working with OSLC for Netcool/Impact.
Following the OSLC specification, the URIs generated by Netcool/Impact in OSLC
documents are opaque. The only valid URIs are those URIs that are discovered
through an OSLC service provider registry or from a previous OSLC document.
For example, if the URI for a resource that is called Person is http://<server>/
person, it cannot be assumed that http://<server>/dog is the correct URI for a
resource that is called Dog. This document provides URIs as examples, but these
do not imply functioning URIs on any particular system.
All Netcool/Impact URIs use the http:// or https:// scheme.
As URIs are opaque, it follows that the http://<server>/ resource and the
https://<server>/ resource are two different resources.
Although all URIs can use the http or https scheme, not all URIs can be resolved
as an HTTP resource.
Where possible, the term URI is used to indicate identifiers, which may or may not
resolve to a particular document, and URLs to refer to resources which resolve to a
document.
You can use HTTP or HTTPS authentication for security. If you use HTTP basic
authentication, the security credentials are available on the network as clear text.
HTTPS is the preferred method as the security credentials are not available as clear
text.
OSLC roles
You use the impactAdminUser and impactOSLCDataProviderUser roles to regulate
access to the OSLC service provider.
The impactAdminUser role is assigned to your Netcool/Impact administrator, who
is, in most cases, the GUI administrator.
v To add users to roles, use the script $NCHOME/install/security/mapRoles.sh
v To add users to the impactOSLCDataProviderUser role, use the command
./mapRoles -add -user username -roles "impactOSLCDataProviderUser"
v To add groups to the impactOSLCDataProviderUser role, use the command
./mapRoles -add -group groupname -roles "impactOSLCDataProviderUser"
Example
You use the following command to add the oslcuser user to the
impactOSLCDataProviderUser role for a Linux operating system:
$IMPACT_HOME/install/security/mapRoles.sh -add -user oslcuser -roles
"impactOSLCDataProviderUser"
Working with data types and OSLC
You can use the following information to integrate the Netcool/Impact OSLC
provider and data types.
You cannot use a display name that contains special characters with OSLC. You
must enter a display name that does not contain special characters. To edit the
display name:
1. Open Netcool/Impact and click Data Model to open the Data Model tab.
2. Click the data source that the data type belongs to.
3. Select the row that contains the display name that uses special characters and
click the Edit Current Row icon.
4. Replace the special characters in the display name and save your changes.
Accessing Netcool/Impact data types as OSLC resources
To allow Netcool/Impact to access a data type as an OSLC resource, you must add
the data type to the NCI_oslc.props file in <IMPACT_HOME>/etc/.
About this task
In this scenario, the OSLC resources that are returned by a query are
Netcool/Impact data items, which are rows from the underlying database.
Procedure
1. Add a property for each Netcool/Impact data type that you want to make
available as OSLC resources to the NCI_oslc.props file in <IMPACT_HOME>/etc/.
NCI is the default name of the Impact Server. You add the property in the
following format:
oslc.data.<pathcomponent>=<datatypename>
where <pathcomponent> is the path component of the URI that you want to use,
and <datatypename> is the name of the data type you want to use.
For example, if you add oslc.data.staff=Employees to the properties file,
you can use the following URL to access the Employees data type:
http://example.com:9080/NCICLUSTER_NCI_oslc/data/staff
where NCI is the default Impact Server name and NCICLUSTER is the
Netcool/Impact cluster name.
2. Restart the Impact Server.
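A minimal sketch of how such oslc.data.* entries might be read, assuming a simple line-oriented parse of NCI_oslc.props; the helper is illustrative only and skips the .uri and .namespaces variants, which configure custom URIs rather than path mappings:

```python
def oslc_data_paths(lines):
    """Collect oslc.data.<path>=<datatype> mappings from NCI_oslc.props
    content, ignoring comments and dotted variants such as
    oslc.data.<path>.uri."""
    paths = {}
    for line in lines:
        line = line.strip()
        if not line.startswith("oslc.data.") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        path = key[len("oslc.data."):]
        if "." in path:  # e.g. oslc.data.staff.uri, not a plain path mapping
            continue
        paths[path] = value
    return paths

props = ["oslc.data.staff=Employees"]
print(oslc_data_paths(props))
```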
Example
The following example shows how to create a data type for a DB2 table and how
to add the information to the NCI_oslc.props file.
In this example, the DB2 table has information for a table called People:
db2 => describe table People

Column name   Data type schema   Data type name   Length   Scale   Nulls
-----------   ----------------   --------------   ------   -----   -----
ID            SYSIBM             INTEGER               4       0   Yes
FIRST_NAME    SYSIBM             VARCHAR             255       0   Yes
LAST_NAME     SYSIBM             VARCHAR             255       0   Yes
COMPANY       SYSIBM             VARCHAR             255       0   Yes
BIRTHDAY      SYSIBM             DATE                  4       0   Yes
1. Create a Netcool/Impact data type which represents the information in the DB2
table and add the information as fields to the data type:
a. Click Data Model, to open the Data Model tab.
b. Select the data source for which you want to create a data type, right-click
the data source and click New Data Type.
c. In the Data Type Name field, give the data type a name, for example
Employees.
d. Select the Data Source Name from the list menu, in this example DB2.
e. Select the Enabled check box to activate the data type so that it is available
for use in policies.
f. Select the Base Table name from the list menu.
g. Click Refresh to add the fields from the DB2 example table to the data type.
h. Select at least one Key Field. Key fields are fields whose value or
combination of values can be used to identify unique data items in a data
type.
i. Click Save.
2. Specify the data type in the NCI_oslc.props file, for example
oslc.data.staff=Employees.
3. Restart the Impact Server.
Retrieving OSLC resources that represent Netcool/Impact data
items
You use OSLC resource collections to represent Netcool/Impact data items. You
use a URL to retrieve these resource collections.
Before you begin
Use only data types that conform with standard database best practices. The key
fields must be unique, non-NULL, and they must not change over time. If the key
values change, the OSLC URI also changes.
OSLC service providers are on the backend server. In a single-server environment
or a split environment, use port 9080, or secure port 9081.
Procedure
Use a URL like the following one to retrieve the OSLC resource collections that
represent the data items:
http://<server>:<port>/NCICLUSTER_NCI_oslc/data/<datatype>
where <datatype> is defined in the NCI_oslc.props file.
If your entry in the NCI_oslc.props file is in the format
oslc.data.<path>=<datatype_name>, then the URL to retrieve the OSLC resources
is in the following format: https://<server>:9081/NCICLUSTER_NCI_oslc/data/
<path>
Results
Netcool/Impact maps the rows in the database to OSLC resources and it maps
columns to OSLC resource properties. The URL for each data item uses the key
values in the form of HTTP matrix parameters to uniquely identify the data item.
The key values are defined in the Netcool/Impact data type configuration. For
example, a data item with multiple keys would result in a URI like this one:
http://<server>:<port>/NCICLUSTER_NCI_oslc/data/people/item;
<key1=value1>;<key2=value2>
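The matrix-parameter form can be sketched as follows; the base URL, path, and key names are placeholders:

```python
def item_uri(base, path, keys):
    """Append matrix parameters for the data type's key fields, as in the
    multi-key URI shown above. Keys are sorted for a stable result."""
    matrix = ";".join("%s=%s" % (k, v) for k, v in sorted(keys.items()))
    return "%s/data/%s/item;%s" % (base, path, matrix)

print(item_uri("http://example.com:9080/NCICLUSTER_NCI_oslc", "people",
               {"ID": 1}))
```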
Each non-NULL value in the database is represented as an RDF triple that consists
of the data item, the value, and the property that is derived from the column
name. NULL values are represented in OSLC by the absence of the property that is
derived from the column name.
Example
For example, you can use the following URL to access the employee data type that
is configured to use the people path component:
http://example.com:9080/NCICLUSTER_NCI_oslc/data/people/
The URL returns a collection of OSLC resources that are based on the rows from
the database table. The following example shows the results for two data items
that belong to the employee data type:
<?xml version="1.0"?>
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:people="http://jazz.net/ns/ism/event/impact#data/people/"
xmlns:impact="http://jazz.net/ns/ism/event/impact#/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:oslc="http://open-services.net/ns/core#">
<oslc:ResponseInfo rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/people">
<rdfs:member>
<people:people rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc
/data/people/item;ID=2">
<people:ID>2</people:ID>
<people:FIRST_NAME>George</people:FIRST_NAME>
<people:LAST_NAME>Friend</people:LAST_NAME>
</people:people>
</rdfs:member>
<rdfs:member>
<people:people rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc
/data/people/item;ID=1">
<people:FIRST_NAME>Michael</people:FIRST_NAME>
<people:LAST_NAME>Ryan</people:LAST_NAME>
<people:ID>1</people:ID>
</people:people>
</rdfs:member>
<oslc:totalCount>2</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
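Because the response is plain RDF/XML, a client can extract the members and the total count with standard XML tooling; this sketch uses a trimmed copy of the example response above:

```python
import xml.etree.ElementTree as ET

RDF = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:people="http://jazz.net/ns/ism/event/impact#data/people/"
         xmlns:oslc="http://open-services.net/ns/core#">
  <oslc:ResponseInfo>
    <rdfs:member>
      <people:people><people:ID>2</people:ID></people:people>
    </rdfs:member>
    <rdfs:member>
      <people:people><people:ID>1</people:ID></people:people>
    </rdfs:member>
    <oslc:totalCount>2</oslc:totalCount>
  </oslc:ResponseInfo>
</rdf:RDF>"""

ns = {"rdfs": "http://www.w3.org/2000/01/rdf-schema#",
      "oslc": "http://open-services.net/ns/core#",
      "people": "http://jazz.net/ns/ism/event/impact#data/people/"}
root = ET.fromstring(RDF)
# Collect the ID of each member resource, in document order.
ids = [m.findtext("people:people/people:ID", namespaces=ns)
       for m in root.findall(".//rdfs:member", ns)]
total = root.findtext(".//oslc:totalCount", namespaces=ns)
print(ids, total)
```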
Displaying results for unique key identifier
Each resource that is returned is assigned a unique key that identifies the resource
in the results and has certain information associated with it. You can use a URL to
display the information associated with a specific identifier.
Example
You use the following URL to display the information associated with a particular
resource key, in this case 1010:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/example/
myGetFilter/item;ID=1010
This URL returns the following results:
<rdf:RDF>
<examplePolicy:myGetFilter rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/
policy/example/myGetFilter/item;ID=1010">
<myGetFilter:NAME>Brian Doe</myGetFilter:NAME>
<myGetFilter:STARTED>1980-08-11</myGetFilter:STARTED>
<myGetFilter:MANAGER>1001</myGetFilter:MANAGER>
<myGetFilter:ID>1010</myGetFilter:ID>
<myGetFilter:DEPT>Documentation</myGetFilter:DEPT>
</examplePolicy:myGetFilter>
</rdf:RDF>
OSLC resource shapes for data types
The OSLC resource shape represents the structure of the SQL schema that the
Netcool/Impact data type is using. Netcool/Impact automatically produces an
OSLC resource shape for the specified data types, extracting the data from the
underlying database table.
Table 49. Mapping to OSLC Resource Shape properties
OSLC Resource Shape parameter   Maps to Netcool/Impact
dcterms:title                   Data type name
oslc:describes                  http://jazz.net/ns/ism/event/impact#/data/<pathcomponent>
Table 50. OSLC properties generated by the Netcool/Impact data type parameters
OSLC property             Netcool/Impact data type
oslc:readOnly             Always 'true'
oslc:valueType            For more information, see Table 51.
dcterms:title             Column display name
oslc:propertyDefinition   http://jazz.net/ns/ism/event/impact#/data/<pathcomponent>#<columnname>
oslc:occurs               oslc:ZeroOrOne
oslc:name                 Column name
dcterms:description       Column description
Table 51. OSLC value type mapping
Netcool/Impact column type   OSLC value type
String                       http://www.w3.org/2001/XMLSchema#string
Integer, Long                http://www.w3.org/2001/XMLSchema#integer
Date, Timestamp              http://www.w3.org/2001/XMLSchema#dateTime
Float                        http://www.w3.org/2001/XMLSchema#float
Double                       http://www.w3.org/2001/XMLSchema#double
Boolean                      http://www.w3.org/2001/XMLSchema#boolean
Anything else                http://www.w3.org/2001/XMLSchema#string
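The value type mapping in Table 51 can be expressed as a simple lookup with the documented fallback to xsd:string; the Clob example type below is an arbitrary illustration of "anything else":

```python
XSD = "http://www.w3.org/2001/XMLSchema#"

# Table 51 as a lookup table; anything not listed falls back to xsd:string.
VALUE_TYPES = {
    "String": XSD + "string",
    "Integer": XSD + "integer", "Long": XSD + "integer",
    "Date": XSD + "dateTime", "Timestamp": XSD + "dateTime",
    "Float": XSD + "float",
    "Double": XSD + "double",
    "Boolean": XSD + "boolean",
}

def oslc_value_type(column_type):
    return VALUE_TYPES.get(column_type, XSD + "string")

print(oslc_value_type("Long"))
print(oslc_value_type("Clob"))
```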
Viewing the OSLC resource shape for the data type
The OSLC resource shape for a data type is displayed in the oslc:resourceShapes
property.
Example
The following example contains the OSLC resource shape for the Employees data
type that was created for a DB2 table that is called People in the abbreviated RDF
format.
The resource URI is as follows:
http://<host>:9080/NCICLUSTER_NCI_oslc/data/resourceShapes/staff
The URI returns the following RDF:
<?xml version="1.0"?>
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:oslc="http://open-services.net/ns/core#">
<oslc:ResourceShape rdf:about=
"http://<host>:9080/NCICLUSTER_NCI_oslc/data/resourceShapes/staff">
<dcterms:title>Employees</dcterms:title>
<oslc:property>
<oslc:Property>
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource=
"http://www.w3.org/2001/XMLSchema#string"/>
<dcterms:title>LAST_NAME</dcterms:title>
<oslc:propertyDefinition rdf:resource=
"http://jazz.net/ns/ism/event/impact#/data/staff/LAST_NAME"/>
<oslc:occurs rdf:resource=
"http://open-services.net/ns/core#Exactly-one"/>
<oslc:name>LAST_NAME</oslc:name>
<dcterms:description>LAST_NAME</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property>
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource=
"http://www.w3.org/2001/XMLSchema#integer"/>
<dcterms:title>ID</dcterms:title>
<oslc:propertyDefinition rdf:resource=
"http://jazz.net/ns/ism/event/impact#/data/staff/ID"/>
<oslc:occurs rdf:resource=
"http://open-services.net/ns/core#Exactly-one"/>
<oslc:name>ID</oslc:name>
<dcterms:description>ID</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property>
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource=
"http://www.w3.org/2001/XMLSchema#string"/>
<dcterms:title>FIRST_NAME</dcterms:title>
<oslc:propertyDefinition rdf:resource=
"http://jazz.net/ns/ism/event/impact#/data/staff/FIRST_NAME"/>
<oslc:occurs rdf:resource=
"http://open-services.net/ns/core#Exactly-one"/>
<oslc:name>FIRST_NAME</oslc:name>
<dcterms:description>FIRST_NAME</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property>
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource=
"http://www.w3.org/2001/XMLSchema#string"/>
<dcterms:title>COMPANY</dcterms:title>
<oslc:propertyDefinition rdf:resource=
"http://jazz.net/ns/ism/event/impact#/data/staff/COMPANY"/>
<oslc:occurs rdf:resource=
"http://open-services.net/ns/core#Exactly-one"/>
<oslc:name>COMPANY</oslc:name>
<dcterms:description>COMPANY</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property>
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource=
"http://www.w3.org/2001/XMLSchema#dateTime"/>
<dcterms:title>BIRTHDAY</dcterms:title>
<oslc:propertyDefinition rdf:resource=
"http://jazz.net/ns/ism/event/impact#/data/staff/BIRTHDAY"/>
<oslc:occurs rdf:resource=
"http://open-services.net/ns/core#Exactly-one"/>
<oslc:name>BIRTHDAY</oslc:name>
<dcterms:description>BIRTHDAY</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:describes rdf:resource=
"http://jazz.net/ns/ism/event/impact#/data/staff"/>
</oslc:ResourceShape>
</rdf:RDF>
Configuring custom URIs for data types and user output
parameters
Netcool/Impact can act as a proxy to non-OSLC systems. To facilitate this function,
you can use Netcool/Impact to represent data from a database as OSLC resources.
About this task
You use customized URIs to represent the data type columns. You need to add
these customized URIs to the OSLC configuration file to facilitate the mapping.
Restriction:
All the namespace URIs that you specify must use the http scheme. You cannot use https.
Procedure
Add the following statement to the NCI_oslc.props file to specify a particular Type
URI for a data type:
oslc.data.<path>.uri=<uri>
Optionally, you can add the following statement to specify a column name:
oslc.data.<path>.<columnname>.uri=<uri>
You can also add the following statement to specify a particular prefix for a
namespace:
Chapter 9. Working with OSLC for Netcool/Impact
187
oslc.data.<path>.namespaces.<prefix>=<uri>
If you do not specify a prefix, the RDF that is returned automatically shows the
generated prefix for the namespace.
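The lookup order that these properties imply can be sketched as follows. This is a hypothetical helper, not product code; the default URI format follows the impact# URIs shown in the resource shape examples.

```python
def resolve_uri(props, path, column=None):
    """Resolve the Type URI (or a column's property URI) for a data type
    from NCI_oslc.props-style settings, falling back to a generated
    default in the impact# namespace."""
    if column is None:
        key = "oslc.data.%s.uri" % path
        default = "http://jazz.net/ns/ism/event/impact#/data/%s" % path
    else:
        key = "oslc.data.%s.%s.uri" % (path, column)
        default = "http://jazz.net/ns/ism/event/impact#/data/%s/%s" % (path, column)
    return props.get(key, default)

# Settings taken from the FOAF example that follows.
props = {
    "oslc.data.staff.uri": "http://xmlns.com/foaf/0.1/Person",
    "oslc.data.staff.NAME.uri": "http://xmlns.com/foaf/0.1/name",
}
```

A column with no configured URI, such as ID, falls back to the generated impact# URI for its path.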
Example
The following code example demonstrates how an employee table can be
represented in a friend of a friend (FOAF) specification by adding the following
statements to the NCI_oslc.props file:
oslc.data.staff=Employees
oslc.data.staff.uri=http://xmlns.com/foaf/0.1/Person
oslc.data.staff.NAME.uri=http://xmlns.com/foaf/0.1/name
oslc.data.staff.BIRTHDAY.uri=http://xmlns.com/foaf/0.1/birthday
oslc.data.staff.PHOTO.uri=http://xmlns.com/foaf/0.1/img
oslc.data.staff.STAFFPAGE.uri=http://xmlns.com/foaf/0.1/homepage
oslc.data.staff.EMAIL.uri=http://xmlns.com/foaf/0.1/mbox
When the user queries the OSLC resource http://example.com:9080/NCICLUSTER_NCI_oslc/data/staff/jdoe, the following RDF is returned.
Note: The example RDF is an approximation. Also, because the user did not specify a prefix for the namespace, the RDF shows the automatically generated prefix. In this example, the generated prefix is j.0 and the namespace is http://xmlns.com/foaf/0.1/.
<?xml version="1.0"?>
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:j.0="http://xmlns.com/foaf/0.1/"
xmlns:impact="http://jazz.net/ns/ism/events/impact#/"
xmlns:oslc="http://open-services.net/ns/core#">
<j.0:Person rdf:about=
"http://example.com:9080/NCICLUSTER_NCI_oslc/data/staff/jdoe">
<j.0:name>John Doe</j.0:name>
<j.0:homepage rdf:resource="http://example.com" />
<j.0:mbox rdf:resource="john.doe@example.com" />
<j.0:img rdf:resource="http://example.com/images/jdoe.jpg"/>
<j.0:birthday>19770801</j.0:birthday>
</j.0:Person>
</rdf:RDF>
The following code example demonstrates how to specify a particular prefix for a
namespace. First, you specify the prefix and the namespace:
oslc.data.staff.namespaces.foaf=http://xmlns.com/foaf/0.1/
When the user queries the OSLC resource http://example.com:9080/NCICLUSTER_NCI_oslc/data/staff/jdoe, the following RDF is returned.
Note: The example RDF is an approximation.
<?xml version="1.0"?>
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:impact="http://jazz.net/ns/ism/events/impact#/"
xmlns:oslc="http://open-services.net/ns/core#">
<foaf:Person rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/data/staff/jdoe">
<foaf:name>John Doe</foaf:name>
<foaf:homepage rdf:resource="http://example.com" />
<foaf:mbox rdf:resource="john.doe@example.com" />
<foaf:img rdf:resource="/images/jdoe.jpg" />
<foaf:birthday>19770801</foaf:birthday>
</foaf:Person>
</rdf:RDF>
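Conceptually, the proxy behavior maps each database column to its configured predicate URI when a row is serialized as RDF. The following Python sketch illustrates the idea; it is not the product implementation, and it assumes slash-terminated namespace URIs such as FOAF.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def row_to_rdf(resource_uri, row, column_uris, type_uri):
    """Serialize one table row as RDF/XML, typing the resource with the
    configured Type URI and using each column's configured predicate URI.
    Assumes slash-terminated namespace URIs such as FOAF."""
    t_ns, _, t_local = type_uri.rpartition("/")
    root = ET.Element("{%s}RDF" % RDF)
    subject = ET.SubElement(root, "{%s/}%s" % (t_ns, t_local))
    subject.set("{%s}about" % RDF, resource_uri)
    for column, value in row.items():
        p_ns, _, p_local = column_uris[column].rpartition("/")
        ET.SubElement(subject, "{%s/}%s" % (p_ns, p_local)).text = value
    return ET.tostring(root, encoding="unicode")

out = row_to_rdf(
    "http://example.com:9080/NCICLUSTER_NCI_oslc/data/staff/jdoe",
    {"NAME": "John Doe"},
    {"NAME": "http://xmlns.com/foaf/0.1/name"},
    "http://xmlns.com/foaf/0.1/Person",
)
```

As in the product, when no prefix is registered the serializer generates one (ns0 here, j.0 in the example RDF shown earlier).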
Working with the OSLC service provider
To allow other applications that are OSLC consumers to use OSLC data from
Netcool/Impact, you must complete the following tasks:
1. Create the OSLC service provider. See “Creating OSLC service providers in
Netcool/Impact”
2. Register the OSLC service provider with the Registry Service provided by Jazz
for Service Management. See “Registering OSLC service providers with
Netcool/Impact” on page 191
3. Register the OSLC resources with the OSLC service provider. See “Registering
OSLC resources” on page 193
The Registry Service is an integration service that is part of the Jazz for Service
Management product. The Registry Service contains two directories, the Provider
registry and the Resource registry. As part of the implementation of OSLC for
Netcool/Impact, you must register the OSLC service provider and resources with
the Resource registry.
For more information about the Registry Service and Jazz for Service Management,
see http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/
com.ibm.psc.doc_1.1.0/psc_ic-homepage.html.
If you are registering the OSLC resources with the Registry Service provided by
Jazz for Service Management, you need to use resources and RDF models that
match the specifications that are defined in the common resource type vocabulary
(CRTV). For more information, see the section about the common resource type
vocabulary in the Registry Services guide (https://www.ibm.com/
developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/
W8b1151be2b42_4819_998e_f7de7db7bfa2/page/Milestone%20documentation).
The examples in this documentation use the namespace crtv. To integrate the
Netcool/Impact OSLC service provider with the Registry Service provided by Jazz
for Service Management, you must use the crtv namespace. If you do not want to
integrate the OSLC service provider with the Registry Service provided by Jazz for
Service Management, you must change the namespace. For more information about
how to define a custom namespace, see “Configuring custom URIs for data types
and user output parameters” on page 187.
Creating OSLC service providers in Netcool/Impact
Before you can use Netcool/Impact to register OSLC resources with the registry
service provided by Jazz for Service Management, you must create an OSLC
service provider in Netcool/Impact. To create an OSLC service provider, update
the NCI_oslc.props configuration file.
About this task
The service provider definition is based on the OSLC resource collection that it is
associated with. An OSLC resource collection can share a single provider or it can
use multiple providers.
While you can use RDF policy functions to manually create the service provider,
generally you use Netcool/Impact to generate a service provider automatically.
Procedure
1. To define a service provider, add the following statements to the NCI_oslc.props
configuration file:
oslc.<type>.<path>.provider=<provider_name>
oslc.provider.<provider_name>.title=<title>
oslc.provider.<provider_name>.description=<description>
For example:
oslc.data.computer=RESERVATION
oslc.data.computer.provider=provider01
...
oslc.provider.provider01.title=Customer-x Product-y OSLC Service Provider
oslc.provider.provider01.description=Customer-x Product-y OSLC Service Provider
2. OSLC resources can share an OSLC service or they can use different OSLC
services, as controlled by the specified domain name. To specify a domain
and a title for a resource, add the following statements to the NCI_oslc.props
configuration file:
oslc.<type>.<path>.provider.domain=<domain_URI>
oslc.<type>.<path>.provider.title=<title>
For example:
oslc.data.computer=RESERVATION
...
oslc.data.computer.provider=provider01
oslc.data.computer.provider.domain=http://domainx/
oslc.data.computer.provider.title=Computer Title
If you specify the same service provider and domain name for two OSLC
resources, both resources share a single OSLC service. If two resources use the
same service provider but have different domains, the resources use different
OSLC services. If no domain is specified, then the system uses the default
Netcool/Impact namespace URI for this path.
3. Restart the Impact Server to implement the changes.
4. Use this URL to view the service providers:
https://<server>:9081/NCICLUSTER_NCI_oslc/provider
The results are returned as an RDF. For example:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:oslc="http://open-services.net/ns/core#">
<rdf:Description rdf:about="https://<server>:9081/NCICLUSTER_NCI_oslc/
provider">
<rdfs:member>
<oslc:ServiceProvider rdf:about="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/provider/provider02">
<oslc:service>
<oslc:Service>
<oslc:queryCapability>
<oslc:QueryCapability>
<dcterms:title>Query Capability - http://policy.js/xmlns/
directSQL</dcterms:title>
<oslc:resourceType rdf:resource="http://policy.js/xmlns/
directSQL"/>
<oslc:resourceShape rdf:resource="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/policy/resourceShapes/testBry/myDirectSQL1"/>
<oslc:queryBase rdf:resource="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/policy/testBry/myDirectSQL1"/>
</oslc:QueryCapability>
</oslc:queryCapability>
<oslc:domain rdf:resource="http://domainy/"/>
</oslc:Service>
</oslc:service>
</oslc:ServiceProvider>
</rdfs:member>
<rdfs:member>
<oslc:ServiceProvider rdf:about="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/provider/provider01">
<dcterms:title>Customer-x Product-y OSLC Service Provider
</dcterms:title>
<dcterms:description>Customer-x Product-y OSLC Service Provider
</dcterms:description>
<oslc:service>
<oslc:Service>
<oslc:queryCapability>
<oslc:QueryCapability>
<dcterms:title>Query Capability - http://jazz.net/ns/ism/events/
impact/data/managers</dcterms:title>
<oslc:resourceType rdf:resource="http://jazz.net/ns/ism/
event/impact/data/managers"/>
<oslc:resourceShape rdf:resource="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/data/resourceShapes/managers"/>
<oslc:queryBase rdf:resource="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/data/managers"/>
</oslc:QueryCapability>
</oslc:queryCapability>
<oslc:queryCapability>
<oslc:QueryCapability>
<dcterms:title>Managers Title</dcterms:title>
<oslc:resourceType rdf:resource="http://open-services.net/ns/
crtv#ComputerSystem"/>
<oslc:resourceShape rdf:resource="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/data/resourceShapes/computer"/>
<oslc:queryBase rdf:resource="https://<server ip>:9081/
NCICLUSTER_NCI_oslc/data/computer"/>
</oslc:QueryCapability>
</oslc:queryCapability>
<oslc:domain rdf:resource="http://domainx/"/>
</oslc:Service>
</oslc:service>
</oslc:ServiceProvider>
</rdfs:member>
</rdf:Description>
</rdf:RDF>
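The sharing rules in step 2 reduce to a simple keying scheme: resources that name the same provider and the same domain share one OSLC service. A hypothetical sketch:

```python
def group_services(resources):
    """Group resource paths into OSLC services keyed by (provider, domain).
    Resources that share both a provider and a domain share one service;
    the same provider with different domains yields different services."""
    services = {}
    for path, provider, domain in resources:
        services.setdefault((provider, domain), []).append(path)
    return services
```

For example, computer and managers under provider01 with the same domain share one service, while a resource with a different domain gets its own.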
Registering OSLC service providers with Netcool/Impact
To specify the registry server information in the NCI_oslc.props file, add the OSLC
registry server property to the NCI_oslc.props file.
Before you begin
Before you can specify the registry server information in the NCI_oslc.props file,
you must create a service provider. See “Creating OSLC service providers in
Netcool/Impact” on page 189.
Procedure
1. Specify the registry server, user name, and password. If the registry server does
not require a user name and password, you do not need to specify them.
To specify the registry server, add the following statement to the
NCI_oslc.props file:
impact.oslc.registry.server=<RegistryserverproviderregistryURL>
where <RegistryserverproviderregistryURL> is the registry server provider's registry
URL, for example http://<registryserver>:<port>/oslc/pr.
To specify a registry server user, add the following statement to the
NCI_oslc.props file:
impact.oslc.registry.username=<OSLCproviderregistryserverusername>
To specify the registry server password, add the following statement to the
NCI_oslc.props file:
impact.oslc.registry.password=<OSLCproviderregistryserverpassword>
where <OSLCproviderregistryserverpassword> is the password for the OSLC
provider registry server in encrypted form.
To generate the encrypted form of the password, run the nci_crypt program in
the impact/bin directory. For example:
nci_crypt password
{aes}DE865CEE122E844A2823266AB339E91D
In this example, use the entire output string,
{aes}DE865CEE122E844A2823266AB339E91D, including the {aes} prefix, as the password value.
2. Restart Netcool/Impact to register the service providers. After the restart,
Netcool/Impact registers the service providers in the service registry. If the
service provider has been registered successfully, the resources that belong to
the service provider contain a new property, oslc:serviceProvider. The
oslc:serviceProvider property is displayed when you navigate to the URI that
contains the resources associated with the provider.
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:RESERVATION="http://jazz.net/ns/ism/event/impact#/data/computer/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:crtv="http://open-services.net/ns/crtv#"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#">
<crtv:ComputerSystem rdf:about="http://<impact-server>:9080/NCICLUSTER_NCI_oslc/
data/computer/item;ID=4">
<crtv:serialNumber>IBM00003SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://<registry-server>:16310/oslc/
providers/6015"/>
<RESERVATION:RESERVED_DATE>2012-07-16</RESERVATION:RESERVED_DATE>
<RESERVATION:RESERVED_BY>Michael Morton</RESERVATION:RESERVED_BY>
<RESERVATION:RELEASE_DATE>2013-03-06</RESERVATION:RELEASE_DATE>
<RESERVATION:ID>4</RESERVATION:ID>
</crtv:ComputerSystem>
</rdf:RDF>
3. Register the resource with the registry server. Netcool/Impact does not
automatically register resources. See “Registering OSLC resources.”
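The registry settings above can be checked mechanically before you restart. The following sketch is a hypothetical validation helper (the property names are as documented; the {aes} check only confirms that an encrypted value, rather than cleartext, was pasted in):

```python
def check_registry_settings(props):
    """Return a list of problems found in the registry settings of an
    NCI_oslc.props-style dictionary."""
    problems = []
    if not props.get("impact.oslc.registry.server"):
        problems.append("impact.oslc.registry.server is not set")
    # The user name and password are optional, but a password that is
    # present must be the encrypted form produced by nci_crypt, which
    # starts with {aes}.
    password = props.get("impact.oslc.registry.password")
    if password and not password.startswith("{aes}"):
        problems.append("impact.oslc.registry.password is not encrypted")
    return problems
```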
Registering OSLC resources
Netcool/Impact does not automatically register OSLC resources with the services
registry.
About this task
If you are registering the OSLC resources with the Registry Service provided by
Jazz for Service Management, you need to use resources and RDF models that
match the specifications of the common resource type vocabulary (CRTV). For
more information, see the section about the common resource type vocabulary in
the Registry Services guide (http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/
topic/com.ibm.psc.doc_1.1.0/psc_ic-homepage.html).
If you want to view the resource record for OSLC resources that you register with
the Registry Service provided by Jazz for Service Management, you must include
the crtv namespace in the URL.
Procedure
To register OSLC resources, you can use one of the following two methods:
v Use the RDFRegister policy function in a policy to register the resource. For
more information, see “RDFRegister” on page 198.
v Use the GetHTTP policy function to perform an HTTP POST function on the
resource or list of resource members. You must define the Method parameter as
POST. You can also use the OSLC query syntax to limit the properties that are
registered as part of the resource. For more information on the GetHTTP policy
function, see the GetHTTP function in the Policy Reference Guide.
Results
After you run the policy that contains the policy function, Netcool/Impact tries to
register the resource or list of resource members included in the policy function.
Netcool/Impact also returns the response status, Location header, and body text
from the registry server to the client. The Location header and the body content
of the response specify the location of the registration record for each resource
that was registered.
If a single resource is registered successfully, the system displays a 201 status
code (Created) message. If multiple resources are registered successfully, the
system displays a 200 status code (OK) message.
When you register multiple resources, Netcool/Impact also returns the following
headers and the response body text from the registry server to the client:
v NextPage: If a next page of resources exists, the header contains the location URI
of the next set of resources. If no next page exists, the response does not contain
this header.
v TotalCount: The total number of resources across all pages. This header is
returned when you register multiple resources.
The successful registration of an OSLC resource results in two records. A
registration record is created in the resource registry. A resource record is also
created and this record is available through the resource URI.
To view the registration records for the resource registry that is used by the
Registry Service, add /rr/registration/collection to the URI. For example:
http://example.com:16310/oslc/rr/registration/collection
To view the registered resources for a service provider, such as the Registry
Service, add /rr/collection to the Registry Service URL. For example:
http://example.com:16310/oslc/rr/collection?oslc.select=*
If the same resource is registered twice because it belongs to two different
service providers, two registration records are created, but only a single
resource record is created and it is available through a single resource URI.
If you are integrating OSLC with the Registry Service and the OSLC resources are
not displayed in this collection, check that the resources used match the modeling
guidelines and use the common resource type vocabulary (CRTV). Also check that
the resource URL contains the crtv namespace.
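When you register a collection that spans multiple pages, a client is expected to follow the NextPage header until it is absent. The control flow can be sketched as follows, with the HTTP call injected as a callable so that no network access is assumed (the product's GetHTTP function plays the role of post_page):

```python
def register_all_pages(start_uri, post_page):
    """POST each page of a resource collection, following the NextPage
    response header until no further page exists. post_page(uri) must
    perform the POST and return the response headers as a dict."""
    pages_posted = 0
    uri = start_uri
    while uri:
        headers = post_page(uri)
        pages_posted += 1
        uri = headers.get("NextPage")  # absent on the last page
    return pages_posted
```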
Single resource example
For example, consider the resource that is located at the following URL:
http://<Impactserver>:9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=4
This returns the following RDF:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:RESERVATION="http://jazz.net/ns/ism/event/impact/
data/computer/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:crtv="http://open-services.net/ns/crtv#"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#">
<crtv:ComputerSystem rdf:about="http://<Impactserver>:9080/
NCICLUSTER_NCI_oslc/data/computer/item;ID=4">
<crtv:serialNumber>IBM00003SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://
<registryserver>:9080/oslc/providers/6015"/>
<RESERVATION:RESERVED_DATE>2012-07-16</RESERVATION:RESERVED_DATE>
<RESERVATION:RESERVED_BY>Michael Morton</RESERVATION:RESERVED_BY>
<RESERVATION:RELEASE_DATE>2013-03-06</RESERVATION:RELEASE_DATE>
<RESERVATION:ID>4</RESERVATION:ID>
</crtv:ComputerSystem>
</rdf:RDF>
Use the query syntax in the URL to limit the properties to crtv:serialNumber,
crtv:model, crtv:manufacturer, and oslc:serviceProvider:
http://<Impactserver>:9080/NCICLUSTER_NCI_oslc/data/computer/
item;ID=4?oslc.properties=crtv:serialNumber,oslc:serviceProvider,
crtv:manufacturer,crtv:model
This URL returns the following RDF:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:RESERVATION="http://jazz.net/ns/ism/event/impact/data/
computer/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:crtv="http://open-services.net/ns/crtv#"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#">
<crtv:ComputerSystem rdf:about="http://<impact-server>:9080/
NCICLUSTER_NCI_oslc/data/computer/item;ID=4">
<crtv:serialNumber>IBM00003SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://<registryserver>:9080/
oslc/providers/6015"/>
</crtv:ComputerSystem>
</rdf:RDF>
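Building a property-limited URL of this kind can be sketched with standard URL encoding. This is an illustrative helper, not part of the product:

```python
from urllib.parse import urlencode

def limit_properties(resource_uri, properties):
    """Append an oslc.properties query that limits the returned RDF to the
    named (prefixed) properties; ':' and ',' stay literal in the query."""
    return resource_uri + "?" + urlencode(
        {"oslc.properties": ",".join(properties)}, safe=":,")

url = limit_properties(
    "http://impact:9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=4",
    ["crtv:serialNumber", "oslc:serviceProvider", "crtv:manufacturer", "crtv:model"])
```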
Use the following policy to perform a POST function on the URI of the resource.
The POST function registers the resource with the resource registry associated with
the serviceProvider property that is defined in the resource.
Log("SCR_RegisterSystems: Entering policy");
HTTPHost="impactserver";
HTTPPort=9080;
Protocol="http";
Path="/NCICLUSTER_NCI_oslc/data/computer/item;ID=4?oslc.properties
=crtv:serialNumber,
oslc:serviceProvider,crtv:manufacturer,crtv:model";
ChannelKey="tom";
Method="GET"; //Retrieves the Systems
Method="POST"; //Registers the systems
AuthHandlerActionTreeName="";
FilesToSend=newobject();
HeadersToSend=newobject();
HttpProperties=newobject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="password";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName,
null, FilesToSend, HeadersToSend, HttpProperties);
Log(CurrentContext());
Log("SCR_RegisterSystems: HTTP Response: " + x);
After the policy runs and the resource is registered, the location of the registration
record on the Registry Services server is detailed in the Location header.
Registering multiple resources
You can also use Netcool/Impact to register multiple resources in the resource
registry.
Example
The following URL contains a set of resource members that are to be registered:
http://<Impactserver>:9080/NCICLUSTER_NCI_oslc/data/computer/
This URL returns the following RDF:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:RESERVATION="http://jazz.net/ns/ism/event/impact/data/computer/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:crtv="http://open-services.net/ns/crtv#"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#">
<rdf:Description rdf:about="http://<impact-server>:9080/
NCICLUSTER_NCI_oslc/data/computer/">
<rdfs:member>
<crtv:ComputerSystem rdf:about="http://<Impactserver>:
9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=4">
<crtv:serialNumber>IBM00003SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://<registry-server>:9080/
oslc/providers/6015"/>
<RESERVATION:RESERVED_DATE>2012-07-16</RESERVATION:RESERVED_DATE>
<RESERVATION:RESERVED_BY>Michael Morton</RESERVATION:RESERVED_BY>
<RESERVATION:RELEASE_DATE>2013-03-06</RESERVATION:RELEASE_DATE>
<RESERVATION:ID>4</RESERVATION:ID>
</crtv:ComputerSystem>
</rdfs:member>
<rdfs:member>
<crtv:ComputerSystem rdf:about="http://<Impactserver>
:9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=3">
<crtv:serialNumber>IBM00002SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://
<registryserver>:9080/oslc/providers/6015"/>
<RESERVATION:RESERVED_DATE>2011-02-20</RESERVATION:RESERVED_DATE>
<RESERVATION:RESERVED_BY>Sandra Burton</RESERVATION:RESERVED_BY>
<RESERVATION:RELEASE_DATE>2013-01-30</RESERVATION:RELEASE_DATE>
<RESERVATION:ID>3</RESERVATION:ID>
</crtv:ComputerSystem>
</rdfs:member>
<rdfs:member>
<crtv:ComputerSystem rdf:about="http://<impact-server>:9080/
NCICLUSTER_NCI_oslc/data/computer/item;ID=0">
<crtv:serialNumber>IBM00001SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://
<registryserver>:9080/oslc/providers/6015"/>
<RESERVATION:RESERVED_DATE>2012-08-11</RESERVATION:RESERVED_DATE>
<RESERVATION:RESERVED_BY>John Lewis</RESERVATION:RESERVED_BY>
<RESERVATION:RELEASE_DATE>2013-04-12</RESERVATION:RELEASE_DATE>
<RESERVATION:ID>0</RESERVATION:ID>
</crtv:ComputerSystem>
</rdfs:member>
</rdf:Description>
<oslc:ResponseInfo rdf:about="http://<impact-server>:9080/
NCICLUSTER_NCI_oslc/data/computer/?oslc.paging=true&amp;oslc.pageSize=100">
<oslc:totalCount>3</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
Because this URL returns a list of resource members, you can use the oslc.select
query parameter to limit the properties of each resource member:
http://<Impactserver>:9080/NCICLUSTER_NCI_oslc/data/
computer?oslc.select=crtv:serialNumber,crtv:manufacturer,crtv:model,
oslc:serviceProvider
The URL returns the following RDF:
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:RESERVATION="http://jazz.net/ns/ism/event/impact/data/computer/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:crtv="http://open-services.net/ns/crtv#"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#">
<oslc:ResponseInfo rdf:about="http://<Impactserver>:9080/
NCICLUSTER_NCI_oslc/data/computer?oslc.select=crtv:serialNumber,
crtv:manufacturer,crtv:model,oslc:serviceProvider&amp;
oslc.paging=true&amp;oslc.pageSize=100">
<oslc:totalCount>3</oslc:totalCount>
</oslc:ResponseInfo>
<rdf:Description rdf:about="http://<Impactserver>:9080/
NCICLUSTER_NCI_oslc/data/computer?oslc.select=crtv:serialNumber,
crtv:manufacturer,crtv:model,oslc:serviceProvider">
<rdfs:member>
<crtv:ComputerSystem rdf:about="http://<Impactserver>:
9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=4">
<crtv:serialNumber>IBM00003SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://<registry-server>:
9080/oslc/providers/6015"/>
</crtv:ComputerSystem>
</rdfs:member>
<rdfs:member>
<crtv:ComputerSystem rdf:about="http://<Impactserver>:
9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=3">
<crtv:serialNumber>IBM00002SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://<registryserver>:
9080/oslc/providers/6015"/>
</crtv:ComputerSystem>
</rdfs:member>
<rdfs:member>
<crtv:ComputerSystem rdf:about="http://<Impactserver>:
9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=0">
<crtv:serialNumber>IBM00001SN</crtv:serialNumber>
<crtv:model>IBM Model01</crtv:model>
<crtv:manufacturer>IBM Manufacturer01</crtv:manufacturer>
<oslc:serviceProvider rdf:resource="http://<registry-server>:
9080/oslc/providers/6015"/>
</crtv:ComputerSystem>
</rdfs:member>
</rdf:Description>
</rdf:RDF>
Use the following policy to perform a POST function on the URI of the resources.
The POST function registers the resources with the resource registry associated
with the serviceProvider property that is defined in the resource.
Log("SCR_RegisterSystems: Entering policy");
HTTPHost="impactserver";
HTTPPort=9080;
Protocol="http";
Path="/NCICLUSTER_NCI_oslc/data/computer?oslc.paging
=true&oslc.pageSize=100";
ChannelKey="tom";
Method="GET"; //Retrieves the Systems
Method="POST"; //Registers the Systems
AuthHandlerActionTreeName="";
FilesToSend=newobject();
HeadersToSend=newobject();
HttpProperties=newobject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="password";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName, null,
FilesToSend, HeadersToSend, HttpProperties);
Log(CurrentContext());
Log("SCR_RegisterSystems: HTTP Response: " + x);
If the resources are registered successfully, the system displays a message to
confirm. Netcool/Impact also returns the header, body text, and other information
that is contained in the response from the registry server to the client. The header
and body text specify the location of each registration record for each resource that
was registered.
RDFRegister
You can use the RDFRegister function to help you to register service providers or
OSLC resources with the registry server.
Before you can register a service provider or resource, you must use the other RDF
policy functions to build an RDF model that meets the OSLC and Registry Services
requirements.
After you build the RDF model, use the RDFRegister function to register the RDF
with the resource registry contained in the Registry Services integration service.
If the service provider or OSLC resource is registered successfully, the RDFRegister
function returns the resource location of the registration record. The following
variables and their return values are also returned to provide more information:
v ResultCode contains the result code for the response.
v HeadersReceived contains the headers received in the response.
v HeadersSent contains the headers that were sent in the request.
v ResponseBody contains the response body text.
If the query parameters are set in the URL and you use the RDFRegister policy
function to register a service provider, you must manually add the location of the
service provider to the policy. For example:
RDFStatement(newModel, manu[0].subject,
"http://open-services.net/ns/core#serviceProvider", serviceProviderURL, true);
If you use the query string inside the path, you must also ensure that the
FormParameters parameter is set to null. For example:
FormParameters=null;
Finally, you must ensure that the policy contains pagination information. For
example:
Path="/NCICLUSTER_NCI_oslc/data/mysql1?oslc.paging=true&oslc.pageSize=100";
If the registration is unsuccessful, the returned resource location is null.
Error code information is returned in the ErrorReason and ResultCode variables.
Syntax
The RDFRegister function has the following syntax:
[String =] RDFRegister(URI, Username, Password, Model)
where Username can be a null or void string to specify that no authentication is
required.
Parameters
The RDFRegister function has the following parameters:
Table 52. RDFRegister function parameters

Parameter   Type     Description
URI         String   Registry Services server creation factory URI
Username    String   User name for the Registry Services server
Password    String   Password for the Registry Services server
Model       Model    Model that contains the RDF
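Behind this function, a registration is essentially an HTTP POST of the serialized RDF model to the creation factory URI, with basic authentication when a user name is supplied. A hypothetical sketch of building such a request (not the product implementation):

```python
import base64
from urllib.request import Request

def build_register_request(uri, username, password, rdf_xml):
    """Build the POST request that registers an RDF model with a
    Registry Services creation factory URI."""
    headers = {"Content-Type": "application/rdf+xml"}
    if username:  # a null or empty user name means no authentication
        token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
        headers["Authorization"] = "Basic " + token
    return Request(uri, data=rdf_xml.encode(), headers=headers, method="POST")
```

On success, the Location header of the response carries the registration record, which is what RDFRegister returns.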
Example
The following example manually registers a service provider and a set of resources
that are exposed by the OSLC service provider in Netcool/Impact.
The Registry Services server information is as follows:
RegistryServerProviderCFUri="http://<registry_services_server>:
16310/oslc/pr/collection";
RegistryServerResourceCFUri="http://<registry_services_server>:
16310/oslc/rr/registration/collection";
RegistryServerUsername="system";
RegistryServerPassword="manager";
The Netcool/Impact server information is as follows:
HTTPHost="<impact_server>";
HTTPPort=9080;
Protocol="http";
Path1="/NCICLUSTER_NCI_oslc/provider/provider01";
Path2="/NCICLUSTER_NCI_oslc/data/computer";
ChannelKey="";
Method="GET";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
HttpProperties.AuthenticationScheme="basic";
Get the service provider RDF from Netcool/Impact:
serviceProviderResponse=GetHTTP(HTTPHost,HTTPPort, Protocol, Path1,
ChannelKey,Method, AuthHandlerActionTreeName, null, FilesToSend,
HeadersToSend,HttpProperties);
Create an RDF model that is based on the service provider response:
serviceProviderModel=RDFParse(serviceProviderResponse);
Register the service provider in the provider registry:
serviceProviderURL = RDFRegister(RegistryServerProviderCFUri,
RegistryServerUsername, RegistryServerPassword,serviceProviderModel);
log("Provider Registry-Service Provider URL: " + serviceProviderURL);
Get all the computer system resources from Netcool/Impact:
allResources=GetHTTP(HTTPHost,HTTPPort, Protocol, Path2, ChannelKey,
Method,AuthHandlerActionTreeName, null, FilesToSend, HeadersToSend,
HttpProperties);
Create an RDF model that is based on the resource response:
allResourceModel=RDFParse(allResources);
Register each computer system and a set of properties with the resource registry:
statements=RDFSelect(allResourceModel, null,
"http://jazz.net/ns/ism/event/impact#data/computer/ID", null);
size=Length(statements);
count=0;
while(count<size) {
Path3=statements[count].subject;
//Get the individual computer system resource
resourceResponse=GetHTTP(HTTPHost,HTTPPort, Protocol, Path3, ChannelKey,
Method,AuthHandlerActionTreeName, null, FilesToSend, HeadersToSend,
HttpProperties);
resourceModel=RDFParse(resourceResponse);
//Create a model that contains the properties and data that you want to register
newModel=RDFModel();
manu=RDFSelect(resourceModel, null,
"http://open-services.net/ns/crtv#manufacturer",null);
model=RDFSelect(resourceModel, null,
"http://open-services.net/ns/crtv#model", null);
serial=RDFSelect(resourceModel, null,
"http://open-services.net/ns/crtv#serialNumber", null);
RDFModelUpdateNS(newModel, "crtv", "http://open-services.net/ns/crtv#");
RDFModelUpdateNS(newModel, "oslc","http://open-services.net/ns/core#");
RDFStatement(newModel, manu[0].subject,
"http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
"http://open-services.net/ns/crtv#ComputerSystem", true);
RDFStatement(newModel, manu[0].subject, manu[0].predicate, manu[0].object,
RDFNodeIsResource(manu[0].object));
RDFStatement(newModel, manu[0].subject, model[0].predicate, model[0].object,
RDFNodeIsResource(model[0].object));
RDFStatement(newModel, manu[0].subject, serial[0].predicate,
serial[0].object, RDFNodeIsResource(serial[0].object));
//Update the model with the service provider location
RDFStatement(newModel, manu[0].subject,
"http://open-services.net/ns/core#serviceProvider", serviceProviderURL, true);
//Register the resource in the resource registry
resourceURL = RDFRegister(RegistryServerResourceCFUri,
RegistryServerUsername, RegistryServerPassword, newModel);
log("Resource Registry-Resource URL: " + resourceURL);
count=count+1;
}
RDFUnRegister
To remove the registration record of a service provider or resource from the
registry server, use the RDFUnRegister function to supply the location of the
registration record, the Registry Services server username and password, and the
registration record that you want to remove.
Before you can remove the registration record of a service provider, you must
remove all the registration records for the associated OSLC resources.
If successful, the RDFUnRegister function returns the message code 204 and the
value true. The following variables are also returned to provide additional
information:
v ResultCode contains the result code for the response.
v HeadersReceived contains the headers received in the response.
v HeadersSent contains the headers that were sent with the request.
v ResponseBody contains the response body text.
If unsuccessful, the RDFUnRegister function returns the value false. Error code
information is returned in the ErrorReason and ResultCode variables.
Syntax
The RDFUnRegister function has the following parameters:
[ String =] RDFUnRegister(URI, Username , Password)
where Username can be a null or void string to specify that no authentication is
required.
Parameters
Table 53. RDFUnRegister function parameters

Parameter   Type     Description
URI         String   Location that contains the registration record for the resource or service provider
Username    String   User name for the Registry Services server
Password    String   Password for the Registry Services server
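As an illustrative sketch (the registration record URI and credentials are placeholders, not values from a live system), the following IPL fragment removes a registration record and then checks the Boolean return value and the additional variables that are described above:

```
//Hypothetical registration record location and credentials
RegistrationUri = "http://<registryserver>:16310/oslc/providers/6577";
RegistryServerUsername = "system";
RegistryServerPassword = "manager";

result = RDFUnRegister(RegistrationUri, RegistryServerUsername,
RegistryServerPassword);

if (result) {
    //Message code 204 indicates that the record was removed
    log("Record removed. ResultCode: " + ResultCode);
} else {
    //ErrorReason and ResultCode describe the failure
    log("Unregister failed. ResultCode: " + ResultCode
        + " Reason: " + ErrorReason);
}
```

Remember that resource registrations must be removed before the service provider registration that they reference.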
Example of how to remove the registration of a service provider
The following example demonstrates how to remove the registration of the service
provider.
The service provider location is:
http://<registryserver>:16310/oslc/providers/6577
Use the RDFUnRegister function to remove the registration. For example:
//Registry server information
ServiceProviderUri="http://<registryserver>:16310/oslc/providers/6577";
RegistryServerUsername="system";
RegistryServerPassword="manager";
result = RDFUnRegister(ServiceProviderUri, RegistryServerUsername,
RegistryServerPassword);
Example of how to remove the registration of an OSLC resource
The following example demonstrates how to use the policy function to remove the
registration of an OSLC resource.
registrationURL = "http://oslcregistryserver.com:16310/oslc/registration/1351071987349";
providerURL = "http://oslcregistryserver.com:16310/oslc/providers/1351071987343";
RegistryServerUsername="smadmin";
RegistryServerPassword="password";
returnString = RDFUnRegister (registrationURL, RegistryServerUsername, RegistryServerPassword);
Working with Netcool/Impact policies and OSLC
You can integrate the Netcool/Impact OSLC provider and Netcool/Impact policies.
Accessing policy output parameters as OSLC resources
To use the Netcool/Impact OSLC provider to run Netcool/Impact policies and
access the results, you must edit the NCI_oslc.props file that is in the
IMPACT_HOME/etc directory.
About this task
Netcool/Impact returns two types of RDF objects: literals and resources. RDF
literals contain an actual value. RDF resources are returned as URLs that you can
access to find more information about an object.
Procedure
1. To access Netcool/Impact policy results, edit the NCI_oslc.props file that is in
the IMPACT_HOME/etc directory, where NCI is the name of your Impact Server.
Add the following statement for each policy that you want to access:
oslc.policy.<pathcomponent>=<policyname>
2. Restart the Impact Server.
Example
For example, you add the following statement to the NCI_oslc.props file to access
the SNMPTableTest policy:
oslc.policy.tabletest=SNMPTableTest
Use the following URL to run the policy and return the results:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/tabletest
where NCI is the Impact Server name and NCICLUSTER is the Netcool/Impact cluster
name.
When you access this URL, the policy runs and the policy output parameters are
available as RDF resources.
OSLC and variables output by policy results
Simple variables, such as string, integer, double, float, Boolean, and
date/timestamp are made available as RDF literals. More complex variables such
as impact objects, arrays, and function results are displayed as RDF resources with
an RDF link that contains internal details of the variable.
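The source of the Example policy is not shown in this guide. As a hypothetical sketch, an IPL policy that produces this mix of output variables might look like the following (the values are chosen to match the RDF output below):

```
//Simple variables: surfaced as RDF literals
myString = "Brian";
myDouble = 22.5;
myFloat = 100.55;
myInteger = 32;
myBoolean = true;

//Complex variables: surfaced as RDF resource links
myObject = NewObject();
myObject.fname = "John";
myObject.lname = "Doe";
myObject.age = 25;
myObject.bmi = 24.5;

myArrayStr = {"Hello", "Hey", "Hi"};
```

Each of these variables would also need a matching output parameter definition in the Policy Settings Editor before the OSLC provider can expose it.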
This example shows the user output parameters from the Example policy. The
results of this policy contain both RDF literals and resources. The following URL
triggers the policy execution and makes the results available as OSLC resources:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/example/
The information is returned as:
<rdf:RDF>
<examplePolicy:example rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example">
<example:myArrayStr rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myArrayStr"/>
<example:myObject rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObject"/>
<example:myObjArray rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray"/>
<example:myGetFilter rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myGetFilter"/>
<example:MyAlerts rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/MyAlerts"/>
<example:myString>Brian</example:myString>
<example:myDouble>22.5</example:myDouble>
<example:myFloat>100.55</example:myFloat>
<example:myInteger>32</example:myInteger>
<example:myBoolean>true</example:myBoolean>
</examplePolicy:example>
</rdf:RDF>
The more complex variables in this example, such as myObject, myArrayStr,
myObjArray, and myGetFilter, are displayed as resource links. The simple
variables, such as myString, myDouble, myFloat, myInteger, and myBoolean, are
displayed as literals alongside their values.
You use the following URL to access the resource URL used for Netcool/Impact
objects, represented by the myObject variable:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/example/myObject
The resource URL returns the results as:
<rdf:RDF>
<examplePolicy:myObject rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/
policy/example/myObject">
<myObject:bmi>24.5</myObject:bmi>
<myObject:lname>Doe</myObject:lname>
<myObject:fname>John</myObject:fname>
<myObject:age>25</myObject:age>
<oslc:totalCount>1</oslc:totalCount>
<rdf:type rdf:resource="http://open-services.net/ns/core#ResponseInfo"/>
</examplePolicy:myObject>
</rdf:RDF>
Accessing arrays of variables from policy results
To access an array of variables that are contained in a policy result in an OSLC
context, you use a URL that contains the variable name.
Before you begin
The default array prefix is oslc_pos. To change this setting, add the following
definition to the NCI_oslc.props file:
oslc.policy.<pathcomponent>.arrayprefix=<prefix>
For example, add the following definition to the NCI_oslc.props file to change the
prefix to pos for the example path component:
oslc.policy.example.arrayprefix=pos
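With the pos prefix configured, the element names in the returned RDF use that prefix in place of the default oslc_pos prefix. A hypothetical fragment for the myArrayStr example would then look like:

```
<myArrayStr:pos_2>Hi</myArrayStr:pos_2>
<myArrayStr:pos_1>Hey</myArrayStr:pos_1>
<myArrayStr:pos_0>Hello</myArrayStr:pos_0>
```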
Procedure
To access the resource URL used for an array of objects, represented in this
example by the myArrayStr variable, use the following URL:
http://<server>:<port>/NCICLUSTER_NCI_oslc/policy/example/myArrayStr
Results
This URL returns the following results:
<rdf:RDF>
<examplePolicy:myArrayStr rdf:about="http://<server>:<port>/
NCICLUSTER_NCI_oslc/policy/example/myArrayStr">
<myArrayStr:oslc_pos_2>Hi</myArrayStr:oslc_pos_2>
<myArrayStr:oslc_pos_1>Hey</myArrayStr:oslc_pos_1>
<myArrayStr:oslc_pos_0>Hello</myArrayStr:oslc_pos_0>
<oslc:totalCount>1</oslc:totalCount>
<rdf:type rdf:resource="http://open-services.net/ns/core#ResponseInfo"/>
</examplePolicy:myArrayStr>
</rdf:RDF>
If an array variable contains multiple Netcool/Impact objects, then a resource that
contains a link to multiple resources is created. Each of these resources contains a
link to the actual Netcool/Impact object in the array.
Example
Use the following URL to access the array of variables that is represented by the
myObjArray variable:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/example/myObjArray/
As this array contains multiple objects, the URL returns the following results:
<rdf:RDF>
<oslc:ResponseInfo rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray/">
<rdfs:member>
<examplePolicy:myObjArray rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray">
<myObjArray:oslc_pos_2 rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray/oslc_pos_2"/>
<myObjArray:oslc_pos_1 rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray/oslc_pos_1"/>
<myObjArray:oslc_pos_0 rdf:resource="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray/oslc_pos_0"/>
</examplePolicy:myObjArray>
</rdfs:member>
<oslc:totalCount>1</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
If you access the URL associated with one of the Netcool/Impact objects, the
following results are returned:
<rdf:RDF>
<myObjArray:oslc_pos_1 rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/example/myObjArray/oslc_pos_1">
<oslc_pos_1:fname>Garrett</oslc_pos_1:fname>
<oslc_pos_1:bmi>33.1</oslc_pos_1:bmi>
<oslc_pos_1:age>30</oslc_pos_1:age>
</myObjArray:oslc_pos_1>
</rdf:RDF>
Displaying the resource shapes for policy results
Resource shapes are available for any OSLC resource produced by the
Netcool/Impact OSLC provider. The resource shape defines the set of OSLC
properties for a specific operation.
Procedure
To display the resource shape for any OSLC object, add resourceShapes to the
URL.
Example
For example, you use the following URL to display the resource shape definition
for the specified resource:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/resourceShapes/example/myGetFilter/item;ID=1010
This URL returns the following results, which include the resource shape definition:
<rdf:RDF>
<oslc:ResourceShape rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/resourceShapes/example/myGetFilter">
<dcterms:title>examplePolicy</dcterms:title>
<oslc:property>
<oslc:Property rdf:about="http://xmlns.com/foaf/0.1/givenName">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#string"/>
<dcterms:title>NAME</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myGetFilter/NAME"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>NAME</oslc:name>
<dcterms:description>NAME</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myGetFilter/STARTED">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#dateTime"/>
<dcterms:title>STARTED</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myGetFilter/STARTED"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>STARTED</oslc:name>
<dcterms:description>STARTED</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myGetFilter/MANAGER">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#integer"/>
<dcterms:title>MANAGER</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myGetFilter/MANAGER"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>MANAGER</oslc:name>
<dcterms:description>MANAGER</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myGetFilter/ID">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#integer"/>
<dcterms:title>ID</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myGetFilter/ID"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>ID</oslc:name>
<dcterms:description>ID</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myGetFilter/DEPT">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#string"/>
<dcterms:title>DEPT</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myGetFilter/DEPT"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>DEPT</oslc:name>
<dcterms:description>DEPT</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myGetFilter/CEASED">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#dateTime"/>
<dcterms:title>CEASED</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myGetFilter/CEASED"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>CEASED</oslc:name>
<dcterms:description>CEASED</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:describes rdf:resource="http://xmlns.com/foaf/0.1/Group"/>
</oslc:ResourceShape>
</rdf:RDF>
OSLC and UI data provider compatible variables for policy
results
The Netcool/Impact OSLC provider and the UI data provider can remotely run a
policy and make the results available as an OSLC or UI data provider compatible
resource with property values that contain the user output parameters.
The following variable types are supported:
v String
v Integer
v Double
v Float
v Boolean
v Date/Timestamp
v Impact Object
v Long (represented as an Integer in OSLC)
Variable arrays for each of these variables are also supported. These arrays must
be of the same variable type.
The variables returned by the GetByFilter and DirectSQL functions are also
supported.
Configuring policy settings
To use the UI data provider or OSLC with your Netcool/Impact policies, you must
configure policy input or output parameters to make the policy results compatible
with the UI data provider or available as OSLC resources. You can also enable
policy actions for use with the UI data provider.
About this task
You can create either policy input parameters or policy output parameters. Policy
input parameters represent the input parameters that you define in policies. For
example, you can use a policy input parameter to pass values from one policy to
another in a data mashup.
Policy output parameters represent the parameters that are output by policies. For
example, the UI data provider uses policy output parameters to visualize data
from policies in the console.
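As an illustrative sketch (the parameter name city_name and the Location filter field are hypothetical), a policy references an input parameter that is defined in the Policy Settings Editor as an ordinary variable:

```
//city_name is a hypothetical policy input parameter
//defined in the Policy Settings Editor
Log("Selected city: " + city_name);

//Use the input value to drive a data mashup query
MyEvents = GetByFilter("ALERTS", "Location = '" + city_name + "'", false);
```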
You can also configure policies that are related to the UI data provider to be
enabled for specific actions. When the policy actions are enabled, you can
right-click an action in the widget in the console to display the list of policy
actions. When a policy action is updated in the policy editor, the list on the
widget in the console is updated automatically.
Procedure
1. Click the Policies tab and create a JavaScript or an IPL policy. To open the
Policy Settings Editor, click the Configure Policy Settings icon in the policy
editor toolbar.
2. To create a policy output parameter, click New Output Parameter:New. To
create a policy input parameter, click New Input Parameter:New. Mandatory
fields are denoted by an asterisk (*). You must enter a unique name in the
Name field.
3. Define the custom schemas for the output parameters if required.
If you are using the DirectSQL policy function with OSLC, you must define the
custom schema for it. You need to create an output parameter for this policy to
create a UI Data Provider policy-related action.
If you are using DirectSQL, Impact Object, or Array of Impact Object with
the UI data provider or the chart widget, you must define the custom schema
for these values.
For more information, see “Creating custom schema values for output
parameters” on page 111
4. Click New to create a UI Data Provider policy-related action. You can use this
option to enable a policy action on a widget in the console (the Dashboard
Application Services Hub in Jazz for Service Management). For more
information, see “Creating a widget on a page in the console” on page
106.
a. In the Name field, add a name for the action. The name that you add
displays in the widget in the console when you right-click on an action in
the specified widget.
b. In the Policy Name menu, select the policy that you want the action to
relate to.
c. In the Output Parameter menu, select the output parameter to associate
with this action. If you select the All output parameters option, the action
becomes available for all output parameters for the selected policy.
5. To enable a policy to run with a UI data provider, select the Enable policy for
UI Data Provider actions check box.
6. To enable a policy to run with the Event Isolation and Correlation
capabilities, select the Enable Policy for Event Isolation and Correlation
Actions check box.
7. To save the changes to the parameters and close the window, click OK.
Example
This example demonstrates how to create output parameters for a policy. First, you
define a simple policy:
first_name = "Mark";
zip_code = 12345;
Log("Hello " + first_name + " living at " + zip_code);
Next, define the output parameters for this policy. In this case, there are two
output parameters. You enter the following information:
Table 54. PolicyDT1 output parameter

Field                  User entry
Name                   Enter a unique name. For example, PolicyDT1.
Policy variable name   first_name
Format                 String

Table 55. PolicyDT2 output parameter

Field                  User entry
Name                   Enter a unique name. For example, PolicyDT2.
Policy variable name   zip_code
Format                 Integer
Creating custom schema values for output parameters
When you define output parameters that use the DirectSQL, Array of Impact
Object, or Impact Object format in the user output parameters editor, you also
must specify a name and a format for each field that is contained in the
DirectSQL, Array of Impact Object, or Impact Object objects.
About this task
Custom schema definitions are used by Netcool/Impact to visualize data in the
console and to pass values to the UI data provider and OSLC. You create the
custom schemas and select the format that is based on the values for each field
that is contained in the object. For example, you create a policy that contains two
fields in an object:
O1.city="NY"
O1.ZIP=07002
You define the following custom schemas values for this policy:
Table 56. Custom schema values for City

Field    Entry
Name     City
Format   String

Table 57. Custom schema values for ZIP

Field    Entry
Name     ZIP
Format   Integer
If you use the DirectSQL policy function with the UI data provider or OSLC, you
must define a custom schema value for each DirectSQL value that you use.
If you want to use the chart widget to visualize data from an Impact object or an
array of Impact objects with the UI data provider and the console, you define
custom schema values for the fields that are contained in the objects. The custom
schemas help to create descriptors for columns in the chart during initialization.
However, the custom schemas are not technically required. If you do not define
values for either of these formats, the system later rediscovers each Impact object
when it creates additional fields, such as the key field, UIObjectId, or the field
for the tree widget, UITreeNodeId. You do not need to define these values for OSLC.
Procedure
1. In the Policy Settings Editor, select DirectSQL, Impact Object, or Array of
Impact Object in the Format field.
2. The system shows the Open the Schema Definition Editor icon beside the
Schema Definition field. To open the editor, click the icon.
3. You can edit an existing entry or you can create a new one. To define a new
entry, click New. Enter a name and select an appropriate format.
To edit an existing entry, click the Edit icon beside the entry that you want to
edit.
4. To mark an entry as a key field, select the check box in the Key Field column.
You do not have to define the key field for Impact objects or an array of Impact
objects. The system uses the UIObjectId as the key field instead.
5. To delete an entry, select the entry and click Delete.
Accessing data types output by the GetByFilter function
If you want to access the results from the GetByFilter function, you need to create
output parameters for the OSLC provider.
Procedure
1. To open the policy user parameter editor, click the Configure Policy Settings
icon in the policy editor toolbar. You can create policy input and output
parameters. To open the Create a New Policy Output Parameter window, click
New.
2. Select Datatype as the format.
3. Enter the name of the data item to which the output of the GetByFilter
function is assigned in the Policy Variable Name field.
4. Enter the name of the data source in the Data Source Name field.
5. Enter the name of the data type in the Data Type Name field.
Example
This example demonstrates how to make the output from the GetByFilter function
available to the Netcool/Impact OSLC provider.
You create a data type called ALERTS that belongs to the defaultobjectserver data
source. This data type belongs to Netcool/OMNIbus and it points to
alerts.status. The key field is Identifier. The following four rows of data are
associated with the key field:
v Event1
v Event2
v Event3
v Event4
You create the following policy, called Test_Policy3:
MyAlerts = GetByFilter("ALERTS", "Severity > 0", false);
Next, you define the output parameters for the policy as follows:
Table 58. PolicyData1 output parameter

Field                  User entry
Name                   PolicyData1
Policy Variable Name   MyAlerts
Format                 Datatype
Data Source Name       defaultobjectserver
Data Type Name         ALERTS
Accessing variables output by the DirectSQL function
To access variables output by the DirectSQL policy function, you must create
DirectSQL output parameters and format values.
About this task
If a variable is output by the DirectSQL policy function, Netcool/Impact creates an
RDF resource. This resource contains multiple properties for each defined output
parameter.
Only the following simple variables are supported:
v String
v Double
v Integer
v Long
v Date/Timestamp
v Boolean
If the column names contain special characters, you must add a statement that lists
these special characters to the NCI_server.props file. For more information, see the
topic about using special characters in column names in the Troubleshooting section.
If the policies that you use to provide data to OSLC contain special characters, you
must escape these special characters. For more information, see the topic about
using special characters in OSLC and UI data provider policies in the
Troubleshooting section.
Procedure
To access variables output by the DirectSQL policy function, create a DirectSQL
output parameter and define the DirectSQL values for this parameter. For a
detailed description of these steps, see “Creating custom schema values for output
parameters” on page 111.
Example
This example demonstrates how to access variables output by the DirectSQL policy
function. You define the following policy, which uses the DirectSQL function:
MyAlerts=DirectSQL('defaultobjectserver','select min(Serial) as min_serial,
max(Serial) as max_serial,count(Node) as num_events from alerts.status', false);
Next, define the DirectSQL output parameter as outlined in the table. You do not
need to enter a data source or data type name.
Table 59. DirectSQL output parameter

Field                  User Entry
Name                   DirectSQL_OP1
Policy Variable Name   MyAlerts
Format                 DirectSQL
To create the DirectSQL format values, click the DirectSQL editor icon. Define the
format values as follows:
Table 60. DirectSQL format values

Name         Format    Key
min_serial   Double    True
max_serial   Float     True
num_events   Integer   True
Use the following URI to run the policy and return the results:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/examplePolicy/MyAlerts
The results are:
<rdf:RDF>
<oslc:ResponseInfo rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/
policy/examplePolicy/MyAlerts">
<rdfs:member>
<examplePolicy:MyAlerts rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/examplePolicy/MyAlerts/
item;min_serial=133;num_events=12;max_serial=521">
<MyAlerts:num_events>12</MyAlerts:num_events>
<MyAlerts:min_serial>133</MyAlerts:min_serial>
<MyAlerts:max_serial>521</MyAlerts:max_serial>
</examplePolicy:MyAlerts>
</rdfs:member>
<oslc:totalCount>1</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
The results also contain the resource shape:
<rdf:RDF>
<oslc:ResourceShape rdf:about="http://example.com:9080/
NCICLUSTER_NCI_oslc/policy/resourceShapes/example/MyAlerts">
<dcterms:title>examplePolicy</dcterms:title>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myDirectSQL/num_events">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#integer"/>
<dcterms:title>num_events</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myDirectSQL/num_events"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>num_events</oslc:name>
<dcterms:description>num_events</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myDirectSQL/min_serial">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#double"/>
<dcterms:title>min_serial</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myDirectSQL/min_serial"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>min_serial</oslc:name>
<dcterms:description>min_serial</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:property>
<oslc:Property rdf:about="http://jazz.net/ns/ism/event/impact#/policy/
example/myDirectSQL/max_serial">
<oslc:readOnly>true</oslc:readOnly>
<oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#float"/>
<dcterms:title>max_serial</dcterms:title>
<oslc:propertyDefinition rdf:resource="http://jazz.net/ns/ism/events/
impact/policy/example/myDirectSQL/max_serial"/>
<oslc:occurs rdf:resource="http://open-services.net/ns/core#Zero-or-one"/>
<oslc:name>max_serial</oslc:name>
<dcterms:description>max_serial</dcterms:description>
</oslc:Property>
</oslc:property>
<oslc:describes rdf:resource="http://jazz.net/ns/ism/event/impact#/
policy/example/"/>
</oslc:ResourceShape>
</rdf:RDF>
Configuring custom URIs for policy results and variables
You can assign custom URIs to policy and user output parameters or variables to
create a custom mapping. You can use this mapping to represent a resource in any
domain.
About this task
Netcool/Impact supports only the one-to-one mapping of user output parameters
to OSLC properties.
Procedure
1. To add a custom URI to a policy resource, add the following definition to the
NCI_oslc.props file:
oslc.policy.<pathcomponent>.uri=<uri>
2. To add a custom URI to a variable, specify the variable and the path
component. As there are multiple layers of variables, you must specify each
variable until you reach the one that you want:
oslc.policy.<pathcomponent>.<variablename>.uri=<uri>
oslc.policy.<pathcomponent>.<variablename>.<variablename>
....<variablename>.uri=<uri>
Example
The following example demonstrates how the example policy can be represented in
a Friend of a Friend (FOAF) specification. You start by adding statements to the
NCI_oslc.props file:
oslc.policy.example=examplePolicy
oslc.policy.example.uri=http://xmlns.com/foaf/0.1/Person
oslc.policy.example.myGetFilter.NAME.uri=http://xmlns.com/foaf/0.1/givenName
You use this URL to query the OSLC resource:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/example/myGetFilter
This URL returns the RDF:
Note: This example is an approximation for exemplary purposes.
<rdf:RDF>
<oslc:ResponseInfo rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/policy/
example/myGetFilter">
<rdfs:member>
<j.0:Person rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/policy/
example/myGetFilter/item;ID=1012">
<j.0:givenName>Kevin Doe</j.0:givenName>
<myGetFilter:STARTED>1976-07-06</myGetFilter:STARTED>
<myGetFilter:MANAGER>1001</myGetFilter:MANAGER>
<myGetFilter:ID>1012</myGetFilter:ID>
<myGetFilter:DEPT>Documentation</myGetFilter:DEPT>
</j.0:Person>
</rdfs:member>
<rdfs:member>
<j.0:Person rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/policy/
example/myGetFilter/item;ID=1010">
<j.0:givenName>Brian Doe</j.0:givenName>
<myGetFilter:STARTED>1980-08-11</myGetFilter:STARTED>
<myGetFilter:MANAGER>1001</myGetFilter:MANAGER>
<myGetFilter:ID>1010</myGetFilter:ID>
<myGetFilter:DEPT>Documentation</myGetFilter:DEPT>
</j.0:Person>
</rdfs:member>
<oslc:totalCount>2</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
In this example, no prefix is defined for the namespace. In this case, the RDF
shows the automatically generated prefix j.0 for the namespace
http://xmlns.com/foaf/0.1/.
You specify a prefix for a namespace in this format:
Chapter 9. Working with OSLC for Netcool/Impact
213
oslc.policy.<path>.namespaces.<prefix>=<uri>
For this example, you add this statement:
oslc.policy.example.namespaces.foaf=http://xmlns.com/foaf/0.1/
When you use the following URL to query the OSLC resource, the RDF is
produced with the prefix that you specified:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/example/myGetFilter
This URL returns the following RDF:
<rdf:RDF
<oslc:ResponseInfo rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/policy/
example/myGetFilter">
<rdfs:member>
<foaf:Person rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/policy/
example/myGetFilter/item;ID=1012">
<foaf:givenName>Kevin Doe</foaf:givenName>
<myGetFilter:STARTED>1976-07-06</myGetFilter:STARTED>
<myGetFilter:MANAGER>1001</myGetFilter:MANAGER>
<myGetFilter:ID>1012</myGetFilter:ID>
<myGetFilter:DEPT>Documentation</myGetFilter:DEPT>
</foaf:Person>
</rdfs:member>
<rdfs:member>
<foaf:Person rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/policy/
example/myGetFilter/item;ID=1010">
<foaf:givenName>Brian Doe</foaf:givenName>
<myGetFilter:STARTED>1980-08-11</myGetFilter:STARTED>
<myGetFilter:MANAGER>1001</myGetFilter:MANAGER>
<myGetFilter:ID>1010</myGetFilter:ID>
<myGetFilter:DEPT>Documentation</myGetFilter:DEPT>
</foaf:Person>
</rdfs:member>
<oslc:totalCount>2</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
Passing argument values to a policy
You can use URL query strings to pass argument values in the form of a string to a
policy. You can access these values by creating a policy output parameter for each
of the arguments.
Procedure
Use the following URL to pass argument values to a policy, where <path name> is
the value that is defined for the policy in the NCI_oslc.props file, for example,
oslc.policy.<path name>=<policyname>:
http://<host>:<port>/NCICLUSTER_NCI_oslc/policy/
<path name>?arg1=<value>&arg2=<value>
Restriction: Unusually long URLs can cause issues. This depends on the browser
and the Liberty Core settings. To avoid these issues, limit the size of the values of
the variables that are passed through the query string.
Results
After you access this URL, the policy runs. If you do not define any policy output
parameters, the policy parameters are available as properties within an OSLC
resource.
Example
For example, you use the following URL to pass the variable arg1 with the string
value table1 to the policy defined in the tableset path:
http://example.com:9080/NCICLUSTER_NCI_oslc/policy/tableset?arg1=table1
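As an illustration, the request above can be scripted with the Python standard library. The host, port, and path name are placeholders, not values mandated by the product:

```python
# Minimal sketch (not part of the product): build and send the
# argument-passing URL with the Python standard library. Host, port,
# and path are assumptions; substitute your own deployment values.
import urllib.parse
import urllib.request

base = "http://example.com:9080/NCICLUSTER_NCI_oslc/policy/tableset"

# Each query-string argument is passed to the policy as a string value.
args = {"arg1": "table1"}
url = base + "?" + urllib.parse.urlencode(args)

request = urllib.request.Request(url, headers={"Accept": "application/rdf+xml"})
# Running the policy is a plain HTTP GET; uncomment to send the request:
# with urllib.request.urlopen(request) as response:
#     rdf_xml = response.read().decode("utf-8")
print(url)
```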
Configuring hover previews for OSLC resources
You can enable hover previews for OSLC resources. You can configure a title and
other aspects of the hover preview, such as the size of the window.
About this task
For more information about using hover previews for OSLC, see Open Services for
Lifecycle Collaboration Core Specification Version 2.0 UI Preview
(http://open-services.net/bin/view/Main/OslcCoreUiPreview).
For hover previews to work in the IBM Dashboard Application Services Hub
(DASH), you might need to disable the Blocked loading mixed active content
security feature in the browser.
If you use Netcool/Impact operator views as your display document, single
sign-on must be configured or authentication security for operator views must be
disabled. For more information, see Security, Single Sign-On in the Administration
section.
To enable UI (hover) previews in a widget, the data set must include a column,
OslcResourceURI, which contains the Netcool/Impact source record URI. The
source record URI is used to query the Resource Registry in the Dashboard
Application Services Hub that contains the previews for the listed resource. For
more information, see http://www-01.ibm.com/support/knowledgecenter/
SSEKCU_1.1.0.2/com.ibm.psc.doc_1.1.0.2/tip_original/
dash_t_twl_uipreview_resource_reg.html.
Procedure
To configure hover previews, add a set of properties for each OSLC resource to the
NCI_oslc.props file. You can use any combination of these properties. Some of
these properties apply only if the document or icon parameter exists in the OSLC
resource. For a detailed description of these properties, see “Hover preview
properties for OSLC resources” on page 217. To use hover preview (UI Preview)
with DASH, specify both the smallPreview and largePreview properties.
Results
Each set of properties that you define for an OSLC resource generates a compact
XML representation of the hover preview. This compact XML is used to help
generate the content for the hover preview window in other applications. Each set
of properties can contain variables. When the XML is generated, the variables are
replaced with property values from the OSLC resource in the following format:
$<prefixnamespace>:<propertyname>
For example:
$RESERVATION:HOSTNAME
$RESERVATION:ID
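The substitution can be pictured with a short sketch. This is an illustration only, not the provider's actual implementation, and the template and property values are hypothetical:

```python
# Illustrative sketch of $<prefixnamespace>:<propertyname> substitution;
# not the actual provider implementation. All values are hypothetical.
import re

# Hypothetical property values taken from an OSLC resource.
values = {
    "RESERVATION:HOSTNAME": "mycomputer.ibm.com",
    "RESERVATION:ID": "4",
}

template = "Computer Reservation System - $RESERVATION:HOSTNAME"

def substitute(text, values):
    # Replace each $prefix:property token with its value, if known.
    return re.sub(r"\$(\w+:\w+)",
                  lambda m: values.get(m.group(1), m.group(0)),
                  text)

print(substitute(template, values))
# → Computer Reservation System - mycomputer.ibm.com
```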
To view all the possible variables for an OSLC resource, use the OSLC resource
URI to view the full XML representation.
If a resource does not exist or an error occurs, the system returns a 400 error code
and a message that explains the issue.
If the resource does not support hover previews, for example if the resource
contains rdfs:member lists, the system returns a 406 Not Acceptable error code.
If no hover preview parameters are defined for the resource, the system returns a
compact XML document that contains no parameters other than the information
about the URI.
If you design a third-party hover preview consumer, such as TBSM, you can send
the request with an HTTP Accept header of application/x-oslc-compact+xml to
retrieve the hover preview compact XML document in the response.
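As an illustration, such a consumer request might be sketched as follows; the resource URL and credentials are placeholders:

```python
# Hypothetical sketch: fetch the hover preview compact XML for an OSLC
# resource by setting the HTTP Accept header on the request. The URL
# and credentials below are placeholders.
import base64
import urllib.request

resource_url = "http://example.com:9080/NCICLUSTER_NCI_oslc/data/computer/item;ID=4"

token = base64.b64encode(b"impactadmin:netcool").decode("ascii")
request = urllib.request.Request(
    resource_url,
    headers={
        # Request the compact representation used by hover preview consumers.
        "Accept": "application/x-oslc-compact+xml",
        # HTTP basic authentication with placeholder credentials.
        "Authorization": "Basic " + token,
    },
)
# Uncomment to send the request and read the compact XML document:
# with urllib.request.urlopen(request) as response:
#     compact_xml = response.read().decode("utf-8")
print(request.get_header("Accept"))
```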
Example
The following example demonstrates how to configure the hover preview for an
OSLC resource that is based on a database table called RESERVATION. The
following hover preview settings are defined in the NCI_oslc.props file:
oslc.data.computer=RESERVATION
oslc.data.computer.uri=http://open-services.net/ns/crtv#ComputerSystem
oslc.data.computer.MODEL.uri=http://open-services.net/ns/crtv#model
oslc.data.computer.MANUFACTURER.uri=http://open-services.net/ns/crtv#manufacturer
oslc.data.computer.SERIALNUMBER.uri=http://open-services.net/ns/crtv#serialNumber
oslc.data.computer.namespaces.crtv=http://open-services.net/ns/crtv#
oslc.data.computer.provider=provider01
oslc.data.computer.provider.domain=http://domainx/
oslc.data.computer.preview.title=Computer Reservation System - $RESERVATION:HOSTNAME
oslc.data.computer.preview.shortTitle=Reservation
oslc.data.computer.preview.largePreview.document=https://<impactserver>:16311/opview/displays/NCICLUSTER-Reservations.html?id=$RESERVATION:ID
oslc.data.computer.preview.largePreview.hintWidth=31.250em
oslc.data.computer.preview.largePreview.hintHeight=21.875em
oslc.data.computer.preview.smallPreview.document=https://<impactserver>:16311/opview/displays/NCICLUSTER-Reservations.html?id=$RESERVATION:ID
oslc.data.computer.preview.smallPreview.hintWidth=31.250em
oslc.data.computer.preview.smallPreview.hintHeight=21.875em
Next, derive the hover preview content. In this example, a Netcool/Impact
operator view is used. Two variables are generated in the compact XML:
$RESERVATION:HOSTNAME
$RESERVATION:ID
These variables are converted into the property values based on data from the
OSLC resource:
$RESERVATION:HOSTNAME = mycomputer.ibm.com
$RESERVATION:ID = 4
When you use an HTTP GET method on the resource URL with the
application/x-oslc-compact+xml HTTP Accept header, the following RDF is
returned:
<?xml version="1.0"?>
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:oslc="http://open-services.net/ns/core#">
<oslc:Compact rdf:about="http://example.com:9080/NCICLUSTER_NCI_oslc/data/
computer/item;ID=4">
<dcterms:title>Computer Reservation System - mycomputer.ibm.com</dcterms:title>
<oslc:shortTitle>Reservation</oslc:shortTitle>
<oslc:largePreview>
<oslc:Preview>
<oslc:hintWidth>31.250em</oslc:hintWidth>
<oslc:hintHeight>21.875em</oslc:hintHeight>
<oslc:document rdf:resource=
"https://<impact-server>:16311/opview/displays/NCICLUSTER-Reservations.html?id=4"/>
</oslc:Preview>
</oslc:largePreview>
<oslc:smallPreview>
<oslc:Preview>
<oslc:hintWidth>31.250em</oslc:hintWidth>
<oslc:hintHeight>21.875em</oslc:hintHeight>
<oslc:document rdf:resource=
"https://<impact-server>:16311/opview/displays/NCICLUSTER-Reservations.html?id=4"/>
</oslc:Preview>
</oslc:smallPreview>
</oslc:Compact>
</rdf:RDF>
Hover preview properties for OSLC resources
To configure hover previews, add the parameters that are listed in the following
tables to the NCI_oslc.props file. Some of these parameters apply only if the
document or icon parameter exists in the OSLC resource.
Table 61. Hover preview parameters

oslc.<type>.<path>.preview.title=<longtitle>
   where <longtitle> specifies the long title string that is used for the hover
   preview.

oslc.<type>.<path>.preview.shortTitle=<shorttitle>
   where <shorttitle> specifies the short title string that is used for the hover
   preview. In an example implementation with the Dashboard Application
   Services Hub (DASH), this statement supplies the tab name for the hover
   preview.

oslc.<type>.<path>.preview.icon=<URIof16x16image>
   where <URIof16x16image> specifies the URI for a 16x16 image.

oslc.<type>.<path>.preview.largePreview.document=<PreviewdocumentURI>
   where <PreviewdocumentURI> specifies the URI that is used for the HTML
   preview document.

oslc.<type>.<path>.preview.smallPreview.document=<PreviewdocumentURI>
   where <PreviewdocumentURI> specifies the URI for the HTML preview
   document. In an example implementation with the Dashboard Application
   Services Hub (DASH), this statement specifies the content that is rendered
   inside the hover preview.
Table 62. Hover preview parameters for the icon parameter

oslc.<type>.<path>.preview.iconTitle=<Icontitle>
   where <Icontitle> specifies the title that is used for the icon.

oslc.<type>.<path>.preview.iconAltLabel=<Alternativelabel>
   where <Alternativelabel> specifies an alternative label for the icon.
Table 63. Hover preview parameters for the document parameter

oslc.<type>.<path>.preview.largePreview.hintWidth=<Previewwindowwidth>
   where <Previewwindowwidth> specifies the width of the preview window. For
   example, 31.250em.

oslc.<type>.<path>.preview.largePreview.hintHeight=<Previewwindowheight>
   where <Previewwindowheight> specifies the height of the preview window. For
   example, 21.875em.

oslc.<type>.<path>.preview.largePreview.initialHeight=<Previewwindowinitialheight>
   where <Previewwindowinitialheight> specifies the height of the preview window
   when it first displays. For example, 21.875em.

oslc.<type>.<path>.preview.smallPreview.hintWidth=<Previewwindowwidth>
   where <Previewwindowwidth> specifies the width of the small preview window.
   For example, 31.250em. In an example implementation with the Dashboard
   Application Services Hub (DASH), this statement specifies the width of the
   hover preview window.

oslc.<type>.<path>.preview.smallPreview.hintHeight=<Previewwindowheight>
   where <Previewwindowheight> specifies the height of the small preview
   window. For example, 21.875em. In an example implementation with the
   Dashboard Application Services Hub (DASH), this statement specifies the
   height of the hover preview window.

oslc.<type>.<path>.preview.smallPreview.initialHeight=<Previewwindowinitialheight>
   where <Previewwindowinitialheight> specifies the height of the small preview
   window when it first displays. For example, 21.875em.
In these tables, <type> is the OSLC resource type, which can be either data or
policy, and <path> is the OSLC resource path.
Example scenario: Using OSLC with Netcool/Impact policies
Read this example scenario to get an overview of how you can use
Netcool/Impact policies to create OSLC service providers and resources, and
register the provider and resources with the Registry Services component of Jazz
for Service Management.
Before you begin
Before you can use OSLC, you must install the Registry Services component of Jazz
for Service Management. For more information, see http://pic.dhe.ibm.com/
infocenter/tivihelp/v3r1/topic/com.ibm.psc.doc_1.1.0/psc_ic-homepage.html.
About this task
The Registry Services component of Jazz for Service Management includes two
registries, the resource registry and the provider registry. This example first
demonstrates how to create a service provider and register it with the provider
registry. The second step creates an OSLC resource and registers it with the
resource registry.
Procedure
1. Create a Netcool/Impact policy that creates a service provider and registers it
in the provider registry that is part of the Registry Services component of Jazz
for Service Management:
a. Define the server information for the server where the Registry Services
component of Jazz for Service Management is installed:
RegistryServerProviderCFUri="http://<registry_server>:16310/oslc/pr/collection";
RegistryServerUsername="<user>";
RegistryServerPassword="<password>";
b. Define the service provider information, for example:
Log(CurrentContext());
dsTitle = "Customer-x Product-y OSLC Service Provider";
dsDescription = "Customer-x Product-y OSLC Service Provider";
provider = "http://<URL>/<myProvider>";
domain = "http://<Domain>/";
For example:
provider = "http://<impact-server>:9080/NCICLUSTER_NCI_oslc/provider/provider01";
domain = "http://jazz.net/ns/ism/event/impact#";
c. Use the RDFModel policy function to create the service provider RDF model:
serviceProviderModel = RDFModel();
d. Update the namespace definitions:
RDFModelUpdateNS(serviceProviderModel, "oslc","http://open-services.net/ns/core#");
RDFModelUpdateNS(serviceProviderModel, "dcterms","http://purl.org/dc/terms/");
e. Create the RDF statements and add them to the model:
RDFStatement(serviceProviderModel, provider, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
"http://open-services.net/ns/core#ServiceProvider", true);
RDFStatement(serviceProviderModel, provider, "http://purl.org/dc/terms/title", dsTitle, false);
RDFStatement(serviceProviderModel, provider, "http://purl.org/dc/terms/description",
dsDescription, false);
serviceStmt=RDFStatement(serviceProviderModel, null, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", "http://open-services.net/ns/core#Service", true);
RDFStatement(serviceProviderModel, serviceStmt.getSubject, "http://open-services.net/ns/core#domain", domain, true);
RDFStatement(serviceProviderModel, provider, "http://open-services.net/ns/core#service",
serviceStmt.getSubject, true);
log("---------Service Provider RDF---------");
log(RDFModelToString(serviceProviderModel, "RDF/XML-ABBREV"));
f. Use the RDFRegister policy function to register the service provider in the
provider registry.
serviceProviderURL = RDFRegister(RegistryServerProviderCFUri, RegistryServerUsername,
RegistryServerPassword, serviceProviderModel);
log("Registered service provider: " + serviceProviderURL);
2. Create and register an OSLC resource in the resource registry:
a. Define the server information for the resource registry:
RegistryServerResourceCFUri="http://<registry_server>:16310/oslc/rr/registration/collection";
RegistryServerUsername="<user>";
RegistryServerPassword="<password>";
b. Define the OSLC resource; in this example, it represents a computer system:
computerSystem = "http://<OSLC_resource_URL>/mySystemX";
name = "mySystemX";
manufacturer = "VMware";
serialNumber = "422ABA0619B0DE94B02E40870D6462AF";
model = "VMWAREVIRTUALPLATFORM";
The service provider is located at the following URL. You can retrieve this
URL after you register the service provider:
"http://<registry_server>:16310/oslc/providers/1358241368487"
c. Create the RDF model that represents the computer system:
computerSystemModel = RDFModel();
RDFModelUpdateNS(computerSystemModel, "crtv", "http://open-services.net/ns/crtv#");
RDFModelUpdateNS(computerSystemModel, "oslc","http://open-services.net/ns/core#");
RDFStatement(computerSystemModel, computerSystem, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", "http://open-services.net/ns/crtv#ComputerSystem", true);
RDFStatement(computerSystemModel, computerSystem, "http://open-services.net/ns/crtv#name", name, false);
RDFStatement(computerSystemModel, computerSystem, "http://open-services.net/ns/crtv#model", model, false);
RDFStatement(computerSystemModel, computerSystem, "http://open-services.net/ns/crtv#manufacturer", manufacturer, false);
RDFStatement(computerSystemModel, computerSystem, "http://open-services.net/ns/crtv#serialNumber", serialNumber, false);
RDFStatement(computerSystemModel, computerSystem, "http://open-services.net/ns/core#serviceProvider", serviceProviderURL, true);
log("---------Computer System RDF---------");
log(RDFModelToString(computerSystemModel, "RDF/XML-ABBREV"));
d. Register the computer system in the resource registry.
registrationRecordURL = RDFRegister(RegistryServerResourceCFUri, RegistryServerUsername,
RegistryServerPassword, computerSystemModel);
log("Registered computer system: " + registrationRecordURL);
OSLC reference topics
Read the following reference information for OSLC.
OSLC URLs
You use the following URLs to access OSLC data.
Access data items
http://<server>:<port>/NCICLUSTER_NCI_oslc/data/<datatype>/
Retrieve the OSLC resource shape for the data type
http://<server>:<port>/NCICLUSTER_NCI_oslc/data/resourceShapes/
<datatype>
Run a policy and return the results
http://<server>:<port>/NCICLUSTER_NCI_oslc/policy/<policyname>
Access array of variables from policy results
http://<server>:<port>/NCICLUSTER_NCI_oslc/policy/<policyname>/
<variablearray>
Display results for a unique key identifier:
http://<server>:<port>/NCICLUSTER_NCI_oslc/policy/<policyname>/
<function>item;ID=<uniquekeyidentifier>
OSLC pagination
The Netcool/Impact OSLC provider supports pagination to make the retrieval of
large amounts of data easier and more efficient.
Pagination is enabled by default for data items and policies whose variables
contain data items. To manually configure pagination, add the following query
parameters to the URL:
?oslc.paging=true&oslc.page=<pagenumber>&oslc.pageSize=<pagesize>
v oslc.paging=true enables pagination. This setting is enabled by default. To
disable pagination, use oslc.paging=false.
v oslc.page=<pagenumber> is the page number. This property is set to page 1 by
default.
v oslc.pageSize=<pagesize> is the page size. This property is set to 100 by
default.
Administrators can add the following statement to the NCI_server.props
configuration file to set the default limit for the page size:
impact.oslc.pagesize=<pagesize>
If this property is not defined, it is set to 100 by default.
If the page size in the URL is greater than the limit that is defined in the
NCI_server.props configuration file, the size is limited to that set in the
NCI_server.props configuration file.
You can also add the oslc.paging=false property to the URL to disable
pagination. If this property is set, the entire result set is returned and any
additional pagination properties are ignored. If you disable pagination and you
also enable large data model support, performance can be adversely affected.
Response information
Two properties are added to the response information: oslc:nextPage and
oslc:totalCount.
The oslc:nextPage property is not returned when there is no next page: if the
number of results that are returned is smaller than the specified page size, no
next page property is returned.
The oslc:totalCount property gives the total number of results across all the
pages.
Example
For example, the following URL retrieves the alerts variable, which contains the
results of a GetByFilter function, from the events policy:
http://example.com:16310/NCICLUSTER_NCI_oslc/policy/events/alerts?
oslc.paging=true&oslc.page=2&oslc.pageSize=25
In the URL, pagination is enabled, page two of the results is requested, and the
page size is limited to 25.
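The sequence of paged requests can be sketched as follows; the helper function and endpoint are illustrative assumptions, not part of the product:

```python
# Hypothetical sketch: build the sequence of URLs that is needed to page
# through a result set. The endpoint is a placeholder.
import urllib.parse

def page_urls(base, page_size, total_count):
    # Ceiling division gives the number of pages required.
    pages = -(-total_count // page_size)
    for page in range(1, pages + 1):
        params = {
            "oslc.paging": "true",
            "oslc.page": str(page),
            "oslc.pageSize": str(page_size),
        }
        yield base + "?" + urllib.parse.urlencode(params)

base = "http://example.com:16310/NCICLUSTER_NCI_oslc/policy/events/alerts"
# With 60 results and a page size of 25, three requests are needed.
for url in page_urls(base, 25, 60):
    print(url)
```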
Support for OSLC query syntax
Netcool/Impact supports a version of the query syntax for OSLC. The
implementation in Netcool/Impact supports the oslc.properties and oslc.select
query parameters.
Both query parameters provide similar functionality. The only difference is where
the identifiers belong: if the identifiers belong to the starting subject resource, use
oslc.properties; if the identifiers belong to a member list for a resource, use
oslc.select.
For more information about the query syntax for OSLC, see the Open Services for
Lifecycle Collaboration Core Specification Version 2.0 specification
(http://open-services.net/bin/view/Main/OslcCoreSpecification).
oslc.properties query parameter
Use the oslc.properties query parameter to display the properties for an
individual resource URI that does not contain any rdfs:member lists.
If the identifiers whose properties you want to limit belong to the starting
subject, use the oslc.properties query parameter to limit the properties that are
returned.
Example
To display properties for an individual resource URI that contains no rdfs:member
list, use the oslc.properties query parameter:
http://<server>:16310/NCICLUSTER_NCI_oslc/data/staff/item;
FNAME=%27Todd%27;LNAME=%27Bishop%27?oslc.properties=foaf:lastName,
foaf:firstName
The URI returns the following information:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:EMPLOYEES="http://jazz.net/ns/ism/event/impact/data/staff/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#"
xmlns:foaf="http://xmlns.com/foaf/">
<foaf:Person rdf:about="http://<server>:16310/NCICLUSTER_NCI_oslc/data/
staff/item;FNAME=&apos;Todd&apos;;LNAME=&apos;Bishop&apos;">
<foaf:lastName>Bishop</foaf:lastName>
<foaf:firstName>Todd</foaf:firstName>
</foaf:Person>
</rdf:RDF>
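Note that the single quotation marks around the key values in the item URI are percent-encoded as %27. A sketch of how such a URL might be assembled (the host and helper are hypothetical):

```python
# Hypothetical sketch: assemble the item URI and oslc.properties query
# from the example above. The single quotation marks around key values
# must be percent-encoded as %27.
import urllib.parse

base = "http://impact.example.com:16310/NCICLUSTER_NCI_oslc/data/staff"
keys = [("FNAME", "Todd"), ("LNAME", "Bishop")]

# Build item;FNAME='Todd';LNAME='Bishop' with the quotes percent-encoded.
item = "item;" + ";".join(
    "%s=%s" % (name, urllib.parse.quote("'%s'" % value))
    for name, value in keys)
url = base + "/" + item + "?oslc.properties=foaf:lastName,foaf:firstName"
print(url)
```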
oslc.select query parameter
If the identifiers whose properties you want to limit belong to a member list for
a resource, use the oslc.select query parameter.
Use oslc.select to complete the following tasks:
v Limit the properties of a member list that belongs to an OSLC resource, for
example, a member list that is generated from a data item.
v Display an OSLC resource that contains rdfs:member lists, for example, the
results of a policy function variable.
Data items example
If you query a URL that contains a list of resources that belong to a database table,
the system returns an rdfs:member list for each row. For example, to query the staff
data type and to limit the properties that are contained in the list to the
foaf:lastName and foaf:firstName properties, use the oslc.select query
parameter:
http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff?oslc.select=foaf:lastName,foaf:firstName
This URL returns only the rdfs:member lists that contain the foaf:lastName and
foaf:firstName properties:
<?xml version="1.0"?>
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:EMPLOYEES="http://jazz.net/ns/ism/event/impact/data/staff/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#"
xmlns:foaf="http://xmlns.com/foaf/">
<oslc:ResponseInfo rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff?oslc.select=foaf:lastName,foaf:firstName&amp;
oslc.paging=true&amp;oslc.pageSize=100">
<oslc:totalCount>5</oslc:totalCount>
</oslc:ResponseInfo>
<rdf:Description rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff?oslc.select=foaf:lastName,foaf:firstName">
<rdfs:member>
<foaf:Person rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff/item;FNAME=&apos;Mika&apos;;LNAME=&apos;Masion&apos;">
<foaf:lastName>Masion</foaf:lastName>
<foaf:firstName>Mika</foaf:firstName>
</foaf:Person>
</rdfs:member>
<rdfs:member>
<foaf:Person rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff/item;FNAME=&apos;Kevin&apos;;LNAME=&apos;Doe&apos;">
<foaf:lastName>Doe</foaf:lastName>
<foaf:firstName>Kevin</foaf:firstName>
</foaf:Person>
</rdfs:member>
<rdfs:member>
<foaf:Person rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff/item;FNAME=&apos;Todd&apos;;LNAME=&apos;Bishop&apos;">
<foaf:lastName>Bishop</foaf:lastName>
<foaf:firstName>Todd</foaf:firstName>
</foaf:Person>
</rdfs:member>
<rdfs:member>
<foaf:Person rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff/item;FNAME=&apos;Miriam&apos;;LNAME=&apos;Masters&apos;">
<foaf:lastName>Masters</foaf:lastName>
<foaf:firstName>Miriam</foaf:firstName>
</foaf:Person>
</rdfs:member>
<rdfs:member>
<foaf:Person rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/data/
staff/item;FNAME=&apos;Brian&apos;;LNAME=&apos;Doe&apos;">
<foaf:lastName>Doe</foaf:lastName>
<foaf:firstName>Brian</foaf:firstName>
</foaf:Person>
</rdfs:member>
</rdf:Description>
</rdf:RDF>
Policy variable example
If the starting resource contains the member lists, use the oslc.select query
parameter. For example, if the resource contains a policy variable such as
myDirectSQL2, use oslc.select:
http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myDirectSQL2?
oslc.select=myDirectSQL2:num_events
This URL returns the following information:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:direct="http://policy.js/xmlns/directSQL/"
xmlns:test_ipl="http://jazz.net/ns/ism/event/impact/policy/ipl/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:myDirectSQL2="http://jazz.net/ns/ism/event/impact#policy/
ipl/myDirectSQL2/"
xmlns:impact="http://jazz.net/ns/ism/event/impact#"
xmlns:javascript="http://policy.js/xmlns/">
<rdf:Description rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/
policy/ipl/myDirectSQL2?
oslc.select=myDirectSQL2:num_events">
<rdfs:member>
<test_ipl:myDirectSQL2 rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/
policy/ipl/
myDirectSQL2/item;min_serial=165248;num_events=928;max_serial=387781">
<myDirectSQL2:num_events>928</myDirectSQL2:num_events>
</test_ipl:myDirectSQL2>
</rdfs:member>
</rdf:Description>
<oslc:ResponseInfo rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/
ipl/myDirectSQL2?
oslc.select=myDirectSQL2:num_events&amp;oslc.paging=true&amp;oslc.pageSize=100">
<oslc:totalCount>1</oslc:totalCount>
</oslc:ResponseInfo>
</rdf:RDF>
Nested variables and wildcard queries
Use wildcard queries to display nested variables.
To display all variables and their nested values, use the following statement:
oslc.properties=*
To display all the variables, their nested values, and the policy functions, use the
following statement:
oslc.properties=*&oslc.select=*
Example
For example, you define a policy and you want to use the following URL to access
it:
http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl
The URL returns the following information:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:direct="http://policy.js/xmlns/directSQL/"
xmlns:test_ipl="http://jazz.net/ns/ism/event/impact/policy/ipl/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#"
xmlns:javascript="http://policy.js/xmlns/">
<rdf:Description rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl">
<test_ipl:myTimestamp>20120818</test_ipl:myTimestamp>
<test_ipl:myString>test_ipl</test_ipl:myString>
<test_ipl:myInteger>55</test_ipl:myInteger>
<test_ipl:myImpactObjectArray rdf:resource="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray"/>
<test_ipl:myImpactObject1 rdf:resource="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObject1"/>
<test_ipl:myDouble>109.5</test_ipl:myDouble>
<rdf:type rdf:resource="http://jazz.net/ns/ism/event/impact/policy/ipl/"/>
</rdf:Description>
</rdf:RDF>
This example contains several RDF literals such as myTimestamp, myString, and
myInteger. It also contains the myImpactObjectArray RDF resource.
Use the following URL to show all the variables and their nested values:
http://<server>:9080/NCICLUSTER_NCI_oslc/policy/
ipl?oslc.properties=*
This returns the following information:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:myImpactObjectArray="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/"
xmlns:myImpactObject1="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObject1/"
xmlns:direct="http://policy.js/xmlns/directSQL/"
xmlns:test_ipl="http://jazz.net/ns/ism/event/impact/policy/ipl/"
xmlns:oslc_pos_2="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/oslc_pos_2/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:oslc_pos_1="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/oslc_pos_1/"
xmlns:oslc_pos_0="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/oslc_pos_0/"
xmlns:impact="http://jazz.net/ns/ism/event/impact#"
xmlns:javascript="http://policy.js/xmlns/">
<rdf:Description rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl">
<test_ipl:myTimestamp>20120818</test_ipl:myTimestamp>
<test_ipl:myString>test_ipl</test_ipl:myString>
<test_ipl:myInteger>55</test_ipl:myInteger>
<test_ipl:myImpactObjectArray>
<test_ipl:myImpactObjectArray rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray">
<myImpactObjectArray:oslc_pos_2>
<myImpactObjectArray:oslc_pos_2 rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray/oslc_pos_2">
<oslc_pos_2:lname>Doe</oslc_pos_2:lname>
<oslc_pos_2:fname>Kevin</oslc_pos_2:fname>
<oslc_pos_2:email>kdoe@us.ibm.com</oslc_pos_2:email>
<oslc_pos_2:birthday>1973-01-22</oslc_pos_2:birthday>
</myImpactObjectArray:oslc_pos_2>
</myImpactObjectArray:oslc_pos_2>
<myImpactObjectArray:oslc_pos_1>
<myImpactObjectArray:oslc_pos_1 rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray/oslc_pos_1">
<oslc_pos_1:lname>Doe</oslc_pos_1:lname>
<oslc_pos_1:fname>Danny</oslc_pos_1:fname>
<oslc_pos_1:email>doe@us.ibm.com</oslc_pos_1:email>
<oslc_pos_1:birthday>1976-05-12</oslc_pos_1:birthday>
</myImpactObjectArray:oslc_pos_1>
</myImpactObjectArray:oslc_pos_1>
<myImpactObjectArray:oslc_pos_0>
<myImpactObjectArray:oslc_pos_0 rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray/oslc_pos_0">
<oslc_pos_0:lname>Doe</oslc_pos_0:lname>
<oslc_pos_0:fname>John</oslc_pos_0:fname>
<oslc_pos_0:email>jdoe@us.ibm.com</oslc_pos_0:email>
<oslc_pos_0:birthday>1980-08-11</oslc_pos_0:birthday>
</myImpactObjectArray:oslc_pos_0>
</myImpactObjectArray:oslc_pos_0>
</test_ipl:myImpactObjectArray>
</test_ipl:myImpactObjectArray>
<test_ipl:myImpactObject1>
<test_ipl:myImpactObject1 rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObject1">
<myImpactObject1:lname>Doe</myImpactObject1:lname>
<myImpactObject1:fname>John</myImpactObject1:fname>
<myImpactObject1:email>jdoe@us.ibm.com</myImpactObject1:email>
<myImpactObject1:birthday>1980-08-11</myImpactObject1:birthday>
</test_ipl:myImpactObject1>
</test_ipl:myImpactObject1>
<test_ipl:myDouble>109.5</test_ipl:myDouble>
<rdf:type rdf:resource="http://jazz.net/ns/ism/event/impact/policy/ipl/"/>
</rdf:Description>
</rdf:RDF>
Notice that the myImpactObjectArray array is expanded to show each ImpactObject
that it contains and the property values for each of the ImpactObjects.
To obtain a specific property value for one of the resources, use a URL that
specifies the nested properties. For example:
http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl?oslc.
properties=test_ipl:myImpactObjectArray{myImpactObjectArray:
oslc_pos_0{oslc_pos_0:lname,oslc_pos_0:fname}}
This URL returns the following information:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:myImpactObjectArray="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/"
xmlns:myImpactObject1="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObject1/"
xmlns:direct="http://policy.js/xmlns/directSQL/"
xmlns:test_ipl="http://jazz.net/ns/ism/event/impact/policy/ipl/"
xmlns:oslc_pos_2="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/oslc_pos_2/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:oslc_pos_1="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/oslc_pos_1/"
xmlns:oslc_pos_0="http://jazz.net/ns/ism/event/impact/policy/
ipl/myImpactObjectArray/oslc_pos_0/"
xmlns:tivoli-impact="http://jazz.net/ns/ism/event/impact#"
xmlns:javascript="http://policy.js/xmlns/">
<rdf:Description rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/
policy/ipl">
<test_ipl:myImpactObjectArray>
<test_ipl:myImpactObjectArray rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray">
<myImpactObjectArray:oslc_pos_0>
<myImpactObjectArray:oslc_pos_0 rdf:about="http://<server>:9080/
NCICLUSTER_NCI_oslc/policy/ipl/myImpactObjectArray/oslc_pos_0">
<oslc_pos_0:lname>Doe</oslc_pos_0:lname>
<oslc_pos_0:fname>John</oslc_pos_0:fname>
</myImpactObjectArray:oslc_pos_0>
</myImpactObjectArray:oslc_pos_0>
</test_ipl:myImpactObjectArray>
</test_ipl:myImpactObjectArray>
<rdf:type rdf:resource="http://jazz.net/ns/ism/event/impact/policy/
ipl/"/>
</rdf:Description>
</rdf:RDF>
To obtain resources that contain member lists, like the member lists contained in
the results of the DirectSQL and GetByFilter policy functions, use a combination of
the oslc.properties and the oslc.select query parameters. For example:
http://<server>:9080/NCICLUSTER_NCI_oslc/policy/
ipl?oslc.properties=*&oslc.select=test_ipl:myGetByFilter
{myGetByFilter:LNAME,myGetByFilter:FNAME}
This URL returns the following RDF:
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:myGetByFilter="http://jazz.net/ns/ism/event/impact/policy/
ipl/myGetByFilter/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:direct="http://policy.js/xmlns/directSQL/"
xmlns:test_ipl="http://jazz.net/ns/ism/event/impact/policy/ipl/"
xmlns:oslc="http://open-services.net/ns/core#"
xmlns:impact="http://jazz.net/ns/ism/event/impact#"
xmlns:javascript="http://policy.js/xmlns/">
<rdf:Description rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/
policy/ipl">
<test_ipl:myTimestamp>20120818</test_ipl:myTimestamp>
<test_ipl:myString>test_ipl</test_ipl:myString>
<test_ipl:myInteger>55</test_ipl:myInteger>
<test_ipl:myGetByFilter>
<test_ipl:myGetByFilter
rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myGetByFilter">
<rdfs:member>
<test_ipl:myGetByFilter
rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myGetByFilter/
item;FNAME=&apos;Mika&apos;;LNAME=&apos;Masion&apos;">
<myGetByFilter:LNAME>Masion</myGetByFilter:LNAME>
<myGetByFilter:FNAME>Mika</myGetByFilter:FNAME>
</test_ipl:myGetByFilter>
</rdfs:member>
<rdfs:member>
<test_ipl:myGetByFilter
rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myGetByFilter/
item;FNAME=&apos;Kevin&apos;;LNAME=&apos;Doe&apos;">
<myGetByFilter:LNAME>Doe</myGetByFilter:LNAME>
<myGetByFilter:FNAME>Kevin</myGetByFilter:FNAME>
</test_ipl:myGetByFilter>
</rdfs:member>
<rdfs:member>
<test_ipl:myGetByFilter
rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myGetByFilter/
item;FNAME=&apos;Todd&apos;;LNAME=&apos;Bishop&apos;">
<myGetByFilter:LNAME>Bishop</myGetByFilter:LNAME>
<myGetByFilter:FNAME>Todd</myGetByFilter:FNAME>
</test_ipl:myGetByFilter>
</rdfs:member>
<rdfs:member>
<test_ipl:myGetByFilter
rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myGetByFilter/
item;FNAME=&apos;Miriam&apos;;LNAME=&apos;Masters&apos;">
<myGetByFilter:LNAME>Masters</myGetByFilter:LNAME>
<myGetByFilter:FNAME>Miriam</myGetByFilter:FNAME>
</test_ipl:myGetByFilter>
</rdfs:member>
<rdfs:member>
<test_ipl:myGetByFilter
rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/policy/ipl/myGetByFilter/
item;FNAME=&apos;Brian&apos;;LNAME=&apos;Doe&apos;">
<myGetByFilter:LNAME>Doe</myGetByFilter:LNAME>
<myGetByFilter:FNAME>Brian</myGetByFilter:FNAME>
</test_ipl:myGetByFilter>
</rdfs:member>
<oslc:ResponseInfo>
<oslc:ResponseInfo rdf:about="http://<server>:9080/NCICLUSTER_NCI_oslc/
policy/ipl/myGetByFilter?oslc.paging=true&amp;oslc.pageSize=100">
<oslc:totalCount>5</oslc:totalCount>
</oslc:ResponseInfo>
</oslc:ResponseInfo>
</test_ipl:myGetByFilter>
</test_ipl:myGetByFilter>
<test_ipl:myDouble>109.5</test_ipl:myDouble>
<rdf:type rdf:resource="http://jazz.net/ns/ism/event/impact/policy/
ipl/"/>
</rdf:Description>
</rdf:RDF>
RDF functions
You can use RDF functions to make Netcool/Impact compatible with open services
for lifecycle collaboration (OSLC).
RDFModel
You can use the RDFModel function to create an RDF model without any input
parameters.
To create an empty RDF model, you call the RDFModel function without entering
any input parameters. The function returns an empty RDF model.
Syntax
The RDFModel function has the following syntax:
[Model =] RDFModel()
Parameters
The RDFModel function has no input parameters.
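The following sketch shows how the function might be used together with the
other RDF functions that are described in this chapter; the namespace and
output steps are illustrative only:
//Create an empty RDF model
model = RDFModel();
//Add a namespace to the empty model (see RDFModelUpdateNS)
RDFModelUpdateNS(model, "oslc", "http://open-services.net/ns/core#");
//Serialize the model in the default language, RDF/XML (see RDFModelToString)
log(RDFModelToString(model, null));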
RDFModelToString
You can use the RDFModelToString function to export an RDF model to a string in a
particular language.
When you create or write an RDF model, you can use the RDFModelToString
function to export the model to a string in a particular language. The input
parameters are a model object and a string that specifies the language to use. If
the language string is null or an empty string, the default language, RDF/XML, is
used. The following language strings are supported:
v RDF/XML
v RDF/XML-ABBREV
v TURTLE
v TTL
v N3
The RDFModelToString function returns a string.
Syntax
The RDFModelToString function has the following syntax:
[String =] RDFModelToString (Model, Language)
Parameters
The RDFModelToString function has the following parameters:
Table 64. RDFModelToString function parameters

Parameter   Type     Description
Model       Model    Model object to output
Language    String   Language type
The following example exports an RDF model to a string, first in the N3
language and then in the default language (RDF/XML):
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";
Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName,FormParameters, FilesToSend, HeadersToSend,
HttpProperties);
//Create Model from RDF payload
rdf=RDFParse(x);
//Retrieve all statements from model
allStatements=RDFSelect(rdf,null,null,null);
//Output RDF to log using N3
log(RDFModelToString(rdf, "N3"));
//Output RDF to log using the default language (RDF/XML)
log(RDFModelToString(rdf, null));
RDFModelUpdateNS
You can use the RDFModelUpdateNS function to insert, update, or remove a
namespace from an RDF model.
When you create an RDF model, you can use the RDFModelUpdateNS function to
insert, update, or remove a namespace from the model. You can define a model
object, prefix string, and a URI string as input parameters. If the URI is null or an
empty string, the function removes the prefix string from the model. If the URI
contains a string with a non-empty value and the prefix exists, the URI is updated.
If the prefix does not exist, a new prefix and URI is added to the model.
RDFModelUpdateNS returns this model.
Syntax
The RDFModelUpdateNS function has the following syntax:
[Model =] RDFModelUpdateNS (Model, Prefix, URI)
Parameters
The RDFModelUpdateNS function has the following parameters:
Table 65. RDFModelUpdateNS function parameters

Parameter   Type     Description
Model       Model    Model object to update
Prefix      String   Contains the prefix to be updated in the model
URI         String   Contains the URI to associate with the prefix
The following example updates the namespaces in a model:
//Create model
model = RDFModel();
//Update or insert namespace to model
RDFModelUpdateNS(model,"oslc","http://open-services.net/ns/core#");
RDFModelUpdateNS(model,"rdfs","http://www.w3.org/2000/01/rdf-schma#");
RDFModelUpdateNS(model,"dcterms","http://purl.org/dc/terms/");
The following piece of code deletes an existing model's namespace:
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";
Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey,
Method, AuthHandlerActionTreeName, FormParameters,
FilesToSend, HeadersToSend, HttpProperties);
//Create Model from RDF payload
model=RDFParse(x);
//Delete namespace from model that has prefix 'oslc'
RDFModelUpdateNS(model, "oslc", null);
RDFNodeIsResource
You can use the RDFNodeIsResource function to help other functions read and parse
objects that are also an RDF resource. You define an RDF node as the input
parameter. If the object is an RDF resource, the function returns true. If the
object is an RDF literal, the function returns false. Other functions can use the
Boolean value that RDFNodeIsResource returns to continue reading and parsing the
RDF object.
Syntax
The RDFNodeIsResource function has the following syntax:
[Boolean =] RDFNodeIsResource (Object)
Parameters
The RDFNodeIsResource function has the following parameters:
Table 66. RDFNodeIsResource function parameters

Parameter   Type       Description
Object      RDF node   RDF object whose type is checked
The following example shows statements based on an RDF that is retrieved by the
GetHTTP function:
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";
Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName, FormParameters, FilesToSend, HeadersToSend,
HttpProperties);
//Create Model from RDF payload
rdf=RDFParse(x);
//Retrieve all statements from model
allStatements=RDFSelect(rdf,null,null,null);
//Output subject, predicate, and objects from all statements returned, whose
//object is a literal, to the log
Size=Length(allStatements);
log(Size);
Count=0;
While (Count < Size) {
if (!RDFNodeIsResource (allStatements [Count].object)) {
log (allStatements [Count].subject + " " + allStatements [Count].predicate + " "
+ allStatements [Count].object + ".");
}
Count = Count + 1;
}
RDFNodeIsAnon
You can use the RDFNodeIsAnon function to assist in reading and parsing an RDF.
The RDFNodeIsAnon function takes a subject or object that contains an RDFNode as
an input parameter and returns true if the resource is anonymous. If the return
value is false, the RDF resource is not anonymous. Other functions can use this
Boolean value to continue reading and parsing the RDF.
Syntax
The RDFNodeIsAnon function has the following syntax:
[Boolean =] RDFNodeIsAnon (Node)
Parameters
The RDFNodeIsAnon function has the following parameter:
Table 67. RDFNodeIsAnon parameter

Parameter   Type      Description
Node        RDFNode   Subject or object to check

The following example logs the statements whose subject is not anonymous:
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="http";
Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName,FormParameters, FilesToSend, HeadersToSend,
HttpProperties);
//Create Model from RDF payload
rdf=RDFParse(x);
//Retrieve all statements from model
allStatements=RDFSelect(rdf,null,null,null);
//Output subject, predicate, and objects from all statements returned,
//whose subject is not anonymous, to the log
Size=Length(allStatements);
log(Size);
Count=0;
While (Count < Size) {
if (!RDFNodeIsAnon(allStatements[Count].subject)) {
log (allStatements[Count].subject + " " + allStatements[Count].predicate + " "
+ allStatements[Count].object + ".");
}
Count = Count + 1;
}
RDFParse
You can use the RDFParse function to help other functions read and parse an RDF
object. It retrieves the data from a string that contains an RDF payload and returns
a model that contains the RDF payload passed to it. Other functions can use this
model to further read and parse an RDF object.
Syntax
The RDFParse function has the following syntax:
[Model =] RDFParse(Payload)
Parameters
The RDFParse function has the following parameter:
Parameter   Type     Description
Payload     String   Payload that contains the RDF
The following example provides statements based on an RDF that is retrieved by
the GetHTTP function:
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";
Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName, FormParameters, FilesToSend, HeadersToSend,
HttpProperties);
//Create Model from RDF payload
rdf=RDFParse(x);
RDFRegister
You can use the RDFRegister function to help you to register service providers or
OSLC resources with the registry server.
Before you can register a service provider or resource, you must use the other RDF
policy functions to build an RDF model that meets the OSLC and Registry Services
requirements.
After you build the RDF model, use the RDFRegister function to register the RDF
with the resource registry contained in the Registry Services integration service.
If the service provider or OSLC resource is registered successfully, the RDFRegister
function returns the resource location of the registration record. The following
variables are also returned to provide more information:
v ResultCode contains the result code for the response.
v HeadersReceived contains the headers that were received in the response.
v HeadersSent contains the headers that were sent in the request.
v ResponseBody contains the response body text.
If the query parameters are set in the URL and you use the RDFRegister policy
function to register a service provider, you must manually add the location of the
service provider to the policy. For example:
RDFStatement(newModel, manu[0].subject,
"http://open-services.net/ns/core#serviceProvider", serviceProviderURL, true);
If you use the query string inside the path, you must also ensure that the
FormParameters parameter is set to null. For example:
FormParameters=null;
Finally, you must ensure that the policy contains pagination information. For
example:
Path="/NCICLUSTER_NCI_oslc/data/mysql1?oslc.paging=true&oslc.pageSize=100";
If the registration is unsuccessful, the function returns null. Error code
information is returned in the ErrorReason and ResultCode variables.
Syntax
The RDFRegister function has the following syntax:
[String =] RDFRegister(URI, Username, Password, Model)
where Username can be a null or void string to specify that no authentication is
required.
Parameters
The RDFRegister function has the following parameters:
Table 68. RDFRegister function parameters

Parameter   Type     Description
URI         String   Registry Services server creation factory URI
Username    String   User name for the Registry Services server
Password    String   Password for the Registry Services server
Model       Model    Model that contains the RDF
Example
The following example manually registers a service provider and a set of resources
that are exposed by the OSLC service provider in Netcool/Impact.
The Registry Services server information is as follows:
RegistryServerProviderCFUri="http://<registry_services_server>:
16310/oslc/pr/collection";
RegistryServerResourceCFUri="http://<registry_services_server>:
16310/oslc/rr/registration/collection";
RegistryServerUsername="system";
RegistryServerPassword="manager";
The Netcool/Impact server information is as follows:
HTTPHost="<impact_server>";
HTTPPort=9080;
Protocol="http";
Path1="/NCICLUSTER_NCI_oslc/provider/provider01";
Path2="/NCICLUSTER_NCI_oslc/data/computer";
ChannelKey="";
Method="GET";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
HttpProperties.AuthenticationScheme="basic";
Get the service provider RDF from Netcool/Impact:
serviceProviderResponse=GetHTTP(HTTPHost,HTTPPort, Protocol, Path1,
ChannelKey,Method, AuthHandlerActionTreeName, null, FilesToSend,
HeadersToSend,HttpProperties);
Create an RDF model that is based on the service provider response:
serviceProviderModel=RDFParse(serviceProviderResponse);
Register the service provider in the provider registry:
serviceProviderURL = RDFRegister(RegistryServerProviderCFUri,
RegistryServerUsername, RegistryServerPassword,serviceProviderModel);
log("Provider Registry-Service Provider URL: " + serviceProviderURL);
Get all the computer system resources from Netcool/Impact:
allResources=GetHTTP(HTTPHost,HTTPPort, Protocol, Path2, ChannelKey,
Method,AuthHandlerActionTreeName, null, FilesToSend, HeadersToSend,
HttpProperties);
Create an RDF model that is based on the resource response:
allResourceModel=RDFParse(allResources);
Register each computer system and a set of properties with the resource registry:
statements=RDFSelect(allResourceModel, null,
"http://jazz.net/ns/ism/event/impact#data/computer/ID", null);
size=Length(statements);
count=0;
while(count<size) {
Path3=statements[count].subject;
//Get the individual computer system resource
resourceResponse=GetHTTP(HTTPHost,HTTPPort, Protocol, Path3, ChannelKey,
Method,AuthHandlerActionTreeName, null, FilesToSend, HeadersToSend,
HttpProperties);
resourceModel=RDFParse(resourceResponse);
Create a model that contains the properties and data that you want to register:
newModel=RDFModel();
manu=RDFSelect(resourceModel, null,
"http://open-services.net/ns/crtv#manufacturer",null);
model=RDFSelect(resourceModel, null,
"http://open-services.net/ns/crtv#model", null);
serial=RDFSelect(resourceModel, null,
"http://open-services.net/ns/crtv#serialNumber", null);
RDFModelUpdateNS(newModel, "crtv", "http://open-services.net/ns/crtv#");
RDFModelUpdateNS(newModel, "oslc","http://open-services.net/ns/core#");
RDFStatement(newModel, manu[0].subject,
"http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
"http://open-services.net/ns/crtv#ComputerSystem", true);
RDFStatement(newModel, manu[0].subject, manu[0].predicate, manu[0].object,
RDFNodeIsResource(manu[0].object));
RDFStatement(newModel, manu[0].subject, model[0].predicate, model[0].object,
RDFNodeIsResource(model[0].object));
RDFStatement(newModel, manu[0].subject, serial[0].predicate,
serial[0].object, RDFNodeIsResource(serial[0].object));
Update the model with the service provider location:
RDFStatement(newModel, manu[0].subject,
"http://open-services.net/ns/core#serviceProvider", serviceProviderURL, true);
Register the resource in the resource registry:
resourceURL = RDFRegister(RegistryServerResourceCFUri,
RegistryServerUsername, RegistryServerPassword, newModel);
log("Resource Registry-Resource URL: " +resourceURL);
count=count+1;
}
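Because a failed registration returns null, it can be useful to check the return
value before you use it. The following sketch, with illustrative variable names,
logs the ErrorReason and ResultCode variables that are described earlier in this
section:
registrationLocation = RDFRegister(RegistryServerResourceCFUri,
RegistryServerUsername, RegistryServerPassword, newModel);
if (registrationLocation == null) {
//Registration failed; log the diagnostic variables
log("Registration failed: " + ErrorReason + " (result code: " + ResultCode + ")");
} else {
log("Registered at: " + registrationLocation);
}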
RDFUnRegister
To remove the registration record of a service provider or resource from the
registry server, use the RDFUnRegister function to supply the location of the
registration record, the Registry Services server username and password, and the
registration record that you want to remove.
Before you can remove the registration record of a service provider, you must
remove all the registration records for the associated OSLC resources.
If successful, the RDFUnRegister function returns the message code 204 and the
value true. The following variables are also returned to provide additional
information:
v ResultCode contains the result code for the response.
v HeadersReceived contains the headers that were received in the response.
v HeadersSent contains the headers that were sent in the request.
v ResponseBody contains the response body text.
If unsuccessful, the function returns the value false. Error code information is
returned in the ErrorReason and ResultCode variables.
Syntax
The RDFUnRegister function has the following syntax:
[String =] RDFUnRegister(URI, Username, Password)
where Username can be a null or void string to specify that no authentication is
required.
Parameters
The RDFUnRegister function has the following parameters:

Table 69. RDFUnRegister function parameters

Parameter   Type     Description
URI         String   Location that contains the registration record for the
                     resource or service provider
Username    String   User name for the Registry Services server
Password    String   Password for the Registry Services server
Example of how to remove the registration of a service provider
The following example demonstrates how to remove the registration of the service
provider.
The service provider location is:
http://<registryserver>:16310/oslc/providers/6577
Use the RDFUnRegister function to remove the registration. For example:
//Registry server information
ServiceProviderUri="http://<registryserver>:16310/oslc/providers/6577";
RegistryServerUsername="system";
RegistryServerPassword="manager";
result = RDFUnRegister(ServiceProviderUri, RegistryServerUsername,
RegistryServerPassword);
Example of how to remove the registration of an OSLC resource
The following example demonstrates how to use the policy function to remove the
registration of an OSLC resource.
registrationURL = "http://oslcregistryserver.com:16310/oslc/registration/1351071987349";
providerURL = "http://oslcregistryserver.com:16310/oslc/providers/
1351071987343";
RegistryServerUsername="smadmin";
RegistryServerPassword="password";
returnString = RDFUnRegister (registrationURL, RegistryServerUsername, RegistryServerPassword);
RDFSelect
You can use the RDFSelect function to assist in reading and parsing an RDF. To
retrieve statements based on an RDF model, you call the RDFSelect function and
pass the RDF model that is created by the RDFParse function. You can filter based
on subject, predicate, and object.
The RDFSelect function returns an array of statements that are based on the filter,
retrieving values for the subject, predicate, and object variables. You can use it to
create RDF statements or triples. You can also use it to filter statements. If you do
not want to filter your results, you specify null or empty values for the input
parameters.
Syntax
The RDFSelect function has the following syntax:
[Array =] RDFSelect(Model, Subject, Predicate, Object)
Parameters
The RDFSelect function has the following parameters:
Table 70. RDFSelect function parameters

Parameter   Type     Description
Model       Model    The model that contains the RDF payload
Subject     String   Filters for the subject value in RDF statements
Predicate   String   Filters for the predicate value in RDF statements
Object      String   Filters for the object value in RDF statements
The following example provides statements based on an RDF that is retrieved by
the GetHTTP function:
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName, FormParameters, FilesToSend, HeadersToSend,
HttpProperties);
//Create Model from RDF payload
rdf=RDFParse(x);
//Retrieve all statements from model
allStatements=RDFSelect(rdf,null,null,null);
//Output subject, predicate, and objects from all statements returned to the log
Size=Length(allStatements);
log(Size);
Count=0;
While (Count < Size) {
log (allStatements [Count].subject + " " + allStatements [Count].predicate + "
" + allStatements [Count].object + ".");
Count = Count + 1;
}
The following piece of code provides all statements that contain a particular
subject name:
//Retrieve the RDF from OSLC provider through the GetHTTP method
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName, FormParameters, FilesToSend,
HeadersToSend, HttpProperties);
//Create Model from RDF payload
rdf=RDFParse(x);
//Retrieve statements containing subject from model
statements=RDFSelect(rdf,
"http://ibm.com/ns/netcool-impact/data/SCR_Components#MYCLASS",null,null);
//Output subject, predicate, and objects from all statements returned to the log
Size=Length(statements);
log(Size);
Count=0;
While (Count < Size) {
log (statements[Count].subject + " " + statements[Count].predicate + " " +
statements[Count].object + ".");
Count = Count + 1;
}
RDFStatement
You can use the RDFStatement function to create and add statements to an RDF
model.
You specify the following parameters in the function:
v Model object
v Subject string or resource
v Predicate string or property
v Object string or RDF node
If the Object input parameter is a string, you must specify a flag to determine
whether the Object input parameter is an RDF literal or an RDF resource.
To create an anonymous resource in the statement, set the value of the Subject
parameter to null.
Syntax
The RDFStatement function has the following syntax:
[Statement =] RDFStatement (Model, Subject, Predicate, Object, isResource)
Parameters
If the Object input parameter is a string, you must specify the isResource
parameter. The RDFStatement function has the following parameters:
Table 71. RDFStatement function parameters

Parameter    Type                        Description
Model        Model                       Model object that the statement is
                                         added to.
Subject      String, resource, or null   Subject value of the statement. If this
                                         parameter is set to null, the function
                                         creates an anonymous resource in the
                                         statement and returns a statement
                                         instead of a model.
Predicate    String or property          Predicate value of the statement.
Object       String or RDF node          Object value of the statement.
isResource   Boolean                     Determines whether the object is a
                                         resource or a literal.
The following example shows how to create a basic RDF with a single statement.
1. Use the RDFModel policy function to create a model:
model = RDFModel();
RDFModelUpdateNS(model,"oslc","http://open-services.net/ns/core#");
subject = "http://ibm.com/ns/netcool-impact/data/SCR_Components#MYCLASS";
property = "http://open-services.net/ns/core#name";
value = "Brian";
isResource = false;
2. Use the RDFStatement policy function to create a statement:
RDFStatement(model,subject,property,value,isResource);
3. Finally, specify how the RDF model is output:
body = RDFModelToString(model, null);
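To create a statement with an anonymous subject, you can apply the same pattern
with the Subject parameter set to null. The following sketch assumes the model
from the previous steps; the property and value are illustrative only:
//Setting Subject to null creates an anonymous resource; the policy
//function returns a statement instead of a model
stmt = RDFStatement(model, null,
"http://open-services.net/ns/core#name", "Anonymous", false);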
The following example shows how to create a model that is based on an existing
model and that uses only the subjects that the user is interested in:
1. Use the GetHTTP method to retrieve the RDF from the OSLC provider:
HTTPHost="example.com";
HTTPPort=9081;
Protocol="https";
Path="/NCICLUSTER_NCI_oslc/data/resourceShapes/alerts";
ChannelKey="tom";
Method="";
AuthHandlerActionTreeName="";
FormParameters=NewObject();
FilesToSend=NewObject();
HeadersToSend=NewObject();
HttpProperties = NewObject();
HttpProperties.UserId="impactadmin";
HttpProperties.Password="passw0rd";
x=GetHTTP(HTTPHost, HTTPPort, Protocol, Path, ChannelKey, Method,
AuthHandlerActionTreeName, FormParameters, FilesToSend,
HeadersToSend, HttpProperties);
2. Create the RDF model from the RDF payload:
rdf=RDFParse(x);
3. Define a subject to filter:
mySubject=" http://ibm.com/ns/netcool-impact/data/SCR_Components#ID";
4. Retrieve all the statements that contain mySubject from the model:
allStatements=RDFSelect(rdf,mySubject,null,null);
5. Use the RDFModel function to create a new model:
newModel = RDFModel();
6. Use the RDFModelUpdateNS function to add the required namespaces to the
model:
RDFModelUpdateNS(newModel,"oslc","http://open-services.net/ns/core#");
RDFModelUpdateNS(newModel,"rdfs","http://www.w3.org/2000/01/rdf-schma#");
RDFModelUpdateNS(newModel,"dcterms","http://purl.org/dc/terms/");
7. Use the RDFStatement function to add the statements from the old model to the
new model:
Size=Length(allStatements);
Count=0;
While (Count < Size) {
RDFStatement(newModel, allStatements[Count].subject,
allStatements[Count].predicate, allStatements[Count].object,
RDFNodeIsResource(allStatements[Count].object));
Count = Count + 1;
}
8. Output the new model to the log:
log(RDFModelToString(newModel, null));
Chapter 10. Service Level Objectives (SLO) Reporting
Service Level Objectives (SLO) Reporting is an optional feature that you can set up
in the existing Netcool/Impact 7.1.0 product. The SLO Reporting package provides
policies and schema files, which you can use to store Service Level Metrics in a
DB2 database. These metrics are used to generate a report.
SLO Reporting uses Tivoli Common Reporting to develop, create, and generate
reports. Tivoli Common Reporting consists of data stores, reporting engines, their
corresponding web user interfaces displayed in Dashboard Application Services
Hub, and a command-line interface. For more information about Tivoli Common
Reporting, see the Jazz for Service Management documentation.
Configuring SLO reporting
The SLO reporting function can be configured in two ways:
SLO reporting for TBSM
This feature is primarily intended for use with Tivoli Business Service
Manager (TBSM). Use the feature to create SLO reports that display the
outage times for business services in TBSM.
SLO reporting for third-party data
You can also configure SLO reporting to report on data from a third-party
database.
To configure SLO reporting, you need to complete the following steps:
1. Create a database that is called SLORPRT in your DB2 database.
2. Install the SLO reporting features.
3. Define the service definitions in the service definition properties file.
4. Define the business calendar in the business calendar properties file. This is
optional.
5. Run the createServiceDefinition policy to create the service definition.
6. Create policies to retrieve the outage data from the specified data source and
record it in the SLORPRT database. If you want to configure SLO reporting
with TBSM, you can use the sample policies that are provided.
7. Deploy the SLO Availability report in Tivoli Common Reporting.
Architecture
SLO reporting consists of three components:
v The SLORPRT database that you use to store the SLO definitions and outage
data that is gathered for those definitions.
v Projects in Netcool/Impact that contain functions for interacting with the
SLORPRT database.
v The SLO Availability report definition for Tivoli Common Reporting.
You need to decide which of your business services you want to report on and
the metrics that need to be associated with those services. These metrics are used
in the availability report in Tivoli Common Reporting.
First, you define the business services that are displayed in the report in the
SLORPRT database.
Next you can either use the sample policies, or define your own policies, to collect,
analyze, and record outage data in the SLORPRT database.
The SLO Availability Report uses the outage data to display the availability for a
specified period.
The following graphic illustrates this architecture:
SLO terminology overview
Before you start to configure SLO reporting, read the following terms to help you
to understand the solution.
Business service
A business service is an aspect of your enterprise, like Internet Banking
and Payroll, that is defined in Tivoli Business Service Manager. You choose
one or more of these business services to report on.
Service definitions
Service definitions are used to specify metadata for the service and metric
combinations. Service definitions are defined in Impact. The SLO
Availability report in Tivoli Common Reporting uses the service definition
for reporting.
Service Level Agreements (SLAs)
A service level agreement defines a metric and a set of operational hours
where the service is required to be available. The SLAs are defined as part
of a service definition.
Metric A measurable property for a service. For example, you can create metrics
to monitor downtime or transaction volume. The SLO Availability report in
Tivoli Common Reporting uses the metric to display the availability of the
business service based on the metrics that you specify. A metric is specified
in an SLA definition.
Operational hours
The operational hours define the hours during which the business service
is expected to be available. The availability report summarizes the
percentage of time the service was available during the operational hours.
Operational hours are defined as part of the service or SLA definition.
Calendar
A calendar indicates the days in a year that are holidays and weekends. If
a calendar is defined as part of a service definition, outages that occur in
these periods are recorded separately in the SLORPRT database. Calendars
are specified in a properties file that is passed to an Impact policy that
stores the definition.
SLO reporting prerequisites
Before you install the SLO Reporting package, complete the prerequisites.
v Netcool/Impact 7.1.0.4 must be installed and configured before the SLO Reports
package can be applied. The Reports package extensions are a set of policy,
data source, and data type configurations.
v The version of DB2 must be 9.7 Fix pack 4 or later, which is available as a
bundle with Netcool/Impact.
v Tivoli Common Reporting version 3.1 or higher is required to install the SLO
Reports package.
v You must create a database named SLORPRT in the DB2 system and catalog the
TCPIP remote node and the remote database. If a local version of DB2 does not
exist on the Tivoli Common Reporting (TCR) server, install the IBM DB2
Connect Server that is available as part of the fix pack installers for DB2 Server,
Enterprise and Workstation.
SLO reporting migration considerations
This section describes additional steps that you might need to take if you want to
use the SLO capability on an Impact server that was migrated from Impact 6.1.1. If
you deployed SLO on the 6.1.1 Impact server, you should have a project called
SLA after the migration script completes.
If you are migrating from an Impact 6.1.1 server that did not have SLO deployed,
so no SLA project exists, then there is no action required. If you want to start using
SLO on this server, just follow the instructions found in the section Chapter 10,
“Service Level Objectives (SLO) Reporting,” on page 241.
You should not install the SLO solution on the Impact 7.1.0.4 server prior to
completing the migration.
Before upgrading the SLA project, stop all Impact services that are running policies
that call the SLO functions to record data in the SLORPRT database.
Updating the SLORPRT database schema
You should continue using the same SLORPRT database that you used with the SLO
solution in Impact 6.1.1, but it must be updated for schema changes required by
Impact 7.1.0.4. It is recommended to make a backup of the SLORPRT database before
continuing.
Perform the following steps to update the SLORPRT database schema.
1. Copy the sql files needed to update the SLORPRT database schema to a location
accessible by the DB2 client you will use to update the database. These files are
all found in the directory [InstallDirectory]/impact/add-ons/slo/db on the
Impact server:
v add_timezone.sql
v update_views_timezone.sql
v slo_utility.sql
2. Open a DB2 command window on your DB2 client system that has access to
the SLORPRT database. Run the following commands:
db2 connect to SLORPRT user <dbuser> using <dbpassword>
db2 -tvf <slo_sql_dir>/add_timezone.sql **
db2 -tvf <slo_sql_dir>/update_views_timezone.sql
db2 -tvf <slo_sql_dir>/slo_utility.sql
db2 connect reset
Where <slo_sql_dir> is [InstallDirectory]/impact/add-ons/slo/db if the
database client is the same machine as the Impact server, or the directory where
you copied the files in Step 1 if the database client is on a different machine.
** If you had already installed Fixpack 3 for Impact 6.1.1 and updated the
SLORPRT database schema, then the command contained in the file
add_timezone.sql will fail. You can ignore this failure.
Importing the projects for SLO
Starting with Impact 7.1.0.4, the SLO solution consists of two Impact projects, SLA
and SLA_Utility. You will need to import each of these projects using the content
delivered with Impact 7.1.0.4.
Preparing to import
For the SLA project, the content of the existing project will be replaced. Before
continuing, make note of the following, as you may need this information after
importing the updated project:
v Any customization to the following policies, which are part of the original
deployment of the SLA project. Any customization will have to be reapplied
after importing the updated SLA project.
BusinessCalendar
createBusinessCalendarDefn
createServiceDefinition
db2GetData
getDataFromTBSMAvailability
recordSLAMetric
serviceDefinition
serviceDowntimeBasedOnTBSMStatusChange
slaDefGlobalSettings
v Any customization to the following data types, which are part of the original
deployment of the SLA project. Any customization will have to be reapplied
after importing the updated SLA project.
checkPointDT
correlationDT
operationalHourCalForReport
operationalHourDefinition
operationalHourToServiceMapping
slaCalendar
sla_ResourceMetric_Map
sloResourceIdentity
sloResourceMetricValueView
slo_metric_meta
slo_metric_value
slo_OperationalHoursTotalCount
slo_operational_hour_query_for_Service
slo_resource_definition
v Any customization to the following sample activator service. This may have
included changing the startup/logging options, the policy activated, or the
interval for running the policy. Any customization will have to be reapplied after
importing the updated SLA project.
GetData
v The configuration for datasource SLOReportDatasource, including the
hostname, port, userid, and password for the SLORPRT database, as well as any
other changes to the properties of the datasource.
v Any datasources, data types, policies, or services you have added to the SLA
project. For example, you may have added a datasource and data type in order
to use the sample policy that accesses the TBSM Metric History database. These
Impact data elements will not be changed or deleted when the updated SLA
project is imported, but they will no longer be part of the SLA project after the
import. When the import is complete, use the Global project to locate these data
elements and restore them to the SLA project if desired.
Importing the projects
Complete the following steps to import the projects used for the SLO function.
1. From the Impact data server, import the project SLA from the importData
directory, by using the nci_import command.
UNIX:
nci_import <servername> <install_directory>/importData
Windows:
nci_import.bat <servername> <install_directory>/importData
Where <install_directory> is the directory where the SLO artifacts are stored
on the Impact Server. For example: /opt/IBM/tivoli/impact/add-ons/slo/
importData.
2. From the Impact data server, import the project SLA_Utility from the
sloutility directory, by using the nci_import command.
UNIX:
nci_import <servername> <install_directory>/sloutility
Windows:
nci_import.bat <servername> <install_directory>/sloutility
Where <install_directory> is the directory where the SLO artifacts are stored
on the Impact Server. For example: /opt/IBM/tivoli/impact/add-ons/slo/
sloutility.
Restart the Impact servers after importing the projects to flush any definitions for
the SLA project that have been cached.
Refer to the previous section to decide what changes you need to recover for your
SLA project. At a minimum you will need to update the SLOReportDatasource to
provide the access information for your SLORPRT database.
In addition, the project SLA_Utility contains datasource SLOUtilityDatasource.
This datasource must be configured to access the SLORPRT database before
attempting to use any of the utility functions added for Impact 7.1.0.4.
Sample policy to read TBSM Metric History
Starting with Fixpack 4, the default behavior for the sample policy that collects
data from the TBSM Metric History database has changed to only record outages
for the period when the TBSM status is “Bad”, ending an outage when any status
other than “Bad” is found. This is a slightly different algorithm than what
previously existed in the SLO function.
By completing the migration steps in this section you will continue to use the prior
algorithm to preserve consistency in the reporting data. The prior algorithm
calculates an outage as the time from when TBSM first indicates “Bad” status until
the time when the status becomes “Good”. The other TBSM statuses, like
“Marginal”, will not end an outage.
To learn more about this change and how to configure the algorithm you want to
use, refer to “Using the getDataFromTBSMAvailability sample policy” on page 260.
SLO sample report
The sample SLO availability report that is provided with Impact is unchanged in
Impact 7.1.0.4. You can continue to use this report with your existing SLORPRT
database.
Installing and enabling SLO report package
How to install and enable the Netcool/Impact SLO extensions in Tivoli Common
Reporting.
About this task
The SLO Reports package is in the install_home/impact/add-ons/slo directory.
The importData, sloutility, db, and Report directories are included in the SLO
Reports package.
Procedure
1. Create the SLORPRT database in DB2. For example:
db2 create database SLORPRT using CODESET UTF-8 territory en-US
If a local version of DB2 does not exist on the Tivoli Common Reporting server,
install the IBM DB2 Connect Server that is available as part of the fix pack
installers for DB2 Server, Enterprise and Workstation.
2. The db directory contains the SQL file slo_dbschema.sql. Copy this file to the
same system where the SLORPRT database is created, then connect to the
database and run the file. For example:
db2 connect to SLORPRT
db2 -tvf slo_dbschema.sql
db2 connect reset
3. From the primary server in the Impact profile, import the two Impact projects
that provide the SLO function.
a. Import the SLA project from the importData directory, by using the
nci_import command.
b. Import the SLA_Utility project from the sloutility directory using the
nci_import command. This Impact project provides additional utility
functions to help you administer your SLO configuration.
UNIX:
nci_import <servername> <install_directory>/importData
nci_import <servername> <install_directory>/sloutility
Windows:
nci_import.bat <servername> <install_directory>/importData
nci_import.bat <servername> <install_directory>/sloutility
Where <install_directory> is the directory where the SLO artifacts are stored
on the Impact Server. For example, /opt/IBM/tivoli/impact/add-ons/slo/
importData.
4. The Report directory contains the model to be used in the Framework Manager
in Tivoli Common Reporting. The model is provided for your reference. Use
the model if you want to extend the schema or create more views for the
report.
5. The Report directory also contains the report package that must be imported into
the Tivoli Common Reporting Server. The package contains the sample report and the
queries that can be used to generate the reports. Complete the following steps
to import the Netcool/Impact SLO reports package into Tivoli Common
Reporting version 3.1.
a. Navigate to the Tivoli Common Reporting bin directory. For example,
/opt/IBM/JazzSM/reporting/bin.
b. Use the trcmd command to create a data source for the SLORPRT database:
trcmd.sh -user <TCR user> -password <TCR password>
-datasource -add SLORPRT -connectionString <db2 connection string>
-dbType DB2 -dbName <database name>
-dbLogin <database username> -dbPassword <database password>
For example:
./trcmd.sh -user tipadmin -password password1 -datasource -add SLORPRT
-connectionString jdbc:db2://server.ibm.com:50000/SLORPRT
-dbType DB2 -dbName SLORPRT -dbLogin db2inst1 -dbPassword password1
c. Use the trcmd command to import the Netcool/Impact SLO report package:
./trcmd.sh -import -bulk <file> -user <TCR User>
-password <TCR password>
For example:
./trcmd.sh -import -bulk /tmp/ImpactSLOReportPackage.zip -user smadmin
-password password2
6. In Netcool/Impact, configure the SLOReportDatasource and
SLOUtilityDatasource data sources to access the SLORPRT database that you
created in step 1. The SLOReportDatasource is available in the SLA project in
Netcool/Impact, and the SLOUtilityDatasource is available in the SLA_Utility
project.
What to do next
The SLO reporting package is installed and enabled. Next, you need to create the
service definition files.
Defining service definition properties
Before you can view reports for a service, you need to define the service definition
parameters.
About this task
You specify the service definition parameters, like the SLA metric names and the
operational hours, in a service definition properties file that you create. After you
create the properties file, you need to run the createServiceDefinition policy and
pass the service definition to it as an input parameter. When you run the policy,
the service definition is implemented.
In some cases, you might want to reuse service definitions and metric names. You
need to note the following logic:
v You can reuse the same service definition in multiple service definition
properties files to define multiple SLAs. The properties that describe the service,
like the label and description, are specified by the last properties file used for
the service.
v If you use the same SLA metric name in multiple service definitions, only the
last settings are used, including any operational hours or time zone properties.
Each service must include the complete definition for the SLA and it needs to
match any previous definition.
For example service definition properties files, see “Properties files examples” on
page 264.
Procedure
1. Log in to the server where the Impact Server was installed.
2. Create a service definition properties file. Note the file name. You need to
specify the file name as a parameter value in the policy that you use to
implement the service definition properties.
3. Specify the parameters in the services definition file. For more information, see
“Service definition properties file.” Service definition names and service level
agreement (SLA) names must be unique in the SLORPRT database.
4. Save the file.
What to do next
After you define the service definition properties in the properties file, run the
createServiceDefinition policy to implement the service definition. You need to
specify the location of the file that you created in step 2 in the Service Definition
File Name parameter.
If you want to update properties of the service definitions, just update the
properties file and rerun the createServiceDefinition policy to create the service
definition. The new definition will replace the existing definition.
If you want to delete a service definition or an SLA defined in the service, refer to
“SLO Utility Functions” on page 269.
Service definition properties file
Use the service definition properties file to define a service definition.
Naming conventions
Properties use the following naming conventions:
<propertyName>.num
This indicates that multiple sets of related properties are specified. For example,
sla.num=2 indicates that there are two sets of related properties. The related
properties are named sla.1.<propertyName>, sla.2.<propertyName>, and so on.
To nest multiple properties, use <propertyName>.n.<relatedPropertyName>.num. For
example, sla.1.operationalHour.num=2 specifies two sets of operational hour
values for the first SLA definition.
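As an illustration, a hypothetical fragment that uses this convention might look like the following. The property names come from the tables in this section; all values are examples only:

```properties
# Two SLA definitions for one service (illustrative values)
sla.num=2
sla.1.name=Availability_Gold
sla.2.name=Availability_Silver
# Two operational hour periods nested under the first SLA
sla.1.operationalHour.num=2
sla.1.operationalHourStartTime.1=08:00:00
sla.1.operationalHourEndTime.1=12:00:00
sla.1.operationalHourStartTime.2=13:00:00
sla.1.operationalHourEndTime.2=17:00:00
```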
General properties
Table 72. General properties in the service definition properties file
Property
Description
serviceName
The name of the service for which the service
level agreement (SLA) definitions are provided.
The name must be unique. This is a required
property.
label
Display name for the service. This is optional. If
you do not specify a value, the value defaults
to an empty string.
description
Description for the service. This is optional. If
you do not specify a value, the serviceName
value is used.
businessCalendar
Defines the name of the business calendar that
is used by the SLAs defined for the service. If
no calendar is specified, the outage time for the
service is defined only as operational or
non-operational. There is no holiday outage.
For more information, see “Configuring
business calendars” on page 253.
Operational hours properties
Table 73. Operational hours properties in the service definition properties file
Property
Description
operationalHour.num
Defines the number of operational hour periods
that can be defined for the service. If this is not
specified, at least 1 period is defined for the
service. If the value is set, you need to specify a
start and end time for the number of periods
that are specified in this parameter.
operationalHourStartTime
The start time of a single operational hours
period. You must specify values in the 24-hour
clock format, for example, 13:00:00. If no value
is specified, the default value 00:00:00 is
assigned. This value is used for any SLAs that
do not include operational hours.
operationalHourEndTime
The end time of a single operational hours
period. You must specify values in the 24-hour
clock format, for example, 17:00:00. If no value
is specified, the default value 23:59:59 is
assigned.
operationalHourStartTime.n
The start time of the operational period n. You
must specify values in the 24-hour clock format,
for example, 13:00:00. If this parameter is not
specified, the operational hour period is not
defined.
operationalHourEndTime.n
The end time of the operational period n. You
must specify values in the 24-hour clock format,
for example, 17:00:00. If this parameter is not
specified, the operational hour period is not
defined.
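For example, a service that is operational in the morning and afternoon with a midday break might define its periods as follows (the times are illustrative only):

```properties
# Service-level operational hours: two periods per day (example values)
operationalHour.num=2
operationalHourStartTime.1=08:00:00
operationalHourEndTime.1=12:00:00
operationalHourStartTime.2=13:00:00
operationalHourEndTime.2=17:00:00
```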
Identity Properties
Table 74. Identity properties in the service definition properties file
Property
Description
identity.num
Defines the number of identities that are
defined for a service. If you specify a value for
this property, you must specify the required
values for the same number of identity types.
identity
An identity represents an alternative method
for identifying a service when outage data is
gathered for the service. An identity is
defined as identityType:::identityString. If
you only specify a service name, the default
identity is tbsmIdentity:::servicename. If no
identity is specified, the service definition
cannot be created. For TBSM services, use the
instance name as the identity. The type defaults
to tbsmIdentity.
identity.n
The identity type and string for identity n. This
is required if you specify a value for the
identity.num parameter. If any required
identity is not included, the service definition
cannot be created.
For example, you can use multiple identities if
the outage data is gathered from multiple data
sources. In this example, you are receiving data
from another monitoring system. You define the
identities as:
identity.num=2
identity.1=internetBanking
identity.2=MSN:::IB001
where the identity type is MSN for Managed
System Name (MSN). IB001 is the value that is
specified for the MSN.
SLA Properties
Table 75. SLA properties in the service definition properties file
Property
Description
sla.num
Defines the number of SLA definitions. If this
property is set, you must specify a matching
number of SLA definitions.
sla.name
The name of the SLA. This is also used as the
metric name that is listed as an option in the
SLO Availability report. The name is required
and must be unique.
Note: If an SLA exists that uses the same name,
the existing SLA is updated with the new
properties based on the current service
definition. No properties are inherited from the
definition that is saved in the SLORPRT
database.
sla.n.name
The name of the nth SLA in the service
definition file. This is also used as the metric
name that is listed as an option in the SLO
Availability report. This parameter is required if
the sla.num parameter is specified. The SLA
definition is not used if the sla.n.name
parameter is not defined.
The name must be unique.
Note: If an SLA exists that uses the same name,
the existing SLA is updated with the new
properties based on the current service
definition. No properties are inherited from the
definition that is saved in the SLORPRT
database.
sla.displayName, sla.n.displayName
The label for the metric that is associated with
the SLA. If you do not specify a value for the
display name, the default empty string is used.
sla.description, sla.n.description
The description for the metric that is associated
with this SLA. If no value is specified, the
default empty string is used.
sla.timezone, sla.n.timezone
Time zone that is used by the SLA. If you do
not specify a value, GMT is used by default.
The time zone ID must be a valid value. For
more information, see “Configuring the time
zone” on page 252.
sla.operationalHourThreshold,
sla.n.operationalHourThreshold
An availability threshold for operational hours
in the SLA. You specify a numeric percentage,
for example 98.5. The threshold value is
displayed in the SLO Availability report. The
default is zero.
sla.nonOperationalHourThreshold,
sla.n.nonOperationalHourThreshold
An availability threshold for non-operational
hours in the SLA. You specify a numeric
percentage, for example 98.5. The threshold
value is displayed in SLO Availability report.
The default is zero.
sla.operationalHour.num
Defines the number of operational hour periods
that are defined for the SLA. If a value is not
specified, then at most 1 operational hour
period is defined for the SLA.
sla.operationalHourStartTime,
sla.operationalHourStartTime.n
The start time of a single operational hour
period. You must specify the value in 24-hour
clock format. For example, 08:00:00 for 8 AM.
sla.operationalHourEndTime,
sla.operationalHourEndTime.n
The end time of a single operational hour
period. You must specify the value in 24-hour
clock format. For example, 17:00:00 for 5 PM.
sla.n.operationalHourStartTime,
sla.n.operationalHourStartTime.n
The start time of a single operational hour
period for the nth SLA. You must specify the
value in 24-hour clock format. For example,
08:00:00 for 8 AM.
sla.n.operationalHourEndTime,
sla.n.operationalHourEndTime.n
The end time of a single operational hour
period for the nth SLA. You must specify the
value in 24-hour clock format. For example,
17:00:00 for 5 PM.
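Putting several of the properties above together, a single-SLA definition might look like the following. The SLA name, display name, time zone, and thresholds are hypothetical values for illustration:

```properties
# One SLA with thresholds, a time zone, and one operational hour period
sla.num=1
sla.1.name=InternetBanking_GoldSLA
sla.1.displayName=Internet Banking (Gold)
sla.1.description=Availability target for Internet Banking
sla.1.timezone=America/New_York
sla.1.operationalHourThreshold=99.5
sla.1.nonOperationalHourThreshold=95.0
sla.1.operationalHour.num=1
sla.1.operationalHourStartTime.1=08:00:00
sla.1.operationalHourEndTime.1=17:00:00
```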
Configuring the time zone
If a single business service operates in multiple time zones, you can specify a
time zone in the SLA definition that is included in the service definition for the
business service.
About this task
The time zone value is used to modify how the business calendar and the
operational hours are interpreted by the service definition. Netcool/Impact
includes utility functions that you can call to log information that helps you to
choose the time zone you need.
This setting is optional. It is not required for the service definition to work.
Procedure
1. Specify the time zone value in the service definition properties file. For more
information, see the documentation about the sla.timezone property in
“Service definition properties file” on page 248.
2. To get information about the available time zone identifiers, create a policy that
calls the utility functions.
3. Use the sample code to help you to create the policy that calls the utility
function.
To log information about all the time zone IDs, call the following function:
/*
* Log information for all time zones.
*/
function logAllTimeZoneInfo()
To log information about a specific time zone ID, call the following function:
/*
* Log information for a specific time zone id.
*/
function logTimeZoneInfo(<timezone_id>)
where <timezone_id> is the ID of the time zone.
To log information for all the time zone IDs that use a specific offset, call the
following function:
/*
* Log information for time zones with a specific offset from GMT
*/
function logTimeZoneInfoForOffset( <hours> )
For example, you can use the following policy to load the utility functions and
log the timezone IDs:
/* This statement is required for the utility functions
   to be loaded. */
Load("serviceDefinition");
Log( 0,"Starting policy timeZoneInfoLogger" );
/* Uncomment the function you need to help you pick
   a time zone id. */
/* Call a function to log information about all valid
   time zones. */
/* logAllTimeZoneInfo(); */
/* Call a function to log information about a specific
   time zone. Replace "TimeZoneID" with the ID for which you
   need information. */
/* logTimeZoneInfo( "TimeZoneID" ); */
/* Call a function to log information about time zones
   offset by n hours from GMT. Replace "offset"
   with a number of hours offset from GMT. For example,
   5, +6, -3, -2.5 are valid offsets. */
/* logTimeZoneInfoForOffset( offset ); */
4. Save your changes.
5. Run the policy. The time zone information that you requested in the policy is
listed in the policylogger.log file.
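You can also preview candidate offsets outside Impact. The following Python sketch is illustrative only; it is not part of the product, and the authoritative list comes from the logAllTimeZoneInfo utility function. It lists the IANA time zone IDs whose current offset from GMT matches a given number of hours:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo, available_timezones

def zones_for_offset(hours):
    """List time zone IDs whose current UTC offset equals the given hours.

    Fractional offsets such as -2.5 are accepted, mirroring the
    logTimeZoneInfoForOffset utility function described above.
    """
    target = timedelta(hours=hours)
    now = datetime.now(tz=ZoneInfo("UTC"))
    return sorted(
        tz for tz in available_timezones()
        if now.astimezone(ZoneInfo(tz)).utcoffset() == target
    )

# Example: candidate IDs for an SLA five hours behind GMT.
# The exact list varies with daylight saving time.
print(zones_for_offset(-5)[:5])
```

Note that current offsets shift with daylight saving time, so prefer a region ID (for example, America/New_York) over a fixed-offset ID when the service observes DST.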
Configuring business calendars
You can use the business calendar feature to identify the holidays and weekends
during a specific time period. This is optional.
About this task
Note: If you specified a time zone value in the SLA, Netcool/Impact uses this
value to compare the outage times with the holiday and weekend values that are
defined in the business calendar. For example, if you specify GMT as the time zone
in the SLA, Netcool/Impact compares the time in GMT to the holiday and
weekend times that are defined in the business calendar.
For more information about the properties that you can specify in a business
calendar definition, see “Business calendar properties file” on page 255.
If you want to use common holiday and weekend values for use with multiple
business calendars, you can create common business calendars. For more
information, see “Creating common properties in business calendars” on page 254.
For examples of different types of business calendars, see “Properties files
examples” on page 264.
Procedure
1. Create a file called <business_calendar>.props. Open the file.
2. Define a business calendar. The following example defines a business calendar
called US. This business calendar defines two holidays and two weekend days:
calendar.name = US
calendar.holidays.dateformat = MMM dd,yyyy
calendar.holidays.num = 2
calendar.holidays.1 = Jan 1,2015
calendar.holidays.2 = Jul 4,2015
calendar.weekends.num = 2
calendar.weekends.1 = 1
calendar.weekends.2 = 7
3. Save the file.
4. Run the createBusinessCalendarDefn policy. You need to use the Business
Calendar Definition Filename parameter to pass the name of the file in the
policy.
If you want to change the holiday or weekend days for a calendar, just update
the properties file and rerun the createBusinessCalendarDefn policy to create
the calendar. The new definition will replace the existing definition. If you want
to delete a calendar, refer to “SLO Utility Functions” on page 269.
Creating common properties in business calendars
If you need several business calendars that share weekend days or holidays, you
can define the common properties in a common calendar.
About this task
After you define the common calendar, you specify the name of the common
calendar in the calendar.duplicateCalendarName property in the business calendar
definition. After you specify the common calendar, you need to define the unique
holiday and weekend days for the business calendar in a second business calendar.
Procedure
1. Define a calendar that is called COMMON.US. For example:
calendar.name=COMMON.US
calendar.holidays.dateformat= MMM dd,yyyy
calendar.holidays.num = 6
calendar.holidays.1 = Jan 1,2015
calendar.holidays.2 = Feb 14,2015
calendar.holidays.3 = Dec 25,2015
calendar.holidays.4 = Jan 1,2016
calendar.holidays.5 = Feb 14,2016
calendar.holidays.6 = Dec 25,2016
calendar.weekends.num = 2
calendar.weekends.1 = 1
calendar.weekends.2 = 7
2. Define a second business calendar to specify the unique holiday and weekend
values. In the example, this business calendar specifies the holidays that are
unique to the United States. For example, create a calendar that is called US that
contains the following properties:
calendar.name=US
calendar.holidays.dateformat= MMM dd,yyyy
calendar.holidays.num = 2
calendar.holidays.1 = Jul 4,2015
calendar.holidays.2 = Jul 4,2016
254
Netcool/Impact: Solutions Guide
3. To add another region, you need to create a common file. For example, create a
calendar that is called COMMON.Canada. This calendar duplicates the properties
that are specified in the COMMON.US file:
calendar.name=COMMON.Canada
calendar.duplicateCalendarName=COMMON.US
4. To specify the values for the holidays and weekend days, create another
business calendar. For example, create a calendar that is called Canada that
specifies the unique holidays for Canada:
calendar.name=Canada
calendar.holidays.dateformat= MMM dd,yyyy
calendar.holidays.num = 2
calendar.holidays.1 = Jul 1,2015
calendar.holidays.2 = Jul 1,2016
5. To add any other region, repeat steps 3 and 4.
Results
When the business calendar is used by an SLO service definition, the function
checks the specified business calendar and the common version of the calendar.
For example, if the service definition includes the businessCalendar=US property,
the policy function checks both the common calendar, COMMON.US and the calendar
that specifies the unique values for the country, the US business calendar in the
example. The function uses the values in both to calculate the holidays and
weekend days.
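The merging behavior described in these results can be sketched in JavaScript, the language that Netcool/Impact policies are written in. This is an illustrative sketch only: the function name effectiveCalendar and the data layout are assumptions for illustration, not the actual SLO implementation.

```javascript
// Sketch: how an SLO function might combine a business calendar with its
// COMMON counterpart. Hypothetical code, not the shipped implementation.
function effectiveCalendar(calendars, name) {
  var specific = calendars[name] || { holidays: [], weekends: [] };
  var common = calendars["COMMON." + name] || { holidays: [], weekends: [] };
  return {
    // The function uses the values in both calendars.
    holidays: specific.holidays.concat(common.holidays),
    weekends: specific.weekends.concat(common.weekends)
  };
}

// Calendars as defined in the US example in this procedure.
var calendars = {
  "COMMON.US": {
    holidays: ["Jan 1,2015", "Feb 14,2015", "Dec 25,2015"],
    weekends: [1, 7] // 1 = Sunday, 7 = Saturday (Java Calendar numbering)
  },
  "US": {
    holidays: ["Jul 4,2015"],
    weekends: []
  }
};

var us = effectiveCalendar(calendars, "US");
// us.holidays now contains both the US-specific and the common holidays.
```

With businessCalendar=US in the service definition, both the US and COMMON.US entries contribute holidays and weekend days, which is the behavior this sketch models.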
Business calendar properties file
Use the business calendar properties file to specify the business calendars that are
used in SLO reporting.
Table 76. Business calendar properties

calendar.name
    Specify the name of the business calendar that you want to define. If
    you do not specify a value for this property, the
    createBusinessCalendarDefn policy raises an exception.
    Some calendars are prefixed with COMMON.. For more information, see
    “Creating common properties in business calendars” on page 254.

calendar.duplicateCalendarName
    If you want to create a duplicate of an existing calendar, specify the
    name of the existing calendar.
    If the specified duplicate does not exist, the
    createBusinessCalendarDefn policy raises an exception.
Chapter 10. Service Level Objectives (SLO) Reporting
calendar.holidays.dateformat
    Specify the date format that is used to specify the holiday dates for
    the calendar. For example, you can specify the date format as MMM
    dd,yyyy and specify New Year's Day as Jan 01,2015. This parameter is
    required if you want to specify holidays in the calendar.
    You must use a valid format, as defined by the SimpleDateFormat class
    in Java.
    If this property is omitted, no holidays are defined in the calendar.

calendar.holidays.num
    Specify the number of holiday dates that are specified for the
    business calendar. This parameter is required if you want to specify
    holidays.
    If this property is omitted, no holidays are defined in the business
    calendar.

calendar.holidays.n
    Specify the date of the nth holiday for the business calendar, in the
    date format that is specified in the calendar.holidays.dateformat
    parameter. For example, specify Jan 01,2015 for New Year's Day.
    This parameter is required for each value of n up to the number that
    is specified in the calendar.holidays.num parameter.
    If you do not specify a value for one of these parameters, the policy
    does not create a holiday for that parameter.

calendar.weekends.num
    Specify the number of weekend days for the business calendar. This
    parameter is required if you want to specify weekends in your business
    calendar.
    If you omit this parameter, no weekend days are defined in the
    business calendar.

calendar.weekends.n
    Specify the nth weekend day for the business calendar.
    This parameter is required for each value of n up to the number that
    is specified in the calendar.weekends.num parameter.
    If you do not specify a value for one of these parameters, the policy
    does not create a weekend day for that parameter.
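A business calendar properties file such as the one in Table 76 can be read with ordinary key=value parsing. The following JavaScript sketch is hypothetical, not the createBusinessCalendarDefn implementation; it illustrates the validation and numbering rules from the table: calendar.name is required, and numbered entries are read only up to the declared num value.

```javascript
// Sketch: parse a business calendar properties file into an object and
// apply the rules from Table 76. Illustrative only.
function parseCalendarProps(text) {
  var props = {};
  text.split("\n").forEach(function (line) {
    line = line.trim();
    if (line === "" || line.charAt(0) === "#") return; // skip blanks, comments
    var eq = line.indexOf("=");
    props[line.slice(0, eq).trim()] = line.slice(eq + 1).trim();
  });
  if (!props["calendar.name"]) {
    // Mirrors the policy raising an exception when the name is missing.
    throw new Error("calendar.name is required");
  }
  var cal = { name: props["calendar.name"], holidays: [], weekends: [] };
  var n = parseInt(props["calendar.holidays.num"] || "0", 10);
  for (var i = 1; i <= n; i++) {
    // Missing numbered properties are skipped; entries above num are ignored.
    if (props["calendar.holidays." + i]) {
      cal.holidays.push(props["calendar.holidays." + i]);
    }
  }
  var w = parseInt(props["calendar.weekends.num"] || "0", 10);
  for (var j = 1; j <= w; j++) {
    if (props["calendar.weekends." + j]) {
      cal.weekends.push(parseInt(props["calendar.weekends." + j], 10));
    }
  }
  return cal;
}

var cal = parseCalendarProps(
  "calendar.name=US\n" +
  "calendar.holidays.dateformat= MMM dd,yyyy\n" +
  "calendar.holidays.num = 2\n" +
  "calendar.holidays.1 = Jul 4,2015\n" +
  "calendar.holidays.2 = Jul 4,2016\n" +
  "calendar.weekends.num = 2\n" +
  "calendar.weekends.1 = 1\n" +
  "calendar.weekends.2 = 7\n");
```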
Retrieving SLA metric data
After you create the service definition file, you need to create policies to retrieve
the SLA metric data. The package includes sample policies and functions that you
can use to retrieve data from TBSM.
About this task
The SLO Reporting package includes sample policies that you can use to store
outage data from TBSM. For more information, see “SLO reporting policies.”
You can also use a number of functions to help you to retrieve metric data. For
more information, see “SLO reporting policy functions.”
Procedure
1. Define and implement the service definition and SLA definitions for the service:
v Create the service definition properties file, including the SLA definitions.
v To implement the business service definition, run the
createServiceDefinition policy, configuring the policy to pass the service
definition properties files as a parameter.
v If you require business calendars, run the createBusinessCalendarDefn
policy, configuring the business calendar properties files as a parameter.
2. Use a Netcool/Impact policy to record the metric information. Create a policy
that uses the recordSLAMetric policy function. The db2GetData sample policy
provides instructions and sample code that can help you create your policy.
The Netcool/Impact policies are written in the JavaScript language. Use the
Load function to load the recordSLAMetric policy into the data retrieval
policy.
To use the SLO functions, add the following two commands to the start of your
policy:
Load("slaDefGlobalSettings");
Load("recordSLAMetric");
3. Save the policy.
SLO reporting policies
The SLO reports package contains the following policies:
v BusinessCalendar: Provides functional support for business calendars.
v createBusinessCalendarDefn: Creates a business calendar that is based on a
properties file.
v createServiceDefinition: Creates a service definition that is based on a
properties file.
v recordSLAMetric: Provides supporting functions that are used to record SLA
metrics.
v serviceDefinition: Provides functional support for service definitions.
v slaDefGlobalSettings: Provides support for global settings.
v db2GetData: Sample policy.
v getDataFromTBSMAvailability: Sample policy.
v serviceDowntimeBasedOnTBSMStatusChange: Sample policy.
SLO reporting policy functions
Use the following policy functions to help you to record outage data in the
SLORPRT database.
The following policy functions are included as part of the SLO Reporting package:
v addCorrelationValue
v recordMetric
v addCheckpointValue
v getCheckpointValue
v addSLAMetricWithOperationalHoursAndBusinessCalendar
addCorrelationValue
The addCorrelationValue function records a correlation value for the SLA metric
that is being recorded. The correlated value is stored as a string. The format of the
string is defined by the user. The user is responsible for data maintenance.
Table 77. addCorrelationValue parameters

serviceName
    The service name for the correlation value that is being stored.
CorrelationValue
    The user's value, which is stored in the database.
MetricName
    The metric name for the correlation value that is being stored.
TimeRecorded
    The time record for this correlation value.
The following examples show where the addCorrelationValue function can be
used.
v Example 1:
Suppose that the downtime of a service must be calculated based on the
correlated value of the status of an event. When the event is generated
(open state), record the serial number in the correlated value with the time
recorded. When the event is updated with the close state, retrieve the
correlated “open time” from the correlated table. Use the time recorded field
as the “creation time” and the current time as the resolved time.
v Example 2:
If you want to store data to be used in the report later, the addCorrelationValue
function can be used. For example, the ticket number for which the service
downtime is being recorded can be stored in this table. Using the timeRecorded
field, service name, and the metric name, the user can generate a report of all
the tickets that are associated with a SLA metric.
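Example 1 can be sketched in JavaScript. This is an illustrative sketch of the open/close correlation pattern: the real addCorrelationValue function stores rows in the SLORPRT database, so the in-memory table and the exact parameter order here are assumptions.

```javascript
// Sketch of the open/close correlation pattern from Example 1.
// A plain object stands in for the correlation table in SLORPRT.
var correlationTable = {};

// Parameter names follow Table 77; the order and types are assumed.
function addCorrelationValue(serviceName, metricName, correlationValue, timeRecorded) {
  correlationTable[serviceName + "/" + metricName + "/" + correlationValue] = timeRecorded;
}

function getCorrelationTime(serviceName, metricName, correlationValue) {
  return correlationTable[serviceName + "/" + metricName + "/" + correlationValue];
}

// Event opens: record the serial number with the time recorded.
addCorrelationValue("SLOService", "serviceDowntime", "Serial:12345", 1000);

// Event closes: retrieve the correlated "open time" and compute the outage.
var openTime = getCorrelationTime("SLOService", "serviceDowntime", "Serial:12345");
var resolvedTime = 4600; // current time when the close state arrives
var outageSeconds = resolvedTime - openTime;
```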
recordMetric
The recordMetric function records a single SLA metric. The SLA metric name
must be defined in the service definition.
Table 78. recordMetric function parameters

serviceName
    The service name for the SLA metric that is being recorded.
metricName
    The SLA metric name.
operValue
    The value that needs to be stored during operational hours.
nonOperValue
    The value that needs to be stored during non-operational hours.
holidayValue
    The value that needs to be recorded during holiday hours.
CreationTimeInSeconds
    The time when this metric was created or recorded. The value must be
    in seconds from Jan 1 1970.
operHourResourceId
    Pass -1 for this parameter.
If you need to record the time for the SLA, always use the
addSLAMetricWithOperationalHoursAndBusinessCalendar function.
addCheckpointValue
The addCheckpointValue function adds a check point value for the solution. This
value can be used to store check points while the retrieval of the source data is
being processed.
Table 79. addCheckpointValue parameters

serviceName
    The service name for the check point value that is to be stored.
MetricName
    The metric name for the check point value that is to be stored.
Value
    The check point value that is to be recorded.
Example:
If the service downtime is based on the amount of time that a ticket is open,
you can use the addCheckpointValue function to track the last record that is
read from the source database. Store the last resolved time that was read
from the database. The next query can use the value that is stored in the
checkpoint database as the filter.
getCheckpointValue
The getCheckpointValue function is used to retrieve the checkpoint value for a
service name and metric name. This function is used to get the value that was
added by the addCheckpointValue function.
Table 80. getCheckpointValue parameters

serviceName
    The service name for the checkpoint value to be retrieved from the
    table.
MetricName
    The metric name for the checkpoint value to be retrieved from the
    table.
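The checkpoint pattern that these two functions support can be sketched in JavaScript. This sketch is illustrative only: the real functions read and write the checkpoint table in the SLORPRT database, so the in-memory object here is an assumption.

```javascript
// Sketch of the checkpoint pattern: remember the last resolved time read
// from the source database so the next query can filter on it.
// A plain object stands in for the checkpoint table in SLORPRT.
var checkpointTable = {};

function addCheckpointValue(serviceName, metricName, value) {
  checkpointTable[serviceName + "/" + metricName] = value;
}

function getCheckpointValue(serviceName, metricName) {
  return checkpointTable[serviceName + "/" + metricName];
}

// First run: no checkpoint yet, so read everything.
var last = getCheckpointValue("SLOService", "serviceDowntime") || 0;

// ...query tickets with resolvedTime > last, then store the newest one seen:
addCheckpointValue("SLOService", "serviceDowntime", 1700000000);

// The next run picks up where the previous run left off.
var next = getCheckpointValue("SLOService", "serviceDowntime");
```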
addSLAMetricWithOperationalHoursAndBusinessCalendar
The addSLAMetricWithOperationalHoursAndBusinessCalendar function inserts
downtime based on operational hours and the business calendar, if a business
calendar is defined for the SLA that is associated with the service. If a
business calendar is specified, the business calendar is applied. Similarly,
if operational hours are specified, the time is broken down into operational
and non-operational hours.
Note: The start and end time values must be passed as GMT values. This
ensures that the time is calculated correctly based on the time zone property
that is defined in the service's SLA definition.
Table 81. addSLAMetricWithOperationalHoursAndBusinessCalendar parameters

ctime
    The start time for the outage to be recorded.
Rtime
    The end time for the outage to be recorded.
serviceName
    The service name, as defined in the service definition properties
    file, for the metric name to be recorded. If there is an identity that
    is defined for that service, then the identity can be passed to the
    function. For more information about this property, see the entry for
    the identity property in “Service definition properties file” on page
    248.
MetricName
    The metric name, as defined in the service definition properties file,
    for the SLA to be recorded.
Note: The values that you specify in the ctime and Rtime parameters are converted
into the time zone that is defined in the SLA. If no time zone is specified, the
default value, GMT, is used. Therefore, you may need to adjust the values that you
use here, depending on the source of the data. For example, if you are using metric
history data from Tivoli Business Service Manager and you use the default time
zone in the SLA definition, you do not need to change anything because the source
data is also calculated in GMT in Tivoli Business Service Manager.
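The breakdown into operational and non-operational time can be sketched in JavaScript. This is an illustration of the idea, not the function's implementation: times are seconds since midnight on a single day, which sidesteps time zones, holidays, and multi-day outages.

```javascript
// Sketch: split an outage interval into operational and non-operational
// seconds, given a list of operational hour periods for the SLA.
function splitOutage(startSec, endSec, operationalPeriods) {
  var oper = 0;
  operationalPeriods.forEach(function (p) {
    // Overlap of the outage with this operational period.
    var lo = Math.max(startSec, p.start);
    var hi = Math.min(endSec, p.end);
    if (hi > lo) oper += hi - lo;
  });
  return { operational: oper, nonOperational: (endSec - startSec) - oper };
}

// Operational hours 08:00-12:00 and 13:00-17:00, as in the properties file
// examples later in this chapter.
var periods = [
  { start: 8 * 3600, end: 12 * 3600 },
  { start: 13 * 3600, end: 17 * 3600 }
];

// An outage from 11:00 to 14:00 spans the lunch gap: two hours fall inside
// operational periods, and the 12:00-13:00 hour is non-operational.
var split = splitOutage(11 * 3600, 14 * 3600, periods);
```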
Using the getDataFromTBSMAvailability sample policy
The getDataFromTBSMAvailability sample policy obtains status changes from the
TBSM metric history database to record the downtime for the service.
Procedure
1. In Netcool/Impact, create a data source, for example TBSM_History, that
connects to the TBSM metric history database where the status changes are
stored. For more information about the TBSM metric history, see the TBSM
documentation available from the following URL: http://www-01.ibm.com/support/knowledgecenter/SSSPFK_6.1.1.3/com.ibm.tivoli.itbsm.doc/timewa_server/twa_metrics_c_intro.html
2. In Netcool/Impact, create a data type called HISTORY_VIEW_METRIC_VALUE, and
associate this data type with the HISTORY_VIEW_RESOURCE_METRIC_VALUE view in
the TBSM metric history database.
3. In Netcool/Impact, create a policy activator service to activate the
getDataFromTBSMAvailability policy.
The SLA project that is imported when you deploy the SLO add-on function
includes a sample policy activator service called GetData. When this service is
started, it activates the db2GetData sample policy on a 300-second interval. Use
this example to help you to create your own policy activator service.
4. The getDataFromTBSMAvailability policy reads records that show changes in
the TBSM status for the services identified by your SLO service definitions. An
outage is calculated in one of two ways:
a. From the time a service is marked "Bad" in TBSM until the time the status
changes to anything other than “Bad”. This is the default behavior when
you deploy the SLO feature for the first time.
b. From the time a service is marked "Bad" in TBSM until the time the status
changes to “Good”. This is the behavior if you deployed the SLO feature
prior to installing Fixpack 4.
Starting with Impact 7.1.0 Fixpack 3, the getDataFromTBSMAvailability policy
can retrieve the outage times for active outages. Active outages are defined as
those where the status is currently "Bad" and not yet resolved. The end time is
recorded as the current time when the active outage is first recorded. Each
subsequent run of the policy updates the outage with the current time as the
updated end time, until the final outage time is recorded when the status either
becomes “Good” or not “Bad”, depending on how the policy is configured to
run.
Configuring getDataFromTBSMAvailability
There are two different algorithms that can be used to decide when an outage
starts and ends using the TBSM metric history data. If the SLO function is
deployed after installing Fixpack 4, the default is that an outage is defined
as only the time when the TBSM status is “Bad”.
Prior to Fixpack 4, the algorithm calculated the outage from the time that the
status first changed to “Bad” until the status became “Good”, regardless of
other statuses that might be reported by TBSM in the interim. This algorithm
can still be used if you already have SLO deployed and prefer this method.
Consider an example where the status of a service changes to “Bad” at time T3,
changes to “Marginal” at time T4, and returns to “Good” at time T5.
The default algorithm with Fixpack 4 records an outage time of T4 – T3, which
is only the time period when the status was “Bad”. The legacy algorithm
records the outage as T5 – T3, ignoring the intervening change to
“Marginal”.
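The two algorithms can be sketched in JavaScript against the T3/T4/T5 timeline above. This is an illustration of the described behavior, not the policy's code; the record layout is an assumption.

```javascript
// Sketch of the two outage algorithms. "changes" is an ordered list of
// status-change records from the metric history.
function outageSeconds(changes, legacy) {
  var total = 0, badSince = null, outageStart = null;
  changes.forEach(function (c) {
    if (c.status === "Bad") {
      if (badSince === null) badSince = c.time;
      if (outageStart === null) outageStart = c.time;
    } else {
      // Default (Fixpack 4): the outage ends as soon as the status is
      // anything other than "Bad".
      if (!legacy && badSince !== null) {
        total += c.time - badSince;
        badSince = null;
      }
      // Legacy: the outage only ends when the status becomes "Good".
      if (legacy && outageStart !== null && c.status === "Good") {
        total += c.time - outageStart;
        outageStart = null;
      }
    }
  });
  return total;
}

var changes = [
  { time: 100, status: "Good" },     // T1
  { time: 300, status: "Bad" },      // T3
  { time: 400, status: "Marginal" }, // T4
  { time: 500, status: "Good" }      // T5
];

var defaultOutage = outageSeconds(changes, false); // T4 - T3
var legacyOutage = outageSeconds(changes, true);   // T5 - T3
```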
You can use the sloSetConfigurationValue policy to ensure that you are using
the algorithm that you prefer. For more information about setting the SLO
configuration properties, see “SLO Utility Functions” on page 269.
Reports
The application availability report package contains a sample application
availability report.
The report records the service downtime for the application. The availability
report is generated at the entity level for all the applications. The report
prompts for the service and the SLA that you want the report to run against,
and a date. The report template has the following options:
v A line chart that reports the availability for up to a month for the date
you selected.
v A line report that reports the availability for up to a year up to the date
you selected.
v A table that contains the data for each application.
Example SLO reporting configuration
To help you to understand how SLO reporting can be integrated with Tivoli
Business Service Manager (TBSM), you can use this example configuration.
Before you begin
For a complete set of prerequisites, see “SLO reporting prerequisites” on page 243.
This sample configuration assumes the following:
v You have installed Tivoli Business Service Manager 6.1.1 Fix Pack 3 or later.
v You have configured the Metric History database on DB2 10.1 or higher in
TBSM.
v You have installed Jazz for Service Management 1.1.2 with Tivoli Common
Reporting 3.1 or higher on DB2 10.1.
About this task
After you complete this procedure, you will understand how to implement SLO
reporting for metric history data that is stored in TBSM.
Procedure
1. Create the SLORPRT reporting database. For example:
db2 create database SLORPRT using CODESET UTF-8 territory en-US
You must define the database. If the database is on the same server as Tivoli
Common Reporting, then you do not need to install a DB2 client. If the
database is remote, you need to install a DB2 client and catalog the node and
database on the local machine.
2. Copy the slo_db2schema.sql file from <IMPACT_HOME>/add-ons/slo/db to the
server where you want to create the SLORPRT database. Log in with the DB2
user and run the following commands:
db2 connect to SLORPRT
db2 -tvf slo_db2schema.sql
db2 connect reset
3. Import the SLA and SLA_Utility projects.
Navigate to the <IMPACT_HOME>/bin directory and run the following commands:
nci_import.[bat/sh] <servername> <IMPACT_HOME>/add-ons/slo/importData
nci_import.[bat/sh] <servername> <IMPACT_HOME>/add-ons/slo/sloutility
For example:
./nci_import NCI /opt/IBM/tivoli/impact/add-ons/slo/importData
./nci_import NCI /opt/IBM/tivoli/impact/add-ons/slo/sloutility
4. Import the report package to the Tivoli Common Reporting server.
a. Copy the ImpactSLOReportPackage.zip file from the <IMPACT_HOME>/add-ons/slo/Reporting directory in Netcool/Impact to the server where Tivoli
Common Reporting is installed.
b. Log in to the Tivoli Common Reporting server with the user who installed
Tivoli Common Reporting.
c. Navigate to the /bin directory, for example /opt/IBM/JazzSM/reporting/bin.
d. To create a data source for the SLORPRT database, enter the trcmd
command:
trcmd.sh -user <TCR user> -password <TCR password>
-datasource -add SLORPRT -connectionString <db2 connection string>
-dbType DB2 -dbName <database name>
-dbLogin <database username> -dbPassword <database password>
For example:
./trcmd.sh -user smadmin -password password -datasource -add SLORPRT
-connectionString jdbc:db2://TCRserver.example.com:50000/SLORPRT
-dbType DB2 -dbName SLORPRT -dbLogin db2inst1 -dbPassword password
e. To import the SLO reporting package, enter the trcmd command:
./trcmd.sh -import -bulk <file> -user <TCR User>
-password <TCR password>
For example:
./trcmd.sh -import -bulk /tmp/ImpactSLOReportPackage.zip -user smadmin
-password password
5. Update the SLO Reporting data sources.
a. To log in to the Netcool/Impact GUI, use the appropriate URL.
b. To open the Data Model page, click on the Data Model icon and select
Start Now from the drop-down menu, or just click on the Data Model tab
at the top of the page.
c. Click on the project selection drop-down menu in the top right corner of the
screen and select SLA.
d. Edit the SLOReportDatasource. Change the user name, password, host
name, port, and database values to match the SLORPRT database.
e. To confirm that the values are correct, click Test Connection.
f. Save your changes.
g. Repeat the previous steps for datasource SLOUtilityDatasource in project
SLA_Utility.
6. Create the service definitions.
Services are specified by users in a properties file on the Data Server in
Netcool/Impact. You use the createServiceDefinition policy in Netcool/Impact
to import these definitions into Netcool/Impact.
a. Log in to the TBSM server with the user who installed TBSM.
b. Select the service that you want to create a definition for.
c. Create a copy of the service definition properties file in a temporary folder
called ServiceDefinition.props.
d. Add the properties. For more information, see “Service definition properties
file” on page 248.
e. Save your changes.
f. Log in to the Netcool/Impact GUI.
g. To open the Policies page, click on the Policies icon and select Start Now
from the drop-down menu, or just click on the Policies tab at the top of the
page.
h. Click the createServiceDefinition policy and click the Run with
parameters button.
i. Enter the path to the directory where the ServiceDefinition.props file is
stored.
j. To run the policy, click Execute.
k. To verify that the policy has run correctly, click View Policy Log.
7. Retrieve outage data from TBSM to store in the SLORPRT database
a. Log in to the Netcool/Impact GUI.
b. To open the Data Model page, click on the Data Model icon and select
Start Now from the drop-down menu, or just click on the Data Model tab
at the top of the page.
c. Create a DB2 data source called TBSM_HISTORY. Specify the values for the
TBSM Metric History Database.
d. Test the connection and save the data source.
e. Create a data type called HISTORY_VIEW_METRIC_VALUE for the TBSM metric
history data source.
f. Select TBSM_HISTORY and HISTORY_VIEW_RESOURCE_METRIC_VALUE.
g. Click Refresh.
h. Select HISTORYRESOURCEID as the key field.
i. Save the data type.
j. To open the Policies page, click on the Policies icon and select Start Now
from the drop-down menu, or just click on the Policies tab at the top of the
page.
k. Select the getDataFromTBSMAvailability policy and click Run. To update
the data at regular intervals, you can use the GetData service. Edit the
service so that the getDataFromTBSMAvailability policy runs at regular
intervals, which you can specify in seconds. By default, the policy will
record outages only when the TBSM status is “Bad”.
8. Run the Application Availability report in Tivoli Common Reporting
a. Log in to the IBM Dashboard Application Services Hub GUI.
b. To open the Common Reporting page, click Reporting > Common
Reporting.
c. Click Impact SLO Report > SLO application availability report.
d. Select the appropriate resource in the Parameter Selection field.
e. Select the metric name.
f. Specify an end date that occurs after the latest import of historic data for the
service and click Finish.
Properties files examples
Use these examples to help you to define the service definition properties and
business calendar properties files.
Operational hours service level example
Use this example to help you to understand how to specify the same operational
hours for all the SLAs in a service definition.
This example defines the operational hours at the service level. These hours apply
to all the SLAs in the example because there are no operational hours defined for
the SLAs.
# This sample service definition file defines the operational
# hours as properties of the service. Multiple slas are defined
# without operational hours. Each sla in this file will use the
# operational hours defined for the service.
# These hours apply ONLY to slas defined in this same file.
serviceName=SLOService
description=Service for testing SLO
identity=SLOService
# The next set of properties define the operational hours for the
# service. These are applied to all the slas defined later in the
# file, assuming the slas do not include specific operational hour
# definitions.
operationalHour.num=2
operationalHourStartTime.1 = 8:00:00
operationalHourEndTime.1 = 12:00:00
operationalHourStartTime.2 = 13:00:00
operationalHourEndTime.2 = 17:00:00
# Now define the slas, but do not define any operational hour periods.
sla.num=2
sla.1.name=serviceDowntimeEST
sla.1.displayName=Service Down Time
sla.1.description=Service Down Time
sla.1.timezone=EST
sla.1.operationalHourThreshold = 99.5
sla.1.nonOperationalHourThreshold = 98.5
sla.2.name=serviceDowntimePST
sla.2.displayName=Service Down Time
sla.2.description=Service Down Time
sla.2.timezone=PST
sla.2.operationalHourThreshold = 99.0
sla.2.nonOperationalHourThreshold = 98.0
Single SLA example
Use this example service definition to help you to understand how to define
operational hours for a single SLA in a service definition properties file.
This example service definition specifies a single set of operational hours for the
SLA:
# This sample service definition file contains a single sla
# definition with a single operational hour period. A business
# calendar is also defined.
serviceName=SLOService
description=Service for testing SLO
label=SLO Test Service
identity=SLOService
businessCalendar=US
sla.name=serviceDowntime
sla.displayName=Service Down Time
sla.description=Service Down Time
# Define a single operational hour period, 8AM to 5PM.
# Time zone defaults to GMT.
sla.operationalHourStartTime = 8:00:00
sla.operationalHourEndTime = 17:00:00
sla.operationalHourThreshold = 99.5
sla.nonOperationalHourThreshold = 98.5
Time zone example
Use the following example to help you to understand how to define operational
hours in different time zones.
This example defines two sets of operational hours and time zones, one for
each coast of the United States:
# This sample service definition file contains multiple sla definitions
# to support operational hours across multiple time zones.
# Multiple operational hour periods are defined. A business calendar is
# also defined that will apply to both slas.
serviceName=SLOService
description=Service for testing SLO
label=SLO Test Service
identity=SLOService
# The following business calendar specification will apply to all
# slas defined in this properties file. If there is another sla or slas
# for the same service that require a different calendar, then you can
# copy this file, change the businessCalendar value, and replace the sla
# definitions below. Note that the description, label, and identity
# will be replaced for the service if any of those property values are
# changed in the copied file.
businessCalendar=US
# Creating two slas to measure availability across multiple time zones.
sla.num=2
# The first sla reflects the operational hours on the East Coast of the US.
# Normal hours are 8AM to 5PM, with an hour down for lunch at noon.
sla.1.name=serviceDowntimeEST
sla.1.displayName=Service Down Time
sla.1.description=Service Down Time
sla.1.timezone=EST
sla.1.operationalHour.num=2
sla.1.operationalHourStartTime.1 = 8:00:00
sla.1.operationalHourEndTime.1 = 12:00:00
sla.1.operationalHourStartTime.2 = 13:00:00
sla.1.operationalHourEndTime.2 = 17:00:00
sla.1.operationalHourThreshold = 99.5
sla.1.nonOperationalHourThreshold = 98.5
# The second sla reflects the operational hours on the West Coast of the US.
# Starting time is an hour later, but no down time for lunch.
# The thresholds displayed on the availability report are slightly lower.
sla.2.name=serviceDowntimePST
sla.2.displayName=Service Down Time
sla.2.description=Service Down Time
sla.2.timezone=PST
sla.2.operationalHourStartTime = 9:00:00
sla.2.operationalHourEndTime = 17:00:00
sla.2.operationalHourThreshold = 99.0
sla.2.nonOperationalHourThreshold = 98.0
Simple service definition example
Use the following example to help you to understand how you can configure a
simple service definition properties file.
This example consists of the minimal required parameters:
# This sample service definition file consists of only the required properties
serviceName=SLOService
# label defaults to the empty string and description defaults to the
# serviceName value. businessCalendar is optional, with no default value.
# identity is a required property, but can be the same as the serviceName
# value when using a TBSM service.
identity=SLOService
# Though technically not required, a service definition with no
# SLA metric name defined will not result in any outage data being
# created in the SLORPRT database.
sla.name=serviceDowntime
# sla.displayName and sla.description both default to the empty string
# sla.timezone defaults to GMT
# sla.operationalHourThreshold and sla.nonOperationalHourThreshold
# values default to 0. Since no operational hours are defined for
# the SLA or the service, the single default operational hour
# period "00:00:00" to "23:59:59" is used.
Multiple identities in a service definition example
Use this example to help you to understand how to define multiple identities in a
single service definition file.
This example shows how you can use multiple identities in a single service
definition properties file:
# This sample service definition file illustrates how to specify
# multiple identities for a service. This can be used when there are
# multiple sources of outage data for a service, but these sources use a
# different identity for the service.
serviceName=SLOService
description=Service for testing SLO
# There will be outage data gathered from 3 different sources,
# including the TBSM Metric History database. An identity is of the form
# "identityType:::identityString". The default identity is
# "tbsmIdentity:::serviceName".
# The SLO metric functions can be passed any of these identities as the
# "resource name" and the outage data will be calculated for service "SLOService".
identity.num=3
# The next identity is the same as tbsmIdentity:::SLOService
identity.1=SLOService
identity.2=MSN:::ManagedSystemName/ABCCompany/SLOService
identity.3=GUID:::SLOService:guid:1ab2cd3ef4gh
# The rest of this file contains the single sla being defined by this
# file for SLOService.
sla.name=serviceDowntime
sla.displayName=Service Down Time
sla.description=Service Down Time
sla.operationalHourStartTime = 8:00:00
sla.operationalHourEndTime = 17:00:00
sla.operationalHourThreshold = 99.5
sla.nonOperationalHourThreshold = 98.5
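The identity format used in this example ("identityType:::identityString", with tbsmIdentity as the default type) can be sketched in JavaScript. The function name parseIdentity is hypothetical; this illustrates the format, not an Impact API.

```javascript
// Sketch: parse a service identity of the form
// "identityType:::identityString". When no ":::" separator is present,
// the default type "tbsmIdentity" is assumed.
function parseIdentity(identity) {
  var sep = identity.indexOf(":::");
  if (sep === -1) {
    return { type: "tbsmIdentity", value: identity };
  }
  return { type: identity.slice(0, sep), value: identity.slice(sep + 3) };
}

// The three identities from the sample file above.
var id1 = parseIdentity("SLOService");
var id2 = parseIdentity("MSN:::ManagedSystemName/ABCCompany/SLOService");
var id3 = parseIdentity("GUID:::SLOService:guid:1ab2cd3ef4gh");
```

Any of these identities can be passed to the SLO metric functions as the resource name, and the outage data is calculated for the service SLOService.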
Common US calendar properties
Use this example to help you to understand how to create a common business
calendar properties file.
This business calendar specifies the common holidays and weekend days for the
United States:
# This sample calendar definition file defines the "common"
# holidays and weekend days. Other calendars should specify this
# calendar for the property calendar.duplicateCalendarName to
# share these definitions.
# The name must start with the prefix "COMMON.". The SLO function
# uses the calendar definitions in calendars "US" and "COMMON.US"
# if the service definition specifies businessCalendar=US as a property.
calendar.name=COMMON.US
# The date format must meet the requirements defined by the
# Java SimpleDateFormat class.
calendar.holidays.dateformat= MMM dd,yyyy
# Defining 6 holidays. Any numbered property that is missing is skipped.
# Any numbered property above 6 (in this example) will be ignored.
calendar.holidays.num = 6
calendar.holidays.1 = Jan 1,2015
calendar.holidays.2 = Feb 14,2015
calendar.holidays.3 = Dec 25,2015
calendar.holidays.4 = Jan 1,2016
calendar.holidays.5 = Feb 14,2016
calendar.holidays.6 = Dec 25,2016
# Defining 2 weekend days. Any numbered property that is missing will be skipped.
# Any numbered property above 2 (in this example) will be ignored.
calendar.weekends.num = 2
# The "day number" is defined by the Java Calendar class. In the calendar for the
# US locale, the constant SUNDAY is defined as 1 and the constant SATURDAY as 7.
# MONDAY is 2, TUESDAY is 3, and so on.
calendar.weekends.1 = 1
calendar.weekends.2 = 7
US Calendar example
Use this example to help you to understand how to create a business calendar
properties file that uses common holidays and weekends that are defined in a
common file.
This example defines holidays and weekend days that are unique to the calendar.
However, the holidays and weekends are supplemented by the properties in the
COMMON.US calendar.
# This sample calendar definition file defines the holidays and weekend days unique
# to this specific calendar. This calendar will be supplemented by entries in the
# "COMMON" calendar of the same name.
# The name of this calendar. The entries in this calendar and the entries in
# calendar COMMON.US will all be used when evaluating outage time.
calendar.name=US
# The date format, as defined by Java class SimpleDateFormat, is required
# in order to specify holiday dates.
calendar.holidays.dateformat= MMM dd,yyyy
# The US calendar will have 2 holidays that are unique to the calendar.
# All other holidays and the weekend days are defined in the
# calendar COMMON.US.
calendar.holidays.num = 2
calendar.holidays.1 = Jul 4,2015
calendar.holidays.2 = Jul 4,2016
Common calendar properties file example
Use this example to help you to create your own business calendar properties files.
This example duplicates the common holidays and weekend days from the
COMMON.US calendar:
# This sample calendar definition file duplicates another calendar which
# defines "common" holidays and weekend days.
# The name must start with the prefix "COMMON.". The SLO function will use
# the calendar definitions in calendars "Canada" and "COMMON.Canada" if the
# service definition specifies businessCalendar=Canada as a property.
calendar.name=COMMON.Canada
# The following property instructs the calendar definition policy to just
# copy all the entries in calendar COMMON.US to this calendar.
calendar.duplicateCalendarName=COMMON.US
# Any other properties are ignored when calendar.duplicateCalendarName is specified.
Canada calendar example
Use this example to help you to understand how to define the holidays that are
unique to Canada.
This business calendar specifies the unique holidays in Canada. The holidays and
weekend days that are specified in the COMMON.Canada business calendar are also
included.
# This sample calendar definition file defines the holidays and weekend days unique
# to this specific calendar. This calendar is supplemented by entries in the
# "COMMON" calendar of the same name.
# The name of this calendar. The entries in this calendar and the entries in
# calendar COMMON.Canada are used when evaluating outage time.
calendar.name=Canada
# The date format, as defined by Java class SimpleDateFormat, is
# required to specify holiday dates.
calendar.holidays.dateformat= MMM dd,yyyy
# The Canada calendar will have 2 holidays that are unique to the calendar.
# All other holidays and the weekend days are defined in the
# calendar COMMON.Canada.
calendar.holidays.num = 2
calendar.holidays.1 = Jul 1,2015
calendar.holidays.2 = Jul 1,2016
SLO Utility Functions
There are utility policies available in project SLA_Utility that provide additional
functionality for configuring the SLO feature, as well as functions for maintaining
the outage data that is accumulated for reporting purposes.
These utilities complement the initial implementation found in the SLA project. If
you deployed the SLO feature prior to Fixpack 4, you were instructed to import
the SLA_Utility project when installing Fixpack 4.
If you have not imported the SLA_Utility project, you must complete the import
in order to use these utility functions. You must update the SLOUtilityDatasource
datasource after importing to connect to the SLORPRT database being used by
existing datasource SLOReportDatasource.
The utility functions generally throw exceptions for errors such as missing
parameters or names that do not exist. Check the Impact policylogger.log
file to verify that the utility produced the expected results.
Note: When using utility functions that alter the SLO configuration or the outage
data collected by the SLO feature, consider making a backup of the database before
proceeding. As with any database, the SLORPRT database should be backed up on
a regular interval as well as having regular DB2 maintenance applied for optimal
performance.
Maintaining the reporting data in the SLORPRT database
Project SLA_Utility includes the policy sloManageOutageTables that can be
activated to prune outage records from the SLO_METRIC_VALUE table in the SLORPRT
database. This can help control the amount of data retained for reporting purposes
by removing the older records. Using the default configuration, outage records
more than 365 days old will be removed and archived.
In addition to the policy, an Impact activator service called ManageSLOTables is
included that will activate the policy. It is set to activate once a day by default.
This service is not started by default.
When the outage records are pruned, they are written to an archive table. By
default the records will remain in the archive for 30 days, allowing for the
possibility to restore outages if too much data has been removed. The policy
sloManageOutageTables will also handle pruning records from the archive each
time it runs.
See “Setting SLO configuration values” on page 274 for information on changing
the default retention periods for both the outage table and the archived outage
table.
See “Restoring outage data” on page 273 for information on restoring outage data
from the archived outage table.
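The two default retention periods can be pictured as two cutoff timestamps that are computed each time sloManageOutageTables runs: outages older than the outage retention period move to the archive, and archive rows older than the archive retention period are deleted. The sketch below is illustrative only; the helper name is invented and the shipped policy's implementation may differ:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class RetentionSketch {
    // Hypothetical helper: the cutoff before which records are pruned.
    static Instant cutoff(Instant now, int retentionDays) {
        return now.minus(retentionDays, ChronoUnit.DAYS);
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        // Defaults described in this chapter: 365 days for the outage table,
        // 30 days for the archive table.
        System.out.println("Archive outages with VALUETIME before " + cutoff(now, 365));
        System.out.println("Delete archived rows older than " + cutoff(now, 30));
    }
}
```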
Removing service, SLA, and calendar definitions
Policies are included in the SLA_Utility project that will allow you to remove
service, SLA, and calendar definitions that are no longer needed. The policies that
remove the service and SLA definitions will archive any outage data that has
already been recorded for the service and/or SLA.
Use the following policies to remove service definitions, SLA definitions, or the
mapping of a service to an SLA definition:
sloDeleteServiceDefinition
This policy will delete a service definition and archive all outage records
associated with the service. Removing the service also removes all
mappings to any SLA that was used with the service. The SLA definitions
may still be used by other services and are not affected.
You can add the service back later and restore the outages if they have not
been pruned from the archive. The “Service Name” parameter is required.
sloDeleteSLADefinition
This policy will delete an SLA definition and archive all outage records
associated with the SLA. Removing the SLA also removes all mappings to
any service that was using the SLA. The service definitions may still be
used by other SLAs and are not affected.
You can add the SLA back later and restore the outages if they have not
been pruned from the archive. The “SLA Metric Name” parameter is
required.
sloDeleteSLAForService
This policy will delete the mapping between an SLA and a service. All
outage records for this SLA and service combination are archived. Only the
mapping is removed – the service and/or SLA may be used in other
mappings, so the definitions are not affected.
You can add the SLA-to-service mapping back later and restore the
outages if they have not been pruned from the archive. The “SLA Metric
Name” and “Service Name” parameters are required.
Use the following policy to remove calendar definitions.
sloDeleteCalendarDefinition
This policy will delete a calendar definition from the SLO configuration. If
you have created a “COMMON” calendar for this calendar, then it will not
be deleted. You must explicitly delete the “COMMON” calendar if it is also
no longer required.
For example, when you have a calendar called US, the SLO feature will
use information from this calendar and from the calendar called
COMMON.US, if that calendar also exists. You would need to run the
delete policy once for each calendar if you no longer need either.
Exporting service and calendar definitions
Policies are included in the SLA_Utility project that will allow you to export
service and calendar definitions. This may be useful if you no longer have the
properties files you used to create the services and calendars, or if you need to set
up the same SLO configuration on another system.
Before you can export any SLO elements, you must define the configuration
property SLO_EXPORT_DIRECTORY, which defines the target directory to be used for
export operations. See “Setting SLO configuration values” on page 274 for more
information.
Use the following policies to export service definitions. When a service is exported,
a properties file is created for each SLA Metric name that is mapped to the service.
The exported files will be placed in the directory defined by SLO configuration
value SLO_EXPORT_DIRECTORY and named SLO_Export_<service name>_<SLA metric
name>.props.
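The export file naming convention described above can be sketched as a simple string construction. The helper name below is invented for illustration; the export policies write files of this form into the directory defined by SLO_EXPORT_DIRECTORY:

```java
public class ExportNameSketch {
    // Hypothetical helper illustrating the export naming convention:
    // SLO_Export_<service name>_<SLA metric name>.props
    static String exportFileName(String serviceName, String slaMetricName) {
        return "SLO_Export_" + serviceName + "_" + slaMetricName + ".props";
    }

    public static void main(String[] args) {
        // "MyService" is a made-up service name; "serviceDowntime" is the
        // sample SLA metric name used earlier in this chapter.
        System.out.println(exportFileName("MyService", "serviceDowntime"));
        // prints SLO_Export_MyService_serviceDowntime.props
    }
}
```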
sloExportServiceDefinition
This policy will export one or more properties files that represent the
definition of the service and each SLA mapped to the service.
The “Service Name” parameter is required.
The “SLA Metric Name” parameter is required and must be either:
v The name of a specific SLA metric mapped to the service. This name
must exist and be mapped to the service specified by “Service Name”.
v Or the single character *, which indicates that all SLAs mapped to the
service should be exported.
sloExportAllServiceDefinitions
This policy will look up all service definitions defined in the SLO
configuration and export each one. This will produce a properties file for
every combination of a service and SLA defined in the SLO configuration.
There are no parameters for this policy.
Use the following policies to export calendar definitions.
When a calendar is exported, a properties file is created in the directory defined by
SLO configuration value SLO_EXPORT_DIRECTORY. The file is named
SLO_Export_<calendar name>.props.
sloExportCalendarDefinition
This policy will export a calendar definition properties file. If you have
created a “COMMON” calendar for this calendar, then that calendar will
also be exported.
For example, when you have a calendar called US, the SLO feature will
use information from this calendar and from the calendar called
COMMON.US, if that calendar also exists. When you run the export, two
properties files are created, one for US and one for COMMON.US.
The “Calendar Name” parameter is required.
sloExportAllCalendarDefinitions
This policy will look up all calendar definitions defined in the SLO
configuration and export each one. This will produce a properties file for
each unique calendar definition, including any “COMMON” calendars.
There are no parameters for this policy.
Removing specific outage data
The SLA_Utility project includes functions that can be used to remove specific
outages from the table used for the SLO Reporting. These functions should be used
only to make finer adjustments to the outage data that was generated by the SLO
function.
Note: You should always consider backing up the SLORPRT database before using
functions to remove outage data.
The functions included can remove outages for a service, for an SLA metric, or for
a specific SLA metric and service mapping. In addition, you can remove outages
based on time criteria, for example outages before a certain time, after a certain
time, or in a specific interval defined by a beginning and ending timestamp. Some
functions combine the capabilities to support removing outages for services or
SLAs while also defining a time range. When a time range is defined, it is applied
to the VALUETIME column of the SLO_METRIC_VALUE table, which is the
timestamp that defines when the outage occurred.
For all functions that remove outages, the outages are archived into another table
and stored with the name of the service and SLA metric. If you later attempt to
restore the outages from the archive, the service name and SLA metric name must
exist in the configuration or the records will not be restored. Archived records
include an “archive time”, which is set to the current time. This timestamp is used
by the policy sloManageOutageTables if you have activated this policy to
automatically prune the outage and archive tables.
Policy sloRemoveOutages
Policy sloRemoveOutages is provided as a sample policy for calling the various
remove functions. The policy has extensive comments that describe how to call
each function. In its shipped state the policy does not perform any actions, as all
sample function calls are commented out.
You should create your own policy and use the code in sloRemoveOutages as a
model for building the function calls you need.
Restoring outage data
The SLA_Utility project includes functions that can be used to restore specific
outages from the archive table.
These outages were put in the archive when one of the following actions occurred:
v Regular maintenance was performed on the outage table by activating policy
sloManageOutageTables
v A service, SLA, or mapping between a service and SLA was deleted
v One or more of the remove functions described in the previous section was used
to remove specific outages from the SLO Reporting data
It is important to remember that archived outages are only retained for 30 days by
default. This assumes you are activating policy sloManageOutageTables to perform
regular pruning of the outage and archive tables.
Note: You should always consider backing up the SLORPRT database before using
functions to restore archived data.
The restore functions mirror the remove functions described in the previous
section, with the same capabilities for defining the restore criteria that are available
when defining the remove criteria. The restore functions can recover outages for a
service, for an SLA metric, or for a specific SLA metric and service mapping. In
addition, you can restore outages based on time criteria or a combination of the
time and name parameters.
For all functions that restore outages, the outages are restored into the
SLO_METRIC_VALUE table and removed from the archive table. It is important to
note the following when restoring outages:
v Each archived record has the name of the service and the name of the SLA
metric. The service and SLA must exist in the configuration, and must have a
mapping defined, or the records cannot be restored. For example, if a service
was deleted, causing outages to be archived, then that service and the
appropriate SLA mappings for the service must be created in the SLO
configuration before the archived records can be restored.
v When archive records are restored, be aware that the next run of
sloManageOutageTables, if activated, may archive the outage again, depending
on the VALUETIME of the outage and your configuration value for how long to
retain outage records (365 days by default). However, the archive time will be
set to the current time, giving you more time to update the configuration and
retry the restore before the outages are permanently deleted from the archive.
v Some restore functions may perform more slowly when there are many archived
records to examine. As noted above, the service name, SLA metric name, and
mapping must be validated before an archived outage can be restored. Thus, if
only the time criteria are specified, the archived records may include a
variety of service and SLA combinations, each of which must be validated as
part of the restore operation.
Always check the Impact policylogger.log file to ensure the expected outages
were restored. If service, SLA, or mapping configuration is missing, correct the
configuration and re-run the restore operation.
Policy sloRestoreOutages
Policy sloRestoreOutages is provided as a sample policy for calling the various
restore functions. The policy has extensive comments that describe how to call each
function. In its shipped state the policy does not perform any actions, as all sample
function calls are commented out.
You should create your own policy and use the code in sloRestoreOutages as a
model for building the function calls you need.
Setting SLO configuration values
This section describes each of the SLO configuration values and shows how to
query and set the values.
OUTAGE_RETENTION_DAYS
Set this configuration value to the number of days that outages should be
kept in table SLO_METRIC_VALUE before being archived. The default
value is 365 days. This configuration value is used by policy
sloManageOutageTables to prune the outage table, if the policy has been
activated.
A higher value for this setting means that more outage data will be
retained and a longer historical pattern can be shown in the reports.
Reduce this value to prune the data more quickly and retain less historical
data.
OUTAGE_ARCHIVE_RETENTION_DAYS
Set this configuration value to the number of days that archived outages
should be kept in table SLO_METRIC_VALUE_ARCHIVE before being
permanently deleted. The default value is 30 days. This configuration value
is used by policy sloManageOutageTables to prune the archive table, if the
policy has been activated.
A higher value for this setting means that more archive data will be
retained, providing a longer opportunity to restore the data if needed.
Reduce this value to prune the data more quickly, reducing the window of
time when the data can be restored.
SLO_EXPORT_DIRECTORY
Set this configuration value to the directory to be used for exporting SLO
services and calendars. This value must be set before running any export
function. There is no default value.
TBSM_OUTAGE_STATUS
Set this configuration value to control how the sample policy
getDataFromTBSMAvailability determines an outage from data in the
TBSM metric history database.
The default, recommended value is “BAD”, which defines an outage ONLY as a
period when the TBSM status is “Bad”. This is the default
value if you deploy SLO for the first time after installing Fixpack 4.
If you already have SLO deployed when you install Fixpack 4, then an
outage is defined to end ONLY when the TBSM status becomes “Good”.
This was the previous behavior and will not be changed by just installing
Fixpack 4.
Use sloSetConfigurationValue to set this property to “BAD” to configure
the recommended behavior. Set the value to anything else (for example, “”)
to use the legacy behavior, which requires the TBSM status to be “Good”
to end an outage.
Policy sloSetConfigurationValue
The SLA_Utility project includes policy sloSetConfigurationValue for querying
and setting the configuration values described above.
The parameter Configuration Property is required and must take one of the
following values:
v OUTAGE_RETENTION_DAYS
v OUTAGE_ARCHIVE_RETENTION_DAYS
v SLO_EXPORT_DIRECTORY
v TBSM_OUTAGE_STATUS
The parameter Configuration Value specifies the value to be set for the property. If
you specify “?” for this parameter, then the policy will just log the current value to
the Impact policylogger.log file.
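The query-versus-set behavior of sloSetConfigurationValue can be pictured as a small dispatch on the Configuration Value parameter: “?” only reports the current value, while anything else stores a new value. The sketch below is illustrative only; the class, map, and method names are invented and do not reflect the policy's internals:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigValueSketch {
    // Hypothetical stand-in for the SLO configuration store.
    static final Map<String, String> config = new HashMap<>();

    // "?" queries the current value; anything else sets a new value.
    static String setOrQuery(String property, String value) {
        if ("?".equals(value)) {
            return config.get(property); // query only; the real policy logs this
        }
        config.put(property, value);
        return value;
    }

    public static void main(String[] args) {
        setOrQuery("OUTAGE_RETENTION_DAYS", "180");
        System.out.println(setOrQuery("OUTAGE_RETENTION_DAYS", "?")); // prints 180
    }
}
```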
Chapter 11. Configuring Maintenance Window Management
Maintenance Window Management (MWM) is an add-on for managing
Netcool/OMNIbus maintenance windows.
MWM can be used with Netcool/OMNIbus versions 7.x and later. A maintenance
time window is a prescheduled period of downtime for a particular asset. Faults
and alarms, also known as events, are often generated by assets undergoing
maintenance, but these events can be ignored by operations. MWM creates
maintenance time windows and ties them to Netcool/OMNIbus events that are
based on OMNIbus fields values such as Node or Location. Netcool/Impact
watches the Netcool/OMNIbus event stream and puts these events into
maintenance according to the maintenance time windows. The Netcool/Impact
MWMActivator service in the Services tab under the MWM project must be
running to use this feature. For more information about maintenance windows, see
“About MWM maintenance windows” on page 279.
Activating MWM in a Netcool/Impact cluster
Maintenance Window Management (MWM) interacts with Netcool/OMNIbus
using a Netcool/Impact policy activator service called MWMActivator. This
service is turned off by default in Netcool/Impact.
About this task
Use the following steps to activate MWM in the Netcool/Impact cluster
NCICLUSTER.
Procedure
1. Log on to Netcool/Impact.
2. Click Policies to open the Policies tab.
3. From the Cluster list, select NCICLUSTER. From the Project list, select MWM.
4. In the Policies tab, select the MWM_Properties policy, right click, and select
Edit or click the Edit policy icon to view the policy and make any required
changes. For more information, see “Configure the MWM_Properties policy.”
5. Click Services.
6. In the Services tab, select MWMActivator, right click, and select Edit or click
the Edit services icon to open the MWMActivator service properties. Make any
required changes. For information about these properties, see “Configuring
MWMActivator service properties” on page 278.
7. To start the service, in the service status pane, select MWMActivator and either
right click and select Start or click the Start Service arrow in the Services
toolbar.
When the service is running it puts OMNIbus events into maintenance based
on schedules entered into the MWM GUI.
Configure the MWM_Properties policy
Configure the MWM_Properties policy for use with the MWM add-on.
The following configurable options are available in the Maintenance Window
Management MWM_Properties policy.
v Maintenance window expiration
– By default, MWM clears the “in maintenance” flag from corresponding
OMNIbus events when a window expires. You can edit the policy so that
MWM leaves those events flagged as “in maintenance” after the maintenance
window expires.
v Flagging existing events when a maintenance window starts
– By default, any matching events in OMNIbus are flagged, regardless of when
they came into OMNIbus. You can modify the policy so that MWM flags only
events that arrive or deduplicate while the maintenance window is running.
You can change these options by editing the MWM_Properties policy in the MWM
project.
1. Click Policies to open the Policies tab.
2. In the Projects list, select MWM.
3. In the Policies tab, select MWM_Properties, right click and select Edit to open
the policy. MWM_Properties is a small policy with a single function called
getProperties(). Other MWM policies call this function to retrieve
configuration information.
4. To change the MWM options, change the values that are assigned in the
function to TRUE or FALSE as required.
See the following information in the policy for clearFlag options. clearFlag =
TRUE is the default option.
Use clearFlag = TRUE if you want the maintenance flag
on events cleared when windows expire.
Use clearFlag = FALSE if you want Impact to leave the events tagged
as in maintenance after the window expires.
See the following information in the policy for flagExistingEvents options.
flagExistingEvents = TRUE is the default option.
Use flagExistingEvents = TRUE if you want Impact to flag as "in maintenance"
events which last came in (based on LastOccurrence) before the time window started.
Use flagExistingEvents = FALSE if you want Impact to NOT flag events as
"in maintenance" unless they come in during the maintenance window.
function getProperties(propsContext)
{
propsContext = newobject();
clearFlag = TRUE;
flagExistingEvents = TRUE;
propsContext.clearFlag = clearFlag;
propsContext.flagExistingEvents = flagExistingEvents;
}
Configuring MWMActivator service properties
Configure the MWMActivator service to check for OMNIbus events that require
maintenance.
Procedure
1. Click Services to open the Services tab.
2. In the Services tab, right click MWMActivator and select Edit or click the Edit
services icon to open the properties for the MWMActivator service.
3. By default, the MWMActivator Activation Interval is set to 7 seconds. The
MWMActivator service checks OMNIbus every seven seconds for events that
require maintenance. Select the interval time that you want to use. If possible,
use a prime number.
4. You can change the Policy value if you have created your own policy to replace
the MWM_Properties policy.
5. Select Startup to start the MWMActivator service when Netcool/Impact starts.
6. Select the Service Log to create a file of the service log.
Logging on to Maintenance Window Management
How to access Maintenance Window Management (MWM).
Procedure
1. Click the Maintenance Window tab. This page lists the instances of the
different types of maintenance window.
2. Click the Add button to create a new window.
About MWM maintenance windows
Use the Maintenance Window Management (MWM) web interface to create
maintenance time windows and associate them with Netcool/OMNIbus events.
Netcool/OMNIbus events are based on OMNIbus field values such as Node or
Location. The Netcool/OMNIbus events are then put into maintenance according
to these maintenance time windows. If events occur during a maintenance
window, MWM flags them as being in maintenance by changing the value of the
OMNIbus integer field SuppressEscl to 6 in the alerts.status table.
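The flagging step corresponds to an ObjectServer update of the SuppressEscl field. The sketch below only illustrates that correspondence; the WHERE clause is a placeholder, and MWM builds the real condition from the fields configured for the maintenance window:

```java
public class FlagSketch {
    // Builds an ObjectServer-style SQL statement equivalent to what MWM does
    // when it flags an event as in maintenance. Illustration only; MWM
    // derives the real filter from the window's configured fields.
    static String flagUpdate(String whereClause) {
        return "update alerts.status set SuppressEscl = 6 where " + whereClause + ";";
    }

    public static void main(String[] args) {
        System.out.println(flagUpdate("Node = 'server1.ibm.com'"));
    }
}
```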
A maintenance time window is prescheduled downtime for a particular asset.
Faults and alarms (events) are often generated by assets that are undergoing
maintenance, but these events can be ignored by operations. MWM tags OMNIbus
events in maintenance so that operations know not to focus on them. You can use
MWM to enter one time and recurring maintenance time windows.
v One time windows are maintenance time windows that run once and do not
recur. One Time Windows can be used for emergency maintenance situations
that fall outside regularly scheduled maintenance periods. You can use them all
the time if you do not have a regular maintenance schedule.
v Recurring time windows are maintenance time windows that occur at regular
intervals. MWM supports three types of recurring time windows:
– Recurring Day of Week
– Recurring Date of Month
– Every nth Weekday
Maintenance time windows must be linked to OMNIbus events in order for MWM
to mark events as being in maintenance. When you configure a time window, you
also define which events are to be associated with the time window. The MWM
supports the use of all Netcool/OMNIbus fields for linking events to time
windows.
Creating a one time maintenance window
Create a one time maintenance time window for a particular asset.
Procedure
1. Click the New Maintenance Window button to create a new window.
2. For Type of Maintenance Window, select One Time.
3. Check that the Time Zone you want to use is selected.
4. Add fields you wish to assign in the filter to match events. For each field you
add, select the operator from the list provided and assign a value to the field to
be used for the filter.
Tip: For a like operator, there is no requirement for regular expressions. You
can specify a substring and select the like operator from MWM.
Tip: For the in operator, provide a space-separated list of strings that the field
can match (for example, server1.ibm.com server2.ibm.com server3.ibm.com).
Tip: Any field where a value is not provided will not be included in the filter.
5. Click the calendar icons to select the Start Time and End Time for the
maintenance time window.
6. Click the Save button to create the window.
7. Click the Back button to view the newly created window in the list of one time
windows.
Creating a recurring maintenance window
Create a recurring maintenance time window for a particular asset.
Procedure
1. Click the New Maintenance Window button to create a new window.
2. For Type of Maintenance Window, select the type of recurring window you
wish to configure. This can be either Day of Week, Day of Month, or Nth Day
of Week in Month.
3. Check that the Time Zone you want to use is selected.
4. Add fields you wish to assign in the filter to match events. For each field you
add, select the operator from the list provided and assign a value to the field to
be used for the filter.
Tip: For a like operator, there is no requirement for regular expressions. You
can specify a substring and select the like operator from MWM.
Tip: Any field where a value is not provided will not be included in the filter.
5. Provide the Start Time and End Time (hour, minute, second) for the
maintenance window.
6. Provide the details specific to the chosen recurring type of window:
v Recurring Day of Week These windows occur every week on the same day
and at the same time of day. For example, you can set the window to every
Saturday from 5 p.m. to 12 a.m. Or you can set the window for multiple
days such as Saturday, Sunday, and Monday from 5 p.m. to 12 a.m.
v Recurring Day of Month These windows occur every month on the same
date at the same time of day. For example, you can set the window to every
month on the 15th from 7 a.m. to 8 a.m. Or you can set the window for
multiple months.
v Every nth Weekday These windows occur every month on the same day of
the week at the same time. For example, you can set the window to the first
and third Saturday of the month from 5 p.m. to 12 a.m.
7. Click the Save button to create the window.
8. Click the Back button to view the newly created window in the list of
windows.
Viewing maintenance windows
The main Maintenance Window page displays the full list of maintenance
windows, grouped by window type. You can use the Filter control to view the
windows of a single window type (One Time, Day of Week, Day of Month, Nth
Day of Week in Month or Active).
In the One Time Window list, the color of the status icon indicates whether the
window is active (green), expired (purple), or has not started yet (blue).
In the other windows the color of the status icon indicates whether the window is
active (green) or inactive (orange).
You can sort the maintenance windows by column by clicking on a column header
of the table. Clicking again reverses the sort.
To edit an existing maintenance window, click the Edit icon for the window.
You can delete the maintenance windows by checking the boxes next to windows
to delete and clicking Delete for that window type.
Maintenance Window Management and other Netcool/Impact
policies
Maintenance Window Management (MWM) runs independently from other
Netcool/Impact policies or OMNIbus automations. Every seven seconds, MWM
checks for open maintenance windows and marks the appropriate events as being
in maintenance. Take this feature into consideration when you add your own
policies and automations.
Known shortcomings
If there are overlapping time windows, there is a chance that an event could be
temporarily flagged as out of maintenance when the first window ends. If this
situation occurs, the event is flagged as in maintenance the next time the
MWMActivator Service runs. The clearFlag property comes into play here. If
clearFlag = FALSE, then the event is never marked as out of maintenance.
Maintenance Window Management does not work properly if the default cluster
name, NCICLUSTER, is not used. When the MWM main page opens, you see the
following message:
Could not retrieve a client for accessing the Impact server, under cluster:
clustername
For information about how to resolve this issue, see the Troubleshooting section.
Auditing changes to MWM configuration
Whenever a user creates or deletes an MWM maintenance window, an audit entry
is created in the following file:
IMPACT_HOME/logs/<server>_maintwinaudit.log
The log entry records which user performed the action, the time the configuration
change was made, and the details about the window that was created or deleted.
Chapter 12. Configuring Event Isolation and Correlation
Event Isolation and Correlation is provided as an additional component of the
Netcool/Impact product. Event Isolation and Correlation is developed using the
operator view technology in Netcool/Impact. You can set up Event Isolation and
Correlation to isolate an event that has caused a problem. You can also view the
events dependent on the isolated event.
Overview
Netcool/Impact has a predefined project, EventIsolationAndCorrelation, that
contains predefined data sources, data types, policies, and operator views. When
all the required databases and schemas are installed and configured, you must set
up the data sources. Then, you can create the event rules by using the ObjectServer
SQL in the Event Isolation and Correlation configuration view in the UI. You can
view the event analysis in the operator view, EIC_Analyze. You can also view the
output in the topology widget dashboard in the Dashboard Applications Services
Hub.
Complete the following steps to set up and run the Event Isolation and Correlation
feature.
1. Install Netcool/Impact.
2. Install DB2 or use an existing DB2 installation.
3. Configure the DB2 database with the DB2 schema.
4. Install the Discovery Library Toolkit with the setup-dltoolkit-<platform>_64.bin
installation image that is available in the IMPACT_INSTALL_IMAGE/<platform> directory.
If you already have a Tivoli® Application Dependency Discovery Manager
(TADDM) installation, configure the Discovery Library Toolkit to consume the
relationship data from TADDM. You can also consume the data through the
loading of Identity Markup Language (IdML) books. For more information
about the discovery library toolkit, see the Tivoli Business Service Manager
Administrator's Guide and the Tivoli Business Service Manager Customization Guide.
The guides are available in the Tivoli Business Service Manager 6.1.1
documentation, available from the following URL, https://www.ibm.com/
developerworks/community/wikis/home?lang=en#!/wiki/Tivoli
%20Documentation%20Central.
You can load a customized namespace or your own model into the SCR. This model
can be used for application topology-based event correlation. For more
information see Tivoli Business Service Manager Customization Guide, Customizing
the import process of the Service Component Repository, Service Component Repository
API overview.
5. In the GUI, configure the data sources and data types in the
EventIsolationAndCorrelation project to use with the Impact Server.
6. Create the event rules in the UI to connect to the Impact Server.
7. Configure WebGUI to add a new launchpoint or configure a topology widget to
visualize the results.
Tip: When you use Event Isolation and Correlation, the Event Isolation and
Correlation events must have a BSM identity value in the BSM_Identity field. If the
field does not have a value, you must enter it manually or create it with the event
enrichment feature, by using the EIC_EventEnrichment policy and
EIC_EventEnrichment service in the EventIsolationAndCorrelation project. You
might also want to update the event reader Filter Expression in the Event
Mapping tab according to your requirements.
General information about navigating Event Isolation and Correlation is in the
online help. Additional detailed information about setting up and configuring Event
Isolation and Correlation is in the Netcool/Impact Solutions Guide.
Installing Netcool/Impact and the DB2 database
To run the Event Isolation and Correlation feature, install Netcool/Impact and the
DB2 database and configure the DB2 schema.
Procedure
1. Install Netcool/Impact. Refer to Netcool/Impact Administration Guide, Installation
chapter.
2. Install DB2. Netcool/Impact and Tivoli Business Service Manager support DB2
version 9.5 or higher. For information about installing and using DB2, see
the information center that is listed here for the version you are using:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/
com.ibm.db2.luw.common.doc/doc/t0021844.html.
3. Configure the DB2 database with the DB2 schema. A user who has permissions
to run the DB2 command-line tools completes this step.
v For UNIX, use the user ID db2inst1.
v For Windows, use the user ID db2admin.
You can install the DB2 schema by using the installation image
setup-dbconfig-<platform>.bin that is available in the IMPACT_INSTALL_IMAGE/
<platform> directory.
Installing the Discovery Library Toolkit
Install the discovery library toolkit to import the discovered resources and
relationships into the Services Component Registry database.
About this task
For information about building the required relationships in the Services
Component Registry for Event Isolation and Correlation, see the Services Component
Registry API information in the Tivoli Business Service Manager Customization Guide.
See URL, https://www.ibm.com/developerworks/community/wikis/
home?lang=en#!/wiki/Tivoli%20Documentation%20Central.
Use the discovery library toolkit to import data from Tivoli® Application
Dependency Discovery Manager 7.1 or later to Tivoli Business Service Manager.
The toolkit also provides the capability of reading discovery library books in
environments that do not have a Tivoli Application Dependency Discovery
Manager installation.
v If you are using Tivoli Business Service Manager and Netcool/Impact, use the
information in Installing the Discovery Library Toolkit in the Tivoli Business Service
Manager Installation Guide available in the Tivoli Business Service Manager 6.1.1.4
information center. Use the following url, https://www.ibm.com/
developerworks/community/wikis/home?lang=en#!/wiki/Tivoli
%20Documentation%20Central.
v For a Netcool/Impact implementation that does not use Tivoli Business Service
Manager, install the Discovery Library Toolkit with the setup-dltoolkit-<platform>_64.bin installation image that is available in the directory
IMPACT_INSTALL_IMAGE/<platform>. For the Tivoli Business Service Manager
related information, the data source must be configured to access the DB2
database. This information is not required for a Netcool/Impact installation.
Important:
Netcool/Impact 7.1 bundles DBConfig 6.1.1 and XML Toolkit 6.1.1 on the
installation image. These are exactly the same DBConfig 6.1.1 and XML Toolkit
6.1.1 that were bundled with Impact 6.1.1 and TBSM 6.1.1.
Procedure
1. Go to the IMPACT_INSTALL_IMAGE/<platform> directory where the Impact Server
is installed.
2. Run the setup-dbconfig-<platform>.bin to install the database schema.
3. To install the discovery library toolkit, run setup-dltoolkit-<platform>_64.bin.
During the installation of the discovery library toolkit, there are options to
configure the Tivoli Business Service Manager data server. You can add any
values that you want. These values are not used in Netcool/Impact.
Updating the RCA views
When the Event Isolation and Correlation feature analyzes an event to
determine the resource relationships, some resource information might be missing.
The cause is a misconfiguration in the RCA_DEPENDENCYCOMPONENTS and the
RCA_COMPONENTS views.
About this task
The Netcool/Impact Event Isolation and Correlation component uses these views
to do the analysis. To rectify this issue, you must run the
RCA_DEPENDENCYCOMPONENTS.ddl file and the RCA_COMPONENTS.ddl file to show the
resource information.
Procedure
1. Copy the RCA_DEPENDENCYCOMPONENTS.ddl file and the RCA_COMPONENTS.ddl from
$IMPACT_HOME/add-ons/eic/db/ to the DB2 database machine where the SCR is
installed. For example, to /tmp/RCA_DEPENDENCYCOMPONENTS.ddl and
/tmp/RCA_COMPONENTS.ddl.
2. Connect to the DB2 database by using the following command.
db2 connect to TBSM
Where TBSM is the name of the database that contains the Service Component
Registry.
3. Run the db2 command to run the RCA_DEPENDENCYCOMPONENTS.ddl file to update
the RCA_DEPENDENCYCOMPONENTS view.
DB2_INSTALLATION_HOME/bin/db2 -tvf /tmp/RCA_DEPENDENCYCOMPONENTS.ddl
4. Run the db2 command to run the RCA_COMPONENTS.ddl file to update the
RCA_COMPONENTS view.
DB2_INSTALLATION_HOME/bin/db2 -tvf /tmp/RCA_COMPONENTS.ddl
Updating RCA views by using the XML toolkit
You can also update the RCA views by editing the scc_schema_views.sql file in
the $XMLtoolkit_HOME/sql directory.
About this task
Procedure
1. Edit the scc_schema_views.sql file in the $XMLtoolkit_HOME/sql directory.
2. Locate the rca_components and rca_dependencycomponents CREATE VIEW
statements in the scc_schema_views.sql file.
3. Add the cdm:core.Collection class to the CREATE VIEW statements for the two
views.
CREATE VIEW %SCR_SCHEMADOT%rca_components AS
SELECT c.id, c.class, c.label, COALESCE(bc.baseclass, '(N/A)') AS baseclass,
min(n.name) AS guid
FROM %SCR_SCHEMADOT%scc_components c
LEFT JOIN %SCR_SCHEMADOT%cdm_namingemulations bc ON bc.class = c.class
JOIN %SCR_SCHEMADOT%sccp_componentnaming n ON n.comp_id = c.id
WHERE bc.baseclass IN ( 'cdm:sys.ComputerSystem',
'cdm:sys.OperatingSystem',
'cdm:app.AppServer',
'cdm:process.Activity',
'cdm:app.AppServerCluster',
'cdm:sys.ITSystem',
'cdm:app.SoftwareModule',
'cdm:sys.SoftwareComponent',
'cdm:core.Collection',
'cdm:sys.RuntimeProcess'
)
OR bc.class is null
GROUP BY c.id, c.class, c.label, bc.baseclass
;
CREATE VIEW %SCR_SCHEMADOT%rca_dependencycomponents AS
( SELECT DISTINCT m.service_id AS srcid, cdmcs.class AS srcclass,
tgtservice.service_id AS tgtid, cdmct.class AS tgtclass, r.reltype
FROM %SCR_SCHEMADOT%scc_serviceid_to_component_id m
JOIN %SCR_SCHEMADOT%sccp_components comp ON comp.id = m.service_id
JOIN %SCR_SCHEMADOT%cdm_classids cdmcs ON comp.class_id = cdmcs.id
JOIN %SCR_SCHEMADOT%scc_serviceid_to_component_id m2 ON m2.service_id = m.service_id
AND (m2.type = 0 OR m2.type = 1)
JOIN %SCR_SCHEMADOT%scc_relations r ON m2.comp_id = r.srcid
AND (r.dependencydirection = 0 or r.dependencydirection = 2)
JOIN %SCR_SCHEMADOT%scc_serviceid_to_component_id tgtcomp ON tgtcomp.comp_id = r.tgtid
JOIN %SCR_SCHEMADOT%scc_serviceid_to_component_id tgtservice ON tgtservice.type = 0
AND tgtservice.service_id = tgtcomp.service_id
JOIN %SCR_SCHEMADOT%sccp_components tgtcomponents ON tgtcomponents.id =
tgtservice.service_id
JOIN %SCR_SCHEMADOT%cdm_classids cdmct ON tgtcomponents.class_id = cdmct.id
LEFT JOIN %SCR_SCHEMADOT%cdm_namingemulations bct ON bct.class = cdmct.class
LEFT JOIN %SCR_SCHEMADOT%cdm_namingemulations bcs ON bcs.class = cdmcs.class
WHERE m.type = 0 AND (m2.type = 0 OR (m2.type = 1 and m2.incCompRels=1))
AND (tgtcomp.type = 0 OR (tgtcomp.type = 1 and tgtcomp.incCompRels=1))
AND m.radinstanceid <> tgtservice.radinstanceid
AND (
bcs.baseclass IN ( 'cdm:sys.ComputerSystem',
'cdm:sys.OperatingSystem',
'cdm:app.AppServer',
'cdm:process.Activity',
'cdm:app.AppServerCluster',
'cdm:sys.ITSystem',
'cdm:app.SoftwareModule',
'cdm:sys.SoftwareComponent',
'cdm:core.Collection',
'cdm:sys.RuntimeProcess'
)
OR bcs.class is null
)
AND (
bct.baseclass IN ( 'cdm:sys.ComputerSystem',
'cdm:sys.OperatingSystem',
'cdm:app.AppServer',
'cdm:process.Activity',
'cdm:app.AppServerCluster',
'cdm:sys.ITSystem',
'cdm:app.SoftwareModule',
'cdm:sys.SoftwareComponent',
'cdm:core.Collection',
'cdm:sys.RuntimeProcess'
)
OR bct.class is null
)
)
UNION
( SELECT DISTINCT m.service_id AS srcid, cdmcs.class AS srcclass,
tgtservice.service_id AS tgtid, cdmct.class AS tgtclass, r.reltype
FROM %SCR_SCHEMADOT%scc_serviceid_to_component_id m
JOIN %SCR_SCHEMADOT%sccp_components comp ON comp.id = m.service_id
JOIN %SCR_SCHEMADOT%cdm_classids cdmcs ON comp.class_id = cdmcs.id
JOIN %SCR_SCHEMADOT%scc_serviceid_to_component_id m2 ON m2.service_id = m.service_id
AND (m2.type = 0 OR m2.type = 1)
JOIN %SCR_SCHEMADOT%scc_relations r ON m2.comp_id = r.tgtid AND
( r.dependencydirection = 1 or r.dependencydirection = 3 )
JOIN %SCR_SCHEMADOT%scc_serviceid_to_component_id tgtcomp ON tgtcomp.comp_id = r.srcid
JOIN %SCR_SCHEMADOT%scc_serviceid_to_component_id tgtservice ON tgtservice.type = 0
AND tgtservice.service_id = tgtcomp.service_id
JOIN %SCR_SCHEMADOT%sccp_components tgtcomponents ON tgtcomponents.id =
tgtservice.service_id
JOIN %SCR_SCHEMADOT%cdm_classids cdmct ON tgtcomponents.class_id = cdmct.id
LEFT JOIN %SCR_SCHEMADOT%cdm_namingemulations bct ON bct.class = cdmct.class
LEFT JOIN %SCR_SCHEMADOT%cdm_namingemulations bcs ON bcs.class = cdmcs.class
WHERE m.type = 0 AND (m2.type = 0 OR (m2.type = 1 and m2.incCompRels=1))
AND (tgtcomp.type = 0 OR (tgtcomp.type = 1 and tgtcomp.incCompRels=1))
AND m.radinstanceid <> tgtservice.radinstanceid
AND (
bcs.baseclass IN ( 'cdm:sys.ComputerSystem',
'cdm:sys.OperatingSystem',
'cdm:app.AppServer',
'cdm:process.Activity',
'cdm:app.AppServerCluster',
'cdm:sys.ITSystem',
'cdm:app.SoftwareModule',
'cdm:sys.SoftwareComponent',
'cdm:core.Collection',
'cdm:sys.RuntimeProcess'
)
OR bcs.class is null
)
AND (
bct.baseclass IN ( 'cdm:sys.ComputerSystem',
'cdm:sys.OperatingSystem',
'cdm:app.AppServer',
'cdm:process.Activity',
'cdm:app.AppServerCluster',
'cdm:sys.ITSystem',
'cdm:app.SoftwareModule',
'cdm:sys.SoftwareComponent',
'cdm:core.Collection',
'cdm:sys.RuntimeProcess'
)
OR bct.class is null
)
)
;
4. Run the command setdbschema.bat -U dbuser -P dbpassword -f v to drop
and add the schema.
Where -U is the database user identifier, -P is the database password, -f
identifies the function to run, and the v function drops and rebuilds all of the views.
5. The changes are saved to the XML toolkit SQL library for future reference.
Event Isolation and Correlation policies
The EventIsolationAndCorrelation project has a list of predefined polices that are
specific to Event Isolation and Correlation.
The following policies are in the EventIsolationAndCorrelation project and
support the Event Isolation and Correlation feature and must not be modified:
v EIC_ActionExecutionExamplePolicy
v EIC_ActionExecutionExamplePolicyJS
v EIC_EventEnrichment
v EIC_IsolateAndCorrelate
v EIC_PrimaryEvents
v EIC_ResourcesTopology
v EIC_TopologyVisualization
v EIC_UtilsJS
v EIC_eventrule_config
v EIC_utils
v Opview_EIC_Analyze
v Opview_EIC_confSubmit
v Opview_EIC_configure
v Opview_EIC_requestHandler
Event Isolation and Correlation Services
The EventIsolationAndCorrelation project has a service specific to Event Isolation
and Correlation.
The following service is in the EventIsolationAndCorrelation project and supports
the Event Isolation and Correlation feature.
v EIC_EventEnrichment
Event Isolation and Correlation operator views
The EventIsolationAndCorrelation project has a list of predefined operator views
that are specific to Event Isolation and Correlation.
v EIC_Analyze shows the analysis of an event query.
v EIC_confSubmit supports the configuration of Event Isolation and
Correlation.
v EIC_configure configures the event rules for Event Isolation and Correlation.
v EIC_requestHandler supports the configuration of Event Isolation and
Correlation.
Configuring Event Isolation and Correlation data sources
All the Event Isolation and Correlation-related features are associated with the
project, EventIsolationAndCorrelation. Configure the necessary data sources, data
types, and data items for the event isolation and correlation.
Procedure
1. In the GUI, click Data Model.
2. From the project list, select the project EventIsolationAndCorrelation. A list of
data sources specific to the EventIsolationAndCorrelation feature is displayed.
v EIC_alertsdb
v SCR_DB
v EventrulesDB
3. For each data source, update the connection information, user ID, and
password and save it.
4. Configure EIC_alertsdb to the object server where the events are to be
correlated and isolated.
5. Configure SCR_DB to the Services Component Registry database. When you
create the SCR schema, the following tables are created EIC_ACTIONS and
EIC_RULERESOURCE.
Note: When you configure the Services Component Registry (SCR) data
sources, you must point the data sources to what is commonly called the SCR.
The SCR is a schema within the TBSM database that is created when you run
the DB2 schema configuration step. The schema is called TBSMSCR. The database
has a default name of TBSM.
6. You must manually add the tables EIC_ACTIONS and EIC_RULERESOURCE
to the Services Component Registry.
a. Use the following SQL commands to create the tables in your DB2 Services
Component Registry database.
--EIC_ACTIONS
CREATE TABLE EVENTRULES.EIC_ACTIONS (
RULENAME VARCHAR(64), ACTIONNAME VARCHAR(100), POLICYNAME VARCHAR(64),
AUTOEXECUTE char(5) not null,
CONSTRAINT auto_true_false CHECK (AUTOEXECUTE in ('true','false')));
--EIC_RULERESOURCE
CREATE TABLE EVENTRULES.EIC_RULERESOURCE (
RULENAME VARCHAR(65) not null, SERIAL integer, Resources CLOB);
7. Configure the EventRulesDB data source to connect to the Services Component
Registry database.
Configuring Event Isolation and Correlation data types
The EventIsolationAndCorrelation project has a list of predefined data types that
are specific to Event Isolation and Correlation. Except for the data type
EIC_alertquery, which you must configure, the remaining data types are
preconfigured and operate correctly once the parent data sources are configured.
About this task
The following list shows the Event Isolation and Correlation data sources and their
data types:
v EIC_alertsdb
– EIC_alertquery
– EIC_TopologyVisualization
v SCR_DB
The following data types are used to retrieve relationship information from the
Services Component Registry.
– bsmidenties
– getDependents
– getRscInfo
v EventRulesDB
The following data types, which are used by the database, contain the user
configuration for Event Isolation and Correlation.
– EIC_RulesAction
– EIC_RuleResources
– EVENTRULES
– EIC_PARAMETERS
Procedure
1. To configure the EIC_alertquery data type, right-click on the data type and
select Edit.
2. The Data Type Name and Data Source Name are prepopulated.
3. The State check box is automatically selected as Enabled to activate the data
type so that it is available for use in policies.
4. Base Table: Specifies the underlying database and table where the data in the
data type is stored.
5. Click Refresh to populate the table. The table columns are displayed as fields
in a table. To make database access as efficient as possible, delete any fields
that are not used in policies. For information about adding and removing fields
from the data type, see the SQL data type configuration window - Table Description
tab in the Online help.
6. Click Save to implement the changes.
Configuring policies for Event Isolation and Correlation
Policies are not available to run as part of Event Isolation and Correlation by
default. To enable a policy to run with an action for Event Isolation and
Correlation, you must enable the policy in the policy editor. When a policy is
enabled for Event Isolation and Correlation, you can run the policy as part of the
Event Isolation and Correlation process.
About this task
The following predefined action policies are available in the
EventIsolationAndCorrelation project:
v EIC_ActionExecutionExamplePolicy
v EIC_ActionExecutionExamplePolicyJS
You can use these predefined policies or add the following command to your own
action policy. The impactObjects variable must be an array of Impact objects.
For IPL:
EIC_utils.getRuleResourcesAsImpactObjects(RuleName,PrimaryEventSerial,impactObjects)
For JavaScript:
Load("EIC_UtilsJS");
impactObjects=getRuleResourcesAsImpactObjects(MatchingRuleName,PrimaryEventSerial);
When the policy runs, all the resources information and related ObjectServer
events are available to the policy context in a variable called impactObjects.
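As an illustration, an action policy might walk the impactObjects array as in the following sketch. The object fields used here (ResourceName, Serial) are hypothetical placeholders; the real attributes depend on your Services Component Registry data, and in a real JavaScript policy the array would come from getRuleResourcesAsImpactObjects as shown above.

```javascript
// Hypothetical sketch of consuming the impactObjects array in an action
// policy. In a real policy the array would be produced by:
//   Load("EIC_UtilsJS");
//   var impactObjects =
//     getRuleResourcesAsImpactObjects(MatchingRuleName, PrimaryEventSerial);
// Here it is mocked with placeholder fields for illustration only.
var impactObjects = [
  { ResourceName: "dbserver01", Serial: 1001 },
  { ResourceName: "appserver02", Serial: 1002 }
];

// Collect the resource names so that the action can, for example, log
// them or pass them to a downstream notification tool.
function resourceNames(objects) {
  return objects.map(function (o) {
    return o.ResourceName;
  });
}
```

A call such as resourceNames(impactObjects) returns the list of resource names that the rule matched.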
Procedure
1. In the policy editor toolbar, click the Configure Policy Settings icon to open
the policy settings editor. You can create policy input and output parameters
and also configure actions on the policy that relate to the UI Data Provider and
Event Isolation and Correlation options.
2. To enable a policy to run with the Event Isolation and Correlation capabilities,
select the Enable Policy for Event Isolation and Correlation Actions check
box.
3. Click OK to save the changes to the parameters and close the window.
Creating, editing, and deleting event rules
How to create, edit, and delete an event rule for Event Isolation and Correlation.
Procedure
1. Select Event Isolation and Correlation to open the Event Isolation and
Correlation tab.
2. Click the Create New Rule icon to create an event rule. When you create a
rule, the configuration page opens with empty values for the various properties.
3. Click the Edit the Selected Rule icon to edit the existing event rules.
4. Click the Delete the Selected Rule icon to delete an event rule from the system
and the list.
Creating an event rule
Complete the following fields to create an event rule.
Procedure
1. Event Rule Name: Specify the event rule name. The event rule name must be
unique across this system. When you select Edit or New if you specify an
existing event rule name, the existing event rule is updated. When you edit an
event rule and change the event rule name, a new event rule is created with
the new name.
2. Primary Event: Enter the SQL to be run against the ObjectServer that is
configured in the data source EIC_alertsdb. The primary event is the event
that is selected for analysis.
The primary event filter is used to identify if the event that was selected for
analysis has a rule associated with it. The primary event filter is also used to
identify the object in the Services Component Registry database that is
associated with the event.
The object may or may not have dependent entities. During analysis, the
event isolation and correlation feature finds all the dependent entities and
their associated events.
For example, suppose the primary event has three dependent or child entities
and each of these entities has three events that are associated with it. In
total, there are
nine dependent events. Any of these secondary events could be the cause of
the primary event. This list of events is what is termed the list of secondary
events. The secondary event filter is used to isolate one or more of these
events to be the root cause of the issue.
3. Test SQL: Click Test SQL to test the SQL syntax that is specified in the
primary event. Modify the query so that only one row is returned. If there are
multiple rows, you can still configure the rule. However, during analysis only
the first row from the query is used to do the analysis.
4. Secondary Events: The text area is for the SQL to identify the dependent
events. When you specify the dependent events, you can specify variables or
parameters that can be substituted from the primary event information. The
variables are specified with the @ sign. For example, if the variable name is
dbname, it must be specified as @dbname@. An example is Identifier =
'BusSys Level 1.2.4.4' and Serial = @ser@. The variables are replaced
during the analysis step. The information is retrieved from the primary event
that is based on the configuration in the parameters table and shows in the
Variables Assignment section of the page.
5. Extract parameters: Click Extract Parameters to extract the variable name
between @ and populate the parameter table. When the variable information is
extracted into the table, you can edit each column.
a. Select the field against which you want to run the regular expression; a
substitution value is extracted from that field.
b. Enter the regular expression in the regular expression column. The regular
expression follows the IPL Syntax and is run by using the RExtract
function.
c. When the regular expression is specified, click Refresh to validate the
regular expression and check that the correct value is extracted. The table
contains the parameters.
6. Click the Create a new Action icon to add an event-related policy to the
event. A list of policies that are associated with the event that are enabled for
Event Isolation and Correlation are displayed.
7. Select the Auto Execute Action check box to run the policy during the
analysis. When the analysis is complete, you can also run the action by
selecting it.
8. Limit Analysis results to related configuration items in the Service
Component Registry: Select this check box if the analysis is to be limited to
related configuration items only. If the check box is not selected, the
results of the dependent events query are returned without filtering.
9. Primary Event is a root cause event: Select this check box to identify the
primary event as the root cause event and the rest of the events as
symptom-only events.
10. Event Field: Identifies the field in the event that contains the resource
identifier in the Services Component Registry. Select the field from the
drop-down menu that holds the resource identifier in the event.
11. Time window in seconds to correlate events: Add the time period the event
is to analyze. The default value is 600 seconds. The events that occurred 600
seconds before the primary event are analyzed.
12. Click Save Configuration to add the configuration to the backend database.
13. Now the event rules are configured. The event is ready to be analyzed. You
can view the event analysis in the EIC_Analyze page or in the topology
widget in the Dashboard Applications Services Hub.
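The @parameter@ substitution in steps 4 and 5 and the correlation time window in step 11 can be sketched as follows. This is an illustration of the mechanics only; the field names are assumptions, and the real implementation extracts values with the IPL RExtract function rather than a JavaScript match.

```javascript
// Sketch of extracting a parameter value from the primary event. The
// optional regular expression mirrors what RExtract does in IPL: the
// first capture group becomes the substitution value.
function extractParameter(primaryEvent, field, regex) {
  var value = String(primaryEvent[field]);
  if (regex) {
    var m = value.match(regex);
    if (m) {
      value = m[1];
    }
  }
  return value;
}

// Sketch of @name@ substitution in a secondary-events filter. `params`
// maps each variable name to a hypothetical { field, regex } entry from
// the parameters table.
function substitute(filter, primaryEvent, params) {
  return filter.replace(/@(\w+)@/g, function (all, name) {
    var p = params[name];
    return extractParameter(primaryEvent, p.field, p.regex);
  });
}

// Sketch of the correlation time window: keep only secondary events that
// occurred within `windowSeconds` (default 600) before the primary event.
function inTimeWindow(primaryTime, eventTime, windowSeconds) {
  return eventTime <= primaryTime &&
    primaryTime - eventTime <= windowSeconds;
}
```

For example, substituting the filter "Identifier = 'BusSys Level 1.2.4.4' and Serial = @ser@" against a primary event with Serial 42 yields "... and Serial = 42", and only secondary events no more than 600 seconds older than the primary event pass the time-window check.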
Configuring WebGUI to add a new launch point
Configure the WebGUI with a launch out context to launch the analysis page.
About this task
WebGUI can be configured to launch the analysis page. Refer to the procedure for
launch out integration described in the following URL, http://
publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.netcool_OMNIbus.doc_7.4.0/webtop/wip/task/
web_con_integrating.html.
The URL you need for Event Isolation and Correlation is
<IMPACTHOSTNAME>:<IMPACTPORT>/opview/displays/NCICLUSTER-EIC_Analyze.html.
Pass the serial number of the selected row for the event.
Note: NCICLUSTER is the name of the cluster configured during the installation of
Netcool/Impact. You must use the name of your own cluster in the URL.
For example, in Tivoli Business Service Manager the default cluster name is
TBSMCLUSTER. To launch from Tivoli Business Service Manager, you would need to
use the following html file, TBSMCLUSTER-EIC_Analyze.html.
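As an illustration, the launch URL can be assembled as in the following sketch. The host, port, and cluster values are examples only; substitute the values for your own deployment.

```javascript
// Sketch: build the Event Isolation and Correlation analysis launch URL
// from the GUI server host, port, cluster name, and event serial number.
// All values here are example placeholders.
function analyzeUrl(host, port, cluster, serialNum) {
  return "http://" + host + ":" + port +
    "/opview/displays/" + cluster + "-EIC_Analyze.html" +
    "?serialNum=" + serialNum;
}
```

For example, analyzeUrl("impact.example.com", 16311, "TBSMCLUSTER", 42) produces a URL that opens the analysis page for the event with serial number 42 on a Tivoli Business Service Manager cluster.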
Launching the Event Isolation and Correlation analysis page
How to launch the Event Isolation and Correlation analysis page.
About this task
You can launch the Event Isolation and Correlation analysis page in the following
ways:
v Manually by using the webpage and Event Serial number.
v Using the launch out functionality on Active Event List (AEL) or Lightweight
Event List (LEL) from WebGUI.
v Using a topology widget.
Procedure
Open a browser on Netcool/Impact. Use one of the following options:
v Point to <Impact_Home>:<Impact_Port>/opview/displays/NCICLUSTER-EIC_Analyze.html?serialNum=<EventSerialNumber>. Where <Impact_Home> and
<Impact_Port> are the Netcool/Impact GUI Server and port and
EventSerialNumber is the serial number of the event you want to analyze. To
launch the analysis page outside of the AEL (Active Event List), you can add
serialNum=<Serial Number> as the parameter.
v The Event Isolation and Correlation analysis page can be configured to launch
from the Active Event List (AEL) or LEL (Lightweight Event List) within
WebGUI. For more information see, “Configuring WebGUI to add a new launch
point” on page 292. When you create the tool you have to specify only
<Impact_Home>:port/opview/displays/NCICLUSTER-EIC_Analyze.html. You do not
have to specify SerialNum as the parameter, the parameter is added by the AEL
tool.
Viewing the Event Analysis
View the analysis of an Event query in the EIC_Analyze page. You can also view
the results in a topology widget.
About this task
The input for the EIC_IsolateAndCorrelate policy is the serial number of the event
through the serialNum variable. The policy looks up the primary event to retrieve
the resource identifier. The policy then looks up the dependent events based on the
configuration. The dependent events are further filtered using the related
resources, if the user has chosen to limit the analysis to the related resources. Once
the serial number has been passed as the parameter in WebGUI, you can view the
event from the AEL or LEL and launch the Analyze page.
Procedure
Select the event from the AEL or LEL and launch the Analyze page. The
EIC_Analyze page contains three sections:
v Primary Event Information: shows the information on the selected event. This is
the event on which the event isolation and correlation analysis takes place.
v Correlated Events: shows information about the dependent events identified by
the tool. Dependent events are identified as the events that are associated with
the dependent child resources of the device or object that is associated with the
primary event. These events are displayed in the context of dependent resources
that were identified from the Services Component Registry.
v Event Rule Processed: shows the rule which was identified and processed when
this primary event was analyzed.
Visualizing Event Isolation and Correlation results in a topology widget
How to view Event Isolation and Correlation results in a topology widget.
Before you begin
By default, the following data types and policy parameters are available to the UI
data provider to create widgets. Use the following information to check the
configuration for the data types and policy.
v Check that the data types EVENTRULES and EIC_TopologyVisualization in the
EventIsolationAndCorrelation project have the Access the data through UI data
provider selected. In the Data Model, select the data type. Then, select Edit to
view the settings.
v Check that the policy EIC_TopologyVisualization is configured with an output
parameter EIC_Relationship. The EIC_Relationship output parameter shows the
topology in the topology widget, and the EICAffectedEvents output parameter
shows only the ObjectServer events that are related to the resources in the topology.
v The policy EIC_PrimaryEvents includes an output parameter EICPrimaryEvents
with a filter to get the primary events for the specific rule.
About this task
In Jazz for Service Management, in the IBM Dashboard Application Services Hub,
create a page and use two tables widgets and one Topology widget.
Procedure
1. Create a page in the console.
a. Open the console.
b. To create a page, click Settings > Pages > New Page.
c. Enter Page for Event Isolation and Correlation in the Page Name field.
d. To save the page, click Ok.
2. Configure one table widget called Events Rules for the data type
EVENTRULES to show all the rules in the database. This table widget is the
main widget that drives the second table widget.
a. Open the Page for Event Isolation and Correlation page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set. Select the EVENTRULES data type that belongs to the
EventrulesDB data source. The data set information is only displayed after
the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. The system shows all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To save the widget, click Save and exit on the toolbar.
3. Configure another table widget called Primary Events for the policy output
parameter EICPrimaryEvents.
a. Open the Page for Event Isolation and Correlation page that you created.
b. Drag the Table widget into the content area.
c. To configure the widget data, click the down arrow icon and click Edit. The
Select a dataset window is displayed.
d. Select the data set EICPrimaryEvents. The data set information is only
displayed after the defined refresh interval. The default is 5 minutes.
e. The Visualization Settings UI is displayed. The system shows all the
available columns by default. You can change the displayed columns in the
Visualization Settings section of the UI. You can also select the row
selection and row selection type options.
f. To ensure that the policy runs when the widget is displayed, select the
ExecutePolicy check box. Click OK.
g. To save the widget, click Save and exit on the toolbar.
4. Configure a Topology widget called Resource Topology for the output
parameter EIC_Relationship.
a. Open the Page for Event Isolation and Correlation page that you created.
b. Drag the Topology widget into the content area.
c. To configure the widget data, click it. Click the down arrow icon and click
Edit. The Select a dataset window is displayed.
d. Select the data set. Select the EIC_Relationship data set. The data set
information is only displayed after the defined refresh interval. The default
is 5 minutes.
e. The Visualization Settings window is displayed. To ensure that the policy
runs when the widget is displayed, select the executePolicy check box.
Click OK.
f. To save the widget, click Save and exit on the toolbar.
5. Configure a direct wire from the EVENTRULES table widget to the
EICPrimaryEvents table widget.
a. Edit the page.
b. Click the Wires button.
c. Click New Wire.
d. Select the source as the Events Rules widget OnNodeClicked and click OK.
e. Select the target as the Primary Events widget.
f. Repeat these steps and select the Resource Topology widget as the target
to clear the topology.
6. Configure a direct wire from the EICPrimaryEvents table widget to the
EIC_Relationship topology widget.
a. Edit the page.
b. Click the Wires button.
c. Click New Wire.
d. Select the source as the Primary Events widget OnNodeClicked and click
OK.
e. Select the target as the Resource Topology widget.
Tip: If you have not saved the page yet, you can find the Resource
Topology widget in the Select Target for New Wire dialog under Console
Settings / This page.
f. Save the page.
Results
When you click an item in the EVENTRULES table widget, it populates the
EICPrimaryEvents table with the primary events from the objects. When you click
an item in the EICPrimaryEvents table widget, it populates the Topology widget
when the Event Isolation and Correlation operation is completed. Optionally, you
can create a table widget instead of the topology widget to view the related events
only. The table widget is configured for the EICAffectedEvents output parameter.
Reference information for the Service Component Repository API
The Service Component Repository (SCR) Application Programming Interface
(API) allows you to load resource, attribute, and relationship information into the
SCR by using a programmatic interface. The SCR API is an alternative to using
iDML books or loading the information by using TADDM.
The types of resource information that can be added to the SCR by using the SCR
API can also be added by using other SCR import methods. The SCR API takes
advantage of the data integration capabilities of Impact policies, which you can
use to access data from various sources and then introduce resource, attribute,
and relationship data models into the SCR.
The SCR API interface is a set of Impact function calls that run within an Impact
Policy Language (IPL) or JavaScript policy. Using the advanced data access
capabilities of Impact with this API and the support of both Common Data Model
(CDM) and custom namespace models provides a powerful platform for
integrating a broad set of resource and relationship instance dependency models
into SCR and other third-party applications.
For documentation on the advanced features of the Service Component Repository,
see the Tivoli Business Service Management Customization Guide available from the
following URL: http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/
com.ibm.tivoli.itbsm.doc_6.1.1/customization/bsmc_dlkc_importing_scr.html.
Components of the SCR API
The Service Component Registry (SCR) Application Programming Interface (API)
allows you to add new components, attributes, and relationships to the SCR. The
API also allows you to delete components and relationships from the SCR.
The following Impact policy functions use the functions provided by the SCR
API. These functions are not available in the policy editor in the GUI. For each
transaction in which information is entered into the SCR by using this API, the
functions must be run in the sequence in which they are outlined here:
SCRRegister( )
This function creates the unique transaction ID (UUID) and also ensures
that the transaction information is persisted to the registration table. The
unique transaction ID returned from this function is required by all other
functions so that the components and relationships created can be
identified.
SCRCreateComponentInfo( )
This function creates and adds component information values to the
appropriate database table. It returns the unique transaction ID, or where
an exception occurs, it returns a null value.
SCRCreateRelationship( )
This function creates a relationship between two components. It returns the
unique transaction ID, or where an exception occurs, it returns a null
value.
SCRDeleteComponentInfo( )
Where required, this function deletes a component from the SCR. It returns
the unique transaction ID, or where an exception occurs, it returns a null
value.
SCRDeleteRelationship( )
Where required, this function deletes the relationship between two
components. It returns the unique transaction ID, or where an exception
occurs, it returns a null value.
SCRComplete( )
This function commits all the component information and relationship
values for the transaction ID to the database. This function returns 0 when
successful and returns an exception when it fails.
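The required sequence can be sketched as follows. The SCR API functions are only available inside an Impact policy, so this JavaScript sketch stubs them purely to illustrate the order of the calls and how the transaction ID returned by SCRRegister( ) is threaded through every later step; the data source name SCR_DB and the component IDs are illustrative values taken from the examples in this section.

```javascript
// Stub implementations that mimic the SCR API call contract. In a real
// Impact policy these functions are provided by the server; the bodies
// here exist only so that the call sequence below can run standalone.
function SCRRegister(ds, transientId, cdmIdentity, product, srcInfo, transtype) {
  return "uuid-0001"; // the transaction ID used by every later call
}
function SCRCreateComponentInfo(ds, txId, transientId, classType, label,
                                sourceToken, keywords, values) {
  return txId; // returns the transaction ID, or null when an exception occurs
}
function SCRCreateRelationship(ds, txId, relType, sourceId, targetId) {
  return txId; // returns the transaction ID, or null when an exception occurs
}
function SCRComplete(ds, txId) {
  return 0; // 0 on success; an exception is raised on failure
}

// The mandated sequence: register, create components, relate them, commit.
var tx = SCRRegister("SCR_DB", "Service_0001", "Service", "ServiceApp", null, "Delta");
tx = SCRCreateComponentInfo("SCR_DB", tx, "Comp0001", "myinc:ComputerSystem",
                            "NO_LABEL", null, ["tbsm:identity"], ["comp0001-id"]);
tx = SCRCreateComponentInfo("SCR_DB", tx, "Comp0002", "myinc:ComputerSystem",
                            "NO_LABEL", null, ["tbsm:identity"], ["comp0002-id"]);
tx = SCRCreateRelationship("SCR_DB", tx, "tbsm:contains", "Comp0001", "Comp0002");
var rc = SCRComplete("SCR_DB", tx); // rc is 0 when the commit succeeds
```

Nothing is persisted until the final SCRComplete( ) call commits the whole transaction.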
SCR API functions
This section describes each of the Service Component Registry (SCR) Application
Programming Interface (API) functions.
Note: The examples provided to illustrate Service Component Registry (SCR)
Application Programming Interface (API) functions are provided in Impact Policy
Language (IPL) unless otherwise stated. For information about the differences
between IPL and JavaScript, see the Differences between IPL and JavaScript section of
the Policy Reference Guide.
SCRRegister
The SCRRegister function creates the unique transaction ID (UUID) and also
ensures that the transaction information is persisted to the registration table.
The unique transaction ID returned from this function is required by all other
functions so that the components and relationships referenced can be correctly
associated with this source.
Syntax
SCRRegister(dataSource, transientId, cdmIdentity, productName, sourceControlInfo,
transtype);
Parameters
This function has the following parameters.
Table 82. SCRRegister function parameters

dataSource (String)
    The name of the data source created in the data model.
transientId (String)
    ID that locally identifies what is being registered.
cdmIdentity (String)
    MSS name for the source.
productName (String)
    Name of the product being used.
sourceControlInfo (String)
    Source control information.
transtype (String)
    Set this parameter to Refresh if you are synchronizing all components
    and relationships as known by the source. Otherwise, set it to Delta,
    which allows components and relationships to be deleted as well as
    created individually.
Example
The following example returns the transaction ID and allows for components and
relationships to be deleted as well as created.
DataSource="SCR_DB";
TransientID="Service_0001";
CDMIdentity="Service";
ProductName="ServiceApp";
SourceControlInfo=NULL;
Transtype="Delta";
transactionID=SCRRegister(DataSource, TransientID, CDMIdentity,
ProductName, SourceControlInfo, Transtype);
SCRCreateComponentInfo
The SCRCreateComponentInfo function creates and adds component information
values to the appropriate database table. It returns the unique transaction ID, or
where an exception occurs, it returns a null value.
Syntax
transactionID = SCRCreateComponentInfo(dataSource, transactionId, transientId,
classType, label, sourceToken, keywords, values, modifyOnly);
Parameters
This function has the following parameters.
Table 83. SCRCreateComponentInfo function parameters

dataSource (String)
    The name of the data source created in the data model.
transactionId (String)
    The unique transaction ID returned from SCRRegister().
transientId (String)
    Unique ID of the component within the context of the transaction
    started with the SCRRegister function call.
classType (String)
    Resources that are created in the service component registry must have
    a class type in the format namespace:resourceclass. The namespace
    designation lessens the likelihood of unexpected class collisions. The
    namespace can be set to something as simple as a short character
    designation of a company, for example myinc. The class name must
    describe the type of resource to be created, for example:
    v myinc:ComputerSystem
    v myinc:Application
    v myinc:Router
label (String)
    Label of the component that is being created.
sourceToken (String)
    Source token for the component that is being created.
keywords (String)
    The keywords array allows keyword attributes to be added for each
    component you create. Every resource that you create must include a
    keyword and value combination that uniquely identifies the resource.
    By default, you can use tbsm:identity as the identity keyword for your
    resources.
values (String)
    The values array allows value attributes to be added for each component
    that is being created. If you assigned a keyword named tbsm:identity,
    the value must be a unique identifying string for this resource.
modifyOnly (Boolean)
    (Optional) When true, an existing component is updated if it exists.
    When false, the component is created if it does not exist or updated
    if it does exist.
Example
The following example uses the transactionID created by the SCRRegister function
and creates a resource in the SCR called Comp0001.
ComponentTransientID="Comp0001";
ClassType="myinc:ComputerSystem";
Label="NO_LABEL";
SourceToken=NULL;
Keywords = {"tbsm:identity"};
Values = {"Comp0001-uniquestring"};
transactionID=SCRCreateComponentInfo(DataSource, transactionID,
ComponentTransientID, ClassType, Label, SourceToken, Keywords,
Values);
SCRCreateRelationship
The SCRCreateRelationship function creates a relationship between two
components. It returns the unique transaction ID, or where an exception occurs, it
returns a null value.
Syntax
transactionID = SCRCreateRelationship(dataSource, transactionId,
relationshipType, sourceId, targetId);
Parameters
This function has the following parameters.
Table 84. SCRCreateRelationship function parameters

dataSource (String)
    The name of the data source created in the data model.
transactionId (String)
    The unique transaction ID returned from SCRRegister().
relationshipType (String)
    The relationship type, for example tbsm:contains.
sourceId (String)
    Transient ID of the relationship source.
targetId (String)
    Transient ID of the relationship target.
Example
The following example uses the transactionID created by the SCRRegister
function. It creates a relationship between the two components created by the
SCRCreateComponentInfo function.
RelationshipType="tbsm:contains";
SourceID="Comp0001";
TargetID="Comp0002";
transactionID=SCRCreateRelationship(DataSource, transactionID,
RelationshipType, SourceID, TargetID);
SCRDeleteComponentInfo
Where required, the SCRDeleteComponentInfo function deletes a component from
the SCR. It returns the unique transaction ID, or where an exception occurs, it
returns a null value.
Syntax
transactionID = SCRDeleteComponentInfo(dataSource, transactionId, transientId,
classType, label, sourceToken, keywords, values);
Parameters
This function has the following parameters.
Table 85. SCRDeleteComponentInfo function parameters

dataSource (String)
    The name of the data source created in the data model.
transactionId (String)
    The unique transaction ID returned from SCRRegister().
transientId (String)
    Unique ID of the component that is being deleted.
classType (String)
    Class type of the component that is being deleted.
label (String)
    Label of the component that is being deleted.
sourceToken (String)
    Source token for the component that is being deleted. This value
    cannot be Null.
keywords (String)
    The keywords array allows keyword attributes to be specified for the
    component that is being deleted.
values (String)
    The values array allows value attributes to be specified for the
    component that is being deleted.
Example
The following example uses the transactionID created by the SCRRegister
function.
// DataSource declared
// transactionID created by the SCRRegister function
ComponentTransientID="Comp0001";
ClassType="ServiceComponent";
Label="NO_LABEL";
SourceToken=NULL;
Keywords={};
Values={};
// delete the component
transactionID=SCRDeleteComponentInfo(DataSource, transactionID,
ComponentTransientID, ClassType, Label, SourceToken, Keywords,
Values);
SCRDeleteRelationship
Where required, the SCRDeleteRelationship function deletes the relationship
between two components. It returns the unique transaction ID, or where an
exception occurs, it returns a null value.
Syntax
transactionID = SCRDeleteRelationship(dataSource, transactionId,
relationshipType, sourceId, targetId);
Parameters
This function has the following parameters.
Table 86. SCRDeleteRelationship function parameters

dataSource (String)
    The name of the data source created in the data model.
transactionId (String)
    The unique transaction ID returned from SCRRegister().
relationshipType (String)
    The relationship type, for example tbsm:contains.
sourceId (String)
    Transient ID of the relationship source.
targetId (String)
    Transient ID of the relationship target.
Example
The following example uses the transactionID created by the SCRRegister
function. It deletes the relationship between the two components that are
deleted by the SCRDeleteComponentInfo function:
RelationshipType="tbsm:contains";
SourceID="Comp0001";
TargetID="Comp0002";
transactionID=SCRDeleteRelationship(DataSource, transactionID,
RelationshipType, SourceID, TargetID);
SCRComplete
The SCRComplete function commits all the component information and relationship values
for the transaction ID to the database. This function returns 0 when successful and
returns an exception when it fails.
Syntax
SCRComplete(dataSource, transactionId);
Parameters
This function has the following parameters.
Table 87. SCRComplete function parameters

dataSource (String)
    The name of the data source created in the data model.
transactionId (String)
    The unique transaction ID returned from SCRRegister().
Example
The following example uses the transactionID created by the SCRRegister
function. It returns 0 if it completes correctly; otherwise, an exception is thrown.
ReturnValue = SCRComplete(DataSource, transactionID);
SCRReadRelationships
The SCRReadRelationships function returns a list of all relationships.
Syntax
DataItems = SCRReadRelationships(dataSource, dependencyDirection,
relationshipType, sourceLabel, sourceClass, targetLabel, targetClass);
Parameters
This function has the following parameters.
Table 88. SCRReadRelationships function parameters

dataSource (String)
    The name of the data source created in the data model.
dependencyDirection (String)
    Filters by dependency direction.
relationshipType (String)
    Filters by relationship type.
sourceLabel (String)
    Filters by source label.
sourceClass (String)
    Filters by source class.
sourceToken (String)
    Filters by source token.
targetLabel (String)
    Filters by target label.
targetClass (String)
    Filters by target class.
Example
The following example returns the relationships within the data source, using
the parameters as filters:
DataSource = "SCR_DB";
DataItems = SCRReadRelationships(DataSource, NULL, "tbsm:contains", "NO_LABEL",
"a:Organization", "NO_LABEL", "a:Person");
SCRReadManagedElements
The SCRReadManagedElements function returns all managed elements, filtered by
class type if the value is not null.
Syntax
DataItems = SCRReadManagedElements(dataSource, classType);
Parameters
This function has the following parameters.
Table 89. SCRReadManagedElements function parameters

dataSource (String)
    The name of the data source created in the data model.
classType (String)
    Filters by class type.
Example
This example returns all the managed elements, filtered by the class type
a:Organization:
DataSource = "SCR_DB";
DataItems = SCRReadManagedElements(DataSource, "a:Organization");
Creating a Netcool/Impact policy using the SCR API
This Netcool/Impact policy retrieves resources from a database and creates
components and relationships for the resources in the Service Component
Repository (SCR). For this policy to run successfully, the database must be
populated with sample data. The sample policy in this topic uses the SCR API
functions to populate the database. Instructions for using this sample policy
are embedded in the code text shown.
This policy uses the following SCR functions:
v SCRRegister(): to register the transaction taking place
v SCRCreateComponentInfo(): to create each component from each resource
retrieved from the database
v SCRCreateRelationship(): to create a relationship between the components
v SCRComplete(): to persist all the components and relationships created.
Note: The components and relationships can be viewed in the CRViewer after the
Discovery Library Toolkit has run.
Sample Impact policy
This sample JavaScript policy creates two resources in the SCR, one called
Comp0001 and one called Comp0002. The policy creates a relationship between
Comp0001 (the parent) and Comp0002 (the child).
The example uses the tbsm:resource namespace class designation, which you can
use for your own purposes; you can also create a company-specific class
structure.
TransactionID = SCRRegister
("SCR_DB", "Service_0001", "Service", "ServiceApp", null, "Delta");
Log("TransactionID is " + TransactionID);
TransactionID = SCRCreateComponentInfo
("SCR_DB", TransactionID, "Comp0001", "tbsm:resource", "NO_LABEL",
null, ["tbsm:identity"], ["comp0001-uniquestring"] );
TransactionID = SCRCreateComponentInfo
("SCR_DB", TransactionID, "Comp0002", "tbsm:resource", "NO_LABEL",
null, ["tbsm:identity"], ["comp0002-uniquestring"] );
TransactionID = SCRCreateRelationship
("SCR_DB", TransactionID, "tbsm:contains", "Comp0001", "Comp0002");
SCRComplete("SCR_DB", TransactionID);
Chapter 13. Simple email notification
The Simple Email Notification solution provides a way to configure email
notifications that are sent based on configured rules.
This solution is based on IBM Tivoli Netcool/Impact, and IBM Dashboard
Application Services Hub, which comes as part of IBM Jazz for Service
Management. To make this solution work, you need the IBM Dashboard
Application Services Hub component.
The solution provides a GUI to add configuration and control the rules to send
emails. The solution is packaged in a compressed file in the $NCHOME/add-ons/
notifications/ directory.
v For UNIX systems, use the Notifications.jar file.
v For Windows, use the Notifications.zip file
Each compressed file consists of the following parts:
v A db directory that contains SQL files that can be used to create the database
tables in Apache Derby or DB2.
v An importData directory that contains an Impact project with the artifacts,
such as policies, data source, and data type configurations, that this solution
needs to work.
v A data.zip file that contains a sample Dashboard page that must be
customized for your environment.
Installation
You must complete the following steps before you can use the solution.
1. In your database, create the database table that you want to use to store the
configuration in.
2. For UNIX systems, expand the IMPACT_HOME/add-ons/notifications/
Notifications.jar file. Import the Netcool/Impact artifacts in the importData
directory into the Impact Server by using the IMPACT_HOME/bin/nci_import
script.
3. Update the EmailSender service with a working SMTP server.
4. Install the Impact_SSD_Dashlet.war file in the IBM Dashboard Application
Services Hub. Then, import the exported page in the data.zip file into the IBM
Dashboard Application Services Hub.
v For Windows, use the same process to install the Notifications.zip file.
Creating the database table
Apache Derby or DB2 can be used as the database to store the configuration. You
can also use the internal Impact database.
Procedure
1. To create the table in the Derby database, edit the notifications_derby.sql file
in the db directory of the compressed Notifications file.
v For UNIX systems use the Notifications.jar file.
v For Windows, use the Notifications.zip file.
2. Replace derbypass with your Derby password.
3. After you update the file, go to the IMPACT_HOME/bin directory. Run the nci_db
command to create the table. The command must be run from the
Netcool/Impact primary server.
Use the following command syntax.
nci_db connect -sqlfile <files_extracted_location>/db/notifications_derby.sql
Where <files_extracted_location> is the directory where the compressed
Notifications file was extracted to.
4. If you are using a DB2 database see “Using DB2 as the database” on page 309.
Importing Netcool/Impact Artifacts
About this task
The Netcool/Impact artifacts are in the importData directory in the
<files_extracted_location>.
Procedure
Use the nci_import command to import the Netcool/Impact artifacts into the
Impact Server.
nci_import <ServerName> <files_extracted_location>/importData
Where <ServerName> is the Impact server name for example: NCI.
Updating the email service
Update the email service with a working SMTP server.
Procedure
1. Log in to Netcool/Impact. Go to the Services tab.
2. Edit the EmailSender service.
3. Enter a valid SMTP host and port.
Importing the exported page into the IBM Dashboard
Application Services Hub
How to install the war file and import the dashboard into the IBM Dashboard
Application Services Hub.
Procedure
1. Before you import the data.zip file, the Impact_SSD_Dashlet.war file must be
installed in the IBM Dashboard Application Services Hub.
2. For information about installing the war file into the IBM Dashboard
Application Services Hub, see “Installing the Netcool/Impact Self Service
Dashboard widgets” on page 157.
3. The data.zip file contains the sample dashboard, which accesses the data
through the Remote Provider Connection name and Provider ID and can
point to any cluster.
4. To import the dashboard, copy the data.zip file into the DASH_HOME/ui/input
directory. If the input directory does not exist, create the directory.
5. Run the following command:
DASH_HOME/ui/bin/consolecli.sh Import
--username <console_admin_user_ID>
--password <console_admin_password>
--excludePlugins TCRImportPlugins
6. For more information about this command, see the following link in the Jazz
for Service Management information center:
http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=
%2Fcom.ibm.psc.doc_1.1.0%2Ftip_original%2Fttip_import.html
7. If you are going to use DB2 or a different cluster name, further customization
is required after you import the dashboard. For information, see “Using DB2
as the database” on page 309.
Results
The Impact Notification dashboard page is now available in the Default folder.
Creating a Remote Connection
How to configure a remote connection between the IBM Dashboard Application
Services Hub and Netcool/Impact.
Procedure
1. Log in to the IBM Dashboard Application Services Hub.
2. From the settings in the navigation bar, open the Connections tab.
3. In the connections page, enter the host name where your Netcool/Impact GUI
Server is installed.
4. Enter the admin user ID and password. Click Search.
5. The list of providers is displayed in the table. Select Impact_NCICLUSTER or
similar depending on your cluster name.
6. Modify the name and the provider ID as follows:
IMPACT_EMAIL_NOTIFICATION_ADDON
7. Click OK to implement the changes.
Updating the ObjectServer data source configuration
How to update the ObjectServer data source to use event filtering.
Procedure
1. Log in to Netcool/Impact.
2. In the Projects list, switch to the Notifications project.
3. Select Data Model.
4. Edit the ImpactNotificationsOS data source and update the user name,
password, host name, and port for the ObjectServer. The ObjectServer is
used for event filtering.
Email notification GUI
How to use the email notification in your environment.
To start the email notification GUI, click the Default folder button in the IBM
Dashboard Application Services Hub and choose Impact Notifications.
The GUI has the following sections:
v User Input: Save Email Notification
v Configured Email Notifications: Email Configuration
v Sample Results: Sample Results
Creating an email notification
The Save Email Notification form provides a way to configure event rules for
which email needs to be generated.
Procedure
1. Add the event filter in the Event Filter field. The event filter must be valid
ObjectServer SQL.
2. Enter the subject for the email in the Email Subject field. For example, @Node@
@Severity@ event has occurred.
3. Enter the body of the email in the Email Body field. The Email Subject and
Email Body fields can substitute values from the event. If a column from
Netcool/OMNIbus is added to the email, the variable substitution must be in
the format @COLUMNNAME@. For example, if the value of the Node column is
added to the subject or body of the email, specify @Node@ in the string. The
value is substituted when the email is generated.
4. To, CC, and BCC email addresses can be entered in the appropriate text boxes.
5. Select the Activate Filter check box if you want to activate this filter while
you are adding or updating this configuration.
6. Click the Save Email Notification button.
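The variable substitution described in step 3 can be sketched as follows. This JavaScript sketch is not the solution's actual implementation; it only illustrates how tokens such as @Node@ and @Severity@ are replaced with the corresponding event column values when the email is generated.

```javascript
// Replace each @ColumnName@ token in a subject or body template with the
// matching field of the event. Tokens with no matching column are left as-is.
function substituteEventFields(template, event) {
  return template.replace(/@(\w+)@/g, function (match, column) {
    return Object.prototype.hasOwnProperty.call(event, column)
      ? String(event[column])
      : match;
  });
}

var event = { Node: "router1", Severity: 5 };
var subject = substituteEventFields("@Node@ @Severity@ event has occurred", event);
// subject is "router1 5 event has occurred"
```

Because the token names must match ObjectServer column names exactly, @node@ would not be substituted where the column is named Node.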
Viewing configured email notifications and sample results
How to view configured email notifications and sample results.
About this task
The Email Notification section shows the email notification configurations that are
created. The list is updated when a new configuration is added. If you did not
select the Activate Filter check box, you can also activate a row in the
configuration by selecting the Activate button.
v Select the row that you want to activate, and click the Activate button.
v To delete a configuration, select the row and click the Delete button.
v To deactivate a configuration, select the row in the table and click the Deactivate
button.
The Sample Results table shows up to 10 rows of sample subject and body values
that are created for the selected configuration. This table is updated when you
choose a row in the Email Notification table.
Activating email notifications
How to activate the email notifications by editing the
ImpactEmailNotificationPolicyActivator service.
Procedure
1. To activate email notifications, log in to Netcool/Impact.
2. Click Services to open the Services tab.
3. Switch to the Notifications project.
4. Edit the ImpactEmailNotificationPolicyActivator service and update the
Activation Interval to a value that suits your environment. The default is
30 seconds.
5. Start the ImpactEmailNotificationPolicyActivator service.
Using DB2 as the database
To enable DB2 as the database, complete the following steps:
v Create the database and the table.
v Update the DB2 data source ImpactNotificationsDB2 with your DB2 user ID
and password to access the DB2 table.
v Update the notificationConstants policy.
v Update the IBM Dashboard Application Services Hub GUI to pull the data
from DB2.
Creating the database and the table
How to set up the DB2 database with the notifications_db2.sql file and update
the DB2 data source in Netcool/Impact.
Procedure
1. If you want to use the DB2 database in your DB2 environment, create a
database that is called NOTIFCTN.
2. Copy the notifications_db2.sql file to your DB2 environment.
3. Connect to the new database by running db2 connect to NOTIFCTN.
4. Run the db2 -tvf notifications_db2.sql command to create the table.
5. Remove the database connection by running db2 connect reset.
Updating the DB2 data source
Procedure
1. In Netcool/Impact, click Data Model to open the Data Model tab.
2. Select the Notifications Project.
3. Edit the ImpactNotificationsDB2 data source.
4. Update the user ID, password, port, and the host for this configuration.
Updating the notificationConstants policy
Update the notificationConstants policy variables for the Derby and DB2
databases so that the email notifications work correctly.
Procedure
1. Click Policies to open the Policies tab.
2. Select the Notifications project.
3. For DB2 and for Derby databases, you must update the impact_sender_address
variable to ensure that the email that is sent has a valid sender address. For
example:
var impact_sender_address="impact@ibm.com";
4. For a DB2 database, you must update the emailConfigDTName and
emailConfigDSName variables:
a. Edit the notificationConstants policy. Update the emailConfigDTName
variable to EmailConfigurationDB2.
b. In the same policy, update the emailConfigDSName variable to
ImpactNotificationsDB2.
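After both updates, the relevant assignments in the notificationConstants policy would look similar to the following sketch; the sender address is an example value that you replace with one valid for your environment.

```javascript
// notificationConstants settings for a DB2 configuration (sketch).
var impact_sender_address = "impact@ibm.com";      // must be a valid sender address
var emailConfigDTName = "EmailConfigurationDB2";   // data type name for DB2
var emailConfigDSName = "ImpactNotificationsDB2";  // data source name for DB2
```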
Index
A
Access policy output parameters 202
accessibility viii
Accessing an array of Netcool/Impact
objects 112
Accessing data type output 209
add-ons
Maintenance Window
Management 277, 278, 279, 281
B
books
see publications vii
Button widget 160
C
Column labels 168
configuring data sources 288
configuring data types 289
conventions
typeface xi
Creating a widget on a page in the
console 106
Creating an event rule 291
Creating editing and deleting an event
rule 291
customer support ix
Customized links 144
D
data
adding 78, 79
deleting 81, 82
retrieving by filter 69
retrieving by key 75
retrieving by link 77
updating 80
data items 6, 69
field variables 69
data mashups 135
data model 1
components 5
creating 3
data models
architecture 7
examples 7
setting up 6
data sources
architecture 13
categories 10
creating 13
JMS 12
LDAP 11
Mediator DSA 11
overview 5, 10
setting up 13
© Copyright IBM Corp. 2006, 2016
data sources (continued)
SQL database 11
data type
auto-populating 19
data item ordering 20
getting name of structural
element 19
LDAP 15
SQL 15
data types 5
caching 20
categories 14
data item filter 20
external 14
fields 17
internal 14, 15
keys 18
mediator 15
overview 14
predefined 14
predefined internal 16
setting up 18
system 16
user-defined internal 16
Data types and the DirectSQL
function 111
database event listener
creating call spec 42
creating triggers 42, 44, 45, 46, 47
editing listener properties file 39
editing nameserver.props file 38
example triggers 47, 48, 49, 50, 51,
52, 53, 54
granting database permissions 40
installing client files into Oracle 39
sending database events 41
setting up database server 37
writing policies 55, 56
database functions
calling 82
DataItem (built-in variable) 69
DataItems (built-in variable) 69
directory names
notation xii
E
education
See Tivoli technical training
enterprise service model
elements 8
environment variables
notation xii
event container 63
event enrichment 2
event fields
accessing 64
updating 64
event gateway 2
Event Isolation and Correlation
288, 289, 290, 291
Event Isolation and Correlation operator
views 288
Event Isolation and Correlation
policies 288
Event Isolation and Correlation
Services 288
event notification 2
event querying
reading state file 28
event reader
actions 29
event locking 29
event matching 29
event order 30
mapping 28
event readers
architecture 27
process 27
event sources 21
Architecture 23
non-ObjectServer 22
ObjectServer 21
events 63
adding journal entries to 64, 65
deleting 66
sending new 65
F
Filter data output 147
Filtering data in the console 114
filters 70
LDAP filters 71
Mediator filters 72
SQL filters 70
fixes
obtaining viii
G
GetByFilter 139
I
Installing Discovery Library Toolkit 283, 284
Installing Self Service Dashboard
widgets 157
Installing the DB2 database 284
instant messaging 85
internal data repository 12
J
Jabber 85
JMS
data source 12
K
key expressions 75
keys 75
multiple key expressions
75
L
large data model support 164
Large data model support 162
large data models 163
Launching the Event Isolation and
Correlation analysis page 293
LDAP data sources
creating 11
LDAP filters 71
links 6
overview 77
M
manuals
see publications vii
Mediator DSA
data sources 11
Mediator filters 72
multiple key expressions 75
MWM
See also Auditing changes to MWM
configuration
See Maintenance Window
Management
N
Netcool/Impact data types as OSLC
resources 182
notation
environment variables xii
path names xii
typeface xii
O
ObjectServer event source
setting up 23
omnibus event listener
triggers 58
using ReturnEvent 59
omnibus event reader
event querying 28
event queueing 28
OMNIbus triggers 60
OMNIbusEventListener 60
OMNIbusEventReader retrying
connection on error 33
OMNIbusEventReader with an
ObjectServer pair 30
online publications
accessing vii
operator view EIC_Analyze 293
ordering publications viii
OSLC 179
OSLC and data types 182
OSLC and variables from policy
results 202
OSLC introduction 179
OSLC resource shapes for data
types 185
OSLC resources and identifiers 181
Overview 283
P
Passing argument values to a policy 214
Passing parameter values from a table to
a gauge 149
Passing parameter values from a widget
to a policy 148
path names
notation xii
percentage 121
pie chart widget 130
policies
creating 3
policy 1
retrieving data by filter 73, 74
retrieving data by key 76
retrieving data by link 77
problem determination and resolution x
publications vii
accessing online vii
ordering viii
R
RDFRegister function 198, 233
RDFUnRegister function 200, 236
S
SCR
API
SCRComplete 302
SCRCreateComponentInfo 298
SCRCreateRelationship 300
SCRDeleteComponentInfo 300
SCRDeleteRelationship 301
SCRReadManagedElements 303
SCRReadRelationships 303
SCRRegister 297
functions
SCRComplete 302
SCRCreateComponentInfo 298
SCRCreateRelationship 300
SCRDeleteComponentInfo 300
SCRDeleteRelationship 301
SCRReadManagedElements 303
SCRReadRelationships 303
SCRRegister 297
Self service dashboard widgets 157
Serial rollover 33
service
database event listener 36, 41
omnibus event listener 58
OMNIbus event listener 57, 58
OMNIbus event reader 26
service model
enterprise 8
services
overview 25
predefined 25
services (continued)
setting up 3
user-defined 26
working with 1, 25
Simple Email Notification 306
Software Support
contacting ix
overview viii
receiving weekly updates ix
solution
running 3
solution components 1
solutions
setting up 3
types 2
SQL filters 70
Status 121
T
Tivoli Information Center vii
Tivoli technical training viii
tooltips 170
topology widget 172
training
Tivoli technical viii
Tree widget and array of Impact
objects 115
typeface conventions xi
U
UI data provider 164
UI data provider and GetByFilter
function 110
UI data provider customization 167
Uninstalling the Self Service Dashboard
widgets 158
updating data 79
Updating the RCA components
view 285
User parameters 107, 207
using Spid 60
V
variables
notation for xii
Viewing Event Isolation and Correlation
results 292, 293
Visualizing a data mashup from two IBM
Tivoli Monitoring sources 152
Visualizing a policy action in a
widget 131
Visualizing data mashups with an array
of Impact objects 137
Visualizing data output 139
Visualizing data with the tree and
topology widgets 142
Visualizing Event Isolation and
Correlation 155, 294
W
Web hosting model 9
elements 10
WebGUI 292
working with data models 5
X
x events in y time 2
IBM®
Printed in USA
SC27-4923-04