Using Oracle Log Analytics
Oracle® Cloud
Using Oracle Log Analytics
E60700-28
March 2018
Oracle Cloud Using Oracle Log Analytics,
E60700-28
Copyright © 2015, 2018, Oracle and/or its affiliates. All rights reserved.
Primary Author: Oracle Corporation
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify,
license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for
interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents

Preface
    Audience
    Documentation Accessibility
    Related Resources
    Conventions

1 Getting Started with Oracle Log Analytics
    About Oracle Log Analytics
    About Oracle Log Analytics Roles and Users
    Before You Begin with Oracle Log Analytics

2 Troubleshooting Problems Using Oracle Log Analytics
    Typical Workflow for Troubleshooting Problems
    Searching by Entities
    Searching Logs Using Keywords and Phrases
    Uploading Log Data on Demand
    Filtering by Collection Details and Fields
        Filtering by Source Attributes
        Filtering by Labels
        Filtering by Data Uploaded on Demand
        Filtering by Fields in Log Messages
        Renaming a Field
        Filtering by Field Range
    Example Scenario: Performing Dynamic Log Analysis
    Example Scenario: Detecting Anomalies Using Outliers

3 Transforming Logs into Operational Insight
    Typical Workflow for Developing Operational Insights
    Saving and Sharing Log Searches
        Creating Alerts for Saved Searches
        Creating a Saved Search from an Existing One
        Creating Alerts for Existing Saved Searches
        Viewing and Editing Alert Rules
        Viewing Saved Search Anomaly Alerts and Baseline Charts
        Associating Saved Search Alerts with Entities
        Generating Inline Alerts
    Exporting Search Results
    Comparing Log Records
    Visualizing Data Using Charts and Controls
        Using Log Scales
        Viewing the Field Summary
        Configuring the Display of the Field Summary
        Viewing an Entity Card
        Using Bar Charts
        Using Clusters
        Using Line Charts
        Using Maps
        Using Summary Tables
        Using Word Cloud
        Using Link
    Performing Advanced Analytics with Link
    Using Out-of-the-Box Widgets
    Creating Custom Dashboards
    Printing Dashboards

4 Administering Oracle Log Analytics
    Typical Workflow for Administering Oracle Log Analytics
    Installing Oracle Management Cloud Agents
    Working with Log Sources
        Configuring New Log Sources
            Creating a Parser
            Creating a Log Source
            Creating a Label
            Creating Lookups
            Using the Generic Parser
            Configuring Field Enrichment Options
            Setting Up Syslog Monitoring
            Setting Up Database Instance Monitoring
        Managing Existing Log Sources
            Editing Log Sources
            Associating Entities to Existing Log Sources
            Creating a Log Source Based on an Existing One
    Working with Entity Associations
        Configuring New Entity Associations
        Managing Existing Entity Associations
    Viewing Collection Warnings
    Purging Log Data

A Understanding Log Analytics Search Commands
B Out-of-the-Box Log Sources
C Out-of-the-Box Labels
D Sample Parse Expressions
E Entity Types Modeled in Oracle Log Analytics
F SQL Query Guidelines
G List of Non-Facetable Fields
Preface

Oracle Log Analytics provides a platform for searching and analyzing logs that are collected from entities, so that you can troubleshoot the issues encountered in them. You can also identify potential issues and plan to mitigate the errors.

Topics:
•  Audience
•  Documentation Accessibility
•  Related Resources
•  Conventions
Audience
Using Oracle Log Analytics is intended for users who want to analyze and monitor log
data across the enterprise from sources such as system logs, network access logs,
database logs, error logs, OS operations logs, application server logs, and many more.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Related Resources
For more information, see these Oracle resources:
•  http://cloud.oracle.com
•  Using Oracle Application Performance Monitoring
•  Using Oracle IT Analytics
Conventions
The following text conventions are used in this document:

boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1 Getting Started with Oracle Log Analytics

Topics:
•  About Oracle Log Analytics
•  About Oracle Log Analytics Roles and Users
•  Before You Begin with Oracle Log Analytics
About Oracle Log Analytics
Oracle Log Analytics is a unified, integrated cloud solution that lets you monitor,
aggregate, index, analyze, search, explore, and correlate all log data from your
applications and system infrastructure.
Using Oracle Log Analytics, you can:
•  Explore logs specific to the application that's experiencing a problem
•  Analyze and explore log data efficiently
•  Gain business and IT operational insight from log data
•  Rapidly obtain key values from the logs and collate them
Log events are loaded, analyzed, field-enriched, and indexed in Oracle Log Analytics. These operations can be performed by using out-of-the-box parsers, user-defined labels, or extended field definitions. Depending on the amount of field enrichment done for each log event, the index size (the unit of measure for metering and billing) in Oracle Log Analytics may vary between 1.2 and 1.8 times the original log volume; for example, 100 GB of raw log data may consume roughly 120 GB to 180 GB of index. While Oracle provides users with guidance on the amount of overhead that these activities will create in the indexes, the actual amount depends on the specific operations defined or performed by the users.
Note:
Oracle Log Analytics provides National Language Support (NLS) for ingesting logs that contain single-byte and double-byte character sets. NLS is available for the following nine languages:
•  French
•  German
•  Italian
•  Spanish
•  Brazilian Portuguese
•  Japanese
•  Korean
•  Simplified Chinese
•  Traditional Chinese

Watch this short video to get a brief overview of searching and analyzing logs.
About Oracle Log Analytics Roles and Users

If you're a new customer and have purchased an Oracle Management Cloud license edition, such as Standard edition or Enterprise edition, then after the instance is created, the following roles are provisioned:
•  Oracle Management Cloud <instance name> Administrator
•  Oracle Management Cloud <instance name> User

For more information about the tasks that users assigned with the above roles can perform, see Adding Users and Assigning Roles in Getting Started with Oracle Management Cloud.
However, if you're an existing customer and you've purchased the standalone Oracle Log Analytics service, then the following roles are created:

Oracle Log Analytics Administrator:
•  Set up Oracle Log Analytics.
•  Monitor targets.
•  Create and administer new log sources.
•  Create and administer new log parsers.

Oracle Log Analytics User:
•  Select targets, groups, or systems to explore.
•  Search and analyze logs.
•  Save and share log searches.
•  Build custom dashboards.
Before You Begin with Oracle Log Analytics
Here are some of the common terms and basic concepts for Oracle Log Analytics.
Oracle WebLogic Server is the Java EE application server, part of the Oracle Fusion
Middleware suite of products, used for building and deploying enterprise applications.
An Oracle WebLogic Server Cluster consists of multiple Oracle WebLogic Server
instances running simultaneously and working together to provide increased scalability
and reliability.
A WebLogic domain is a logically related group of Oracle WebLogic Server resources.
Domains include a special Oracle WebLogic Server instance called the
Administration Server, which is the central point from which you configure and
manage all resources in the domain. Usually, you configure a domain to include
additional Oracle WebLogic Server instances called Managed Servers. You deploy
web applications, EJB, and other resources onto the Managed Servers and use the
Administration Server for configuration and management purposes only.
Oracle home refers to a directory where Oracle products are installed, pointed to by
an environment variable. Multiple active Oracle homes can exist on the same host.
An entity is a monitored resource, such as a database, a host server, an Oracle Cloud
compute resource, an application server, a network device, a syslog server, and so on.
A log source is a named group of log files. The files that belong to this group can be
configured using patterns such as /var/log/ssh*. A log source can be associated with
one or more parsers.
A log entity is the actual name of a log file.
A parser is a named entity used to define how to parse all log entries in a log source
and extract field information. It uses one or multiple parse expressions and a log entry
delimiter to parse all log entries in a log source. It also specifies how the parsed
content is converted into fields.
A parse expression is the regular expression used to parse a log entry.
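For example, a log source for SSH logs might collect the files matching /var/log/ssh*, and its parser might use a parse expression like the following. This regular expression is an illustrative sketch only, not one of the shipped parsers (see Sample Parse Expressions for the out-of-the-box examples); it splits a syslog-style entry such as "Jun 20 15:19:29 host22 sshd[6109]: Accepted publickey for user1" into parts:

(\w{3})\s+(\d{1,2})\s+(\d{2}:\d{2}:\d{2})\s+(\S+)\s+(\w+)\[(\d+)\]:\s+(.*)

Each parenthesized capture group is then mapped to a field, such as the month, day, time, host name, service, process ID, and message.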
2 Troubleshooting Problems Using Oracle Log Analytics

Using Oracle Log Analytics, you can search any logs and drill down to specific log entries to resolve problems quickly.

The described procedures and scenarios use an example application named RideShare, targeted at customers interested in carpool and vanpool services. As a DevOps administrator, you're responsible for troubleshooting problems related to this application, which is critical to your business.

Topics:
•  Typical Workflow for Troubleshooting Problems
•  Searching by Entities
•  Searching Logs Using Keywords and Phrases
•  Uploading Log Data on Demand
•  Filtering by Collection Details and Fields
•  Example Scenario: Performing Dynamic Log Analysis
•  Example Scenario: Detecting Anomalies Using Outliers
Typical Workflow for Troubleshooting Problems

Here are the common tasks for troubleshooting problems.

•  Select the entity or entities that you want to troubleshoot: View log data pertaining to a specific entity or a set of entities. See Searching by Entities.
•  Search logs using keywords and phrases: Use keywords and phrases in commands to retrieve log data. See Searching Logs Using Keywords and Phrases.
•  Filter logs using collection details and field attributes: Use the collection details and field attributes of Oracle Log Analytics to filter log data. See Filtering by Collection Details and Fields.
•  Detect anomalies: Cluster log events based on a common signature that helps you identify patterns and outliers. See Example Scenario: Detecting Anomalies Using Outliers.
Searching by Entities

You can use the Entities section in the Data panel of Oracle Log Analytics to filter logs by one or multiple entities.

Entities are resources, such as host machines, databases, and Oracle Fusion Middleware components, which can be managed and monitored in Oracle Management Cloud.

To search for logs for the RideShare application entities:
1.  From Oracle Log Analytics, in the Entities section, click Entity.
2.  In the Entity dialog box, select the required entities, and click Submit.

Note:
In the Entity dialog box, you can see the occurrence trend for the available entities in the form of sparklines. The sparklines show when the log entries corresponding to the available entities were generated, based on the time range selected in the time selector on the top right corner of the dialog box.
Searching Logs Using Keywords and Phrases

You use commands to retrieve log data as well as to perform manipulation on that data. The first (and implicit) command in a query is the search command. A search is a series of commands delimited by a pipe (|) character. The first white-spaced string following the pipe character identifies the command to be used. The pipe character indicates that the results from the prior command should be used as input for the next command.
For example, to search for all database error messages, enter the following logical expression in the Search field of Oracle Log Analytics:

* | SEARCH Severity = 'error' AND 'Entity Type' = 'Database Instance'

The following example returns the same result as the previous one:

Severity='error' AND 'Entity Type'='Database Instance'

The SEARCH keyword is optional, and you can directly enter your search criteria in the Search field to achieve the desired results.
By enclosing words in quotation marks and including them in the query string as a phrase (‘Database Instance’, for example), only those logs containing the phrase ‘Database Instance’ are returned. In addition, keywords whose text could be interpreted as a separate directive must be specified within quotation marks. For example, to search for the string and, you have to enter the string within single quotation marks (‘and’) to prevent the system from using its Boolean meaning.
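As a further illustration of pipe chaining, the following sketch counts the error records for each entity type by piping a search into the stats command (stats is covered in Understanding Log Analytics Search Commands):

Severity = 'error' | stats count by 'Entity Type'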
Listing Recent Searches
Oracle Log Analytics lets you select and run a recently used search. When you click
the Search field or enter text in the Search field, Oracle Log Analytics displays a list of
recently used searches. This lets you quickly access recently used search commands.
You can select any of the listed commands and click Run to execute the selected
search command.
Note:
The recently used list is available on a per session basis. So if you sign out of
Oracle Log Analytics, and then sign in again, the list from the previous session
isn’t displayed. A new list of recent searches is created for your session.
See About Oracle Log Analytics Search in Using Oracle Log Analytics Search.
Using the Autosuggest Feature
When you enter a query in the Search field, the autosuggest feature of Oracle Log
Analytics automatically suggests terms that you can use in your query. Oracle Log
Analytics displays a list of suggestions, based on the text that you’ve entered in the
Search field. For example, if you’ve entered the name of a field or a search action, the
autosuggest feature displays the possible values only for that field or the list of
available actions.
Uploading Log Data on Demand
Oracle Log Analytics lets you upload log data on demand. This is useful when you have log data from old applications that aren't supported by the Oracle Management Cloud agents, but you need to analyze the data for troubleshooting. In addition, if you have applications that aren't set up to be monitored by Oracle Log Analytics, and if the applications return a large number of log entries, then you can use the on-demand upload feature to easily analyze the large volumes of log data.

The following are features of on-demand upload:
•  You can upload a single log file or any compressed file (.zip, .gz, .tgz, .tar.tgz) containing multiple log files.

   Note: The maximum file size for a single upload (single file or a ZIP file) is 1 GB.

•  You can name each upload for easy reference.
•  Using the named reference, you can upload files at different times to the same upload name.
•  You can select a parser for the log file. Oracle Log Analytics selects a default one if you don't specify a parser.

For more information about uploading on-demand log data to Oracle Management Cloud, contact Oracle Support.
Deleting the Uploaded On-Demand Log Files

Upon completing the upload of the on-demand log data, you can view the uploads and verify the file status. If you notice that a file upload failed, you can delete the single failed file and upload it again.
To delete the log file:
1.  From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2.  In the Oracle Log Analytics Configuration page, click the available count of uploads link in the Uploads section.
    This displays the list of on-demand uploads. In case of failure in uploading any log data files, a warning icon is displayed adjacent to the name of the upload. Position your cursor on the warning icon to read the warning message. For example: Failure processing some files in this upload. Click upload name to see the details.
3.  To delete an on-demand upload, click the check box next to the upload name. To delete multiple on-demand uploads, click the corresponding check boxes. Click Delete.
4.  To delete a file in an on-demand upload, click the upload name.
    This displays the list of files that were uploaded in the specified upload. You can view the upload status of each file adjacent to the file name.
    To delete a file, click the Open Menu icon adjacent to the file name entry and select Delete.
Filtering by Collection Details and Fields

You can also filter data by using the log sources and the fields in the log messages.

The Collection Details attributes let you filter log data based on:
•  Log sources, such as database logs, Oracle WebLogic Server logs, and so on.
•  Log entities, which are the actual log file names.
•  Labels, which are tags added to log entries when log entries match specific defined conditions. See Using Labels in Log Sources.
•  Upload names of log data uploaded on demand. See Uploading Log Data on Demand.

Field attributes let you filter log data based on the different fields of the available log files.

Topics:
•  Filtering by Source Attributes
•  Filtering by Labels
•  Filtering by Data Uploaded on Demand
•  Filtering by Fields in Log Messages
Filtering by Source Attributes

In the Data panel of Oracle Log Analytics, you can use the Collection Details section to filter logs by source attributes such as log source and log entities.

For example, to search for logs for a particular log source, such as Database Listener Alert Logs:
1.  From Oracle Log Analytics, in the Collection Details section, click Log Source.
2.  In the Log Source dialog box, select Database Listener Alert Logs and click Submit.
Note:
•  In the Log Source dialog box, you can see the occurrence trend for the available log sources in the form of sparklines. The sparklines show when the log entries corresponding to the available log sources were generated, based on the time range selected in the time selector on the top right corner of the dialog box.
•  You can select all the listed items by selecting the checkbox in the header pane on the top left.
Filtering by Labels

In the Data panel of Oracle Log Analytics, you can use the Collection Details section to filter log data by data labels.
1.  In Oracle Log Analytics, from the Visualize panel, select Records with Histogram.
2.  From the Collection Details list, click Label.
3.  In the Label dialog box, select the label that you want to analyze, such as CriticalError, and click Submit.
    Note:
    •  In the Label dialog box, you can see the occurrence trend for the available labels in the form of sparklines. The sparklines show when the log entries corresponding to the available labels were generated, based on the time range selected in the time selector on the top right corner of the dialog box.
    •  You can select all the listed items by selecting the checkbox in the header pane on the top left corner of the dialog box.
4.  From the Fields section of the Data panel, drag and drop Label to the Display Fields section of the Visualize panel.

Oracle Log Analytics displays all the log entries pertaining to the selected label.
Filtering by Data Uploaded on Demand

In the Data panel of Oracle Log Analytics, you can use the Collection Details section to filter log data by data uploaded on demand.

For example, to search for uploaded log data for Microsoft SQL Server errors:
1.  Ensure that you've uploaded your on-demand log data as specified in Uploading Log Data on Demand.
2.  In Oracle Log Analytics, from the Visualize panel, select Records with Histogram.
3.  From the Collection Details list, click Upload Name.
4.  In the Upload Name dialog box, select the entry that you want to analyze (for example, MicrosoftSQLServer_ErrorLog), and click Submit.

    Note:
    •  In the Upload Name dialog box, you can see the occurrence trend for the available uploads in the form of sparklines. The sparklines show when the log entries corresponding to the available uploads were generated, based on the time range selected in the time selector on the top right corner of the dialog box.
    •  You can select all the listed items by selecting the checkbox in the header pane on the top left.

Oracle Log Analytics displays all the log entries for the on-demand upload name.
Filtering by Fields in Log Messages

You can search logs by using fields in the Data panel.

The Fields section in the Data panel of Oracle Log Analytics lists the field attributes based on which you can filter log data.

For example, to filter only those logs where the entity type is Oracle WebLogic Server, and the values of the field attribute Severity are ERROR and NOTIFICATION:
1.  From Oracle Log Analytics, in the Entities section, click Entity Type.
2.  In the Entity Type dialog box, select Oracle WebLogic Server and click Submit.
3.  In the Fields section, click Severity.
4.  In the Severity dialog box, select ERROR and NOTIFICATION, and click Submit.
    In the selected <field name> dialog box, you can see the occurrence trend for the available field values in the form of sparklines. The sparklines show when the log entries corresponding to the available field values were generated, based on the time range chosen in the time selector on the top right corner of the dialog box.
    You can select all the listed items by selecting the checkbox in the header pane on the top left corner of the dialog box.
    Note:
    Fields such as Message, which have too many large or distinct values, aren't eligible to be filtered using the Data Palette. See List of Non-Facetable Fields for the fields that can't be filtered using the Data Palette.
    If you try to filter such fields, Oracle Log Analytics displays a message that values for the selected field can't be displayed.
    However, you can add any such field to the Display Fields section.
5.  From the Data panel, drag the Severity attribute and drop it in the Display Fields section in the Visualize panel.
Renaming a Field
You can use the rename command to rename one or more fields.
By renaming system-defined fields, you can control the names of the fields at the time
of generating reports. See Rename Command in Using Oracle Log Analytics Search.
For example, to rename the Host IP Address (Client) field to clientip, in the Search
field of Oracle Log Analytics, you need to enter the following command and press
Enter:
* | rename 'Host IP Address (Client)' as clientip
Note:
Renaming is only a runtime operation, and it doesn’t affect the underlying data
storage.
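Because the results of one command are piped into the next, you can also chain rename operations to rename more than one field in a single query. The following line is a sketch that assumes your log records include a second field named Host Name (Server):

* | rename 'Host IP Address (Client)' as clientip | rename 'Host Name (Server)' as servername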
Filtering by Field Range

For fields with numerical values, you can use the bucket option to group the log records into buckets based on the range of values of a field. The resultant popup window displays the counts and sparkline based on the range buckets instead of distinct values.
1.  Click the Actions icon next to the field.
    The dialog box displays the following options:
    •  Filter: Displays distinct individual values of the field
    •  Bucket: Displays the ranges of the field
2.  Select Bucket.
    In the dialog box, you can see the occurrence count for the field in the form of ranges.
When the selected field is rendered in visualizations such as the pie chart, bar chart, or treemap, the trend is based on the value ranges and not on the distinct individual values.
Example Scenario: Performing Dynamic Log Analysis

You can explore logs to diagnose and troubleshoot issues at any time.

Procedures and scenarios described in this chapter use an example application named RideShare, targeted at customers interested in carpool and vanpool services. As a DevOps administrator, you're responsible for troubleshooting problems related to this application, which is critical to your business. When customers book rides using the RideShare web application, logs related to this transaction are sent to Oracle Log Analytics, and the RideShare application dashboard is updated in near real time. The updates include the number of rides accepted by users, the category of cars or rides being requested by users, and the regions around the country from where the rides are being requested.

John, one of your ride operators, receives complaints from users that they can't book rides. He contacts your application support team and requests help. As a DevOps administrator, you have to troubleshoot this problem, because it affects your business.

You've built a custom dashboard, the RideShare application dashboard in Oracle Management Cloud, to help you manage routine administration tasks. The dashboard helps you understand the following aspects of your online ride-sharing application:
•  Number of rides being processed every hour
•  Types of rides that are being requested, such as Economy, Compact, SUV, and so on
•  Region-wise location of the customers

To start troubleshooting:
1.  Open the RideShare application dashboard, click the Configure widget icon (the three dots) on the top right corner of the Accepted Rides widget, and select Edit to view the log entries for the processed rides in the Oracle Log Analytics Data Explorer.
2.  From the Visualize panel, select Records with Histogram.
3.  From the Collection Details list, click Log Source.
    a.  In the Log Source dialog box, select the required log sources for that entity, and click Submit.
4.  From the Fields list, click Severity.
    a.  In the Severity dialog box, select the required entry (ERROR in this case), and click Submit.
        You select ERROR because you deduce that the incomplete bookings are due to some errors in the application servers.

Oracle Log Analytics displays all the transactions that have errored out.

In this example scenario, you saw errors related to the application server infrastructure used by the RideShare application. Drill down to logs related to the application server instances by selecting a specific application server target or a group of targets. See Searching by Entities.
Example Scenario: Detecting Anomalies Using Outliers

Using Oracle Log Analytics, you can:
•  Reduce millions of log events into a smaller set of patterns
•  Rapidly troubleshoot problems by identifying log records that are behaving differently from the expected behavior, as well as intermittent errors

Intelligent drill-down and pivoting give you additional insight into the cause of the problem by showing a chronological log of entries preceding and following events of interest.
Learn how to use Oracle Log Analytics to troubleshoot the cause of the drop in the number of rides on the online application RideShare.

Jane, one of your ride operators, notices that between 10 p.m. and 11 p.m., there was a sudden drop in the number of rides that were processed. She contacts your application support department and requests help. As the DevOps administrator, you have to troubleshoot this critical problem, because it affects your business.
1.  In the RideShare application dashboard, start by filtering the number of processed rides between 10 p.m. and 11 p.m.
    The dashboard shows a sudden dip in the number of processed rides between 10 p.m. and 11 p.m. To troubleshoot this issue, drill down into the details to find out the problem with the application.
2.  Click Open in Data Explorer on the top right corner of the Accepted Rides widget and select Edit to view the log entries for the processed rides in the Oracle Log Analytics Data Explorer.
3.  In the Data panel, click Entity; in the Entity dialog box, select all three hosts and the three applications, and then click Submit to expand your search to include the hosts on which the applications are running.
    Note that the search returned more than 29,000 log entries. Because it's difficult to analyze so many log entries, try to look for patterns in these entries by using the cluster command.
4.  In the Search field, enter * | cluster and press Enter.
    The cluster command uses machine learning to group log records together based on how similar they are to each other. See Cluster Command in Oracle Log Analytics Search Language Reference.
    Here, the cluster command reduces the large number of log entries into a small number of patterns. You can also pipe a narrower search into the cluster command, as shown in the sketch after this procedure.
5.  Click the right end of the Count column header to sort the messages in reverse order, to see which patterns have a small number of entries.
    After the log entries are sorted in the reverse order of message count, you can see some outlier signatures. Outliers are events that occur rarely. Drill down into an outlier to explore further.
    You can see that a log message has returned a fatal error.
6.  In the Count column of the log message with the fatal error, click 1 to display the relevant record.
7.  In the log records section, click the menu icon, or right-click the record, and select Show Logs +/– 1 Minute to see more context for this outlier entry.
    You can see all the log entries that were generated in that 1-minute context.
You can see that someone had run the chmod command to change permissions on
some files. That’s probably the cause of the problem.
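As noted in Step 4, you can reduce the noise before clustering by piping a narrower search into the cluster command. For example, the following sketch (assuming the Severity field is populated for your log sources) clusters only the error records:

Severity = 'error' | cluster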
3 Transforming Logs into Operational Insight

Oracle Log Analytics lets you transform log data into operational insight, so that you can understand the performance of your entity and apply corrective actions, if required.

Topics:
•  Typical Workflow for Developing Operational Insights
•  Saving and Sharing Log Searches
•  Exporting Search Results
•  Comparing Log Records
•  Visualizing Data Using Charts and Controls
•  Performing Advanced Analytics with Link
•  Using Out-of-the-Box Widgets
•  Creating Custom Dashboards
•  Printing Dashboards
Typical Workflow for Developing Operational Insights

Here are the common tasks for transforming log data into operational insight.

•  Save and share log searches: Save a search query as a widget so that you can run the widget to retrieve the latest results. See Saving and Sharing Log Searches and Exporting Search Results.
•  Visualize data: Present search results graphically for easier analysis. See Visualizing Data Using Charts and Controls.
•  Customize out-of-the-box widgets: Customize out-of-the-box widgets to suit your requirement. See Using Out-of-the-Box Widgets.
•  Create custom dashboards: Create custom dashboards by using widgets. See Creating Custom Dashboards and Printing Dashboards.
Saving and Sharing Log Searches

After you create and execute a search query, you can save and share your log searches as a widget for further reuse. If you've created the widget based on a fixed time range, then every time that you open the widget, it shows the results for the time range that you specified in the search. If you've created the widget for a relative time range (say, the last 7 days), then every time that you open the widget, it shows the up-to-date results as per the time selector (Last 7 days).

Using saved searches, other users can also access the search query. You can save a maximum of 50 search queries.
After you've entered a search query and displayed the results in a chart, to save the
search as a widget:
1.  Click Save.
2.  Enter the name and description of the widget.

You can now add this widget to a custom dashboard. See Creating Custom Dashboards.

You can view the number of saved searches in your Oracle Log Analytics instance from the Configuration page.
You can also save your search directly to a dashboard. After you've entered a search query and displayed the results in a chart, to save the search to a dashboard:
1.  Click Save.
2.  Select the Add to dashboard check box.
3.  In the Dashboard field, click the down arrow, and select the name of the dashboard to which you want to save the search. If you want to save the search to a new dashboard, then select New Dashboard and enter the name of the new dashboard.
4.  Click Save.

You can now access the saved search from the specified dashboard.

To view your saved searches, from Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
Clicking the count of saved searches link displays the Saved Searches page where
you can view the list of built-in and custom saved searches. The built-in saved
searches are represented with gear icons and the custom ones are represented with
human icons.
Click the Action icon next to a saved search entry to display the following menu options:
•  Delete: Lets you delete a custom saved search. A built-in search can't be deleted; in the case of a built-in search, the Delete option is grayed out (disabled).
•  View in Log Explorer: Lets you open the saved search in the Oracle Log Analytics Explorer view.
Creating Alerts for Saved Searches

You can create alert rules based on saved searches by specifying the threshold, time range, and recipient of the email notification. When the search criteria meet the threshold value over the specified time interval, an alert is generated and an email notification is sent to the specified recipient.

For example, suppose you want your system administrator to be notified with a warning or critical email when any of your monitored targets throws the ORA-0600 error message more than three to five times in the past seven days. To do this, you save your search and set an alert rule for it.
1.  In Oracle Log Analytics, in the Search field, enter the following:

    ORA-0600 | stats count by Target

2.  From the Search Dates list, select Last 7 Days and click Run.
3.  Click Save.
4.  In the Save Search dialog box, enter the search name.
    You can click Add Search Description and enter an optional description for the search.
5.  Click Create alert rule.
    The default alert rule name is displayed. If you don't like the default rule name, then you can change it to meet your requirements.
    You can click Add Rule Description and enter an optional description for the rule.
6.  For Condition Type, select Fixed Threshold or Anomaly.
    The anomaly-based alert rule will be automatically enabled after the data is collected for 30 intervals.
7.  For Operator, select >; for Warning Threshold, enter 3; and for Critical Threshold, enter 5.
8.  For Schedule Interval, specify 7 days.
    You can select any value between 15 minutes and 7 days as the Schedule Interval. Your saved search runs automatically based on the interval that you specify.
9.  Enter the email address of the person who will be notified by email when any result violates the specified threshold.
10. Click Save.

Whenever, over a period of 7 days, any of your monitored entities throws the ORA-0600 error more than the specified threshold value, an email is sent to the specified recipient listing each entity (along with the count of the error) that crossed the threshold. The email also includes a link to the Oracle Log Analytics user interface. Clicking the link takes you to the search results for the specific time range when this alert was triggered.
Creating a Saved Search from an Existing One

Use the Save As option to customize a built-in or custom saved search.
1.  In Oracle Log Analytics, click Open.
2.  In the Open dialog box, select the saved search that you want to modify and click Open.
3.  Update the search criteria based on your requirement, click the Save list, and select Save As.
4.  In the Save Search dialog box, enter a name for the updated search. Optionally, you can create an alert for the new search.
5.  Click Save.

The new search now appears in your list of saved searches.

Note:
The Save option is disabled for a built-in search; you can perform only a Save As operation to save the updated, built-in search as a new one.
Creating Alerts for Existing Saved Searches

1.  In Oracle Log Analytics, click Open.
2.  In the Open dialog box, search for the saved widget, such as ORA-0600 by Target, select the widget, and click Open.
3.  Click Alert Rules.
4.  In the Alert Rules dialog box, click Create Alert Rule.
    The Create Alert Rule dialog box is displayed. Because you're creating the alert for an existing search, in the Create Alert Rule dialog box, the search name and the search description are populated with the values that you provided when you saved the search.
5.  Enter the rule name.
6.  Specify the rule details. See Steps 6 through 10 in Creating Alerts for Saved Searches.
Viewing and Editing Alert Rules

1.  From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Alerts.
2.  Click Alert Rules on the top right corner of the window.
3.  Click the name of the alert rule to view and edit it.

Note:
You can also delete an alert rule by clicking the Delete icon next to the alert rule name.
Viewing Saved Search Anomaly Alerts and Baseline Charts

1.  From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Alerts.
    You can view the list of alerts with details such as Severity, Message, Entity, Entity Type, Last Updated, and Duration.
2.  Click the message corresponding to the anomaly alert that you've set.
    You can view the alert details.
3.  Click View more details.
    The interface displays Alert History and Trend Graph for the anomaly alert that you selected. The graph displays the anomalies detected and the baseline for the recorded data.
•  To view the trend graph corresponding to the entity of your choice, click the View list, and select the entity.
•  To view the alert rule, click the link adjacent to Associated Rule.
•  To return to Oracle Log Analytics, click the name of the saved search on the top left corner of the interface.
Associating Saved Search Alerts with Entities

Typically, saved search alerts are associated with the saved search entity type. However, to trigger actions on entities in response to the alerts, the alerts must be associated with the specific entities. For example, if an alert is raised on a Linux host entity, then a restart action can be triggered in response to the alert.

To associate a saved search alert with an entity while creating a saved search alert:
1.  Group the log records by target. For example:

    Exception | stats count by target

2.  On the result of the query, apply an Entity Type filter or an Entity filter. For example, select the WebLogic Server entity type.
3.  Click Save.
4.  Enter the name for the saved search alert.
5.  Select the Create alert rule check box.
6.  Enter the rule details and save the search.

You can now view the saved search alert that you created in the alert rules list. The alert is now associated with a specific entity type and not the Saved search entity type.
Generating Inline Alerts

You can define alerts such that anomalies are detected based on the inline content of the logs. This is done by associating an alert with a label that's tagged for the log records from a specific log source and entity type.

To generate inline alerts, first edit the log source to add a label on detecting the specific content in the log record. Next, associate the log source with an entity type. Lastly, define a real-time alert rule on the specific target type, label, and log source. For example, edit the source mvHostSrc2 and add a label invalid_usr that tags the user name anonymous. Next, associate the log source mvHostSrc2 with the entity Host (Linux). Lastly, create a real-time alert rule that raises an alert every time a log record containing the user name anonymous is encountered, by associating the alert with the label invalid_usr, log source mvHostSrc2, and entity Host (Linux).
1.  Edit the log source, and add a label for the specific log record content. For example, add a label invalid_usr when the user name is anonymous. See Using Labels in Log Sources.
2.  Associate the log source with an entity type. See Working with Entity Associations.
3.  Create an alert rule for the specific log source, label, and entity type.
    a.  From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Alerts.
    b.  Click Alert Rules on the top right corner of the window.
    c.  In the Alert Rules dialog box, click Create Alert Rule.
        The Create Alert Rule dialog box opens.
    d.  In the Rule Name field, enter the rule name. For example, testAlertRule2.
    e.  In the Rule Description field, enter the details about the rule.
    f.  In the Rule type field, select the Real time alert option.
    g.  In the Entity Type field, click the down arrow, and select your entity type. For example, Host (Linux).
    h.  In the Label field, click the down arrow, and select the label for which you want to generate the alert. For example, invalid_usr.
    i.  In the Log Source field, enter the name of the log source. For example, mvHostSrc2.
    j.  In the Email Notification field, enter your email address to receive the notification of the alert. For example, abc@xyz.com.
    k.  Click Save to save the alert rule.

When the tag that you specified in the log source is encountered, an alert is raised. For example, an invalid_usr alert is raised for the log record when the user name is anonymous.
Click the message to view the alert details.
Exporting Search Results

Oracle Log Analytics lets you export search results in comma-separated values (CSV) format.

To export search results:
1.  Search for logs for a set of entities. See Searching by Entities.
2.  Click Export.
3.  Enter a name for the CSV file and click Export.

In the case of the Records and Histogram visualizations, the search result based on time, original log content, and all the selected display fields is exported. In the case of the Table visualization, the search result based on the time and selected display fields is exported. For any other visualization, the results of the query displayed in the selected visualization are exported.
Comparing Log Records

Some visualization options in Oracle Log Analytics let you compare two log records and display the changes in patterns.

Some sample use cases where you may want to compare log records are:
•  Determine what was different in the log stream right after a failure as compared to a normal period.
•  Compare events right after a software deployment.

You can compare log records in the following visualizations only:
•  Records with Histogram
•  Records
•  Table with Histogram
•  Table

1.  Search for logs for a set of entities. See Searching by Entities.
2.  Select a supported visualization.
3.  Right-click a record that you want to compare and select Add To Compare.
    A floating window with the selected record appearing on the left side is displayed.
4.  Right-click the record with which you want to compare the first selected record and select Add To Compare.
    The right side of the floating window is populated with the newly selected record.
5.  Click Compare in the floating window.
    The Log Entry Comparison window displays the comparison.
Visualizing Data Using Charts and Controls

Use the Visualize panel of Oracle Log Analytics to present search data in a form that helps you better understand and analyze it.

Consider a situation where you've performed a search operation on your log data, either by using the Search field or by using the target or field attributes. Now, you want to visualize the search results in a specific format for analysis.

In this section, you'll refer to the Example Scenario: Performing Dynamic Log Analysis search results, and use the Visualize panel of Oracle Log Analytics to represent the search data in the required format.

Drag the Data and Visualize palettes to increase or decrease their size for better visualization with the charts.

To change the visualization of the search results generated by Example Scenario: Performing Dynamic Log Analysis for analyzing the number of occurrences of the error BEA-310002 over the last 30 days:
1.  In Oracle Log Analytics, in the Visualize panel, click the visualization options.
2.  Select Records With Histogram.
    The data is represented in the form of a table with a histogram.
Note:
If you run a query that needs to fetch data for a long duration, such as the last 7 days or the last 1 month, then Oracle Log Analytics may take some time to display the entire result set in the selected visualization. In this case, Oracle Log Analytics keeps updating the visualization until the query has finished running.
Using Log Scales

Log scales allow a large range of values to be displayed without small values being compressed into the bottom of the graph.
1.  Search for logs for a set of entities. See Searching by Entities.
2.  From the Visualize panel, select any visualization option containing a histogram, bar chart, or line chart.
3.  Click Show Log Scale to view the smaller values that aren't otherwise visible on the chart.
    This option is displayed only when the chart contains bars that aren't visible. This option is also useful when highlighting small values.
The smaller values are now displayed.
Viewing the Field Summary
When using any variation of the Histogram visualization, Oracle Log Analytics provides
a high-level summary of the key fields of the log entries that are returned as search
results. The field summary, which is represented in the form of horizontal bar charts,
presents a quicker way to analyze your search results, by grouping the entries based
on the display fields that you’ve selected.
For example, in your search query, you’ve selected Oracle WebLogic Server as the
entity type, FMW WLS Server logs as the log source, and bea-090898, bea-001110,
and bea-090152 as error IDs. You’ve also selected Entity Type, Log Source, and Error
ID as display fields. Then, the field summary displays a snapshot of the search results
grouped by the display fields. If you hover your cursor over a particular error ID in the
Field Summary section, then Oracle Log Analytics highlights the error ID and also
highlights the relevant portions of the horizontal bar charts of all the associated entity
types and log sources that are displayed in the summary. In addition, the relevant
portion in the Histogram visualization is also highlighted, depicting the number of
records that are associated with the selected error ID.
If you click the selected error ID, then the field summary, histogram, and the table of
records change to return only those entity types, log sources, and records that contain
the selected error ID. You can add as many fields as you want in the summary by
dragging and dropping attributes from the Data panel.
To view the field summary:
1.  In Oracle Log Analytics, from the Visualize panel, select Records with Histogram.
2.  From the Entities list, click Entity Type. In the Entity Type dialog box, select the required entity types, such as Oracle WebLogic Server, Oracle HTTP Server, and Database Instance, and click Submit.
3.  From the Collection Details list, click Log Source. In the Log Source dialog box, select the required log sources, such as all the FMW WLS Server logs, and click Submit.
4.  From the Fields list, click Error ID. In the Error ID dialog box, select the required error IDs, such as bea-090898, bea-001110, and bea-090152, and click Submit.
5.  From the Data panel, drag and drop Entity Type, Log Source, and Error ID to the Display Fields section of the Visualize panel.
6.  In the Field Summary section, click the Show Facet Summary icon.
    The Field Summary section displays a summary of the search results grouped by Entity Type, Log Source, and Error ID.

If you click bea-090898 in the Error ID field of the summary results, then the field summary, histogram, and table of records change to return only those entity types, log sources, and records that contain bea-090898.
Configuring the Display of the Field Summary

You can show or hide fields in the Field Summary section. You can also sort the entries of a field. In addition, you can add more fields to the summary.
1.  Click the Hide Graph icon on the field that you want to hide from the summary.
    The field is no longer displayed in the summary.
    To display the hidden field, click the Hidden Fields link, and then click the Show Graph icon on the field that you want to display.
2.  Click the Sort icon on the field whose entries you want to sort in ascending or descending order.
3.  From the Data panel, drag and drop the Severity field to the Display Fields section of the Visualize panel to add the Severity field to the summary.
Viewing an Entity Card

Oracle Log Analytics displays an Entity Card, which presents information related to a specific target in the form of a histogram. You can access entity-related information easily instead of going to other views or performing a separate search.

The Entity Card visualization displays an entity's status and associated log records (in the form of a histogram), and provides a link to the Data Explorer of Oracle IT Analytics (to view and analyze the target).
1.  Search for logs for a set of entities. See Searching by Entities.
2.  From the Visualize panel, select Records With Histogram.
3.  In the log records section, hover your cursor over a target name to display the floating View Entity Information button.
4.  Click View Entity Information to display the Entity Card.
Using Bar Charts

You can use bar charts in Oracle Log Analytics to view log records grouped by entities, collection details, or log fields.
1.  Search for logs for a set of entities. See Searching by Entities.
2.  From the Visualize panel, select Bar.
3.  From the Collection Details section of the Data panel, drag and drop Log Source to the Group by section of the Visualize panel.
    This displays the log records in the form of a bar chart grouped by log sources.
4.  From the Entity section of the Data panel, drag and drop Entity Type to the Group by section of the Visualize panel.
    Now, the bar chart changes to display the log records grouped by log sources on the y-axis against the target types displayed across the x-axis.
Note:
•  You can drag and drop a maximum of two fields from the Data panel to the Group by section of the Visualize panel.
•  For fields with numerical values, you can optionally display their ranges in the bar chart visualization by using the bucket option. The bucket option groups the log records into buckets based on the range of values of a field. See Filtering by Field Range.
Using Clusters
Clustering uses machine learning to identify the pattern of log records, and then to
group the logs that have a similar pattern.
Clustering helps significantly reduce the total number of log entries that you have to
explore and easily points out the outliers. Grouped log entries are presented as
“Sample Message.”
1. Search for logs for a set of entities. See Searching by Entities.
2. From the Visualize panel, select Cluster.
Similar log records are grouped in clusters, along with a histogram view of all the records grouped by time interval. You can zoom in to a particular set of intervals in the histogram by holding down the left mouse button and drawing a rectangle over the required set of intervals. After you zoom in, the cluster records change based on the selected interval.
The Cluster view displays a summary banner at the top showing the following tiles:
• Total Clusters: Total number of clusters for the selected log records.
• Potential Issues: Number of clusters that have potential issues based on log records containing words such as error, fatal, exception, and so on.
• Outliers: Number of clusters that have occurred only once during a given time period.
• Trends: Number of unique trends during the time period. Many clusters may have the same trend, so clicking this tile shows a cluster from each of the trends.
Note:
If you hover your cursor over this tile, then you can also see the number of log records (for example, 22 clusters from 1,200 log records).
When you click any of the tiles, the histogram view of the cluster changes to display
the records for the selected tile.
Each cluster pattern displays the following:
• Trend: This column displays a sparkline representation of the trend (called trend shape) of the generation of log messages (of a cluster) based on the time range that you selected when you clustered the records. Each trend shape is identified by a Shape ID such as 1, 2, 3, and so on. This helps you to sort the clustered records on the basis of trend shapes.
Clicking the arrow to the left of a trend entry displays the time series visualization of the cluster results. This visualization shows how the log records in a cluster were spread out based on the time range that was selected in the query. The trend shape is a sparkline representation of the time series.
• ID: This column lists the cluster ID. The ID is unique within the collection.
• Count: This column lists the number of log records having the same message signature.
• Sample Message: This column displays a sample log record from the message signature.
• Log Source: This column lists the log sources that generated the messages of the cluster.
You can click Show Similar Trends to sort clusters in ascending order of trend shape. You can also select one or more cluster IDs and click Show Records to display all the records for the selected IDs.
You can also hide one or more clusters from the cluster results if the output seems cluttered. Right-click the required cluster and select Hide Cluster.
In each record, the variable values are highlighted. You can view all the similar
variables in each cluster by clicking a variable in the Sample Message section.
Clicking the variables shows all the values (in the entire record set) for that particular
variable.
In the Sample Message section, some cluster patterns display a <n> more samples...
link. Clicking this link displays more clusters that look similar to the selected cluster
pattern.
Clicking Back to Trends takes you back to the previous page with context (it scrolls
back to where you selected the variable to drill down further). The browser back button
also takes you back to the previous page; however, the context won’t be maintained,
because the cluster command is executed again in that case.
Using Line Charts
Oracle Log Analytics provides a line chart visualization that lets you group by a numeric field and then graph the trend of values over time. Only numeric fields, or aggregate output such as sum or average that can be broken up by time, can be selected for the y-axis.
For example, to view the count of all Apache log entries, grouped by log source, over a period of time (say, the last 7 days):
1. In Oracle Log Analytics, set the time control on the top right corner to Last Week.
2. In the Data panel, under Collection Details, click Log Source, select all the Apache log sources, and click Apply Filter.
3. In the Visualize panel, select Line from the visualization options.
The count of Apache log entries grouped by Log Source is displayed.
Similarly, to view a graph displaying the average content size of the Apache log
records over time, grouped by log source, from the Fields section, drag and drop the
Content Size field to the Y-axis section in the Visualize panel. Ensure that you’ve
selected the Average function from the Y-axis list.
You can select from the different statistical operations that can be performed on the selected field.
Note:
Some fields may not support all operations. For example, numeric operations such as Average, Min, Max, and Median won't work on fields of the String data type.
Using Maps
You can use the Maps visualization in Oracle Log Analytics to view log records
grouped by country or country code.
Before you can use Maps to view log records based on country or country code, you need to set the Field Enrichment options to populate the city, country, or country code fields under the Log Source section of the Oracle Log Analytics Configuration page. See Configuring Field Enrichment Options.
1. Search for logs for a set of entities. See Searching by Entities.
2. From the Visualize panel, select Map.
This displays a world map where log records are grouped by Client Coordinates, Client Host Continent, Client Host Continent Code, Client Host Country, Client Host Country Code, Client Host City, Client Host Region, or Client Host Region Code.
Using Summary Tables
Using a Summary Table, you can view statistical information about log records in a
tabular format.
You can select which measures you want to see, as well as the fields on which to base
those measures. Currently, the available Summary Table options are average, count,
distinct count, sum, min, max, median, and percentile.
You can also group the results by any selected fields.
1. Search for logs for a set of entities. See Searching by Entities.
2. From the Visualize panel, select Summary Table.
3. From the Fields section, drag and drop the required fields.
4. Click the down arrow to the right of the selected fields to select the function.
The Summary Table displays the required result.
Note:
In addition to the Summary Table, all the graph and chart visualization options let you apply multiple statistical functions to your log records.
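A similar statistical summary can also be sketched directly in the query language. The following is a hedged example, assuming that the stats command and its functions can be applied to your fields in this release (the field choice is illustrative):

'Log Source' = 'FMW WLS Server Access Logs' | stats avg('Content Size Out')

Here avg is one of the functions listed above; count, distinct count, sum, min, max, median, and percentile follow the same pattern.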
Using Word Cloud
You can use word cloud in Oracle Log Analytics to view log records grouped by the
strings that represent the selected fields.
1. Search for logs for a set of entities. See Searching by Entities.
2. From the Visualize panel, select Word Cloud.
3. From the Collection Details section of the Data panel, drag and drop Log Source to the Group by section of the Visualize panel.
4. From the Fields section of the Data panel, drag and drop Error ID to the Color section of the Visualize panel.
This displays the log records in the form of a word cloud grouped by log sources. The log sources are represented in different colors to indicate the Error ID reported in the records.
5. To view the word cloud using only the log records that reported errors, in the Visualize panel, select the Show Problem Logs Only check box.
Now the word cloud changes to display only the problem logs.
Hover the cursor over a string to get more details about the group that the string represents.
Click the string to view further analysis of the group of log records, displayed as the records with histogram visualization.
Using Link
Link lets you perform advanced analysis of log records by combining individual log
records from across log sources into groups, based on the fields you’ve selected for
linking. You can analyze the groups by using the same fields as the ones you used for
linking or additional fields for observing unusual patterns to detect anomalies.
The link command can be used for a variety of use cases. For example, individual log records from business applications can be linked to synthesize business transactions. Groups can also be used to synthesize user sessions from web access logs. Once
these linked records have been generated, they can be analyzed for anomalous behavior. Examples of such anomalous behavior include:
• Business transactions that take unusually long to execute or that fail.
• User sessions that download larger amounts of data than normal.
Tip:
To use the Link feature, users need to have a good understanding of their log
sources. The Link feature relies on a field or a set of fields that are used to
combine individual log records. To generate meaningful associations of log
records, it is important to know the relevant fields that can be used for linking
the log records.
To understand the application of the Link feature in performing advanced analytics
with an example, see Performing Advanced Analytics with Link.
Analyzing the Log Records Using Link
You can use the example of the log records from the log source SOAOrderApp for an
order flow application, to apply the steps discussed below:
1. Select Link from the Visualize panel.
By default, Log Source is used in the Link By field to run the Link query. This displays the groups table and a bubble chart. See Groups Table.
For example, the following groups table is displayed for SOAOrderApp:
2. To analyze the fields that are relevant to your analysis, drag and drop one or more fields to Link By, remove Log Source (the default field in Link By), and click the check mark to run the Link query.
You can view the groups represented as bubbles in the chart.
This analyzes the groups for the values of the fields, and creates bubbles
representing the groups in the commonly seen ranges. The majority of the values
are treated as the baseline. For example, a large bubble can become the baseline,
or a large number of smaller bubbles clustered together can form the baseline.
Bubbles that are farthest from the baseline are typically marked as anomalies.
Generally, these bubbles represent the behavior that is not typical.
This chart shows the anomalies in the patterns indicated by the yellow bubbles. The size of a bubble represents the number of groups that are contained in the bubble. The position of a bubble is determined by the values of the fields that are plotted along the x and y axes. Hover the cursor over the bubbles to view the number of groups in each bubble, their percentage of the total number of groups, and the values of the fields plotted along the x and y axes.
Note:
When you run the link command, the group duration is shown in a
readable format in the bubble chart, for example, in minutes or seconds.
However, if you want to run a where command after the link command to
look for transactions that took more than the specified number of seconds
(say, 200 seconds), then the unit that you must use is milliseconds.
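For example, a minimal sketch of such a query, reusing the Access Duration alias from the examples later in this chapter (a 200-second threshold is written as 200000):

'Log Source' = 'FMW WLS Server Access Logs' | link URI | rename 'Group Duration' as 'Access Duration' | where 'Access Duration' > 200000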
The next step may be to further examine the anomalies by clicking an individual bubble or by multi-selecting bubbles. To return to the original results after investigating a bubble, click the Undo icon.
You can toggle the display of the groups on the bubble chart by clicking the value of the Group Count legend that's available next to the chart. This can be used to reduce the number of bubbles displayed on a densely packed chart.
From the order flow application:
a. We've selected the fields Module and Context ID to group the log records. This groups the log records based on the context ID of each record and the specific module from shipping, notifications, inventory, or preorder that was used by the application in the log record.
The chart displays the bubbles that group the log records based on their values of Context ID and Module. The blue bubbles represent most of the groups, which form the baseline. Notice the two anomaly bubbles that appear on the chart against the modules for shipping and notifications. The bubble on the extreme right of the chart represents the groups that're taking a longer duration to execute the module as compared to other groups. On hovering the cursor over the bubble, you can observe that the bubble consists of 22 groups that make up less than a percent of the total number. The bubble corresponds to the oracle.order.shipping module and has a group duration of 1 min, 47 sec to 1 min, 52 sec.
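As a hedged sketch, the underlying query for this grouping would resemble the following (the log source name and the comma-separated multi-field syntax are assumptions based on the order flow example):

'Log Source' = 'SOAOrderApp' | link Module, 'Context ID'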
b. To view the details of the groups that correspond to the anomaly, select the anomaly bubble in the chart.
• In the next tab, a histogram chart is displayed showing the dispersion of the log records.
• A groups table listing each of the 22 groups and the corresponding values of the fields is also available for the analysis.
c. View the anomaly groups in clusters: First select all the rows in the table by clicking the first row, holding down the Shift key, and clicking the last row. Next, click the down arrow next to Show, and select Clusters.
This displays the clusters. Click the Potential Issues tab.
This lists the groups of log records and the sample messages indicating the anomaly. The issues point to a Shipment Gateway timeout and a java.lang.ArrayIndexOutOfBoundsException exception as the cause of the delays in executing the shipping module in the specific groups.
3. For more options to view the groups, click the Chart Options icon on the top left corner of the visualization panel. See Analyze Chart Options.
4. Study the groups table to understand the groups and the values of the fields in each group. See Groups Table.
In line with the observation in the bubble chart of the SOAOrderApp log records, notice from the groups table that the top two groups take 1 min, 52 sec and 1 min, 51 sec to complete execution. This is very high compared to the group duration of the other groups.
5. Click the Search and Table Options icon:
• Click Hide/Show Columns and select the columns that you want to view in the table.
• Click Alias Options, and rename the groups and log records to create custom dashboards.
• Click Search Options:
– Select the Show Top checkbox, and specify the number of log records to view for the specified field.
– Select the Include Nulls checkbox to view those log records that may not have all the Link By fields.
– Under Analyze Chart Behavior on Selection:
* To view the filtered group table for the groups in the selected bubble, click the Filter Only - filter group table only option.
* To view the filtered group table and the re-classified bubble chart for the groups in the selected bubble, click the Drill Down - filter group table and re-classify bubbles option.
Note:
The filtered selection is not supported in saved searches. However, you can open the saved search and apply the same filter selection again.
6. To change the fields analyzed from the group data, click the Analyze icon and select up to two fields that have multiple values with high cardinality. By default, the first field selected for Link By is analyzed with the group duration to generate the analyze chart and the groups table. Click OK.
This displays a new chart based on the fields selected in the Analyze command.
7. To view the log records in the histogram visualization, click the histogram tab.
The histogram chart displays the log records over time. Click the down arrow next to the Chart Options icon and select the type of visualization to view the data from the log records and groups on separate histograms, if necessary. See Histogram Chart Options.
You can save your custom query for the analysis of the log records using the Link
feature to the saved searches and dashboard. See Saving and Sharing Log Searches.
Using the Getting Started Panel
If you're new to using Link, then to familiarize yourself with the feature, perform the following actions:
1. On the results table header, click the Open the Getting Started panel icon to open the Getting Started panel.
2. On the Getting Started tab, click the Show Tips link to view some useful tips to explore options on the visualization of the Link feature.
Click Hide Tips.
3. Click the Sample Link Commands tab to view and edit some of the sample link commands.
You can select Run for a link command that's listed under Available Sample Link Commands, or View for the link commands listed under All Sample Link Commands.
4. Click the Link Builder tab, and run the wizard: select the Log Source, select up to four fields in Link By, select up to two fields in Analyze Fields, and click Run Link to build custom queries. You can select multiple fields at once before running the query, which saves you from waiting for the background query of a drag and drop operation to complete for every field.
Click Clear to clear the selection.
Performing Advanced Analytics with Link
Understand the application of the Link feature in performing advanced analytics with
the use-case discussed in this topic.
For the steps to use the Link feature to analyze your log records, see Using Link.
Example Scenario: Analyzing the Access Logs of Oracle WebLogic Server
Consider the example of a data set consisting of Oracle WebLogic Server Access
Logs from the log source FMW WLS Server Access Logs. The log records contain data
about the access to Oracle WebLogic Server by the users over a specific period of
time. These individual log records can be analyzed to get meaningful insight into the
usage statistics, the popularity of the URLs, the most active users, etcetera. From the
logs, learn to obtain the following results by analyzing the log records with the
selection of specific fields for each result:
1. Display the top URLs by Number of Hits
2. Display the anomalies by Number of Hits
3. Display the anomalies by Access Duration
4. Identify the URLs by Upload Size
5. Identify the URLs by Download Size
6. Analyze the correlation between Number of Hits and Download Size
7. Determine the Most Visited Pages
8. Identify the Top Users
9. Identify the Top Users and their Favorite Pages
10. Identify the entry page that drives maximum visits
11. Identify the Entry and Exit path for most users
Note:
• Use the rename command to change the name of a field to one that's more relevant for the use case.
• The classify command lets you analyze the groups and displays the result in the form of a bubble chart. To simply view the result of the execution of a query in tabular format, remove the classify command from the query and re-run it.
• Click the anomalous bubble in the chart to view the details of the anomalous groups. To return to the original result after investigating the bubble, click the Undo icon.
• When you run the link command, the group duration is shown in a readable format in the bubble chart, for example, in minutes or seconds. However, if you want to run a where command after the link command to look for transactions that took more than a specified number of seconds (say, 200 seconds), then the unit that you must use is milliseconds.
• To learn about the analyze chart options, see Analyze Chart Options.
• To learn about the histogram chart options, see Histogram Chart Options.
• To learn about the groups table, see Groups Table.
To retrieve the data set, select a suitable date range, specify the log source, and run
the query:
'Log Source' = 'FMW WLS Server Access Logs'
Select Link from the Visualize panel. This displays the 'FMW WLS Server Access Logs' groups table and the bubble chart.
1. To display the top URLs by Number of Hits, group the log records by the value of the URL in the log record, obtain the total count for the URL in each group, rename the default fields in the groups table to suitable values, and display the result in tabular format. With this analysis, you can determine the URLs that're most used.
a. Drag and drop the field URI to Link By, remove the field Log Source from Link By, and click the check mark to submit the query.
b. After the query is executed, in the command-line, change the names of the fields Count to Number of Hits, Start Time to First Access, End Time to Last Access, and Group Duration to Access Duration.
c. Remove the classify command from the command-line, and submit the query.
The query will be as follows:
'Log Source' = 'FMW WLS Server Access Logs' | link URI | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration'
On running the query, you can determine the top URLs by number of hits in the table. The columns are renamed as specified in the rename command.
2. To display the anomalies by Number of Hits, group the log records by the value of the URL in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the URL's number of hits. With this analysis, you can separate out unusual patterns in accessing the URLs.
Click Analyze, select Number of Hits, and click OK.
The query changes to the following:
'Log Source' = 'FMW WLS Server Access Logs' | link URI | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 300 'Number of Hits'
This query triggers analysis of the 'Number of Hits' column and creates bubbles representing the commonly seen ranges. The majority of the values are treated as the baseline. For example, a large bubble can become the baseline, or a large number of smaller bubbles clustered together can form the baseline. Bubbles that are farthest from the baseline are marked as anomalies.
This displays the anomalous URLs grouped into separate bubbles in the bubble chart. To view the percentage of URLs in each range of number of hits, hover the cursor over the bubbles.
3. To display the anomalies by Access Duration, group the log records by the value of the URL in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the access duration of the URL. With this analysis, you can separate out unusual patterns in the time spent in accessing the URLs. In continuation of step 2:
Click Analyze, select Access Duration, and click OK.
Access Duration is an indication of the duration for which each URL was accessed. This is computed as the difference between the last timestamp and the first timestamp in the log file for each URL.
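Following the pattern of the previous step, the resulting query would likely be as follows (a sketch; only the field passed to classify changes):

'Log Source' = 'FMW WLS Server Access Logs' | link URI | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 300 'Access Duration'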
4. To identify the URLs by Upload Size, group the log records by the value of the URL in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the size of the data uploaded. With this analysis, you can identify the URLs with an unusual size of uploaded data. In continuation of step 3:
a. Drag and drop the field Content Size In to Display Fields.
b. Rename the field Content Size In to Bytes Uploaded by altering the query on the command-line, and run the query.
c. Click Analyze, select Bytes Uploaded, and click OK.
The query will be as follows:
'Log Source' = 'FMW WLS Server Access Logs' | link URI | stats avg('Content Size In') as 'Bytes Uploaded' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 300 'Bytes Uploaded'
The Analyze chart displays the groups of URLs by the bytes uploaded.
d. To correlate the Bytes Uploaded data across the time range, you can selectively hide or show charts in the Histogram Chart Options. Explore the other visualization options besides the bar chart.
5. To identify the URLs by Download Size, group the log records by the value of the URL in the log record, rename the default fields in the groups table to suitable
values, and analyze the groups for the size of the data downloaded. With this analysis, you can identify the URLs with an unusual size of downloaded data. In continuation of step 4:
a. Drag and drop the field Content Size Out to Display Fields and remove Content Size In from Display Fields.
b. Rename the field Content Size Out to Download Size by altering the query on the command-line, and run the query.
c. Click Analyze, select Download Size, and click OK.
The query will be as follows:
'Log Source' = 'FMW WLS Server Access Logs' | link URI | stats avg('Content Size Out') as 'Download Size' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 300 'Download Size'
The Analyze chart displays the groups of URLs by the download size.
d. To correlate the Download Size data across the time range, you can selectively hide or show charts in the Histogram Chart Options. Explore the other visualization options besides the bar chart.
6. To analyze the correlation between Number of Hits and Download Size, group the log records by the value of the URL in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the size of the data downloaded and the number of hits. With this analysis, you can identify the URLs that have unusual patterns of downloaded data size and number of hits. In continuation of step 5:
a. Click Analyze, select the fields Number of Hits and Download Size, and click OK.
b. Remove topcount=300 from the query to see all the bubbles, and run the query.
The query will be as follows:
'Log Source' = 'FMW WLS Server Access Logs' | link URI | stats avg('Content Size Out') as 'Download Size' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify 'Download Size', 'Number of Hits'
In the bubble chart, the field Number of Hits is plotted along the x-axis and Download Size along the y-axis.
The bubbles can be interpreted as follows:
• 73.8% of the URLs were accessed one to seven times.
• The average download size for those 73.8% of URLs is between 32,345 and 34,000. This tight range implies that a large number of URLs have very uniform behavior with reference to the download size.
• Since 73.8% is the large majority, the rest of the points are marked as anomalies.
• With real data, it is common for the system to group .css, .js, and image files separately from other URLs because they tend to have different download behaviors.
7. To determine the Most Visited Pages, group the log records by the value of the URL in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the number of unique visitors. With this analysis, you can identify the URLs that're most visited by unique visitors. In continuation of step 6:
a. Drag and drop the field User Name to Display Fields.
b. Click the down arrow next to the field name, and change the function from Unique to Distinct Count. Note the other functions you can select for a numeric field:
c. Rename the field User Name to Number of Unique Users, remove the classify command by altering the query on the command-line, and run the query. The query will be as follows:
'Log Source' = 'FMW WLS Server Access Logs' | link URI | stats avg('Content Size In') as 'Bytes Uploaded', avg('Content Size Out') as 'Download Size', distinctcount('User Name') as 'Number of Unique Users' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration'
d. Click Analyze, select the field Number of Unique Users, and click OK.
The table lists the URLs and the corresponding number of unique users, helping you to identify the URLs that were most visited by unique users. From the table, you can also determine the number of hits that each URL has.
The analysis shows that more than 99% of the URLs have 0 or 1 unique users. This would be the case for URLs that don't need a login or are seldom accessed. Drilling down into any of the smaller bubbles points to the specific pages, how many hits they typically have, and how many unique visitors they have.
8. To identify the Top Users, group the log records by the value of the user name in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the number of hits. With this analysis, you can identify the most active users.
a. Edit the command-line to remove all the filters: 'Upload Name' = Log_Analytics_User.WLS_Access_Logs | link URI
b. Drag and drop the field User Name to Link By, remove URI, and run the query.
c. Remove the classify command, rename the default fields in the command-line, and run the following query:
'Log Source' = 'FMW WLS Server Access Logs' | link 'User Name' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration'
The table is sorted by the number of hits by the user.
d. To view the user behavior by access, click Analyze, select the field Number of Hits, and click OK.
e. Click the anomalies to identify the users who have recorded a higher or lower number of hits compared to the other users.
9. To identify the Top Users and their Favorite Pages, group the log records by the value of the user name in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the number of unique pages. With this analysis, you can identify the least and most active users, and their favorite pages. In continuation of step 8:
a. Drag and drop the field URI to Display Fields. Change the function from Unique to Distinct Count.
b. Rename the field URI to Number of Unique Pages by altering the query in the command-line, and run the query.
c. Click Analyze, select the field Number of Unique Pages, and click OK.
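Following the same pattern as the earlier steps, the resulting query should resemble the following sketch (the exact stats clause depends on the fields you kept in Display Fields):

'Log Source' = 'FMW WLS Server Access Logs' | link 'User Name' | stats distinctcount(URI) as 'Number of Unique Pages' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 300 'Number of Unique Pages'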
10. To identify the entry page that drives maximum visits, group the log records by the value of the user name in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the values of the entry URLs and the number of hits to the URLs. With this analysis, you can identify the pages that the users hit first. In continuation of step 9:
a. To get the entry URLs, change the function of the field URI from Distinct Count to Earliest.
b. Rename the field URI to Entry URL by altering the query in the command-line, and run the query.
c. Click Analyze, select the fields Number of Hits and Entry URL, select the topcount as 20, and click OK.
'Log Source' = 'FMW WLS Server Access Logs' | link 'User Name' | stats earliest(URI) as 'Entry URL' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 20 'Number of Hits', 'Entry URL'
This displays the first URL used by the users in relation to the number of hits. For example, /login is the first URL the majority of the users use.
11. To identify the Entry and Exit path for most users, group the log records by the value of the user name in the log record, rename the default fields in the groups table to suitable values, and analyze the groups for the values of the entry URLs and exit URLs. With this analysis, you can identify:
• The most common paths taken by the users to transit through the website
• The most popular product pages from where the users are exiting the website
• The most common exit URLs, like the product checkout pages or the payment gateway
• The unusual exit URLs, so that you can root-cause the unexpected exits
In continuation of step 10:
a. Drag and drop the field URI to Display Fields.
b. To get the exit page, change the function of the field URI from Unique to Latest.
c. Edit the command-line to rename the field latest(URI) to Exit URL, and submit the query.
d. Click Analyze, select the fields Entry URL and Exit URL, select the topcount as 20, and click OK.
'Log Source' = 'FMW WLS Server Access Logs' | link 'User Name' | stats earliest(URI) as 'Entry URL', latest(URI) as 'Exit URL' | rename Count as 'Number of Hits', 'Start Time' as 'First Access', 'End Time' as 'Last Access', 'Group Duration' as 'Access Duration' | classify topcount = 20 'Entry URL', 'Exit URL'
e. Increase the size of the chart by using the Analyze Chart Options.
This tree map shows the relationship between the entry and exit URLs in a site. This is very useful for retail sites, where the service providers want to identify the entry URLs that lead the customers to the checkout pages, and the product URLs that're causing users to not proceed to checkout.
Analyze Chart Options
The following chart options are available to analyze the groups that’re displayed by the
Link query:
Chart Type: Select from the bubble, scatter, tree map, and sunburst types of charts to view the groups. By default, a bubble chart is displayed.
• Bubble Chart: To analyze the data from three fields, where each field can have multiple values. The position of the bubble is determined by the values of the first and second fields, which are plotted on the x and y axes, and the size of the bubble is determined by the third field.
• Scatter Chart: To analyze the data from two numeric fields, to see how much one parameter affects the other.
• Tree Map: To analyze the data from multiple fields that're both hierarchical and fractional, with the help of interactive nested rectangles.
• Sunburst Chart: To analyze hierarchical data from multiple fields. The hierarchy is represented in the form of concentric rings, with the innermost ring representing the top of the hierarchy.
Height: Increase or decrease the height of the chart to suit your screen size.
Swap X Y axis: Swap the values plotted along the x and y axes for better visualization.
Show Anomalies: View the anomalies among the groups displayed on the chart.
Highlight Anomaly Baselines: If you've selected to view the anomalies, then you can highlight the baselines for those anomalies.
Show Group Count Legend: Toggle the display of the Group Count legend.
Zoom and Scroll: Select Marquee zoom or Marquee select to dynamically view the data on the chart or to scroll and select multiple groups.
Histogram Chart Options
The histogram shows the dispersion of log records over the time period and can be used to drill down into a specific set of log records.
You can generate charts for the log records, groups, and numeric display fields. Select a row to view the range highlighted in the histogram.
The following chart options are available to view the group data on the histogram:
Chart Type: Select from the following types of visualization to view the group data:
• Bar: The log records are displayed as segmented columns against the time period. This is the default display chart.
• Marker Only: The size of the log records against the specific time is represented by a marker.
• Line Without Marker: The size of the log records against the specific time is plotted with a line tracing the number that represents the size.
• Line With Marker: The size of the log records against the specific time is plotted with a line tracing the marker that represents the size.
Show Combined Chart: Combines all the individual charts into a single chart.
Groups Table
The groups table displays the result of the analysis by listing the groups and the
corresponding values for the following default fields:
Field(s): The field that's used to analyze the group
Count: The number of log records in the group
Start Time: The start of the time period for which the logs are considered for the analysis
End Time: The end of the time period for which the logs are considered for the analysis
Group Duration: The duration of the log event for the group
Using Out-of-the-Box Widgets
Oracle Log Analytics provides a set of out-of-the-box widgets that you can use in a
dashboard.
General:
• Top Log Sources
• All Logs Trend
• Critical Incidents by Target Type
• Oracle Enterprise Manager Log Trend
• Host Log Trend
• Invalid User Login Attempts
• Failed Password Attempts
• Top Commands Run with SUDO
• Top Hosts by Log Entries
• Top Host Log Sources
• Top SUDO Users
Oracle Fusion Middleware:
• Oracle Middleware Logs Trend
• Oracle WebServer Top Accessed Pages
• Oracle WebServer Failed HTTP Requests
• Top Oracle WebServer Targets by Requests
• Top Oracle Middleware Error Codes
• Top ECIDs with BEA-x Error Codes
• Top Oracle Fusion Middleware Targets with Errors
• Oracle WebServer Top Accessed Pages (Excluding Assets)
• Oracle WebServer Top Users by Pages (Excluding Assets)
• Access Log Error Status Codes
Oracle Database:
• Oracle Database Log Trend
• Oracle Database Errors Trend
• Oracle Database Top Errors
• Top Oracle Database Targets with Errors
Creating Custom Dashboards
You can use saved searches (referred to as widgets) in a custom dashboard.
For example, you can save the search that you’ve performed in Example Scenario:
Performing Dynamic Log Analysis and use the widget in a custom dashboard.
1. Sign in to the Oracle Management Cloud home page by providing the user name, password, and the tenant name.
2. In the Oracle Management Cloud home page, click the Dashboard tile.
3. In the Dashboards home page, click Create Dashboard.
4. Select one of the following types:
• Dashboard (a single dashboard)
• Dashboard Set (multiple dashboards on tabs)
5. Specify the name and description for the dashboard and click Create.
This description is displayed with the dashboard tile in the Oracle Management Cloud home page.
6. In the new dashboard page, click Add Content (the plus sign (+) icon).
7. Enter the name of the widget that you created in the Search for content field, and either drag and drop the widget onto the dashboard or click add to dashboard (the plus sign (+) icon) next to the widget name.
You can use the Content Settings (the gear icon) menu (which appears when you hover your cursor over the added widget) to perform multiple actions, including widening, narrowing, lengthening, and shortening the widget, and also hiding the title of the widget or removing the widget from the dashboard.
Clicking the Open in Data Explorer icon (which appears when you hover your cursor over the added widget) opens the underlying search in the Data Explorer.
8. Click the Save Dashboard icon on the top left.
Printing Dashboards
You can print an out-of-the-box dashboard or any custom dashboard that you’ve
created for future reference.
1. Sign in to the Oracle Management Cloud home page by providing the user name, password, and the tenant name.
2. In the Oracle Management Cloud home page, click the Dashboard tile.
3. In the Dashboards home page, click and open the dashboard that you want to print.
4. Click Options (the gear icon) in the top left corner of the dashboard and then select Print.
4
Administering Oracle Log Analytics
This section discusses the tasks performed by Oracle Log Analytics administrators.
Topics:
• Typical Workflow for Administering Oracle Log Analytics
• Installing Oracle Management Cloud Agents
• Working with Log Sources
• Working with Entity Associations
• Viewing Collection Warnings
• Purging Log Data
Typical Workflow for Administering Oracle Log Analytics
Here are the common tasks for administering Oracle Log Analytics.
Deploy gateway, data collector, and cloud agents: Deploy Oracle Log Analytics to analyze all your log data. See Installing Oracle Management Cloud Agents.
Create and manage log sources: Configure new log sources and edit existing log sources. See Working with Log Sources.
Create and manage entity associations: Configure new entity associations and edit existing associations. See Working with Entity Associations.
Installing Oracle Management Cloud Agents
You need to perform the following tasks to install cloud agents:
Task 1: Understand the agent deployment topology.
Review and understand the deployment topology of the Oracle Management Cloud agents that are required to set up Oracle Log Analytics. See Understanding the Architecture of Oracle Management Cloud in Installing and Managing Oracle Management Cloud Agents.
Task 2: Review the prerequisites for deploying Oracle Management Cloud agents.
Review the hardware and software requirements for deploying Oracle Management Cloud agents. See Common Prerequisites for Deploying Oracle Management Cloud Agents in Installing and Managing Oracle Management Cloud Agents.
Task 3: Download the agent software.
Download the agent software that contains the script required to install the Oracle Management Cloud agents. See Downloading Oracle Management Cloud Agent Software in Installing and Managing Oracle Management Cloud Agents.
Task 4 (Optional): Install a gateway on a host in your data center (the host should have Internet access to Oracle Management Cloud).
Install a gateway that acts as a channel between Oracle Management Cloud and all other Oracle Management Cloud agents. See Installing a Gateway in Installing and Managing Oracle Management Cloud Agents.
Task 5: Install a data collector on the target host (optional if you are setting up Oracle Log Analytics without an existing Oracle Enterprise Manager).
Install a data collector that uses a gateway to make data available to Oracle Management Cloud. See Installing a Data Collector in Installing and Managing Oracle Management Cloud Agents.
Task 6: Install the cloud agent on the target host.
Install the cloud agent that collects logs from a target host. See Installing Cloud Agents in Installing and Managing Oracle Management Cloud Agents.
Working with Log Sources
This topic describes how to create and manage log sources.
Topics:
• Configuring New Log Sources
• Managing Existing Log Sources
Configuring New Log Sources
To monitor a custom log file, the Oracle Log Analytics Administrator must create a new
log source and a parser.
You can create a log source or parser in multiple ways:
• From the Oracle Log Analytics Configuration page
• From the OMC Navigation icon on the top left corner of the Oracle Log Analytics interface
You can also import source and parser definitions from an XML file. Importing a source imports all source-related content, such as Extended Fields, Tags, Lookups, and Labels.
To import source or parser definitions, click the gear icon on the top right corner of the Configuration page and select the XML definition file.
Topics:
• Creating a Parser
– Preprocessing Log Events
• Creating a Log Source
– Using Extended Fields in Log Sources
– Using Data Filters
– Using Labels in Log Sources
• Creating a Label
• Creating Lookups
• Using the Generic Parser
• Configuring Field Enrichment Options
• Setting Up Syslog Monitoring
• Setting Up Database Instance Monitoring
Creating a Parser
By creating a parser, you define how the fields are extracted from a log entry for a
given type of log file.
To create a parser:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create parser in the Log Parsers section.
Alternatively, you can click the available number of log parsers link in the Log Parsers section and then click Create in the Log Parsers page that is displayed.
3. In the Create Parser page:
• If you want to generate the parser expression using the Parser Builder, then click the Guided tab.
a. In the Parser Name field, enter the parser name. For example, enter OBIO Performance Log Parser.
b. In the Example Log Content field, paste the contents from a log file that you want to parse. You can alternatively click Add from file, and select the log file that you want to parse.
The log records are extracted from the file and displayed in the Example Log Content field.
Select the Handle entire file as a single log entry check box, if required.
Click Next.
c. From the log content, select the lines that represent a single log entry. To select multiple lines, hold down the Ctrl or Shift key while selecting.
Click Next.
d. In the log entry, click each field. The Extract Field dialog box opens.
e. Specify how to capture each field:
– To capture the type of field, select the Capture this field as radio button, click the down arrow under Field Name, and select the field name that it corresponds with. Based on the field type, the field value in the log record is replaced with the regular expression for that field. For example, select the time data in the log entry, and select the Time field name. The {TIMEDATE} regular expression is then displayed.
– To capture the selected field by its literal text, select the Literal text radio button.
Click Save.
After the fields are extracted, the regular expression is displayed and tested. The results of the test are displayed in a table.
f. Click Next.
The fields that you identified for parsing are listed along with the corresponding field names, types of data, descriptions of the fields, and the regular expressions.
g. Confirm the parser data by clicking Create Parser.
The parser builder validates your input against the existing parsers. If an existing parser can be used for the example log content provided earlier, then you'll be redirected to the specific Parser Creation page.
• If you've identified the parser expression for your logs, then click the Manual tab.
WARNING:
After you've selected the Manual mode to create the parser, you can't change to the Guided mode.
a. In the Parser field, enter the parser name. For example, enter Database Audit Log Entries.
b. In the Parse Expression field, enter the expression with delimiters.
The parse expression is unique to each log type and depends on the format of the actual log entries. In this example, enter:
\w+\s+(\w{3})\s+(\d{1,2})\s+(\d{2})\:(\d{2})\:(\d{2})\s+(\d{4})(?:\s+([+-]\d{2}\:?\d{2}))?.*
Note:
– Oracle Log Analytics also lets you parse the local time and date available in the log files by using the TIMEDATE expression format.
For those logs that use the TIMEDATE expression format, the preceding parse expression should be written as:
{TIMEDATE}\s+(\d{1,2})\s+(\d{2})\:(\d{2})\:(\d{2})\s+(\d{4})(?:\s+([+-]\d{2}\:?\d{2}))?.*
– If some log events don't have a year assigned in the log, then Oracle Log Analytics assigns the year to those events.
Note:
– Don't include any spaces before or after the content.
– If you've included hidden characters in your parse expression, then the Create Parser interface issues an error message: Parser expression has some hidden control characters. To disable this default response, uncheck the Show hidden control characters checkbox when the error message appears.
To learn more about creating parse expressions, see Sample Parse Expressions.
c. Select the appropriate Log Record Delimiter.
The log entry can be a single line or multiple lines. If you chose multiple lines, then enter the log entry's start expression.
In the example, the start expression can be:
\w+\s+(\w{3})\s+(\d{1,2})\s+(\d{2})\:(\d{2})\:(\d{2})\s+(\d{4})
d. Enter a sample of the log content for this log type, such as the following:
Tue May 6 23:51:23 2014
LENGTH : '157'
ACTION :[7] 'CONNECT'
DATABASE USER:[3] 'sys'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[8] 'user1'
CLIENT TERMINAL:[0] ''
STATUS:[1] '0'
DBID:[9] '592398530'
Tue May 6 23:51:23 2014 +08:00
LENGTH : '157'
ACTION :[7] 'CONNECT'
DATABASE USER:[3] 'sys'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[8] 'user1'
CLIENT TERMINAL:[0] ''
STATUS:[1] '0'
DBID:[9] '592398530'
i. If you've selected Entry can span multiple lines as the Log Record Delimiter, then you can select Handle entire file as a single log entry.
This option lets you parse an entire log file as a single log record. This is particularly useful when parsing log sources such as Java Hotspot Dump logs, RPM list of packages logs, and so on.
If the time and date are not specified in the parser for a log file that's parsed as a single log record, then Oracle Log Analytics uses the last modified time of the log file to obtain the corresponding data. Note that the date and time data can be obtained only for log files that're sourced through the agent, and not for log files that're uploaded on demand.
e. In the Fields tab, select the relevant type for each component of the log entry.
For each component, select the name. The first component in the example can be entered as follows:
– Field Name: Month (Short Name)
– Field Data Type: STRING
– Field Description: Month component of the log entry time as a short name, such as Jan
– Field Expression: (\w{3})
When you hover over a field name, an information icon appears. Hovering over the icon displays the description of the field in a floating window.
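To make the mapping concrete, here is how the groups of the example parse expression line up against the first line of the sample log content, Tue May 6 23:51:23 2014 (the optional time-zone group matches only in entries such as Tue May 6 23:51:23 2014 +08:00):

(\w{3}) captures May
(\d{1,2}) captures 6
(\d{2})\:(\d{2})\:(\d{2}) captures 23, 51, and 23
(\d{4}) captures 2014
(?:\s+([+-]\d{2}\:?\d{2}))? captures +08:00 when present

The leading \w+ matches the day name, Tue, without capturing it.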
f. In the Functions tab, click Add to optionally add a function to preprocess log events. See Preprocessing Log Events.
g. Click the Parser Test tab to view how your newly created parser extracts values from the log content.
You can view the list of events that failed the parser test and the details of the failure.
h. Click Save.
Note:
You can also create a parser using an out-of-the-box parser as a template. Select an out-of-the-box parser in the Log Parsers page, click Create Like, and modify the values in the fields as per your requirement.
Oracle Log Analytics provides many out-of-the-box parsers for log sources such as Java Hotspot Dump logs; for multiple systems, such as Linux, AIX, Siebel, PeopleSoft, and so on; as well as for entity types, such as Oracle Database, WebLogic Server, and Oracle Enterprise Manager Cloud Control. You can access the complete list of supported parsers and log sources from within the Oracle Log Analytics user interface.
Preprocessing Log Events
For certain log sources, such as Database Trace Files and MySQL, Oracle Log Analytics provides functions that allow you to preprocess log events and enrich the resultant log entries.
Currently, Oracle Log Analytics provides the following functions to preprocess log events:
• Master Detail Function
• Find Replace Function
To preprocess log events, click the Functions tab and then click the Add button while creating a parser.
In the resultant Add Function dialog box, select the required function (based on the log source) and specify the relevant field values.
Master Detail Function
This function lets you enrich log entries with the fields from the header of a log file. It is particularly helpful for logs that begin with a block of header content followed by time-based entries in the body: the function enriches each body log entry with the fields of the header log entry. Database Trace Files are one example of this type of log.
To capture the header and its corresponding fields for enriching the time-based body log entries, select the corresponding Header Parser name in the Add Function dialog box at the time of parser creation.
Examples:
• In these types of logs, the header mostly appears somewhere at the beginning of the log file, followed by the time-based entries. See the following:
Trace file /scratch/emga/DB1212/diag/rdbms/lxr1212/lxr1212_1/trace/
lxr1212_1_ora_5071.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Advanced Analytics and Real Application Testing options
ORACLE_HOME = /scratch/emga/DB1212/dbh
System name:
Linux
Node name: slc00drj
Release:
2.6.18-308.4.1.0.1.el5xen
Version:
#1 SMP Tue Apr 17 16:41:30 EDT 2012
Machine:
x86_64
VM name:
Xen Version: 3.4 (PVM)
Instance name: lxr1212_1
Redo thread mounted by this instance: 1
Oracle process number: 35
Unix process pid: 5071, image: oracle@slc00drj (TNS V1-V3)
*** 2015-10-12 21:12:06.169
*** SESSION ID:(355.19953) 2015-10-12 21:12:06.169
*** CLIENT ID:() 2015-10-12 21:12:06.169
*** SERVICE NAME:(SYS$USERS) 2015-10-12 21:12:06.169
*** MODULE NAME:(sqlplus@slc00drj (TNS V1-V3)) 2015-10-12 21:12:06.169
*** CLIENT DRIVER:() 2015-10-12 21:12:06.169
*** ACTION NAME:() 2015-10-12 21:12:06.169
2015-10-12 21:12:06.169: [ GPNP]clsgpnp_dbmsGetItem_profile: [at clsgpnp_dbms.c:345] Result: (0) CLSGPNP_OK. (:GPNP00401:)got ASMProfile.Mode='legacy'
*** CLIENT DRIVER:(SQL*PLUS) 2015-10-12 21:12:06.290
SERVER COMPONENT id=UTLRP_BGN: timestamp=2015-10-12 21:12:06
*** 2015-10-12 21:12:10.078
SERVER COMPONENT id=UTLRP_END: timestamp=2015-10-12 21:12:10
*** 2015-10-12 21:12:39.209
KJHA:2phase clscrs_flag:840 instSid:
KJHA:2phase ctx 2 clscrs_flag:840 instSid:lxr1212_1
KJHA:2phase clscrs_flag:840 dbname:
KJHA:2phase ctx 2 clscrs_flag:840 dbname:lxr1212
KJHA:2phase WARNING!!! Instance:lxr1212_1 of kspins type:1 does not support 2
phase CRS
*** 2015-10-12 21:12:39.222
Stopping background process SMCO
*** 2015-10-12 21:12:40.220
ksimdel: READY status 5
*** 2015-10-12 21:12:47.628
...
KJHA:2phase WARNING!!! Instance:lxr1212_1 of kspins type:1 does not support 2
phase CRS
For the preceding example, using the Master Detail function, Oracle Log Analytics
enriches the time-based body log entries with the fields from the header content.
• Observe the following log example:
Server: prodsrv123
Application: OrderAppA

2017-08-01 23:02:43 INFO DataLifecycle Starting backup process
2017-08-01 23:02:43 ERROR OrderModule Order failed due to transaction timeout
2017-08-01 23:02:43 INFO DataLifecycle Backup process completed. Status=success
2017-08-01 23:02:43 WARN OrderModule Order completed with warnings: inventory on backorder
In the preceding example, we have four log entries that must be captured in Oracle Log Analytics. The server name and application name appear only at the beginning of the log file. To include the server name and application name in each log entry:
1. Define a parser for the header that'll parse the server and application fields.
2. Define a second parser to parse the remaining body of the log.
3. To the body parser, add a Master-Detail function instance and select the header parser that you'd defined in step 1.
4. Add the body parser that you'd defined in step 2 to a log source, and associate the log source with an entity to start the log collection.
You'll then be able to get four log entries with the server name and application name added to each entry, as sketched below.
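A rough illustration of what the two parse expressions might look like for this example; these regular expressions are assumptions for demonstration, not out-of-the-box definitions:

Header parse expression, capturing the server and application fields:
Server:\s*(\S+)\s+Application:\s*(\S+)

Body parse expression, capturing the date, time, severity, module, and message fields:
(\d{4}-\d{2}-\d{2})\s+(\d{2})\:(\d{2})\:(\d{2})\s+(\w+)\s+(\S+)\s+(.*)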
Find Replace Function
This function lets you extract text from one log line and add it to other log lines conditionally, based on given patterns. It is particularly helpful for solving the missing-timestamp problem in MySQL general and slow query logs.
Consider the following example:
# Time: 160201 1:55:58
# User@Host: root[root] @ localhost []  Id: 1
# Query_time: 0.001320  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 1
select @@version_comment limit 1;
# User@Host: root[root] @ localhost []  Id: 2
# Query_time: 0.000138  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 2
SET timestamp=1454579783;
SELECT DATABASE();
For the preceding example, the Find Replace function gets the timestamp from the
previous log line and adds it to log lines having no timestamp.
The attributes for the previous example can be:
Click the Help icon next to the fields Catch Expression, Find Expression, and Replace String to get the description of the field, sample expression, sample content, and the action performed.
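For instance, the attribute values for the MySQL example could resemble the following hypothetical set (the exact expressions and group-reference syntax for your environment are described in the field-level help):

Catch Expression: ^# Time: (?<timestr>\d{6}\s+\d{1,2}:\d{2}:\d{2})
Find Expression: ^# User@Host:
Replace Expression: # Time: (timestr) # User@Host: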
After adding the find-replace function, you’ll notice the following result:
In the preceding result, you can see that the find-replace function has inserted the timestamp in place of the User@Host entry in each log line that it encountered.
The find-replace function has the following attributes:
• Catch Expression: The regular expression that is matched with every log line; the text matched by its named groups is saved in memory to be used in the Replace Expression.
• Find Expression: This regular expression specifies the text in log lines that is to be replaced by the text matched by the named groups in the Catch Expression.
• Replace Expression: This regular expression indicates the text that replaces the groups found by the Find Expression. The group names should be enclosed in parentheses.
Creating a Log Source
Log Sources define the location of your target's logs. A log source needs to be
associated with one or more targets to start log monitoring.
To create a log source:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create source in the Log Sources section.
Alternatively, you can click the available number of log sources link in the Log
Sources section and then click Create in the Log Sources page that is displayed.
This displays the Create Log Source dialog box.
3. In the Source field, enter the name of the log source.
4. From the Source Type list, select the type for the log source, such as File.
Oracle Log Analytics supports five log source types:
• File: Use this type for parsing the majority of log messages supported by Oracle Log Analytics, such as Database, Oracle Enterprise Manager, Apache, Microsoft, Peoplesoft, Siebel, and so on.
• Oracle Diagnostic Log (ODL): Use this type for parsing Oracle Service Oriented Architecture (SOA) log messages.
• Syslog Listener: Use this type for parsing Syslog (system event messages).
• Windows Event System: Use this type for parsing Windows Event Viewer messages. Oracle Log Analytics can collect all historic Windows Event Log entries. It also supports Windows as well as custom event channels.
Note:
If you select this source type, then the File Parser field is not visible.
• Database: Use this type for parsing database instance log records.
Note:
The following steps are not applicable to database instance log sources. See Setting Up Database Instance Monitoring to learn how to configure database instance log sources.
5. Click the Entity Type field and select the type of entity for this log source.
6. Click the File Parser field and select the relevant parser name, such as Database Audit Log Entries Format.
You can select multiple file parsers for the log files. This is particularly helpful
when a log file has entries with different syntax and can’t be parsed by a single
parser.
For the ODL source type, the only parser available is Oracle Diagnostic Logging Format.
The File Parser field is not available for the Windows Event System source type. Oracle Log Analytics parsers are based on regular expressions, but for the Windows Event System source type, Oracle Log Analytics retrieves already parsed log data, so you don't need any parser for Windows Event System logs.
The Author field already has your user name.
7. To automatically associate this log source with all matching entity types, select the Auto-Associate check box.
8. In the Included Patterns tab, click Add to specify file name patterns for this log source.
Enter the file name pattern and description.
You can enter parameters within {}, such as {AdrHome}, as a part of the file name
pattern. Oracle Log Analytics replaces the parameters with the actual value at run
time. You can view all the parameters for a particular target type by clicking the
See all available built-in parameters link.
The log source will contain only those log files that match the included patterns.
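For example, an included pattern for Oracle Database trace files could be the following, where {AdrHome} is resolved at run time (an illustrative pattern; the available parameters depend on the entity type):

{AdrHome}/trace/*.trc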
You can configure warnings in the log collection for a given pattern.
Select the Send Warning checkbox. In the adjacent drop-down list, select the
situation in which the warning must be issued:
• Every missing or unreadable pattern
• All patterns are unreadable
9. In the Excluded Patterns tab, click Add to define patterns of log file names that must be excluded from this log source.
For example, you can use an excluded pattern when there are files in the same location that you don't want to include in the log source definition. Say there is a file with the name audit.aud in the directory /u01/app/oracle/admin/rdbms/diag/trace/. In the same location, there is another file with the name audit-1.aud. You can exclude any files with the pattern *-1.aud.
10. Click Save.
Using Extended Fields in Log Sources
The Extended Fields feature in Oracle Log Analytics enables you to extract additional
fields from a log entry, in addition to the fields defined by the out-of-the-box parsers.
By default, analyzing log content using a log source extracts the fields that are defined
in the base parser. A base parser extracts common fields from a log entry. However, if
you have a requirement to extract additional fields, you can use the extended fields
definition. For example, a base parser may define that the last part of a log entry, starting with an alphabetic character, is displayed as the value of the Message field. If you need to parse the Message field further to extract additional fields from within its value, then use the Extended Fields feature to update the log source definition and define additional extended fields.
To define extended fields:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of available log sources link.
3. In the Log Sources page, select the required log source where you want to define the extended fields and click Edit.
4. Click the Extended Fields tab and then click Add.
5. Select the Base Field whose value you want to extract and display as an extended field.
6. Enter an example of the value that would be extracted in the Example Content field.
7. Enter the extraction expression in the Extended Field Extraction Expression field and select the Enabled check box.
For example, to extract the endpoint file name from the URI field of a Fusion Middleware Access log file, enter the following:
• Base Field: URI
• Example Content: /service/myservice1/endpoint/file1.jpg
• Extended Field Extraction Expression: {Content Type:\.(jpg|html|png|ico|jsp|htm|jspx)}
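With this definition, the example URI above would populate the new Content Type extended field with the value jpg.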
8. Click Save.
If you use automatic parsing that only parses time, then the extended field definition is based on the Original Log Content field, because that is the only field that will exist in the log results. See Using the Generic Parser.
When you search for logs using the updated log source, values of the extended fields
are displayed along with the fields extracted by the base parser.
Sample Example Content and Extended Field Extraction Expression
Log Source: /var/log/messages
Parser Name: Linux Syslog Format
Base Field: Message
Example Content: authenticated mount request from 10.245.251.222:735 for /scratch (/scratch)
Extended Field Extraction Expression: authenticated {Action:\w+} request from {Address:[\d\.]+}:{Port:\d+} for {Directory:\S+}\s\(

Log Source: /var/log/yum.log
Parser Name: Yum Format
Base Field: Message
Example Content: Updated: kernel-headers-2.6.18-371.0.0.0.1.el5.x86_64
Extended Field Extraction Expression: {Action:\w+}: {Package:.*}

Log Source: Database Alert Log
Parser Name: Database Alert Log Format (Oracle DB 11.1+)
Base Field: Message
Example Content: Errors in file /scratch/cs113/db12101/diag/rdbms/pteintg/pteintg/trace/pteintg_smon_3088.trc (incident=4921): ORA-07445: exception encountered: core dump [semtimedop()+10] [SIGSEGV] [ADDR:0x16F9E00000B1C] [PC:0x7FC6DF02421A] [unknown code] []
Extended Field Extraction Expression: Errors in file {Trace File:\S+} (incident={Incident:\d+}): {Error ID:ORA-\d+}: exception encountered: core dump [semtimedop()+10] [SIGSEGV] [ADDR:{Address:[\w\d]+}] [PC:{Program Counter:[\w\d]+}] [unknown code] []

Log Source: FMW WLS Server Log
Parser Name: WLS Server Log Format
Base Field: Message
Example Content: Server state changed to STARTING
Extended Field Extraction Expression: Server state changed to {Status:\w+}
Using Data Filters
Oracle Log Analytics lets you mask and hide sensitive information in your log records, as well as hide entire log entries. Using the Data Filters tab under Log Sources in the Configuration page, you can mask IP addresses, user IDs, host names, and other sensitive information with replacement strings, drop specific keywords and values from a log entry, and also hide an entire log entry.
Masking Log Data
If you want to mask information such as the user name and the host name from the log
entries:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create source in the Log Sources section.
Alternatively, you can click the available number of log sources link in the Log
Sources section and then click Create in the Log Sources page that is displayed.
This displays the Create Log Source dialog box.
3. Specify the relevant values for the Source, Source Type, Entity Type, and File Parser fields.
4. In the Included Patterns tab, click Add to specify file name patterns for this log source.
5. Click the Data Filters tab and click Add.
6. Enter the mask Name, select Mask as the Type, and enter the Find Expression value and its associated Replace Expression value.
Name: mask username
Find Expression: User=(\S+)\s+
Replace Expression: confidential

Name: mask host
Find Expression: Host=(\S+)\s+
Replace Expression: mask_host
Note:
The syntax of the replace string should match the syntax of the string that is being replaced. For example, a number should not be replaced with a string. An IP address of the form ddd.ddd.ddd.ddd should be replaced with 000.000.000.000 and not with 000.000. If the syntaxes don't match, the parsers will break.
7. Click Save.
When you view the masked log entries for this log source, you’ll find that Oracle Log
Analytics has masked the values of the fields that you’ve specified.
• User = confidential
• Host = mask_host
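Following the syntax-matching rule described in the earlier note, a data filter that masks IPv4 addresses while preserving their format could use values like these (hypothetical expressions):

Name: mask ip
Find Expression: \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
Replace Expression: 000.000.000.000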
Note:
Apart from adding data filters when creating a log source, you can also edit an
existing log source to add data filters. See Managing Existing Log Sources to
learn about editing existing log sources.
Note:
Data masking works on continuous log monitoring as well as on Syslog
listeners.
Dropping Specific Keywords or Values from Your Log Records
Oracle Log Analytics allows you to search for a specific keyword or value in log records and drop the matched keyword or value from those records.
Consider the following log record:
ns5xt_119131: NetScreen device_id=ns5xt_119131 [Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03" duration=4
policy_id=2 service=smtp proto=6 src zone=Untrust dst zone=mail_servers
action=Permit sent=756 rcvd=756 src=249.17.82.75 dst=212.118.246.233
src_port=44796 dst_port=25 src-xlated ip=249.17.82.75 port=44796 dst-xlated
ip=212.118.246.233 port=25 session_id=18738
If you want to hide the keyword device_id and its value from the log record:
1. Perform Steps 1 to 5 listed in the Masking Log Data section.
2. Enter the filter Name, select Drop String as the Type, and enter the Find Expression value, such as device_id=\S*.
3. Click Save.
When you view the log entries for this log source, you’ll find that Oracle Log Analytics
has dropped the keywords or values that you’ve specified.
Note:
Ensure that your parser regular expression matches the log record pattern; otherwise, Oracle Log Analytics may not parse the records properly after dropping the keyword.
Note:
Apart from adding data filters when creating a log source, you can also edit an
existing log source to add data filters. See Managing Existing Log Sources to
learn about editing existing log sources.
Dropping an Entire Line in a Log Record Based on Specific Keywords
Oracle Log Analytics allows you to search for a specific keyword or value in log
records and drop an entire line in a log record if that keyword exists.
Consider the following log record:
ns5xt_119131: NetScreen device_id=ns5xt_119131 [Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03" duration=4
policy_id=2 service=smtp proto=6 src zone=Untrust dst zone=mail_servers
action=Permit sent=756 rcvd=756 src=249.17.82.75 dst=212.118.246.233
src_port=44796 dst_port=25 src-xlated ip=249.17.82.75 port=44796 dst-xlated
ip=212.118.246.233 port=25 session_id=18738
If you want to drop entire lines if the keyword device_id exists in them:
1. Perform Steps 1 to 5 listed in the Masking Log Data section.
2. Enter the filter Name, select Drop Log Entry as the Type, and enter the Find Expression value, such as .*device_id=.*.
3. Click Save.
When you view the log entries for this log source, you'll find that Oracle Log Analytics has dropped all the lines that contain the string device_id.
Note:
Apart from adding data filters when creating a log source, you can also edit an
existing log source to add data filters. See Managing Existing Log Sources to
learn about editing existing log sources.
Using Labels in Log Sources
Oracle Log Analytics enables you to add labels or tags to log entries, based on defined
conditions.
You can use patterns to specify a condition. When a log entry matches that condition,
the label associated with the pattern is displayed alongside the log entry.
To add labels to a log entry:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of available log sources link.
3. In the Log Sources page, select the required log source where you want to define the labels and click Edit.
4. Click the Labels tab and then click Add.
5. Select the log field on which you want to apply the condition from the Field drop-down list.
6. Select the operator from the Operator drop-down list.
7. Specify the value of the condition to be matched for applying the label in the Condition Value field.
8. Enter the text for the label to be applied in the Label field and select the Enabled check box.
In the following image, the log source has been configured to attach the critical
label to any log entry that contains a Severity value of INCIDENT ERROR.
You can also create a custom label to tag a specific log entry. See Creating a
Label.
9. Click Save.
You can now search log data based on the labels that you’ve created. See Filtering by
Labels.
Creating a Label
Oracle Log Analytics offers multiple out-of-the-box labels for select log sources. You can use these labels to tag the log entries in your log sources. See Out-of-the-Box Labels. In the following Cisco ASA Logs log source example, the highlighted out-of-the-box labels and more are provided by Oracle Log Analytics for ready use.
However, if you can’t find the labels that you’re looking for, create custom labels that
can be used in log sources to tag the log entries, based on defined conditions. To
create a label:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create Label in the Labels section.
Alternatively, you can click the available number of labels link in the Labels
section and then click Create in the Labels page that is displayed.
3. In the Label field, enter the label name. For example, enter Gateway Timeout.
4. In the Description field, enter the details about the label. For example, enter Java exception encountered.
5. To assign priority to the label:
a. Under the Denotes Problem field, select the Yes check box.
b. In the Problem Priority field, click the down arrow and select a priority. For example, select High.
6. In the Related Terms field, enter the terms that are related to the log entry.
7. Click Save.
You can now use the new custom label in your log source to tag a log entry. See
Using Labels in Log Sources.
After the custom labels are associated with the log sources, you can search the log
data based on the labels that you’ve created. See Filtering by Labels. The labels can
be used for search as in the following example use-cases:
• To obtain a rapid summary of all error trends
• To identify the problem events
• To perform plain language analysis across log sources
• To perform plain language analysis in combination with clusters
Creating Lookups
Using Oracle Log Analytics, you can enrich event data by adding field-value
combinations from lookups. Oracle Log Analytics uses lookups to match field-value
combinations from events to an external lookup table, and if matched, Oracle Log
Analytics appends the field-value combinations to the events.
For example, the Error ID field in log events doesn’t provide a description of the errors.
You can create a lookup mapping Error ID to descriptions, and then use the Field
Enrichment options to make the descriptions available to search or visible in the log
records.
To create a lookup:
1. Create a lookup CSV file with the field-value combinations. For example:
errid,description
02323,Network Not Reachable
09912,User Activity
12322,Out of Memory
2. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
3. In the Oracle Log Analytics Configuration page, click the Create Lookup link under Lookups.
4. In the Lookup page, enter the name of the lookup, such as server error code lookups, and an optional description.
5. Click the Import button, select the lookup CSV file that you had created earlier, and click Save.
After you create a lookup, you can use it as a Field Enrichment option in your log
source. See Configuring Field Enrichment Options.
Using the Generic Parser
Oracle Log Analytics allows you to configure a generic parser to parse logs from different log sources. This is particularly helpful when you're not sure how to parse your logs or how to write regular expressions to parse them, and you just want to pass in the raw log data to perform analysis.
Typically, a parser defines how the fields are extracted from a log entry for a given
type of log file. However, the generic parser in Log Analytics can:
• Detect the time stamp and the time zone from log entries.
• Create a time stamp using the current time if the log entries don't have any time stamp.
• Detect whether the log entries are multiline or single line.
To use the generic parser:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create source in the Log Sources section.
Alternatively, you can click the available number of log sources link in the Log
Sources section and then click Create in the Log Sources page that is displayed.
This displays the Create Log Source dialog box.
3. In the Source field, enter the name for the log source.
4. In the Source Type field, select File.
5. Click Target Type and select the type of target for this log source.
6. Select Automatically parse time only. Oracle Log Analytics will automatically apply the generic parser type.
7. To automatically associate this log source with all matching target types, select the Auto-Associate check box.
8. Click Save.
When you access the log entries of the newly created log source, Oracle Log Analytics
extracts and displays the following information from the log entries:
• Time stamp:
– When a log entry doesn't have a time stamp, the generic parser creates and displays the time stamp based on the time when the log data was collected.
– When a log entry contains a timestamp but the time zone is not defined, the generic parser uses the cloud agent's time zone.
• Time zone:
– When a log file has log entries with multiple time zones, the generic parser can support up to 11 time zones.
– When a log displays some entries with a time zone and some without a time zone, the generic parser follows the time zone of the latest log entry.
• Multiple lines: When a log entry spans multiple lines, the generic parser captures the multiline content correctly.
Configuring Field Enrichment Options
Oracle Log Analytics enables you to configure Field Enrichment options that let you further extract and display meaningful information from your extended fields data.
One of the Field Enrichment options is the Geolocation Lookup, which converts IP addresses or host names present in the log records to a country or country code. This can be used in log sources like Web Access Logs that have external client IP addresses.
Using the Lookup Field Enrichment option, you can match field-value combinations
from events to an external lookup table.
Using Geolocation Lookup
After you set up the Geolocation Lookup options, you can view log records grouped by
country or country code. See Using Maps.
To set up the Geolocation Lookup Field Enrichment options:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of available log sources link.
3. In the Log Sources page, select the required log source where you want to define the extended fields and click Edit.
4. Add the Extended Fields definition for the base field that contains the country-specific IP address or host name records, such as Host IP Address.
5. Click the Field Enrichment tab and then click Add.
6. In the Field Enrichment dialog box, select Geolocation Lookup as the Function. Click the View details link to see a sample representation of the Geolocation Lookup function.
7. Keep the Enabled check box selected.
8. In the IP or Host Name field, select the base field name that you have used in the Extended Fields definition.
9. Click Add.
To use the Maps visualization in Oracle Log Analytics to view log records grouped by
country or country code, see Using Maps.
Using Lookup
Using Oracle Log Analytics, you can enrich event data by adding field-value combinations from lookups. Oracle Log Analytics uses lookups to match field-value combinations from events to an external lookup table, and if matched, Oracle Log Analytics appends the field-value combinations to the events. See Creating Lookups.
To set up the Lookup Field Enrichment options:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of available log sources link.
3. In the Log Sources page, select the required log source where you want to define the lookup options and click Edit.
Note:
You can also click Create source and create a new log source.
4. Click the Field Enrichment tab and then click Add.
5. In the Field Enrichment dialog box, select Lookup as the Function. Click the View details link to see a sample representation of the Lookup function.
6. Keep the Enabled check box selected.
7. In the Reference Lookup field, select the lookup file that you uploaded earlier. The drop-down list shows all lookups that have been previously uploaded.
8. To map the key from the lookup to a field that is populated by your parser, in Lookup Field, select the field for which you've created the lookup, such as Error ID.
Note:
The drop-down list for the input field is limited to the fields that your log source populates. In this case, the Lookup Field will be matched against your log entries' field, Error ID.
9. Select the Output Field, such as Error Text.
When there is a match, the lookup value is written to the output field, which in this case is the Error Text field.
10. Click Add.
When you display log records pertaining to the log source for which you created the
lookup, you can see that the Output Field displays values that are populated against
the log entries because of the lookup against the CSV file that you had uploaded
earlier. See Creating Lookups.
Setting Up Syslog Monitoring
Syslog is a commonly used standard for logging system event messages. The
destination of these messages can include system console, files, remote Syslog
servers, or relays.
Oracle Log Analytics allows you to collect and analyze Syslog data from various
sources. You just need to configure the Syslog output ports in the Syslog servers.
Oracle Log Analytics monitors the output ports, accesses the remote Syslog contents,
and performs the analysis.
Syslog monitoring in Oracle Log Analytics allows you to listen to multiple hosts and
ports. The protocols supported are TCP and UDP.
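For example, on a source host running rsyslog, a forwarding rule such as the following in /etc/rsyslog.conf sends all messages to the listener (the host name and port here are illustrative); a single @ forwards over UDP, while @@ forwards over TCP:

*.* @@omcagent.example.com:4514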
To set up Syslog monitoring:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create source in the Log Sources section.
Alternatively, you can click the available number of log sources link in the Log
Sources section and then click Create in the Log Sources page that is displayed.
This displays the Create Log Source dialog box.
3. In the Source field, enter the name for the log source.
4. From the Source Type list, select Syslog Listener.
5. Click Entity Type and select the required entity type, such as Host.
6. Click File Parser and select Syslog Standard Format.
7. In the Listener Pattern tab, click Add to specify the details of the listener that Oracle Log Analytics monitors to collect Syslogs.
Enter the listener port that you have specified as the output port in the Syslog configuration file on the Syslog server, select either UDP or TCP (recommended for heavy traffic) as the required protocol, and select Enabled.
Repeat the same step to add multiple listener ports.
8. Click Save.
9. In the Log Sources page, select the newly created Syslog source (testSyslog in this case) and click Associated Targets.
10. In the Associated Targets: <log source name> page, click Add.
11. Select the host name or host names with which you want to associate the source
and click Select.
12. In the Associated Targets: <log source name> page, click Save.
Viewing Syslog Data
You can use the Sources section in the Data panel of Oracle Log Analytics to view
Syslog data.
To view Syslog data:
1. From Oracle Log Analytics, click Log Sources in the Source section.
2. In the Filter by Log Source dialog box, select the name of the Syslog source that you have created, and click Submit.
Oracle Log Analytics displays the Syslog data from all the configured listener ports.
You can analyze Syslog data from different hosts or devices.
Setting Up Database Instance Monitoring
Oracle Log Analytics can extract database instance records based on the SQL query
that you provide in the log source configuration. You can define a parser for database
instance log records using Oracle Log Analytics.
Currently, the supported database types are Oracle Database Instance (omc_oracle_db_instance), Microsoft SQL Server Database Instance (omc_sqlserver_db_instance), and MySQL Database Instance (omc_mysql_db_instance).
Overall Flow for Collecting Database Logs
Following are the high-level tasks for collecting log information stored in a database:
1. Creating a log source
2. Providing entity credentials
3. Associating an entity with the log source
Note:
By default, after you’ve installed the cloud agent, it collects the database
instance records for 30 days. If you want to extract records that’re more than 30
days old, then update the property before the event collection from the
database begins:
omcli setproperty agent -allow_new -name
loganalytics.database_sql.max_oldDays -value <newValue_for_max_oldDays>
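For example, to allow the collection of records up to 90 days old (an illustrative value), you could run:

omcli setproperty agent -allow_new -name loganalytics.database_sql.max_oldDays -value 90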
Creating a Log Source
To set up database instance monitoring:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click Create source in the Log Sources section.
Alternatively, you can click the available number of log sources link in the Log
Sources section and then click Create in the Log Sources page that is displayed.
This displays the Create Log Source dialog box.
3. In the Source field, enter the name for the log source.
4. From the Source Type list, select Database.
5. Click Entity Type and select the required entity type, such as Microsoft SQL Server Database Instance, MySQL Database Instance, or Oracle Database Instance.
6. In the Database Queries tab, click Add to specify the details of the SQL query based on which Oracle Log Analytics collects database instance logs.
7. Click the Configure button to display the Configure Column Mapping dialog box.
8. In the Configure Column Mapping dialog box, map the SQL fields to the field names that will be displayed in the actual log records.
You need to specify a Sequence Column.
See SQL Query Guidelines.
Note that the first mapped field with a data type of Timestamp will be used as the
timestamp of the log entry. If no such field is present, then the collection time will
be used as the time of the log entry.
Click Done.
9. Repeat steps 6 through 8 to add multiple SQL queries.
10. Select Enabled for each of the SQL queries and then click Save.
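For example, a read-only query against a hypothetical application audit table could look like the following, where LOG_ID would be mapped as the Sequence Column and LOG_TIME, being a Timestamp field, would supply the time of each log entry:

SELECT log_id, log_time, severity, message
FROM app_audit_log
ORDER BY log_id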
Providing Entity Credentials
For each entity that is used for collecting the data defined in the Database log source,
you need to provide the necessary credentials that will be used to connect to the entity
and run the SQL query. These credentials need to be registered in a credential store
that is maintained locally by the cloud agent (that will be used to collect the data from
the entity).
Creating the JSON File
Create a JSON file that contains the credential information, as follows:
[{
"entity": "<Enter Entity Type>.<Enter Entity Name>",
"name": "LCAgentDBCreds",
"type": "DBCredsNormal",
"usage": "LOGANALYTICS",
"globalName": "AgentUserCredential ",
"description": "SSH Credential for fetching the data from db tables via sql",
"properties": [{
"name": "USERNAME",
"value": "CLEAR[username]"
},
{
"name": "PASSWORD",
"value": "CLEAR[password]"
},
{
"name": "ROLE",
"value": "CLEAR[rolename]"
}]
}]
For example, for a database named avdf_instance and user name, password, and role
as sys, syspasswd, and SYSDBA respectively, the JSON file should contain:
[{
"entity": "omc_oracle_db_instance.avdf_instance",
"name": "LCAgentDBCreds",
"type": "DBCredsNormal",
"globalName": "AgentUserCredential",
"usage": "LOGANALYTICS",
"description": "DB Credentials",
"properties": [{
"name": "USERNAME",
"value": "CLEAR[sys]"
},
{
"name": "PASSWORD",
"value": "CLEAR[syspasswd]"
},
{
"name": "ROLE",
"value": "CLEAR[SYSDBA]"
}]
}]
The name, type, and usage fields should be set to LCAgentDBCreds, DBCredsNormal, and LOGANALYTICS respectively. The globalName field needs to be unique within the credential store managed by the local cloud agent. The ROLE property is optional.
Registering the Credential Information
You need to register the credential information with the cloud agent by performing the
following steps:
1. Go to the Oracle Management Cloud host computer.
2. To create a credential store, if it was not created earlier:
a. Stop the cloud agent:
omcli stop agent
b. Run the following command from the <AGENT_BASE_DIR>/agent_inst/bin location:
omcli add_credential_store agent -no_password
See omcli Command Options
c. Start the cloud agent:
omcli start agent
3. To register the credential information, run the following command from the <AGENT_BASE_DIR>/agent_inst/bin location:
omcli add_credentials agent -credential_file <PATH_TO_CRED_JSON_FILE>
See omcli Command Options
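For example, if the credential JSON file was saved as /tmp/db_creds.json (a hypothetical path), the command would be:

omcli add_credentials agent -credential_file /tmp/db_creds.json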
Note:
By default, after you’ve installed the cloud agent, it collects the database
instance records for 30 days. If you want to extract records that’re more than 30
days old, then update the property before the event collection from the
database begins:
omcli setproperty agent -allow_new -name
loganalytics.database_sql.max_oldDays -value <newValue_for_max_oldDays>
Associating an Entity with the Log Source
See Working with Entity Associations.
Managing Existing Log Sources
You can use the Log Sources page to edit existing log sources and add entity
associations to existing log sources.
Editing Log Sources
To edit existing log sources:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the available count of log sources link in the Log Sources section.
3. Click the Open Menu icon adjacent to the log source entry that you want to edit and select Edit.
The Edit Log Source page is displayed.
4. Modify the log source definition as per your requirement and click Save.
Associating Entities to Existing Log Sources
You can enable existing log sources by associating entities with them. To add entity associations to existing log sources:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of available log sources link in the Log Sources section.
If you want to associate entities to log sources that have existing auto-associated
log sources, then click the count of auto-associated log sources link.
3. Click the Open Menu icon adjacent to the log source entry to which you want to associate entities and select Add Entity Associations.
The Associated Entities: <Log Source Name> page is displayed with a list of
associated entities.
4. Click Add, select the available entities for the selected log source, and then click Select.
You can also search for a log source and select it from the search result.
Note:
You can remove an associated entity or republish the association between the selected log source and an entity by selecting the entity name and clicking Remove or Republish Associations, respectively.
Creating a Log Source Based on an Existing One
If you’re not sure about how to create a log source, you can use any existing log
source to create a new one.
To create a new log source based on an existing one:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the available count of log sources link in the Log Sources section.
3. Click the Open Menu icon adjacent to the log source entry based on which you want to create a new log source and select Create Like.
The Create Log Source page is displayed with the log definition fields populated
with the definitions of the existing log source.
4. Modify the log source definition as per your requirement and click Save.
Working with Entity Associations
This topic describes how to configure new entity associations and manage existing ones.
Topics:
• Configuring New Entity Associations
• Managing Existing Entity Associations
Configuring New Entity Associations
You can configure new entity associations or enable log sources for a target for
collecting log data. To configure entity associations on a large scale, you can use the
source-entity association APIs.
To configure a new entity association using the Oracle Log Analytics interface:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click New association in the Entities section.
Alternatively, you can click the count of available entities link in the Entities
section and then click New Association in the Entities page that is displayed.
The Associate Entities for Log Collection page is displayed.
3. In the Select Entities section, select the target type, such as WebLogic Server, from the Entity Type drop-down list.
4. Click Add Entities in the Entities section, select the required entities to be associated, and click Select. Then click Continue.
From the list of entities, you can also select a remote agent that you’ve configured
using the REST API, and associate it with the specific log sources.
You can select the check box to the left of the Entity Name heading to select all
the available entities.
5. In the Select Log Sources section, select the required log sources from the list of available sources and click Continue.
You can select the Select All check box to add all the available log sources.
6. In the Confirmation section, click Associate Entities to create the target association.
The new target is displayed in the list of available entities.
Managing Existing Entity Associations
You can use the Entities page to associate log sources to existing entities.
To associate log sources to existing entities:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of available entities link in the Entities section.
3. Click the target name to which you want to associate log sources.
Alternatively, you can click the count of associated log sources link.
This displays the Associated Log Sources: <target name:port> page.
4. Click Add to display the Select Log Sources dialog box.
5. Select the required log sources and click Select.
You can also click the check box to the left of the Log Source heading to select all
the available log sources.
6. Click Save.
7. Click Save in the Save changes dialog box.
Viewing Collection Warnings
Oracle Log Analytics lets you view the warning messages returned by log sources.
This helps you to diagnose problems with the sources and to take corrective action.
Viewing Warnings Summary
To view the warning messages:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the available count of log sources link in the Log Sources section.
3. In the resultant page, click the Collection Warnings link.
Viewing Entities with Collection Warnings
To view the warning messages:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. In the Oracle Log Analytics Configuration page, click the count of entities with collection warnings link in the Targets section.
This displays the Targets page with a list of entities whose log sources have
returned warning messages.
3. Click the warning icon adjacent to an entity entry.
This displays the warning messages returned by the erring log sources.
Purging Log Data
Oracle Log Analytics lets you purge log events loaded by the agent or by on-demand upload to reduce the index size of the log data. Oracle Log Analytics billing depends on the amount of log data indexed, so purging allows you to bring down your usage and reduce overage charges.
Oracle Log Analytics data is stored in “buckets”, and you can purge log data within
bucket boundaries.
Buckets:
• May not be of equal size
• Will contain a different number of log records
• May contain log records from different time periods
All data from all buckets created prior to the selected time range gets purged.
To purge log data:
1. From Oracle Log Analytics, click the OMC Navigation icon on the top left corner of the interface. In the OMC Navigation bar, click Configuration Home.
2. On the Configuration page, click Manage Storage in the Storage tile.
3. On the Manage Storage page, select the time period up to which you want to purge the data. The selected time period is cumulative of values from all buckets prior to the selected time period.
4. Click the Purge Data button to delete all data from the selected bucket and buckets from prior time periods.
A
Understanding Log Analytics Search Commands
The Oracle Log Analytics Search Language allows you to specify what action to
perform on the search results.
Commands can be either search commands or statistical commands.
Search Commands
Search commands further filter the available log entries. The following list describes each search command briefly.

bottom: Use this command to display a specific number of results with the lowest aggregated value as determined by the specified field. See Bottom Command in Using Oracle Log Analytics Search.
bucket: Use this command to group the log records into buckets based on the range of values of a field. See Bucket Command in Using Oracle Log Analytics Search.
cluster: Use this command to group similar log records. See Cluster Command in Using Oracle Log Analytics Search.
clusterdetails: Use this command to return similar log records. See Clusterdetails Command in Using Oracle Log Analytics Search.
clustersplit: Use this command to view the log data within a cluster for specific classify results in tabular format. See Clustersplit Command in Using Oracle Log Analytics Search.
eval: Use this command to calculate the value of an expression and display the value in a new field. See Eval Command in Using Oracle Log Analytics Search.
fields: Use this command to specify which fields to add or remove from the results. See Fields Command in Using Oracle Log Analytics Search.
fieldsummary: Use this command to return data for the specified fields. See Fieldsummary Command in Using Oracle Log Analytics Search.
highlightrows: Use this command to match a string or a list of strings, and highlight the entire row in the Log UI. See Highlightrows Command in Using Oracle Log Analytics Search.
highlight: Use this command to match a string or a list of strings, and highlight them in the Log UI. See Highlight Command in Using Oracle Log Analytics Search.
search: Use this command to retrieve a specific logical expression from the available log data. See Search Command in Using Oracle Log Analytics Search.
sort: Use this command to sort logs according to specified fields. See Sort Command in Using Oracle Log Analytics Search.
top: Use this command to display a specified number of results with the highest aggregated value as determined by the specified field. See Top Command in Using Oracle Log Analytics Search.
where: Use this command to evaluate whether the value of an expression is true or false. See Where Command in Using Oracle Log Analytics Search.
Statistical Commands
Statistical commands perform statistical operations on the search results.
The following list describes the supported statistical commands.

distinct: Use this command to remove duplicate entries from the search results. See Distinct Command in Using Oracle Log Analytics Search.
stats: Use this command to provide summary statistics for the search results, optionally grouped by a specified field. See Stats Command in Using Oracle Log Analytics Search.
timestats: Use this command to generate data for displaying statistical trends over time, optionally grouped by a specified field. See Timestats Command in Using Oracle Log Analytics Search.
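For illustration, commands are chained after a query expression using the pipe (|) character. A hypothetical query that counts matching entries per log source might look like the following; see Using Oracle Log Analytics Search for the exact syntax of each command:

* | search 'error' | stats count by 'Log Source'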
B
Out-of-the-Box Log Sources
This section lists the out-of-the-box log sources provided by Oracle Log Analytics.
AIX Audit Logs
AIX Cron Logs
AIX Dynamic System Optimizer Logs
AIX HACMP Cluster Logs
AIX SU Logs
AIX Syslog Logs
Apache HTTP Server Access Logs
Apache HTTP Server Error Logs
Apache HTTP Server SSL Access Logs
Apache HTTP SSL Request Logs
Apache Tomcat Access Logs
Apache Tomcat Catalina Logs
Apache Tomcat Error Logs
Apache Tomcat Host Logs
ArcSight Common Event Format Source
Automatic Storage Management Alert Logs
Automatic Storage Management Trace Logs
Bluecoat Proxy Squid Logs
Bluecoat Proxy W3C Logs
Citrix Netscaler Logs
Clusterware Ready Services Alert Logs
Clusterware Ready Services Daemon Logs
Database Alert Logs
Database Audit Logs
Database Audit XML Logs
Database Incident Dump Files
Database Listener Alert Logs
Database Listener Trace Logs
Database Trace Logs
EM Cloud Control Agent AJTS Logs
EM Cloud Control Agent EMCTL Logs
EM Cloud Control Agent Host Target Event Logs
EM Cloud Control Agent JVMGC Logs
EM Cloud Control Agent Logs
EM Cloud Control Agent PFU Logs
EM Cloud Control Agent STDOUT Logs
EM Cloud Control OMS Access Logs
EM Cloud Control OMS Diagnostics Logs
EM Cloud Control OMS Logs
EM Cloud Control OMS STDOUT Logs
EM Cloud Services Agent AJTS Logs
EM Cloud Services Agent EMCTL Logs
EM Cloud Services Agent Host Target Logs
EM Cloud Services Agent JVMGC Logs
EM Cloud Services Agent Log Collector Logs
EM Cloud Services Agent Logs
EM Cloud Services Agent PFU Logs
EM Cloud Services Agent STDOUT Logs
F5 Big IP Logs
FMW BI JBIPS Logs
FMW BI Publisher Logs
FMW OAM Embedded LDAP Access Logs
FMW OHS Access Logs (V11)
FMW OHS Access Logs (V12)
FMW OHS Admin Access Logs (V12)
FMW OHS Diagnostic Logs (V11)
FMW OHS OPMN Logs (V11)
FMW OHS Server Logs (V12)
FMW OID Audit Logs
FMW OID Directory Control Logs
FMW OID Directory Dispatcher Server Logs
FMW OID Directory Replication Server Logs
FMW OID Directory Server Logs
FMW OID Monitor Logs
FMW OID OPMN Logs
FMW WLS Node Manager Log
FMW WLS Server Access Logs
FMW WLS Server Diagnostic Logs
FMW WLS Server Logs
FMW WLS Server STDOUT Logs
Fusion Apps Diagnostic Logs
IBM DB2 Audit Logs
IBM DB2 Diagnostic Logs
IBM Websphere Application Server (Classic) Logs
IBM Websphere Application Server (Classic) System Error
IPTables Logs
JBOSS EAP Log Source
Juniper SRX Syslog Logs
Linux Audit Logs
Linux Cron Logs
Linux Mail Delivery Logs
Linux Secure Logs
Linux Syslog Logs
Linux YUM Logs
Microsoft Active Directory Distributed File System Replication Logs
Microsoft Active Directory Installation Wizard Logs
Microsoft Active Directory Netsetup Logs
Microsoft Active Directory NtFrsApi Logs
Microsoft DNS Logs
Microsoft IIS Log Source for FTP format logs
Microsoft IIS Log Source for IIS format logs
Microsoft IIS Log Source for NCSA format logs
Microsoft IIS Log Source for W3C format logs
Microsoft SQL Server Agent Error Log
Microsoft SQL Server Error Log Source
MySQL Database Audit XML Logs
MySQL Error Logs
MySQL General Query Logs
MySQL Slow Query Logs
NetApp Syslog Logs
NGINX Access Logs
NGINX Error Logs
OMC Compliance Assessment Result Logs
OMC Orchestration Service Output Logs
OMC Security Monitoring Analytics Event Format (XML) Source
Oracle Access Manager Audit Logs
Oracle DB Audit Log Source Stored in Database
Oracle DB Audit Log Source Stored in Database for Unified Audit Trail
PeopleSoft Analytics Engine Server Logs
PeopleSoft Application Analytics Engine Server Logs
PeopleSoft Application server domain Application Server (APPSRV) Process Logs
PeopleSoft Application server domain Monitor Server (MONITORSRV) Process Logs
PeopleSoft Application server domain Watch Server (WATCHSRV) Process Logs
PeopleSoft Application Tuxedo Access Logs
PeopleSoft Application Tuxedo User Logs
PeopleSoft Integration Gateway Error Logs
PeopleSoft Integration Gateway Message Logs
PeopleSoft Master Scheduler Server Logs
PeopleSoft Process Scheduler App Engine Server Logs
PeopleSoft Process Scheduler Distribution Agent Logs
PeopleSoft Process Scheduler Master Scheduler Logs
PeopleSoft WLS Server Access Logs
PeopleSoft WLS Server Logs
PeopleSoft WLS Server STDOUT Logs
PeopleSoft WLS Servlet Logs
SAP Application Startup Logs
SAP Application Transport Logs
SAP Dev Dispatcher Logs
SAP Dev ICM Security Logs
SAP Dev Message Server Logs
SAP Dev RD Logs
SAP Java Server Application Logs
SAP Java Server Default Trace Logs
SAP VMC Available Logs
Siebel Component Logs
Siebel Gateway Name Server Audit Logs
Siebel Gateway Server Logs
Solaris Audit Logs
Solaris ILOM Configuration Logs
Solaris Install Logs
Solaris SMF Daemon Logs
Solaris SU Logs
Solaris Syslog Logs
Squid Proxy Access Logs
SUDO Logs
Ubuntu Secure Logs
Ubuntu Syslog Logs
Windows Application Events
Windows Security Events
Windows Setup Events
Windows System Events
Note:
Please note that the preceding list is evolving. Check the product user interface for the latest list of log sources.
C
Out-of-the-Box Labels
This section lists the log sources for which out-of-the-box labels are provided by
Oracle Log Analytics.
AIX Syslog Logs
Apache Tomcat Access Logs
Automatic Storage Management Alert Logs
Automatic Storage Management Trace Logs
Cisco ASA Logs
Cisco Syslog Listener Source
Citrix Netscalar Logs
Clusterware Ready Services Alert Logs
Database Alert Logs
Database Audit XML Logs
Database Listener Trace Logs
Database Trace Logs
F5 Big IP Logs
FMW OHS Access Logs (V11)
FMW OHS Access Logs (V12)
FMW OHS Admin Access Logs (V12)
FMW WLS Server Access Logs
FMW WLS Server Logs
FMW WLS Server STDOUT Logs
Linux Secure Logs
Linux Syslog Logs
NGINX Access Logs
Oracle DB Audit Log Source Stored in Database
Oracle Unified DB Audit Log Source Stored in Database 12.1
Solaris Syslog Logs
Ubuntu Secure Logs
Ubuntu Syslog Logs
Windows Security Events
Note:
Please note that the preceding list is evolving. Check the product user interface for the latest list of log sources.
D
Sample Parse Expressions
This section lists sample parse expressions for extracting values from a log file.
A log file comprises entries that are generated by concatenating multiple field values.
You may not need to view all the field values for analyzing a log file of a particular
format. Using a parser, you can extract the values from only those fields that you want
to view.
A parser extracts fields from a log file based on the parse expression that you’ve
defined. A parse expression is written in the form of a regular expression that defines a
search pattern. In a parse expression, you enclose search patterns with parentheses
(), for each matching field that you want to extract from a log entry. Any value that
matches a search pattern that is outside the parentheses is not extracted.
Example 1
If you want to parse the following sample log entries:
Jun 20 15:19:29 hostabc rpc.gssd[2239]: ERROR: can't open clnt5aa9: No such file or
directory
Jul 29 11:26:28 hostabc kernel: FS-Cache: Loaded
Jul 29 11:26:28 hostxyz kernel: FS-Cache: Netfs 'nfs' registered for caching
The following should be your parse expression:
(\S+)\s+(\d+)\s(\d+):(\d+):(\d+)\s(\S+)\s(?:([^:\[]+)(?:\[(\d+)\])?:\s+)?(.+)
In the preceding example, some of the values that the parse expression captures are:
• (\S+): Multiple non-whitespace characters for the month
• (\d+): Multiple digits for the day
• (?:([^:\[]+): (Optional) One or more characters other than a colon or an opening bracket; this is the service name
• (.+): (Optional) Primary message content
Example 2
If you want to parse the following sample log entries:
####<Apr 27, 2014 4:01:42 AM PDT> <Info> <EJB> <host> <AdminServer> <[ACTIVE]
ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'>
<OracleSystemUser> <BEA1-13E2AD6CAC583057A4BD> <b3c34d62475d5b0b:6e1e6d7b:
143df86ae85:-8000-000000000000cac6> <1398596502577> <BEA-010227> <EJB Exception
occurred during invocation from home or business:
weblogic.ejb.container.internal.StatelessEJBHomeImpl@2f9ea244 threw exception:
javax.ejb.EJBException: what do i do: seems an odd quirk of the EJB spec. The
exception is:java.lang.StackOverflowError>
####<Jul 30, 2014 8:43:48 AM PDT> <Info> <RJVM> <host.com> <> <Thread-9> <> <> <>
<1406735028770> <BEA-000570> <Network Configuration for Channel "AdminServer" Listen
Address host.com:7002 (SSL) Public Address N/A Http Enabled true Tunneling Enabled
false Outbound Enabled false Admin Traffic Enabled true ResolveDNSName Enabled
false>
The following should be your parse expression:
####<(\p{Upper}\p{Lower}{2})\s+([\d]{1,2}),\s+([\d]{4})\s+([\d]{1,2}):([\d]{2}):([\d]{2})\s+(\p{Upper}{2})(?:\s+(\w+))?>\s+<(.*?)>\s+<(.*?)>\s+<(.*?)>\s+<(.*?)>\s+<(.*?)>\s+<(.*?)>\s+<(.*?)>\s+<(.*?)>\s+<\d{10}\d{3}>\s+<(.*?)>\s+<(.*?)(?:\n(.*))?>\s*
In the preceding example, some of the values that the parse expression captures are:
• (\p{Upper}\p{Lower}{2}): 3-letter short name for the month, with the first letter in uppercase followed by two lowercase letters
• ([\d]{1,2}): 1-or-2-digit day
• ([\d]{4}): 4-digit year
• ([\d]{1,2}): 1-or-2-digit hour
• ([\d]{2}): 2-digit minute
• ([\d]{2}): 2-digit second
• (\p{Upper}{2}): 2-letter AM/PM in uppercase
• (?:\s+(\w+)): (Optional, some entries may not return any value for this) Multiple alphanumeric characters for the time zone
• (.*?): (Optional, some entries may not return any value for this) One or more characters for the severity level; in this case <Info>
• (.*): Any additional details along with the message
Search Patterns
Some of the commonly used patterns are explained in the following table:
Pattern | Description | Example
. | Any character except a line break | d.f matches def, daf, dbf, and so on
* | Zero or more times | D*E*F* matches DDEEFF, DEF, DDFF, EEFF, and so on
? | Once or none; optional | colou?r matches both colour and color
+ | One or more times | Stage \w-\w+ matches Stage A-b1_1, Stage B-a2, and so on
{2} | Exactly two times | [\d]{2} matches 01, 11, 21, and so on
{1,2} | One to two times | [\d]{1,2} matches 1, 12, and so on
{3,} | Three or more times | [\w]{3,} matches ten, hello, h2134, and so on
[…] | One of the characters in the brackets | [AEIOU] matches one uppercase vowel
[x-y] | One of the characters in the range from x to y | [A-Z]+ matches ACT, ACTION, BAT, and so on
[^x] | One character that is not x | [^\d]{2} matches AA, BB, AC, and so on
[^x-y] | One of the characters not in the range from x to y | [^a-z]{2} matches A1, BB, B2, and so on
[\d\D] | One character that is a digit or a non-digit | [\d\D]+ matches any character, including new lines, which the regular dot doesn't match
\s | A whitespace character | (\S+)\s+(\d+) matches AA 123, a_ 221, and so on
\S | One character that is not a whitespace character | (\S+) matches abcd, ABC, A1B2C3, and so on
\n | A new line | (\d)\n(\w) matches 1 followed by A on the next line
\w | A word character (letter, digit, or underscore) | [\w-\w\w\w] matches a-123, 1-aaa, and so on
\p{Lower} | A lowercase letter | \p{Lower}{2} matches aa, ab, ac, bb, and so on
\p{Upper} | An uppercase letter | \p{Upper} matches A, B, C, and so on
\ followed by ?, [], *, or . | Escape character; use it to match the characters after \ as literals | \? matches a literal ?
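If you want to experiment with the patterns in this table, you can test them directly. The following minimal Java sketch (illustrative only; the class name is an assumption) checks a few of the examples from the table.

import java.util.regex.Pattern;

public class PatternExamples {
    public static void main(String[] args) {
        // "colou?r" matches both spellings because "?" makes the "u" optional.
        System.out.println(Pattern.matches("colou?r", "color"));   // true
        System.out.println(Pattern.matches("colou?r", "colour"));  // true

        // "[\d]{1,2}" matches one to two digits.
        System.out.println(Pattern.matches("[\\d]{1,2}", "7"));    // true
        System.out.println(Pattern.matches("[\\d]{1,2}", "12"));   // true
        System.out.println(Pattern.matches("[\\d]{1,2}", "123"));  // false

        // "[^\d]{2}" matches two characters that are not digits.
        System.out.println(Pattern.matches("[^\\d]{2}", "AA"));    // true
        System.out.println(Pattern.matches("[^\\d]{2}", "A1"));    // false
    }
}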
E
Entity Types Modeled in Oracle Log
Analytics
This section lists the types of entities that are supported by Oracle Log Analytics
today.
Note:
This list of entities evolves as new entity types are added.
Users can also create their own entity types.
Entity Types
Amazon Web Services (S3)
Apache Hadoop
Apache Hive
Apache Kafka
Apache Zookeeper
Automatic Storage Management
Automatic Storage Management Instance
Cassandra
Cisco ASA
Cisco Ethernet Switch
Cluster
Container
DB2
Docker Container
Docker Engine
EMC VMAX
EMC VNX
F5 BigIP
Generic System
Group
Host
Host (AIX)
Host (HP-UX)
Host (Linux)
Host (Solaris)
Host (Windows)
Hosted Target
Hyper-V
IBM WebSphere
IBM WebSphere MQ
J2EE Application
Java Application Server
Java EE Application Server
Juniper SRX
LDAP Server
Listener
Load Balancer
Microsoft .NET Server
Microsoft AD
Microsoft Exchange
Microsoft IIS
Microsoft Internet Information Services
Microsoft Internet Information Services Web Site
Microsoft SharePoint
Microsoft SQL Server Database
Microsoft SQL Server Database Instance
Middleware Cluster
Middleware Domain
MongoDB
MySQL Database
MySQL Database Instance
NetApp FAS
NetApp FlexPod
NGINX
Node.js
OpenStack
Operating System
Oracle Access Management Cluster
Oracle Access Management Server
Oracle Business Intelligence (BI)
Oracle Cluster Node
Oracle Clusterware
Oracle Database
Oracle Database Cluster Listener
Oracle Database Instance
Oracle Database Listener
Oracle E-Business Suite
Oracle Exadata Database Machine
Oracle Exadata Storage Server
Oracle Exadata Storage Server Grid
Oracle Hadoop Cluster
Oracle Hadoop HDFS
Oracle Hadoop Yarn
Oracle Home
Oracle HTTP Server
Oracle ILOM Server
Oracle InfiniBand Switch
Oracle Internet Directory
Oracle JD Edwards
Oracle PeopleSoft Application Server
Oracle PeopleSoft Internet Architecture
Oracle PeopleSoft Process Scheduler
Oracle PeopleSoft System
Oracle Power Distribution Unit (PDU)
Oracle Rack
Oracle Utilities Customer Care and Billing
Oracle VM
PostgreSQL
Ruby on Rails
SAP System
SAPNW Application Server Instance
SAPNW Application Server JAVA Server Process
Service Bus
Siebel Component
Siebel Enterprise
Siebel Server
SOA Infrastructure
Storage Manager
Storage Server
Switch
Sybase Adaptive Server Enterprise
System
Target
Tibco
Tomcat
Traffic Director Configuration
Traffic Director Instance
Virtual Platform
Virtual Server
VMware
Web Application Server
WebLogic Cluster
WebLogic Domain
WebLogic Server
Note:
The preceding list is evolving.
F
SQL Query Guidelines
Use the SQL queries that extract log data judiciously.
Follow these guidelines when writing SQL queries for extracting log data:
• Use only read-only queries.
• The credentials provided to execute the queries should have only the privileges required to extract the necessary data.
• Consider query performance, because a long-running query can impact both the target database and other software running on the same host.
• The query should include at least one column that can be used to order the database records, typically a sequence number or a timestamp column. Every new entry should have a value for this column that is equal to or greater than the one in older records. Because the SQL query runs at regular intervals to extract new data, Oracle Log Analytics uses this column to identify the records added since the previous collection. This column should be indexed to avoid full table scans. A minimal sketch of such a query appears after this list.
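As an illustration of the ordering-column guideline, the following is a minimal, hypothetical JDBC sketch. The table name audit_log, the column log_time, and the connection details are assumptions for illustration only, not part of the product.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class IncrementalLogExtract {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; use a low-privilege, read-only account.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "log_reader", "password")) {

            // Timestamp of the last record collected in the previous run.
            Timestamp lastCollected = Timestamp.valueOf("2018-03-01 00:00:00");

            // Read-only query ordered by an indexed timestamp column, so that
            // only records newer than the previous collection are returned.
            String sql = "SELECT log_time, message FROM audit_log "
                       + "WHERE log_time >= ? ORDER BY log_time";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, lastCollected);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getTimestamp("log_time")
                                + " " + rs.getString("message"));
                    }
                }
            }
        }
    }
}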
G
List of Non-Facetable Fields
The following fields can’t be filtered using the Data Palette.
• Alert Raised Time
• Call Stack Trace
• Data Received
• Data Sent
• Data
• Event End Time
• Error Stack Dump
• Event Generation Time
• First Event Time
• Message
• Resource Limit Settings
• SQL Bind Variables
• SQL Statement
• Stack Trace
• Supplemental Detail
• Supplemental Filename