IBM SPSS Modeler 17 Algorithms Guide
Note: Before using this information and the product it supports, read the general information
under “Notices” on p. 403.
This edition applies to IBM SPSS Modeler 17 and to all subsequent releases and modifications
until otherwise indicated in new editions.
Adobe product screenshot(s) reprinted with permission from Adobe Systems Incorporated.
Microsoft product screenshot(s) reprinted with permission from Microsoft Corporation.
Licensed Materials - Property of IBM
© Copyright IBM Corporation 1994, 2015.
U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Preface
IBM® SPSS® Modeler is the IBM Corp. enterprise-strength data mining workbench. SPSS
Modeler helps organizations to improve customer and citizen relationships through an in-depth
understanding of data. Organizations use the insight gained from SPSS Modeler to retain
profitable customers, identify cross-selling opportunities, attract new customers, detect fraud,
reduce risk, and improve government service delivery.
SPSS Modeler’s visual interface invites users to apply their specific business expertise, which
leads to more powerful predictive models and shortens time-to-solution. SPSS Modeler offers
many modeling techniques, such as prediction, classification, segmentation, and association
detection algorithms. Once models are created, IBM® SPSS® Modeler Solution Publisher
enables their delivery enterprise-wide to decision makers or to a database.
About IBM Business Analytics
IBM Business Analytics software delivers complete, consistent and accurate information that
decision-makers trust to improve business performance. A comprehensive portfolio of business
intelligence, predictive analytics, financial performance and strategy management, and analytic
applications provides clear, immediate and actionable insights into current performance and the
ability to predict future outcomes. Combined with rich industry solutions, proven practices and
professional services, organizations of every size can drive the highest productivity, confidently
automate decisions and deliver better results.
As part of this portfolio, IBM SPSS Predictive Analytics software helps organizations predict
future events and proactively act upon that insight to drive better business outcomes. Commercial,
government and academic customers worldwide rely on IBM SPSS technology as a competitive
advantage in attracting, retaining and growing customers, while reducing fraud and mitigating
risk. By incorporating IBM SPSS software into their daily operations, organizations become
predictive enterprises – able to direct and automate decisions to meet business goals and achieve
measurable competitive advantage. For further information or to reach a representative visit
http://www.ibm.com/spss.
Technical support
Technical support is available to maintenance customers. Customers may contact Technical
Support for assistance in using IBM Corp. products or for help installing in one of the
supported hardware environments. To reach Technical Support, see the IBM Corp. web site
at http://www.ibm.com/support. Be prepared to identify yourself, your organization, and your
support agreement when requesting assistance.
Contents
Adjusted Propensities Algorithms  1
    Model-Dependent Method  1
    General Purpose Method  1
Anomaly Detection Algorithm  3
    Overview  3
    Primary Calculations  3
        Notation  3
        Algorithm Steps  4
        Blank Handling  7
    Generated Model/Scoring  7
        Predicted Values  7
        Blank Handling  7
Apriori Algorithms  9
    Overview  9
    Deriving Rules  9
        Frequent Itemsets  9
        Generating Rules  10
        Blank Handling  11
        Effect of Options  11
    Generated Model/Scoring  12
        Predicted Values  12
        Confidence  12
        Blank Handling  12
Automated Data Preparation Algorithms  13
    Notation  13
    Date/Time Handling  14
    Univariate Statistics Collection  15
    Basic Variable Screening  17
    Checkpoint 1: Exit?  17
    Measurement Level Recasting  17
    Outlier Identification and Handling  18
    Missing Value Handling  19
    Continuous Predictor Transformations  20
        Z-score Transformation  20
        Min-Max Transformation  20
    Target Handling  21
    Bivariate Statistics Collection  22
    Categorical Variable Handling  25
        Reordering Categories  25
        Identify Highly Associated Categorical Features  26
        Supervised Merge  26
        P-value Calculations  27
        Unsupervised Merge  30
    Continuous Predictor Handling  31
        Supervised Binning  32
        Feature Selection and Construction  32
        Principal Component Analysis  33
        Correlation and Partial Correlation  34
        Discretization of Continuous Predictors  35
    Predictive Power  35
    References  36
Bayesian Networks Algorithms  37
    Bayesian Networks Algorithm Overview  37
    Primary Calculations  37
        Notation  37
        Handling of Continuous Predictors  38
        Feature Selection via Breadth-First Search  38
        Tree Augmented Naïve Bayes Method  40
        Markov Blanket Algorithms  43
        Blank Handling  47
    Model Nugget/Scoring  47
Binary Classifier Comparison Metrics  49
C5.0 Algorithms  51
    Scoring  51
Carma Algorithms  53
    Overview  53
    Deriving Rules  53
        Frequent Itemsets  53
        Generating Rules  54
        Blank Handling  55
        Effect of Options  55
    Generated Model/Scoring  56
        Predicted Values  56
        Confidence  56
        Blank Handling  57
C&RT Algorithms  59
    Overview of C&RT  59
    Primary Calculations  59
        Frequency and Case Weight Fields  59
        Model Parameters  60
        Blank Handling  61
        Effect of Options  62
    Secondary Calculations  68
        Risk Estimates  68
        Gain Summary  69
    Generated Model/Scoring  69
        Predicted Values  69
        Confidence  70
        Blank Handling  71
CHAID Algorithms  73
    Overview of CHAID  73
    Primary Calculations  73
        Frequency and Case Weight Fields  73
        Binning of Scale-Level Predictors  74
        Model Parameters  75
        Blank Handling  81
        Effect of Options  81
    Secondary Calculations  82
        Risk Estimates  82
        Gain Summary  83
    Generated Model/Scoring  84
        Predicted Values  84
        Confidence  85
        Blank Handling  85
Cluster Evaluation Algorithms  87
    Notation  87
    Goodness Measures  87
        Data Preparation  88
        Basic Statistics  88
        Silhouette Coefficient  89
        Sum of Squares Error (SSE)  89
        Sum of Squares Between (SSB)  89
    Predictor Importance  89
    References  91
COXREG Algorithms  93
    Cox Regression Algorithms  93
    Estimation  93
        Estimation of Beta  94
        Estimation of the Baseline Function  96
    Selection Statistics for Stepwise Methods  97
        Score Statistic  98
        Wald Statistic  98
        LR (Likelihood Ratio) Statistic  98
        Conditional Statistic  98
    Statistics  99
        Initial Model Information  99
        Model Information  99
        Information for Variables in the Equation  100
        Information for the Variables Not in the Equation  101
        Survival Table  101
    Plots  101
        Survival Plot  102
        Hazard Plot  102
        LML Plot  102
    Blank Handling  102
    Scoring  102
        Blank Handling  102
    References  102
Decision List Algorithms  105
    Algorithm Overview  105
    Terminology of Decision List Algorithm  105
    Main Calculations  106
        Notation  106
        Primary Algorithm  106
        Decision Rule Algorithm  107
        Decision Rule Split Algorithm  108
        Secondary Measures  111
    Blank Handling  111
    Generated Model/Scoring  111
        Blank Handling  111
DISCRIMINANT Algorithms  113
    Notation  113
    Basic Statistics  113
        Mean  113
        Variances  113
        Within-Groups Sums of Squares and Cross-Product Matrix (W)  114
        Total Sums of Squares and Cross-Product Matrix (T)  114
        Within-Groups Covariance Matrix  114
        Individual Group Covariance Matrices  114
        Within-Groups Correlation Matrix (R)  114
        Total Covariance Matrix  114
        Univariate F and Λ for Variable I  114
    Rules of Variable Selection  114
        Method = Direct  115
        Stepwise Variable Selection  115
        Ineligibility for Inclusion  115
    Computations During Variable Selection  116
        Tolerance  116
        F-to-Remove  116
        F-to-Enter  116
    Wilks’ Lambda for Testing the Equality of Group Means  116
    The Approximate F Test for Lambda (the “overall F”), also known as Rao’s R (Tatsuoka, 1971)  117
    Rao’s V (Lawley-Hotelling Trace) (Rao, 1952; Morrison, 1976)  117
    The Squared Mahalanobis Distance (Morrison, 1976) between groups a and b  117
    The F Value for Testing the Equality of Means of Groups a and b (Smallest F ratio)  117
    The Sum of Unexplained Variations (Dixon, 1973)  117
    Classification Functions  117
    Canonical Discriminant Functions  118
        Percentage of Between-Groups Variance Accounted for  118
        Canonical Correlation  118
        Wilks’ Lambda  119
        The Standardized Canonical Discriminant Coefficient Matrix D  119
        The Correlations Between the Canonical Discriminant Functions and the Discriminating Variables  119
        The Unstandardized Coefficients  119
        Tests For Equality Of Variance  120
    Blank Handling  121
    Generated model/scoring  121
        Cross-Validation (Leave-one-out classification)  122
        Blank Handling (discriminant analysis algorithms scoring)  123
    References  123
Ensembles Algorithms  125
    Bagging and Boosting Algorithms  125
        Notation  125
        Bootstrap Aggregation  126
        Bagging Model Measures  127
        Adaptive Boosting  128
        Stagewise Additive Modeling using Multiclass Exponential loss  129
        Boosting Model Measures  130
        References  130
    Very large datasets (pass, stream, merge) algorithms  130
        Pass  131
        Stream  132
        Merge  132
        Adaptive Predictor Selection  132
        Automatic Category Balancing  133
        Model Measures  134
        Scoring  136
    Ensembling model scores algorithms  136
        Notation  136
        Scoring  136
Factor Analysis/PCA Algorithms  139
    Overview  139
    Primary Calculations  139
        Factor Extraction  139
        Factor Rotation  145
        Factor Score Coefficients  151
        Blank Handling  151
    Secondary Calculations  152
        Field Statistics and Other Calculations  152
    Generated Model/Scoring  152
        Factor Scores  152
        Blank Handling  152
Feature Selection Algorithm  153
    Introduction  153
    Primary Calculations  153
        Screening  153
        Ranking Predictors  154
        Selecting Predictors  160
    Generated Model  160
GENLIN Algorithms  163
    Generalized Linear Models  163
        Notation  163
        Model  163
        Estimation  169
        Model Testing  177
        Blank handling  183
        Scoring  183
    References  184

Generalized linear mixed models algorithms  187
    Notation  187
    Model  188
        Fixed effects transformation  191
    Estimation  192
        Linear mixed pseudo model  192
        Iterative process  194
        Wald confidence intervals for covariance parameter estimates  195
        Statistics for estimates of fixed and random effects  196
    Testing  199
        Goodness of fit  199
        Tests of fixed effects  199
        Estimated marginal means  200
        Method for computing degrees of freedom  203
    Scoring  204
    Nominal multinomial distribution  206
        Notation  206
        Model  207
        Estimation  208
        Post-estimation statistics  209
        Testing  211
        Scoring  212
    Ordinal multinomial distribution  213
        Notation  213
        Model  214
        Estimation  216
        Post-estimation statistics  217
        Testing  218
        Scoring  219
    References  220
Imputation of Missing Values
223
Imputing Fixed Values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Imputing Random Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Imputing Values Derived from an Expression. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Imputing Values Derived from an Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
K-Means Algorithm
227
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Field Encoding . . . . . .
Model Parameters . . .
Blank Handling . . . . .
Effect of Options . . . .
Model Summary Statistics
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
227
229
230
230
231
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Predicted Cluster Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Distances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Kohonen Algorithms
233
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Field Encoding . . . . . .
Model Parameters . . .
Blank Handling . . . . .
Effect of Options . . . .
Generated Model/Scoring
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
233
234
236
236
237
Cluster Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
xiii
Logistic Regression Algorithms
239
Logistic Regression Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Multinomial Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Primary Calculations . . . . . . .
Secondary Calculations . . . . .
Stepwise Variable Selection .
Generated Model/Scoring . . .
Binomial Logistic Regression . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
239
244
246
251
251
Notation . . . . . . . . . . . . . . . . . . . . . .
Model . . . . . . . . . . . . . . . . . . . . . . . .
Maximum Likelihood Estimates (MLE)
Stepwise Variable Selection . . . . . . .
Statistics . . . . . . . . . . . . . . . . . . . . .
Generated Model/Scoring . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
..
251
252
252
253
256
261
KNN Algorithms
263
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Distance Metric . . . . . . . . . . . . . . . .
Crossvalidation for Selection of k . . .
Feature Selection . . . . . . . . . . . . . . .
Combined k and Feature Selection . .
Blank Handling . . . . . . . . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
264
265
265
266
266
Output Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Linear modeling algorithms
271
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Least squares estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
xiv
Model selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Forward stepwise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Best subsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Model evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Coefficients and statistical inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Predictor importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Neural Networks Algorithms
285
Multilayer Perceptron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Notation . . . . . . .
Architecture . . . .
Training . . . . . . .
Radial Basis Function .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
285
285
288
292
Notation . . . .
Architecture .
Training . . . .
Missing Values . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
292
293
293
295
...
...
...
...
Output Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Confidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
OPTIMAL BINNING Algorithms
299
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Simple MDLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Class Entropy . . . . . . . . . . . . . . .
Class Information Entropy . . . . . .
Information Gain . . . . . . . . . . . . .
MDLP Acceptance Criterion . . . .
Algorithm: BinaryDiscretization .
Algorithm: MDLPCut . . . . . . . . . .
Algorithm: SimpleMDLP . . . . . . .
Hybrid MDLP . . . . . . . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
..
..
..
299
300
300
300
301
301
302
302
Algorithm: EqualFrequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Algorithm: HybridMDLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
xv
Model Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Merging Sparsely Populated Bins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Predictor Importance Algorithms
305
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Variance Based Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
QUEST Algorithms
309
Overview of QUEST. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Frequency Weight Fields .
Model Parameters . . . . . .
Blank Handling . . . . . . . .
Effect of Options . . . . . . .
Secondary Calculations . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
309
310
313
315
318
Risk Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Gain Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Predicted Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Confidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Linear Regression Algorithms
321
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Notation . . . . . . . . . . . . .
Model Parameters . . . . . .
Automatic Field Selection
Blank Handling . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
xvi
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
321
321
323
325
Secondary Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Model Summary Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Field Statistics and Other Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Predicted Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Sequence Algorithm
327
Overview of Sequence Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Itemsets, Transactions, and Sequences . .
Sequential Patterns . . . . . . . . . . . . . . . . .
Adjacency Lattice . . . . . . . . . . . . . . . . . .
Mining for Frequent Sequences . . . . . . . .
Generating Sequential Patterns . . . . . . . .
Blank Handling . . . . . . . . . . . . . . . . . . . .
Secondary Calculations . . . . . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
..
..
327
329
330
331
333
334
334
Confidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Predicted Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Confidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Self-Learning Response Model Algorithms
337
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Naive Bayes Algorithms. .
Notation . . . . . . . . . . . . .
Naive Bayes Model . . . . .
Secondary Calculations . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
337
337
337
338
Model Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Updating the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Predicted Values and Confidences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Variable Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
xvii
Simulation algorithms
343
Simulation algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Distribution fitting . . . . . . . . . . . . . . . . . . . . . . . . . . .
Goodness of fit measures . . . . . . . . . . . . . . . . . . . . .
Anderson-Darling statistic with frequency weights . .
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulation algorithms: run simulation . . . . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
..
343
343
352
360
360
361
Generating correlated data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Sensitivity measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Support Vector Machine (SVM) Algorithms
367
Introduction to Support Vector Machine Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
SVM Algorithm Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
SVM Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
C-Support Vector Classification (C-SVC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
ε-Support Vector Regression (ε-SVR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Primary Calculations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Solving Quadratic Problems. .
Variable Scale . . . . . . . . . . . .
Model Building Algorithm . . .
Model Nugget/Scoring . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
369
370
370
377
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Time Series Algorithms
379
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Exponential Smoothing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
ARIMA and Transfer Function Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Outlier Detection in Time Series Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Notation . . . . . . . . . . . . . . . . . . . . . .
Definitions of Outliers . . . . . . . . . . . .
Estimating the Effects of an Outlier . .
Detection of Outliers . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
xviii
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
388
388
390
390
Goodness-of-Fit Statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Mean Squared Error . . . . . . . . . . . . . . . . . . .
Mean Absolute Percent Error . . . . . . . . . . . .
Maximum Absolute Percent Error . . . . . . . . .
Mean Absolute Error . . . . . . . . . . . . . . . . . . .
Maximum Absolute Error . . . . . . . . . . . . . . . .
Normalized Bayesian Information Criterion . .
R-Squared . . . . . . . . . . . . . . . . . . . . . . . . . . .
Stationary R-Squared . . . . . . . . . . . . . . . . . .
Expert Modeling . . . . . . . . . . . . . . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
..
..
..
..
392
392
392
392
392
392
392
392
393
Univariate Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Multivariate Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
TwoStep Cluster Algorithms
397
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Model Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Pre-cluster . . . . . . . . . . . . . . . . . . . .
Cluster . . . . . . . . . . . . . . . . . . . . . . .
Distance Measure . . . . . . . . . . . . . . .
Number of Clusters (auto-clustering)
Blank Handling . . . . . . . . . . . . . . . . . . . . .
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
..
..
..
..
..
397
398
398
399
400
Effect of Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Outlier Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Generated Model/Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Predicted Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Blank Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
xix
Appendix
A Notices
403
Bibliography
407
Index
413
xx
Adjusted Propensities Algorithms
Adjusted propensity scores are calculated as part of the process of building the model, and will
not be available otherwise. Once the model is built, it is then scored using data from the test or
validation partition, and a new model to deliver adjusted propensity scores is constructed by
analyzing the original model’s performance on that partition. Depending on the type of model,
one of two methods may be used to calculate the adjusted propensity scores.
Model-Dependent Method
For rule set and tree models, the following method is used:
1. Score the model on the test or validation partition.
2. Tree models. Calculate the frequency of each category at each tree node using the test/validation
partition, reflecting the distribution of the target value in the records scored to that node.
Rule set models. Calculate the support and confidence of each rule using the test/validation
partition, reflecting the model performance on the test/validation partition.
This results in a new rule set or tree model which is stored with the original model. Each time
the original model is applied to new data, the new model can subsequently be applied to the raw
propensity scores to generate the adjusted scores.
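The tree-model case of this method can be sketched as follows. The record format and the function name are illustrative assumptions, not Modeler's internal API; the point is that each node's category distribution is re-tallied from the test/validation partition, and those relative frequencies become the adjusted propensities.

```python
from collections import Counter, defaultdict

def adjusted_node_distributions(records):
    """Tally the target-category frequency at each tree node using
    test/validation records. Each record is (node_id, target_value).
    The per-node relative frequencies reflect the model's performance
    on the held-out partition."""
    counts = defaultdict(Counter)
    for node_id, target in records:
        counts[node_id][target] += 1
    return {
        node: {cat: c / sum(ctr.values()) for cat, c in ctr.items()}
        for node, ctr in counts.items()
    }

# Hypothetical test-partition records: (node reached, observed target)
records = [(1, "yes"), (1, "yes"), (1, "no"), (2, "no"), (2, "no")]
dist = adjusted_node_distributions(records)
# dist[1]["yes"] == 2/3, dist[2]["no"] == 1.0
```

The rule-set case is analogous: support and confidence are recomputed per rule from the same partition.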
General Purpose Method
For other models, the following method is used:
1. Score the model on the test or validation partition to compute predicted values and predicted
raw propensities.
2. Remove all records which have a missing value for the predicted or observed value.
3. Calculate the observed propensities as 1 for true observed values and 0 otherwise.
4. Bin records according to predicted raw propensity using 100 equal-count tiles.
5. Compute the mean predicted raw propensity and mean observed propensity for each bin.
6. Build a neural network with mean observed propensity as the target and predicted raw propensity
as a predictor. For the neural network settings:
Use a random seed, value 0
Use the "quick" training method
Stop after 250 cycles
Do not use the prevent overtraining option
Use expert mode
Quick Method Expert Options:
Use one hidden layer with 3 neurons and persistence set to 200
Learning Rates Expert Options:
Alpha 0.9
Initial Eta 0.3
High Eta 0.1
Eta decay 50
Low Eta 0.01
The result is a neural network model that attempts to map raw propensity to a more accurate
estimate which takes into account the original model’s performance on the testing or validation
partition. To calculate adjusted propensities at score time, this neural network is applied to the raw
propensities obtained from scoring the original model.
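Steps 2 through 5 of the general purpose method can be sketched as below. This is a minimal illustration, not Modeler's implementation: it stops at the per-bin mean pairs that would be fed to the neural network, and the function name and record layout are assumptions.

```python
import statistics

def bin_means(pred_raw, observed, n_bins=100):
    """Drop records with a missing prediction or outcome, code the
    observed propensity as 1.0/0.0, sort by predicted raw propensity,
    split into equal-count tiles, and return the (mean predicted raw
    propensity, mean observed propensity) pair for each tile."""
    pairs = [(p, 1.0 if y else 0.0)
             for p, y in zip(pred_raw, observed)
             if p is not None and y is not None]
    pairs.sort()                       # order by raw propensity
    n = len(pairs)
    size = max(1, n // n_bins)         # records per equal-count tile
    bins = []
    for i in range(0, n, size):
        chunk = pairs[i:i + size]
        bins.append((statistics.mean(p for p, _ in chunk),
                     statistics.mean(o for _, o in chunk)))
    return bins
```

In step 6, the resulting pairs serve as training data, with the mean observed propensity as the target and the mean predicted raw propensity as the sole predictor.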
Anomaly Detection Algorithm
Overview
The Anomaly Detection procedure searches for unusual cases based on deviations from the
norms of their cluster groups. The procedure is designed to quickly detect unusual cases for
data-auditing purposes in the exploratory data analysis step, prior to any inferential data analysis.
This algorithm is designed for generic anomaly detection; that is, the definition of an anomalous
case is not specific to any particular application, such as detection of unusual payment patterns
in the healthcare industry or detection of money laundering in the finance industry, in which the
definition of an anomaly can be well-defined.
Primary Calculations
Notation
The following notation is used throughout this chapter unless otherwise stated:
ID: The identity variable of each case in the data file.
n: The number of cases in the training data Xtrain.
Xok, k = 1, …, K: The set of input variables in the training data.
Mk, k ∈ {1, …, K}: If Xok is a continuous variable, Mk represents the grand mean, or average of the variable across the entire training data.
SDk, k ∈ {1, …, K}: If Xok is a continuous variable, SDk represents the grand standard deviation, or standard deviation of the variable across the entire training data.
XK+1: A continuous variable created in the analysis. It represents the percentage of variables (k = 1, …, K) that have missing values in each case.
Xk, k = 1, …, K: The set of processed input variables after the missing value handling is applied. For more information, see the topic “Modeling Stage” on p. 4.
H, or the bounds [Hmin, Hmax]: H is the pre-specified number of cluster groups to create. Alternatively, the bounds [Hmin, Hmax] can be used to specify the minimum and maximum numbers of cluster groups.
nh, h = 1, …, H: The number of cases in cluster h, based on the training data.
ph, h = 1, …, H: The proportion of cases in cluster h, based on the training data. For each h, ph = nh/n.
Mhk, k = 1, …, K+1, h = 1, …, H: If Xk is a continuous variable, Mhk represents the cluster mean, or average of the variable in cluster h based on the training data. If Xk is a categorical variable, it represents the cluster mode, or most popular categorical value of the variable in cluster h based on the training data.
SDhk, k ∈ {1, …, K+1}, h = 1, …, H: If Xk is a continuous variable, SDhk represents the cluster standard deviation, or standard deviation of the variable in cluster h based on the training data.
{nhkj}, k ∈ {1, …, K}, h = 1, …, H, j = 1, …, Jk: The frequency set {nhkj} is defined only when Xk is a categorical variable. If Xk has Jk categories, then nhkj is the number of cases in cluster h that fall into category j.
m: An adjustment weight used to balance the influence between continuous and categorical variables. It is a positive value with a default of 6.
VDIk, k = 1, …, K+1: The variable deviation index of a case is a measure of the deviation of the variable value Xk from its cluster norm.
© Copyright IBM Corporation 1994, 2015.
Anomaly Detection Algorithm
GDI: The group deviation index GDI of a case is the log-likelihood distance d(h, s), which is the sum of all of the variable deviation indices {VDIk, k = 1, …, K+1}.
anomaly index: The anomaly index of a case is the ratio of its GDI to the average GDI of the cluster group to which the case belongs.
variable contribution measure: The variable contribution measure of variable Xk for a case is the ratio of the VDIk to the case’s corresponding GDI.
pctanomaly or nanomaly: A pre-specified value pctanomaly determines the percentage of cases to be considered as anomalies. Alternatively, a pre-specified positive integer value nanomaly determines the number of cases to be considered as anomalies.
cutpointanomaly: A pre-specified cutpoint; cases with anomaly index values greater than cutpointanomaly are considered anomalous.
kanomaly: A pre-specified integer threshold 1 ≤ kanomaly ≤ K+1 determines the number of variables considered as the reasons that the case is identified as an anomaly.
Algorithm Steps
This algorithm is divided into three stages:
Modeling. Cases are placed into cluster groups based on their similarities on a set of input
variables. The clustering model used to determine the cluster group of a case and the sufficient
statistics used to calculate the norms of the cluster groups are stored.
Scoring. The model is applied to each case to identify its cluster group and some indices are
created for each case to measure the unusualness of the case with respect to its cluster group.
All cases are sorted by the values of the anomaly indices. The top portion of the case list is
identified as the set of anomalies.
Reasoning. For each anomalous case, the variables are sorted by its corresponding variable
deviation indices. The top variables, their values, and the corresponding norm values are presented
as the reasons why a case is identified as an anomaly.
Modeling Stage
This stage performs the following tasks:
1. Training Set Formation. Starting with the specified variables and cases, remove any case with
extremely large values (greater than 1.0E+150) on any continuous variable. If missing value
handling is not in effect, also remove cases with a missing value on any variable. Remove variables
with all constant nonmissing values or all missing values. The remaining cases and variables are
used to create the anomaly detection model. Statistics output to pivot tables by the procedure are
based on this training set, but variables saved to the dataset are computed for all cases.
2. Missing Value Handling (Optional). For each input variable Xok, k = 1, …, K, if Xok is a continuous
variable, use all valid values of that variable to compute the grand mean Mk and grand standard
deviation SDk. Replace the missing values of the variable by its grand mean. If Xok is a
categorical variable, combine all missing values into a “missing value” category. This category is
treated as a valid category. Denote the processed form of {Xok} by {Xk}.
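The two replacement rules above can be sketched as follows. This is an illustrative sketch, not SPSS Modeler code; missing values are represented as None and the function names are assumptions:

```python
from statistics import mean

def impute_continuous(values):
    """Replace missing (None) values with the grand mean of the valid values."""
    valid = [v for v in values if v is not None]
    grand_mean = mean(valid)
    return [grand_mean if v is None else v for v in values]

def impute_categorical(values, missing_label="<missing>"):
    """Combine all missing values into a single valid 'missing value' category."""
    return [missing_label if v is None else v for v in values]

print(impute_continuous([1.0, None, 3.0]))     # [1.0, 2.0, 3.0]
print(impute_categorical(["IL", None, "MA"]))  # ['IL', '<missing>', 'MA']
```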
3. Creation of Missing Value Pct Variable (Optional). A new continuous variable, XK+1, is created that
represents the percentage of variables (both continuous and categorical) with missing values in
each case.
4. Cluster Group Identification. The processed input variables {Xk, k = 1, …, K+1} are used to create
a clustering model. The two-step clustering algorithm is used with noise handling turned on (see
the TwoStep Cluster algorithm document for more information).
5. Sufficient Statistics Storage. The cluster model and the sufficient statistics for the variables by
cluster are stored for the Scoring stage:

The grand mean Mk and standard deviation SDk of each continuous variable are stored,
k ∈ {1, …, K+1}.

For each cluster h = 1, …, H, store the size nh. If Xk is a continuous variable, store the cluster
mean Mhk and standard deviation SDhk of the variable based on the cases in cluster h. If Xk is
a categorical variable, store the frequency nhkj of each category j of the variable based on the
cases in cluster h. Also store the modal category Mhk. These sufficient statistics will be used
in calculating the log-likelihood distance d(h, s) between a cluster h and a given case s.
Scoring Stage
This stage performs the following tasks on scoring (testing or training) data:
1. New Valid Category Screening. The scoring data should contain the input variables {Xok, k = 1, …,
K} in the training data. Moreover, the format of the variables in the scoring data should be the
same as those in the training data file during the Modeling Stage.
Cases in the scoring data are screened out if they contain a categorical variable with a valid
category that does not appear in the training data. For example, if Region is a categorical variable
with categories IL, MA and CA in the training data, a case in the scoring data that has a valid
category FL for Region will be excluded from the analysis.
2. Missing Value Handling (Optional). For each input variable Xok, if Xok is a continuous variable, use
all valid values of that variable to compute the grand mean Mk and grand standard deviation SDk.
Replace the missing values of the variable by its grand mean. If Xok is a categorical variable,
combine all missing values into a “missing value” category. This category is treated
as a valid category.
3. Creation of Missing Value Pct Variable (Optional depending on Modeling Stage). If XK+1 is created in
the Modeling Stage, it is also computed for the scoring data.
4. Assign Each Case to its Closest Non-Noise Cluster. The clustering model from the Modeling Stage
is applied to the processed variables of the scoring data file to create a cluster ID for each case.
Cases belonging to the noise cluster are reassigned to their closest non-noise cluster. See the
TwoStep Cluster algorithm document for more information on the noise cluster.
5. Calculate Variable Deviation Indices. Given a case s, the closest cluster h is found. The variable
deviation index VDIk of variable Xk is defined as the contribution dk(h, s) of the variable to its
log-likelihood distance d(h, s). The corresponding norm value is Mhk, which is the cluster sample
mean of Xk if Xk is continuous, or the cluster mode of Xk if Xk is categorical.
6. Calculate Group Deviation Index. The group deviation index GDI of a case is the log-likelihood
distance d(h, s), which is the sum of all the variable deviation indices {VDIk, k = 1, …, K+1}.
7. Calculate Anomaly Index and Variable Contribution Measures. Two additional indices are calculated
that are easier to interpret than the group deviation index and the variable deviation index.
The anomaly index of a case is an alternative to the GDI, which is computed as the ratio of the
case’s GDI to the average GDI of the cluster to which the case belongs. Increasing values of this
index correspond to greater deviations from the average and indicate better anomaly candidates.
A variable’s variable contribution measure of a case is an alternative to the VDI, which is
computed as the ratio of the variable’s VDI to the case’s GDI. This is the proportional contribution
of the variable to the deviation of the case. The larger the value of this measure, the greater
the variable’s contribution to the deviation.
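Steps 6 and 7 reduce to a few lines once the variable deviation indices and the cluster's average GDI are available. The sketch below is illustrative (the function name is an assumption) and ignores the zero-denominator situations discussed in the next section:

```python
def anomaly_scores(vdi, avg_cluster_gdi):
    """Given the variable deviation indices {VDI_k} of a case and the average
    GDI of its cluster, return (GDI, anomaly index, variable contributions)."""
    gdi = sum(vdi)                           # GDI = sum of all VDI_k
    anomaly_index = gdi / avg_cluster_gdi    # ratio to the cluster's average GDI
    contributions = [v / gdi for v in vdi]   # VDI_k / GDI for each variable
    return gdi, anomaly_index, contributions

gdi, ai, contrib = anomaly_scores([2.0, 1.0, 1.0], avg_cluster_gdi=2.0)
print(gdi, ai, contrib)   # 4.0 2.0 [0.5, 0.25, 0.25]
```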
Odd Situations
Zero Divided by Zero
The situation in which both the GDI of a case and the average GDI of its cluster are zero is
possible if the cluster is a singleton or is made up of identical cases and the case in question is
identical to those cases. Whether such a case is considered an anomaly depends on whether the
number of identical cases that make up the cluster is large or small. For example, suppose there
are 10 cases in the training data and two clusters result, one of which is a singleton (made up of
one case) while the other has nine cases. In this situation, the case in the singleton cluster should
be considered an anomaly, as it does not belong to the larger cluster. One way to calculate the
anomaly index in this situation is to set it as the ratio of average cluster size to the size of
cluster h, which is:

anomaly index = (n/H) / nh
Following the 10 cases example, the anomaly index for the case belonging to the singleton cluster
would be (10/2)/1 = 5, which should be large enough for the algorithm to catch it as an anomaly.
In this situation, the variable contribution measure is set to 1/(K+1), where (K+1) is the number of
processed variables in the analysis.
Nonzero Divided by Zero
The situation in which the GDI of a case is nonzero but the average GDI of the cluster that the case
belongs to is 0 is possible if the corresponding cluster is a singleton or is made up of identical cases
and the case in question is not the same as the identical cases. Suppose that case i belongs to cluster
h, which has a zero average GDI; that is, average(GDI)h = 0, but the GDI between case i and
cluster h is nonzero; that is, GDI(i, h) ≠ 0. One choice for the anomaly index calculation of case i
could be to set the denominator as the weighted average GDI over all other clusters if this value is
not 0; else set the calculation as the ratio of average cluster size to the size of cluster h. That is,

anomaly index(i) = GDI(i, h) / Σh′≠h [nh′/(n − nh)] · average(GDI)h′, if this weighted average is not 0
anomaly index(i) = (n/H) / nh, otherwise
This situation triggers a warning that the case is assigned to a cluster that is made up of identical
cases.
Reasoning Stage
Every case now has a group deviation index and anomaly index and a set of variable deviation
indices and variable contribution measures. The purpose of this stage is to rank the likely
anomalous cases and provide the reasons to suspect them of being anomalous.
1. Identify the Most Anomalous Cases. Sort the cases in descending order on the values of the anomaly
index. The top pctanomaly % (or alternatively, the top nanomaly) gives the anomaly list, subject
to the restriction that cases with an anomaly index less than or equal to cutpointanomaly are not
considered anomalous.
2. Provide Reasons for Considering a Case Anomalous. For each anomalous case, sort the variables by
their corresponding VDIk values in descending order. The top kanomaly variable names, their values
(of the corresponding original variables Xok), and the norm values are displayed as the reasons.
Blank Handling
Blanks and missing values are handled in model building as described in “Algorithm Steps ” on p.
4, based on user settings.
Generated Model/Scoring
The Anomaly Detection generated model can be used to detect anomalous records in new data
based on patterns found in the original training data. For each record scored, an anomaly score is
generated and a flag indicating anomaly status and/or the anomaly score are appended as new fields.
Predicted Values
For each record, the anomaly score is calculated as described in “Scoring Stage ” on p. 5, based on
the cluster model created when the model was built. If anomaly flags were requested, they are
determined as described in “Reasoning Stage ” on p. 7.
Blank Handling
In the generated model, blanks are handled according to the setting used in building the model.
For more information, see the topic “Scoring Stage ” on p. 5.
Apriori Algorithms
Overview
Apriori is an algorithm for extracting association rules from data. It constrains the search space
for rules by discovering frequent itemsets and only examining rules that are made up of frequent
itemsets (Agrawal and Srikant, 1994).
Apriori deals with items and itemsets that make up transactions. Items are flag-type conditions
that indicate the presence or absence of a particular thing in a specific transaction. An itemset is a
group of items which may or may not tend to co-occur within transactions.
IBM® SPSS® Modeler uses Christian Borgelt’s Apriori implementation. Full details on this
implementation can be obtained at
http://fuzzy.cs.uni-magdeburg.de/~borgelt/doc/apriori/apriori.html.
Deriving Rules
Apriori proceeds in two stages. First it identifies frequent itemsets in the data, and then it
generates rules from the table of frequent itemsets.
Frequent Itemsets
The first step in Apriori is to identify frequent itemsets. A frequent itemset is defined as an
itemset with support greater than or equal to the user-specified minimum support threshold smin.
The support of an itemset is the number of records in which the itemset is found divided by
the total number of records.
The algorithm begins by scanning the data and identifying the single-item itemsets (i.e.
individual items, or itemsets of length 1) that satisfy this criterion. Any single items that do
not satisfy the criterion are not considered further, because adding an infrequent item to an
itemset will always result in an infrequent itemset.
Apriori then generates larger itemsets recursively using the following steps:
E Generate a candidate set of itemsets of length k (containing k items) by combining existing
itemsets of length k − 1: for every possible pair of frequent itemsets p and q with length k − 1,
compare the first k − 2 items (in lexicographic order); if they are the same, and the last item in q is
(lexicographically) greater than the last item in p, add the last item in q to the end of p to create a
new candidate itemset with length k.
E Prune the candidate set by checking every subset of length k − 1 of each candidate itemset; all
such subsets must be frequent itemsets, or the candidate itemset is infrequent and is removed from
further consideration.
E Calculate the support of each itemset in the candidate set, as

support = r / N

where r is the number of records that match the itemset and N is the number of records in the
training data. (Note that this definition of itemset support is different from the definition used for
rule support.)
E Itemsets with support ≥ smin are added to the list of frequent itemsets.
E If any frequent itemsets of length k were found, and k is less than the user-specified maximum rule
size kmax, repeat the process to find frequent itemsets of length k + 1.
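The frequent-itemset loop above can be sketched as follows. This is an illustrative level-wise implementation of the join, prune, and support-counting steps, not Borgelt's optimized code:

```python
from itertools import combinations

def frequent_itemsets(transactions, s_min):
    """Level-wise discovery of frequent itemsets (support = fraction of records)."""
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]
    support = lambda items: sum(items <= t for t in sets) / n
    # length-1 itemsets that meet the minimum support threshold
    current = sorted({(i,) for t in sets for i in t if support(frozenset({i})) >= s_min})
    frequent = {c: support(frozenset(c)) for c in current}
    k = 2
    while current:
        candidates = []
        for p, q in combinations(current, 2):
            # join step: same first k-2 items, last item of q greater than last of p
            if p[:-1] == q[:-1] and p[-1] < q[-1]:
                cand = p + (q[-1],)
                # prune step: every (k-1)-subset must itself be frequent
                if all(sub in frequent for sub in combinations(cand, k - 1)):
                    candidates.append(cand)
        current = [c for c in candidates if support(frozenset(c)) >= s_min]
        frequent.update({c: support(frozenset(c)) for c in current})
        k += 1
    return frequent

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
fi = frequent_itemsets(txns, s_min=0.5)
print(fi[("a", "b")])   # 0.5
```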
Generating Rules
When all frequent itemsets have been identified, the algorithm extracts rules from the frequent
itemsets. For each frequent itemset L with length k > 1, the following procedure is applied:
E Calculate all subsets A of length k − 1 of the itemset such that all the fields in A are input fields
and all the other fields in the itemset (those that are not in A) are output fields. Call the latter
subset A′. (In the first iteration A′ is just one field, but in later iterations it can be multiple fields.)
E For each subset A, calculate the evaluation measure (rule confidence by default) for the rule
A ⇒ A′ as described below.
E If the evaluation measure is greater than the user-specified threshold, add the rule to the rule table,
and, if the length k′ of A is greater than 1, test all possible subsets of A with length k′ − 1.
Evaluation Measures
Apriori offers several evaluation measures for determining which rules to retain. The different
measures will emphasize different aspects of the rules, as detailed in the IBM® SPSS® Modeler
User’s Guide. Values are calculated based on the prior confidence and the posterior confidence,
defined as

prior confidence = c / N

and

posterior confidence = r / a
where c is the support of the consequent, a is the support of the antecedent, r is the support of
the conjunction of the antecedent and the consequent, and N is the number of records in the
training data.
Rule Confidence. The default evaluation measure for rules is simply the posterior confidence
of the rule, r / a.
Confidence Difference (Absolute Confidence Difference to Prior). This measure is based on the
simple difference of the posterior and prior confidence values, |r/a − c/N|.
Confidence Ratio (Difference of Confidence Quotient to 1). This measure is based on the ratio of
posterior confidence to prior confidence, 1 − min( (r/a)/(c/N), (c/N)/(r/a) ).
Information Difference (Information Difference to Prior). This measure is based on the information
gain criterion, similar to that used in building C5.0 trees. The calculation involves r, the rule
support; a, the antecedent support; c, the consequent support; N − a, the complement of the
antecedent support; and N − c, the complement of the consequent support.
Normalized Chi-square (Normalized Chi-squared Measure). This measure is based on the chi-squared
statistical test for independence of categorical data, and is calculated as

(N·r − a·c)² / (a·c·(N − a)·(N − c))
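The measures with simple closed forms follow directly from the prior and posterior confidence. In the sketch below, the confidence-ratio form 1 − min(posterior/prior, prior/posterior) is an assumption based on the measure's description ("Difference of Confidence Quotient to 1"), and the function name is illustrative:

```python
def rule_measures(r, a, c, N):
    """Evaluation measures for a rule antecedent => consequent.
    r: records matching antecedent and consequent together,
    a: antecedent support count, c: consequent support count, N: record count."""
    prior = c / N       # prior confidence
    posterior = r / a   # posterior (rule) confidence
    return {
        "confidence": posterior,
        "confidence_difference": abs(posterior - prior),
        # assumed reading of "Difference of Confidence Quotient to 1"
        "confidence_ratio": 1 - min(posterior / prior, prior / posterior),
    }

m = rule_measures(r=40, a=50, c=60, N=100)
print(m["confidence"])   # 0.8
```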
Blank Handling
Blanks are ignored by the Apriori algorithm. The algorithm will handle records containing blanks
for input fields, but such a record will not be considered to match any rule containing one or
more of the fields for which it has blank values.
Effect of Options
Minimum rule support/confidence. These values place constraints on which rules may be entered
into the table. Only rules whose support and confidence values exceed the specified values can be
entered into the rule table.
Maximum number of antecedents. This determines the maximum number of antecedents that will
be examined for any rule. When the number of conditions in the antecedent part of the rule equals
the specified value, the rule will not be specialized further.
Only true values for flags. If this option is selected, rules with values of false will not be considered
for either input or output fields.
Optimize Speed/Memory. This option controls the trade-off between speed of processing and
memory usage. Selecting Speed will cause Apriori to use condition values directly in the frequent
itemset table, and to load the transactions into memory, if possible. Selecting Memory will
cause Apriori to use pointers into a value table in the frequent itemset table. Using pointers in
the frequent itemset table reduces the amount of memory required by the algorithm for large
problems, but it also involves some additional work to reference and dereference the pointers
during model building. The Memory option also causes Apriori to process transactions from
the file rather than loading them into memory.
Generated Model/Scoring
The Apriori algorithm generates an unrefined rule node. To create a model for scoring new
data, the unrefined rule node must be refined to generate a ruleset node. Details of scoring for
generated ruleset nodes are given below.
Predicted Values
Predicted values are based on the rules in the ruleset. When a new record is scored, it is compared
to the rules in the ruleset. How the prediction is generated depends on the user’s setting for
Ruleset Evaluation in the stream options.

Voting. This method attempts to combine the predictions of all of the rules that apply to the
record. For each record, all rules are examined and each rule that applies to the record is used
to generate a prediction. The sum of confidence figures for each predicted value is computed,
and the value with the greatest confidence sum is chosen as the final prediction.

First hit. This method simply tests the rules in order, and the first rule that applies to the record
is the one used to generate the prediction.
There is a default rule, which specifies an output value to be used as the prediction for records
that don’t trigger any other rules from the ruleset. For rulesets derived from decision trees, the
value for the default rule is the modal (most prevalent) output value in the overall training data.
For association rulesets, the default value is specified by the user when the ruleset is generated
from the unrefined rule node.
Confidence
Confidence calculations also depend on the user’s Ruleset Evaluation stream options setting.

Voting. The confidence for the final prediction is the sum of the confidence values for rules
triggered by the current record that give the winning prediction divided by the number of rules
that fired for that record.

First hit. The confidence is the confidence value for the first rule in the ruleset triggered by
the current record.
If the default rule is the only rule that fires for the record, its confidence is set to 0.5.
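The two evaluation methods can be sketched as follows. The rule representation as (predicate, value, confidence) tuples is illustrative, not the internal ruleset format:

```python
def score(record, rules, default_value, method="voting"):
    """Score a record against a ruleset of (predicate, value, confidence) rules."""
    fired = [(v, conf) for pred, v, conf in rules if pred(record)]
    if not fired:
        return default_value, 0.5         # only the default rule fires
    if method == "first_hit":
        return fired[0]                   # first rule that applies wins
    # voting: sum confidences per predicted value, pick the largest sum;
    # reported confidence is that sum divided by the number of rules fired
    totals = {}
    for v, conf in fired:
        totals[v] = totals.get(v, 0.0) + conf
    winner = max(totals, key=totals.get)
    return winner, totals[winner] / len(fired)

rules = [
    (lambda r: r["age"] > 30, "buy", 0.9),
    (lambda r: r["income"] > 50, "buy", 0.6),
    (lambda r: r["age"] > 30, "skip", 0.7),
]
print(score({"age": 40, "income": 60}, rules, "skip"))   # ('buy', 0.5)
```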
Blank Handling
Blanks are ignored by the algorithm. The algorithm will handle records containing blanks for
input fields, but such a record will not be considered to match any rule containing one or more of
the fields for which it has blank values.
Automated Data Preparation
Algorithms
The goal of automated data preparation is to prepare a dataset so as to generally improve the
training speed, predictive power, and robustness of models fit to the prepared data.
These algorithms do not assume which models will be trained post-data preparation. At the end
of automated data preparation, we output the predictive power of each recommended predictor,
which is computed from a linear regression or naïve Bayes model, depending upon whether the
target is continuous or categorical.
Notation
The following notation is used throughout this chapter unless otherwise stated:
X: A continuous or categorical variable.
xi: Value of the variable X for case i.
fi: Frequency weight for case i. Non-integer positive values are rounded to the nearest integer. If there is no frequency weight variable, then all fi = 1. If the frequency weight of a case is zero, negative or missing, then this case will be ignored.
wi: Analysis weight for case i. If there is no analysis weight variable, then all wi = 1. If the analysis weight of a case is zero, negative or missing, then this case will be ignored.
n: Number of cases in the dataset.
The weighted number of valid values of X: Σi fi · I(xi is not missing), where I(expression) is the indicator function taking value 1 when the expression is true, 0 otherwise.
The analysis-weighted number of valid values of X: Σi fi wi · I(xi is not missing).
The weighted number of cases where X and Y are both valid: Σi fi · I(xi and yi are not missing), and its analysis-weighted counterpart Σi fi wi · I(xi and yi are not missing).
The mean of variable X: the fi wi-weighted average of xi over the cases where xi is not missing, and its pairwise counterpart over the cases where xi and yi are both not missing.
A note on missing values
Listwise deletion is used in the following sections:

“Univariate Statistics Collection ” on p. 15

“Basic Variable Screening ” on p. 17

“Measurement Level Recasting ” on p. 17

“Missing Value Handling ” on p. 19

“Outlier Identification and Handling ” on p. 18

“Continuous Predictor Transformations ” on p. 20

“Target Handling ” on p. 21

“Reordering Categories ” on p. 25

“Unsupervised Merge ” on p. 30
Pairwise deletion is used in the following sections:

“Bivariate Statistics Collection ” on p. 22

“Supervised Merge ” on p. 26

“Supervised Binning ” on p. 32

“Feature Selection and Construction ” on p. 32

“Predictive Power ” on p. 35
A note on frequency weight and analysis weight
The frequency weight variable is treated as a case replication weight. For example if a case has
a frequency weight of 2, then this case will count as 2 cases.
The analysis weight adjusts the variance of cases. For example, if a case has an analysis weight
wi, then we assume that the variance of a variable X for that case is σ²/wi.
Frequency weights and analysis weights are used in automated preparation of other variables, but
are themselves left unchanged in the dataset.
Date/Time Handling
Date Handling
If there is a date variable, we extract the date elements (year, month and day) as ordinal variables.
If requested, we also calculate the number of elapsed days/months/years since the user-specified
reference date (default is the current date). Unless specified by the user, the “best” unit of duration
is chosen as follows:
1. If the minimum number of elapsed days is less than 31, then we use days as the best unit.
2. If the minimum number of elapsed days is less than 366 but larger than or equal to 31, we use
months as the best unit. The number of months between two dates is calculated based on average
number of days in a month (30.4375): months = days / 30.4375.
3. If the minimum number of elapsed days is larger than or equal to 366, we use years as the best
unit. The number of years between two dates is calculated based on average number of days in a
year (365.25): years = days / 365.25.
Once the date elements are extracted and the duration is obtained, then the original date variable
will be excluded from the rest of the analysis.
Time Handling
If there is a time variable, we extract the time elements (second, minute and hour) as ordinal
variables. If requested, we also calculate the number of elapsed seconds/minutes/hours since
the user-specified reference time (default is the current time). Unless specified by the user, the
“best” unit of duration is chosen as follows:
1. If the minimum number of elapsed seconds is less than 60, then we use seconds as the best unit.
2. If the minimum number of elapsed seconds is larger than or equal to 60 but less than 3600, we
use minutes as the best unit.
3. If the minimum number of elapsed seconds is larger than or equal to 3600, we use hours as the
best unit.
Once the time elements are extracted and the duration is obtained, the original time variable
will be excluded from the rest of the analysis.
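The unit-selection rules for dates and times can be sketched as follows; the function names are illustrative, and the divisors are the averages stated in the text:

```python
def best_date_unit(min_elapsed_days):
    """Pick the duration unit for date differences using the documented rules."""
    if min_elapsed_days < 31:
        return "days", min_elapsed_days
    if min_elapsed_days < 366:
        return "months", min_elapsed_days / 30.4375  # average days per month
    return "years", min_elapsed_days / 365.25        # average days per year

def best_time_unit(min_elapsed_seconds):
    """Pick the duration unit for time differences using the documented rules."""
    if min_elapsed_seconds < 60:
        return "seconds", min_elapsed_seconds
    if min_elapsed_seconds < 3600:
        return "minutes", min_elapsed_seconds / 60
    return "hours", min_elapsed_seconds / 3600

print(best_time_unit(7200))   # ('hours', 2.0)
```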
Univariate Statistics Collection
Continuous Variables
For each continuous variable, we calculate the following statistics:
 Number of missing values: the weighted count of cases for which xi is missing.
 Number of valid values: the weighted count of cases for which xi is not missing.
 Minimum value: the smallest valid xi.
 Maximum value: the largest valid xi.
 Mean, standard deviation, skewness. (see below)
 The number of distinct values I.
 The number of cases ci for each distinct value.
 Median: If the distinct values of X are sorted in ascending order, the median is the smallest
distinct value whose cumulative weighted case count reaches half of the weighted count of
valid values.
Note: If the number of distinct values is larger than a threshold (default is 5), we stop updating
the number of distinct values and the number of cases for each distinct value. Also we do not
calculate the median.
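The median computation from the accumulated distinct-value counts can be sketched as follows. This is an illustrative helper, not SPSS Modeler code; the convention that the cumulative count must reach half of the total is an assumption about tie handling:

```python
def median_from_counts(value_counts):
    """Median of a variable given its distinct values and (weighted) case counts.
    Assumes the median is the smallest value whose cumulative count reaches
    half of the total count (tie-handling convention is an assumption)."""
    items = sorted(value_counts.items())   # distinct values in ascending order
    total = sum(count for _, count in items)
    cumulative = 0
    for value, count in items:
        cumulative += count
        if cumulative >= total / 2:
            return value

print(median_from_counts({1: 2, 2: 3, 5: 2}))   # 2
```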
Categorical Numeric Variables
For each categorical numeric variable, we calculate the following statistics:
 Number of missing values: the weighted count of cases for which xi is missing.
 Number of valid values: the weighted count of cases for which xi is not missing.
 Minimum value (only for ordinal variables).
 Maximum value (only for ordinal variables).
 The number of categories.
 The counts of each category.
 Mean, standard deviation, skewness (only for ordinal variables). (see below)
 Mode (only for nominal variables). If several values share the greatest frequency of
occurrence, then the mode with the smallest value is used.
 Median (only for ordinal variables): If the distinct values of X are sorted in ascending order,
the median is the smallest category whose cumulative weighted case count reaches half of
the weighted count of valid values.
Notes:
1. If an ordinal predictor has more categories than a specified threshold (default 10), we stop
updating the number of categories and the number of cases for each category. Also we do not
calculate mode and median.
2. If a nominal predictor has more categories than a specified threshold (default 100), we stop
collecting statistics and just store the information that the variable had more than threshold
categories.
Categorical String Variables
For each string variable, we calculate the following statistics:
 Number of missing values: the weighted count of cases for which xi is missing.
 Number of valid values: the weighted count of cases for which xi is not missing.
 The number of categories.
 Counts of each category.
 Mode: If several values share the greatest frequency of occurrence, then the mode with the
smallest value is used.
Note: If a string predictor has more categories than a specified threshold (default 100), we stop
collecting statistics and just store the information that the predictor had more than threshold
categories.
Mean, Standard Deviation, Skewness
We calculate mean, standard deviation and skewness in a single data pass by updating moments.
1. Start with the accumulated weight and the first three moments set to 0.
2. For j = 1, …, n, if xj is not missing, add the case’s weight fj wj to the accumulated weight and
update the accumulated first, second and third moments of the valid values.
3. After the last case has been processed, compute:
Mean: the accumulated first moment.
Standard deviation: the square root of the accumulated second central moment.
Skewness: the accumulated third central moment divided by the cube of the standard deviation.
If the variable has too few valid values or zero standard deviation, then skewness is not calculated.
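The one-pass moment update can be sketched as follows. This is a generic sketch that accumulates weighted raw moments and derives the central moments at the end; the exact SPSS Modeler update formulas (and any bias corrections) may differ:

```python
import math

def weighted_moments(values, freq_weights, analysis_weights):
    """Mean, standard deviation and skewness from a single data pass that
    accumulates weighted raw moments (population moments, for illustration)."""
    W = S1 = S2 = S3 = 0.0
    for x, f, w in zip(values, freq_weights, analysis_weights):
        if x is None:                 # missing values are skipped
            continue
        fw = f * w
        W += fw
        S1 += fw * x
        S2 += fw * x * x
        S3 += fw * x ** 3
    mean = S1 / W
    m2 = S2 / W - mean ** 2                          # 2nd central moment
    m3 = S3 / W - 3 * mean * S2 / W + 2 * mean ** 3  # 3rd central moment
    sd = math.sqrt(max(m2, 0.0))                     # guard rounding error
    skew = m3 / sd ** 3 if sd > 0 else None          # undefined when sd = 0
    return mean, sd, skew

mean, sd, skew = weighted_moments([1.0, 2.0, 3.0, None], [1] * 4, [1] * 4)
```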
Basic Variable Screening
1. If the percent of missing values is greater than a threshold (default is 50%), then exclude the
variable from subsequent analysis.
2. For continuous variables, if the maximum value is equal to minimum value, then exclude the
variable from subsequent analysis.
3. For categorical variables, if the mode contains more cases than a specified percentage (default
is 95%), then exclude the variable from subsequent analysis.
4. If a string variable has more categories than a specified threshold (default is 100), then exclude the
variable from subsequent analysis.
Checkpoint 1: Exit?
This checkpoint determines whether the algorithm should be terminated. If, after the screening
step:
1. The target (if specified) has been removed from subsequent analysis, or
2. All predictors have been removed from subsequent analysis,
then terminate the algorithm and generate an error.
Measurement Level Recasting
For each continuous variable, if the number of distinct values is less than a threshold (default
is 5), then it is recast as an ordinal variable.
For each numeric ordinal variable, if the number of categories is greater than a threshold (default
is 10), then it is recast as a continuous variable.
Note: The continuous-to-ordinal threshold must be less than the ordinal-to-continuous threshold.
Outlier Identification and Handling
In this section, we identify outliers in continuous variables and then set the outlying values to a
cutoff or to a missing value. The identification is based on the robust mean and robust standard
deviation which are estimated by supposing that the percentage of outliers is no more than 5%.
Identification
1. Compute the mean and standard deviation from the raw data, and split the continuous variable
into non-intersecting intervals that cover its range, with two open-ended tail intervals at the
extremes.
2. Calculate univariate statistics (the weighted number of cases) in each interval.
3. Start with the full range of intervals and with the proportion of trimmed cases set to 0.
4. Between the two current tail intervals, find the one with the least number of cases.
5. Check whether the proportion of cases that would be trimmed after removing that tail interval is
still less than a threshold (default is 0.05). If it is, remove the interval from the current range
and go to step 4; otherwise, go to step 6.
6. Compute the robust mean rmean and robust standard deviation rsd within the current range.
See below for details.
7. If a value xi satisfies xi < rmean − cutoff · rsd or xi > rmean + cutoff · rsd, where cutoff is a
positive number (default is 3), then xi is detected as an outlier.
Handling
Outliers will be handled using one of the following methods:
 Trim outliers to cutoff values. If xi > rmean + cutoff · rsd, replace it by rmean + cutoff · rsd;
if xi < rmean − cutoff · rsd, replace it by rmean − cutoff · rsd.
 Set outliers to missing values.
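Given the robust mean and robust standard deviation, the two handling options can be sketched as follows (an illustrative sketch; the function and parameter names are not from SPSS Modeler, and missing values are represented as None):

```python
def handle_outliers(values, robust_mean, robust_sd, cutoff=3.0, method="trim"):
    """Flag values outside robust_mean +/- cutoff * robust_sd and either trim
    them to the cutoff value or set them to missing (None)."""
    low = robust_mean - cutoff * robust_sd
    high = robust_mean + cutoff * robust_sd
    out = []
    for x in values:
        if x is None or low <= x <= high:
            out.append(x)                          # not an outlier
        elif method == "trim":
            out.append(low if x < low else high)   # trim to the cutoff value
        else:                                      # method == "missing"
            out.append(None)
    return out

print(handle_outliers([0.0, 5.0, 100.0], robust_mean=2.0, robust_sd=1.0))
# [0.0, 5.0, 5.0]
```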
Update Univariate Statistics
After outlier handling, we perform a data pass to calculate univariate statistics for each continuous
variable, including the number of missing values, minimum, maximum, mean, standard deviation,
skewness, and number of outliers.
Robust Mean and Standard Deviation
The robust mean and robust standard deviation within the selected range are calculated as the
weighted mean and weighted standard deviation of the cases whose values fall within that range.
Missing Value Handling
Continuous variables. Missing values are replaced by the mean, and the following statistics are
updated:
 Standard deviation and skewness: recomputed to account for the imputed values, which are
equal to the mean and therefore add no deviation.
 The number of missing values: 0.
 The number of valid values: the total weighted number of cases, since every case now has a
valid value.
Ordinal variables. Missing values are replaced by the median, and the following statistics are
updated:
 The number of cases in the median category: the original number of cases in the median
category plus the number of imputed (previously missing) cases.
 The number of missing values: 0.
 The number of valid values: the total weighted number of cases.
Nominal variables. Missing values are replaced by the mode, and the following statistics are
updated:
 The number of cases in the modal category: the original number of cases in the modal
category plus the number of imputed (previously missing) cases.
 The number of missing values: 0.
 The number of valid values: the total weighted number of cases.
Continuous Predictor Transformations
We transform a continuous predictor so that it has the user-specified mean x̄_spec (default 0) and standard deviation sd_spec (default 1) using the z-score transformation, or the user-specified minimum min_spec (default 0) and maximum max_spec (default 100) using the min-max transformation.
Z-score Transformation
Suppose a continuous variable has mean x̄ and standard deviation sd. The z-score transformation is

x*_i = sd_spec × (x_i − x̄)/sd + x̄_spec

where x*_i is the transformed value of continuous variable X for case i.
Note that the analysis weight is not taken into account in the rescaling formula, so when analysis weights are specified the rescaled values need not exactly match the requested mean and standard deviation.
Update univariate statistics
After a z-score transformation, the following univariate statistics are updated:

Number of missing values: unchanged.

Number of valid values: unchanged.

Minimum value: sd_spec × (min − x̄)/sd + x̄_spec.

Maximum value: sd_spec × (max − x̄)/sd + x̄_spec.

Mean: x̄_spec.

Standard deviation: sd_spec.

Skewness: unchanged.
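The z-score rescaling can be sketched in a few lines. This is an unweighted illustration (the guide's weighted statistics are omitted), and the function name is an assumption.

```python
import math

def zscore_transform(xs, target_mean=0.0, target_sd=1.0):
    """Rescale xs so the sample mean and sd become the requested values."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return [target_sd * (x - mean) / sd + target_mean for x in xs]
```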
Min-Max Transformation
Suppose a continuous variable has a minimum value min and a maximum value max. The min-max transformation is

x*_i = (max_spec − min_spec) × (x_i − min)/(max − min) + min_spec

where x*_i is the transformed value of continuous variable X for case i.
Update univariate statistics
After a min-max transformation, the following univariate statistics are updated:

The number of missing values: unchanged.

The number of valid values: unchanged.

Minimum value: min_spec.

Maximum value: max_spec.

Mean: (max_spec − min_spec) × (x̄ − min)/(max − min) + min_spec.

Standard deviation: (max_spec − min_spec)/(max − min) × sd.

Skewness: unchanged.
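The min-max mapping is a single linear rescale; a minimal sketch (function name assumed):

```python
def minmax_transform(xs, new_min=0.0, new_max=100.0):
    """Linearly map [min(xs), max(xs)] onto [new_min, new_max]."""
    lo, hi = min(xs), max(xs)
    scale = (new_max - new_min) / (hi - lo)
    return [scale * (x - lo) + new_min for x in xs]
```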
Target Handling
Nominal Target
For a nominal target, we rearrange categories from lowest to highest counts. If there is a tie on
counts, then ties will be broken by ascending sort or lexical order of the data values.
Continuous Target
The transformation proposed by Box and Cox (1964) transforms a continuous variable into one
that is more normally distributed. We apply the Box-Cox transformation followed by the z-score
transformation so that the rescaled target has the user-specified mean and standard deviation.
Box-Cox transformation. This transforms a non-normal variable Y to a more normally distributed variable:

y_i(λ) = ((y_i + c)^λ − 1)/λ if λ ≠ 0, and y_i(λ) = ln(y_i + c) if λ = 0

where y_i are observations of variable Y, and c is a constant chosen from the data such that all values y_i + c are positive.
The parameter λ is selected to maximize the log-likelihood function:

L(λ) = −(n/2) ln(s²(λ)) + (λ − 1) Σ_j f_j ln(y_j + c)

where s²(λ) is the variance of the transformed values y_j(λ) and n = Σ_j f_j.
We perform a grid search over a user-specified finite set [a,b] with increment s. By default a=−3,
b=3, and s=0.5.
The algorithm can be described as follows:
1. Compute λ_j = a + (j − 1) × s, where j is an integer such that a ≤ λ_j ≤ b.
2. For each λ_j, compute the following statistics of the transformed values:
Mean: x̄(λ_j)
Standard deviation: s(λ_j)
Skewness: skew(λ_j)
Sum of logarithm transformation: Σ_i f_i ln(y_i + c)
3. For each λ_j, compute the log-likelihood function L(λ_j). Find the value of j with the largest
log-likelihood function, breaking ties by selecting the smallest value of λ_j. Also find the
corresponding statistics x̄(λ_j), s(λ_j) and skew(λ_j).
4. Transform the target to reflect the user's mean y̅_spec (default is 0) and standard deviation sd_spec (default is 1):

y*_i = sd_spec × (y_i(λ_j) − x̄(λ_j))/s(λ_j) + y̅_spec

where y_i(λ_j) is the Box-Cox transformed value of case i.
Update univariate statistics. After the Box-Cox and z-score transformations, the following univariate statistics are updated:

Minimum value: the transformed minimum.

Maximum value: the transformed maximum.

Mean: y̅_spec.

Standard deviation: sd_spec.

Skewness: skew(λ_j), unchanged by the linear rescaling.
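The grid search for λ can be sketched as below. This is an unweighted illustration; the profile log-likelihood form, the tie-breaking by smallest λ, and the shift rule `c = 1 - min(y)` when some values are non-positive are assumptions consistent with the description above, not the product's exact implementation.

```python
import math

def boxcox_grid(y, a=-3.0, b=3.0, s=0.5):
    """Pick lambda on the grid [a, b] maximizing
    L = -(n/2) ln(var(lambda)) + (lambda - 1) * sum(ln(y_i + c)).
    Ties go to the smallest lambda because we scan ascending with '>'."""
    c = 1.0 - min(y) if min(y) <= 0 else 0.0   # assumed shift rule
    n = len(y)
    best = None
    lam = a
    while lam <= b + 1e-9:
        if abs(lam) < 1e-12:
            t = [math.log(v + c) for v in y]
        else:
            t = [((v + c) ** lam - 1.0) / lam for v in y]
        mean = sum(t) / n
        var = sum((v - mean) ** 2 for v in t) / n
        ll = -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v + c) for v in y)
        if best is None or ll > best[0]:
            best = (ll, lam, mean, math.sqrt(var))
        lam += s
    return best[1], best[2], best[3]   # lambda, mean, sd of transformed values
```

For strongly right-skewed data such as 1, 10, 100, 1000 the search settles on λ = 0, i.e. the log transform.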
Bivariate Statistics Collection
For each target/predictor pair, the following statistics are collected according to the measurement
levels of the target and predictor.
Continuous target or no target and all continuous predictors
If there is a continuous target and some continuous predictors, then we need to calculate the
covariance and correlations between all pairs of continuous variables. If there is no continuous
target, then we only calculate the covariance and correlations between all pairs of continuous
predictors. We suppose there are there are m continuous variables, and denote the covariance
, with element , and the correlation matrix as
, with element .
matrix as
We define the covariance between two continuous variables X and Y as
23
Automated Data Preparation Algorithms
where
and are not missing and
are not missing .
and
The covariance can be computed by a provisional means algorithm:
1. Start with x̄ = ȳ = C_XY = W = 0.
2. For j = 1, ..., n, if both x_j and y_j are not missing, compute:

W ← W + f_j w_j
x̄ ← x̄ + f_j w_j (x_j − x̄)/W
ȳ ← ȳ + f_j w_j (y_j − ȳ)/W
C_XY ← C_XY + f_j w_j (x_j − x̄_old)(y_j − ȳ)

where x̄_old is the value of x̄ before it is updated for case j. After the last case has been processed, we obtain the accumulated W, x̄, ȳ and C_XY.
3. Compute bivariate statistics between X and Y:
Number of valid cases: n_XY = Σ_j f_j over the cases used above.
Covariance: c_XY = C_XY/(W − 1).
Correlation: r_XY = c_XY / sqrt(c_XX × c_YY).
Note: If there are no valid cases when pairwise deletion is used, then we let c_XY = 0 and r_XY = 0.
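The one-pass update above can be sketched directly. The function name and the use of `None` for missing values are illustrative; frequency and analysis weights default to 1.

```python
def provisional_cov(xs, ys, freq=None, weight=None):
    """One-pass (provisional means) weighted covariance with pairwise deletion."""
    n = len(xs)
    freq = freq or [1.0] * n
    weight = weight or [1.0] * n
    W = mx = my = cxy = 0.0
    n_valid = 0
    for x, y, f, w in zip(xs, ys, freq, weight):
        if x is None or y is None:        # pairwise deletion
            continue
        n_valid += int(f)
        fw = f * w
        W += fw
        dx = x - mx                       # deviation from the OLD x mean
        mx += fw * dx / W
        my += fw * (y - my) / W
        cxy += fw * dx * (y - my)         # uses old x mean, updated y mean
    cov = cxy / (W - 1) if W > 1 else 0.0
    return n_valid, cov
```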
Categorical target and all continuous predictors
For a categorical target Y with values i = 1, ..., J and a continuous predictor X, the bivariate statistics are:
Mean of X for each Y=i, i=1,...,J: x̄_i = Σ_{y_j=i} f_j w_j x_j / W_i.
Sum of squared errors of X for each Y=i, i=1,...,J: SSE_i = Σ_{y_j=i} f_j w_j (x_j − x̄_i)².
Sum of frequency weights for each Y=i, i=1,...,J: n_i = Σ_{y_j=i} f_j over cases where x_j is not missing.
Number of invalid cases: the total frequency weight of cases where x_j is missing.
Sum of weights (frequency weight times analysis weight) for each Y=i, i=1,...,J: W_i = Σ_{y_j=i} f_j w_j over cases where x_j is not missing.
Continuous target and all categorical predictors
For a continuous target Y and a categorical predictor X with values i = 1, ..., J, the bivariate statistics include:
Mean of Y: ȳ = Σ_j f_j w_j y_j / W, over cases where y_j is not missing.
Sum of squared errors of Y: SSE = Σ_j f_j w_j (y_j − ȳ)².
Mean of Y for each X=i, i=1,...,J: ȳ_i = Σ_{x_j=i} f_j w_j y_j / W_i.
Sum of squared errors of Y for each X=i, i=1,...,J: SSE_i = Σ_{x_j=i} f_j w_j (y_j − ȳ_i)².
Sum of frequency weights for X=i, i=1,...,J: n_i = Σ_{x_j=i} f_j over cases where y_j is not missing.
Sum of weights (frequency weight times analysis weight) for X=i, i=1,...,J: W_i = Σ_{x_j=i} f_j w_j over cases where y_j is not missing.
Categorical target and all categorical predictors
For a categorical target Y with values j=1,...,J and a categorical predictor X with values i=1,...,I,
the bivariate statistics are:
Sum of frequency weights for each combination of X=i and Y=j: n_ij = Σ_{x_k=i, y_k=j} f_k.
Sum of weights (frequency weight times analysis weight) for each combination of X=i and Y=j: W_ij = Σ_{x_k=i, y_k=j} f_k w_k.
Categorical Variable Handling
In this step, we use univariate or bivariate statistics to handle categorical predictors.
Reordering Categories
For a nominal predictor, we rearrange categories from lowest to highest counts. If there is a tie on
counts, then ties will be broken by ascending sort or lexical order of the data values. The new field
values start with 0 as the least frequent category. Note that the new field will be numeric even if
the original field is a string. For example, if a nominal field’s data values are “A”, “A”, “A”, “B”,
“C”, “C”, then automated data preparation would recode “B” into 0, “C” into 1, and “A” into 2.
Identify Highly Associated Categorical Features
If there is a target in the data set, we select an ordinal/nominal predictor if its p-value is not larger
than an alpha-level (default is 0.05). See “P-value Calculations ” on p. 27 for details of
computing these p-values.
Since we use pairwise deletion to handle missing values when we collect bivariate statistics,
we may have some categories with zero cases; that is, n_i = 0 for a category i of a categorical
predictor. When we calculate p-values, these categories will be excluded.
If there is only one category or no category after excluding categories with zero cases, we set the
p-value to be 1 and this predictor will not be selected.
Supervised Merge
We merge categories of an ordinal/nominal predictor using a supervised method that is similar to a
CHAID tree with one level of depth.
1. Exclude all categories with zero case count.
2. If X has 0 categories, merge all excluded categories into one category, then stop.
3. If X has 1 category, go to step 7.
4. Else, find the allowable pair of categories of X that is most similar. This is the pair whose test
statistic gives the largest p-value with respect to the target. An allowable pair of categories for an
ordinal predictor is two adjacent categories; for a nominal predictor it is any two categories. Note
that for an ordinal predictor, if categories between the ith category and jth categories are excluded
because of zero cases, then the ith category and jth categories are two adjacent categories. See
“P-value Calculations ” on p. 27 for details of computing these p-values.
5. For the pair having the largest p-value, check if its p-value is larger than a specified alpha-level
(default is 0.05). If it is, this pair is merged into a single compound category and
at the same time we calculate the bivariate statistics of this new category. Then a new set of
categories of X is formed. If it is not, then go to step 6.
6. Go to step 3.
7. For an ordinal predictor, find the maximum value in each new category. Sort these maximum
values in ascending order. Suppose we have r new categories, and the sorted maximum values are
max_1 < max_2 < … < max_r; then we get the merge rule as: the first new category will contain all original
categories such that X ≤ max_1, the second new category will contain all original categories such that
max_1 < X ≤ max_2, …, and the last new category will contain all original categories such that
X > max_{r−1}.
For a nominal predictor, all categories excluded at step 1 will be merged into the new category
with the lowest count. If there are ties on categories with the lowest counts, then ties are broken
by selecting the category with the smallest value by ascending sort or lexical order of the original
category values which formed the new categories with the lowest counts.
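The greedy merging loop of steps 3-6 can be sketched as follows. The p-value function is injected as a parameter, standing in for the F or chi-square tests described later under "P-value Calculations"; treating only adjacent groups as allowable pairs corresponds to the ordinal case. Function and parameter names are assumptions.

```python
def supervised_merge(categories, pvalue, alpha=0.05):
    """Greedy CHAID-style merge of adjacent ordinal category groups.

    `categories` is a list of disjoint category groups, e.g. [[1], [2], [3]];
    `pvalue(a, b)` returns the p-value of the test that groups a and b have
    the same target distribution (a stand-in for the tests in the text).
    """
    cats = [list(c) for c in categories]
    while len(cats) > 1:
        # most similar allowable (adjacent) pair = largest p-value
        k, p = max(((k, pvalue(cats[k], cats[k + 1]))
                    for k in range(len(cats) - 1)),
                   key=lambda kp: kp[1])
        if p <= alpha:
            break                          # no pair similar enough to merge
        cats[k] = cats[k] + cats[k + 1]    # form a compound category
        del cats[k + 1]
    return cats
```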
Bivariate statistics calculation of new category
When two categories are merged into a new category, we need to calculate the bivariate statistics
of this new category.
Scale target. If the categories i and i′ can be merged based on the p-value, then the bivariate statistics
of the merged category should be calculated as:

n_{i∪i′} = n_i + n_{i′}, W_{i∪i′} = W_i + W_{i′},
ȳ_{i∪i′} = (W_i ȳ_i + W_{i′} ȳ_{i′}) / (W_i + W_{i′}),
SSE_{i∪i′} = SSE_i + SSE_{i′} + (W_i W_{i′}/(W_i + W_{i′})) (ȳ_i − ȳ_{i′})².

Categorical target. If the categories i and i′ can be merged based on the p-value, then the bivariate
statistics of the merged category should be calculated as:

n_{(i∪i′)j} = n_ij + n_{i′j} and W_{(i∪i′)j} = W_ij + W_{i′j} for each j.
Update univariate and bivariate statistics
At the end of the supervised merge step, we calculate the bivariate statistics for each new category.
For univariate statistics, the counts for each new category will be sum of the counts of each
original categories which formed the new category. Then we update other statistics according to
the formulas in “Univariate Statistics Collection ” on p. 15, though note that the statistics only
need to be updated based on the new categories and the numbers of cases in these categories.
P-value Calculations
Each p-value calculation is based on the appropriate statistical test of association between the
predictor and target.
Scale target
We calculate an F statistic:

F = [ Σ_{i=1}^{J} W_i (ȳ_i − ȳ)² / (J − 1) ] / [ Σ_{i=1}^{J} SSE_i / (W − J) ]

where ȳ = Σ_i W_i ȳ_i / W and W = Σ_i W_i.
Based on the F statistic, the p-value can be derived as

p = Prob( F(J−1, W−J) > F )

where F(J−1, W−J) is a random variable following an F distribution with J−1 and W−J degrees of freedom.
At the merge step we calculate the F statistic and p-value between two categories i and i′ of X as

F = [ W_i (ȳ_i − ȳ_{i∪i′})² + W_{i′} (ȳ_{i′} − ȳ_{i∪i′})² ] / [ (SSE_i + SSE_{i′}) / (W_i + W_{i′} − 2) ]
p = Prob( F(1, W_i + W_{i′} − 2) > F )

where ȳ_{i∪i′} is the mean of Y for a new category i∪i′ merged by i and i′, and F(1, W_i + W_{i′} − 2)
is a random variable following an F distribution with 1 and W_i + W_{i′} − 2 degrees of freedom.
Nominal target
The null hypothesis of independence of X and Y is tested. First a contingency (or count) table is
formed using classes of Y as columns and categories of the predictor X as rows. Then the expected
cell frequencies under the null hypothesis are estimated. The observed cell frequencies and the
expected cell frequencies are used to calculate the Pearson chi-squared statistic and the p-value:

X² = Σ_{i=1}^{I} Σ_{j=1}^{J} (n_ij − m̂_ij)² / m̂_ij

where n_ij is the observed cell frequency and m̂_ij is the estimated expected cell frequency for cell
(X = i, Y = j) following the independence model. How to estimate m̂_ij is described below.
The corresponding p-value is given by p = Prob(χ²_d > X²), where χ²_d follows a chi-squared
distribution with d = (I − 1)(J − 1) degrees of freedom.
When we investigate whether two categories i and i′ of X can be merged, the Pearson chi-squared
statistic is revised as

X² = Σ_{j=1}^{J} [ (n_ij − m̂_ij)² / m̂_ij + (n_{i′j} − m̂_{i′j})² / m̂_{i′j} ]

and the p-value is given by p = Prob(χ²_{J−1} > X²).
Ordinal target
Suppose there are I categories of X, and J ordinal categories of Y. Then the null hypothesis of
the independence of X and Y is tested against the row effects model (with the rows being the
categories of X and columns the classes of Y) proposed by Goodman (1979). Two sets of expected
cell frequencies, m̂_ij (under the hypothesis of independence) and m̃_ij (under the hypothesis that
the data follow a row effects model), are both estimated. The likelihood ratio statistic is

H² = 2 Σ_{i=1}^{I} Σ_{j=1}^{J} m̃_ij ln( m̃_ij / m̂_ij )

The p-value is given by p = Prob(χ²_{I−1} > H²).
Estimated expected cell frequencies (independence assumption)
If analysis weights are specified, the expected cell frequency under the null hypothesis of
independence is of the form

m̂_ij = w̄_ij^{−1} α_i β_j

where α_i and β_j are parameters to be estimated, and w̄_ij = W_ij / n_ij if n_ij > 0, otherwise w̄_ij = 1.
Parameter estimates α_i, β_j, and hence m̂_ij, are obtained from the following iterative procedure.
1. Initialize k = 0, α_i^(0) = β_j^(0) = 1, and m̂_ij^(0) = w̄_ij^{−1}.
2. α_i^(k+1) = n_i· / Σ_j w̄_ij^{−1} β_j^(k), where n_i· = Σ_j n_ij.
3. β_j^(k+1) = n_·j / Σ_i w̄_ij^{−1} α_i^(k+1), where n_·j = Σ_i n_ij.
4. m̂_ij^(k+1) = w̄_ij^{−1} α_i^(k+1) β_j^(k+1).
5. If max_{i,j} | m̂_ij^(k+1) − m̂_ij^(k) | < ε (default is 0.001) or the number of iterations is larger than a
threshold (default is 100), stop and output α_i^(k+1), β_j^(k+1) and m̂_ij^(k+1) as the final estimates.
Otherwise, set k = k + 1 and go to step 2.
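The iterative scaling above can be sketched directly. With all w̄_ij = 1 (no analysis weights) it reduces to ordinary iterative proportional fitting of the independence model, converging to row-total × column-total / grand-total; the function name and matrix-of-lists representation are assumptions.

```python
def expected_independence(n, wbar=None, eps=1e-3, max_iter=100):
    """Estimate expected cell frequencies m_ij = alpha_i * beta_j / wbar_ij
    under independence, following the iterative procedure in the text.
    `n` is the observed I x J count table; wbar defaults to all ones."""
    I, J = len(n), len(n[0])
    wbar = wbar or [[1.0] * J for _ in range(I)]
    alpha, beta = [1.0] * I, [1.0] * J
    m = [[0.0] * J for _ in range(I)]
    for _ in range(max_iter):
        for i in range(I):
            alpha[i] = sum(n[i]) / sum(beta[j] / wbar[i][j] for j in range(J))
        for j in range(J):
            beta[j] = (sum(n[i][j] for i in range(I))
                       / sum(alpha[i] / wbar[i][j] for i in range(I)))
        new = [[alpha[i] * beta[j] / wbar[i][j] for j in range(J)] for i in range(I)]
        done = max(abs(new[i][j] - m[i][j]) for i in range(I) for j in range(J)) < eps
        m = new
        if done:
            break
    return m
```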
Estimated expected cell frequencies (row effects model)
In the row effects model, scores for classes of Y are needed. By default, s_j (the order of a
class of Y) is used as the class score. These orders will be standardized via the following linear
transformation such that the largest score is 100 and the lowest score is 0:

s*_j = 100 × (s_j − s_min) / (s_max − s_min)

where s_min and s_max are the smallest and largest order, respectively.
The expected cell frequency under the row effects model is given by

m̃_ij = w̄_ij^{−1} α_i β_j γ_i^{(s*_j − s̄)}

where s̄ is the weighted average score and α_i, β_j and γ_i are unknown parameters to be estimated.
Parameter estimates α_i, β_j, γ_i, and hence m̃_ij, are obtained from the following iterative procedure.
1. Initialize k = 0, α_i^(0) = β_j^(0) = γ_i^(0) = 1, and m̃_ij^(0) = w̄_ij^{−1}.
2. α_i^(k+1) = n_i· / Σ_j w̄_ij^{−1} β_j^(k) (γ_i^(k))^{(s*_j − s̄)}.
3. β_j^(k+1) = n_·j / Σ_i w̄_ij^{−1} α_i^(k+1) (γ_i^(k))^{(s*_j − s̄)}.
4. m*_ij = w̄_ij^{−1} α_i^(k+1) β_j^(k+1) (γ_i^(k))^{(s*_j − s̄)}.
5. Update γ_i^(k+1) from the current estimates m*_ij; if the update is not defined (for example, a zero
denominator), keep γ_i^(k+1) = γ_i^(k).
6. m̃_ij^(k+1) = w̄_ij^{−1} α_i^(k+1) β_j^(k+1) (γ_i^(k+1))^{(s*_j − s̄)}.
7. If max_{i,j} | m̃_ij^(k+1) − m̃_ij^(k) | < ε (default is 0.001) or the number of iterations is larger than a
threshold (default is 100), stop and output α_i^(k+1), β_j^(k+1), γ_i^(k+1) and m̃_ij^(k+1) as the final
estimates. Otherwise, set k = k + 1 and go to step 2.
Unsupervised Merge
If there is no target, we merge categories based on counts. Suppose that X has I categories which
are sorted in ascending order. For an ordinal predictor, we sort it according to its values, while
for a nominal predictor we rearrange categories from lowest to highest count, with ties broken
by ascending sort or lexical order of the data values. Let c_i be the number of cases for the ith
category, and n = Σ_i c_i be the total number of cases for X. Then we use the equal frequency method
to merge sparse categories.
1. Start with i = 1 and g = 1.
2. If i > I, go to step 5.
3. If c_i ≥ [n/b], then the ith original category becomes the new category g on its own; otherwise the
original categories i, i+1, … will be merged into the new category g until their accumulated count
reaches [n/b]. In either case, let i point to the first unprocessed original category, set g = g + 1, and
then go to step 2.
4. If the last new category has fewer than [n/b] cases, then merge categories using one of the
following rules:
i) If g = 1, then all original categories will be merged into the single new category.
ii) If g = 2, then the remaining categories will be merged into category g = 2.
iii) If g > 2, then the remaining categories will be merged into category g − 1.
If unprocessed categories remain, then go to step 3.
5. Output the merge rule and merged predictor.
After merging, one of the following rules holds:

Neither the original category nor any category created during merging has fewer than [n/b]
cases, where b is a user-specified parameter (default is 10) and [x] denotes the nearest integer of x.

The merged predictor has only two categories.
Update univariate statistics. When original categories are merged into one new
category, the number of cases in this new category will be the sum of their counts. At the end of the
merge step, we get new categories and the number of cases in each category. Then we update
other statistics according to the formulas in “Univariate Statistics Collection ” on p. 15, though
note that the statistics only need to be updated based on the new categories and the numbers
of cases in these categories.
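The equal-frequency merging can be sketched as below. The tail-bin rule, folding a too-small leftover bin into the previous one, is a simplified stand-in for rules i)-iii) above; names are assumptions.

```python
def equal_frequency_merge(counts, b=10):
    """Merge sparse categories so each bin has at least round(n/b) cases.

    `counts` are per-category case counts, already sorted as described
    in the text (by value for ordinal, by count for nominal predictors).
    """
    n = sum(counts)
    minimum = max(1, round(n / b))
    bins, current, total = [], [], 0
    for c in counts:
        current.append(c)
        total += c
        if total >= minimum:          # current bin is large enough: close it
            bins.append(current)
            current, total = [], 0
    if current:                       # leftover tail bin is too small:
        if bins:
            bins[-1].extend(current)  # fold it into the previous bin
        else:
            bins.append(current)
    return bins
```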
Continuous Predictor Handling
Continuous predictor handling includes supervised binning when the target is categorical,
predictor selection when the target is continuous and predictor construction when the target is
continuous or there is no target in the dataset.
After handling continuous predictors, we collect univariate statistics for derived or constructed
predictors according to the formulas in “Univariate Statistics Collection ” on p. 15. Any derived
predictors that are constant, or have all missing values, are excluded from further analysis.
Supervised Binning
If there is a categorical target, then we will transform each continuous predictor to an ordinal
predictor using supervised binning. Suppose that we have already collected the bivariate statistics
between the categorical target and a continuous predictor. Using the notations introduced in
“Bivariate Statistics Collection ” on p. 22, the homogeneous subset will be identified by the
Scheffe method as follows:
If the difference between the largest and smallest means in a candidate set of categories is smaller
than a critical range derived from the Scheffé method, then those categories will form a homogeneous
subset; otherwise the set is not homogeneous. The critical range is computed from
(J − 1) F_α(J − 1, W − J) and the pooled within-category variance, where F_α denotes the
α-level critical value of the F distribution.
The supervised binning algorithm follows:
1. Sort the category means in ascending order; denote them as x̄_(1) ≤ x̄_(2) ≤ … ≤ x̄_(J).
2. Start with i = 1 and q = J.
3. If the set {x̄_(i), …, x̄_(q)} passes the homogeneity check above, then categories i through q can be
considered a homogeneous subset. At the same time we compute the mean and standard deviation
of this subset, then set i = q + 1 and q = J; otherwise set q = q − 1.
4. If i ≤ J, go to step 3.
5. Else compute the cut points of the bins. Suppose we have L homogeneous subsets, the means of
these subsets are x̄*_1, …, x̄*_L, and the standard deviations are sd*_1, …, sd*_L; then the cut point
c_i between the ith and (i+1)th homogeneous subsets is computed from these means and standard
deviations.
6. Output the binning rules. Category 1: X ≤ c_1; Category 2: c_1 < X ≤ c_2; …; Category L: X > c_{L−1}.
Feature Selection and Construction
If there is a continuous target, we perform predictor selection using p-values derived from the
correlation or partial correlation between the predictors and the target. The selected predictors are
grouped if they are highly correlated. In each group, we will derive a new predictor using principal
component analysis. However, if there is no target, we do not implement predictor selection.
To identify highly correlated predictors, we compute the correlation between a scale and a group as
follows: suppose that X is a continuous predictor and continuous predictors X_1, …, X_k form
a group G. Then the correlation between X and group G is defined from the correlations
r_{X X_i} between X and each member X_i of G.
Let ρ be the correlation level at which the predictors are identified as groups. The predictor
selection and predictor construction algorithm is as follows:
1. (Target is continuous and predictor selection is in effect ) If the p-value between a continuous
predictor and target is larger than a threshold (default is 0.05), then we remove this predictor from
the correlation matrix and covariance matrix. See “Correlation and Partial Correlation ” on p.
34 for details on computing these p-values.
2. Start with an initial value of ρ and i = 1.
3. If ρ has dropped below a minimum level, stop and output all the derived predictors, their source predictors and the coefficient
of each source predictor. In addition, output the remaining predictors in the correlation matrix.
4. Find the two most correlated predictors such that their correlation in absolute value is larger than
ρ, and put them in group i. If there are no predictors to be chosen, then go to step 9.
5. Add one predictor to group i such that the predictor is most correlated with group i and the
correlation is larger than ρ. Repeat this step until the number of predictors in group i is
greater than a threshold (default is 5) or there is no predictor to be chosen.
6. Derive a new predictor from the group i using principal component analysis. For more
information, see the topic “Principal Component Analysis ” on p. 33.
7. (Both predictor selection and predictor construction are in effect) Compute partial correlations
between the other continuous predictors and the target, controlling for values of the new predictor.
Also compute the p-values based on partial correlation. See “Correlation and Partial Correlation ”
on p. 34 for details on computing these p-values. If the p-value based on partial correlation
between a continuous predictor and continuous target is larger than a threshold (default is 0.05),
then remove this predictor from the correlation and covariance matrices.
8. Remove predictors that are in the group from the correlation matrix. Then let i=i+1 and go to
step 4.
9. Decrease ρ, then go to step 3.
Notes:

If only predictor selection is needed, then only step 1 is implemented. If only predictor
construction is needed, then we implement all steps except step 1 and step 7. If both predictor
selection and predictor construction are needed, then all steps are implemented.

If there are ties on correlations when we identify highly correlated predictors, the ties will be
broken by selecting the predictor with the smallest index in the dataset.
Principal Component Analysis
Let X_1, …, X_m be m continuous predictors. Principal component analysis can be described
as follows:
1. Input Σ, the covariance matrix of X_1, …, X_m.
2. Calculate the eigenvectors and eigenvalues of the covariance matrix. Sort the eigenvalues (and
corresponding eigenvectors) in descending order, λ_1 ≥ λ_2 ≥ … ≥ λ_m.
3. Derive new predictors. Suppose the elements of the first component v_1 are v_11, …, v_1m; then
the new derived predictor is v_11 X_1 + … + v_1m X_2 … + v_1m X_m.
Correlation and Partial Correlation
Correlation and P-value
Let r be the correlation between continuous predictor X and continuous target Y; then the
p-value is derived from the t test:

p = 2 × Prob( T(W − 2) > |t| ), with t = r sqrt( (W − 2) / (1 − r²) )

where T(W − 2) is a random variable following a t distribution with W − 2 degrees of freedom.
If r² = 1, then set p = 0; if W ≤ 2, then set p = 1.
Partial correlation and P-value
For two continuous variables, X and Y, we can calculate the partial correlation between them
controlling for the values of a new continuous variable Z:

r_{XY·Z} = ( r_XY − r_XZ r_YZ ) / sqrt( (1 − r²_XZ)(1 − r²_YZ) )

Since the new variable Z is always a linear combination of several continuous variables, we
compute the correlation of Z and a continuous variable using a property of the covariance rather
than the original dataset. Suppose the new derived predictor Z is a linear combination of original
predictors X_1, …, X_k:

Z = a_1 X_1 + … + a_k X_k

Then for any continuous variable X (continuous predictor or continuous target), the correlation
between X and Z is

r_XZ = c_XZ / sqrt( c_XX c_ZZ )

where c_XZ = Σ_i a_i c_{X X_i} and c_ZZ = Σ_i Σ_j a_i a_j c_{X_i X_j}.
If 1 − r²_XZ or 1 − r²_YZ is less than a small tolerance, let r_{XY·Z} = 0. If r_{XY·Z} is larger than 1, then set it to
1; if r_{XY·Z} is less than −1, then set it to −1. (This may occur with pairwise deletion.) Based on the
partial correlation, the p-value is derived from the t test

p = 2 × Prob( T(W − 3) > |t| ), with t = r_{XY·Z} sqrt( (W − 3) / (1 − r²_{XY·Z}) )

where T(W − 3) is a random variable following a t distribution with W − 3 degrees of freedom.
If r²_{XY·Z} = 1, then set p = 0; if W ≤ 3, then set p = 1.
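The covariance property and the guarded partial-correlation formula can be sketched as follows; function names and the tolerance value are assumptions.

```python
import math

def corr_with_combination(cov, x, coeffs):
    """Correlation of variable index x with Z = sum_i a_i X_i, computed
    from the covariance matrix alone: Cov(X, Z) = sum_i a_i Cov(X, X_i)."""
    k = len(coeffs)
    cov_xz = sum(a * cov[x][i] for i, a in enumerate(coeffs))
    var_z = sum(coeffs[i] * coeffs[j] * cov[i][j]
                for i in range(k) for j in range(k))
    return cov_xz / math.sqrt(cov[x][x] * var_z)

def partial_corr(r_xy, r_xz, r_yz, tol=1e-12):
    """Partial correlation of X and Y controlling for Z, clipped to [-1, 1]."""
    denom_sq = (1 - r_xz ** 2) * (1 - r_yz ** 2)
    if (1 - r_xz ** 2) < tol or (1 - r_yz ** 2) < tol:
        return 0.0
    r = (r_xy - r_xz * r_yz) / math.sqrt(denom_sq)
    return max(-1.0, min(1.0, r))   # clipping may be needed with pairwise deletion
```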
Discretization of Continuous Predictors
Discretization is used for calculating predictive power and creating histograms.
Discretization for calculating predictive power
If the transformed target is categorical, we use the equal width bins method to discretize a
continuous predictor into a number of bins equal to the number of categories of the target.
Variables considered for discretization include:

Scale predictors which have been recommended.

Original continuous variables of recommended predictors.
Discretization for creating histograms
We use the equal width bins method to discretize a continuous predictor into a maximum of 400
bins. Variables considered for discretization include:

Recommended continuous variables.

Excluded continuous variables which have not been used to derive a new variable.

Original continuous variables of recommended variables.

Original continuous variables of excluded variables which have not been used to derive a
new variable.

Scale variables used to construct new variables. If their original variables are also continuous,
then the original variables will be discretized.

Date/time variables.
After discretization, the number of cases and mean in each bin are collected to create histograms.
Note: If an original predictor has been recast, then this recast version will be regarded as the
“original” predictor.
Predictive Power
Collect bivariate statistics for predictive power
We collect bivariate statistics between recommended predictors and the (transformed) target. If
an original predictor of a recommended predictor exists, then we also collect bivariate statistics
between this original predictor and the target; if an original predictor has a recast version, then
we use the recast version.
If the target is categorical, but a recommended predictor or its original predictor/recast version is
continuous, then we discretize the continuous predictor using the method in “Discretization of
Continuous Predictors ” on p. 35 and collect bivariate statistics between the categorical target and
the categorical predictors.
Bivariate statistics between the predictors and target are the same as those described in “Bivariate
Statistics Collection ” on p. 22.
Computing predictive power
Predictive power is used to measure the usefulness of a predictor and is computed with respect
to the (transformed) target. If an original predictor of a recommended predictor exists, then we
also compute predictive power for this original predictor; if an original predictor has a recast
version, then we use the recast version.
Scale target. When the target is continuous, we fit a linear regression model and predictive power
is computed as follows.

Scale predictor: the squared correlation r²_XY between the predictor and the target.

Categorical predictor: 1 − Σ_i SSE_i / SSE, where SSE = Σ_j f_j w_j (y_j − ȳ)² is the total sum of
squared errors of Y and SSE_i is the sum of squared errors of Y within category i.
Categorical target. If the (transformed) target is categorical, then we fit a naïve Bayes model and
the classification accuracy will serve as predictive power. We discretize continuous predictors
as described in “Discretization of Continuous Predictors ” on p. 35, so we only consider the
predictive power of categorical predictors.
If N_ij is the number of cases where X = i and Y = j, N_i· = Σ_j N_ij, N_·j = Σ_i N_ij, and
N = Σ_i Σ_j N_ij, then the chi-square statistic is calculated as

χ² = Σ_i Σ_j ( N_ij − N_i· N_·j / N )² / ( N_i· N_·j / N )

and Cramér's V is defined as

V = sqrt( χ² / ( N × ( min(I, J) − 1 ) ) )
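The chi-square and Cramér's V computation can be sketched directly from a contingency table; the function name and list-of-lists table format are assumptions.

```python
import math

def cramers_v(table):
    """Pearson chi-square and Cramer's V for an I x J contingency table."""
    I, J = len(table), len(table[0])
    N = sum(sum(row) for row in table)
    row = [sum(table[i]) for i in range(I)]
    col = [sum(table[i][j] for i in range(I)) for j in range(J)]
    chi2 = sum((table[i][j] - row[i] * col[j] / N) ** 2 / (row[i] * col[j] / N)
               for i in range(I) for j in range(J) if row[i] and col[j])
    v = math.sqrt(chi2 / (N * (min(I, J) - 1)))
    return chi2, v
```

For a perfectly associated 2x2 table the statistic equals N and V equals 1.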
References
Box, G. E. P., and D. R. Cox. 1964. An analysis of transformations. Journal of the Royal
Statistical Society, Series B, 26, 211–246.
Goodman, L. A. 1979. Simple models for the analysis of association in cross-classifications
having ordered categories. Journal of the American Statistical Association, 74, 537–552.
Bayesian Networks Algorithms
Bayesian Networks Algorithm Overview
A Bayesian network provides a succinct way of describing the joint probability distribution
for a given set of random variables.
Let V be a set of categorical random variables and G = (V, E) be a directed acyclic graph with
nodes V and a set of directed edges E. A Bayesian network model consists of the graph G together
with a conditional probability table for each node given values of its parent nodes. Given the value
of its parents, each node is assumed to be independent of all the nodes that are not its descendants.
The joint probability distribution for variables V can then be computed as a product of conditional
probabilities for all nodes, given the values of each node’s parents.
Given a set of variables V and a corresponding sample dataset, we are presented with the task of
fitting an appropriate Bayesian network model. The task of determining the appropriate edges in
the graph G is called structure learning, while the task of estimating the conditional probability
tables given parents for each node is called parameter learning.
Primary Calculations
IBM® SPSS® Modeler offers two different methods for building Bayesian network models:

Tree Augmented Naïve Bayes. This algorithm is used mainly for classification. It efficiently
creates a simple Bayesian network model. The model is an improvement over the naïve
Bayes model as it allows for each predictor to depend on another predictor in addition to the
target variable. Its main advantages are its classification accuracy and favorable performance
compared with general Bayesian network models. Its disadvantage is also due to its simplicity;
it imposes many restrictions on the dependency structure that can be uncovered among its nodes.

Markov Blanket estimation. The Markov blanket for the target variable node in a Bayesian
network is the set of nodes containing the target's parents, its children, and its children's parents.
The Markov blanket identifies all the variables in the network that are needed to predict the target
variable. This can produce more complex networks, but also takes longer to produce. Using
feature selection preprocessing can significantly improve performance of this algorithm.
Notation
The following notation is used throughout this algorithm description:
G: A directed acyclic graph representing the Bayesian network model.
D: A dataset.
Y: The categorical target variable.
X_i: The ith predictor.
Π_i: The parent set of the ith predictor besides the target. For TAN models, its size is ≤ 1.
N: The number of cases in D.
n: The number of predictors.
N_ijk: The number of records in D for which Π_i takes its jth value and X_i takes its kth value.
N_ij: The number of records in D for which Π_i takes its jth value.
K: The number of non-redundant parameters of TAN.
MB: The Markov blanket boundary about target Y.
S_XY: A subset of V such that variables X and Y are conditionally independent with respect to S_XY.
X − Y: An undirected arc between variables X and Y in G; X and Y are adjacent to each other.
X → Y: A directed arc from X to Y in G; X is a parent of Y, and Y is a child of X.
ADJ_X: A variable set which represents all the adjacent variables of variable X in G, ignoring the edge directions.
I(X, Y | S): The conditional independence (CI) test function, which returns the p-value of the test.
eps: The significance level for CI tests between two variables. If the p-value of the test is larger than eps then they are independent, and vice versa.
r_i: The cardinality of X_i.
q_i: The cardinality of the parent set Π_i of X_i.
Handling of Continuous Predictors
BN models in IBM® SPSS® Modeler can only accommodate discrete variables. Target variables
must be discrete (flag or set type). Numeric predictors are discretized into 5 equal-width bins
before the BN model is built. If any of the constructed bins is empty (there are no records with a
value in the bin’s range), that bin is merged to an adjacent non-empty bin.
Feature Selection via Breadth-First Search
Feature selection preprocessing works as follows:
 It begins by searching for the direct neighbors of a given target Y, based on statistical tests of
independence. For more information, see the topic “Markov Blanket Conditional Independence
Test” on p. 43. These variables are known as the parents or children of Y, denoted by PC(Y).
 For each X in PC(Y), we look for PC(X), the parents and children of X.
 For each Z in PC(X), we add it to the Markov blanket if it is not independent of Y.
The explicit algorithm is given below.
RecognizeMB
(
D : Dataset, eps : threshold
)
{
// Recognize Y's parents/children
CanADJ_Y = X \ {Y};
PC = RecognizePC(Y,CanADJ_Y,D,eps);
MB = PC;
// Collect spouse candidates, and remove false
// positives from PC
for (each X_i in PC){
CanADJ_X_i = X \ X_i;
CanSP_X_i = RecognizePC(X_i,CanADJ_X_i,D,eps);
if (Y notin CanSP_X_i) // Filter out false positive
MB = MB \ X_i;
}
// Discover true positives among candidates
for (each X_i in MB)
for (each Z_i in CanSP_X_i and Z_i notin MB)
if (I(Y,Z_i|{S_Y,Z_i + X_i}) ≤ eps) then
MB = MB + Z_i;
return MB;
}
RecognizePC (
T
: target to scan,
ADJ_T : Candidate adjacency set to search,
D
: Dataset,
eps
: threshold,
maxSetSize : )
{
NonPC = {empty set};
cutSetSize = 0;
repeat
for (each X_i in ADJ_T){
for (each subset S of {ADJ_T \ X_i} with |S| = cutSetSize){
if (I(X_i,T|S) > eps){
NonPC = NonPC + X_i;
S_T,X_i = S;
break;
}
}
}
if (|NonPC| > 0){
ADJ_T = ADJ_T \ NonPC;
cutSetSize +=1;
NonPC = {empty set};
} else
break;
until (|ADJ_T| ≤ cutSetSize) or (cutSetSize > maxSetSize)
return ADJ_T;
}
Tree Augmented Naïve Bayes Method
The Bayesian network classifier is a simple classification method that classifies a case
x = (x_1, …, x_n) by determining the probability of it belonging to the ith target category y_i.
These probabilities are calculated as

P(Y = y_i | x) ∝ P(Y = y_i) Π_{j=1}^{n} P(X_j = x_j | Π_j, Y = y_i)

where Π_j is the parent set of X_j besides Y, and it may be empty. P(X_j | Π_j, Y) is the conditional
probability table (CPT) associated with each node X_j. If there are n independent predictors,
then the probability is proportional to

P(Y = y_i) Π_{j=1}^{n} P(X_j = x_j | Y = y_i)
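The independent-predictor (naïve Bayes) case of the formula above can be sketched as follows; the lookup-table representation of the prior and CPTs is an illustrative assumption.

```python
def naive_bayes_posterior(prior, cpts, case):
    """Posterior class probabilities for a naive Bayes model (empty parent
    sets): P(Y=y | x) proportional to P(Y=y) * prod_j P(x_j | Y=y).
    `prior[y]` and `cpts[j][y][x_j]` are illustrative lookup tables."""
    scores = {}
    for y, p in prior.items():
        for j, xj in enumerate(case):
            p *= cpts[j][y][xj]
        scores[y] = p
    z = sum(scores.values())            # normalize the proportional scores
    return {y: s / z for y, s in scores.items()}
```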
When this assumption of conditional independence between the predictors given the
class is made, the classifier is called naïve Bayes (NB). Naïve Bayes has been shown to be
competitive with more complex, state-of-the-art classifiers. In recent years, a lot of work has
focused on improving the naïve Bayes classifier. One important method is to relax the independence
assumption. We use a tree augmented naïve Bayes (TAN) classifier (Friedman, Geiger, and
Goldszmidt, 1997), which is defined by the following conditions:

Each predictor has the target as a parent.

Predictors may have one other predictor as a parent.
An example of this structure is shown below.
Figure 5-1
Structure of a simple tree augmented naïve Bayes model: the target Y is a parent of every predictor X1, X2, ..., Xn, and tree arcs link the predictors.
TAN Classifier Learning Procedure
Let X = (X_1, ..., X_n) represent a categorical predictor vector. The algorithm for the TAN classifier first learns a tree structure over X using mutual information conditioned on Y. Then it adds a link (or arc) from the target node to each predictor node.
The TAN learning procedure is:
1. Take the training data D, X, and Y as input.
2. Learn a tree-like network structure over X by using the Structure Learning algorithm outlined below.
3. Add Y as a parent of every X_i, where 1 ≤ i ≤ n.
4. Learn the parameters of the TAN network.
TAN Structure Learning
We use a maximum weighted spanning tree (MWST) method to construct a tree Bayesian network from data (Chow and Liu, 1968). This method associates a weight with each edge corresponding to the mutual information between the two variables. When the weight matrix is created, the MWST algorithm (Prim, 1957) gives an undirected tree that can be oriented with the choice of a root.
The mutual information of two nodes X_i and X_j is defined as

I(X_i, X_j) = Σ_{x_i, x_j} Pr(x_i, x_j) · log( Pr(x_i, x_j) / (Pr(x_i) · Pr(x_j)) )

We replace the mutual information between two predictors with the conditional mutual information between two predictors given the target (Friedman et al., 1997). It is defined as

I(X_i, X_j | Y) = Σ_{x_i, x_j, y_k} Pr(x_i, x_j, y_k) · log( Pr(x_i, x_j | y_k) / (Pr(x_i | y_k) · Pr(x_j | y_k)) )

The network over X can be constructed using the following steps:
1. Compute I(X_i, X_j | Y) between each pair of variables.
2. Use Prim's algorithm (Prim, 1957) to construct a maximum weighted spanning tree, with the weight of an edge connecting X_i to X_j given by I(X_i, X_j | Y).
This algorithm works as follows: it begins with a tree with no edges and marks a variable at random as input. Then it finds an unmarked variable whose weight with one of the marked variables is maximal, marks this variable, and adds the edge to the tree. This process is repeated until all variables are marked.
3. Transform the resulting undirected tree to a directed one by choosing X_1 as a root node and setting the direction of all edges to be outward from it.
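The two structure-learning steps above can be sketched as follows. The edge weights are assumed to be precomputed conditional mutual information values I(X_i, X_j | Y) supplied as a dictionary; the function names and example weights are illustrative, not part of the product.

```python
def max_weight_spanning_tree(nodes, weight):
    """Prim-style maximum weighted spanning tree.

    weight[(a, b)] holds the (symmetric) conditional mutual information
    I(a, b | Y) for each unordered pair; returns a list of undirected edges.
    """
    nodes = list(nodes)
    def w(a, b):
        return weight.get((a, b), weight.get((b, a), 0.0))
    marked = {nodes[0]}            # start from an arbitrary node
    edges = []
    while len(marked) < len(nodes):
        # pick the heaviest edge joining a marked node to an unmarked one
        a, b = max(((m, u) for m in marked for u in nodes if u not in marked),
                   key=lambda e: w(*e))
        edges.append((a, b))
        marked.add(b)
    return edges

def orient_from_root(edges, root):
    """Direct all tree edges away from the chosen root node."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    directed, frontier, seen = [], [root], {root}
    while frontier:
        node = frontier.pop()
        for nb in adj.get(node, ()):
            if nb not in seen:
                directed.append((node, nb))
                seen.add(nb)
                frontier.append(nb)
    return directed
```

For three predictors with pairwise weights 0.9, 0.5, and 0.1, the tree keeps the two heaviest edges, and orienting from X1 directs both edges away from it.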
TAN Parameter Learning
Let r_i be the cardinality of X_i. Let q_i denote the cardinality of the parent set Pa(X_i) of X_i, that is, the number of different values to which the parents of X_i can be instantiated. So it can be calculated as q_i = Π_{X_j ∈ Pa(X_i)} r_j. Note that Pa(X_i) = ∅ implies q_i = 1. We use N_ij to denote the number of records in D for which Pa(X_i) takes its jth value. We use N_ijk to denote the number of records in D for which Pa(X_i) takes its jth value and for which X_i takes its kth value.
Maximum Likelihood Estimation
The closed form solution for the parameters θ_Yj and θ_ijk that maximize the log likelihood score is

θ̂_Yj = N_j / N,  θ̂_ijk = N_ijk / N_ij

where N_j denotes the number of cases with Y = y_j in the training data. Note that if N_ij = 0, then θ̂_ijk = 0.
The number of parameters K is

K = (|Y| − 1) + Σ_{i=1}^{n} (r_i − 1) · q_i
TAN Posterior Estimation
Assume that Dirichlet prior distributions are specified for the set of parameters θ_Y = (θ_Y1, ..., θ_Y|Y|) as well as for each of the sets θ_ij = (θ_ij1, ..., θ_ijr_i) (Heckerman, 1999). Let N'_j and N'_ijk denote the corresponding Dirichlet distribution parameters such that N' = Σ_j N'_j and N'_ij = Σ_k N'_ijk. Upon observing the dataset D, we obtain Dirichlet posterior distributions with the following sets of parameters:

θ̂^P_Yj = (N_j + N'_j) / (N + N'),  θ̂^P_ijk = (N_ijk + N'_ijk) / (N_ij + N'_ij)

The posterior estimation is always used for model updating.
Adjustment for small cell counts
To overcome problems caused by zero or very small cell counts, parameters can be estimated as posterior parameters θ̂^P_Yj and θ̂^P_ijk using uninformative Dirichlet priors N'_j and N'_ijk.
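The posterior update for a single parent configuration can be sketched as below; the function name and list-based CPT layout are assumptions for illustration, not the product's internal representation.

```python
def posterior_cpt(counts, prior):
    """Posterior (Dirichlet-smoothed) estimate of one CPT column.

    counts[k] = N_ijk for each value k of the node under one parent
    configuration j; prior[k] = the corresponding Dirichlet parameter
    N'_ijk.  Returns theta_ijk = (N_ijk + N'_ijk) / (N_ij + N'_ij).
    """
    n_ij = sum(counts)
    n_prime_ij = sum(prior)
    return [(n + p) / (n_ij + n_prime_ij) for n, p in zip(counts, prior)]
```

With all-zero counts the estimate falls back to the (normalized) prior, which is exactly what protects against empty cells.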
Markov Blanket Algorithms
The Markov blanket algorithm learns the BN structure by identifying the conditional independence
relationships among the variables. Using statistical tests (such as chi-squared test or G test),
this algorithm finds the conditional independence relationships among the nodes and uses these
relationships as constraints to construct a BN structure. This algorithm is referred to as a
dependency-analysis-based or constraint-based algorithm.
Markov Blanket Conditional Independence Test
The conditional independence (CI) test tests whether two variables are conditionally independent with respect to a conditional variable set. There are two familiar methods to compute the CI test: the χ² (Pearson chi-square) test and the G² (log likelihood ratio) test.
Suppose X and Y are two variables for testing and S is a conditional variable set such that X, Y ∉ S. Let N_xys be the observed count of cases that have X = x, Y = y, and S = s, and let m_xys be the expected number of cases that have X = x and Y = y under the hypothesis that X and Y are independent.
Chi-square Test
We assume the null hypothesis is that X and Y are independent. The χ² test statistic for this hypothesis is

χ² = Σ_{x,y} (N_xy − m_xy)² / m_xy

Suppose that N is the total number of cases in D, N_x is the number of cases in D where X takes its ith category, and N_y and N_s are the corresponding numbers for Y and S. So N_xy is the number of cases in D where X takes its ith category and Y takes its jth category. N_xs, N_ys, and N_xys are defined similarly. We have:

m_xy = (N_x · N_y) / N

Because χ² is asymptotically distributed as a χ²_df distribution, where df is the degrees of freedom for the χ² distribution, we get the p-value for χ² as follows:

p = Pr(χ²_df ≥ χ²)

As we know, the larger the p-value, the less likely we are to reject the null hypothesis. For a given significance level α, if the p-value is greater than α we cannot reject the hypothesis that X and Y are independent.
We can easily generalize this independence test into a conditional independence test:

χ² = Σ_{x,y,s} (N_xys − m_xys)² / m_xys,  with  m_xys = (N_xs · N_ys) / N_s

The degrees of freedom for χ² is:

df = (|X| − 1) · (|Y| − 1) · Π_{S_i ∈ S} |S_i|
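The unconditional χ² statistic and its degrees of freedom can be computed from raw (x, y) observations as in the sketch below. The conditional version is omitted here; it applies the same computation within each configuration of S. The function name and input layout are assumptions.

```python
from collections import Counter

def chi_square_stat(pairs):
    """Pearson chi-square statistic for independence of X and Y.

    pairs is a list of (x, y) observations; expected counts are
    m_xy = N_x * N_y / N under the independence hypothesis.
    Returns (statistic, degrees of freedom).
    """
    n = len(pairs)
    n_x = Counter(x for x, _ in pairs)
    n_y = Counter(y for _, y in pairs)
    n_xy = Counter(pairs)
    stat = 0.0
    for x in n_x:
        for y in n_y:
            m = n_x[x] * n_y[y] / n          # expected count m_xy
            stat += (n_xy.get((x, y), 0) - m) ** 2 / m
    df = (len(n_x) - 1) * (len(n_y) - 1)
    return stat, df
```

A perfectly balanced table gives a statistic of 0 (no evidence of dependence), while a table concentrated on the diagonal gives a large statistic.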
Likelihood Ratio Test
We assume the null hypothesis is that X and Y are independent. The G² test statistic for this hypothesis is

G² = 2 · Σ_{x,y} N_xy · ln(N_xy / m_xy)

or equivalently,

G² = 2 · ( Σ_{x,y} N_xy · ln N_xy − Σ_{x,y} N_xy · ln m_xy )

The conditional version of the G² independence test is

G² = 2 · Σ_{x,y,s} N_xys · ln(N_xys / m_xys)

The G² test is asymptotically distributed as a χ²_df distribution, where the degrees of freedom are the same as in the χ² test. So the p-value for the G² test is

p = Pr(χ²_df ≥ G²)

In the following parts of this document, we use p_{X,Y|S} to uniformly represent the p-value of whichever test is applied. If p_{X,Y} > α, we say variables X and Y are independent, and if p_{X,Y|S} > α, we say variables X and Y are conditionally independent given variable set S.
Markov Blanket Structure Learning
This algorithm aims at learning a Bayesian networks structure from a dataset. It starts with a
, and compute
for each variable pair in G. If
complete graph G. Let
, remove the arc between
. Then for each arc
perform an exhaustive
to find the smallest conditional variable set S such that
.
search in
. After this, orientation rules are applied to orient the arcs in G.
If such S exist, delete arc
Markov Blanket Arc Orientation Rules
Arcs in the derived structure are oriented based on the following rules:
1. All patterns of the form X − Y − Z or X − Y → Z are updated so that X → Y ← Z if Y ∉ S_{X,Z}, the separating set of X and Z.
2. Patterns of the form X → Y − Z are updated to X → Y → Z.
3. Patterns of the form X − Y with a directed path from X to Y are updated to X → Y.
4. Patterns of the form X − Y, X − Z → Y, and X − W → Y are updated so that X → Y if Z and W are not adjacent.
After the last step, if there are still undirected arcs in the graph, return to step 2 and repeat until all arcs are oriented.
Deriving the Markov Blanket Structure
The Markov Blanket is a local structure of a Bayesian Network. Given a Bayesian Network G and a target variable Y, to derive the Markov Blanket of Y, we select all the directed parents of Y in G, denoted as Pa(Y); all the directed children of Y in G, denoted as Ch(Y); and all the directed parents of Ch(Y) in G, denoted as Pa(Ch(Y)). These nodes and their arcs inherited from G define the Markov Blanket MB(Y).
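Given an already-oriented structure, the blanket extraction can be sketched as below; the edge-list representation of the DAG is an assumption for illustration.

```python
def markov_blanket(edges, y):
    """Markov blanket of y in a DAG given as (parent, child) arcs:
    the parents of y, the children of y, and the other parents
    (spouses) of those children."""
    parents = {a for a, b in edges if b == y}
    children = {b for a, b in edges if a == y}
    spouses = {a for a, b in edges if b in children and a != y}
    return parents | children | spouses
```

In the network A → Y → C ← B (with an unrelated arc D → E), the blanket of Y is {A, C, B}: its parent, its child, and the child's other parent.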
Markov Blanket Parameter Learning
Maximum Likelihood Estimation
The closed form solution for the parameters θ_ijk that maximize the log likelihood score is

θ̂_ijk = N_ijk / N_ij

Note that if N_ij = 0, then θ̂_ijk = 0.
The number of parameters K is

K = Σ_{i=1}^{n} (r_i − 1) · q_i

Posterior Estimation
Assume that Dirichlet prior distributions are specified for each of the sets θ_ij = (θ_ij1, ..., θ_ijr_i) (Heckerman, 1999). Let N'_ijk denote the corresponding Dirichlet distribution parameters such that N'_ij = Σ_k N'_ijk. Upon observing the dataset D, we obtain Dirichlet posterior distributions with the following sets of parameters:

θ̂^P_ijk = (N_ijk + N'_ijk) / (N_ij + N'_ij)

The posterior estimate is always used for model updating.
Adjustment for Small Cell Counts
To overcome problems caused by zero or very small cell counts, parameters can be estimated as posterior parameters θ̂^P_ijk using uninformative Dirichlet priors N'_ijk.
Blank Handling
By default, records with missing values for any of the input or output fields are excluded from
model building. If the Use only complete records option is deselected, then for each pairwise
comparison between fields, all records containing valid values for the two fields in question
are used.
Model Nugget/Scoring
The Bayesian Network Model Nugget produces predicted values and probabilities for scored
records.
Tree Augmented Naïve Bayes Models
Using the estimated model from training data, for a new case X = (x_1, ..., x_n), the probability of it belonging to the ith target category y_i is calculated as P(Y = y_i | X). The target category with the highest posterior probability is the predicted category for this case, and is predicted by

y* = arg max_i P(Y = y_i | X)
Markov Blanket Models
The scoring function uses the estimated model to compute the probability of Y belonging to each category for a new case X. Suppose Pa(Y) is the parent set of Y, and val(Pa(Y)) denotes its configuration given case X; Ch(Y) denotes the direct children set of Y, and Pa_i denotes the parent set (excluding Y) of the ith variable in Ch(Y). The score for each category y_i of Y is computed by:

P(Y = y_i | X) = (1/c) · P(Y = y_i, val(Pa(Y)), val(Ch(Y)), val(Pa(Ch(Y))))

where the joint probability on the right factorizes over the network as the product of P(y_i | val(Pa(Y))) and the conditional probability of each child in Ch(Y) given its parents, and c is a normalizing constant. Note that c is never actually computed during scoring because its value cancels from the numerator and denominator of the scoring equation given above.
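Because the constant c cancels, scoring only needs unnormalized per-category joint scores, which are then normalized against each other. A minimal sketch (function name and dictionary layout assumed):

```python
def score_categories(unnormalized):
    """Turn unnormalized per-category joint scores into posterior
    probabilities.  The shared constant c (the probability of the observed
    predictor configuration) cancels, so only relative scores matter."""
    total = sum(unnormalized.values())
    return {cat: s / total for cat, s in unnormalized.items()}
```

Scaling every input score by the same constant leaves the output unchanged, which is precisely why c never needs to be computed.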
Binary Classifier Comparison Metrics
The Binary Classifier node generates multiple models for a flag output field. For details on how
each model type is built, see the appropriate algorithm documentation for the model type.
The node also reports several comparison metrics for each model, to help you select the optimal
model for your application. The following metrics are available:
Maximum Profit
This gives the maximum amount of profit, based on the model and the profit and cost settings. It is calculated as

Profit = Σ_{i=1}^{j} (r · h_i − c)

where h_i is defined as

h_i = 1 if record i is a hit, 0 otherwise

r is the user-specified revenue amount per hit, and c is the user-specified cost per record. The sum is calculated for the j records with the highest predicted probability, with j chosen such that the resulting profit is maximal.
Maximum Profit Occurs in %
This gives the percentage of the training records that provide positive profit based on the predictions of the model,

Profit % = 100 · (j / n)

where n is the overall number of records included in building the model.
Lift
This indicates the response rate for the top q% of records (sorted by predicted probability), as a ratio relative to the overall response rate,

Lift = ( Σ_{i=1}^{k} h_i / k ) / ( Σ_{i=1}^{n} h_i / n )

where k is q% of n, the number of training records used to build the model. The default value of q is 30, but this value can be modified in the binary classifier node options.
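The lift ratio can be sketched as follows; representing records as (predicted probability, hit) pairs is an assumption for illustration.

```python
def lift(records, q=30):
    """Lift for the top q% of records.

    records is a list of (predicted_probability, hit) pairs with hit in
    {0, 1}: the response rate among the top q% (sorted by predicted
    probability) divided by the overall response rate.
    """
    ordered = sorted(records, key=lambda r: r[0], reverse=True)
    n = len(ordered)
    k = max(1, int(n * q / 100))       # k = q% of n
    top_rate = sum(hit for _, hit in ordered[:k]) / k
    overall_rate = sum(hit for _, hit in ordered) / n
    return top_rate / overall_rate
```

If the 3 highest-scored records out of 10 are all hits while the overall hit rate is 0.4, the lift at q = 30 is 1.0 / 0.4 = 2.5.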
Overall Accuracy
This is the percentage of records for which the outcome is correctly predicted,

Accuracy = (100 / n) · Σ_{i=1}^{n} c_i,  where  c_i = 1 if ŷ_i = y_i, 0 otherwise

where ŷ_i is the predicted outcome value for record i and y_i is the observed value.
Area Under the Curve (AUC)
This represents the area under the Receiver Operating Characteristic (ROC) curve for the model.
The ROC curve plots the true positive rate (where the model predicts the target response and the
response is observed) against the false positive rate (where the model predicts the target response
but a nonresponse is observed). For a good model, the curve will rise sharply near the left axis and
cut across near the top, so that nearly all the area in the unit square falls below the curve. For an
uninformative model, the curve will approximate a diagonal line from the lower left to the upper
right corner of the graph. Thus, the closer the AUC is to 1.0, the better the model.
Figure 6-1
ROC curves for a good model (left) and an uninformative model (right)
The AUC is computed by identifying segments as unique combinations of predictor values that determine subsets of records which all have the same predicted probability of the target value. The s segments defined by a given model’s predictors are sorted in descending order of predicted probability, and the AUC is calculated as

AUC = Σ_{i=1}^{s} (F_i − F_{i−1}) · (T_i + T_{i−1}) / (2 · F_s · T_s)

where F_i is the cumulative number of false positives for segment i, that is, false positives for segment i and all preceding segments; T_i is the cumulative number of true positives; and F_0 = T_0 = 0.
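The segment-based trapezoid computation can be sketched as below (the (false positives, true positives) per-segment input layout is an assumption):

```python
def auc_from_segments(segments):
    """AUC via the trapezoid rule over scored segments.

    segments is a list of (false_positives, true_positives) counts per
    segment, already sorted by descending predicted probability.
    """
    f_prev = t_prev = 0            # F_0 = T_0 = 0
    f_cum = t_cum = 0
    area = 0.0
    for fp, tp in segments:
        f_cum += fp                # F_i: cumulative false positives
        t_cum += tp                # T_i: cumulative true positives
        area += (f_cum - f_prev) * (t_cum + t_prev)
        f_prev, t_prev = f_cum, t_cum
    return area / (2.0 * f_cum * t_cum)
```

A model whose first segment holds all true positives and whose last holds all false positives scores 1.0; equal mixes in every segment trace the diagonal and score 0.5.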
C5.0 Algorithms
The code for training C5.0 models is licensed from RuleQuest Research Ltd Pty, and the algorithms
are proprietary. For more information, see the RuleQuest website at http://www.rulequest.com/.
Note: Modeler 13 upgraded the C5.0 version from 2.04 to 2.06. See the RuleQuest website
for more information.
Scoring
A record is scored with the class and confidence of the rule that fires for that record.
If a rule set is directly generated from the C5.0 node, then the confidence for the rule is calculated as

confidence = (number correct in leaf + 1) / (total number of records in leaf + 2)

If a rule set is generated from a decision tree generated from the C5.0 node, then the confidence is calculated as

confidence = (number correct in leaf + 1) / (total number of records in leaf + number of categories in the target)
Scores with rule set voting
When voting occurs between rules within a rule set the final scores assigned to a record are
calculated in the following way. For each record, all rules are examined and each rule that applies
to the record is used to generate a prediction and an associated confidence. The sum of confidence
figures for each output value is computed, and the value with the greatest confidence sum is
chosen as the final prediction. The confidence for the final prediction is the confidence sum for
that value divided by the number of rules that fired for that record.
Scores with boosted C5.0 classifiers (decision trees and rule sets)
When scoring with a boosted C5.0 rule set the n rule sets that make up the boosted rule set (one
rule set for each boosting trial) vote using their individual scores (as obtained above) to arrive
at the final score assigned to the case by the boosted rule set.
The voting for boosted C5 classifiers is as follows. For each record, each composite classifier
(rule set or decision tree) assigns a prediction and a confidence. The sum of confidence figures for
each output value is computed, and the value with the greatest confidence sum is chosen as the
final prediction. The confidence for the final prediction by the boosted classifier is the confidence
sum for that value divided by confidence sum for all values.
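The confidence-sum voting used for boosted classifiers can be sketched as below; the function name and (value, confidence) pair representation are assumptions for illustration.

```python
def vote(predictions):
    """Combine (value, confidence) votes from the component classifiers.

    Confidences are summed per predicted value; the value with the largest
    sum wins, and the final confidence is that sum divided by the total
    confidence over all values (the boosted-classifier rule above).
    """
    sums = {}
    for value, conf in predictions:
        sums[value] = sums.get(value, 0.0) + conf
    winner = max(sums, key=sums.get)
    return winner, sums[winner] / sum(sums.values())
```

Two 'Y' votes at confidence 0.5 against one 'N' vote at 0.5 yield the prediction 'Y' with confidence 1.0 / 1.5 ≈ 0.667.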
Carma Algorithms
Overview
The continuous association rule mining algorithm (Carma) is an alternative to Apriori that
reduces I/O costs, time, and space requirements (Hidber, 1999). It uses only two data passes and
delivers results for much lower support levels than Apriori. In addition, it allows changes in
the support level during execution.
Carma deals with items and itemsets that make up transactions. Items are flag-type conditions
that indicate the presence or absence of a particular thing in a specific transaction. An itemset is a
group of items which may or may not tend to co-occur within transactions.
Deriving Rules
Carma proceeds in two stages. First it identifies frequent itemsets in the data, and then it generates
rules from the lattice of frequent itemsets.
Frequent Itemsets
Carma uses a two-phase method of identifying frequent itemsets.
Phase I: Estimation
In the estimation phase, Carma uses a single data pass to identify frequent itemset candidates.
A lattice is used to store information on itemsets. Each node in the lattice stores the items
comprising the itemset, and three values for the associated itemset:
• count: number of transactions containing the itemset since the itemset was added to the lattice
• firstTrans: the record index of the transaction for which the itemset was added to the lattice
• maxMissed: upper bound on the number of occurrences of the itemset before it was added to the lattice
The lattice also encodes information on relationships between itemsets, which are determined
by the items in the itemset. An itemset Y is an ancestor of itemset X if X contains every item in
Y. More specifically, Y is a parent of X if X contains every item in Y plus one additional item.
Conversely, Y is a descendant of X if Y contains every item in X, and Y is a child of X if Y contains
every item in X plus one additional item.
For example, if X = {milk, cheese, bread}, then Y = {milk, cheese} is a parent of X, and Z =
{milk, cheese, bread, sugar} is a child of X.
Initially the lattice contains no itemsets. As each transaction is read, the lattice is updated in
three steps:
E Increment statistics. For each itemset in the lattice that exists in the current transaction, increment
the count value.
E Insert new itemsets. For each itemset v in the transaction that is not already in the lattice, check all subsets of the itemset in the lattice. If all possible subsets of the itemset are present in the lattice, then add the itemset to the lattice and set its values:
• count is set to 1
• firstTrans is set to the record index of the current transaction
• maxMissed is defined as

maxMissed(v) = min( ⌊(i − 1) · ⌈σ⌉_i⌋ + |v| − 1, min_{w ⊂ v} ( maxMissed(w) + count(w) ) − 1 )

where w is a subset of itemset v, ⌈σ⌉_i is the ceiling of σ up to transaction i for varying support (or simply σ for constant support), and |v| is the number of items in itemset v.
E Prune the lattice. Every k transactions (where k is the pruning value, set to 500 by default), the
lattice is examined and small itemsets are removed. A small itemset is defined as an itemset for
which maxSupport < σi, where maxSupport = (maxMissed + count)/i.
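The pruning test maxSupport = (maxMissed + count) / i can be sketched as below; the dictionary layout of the lattice is an assumption for illustration.

```python
def prune_lattice(lattice, i, sigma):
    """Phase-I pruning sketch: drop itemsets whose optimistic support bound
    maxSupport = (maxMissed + count) / i falls below the support level.

    lattice maps a frozenset itemset to a dict holding its 'count' and
    'maxMissed' values; i is the current transaction index.
    """
    return {v: node for v, node in lattice.items()
            if (node['maxMissed'] + node['count']) / i >= sigma}
```

After 100 transactions with sigma = 0.1, an itemset with count 40 survives (bound 0.4), while one with count 2 and maxMissed 1 is pruned (bound 0.03).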
Phase II: Validation
After the frequent itemset candidates have been identified, a second data pass is made to compute
exact frequencies for the candidates, and the final list of frequent itemsets is determined based
on these frequencies.
The first step in Phase II is to remove infrequent itemsets from the lattice. The lattice is pruned
using the same method described under Phase I, with σn as the user-specified support level for
the model.
After initial pruning, the training data are processed again and each itemset v in the lattice is
checked and updated for each transaction record with index i:
E If firstTrans(v) < i, v is marked as exact and is no longer considered for any updates. (When all
nodes in the lattice are marked as exact, phase II terminates.)
E If v appears in the current transaction, v is updated as follows:
• Increment count(v)
• Decrement maxMissed(v)
• If firstTrans(v) = i, set maxMissed(v) = 0, and adjust maxMissed for every superset w of v in the lattice for which maxSupport(w) > maxSupport(v). For such supersets, set maxMissed(w) = count(v) - count(w).
• If maxSupport(v) < σn, remove v from the lattice.
Generating Rules
Carma uses a common rule-generating algorithm for extracting rules from the lattice of itemsets
that tends to eliminate redundant rules (Aggarwal and Yu, 1998). Rules are generated from the
lattice of itemsets (see “Frequent Itemsets” on p. 53) as follows:
E For each itemset in the lattice, get the set of maximal ancestor itemsets. An itemset Y is a maximal ancestor of itemset X if support(X) / support(Y) ≥ c, where c is the specified confidence threshold for rules.
E Prune the list of maximal ancestors by removing maximal ancestors of all of X’s child itemsets.
E For each itemset in the pruned maximal ancestor list, generate a rule Y → (X − Y), where X − Y is the itemset X with the items in itemset Y removed.
For example, if X is the itemset {milk, cheese, bread} and Y is the itemset {milk, bread}, then the resulting rule would be milk, bread → cheese.
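The confidence test support(X) / support(Y) ≥ c behind rule generation can be sketched as follows. Note this simplified version emits every qualifying ancestor rather than only the pruned maximal-ancestor list, and the frozenset-keyed support table is an assumed representation.

```python
def generate_rules(supports, confidence_threshold):
    """Carma-style rule generation sketch from itemset supports.

    supports maps frozenset itemsets to support counts.  For each itemset X
    and each proper-subset ancestor Y with support(X)/support(Y) >= c,
    emit the rule Y -> X - Y with its confidence.
    """
    rules = []
    for x, sup_x in supports.items():
        for y, sup_y in supports.items():
            if y < x and sup_x / sup_y >= confidence_threshold:
                rules.append((y, x - y, sup_x / sup_y))
    return rules
```

With support({milk, bread}) = 100 and support({milk, bread, cheese}) = 80, a confidence threshold of 0.7 yields the single rule milk, bread → cheese with confidence 0.8.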
Blank Handling
Blanks are ignored by the Carma algorithm. The algorithm will handle records containing blanks
for input fields, but such a record will not be considered to match any rule containing one or
more of the fields for which it has blank values.
Effect of Options
Minimum rule support/confidence. These values place constraints on which rules may be entered
into the table. Only rules whose support and confidence values exceed the specified values can be
entered into the rule table.
Maximum rule size. Sets the limit on the number of items that will be considered as an itemset.
Exclude rules with multiple consequents. This option restricts rules in the final rule list to those
with a single item as consequent.
Set pruning value. Sets the number of transactions to process between pruning passes. For more
information, see the topic “Frequent Itemsets” on p. 53.
Vary support. Allows support to vary in order to enhance training during the early transactions in
the training data. For more information, see “Varying support” below.
Allow rules without antecedents. Allows rules that are consequent only, which are simple statements of co-occurring items, along with traditional if-then rules.
Varying support
If the vary support option is selected, the target support value changes as transactions are
processed to provide more efficient training. The support value starts large and decreases in four
steps as transactions are processed. The first support value s1 applies to the first 9 transactions,
the second value s2 applies to the next 90 transactions, the third value s3 applies to transactions
100-4999, and the fourth value s4 applies to all remaining transactions. If we call the final
support value s, and the estimated number of transactions t, then the following constraints are
used to determine the support values:
E If t ≤ 9 or t is unknown, set s1 = s2 = s3 = s4 = s.
E Otherwise, set s4 = s, and set the earlier values s1, s2, and s3 to larger values determined by constraints that depend on which range t falls in, such that the target support averaged over the t transactions equals s.
In all cases, if solving the equations yields s1 > 0.5, s1 is set to 0.5, and the other values are adjusted accordingly to preserve the relation (1/t) · Σ_{i=1}^{t} s(i) = s, where s(i) is the target support (one of the values s1, s2, s3, or s4) for the ith transaction.
Generated Model/Scoring
The Carma algorithm generates an unrefined rule node. To create a model for scoring new data,
the unrefined rule node must be refined to generate a ruleset node. Details of scoring for generated
ruleset nodes are given below.
Predicted Values
Predicted values are based on the rules in the ruleset. When a new record is scored, it is compared
to the rules in the ruleset. How the prediction is generated depends on the user’s setting for
Ruleset Evaluation in the stream options.
• Voting. This method attempts to combine the predictions of all of the rules that apply to the record. For each record, all rules are examined and each rule that applies to the record is used to generate a prediction. The sum of confidence figures for each predicted value is computed, and the value with the greatest confidence sum is chosen as the final prediction.
• First hit. This method simply tests the rules in order, and the first rule that applies to the record is the one used to generate the prediction.
There is a default rule, which specifies an output value to be used as the prediction for records
that don’t trigger any other rules from the ruleset. For rulesets derived from decision trees, the
value for the default rule is the modal (most prevalent) output value in the overall training data.
For association rulesets, the default value is specified by the user when the ruleset is generated
from the unrefined rule node.
Confidence
Confidence calculations also depend on the user’s Ruleset Evaluation stream options setting.
• Voting. The confidence for the final prediction is the sum of the confidence values for rules triggered by the current record that give the winning prediction, divided by the number of rules that fired for that record.
• First hit. The confidence is the confidence value for the first rule in the ruleset triggered by the current record.
If the default rule is the only rule that fires for the record, its confidence is set to 0.5.
Blank Handling
Blanks are ignored by the algorithm. The algorithm will handle records containing blanks for
input fields, but such a record will not be considered to match any rule containing one or more of
the fields for which it has blank values.
There is an exception to this: when a numeric field is examined based on a split point,
user-defined missing values are included in the comparison. For example, if you define -999 as a
missing value for a field, Carma will still compare it to the split point for that field, and may return
a match if the rule is of the form (X < 50). You may need to preprocess specially coded numeric
missing values (replacing them with $null$, for example) before scoring data with Carma.
C&RT Algorithms
Overview of C&RT
C&RT stands for Classification and Regression Trees, originally described in the book by the
same name (Breiman, Friedman, Olshen, and Stone, 1984). C&RT partitions the data into two
subsets so that the records within each subset are more homogeneous than in the previous subset.
It is a recursive process—each of those two subsets is then split again, and the process repeats until the homogeneity criterion is reached or until some other stopping criterion is satisfied (as is the case with all of the tree-growing methods). The same predictor field may be used several times at different levels in the tree. It uses surrogate splitting to make the best use of data with missing values.
C&RT is quite flexible. It allows unequal misclassification costs to be considered in the tree
growing process. It also allows you to specify the prior probability distribution in a classification
problem. You can apply automatic cost-complexity pruning to a C&RT tree to obtain a more
generalizable tree.
Primary Calculations
The calculations directly involved in building the model are described below.
Frequency and Case Weight Fields
Frequency and case weight fields are useful for reducing the size of your dataset. Each has a
distinct function, though. If a case weight field is mistakenly specified to be a frequency field, or
vice versa, the resulting analysis will be incorrect.
For the calculations described below, if no frequency or case weight fields are specified, assume
that frequency and case weights for all records are equal to 1.0.
Frequency Fields
A frequency field represents the total number of observations represented by each record. It is
useful for analyzing aggregate data, in which a record represents more than one individual. The
sum of the values for a frequency field should always be equal to the total number of observations
in the sample. Note that output and statistics are the same whether you use a frequency field or
case-by-case data. The table below shows a hypothetical example, with the predictor fields sex
and employment and the target field response. The frequency field tells us, for example, that 10
employed men responded yes to the target question, and 19 unemployed women responded no.
Table 9-1
Dataset with frequency field

Sex   Employment   Response   Frequency
M     Y            Y          10
M     Y            N          17
M     N            Y          12
M     N            N          21
F     Y            Y          11
F     Y            N          15
F     N            Y          15
F     N            N          19
The use of a frequency field in this case allows us to process a table of 8 records instead of
case-by-case data, which would require 120 records.
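The equivalence between aggregated and case-by-case data can be illustrated by expanding the frequency column back into individual records (the function name and tuple layout are assumptions):

```python
def expand_frequencies(rows):
    """Expand an aggregated dataset with a trailing frequency column into
    case-by-case records.  Analyses of either form give identical results,
    since each aggregated row simply stands for `freq` identical cases."""
    expanded = []
    for *values, freq in rows:
        expanded.extend([tuple(values)] * freq)
    return expanded
```

Expanding the first and last rows of Table 9-1 ((M, Y, Y, 10) and (F, N, N, 19)) produces 29 case-by-case records; expanding all 8 rows would produce the 120 records mentioned above.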
Case weights
The use of a case weight field gives unequal treatment to the records in a dataset. When a case
weight field is used, the contribution of a record in the analysis is weighted in proportion to
the population units that the record represents in the sample. For example, suppose that in
a direct marketing promotion, 10,000 households respond and 1,000,000 households do not
respond. To reduce the size of the data file, you might include all of the responders but only a
1% sample (10,000) of the nonresponders. You can do this if you define a case weight equal to
1 for responders and 100 for nonresponders.
Model Parameters
C&RT works by choosing a split at each node such that each child node created by the split is
more pure than its parent node. Here purity refers to similarity of values of the target field. In a
completely pure node, all of the records have the same value for the target field. C&RT measures
the impurity of a split at a node by defining an impurity measure. For more information, see the
topic “Impurity Measures” on p. 62.
The following steps are used to build a C&RT tree (starting with the root node containing all
records):
Find each predictor’s best split. For each predictor field, find the best possible split for that field,
as follows:
• Range (numeric) fields. Sort the field values for records in the node from smallest to largest. Choose each point in turn as a split point, and compute the impurity statistic for the resulting child nodes of the split. Select the best split point for the field as the one that yields the largest decrease in impurity relative to the impurity of the node being split.
• Symbolic (categorical) fields. Examine each possible combination of values as two subsets. For each combination, calculate the impurity of the child nodes for the split based on that combination. Select the best split point for the field as the one that yields the largest decrease in impurity relative to the impurity of the node being split.
Find the best split for the node. Identify the field whose best split gives the greatest decrease in
impurity for the node, and select that field’s best split as the best overall split for the node.
Check stopping rules, and recurse. If no stopping rules are triggered by the split or by the parent
node, apply the split to create two child nodes. (For more information, see the topic “Stopping
Rules” on p. 64.) Apply the algorithm again to each child node.
Blank Handling
Records with missing values for the target field are ignored in building the tree model.
Surrogate splitting is used to handle blanks for predictor fields. If the best predictor field to be
used for a split has a blank or missing value at a particular node, another field that yields a split
similar to the predictor field in the context of that node is used as a surrogate for the predictor
field, and its value is used to assign the record to one of the child nodes.
For example, suppose that X* is the predictor field that defines the best split s* at node t. The
surrogate-splitting process finds another split s, the surrogate, based on another predictor field X
such that this split is most similar to s* at node t (for records with valid values for both predictors).
If a new record is to be predicted and it has a missing value on X* at node t, the surrogate split s is
applied instead. (Unless, of course, this record also has a missing value on X. In such a situation,
the next best surrogate is used, and so on, up to the limit of number of surrogates specified.)
In the interest of speed and memory conservation, only a limited number of surrogates is
identified for each split in the tree. If a record has missing values for the split field and all
surrogate fields, it is assigned to the child node with the higher weighted probability, calculated as

p_f(j | t) = N_f,j(t) / N_f(t)

where N_f,j(t) is the sum of frequency weights for records in category j for node t, and N_f(t) is the sum of frequency weights for all records in node t.
If the model was built using equal or user-specified priors, the priors are incorporated into the calculation:

p(j | t) = π(j) · N_f,j(t) / ( p_f(t) · N_f,j )

where π(j) is the prior probability for category j, and p_f(t) is the weighted probability of a record being assigned to the node,

p_f(t) = Σ_j π(j) · N_f,j(t) / N_f,j

where N_f,j(t) is the sum of the frequency weights (or the number of records if no frequency weights are defined) in node t belonging to category j, and N_f,j is the sum of frequency weights for records belonging to category j in the entire training sample.
Predictive measure of association
Let h(X* ∩ X) (resp. h_t(X* ∩ X)) be the set of learning cases (resp. learning cases in node t) that have non-missing values of both X* and X. Let p(s* ≈ s | t) be the probability of sending a case in h_t(X* ∩ X) to the same child by both s* and s, and let s̃ be the split with maximized probability p(s* ≈ s̃ | t) = max_s p(s* ≈ s | t).
The predictive measure of association λ(s* ≈ s̃ | t) between s* and s̃ at node t is

λ(s* ≈ s̃ | t) = ( min(p_L, p_R) − (1 − p(s* ≈ s̃ | t)) ) / min(p_L, p_R)

where p_L (resp. p_R) is the relative probability that the best split s* at node t sends a case with non-missing value of X* to the left (resp. right) child node. The probability p(s* ≈ s | t) is computed as a weighted proportion of the cases in h_t(X* ∩ X) that both splits send to the same child, with one form when X is categorical and another when X is continuous, and with I(s*, s, n) being the indicator function taking value 1 when both splits s* and s send the case n to the same child, 0 otherwise.
Effect of Options
Impurity Measures
There are three different impurity measures used to find splits for C&RT models, depending on the
type of the target field. For symbolic target fields, you can choose Gini or twoing. For continuous
targets, the least-squared deviation (LSD) method is automatically selected.
Gini
The Gini index g(t) at a node t in a C&RT tree is defined as

g(t) = Σj≠i p(j|t) p(i|t)

where i and j are categories of the target field, and

p(j|t) = p(j, t) / p(t),  p(j, t) = π(j) Nj(t) / Nj,  p(t) = Σj p(j, t)

where π(j) is the prior probability value for category j, Nj(t) is the number of records in category
j of node t, and Nj is the number of records of category j in the root node. Note that when the
Gini index is used to find the improvement for a split during tree growth, only those records in
node t and the root node with valid values for the split-predictor are used to compute Nj(t) and
Nj, respectively.
The equation for the Gini index can also be written as

g(t) = 1 − Σj p²(j|t)

Thus, when the records in a node are evenly distributed across the categories, the Gini index takes
its maximum value of 1 − 1/k, where k is the number of categories for the target field. When all
records in the node belong to the same category, the Gini index equals 0.
The Gini criterion function Φ(s, t) for split s at node t is defined as

Φ(s, t) = g(t) − pL g(tL) − pR g(tR)

where pL is the proportion of records in t sent to the left child node, and pR is the proportion sent
to the right child node. The proportions pL and pR are defined as

pL = p(tL) / p(t)  and  pR = p(tR) / p(t)
The split s is chosen to maximize the value of Φ(s, t).
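As a sketch of the Gini calculation (using plain class-probability lists and ignoring the priors and weights that the formulas above incorporate; function names are illustrative):

```python
def gini(p):
    """Gini index g(t) = 1 - sum_j p(j|t)^2 for class probabilities p."""
    return 1.0 - sum(pj * pj for pj in p)

def gini_improvement(p_parent, p_left, p_right, pL, pR):
    """Gini criterion Phi(s,t) = g(t) - pL*g(tL) - pR*g(tR)."""
    return gini(p_parent) - pL * gini(p_left) - pR * gini(p_right)
```

A split that separates a 50/50 node into two pure children achieves the maximum possible improvement of 0.5 for a two-category target.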
Twoing
The twoing index is based on splitting the target categories into two superclasses, and then
finding the best split on the predictor field based on those two superclasses. The superclasses
C1 and C2 are defined as

C1 = {j : p(j|tL) ≥ p(j|tR)}

and

C2 = C − C1

where C is the set of categories of the target field, and p(j|tR) and p(j|tL) are p(j|t), as defined
in the Gini formulas, for the right and left child nodes, respectively. For more information, see
the topic “Gini” on p. 62.
The twoing criterion function for split s at node t is defined as

Φ(s, t) = (pL pR / 4) [ Σj | p(j|tL) − p(j|tR) | ]²

where tL and tR are the nodes created by the split s. The split s is chosen as the split that
maximizes this criterion.
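The superclass construction and the twoing criterion can be sketched as follows (dictionary-based class probabilities; names are illustrative):

```python
def superclasses(p_left, p_right):
    """C1 = {j : p(j|tL) >= p(j|tR)}, C2 = the remaining categories."""
    cats = set(p_left) | set(p_right)
    c1 = {j for j in cats if p_left.get(j, 0.0) >= p_right.get(j, 0.0)}
    return c1, cats - c1

def twoing(p_left, p_right, pL, pR):
    """Twoing criterion (pL*pR/4) * (sum_j |p(j|tL) - p(j|tR)|)^2."""
    spread = sum(abs(p_left.get(j, 0.0) - p_right.get(j, 0.0))
                 for j in set(p_left) | set(p_right))
    return pL * pR / 4.0 * spread ** 2
```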
Least Squared Deviation
For continuous target fields, the least squared deviation (LSD) impurity measure is used. The
LSD measure R(t) is simply the weighted within-node variance for node t, and it is equal to the
resubstitution estimate of risk for the node. It is defined as

R(t) = (1/NW(t)) Σi∈t wi fi (yi − ȳ(t))²

where NW(t) is the weighted number of records in node t, wi is the value of the weighting field for
record i (if any), fi is the value of the frequency field (if any), yi is the value of the target field, and
ȳ(t) is the (weighted) mean for node t. The LSD criterion function for split s at node t is defined as

Φ(s, t) = R(t) − pL R(tL) − pR R(tR)

The split s is chosen to maximize the value of Φ(s, t).
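A minimal sketch of the LSD measure and criterion, representing each record as a (weight, frequency, target) triple (the representation is illustrative):

```python
def lsd(records):
    """LSD impurity R(t): weighted within-node variance.

    records: list of (w, f, y) triples for the records in the node.
    """
    nw = sum(w * f for w, f, _ in records)
    ybar = sum(w * f * y for w, f, y in records) / nw
    return sum(w * f * (y - ybar) ** 2 for w, f, y in records) / nw

def lsd_improvement(parent, left, right):
    """Phi(s,t) = R(t) - pL*R(tL) - pR*R(tR), with pL, pR weight proportions."""
    nw = lambda recs: sum(w * f for w, f, _ in recs)
    pL, pR = nw(left) / nw(parent), nw(right) / nw(parent)
    return lsd(parent) - pL * lsd(left) - pR * lsd(right)
```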
Stopping Rules
Stopping rules control how the algorithm decides when to stop splitting nodes in the tree. Tree
growth proceeds until every leaf node in the tree triggers at least one stopping rule. Any of the
following conditions will prevent a node from being split:
• The node is pure (all records have the same value for the target field).
• All records in the node have the same value for all predictor fields used by the model.
• The tree depth for the current node (the number of recursive node splits defining the current node) is the maximum tree depth (default or user-specified).
• The number of records in the node is less than the minimum parent node size (default or user-specified).
• The number of records in any of the child nodes resulting from the node’s best split is less than the minimum child node size (default or user-specified).
• The best split for the node yields a decrease in impurity that is less than the minimum change in impurity (default or user-specified).
Profits
Profits are numeric values associated with categories of a (symbolic) target field that can be used
to estimate the gain or loss associated with a segment. They define the relative value of each value
of the target field. Values are used in computing gains but not in tree growing.
Profit for each node in the tree is calculated as

P(t) = Σj Pj fj(t)

where j is the target field category, fj(t) is the sum of frequency field values for all records in node
t with category j for the target field, and Pj is the user-defined profit value for category j.
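For example, the node profit is a simple weighted sum over categories (function name illustrative):

```python
def node_profit(fj_t, profit):
    """Profit for node t: sum_j P_j * f_j(t).

    fj_t: dict category -> sum of frequency values in the node
    profit: dict category -> user-defined profit value P_j
    """
    return sum(profit[j] * fj_t[j] for j in fj_t)
```

A node with 10 responders worth 20 each and 90 non-responders worth −1 each yields a profit of 110.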
Priors
Prior probabilities are numeric values that influence the misclassification rates for categories of
the target field. They specify the proportion of records expected to belong to each category of the
target field prior to the analysis. The values are involved both in tree growing and risk estimation.
There are three ways to derive prior probabilities.
Empirical Priors
By default, priors are calculated based on the training data. The prior probability assigned to each
target category is the weighted proportion of records in the training data belonging to that category,

π(j) = Nw,j / Nw

In tree-growing and class assignment, the Ns take both case weights and frequency weights
into account (if defined); in risk estimation, only frequency weights are included in calculating
empirical priors.
Equal Priors
Selecting equal priors sets the prior probability for each of the J categories to the same value,

π(j) = 1/J
User-Specified Priors
When user-specified priors are given, the specified values are used in the calculations involving
priors. The values specified for the priors must conform to the probability constraint: the sum of
priors for all categories must equal 1.0. If user-specified priors do not conform to this constraint,
adjusted priors are derived which preserve the proportions of the original priors but conform
to the constraint, using the formula

π′(j) = π(j) / Σk π(k)

where π′(j) is the adjusted prior for category j, and π(j) is the original user-specified prior for
category j.
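The adjustment is a straightforward rescaling (function name illustrative):

```python
def adjust_priors(priors):
    """Rescale user-specified priors so they sum to 1.0,
    preserving their proportions."""
    total = sum(priors.values())
    return {j: p / total for j, p in priors.items()}
```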
Costs
Gini. If costs are specified, the Gini index is computed as

g(t) = Σj≠i C(i|j) p(i|t) p(j|t)

where C(i|j) specifies the cost of misclassifying a category j record as category i.
Twoing. Costs, if specified, are not taken into account in splitting nodes using the twoing criterion.
However, costs will be incorporated into node assignment and risk estimation, as described in
Predicted Values and Risk Estimates, below.
LSD. Costs do not apply to regression trees.
Pruning
Pruning refers to the process of examining a fully grown tree and removing bottom-level splits
that do not contribute significantly to the accuracy of the tree. In pruning the tree, the software
tries to create the smallest tree whose misclassification risk is not too much greater than that of the
largest tree possible. It removes a tree branch if the cost associated with having a more complex
tree exceeds the gain associated with having another level of nodes (branch).
It uses an index that measures both the misclassification risk and the complexity of the tree,
since we want to minimize both of these things. This cost-complexity measure is defined as
follows:

Rα(T) = R(T) + α|T̃|

where R(T) is the misclassification risk of tree T, and |T̃| is the number of terminal nodes for tree T.
The term α represents the complexity cost per terminal node for the tree. (Note that the value of α is
calculated by the algorithm during pruning.)
Any tree you might generate has a maximum size (Tmax), in which each terminal node contains
only one record. With no complexity cost (α = 0), the maximum tree has the lowest risk, since
every record is perfectly predicted. Thus, the larger the value of α, the fewer the number of
terminal nodes in T(α), where T(α) is the tree with the lowest complexity cost for the given α. As
α increases from 0, it produces a finite sequence of subtrees (T1, T2, T3), each with progressively
fewer terminal nodes. Cost-complexity pruning works by removing the weakest split.
The following equations represent the cost complexity for {t}, which is any single node, and
for Tt, the subbranch of {t}:

Rα({t}) = R(t) + α

Rα(Tt) = R(Tt) + α|T̃t|

If Rα(Tt) is less than Rα({t}), then the branch Tt has a smaller cost complexity than the single
node {t}.
The tree-growing process ensures that Rα(Tt) < Rα({t}) for α = 0. As α increases from 0,
both Rα({t}) and Rα(Tt) grow linearly, with the latter growing at a faster rate. Eventually, you
will reach a threshold α′, such that Rα(Tt) > Rα({t}) for all α > α′. This means that when α
grows larger than α′, the cost complexity of the tree can be reduced if we cut the subbranch Tt
under {t}. Determining the threshold is a simple computation. You can solve the first inequality,
Rα(Tt) ≤ Rα({t}), to find the largest value of α for which the inequality holds, which is also
represented by g(t). You end up with

g(t) = [R(t) − R(Tt)] / (|T̃t| − 1)

You can define the weakest link t̄ in tree T as the node that has the smallest value of g(t):

g(t̄) = min over t in T of g(t)

Therefore, as α increases, t̄ is the first node for which Rα({t}) = Rα(Tt). At that point, {t̄}
becomes preferable to Tt̄, and the subbranch is pruned.
With that background established, the pruning algorithm follows these steps:
E Set α1 = 0 and start with the tree T1 = T(0), the fully grown tree.
E Increase α until a branch is pruned. Prune the branch from the tree, and calculate the risk estimate
of the pruned tree.
E Repeat the previous step until only the root node is left, yielding a series of trees, T1, T2, ..., Tk.
E If the standard error rule option is selected, choose the smallest tree Topt for which

R(Topt) ≤ R(Tk*) + m × SE(R(Tk*))

where Tk* is the subtree in the series with the smallest risk estimate and m is the standard error
multiplier.
E If the standard error rule option is not selected, then the tree with the smallest risk estimate R(T)
is selected.
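The core of the pruning step, computing g(t) and locating the weakest link, can be sketched in Python (the nested-dictionary tree representation and all names are illustrative, not SPSS Modeler's internal structures):

```python
def branch_risk(node):
    """Sum of terminal-node risks and the terminal-node count
    for the subbranch rooted at node."""
    if not node.get("children"):
        return node["risk"], 1
    risk, leaves = 0.0, 0
    for child in node["children"]:
        r, l = branch_risk(child)
        risk += r
        leaves += l
    return risk, leaves

def g(node):
    """g(t) = (R(t) - R(T_t)) / (|T_t~| - 1) for an internal node t."""
    risk_branch, leaves = branch_risk(node)
    return (node["risk"] - risk_branch) / (leaves - 1)

def weakest_link(node):
    """Internal node with the smallest g(t); pruning removes its branch first."""
    internal = []
    def walk(n):
        if n.get("children"):
            internal.append(n)
            for c in n["children"]:
                walk(c)
    walk(node)
    return min(internal, key=g)
```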
Secondary Calculations
Secondary calculations are not directly related to building the model, but give you information
about the model and its performance.
Risk Estimates
Risk estimates describe the risk of error in predicted values for specific nodes of the tree and for
the tree as a whole.
Risk Estimates for Symbolic Target Field
For classification trees (with a symbolic target field), the risk estimate r(t) of a node t is computed
as

r(t) = (1/Nf) Σj C(j*(t)|j) Nf,j(t)

where C(j*(t)|j) is the misclassification cost of classifying a record with target value j as j*(t),
Nf,j(t) is the sum of the frequency weights for records in node t in category j (or the number of
records if no frequency weights are defined), and Nf is the sum of frequency weights for all
records in the training data.
If the model uses user-specified priors, the risk estimate is calculated as

r(t) = Σj C(j*(t)|j) π(j) Nf,j(t) / Nf,j

Note that case weights are not considered in calculating risk estimates.
Risk Estimates for numeric target field
For regression trees (with a numeric target field), the risk estimate r(t) of a node t is computed as

r(t) = (1/Nf) Σi∈t fi (yi − ȳ(t))²

where fi is the frequency weight for record i (a record assigned to node t), yi is the value of the
target field for record i, ȳ(t) is the weighted mean of the target field for all records in node t, and
Nf is the sum of frequency weights for all records in the training data.
Tree Risk Estimate
For both classification trees and regression trees, the risk estimate R(T) for the tree (T) is
calculated by taking the sum of the risk estimates for the terminal nodes r(t):

R(T) = Σt∈T′ r(t)

where T′ is the set of terminal nodes in the tree.
Gain Summary
The gain summary provides descriptive statistics for the terminal nodes of a tree.
If your target field is continuous (scale), the gain summary shows the weighted mean of the
target value for each terminal node,

ȳ(t) = Σi∈t wi fi yi / Σi∈t wi fi

If your target field is symbolic (categorical), the gain summary shows the weighted percentage of
records in a selected target category j,

g(t, j) = 100% × Σi∈t wi fi xi(j) / Σi∈t wi fi

where xi(j) = 1 if record xi is in target category j, and 0 otherwise. If profits are defined for the
tree, the gain is the average profit value for each terminal node,

P(t) = Σi∈t fi P(xi) / Σi∈t fi

where P(xi) is the profit value assigned to the target value observed in record xi.
Generated Model/Scoring
Calculations done by the C&RT generated model are described below.
Predicted Values
New records are scored by following the tree splits to a terminal node of the tree. Each terminal
node has a particular predicted value associated with it, determined as follows:
Classification Trees
For trees with a symbolic target field, each terminal node’s predicted category is the category with
the lowest weighted cost for the node. This weighted cost is calculated as

cost(i|t) = Σj C(i|j) p(j|t)

where C(i|j) is the user-specified misclassification cost for classifying a record as category i when
it is actually category j, and p(j|t) is the conditional weighted probability of a record being in
category j given that it is in node t, defined as

p(j|t) = p(j, t) / p(t),  p(j, t) = π(j) Nw,j(t) / Nw,j,  p(t) = Σj p(j, t)

where π(j) is the prior probability for category j, Nw,j(t) is the weighted number of records in node
t with category j (or the number of records if no frequency or case weights are defined),

Nw,j(t) = Σi∈t wi fi xi(j)

and Nw,j is the weighted number of records in category j (any node),

Nw,j = Σi wi fi xi(j)
Regression Trees
For trees with a numeric target field, each terminal node’s predicted value is the weighted mean
of the target values for records in the node. This weighted mean is calculated as

ȳ(t) = (1/Nw(t)) Σi∈t wi fi yi

where Nw(t) is defined as

Nw(t) = Σi∈t wi fi
Confidence
For classification trees, confidence values for records passed through the generated model are
calculated as follows. For regression trees, no confidence value is assigned.
Classification Trees
Confidence for a scored record is the proportion of weighted records in the training data in the
scored record’s assigned terminal node that belong to the predicted category, modified by the
Laplace correction:

(Nf,j(t) + 1) / (Nf(t) + k)

where Nf,j(t) is the sum of frequency weights for records in node t belonging to the predicted
category j, Nf(t) is the sum of frequency weights for all records in node t, and k is the number of
categories of the target field.
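The Laplace-corrected confidence is a one-line computation (function name illustrative):

```python
def laplace_confidence(nfj_t, nf_t, k):
    """(N_f,j(t) + 1) / (N_f(t) + k): Laplace-corrected proportion of node
    records in the predicted category, with k target categories."""
    return (nfj_t + 1.0) / (nf_t + k)
```

The correction pulls confidences for small nodes toward 1/k, so a node with very few training records never reports a confidence of exactly 1.0.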
Blank Handling
In classification of new records, blanks are handled as they are during tree growth, using
surrogates where possible, and splitting based on weighted probabilities where necessary. For
more information, see the topic “Blank Handling” on p. 61.
CHAID Algorithms
Overview of CHAID
CHAID stands for Chi-squared Automatic Interaction Detector. It is a highly efficient statistical
technique for segmentation, or tree growing, developed by Kass (1980). Using the significance of
a statistical test as a criterion, CHAID evaluates all of the values of a potential predictor field. It
merges values that are judged to be statistically homogeneous (similar) with respect to the target
variable and maintains all other values that are heterogeneous (dissimilar).
It then selects the best predictor to form the first branch in the decision tree, such that each
child node is made of a group of homogeneous values of the selected field. This process continues
recursively until the tree is fully grown. The statistical test used depends upon the measurement
level of the target field. If the target field is continuous, an F test is used. If the target field is
categorical, a chi-squared test is used.
CHAID is not a binary tree method; that is, it can produce more than two categories at any
particular level in the tree. Therefore, it tends to create a wider tree than do the binary growing
methods. It works for all types of variables, and it accepts both case weights and frequency
variables. It handles missing values by treating them all as a single valid category.
Exhaustive CHAID
Exhaustive CHAID is a modification of CHAID developed to address some of the weaknesses
of the CHAID method (Biggs, de Ville, and Suen, 1991). In particular, sometimes CHAID may
not find the optimal split for a variable, since it stops merging categories as soon as it finds
that all remaining categories are statistically different. Exhaustive CHAID remedies this by
continuing to merge categories of the predictor variable until only two supercategories are left.
It then examines the series of merges for the predictor and finds the set of categories that gives
the strongest association with the target variable, and computes an adjusted p-value for that
association. Thus, Exhaustive CHAID can find the best split for each predictor, and then choose
which predictor to split on by comparing the adjusted p-values.
Exhaustive CHAID is identical to CHAID in the statistical tests it uses and in the way it treats
missing values. Because its method of combining categories of variables is more thorough than
that of CHAID, it takes longer to compute. However, if you have the time to spare, Exhaustive
CHAID is generally safer to use than CHAID. It often finds more useful splits, though depending
on your data, you may find no difference between Exhaustive CHAID and CHAID results.
Primary Calculations
The calculations directly involved in building the model are described below.
Frequency and Case Weight Fields
Frequency and case weight fields are useful for reducing the size of your dataset. Each has a
distinct function, though. If a case weight field is mistakenly specified to be a frequency field, or
vice versa, the resulting analysis will be incorrect.
For the calculations described below, if no frequency or case weight fields are specified, assume
that frequency and case weights for all records are equal to 1.0.
Frequency Fields
A frequency field represents the total number of observations represented by each record. It is
useful for analyzing aggregate data, in which a record represents more than one individual. The
sum of the values for a frequency field should always be equal to the total number of observations
in the sample. Note that output and statistics are the same whether you use a frequency field or
case-by-case data. The table below shows a hypothetical example, with the predictor fields sex
and employment and the target field response. The frequency field tells us, for example, that 10
employed men responded yes to the target question, and 19 unemployed women responded no.
Table 10-1
Dataset with frequency field

Sex   Employment   Response   Frequency
M     Y            Y          10
M     Y            N          17
M     N            Y          12
M     N            N          21
F     Y            Y          11
F     Y            N          15
F     N            Y          15
F     N            N          19
The use of a frequency field in this case allows us to process a table of 8 records instead of
case-by-case data, which would require 120 records.
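The equivalence between the frequency table and case-by-case data can be sketched in Python (the tuple representation is illustrative):

```python
# Table 10-1 as (sex, employment, response, frequency) rows.
table = [
    ("M", "Y", "Y", 10), ("M", "Y", "N", 17), ("M", "N", "Y", 12),
    ("M", "N", "N", 21), ("F", "Y", "Y", 11), ("F", "Y", "N", 15),
    ("F", "N", "Y", 15), ("F", "N", "N", 19),
]

# Expanding each row `frequency` times yields the equivalent
# case-by-case dataset of 120 records.
cases = [(s, e, r) for s, e, r, f in table for _ in range(f)]
```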
Case weights
The use of a case weight field gives unequal treatment to the records in a dataset. When a case
weight field is used, the contribution of a record in the analysis is weighted in proportion to
the population units that the record represents in the sample. For example, suppose that in
a direct marketing promotion, 10,000 households respond and 1,000,000 households do not
respond. To reduce the size of the data file, you might include all of the responders but only a
1% sample (10,000) of the nonresponders. You can do this if you define a case weight equal to
1 for responders and 100 for nonresponders.
Binning of Scale-Level Predictors
Scale level (continuous) predictor fields are automatically discretized or binned into a set of
ordinal categories. This process is performed once for each scale-level predictor in the model,
prior to applying the CHAID (or Exhaustive CHAID) algorithm. The binned categories are
determined as follows:
1. The data values yi are sorted.
2. For each unique value, starting with the smallest, calculate the relative (weighted) frequency of
values less than or equal to the current value yi:

F(yi) = Σ{k: yk ≤ yi} wk / W

where wk is the weight for record k (or 1.0 if no weights are defined), and W is the total weighted
frequency for all records in the training data, W = Σk wk.

3. Determine the bin to which the value belongs by comparing the relative frequency with the ideal
bin percentile cutpoints of 0.10, 0.20, 0.30, etc.; that is, the bin index for yi is the smallest b in
{1, ..., k} such that F(yi) ≤ b/k.
• If the bin index for this value is different from the bin index for the previous data value, add a new bin to the bin list and set its cutpoint to the current data value.
• If the bin index is the same as the bin index for the previous value, update the cut point for that bin to the current data value.
Normally, CHAID will try to create k = 10 bins by default. However, when the number of records
having a single value is large (or a set of records with the same value has a large combined
weighted frequency), the binning may result in fewer bins. This will happen if the weighted
frequency for records with the same value is greater than the expected weighted frequency in a bin
(1/kth of the total weighted frequency). This will also happen if there are fewer than k distinct
values for the binned field for records in the training data.
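A minimal sketch of this binning procedure (function name and return format are illustrative; it returns the bin cutpoints and naturally produces fewer than k bins when heavy ties are present):

```python
from math import ceil

def bin_values(values, weights=None, k=10):
    """Bin sorted data values into at most k weighted-percentile bins,
    returning the list of bin cutpoints (upper bounds)."""
    if weights is None:
        weights = [1.0] * len(values)
    total = sum(weights)
    # cumulative weight per unique value, processed in ascending order
    wsum = {}
    for v, w in zip(values, weights):
        wsum[v] = wsum.get(v, 0.0) + w
    cutpoints, cum, prev_bin = [], 0.0, None
    for v in sorted(wsum):
        cum += wsum[v]
        b = min(k, ceil(cum * k / total))  # smallest b with F(v) <= b/k
        if b != prev_bin:
            cutpoints.append(v)   # open a new bin at this value
        else:
            cutpoints[-1] = v     # extend the current bin's cutpoint
        prev_bin = b
    return cutpoints
```

With ten distinct equally weighted values this produces the full ten bins; if half the records share one value, the combined weight exceeds 1/k of the total and fewer bins result, as described above.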
Model Parameters
CHAID works with all types of continuous or categorical fields. However, continuous predictor
fields are automatically categorized for the purpose of the analysis. For more information, see the
topic “Binning of Scale-Level Predictors” on p. 74.
Note that you can set some of the options mentioned below using the Expert Options for
CHAID. These include the choice of the Pearson chi-squared or likelihood-ratio test, the level of
αmerge, the level of αsplit, score values, and details of stopping rules.
The CHAID algorithm proceeds as follows:
Merging Categories for Predictors (CHAID)
To determine each split, all predictor fields are merged to combine categories that are not
statistically different with respect to the target field. Each final category of a predictor field X
will represent a child node if X is used to split the node. The following steps are applied to each
predictor field X:
1. If X has one or two categories, no more categories are merged, so proceed to node splitting below.
2. Find the eligible pair of categories of X that is least significantly different (most similar) as
determined by the p-value of the appropriate statistical test of association with the target field. For
more information, see the topic “Statistical Tests Used” on p. 77.
For ordinal fields, only adjacent categories are eligible for merging; for nominal fields, all pairs
are eligible.
3. For the pair having the largest p-value, if the p-value is greater than αmerge, then merge the pair
of categories into a single category. Otherwise, skip to step 6.
4. If the user has selected the Allow splitting of merged categories option, and the newly formed
compound category contains three or more original categories, then find the best binary split
within the compound category (that for which the p-value of the statistical test is smallest). If that
p-value is less than or equal to αsplit-merge, perform the split to create two categories from the
compound category.
5. Continue merging categories from step 1 for this predictor field.
6. Any category with fewer than the user-specified minimum segment size records is merged
with the most similar other category (that which gives the largest p-value when compared with
the small category).
Merging Categories for Predictors (Exhaustive CHAID)
Exhaustive CHAID works much the same as CHAID, except that the category merging is more
thoroughly tested to find the ideal set of categories for each predictor field. As with regular
CHAID, each final category of a predictor field X will represent a child node if X is used to split
the node. The following steps are applied to each predictor field X:
1. For each predictor variable X, find the pair of categories of X that is least significantly different
(that is, has the largest p-value) with respect to the target variable Y. The method used to
calculate the p-value depends on the measurement level of Y. For more information, see the
topic “Statistical Tests Used” on p. 77.
2. Merge into a compound category the pair that gives the largest p-value.
3. Calculate the p-value based on the new set of categories of X. This represents one set of categories
for X. Remember the p-value and its corresponding set of categories.
4. Repeat steps 1, 2, and 3 until only two categories remain. Then, compare the sets of categories
of X generated during each step of the merge sequence, and find the one for which the p-value
in step 3 is the smallest. That set is the set of merged categories for X to be used in determining
the split at the current node.
Splitting Nodes
When categories have been merged for all predictor fields, each field is evaluated for its
association with the target field, based on the adjusted p-value of the statistical test of association,
as described below.
The predictor with the strongest association, indicated by the smallest adjusted p-value, is
compared to the split threshold, αsplit. If the p-value is less than or equal to αsplit, that field is
selected as the split field for the current node. Each of the merged categories of the split field
defines a child node of the split.
After the split is applied to the current node, the child nodes are examined to see if they warrant
splitting by applying the merge/split process to each in turn. Processing proceeds recursively until
one or more stopping rules are triggered for every unsplit node, and no further splits can be made.
Statistical Tests Used
Calculations of the unadjusted p-values depend on the type of the target field. During the merge
step, categories are compared pairwise, that is, one (possibly compound) category is compared
against another (possibly compound) category. For such comparisons, only records belonging to
one of the comparison categories in the current node are considered. During the split step, all
categories are considered in calculating the p-value, thus all records in the current node are used.
Scale Target Field (F Test).
For models with a scale-level target field, the p-value is calculated based on a standard
ANOVA F-test comparing the target field means across categories of the predictor field under
consideration. The F statistic is calculated as

F = [Σi Σn∈i wn fn (ȳi − ȳ)² / (I − 1)] / [Σi Σn∈i wn fn (yn − ȳi)² / (Nf − I)]

and the p-value is

p = Pr(F(I − 1, Nf − I) > F)

where

ȳi = Σn∈i wn fn yn / Σn∈i wn fn ,
ȳ = Σn wn fn yn / Σn wn fn ,
Nf = Σn fn ,

and F(I − 1, Nf − I) is a random variable following an F-distribution with (I − 1) and (Nf − I)
degrees of freedom.
Nominal Target Field (Chi-Squared Test)
If the target field Y is a set (categorical) field, the null hypothesis of independence of X and Y is
tested. To do the test, a contingency (count) table is formed using classes of Y as columns and
categories of the predictor X as rows. The expected cell frequencies under the null hypothesis of
independence are estimated. The observed cell frequencies and the expected cell frequencies are
used to calculate the chi-squared statistic, and the p-value is based on the calculated statistic.
Pearson Chi-squared test
The Pearson chi-square statistic is calculated as

X² = Σj Σi (nij − m̂ij)² / m̂ij

where nij = Σn fn I(xn = i ∧ yn = j) is the observed cell frequency and m̂ij is the expected
cell frequency for cell (xn = i, yn = j) from the independence model as described below. The
corresponding p-value is calculated as p = Pr(χ²d > X²), where χ²d follows a chi-square
distribution with d = (J − 1)(I − 1) degrees of freedom.
Likelihood-ratio Chi-squared test
The likelihood-ratio chi-square is calculated based on the expected and observed frequencies, as
described above. The likelihood ratio chi-square is calculated as

G² = 2 Σj Σi nij ln(nij / m̂ij)

and the p-value is calculated as

p = Pr(χ²d > G²)
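Both statistics can be sketched in Python using the no-case-weights expected frequencies m̂ij = ni. n.j / n.. described below (the function name is illustrative; zero cells contribute nothing to G²):

```python
from math import log

def chi_square_stats(counts):
    """Pearson X^2 and likelihood-ratio G^2 for an I x J contingency table
    (list of rows), with independence-model expected frequencies."""
    row = [sum(r) for r in counts]
    col = [sum(c) for c in zip(*counts)]
    n = sum(row)
    x2 = g2 = 0.0
    for i, r in enumerate(counts):
        for j, nij in enumerate(r):
            mij = row[i] * col[j] / n   # expected frequency m_ij
            x2 += (nij - mij) ** 2 / mij
            if nij > 0:
                g2 += 2.0 * nij * log(nij / mij)
    return x2, g2
```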
Expected frequencies for chi-squared tests
For models with no case weights, expected frequencies are calculated as

m̂ij = ni. n.j / n..

where

ni. = Σj nij ,  n.j = Σi nij ,  n.. = Σi Σj nij

If case weights are specified, the expected cell frequency under the null hypothesis of
independence takes the form

m̂ij = αi βj / w̄ij

where αi and βj are parameters to be estimated, and

w̄ij = wij / nij ,  wij = Σn wn fn I(xn = i ∧ yn = j)

The parameter estimates α̂i, β̂j, and hence m̂ij, are calculated based on the following iterative
procedure:
1. Initially, k = 0, αi(0) = βj(0) = 1, and hence m̂ij(0) = 1 / w̄ij.
2. αi(k+1) = ni. / Σj [βj(k) / w̄ij]
3. βj(k+1) = n.j / Σi [αi(k+1) / w̄ij]
4. m̂ij(k+1) = αi(k+1) βj(k+1) / w̄ij
5. If maxi,j |m̂ij(k+1) − m̂ij(k)| < ε, stop and output αi(k+1), βj(k+1), and m̂ij(k+1) as the final
estimates of αi, βj, and m̂ij. Otherwise, increment k and repeat from step 2.
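The iterative procedure can be sketched in Python (names, the ε tolerance, and the iteration cap are illustrative; with all average case weights equal to 1 it reduces to the no-case-weights formula):

```python
def expected_ipf(counts, wbar, eps=1e-8, max_iter=200):
    """Expected frequencies m_ij = alpha_i * beta_j / wbar_ij under
    independence, fitted by the iterative procedure above.

    counts: observed n_ij (list of rows); wbar: average case weight per cell."""
    I, J = len(counts), len(counts[0])
    row = [sum(r) for r in counts]
    col = [sum(c) for c in zip(*counts)]
    alpha, beta = [1.0] * I, [1.0] * J
    m = [[1.0 / wbar[i][j] for j in range(J)] for i in range(I)]
    for _ in range(max_iter):
        alpha = [row[i] / sum(beta[j] / wbar[i][j] for j in range(J))
                 for i in range(I)]
        beta = [col[j] / sum(alpha[i] / wbar[i][j] for i in range(I))
                for j in range(J)]
        m_new = [[alpha[i] * beta[j] / wbar[i][j] for j in range(J)]
                 for i in range(I)]
        delta = max(abs(m_new[i][j] - m[i][j])
                    for i in range(I) for j in range(J))
        m = m_new
        if delta < eps:
            break
    return m
```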
Ordinal Target Field (Row Effects Model)
If the target field Y is ordinal, the null hypothesis of independence of X and Y is tested against
the row effects model, with the rows being the categories of X and the columns the categories
of Y (Goodman, 1979). Two sets of expected cell frequencies, m̂ij (under the hypothesis of
independence) and m̃ij (under the hypothesis that the data follow the row effects model), are both
estimated. The likelihood ratio statistic is computed as

H² = 2 Σi Σj m̃ij ln(m̃ij / m̂ij)

and the p-value is calculated as

p = Pr(χ²(I − 1) > H²)
Expected Cell Frequencies for the Row Effects Model
For the row effects model, scores for categories of Y are needed. By default, the order of each
category is used as the category score. Users can specify their own set of scores. The expected
cell frequency under the row effects model is

m̃ij = αi βj γi^(sj − s̄) / w̄ij

where sj is the score for category j of Y, and

s̄ = Σj n.j sj / n..

in which αi, βj and γi are unknown parameters to be estimated. The parameter estimates α̂i, β̂j,
γ̂i, and hence m̃ij, are calculated using an iterative procedure analogous to the one given above for
the independence model: all parameters are initialized to 1, then αi, βj and γi are updated in turn
from the current expected frequencies, and the procedure stops when the maximum change in m̃ij
between successive iterations falls below ε, at which point the current values are taken as the
final estimates.
Bonferroni Adjustment
The adjusted p-value is calculated as the p-value times a Bonferroni multiplier. The Bonferroni
multiplier controls the overall p-value across multiple statistical tests.
Suppose that a predictor field originally has I categories, and it is reduced to r categories after
the merging step. The Bonferroni multiplier B is the number of possible ways that I categories
can be merged into r categories. For r = I, B = 1. For 2 ≤ r < I,

B = C(I−1, r−1)   for an ordinal predictor

B = Σv=0..r−1 (−1)^v (r − v)^I / (v! (r − v)!)   for a nominal predictor

B = C(I−2, r−2) + r C(I−2, r−1)   for an ordinal predictor with a missing value

where C(n, m) denotes the binomial coefficient.
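These multipliers can be computed directly (function names are illustrative; the nominal form counts the partitions of I categories into r nonempty groups, i.e., a Stirling number of the second kind):

```python
from math import comb, factorial

def bonferroni_ordinal(I, r):
    """Ordinal predictor: B = C(I-1, r-1)."""
    return comb(I - 1, r - 1)

def bonferroni_nominal(I, r):
    """Nominal predictor: B = sum_v (-1)^v (r-v)^I / (v! (r-v)!)."""
    return round(sum((-1) ** v * (r - v) ** I / (factorial(v) * factorial(r - v))
                     for v in range(r)))

def bonferroni_ordinal_missing(I, r):
    """Ordinal with a missing value: B = C(I-2, r-2) + r*C(I-2, r-1)."""
    return comb(I - 2, r - 2) + r * comb(I - 2, r - 1)
```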
Blank Handling
If the target field for a record is blank, or all the predictor fields are blank, the record is ignored in
model building. If case weights are specified and the case weight for a record is blank, zero, or
negative, the record is ignored, and likewise for frequency weights.
For other records, blanks in predictor fields are treated as an additional category for the field.
Ordinal Predictors
The algorithm first generates the best set of categories using all non-blank information. Then the
algorithm identifies the category that is most similar to the blank category. Finally, two p-values
are calculated: one for the set of categories formed by merging the blank category with its most
similar category, and the other for the set of categories formed by adding the blank category as a
separate category. The set of categories with the smallest p-value is used.
Nominal Predictors
The missing category is treated the same as other categories in the analysis.
Effect of Options
Stopping Rules
Stopping rules control how the algorithm decides when to stop splitting nodes in the tree. Tree
growth proceeds until every leaf node in the tree triggers at least one stopping rule. Any of the
following conditions will prevent a node from being split:
• The node is pure (all records have the same value for the target field).
• All records in the node have the same value for all predictor fields used by the model.
• The tree depth for the current node (the number of recursive node splits defining the current node) is the maximum tree depth (default or user-specified).
• The number of records in the node is less than the minimum parent node size (default or user-specified).
• The number of records in any of the child nodes resulting from the node’s best split is less than the minimum child node size (default or user-specified).
• The best split for the node yields a p-value that is greater than αsplit (default or user-specified).
Profits
Profits are numeric values associated with categories of a (symbolic) target field that can be used
to estimate the gain or loss associated with a segment. They define the relative value of each value
of the target field. Values are used in computing gains but not in tree growing.
Profit for each node in the tree is calculated as

P(t) = Σj Pj fj(t)

where j is the target field category, fj(t) is the sum of frequency field values for all records in node
t with category j for the target field, and Pj is the user-defined profit value for category j.
Score Values
Scores are available in CHAID and Exhaustive CHAID. They define the order and distance
between categories of an ordinal categorical target field. In other words, the scores define the
field’s scale. Values of scores are involved in tree growing.
If user-specified scores are provided, they are used in calculation of expected cell frequencies,
as described above.
Costs
Costs, if specified, are not taken into account in growing a CHAID tree. However, costs will be
incorporated into node assignment and risk estimation, as described in Predicted Values and
Risk Estimates, below.
Secondary Calculations
Secondary calculations are not directly related to building the model, but give you information
about the model and its performance.
Risk Estimates
Risk estimates describe the risk of error in predicted values for specific nodes of the tree and for
the tree as a whole.
Risk Estimates for Symbolic Target Field
For classification trees (with a symbolic target field), the risk estimate r(t) of a node t is computed
as

r(t) = (1/N_f) Σ_j C(j*(t)|j) N_f,j(t)

where C(j*(t)|j) is the misclassification cost of classifying a record with target value j as j*(t),
N_f,j(t) is the sum of the frequency weights for records in node t in category j (or the number of
records if no frequency weights are defined), and N_f is the sum of frequency weights for all
records in the training data.
Note that case weights are not considered in calculating risk estimates.
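As a sketch, the node risk r(t) = (1/N_f) Σ_j C(j*(t)|j) N_f,j(t) can be computed as follows; the node counts and cost matrix are invented:

```python
def node_risk(node_counts, predicted, cost, N_f):
    """r(t) = (1/N_f) * sum_j C(predicted | j) * N_f,j(t).
    node_counts: {category j: frequency-weight sum in node t}."""
    return sum(cost[(predicted, j)] * n for j, n in node_counts.items()) / N_f

# Hypothetical node: 40 records of 'yes', 10 of 'no'; predicted category 'yes'.
cost = {("yes", "yes"): 0.0, ("yes", "no"): 1.0,
        ("no", "yes"): 1.0, ("no", "no"): 0.0}
r_t = node_risk({"yes": 40.0, "no": 10.0}, "yes", cost, N_f=100.0)  # 10/100
```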
Risk Estimates for Numeric Target Field
For regression trees (with a numeric target field), the risk estimate r(t) of a node t is computed as

r(t) = (1/N_f(t)) Σ_{i∈t} f_i (y_i − ȳ(t))²

where f_i is the frequency weight for record i (a record assigned to node t), y_i is the value of the
target field for record i, and ȳ(t) is the weighted mean of the target field for all records in node t.
Tree Risk Estimate
For both classification trees and regression trees, the risk estimate R(T) for the tree (T) is
calculated by taking the sum of the risk estimates for the terminal nodes r(t):

R(T) = Σ_{t∈T′} r(t)

where T′ is the set of terminal nodes in the tree.
Gain Summary
The gain summary provides descriptive statistics for the terminal nodes of a tree.
If your target field is continuous (scale), the gain summary shows the weighted mean of the
target value for each terminal node,

gain(t) = Σ_{i∈t} f_i y_i / Σ_{i∈t} f_i

If your target field is symbolic (categorical), the gain summary shows the weighted percentage of
records in a selected target category j,

gain_j(t) = 100 Σ_{i∈t} f_i x_i(j) / Σ_{i∈t} f_i %

where x_i(j) = 1 if record x_i is in target category j, and 0 otherwise. If profits are defined for the
tree, the gain is the average profit value for each terminal node,

gain(t) = Σ_{i∈t} f_i P(x_i) / Σ_{i∈t} f_i

where P(x_i) is the profit value assigned to the target value observed in record x_i.
Generated Model/Scoring
Calculations done by the CHAID generated model are described below.
Predicted Values
New records are scored by following the tree splits to a terminal node of the tree. Each terminal
node has a particular predicted value associated with it, determined as follows:
Classification Trees
For trees with a symbolic target field, each terminal node’s predicted category is the category with
the lowest weighted cost for the node. This weighted cost is calculated as

C(i|t) = Σ_j C(i|j) p(j|t)

where C(i|j) is the user-specified misclassification cost for classifying a record as category i when
it is actually category j, and p(j|t) is the conditional weighted probability of a record being in
category j given that it is in node t, defined as

p(j|t) = [ π(j) N_w,j(t) / N_w,j ] / Σ_k [ π(k) N_w,k(t) / N_w,k ]

where π(j) is the prior probability for category j, N_w,j(t) is the weighted number of records in node
t with category j (or the number of records if no frequency or case weights are defined),

N_w,j(t) = Σ_{i∈t: y_i = j} w_i f_i

and N_w,j is the weighted number of records in category j (any node),

N_w,j = Σ_{i: y_i = j} w_i f_i
Regression Trees
For trees with a numeric target field, each terminal node’s predicted value is the weighted mean
of the target values for records in the node. This weighted mean is calculated as

ȳ(t) = (1/N_w(t)) Σ_{i∈t} w_i f_i y_i

where N_w(t) is defined as

N_w(t) = Σ_{i∈t} w_i f_i
Confidence
For classification trees, confidence values for records passed through the generated model are
calculated as follows. For regression trees, no confidence value is assigned.
Classification Trees
Confidence for a scored record is the proportion of weighted records in the training data in the
scored record’s assigned terminal node that belong to the predicted category, modified by the
Laplace correction:

confidence(t) = ( N_f,j*(t) + 1 ) / ( N_f(t) + k )

where k is the number of categories of the target field.
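A minimal sketch, assuming the common Laplace form (N_f,j*(t) + 1) / (N_f(t) + k) with k the number of target categories; the counts are invented:

```python
def laplace_confidence(n_predicted, n_node, k):
    """(N_f,j*(t) + 1) / (N_f(t) + k), assuming k = number of target
    categories. n_predicted: weight in the predicted category,
    n_node: total weight in the terminal node."""
    return (n_predicted + 1.0) / (n_node + k)

# Invented node: 45 of 50 weighted records in the predicted category,
# binary target (k = 2).
conf = laplace_confidence(n_predicted=45.0, n_node=50.0, k=2)  # 46/52
```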
Blank Handling
In classification of new records, blanks are handled as they are during tree growth, being treated as
an additional category (possibly merged with other non-blank categories). For more information,
see the topic “Blank Handling” on p. 81.
For nodes where there were no blanks in the training data, a blank category will not exist for
the split of that node. In that case, records with a blank value for the split field are assigned a
null value.
Cluster Evaluation Algorithms
This document describes measures used for evaluating clustering models.

The Silhouette coefficient combines the concepts of cluster cohesion (favoring models which
contain tightly cohesive clusters) and cluster separation (favoring models which contain
highly separated clusters). It can be used to evaluate individual objects, clusters, and models.

The sum of squares error (SSE) is a measure of prototype-based cohesion, while sum of
squares between (SSB) is a measure of prototype-based separation.

Predictor importance indicates how well the variable can differentiate different clusters. For
both range (numeric) and discrete variables, the higher the importance measure, the less
likely the variation for a variable between clusters is due to chance and more likely due to
some underlying difference.
Notation
The following notation is used throughout this chapter unless otherwise stated:
x_ik — Continuous variable k in case i (standardized).
x_iks — The sth category of variable k in case i (one-of-c coding).
N — Total number of valid cases.
N_j — The number of cases in cluster j.
Y — Variable with J cluster labels.
μ_jk — The centroid of cluster j for variable k.
d(i, j) — The distance between case i and the centroid of cluster j.
d(u, j) — The distance between the overall mean u and the centroid of cluster j.
Goodness Measures
The average Silhouette coefficient is simply the average over all cases of the following calculation
for each individual case:

s = ( B − A ) / max( A, B )

where A is the average distance from the case to every other case assigned to the same cluster and
B is the minimal average distance from the case to cases of a different cluster across all clusters.
Unfortunately, this coefficient is computationally expensive. In order to ease this burden, we use
the following definitions of A and B:

A is the distance from the case to the centroid of the cluster to which the case belongs;

B is the minimal distance from the case to the centroid of every other cluster.
Distances may be calculated using Euclidean distances. The Silhouette coefficient and its average
range between −1, indicating a very poor model, and 1, indicating an excellent model. As found
by Kaufman and Rousseeuw (1990), an average silhouette greater than 0.5 indicates reasonable
partitioning of data; less than 0.2 means that the data do not exhibit cluster structure.
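The centroid-based simplification of A and B described above can be sketched as follows; the point coordinates and centroids are invented:

```python
import math

def simplified_silhouette(point, centroids, own_label):
    """s = (B - A) / max(A, B), with A the Euclidean distance to the
    case's own centroid and B the minimal distance to any other
    centroid."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    A = dist(point, centroids[own_label])
    B = min(dist(point, c) for lbl, c in centroids.items() if lbl != own_label)
    return (B - A) / max(A, B)

# Invented 1-D example with two cluster centroids.
s = simplified_silhouette((0.1,), {"c1": (0.0,), "c2": (1.0,)}, "c1")
```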
Data Preparation
Before calculating the Silhouette coefficient, we need to transform cases as follows:
1. Recode categorical variables using one-of-c coding. If a variable has c categories, then it is stored
as c vectors, with the first category denoted (1,0,...,0), the next category (0,1,0,...,0), ..., and the
final category (0,0,...,0,1). The order of the categories is based on the ascending sort or lexical
order of the data values.
2. Rescale continuous variables. Continuous variables are normalized to the interval [−1, 1] using the
transformation [2*(x−min)/(max−min)]−1. This normalization tries to equalize the contributions
of continuous and categorical features to the distance computations.
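The two preparation steps can be sketched as:

```python
def one_of_c(value, categories):
    """One-of-c coding: category order is the ascending sort of values."""
    cats = sorted(categories)
    return [1 if value == c else 0 for c in cats]

def rescale(x, lo, hi):
    """Map a continuous value from [lo, hi] to [-1, 1] via
    2*(x - min)/(max - min) - 1."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

v = one_of_c("red", {"red", "green", "blue"})  # sorted: blue, green, red
z = rescale(75.0, 50.0, 100.0)                 # midpoint maps to 0.0
```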
Basic Statistics
The following statistics are collected in order to compute the goodness measures: the centroid
μ_jk of variable k for cluster j, the distance between a case and the centroid, and the overall
mean u.
For μ_jk with an ordinal or continuous variable k, we average all standardized values of variable
k within cluster j. For nominal variables, μ_jk is a vector {μ_jks} of probabilities of occurrence
for each state s of variable k for cluster j. Note that in counting μ_jk, we do not consider cases with
missing values in variable k. If the value of variable k is missing for all cases within cluster j,
μ_jk is marked as missing.
The distance d(i, j) between case i and the centroid of cluster j can be calculated in terms of the
weighted sum of the distance components d²_ijk across all variables; that is

d(i, j) = √( Σ_k w_ijk d²_ijk / Σ_k w_ijk )

where w_ijk denotes a weight. At this point, we do not consider differential weights, thus w_ijk
equals 1 if the variable k in case i is valid, 0 if not. If all w_ijk equal 0, set d(i, j) = 0.
The distance component d²_ijk is calculated as follows for ordinal and continuous variables:

d²_ijk = ( x_ik − μ_jk )²

For binary or nominal variables, it is

d²_ijk = (1/S_k) Σ_{s=1}^{S_k} ( x_iks − μ_jks )²

where variable k uses one-of-c coding, and S_k is the number of its states.
The calculation of d(u, j) is the same as that of d(i, j), but the overall mean u_k is used in place
of x_ik, and u_ks is used in place of x_iks.
Silhouette Coefficient
The Silhouette coefficient of case i is

s_i = ( min_{j′} d(i, j′) − d(i, j) ) / max( min_{j′} d(i, j′), d(i, j) )

where j′ denotes cluster labels which do not include case i as a member, while j is the cluster
label which includes case i. If the denominator equals 0, the Silhouette of case i is
not used in the average operations.
Based on these individual data, the total average Silhouette coefficient is:

SC = (1/N) Σ_i s_i
Sum of Squares Error (SSE)
SSE is a prototype-based cohesion measure where the squared Euclidean distance is used. In order
to compare between models, we will use the averaged form, defined as:

Average SSE = (1/N) Σ_j Σ_{i∈cluster j} d²(i, j)
Sum of Squares Between (SSB)
SSB is a prototype-based separation measure where the squared Euclidean distance is used. In
order to compare between models, we will use the averaged form, defined as:

Average SSB = (1/N) Σ_j N_j d²(u, j)
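Both averaged measures can be sketched together; the tiny one-dimensional data set is invented:

```python
def averaged_sse(points, centroids, labels):
    """Average SSE: mean squared Euclidean distance of each case to its
    cluster centroid."""
    total = sum(sum((p - c) ** 2 for p, c in zip(pt, centroids[l]))
                for pt, l in zip(points, labels))
    return total / len(points)

def averaged_ssb(points, centroids, labels):
    """Average SSB: (1/N) * sum_j N_j * squared distance from the
    centroid of cluster j to the overall mean."""
    n = len(points)
    dim = len(points[0])
    mean = [sum(pt[d] for pt in points) / n for d in range(dim)]
    counts = {l: labels.count(l) for l in set(labels)}
    return sum(nj * sum((centroids[l][d] - mean[d]) ** 2 for d in range(dim))
               for l, nj in counts.items()) / n

pts = [(0.0,), (2.0,), (10.0,), (12.0,)]
cents = {"a": (1.0,), "b": (11.0,)}
labs = ["a", "a", "b", "b"]
sse = averaged_sse(pts, cents, labs)  # (1+1+1+1)/4 = 1.0
ssb = averaged_ssb(pts, cents, labs)  # (2*25 + 2*25)/4 = 25.0
```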
Predictor Importance
The importance of field i is defined as

Importance_i = −log10( sig_i ) / max_{j∈Ω} ( −log10( sig_j ) )

where Ω denotes the set of predictor and evaluation fields, and sig_i is the significance or
p-value computed from applying a certain test, as described below. If sig_i equals zero, set
sig_i = MinDouble, where MinDouble is the minimal double value.
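A sketch of the normalization, assuming the form Importance_i = −log10(sig_i) / max_j(−log10(sig_j)); the p-values are invented and the tests that produce them are omitted:

```python
import math

def importance(p_values, min_double=2.2250738585072014e-308):
    """Normalize -log10 of each p-value by the largest such value.
    Zero p-values are clamped to the smallest positive double."""
    logs = {f: -math.log10(max(p, min_double)) for f, p in p_values.items()}
    top = max(logs.values())
    return {f: v / top for f, v in logs.items()}

# Invented p-values for three hypothetical fields.
imp = importance({"age": 1e-6, "income": 1e-3, "region": 0.5})
```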
Across Clusters
The p-value for categorical fields is based on Pearson’s chi-square. It is calculated by

p-value = Prob( χ²_d > X² )

where

X² = Σ_{i=1}^{I} Σ_{j=1}^{J} ( N_ij − E_ij )² / E_ij

and E_ij = N_i· N_·j / N, with N_ij the number of cases in category i and cluster j,
N_i· = Σ_j N_ij, and N_·j = Σ_i N_ij.

If N = 0, the importance is set to be undefined or unknown;
If N_i· = 0, subtract one from I for each such category to obtain I′;
If N_·j = 0, subtract one from J for each such cluster to obtain J′;
If I′ ≤ 1 or J′ ≤ 1, the importance is set to be undefined or unknown.

The degrees of freedom are d = (I′ − 1)(J′ − 1).
The p-value for continuous fields is based on an F test. It is calculated by

p-value = Prob{ F(J′ − 1, N − J′) > F }

where

F = [ Σ_{j=1}^{J} N_j ( x̄_j − x̄ )² / (J′ − 1) ] / [ Σ_{j=1}^{J} (N_j − 1) s_j² / (N − J′) ]

with x̄_j and s_j² the sample mean and variance of the field in cluster j, and x̄ the overall
sample mean.

If N = 0, the importance is set to be undefined or unknown;
If N_j = 0, subtract one from J for each such cluster to obtain J′;
If J′ ≤ 1 or N ≤ J′, the importance is set to be undefined or unknown;
If the denominator in the formula for the F statistic is zero, the importance is set to be
undefined or unknown;
If the numerator in the formula for the F statistic is zero, set p-value = 1.

The degrees of freedom are (J′ − 1, N − J′).
Within Clusters
The null hypothesis for categorical fields is that the proportion of cases in the categories in
cluster j is the same as the overall proportion.
The chi-square statistic for cluster j is computed as follows

χ²_j = Σ_{i=1}^{I} ( N_ij − N_j N_i·/N )² / ( N_j N_i·/N )

If N_j = 0, the importance is set to be undefined or unknown;
If N_i· = 0, subtract one from I for each such category to obtain I′;
If I′ ≤ 1, the importance is set to be undefined or unknown.

The degrees of freedom are d = I′ − 1.
The null hypothesis for continuous fields is that the mean in cluster j is the same as the overall
mean.
The Student’s t statistic for cluster j is computed as follows

t = ( x̄_j − x̄ ) / ( s_j / √N_j )

with N_j − 1 degrees of freedom.

If N_j ≤ 1 or s_j = 0, the importance is set to be undefined or unknown;
If the numerator is zero, set p-value = 1.

Here, the p-value based on Student’s t distribution is calculated as

p-value = 1 − Prob{ |T| ≤ |t| }.
References
Kaufman, L., and P. J. Rousseeuw. 1990. Finding groups in data: An introduction to cluster
analysis. New York: John Wiley and Sons.
Tan, P., M. Steinbach, and V. Kumar. 2006. Introduction to Data Mining. Boston: Addison-Wesley.
COXREG Algorithms
Cox Regression Algorithms
Cox (1972) first suggested the models in which factors related to lifetime have a multiplicative
effect on the hazard function. These models are called proportional hazards models. Under the
proportional hazards assumption, the hazard function h of t given x is of the form

h(t|x) = h₀(t) exp(x′β)

where x is a known vector of regressor variables associated with the individual, β is a vector of
unknown parameters, and h₀(t) is the baseline hazard function for an individual with x = 0.
Hence, for any two covariate sets x₁ and x₂, the log hazard functions ln h(t|x₁) and ln h(t|x₂)
should be parallel across time.
When a factor does not affect the hazard function multiplicatively, stratification may be useful in
model building. Suppose that individuals can be assigned to one of m different strata, defined
by the levels of one or more factors. The hazard function for an individual in the jth stratum is
defined as
h_j(t|x) = h_j0(t) exp(x′β)

There are two unknown components in the model: the regression parameter β and the baseline
hazard function h_j0(t). The estimation for the parameters is described below.
Estimation
We begin by considering a nonnegative random variable T representing the lifetimes of individuals
in some population. Let f(t|x) denote the probability density function (pdf) of T given a regressor
x and let S(t|x) be the survivor function (the probability of an individual surviving until time
t). Hence

S(t|x) = ∫_t^∞ f(u|x) du

The hazard h(t|x) is then defined by

h(t|x) = f(t|x) / S(t|x)

Another useful expression for S(t|x) in terms of h(t|x) is

S(t|x) = exp( −∫_0^t h(u|x) du )

Thus,

f(t|x) = h(t|x) exp( −∫_0^t h(u|x) du )

For some purposes, it is also useful to define the cumulative hazard function

H(t|x) = ∫_0^t h(u|x) du = −ln S(t|x)
Under the proportional hazard assumption, the survivor function can be written as

S(t|x) = [ S₀(t) ]^{exp(x′β)}

where S₀(t) is the baseline survivor function defined by

S₀(t) = exp( −H₀(t) )

and

H₀(t) = ∫_0^t h₀(u) du

Some relationships between S(t|x), H(t|x) and h(t|x), and S₀(t), H₀(t) and h₀(t) which will be
used later are

H(t|x) = −ln S(t|x) = exp(x′β) H₀(t)
h(t|x) = exp(x′β) h₀(t)

To estimate the survivor function S(t|x), we can see from the equation for the survivor function
that there are two components, β and S₀(t), which need to be estimated. The approach we use
here is to estimate β from the partial likelihood function and then to maximize the full likelihood
for S₀(t).
Estimation of Beta
Assume that

There are m levels for the stratification variable.

Individuals in the same stratum have proportional hazard functions.

The relative effect of the regressor variables is the same in each stratum.
Let t_j1 < t_j2 < … < t_jk_j be the observed uncensored failure times of the individuals in the jth
stratum and x_ji be the corresponding covariates. Then the partial likelihood function is
defined by

L = Π_j Π_{i=1}^{k_j} exp( S′_ji β ) / [ Σ_{l∈R_ji} w_l exp( x′_l β ) ]^{d_ji}

where d_ji is the sum of case weights of individuals whose lifetime is equal to t_ji and S_ji is
the weighted sum of the regression vector x for those d_ji individuals, w_l is the case weight of
individual l, and R_ji is the set of individuals alive and uncensored just prior to t_ji in the jth
stratum. Thus the log-likelihood arising from the partial likelihood function is

l = ln L = Σ_j Σ_{i=1}^{k_j} [ S′_ji β − d_ji ln( Σ_{l∈R_ji} w_l exp( x′_l β ) ) ]
and the first derivatives of l are

∂l/∂β_r = Σ_j Σ_{i=1}^{k_j} [ S_jir − d_ji Σ_{l∈R_ji} w_l x_lr exp( x′_l β ) / Σ_{l∈R_ji} w_l exp( x′_l β ) ]

where S_jir is the rth component of S_ji. The maximum partial likelihood estimate
(MPLE) β̂ of β is obtained by setting ∂l/∂β_r equal to zero for r = 1, …, p, where p is the number of
independent variables in the model. The equations ∂l/∂β_r = 0, r = 1, …, p, can usually be
solved by using the Newton-Raphson method.
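The Newton-Raphson solution of the score equations can be sketched for the simplest case: one stratum, one covariate, unit case weights, and no tied failure times. The data below are invented:

```python
import math

def cox_newton(times, events, x, iters=20):
    """Newton-Raphson for the partial-likelihood score of a one-covariate
    Cox model (single stratum, no ties, unit case weights).
    Risk set for failure time t_i: cases with time >= t_i."""
    beta = 0.0
    n = len(times)
    for _ in range(iters):
        score, info = 0.0, 0.0
        for i in range(n):
            if not events[i]:
                continue  # censored cases contribute only through risk sets
            risk = [j for j in range(n) if times[j] >= times[i]]
            s0 = sum(math.exp(beta * x[j]) for j in risk)
            s1 = sum(x[j] * math.exp(beta * x[j]) for j in risk)
            s2 = sum(x[j] ** 2 * math.exp(beta * x[j]) for j in risk)
            score += x[i] - s1 / s0            # first-derivative term
            info += s2 / s0 - (s1 / s0) ** 2   # minus second derivative
        beta += score / info
    return beta

# Invented data: x = 1 cases interleaved with x = 0 cases.
beta_hat = cox_newton([2.0, 4.0, 6.0, 1.0, 3.0, 5.0],
                      [1, 1, 1, 1, 1, 1],
                      [0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
```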
Note from its equation that the partial likelihood function L is invariant under
translation. All the covariates are centered by their corresponding overall mean. The overall mean
of a covariate is defined as the sum of the product of weight and covariate for all the censored and
uncensored cases in each stratum, divided by the sum of the weights. For notational simplicity, x
used in the Estimation section denotes centered covariates.
Three convergence criteria for the Newton-Raphson method are available:

Absolute value of the largest difference in parameter estimates between iterations divided
by the value of the parameter estimate for the previous iteration; that is,

BCON = max_r | ( β_r^(i) − β_r^(i−1) ) / β_r^(i−1) |

where i is the iteration number.
 Absolute difference of the log-likelihood function between iterations divided by the
log-likelihood function for previous iteration.

Maximum number of iterations.
The asymptotic covariance matrix for the MPLE β̂ is estimated by I⁻¹, where I is the
information matrix containing minus the second partial derivatives of ln L. The (r, s)-th
element of I is defined by

I_rs = − ∂² ln L / ∂β_r ∂β_s
     = Σ_j Σ_i d_ji [ Σ_{l∈R_ji} w_l x_lr x_ls exp( x′_l β ) / Σ_{l∈R_ji} w_l exp( x′_l β )
       − ( Σ_{l∈R_ji} w_l x_lr exp( x′_l β ) ) ( Σ_{l∈R_ji} w_l x_ls exp( x′_l β ) )
         / ( Σ_{l∈R_ji} w_l exp( x′_l β ) )² ]

We can also write I in a matrix form as

I = Σ_j Σ_i d_ji X′(t_ji) V(t_ji) X(t_ji)

where X(t_ji) is an n_ji × p matrix which represents the p covariate variables in the model evaluated
at time t_ji, n_ji is the number of distinct individuals in R_ji, and V(t_ji) is an n_ji × n_ji
matrix with the lth diagonal element v_ll defined by

v_ll = p_l ( 1 − p_l ),  p_l = w_l exp( x′_l β ) / Σ_{h∈R_ji} w_h exp( x′_h β )

and the (l, k) element v_lk (l ≠ k) defined by

v_lk = − p_l p_k
Estimation of the Baseline Function
After the MPLE β̂ of β is found, the baseline survivor function S₀(t) is estimated separately for
each stratum; the stratum subscript is suppressed below. Assume that, for a stratum,
t₁ < t₂ < … < t_k are observed lifetimes in the sample. There are n_i at risk and d_i deaths at t_i,
and in the interval [t_{i−1}, t_i) there are λ_i censored times. Since S₀(t) is a survivor function,
it is non-increasing and left continuous, and thus S₀(t) must be constant except for jumps at the
observed lifetimes t₁, …, t_k. Further, it follows that S₀(t₁) = 1 and S₀(t_i+) = S₀(t_{i+1}).
Writing S₀(t_i+) = p_i, the observed likelihood function is of the form

L₁ = Π_{i=1}^{k} [ Π_{l∈D_i} ( p_{i−1}^{exp(x′_l β)} − p_i^{exp(x′_l β)} )^{w_l} ]
     [ Π_{l∈C_i} p_i^{w_l exp(x′_l β)} ]

where D_i is the set of individuals dying at t_i and C_i is the set of individuals with censored times in
[t_i, t_{i+1}). (Note that if the last observation is uncensored, C_k is empty and p_k = 0.)
If we let α_i = p_i / p_{i−1} (with p₀ = 1), L₁ can be written as

L₁ = Π_{i=1}^{k} [ Π_{l∈D_i} ( 1 − α_i^{exp(x′_l β)} )^{w_l} ] [ Π_{l∈R_i−D_i} α_i^{w_l exp(x′_l β)} ]

Differentiating ln L₁ with respect to α₁, …, α_k and setting the equations equal to zero, we get

Σ_{l∈D_i} w_l exp( x′_l β ) / ( 1 − α_i^{exp(x′_l β)} ) = Σ_{l∈R_i} w_l exp( x′_l β ),  i = 1, …, k

We then plug the MPLE β̂ of β into this equation and solve these k equations separately.
There are two things worth noting:

If D_i contains only one individual l, α_i can be solved explicitly:

α_i = ( 1 − w_l exp( x′_l β ) / Σ_{h∈R_i} w_h exp( x′_h β ) )^{exp(−x′_l β)}

If D_i contains more than one individual, the equation for α_i must be solved iteratively. A
good initial value for α_i is

α_i⁰ = exp( −d_i / Σ_{l∈R_i} w_l exp( x′_l β ) )

where d_i is the weight sum for set D_i. (See Lawless, 1982, p. 361.)
Once the α_i, i = 1, …, k, are found, S₀(t) is estimated by

Ŝ₀(t) = Π_{i: t_i < t} α̂_i

Since the above estimate of S₀(t) requires some iterative calculations when ties exist, Breslow
(1974) suggests using α_i⁰ as an estimate; however, we will use it only as an initial estimate.
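The Breslow-style estimate α_i⁰ = exp(−d_i / Σ_{l∈R_i} w_l exp(x′_l β)) and the resulting baseline survivor estimate can be sketched as follows (one stratum, unit case weights; the data are invented):

```python
import math

def breslow_baseline(times, events, x, beta):
    """Breslow-style baseline survivor estimate for one stratum with unit
    case weights: alpha_i = exp(-d_i / sum_{l in R_i} exp(x_l * beta)),
    S0(t) = product of alpha_i over failure times t_i < t."""
    fail_times = sorted({t for t, e in zip(times, events) if e})
    alpha = {}
    for ti in fail_times:
        d_i = sum(1 for t, e in zip(times, events) if e and t == ti)
        denom = sum(math.exp(beta * xi)
                    for t, xi in zip(times, x) if t >= ti)
        alpha[ti] = math.exp(-d_i / denom)

    def s0(t):
        prod = 1.0
        for ti in fail_times:
            if ti < t:
                prod *= alpha[ti]
        return prod

    return s0

# Invented data: two failures and one censored case, beta = 0.
s0 = breslow_baseline([1.0, 2.0, 3.0], [1, 1, 0], [0.0, 0.0, 0.0], beta=0.0)
```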
The asymptotic variance for Ŝ₀(t) can be found in Chapter 4 of Kalbfleisch and Prentice
(1980). At a specified time t, it is consistently estimated by a quadratic form

Var( Ŝ₀(t) ) = Ŝ₀(t)² ( V(t) + a′ I⁻¹ a )

where V(t) collects the variance contributions of the estimated jumps α̂_i with t_i < t, a is a p×1
vector with the jth element defined by

a_j = ∂ ln Ŝ₀(t) / ∂β_j

and I is the information matrix. The asymptotic variance of Ŝ(t|x) is estimated by the same
expression with Ŝ₀(t) replaced by Ŝ(t|x) and a adjusted for the covariate pattern x.
Selection Statistics for Stepwise Methods
The same methods for variable selection are offered as in binary logistic regression. For more
information, see the topic “Stepwise Variable Selection ” on p. 253. Here we will only define the
three removal statistics—Wald, LR, and Conditional—and the Score entry statistic.
Score Statistic
The score statistic is calculated for every variable not in the model to decide which variable should
be added to the model. First we compute the information matrix I for all eligible variables based
on the parameter estimates for the variables in the model and zero parameter estimates for the
variables not in the model. Then we partition the resulting I into four submatrices as follows:
I = [ A₁₁  A₁₂ ]
    [ A₂₁  A₂₂ ]

where A₁₁ and A₂₂ are square matrices for variables in the model and variables not in the model,
respectively, and A₁₂ = A₂₁′ is the cross-product matrix for variables in and out. The score statistic
for variable x_i is defined by

Score = u′_i B_i⁻¹ u_i

where u_i is the first derivative of the log-likelihood with respect to all the parameters associated
with x_i, B_i is equal to A₂₂,i − A₂₁,i A₁₁⁻¹ A₁₂,i, and A₂₂,i, A₂₁,i and A₁₂,i are the submatrices
in A₂₂, A₂₁ and A₁₂ associated with variable x_i.
Wald Statistic
The Wald statistic is calculated for the variables in the model to select variables for removal.
The Wald statistic for variable x_i is defined by

Wald = β̂′_i B_i⁻¹ β̂_i

where β̂_i is the parameter estimate associated with x_i and B_i is the submatrix of I⁻¹ associated
with x_i.
LR (Likelihood Ratio) Statistic
The LR statistic is defined as twice the log of the ratio of the likelihood functions of two models
evaluated at their own MPLEs. Assume that r variables are in the current model and let us call the
current model the full model. Based on the MPLEs of parameters for the full model, l(full) is
defined in “Estimation of Beta ”. For each of r variables deleted from the full model, MPLEs
are found and the reduced log-likelihood function, l(reduced), is calculated. Then the LR statistic
is defined as

–2(l(reduced) – l(full))
Conditional Statistic
The conditional statistic is also computed for every variable in the model. The formula for the
conditional statistic is the same as the LR statistic except that the parameter estimates for each
reduced model are conditional estimates, not MPLEs. The conditional estimates are defined as
follows. Let β̂ = (β̂₁, …, β̂_r) be the MPLEs for the r variables (blocks) and C be the asymptotic
covariance matrix for the parameters. The conditional estimate for the parameters left in the
model given β̂_i is

β̃_(i) = β̂_(i) − C₁₂ C₂₂⁻¹ β̂_i

where β̂_i is the MPLE for the parameter(s) associated with x_i and β̂_(i) is β̂ without β̂_i, C₁₂ is
the covariance between the parameter estimates left in the model β̂_(i) and β̂_i, and C₂₂ is the
covariance of β̂_i. Then the conditional statistic for variable x_i is defined by

−2( l(β̃_(i)) − l(full) )

where l(β̃_(i)) is the log-likelihood function evaluated at β̃_(i).
Note that all four of these statistics have a chi-square distribution with degrees of freedom equal
to the number of parameters the corresponding model has.
Statistics
The following output statistics are available.
Initial Model Information
The initial model for the first method is for a model that does not include covariates. The
log-likelihood function l is equal to

l(0) = − Σ_j Σ_{i=1}^{k_j} d_ji ln( W_ji )

where W_ji is the sum of weights of individuals in set R_ji.
Model Information
When a stepwise method is requested, at each step, the −2 log-likelihood function and three
chi-square statistics (model chi-square, improvement chi-square, and overall chi-square) and their
corresponding degrees of freedom and significance are printed.
–2 Log-Likelihood
−2 l(β̂)

where β̂ is the MPLE of β for the current model.
Improvement Chi-Square
(–2 log-likelihood function for previous model) – ( –2 log-likelihood function for current model).
The previous model is the model from the last step. The degrees of freedom are equal to the
absolute value of the difference between the number of parameters estimated in these two models.
Model Chi-Square
(–2 log-likelihood function for initial model) – ( –2 log-likelihood function for current model).
The initial model is the final model from the previous method. The degrees of freedom are equal
to the absolute value of the difference between the number of parameters estimated in these
two models.
Note: The values of the model chi-square and improvement chi-square can be less than or equal to
zero. If the degrees of freedom are equal to zero, the chi-square is not printed.
Overall Chi-Square
The overall chi-square statistic tests the hypothesis that all regression coefficients for the variables
in the model are identically zero. This statistic is defined as

u′(0) I⁻¹(0) u(0)

where u(0) represents the vector of first derivatives of the partial log-likelihood function evaluated
at β = 0. The elements of u and I are defined in “Estimation of Beta ”.
Information for Variables in the Equation
For each of the single variables in the equation, MPLE, SE for MPLE, Wald statistic, and its
corresponding df, significance, and partial R are given. For a single variable, R is defined by

R = ± √( ( Wald − 2 ) / ( −2 log-likelihood for the initial model ) )   (sign of the MPLE)

if Wald > 2. Otherwise R is set to zero. For a multiple category variable, only the Wald statistic,
df, significance, and partial R are printed, where R is defined by

R = √( ( Wald − 2 df ) / ( −2 log-likelihood for the initial model ) )

if Wald > 2 df. Otherwise R is set to zero.
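A sketch of the partial R computation, assuming the form R = ±√((Wald − 2·df)/(−2 log-likelihood of the initial model)) with R = 0 when Wald ≤ 2·df; the inputs below are invented:

```python
import math

def partial_r(wald, df, initial_minus2ll, sign=1.0):
    """Partial R: sign * sqrt((Wald - 2*df) / (-2LL of initial model))
    when Wald > 2*df, else 0. For a single variable df = 1 and the sign
    is taken from the MPLE."""
    if wald <= 2.0 * df:
        return 0.0
    return sign * math.sqrt((wald - 2.0 * df) / initial_minus2ll)

r1 = partial_r(wald=18.0, df=1, initial_minus2ll=400.0)  # sqrt(16/400)
r2 = partial_r(wald=1.5, df=1, initial_minus2ll=400.0)   # below threshold
```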
Information for the Variables Not in the Equation
For each of the variables not in the equation, the Score statistic is calculated and its corresponding
degrees of freedom, significance, and partial R are printed. The partial R for variables not in the
equation is defined similarly to the R for the variables in the equation by changing the Wald
statistic to the Score statistic.
There is one overall statistic called the residual chi-square. This statistic tests if all regression
coefficients for the variables not in the equation are zero. It is defined by

u′(β̂) B⁻¹ u(β̂)

where u(β̂) is the vector of first derivatives of the partial log-likelihood function with respect to
all the parameters not in the equation, evaluated at the MPLE β̂, and B is equal to
A₂₂ − A₂₁ A₁₁⁻¹ A₁₂ as defined in “Score Statistic ”.
Survival Table
For each stratum, the estimates of the baseline cumulative survival S₀ and hazard H₀ functions
and their standard errors are computed. The cumulative hazard function H₀(t) is estimated by

Ĥ₀(t) = −ln Ŝ₀(t)

and the asymptotic variance of Ŝ₀(t) is defined in “Estimation of the Baseline Function ”. Finally,
the cumulative hazard function H(t|x) and survival function S(t|x) are estimated by

Ĥ(t|x) = exp( x′β̂ ) Ĥ₀(t)

and, for a given x,

Ŝ(t|x) = [ Ŝ₀(t) ]^{exp(x′β̂)}

The asymptotic variances are

Var( Ĥ(t|x) ) = exp( 2 x′β̂ ) Var( Ĥ₀(t) )

and

Var( Ŝ(t|x) ) = [ Ŝ(t|x) ]² exp( 2 x′β̂ ) Var( Ĥ₀(t) )
Plots
For a specified pattern, the covariate values x_c are determined and x′_c β̂ is computed. There are
three plots available for Cox regression.
Survival Plot
For stratum j, the points ( t_i, Ŝ(t_i|x_c) ), i = 1, …, k_j, are plotted, where Ŝ(t|x_c) is the
estimated survival function for the specified covariate pattern x_c.
Hazard Plot
For stratum j, the points ( t_i, Ĥ(t_i|x_c) ), i = 1, …, k_j, are plotted, where Ĥ(t|x_c) is the
estimated cumulative hazard function for the specified covariate pattern x_c.
LML Plot
The log-minus-log plot is used to see whether the stratification variable should be included as
a covariate. For stratum j, the points ( t_i, x′_c β̂ + ln Ĥ₀(t_i) ), i = 1, …, k_j, are plotted. If the
plot shows parallelism among strata, then the stratum variable should be a covariate.
Blank Handling
All records with missing values for any input or output field are excluded from the estimation of
the model.
Scoring
Survival and cumulative hazard estimates are given in “Survival Table ” on p. 101.
Conditional upon survival until time t₀, the probability of survival until time t is

P( T ≥ t | T ≥ t₀, x ) = Ŝ(t|x) / Ŝ(t₀|x)
Blank Handling
Records with missing values for any input field in the final model cannot be scored, and are
assigned a predicted value of $null$.
Additionally, records with “total” survival time (past + future) greater than the record with the
longest observed uncensored survival time are also assigned a predicted value of $null$.
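The conditional-survival scoring rule can be sketched as follows; the baseline survivor values and linear predictor are invented:

```python
import math

def conditional_survival(s0_t, s0_t0, lin_pred):
    """P(T >= t | T >= t0, x) = S(t|x) / S(t0|x), with
    S(t|x) = S0(t) ** exp(x'beta) under proportional hazards."""
    e = math.exp(lin_pred)
    return (s0_t ** e) / (s0_t0 ** e)

# Invented baseline survivor values at t and t0, linear predictor x'beta = 0.
p = conditional_survival(s0_t=0.25, s0_t0=0.5, lin_pred=0.0)  # 0.25 / 0.5
```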
References
Breslow, N. E. 1974. Covariance analysis of censored survival data. Biometrics, 30, 89–99.
Cain, K. C., and N. T. Lange. 1984. Approximate case influence for the proportional hazards
regression model with censored data. Biometrics, 40, 493–499.
Cox, D. R. 1972. Regression models and life tables (with discussion). Journal of the Royal
Statistical Society, Series B, 34, 187–220.
Kalbfleisch, J. D., and R. L. Prentice. 2002. The statistical analysis of failure time data, 2 ed.
New York: John Wiley & Sons, Inc.
Lawless, J. F. 1982. Statistical models and methods for lifetime data. New York: John Wiley &
Sons, Inc.
Storer, B. E., and J. Crowley. 1985. A diagnostic for Cox regression and general conditional
likelihoods. Journal of the American Statistical Association, 80, 139–147.
Decision List Algorithms
The objective of decision lists is to find a group of individuals with a distinct behavior pattern; for
example, a high probability of buying a product. A decision list model consists of a set of decision
rules. A decision rule is an if-then rule, which has two parts: antecedent and consequent. The
antecedent is a Boolean expression of predictors, and the consequent is the predicted value of the
target field when the antecedent is true. The simplest construct of a decision rule is a segment
based on one predictor; for example, Gender = ‘Male’.
A record is covered by a rule if the rule antecedent is true. If a case is covered by one of the
rules in a decision list, then it is considered to be covered by the list.
In a decision list, order of rules is significant; if a case is covered by a rule, it will be ignored
by subsequent rules.
Algorithm Overview
The decision list algorithm can be summarized as follows:
E Candidate rules are found from the original dataset.
E The best rules are appended to the decision list.
E Records covered by the decision list are removed from the dataset.
E New rules are found based on the reduced dataset.
The process repeats until one or more of the stopping criteria are met.
Terminology of Decision List Algorithm
The following terms are used in describing the decision list algorithm:
Model. A decision list model.
Cycle. In every rule discovery cycle, a set of candidate rules will be found. They will then be
added to the model under construction. The resulting models will be inputs to the next cycle.
Attribute. Another name for a variable or field in the dataset.
Source attribute. Another name for predictor field.
Extending the model. Adding decision rules to a decision list or adding segments to a decision rule.
Group. A subset of records in the dataset.
Segment. Another name for group.
Main Calculations
Notation
The following notations are used in describing the decision list algorithm:
X — Data matrix. Columns are fields (attributes), and rows are records (cases).
L — A collection of list models.
L_i — The ith list model of L.
L₀ — A list model that contains no rules.
P̂(L_i) — The estimated response probability of list L_i.
N — Total population size.
x_nm — The value of the mth field (column) for the nth record (row) of X.
X_i — The subset of records in X that are covered by list model L_i.
Y — The target field in X.
y_n — The value of the target field for the nth record.
A — Collection of all attributes (fields) of X.
A_j — The jth attribute of X.
R — A collection of rules to extend a preceding rule list.
R_k — The kth rule in rule collection R.
T — A set of candidate list models.
ResultSet — A collection of decision list models.
Primary Algorithm
The primary algorithm for creating a decision list model is as follows:
1. Initialize the model.
E Let d = Search depth, and w = Search width.
E If L = ∅, add L₀ to L.
E T = ∅.
2. Loop over all elements L_i of L.
E Select the records X_i not covered by rules of L_i.
E Call the decision rule algorithm to create an alternative rule set R on X_i. For more information,
see the topic “Decision Rule Algorithm” on p. 107.
E Construct a set of new candidate models by appending each rule in R to L_i.
E Save extended list(s) to T.
3. Select list models from T.
E Calculate the estimated response probability P̂(L_i) of each list model in T as the proportion
of records covered by the list that have the response value of the target.
E Select the w lists in T with the highest P̂(L_i).
4. Add the w selected lists to ResultSet and use them as the new L.
5. If d = 1 or L = ∅, return ResultSet and terminate; otherwise, reduce d by one and repeat from
step 2.
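The covering loop of the algorithm above can be sketched in simplified form (search width 1, response probability taken as the fraction of covered records with target = 1; the records, rule names, and helper functions are invented):

```python
def response_probability(records, rule):
    """Proportion of covered records with a positive target."""
    covered = [r for r in records if rule(r)]
    if not covered:
        return 0.0
    return sum(r["target"] for r in covered) / len(covered)

def build_list(records, candidate_rules, max_rules=2):
    """Greedy covering: pick the rule with the highest response
    probability, remove covered records, repeat."""
    decision_list, remaining = [], list(records)
    for _ in range(max_rules):
        scored = [(response_probability(remaining, r), name)
                  for name, r in candidate_rules.items()]
        best_p, best_name = max(scored)
        decision_list.append(best_name)
        best_rule = candidate_rules[best_name]
        remaining = [r for r in remaining if not best_rule(r)]
        if not remaining:
            break
    return decision_list

data = [{"gender": "M", "age": 25, "target": 1},
        {"gender": "M", "age": 40, "target": 1},
        {"gender": "F", "age": 30, "target": 0},
        {"gender": "F", "age": 55, "target": 1}]
rules = {"gender=M": lambda r: r["gender"] == "M",
         "age>50": lambda r: r["age"] > 50}
result = build_list(data, rules)
```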
Decision Rule Algorithm
Each rule is extended in decision rule cycles. With decision rules, groups are searched for
significantly increased occurrence of the target value. Decision rules will search for groups
with a higher or lower probability as required.
Notation
The following notations are used in describing the decision rule algorithm:
X — Data matrix. Columns are fields (attributes), and rows are records (cases).
R — A collection of rules to extend a preceding rule list.
R_i — The ith rule in rule collection R.
R₀ — A special rule that covers all the cases in X.
P̂(R_i) — The estimated response probability of R_i.
N — Total population size.
x_nm — The value of the mth field (column) for the nth record (row) of X.
X_i — The subset of records in X that are covered by rule R_i.
Y — The target field in X.
y_n — The value of the target field for the nth record.
A — Collection of all attributes (fields) of X.
A_j — The jth attribute of X. If Allow attribute re-use is false, A excludes attributes existing
in the preceding rule.
SplitRule(X, A_j) — The rule split algorithm for deriving rules about A_j and records in X. For
more information, see the topic “Decision Rule Split Algorithm” on p. 108.
T — A set of candidate list models.
ResultSet — A collection of decision list models.
Algorithm Steps
The decision rule algorithm proceeds as follows:
1. Initialize the rule set.
E Let d = Search depth, and w = Search width.
E If R = ∅, add R₀ to R.
E T = ∅.
2. Loop over all rules R_i in R.
E Select records X_i covered by rule R_i.
E Create an empty set S of new segments.
E Loop over attributes A_j in A.
Generate new segments based on attribute A_j: SplitRule(X_i, A_j).
Add new segments to S.
E Construct a set of new candidate rules by extending R_i with each segment in S.
E Save extended rules to T. If S = ∅, add R_i to ResultSet.
3. Select rules from T.
E Calculate the estimated response probability P̂(R_k) for each extended rule in T as the
proportion of covered records that have the response value of the target.
E Select the w rules with the highest P̂(R_k) and add them to ResultSet.
E If d = 1, return ResultSet and terminate. Otherwise, set R = the w selected rules, T = ∅,
reduce d by one, and repeat from step 2.
Decision Rule Split Algorithm
The decision rule split algorithm is used to generate high response segments from a single attribute
(field). The records and the attribute from which to generate segments should be given. This
algorithm is applicable to all ordinal attributes, and the ordinal attribute should have values that
are unambiguously ordered. The segments generated by the algorithm can be used to expand an
n-dimensional rule to an (n + 1)-dimensional rule. This decision rule split algorithm is sometimes
referred to as the sea-level method.
Notation
The following notation is used in describing the decision rule split algorithm:
X	Data matrix. Columns are fields (attributes), and rows are records (cases).
C	A sorted list of attribute values (categories) to split. Values are sorted in ascending order.
ci	The ith category in the list of categories C.
xn	The value of the split field (attribute) for the nth record (row) of X.
Y	The target field in X.
yn	The value of the target field for the nth record.
N	Total population size.
M	Number of categories in C.
p̂i	Observed response probability of category ci.
S	A segment of categories, S = [cL, cR].
CI(S)	The confidence interval (CI) for the response probability of S.
c*	The category with the higher response probability from {cL−1, cR+1}.
c**	The category with the larger number of records from {cL−1, cR+1}.
Algorithm Steps
The decision rule split algorithm proceeds as follows:
1. Compute the observed response probability p̂i of each category ci. Categories for which p̂i cannot be computed (no covered records) will be skipped.
2. Find local maxima of p̂ to create a segment set. A single-category segment [cI, cI] is created for each local maximum, where I is a positive integer satisfying the conditions p̂I−1 < p̂I and p̂I ≥ p̂I+1. The segment set is the segments ordered by p̂.
3. Select a segment in SegmentSet.
E If SegmentSet is empty, return ResultSet and terminate.
E Select the segment S with the highest response probability.
E If the segment fails the admissibility checks, remove the segment from SegmentSet and choose another.
4. Validate the segment.
E If the following conditions are satisfied:
• The size of the segment exceeds the minimum segment size criterion
• Response probability for the segment is significantly higher than that for the overall sample, as indicated by non-overlapping confidence intervals. For more information, see the topic “Confidence Intervals” on p. 110.
• Extending the segment would lower the response probability
then add the segment S to ResultSet, and remove from ResultSet any segments that have S as parent and a lower response probability.
5. Extend the segment.
E Add a category to S = [cL, cR]: of the two adjacent categories cL−1 and cR+1, add the one with the higher response probability (c*), or, in case of a tie, the one with the larger number of records (c**).
E Adjust L or R accordingly; that is, if cR+1 was added, set R = R + 1, and if cL−1 was added, set L = L − 1.
E Return S to SegmentSet, and repeat from step 3.
Confidence Intervals
The confidence limits p̂lower and p̂upper for p̂ are calculated as

p̂lower = 0	if x = 0
p̂lower = x / ( x + (n − x + 1) · F( (1+α)/2; 2(n − x + 1), 2x ) )	if x > 0

p̂upper = 1	if x = n
p̂upper = (x + 1) · F( (1+α)/2; 2(x + 1), 2(n − x) ) / ( n − x + (x + 1) · F( (1+α)/2; 2(x + 1), 2(n − x) ) )	if x < n

where n is the coverage of the rule or list, x is the response frequency of the rule or list, α is the desired confidence level, and F(q; a, b) is the inverse cumulative distribution function for F with a and b degrees of freedom, for percentile q.
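These are the exact (Clopper–Pearson) binomial limits. As an illustration, the same limits can be computed without the F distribution by inverting the binomial CDF numerically; the sketch below does this by bisection (function names are illustrative, not from the product):

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def clopper_pearson(x, n, alpha=0.95):
    """Exact confidence limits for a response probability:
    x responses out of n records, at confidence level alpha."""
    tail = (1 - alpha) / 2

    def invert(f):
        lo, hi = 0.0, 1.0
        for _ in range(60):             # bisection; plenty of precision
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return (lo + hi) / 2

    # lower limit solves P(X >= x | p) = tail, i.e. CDF(x-1) = 1 - tail
    lower = 0.0 if x == 0 else invert(
        lambda p: binom_cdf(x - 1, n, p) > 1 - tail)
    # upper limit solves P(X <= x | p) = tail
    upper = 1.0 if x == n else invert(
        lambda p: binom_cdf(x, n, p) > tail)
    return lower, upper
```

For 5 responses out of 10 records at 95% confidence this gives limits near 0.187 and 0.813, the standard exact interval.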
Secondary Measures
For each segment, the following measures are reported:
Coverage. The number of records in the segment,
.
Frequency. The number of records in the segment for which the response is true,
.
Probability. The proportion of records in the segment for which the response is true, Frequency / Coverage.
Blank Handling
In decision list models, blank values for input fields can be treated as a separate category that can
be used to define segments, or can be excluded from the model, depending on the expert model
option. The default is to use blanks as a category for defining segments. Records with blank
values for the target field are excluded from model building.
Generated Model/Scoring
The decision list generated model consists of a set of segments. When scoring new data, each
record is evaluated for membership in each segment, in order. The first segment in model order
that describes the record based on the predictor fields claims the record and determines the
predicted value and the probability. Records where the predicted value is not the response value
will have a value of $null$. Probabilities are calculated as described above.
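The first-hit scoring rule can be sketched as follows; the segment conditions and field names are made-up illustrations:

```python
# Decision list scoring: segments are tried in model order, and the first
# segment whose condition the record satisfies claims the record.

def score_decision_list(record, segments, response_value=True):
    """segments: list of (condition, probability) in model order, where
    `condition` is a predicate over the record.
    Returns (predicted_value, probability)."""
    for condition, prob in segments:
        if condition(record):
            return response_value, prob
    # records claimed by no segment fall into the remainder
    return None, None

# Illustrative two-segment model.
segments = [
    (lambda r: r["age"] < 30 and r["region"] == "north", 0.72),
    (lambda r: r["age"] >= 50, 0.55),
]
```

A record matching the first segment gets its probability even if it would also match a later one; order is part of the model.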
Blank Handling
In scoring data with a decision list generated model, blanks are considered valid values for
defining segments. If the model was built with the expert option Allow missing values in conditions
disabled, a record with a missing value for one of the input fields will not match any segment
that depends on that field for its definition.
DISCRIMINANT Algorithms
No analysis is done for any subfile group for which the number of non-empty groups is less
than two or the number of cases or sum of weights fails to exceed the number of non-empty
groups. An analysis may be stopped if no variables are selected during variable selection or
the eigenanalysis fails.
Notation
The following notation is used throughout this chapter unless otherwise stated:
Table 14-1
Notation
Notation	Description
g	Number of groups
p	Number of variables
q	Number of variables selected
xijk	Value of variable i for case k in group j
fjk	Case weights for case k in group j
nj	Number of cases in group j
n′j	Sum of case weights in group j
n	Total sum of weights
Basic Statistics
The procedure calculates the following basic statistics.
Mean

x̄ij = (1/n′j) Σk fjk xijk	(variable i in group j)
x̄i = (1/n) Σj Σk fjk xijk	(variable i overall)

Variances

s²ij = ( Σk fjk x²ijk − n′j x̄²ij ) / ( n′j − 1 )	(variable i in group j)
s²i = ( Σj Σk fjk x²ijk − n x̄²i ) / ( n − 1 )	(variable i overall)
Within-Groups Sums of Squares and Cross-Product Matrix (W)

wil = Σj Σk fjk xijk xljk − Σj n′j x̄ij x̄lj ,	i, l = 1, ..., p

Total Sums of Squares and Cross-Product Matrix (T)

til = Σj Σk fjk xijk xljk − n x̄i x̄l ,	i, l = 1, ..., p

Within-Groups Covariance Matrix

C = W / (n − g) ,	n > g

Individual Group Covariance Matrices

c(j)il = ( Σk fjk xijk xljk − n′j x̄ij x̄lj ) / ( n′j − 1 )

Within-Groups Correlation Matrix (R)

ril = wil / √( wii wll )	if wii wll > 0
ril = SYSMIS	otherwise

Total Covariance Matrix

C′ = T / (n − 1)

Univariate F and Λ for Variable i

Fi = ( til − wil applied at i: (tii − wii)(n − g) ) / ( wii (g − 1) )	with g−1 and n−g degrees of freedom
Λi = wii / tii	with 1, g−1 and n−g degrees of freedom
Rules of Variable Selection
Both direct and stepwise variable entry are possible. Multiple inclusion levels may also be
specified.
Method = Direct
For direct variable selection, variables are considered for inclusion in the order in which they are
passed from the upstream node. A variable is included in the analysis if, when it is included,
no variable in the analysis will have a tolerance less than the specified tolerance limit (default
= 0.001).
Stepwise Variable Selection
At each step, the following rules control variable selection:
• Eligible variables with higher inclusion levels are entered before eligible variables with lower inclusion levels.
• The order of entry of eligible variables with the same even inclusion level is determined by their order in the upstream node.
• The order of entry of eligible variables with the same odd inclusion level is determined by their value on the entry criterion. The variable with the “best” value for the criterion statistic is entered first.
• When level-one processing is reached, prior to inclusion of any eligible variables, all already-entered variables which have level-one inclusion numbers are examined for removal. A variable is considered eligible for removal if its F-to-remove is less than the F value for variable removal, or, if probability criteria are used, if the significance of its F-to-remove exceeds the specified probability level. If more than one variable is eligible for removal, the variable removed is the one that leaves the “best” value of the criterion statistic for the remaining variables. Variable removal continues until no more variables are eligible for removal. Sequential entry of variables then proceeds as described previously, except that after each step, variables with inclusion numbers of one are also considered for exclusion as described before.
• A variable with a zero inclusion level is never entered, although some statistics for it are printed.
Ineligibility for Inclusion
A variable with an odd inclusion number is considered ineligible for inclusion if:
• The tolerance of any variable in the analysis (including its own) drops below the specified tolerance limit if it is entered, or
• Its F-to-enter is less than the F value for a variable to enter, or
• If probability criteria are used, the significance level associated with its F-to-enter exceeds the probability to enter.
A variable with an even inclusion number is ineligible for inclusion only if the first condition above is met.
116
DISCRIMINANT Algorithms
Computations During Variable Selection
During variable selection, the matrix W is replaced at each step by a new matrix W̃ using the symmetric sweep operator described by Dempster (1969). If the first q variables have been included in the analysis, W may be partitioned as:

W = [ W11  W12 ]
    [ W21  W22 ]

where W11 is q×q. At this stage, the matrix W̃ is defined by

W̃ = [ −W11⁻¹       W11⁻¹ W12            ]
    [ W21 W11⁻¹    W22 − W21 W11⁻¹ W12  ]

In addition, when stepwise variable selection is used, T is replaced by the matrix T̃, defined similarly.

The following statistics are computed.

Tolerance

TOLi = 0	if wii = 0
TOLi = w̃ii / wii	if variable i is not in the analysis and wii ≠ 0
TOLi = −1 / ( w̃ii wii )	if variable i is in the analysis and wii ≠ 0

If a variable’s tolerance is less than or equal to the specified tolerance limit, or its inclusion in the analysis would reduce the tolerance of another variable in the equation to or below the limit, the following statistics are not computed for it or for any set including it.

F-to-Remove

Fi = ( (n − q − g + 1)( w̃ii − t̃ii ) ) / ( (g − 1) t̃ii )

with degrees of freedom g−1 and n−q−g+1.

F-to-Enter

Fi = ( (n − q − g)( t̃ii − w̃ii ) ) / ( (g − 1) w̃ii )

with degrees of freedom g−1 and n−q−g.

Wilks’ Lambda for Testing the Equality of Group Means

Λ = |W11| / |T11|

with degrees of freedom q, g−1 and n−g.
The Approximate F Test for Lambda (the “overall F”), also known as Rao’s R (Tatsuoka, 1971)

F = ( (1 − Λ^(1/s)) / Λ^(1/s) ) · ( r s + 1 − q h/2 ) / ( q h )

where

h = g − 1
r = n − 1 − (q + g)/2
s = √( (q²h² − 4) / (q² + h² − 5) )	if q² + h² − 5 > 0
s = 1	otherwise

with degrees of freedom qh and rs + 1 − qh/2. The approximation is exact if q or h is 1 or 2.

Rao’s V (Lawley-Hotelling Trace) (Rao, 1952; Morrison, 1976)

V = (n − g) Σi Σl w^il Σj n′j ( x̄ij − x̄i )( x̄lj − x̄l )

where w^il is the (i, l) element of W11⁻¹ and the summations over i and l run over the q selected variables. When n−g is large, V, under the null hypothesis, is approximately distributed as χ² with q(g−1) degrees of freedom. When an additional variable is entered, the change in V, if positive, has approximately a χ² distribution with g−1 degrees of freedom.

The Squared Mahalanobis Distance (Morrison, 1976) between groups a and b

D²ab = (n − g) Σi Σl w^il ( x̄ia − x̄ib )( x̄la − x̄lb )

The F Value for Testing the Equality of Means of Groups a and b (Smallest F ratio)

F = ( (n − q − g + 1) n′a n′b ) / ( q (n − g)( n′a + n′b ) ) · D²ab

The Sum of Unexplained Variations (Dixon, 1973)

R = Σa<b 4 / ( 4 + D²ab )
Classification Functions
Once a set of q variables has been selected, the classification functions (also known as Fisher’s linear discriminant functions) can be computed using

bij = (n − g) Σl w^il x̄lj ,	i = 1, ..., q

for the coefficients, and

aj = log pj − (1/2) Σi bij x̄ij

for the constant, where pj is the prior probability of group j.
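Scoring with the classification functions then reduces to a dot product per group plus the group’s constant, with the case assigned to the highest-scoring group. A small sketch with made-up coefficients (all names and numbers are illustrative):

```python
# Fisher linear classification: score_j = a_j + sum_i b_ij * x_i,
# assign the case to the group with the largest score.

def classify(x, coefficients, constants):
    """x: list of variable values; coefficients: group -> list of b_ij;
    constants: group -> a_j. Returns (best_group, all_scores)."""
    scores = {
        group: constants[group] + sum(b * v for b, v in zip(bs, x))
        for group, bs in coefficients.items()
    }
    return max(scores, key=scores.get), scores

# Two groups, two variables (made-up values for illustration).
coefficients = {"A": [1.0, -0.5], "B": [0.2, 0.9]}
constants = {"A": -1.0, "B": -0.8}
```

A case with a large first variable lands in group A here; a large second variable pulls it toward B.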
Canonical Discriminant Functions
The canonical discriminant function coefficients are determined by solving the general eigenvalue problem

(T − W) V = λ W V

where V is the unscaled matrix of discriminant function coefficients and λ is a diagonal matrix of eigenvalues. The eigensystem is solved as follows:
The Cholesky decomposition

W = L L′

is formed, where L is a lower triangular matrix and L′ its transpose.
The symmetric matrix L⁻¹ (T − W)(L′)⁻¹ is formed and the system

L⁻¹ (T − W)(L′)⁻¹ (L′V) = λ (L′V)

is solved using tridiagonalization and the QL method. The result is m eigenvalues, where m = min(q, g−1), and corresponding orthonormal eigenvectors, UV. The eigenvectors of the original system are obtained as

V = (L′)⁻¹ UV

For each of the eigenvalues, which are ordered in descending magnitude, the following statistics are calculated.

Percentage of Between-Groups Variance Accounted for

100 λk / Σ(j=1..m) λj

Canonical Correlation

ηk = √( λk / (1 + λk) )

Wilks’ Lambda
Testing the significance of all the discriminating functions after the first k:

Λk = Π(j=k+1..m) 1 / (1 + λj)

The significance level is based on

χ² = −( n − (q + g)/2 − 1 ) ln Λk

which is distributed as a χ² with (q−k)(g−k−1) degrees of freedom.
The Standardized Canonical Discriminant Coefficient Matrix D
The standardized canonical discriminant coefficient matrix D is computed as

D = S11 V

where
S = diag( √s11, √s22, ..., √spp ), the diagonal matrix of within-groups standard deviations
S11 = partition containing the first q rows and columns of S
V is a matrix of eigenvectors such that V′ (W/(n−g)) V = I

The Correlations Between the Canonical Discriminant Functions and the Discriminating Variables
The correlations between the canonical discriminant functions and the discriminating variables are given by

R = ( 1/(n − g) ) S11⁻¹ W11 V

If some variables were not selected for inclusion in the analysis (q < p), the eigenvectors are implicitly extended with zeroes to include the nonselected variables in the correlation matrix. Variables for which wii = 0 are excluded from S and W for this calculation; p then represents the number of variables with non-zero within-groups variance.

The Unstandardized Coefficients
The unstandardized coefficients are calculated from the standardized ones using

B = S11⁻¹ D
The associated constants are:

ak = −Σi bik x̄i

The group centroids are the canonical discriminant functions evaluated at the group means:

f̄kj = ak + Σi bik x̄ij
Tests For Equality Of Variance
Box’s M is used to test for equality of the group covariance matrices.

M = (n − g) log |C′| − Σj (n′j − 1) log |Cj|

where
C′ = pooled within-groups covariance matrix excluding groups with singular covariance matrices
Cj = covariance matrix for group j.
Determinants of C′ and Cj are obtained from the Cholesky decomposition. If any diagonal element of the decomposition is less than 10⁻¹¹, the matrix is considered singular and excluded from the analysis.

log |Cj| = 2 Σi log lii

where lii is the ith diagonal entry of L such that Cj = L L′. Similarly,

log |C′| = 2 Σi log lii ,	with C′ = L L′

where
n′ = sum of weights of cases in all groups with nonsingular covariance matrices
The significance level is obtained from the F distribution with t1 and t2 degrees of freedom using (Cooley and Lohnes, 1971):

F = M / b	if e2 − e1² > 0
F = ( t2 M ) / ( t1 ( b − M ) )	if e2 − e1² < 0

where

t1 = (g′ − 1) p (p + 1) / 2
t2 = ( t1 + 2 ) / | e2 − e1² |
e1 = ( Σj 1/(n′j − 1) − 1/(n′ − g′) ) · ( 2p² + 3p − 1 ) / ( 6 (p + 1)(g′ − 1) )
e2 = ( Σj 1/(n′j − 1)² − 1/(n′ − g′)² ) · ( (p − 1)(p + 2) ) / ( 6 (g′ − 1) )
b = t1 / ( 1 − e1 − t1/t2 )	if e2 − e1² > 0
b = t2 / ( 1 − e1 + 2/t2 )	if e2 − e1² < 0

and g′ is the number of groups with nonsingular covariance matrices. If e2 − e1² is zero, or much smaller than e2, t2 cannot be computed or cannot be computed accurately. In that case the program uses Bartlett’s χ² statistic rather than the F statistic:

χ² = M (1 − e1)

with t1 degrees of freedom.
For testing the group covariance matrix of the canonical discriminant functions, the procedure is similar. The covariance matrices C′ and Cj are replaced by D′ and Dj, where Dj is the group covariance matrix of the discriminant functions. The pooled covariance matrix in this case is an identity, so that

M = −Σj (n′j − 1) log |Dj|

where the summation is only over groups with nonsingular Dj.
Blank Handling
All records with missing values for any input or output field are excluded from the estimation of
the model.
Generated model/scoring
The basic procedure for classifying a case is as follows:
• If X is the 1×q vector of discriminating variables for the case, the 1×m vector of canonical discriminant function values is

f = X B + a

• A chi-square distance from each centroid is computed:

D²j = ( f − f̄j ) Dj⁻¹ ( f − f̄j )′

where Dj is the covariance matrix of canonical discriminant functions for group j and f̄j is the group centroid vector. If the case is a member of group j, D²j has a χ² distribution with m degrees of freedom. P(X|G), labeled as P(D>d|G=g) in the output, is the significance level of such a D²j.
• The classification, or posterior probability, is

P(Gj|X) = pj P(X|Gj) / Σj′ pj′ P(X|Gj′)

where pj is the prior probability for group j. A case is classified into the group for which P(Gj|X) is highest.
The actual calculation of P(X|Gj) is

P(X|Gj) = |Dj|⁻¹ᐟ² exp( −D²j / 2 )	if D²j is small enough for the exponential to be evaluated
P(X|Gj) = 0	otherwise
If individual group covariances are not used in classification, the pooled within-groups covariance matrix of the discriminant functions (an identity matrix) is substituted for Dj in the above calculation, resulting in considerable simplification.
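With the identity pooled covariance, the posterior step reduces to turning squared distances to the centroids into probabilities via the priors. A sketch (centroids and priors are made up; subtracting the minimum distance before exponentiating is a numerical-stability detail, not part of the text):

```python
from math import exp

# Posterior probabilities P(G_j | X) proportional to p_j * exp(-d_j^2 / 2),
# where d_j^2 is the squared distance from the case's discriminant-function
# values to the centroid of group j (pooled covariance = identity).

def posterior(f, centroids, priors):
    """f: canonical discriminant function values for the case."""
    d2 = {j: sum((a - b) ** 2 for a, b in zip(f, c))
          for j, c in centroids.items()}
    dmin = min(d2.values())     # factor out exp(-dmin/2) for stability
    num = {j: priors[j] * exp(-(d2[j] - dmin) / 2) for j in centroids}
    total = sum(num.values())
    return {j: v / total for j, v in num.items()}

# Illustrative two-group layout in a 2-function space.
centroids = {"A": [1.0, 0.0], "B": [-1.0, 0.0]}
priors = {"A": 0.5, "B": 0.5}
```

A case near the midpoint of equal-prior centroids gets posteriors of 0.5 each; moving toward a centroid shifts the posterior toward that group.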
If any Dj is singular, a pseudo-inverse of the form

[ Dj11⁻¹  0 ]
[ 0       0 ]

replaces Dj⁻¹, and |Dj11|⁻¹ replaces |Dj|⁻¹. Dj11 is a submatrix of Dj whose rows and columns correspond to functions not dependent on preceding functions. That is, function 1 will be excluded only if the rank of Dj is 0, function 2 will be excluded only if it is dependent on function 1, and so on. This choice of the pseudo-inverse is not optimal for the numerical stability of Dj11⁻¹, but maximizes the discrimination power of the remaining functions.
Cross-Validation (Leave-one-out classification)
The following notation is used in this section:
Table 14-2
Notation
Notation	Description
x̄j	Sample mean of the jth group
x̄j(k)	Sample mean of the jth group excluding point xk
Σ	Pooled sample covariance matrix
Σj	Sample covariance matrix of the jth group
Σ(k)	Pooled sample covariance matrix without point xk
Cross-validation applies only to linear discriminant analysis (not quadratic). During cross-validation, all cases in the dataset are looped over. Each case, say xk, is extracted once and treated as test data, and the remaining cases are treated as a new dataset. Using the group means and pooled covariance matrix computed without xk, the linear discriminant score di(xk) is computed for every group i. If there is a group i, other than the actual group of xk, whose score satisfies di(xk) > d(xk) for the actual group, then the extracted point xk is misclassified. The estimate of the prediction error rate is the ratio of the sum of misclassified case weights to the sum of all case weights.
To reduce computation time, the linear discriminant method is used instead of the canonical
discriminant method. The theoretical solution is exactly the same for both methods.
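The leave-one-out loop can be sketched as follows. For brevity this uses a one-dimensional nearest-centroid stand-in for the linear discriminant scores, so it illustrates the extract/re-estimate loop and the weighted error rate rather than the full discriminant computation:

```python
# Leave-one-out cross-validation: each case is held out, the group means
# are recomputed without it, and the held-out case is scored. The error
# estimate is (sum of misclassified case weights) / (sum of all weights).

def loo_error(cases):
    """cases: list of (x, group, weight), with scalar x for brevity."""
    wrong = total = 0.0
    for i, (x, g, w) in enumerate(cases):
        rest = [c for j, c in enumerate(cases) if j != i]
        sums = {}
        for xv, gv, wv in rest:
            s, sw = sums.get(gv, (0.0, 0.0))
            sums[gv] = (s + wv * xv, sw + wv)
        centroids = {gv: s / sw for gv, (s, sw) in sums.items()}
        predicted = min(centroids, key=lambda gv: abs(x - centroids[gv]))
        wrong += w * (predicted != g)
        total += w
    return wrong / total
```

A cleanly separated dataset yields an error of 0; one mislabeled point among five contributes 0.2 with unit weights.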
Blank Handling (discriminant analysis algorithms scoring)
Records with missing values for any input field in the final model cannot be scored, and are
assigned a predicted value of $null$.
References
Anderson, T. W. 1958. Introduction to Multivariate Statistical Analysis. New York: John Wiley & Sons, Inc.
Cooley, W. W., and P. R. Lohnes. 1971. Multivariate Data Analysis. New York: John Wiley & Sons, Inc.
Dempster, A. P. 1969. Elements of Continuous Multivariate Analysis. Reading, MA: Addison-Wesley.
Dixon, W. J. 1973. BMD Biomedical Computer Programs. Los Angeles: University of California Press.
Tatsuoka, M. M. 1971. Multivariate Analysis. New York: John Wiley & Sons, Inc.
Ensembles Algorithms
Ensembles are used to enhance model accuracy (boosting), enhance model stability (bagging),
build models for very large datasets (pass, stream, merge), and generally combine scores from
different models.
• For more information, see the topic “Very large datasets (pass, stream, merge) algorithms” on p. 130.
• For more information, see the topic “Bagging and Boosting Algorithms” on p. 125.
• For more information, see the topic “Ensembling model scores algorithms” on p. 136.
Bagging and Boosting Algorithms
Bootstrap aggregating (Bagging) and boosting are algorithms used to improve model stability and
accuracy. Bagging works well for unstable base models and can reduce variance in predictions.
Boosting can be used with any type of model and can reduce variance and bias in predictions.
Notation
The following notation is used for bagging and boosting unless otherwise stated:
K	The number of distinct records in the training set.
xk	Predictor values for the kth record.
yk	Target value for the kth record.
fk	Frequency weight for the kth record.
wk	Analysis weight for the kth record.
N	The total number of records; N = Σk fk.
M	The number of base models to build; for bagging, this is the number of bootstrap samples.
Mm	The model built on the mth bootstrap sample.
f(m)k	Simulated frequency weight for the kth record of the mth bootstrap sample.
w(m)k	Updated analysis weight for the kth record of the mth bootstrap sample.
ŷ(m)k	Predicted target value of the kth record by the mth model.
p(m)ki	For a categorical target, the probability that the kth record belongs to category i, i = 1, ..., C, in model m.
I(e)	For any condition e, I(e) is 1 if e holds and 0 otherwise.
Bootstrap Aggregation
Bootstrap aggregation (bagging) produces replicates of the training dataset by sampling with replacement from the original dataset. This creates bootstrap samples of equal size to the original dataset. The algorithm is performed iteratively over k = 1, ..., K and m = 1, ..., M to generate frequency weights: f(m)k is the number of times the kth record is drawn for the mth bootstrap sample (0 if it is not drawn).
Then a model is built on each replicate. Together these models form an ensemble model. The ensemble model scores new records using one of the following methods; the available methods depend upon the measurement level of the target.

Scoring a Continuous Target
• Mean. ŷk = (1/M) Σm ŷ(m)k
• Median. Sort the values ŷ(m)k and relabel them ŷ(1)k ≤ ... ≤ ŷ(M)k. Then

ŷk = ŷ((M+1)/2)k	if M is odd
ŷk = ( ŷ(M/2)k + ŷ(M/2+1)k ) / 2	if M is even

Scoring a Categorical Target
• Voting. ŷk = arg maxi Σm I( ŷ(m)k = i ), where I(·) is defined as above.
• Highest probability. ŷk = arg maxi maxm p(m)ki
• Highest mean probability. ŷk = arg maxi (1/M) Σm p(m)ki
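The two halves of bagging described above, drawing bootstrap frequency weights and combining base-model scores, can be sketched as follows (function names are illustrative):

```python
import random
from collections import Counter
from statistics import median

def bootstrap_weights(n_records, rng):
    """Frequency weight per record for one bootstrap sample of size n:
    each draw picks a record uniformly with replacement."""
    weights = [0] * n_records
    for _ in range(n_records):
        weights[rng.randrange(n_records)] += 1
    return weights

def score_continuous(predictions, method="mean"):
    """Combine base-model predictions for a continuous target."""
    if method == "median":
        return median(predictions)
    return sum(predictions) / len(predictions)

def score_categorical(predictions):
    """Voting: the category predicted by the plurality of base models."""
    return Counter(predictions).most_common(1)[0][0]
```

The weights always sum to the sample size, so each replicate is the same size as the original dataset, as the text requires.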
Bagging Model Measures
Accuracy
Accuracy is computed for the naive model, reference (simple) model, ensemble model (associated
with each ensemble method), and base models.
For categorical targets, the classification accuracy is

A = (1/N) Σk fk I( yk = ŷk )

For continuous targets, it is

R² = 1 − Σk fk ( yk − ŷk )² / Σk fk ( yk − ȳ )²

where ȳ = (1/N) Σk fk yk. Note that R² can never be greater than one, but can be less than zero.
For the naïve model, ŷk is the modal category for categorical targets and the mean for continuous targets.
Diversity
Diversity is a range measure between 0 and 1 in the larger-is-more-diverse form. It shows how
much predictions vary across base models.
For categorical targets, diversity is based on the proportion of base-model predictions that disagree with one another, averaged over records using the frequency weights.
For continuous targets, diversity is based on the dispersion of the base models’ predictions about the ensemble prediction, scaled to the 0–1 range.
Adaptive Boosting
Adaptive boosting (AdaBoost) is an algorithm used to boost models with continuous targets (Freund and Schapire 1996, Drucker 1997).
1. Initialize values.
Set

w(1)k = wk	if analysis weights are specified
w(1)k = 1	otherwise

Set m = 1. Note that analysis weights are initialized even if the method used to build base models does not support analysis weights.
2. Build base model m using the training set, and score the training set.
Set the model weight for base model m,

cm = log( 1/βm ) ,	βm = L̄m / ( 1 − L̄m )

where L̄m = Σk w(m)k L(m)k / Σk w(m)k is the average loss of base model m, and L(m)k = | yk − ŷ(m)k | / maxj | yj − ŷ(m)j | is the loss for the kth record.
3. Set weights for the next base model.

w(m+1)k = w(m)k βm^( 1 − L(m)k )

Note that analysis weights are always updated. If the method used to build base models does not support analysis weights, the frequency weights are updated for the next base model in proportion to the updated analysis weights.
If m < M, set m = m + 1 and go to step 2. Otherwise, the ensemble model is complete.
Note: base models where L̄m ≥ 0.5 or L̄m = 0 are removed from the ensemble.
Scoring
AdaBoost uses the weighted median method to score the ensemble model.
Sort the base model predictions ŷ(m)k and relabel them ŷ(1)k ≤ ... ≤ ŷ(M)k, retaining the association of the model weights cm and relabeling them c(1), ..., c(M).
The ensemble predicted value is then ŷ(i)k, where i is the smallest value such that Σ(j≤i) c(j) ≥ (1/2) Σ(j≤M) c(j).
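The weighted median step can be sketched directly from this description (names are illustrative):

```python
# Weighted median: sort predictions, carry the model weights along, and
# return the prediction at which the running weight first reaches half
# the total weight.

def weighted_median(predictions, model_weights):
    pairs = sorted(zip(predictions, model_weights))
    half = sum(model_weights) / 2
    running = 0.0
    for value, weight in pairs:
        running += weight
        if running >= half:
            return value
    return pairs[-1][0]
```

With equal weights this is the ordinary median; a single dominant model weight pulls the result to that model's prediction.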
Stagewise Additive Modeling using Multiclass Exponential loss
Stagewise Additive Modeling using a Multiclass Exponential loss function (SAMME) is an algorithm that extends the original AdaBoost algorithm to categorical targets.
1. Initialize values.
Set

w(1)k = wk	if analysis weights are specified
w(1)k = 1	otherwise

Set m = 1. Note that analysis weights are initialized even if the method used to build base models does not support analysis weights.
2. Build base model m using the training set, and score the training set.
Set the model weight for base model m,

cm = log( (1 − errm) / errm ) + log( C − 1 )

where errm = Σk w(m)k I( yk ≠ ŷ(m)k ) / Σk w(m)k is the weighted error of base model m.
3. Set weights for the next base model.

w(m+1)k = w(m)k exp( cm I( yk ≠ ŷ(m)k ) )

Note that analysis weights are always updated. If the method used to build base models does not support analysis weights, the frequency weights are updated for the next base model in proportion to the updated analysis weights.
If m < M, set m = m + 1 and go to step 2. Otherwise, the ensemble model is complete.
Note: base models where errm ≥ 1 − 1/C or errm = 0 are removed from the ensemble.
Scoring
SAMME uses the weighted majority vote method to score the ensemble model.
The predicted value of the kth record for the mth base model is ŷ(m)k. The ensemble predicted value is then

ŷk = arg maxi Σm cm I( ŷ(m)k = i )

Ties are resolved at random.
The ensemble predicted probability is

p̂ki = Σm cm p(m)ki / Σm cm
Boosting Model Measures
Accuracy
Accuracy is computed for the naive model, reference (simple) model, ensemble model (associated
with each ensemble method), and base models.
For categorical targets, the classification accuracy is

A = (1/N) Σk fk I( yk = ŷk )

For continuous targets, it is

R² = 1 − Σk fk ( yk − ŷk )² / Σk fk ( yk − ȳ )²

where ȳ = (1/N) Σk fk yk. Note that R² can never be greater than one, but can be less than zero.
For the naïve model, ŷk is the modal category for categorical targets and the mean for continuous targets.
References
Drucker, H. 1997. Improving regressors using boosting techniques. In: Proceedings of the 14th International Conference on Machine Learning, D. H. Fisher, Jr., ed. San Mateo, CA: Morgan Kaufmann, 107–115.
Freund, Y., and R. E. Schapire. 1995. A decision-theoretic generalization of on-line learning and an application to boosting. In: Computational Learning Theory: Second European Conference, EuroCOLT ’95, 23–37.
Very large datasets (pass, stream, merge) algorithms
We implement the PSM features PASS, STREAM, and MERGE through ensemble modeling.
PASS builds models on very large data sets with only one data pass; STREAM updates the
existing model with new cases without the need to store or recall the old training data; MERGE
builds models in a distributed environment and merges the built models into one model.
In an ensemble model, the training set will be divided into subsets called blocks, and a model will
be built on each block. Because the blocks may be dispatched to different threads (here one process
contains one thread) and even different machines, models in different processes can be built at the
same time. As new data blocks arrive, the algorithm simply repeats this procedure. Therefore it
can easily handle the data stream and perform incremental learning for ensemble modeling.
Pass
The PASS operation includes the following steps:
1. Split the data into training blocks, a testing set and a holdout set. Note that the frequency weight,
if specified, is ignored when splitting the training set into blocks (to prevent blocks from being
entirely represented by a single case) but is accounted for when creating the testing and holdout
sets.
2. Build base models on training blocks and build a reference model on the testing set. A single
model is built on the testing set and each training block.
3. Evaluate each base model by computing the accuracy based on the testing set. Select a subset
of base models as ensemble elements according to accuracy.
4. Evaluate the ensemble model and the reference model by computing the accuracy based on
the holdout set. If the ensemble model’s performance is not better than the reference model’s
performance on the holdout set, we use the reference model to score the new cases.
Computing Model Accuracy
The accuracy of a base model is assessed on the testing set. For each vector of predictors x and the corresponding label y observed in the testing set T, let ŷ be the label predicted by the given model. Then the testing error is estimated as:

Categorical target.	E = (1/|T|) Σ(x,y)∈T I( y ≠ ŷ )
Continuous target.	E = Σ(x,y)∈T ( y − ŷ )² / Σ(x,y)∈T ( y − ȳ )²

where I( y ≠ ŷ ) is 1 if y ≠ ŷ and 0 otherwise, and ȳ is the mean of y over T.
The accuracy for the given model is computed by A = 1 − E. The accuracy for the whole ensemble model and the reference model is assessed on the holdout set.
Stream
When new cases arrive and the user wants to update the existing ensemble model with these
cases, the algorithm will:
1. Start a PASS operation to build an ensemble model on the new data, then
2. MERGE the newly created ensemble model and the existing ensemble model.
Merge
The MERGE operation has the following steps:
1. Merge the holdout sets into a single holdout set and, if necessary, reduce this set to a reasonable
size.
2. Merge the testing sets into a single testing set and, if necessary, reduce this set to a reasonable size.
3. Build a merged reference model on the merged testing set.
4. Evaluate every base model by computing the accuracy based on the merged testing set. Select a
subset of base models as elements of the merged ensemble model according to accuracy.
5. Evaluate the merged ensemble model and the merged reference model by computing the accuracy
based on the merged holdout set.
Adaptive Predictor Selection
There are two methods, depending upon whether the method used to build base models has an
internal predictor selection algorithm.
Method has predictor selection algorithm
The first base model is built with all predictors available to the method’s predictor selection algorithm. Base model j (j > 1) makes the ith predictor available with probability

pi = max( pmin, ( ni + C ) / ( mi + C ) )

where ni is the number of times the ith predictor was selected by the method’s predictor selection algorithm in the previous j−1 base models, mi is the number of times the ith predictor was made available to the method’s predictor selection algorithm in the previous j−1 base models, C is a constant to smooth the value of pi, and pmin is a lower limit on pi.
Method does not have predictor selection algorithm
Each base model makes the ith predictor available with a probability determined by pvaluei, the p-value of a test for the ith predictor, as defined below; predictors with smaller p-values are more likely to be made available.
• For a categorical target and categorical predictor, pvaluei is based on a Pearson chi-square test of independence between the predictor X and the target Y,

χ² = Σi Σj ( Nij − N̂ij )² / N̂ij ,	N̂ij = Ni· N·j / N

with degrees of freedom (I − 1)(J − 1), where Nij is the number of cases with X = i and Y = j, Ni· = Σj Nij, N·j = Σi Nij, and N = Σi Σj Nij.
• For a categorical target and continuous predictor, pvaluei is based on a one-way ANOVA F test of the equality of the predictor means across target categories,

F = ( Σj Nj ( x̄j − x̄ )² / (J − 1) ) / ( Σj (Nj − 1) s²j / (N − J) )

with degrees of freedom J − 1 and N − J, where Nj is the number of cases with Y = j, and x̄j and s²j are the sample mean and sample variance of X given Y = j.
• For a continuous target and categorical predictor, pvaluei is based on the analogous F test with the roles of target and predictor exchanged, with degrees of freedom I − 1 and N − I, where Ni is the number of cases with X = i, and ȳi and s²i are the sample mean and sample variance of Y given X = i.
• For a continuous target and continuous predictor, pvaluei is based on a two-sided t test of zero correlation,

t = r √( (N − 2) / (1 − r²) ) ,	r = sxy / √( s²x s²y )

with N − 2 degrees of freedom, where s²x is the sample variance of X, s²y is the sample variance of Y, and sxy is their sample covariance.
Automatic Category Balancing
When a target category occurs relatively infrequently, many models do a poor job of predicting members of that rarely occurring category, even if the overall prediction rate of the model is fairly good. Automatic category balancing should improve the model’s accuracy when predicting infrequently occurring values.
As records arrive, they are added to a training block until it is full. Then the proportion of records in each category is computed: pi = wi / w, where wi is the weighted number of records taking category i and w is the total weighted number of records.
E If there is any category such that pi < β/C, where C is the number of target categories and β = 0.3, then each record in the training block is randomly removed with a probability that increases with the proportion of its category. This operation will tend to remove records from frequently occurring categories. Add new records to the training block until it is full again, and repeat this step until the condition is not satisfied.
E If there is still any category such that pi < β/C, then recompute the frequency weight for record k in inverse proportion to pc(k), where c(k) is the category of the kth record. This operation gives greater weight to infrequently occurring categories.
Model Measures
The following notation applies.
N	Total number of records
M	Total number of base models
fk	The frequency weight of record k
yk	The observed target value of record k
ŷk	The predicted target value of record k by the ensemble model
ŷ(m)k	The predicted target value of record k by base model m

Accuracy
Accuracy is computed for the naive model, reference (simple) model, ensemble model (associated with each ensemble method), and base models.
For categorical targets, the classification accuracy is

A = (1/N) Σk fk I( yk = ŷk )

where I( yk = ŷk ) is 1 if yk = ŷk and 0 otherwise.
For continuous targets, it is

R² = 1 − Σk fk ( yk − ŷk )² / Σk fk ( yk − ȳ )²

where ȳ = (1/N) Σk fk yk. Note that R² can never be greater than one, but can be less than zero.
For the naïve model, ŷk is the modal category for categorical targets and the mean for continuous targets.
Diversity
Diversity is a range measure between 0 and 1 in the larger-is-more-diverse form. It shows how much predictions vary across base models.
For categorical targets, diversity is based on the proportion of base-model predictions that disagree, averaged over records using the frequency weights, where I(·) is defined as above.
Diversity is not available for continuous targets.
Scoring
There are several strategies for scoring using the ensemble models.

Continuous Target
Mean.	ŷi = (1/M) Σm ŷ(m)i
Median.	ŷi = median( ŷ(1)i, ..., ŷ(M)i )
where ŷi is the final predicted value of case i, and ŷ(m)i is the mth base model’s predicted value of case i.

Categorical Target
Voting. Assume that v(m)k represents the label output of the mth base model for a given vector of predictor values: v(m)k is 1 if the label assigned by the mth base model is the kth target category, and 0 otherwise. There are a total of M base models and K target categories. The majority vote method selects the jth category if it is assigned by the plurality of base models; it satisfies the following equation:

Σ(m=1..M) v(m)j = max(k) Σ(m=1..M) v(m)k

Let εm be the testing error estimated for the mth base model. Weights for the weighted majority vote are then computed according to the following expression:

wm = log( (1 − εm) / εm )

Probability voting. Assume that p(m)k is the posterior probability estimated for the kth target category by the mth base model for a given vector of predictor values. The following rules combine the probabilities computed by the base models. The jth category is selected such that it satisfies the corresponding equation.
• Highest probability.	max(m=1..M) p(m)j = max(k) max(m=1..M) p(m)k
• Highest mean probability.	(1/M) Σm p(m)j = max(k) (1/M) Σm p(m)k

Ties are resolved at random.
Softmax smoothing. The softmax function can be used for smoothing the probabilities:

p′i = exp( ci ) / Σj exp( cj )

where ci is the rule-based confidence for category i and p′i is the smoothed value.
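The softmax smoothing can be written as a small function (the max-subtraction is a standard numerical-stability detail, not part of the formula above):

```python
from math import exp

# Softmax: map rule-based confidences c_i to smoothed probabilities that
# are positive and sum to 1, preserving the ordering of the confidences.

def softmax(confidences):
    m = max(confidences)              # subtract the max before exp()
    exps = [exp(c - m) for c in confidences]
    total = sum(exps)
    return [e / total for e in exps]
```

Equal confidences come out as equal probabilities; larger confidences always receive larger smoothed values.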
Ensembling model scores algorithms
Ensembling scores from individual models can give more accurate predictions. By combining
scores from multiple models, limitations in individual models may be avoided, resulting in a
higher overall accuracy. Models combined in this manner typically perform at least as well as the
best of the individual models and often better.
Note that while the options for general ensembling of scores are similar to those for boosting,
bagging, and very large datasets, the specific options for combining scoring are slightly different.
Notation
The following notation applies.
Notation          Description
N                 Total number of records
M                 Total number of base models
y_i               The observed target value of record i
\hat{y}_i         The predicted target value of record i by the ensemble model
\hat{y}_i^{(m)}   The predicted target value of record i by base model m
Scoring
There are several strategies for scoring using the ensemble models.
Continuous Target
Mean.

\hat{y}_i = \frac{1}{M} \sum_{m=1}^{M} \hat{y}_i^{(m)}

where \hat{y}_i is the final predicted value of case i, and \hat{y}_i^{(m)} is the mth base model’s predicted value of case i.

Standard error.

SE_i = \sqrt{ \frac{ \sum_{m=1}^{M} \left( \hat{y}_i^{(m)} - \hat{y}_i \right)^2 }{ M(M-1) } }
Categorical Target
Voting. Assume that v_{mk}(x) represents the label output of the mth base model for a given vector of predictor values: v_{mk}(x) = 1 if the label assigned by the mth base model is the kth target category, and 0 otherwise. There are a total of M base models and K target categories. The majority vote method selects the jth category if it is assigned by the plurality of base models; that is, it satisfies the following equation:

\sum_{m=1}^{M} v_{mj}(x) = \max_{k} \sum_{m=1}^{M} v_{mk}(x)
Confidence-weighted (probability) voting. Assume that p_{mk} is the posterior probability estimated for the kth target category by the mth base model for a given vector of predictor values. The following rules combine the probabilities computed by the base models. The jth category is selected such that it satisfies the corresponding equation.

\sum_{m=1}^{M} p_{mj} = \max_{k} \sum_{m=1}^{M} p_{mk}

Highest confidence (probability) wins.

\max_{m=1,\ldots,M} p_{mj} = \max_{k} \left( \max_{m=1,\ldots,M} p_{mk} \right)
Raw propensity-weighted voting. This is equivalent to confidence-weighted voting for a flag target,
where the weights for true are the propensities and the weights for false are 1−propensity.
Adjusted propensity-weighted voting. This is similar to raw propensity-weighted voting for a
flag target, where the weights for true are the adjusted propensities and the weights for false
are 1−adjusted propensity.
Average raw propensity. The raw propensity scores are averaged across the base models. If the
average is > 0.5, then the record is scored as true.
Average adjusted propensity. The adjusted propensity scores are averaged across the base models.
If the average is > 0.5, then the record is scored as true.
Factor Analysis/PCA Algorithms
Overview
The Factor/PCA node performs principal components analysis and six types of factor analysis.
Primary Calculations
Factor Extraction
Principal Components Analysis
The matrix of factor loadings based on factor m is

\Lambda_m = \Omega_m \Gamma_m^{1/2}

where

\Omega_m = (\omega_1, \omega_2, \ldots, \omega_m), \qquad \Gamma_m = \mathrm{diag}(\gamma_1, \gamma_2, \ldots, \gamma_m)

The communality of variable i is given by

h_i^2 = \sum_{j=1}^{m} \gamma_j \, \omega_{ij}^2

Analyzing a Correlation Matrix

\gamma_1 \ge \gamma_2 \ge \cdots \ge \gamma_p are the eigenvalues and \omega_j are the corresponding eigenvectors of R, where R is the correlation matrix.

Analyzing a Covariance Matrix

\gamma_1 \ge \gamma_2 \ge \cdots \ge \gamma_p are the eigenvalues and \omega_j are the corresponding eigenvectors of \Sigma, where \Sigma is the covariance matrix.

The rescaled loadings matrix is \Lambda_R = [\mathrm{diag}\,\Sigma]^{-1/2} \Lambda_m.

The rescaled communality of variable i is h_{R,i}^2 = \sigma_{ii}^{-1} h_i^2.
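As an illustration of the loading and communality formulas, a minimal sketch (hypothetical correlation matrix; NumPy for the eigendecomposition):

```python
# Illustrative sketch (assumed naming): principal-component loadings and
# communalities from the eigendecomposition of a correlation matrix R,
# following Lambda_m = Omega_m * Gamma_m^{1/2} above.
import numpy as np

R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.4],
              [0.3, 0.4, 1.0]])              # hypothetical correlation matrix

gamma, omega = np.linalg.eigh(R)             # eigh returns ascending eigenvalues
order = np.argsort(gamma)[::-1]              # sort descending
gamma, omega = gamma[order], omega[:, order]

m = 2                                        # number of components retained
loadings = omega[:, :m] * np.sqrt(gamma[:m]) # Lambda_m = Omega_m Gamma_m^{1/2}
communality = (loadings ** 2).sum(axis=1)    # h_i^2 = sum_j gamma_j omega_ij^2
print(loadings.shape, np.round(communality, 3))
```

With all p components retained, the loadings reproduce R exactly and every communality equals 1.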
Principal Axis Factoring
Analyzing a Correlation Matrix
An iterative solution for communalities and factor loadings is sought. At iteration i, the communalities from the preceding iteration are placed on the diagonal of R, and the resulting matrix is denoted by R_i. The eigenanalysis is performed on R_i, and the new communality of variable j is estimated by

h_j^{(i)} = \sum_{k=1}^{m} \gamma_k \, \omega_{jk}^2

The factor loadings are obtained by

\Lambda^{(i)} = \Omega_m \Gamma_m^{1/2}

Iterations continue until the maximum number (default 25) is reached or until the maximum change in the communality estimates is less than the convergence criterion (default 0.001).
Analyzing a Covariance Matrix
This analysis is the same as analyzing a correlation matrix, except is used instead of the
correlation matrix . Convergence is dependent on the maximum change of rescaled communality
estimates.
. The rescaled
At iteration i, the rescaled loadings matrix is
communality of variable i is
.
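The iteration above can be sketched as follows; this is an illustrative implementation, not the product source, using squared-multiple-correlation starting communalities and the stated defaults.

```python
# Illustrative sketch (assumed defaults): iterated principal-axis factoring
# on a correlation matrix, replacing the diagonal with communalities.
import numpy as np

def principal_axis(R, m, max_iter=25, tol=1e-3):
    h = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # SMC starting communalities
    for _ in range(max_iter):
        Ri = R.copy()
        np.fill_diagonal(Ri, h)                  # place communalities on diagonal
        gamma, omega = np.linalg.eigh(Ri)
        idx = np.argsort(gamma)[::-1][:m]        # m largest eigenvalues
        lam = omega[:, idx] * np.sqrt(np.clip(gamma[idx], 0, None))
        h_new = (lam ** 2).sum(axis=1)           # updated communalities
        if np.max(np.abs(h_new - h)) < tol:      # convergence criterion
            return lam, h_new
        h = h_new
    return lam, h

R = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.6],
              [0.4, 0.6, 1.0]])                  # hypothetical correlation matrix
lam, h = principal_axis(R, m=1)
print(np.round(lam.ravel(), 3), np.round(h, 3))
```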
Maximum Likelihood
The maximum likelihood solutions of \Lambda and \psi are obtained by minimizing

F = \mathrm{tr}\left[ (\Lambda\Lambda^T + \psi)^{-1} R \right] - \ln\left| (\Lambda\Lambda^T + \psi)^{-1} R \right| - p

with respect to \Lambda and \psi, where p is the number of variables, \Lambda is the factor loading matrix, and \psi is the diagonal matrix of unique variances.
The minimization of F is performed by way of a two-step algorithm. First, the conditional minimum of F for a given \psi is found. This gives the function f(\psi), which is minimized numerically using the Newton-Raphson procedure. Let \xi^{(s)} be the column vector containing the logarithm of the diagonal elements of \psi at the sth iteration. Then

\xi^{(s+1)} = \xi^{(s)} - d^{(s)}

where d^{(s)} is the solution to the system of linear equations
B^{(s)} d^{(s)} = h^{(s)}

and h^{(s)} is the column vector containing the first derivatives \partial f / \partial \xi_i, with B^{(s)} the matrix of second derivatives. The starting point \xi^{(0)} is

\xi_i^{(0)} = \log\left[ (1 - m/2p) / r^{ii} \right] for ML and GLS

\xi_i^{(0)} = \log\left[ (1 - m/2p) \, s_{ii} \right] for ULS

where m is the number of factors and r^{ii} is the ith diagonal element of R^{-1}.
The values of f, \partial f / \partial \xi, and \partial^2 f / \partial \xi \, \partial \xi^T can be expressed in terms of the eigenvalues \gamma_1 \le \gamma_2 \le \cdots \le \gamma_p and the corresponding eigenvectors \omega_1, \omega_2, \ldots, \omega_p of the matrix \psi^{-1/2} R \psi^{-1/2}.
The approximate second-order derivatives are used in the initial step and when the matrix of the exact second-order derivatives is not positive definite or when all elements of the vector d are greater than 0.1. If \psi_i \le 0 (Heywood variables), the diagonal element is replaced by 1 and the rest of the elements of that column and row are set to 0. If the value of f is not decreased by step d^{(s)}, the step is halved and halved again until the value of f decreases or 25 halvings fail to produce a decrease. (In this case, the computations are terminated.) Stepping continues until the largest absolute value of the elements of d is less than the criterion value (default 0.001) or until the maximum number of iterations (default 25) is reached. Using the converged value of \psi (denoted by \hat{\psi}), the eigenanalysis is performed on the matrix \hat{\psi}^{-1/2} R \hat{\psi}^{-1/2}. The factor loadings are computed as

\hat{\Lambda} = \hat{\psi}^{1/2} \, \Omega_m \, (\Gamma_m - I)^{1/2}

where \Omega_m and \Gamma_m correspond to the m largest eigenvalues and associated eigenvectors of \hat{\psi}^{-1/2} R \hat{\psi}^{-1/2}.
Unweighted and Generalized Least Squares
The same basic algorithm is used in ULS and GLS as in maximum likelihood, except that the function minimized is

F = \frac{1}{2} \, \mathrm{tr}\left[ \left( R - \Lambda\Lambda^T - \psi \right)^2 \right] for ULS

F = \frac{1}{2} \, \mathrm{tr}\left[ \left( I - (\Lambda\Lambda^T + \psi) R^{-1} \right)^2 \right] for GLS

For the ULS method, the eigenanalysis is performed on the matrix R - \psi, where \gamma_1 \ge \gamma_2 \ge \cdots \ge \gamma_p are the eigenvalues. In terms of the derivatives, the corresponding expressions for f and its first and second derivatives replace those of the maximum likelihood method, for ULS and for GLS respectively. Also, the factor loadings of the ULS method are obtained by

\hat{\Lambda} = \Omega_m \Gamma_m^{1/2}

The chi-square statistic for m factors for the ML and GLS methods is given by

\chi^2 = \left[ n - 1 - \frac{2p + 5}{6} - \frac{2m}{3} \right] F_{\min}

with

\frac{1}{2} \left[ (p - m)^2 - p - m \right]

degrees of freedom, where n is the number of cases and F_{\min} is the final value of F.
Alpha Factoring
Alpha factoring involves an iterative procedure, where at each iteration i:

 The eigenvalues (\gamma_k) and eigenvectors (\omega_k) of the matrix

H_i^{-1/2} (R - I) H_i^{-1/2} + I

are computed, where H_i is the diagonal matrix of communalities at iteration i.
 The new communalities are

h_{(i+1)j} = h_{ij} \sum_{k=1}^{m} (\gamma_k - 1) \, \omega_{jk}^2

The initial values of the communalities, h_{0j}, are

h_{0j} = 1 if r^{jj} \le 0, and 1 - 1/r^{jj} otherwise

where r^{jj} is the jth diagonal entry of R^{-1}. If all h_{0j} are equal to one, the procedure is terminated. If for some j, h_{ij} > 1, the procedure is terminated.

Iteration stops if any of the following are true:

 the maximum number of iterations is reached
 the maximum change in the communality estimates is less than the convergence criterion
 h_{ij} > 1 for any j

The communalities are the values when iteration stops, unless the last termination criterion is true, in which case the procedure terminates in error. The factor pattern matrix is

\Lambda = H_f^{1/2} \, \Omega_m \, \Gamma_m^{1/2}

where f is the final iteration.
Image Factoring
Analyzing a Correlation Matrix

Eigenvalues \gamma_k and eigenvectors \omega_k of the matrix S^{-1} R S^{-1} are found, where S^2 = \mathrm{diag}(R^{-1})^{-1}; that is, s_i^2 = 1/r^{ii}, where r^{ii} is the ith diagonal element of R^{-1}.

The factor pattern matrix is

\Lambda = S \, \Omega_m \, (\Gamma_m - I) \, \Gamma_m^{-1/2}

where \Omega_m and \Gamma_m correspond to the m eigenvalues greater than 1 (and the associated eigenvectors). If m = 0, the procedure is terminated.

The communalities are

h_i^2 = \sum_{k=1}^{m} \lambda_{ik}^2

The image covariance matrix is

R + S^2 R^{-1} S^2 - 2 S^2

The anti-image covariance matrix is

S^2 R^{-1} S^2

Analyzing a Covariance Matrix

When analyzing a covariance matrix, the covariance matrix \Sigma is used instead of the correlation matrix R. The calculation is similar to the correlation matrix case.

The rescaled factor pattern matrix is \Lambda_R = [\mathrm{diag}\,\Sigma]^{-1/2} \Lambda, and the rescaled communality of variable i is h_{R,i}^2 = \sigma_{ii}^{-1} h_i^2.
Factor Rotation
Orthogonal Rotations
Rotations are done cyclically on pairs of factors until the maximum number of iterations is reached or the convergence criterion is met. The algorithm is the same for all orthogonal rotations, differing only in computations of the tangent values of the rotation angles.

The factor pattern matrix is normalized by the square root of communalities:

\Lambda^* = H^{-1/2} \Lambda

where \Lambda is the factor pattern matrix and H^{1/2} is the diagonal matrix of square roots of communalities.

The transformation matrix T is initialized to the identity matrix I.

At each iteration i:

 The convergence criterion is computed from \Lambda^*, where the initial value of \Lambda^* is the original factor pattern matrix. For subsequent iterations, the initial value is the final value of \Lambda^* when all factor pairs have been rotated.
 For all pairs of factors (j, k) where j < k, the following are computed:

u_r = \lambda_{rj}^2 - \lambda_{rk}^2, \qquad v_r = 2 \lambda_{rj} \lambda_{rk}, \qquad r = 1, \ldots, p

A = \sum_{r=1}^{p} u_r, \quad B = \sum_{r=1}^{p} v_r, \quad C = \sum_{r=1}^{p} (u_r^2 - v_r^2), \quad D = \sum_{r=1}^{p} 2 u_r v_r

 The angle of rotation is

\phi = \frac{1}{4} \arctan\left( \frac{X}{Y} \right)

where

X = D - 2AB/p (Varimax); X = D - mAB/p (Equamax); X = D (Quartimax)

Y = C - (A^2 - B^2)/p (Varimax); Y = C - m(A^2 - B^2)/(2p) (Equamax); Y = C (Quartimax)
 If \phi is smaller than the rotation threshold, no rotation is done on the pair of factors.
 The new rotated factors are

(\tilde{\lambda}_j, \tilde{\lambda}_k) = (\lambda_j, \lambda_k) \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}

where \lambda_j and \lambda_k are the last values for factors j and k calculated in this iteration.
 The accrued rotation transformation matrix is updated by

(\tilde{t}_j, \tilde{t}_k) = (t_j, t_k) \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}

where t_j and t_k are the last calculated values of the jth and kth columns of T.

Iteration is terminated when the change in the convergence criterion between successive iterations is less than the criterion value, or the maximum number of iterations is reached.

The final rotated factor pattern matrix is

\Lambda_f = H^{1/2} \tilde{\Lambda}^*

where \tilde{\Lambda}^* is the value of the last iteration.

Reflect factors with negative sums. If

\sum_{r=1}^{p} \tilde{\lambda}_{rj} < 0

then

\tilde{\lambda}_j = -\tilde{\lambda}_j

Rearrange the rotated factors such that their sums of squared loadings are in descending order.

The communalities are h_i^2 = \sum_{j=1}^{m} \lambda_{ij}^2.
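The pairwise rotation scheme can be sketched as follows. This is an illustrative varimax implementation built from the u, v, A, B, C, D quantities above (no Kaiser normalization, for brevity); the loadings are hypothetical.

```python
# Illustrative sketch: pairwise varimax rotation using phi = (1/4)*arctan(X/Y)
# with the varimax numerator X = D - 2AB/p and denominator Y = C - (A^2 - B^2)/p.
import numpy as np

def varimax(L, max_iter=25, tol=1e-8):
    L = L.copy()
    p, m = L.shape
    for _ in range(max_iter):
        total_rotation = 0.0
        for j in range(m - 1):
            for k in range(j + 1, m):
                u = L[:, j] ** 2 - L[:, k] ** 2
                v = 2.0 * L[:, j] * L[:, k]
                A, B = u.sum(), v.sum()
                C = (u ** 2 - v ** 2).sum()
                D = (2.0 * u * v).sum()
                X = D - 2.0 * A * B / p          # varimax numerator
                Y = C - (A ** 2 - B ** 2) / p    # varimax denominator
                phi = 0.25 * np.arctan2(X, Y)
                if abs(phi) < tol:
                    continue                     # skip negligible rotations
                rot = np.array([[np.cos(phi), -np.sin(phi)],
                                [np.sin(phi),  np.cos(phi)]])
                L[:, [j, k]] = L[:, [j, k]] @ rot
                total_rotation += abs(phi)
        if total_rotation < tol:
            break
    return L

L0 = np.array([[0.7, 0.3], [0.8, 0.2], [0.2, 0.7], [0.3, 0.8]])
Lr = varimax(L0)
# communalities are invariant under orthogonal rotation
print(np.round((Lr ** 2).sum(axis=1), 6))
```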
Direct Oblimin Rotation
The direct oblimin method (Jennrich and Sampson, 1966) is used for oblique rotation. The user can choose the parameter \delta. The default value is \delta = 0.

The factor pattern matrix is normalized by the square root of the communalities:

\Lambda^* = H^{-1/2} \Lambda

where H^{1/2} is the diagonal matrix of square roots of communalities. If no Kaiser normalization is specified, this normalization is not done.

Initializations

The factor correlation matrix C is initialized to the identity matrix I. The auxiliary quantities used by the rotation are also computed, from \Lambda^* if Kaiser normalization is specified and from \Lambda if it is not.
At each iteration, all possible factor pairs are rotated. For a pair of factors (j, k), j \ne k, a root a of the criterion equation is computed, as well as the auxiliary quantities needed for the update. The rotated pair of factors then replaces the previous factor values.
New values are then computed for the auxiliary quantities and for the factor correlations with the rotated factors. All values designated with a tilde (~) replace the original values and are used in subsequent calculations.

After all factor pairs have been rotated, iteration is terminated if:

 MAX iterations have been done, or
 the change in the rotation criterion between successive iterations is less than the convergence criterion.
Otherwise, the factor pairs are rotated again.
The final rotated factor pattern matrix is

\Lambda_f = H^{1/2} \tilde{\Lambda}^*

where \tilde{\Lambda}^* is the value in the final iteration.

The factor structure matrix is

S = \Lambda_f C_f

where C_f is the factor correlation matrix in the final iteration.
Promax Rotation
The promax rotation is a computationally fast rotation (Hendrickson and White, 1964). The speed
is achieved by first rotating to an orthogonal varimax solution and then relaxing the orthogonality
of the factors to better fit the simple structure.
 Varimax rotation is used to get an orthogonal rotated matrix \Lambda_R.
 The matrix P = (p_{ij}) is calculated, where

p_{ij} = \left| \lambda_{ij}^{k+1} \right| / \lambda_{ij}

Here, k is the power of the promax rotation (k > 1).
 The matrix L = (\Lambda_R^T \Lambda_R)^{-1} \Lambda_R^T P is calculated.
 The matrix L is normalized by column to a transformation matrix

Q = L F

where F = \left[ \mathrm{diag}(L^T L) \right]^{-1/2} is the diagonal matrix that normalizes the columns of L.

At this stage, the rotated factors are

\hat{f}_{\mathrm{promax}} = Q^{-1} \hat{f}_{\mathrm{varimax}}

Because \mathrm{Var}(\hat{f}_{\mathrm{promax}}) = Q^{-1} (Q^{-1})^T, and the diagonal elements do not equal 1, we must modify the rotated factor to

\tilde{f} = D \hat{f}_{\mathrm{promax}}

where

D = \left\{ \mathrm{diag}\left[ Q^{-1} (Q^{-1})^T \right] \right\}^{-1/2}

The rotated factor pattern is

\tilde{\Lambda} = \Lambda_R \, Q \, D^{-1}

The correlation matrix of the factors is

\Phi = D \, Q^{-1} (Q^{-1})^T D

The factor structure matrix is

S = \tilde{\Lambda} \, \Phi
Factor Score Coefficients
IBM® SPSS® Modeler uses the regression method of computing factor score coefficients
(Harman, 1976).
W = \Omega_m \Gamma_m^{-1/2} (PCA without rotation); W = \Lambda (\Lambda^T \Lambda)^{-1} (PCA with rotation); W = R^{-1} S otherwise

where S is the factor structure matrix. For orthogonal rotations, S = \Lambda.

For principal components analysis without rotation, if any \gamma_j = 0, factor score coefficients are not computed. For principal components with rotation, if the determinant of \Lambda^T \Lambda is less than the tolerance, the coefficients are not computed. Otherwise, if R is singular, factor score coefficients are not computed.
Blank Handling
By default, a case that has a missing value for any input or output field is deleted from the computation of the correlation matrix on which all subsequent computations are based. If the Only use complete records option is deselected, each correlation in the correlation matrix is computed based on records with complete data for the two fields associated with the correlation, regardless of missing values on other fields. For some datasets, this approach can lead to a nonpositive definite matrix, so that the model cannot be estimated.
Secondary Calculations
Field Statistics and Other Calculations
The statistics shown in the advanced output for the regression equation node are calculated in the
same manner as in the FACTOR procedure in IBM® SPSS® Statistics. For more details, see the
SPSS Statistics Factor algorithm document, available at http://www.ibm.com/support.
Generated Model/Scoring
Factor Scores
Factor scores are assigned to scored records by applying the factor score coefficients to the input field values for the record,

f_k = \sum_{i} w_{ik} \, x_i

where f_k is the factor score for the kth factor, w_{ik} is the factor score coefficient for the ith input field (from the W matrix) and the kth factor, and x_i is the value of the ith input field for the record being scored. For more information, see the topic “Factor Score Coefficients” on p. 151.
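A minimal sketch of scoring with the regression method, assuming W = R^{-1} S; the matrices and input values are hypothetical.

```python
# Illustrative sketch (regression method): factor scores f_k = sum_i w_ik * x_i
# with coefficients W = R^{-1} S, using hypothetical matrices.
import numpy as np

R = np.array([[1.0, 0.5], [0.5, 1.0]])   # correlation matrix of inputs
S = np.array([[0.9], [0.7]])             # factor structure matrix (1 factor)
W = np.linalg.solve(R, S)                # factor score coefficients

x = np.array([1.2, -0.4])                # standardized inputs for one record
f = x @ W                                # f_k = sum_i w_ik * x_i
print(np.round(f, 4))
```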
Blank Handling
Records with missing values for any input field in the final model cannot be scored and are
assigned factor/component score values of $null$.
Feature Selection Algorithm
Introduction
Data mining problems often involve hundreds, or even thousands, of variables. As a result,
the majority of time and effort spent in the model-building process involves examining which
variables to include in the model. Fitting a neural network or a decision tree to a set of variables
this large may require more time than is practical.
Feature selection allows the variable set to be reduced in size, creating a more manageable set
of attributes for modeling. Adding feature selection to the analytical process has several benefits:

Simplifies and narrows the scope of the features that are essential in building a predictive model.

Minimizes the computational time and memory requirements for building a predictive model
because focus can be directed to the subset of predictors that is most essential.

Leads to more accurate and/or more parsimonious models.

Reduces the time for generating scores because the predictive model is based upon only a
subset of predictors.
Primary Calculations
Feature selection consists of three steps:

Screening. Removes unimportant and problematic predictors and cases.

Ranking. Sorts remaining predictors and assigns ranks.

Selecting. Identifies the important subset of features to use in subsequent models.
The algorithm described here is limited to the supervised learning situation in which a set of
predictor variables is used to predict a target variable. Any variables in the analysis can be either
categorical or continuous. Common target variables include whether or not a customer churns,
whether or not a person will buy, and whether or not a disease is present.
The terms features, variables, and attributes are often used interchangeably. Within this
document, we use variables and predictors when discussing input to the feature selection
algorithm, with features referring to the predictors that actually get selected by the algorithm for
use in a subsequent modeling process.
Screening
This step removes variables and cases that do not provide useful information for prediction and
issues warnings about variables that may not be useful.
The following variables are removed:

Variables that have all missing values.

Variables that have all constant values.

Variables that represent case ID.
The following cases are removed:

Cases that have missing target values.

Cases that have missing values in all their predictors.
The following variables are removed based on user settings:

Variables that have more than m1% missing values.

Categorical variables that have a single category counting for more than m2% cases.

Continuous variables that have standard deviation < m3%.

Continuous variables that have a coefficient of variation |CV| < m4%. CV = standard
deviation / mean.

Categorical variables that have a number of categories greater than m5% of the cases.
Values m1, m2, m3, m4, and m5 are user-controlled parameters.
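The continuous-variable screening rules can be sketched as follows; an illustrative example only, with hypothetical threshold values for m1, m3, and m4.

```python
# Illustrative sketch (hypothetical thresholds): applying the screening
# rules for a single continuous predictor column.
import statistics

def screen_continuous(values, m1=70.0, m3=0.0, m4=0.1):
    """Return True if the continuous predictor survives screening."""
    missing_pct = 100.0 * sum(v is None for v in values) / len(values)
    if missing_pct > m1:
        return False                       # too many missing values
    xs = [v for v in values if v is not None]
    sd = statistics.pstdev(xs)
    if sd <= m3:
        return False                       # effectively constant
    mean = statistics.fmean(xs)
    if mean != 0 and abs(sd / mean) < m4:
        return False                       # coefficient of variation too small
    return True

print(screen_continuous([1.0, 1.0, 1.0, None]))   # constant -> False
print(screen_continuous([1.0, 5.0, 9.0, 2.0]))    # varies   -> True
```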
Ranking Predictors
This step considers one predictor at a time to see how well each predictor alone predicts the target
variable. The predictors are ranked according to a user-specified criterion. Available criteria
depend on the measurement levels of the target and predictor.
The importance value of each variable is calculated as 1 - p, where p is the p value of the appropriate statistical test of association between the candidate predictor and the target variable, as described below.
Categorical Target
This section describes ranking of predictors for a categorical target under the following scenarios:

All predictors categorical

All predictors continuous

Some predictors categorical, some continuous
All Categorical Predictors
The following notation applies:

Table 17-1
Notation

Notation   Description
X          The predictor under consideration with I categories.
Y          Target variable with J categories.
N          Total number of cases.
N_{ij}     The number of cases with X = i and Y = j.
N_{i.}     The number of cases with X = i.
N_{.j}     The number of cases with Y = j.

The above notations are based on nonmissing pairs of (X, Y). Hence J, N, and N_{.j} may be different for different predictors.
P Value Based on Pearson’s Chi-square

Pearson’s chi-square is a test of independence between X and Y that involves the difference between the observed and expected frequencies. The expected cell frequencies under the null hypothesis of independence are estimated by \hat{N}_{ij} = N_{i.} N_{.j} / N. Under the null hypothesis, Pearson’s chi-square converges asymptotically to a chi-square distribution \chi^2_d with degrees of freedom d = (I−1)(J−1).

The p value based on Pearson’s chi-square X^2 is calculated by p value = Prob(\chi^2_d > X^2), where

X^2 = \sum_{i=1}^{I} \sum_{j=1}^{J} \frac{ \left( N_{ij} - \hat{N}_{ij} \right)^2 }{ \hat{N}_{ij} }
Predictors are ranked by the following rules.
1. Sort the predictors by p value in ascending order.
2. If ties occur, sort by chi-square in descending order.
3. If ties still occur, sort by degree of freedom d in ascending order.
4. If ties still occur, sort by the data file order.
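A minimal worked example of the chi-square criterion (illustrative code with hypothetical counts; for a 2×2 table, d = 1 and the upper tail of \chi^2_1 equals erfc(\sqrt{X^2/2})):

```python
# Illustrative sketch: Pearson chi-square and its p value for a 2x2 table
# (df = 1, where the chi-square upper tail equals erfc(sqrt(X2/2))).
import math

table = [[30, 10],
         [20, 40]]                         # hypothetical N_ij counts
row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
N = sum(row)

x2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row[i] * col[j] / N     # N_i. * N_.j / N
        x2 += (table[i][j] - expected) ** 2 / expected

p_value = math.erfc(math.sqrt(x2 / 2.0))   # chi-square(1) upper tail
importance = 1.0 - p_value                 # importance = 1 - p
print(round(x2, 3), round(p_value, 6), round(importance, 6))
```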
P Value Based on Likelihood Ratio Chi-square

The likelihood ratio chi-square is a test of independence between X and Y that involves the ratio between the observed and expected frequencies. The expected cell frequencies under the null hypothesis of independence are estimated by \hat{N}_{ij} = N_{i.} N_{.j} / N. Under the null hypothesis, the likelihood ratio chi-square converges asymptotically to a chi-square distribution \chi^2_d with degrees of freedom d = (I−1)(J−1).

The p value based on likelihood ratio chi-square G^2 is calculated by p value = Prob(\chi^2_d > G^2), where

G^2 = 2 \sum_{i=1}^{I} \sum_{j=1}^{J} G_{ij}, with G_{ij} = N_{ij} \ln\left( N_{ij} / \hat{N}_{ij} \right) if N_{ij} > 0, and G_{ij} = 0 else.
Predictors are ranked according to the same rules as those for the p value based on Pearson’s
chi-square.
Cramer’s V
Cramer’s V is a measure of association, between 0 and 1, based upon Pearson’s chi-square. It is defined as

V = \sqrt{ \frac{X^2}{ N \, \min(I-1, \, J-1) } }
Predictors are ranked by the following rules:
1. Sort predictors by Cramer’s V in descending order.
2. If ties occur, sort by chi-square in descending order.
3. If ties still occur, sort by data file order.
Lambda
Lambda is a measure of association that reflects the proportional reduction in error when values of the independent variable are used to predict values of the dependent variable. A value of 1 means that the independent variable perfectly predicts the dependent variable. A value of 0 means that the independent variable is no help in predicting the dependent variable. It is computed as

\lambda = \frac{ \sum_{i=1}^{I} \max_j N_{ij} \; - \; \max_j N_{.j} }{ N - \max_j N_{.j} }
Predictors are ranked by the following rules:
1. Sort predictors by lambda in descending order.
2. If ties occur, sort by I in ascending order.
3. If ties still occur, sort by data file order.
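Both association measures can be sketched together (illustrative code with a hypothetical 2×2 table):

```python
# Illustrative sketch: Cramer's V and Goodman-Kruskal lambda for an I x J
# contingency table of counts N_ij.
import math

def cramers_v_and_lambda(table):
    I, J = len(table), len(table[0])
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    N = sum(row)
    # Pearson chi-square against expected counts N_i. * N_.j / N
    x2 = sum((table[i][j] - row[i] * col[j] / N) ** 2 / (row[i] * col[j] / N)
             for i in range(I) for j in range(J))
    v = math.sqrt(x2 / (N * min(I - 1, J - 1)))
    # lambda: proportional reduction in prediction error
    lam = (sum(max(r) for r in table) - max(col)) / (N - max(col))
    return v, lam

v, lam = cramers_v_and_lambda([[30, 10], [20, 40]])
print(round(v, 4), round(lam, 4))
```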
All Continuous Predictors
If all predictors are continuous, p values based on the F statistic are used. The idea is to perform a one-way ANOVA F test for each continuous predictor; this tests whether all the different classes of Y have the same mean of X.
The following notation applies:
Table 17-2
Notation

Notation    Description
N_j         The number of cases with Y = j.
\bar{x}_j   The sample mean of predictor X for target class Y = j.
s_j^2       The sample variance of predictor X for target class Y = j.
\bar{x}     The grand mean of predictor X.
The above notations are based on nonmissing pairs of (X, Y).
P Value Based on the F Statistic
The p value based on the F statistic is calculated by p value = Prob{F(J−1, N−J) > F}, where

F = \frac{ \sum_{j=1}^{J} N_j \left( \bar{x}_j - \bar{x} \right)^2 / (J-1) }{ \sum_{j=1}^{J} (N_j - 1) \, s_j^2 / (N-J) }

and F(J−1, N−J) is a random variable that follows an F distribution with degrees of freedom J−1 and N−J. If the denominator for a predictor is zero, set the p value = 0 for the predictor.
Predictors are ranked by the following rules:
1. Sort predictors by p value in ascending order.
2. If ties occur, sort by F in descending order.
3. If ties still occur, sort by N in descending order.
4. If ties still occur, sort by the data file order.
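The F statistic above can be sketched as follows (illustrative code with hypothetical group data; the p value itself would come from the F(J−1, N−J) distribution, for example via `scipy.stats.f.sf` in practice):

```python
# Illustrative sketch: the one-way ANOVA F statistic from the formula above.
import statistics

groups = {                                    # predictor X split by target class Y = j
    "j1": [5.0, 6.0, 7.0],
    "j2": [9.0, 10.0, 11.0],
}
J = len(groups)
N = sum(len(g) for g in groups.values())
grand = sum(sum(g) for g in groups.values()) / N

# between-groups mean square over within-groups mean square
between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
              for g in groups.values()) / (J - 1)
within = sum((len(g) - 1) * statistics.variance(g)
             for g in groups.values()) / (N - J)
F = between / within
print(round(F, 3))   # 24.0
```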
Mixed Type Predictors
If some predictors are continuous and some are categorical, the criterion for continuous predictors
is still the p value based on the F statistic, while the available criteria for categorical predictors are
restricted to the p value based on Pearson’s chi-square or the p value based on the likelihood ratio
chi-square. These p values are comparable and therefore can be used to rank the predictors.
Predictors are ranked by the following rules:
1. Sort predictors by p value in ascending order.
2. If ties occur, follow the rules for breaking ties among all categorical and all continuous predictors
separately, then sort these two groups (categorical predictor group and continuous predictor group)
by the data file order of their first predictors.
Continuous Target
This section describes ranking of predictors for a continuous target under the following scenarios:

All predictors categorical

All predictors continuous

Some predictors categorical, some continuous
All Categorical Predictors
If all predictors are categorical and the target is continuous, p values based on the F statistic are used. The idea is to perform a one-way ANOVA F test for the continuous target using each categorical predictor as a factor; this tests whether all the different categories of X have the same mean of Y.
The following notation applies:
Table 17-3
Notation

Notation    Description
X           The categorical predictor under consideration with I categories.
Y           The continuous target variable. y_{ij} represents the value of the continuous target for the jth case with X = i.
N_i         The number of cases with X = i.
\bar{y}_i   The sample mean of target Y in predictor category X = i.
s_i^2       The sample variance of target Y for predictor category X = i.
\bar{y}     The grand mean of target Y.
The above notations are based on nonmissing pairs of (X, Y).
The p value based on the F statistic is p value = Prob{F(I−1, N−I) > F}, where

F = \frac{ \sum_{i=1}^{I} N_i \left( \bar{y}_i - \bar{y} \right)^2 / (I-1) }{ \sum_{i=1}^{I} (N_i - 1) \, s_i^2 / (N-I) }

in which F(I−1, N−I) is a random variable that follows an F distribution with degrees of freedom I−1 and N−I. When the denominator of the above formula is zero for a given categorical predictor X, set the p value = 0 for that predictor.
Predictors are ranked by the following rules:
1. Sort predictors by p value in ascending order.
2. If ties occur, sort by F in descending order.
3. If ties still occur, sort by N in descending order.
4. If ties still occur, sort by the data file order.
All Continuous Predictors
If all predictors are continuous and the target is continuous, the p value is based on the asymptotic
t distribution of a transformation t on the Pearson correlation coefficient r.
The following notation applies:
Table 17-4
Notation

Notation    Description
X           The continuous predictor under consideration.
Y           The continuous target variable.
\bar{x}     The sample mean of predictor variable X.
\bar{y}     The sample mean of target variable Y.
s_x^2       The sample variance of predictor variable X.
s_y^2       The sample variance of target variable Y.
The above notations are based on nonmissing pairs of (X, Y).
The Pearson correlation coefficient r is

r = \frac{ \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y}) }{ \sqrt{ \sum_{i=1}^{N} (x_i - \bar{x})^2 \, \sum_{i=1}^{N} (y_i - \bar{y})^2 } }

The transformation t on r is given by

t = r \sqrt{ \frac{N-2}{1 - r^2} }

Under the null hypothesis that the population Pearson correlation coefficient ρ = 0, the p value is calculated as

p value = 2 Prob{ T > |t| } if r^2 < 1, and 0 else.
T is a random variable that follows a t distribution with N−2 degrees of freedom. The p value
based on the Pearson correlation coefficient is a test of a linear relationship between X and Y. If
there is some nonlinear relationship between X and Y, the test may fail to catch it.
Predictors are ranked by the following rules:
1. Sort predictors by p value in ascending order.
2. If ties occur, sort by r^2 in descending order.
3. If ties still occur, sort by N in descending order.
4. If ties still occur, sort by the data file order.
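A worked sketch of r and the t transformation (illustrative code with hypothetical data; the two-sided p value would come from a t distribution with N−2 degrees of freedom):

```python
# Illustrative sketch: Pearson correlation r and the transformation
# t = r * sqrt((N-2) / (1 - r^2)) from the formulas above.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]                 # hypothetical, nearly linear target

N = len(x)
mx, my = sum(x) / N, sum(y) / N
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / math.sqrt(sxx * syy)
t = r * math.sqrt((N - 2) / (1.0 - r * r))
print(round(r, 4), round(t, 2))               # strong linear association
```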
Mixed Type Predictors
If some predictors are continuous and some are categorical in the dataset, the criterion for continuous predictors is still the p value based on the t transformation, while that for categorical predictors is the p value based on the F statistic.
Predictors are ranked by the following rules:
1. Sort predictors by p value in ascending order.
2. If ties occur, follow the rules for breaking ties among all categorical and all continuous predictors
separately, then sort these two groups (categorical predictor group and continuous predictor group)
by the data file order of their first predictors.
Selecting Predictors
If the length of the predictor list has not been prespecified, the following formula provides an
automatic approach to determine the length of the list.
Let L0 be the total number of predictors under study. The length of the list L may be determined by

L = \min\left\{ L_0, \; \max\left\{ 30, \; \left[ 2\sqrt{L_0} \right] \right\} \right\}

where [x] is the closest integer to x. The following table illustrates the length L of the list for different values of the total number of predictors L0.

L0       L     L/L0 (%)
10       10    100.00%
15       15    100.00%
20       20    100.00%
25       25    100.00%
30       30    100.00%
40       30    75.00%
50       30    60.00%
60       30    50.00%
100      30    30.00%
500      45    9.00%
1000     63    6.30%
1500     77    5.13%
2000     89    4.45%
5000     141   2.82%
10,000   200   2.00%
20,000   283   1.42%
50,000   447   0.89%
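The guide's displayed expression for L was damaged in this copy; a formula consistent with every row of the table is L = min{L0, max{30, [2√L0]}}, offered as a reconstruction rather than the official definition. The sketch below checks it against the tabulated values.

```python
# Reconstruction (not the guide's verbatim formula): list length
# L = min{ L0, max{ 30, [2 * sqrt(L0)] } }, with [x] the nearest integer.
import math

def list_length(L0):
    return min(L0, max(30, round(2 * math.sqrt(L0))))

# verify against rows of the table above
for L0, L in [(10, 10), (40, 30), (500, 45), (2000, 89), (50000, 447)]:
    assert list_length(L0) == L
print(list_length(10000))   # 200
```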
Generated Model
The feature selection generated model is different from most other generated models in that it does
not add predictors or other derived fields to the data stream. Instead, it acts as a filter, removing
unwanted fields from the data stream based on generated model settings.
The set of fields filtered from the stream is controlled by one of the following criteria:

Field importance categories (Important, Marginal, or Unimportant). Fields assigned to any
of the selected categories are preserved; others are filtered.

Top k fields. The k fields with the highest importance values are preserved; others are filtered.

Importance value. Fields with importance value greater than the specified value are preserved;
others are filtered.

Manual selection. The user can select specific fields to be preserved or filtered.
GENLIN Algorithms
Generalized linear models (GZLM) are commonly used analytical tools for different types of data.
Generalized linear models cover not only widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, and loglinear models for count data, but also many other useful statistical models, via their very general model formulation.
Generalized Linear Models
Generalized linear models were first introduced by Nelder and Wedderburn (1972) and later
expanded by McCullagh and Nelder (1989). The following discussion is based on their works.
Notation
The following notation is used throughout this section unless otherwise stated:

Table 18-1
Notation

Notation   Description
n          Number of complete cases in the dataset. It is an integer and n ≥ 1.
p          Number of parameters (including the intercept, if it exists) in the model. It is an integer and p ≥ 1.
px         Number of non-redundant columns in the design matrix. It is an integer and px ≥ 1.
y          n × 1 dependent variable vector. The rows are the cases.
r          n × 1 vector of events for the binomial distribution; it usually represents the number of “successes.” All elements are non-negative integers.
m          n × 1 vector of trials for the binomial distribution. All elements are positive integers and m_i ≥ r_i, i = 1,...,n.
μ          n × 1 vector of expectations of the dependent variable.
η          n × 1 vector of linear predictors.
X          n × p design matrix. The rows represent the cases and the columns represent the parameters. The ith row is x_i^T, i = 1,...,n, with x_{i1} = 1 if the model has an intercept.
O          n × 1 vector of scale offsets. This variable can’t be the dependent variable (y) or one of the predictor variables (X).
β          p × 1 vector of unknown parameters. The first element in β is the intercept, if there is one.
ω          n × 1 vector of scale weights. If an element is less than or equal to 0 or missing, the corresponding case is not used.
f          n × 1 vector of frequency counts. Non-integer elements are treated by rounding the value to the nearest integer. For values less than 0.5 or missing, the corresponding cases are not used.
N          Effective sample size. If the frequency count variable f is not used, N = n.
Model
A GZLM of y with predictor variables X has the form

\eta = g(\mathrm{E}(y)) = X\beta + O, \qquad y \sim F

where η is the linear predictor; O is an offset variable with a constant coefficient of 1 for each observation; g(.) is the monotonic differentiable link function which states how the mean of y, \mathrm{E}(y) = \mu, is related to the linear predictor η; F is the response probability distribution. Choosing different combinations of a proper probability distribution and a link function can result in different models.

In addition, GZLM also assumes that the y_i are independent for i = 1,…,n. Then for each observation, the model becomes

\eta_i = g(\mu_i) = x_i^T \beta + o_i, \qquad y_i \sim F
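As an illustration of the model equation, a minimal sketch evaluates η = Xβ + O and the mean μ = g⁻¹(η) for a log link; the design matrix, parameters, and offsets are hypothetical.

```python
# Illustrative sketch: evaluating the GZLM mean through a log link,
# mu = g^{-1}(eta) with eta = X*beta + O (hypothetical values).
import math

X = [[1.0, 2.0],        # each row: intercept term and one predictor
     [1.0, 0.5]]
beta = [0.1, 0.3]       # hypothetical parameter estimates
O = [0.0, 1.0]          # scale offsets, constant coefficient of 1

eta = [sum(xj * bj for xj, bj in zip(row, beta)) + o
       for row, o in zip(X, O)]           # linear predictors eta_i
mu = [math.exp(e) for e in eta]           # inverse of the log link g(mu) = log(mu)
print([round(e, 3) for e in eta], [round(m, 3) for m in mu])
```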
Notes

X can be any combination of scale variables (covariates), categorical variables (factors),
and interactions. The parameterization of X is the same as in the GLM procedure. Due to
use of the over-parameterized model where there is a separate parameter for every factor
effect level occurring in the data, the columns of the design matrix X are often dependent.
Collinearity between scale variables in the data can also occur. To establish the dependencies in the design matrix, columns of X^T \Psi X, where \Psi = \mathrm{diag}(f_1 \omega_1, \ldots, f_n \omega_n), are examined by using the sweep operator. When a column is found to be dependent on previous columns, the corresponding parameter is treated as redundant. The solution for redundant parameters is fixed at zero.

When y is a binary dependent variable which can be character or numeric, such as “male”/“female” or 1/2, its values will be transformed to 0 and 1, with 1 typically representing a success or some other positive result. In this document, we assume that y has been transformed to 0/1 values and that we always model the probability of success; that is, Prob(y = 1). Which original value is transformed to 0 or 1 depends on what the reference category is. If the reference category is the last value, then the first category represents a success and we model the probability of it. For example, if the reference category is the last value, “male” in “male”/“female” and 2 in 1/2 are the last values (since “male” comes later in the dictionary than “female”) and would be transformed to 0, and “female” and 1 would be transformed to 1 as we model the probability of them, respectively. One way to model the probability of “male” and 2 instead is to specify the reference category as the first value. Note that if the original binary format is 0/1 and the reference category is the last value, then 0 would be transformed to 1 and 1 to 0.

When r, representing the number of successes (or number of 1s) and m, representing
the number of trials, are used for the binomial distribution, the response is the binomial
proportion y = r/m.
Probability Distribution
GZLMs are usually formulated within the framework of the exponential family of distributions.
The probability density function of the response Y for the exponential family can be presented as

f(y) = \exp\left\{ \frac{ y\theta - b(\theta) }{ \phi / \omega } + c\left( y, \phi/\omega \right) \right\}

where θ is the canonical (natural) parameter, φ is the scale parameter related to the variance of y, and ω is a known prior weight which varies from case to case. Different forms of b(θ) and c(y, φ/ω) will give specific distributions. In fact, the exponential family provides a notation that allows us to model both continuous and discrete (count, binary, and proportional) outcomes. Several are available, including continuous distributions (normal, inverse Gaussian, gamma) and discrete ones (negative binomial, Poisson, binomial).

The mean and variance of y can be expressed as follows:

\mathrm{E}(y) = b'(\theta) = \mu, \qquad \mathrm{Var}(y) = \frac{\phi}{\omega} \, b''(\theta) = \frac{\phi}{\omega} \, V(\mu)

where b'(θ) and b''(θ) denote the first and second derivatives of b with respect to θ, respectively, and V(μ) is the variance function, which is a function of μ.
In GZLM, the distribution of y is parameterized in terms of the mean (μ) and a scale parameter
( ) instead of the canonical parameter (θ). The following table lists the distribution of y,
corresponding range of y, variance function (V(μ)), the variance of y (Var(y)), and the first
derivative of the variance function (
), which will be used later.
Table 18-2
Distribution, range and variance of the response, variance function, and its first derivative

Distribution        Range of y   V(μ)       Var(y)       V′(μ)
Normal              (−∞,∞)       1          φ            0
Inverse Gaussian    (0,∞)        μ³         φμ³          3μ²
Gamma               (0,∞)        μ²         φμ²          2μ
Negative binomial   0(1)∞        μ+kμ²      μ+kμ²        1+2kμ
Poisson             0(1)∞        μ          μ            1
Binomial(m)         0(1)m/m      μ(1−μ)     μ(1−μ)/m     1−2μ

Notes
- 0(1)z means the range is from 0 to z with increments of 1; that is, 0, 1, 2, …, z.
- For the binomial distribution, the binomial trial variable m is considered as a part of the
weight variable ω.
- If a weight variable ω is present, φ is replaced by φ/ω.
- For the negative binomial distribution, the ancillary parameter (k) can be user-specified.
When k = 0, the negative binomial distribution reduces to the Poisson distribution. When
k = 1, the negative binomial is the geometric distribution.
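The V(μ) and V′(μ) columns of Table 18-2 can be expressed as a small lookup; this is a sketch, and the function and key names are illustrative.

```python
# Variance function V(mu) and its first derivative V'(mu), per Table 18-2.
# k is the negative binomial ancillary parameter.
def variance_function(dist, mu, k=1.0):
    table = {
        "normal":            (1.0,               0.0),
        "inverse_gaussian":  (mu ** 3,           3 * mu ** 2),
        "gamma":             (mu ** 2,           2 * mu),
        "negative_binomial": (mu + k * mu ** 2,  1 + 2 * k * mu),
        "poisson":           (mu,                1.0),
        "binomial":          (mu * (1 - mu),     1 - 2 * mu),
    }
    return table[dist]
```

Setting k = 0 makes the negative binomial entries coincide with the Poisson ones, consistent with the note above.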
Scale parameter handling. The expressions for V(μ) and Var(y) for continuous distributions include
the scale parameter φ, which can be used to scale the relationship of the variance and mean (Var(y)
and μ). Since it is usually unknown, there are three ways to fit the scale parameter:

1. It can be estimated jointly with β by the maximum likelihood method.

2. It can be set to a fixed positive value.

3. It can be specified by the deviance or Pearson chi-square. For more information, see the
topic “Goodness-of-Fit Statistics” on p. 178.

On the other hand, discrete distributions do not have this extra parameter (it is theoretically equal
to one). Because of this, the variance of y might not equal the nominal variance in practice
(especially for the Poisson and binomial, because the negative binomial has an ancillary parameter k).
A simple way to adjust for this is to allow the variance of y for discrete distributions to include
the scale parameter as well, but unlike continuous distributions, it can’t be estimated by the ML
method. So for discrete distributions, there are two ways to obtain the value of φ:

1. It can be specified by the deviance or Pearson chi-square.

2. It can be set to a fixed positive value.
To ensure the data fit the range of response for the specified distribution, the following rules are applied:
- For the gamma or inverse Gaussian distributions, values of y must be real and greater than
zero. If a value of y is less than or equal to 0 or missing, the corresponding case is not used.
- For the negative binomial and Poisson distributions, values of y must be integer and
non-negative. If a value of y is non-integer, less than 0, or missing, the corresponding case is
not used.
- For the binomial distribution, if the response is in the form of a single variable, y must
have only two distinct values. If y has more than two distinct values, the algorithm terminates
in an error.
- For the binomial distribution, if the response is in the form of a ratio of two variables denoted
events/trials, values of r (the number of events) must be non-negative integers, values of m
(the number of trials) must be positive integers, and mi ≥ ri, ∀ i. If a value of r is not an integer,
less than 0, or missing, the corresponding case is not used. If a value of m is not an integer, less
than or equal to 0, less than the corresponding value of r, or missing, the corresponding
case is not used.
The ML method will be used to estimate β and possibly φ. The kernels of the log-likelihood
function (ℓk) and the full log-likelihood function (ℓ), which will be used as the objective function
for parameter estimation, are listed for each distribution in the following table. Using ℓ or ℓk won’t
affect the parameter estimation, but the selection will affect the calculation of information criteria.
For more information, see the topic “Goodness-of-Fit Statistics” on p. 178.
Table 18-3
The log-likelihood function for each probability distribution

For case i, let fi be the frequency weight, ωi the scale weight, and φ the scale parameter.

Normal:
ℓk = Σi fi { −ωi(yi − μi)²/(2φ) − ln(φ/ωi)/2 }
ℓ = ℓk − (N/2)·ln(2π)

Inverse Gaussian:
ℓk = Σi fi { −ωi(yi − μi)²/(2yiμi²φ) − ln(φyi³/ωi)/2 }
ℓ = ℓk − (N/2)·ln(2π)

Gamma:
ℓk = Σi fi { (ωi/φ)·ln(ωiyi/(φμi)) − ωiyi/(φμi) − ln(yi) − ln(Γ(ωi/φ)) }
ℓ = ℓk

Negative binomial:
ℓk = Σi fiωi { yi·ln(kμi) − (yi + 1/k)·ln(1 + kμi) + ln(Γ(yi + 1/k)) − ln(Γ(1/k)) } / φ
ℓ = ℓk − Σi fiωi·ln(Γ(yi + 1))/φ

Poisson:
ℓk = Σi fiωi { yi·ln(μi) − μi } / φ
ℓ = ℓk − Σi fiωi·ln(yi!)/φ

Binomial(m):
ℓk = Σi fiωi* { yi·ln(μi) + (1 − yi)·ln(1 − μi) } / φ, where ωi* = ωimi
ℓ = ℓk + Σi fiωi·ln( mi!/(ri!(mi − ri)!) ) / φ

When an individual y = 0 for the negative binomial or Poisson distributions, or y = 0 or 1 for the
binomial distribution, a separate value of the log-likelihood is given. Let ℓk,i be the log-likelihood
value for individual case i when yi = 0 for the negative binomial and Poisson and 0/1 for the
binomial. The full log-likelihood for i is equal to the kernel of the log-likelihood for i; that is,
ℓi = ℓk,i.

Table 18-4
Log-likelihood for special values of y

Negative binomial:  ℓk,i = −fiωi·ln(1 + kμi)/(kφ)   if yi = 0
Poisson:            ℓk,i = −fiωi·μi/φ               if yi = 0
Binomial(m):        ℓk,i = fiωi*·ln(1 − μi)/φ       if yi = 0
                    ℓk,i = fiωi*·ln(μi)/φ           if yi = 1

Notes
- Γ(z) is the gamma function and ln(Γ(z)) is the log-gamma function (the logarithm of the
gamma function), evaluated at z.
- For the negative binomial distribution, the scale parameter is still included in ℓk for flexibility,
although it is usually set to 1.
- For the binomial distribution (r/m), the scale weight variable becomes ωi* = ωimi in ℓk; that
is, the binomial trials variable m is regarded as a part of the weight. However, the scale
weight in the extra term of ℓ is still ωi.
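As a concrete instance of the table above, here is a minimal sketch of the Poisson kernel and full log-likelihood for one case (f = 1; `omega` is the scale weight and `phi` the scale parameter; the y = 0 branch is the separate value from Table 18-4):

```python
import math

def poisson_llk(y, mu, omega=1.0, phi=1.0):
    # Kernel: omega*(y*ln(mu) - mu)/phi; for y = 0 this reduces to -omega*mu/phi.
    if y == 0:
        return -omega * mu / phi
    return omega * (y * math.log(mu) - mu) / phi

def poisson_ll_full(y, mu, omega=1.0, phi=1.0):
    # Full log-likelihood subtracts omega*ln(y!)/phi; since ln(0!) = 0,
    # the full and kernel values coincide at y = 0, as stated above.
    return poisson_llk(y, mu, omega, phi) - omega * math.lgamma(y + 1) / phi
```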
Link Function

The following tables list the form, inverse form, range of the predicted mean, and first and second
derivatives for each link function.

Table 18-5
Link function name, form, inverse of link function, and range of the predicted mean

- Identity: η = μ; inverse μ = η; range −∞ < μ̂ < ∞.
- Log: η = ln(μ); inverse μ = exp(η); range μ̂ > 0.
- Logit: η = ln(μ/(1 − μ)); inverse μ = exp(η)/(1 + exp(η)); range 0 < μ̂ < 1.
- Probit: η = Φ−1(μ), where Φ is the cumulative distribution function of the standard normal
distribution; inverse μ = Φ(η); range 0 < μ̂ < 1.
- Complementary log-log: η = ln(−ln(1 − μ)); inverse μ = 1 − exp(−exp(η)); range 0 < μ̂ < 1.
- Power(α): η = μ^α if α ≠ 0, η = ln(μ) if α = 0; inverse μ = η^(1/α) if α ≠ 0 and (η > 0 or
1/α is an odd integer), μ = exp(η) if α = 0, undefined otherwise; range μ̂ > 0.
- Log-complement: η = ln(1 − μ); inverse μ = 1 − exp(η); range μ̂ < 1.
- Negative log-log: η = −ln(−ln(μ)); inverse μ = exp(−exp(−η)); range 0 < μ̂ < 1.
- Negative binomial: η = ln( μ/(μ + 1/k) ); inverse μ = exp(η)/( k(1 − exp(η)) ); range μ̂ > 0.
- Odds power(α): η = ((μ/(1 − μ))^α − 1)/α; inverse μ = (1 + αη)^(1/α) / (1 + (1 + αη)^(1/α));
range 0 < μ̂ < 1.

Note: In the power link function, if |α| < 2.2e-16, α is treated as 0.
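A few of the links in Table 18-5 can be written as (g, g⁻¹, g′) triples; this is an illustrative sketch, and the dictionary keys are not product names.

```python
import math

# (form g(mu), inverse g^{-1}(eta), first derivative g'(mu)) for selected links.
LINKS = {
    "identity": (lambda mu: mu,
                 lambda eta: eta,
                 lambda mu: 1.0),
    "log":      (lambda mu: math.log(mu),
                 lambda eta: math.exp(eta),
                 lambda mu: 1.0 / mu),
    "logit":    (lambda mu: math.log(mu / (1.0 - mu)),
                 lambda eta: 1.0 / (1.0 + math.exp(-eta)),
                 lambda mu: 1.0 / (mu * (1.0 - mu))),
    "cloglog":  (lambda mu: math.log(-math.log(1.0 - mu)),
                 lambda eta: 1.0 - math.exp(-math.exp(eta)),
                 lambda mu: -1.0 / ((1.0 - mu) * math.log(1.0 - mu))),
}
```

Each inverse undoes its form, which is a quick sanity check on the table entries.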
Table 18-6
The first and second derivatives of the link functions

- Identity: g′(μ) = 1; g″(μ) = 0.
- Log: g′(μ) = 1/μ; g″(μ) = −1/μ².
- Logit: g′(μ) = 1/(μ(1 − μ)); g″(μ) = (2μ − 1)/(μ(1 − μ))².
- Probit: g′(μ) = 1/ϕ(Φ−1(μ)), where ϕ is the probability density function of the standard
normal distribution; g″(μ) = Φ−1(μ)/ϕ(Φ−1(μ))².
- Complementary log-log: g′(μ) = −1/((1 − μ)·ln(1 − μ)); g″(μ) = −(1 + ln(1 − μ))/((1 − μ)·ln(1 − μ))².
- Power(α): g′(μ) = αμ^(α−1); g″(μ) = α(α − 1)μ^(α−2).
- Log-complement: g′(μ) = −1/(1 − μ); g″(μ) = −1/(1 − μ)².
- Negative log-log: g′(μ) = −1/(μ·ln(μ)); g″(μ) = (1 + ln(μ))/(μ·ln(μ))².
- Negative binomial: g′(μ) = 1/(μ(kμ + 1)); g″(μ) = −(2kμ + 1)/(μ(kμ + 1))².
- Odds power(α): g′(μ) = μ^(α−1)/(1 − μ)^(α+1); g″(μ) = (α − 1 + 2μ)·μ^(α−2)/(1 − μ)^(α+2).
When the canonical parameter is equal to the linear predictor, that is θ = η, the link function is
called the canonical link function. Although the canonical links lead to desirable statistical
properties of the model, particularly in small samples, there is in general no a priori reason why
the systematic effects in a model should be additive on the scale given by that link. The canonical
link functions for probability distributions are given in the following table.

Table 18-7
Canonical and default link functions for probability distributions

Distribution        Canonical link function
Normal              Identity
Inverse Gaussian    Power(−2)
Gamma               Power(−1)
Negative binomial   Negative binomial
Poisson             Log
Binomial            Logit
Estimation

Having selected a particular model, it is required to estimate the parameters and to assess the
precision of the estimates.

Parameter estimation

The parameters are estimated by maximizing the log-likelihood function (or the kernel of the
log-likelihood function) from the observed data. Let s be the first derivative (gradient) vector of
the log-likelihood with respect to each parameter; then we wish to solve

s = 0

In general, there is no closed form solution except for a normal distribution with identity link
function, so estimates are obtained numerically via an iterative process. A Newton-Raphson
and/or Fisher scoring algorithm is used, based on a linear Taylor series approximation
of the first derivative of the log-likelihood.
First Derivatives

If the scale parameter φ is not estimated by the ML method, s is a p×1 vector with the form:

s = ∂ℓ/∂β = Σi fiωi(yi − μi)·xi / ( φ·V(μi)·g′(μi) )

where μi, V(μi) and g′(μi) are defined in Table 18-5 “Link function name, form, inverse of link
function, and range of the predicted mean” on p. 168, Table 18-2 “Distribution, range and variance
of the response, variance function, and its first derivative” on p. 165, and Table 18-6 “The first and
second derivatives of link function” on p. 168, respectively.

If the scale parameter φ is estimated by the ML method, it is handled by searching for ln(φ), since
φ is required to be greater than zero. Let τ = ln(φ), so φ = exp(τ); then s is a (p+1)×1 vector with
the following form

s = [ ∂ℓ/∂β ; ∂ℓ/∂τ ]

where ∂ℓ/∂β is the same as the above with φ replaced with exp(τ), and ∂ℓ/∂τ has a different form
depending on the distribution, as follows:

Table 18-8
The 1st derivative functions w.r.t. the scale parameter for probability distributions

Normal:            ∂ℓ/∂τ = Σi fi { ωi(yi − μi)²/(2φ) − 1/2 }
Inverse Gaussian:  ∂ℓ/∂τ = Σi fi { ωi(yi − μi)²/(2yiμi²φ) − 1/2 }
Gamma:             ∂ℓ/∂τ = Σi fi(ωi/φ) { yi/μi − ln( ωiyi/(φμi) ) + ψ(ωi/φ) − 1 }

Note: ψ(z) is the digamma function, which is the derivative of the logarithm of the gamma function,
evaluated at z; that is, ψ(z) = d ln(Γ(z))/dz.
As mentioned above, for the normal distribution with identity link function, which is a classical
linear regression model, there is a closed form solution for both β and τ, so no iterative process is
needed. The solution for β, after applying the SWEEP operation as in the GLM procedure, is

β̂ = (XᵀΨX)⁻ XᵀΨy

where Ψ = diag(fiωi) and Z⁻ denotes the generalized inverse of a matrix Z. If the scale
parameter is also estimated by the ML method, the estimate of τ is

τ̂ = ln( (1/N)·Σi fiωi(yi − xiᵀβ̂)² )
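The closed-form weighted least-squares solution above can be sketched in a few lines; this is illustrative, with Ψ passed as a weight vector and the normal equations solved directly (assuming a nonsingular XᵀΨX rather than using SWEEP or a generalized inverse).

```python
# Solve beta = (X^T Psi X)^{-1} X^T Psi y with Psi = diag(w), via Gaussian elimination.
def wls(X, y, w):
    n, p = len(y), len(X[0])
    # Normal equations: A beta = c
    A = [[sum(w[i] * X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
         for a in range(p)]
    c = [sum(w[i] * X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            r = A[j][i] / A[i][i]
            A[j] = [A[j][k] - r * A[i][k] for k in range(p)]
            c[j] -= r * c[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):           # back substitution
        beta[i] = (c[i] - sum(A[i][k] * beta[k] for k in range(i + 1, p))) / A[i][i]
    return beta
```

For example, a perfectly linear response recovers the exact coefficients in one pass, with no iteration, exactly as the text states for the normal/identity case.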
Second Derivatives

Let H be the second derivative (Hessian) matrix. If the scale parameter is not estimated by the ML
method, H is a p×p matrix with the following form

H = −XᵀWX

where W is an n×n diagonal matrix. There are two definitions of W, depending on which
algorithm is used: We for Fisher scoring and Wo for Newton-Raphson. The ith diagonal element
of We is

we,i = fiωi / ( φ·V(μi)·(g′(μi))² )

and the ith diagonal element of Wo is

wo,i = we,i + fiωi(yi − μi)·[ V(μi)·g″(μi) + V′(μi)·g′(μi) ] / ( φ·(V(μi))²·(g′(μi))³ )

where V(μi), V′(μi), g′(μi) and g″(μi) are defined in Table 18-2 “Distribution, range and variance of the
response, variance function, and its first derivative” on p. 165 and Table 18-6 “The first and second
derivatives of link function” on p. 168, respectively. Note that the expected value of Wo is We, and
when the canonical link is used for the specified distribution, Wo = We.

If the scale parameter is estimated by the ML method, H becomes a (p+1)×(p+1) matrix with the
form

H = [ ∂²ℓ/∂β∂βᵀ   ∂²ℓ/∂β∂τ ]
    [ ∂²ℓ/∂τ∂βᵀ   ∂²ℓ/∂τ²  ]

where ∂²ℓ/∂τ∂βᵀ is a 1×p vector and the transpose of ∂²ℓ/∂β∂τ, which is a p×1 vector.
For all three continuous distributions:

∂²ℓ/∂β∂τ = −Σi fiωi(yi − μi)·xi / ( φ·V(μi)·g′(μi) )

The forms of ∂²ℓ/∂τ² are listed in the following table.

Table 18-9
The second derivative functions w.r.t. the scale parameter for probability distributions

Normal:            ∂²ℓ/∂τ² = −Σi fiωi(yi − μi)²/(2φ)
Inverse Gaussian:  ∂²ℓ/∂τ² = −Σi fiωi(yi − μi)²/(2yiμi²φ)
Gamma:             ∂²ℓ/∂τ² = Σi fi { (ωi/φ)[ ln( ωiyi/(φμi) ) − yi/μi − ψ(ωi/φ) + 2 ] − (ωi/φ)²·ψ′(ωi/φ) }

Note: ψ′(z) is the trigamma function, which is the derivative of ψ(z), evaluated at z.
Iterations

An iterative process to find the solution for β (which might include φ) is based on Newton-Raphson
(for all iterations), Fisher scoring (for all iterations) or a hybrid method. The hybrid method
consists of applying Fisher scoring steps for a specified number of iterations before switching
to Newton-Raphson steps. Newton-Raphson performs well if the initial values are close to the
solution, and the hybrid method can be used to improve the algorithm’s robustness to bad initial
values. Apart from improved robustness, Fisher scoring is faster due to the simpler form of
the Hessian matrix.
The following notation applies to the iterative process:

Table 18-10
Notation

Notation    Description
I           Starting iteration for checking complete separation and quasi-complete separation. It
            must be 0 or a positive integer. This criterion is not used if the value is 0.
J           The maximum number of steps in the step-halving method. It must be a positive integer.
K           The first number of iterations using Fisher scoring, then switching to Newton-Raphson.
            It must be 0 or a positive integer. A value of 0 means using Newton-Raphson for all
            iterations and a value greater than or equal to M means using Fisher scoring for all iterations.
M           The maximum number of iterations. It must be a non-negative integer. If the value is
            0, then initial parameter values become final estimates.
εL, εp, εH  Tolerance levels for the three types of convergence criteria.
Abs         A 0/1 binary variable; Abs = 1 if absolute change is used for the convergence criteria
            and Abs = 0 if relative change is used.
And the iterative process is outlined as follows:

1. Input values for I, J, K, M, the tolerance levels εL, εp, εH, and Abs for each of the three types
of convergence criteria.

2. For β(0), compute initial values (see below), then calculate the log-likelihood ℓ(0), gradient vector
s(0) and Hessian matrix H(0) based on β(0).

3. Let ξ = 1.

4. Compute the estimates for the ith iteration:

β(i) = β(i−1) − ξ·(H(i−1))⁻ s(i−1)

where (H)⁻ is a generalized inverse of H. Then compute the log-likelihood ℓ(i) based on β(i).

5. Use the step-halving method if ℓ(i) < ℓ(i−1): reduce ξ by half and repeat step (4). The set of
values of ξ is {0.5^j : j = 0, …, J − 1}. If J is reached but the log-likelihood is not improved, issue a
warning message, then stop.

6. Compute the gradient vector s(i) and Hessian matrix H(i) based on β(i). Note that We is used to
calculate H(i) if i ≤ K; Wo is used to calculate H(i) if i > K.

7. Check if complete or quasi-complete separation of the data is established (see below) if the
distribution is binomial and the current iteration i ≥ I. If either complete or quasi-complete
separation is detected, issue a warning message, then stop.

8. Check if all three convergence criteria (see below) are met. If they are not but M is reached,
issue a warning message, then stop.

9. If all three convergence criteria are met, check if complete or quasi-complete separation of
the data is established if the distribution is binomial and i < I (because checking for complete or
quasi-complete separation has not started yet). If complete or quasi-complete separation is
detected, issue a warning message, then stop; otherwise, stop (the process converges for binomial
successfully). If all three convergence criteria are met for distributions other than binomial,
stop (the process converges successfully). The final vector of estimates is
denoted by β̂ (and τ̂). Otherwise, go back to step (3).
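For intuition, the loop above can be sketched for the simplest possible case: a Poisson model with log link and only an intercept. This is an illustrative sketch, not the product's implementation; for this canonical link Wo = We, so Fisher scoring and Newton-Raphson coincide, and a deliberately poor initial value of 0 is used to show the iteration.

```python
import math

def fit_poisson_intercept(y, M=50, J=5, tol=1e-10):
    beta = 0.0                                 # illustrative (bad) initial value
    def ll(b):                                 # kernel log-likelihood
        return sum(yi * b - math.exp(b) for yi in y)
    ll_old = ll(beta)
    for _ in range(M):                         # step 4: Newton/Fisher update
        mu = math.exp(beta)
        s = sum(yi - mu for yi in y)           # gradient
        h = -len(y) * mu                       # Hessian (Wo == We here)
        step = -s / h
        xi, cand = 1.0, beta + step
        for _ in range(J):                     # step 5: step-halving
            if ll(cand) >= ll_old:
                break
            xi /= 2.0
            cand = beta + xi * step
        beta = cand
        if abs(ll(beta) - ll_old) < tol:       # step 8: absolute ll convergence
            break
        ll_old = ll(beta)
    return beta
```

For an intercept-only Poisson model the ML solution is ln(ȳ), which the loop reaches in a handful of iterations.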
Initial Values

Initial values are calculated as follows:

1. Set the initial fitted values μ̂i = (miyi + 0.5)/(mi + 1) for a binomial distribution (yi can be
a proportion or 0/1 value) and μ̂i = yi for a non-binomial distribution. From these derive
η̂i = g(μ̂i), g′(μ̂i) and V(μ̂i). If η̂i becomes undefined, set η̂i = 1.

2. Calculate the weight matrix W̃e with the diagonal element w̃e,i = fiωi/( φ·V(μ̂i)·(g′(μ̂i))² ), where
φ is set to 1 or a fixed positive value. If the denominator of w̃e,i becomes 0, set w̃e,i = 0.

3. Assign the adjusted dependent variable z with the ith observation zi = η̂i + (yi − μ̂i)·g′(μ̂i),
where for a binomial distribution yi is the proportion and for a non-binomial distribution yi is
the original response.

4. Calculate the initial parameter values

β(0) = (XᵀW̃eX)⁻ XᵀW̃e z

and

φ(0) = (1/N)·(z − Xβ(0))ᵀ W̃e (z − Xβ(0))

if the scale parameter is estimated by the ML method.
Scale Parameter Handling

1. For normal, inverse Gaussian, and gamma responses, if the scale parameter is estimated by the ML
method, then it is estimated jointly with the regression parameters; that is, the last element
of the gradient vector s is with respect to τ.

2. If the scale parameter is set to a fixed positive value, then it is held fixed at that value in
each iteration of the above process.

3. If the scale parameter is specified by the deviance or Pearson chi-square divided by degrees of
freedom, then it is fixed at 1 to obtain the regression estimates through the whole iterative
process. Based on the regression estimates, the deviance and Pearson chi-square values are
calculated and the scale parameter estimate is obtained.
Checking for Separation

For each iteration after the user-specified number of iterations (that is, if i > I), calculate the
probability of the observed response for each case v in the dataset:

p̂v = μ̂v if case v is a success, and p̂v = 1 − μ̂v if case v is a failure

where μ̂v = g−1(xvᵀβ). If every observed response is predicted essentially perfectly (each p̂v is
close to 1), we consider there to be complete separation. Otherwise, if most observed responses
are predicted essentially perfectly and there are very small diagonal elements (in absolute value)
in the non-redundant parameter locations of the lower triangular matrix in the Cholesky
decomposition of −H, where H is the Hessian matrix, then there is quasi-complete separation.
Convergence Criteria

The following convergence criteria are considered:

Log-likelihood convergence:
|ℓ(i) − ℓ(i−1)| / |ℓ(i−1)| < εL   if relative change
|ℓ(i) − ℓ(i−1)| < εL              if absolute change

Parameter convergence:
maxj |βj(i) − βj(i−1)| / |βj(i−1)| < εp   if relative change
maxj |βj(i) − βj(i−1)| < εp               if absolute change

Hessian convergence:
(s(i))ᵀ(−H(i))⁻ s(i) / |ℓ(i)| < εH   if relative change
(s(i))ᵀ(−H(i))⁻ s(i) < εH            if absolute change

where εL, εp and εH are the given tolerance levels for each type.

If the Hessian convergence criterion is not user-specified, it is checked based on absolute change
with εH = 1E-4 after the log-likelihood or parameter convergence criterion has been satisfied. If
Hessian convergence is not met, a warning is displayed.
Parameter Estimate Covariance Matrix, Correlation Matrix and Standard Errors

The parameter estimate covariance matrix, correlation matrix and standard errors can be
obtained easily from the parameter estimates. Whether or not the scale parameter is estimated by
ML, the parameter estimate covariance and correlation matrices are listed for β only, because the
covariance between β̂ and τ̂ should be zero.

Model-Based Parameter Estimate Covariance

The model-based parameter estimate covariance matrix is given by

Σm = −H⁻

where H⁻ is the generalized inverse of the Hessian matrix evaluated at the parameter estimates.
The corresponding rows and columns for redundant parameter estimates should be set to zero.
Robust Parameter Estimate Covariance

The validity of the parameter estimate covariance matrix based on the Hessian depends on the
correct specification of the variance function of the response in addition to the correct specification
of the mean regression function of the response. The robust parameter estimate covariance
provides a consistent estimate even when the specification of the variance function of the response
is incorrect. The robust estimator is also called Huber’s estimator because Huber (1967) was
the first to describe this variance estimate; White’s estimator or HCCM (heteroskedasticity
consistent covariance matrix) estimator because White (1980) independently showed that this
variance estimate is consistent under a linear regression model including heteroskedasticity; or
the sandwich estimator because it includes three terms. The robust (or Huber/White/sandwich)
estimator is defined as follows

Σr = Σm ( Σi (∂ℓi/∂β)(∂ℓi/∂β)ᵀ ) Σm

where ∂ℓi/∂β = fiωi(yi − μi)·xi/( φ·V(μi)·g′(μi) ) is the contribution of case i to the gradient vector and
Σm is the model-based parameter estimate covariance matrix.
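The sandwich structure Σr = Σm·B·Σm, with B the sum of outer products of the per-case gradient contributions, can be sketched with pure-Python helpers (illustrative only):

```python
# Multiply two matrices given as nested lists.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Sandwich estimator: "bread" Sm on the outside, outer-product "meat" inside.
def sandwich(Sm, grads):
    p = len(Sm)
    meat = [[sum(g[a] * g[b] for g in grads) for b in range(p)] for a in range(p)]
    return matmul(matmul(Sm, meat), Sm)
```

With Σm the identity, the result is just the outer-product sum, which makes the three-term structure easy to see.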
Parameter Estimate Correlation

The correlation matrix is calculated from the covariance matrix as usual. Let σij be an element of
Σm or Σr; then the corresponding element of the correlation matrix is σij/( √σii·√σjj ). The
corresponding rows and columns for redundant parameter estimates should be set to system
missing values.

Parameter Estimate Standard Error

Let β̂i denote a non-redundant parameter estimate. Its standard error is the square root of the
ith diagonal element of Σm or Σr:

σ̂i = √σii

The standard error for redundant parameter estimates is set to a system missing value. If the
scale parameter is estimated by the ML method, we obtain τ̂ and its standard error estimate
σ̂τ, where ∂²ℓ/∂τ² can be found in Table 18-9 “The second derivative functions w.r.t. the
scale parameter for probability distributions” on p. 172. Then the estimate of the scale parameter
is φ̂ = exp(τ̂) and the standard error estimate is φ̂·σ̂τ.
Wald Confidence Intervals

Wald confidence intervals are based on the asymptotic normal distribution of the parameter
estimates. The 100(1 − α)% Wald confidence interval for βj is given by

( β̂j − z1−α/2·σ̂j , β̂j + z1−α/2·σ̂j )

where zp is the 100pth percentile of the standard normal distribution.

If exponentiated parameter estimates are requested for logistic regression or log-linear models,
then using the delta method, the estimate of exp(βj) is exp(β̂j), the standard error estimate of
exp(β̂j) is exp(β̂j)·σ̂j, and the corresponding 100(1 − α)% Wald confidence interval for exp(βj) is

( exp(β̂j − z1−α/2·σ̂j) , exp(β̂j + z1−α/2·σ̂j) ).

Wald confidence intervals for redundant parameter estimates are set to system missing values.
Similarly, the 100(1 − α)% Wald confidence interval for φ is

( φ̂·exp(−z1−α/2·σ̂τ) , φ̂·exp(z1−α/2·σ̂τ) ).
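A sketch of the Wald intervals above, including the exponentiated (delta-method) version; `statistics.NormalDist` supplies the normal percentile, and the function names are illustrative.

```python
import math
from statistics import NormalDist

def wald_ci(beta, se, alpha=0.05):
    # z is the 100(1 - alpha/2)th percentile of the standard normal distribution.
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return beta - z * se, beta + z * se

def wald_ci_exp(beta, se, alpha=0.05):
    # Exponentiated CI: apply exp() to the endpoints of the linear-scale interval.
    lo, hi = wald_ci(beta, se, alpha)
    return math.exp(lo), math.exp(hi)
```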
Chi-Square Statistics

The hypothesis H0i: βi = 0 is tested for each non-redundant parameter using the chi-square
statistic:

χi² = ( β̂i / σ̂i )²

which has an asymptotic chi-square distribution with 1 degree of freedom.

Chi-square statistics and their corresponding p-values are set to system missing values for
redundant parameter estimates.

The chi-square statistic is not calculated for the scale parameter, even if it is estimated by the
ML method.

P Values

Given a test statistic T and a corresponding cumulative distribution function G as specified
above, the p-value is defined as p = 1 − G(T). For example, the p-value for the chi-square
test of H0i: βi = 0 is p = 1 − Pr( χ²(1) ≤ χi² ).
Model Testing

After estimating parameters and calculating relevant statistics, several tests for the given model
are performed.

Lagrange Multiplier Test

If the scale parameter for the normal, inverse Gaussian and gamma distributions is set to a fixed value
or specified by the deviance or Pearson chi-square divided by the degrees of freedom (when the
scale parameter is specified in this way, it can be considered a fixed value), or if an ancillary
parameter k for the negative binomial is set to a fixed value other than 0, the Lagrange Multiplier
(LM) test assesses the validity of the value. For a fixed φ or k, the test statistic is defined as

T_LM = sᵀA⁻¹s

where s and A are the first derivative and the negative of the second derivative of the
log-likelihood with respect to τ (or k), evaluated at the parameter estimates and the fixed φ or
k value. T_LM has an asymptotic chi-square distribution with 1 degree of freedom, and the
p-values are calculated accordingly.

For testing φ, see Table 18-8 “The 1st derivative functions w.r.t. the scale parameter for probability
distributions” on p. 170 and Table 18-9 “The second derivative functions w.r.t. the scale
parameter for probability distributions” on p. 172 for the elements of s and A, respectively.

If k is set to 0, the above statistic can’t be applied. According to Cameron and Trivedi (1998),
the LM test statistic should instead be based on the following auxiliary OLS regression (without
constant)

( (yi − μ̂i)² − yi ) / μ̂i = α·μ̂i + εi

where εi is an error term. Let the response of this OLS regression be zi = ((yi − μ̂i)² − yi)/μ̂i
and the explanatory variable be wi = μ̂i. The estimate of the regression parameter α and the
standard error of that estimate are

α̂ = Σi wizi / Σi wi²   and   se(α̂) = √( Σi (zi − α̂wi)² / ( (n − 1)·Σi wi² ) )

Then the LM test statistic is the z statistic z = α̂/se(α̂),
and it has an asymptotic standard normal distribution under the null hypothesis of equidispersion
in a Poisson model (H0: k = 0). Three p-values are provided. The alternative hypothesis
can be one-sided overdispersion (Ha: k > 0), underdispersion (Ha: k < 0) or two-sided
non-directional (Ha: k ≠ 0) with the variance function μ + kμ². The calculation
of the p-value depends on the alternative: for Ha: k > 0, p-value = 1 − Φ(z), where Φ is the
cumulative probability of a standard normal distribution; for Ha: k < 0, p-value = Φ(z); and for
Ha: k ≠ 0, p-value = 2·(1 − Φ(|z|)).
Goodness-of-Fit Statistics

Several statistics are calculated to assess goodness of fit of a given generalized linear model.

Deviance

The theoretical definition of deviance is

D = 2φ·( ℓ(y; y) − ℓ(μ̂; y) )

where ℓ(μ̂; y) is the log-likelihood function expressed as a function of the predicted mean values μ̂
(calculated based on the parameter estimates) given the response variable, and ℓ(y; y) is the
log-likelihood function computed by replacing μ̂ with y. The formula used for the deviance is
D = Σi fidi, where the form of di for each distribution is given in the following table:

Table 18-11
Deviance for individual case

Distribution        di
Normal              ωi(yi − μ̂i)²
Inverse Gaussian    ωi(yi − μ̂i)²/(yiμ̂i²)
Gamma               2ωi[ −ln(yi/μ̂i) + (yi − μ̂i)/μ̂i ]
Negative binomial   2ωi[ yi·ln(yi/μ̂i) − (yi + 1/k)·ln( (1 + kyi)/(1 + kμ̂i) ) ]
Poisson             2ωi[ yi·ln(yi/μ̂i) − (yi − μ̂i) ]
Binomial(m)         2ωimi[ yi·ln(yi/μ̂i) + (1 − yi)·ln( (1 − yi)/(1 − μ̂i) ) ]

Notes
- When y is a binary dependent variable with 0/1 values (binomial distribution), the deviance
and Pearson chi-square are calculated based on the subpopulations; see below.
- When y = 0 for the negative binomial and Poisson distributions, and y = 0 (for r = 0) or 1 (for r
= m) for the binomial distribution with r/m format, separate values are given for the deviance.
Let di be the deviance value for individual case i when yi = 0 for negative binomial and
Poisson and 0/1 for binomial.

Table 18-12
Deviance for individual case (special values)

Distribution        di
Negative binomial   2ωi·(1/k)·ln(1 + kμ̂i)   if yi = 0
Poisson             2ωi·μ̂i                   if yi = 0
Binomial(m)         −2ωimi·ln(1 − μ̂i)        if yi = 0 (ri = 0)
                    −2ωimi·ln(μ̂i)            if yi = 1 (ri = mi)

Pearson Chi-Square

The Pearson chi-square is

χ²P = Σi fiωi*(yi − μ̂i)² / V(μ̂i)

where ωi* = ωimi for the binomial distribution and ωi* = ωi
for other distributions.
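For the Poisson case, the deviance (with its separate y = 0 value from Table 18-12) and the Pearson chi-square, where V(μ) = μ, can be sketched as follows (illustrative; f is the frequency weight, with scale weights omitted):

```python
import math

def poisson_deviance(y, mu, f=None):
    # D = sum_i f_i * d_i, where d_i = 2*(y*ln(y/mu) - (y - mu)),
    # and d_i = 2*mu for the separate y = 0 value.
    f = f or [1.0] * len(y)
    D = 0.0
    for yi, mi, fi in zip(y, mu, f):
        di = 2 * mi if yi == 0 else 2 * (yi * math.log(yi / mi) - (yi - mi))
        D += fi * di
    return D

def poisson_pearson(y, mu, f=None):
    # Pearson chi-square with V(mu) = mu for the Poisson distribution.
    f = f or [1.0] * len(y)
    return sum(fi * (yi - mi) ** 2 / mi for yi, mi, fi in zip(y, mu, f))
```

A perfect fit gives a deviance of zero, which is a quick consistency check on the formulas.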
Scaled Deviance and Scaled Pearson Chi-Square

The scaled deviance is D* = D/φ and the scaled Pearson chi-square is χ²P* = χ²P/φ.

Since the scaled deviance and Pearson chi-square statistics have a limiting chi-square distribution
with N − px degrees of freedom, the deviance or Pearson chi-square divided by its degrees
of freedom can be used as an estimate of the scale parameter for both continuous and discrete
distributions:

φ̂ = D/(N − px)   or   φ̂ = χ²P/(N − px).

If the scale parameter is measured by the deviance or Pearson chi-square, we first assume φ = 1,
then estimate the regression parameters, calculate the deviance and Pearson chi-square values,
and obtain the scale parameter estimate from the above formula. Then the scaled version of both
statistics is obtained by dividing the deviance and Pearson chi-square by φ̂. In the meantime, some
statistics need to be revised: the gradient vector and the Hessian matrix are divided by φ̂ and
the covariance matrix is multiplied by φ̂. Accordingly, the estimated standard errors are also
adjusted, and the Wald confidence intervals and significance tests will be affected, even though
the parameter estimates are not affected by φ̂.

Note that the log-likelihood is not revised; that is, the log-likelihood is based on φ = 1, because the
scale parameter should be kept the same in the log-likelihood for fair comparison in information
criteria and the model fitting omnibus test.
Overdispersion
For the Poisson and binomial distributions, if the estimated scale parameter is not near the
assumed value of one, then the data may be overdispersed if the value is greater than one or
underdispersed if the value is less than one. Overdispersion is more common in practice. The
problem with overdispersion is that it may cause standard errors of the estimated parameters to be
underestimated. A variable may appear to be a significant predictor, when in fact it is not.
Deviance and Pearson Chi-Square for Binomial Distribution with 0/1 Binary Response Variable

When r and m (event/trial) variables are used for the binomial distribution, each case represents m
Bernoulli trials. When y is a binary dependent variable with 0/1 values, each case represents a
single trial. The trial can be repeated several times with the same setting (that is, the same values
for all predictor variables). For example, suppose the first 10 y values are 2 1s and 8 0s and the x
values are the same (if recorded in events/trials format, these 10 cases would be recorded as 1 case
with r = 2 and m = 10); then these 10 cases should be considered from the same subpopulation.
Cases with common values in the variable list that includes all predictor variables are regarded as
coming from the same subpopulation. When the binomial distribution with binary response is
used, the deviance and Pearson chi-square should be calculated based on the subpopulations. If we
calculate them based on the cases, the results might not be useful.

If subpopulations are specified for the binomial distribution with 0/1 binary response variable, the
data should be reconstructed from the single trial format to the events/trials format. Assume the
following notation for the formatted data:

Table 18-13
Notation

Notation  Description
ns        Number of subpopulations.
rj1       Sum of the product of the frequencies and the scale weights associated with y = 1 in the
          jth subpopulation; rj0 is the corresponding sum for y = 0 in the jth subpopulation.
mj        Total weighted observations; mj = rj1 + rj0.
yj1       The proportion of 1s in the jth subpopulation; yj1 = rj1/mj.
π̂j        The fitted probability in the jth subpopulation (π̂j is the same for each case in the
          jth subpopulation because the values of all predictor variables are the same for each case).

The deviance and Pearson chi-square are defined as follows:

D = 2·Σj mj[ yj1·ln(yj1/π̂j) + (1 − yj1)·ln( (1 − yj1)/(1 − π̂j) ) ]

and

χ²P = Σj mj(yj1 − π̂j)² / ( π̂j(1 − π̂j) ),

then the corresponding estimates of the scale parameter are

φ̂D = D/(ns − px)   and   φ̂P = χ²P/(ns − px).

The full log-likelihood, based on subpopulations, is defined as follows:

ℓ = ℓk + Σj ln( mj! / (rj1!·rj0!) )

where ℓk is the kernel log-likelihood; it should be the same as the kernel log-likelihood computed
based on cases before, so there is no need to compute it again.
Information Criteria

Information criteria are used when comparing different models for the same data. The formulas
for the various criteria are as follows:

Akaike information criterion (AIC):    −2ℓ + 2d
Finite sample corrected AIC (AICC):    −2ℓ + 2d·N/(N − d − 1)
Bayesian information criterion (BIC):  −2ℓ + d·ln(N)
Consistent AIC (CAIC):                 −2ℓ + d·(ln(N) + 1)

where ℓ is the log-likelihood evaluated at the parameter estimates, d is the number of estimated
parameters, and N is the number of cases. Notice that d = px if only β is included; d = px + 1 if
the scale parameter is included for normal, inverse Gaussian, or gamma.

Notes
- ℓ (the full log-likelihood) can be replaced with ℓk (the kernel of the log-likelihood) depending
on the user’s choice.
- When r and m (event/trial) variables are used for the binomial distribution, the N used
here is the sum of the trials frequencies: N = Σi fimi. In this way, the same value
results whether the data are in raw, binary form or in summarized, binomial form.
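The four criteria can be computed directly from ℓ, d and N, mirroring the formulas above (a sketch; names are illustrative):

```python
import math

def info_criteria(ll, d, N):
    # ll: (full or kernel) log-likelihood at the estimates; d: number of
    # estimated parameters; N: effective sample size.
    return {
        "AIC":  -2 * ll + 2 * d,
        "AICC": -2 * ll + 2 * d * N / (N - d - 1),
        "BIC":  -2 * ll + d * math.log(N),
        "CAIC": -2 * ll + d * (math.log(N) + 1),
    }
```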
Test of Model Fit

The model fitting omnibus test is based on −2 log-likelihood values for the model under
consideration and the initial model. For the model under consideration, the value of the −2
log-likelihood is −2ℓ(β̂).

Let the initial model be the intercept-only model if the intercept is in the considered model, or the
empty model otherwise. For the intercept-only model, the value of the −2 log-likelihood is
−2ℓ(β̂0), where β̂0 is the estimated intercept. For the empty model, the value of the −2
log-likelihood is −2ℓ(0).

Then the omnibus (or global) test statistic is

S = −2ℓ(β̂0) − ( −2ℓ(β̂) ) for the intercept-only model, or S = −2ℓ(0) − ( −2ℓ(β̂) ) for the empty model.

S has an asymptotic chi-square distribution with r degrees of freedom, equal to the difference in
the number of valid parameters between the model under consideration and the initial model:
r = px − 1 for the intercept-only model and r = px for the empty model. The p-values then can
be calculated accordingly.

Note that if the scale parameter is estimated by the ML method in the model under consideration, then
it will also be estimated by the ML method in the initial model.
Default Tests of Model Effects

For each regression effect specified in the model, type I and type III analyses can be conducted.

Type I Analysis

Type I analysis consists of fitting a sequence of models, starting with a model with only an
intercept term (if there is one) and adding one additional effect, which can be a covariate, factor
or interaction, of the model at each step. So it depends on the order of effects specified in the
model. On the other hand, type III analysis won’t depend on the order of effects.

Wald Statistics. For each effect specified in the model, a type I test matrix Li is constructed
and H0: Liβ = 0 is tested. Construction of the matrix Li is based on the generating matrix

HΩ = (XᵀΩX)⁻ XᵀΩX

where Ω is the scale weight matrix with ith diagonal element ωi, and such that Liβ is estimable. It
involves parameters only for the given effect and the effects containing the given effect. If such a
matrix cannot be constructed, the effect is not testable.

Since Wald statistics can be applied to type I and III analysis and custom tests, we express Wald
statistics in a more general form. The Wald statistic for testing H0: Lβ = K, where L is an r×p full
row rank hypothesis matrix and K is an r×1 resulting vector, is defined by

S = (Lβ̂ − K)ᵀ (LΣLᵀ)⁻ (Lβ̂ − K)

where β̂ is the maximum likelihood estimate and Σ is the parameter estimates covariance matrix.
S has an asymptotic chi-square distribution with rc degrees of freedom, where rc = rank(LΣLᵀ).
If rc < r, then (LΣLᵀ)⁻ is a generalized inverse such that Wald tests are effective for a restricted
set of hypotheses LCβ = KC containing a particular subset C of independent rows from H0.

For type I and III analysis, calculate the Wald statistic for each effect i according to the
corresponding hypothesis matrix Li and K = 0.

Type III Analysis

Wald statistics. See the discussion of Wald statistics for type I analysis above. Li is the type III
test matrix for the ith effect.
Blank handling
All records with missing values for any input or output field are excluded from the estimation of
the model.
Scoring
Scoring is defined as assigning one or more values to a case in a data set.
Predicted Values
Because the link function is in general non-linear, predicted values are computed separately for
the linear predictor and for the mean of the response. Since estimated standard errors of predicted
values of the linear predictor are calculated, the confidence intervals for the mean are obtained easily.
Predicted values are computed as long as all the predictor variables in the given model have
non-missing values.
Predicted Values of the Linear Predictors

η̂i = xiᵀβ̂ + oi

Estimated Standard Errors of Predicted Values of the Linear Predictors

σ̂(η̂i) = √(xiᵀΣxi)
Predicted Values of the Means

μ̂i = g⁻¹(xiᵀβ̂ + oi)

where g⁻¹ is the inverse of the link function. For binomial response with 0/1 binary response
variable, this is the predicted probability of category 1.
Confidence Intervals for the Means

Approximate 100(1−α)% confidence intervals for the mean can be computed as follows:

g⁻¹( xiᵀβ̂ + oi ± z(1−α/2) σ̂(η̂i) )

If either endpoint of the argument is outside the valid range for the inverse link function, the
corresponding confidence interval endpoint is set to a system missing value.
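The scoring computations for the linear predictor, its standard error, the mean, and the confidence interval for the mean can be sketched as follows; this is an illustrative helper (name and log-link default are assumptions), not the product's implementation:

```python
import numpy as np
from scipy import stats

def score_genlin(X, beta, Sigma, offset=None, link_inv=np.exp, alpha=0.05):
    """Predicted linear predictors, their standard errors, predicted means,
    and confidence intervals for the means. link_inv defaults to exp
    (log link) -- substitute the model's inverse link as needed."""
    o = np.zeros(len(X)) if offset is None else offset
    eta = X @ beta + o                                   # eta_i = x_i' beta + o_i
    se = np.sqrt(np.einsum('ij,jk,ik->i', X, Sigma, X))  # sqrt(x_i' Sigma x_i)
    z = stats.norm.ppf(1 - alpha / 2)
    mu = link_inv(eta)                                   # mu_i = g^{-1}(eta_i)
    lo, hi = link_inv(eta - z * se), link_inv(eta + z * se)
    return eta, se, mu, (lo, hi)
```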
Blank handling
Records with missing values for any input field in the final model cannot be scored, and are
assigned a predicted value of $null$.
References
Aitkin, M., D. Anderson, B. Francis, and J. Hinde. 1989. Statistical Modelling in GLIM. Oxford:
Oxford Science Publications.
Albert, A., and J. A. Anderson. 1984. On the Existence of Maximum Likelihood Estimates in
Logistic Regression Models. Biometrika, 71, 1–10.
Cameron, A. C., and P. K. Trivedi. 1998. Regression Analysis of Count Data. Cambridge:
Cambridge University Press.
Diggle, P. J., P. Heagerty, K. Y. Liang, and S. L. Zeger. 2002. The Analysis of Longitudinal
Data, 2 ed. Oxford: Oxford University Press.
Dobson, A. J. 2002. An Introduction to Generalized Linear Models, 2 ed. Boca Raton, FL:
Chapman & Hall/CRC.
Dunn, P. K., and G. K. Smyth. 2005. Series Evaluation of Tweedie Exponential Dispersion Model
Densities. Statistics and Computing, 15, 267–280.
Dunn, P. K., and G. K. Smyth. 2001. Tweedie Family Densities: Methods of Evaluation. In:
Proceedings of the 16th International Workshop on Statistical Modelling, Odense, Denmark.
Gill, J. 2000. Generalized Linear Models: A Unified Approach. Thousand Oaks, CA: Sage
Publications.
Hardin, J. W., and J. M. Hilbe. 2001. Generalized Estimating Equations. Boca Raton, FL:
Chapman & Hall/CRC.
Hardin, J. W., and J. M. Hilbe. 2003. Generalized Linear Models and Extensions. College
Station, TX: Stata Press.
Horton, N. J., and S. R. Lipsitz. 1999. Review of Software to Fit Generalized Estimating Equation
Regression Models. The American Statistician, 53, 160–169.
Huber, P. J. 1967. The Behavior of Maximum Likelihood Estimates under Nonstandard
Conditions. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and
Probability, Berkeley, CA: University of California Press, 221–233.
Lane, P. W., and J. A. Nelder. 1982. Analysis of Covariance and Standardization as Instances of
Prediction. Biometrics, 38, 613–621.
Lawless, J. F. 1987. Negative Binomial and Mixed Poisson Regression. The Canadian Journal
of Statistics, 15, 209–225.
Liang, K. Y., and S. L. Zeger. 1986. Longitudinal Data Analysis Using Generalized Linear
Models. Biometrika, 73, 13–22.
Lipsitz, S. H., K. Kim, and L. Zhao. 1994. Analysis of Repeated Categorical Data Using
Generalized Estimating Equations. Statistics in Medicine, 13, 1149–1163.
McCullagh, P. 1983. Quasi-Likelihood Functions. Annals of Statistics, 11, 59–67.
McCullagh, P., and J. A. Nelder. 1989. Generalized Linear Models, 2nd ed. London: Chapman &
Hall.
Miller, M. E., C. S. Davis, and J. R. Landis. 1993. The Analysis of Longitudinal Polytomous Data:
Generalized Estimating Equations and Connections with Weighted Least Squares. Biometrics,
49, 1033–1044.
Nelder, J. A., and R. W. M. Wedderburn. 1972. Generalized Linear Models. Journal of the
Royal Statistical Society Series A, 135, 370–384.
Pan, W. 2001. Akaike’s Information Criterion in Generalized Estimating Equations. Biometrics,
57, 120–125.
Pregibon, D. 1981. Logistic Regression Diagnostics. Annals of Statistics, 9, 705–724.
Smyth, G. K., and B. Jorgensen. 2002. Fitting Tweedie’s Compound Poisson Model to Insurance
Claims Data: Dispersion Modelling. ASTIN Bulletin, 32, 143–157.
White, H. 1980. A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test
for Heteroskedasticity. Econometrica, 48, 817–836.
Williams, D. A. 1987. Generalized Linear Models Diagnostics Using the Deviance and Single
Case Deletions. Applied Statistics, 36, 181–191.
Zeger, S. L., and K. Y. Liang. 1986. Longitudinal Data Analysis for Discrete and Continuous
Outcomes. Biometrics, 42, 121–130.
Generalized linear mixed models
algorithms
Generalized linear mixed models extend the linear model so that:

The target is linearly related to the factors and covariates via a specified link function.

The target can have a non-normal distribution.

The observations can be correlated.
Generalized linear mixed models cover a wide variety of models, from simple linear regression to
complex multilevel models for non-normal longitudinal data.
Notation
The following notation is used throughout this chapter unless otherwise stated:

n       Number of complete cases in the dataset. It is an integer and n ≥ 1.
p       Number of parameters (including the constant, if it exists) in the model. It is an integer and p ≥ 1.
px      Number of non-redundant columns in the design matrix of fixed effects. It is an integer and px ≥ 1.
K       Number of random effects.
y       n×1 target vector. The rows are records.
r       n×1 events vector for the binomial distribution representing the number of “successes” within a number of trials. All elements are non-negative integers.
m       n×1 trials vector for the binomial distribution. All elements are positive integers and mi ≥ ri, i=1,...,n.
μ       n×1 expected target value vector.
η       n×1 linear predictor vector.
X       n×p design matrix. The rows represent the records and the columns represent the parameters. The ith row is xiᵀ = (xi1, …, xip), where the superscript T means transpose of a matrix or vector, with xi1 = 1 if the model has an intercept.
Z       n×r design matrix of random effects.
O       n×1 offset vector. This can’t be the target or one of the predictors. Also this can’t be a categorical field.
β       p×1 parameter vector. The first element is the intercept, if there is one.
γ       r×1 random effect vector.
ω       n×1 scale weight vector. If an element is less than or equal to 0 or missing, the corresponding record is not used.
f       n×1 frequency weight vector. Non-integer elements are treated by rounding the value to the nearest integer. For values less than 0.5 or missing, the corresponding records are not used.
N       Effective sample size, N = Σi fi. If frequency weights are not used, N = n.
θk      Covariance parameters of the kth random effect.
θG      Covariance parameters of the random effects, θG = (θ1ᵀ, …, θKᵀ)ᵀ.
θR      Covariance parameters of the residuals.
θ       θ = (θGᵀ, θRᵀ)ᵀ.
V(y|γ)  Covariance matrix of y, conditional on the random effects.
Model
The form of a generalized linear mixed model for the target y with the random effects γ is

η = g(E(y|γ)) = Xβ + Zγ + O,  y|γ ~ F

where η is the linear predictor; g(.) is the monotonic differentiable link function; γ is a (r×1)
vector of random effects which are assumed to be normally distributed with mean 0 and variance
matrix G; X is a (n×p) design matrix for the fixed effects; Z is a (n×r) design matrix for the
random effects; O is an offset with a constant coefficient of 1 for each observation; F is the
conditional target probability distribution. Note that if there are no random effects, the model
reduces to a generalized linear model (GZLM).
The probability distributions without random effects offered (except multinomial) are listed in
Table 19-1 on p. 188. The link functions offered are listed in Table 19-3 on p. 189. Different
combinations of probability distribution and link function can result in different models.
See “Nominal multinomial distribution ” on p. 206 for more information on the nominal
multinomial distribution.
See “Ordinal multinomial distribution ” on p. 213 for more information on the ordinal multinomial
distribution.
Note that the available distributions depend on the measurement level of the target:

A continuous target can have any distribution except multinomial. The binomial distribution
is allowed because the target could be an “events” field. The default distribution for a
continuous target is the normal distribution.

A nominal target can have the multinomial or binomial distribution. The default is
multinomial.

An ordinal target can have the multinomial or binomial distribution. The default is
multinomial.
Table 19-1
Distribution, range and variance of the response, variance function, and its first derivative

Distribution        Range of y   V(μ)      Var(y)      V′(μ)
Normal              (−∞,∞)       1         φ           0
Inverse Gaussian    (0,∞)        μ³        φμ³         3μ²
Gamma               (0,∞)        μ²        φμ²         2μ
Negative binomial   0(1)∞        μ+kμ²     μ+kμ²       1+2kμ
Poisson             0(1)∞        μ         μ           1
Binomial(m)         0(1)m/m      μ(1−μ)    μ(1−μ)/m    1−2μ
Notes
 0(1)z means the range is from 0 to z with increments of 1; that is, 0, 1, 2, …, z.
 For the binomial distribution, the binomial trial variable m is considered as a part of the
weight variable ω.
 If a scale weight variable ω is present, φ is replaced by φ/ω.
 For the negative binomial distribution, the ancillary parameter (k) is estimated by the
maximum likelihood (ML) method. When k = 0, the negative binomial distribution reduces to
the Poisson distribution. When k = 1, the negative binomial is the geometric distribution.
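The variance functions and their first derivatives in Table 19-1 can be encoded directly; this small lookup table is an illustrative sketch (the dictionary layout is an assumption, the formulas are from the table):

```python
import numpy as np

# Variance functions V(mu) and first derivatives V'(mu) from Table 19-1.
VARIANCE_FUNCS = {
    "normal":           (lambda mu: np.ones_like(mu), lambda mu: np.zeros_like(mu)),
    "inverse_gaussian": (lambda mu: mu ** 3,          lambda mu: 3 * mu ** 2),
    "gamma":            (lambda mu: mu ** 2,          lambda mu: 2 * mu),
    "poisson":          (lambda mu: mu,               lambda mu: np.ones_like(mu)),
    "binomial":         (lambda mu: mu * (1 - mu),    lambda mu: 1 - 2 * mu),
}

def neg_binomial_variance(mu, k):
    """V(mu) = mu + k*mu^2 and V'(mu) = 1 + 2*k*mu; k = 0 recovers Poisson."""
    return mu + k * mu ** 2, 1 + 2 * k * mu
```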
The full log-likelihood function (ℓ), which will be used as the objective function for parameter
estimation, is listed for each distribution in the following table.
Table 19-2
The log-likelihood function for each probability distribution

Normal:             ℓ = Σi fi [ −(yi−μi)²/(2φ) − ln(2πφ)/2 ]
Inverse Gaussian:   ℓ = Σi fi [ −(yi−μi)²/(2yiμi²φ) − ln(2πφyi³)/2 ]
Gamma:              ℓ = Σi fi [ (1/φ)ln(yi/(φμi)) − yi/(φμi) − ln(yi) − ln Γ(1/φ) ]
Negative binomial:  ℓ = Σi fi [ yi ln(kμi) − (yi+1/k)ln(1+kμi) + ln Γ(yi+1/k) − ln Γ(yi+1) − ln Γ(1/k) ]
Poisson:            ℓ = Σi fi [ yi ln(μi) − μi − ln(yi!) ]
Binomial(m):        ℓ = Σi fi [ ln C(mi, ri) + ri ln(μi) + (mi−ri)ln(1−μi) ]

where C(mi, ri) = mi!/(ri!(mi−ri)!) is the binomial coefficient.
The following tables list the form, the inverse form, the range of the predicted mean μ̂, and the
first and second derivatives for each link function.
Table 19-3
Link function name, form, inverse of link function, and range of the predicted mean

Link function           η = g(μ)              Inverse μ = g⁻¹(η)       Range of μ̂
Identity                μ                     η                        (−∞,∞)
Log                     ln(μ)                 exp(η)                   (0,∞)
Logit                   ln(μ/(1−μ))           exp(η)/(1+exp(η))        (0,1)
Probit                  Φ⁻¹(μ)                Φ(η)                     (0,1)
Complementary log-log   ln(−ln(1−μ))          1−exp(−exp(η))           (0,1)
Log-complement          ln(1−μ)               1−exp(η)                 (−∞,1)
Negative log-log        −ln(−ln(μ))           exp(−exp(−η))            (0,1)
Power(α)                μ^α (ln(μ) if α=0)    η^(1/α) (exp(η) if α=0)  depends on α

where Φ is the cumulative standard normal distribution function. For the power link, η^(1/α) is
defined if η > 0 or 1/α is an odd integer.
Note: In the power link function, if |α| < 2.2e-16, α is treated as 0.
Table 19-4
The first and second derivatives of the link function

Link function           First derivative g′(μ)         Second derivative g″(μ)
Identity                1                              0
Log                     1/μ                            −1/μ²
Logit                   1/(μ(1−μ))                     (2μ−1)/(μ²(1−μ)²)
Probit                  1/φ(Φ⁻¹(μ))                    Φ⁻¹(μ)/φ(Φ⁻¹(μ))²
Complementary log-log   −1/((1−μ)ln(1−μ))              −(1+ln(1−μ))/((1−μ)ln(1−μ))²
Power(α)                αμ^(α−1) (1/μ if α=0)          α(α−1)μ^(α−2) (−1/μ² if α=0)
Log-complement          −1/(1−μ)                       −1/(1−μ)²
Negative log-log        −1/(μ ln(μ))                   (1+ln(μ))/(μ ln(μ))²

where φ(·) is the probability density function of the standard normal distribution.
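A few of the link functions and their first derivatives can be written out as a sanity-checkable sketch; the dictionary and names are assumptions for illustration, and each triple is (form g, inverse g⁻¹, derivative g′):

```python
import numpy as np

# Selected link functions: form g, inverse g^{-1}, and first derivative g'.
LINKS = {
    "identity": (lambda mu: mu, lambda eta: eta, lambda mu: np.ones_like(mu)),
    "log":      (np.log, np.exp, lambda mu: 1.0 / mu),
    "logit":    (lambda mu: np.log(mu / (1 - mu)),
                 lambda eta: np.exp(eta) / (1 + np.exp(eta)),
                 lambda mu: 1.0 / (mu * (1 - mu))),
    "cloglog":  (lambda mu: np.log(-np.log(1 - mu)),
                 lambda eta: 1 - np.exp(-np.exp(eta)),
                 lambda mu: -1.0 / ((1 - mu) * np.log(1 - mu))),
}
```

Composing each inverse with its form should return the original mean, which makes the table easy to verify numerically.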
When the canonical parameter is equal to the linear predictor, θ = η, then the link function is
called the canonical link function. Although the canonical links lead to desirable statistical
properties of the model, particularly in small samples, there is in general no a priori reason why
the systematic effects in a model should be additive on the scale given by that link. The canonical
link functions for probability distributions are given in the following table.
Table 19-5
Canonical and default link functions for probability distributions

Distribution        Canonical link function
Normal              Identity
Inverse Gaussian    Power(−2)
Gamma               Power(−1)
Negative binomial   Negative binomial
Poisson             Log
Binomial            Logit
The variance of y, conditional on the random effects, is

V(y|γ) = A^(1/2) R A^(1/2)

The matrix A is a diagonal matrix and contains the variance function of the model, which
is the function of the mean μ, divided by the corresponding scale weight variable; that is,
A = diag(V(μi)/ωi). The variance functions, V(μ), are different for different
distributions. The matrix R is the variance matrix for repeated measures.
Generalized linear mixed models allow correlation and/or heterogeneity from random effects
(G-side) and/or heterogeneity from residual effects (R-side), resulting in 4 types of models:
1. If a GLMM has no G-side or R-side effects, then it reduces to a GZLM: G = 0 and R = φI, where I
is the identity matrix and φ is the scale parameter. For continuous distributions (normal, inverse
Gaussian and gamma), φ is an unknown parameter and is estimated jointly with the regression
parameters by the maximum likelihood (ML) method. For discrete distributions (negative
binomial, Poisson, binomial and multinomial), φ is estimated by the Pearson chi-square statistic as follows:

φ̂ = (1/(N−px)) Σi fi ωi (yi − μ̂i)²/V(μ̂i)

for the restricted maximum pseudo-likelihood (REPL) method.
2. If a model only has G-side random effects, then the G matrix is user-specified and R = φI;
φ is estimated jointly with the covariance parameters in G for continuous distributions and
φ = 1 for discrete distributions.
3. If a model only has R-side residual effects, then G = 0 and the R matrix is user-specified. All
covariance parameters in R are estimated using the REPL method, defined in “Estimation ”
on p. 192.
4. If a model has both G-side and R-side effects, all covariance parameters in G and R are jointly
estimated using the REPL method.
For the negative binomial distribution, there is the ancillary parameter k, which is first estimated
by the ML method, ignoring random and residual effects, then fixed to that estimate while other
regression and covariance parameters are estimated.
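The Pearson chi-square estimate of the scale parameter described in case 1 can be sketched as follows; the function name and argument layout are assumptions, and the formula follows the reconstructed expression φ̂ = Σ fi ωi (yi − μi)²/V(μi) / (N − px):

```python
import numpy as np

def pearson_scale(y, mu, V, f=None, w=None, px=1):
    """Pearson chi-square estimate of the scale parameter phi,
    with N the effective sample size sum(f_i) and px the number of
    non-redundant fixed-effects columns."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    f = np.ones_like(y) if f is None else np.asarray(f, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    N = f.sum()
    chi2 = np.sum(f * w * (y - mu) ** 2 / V(mu))  # Pearson chi-square
    return chi2 / (N - px)
```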
Fixed effects transformation
To improve numerical stability, the X matrix is transformed according to the following rules.
The ith row of X is xiᵀ, i=1,...,n, with xi1 = 1 if the model has an intercept. Suppose xi* is the
transformation of xi; then the jth entry of xi* is defined as

xij* = (xij − cj)/sj

where cj and sj are centering and scaling values for xij, respectively, for j=1,...,p, and choices
of cj and sj are listed as follows:
 For a non-constant continuous predictor or a derived predictor which includes a continuous
predictor, if the model has an intercept, cj = x̄j and sj = sdj, where x̄j is the sample
mean of the jth predictor and sdj is the sample standard deviation of the jth predictor. Note
that the intercept column is not transformed. If the model has no intercept, cj = 0 and sj is
an analogous scaling value computed without centering.
 For a constant predictor, cj = 0 and sj = xij, that is, scale it to 1.
 For a dummy predictor that is derived from a factor or a factor interaction, cj = 0 and sj = 1;
that is, leave it unchanged.
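A minimal sketch of this centering-and-scaling rule for models with an intercept (the function name and the `continuous` column-index argument are assumptions):

```python
import numpy as np

def transform_X(X, has_intercept=True, continuous=None):
    """Center and scale continuous predictor columns: x*_ij = (x_ij - c_j)/s_j,
    with c_j the sample mean and s_j the sample standard deviation when the
    model has an intercept; dummy and intercept columns are left unchanged."""
    X = np.asarray(X, float)
    c = np.zeros(X.shape[1])
    s = np.ones(X.shape[1])
    for j in ([] if continuous is None else continuous):
        if has_intercept:
            c[j] = X[:, j].mean()
            s[j] = X[:, j].std(ddof=1)
    return (X - c) / s, c, s
```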
Estimation
We estimate GLMMs using linearization-based methods, also called the pseudo-likelihood
approach (PL; Wolfinger and O’Connell (1993)), penalized quasi-likelihood (PQL; Breslow
and Clayton (1993)), and marginal quasi-likelihood (MQL; Goldstein (1991)). They are based on
the similar principle that the GLMMs are approximated by an LMM so that well-established
estimation methods for LMMs can be applied. More specifically, the mean target function; that is,
the inverse link function is approximated by a linear Taylor series expansion around the current
estimates of the fixed-effect regression coefficients and different solutions of random effects (0
is used for MQL and the empirical Bayes estimates are used for PQL). Applying this linear
approximation of the mean target leads to a linear mixed model for a transformation of the original
target. The parameters of this LMM can be estimated by Newton-Raphson or Fisher scoring
technique and the estimates then are used to update the linear approximation. The algorithm
iterates between two steps until convergence. In general, the method is a doubly iterative process.
The outer iterations are to update the transformed target for an LMM and the inner iterations are to
estimate parameters of the LMM.
It is well known that parameter estimation for an LMM can be based on maximum likelihood
(ML) or restricted (or residual) maximum likelihood (REML). Similarly, parameter estimation
for a GLMM in the inner iterations can be based on maximum pseudo-likelihood (PL) or restricted
maximum pseudo-likelihood (REPL).
Linear mixed pseudo model
Following Wolfinger and O’Connell (1993), a first-order Taylor series expansion of μ in (1) about
β̃ and γ̃ yields

μ ≈ μ̃ + D̃X(β − β̃) + D̃Z(γ − γ̃)

where η̃ = Xβ̃ + Zγ̃ + O, μ̃ = g⁻¹(η̃), and D̃ is a diagonal matrix with elements consisting of
evaluations of the 1st derivative of g⁻¹(η) at η̃. Since (∂g⁻¹(η)/∂η)⁻¹ = g′(μ), this equation can be
rearranged as

D̃⁻¹(μ − μ̃) + Xβ̃ + Zγ̃ ≈ Xβ + Zγ

If we define a pseudo target variable as

v = D̃⁻¹(y − μ̃) + Xβ̃ + Zγ̃

then the conditional expectation and variance of v, based on E(y|γ) = μ and V(y|γ) = A^(1/2)RA^(1/2),
are

E(v|γ) = Xβ + Zγ
V(v|γ) = D̃⁻¹A^(1/2)RA^(1/2)D̃⁻¹

where A = diag(V(μ̃i)/ωi).
Furthermore, we also assume v|γ is normally distributed. Then we consider the model of v,

v = Xβ + Zγ + ε

as a weighted linear mixed model with fixed effects β, random effects γ ~ N(0, G), error terms
ε = v − E(v|γ) with ε ~ N(0, D̃⁻¹A^(1/2)RA^(1/2)D̃⁻¹), and diagonal weight matrix
W̃ = D̃A⁻¹D̃. Note that the new target v (with O subtracted if an offset variable exists) is a Taylor
series approximation of the linked target g(y). The estimation method of unknown parameters
of β and θ, which contains all unknowns in G and R, for traditional linear mixed models can
be applied to this linear mixed pseudo model.
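The pseudo-target construction can be sketched in a few lines (function name is an assumption; `g_prime` is the first derivative of the link from Table 19-4):

```python
import numpy as np

def pseudo_target(y, mu, eta, g_prime, offset=None):
    """Pseudo target of the linearized model:
    v_i = g'(mu_i) * (y_i - mu_i) + eta_i - o_i,
    i.e. D^{-1}(y - mu) + X beta + Z gamma, since D = diag(1/g'(mu_i))."""
    o = np.zeros(len(y)) if offset is None else offset
    return g_prime(mu) * (np.asarray(y, float) - mu) + eta - o
```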
The Gaussian log pseudo-likelihood (PL) and restricted log pseudo-likelihood (REPL), which
are expressed as functions of the covariance parameters in θ, corresponding to the linear mixed
model for v, are the following:

ℓPL(θ; v) = −½[ ln|V(θ)| + r(θ)ᵀV(θ)⁻¹r(θ) + N ln(2π) ]
ℓREPL(θ; v) = −½[ ln|V(θ)| + r(θ)ᵀV(θ)⁻¹r(θ) + ln|XᵀV(θ)⁻¹X| + (N−px) ln(2π) ]

where

V(θ) = ZG(θ)Zᵀ + D̃⁻¹A^(1/2)R(θ)A^(1/2)D̃⁻¹
r(θ) = v − X(XᵀV(θ)⁻¹X)⁻XᵀV(θ)⁻¹v

N denotes the effective sample size, and px denotes the rank of the design matrix X or the number
of non-redundant parameters in X. Note that the regression parameters in β are profiled from the
above equations because the estimation of β can be obtained analytically. The covariance
parameters in θ are estimated by the Newton-Raphson or Fisher scoring algorithm. Following the
tradition in linear mixed models, the objective functions of minimization for estimating θ would
be −2ℓPL(θ; v) or −2ℓREPL(θ; v). Upon obtaining θ̂, estimates for β and γ are computed as

β̂ = (XᵀV(θ̂)⁻¹X)⁻XᵀV(θ̂)⁻¹v
γ̂ = G(θ̂)ZᵀV(θ̂)⁻¹(v − Xβ̂)

where β̂ is the best linear unbiased estimator (BLUE) of β and γ̂ is the estimated best linear
unbiased predictor (BLUP) of γ in the linear mixed pseudo model. With these statistics, v and
W̃ are recomputed based on μ̃ and the objective function is minimized again to obtain updated
θ̂. Iteration between −2ℓPL(θ; v) and the above equations yields the PL estimation procedure and
between −2ℓREPL(θ; v) and the above equations the REPL procedure.
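The REPL objective and the profiled BLUE/BLUP estimates can be sketched for a fixed θ as follows; this is an illustrative evaluation (function name assumed), not the Newton-Raphson/Fisher scoring optimizer itself, and `R_tilde` stands for the weighted residual covariance D̃⁻¹A^(1/2)RA^(1/2)D̃⁻¹:

```python
import numpy as np

def repl_and_blue(v, X, Z, G, R_tilde):
    """Evaluate V = Z G Z' + R~, the GLS estimate beta, the residual r,
    the restricted log pseudo-likelihood, and the BLUP gamma."""
    n, px = X.shape
    V = Z @ G @ Z.T + R_tilde
    Vinv = np.linalg.inv(V)
    XtVinv = X.T @ Vinv
    beta = np.linalg.pinv(XtVinv @ X) @ XtVinv @ v    # BLUE of beta
    r = v - X @ beta
    _, logdetV = np.linalg.slogdet(V)
    _, logdetXVX = np.linalg.slogdet(XtVinv @ X)
    repl = -0.5 * (logdetV + r @ Vinv @ r + logdetXVX
                   + (n - px) * np.log(2 * np.pi))
    gamma = G @ Z.T @ Vinv @ r                        # estimated BLUP
    return repl, beta, gamma
```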
There are two choices for γ̃ (the current estimates of γ):
1. γ̂ for PQL; and
2. 0 for MQL.
On the other hand, β̂ is always used as the current estimate of the fixed effects. Based on the two
objective functions (PL or REPL) and two choices of random effect estimates (PQL or MQL), 4
estimation methods can be implemented for GLMMs:
1. PL-PQL: pseudo-likelihood with γ̃ = γ̂;
2. PL-MQL: pseudo-likelihood with γ̃ = 0;
3. REPL-PQL: residual pseudo-likelihood with γ̃ = γ̂;
4. REPL-MQL: residual pseudo-likelihood with γ̃ = 0.
We use method 3, REPL-PQL.
Iterative process
The doubly iterative process for the estimation of θ is as follows:
1. Obtain an initial estimate of μ, μ(0). Specifically, μi(0) = (miyi + 0.5)/(mi + 1) for a binomial
distribution (yi can be a proportion or 0/1 value) and μi(0) = yi for a non-binomial distribution. Also
set the outer iteration index j = 0.
2. Based on μ̃(j), compute

v(j) = D̃⁻¹(y − μ̃(j)) + η̃(j) − O  and  W̃(j) = D̃A⁻¹D̃

Fit a weighted linear mixed model with pseudo target v(j), fixed effects design matrix X, random
effects design matrix Z, and diagonal weight matrix W̃(j). The fitting procedure, which is called
the inner iteration, yields the estimates of θ, denoted θ̂(j). The procedure uses the
specified settings for parameter, log-likelihood, and Hessian convergence criteria for determining
convergence of the linear mixed model. If j = 0, go to step 4; otherwise go to the next step.
3. Check if the following criterion with tolerance level ε is satisfied:

maxi |θ̂i(j) − θ̂i(j−1)| / |θ̂i(j−1)| < ε

If it is met or the maximum number of outer iterations is reached, stop. Otherwise, go to the next step.
4. Compute β̃ by setting θ = θ̂(j), then set β̃ = β̂. Depending on the choice of random effect
estimates, set γ̃ = γ̂ (PQL) or γ̃ = 0 (MQL).
5. Compute the new estimate of μ by

μ̃ = g⁻¹(Xβ̃ + Zγ̃ + O)

then set j = j + 1 and go to step 2.
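Step 1's starting values are easy to encode; a minimal sketch (function name assumed):

```python
import numpy as np

def initial_mu(y, trials=None):
    """Initial estimate of mu: (m*y + 0.5)/(m + 1) for a binomial target
    (y a proportion or 0/1 value), and y itself otherwise."""
    y = np.asarray(y, float)
    if trials is not None:
        m = np.asarray(trials, float)
        return (m * y + 0.5) / (m + 1)
    return y
```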
Wald confidence intervals for covariance parameter estimates
Here we assume that the estimated parameters of G and R are obtained through the above doubly
iterative process. Then their asymptotic covariance matrix can be approximated by H⁻¹, where
H is the Hessian matrix of the objective function (−2ℓPL(θ; v) or −2ℓREPL(θ; v)) evaluated at θ̂. The
standard error for the ith covariance parameter estimate in the θ̂ vector, say θ̂i, is the square root of
the ith diagonal element of H⁻¹.
Thus, a simple Wald’s type confidence interval or test statistic for any covariance parameter
can be obtained by using the asymptotic normality. However, these can be unreliable in small
samples, especially for variance and correlation parameters that have a range of [0, ∞) and [−1, 1],
respectively. Therefore, following the same method used in linear mixed models, these
parameters are transformed to parameters that have range (−∞, ∞). Using the delta method, these
transformed estimates still have asymptotic normal distributions.
For variance type parameters in G and R, such as σ² in the autoregressive, autoregressive moving
average, compound symmetry, diagonal, Toeplitz, and variance components structures, and σii in the
unstructured type, the transformation ln(σ²) is used. The 100(1 – α)% Wald confidence interval,
assuming the variance parameter estimate is σ̂² and its standard error is se(σ̂²) from the
corresponding diagonal element of H⁻¹, is given by

( σ̂² exp(−z(1−α/2) se(σ̂²)/σ̂²), σ̂² exp(z(1−α/2) se(σ̂²)/σ̂²) )

For correlation type parameters in G and R, such as ρ in the autoregressive, autoregressive moving
average, and Toeplitz types and φ in the autoregressive moving average type, which usually come
with the constraint of |ρ| < 1, the 100(1 – α)% Wald confidence interval is given, assuming the
correlation parameter estimate is ρ̂ and its standard error is se(ρ̂) from the corresponding diagonal
element of H⁻¹, by

( tanh(arctanh(ρ̂) − z(1−α/2) se(ρ̂)/(1−ρ̂²)), tanh(arctanh(ρ̂) + z(1−α/2) se(ρ̂)/(1−ρ̂²)) )

where tanh and arctanh are the hyperbolic tangent and inverse hyperbolic tangent, respectively.
For general type parameters, other than variance and correlation types, in G and R, such as σ1 in
the compound symmetry type and σij (off-diagonal elements) in the unstructured type, no
transformation is done. Then the 100(1 – α)% Wald confidence interval is simply, assuming the
parameter estimate is θ̂i and its standard error is se(θ̂i) from the corresponding diagonal element
of H⁻¹,

( θ̂i − z(1−α/2) se(θ̂i), θ̂i + z(1−α/2) se(θ̂i) )
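The three interval types (variance, correlation, general) can be sketched in one helper; the function name and `kind` argument are assumptions:

```python
import numpy as np
from scipy import stats

def cov_param_ci(est, se, kind="variance", alpha=0.05):
    """Wald confidence intervals for covariance parameters: log transform
    for variance-type, Fisher's arctanh for correlation-type, and no
    transform for general-type parameters (delta method throughout)."""
    z = stats.norm.ppf(1 - alpha / 2)
    if kind == "variance":               # range (0, inf): CI on log scale
        return est * np.exp(-z * se / est), est * np.exp(z * se / est)
    if kind == "correlation":            # range (-1, 1): CI on arctanh scale
        h = z * se / (1 - est ** 2)
        return np.tanh(np.arctanh(est) - h), np.tanh(np.arctanh(est) + h)
    return est - z * se, est + z * se    # general-type: plain Wald interval
```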
Note that the z-statistics for the hypothesis H0: θi = 0, where θi is a covariance parameter in the
θ vector, are calculated; however, the Wald tests should be considered as an approximation and
used with caution because the test statistics might not have a standardized normal distribution.
Statistics for estimates of fixed and random effects
The approximate covariance matrix of (β̂ − β, γ̂ − γ)ᵀ is

[ XᵀR̃⁻¹X    XᵀR̃⁻¹Z       ]⁻     [ C11   C21ᵀ ]
[ ZᵀR̃⁻¹X    ZᵀR̃⁻¹Z + G⁻¹ ]   =  [ C21   C22  ]

where R̃ = V(v|γ) = D̃⁻¹A^(1/2)RA^(1/2)D̃⁻¹ is evaluated at the converged estimates, and
C11 = (XᵀV⁻¹X)⁻ is the upper-left block of the generalized inverse.
Statistics for estimates of fixed effects on original scale
If the X matrix is transformed, the restricted log pseudo-likelihood (REPL) would be different
based on transformed and original scale, so the REPL on the transformed scale should be
transformed back on the final iteration so that any post-estimation statistics based on REPL can
be calculated correctly. Suppose the final objective function values based on the transformed and
original scales are ℓ*REPL(θ; v) and ℓREPL(θ; v), respectively. Because REPL has the extra term
ln|XᵀV(θ)⁻¹X| involving the X matrix, and the transformed design matrix can be written as X* = XA
for a transformation matrix A, we have

|X*ᵀV(θ)⁻¹X*| = |AᵀXᵀV(θ)⁻¹XA| = |A|² |XᵀV(θ)⁻¹X|

so ℓREPL(θ; v) can be obtained from ℓ*REPL(θ; v) as

ℓREPL(θ; v) = ℓ*REPL(θ; v) + ln|A|

Please note that PL values are the same whether the X matrix is transformed or not.
In addition, the final estimates of β, C11, C21 and C22 are based on the transformed scale, denoted
as β̂*, C11*, C21* and C22*, respectively. They are transformed back to the original scale, denoted as
β̂, C11, C21 and C22, respectively, as follows:

β̂ = Aβ̂*
C11 = AC11*Aᵀ
C21 = C21*Aᵀ
C22 = C22*

Note that A could reduce to S, the diagonal scaling matrix; hereafter, the superscript * denotes a
quantity on the transformed scale.
Estimated covariance matrix of the fixed effects parameters
Two estimated covariance matrices of the fixed effects parameters can be calculated: model-based
and robust.
The model-based estimated covariance matrix of the fixed effects parameters is given by

Σm = C11

The robust estimated covariance matrix of the fixed effects parameters for a GLMM is defined as
the classical sandwich estimator. It is similar to that for a generalized linear model or a generalized
estimating equation (GEE). If the model is a generalized linear mixed model and it is processed by
subjects, then the robust estimator is defined as follows:

Σr = Σm ( Σi XiᵀVi⁻¹(vi − Xiβ̂)(vi − Xiβ̂)ᵀVi⁻¹Xi ) Σm

where the sum is over subjects i, and vi, Xi and Vi are the pseudo target, design matrix and
covariance matrix for subject i.
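The sandwich form sums one outer product per subject; a minimal sketch (function and argument names assumed, with per-subject blocks passed in as lists):

```python
import numpy as np

def robust_cov(Sigma_m, X_blocks, Vinv_blocks, resid_blocks):
    """Classical sandwich estimator, summing over subjects i:
    Sigma_r = Sigma_m (sum_i X_i' V_i^-1 e_i e_i' V_i^-1 X_i) Sigma_m,
    where e_i = v_i - X_i beta_hat."""
    p = Sigma_m.shape[0]
    meat = np.zeros((p, p))
    for Xi, Vinv, ei in zip(X_blocks, Vinv_blocks, resid_blocks):
        u = Xi.T @ Vinv @ ei           # per-subject score-like contribution
        meat += np.outer(u, u)
    return Sigma_m @ meat @ Sigma_m
```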
Standard errors for estimates in fixed effects and predictions in random effects
Let β̂i denote a non-redundant parameter estimate in fixed effects. Its standard error is the square
root of the ith diagonal element of Σm or Σr.
The standard error for redundant parameter estimates is set to a system missing value.
Let γ̂i denote a prediction in random effects. Its standard error is the square root of the ith
diagonal element of C22.
Test statistics for estimates in fixed effects and predictions in random effects
The hypothesis H0: βi = 0 is tested for each non-redundant parameter in fixed effects using the
t statistic:

t = β̂i / se(β̂i)

which has an asymptotic t distribution with ν degrees of freedom. See “Method for computing
degrees of freedom ” on p. 203 for details on computing the degrees of freedom.
Wald confidence intervals for estimates in fixed effects and predictions in random effects
The 100(1 – α)% Wald confidence interval for βi is given by

( β̂i − t(1−α/2),ν se(β̂i), β̂i + t(1−α/2),ν se(β̂i) )

where t(1−α/2),ν is the 100(1 − α/2)th percentile of the t distribution with ν degrees of freedom.
For some models (see the list below), the exponentiated parameter estimates, their standard
errors, and confidence intervals are computed. Using the delta method, the estimate of exp(βi) is
exp(β̂i), the standard error estimate is exp(β̂i)·se(β̂i), and the corresponding 100(1 – α)% Wald
confidence interval for exp(βi) is

( exp(β̂i − t(1−α/2),ν se(β̂i)), exp(β̂i + t(1−α/2),ν se(β̂i)) )
The list of models is as follows:
1. Logistic regression (binomial distribution + logit link).
2. Nominal logistic regression (nominal multinomial distribution + generalized logit link).
3. Ordinal logistic regression (ordinal multinomial distribution + cumulative logit link).
4. Log-linear model (Poisson distribution + log link).
5. Negative binomial regression (negative binomial distribution + log link).
Testing
After estimating parameters and calculating relevant statistics, several tests for the given model
are performed.
Goodness of fit
Information criteria
Information criteria are used when comparing different models for the same data. The formulas
for various criteria are as follows.
Finite sample corrected AIC (AICC)

AICC = −2ℓ + 2dN/(N − d − 1)

Bayesian information criterion (BIC)

BIC = −2ℓ + d ln(N)

where ℓ is the restricted log-pseudo-likelihood evaluated at the parameter estimates. For REPL,
N is the effective sample size minus the number of non-redundant parameters in fixed effects
(N − px) and d is the number of covariance parameters.
Note that the restricted log-pseudo-likelihood values are of the linearized model, not on the
original scale. Thus the information criteria should not be compared across models with different
distribution and link function and they should be interpreted with caution.
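The two criteria are one-liners once the log-pseudo-likelihood is available; a minimal sketch (function name assumed):

```python
import numpy as np

def aicc_bic(loglik, d, N):
    """Information criteria: AICC = -2l + 2dN/(N-d-1) and BIC = -2l + d*ln(N);
    for REPL, N is the effective sample size minus the number of
    non-redundant fixed-effects parameters, d the covariance-parameter count."""
    aicc = -2.0 * loglik + 2.0 * d * N / (N - d - 1)
    bic = -2.0 * loglik + d * np.log(N)
    return aicc, bic
```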
Tests of fixed effects
For each effect specified in the model, a type III test matrix Li is constructed and H0: Liβ = 0 is
tested. Construction of Li and the generating estimable function (GEF) is based on the generating
matrix H = (XᵀΨX)⁻XᵀΨX, where Ψ is a diagonal matrix of case weights, such that Liβ is
estimable; that is, LiH = Li. It involves parameters only for the given effect and the effects
containing the given effect. For type III analysis, Li does not depend on the order of effects
specified in the model. If such a matrix cannot be constructed, the effect is not testable.
The L matrix is then used to construct the test statistic

F = (Lβ̂)ᵀ(LΣLᵀ)⁻(Lβ̂) / rc

where rc = rank(LΣLᵀ). The statistic has an approximate F distribution. The numerator
degrees of freedom is rc and the denominator degrees of freedom is ν. See “Method for computing
degrees of freedom ” on p. 203 for details on computing the denominator degrees of freedom.
In addition, we test a null hypothesis that all regression parameters (except intercept if there is
one) equal zero. The test statistic would be the same as the above F statistic except the L matrix is
from GEF. If there is no intercept, the L matrix is the whole GEF. If there is an intercept, the L
matrix is GEF without the first row which corresponds to the intercept. This test is similar to the
“corrected model” in linear models.
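The F statistic for a fixed effect can be sketched as follows; the function name is an assumption, `df_denom` comes from whichever degrees-of-freedom method is in use:

```python
import numpy as np
from scipy import stats

def f_test_effect(L, beta_hat, Sigma, df_denom):
    """Type III F statistic: F = (L b)' (L Sigma L')^- (L b) / rank(L Sigma L'),
    approximately F(rank, df_denom)."""
    L = np.atleast_2d(np.asarray(L, float))
    M = L @ Sigma @ L.T
    rc = np.linalg.matrix_rank(M)
    Lb = L @ beta_hat
    F = float(Lb @ np.linalg.pinv(M) @ Lb) / rc
    return F, rc, stats.f.sf(F, rc, df_denom)
```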
Estimated marginal means
There are two types of estimated marginal means calculated here. One corresponds to the
specified factors for the linear predictor of the model and the other corresponds to those for the
original scale of the target.
Estimated marginal means are based on the estimated cell means. For a given fixed set of factors,
or their interactions, we estimate marginal means as the mean value averaged over all cells
generated by the rest of the factors in the model. Covariates may be fixed at any specified value.
If not specified, the value for each covariate is set to its overall mean estimate.
Estimated marginal means are not available for the multinomial distribution.
Estimated marginal means for the linear predictor
Calculating estimated marginal means for the linear predictor
Estimated marginal means for the linear predictor are based on the link function transformation,
and the L matrix is constructed such that Lβ is estimable.
Suppose there are r combined levels of the specified categorical effect. This r×1 vector can be
expressed in the form η̂ = Lβ̂. The variance matrix of η̂ is then computed by

V(η̂) = LΣLᵀ

The standard error for the jth element of η̂ is the square root of the jth diagonal element of V(η̂).
Let the jth element of η̂ and its standard error be η̂j and se(η̂j), respectively; then the corresponding
100(1 – α)% confidence interval for ηj is given by

( η̂j − t(1−α/2),ν se(η̂j), η̂j + t(1−α/2),ν se(η̂j) )
where t(1−α/2),ν is the 100(1 − α/2)th percentile of the t distribution with ν degrees of freedom.
See “Method for computing degrees of freedom ” on p. 203 for details on computing the degrees
of freedom.
Comparing estimated marginal means for the linear predictor
We can compare estimated marginal means for the linear predictor based on a selected contrast
type, for which a set of contrasts for the factor is created. Let this set of contrasts define the matrix
C used for testing the hypothesis H0: Cη = 0. An F statistic is used for testing the given set of
contrasts for the factor as follows:

F = (Cη̂)ᵀ(CV(η̂)Cᵀ)⁻(Cη̂) / rc

which has an asymptotic F distribution with rc numerator degrees of freedom and ν denominator
degrees of freedom, where rc = rank(CV(η̂)Cᵀ).
See “Method for computing degrees of freedom ” on p. 203 for details on computing the
denominator degrees of freedom. The p-values can be calculated accordingly. Note that adjusted
p-values based on multiple comparisons adjustments won’t be computed for the overall test.
Each row ciᵀ of matrix C is also tested separately. The estimate for the ith row is given by ciᵀη̂ and
its standard error by √(ciᵀV(η̂)ci). The corresponding 100(1 – α)% confidence interval is given by

( ciᵀη̂ − t(1−α/2),ν √(ciᵀV(η̂)ci), ciᵀη̂ + t(1−α/2),ν √(ciᵀV(η̂)ci) )

The test statistic for H0: ciᵀη = 0 is

t = ciᵀη̂ / √(ciᵀV(η̂)ci)

It has an asymptotic t distribution. See “Method for computing degrees of freedom ” on p. 203
for details on computing the degrees of freedom. The p-values can be calculated accordingly. In
addition, adjusted p-values for multiple comparisons can also be computed.
Estimated marginal means in the original scale
Estimated marginal means for the target are based on the original scale. As a conditional predictor
defined by Lane and Nelder (1982), estimated marginal means for the target are derived from
those for the linear predictor.
Calculating estimated marginal means for the target
The estimated marginal means for the target are defined as
$$g^{-1}(\hat M) = g^{-1}(L\hat\beta),$$
where $g^{-1}$ is the inverse of the link function.

The variance of estimated marginal means for the target is
$$V\!\left(g^{-1}(\hat M)\right) = D\, V(\hat M)\, D^T,$$
where D is an r×r diagonal matrix whose jth diagonal element, $\partial g^{-1}(\hat M_j)/\partial \hat M_j$, is the derivative of the inverse of the link with respect to the jth value in $\hat M$, obtained from Table 19-4 on p. 190.

The 100(1 – α)% confidence interval for $g^{-1}(M_j)$ is given by
$$g^{-1}\!\left(\hat M_j \pm t_{1-\alpha/2;\nu}\,\hat\sigma_{\hat M_j}\right).$$

Note: $g^{-1}(\hat M)$ is the estimated marginal mean for the proportion, not for the number of events, when events and trials variables are used for the binomial distribution.
Comparing estimated marginal means for the target
This is similar to comparing estimated marginal means for the linear predictor; just replace $\hat M$ with $g^{-1}(\hat M)$ and $V(\hat M)$ with $V\!\left(g^{-1}(\hat M)\right)$. For more information, see the topic “Estimated marginal means for the linear predictor” on p. 200.
Multiple comparisons
The hypothesis $H_0: CM = 0$ can be tested using the multiple row hypotheses testing technique. Let $c_i^T$ be the ith row vector of matrix C. The ith row hypothesis is $H_{0i}: c_i^T M = 0$. Testing $H_0$ is the same as testing multiple non-redundant row hypotheses $\{H_{0i}^{*}\}_{i=1}^{R}$ simultaneously, where R is the number of non-redundant row hypotheses, and $H_{0i}^{*}$ represents the ith non-redundant hypothesis. A hypothesis $H_{0i}$ is redundant if there exists another hypothesis $H_{0j}$, $j \ne i$, such that $c_i = \lambda c_j$, $\lambda \ne 0$.

Adjusted p-values. For each individual hypothesis $H_{0i}^{*}$, test statistics can be calculated. Let $p_i$ denote the p-value for testing $H_{0i}^{*}$ and $p_i^{*}$ denote the adjusted p-value. The conclusion from multiple testing is, at level α (the family-wise type I error):

reject $H_{0i}^{*}$ if $p_i^{*} \le \alpha$;
reject $H_0: CM = 0$ if $p_i^{*} \le \alpha$ for some i.
Several different methods to adjust p-values are provided here. Note that if an adjusted p-value is greater than 1, it is set to 1 in all the methods.
Adjusted confidence intervals. Note that if confidence intervals are also calculated for the above hypotheses, then adjusting the confidence intervals is required to correspond to the adjusted p-values. The only item that needs to be adjusted in the confidence intervals is the critical value from the standard normal distribution. Assume that the original critical value is $z_{1-\alpha/2}$ and the adjusted critical value is $z^{*}$.
LSD (Least Significant Difference)
The adjusted p-values are the same as the original p-values: $p_i^{*} = p_i$. The adjusted critical value is $z^{*} = z_{1-\alpha/2}$.
Sequential Bonferroni
The adjusted p-values are:
$$p_{(i)}^{*} = \max_{j \le i}\left\{\min\!\left((R-j+1)\,p_{(j)},\ 1\right)\right\},$$
where $p_{(1)} \le \dots \le p_{(R)}$ are the ordered unadjusted p-values. The adjusted critical values will correspond to the ordered adjusted p-values as follows:
$$z_{(i)}^{*} = z_{1-\alpha/(2(R-i+1))} \quad \text{if } p_{(j)}^{*} \le \alpha \text{ for all } j \le i;$$
$$z_{(i)}^{*} = z_{1-\alpha/(2(R-k+1))} \quad \text{for } i \ge k, \text{ where } k \text{ is the smallest index such that } p_{(k)}^{*} > \alpha.$$
Sequential Sidak
The adjusted p-values are:
$$p_{(i)}^{*} = \max_{j \le i}\left\{1 - \left(1 - p_{(j)}\right)^{R-j+1}\right\}.$$
The adjusted critical values will correspond to the ordered adjusted p-values as follows:
$$z_{(i)}^{*} = z_{1-\alpha_i^{*}/2}, \quad \text{where } \alpha_i^{*} = 1-(1-\alpha)^{1/(R-i+1)}, \text{ if } p_{(j)}^{*} \le \alpha \text{ for all } j \le i;$$
$$z_{(i)}^{*} = z_{1-\alpha_k^{*}/2} \quad \text{for } i \ge k, \text{ where } k \text{ is the smallest index such that } p_{(k)}^{*} > \alpha.$$
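The two sequential adjustments above can be sketched in a few lines of Python. This is an illustrative implementation of the adjusted p-value formulas only (the critical-value adjustment is omitted); it is not SPSS Modeler code, and the function names are hypothetical:

```python
def seq_bonferroni(pvals):
    """Sequential Bonferroni: p*_(i) = max_{j<=i} min((R-j+1)*p_(j), 1)."""
    R = len(pvals)
    order = sorted(range(R), key=lambda i: pvals[i])  # ascending p-values
    adj, running = [0.0] * R, 0.0
    for rank, i in enumerate(order):
        running = max(running, min((R - rank) * pvals[i], 1.0))
        adj[i] = running  # adjusted p-values are monotone non-decreasing
    return adj

def seq_sidak(pvals):
    """Sequential Sidak: p*_(i) = max_{j<=i} {1 - (1 - p_(j))^(R-j+1)}."""
    R = len(pvals)
    order = sorted(range(R), key=lambda i: pvals[i])
    adj, running = [0.0] * R, 0.0
    for rank, i in enumerate(order):
        running = max(running, 1.0 - (1.0 - pvals[i]) ** (R - rank))
        adj[i] = running
    return adj

print(seq_bonferroni([0.01, 0.02, 0.05]))  # ≈ [0.03, 0.04, 0.05]
```

Note that both methods cap the adjusted values at 1 and enforce monotonicity over the ordered p-values, as stated above.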
Method for computing degrees of freedom
Residual method
The value of degrees of freedom is given by $\nu = N - \mathrm{rank}(X)$, where N is the effective sample size and X is the design matrix of fixed effects.
Satterthwaite’s approximation
First perform the spectral decomposition $L\Sigma L^T = \Gamma^T D\,\Gamma$, where Γ is an orthogonal matrix of eigenvectors and D is a diagonal matrix of eigenvalues. If $l_m$ is the mth row of $\Gamma L$, $d_m$ is the mth eigenvalue, and
$$\nu_m = \frac{2 d_m^2}{g_m^T\,\Sigma_\theta\, g_m},$$
where $g_m = \partial\!\left(l_m \Sigma(\theta)\, l_m^T\right)/\partial\theta$ evaluated at $\hat\theta$, and $\Sigma_\theta$ is the asymptotic covariance matrix of $\hat\theta$ obtained from the Hessian matrix H of the objective function; that is, $\Sigma_\theta = 2H^{-1}$. If
$$E = \sum_{m=1}^{q} \frac{\nu_m}{\nu_m - 2}\, I(\nu_m > 2),$$
then the denominator degrees of freedom is given by
$$\nu = \frac{2E}{E - q}.$$
Note that the degrees of freedom can only be computed when E > q.
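The final combination step can be sketched as follows. The eigenvalues $d_m$, gradient vectors $g_m$, and the covariance $\Sigma_\theta$ are assumed to have been computed already (they are model-specific), so this only illustrates the formulas for $\nu_m$, E, and ν above; the function name is hypothetical:

```python
def satterthwaite_df(eigvals, grads, cov_theta):
    """Combine per-component df nu_m = 2*d_m^2 / (g_m' Sigma_theta g_m)
    into the denominator df nu = 2E / (E - q), following the formulas above."""
    q = len(eigvals)
    E = 0.0
    for d_m, g_m in zip(eigvals, grads):
        # quadratic form g_m' Sigma_theta g_m
        quad = sum(g_m[a] * cov_theta[a][b] * g_m[b]
                   for a in range(len(g_m)) for b in range(len(g_m)))
        nu_m = 2.0 * d_m * d_m / quad
        if nu_m > 2.0:          # only components with nu_m > 2 contribute
            E += nu_m / (nu_m - 2.0)
    return 2.0 * E / (E - q) if E > q else float("nan")

# One component with nu_1 = 10 gives E = 10/8 = 1.25 and nu = 2*1.25/0.25 = 10.
print(satterthwaite_df([1.0], [[1.0]], [[0.2]]))
```

When E ≤ q the degrees of freedom are undefined, matching the E > q condition stated above.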
Scoring
For GLMMs, predicted values and relevant statistics can be computed based on solutions of the random effects. PQL-type predictions use $\hat\gamma$ as the solution for the random effects to compute predicted values and relevant statistics.

PQL-type predicted values and relevant statistics

Predicted value of the linear predictor:
$$\hat\eta_i = x_i^T\hat\beta + z_i^T\hat\gamma + o_i.$$

Standard error of the linear predictor:
$$\hat\sigma(\hat\eta_i) = \sqrt{x_i^T \Sigma\, x_i + z_i^T \Sigma_\gamma\, z_i + 2\, z_i^T \Sigma_{\gamma\beta}\, x_i},$$
where $\Sigma_\gamma$ is the approximate covariance matrix of $\hat\gamma - \gamma$ and $\Sigma_{\gamma\beta}$ is the cross-covariance between $\hat\gamma$ and $\hat\beta$.

Predicted value of the mean:
$$\hat\mu_i = g^{-1}\!\left(x_i^T\hat\beta + z_i^T\hat\gamma + o_i\right).$$

For the binomial distribution with a 0/1 binary target variable, the predicted category $c(x_i)$ is
$$c(x_i) = \begin{cases}1\ (\text{or success}) & \text{if } \hat\mu_i \ge 0.5,\\ 0\ (\text{or failure}) & \text{otherwise.}\end{cases}$$

Approximate 100(1−α)% confidence intervals for the mean:
$$g^{-1}\!\left(\hat\eta_i \pm t_{1-\alpha/2;\nu}\,\hat\sigma(\hat\eta_i)\right).$$
Raw residual on the link function transformation:
$$r_i = v_i - \hat\eta_i,$$
where $v_i$ is the pseudo target value.

Raw residual on the original scale of the target:
$$r_i^{*} = y_i - \hat\mu_i.$$

Pearson-type residual on the link function transformation:
$$r_i^{P} = \frac{v_i - \hat\eta_i}{\sqrt{\left[V(v\,|\,\hat\gamma)\right]_{ii}}},$$
where $\left[V(v\,|\,\hat\gamma)\right]_{ii}$ is the ith diagonal element of $V(v\,|\,\hat\gamma) = \tilde D^{-1} A^{1/2} R\, A^{1/2} \tilde D^{-1}$ evaluated at $\hat\mu$, and $\hat\mu$ is an n×1 vector of PQL-type predicted values of the mean.

Pearson-type residual on the original scale of the target:
$$r_i^{P*} = \frac{y_i - \hat\mu_i}{\sqrt{\left[V(y\,|\,\hat\gamma)\right]_{ii}}},$$
where $\left[V(y\,|\,\hat\gamma)\right]_{ii}$ is the ith diagonal element of $V(y\,|\,\hat\gamma) = A^{1/2} R\, A^{1/2}$ evaluated at $\hat\mu$.
Classification Table
Suppose that $N_{jj'}$ is the sum of the frequencies for the observations whose actual target category is j (as row) and predicted target category is $j'$ (as column), $j, j' = 1, \dots, J$ (note that J = 2 for binomial); then
$$N_{jj'} = \sum_{i=1}^{n} f_i\, I\!\left(y_i = j,\ c(x_i) = j'\right),$$
where $I(\cdot)$ is the indicator function.

Suppose that $P_{jj'}$ is the $(j, j')$th element of the classification table, which is the row percentage; then
$$P_{jj'} = \frac{N_{jj'}}{\sum_{k=1}^{J} N_{jk}} \times 100\%.$$

The percentage of total correct predictions of the model (or “overall percent correct”) is
$$P = \frac{\sum_{j=1}^{J} N_{jj}}{\sum_{j=1}^{J}\sum_{j'=1}^{J} N_{jj'}} \times 100\%.$$
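For illustration only, a classification table with frequency weights can be assembled as follows (the function and variable names are hypothetical, not part of the product):

```python
def classification_table(actual, predicted, freq, J):
    """Build N[j][j'], row percentages, and overall percent correct.
    Categories are coded 1..J; freq holds the frequency weights f_i."""
    N = [[0.0] * J for _ in range(J)]
    for y, c, f in zip(actual, predicted, freq):
        N[y - 1][c - 1] += f
    total = sum(sum(row) for row in N)
    row_pct = [[100.0 * n / sum(row) if sum(row) else 0.0 for n in row]
               for row in N]
    overall = 100.0 * sum(N[j][j] for j in range(J)) / total
    return N, row_pct, overall

N, pct, overall = classification_table(
    actual=[1, 1, 2, 2], predicted=[1, 2, 2, 2], freq=[1, 1, 1, 1], J=2)
print(overall)  # 75.0
```

Here three of the four (equally weighted) records fall on the diagonal, so the overall percent correct is 75%.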
Nominal multinomial distribution
The nominal multinomial distribution requires some extra notation and explanation.
Notation
The following notation is used throughout this section unless otherwise stated:
S: Number of super subjects.
$n_s$: Number of cases in the sth super subject.
$y_{st}$: Nominal categorical target for the tth case in the sth super subject. Its category values are denoted as 1, 2, and so on.
J: The total number of categories for the target.
$\mathbf{y}_{st}$: Dummy vector of $y_{st}$, $\mathbf{y}_{st} = (y_{st,1}, \dots, y_{st,J-1})^T$, where $y_{st,j} = 1$ if $y_{st} = j$ and 0 otherwise. The superscript T means the transpose of a matrix or vector.
y: $\left(\mathbf{y}_{11}^T, \dots, \mathbf{y}_{1n_1}^T, \dots, \mathbf{y}_{S1}^T, \dots, \mathbf{y}_{Sn_S}^T\right)^T$.
$\pi_{st,j}$: Probability of category j for the tth case in the sth super subject; that is, $\pi_{st,j} = P(y_{st} = j)$.
$\pi_{st}$, π: $\pi_{st} = (\pi_{st,1}, \dots, \pi_{st,J-1})^T$ and $\pi = \left(\pi_{11}^T, \dots, \pi_{Sn_S}^T\right)^T$.
$\eta_{st,j}$: Linear predictor value for category j of the tth case in the sth super subject.
η: (n(J−1)) × 1 vector of linear predictors, $\eta = \left(\eta_{11}^T, \dots, \eta_{Sn_S}^T\right)^T$ with $\eta_{st} = (\eta_{st,1}, \dots, \eta_{st,J-1})^T$.
$x_{st}$: p× 1 vector of predictor variables for the tth case in the sth super subject. The first element is 1 if there is an intercept.
X: (n(J−1)) × ((J−1)p) design matrix of fixed effects.
$z_{st}$: r× 1 vector of coefficients for the random effect corresponding to the tth case in the sth super subject.
Z: Design matrix of random effects, $Z = \bigoplus_s Z_s$, where ⊕ is the direct sum of matrices.
O: n× 1 vector of offsets, $O = (o_{11}, \dots, o_{Sn_S})^T$, where $o_{st}$ is the offset value of the tth case in the sth super subject. This can’t be the target (y) or one of the predictors (X). The offset must be continuous.
$\mathbf{1}_q$: A length-q vector of 1’s.
$\beta_j$: p× 1 vector of unknown parameters for category j, $j = 1, \dots, J-1$. The first element in $\beta_j$ is the intercept for category j, if there is one.
β: $\left(\beta_1^T, \dots, \beta_{J-1}^T\right)^T$.
$\gamma_{js}$: r × 1 vector of random effects for category j in the sth super subject.
$\gamma_s$: Random effects for the sth super subject, $\gamma_s = \left(\gamma_{1s}^T, \dots, \gamma_{(J-1)s}^T\right)^T$.
$\omega_{st}$: Scale weight of the tth case in the sth super subject. It does not have to be an integer. If it is less than or equal to 0 or missing, the corresponding case is not used.
ω: n× 1 vector of the scale weight variable, $\omega = (\omega_{11}, \dots, \omega_{Sn_S})^T$.
$f_{st}$: Frequency weight of the tth case in the sth super subject. If it is a non-integer value, it is treated by rounding the value to the nearest integer. If it is less than 0.5 or missing, the corresponding case is not used.
f: n× 1 vector of the frequency count variable, $f = (f_{11}, \dots, f_{Sn_S})^T$.
N: Effective sample size, $N = \sum_{s,t} f_{st}$. If the frequency count variable f is not used, N = n.
Model
The form of a generalized linear mixed model for a nominal target with random effects is
$$\eta = g(\pi) = X\beta + Z\gamma + O,$$
where η is the linear predictor; X is the design matrix for fixed effects; Z is the design matrix for random effects; γ is a vector of random effects which are assumed to be normally distributed with mean 0 and variance matrix G; and $g(\cdot)$ is the generalized logit link function such that
$$g(\pi_{st}) = \left(\log\frac{\pi_{st,1}}{\pi_{st,J}}, \dots, \log\frac{\pi_{st,J-1}}{\pi_{st,J}}\right)^T.$$
Its inverse function is
$$\pi_{st,j} = \frac{\exp(\eta_{st,j})}{1 + \sum_{k=1}^{J-1}\exp(\eta_{st,k})}, \quad j = 1, \dots, J-1.$$
The variance of y, conditional on the random effects, is
$$\mathrm{Var}\!\left(\mathbf{y}_{st}\,|\,\gamma_s\right) = \phi\,\omega_{st}^{-1}\left(\mathrm{diag}(\pi_{st}) - \pi_{st}\pi_{st}^T\right),$$
where φ is set to 1 and R = I, which means that R-side effects are not supported for the multinomial distribution.
Estimation
Linear mixed pseudo model
Similarly to “Linear mixed pseudo model” on p. 192, we can obtain a weighted linear mixed model
$$v = X\beta + Z\gamma + \varepsilon,$$
where $v = \tilde D^{-1}(y - \tilde\pi) + \tilde\eta - O$ is the pseudo target and the error terms ε have mean 0. Here $\tilde D = \partial\pi/\partial\eta$ is block diagonal with blocks
$$\tilde D_{st} = \mathrm{diag}(\tilde\pi_{st}) - \tilde\pi_{st}\tilde\pi_{st}^T$$
evaluated at the current estimates, and the block diagonal weight matrix is
$$W = \tilde D^T\,\tilde V^{-1}\,\tilde D,$$
where $\tilde V$ is the conditional variance of y given γ.

The Gaussian log pseudo-likelihood (PL) and restricted log pseudo-likelihood (REPL), which are expressed as functions of the covariance parameters in θ, corresponding to the linear mixed model for v, are the following:
$$l_{\mathrm{PL}}(\theta; v) = -\frac{1}{2}\left[\log\left|V(\theta)\right| + r(\theta)^T V(\theta)^{-1} r(\theta) + N\log 2\pi\right],$$
$$l_{\mathrm{REPL}}(\theta; v) = -\frac{1}{2}\left[\log\left|V(\theta)\right| + r(\theta)^T V(\theta)^{-1} r(\theta) + \log\left|X^T V(\theta)^{-1} X\right| + (N - p_x)\log 2\pi\right],$$
where $V(\theta) = Z\,G(\theta)\,Z^T + R(\theta)$, $r(\theta) = v - X\hat\beta(\theta)$, N denotes the effective sample size, and $p_x$ denotes the total number of non-redundant parameters for β.

The parameter θ can be estimated by the linear mixed model using the objective function $l_{\mathrm{PL}}(\theta; v)$ or $l_{\mathrm{REPL}}(\theta; v)$; $\hat\beta$ and $\hat\gamma$ are computed as
$$\hat\beta = \left(X^T V^{-1} X\right)^{-} X^T V^{-1} v,$$
$$\hat\gamma = G\,Z^T V^{-1}\left(v - X\hat\beta\right).$$
Iterative process
The doubly iterative process for the estimation of θ is the same as that for other distributions, if we replace the working quantities with their nominal multinomial counterparts (the pseudo target v and weight matrix W defined above) and set the initial estimates accordingly.
For more information, see the topic “Iterative process ” on p. 194.
Post-estimation statistics
Wald confidence intervals
The Wald confidence intervals for covariance parameter estimates are described in “Wald
confidence intervals for covariance parameter estimates ” on p. 195.
Statistics for estimates of fixed and random effects
Similarly to “Statistics for estimates of fixed and random effects” on p. 196, the approximate covariance matrix of $(\hat\beta;\ \hat\gamma - \gamma)$ is the inverse of the coefficient matrix of the mixed model equations, with the weight matrix $W = \tilde D^T\,\tilde V^{-1}\,\tilde D$ and $\tilde D$ as defined above, evaluated at the converged estimates.
Statistics for estimates of fixed and random effects on original scale
If the fixed effects are transformed when constructing matrix X, then the final estimates of β, $\Sigma_m$, $\Sigma_r$, and C above are based on the transformed scale, denoted as $\beta^t$, $\Sigma_m^t$, $\Sigma_r^t$, and $C^t$, respectively. They are transformed back to the original scale, denoted as β, $\Sigma_m$, $\Sigma_r$, and C, respectively, as follows:
$$\beta = A\beta^t, \qquad \Sigma_m = A\,\Sigma_m^t\,A^T, \qquad \Sigma_r = A\,\Sigma_r^t\,A^T,$$
where A is the back-transformation matrix corresponding to the transformation applied to the fixed effects.
Estimated covariance matrix of the fixed effects parameters

Model-based estimated covariance:
$$\Sigma_m = \left(X^T\,\tilde V^{-1}\,X\right)^{-}.$$

Robust estimated covariance of the fixed effects parameters:
$$\Sigma_r = \Sigma_m \left(\sum_{s=1}^{S} X_s^T\,\tilde V_s^{-1}\, r_s\, r_s^T\,\tilde V_s^{-1}\, X_s\right) \Sigma_m,$$
where $r_s = v_s - X_s\hat\beta$, and $v_s$ is the part of v corresponding to the sth super subject.
Standard error for estimates in fixed effects and predictions in random effects
Let $\hat\beta_j$ denote a non-redundant fixed effects parameter estimate. Its standard error is the square root of the jth diagonal element of $\Sigma_m$ or $\Sigma_r$. The standard error for redundant parameter estimates is set to the system missing value.

Similarly, let $\hat\gamma_i$ denote the ith random effects prediction. Its standard error is the square root of the ith diagonal element of the approximate covariance matrix of $\hat\gamma - \gamma$.
Test statistics for estimates in fixed effects and predictions in random effects
Test statistics for estimates in fixed effects and predictions in random effects are as those described
in “Statistics for estimates of fixed and random effects ” on p. 196.
Wald confidence intervals for estimates in fixed effects and random effects predictions
Wald confidence intervals are as those described in “Statistics for estimates of fixed and random
effects ” on p. 196.
Testing
Information criteria
These are as described in “Goodness of fit ” on p. 199.
Tests of fixed effects
For each effect specified in the model, a type III test matrix L is constructed from the generating matrix such that Lβ is estimable. Then the test statistic is
$$F = \frac{(L\hat\beta)^T \left(L\,\Sigma\, L^T\right)^{-} (L\hat\beta)}{d_1},$$
where $d_1 = \mathrm{rank}(L\,\Sigma\, L^T)$ and Σ is the estimated covariance matrix of $\hat\beta$. The statistic has an approximate F distribution. The numerator degrees of freedom is $d_1$ and the denominator degrees of freedom is ν. For more information, see the topic “Method for computing degrees of freedom” on p. 203.
Scoring
PQL-type predicted values and relevant statistics

Predicted vector of the linear predictor:
$$\hat\eta_{st} = X_{st}\hat\beta + Z_{st}\hat\gamma_s + o_{st}\mathbf{1}_{J-1}.$$

Estimated covariance matrix of the linear predictor:
$$\Sigma(\hat\eta_{st}) = X_{st}\,\Sigma\, X_{st}^T + Z_{st}\,\Sigma_s^{\gamma}\, Z_{st}^T + X_{st}\,\Sigma_s^{\beta\gamma}\, Z_{st}^T + Z_{st}\left(\Sigma_s^{\beta\gamma}\right)^T X_{st}^T,$$
where $\Sigma_s^{\gamma}$ is the diagonal block of the approximate covariance matrix of $\hat\gamma - \gamma$ corresponding to the sth super subject, and $\Sigma_s^{\beta\gamma}$ is the part of the cross-covariance between $\hat\beta$ and $\hat\gamma$ corresponding to the sth super subject.

The estimated standard error of the jth element in $\hat\eta_{st}$, $\hat\sigma(\hat\eta_{st,j})$, is the square root of the jth diagonal element of $\Sigma(\hat\eta_{st})$.

Predicted value of the probability for category j:
$$\hat\pi_{st,j} = \frac{\exp(\hat\eta_{st,j})}{1 + \sum_{k=1}^{J-1}\exp(\hat\eta_{st,k})}, \quad j = 1, \dots, J-1; \qquad \hat\pi_{st,J} = 1 - \sum_{k=1}^{J-1}\hat\pi_{st,k}.$$

Predicted category:
$$c(x_{st}) = \arg\max_j \hat\pi_{st,j}.$$
If there is a tie in determining the predicted category, the tie is broken by choosing the category with the highest observed frequency. If there is still a tie, the one with the lowest category number is chosen.

Approximate 100(1−α)% confidence intervals for the predicted probabilities

The covariance matrix of $\hat\pi_{st}$ can be computed as
$$\Sigma(\hat\pi_{st}) = J_{st}\,\Sigma(\hat\eta_{st})\,J_{st}^T,$$
where $J_{st} = \partial\pi_{st}/\partial\eta_{st}$ is the Jacobian of the inverse link function evaluated at $\hat\eta_{st}$; then the confidence interval is
$$\hat\pi_{st,j} \pm t_{1-\alpha/2;\nu}\,\sqrt{\left[\Sigma(\hat\pi_{st})\right]_{jj}},$$
where $\left[\Sigma(\hat\pi_{st})\right]_{jj}$ is the jth diagonal element of $\Sigma(\hat\pi_{st})$, the estimated variance of $\hat\pi_{st,j}$.
Ordinal multinomial distribution
The ordinal multinomial distribution requires some extra notation and explanation.
Notation
The following notation is used throughout this section unless otherwise stated:
S: Number of super subjects.
$n_s$: Number of cases in the sth super subject.
$y_{st}$: Ordinal categorical target for the tth case in the sth super subject. Its category values are denoted as consecutive integers from 1 to J.
J: The total number of categories for the target.
$\mathbf{y}_{st}$: Indicator vector of $y_{st}$, $\mathbf{y}_{st} = (y_{st,1}, \dots, y_{st,J-1})^T$, where $y_{st,j} = 1$ if $y_{st} = j$ and 0 otherwise. The superscript T means the transpose of a matrix or vector.
y: $\left(\mathbf{y}_{11}^T, \dots, \mathbf{y}_{Sn_S}^T\right)^T$.
$\lambda_{st,j}$: Cumulative target probability for category j for the tth case in the sth super subject; $\lambda_{st,j} = P(y_{st} \le j)$.
λ: $\left(\lambda_{11}^T, \dots, \lambda_{Sn_S}^T\right)^T$, where $\lambda_{st} = (\lambda_{st,1}, \dots, \lambda_{st,J-1})^T$.
$\pi_{st,j}$: Probability of category j for the tth case in the sth super subject; that is, $\pi_{st,j} = P(y_{st} = j) = \lambda_{st,j} - \lambda_{st,j-1}$, with $\lambda_{st,0} = 0$ and $\lambda_{st,J} = 1$.
$\eta_{st,j}$: Linear predictor value for category j of the tth case in the sth super subject.
η: (n(J−1)) × 1 vector of linear predictors.
$x_{st}$: p× 1 vector of predictors for the tth case in the sth super subject.
$z_{st}$: r× 1 vector of coefficients for the random effect corresponding to the tth case in the sth super subject.
O: n× 1 vector of offsets, $O = (o_{11}, \dots, o_{Sn_S})^T$, where $o_{st}$ is the offset value of the tth case in the sth super subject. This can’t be the target (y) or one of the predictors (X). The offset must be continuous.
$\mathbf{1}_q$: A length-q vector of 1’s.
ψ: (J−1) × 1 vector of threshold parameters, $\psi = (\psi_1, \dots, \psi_{J-1})^T$.
β: p× 1 vector of unknown parameters.
B: (J−1+p) × 1 vector of all parameters, $B = \left(\psi^T, \beta^T\right)^T$.
$\omega_{st}$: Scale weight of the tth case in the sth super subject. It does not have to be an integer. If it is less than or equal to 0 or missing, the corresponding case is not used.
ω: n× 1 vector of the scale weight variable, $\omega = (\omega_{11}, \dots, \omega_{Sn_S})^T$.
$f_{st}$: Frequency weight of the tth case in the sth super subject. If it is a non-integer value, it is treated by rounding the value to the nearest integer. If it is less than 0.5 or missing, the corresponding case is not used.
f: n× 1 vector of the frequency count variable.
N: Effective sample size, $N = \sum_{s,t} f_{st}$. If the frequency count variable f is not used, N = n.
A ⊗ B: Direct (or Kronecker) product of A and B, which is equal to the block matrix $\left[a_{ij}B\right]$.
Model
The form of a generalized linear mixed model for an ordinal target with random effects is
$$g(\lambda) = \eta = XB + Z\gamma + O \otimes \mathbf{1}_{J-1},$$
where η is the expanded linear predictor vector; λ is the expanded cumulative target probability vector; $g(\cdot)$ is a cumulative link function; X is the expanded design matrix for fixed effects, arranged so that the block for the tth case in the sth super subject is
$$X_{st} = \left[\,I_{J-1} \quad \mathbf{1}_{J-1} \otimes x_{st}^T\,\right],$$
with $B = \left(\psi^T, \beta^T\right)^T$; Z is the expanded design matrix for random effects, arranged so that the corresponding block is
$$Z_{st} = \mathbf{1}_{J-1} \otimes z_{st}^T;$$
and γ is a vector of random effects which are assumed to be normally distributed with mean 0 and variance matrix G.

The variance of y, conditional on the random effects, is
$$\mathrm{Var}\!\left(\mathbf{y}_{st}\,|\,\gamma_s\right) = \phi\,\omega_{st}^{-1}\left(\mathrm{diag}(\pi_{st}) - \pi_{st}\pi_{st}^T\right),$$
where φ is set to 1 and R = I, which means that R-side effects are not supported for the multinomial distribution.
Estimation
Linear mixed pseudo model
Similarly to “Linear mixed pseudo model” on p. 192, we can obtain a weighted linear mixed model
$$v = XB + Z\gamma + \varepsilon,$$
where $v = \tilde D^{-1}(y - \tilde\lambda) + \tilde\eta - O \otimes \mathbf{1}_{J-1}$ is the pseudo target and the error terms ε have mean 0. Here $\tilde D = \partial\lambda/\partial\eta$ is block diagonal with blocks $\tilde D_{st} = \partial\lambda_{st}/\partial\eta_{st}$ evaluated at the current estimates, and the block diagonal weight matrix is
$$W = \tilde D^T\,\tilde V^{-1}\,\tilde D,$$
where $\tilde V$ is the conditional variance of y given γ.

The Gaussian log pseudo-likelihood (PL) and restricted log pseudo-likelihood (REPL), which are expressed as functions of the covariance parameters in θ, corresponding to the linear mixed model for v, are the following:
$$l_{\mathrm{PL}}(\theta; v) = -\frac{1}{2}\left[\log\left|V(\theta)\right| + r(\theta)^T V(\theta)^{-1} r(\theta) + N\log 2\pi\right],$$
$$l_{\mathrm{REPL}}(\theta; v) = -\frac{1}{2}\left[\log\left|V(\theta)\right| + r(\theta)^T V(\theta)^{-1} r(\theta) + \log\left|X^T V(\theta)^{-1} X\right| + (N - p_x)\log 2\pi\right],$$
where $V(\theta) = Z\,G(\theta)\,Z^T + R(\theta)$, $r(\theta) = v - X\hat B(\theta)$, N denotes the effective sample size, and $p_x$ denotes the total number of non-redundant parameters for B.

The parameter θ can be estimated by the linear mixed model using the objective function $l_{\mathrm{PL}}(\theta; v)$ or $l_{\mathrm{REPL}}(\theta; v)$; $\hat B$ and $\hat\gamma$ are computed as
$$\hat B = \left(X^T V^{-1} X\right)^{-} X^T V^{-1} v,$$
$$\hat\gamma = G\,Z^T V^{-1}\left(v - X\hat B\right).$$
Iterative process
The doubly iterative process for the estimation of θ is the same as that for other distributions, if we replace the working quantities with their ordinal multinomial counterparts (the pseudo target v and weight matrix W defined above) and set the initial estimates accordingly.
For more information, see the topic “Iterative process ” on p. 194.
Post-estimation statistics
Wald confidence intervals
The Wald confidence intervals for covariance parameter estimates are described in “Wald
confidence intervals for covariance parameter estimates ” on p. 195.
Statistics for estimates of fixed and random effects
The approximate covariance matrix of $(\hat B;\ \hat\gamma - \gamma)$ is as described in “Statistics for estimates of fixed and random effects” on p. 196, except that $\tilde D$ in W should be the block diagonal matrix with blocks $\tilde D_{st} = \partial\lambda_{st}/\partial\eta_{st}$.
Statistics for estimates of fixed and random effects on original scale
If the fixed effects are transformed when constructing matrix X, then the final estimates of B are based on the transformed scale, denoted as $B^t = \left((\psi^t)^T, (\beta^t)^T\right)^T$. They are transformed back to the original scale, denoted as $\hat B$, as follows:
$$\hat B = \begin{pmatrix}\psi \\ \beta\end{pmatrix} = A\,B^t,$$
where A is the back-transformation matrix, block upper-triangular in the threshold and regression parts so that the thresholds absorb the centering of the predictors and the regression coefficients absorb their scaling.
Estimated covariance matrix of the fixed effects parameters
The estimated covariance matrix of the fixed effects parameters are described in “Statistics for
estimates of fixed and random effects ” on p. 196.
Standard error for estimates in fixed effects and predictions in random effects
Let $\hat\psi_1, \dots, \hat\psi_{J-1}$ be threshold parameter estimates and $\hat\beta_1, \dots, \hat\beta_p$ denote non-redundant regression parameter estimates. Their standard errors are the square roots of the corresponding diagonal elements of $\Sigma_m$ or $\Sigma_r$: $\hat\sigma(\hat\psi_j) = \sqrt{[\Sigma]_{jj}}$ and $\hat\sigma(\hat\beta_i) = \sqrt{[\Sigma]_{(J-1+i)(J-1+i)}}$, respectively, where $[\Sigma]_{ii}$ is the ith diagonal element of $\Sigma_m$ or $\Sigma_r$.
Standard errors for predictions in random effects are as those described in “Statistics for estimates
of fixed and random effects ” on p. 196.
Test statistics for estimates in fixed effects and predictions in random effects
The hypotheses $H_0: \psi_j = 0$ are tested for threshold parameters using the t statistic:
$$t = \frac{\hat\psi_j}{\hat\sigma(\hat\psi_j)}.$$
Test statistics for estimates in fixed effects and predictions in random effects are otherwise as
those described in “Statistics for estimates of fixed and random effects ” on p. 196.
Wald confidence intervals for estimates in fixed effects and random effects predictions
The 100(1 – α)% Wald confidence interval for threshold parameter $\psi_j$ is given by
$$\hat\psi_j \pm t_{1-\alpha/2;\nu}\,\hat\sigma(\hat\psi_j).$$
Wald confidence intervals are otherwise as those described in “Statistics for estimates of fixed and
random effects ” on p. 196.
The degrees of freedom can be computed by the residual method or the Satterthwaite method. For the residual method, $\nu = N - p_x$. For the Satterthwaite method, it should be similar to that described in “Method for computing degrees of freedom” on p. 203.
Testing
Information criteria
These are as described in “Goodness of fit ” on p. 199, with the following modifications.
For REPL, the value of N is chosen to be the effective sample size minus the number of non-redundant parameters in fixed effects, $N - p_x$, where $p_x$ is the number of non-redundant parameters in fixed effects, and d is the number of covariance parameters.

For PL, the value of N is the effective sample size, and d is the number of non-redundant parameters in fixed effects, $p_x$, plus the number of covariance parameters.
Tests of fixed effects
For each effect specified in the model excluding threshold parameters, a type I or III test matrix $L_i$ is constructed and $H_0: L_i B = 0$ is tested. Construction of matrix $L_i$ is based on the generating matrix $H = \left(X^T X\right)^{-} X^T X$ and such that $L_i B$ is estimable. Note that $L_i B$ is estimable if and only if $L_0 = L_0 H$. Construction of $L_0$ considers a partition of the more general test matrix $L_i = \left[L_i^{(\psi)} \;\; L_i^{(\beta)}\right]$ first, where $L_i^{(\psi)} = \left[l_1, \dots, l_{J-1}\right]$ consists of the columns corresponding to the threshold parameters and $L_i^{(\beta)}$ is the part of $L_i$ corresponding to regression parameters, then replaces $L_i^{(\psi)}$ with the column sum $\sum_{j=1}^{J-1} l_j$ to get $L_0$.

Note that the threshold-parameter effect is not tested for both type I and III analyses, and construction of $L_i$ is the same as in GENLIN. For more information, see the topic “Default Tests of Model Effects” on p. 182. Similarly, if the fixed effects are transformed when constructing matrix X, then H should be constructed based on transformed values.
Scoring
PQL-type predicted values and relevant statistics
Predicted vector of the linear predictor:
$$\hat\eta_{st} = X_{st}\hat B + Z_{st}\hat\gamma_s + o_{st}\mathbf{1}_{J-1}.$$

Estimated covariance matrix of the linear predictor:
$$\Sigma(\hat\eta_{st}) = X_{st}\,\Sigma\, X_{st}^T + Z_{st}\,\Sigma_s^{\gamma}\, Z_{st}^T + X_{st}\,\Sigma_s^{B\gamma}\, Z_{st}^T + Z_{st}\left(\Sigma_s^{B\gamma}\right)^T X_{st}^T,$$
where $\Sigma_s^{\gamma}$ is the diagonal block of the approximate covariance matrix of $\hat\gamma - \gamma$ corresponding to the sth super subject, and $\Sigma_s^{B\gamma}$ is the part of the cross-covariance between $\hat B$ and $\hat\gamma$ corresponding to the sth super subject.

The estimated standard error of the jth element in $\hat\eta_{st}$, $\hat\sigma(\hat\eta_{st,j})$, is the square root of the jth diagonal element of $\Sigma(\hat\eta_{st})$.

Predicted value of the cumulative probability for category j:
$$\hat\lambda_{st,j} = g^{-1}\!\left(\hat\eta_{st,j}\right), \quad j = 1, \dots, J-1, \qquad \text{with } \hat\lambda_{st,J} = 1.$$

Predicted category:
$$c(x_{st}) = \arg\max_j \hat\pi_{st,j}, \qquad \text{where } \hat\pi_{st,j} = \hat\lambda_{st,j} - \hat\lambda_{st,j-1}.$$
If there is a tie in determining the predicted category, the tie is broken by choosing the category with the highest observed frequency. If there is still a tie, the one with the lowest category number is chosen.

Approximate 100(1−α)% confidence intervals for the cumulative predicted probabilities:
$$g^{-1}\!\left(\hat\eta_{st,j} \pm t_{1-\alpha/2;\nu}\,\hat\sigma(\hat\eta_{st,j})\right).$$
If either endpoint in the argument is outside the valid range for the inverse link function, the corresponding confidence interval endpoint is set to a system missing value.

The degrees of freedom can be computed by the residual method or the Satterthwaite method. For the residual method, $\nu = N - p_x$. For Satterthwaite’s approximation, the L matrix is constructed by $\left[X_{st,j} \;\; Z_{st,j}\right]$, where $X_{st,j}$ and $Z_{st,j}$ are the jth rows of $X_{st}$ and $Z_{st}$, respectively, corresponding to the jth category. For example, the L matrix is $\left[1, 0, \dots, 0, x_{st}^T, z_{st}^T\right]$ for the 1st category. The computation should then be similar to that described in “Method for computing degrees of freedom” on p. 203.
References
Agresti, A., J. G. Booth, and B. Caffo. 2000. Random-effects Modeling of Categorical Response
Data. Sociological Methodology, 30, 27–80.
Diggle, P. J., P. Heagerty, K. Y. Liang, and S. L. Zeger. 2002. The Analysis of Longitudinal Data, 2nd ed. Oxford: Oxford University Press.
Fahrmeir, L., and G. Tutz. 2001. Multivariate Statistical Modelling Based on Generalized Linear
Models, 2nd ed. New York: Springer-Verlag.
Hartzel, J., A. Agresti, and B. Caffo. 2001. Multinomial Logit Random Effects Models. Statistical
Modelling, 1, 81–102.
Hedeker, D. 1999. Generalized Linear Mixed Models. In: Encyclopedia of Statistics in Behavioral
Science, B. Everitt, and D. Howell, eds. London: Wiley, 729–738.
McCulloch, C. E., and S. R. Searle. 2001. Generalized, Linear, and Mixed Models. New York:
John Wiley and Sons.
Skrondal, A., and S. Rabe-Hesketh. 2004. Generalized Latent Variable Modeling: Multilevel,
Longitudinal, and Structural Equation Models. Boca Raton, FL: Chapman & Hall/CRC.
Tuerlinckx, F., F. Rijmen, G. Molenberghs, G. Verbeke, D. Briggs, W. Van den Noortgate, M.
Meulders, and P. De Boeck. 2004. Estimation and Software. In: Explanatory Item Response
Models: A Generalized Linear and Nonlinear Approach, P. De Boeck, and M. Wilson, eds.
New York: Springer-Verlag, 343–373.
Wolfinger, R., and M. O'Connell. 1993. Generalized Linear Mixed Models: A Pseudo-Likelihood
Approach. Journal of Statistical Computation and Simulation, 4, 233–243.
Wolfinger, R., R. Tobias, and J. Sall. 1994. Computing Gaussian likelihoods and their derivatives
for general linear mixed models. SIAM Journal on Scientific Computing, 15:6, 1294–1310.
Imputation of Missing Values
The following methods are available for imputing missing values:
Fixed. Substitutes a fixed value (either the field mean, midpoint of the range, or a constant that
you specify).
Random. Substitutes a random value based on a normal or uniform distribution.
Expression. Allows you to specify a custom expression. For example, you could replace values
with a global variable created by the Set Globals node.
Algorithm. Substitutes a value predicted by a model based on the C&RT algorithm. For each field
imputed using this method, there will be a separate C&RT model, along with a Filler node that
replaces blanks and nulls with the value predicted by the model. A Filter node is then used to
remove the prediction fields generated by the model.
Details of each imputation method are provided below.
Imputing Fixed Values
For fixed value imputation, three options are available:
Mean. Substitutes the mean of the valid training data values for the field being imputed,
$$\bar x = \frac{1}{n_x}\sum_{i} x_i,$$
where $x_i$ is the value of field x for record i, excluding missing values, and $n_x$ is the number of records with valid values for field x.

Midrange. Substitutes the value halfway between the minimum and maximum valid values for the field being imputed,
$$x_{\mathrm{mid}} = \frac{x_{\min} + x_{\max}}{2},$$
where $x_{\min}$ and $x_{\max}$ are the minimum and maximum observed valid values for field x, respectively.
Constant. Substitutes the user-specified constant value.
For imputing fixed missing values in set or flag fields, only the Constant option is available.
Note: Using fixed imputed values for scale fields will artificially reduce the variance for that field,
which can interfere with model building using the field. If you impute using fixed values and
find that the field no longer has the expected effect in a model, consider imputing with a different
method that has a smaller impact on the field’s variance.
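A minimal sketch of the mean and midrange substitutions described above, in pure Python with None standing in for missing values (illustrative only, not SPSS Modeler code):

```python
def impute_fixed(values, method="mean"):
    """Replace missing entries (None) with the field mean or midrange,
    computed from the valid values only."""
    valid = [v for v in values if v is not None]
    if method == "mean":
        fill = sum(valid) / len(valid)
    elif method == "midrange":
        fill = (min(valid) + max(valid)) / 2.0
    else:
        raise ValueError("method must be 'mean' or 'midrange'")
    return [fill if v is None else v for v in values]

print(impute_fixed([1.0, None, 3.0], "mean"))      # [1.0, 2.0, 3.0]
print(impute_fixed([1.0, None, 5.0], "midrange"))  # [1.0, 3.0, 5.0]
```

Note how every missing entry receives the same fill value, which is exactly why the field's variance shrinks, as the note above warns.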
© Copyright IBM Corporation 1994, 2015.
Imputing Random Values
For random value imputation, the options depend on the type of the field being imputed.
Range Fields
For range fields, you can select from a uniform distribution or a normal distribution.
Uniform distribution. Values are generated randomly on the interval $(x_{\min}, x_{\max})$, where each value in the interval is equally likely to be generated.

Normal distribution. Values are generated from a normal distribution with mean $\bar x$ and variance $s^2$, where $\bar x$ and $s^2$ are derived from the valid observed values of x in the training data,
$$\bar x = \frac{1}{n_x}\sum_{i} x_i, \qquad s^2 = \frac{1}{n_x - 1}\sum_{i}\left(x_i - \bar x\right)^2.$$
Set Fields
For set fields, random imputed values are selected from the list of observed values. By default, the probabilities of all values are equal, $p_j = 1/J$ for the J possible values $v_j$. The Equalize button will return any modified values to the default equal probabilities.

If you select Based on Audit, probabilities are assigned proportional to the relative frequencies of the values in the training data,
$$p_j = \frac{n_j}{\sum_{k=1}^{J} n_k},$$
where $n_j$ is the number of records for which $x = v_j$.
If you select Normalize, values are adjusted to sum to 1.0, maintaining the same relative proportions,
$$p_j' = \frac{p_j}{\sum_{k=1}^{J} p_k}.$$
This is useful if you want to enter your own weights for generated random values, but they aren’t expressed as probabilities. For example, if you know you want twice as many No values as Yes values, you can enter 2 for No and 1 for Yes and click Normalize. Normalization will adjust the values to 0.667 and 0.333, preserving the relative weights but expressing them as probabilities.
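The Normalize behavior in the example above amounts to dividing each weight by the total; a short sketch (illustrative only):

```python
def normalize_weights(weights):
    """Rescale user-entered weights so they sum to 1.0,
    preserving their relative proportions."""
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

probs = normalize_weights({"No": 2, "Yes": 1})
print(round(probs["No"], 3), round(probs["Yes"], 3))  # 0.667 0.333
```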
Imputing Values Derived from an Expression
For expression-based imputation, imputed values are based on a user-specified CLEM expression.
The expression is evaluated just as it would be for a filler node. Note that some expressions
may return $null or other missing values, with the result that missing values may exist even
after imputation with this method.
Imputing Values Derived from an Algorithm
For the Algorithm method, a C&RT model is built for each field to be imputed, using all other
input fields as predictors. For each record that is imputed, the model for the field to be imputed
is applied to the record to produce a prediction, which is used as the imputed value. For more
information, see the topic “Overview of C&RT” on p. 59.
K-Means Algorithm
Overview
The k-means method is a clustering method, used to group records based on similarity of values
for a set of input fields. The basic idea is to try to discover k clusters, such that the records within
each cluster are similar to each other and distinct from records in other clusters. K-means is an
iterative algorithm; an initial set of clusters is defined, and the clusters are repeatedly updated until
no more improvement is possible (or the number of iterations exceeds a specified limit).
Primary Calculations
In building the k-means model, input fields are encoded to account for differences in measurement
scale and type, and the clusters are defined and updated to generate the final model. These
calculations are described below.
Field Encoding
Input fields are recoded before the values are input to the algorithm as described below.
Scaling of Range Fields
In most datasets, there’s a great deal of variability in the scale of range fields. For example,
consider age and number of cars per household. Depending on the population of interest, age
may take values up to 80 or even higher. Values for number of cars per household, however, are
unlikely to exceed three or four in the vast majority of cases.
If you use both of these fields in their natural scale as inputs for a model, the age field is
likely to be given much more weight in the model than number of cars per household, simply
because the values (and therefore the differences between records) for the former are so much
larger than for the latter.
To compensate for this effect of scale, range fields are transformed so that they all have the
same scale. In IBM® SPSS® Modeler, range fields are rescaled to have values between 0 and 1.
The transformation used is
$$x_i' = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}},$$
where $x_i'$ is the rescaled value of input field x for record i, $x_i$ is the original value of x for record i, $x_{\min}$ is the minimum value of x for all records, and $x_{\max}$ is the maximum value of x for all records.
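As a sketch, the 0-1 rescaling above is simply:

```python
def rescale_01(values):
    """Min-max rescale a range field so all values fall in [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(rescale_01([20, 40, 80]))  # [0.0, 0.333..., 1.0]
```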
Numeric Coding of Symbolic Fields
For modeling algorithms that base their calculations on numerical differences between records,
symbolic fields pose a special challenge. How do you calculate a numeric difference for two
categories?
A common approach to the problem, and the approach used in IBM® SPSS® Modeler, is to
recode a symbolic field as a group of numeric fields with one numeric field for each category or
value of the original field. For each record, the value of the derived field corresponding to the
category of the record is set to 1.0, and all the other derived field values are set to 0.0. Such
derived fields are sometimes called indicator fields, and this recoding is called indicator coding.
For example, consider the following data, where x is a symbolic field with possible values A,
B, and C:
Record #   x    x1’   x2’   x3’
1          B    0     1     0
2          A    1     0     0
3          C    0     0     1
In this data, the original set field x is recoded into three derived fields x1’, x2’, and x3’. x1’ is an
indicator for category A, x2’ is an indicator for category B, and x3’ is an indicator for category C.
Applying the Set Encoding Value
After recoding set fields as described above, the algorithm can calculate a numerical difference
for the set field by taking the differences on the k derived fields (where k is the number of
categories in the original set). However, there is an additional problem. For algorithms that
use the Euclidean distance to measure differences between records, the difference between two
records with different values i and j for the set is
$$d = \sqrt{\sum_{k=1}^{J}\left(x_{k1} - x_{k2}\right)^2},$$
where J is the number of categories, and $x_{kn}$ is the value of the derived indicator for category k for record n. But the values will be different on two of the derived indicators, $x_i$ and $x_j$. Thus, the sum will be $\sqrt{1^2 + 1^2} = \sqrt{2} \approx 1.414$, which is larger than 1.0. That means that based on this coding, set fields will have more weight in the model than range fields that are rescaled to the 0-1 range.

To account for this bias, k-means applies a scaling factor to the derived set fields, such that a difference of values on a set field produces a Euclidean distance of 1.0. The default scaling factor is $\sqrt{0.5}$. You can see that this value gives the desired result by inserting the value into the distance formula:
$$d = \sqrt{\left(\sqrt{0.5} - 0\right)^2 + \left(0 - \sqrt{0.5}\right)^2} = \sqrt{0.5 + 0.5} = 1.0.$$
The user can specify a different scaling factor by changing the Encoding value for sets parameter in
the K-Means node expert options.
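To illustrate, the following sketch indicator-codes a set field with the default √0.5 factor and confirms that two records differing on the set field sit a Euclidean distance of 1.0 apart (illustrative only; the function name is hypothetical):

```python
import math

def encode_set(value, categories, factor=math.sqrt(0.5)):
    """Indicator-code a set value, scaling each indicator by the set
    encoding factor (default sqrt(0.5))."""
    return [factor if value == c else 0.0 for c in categories]

a = encode_set("A", ["A", "B", "C"])
b = encode_set("B", ["A", "B", "C"])
dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
print(dist)  # ≈ 1.0
```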
Encoding of Flag Fields
Flag fields are a special case of symbolic fields. However, because they have only two values in
the set, they can be handled in a slightly more efficient way than other set fields. Flag fields are
represented by a single numeric field, taking the value of 1.0 for the “true” value and 0.0 for the
“false” value. Blanks for flag fields are assigned the value 0.5.
Model Parameters
The primary calculation in k-means is an iterative process of calculating cluster centers and
assigning records to clusters. The primary steps in the procedure are:
1. Select initial cluster centers
2. Assign each record to the nearest cluster
3. Update the cluster centers based on the records assigned to each cluster
4. Repeat steps 2 and 3 until either:
- In step 3, there is no change in the cluster centers from the previous iteration, or
- The number of iterations exceeds the maximum iterations parameter
Clusters are defined by their centers. A cluster center is a vector of values for the (encoded) input
fields. The vector values are based on the mean values for records assigned to the cluster.
Selecting Initial Cluster Centers
The user specifies k, the number of clusters in the model. Initial cluster centers are chosen using a
maximin algorithm:
1. Initialize the first cluster center as the values of the input fields for the first data record.
2. For each data record, compute the minimum (Euclidean) distance between the record and each
defined cluster center.
3. Select the record with the largest minimum distance from the defined cluster centers. Add a new
cluster center with values of the input fields for the selected record.
4. Repeat steps 2 and 3 until k cluster centers have been added to the model.
Once initial cluster centers have been chosen, the algorithm begins the iterative assign/update
process.
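The maximin selection can be sketched as follows (illustrative Python; not SPSS Modeler source code):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def maximin_centers(records, k):
    """Choose k initial centers: start with the first record, then repeatedly
    add the record whose minimum distance to the chosen centers is largest."""
    centers = [records[0]]
    while len(centers) < k:
        best = max(records, key=lambda r: min(euclidean(r, c) for c in centers))
        centers.append(best)
    return centers

data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(maximin_centers(data, 2))  # [(0.0, 0.0), (1.0, 1.0)]
```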
Assigning Records to Clusters
In each iteration of the algorithm, each record is assigned to the cluster whose center is closest.
Closeness is measured by the usual squared Euclidean distance

d²(X_i, C_j) = Σ_{q=1}^{Q} (x_qi - c_qj)²

where X_i is the vector of encoded input fields for record i, C_j is the cluster center vector for
cluster j, Q is the number of encoded input fields, x_qi is the value of the qth encoded input field
for the ith record, and c_qj is the value of the qth encoded input field for the jth cluster center.
For each record, the distance between the record and each cluster center is calculated, and the
cluster center whose distance from the record is smallest is assigned as the record’s new cluster.
When all records have been assigned, the cluster centers are updated.
Updating Cluster Centers
After records have been (re)assigned to their closest clusters, the cluster centers are updated. The
cluster center is calculated as the mean vector of the records assigned to the cluster:

C_j = (c_1j, c_2j, ..., c_Qj)

where the components of the mean vector are calculated in the usual manner,

c_qj = ( Σ_{i=1}^{n_j} x_qi(j) ) / n_j

where n_j is the number of records in cluster j, and x_qi(j) is the qth encoded field value for
record i which is assigned to cluster j.
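Steps 2 and 3 of the procedure, assignment by squared Euclidean distance and recomputation of centers as mean vectors, can be sketched as (illustrative Python):

```python
def assign(records, centers):
    """Assign each record to the cluster with the nearest center (squared Euclidean)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centers)), key=lambda j: sqdist(r, centers[j]))
            for r in records]

def update(records, assignments, k):
    """Recompute each center as the mean vector of its assigned records."""
    centers = []
    for j in range(k):
        members = [r for r, a in zip(records, assignments) if a == j]
        centers.append(tuple(sum(col) / len(members) for col in zip(*members)))
    return centers

data = [(0.0, 0.0), (0.2, 0.0), (1.0, 1.0), (0.8, 1.0)]
labels = assign(data, [(0.0, 0.0), (1.0, 1.0)])
print(labels)                   # [0, 0, 1, 1]
print(update(data, labels, 2))  # centers near (0.1, 0.0) and (0.9, 1.0)
```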
Blank Handling
In k-means, blanks are handled by substituting “neutral” values for the missing ones. For range
and flag fields with missing values (blanks and nulls), the missing value is replaced with 0.5. For
set fields, the derived indicator field values are all set to 0.0.
Effect of Options
There are several options that affect the way the model calculations are carried out.
Maximum Iterations
The maximum iterations parameter controls how long the algorithm will continue searching
for a stable cluster solution. The algorithm will repeat the classify/update cycle no more than
the number of times specified. If and when this limit is reached, the algorithm terminates and
produces the current set of clusters as the final model.
Error Tolerance
The error tolerance parameter provides another means of controlling how long the algorithm will
continue searching for a stable cluster solution. The maximum change in cluster means for an
iteration t is calculated as

Δ(t) = max_j || C_j(t) - C_j(t-1) ||

where C_j(t) is the cluster center vector for the jth cluster at iteration t and C_j(t-1) is the cluster
center vector at the previous iteration. If the maximum change is less than the specified tolerance
for the current iteration, the algorithm terminates and produces the current set of clusters as
the final model.
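A minimal sketch of this convergence check (illustrative Python; the tolerance value 0.0001 in the example is only an assumed setting):

```python
import math

def max_center_change(old_centers, new_centers):
    """Maximum Euclidean distance moved by any cluster center between iterations."""
    return max(
        math.sqrt(sum((c1 - c0) ** 2 for c0, c1 in zip(old, new)))
        for old, new in zip(old_centers, new_centers)
    )

change = max_center_change([(0.0, 0.0), (0.9, 1.0)], [(0.3, 0.4), (0.9, 1.0)])
print(change)          # 0.5
print(change < 1e-4)   # False: keep iterating at this assumed tolerance
```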
Encoding Value for Sets
The encoding value for sets parameter controls the relative weighting of set fields in the k-means
algorithm. The default value of
provides an equal weighting between range fields
and set fields. To emphasize set fields more heavily, you can set the encoding value closer to 1.0;
to emphasize range fields more, set the encoding value closer to 0.0. For more information, see
the topic “Numeric Coding of Symbolic Fields” on p. 227.
Model Summary Statistics
Cluster proximities are calculated as the Euclidean distance between cluster centers,

d(C_j, C_k) = sqrt( Σ_{q=1}^{Q} (c_qj - c_qk)² )
Generated Model/Scoring
Generated k-means models provide predicted cluster memberships and distance from cluster
center for each record.
Predicted Cluster Membership
When assigning a new record with a predicted cluster membership, the Euclidean distance
between the record and each cluster center is calculated (in the same manner as for assigning
records during the model building phase), and the cluster center closest to the record is assigned as
the predicted cluster for the record.
Distances
The value of the distance field for each record, if requested, is calculated as the Euclidean
distance between the record and its assigned cluster center,

d(X_i, C_j) = sqrt( Σ_{q=1}^{Q} (x_qi - c_qj)² )
Blank Handling
In k-means, scoring records with a generated model handles blanks in the same way they are
handled during model building. For more information, see the topic “Blank Handling” on p. 230.
Kohonen Algorithms
Overview
Kohonen models (Kohonen, 2001) are a special kind of neural network model that performs
unsupervised learning. It takes the input vectors and performs a type of spatially organized
clustering, or feature mapping, to group similar records together and collapse the input space
to a two-dimensional space that approximates the multidimensional proximity relationships
between the clusters.
The Kohonen network model consists of two layers of neurons or units: an input layer and
an output layer. The input layer is fully connected to the output layer, and each connection has
an associated weight. Another way to think of the network structure is to think of each output
layer unit having an associated center, represented as a vector of inputs to which it most strongly
responds (where each element of the center vector is a weight from the output unit to the
corresponding input unit).
Primary Calculations
Field Encoding
Scaling of Range Fields
In most datasets, there’s a great deal of variability in the scale of range fields. For example,
consider age and number of cars per household. Depending on the population of interest, age
may take values up to 80 or even higher. Values for number of cars per household, however, are
unlikely to exceed three or four in the vast majority of cases.
If you use both of these fields in their natural scale as inputs for a model, the age field is
likely to be given much more weight in the model than number of cars per household, simply
because the values (and therefore the differences between records) for the former are so much
larger than for the latter.
To compensate for this effect of scale, range fields are transformed so that they all have the
same scale. In IBM® SPSS® Modeler, range fields are rescaled to have values between 0 and 1.
The transformation used is

x'_i = (x_i - x_min) / (x_max - x_min)

where x'_i is the rescaled value of input field x for record i, x_i is the original value of x for
record i, x_min is the minimum value of x for all records, and x_max is the maximum value of x
for all records.
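The rescaling transformation can be sketched as (illustrative Python):

```python
def rescale(values):
    """Rescale a range field to [0, 1]: x' = (x - xmin) / (xmax - xmin)."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

# Ages 20, 40, 80 map to 0.0, 0.333..., 1.0:
print(rescale([20.0, 40.0, 80.0]))
```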
Numeric Coding of Symbolic Fields
For modeling algorithms that base their calculations on numerical differences between records,
symbolic fields pose a special challenge. How do you calculate a numeric difference for two
categories?
A common approach to the problem, and the approach used in IBM® SPSS® Modeler, is to
recode a symbolic field as a group of numeric fields with one numeric field for each category or
value of the original field. For each record, the value of the derived field corresponding to the
category of the record is set to 1.0, and all the other derived field values are set to 0.0. Such
derived fields are sometimes called indicator fields, and this recoding is called indicator coding.
For example, consider the following data, where x is a symbolic field with possible values A,
B, and C:
Record #   x    x1'   x2'   x3'
1          B    0     1     0
2          A    1     0     0
3          C    0     0     1
In this data, the original set field x is recoded into three derived fields x1’, x2’, and x3’. x1’ is an
indicator for category A, x2’ is an indicator for category B, and x3’ is an indicator for category C.
Encoding of Flag Fields
Flag fields are a special case of symbolic fields. However, because they have only two values in
the set, they can be handled in a slightly more efficient way than other set fields. Flag fields are
represented by a single numeric field, taking the value of 1.0 for the “true” value and 0.0 for the
“false” value. Blanks for flag fields are assigned the value 0.5.
Model Parameters
In a Kohonen model, the parameters are represented as weights between input units and output
units, or alternately, as a cluster center associated with each output unit. Input records are
presented to the network, and the cluster centers are updated in a manner similar to that used in
building a k-means model, with an important difference: the clusters are arranged spatially in a
two-dimensional grid, and each record affects not only the unit (cluster) to which it is assigned
but also units within a neighborhood about the winning unit. For more information, see the
topic “Neighborhoods” on p. 235.
Training of the Kohonen network proceeds as follows:
E The network is initialized with small random weights.
E Input records are presented to the network in random order. As each record is presented, the
output unit with the closest center to the input vector is identified as the winning unit. For more
information, see the topic “Distances” on p. 235.
E The weights of the winning unit are adjusted to move the cluster center closer to the input vector.
For more information, see the topic “Weight Updates” on p. 235.
E If the neighborhood size is greater than zero, then other output units that are within the
neighborhood of the winning unit are also updated so their centers are closer to the input vector.
E At the end of each cycle, the learning rate parameter η (eta) is updated.
E This process repeats until one of the stopping criteria is met. Training proceeds in two phases,
a gross structure phase and a fine tuning phase. Typically the first phase has a relatively large
neighborhood size and large eta to learn the overall structure of the data, and the second phase
uses a smaller neighborhood and smaller eta to fine tune the cluster centers.
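The training loop described above can be sketched for a single phase as follows (illustrative Python; the initialization range and the parameter values used in the test are assumptions for the example, not SPSS Modeler defaults):

```python
import math
import random

def train_kohonen(records, grid_w, grid_h, eta, neighborhood, cycles, seed=0):
    """One-phase sketch of Kohonen training: find the winning unit by Euclidean
    distance and pull it (and its Chebychev neighborhood) toward each record."""
    rng = random.Random(seed)
    dim = len(records[0])
    # Small random initial weights (range here is an assumption for the sketch).
    weights = {(gx, gy): [rng.uniform(-0.05, 0.05) for _ in range(dim)]
               for gx in range(grid_w) for gy in range(grid_h)}
    for _ in range(cycles):
        order = records[:]
        rng.shuffle(order)  # records presented in random order
        for rec in order:
            # Winning unit: smallest (squared) Euclidean distance to the record.
            win = min(weights,
                      key=lambda u: sum((r - w) ** 2
                                        for r, w in zip(rec, weights[u])))
            for unit, w in weights.items():
                # Chebychev distance on the output grid defines the neighborhood.
                if max(abs(unit[0] - win[0]), abs(unit[1] - win[1])) <= neighborhood:
                    for k in range(dim):
                        w[k] += eta * (rec[k] - w[k])
    return weights
```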
Distances
Distances in a Kohonen network are calculated as the Euclidean distance between the encoded
input vector and the cluster center for the output unit,

d(X_i, W_j) = sqrt( Σ_k (x_ki - w_kj)² )

where x_ki is the value of the kth input field for the ith record, and w_kj is the weight for the kth
input field on the jth output unit.
The activation of an output unit is simply the Euclidean distance between the output unit’s
weight vector (its center) and the input vector. Note that for Kohonen networks, the output unit
with the lowest activation is the winning unit. This is in contrast to other types of neural networks,
where higher activation represents stronger response.
Neighborhoods
The neighborhood function is based on the Chebychev distance, which considers only the
maximum distance on any single dimension:

d_c(X, Y) = max_i | x_i - y_i |

where x_i is the location of unit X on dimension i of the output grid, and y_i is the location of
another unit Y on the same dimension.

An output unit Y is considered to be in the neighborhood of another output unit X if
d_c(X, Y) ≤ n, where n is the neighborhood size.

Neighborhood size remains constant during each phase, but different phases usually use
different neighborhood sizes. By default, n = 2 for Phase 1 and n = 1 for Phase 2.
Weight Updates
For the winning output node, and its neighbors if the neighborhood is > 0, the weights are
adjusted by adding a portion of the difference between the input vector and the current weight
vector. The magnitude of the change is determined by the learning rate parameter η (eta). The
weight change is calculated as

ΔW = η (I - W)

where W is the weight vector for the output unit being updated, I is the input vector, and η is the
learning rate parameter. In individual unit terms,

Δw_j = η (i_j - w_j)

where w_j is the weight corresponding to input unit j for the output unit being updated, and i_j is
the value of the jth input unit.
Eta Decay
At the end of each cycle, the value of η is updated. The value of η generally decreases across
training cycles. The user can control the rate of decrease by selecting either linear or exponential
decay.

Linear decay. This is the default decay rate. When this option is selected, the value of η decays in a
linear fashion, decreasing by a fixed amount each cycle, according to the formula

η_{t+1} = η_t - (η_init - η_low) / c

where η_init is the initial eta value for the current phase, and η_low is the low eta for the current
training phase, calculated as the lesser of the initial eta values for the current phase and the
following phase, and c is the number of cycles set for the current phase.

Exponential decay. When this option is selected, the value of η decays in an exponential fashion,
decreasing by a fixed proportion each cycle, according to the formula

η_{t+1} = η_t · exp( (ln(η_low) - ln(η_init)) / c )

The value of η_low has a minimum value of 0.0001 to prevent arithmetic errors in taking the
logarithm.
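Both decay schedules can be sketched as follows (illustrative Python; the exponential form shown is a reconstruction consistent with the fixed-proportion description and the logarithm note above):

```python
import math

def linear_decay(eta, eta_init, eta_low, cycles):
    """Linear decay: subtract a fixed amount each cycle, reaching
    eta_low after `cycles` cycles."""
    return eta - (eta_init - eta_low) / cycles

def exponential_decay(eta, eta_init, eta_low, cycles):
    """Exponential decay: multiply by a fixed proportion each cycle, reaching
    eta_low after `cycles` cycles. eta_low is floored at 0.0001 for the log."""
    eta_low = max(eta_low, 0.0001)
    return eta * math.exp((math.log(eta_low) - math.log(eta_init)) / cycles)

# Decaying from 0.3 to 0.1 over 10 cycles:
eta = 0.3
for _ in range(10):
    eta = linear_decay(eta, 0.3, 0.1, 10)
print(eta)  # approximately 0.1
```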
Blank Handling
In Kohonen networks, blanks are handled by substituting “neutral” values for the missing ones.
For range and flag fields with missing values (blanks and nulls), the missing value is replaced
with 0.5. For range fields, numeric values outside the range limits found in the field’s type
information are coerced to the type-defined range. For set fields, the derived indicator field
values are all set to 0.0.
Effect of Options
Stop on. By default, training executes the specified number of cycles for each phase. If the Time
option is selected, training stops when the elapsed time reaches the specified limit (or sooner if the
specified number of cycles for both phases is completed before the time limit is reached).
Random seed. Sets the seed for the random number generator used to initialize the weights of the
new network as well as the order of presentation for training records. Select a fixed seed value to
create a reproducible network.
Generated Model/Scoring
Cluster Membership
Cluster membership for a new record is derived by presenting the input vector for the record
to the network and identifying the output neuron with the closest weight vector, as described
in Distances above. The predicted value is returned as the x and y coordinates of the winning
neuron in the output grid.
Blank Handling
Blank handling for scoring is the same as during model building. For more information, see the
topic “Blank Handling” on p. 236.
Logistic Regression Algorithms
Logistic Regression Models
Logistic regression is a well-established statistical method for predicting binomial or multinomial
outcomes. IBM® SPSS® Modeler now offers two distinct algorithms for logistic regression
modeling:
Multinomial Logistic. This is the original logistic regression algorithm used in SPSS Modeler,
introduced in version 6.0. It can produce models when the target field is a set field with more
than two possible values. See below for more information. It can also produce models for flag or
binary outcomes, though it doesn’t give the same level of statistical detail for such models as the
newer binomial logistic algorithm.
Binomial Logistic. This algorithm, introduced in SPSS Modeler 11, is limited to models where the
target field is a flag, or binary field. This algorithm provides some enhanced statistical output,
relative to the output of the multinomial algorithm, and is less susceptible to problems when the
number of cells (unique combinations of predictor values) is large relative to the number of
records. For more information, see the topic “Binomial Logistic Regression” on p. 251.
For models with a flag output field, selection of a logistic algorithm is controlled in the modeling
node by the Procedure option.
Multinomial Logistic Regression
The purpose of the Multinomial Logistic Regression procedure is to model the dependence of a
nominal (symbolic) output field on a set of symbolic and/or numeric predictor (input) fields.
Primary Calculations
Field Encoding
In logistic regression, each symbolic (set) field is recoded as a group of numeric fields, with one
numeric field for each category or value of the original field, except the last category, which is
defined as a reference category. For each record, the value of the derived field corresponding to
the category of the record is set to 1.0, and all of the other derived field values are set to 0.0. For
records which have the value of the reference category, all derived fields are set to 0.0. Such
derived fields are sometimes called dummy fields, and this recoding is called dummy coding.
For example, consider the following data, where x is a symbolic field with possible values A,
B, and C:
Record #   x    x1'   x2'
1          B    0     1
2          A    1     0
3          C    0     0
In this data, the original set field x is recoded into two derived fields x1’ and x2’. x1’ is an
indicator for category A, and x2’ is an indicator for category B. The last category, category C, is
the reference category; records belonging to this category have both x1’ and x2’ set to 0.0.
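Dummy coding differs from indicator coding only in dropping the reference category. A sketch (illustrative Python; the function name is ours):

```python
def dummy_code(values, categories):
    """Recode a symbolic field with one dummy field per category except the
    last, which serves as the reference category (all dummies 0.0)."""
    return [[1.0 if v == c else 0.0 for c in categories[:-1]] for v in values]

# The three records from the example above (x = B, A, C; C is the reference):
print(dummy_code(["B", "A", "C"], ["A", "B", "C"]))
# [[0.0, 1.0], [1.0, 0.0], [0.0, 0.0]]
```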
Notation
The following notation is used throughout this chapter unless otherwise stated:
y: The output field, which takes integer values from 1 to J.
J: The number of categories of the output field.
m: The number of subpopulations.
x: m × p matrix with vector-element x_i, the observed values at the ith subpopulation, determined by the input fields specified in the command.
x_i: The vector of observed values of the location model's input fields at the ith subpopulation.
f_ij: The sum of frequency weights of the observations that belong to the cell corresponding to y = j at subpopulation i.
N: The sum of all f_ij's.
π_ij: The cell probability corresponding to y = j at subpopulation i.
g_ijk: The logit of response category j relative to response category k, log(π_ij / π_ik).
B_j: p × 1 vector of unknown parameters in the jth logit (that is, logit of response category j to response category J).
p: Number of parameters in each logit.
n_j: Number of non-redundant parameters in logit j after maximum likelihood estimation.
n: The total number of non-redundant parameters after maximum likelihood estimation, n = Σ_{j=1}^{J-1} n_j.
B: (J - 1)p × 1 vector of unknown parameters in the model, B = (B_1', ..., B_{J-1}')'.
B̂: The maximum likelihood estimate of B.
π̂_ij: The maximum likelihood estimate of π_ij.
Data Aggregation
Observations are aggregated by the definition of subpopulations. Subpopulations are defined by
the cross-classifications of the set of input fields.
Let n_i be the marginal count of subpopulation i,

n_i = Σ_{j=1}^{J} f_ij

If there is no observation for the cell of y = j at subpopulation i, it is assumed that f_ij = 0,
provided that n_i > 0. A non-negative scalar δ may be added to any zero cell (that is, a cell
with f_ij = 0) if its marginal count n_i is nonzero. The value of δ is zero by default.
Generalized Logit Model
In a generalized logit model, the probability π_ij of response category j at subpopulation i is

π_ij = exp(x_i' B_j) / ( 1 + Σ_{k=1}^{J-1} exp(x_i' B_k) )

where the last category J is assumed to be the reference category.

In terms of logits, the model can be expressed as

log(π_ij / π_iJ) = x_i' B_j

for j = 1, ..., J-1.

When J = 2, this model is equivalent to the binary logistic regression model. Thus, the above
model can be thought of as an extension of the binary logistic regression model from binary
response to polytomous nominal response.
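Given the J-1 logits for a subpopulation, the category probabilities can be recovered as in this sketch (illustrative Python):

```python
import math

def generalized_logit_probs(logits):
    """Given the J-1 logits g_j = log(pi_j / pi_J) for one subpopulation,
    recover the J category probabilities (reference category J comes last)."""
    denom = 1.0 + sum(math.exp(g) for g in logits)
    probs = [math.exp(g) / denom for g in logits]
    probs.append(1.0 / denom)  # reference category
    return probs

# With both logits equal to 0, all three categories are equally likely:
print(generalized_logit_probs([0.0, 0.0]))
```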
Log-Likelihood
The log-likelihood of the model is given by

l = Σ_{i=1}^{m} Σ_{j=1}^{J} f_ij log(π_ij)

A constant that is independent of parameters has been excluded here. The value of the constant
is

c = Σ_{i=1}^{m} ( log(n_i!) - Σ_{j=1}^{J} log(f_ij!) )
Model Parameters
Derivatives of the Log-Likelihood
For any j = 1, …, J-1, s = 1, …, p, the first derivative of l with respect to β_js is

∂l/∂β_js = Σ_{i=1}^{m} (f_ij - n_i π_ij) x_is

For any j, j' = 1, …, J-1 and s, t = 1, …, p, the second derivative of l with respect to β_js and
β_j't is

∂²l/∂β_js ∂β_j't = - Σ_{i=1}^{m} n_i π_ij (δ_jj' - π_ij') x_is x_it

where δ_jj' = 1 if j = j', 0 otherwise.
Maximum Likelihood Estimate
To obtain the maximum likelihood estimate of B, a Newton-Raphson iterative estimation method
is used. Notice that this method is the same as the Fisher-Scoring iterative estimation method in
this model, since the expectation of the second derivative of l with respect to B is the same
as the observed one.

Let g(B) be the (J-1)p × 1 vector of the first derivative of l with respect to B. Moreover, let
H(B) be the (J-1)p × (J-1)p matrix of the second derivative of l with respect to B. Notice that
H(B) can be written as

H(B) = - X*' W(B) X*

where X* is the design matrix of independent vectors, and W(B) is a block weight matrix built
from the marginal counts n_i and diagonal matrices of the cell probabilities π_ij, both evaluated
at B. Let B(v) be the parameter estimate at iteration v; the parameter estimate B(v+1) at
iteration v+1 is updated as

B(v+1) = B(v) - ξ H(B(v))^{-1} g(B(v))

where ξ is a stepping scalar such that l(B(v+1)) - l(B(v)) ≥ 0.

Stepping

Use the step-halving method if l(B(v+1)) < l(B(v)). Let V be the maximum number of steps in
step-halving; the set of values of ξ is { 1/2^v : v = 0, 1, …, V-1 }.
Starting Values of the Parameters
If intercepts are included in the model, set the initial intercept for logit j to

β_j0^(0) = log( F_j / F_J ),  where F_j = Σ_{i=1}^{m} f_ij

and set all other initial parameters to 0, for j = 1, …, J-1.

If intercepts are not included in the model, set β_j^(0) = 0 for j = 1, …, J-1.
Convergence Criteria
Given two convergence criteria ε1 > 0 and ε2 > 0, the iteration is considered to be converged
if one of the following criteria is satisfied:

1. | l(B(v+1)) - l(B(v)) | < ε1.

2. max_s | B_s(v+1) - B_s(v) | < ε2.

3. The maximum absolute element in g(B(v+1)) is less than min(ε1, ε2).
Checking for Separation
The algorithm checks for separation in the data starting with a given iteration (20 by default). To
check for separation:

1. For each subpopulation i, find the response category j*(i) with the largest estimated probability π̂_ij.

2. If f_i j*(i) = n_i, then there is a perfect prediction for subpopulation i.

3. If all subpopulations have perfect prediction, then there is complete separation. If some patterns
have perfect prediction and the Hessian of B̂ is singular, then there is quasi-complete separation.
Blank Handling
All records with missing values for any input or output field are excluded from the estimation of
the model.
Secondary Calculations
Model Summary Statistics
Log-Likelihood
Initial model with intercepts. If intercepts are included in the model, the predicted probability for
the initial model (that is, the model with intercepts only) is

π_ij^(0) = F_j / N,  where F_j = Σ_{i=1}^{m} f_ij

and the value of -2 log-likelihood of the initial model is

-2 l(0) = -2 Σ_{i=1}^{m} Σ_{j=1}^{J} f_ij log(π_ij^(0))

Initial model with no intercepts. If intercepts are not included in the model, the predicted
probability for the initial model is

π_ij^(0) = 1 / J

and the value of -2 log-likelihood of the initial model is

-2 l(0) = 2 N log(J)

Final model. The value of -2 log-likelihood of the final model is

-2 l(B̂) = -2 Σ_{i=1}^{m} Σ_{j=1}^{J} f_ij log(π̂_ij)

Model Chi-Square

The model chi-square is given by

χ² = ( -2 l(0) ) - ( -2 l(B̂) )

If the final model includes intercepts, then the initial model is an intercept-only model. Under
the null hypothesis that all non-intercept parameters are 0, the model chi-square is asymptotically
chi-squared distributed with n - (J - 1) degrees of freedom.

If the model does not include intercepts, then the initial model is an empty model. Under the
null hypothesis that B = 0, the model chi-square is asymptotically chi-squared distributed
with n degrees of freedom.
Pseudo R-Square Measures
Cox and Snell. Cox and Snell's R² is calculated as

R²_CS = 1 - ( L(0) / L(B̂) )^{2/N}

where L(0) and L(B̂) are the likelihoods of the initial and final models.

Nagelkerke. Nagelkerke's R² is calculated as

R²_N = R²_CS / ( 1 - L(0)^{2/N} )

McFadden. McFadden's R² is calculated as

R²_M = 1 - l(B̂) / l(0)

where l(·) denotes the log-likelihood.
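Assuming the standard definitions in terms of the initial and final log-likelihoods (ll0 and llf) and the total count N, the three measures can be sketched as (illustrative Python):

```python
import math

def cox_snell(ll0, llf, n):
    """Cox and Snell R^2 = 1 - (L(0)/L(final))^(2/n), via log-likelihoods."""
    return 1.0 - math.exp(2.0 * (ll0 - llf) / n)

def nagelkerke(ll0, llf, n):
    """Nagelkerke R^2 rescales Cox and Snell so the maximum attainable is 1."""
    return cox_snell(ll0, llf, n) / (1.0 - math.exp(2.0 * ll0 / n))

def mcfadden(ll0, llf):
    """McFadden R^2 = 1 - llf/ll0."""
    return 1.0 - llf / ll0

# Example log-likelihoods (assumed values for illustration):
print(cox_snell(-100.0, -80.0, 100))
print(nagelkerke(-100.0, -80.0, 100))
print(mcfadden(-100.0, -80.0))
```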
Goodness-of-Fit Measures
Pearson. The Pearson goodness-of-fit measure is

X² = Σ_{i=1}^{m} Σ_{j=1}^{J} ( f_ij - n_i π̂_ij )² / ( n_i π̂_ij )

Under the null hypothesis, the Pearson goodness-of-fit statistic is asymptotically chi-squared
distributed with m(J - 1) - n degrees of freedom.

Deviance. The deviance goodness-of-fit measure is

D = 2 Σ_{i=1}^{m} Σ_{j=1}^{J} f_ij log( f_ij / (n_i π̂_ij) )

Under the null hypothesis, the deviance goodness-of-fit statistic is asymptotically chi-squared
distributed with m(J - 1) - n degrees of freedom.
Field Statistics and Other Calculations
The statistics shown in the advanced output for the logistic equation node are calculated in the
same manner as in the NOMREG procedure in IBM® SPSS® Statistics. For more details, see the
SPSS Statistics Nomreg algorithm document, available at http://www.ibm.com/support.
Stepwise Variable Selection
Several methods are available for selecting independent variables. With the forced entry method,
any variable in the variable list is entered into the model. The forward stepwise, backward
stepwise, and backward entry methods use either the Wald statistic or the likelihood ratio statistic
for variable removal. The forward stepwise, forward entry, and backward stepwise use the score
statistic or the likelihood ratio statistic to select variables for entry into the model.
Forward Stepwise (FSTEP)
1. Estimate the parameter and likelihood function for the initial model and let it be our current model.
2. Based on the MLEs of the current model, calculate the score statistic or likelihood ratio statistic
for every variable eligible for inclusion and find its significance.
3. Choose the variable with the smallest significance (p-value). If that significance is less than the
probability for a variable to enter, then go to step 4; otherwise, stop FSTEP.
4. Update the current model by adding a new variable. If this results in a model which has already
been evaluated, stop FSTEP.
5. Calculate the significance for each variable in the current model using LR or Wald’s test.
6. Choose the variable with the largest significance. If its significance is less than the probability for
variable removal, then go back to step 2. If the current model with the variable deleted is the same
as a previous model, stop FSTEP; otherwise go to the next step.
7. Modify the current model by removing the variable with the largest significance from the previous
model. Estimate the parameters for the modified model and go back to step 5.
Forward Only (FORWARD)
1. Estimate the parameter and likelihood function for the initial model and let it be our current model.
2. Based on the MLEs of the current model, calculate the score or LR statistic for every variable
eligible for inclusion and find its significance.
3. Choose the variable with the smallest significance. If that significance is less than the probability
for a variable to enter, then go to step 4; otherwise, stop FORWARD.
4. Update the current model by adding a new variable. If there are no more eligible variables left, stop
FORWARD; otherwise, go to step 2.
Backward Stepwise (BSTEP)
1. Estimate the parameters for the full model that includes the final model from previous method and
all eligible variables. Only variables listed on the BSTEP variable list are eligible for entry and
removal. Let current model be the full model.
2. Based on the MLEs of the current model, calculate the LR or Wald’s statistic for every variable
in the BSTEP list and find its significance.
3. Choose the variable with the largest significance. If that significance is less than the probability
for a variable removal, then go to step 5. If the current model without the variable with the largest
significance is the same as the previous model, stop BSTEP; otherwise go to the next step.
4. Modify the current model by removing the variable with the largest significance from the model.
Estimate the parameters for the modified model and go back to step 2.
5. Check to see whether any eligible variable is not in the model. If there is none, stop BSTEP; otherwise,
go to the next step.
6. Based on the MLEs of the current model, calculate LR statistic or score statistic for every variable
not in the model and find its significance.
7. Choose the variable with the smallest significance. If that significance is less than the probability
for the variable entry, then go to the next step; otherwise, stop BSTEP.
8. Add the variable with the smallest significance to the current model. If the model is not the
same as any previous models, estimate the parameters for the new model and go back to step
2; otherwise, stop BSTEP.
Backward Only (BACKWARD)
1. Estimate the parameters for the full model that includes all eligible variables. Let the current
model be the full model.
2. Based on the MLEs of the current model, calculate the LR or Wald’s statistic for all variables
eligible for removal and find its significance.
3. Choose the variable with the largest significance. If that significance is less than the probability
for a variable removal, then stop BACKWARD; otherwise, go to the next step.
4. Modify the current model by removing the variable with the largest significance from the model.
Estimate the parameters for the modified model. If all the variables in the BACKWARD list are
removed then stop BACKWARD; otherwise, go back to step 2.
Stepwise Statistics
The statistics used in the stepwise variable selection methods are defined as follows.
Score Function and Information Matrix
The score function for a model with parameter B is:

U(B) = ∂l/∂B

The (j,s)th element of the score function can be written as

U_(js) = Σ_{i=1}^{m} ( f_ij - n_i π_ij ) x_is

Similarly, elements of the information matrix are given by

I_(js),(kt) = Σ_{i=1}^{m} n_i π_ij ( δ_jk - π_ik ) x_is x_it

where δ_jk = 1 if j = k, 0 otherwise. (Note that the π_ij in the formula are functions of B.)
Block Notations
By partitioning the parameter B into two parts, B1 and B2, the score function, information matrix,
and inverse information matrix can be written as partitioned matrices:

U(B) = [ U1(B) ; U2(B) ]

I(B) = [ I11  I12 ; I21  I22 ]

J(B) = I(B)^{-1} = [ J11  J12 ; J21  J22 ]

where U1 and U2 are the derivatives of l with respect to B1 and B2, and the partitions of I and J
are indexed accordingly. Typically, B1 and B2 are parameters corresponding to two different sets
of effects. The dimensions of the 1st and 2nd partition in U, I and J are equal to the numbers of
parameters in B1 and B2 respectively.
Score Test
Suppose a base model with parameter vector B_base and corresponding maximum likelihood
estimate B̂_base. We are interested in testing the significance of an extra effect E if it is added to the
base model. For convenience, we will call the model with effect E the augmented model. Let B_E
be the vector of extra parameters associated with the effect E; then the hypothesis can be
written as

H0: B_E = 0  vs.  H1: B_E ≠ 0

Using the block notations, the score function, information matrix and inverse information of the
augmented model can be written as

U(B) = [ U1 ; U2 ],  I(B) = [ I11  I12 ; I21  I22 ],  J(B) = [ J11  J12 ; J21  J22 ]

with the first partition corresponding to the base-model parameters and the second to B_E.
Then the score statistic for testing our hypothesis will be

s = U2' J22 U2

where U2 and J22 are the 2nd partition of the score function and inverse
information matrix evaluated at B_base = B̂_base and B_E = 0.

Under the null hypothesis, the score statistic s has a chi-square distribution with degrees of
freedom equal to the rank of J22. If the rank of J22 is zero, then the score
statistic will be set to 0 and the p-value will be 1. Otherwise, if the rank of J22 is
r, then the p-value of the test is equal to 1 - F(s; r), where F(·; r) is the cumulative
distribution function of a chi-square distribution with r degrees of freedom.
Computational Formula for Score Statistic
When we compute the score statistic s, it is not necessary to re-compute U(B̂_base)
and I(B̂_base) from scratch. The score function and information matrix of the base model can be
reused in the calculation. Using the block notations introduced earlier, we have

U2 = U2(B̂_base, 0)

J22 = ( I22 - I21 I11^{-1} I12 )^{-1}

In stepwise logistic regression, it is necessary to compute one score test for each effect that is not
in the base model. Since the 1st partition of U and I depends only on the
base model, we only need to compute U2, I12 (= I21'), and I22 for
each new effect.

If β_js is the s-th parameter of the j-th logit in B_E, and β_kt is the t-th parameter of the k-th logit,
then the elements of U2, I12 and I22 can be expressed as follows:

U2,(js) = Σ_{i=1}^{m} ( f_ij - n_i π_ij ) x_is

I_(js),(kt) = Σ_{i=1}^{m} n_i π_ij ( δ_jk - π_ik ) x_is x_it

where the π_ij and n_i are computed under the base model.
Wald’s Test
In backward stepwise selection, we are interested in removing an effect F from an already fitted model. For a given base model with parameter vector \(\boldsymbol{\beta}\), we want to use Wald’s statistic to test whether effect F should be removed from the base model. If the parameter vector for the effect F is \(\boldsymbol{\beta}_F\), then the hypothesis can be formulated as \(H_0\colon \boldsymbol{\beta}_F = \mathbf{0}\) vs. \(H_1\colon \boldsymbol{\beta}_F \neq \mathbf{0}\).
In order to write down the expression of Wald’s statistic, we will partition our parameter vector (and its estimate) into two parts as follows:
\[ \boldsymbol{\beta} = \left( \boldsymbol{\beta}_1^T, \boldsymbol{\beta}_2^T \right)^T \quad\text{and}\quad \hat{\boldsymbol{\beta}} = \left( \hat{\boldsymbol{\beta}}_1^T, \hat{\boldsymbol{\beta}}_2^T \right)^T \]
The first partition contains the parameters we intend to keep in the model and the 2nd partition contains the parameters of the effect F, which may be removed from the model. The information matrix and inverse information will be partitioned accordingly,
\[ I(\hat{\boldsymbol{\beta}}) = \begin{bmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{bmatrix} \quad\text{and}\quad I^{-1}(\hat{\boldsymbol{\beta}}) = \begin{bmatrix} I^{11} & I^{12} \\ I^{21} & I^{22} \end{bmatrix} \]
Using the above notations, Wald’s statistic for effect F can be expressed as
\[ w = \hat{\boldsymbol{\beta}}_2^T \left( I^{22} \right)^{-1} \hat{\boldsymbol{\beta}}_2 \]
Under the null hypothesis, w has a chi-square distribution with degrees of freedom equal to the rank of \(I^{22}\). If the rank of \(I^{22}\) is zero, then Wald’s statistic will be set to 0 and the p-value will be 1. Otherwise, if the rank of \(I^{22}\) is \(r\), then
the p-value of the test is equal to \(1 - F(w; r)\), where \(F(\cdot\,; r)\) is the cumulative distribution function of a chi-square distribution with \(r\) degrees of freedom.
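The quadratic form above is straightforward to evaluate once the covariance block for the effect is available. The sketch below computes w for a two-parameter effect; the function name is illustrative, and the hand-coded 2×2 inversion stands in for a general matrix inverse.

```python
def wald_statistic(beta2, cov22):
    """Wald statistic w = beta2' * inv(cov22) * beta2 for a 2-parameter
    effect F; cov22 is the 2x2 covariance block I^{22} of the estimates.
    A minimal sketch -- a real implementation inverts a general block."""
    (a, b), (c, d) = cov22
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    b1, b2 = beta2
    return (b1 * (inv[0][0] * b1 + inv[0][1] * b2)
            + b2 * (inv[1][0] * b1 + inv[1][1] * b2))

w = wald_statistic([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]])
```

With an identity covariance block, the statistic reduces to the squared length of \(\hat{\boldsymbol{\beta}}_2\).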
Generated Model/Scoring
Predicted Values
The predicted value for a record i is the output field category j with the largest logit value \(g_j(\mathbf{x}_i)\), for j = 1, ..., J−1. The logit for the reference category J, \(g_J(\mathbf{x}_i)\), is 1.0.
Predicted Probability
The probability for the predicted category \(c\) for scored record i is derived from the logit for category \(c\),
\[ \hat{\pi}_{ic} = \frac{e^{g_c(\mathbf{x}_i)}}{\sum_{j=1}^{J} e^{g_j(\mathbf{x}_i)}} \]
If the Append all probabilities option is selected, the probability is calculated for all J categories in a similar manner.
Blank Handling
Records with missing values for any input field cannot be scored and are assigned a predicted
value and probability value(s) of $null$.
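A minimal sketch of this scoring rule, assuming the standard multinomial-logit convention that the reference category contributes \(e^0 = 1\) to the denominator; the function name is ours, not Modeler’s.

```python
import math

def score_multinomial(logits):
    """Given the J-1 logits g_j(x_i) for a record, return the predicted
    category index (0-based; index J-1 is the reference category) and
    the predicted probabilities for all J categories."""
    exps = [math.exp(g) for g in logits] + [1.0]  # reference category
    total = sum(exps)
    probs = [e / total for e in exps]
    # Predicted value: the category with the largest probability
    predicted = max(range(len(probs)), key=lambda j: probs[j])
    return predicted, probs

pred, probs = score_multinomial([1.0, 0.5])
```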
Binomial Logistic Regression
For binomial models (models with a flag field as the target), IBM® SPSS® Modeler uses an
algorithm optimized for such models, as described here.
Notation
The following notation is used throughout this chapter unless otherwise stated:
n  The number of observed cases
p  The number of parameters
y  \(n \times 1\) vector with element \(y_i\), the observed value of the ith case of the dichotomous dependent variable
X  \(n \times p\) matrix with element \(x_{ij}\), the observed value of the ith case of the jth parameter
\(\boldsymbol{\beta}\)  \(p \times 1\) vector with element \(\beta_j\), the coefficient for the jth parameter
w  \(n \times 1\) vector with element \(w_i\), the weight for the ith case
l  Likelihood function
L  Log-likelihood function
I  Information matrix
Model
The linear logistic model assumes a dichotomous dependent variable Y with probability π, where for the ith case,
\[ \pi_i = \Pr\left( Y_i = 1 \mid \mathbf{x}_i \right) = \frac{e^{\mathbf{x}_i^T \boldsymbol{\beta}}}{1 + e^{\mathbf{x}_i^T \boldsymbol{\beta}}} \]
or, equivalently,
\[ \ln\left( \frac{\pi_i}{1 - \pi_i} \right) = \mathbf{x}_i^T \boldsymbol{\beta} \]
Hence, the likelihood function l for n observations \(y_1, \ldots, y_n\), with probabilities \(\pi_1, \ldots, \pi_n\) and case weights \(w_1, \ldots, w_n\), can be written as
\[ l = \prod_{i=1}^{n} \pi_i^{w_i y_i} \left( 1 - \pi_i \right)^{w_i (1 - y_i)} \]
It follows that the logarithm of l is
\[ L = \ln l = \sum_{i=1}^{n} w_i \left[ y_i \ln \pi_i + (1 - y_i) \ln\left( 1 - \pi_i \right) \right] \]
and the derivative of L with respect to \(\beta_j\) is
\[ \frac{\partial L}{\partial \beta_j} = \sum_{i=1}^{n} w_i \left( y_i - \pi_i \right) x_{ij} \]
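The log-likelihood and its gradient above translate directly into code. The following sketch evaluates both for a given coefficient vector; the function name and list-based layout are our own illustration.

```python
import math

def log_likelihood_and_gradient(y, X, w, beta):
    """Weighted binomial log-likelihood L and its gradient dL/dbeta_j
    for the linear logistic model; a sketch of the formulas above."""
    p = len(beta)
    L = 0.0
    grad = [0.0] * p
    for yi, xi, wi in zip(y, X, w):
        eta = sum(b * v for b, v in zip(beta, xi))
        pi = 1.0 / (1.0 + math.exp(-eta))
        L += wi * (yi * math.log(pi) + (1 - yi) * math.log(1 - pi))
        for j in range(p):
            grad[j] += wi * (yi - pi) * xi[j]
    return L, grad

# Four unweighted cases, intercept-only design, beta = 0
L0, g0 = log_likelihood_and_gradient([1, 1, 0, 0],
                                     [[1.0]] * 4, [1.0] * 4, [0.0])
```

At \(\boldsymbol{\beta} = \mathbf{0}\) every fitted probability is 0.5, so L reduces to \(W \ln 0.5\) and the intercept gradient is the sum of \(y_i - 0.5\).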
Maximum Likelihood Estimates (MLE)
The maximum likelihood estimates for \(\boldsymbol{\beta}\) satisfy the following equations
\[ \sum_{i=1}^{n} w_i \left( y_i - \hat{\pi}_i \right) x_{ij} = 0, \quad \text{for the jth parameter} \]
where \(\hat{\pi}_i = \dfrac{e^{\mathbf{x}_i^T \hat{\boldsymbol{\beta}}}}{1 + e^{\mathbf{x}_i^T \hat{\boldsymbol{\beta}}}}\) for \(i = 1, \ldots, n\).
Note the following:
1. A Newton-Raphson type algorithm is used to obtain the MLEs. Convergence can be based on
•  Absolute difference for the parameter estimates between the iterations
•  Percent difference in the log-likelihood function between successive iterations
•  Maximum number of iterations specified
2. During the iterations, if \(\hat{\pi}_i (1 - \hat{\pi}_i)\) is smaller than \(10^{-8}\) for all cases, the log-likelihood function is very close to zero. In this situation, iteration stops and the message “All predicted values are either 1 or 0” is issued.
After the maximum likelihood estimates \(\hat{\boldsymbol{\beta}}\) are obtained, the asymptotic covariance matrix is estimated by \(I^{-1}\), the inverse of the information matrix I, where
\[ I = X^T \hat{V} X \quad\text{and}\quad \hat{V} = \mathrm{diag}\left\{ w_i \hat{\pi}_i \left( 1 - \hat{\pi}_i \right) \right\} \]
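The Newton-Raphson iteration is easiest to see in the simplest possible case. The sketch below fits an intercept-only logistic model, using the absolute-difference convergence criterion from the list above; it is an illustration under that simplifying assumption, not the multi-parameter product algorithm.

```python
import math

def fit_intercept_only(y, w=None, tol=1e-10, max_iter=25):
    """Newton-Raphson iterations for an intercept-only logistic model.
    Returns the MLE and its asymptotic variance (the inverse of the
    1x1 information matrix)."""
    n = len(y)
    w = w or [1.0] * n
    beta = 0.0
    for _ in range(max_iter):
        pi = 1.0 / (1.0 + math.exp(-beta))
        score = sum(wi * (yi - pi) for wi, yi in zip(w, y))
        info = sum(w) * pi * (1.0 - pi)
        step = score / info          # Newton step: I^-1 * score
        beta += step
        if abs(step) < tol:          # absolute-difference criterion
            break
    pi = 1.0 / (1.0 + math.exp(-beta))
    return beta, 1.0 / (sum(w) * pi * (1.0 - pi))

# Three events in four cases: the MLE is ln(3/1)
beta_hat, var_hat = fit_intercept_only([1, 1, 1, 0])
```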
Stepwise Variable Selection
Several methods are available for selecting independent variables. With the forced entry method, all variables in the variable list are entered into the model. There are two stepwise methods:
forward and backward. The stepwise methods can use either the Wald statistic, the likelihood
ratio, or a conditional algorithm for variable removal. For both stepwise methods, the score
statistic is used to select variables for entry into the model.
Forward Stepwise (FSTEP)
1. If FSTEP is the first method requested, estimate the parameter and likelihood function for the
initial model. Otherwise, the final model from the previous method is the initial model for FSTEP.
Obtain the necessary information: MLEs of the parameters for the current model, predicted
probability, likelihood function for the current model, and so on.
2. Based on the MLEs of the current model, calculate the score statistic for every variable eligible for
inclusion and find its significance.
3. Choose the variable with the smallest significance. If that significance is less than the probability
for a variable to enter, then go to step 4; otherwise, stop FSTEP.
4. Update the current model by adding a new variable. If this results in a model which has already
been evaluated, stop FSTEP.
5. Calculate LR or Wald statistic or conditional statistic for each variable in the current model.
Then calculate its corresponding significance.
6. Choose the variable with the largest significance. If that significance is less than the probability
for variable removal, then go back to step 2; otherwise, if the current model with the variable
deleted is the same as a previous model, stop FSTEP; otherwise, go to the next step.
7. Modify the current model by removing the variable with the largest significance from the previous
model. Estimate the parameters for the modified model and go back to step 5.
Backward Stepwise (BSTEP)
1. Estimate the parameters for the full model which includes the final model from previous method
and all eligible variables. Only variables listed on the BSTEP variable list are eligible for entry
and removal. Let the current model be the full model.
2. Based on the MLEs of the current model, calculate the LR or Wald statistic or conditional statistic
for every variable in the model and find its significance.
3. Choose the variable with the largest significance. If that significance is less than the probability for
a variable removal, then go to step 5; otherwise, if the current model without the variable with the
largest significance is the same as the previous model, stop BSTEP; otherwise, go to the next step.
4. Modify the current model by removing the variable with the largest significance from the model.
Estimate the parameters for the modified model and go back to step 2.
5. Check whether any eligible variables are not in the model. If there are none, stop BSTEP; otherwise, go to the next step.
6. Based on the MLEs of the current model, calculate the score statistic for every variable not in
the model and find its significance.
7. Choose the variable with the smallest significance. If that significance is less than the probability
for variable entry, then go to the next step; otherwise, stop BSTEP.
8. Add the variable with the smallest significance to the current model. If the model is not the
same as any previous models, estimate the parameters for the new model and go back to step
2; otherwise, stop BSTEP.
Stepwise Statistics
The statistics used in the stepwise variable selection methods are defined as follows.
Score Statistic
The score statistic is calculated for each variable not in the model to determine whether the variable should enter the model. Assume that there are \(r_1\) variables in the model, namely \(x_1, \ldots, x_{r_1}\), and \(r_2\) variables not in the model, namely \(x_{r_1+1}, \ldots, x_{r_1+r_2}\). The score statistic for a variable \(x_l\) not in the model is defined as
\[ S_l = \frac{\left( L'_l \right)^2}{B_{22}} \]
if \(x_l\) is not a categorical variable. If \(x_l\) is a categorical variable with m categories, it is converted to a \((m-1)\)-dimension dummy vector. Denote these new \(m-1\) variables as \(x_{r_1+1}, \ldots, x_{r_1+m-1}\). The score statistic for \(x_l\) is then
\[ S_l = \mathbf{L}'^{T} B_{22}^{-1} \mathbf{L}' \]
where \(\mathbf{L}' = X^{*T} \left( \mathbf{y} - \hat{\boldsymbol{\pi}} \right)\) and the \((m-1) \times (m-1)\) matrix \(B_{22}\) is
\[ B_{22} = X^{*T} V X^{*} - X^{*T} V \tilde{X} \left( \tilde{X}^{T} V \tilde{X} \right)^{-1} \tilde{X}^{T} V X^{*} \]
in which \(\tilde{X}\) is the design matrix for the variables in the model and \(X^{*}\) is the design matrix for the dummy variables \(x_{r_1+1}, \ldots, x_{r_1+m-1}\). Note that \(\tilde{X}\) contains a column of ones unless the constant term is excluded from \(\boldsymbol{\beta}\). Based on the MLEs for the parameters in the model, V is estimated by \(\hat{V} = \mathrm{diag}\left\{ w_i \hat{\pi}_i \left( 1 - \hat{\pi}_i \right) \right\}\). The asymptotic distribution of the score statistic is chi-square with degrees of freedom equal to the number of variables involved.
Note the following:
1. If the model is through the origin and there are no variables in the model, \(B_{22}\) is defined by \(X^{*T} V X^{*}\) and V is equal to \(\frac{1}{4}\mathrm{diag}\left\{ w_i \right\}\).
2. If \(B_{22}\) is not positive definite, the score statistic and residual chi-square statistic are set to be zero.
Wald Statistic
The Wald statistic is calculated for the variables in the model to determine whether a variable should be removed. If the ith variable is not categorical, the Wald statistic is defined by
\[ W_i = \frac{\hat{\beta}_i^2}{\widehat{\mathrm{Var}}\left( \hat{\beta}_i \right)} \]
If it is a categorical variable, the Wald statistic is computed as follows. Let \(\hat{\boldsymbol{\beta}}\) be the vector of maximum likelihood estimates associated with the \(m-1\) dummy variables, and \(\hat{C}\) the asymptotic covariance matrix for \(\hat{\boldsymbol{\beta}}\). The Wald statistic is
\[ W = \hat{\boldsymbol{\beta}}^T \hat{C}^{-1} \hat{\boldsymbol{\beta}} \]
The asymptotic distribution of the Wald statistic is chi-square with degrees of freedom equal to the number of parameters estimated.
Likelihood Ratio (LR) Statistic
The LR statistic is defined as two times the log of the ratio of the likelihood functions of two
models evaluated at their MLEs. The LR statistic is used to determine if a variable should
be removed from the model. Assume that there are \(r\) variables in the current model, which is referred to as a full model. Based on the MLEs of the full model, l(full) is calculated. For each of the variables removed from the full model one at a time, MLEs are computed and the likelihood function l(reduced) is calculated. The LR statistic is then defined as
\[ \mathrm{LR} = -2 \ln\left( \frac{l(\text{reduced})}{l(\text{full})} \right) = -2 \left[ L(\text{reduced}) - L(\text{full}) \right] \]
LR is asymptotically chi-square distributed with degrees of freedom equal to the difference
between the numbers of parameters estimated in the two models.
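Given the two log-likelihoods, the LR statistic and its p-value are a two-line computation. The sketch below handles the single-parameter case, where the chi-square survival function with 1 degree of freedom has the closed form \(\operatorname{erfc}(\sqrt{x/2})\); both helper names are our own.

```python
import math

def lr_statistic(loglik_full, loglik_reduced):
    """LR = -2 * (L(reduced) - L(full)), from the definition above."""
    return -2.0 * (loglik_reduced - loglik_full)

def chi2_sf_df1(x):
    """Survival function of a chi-square with 1 degree of freedom:
    P(X > x) = erfc(sqrt(x / 2)). Enough for a one-parameter test."""
    return math.erfc(math.sqrt(x / 2.0))

lr = lr_statistic(-10.0, -12.0)   # removing the variable costs 2 log-lik units
p_value = chi2_sf_df1(lr)
```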
Conditional Statistic
The conditional statistic is also computed for every variable in the model. The formula for the
conditional statistic is the same as the LR statistic except that the parameter estimates for each
reduced model are conditional estimates, not MLEs. The conditional estimates are defined as follows. Let \(\hat{\boldsymbol{\beta}} = \left( \hat{\beta}_1, \ldots, \hat{\beta}_r \right)^T\) be the MLE for the \(r\) variables in the model and C be the asymptotic covariance matrix for \(\hat{\boldsymbol{\beta}}\). If variable \(x_i\) is removed from the model, the conditional estimate for the parameters left in the model given \(\hat{\beta}_i\) is
\[ \tilde{\boldsymbol{\beta}}_{(i)} = \hat{\boldsymbol{\beta}}_{(i)} - C_{(i)i} C_{ii}^{-1} \hat{\beta}_i \]
where \(\hat{\beta}_i\) is the MLE for the parameter(s) associated with \(x_i\), \(\hat{\boldsymbol{\beta}}_{(i)}\) is \(\hat{\boldsymbol{\beta}}\) with \(\hat{\beta}_i\) removed, \(C_{(i)i}\) is the covariance between \(\hat{\boldsymbol{\beta}}_{(i)}\) and \(\hat{\beta}_i\), and \(C_{ii}\) is the covariance of \(\hat{\beta}_i\). Then the conditional statistic is computed by
\[ -2 \left[ L\left( \tilde{\boldsymbol{\beta}}_{(i)} \right) - L(\text{full}) \right] \]
where \(L\left( \tilde{\boldsymbol{\beta}}_{(i)} \right)\) is the log-likelihood function evaluated at \(\tilde{\boldsymbol{\beta}}_{(i)}\).
Statistics
The following output statistics are available.
Initial Model Information
If the constant term is not included in the model, the predicted probability is estimated to be 0.5 for all cases and the log-likelihood function is
\[ L(0) = W \ln(0.5) \]
with \(W = \sum_{i=1}^{n} w_i\). If the constant term is included in the model, the predicted probability is estimated as
\[ \hat{\pi}_0 = \frac{\sum_{i=1}^{n} w_i y_i}{W} \]
and \(\hat{\beta}_0\) is estimated by
\[ \hat{\beta}_0 = \ln\left( \frac{\hat{\pi}_0}{1 - \hat{\pi}_0} \right) \]
with asymptotic standard error estimated by
\[ \widehat{\mathrm{se}}\left( \hat{\beta}_0 \right) = \sqrt{ \frac{1}{W \hat{\pi}_0 \left( 1 - \hat{\pi}_0 \right)} } \]
The log-likelihood function is
\[ L(0) = W \left[ \hat{\pi}_0 \ln \hat{\pi}_0 + \left( 1 - \hat{\pi}_0 \right) \ln\left( 1 - \hat{\pi}_0 \right) \right] \]
Model Information
The following statistics are computed if a stepwise method is specified.
–2 Log-Likelihood
\[ -2 L\left( \hat{\boldsymbol{\beta}} \right) \]
Model Chi-Square
2(log-likelihood function for current model − log-likelihood function for initial model)
The initial model contains a constant if it is in the model; otherwise, the model has no terms.
The degrees of freedom for the model chi-square statistic is equal to the difference between the
numbers of parameters estimated in each of the two models. If the degrees of freedom is zero, the
model chi-square is not computed.
Block Chi-Square
2(log-likelihood function for current model − log-likelihood function for the final model from
the previous method)
The degrees of freedom for the block chi-square statistic is equal to the difference between the
numbers of parameters estimated in each of the two models.
Improvement Chi-Square
2(log-likelihood function for current model − log-likelihood function for the model from the
last step)
The degrees of freedom for the improvement chi-square statistic is equal to the difference between
the numbers of parameters estimated in each of the two models.
Goodness of Fit
Cox and Snell’s R-Square (Cox and Snell, 1989; Nagelkerke, 1991)
\[ R^2_{\mathrm{CS}} = 1 - \left( \frac{l(0)}{l(\text{current})} \right)^{2/W} \]
where \(l(\text{current})\) is the likelihood of the current model and l(0) is the likelihood of the initial model; that is,
\[ l(0) = 0.5^{W} \]
if the constant is not included in the model;
\[ l(0) = \left[ \hat{\pi}_0^{\hat{\pi}_0} \left( 1 - \hat{\pi}_0 \right)^{1 - \hat{\pi}_0} \right]^{W} \]
if the constant is included in the model, where \(\hat{\pi}_0 = \sum_{i=1}^{n} w_i y_i / W\).
Nagelkerke’s R-Square (Nagelkerke, 1991)
\[ R^2_{\mathrm{N}} = \frac{R^2_{\mathrm{CS}}}{\max\left( R^2_{\mathrm{CS}} \right)} \]
where \(\max\left( R^2_{\mathrm{CS}} \right) = 1 - \left( l(0) \right)^{2/W}\).
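Both pseudo-R² values can be evaluated from log-likelihoods alone, which avoids underflow in the raw likelihood ratios. A small sketch, with our own function names:

```python
import math

def cox_snell_r2(loglik_model, loglik_initial, W):
    """R^2_CS = 1 - (l(0) / l(model))^(2/W), written in log form;
    W is the sum of the case weights."""
    return 1.0 - math.exp(2.0 * (loglik_initial - loglik_model) / W)

def nagelkerke_r2(loglik_model, loglik_initial, W):
    """Rescale Cox and Snell's R^2 by its maximum, 1 - l(0)^(2/W)."""
    max_r2 = 1.0 - math.exp(2.0 * loglik_initial / W)
    return cox_snell_r2(loglik_model, loglik_initial, W) / max_r2

r2_cs = cox_snell_r2(-6.0, -10.0, 20.0)
r2_n = nagelkerke_r2(-6.0, -10.0, 20.0)
```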
Hosmer-Lemeshow Goodness-of-Fit Statistic
The test statistic is obtained by applying a chi-square test on a \(2 \times g\) contingency table. The
contingency table is constructed by cross-classifying the dichotomous dependent variable with
a grouping variable (with g groups) in which groups are formed by partitioning the predicted
probabilities using the percentiles of the predicted event probability. In the calculation,
approximately 10 groups are used (g=10). The corresponding groups are often referred to as the
“deciles of risk” (Hosmer and Lemeshow, 2000).
If the values of independent variables for observation i and i’ are the same, observations i and
i’ are said to be in the same block. When one or more blocks occur within the same decile, the
blocks are assigned to this same group. Moreover, observations in the same block are not divided
when they are placed into groups. This strategy may result in fewer than 10 groups (that is, \(g \le 10\)) and, consequently, fewer degrees of freedom.
Suppose that there are Q blocks, and that the qth block has \(m_q\) observations, \(q = 1, \ldots, Q\). Moreover, suppose that the kth group (\(k = 1, \ldots, g\)) is composed of the \(q_1\)th, …, \(q_k\)th blocks of observations. Then the total number of observations in the kth group is \(s_k = \sum_{q = q_1}^{q_k} m_q\). The total observed frequency of events (that is, Y=1) in the kth group, call it \(O_{1k}\), is the total number of observations in the kth group with Y=1. Let \(E_{1k}\) be the total expected frequency of the event in the kth group; then \(E_{1k}\) is given by \(E_{1k} = s_k \bar{\pi}_k\), where \(\bar{\pi}_k\) is the average predicted event probability for the kth group.
The Hosmer-Lemeshow goodness-of-fit statistic is computed as
\[ \chi^2_{\mathrm{HL}} = \sum_{k=1}^{g} \frac{\left( O_{1k} - E_{1k} \right)^2}{E_{1k} \left( 1 - E_{1k} / s_k \right)} \]
The p value is given by \(\Pr\left( \chi^2 \ge \chi^2_{\mathrm{HL}} \right)\), where \(\chi^2\) is the chi-square statistic distributed with degrees of freedom g−2.
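Once the groups have been formed, the statistic itself is a short sum. The sketch below takes the per-group observed events, expected events, and group sizes as given (the decile-of-risk grouping is assumed to have been done already); the function name is our own.

```python
def hosmer_lemeshow(observed, expected, sizes):
    """Chi-square statistic sum_k (O_1k - E_1k)^2 / (E_1k * (1 - E_1k / s_k))
    over the g groups."""
    stat = 0.0
    for o, e, s in zip(observed, expected, sizes):
        stat += (o - e) ** 2 / (e * (1.0 - e / s))
    return stat

# Two groups of 10 cases: 3 events where 2.5 expected, 7 where 7.5 expected
hl = hosmer_lemeshow([3, 7], [2.5, 7.5], [10, 10])
```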
Information for the Variables Not in the Equation
For each of the variables not in the equation, the score statistic is calculated along with the
associated degrees of freedom, significance and partial R. Let \(x_l\) be a variable not currently in the model and \(S_l\) its score statistic. The partial R is defined by
\[ R = \begin{cases} \sqrt{\dfrac{S_l - 2\,\mathrm{df}}{-2 L(0)}} & \text{if } S_l > 2\,\mathrm{df} \\[1ex] 0 & \text{otherwise} \end{cases} \]
where df is the degrees of freedom associated with \(x_l\), and \(L(0)\) is the log-likelihood function for the initial model.
The residual chi-square printed for the variables not in the equation is defined as
\[ \chi^2_{\text{res}} = \mathbf{L}'^{T} B_{22}^{-1} \mathbf{L}' \]
where \(\mathbf{L}' = \left( L'_{r_1+1}, \ldots, L'_{r_1+r_2} \right)^T\).
Information for the Variables in the Equation
For each of the variables in the equation, the MLE of the Beta coefficients is calculated along with the standard errors, Wald statistics, degrees of freedom, significances, and partial R. If \(x_l\) is not a categorical variable currently in the equation, the partial R is computed as
\[ R = \begin{cases} \operatorname{sign}\left( \hat{\beta}_l \right) \sqrt{\dfrac{W_l - 2}{-2 L(0)}} & \text{if } W_l > 2 \\[1ex] 0 & \text{otherwise} \end{cases} \]
If \(x_l\) is a categorical variable with m categories, the partial R is then
\[ R = \begin{cases} \sqrt{\dfrac{W_l - 2(m-1)}{-2 L(0)}} & \text{if } W_l > 2(m-1) \\[1ex] 0 & \text{otherwise} \end{cases} \]
Casewise Statistics
The following statistics are computed for each case.
Individual Deviance
The deviance of the ith case, \(d_i\), is defined as
\[ d_i = \begin{cases} \sqrt{-2 \ln \hat{\pi}_i} & \text{if } y_i = 1 \\[1ex] -\sqrt{-2 \ln\left( 1 - \hat{\pi}_i \right)} & \text{otherwise} \end{cases} \]
Leverage
The leverage of the ith case, \(h_i\), is the ith diagonal element of the matrix
\[ H = \hat{V}^{1/2} X \left( X^T \hat{V} X \right)^{-1} X^T \hat{V}^{1/2} \]
where \(\hat{V} = \mathrm{diag}\left\{ w_i \hat{\pi}_i \left( 1 - \hat{\pi}_i \right) \right\}\).
Studentized Residual
\[ \tilde{d}_i = \frac{d_i}{\sqrt{1 - h_i}} \]
Logit Residual
\[ e^{L}_i = \frac{e_i}{\hat{\pi}_i \left( 1 - \hat{\pi}_i \right)} \]
where \(e_i = y_i - \hat{\pi}_i\).
Standardized Residual
\[ z_i = \frac{e_i}{\sqrt{\hat{\pi}_i \left( 1 - \hat{\pi}_i \right)}} \]
Cook’s Distance
\[ D_i = \frac{z_i^2\, h_i}{\left( 1 - h_i \right)^2} \]
DFBETA
Let \(\Delta \hat{\boldsymbol{\beta}}_i\) be the change of the coefficient estimates from the deletion of case i. It is computed as
\[ \Delta \hat{\boldsymbol{\beta}}_i = \frac{\left( X^T \hat{V} X \right)^{-1} \mathbf{x}_i^T e_i}{1 - h_i} \]
Predicted Group
If \(\hat{\pi}_i \ge 0.5\), the predicted group is the group in which y=1.
Note the following:
For the unselected cases with nonmissing values for the independent variables in the analysis, the leverage \(\tilde{h}_i\) is computed as
\[ \tilde{h}_i = \frac{h'_i}{1 + h'_i} \]
where
\[ h'_i = w_i \hat{\pi}_i \left( 1 - \hat{\pi}_i \right) \mathbf{x}_i \left( X^T \hat{V} X \right)^{-1} \mathbf{x}_i^T \]
For the unselected cases, the Cook’s distance and DFBETA are calculated based on \(\tilde{h}_i\).
Generated Model/Scoring
For each record passed through a generated binomial logistic regression model, a predicted value
and confidence score are calculated as follows:
Predicted Value
The probability of the value y = 1 for record i is calculated as
\[ \hat{\pi}_i = \frac{e^{\mathbf{x}_i^T \hat{\boldsymbol{\beta}}}}{1 + e^{\mathbf{x}_i^T \hat{\boldsymbol{\beta}}}} \]
If \(\hat{\pi}_i \ge 0.5\), the predicted value is 1; otherwise, the predicted value is 0.
Confidence
For records with a predicted value of y = 1, the confidence value is \(\hat{\pi}_i\). For records with a predicted value of y = 0, the confidence value is \(1 - \hat{\pi}_i\).
Blank Handling (generated model)
Records with missing values for any input field in the final model cannot be scored, and are
assigned a predicted value of $null$.
KNN Algorithms
Nearest Neighbor Analysis is a method for classifying cases based on their similarity to other
cases. In machine learning, it was developed as a way to recognize patterns of data without
requiring an exact match to any stored patterns, or cases. Similar cases are near each other and
dissimilar cases are distant from each other. Thus, the distance between two cases is a measure
of their dissimilarity.
Cases that are near each other are said to be “neighbors.” When a new case (holdout) is presented,
its distance from each of the cases in the model is computed. The classifications of the most
similar cases – the nearest neighbors – are tallied and the new case is placed into the category that
contains the greatest number of nearest neighbors.
You can specify the number of nearest neighbors to examine; this value is called k. The pictures
show how a new case would be classified using two different values of k. When k = 5, the new
case is placed in category 1 because a majority of the nearest neighbors belong to category 1.
However, when k = 9, the new case is placed in category 0 because a majority of the nearest
neighbors belong to category 0.
Nearest neighbor analysis can also be used to compute values for a continuous target. In this
situation, the average or median target value of the nearest neighbors is used to obtain the
predicted value for the new case.
Notation
The following notation is used throughout this chapter unless otherwise stated:
Y  Optional \(1 \times N\) vector of responses with element \(y_n\), where n=1,...,N indexes the cases.
X0  \(P^0 \times N\) matrix of features with element \(x^0_{pn}\), where p=1,...,P0 indexes the features and n=1,...,N indexes the cases.
X  \(P \times N\) matrix of encoded features with element \(x_{pn}\), where p=1,...,P indexes the features and n=1,...,N indexes the cases.
P  Dimensionality of the feature space; the number of continuous features plus the number of categories across all categorical features.
N  Total number of cases.
\(N_j\)  The number of cases with Y = j, where Y is a response variable with J categories.
\(N_{jj}\)  The number of cases which belong to class j and are correctly classified as j.
\(P_j\)  The total number of cases which are classified as j.
Preprocessing
Features are coded to account for differences in measurement scale.
© Copyright IBM Corporation 1994, 2015.
Continuous
Continuous features are optionally coded using adjusted normalization:
\[ x_{pn} = \frac{2\left( x^0_{pn} - \min_p \right)}{\max_p - \min_p} - 1 \]
where \(x_{pn}\) is the normalized value of input feature p for case n, \(x^0_{pn}\) is the original value of the feature for case n, \(\min_p\) is the minimum value of the feature for all training cases, and \(\max_p\) is the maximum value for all training cases.
Categorical
Categorical features are always temporarily recoded using one-of-c coding. If a feature has c categories, then it is stored as c vectors, with the first category denoted (1,0,...,0), the next category (0,1,0,...,0), ..., and the final category (0,0,...,0,1).
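Both encodings are simple to express directly. The sketch below is an illustration of the two rules; the helper names are ours, not Modeler’s.

```python
def adjusted_normalize(value, lo, hi):
    """Adjusted normalization of a continuous feature to [-1, 1]:
    x = 2 * (x0 - min) / (max - min) - 1."""
    return 2.0 * (value - lo) / (hi - lo) - 1.0

def one_of_c(index, c):
    """One-of-c coding: category `index` (0-based) out of c categories."""
    return [1 if j == index else 0 for j in range(c)]

norm = [adjusted_normalize(v, 0.0, 10.0) for v in (0.0, 5.0, 10.0)]
code = one_of_c(1, 3)
```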
Training
Training a nearest neighbor model involves computing the distances between cases based upon
their values in the feature set. The nearest neighbors to a given case have the smallest distances
from that case. The distance metric, choice of number of nearest neighbors, and choice of the
feature set have the following options.
Distance Metric
We use one of the following metrics to measure the similarity of query cases and their nearest
neighbors.
Euclidean Distance. The distance between two cases is the square root of the sum, over all dimensions, of the weighted squared differences between the values for the cases:
\[ \mathrm{Euclidean}_{ih} = \sqrt{ \sum_{p=1}^{P} w_p \left( x_{pi} - x_{ph} \right)^2 } \]
City Block Distance. The distance between two cases is the sum, over all dimensions, of the weighted absolute differences between the values for the cases:
\[ \mathrm{CityBlock}_{ih} = \sum_{p=1}^{P} w_p \left| x_{pi} - x_{ph} \right| \]
The feature weight \(w_p\) is equal to 1 when feature importance is not used to weight distances; otherwise, it is equal to the normalized feature importance:
\[ w_p = \frac{\mathrm{FI}_p}{\sum_{q=1}^{P} \mathrm{FI}_q} \]
See “Output Statistics” for the computation of feature importance \(\mathrm{FI}_p\).
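The two metrics differ only in the per-dimension term they accumulate. A minimal sketch (function names are ours), with weights defaulting to 1 as in the unweighted case:

```python
import math

def euclidean(a, b, weights=None):
    """Weighted Euclidean distance over encoded feature vectors."""
    w = weights or [1.0] * len(a)
    return math.sqrt(sum(wp * (x - y) ** 2 for wp, x, y in zip(w, a, b)))

def city_block(a, b, weights=None):
    """Weighted city block (Manhattan) distance over encoded features."""
    w = weights or [1.0] * len(a)
    return sum(wp * abs(x - y) for wp, x, y in zip(w, a, b))

d_e = euclidean([0.0, 0.0], [3.0, 4.0])
d_c = city_block([0.0, 0.0], [3.0, 4.0])
```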
Crossvalidation for Selection of k
Cross validation is used for automatic selection of the number of nearest neighbors, between a minimum \(k_{\min}\) and a maximum \(k_{\max}\). Suppose that the training set has a cross validation variable with the integer values 1, 2, ..., V. Then the cross validation algorithm is as follows:
► For each \(k \in \left\{ k_{\min}, \ldots, k_{\max} \right\}\), compute the average error rate or sum-of-squares error of k:
\[ \bar{e}_k = \frac{1}{V} \sum_{v=1}^{V} e_{kv} \]
where \(e_{kv}\) is the error rate or sum-of-squares error when we apply the nearest neighbor model to make predictions on the cases with the cross validation variable equal to \(v\); that is, when we use the other cases as the training dataset.
► Select the optimal k as:
\[ k^{*} = \arg\min_{k} \bar{e}_k \]
Note: If multiple values of k are tied on the lowest average error, we select the smallest k among those that are tied.
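The selection step, including the smallest-k tie-break from the note above, can be sketched as follows. The function takes the per-fold errors as given (fitting the k-NN model per fold is outside this snippet); the name is ours.

```python
def select_k(errors_by_k):
    """Given, for each candidate k, the list of per-fold error rates
    (or sum-of-squares errors), pick the k with the lowest average
    error; ties are broken toward the smallest k."""
    best_k, best_avg = None, None
    for k in sorted(errors_by_k):        # ascending k => smallest wins ties
        avg = sum(errors_by_k[k]) / len(errors_by_k[k])
        if best_avg is None or avg < best_avg:
            best_k, best_avg = k, avg
    return best_k

# Average errors: k=3 -> 0.2, k=5 -> 0.2, k=7 -> 0.3; tie broken toward k=3
k_star = select_k({3: [0.1, 0.3], 5: [0.2, 0.2], 7: [0.3, 0.3]})
```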
Feature Selection
Feature selection is based on the wrapper approach of Cunningham and Delany (2007) and uses forward selection, which starts from \(J_0\) features which are forced into the model. Further features are chosen sequentially; the chosen feature at each step is the one that causes the largest decrease in the error rate or sum-of-squares error.
Let \(S_J\) represent the set of \(J\) features that are currently chosen to be included, \(S^{c}_J\) represent the set of remaining features, and \(e\left( S_J \right)\) represent the error rate or sum-of-squares error associated with the model based on \(S_J\).
The algorithm is as follows:
► Start with \(J = J_0\) features.
► For each feature in \(S^{c}_J\), fit the k nearest neighbor model with this feature plus the existing features in \(S_J\) and calculate the error rate or sum-of-squares error for each model. The feature in \(S^{c}_J\) whose model has the smallest error rate or sum-of-squares error is the one to be added to create \(S_{J+1}\).
► Check the selected stopping criterion. If satisfied, stop and report the chosen feature subset. Otherwise, set J = J+1 and go back to the previous step.
Note: the set of encoded features associated with a categorical predictor is considered and added together as a set for the purpose of feature selection.
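The greedy wrapper loop above can be sketched generically. Here `error_fn(subset)` stands in for fitting a k-NN model on a feature subset and measuring its error on held-out data, and a fixed number of additions plays the role of the stopping criterion; all names are illustrative.

```python
def forward_select(candidates, forced, error_fn, n_to_add):
    """Greedy wrapper selection: starting from the forced-in features,
    repeatedly add the candidate whose model has the smallest error."""
    chosen = list(forced)
    remaining = [f for f in candidates if f not in chosen]
    for _ in range(n_to_add):
        # Evaluate each remaining feature added to the current subset
        best = min(remaining, key=lambda f: error_fn(chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy error function: feature "a" helps most, then "b"
def toy_error(subset):
    return 1.0 - 0.5 * ("a" in subset) - 0.3 * ("b" in subset)

subset = forward_select(["a", "b", "c"], [], toy_error, 2)
```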
Stopping Criteria
One of two stopping criteria can be applied to the feature selection algorithm.
Fixed number of features. The algorithm adds a fixed number of features, \(J_{\mathrm{add}}\), in addition to the \(J_0\) features forced into the model. The final feature subset will have \(J_0 + J_{\mathrm{add}}\) features. \(J_{\mathrm{add}}\) may be user-specified or computed automatically. When this is the stopping criterion, the feature selection algorithm stops when \(J_{\mathrm{add}}\) features have been added to the model; that is, when \(J = J_0 + J_{\mathrm{add}}\), stop and report \(S_J\) as the chosen feature subset.
Note: if \(J_{\mathrm{add}} = 0\), no features are added and \(S_{J_0}\), with \(J = J_0\), is reported as the chosen feature subset.
Change in error rate or sum of squares error. The algorithm stops when the change in the absolute error ratio indicates that the model cannot be further improved by adding more features. Specifically, if
\[ e\left( S_{J+1} \right) > e\left( S_J \right) \quad\text{or}\quad \left| \frac{e\left( S_J \right) - e\left( S_{J+1} \right)}{e\left( S_J \right)} \right| < \Delta_{\min}, \]
where \(\Delta_{\min}\) is the specified minimum change, stop and report \(S_J\) as the chosen feature subset. If
\[ e\left( S_{J+1} \right) < e\left( S_J \right) \quad\text{and}\quad \left| \frac{e\left( S_J \right) - e\left( S_{J+1} \right)}{e\left( S_J \right)} \right| \ge \Delta_{\min} \]
when all features have entered the model, stop and report \(S_{J+1}\) as the chosen feature subset.
Note: if \(J_0 = P\), no features are added and \(S_{J_0}\), with \(J = J_0\), is reported as the chosen feature subset.
Combined k and Feature Selection
The following method is used for combined neighbors and features selection.
1. For each k, use the forward selection method for feature selection.
2. Select the k, and accompanying feature set, with the lowest error rate or the lowest sum-of-squares
error.
Blank Handling
All records with missing values for any input or output field are excluded from the estimation of
the model.
Output Statistics
The following statistics are available.
Percent correct for class j
\[ 100 \times \frac{N_{jj}}{N_j} \]
Overall percent for class j
\[ 100 \times \frac{P_j}{N} \]
Intersection of Overall percent and percent correct
Error rate of classification
\[ 100 \times \left( 1 - \frac{\sum_{j=1}^{J} N_{jj}}{N} \right) \]
Sum-of-Squares Error for continuous response
\[ \sum_{n=1}^{N} \left( y_n - \hat{y}_n \right)^2 \]
where \(\hat{y}_n\) is the estimated value of \(y_n\).
Feature Importance
Suppose there are \(J\) features \(X_{(1)}, \ldots, X_{(J)}\) in the model from the forward selection process, with the error rate or sum-of-squares error \(e\). The importance of feature \(X_{(p)}\) in the model is computed by the following method.
► Delete the feature \(X_{(p)}\) from the model, make predictions, and evaluate the error rate or sum-of-squares error \(e_{(p)}\) based on the remaining features \(X_{(1)}, \ldots, X_{(p-1)}, X_{(p+1)}, \ldots, X_{(J)}\).
► Compute the error ratio \(r_{(p)} = e_{(p)} / e\).
The feature importance of \(X_{(p)}\) is \(\mathrm{FI}_{(p)} = r_{(p)} - 1\).
Scoring
After we find the k nearest neighbors of a case, we can classify it or predict its response value.
Categorical response
Classify each case by majority vote of its k nearest neighbors among the training cases.
► If multiple categories are tied on the highest predicted probability, then the tie is broken by choosing the category with the largest number of cases in the training set.
► If multiple categories are tied on the largest number of cases in the training set, then choose the category with the smallest data value among the tied categories. In this case, categories are assumed to be in the ascending sort or lexical order of the data values.
We can also compute the predicted probability of each category. Suppose \(k_j\) is the number of cases of the jth category among the k nearest neighbors. Instead of simply estimating the predicted probability for the jth category by \(k_j / k\), we apply a Laplace correction as follows:
\[ \hat{p}_j = \frac{k_j + 1}{k + J} \]
where J is the number of categories in the training data set.
The effect of the Laplace correction is to shrink the probability estimates towards 1/J when the number of nearest neighbors is small. In addition, if a query case has k nearest neighbors with the same response value, the probability estimates are less than 1 and larger than 0, instead of exactly 1 or 0.
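The correction is a one-line transformation of the neighbor counts. A small sketch (the function name is ours):

```python
def laplace_probabilities(neighbor_counts, k, J):
    """Predicted probabilities (k_j + 1) / (k + J) from the category
    counts among the k nearest neighbors; J is the number of categories."""
    return {cat: (count + 1.0) / (k + J)
            for cat, count in neighbor_counts.items()}

# 5 neighbors, 2 categories: 3 "yes" votes become probability 4/7, not 3/5
probs = laplace_probabilities({"yes": 3, "no": 2}, k=5, J=2)
```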
Continuous response
Predict each case using the mean or median function.
Mean function.
\[ \hat{y}_n = \frac{1}{\left| K_n \right|} \sum_{m \in K_n} y_m \]
where \(K_n\) is the index set of those cases that are the nearest neighbors of case n and \(y_m\) is the value of the continuous response variable for case m.
Median function. Suppose that \(y_{(1)}, \ldots, y_{(k)}\) are the values of the continuous response variable for the k nearest neighbors, arranged from the lowest value to the highest value. Then the median is
\[ \mathrm{median} = \begin{cases} y_{\left( (k+1)/2 \right)} & \text{if } k \text{ is odd} \\[1ex] \dfrac{y_{(k/2)} + y_{(k/2+1)}}{2} & \text{if } k \text{ is even} \end{cases} \]
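The odd/even median rule above can be sketched directly (the function name is ours):

```python
def knn_median(values):
    """Median of the nearest neighbors' response values: the middle order
    statistic when k is odd, the average of the two middle order
    statistics when k is even."""
    v = sorted(values)
    k = len(v)
    mid = k // 2
    if k % 2 == 1:
        return v[mid]
    return (v[mid - 1] + v[mid]) / 2.0

m_odd = knn_median([3.0, 1.0, 2.0])
m_even = knn_median([4.0, 1.0, 3.0, 2.0])
```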
Blank Handling
Records with missing values for any input field cannot be scored and are assigned a predicted
value and probability value(s) of $null$.
References
Arya, S., and D. M. Mount. 1993. Algorithms for fast vector quantization. In: Proceedings of the Data Compression Conference 1993, 381–390.
Cunningham, P., and S. J. Delany. 2007. k-Nearest Neighbor Classifiers. Technical Report UCD-CSI-2007-4, School of Computer Science and Informatics, University College Dublin, Ireland.
Friedman, J. H., J. L. Bentley, and R. A. Finkel. 1977. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3, 209–226.
Linear modeling algorithms
Linear models predict a continuous target based on linear relationships between the target and
one or more predictors.
For algorithms on enhancing model accuracy, enhancing model stability, or working with very
large datasets, see “Ensembles Algorithms” on p. 125.
Notation
The following notation is used throughout this chapter unless otherwise stated:
n
p
Number of distinct records in the dataset. It is an integer and
.
Number of parameters (including parameters for dummy variables but
.
excluding the intercept) in the model. It is an integer and
Number of non-redundant parameters (excluding the intercept) currently in
the model. It is an integer and
.
Number of non-redundant parameters currently in the model.
Number of effects excluding the intercept. It is an integer and
y
target vector with elements
frequency weight vector.
regression weight vector.
f
g
N
X
.
Effective sample size. It is an integer and
. If there is no
frequency weight vector, N=n.
design matrix with element
. The rows represent the records
and the columns represent the parameters.
vector of unobserved errors.
intercept.
vector of unknown parameters;
.
is the
vector of parameter estimates.
b
vector of standardized parameter estimates. It is the result of a
sweep operation on matrix R. is the standardized estimate of the intercept
and is equal to 0.
vector of predicted target values.
Weighted sample mean for
,
Weighted sample mean for y.
Weighted sample covariance between
and
Weighted sample covariance between
and y.
,
.
Weighted sample variance for y.
R
weighted sample correlation matrix for X (excluding the
intercept, if it exists) and y.
The resulting matrix after a sweep operation whose elements are .
Model
Linear regression has the form
\[ \mathbf{y} = X \boldsymbol{\beta} + \boldsymbol{\varepsilon} \]
where \(\boldsymbol{\varepsilon}\) follows a normal distribution with mean \(\mathbf{0}\) and variance \(\sigma^2 D\), where \(D = \mathrm{diag}\left( g_1^{-1}, \ldots, g_n^{-1} \right)\) and \(g_i\) is the regression weight for record i. The elements of \(\boldsymbol{\varepsilon}\) are independent with respect to each other.
Notes:
•  X can be any combination of continuous and categorical effects.
•  Constant columns in the design matrix are not used in model building.
•  If n=1 or the target is constant, no model is built.
Missing values
Records with missing values are deleted listwise.
Least squares estimation
The coefficients are estimated by the least squares (LS) method. First, we transform the model by pre-multiplying \(D^{-1/2}\) as follows:
\[ D^{-1/2} \mathbf{y} = D^{-1/2} X \boldsymbol{\beta} + D^{-1/2} \boldsymbol{\varepsilon} \]
so that the new unobserved error \(D^{-1/2} \boldsymbol{\varepsilon}\) follows a normal distribution \(N\left( \mathbf{0}, \sigma^2 I \right)\), where I is an \(n \times n\) identity matrix and \(D^{-1/2} = \mathrm{diag}\left( \sqrt{g_1}, \ldots, \sqrt{g_n} \right)\). Then the least squares estimates of \(\boldsymbol{\beta}\) can be obtained from the following formula
\[ \hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \left( \mathbf{y} - X \boldsymbol{\beta} \right)^T F D^{-1} \left( \mathbf{y} - X \boldsymbol{\beta} \right) \]
where \(F = \mathrm{diag}\left( f_1, \ldots, f_n \right)\) is the diagonal matrix of frequency weights, so the closed form solution of \(\hat{\boldsymbol{\beta}}\) is
\[ \hat{\boldsymbol{\beta}} = \left( X^T W X \right)^{-} X^T W \mathbf{y} \]
where \(W = F D^{-1} = \mathrm{diag}\left( f_1 g_1, \ldots, f_n g_n \right)\) and \(\left( X^T W X \right)^{-}\) is a generalized inverse of \(X^T W X\).
\(\hat{\boldsymbol{\beta}}\) is computed by applying sweep operations instead of the equation above. In addition, sweep operations are applied to the transformed scale of X and y to achieve numerical stability. Specifically, we construct the weighted sample correlation matrix R then apply sweep operations to it. The R matrix is constructed as follows.
First, compute weighted sample means, variances and covariances among \(X_i\), \(X_j\) and y:
Weighted sample means of \(X_i\) and y are
\[ \bar{x}_i = \frac{\sum_{k=1}^{n} f_k g_k x_{ki}}{\sum_{k=1}^{n} f_k g_k} \quad\text{and}\quad \bar{y} = \frac{\sum_{k=1}^{n} f_k g_k y_k}{\sum_{k=1}^{n} f_k g_k}; \]
Weighted sample covariance for \(X_i\) and \(X_j\) is
\[ S_{ij} = \frac{1}{N-1} \sum_{k=1}^{n} f_k g_k \left( x_{ki} - \bar{x}_i \right) \left( x_{kj} - \bar{x}_j \right); \]
Weighted sample covariance for \(X_i\) and y is
\[ S_{iy} = \frac{1}{N-1} \sum_{k=1}^{n} f_k g_k \left( x_{ki} - \bar{x}_i \right) \left( y_k - \bar{y} \right); \]
Weighted sample variance for y is
\[ S_{yy} = \frac{1}{N-1} \sum_{k=1}^{n} f_k g_k \left( y_k - \bar{y} \right)^2. \]
Second, compute weighted sample correlations
\[ r_{ij} = \frac{S_{ij}}{\sqrt{S_{ii} S_{jj}}} \quad\text{and}\quad r_{iy} = \frac{S_{iy}}{\sqrt{S_{ii} S_{yy}}}. \]
Then the matrix R is
\[ R = \begin{bmatrix} r_{11} & \cdots & r_{1p} & r_{1y} \\ \vdots & \ddots & \vdots & \vdots \\ r_{p1} & \cdots & r_{pp} & r_{py} \\ r_{y1} & \cdots & r_{yp} & r_{yy} \end{bmatrix} = \begin{bmatrix} R_{11} & \mathbf{r}_{1y} \\ \mathbf{r}_{1y}^T & r_{yy} \end{bmatrix} \]
If the sweep operations are repeatedly applied to each row of \(R_{11}\) that contains the predictors in the model at the current step, the result is
\[ \tilde{R} = \begin{bmatrix} R_{11}^{-1} & R_{11}^{-1} \mathbf{r}_{1y} \\ -\mathbf{r}_{1y}^T R_{11}^{-1} & r_{yy} - \mathbf{r}_{1y}^T R_{11}^{-1} \mathbf{r}_{1y} \end{bmatrix} \]
The last column of \(\tilde{R}\), \(R_{11}^{-1} \mathbf{r}_{1y}\), contains the standardized coefficient estimates; that is, \(\hat{\mathbf{b}} = R_{11}^{-1} \mathbf{r}_{1y}\). Then the coefficient estimates, except the intercept estimate if there is an intercept in the model, are:
\[ \hat{\beta}_i = \hat{b}_i \sqrt{\frac{S_{yy}}{S_{ii}}} \]
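The correlation-then-rescale construction is easiest to see with a single predictor, where the standardized coefficient is just \(r_{xy}\) and no sweep is needed. A sketch under that simplifying assumption (the function name is ours):

```python
import math

def weighted_slope(x, y, f=None, g=None):
    """Single-predictor illustration: the standardized coefficient is
    r_xy, and the raw coefficient is b = r_xy * sqrt(S_yy / S_xx).
    f and g are frequency and regression weights (default 1); the
    common (N-1) factor cancels in the ratios, so it is omitted."""
    n = len(x)
    f = f or [1.0] * n
    g = g or [1.0] * n
    w = [fi * gi for fi, gi in zip(f, g)]
    W = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / W
    yb = sum(wi * yi for wi, yi in zip(w, y)) / W
    sxx = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    syy = sum(wi * (yi - yb) ** 2 for wi, yi in zip(w, y))
    sxy = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    r = sxy / math.sqrt(sxx * syy)        # standardized coefficient
    return r, r * math.sqrt(syy / sxx)    # (standardized, raw slope)

r, beta = weighted_slope([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```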
Model selection
The following model selection methods are supported:
•  None, in which no selection method is used and effects are force entered into the model. For this method, the singularity tolerance is set to 1e−12 during the sweep operation.
•  Forward stepwise, which starts with no effects in the model and adds and removes effects one step at a time until no more can be added or removed according to the stepwise criteria.
•  Best subsets, which checks “all possible” models, or at least a larger subset of the possible models than forward stepwise, to choose the best according to the best subsets criterion.
Forward stepwise
The basic idea of the forward stepwise method is to add effects one at a time as long as these
additions are worthy. After an effect has been added, all effects in the current model are checked
to see if any of them should be removed. Then the process continues until a stopping criterion
is met. The traditional criterion for effect entry and removal is based on their F-statistics and
corresponding p-values, which are compared with some specified entry and removal significance
levels; however, these statistics may not actually follow an F distribution so the results might be
questionable. Hence the following additional criteria for effect entry and removal are offered:
•  Maximum adjusted R2;
•  Minimum corrected Akaike information criterion (AICC); and
•  Minimum average squared error (ASE) over the overfit prevention data.
Candidate statistics
Some additional notation is needed to describe the addition or removal of a continuous effect \(X_j\) or a categorical effect \(\left\{ X_{j1}, \ldots, X_{j\ell} \right\}\), where ℓ is the number of categories.
\(p_j\)  The number of non-redundant parameters of the eligible effect.
\(p^{**}_c\)  The number of non-redundant parameters in the current model (including the intercept).
\(p^{**}_r\)  The number of non-redundant parameters in the resulting model (including the intercept). Note that \(p^{**}_r = p^{**}_c + p_j\) for entering an effect and \(p^{**}_r = p^{**}_c - p_j\) for removing an effect.
\(\mathrm{SSe}_c\)  The weighted residual sum of squares for the current model.
\(\mathrm{SSe}_e\)  The weighted residual sum of squares for the resulting model after entering the effect.
\(\mathrm{SSe}_r\)  The weighted residual sum of squares for the resulting model after removing the effect.
\(\tilde{r}_{yy}\)  The last diagonal element in the current \(\tilde{R}\) matrix.
\(\tilde{r}'_{yy}\)  The last diagonal element in the resulting \(\tilde{R}\) matrix.
F statistics. The F statistics for entering or removing an effect from the current model are

F(enter) = [(SSe − SSe(enter)) / pj] / [SSe(enter) / (N − pr)]
F(remove) = [(SSe(remove) − SSe) / pj] / [SSe / (N − pc)]

and their corresponding p-values are computed from the F distribution with pj and N − pr (for entry) or pj and N − pc (for removal) degrees of freedom.
Adjusted R-squared. The adjusted R² value for entering or removing an effect from the current model is

adj. R² = 1 − [SSe(resulting) / (N − pr)] / [SSt / (N − 1)]

where SSt is the weighted total sum of squares and SSe(resulting) is the residual sum of squares of the resulting model.
Corrected Akaike Information Criterion (AICC). The AICC value for entering or removing an effect from the current model is

AICC = N ln(SSe(resulting) / N) + N (N + pr) / (N − pr − 2)

Average Squared Error (ASE). The ASE value for entering or removing an effect from the current model is

ASE = (1/T) Σt (yt − ŷt)²

where ŷt are the predicted values of yt and T is the number of distinct testing cases in
the overfit prevention set.
The Selection Process
There are slight variations in the selection process, depending upon the model selection criterion:
• The F statistic criterion selects an effect for entry (removal) with the minimum (maximum) p-value and continues doing so until the p-values of all candidates for entry (removal) are equal to or greater than (less than) a specified significance level.
• The other three criteria compare the statistic (adjusted R², AICC, or ASE) of the resulting model after entering (removing) an effect with that of the current model. Selection stops at a local optimal value (a maximum for the adjusted R² criterion and a minimum for the AICC and ASE criteria).
The following additional definitions are needed for the selection process:

FLAG: An index vector which records the status of each effect: FLAGi = 1 means effect i is in the current model, FLAGi = 0 means it is not. pFLAG denotes the number of effects with FLAGi = 1.
MAXSTEP: The maximum number of iteration steps.
MAXEFFECT: The maximum number of effects (excluding the intercept, if one exists).
Pin: The significance level for effect entry when the F statistic criterion is used. The default is 0.05.
Pout: The significance level for effect removal when the F statistic criterion is used. The default is 0.1.
FCj: The F statistic change, F(enter) or F(remove), for entering or removing an effect Xj (here Xj may represent a continuous or categorical effect, for simpler notation), together with its corresponding p-value.
MSCcurrent: The adjusted R², AICC, or ASE value for the current model.
1. Set FLAGi = 0 for all effects i and iter = 0. The initial model is the intercept-only model (if there is an intercept). If the adjusted R², AICC, or ASE criterion is used, compute the statistic for the initial model and denote it MSCcurrent.
2. If pFLAG < MAXEFFECT, iter ≤ MAXSTEP, and at least one effect is eligible for entry, go to the next step; otherwise stop and output the current model.
3. Based on the current model, for every effect j eligible for entry (see Condition below):
If FC (the F statistic criterion) is used, compute FCj and its p-value;
If MSC (the adjusted R², AICC, or ASE criterion) is used, compute MSCj.
4. If FC is used, choose the effect Xj* with the minimum p-value, and if that p-value < Pin, enter Xj* into the current model.
If MSC is used, choose the effect Xj* with the minimum MSCj, and if MSCj* < MSCcurrent, enter Xj* into the current model. (For the adjusted R² criterion, replace min with max and reverse the inequality.)
If the inequality is not satisfied, stop and output the current model.
5. If the model with the new effect is the same as any previously obtained model, stop and output the current model; otherwise update the current model by doing the sweep operation on the row(s) and column(s) associated with Xj* in the current R matrix. Set FLAGj* = 1 and iter = iter + 1.
If FC is used, let SSe = SSe(enter) and pc = pc + pj*;
If MSC is used, let MSCcurrent = MSCj*.
6. For every effect k in the current model, that is, every k with FLAGk = 1:
If FC is used, compute FCk and its p-value;
If MSC is used, compute MSCk.
7. If FC is used, choose the effect Xk* with the maximum p-value, and if that p-value > Pout, remove Xk* from the current model.
If MSC is used, choose the effect Xk* with the minimum MSCk, and if MSCk* < MSCcurrent, remove Xk* from the current model. (For the adjusted R² criterion, replace min with max and reverse the inequality.)
If the inequality is met, go to the next step; otherwise go back to step 2.
8. If the model with the effect removed is the same as any previously obtained model, stop and output the current model; otherwise update the current model by doing the sweep operation on the row(s) and column(s) associated with Xk* in the current R matrix. Set FLAGk* = 0 and iter = iter + 1.
If FC is used, let SSe = SSe(remove) and pc = pc − pk*;
If MSC is used, let MSCcurrent = MSCk*. Then go back to step 6.
Condition. In order for effect j to be eligible for entry into the model, the following conditions must be met:

For a continuous effect Xj, rjj ≥ t; for a categorical effect (Xj1, …, Xjℓ), rj1,j1 ≥ t, …, rjℓ,jℓ ≥ t; where t is the singularity tolerance (with a value of 1e−4) and rjj and rjm,jm are diagonal elements in the current R matrix (before entering).

For each continuous effect Xk that is currently in the model, r̃kk ≥ t. For each categorical effect (Xk1, …, Xkℓ) with ℓ levels that is currently in the model, r̃k1,k1 ≥ t, …, r̃kℓ,kℓ ≥ t, where r̃kk and r̃km,km are diagonal elements in the resulting R matrix; that is, the results after doing the sweep operation on the row(s) and column(s) associated with Xk or (Xk1, …, Xkℓ) in the current R matrix. The above condition is imposed so that entry of the effect does not reduce the tolerance of other effects already in the model to unacceptable levels.
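Under the AICC criterion, the entry/removal loop above can be sketched in a few lines. This is a minimal illustration, not the production algorithm: it refits each candidate model with ordinary least squares instead of sweeping the R matrix, assumes the standard corrected-AIC form N·ln(SSe/N) + N(N + p)/(N − p − 2), and all function names are ours.

```python
import numpy as np

def aicc(sse, n, p):
    """Corrected AIC for a linear model with p parameters (standard form)."""
    return n * np.log(sse / n) + n * (n + p) / (n - p - 2)

def sse_of(cols, X, y):
    """Residual sum of squares of OLS on the selected columns plus intercept."""
    n = len(y)
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return float(r @ r)

def forward_stepwise(X, y):
    n, p = X.shape
    selected, best = [], aicc(sse_of([], X, y), n, 1)  # intercept-only start
    improved = True
    while improved:
        improved = False
        # entry phase: try each candidate effect, keep the best AICC drop
        for j in [c for c in range(p) if c not in selected]:
            crit = aicc(sse_of(selected + [j], X, y), n, len(selected) + 2)
            if crit < best:
                best, entering, improved = crit, j, True
        if improved:
            selected.append(entering)
            # removal phase: drop any effect whose removal lowers AICC
            for k in list(selected):
                trial = [c for c in selected if c != k]
                crit = aicc(sse_of(trial, X, y), n, len(trial) + 1)
                if crit < best:
                    best, selected = crit, trial
    return selected, best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
chosen, _ = forward_stepwise(X, y)
```

With two strong predictors (columns 0 and 2) and weak noise, both are entered; whether any spurious effect also enters depends on the sample.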
Best subsets
Stepwise methods search fewer combinations of sub-models and rarely select the best one, so
another option is to check all possible models and select the “best” based upon some criterion.
The available criteria are the maximum adjusted R2, minimum AICC, and minimum ASE over
the overfit prevention set.
Since there are pe free effects, we do an exhaustive search over 2^pe models, which include the intercept-only model. Because the number of calculations increases exponentially with pe, it is important to have an efficient algorithm for carrying out the necessary computations. However, if pe is too large, it may not be practical to check all of the possible models.
We divide the problem into two tiers in terms of the number of effects:
• when pe ≤ 20, we search all possible subsets;
• when pe > 20, we apply a hybrid method which combines the forward stepwise method and the all possible subsets method.
Searching All Possible Subsets
An efficient method that minimizes the number of sweep operations on the R matrix (Schatzoff et al., 1968) is applied to traverse all the models, outlined as follows:
Each sweep step on an effect results in a model, so the 2^pe − 1 models beyond the intercept-only model can be obtained through a sequence of exactly 2^pe − 1 sweeps on effects. Assume that all 2^k possible models on the first k pivotal effects can be obtained in a sequence Sk of exactly 2^k − 1 sweeps, and that sweeping on the next effect produces a new model which adds that effect to the model produced by the sequence Sk. Then repeating the sequence Sk produces another 2^k − 1 distinct models (each including that effect). This gives a recursive algorithm for constructing the sequence; that is, Sk+1 = (Sk, k + 1, Sk), so S1 = (1), S2 = (1, 2, 1), S3 = (1, 2, 1, 3, 1, 2, 1), and so on.
The sequence of models produced is demonstrated in the following table:
Sk
0
1
121
1213121
121312141213121
...
, ,
k
0
1
2
3
4
...
Sequence of models produced
Only intercept
(1)
(1),(12),(2)
(1),(12),(2),(23),(123),(13),(3)
(1),(12),(2),(23),(123),(13),(3),(34),(134),(1234),(234),(24),(124),(14),(4)
...
All
models including the intercept model.
The second column indicates the indexes of effects which are pivoted on. Each parenthesis in the
third column represents a regression model. The numbers in the parentheses indicate the effects
which are included in that model.
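The recursion Sk+1 = (Sk, k + 1, Sk), together with the fact that a sweep toggles an effect in or out of the model, can be verified with a short sketch (function names are ours):

```python
def pivot_sequence(p):
    """Schatzoff-style pivot sequence: S_1 = (1), S_k = S_{k-1} + (k) + S_{k-1}.
    Sweeping effects in this order visits all 2**p - 1 non-empty models."""
    seq = []
    for k in range(1, p + 1):
        seq = seq + [k] + seq
    return seq

def models_visited(p):
    """Apply the sequence, toggling each swept effect in/out of the model."""
    current, seen = set(), []
    for k in pivot_sequence(p):
        current ^= {k}            # a sweep on an effect toggles its presence
        seen.append(frozenset(current))
    return seen

seq3 = pivot_sequence(3)
models3 = models_visited(3)
```

For p = 3 this reproduces exactly the row k = 3 of the table: (1),(12),(2),(23),(123),(13),(3), seven distinct models from seven sweeps.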
Hybrid Method
If pe > 20, we apply a hybrid method that combines the forward stepwise method with the all possible subsets method as follows:
Select the effects using the forward stepwise method with the same criterion chosen for best subsets. Say that ps is the number of effects chosen by the forward stepwise method.
Apply one of the following approaches, depending on the value of ps:
• If ps ≤ 20, do an exhaustive search of all possible subsets on these selected effects, as described above.
• If 20 < ps ≤ 40, select ps − 20 effects based on the p-values of type III sum of squares tests from all ps effects (see ANOVA in “Model evaluation” on p. 279) and enter them into the model, then do an exhaustive search of the remaining 20 effects via the method described above.
• If 40 < ps, do nothing and assume the best model is the one with these ps effects (with a warning message that the selected model is based on the forward stepwise method).
Model evaluation
The following output statistics are available.
ANOVA
Weighted total sum of squares:

SSt = Σi fi (yi − ȳ)², with d.f. = dft = N − 1

where d.f. means degrees of freedom. It is called “SS (sum of squares) for Corrected Total”.
Weighted residual sum of squares:

SSe = Σi fi (yi − ŷi)², with d.f. = dfe = N − pc

It is also called “SS for Error”.
Weighted regression sum of squares:

SSr = SSt − SSe, with d.f. = dfr = pc − 1

It is called “SS for Corrected Model” if there is an intercept.
Regression mean square error: MSr = SSr / dfr.
Residual mean square error: MSe = SSe / dfe.
F statistic for corrected model:

F = MSr / MSe

which follows an F distribution with degrees of freedom dfr and dfe, and the corresponding p-value can be calculated accordingly.
Type III sum of squares for each effect
To compute the type III SS for the effect j, the type III test matrix Lj needs to be constructed first. Construction of Lj is based on the generating matrix H = (XᵀDX)⁻ XᵀDX, where D is the diagonal matrix of regression weights, such that Ljβ is estimable. It involves parameters only for the given effect and the effects containing the given effect. For type III analysis, Lj does not depend on the order of effects specified in the model. If such a matrix cannot be constructed, the effect is not testable. For each effect j, the type III SS is calculated as follows:

Sj = β̂ᵀ Ljᵀ (Lj G Ljᵀ)⁻ Lj β̂

where G = (XᵀDX)⁻.
F statistic for each effect
The SS for the effect j is also used to compute the F statistic for the hypothesis test H0: Ljβ = 0 as follows:

Fj = (Sj / rj) / (SSe / dfe)

where rj is the full row rank of Lj. It follows an F distribution with degrees of freedom rj and dfe, and the p-values can be calculated accordingly.
Model summary
Adjusted R square

adj. R² = 1 − (SSe / dfe) / (SSt / dft)

Model information criteria
Corrected Akaike information criterion (AICC):

AICC = N ln(SSe / N) + N (N + pc) / (N − pc − 2)
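As a concrete check of these two summary statistics, a small sketch (assuming the conventional N − 1 and N − p degrees of freedom and the corrected-AIC form N·ln(SSe/N) + N(N + p)/(N − p − 2); helper names are ours):

```python
import math

def adjusted_r2(sse, sst, n, p):
    """adj. R^2 = 1 - (1 - R^2) * (n - 1) / (n - p), p parameters incl. intercept."""
    r2 = 1.0 - sse / sst
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

def aicc(sse, n, p):
    """Corrected AIC: penalizes residual error plus model size."""
    return n * math.log(sse / n) + n * (n + p) / (n - p - 2)
```

For a fixed model size, a smaller residual sum of squares always gives a smaller AICC and a larger adjusted R², which is what drives the stepwise comparisons.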
Coefficients and statistical inference
After the model selection process, we can get the coefficients and related statistics from the swept
correlation matrix. The following statistics are computed based on the R matrix.
Unstandardized coefficient estimates

β̂j, for j = 1, …, pc − 1, obtained from the swept R matrix.

Standard errors of regression coefficients
The standard error of β̂j is σ̂(β̂j), the square root of the jth diagonal element of the parameter estimates covariance matrix.
Intercept estimation
The intercept is estimated by all other parameters in the model as

β̂0 = ȳ − Σj β̂j x̄j

The standard error of β̂0 is estimated by

σ̂(β̂0) = sqrt( MSe / N + Σk Σj x̄k x̄j ĉkj )

where ĉkj is the kth row and jth column element in the parameter estimates covariance matrix.
t statistics for regression coefficients

tj = β̂j / σ̂(β̂j), for j = 1, …, pc − 1, with degrees of freedom dfe = N − pc

and the p-value can be calculated accordingly.
100(1−α)% confidence intervals

β̂j ± t(1−α/2, dfe) σ̂(β̂j)
Note: For redundant parameters, the coefficient estimates are set to zero and standard errors, t
statistics, and confidence intervals are set to missing values.
Scoring
Predicted values

ŷ = xᵀβ̂

Diagnostics
The following values are computed to produce various diagnostic charts and tables.
Residuals

ek = yk − ŷk

Studentized residuals
This is the ratio of the residual to its standard error:

êk = ek / (s sqrt(1 − hk))

where s is the square root of the mean square error; that is, s = sqrt(MSe), and hk is the leverage value for the kth case (see below).
Cook’s distance

Dk = êk² hk / (pc (1 − hk))

where the “leverage” hk is the kth diagonal element of the hat matrix

H = W^(1/2) X (XᵀWX)⁻ Xᵀ W^(1/2)

A record with Cook’s distance larger than 4 / (N − pc) is considered influential (Fox, 1997).
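The leverage-based diagnostics can be sketched for the unweighted case (W = I) as follows; the frequency/regression weights of the full algorithm are omitted, and all names are illustrative.

```python
import numpy as np

def regression_diagnostics(X, y):
    """Hat-matrix leverages, studentized residuals, and Cook's distance
    for an OLS fit with intercept (unweighted sketch)."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    p = Z.shape[1]
    H = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T        # hat matrix
    h = np.diag(H)                               # leverages h_k
    e = y - H @ y                                # residuals
    s2 = float(e @ e) / (n - p)                  # residual mean square
    stud = e / np.sqrt(s2 * (1.0 - h))           # studentized residuals
    cook = stud**2 * h / (p * (1.0 - h))         # Cook's distance
    return h, stud, cook

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(size=30)
h, stud, cook = regression_diagnostics(X, y)
```

A useful sanity check: the leverages sum to the number of fitted parameters (the trace of the hat matrix), here 3.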
Predictor importance
We use the leave-one-out method to compute predictor importance, based on the residual sum of squares (SSe) obtained by removing one predictor at a time from the final full model.
If the final full model contains p predictors, x1, …, xp, then the predictor importance can be calculated as follows:
1. Set i = 1.
2. If i > p, go to step 5.
3. Do a sweep operation on the row(s) and column(s) associated with xi in the R matrix of the full final model.
4. Get the last diagonal element in the current R matrix and denote it r(i)yy. Then the predictor importance of xi is VIi = r(i)yy − ryy, where ryy is the last diagonal element of the full final model’s R matrix. Let i = i + 1, and go to step 2.
5. Compute the normalized predictor importance of xi:

VIi / Σ(j=1..p) VIj
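The leave-one-out computation can be imitated with straightforward refits in place of sweep operations. This is an illustrative sketch only; the production algorithm reuses the swept R matrix rather than refitting p sub-models.

```python
import numpy as np

def predictor_importance(X, y):
    """Leave-one-out importance: increase in residual SS when each predictor
    is removed from the full model, normalized to sum to 1."""
    n, p = X.shape

    def sse(cols):
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r = y - Z @ beta
        return float(r @ r)

    full = sse(range(p))
    raw = np.array([sse([j for j in range(p) if j != i]) - full
                    for i in range(p)])          # always >= 0 (nested models)
    return raw / raw.sum()

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=100)
vi = predictor_importance(X, y)
```

The strongest predictor (column 0) receives the largest normalized importance, and the irrelevant one (column 2) the smallest.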
References
Belsley, D. A., E. Kuh, and R. E. Welsch. 1980. Regression diagnostics: Identifying influential
data and sources of collinearity. New York: John Wiley and Sons.
Dempster, A. P. 1969. Elements of Continuous Multivariate Analysis. Reading, MA:
Addison-Wesley.
Fox, J. 1997. Applied Regression Analysis, Linear Models, and Related Methods. Thousand Oaks, CA: SAGE Publications, Inc.
Fox, J., and G. Monette. 1992. Generalized collinearity diagnostics. Journal of the American
Statistical Association, 87, 178–183.
Schatzoff, M., R. Tsao, and S. Fienberg. 1968. Efficient computing of all possible regressions.
Technometrics, 10, 769–779.
Velleman, P. F., and R. E. Welsch. 1981. Efficient computing of regression diagnostics. American
Statistician, 35, 234–242.
Neural Networks Algorithms
Neural networks predict a continuous or categorical target based on one or more predictors by
finding unknown and possibly complex patterns in the data.
For algorithms on enhancing model accuracy, enhancing model stability, or working with very
large datasets, see “Ensembles Algorithms” on p. 125.
Multilayer Perceptron
The multilayer perceptron (MLP) is a feed-forward, supervised learning network with up to two
hidden layers. The MLP network is a function of one or more predictors that minimizes the
prediction error of one or more targets. Predictors and targets can be a mix of categorical and
continuous fields.
Notation
The following notation is used for multilayer perceptrons unless otherwise stated:

x^m: Input vector, pattern m, m = 1, …, M.
y^m: Target vector, pattern m.
I: Number of layers, discounting the input layer.
Ji: Number of units in layer i, discounting the bias unit; J0 = P and JI = R.
ΓC: Set of categorical outputs.
ΓS: Set of continuous outputs.
Γh: Set of subvectors of y^m containing the 1-of-c coded hth categorical field.
a^m_{i,j}: Unit j of layer i, pattern m.
w_{i,j,k}: Weight leading from layer i−1, unit j to layer i, unit k. No weights lead into the bias unit of a layer; that is, there is no w_{i,j,0} for any j.
γi: Activation function for layer i, i = 1, …, I.
w: Weight vector containing all weights w_{i,j,k}.
Architecture
The general architecture for MLP networks is:

Input layer: J0 = P units, a^m_{0,1}, …, a^m_{0,P}, with a^m_{0,j} = x^m_j.
ith hidden layer: Ji units, a^m_{i,1}, …, a^m_{i,Ji}, with a^m_{i,k} = γi(c^m_{i,k}) and c^m_{i,k} = Σ(j=0..Ji−1) w_{i,j,k} a^m_{i−1,j}, where a^m_{i−1,0} = 1.
Output layer: JI = R units, a^m_{I,1}, …, a^m_{I,R}, with a^m_{I,k} = γI(c^m_{I,k}) and c^m_{I,k} = Σ(j=0..JI−1) w_{I,j,k} a^m_{I−1,j}, where a^m_{I−1,0} = 1.

Note that the pattern index and the bias term of each layer are not counted in the total number of units for that layer.
Activation Functions
Hyperbolic Tangent

γ(c) = tanh(c) = (e^c − e^−c) / (e^c + e^−c)

This function is used for hidden layers.
Identity

γ(c) = c

This function is used for the output layer when there are continuous targets.
Softmax

γ(c_{I,k}) = exp(c_{I,k}) / Σ(j∈Γh) exp(c_{I,j})

This function is used for the output layer when all targets are categorical.

Error Functions
Sum-of-Squares

E(w) = Σ(m=1..M) Σ(r=1..R) (y^m_r − a^m_{I,r})²

This function is used when there are continuous targets.
Cross-Entropy

E(w) = − Σ(m=1..M) Σ(r=1..R) y^m_r ln(a^m_{I,r})

This function is used when all targets are categorical.
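These activation and error functions can be stated directly in code. This sketch assumes the standard forms with no extra scaling constants; names are ours.

```python
import numpy as np

def tanh_act(c):
    """Hyperbolic tangent, used for hidden layers."""
    return np.tanh(c)

def identity_act(c):
    """Identity, used for the output layer with continuous targets."""
    return c

def softmax(c):
    """Softmax over the units of one categorical field (numerically stabilized)."""
    e = np.exp(c - np.max(c))
    return e / e.sum()

def sum_of_squares(y, a):
    """Sum-of-squares error over all patterns and output units."""
    return float(np.sum((y - a) ** 2))

def cross_entropy(y, a):
    """Cross-entropy error for 1-of-c coded targets."""
    return float(-np.sum(y * np.log(a)))
```

Softmax outputs are positive and sum to one, which is why it pairs naturally with the cross-entropy error for categorical targets.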
Expert Architecture Selection
Expert architecture selection determines the “best” number of hidden units in a single hidden layer.
A random sample is taken from the entire data set and split into training (70%) and testing samples
(30%). The size of the random sample is N = 1000; if the entire dataset has fewer than N records, all of them are used. If training and testing data sets are supplied separately, the random samples for training and testing should be taken from the respective datasets.
Given Kmin and Kmax , the algorithm is as follows.
1. Start with an initial network of k hidden units. The default is k = min(g(R,P), 20, h(R,P)), where g(R,P) is a heuristic function of the numbers of inputs and outputs (defined using ⌊x⌋, the largest integer less than or equal to x) and h(R,P) is the maximum number of hidden units that will not result in more weights than there are records in the entire training set.
If k < Kmin, set k = Kmin. Else if k > Kmax, set k = Kmax. Train this network once via the alternated
simulated annealing and training procedure (steps 1 to 5).
2. If k > Kmin, set DOWN=TRUE. Else if training error ratio > 0.01, DOWN=FALSE. Else stop and
report the initial network.
3. If DOWN=TRUE, remove the weakest hidden unit (see below); k=k−1. Else add a hidden unit;
k=k+1.
4. Using the previously fit weights as initial weights for the old weights and random weights for the
new weights, train the old and new weights for the network once through the alternated simulated
annealing and training procedure (steps 3 to 5) until the stopping conditions are met.
5. If the error on the test data has dropped:
If DOWN=FALSE: if k < Kmax and the training error has dropped but the error ratio is still above 0.01, return to step 3; else if k > Kmin, return to step 3; else stop and report the network with the minimum test error.
Else if DOWN=TRUE: if |k − k0| > 1, stop and report the network with the minimum test error; else if the training error ratio for k = k0 is bigger than 0.01, set DOWN=FALSE and k = k0, and return to step 3; else stop and report the initial network.
Else stop and report the network with the minimum test error.
If more than one network attains the minimum test error, choose the one with fewest hidden units.
If the resulting network from this procedure has a training error ratio (training error divided by the error from the model using the average of an output field to predict that field) bigger than 0.1, repeat the architecture selection with different initial weights until either the error ratio is ≤ 0.1 or the procedure has been repeated 5 times, then pick the network with the smallest test error.
Using this network with its weights as initial values, retrain the network on the entire training set.
The weakest hidden unit
For each hidden unit j, calculate the error on the test data when j is removed from the network.
The weakest hidden unit is the one having the smallest total test error upon its removal.
Training
The problem of estimating the weights consists of the following parts:
• Initializing the weights. Take a random sample and apply the alternated simulated annealing and training procedure on the random sample to derive the initial weights. Training in step 3 is performed using all default training parameters.
• Computing the derivative of the error function with respect to the weights. This is solved via the error backpropagation algorithm.
• Updating the estimated weights. This is solved by the gradient descent or scaled conjugate gradient method.
Alternated Simulated Annealing and Training
The following procedure uses simulated annealing and training alternately up to K1 times.
Simulated annealing is used to break out of the local minimum that training finds by perturbing
the local minimum K2 times. If break out is successful, simulated annealing sets a better initial
weight for the next training. We hope to find the global minimum by repeating this procedure K3
times. This procedure is rather expensive for large data sets, so it is only used on a random sample
to search for initial weights and in architecture selection. Let K1=K2=4, K3=3.
1. Randomly generate K2 weight vectors between [a0−a, a0+a], where a0=0 and a=0.5. Calculate
the training error for each weight vector. Pick the weights that give the minimum training error
as the initial weights.
2. Set k1=0.
3. Train the network with the specified initial weights. Call the trained weights w.
4. If the training error ratio ≤ 0.05, stop the k1 loop and use w as the result of the loop. Else set k1 = k1 + 1.
5. If k1 < K1, perturb the old weights w to form K2 new weight vectors by adding K2 different random noise vectors with components drawn from [−a(k1), a(k1)]. Let w̃ be the weights that give the minimum training error among all the perturbed weights. If the training error for w̃ is smaller than that for w, set the initial weights to be w̃ and return to step 3. Else stop and report w as the final result.
Else stop the k1 loop and use w as the result of the loop.
If the resulting weights have training error ratio bigger than 0.1, repeat this algorithm until either
the training error ratio is <=0.1 or the procedure is repeated K3 times, then pick the one with
smallest test error among the result of the k1 loops.
Error Backpropagation
Error backpropagation is used to compute the first partial derivatives of the error function with respect to the weights. First note that

γ′i(c) = 1 − γi(c)² for the hyperbolic tangent, and γ′I(c) = 1 for the identity.

The backpropagation algorithm follows:
For each i, j, k, set ∂E/∂w_{i,j,k} = 0.
For each m in group T; for each p = 1, …, JI: if cross-entropy error is used, let δ^m_{I,p} = a^m_{I,p} − y^m_p; otherwise let δ^m_{I,p} = γ′I(c^m_{I,p}) ∂E^m/∂a^m_{I,p}.
For each i = I, …, 1 (start from the output layer); for each j = 1, …, Ji; for each k = 0, …, Ji−1:
• Let δ^m_{i,j} = γ′i(c^m_{i,j}) e^m_{i,j}, where e^m_{i,j} is the accumulated error reaching unit j of layer i (for the output layer, it is the term above).
• Set ∂E/∂w_{i,k,j} = ∂E/∂w_{i,k,j} + δ^m_{i,j} a^m_{i−1,k}.
• If k > 0 and i > 1, set e^m_{i−1,k} = e^m_{i−1,k} + w_{i,k,j} δ^m_{i,j}.
This gives a vector of Σ(i=1..I) Ji (Ji−1 + 1) elements that form the gradient of E(w).
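A minimal one-hidden-layer version of the backpropagation recursion, with a finite-difference check of one derivative. The sum-of-squares error is used without a 1/2 factor, and all names are ours.

```python
import numpy as np

def mlp_forward(w1, w2, x):
    """One hidden tanh layer, identity output; appended 1.0 acts as the bias input."""
    h = np.tanh(w1 @ np.append(x, 1.0))      # hidden activations
    out = w2 @ np.append(h, 1.0)             # identity output layer
    return h, out

def mlp_backprop(w1, w2, x, y):
    """Gradient of E = sum((y - out)**2) for one pattern."""
    h, out = mlp_forward(w1, w2, x)
    delta_out = 2.0 * (out - y)                          # dE/d(output pre-activation)
    g2 = np.outer(delta_out, np.append(h, 1.0))          # output-layer gradient
    delta_h = (w2[:, :-1].T @ delta_out) * (1.0 - h**2)  # backpropagate through tanh
    g1 = np.outer(delta_h, np.append(x, 1.0))            # hidden-layer gradient
    return g1, g2

rng = np.random.default_rng(3)
w1, w2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 5))
x, y = rng.normal(size=2), rng.normal(size=2)
g1, g2 = mlp_backprop(w1, w2, x, y)

# finite-difference check of one weight's derivative
eps = 1e-6
w1p = w1.copy(); w1p[0, 0] += eps
err = lambda a, b: float(np.sum((y - mlp_forward(a, b, x)[1]) ** 2))
fd = (err(w1p, w2) - err(w1, w2)) / eps
```

The finite-difference estimate agrees with the backpropagated derivative, which is the standard way to validate a gradient implementation.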
Gradient Descent
Given the learning rate η (set to 0.4) and momentum rate α (set to 0.9), the gradient descent method is as follows.
1. Let k = 0. Initialize the weight vector to w^(0) and the learning rate to η. Let Δw^(0) = 0.
2. Read all data and find E(w^(k)) and its gradient g^(k). If g^(k) = 0, stop and report the current network.
3. If (Δw^(k))ᵀ g^(k) > 0, set Δw^(k) = 0. This step is to make sure that the steepest gradient descent direction dominates the weight change in the next step. Without this step, the weight change in the next step could be along the opposite direction of the steepest descent and hence, no matter how small η is, the error will not decrease.
4. Let Δw^(k+1) = −η g^(k) + α Δw^(k).
5. If E(w^(k) + Δw^(k+1)) < E(w^(k)), then set w^(k+1) = w^(k) + Δw^(k+1) and update the learning rate; else reduce the learning rate and return to step 3.
6. If a stopping rule is met, exit and report the network as stated in the stopping criteria. Else let k = k + 1 and return to step 2.
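Steps 1 to 6 can be sketched as plain momentum gradient descent with the uphill reset of step 3. The learning-rate adaptation of step 5 is omitted, and the reset condition Δwᵀg > 0 is our reading of the dominance check, stated as an assumption.

```python
import numpy as np

def gd_momentum(grad, w0, eta=0.4, alpha=0.9, steps=200):
    """Gradient descent with momentum; resets the momentum term whenever the
    previous change opposes the current descent direction (simplified sketch)."""
    w = np.asarray(w0, dtype=float)
    dw = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        if dw @ g > 0:              # previous change points uphill: reset momentum
            dw = np.zeros_like(w)
        dw = -eta * g + alpha * dw  # momentum update
        w = w + dw
    return w

# quadratic bowl E(w) = 0.5 * w^T A w with minimum at the origin
A = np.array([[2.0, 0.0], [0.0, 0.5]])
w0 = np.array([3.0, -2.0])
w_star = gd_momentum(lambda w: A @ w, w0)
```

On this convex quadratic, the iterates approach the minimizer at the origin; the reset keeps the momentum term from repeatedly overshooting.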
Model Update
Given the learning rates η0 (set to 0.4) and ηlow (set to 0.001), momentum rate α (set to 0.9), and learning rate decay factor β = (1/pK) ln(η0/ηlow), the gradient descent method for online and mini-batch training is as follows.
1. Let k = 0. Initialize the weight vector to w^(0) and the learning rate to η0. Let Δw^(0) = 0.
2. Read the records in the current group (the group is randomly chosen) and find E(w^(k)) and its gradient g^(k).
3. If (Δw^(k))ᵀ g^(k) > 0, set Δw^(k) = 0. This step is to make sure that the steepest gradient descent direction dominates the weight change in the next step. Without this step, the weight change in the next step could be along the opposite direction of the steepest descent and hence, no matter how small the learning rate is, the error will not decrease.
4. Let Δw^(k+1) = −ηk g^(k) + α Δw^(k).
5. If E(w^(k) + Δw^(k+1)) < E(w^(k)), then set w^(k+1) = w^(k) + Δw^(k+1).
6. Update the learning rate: ηk+1 = ηk e^(−β). If ηk+1 < ηlow, set ηk+1 = ηlow.
7. If a stopping rule is met, exit and report the network as stated in the stopping criteria. Else let k = k + 1 and return to step 2.
Scaled Conjugate Gradient
To begin, initialize the weight vector to w^(0), let N be the total number of weights, and set the scalars σ, λ, and λ̄ to their initial values (with λ > 0 and λ̄ = 0).
1. k = 0. Set the search direction p^(0) and the steepest descent direction r^(0) to −∇E(w^(0)), and set success = true.
2. If success = true, find the second-order information: σk = σ/|p^(k)|, s^(k) = (∇E(w^(k) + σk p^(k)) − ∇E(w^(k)))/σk, and δk = (p^(k))ᵗ s^(k), where the superscript t denotes the transpose.
3. Set δk = δk + (λ − λ̄)|p^(k)|².
4. If δk ≤ 0, make the Hessian positive definite: λ̄ = 2(λ − δk/|p^(k)|²), δk = −δk + λ|p^(k)|², λ = λ̄.
5. Calculate the step size: μk = (p^(k))ᵗ r^(k), αk = μk/δk.
6. Calculate the comparison parameter: Δk = 2δk (E(w^(k)) − E(w^(k) + αk p^(k))) / μk².
7. If Δk ≥ 0, the error can be reduced. Set w^(k+1) = w^(k) + αk p^(k), r^(k+1) = −∇E(w^(k+1)), λ̄ = 0, and success = true. If r^(k+1) = 0, return w^(k+1) as the final weight vector and exit. If k mod N = 0, restart the algorithm: p^(k+1) = r^(k+1); else set p^(k+1) = r^(k+1) + βk p^(k) with βk = (|r^(k+1)|² − (r^(k+1))ᵗ r^(k)) / μk. If Δk ≥ 0.75, reduce the scale parameter: λ = λ/4. Else (if Δk < 0): set λ̄ = λ and success = false.
8. If Δk < 0.25, increase the scale parameter: λ = λ + δk(1 − Δk)/|p^(k)|².
9. If success = false, return to step 2. Otherwise, if a stopping rule is met, exit and report the network as stated in the stopping criteria. Else set k = k + 1 and return to step 2.
Note: each iteration requires at least two data passes.
Stopping Rules
Training proceeds through at least one complete pass of the data. Then the search should be stopped according to the following criteria, checked in the listed order. When creating a new model, check them after completing an iteration. During a model update, check criteria 1, 3, 4, 5 and 6 after completing a data pass, and only check criterion 2 after an
a data pass when performing a model update. Let E1 denote the current minimum error and
K1 denote the iteration where it occurs for the training set, E2 and K2 are that for the overfit
prevention set, and K3=min(K1,K2).
1. At the end of each step compute the total error for the overfit prevention set. From step K2, if the
testing error does not decrease below E2 over the next n=1 steps, stop. Report the weights at step
K2. If there is no overfit prevention set, this criterion is not used for building a new model; for a
model update when there is no overfit prevention set, compute the total error for training data at
the end of each step. From step K1, if the training error does not decrease below E1 over the next
n=1 steps, stop. Report the weights at step K1.
2. The search has lasted beyond some maximum allotted time. For building a new model, simply
report the weights at step K3. For a model update, even though training stops before the
completion of current step, treat this as a complete step. Calculate current errors for training and
testing datasets and update E1, K1, E2, K2 correspondingly. Report the weights at step K3.
3. The search has lasted more than some maximum number of data passes. Report the weights
at step K3.
4. Stop if the relative change in training error is small: |E(w2) − E(w1)| ≤ ε |E(w1)| for a specified tolerance ε, where w1 and w2 are the weight vectors of two consecutive steps. Report weights at step K3.
5. The current training error ratio is small compared with the initial error: E(w) / E(ref) ≤ δ for a specified tolerance δ, where E(ref) is the total error from the model using the average of an output field to predict that field; E(ref) is calculated by using the output field average in the error function, and w is the weight vector of one step. Report weights at step K3.
6. The current accuracy meets a specified threshold. Accuracy is computed based on the overfit
prevention set if there is one, otherwise the training set.
Note: In criteria 4 and 5, the total error for whole training data is needed. For model updates,
these criteria will not be checked if there is an overfit prevention set.
Model Updates
When new records become available, the synaptic weights can be updated. The new records are
split into groups of the size R = min(M,2N,1000), where M is the number of training records and N
is the number of weights in the network. A single data pass is made through the new groups to
update the weights. If the last of the new groups has more than one-quarter of the records of a
normal group, then it is processed normally; otherwise, it remains in the internal buffer so that
these records can be used during the next update. Thus, after the last update there may be some
unused records remaining in the buffer that will be lost.
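The grouping rule for incremental updates can be sketched as follows (function and variable names are ours):

```python
def update_groups(n_new, m_train, n_weights, buffered=0):
    """Split n_new incoming records (plus any buffered ones) into groups of
    size R = min(M, 2N, 1000); a final short group is processed only if it
    holds more than a quarter of a normal group, otherwise it stays buffered."""
    R = min(m_train, 2 * n_weights, 1000)
    total = buffered + n_new
    n_full, last = divmod(total, R)
    groups = [R] * n_full
    if last > R // 4:
        groups.append(last)
        last = 0
    return groups, last   # group sizes processed now, records left in buffer

groups, leftover = update_groups(n_new=2600, m_train=5000, n_weights=400)
```

With M = 5000 and N = 400, the group size is R = min(5000, 800, 1000) = 800, so 2600 new records yield three full groups and a 200-record remainder that stays in the buffer (200 is not more than a quarter of 800).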
Radial Basis Function
A radial basis function (RBF) network is a feed-forward, supervised learning network with only
one hidden layer, called the radial basis function layer. The RBF network is a function of one or
more predictors that minimizes the prediction error of one or more targets. Predictors and targets
can be a mix of categorical and continuous fields.
Notation
The following notation is used throughout this chapter unless otherwise stated:

x^m: Input vector, pattern m, m = 1, …, M.
y^m: Target vector, pattern m.
I: Number of layers, discounting the input layer. For an RBF network, I = 2.
Ji: Number of units in layer i, discounting the bias unit; J0 = P and J2 = R. J1 is the number of RBF units.
φj(x): jth RBF unit for input x, j = 1, …, J1.
μj: Center of φj; it is P-dimensional.
σj: Width of φj; it is P-dimensional.
h: The RBF overlapping factor.
a^m_{i,j}: Unit j of layer i, pattern m.
w_{j,r}: Weight connecting the rth output unit and the jth hidden unit of the RBF layer.
Architecture
There are three layers in the RBF network:
Input layer: J0 = P units, a^m_{0,1}, …, a^m_{0,P}, with a^m_{0,j} = x^m_j.
RBF layer: J1 units, a^m_{1,1}, …, a^m_{1,J1}, with a^m_{1,j} = φj(x^m) and a^m_{1,0} = 1.
Output layer: J2 = R units, a^m_{2,1}, …, a^m_{2,R}, with a^m_{2,r} = Σ(j=0..J1) w_{j,r} a^m_{1,j}.
Error Function
Sum-of-squares error is used:

E(w) = Σ(m=1..M) Σ(r=1..R) (y^m_r − a^m_{2,r})²

The sum-of-squares error function with the identity activation function for the output layer can be used for both continuous and categorical targets. For continuous targets, a_{2,r}(x) approximates the conditional expectation of the target value, E[yr | x]. For categorical targets, a_{2,k}(x) approximates the posterior probability of class k, P(class k | x). Note: though the activations of a categorical field sum to 1 over all classes of that field, they may not lie in the range [0, 1].
Training
The network is trained in two stages:
1. Determine the basis functions by clustering methods. The center and width for each basis function are computed.
2. Determine the weights given the basis functions. For the given basis functions, compute the
ordinary least-squares regression estimates of the weights.
The simplicity of these computations allows the RBF network to be trained very quickly.
Determining Basis Functions
The two-step clustering algorithm is used to find the RBF centers and widths. For each cluster, the mean and standard deviation of each continuous field and the proportion of each category of each categorical field are derived. Using the results from clustering, the pth coordinate of the center of the jth RBF is set to the jth cluster mean of the pth input field if that field is continuous, and to the proportion of the category that the pth input field corresponds to if it is a dummy field of a categorical field. The width of the jth RBF is based on the jth cluster standard deviation sjp of the pth field, together with the overlapping factor h > 0, which controls the amount of overlap among the RBFs. Since some sjp may be zero, we use spherical shaped Gaussian bumps; that is, a common width σj for all predictors. If σj is zero for some j, it is replaced by a positive default; if all are zero, all of them are set to a common positive default.
When there are a large number of predictors, the squared distance from a record to a center can easily become very large, and hence φj(x) is practically zero for every record and every RBF unit if h is relatively small. This is especially bad for ORBF because there would be only a constant term in the model when this happens. To avoid this, σj is increased by setting the default overlapping factor h proportional to the number of inputs: h = 1 + 0.1P.
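A sketch of deriving centers and spherical widths from a given clustering. The exact width formula here (h times the mean within-cluster standard deviation) is an assumption, as is the Gaussian bump form exp(−‖x − μ‖²/(2σ²)); names are ours.

```python
import numpy as np

def rbf_from_clusters(X, labels, h=None):
    """Derive RBF centers (cluster means) and a common spherical width per
    cluster from a given clustering; h defaults to 1 + 0.1*P."""
    n, P = X.shape
    if h is None:
        h = 1.0 + 0.1 * P                        # default overlapping factor
    centers, widths = [], []
    for c in np.unique(labels):
        pts = X[labels == c]
        centers.append(pts.mean(axis=0))         # cluster mean per input field
        widths.append(h * pts.std(axis=0).mean())  # common spherical width (assumed)
    return np.array(centers), np.array(widths), h

def rbf_activations(X, centers, widths):
    """Gaussian bump activations for each record and RBF unit."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 0.3, size=(50, 2)), rng.normal(2, 0.3, size=(50, 2))])
labels = np.array([0] * 50 + [1] * 50)
centers, widths, h = rbf_from_clusters(X, labels)
act = rbf_activations(X, centers, widths)
```

Records near a cluster center activate its RBF strongly; records far from every center drive all activations toward zero, which is the failure mode the enlarged h guards against.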
Automatic Selection of Number of Basis Functions
The algorithm tries a reasonable range of numbers of hidden units and picks the “best”. By
default, the reasonable range [K1, K2] is determined by first using the two-step clustering method
to automatically find the number of clusters, K. Then set K1 = min(K, R) for ORBF and K1
=max{2, min(K, R)} for NRBF and K2=max(10, 2K, R).
If a test data set is specified, then the “best” model is the one with the smaller error on the test data. If there is no test data, the BIC (Bayesian information criterion) is used to select the “best” model. The BIC is defined as

BIC = M ln(MSE) + k ln(M)

where MSE is the mean squared error and k, the number of parameters in the model, is (P+1+R)J1 for NRBF and (P+1+R)J1 + R for ORBF.
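Assuming the standard BIC shape M·ln(MSE) + k·ln(M), model selection over candidate numbers of RBF units can be sketched as follows (helper names are ours):

```python
import numpy as np

def rbf_bic(mse, m, P, R, J1, orbf=False):
    """BIC = M*ln(MSE) + k*ln(M), with k = (P+1+R)*J1 (+R for ORBF)."""
    k = (P + 1 + R) * J1 + (R if orbf else 0)
    return m * np.log(mse) + k * np.log(m)

def pick_J1(mse_by_J1, m, P, R, orbf=False):
    """Choose the number of RBF units minimizing BIC over a candidate range."""
    scored = {J1: rbf_bic(mse, m, P, R, J1, orbf) for J1, mse in mse_by_J1.items()}
    return min(scored, key=scored.get)

# toy error curve: error improves quickly then flattens, so BIC favors a small J1
mse_by_J1 = {2: 4.0, 4: 1.0, 6: 0.9, 8: 0.88}
best = pick_J1(mse_by_J1, m=500, P=3, R=1, orbf=False)
```

Once the fit stops improving materially, the k·ln(M) penalty dominates and the smaller network wins.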
Model Updates
When new records become available, you can update the weights connecting the RBF layer and
output layer. Again, given the basis functions, updating the weights is a least-squares regression
problem. Thus, it is very fast.
For best results, the new records should have approximately the same distribution as the
original records.
Missing Values
The following options for handling missing values are available:
• Records with missing values are excluded listwise.
• Missing values are imputed. Continuous fields impute the average of the minimum and maximum observed values; categorical fields impute the most frequently occurring category.
Output Statistics
The following output statistics are available. Note that, for continuous fields, output statistics are
reported in terms of the rescaled values of the fields.
Accuracy
For continuous targets, it is

R² = 1 − Σm (y^m − ŷ^m)² / Σm (y^m − ỹ)²

where ŷ^m is the model’s prediction and ỹ is the prediction of the naïve model. Note that R² can never be greater than one, but it can be less than zero. For the naïve model, ỹ is the modal category for categorical targets and the mean for continuous targets.
For each categorical target, accuracy is the percentage of records for which the predicted value matches the observed value.
Predictor Importance
For more information, see the topic “Predictor Importance Algorithms” on p. 305.
Confidence
Confidence values for neural network predictions are calculated based on the type of output field
being predicted. Note that no confidence values are generated for numeric output fields.
Difference
The difference method calculates the confidence of a prediction by comparing the best match with
the second-best match as follows, depending on output field type and encoding used.
• Flag fields. Confidence is calculated as |2o − 1|, where o is the output activation
for the output unit.
• Set fields. With the standard encoding, confidence is calculated as o1 − o2, where o1 is
the activation of the output unit in the field’s group of units with the highest activation, and o2 is
the activation of the unit with the second-highest activation.
With binary set encoding, the sum of the errors comparing the output activation and the
encoded set value is calculated for the closest and second-closest matches, and the confidence
is calculated as (E2 − E1)/E2, where E2 is the error for the second-best match and E1 is the
error for the best match.
Simplemax
Simplemax returns the highest predicted probability as the confidence.
References
Bishop, C. M. 1995. Neural Networks for Pattern Recognition, 3rd ed. Oxford: Oxford University
Press.
Fine, T. L. 1999. Feedforward Neural Network Methodology, 3rd ed. New York: Springer-Verlag.
Haykin, S. 1998. Neural Networks: A Comprehensive Foundation, 2nd ed. New York: Macmillan
College Publishing.
Ripley, B. D. 1996. Pattern Recognition and Neural Networks. Cambridge: Cambridge University
Press.
Tao, K. K. 1993. A closer look at the radial basis function (RBF) networks. In: Conference
Record of the Twenty-Seventh Asilomar Conference on Signals, Systems, and Computers, A.
Singh, ed. Los Alamitos, Calif.: IEEE Comput. Soc. Press, 401–405.
Uykan, Z., C. Guzelis, M. E. Celebi, and H. N. Koivo. 2000. Analysis of input-output clustering
for determining centers of RBFN. IEEE Transactions on Neural Networks, 11, 851–858.
OPTIMAL BINNING Algorithms
The Optimal Binning procedure performs MDLP (minimal description length principle)
discretization of scale variables. This method divides a scale variable into a small number of
intervals, or bins, where each bin is mapped to a separate category of the discretized variable.
MDLP is a univariate, supervised discretization method. Without loss of generality, the
algorithm described in this document only considers one continuous attribute in relation to a
categorical guide variable — the discretization is “optimal” with respect to the categorical guide.
Therefore, the input data matrix S contains two columns, the scale variable A and categorical
guide C.
Optimal binning is applied in the Binning node when the binning method is set to Optimal.
Notation
The following notation is used throughout this chapter unless otherwise stated:
S: The input data matrix, containing a column of the scale variable A and a
column of the categorical guide C. Each row is a separate observation, or instance.
A: A scale variable, also called a continuous attribute.
S(i): The value of A for the ith instance in S.
N: The number of instances in S.
D: A set of all distinct values in S.
Si: A subset of S.
C: The categorical guide, or class attribute; it is assumed to have k categories, or classes.
T: A cut point that defines the boundary between two bins.
TA: A set of cut points.
Ent(S): The class entropy of S.
E(A, T, S): The class entropy of the partition induced by T on A.
Gain(A, T, S): The information gain of the cut point T on A.
n: A parameter denoting the number of cut points for the equal frequency method.
W: A weight attribute denoting the frequency of each instance. If the weight
values are not integer, they are rounded to the nearest whole numbers before
use. For example, 0.5 is rounded to 1, and 2.4 is rounded to 2. Instances
with missing weights or weights less than 0.5 are not used.
Simple MDLP
This section describes the supervised binning method (MDLP) discussed in Fayyad and Irani
(1993).
Class Entropy
Let there be k classes C1, ..., Ck and let P(Ci, S) be the proportion of instances in S that have
class Ci. The class entropy Ent(S) is defined as

Ent(S) = − Σi=1..k P(Ci, S) log2 P(Ci, S)
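The class entropy can be computed directly from per-class instance counts; a minimal sketch:

```python
from math import log2

def class_entropy(counts):
    """Ent(S) = -sum_i P(C_i, S) * log2 P(C_i, S), with 0*log(0) taken as 0.
    `counts` holds the number of instances of each class in S."""
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)
```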
Class Information Entropy
For an instance set S, a continuous attribute A, and a cut point T, let S1 ⊂ S be the subset of
instances in S with the values of A ≤ T, and S2 = S−S1. The class information entropy of the
partition induced by T, E(A, T; S), is defined as

E(A, T; S) = (|S1|/N) Ent(S1) + (|S2|/N) Ent(S2)
Information Gain
Given a set of instances S, a continuous attribute A, and a cut point T on A, the information
gain of a cut point T is

Gain(A, T; S) = Ent(S) − E(A, T; S)
MDLP Acceptance Criterion
The partition induced by a cut point T for a set S of N instances is accepted if and only if

Gain(A, T; S) > log2(N − 1)/N + Δ(A, T; S)/N

and it is rejected otherwise. Here

Δ(A, T; S) = log2(3^k − 2) − [k·Ent(S) − k1·Ent(S1) − k2·Ent(S2)]

in which ki is the number of classes in the subset Si of S.
Note: While the MDLP acceptance criterion uses the association between A and C to determine
cut points, it also tries to keep the creation of bins to a small number. Thus there are situations in
which a high association between A and C will result in no cut points. For example, consider the
following data:
D    Class 1    Class 2    Class 3
1    2          1          0
2    3          0          6

Then the potential cut point is T = 1. In this case, the information gain of T is compared with
the MDLP acceptance threshold.
Since 0.5916728 < 0.6530774, T is not accepted as a cut point, even though there is a clear
relationship between A and C.
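As a sketch, the acceptance test can be written directly from the criterion above; inputs are per-class instance counts for S1 and S2 (illustrative, not the Modeler implementation):

```python
from math import log2

def ent(counts):
    """Class entropy from per-class counts, with 0*log(0) taken as 0."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

def mdlp_accepts(s1, s2):
    """Evaluate the MDLP criterion for a binary partition of S into S1, S2
    (Fayyad & Irani, 1993). Returns (accepted, gain, threshold)."""
    s = [a + b for a, b in zip(s1, s2)]
    n, n1, n2 = sum(s), sum(s1), sum(s2)
    gain = ent(s) - (n1 / n) * ent(s1) - (n2 / n) * ent(s2)
    k, k1, k2 = (sum(1 for c in x if c) for x in (s, s1, s2))
    delta = log2(3 ** k - 2) - (k * ent(s) - k1 * ent(s1) - k2 * ent(s2))
    threshold = log2(n - 1) / n + delta / n
    return gain > threshold, gain, threshold
```

On the example data above, the split at T = 1 is rejected; a clean two-class separation is accepted.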
Algorithm: BinaryDiscretization
1. Calculate E(A, di; S) for each distinct value di ∈ D for which di and di+1 do not belong to the same
class. A distinct value belongs to a class if all instances of this value have the same class.
2. Select a cut point T for which E(A, T; S) is minimum among all the candidate cut points; that is,

T = arg min di E(A, di; S)
Algorithm: MDLPCut
1. BinaryDiscretization(A, T; D, S).
2. Calculate Gain(A, T; S).
3. If Gain(A, T; S) > log2(N − 1)/N + Δ(A, T; S)/N, then
a) TA = TA ∪ {T}.
b) Split D into D1 and D2, and S into S1 and S2.
c) MDLPCut(A, TA; D1, S1).
d) MDLPCut(A, TA; D2, S2).
where S1 ⊂ S is the subset of instances in S with A-values ≤ T, and
S2 = S−S1. D1 and D2 are the sets of all distinct values in S1 and S2, respectively.
Also presented is the iterative version of MDLPCut(A, TA; D, S). The iterative implementation
requires a stack to store the D and S remaining to be cut.
First push D and S into the stack. Then, while (stack ≠ ∅) do
1. Obtain D and S by popping the stack.
2. BinaryDiscretization(A, T; D, S).
3. Calculate Gain(A, T; S).
4. If Gain(A, T; S) > log2(N − 1)/N + Δ(A, T; S)/N, then
i) TA = TA ∪ {T}.
ii) Split D into D1 and D2, and S into S1 and S2.
iii) Push D1 and S1 into the stack.
iv) Push D2 and S2 into the stack.
Note: In practice, all operations within the algorithm are based on a global matrix M. Its element,
mij, denotes the total number of instances that have value di ∈ D and belong to the jth class in S.
In addition, D is sorted in ascending order. Therefore, we do not need to push D and S themselves
onto the stack, but only the two integers that denote the bounds of D.
Algorithm: SimpleMDLP
1. Sort the set S with N instances by the value A in ascending order.
2. Find a set of all distinct values, D, in S.
3. TA = ∅.
4. MDLPCut(A, TA; D, S)
5. Sort the set TA in ascending order, and output TA.
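The recursive procedure can be sketched end-to-end as follows. This illustration evaluates every distinct value as a candidate cut (BinaryDiscretization restricts candidates to class boundary points) and is not the Modeler implementation:

```python
from math import log2

def _ent(counts):
    """Class entropy from per-class counts."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

def _counts(sub, classes):
    """Per-class instance counts over (value, class) pairs."""
    return [sum(1 for _, c in sub if c == cl) for cl in classes]

def mdlp_cuts(pairs):
    """Return the sorted cut-point set TA for (value, class) pairs."""
    pairs = sorted(pairs)
    classes = sorted({c for _, c in pairs})
    ta = []

    def cut(sub):
        vals = sorted({v for v, _ in sub})
        best = None
        for i in range(len(vals) - 1):  # binary discretization: min E(A, t; S)
            t = vals[i]
            s1 = [p for p in sub if p[0] <= t]
            s2 = [p for p in sub if p[0] > t]
            e = (len(s1) * _ent(_counts(s1, classes)) +
                 len(s2) * _ent(_counts(s2, classes))) / len(sub)
            if best is None or e < best[0]:
                best = (e, t, s1, s2)
        if best is None:
            return
        e, t, s1, s2 = best
        cs, c1, c2 = (_counts(x, classes) for x in (sub, s1, s2))
        gain = _ent(cs) - e
        k, k1, k2 = (sum(1 for c in x if c) for x in (cs, c1, c2))
        delta = log2(3 ** k - 2) - (k * _ent(cs) - k1 * _ent(c1) - k2 * _ent(c2))
        n = len(sub)
        if gain > log2(n - 1) / n + delta / n:  # MDLP acceptance criterion
            ta.append(t)
            cut(s1)
            cut(s2)

    cut(pairs)
    return sorted(ta)
```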
Hybrid MDLP
When the set D of distinct values in S is large, the computational cost to calculate E(A, di; S)
for each di ∈ D is large. In order to reduce the computational cost, the unsupervised equal
frequency binning method is used to reduce the size of D and obtain a subset Def ⊆ D. Then the
MDLPCut(A, TA; Def, S) algorithm is applied to obtain the final cut point set TA.
Algorithm: EqualFrequency
It divides a continuous attribute A into n bins where each bin contains N/n instances. n is a
user-specified parameter, where 1 < n < N.
1. Sort the set S with N instances by the value A in ascending order.
2. Def = ∅.
3. i = 1.
4. Use the empirical percentile method to generate dp,i, which denotes the (100·i/n)th percentile.
5. Def = Def ∪ {dp,i}; i = i + 1.
6. If i ≤ n, then go to step 4.
7. Delete the duplicate values in the set Def.
Note: If, for example, there are many occurrences of a single value of A, the equal frequency
criterion may not be met. In this case, no cut points are produced.
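A minimal sketch of the candidate-reduction step, assuming a simple order-statistic rule for the empirical percentiles:

```python
def equal_frequency_candidates(values, n):
    """Reduce the distinct-value set via equal-frequency binning: take
    order statistics at fractions i/n of the sorted data as candidate cut
    points, then drop duplicates (the exact empirical percentile rule
    here is an assumption)."""
    s = sorted(values)
    big_n = len(s)
    d_ef = [s[big_n * i // n] for i in range(1, n)]
    return sorted(set(d_ef))
```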
Algorithm: HybridMDLP
1. D = ∅;
303
OPTIMAL BINNING Algorithms
2. EqualFrequency(A, n, D; S).
3. TA = ∅.
4. MDLPCut(A, TA; D, S).
5. Output TA.
Model Entropy
The model entropy is a measure of the predictive accuracy of an attribute A binned on the class
variable C. Given a set of instances S, suppose that A is discretized into I bins given C, where
the ith bin has the value Ai. Letting Si ⊂ S be the subset of instances in S with the value Ai, the
model entropy is defined as:

Em = Σi=1..I P(Si) Ent(Si)

where P(Si) = |Si|/N and Ent(Si) is the class entropy of Si.
Merging Sparsely Populated Bins
Occasionally, the procedure may produce bins with very few cases. The following strategy deletes
these pseudo cut points:
E For a given variable, suppose that the algorithm found nfinal cut points, and thus nfinal+1 bins. For
bins i = 2, ..., nfinal (the second lowest-valued bin through the second highest-valued bin), compute

sizeof(bini) / min(sizeof(bini−1), sizeof(bini+1))

where sizeof(bin) is the number of cases in the bin.
E When this value is less than a user-specified merging threshold, bini is considered sparsely populated
and is merged with bini−1 or bini+1, whichever has the lower class information entropy. For more
information, see the topic “Class Information Entropy ” on p. 300.
The procedure makes a single pass through the bins.
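The single merging pass can be sketched as follows; bin sizes and entropies are illustrative inputs, and the merged bin's entropy is a stand-in rather than a recomputation:

```python
def merge_sparse_bins(bin_sizes, bin_entropies, threshold):
    """Merge any interior bin whose size, relative to the smaller of its
    neighbors, falls below `threshold`, into the neighbor with the lower
    class information entropy. Returns the merged bin sizes."""
    sizes = list(bin_sizes)
    ents = list(bin_entropies)
    i = 1
    while i < len(sizes) - 1:
        if sizes[i] / min(sizes[i - 1], sizes[i + 1]) < threshold:
            j = i - 1 if ents[i - 1] <= ents[i + 1] else i + 1
            k = min(i, j)
            sizes[k] = sizes[k] + sizes[k + 1]
            ents[k] = min(ents[k], ents[k + 1])  # stand-in; the real pass recomputes entropy
            del sizes[k + 1]
            del ents[k + 1]
        else:
            i += 1
    return sizes
```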
Blank Handling
In optimal binning, blanks are handled in pairwise fashion. That is, for every pair of fields
{binning field, target field}, all records with valid values for both fields are used to bin that
specific binning field, regardless of any blanks that may exist in other fields to be binned.
References
Fayyad, U., and K. Irani. 1993. Multi-interval discretization of continuous-value attributes for
classification learning. In: Proceedings of the Thirteenth International Joint Conference on
Artificial Intelligence, San Mateo, CA: Morgan Kaufmann, 1022–1027.
Dougherty, J., R. Kohavi, and M. Sahami. 1995. Supervised and unsupervised discretization
of continuous features. In: Proceedings of the Twelfth International Conference on Machine
Learning, Los Altos, CA: Morgan Kaufmann, 194–202.
Liu, H., F. Hussain, C. L. Tan, and M. Dash. 2002. Discretization: An Enabling Technique. Data
Mining and Knowledge Discovery, 6, 393–423.
Predictor Importance Algorithms
Predictor importance can be determined by computing the reduction in variance of the target
attributable to each predictor, via a sensitivity analysis. This method of computing predictor
importance is used in the following models:
• Neural Networks
• C5.0
• C&RT
• QUEST
• CHAID
• Regression
• Logistic
• Discriminant
• GenLin
• SVM
• Bayesian Networks
Notation
The following notation is used throughout this chapter unless otherwise stated:
Y: Target
Xj: Predictor, where j = 1, ..., k
k: The number of predictors
f(X1, ..., Xk): Model for Y based on predictors X1 through Xk
Variance Based Method
Predictors are ranked according to the sensitivity measure defined as follows:

Si = V(E(Y|Xi)) / V(Y)

where V(Y) is the unconditional output variance. In the numerator, the expectation operator E
calls for an integral over X−i; that is, over all factors but Xi; then the variance operator V implies
a further integral over Xi.
Predictor importance is then computed as the normalized sensitivity,

VIi = Si / Σj=1..k Sj
Saltelli et al. (2004) show that Si is the proper measure of sensitivity to rank the predictors in order
of importance for any combination of interaction and non-orthogonality among predictors.
The importance measure Si is the first-order sensitivity measure, which is accurate if the set of
the input factors (X1 , X2 ,…, Xk) is orthogonal/independent (a property of the factors), and
the model is additive; that is, the model does not include interactions (a property of the model)
between the input factors. For any combination of interaction and non-orthogonality among
factors, Saltelli (2004) pointed out that Si is still the proper measure of sensitivity to rank the
input factors in order of importance, but there is a risk of inaccuracy due to the presence of
interactions or/and non-orthogonality. For better estimation of Si, the size of the dataset should
be a few hundred at least. Otherwise, Si may be biased heavily. In this case, the importance
measure can be improved by bootstrapping.
Computation
In the orthogonal case, it is straightforward to estimate the conditional variances by computing
the multidimensional integrals in the space of the input factors, via Monte Carlo methods as
follows.
Let us start with two input sample matrices M1 and M2, each of dimension N × k,
where N is the sample size of the Monte Carlo estimate, which can vary from a few hundred to one
thousand. Each row is an input sample. From M1 and M2, we can build a third matrix Nj,
whose jth column is taken from M1 and whose remaining columns are taken from M2.
We may think of M1 as the “sample” matrix, M2 as the “resample” matrix, and Nj as the matrix
where all factors except Xj are resampled. The following equations describe how to obtain the
variances (Saltelli 2002). The ‘hat’ denotes the numeric estimates. Writing yr for the model output
on the rth row of M1 and y(j)r for the output on the rth row of Nj,

Ŝj = (Ûj − Ê²(Y)) / V̂(Y)

where

Ûj = (1/(N − 1)) Σr yr y(j)r

and

Ê(Y) = (1/N) Σr yr,  V̂(Y) = (1/N) Σr yr² − Ê²(Y)
When the target is continuous, we simply follow the accumulation steps of variance and
expectations. For a categorical target, the accumulation steps are for each category of Y. For each
input factor, Ŝi is a vector with an element for each category of Y. The average of the elements of
Ŝi is used as the estimation of importance of the ith input factor on Y.
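Under the assumption of independent uniform inputs, the sample/resample scheme can be sketched as follows (a Monte Carlo illustration, not the Modeler implementation):

```python
import random

def first_order_sensitivity(f, k, j, n=2000, rng=None):
    """Monte Carlo estimate of S_j = V(E(Y|X_j)) / V(Y) via the
    sample/resample scheme (Saltelli, 2002), with U(0,1) inputs."""
    rng = rng or random.Random(0)
    m1 = [[rng.random() for _ in range(k)] for _ in range(n)]
    m2 = [[rng.random() for _ in range(k)] for _ in range(n)]
    # N_j: keep column j from the sample matrix, resample the rest
    nj = [[m1[r][c] if c == j else m2[r][c] for c in range(k)]
          for r in range(n)]
    y1 = [f(x) for x in m1]
    yj = [f(x) for x in nj]
    e = sum(y1) / n                              # E(Y)
    v = sum(a * a for a in y1) / n - e * e       # V(Y)
    uj = sum(a * b for a, b in zip(y1, yj)) / (n - 1)
    return (uj - e * e) / v
```

For f(x) = x[0], essentially all output variance is explained by the first factor, so S_0 is near 1 and S_1 near 0.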
Convergence. In order to improve scalability, we use a subset of the records and predictors when
checking for convergence. Specifically, convergence is judged by a criterion that compares the
change in the estimated importance values against a desired average relative error, in which
D = 100 denotes the width of interest.
This specification focuses on “good” predictors; those whose importance values are larger than
average.
Record order. This method of computing predictor importance is desirable because it scales well to
large datasets, but the results are dependent upon the order of records in the dataset. However, with
large, randomly ordered datasets, you can expect the predictor importance results to be consistent.
References
Saltelli, A., S. Tarantola, F. Campolongo, and M. Ratto. 2004. Sensitivity Analysis in Practice:
A Guide to Assessing Scientific Models. John Wiley.
Saltelli, A. 2002. Making best use of model evaluations to compute sensitivity indices. Computer
Physics Communications, 145:2, 280–297.
QUEST Algorithms
Overview of QUEST
QUEST stands for Quick, Unbiased, Efficient Statistical Tree. It is a relatively new binary
tree-growing algorithm (Loh and Shih, 1997). It deals with split field selection and split-point
selection separately. The univariate split in QUEST performs approximately unbiased field
selection. That is, if all predictor fields are equally informative with respect to the target field,
QUEST selects any of the predictor fields with equal probability.
QUEST affords many of the advantages of C&RT, but, like C&RT, your trees can become
unwieldy. You can apply automatic cost-complexity pruning (see “Pruning” on p. 317) to a
QUEST tree to cut down its size. QUEST uses surrogate splitting to handle missing values. For
more information, see the topic “Blank Handling” on p. 313.
Primary Calculations
The calculations directly involved in building the model are described below.
Frequency Weight Fields
A frequency field represents the total number of observations represented by each record. It is
useful for analyzing aggregate data, in which a record represents more than one individual. The
sum of the values for a frequency field should always be equal to the total number of observations
in the sample. Note that output and statistics are the same whether you use a frequency field or
case-by-case data. The table below shows a hypothetical example, with the predictor fields sex
and employment and the target field response. The frequency field tells us, for example, that 10
employed men responded yes to the target question, and 19 unemployed women responded no.
Table 29-1
Dataset with frequency field

Sex  Employment  Response  Frequency
M    Y           Y         10
M    Y           N         17
M    N           Y         12
M    N           N         21
F    Y           Y         11
F    Y           N         15
F    N           Y         15
F    N           N         19

The use of a frequency field in this case allows us to process a table of 8 records instead of
case-by-case data, which would require 120 records.
QUEST does not support the use of case weights.
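The equivalence of frequency-weighted and case-by-case data can be illustrated by expanding the table above; any count computed on the expansion matches the weighted computation:

```python
def expand(records):
    """Expand frequency-weighted records into case-by-case data."""
    out = []
    for rec in records:
        rec = dict(rec)
        freq = rec.pop("Frequency")
        out.extend(dict(rec) for _ in range(freq))
    return out

# The 8-row table stands in for 120 case-by-case records
table = [
    {"Sex": "M", "Employment": "Y", "Response": "Y", "Frequency": 10},
    {"Sex": "M", "Employment": "Y", "Response": "N", "Frequency": 17},
    {"Sex": "M", "Employment": "N", "Response": "Y", "Frequency": 12},
    {"Sex": "M", "Employment": "N", "Response": "N", "Frequency": 21},
    {"Sex": "F", "Employment": "Y", "Response": "Y", "Frequency": 11},
    {"Sex": "F", "Employment": "Y", "Response": "N", "Frequency": 15},
    {"Sex": "F", "Employment": "N", "Response": "Y", "Frequency": 15},
    {"Sex": "F", "Employment": "N", "Response": "N", "Frequency": 19},
]
```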
Model Parameters
QUEST deals with field selection and split-point selection separately. Note that you can specify
the alpha level to be used in the Expert Options for QUEST—the default value is αnominal = 0.05.
Field Selection
1. For each predictor field X, if X is a symbolic (categorical), or nominal, field, compute the p value
of a Pearson chi-square test of independence between X and the dependent field. If X is a scale-level
(continuous) or ordinal field, use the F test to compute the p value.
2. Compare the smallest p value to a prespecified, Bonferroni-adjusted alpha level αB.
• If the smallest p value is less than αB, then select the corresponding predictor field to split
the node. Go on to step 3.
• If the smallest p value is not less than αB, then for each X that is scale-level (continuous), use
Levene’s test for unequal variances to compute a p value. (In other words, test whether X
has unequal variances at different levels of the target field.)
• Compare the smallest p value from Levene’s test to a new Bonferroni-adjusted alpha level αL.
• If the p value is less than αL, select the corresponding predictor field with the smallest p
value from Levene’s test to split the node.
• If the p value is greater than αL, the node is not split.
Split Point Selection—Scale-Level Predictor
1. If Y has only two categories, skip to the next step. Otherwise, group the categories of Y into
two superclasses as follows:
• Compute the mean of X for each category of Y.
• If all means are the same, the category with the largest weighted frequency is selected as one
superclass and all other categories are combined to form the other superclass. (If all means
are the same and there are multiple categories tied for largest weighted frequency, select
the category with the smallest index as one superclass and combine the other categories
to form the other.)
• If the means are not all the same, apply a two-mean clustering algorithm to those means to
obtain two superclasses of Y, with the initial cluster centers set at the two most extreme class
means. (This is a special case of k-means clustering, where k = 2. For more information, see
the topic “Overview” on p. 227.)
2. Apply quadratic discriminant analysis (QDA) to determine the split point. Notice that QDA
usually produces two cut-off points—choose the one that is closer to the sample mean of the
first superclass.
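The superclass grouping can be sketched as a 2-means pass over the class means, seeded with the two most extreme means (the all-means-equal tie rule is simplified here, since weighted frequencies are not modeled):

```python
def two_superclasses(class_means):
    """Group target categories into two superclasses by 2-means on the
    class means of X, seeded with the two most extreme means."""
    cats = list(class_means)
    if len(set(class_means.values())) == 1:
        # all means equal: simplified stand-in for the weighted-frequency rule
        return [cats[0]], cats[1:]
    lo = min(cats, key=lambda c: class_means[c])
    hi = max(cats, key=lambda c: class_means[c])
    c1, c2 = class_means[lo], class_means[hi]
    while True:
        g1 = [c for c in cats
              if abs(class_means[c] - c1) <= abs(class_means[c] - c2)]
        g2 = [c for c in cats if c not in g1]
        m1 = sum(class_means[c] for c in g1) / len(g1)
        m2 = sum(class_means[c] for c in g2) / len(g2)
        if (m1, m2) == (c1, c2):       # centers stable: converged
            return g1, g2
        c1, c2 = m1, m2
```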
Split Point Selection—Symbolic (Categorical) Predictor
QUEST first transforms the symbolic field into a continuous field ξ by assigning discriminant
coordinates to categories of the predictor. The derived field ξ is then split as if it were any other
continuous predictor as described above.
Chi-Square Test
The Pearson chi-square statistic is calculated as

X² = Σi=1..I Σj=1..J (nij − m̂ij)² / m̂ij

where nij is the observed cell frequency and m̂ij is the expected
cell frequency for cell (xn = i, yn = j) from the independence model as described below. The
corresponding p value is calculated as p = Pr(χ²d > X²), where χ²d follows a chi-square
distribution with d = (J − 1)(I − 1) degrees of freedom.
Expected Frequencies for Chi-Square Test
For models with no case weights, expected frequencies are calculated as

m̂ij = ni. n.j / n

where ni. = Σj nij, n.j = Σi nij, and n = Σi Σj nij.
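A sketch of the statistic with expected frequencies from the independence model:

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an I x J
    contingency table; expected counts are n_i. * n_.j / n."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    x2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
             for i in range(len(row)) for j in range(len(col)))
    df = (len(row) - 1) * (len(col) - 1)
    return x2, df
```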
F Test
Suppose for node t there are Jt classes of target field Y. The F statistic for continuous predictor X
is calculated as

F = [Σj Nf,j(t)(x̄j − x̄)² / (Jt − 1)] / [Σn∈t fn(xn − x̄j(n))² / (Nf(t) − Jt)]

where x̄j is the mean of X for records in node t with target class j, and x̄ is the overall mean of X
in node t. The corresponding p value is given by p = Pr(F(Jt − 1, Nf(t) − Jt) > F),
where F(Jt − 1, Nf(t) − Jt) follows an F distribution with degrees of freedom Jt − 1 and Nf(t) − Jt.
Levene’s Test
For continuous predictor X, calculate zn = |xn − x̄(t, yn)|, where x̄(t, yn) is the mean of X for
records in node t with target value yn. Levene’s F statistic for predictor X is the ANOVA F
statistic for zn.
Bonferroni Adjustment
The adjusted alpha level αB is calculated as the nominal value divided by the number of possible
comparisons.

For QUEST, the Bonferroni-adjusted alpha level αB for the initial predictor selection is

αB = αnominal / m

where m is the number of predictor fields in the model.

For the Levene test, the Bonferroni-adjusted alpha level αL is

αL = αnominal / (m + mc)

where mc is the number of continuous predictor fields.
Discriminant Coordinates
For categorical predictor X with values {b1,...,bI}, QUEST assigns a score value from a continuous
variable ξ to each category of X. The scores assigned are chosen to maximize the ratio of
between-class to within-class sum of squares of ξ for the target field classes.

For each record, transform X into a vector of dummy fields g = (g1, ..., gI)′, where gi = 1
if X = bi, and 0 otherwise.

Calculate the overall and class j means of g:

ḡ = Σn fn gn / Nf,  ḡj = Σn∈class j fn gn / Nf,j

where fn is the frequency weight for record n, gn is the dummy vector for record n, Nf is the
total sum of frequency weights for the training data, and Nf,j is the sum of frequency weights
for records with category j.

Calculate the following I × I matrices:

B = Σj Nf,j (ḡj − ḡ)(ḡj − ḡ)′
T = Σn fn (gn − ḡ)(gn − ḡ)′

Perform singular value decomposition on T to obtain T = QDQ′, where Q is an I × I orthogonal
matrix and D = diag(d1, ..., dI) such that d1 ≥ ... ≥ dI ≥ 0. Let D−1/2 = diag(d1−1/2, ..., dI−1/2),
where di−1/2 = 1/√di if di > 0, and 0 otherwise. Perform singular value decomposition on
D−1/2Q′BQD−1/2 to obtain its eigenvector a which is associated with its largest eigenvalue.

The largest discriminant coordinate of g is the projection ξ = a′D−1/2Q′g.
Quadratic Discriminant Analysis (QDA)
To determine the cutpoint for a continuous predictor, first group the categories of the target field Y
to form two superclasses, A and B, as described above.
Order the two superclasses by their variance in increasing order and denote
the variances by s²(1) ≤ s²(2), and the corresponding means by x̄(1), x̄(2). Let ε be a very small positive
number, say ε = 10−12. Set the cutpoint d based on s²(1) and ε: if s²(1) < ε, set
d = (x̄(1) + x̄(2))/2; otherwise, d is the QDA cut-off point that is closer to x̄(1).
Blank Handling
Records with missing values for the target field are ignored in building the tree model.
Surrogate splitting is used to handle blanks for predictor fields. If the best predictor field to be
used for a split has a blank or missing value at a particular node, another field that yields a split
similar to the predictor field in the context of that node is used as a surrogate for the predictor
field, and its value is used to assign the record to one of the child nodes.
For example, suppose that X* is the predictor field that defines the best split s* at node t. The
surrogate-splitting process finds another split s, the surrogate, based on another predictor field X
such that this split is most similar to s* at node t (for records with valid values for both predictors).
If a new record is to be predicted and it has a missing value on X* at node t, the surrogate split s is
applied instead. (Unless, of course, this record also has a missing value on X. In such a situation,
the next best surrogate is used, and so on, up to the limit of number of surrogates specified.)
In the interest of speed and memory conservation, only a limited number of surrogates is
identified for each split in the tree. If a record has missing values for the split field and all
surrogate fields, it is assigned to the child node with the higher weighted probability, calculated as

Nf,j(t) / Nf(t)

where Nf,j(t) is the sum of frequency weights for records in category j for node t, and Nf(t) is the
sum of frequency weights for all records in node t.
If the model was built using equal or user-specified priors, the priors are incorporated into the
calculation:

π(j) Nf,j(t) / (Nf,j pf(t))

where π(j) is the prior probability for category j, and pf(t) is the weighted probability of a record
being assigned to the node,

pf(t) = Σj π(j) Nf,j(t) / Nf,j

where Nf,j(t) is the sum of the frequency weights (or the number of records if no frequency
weights are defined) in node t belonging to category j, and Nf,j is the sum of frequency weights
for records belonging to category j in the entire training sample.
Predictive measure of association
Let SX (resp. SX(t)) be the set of learning cases (resp. learning cases in node t) that have
non-missing values of both X* and X. Let p(s* ∧ sX | t) be the probability of sending a case in
SX(t) to the same child by both s* and sX, and s̃X be the split with maximized probability
p(s* ∧ s̃X | t).

The predictive measure of association λ(s* ∧ s̃X | t) between s* and s̃X at node t is

λ(s* ∧ s̃X | t) = (min(pL, pR) − (1 − p(s* ∧ s̃X | t))) / min(pL, pR)

where pL (resp. pR) is the relative probability that the best split s* at node t sends a case with
non-missing value of X* to the left (resp. right) child node. The probability p(s* ∧ sX | t) is
computed over the cases in SX(t): for a categorical target it is the prior-weighted proportion of
cases sent to the same child by both splits, and for a continuous target it is the frequency-weighted
proportion, with the indicator function taking value 1 when both splits s* and sX send
the case n to the same child, and 0 otherwise.
Effect of Options
Stopping Rules
Stopping rules control how the algorithm decides when to stop splitting nodes in the tree. Tree
growth proceeds until every leaf node in the tree triggers at least one stopping rule. Any of the
following conditions will prevent a node from being split:
• The node is pure (all records have the same value for the target field).
• All records in the node have the same value for all predictor fields used by the model.
• The tree depth for the current node (the number of recursive node splits defining the current
node) is the maximum tree depth (default or user-specified).
• The number of records in the node is less than the minimum parent node size (default or
user-specified).
• The number of records in any of the child nodes resulting from the node’s best split is less
than the minimum child node size (default or user-specified).
Profits
Profits are numeric values associated with categories of a (symbolic) target field that can be used
to estimate the gain or loss associated with a segment. They define the relative value of each value
of the target field. Values are used in computing gains but not in tree growing.
Profit for each node in the tree is calculated as

Σj fj(t) Pj

where j is the target field category, fj(t) is the sum of frequency field values for all records in node
t with category j for the target field, and Pj is the user-defined profit value for category j.
Priors
Prior probabilities are numeric values that influence the misclassification rates for categories of
the target field. They specify the proportion of records expected to belong to each category of the
target field prior to the analysis. The values are involved both in tree growing and risk estimation.
There are three ways to derive prior probabilities.
Empirical Priors
By default, priors are calculated based on the training data. The prior probability assigned to each
target category is the weighted proportion of records in the training data belonging to that category,

π(j) = Nw,j / Nw

In tree-growing and class assignment, the Ns take both case weights and frequency weights
into account (if defined); in risk estimation, only frequency weights are included in calculating
empirical priors.
Equal Priors
Selecting equal priors sets the prior probability for each of the J categories to the same value,

π(j) = 1/J
User-Specified Priors
When user-specified priors are given, the specified values are used in the calculations involving
priors. The values specified for the priors must conform to the probability constraint: the sum of
priors for all categories must equal 1.0. If user-specified priors do not conform to this constraint,
adjusted priors are derived which preserve the proportions of the original priors but conform
to the constraint, using the formula

π′(j) = π(j) / Σk π(k)

where π′(j) is the adjusted prior for category j, and π(j) is the original user-specified prior for
category j.
Costs
If misclassification costs are specified, they are incorporated into split calculations by using
altered priors. The altered prior is defined as

π̃(j) = C(j) π(j) / Σj C(j) π(j)

where C(j) = Σi C(i|j).

Misclassification costs also affect risk estimates and predicted values, as described below (on p.
318 and on p. 319, respectively).
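A sketch of the altered-prior computation, assuming C(j) = Σi C(i|j) as above (data structures are illustrative):

```python
def altered_priors(priors, costs):
    """Fold misclassification costs into priors: the altered prior is
    C(j)p(j) / sum_j C(j)p(j), with C(j) = sum_i C(i|j).
    `costs[i][j]` holds C(i|j)."""
    cats = list(priors)
    c = {j: sum(costs[i][j] for i in cats) for j in cats}
    z = sum(c[j] * priors[j] for j in cats)
    return {j: c[j] * priors[j] / z for j in cats}
```

A category whose misclassifications are costly receives proportionally more prior weight, which biases splits toward separating it cleanly.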
Pruning
Pruning refers to the process of examining a fully grown tree and removing bottom-level splits
that do not contribute significantly to the accuracy of the tree. In pruning the tree, the software
tries to create the smallest tree whose misclassification risk is not too much greater than that of the
largest tree possible. It removes a tree branch if the cost associated with having a more complex
tree exceeds the gain associated with having another level of nodes (branch).
It uses an index that measures both the misclassification risk and the complexity of the tree,
since we want to minimize both of these things. This cost-complexity measure is defined as
follows:

Rα(T) = R(T) + α|T̃|

R(T) is the misclassification risk of tree T, and |T̃| is the number of terminal nodes for tree T. The
term α represents the complexity cost per terminal node for the tree. (Note that the value of α is
calculated by the algorithm during pruning.)
Any tree you might generate has a maximum size (Tmax), in which each terminal node contains
only one record. With no complexity cost (α = 0), the maximum tree has the lowest risk, since
every record is perfectly predicted. Thus, the larger the value of α, the fewer the number of
terminal nodes in T(α), where T(α) is the tree with the lowest complexity cost for the given α. As
α increases from 0, it produces a finite sequence of subtrees (T1, T2, T3, ...), each with progressively
fewer terminal nodes. Cost-complexity pruning works by removing the weakest split.
The following equations represent the cost complexity for {t}, which is any single node, and
for Tt, the subbranch of {t}:

Rα({t}) = R(t) + α
Rα(Tt) = R(Tt) + α|T̃t|

If Rα(Tt) is less than Rα({t}), then the branch Tt has a smaller cost complexity than the single
node {t}.

The tree-growing process ensures that Rα(Tt) < Rα({t}) for α = 0. As α increases from 0,
both Rα({t}) and Rα(Tt) grow linearly, with the latter growing at a faster rate. Eventually, you
will reach a threshold α’, such that Rα(Tt) > Rα({t}) for all α > α’. This means that when α
grows larger than α’, the cost complexity of the tree can be reduced if we cut the subbranch Tt
under {t}. Determining the threshold is a simple computation. You can solve the first inequality,
Rα({t}) ≤ Rα(Tt), to find the largest value of α for which the inequality holds, which is also
represented by g(t). You end up with

g(t) = (R(t) − R(Tt)) / (|T̃t| − 1)
You can define the weakest link t̄ in tree T as the node that has the smallest value of g(t):

g(t̄) = mint g(t)

Therefore, as α increases, t̄ is the first node for which Rα({t}) = Rα(Tt). At that point, {t̄}
becomes preferable to Tt̄, and the subbranch is pruned.
With that background established, the pruning algorithm follows these steps:
E Set α1 = 0 and start with the tree T1 = T(0), the fully grown tree.
E Increase α until a branch is pruned. Prune the branch from the tree, and calculate the risk estimate
of the pruned tree.
E Repeat the previous step until only the root node is left, yielding a series of trees, T1, T2, ... Tk.
E If the standard error rule option is selected, choose the smallest tree Topt for which
R(Topt) ≤ R(Tk*) + m × SE(R(Tk*)), where Tk* is the tree with the smallest risk estimate and m
is the standard error multiplier.
E If the standard error rule option is not selected, then the tree with the smallest risk estimate R(T)
is selected.
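The threshold g(t) and the weakest-link choice can be sketched as:

```python
def g(node_risk, branch_risk, branch_leaves):
    """g(t) = (R(t) - R(T_t)) / (|T_t| - 1): the complexity cost at which
    pruning the branch under t becomes worthwhile."""
    return (node_risk - branch_risk) / (branch_leaves - 1)

def weakest_link(nodes):
    """Return the node id with the smallest g(t); it is pruned first.
    `nodes` maps node id -> (R(t), R(T_t), |T_t|)."""
    return min(nodes, key=lambda t: g(*nodes[t]))
```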
Secondary Calculations
Secondary calculations are not directly related to building the model but give you information
about the model and its performance.
Risk Estimates
Risk estimates describe the risk of error in predicted values for specific nodes of the tree and for
the tree as a whole.
Risk Estimates for Symbolic Target Field
For classification trees (with a symbolic target field), the risk estimate r(t) of a node t is computed
as

r(t) = (1/Nf) Σj C(j*(t)|j) Nf,j(t)

where C(j*(t)|j) is the misclassification cost of classifying a record with target value j as j*(t),
Nf,j(t) is the sum of the frequency weights for records in node t in category j (or the number of
records if no frequency weights are defined), and Nf is the sum of frequency weights for all
records in the training data.

If the model uses user-specified priors, the risk estimate is calculated as

r(t) = Σj C(j*(t)|j) π(j) Nf,j(t) / Nf,j
Gain Summary
The gain summary provides descriptive statistics for the terminal nodes of a tree.
If your target field is continuous (scale), the gain summary shows the weighted mean of the
target value for each terminal node,

ȳ(t) = (1 / Nw(t)) Σ(i in t) wi yi

If your target field is symbolic (categorical), the gain summary shows the weighted percentage of
records in a selected target category j,

g(t; j) = (1 / Nw(t)) Σ(i in t) wi xi(j)

where xi(j) = 1 if record xi is in target category j, and 0 otherwise, and Nw(t) is the sum of
weights for records in node t. If profits are defined for the tree, the gain is the average profit
value for each terminal node,

profit(t) = (1 / Nw(t)) Σ(i in t) wi P(xi)

where P(xi) is the profit value assigned to the target value observed in record xi.
Generated Model/Scoring
Calculations done by the QUEST generated model are described below.
Predicted Values
New records are scored by following the tree splits to a terminal node of the tree. Each terminal
node has a particular predicted value associated with it, determined as follows:
For trees with a symbolic target field, each terminal node’s predicted category is the category with
the lowest weighted cost for the node. This weighted cost is calculated as

cost(i|t) = Σj C(i|j) p(j|t)

where C(i|j) is the user-specified misclassification cost for classifying a record as category i when
it is actually category j, and p(j|t) is the conditional weighted probability of a record being in
category j given that it is in node t, defined as

p(j|t) = p(j, t) / Σj p(j, t),  where  p(j, t) = π(j) Nw,j(t) / Nw,j

where π(j) is the prior probability for category j, Nw,j(t) is the weighted number of records in node
t with category j (or the number of records if no frequency or case weights are defined),
and Nw,j is the weighted number of records in category j (any node).
Confidence

Confidence for a scored record is the proportion of weighted records in the training data in the
scored record’s assigned terminal node that belong to the predicted category, modified by the
Laplace correction:

confidence = (Nw,j*(t) + 1) / (Nw(t) + k)

where Nw,j*(t) is the weighted number of records in node t with the predicted category j*, Nw(t)
is the weighted number of records in node t, and k is the number of target categories.
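The Laplace-corrected confidence can be computed directly from the node counts. A minimal sketch (the function and argument names are illustrative, not from the guide):

```python
def laplace_confidence(n_predicted, n_total, k):
    """Laplace-corrected confidence for a terminal node:
    (N_pred + 1) / (N_node + k), where N_pred is the (weighted) count of
    records in the node with the predicted category, N_node the (weighted)
    count of all records in the node, and k the number of target categories.
    """
    return (n_predicted + 1.0) / (n_total + k)

# 45 of the 50 training records in the node carry the predicted category,
# with a 2-category target: (45 + 1) / (50 + 2) = 0.8846...
conf = laplace_confidence(45, 50, 2)
```

The correction pulls confidences for small nodes toward 1/k, so a one-record pure node does not claim 100% confidence.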
Blank Handling
In classification of new records, blanks are handled as they are during tree growth, using
surrogates where possible, and splitting based on weighted probabilities where necessary. For
more information, see the topic “Blank Handling” on p. 313.
Linear Regression Algorithms
Overview
This procedure performs ordinary least squares multiple linear regression with four methods for
entry and removal of variables (Neter, Wasserman, and Kutner, 1990).
Primary Calculations
Notation
The following notation is used throughout this chapter unless otherwise stated:
yi     Output field value for record i, with variance σ²/gi
ci     Case weight for record i; in IBM® SPSS® Modeler, ci = 1
gi     Regression weight for record i; gi = 1 if regression weight is not specified
n      Number of distinct records
W      The sum of weights across records, W = Σi ci gi
p      Number of input fields
C      Sum of case weights, C = Σi ci
xki    The value of the kth input field for record i
x̄k     Sample mean for the kth input field, x̄k = Σi ci gi xki / W
ȳ      Sample mean for the output field, ȳ = Σi ci gi yi / W
Skj    Sample covariance for input fields Xk and Xj
Syy    Sample variance for output field Y
Sky    Sample covariance for Xk and Y
p*     Number of coefficients in the model; p* = p if the intercept is not included, otherwise p* = p + 1
R      Sample correlation matrix for X1, ..., Xp and Y
Model Parameters

The summary statistics (the means x̄k and ȳ and the covariances Skj, Sky, and Syy) are computed
using provisional means algorithms to update the values as each record is read:

Wi = Wi−1 + ci gi

x̄k(i) = x̄k(i−1) + (ci gi / Wi)(xki − x̄k(i−1))

where, if the intercept is included,

Skj(i) = Skj(i−1) + ci gi (xki − x̄k(i−1))(xji − x̄j(i))

or, if the intercept is not included,

Skj(i) = Skj(i−1) + ci gi xki xji

where Wi is the cumulative weight up to record i, and x̄k(i) is the estimate of x̄k up to record i.
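A single provisional-means update of this kind can be sketched as follows; the helper name and calling convention are illustrative:

```python
def update_provisional_mean(mean_prev, w_cum_prev, value, weight):
    """One provisional-means step as a record is read:
    W_i = W_{i-1} + w_i
    m_i = m_{i-1} + (w_i / W_i) * (x_i - m_{i-1})
    Returns the updated (mean, cumulative weight)."""
    w_cum = w_cum_prev + weight
    mean = mean_prev + (weight / w_cum) * (value - mean_prev)
    return mean, w_cum

# Stream three equally weighted values; the running mean matches the
# batch mean (2 + 4 + 9) / 3 = 5 without storing the records.
mean, w = 0.0, 0.0
for x in [2.0, 4.0, 9.0]:
    mean, w = update_provisional_mean(mean, w, x, 1.0)
```

The one-pass form is what lets the node compute its summary statistics while reading each record only once.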
For a regression model of the form

yi = β0 + β1 x1i + β2 x2i + ... + βp xpi + ei

sweep operations are used to compute the least squares estimates b of β and the associated
regression statistics (Dempster, 1969). The sweeping starts with the correlation matrix R, where

rkj = Skj / sqrt(Skk Sjj)  and  rky = Sky / sqrt(Skk Syy)

Let R̃ be the new matrix produced by sweeping on the kth row and column of R. The elements of
R̃ are

r̃kk = 1 / rkk
r̃ik = −rik / rkk,  i ≠ k
r̃kj = rkj / rkk,  j ≠ k
r̃ij = rij − rik rkj / rkk,  i ≠ k and j ≠ k

If the above sweep operations are repeatedly applied to each row of R11, where R11 contains the
input fields in the equation at the current step, the result is

R̃ = | R11⁻¹         R11⁻¹ R12           |
    | −R21 R11⁻¹    R22 − R21 R11⁻¹ R12 |

The last row of R̃ contains the standardized coefficients (also called beta), and the swept matrix
can be used to obtain the partial correlations for the variables not in the equation, controlling for
the variables already in the equation. Note that this routine is its own inverse; that is, exactly the
same operations are performed to remove an input field as to enter it.

The unstandardized coefficient estimates are calculated as

bk = betak sqrt(Syy / Skk)

and the intercept b0, if included in the model, is calculated as

b0 = ȳ − b1 x̄1 − ... − bp x̄p
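The sweep operation described above can be sketched on a plain list-of-lists matrix. This is an illustrative implementation of one common self-inverse sign convention, not the production code:

```python
def sweep(a, k):
    """Sweep a square matrix on row/column k:
    new[k][k] = 1/a[k][k]
    new[i][k] = -a[i][k] / a[k][k]          (i != k)
    new[k][j] =  a[k][j] / a[k][k]          (j != k)
    new[i][j] =  a[i][j] - a[i][k]*a[k][j]/a[k][k]   (i, j != k)
    """
    n = len(a)
    piv = a[k][k]
    out = [row[:] for row in a]
    for i in range(n):
        for j in range(n):
            if i == k and j == k:
                out[i][j] = 1.0 / piv
            elif i == k:
                out[i][j] = a[k][j] / piv
            elif j == k:
                out[i][j] = -a[i][k] / piv
            else:
                out[i][j] = a[i][j] - a[i][k] * a[k][j] / piv
    return out

# Applying the same sweep twice restores the original matrix, mirroring
# the "its own inverse" property used for entering/removing fields.
m = [[2.0, 0.5], [0.5, 1.0]]
restored = sweep(sweep(m, 0), 0)
```

Sweeping the correlation matrix on the rows of the entered fields is what turns the stored summary statistics into coefficient estimates without refitting from scratch.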
Automatic Field Selection

Let ṽij be the element in the current swept matrix associated with Xi and Xj. Variables are
entered or removed one at a time. Xk is eligible for entry if it is an input field not currently in
the model such that

ṽkk ≥ t

and, for each input field Xj currently in the model,

ṽjj − ṽjk ṽkj / ṽkk ≥ t

where t is the tolerance, with a default value of 0.0001.

The second condition above is imposed so that entry of the variable does not reduce the
tolerance of variables already in the model to unacceptable levels.

The F-to-enter value for Xk is computed as

F-to-enter = (C − p* − 1) Vk / (ṽyy − Vk),  with Vk = ṽky ṽyk / ṽkk

with 1 and C − p* − 1 degrees of freedom, where p* is the number of coefficients currently in
the model and C is the sum of case weights.

The F-to-remove value for Xk is computed as

F-to-remove = (C − p*) (−Vk) / ṽyy

with 1 and C − p* degrees of freedom.
Methods for Variable Entry and Removal
Four methods for entry and removal of variables are available. The selection process is repeated
until no more independent variables qualify for entry or removal. The algorithms for these four
methods are described below.
Enter
The selected input fields are all entered in the model, with no field selection applied.
Stepwise

If there are independent variables currently entered in the model, choose Xk such that its
F-to-remove value is minimum. Xk is removed if F-to-remove < Fout (default = 2.71) or, if
probability criteria are used, if P(F-to-remove) > Pout (default = 0.1). If the inequality does
not hold, no variable is removed from the model.

If there are no independent variables currently entered in the model or if no entered
variable is to be removed, choose Xk such that its F-to-enter value is maximum. Xk is entered if
F-to-enter > Fin (default = 3.84) or, if probability criteria are used, if P(F-to-enter) < Pin
(default = 0.05). If the inequality does not hold, no variable is entered.

At each step, all eligible variables are considered for removal and entry.
Forward
This procedure is the entry phase of the stepwise procedure.
Backward
This procedure starts with all input fields in the model and applies the removal phase of the
stepwise procedure.
Blank Handling
By default, a case that has a missing value for any input or output field is deleted from the
computation of the correlation matrix on which all subsequent computations are based. If the
Only use complete records option is deselected, each correlation in the correlation matrix is
computed based on records with complete data for the two fields associated with the correlation,
regardless of missing values on other fields. For some datasets, this approach can lead to a
non-positive definite matrix, so that the model cannot be estimated.
Secondary Calculations
Model Summary Statistics
The multiple correlation coefficient R is calculated as

R = sqrt(1 − ṽyy)

where ṽyy is the element of the swept correlation matrix corresponding to the output field.

R-square, the proportion of variance in the output field accounted for by the input fields, is
calculated as

R² = 1 − ṽyy

The adjusted R-square, which takes the complexity of the model relative to the size of the training
data into account, is calculated as

adjusted R² = 1 − (1 − R²)(C − 1) / (C − p*)
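As a hedged illustration of the R-square statistics (computed here from raw residuals rather than from the swept matrix; function names are ours):

```python
def r_squared(y, y_hat):
    """R^2 = 1 - SS_error / SS_total: the proportion of output-field
    variance accounted for by the predictions."""
    y_bar = sum(y) / len(y)
    ss_tot = sum((v - y_bar) ** 2 for v in y)
    ss_err = sum((v - h) ** 2 for v, h in zip(y, y_hat))
    return 1.0 - ss_err / ss_tot

def adjusted_r_squared(r2, n, p_star):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - p_star), penalizing
    model complexity (p_star coefficients) relative to training size n."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p_star)

r2 = r_squared([1.0, 2.0, 4.0], [1.0, 2.0, 4.0])   # perfect fit -> 1.0
adj = adjusted_r_squared(0.90, 11, 2)              # 1 - 0.1 * 10/9
```

Note that adjusted R-square is always at most R-square and the gap widens as more coefficients are added for a fixed amount of training data.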
Field Statistics and Other Calculations
The statistics shown in the advanced output for the regression equation node are calculated in the
same manner as in the REGRESSION procedure in IBM® SPSS® Statistics. For more details, see
the SPSS Statistics Regression algorithm document, available at http://www.ibm.com/support.
Generated Model/Scoring
Predicted Values
The predicted value for a new record is calculated as

ŷ = b0 + b1 x1 + b2 x2 + ... + bp xp
Blank Handling
Records with missing values for any input field in the final model cannot be scored, and are
assigned a predicted value of $null$.
Sequence Algorithm
Overview of Sequence Algorithm
The sequence node in IBM® SPSS® Modeler detects patterns in sequential data, such as
purchases over time. The sequence node algorithm uses the following two-stage process for
sequential pattern mining (Agrawal and Srikant, 1995):
E Mine for the frequent sequences. This part of the process extracts the information needed for quick
responses to the pattern queries, yielding an adjacency lattice of the frequent sequences. This
structure provides an optimal configuration for the second stage.
E Generate sequential patterns online. This stage uses a pre-computed adjacency lattice. You can
extract the patterns according to specified criteria, such as support and confidence bounds, or
place restrictions on the antecedent sequence.
Primary Calculations
Itemsets, Transactions, and Sequences
A group of items associated at a single point in time constitutes an itemset, which will be
identified here using braces “{ }”. Consider the hypothetical data below representing sales at a
gourmet store.
Table 31-1
Example data - product purchases

Customer   Time 1              Time 2              Time 3          Time 4
1          cheese & crackers   wine                beer            -
2          wine                beer                cheese          -
3          bread               wine                cheese & beer   -
4          crackers            wine                beer            cheese
5          beer                cheese & crackers   bread           -
6          crackers            bread               -               -
Customer 1 yields three itemsets: {cheese & crackers}, {wine}, and {beer}. The ampersand
denotes items appearing in a single itemset. In this case, items separated by an ampersand appear
in the same purchase. Notice that some itemsets may contain a single item only.
The complete group of itemsets for a single object, in this case a customer, constitutes a
transaction. Time refers to a purchase occasion for a particular customer and does not represent a
specific time across all customers. For example, the first purchase occasion for customer 1 may
have been on January 23 while the first occasion for customer 4 was February 12. Although the
dates are not identical, each itemset was the first for that customer. The analysis focuses on time
relative to a specific customer instead of on absolute time.
Ordering the itemsets by time yields sequences. The symbol “>” denotes an ordering of
itemsets, with the itemset on the right occurring after the itemset on the left. For example,
customer 6 yields a sequence of [{crackers} > {bread}].
Two common characteristics used to describe sequences are size and length. The number of
items contained in a sequence corresponds to the sequence size. The number of itemsets in the
sequence equals its length. For example, the three timepoints for customer 5 correspond to a
sequence having a length of three and a size of four.
A sequence is a subsequence of another sequence if the first can be derived by deleting
itemsets from the second. Consider the sequence:
[{wine} > {beer} > {cheese}]
Deleting the itemset cheese results in the sequence of length two [{wine} > {beer}]. This two
itemset sequence is a subsequence of the original sequence. Similar deletions reveal that the
three itemset sequence can be decomposed into three singleton subsequences ({wine}, {beer},
{cheese}) and three subsequences involving two itemsets ([{wine} > {beer}], [{beer} >
{cheese}], [{wine} > {cheese}]). A sequence that is not a subsequence of another sequence is
referred to as a maximal sequence.
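The deletion-based subsequence test described above can be sketched as follows; sequences are represented as lists of Python sets, and the function name is illustrative:

```python
def is_subsequence(sub, seq):
    """True if `sub` can be derived from `seq` by deleting itemsets:
    each itemset of `sub` must match an itemset of `seq`, in order."""
    it = iter(seq)
    # `any` consumes `it` up to and including the first match, so order
    # is preserved and each itemset of `seq` is used at most once.
    return all(any(itemset == other for other in it) for itemset in sub)

wine_beer_cheese = [{"wine"}, {"beer"}, {"cheese"}]
ok = is_subsequence([{"wine"}, {"beer"}], wine_beer_cheese)       # True
reversed_order = is_subsequence([{"beer"}, {"wine"}], wine_beer_cheese)  # False
```

The second call fails because deleting itemsets can never reorder the remaining ones.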
Support
The support for a sequence equals the proportion of transactions that contain the sequence. The
table below shows support values for sequences that appear in at least one transaction for a set of
gourmet store sales data (note that this is a different data set from the one shown previously).
For example, the support for sequence [{wine} > {beer}] is 0.67 because it occurs in four of the
six transactions. Similarly, support for a sequential rule equals the proportion of transactions that
contain both the antecedent and the consequent of the rule, in that order. The support for the
sequential rule:

If [{cheese} > {wine}] then [{beer}]

is 0.17 because only one of the six transactions contains these three itemsets in this order.
Sequences that do not appear in any transaction have support values of 0 and are excluded
from the mining analysis.
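The support computation just described can be sketched against the transactions of Table 31-1. This is an illustrative sketch (names are ours); containment uses subset matching so that, for example, {wine} is found inside the itemset {cheese & beer}... inside any itemset that includes wine:

```python
def contains(transaction, sequence):
    """True if the transaction (an ordered list of itemsets) contains the
    sequence: each itemset of the sequence must be a subset of a later,
    distinct itemset of the transaction, in order."""
    it = iter(transaction)
    return all(any(itemset <= other for other in it) for itemset in sequence)

def support(sequence, transactions):
    """Proportion of transactions that contain the sequence."""
    return sum(contains(t, sequence) for t in transactions) / len(transactions)

# The six gourmet-store transactions from Table 31-1:
transactions = [
    [{"cheese", "crackers"}, {"wine"}, {"beer"}],
    [{"wine"}, {"beer"}, {"cheese"}],
    [{"bread"}, {"wine"}, {"cheese", "beer"}],
    [{"crackers"}, {"wine"}, {"beer"}, {"cheese"}],
    [{"beer"}, {"cheese", "crackers"}, {"bread"}],
    [{"crackers"}, {"bread"}],
]
s = support([{"wine"}, {"beer"}], transactions)   # 4 of 6 -> 0.67
```

Running this over every observed sequence reproduces the support column of the table below.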
Table 31-2
Nonzero support values

Sequence                                    Support
{cheese}                                    0.83
{crackers}                                  0.67
{wine}                                      0.67
{beer}                                      0.83
{bread}                                     0.50
{cheese & crackers}                         0.33
{cheese & beer}                             0.17
{cheese} > {wine}                           0.17
{cheese} > {beer}                           0.17
{wine} > {beer}                             0.67
{crackers} > {wine}                         0.33
{crackers} > {beer}                         0.33
{wine} > {cheese}                           0.50
{beer} > {cheese}                           0.50
{bread} > {wine}                            0.17
{bread} > {beer}                            0.17
{bread} > {cheese}                          0.17
{beer} > {bread}                            0.17
{beer} > {crackers}                         0.17
{cheese} > {bread}                          0.17
{crackers} > {bread}                        0.33
{crackers} > {cheese}                       0.17
{beer} > {cheese & crackers}                0.17
{cheese & crackers} > {wine}                0.17
{cheese & crackers} > {beer}                0.17
{bread} > {cheese & beer}                   0.17
{wine} > {cheese & beer}                    0.17
{cheese & crackers} > {bread}               0.17
{cheese} > {wine} > {beer}                  0.17
{crackers} > {wine} > {beer}                0.33
{wine} > {beer} > {cheese}                  0.33
{bread} > {wine} > {beer}                   0.17
{bread} > {wine} > {cheese}                 0.17
{beer} > {cheese} > {bread}                 0.17
{beer} > {crackers} > {bread}               0.17
{crackers} > {wine} > {cheese}              0.17
{crackers} > {beer} > {cheese}              0.17
{cheese & crackers} > {wine} > {beer}       0.17
{bread} > {wine} > {cheese & beer}          0.17
{beer} > {cheese & crackers} > {bread}      0.17
{crackers} > {wine} > {beer} > {cheese}     0.17
Typically, the analysis focuses on sequences having support values greater than a minimum
threshold, the support level. This value, defined by the user, determines the minimum level for
which sequences will be kept. Sequences with support values exceeding the threshold, referred to
as frequent sequences, form the basis of the adjacency lattice. For example, for a threshold of
0.40, sequence [{wine} > {beer}] is a frequent sequence because its support level is 0.67. By
relaxing the threshold, more sequences are classified as frequent.
Time Constraints
Defining the time at which events occur has a dramatic impact on sequences. For instance, each
purchase occasion in the gourmet data yields a new timed itemset. However, suppose a customer
bought wine and realized while walking to his car that beer was needed too. He immediately
returns to the store and buys the forgotten item. Should these two purchases be considered
separately?
One method for controlling for itemsets that occur very close in time is through a timestamp
tolerance parameter. This tolerance defines the length of time covering a single itemset.
Specifying a tolerance larger than the difference between two consecutive times results in a single
itemset at one time, such as {wine & beer} in the scenario described above.
Another time issue commonly arising in the analysis of sequences is gap. This statistic
measures the difference in time between two items and can be used to make time-based predictions
of future behavior. Gap statistics can be based on the gap between the last and penultimate sets in
sequences, or on the gaps between the last and first sets in sequences.
Sequential Patterns
Sequential patterns, or sequential association rules, identify items that frequently follow other
items in transaction-based data. A sequential pattern is simply an ordered list of itemsets. All
itemsets leading to the final itemset form the antecedent sequence, and the last itemset is the
consequent sequence. These statements have the following form:
If [antecedent] then [consequent]
For example, a sequential pattern for wine, beer, and cheese is: “if a customer buys wine, then
buys beer, he will buy cheese in the future”. Wine and beer form the antecedent, and cheese is
the consequent.
Notationally, the symbol “=>” separates the antecedent from the consequent in a sequential
rule. The sequence to the left of this symbol corresponds to the antecedent; the sequence on the
right is the consequent. For instance, the rule above is denoted:
[{wine} > {beer} => {cheese}]
The only notational difference between a sequence and a sequential rule is the identification
of a subsequence as a consequent.
Adjacency Lattice
The number of itemsets and sequences for a collection of transactions grows very quickly as the
number of items appearing in transactions gets larger. In practice, analyses typically involve many
transactions and these transactions include a variety of itemsets. Larger datasets require complex
methods to process the sequential patterns, particularly if rapid feedback is needed.
An adjacency lattice provides a structure for organizing sequences, permitting rapid generation
of sequential patterns. Two sequences are adjacent if adding a single item to one yields the
other, resulting in a hierarchical structure denoting which sequences are subsequences of other
sequences. The lattice also includes sequence frequencies, as well as other information.
The adjacency lattice of all observed sequences is usually too large to be practical. It may be
more useful to prune the lattice to frequent sequences in an effort to simplify the structure. All
sequences contained in the resulting structure reach a specified support level. The adjacency
lattice for the sample transactions using a support level of 0.40 is shown below.
Figure 31-1
Adjacency lattice for a threshold of 0.40 (support values in parentheses)
Mining for Frequent Sequences
IBM® SPSS® Modeler uses a non-sequential association rule mining approach that performs
very well with respect to minimizing I/O costs, time, and space requirements. The continuous
association rule mining algorithm (Carma) uses only two data passes and allows changes in the
support level during execution (Hidber, 1999). The final guaranteed support level depends on the
provided series of support values.
For the first stage of the mining process, the component uses a variation of Carma to apply the
approach to the sequential case. The general order of operations is:
E Read the transaction data.
E Identify frequent sequences, discarding infrequent sequences.
E Build an adjacency lattice of frequent sequences.
Carma is based upon transactions and requires only two passes through the data. In the first data
pass, referred to as Phase I, the algorithm generates the frequent sequence candidates. The second
data pass, Phase II, computes the exact frequency counts for the candidate sequences from Phase I.
Phase I
Phase I corresponds to an estimation phase. In this phase, Carma generates candidate sequences
successively for every transaction. Candidate sequences satisfy a version of the “apriori” principle
where a sequence becomes a candidate only if all of its subsequences are candidates from the
previous transactions. Therefore, the size of candidate sequences can grow with each transaction.
To prevent the number of candidates from growing too large, Carma periodically prunes candidate
sequences that have not reached a threshold frequency. Pruning may occur after processing any
number of transactions. While pruning usually lowers the memory requirements, it increases the
computational costs. At the end of Phase I, the algorithm generates all sequences whose
frequency exceeds the computed support level (which depends on the support series). Carma can
use many support levels, up to one support level per transaction.
The table below represents support values during transaction processing with no pruning for
the gourmet data. As the algorithm processes a transaction, support values adjust to account for
items appearing in that transaction, as well as for the total number of processed transactions. For
example, after the first transaction, the lattice contains cheese, crackers, wine, and beer, each
having a support exceeding the threshold level. After processing the second transaction, the
support for crackers drops from 1.0 to 0.50 because that item appears in only one of the two
transactions. The support for the other items remains unchanged because both transactions contain
the items. Furthermore, the sequences [{wine} > {beer}] and [{beer} > {cheese}] enter the lattice
because their constituent subsequences already appear in the lattice.
Table 31-3
Carma transaction processing

                                      Transaction
Sequence                        1     2     3     4     5     6
{cheese}                        1.00  1.00  1.00  1.00  1.00  0.83
{crackers}                      1.00  0.50  0.33  0.50  0.60  0.67
{wine}                          1.00  1.00  1.00  1.00  0.80  0.67
{beer}                          1.00  1.00  1.00  1.00  1.00  0.83
{wine} > {beer}                 -     1.00  1.00  1.00  0.80  0.67
{beer} > {cheese}               -     0.50  0.33  0.50  0.60  0.50
{bread}                         -     -     0.33  0.25  0.40  0.50
{wine} > {cheese}               -     -     0.67  0.75  0.60  0.50
{cheese & beer}                 -     -     0.33  0.25  0.20  0.17
{crackers} > {wine}             -     -     -     0.50  0.40  0.33
{crackers} > {beer}             -     -     -     0.50  0.40  0.33
{crackers} > {cheese}           -     -     -     0.25  0.20  0.17
{wine} > {beer} > {cheese}      -     -     -     0.50  0.40  0.33
{cheese & crackers}             -     -     -     -     0.40  0.33
{beer} > {crackers}             -     -     -     -     0.20  0.17
{beer} > {bread}                -     -     -     -     0.20  0.17
{cheese} > {bread}              -     -     -     -     0.20  0.17
{crackers} > {bread}            -     -     -     -     0.20  0.33

A dash indicates that the sequence has not yet entered the lattice at that point in the data pass.
After completing the first data pass, the lattice contains five sequences containing one item, twelve
sequences involving two items, and one sequence composed of three items.
Phase II
Phase II is a validation phase requiring a second data pass, during which the algorithm
determines accurate frequencies for candidate sequences. In this phase, Carma does not generate
any candidate sequences and prunes infrequent sequences only once, making Phase II faster
than Phase I. Moreover, depending on the entry points of candidate sequences during Phase I,
a complete data pass may not even be necessary. In an online application, Carma skips Phase II
altogether.
Suppose the threshold level is 0.30 for the lattice. Several sequences fail to reach this level and
subsequently get pruned during Phase II. The resulting lattice appears below.
Figure 31-2
Adjacency lattice for a threshold of 0.30 (support values in parentheses)
The lattice contains the following frequent sequences (support values in parentheses):
{NULL} (1.00); {cheese} (0.83); {beer} (0.83); {wine} (0.67); {crackers} (0.67); {bread} (0.50);
{wine} > {beer} (0.67); {wine} > {cheese} (0.50); {beer} > {cheese} (0.50);
{crackers} > {wine} (0.33); {crackers} > {beer} (0.33); {crackers} > {bread} (0.33);
{cheese & crackers} (0.33); {wine} > {beer} > {cheese} (0.33).
Notice that the lattice does not contain [{crackers} > {wine} > {beer}] although the support for
this sequence exceeds the threshold. Although [{crackers} > {wine} > {beer}] occurs in one-third
of the transactions, Carma cannot add this sequence to the lattice until all of its subsequences
are included. The final two subsequences occur in the fourth transaction, after which the full
three-itemset sequence is not observed. In general, however, the database of transactions will be
much larger than the small example shown here, and exclusions of this type will be extremely rare.
Generating Sequential Patterns
The second stage in the sequential pattern mining process queries the adjacency lattice of the
frequent sequences produced in the first stage for the actual patterns.
IBM® SPSS® Modeler uses a set of efficient algorithms for generating association rules online
from the adjacency lattice (Aggarwal and Yu, 1998). Applying these algorithms to the sequential
case takes advantage of the monotonic properties for rule support and confidence preserved by
the adjacency lattice data structures. The lattice efficiently saves all the information necessary
for generating the sequential patterns and is orders of magnitude smaller than all the patterns
it could possibly generate.
The queries contain the constraints that the resulting set of sequential patterns needs to satisfy.
These constraints fall into two categories:

•  constraints on statistical indices

•  constraints on the items contained in the antecedent of the patterns
Statistical index constraints involve support, confidence, or cause. These queries require returned
patterns to have values for these statistics within a specified range. Usually, lower confidence
bound is the primary criterion. The lower bound for the pattern support level is given by the
support level for the sequences in the corresponding adjacency lattice. Often, however, the support
specified for pattern generation exceeds the value specified for lattice creation.
For the lattice shown above, specifying a support range between 0.30 and 1.00, a confidence
range from 0.30 to 1.0, and a cause range from 0 to 1.0 results in the following seven rules:

•  If [{crackers}] then [{beer}].
•  If [{crackers}] then [{wine}].
•  If [{crackers}] then [{bread}].
•  If [{wine} > {beer}] then [{cheese}].
•  If [{wine}] then [{beer}].
•  If [{wine}] then [{cheese}].
•  If [{beer}] then [{cheese}].
Limiting the set to only maximal sequences omits the final three rules because they are
subsequences of the fourth.
The second type of query requires the specification of the sequential rule antecedent. This type
of query returns a new singleton itemset after the final itemset in the antecedent. For example,
consider an online shopper who has placed items in a shopping cart. A future item query looks at
only the past purchases to derive a recommended item for the next time the shopper visits the site.
Blank Handling
Blanks are ignored by the sequence rules algorithm. The algorithm will handle records containing
blanks for input fields, but such a record will not be considered to match any rule containing one
or more of the fields for which it has blank values.
Secondary Calculations
Confidence
Confidence is a measure of sequential rule accuracy and equals the proportion obtained by dividing
the number of transactions that contain both the antecedent and consequent of the rule by the
number of transactions containing the antecedent. In other words, confidence is the support for the
rule divided by the support for the antecedent. For example, the confidence for the sequential rule:
If [{wine}] then [{cheese}]
is 3/4, or 0.75. Three-quarters of the transactions that include wine also include cheese at a later
time. In contrast, the sequential rule:
If [{cheese}] then [{wine}]
includes the same itemsets but has a confidence of 0.20. Only one-fifth of the transactions that
include cheese contain wine at a later time. In other words, wine is more likely to lead to cheese
than cheese is to lead to wine.
The table below displays the confidence for every sequential rule observed in the gourmet data.
Rules with empty antecedents correspond to having no previous transaction history.
Table 31-4
Nonzero confidence values

Sequence                                      Confidence
{cheese}                                      1.00
{crackers}                                    1.00
{wine}                                        1.00
{beer}                                        1.00
{bread}                                       1.00
{cheese & crackers}                           1.00
{cheese & beer}                               1.00
{cheese} => {wine}                            0.20
{cheese} => {beer}                            0.20
{wine} => {beer}                              1.00
{crackers} => {wine}                          0.50
{crackers} => {beer}                          0.50
{wine} => {cheese}                            0.75
{beer} => {cheese}                            0.60
{bread} => {wine}                             0.33
{bread} => {beer}                             0.33
{bread} => {cheese}                           0.33
{beer} => {bread}                             0.20
{beer} => {crackers}                          0.20
{cheese} => {bread}                           0.20
{crackers} => {bread}                         0.50
{crackers} => {cheese}                        0.25
{beer} => {cheese & crackers}                 0.20
{cheese & crackers} => {wine}                 0.50
{cheese & crackers} => {beer}                 0.50
{bread} => {cheese & beer}                    0.33
{wine} => {cheese & beer}                     0.25
{cheese & crackers} => {bread}                0.50
{cheese} > {wine} => {beer}                   1.00
{crackers} > {wine} => {beer}                 1.00
{wine} > {beer} => {cheese}                   0.50
{bread} > {wine} => {beer}                    1.00
{bread} > {wine} => {cheese}                  1.00
{beer} > {cheese} => {bread}                  0.33
{beer} > {crackers} => {bread}                1.00
{crackers} > {wine} => {cheese}               0.50
{crackers} > {beer} => {cheese}               0.50
{cheese & crackers} > {wine} => {beer}        1.00
{bread} > {wine} => {cheese & beer}           1.00
{beer} > {cheese & crackers} => {bread}       1.00
{crackers} > {wine} > {beer} => {cheese}      0.50
Generated Model/Scoring
Predicted Values
When you pass data records into a Sequence Rules model, the model handles the records in a
time-dependent manner (or order-dependent, if no timestamp field was used to build the model).
Records should be sorted by the ID field and timestamp field (if present).
For each record, the rules in the model are compared to the set of transactions processed
for the current ID so far, including the current record and any previous records with the same
ID and earlier timestamp. The k rules with the highest confidence values that apply to this set
of transactions are used to generate the k predictions for the record, where k is the number of
predictions specified when the model was built. (If multiple rules predict the same outcome for
the transaction set, only the rule with the highest confidence is used.)
Note that the predictions for each record do not necessarily depend on that record’s transactions.
If the current record’s transactions do not trigger a specific rule, rules will be selected based on
the previous transactions for the current ID. In other words, if the current record doesn’t add any
useful predictive information to the sequence, the prediction from the last useful transaction for
this ID is carried forward to the current record.
For example, suppose you have a Sequence Rule model with the single rule
Jam -> Bread (0.66)
and you pass it the following records:
ID     Purchase   Prediction
001    jam        bread
001    milk       bread
Notice that the first record generates a prediction of bread, as you would expect. The second record
also contains a prediction of bread, because there’s no rule for jam followed by milk; therefore the
milk transaction doesn’t add any useful information, and the rule Jam -> Bread still applies.
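The carry-forward scoring behavior can be sketched as follows. This is a simplified illustration (rule representation, names, and the omission of same-outcome deduplication are ours), assuming antecedents match the accumulated history by ordered subset containment:

```python
def score(records, rules, k=1):
    """Score (id, itemset) records in order against sequential rules.
    Each rule is (antecedent: list of itemsets, prediction, confidence).
    The transaction history accumulates per ID; the k matching rules with
    the highest confidence supply the predictions for each record."""
    def matches(antecedent, history):
        it = iter(history)
        return all(any(itemset <= h for h in it) for itemset in antecedent)

    histories, out = {}, []
    for rec_id, itemset in records:
        history = histories.setdefault(rec_id, [])
        history.append(itemset)
        hits = sorted((r for r in rules if matches(r[0], history)),
                      key=lambda r: -r[2])
        out.append([r[1] for r in hits[:k]])
    return out

# The jam/milk example above: the milk record adds nothing, so the
# jam -> bread rule still applies to the accumulated history.
rules = [([{"jam"}], "bread", 0.66)]
preds = score([("001", {"jam"}), ("001", {"milk"})], rules)
```

Because matching is done against the whole history for the ID, the prediction from the last useful transaction naturally carries forward.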
Confidence
The confidence associated with a prediction is the confidence of the rule that produced the
prediction. For more information, see the topic “Confidence” on p. 334.
Blank Handling
Blanks are ignored by the sequence rules algorithm. The algorithm will handle records containing
blanks for input fields, but such a record will not be considered to match any rule containing one
or more of the fields for which it has blank values.
Self-Learning Response Model Algorithms
Self-Learning Response Models (SLRMs) use Naive Bayes classifiers to build models that can
be easily updated to incorporate new data, without having to regenerate the entire model. The
methods used for building, updating and scoring with SLRMs are described here.
Primary Calculations
The model-building algorithm used in SLRMs is Naive Bayes. A Bayesian Network consisting of
a Naive Bayes model for each target field is generated.
Naive Bayes Algorithms
The Naive Bayes model is an old method for classification and predictor selection that is enjoying
a renaissance because of its simplicity and stability.
Notation
The following notation is used throughout this chapter unless otherwise stated:
Table 32-1
Notation

Notation   Description
J0         Total number of predictors.
X          Categorical predictor vector X’ = (X1, ..., XJ), where J is the number of predictors considered.
Mj         Number of categories for predictor Xj.
Y          Categorical target variable.
K          Number of categories of Y.
N          Total number of cases or patterns in the training data.
Nk         The number of cases with Y = k in the training data.
Njmk       The number of cases with Y = k and Xj = m in the training data.
πk         The probability for Y = k.
pjmk       The probability of Xj = m given Y = k.
Naive Bayes Model
The Naive Bayes model is based on the conditional independence model of each predictor given
the target class. The Bayesian principle is to assign a case to the class that has the largest posterior
probability. By Bayes’ theorem, the posterior probability of Y given X is:
© Copyright IBM Corporation 1994, 2015.
337
338
Self-Learning Response Model Algorithms
Let X1, ..., XJ be the J predictors considered in the model. The Naive Bayes model assumes that
X1, ..., XJ are conditionally independent given the target; that is:

P(X = x | Y = k) = Π(j=1..J) P(Xj = xj | Y = k)

These probabilities are estimated from training data by the following equations:

π̂k = (Nk + λ) / (N + K·λ)
p̂jmk = (Njmk + f·λ) / (Nk + Mj·f·λ)

where Nk is calculated based on all non-missing Y, Njmk is based on all non-missing pairs
of Xj and Y, and the factors λ and f are introduced to overcome problems caused by zero or
very small cell counts. These estimates correspond to Bayesian estimation of the multinomial
probabilities with Dirichlet priors. Empirical studies suggest suitable values for λ and f (Kohavi,
Becker, and Sommerfield, 1997).
A single data pass is needed to collect all the involved counts.
For the special situation in which J = 0, that is, there is no predictor at all, the posterior
probability reduces to the prior: P(Y = k | X) = π̂k. When there are empty categories in the
target variable or categorical predictors, these empty categories should be removed from the
calculations.
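The single-pass counting and smoothed estimation described above can be sketched in Python. The smoothing constants lam and f below are illustrative placeholders, not the values recommended by the guide.

```python
from collections import defaultdict

def train_naive_bayes(records, lam=1.0, f=1.0):
    """Collect N_k and N_jmk in one data pass and return smoothed estimates.

    records: iterable of (x, y) where x is a tuple of categorical predictor
    values and y is the target category. lam and f are illustrative
    Dirichlet-prior-style smoothing constants, not official defaults.
    """
    n = 0
    n_k = defaultdict(int)          # N_k: cases with Y = k
    n_jmk = defaultdict(int)        # N_jmk: cases with Y = k and X_j = m
    categories = defaultdict(set)   # observed categories per predictor (M_j)
    classes = set()
    for x, y in records:
        n += 1
        n_k[y] += 1
        classes.add(y)
        for j, m in enumerate(x):
            n_jmk[(j, m, y)] += 1
            categories[j].add(m)
    k = len(classes)
    pi = {c: (n_k[c] + lam) / (n + k * lam) for c in classes}
    p = {}
    for (j, m, c), cnt in n_jmk.items():
        mj = len(categories[j])
        p[(j, m, c)] = (cnt + f * lam) / (n_k[c] + mj * f * lam)
    return pi, p

data = [(("a", "x"), 1), (("a", "y"), 1), (("b", "y"), 0), (("b", "x"), 0)]
pi, p = train_naive_bayes(data)
```

With a balanced two-class sample, the smoothed priors stay at 0.5 each, and each conditional estimate is pulled slightly toward uniformity by the prior counts.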
Secondary Calculations
In addition to the model parameters, a model assessment is calculated.
Model Assessment
For a trained model, we need to assess how reliable it is. Given this problem, we face two
conditions which will result in different solutions:
- A sample of test data (not used in training or updating the model) is available. In this case we can directly feed these data into the model and observe the outcome.
- No extra testing data are available. This is more common, since users normally apply all available data to train the model. In this case, we have to simulate data first based on the calibrated model parameters π̂k and p̂jmk, then assess the trained model by scoring these pseudo-random data.
Testing with Simulated Data
In our simulation, several rounds of pseudo-random data are generated. For each round, we can
determine the corresponding accuracy; across all rounds, the average accuracy and its variance
can be calculated, and they are reported as reliability statistics.
E For each round, we generate the random cases as follows:
- y is assigned a random value based on the prior probabilities π̂k.
- Each xj is randomly assigned based on the conditional probabilities p̂jmk.
E The accuracy of each round is calculated by comparing the model’s predicted value for each case
to the case’s generated outcome y.
E The mean, variance, minimum and maximum of the accuracy estimates are calculated across
rounds.
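The round-based assessment can be sketched as follows. The round count and cases per round are arbitrary illustrative choices, and the model is scored with the usual maximum-posterior rule.

```python
import random
from statistics import mean, variance

def assess_by_simulation(pi, p, categories, rounds=10, cases=200, seed=1):
    """Simulate pseudo-random data from the calibrated parameters and
    score it with the model itself, returning accuracy statistics.

    pi: {class: prior}; p: {(j, m, class): conditional probability};
    categories: list of category lists, one per predictor."""
    rng = random.Random(seed)
    classes = list(pi)
    accuracies = []
    for _ in range(rounds):
        correct = 0
        for _ in range(cases):
            # draw y from the priors, then each x_j from p(. | y)
            y = rng.choices(classes, weights=[pi[c] for c in classes])[0]
            x = [rng.choices(cats, weights=[p[(j, m, y)] for m in cats])[0]
                 for j, cats in enumerate(categories)]
            # predict the class maximizing pi_k * prod_j p_jmk
            scores = {c: pi[c] for c in classes}
            for j, m in enumerate(x):
                for c in classes:
                    scores[c] *= p[(j, m, c)]
            if max(scores, key=scores.get) == y:
                correct += 1
        accuracies.append(correct / cases)
    return {"mean": mean(accuracies), "var": variance(accuracies),
            "min": min(accuracies), "max": max(accuracies)}

pi = {0: 0.5, 1: 0.5}
p = {(0, "a", 1): 0.9, (0, "b", 1): 0.1, (0, "a", 0): 0.2, (0, "b", 0): 0.8}
stats = assess_by_simulation(pi, p, [["a", "b"]])
```

For this toy model the Bayes-optimal accuracy is 0.5·0.9 + 0.5·0.8 = 0.85, so the simulated mean accuracy should land near that value.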
Blank Handling
If the target is missing, or all predictors for a case are missing, the case is ignored. If every
value for a predictor is missing, or all non-missing values for a predictor are the same, that
predictor is ignored.
Updating the Model
The model can be updated by updating the cell counts Nk and Njmk to account for the new records,
and recalculating the probabilities π̂k and p̂jmk as described in “Naive Bayes Model” on p. 337.
Updating the model only requires a data pass of the new records.
Generated Model/Scoring
Scoring with a generated SLRM model is described below.
Predicted Values and Confidences
By default, the first M offers with the highest predicted values will be returned. However, sometimes
low-probability offers are of interest for marketing strategy. Model settings allow you to bias the
results toward particular offers, or to add random components to the offers.
Some notation for scoring offers:
- Number of offers modeled already
- Scores for each offer
- Randomly generated scores for offers
- Randomization factor, ranging from 0.0 (offer based only on model prediction) to 1.0 (offer is completely random)
- Number of cases used for training each offer
- Empirical value of the number of training cases that will result in a reliable model. When “Take account of model reliability” is selected in the Settings tab, this is set to 500; otherwise 0.
- User’s preferences for offers, or the ratings of the offers. Can be any non-negative value, where larger values mean stronger recommendations for the corresponding offers.
- Mandatory inclusion/exclusion filters, where 0 indicates an excluded offer.
The final score for each offer is calculated from these quantities. The outcomes are ordered in the
specified order, ascending or descending, and the first M offers in the list are recommended. The
calculated score is reported as the confidence for the score.
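The exact combining formula is not reproduced in this extraction, but the ingredients listed above suggest a blend along the following lines. This sketch is an illustrative assumption only (blend model and random scores by the randomization factor, then weight by preference, filter, and a reliability ratio), not the product's exact formula.

```python
import random

def score_offers(scores, randomization, preferences, filters,
                 n_train, reliability_threshold=0, top_m=3, seed=7):
    """Illustrative blend of model scores with random scores, weighted by
    user preference and reliability, with mandatory filters applied.
    A plausible sketch only; the official SLRM formula is not shown here."""
    rng = random.Random(seed)
    final = {}
    for offer, s in scores.items():
        s_rand = rng.random()
        blended = (1.0 - randomization) * s + randomization * s_rand
        # down-weight offers trained on few cases; reliability_threshold
        # plays the role of the empirical "reliable model" case count
        n = n_train.get(offer, 0)
        reliability = (n / (n + reliability_threshold)
                       if reliability_threshold else 1.0)
        final[offer] = (blended * preferences.get(offer, 1.0)
                        * filters.get(offer, 1) * reliability)
    ranked = sorted(final, key=final.get, reverse=True)
    return ranked[:top_m], final

offers, final = score_offers(
    {"A": 0.8, "B": 0.6, "C": 0.9}, randomization=0.0,
    preferences={"A": 1.0, "B": 1.0, "C": 1.0},
    filters={"A": 1, "B": 1, "C": 0},      # C mandatorily excluded
    n_train={"A": 1000, "B": 1000, "C": 1000})
```

With the randomization factor at 0.0 the ranking is driven purely by the model scores, and the zero filter forces the excluded offer's final score to zero.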
Variable Assessment
Among all the features modeled, some are definitely more important to the accuracy of the model
than others. Two different approaches to measuring importance are proposed here: Predictor
Importance and Information Measure.
Predictor Importance
The variance of the predictive error can be used as the measure of importance. With this method,
we leave out one predictor variable at a time and observe the performance of the remaining model.
A variable is regarded as more important than another if it adds more variance compared to
that of the complete model (with all variables).
When test data are available, they can be used for predictor importance calculations in a direct way.
When test data are not available, they are simulated based on the model parameters π̂k and p̂jmk.
In our simulation, several rounds of data are generated. For each round, we determine
the corresponding accuracy for each submodel, excluding Xj for each of the J predictors; across
all rounds, the average accuracy and its variance can be calculated.
E For each round, we generate the random cases as follows:
- y is assigned a random value based on the prior probabilities π̂k.
- Each xj is randomly assigned based on the conditional probabilities p̂jmk.
Within a round, each of the J predictors is excluded from the model, and the accuracy is
calculated based on the generated test data for each submodel in turn.
E The accuracies for each round are calculated by comparing each submodel’s predicted value for
each case to the case’s generated outcome y, for each of the J submodels.
E The mean and variance of the accuracy estimates are calculated across rounds for each submodel.
For each variable, the importance is measured as the difference between the accuracy of the full
model and the mean accuracy for the submodels that excluded the variable.
Information Measure
The importance of an explanatory variable X for a response variable Y is the extent to which the
use of X reduces uncertainty in predicting outcomes of Y. The uncertainty about predicting an
outcome Y is measured by the entropy of its distribution (Shannon, 1948):

H(Y) = −Σk P(Y = k) · ln P(Y = k)
Based on a value x of the explanatory variable, the probability distribution of the outcomes Y is
the conditional distribution P(Y | x). The information value of using the value x for the prediction
is assessed by comparing the concentrations of the marginal distribution P(Y) and the conditional
distribution P(Y | x). The difference between the conditional and marginal distribution entropy is:

ΔH(x) = H(Y) − H(Y | x)

where H(Y | x) denotes the entropy of the conditional distribution P(Y | x). The value ΔH(x) is
informative about Y if the conditional distribution P(Y | x) is more concentrated than P(Y).
The importance of a random variable X for predicting Y is measured by the expected uncertainty
reduction, referred to as the mutual information between the two variables:

I(X, Y) = H(Y) − E[H(Y | X)]

The expected fraction of uncertainty reduction due to X is a mutual information index given by

ι(X, Y) = I(X, Y) / H(Y)

This index ranges from zero to one: ι(X, Y) = 0 if and only if the two variables are independent,
and ι(X, Y) = 1 if and only if the two variables are functionally related in some form, linearly
or nonlinearly.
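For discrete X and Y with a known joint distribution, the entropy, mutual information, and the index above can be computed directly. The sketch below assumes natural logarithms and a joint probability table supplied as a dictionary.

```python
from math import log

def entropy(probs):
    """Shannon entropy H = -sum p ln p (natural log), ignoring zero cells."""
    return -sum(p * log(p) for p in probs if p > 0)

def mutual_information_index(joint):
    """joint: dict {(x, y): p(x, y)}. Returns I(X, Y) / H(Y),
    the expected fraction of uncertainty about Y removed by X."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    mi = sum(p * log(p / (px[x] * py[y]))
             for (x, y), p in joint.items() if p > 0)
    return mi / entropy(py.values())

# X determines Y exactly -> index 1; X independent of Y -> index 0
functional = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

The two toy tables exercise the two extremes of the index's range described above.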
Simulation algorithms
Simulation in IBM® SPSS® Modeler refers to simulating input data to predictive models using
the Monte Carlo method and evaluating the model based on the simulated data. You do this
by using the Simulation Generation (also known as SimGen) source node. The distribution of
predicted target values can then be used to evaluate the likelihood of various outcomes.
Creating a simulation includes specifying distributions for all inputs to a predictive model that are
to be simulated. When historical data are present, the distribution that most closely fits the data
for each input can be determined using the algorithms described in this section.
Notation
The following notation is used throughout this section unless otherwise stated:
Table 33-1 Notation
xi — Value of the input variable in the ith case of the historical data
wi — Frequency weight associated with the ith case of the historical data
W — Total effective sample size accounting for frequency weights
x̄ — Sample mean
s² — Sample variance
s — Sample standard deviation
Distribution fitting
The historical data for a given input is denoted by:

x1, x2, ..., xn, with frequency weights w1, w2, ..., wn

The total effective sample size is:

W = Σ(i=1..n) wi

The observed sample mean, sample variance and sample standard deviation are:

x̄ = (1/W) Σ(i=1..n) wi·xi
s² = (1/(W − 1)) Σ(i=1..n) wi·(xi − x̄)²
s = √s²
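The weighted summaries can be computed in a single pass over the historical data. The W − 1 denominator in the variance mirrors the usual sample variance convention.

```python
from math import sqrt

def weighted_summaries(values, weights):
    """Return (W, mean, variance, std) using frequency weights.
    Variance uses a W - 1 denominator (sample variance convention)."""
    W = sum(weights)
    mean = sum(w * x for x, w in zip(values, weights)) / W
    var = sum(w * (x - mean) ** 2 for x, w in zip(values, weights)) / (W - 1)
    return W, mean, var, sqrt(var)

# frequency weight 2 on the first value is equivalent to repeating it twice
W, m, v, s = weighted_summaries([1.0, 2.0, 3.0], [2, 1, 1])
```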
Parameter estimation for most distributions is based on the maximum likelihood (ML) method,
and closed-form solutions for the parameters exist for many of the distributions. There is no
closed-form ML solution for the distribution parameters for the following distributions: negative
binomial, beta, gamma and Weibull. For these distributions, the Newton-Raphson method is used.
This approach requires the following information: the log-likelihood function, the gradient vector,
the Hessian matrix, and the initial values for the iterative Newton-Raphson process.
Discrete distributions
Distribution fitting is supported for the following discrete distributions: binomial, categorical,
Poisson and negative binomial.
Binomial distribution: parameter estimation
The probability mass function for a random variable x with a binomial distribution is:

f(x) = (N choose x) θ^x (1 − θ)^(N−x), x = 0, 1, ..., N

where θ is the probability of success. The binomial distribution is used to describe
the total number of successes in a sequence of N independent Bernoulli trials. The parameter
estimates for the binomial distribution using the method of moments (see Johnson, Kotz, and
Kemp (2005) for details) are:

N̂ = x̄² / (x̄ − s²) if x̄ > s², and N̂ = NaN otherwise
θ̂ = x̄ / N̂

where NaN implies that the binomial distribution would not be an appropriate distribution to fit
the data under this criterion. If N̂ is not an integer, then the parameter estimates are:

N̂ = ⌊x̄² / (x̄ − s²)⌋, θ̂ = x̄ / N̂

where ⌊·⌋ denotes the integer part.
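The moment-based binomial fit can be sketched as follows; the integer-part step follows the rule described above, and a None return stands in for the NaN case.

```python
from math import floor

def fit_binomial_moments(mean, var):
    """Method-of-moments estimates (N, theta) for the binomial
    distribution. Returns None when var >= mean, signalling (like the
    NaN case above) that the binomial is not an appropriate fit."""
    if mean <= 0 or var >= mean:
        return None
    n_hat = mean * mean / (mean - var)
    if n_hat != int(n_hat):
        n_hat = floor(n_hat)        # take integer part, then re-estimate theta
    theta_hat = mean / n_hat
    return int(n_hat), theta_hat

# mean 4, variance 2 corresponds exactly to N = 8, theta = 0.5
est = fit_binomial_moments(4.0, 2.0)
```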
Categorical distribution: parameter estimation
The categorical distribution can be considered a special case of the multinomial distribution in
which N = 1. Suppose xi, i = 1, 2, ..., n, has the categorical distribution and its categorical values
are denoted as 1, 2, ..., J. Then an indicator variable of xi for category j can be denoted as

Iij = 1 if xi = j, and Iij = 0 otherwise

and the corresponding probability is θj. Then the probability mass function for a random variable
xi with the categorical distribution can be described based on Iij and θj as follows:

f(xi) = Π(j=1..J) θj^Iij, with Σ(j=1..J) θj = 1

The parameter estimates for θj, j = 1, ..., J, are:

θ̂j = (Σ(i=1..n) wi·Iij) / W
Poisson distribution: parameter estimation
The probability mass function for a random variable x with a Poisson distribution is:

f(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ...

where λ is the rate parameter of the Poisson distribution. The parameter of the Poisson
distribution can be estimated as:

λ̂ = x̄
Negative binomial distribution: parameter estimation
The distribution fitting component for simulation supports the parameterization of the negative
binomial distribution that describes the distribution of the number of failures x before the
rth success. For this parameterization, the probability mass function for a random variable x is:

f(x) = (Γ(r + x) / (Γ(r) x!)) θ^r (1 − θ)^x, for x = 0, 1, 2, ...

where r > 0 and 0 < θ ≤ 1 are the two distribution parameters. There is no closed-form solution
for the parameters r and θ, so the Newton-Raphson method with step-halving will be used. The
method requires the following information:

(1) The log likelihood function

ℓ = Σ(i=1..n) wi [ln Γ(r + xi) − ln Γ(r) − ln(xi!) + r ln θ + xi ln(1 − θ)]

(2) The gradient (1st derivative) vector with respect to r and θ

∂ℓ/∂r = Σ(i=1..n) wi [ψ(r + xi) − ψ(r)] + W ln θ
∂ℓ/∂θ = W·r/θ − Σ(i=1..n) wi·xi / (1 − θ)

where ψ(α) = Γ′(α)/Γ(α) is the digamma function, which is the derivative of the logarithm of the
gamma function, evaluated at α.

(3) The Hessian (2nd derivative) matrix with respect to r and θ (since the Hessian matrix is
symmetric, only the lower triangular portion is displayed)

∂²ℓ/∂r² = Σ(i=1..n) wi [ψ′(r + xi) − ψ′(r)]
∂²ℓ/∂θ∂r = W/θ
∂²ℓ/∂θ² = −W·r/θ² − Σ(i=1..n) wi·xi / (1 − θ)²

where ψ′ is the trigamma function, or the derivative of the digamma function.

(4) The initial values of θ and r can be obtained from the closed-form estimates using the method
of moments:

θ̂ = x̄ / s², r̂ = x̄² / (s² − x̄), if s² > x̄

If s² ≤ x̄, the sample does not display the overdispersion required by this model and the moment
estimates above are not defined.
Note
An alternative parameterization of the negative binomial distribution describes the distribution of
the number of trials x before the rth success. Although it is not supported in distribution fitting, it is
supported in simulation when explicitly specified by the user. The probability mass function for
this parameterization, for a random variable x, is:

f(x) = ((x − 1) choose (r − 1)) θ^r (1 − θ)^(x−r), for x = r, r + 1, ...

where r and θ are the two distribution parameters.
Continuous distributions
Distribution fitting is supported for the following continuous distributions: triangular, uniform,
normal, lognormal, exponential, beta, gamma and Weibull.
Triangular distribution: parameter estimation
The probability density function for a random variable x with a triangular distribution is:

f(x) = 2(x − a) / ((b − a)(c − a)) for a ≤ x ≤ c
f(x) = 2(b − x) / ((b − a)(b − c)) for c < x ≤ b

such that a ≤ c ≤ b. Parameter estimates of the triangular distribution are:

â = min(xi), b̂ = max(xi), ĉ = mode(xi)

Since the calculation of the mode for continuous data may be ambiguous, we transform the
parameter estimates and use the method of moments as follows (see Kotz and Rene van Dorp
(2004) for details). From the method of moments we obtain

x̄ = (a + b + c) / 3

from which it follows that

ĉ = 3x̄ − â − b̂

Note: For very skewed data or if the actual mode equals a or b, the estimated mode ĉ may be
less than a or greater than b. In this case, the adjusted mode, defined as below, is used:

ĉ(adjusted) = â if ĉ < â
ĉ(adjusted) = b̂ if ĉ > b̂
Uniform distribution: parameter estimation
The probability density function for a random variable x with a uniform distribution is:

f(x) = 1 / (b − a), a ≤ x ≤ b

where a is the minimum and b is the maximum among the values of xi. Hence, the parameter
estimates of the uniform distribution are:

â = min(xi), b̂ = max(xi)
Normal distribution: parameter estimation
The probability density function for a random variable x with a normal distribution is:

f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))

Here, μ is the measure of centrality and σ is the measure of dispersion of the normal distribution.
The parameter estimates of the normal distribution are:

μ̂ = x̄, σ̂ = s
Lognormal distribution: parameter estimation
The lognormal distribution is a probability distribution in which the natural logarithm of a random
variable follows a normal distribution. In other words, if x has a lognormal(θ, σ) distribution,
then ln(x) has a normal(ln(θ), σ) distribution. The probability density function for a random
variable x with a lognormal distribution is:

f(x) = (1 / (xσ√(2π))) exp(−(ln x − ln θ)² / (2σ²)), x > 0

Define zi = ln(xi). Parameter estimates for the lognormal distribution are:

θ̂ = exp(z̄), σ̂ = s_z

where z̄ and s_z are the sample mean and sample standard deviation of the zi.
Exponential distribution: parameter estimation
The probability density function for a random variable x with an exponential distribution is:

f(x) = λ e^(−λx) for x ≥ 0 and λ > 0

The estimate of the parameter λ for the exponential distribution is:

λ̂ = 1 / x̄
Beta distribution: parameter estimation
The probability density function for a random variable x with a beta distribution is:

f(x) = (1 / B(α, β)) x^(α−1) (1 − x)^(β−1), 0 ≤ x ≤ 1

where

B(α, β) = Γ(α)Γ(β) / Γ(α + β)

There is no closed-form solution for the parameters α and β, so the Newton-Raphson method with
step-halving will be used. The method requires the following information:

(1) The log likelihood function

ℓ = Σ(i=1..n) wi [(α − 1) ln xi + (β − 1) ln(1 − xi)] + W [ln Γ(α + β) − ln Γ(α) − ln Γ(β)]

(2) The gradient (1st derivative) vector with respect to α and β

∂ℓ/∂α = Σ(i=1..n) wi ln xi + W [ψ(α + β) − ψ(α)]
∂ℓ/∂β = Σ(i=1..n) wi ln(1 − xi) + W [ψ(α + β) − ψ(β)]

where ψ(α) = Γ′(α)/Γ(α) is the digamma function, which is the derivative of the logarithm of the
gamma function, evaluated at α.

(3) The Hessian (2nd derivative) matrix with respect to α and β (since the Hessian matrix is
symmetric, only the lower triangular portion is displayed)

∂²ℓ/∂α² = W [ψ′(α + β) − ψ′(α)]
∂²ℓ/∂β∂α = W ψ′(α + β)
∂²ℓ/∂β² = W [ψ′(α + β) − ψ′(β)]

where ψ′ is the trigamma function, or the derivative of the digamma function.

(4) The initial values of α and β can be obtained from the closed-form estimates using the method
of moments:

α̂ = x̄ (x̄(1 − x̄)/s² − 1), β̂ = (1 − x̄)(x̄(1 − x̄)/s² − 1)
Gamma distribution: parameter estimation
The probability density function for a random variable x with a gamma distribution is:

f(x) = (β^α / Γ(α)) x^(α−1) e^(−βx) for x > 0, α > 0 and β > 0

If α is a positive integer, then the gamma function is given by Γ(α) = (α − 1)!

There is no closed-form solution for the parameters α and β, so the Newton-Raphson method with
step-halving will be used. The method requires the following information:

(1) The log likelihood function

ℓ = W α ln β − W ln Γ(α) + (α − 1) Σ(i=1..n) wi ln xi − β Σ(i=1..n) wi xi

(2) The gradient (1st derivative) vector with respect to α and β

∂ℓ/∂α = W ln β − W ψ(α) + Σ(i=1..n) wi ln xi
∂ℓ/∂β = W α / β − Σ(i=1..n) wi xi

where ψ(α) = Γ′(α)/Γ(α) is the digamma function, which is the derivative of the logarithm of the
gamma function, evaluated at α.

(3) The Hessian (2nd derivative) matrix with respect to α and β (since the Hessian matrix is
symmetric, only the lower triangular portion is displayed)

∂²ℓ/∂α² = −W ψ′(α)
∂²ℓ/∂β∂α = W / β
∂²ℓ/∂β² = −W α / β²

where ψ′ is the trigamma function, or the derivative of the digamma function.

(4) The initial values of α and β can be obtained from the closed-form estimates using the method
of moments:

α̂ = x̄² / s², β̂ = x̄ / s²
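Starting from moment-based initial values, the gamma shape parameter can be refined by Newton-Raphson with step-halving. The sketch below solves the profiled likelihood equation ln α − ψ(α) = ln x̄ − mean(ln x) (an equivalent reduction of the gradient equations, with β profiled out), using simple asymptotic approximations for the digamma and trigamma functions rather than library calls.

```python
from math import log

def digamma(x):
    """Digamma via upward recurrence plus an asymptotic series."""
    r = 0.0
    while x < 6.0:                  # shift argument up so the series is accurate
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def trigamma(x):
    """Trigamma via upward recurrence plus an asymptotic series."""
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return r + inv * (1 + 0.5 * inv + inv2 * (1/6 - inv2 * (1/30 - inv2 / 42)))

def solve_gamma_shape(s, alpha0, tol=1e-10, max_iter=100):
    """Solve ln(alpha) - digamma(alpha) = s by Newton-Raphson with
    step-halving to keep alpha positive and the residual decreasing."""
    alpha = alpha0
    for _ in range(max_iter):
        f = log(alpha) - digamma(alpha) - s
        if abs(f) < tol:
            break
        fprime = 1.0 / alpha - trigamma(alpha)
        step = f / fprime
        new = alpha - step
        while new <= 0 or abs(log(new) - digamma(new) - s) > abs(f):
            step *= 0.5             # step-halving safeguard
            new = alpha - step
        alpha = new
    return alpha

# refine the shape for an illustrative right-hand side s = 0.2,
# starting from a hypothetical moment estimate alpha0 = 1.0
alpha = solve_gamma_shape(0.2, alpha0=1.0)
```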
Weibull distribution: parameter estimation
Distribution fitting for the Weibull distribution is restricted to the two-parameter Weibull
distribution, whose probability density function is given by:

f(x) = (γ / β) (x / β)^(γ−1) exp(−(x/β)^γ) for x > 0, β > 0 and γ > 0

There is no closed-form solution for the parameters β and γ, so the Newton-Raphson method with
step-halving will be used. The method requires the following information:

(1) The log likelihood function

ℓ = Σ(i=1..n) wi [ln γ − γ ln β + (γ − 1) ln xi − (xi/β)^γ]

(2) The gradient (1st derivative) vector with respect to β and γ

∂ℓ/∂β = (γ / β) [Σ(i=1..n) wi (xi/β)^γ − W]
∂ℓ/∂γ = W/γ − W ln β + Σ(i=1..n) wi ln xi − Σ(i=1..n) wi (xi/β)^γ ln(xi/β)

(3) The Hessian (2nd derivative) matrix with respect to β and γ (since the Hessian matrix is
symmetric, only the lower triangular portion is displayed)

∂²ℓ/∂β² = (γ / β²) [W − (γ + 1) Σ(i=1..n) wi (xi/β)^γ]
∂²ℓ/∂γ∂β = (1/β) [Σ(i=1..n) wi (xi/β)^γ − W] + (γ/β) Σ(i=1..n) wi (xi/β)^γ ln(xi/β)
∂²ℓ/∂γ² = −W/γ² − Σ(i=1..n) wi (xi/β)^γ (ln(xi/β))²

(4) The initial values of β and γ are given by:
Goodness of fit measures
Goodness of fit measures are used to determine the distribution that most closely fits the
data. For discrete distributions, the Chi-Square test is used. For continuous distributions, the
Anderson-Darling test or the Kolmogorov-Smirnov test is used.
Discrete distributions
The Chi-Square goodness of fit test is used for discrete distributions (Kroese, Taimre, and Botev,
2011). The Chi-Square test statistic has the following form:

X² = Σ(i=1..k) (Oi − Ei)² / Ei

where:
Table 33-2 Notation
k — The number of classes, as defined in the table below for each discrete distribution
Oi — The total observed frequency for class i
PDF(i) — Probability density function of the fitted distribution. For the Poisson and negative binomial distributions, the density function for the last class is computed as PDF(k) = 1 − Σ(i=1..k−1) PDF(i)
Ei — Expected frequency for class i: Ei = W·PDF(i)
W — The total effective sample size
For large W, the above statistic follows the Chi-Square distribution with k − 1 − r degrees of
freedom, where r is the number of parameters estimated from the data. The following table
provides the values of k and r for the various distributions. The value Max in the table is the
observed maximum value.

Distribution — k (classes) — r (parameters)
Binomial — N + 1 — 2
Categorical — J — J − 1
Poisson — Max + 1 — 1
Negative binomial — Max + 1 — 2
This Chi-Square test is valid only if all values of Ei ≥ 5.
The p-value for the Chi-Square test is then calculated as:

p = 1 − F(X²; k − 1 − r)

where F(·; k − 1 − r) is the CDF of the Chi-Square distribution with k − 1 − r degrees of freedom.
Note: The p-value cannot be calculated for the Categorical distribution since the number of
degrees of freedom is zero.
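The statistic itself is straightforward to compute once the observed class frequencies and the fitted class probabilities are in hand; the four-class example below uses illustrative numbers.

```python
def chi_square_statistic(observed, pdf, W):
    """Chi-square goodness-of-fit statistic sum_i (O_i - E_i)^2 / E_i,
    with expected counts E_i = W * PDF(i).

    observed: list of class frequencies O_i
    pdf: list of fitted class probabilities PDF(i)
    W: total effective sample size"""
    stat = 0.0
    for o, p in zip(observed, pdf):
        e = W * p
        stat += (o - e) ** 2 / e
    return stat

# a perfect fit gives 0; a lopsided sample gives a positive statistic
perfect = chi_square_statistic([25, 25, 25, 25], [0.25] * 4, 100)
stat = chi_square_statistic([40, 20, 20, 20], [0.25] * 4, 100)
```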
Continuous distributions
For continuous distributions, the Anderson-Darling test or the Kolmogorov-Smirnov test is used
to determine goodness of fit. The calculation consists of the following steps:
1. Transform the data to a Uniform(0,1) distribution
2. Sort the transformed data to generate the Order Statistics
3. Calculate the Anderson-Darling or Kolmogorov-Smirnov test statistic
4. Compute the approximate p-value associated with the test statistic
The first two steps are common to both the Anderson-Darling and Kolmogorov-Smirnov tests.
The original data are transformed to a Uniform(0,1) distribution using the transformation:

ui = F(xi)

where the transformation function F(x) is given in the table below for each of the supported
distributions.

Distribution — Transformation F(x)
Triangular — (x − a)² / ((b − a)(c − a)) for a ≤ x ≤ c; 1 − (b − x)² / ((b − a)(b − c)) for c < x ≤ b
Uniform — (x − a) / (b − a)
Normal — Φ((x − μ) / σ)
Lognormal — Φ((ln x − ln θ) / σ)
Exponential — 1 − e^(−λx)
Beta — Ix(α, β), the regularized incomplete beta function
Gamma — P(α, βx), the regularized incomplete gamma function
Weibull — 1 − exp(−(x/β)^γ)

The transformed data points ui are sorted in ascending order to generate the order statistics:

u(1) ≤ u(2) ≤ ... ≤ u(n)

Define w(i) to be the frequency weight corresponding to u(i). The cumulative frequency up to and
including u(i) is defined as:

Wi = Σ(j=1..i) w(j)

and where we define W0 = 0.
Anderson-Darling test
The Anderson-Darling test statistic, in the case where every frequency weight equals 1, is given by:

A² = −n − (1/n) Σ(i=1..n) (2i − 1) [ln u(i) + ln(1 − u(n+1−i))]

For more information, see the topic “Anderson-Darling statistic with frequency weights” on
p. 360.
The approximate p-value for the Anderson-Darling statistic can be computed for the following
distributions: uniform, normal, lognormal, exponential, Weibull and gamma. The p-value is not
available for the triangular and beta distributions.
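Steps 1 through 3 can be sketched for the unweighted case (all frequency weights equal to 1). Here cdf is any fitted CDF; the exponential CDF with rate 1 is used purely for illustration.

```python
from math import log, exp

def ad_and_ks(data, cdf):
    """Transform data through the fitted CDF, sort, and return the
    unweighted Anderson-Darling and Kolmogorov-Smirnov statistics."""
    u = sorted(cdf(x) for x in data)
    n = len(u)
    # Anderson-Darling: A^2 = -n - (1/n) sum (2i-1)[ln u_(i) + ln(1-u_(n+1-i))]
    a2 = -n - sum((2 * i - 1) * (log(u[i - 1]) + log(1 - u[n - i]))
                  for i in range(1, n + 1)) / n
    # Kolmogorov-Smirnov: largest gap between empirical and fitted CDFs
    d = max(max(i / n - u[i - 1], u[i - 1] - (i - 1) / n)
            for i in range(1, n + 1))
    return a2, d

expo_cdf = lambda x: 1.0 - exp(-x)      # exponential(rate = 1) CDF
a2, d = ad_and_ks([0.1, 0.3, 0.5, 0.9, 1.4, 2.2], expo_cdf)
```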
Uniform distribution: p-value
The p-value for the Anderson-Darling statistic is computed based on the asymptotic distribution
provided by Marsaglia and Marsaglia (2004).
Normal and lognormal distributions: p-value
The p-value for the Anderson-Darling statistic is computed based on a result provided
by D’Agostino and Stephens (1986).
Exponential distribution: p-value
The p-value for the Anderson-Darling statistic is computed based on a result provided
by D’Agostino and Stephens (1986).
Weibull distribution: p-value
The p-value for the Anderson-Darling statistic is computed based on Table 33-3 below, provided by
D’Agostino and Stephens (1986). First, the adjusted Anderson-Darling statistic A* is computed
from A². If the value of A* falls between two critical values in the table, then linear interpolation
between the two corresponding upper tail probabilities is used to estimate the p-value.
If the value of A* is less than the smallest critical value in the table, then the p-value is 0.25; and
if A* is greater than the largest critical value in the table, then the p-value is 0.01.
Table 33-3
Upper tail probability and corresponding critical values for the Anderson-Darling test, for the Weibull
distribution
p-value: 0.25, 0.10, 0.05, 0.025, 0.01
Critical value: 0.474, 0.637, 0.757, 0.877, 1.038
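The interpolation rule, including the clamping at the two ends of the table, can be sketched as:

```python
def p_value_from_table(stat, crit, pvals):
    """Linear interpolation of a p-value from critical-value tables such
    as Table 33-3. crit: ascending critical values; pvals: the matching
    upper tail probabilities (descending). Clamps at the boundaries."""
    if stat <= crit[0]:
        return pvals[0]
    if stat >= crit[-1]:
        return pvals[-1]
    for i in range(len(crit) - 1):
        if crit[i] <= stat <= crit[i + 1]:
            frac = (stat - crit[i]) / (crit[i + 1] - crit[i])
            return pvals[i] + frac * (pvals[i + 1] - pvals[i])

# Table 33-3 (Anderson-Darling, Weibull distribution)
crit = [0.474, 0.637, 0.757, 0.877, 1.038]
pvals = [0.25, 0.10, 0.05, 0.025, 0.01]
p = p_value_from_table(0.474, crit, pvals)
```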
Gamma distribution: p-value
Table 33-4, which is provided by D’Agostino and Stephens (1986), is used to compute the p-value
of the Anderson-Darling test for the gamma distribution. First, the appropriate row in the table
is determined from the range of the parameter α. Then linear interpolation is used to compute
the p-value, as done for the Weibull distribution. For more information, see the topic “Weibull
distribution: p-value” on p. 356.
If the test statistic is less than the smallest critical value in the row, then the p-value is 0.25; and
if the test statistic is greater than the largest critical value in the row, then the p-value is 0.005.
Table 33-4
Upper tail probability and corresponding critical values for the Anderson-Darling test, for the gamma
distribution with estimated parameter α
p-value: 0.25, 0.10, 0.05, 0.025, 0.01, 0.005
α ≤ 1: 0.486, 0.657, 0.786, 0.917, 1.092, 1.227
1 < α < 8: 0.473, 0.637, 0.759, 0.883, 1.048, 1.173
α ≥ 8: 0.470, 0.631, 0.752, 0.873, 1.035, 1.159
Kolmogorov-Smirnov test
The Kolmogorov-Smirnov test statistic, D, is given by:

D = max(i=1..n) { Wi/W − u(i), u(i) − Wi−1/W }
Computation of the p-value is based on the modified Kolmogorov-Smirnov statistic, which is
distribution specific.
Uniform distribution: p-value
The procedure proposed by Kroese, Taimre, and Botev (2011) is used to compute the p-value
of the Kolmogorov-Smirnov statistic for the uniform distribution. First, the modified
Kolmogorov-Smirnov statistic D* is computed from D.
The corresponding p-value is computed by accumulating the alternating series of the asymptotic
Kolmogorov distribution:
1. Set k = 1 and P = 0.
2. Define ak = (−1)^(k−1) exp(−2k²(D*)²).
3. Calculate P = P + 2ak.
4. If |ak| is larger than the tolerance, set k = k + 1 and repeat from step 2; otherwise, go to step 5.
5. p-value = P.
Normal and lognormal distributions: p-value
The modified Kolmogorov-Smirnov statistic is D* = D(√W − 0.01 + 0.85/√W).
The p-value for the Kolmogorov-Smirnov statistic is computed based on Table 33-5 below,
provided by D’Agostino and Stephens (1986). If the value of D is between two probability
levels, then linear interpolation is used to estimate the p-value. For more information, see the
topic “Weibull distribution: p-value” on p. 356.
If D is less than the smallest critical value in the table, then the p-value is 0.15; and if D is
greater than the largest critical value in the table, then the p-value is 0.01.
Table 33-5
Upper tail probability and corresponding critical values for the Kolmogorov-Smirnov test, for the
Normal and Lognormal distributions
p-value: 0.15, 0.10, 0.05, 0.025, 0.01
D: 0.775, 0.819, 0.895, 0.995, 1.035
Exponential distribution: p-value
The modified Kolmogorov-Smirnov statistic is D* = (D − 0.2/W)(√W + 0.26 + 0.5/√W).
The p-value for the Kolmogorov-Smirnov statistic is computed based on Table 33-6 below,
provided by D’Agostino and Stephens (1986). If the value of D is between two probability
levels, then linear interpolation is used to estimate the p-value. For more information, see the
topic “Weibull distribution: p-value” on p. 356.
If D is less than the smallest critical value in the table, then the p-value is 0.15; and if D is
greater than the largest critical value in the table, then the p-value is 0.01.
Table 33-6
Upper tail probability and corresponding critical values for the Kolmogorov-Smirnov test, for the
Exponential distribution
p-value: 0.15, 0.10, 0.05, 0.025, 0.01
D: 0.926, 0.995, 1.094, 1.184, 1.298
Weibull distribution: p-value
The modified Kolmogorov-Smirnov statistic is
The p-value for the Kolmogorov-Smirnov statistic is computed based on Table 33-7 below,
provided by D’Agostino and Stephens (1986). If the value of D is between two probability
levels, then linear interpolation is used to estimate the p-value. For more information, see the
topic “Weibull distribution: p-value” on p. 356.
If D is less than the smallest critical value in the table, then the p-value is 0.10; and if D is
greater than the largest critical value in the table, then the p-value is 0.01.
Table 33-7
Upper tail probability and corresponding critical values for the Kolmogorov-Smirnov test, for the
Weibull distribution
p-value: 0.10, 0.05, 0.025, 0.01
D: 1.372, 1.477, 1.557, 1.671
Gamma distribution: p-value
The modified Kolmogorov-Smirnov statistic is
The p-value for the Kolmogorov-Smirnov statistic is computed based on Table 33-8 below,
provided by D’Agostino and Stephens (1986). If the value of D is between two probability
levels, then linear interpolation is used to estimate the p-value. For more information, see the
topic “Weibull distribution: p-value” on p. 356.
If D is less than the smallest critical value in the table, then the p-value is 0.25; and if D is
greater than the largest critical value in the table, then the p-value is 0.005.
Table 33-8
Upper tail probability and corresponding critical values for the Kolmogorov-Smirnov test, for the
Gamma distribution
p-value: 0.25, 0.20, 0.15, 0.10, 0.05, 0.025, 0.01, 0.005
D: 0.74, 0.780, 0.800, 0.858, 0.928, 0.990, 1.069, 1.13
Determining the recommended distribution
The distribution fitting module is invoked by the user, who may specify an explicit set of
distributions to test or rely on the default set, which is determined from the measurement level
of the input to be fit. For continuous inputs, the user specifies either the Anderson-Darling test
(the default) or the Kolmogorov-Smirnov test for the goodness of fit measure (for ordinal and
nominal inputs, the Chi-Square test is always used). The distribution fitting module then returns
the values of the specified test statistic along with the calculated p-values (if available) for each of
the tested distributions, which are then presented to the user in ascending order of the test statistic.
The recommended distribution is the one with the minimum value of the test statistic.
The above approach yields the distribution that most closely fits the data. However, if the p-value
of the recommended distribution is less than 0.05, then the recommended distribution may not
provide a close fit to the data.
Anderson-Darling statistic with frequency weights
To obtain the expression for the Anderson-Darling statistic with frequency weights, we first give
the expression for the case where the frequency weight of each value is 1:

A² = −n − (1/n) Σ(i=1..n) (2i − 1) [ln u(i) + ln(1 − u(n+1−i))]

If there is a frequency weight variable, then the corresponding terms of the above expression are
replaced by their frequency-weighted analogues, where u(i) and Wi are defined in the section on
goodness of fit measures for continuous distributions. For more information, see the topic
“Continuous distributions” on p. 353.
References
D’Agostino, R., and M. Stephens. 1986. Goodness-of-Fit Techniques. New York: Marcel Dekker.
Johnson, N. L., S. Kotz, and A. W. Kemp. 2005. Univariate Discrete Distributions, 3rd ed.
Hoboken, New Jersey: John Wiley & Sons.
Kotz, S., and J. Rene Van Dorp. 2004. Beyond Beta, Other Continuous Families of Distributions
with Bounded Support and Applications. Singapore: World Scientific Press.
Kroese, D. P., T. Taimre, and Z. I. Botev. 2011. Handbook of Monte Carlo Methods. Hoboken,
New Jersey: John Wiley & Sons.
Marsaglia, G., and J. Marsaglia. 2004. Evaluating the Anderson-Darling Distribution. Journal of
Statistical Software, 9:2.
Simulation algorithms: run simulation
Running a simulation involves generating data for each of the simulated inputs, evaluating the
predictive model based on the simulated data (along with values for any fixed inputs), and
calculating metrics based on the model results.
Generating correlated data
Simulated values of input variables are generated so as to account for any correlations between
pairs of variables. This is accomplished using the NORTA (Normal-To-Anything) method
described by Biller and Ghosh (2006). The central idea is to transform standard multivariate
normal variables to variables with the desired marginal distributions and Pearson correlation
matrix.

Suppose that the desired variables are X1, ..., Xd, with the desired Pearson correlation
matrix ΣX, where the elements of ΣX are given by ρ(Xi, Xj). Then the NORTA algorithm is as follows:

1. For each pair Xi and Xj, where i ≠ j, use a stochastic root finding algorithm (described in the
following section) and the correlation ρ(Xi, Xj) to search for an approximate correlation ρ(Zi, Zj)
of standard bivariate normal variables.
2. Construct the symmetric matrix ΣZ whose elements are given by ρ(Zi, Zj), where ρ(Zi, Zi) = 1.
3. Generate the standard multivariate normal variables Z1, ..., Zd with Pearson correlation matrix ΣZ.
4. Transform the variables Zi to Xi using

Xi = Fi⁻¹(Φ(Zi))

where Fi is the desired marginal cumulative distribution, and Φ is the cumulative standard
normal distribution function. Then the correlation matrix of X1, ..., Xd will be close to the
desired Pearson correlation matrix ΣX.
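For a single pair of variables, the generation and the Step 4 transformation can be sketched as follows. The normal correlation is used directly here; in full NORTA it would first be refined by the root finding step.

```python
import random
from math import log, sqrt
from statistics import NormalDist

def norta_pair(n, rho_z, inv_cdf1, inv_cdf2, seed=3):
    """Generate n pairs: draw standard bivariate normals with correlation
    rho_z, then map each margin through Phi and the desired inverse CDF.
    A refined rho_z would normally come from the root finding step."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        # conditional construction gives corr(z1, z2) = rho_z
        z2 = rho_z * z1 + sqrt(1 - rho_z ** 2) * rng.gauss(0, 1)
        out.append((inv_cdf1(phi(z1)), inv_cdf2(phi(z2))))
    return out

# uniform(0,1) and exponential(1) margins as illustrative inverse CDFs
pairs = norta_pair(2000, 0.8, lambda u: u, lambda u: -log(1 - u))
```

The resulting Pearson correlation of the transformed pairs is close to, but not exactly, 0.8, which is precisely why the stochastic root finding step exists.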
Stochastic root finding algorithm
Given a correlation ρ(Xi, Xj), a stochastic root finding algorithm is used to find an approximate
correlation ρ(Zi, Zj) such that if standard bivariate normal variables Zi and Zj have the Pearson
correlation ρ(Zi, Zj), then after transforming Zi and Zj to Xi and Xj (using the transformation
described in Step 4 of the previous section) the Pearson correlation between Xi and Xj is close
to ρ(Xi, Xj). The stochastic root finding algorithm is as follows:

1. Let LowCorr = −1 and HighCorr = 1.
2. Simulate N samples of the standard normal pairs (ZiL, ZjL) and (ZiH, ZjH), such that the
Pearson correlation between ZiL and ZjL is LowCorr and the Pearson correlation between ZiH
and ZjH is HighCorr. The sample size N is set to 1000.
3. Transform the variables ZiL, ZjL, ZiH and ZjH to the variables XiL, XjL, XiH and XjH
using the transformation described in Step 4 of the previous section.
4. Compute the Pearson correlation between XiL and XjL and denote it as ρL. Similarly, compute
the Pearson correlation between XiH and XjH and denote it as ρH.
5. If the desired correlation ρ(Xi, Xj) ≤ ρL or ρ(Xi, Xj) ≥ ρH, then stop and set ρ(Zi, Zj) = LowCorr
if ρ(Xi, Xj) ≤ ρL, or set ρ(Zi, Zj) = HighCorr if ρ(Xi, Xj) ≥ ρH. Otherwise go to Step 6.
6. Simulate N samples of standard bivariate normal variables Zi and Zj with a Pearson
correlation of MidCorr = (LowCorr + HighCorr)/2. As in Steps 3 and 4, transform Zi and Zj
to Xi and Xj and compute the Pearson correlation between Xi and Xj, which
will be denoted ρM.
7. If |ρM − ρ(Xi, Xj)| < ε or HighCorr − LowCorr < ε, where ε is the tolerance level (set to 0.01), then
stop and set ρ(Zi, Zj) = MidCorr. Otherwise go to Step 8.
8. If ρ(Xi, Xj) > ρM, set LowCorr = MidCorr, else set HighCorr = MidCorr, and return to Step 6.
Inverse CDF for binomial, Poisson and negative binomial distributions
Use of the NORTA method for generating correlated data requires the inverse cumulative
distribution function for each desired marginal distribution. This section describes the method for
computing the inverse CDF for the binomial, Poisson and negative binomial distributions. Two
parameterizations of the negative binomial distribution are supported. The first parameterization
describes the distribution of the number of trials before the th success, whereas the second
parameterization describes the distribution of the number of failures before the th success.
The choice of method for determining the CDF depends on the mean of the distribution. If
mean > Threshold, where Threshold is set to 20, the following approximate normal method will be
used to compute the inverse CDF for the binomial distribution, the Poisson distribution and the
second parameterization of the negative binomial distribution:

x = ⌊μ + σz⌋

For the first parameterization of the negative binomial distribution, the formula is as follows:

x = r + ⌊μ + σz⌋

The parameters μ and σ are given by:
- Binomial distribution. μ = NP and σ = √(NP(1 − P)), where N is the number of trials and P is the probability of success.
- Poisson distribution. μ = λ and σ = √λ, where λ is the rate parameter.
- Negative binomial distribution (both parameterizations). μ = r(1 − θ)/θ and σ = √(r(1 − θ))/θ, where r is the specified number of successes and θ is the probability of success.

The notation ⌊·⌋ used above denotes the integer part.

If mean ≤ Threshold, then the bisection method will be used.
Suppose that x and z are the values of X and Z respectively, where X is a random variable with a
binomial, Poisson or negative binomial distribution, and Z is a random variable with the standard
normal distribution. The objective function f(x, z) to be used in the bisection search method is
as follows:

 Binomial distribution. f(x, z) = F(x; N, P) − Φ(z), where F(·; N, P) is the binomial CDF.
 Poisson distribution. f(x, z) = P(X_gamma > λ) − Φ(z)
 Negative binomial distribution (second parameterization). f(x, z) = P(X_beta ≤ P) − Φ(z)

where X_beta and X_gamma are random variables with the beta distribution and gamma
distribution, respectively: X_beta has parameters r and x + 1, and X_gamma has shape parameter
x + 1 and unit scale.
The bisection method is as follows:

1. If f(0, z) ≥ 0, then stop and set x = 0. Otherwise go to Step 2 to determine two values x_L and x_U such that f(x_L, z) < 0 and f(x_U, z) ≥ 0.

2. If z ≤ 0, then let x_L = 0 and x_U = [μ]. If z > 0, then let x_L = [μ] and let x_U be the minimum integer of the form [2^k μ], k = 1, 2, …, such that f(x_U, z) ≥ 0.

3. Let x_M = [(x_L + x_U)/2]. If |f(x_M, z)| ≤ ε or x_U − x_L ≤ 1, where ε is a tolerance level, then stop and set x = x_M. Otherwise go to Step 4.

4. If f(x_M, z) < 0, let x_L = x_M, else let x_U = x_M, and return to Step 3.

Note: The inverse CDF for the first parameterization of the negative binomial distribution is
determined by taking the inverse CDF for the second parameterization and adding the distribution
parameter r, where r is the specified number of successes.
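The two-branch scheme (normal approximation above the threshold, integer bisection below it) can be sketched for the Poisson case as follows. The function names and the bracket-doubling strategy are illustrative assumptions, not the product's internal code.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_cdf(x, lam):
    """P(X <= x) for a Poisson(lam) variable, x a nonnegative integer."""
    term, total = math.exp(-lam), 0.0
    for k in range(int(x) + 1):
        total += term
        term *= lam / (k + 1)
    return total

def poisson_inverse_cdf(z, lam, threshold=20):
    """Map a standard normal value z to a Poisson quantile, using the
    approximate normal method when the mean exceeds the threshold and a
    bisection search on f(x, z) = F(x) - Phi(z) otherwise."""
    if lam > threshold:
        # integer part of mu + sigma * z, floored at 0
        return max(0, int(lam + math.sqrt(lam) * z))
    target = phi(z)
    if poisson_cdf(0, lam) >= target:
        return 0
    # expand an upper bracket, then bisect on integers
    lo, hi = 0, max(1, int(lam))
    while poisson_cdf(hi, lam) < target:
        lo, hi = hi, 2 * hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if poisson_cdf(mid, lam) < target:
            lo = mid
        else:
            hi = mid
    return hi
```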
Sensitivity measures
Sensitivity measures provide information on the relationship between the values of a target and
the values of the simulated inputs that give rise to the target. The following sensitivity measures
are supported (and rendered as Tornado charts in the output of the simulation):

Correlation. Measures the Pearson correlation between a target and a simulated input.

One-at-a-time measure. Measures the effect on the target of modulating a simulated input by
plus or minus a specified number of standard deviations of the input.

Contribution to variance. Measures the contribution to the variance of the target from a
simulated input.
Notation

The following notation is used throughout this section unless otherwise stated:

Table 33-9
Notation

Notation | Description
N | Number of records of simulated data
X | An N × J matrix of values of the inputs to the predictive model. The rows xᵢ, i = 1, …, N, contain the values of the inputs for each simulated record, excluding the target value. The columns Xⱼ, j = 1, …, J, represent the set of inputs.
Y | An N × 1 vector of values of the target variable, consisting of the values yᵢ
F | A known model which can generate Y from X
Sⱼ | The value of a sensitivity measure for the input Xⱼ
Correlation measure
The correlation measure is the Pearson correlation coefficient between the values of a target
and one of its simulated predictors. The correlation measure is not supported for targets with a
nominal measurement level or for simulated inputs with a categorical distribution.
One-at-a-time measure
The one-at-a-time measure is the change in the target due to modulating a simulated input by plus
or minus a specified number of standard deviations of the distribution associated with the input.
The one-at-a-time measure is not supported for targets with an ordinal or nominal measurement
level, or for simulated inputs with any of the following distributions: categorical, Bernoulli,
binomial, Poisson, or negative binomial.
The procedure is to modulate the values of a simulated input by the specified number of standard
deviations and recompute the target with the modulated values, without changing the values of
the other inputs. The mean change in the target is then taken to be the value of the one-at-a-time
sensitivity measure for that input.
For each simulated input Xⱼ for which the one-at-a-time measure is supported:

1. Define the temporary data matrix X_temp = X.
2. Add the specified number of standard deviations of the input's distribution to each value of Xⱼ in X_temp.
3. Calculate Y_temp = F(X_temp).
4. Calculate Sⱼ as the mean of Y_temp − Y.
5. Repeat Step 2, but now subtracting the specified number of standard deviations from each value of Xⱼ in X_temp. Continue with Steps 3 and 4 to obtain the value of Sⱼ in this case.
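The procedure can be sketched as follows; the model is any callable mapping the simulated input matrix to target values, and all names are illustrative:

```python
import numpy as np

def one_at_a_time(model, X, sds, k=1.0):
    """One-at-a-time sensitivity (sketch). For each input column j, shift it
    by +/- k standard deviations, re-evaluate the model with the other
    columns unchanged, and record the mean change in the target."""
    y = model(X)
    up, down = [], []
    for j in range(X.shape[1]):
        for sign, out in ((+1.0, up), (-1.0, down)):
            Xt = X.copy()                       # temporary data matrix
            Xt[:, j] += sign * k * sds[j]       # modulate input j only
            out.append(np.mean(model(Xt) - y))  # mean change in the target
    return np.array(up), np.array(down)
```

For a linear model the measure recovers the input's coefficient times the shift, which makes a convenient sanity check.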
Contribution to variance measure
The contribution to variance measure uses the method of Sobol (2001) to calculate the total
contribution to the variance of a target due to a simulated input. The total contribution to variance,
as defined by Sobol, automatically includes interaction effects between the input of interest
and the other inputs in the predictive model.
The contribution to variance measure is not supported for targets with an ordinal or nominal
measurement level, or for simulated inputs with any of the following distributions: categorical,
Bernoulli, binomial, Poisson, or negative binomial.
Let X′ be an additional set of simulated data, in the same form as X and with the same number
of simulated records.

Define the following:

f₀ = (1/N) Σ_{i=1}^N F(xᵢ),  D = (1/N) Σ_{i=1}^N F(xᵢ)² − f₀²

For each simulated input Xⱼ for which the contribution to variance measure is supported, calculate

D₋ⱼ = (1/N) Σ_{i=1}^N F(xᵢ) F(xᵢ⁽ʲ⁾) − f₀²

where:

 −j denotes the set of all inputs excluding Xⱼ
 X⁽ʲ⁾ is a derived data matrix where the column associated with Xⱼ is taken from X′ and the remaining columns (for all inputs excluding Xⱼ) are taken from X

The total contribution to variance from Xⱼ is then given by

Tⱼ = 1 − D₋ⱼ/D

Note: When interaction terms are present, the sum of the Tⱼ over all simulated inputs for which
the contribution to variance is supported may be greater than 1.
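The estimator above can be sketched as follows, assuming two independently simulated input matrices and a callable model (names are illustrative):

```python
import numpy as np

def total_contribution(model, X, X2):
    """Total contribution to variance (sketch of the Sobol-type estimator).
    X and X2 are two independent (N, p) samples of the simulated inputs;
    model maps an input matrix to a vector of target values."""
    y = model(X)
    f0 = y.mean()
    D = np.mean(y ** 2) - f0 ** 2            # total variance of the target
    T = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xmix = X.copy()
        Xmix[:, j] = X2[:, j]                # resample only column j
        D_noj = np.mean(y * model(Xmix)) - f0 ** 2
        T[j] = 1.0 - D_noj / D               # total effect of input j
    return T
```

For a target that depends only on the first input, the estimate is close to 1 for that input and close to 0 for the others.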
References
Biller, B., and S. Ghosh. 2006. Multivariate input processes. In: Handbooks in Operations
Research and Management Science: Simulation, B. L. Nelson, and S. G. Henderson, eds.
Amsterdam: Elsevier Science, 123–153.
Sobol, I. M. 2001. Global sensitivity indices for nonlinear mathematical models and their Monte
Carlo estimates. Mathematics and Computers in Simulation, 55, 271–280.
Support Vector Machine (SVM) Algorithms
Introduction to Support Vector Machine Algorithms
The Support Vector Machine (SVM) is a supervised learning method that generates input-output
mapping functions from a set of labeled training data. The mapping function can be either a
classification function or a regression function. For classification, nonlinear kernel functions are
often used to transform input data into a high-dimensional feature space in which the input data
become more separable compared to the original input space. Maximum-margin hyperplanes are
then created. The produced model depends on only a subset of the training data near the class
boundaries.
Similarly, the model produced by Support Vector Regression ignores any training data that is
sufficiently close to the model prediction. (Support Vectors can appear only on the error tube
boundary or outside the tube.)
SVM Algorithm Notation
x_i — The ith training sample
y_i — The class label for the ith training sample
l — The number of training samples
K(x_i, x_j) — The kernel function value for the pair of samples i, j
Q_ij — The kernel matrix element at row i and column j
α_i — Coefficients for training samples (zero for non-support vectors)
α_i, α_i* — Coefficients for training samples for support vector regression models
f(x) — Decision function
k — The number of classes of the training samples
C — The upper bound of all variables
e — The vector with all elements equal to 1
sgn(x) — The sign function: 1 if x > 0, −1 otherwise
SVM Types

This section describes the types of SVM available, based on the descriptions in the LIBSVM
technical report (Chang and Lin, 2003). K(x_i, x_j) is the kernel function selected by the user. For
more information, see the topic “SMO Algorithm” on p. 371.
Spatial Temporal Prediction Algorithms
1. Introduction
Spatio-temporal statistical analysis has many applications: for example, energy management for
buildings or facilities, performance analysis and forecasting for service branches, or public
transport planning. In these applications, measurements such as energy usage are often taken over
space and time. The key questions are what factors will affect future observations, and what we can
do to effect a desired change or to better manage the system. To address these questions, we need
statistical techniques which can forecast future values at different locations, and which can
explicitly model adjustable factors to perform what-if analyses.
However, these analytical needs are not the focus of traditional spatio-temporal statistical research.
In traditional statistical research, spatio-temporal analysis is treated just as an extension of spatial
analysis and focuses more on looking for patterns in past data rather than forecasting future values.
The traditional spatio-temporal research targets different application areas such as environmental
research. There are, however, different types of spatio-temporal problems in which time is the key
component. We therefore need to treat spatio-temporal analysis as a unique type of problem itself,
not an extension to spatial analysis. Moreover, we need to explicitly model these factors to allow
for what-if analysis. Although these kinds of problems could be addressed by traditional methods,
the emphasis is quite different.
This algorithm assumes a fixed set of spatial locations (either point location or center of an area)
and equally spaced time stamps common across locations. It can issue predicted or interpolated
values at locations with no response measurements (but with available covariates). We call our
model spatio-temporal prediction (STP).
The goal of the STP algorithm is to address these spatio-temporal problems. STP can generate
predictions at any location within a 3D space for any future time. It also explicitly models the
external factors, so we can perform what-if analysis.
1.1 Handling of missing data

The algorithm is designed to accommodate missing values in the response variable, as well as in
the predictors. We consider an observation at a given time point and location ‘complete’ if all
predictors and the response are observed at that time and location. To allow for model fitting in
spite of missing data, all of the following conditions must be met:

1. At each location, observations need to be complete for at least one sequence of at least L + 2 consecutive time points.
2. For any pair of locations s_i, s_j, s_j ≠ s_i, observations must be complete at both locations simultaneously for at least two sequences of L + 2 consecutive time points.
3. Overall, at least L sequences of at least L + 2 consecutive time points must be present in the data (to allow for estimation of α).
4. The total number of complete samples must be at least equal to D + L + 2, where D is the number of predictors, including the intercept, and L is the user-specified lag.
5. After removing locations according to the rules above, no more than 5% of the remaining records should be incomplete. As an example, if after removing locations, n locations and m time stamps remain, no more than n × m × 0.05 records should be incomplete.
The above conditions should be verified in the following order:

Step 1. Remove locations that do not meet condition 1.
Step 2. Remove locations that violate condition 2 in the following order:
(a) Let ℐ be the set of locations that violate condition 2.
(b) Eliminate from the data set the location(s) that violate condition 2 for the greatest number of pairs. In case of a tie, remove all locations that are tied.
(c) Update ℐ by removing any locations that now no longer violate condition 2; that is, remove locations that only violated condition 2 in a pair with the locations that were removed in Step 2b.
(d) Iterate Steps 2b and 2c until ℐ is empty.
Step 3. If after Steps 1 and 2, conditions 3–5 are violated, the model cannot be fit.
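Condition 1 reduces to a longest-run check per location. A small sketch, assuming the data are summarized as one Boolean completeness flag per time point and location (the data layout is an assumption for illustration):

```python
def longest_complete_run(flags):
    """Length of the longest run of True values (complete time points)."""
    best = cur = 0
    for ok in flags:
        cur = cur + 1 if ok else 0
        best = max(best, cur)
    return best

def passes_condition_1(complete_by_location, L):
    """Condition 1 (sketch): each location needs at least one run of
    L + 2 consecutive complete time points."""
    return {loc: longest_complete_run(flags) >= L + 2
            for loc, flags in complete_by_location.items()}
```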
2 Model

2.1 Notation

The following notation is used for the model inputs:

Name | Symbol | Type | Dimensions
Number of time stamps | m > L | integer | 1
Number of measurement locations | n ≥ 3 | integer | 1
Number of prediction grid points | N | integer | 1
Number of predictors (including intercept) | D | integer | 1
Index of time stamps | t ∈ {1, …, m} | integer | 1
Spatial coordinates | s ∈ {s₁, …, s_n}; s_j = (u_j, v_j, w_j)′ | vector | 3 × 1
Targets observed at location s and time t | Y_t(s) | scalar | 1
Targets observed at location s | Y(s) | vector | m × 1
Targets observed at time t | Y_t | vector | n × 1
Predictors observed at location s and time t | X_t(s) = (X_{t,1}(s), …, X_{t,D}(s))′ | vector | D × 1
Predictors observed at location s | X(s) = (X₁(s), …, X_m(s))′ | matrix | m × D
Predictors observed at time t | X_t = (X_t(s₁), …, X_t(s_n))′ | matrix | n × D
Maximum autoregressive time lag | L > 0 | integer | 1
Length of prediction steps | H > 0 | integer | 1
Notes
i. For a predictor that does not vary over space, X_{t,d}(s₁) = X_{t,d}(s₂) = ⋯ = X_{t,d}(s_n).
ii. For a predictor that does not evolve over time, X_{1,d}(s) = X_{2,d}(s) = ⋯ = X_{m,d}(s).
The following notation is used for model definition and computation:

Name | Symbol | Type | Dimension
Coefficient vector for linear model | β = (β₁, …, β_D) | vector | D
Coefficient vector for AR model | α = (α₁, …, α_L) | vector | L
Vector of 1's | 1 = (1, …, 1)′ | vector | variable
Kronecker product | ⊗ | operator | NA
2.2 Model structure

Y_t(s) = Σ_{d=1}^D β_d X_{t,d}(s) + Z_t(s)    (1)

where Z_t(s) is a mean-zero space-time correlated random process. Users can specify whether an
“intercept” term needs to be included in the model. The inference algorithm works with general
“continuous” variables, with or without an intercept.
 Autoregressive model, AR(L), for time autocorrelation (Brockwell and Davis, 2002):

Z_t(s) = Σ_{l=1}^L α_l Z_{t−l}(s) + ε_t(s)    (2)

Note that users need to specify the maximum AR lag L.

Let ε_t = (ε_t(s₁), …, ε_t(s_n))′ be the AR residual vector at time t. Since the time autocorrelation
effect has already been removed, ε_{L+1}, …, ε_m are independent.
 Parametric or nonparametric covariance model for spatial dependence:

V(ε_t) = Σ_S,  t = L + 1, …, m    (3)

where Σ_S = {R(s_i, s_j)}_{i,j=1,…,n} is an n × n covariance matrix of spatial covariance functions
R(s, s′) = Cov(Y_t(s), Y_t(s′)) at observed locations. Two alternative ways of modeling the spatial
covariance function R(s_i, s_j) are implemented: a variogram-based parametric model (Cressie,
1993) and an Empirical Orthogonal Functions (EOF)-based nonparametric model (Cohen and
Johnes, 1969; Creutin and Obled, 1982).

Note that users can specify which covariance model is to be used.
 If a “parametric model” is chosen, the algorithm will automatically test for goodness of fit. If the test suggests the parametric model is not adequate, the algorithm switches to EOF model fitting and issues predictions based on the EOF model.
 If an EOF model is chosen, the switching test is skipped, and both model fitting and prediction follow the EOF-based algorithm.

Under this model decomposition, the covariance structure for the spatio-temporal process
Y = (Y′_{L+1}, …, Y′_m)′ is of the separable form

V(Y) = V(Z) = Σ = Σ_T ⊗ Σ_S    (4)

where Σ_T = {γ_T(t − t′)}_{t=L+1,…,m; t′=L+1,…,m} is the (m − L) × (m − L) AR(L) covariance
matrix with autocovariance function γ_T.
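The separable structure in equation (4) is easy to realize numerically with a Kronecker product. Below, the small AR(1)-style temporal covariance and the 2×2 spatial covariance are only illustrative stand-ins for Σ_T and Σ_S:

```python
import numpy as np

# Separable space-time covariance (sketch): with a temporal covariance
# Sigma_T and a spatial covariance Sigma_S, the covariance of the stacked
# process is their Kronecker product, Sigma = Sigma_T kron Sigma_S.
mT, alpha = 3, 0.5
Sigma_T = alpha ** np.abs(np.subtract.outer(np.arange(mT), np.arange(mT)))
Sigma_S = np.array([[1.0, 0.3],
                    [0.3, 1.0]])
Sigma = np.kron(Sigma_T, Sigma_S)  # shape (mT * n, mT * n) with n = 2
```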
3 Estimation algorithm

This section provides details on the multi-step procedure used to fit the STP model (see Figure 1) when
the user specifies a “parametric model”. If an “empirical model” is specified, the switching test
is skipped, and both model fitting and prediction follow the EOF-based algorithm.

Figure 1. Flowchart of algorithm steps for model fitting when a “parametric model” is specified.

Step 1: Fit the regression model by ordinary least squares (OLS) regression using only observations
that have no missing values (see Section 3.1). We first ignore the spatio-temporal dependence in
the data, simply estimate the fixed regression part by OLS, and obtain the regression residuals Z_t(s).

Step 2: Fit the autoregressive model using only data without missing values (see Section 3.2).
Ignoring spatial dependence in the OLS residuals Z_t(s), we estimate the autoregressive
coefficients by fitting the regression model (2) and obtain the AR residuals ε_t(s).

Step 3: Fit the spatial covariance model and test for goodness of fit on data without missing values
(see Section 3.3). We fit a parametric spatial covariance model and perform two goodness of fit
tests to decide whether to continue with the parametric covariance model or the empirical
covariance matrix.

Step 4: Refit the autoregressive model using augmented data (see Section 3.4). We refit the
autoregressive model accounting for spatial dependence by generalized least squares (GLS) and
obtain improved AR coefficients α.

Step 5: Refit the regression model using augmented data (see Section 3.5). We obtain improved
regression coefficients β by GLS to account for spatio-temporal correlation in the data.

Step 6: Save the results for use in output and prediction.
3.1 Fit regression model

We first ignore the spatio-temporal dependence in the data and simply estimate the fixed
regression part by OLS. Assume that out of nm location-time combinations, q samples have
missing values in either X or Y. Let Y = (Y₁′, …, Y_m′)′ be an (nm − q) × 1 vector and
X = (X₁′, …, X_m′)′ an (nm − q) × D matrix, such that X and Y contain only complete observations,
i.e., observations without any missing values. The OLS estimates of the regression coefficients
are:

β̂ = (X′X)⁻¹X′Y    (5)

The residuals are:

Ẑ = Y − Xβ̂    (6)

3.2 Fit autoregressive model

We estimate the autoregressive coefficients by OLS, assuming no spatial correlation and an AR(L)
model for time-series autocorrelation,

Ẑ_t = α₁Ẑ_{t−1} + ⋯ + α_L Ẑ_{t−L} + ε_t,    (7)

where Ẑ_t is an n_t × 1 vector. Note that due to the existence of missing values, the number of
locations n_t varies among different time points. Moreover, for each time point t, only locations
with no missing values at L + 1 consecutive time points, i.e., (t, t − 1, …, t − L), can be used for
model fitting; therefore Σ_{t=L+1}^m n_t ≤ n(m − L) − q.

Step 1: Construct the n_t × L time lag matrix

Ẑ_{t−lag} = (Ẑ_{t−1}, Ẑ_{t−2}, …, Ẑ_{t−L}), t = L + 1, …, m    (8)

Step 2: Let Ẑ_lag = (Ẑ′_{L+1−lag}, …, Ẑ′_{m−lag})′ and Ẑ* = (Ẑ′_{L+1}, …, Ẑ′_m)′. Solve the linear system

(Ẑ′_lag Ẑ_lag)α = Ẑ′_lag Ẑ*    (9)

which is equivalent to solving

(Σ_{t=L+1}^m Ẑ′_{t−lag} Ẑ_{t−lag})α = Σ_{t=L+1}^m Ẑ′_{t−lag} Ẑ_t    (10)

using the sweep operation to find the estimate α̂.

Step 3: Compute the de-autocorrelated AR(L) residuals

ε̂_t = Ẑ_t − α̂₁Ẑ_{t−1} − ⋯ − α̂_L Ẑ_{t−L}, t = L + 1, …, m    (11)
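Ignoring missing values (so that n_t = n for all t), Steps 1 and 2 amount to a pooled least squares fit of the lagged residuals, which can be sketched as follows (a plain least squares solve stands in for the sweep operation):

```python
import numpy as np

def fit_ar(Z, L):
    """Fit AR(L) coefficients by pooled OLS (sketch). Z is an (m, n) array
    of regression residuals, time by location, with no missing values; rows
    t = L..m-1 are regressed on their L lags across all locations."""
    m, _ = Z.shape
    # stack the lag matrices (Z_{t-1}, ..., Z_{t-L}) over t = L, ..., m-1
    lag = np.column_stack([Z[L - l: m - l].ravel() for l in range(1, L + 1)])
    target = Z[L:m].ravel()
    alpha, *_ = np.linalg.lstsq(lag, target, rcond=None)
    eps = target - lag @ alpha  # de-autocorrelated residuals, as in (11)
    return alpha, eps
```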
3.3 Fit model and check goodness of fit for spatial covariance structure

We explicitly model the spatial covariance structure among locations, rather than using variogram
estimation.

Under the assumption of the model (stationarity, AR-relationship removed), the mean of the
residuals is 0 at all locations. We therefore estimate the unadjusted empirical covariances s_ij and
correlations r_ij assuming mean 0, i.e.,

S = [s_ij]_{i,j=1,…,n},  s_ij = (1/t_ij) Σ_t ε̂_t(s_i)ε̂_t(s_j)    (12)

where t_ij is the number of complete residual pairs between locations s_i and s_j, and t indexes
these pairs, i.e., the time points for which both ε̂_t(s_i) and ε̂_t(s_j) are non-missing.

r_ij = s_ij/√(s_ii s_jj)    (13)

To determine whether to model the spatial covariance structure parametrically or to use the
nonparametric EOF model, we perform the following two tests sequentially:

1. Fit a parametric model to the covariances using the parameter vector ψ = (σ², θ, τ²) (Cressie, 1993):

Cov(ε_t(s_i), ε_t(s_j); ψ̂) = σ̂² exp(−(h_ij/θ̂)^p), if h_ij > 0; σ̂² + τ̂², otherwise    (14)

where h_ij = ‖s_i − s_j‖₂ is the Euclidean distance between locations s_i and s_j. Users
need to specify the value of the order parameter p. p ∈ [1, 2] is a user-defined parameter that
determines the class of covariance models to be fit: p = 1 corresponds to an exponential
covariance model, p = 2 results in a Gaussian covariance model, and p ∈ (1, 2) belongs to the
powered exponential family.

Next, determine if there is a significant decay over space by testing H₀: −1/θ^p ≥ 0. If we
fail to reject H₀, we conclude that the decay over space is not significant, and EOF
estimation will be used. If EOF estimation is used, there is no need to calculate θ, σ or τ,
as we have concluded that they are invalid descriptions of the covariance matrix. In fact,
there may not be valid solutions for these parameters, therefore they should not be
estimated.

2. If the previous test rejects H₀, test for homogeneity of variances among locations: if
homogeneity of variances is rejected, EOF estimation will be used. Otherwise, the
parametric covariance model will be used.
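A minimal sketch of the pairwise-complete, mean-zero covariance and correlation estimates (12) and (13), assuming the residuals are arranged time by location with NaN marking missing values:

```python
import numpy as np

def pairwise_cov_corr(E):
    """Mean-zero covariance and correlation estimates (sketch). E is an
    (m, n) array of AR residuals, time by location, with np.nan marking
    missing values; each pair (i, j) uses only its complete time points."""
    _, n = E.shape
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            both = ~np.isnan(E[:, i]) & ~np.isnan(E[:, j])
            S[i, j] = np.mean(E[both, i] * E[both, j])  # mean assumed 0
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)
    return S, R
```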
3.3.1 Fit and test parametric model

a) Enforce a minimum correlation of +.01: if r_ij < .01, set s_ij = .01√(s_ii s_jj) and r_ij = .01.

b) Let s be the vectorized lower triangle of the covariance matrix (excluding the diagonal, i.e.,
excluding variances), r the vectorized lower triangle of the correlation matrix (excluding the
diagonal), and h the corresponding vector of pairwise distances between the n locations. s, r
and h are each vectors of length n(n − 1)/2.

Define φ = −1/θ^p. Fit the linear model ln s = ln σ² + φh^p using a GLS fit:

A = [1, h^p]    (15)

V⁻¹ = (1/2)T(B⁻¹ − cbb′)T    (16)

where b = 2r²/(1 − r²), r² is obtained by squaring each element of the vector r, B⁻¹ = diag(b),
and the scalar c = 1/(1 + 1′B⁻¹1). Also, let T = diag[√t_k], k = 1, …, n(n − 1)/2,
where t_k is the number of pairs of de-autocorrelated residuals in the calculation of the
corresponding element r_k of r, i.e., the number of observation pairs that went into
calculating r_k, which may be different for each entry of the covariance matrix, depending on
missing values. Note that t_k corresponds to the vectorized lower triangle of [t_ij]_{i,j=1,…,n},
where the t_ij are as defined in (12).

Let η = (ln σ², φ)′. The GLS estimator can be calculated as

η̂ = (A′V⁻¹A)⁻¹A′V⁻¹ ln s

The standard error for η̂ will be se(η̂) = √diag[(A′V⁻¹A)⁻¹].

Calculate the test statistic z₁ = φ̂/se(φ̂). If z₁ ≥ z_{.05}, where z_{.05} is the .05 quantile of the standard
normal distribution (or the critical value for the selected level of significance γ₁), then all following
calculations will be performed using the empirical spatial covariance matrix, i.e., Σ_S = S, and
the nonparametric EOF model will be used for prediction. Equivalently, a p-value p₁ can be
calculated by evaluating the standard normal cumulative distribution function (CDF) at z₁
(i.e., p₁ = P(Z < z₁)). If p₁ ≥ the level of significance γ₁, then all following calculations will be
performed using the empirical covariance matrix.
c) If the previous test does reject H₀ (i.e., we have not yet decided to continue with the empirical
covariance matrix), continue with the following test. Let v = (s₁₁, s₂₂, …, s_nn)′ be the
(n × 1) vector of location-specific variances. Calculate the weighted mean variance v̄:

v̄ = 1′W⁻¹v/(1′W⁻¹1) = 1′W⁻¹v/Σ_{i,j} w*_ij    (17)

where W = [w_ij] = [s²_ij/t_ij]_{i,j=1,…,n} is an n × n matrix, with t_ij defined as in (12), and
W⁻¹ = [w*_ij].

Calculate the test statistic z₂ = (v − v̄)′W⁻¹(v − v̄). If z₂ ≥ χ²_{n−1,.95} (or the critical value for
[1 − the selected level of significance γ₂]), all following calculations will be performed using
the empirical spatial covariance matrix, i.e., Σ_S = S, and the nonparametric EOF model will
be used for prediction. Equivalently, one may compute a p-value p₂ by evaluating 1 minus the
χ²_{n−1} CDF: p₂ = P(χ²_{n−1} > z₂). If p₂ < the level of significance γ₂, then all following
calculations will be performed using the empirical spatial covariance matrix.
d) If the two tests in b) and c) do not indicate a switch to the EOF model, all following
calculations will be performed using the parametric covariance model, i.e., the spatial
covariance matrix Σ_S is constructed according to (14). Recall that η = (ln σ², −1/θ^p). The
missing parameter τ² is derived as τ̂² = max{0, (1/n)Σ_{i=1,…,n} s_ii − exp[(ln σ²)̂]}, where
(ln σ²)̂ is the first element of η̂.
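Equation (14) translates directly into code. The function below is an illustrative sketch (parameter names assumed) of the powered exponential covariance with the nugget σ² + τ² on the diagonal:

```python
import numpy as np

def powered_exp_cov(locs, sigma2, theta, tau2, p=1.0):
    """Powered exponential spatial covariance (sketch of equation (14)):
    sigma2 * exp(-(h/theta)^p) for h > 0 and sigma2 + tau2 (nugget) at
    h = 0, with h the pairwise Euclidean distance between locations."""
    h = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    C = sigma2 * np.exp(-(h / theta) ** p)
    C[h == 0] = sigma2 + tau2  # nugget effect on the diagonal
    return C
```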
3.4 Re-fit autoregressive model

We refit the autoregressive model accounting for spatial dependence using GLS with augmented
data:

Step 1: Compute the Cholesky factorization Σ_S = H_S H′_S and the inverse matrix H_S⁻¹.

Step 2: Substitute 0 for missing values such that Ẑ_{t−lag,impute} is an n × L matrix and Ẑ_{t,impute} is
a vector of length n.

Step 3: Augment the predictor matrix as follows. Let
Ẑ_{lag,impute} = (Ẑ′_{L+1−lag,impute}, …, Ẑ′_{m−lag,impute})′ be an n(m − L) × L matrix and
Ẑ_impute = (Ẑ′_{L+1,impute}, …, Ẑ′_{m,impute})′ a vector of length n(m − L); then

Ẑ_{lag,aug} = (Ẑ_{lag,impute}, I_Zmiss)

where I_Zmiss is an n(m − L) × q_Z indicator matrix, q_Z being the total number of rows
with missing values in either Ẑ* or Ẑ_lag. If there is a missing value in the ith row of either
Ẑ* or Ẑ_lag, and if this is the jth of all q_Z rows that have missing values, then the jth
column of I_Zmiss is all 0 except for the ith element, which is set to 1.

Step 4: Remove the spatial correlation: Z̃_{t−lag,aug} = H_S⁻¹Ẑ_{t−lag,aug} and Z̃_{t,impute} =
H_S⁻¹Ẑ_{t,impute}, where the Ẑ_{t−lag,aug} are the submatrices of Ẑ_{lag,aug} that correspond to the
rows of the matrices Ẑ_{t−lag,impute}.

Step 5: Use the same computational steps as for the linear system in equation (10) to solve the
linear system

(Σ_{t=L+1}^m Z̃′_{t−lag,aug} Z̃_{t−lag,aug})α_aug = Σ_{t=L+1}^m Z̃′_{t−lag,aug} Z̃_{t,impute}    (18)

where α_aug is a vector of length L + q_Z, and there are L* + q*_Z non-redundant
parameters in the above linear system. The AR coefficient estimate α̂ is the subvector
consisting of the first L elements of α̂_aug; there are L* non-redundant parameters in the first
L elements of α̂_aug, and q*_Z non-redundant parameters in the last q_Z elements of α̂_aug.
3.5 Re-fit regression model

Refit the regression model by GLS using augmented data to account for spatio-temporal correlation in
the data.

Step 1: Substitute the following for missing values such that X_impute is an nm × D matrix and
Y_impute is a vector of length nm: at location s_i, use the mean of Y(s_i) and the mean of
each predictor in X(s_i).

Step 2: Augment the predictor matrix as follows:

X_aug = (X_impute, I_Xmiss)

where I_Xmiss is an nm × q indicator matrix, q being the total number of rows with
missing values in either X or Y. If there is a missing value in the ith row of either X or Y, and
if this is the jth of all q rows that have missing values, then the jth column of I_Xmiss is
all 0 except for the ith element, which is 1.

Step 3: Remove the spatial correlation: X̃_{t,aug} = H_S⁻¹X_{t,aug} and Ỹ_{t,impute} = H_S⁻¹Y_{t,impute}.

Step 4: Remove the autocorrelation:

X̆_{t,aug} = X̃_{t,aug} − α̂₁X̃_{t−1,aug} − ⋯ − α̂_L X̃_{t−L,aug}, t = L + 1, …, m    (19)

Y̆_{t,impute} = Ỹ_{t,impute} − α̂₁Ỹ_{t−1,impute} − ⋯ − α̂_L Ỹ_{t−L,impute}, t = L + 1, …, m    (20)

Step 5: Solve the linear system

(X̆′_aug X̆_aug)β_aug = X̆′_aug Y̆_impute    (21)

where Y̆_impute = (Y̆′_{L+1,impute}, …, Y̆′_{m,impute})′, an n(m − L) × 1 vector, and X̆_aug =
(X̆′_{L+1,aug}, …, X̆′_{m,aug})′, an n(m − L) × (D + q) matrix. β_aug is a vector of length D + q,
and there are D* + q* non-redundant parameters in the above linear system. The regression
coefficient estimate β̂ is the subvector consisting of the first D elements of β̂_aug; there are
D* non-redundant parameters in the first D elements of β̂_aug, and q* non-redundant
parameters in the last q elements of β̂_aug.
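The two GLS transforms used in Sections 3.4 and 3.5 (spatial whitening by H_S⁻¹ followed by AR(L) filtering across time) can be sketched as follows; the missing-value augmentation is omitted for brevity, so this is only an illustration of the transform steps:

```python
import numpy as np

def whiten_and_filter(X_t, Sigma_S, alpha):
    """Sketch of the two GLS transforms: left-multiply each time slice by
    H_S^-1 (Cholesky factor Sigma_S = H_S H_S') to remove spatial
    correlation, then apply the AR(L) filter across time. X_t is a list of
    (n, D) arrays indexed by time; missing-value augmentation is omitted."""
    H = np.linalg.cholesky(Sigma_S)
    tilde = [np.linalg.solve(H, Xt) for Xt in X_t]  # spatial whitening
    L = len(alpha)
    breve = []
    for t in range(L, len(tilde)):  # t = L, ..., m-1
        acc = tilde[t].copy()
        for l, a in enumerate(alpha, start=1):
            acc -= a * tilde[t - l]  # subtract alpha_l times the lag-l slice
        breve.append(acc)
    return breve
```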
3.6 Statistics to display

3.6.1 Goodness of fit statistics

We present statistics referring to the three main elements of the model: the mean structure, the
spatial covariance structure, and the temporal structure.

1. Goodness of fit of the mean structure model Xβ:

Let 𝒬 be the set of observations (Y_t(s), X_t(s)) that have missing values in either Y_t(s) or
X_t(s). Note that q has been defined as the number of observations in 𝒬.

Calculate the mean squared error (MSE) and an R² statistic based only on complete
observations; all sums below run over s ∈ {s₁, …, s_n} and t = 1, …, m with Y_t(s) ∉ 𝒬:

MSE = Σ (Y_t(s) − Ŷ_t(s))²/(nm − q − D*)    (22)

R² = 1 − Σ (Y_t(s) − Ŷ_t(s))²/Σ Y_t(s)², if there is no intercept;
R² = 1 − Σ (Y_t(s) − Ŷ_t(s))²/Σ (Y_t(s) − Ȳ)², if there is an intercept    (23)

where Ŷ_t(s) = X′_t(s)β̂, D* is the number of non-redundant parameters of the re-fitted
regression in the first D elements of β̂_aug, and Ȳ is the mean of Y over complete
observations only. Note that for this calculation the original (untransformed) observations Y and
covariates X are used. Alternatively, we can calculate the adjusted R²:

R²_adj = 1 − (nm − q)/(nm − q − D*) × (1 − R²)    (24)

2. Goodness of fit for the AR model:
Present t-tests for the AR parameters based on the variance estimates in item 3 of Section 3.6.2.

3. Goodness of fit of the spatial covariance model:
Present the test statistics listed in item 5 of Section 3.6.2.
3.6.2 Model and parameter estimates
The following information should be displayed as a summary of the model:
෡,ࢻ
ෝ obtained in Sections 3.4 and 3.5
1. Model coefficients ࢼ
෡൯, the covariance matrix of ࢼ
෡, which is the
2. Standard errors of elements of ࢼ based on ܸ൫ࢼ
෡௔௨௚ ൯:
upper ‫ ܦ × ܦ‬submatrix ofܸ൫ࢼ
Spatial Temporal Prediction Algorithms
௠
ܵܵ
ܵܵ
ିଵ
෡௔௨௚ ൯= ௘ × ൫ࢄ
ෙᇱ௔௨௚ ࢄ
ෙ௔௨௚ ൯ = ௘ × ൭ ෍
ܸ൫ࢼ
݂݀௘
݂݀௘
௧ୀ௅ାଵ
where


ିଵ
ᇱ
ෙ௧,௔௨௚ ࢄ
ෙ௧,௔௨௚ ൱
ࢄ
(25)
ଶ
ෙ௜௠ ௣௨௧௘ − ൫ࢅ
ෙ௜௠ ௣௨௧௘൯∗ ൯ = ‫ݎ‬ǁ௒ෘ௒ෘ(݊(݉ − ‫ )ܮ‬− 1)ܸ൫ࢅ
ෙ௜௠ ௣௨௧௘൯,
ܵܵ௘ = ∑ே௜ୀଵ൫ࢅ
∗
෡,
ෙ௜௠ ௣௨௧௘൯ is the predicted value based on estimated ࢼ
- ൫ࢅ
ෙ௜௠ ௣௨௧௘ in the correlation matrix of re-fitted
- ‫ݎ‬ǁ௒ෘ௒ෘ is corresponding element of ࢅ
regression after sweep operation,
- ݊(݉ − ‫ )ܮ‬is number of transformed records used in equation (21) for re-fit
regression ,
ෙ௜௠ ௣௨௧௘൯ is variance of ࢅ
ෙ௜௠ ௣௨௧௘.
- and ܸ൫ࢅ
∗
݂݀௘ = ݊(݉ − ‫ )ܮ‬− ‫݌‬, and ‫ ܦ = ݌‬+ ‫ ∗ݍ‬is the number of non-redundant parameters
in re-fitted regression.
Based on these standard errors, t-test statistics and/or p-values may be computed and
displayed according to standard definitions and output scheme of linear models (please refer
to linear model documentation):
෡ and the corresponding j-th diagonal element of ܸ൫ࢼ
෡൯,
(a) For each element ߚ௝ of ࢼ
෡൯
݆= 1, … , ‫ܦ‬, compute the t-statistic ‫ݐ‬௝ = ߚ௝ൗට ܸ൫ࢼ
௝௝
(b) The p-value corresponding to ‫ݐ‬௝ is 2 × the value of the cumulative distribution function
of a t-distribution with ݊݉ − ‫ ݍ‬− ‫ ∗ ܦ‬degrees of freedom, i.e., ‫݌‬௝ = 2 ∙ ቀ1 −
ܲ൫‫ݐ‬௡௠ ି௤ି஽ ∗ ≤ ห‫ݐ‬௝ห൯ቁ.
Note that depending on the implementation of the GLS estimation in Section 3.5,
ିଵ
ෙᇱ௔௨௚ ࢄ
ෙ௔௨௚ ൯ may have already been computed, in which case this expression does not
൫ࢄ
need to be recalculated.
ෝ), the covariance matrix of ࢻ
ෝ, which is the upper ‫ܮ × ܮ‬
3. Standard errors of ࢻ based on ܸ(ࢻ
ෝ௔௨௚ ൯:
submatrix of ܸ൫ࢻ
௠
where

ܵܵ௘∗
ෝ௔௨௚ ൯= ∗ × ൭ ෍
ܸ൫ࢻ
݂݀௘
௧ୀ௅ାଵ
ᇱ
ିଵ
෩௧ି௟௔௚,௔௨௚ ൱
෩௧ି௟௔௚,௔௨௚ ࢆ
ࢆ
ଶ
(26)
∗
ܵܵ௘∗ = ∑ே௜ୀଵ൫ܼ෨௧,௜௠ ௣௨௧௘ − ൫ܼ෨௧,௜௠ ௣௨௧௘൯ ൯ = ‫ݎ‬ǁ௓෨௓෨(݊(݉ − ‫ )ܮ‬− 1)ܸ൫ܼ෨௧,௜௠ ௣௨௧௘൯,
∗
- ൫ܼ෨௧,௜௠ ௣௨௧௘൯ is the predicted value based on estimated ߙො and ܼ෨௧ି௟௔௚,௔௨௚
- ‫ݎ‬ǁ୞෩୞෩is corresponding element of ܼ෨௧,௜௠ ௣௨௧௘ in the correlation matrix of re-fitted
autoregressive model after sweep operation,
- ݊(݉ − ‫ )ܮ‬is number of transformed records used in equation (18) for re-fit
autoregressive,
Spatial Temporal Prediction Algorithms

- and ܸ൫Z෨୲,୧୫ ୮୳୲ୣ൯ is variance of ܼ෨௧,௜௠ ௣௨௧௘.
݂݀௘∗ = ݊(݉ − ‫ )ܮ‬− ‫݌‬஺ோ , and ‫݌‬஺ோ = ‫ ∗ܮ‬+ ‫ݍ‬௓∗ is the number of non-redundant
parameters in re-fitted autoregressive model.
Based on these standard errors, t statistics and/or p-values may be computed and displayed according to the standard definitions and output conventions of linear models.

(a) For each element $\hat{\alpha}_j$ of $\hat{\boldsymbol{\alpha}}$ and the corresponding $j$-th diagonal element of $V(\hat{\boldsymbol{\alpha}})$, $j = 1, \ldots, L$, compute the t statistic $t_j = \hat{\alpha}_j \big/ \sqrt{V(\hat{\boldsymbol{\alpha}})_{jj}}$.

(b) The p-value corresponding to $t_j$ is $2\times$ the upper tail of the cumulative distribution function of a t-distribution with $\sum_{t=1}^{m} n_t - L^*$ degrees of freedom, i.e., $p_j = 2\Big(1 - P\big(t_{\sum_{t=1}^{m} n_t - L^*} \le |t_j|\big)\Big)$.
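The t statistics and two-sided p-values in (a) and (b) can be sketched as follows; the coefficient vector, covariance matrix, and degrees of freedom below are hypothetical inputs, not values from the guide.

```python
import numpy as np
from scipy import stats

def coefficient_tests(coef, cov, df):
    """Two-sided t tests for each coefficient.

    coef : 1-D array of estimated coefficients
    cov  : covariance matrix of the estimates (diagonal supplies variances)
    df   : residual degrees of freedom
    """
    se = np.sqrt(np.diag(cov))
    t = coef / se
    # p_j = 2 * (1 - P(T <= |t_j|)) for a t distribution with df degrees of freedom
    p = 2.0 * stats.t.sf(np.abs(t), df)
    return t, p

# Hypothetical example: two coefficients, diagonal covariance, 100 df
t, p = coefficient_tests(np.array([2.0, 0.1]), np.diag([0.25, 0.25]), 100)
```

The same helper applies to both the regression coefficients $\hat{\beta}$ and the autoregressive coefficients $\hat{\alpha}$; only the covariance matrix and degrees of freedom differ.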
4. Indicator of which method has been automatically chosen to model spatial covariances,
either empirical covariance (EOF) or parametric variogram model.
5. Test statistics from goodness-of-fit tests for the parametric model:
- Test statistic $z_1$, p-value $p_1$, and level of significance $\gamma_1$ used for the automated test of fit of the slope parameter
- Test statistic $z_2$, p-value $p_2$, and level of significance $\gamma_2$ used for testing homogeneity of variances
6. Parametric covariance parameters $\hat{\boldsymbol{\psi}}$ if the parametric model has been chosen
3.6.3 Tests of effects in Mean Structure Model (Type III)
For each effect specified in the model, a type III test matrix $L$ is constructed and $H_0\colon L_i\beta = 0$ is tested. Construction of the type III matrix $L$, as well as of the generating estimable function (GEF), is based on the generating matrix $H$, which is the upper $D \times D$ submatrix of $\big(\check{X}'_{aug}\check{X}_{aug}\big)^{-1}\check{X}'_{aug}\check{X}_{aug}$, such that $L_i\beta$ is estimable. It involves parameters only for the given effect. For type III analysis, $L$ does not depend on the order of effects specified in the model. If such a matrix cannot be constructed, the effect is not testable.
The $L$ matrix is then used to construct the test statistic

$$F = \frac{\hat{\beta}' L' (L \Sigma L')^{-1} L \hat{\beta}}{r_c}$$

where

- $\hat{\beta}$ is the subvector of the first $D$ elements of $\hat{\beta}_{aug}$ obtained in Step 5 of Section 3.5,
- $r_c = \mathrm{rank}(L \Sigma L')$,
- $\Sigma$ is the covariance matrix of $\hat{\beta}$, which is the upper $D \times D$ submatrix of $V\big(\hat{\beta}_{aug}\big)$ defined in equation (25).
The statistic has an approximate F distribution. The numerator degrees of freedom $df_1$ is $r_c$ and the denominator degrees of freedom $df_2$ is $nm - q - D^*$, where $D^*$ is the number of non-redundant parameters in the first $D$ parameters of the re-fitted regression model obtained in Section 3.5. The p-values can then be calculated accordingly.
An additional test should also be computed, similar to the "corrected model" test (if there is an intercept) or the "model" test (if there is no intercept) in the ANOVA table of linear regression. Essentially, the null hypothesis is that the regression parameters (excluding the intercept, if there is one) are all zero. The test statistic is the same as the above F statistic except that the $L$ matrix comes from the GEF. If there is no intercept, the $L$ matrix is the whole GEF. If there is an intercept, the $L$ matrix is the GEF without its first row, which corresponds to the intercept.
Statistics saved for tests of effects in the Mean Structure Model (including the corrected model or model test):
- F statistic
- $df_1$
- $df_2$
- p-value
3.6.4 Location clustering for spatial structure visualization
A large spatial covariance or correlation matrix is not suitable for demonstrating the relations among locations. A grouping method, also called community detection or positional analysis (Wasserman, 1994), can be used to identify representative location clusters. To simplify the implementation, hierarchical clustering (Johnson, 1967) is used to detect clusters among locations based on the STP model's spatial statistics.

Note that location clustering is only supported when the empirical nonparametric covariance model is used.
Given a set of $n$ locations $\{s_1, \ldots, s_n\}$ in the STP model to be clustered, their corresponding $n \times n$ spatial correlation matrix $R$ serves as the similarity matrix:

$$R = [r_{ij}]_{i,j=1,\ldots,n}$$

Given a similarity threshold $\alpha$ (default value 0.2) and $N_C$ (default value 10), the process of location clustering is described in the following steps, which follow the basic process of hierarchical clustering.

Step 1. Initialize the clusters and similarities:
- Assign each location $s_i$ to its own cluster $C_i$ ($i = 1, \ldots, n$), so that for $n$ locations the total number of clusters is $n_C = n$ at the beginning, and each cluster contains exactly one location.
- Define the set of clusters $C$.
- Define the similarity matrix $R^C = [r^C_{ij}]_{i,j=1,\ldots,n}$, where the similarity $r^C_{ij}$ between clusters $C_i$ and $C_j$ is the similarity $r_{ij}$ between locations $s_i$ and $s_j$.
Step 2. Find the two clusters $C_i$ and $C_j$ in $C$ with the largest similarity $\max\big(r^C_{ij}\big)$.

If $\max\big(r^C_{ij}\big) > \alpha$:
- Merge $C_i$ and $C_j$ into a new cluster $C_{\langle i,j \rangle}$ containing all locations in $C_i$ and $C_j$.
- Compute the similarities between the new cluster $C_{\langle i,j \rangle}$ and every other cluster $C_k$, $k \ne i, j$: $r^C_{\langle i,j \rangle,k} = \min\big(r^C_{ik}, r^C_{jk}\big)$.
- Update $C$ by adding $C_{\langle i,j \rangle}$ and discarding $C_i$ and $C_j$, so $n_C = n_C - 1$.
- Update the similarity matrix $R^C$ by adding $r^C_{\langle i,j \rangle,k}$ and discarding $r^C_{ik}$ and $r^C_{jk}$; go to step 3.

If $\max\big(r^C_{ij}\big) \le \alpha$, go to step 4.

Step 3. Repeat step 2.
Step 4. For every detected cluster with more than one location, compute the following statistics:
- Cluster size: $n_{C_i}$ is the number of locations in $C_i$.
- Closeness:

$$d_i = \frac{1}{n_{C_i}\big(n_{C_i}-1\big)/2} \sum r_{kl}, \quad \forall s_k, s_l \in C_i, \; k \ne l.$$

Step 5. Define clusters for interactive visualization:
- $C_{closeness}$: the first $N_C$ clusters sorted by descending closeness $d_i$,
- $C_{size}$: the first $N_C$ clusters sorted by descending cluster size $n_{C_i}$.

Step 6. Output the union for location cluster visualization:

$$C^* = C_{closeness} \cup C_{size}$$
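Steps 1-3 amount to agglomeration on similarities with a minimum-similarity (complete-linkage-style) merge rule. A minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def cluster_locations(R, alpha=0.2):
    """Agglomerate locations using the similarity matrix R.

    Clusters are merged while the largest between-cluster similarity
    exceeds alpha; the merged cluster's similarity to any other cluster
    is the minimum of the two parents' similarities (Step 2)."""
    n = R.shape[0]
    clusters = [[i] for i in range(n)]
    S = R.astype(float).copy()
    np.fill_diagonal(S, -np.inf)          # never merge a cluster with itself
    while len(clusters) > 1:
        i, j = np.unravel_index(np.argmax(S), S.shape)
        if S[i, j] <= alpha:
            break
        clusters[i] = clusters[i] + clusters[j]
        S[i, :] = np.minimum(S[i, :], S[j, :])   # min rule for merged cluster
        S[:, i] = S[i, :]
        S[i, i] = -np.inf
        del clusters[j]                   # drop the absorbed cluster
        S = np.delete(np.delete(S, j, axis=0), j, axis=1)
    return clusters

# Two tight pairs (similarity 0.9) that are mutually dissimilar (0.1)
R = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.9],
              [0.1, 0.1, 0.9, 1.0]])
print(sorted(sorted(c) for c in cluster_locations(R)))  # → [[0, 1], [2, 3]]
```

With the default threshold $\alpha = 0.2$, merging stops as soon as the best remaining similarity drops to 0.1, leaving the two pairs as separate clusters.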
Statistics saved for spatial structure visualization include:
1. Number of excluded locations during handling of missing data
2. Spatial correlation matrix $R = [r_{ij}]_{i,j=1,\ldots,n}$
3. Statistics of each output location cluster in $C^*$:
   - Closeness $d_i$
   - Cluster size $n_{C_i}$
   - Coordinates of locations in the cluster
3.7 Results saved for prediction
1. Model coefficients $\hat{\boldsymbol{\beta}}$, $\hat{\boldsymbol{\alpha}}$ and the covariance estimate $V\big(\hat{\boldsymbol{\beta}}\big)$ as defined in (25).
2. Transformed regression residuals and predictors of the $L$ most recent observations for prediction:

$$\check{\boldsymbol{Z}}_{m-l+1} = \boldsymbol{H}_S^{-1\prime}\boldsymbol{H}_S^{-1}\big(\boldsymbol{Y}_{m-l+1,impute} - \boldsymbol{X}_{m-l+1,aug}\hat{\boldsymbol{\beta}}_{aug}\big), \quad l = 1, \ldots, L \qquad (27)$$

$$\check{\boldsymbol{X}}_{m-l+1,impute} = \boldsymbol{H}_S^{-1\prime}\boldsymbol{H}_S^{-1}\boldsymbol{X}_{m-l+1,impute}, \quad l = 1, \ldots, L \qquad (28)$$
3. Indicator of which method has been chosen to model spatial covariances, either empirical covariance (EOF) or parametric variogram model.
4. Parametric covariance parameters $\hat{\psi}$ if the parametric model has been chosen.
5. Coordinates of locations ‫ݏ‬.
6. Number of unique time points used for model build, ݉ .
7. Number of records with missing values in the data set used in model building, ‫ݍ‬.
8. Spatial covariance matrix ߑௌ.
9. ‫ܪ‬ௌିଵ, inverse of Cholesky factor of spatial covariance matrix.
4 Prediction
We perform the following procedure to issue predictions for future times $m+1, \ldots, m+H$ at prediction locations $\boldsymbol{G} = (g_1, \ldots, g_N)$ using the results saved in the output file (see Figure 2). The input data set should include the locations $\boldsymbol{G}$ and the predictors $\boldsymbol{X}$ for $t = m+1, \ldots, m+H$.
Figure 2. Flowchart of algorithm steps for model prediction
4.1 Point prediction
Step 1: Construct the $N \times n$ spatial covariance matrix to capture the spatial dependence between prediction grids $g \in \boldsymbol{G}$ and original sample locations $s$.

- If the variogram-based spatial covariance matrix is used:

$$V_S(g) = V\big(\epsilon_t(g)\big) = \sigma^2 + \tau^2 \qquad (29)$$

and

$$\boldsymbol{C}_S(\boldsymbol{G}) = \Big\{ Cov\big(\epsilon_t(g_i), \epsilon_t(s_j); \hat{\psi}\big) \Big\}_{i=1,\ldots,N;\, j=1,\ldots,n} \qquad (30)$$

according to (14) for all locations $g$ (whether or not the locations were included in the model build).
- If the EOF-based spatial covariance function is used:

For locations $g_i$ that are included in the original sample locations $s$, $Cov_{EOF}\big(\epsilon_t(g_i), \epsilon_t(s)\big)$ is equal to the row corresponding to location $g_i$ in the empirical covariance matrix $\Sigma_S$, and $V_S(g_i)$ is equal to the empirical variance at that location, i.e., the diagonal element of $\Sigma_S$ corresponding to that location.

For locations $g_i$ that were not included in the model build, calculate the spatial covariance in the following way:

(a) Perform an eigendecomposition of the empirical covariance matrix

$$\Sigma_S = \Phi \Lambda \Phi'$$

where $\Phi = (\phi_1, \ldots, \phi_n)$ with $\phi_k = \big(\phi_k(s_1), \ldots, \phi_k(s_n)\big)'$ is the $n \times n$ matrix of eigenvectors and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is the $n \times n$ matrix of eigenvalues.

(b) Apply inverse distance weighting (IDW) (Shepard 1968) to interpolate the eigenvectors to locations with no observations:

$$\phi_k(g) = \sum_{i=1}^{n} \frac{w_i(g)}{\sum_{j=1}^{n} w_j(g)}\, \phi_k(s_i), \quad k = 1, \ldots, n$$

where

$$w_i(g) = \frac{1}{\mathrm{dist}(g, s_i)^{\rho}}$$

is an inverse distance weighting function with $\rho \le d$ for $d$-dimensional space, and $\mathrm{dist}(g, s_i)$ may be any distance function. As a default, use Euclidean distance with $\rho = 2$ and $\mathrm{dist}(g, s_i)^2 = (g - s_i)'(g - s_i)$.

(c) The EOF-based spatial variance and covariance functions are

$$V_S(g) = V\big(\epsilon_t(g)\big) = \sum_{k=1}^{n} \lambda_k \phi_k^2(g) \qquad (31)$$

and

$$Cov\big(\epsilon_t(g_i), \epsilon_t(s_j)\big) = \sum_{k=1}^{n} \lambda_k \phi_k(g_i)\phi_k(s_j) \qquad (32)$$

and the corresponding $N \times n$ spatial covariance matrix is

$$\boldsymbol{C}_S(\boldsymbol{G}) = \Big\{ Cov_{EOF}\big(\epsilon_t(g_i), \epsilon_t(s_j)\big) \Big\}_{i=1,\ldots,N;\, j=1,\ldots,n} \qquad (33)$$

Note that under the EOF model, we allow for space-varying variances.
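The IDW interpolation in step (b) can be sketched as follows; the power $\rho = 2$ and Euclidean distance match the defaults above, and the names are illustrative:

```python
import numpy as np

def idw_interpolate(g, s, values, rho=2.0):
    """Interpolate column vectors `values` (one row per sample location)
    from sample locations `s` (n x d) to a new location `g` (length d)
    using inverse distance weighting with power rho."""
    d2 = np.sum((s - g) ** 2, axis=1)
    if np.any(d2 == 0):                  # g coincides with a sample point
        return values[np.argmin(d2)]
    w = 1.0 / d2 ** (rho / 2.0)          # w_i(g) = 1 / dist(g, s_i)^rho
    return (w / w.sum()) @ values

# Two sample locations holding eigenvector values 1 and 3;
# the midpoint gets equal weights, so the interpolated value is the average.
s = np.array([[0.0, 0.0], [2.0, 0.0]])
phi = np.array([[1.0], [3.0]])
print(idw_interpolate(np.array([1.0, 0.0]), s, phi))  # → [2.]
```

In the EOF case the same routine would be applied to each eigenvector column $\phi_k$ before evaluating (31) and (32).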
Step 2: Spatially interpolate to the prediction locations $g$ for the most recent $L$ time units, $Z_{m-L+1}, \ldots, Z_m$:

$$\hat{\boldsymbol{Z}}_{m-l+1}(\boldsymbol{G}) = \boldsymbol{C}_S(\boldsymbol{G})\,\Sigma_S^{-1}\boldsymbol{Z}_{m-l+1} = \boldsymbol{C}_S(\boldsymbol{G})\,\check{\boldsymbol{Z}}_{m-l+1}, \quad l = 1, \ldots, L \qquad (34)$$

where $\hat{\boldsymbol{Z}}_{m-l+1}(\boldsymbol{G})$ is a vector of length $N$.
Step 3: Iteratively forecast for future times $m+1, \ldots, m+H$ at the prediction locations $\boldsymbol{G}$:

$$\hat{\boldsymbol{Z}}_{m+1}(\boldsymbol{G}) = \hat{\alpha}_1 \hat{\boldsymbol{Z}}_m(\boldsymbol{G}) + \cdots + \hat{\alpha}_L \hat{\boldsymbol{Z}}_{m-L+1}(\boldsymbol{G}) \qquad (35)$$

$$\hat{\boldsymbol{Z}}_{m+2}(\boldsymbol{G}) = \hat{\alpha}_1 \hat{\boldsymbol{Z}}_{m+1}(\boldsymbol{G}) + \cdots + \hat{\alpha}_L \hat{\boldsymbol{Z}}_{m-L+2}(\boldsymbol{G}) \qquad (36)$$

$$\vdots$$

$$\hat{\boldsymbol{Z}}_{m+H}(\boldsymbol{G}) = \hat{\alpha}_1 \hat{\boldsymbol{Z}}_{m+H-1}(\boldsymbol{G}) + \cdots + \hat{\alpha}_L \hat{\boldsymbol{Z}}_{m+H-L}(\boldsymbol{G}) \qquad (37)$$

where $\hat{\boldsymbol{Z}}_{m+h}(\boldsymbol{G})$, $h = 1, \ldots, H$, are vectors of length $N$.
Step 4: Incorporate the predicted systematic effect:

$$\hat{\boldsymbol{Y}}_{m+h}(\boldsymbol{G}) = \hat{\boldsymbol{Z}}_{m+h}(\boldsymbol{G}) + \boldsymbol{X}_{m+h}(\boldsymbol{G})\hat{\boldsymbol{\beta}}, \quad h = 1, \ldots, H \qquad (38)$$

where $\hat{\boldsymbol{Y}}_{m+h}(\boldsymbol{G})$, $h = 1, \ldots, H$, are vectors of length $N$.
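Steps 3-4 are a plain AR(L) recursion applied location-wise, followed by adding the regression part. A minimal sketch with illustrative array names:

```python
import numpy as np

def forecast(Z_recent, alpha, X_future, beta, H):
    """Iterative AR(L) forecast at N locations (Steps 3-4 above).

    Z_recent : L x N array, rows ordered oldest..newest (Z_{m-L+1}..Z_m)
    alpha    : length-L AR coefficients (alpha[0] multiplies the newest lag)
    X_future : H x N x D predictor array for times m+1..m+H
    beta     : length-D regression coefficients
    """
    hist = list(Z_recent)                    # growing history of Z-hat
    Y = []
    for h in range(H):
        z = sum(a * hist[-1 - l] for l, a in enumerate(alpha))
        hist.append(z)                       # feed the forecast back in
        Y.append(z + X_future[h] @ beta)     # add the systematic effect (38)
    return np.array(Y)

# AR(1) with alpha = 0.5, one location, zero regression part:
# each step halves the previous value.
Y = forecast(np.array([[4.0]]), [0.5], np.zeros((3, 1, 1)), np.zeros(1), 3)
```

Each forecast is appended to the history so later horizons use earlier forecasts, exactly as in the recursion (35)-(37).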
4.2 Prediction intervals

Under the assumption of a Gaussian process and known variance components, the prediction error $\hat{Y}_{m+h}(g_i) - Y_{m+h}(g_i)$ comes from two sources:
- The prediction error that would be incurred even if the regression coefficients $\boldsymbol{\beta}$ were known.
- The error in estimating the regression coefficients $\boldsymbol{\beta}$.

The variance of the prediction error is thus

$$V\big[\hat{Y}_{m+h}(g_i) - Y_{m+h}(g_i)\big] = \big(\boldsymbol{X}'_{m+h}(g_i) - \boldsymbol{C}'_{m+h}(g_i)\Sigma^{-1}\boldsymbol{X}_{impute}\big)\, V\big(\hat{\boldsymbol{\beta}}\big)\, \big(\boldsymbol{X}'_{m+h}(g_i) - \boldsymbol{C}'_{m+h}(g_i)\Sigma^{-1}\boldsymbol{X}_{impute}\big)' \qquad (39)$$

$$\;+\; V_{m+h}(g_i) - \boldsymbol{C}'_{m+h}(g_i)\Sigma^{-1}\boldsymbol{C}_{m+h}(g_i) \qquad (40)$$

Expression (39) arises from the variance expression for universal kriging, while (40) is the variance of a predicted random effect with known variance of the random effects (McCulloch et al. 2008, p. 171).
- $\boldsymbol{C}_{m+h}(g_i) = \boldsymbol{C}_T(m+h) \otimes \boldsymbol{C}_S(g_i)$ is the covariance vector of length $nm$ between the prediction $Y_{m+h}(g_i)$ and the measurements $Y_1(s), \ldots, Y_m(s)$. Note that $\boldsymbol{C}_T(m+h) = \{\gamma_T(m+h-t)\}_{t=1,\ldots,m}$ is the AR(L) covariance vector of length $m$, and $\boldsymbol{C}_S(g_i) = \big\{Cov\big(Y_t(g_i), Y_t(s_j)\big)\big\}_{j=1,\ldots,n}$ is the spatial covariance vector of length $n$.
- The $nm \times nm$ covariance matrix $\Sigma$ is defined as $\Sigma = \Sigma_T \otimes \Sigma_S$, with $\Sigma_T = \{\gamma_T(|t-t'|)\}_{t,t'=1,\ldots,m}$. Note that $\Sigma_S$ is a quantity stored after the model build step.
- $V_{m+h}(g_i) = V\big(Y_{m+h}(g_i)\big) = \gamma_T(0)V_S(g_i)$ is the variance of $Y_{m+h}(g_i)$.
- Note that expressions (39) and (40) are not computed explicitly; instead they are implemented as described in the following.
Computational process:

Step 1: Compute the error in estimating the regression coefficients $\beta$ in (39).

For $l = 1, \ldots, L$, interpolate $\boldsymbol{X}$ to the prediction locations $g$ for the most recent $L$ time units:

$$\boldsymbol{P}_{m+1-l}(g_i) = \boldsymbol{X}'_{m+1-l,impute}\,\Sigma_S^{-1}\boldsymbol{C}_S(g_i) = \check{\boldsymbol{X}}'_{m+1-l,impute}\,\boldsymbol{C}_S(g_i) \qquad (41)$$

where $\boldsymbol{P}_{m+1-l}(g_i)$ is a vector of dimension $D \times 1$. Define

$$\hat{\boldsymbol{X}}_{m+h-l}(g_i) = \begin{cases} \boldsymbol{P}_{m+h-l}(g_i), & \text{if } h - l \le 0; \\ \boldsymbol{X}_{m+h-l}(g_i), & \text{otherwise.} \end{cases} \qquad (42)$$

For $t = m-L+1, \ldots, m$ ($h \le l$), we only have $X$ at the sample locations $s$, so $\hat{X}_t(g_i) = P_t(g_i)$, the values interpolated from $X_t(s)$; for $t > m$ (or $h > l$), $X$ is already input at the prediction locations $g$, so there is no need to interpolate and $\hat{X}_t(g_i) = X_t(g_i)$.

Then, for $h = 1, \ldots, H$, recursively compute the $D \times 1$ vectors $W_{m+h}(g_i)$:

$$W_{m+h}(g_i) = X_{m+h}(g_i) + \sum_{l=1}^{L} \hat{\alpha}_l\big(\hat{W}_{m+h-l}(g_i) - \hat{X}_{m+h-l}(g_i)\big) \qquad (43)$$

where

$$\hat{W}_{m+h-l}(g_i) = \begin{cases} 0, & \text{if } h - l \le 0; \\ W_{m+h-l}(g_i), & \text{otherwise.} \end{cases} \qquad (44)$$

The prediction error in estimating $\beta$, that is, expression (39), is thus

$$W'_{m+h}(g_i)\, V\big(\hat{\beta}\big)\, W_{m+h}(g_i) \qquad (45)$$

where $V\big(\hat{\beta}\big)$ is computed in (25).
Step 2: Compute the prediction error that would be incurred if the regression coefficients $\beta$ were known, i.e., equation (40).

- Compute $\boldsymbol{C}_T(m+h)$ by the AR(L) autocovariance function $\gamma_T(k)$ (McLeod 1975).

First, compute $\gamma_T(0), \ldots, \gamma_T(L)$ by solving the linear system

$$A\,\big(\gamma_T(0), \gamma_T(1), \ldots, \gamma_T(L)\big)' = (1, 0, \ldots, 0)' \qquad (46)$$

Note that the first element of the vector on the right-hand side (the variance of the measurement error) is fixed to one, to account for the normalization through the spatial variance-covariance structure.

For $k = L+1, \ldots, m+H-1$, recursively compute

$$\gamma_T(k) = \hat{\alpha}_1\gamma_T(k-1) + \cdots + \hat{\alpha}_L\gamma_T(k-L) \qquad (47)$$

Remark: the $(L+1) \times (L+1)$ matrix $A$ is constructed as

$$A_{ij} = \begin{cases} -[\alpha_{i-1}], & j = 1;\; i = 1, \ldots, L+1 \\ -[\alpha_{i-j}] - [\alpha_{i+j-2}], & j = 2, \ldots, L+1;\; i = 1, \ldots, L+1 \end{cases} \qquad (48)$$

where

$$[\alpha_k] = \begin{cases} -1, & k = 0; \\ 0, & k < 0 \text{ or } k > L; \\ \hat{\alpha}_k, & 0 < k \le L. \end{cases} \qquad (49)$$
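Under these definitions, the autocovariances can be computed by building $A$ from (48)-(49), solving (46), and then applying the recursion (47). A sketch:

```python
import numpy as np

def ar_autocovariances(alpha_hat, kmax):
    """gamma_T(0..kmax) for an AR(L) process normalized so that the
    innovation variance is 1, per equations (46)-(49)."""
    L = len(alpha_hat)

    def bracket(k):                    # [alpha_k] of equation (49)
        if k == 0:
            return -1.0
        if 0 < k <= L:
            return alpha_hat[k - 1]
        return 0.0

    A = np.zeros((L + 1, L + 1))
    for i in range(1, L + 2):
        A[i - 1, 0] = -bracket(i - 1)                      # j = 1 column
        for j in range(2, L + 2):
            A[i - 1, j - 1] = -bracket(i - j) - bracket(i + j - 2)
    b = np.zeros(L + 1)
    b[0] = 1.0                         # unit innovation variance
    gamma = list(np.linalg.solve(A, b))
    for k in range(L + 1, kmax + 1):   # recursion (47)
        gamma.append(sum(a * gamma[k - 1 - l]
                         for l, a in enumerate(alpha_hat)))
    return np.array(gamma)

# AR(1) check: gamma(0) = 1/(1 - a^2) and gamma(k) = a^k * gamma(0)
g = ar_autocovariances([0.5], 3)
```

For an AR(1) process with coefficient 0.5 this yields $\gamma_T(0) = 4/3$ and $\gamma_T(k) = (1/2)^k \cdot 4/3$, matching the closed-form autocovariance.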
- Compute the approximate factorization of $\Sigma_T^{-1}$ such that $R'R \approx \Sigma_T^{-1}$, where $R$ is an $(m-L) \times m$ matrix (this follows from Cholesky or Gram-Schmidt orthogonalization; see, for example, Fuller 1975). Each row of $R$ holds the coefficients $(-\hat{\alpha}_L, \ldots, -\hat{\alpha}_1, 1)$, shifted one column to the right in each successive row:

$$R = \begin{pmatrix}
-\hat{\alpha}_L & \cdots & -\hat{\alpha}_1 & 1 & 0 & \cdots & 0 \\
0 & -\hat{\alpha}_L & \cdots & -\hat{\alpha}_1 & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & & \ddots & \ddots & 0 \\
0 & \cdots & 0 & -\hat{\alpha}_L & \cdots & -\hat{\alpha}_1 & 1
\end{pmatrix} \qquad (50)$$
- Compute the value of expression (40):

$$\gamma_T(0)V_S(g_i) - \big(\boldsymbol{C}'_T(m+h) \otimes \boldsymbol{C}'_S(g_i)\big)\big(R'R \otimes H_S^{-1\prime}H_S^{-1}\big)\big(\boldsymbol{C}_T(m+h) \otimes \boldsymbol{C}_S(g_i)\big) \qquad (51)$$

where $\boldsymbol{C}'_S(g_i)$ is the row of $\boldsymbol{C}_S(\boldsymbol{G})$ corresponding to location $g_i$.
Step 3: The $(1-\alpha) \times 100\%$ prediction interval is

$$\hat{Y}_{m+h}(g_i) \pm t_{nm-q-D^*,\,\alpha/2}\sqrt{V\big[\hat{Y}_{m+h}(g_i) - Y_{m+h}(g_i)\big]} \qquad (55)$$

where $V\big[\hat{Y}_{m+h}(g_i) - Y_{m+h}(g_i)\big]$ is the sum of equations (39) and (40) as computed in expressions (45) and (51), respectively. $t_{nm-q-D^*,\alpha/2}$ is defined by $P\big(X \le t_{nm-q-D^*,\alpha/2}\big) = 1 - \alpha/2$, where $X$ follows a t-distribution with $nm - q - D^*$ degrees of freedom. The default value of $\alpha$ is 0.05.
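The interval in Step 3 reduces to a standard t-based form once the prediction-error variance is available; a sketch, with the t quantile supplied by scipy:

```python
import numpy as np
from scipy import stats

def prediction_interval(y_hat, pred_var, df, alpha=0.05):
    """Two-sided (1 - alpha) prediction interval around a point
    prediction y_hat with prediction-error variance pred_var."""
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)   # t_{df, alpha/2}
    half = t_crit * np.sqrt(pred_var)
    return y_hat - half, y_hat + half

# Hypothetical values: point prediction 10, variance 4 (se = 2), 200 df
lo, hi = prediction_interval(10.0, 4.0, df=200)
```

Here `pred_var` stands in for the sum of expressions (45) and (51), and `df` for $nm - q - D^*$.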
As final output from the prediction step, point prediction, variances of point predictions and
prediction interval (lower and upper bounds) are issued for each specified (location, time).
We remark that to perform what-if analysis, a set of $\boldsymbol{X}$ variables under the new settings needs to be provided. We then re-run the prediction algorithm described in Section 4 to obtain prediction results under the adjusted settings.
References

[1] Brockwell, P., Davis, R.A. (2002), Introduction to Time Series and Forecasting, Second Edition, New York: Springer.
[2] Cohen, A., Jones, R. (1969), "Regression on a Random Field", Journal of the American Statistical Association, 64(328), 1172-1182.
[3] Cressie, N. (1993), Statistics for Spatial Data, Revised Edition, Wiley-Interscience.
[4] Creutin, J.D., Obled, C. (1982), "Objective Analyses and Mapping Techniques for Rainfall Fields: an Objective Comparison", Water Resources Research, 18(2), 413-431.
[5] Fuller, W.A. (1975), Introduction to Statistical Time Series, John Wiley & Sons, New York.
[6] Johnson, S. (1967), "Hierarchical Clustering Schemes", Psychometrika, 32(3), 241-254.
[7] McCulloch, C.E., Searle, S.R., Neuhaus, J.M. (2008), Generalized, Linear and Mixed Models, Second Edition, John Wiley & Sons, Hoboken, New Jersey.
[8] McLeod, I. (1975), "Derivation of the Theoretical Autocovariance Function of Autoregressive-Moving Average Time Series", Applied Statistics, 24(2), 255-256.
[9] Shepard, D. (1968), "A two-dimensional interpolation function for irregularly-spaced data", Proceedings of the 1968 ACM National Conference, 517-524.
[10] Wasserman, S. (1994), Social Network Analysis: Methods and Applications, Cambridge University Press.
Support Vector Machine (SVM) Algorithms
C-Support Vector Classification (C-SVC)
Given training vectors $x_i \in \mathbb{R}^n$, $i = 1, \ldots, l$, in two classes, and a vector $y \in \mathbb{R}^l$ with $y_i \in \{1, -1\}$, C-SVC solves the following dual problem:

$$\min_{\alpha} \; \tfrac{1}{2}\alpha' Q \alpha - e'\alpha$$

such that $y'\alpha = 0$ and $0 \le \alpha_i \le C$, $i = 1, \ldots, l$, where $e$ is the vector of all ones, $C > 0$ is the upper bound, and $Q$ is an $l \times l$ positive semidefinite matrix with $Q_{ij} = y_i y_j K(x_i, x_j)$.

The decision function is

$$\mathrm{sgn}\left(\sum_{i=1}^{l} y_i \alpha_i K(x_i, x) + b\right)$$

where $b$ is a constant term.
ε-Support Vector Regression (ε-SVR)
In regression models, we estimate the functional dependence of the dependent (target) variable on an $n$-dimensional input vector $x$. Thus, unlike classification problems, we deal with real-valued functions and model an $\mathbb{R}^n \to \mathbb{R}$ mapping. Given a set of data $\{(x_1, z_1), \ldots, (x_l, z_l)\}$ such that $x_i \in \mathbb{R}^n$ is an input and $z_i \in \mathbb{R}$ is a target output, the dual form of ε-Support Vector Regression is

$$\min_{\alpha, \alpha^*} \; \tfrac{1}{2}(\alpha - \alpha^*)' Q (\alpha - \alpha^*) + \varepsilon\sum_{i=1}^{l}(\alpha_i + \alpha_i^*) - \sum_{i=1}^{l} z_i(\alpha_i - \alpha_i^*)$$

such that $\sum_{i=1}^{l}(\alpha_i - \alpha_i^*) = 0$ and $0 \le \alpha_i, \alpha_i^* \le C$ for $i = 1, \ldots, l$, where $Q$ is an $l \times l$ positive semidefinite matrix with $Q_{ij} = K(x_i, x_j)$.

The approximate function is

$$\sum_{i=1}^{l}(-\alpha_i + \alpha_i^*) K(x_i, x) + b$$

where $b$ is a constant term.
Primary Calculations
The primary calculations for building SVM models are described below.
Solving Quadratic Problems

In order to find the decision function or the approximate function, the quadratic problem must be solved. After the solution $\alpha$ is obtained, the coefficients fall into three cases:

- If $0 < \alpha_i < C$, the corresponding training sample is a free support vector.
- If $\alpha_i = C$, the corresponding training sample is a boundary support vector.
- If $\alpha_i = 0$, the corresponding training sample is a non-support vector, which doesn't affect the classification or regression result.

Free support vectors and boundary support vectors are together called support vectors.
This document adapts the decomposition method to solve the quadratic problem using second-order information (Fan, Chen, and Lin, 2005). In order to solve all of the SVMs in a unified framework, we introduce a general form for C-SVC and ε-SVR:

$$\min_{\alpha} \; \tfrac{1}{2}\alpha' Q \alpha + p'\alpha \quad \text{such that} \quad y'\alpha = \Delta, \; 0 \le \alpha_i \le C, \; i = 1, \ldots, l.$$

For C-SVC, the vector $\alpha$ in $W(\alpha)$ is the original $\alpha$ of length $l$. For ε-SVR, the dual form is rewritten over the stacked vector $(\alpha, \alpha^*)$ of length $2l$, where $y$ is a $2l$-vector with $y_i = 1$ for $i = 1, \ldots, l$ and $y_i = -1$ for $i = l+1, \ldots, 2l$.
The Constant in the Decision Function

After the quadratic programming problem is solved, we obtain the support vector coefficients in the decision function. The constant term $b$ must be computed as well. We introduce two accessory variables, $r_1$ and $r_2$, computed from the gradient values of the solution: $r_1$ is obtained from the samples with $y_i = 1$ (averaging over the free support vectors if any exist, otherwise taking the midpoint of the feasible range), and $r_2$ analogously from the samples with $y_i = -1$. After $r_1$ and $r_2$ are obtained, calculate $b = -(r_1 + r_2)/2$.
Variable Scale
For continuous input variables, linearly scale each attribute to [-1, 1] or [0, 1], for example

$$x' = \frac{2(x - \min x)}{\max x - \min x} - 1$$

For categorical input fields, if there are m categories, then use (0, 1, 2, ..., m) to represent the categories and scale the values as for continuous input variables.
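A sketch of the [-1, 1] scaling applied column-wise (names are illustrative; constant columns are mapped to 0 to avoid division by zero):

```python
import numpy as np

def scale_minus1_1(X):
    """Linearly scale each column of X to [-1, 1].

    A constant column has max == min, so it is mapped to 0."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    Xs = 2.0 * (X - lo) / span - 1.0
    return np.where(hi > lo, Xs, 0.0)

# Each column is scaled independently of the others
X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
```

In practice the training-set minima and maxima would be stored and reused to scale scoring data consistently.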
Model Building Algorithm
In this section, we provide a fast algorithm to train the SVM. A modified sequential minimal
optimization (SMO) algorithm is provided for C-SVC binary and ε-SVR models. A fast SVM
training algorithm based on divide-and-conquer is used for all SVMs.
SMO Algorithm

Due to the density of the kernel matrix, traditional optimization methods cannot be directly applied to solve for the vector $\alpha$. Unlike most optimization methods, which update the whole vector in each step of an iterative process, the decomposition method modifies only a subset of $\alpha$ per iteration. This subset, denoted as the working set $B$, leads to a small sub-problem to be minimized in each iteration. Sequential minimal optimization (SMO) is an extreme example of this approach: it restricts $B$ to have only two elements, so in each iteration a simple two-variable problem is solved without needing a general-purpose optimization algorithm. The key step of SMO is the working set selection method, which determines the speed of convergence for the algorithm.
Kernel functions

The algorithm supports four kernel functions:

- Linear function: $K(x_i, x_j) = x_i' x_j$
- Polynomial function: $K(x_i, x_j) = (\gamma x_i' x_j + r)^d$
- RBF function: $K(x_i, x_j) = \exp\big(-\gamma \|x_i - x_j\|^2\big)$
- Hyperbolic tangent function: $K(x_i, x_j) = \tanh(\gamma x_i' x_j + r)$
Base Working Set Selection Algorithm

The base selection algorithm derives the working set $B = \{i, j\}$ based on τ, C, the target vector $y$, and the selected kernel function $K(x_i, x_j)$.

Let $a_{ts} = K_{tt} + K_{ss} - 2K_{ts}$ and

$$\bar{a}_{ts} = \begin{cases} a_{ts}, & \text{if } a_{ts} > 0 \\ \tau, & \text{otherwise} \end{cases}$$

where τ is a small positive number.

Select

$$i \in \arg\max_t \big\{ -y_t \nabla f(\alpha)_t \mid t \in I_{up}(\alpha) \big\}$$

$$j \in \arg\min_t \left\{ -\frac{b_{it}^2}{\bar{a}_{it}} \;\middle|\; t \in I_{low}(\alpha),\; -y_t \nabla f(\alpha)_t < -y_i \nabla f(\alpha)_i \right\}$$

where $b_{it} = -y_i\nabla f(\alpha)_i + y_t\nabla f(\alpha)_t$, and

$$I_{up}(\alpha) = \{ t \mid \alpha_t < C,\, y_t = 1 \ \text{or}\ \alpha_t > 0,\, y_t = -1 \}$$

$$I_{low}(\alpha) = \{ t \mid \alpha_t < C,\, y_t = -1 \ \text{or}\ \alpha_t > 0,\, y_t = 1 \}$$

Return $B = \{i, j\}$.
Shrink Algorithm

In order to speed up the convergence of the algorithm near the end of the iterative process, the decomposition method identifies a possible set A containing all final free support vectors. Hence, instead of solving the whole problem, the decomposition method works on the smaller problem restricted to A, with the remaining variables (the set of shrunken variables) held fixed at their bounds.

After every min(l, 1000) iterations, we try to shrink some variables. During the iterative process, until the stopping tolerance is satisfied, variables whose gradient conditions show them to be firmly at a bound can be shrunk. Thus the set A of activated variables is dynamically reduced every min(l, 1000) iterations.

To account for the tendency of the shrinking method to be too aggressive, we reconstruct the gradient when the tolerance becomes sufficiently small. After reconstructing the gradient, we restore some of the previously shrunken variables, based on their optimality conditions.
Gradient Reconstruction

To decrease the cost of reconstructing the gradient $\nabla f(\alpha)$, during the iterations we always maintain the partial sums needed for the active variables. The full gradient over all variables, and the gradient restricted to the active set, can then both be recovered from these stored quantities and from the updates to the working-set variables $\alpha_t$ and $\alpha_s$, where t and s are the working set indices.
Unbalanced Data Strategy

For some classification problems, the algorithm uses different penalty parameters in the SVM formulation for the two classes. The differences only affect the procedure for updating the two working-set variables: each combination of the signs $y_i$, $y_j$ and the bound that is hit has its own update and clipping rule.
SMO Decomposition

The following steps are used in the SMO decomposition:

1. Find an initial feasible solution $\alpha^1$, and set k = 1.
2. If $\alpha^k$ is a stationary solution, stop; a feasible solution is stationary when the maximal violation of the optimality conditions falls below the tolerance. Otherwise, find a two-element working set $B = \{i, j\}$ using the working set selection algorithm. (For more information, see the topic "Base Working Set Selection Algorithm" on p. 371.)
3. If the shrink algorithm is being used to speed up convergence, apply it here. (For more information, see the topic "Shrink Algorithm" on p. 372.)
4. Derive the updated pair $(\alpha_i, \alpha_j)$ as follows:
   - If the classes use different penalty parameters, use the unbalanced data strategy. (For more information, see the topic "Unbalanced Data Strategy" on p. 373.)
   - If $\bar{a}_{ij} > 0$, solve the two-variable subproblem subject to the box and equality constraints.
   - Otherwise, solve the subproblem with the curvature replaced by the small positive number τ, subject to the same constraints.

   Set $\big(\alpha_i^{k+1}, \alpha_j^{k+1}\big)$ to be the optimal point of the subproblem, set $k \leftarrow k + 1$, and go to step 2.
Fast SVM Training
For binary SVM models, the dense kernel matrix cannot be stored in memory when the number of
training samples l is large. Rather than using the standard decomposition algorithm which depends
on a cache strategy to compute the kernel matrix, a divide-and-conquer approach is used, dividing
the original problem into a set of small subproblems that can be solved by the SMO algorithm
(Dong, Suen, and Krzyzak, 2005). For each subproblem, the kernel matrix can be stored in a
kernel cache held in a contiguous block of memory. The size of the kernel matrix should be large
enough to hold all the support vectors in the whole training set and small enough to satisfy the
memory constraint. Since the kernel matrix for the subproblem is completely cached, each element
of the kernel matrix needs to be evaluated only once and must be calculated using a fast method.
There are two steps in the fast SVM training algorithm:
E Parallel optimization
E Fast sequential optimization
These steps are described in more detail below.
Parallel Optimization

Since the kernel matrix Q is symmetric and positive semidefinite, its block diagonal matrices are positive semidefinite, and Q can be approximated as

$$\bar{Q} = \mathrm{diag}(Q_1, Q_2, \ldots, Q_k)$$

where the matrices $Q_1, \ldots, Q_k$ are the diagonal blocks. We then obtain k optimization subproblems, solved as described in "Base Working Set Selection Algorithm" on p. 371. All the subproblems are optimized using the SMO decomposition algorithm in parallel. After this parallel optimization, most non-support vectors are removed from the training set, and a new training set is obtained by collecting the support vectors from the subproblems. Although the new training set is much smaller than the original one, the memory may still not be large enough to store its kernel matrix, especially when dealing with a large dataset. Therefore a fast sequential optimization technique is used.
Fast Sequential Optimization

The technique for fast sequential optimization works by iteratively optimizing subsets of the problem. Initially, the training set is shuffled, all $\alpha_i$ are set to zero, and a subset Sub of size d is selected from the training set S.

Optimization proceeds as follows:

E Apply the SMO algorithm to optimize a subproblem in Sub with kernel caching, and update the gradient and the kernel matrix. For more information, see the topic "SMO Algorithm" on p. 371.

E Select a new subset using the queue subset method. The size of the subset is chosen to be large enough to contain all support vectors in the training set but small enough to satisfy the memory constraint. For more information, see the topic "Queue Method for Subset Selection" on p. 376.

E Return to step 1 unless any of the following stopping conditions is true:
- the number of support vectors does not change between two successive subsets and (number of learned samples) > l,
- (number of learned samples) > T · l,

where l is the size of the new training set and T (> 1.0) is a user-defined maximum number of loops through the data allowed.
Queue Method for Subset Selection
The queue method selects subsets of the training set that can be trained by fast sequential
optimization. For more information, see the topic "Fast Sequential Optimization" on p. 376.
The method is initialized by setting the subset to contain the first d records in the training data and
the queue QS to contain all the remaining records, and computing the kernel matrix for the subset.
Once initialized, subset selection proceeds as follows: each non-support vector in the subset
is added to the end of the queue, and replaced in the subset with the record at the front of the
queue (which is consequently removed from the queue). When all non-support vectors have been
replaced, the subset is returned for optimization. On the next iteration, the same process is applied,
starting with the subset and the queue in the same state they were in at the end of the last iteration.
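The queue rotation described above can be sketched as follows; the support-vector test is a placeholder predicate and the names are illustrative:

```python
from collections import deque

def next_subset(subset, queue, is_sv):
    """Replace every non-support vector in `subset` with the record at
    the front of `queue`, appending the replaced record to the back.

    subset : list of record indices currently held in memory
    queue  : deque of the remaining record indices
    is_sv  : predicate marking which records are support vectors
    """
    out = []
    for rec in subset:
        if is_sv(rec) or not queue:
            out.append(rec)              # support vectors stay in the subset
        else:
            queue.append(rec)            # non-SV goes to the back of the queue
            out.append(queue.popleft())  # front of the queue takes its place
    return out

q = deque([10, 11, 12])
# records 0 and 2 are support vectors; 1 and 3 are not
print(next_subset([0, 1, 2, 3], q, lambda r: r in {0, 2}))  # → [0, 10, 2, 11]
```

Because the subset and queue persist between iterations, every record eventually rotates through the in-memory subset.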
Blank Handling
All records with missing values for any input or output field are excluded from the estimation of
the model.
Model Nugget/Scoring
The SVM model nugget generates predictions and predicted probabilities for output classes. Predictions are based on the category with the highest predicted probability for each record.

To choose a predicted value, posterior probabilities are approximated using a sigmoid function (Platt, 2000). The approximation used is

$$P(y = 1 \mid f) = \frac{1}{1 + \exp(Af + B)}.$$

The optimal parameters A and B are estimated by solving a regularized maximum likelihood problem with a set of labeled examples, where $N_+$ is the number of positive examples and $N_-$ is the number of negative examples; the target values used in the fit are $\frac{N_+ + 1}{N_+ + 2}$ if $y_i = 1$ and $\frac{1}{N_- + 2}$ if $y_i = -1$.
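A sketch of scoring with already-fitted Platt parameters (the A and B values below are hypothetical, not fitted):

```python
import math

def platt_probability(f, A, B):
    """Posterior P(y=1 | decision value f) via Platt's sigmoid,
    written in an overflow-safe form for large |Af + B|."""
    z = A * f + B
    if z >= 0:
        return 1.0 / (1.0 + math.exp(z))
    e = math.exp(-z)
    return e / (1.0 + e)

# With A = -2, B = 0: decision value 0 maps to probability 0.5,
# and larger decision values map to larger probabilities.
p = platt_probability(1.0, A=-2.0, B=0.0)
```

A negative A makes the probability increase with the decision value, which is the usual orientation after fitting.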
Blank Handling
Records with missing values for any input field cannot be scored and are assigned a predicted
value and probability value(s) of $null$.
Temporal Causal Modeling Algorithms
1. Introduction
Forecasting and prediction are important tasks in real world applications that involve decision making. In such
applications, it is important to go beyond discovering statistical correlations and unravel the key variables that
influence the behaviors of other variables using an algebraic approach. Many real world data, such as stock price
data, are temporal in nature; that is, the values of a set of variables depend on the values of another set of variables at
several time points in the past. Temporal causal modeling, or TCM, refers to a suite of methods that attempt to
discover key temporal relationships in time series data. This chapter describes a particular method to discover
temporal relationships using a combination of Granger causality and regression algorithms for variable selection.
Although this treatment strives to be self-contained, a minimal set of papers describing the design principles behind
the method can be found in [Lozano et al., 2011, Lozano et al., 2009, Arnold et al., 2007]1.
The rest of the chapter is organized as follows. Section 2 lays the groundwork for the TCM algorithm (notation and
brief history) and explains the greedy orthogonal matching pursuit (GOMP) [Lozano et al., 2011] algorithm that is
used. Section 3 describes the techniques used to fit and forecast time series and compute approximated forecasting
intervals. Section 4 describes scenario analysis, which refers to a capability of the TCM product to “play-out” the
repercussions of artificially setting the value of a time series. Section 5 describes the detection of outliers, and
Section 6 discusses how potential causes for outliers can be established using root cause analysis.
2. Model
Introduced by Clive Granger [Granger, 1980], Granger causality in time series is based on the intuition that a cause should necessarily precede its effect, and that if time series $x$ causally affects time series $y$, then the past values of $x$ should be useful in predicting the future values of $y$. More specifically, time series $x$ is said to "Granger cause" time series $y$ if the accuracy of regressing for $y$ in terms of past values of both $y$ and $x$ is statistically significantly better than regressing just with past values of $y$. If the time series have $T$ time points and are denoted by $\{x_t\}_{t=1}^{T}$ and $\{y_t\}_{t=1}^{T}$, then the following regressions are performed:

$$y_t \sim \sum_{j=1}^{L} a_j y_{t-j} + \sum_{j=1}^{L} b_j x_{t-j} \qquad (1)$$

$$y_t \sim \sum_{j=1}^{L} a_j y_{t-j} \qquad (2)$$

Here $L$ is the number of lags; that is, the value of $y$ at time $t$ can only be determined by values of other time series at times $t-1, \ldots, t-L$. If Equation (1) is statistically more significant (using some test for significance) than Equation (2), then $x$ is deemed to Granger cause $y$.
1 The methods described in this chapter are particularly useful for under-determined systems, where the number of time series far exceeds the number of samples. Although these methods function for both overdetermined and fully-determined systems, there are other approaches to pursue for such systems.
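The pairwise test in Equations (1)-(2) can be sketched as two least-squares fits compared with an F test; the lag length, series, and significance test here are illustrative, not the product's exact procedure.

```python
import numpy as np
from scipy import stats

def granger_f_test(y, x, L=1):
    """F test of whether lagged x improves a lag-L autoregression of y.

    Returns (F, p): the full model (1) adds lags of x to the
    restricted model (2), which uses lags of y only."""
    T = len(y)
    Y = y[L:]
    ylags = np.column_stack([y[L - j:T - j] for j in range(1, L + 1)])
    xlags = np.column_stack([x[L - j:T - j] for j in range(1, L + 1)])
    ones = np.ones((T - L, 1))
    X_full = np.hstack([ones, ylags, xlags])
    X_rest = np.hstack([ones, ylags])

    def rss(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        return float(np.sum((Y - X @ beta) ** 2))

    rss_r, rss_f = rss(X_rest), rss(X_full)
    df2 = (T - L) - X_full.shape[1]
    F = ((rss_r - rss_f) / L) / (rss_f / df2)
    p = stats.f.sf(F, L, df2)
    return F, p

# Synthetic example: y is driven almost entirely by lag-1 of x
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
F, p = granger_f_test(y, x, L=1)
```

A small p-value indicates that the lags of x carry predictive information about y beyond y's own past, i.e., evidence for Granger causality from x to y.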
2.1 Graphical Granger Modeling
The classical definition of Granger causality is defined for a pair of time series. In the real world, we are interested in finding not one, but all the significant time series that influence the target time series. In order to accomplish this, we use group greedy regression algorithms with variable selection (see Section 2.3). An important feature of our TCM algorithm is that it groups influencer/predictor variables; that is, we are interested in predicting whether time series $x$ as a whole, across all of its lagged values, has influence over time series $y$. Such grouping is a more natural interpretation of causality and also helps sparsify the solution set. For example, without such grouping we might select a single time-lagged series $x_{t-j}$ to model $y$ but not select any other lag of $x$, which increases the number of choices for variable selection $L$-fold, where $L$ is the number of lags that is allowed.
2.2 Notation
The following notation is used throughout this chapter unless otherwise stated:
Table 1: Notation
Notation
\
Type
—
—
—
,
Description
Set of natural numbers
Set of real numbers
Regression solve operator
Size operator
norm of a vector, i.e.,
Number of time points
Number of time series
Number of lags for each target,
Design matrix of input series
Target series vector
Computes lag matrix
for the set of column indices in J
Computes means for series
Computes standard deviations for series
Tolerance value for stopping criterion
Max number of predictors selected or
maximum number of iterations
Actual number of predictors selected for a
target series
Estimated coefficients for predictors on the
transformed scale
In this section, we introduce the algorithm that is used to construct the temporal causal model. The list of symbols used in the rest of this chapter is summarized in Table 1. Most of the symbols are self-explanatory; however, the function G, which stands for grouping, requires some additional explanation. G is a function that takes a matrix X (X ∈ ℝ^{T×n}), a set of column indices J, and a lag value L, and constructs a lag matrix that has T − L rows and L·|J| columns. Basically, for every column index j ∈ J, G constructs a (T − L) × L lag matrix by carefully unrolling the jth column of the input matrix. An example of G in action is shown below. If the first column of X is (x_1, x_2, x_3, x_4, x_5)ᵀ, then

G(X, {1}, 2) =
[ x_2  x_1 ]
[ x_3  x_2 ]
[ x_4  x_3 ]

In this example, the input matrix X has 4 time series (n = 4) and five time points per time series (T = 5). The lag matrix associated with the time series in column 1, when L (lag) is 2, is produced by invoking G(X, {1}, 2). Note that the lag matrix consists of the lag-1 vector of the series as the first column, the lag-2 vector as the second column, up to the lag-L vector as the Lth column. Similarly, the functions μ and σ accept any input matrix and compute the mean and the standard deviation, respectively, of the matrix's columns. For purposes of numerical stability, and to increase interpretability during modeling, columns of the lagged matrix are both centered by the column means and scaled by the column standard deviations 2. On the other hand, the target is only centered. Mean centering and scaling transform the jth column z_j of the lagged matrix into (z_j − μ_j)/σ_j; for the example above, the first and second columns become (z_1 − μ_1)/σ_1 and (z_2 − μ_2)/σ_2, where μ_1, μ_2 and σ_1, σ_2 are the means and standard deviations of the first and the second columns, respectively.
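The grouping and transformation functions can be sketched as follows; `lag_matrix` and `standardize_lagged` are illustrative names standing in for G and for the μ/σ-based transformation.

```python
import numpy as np

def lag_matrix(X, J, L):
    """G(X, J, L): unroll each column j in J into L lagged columns.
    Row t corresponds to target time t + L; column order is lag-1, ..., lag-L."""
    T = X.shape[0]
    cols = [X[L - l:T - l, j] for j in J for l in range(1, L + 1)]
    return np.column_stack(cols)

def standardize_lagged(Z):
    """Center each lagged column by its mean and scale by its standard deviation."""
    return (Z - Z.mean(axis=0)) / Z.std(axis=0)
```

For a 5-point series (1, 2, 3, 4, 5) and L = 2, `lag_matrix` reproduces the worked example above.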
2.3 Group Orthogonal Matching Pursuit (GOMP)
Algorithm 1: GOMP
Input: y, X, L, M, ε; the functions G, μ, and σ; the set I of pre-selected predictor indices; the set F of forbidden predictor indices.
Output: I, β.
1    Z ← G(X, I, L);
2    Z ← (Z − μ(Z))/σ(Z); ỹ ← y − μ(y);
3    β⁰ ← Z \ ỹ;
4    r⁰ ← ỹ − Zβ⁰;
5    if any redundant series are found, delete them in I;
6    if |I| ≥ M, then keep the first M predictor series in I, update β⁰ and r⁰, return I and β⁰, and stop;
7    otherwise update β⁰ and r⁰;
8    for k = 1, 2, ..., M − |I| do
9        j* ← argmin(r^{k−1}, X, L, I, F) (Algorithm 2);
10       if j* = −1, return I and β^{k−1} and stop;
11       J ← I ∪ {j*};
12       Z ← G(X, J, L);
13       Z ← (Z − μ(Z))/σ(Z);
14       β^k ← Z \ ỹ;
15       r^k ← ỹ − Zβ^k;
16       I ← J;
17       if ‖r^k‖₂² ≤ ε, break;
18   return I, β.

2 Although each column of the lagged matrix has a different mean and standard deviation, due to the structure of these columns, it is possible to compute the mean and the standard deviation of the time series itself and use those to center and scale the lagged columns.
We begin by describing Algorithm 1: GOMP, which will be used to establish causality of time-series data. This algorithm receives the variables described in Table 1 as input. Briefly, y is a (T − L) × 1 target vector for which we want to establish the Granger causality (note that we have excluded the first L values of y). In contrast, X is the input unlagged time series data. L is the number of lags for each predictor in each target series, M is the maximum number of predictors to be selected per target, and ε determines whether a new predictor needs to be added. In addition, G, μ, and σ are the grouping, centering, and scaling functions which have been described in Section 2.2. I is the set of pre-selected predictor indices for y, and always contains the lagged y. F is the set of forbidden predictors, if any, for y. If there are no forbidden predictors, then F is empty. Given these, the goal is to greedily find predictors that solve the system subject to sparsity constraints.
The greedy algorithm approximates an M-sparse solution by iteratively choosing the best predictor for addition at each iteration. We use superscripts to denote the iteration number in Algorithm 1. For example, β⁰ represents the initial values of β at the 0th iteration (before the actual iteration starts). The first part of the algorithm (lines 1–4) constructs and solves a linear system consisting of the predictors in I to obtain β⁰, the coefficient vector for predictors on the transformed scale. At the end of this first part, we have r⁰, the initial residual. Then check whether there are redundant predictor series in I. If yes, then delete them. If the number of predictor series in the (updated) I is equal to or larger than the maximum number of iterations (i.e., |I| ≥ M), then keep the first M predictor series in I, update β⁰ and r⁰, return I and β⁰, and stop the process (line 6); otherwise (i.e., |I| < M), update β⁰ and r⁰ (line 7) if any redundant predictor series were deleted. Then start the iterative process to add one predictor series at a time (line 8). The first step in predictor selection (line 9) consists of an argmin function that systematically goes over each eligible predictor and evaluates its goodness (see Algorithm 2). This step is the performance-critical portion of the algorithm and can be searched in parallel. At the end of the step, j*, the index corresponding to the best predictor, is available. However, if no suitable predictor is found in the argmin function (i.e., j* = −1), then return I and β and stop (line 10). The next part (lines 11–14) re-estimates the model coefficients by adding j* to the model. Line 15 updates the residual, r, for this model and line 16 adds j* to the model. Finally, if the norm of the current residuals is equal to or smaller than the tolerance value (i.e., ‖r‖₂² ≤ ε), then the iterative process is terminated (line 17).
Note that if the tolerance is achieved by adding j*, then no new iterations are required and the iterative process is terminated. Thus the actual number of predictors selected, M_i, can be less than the maximum number of iterations, M (i.e., M_i < M). However, if the tolerance is set very small, then it is highly unlikely that such a situation will happen.
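The greedy loop can be sketched as follows. This is a simplified illustration of group-wise selection only (no redundancy handling, forbidden sets, or pre-selected predictors), and the function name `gomp` is ours, not the product's.

```python
import numpy as np

def gomp(y, X, L, M, eps):
    """Group orthogonal matching pursuit sketch: repeatedly add the lagged
    series group that most reduces the squared residual norm."""
    T, n = X.shape
    lag = lambda j: np.column_stack([X[L - l:T - l, j] for l in range(1, L + 1)])
    # Standardized lag-matrix group for every candidate series.
    groups = [(g - g.mean(0)) / g.std(0) for g in (lag(j) for j in range(n))]
    y_c = y[L:] - y[L:].mean()          # target is only centered
    selected = []
    for _ in range(M):
        costs = {}
        for j in set(range(n)) - set(selected):
            Z = np.column_stack([groups[g] for g in selected + [j]])
            b = np.linalg.lstsq(Z, y_c, rcond=None)[0]
            costs[j] = np.sum((y_c - Z @ b) ** 2)
        j_star = min(costs, key=costs.get)
        selected.append(j_star)
        # Re-estimate coefficients and the residual with j_star included.
        Z = np.column_stack([groups[g] for g in selected])
        beta = np.linalg.lstsq(Z, y_c, rcond=None)[0]
        r = y_c - Z @ beta
        if np.sum(r ** 2) <= eps:
            break
    return selected, beta
```

With a target driven by lags of one series, that series is picked in the first iteration.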
Algorithm 2: argmin
Input: r, X, L, I, F; the functions G, μ, and σ.
Output: j*: Selected group index.
1    c ← ‖r‖₂²; j* ← −1;
2    for j = 1, ..., n do
3        if j ∈ I or j ∈ F, continue;
4        Z_j ← G(X, {j}, L);
5        Z_j ← (Z_j − μ(Z_j))/σ(Z_j);
6        β_j ← Z_j \ r;
7        r_j ← r − Z_j β_j;
8        if ‖r_j‖₂² < c − δ, then c ← ‖r_j‖₂² and j* ← j;
9    return j*.
The implementation of the argmin function (line 9, Algorithm 1) is shown in Algorithm 2. The algorithm first assigns the initial cost to be the square of the L₂ norm of the current residuals, and the selected group index to be −1 (line 1). Then it loops over each series group, first checking if the time series j being considered for addition has already been added to the solution or if it is a forbidden predictor (line 3). If the current group j is not yet selected, the lagged transformed matrix corresponding to this time series (Z_j) is constructed using the G, μ, and σ functions (lines 4 and 5). After grouping and transforming, the residual corresponding to the candidate time series j is computed by first regressing r on Z_j (line 6) and then computing the residual (line 7). Finally, the current time series is selected as the leading candidate if the square of the L₂ norm of its residual is lower than the previous estimate minus a threshold value, δ (line 8). Including such a threshold value prevents selecting an (almost) identical series.
The loop in Algorithm 2 (line 2) can be thought of as iterating over all candidate series. For each candidate series,
the following computations are carried out: (1) a filter is applied in line 3 to ensure that it is a valid candidate; (2)
lines 4 and 5 map the current candidate to the transformed matrix that represents the lag matrix to be used; (3)
lines 6 and 7 evaluate the goodness of the current candidate by first solving a dense linear system and then
computing the residual; (4) line 8 applies a predicate to check if the current candidate series is better than previously
evaluated candidates. Notice that the predicate (line 8) is associative and commutative; therefore, Algorithm 2 can
be parallelized by dividing the iteration space ([1,n]) into chunks and executing each chunk in parallel. To get the
globally best group, it is sufficient to reduce the groups that were selected by each parallel instance in a tree-like
fashion by applying the predicate in line 8.
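The chunk-and-reduce scheme described above can be sketched with a thread pool. `parallel_argmin`, `best_in_chunk`, and `cost_fn` are illustrative names; the cost function stands in for the regression-residual evaluation of Algorithm 2.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def best_in_chunk(chunk, cost_fn):
    """Evaluate one chunk of candidate indices; return (cost, index)."""
    best = (np.inf, -1)
    for j in chunk:
        best = min(best, (cost_fn(j), j))
    return best

def parallel_argmin(candidates, cost_fn, workers=4):
    """Split the candidate range into chunks, evaluate each in parallel,
    then reduce the per-chunk winners with the same min predicate."""
    chunks = np.array_split(list(candidates), workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = ex.map(best_in_chunk, chunks, [cost_fn] * workers)
    return min(results)[1]
```

Because the predicate is associative and commutative, the per-chunk results can be combined in any order without changing the winner.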
2.4 Selecting L
Both Algorithms 1 and 2 accept L as an input parameter, which can be specified by the user. If L is not explicitly specified, then the following heuristic approach can be used to determine L based on T (the number of time points) and S (the periodicity or seasonal length):
(1) If ... and ..., then L = ... .
(2) If ... or ..., then L = ... .
2.5 AR(L) Model
Out of the n series in the data, some series may be used as predictors only, so no TCM models are built for them. However, if they are selected as predictors for some target series, then simple models need to be built for them in order to do forecasting. For example, suppose that time series 1 is a selected predictor for time series 2, but there is no model built for time series 1. While a model for time series 1 is not needed in order to forecast time series 2 at time t + 1 (where t is the latest time in the data), forecasts for time t + 2 require values of time series 1 for time t + 1, which then requires a model for time series 1.
Hence, for each predictor-only series, a simple auto-regressive (AR) model is built using the same lag, L, as used for the target series. This model, called an AR(L) model, can be constructed using Algorithm 1 by specifying the series to be the target itself and setting the maximum number of predictors to be 1.
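A least-squares AR(L) fit and its iterated forecasts can be sketched as below; `fit_ar` and `forecast_ar` are illustrative helpers, not the product's API.

```python
import numpy as np

def fit_ar(y, L):
    """Fit y_t = c + a_1 y_{t-1} + ... + a_L y_{t-L} by least squares."""
    T = len(y)
    Z = np.column_stack([np.ones(T - L)] + [y[L - l:T - l] for l in range(1, L + 1)])
    coef, *_ = np.linalg.lstsq(Z, y[L:], rcond=None)
    return coef  # [c, a_1, ..., a_L]

def forecast_ar(y, coef, steps):
    """Iterated one-step-ahead forecasts, feeding predictions back in."""
    L = len(coef) - 1
    hist = list(y[-L:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[l] * hist[-l] for l in range(1, L + 1))
        hist.append(nxt)
        out.append(nxt)
    return out
```

This is exactly the role the AR(L) models play in Section 3: supplying future values of predictor-only series so that multi-step forecasts of the targets remain possible.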
2.6 Post-estimation steps
Algorithm 1 selects the best predictors (time series) to model a target series y. Without loss of generality, we assume that the model for y is ỹ = Zβ + r, where Z is the selected predictor series matrix with the lagged terms on the transformed scale, β is the estimated standardized coefficient vector, and r is the residual vector.
However, this is not the end of modeling. Several post-processing steps are needed in order to complete the modeling process for y. The steps include three parts: (1) coefficients and statistical inference; (2) tests of model effects; (3) model quality measures.
2.6.1 Coefficients and statistical inference
The results of Algorithm 1 include β = (ZᵀZ)⁻Zᵀỹ and (ZᵀZ)⁻ (by solving the linear system from Cholesky decomposition), where superscript T means the transpose of a matrix or vector, and (ZᵀZ)⁻ is a generalized inverse of the matrix ZᵀZ. Based on these quantities, the first step is to compute coefficient estimates, their standard errors, and statistical inference on the original scale.
Table 2: Additional notation

Notation   Description
M*         Actual number of predictors selected (including the target itself) for y
p          Number of coefficient estimates in β, i.e., p = L·M*
p*         Number of non-redundant coefficient estimates in β, p* ≤ p
Z          Selected predictor series matrix with lagged terms on the transformed scale. This is a (T − L) × p matrix.
X*         Selected predictor series matrix on the original scale. This is a (T − L) × (p + 1) matrix whose first column is a column vector of 1's corresponding to an intercept.
b          Unstandardized coefficient estimates vector (corresponding to β), which is a (p + 1) × 1 vector. The first element, b₀, is the intercept estimate.
s²         Estimated variance of the model based on residuals.
Σ̃          Covariance matrix of standardized coefficient estimates on the transformed scale, i.e., Σ̃ = s²(ZᵀZ)⁻. The jth diagonal element is σ̃²_j and its square root, σ̃_j, is the standard error of the jth standardized coefficient estimate.
Σ_b        Covariance matrix of unstandardized coefficient estimates on the original scale. The jth diagonal element is σ²_{b,j} and its square root, σ_{b,j}, is the standard error of the jth unstandardized coefficient estimate.
c          Centering vector of the lagged columns, i.e., c = (c₁, ..., c_p)ᵀ, where c_j is the mean of the jth lagged column.
S          Scaling matrix of the lagged columns, i.e., S = diag(s₁, ..., s_p), where s_j is the standard deviation of the jth lagged column.
A          Transformation matrix from β to b, constructed from c and S. Note that b is a (p + 1) × 1 vector.
The relationship between b and β is b = Aβ. The relevant statistics are computed as follows:

 Unstandardized coefficient estimates

b_j = β_j / s_j, j = 1, ..., p (3)

b₀ = ȳ − Σ_{j=1}^{p} c_j b_j (4)

where ȳ is the mean of y.

 Standard errors of unstandardized coefficient estimates

Σ_b = AΣ̃Aᵀ (5)

σ_{b,j} = sqrt((Σ_b)_{jj}) (6)

where A is the transformation matrix of Table 2.

 t-statistics for coefficient estimates

t_j = b_j / σ_{b,j} (7)

which follows an asymptotic t distribution with df degrees of freedom, where df denotes the residual degrees of freedom. Then the p-value is computed as

p-value = 2 × (1 − P(t_{df} ≤ |t_j|)) (8)

 100(1 − α)% confidence intervals

b_j ± t_{df, 1−α/2} · σ_{b,j} (9)

where α is the significance level and t_{df, 1−α/2} is the 100(1 − α/2)th percentile of the t distribution with df degrees of freedom.
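The back-transformation in Equations (3) and (4) can be checked numerically: fitting on standardized lagged columns and mapping back must reproduce a direct ordinary-least-squares fit with an intercept. The helper below is a sketch with illustrative names.

```python
import numpy as np

def unstandardize(beta_t, col_means, col_sds, y_mean):
    """Equations (3) and (4): b_j = beta_j / s_j for the slopes and
    b_0 = y_mean - sum_j c_j * b_j for the intercept."""
    b = np.asarray(beta_t) / np.asarray(col_sds)
    b0 = y_mean - np.dot(col_means, b)
    return np.concatenate([[b0], b])
```

The agreement holds because regression on centered, scaled columns spans the same space as regression on the raw columns plus an intercept.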
2.6.2 Tests of model effects
For each selected predictor series for y, there are L lagged columns associated with it. The columns can be grouped together, considered as an effect, and tested with a null hypothesis of zero for all coefficients. This is similar to the test of a categorical effect with all dummy variables in a (generalized) linear model setting. Only type III tests are conducted here. For each selected predictor series, the type III test matrix C is constructed and H₀: Cb = 0 is tested based on an F-statistic.

 F-statistics for effects

F = (Cb)ᵀ(CΣ_bCᵀ)⁻(Cb) / r_C (10)

where r_C = rank(CΣ_bCᵀ). The statistic follows an approximate F distribution with the numerator degrees of freedom r_C and the denominator degrees of freedom df. Then the p-value is computed as follows:

p-value = 1 − P(F(r_C, df) ≤ F) (11)
2.6.3 Model quality measures
In addition to statistical inferences, the goodness of the model can be evaluated. In the measures below, ŷ_t denotes the fitted value of y_t, and the sums run over t = L + 1, ..., T. The following model quality measures are provided:

 Root Mean Squared Error (RMSE)

RMSE = sqrt( Σ_t (y_t − ŷ_t)² / (T − L) ) (12)

 Root Mean Squared Percentage Error (RMSPE)

RMSPE = 100 × sqrt( (1/(T − L)) Σ_t ((y_t − ŷ_t)/y_t)² ) (13)

 R squared

R² = 1 − Σ_t (y_t − ŷ_t)² / Σ_t (y_t − ȳ)² (14)

 Bayesian Information Criterion (BIC)

BIC = (T − L) × ln( Σ_t (y_t − ŷ_t)² / (T − L) ) + p × ln(T − L) (15)

 Akaike Information Criterion (AIC)

AIC = (T − L) × ln( Σ_t (y_t − ŷ_t)² / (T − L) ) + 2p (15')
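The measures above can be computed directly from the residuals. The sketch below assumes the standard log-likelihood-based forms of BIC and AIC; `quality` is an illustrative helper name, and `k` stands for the number of estimated parameters.

```python
import numpy as np

def quality(y, fitted, k):
    """RMSE, R-squared, BIC, and AIC from residuals; len(y) plays the
    role of the effective sample size T - L."""
    m = len(y)
    sse = np.sum((y - fitted) ** 2)       # sum of squared errors
    return {
        "rmse": np.sqrt(sse / m),
        "r2": 1.0 - sse / np.sum((y - y.mean()) ** 2),
        "bic": m * np.log(sse / m) + k * np.log(m),
        "aic": m * np.log(sse / m) + 2 * k,
    }
```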
3. Scoring
Once the models for all the required targets are built and post-estimation statistics are computed, the next task is to use these models to do scoring. There are two types of scoring: (1) fit: in-sample prediction for the past and current values of the target series; (2) forecast: out-of-sample prediction for future values of the target series.
3.1 Fit
Without loss of generality, we assume X* and Z are the selected predictor series matrices without lagged terms and with lagged terms, respectively, and b is the coefficient estimates vector for the target y. Given that all series have T time points, in-sample prediction of y is one-step-ahead prediction and can be written as

ŷ_t = x*ᵀ_t b, t = L + 1, ..., T (16)

where x*_t is the row of X* corresponding to time t. The corresponding 100(1 − α)% confidence interval of ŷ_t is

[ŷ_t − t_{df, 1−α/2} · se(ŷ_t), ŷ_t + t_{df, 1−α/2} · se(ŷ_t)] (17)

se(ŷ_t) = s · sqrt(1 + x*ᵀ_t (X*ᵀX*)⁻ x*_t) (18)
3.2 Forecast
Given that data is available up to time interval t, the one-step-ahead forecast for y_{t+1} is

ŷ_{t+1} = b₀ + Σ_j Σ_{l=1}^{L} b_{j,l} x_{j, t+1−l} (19)

where the outer sum runs over the selected predictor series. The k-step-ahead forecast for y_{t+k} is

ŷ_{t+k} = b₀ + Σ_j Σ_{l=1}^{L} b_{j,l} x̂_{j, t+k−l} (20)

where x̂_{j,s} = x_{j,s} whenever s ≤ t; that is, observed values are used wherever they are available, and forecast values are used otherwise. Thus, forecasting the value of y at time t + 2 requires us to first forecast the values of all the predictors up to time t + 1. Forecasting the values of all the predictors up to time t + 1 requires us to use Equation (19) on all the predictors. Similarly, to predict the value of y at time t + 3, we need to forecast the values of the predictors at time t + 2 by using Equation (20). This task poses a bigger problem; to forecast the values of the predictors at time t + 2, we first need to forecast the values of the predictors of the predictors at time t + 1. That is, as we increasingly look into the future, we need to forecast more and more values to determine the value of y.
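The cascading forecasts described above can be sketched as a joint roll-forward over all series: at each future step every series is predicted from the last L rows of (observed or previously forecast) history. The model container below is a stand-in for the fitted per-series TCM models; all names are illustrative.

```python
import numpy as np

def roll_forward(X, models, L, steps):
    """Iterated multi-step forecasting for all series at once.
    `models[i]` maps the flattened last-L-rows history block to the next
    value of series i."""
    hist = X.copy()
    for _ in range(steps):
        block = hist[-L:].ravel()                  # last L rows, all series
        nxt = np.array([m(block) for m in models])
        hist = np.vstack([hist, nxt])              # forecasts feed back in
    return hist[len(X):]                           # forecast rows only
```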
3.3 Approximated forecasting variances and intervals
In this subsection, we outline how forecasting variances and intervals can be computed for TCM models. We start by using the following representation for the linear model built by TCM for a target y:

y_t = b₀ + Σ_j Σ_{l=1}^{L} b_{j,l} x_{j, t−l} + ε_t (21)

where Var(ε_t) = σ² and σ² is estimated as s² (computed in Section 2.6.1). Please note that we don't include parameter estimation error when defining forecasting error in TCM.
The forecasting error at t + k is defined as the difference between the actual value and the forecast, which can be written as

e_{t+k} = y_{t+k} − ŷ_{t+k} (22)

The forecasting variance for one-step-ahead forecasts is computed as s². For multi-step-ahead forecasts, the forecasting error at t + k is

e_{t+k} = ε_{t+k} + Σ_j Σ_{l=1}^{L} b_{j,l} e_{j, t+k−l} (23)

where e_{j,s} is the forecasting error of predictor series j at time s, and e_{j,s} = 0 if s ≤ t. In general, the error terms in Equation (23) are not independent of each other. The larger k is, the more complex the dependence is. In addition, e_{t+k} and e_{t+k'} might not be independent for k ≠ k'. In order to fully consider the dependence, we need to write all time series in vector autoregressive (VAR) format. Since we assume the number of series is usually large, the parameter matrix might be too large to handle in computation of the forecasting variances. Therefore, we make the assumption that all forecasting error terms in Equation (23) are independent, so it is easier to compute the forecasting variances.
Based on the above independence assumption, the approximated variance of the forecasting error, Var(e_{t+k}), is

Var(e_{t+k}) = s² + Σ_j Σ_{l=1}^{L} b²_{j,l} Var(e_{j, t+k−l}) (24)

where Var(e_{j,s}) is the variance of the forecasting error in the series j at time s. Then the corresponding 100(1 − α)% approximated forecasting interval of ŷ_{t+k} can be expressed as

ŷ_{t+k} ± t_{df, 1−α/2} · sqrt(Var(e_{t+k})) (25)
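Under the independence assumption, Equation (24) is a simple recursion over the predictors' own forecasting variances. A minimal sketch, with flattened coefficient and variance lists as illustrative inputs (variance 0 stands for a still-observed value):

```python
def forecast_variance(b_lags, child_vars, s2):
    """Approximated multi-step forecasting variance: own innovation variance
    plus squared-coefficient-weighted variances of the predictors' errors."""
    v = s2
    for b, var_pred in zip(b_lags, child_vars):
        v += b * b * var_pred
    return v
```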
4. Scenario analysis
Scenario analysis refers to a capability of TCM to “play-out” the repercussions of artificially setting the value of a
time series. A scenario is the set of forecasts that are generated by substituting the values of a root time series by a
vector of substitute values, as illustrated in Figure 1.
Figure 1: Causal graph of a root time series and the specification of the vector of substitute values
During scenario analysis, we specify the targets that we want to analyze as a response to changes in the values of the root series ("a" in Figure 1), along with the time window. In Figure 1, we are interested in the behavior of time series "c", "d", "g", "h", and "j" only. The rest of the time series are ignored. The figure also depicts the vector of values for "a" that should be used instead of the observed or predicted values of "a". The specified time points mark the beginning and end of the replacement values for the root series, the current time, and the farthest time for analysis, respectively.
The partial Granger causal graph of time series "a" is shown in Figure 1. That is, "a" is the parent of itself, "b", "c", and "d". Similarly, it is the grand-parent of "e", "f", "g", "h", "i", and "j". Further descendants are possible, but only two generations suffice for the sake of explanation. Figure 1 also displays the specification of the vector that contains the replacement values of the root series. In the example shown in the figure, this vector starts before the current time and ends at a time in the future. We are also given the last time point for which we want to perform scenario analysis on the target variables. Finally, we are given a set of time series for which the scenario predictions are carried out. In the figure, these are "c", "d", "g", "h", and "j", which are marked with a thick red border. Since "b" is required to model "g", "b" is marked with a thick blue border to signify that it is an induced target. Given this information, the goal of scenario analysis is to forecast the values of the target time series ("c", "d", "g", "h", and "j") up to the farthest time for analysis, based on the values of the root time series.
Notice that we have to predict values of targets up to a time that can be at, before, or after the current time. When the analysis extends past the current time, we need to compute the values of the predictors of the target time series at the intervening time points before predicting the values of the target time series; similarly, we need to compute the values of the predictors' predictors one step further back. That is, the further the analysis window extends, the more ancestor values must be computed first.
Figure 2: Scenarios with and without predicting future values
The left-hand panel in Figure 2 depicts a scenario where the values of ancestors of targets of interest also have to be predicted. In this particular case, the analysis window extends beyond the current time, and therefore it is necessary to predict the values of the predictors of the targets at the future time points, and the values of the predictors' predictors one step earlier. The right-hand panel depicts a scenario where the entire period of prediction is earlier than the current time. In this case, all the values of the predictors and their ancestors are readily available.
Determining the substitute values
In the discussion above, we have neglected the issue of the substitute values for time series "a", which is the root time series. For purposes of scenario analysis, it is sufficient to consider that this vector is readily available. In a typical use case for scenario analysis, it will come from the values specified by the user's direct input, although its values could also come as input from a calling meta-process (as is the case with the use of scenario analysis as a sub-procedure in root cause analysis, as shown in Section 6).
Caveat on scenario analysis
It is possible to carry out scenario analysis for a time period that is entirely in the future. However, forecasting errors in the remaining predictors may make such scenario analysis inherently low-precision. That is, the precision of scenario analysis decreases as the replacement period moves further beyond the current time.
4.1 SA, the scenario analysis algorithm
Input:
The inputs to SA are: (1) the root time series; (2) the vector of replacement values for the root time series; (3) the beginning and end times for the modified values of the root series, the current time, and the last time point for which target values need to be predicted; (4) a set of descendant target time series of interest along with their relation to the root (which may be input as the Granger causal graph). Notice that the length of the replacement vector matches the length of the replacement period. Furthermore, it is erroneous to specify a target of which the root is not an ancestor.
Output:
For each target, we output a vector containing the values that pertain to the scenario analysis of that time series and the corresponding confidence intervals (when the predictions fall within the observed data) or approximated forecasting intervals (when they extend beyond it). Please note that the time period for the children series starts one step after the beginning of the replacement period, for the grand-children series two steps after, etc.
Preparation:
To prepare for SA, we first calculate the closure on the set of targets that need to be predicted, which is determined by the relationship between the root and each of the targets. Essentially, the closure is computed by iteratively looking at the path from each target to the root and adding all those intermediate nodes that are ancestors of the target and are also descendants of the root. In the example shown in Figure 1, the time series "b" is itself not of primary interest, but since it is
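The closure computation described above amounts to keeping every series that is both a descendant of the root and an ancestor of some target. A sketch over a child-adjacency dictionary (all names are illustrative):

```python
def descendants(graph, node):
    """All nodes reachable from `node` along child edges of the causal graph."""
    seen, stack = set(), [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def closure(graph, root, targets):
    """Targets plus every intermediate series that is a descendant of the
    root and an ancestor of some target (e.g. "b" in Figure 1)."""
    full = set(targets)
    for node in descendants(graph, root):
        if node not in full and descendants(graph, node) & set(targets):
            full.add(node)
    return full
```

On the Figure 1 example, the induced target "b" is added because it is a descendant of "a" and a parent of the target "g".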
Next, we compute the set of models that need to be included in order to perform scenario analysis. Obviously, this set contains the models for each of the series in the closure; however, depending on the time span of the scenario analysis, additional models of some time series might have to be brought in (see Figure 2). Basically, depending on how far ahead the end of the analysis window is from the current time, we may need to compute the values of the ancestors (other than the root) of the targets of interest at future time points. That is, an additional set (which may be empty) contains all series that are needed for scenario analysis and are not descendants of the root.
At the end of the preparation phase we have the closure and the set of models, which allows us to predict all the time series of interest.
Computation:
The computation in scenario analysis is exactly that of scoring the values of a set of time series (see Section 3). For each target in the closure, we have a range of time points for which we need to fit/forecast values. For example, for immediate children of the root ("c", "d", and the induced child "b" in Figure 1), this range starts one step after the beginning of the replacement period. Similarly, for grand-children ("g", "h", and "j" in Figure 1), this range starts two steps after. Using the models and the substituted values for the root, this task can be carried out.
5. Outlier detection
One of the advantages of building TCM models is the ability to detect model-based outliers. Outliers can be defined in several ways. For now, we shall define an outlier in a time series to be a value that strays too far from its expected (fitted) value based on the TCM models. The detection process is based on the normal distribution assumption for the series. Consider the value of a time series at time t. Let y_t and ŷ_t be the observed and expected values of the series at time t, respectively, and s² be the variance of the series from the TCM model (based on residuals). Given these inputs, we call y_t an outlier if the likelihood of y_t, when modeled as a normal random variable with mean ŷ_t and variance s², is below a particular threshold.
Input:
The inputs to OD (outlier detection) are: (1) the observed series values; (2) the expected (fitted) values; (3) the model variance s²; (4) the outlier threshold value (the default is 0.95).
Computation:
a) Under the assumption that the observed value y_t is a normal random variable with mean ŷ_t and variance s², compute the square score at time t as

z²_t = (y_t − ŷ_t)² / s² (26)

b) Compute the outlier probability as

p_t = P(χ²₁ ≤ z²_t) (27)

where χ²₁ is a random variable with a chi-squared distribution with 1 degree of freedom.
c) Flag y_t as an outlier if p_t exceeds the outlier threshold.
Output:
The output of OD for a series is the set of time points with their corresponding outlier probabilities.
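Steps a) through c) reduce to a square score and a chi-square(1) probability, which can be written with the error function alone (P(χ²₁ ≤ x) = erf(√(x/2))). The helper names below are illustrative.

```python
import math

def outlier_probability(observed, expected, variance):
    """Equations (26)-(27): square score z^2 = (y - yhat)^2 / s^2 and the
    chi-square(1) probability P(X <= z^2), via the error function."""
    z2 = (observed - expected) ** 2 / variance
    return math.erf(math.sqrt(z2 / 2.0))

def is_outlier(observed, expected, variance, threshold=0.95):
    """Flag the point if its outlier probability exceeds the threshold."""
    return outlier_probability(observed, expected, variance) > threshold
```

With the default threshold of 0.95, a point roughly two standard deviations from its fit (probability ≈ 0.9545) is flagged, while one standard deviation (≈ 0.6827) is not.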
6. Outlier root cause analysis
In Section 5, we saw how to detect outliers. The next logical step is to find the likely causes for a time series whose value has been flagged as an outlier. Outlier root cause analysis refers to the capability to explore the Granger causal graph in order to analyze the key/root values that resulted in the outlier under question. To formalize this notion, consider a time series y whose observed value at time t (that is, y_t) has been flagged as an outlier due to its abnormal deviation from its expected value ŷ_t. The goal of outlier root cause analysis (ORCA) is to output the set of time series that can be considered as root causes of the anomalous value of y_t. The idea is that setting the values of time series in the predictor set to their normal/expected values, instead of their observed values, will bring the outlying y_t back to normal. The normal value of a predictor x is unknown, so we specify it with the expected value of x at time t as predicted by x's univariate model, which is an AR(L) model, and denoted as x̂_t.
The result of ORCA maximizes the following objective function subject to a constraint:

max over ancestors x of y: (y_t − ŷ_t)² − (y_t − ŷ_t^{(x)})² (28)

where the ancestors of y are taken according to the Granger causal graph. The quantity ŷ_t^{(x)} should be interpreted as the likely predicted value of y at time t had the value of its ancestor x been set to its expected value x̂_t. We see that Equation (28) is made up of two parts: (1) the portion (y_t − ŷ_t)², which is the degree of "outlier-ness" of y at t as predicted by the "Granger model", where the outlier-ness is judged based on what is expected from the history of y; (2) the portion (y_t − ŷ_t^{(x)})², which is the degree of "outlier-ness" of y at t as predicted by the "Granger model", if x was corrected. In other words, Equation (28) amounts to replacing the observed value x_t by its "expected" value x̂_t, given by a simpler, univariate model. Therefore, Equation (28) expresses the reduction in the degree of outlier-ness in y brought about by correcting x.
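Reading the two outlier-ness portions as squared deviations (an assumption of this sketch), the objective for one candidate ancestor is a simple difference; the helper name is illustrative.

```python
def orca_score(y_obs, y_pred, y_pred_corrected):
    """Equation (28) as described: outlier-ness of the observed value under
    the Granger model, minus the outlier-ness after correcting one ancestor
    to its AR-expected value. Larger scores mean stronger root-cause evidence."""
    return (y_obs - y_pred) ** 2 - (y_obs - y_pred_corrected) ** 2
```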
6.1 ORCA, the outlier root cause analysis algorithm
Input:
The inputs to ORCA are: (1) the anomalous time series y; (2) the time t at which the anomaly was detected; (3) the anomalous value y_t; (4) the expected value ŷ_t; (5) the oldest generation of ancestors to search based on the Granger causal graph.
Output:
ORCA outputs the set of root causes of the anomaly in y, where each member of the set maximizes the objective function in Equation (28) by the same amount.
Preparation:
To prepare for ORCA, we first compute the set of ancestors that need to be examined as the potential root causes of the anomaly in y.
Figure 3: Outlier root cause analysis for a time series
In the example shown in Figure 3, assuming that the anomalous series is "a" and two generations of ancestors are searched, the candidate set is {"b", "c", "d", "e", "f", "g", "h", "i", "j"}. This set can be computed by performing a reverse breadth-first search from "a" down the specified number of levels.
Second, each potential root cause is prepped for scenario analysis by computing the vector of substitute values to be used during scenario analysis. Note that the length of this substitute vector is L, the lag. For example, consider the substitute values for time series "b" in Figure 3. As "b" is a parent of "a", we need to compute the fits of "b" for the L time points up to and including time t − 1. On the other hand, as "g" is a grand-parent of "a", its substitute vector contains the fits for "g" for the L time points up to and including time t − 2 (see Section 3.1 for computation of fits). Please note that this approach assumes that any anomalies are purely in "b" (the parent series) or "g" (the grandparent series). In particular, it is assumed that anomalies in "b" are not caused by values in the grandparent series, including anomalous values in the grandparent series.
Third, for each potential root cause, scenario analysis is carried out (see Section 4) using the substitute values computed in the previous step. For the example in Figure 3, scenario analysis is called for series "b" with those substitute values, and the result of scenario analysis is the predicted value of "a" at time t.
Computation:
The process of ORCA is as follows:
 Initialize the set of potential root causes to the empty set, and initialize the maximum objective function value to 0.
 For each series in the candidate set, compute the value of the objective function in Equation (28).
 If this value exceeds the current maximum, set the maximum to this value and make this series the sole member of the root cause set; if it equals the current maximum, add the series to the set.
References
[1]. Arnold, A., Liu, Y., and Abe, N. (2007). Temporal causal modeling with graphical Granger methods. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07, pages 66–75, New York, NY, USA. ACM.
[2]. Darema, F., George, D. A., Norton, V. A., and Pfister, G. F. (1988). A single-program-multiple-data computational model for EPEX/FORTRAN. Parallel Computing, 7(1):11–24.
[3]. Dean, J. and Ghemawat, S. (2008). MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113.
[4]. Duchi, J., Gould, S., and Koller, D. (2008). Projected subgradient methods for learning sparse Gaussians. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI).
[5]. Friedman, J., Hastie, T., and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441.
[6]. Granger, C. W. J. (1980). Testing for causality: A personal viewpoint. Journal of Economic Dynamics and Control, 2(1):329–352.
[7]. Hsieh, C.-J., Sustik, M. A., Dhillon, I. S., and Ravikumar, P. (2011). Sparse inverse covariance matrix estimation using quadratic approximation. In Advances in Neural Information Processing Systems 24, pages 2330–2338.
[8]. Kambadur, P. and Lozano, A. C. (2013). A parallel, block greedy method for sparse inverse covariance estimation for ultra-high dimensions. In Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS).
[9]. Li, L. and Toh, K.-C. (2010). An inexact interior point method for L1-regularized sparse covariance selection. Technical report, National University of Singapore.
[10]. Lozano, A. C., Abe, N., Liu, Y., and Rosset, S. (2009). Grouped graphical Granger modeling methods for temporal causal modeling. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '09, pages 577–586, New York, NY, USA. ACM.
[11]. Lozano, A. C., Swirszcz, G., and Abe, N. (2011). Group orthogonal matching pursuit for logistic regression. Journal of Machine Learning Research - Proceedings Track, 15:452–460.
[12]. MPI Forum (1995). Message Passing Interface. http://www.mpi-forum.org/.
[13]. MPI Forum (1997). Message Passing Interface-2. http://www.mpi-forum.org/.
[14]. Banerjee, O., El Ghaoui, L., and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516.
[15]. Scheinberg, K., Ma, S., and Goldfarb, D. (2010). Sparse inverse covariance selection via alternating linearization methods. CoRR, abs/1011.0097.
[16]. Scheinberg, K. and Rish, I. (2010). Learning sparse Gaussian Markov networks using a greedy coordinate ascent approach. In Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part III, ECML PKDD'10, pages 196–212, Berlin, Heidelberg. Springer-Verlag.
[17]. Strang, G. (1993). Introduction to Linear Algebra. Wellesley-Cambridge Press.
Time Series Algorithms
The Time Series node builds univariate exponential smoothing, ARIMA (Autoregressive
Integrated Moving Average), and transfer function (TF) models for time series, and produces
forecasts. The procedure includes an Expert Modeler that identifies and estimates an appropriate
model for each dependent variable series. Alternatively, you can specify a custom model.
This algorithm was designed with help from Professor Ruey Tsay at The University of Chicago.
Notation
The following notation is used throughout this chapter unless otherwise stated:
\(Y_t\) (t = 1, 2, ..., n)   Univariate time series under investigation.
n   Total number of observations.
\(\hat{Y}_t(k)\)   Model-estimated k-step ahead forecast at time t for series Y.
S   The seasonal length.
Models
The Time Series node estimates exponential smoothing models and ARIMA/TF models.
Exponential Smoothing Models
The following notation is specific to exponential smoothing models:
\(\alpha\)   Level smoothing weight
\(\gamma\)   Trend smoothing weight
\(\varphi\)   Damped trend smoothing weight
\(\delta\)   Season smoothing weight
Simple Exponential Smoothing
Simple exponential smoothing has a single level parameter and can be described by the following equations:

\[ L_t = \alpha Y_t + (1-\alpha)L_{t-1} \]
\[ \hat{Y}_t(k) = L_t \]

It is functionally equivalent to an ARIMA(0,1,1) process.
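As a concrete illustration, the level recursion can be sketched in Python (the function name and the explicit initial level are our assumptions; the actual procedure back-casts its initial state and optimizes the smoothing weight, as described later in this chapter):

```python
def simple_exp_smoothing(y, alpha, level0):
    """One pass of simple exponential smoothing.

    L_t = alpha * y_t + (1 - alpha) * L_{t-1}; the k-step forecast made
    at time t is just the current level L_t.
    """
    level = level0
    one_step_forecasts = []
    for value in y:
        one_step_forecasts.append(level)  # forecast made before seeing value
        level = alpha * value + (1 - alpha) * level
    return level, one_step_forecasts

# With alpha = 1 the level simply tracks the last observation.
final_level, preds = simple_exp_smoothing([3.0, 5.0, 4.0], alpha=1.0, level0=0.0)
# final_level == 4.0
```

With a smaller weight, the level is a damped average of past observations; the smoothing weights are chosen by minimizing the sum of squared one-step-ahead prediction errors.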
© Copyright IBM Corporation 1994, 2015.
Brown’s Exponential Smoothing
Brown’s exponential smoothing has level and trend parameters and can be described by the following equations:

\[ L_t = \alpha Y_t + (1-\alpha)L_{t-1} \]
\[ T_t = \alpha (L_t - L_{t-1}) + (1-\alpha)T_{t-1} \]
\[ \hat{Y}_t(k) = L_t + \left( (k-1) + \tfrac{1}{\alpha} \right) T_t \]

It is functionally equivalent to an ARIMA(0,2,2) with restriction among MA parameters.
Holt’s Exponential Smoothing
Holt’s exponential smoothing has level and trend parameters and can be described by the following equations:

\[ L_t = \alpha Y_t + (1-\alpha)(L_{t-1} + T_{t-1}) \]
\[ T_t = \gamma (L_t - L_{t-1}) + (1-\gamma)T_{t-1} \]
\[ \hat{Y}_t(k) = L_t + k T_t \]

It is functionally equivalent to an ARIMA(0,2,2).
Damped-Trend Exponential Smoothing
Damped-trend exponential smoothing has level, trend, and damped trend parameters and can be described by the following equations:

\[ L_t = \alpha Y_t + (1-\alpha)(L_{t-1} + \varphi T_{t-1}) \]
\[ T_t = \gamma (L_t - L_{t-1}) + (1-\gamma)\varphi T_{t-1} \]
\[ \hat{Y}_t(k) = L_t + \sum_{i=1}^{k} \varphi^{i} T_t \]

It is functionally equivalent to an ARIMA(1,1,2).
Simple Seasonal Exponential Smoothing
Simple seasonal exponential smoothing has level and season parameters and can be described by the following equations:

\[ L_t = \alpha (Y_t - S_{t-s}) + (1-\alpha)L_{t-1} \]
\[ S_t = \delta (Y_t - L_t) + (1-\delta)S_{t-s} \]
\[ \hat{Y}_t(k) = L_t + S_{t+k-s} \]

It is functionally equivalent to an ARIMA(0,1,(1,s,s+1))(0,1,0) with restrictions among MA parameters.
Winters’ Additive Exponential Smoothing
Winters’ additive exponential smoothing has level, trend, and season parameters and can be described by the following equations:

\[ L_t = \alpha (Y_t - S_{t-s}) + (1-\alpha)(L_{t-1} + T_{t-1}) \]
\[ T_t = \gamma (L_t - L_{t-1}) + (1-\gamma)T_{t-1} \]
\[ S_t = \delta (Y_t - L_t) + (1-\delta)S_{t-s} \]
\[ \hat{Y}_t(k) = L_t + k T_t + S_{t+k-s} \]

It is functionally equivalent to an ARIMA(0,1,s+1)(0,1,0) with restrictions among MA parameters.
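One Winters' additive recursion and its k-step forecast can be sketched as follows (the function names and the list-based layout of the seasonal states are our assumptions; the actual procedure optimizes the weights and back-casts its initial states):

```python
def winters_additive_update(y_t, level, trend, seasonal, alpha, gamma, delta):
    """One recursion of Winters' additive smoothing.

    `seasonal` holds the last s seasonal states; seasonal[0] is S_{t-s}.
    Returns the updated level, trend, and rotated seasonal-state list.
    """
    new_level = alpha * (y_t - seasonal[0]) + (1 - alpha) * (level + trend)
    new_trend = gamma * (new_level - level) + (1 - gamma) * trend
    new_season = delta * (y_t - new_level) + (1 - delta) * seasonal[0]
    return new_level, new_trend, seasonal[1:] + [new_season]

def winters_additive_forecast(level, trend, seasonal, k):
    # k-step-ahead forecast: L_t + k*T_t + S_{t+k-s}
    s = len(seasonal)
    return level + k * trend + seasonal[(k - 1) % s]
```

For example, with s = 2, level 10, trend 1, seasonal states [2, −2], and all weights 0.5, observing 15 updates the level to 12.0 and the trend to 1.5.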
Winters’ Multiplicative Exponential Smoothing
Winters’ multiplicative exponential smoothing has level, trend, and season parameters and can be described by the following equations:

\[ L_t = \alpha (Y_t / S_{t-s}) + (1-\alpha)(L_{t-1} + T_{t-1}) \]
\[ T_t = \gamma (L_t - L_{t-1}) + (1-\gamma)T_{t-1} \]
\[ S_t = \delta (Y_t / L_t) + (1-\delta)S_{t-s} \]
\[ \hat{Y}_t(k) = (L_t + k T_t)\,S_{t+k-s} \]

There is no equivalent ARIMA model.
Estimation and Forecasting of Exponential Smoothing
The sum of squares of the one-step ahead prediction errors, \(\sum_{t=1}^{n}\bigl(Y_t-\hat{Y}_{t-1}(1)\bigr)^2\), is minimized to optimize the smoothing weights.
Initialization of Exponential Smoothing
Let L denote the level, T the trend, and S, a vector of length s, denote the seasonal states. The initial smoothing states are made by back-casting from t=n to t=0. Initialization for back-casting is described here.
For all the models
.
For all non-seasonal models with trend, T is the negative of the slope of the line (with intercept)
fitted to the data with time as a regressor.
For the simple seasonal model, the elements of S are seasonal averages minus the sample mean;
for example, for monthly data the element corresponding to January will be average of all January
values in the sample minus the sample mean.
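The seasonal-average initialization just described can be sketched directly (the helper name is our assumption; the procedure applies these states within its back-casting scheme):

```python
def initial_seasonal_states(y, s):
    """Initial seasonal states for the simple seasonal model:
    per-season averages minus the overall sample mean."""
    overall_mean = sum(y) / len(y)
    states = []
    for season in range(s):
        vals = [y[i] for i in range(season, len(y), s)]
        states.append(sum(vals) / len(vals) - overall_mean)
    return states
```

For monthly data (s = 12), the element corresponding to January is the average of all January values minus the sample mean, as stated above.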
For the additive Winters’ model, fit a regression of the data on time t and s seasonal dummies; note that the model does not have an intercept. The initial trend and seasonal states are then derived from the fitted coefficients.
For the multiplicative Winters’ model, fit a separate line (with intercept) for each season, with time as a regressor. Suppose \(a\) is the vector of intercepts and \(b\) is the vector of slopes (these vectors will be of length s); the initial smoothing states are then derived from \(a\) and \(b\).
ARIMA and Transfer Function Models
The following notation is specific to ARIMA/TF models:
\(a_t\) (t = 1, 2, ..., n)   White noise series normally distributed with mean zero and variance \(\sigma_a^2\)
p   Order of the non-seasonal autoregressive part of the model
q   Order of the non-seasonal moving average part of the model
d   Order of the non-seasonal differencing
P   Order of the seasonal autoregressive part of the model
Q   Order of the seasonal moving-average part of the model
D   Order of the seasonal differencing
s   Seasonality or period of the model
\(\varphi_p(B)\)   AR polynomial of B of order p, \(\varphi_p(B) = 1-\varphi_1 B-\cdots-\varphi_p B^{p}\)
\(\theta_q(B)\)   MA polynomial of B of order q, \(\theta_q(B) = 1-\theta_1 B-\cdots-\theta_q B^{q}\)
\(\Phi_P(B^s)\)   Seasonal AR polynomial of \(B^s\) of order P, \(\Phi_P(B^s) = 1-\Phi_1 B^{s}-\cdots-\Phi_P B^{Ps}\)
\(\Theta_Q(B^s)\)   Seasonal MA polynomial of \(B^s\) of order Q, \(\Theta_Q(B^s) = 1-\Theta_1 B^{s}-\cdots-\Theta_Q B^{Qs}\)
\(\Delta\)   Differencing operator, \(\Delta = 1 - B\)
B   Backward shift operator, with \(BY_t = Y_{t-1}\) and \(Ba_t = a_{t-1}\)
\(V(k)\)   Prediction variance of \(\hat{Y}_t(k)\)
\(V_N(k)\)   Prediction variance of the noise forecasts
Transfer function (TF) models form a very large class of models, which include univariate ARIMA models as a special case. Suppose \(Y_t\) is the dependent series and, optionally, \(X_{1t},\ldots,X_{mt}\) are to be used as predictor series in this model. A TF model describing the relationship between the dependent and predictor series has the following form:

\[ f(Y_t) = \mu + \sum_{i=1}^{m}\frac{\mathrm{Num}_i(B)}{\mathrm{Den}_i(B)}\,B^{b_i} f_i(X_{it}) + \frac{\theta_q(B)\Theta_Q(B^s)}{\varphi_p(B)\Phi_P(B^s)\,\Delta^d\Delta_s^D}\,a_t \]

The univariate ARIMA model simply drops the predictors from the TF model; thus, it has the following form:

\[ f(Y_t) = \mu + \frac{\theta_q(B)\Theta_Q(B^s)}{\varphi_p(B)\Phi_P(B^s)\,\Delta^d\Delta_s^D}\,a_t \]
The main features of this model are:
 An initial transformation of the dependent and predictor series, f and fi. This transformation is optional and is applicable only when the dependent series values are positive. Allowed transformations are log and square root. These transformations are sometimes called variance-stabilizing transformations.
 A constant term \(\mu\).
 The unobserved i.i.d., zero mean, Gaussian error process \(a_t\) with variance \(\sigma^2\).
 The moving average lag polynomial MA \(= \theta_q(B)\Theta_Q(B^s)\) and the auto-regressive lag polynomial AR \(= \varphi_p(B)\Phi_P(B^s)\).
 The difference/lag operators \(\Delta^d\) and \(\Delta_s^D\).
 A delay term \(B^{b_i}\), where \(b_i\) is the order of the delay.
 Predictors are assumed given. Their numerator and denominator lag polynomials are of the form \(\mathrm{Num}_i(B)\) and \(\mathrm{Den}_i(B)\).
 The “noise” series \(N_t\), which is assumed to be a mean zero, stationary ARMA process.
Interventions and Events
Interventions and events are handled like any other predictor; typically they are coded as 0/1
variables, but note that a given intervention variable’s exact effect upon the model is determined
by the transfer function in front of it.
Estimation and Forecasting of ARIMA/TF
There are two forecasting algorithms available: Conditional Least Squares (CLS) and Exact Least
Squares (ELS) or Unconditional Least Squares forecasting (ULS). These two algorithms differ in
only one aspect: they forecast the noise process differently. The general steps in the forecasting
computations are as follows:
1. Computation of the noise process \(N_t\) through the historical period.
2. Forecasting the noise process \(N_t\) up to the forecast horizon. This is one-step-ahead forecasting during the historical period and multi-step-ahead forecasting after that. The differences between the CLS and ELS forecasting methodologies surface in this step. The prediction variances of the noise forecasts are also computed in this step.
3. Final forecasts are obtained by first adding back to the noise forecasts the contributions of the
constant term and the transfer function inputs and then integrating and back-transforming the
result. The prediction variances of noise forecasts also may have to be processed to obtain the
final prediction variances.
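To make the conditional computation in step 1 concrete, here is a sketch for an ARMA(1,1) noise process, using the conditional assumption that presample values are zero (the function names are our assumptions; the actual implementation handles general orders, differencing, and transfer function inputs):

```python
def cls_residuals_arma11(z, phi, theta):
    """Conditional residuals of an ARMA(1,1) process
    (1 - phi*B) z_t = (1 - theta*B) a_t,
    assuming z_t = a_t = 0 for t < 0 (the CLS convention)."""
    residuals = []
    z_prev = 0.0
    a_prev = 0.0
    for z_t in z:
        a_t = z_t - phi * z_prev + theta * a_prev
        residuals.append(a_t)
        z_prev, a_prev = z_t, a_t
    return residuals

def cls_sse(z, phi, theta):
    # CLS chooses (phi, theta) by minimizing this sum of squared residuals
    return sum(a * a for a in cls_residuals_arma11(z, phi, theta))
```

A numerical optimizer (the procedure uses a modified Levenberg-Marquardt algorithm, described below) would minimize `cls_sse` over the admissible parameter region.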
Let \(\hat{Y}_t(k)\) and \(V(k)\) be the k-step forecast and forecast variance, respectively.

Conditional Least Squares (CLS) Method

Minimize \(\sum_t a_t^2\), assuming \(a_t = 0\) for t < 0, where \(a_t = \sum_{j \ge 0} \pi_j Y_{t-j}\) and the \(\pi_j\) are the coefficients of the power series expansion of \(\mathrm{AR}(B)\Delta^d\Delta_s^D/\mathrm{MA}(B)\). Missing values are imputed with forecast values of \(Y_t\).

Maximum Likelihood (ML) Method (Brockwell and Davis, 1991)
Maximize the likelihood of \(Y_t\); the one-step ahead forecast and the one-step ahead forecast variance enter the likelihood through the prediction error decomposition. When missing values are present, a Kalman filter is used to calculate the likelihood.
Error Variance

\[ \hat{\sigma}^2 = \frac{\mathrm{SSE}}{n-k} \]

in both methods. Here n is the number of non-zero residuals and k is the number of parameters (excluding the error variance).
Initialization of ARIMA/TF
A slightly modified Levenberg-Marquardt algorithm is used to optimize the objective function.
The modification takes into account the “admissibility” constraints on the parameters. The
admissibility constraint requires that the roots of AR and MA polynomials be outside the unit circle
and the sum of denominator polynomial parameters be non-zero for each predictor variable. The
minimization algorithm requires a starting value to begin its iterative search. All the numerator and
denominator polynomial parameters are initialized to zero except the coefficient of the 0th power
in the numerator polynomial, which is initialized to the corresponding regression coefficient.
The ARMA parameters are initialized as follows:
Assume that the series \(Y_t\) follows an ARMA(p,q)(P,Q) model with mean 0. In the following, \(\gamma_l\) and \(\rho_l\) represent the lth lag autocovariance and autocorrelation of \(Y_t\), respectively, and \(\hat{\gamma}_l\) and \(\hat{\rho}_l\) represent their estimates.
Non-Seasonal AR Parameters
For AR parameter initial values, the estimation method is the same as that in Appendix A6.2 of Box, Jenkins, and Reinsel (1994). Denote the estimates as \(\hat{\varphi}_1,\ldots,\hat{\varphi}_p\).
Non-Seasonal MA Parameters
Assuming that an AR(p+q) model can approximate \(Y_t\), its AR parameters are estimated as above and denoted \(\hat{\alpha}_1,\ldots,\hat{\alpha}_{p+q}\). From these, the cross covariances between the series and the error process can be estimated, and the error variance is approximated accordingly. The initial MA parameters \(\hat{\theta}_1,\ldots,\hat{\theta}_q\) are then approximated from the estimated cross covariances and error variance. In this procedure, only \(\hat{\rho}_1,\ldots,\hat{\rho}_{p+q}\) are used and all other parameters are set to 0.
Seasonal parameters
For seasonal AR and MA components, the autocorrelations at the seasonal lags in the above
equations are used.
Calculation of the Transfer Function
The transfer function needs to be calculated for each predictor series. For the predictor series
let the transfer function be:
,
387
Time Series Algorithms
It can be calculated as follows:
1. Calculate
2. Recursively calculate
where
and
are the coefficients of
in the polynomials
and
respectively. Likewise, the summation limits
and
are the maximum degree of
and
respectively.
the polynomials
All missing
are taken to be
in the first term of
are taken to be
and missing
, where
is the first non-missing measurement of
where
and
are the
and
in
in the second term
.
is given by
polynomials evaluated at
.
Diagnostic Statistics
ARIMA/TF diagnostic statistics are based on residuals of the noise process,
.
Ljung-Box Statistic
\[ Q(K) = n(n+2)\sum_{k=1}^{K}\frac{r_k^2}{n-k} \]

where \(r_k\) is the kth lag ACF of the residuals. Q(K) is approximately distributed as \(\chi^2(K-m)\), where m is the number of parameters other than the constant term and predictor-related parameters.
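The statistic is a direct computation once the residual autocorrelations \(r_1,\ldots,r_K\) are available; a minimal sketch (the function name is our assumption):

```python
def ljung_box(residual_acf, n):
    """Ljung-Box statistic Q(K) = n(n+2) * sum_{k=1..K} r_k^2 / (n - k),
    where residual_acf = [r_1, ..., r_K] and n is the series length."""
    return n * (n + 2) * sum(
        r * r / (n - k) for k, r in enumerate(residual_acf, start=1)
    )
```

The resulting Q(K) is compared against chi-square critical values with K − m degrees of freedom.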
Outlier Detection in Time Series Analysis
The observed series may be contaminated by so-called outliers. These outliers may change the
mean level of the uncontaminated series. The purpose of outlier detection is to determine whether there are outliers and, if so, their locations, types, and magnitudes.
The Time Series node considers seven types of outliers. They are additive outliers (AO),
innovational outliers (IO), level shift (LS), temporary (or transient) change (TC), seasonal additive
(SA), local trend (LT), and AO patch (AOP).
Notation
The following notation is specific to outlier detection:
U(t)   The uncontaminated, outlier-free series. It is assumed to follow a univariate ARIMA or transfer function model.
Definitions of Outliers
Types of outliers are defined separately here. In practice any combination of these types can
occur in the series under study.
AO (Additive Outliers)

Assuming that an AO outlier occurs at time t=T, the observed series can be represented as

\[ Y(t) = U(t) + w\,I_T(t) \]

where \(I_T(t)\) is a pulse function (1 at t=T and 0 otherwise) and w is the deviation from the true U(T) caused by the outlier.

IO (Innovational Outliers)

Assuming that an IO outlier occurs at time t=T, then

\[ Y(t) = U(t) + \frac{\mathrm{MA}(B)}{\mathrm{AR}(B)\,\Delta^d\Delta_s^D}\,w\,I_T(t) \]

LS (Level Shift)

Assuming that a LS outlier occurs at time t=T, then

\[ Y(t) = U(t) + w\,S_T(t) \]

where \(S_T(t)\) is a step function (0 for t < T and 1 for t ≥ T).

TC (Temporary/Transient Change)

Assuming that a TC outlier occurs at time t=T, then

\[ Y(t) = U(t) + w\,D_T(t) \]

where \(D_T(t) = \delta^{\,t-T}\) for t ≥ T and 0 otherwise, with \(0<\delta<1\), is a damping function.

SA (Seasonal Additive)

Assuming that a SA outlier occurs at time t=T, then

\[ Y(t) = U(t) + w\,SS_T(t) \]

where \(SS_T(t)\) is a step seasonal pulse function (1 at t = T, T+s, T+2s, ..., and 0 otherwise).

LT (Local Trend)

Assuming that a LT outlier occurs at time t=T, then

\[ Y(t) = U(t) + w\,T_T(t) \]

where \(T_T(t)\) is a local trend function (\(t-T+1\) for t ≥ T and 0 otherwise).

AOP (AO patch)

An AO patch is a group of two or more consecutive AO outliers. An AO patch can be described by its starting time and length. Assuming that there is a patch of AO outliers of length k at time t=T, the observed series can be represented as

\[ Y(t) = U(t) + \sum_{i=1}^{k} w_i\,I_{T+i-1}(t) \]

Due to a masking effect, a patch of AO outliers is very difficult to detect when searching for outliers one by one. This is why the AO patch is considered as a separate type from individual AO. For type AO patch, the procedure searches for the whole patch together.

Summary

For an outlier of type O at time t=T (except AO patch):

\[ Y(t) = U(t) + w\,L_O(B)\,I_T(t) \]

where \(L_O(B)\) is the lag polynomial associated with outlier type O. A general model for incorporating outliers can thus be written as follows:

\[ Y(t) = U(t) + \sum_{j=1}^{M} w_j\,L_{O_j}(B)\,I_{T_j}(t) \]

where M is the number of outliers.
Estimating the Effects of an Outlier
Suppose that the model and the model parameters are known. Also suppose that the type and
location of an outlier are known. Estimation of the magnitude of the outlier and test statistics
are as follows.
The results in this section are only used in the intermediate steps of outlier detection procedure.
The final estimates of outliers are from the model incorporating all the outliers in which all
parameters are jointly estimated.
Non-AO Patch Deterministic Outliers
For a deterministic outlier of any type at time T (except AO patch), let e(t) be the residual and \(x(t) = \pi(B)L_O(B)I_T(t)\), so:

\[ e(t) = w\,x(t) + a_t \]

From the residuals e(t), the parameters for outliers at time T are estimated by simple linear regression of e(t) on x(t). For j = 1 (AO), 2 (IO), 3 (LS), 4 (TC), 5 (SA), 6 (LT), define the test statistics:

\[ \lambda_j(T) = \hat{w}_j(T)\big/\sqrt{\mathrm{Var}\bigl(\hat{w}_j(T)\bigr)} \]

Under the null hypothesis of no outlier, \(\lambda_j(T)\) is distributed as N(0,1), assuming the model and model parameters are known.

AO Patch Outliers

For an AO patch of length k starting at time T, let \(x_i(t) = \pi(B)I_{T+i-1}(t)\) for i = 1 to k; then

\[ e(t) = \sum_{i=1}^{k} w_i\,x_i(t) + a_t \]

Multiple linear regression is used to fit this model. Test statistics are formed from the estimated \(\hat{w}_i\) and their covariance matrix. Assuming the model and model parameters are known, the resulting statistic has a chi-square distribution with k degrees of freedom under the null hypothesis of no outlier.
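The single-outlier regression step can be sketched as follows, assuming the model error variance is known (the names are our assumptions; in the real procedure, x(t) is derived from the fitted model's π weights and the outlier's lag polynomial):

```python
import math

def estimate_outlier_effect(e, x, sigma2):
    """Estimate the outlier magnitude w by regressing residuals e(t) on x(t)
    (regression through the origin), and form the N(0,1) test statistic.
    sigma2 is the model's error variance, treated as known here."""
    sxx = sum(v * v for v in x)
    w_hat = sum(ev * xv for ev, xv in zip(e, x)) / sxx
    var_w = sigma2 / sxx            # Var(w_hat) when sigma2 is known
    tau = w_hat / math.sqrt(var_w)  # compare against N(0,1) critical values
    return w_hat, tau
```

A value of the statistic far from zero (relative to standard normal critical values) flags a candidate outlier at time T.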
Detection of Outliers
The following flow chart demonstrates how automatic outlier detection works. Let M be the total
number of outliers and Nadj be the number of times the series is adjusted for outliers. At the
beginning of the procedure, M = 0 and Nadj = 0.
Figure 35-1
Goodness-of-Fit Statistics
Goodness-of-fit statistics are based on the original series Y(t). Let k= number of parameters in the
model, n = number of non-missing residuals.
Mean Squared Error

\[ \mathrm{MSE} = \frac{\sum_t \bigl(Y(t)-\hat{Y}(t)\bigr)^2}{n-k} \]

Mean Absolute Percent Error

\[ \mathrm{MAPE} = \frac{100}{n}\sum_t \left|\frac{Y(t)-\hat{Y}(t)}{Y(t)}\right| \]

Maximum Absolute Percent Error

\[ \mathrm{MaxAPE} = 100\,\max_t \left|\frac{Y(t)-\hat{Y}(t)}{Y(t)}\right| \]

Mean Absolute Error

\[ \mathrm{MAE} = \frac{1}{n}\sum_t \bigl|Y(t)-\hat{Y}(t)\bigr| \]

Maximum Absolute Error

\[ \mathrm{MaxAE} = \max_t \bigl|Y(t)-\hat{Y}(t)\bigr| \]

Normalized Bayesian Information Criterion

\[ \mathrm{Normalized\ BIC} = \ln(\mathrm{MSE}) + \frac{k\,\ln(n)}{n} \]

R-Squared

\[ R^2 = 1 - \frac{\sum_t \bigl(Y(t)-\hat{Y}(t)\bigr)^2}{\sum_t \bigl(Y(t)-\bar{Y}\bigr)^2} \]

Stationary R-Squared

A similar statistic was used by Harvey (Harvey, 1989):

\[ R_S^2 = 1 - \frac{\sum_t \bigl(Z(t)-\hat{Z}(t)\bigr)^2}{\sum_t \bigl(Z(t)-\bar{Z}\bigr)^2} \]

where Z(t) is the differenced transformed series and the sum is over the terms in which both \(Z(t)\) and \(\hat{Z}(t)\) are not missing. \(\bar{Z}\) is the simple mean model for the differenced transformed series, which is equivalent to the univariate baseline model ARIMA(0,d,0)(0,D,0).
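These fit statistics can be sketched directly from the actual and fitted values (a simplified version that assumes no missing values and nonzero actuals for the percentage errors; the function name is our assumption):

```python
import math

def fit_statistics(actual, fitted, k):
    """Goodness-of-fit statistics on the original series.
    k is the number of parameters in the model."""
    n = len(actual)
    errors = [a - f for a, f in zip(actual, fitted)]
    pct = [abs(e / a) for e, a in zip(errors, actual)]
    mse = sum(e * e for e in errors) / (n - k)
    return {
        "MSE": mse,
        "MAPE": 100.0 * sum(pct) / n,
        "MaxAPE": 100.0 * max(pct),
        "MAE": sum(abs(e) for e in errors) / n,
        "MaxAE": max(abs(e) for e in errors),
        "NormalizedBIC": math.log(mse) + k * math.log(n) / n,
    }
```

Note the (n − k) denominator in the MSE, versus the plain 1/n averages in the absolute-error statistics.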
For the exponential smoothing models currently under consideration, use the differencing orders corresponding to their equivalent ARIMA models, if one exists: d = 2 for the Brown and Holt models; d = 1 (and D = 1 for seasonal models) for the other models.
Note: Both the stationary and usual R-squared can be negative, with range \((-\infty, 1]\). A negative R-squared value means that the model under consideration is worse than the baseline model. Zero R-squared means that the model under consideration is as good or as bad as the baseline model. Positive R-squared means that the model under consideration is better than the baseline model.
Expert Modeling
Univariate Series
Users can let the Expert Modeler select a model for them from:
 All models (default).
 Exponential smoothing models only.
 ARIMA models only.
Exponential Smoothing Expert Model
Figure 35-2
ARIMA Expert Model
Figure 35-3
Note: If 10<n<3s, set s=1 to build a non-seasonal model.
All Models Expert Model
In this case, the Exponential Smoothing and ARIMA expert models are computed, and the model
with the smaller normalized BIC is chosen.
Note: For short series, n<max(20,3s), use Exponential Smoothing Expert Model on p. 393.
Multivariate Series
In the multivariate situation, users can let the Expert Modeler select a model for them from:
 All models (default). Note that if the multivariate expert ARIMA model drops all the predictors and ends up with a univariate expert ARIMA model, this univariate expert ARIMA model will be compared with expert exponential smoothing models as before and the Expert Modeler will decide which is the best overall model.
 ARIMA models only.
Transfer Function Expert Model
Figure 35-4
Note: For short series, n<max(20,3s), fit a univariate expert model.
Blank Handling
Generally, any missing values in the series data will be imputed in the Time Intervals node used
to prepare the data for time series modeling. If blanks remain in the series data submitted to
the modeling node, ARIMA models will attempt to impute values, as described in “Estimation
and Forecasting of ARIMA/TF” on p. 384.
Missing values for predictors will cause the field containing the missing values to be excluded from the time series model.
Generated Model/Scoring
Predictions or forecasts for Time Series models are intricately related to the modeling process
itself. Forecasting computations are described with the algorithm for the corresponding model
type. For information on forecasting in exponential smoothing models, see “Exponential
Smoothing Models” on p. 379. For information on forecasting in ARIMA models, see “Estimation
and Forecasting of ARIMA/TF” on p. 384.
Blank Handling
Blank handling for the generated model is very similar to that for the modeling node.
If any predictor has missing values within the forecast period, the procedure issues a warning
and forecasts as far as it can.
References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 1994. Time series analysis: Forecasting and
control, 3rd ed. Englewood Cliffs, N.J.: Prentice Hall.
Brockwell, P. J., and R. A. Davis. 1991. Time Series: Theory and Methods, 2nd ed. Springer-Verlag.
Gardner, E. S. 1985. Exponential smoothing: The state of the art. Journal of Forecasting, 4, 1–28.
Harvey, A. C. 1989. Forecasting, structural time series models and the Kalman filter. Cambridge:
Cambridge University Press.
Makridakis, S. G., S. C. Wheelwright, and R. J. Hyndman. 1997. Forecasting: Methods and applications, 3rd ed. New York: John Wiley and Sons.
Melard, G. 1984. A fast algorithm for the exact likelihood of autoregressive-moving average
models. Applied Statistics, 33:1, 104–119.
Pena, D., G. C. Tiao, and R. S. Tsay, eds. 2001. A course in time series analysis. New York:
John Wiley and Sons.
TwoStep Cluster Algorithms
Overview
The TwoStep cluster method is a scalable cluster analysis algorithm designed to handle very large data sets. It can handle both continuous and categorical variables or attributes. It requires only one data pass. It has two steps: 1) pre-cluster the cases (or records) into many small sub-clusters; 2) cluster the sub-clusters resulting from the pre-cluster step into the desired number of clusters. It can also automatically select the number of clusters.
Model Parameters
As the name implies, the TwoStep clustering algorithm involves two steps: Pre-clustering and
Clustering.
Pre-cluster
The pre-cluster step uses a sequential clustering approach. It scans the data records one by one and decides whether the current record should be merged with a previously formed cluster or should start a new cluster, based on the distance criterion (described below).
The procedure is implemented by constructing a modified cluster feature (CF) tree. The CF
tree consists of levels of nodes, and each node contains a number of entries. A leaf entry (an entry
in the leaf node) represents a final sub-cluster. The non-leaf nodes and their entries are used to
guide a new record quickly into a correct leaf node. Each entry is characterized by its CF that
consists of the entry’s number of records, mean and variance of each range field, and counts for
each category of each symbolic field. For each successive record, starting from the root node, it is
recursively guided by the closest entry in the node to find the closest child node, and descends
along the CF tree. Upon reaching a leaf node, it finds the closest leaf entry in the leaf node. If
the record is within a threshold distance of the closest leaf entry, it is absorbed into the leaf entry
and the CF of that leaf entry is updated. Otherwise it starts its own leaf entry in the leaf node. If
there is no space in the leaf node to create a new leaf entry, the leaf node is split into two. The
entries in the original leaf node are divided into two groups using the farthest pair as seeds, and the remaining entries are redistributed based on the closeness criterion.
If the CF tree grows beyond the allowed maximum size, the CF tree is rebuilt based on the existing
CF tree by increasing the threshold distance criterion. The rebuilt CF tree is smaller and hence
has space for new input records. This process continues until a complete data pass is finished.
For details of CF tree construction, see the BIRCH algorithm (Zhang, Ramakrishnan, and Livny, 1996).
All records falling in the same entry can be collectively represented by the entry’s CF. When a
new record is added to an entry, the new CF can be computed from this new record and the old CF
without knowing the individual records in the entry. These properties of CF make it possible to
maintain only the entry CFs, rather than the sets of individual records. Hence the CF-tree is much
smaller than the original data and can be stored in memory more efficiently.
Note that the structure of the constructed CF tree may depend on the input order of the cases or
records. To minimize the order effect, randomly order the records before building the model.
Cluster
The cluster step takes sub-clusters (non-outlier sub-clusters if outlier handling is used) resulting
from the pre-cluster step as input and then groups them into the desired number of clusters. Since
the number of sub-clusters is much less than the number of original records, traditional clustering
methods can be used effectively. TwoStep uses an agglomerative hierarchical clustering method,
because it works well with the auto-cluster method (see the section on auto-clustering below).
Hierarchical clustering refers to a process by which clusters are recursively merged, until
at the end of the process only one cluster remains containing all records. The process starts by
defining a starting cluster for each of the sub-clusters produced in the pre-cluster step. (For more
information, see the topic “Pre-cluster” on p. 397.) All clusters are then compared, and the pair
of clusters with the smallest distance between them is selected and merged into a single cluster.
After merging, the new set of clusters is compared, the closest pair is merged, and the process
repeats until all clusters have been merged. (If you are familiar with the way a decision tree is
built, this is a similar process, except in reverse.) Because the clusters are merged recursively in
this way, it is easy to compare solutions with different numbers of clusters. To get a five-cluster
solution, simply stop merging when there are five clusters left; to get a four-cluster solution, take
the five-cluster solution and perform one more merge operation, and so on.
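The merge loop described above can be sketched as follows (a naive O(n³) illustration with a placeholder single-linkage distance on 1-D points; the actual implementation uses the log-likelihood distance on sub-cluster CFs and efficient updates):

```python
def agglomerate(points, target_k, dist):
    """Naive agglomerative hierarchical clustering: repeatedly merge the
    closest pair of clusters until target_k clusters remain.
    `dist` measures the distance between two clusters (lists of points)."""
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters

# Placeholder distance, for illustration only: smallest gap between members.
def min_gap(c1, c2):
    return min(abs(a - b) for a in c1 for b in c2)
```

Because the merges are nested, stopping the same run at different values of `target_k` yields the whole sequence of solutions, which is what makes auto-clustering (below) cheap.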
Distance Measure
The TwoStep clustering method uses a log-likelihood distance measure to accommodate both symbolic and range fields. It is a probability-based distance. The distance between two clusters is related to the decrease in log-likelihood as they are combined into one cluster. In calculating log-likelihood, normal distributions for range fields and multinomial distributions for symbolic fields are assumed. It also assumes that the fields are independent of each other, and so are the records. The distance between clusters i and j is defined as

\[ d(i,j) = \xi_i + \xi_j - \xi_{\langle i,j\rangle} \]

where

\[ \xi_v = -N_v\left(\sum_{k=1}^{K^A}\tfrac{1}{2}\log\bigl(\hat{\sigma}_k^2+\hat{\sigma}_{vk}^2\bigr) + \sum_{k=1}^{K^B}\hat{E}_{vk}\right) \]

and

\[ \hat{E}_{vk} = -\sum_{l=1}^{L_k}\frac{N_{vkl}}{N_v}\log\frac{N_{vkl}}{N_v} \]

In these expressions,
\(K^A\) is the number of range type input fields,
\(K^B\) is the number of symbolic type input fields,
\(L_k\) is the number of categories for the kth symbolic field,
\(N_v\) is the number of records in cluster v,
\(N_{vkl}\) is the number of records in cluster v which belong to the lth category of the kth symbolic field,
\(\hat{\sigma}_k^2\) is the estimated variance of the kth continuous variable for all records,
\(\hat{\sigma}_{vk}^2\) is the estimated variance of the kth continuous variable for records in the vth cluster, and
\(\langle i, j\rangle\) is an index representing the cluster formed by combining clusters i and j.

If \(\hat{\sigma}_k^2\) were ignored in the expression for \(\xi_v\), the distance between clusters i and j would be exactly the decrease in log-likelihood when the two clusters are combined. The \(\hat{\sigma}_k^2\) term is added to solve the problem caused by \(\hat{\sigma}_{vk}^2 = 0\), which would result in the natural logarithm being undefined. (This would occur, for example, when a cluster has only one case.)
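A direct sketch of ξ_v and d(i, j) from per-cluster summary statistics (the data layout and names are our assumptions; in practice these quantities come from the CF entries):

```python
import math

def xi(cluster_stats, pooled_vars):
    """xi_v for the log-likelihood distance. cluster_stats holds, per cluster:
    N (record count), vars (within-cluster variance of each continuous field),
    and counts (per symbolic field, a dict of category counts)."""
    N = cluster_stats["N"]
    cont = sum(0.5 * math.log(pooled + within)
               for pooled, within in zip(pooled_vars, cluster_stats["vars"]))
    cat = 0.0
    for counts in cluster_stats["counts"]:
        for c in counts.values():
            p = c / N
            cat -= p * math.log(p)  # entropy term E_vk
    return -N * (cont + cat)

def distance(stats_i, stats_j, stats_ij, pooled_vars):
    # d(i, j) = xi_i + xi_j - xi_<i,j>
    return (xi(stats_i, pooled_vars) + xi(stats_j, pooled_vars)
            - xi(stats_ij, pooled_vars))
```

For example, merging two singleton clusters with different categories of one symbolic field gives d = 2 log 2, the log-likelihood lost by mixing the two categories.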
Number of Clusters (auto-clustering)
TwoStep can use the hierarchical clustering method in the second step to assess multiple cluster
solutions and automatically determine the optimal number of clusters for the input data. A
characteristic of hierarchical clustering is that it produces a sequence of partitions in one run: 1, 2,
3, … clusters. In contrast, a k-means algorithm would need to run multiple times (one for each
specified number of clusters) in order to generate the sequence. To determine the number of
clusters automatically, TwoStep uses a two-stage procedure that works well with the hierarchical
clustering method. In the first stage, the BIC for each number of clusters within a specified range is
calculated and used to find the initial estimate for the number of clusters. The BIC for the model with J clusters is computed as

\[ \mathrm{BIC}(J) = -2\sum_{v=1}^{J}\xi_v + m_J\log(N) \]

where

\[ m_J = J\left(2K^A + \sum_{k=1}^{K^B}(L_k-1)\right) \]

N is the total number of records, and the other terms are defined as in “Distance Measure”. The ratio of change in BIC at each successive merging relative to the first merging determines the initial estimate. Let \(d\mathrm{BIC}(J)\) be the difference in BIC between the model with J clusters and that with (J + 1) clusters, \(d\mathrm{BIC}(J) = \mathrm{BIC}(J) - \mathrm{BIC}(J+1)\). Then the change ratio for model J is

\[ R_1(J) = \frac{d\mathrm{BIC}(J)}{d\mathrm{BIC}(1)} \]

If \(d\mathrm{BIC}(1) < 0\), then the number of clusters is set to 1 (and the second stage is omitted). Otherwise, the initial estimate for the number of clusters, k, is the smallest number for which \(R_1(J)\) falls below a fixed threshold.
In the second stage, the initial estimate is refined by finding the largest relative increase in distance between the two closest clusters in each hierarchical clustering stage. This is done as follows:

1. Starting with the model \(C_k\) indicated by the BIC criterion, take the ratio of the minimum inter-cluster distance for that model and for the next larger model \(C_{k+1}\), that is, the previous model in the hierarchical clustering procedure:

\[ R_2(k) = \frac{d_{\min}(C_k)}{d_{\min}(C_{k+1})} \]

where \(C_k\) is the cluster model containing k clusters and \(d_{\min}(C)\) is the minimum inter-cluster distance for cluster model C.

2. Now from model \(C_{k-1}\), compute the same ratio with the following model \(C_k\), as above. Repeat for each subsequent model until you have the ratio \(R_2(2)\).

3. Compare the two largest \(R_2\) ratios; if the largest is more than 1.15 times the second largest, then select the model with the largest \(R_2\) ratio as the optimal number of clusters; otherwise, from those two models with the largest \(R_2\) values, select the one with the larger number of clusters as the optimal model.
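The first-stage BIC scan can be sketched as follows; note that the concrete ratio threshold used here is our assumption for illustration, not a documented constant:

```python
def initial_cluster_estimate(bic, ratio_threshold=0.04):
    """Stage 1 of auto-clustering: pick the initial number of clusters.
    bic[J - 1] is the BIC of the model with J clusters; ratio_threshold
    is an assumed cutoff for the change ratio R1(J)."""
    # dBIC(J) = BIC(J) - BIC(J + 1), for J = 1 .. len(bic) - 1
    d = [bic[j] - bic[j + 1] for j in range(len(bic) - 1)]
    if d[0] < 0:
        return 1  # second stage is omitted in this case
    for j, dj in enumerate(d, start=1):
        if dj / d[0] < ratio_threshold:
            return j  # smallest J whose change ratio falls below the cutoff
    return len(bic)
```

When the BIC stops improving early, the change ratios shrink quickly and the estimate stays small; the second stage then refines this estimate using the inter-cluster distance ratios described above.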
Blank Handling
The TwoStep cluster node does not support blanks. Records containing blanks, nulls, or missing
values of any kind are excluded from model building.
Effect of Options
Outlier Handling
An optional outlier-handling step is implemented in the algorithm in the process of building the
CF tree. Outliers are considered as data records that do not fit well into any cluster. We consider
data records in a leaf entry as outliers if the number of records in the entry is less than a certain
fraction (25% by default) of the size of the largest leaf entry in the CF tree. Before rebuilding the
CF tree, the procedure checks for potential outliers and sets them aside. After rebuilding the CF
tree, the procedure checks to see if these outliers can fit in without increasing the tree size. At the
end of CF tree building, small entries that cannot fit in are outliers.
Generated Model/Scoring
Predicted Values
When scoring a record with a TwoStep Cluster generated model, the record is assigned to the
cluster to which it is closest. The distance between the record and each cluster is calculated, and
the cluster with the smallest distance is selected as the closest, and that cluster is assigned as the
predicted value for the record. Distance is calculated in a similar manner to that used during
model building, considering the record to be scored as a “cluster” with only one record. For more
information, see the topic “Distance Measure” on p. 398.
If outlier handling was enabled during model building, the distance between the record and the closest cluster is compared to a threshold C = log(V), where

\[ V = \prod_k R_k \prod_m L_m \]

\(R_k\) is the range of the kth numeric field and \(L_m\) is the number of categories for the mth symbolic field.

If the distance from the nearest cluster is smaller than C, assign that cluster as the predicted value for the record. If the distance is greater than C, assign the record as an outlier.
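The scoring rule with the outlier threshold can be sketched as follows (the names are our assumptions; the per-cluster distances would be computed with the log-likelihood measure, treating the record as a singleton cluster):

```python
import math

def assign_cluster(distances, ranges, category_counts):
    """Score one record: pick the nearest cluster, or flag the record as an
    outlier when its distance exceeds C = log(V), with
    V = prod(range of each numeric field) * prod(categories per symbolic field).
    `distances` maps cluster id -> distance from the record to that cluster."""
    best = min(distances, key=distances.get)
    v = 1.0
    for r in ranges:
        v *= r
    for L in category_counts:
        v *= L
    threshold = math.log(v)
    if distances[best] < threshold:
        return best
    return "outlier"
```

Without outlier handling, the rule reduces to simply returning the nearest cluster.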
Blank Handling
As with model building, records containing blanks are not handled by the model, and are assigned
a predicted value of $null$.
TwoStep-AS Cluster Algorithms
1. Introduction
Clustering techniques are widely used by retail and consumer product companies that need to learn more about their customers in order to apply 1-to-1 marketing strategies. By means of clustering, customers are partitioned into groups by their buying habits, gender, age, income level, and so on, and retail and consumer product companies can tailor their marketing and product development strategies to each customer group.
Traditional clustering algorithms can broadly be classified into partitional clustering and hierarchical clustering. Partitional clustering algorithms divide data cases into clusters by optimizing a certain criterion function. A well-known representative of this class is k-means clustering. Hierarchical clustering algorithms proceed in stages, producing a sequence of partitions in which each partition is nested into the next partition in the sequence. Hierarchical clustering can be agglomerative or divisive. Agglomerative clustering starts with singleton clusters (clusters that contain one data case each) and proceeds by successively merging clusters at each stage. In contrast, divisive clustering starts with a single cluster that contains all data cases and proceeds by successively splitting clusters into smaller ones. Notice that no initial values are needed for hierarchical clustering.
However, traditional clustering algorithms do not adequately address the problem of large datasets. This is where the two-step clustering method can be helpful (see ref. [1][2]). This method first performs a pre-clustering step by scanning the entire dataset and storing the dense regions of data cases in terms of
summary statistics called cluster features. The cluster features are stored in memory in a data structure
called the CF-tree. Then an agglomerative hierarchical clustering algorithm is applied to cluster the set of
cluster features. Since the set of cluster features is much smaller than the original dataset, the hierarchical
clustering can perform well in terms of speed. Notice that the CF-tree is incremental in the sense that it
does not require the whole dataset in advance and only scans the dataset once.
One essential element in the clustering algorithms above is the distance measure that reflects the relative
similarity or dissimilarity of the clusters. Chiu et al. proposed a new distance measure that enables
clustering on data sets in which the features are of mixed types. The features can be continuous, nominal,
categorical, or count. This distance measure is derived from a probabilistic model in the way that the
distance is equivalent to the decrease in log-likelihood value as a result of merging two clusters. In the
following, the new distance measure will be used in both the CF-tree growth and the clustering process,
unless otherwise stated.
In this chapter, we extend the two-step clustering method into the distributed setting, specifically under
the map-reduce framework. In addition to generating a clustering solution, we also provide mechanisms
for selecting the most relevant features for clustering given data, as well as detecting rare outlier points.
Moreover, we provide an enhanced set of evaluation and diagnostic features enabling insight,
interactivity, and an improved overall user experience as required by the Analytic Catalyst application.
The chapter is organized as follows. We first give some general notes about the algorithms and their
implementation. Then we define the notations used in the document. Operations for data pre-processing are
introduced in section 4. In section 5, we briefly describe the data and the measures such as distance,
tightness, and so on. In sections 6, 7, and 8, we present the key algorithms used in model building,
including CF-tree growth, Hierarchical Agglomerative Clustering (HAC), and determination of the
number of clusters, respectively. Section 9 describes the entire solution of distributed clustering on
Hadoop. Section 10 describes how to score new cases (to assign cluster memberships). Finally, Section 11
includes various measures used for model evaluation and model diagnostics. Insights and interestingness
are also derived.
2. Notes
• To create CF-trees efficiently, we assume that operations within a main-memory environment (for example, RAM) are efficient, and that the size of the main memory can be allocated or controlled by the user.
• We assume that the data is randomly partitioned. If this assumption does not hold, sequential partitioning can still be applied, but note that the clustering result can be affected, particularly if the data is ordered in some special way.
• CE is implemented in the Analytic Framework.
3. Notations
The following notations are used throughout this chapter unless otherwise stated:

R         Number of data partitions/splits.
N_j       Number of cases in cluster C_j.
N_jk      Number of cases in cluster C_j which have non-missing values in the kth feature.
K         Number of features used for clustering.
x_i       The ith data case; x_i is a K-dimensional vector.
x_ik      Value of the kth continuous feature of the ith data case x_i. There are K^A continuous features.
x°_ik     Value of the kth categorical feature of the ith data case x_i. There are K^B categorical features.
L_k       Number of categories of the kth categorical feature in the entire data.
N_jkl     Number of cases in cluster C_j whose kth categorical feature takes the lth category.
s_jk      Sum of values of the kth continuous feature in cluster C_j.
s_jk^(2)  Sum of squared values of the kth continuous feature in cluster C_j.
d(j,s)    Distance between clusters C_j and C_s.
⟨j,s⟩     Cluster formed by combining clusters C_j and C_s.
4. Data Pre-processing
Data pre-processing includes the following transformations:
• Trailing blanks are trimmed.
• Date/time features are transformed into continuous ones.
• Continuous features are normalized.
• Category values of a categorical feature are mapped to integers. As such, the expression "x°_ik = l" indicates that the kth categorical feature of the ith case takes the lth category.
• System-missing, user-missing, and invalid values are all treated as missing.
• Cases with missing values in all features are discarded.
5. Data and Measures
Let x_i be the ith data case. Denote I_j as the index set of cluster C_j. Let K be the
total number of features, in which K^A of them are continuous and K^B are categorical. Without loss of
generality, write x_i as

x_i = (x_i1, ..., x_iK^A, x°_i1, ..., x°_iK^B),   (1)

where x_ik is the value of the kth continuous feature, k = 1, ..., K^A, and x°_ik is the value of the kth
categorical feature, k = 1, ..., K^B. Express x°_ik as a vector (x_ik1, ..., x_ikL_k) of L_k values in which each entry is
either zero or one:

x_ikl = 1 if x°_ik takes the lth category, and x_ikl = 0 otherwise.   (2)
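As a small illustration of equation (2), a categorical value can be expanded into its indicator vector. This is a minimal sketch; the 0-based integer coding of categories is an assumption of the example, not taken from the guide.

```python
def one_hot(category, n_categories):
    """Indicator vector of equation (2): 1 at the observed category, 0 elsewhere.
    Categories are assumed to be pre-mapped to integers 0..n_categories-1."""
    vec = [0] * n_categories
    vec[category] = 1
    return vec
```

For example, a feature with four categories taking its third category is represented as the vector with a single 1 in the third position.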
5.1. Cluster Feature of a Cluster
The cluster feature (sufficient statistics set) CF_j of a cluster C_j is a collection of statistics that summarizes
the characteristics of the cluster. A possible set is given as

CF_j = {N_j, N_j(vector), s_j, s_j^(2), N_j^B},   (3)

where N_j is the number of data cases in cluster C_j, and N_j(vector) = (N_j1, ..., N_jK) is a K-dimensional
vector whose kth entry is the number of data cases in cluster C_j which have non-missing values in the kth
feature. s_j is a K^A-dimensional vector whose kth entry is the sum of the non-missing values
of the kth continuous feature in cluster C_j, i.e.

s_jk = Σ_{i ∈ I_j} x_ik,   (4)

for k = 1, ..., K^A. Similarly, s_j^(2) is a K^A-dimensional vector such that the kth entry is the sum
of squared non-missing values of the kth continuous feature in cluster C_j, i.e.

s_jk^(2) = Σ_{i ∈ I_j} x_ik²,   (5)

for k = 1, ..., K^A.

Similarly, N_j^B is a (Σ_k L_k)-dimensional vector where the kth sub-vector N_jk^B is L_k-dimensional, given by

N_jk^B = (N_jk1, ..., N_jkL_k),   (6)

for k = 1, ..., K^B. The lth entry N_jkl represents the total number of cases in cluster C_j whose kth categorical
feature takes the lth category, i.e.

N_jkl = Σ_{i ∈ I_j} x_ikl.   (7)
5.2. Updating Cluster Feature when Merging Two Clusters
When two clusters C_j and C_s merge, the two corresponding sets of data
cases are merged together to form a union. In this case, the cluster feature CF_⟨j,s⟩ for the merged cluster ⟨j,s⟩ can be
calculated by simply adding the corresponding elements in CF_j and CF_s, that is,

CF_⟨j,s⟩ = CF_j + CF_s.   (8)
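Because the cluster feature is purely additive, merging two clusters never requires revisiting raw data. The following is a minimal sketch of such a structure for mixed features; the field names are illustrative, not the guide's implementation.

```python
from dataclasses import dataclass
from typing import List

# Sketch of a cluster feature (equation (3)): case count, per-feature
# non-missing counts, sums and sums of squares for continuous features,
# and per-category counts for categorical features. Merging two clusters
# (equation (8)) is element-wise addition of these statistics.
@dataclass
class ClusterFeature:
    n: int                       # N_j: number of cases in the cluster
    n_valid: List[int]           # non-missing counts per continuous feature
    s: List[float]               # sums per continuous feature
    s2: List[float]              # sums of squares per continuous feature
    cat_counts: List[List[int]]  # per categorical feature: counts per category

    def merge(self, other: "ClusterFeature") -> "ClusterFeature":
        return ClusterFeature(
            n=self.n + other.n,
            n_valid=[a + b for a, b in zip(self.n_valid, other.n_valid)],
            s=[a + b for a, b in zip(self.s, other.s)],
            s2=[a + b for a, b in zip(self.s2, other.s2)],
            cat_counts=[[a + b for a, b in zip(ca, cb)]
                        for ca, cb in zip(self.cat_counts, other.cat_counts)],
        )
```

This additivity is what makes the CF-tree incremental: a leaf entry absorbs a case by merging the case's singleton cluster feature into its own.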
5.3. Tightness of a Cluster
The interpretation of the tightness of a cluster is that the smaller the value of tightness, the less variation there is among
the data cases within the cluster. In CE, there are two tightness measures, and they are used
depending on the specified distance measure: log-likelihood distance or Euclidean distance.
5.3.1. Tightness based on Log-likelihood Distance
The tightness of a cluster can be defined as the average negative log-likelihood function of the cluster
evaluated at the maximum likelihood estimates of the model parameters. See ref. [1] for the statistical
reasoning behind this definition.

The tightness η_j of a cluster C_j is given by

η_j = Σ_{k=1}^{K^A} (1/2) ln(σ²_jk + Δ) + Σ_{k=1}^{K^B} E_jk,   (9)

where σ²_jk is the maximum likelihood variance estimate of the kth continuous feature in cluster C_j,

σ²_jk = s_jk^(2)/N_jk − x̄²_jk,   (10)

in which x̄_jk is the sample mean,

x̄_jk = s_jk/N_jk.   (11)

E_jk is the entropy of the kth categorical feature in cluster C_j,

E_jk = −Σ_{l=1}^{L_k} q_jkl ln q_jkl,   (12)

in which q_jkl is the portion of data cases in cluster C_j whose kth categorical feature takes the lth category,

q_jkl = N_jkl/N_jk.   (13)

Finally, Δ is a positive scalar which is added to handle degenerate conditions and to balance the
contributions between a continuous feature and a categorical one. The default value of Δ is 0.01.

To handle missing values, the definition of tightness assumes that the distribution of missing values is the
same as for the observed non-missing points in the cluster.

Moreover, the following convention is always applied:

0 ln 0 = 0.   (14)
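The tightness above can be computed entirely from the cluster-feature statistics. The following is a minimal sketch for equations (9)-(13), assuming the default Δ of 0.01; the function name and argument layout are illustrative.

```python
import math

DELTA = 0.01  # balancing constant, default 0.01 as in the text

def tightness_loglik(n_valid, s, s2, cat_counts):
    """Log-likelihood tightness from summary statistics:
    n_valid, s, s2 are per-continuous-feature non-missing counts, sums,
    and sums of squares; cat_counts holds per-category counts for each
    categorical feature."""
    eta = 0.0
    # continuous features: 0.5 * ln(variance + DELTA), equations (10)-(11)
    for nk, sk, s2k in zip(n_valid, s, s2):
        mean = sk / nk
        var = s2k / nk - mean * mean
        eta += 0.5 * math.log(var + DELTA)
    # categorical features: entropy, with the 0 * ln(0) = 0 convention
    for counts in cat_counts:
        nk = sum(counts)
        eta -= sum((c / nk) * math.log(c / nk) for c in counts if c > 0)
    return eta
```

Note how Δ keeps the logarithm finite when a continuous feature is constant within the cluster (variance zero).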
5.3.2. Tightness based on Euclidean Distance
The tightness of a cluster can be defined as the average Euclidean distance from member cases to the
center/centroid of the cluster.

The tightness η_j of a cluster C_j is given by

η_j = sqrt( Σ_{k=1}^{K} ( s_jk^(2)/N_jk − (s_jk/N_jk)² ) ).   (15)

Notice that if any feature in cluster C_j has all missing values, the feature will not be used in the
computation.
5.4. Distance Measures between Two Clusters
Suppose clusters C_j and C_s are merged to form a new cluster ⟨j,s⟩ that consists of the union of all data
cases in C_j and C_s. Two distance measures are available.

5.4.1. Log-likelihood Distance

The distance between C_j and C_s can be captured by observing the corresponding decrease in log-likelihood as the result of combining C_j and C_s to form ⟨j,s⟩.

The distance measure between two clusters C_j and C_s is defined as

d(j,s) = ξ_j + ξ_s − ξ_⟨j,s⟩,   (16)

where

ξ_j = −N_j η_j,   (17)

and

ξ_⟨j,s⟩ = −N_⟨j,s⟩ η_⟨j,s⟩.   (18)

Note that since ξ_⟨j,s⟩ can be calculated by using the statistics in CF_⟨j,s⟩, the distance can be calculated by
first updating the CF of the merged cluster ⟨j,s⟩.

To handle missing values, the definition of distance assumes that the contribution of missing values
equals zero.
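The distance in equation (16) can thus be evaluated purely from sufficient statistics. Below is a self-contained sketch for clusters with continuous features only, using ξ = −N·η as in equations (17)-(18); names and the default Δ = 0.01 follow the tightness definition, and the restriction to continuous features is an assumption of the example.

```python
import math

DELTA = 0.01  # balancing constant from the tightness definition

def _xi(n, s, s2):
    """xi = -N * tightness for a cluster with continuous features only,
    from its case count n, feature sums s, and sums of squares s2."""
    eta = 0.0
    for sk, s2k in zip(s, s2):
        mean = sk / n
        var = s2k / n - mean * mean
        eta += 0.5 * math.log(var + DELTA)
    return -n * eta

def distance_loglik(n_j, s_j, s2_j, n_s, s_s, s2_s):
    """Equation (16): d(j, s) = xi_j + xi_s - xi_<j,s>, where the merged
    cluster's statistics are element-wise sums (equation (8))."""
    merged_s = [a + b for a, b in zip(s_j, s_s)]
    merged_s2 = [a + b for a, b in zip(s2_j, s2_s)]
    return (_xi(n_j, s_j, s2_j) + _xi(n_s, s_s, s2_s)
            - _xi(n_j + n_s, merged_s, merged_s2))
```

Merging two identical clusters loses no likelihood, so their distance is zero, while well-separated clusters yield a strictly positive distance.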
5.4.2. Euclidean Distance
The Euclidean distance can be applied only if all features are continuous.

The distance between two cases is clearly defined. The distance between two clusters is here defined by
the Euclidean distance between the two cluster centers. A cluster center is defined as the vector of cluster
means of each feature.

Suppose the centers/centroids of clusters C_j and C_s are x̄_j and x̄_s respectively; then

d(j,s) = sqrt( Σ_{k=1}^{K} (x̄_jk − x̄_sk)² ),   (19)

where

x̄_jk = s_jk/N_jk.   (20)

Again, any feature in cluster C_j with all missing values will not be used in the computation.
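Equations (19)-(20) amount to computing each center from the stored sums and counts, then taking the ordinary Euclidean distance between centers. A minimal sketch (function name illustrative):

```python
import math

def distance_euclidean(n_valid_j, s_j, n_valid_s, s_s):
    """Euclidean distance between cluster centers (equations (19)-(20)),
    where each center entry is the mean s_jk / N_jk of the non-missing
    values of that feature."""
    return math.sqrt(sum(
        (sj / nj - ss / ns) ** 2
        for nj, sj, ns, ss in zip(n_valid_j, s_j, n_valid_s, s_s)
    ))
```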
6. CF-Tree Building
The CF-tree is a very compact summary of the dataset: each entry (leaf entry) in a leaf node is a
sub-cluster which absorbs the data cases that are close together, as measured by the tightness index and
controlled by a specific threshold value T. The CF-tree is built dynamically as new data cases are inserted, and it
guides each new insertion into the correct sub-cluster for clustering purposes.

The CF-tree is a height-balanced tree with four parameters:
1. The branching factor B for the non-leaf nodes. It is the maximum number of entries that a non-leaf node can hold. A non-leaf entry is of the form [CF_j, child_j], in which child_j is a pointer to its jth child node and CF_j is the cluster feature of the sub-cluster represented by this child.
2. The branching factor L for the leaf nodes. It is the maximum number of entries that a leaf node can hold. A leaf entry is similar to a non-leaf entry except that it does not have a pointer; it is of the form [CF_j].
3. The threshold parameter T that controls the tightness of any leaf entry. That is, all leaf entries in a leaf node must satisfy the threshold requirement that the tightness be less than T, i.e. η < T.
4. The maximum tree height H.

In addition, each leaf node has two pointers, "prev" and "next", which are used to chain all leaf nodes
together for efficient scanning.

Figure 1 illustrates an example CF-tree with branching factors B and L and maximum height H.

Figure 1. Example of a CF-tree.
6.1. Inserting a Single Case or a Sub-cluster into a CF-Tree
The procedure for inserting a data case or a sub-cluster (both referred to below simply as an entry) into a CF-tree is as follows.
Step 1. Identify the appropriate leaf node.
Starting from the root node, recursively descend the CF-tree by choosing the closest child node
according to the distance measure d(j,s).
Step 2. Modify the leaf node.
Upon reaching a leaf node, find the closest leaf entry and see if the incoming entry can be absorbed
into it without violating the threshold requirement η < T. If so, update the CF information
in that leaf entry to reflect the absorbing action. If not, add a new entry for the incoming entry to the leaf. If there is space
on the leaf for this new entry to fit in, then we are done. Otherwise, split the leaf node by
choosing the farthest pair of entries as seeds, and redistribute the remaining entries based on the
closest criteria.
Step 3. Modify the path to the leaf node.
After inserting the entry into a leaf node, update the CF information for each non-leaf entry on the
path to the leaf node. If there is no leaf split, then only the corresponding CF information needs
to be updated to reflect the absorption of the entry. If a leaf split happens, then it is necessary to
insert a new non-leaf entry into the parent node in order to describe the newly created leaf. If the
parent has space for this entry, at all higher levels only the CF information needs to be updated to
reflect the absorption of the entry. In general, however, the parent node may have to split as well, and so on
up to the root node. If the root node is split, the tree height increases by one.

Notice that the growth of the CF-tree is sensitive to case order. If the same data case is inserted twice but at
different times, the two copies might be entered into two distinct leaf entries. It is possible that two sub-clusters that should be in one cluster are split across nodes. Similarly, it is also possible that two sub-clusters that should not be in one cluster are kept together in the same node.
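The leaf-level part of the insertion (Step 2) can be sketched as follows for continuous features only. This is a toy sketch: entries are bare (n, s, s2) triples, the flat leaf representation and all names are illustrative, and the closest entry is found by centroid distance.

```python
import math

DELTA = 0.01

def tightness(entry):
    """Log-likelihood tightness of a continuous-feature entry (n, s, s2)."""
    n, s, s2 = entry
    return sum(0.5 * math.log(s2k / n - (sk / n) ** 2 + DELTA)
               for sk, s2k in zip(s, s2))

def merge(a, b):
    """Add sufficient statistics element-wise (equation (8))."""
    return (a[0] + b[0],
            [x + y for x, y in zip(a[1], b[1])],
            [x + y for x, y in zip(a[2], b[2])])

def center_dist(a, b):
    return math.sqrt(sum((x / a[0] - y / b[0]) ** 2 for x, y in zip(a[1], b[1])))

def insert_into_leaf(leaf, entry, threshold, max_entries):
    """Absorb `entry` into the closest leaf entry if the merged tightness
    stays below `threshold`; otherwise add it as a new entry, splitting the
    leaf on the farthest pair of entries when it overflows.
    Returns one leaf, or two leaves after a split."""
    if leaf:
        closest = min(range(len(leaf)), key=lambda i: center_dist(leaf[i], entry))
        merged = merge(leaf[closest], entry)
        if tightness(merged) < threshold:      # absorb
            leaf[closest] = merged
            return [leaf]
    leaf.append(entry)                          # new leaf entry
    if len(leaf) <= max_entries:
        return [leaf]
    # split: seed with the farthest pair, redistribute by closeness
    pairs = [(center_dist(leaf[i], leaf[j]), i, j)
             for i in range(len(leaf)) for j in range(i + 1, len(leaf))]
    _, i, j = max(pairs)
    left, right = [leaf[i]], [leaf[j]]
    for k, e in enumerate(leaf):
        if k in (i, j):
            continue
        (left if center_dist(e, leaf[i]) <= center_dist(e, leaf[j]) else right).append(e)
    return [left, right]
```

In the full algorithm a split propagates upward through the non-leaf entries, as described in Step 3.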
6.2. Threshold Heuristic
In building the CF-tree, the algorithm starts with an initial threshold value (the default is 0). It then scans the
data cases and inserts them into the tree. If the main memory runs out before data scanning is finished, the
threshold value is increased to rebuild a new, smaller CF-tree, by re-inserting the leaf entries of the old
tree into the new one. After the old leaf entries have been re-inserted, data scanning is resumed from the
case at which it was interrupted. The following strategy is used to update the threshold value.

Suppose that at some step the CF-tree of the current threshold is too big for the main memory after a number of data cases
have been scanned, and an estimate of the next (larger) threshold is needed to rebuild a new,
smaller CF-tree. Specifically, we find the first pair of closest leaf entries whose tightness is greater than the current threshold, and
take that tightness as the next threshold value. However, searching for the closest entries can be tedious, so we follow
BIRCH's heuristic: traverse along a path from the root to the most crowded leaf (the one that has the most
entries) and find the pair of leaf entries on it that satisfies the condition.
6.3. Rebuilding CF-Tree
When the CF-tree size exceeds the size of the main memory, or the CF-tree height is larger than H, the CF-tree is rebuilt into a smaller one by increasing the tightness threshold.

Assume that within each node of the CF-tree, the entries are labeled contiguously from 0 to n − 1,
where n is the number of entries in that node. Then a path from an entry in the root (level 1) to a leaf
node (level h) can be uniquely represented by (i_1, i_2, ..., i_{h−1}), where i_l is the label of the lth-level
entry on that path. So, naturally, path (i_1, ..., i_{h−1}) is before (or <) path (j_1, ..., j_{h−1})
if i_1 = j_1, ..., i_{l−1} = j_{l−1} and i_l < j_l for some level l. Since each leaf node corresponds
to a path, given that we are dealing with a tree structure, we will use "path" and "leaf node"
interchangeably from now on.
With the natural path order defined above, it scans and frees the old tree, path by path, and at the same
time creates the new tree path by path. The procedure is as follows.
Step 1. Let the new tree start with NULL and OldCurrentPath be initially the leftmost path in the old
tree.
Step 2. Create the corresponding NewCurrentPath in the new tree.
Copy the nodes along OldCurrentPath in the old tree into the new tree as the (current) rightmost
path; call this NewCurrentPath.
Step 3. Insert leaf entries in OldCurrentPath into the new tree.
With the new threshold, each leaf entry in OldCurrentPath is tested against the new tree to see if
it can either be absorbed by an existing leaf entry, or fit in as a new leaf entry without splitting, in
the NewClosestPath that is found top-down with the closest criteria in the new tree. If so, and
NewClosestPath is before NewCurrentPath, then it is inserted into NewClosestPath and deleted
from the leaf node in NewCurrentPath.
Step 4. Free space in OldCurrentPath and NewCurrentPath.
Once all leaf entries in OldCurrentPath are processed, the nodes along OldCurrentPath can be
deleted from the old tree. It is also likely that some nodes along NewCurrentPath are empty
because leaf entries that originally corresponded to this path have been “pushed forward.” In this
case, the empty nodes can be deleted from the new tree.
Step 5. Process the next path in the old tree.
OldCurrentPath is set to the next path in the old tree if there still exists one, and go to step 2.
6.4. Delayed-Split Option
If the CF-tree that results from inserting a data case is too big for the main memory, it may be possible that
some other data cases can still fit in the current CF-tree without causing a split on any node in
the CF-tree (so that the size of the current tree remains the same and it can still fit in the main memory).
Similarly, if the CF-tree that results from inserting a data case exceeds the maximum height, it may be possible
that some other data cases can still fit in the current CF-tree without increasing the tree height.
When either of the two conditions occurs, such cases are written out to disk (with an amount of disk space S1
put aside for this purpose) and data scanning continues until the disk space runs out as well. The
advantage of this approach is that more data cases can fit into the tree before a new tree is rebuilt. Figure
2 illustrates the control flow of the delayed-split option.
Figure 2 depicts this flow: each incoming data case is inserted into the current CF-tree t1 if doing so would neither run out the main memory nor make the tree height exceed H; otherwise, the case is written to disk space S1 and the size of S1 is updated. The process continues while data cases arrive and S1 has room.

Figure 2. Control flow of delayed-split option.
6.5. Outlier-Handling Option
An outlier is defined as a leaf entry (sub-cluster) of low density, which contains fewer than a specified number of cases (default 10).

Similar to the delayed-split option, some disk space S2 is allocated for handling outliers. When the current
CF-tree is too big for the main memory, some leaf entries are treated as potential outliers (based on the
definition of outlier) and are written out to disk. The others are used to rebuild the CF-tree. Figure 3
shows the control flow of the outlier-handling option.

Implementation notes:
• The size of any outlier leaf entry should also be less than 20% of the maximal size of leaf entries.
• The CF-tree t1 should be updated once any leaf entry is written to disk space S2.
• Outliers identified here are local candidates, and they will be analyzed further in later steps, where the final outliers will be determined.
Figure 3 depicts this flow: while disk space S2 has room, each leaf entry in the current CF-tree t1 is checked against the outlier definition; entries that qualify as outliers are written to disk space S2 (updating the size of S2), while the remaining entries are kept to rebuild t1.

Figure 3. Control flow of outlier-handling option.
6.6. Overview of CF-Tree Building
Figure 4 provides an overview of building a CF-tree for the whole algorithm. Initially a threshold value is
set, data is scanned, and the CF-tree is built dynamically. When the main memory runs out, or the tree
height exceeds the maximum height before the whole dataset has been scanned, the algorithm performs the
delayed-split option, the outlier-handling option, and the tree-rebuilding step to rebuild a new, smaller CF-tree that can fit into the main memory. The process continues until all cases in the data are processed.
When all data has been scanned, cases in disk space S1 are absorbed and entries in disk space S2 are scanned
again to verify whether they are indeed outliers.

Implementation notes:
• When all data has been scanned, all cases in disk space S1 will be inserted into the tree. This may result in rebuilding the tree if necessary.
The following table shows the parameters involved in CF-tree building and their default values.

Parameter                                                      Default value
Assigned main memory                                           80*1024 bytes (TBD)
Assigned disk space for outlier-handling (S2)                  20% of assigned main memory
Assigned disk space for delayed-split (S1)                     10% of assigned main memory
Adjustment constant to the tightness and distance measures, Δ  0.01
Distance measure (Log-likelihood/Euclidean)                    Log-likelihood
Initial threshold value (T)                                    0
Branching factor (B)                                           8
Branching factor (L)                                           8
Maximum tree height (H)                                        3
Delayed-split option (on/off)                                  On
Outlier-handling option (on/off)                               On
Outlier definition                                             Leaf entry which contains fewer than a specified number of cases; default 10
Figure 4 depicts this flow: starting from a CF-tree t1 with initial threshold T, each incoming data case is inserted into t1 if doing so would neither run out the main memory nor make the tree height exceed H. Otherwise, the delayed-split and outlier-handling options are applied, the threshold T is increased, cases in S1 and entries in S2 are re-absorbed into t1 (updating the sizes of S1 and S2), and t1 is rebuilt with the new T. When data scanning has finished, cases in S1 and entries in S2 are re-absorbed into t1.

Figure 4. Control flow of CF-tree building.
7. Hierarchical Agglomerative Clustering
Hierarchical Agglomerative Clustering (HAC) proceeds in steps, producing a sequence of partitions in
which each partition is nested within the next partition in the sequence. See ref. [3] for details.
HAC can be implemented using two methods, as described below.
7.1. Matrix Based HAC
Suppose that matrix based HAC starts with a number of clusters. At each subsequent step, a pair of clusters is
chosen. The two clusters C_j and C_s in the pair are the closest together in terms of the distance measure d(j,s).
A new cluster ⟨j,s⟩ is formed to replace one of the clusters in the pair, C_j say. This new cluster contains
all data cases in C_j and C_s. The other cluster C_s is discarded. Hence the number of clusters is reduced by
one at each step. The process stops when the desired number of clusters is reached. Since the distance
measure between any two clusters that are not involved in the merge does not change, the algorithm is
designed to update the distance measures between the new cluster and the other clusters efficiently.
The procedure of matrix based HAC is as follows.

Step 1. For each cluster, compute the distances to all other clusters, and record each cluster's closest neighbor and the corresponding distance. Find the overall closest pair.

Step 2. Repeat until the desired number of clusters is reached: {
    Merge the closest pair into a new cluster, and replace one member of the pair by the merged cluster; the other member will be erased.
    For each remaining cluster: {
        If its recorded closest neighbor was one of the two merged clusters, recompute its distances to all remaining clusters and update its closest neighbor;
        Otherwise, compute its distance to the new merged cluster; if this distance is smaller than its recorded closest distance, update its closest neighbor accordingly; if not, no change.
    }
    Compute the distances between the merged cluster and all remaining clusters, and record its closest neighbor;
    Erase the discarded cluster;
    Find the overall closest pair for the next step.
}
Implementation notes:
• In order to reduce the memory requirement, it is not necessary to create an actual distance matrix when determining the closest clusters.
• If the Euclidean distance is used, the Ward measure will be used to find the closest clusters; we simply replace the distance measure d(j,s) by the Ward measure. This also applies below for CF-tree based HAC.
7.2. CF-tree Based HAC
Suppose that CF-tree based HAC starts with a set of CF-trees that together contain a number of leaf entries,
and that for each leaf entry the index of the CF-tree which contains it is recorded.
At each subsequent step, a pair of leaf entries is chosen. The two leaf entries in the pair are the
closest together in terms of the distance measure d(j,s). A new leaf entry is formed to replace one
of the leaf entries in the pair. This new leaf entry contains all data cases in the two original entries; the other leaf
entry is discarded. Hence the number of leaf entries is reduced by one at each step. Meanwhile, the
involved CF-trees are updated accordingly. The process stops when the desired number of leaf
entries is reached. The output is the set of updated CF-trees, whose leaf entries indicate the produced
clusters.
The procedure of CF-tree based HAC is as follows.

Step 1. For each leaf entry, find the closest leaf entry in each CF-tree by following the tree structure, and record each leaf entry's closest neighbor and the corresponding distance. Find the overall closest pair.

Step 2. Repeat until the desired number of leaf entries is reached: {
    Merge the closest pair into a new leaf entry, update the involved CF-tree with the new leaf entry, and remove the discarded leaf entry from its CF-tree.
    For each remaining leaf entry: {
        If its recorded closest neighbor was one of the two merged leaf entries, search the CF-trees again for its closest leaf entry and update its record;
        Otherwise, compute its distance to the new leaf entry; if this distance is smaller than its recorded closest distance, update its closest neighbor accordingly; if not, no change.
    }
    Find the closest leaf entry to the new leaf entry in each CF-tree, and record its closest neighbor;
    Find the overall closest pair for the next step.
}

Step 3. Export the updated non-empty CF-trees.
Clearly, CF-tree based HAC is very similar to matrix based HAC. The only difference is that CF-tree
based HAC takes advantage of CF-tree structures to efficiently find the closest pair, rather than checking
all possible pairs as in matrix based HAC.
8. Determination of the Number of Clusters
Assume that the hierarchical clustering method has been used to produce 1, 2, ... clusters already. We
consider the following two criterion indices in order to find the appropriate number of final clusters.

Bayesian Information Criterion (BIC):

BIC(J) = −2 Σ_{j=1}^{J} ξ_j + m_J ln(N),   (23)

where N is the total number of cases in all the clusters, and

m_J = J ( 2 K^A + Σ_{k=1}^{K^B} (L_k − 1) ).   (24)

Akaike Information Criterion (AIC):

AIC(J) = −2 Σ_{j=1}^{J} ξ_j + 2 m_J.   (25)

Let I(J) be the criterion index (BIC or AIC) of J clusters, d(J) be the distance measure between the two
clusters merged in merging J + 1 clusters to J clusters, and S be the total number of sub-clusters from
which to determine the appropriate number of final clusters.

Users can supply the range [J_min, J_max] for the number of clusters in which they believe the "true" number
of clusters should lie. Notice that if J_max > S, J_max is reset to S.
The following four methods are proposed:

Method 1. Finding the number of clusters by information convergence.
Let I(J) be either BIC(J) or AIC(J), depending on the user's choice. Let J_I be the smallest J in
[J_min, J_max] which satisfies the information-convergence condition; if none satisfies the condition,
let J_I = J_max.

Method 2. Finding the number of clusters by the largest distance jump.
Report the J in [J_min, J_max] at which the jump in the merge distance d(J) is the largest as the number of clusters.

Method 3. Finding the number of clusters by combining distance jump and information convergence aggressively.
The process goes as follows:
a) Compute J_I as in method 1.
b) Find the largest J in [J_min, J_I] which satisfies the distance-jump condition; if none satisfies the condition, use J_min.
c) Calculate the distance jump for J in the resulting range. Suppose that the maximum and the second maximum of the distance jump occur at J_1 and J_2, respectively.
d) If J_1 and J_2 satisfy the agreement condition, report J_1 as the cluster number.
e) Otherwise, report the more aggressive of J_1 and J_2.

Method 4. Finding the number of clusters by combining distance jump and information convergence conservatively.
This method performs the same steps from a) to d) in method 3, but in step e), method 4 reports the more conservative of J_1 and J_2.

By default, method 3 is used with BIC as the information criterion.
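The two ingredients above, the information criterion of equation (23) and the distance jump of method 2, can be sketched as follows. The dictionary keyed by the number of clusters J and both function names are conveniences of this example; the refinements of methods 1, 3, and 4 are omitted.

```python
import math

def bic(xi_sum, m_j, n):
    """Equation (23): BIC(J) = -2 * sum_j xi_j + m_J * ln(N), where xi_sum
    is the sum of the per-cluster xi terms and m_j the parameter count."""
    return -2.0 * xi_sum + m_j * math.log(n)

def best_k_by_distance_jump(d):
    """Method 2 sketch. d maps a number of clusters J to the distance d(J)
    of the merge that produced J clusters from J + 1. Report the J that
    maximises the ratio d(J - 1) / d(J), i.e. the J just before an
    unusually expensive merge."""
    candidates = [J for J in d if (J - 1) in d and d[J] > 0]
    return max(candidates, key=lambda J: d[J - 1] / d[J])
```

For instance, if merging from 2 clusters down to 1 is an order of magnitude more expensive than the earlier merges, the jump criterion reports 2 clusters.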
9. Overview of the Entire Clustering Solution
Figure 5 illustrates an overview of the entire clustering solution: features are first filtered based on summary statistics, then selected adaptively based on clustering models; with the selected features, distributed clustering is performed with optional outlier detection.

Figure 5. Control flow of the entire clustering solution.
9.1. Feature Selection
9.1.1. Feature Filtering
Based on the summary statistics produced by DE, CE performs an initial analysis and determines the
features that are not useful for making the clustering solution. Specifically, features matching the following rules are excluded.

#  Rule                                                                                Status
1  Frequency/analysis weight features                                                  Required
2  Identity features                                                                   Required
3  Constant features                                                                   Required
4  The percentage of missing values in the feature is larger than a threshold
   (default 70%)                                                                       Required
5  The distribution of the categories of a categorical feature is extremely
   imbalanced, that is, the effect size for the one-sample chi-square test is
   larger than a threshold (default 0.7)                                               Discarded
6  One category makes up the overwhelming majority of the total population,
   above a given percentage threshold (default 95%)                                    Required
7  The number of categories of a categorical feature is larger than a
   threshold (default 24)                                                              Required
8  There are categories of a categorical feature with extremely high or low
   frequency, that is, the outlier strength is larger than a threshold (default 3)     Discarded
9  The absolute coefficient of variation of a continuous feature is smaller
   than a threshold (default 0.05)                                                     Required

The remaining features will be saved for adaptive feature selection in the next step.
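Two of the numeric rules above (missing-value percentage and coefficient of variation) can be sketched for a single continuous feature as follows, using the default thresholds quoted in the table; the function name and the `None`-as-missing convention are assumptions of this example.

```python
import math

def keep_continuous_feature(values, max_missing=0.70, min_cv=0.05):
    """Return False if the feature should be filtered out: more than 70%
    missing values, or an absolute coefficient of variation below 0.05."""
    missing = sum(1 for v in values if v is None)
    if missing / len(values) > max_missing:
        return False
    obs = [v for v in values if v is not None]
    mean = sum(obs) / len(obs)
    if mean == 0:
        return True  # coefficient of variation undefined; keep the feature
    var = sum((v - mean) ** 2 for v in obs) / len(obs)
    cv = math.sqrt(var) / mean
    return abs(cv) >= min_cv
```

A constant feature has zero coefficient of variation, so rule 9 subsumes rule 3 for continuous features.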
9.1.2. Adaptive Feature Selection
Adaptive feature selection selects the most important features for the final clustering solution.
Specifically, it performs the following steps.
Step 1. Divide the distributed data into data splits.
Step 2. Build a local CF-tree on each data split.
Step 3. Distribute the local CF-trees to multiple computing units. A unique key is assigned to each CF-tree.
Step 4. On each computing unit, start with all available features:
a. Perform matrix based HAC with all features on the leaf entries to get an approximate clustering solution, S0, with some number of final clusters.
b. Compute the importance of each feature in the set of all features.
c. Record the information criterion of S0.
d. Remove as many features with non-positive importance as possible, and update S0 and its information criterion.
e. Repeat the following:
i. Select the most unimportant feature from the remaining features which have not been checked.
ii. Perform matrix based HAC with the remaining features (not including the selected one) on the leaf entries to get a new approximate clustering solution, S1, with the fixed number of clusters.
iii. If the information criterion of S1, plus the information of all discarded features determined by S1, is lower than that of S0, then remove the selected feature and take S1 as the new S0.
iv. Continue to check the next feature.
f. Select the set of features corresponding to S0.
Step 5. Pour together all the sets of features produced by the different computing units. Discard any feature if its occurring frequency across computing units is less than a given threshold. The remaining features will be used to build the final clustering solution.
The process described above can be implemented in parallel using one map-reduce job under the Hadoop
framework, as illustrated in Figure 6. See appendix A for details of the map-reduce implementation.

Each mapper (Mapper 1, ..., Mapper R) processes one data split: 1) it passes the data and builds a local CF-tree with all available features, turning off the option of outlier detection; 2) it assigns a proper key to the built CF-tree; and 3) it passes the CF-tree to a certain reducer according to the assigned key. Each reducer (Reducer 1, ..., Reducer G), for each key: 1) pours together all leaf entries in the involved CF-trees; 2) starting with all available features, repeatedly builds an approximate clustering solution with the selected features and removes the most unimportant features, until all relevant features for clustering have been selected; and 3) passes the set of selected features to the controller. The controller: 1) pours together all the sets of features produced by the different reducers; and 2) selects those features which appear frequently. The selected features will be used in the next map-reduce job to build the final clustering solution.

Figure 6. One map-reduce job for feature selection.
Implementation notes:
• By default, the information-based feature importance is used for the log-likelihood distance measure, and the effect-size-based feature importance is used for the Euclidean distance.
• If no features are selected, just report all features.
9.2. Distributed Clustering
The Clustering Engine (CE) can identify clusters from distributed data with high performance and
accuracy. Specifically, it performs the following steps:
Step 1. Divide the distributed data into data splits.
Step 2. Build a local CF-tree on each data split.
Step 3. Distribute the local CF-trees to multiple computing units. Note that multiple CF-trees may be distributed to the same computing unit.
Step 4. On each computing unit, with all CF-entries in the involved CF-trees, perform a series of CF-tree based HACs to get a specified number of sub-clusters.
Step 5. Pour together all the sub-clusters produced by the different computing units, and perform matrix based HAC to get the final clusters.
The number of final clusters is determined automatically or is fixed, depending on the settings.
The process described above can be implemented in parallel using one map-reduce job under the Hadoop
framework, as illustrated in Figure 7. See appendix B for details of the map-reduce implementation.

Each mapper (Mapper 1, ..., Mapper R) processes one data split: 1) it passes the data and builds a local CF-tree with the set of specified features (suppose the option of outlier detection is turned on); 2) it assigns a proper key to the built CF-tree and also the CF-outliers; and 3) it passes the CF-tree and CF-outliers to a certain reducer according to the assigned key. Each reducer (Reducer 1, ..., Reducer G), for each key: 1) pours together all CF-trees and CF-outliers with the same key under consideration; 2) checks whether the allocated CF-outliers fit with any leaf entries in the CF-trees; 3) performs a series of CF-tree based HACs on the (merged) leaf entries to get a specified number of sub-clusters; and 4) passes the sub-clusters and remaining CF-outliers to the controller. The controller: 1) pours together all sub-clusters and CF-outliers from the reducers; 2) performs matrix based HAC on the sub-clusters to get the final regular clusters; 3) checks whether CF-outliers fit with any regular clusters, and determines the true outliers; 4) computes model evaluation measures, insights, interestingness, etc.; and 5) exports PMML and StatXML.

Figure 7. One map-reduce job for distributed clustering with outlier detection.
Implementation notes:
- The number of computing units is given by equation (28), whose terms are: the number of data points suitable for CF-tree based HAC (default 50,000), the number of data points suitable for matrix based HAC (default 5,000), the minimal number of sub-clusters produced by each computing unit, and the maximal number of leaf entries in a single CF-tree.
- The number of sub-clusters produced by each computing unit is given by equation (29).
9.3. Distributed Outlier Detection
Outlier detection in the Clustering Engine builds upon the outlier handling method described previously in section 6. It is extended to the distributed setting with the following steps:
Step 1. On each data split, perform the following:
1) Generate local candidate outliers according to the method described in section 6.
2) Distribute the local candidate outliers together with the associated CF-tree to a certain
computing unit.
Step 2. Each computing unit is allocated a set of candidate outliers and a set of CF-trees containing regular leaf entries. Each member of the set of candidate outliers is merged with the closest regular leaf entry only if the merging does not break the maximal tightness threshold among the involved CF-trees. Note that the CF-trees are passed along in order to speed up finding the closest regular leaf entry.
Step 3. Pour together all the remaining candidate outliers and the sub-clusters produced by the computing units. Do the following:
1) Perform matrix based HAC on the sub-clusters, and get the final regular clusters.
2) Keep only the candidate outliers whose distance from the center of the outlier candidate to the closest regular cluster is greater than the corresponding cluster distance threshold.
3) Merge the rest of the candidate outliers with their closest regular clusters, and update the distance threshold for each regular cluster.
4) For each remaining outlier cluster, compute its outlier strength.
5) Sort the remaining outlier clusters according to outlier strength in descending order, get the minimum outlier strength among the top P percent (default 5%) of outliers, and use it as an outlier threshold in scoring.
6) Export a specified number of the most extreme outlier clusters (default 20), along with the following measures for each cluster: cluster size, outlier strength, and probabilities of belonging to each regular cluster.
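The threshold selection in sub-step 5) above can be sketched as follows; the function name and the list-of-strengths interface are illustrative assumptions.

```python
def outlier_threshold(strengths, top_percent=5.0):
    """Sort outlier-cluster strengths in descending order and return the
    minimum strength among the top P percent (default P = 5), which is
    then used as the outlier threshold in scoring."""
    ranked = sorted(strengths, reverse=True)
    count = max(1, int(round(len(ranked) * top_percent / 100.0)))
    return ranked[count - 1]
```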
Outlier strength of a cluster s is computed as

O(s) = Σ_j p(j|s) · d(j,s) / D_j ,    (30)

where D_j is the distance threshold of cluster j, that is, the maximum distance from cluster j to the center of each of its starting sub-clusters in matrix based HAC, d(j,s) is the distance from cluster j to the center of cluster s, and p(j|s) is the probability of cluster s belonging to cluster j, that is,

p(j|s) = exp(−d(j,s)) / Σ_{j'} exp(−d(j',s)) .    (31)

Notice that the distance between cluster j and the center of cluster s is computed by considering the center of cluster s as a singleton cluster. The cluster center herein is defined as the mean for a continuous feature and the mode for a categorical feature.
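A minimal sketch of this computation, assuming the distances d(j,s) and thresholds D_j are already known; the softmax form of p(j|s) follows the reading of equation (31) above:

```python
import math

def outlier_strength(d, thresholds):
    """d[j]: distance from outlier cluster s to regular cluster j;
    thresholds[j]: distance threshold D_j of regular cluster j.
    Returns O(s) = sum_j p(j|s) * d[j] / D_j, with p(j|s) a softmax
    over negative distances (eq. 31)."""
    z = [math.exp(-dj) for dj in d]
    total = sum(z)
    p = [zj / total for zj in z]
    return sum(pj * dj / Dj for pj, dj, Dj in zip(p, d, thresholds))
```

With a single regular cluster the probability is 1, so the strength reduces to d / D.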
10. Cluster Membership Assignment
10.1. Without Outlier-Handling
Assign a case to the closest cluster according to the distance measure. Meanwhile, produce the
probabilities of the case belonging to each regular cluster.
10.2. With Outlier-Handling
10.2.1. Legacy Method
Log-likelihood distance
Assume outliers follow a uniform distribution. Calculate both the log-likelihood resulting from assigning a case to a noise cluster and that resulting from assigning it to the closest non-noise cluster. The case is then assigned to the cluster that leads to the larger log-likelihood. This is equivalent to assigning a case to its closest non-noise cluster if the distance between them is smaller than a critical value log(V), where V is the product of the ranges of the continuous fields and the numbers of categories of the categorical fields. Otherwise, designate it as an outlier.
Euclidean distance
Assign a case to its closest non-noise cluster if the Euclidean distance between them is smaller than a critical value. Otherwise, designate it as an outlier.
10.2.2. New Method
When scoring a new case, we compute its outlier strength. If the computed outlier strength is greater than the outlier threshold, the case is an outlier; otherwise it belongs to the closest cluster. Meanwhile, the probabilities of the case belonging to each regular cluster are produced.
Alternatively, users can specify a customized outlier threshold (3, for example) rather than using the one
found from the data.
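The scoring rule can be sketched as follows; the distances, thresholds, and the softmax membership probabilities are assumed inputs, and the `-1` outlier marker is an illustrative convention.

```python
import math

def score_case(d, thresholds, outlier_cutoff):
    """Score one case against the regular clusters.
    d[j]: distance from the case to cluster j; thresholds[j]: cluster
    distance threshold D_j.  Flags the case as an outlier (-1) if its
    outlier strength exceeds the cutoff; otherwise assigns the closest
    cluster.  Also returns the membership probabilities."""
    z = [math.exp(-dj) for dj in d]
    total = sum(z)
    p = [zj / total for zj in z]
    strength = sum(pj * dj / Dj for pj, dj, Dj in zip(p, d, thresholds))
    if strength > outlier_cutoff:
        return -1, p                      # -1 marks an outlier
    return min(range(len(d)), key=d.__getitem__), p
```

Users can pass a customized cutoff (for example, 3) in place of the data-derived threshold, matching the note above.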
11. Clustering Model Evaluation
Clustering model evaluation enables users to understand the identified cluster structure, and also to learn
useful insights and interestingness derived from the clustering solution.
Note that clustering model evaluation can be done using cluster features and also the hierarchical
dendrogram when forming the clustering solution.
11.1. Across-Cluster Feature Importance
Across-cluster feature importance indicates how influential a feature is in building the clustering solution. This measure is very useful for understanding the clusters in the data. Moreover, it helps with feature selection, as described in section 12.2.
Across-cluster feature importance can be defined using two methods.
11.1.1. Information Criterion Based Method
If BIC is used as the information criterion, the importance of feature k is given by equation (32), whose terms involve the total valid count of feature k in the data, the grand variance, and the grand entropy. Notice that the information measure for the overall population has been decomposed feature by feature.
If AIC is used instead, across-cluster importance is given by equation (33), defined analogously.
Notice that the importance of a feature is set to zero if the information difference corresponding to the feature is negative. This applies to all the calculations of information-based importance.
11.1.2. Effect Size Based Method
This method is similar to that used for defining association interestingness for bivariate variables. See ref. [6] for details.
Categorical Feature
For a categorical feature k, compute the Pearson chi-square test statistic

χ² = Σ_{j=1}^{J} Σ_{l=1}^{L_k} (N_{jl} − E_{jl})² / E_{jl} ,    (34)

where J is the number of clusters, L_k is the number of categories of feature k, N_{jl} is the count of category l of feature k in cluster j, and the expected count is

E_{jl} = N_{j·} N_{·l} / N ,    (35)

with

N_{j·} = Σ_{l=1}^{L_k} N_{jl} ,    (36)

N_{·l} = Σ_{j=1}^{J} N_{jl} ,    (37)

N = Σ_{j=1}^{J} Σ_{l=1}^{L_k} N_{jl} .    (38)

The p-value is computed as

p-value = Prob{χ²_d > χ²} ,    (39)

in which χ²_d is a random variable that follows a chi-square distribution with d = (J−1)(L_k−1) degrees of freedom. Note that categories with zero total count will be excluded when computing the statistic and the degrees of freedom.
The effect size, Cramer's V, is

V = ( χ² / (N q) )^{1/2} ,    (40)

where

q = min(J−1, L_k−1) .    (41)

The importance of feature k is produced by the following mapping function:

Importance_k = 0 if p-value ≥ sig., and MonotoneCubicInterpolation(S_t, I_t, V) otherwise,    (42)

where sig. is the significance level (default 0.05), S_t is a set of threshold values to assess effect size, I_t is a set of corresponding thresholds of importance, and MonotoneCubicInterpolation(·) is a monotone cubic interpolation mapping function between S_t and I_t.
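The chi-square statistic and Cramer's V effect size above can be sketched directly from a J-by-L_k contingency table; the function name and table layout are illustrative assumptions, and the p-value step (which needs a chi-square CDF) is omitted.

```python
import math

def cramers_v(counts):
    """counts[j][l]: count of category l of the feature in cluster j.
    Returns (chi_square, V) following eqs. (34) and (40); rows/columns
    with zero totals are excluded, matching the note in the text."""
    cols = [l for l in range(len(counts[0]))
            if sum(row[l] for row in counts) > 0]
    rows = [j for j in range(len(counts))
            if sum(counts[j][l] for l in cols) > 0]
    N = sum(counts[j][l] for j in rows for l in cols)
    chi2 = 0.0
    for j in rows:
        Nj = sum(counts[j][l] for l in cols)          # row total
        for l in cols:
            Nl = sum(counts[r][l] for r in rows)      # column total
            expected = Nj * Nl / N
            chi2 += (counts[j][l] - expected) ** 2 / expected
    q = min(len(rows) - 1, len(cols) - 1)
    return chi2, math.sqrt(chi2 / (N * q))
```

A perfectly separating 2x2 table yields V = 1.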
Continuous Feature
For a continuous feature k, compute the F test statistic

F = MS_between / MS_within ,    (43)

where

MS_between = Σ_{j=1}^{J} N_j (x̄_j − x̄)² / (J−1) ,    (44)

MS_within = Σ_{j=1}^{J} (N_j − 1) s_j² / (N − J) ,    (45)

x̄_j and s_j² are the sample mean (46) and variance (47) of feature k in cluster j, x̄ is the grand mean, N_j is the number of valid cases of feature k in cluster j, and N = Σ_j N_j.
The F statistic is undefined if the denominator equals zero. Accordingly, the p-value is calculated as

p-value = Prob{F(d1, d2) > F} ,    (48)

in which F(d1, d2) is a random variable that follows an F-distribution with degrees of freedom d1 = J−1 and d2 = N−J.
The effect size, Eta square, is

η² = SS_between / SS_total ,    (49)

where SS_between = Σ_j N_j (x̄_j − x̄)² and SS_total = Σ_i (x_i − x̄)² over all valid cases.    (50)
The importance of feature k is produced using the same mapping function as (42), with the corresponding default effect-size thresholds.
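A sketch of the F statistic and Eta square above from raw per-cluster values; the function name and list-of-groups interface are illustrative assumptions, and the F p-value step is omitted.

```python
def eta_squared(groups):
    """groups: one list of values of a continuous feature per cluster.
    Returns (F, eta2) following eqs. (43) and (49)."""
    all_x = [x for g in groups for x in g]
    N, J = len(all_x), len(groups)
    grand = sum(all_x) / N
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    if ss_within == 0:                       # F undefined when denominator is 0
        return float("inf"), 1.0
    F = (ss_between / (J - 1)) / (ss_within / (N - J))
    return F, ss_between / (ss_between + ss_within)
```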
11.2. Within-Cluster Feature Importance
Within-cluster feature importance indicates how influential a feature is in forming a cluster. Similar to
across-cluster feature importance, within-cluster feature importance can also be defined using two
methods.
11.2.1. Information Criterion Based Method
If BIC is used as the information criterion, the importance of feature k within cluster j (j = 1, ..., J) is given by equation (51), with components defined in equations (52) and (53). Notice that j^c represents the complement set of {j} in {1, ..., J}.
If AIC is used as the information criterion, the importance of feature k within cluster j is given by equation (54), with components defined in equations (55) and (56).
11.2.2. Effect Size Based Method
Within-cluster importance is defined by comparing the distribution of the feature within a cluster with
the overall distribution.
Categorical Feature
For cluster j (j = 1, ..., J) and a categorical feature k, compute the Pearson chi-square test statistic

χ² = Σ_{l=1}^{L_k} (N_{jl} − E_{jl})² / E_{jl} ,    (57)

where the expected count under the overall distribution is

E_{jl} = N_{j·} N_{·l} / N .    (58)

The p-value is computed as

p-value = Prob{χ²_d > χ²} ,    (59)

in which χ²_d is a random variable that follows a chi-square distribution with d = L_k − 1 degrees of freedom. Note that the importance of feature k within cluster j will be undefined if N_{j·} equals zero.
The effect size is given by equation (60).
The importance of feature k within cluster j is produced using the same mapping function as (42), with the corresponding default effect-size thresholds.
Continuous Feature
For cluster j (j = 1, ..., J) and a continuous feature k, compute the t test statistic

t = (x̄_j − x̄) / se_j ,    (61)

where se_j = s_j / √N_j is the standard error of the mean of feature k in cluster j.    (62)
The p-value is calculated as

p-value = Prob{|T| > |t|} ,    (63)

in which T is a random variable that follows a t-distribution with N_j − 1 degrees of freedom.
The effect size is given by equation (64).
The importance of feature k within cluster j is produced using the same mapping function as (42), with the corresponding default effect-size thresholds.
11.3. Clustering Model Goodness
Clustering model goodness indicates the quality of a clustering solution. This measure will be computed
for the final clustering solution, and it will also be computed for approximate clustering solutions during
the process of adaptive feature selection.
Suppose there are J regular clusters, denoted as C_1, ..., C_J. Let l(i) be the regular cluster label assigned to sub-cluster i.
Then for each sub-cluster i, the Silhouette coefficient is computed approximately as

S_i = (B_i − A_i) / max(A_i, B_i) ,    (65)

where A_i is the weighted average distance from the center of sub-cluster i to the center of every other sub-cluster assigned to the same regular cluster, that is,

A_i = ( Σ_{i'≠i, l(i')=l(i)} w_{i'} d(i, i') ) / ( Σ_{i'≠i, l(i')=l(i)} w_{i'} ) ,    (66)

and B_i is the minimal weighted average distance from the center of sub-cluster i to the centers of the sub-clusters in a different regular cluster, taken over all the other regular clusters, that is,

B_i = min_{j≠l(i)} ( Σ_{i': l(i')=j} w_{i'} d(i, i') ) / ( Σ_{i': l(i')=j} w_{i'} ) ,    (67)

with w_i the weight (size) of sub-cluster i and d(i, i') the distance between the centers of sub-clusters i and i'.
Clustering model goodness is defined as the weighted average Silhouette coefficient over all the starting sub-clusters in the final stage of regular HAC, that is,

Goodness = ( Σ_i w_i S_i ) / ( Σ_i w_i ) .    (68)

The average Silhouette coefficient ranges between −1 (indicating a very poor model) and +1 (indicating an excellent model). As found by Kaufman and Rousseeuw (1990), an average Silhouette greater than 0.5 indicates reasonable partitioning of the data, while a value lower than 0.2 means that the data does not exhibit cluster structure. In this regard, we can use the following function to map Goodness into an interestingness score:

Interestingness = MonotoneCubicInterpolation(S_t, I_t, Goodness) ,    (69)

where S_t is a set of threshold values based on the benchmarks above, and I_t is the set of corresponding interestingness thresholds.
Implementation notes:
- Please refer to section 9.3 for the definition of cluster center and for the calculation of distance.
- When there is only a single sub-cluster in a regular cluster, the tightness of the sub-cluster is used in place of the average within-cluster distance.
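The approximate Silhouette in equations (65)–(68) can be sketched over sub-cluster centers as follows; the function name and the centers/weights/labels interface are illustrative assumptions, and Euclidean distance stands in for the configured measure.

```python
import math

def model_goodness(centers, weights, labels):
    """Weighted average Silhouette over sub-clusters (eqs. 65-68).
    A_i: weighted mean distance to the other sub-clusters in the same
    regular cluster; B_i: smallest such mean over the other clusters."""
    n = len(centers)
    sil_sum, weight_sum = 0.0, 0.0
    for i in range(n):
        per_cluster = {}                      # label -> (weighted dist, weight)
        for j in range(n):
            if j == i:
                continue
            d, w = math.dist(centers[i], centers[j]), weights[j]
            num, den = per_cluster.get(labels[j], (0.0, 0.0))
            per_cluster[labels[j]] = (num + w * d, den + w)
        own = per_cluster.pop(labels[i], None)
        if own is None or not per_cluster:
            continue  # lone sub-cluster: handled by the tightness note above
        A = own[0] / own[1]
        B = min(num / den for num, den in per_cluster.values())
        sil_sum += weights[i] * (B - A) / max(A, B)
        weight_sum += weights[i]
    return sil_sum / weight_sum if weight_sum else 0.0
```

Two tight, well-separated regular clusters score close to +1.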
11.4. Special Clusters
With the clustering solution, we can find special clusters, which could be regular clusters with high quality, extreme outlier clusters, and so on.
11.4.1. Regular Cluster Ranking
To select the most useful or interesting regular clusters, we can rank them according to any of the
measures described below.
Cluster tightness
Cluster tightness is given by equation (9) or (15).
Cluster tightness is not scale-free, and it is a measure of cluster cohesion.
Cluster importance
Cluster importance indicates the quality of the regular cluster in the clustering solution. A higher importance value means a better quality of the regular cluster.
If BIC is used as the information criterion, the importance for regular cluster j is given by equation (70); if AIC is used, it is given by equation (71).
Cluster importance is scale-free, and in some sense it is a normalized measure of cluster cohesion.
Cluster goodness
The goodness measure for regular cluster j is defined as the weighted average Silhouette coefficient over all the starting sub-clusters in regular cluster j, that is,

Goodness_j = ( Σ_{i: l(i)=j} w_i S_i ) / ( Σ_{i: l(i)=j} w_i ) ,    (72)

where the sums run over the starting sub-clusters assigned to cluster j, w_i is the weight of sub-cluster i, and S_i is its Silhouette coefficient as in equation (65).
We can also map Goodness_j into an interestingness score using equation (69).
Cluster goodness is also scale-free, and it is a measure balancing cluster cohesion and cluster separation.
11.4.2. Outlier Clusters Ranking
For each outlier cluster, we have the following measures: cluster size and outlier strength. Either measure can be used to rank outlier clusters, so as to find the most interesting ones.
11.4.3. Outlier Clusters Grouping
Outlier clusters can be grouped by the nearest regular cluster, using probability values.
Appendix A. Map-Reduce Job for Feature Selection
Mapper
Each mapper will handle one data split and use it to build a local CF-tree. The local CF-tree is assigned a unique key. Notice that if the option of outlier handling is turned on, outliers will not be passed to reducers in the case of feature selection.
Let each local CF-tree be identified by its assigned key.
The map function is as follows.

Inputs:
- Data split
- <Parameter settings>
  - MainMemory                  // Default 80*1024 bytes
  - OutlierHandling             // {on, off}, default on
  - OutlierHandlingDiskSpace    // Default 20% of MainMemory
  - OutlierQualification       // Default 10 cases
  - DelayedSplit                // {on, off}, default on
  - DelayedSplitDiskSpace       // Default 10% of MainMemory
  - Adjustment                  // Default 0.01
  - DistanceMeasure             // {Log-likelihood, Euclidean}, default Log-likelihood
  - InitialThreshold            // Default 0
  - NonLeafNodeBranchingFactor  // Default 8
  - LeafNodeBranchingFactor     // Default 8
  - MaxTreeHeight               // Default 3

Outputs:
- The keyed CF-tree

Procedure:
1. Build a CF-tree on the data split based on the specified features and settings;
2. Assign a key to the CF-tree;
3. Export the keyed CF-tree;
Reducer
Each reducer can handle several keys. For each key, it first pours together all CF-trees which have the same key. Then it builds approximate clustering solutions iteratively in order to find the most influential features. The selected features will be passed to the controller.
Let F denote the set of features produced for a given key.
The reduce function for each key is as follows.

Inputs:
- The CF-trees with the given key
- <Parameter settings>
  - Adjustment              // Default 0.01
  - DistanceMeasure         // {Log-likelihood, Euclidean}, default Log-likelihood
  - AutoClustering          // {on, off}, default on
  - MaximumClusterNumber    // Default 15
  - MinimumClusterNumber    // Default 2
  - FixedClusterNumber      // Default 5
  - ClusteringCriterion     // {BIC, AIC}, default BIC
  - AutoClusteringMethod    // {information criterion, distance jump, maximum, minimum}, default minimum

Outputs:
- F, the set of selected features
Procedure:
1. Let F be the set of all available features;
2. With all leaf entries in the CF-trees and using the features in F, perform matrix based HAC to get an approximate cluster solution S0. Suppose the number of approximate final clusters is J, which is determined automatically or using a fixed one depending on the settings;
   Compute the importance of each feature in F;
   // Importance values should not be truncated
   Compute I(S0), the information criterion of S0;
3. Let I* = I(S0);
   Find F0, the set of features in F with non-positive importance;
   Let F = F \ F0;
4. With all leaf entries in the CF-trees and using the features in F, perform matrix based HAC to get a new solution S1 with fixed J;
   Compute I(S1), the information criterion of S1;
   Compute I(F0), the information of all the discarded features as determined by S1, using the BIC- or AIC-based form depending on the setting;
   // Though the discarded features are not used to build S1, their
   // statistics are still available in the CFs of the final clusters in S1.
5. While (I(S1) + I(F0) > I*) {
      Find the most important feature in F0;
      Move that feature from F0 into F;
      With all leaf entries in the CF-trees and using the features in F, perform matrix based HAC to get a new solution S1 with fixed J;
      Compute I(S1), the information criterion of S1;
      Compute I(F0), the information of all the discarded features;
   }
   Let I* = I(S1) + I(F0);
6. While (F is not empty) {
      Find the most unimportant feature in F;
      Move that feature from F into F0;
      If (F is empty), break;
      With all leaf entries in the CF-trees and using the features in F, perform matrix based HAC to get a new solution S1 with fixed J;
      Compute I(S1), the information criterion of S1;
      Compute I(F0), the information of all the discarded features;
      If (I(S1) + I(F0) <= I*) {
         Accept the removal: let I* = I(S1) + I(F0) and keep the reduced F;
      }
   }
7. Export F;
Controller
The controller pours together all sets of features produced by reducers, and selects those features which
appear frequently. The selected features will be used in the next map-reduce job to build the final
clustering solution.
The controller runs the following procedure.
Inputs:
- <Parameter settings>
  - MinFrequency    // Default 50%

Outputs:
- The set of selected features

Procedure:
1. Let the frequency threshold be MinFrequency, and let the set of selected features be empty;
2. Launch a map-reduce job, and get the sets of features from the reducers;
3. Compute the occurring frequency of each feature across the reducers' sets;
4. For each feature, if its occurring frequency is larger than the threshold, add the feature into the set of selected features;
5. Export the set of selected features;
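A minimal sketch of the controller's frequency filter, assuming each reducer's output is a set of feature names; the function name and defaults are illustrative.

```python
from collections import Counter

def select_features(reducer_feature_sets, min_frequency=0.5):
    """Controller step: keep the features whose occurrence rate across
    the reducers' selected sets exceeds MinFrequency (default 50%)."""
    counts = Counter(f for s in reducer_feature_sets for f in set(s))
    n_reducers = len(reducer_feature_sets)
    return {f for f, c in counts.items() if c / n_reducers > min_frequency}
```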
Appendix B. Map-Reduce Job for Distributed Clustering
Mapper
Each mapper will handle one data split and use it to build a local CF-tree.
Local outlier candidates and the local CF-tree will be distributed to a certain reducer. This is achieved by assigning them a key, which is randomly selected from the key set; the number of keys is computed by equation (28).
For convenience, in the following we refer to leaf entries as pre-clusters. Let each data split produce a CF-tree and a set of outliers, both carrying the assigned key.
The map function is as follows.

Inputs:
- Data split
- <Parameter settings>
  - MainMemory                  // Default 80*1024 bytes
  - OutlierHandling             // {on, off}, default on
  - OutlierHandlingDiskSpace    // Default 20% of MainMemory
  - OutlierQualification       // Default 10 cases
  - DelayedSplit                // {on, off}, default on
  - DelayedSplitDiskSpace       // Default 10% of MainMemory
  - Adjustment                  // Default 0.01
  - DistanceMeasure             // {Log-likelihood, Euclidean}, default Log-likelihood
  - InitialThreshold            // Default 0
  - NonLeafNodeBranchingFactor  // Default 8
  - LeafNodeBranchingFactor     // Default 8
  - MaxTreeHeight               // Default 3

Outputs:
- The keyed CF-tree
- The keyed set of outlier candidates
- The tightness threshold

Procedure:
1. Build a CF-tree on the data split based on the specified features and settings;
2. If (DelayedSplit='on'), absorb the cases in disk space, with tree rebuilding if necessary;
3. If (OutlierHandling='on'), {
      Absorb the entries in disk space without tree rebuilding;
      Check the final CF-tree for outliers;
      Mark the identified outliers and the remaining entries in disk space as local outlier candidates;
   }
4. Assign a key to the CF-tree and the set of outlier candidates;
5. Export the keyed CF-tree, the keyed set of outlier candidates, and the tightness threshold;
Reducer
Each reducer can handle several keys. For each key, it first pours together all CF-trees which have the same key. Then, with all leaf entries in the involved CF-trees, it performs a series of CF-tree based HACs to get a specified number of sub-clusters. Finally, the sub-clusters are passed to the controller. The number of sub-clusters produced for each key is computed by equation (29).
For each key, let the reduce step gather the data splits carrying that key, and produce a set of sub-clusters and a set of remaining outliers.
The reduce function for each key is as follows.

Inputs:
- The CF-trees with the given key
- The outlier candidate sets with the given key
- The tightness thresholds of the involved CF-trees
- <Parameter settings>
  - OutlierHandling       // {on, off}, default on
  - Adjustment            // Default 0.01
  - DistanceMeasure       // {Log-likelihood, Euclidean}, default Log-likelihood
  - NumSubClusters        // Number of sub-clusters produced for each key
  - MinSubClusters        // Minimum sub-clusters produced for each key, default 500
  - MaxDataPointsCFHAC    // Maximum data points for HAC, default 50,000

Outputs:
- The set of sub-clusters
- The set of remaining outliers

Procedure:
1. Let the target number of sub-clusters be NumSubClusters, the minimum be MinSubClusters, and the CF-HAC capacity be MaxDataPointsCFHAC;
2. Collect the CF-trees with the given key;
3. Collect the outlier candidates with the given key;
4. If OutlierHandling is 'on', {
      For each member of the set of outlier candidates, {
         Find the closest leaf entry in the set of CF-trees;
         If the closest leaf entry can absorb the outlier member without violating the tightness threshold requirement, then merge them, and update the set of outlier candidates and the involved CF-tree;
      }
   }
5. Let c be the total number of leaf entries in the CF-trees;
   While c exceeds the CF-HAC capacity, {
      Divide the set of CF-trees randomly into groups;
      For each group which has a larger number of leaf entries than its allowance, perform CF-tree based HAC to reduce its leaf entries;
      Update the set of CF-trees with the new CF-trees produced in the above step;
      Compute the total number of remaining leaf entries c;
   }
6. With the set of CF-trees, perform CF-tree based HAC to get the specified number of sub-clusters;
7. Export the sub-clusters and the remaining outliers;
Controller
The controller pours together all sub-clusters produced by reducers, and performs matrix based HAC to
get the final clusters. It identifies outlier clusters as well if the option of outlier handling is turned on.
Moreover, it computes model evaluation measures, and derives insights and interestingness from the
clustering results.
The controller runs the following procedure.
Inputs:
- <Parameter settings>
  - MainMemory                  // Default 80*1024 bytes
  - OutlierHandling             // {on, off}, default on
  - OutlierHandlingDiskSpace    // Default 20% of MainMemory
  - OutlierQualification       // Default 10 cases
  - ExtremeOutlierClusters      // Default 20
  - DelayedSplit                // {on, off}, default on
  - DelayedSplitDiskSpace       // Default 10% of MainMemory
  - Adjustment                  // Default 0.01
  - DistanceMeasure             // {Log-likelihood, Euclidean}, default Log-likelihood
  - InitialThreshold            // Default 0
  - NonLeafNodeBranchingFactor  // Default 8
  - LeafNodeBranchingFactor     // Default 8
  - MaximumTreeHeight           // Default 3
  - AutoClustering              // {on, off}, default on
  - MaximumClusterNumber        // Default 15
  - MinimumClusterNumber        // Default 2
  - FixedClusterNumber          // Default 5
  - ClusteringCriterion         // {BIC, AIC}, default BIC
  - AutoClusteringMethod        // {information criterion, distance jump, maximum, minimum}, default minimum
  - MinSubClusters              // Minimum sub-clusters produced for each key, default 500
  - MaxDataPointsCFHAC          // Maximum data points for CF-tree based HAC, default 50,000
  - MaxDataPointsMatrixHAC      // Maximum data points for matrix based HAC, default 5,000

Outputs:
- PMML
- StatXML

Procedure:
1. Let the minimum number of sub-clusters be MinSubClusters, the CF-HAC capacity be MaxDataPointsCFHAC, and the matrix-HAC capacity be MaxDataPointsMatrixHAC;
2. Compute the number of keys, NumKeys, by equation (28);
   // Each mapper is assigned a key which is selected randomly from the keys
3. Compute the number of sub-clusters produced for each key, NumSubClusters, by equation (29);
4. Launch a map-reduce job, and get the sets of sub-clusters and outliers from the reducers;
5. Pour together all the sub-clusters and outliers;
6. Perform matrix based HAC on the sub-clusters to get the set of final regular clusters, where the number of final clusters is determined automatically or using a fixed one depending on the settings;
7. If OutlierHandling is 'on', perform steps 2) to 6) of Step 3 in section 9.3;
8. Compute model evaluation measures, insights, and interestingness;
9. Export the clustering model in PMML, and other statistics in StatXML;
Implementation notes:
- The general procedure of the controller consists of both the controller procedure in appendix A and that in appendix B.
Appendix C. Procedure for MonotoneCubicInterpolation( )

y = MonotoneCubicInterpolation(S_t, I_t, x),

where
x is the input statistic that characterizes fields or field pairs in particular aspects (for example, distribution, association strength, etc.). Its value range must be bounded below, and it must have a monotonically increasing relationship with the interestingness score threshold values. If the two conditions are not met, a conversion should be carried out.
S_t is a set of distinct threshold values for the input statistics, which have been accepted and commonly used by expert users to interpret the statistics. Positive infinity (+∞) is included if the input statistic is not bounded from above.
I_t is a set of distinct threshold values for the interestingness scores that S_t corresponds to. The threshold values must be between 0 and 1.
The sizes of S_t and I_t must be the same. There are at least two values in S_t excluding positive infinity (+∞).

Pre-processing
Let x_1 < x_2 < ... < x_n denote the sorted values in S_t, where n is the number of values in S_t. Let y_1 < y_2 < ... < y_n denote the corresponding values in I_t.
Condition A: There are more than two threshold values for input statistics, and they are all finite
numbers
Preparing for cubic interpolation
The following steps should be taken to prepare for constructing the cubic interpolation function.
Step 1. Compute the slopes of the secant lines between successive points:

Δ_k = (y_{k+1} − y_k) / (x_{k+1} − x_k) for k = 1, ..., n−1.

Step 2. Initialize the tangent at every interior data point as the average of the secants:

m_k = (Δ_{k−1} + Δ_k) / 2 for k = 2, ..., n−1;

these may be updated in further steps. For the endpoints, use one-sided differences: m_1 = Δ_1 and m_n = Δ_{n−1}.
Step 3. Let α_k = m_k / Δ_k and β_k = m_{k+1} / Δ_k for k = 1, ..., n−1.
If α_k or β_k is computed to be negative, then the input data points are not strictly monotone. In such cases, piecewise monotone curves can still be generated by choosing m_k = m_{k+1} = 0, although global strict monotonicity is not possible.
Step 4. Update m_k:
If α_k² + β_k² > 9, then set m_k = τ_k α_k Δ_k and m_{k+1} = τ_k β_k Δ_k, where τ_k = 3 / √(α_k² + β_k²).
Note:
1. Only one pass of the algorithm is required.
2. For k = 1, ..., n−1, if Δ_k = 0 (that is, two successive y_k = y_{k+1} are equal), then set m_k = m_{k+1} = 0, as the spline connecting these points must be flat to preserve monotonicity. Ignore steps 3 and 4 for those k.
Cubic interpolation
After the preprocessing, evaluation of the interpolated spline is equivalent to a cubic Hermite spline, using the data x_k, y_k, and m_k for k = 1, ..., n.
To evaluate x in the range [x_k, x_{k+1}] for k = 1, ..., n−1, calculate

h = x_{k+1} − x_k and t = (x − x_k) / h;

then the interpolant is

f(x) = y_k h00(t) + h m_k h10(t) + y_{k+1} h01(t) + h m_{k+1} h11(t),

where hii(t) are the basis functions for the cubic Hermite spline:

h00(t) = 2t³ − 3t² + 1
h10(t) = t³ − 2t² + t
h01(t) = −2t³ + 3t²
h11(t) = t³ − t²
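Steps 1–4 and the Hermite evaluation above can be sketched as follows (finite thresholds, i.e. Condition A below); the function name is an illustrative assumption.

```python
def monotone_cubic(xs, ys, x):
    """Fritsch-Carlson monotone cubic interpolation of the sorted
    threshold pairs (xs, ys), evaluated at x."""
    n = len(xs)
    # Step 1: secant slopes
    delta = [(ys[k+1] - ys[k]) / (xs[k+1] - xs[k]) for k in range(n - 1)]
    # Step 2: tangents as secant averages, one-sided at the endpoints
    m = [delta[0]] + [(delta[k-1] + delta[k]) / 2 for k in range(1, n - 1)] + [delta[-1]]
    for k in range(n - 1):
        if delta[k] == 0.0:                 # note 2: flat segment
            m[k] = m[k+1] = 0.0
            continue
        a, b = m[k] / delta[k], m[k+1] / delta[k]   # step 3
        s = a * a + b * b
        if s > 9.0:                          # step 4: restrict the tangents
            tau = 3.0 / s ** 0.5
            m[k], m[k+1] = tau * a * delta[k], tau * b * delta[k]
    # locate the interval [x_k, x_{k+1}] containing x
    k = 0
    while k < n - 2 and x > xs[k+1]:
        k += 1
    h = xs[k+1] - xs[k]
    t = (x - xs[k]) / h
    h00, h10 = 2*t**3 - 3*t**2 + 1, t**3 - 2*t**2 + t
    h01, h11 = -2*t**3 + 3*t**2, t**3 - t**2
    return ys[k]*h00 + h*m[k]*h10 + ys[k+1]*h01 + h*m[k+1]*h11
```

On data that is already linear, the spline reproduces the line exactly; the knot values are always reproduced.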
Condition B: There are two threshold values for input statistics
As clarified at the beginning, there are at least two values in S_t excluding positive infinity (+∞), so both must be finite numbers when there are only two threshold values.
In this case the mapping function is a straight line connecting (x_1, y_1) and (x_2, y_2).
Condition C: Threshold values include infinity
Note that there are at least two values in S_t excluding positive infinity (+∞). Take the last three statistic threshold values and the corresponding threshold values for the interestingness scores from the sorted lists; this gives three pairs of data, (x_{n−2}, y_{n−2}), (x_{n−1}, y_{n−1}), and (x_n, y_n).
An exponential function can be defined by the three pairs.
If n = 3, which means there are only two distinct values in S_t excluding positive infinity (+∞), the exponential function is employed for evaluating x in the range [x_1, +∞).
Otherwise, the exponential function is used for evaluating x in the range [x_{n−1}, +∞). To evaluate x in the range [x_1, x_{n−1}), use the procedures under "Condition A: There are more than two threshold values for input statistics, and they are all finite numbers" with the finite data pairs. To ensure a smooth transition to the exponential function, the tangent m_{n−1} at the data point (x_{n−1}, y_{n−1}) is taken from the exponential function.
References
[1] Chiu, T. (2000a). mBIRCH Clustering Algorithm (Phase 1 – Preclustering). IBM SPSS Internal Document.
[2] Chiu, T., Fang, D., Chen, J., Wang, Y., and Jeris, C. (2001). A Robust and Scalable Clustering Algorithm for Mixed Type Attributes in Large Database Environment. Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 263.
[3] Chiu, T. (1999b). Hierarchical Agglomerative Clustering Algorithm. IBM SPSS Internal Document.
[4] Chiu, T. (2004). Algorithm Design: Enhancements of Two-Step Clustering. IBM SPSS Internal Document.
[5] Fang, D. (2000). Auto-Cluster in SPSS Clustering Component. IBM SPSS Internal Document.
[6] Xu, J. (2011). ADD – Interestingness and Insights. IBM SPSS Internal Document.
[7] Zhang, T., Ramakrishnan, R., and Livny, M. (1996). BIRCH: An Efficient Data Clustering Method for Very Large Databases. Proceedings of the ACM SIGMOD Conference on Management of Data, pp. 103–114, Montreal, Canada.
[8] Kaufman, L., and Rousseeuw, P. J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. Wiley Series in Probability and Statistics. John Wiley and Sons, New York.
Appendix
A
Notices
This information was developed for products and services offered worldwide.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently
available in your area. Any reference to an IBM product, program, or service is not intended to
state or imply that only that IBM product, program, or service may be used. Any functionally
equivalent product, program, or service that does not infringe any IBM intellectual property right
may be used instead. However, it is the user’s responsibility to evaluate and verify the operation
of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents.
You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785,
U.S.A.
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing, Legal and Intellectual Property Law, IBM Japan Ltd., 1623-14,
Shimotsuruma, Yamato-shi, Kanagawa 242-8502 Japan.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties
in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are
periodically made to the information herein; these changes will be incorporated in new editions
of the publication. IBM may make improvements and/or changes in the product(s) and/or the
program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and
do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites
are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate
without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including
this one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Software Group, Attention: Licensing, 233 S. Wacker Dr., Chicago, IL 60606, USA.
© Copyright IBM Corporation 1994, 2015.
Such information may be available, subject to appropriate terms and conditions, including in
some cases, payment of a fee.
The licensed program described in this document and all licensed material available for it are
provided by IBM under terms of the IBM Customer Agreement, IBM International Program
License Agreement or any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore,
the results obtained in other operating environments may vary significantly. Some measurements
may have been made on development-level systems and there is no guarantee that these
measurements will be the same on generally available systems. Furthermore, some measurements
may have been estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products,
their published announcements or other publicly available sources. IBM has not tested those
products and cannot confirm the accuracy of performance, compatibility or any other claims
related to non-IBM products. Questions on the capabilities of non-IBM products should be
addressed to the suppliers of those products.
All statements regarding IBM’s future direction or intent are subject to change or withdrawal
without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations.
To illustrate them as completely as possible, the examples include the names of individuals,
companies, brands, and products. All of these names are fictitious and any similarity to the names
and addresses used by an actual business enterprise is entirely coincidental.
If you are viewing this information softcopy, the photographs and color illustrations may not
appear.
Trademarks
IBM, the IBM logo, ibm.com, and SPSS are trademarks of IBM Corporation, registered in
many jurisdictions worldwide. A current list of IBM trademarks is available on the Web at
http://www.ibm.com/legal/copytrade.shtml.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel
Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Other product and service names might be trademarks of IBM or other companies.
Bibliography
Aggarwal, C. C., and P. S. Yu. 1998. Online generation of association rules. In: Proceedings of the
14th International Conference on Data Engineering, Los Alamitos, Calif.: IEEE Computer Society
Press, 402–411.
Agrawal, R., and R. Srikant. 1994. Fast Algorithms for Mining Association Rules. In:
Proceedings of the 20th International Conference on Very Large Databases, J. B. Bocca, M. Jarke,
and C. Zaniolo, eds. San Francisco: Morgan Kaufmann, 487–499.
Agrawal, R., and R. Srikant. 1995. Mining Sequential Patterns. In: Proceedings of the Eleventh
International Conference on Data Engineering, Los Alamitos, Calif.: IEEE Computer Society
Press, 3–14.
Agresti, A., J. G. Booth, and B. Caffo. 2000. Random-effects Modeling of Categorical Response
Data. Sociological Methodology, 30, 27–80.
Aitkin, M., D. Anderson, B. Francis, and J. Hinde. 1989. Statistical Modelling in GLIM. Oxford:
Oxford Science Publications.
Albert, A., and J. A. Anderson. 1984. On the Existence of Maximum Likelihood Estimates in
Logistic Regression Models. Biometrika, 71, 1–10.
Anderson, T. W. 1958. Introduction to multivariate statistical analysis. New York: John Wiley &
Sons, Inc.
Arya, S., and D. M. Mount. 1993. Algorithms for fast vector quantization. In: Proceedings of the
Data Compression Conference 1993, 381–390.
Belsley, D. A., E. Kuh, and R. E. Welsch. 1980. Regression diagnostics: Identifying influential
data and sources of collinearity. New York: John Wiley and Sons.
Biggs, D., B. de Ville, and E. Suen. 1991. A method of choosing multiway partitions for
classification and decision trees. Journal of Applied Statistics, 18, 49–62.
Biller, B., and S. Ghosh. 2006. Multivariate input processes. In: Handbooks in Operations
Research and Management Science: Simulation, B. L. Nelson, and S. G. Henderson, eds.
Amsterdam: Elsevier Science, 123–153.
Bishop, C. M. 1995. Neural Networks for Pattern Recognition, 3rd ed. Oxford: Oxford University
Press.
Box, G. E. P., and D. R. Cox. 1964. An analysis of transformations. Journal of the Royal
Statistical Society, Series B, 26, 211–246.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 1994. Time series analysis: Forecasting and
control, 3rd ed. Englewood Cliffs, N.J.: Prentice Hall.
Breiman, L., J. H. Friedman, R. A. Olshen, and C. J. Stone. 1984. Classification and Regression
Trees. New York: Chapman & Hall/CRC.
Breslow, N. E. 1974. Covariance analysis of censored survival data. Biometrics, 30, 89–99.
Brockwell, P. J., and R. A. Davis. 1991. Time Series: Theory and Methods, 2nd ed. New York:
Springer-Verlag.
Cain, K. C., and N. T. Lange. 1984. Approximate case influence for the proportional hazards
regression model with censored data. Biometrics, 40, 493–499.
Cameron, A. C., and P. K. Trivedi. 1998. Regression Analysis of Count Data. Cambridge:
Cambridge University Press.
Chang, C. C., and C. J. Lin. 2003. LIBSVM: A library for support vector machines. Technical
Report. Taipei, Taiwan: Department of Computer Science, National Taiwan University.
Chow, C. K., and C. N. Liu. 1968. Approximating discrete probability distributions with
dependence trees. IEEE Transactions on Information Theory, 14, 462–467.
Cooley, W. W., and P. R. Lohnes. 1971. Multivariate data analysis. New York: John Wiley &
Sons, Inc.
Cox, D. R. 1972. Regression models and life tables (with discussion). Journal of the Royal
Statistical Society, Series B, 34, 187–220.
Cunningham, P., and S. J. Delaney. 2007. k-Nearest Neighbor Classifiers. Technical Report
UCD-CSI-2007-4. Dublin, Ireland: School of Computer Science and Informatics, University
College Dublin.
Dempster, A. P. 1969. Elements of Continuous Multivariate Analysis. Reading, MA:
Addison-Wesley.
Diggle, P. J., P. Heagerty, K. Y. Liang, and S. L. Zeger. 2002. The Analysis of Longitudinal
Data, 2nd ed. Oxford: Oxford University Press.
Dixon, W. J. 1973. BMD Biomedical computer programs. Los Angeles: University of California
Press.
Dobson, A. J. 2002. An Introduction to Generalized Linear Models, 2nd ed. Boca Raton, FL:
Chapman & Hall/CRC.
Dong, J., C. Y. Suen, and A. Krzyzak. 2005. Fast SVM training algorithm with decomposition
on very large data sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27,
603–618.
Dougherty, J., R. Kohavi, and M. Sahami. 1995. Supervised and unsupervised discretization
of continuous features. In: Proceedings of the Twelfth International Conference on Machine
Learning, Los Altos, CA: Morgan Kaufmann, 194–202.
Drucker, H. 1997. Improving regressors using boosting techniques. In: Proceedings of the 14th
International Conference on Machine Learning, D. H. Fisher, Jr., ed. San Mateo, CA: Morgan
Kaufmann, 107–115.
Dunn, P. K., and G. K. Smyth. 2005. Series Evaluation of Tweedie Exponential Dispersion Model
Densities. Statistics and Computing, 15, 267–280.
Dunn, P. K., and G. K. Smyth. 2001. Tweedie Family Densities: Methods of Evaluation. In:
Proceedings of the 16th International Workshop on Statistical Modelling, Odense, Denmark.
D’Agostino, R., and M. Stephens. 1986. Goodness-of-Fit Techniques. New York: Marcel Dekker.
Fahrmeir, L., and G. Tutz. 2001. Multivariate Statistical Modelling Based on Generalized Linear
Models, 2nd ed. New York: Springer-Verlag.
Fan, R. E., P. H. Chen, and C. J. Lin. 2005. Working set selection using the second order
information for training SVM. Technical Report. Taipei, Taiwan: Department of Computer
Science, National Taiwan University.
Fayyad, U., and K. Irani. 1993. Multi-interval discretization of continuous-value attributes for
classification learning. In: Proceedings of the Thirteenth International Joint Conference on
Artificial Intelligence, San Mateo, CA: Morgan Kaufmann, 1022–1027.
Fine, T. L. 1999. Feedforward Neural Network Methodology, 3rd ed. New York: Springer-Verlag.
Fox, J., and G. Monette. 1992. Generalized collinearity diagnostics. Journal of the American
Statistical Association, 87, 178–183.
Fox, J. 1997. Applied Regression Analysis, Linear Models, and Related Methods. Thousand
Oaks, CA: SAGE Publications, Inc.
Freund, Y., and R. E. Schapire. 1995. A decision theoretic generalization of on-line learning and
an application to boosting. In: Computational Learning Theory: Second European Conference,
EuroCOLT ’95, 23–37.
Friedman, J. H., J. L. Bentley, and R. A. Finkel. 1977. An algorithm for finding best matches in
logarithm expected time. ACM Transactions on Mathematical Software, 3, 209–226.
Friedman, N., D. Geiger, and M. Goldszmidt. 1997. Bayesian network classifiers. Machine
Learning, 29, 131–163.
Gardner, E. S. 1985. Exponential smoothing: The state of the art. Journal of Forecasting, 4, 1–28.
Gill, J. 2000. Generalized Linear Models: A Unified Approach. Thousand Oaks, CA: Sage
Publications.
Goodman, L. A. 1979. Simple models for the analysis of association in cross-classifications
having ordered categories. Journal of the American Statistical Association, 74, 537–552.
Hardin, J. W., and J. M. Hilbe. 2003. Generalized Linear Models and Extensions. College Station, TX:
Stata Press.
Hardin, J. W., and J. M. Hilbe. 2001. Generalized Estimating Equations. Boca Raton, FL:
Chapman & Hall/CRC.
Harman, H. H. 1976. Modern Factor Analysis, 3rd ed. Chicago: University of Chicago Press.
Hartzel, J., A. Agresti, and B. Caffo. 2001. Multinomial Logit Random Effects Models. Statistical
Modelling, 1, 81–102.
Harvey, A. C. 1989. Forecasting, structural time series models and the Kalman filter. Cambridge:
Cambridge University Press.
Haykin, S. 1998. Neural Networks: A Comprehensive Foundation, 2nd ed. New York: Macmillan
College Publishing.
Heckerman, D. 1999. A Tutorial on Learning with Bayesian Networks. In: Learning in Graphical
Models, M. I. Jordan, ed. Cambridge, MA: MIT Press, 301–354.
Hedeker, D. 1999. Generalized Linear Mixed Models. In: Encyclopedia of Statistics in Behavioral
Science, B. Everitt, and D. Howell, eds. London: Wiley, 729–738.
Hendrickson, A. E., and P. O. White. 1964. Promax: a quick method for rotation to oblique simple
structure. British Journal of Statistical Psychology, 17, 65–70.
Hidber, C. 1999. Online Association Rule Mining. In: Proceedings of the ACM SIGMOD
International Conference on Management of Data, New York: ACM Press, 145–156.
Horton, N. J., and S. R. Lipsitz. 1999. Review of Software to Fit Generalized Estimating Equation
Regression Models. The American Statistician, 53, 160–169.
Hosmer, D. W., and S. Lemeshow. 2000. Applied Logistic Regression, 2nd ed. New York: John
Wiley and Sons.
Huber, P. J. 1967. The Behavior of Maximum Likelihood Estimates under Nonstandard
Conditions. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and
Probability, Berkeley, CA: University of California Press, 221–233.
Jennrich, R. I., and P. F. Sampson. 1966. Rotation for simple loadings. Psychometrika, 31,
313–323.
Johnson, N. L., S. Kotz, and A. W. Kemp. 2005. Univariate Discrete Distributions, 3rd ed.
Hoboken, New Jersey: John Wiley & Sons.
Kalbfleisch, J. D., and R. L. Prentice. 2002. The statistical analysis of failure time data, 2nd ed.
New York: John Wiley & Sons, Inc.
Kass, G. 1980. An exploratory technique for investigating large quantities of categorical data.
Applied Statistics, 29:2, 119–127.
Kaufman, L., and P. J. Rousseeuw. 1990. Finding groups in data: An introduction to cluster
analysis. New York: John Wiley and Sons.
Kohavi, R., B. Becker, and D. Sommerfield. 1997. Improving Simple Bayes. In: Proceedings of
the European Conference on Machine Learning, 78–87.
Kohonen, T. 2001. Self-Organizing Maps, 3rd ed. New York: Springer-Verlag.
Kotz, S., and J. Rene Van Dorp. 2004. Beyond Beta, Other Continuous Families of Distributions
with Bounded Support and Applications. Singapore: World Scientific Press.
Kroese, D. P., T. Taimre, and Z. I. Botev. 2011. Handbook of Monte Carlo Methods. Hoboken,
New Jersey: John Wiley & Sons.
Lane, P. W., and J. A. Nelder. 1982. Analysis of Covariance and Standardization as Instances of
Prediction. Biometrics, 38, 613–621.
Lawless, R. F. 1982. Statistical models and methods for lifetime data. New York: John Wiley &
Sons, Inc.
Lawless, J. E. 1984. Negative Binomial and Mixed Poisson Regression. The Canadian Journal
of Statistics, 15, 209–225.
Liang, K. Y., and S. L. Zeger. 1986. Longitudinal Data Analysis Using Generalized Linear
Models. Biometrika, 73, 13–22.
Lipsitz, S. H., K. Kim, and L. Zhao. 1994. Analysis of Repeated Categorical Data Using
Generalized Estimating Equations. Statistics in Medicine, 13, 1149–1163.
Liu, H., F. Hussain, C. L. Tan, and M. Dash. 2002. Discretization: An Enabling Technique. Data
Mining and Knowledge Discovery, 6, 393–423.
Loh, W. Y., and Y. S. Shih. 1997. Split selection methods for classification trees. Statistica
Sinica, 7, 815–840.
Makridakis, S. G., S. C. Wheelwright, and R. J. Hyndman. 1997. Forecasting: Methods and
applications, 3rd ed. New York: John Wiley and Sons.
Marsaglia, G., and J. Marsaglia. 2004. Evaluating the Anderson-Darling Distribution. Journal of
Statistical Software, 9:2.
McCullagh, P. 1983. Quasi-Likelihood Functions. Annals of Statistics, 11, 59–67.
McCullagh, P., and J. A. Nelder. 1989. Generalized Linear Models, 2nd ed. London: Chapman &
Hall.
McCulloch, C. E., and S. R. Searle. 2001. Generalized, Linear, and Mixed Models. New York:
John Wiley and Sons.
Melard, G. 1984. A fast algorithm for the exact likelihood of autoregressive-moving average
models. Applied Statistics, 33:1, 104–119.
Miller, M. E., C. S. Davis, and J. R. Landis. 1993. The Analysis of Longitudinal Polytomous Data:
Generalized Estimating Equations and Connections with Weighted Least Squares. Biometrics,
49, 1033–1044.
Nelder, J. A., and R. W. M. Wedderburn. 1972. Generalized Linear Models. Journal of the
Royal Statistical Society Series A, 135, 370–384.
Neter, J., W. Wasserman, and M. H. Kutner. 1990. Applied Linear Statistical Models, 3rd ed.
Homewood, Ill.: Irwin.
Pan, W. 2001. Akaike’s Information Criterion in Generalized Estimating Equations. Biometrics,
57, 120–125.
Pena, D., G. C. Tiao, and R. S. Tsay, eds. 2001. A course in time series analysis. New York:
John Wiley and Sons.
Platt, J. 2000. Probabilistic outputs for support vector machines and comparison to regularized
likelihood methods. In: Advances in Large Margin Classifiers, A. J. Smola, P. Bartlett, B.
Scholkopf, and D. Schuurmans, eds. Cambridge, MA: MIT Press, 61–74.
Pregibon, D. 1981. Logistic Regression Diagnostics. Annals of Statistics, 9, 705–724.
Prim, R. C. 1957. Shortest connection networks and some generalisations. Bell System Technical
Journal, 36, 1389–1401.
Ripley, B. D. 1996. Pattern Recognition and Neural Networks. Cambridge: Cambridge University
Press.
Saltelli, A., S. Tarantola, F. Campolongo, and M. Ratto. 2004. Sensitivity Analysis in Practice
– A Guide to Assessing Scientific Models. Chichester: John Wiley.
Saltelli, A. 2002. Making best use of model evaluations to compute sensitivity indices. Computer
Physics Communications, 145:2, 280–297.
Schatzoff, M., R. Tsao, and S. Fienberg. 1968. Efficient computing of all possible regressions.
Technometrics, 10, 769–779.
Skrondal, A., and S. Rabe-Hesketh. 2004. Generalized Latent Variable Modeling: Multilevel,
Longitudinal, and Structural Equation Models. Boca Raton, FL: Chapman & Hall/CRC.
Smyth, G. K., and B. Jorgensen. 2002. Fitting Tweedie’s Compound Poisson Model to Insurance
Claims Data: Dispersion Modelling. ASTIN Bulletin, 32, 143–157.
Sobol, I. M. 2001. Global sensitivity indices for nonlinear mathematical models and their Monte
Carlo estimates. Mathematics and Computers in Simulation, 55, 271–280.
Storer, B. E., and J. Crowley. 1985. A diagnostic for Cox regression and general conditional
likelihoods. Journal of the American Statistical Association, 80, 139–147.
Tan, P., M. Steinbach, and V. Kumar. 2006. Introduction to Data Mining. Boston: Addison-Wesley.
Tao, K. K. 1993. A closer look at the radial basis function (RBF) networks. In: Conference
Record of the Twenty-Seventh Asilomar Conference on Signals, Systems, and Computers, A.
Singh, ed. Los Alamitos, Calif.: IEEE Comput. Soc. Press, 401–405.
Tatsuoka, M. M. 1971. Multivariate analysis. New York: John Wiley & Sons, Inc.
Tuerlinckx, F., F. Rijmen, G. Molenberghs, G. Verbeke, D. Briggs, W. Van den Noortgate, M.
Meulders, and P. De Boeck. 2004. Estimation and Software. In: Explanatory Item Response
Models: A Generalized Linear and Nonlinear Approach, P. De Boeck, and M. Wilson, eds.
New York: Springer-Verlag, 343–373.
Uykan, Z., C. Guzelis, M. E. Celebi, and H. N. Koivo. 2000. Analysis of input-output clustering
for determining centers of RBFN. IEEE Transactions on Neural Networks, 11, 851–858.
Velleman, P. F., and R. E. Welsch. 1981. Efficient computing of regression diagnostics. American
Statistician, 35, 234–242.
White, H. 1980. A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test
for Heteroskedasticity. Econometrica, 48, 817–836.
Williams, D. A. 1987. Generalized Linear Models Diagnostics Using the Deviance and Single
Case Deletions. Applied Statistics, 36, 181–191.
Wolfinger, R., R. Tobias, and J. Sall. 1994. Computing Gaussian likelihoods and their derivatives
for general linear mixed models. SIAM Journal on Scientific Computing, 15:6, 1294–1310.
Wolfinger, R., and M. O'Connell. 1993. Generalized Linear Mixed Models: A Pseudo-Likelihood
Approach. Journal of Statistical Computation and Simulation, 4, 233–243.
Zeger, S. L., and K. Y. Liang. 1986. Longitudinal Data Analysis for Discrete and Continuous
Outcomes. Biometrics, 42, 121–130.
Zhang, T., R. Ramakrishnan, and M. Livny. 1996. BIRCH: An efficient data clustering method
for very large databases. In: Proceedings of the ACM SIGMOD Conference on Management
of Data, Montreal, Canada: ACM, 103–114.
Index
absolute confidence difference to prior
Apriori evaluation measure, 10
accuracy
Binary Classifier node, 49
neural networks algorithms, 295
Pass, Stream, Merge algorithms, 134
activation functions
multilayer perceptron algorithms, 286
AdaBoost
boosting algorithms, 128
adaptive boosting
boosting algorithms, 128
adjacency lattice
in sequence rules, 330
adjusted propensities algorithms, 1
adjusted R-square
in regression, 325
advanced output
in factor analysis/PCA, 152
in logistic regression, 245
in regression, 325
AICC
linear modeling algorithms, 280
Akaike information criterion
generalized linear models algorithms, 181, 199
allow splitting of merged categories (CHAID), 76
alpha factoring
in factor analysis/PCA, 143
anomaly detection
blank handling, 7
generated model, 7
overview, 3
predicted values, 7
scoring, 7
anomaly index, 6
Apriori
blank handling, 11–12
confidence (predicted values), 12
deriving rules, 9
evaluation measures, 10
frequent itemsets, 9
generated model, 12
generating rules, 10
items and itemsets, 9
maximum number of antecedents, 11
maximum number of rules, 11
minimum rules support/confidence, 11
only true values for flags, 11
options, 11
overview, 9
predicted values, 12
ruleset evaluation options, 12
area under curve
Binary Classifier node, 49
association rules, 335
Apriori, 9
Carma, 53
sequence rules, 327
auto-clustering
in TwoStep clustering, 399
automated data preparation algorithms, 13
bivariate statistics collection, 22
categorical variable handling, 25
checkpoint, 17
continuous variable handling, 31
date/time handling, 14
discretization of continuous predictors, 35
feature construction, 32
feature selection, 32
measurement level recasting, 17
missing values, 19
notation, 13
outliers, 18
predictive power, 35
principal component analysis, 33
references, 36
supervised binning, 32
supervised merge, 26
target handling, 21
transformations, 20
univariate statistics collection, 15
unsupervised merge, 30
variable screening, 17
automatic field selection
regression, 323
backward elimination
multinomial logistic regression algorithms, 247
backward field selection
in regression, 325
backward stepwise
multinomial logistic regression algorithms, 246
bagging algorithms, 125–126
accuracy, 127
diversity, 127
notation, 125
references, 130
Bayes Information Criterion (BIC)
in TwoStep clustering, 399
Bayesian information criterion
generalized linear models algorithms, 181, 199
Bayesian network algorithms, 37
binning, 38
blank handling, 47
feature selection, 38
Markov blanket algorithms, 43, 45–47
notation, 37
scoring, 47
tree augmented naïve Bayes (TAN) models, 40–43
variable types, 38
best subsets selection
linear modeling algorithms, 277
binary classifier comparison metrics, 49
binning
automatic binning in BN models, 38
CHAID predictors, 74
binomial logistic regression
algorithms, 251
BIRCH algorithm
in TwoStep clustering, 397
blank handling
Apriori, 11–12
Carma, 55, 57
Cox regression algorithms, 102
in anomaly detection, 7
in Bayesian network algorithms, 47
in C&RT, 61, 71
in CHAID, 81, 85
in Decision List algorithm, 111
in discriminant analysis, 121, 123
in factor analysis/PCA, 151–152
in k-means, 230
in k-means clustering, 232
in Kohonen models, 236–237
in logistic regression, 243, 251
in nearest neighbor algorithms, 266
in optimal binning algorithms, 303
in QUEST, 313, 320
in regression, 325–326
in scoring Decision List models, 111
in support vector machines (SVM), 377
in TwoStep clustering, 400–401
nearest neighbor algorithms, 268
blanks
imputing missing values, 223
Bonferroni adjustment
in CHAID tests, 80
boosting algorithms, 125
accuracy, 130
adaptive boosting (AdaBoost), 128
notation, 125
stagewise additive modeling (SAMME), 129
Borgelt, Christian, 9
Box-Cox transformation
automated data preparation algorithms, 21
C&RT
blank handling, 61, 71
confidence values, 70
finding splits, 60
gain summary, 69
Gini index, 62
impurity measures, 62, 64
least squared deviation index, 64
misclassification costs, 66
overview, 59
predicted values, 69
prior probabilities, 65
profits, 65
pruning, 66
risk estimates, 68
stopping rules, 64
surrogate splitting, 61
twoing index, 63
weight fields, 59
C5.0, 51
scoring, 51
Carma
blank handling, 55, 57
confidence (predicted values), 56
deriving rules, 53
exclude rules with multiple consequents, 55
frequent itemsets, 53
generated model, 56
generating rules, 54
maximum rule size, 55
minimum rules support/confidence, 55
options, 55
overview, 53
predicted values, 56
pruning value, 55
ruleset evaluation options, 56
Carma (sequence rules algorithm), 331
case weights, 60, 74
CF (cluster feature) tree
TwoStep clustering, 397
CHAID
binning of continuous predictors, 74
blank handling, 81, 85
Bonferroni adjustment, 80
chi-squared tests, 78
compared to other methods, 73
confidence values, 85
costs, 82
Exhaustive CHAID, 73
expected frequencies, 78
gain summary, 83
merging categories, 76
predicted values, 84
profits, 82
risk estimates, 82
row effects chi-squared test, 79
score values, 82
splitting nodes, 77
statistical tests, 77–80
stopping rules, 81
weight fields, 73
Chebychev distance
in Kohonen models, 235
chi-square
generalized linear models algorithms, 177
chi-square test
in QUEST, 311
class entropy
optimal binning algorithms, 299
class information entropy
optimal binning algorithms, 300
cluster assignment
in k-means, 229
cluster evaluation algorithms, 87
goodness measures, 87
notation, 87
predictor importance, 89
references, 91
Silhouette coefficient, 89
sum of squares between, 89
sum of squares error, 89
cluster feature tree
TwoStep clustering, 397
cluster membership
in k-means, 231
in Kohonen models, 237
in TwoStep clustering, 401
cluster proximities
in k-means, 231
clustering
k-means, 227
TwoStep algorithm, 397
coefficients
in factor analysis/PCA, 151
in regression, 321
comparison metrics
Binary Classifier node, 49
complete separation
in logistic regression, 243
component extraction
in factor analysis/PCA, 139
conditional statistic
Cox regression algorithms, 98
confidence
in Apriori, 10
in C&RT models, 70
in CHAID models, 85
in QUEST models, 320
in sequence rules, 334, 336
neural networks algorithms, 296
confidence difference
Apriori evaluation measure, 10
confidence ratio
Apriori evaluation measure, 11
confidence values
rulesets, 12, 56
consistent AIC
generalized linear models algorithms, 181
convergence criteria
logistic regression, 243
Cook’s distance
linear modeling algorithms, 282
logistic regression algorithms, 260
corrected Akaike information criterion (AICC)
linear modeling algorithms, 280
costs
in C&RT, 66
in CHAID, 82
in QUEST, 316
Cox and Snell R-square
in logistic regression, 245
Cox regression
blank handling, 102
Cox regression algorithms, 93
baseline function estimation, 96
blank handling, 102
output statistics, 99
plots, 101
regression coefficient estimation, 94
stepwise selection, 97
cross-entropy error
multilayer perceptron algorithms, 286
data aggregation
in logistic regression, 240
Decision List algorithm, 105
blank handling, 111
blank handling in scoring, 111
confidence intervals, 110
coverage, 111
decision rule algorithm, 107–108
decision rule split algorithm, 108–109
frequency, 111
primary algorithm, 106
probability, 111
scoring, 111
secondary measures, 111
terminology, 105
deviance
generalized linear models algorithms, 178
logistic regression algorithms, 259
deviance goodness-of-fit measure
in logistic regression, 245
DfBeta
logistic regression algorithms, 260
difference of confidence quotient to 1
Apriori evaluation measure, 11
direct oblimin rotation
factor analysis/PCA, 147
discretization
see binning, 74
discriminant analysis
blank handling, 121
discriminant analysis algorithms, 113
basic statistics, 113
blank handling, 123
canonical discriminant functions, 118
classification, 121
classification functions, 117
cross-validation, 122
notation, 113
references, 123
variable selection, 114
distances
in k-means, 229, 231
in Kohonen models, 235
in TwoStep clustering, 398
diversity
Pass, Stream, Merge algorithms, 134
dummy coding
in logistic regression, 239
encoding value for sets
in k-means, 228, 231
ensembles algorithms, 125
ensembling model scores, 136
equamax rotation
in factor analysis/PCA, 145
error backpropagation
multilayer perceptron algorithms, 289
estimated marginal means
generalized linear mixed models algorithms, 200
eta decay
in Kohonen models, 236
evaluation measures
in Apriori, 10
Exhaustive CHAID
merging categories, 76
see CHAID, 73
expected frequencies
CHAID tests, 78
in CHAID tests, 80
F-test
in CHAID, 77
factor analysis/PCA
advanced output, 152
alpha factoring, 143
blank handling, 151–152
chi-square statistics, 142
direct oblimin rotation, 147
equamax rotation, 145
factor score coefficients, 151
factor scores, 152
factor/component extraction, 139
generalized least squares extraction, 142
image factoring, 144
maximum likelihood extraction, 140
overview, 139
principal axis factoring, 140
principal components analysis (PCA), 139
promax rotation, 150
quartimax rotation, 145
rotations, 145
unweighted least squares extraction, 142
varimax rotation, 145
factor equations
in factor analysis/PCA, 151
factor extraction
in factor analysis/PCA, 139
factor score coefficients
in factor analysis/PCA, 151
factor scores
in factor analysis/PCA, 152
feature selection
in Bayesian network algorithms, 38
field encoding
encoding of flag fields, 229, 234
encoding of symbolic fields, 227, 233
Kohonen models, 233
scaling of range fields, 227, 233
finite sample corrected AIC
generalized linear models algorithms, 181, 199
flag fields
encoding, 229, 234
forward entry
multinomial logistic regression algorithms, 246
forward field selection
in regression, 324
forward stepwise
multinomial logistic regression algorithms, 246
forward stepwise selection
linear modeling algorithms, 274
frequency weights, 59, 74, 309
frequent itemsets
in Apriori, 9
in Carma, 53
frequent sequences, 329
gain summary
in C&RT, 69
in CHAID, 83
in QUEST, 319
GDI
see Group Deviation Index, 5
generalized least squares
in factor analysis/PCA, 142
generalized linear mixed models algorithms, 187, 204
estimated marginal means, 200
fixed effects transformation, 191
goodness of fit statistics, 199
link function, 189
model, 188
model for nominal multinomial, 207
model for ordinal multinomial, 214
multiple comparisons, 202
notation, 187, 206
references, 220
scale parameter, 190
tests of fixed effects, 199
generalized linear models algorithms, 163
chi-square statistic, 177
default tests of model effects, 182
estimation, 169
goodness of fit, 178
link function, 168
model, 163
model fit test, 182
model testing, 177
notation, 163
probability distribution, 164
references, 184
scoring, 183
generalized logit model
in logistic regression, 241
Gini index
in C&RT, 62
goodness of fit
generalized linear models algorithms, 178
goodness-of-fit measures
in logistic regression, 245
gradient descent
multilayer perceptron algorithms, 289
Group Deviation Index, 5
hazard plots
Cox regression algorithms, 102
hierarchical clustering
in TwoStep clustering, 398
Hosmer-Lemeshow goodness-of-fit statistic
logistic regression algorithms, 258
hyperbolic tangent activation function
multilayer perceptron algorithms, 286
hyperbolic tangent kernel function (SVM), 371
identity activation function
multilayer perceptron algorithms, 286
image factoring
in factor analysis/PCA, 144
impurity measures (C&RT), 62, 64
imputing missing values, 223
indicator coding, 227, 233
information criteria
generalized linear models algorithms, 181, 199
information difference
Apriori evaluation measure, 11
information gain
optimal binning algorithms, 300
initial cluster centers
in k-means, 229
items
in Apriori, 9
itemsets
in Apriori, 9
in sequence rules, 327
k-means
assigning records to clusters, 229
blank handling, 230
cluster centers, 229–230, 232
cluster proximities, 231
distance field (predicted values), 231
distance measure, 229
encoding value for sets, 228, 231
error tolerance, 231
field encoding, 227
initial cluster centers, 229
iterating, 229
maximum iterations, 230
overview, 227
predicted cluster membership, 231
Kohonen models
blank handling, 236–237
cluster centers, 234
cluster membership, 237
distances, 235
learning rate (eta), 234, 236
model parameters, 234
neighborhoods, 235
overview, 233
random seed, 236
scoring, 237
stopping criteria, 236
weights, 234–235
Lagrange multiplier test
generalized linear models algorithms, 177
learning rate (eta)
in Kohonen models, 234, 236
least significant difference
generalized linear mixed models algorithms, 203
least squared deviation index
in C&RT, 64
leave-one-out classification
discriminant analysis algorithms, 122
legal notices, 403
length
of sequences, 328
Levene’s test
in QUEST, 312
leverage
linear modeling algorithms, 282
logistic regression algorithms, 260
lift
Binary Classifier node, 49
likelihood ratio chi-squared test
in CHAID, 78
likelihood ratio statistic
Cox regression algorithms, 98
likelihood-based distance measure
in TwoStep clustering, 398
linear kernel function (SVM), 371
linear modeling algorithms, 271
coefficients, 280
diagnostics, 282
least squares estimation, 272
model, 272
model evaluation, 279
model selection, 273–274, 277
notation, 271
predictor importance, 283
references, 283
scoring, 282
linear regression, 321
link function
generalized linear mixed models algorithms, 189
generalized linear models algorithms, 168
log-likelihood
in logistic regression, 241, 244
log-minus-log plots
Cox regression algorithms, 102
logistic regression
advanced output, 245
binomial logistic regression algorithms, 251
blank handling, 243, 251
checking for separation, 243
convergence criteria, 243
Cox and Snell R-square, 245
data aggregation, 240
field encoding, 239
generalized logit model, 241
goodness-of-fit measures, 245
log-likelihood, 241, 244
maximum likelihood estimation, 242
McFadden R-square, 245
model chi-square, 244
Nagelkerke R-square, 245
notation, 251
overview, 239
parameter start values, 242
predicted probability, 251
predicted values, 251
pseudo R-square measures, 245
reference category, 239
stepping, 242
logistic regression algorithms
maximum likelihood estimates, 252
model, 252
notation, 251
output statistics, 256
stepwise variable selection, 253
logit residuals
logistic regression algorithms, 260
logits
in logistic regression, 241
Markov blanket Bayesian network models
adjustment for small cell counts, 47
algorithms, 43, 45
chi-square independence test, 44
conditional independence tests, 43
deriving the Markov blanket, 46
G2 test, 44
likelihood ratio test, 44
parameter learning, 46
posterior estimation, 46
structure learning algorithm, 45
maximal sequences, 328
maximum likelihood
in factor analysis/PCA, 140
in logistic regression, 242
maximum profit
Binary Classifier node, 49
maximum profit occurs in %
Binary Classifier node, 49
McFadden R-square
in logistic regression, 245
MDLP
optimal binning algorithms, 299
merging categories
CHAID, 76
Exhaustive CHAID, 76
min-max transformation
automated data preparation algorithms, 20
misclassification costs
in C&RT, 66
in QUEST, 316
missing values
imputing, 223
model chi-square
in logistic regression, 244
model information
Cox regression algorithms, 99
model updates
multilayer perceptron algorithms, 292
multilayer perceptron algorithms, 285
activation functions, 286
architecture, 285
error functions, 286
expert architecture selection, 287
model updates, 292
notation, 285
training, 288
multinomial logistic regression, 239
multinomial logistic regression algorithms
stepwise variable selection, 246
Nagelkerke R-square
in logistic regression, 245
naive Bayes
see self-learning response models, 337
Naive Bayes algorithms, 337
model, 337
notation, 337
nearest neighbor algorithms, 263
blank handling, 266, 268
distance metric, 264
feature selection, 265
feature weights, 264
k selection, 265
notation, 263
output statistics, 267
preprocessing, 263
references, 269
scoring, 268
training, 264
neighborhoods
in Kohonen models, 234–235
network architecture
multilayer perceptron algorithms, 285
radial basis function algorithms, 293
neural networks algorithms, 285
confidence, 296
missing values, 295
multilayer perceptron (MLP), 285
output statistics, 295
radial basis function (RBF), 292
references, 296
simplemax, 296
nominal regression, 239
normalized chi-square
Apriori evaluation measure, 11
number of clusters
auto-selecting in TwoStep clustering, 399
optimal binning algorithms, 299
blank handling, 303
class entropy, 299
class information entropy, 300
hybrid MDLP, 302
information gain, 300
MDLP, 299
merging bins, 303
notation, 299
references, 304
ordinal fields
in CHAID, 79
ordinary least squares regression, 321
outlier handling
in TwoStep clustering, 400
overall accuracy
Binary Classifier node, 49
overdispersion
generalized linear models algorithms, 180
Pass, Stream, Merge algorithms, 130
accuracy, 134
adaptive feature selection, 132
category balancing, 133
diversity, 134
Merge, 132
Pass, 131
scoring, 135
Stream, 132
Pearson chi-square
generalized linear models algorithms, 179
Pearson chi-squared test
in CHAID, 78
Pearson goodness-of-fit measure
in logistic regression, 245
polynomial kernel function (SVM), 371
pre-clustering
in TwoStep clustering, 397
predicted group
logistic regression algorithms, 260
predicted values
anomaly detection, 7
generalized linear models algorithms, 183
rulesets, 12, 56
predictive power
automated data preparation algorithms, 35
predictor importance
cluster evaluation algorithms, 89
linear modeling algorithms, 283
predictor importance algorithms, 305
notation, 305
references, 308
variance based method, 305
principal axis factoring
in factor analysis/PCA, 140
principal component analysis
automated data preparation algorithms, 33
principal components analysis (PCA), 139
priors
in C&RT, 65
in QUEST, 315
profits
in C&RT, 65
in CHAID, 82
in QUEST, 315
promax rotation
in factor analysis/PCA, 150
pruning
in C&RT, 66
in QUEST, 317
quartimax rotation
in factor analysis/PCA, 145
quasi-complete separation
in logistic regression, 243
QUEST
blank handling, 313, 320
chi-square test, 311
confidence values, 320
F-test, 311
finding splits, 310
gain summary, 319
Levene’s test, 312
misclassification costs, 316
overview, 309
predicted values, 319
prior probabilities, 315
profits, 315
pruning, 317
risk estimates, 318
stopping rules, 315
surrogate splitting, 313
weight fields, 309
R-square
in regression, 325
radial basis function algorithms, 292
architecture, 293
automatic selection of number of basis functions, 294
center and width for basis functions, 294
model updates, 295
notation, 292
training, 293
random seed
in Kohonen models, 236
range fields
rescaling, 227, 233
RBF kernel function (SVM), 371
regression
adjusted R-square, 325
advanced output, 325
automatic field selection, 323
backward field selection, 325
blank handling, 325–326
forward field selection, 324
model parameters, 321
notation, 321
overview, 321
predicted values, 325
R-square, 325
stepwise field selection, 324
replacing missing values, 223
risk estimates
in C&RT, 68
in CHAID, 82
in QUEST, 318
row effects model
in CHAID tests, 79
RuleQuest Research, 51
rulesets
confidence (predicted values), 12, 56
predicted values, 12, 56
SAMME
boosting algorithms, 129
Satterthwaite approximation
generalized linear mixed models algorithms, 203
scale parameter
generalized linear mixed models algorithms, 190
scaled conjugate gradient
multilayer perceptron algorithms, 290
scaled deviance
generalized linear models algorithms, 179
scaled Pearson chi-square
generalized linear models algorithms, 179
score coefficients
in factor analysis/PCA, 151
score statistic
Cox regression algorithms, 98
score test
multinomial logistic regression algorithms, 248
score values (CHAID), 82
scoring
Decision List algorithm, 111
in anomaly detection, 7
self-learning response model algorithms, 337
information measure, 341
model assessment, 338
predictor importance, 340–341
scoring, 339
updating the model, 339
separation
checking for in logistic regression, 243
sequence rules, 335
adjacency lattice, 330
antecedents, 329
blank handling, 334, 336
Carma algorithm, 331
confidence, 334, 336
consequents, 329
frequent sequences, 329
gap, 329
itemsets, 327
length of sequences, 328
maximal sequences, 328
overview, 327
predictions, 335
sequential patterns, 333
size of sequences, 328
subsequences, 328
support, 328
timestamp tolerance, 329
transactions, 327
sequences
in sequence rules, 327
sequential Bonferroni
generalized linear mixed models algorithms, 203
sequential minimal optimization (SMO) algorithm
support vector machines (SVM), 371
sequential Sidak
generalized linear mixed models algorithms, 203
set encoding value
in k-means, 228
Silhouette coefficient
cluster evaluation algorithms, 89
simplemax
neural network confidence, 296
simulation
algorithms, 343
simulation algorithms, 343, 361
beta distribution fitting, 349
binomial distribution fitting, 344
categorical distribution fitting, 345
contribution to variance measure of sensitivity, 364
correlation measure of sensitivity, 364
distribution fitting, 343
exponential distribution fitting, 349
gamma distribution fitting, 350
generating correlated data, 361
goodness of fit measures, 352
goodness of fit measures: Anderson-Darling test, 355
goodness of fit measures: continuous distributions, 353
goodness of fit measures: discrete distributions, 352
goodness of fit measures: Kolmogorov-Smirnov test, 357
lognormal distribution fitting, 348
negative binomial distribution fitting, 345
normal distribution fitting, 348
one-at-a-time measure of sensitivity, 364
Poisson distribution fitting, 345
references, 360, 365
sensitivity measures, 363
tornado charts, 363
triangular distribution fitting, 347
uniform distribution fitting, 348
Weibull distribution fitting, 351
size
of sequences, 328
softmax activation function
multilayer perceptron algorithms, 286
splitting
of merged categories (CHAID), 76
splitting nodes
CHAID, 77
stagewise additive modeling
boosting algorithms, 129
standardized residuals
logistic regression algorithms, 260
stepping
in logistic regression, 242
stepwise field selection
in regression, 324
stepwise selection
Cox regression algorithms, 97
stopping rules
in C&RT, 64
in CHAID, 81
in QUEST, 315
multilayer perceptron algorithms, 291
studentized residuals
linear modeling algorithms, 282
logistic regression algorithms, 260
subpopulations, 240
subsequences, 328
sum of squares between
cluster evaluation algorithms, 89
sum of squares error
cluster evaluation algorithms, 89
multilayer perceptron algorithms, 286
support
sequence rules, 328
support vector machines (SVM), 367
ε-Support Vector Regression (ε-SVR), 368
algorithm notation, 367
blank handling, 377
C-support vector classification, 368
decision function constant, 370
fast training algorithm, 375
gradient reconstruction, 372
kernel functions, 371
model building algorithm, 370
parallel optimization, 375
predicted probabilities, 377
predictions, 377
queue method, 376
scoring, 377
sequential minimal optimization (SMO) algorithm, 371
sequential optimization, 376
shrinking, 372
SMO decomposition, 374
solving quadratic problems, 369
subset selection, 376
types of SVM models, 367
unbalanced data, 373
variable scaling, 370
working set selection, 371
surrogate splitting
in C&RT, 61
in QUEST, 313
survival plots
Cox regression algorithms, 102
symbolic fields
recoding, 227, 233
Time Series algorithms, 379
additive outliers, 388
all models expert model, 394
AO (additive outliers), 388
AO patch (AOP), 389
AO patch outliers, 390
AOP, 389
ARIMA and transfer function models, 382
ARIMA expert model, 394
Brown’s exponential smoothing, 380
CLS, 384
conditional least squares (CLS) method, 384
damped-trend exponential smoothing, 380
definitions of outliers, 388
detection of outliers, 390
diagnostic statistics, 387
error variance, 385
estimating the effects of an outlier, 390
estimation and forecasting of ARIMA/TF, 384
estimation and forecasting of exponential smoothing, 382
expert modeling, 393
exponential smoothing expert model, 393
exponential smoothing models, 379
goodness-of-fit statistics, 391
Holt’s exponential smoothing, 380
initialization of ARIMA/TF, 385
initialization of exponential smoothing, 382
innovational outliers, 388
IO (innovational outliers), 388
level shift, 388
Ljung-Box statistic, 387
local trend, 389
LS (level shift), 388
LT (local trend), 389
maximum absolute error, 392
maximum absolute percent error, 392
maximum likelihood (ML) method, 384
mean absolute error, 392
mean absolute percent error, 392
mean squared error, 392
ML, 384
models, 379
multivariate series, 394
non-AO patch deterministic outliers, 390
normalized Bayesian information criterion, 392
notation, 379, 388
outlier detection in time series analysis, 387
outliers summary, 389
R-squared, 392
references, 396
SA (seasonal additive), 388
seasonal additive, 388
simple exponential smoothing, 379
simple seasonal exponential smoothing, 381
stationary R-squared, 392
TC (temporary/transient change), 388
temporary/transient change, 388
transfer function calculation, 386
transfer function expert model, 395
univariate series, 393
Winters’ additive exponential smoothing, 381
Winters’ exponential smoothing, 381
timestamp tolerance
in sequence rules, 329
trademarks, 404
transactions
in sequence rules, 327
tree augmented naïve Bayes (TAN) models
adjustment for small cell counts, 43
algorithms, 40
learning algorithm, 41
parameter learning, 42
posterior estimation, 43
structure learning, 42
twoing index
in C&RT, 63
TwoStep clustering
auto-clustering, 399
blank handling, 400–401
cluster feature tree, 397
clustering step, 398
distance measure, 398
model parameters, 397
outlier handling, 400
overview, 397
pre-clustering step, 397
predicted values, 401
unweighted least squares
in factor analysis/PCA, 142
updating
self-learning response models, 339
variable contribution measure
in anomaly detection, 6
Variable Deviation Index, 5
varimax rotation
in factor analysis/PCA, 145
VDI
see Variable Deviation Index, 5
Wald statistic
Cox regression algorithms, 98
weight fields
CHAID, 73
weights
in Kohonen models, 234–235
z-score transformation
automated data preparation algorithms, 20