Chapter 3
Introduction to Statistical Modeling with
SAS/STAT Software
Contents
Overview: Statistical Modeling . . . . . . . . . . 20
    Statistical Models . . . . . . . . . . 20
    Classes of Statistical Models . . . . . . . . . . 23
        Linear and Nonlinear Models . . . . . . . . . . 23
        Regression Models and Models with Classification Effects . . . . . . . . . . 24
        Univariate and Multivariate Models . . . . . . . . . . 26
        Fixed, Random, and Mixed Models . . . . . . . . . . 27
        Generalized Linear Models . . . . . . . . . . 29
        Latent Variable Models . . . . . . . . . . 29
        Bayesian Models . . . . . . . . . . 32
    Classical Estimation Principles . . . . . . . . . . 33
        Least Squares . . . . . . . . . . 33
        Likelihood . . . . . . . . . . 35
        Inference Principles for Survey Data . . . . . . . . . . 38
Statistical Background . . . . . . . . . . 39
    Hypothesis Testing and Power . . . . . . . . . . 39
    Important Linear Algebra Concepts . . . . . . . . . . 40
    Expectations of Random Variables and Vectors . . . . . . . . . . 47
    Mean Squared Error . . . . . . . . . . 49
    Linear Model Theory . . . . . . . . . . 51
        Finding the Least Squares Estimators . . . . . . . . . . 51
        Analysis of Variance . . . . . . . . . . 53
        Estimating the Error Variance . . . . . . . . . . 54
        Maximum Likelihood Estimation . . . . . . . . . . 54
        Estimable Functions . . . . . . . . . . 55
        Test of Hypotheses . . . . . . . . . . 55
        Residual Analysis . . . . . . . . . . 58
        Sweep Operator . . . . . . . . . . 60
References . . . . . . . . . . 61
Overview: Statistical Modeling
There are more than 70 procedures in SAS/STAT software, and the majority of them are dedicated to solving
problems in statistical modeling. The goal of this chapter is to provide a roadmap to statistical models and to
modeling tasks, enabling you to make informed choices about the appropriate modeling context and tool. This
chapter also introduces important terminology, notation, and concepts used throughout this documentation.
Subsequent introductory chapters discuss model families and related procedures.
It is difficult to capture the complexity of statistical models in a simple scheme, so the classification used
here is necessarily incomplete. It is most practical to classify models in terms of simple criteria, such as the
presence of random effects, the presence of nonlinearity, characteristics of the data, and so on. That is the
approach used here. After a brief introduction to statistical modeling in general terms, the chapter describes a
number of model classifications and relates them to modeling tools in SAS/STAT software.
Statistical Models
Deterministic and Stochastic Models
Purely mathematical models, in which the relationships between inputs and outputs are captured entirely
in deterministic fashion, can be important theoretical tools but are impractical for describing observational,
experimental, or survey data. For such phenomena, researchers usually allow the model to draw on stochastic
as well as deterministic elements. When the uncertainty of realizations leads to the inclusion of random
components, the resulting models are called stochastic models. A statistical model, finally, is a stochastic
model that contains parameters, which are unknown constants that need to be estimated based on assumptions
about the model and the observed data.
There are many reasons why statistical models are preferred over deterministic models. For example:
• Randomness is often introduced into a system in order to achieve a certain balance or representativeness.
For example, random assignment of treatments to experimental units allows unbiased inferences about
treatment effects. As another example, selecting individuals for a survey sample by random mechanisms
ensures a representative sample.
• Even if a deterministic model can be formulated for the phenomenon under study, a stochastic model
can provide a more parsimonious and more easily comprehended description. For example, it is
possible in principle to capture the result of a coin toss with a deterministic model, taking into account
the properties of the coin, the method of tossing, conditions of the medium through which the coin
travels and of the surface on which it lands, and so on. A very complex model is required to describe
the simple outcome—heads or tails. Alternatively, you can describe the outcome quite simply as the
result of a stochastic process, a Bernoulli variable that results in heads with a certain probability.
• It is often sufficient to describe the average behavior of a process, rather than each particular realization.
For example, a regression model might be developed to relate plant growth to nutrient availability.
The explicit aim of the model might be to describe how the average growth changes with nutrient
availability, not to predict the growth of an individual plant. The support for the notion of averaging in
a model lies in the nature of expected values, describing typical behavior in the presence of randomness.
This, in turn, requires that the model contain stochastic components.
The defining characteristic of statistical models is their dependence on parameters and the incorporation of
stochastic terms. The properties of the model and the properties of quantities derived from it must be studied
in a long-run, average sense through expectations, variances, and covariances. The fact that the parameters of
the model must be estimated from the data introduces a stochastic element in applying a statistical model:
because the model is not deterministic but includes randomness, parameter estimates and related quantities derived
from the model are likewise random. The properties of parameter estimators can often be described only
in an asymptotic sense, imagining that some aspect of the data increases without bound (for example, the
number of observations or the number of groups).
The process of estimating the parameters in a statistical model based on your data is called fitting the model.
For many classes of statistical models there are a number of procedures in SAS/STAT software that can
perform the fitting. In many cases, different procedures solve identical estimation problems—that is, their
parameter estimates are identical. In some cases, the same model parameters are estimated by different
statistical principles, such as least squares versus maximum likelihood estimation. Parameter estimates
obtained by different methods typically have different statistical properties—distribution, variance, bias, and
so on. The choice between competing estimation principles is often made on the basis of properties of the
estimators. Distinguishing properties might include (but are not necessarily limited to) computational ease,
interpretive ease, bias, variance, mean squared error, and consistency.
Model-Based and Design-Based Randomness
A statistical model is a description of the data-generating mechanism, not a description of the specific data to
which it is applied. The aim of a model is to capture those aspects of a phenomenon that are relevant to inquiry
and to explain how the data could have come about as a realization of a random experiment. These relevant
aspects might include the genesis of the randomness and the stochastic effects in the phenomenon under
study. Different schools of thought can lead to different model formulations, different analytic strategies,
and different results. Coarsely, you can distinguish between a viewpoint of innate randomness and one of
induced randomness. This distinction leads to model-based and design-based inference approaches.
In a design-based inference framework, the random variation in the observed data is induced by random
selection or random assignment. Consider the case of a survey sample from a finite population of size
$N$; suppose that $F_N = \{y_i : i \in U_N\}$ denotes the finite set of possible values and $U_N$ is the index set $U_N = \{1, 2, \ldots, N\}$. Then a sample $S$, a subset of $U_N$, is selected by probability rules. The realization of the random experiment is the selection of a particular set $S$; the associated values selected from $F_N$ are considered fixed. If properties of a design-based sampling estimator are evaluated, such as bias, variance, and
mean squared error, they are evaluated with respect to the distribution induced by the sampling mechanism.
Design-based approaches also play an important role in the analysis of data from controlled experiments by
randomization tests. Suppose that k treatments are to be assigned to kr homogeneous experimental units. If
you form k sets of r units with equal probability, and you assign the jth treatment to the tth set, a completely
randomized experimental design (CRD) results. A design-based view treats the potential response of a
particular treatment for a particular experimental unit as a constant. The stochastic nature of the error-control
design is induced by randomly selecting one of the potential responses.
Statistical models are often used in the design-based framework. In a survey sample the model is used to
motivate the choice of the finite population parameters and their sample-based estimators. In an experimental
design, an assumption of additivity of the contributions from treatments, experimental units, observational
errors, and experimental errors leads to a linear statistical model. The approach to statistical inference
where statistical models are used to construct estimators and their properties are evaluated with respect to
the distribution induced by the sample selection mechanism is known as model-assisted inference (Särndal,
Swensson, and Wretman 1992).
In a purely model-based framework, the only source of random variation for inference comes from the unknown variation in the responses. Finite population values are thought of as a realization of a superpopulation
model that describes random variables $Y_1, Y_2, \ldots$. The observed values $y_1, y_2, \ldots$ are realizations of these
random variables. A model-based framework does not imply that there is only one source of random variation
in the data. For example, mixed models might contain random terms that represent selection of effects from
hierarchical (super-) populations at different granularity. The analysis takes into account the hierarchical
structure of the random variation, but it continues to be model based.
A design-based approach is implicit in SAS/STAT procedures whose name commences with SURVEY, such
as the SURVEYFREQ, SURVEYMEANS, SURVEYREG, SURVEYLOGISTIC, and SURVEYPHREG
procedures. Inferential approaches are model based in other SAS/STAT procedures. For more information
about analyzing survey data with SAS/STAT software, see Chapter 14, “Introduction to Survey Procedures.”
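As a brief illustration of design-based estimation in SAS/STAT, the following sketch estimates a finite population mean from a probability sample with the SURVEYMEANS procedure. The data set, variable, and weight names are hypothetical placeholders, and the options shown are only one reasonable choice.

   /* Minimal sketch (hypothetical data set and variable names): design-based
      estimation of a population mean. The WEIGHT statement supplies the sampling
      weights, and TOTAL= supplies the finite population size so that a finite
      population correction can be applied. */
   proc surveymeans data=MySample total=5000 mean clm;
      var Income;
      weight SamplingWeight;
   run;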
Model Specification
If the model is accepted as a description of the data-generating mechanism, then its parameters are estimated
using the data at hand. Once the parameter estimates are available, you can apply the model to answer
questions of interest about the study population. In other words, the model becomes the lens through which
you view the problem itself, in order to ask and answer questions of interest. For example, you might use the
estimated model to derive new predictions or forecasts, to test hypotheses, to derive confidence intervals, and
so on.
Obviously, the model must be “correct” to the extent that it sufficiently describes the data-generating
mechanism. Model selection, diagnosis, and discrimination are important steps in the model-building process.
This is typically an iterative process, starting with an initial model and refining it. The first important step
is thus to formulate your knowledge about the data-generating process and to express the real observed
phenomenon in terms of a statistical model. A statistical model describes the distributional properties of one
or more variables, the response variables. The extent of the required distributional specification depends on
the model, estimation technique, and inferential goals. This description often takes the simple form of a
model with additive error structure:
response = mean + error
In mathematical notation this simple model equation becomes
$$Y = f(x_1, \ldots, x_k; \beta_1, \ldots, \beta_p) + \epsilon$$

In this equation $Y$ is the response variable, often also called the dependent variable or the outcome variable. The terms $x_1, \ldots, x_k$ denote the values of $k$ regressor variables, often termed the covariates or the "independent" variables. The terms $\beta_1, \ldots, \beta_p$ denote parameters of the model, unknown constants that are to be estimated. The term $\epsilon$ denotes the random disturbance of the model; it is also called the residual term or the error term of the model.
In this simple model formulation, stochastic properties are usually associated only with the $\epsilon$ term. The covariates $x_1, \ldots, x_k$ are usually known values, not subject to random variation. Even if the covariates are measured with error, so that their values are in principle random, they are considered fixed in most models fit by SAS/STAT software. In other words, stochastic properties under the model are derived conditional on the $x$s. If $\epsilon$ is the only stochastic term in the model, and if the errors have a mean of zero, then the function $f(\cdot)$ is the mean function of the statistical model. More formally,

$$\mathrm{E}[Y] = f(x_1, \ldots, x_k; \beta_1, \ldots, \beta_p)$$

where $\mathrm{E}[\cdot]$ denotes the expectation operator.
In many applications, a simple model formulation is inadequate. It might be necessary to specify not only the
stochastic properties of a single error term, but also how model errors associated with different observations
relate to each other. A simple additive error model is typically inappropriate to describe the data-generating
mechanism if the errors do not have zero mean or if the variance of observations depends on their means.
For example, if Y is a Bernoulli random variable that takes on the values 0 and 1 only, a regression model
with additive error is not meaningful. Models for such data require more elaborate formulations involving
probability distributions.
Classes of Statistical Models
Linear and Nonlinear Models
A statistical estimation problem is nonlinear if the estimating equations—the equations whose solution
yields the parameter estimates—depend on the parameters in a nonlinear fashion. Such estimation problems
typically have no closed-form solution and must be solved by iterative, numerical techniques.
Nonlinearity in the mean function is often used to distinguish between linear and nonlinear models. A model
has a nonlinear mean function if the derivative of the mean function with respect to the parameters depends
on at least one other parameter. Consider, for example, the following models that relate a response variable Y
to a single regressor variable x:
EŒY jx D ˇ0 C ˇ1 x
EŒY jx D ˇ0 C ˇ1 x C ˇ2 x 2
EŒY jx D ˇ C x=˛
In these expressions, EŒY jx denotes the expected value of the response variable Y at the fixed value of
x. (The conditioning on x simply indicates that the predictor variables are assumed to be non-random.
Conditioning is often omitted for brevity in this and subsequent chapters.)
The first model in the previous list is a simple linear regression (SLR) model. It is linear in the parameters $\beta_0$ and $\beta_1$ since the model derivatives do not depend on unknowns:

$$\frac{\partial}{\partial \beta_0}(\beta_0 + \beta_1 x) = 1 \qquad\qquad \frac{\partial}{\partial \beta_1}(\beta_0 + \beta_1 x) = x$$
The model is also linear in its relationship with x (a straight line). The second model is also linear in the
parameters, since
$$\frac{\partial}{\partial \beta_0}\left(\beta_0 + \beta_1 x + \beta_2 x^2\right) = 1 \qquad \frac{\partial}{\partial \beta_1}\left(\beta_0 + \beta_1 x + \beta_2 x^2\right) = x \qquad \frac{\partial}{\partial \beta_2}\left(\beta_0 + \beta_1 x + \beta_2 x^2\right) = x^2$$
However, this second model is curvilinear, since it exhibits a curved relationship when plotted against x. The
third model, finally, is a nonlinear model since
$$\frac{\partial}{\partial \beta}(\beta + x/\alpha) = 1 \qquad\qquad \frac{\partial}{\partial \alpha}(\beta + x/\alpha) = -\frac{x}{\alpha^2}$$

The second of these derivatives depends on a parameter, $\alpha$. A model is nonlinear if it is not linear in at least one parameter. Only the third model is a nonlinear model. A graph of $\mathrm{E}[Y]$ versus the regressor variable thus
does not indicate whether a model is nonlinear. A curvilinear relationship in this graph can be achieved by a
model that is linear in the parameters.
Nonlinear mean functions lead to nonlinear estimation. It is important to note, however, that nonlinear
estimation arises also because of the estimation principle or because the model structure contains nonlinearity
in other parts, such as the covariance structure. For example, fitting a simple linear regression model by
minimizing the sum of the absolute residuals leads to a nonlinear estimation problem despite the fact that the
mean function is linear.
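To make the consequence concrete, the third (nonlinear) model above can be fit by iterative nonlinear least squares, for example with the NLIN procedure. The following is only a sketch with hypothetical data set and variable names; starting values are supplied because the estimation problem has no closed-form solution.

   /* Minimal sketch (hypothetical data set and variable names): fitting the
      nonlinear mean function E[Y|x] = beta + x/alpha by nonlinear least squares.
      The PARMS statement provides starting values for the iterative algorithm. */
   proc nlin data=MyData;
      parms beta=1 alpha=1;
      model y = beta + x/alpha;
   run;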
Regression Models and Models with Classification Effects
A linear regression model in the broad sense has the form
$$Y = X\beta + \epsilon$$

where $Y$ is the vector of response values, $X$ is the matrix of regressor effects, $\beta$ is the vector of regression parameters, and $\epsilon$ is the vector of errors or residuals. A regression model in the narrow sense—as compared
to a classification model—is a linear model in which all regressor effects are continuous variables. In other
words, each effect in the model contributes a single column to the X matrix and a single parameter to the
overall model. For example, a regression of subjects’ weight (Y) on the regressors age ($x_1$) and body mass index (bmi, $x_2$) is a regression model in this narrow sense. In symbolic notation you can write this regression
model as
weight = age + bmi + error
This symbolic notation expands into the statistical model
$$Y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \epsilon_i$$

Single parameters are used to model the effects of age ($\beta_1$) and bmi ($\beta_2$), respectively.
A classification effect, on the other hand, is associated with possibly more than one column of the X matrix.
Classification with respect to a variable is the process by which each observation is associated with one of k
levels; the process of determining these k levels is referred to as levelization of the variable. Classification
variables are used in models to identify experimental conditions, group membership, treatments, and so
on. The actual values of the classification variable are not important, and the variable can be a numeric or
a character variable. What is important is the association of discrete values or levels of the classification
variable with groups of observations. For example, in the previous illustration, if the regression also takes into
account the subjects’ gender, this can be incorporated in the model with a two-level classification variable.
Suppose that the values of the gender variable are coded as ‘F’ and ‘M’, respectively. In symbolic notation
the model
weight = age + bmi + gender + error
expands into the statistical model
$$Y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \tau_1 I(\text{gender}=\text{'F'}) + \tau_2 I(\text{gender}=\text{'M'}) + \epsilon_i$$

where $I(\text{gender}=\text{'F'})$ is the indicator function that returns 1 if the value of the gender variable is ‘F’ and 0 otherwise. Parameters $\tau_1$ and $\tau_2$ are associated with the gender classification effect. This form of
parameterizing the gender effect in the model is only one of several different methods of incorporating
the levels of a classification variable in the model. This form, the so-called singular parameterization, is
the most general approach, and it is used in the GLM, MIXED, and GLIMMIX procedures. Alternatively,
classification effects with various forms of nonsingular parameterizations are available in such procedures
as GENMOD and LOGISTIC. See the documentation for the individual SAS/STAT procedures on their
respective facilities for parameterizing classification variables and the section “Parameterization of Model
Effects” on page 387 in Chapter 19, “Shared Concepts and Topics,” for general details.
Models that contain only classification effects are often identified with analysis of variance (ANOVA) models,
because ANOVA methods are frequently used in their analysis. This is particularly true for experimental data
where the model effects comprise effects of the treatment and error-control design. However, classification
effects appear more widely than in models to which analysis of variance methods are applied. For example,
many mixed models, where parameters are estimated by restricted maximum likelihood, consist entirely of
classification effects but do not permit the sum of squares decomposition typical for ANOVA techniques.
Many models contain both continuous and classification effects. For example, a continuous-by-class
effect consists of at least one continuous variable and at least one classification variable. Such effects are
convenient, for example, to vary slopes in a regression model by the levels of a classification variable. Also,
recent enhancements to linear modeling syntax in some SAS/STAT procedures (including GLIMMIX and
GLMSELECT) enable you to construct sets of columns in X matrices from a single continuous variable. An
example is modeling with splines where the values of a continuous variable x are expanded into a spline basis
that occupies multiple columns in the X matrix. For purposes of the analysis you can treat these columns as a
single unit or as individual, unrelated columns. For more details, see the section “EFFECT Statement” on
page 397 in Chapter 19, “Shared Concepts and Topics.”
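As an illustration, the weight-age-bmi-gender model discussed above could be specified as follows. This is only a sketch with hypothetical data set and variable names; the CLASS statement is what declares gender as a classification effect, which PROC GLM handles with a singular parameterization.

   /* Minimal sketch (hypothetical data set and variable names): two continuous
      regressors (age, bmi) and one two-level classification effect (gender).
      The SOLUTION option prints the parameter estimates. */
   proc glm data=Subjects;
      class gender;
      model weight = age bmi gender / solution;
   run;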
Univariate and Multivariate Models
A multivariate statistical model is a model in which multiple response variables are modeled jointly. Suppose,
for example, that your data consist of heights ($h_i$) and weights ($w_i$) of children, collected over several years ($t_i$). The following separate regressions represent two univariate models:

$$w_i = \beta_{w0} + \beta_{w1} t_i + \epsilon_{wi}$$
$$h_i = \beta_{h0} + \beta_{h1} t_i + \epsilon_{hi}$$
In the univariate setting, no information about the children’s heights “flows” to the model about their weights
and vice versa. In a multivariate setting, the heights and weights would be modeled jointly. For example:
$$Y_i = \begin{bmatrix} w_i \\ h_i \end{bmatrix} = X\beta + \begin{bmatrix} \epsilon_{wi} \\ \epsilon_{hi} \end{bmatrix} = X\beta + \epsilon_i, \qquad \epsilon_i \sim \left(0,\ \begin{bmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{bmatrix}\right)$$

The vectors $Y_i$ and $\epsilon_i$ collect the responses and errors for the two observations that belong to the same subject. The errors from the same child now have the correlation

$$\mathrm{Corr}[\epsilon_{wi}, \epsilon_{hi}] = \frac{\sigma_{12}}{\sqrt{\sigma_1^2\, \sigma_2^2}}$$
and it is through this correlation that information about heights “flows” to the weights and vice versa. This
simple example shows only one approach to modeling multivariate data, through the use of covariance
structures. Other techniques involve seemingly unrelated regressions, systems of linear equations, and so on.
Multivariate data can be coarsely classified into three types. The response vectors of homogeneous multivariate data consist of observations of the same attribute. Such data are common in repeated measures
experiments and longitudinal studies, where the same attribute is measured repeatedly over time. Homogeneous multivariate data also arise in spatial statistics where a set of geostatistical data is the incomplete
observation of a single realization of a random experiment that generates a two-dimensional surface. One
hundred measurements of soil electrical conductivity collected in a forest stand compose a single observation
of a 100-dimensional homogeneous multivariate vector. Heterogeneous multivariate observations arise
when the responses that are modeled jointly refer to different attributes, such as in the previous example of
children’s weights and heights. There are two important subtypes of heterogeneous multivariate data. In
homocatanomic multivariate data the observations come from the same distributional family. For example,
the weights and heights might both be assumed to be normally distributed. With heterocatanomic multivariate data the observations can come from different distributional families. The following are examples of
heterocatanomic multivariate data:
• For each patient you observe blood pressure (a continuous outcome), the number of prior episodes of
an illness (a count variable), and whether the patient has a history of diabetes in the family (a binary
outcome). A multivariate model that models the three attributes jointly might assume a lognormal
distribution for the blood pressure measurements, a Poisson distribution for the count variable and a
Bernoulli distribution for the family history.
• In a study of HIV/AIDS survival, you model jointly a patient’s CD4 cell count over time—itself a
homogeneous multivariate outcome—and the survival of the patient (event-time data).
Fixed, Random, and Mixed Models
Each term in a statistical model represents either a fixed effect or a random effect. Models in which all effects
are fixed are called fixed-effects models. Similarly, models in which all effects are random—apart from
possibly an overall intercept term—are called random-effects models. Mixed models, then, are those models
that have fixed-effects and random-effects terms. In matrix notation, the linear fixed, linear random, and
linear mixed model are represented by the following model equations, respectively:
$$Y = X\beta + \epsilon$$
$$Y = Z\gamma + \epsilon$$
$$Y = X\beta + Z\gamma + \epsilon$$

In these expressions, $X$ and $Z$ are design or regressor matrices associated with the fixed and random effects, respectively. The vector $\beta$ is a vector of fixed-effects parameters, and the vector $\gamma$ represents the random
effects. The mixed modeling procedures in SAS/STAT software assume that the random effects follow a
normal distribution with variance-covariance matrix G and, in most cases, that the random effects have mean
zero.
Random effects are often associated with classification effects, but this is not necessary. As an example
of random regression effects, you might want to model the slopes in a growth model as consisting of two
components: an overall (fixed-effects) slope that represents the slope of the average individual, and individual-specific random deviations from the overall slope. The X and Z matrices would then have column entries for
the regressor variable associated with the slope. You are modeling fixed and randomly varying regression
coefficients.
Having random effects in your model has a number of important consequences:
• Some observations are no longer uncorrelated but instead have a covariance that depends on the
variance of the random effects.
• You can and should distinguish between the inference spaces; inferences can be drawn in a broad,
intermediate, and narrow inference space. In the narrow inference space, conclusions are drawn about
the particular values of the random effects selected in the study. The broad inference space applies
if inferences are drawn with respect to all possible levels of the random effects. The intermediate
inference space can be applied for effects consisting of more than one random term, when inferences
are broad with respect to some factors and narrow with respect to others. In fixed-effects models, there
is no corresponding concept to the broad and intermediate inference spaces.
• Depending on the structure of G and $\mathrm{Var}[\epsilon]$ and also subject to the balance in your data, there might be no closed-form solution for the parameter estimates. Although the model is linear in $\beta$, iterative
estimation methods might be required to estimate all parameters of the model.
• Certain concepts, such as least squares means and Type III estimable functions, are meaningful only
for fixed effects.
• By using random effects, you are modeling variation through variance. Variation in data simply implies
that things are not equal. Variance, on the other hand, describes a feature of a random variable. Random
effects in your model are random variables: they model variation through variance.
It is important to properly determine the nature of the model effects as fixed or random. An effect is either
fixed or random by its very nature; it is improper to consider it fixed in one analysis and random in another
depending on what type of results you want to produce. If, for example, a treatment effect is random and
you are interested in comparing treatment means, and only the levels selected in the study are of interest,
then it is not appropriate to model the treatment effect as fixed so that you can draw on least squares mean
analysis. The appropriate strategy is to model the treatment effect as random and to compare the solutions for
the treatment effects in the narrow inference space.
In determining whether an effect is fixed or random, it is helpful to inquire about the genesis of the effect. If
the levels of an effect are randomly sampled, then the effect is a random effect. The following are examples:
• In a large clinical trial, drugs A, B, and C are applied to patients in various clinical centers. If the
clinical centers are selected at random from a population of possible clinics, their effect on the response
is modeled with a random effect.
• In repeated measures experiments with people or animals as subjects, subjects are declared to be
random because they are selected from the larger population to which you want to generalize.
• Fertilizers could be applied at a number of levels. Three levels are randomly selected for an experiment
to represent the population of possible levels. The fertilizer effects are random effects.
Quite often it is not possible to select effects at random, or it is not known how the values in the data became
part of the study. For example, suppose you are presented with a data set consisting of student scores in three
school districts, with four to ten schools in each district and two to three classrooms in each school. How do
you decide which effects are fixed and which are random? As another example, in an agricultural experiment
conducted in successive years at two locations, how do you decide whether location and year effects are
fixed or random? In these situations, the fixed or random nature of the effect might be debatable, bearing out
the adage that “one modeler’s fixed effect is another modeler’s random effect.” However, this fact does not
constitute license to treat as random those effects that are clearly fixed, or vice versa.
When an effect cannot be randomized or it is not known whether its levels have been randomly selected, it
can be a random effect if its impact on the outcome variable is of a stochastic nature—that is, if it is the
realization of a random process. Again, this line of thinking relates to the genesis of the effect. A random
year, location, or school district effect is a placeholder for different environments that cannot be selected
at random but whose effects are the cumulative result of many individual random processes. Note that this
argument does not imply that effects are random because the experimenter does not know much about them.
The key notion is that effects represent something, whether or not that something is known to the modeler.
Broadening the inference space beyond the observed levels is thus possible, although you might not be able
to articulate what the realizations of the random effects represent.
A consequence of having random effects in your model is that some observations are no longer uncorrelated
but instead have a covariance that depends on the variance of the random effect. In fact, in some modeling
applications random effects might be used not only to model heterogeneity in the parameters of a model, but
also to induce correlations among observations. The typical assumption about random effects in SAS/STAT
software is that the effects are normally distributed.
For more information about mixed modeling tools in SAS/STAT software, see Chapter 6, “Introduction to
Mixed Modeling Procedures.”
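As a small illustration, the clinical-trial example above (drug as a fixed effect, randomly selected clinical centers as a random effect) might be specified with the MIXED procedure as follows. The data set and variable names are hypothetical, and the random-effects structure shown is only one plausible choice.

   /* Minimal sketch (hypothetical data set and variable names): drug is a fixed
      classification effect; center and the drug-by-center interaction are random
      effects representing draws from a larger population of clinics. */
   proc mixed data=Trial;
      class drug center;
      model response = drug;
      random center drug*center;
   run;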
Generalized Linear Models
A class of models that has gained increasing importance in the past several decades is the class of generalized
linear models. The theory of generalized linear models originated with Nelder and Wedderburn (1972) and Wedderburn (1974) and was subsequently made popular in the monograph by McCullagh and Nelder (1989).
This class of models extends the theory and methods of linear models to data with nonnormal responses.
Before this theory was developed, modeling of nonnormal data typically relied on transformations of the data,
and the transformations were chosen to improve symmetry, homogeneity of variance, or normality. Such
transformations have to be performed with care because they also have implications for the error structure of
the model. Also, back-transforming estimates or predicted values can introduce bias.
Generalized linear models also apply a transformation, known as the link function, but it is applied to a
deterministic component, the mean of the data. Furthermore, generalized linear models take the distribution
of the data into account, rather than assuming that a transformation of the data leads to normally distributed
data to which standard linear modeling techniques can be applied.
To put this generalization in place requires a slightly more sophisticated model setup than that required for
linear models for normal data:
• The systematic component is a linear predictor similar to that in linear models, $\eta = x'\beta$. The linear predictor is a linear function in the parameters. In contrast to the linear model, $\eta$ does not represent the mean function of the data.

• The link function $g(\cdot)$ relates the linear predictor to the mean, $g(\mu) = \eta$. The link function is a monotonic, invertible function. The mean can thus be expressed as the inversely linked linear predictor, $\mu = g^{-1}(\eta)$. For example, a common link function for binary and binomial data is the logit link, $g(t) = \log\{t/(1-t)\}$. The mean function of a generalized linear model with logit link and a single regressor can thus be written as

$$\log\left\{\frac{\mu}{1-\mu}\right\} = \beta_0 + \beta_1 x$$
$$\mu = \frac{1}{1 + \exp\{-\beta_0 - \beta_1 x\}}$$

This is known as a logistic regression model.
• The random component of a generalized linear model is the distribution of the data, assumed to be
a member of the exponential family of distributions. Discrete members of this family include the
Bernoulli (binary), binomial, Poisson, geometric, and negative binomial (for a given value of the scale
parameter) distribution. Continuous members include the normal (Gaussian), beta, gamma, inverse
Gaussian, and exponential distribution.
The standard linear model with normally distributed error is a special case of a generalized linear model; the
link function is the identity function and the distribution is normal.
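For example, the logistic regression model written above (binary response, logit link) can be fit with procedures such as LOGISTIC, GENMOD, or GLIMMIX. The following GENMOD sketch makes the distribution and link explicit; the data set and variable names are hypothetical.

   /* Minimal sketch (hypothetical data set and variable names): a generalized
      linear model with binomial (here binary) response distribution and logit
      link, that is, a logistic regression with a single regressor x. The
      DESCENDING option models the probability of the higher response level. */
   proc genmod data=MyData descending;
      model y = x / dist=binomial link=logit;
   run;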
Latent Variable Models
Latent variable modeling involves variables that are not observed directly in your research. It has a relatively
long history, dating back from the measure of general intelligence by common factor analysis (Spearman
1904) to the emergence of modern-day structural equation modeling (Jöreskog 1973; Keesling 1972; Wiley
1973).
Latent variables are involved in almost all kinds of regression models. In a broad sense, all additive error
terms in regression models are latent variables simply because they are not measured in research. Hereafter,
however, a narrower sense of latent variables is used when referring to latent variable models. Latent variables
are systematic unmeasured variables that are also referred to as factors. For example, in the following diagram
a simple relation between Emotional Intelligence and Career Achievement is shown:
[Path diagram: Emotional Intelligence → Career Achievement, with path coefficient $\beta$ and error term $\delta$]
In the diagram, both Emotional Intelligence and Career Achievement are treated as latent factors. They
are hypothetical constructs in your model. You hypothesize that Emotional Intelligence is a “causal factor”
or predictor of Career Achievement. The symbol ˇ represents the regression coefficient or the effect of
Emotional Intelligence on Career Achievement. However, the “causal relationship” or prediction is not
perfect. There is an error term ı, which accounts for the unsystematic part of the prediction. You can
represent the preceding diagram by using the following linear equation:
$$CA = \beta\, EI + \delta$$

where CA represents Career Achievement and EI represents Emotional Intelligence. The means of the latent factors in the linear model are arbitrary, and so they are assumed to be zero. The error variable $\delta$ also has a
zero mean with an unknown variance. This equation represents the so-called “structural model,” where the
“true” relationships among latent factors are theorized.
In order to model this theoretical model with latent factors, some observed variables must somehow relate to
these factors. This calls for the measurement models for latent factors. For example, Emotional Intelligence
could be measured by some established tests. In these tests, individuals are asked to respond to certain special
situations that involve stressful decision making, personal confrontations, and so on. Their responses to these
situations are then rated by experts or a standardized scoring system. Suppose there are three such tests
and the test scores are labeled as X1, X2 and X3, respectively. The measurement model for the latent factor
Emotional Intelligence is specified as follows:
$$X1 = a_1\, EI + e_1, \qquad X2 = a_2\, EI + e_2, \qquad X3 = a_3\, EI + e_3$$

where $a_1$, $a_2$, and $a_3$ are regression coefficients and $e_1$, $e_2$, and $e_3$ are measurement errors. Measurement
errors are assumed to be independent of the latent factors EI and CA. In the measurement model, X1, X2, and
X3 are called the indicators of the latent variable EI. These observed variables are assumed to be centered in
the model, and therefore no intercept terms are needed. Each of the indicators is a scaled measurement of the
latent factor EI plus a unique error term.
Similarly, you need to have a measurement model for the latent factor CA. Suppose that there are four
observed indicators Y1, Y2, Y3, and Y4 (for example, Job Status) for this latent factor. The measurement
model for CA is specified as follows:
$$Y1 = a_4\, CA + e_4, \qquad Y2 = a_5\, CA + e_5, \qquad Y3 = a_6\, CA + e_6, \qquad Y4 = a_7\, CA + e_7$$

where $a_4$, $a_5$, $a_6$, and $a_7$ are regression coefficients and $e_4$, $e_5$, $e_6$, and $e_7$ are error terms. Again, the error
terms are assumed to be independent of the latent variables EI and CA, and Y1, Y2, Y3, and Y4 are centered
in the equations.
Given the data for the measured variables, you analyze the structural and measurement models simultaneously
by the structural equation modeling techniques. In other words, estimation of $\beta$, $a_1$–$a_7$, and other parameters in the model is carried out simultaneously in the modeling.
Modeling involving the use of latent factors is quite common in social and behavioral sciences, personality
assessment, and marketing research. Hypothetical constructs, although not observable, are very important in
building theories in these areas.
Another use of latent factors in modeling is to “purify” the predictors in regression analysis. A common
assumption in linear regression models is that predictors are measured without errors. That is, in the following
linear equation x is assumed to have been measured without errors:
$$y = \alpha + \beta x + \epsilon$$

However, if $x$ has been contaminated with measurement errors that cannot be ignored, the estimate of $\beta$ might be biased severely so that the true relationship between $x$ and $y$ would be masked.
A measurement model for x provides a solution to such a problem. Let $F_x$ be a “purified” version of $x$. That is, $F_x$ is the “true” measure of $x$ without measurement errors, as described in the following equation:

$$x = F_x + \delta$$
where $\delta$ represents a random measurement error term. Now, the linear relationship of interest is specified in the following new linear regression equation:

$$y = \alpha + \beta F_x + \epsilon$$

In this equation, $F_x$, which is now free from measurement errors, replaces $x$ in the original equation. With measurement errors taken into account in the simultaneous fitting of the measurement and the new regression equations, estimation of $\beta$ is unbiased; hence it reflects the true relationship much better.
Certainly, introducing latent factors in models is not a “free lunch.” You must pay attention to the identification
issues induced by the latent variable methodology. That is, in order to estimate the parameters in structural
equation models with latent variables, you must set some identification constraints in these models. There
are some established rules or conventions that would lead to proper model identification and estimation. See
Chapter 17, “Introduction to Structural Equation Modeling with Latent Variables,” for examples and general
details.
In addition, because of the nature of latent variables, estimation in structural equation modeling with latent
variables does not follow the same form as that of linear regression analysis. Instead of defining the estimators
in terms of the data matrices, most estimation methods in structural equation modeling use the fitting of the
first- and second- order moments. Hence, estimation principles described in the section “Classical Estimation
Principles” on page 33 do not apply to structural equation modeling. However, you can see the section
“Estimation Criteria” on page 1463 in Chapter 29, “The CALIS Procedure,” for details about estimation in
structural equation modeling with latent variables.
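As an illustration only, a model like the EI-CA example could be specified in the LINEQS modeling language of the CALIS procedure roughly as follows. The data set and variable names are hypothetical, and fixing the variance of the exogenous factor at 1 is just one of several possible identification constraints; see the CALIS documentation for the complete syntax and identification rules.

   /* Minimal sketch (hypothetical variable names): measurement models for the
      latent factors f_EI and f_CA plus the structural equation that relates them.
      In the LINEQS language, latent factor names start with 'f' (or 'F'), error
      terms with 'e', and disturbances with 'd'. Variances of the error terms and
      the disturbance are left at their default (free) parameterization. */
   proc calis data=Scores;
      lineqs
         X1   = a1   * f_EI + e1,
         X2   = a2   * f_EI + e2,
         X3   = a3   * f_EI + e3,
         Y1   = a4   * f_CA + e4,
         Y2   = a5   * f_CA + e5,
         Y3   = a6   * f_CA + e6,
         Y4   = a7   * f_CA + e7,
         f_CA = beta * f_EI + d1;
      variance
         f_EI = 1.0;
   run;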
Bayesian Models
Statistical models based on the classical (or frequentist) paradigm treat the parameters of the model as fixed,
unknown constants. They are not random variables, and the notion of probability is derived in an objective
sense as a limiting relative frequency. The Bayesian paradigm takes a different approach. Model parameters
are random variables, and the probability of an event is defined in a subjective sense as the degree to which
you believe that the event is true. This fundamental difference in philosophy leads to profound differences
in the statistical content of estimation and inference. In the frequentist framework, you use the data to best
estimate the unknown value of a parameter; you are trying to pinpoint a value in the parameter space as well
as possible. In the Bayesian framework, you use the data to update your beliefs about the behavior of the
parameter to assess its distributional properties as well as possible.
Suppose you are interested in estimating $\theta$ from data $Y = [Y_1, \ldots, Y_n]$ by using a statistical model described by a density $p(y|\theta)$. Bayesian philosophy states that $\theta$ cannot be determined exactly, and uncertainty about the parameter is expressed through probability statements and distributions. You can say, for example, that $\theta$ follows a normal distribution with mean 0 and variance 1, if you believe that this distribution best describes the uncertainty associated with the parameter.
The following steps describe the essential elements of Bayesian inference:
1. A probability distribution for $\theta$ is formulated as $\pi(\theta)$, which is known as the prior distribution, or just the prior. The prior distribution expresses your beliefs, for example, on the mean, the spread, the skewness, and so forth, about the parameter prior to examining the data.

2. Given the observed data $Y$, you choose a statistical model $p(y|\theta)$ to describe the distribution of $Y$ given $\theta$.

3. You update your beliefs about $\theta$ by combining information from the prior distribution and the data through the calculation of the posterior distribution, $p(\theta|y)$.
The third step is carried out by using Bayes’ theorem, from which this branch of statistical philosophy derives
its name. The theorem enables you to combine the prior distribution and the model in the following way:
$$p(\theta|y) = \frac{p(\theta, y)}{p(y)} = \frac{p(y|\theta)\,\pi(\theta)}{p(y)} = \frac{p(y|\theta)\,\pi(\theta)}{\int p(y|\theta)\,\pi(\theta)\, d\theta}$$

The quantity $p(y) = \int p(y|\theta)\,\pi(\theta)\, d\theta$ is the normalizing constant of the posterior distribution. It is also the marginal distribution of $Y$, and it is sometimes called the marginal distribution of the data.
The likelihood function of $\theta$ is any function proportional to $p(y|\theta)$—that is, $L(\theta) \propto p(y|\theta)$. Another way of writing Bayes’ theorem is

$$p(\theta|y) = \frac{L(\theta)\,\pi(\theta)}{\int L(\theta)\,\pi(\theta)\, d\theta}$$

The marginal distribution $p(y)$ is an integral; therefore, provided that it is finite, the particular value of the integral does not yield any additional information about the posterior distribution. Hence, $p(\theta|y)$ can be written up to an arbitrary constant, presented here in proportional form, as

$$p(\theta|y) \propto L(\theta)\,\pi(\theta)$$
Bayes’ theorem instructs you how to update existing knowledge with new information. You start from a prior
belief $\pi(\theta)$, and, after learning information from data $y$, you change or update the belief on $\theta$ and obtain $p(\theta|y)$. These are the essential elements of the Bayesian approach to data analysis.
In theory, Bayesian methods offer a very simple alternative to statistical inference—all inferences follow
from the posterior distribution $p(\theta|y)$. However, in practice, only the most elementary problems enable you
to obtain the posterior distribution analytically. Most Bayesian analyses require sophisticated computations,
including the use of simulation methods. You generate samples from the posterior distribution and use these
samples to estimate the quantities of interest.
Both Bayesian and classical analysis methods have their advantages and disadvantages. Your choice of
method might depend on the goals of your data analysis. If prior information is available, such as in the form
of expert opinion or historical knowledge, and you want to incorporate this information into the analysis,
then you might consider Bayesian methods. In addition, if you want to communicate your findings in terms
of probability notions that can be more easily understood by nonstatisticians, Bayesian methods might be
appropriate. The Bayesian paradigm can provide a framework for answering specific scientific questions
that a single point estimate cannot sufficiently address. On the other hand, if you are interested in estimating
parameters and in formulating inferences based on the properties of the parameter estimators, then there is no
need to use Bayesian analysis. When the sample size is large, Bayesian inference often provides results for
parametric models that are very similar to the results produced by classical, frequentist methods.
For more information, see Chapter 7, “Introduction to Bayesian Analysis Procedures.”
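As a small illustration of simulation-based Bayesian computation in SAS/STAT, several procedures (for example, GENMOD and PHREG) provide a BAYES statement that draws posterior samples by Markov chain Monte Carlo. The sketch below is hypothetical (data set and variable names are placeholders) and relies on the default priors unless you specify otherwise.

   /* Minimal sketch (hypothetical data set and variable names): Bayesian analysis
      of a Poisson regression model. The BAYES statement requests MCMC sampling
      from the posterior distribution; NBI= sets the burn-in, NMC= the number of
      posterior samples, and SEED= makes the run reproducible. */
   proc genmod data=Counts;
      model events = exposure / dist=poisson link=log;
      bayes seed=27513 nbi=2000 nmc=10000;
   run;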
Classical Estimation Principles
An estimation principle captures the set of rules and procedures by which parameter estimates are derived.
When an estimation principle “meets” a statistical model, the result is an estimation problem, the solution of which is the set of parameter estimates. For example, if you apply the estimation principle of least squares to the
SLR model $Y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, the estimation problem is to find those values $\widehat{\beta}_0$ and $\widehat{\beta}_1$ that minimize

$$\sum_{i=1}^{n} \left(y_i - \widehat{\beta}_0 - \widehat{\beta}_1 x_i\right)^2$$
The solutions are the least squares estimators.
The two most important classes of estimation principles in statistical modeling are the least squares principle
and the likelihood principle. All principles have in common that they provide a metric by which you measure
the distance between the data and the model. They differ in the nature of the metric; least squares relies on a
geometric measure of distance, while likelihood inference is based on a distance that measures plausibility.
Least Squares
The idea of the ordinary least squares (OLS) principle is to choose parameter estimates that minimize the
squared distance between the data and the model. In terms of the general, additive model,
$$Y_i = f(x_{i1}, \ldots, x_{ik}; \beta_1, \ldots, \beta_p) + \epsilon_i$$

the OLS principle minimizes

$$\mathrm{SSE} = \sum_{i=1}^{n} \left(y_i - f(x_{i1}, \ldots, x_{ik}; \beta_1, \ldots, \beta_p)\right)^2$$
The least squares principle is sometimes called “nonparametric” in the sense that it does not require the
distributional specification of the response or the error term, but it might be better termed “distributionally
agnostic.” In an additive-error model it is only required that the model errors have zero mean. For example,
the specification
$$Y_i = \beta_0 + \beta_1 x_i + \epsilon_i, \qquad \mathrm{E}[\epsilon_i] = 0$$

is sufficient to derive ordinary least squares (OLS) estimators for $\beta_0$ and $\beta_1$ and to study a number of their properties. It is easy to show that the OLS estimators in this SLR model are
$$\widehat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})\left(Y_i - \bar{Y}\right)}{\sum_{i=1}^{n} (x_i - \bar{x})^2} \qquad\qquad \widehat{\beta}_0 = \bar{Y} - \widehat{\beta}_1 \bar{x}$$
Based on the assumption of a zero mean of the model errors, you can show that these estimators are unbiased,
$\mathrm{E}[\widehat{\beta}_1] = \beta_1$, $\mathrm{E}[\widehat{\beta}_0] = \beta_0$. However, without further assumptions about the distribution of the $\epsilon_i$, you cannot derive the variability of the least squares estimators or perform statistical inferences such as hypothesis tests or confidence intervals. In addition, depending on the distribution of the $\epsilon_i$, other forms of least squares
estimation can be more efficient than OLS estimation.
The conditions for which ordinary least squares estimation is efficient are zero mean, homoscedastic,
uncorrelated model errors. Mathematically,
$$\mathrm{E}[\epsilon_i] = 0 \qquad \mathrm{Var}[\epsilon_i] = \sigma^2 \qquad \mathrm{Cov}[\epsilon_i, \epsilon_j] = 0 \ \text{ if } i \neq j$$
The second and third assumptions are met if the errors have an iid distribution—that is, if they are independent and identically distributed. Note, however, that the notion of stochastic independence is stronger than that of absence of correlation. Only if the data are normally distributed does the latter imply the former.
The various other forms of the least squares principle are motivated by different extensions of these assumptions in order to find more efficient estimators.
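In SAS/STAT, ordinary least squares estimates for a linear mean function can be obtained, for example, with the REG procedure. The sketch below uses hypothetical data set and variable names; the confidence limits requested by the CLB option rely on the additional distributional assumptions discussed above.

   /* Minimal sketch (hypothetical data set and variable names): OLS fit of the
      simple linear regression of y on x. The CLB option requests confidence
      limits for the parameter estimates. */
   proc reg data=MyData;
      model y = x / clb;
   run;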
Weighted Least Squares
The objective function in weighted least squares (WLS) estimation is
$$\mathrm{SSE}_w = \sum_{i=1}^{n} w_i \left(Y_i - f(x_{i1}, \ldots, x_{ik}; \beta_1, \ldots, \beta_p)\right)^2$$
where $w_i$ is a weight associated with the $i$th observation. A situation where WLS estimation is appropriate is when the errors are uncorrelated but not homoscedastic. If the weights for the observations are proportional to the reciprocals of the error variances, $\mathrm{Var}[\epsilon_i] = \sigma^2/w_i$, then the weighted least squares estimates are best linear unbiased estimators (BLUE). Suppose that the weights $w_i$ are collected in the diagonal matrix $W$ and that the mean function has the form of a linear model. The weighted sum of squares criterion then can be written as

$$\mathrm{SSE}_w = (Y - X\beta)'\, W\, (Y - X\beta)$$
which gives rise to the weighted normal equations
$$(X'WX)\beta = X'WY$$

The resulting WLS estimator of $\beta$ is

$$\widehat{\beta}_w = \left(X'WX\right)^{-1} X'WY$$
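When weights proportional to the reciprocal error variances are available as a variable in the data set, the WEIGHT statement of the REG procedure produces the weighted least squares fit. The following sketch again uses hypothetical names.

   /* Minimal sketch (hypothetical data set and variable names): weighted least
      squares, where the variable w holds weights proportional to 1/Var[eps_i]. */
   proc reg data=MyData;
      weight w;
      model y = x;
   run;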
Iteratively Reweighted Least Squares
If the weights in a least squares problem depend on the parameters, then a change in the parameters also
changes the weight structure of the model. Iteratively reweighted least squares (IRLS) estimation is an
iterative technique that solves a series of weighted least squares problems, where the weights are recomputed
between iterations. IRLS estimation can be used, for example, to derive maximum likelihood estimates in
generalized linear models.
Generalized Least Squares
The previously discussed least squares methods have in common that the observations are assumed to be
uncorrelated—that is, $\mathrm{Cov}[\epsilon_i, \epsilon_j] = 0$ whenever $i \neq j$. The weighted least squares estimation problem is a special case of a more general least squares problem, where the model errors have a general covariance matrix, $\mathrm{Var}[\epsilon] = \Sigma$. Suppose again that the mean function is linear, so that the model becomes

$$Y = X\beta + \epsilon, \qquad \epsilon \sim (0, \Sigma)$$
The generalized least squares (GLS) principle is to minimize the generalized error sum of squares
$$\mathrm{SSE}_g = (Y - X\beta)'\, \Sigma^{-1} (Y - X\beta)$$

This leads to the generalized normal equations

$$(X'\Sigma^{-1}X)\beta = X'\Sigma^{-1}Y$$

and the GLS estimator

$$\widehat{\beta}_g = \left(X'\Sigma^{-1}X\right)^{-1} X'\Sigma^{-1}Y$$

Obviously, WLS estimation is a special case of GLS estimation, where $\Sigma = \sigma^2 W^{-1}$—that is, the model is

$$Y = X\beta + \epsilon, \qquad \epsilon \sim \left(0,\ \sigma^2 W^{-1}\right)$$
Likelihood
There are several forms of likelihood estimation and a large number of offshoot principles derived from
it, such as pseudo-likelihood, quasi-likelihood, composite likelihood, etc. The basic likelihood principle is
maximum likelihood, which asks to estimate the model parameters by those quantities that maximize the
likelihood function of the data. The likelihood function is the joint distribution of the data, but in contrast
to a probability mass or density function, it is thought of as a function of the parameters, given the data.
The heuristic appeal of the maximum likelihood estimates (MLE) is that these are the values that make the
observed data “most likely.” Especially for discrete response data, the value of the likelihood function is the
ordinate of a probability mass function, even if the likelihood is not a probability function. Since a statistical
model is thought of as a representation of the data-generating mechanism, what could be more preferable as
parameter estimates than those values that make it most likely that the data at hand will be observed?
Maximum likelihood estimates, if they exist, have appealing statistical properties. Under fairly mild
conditions, they are best-asymptotic-normal (BAN) estimates—that is, their asymptotic distribution is normal,
and no other estimator has a smaller asymptotic variance. However, their statistical behavior in finite samples
is often difficult to establish, and you have to appeal to the asymptotic results that hold as the sample size tends
to infinity. For example, maximum likelihood estimates are often biased estimates and the bias disappears as
the sample size grows. A famous example is random sampling from a normal distribution. The corresponding
statistical model is
Y_i = \mu + \epsilon_i, \qquad \epsilon_i \;\text{iid}\; N(0, \sigma^2)

where the symbol $\sim$ is read as "is distributed as" and iid is read as "independent and identically distributed."
Under the normality assumption, the density function of yi is
f(y_i; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2}\left( \frac{y_i - \mu}{\sigma} \right)^2 \right\}

and the likelihood for a random sample of size n is

L(\mu, \sigma^2; y) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2}\left( \frac{y_i - \mu}{\sigma} \right)^2 \right\}
Maximizing the likelihood function $L(\mu, \sigma^2; y)$ is equivalent to maximizing the log-likelihood function $\log L = l(\mu, \sigma^2; y)$,

l(\mu, \sigma^2; y) = \sum_{i=1}^{n} \left( -\frac{1}{2}\log\{2\pi\} - \frac{(y_i - \mu)^2}{2\sigma^2} - \log\{\sigma\} \right)
                    = -\frac{1}{2}\left( n\log\{2\pi\} + n\log\{\sigma^2\} + \sum_{i=1}^{n} (y_i - \mu)^2/\sigma^2 \right)
The maximum likelihood estimators of $\mu$ and $\sigma^2$ are thus

\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} Y_i = \bar{Y}

\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} \left(Y_i - \hat{\mu}\right)^2
The MLE of the mean is the sample mean, and it is an unbiased estimator of $\mu$. However, the MLE of the variance $\sigma^2$ is not an unbiased estimator. It has bias

E\left[\hat{\sigma}^2 - \sigma^2\right] = -\frac{1}{n}\sigma^2
As the sample size n increases, the bias vanishes.
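A small SAS/IML sketch with a made-up sample illustrates the point; the factor $n/(n-1)$ converts the biased MLE into the familiar unbiased variance estimator:

proc iml;
   y = {3.1, 4.5, 2.8, 5.0, 3.7, 4.2};          /* hypothetical sample */
   n = nrow(y);
   mu_hat     = sum(y)/n;                       /* MLE of the mean     */
   sigma2_mle = sum((y - mu_hat)##2)/n;         /* biased MLE          */
   sigma2_unb = sum((y - mu_hat)##2)/(n-1);     /* unbiased estimator  */
   print mu_hat sigma2_mle sigma2_unb;
quit;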
For certain classes of models, special forms of likelihood estimation have been developed to maintain the
appeal of likelihood-based statistical inference and to address specific properties that are believed to be
shortcomings:
• The bias in maximum likelihood parameter estimators of variances and covariances has led to the
development of restricted (or residual) maximum likelihood (REML) estimators that play an important
role in mixed models.
• Quasi-likelihood methods do not require that the joint distribution of the data be specified. These
methods derive estimators based on only the first two moments (mean and variance) of the joint
distributions and play an important role in the analysis of correlated data.
• The idea of composite likelihood is applied in situations where the likelihood of the vector of responses is intractable but the likelihoods of components or functions of the full-data likelihood are tractable. For example, instead of the likelihood of Y, you might consider the likelihood of pairwise differences $Y_i - Y_j$.
• The pseudo-likelihood concept is also applied when the likelihood function is intractable, but the
likelihood of a related, simpler model is available. An important difference between quasi-likelihood
and pseudo-likelihood techniques is that the latter make distributional assumptions to obtain a likelihood
function in the pseudo-model. Quasi-likelihood methods do not specify the distributional family.
• The penalized likelihood principle is applied when additional constraints and conditions need to be
imposed on the parameter estimates or the resulting model fit. For example, you might augment the
likelihood with conditions that govern the smoothness of the predictions or that prevent overfitting of
the model.
Least Squares or Likelihood
For many statistical modeling problems, you have a choice between a least squares principle and the maximum
likelihood principle. Table 3.1 compares these two basic principles.
Table 3.1 Least Squares and Maximum Likelihood

Criterion: Requires specification of the joint distribution of the data
   Least Squares: No, but in order to perform confirmatory inference (tests, confidence intervals), a distributional assumption is needed, or an appeal to asymptotics.
   Maximum Likelihood: Yes; no progress can be made with the genuine likelihood principle without knowing the distribution of the data.

Criterion: All parameters of the model are estimated
   Least Squares: No. In the additive-error type models, least squares provides estimates of only the parameters in the mean function. The residual variance, for example, must be estimated by some other method, typically by using the mean squared error of the model.
   Maximum Likelihood: Yes.

Criterion: Estimates always exist
   Least Squares: Yes, but they might not be unique, such as when the X matrix is singular.
   Maximum Likelihood: No, maximum likelihood estimates do not exist for all estimation problems.

Criterion: Estimators are biased
   Least Squares: Unbiased, provided that the model is correct, that is, the errors have zero mean.
   Maximum Likelihood: Often biased, but asymptotically unbiased.

Criterion: Estimators are consistent
   Least Squares: Not necessarily, but often true. Sometimes estimators are consistent even in a misspecified model, such as when misspecification is in the covariance structure.
   Maximum Likelihood: Almost always.

Criterion: Estimators are best linear unbiased estimates (BLUE)
   Least Squares: Typically, if the least squares assumptions are met.
   Maximum Likelihood: Not necessarily; estimators are often nonlinear in the data and are often biased.

Criterion: Asymptotically most efficient
   Least Squares: Not necessarily.
   Maximum Likelihood: Typically.

Criterion: Easy to compute
   Least Squares: Yes.
   Maximum Likelihood: No.
Inference Principles for Survey Data
Design-based and model-assisted statistical inference for survey data requires that the randomness due to the
selection mechanism be taken into account. This can require special estimation principles and techniques.
The SURVEYMEANS, SURVEYFREQ, SURVEYREG, SURVEYLOGISTIC, and SURVEYPHREG procedures support design-based and/or model-assisted inference for sample surveys. Suppose $\pi_i$ is the selection probability for unit i in sample S. The inverse of the inclusion probability is known as the sampling weight and is denoted by $w_i$. Briefly, the idea is to apply a relationship that exists in the population to the sample and to take into account the sampling weights. For example, to estimate the finite population total $T_N = \sum_{i \in U_N} y_i$ based on the sample S, you can accumulate the sampled values while properly weighting: $\hat{T} = \sum_{i \in S} w_i y_i$. It is easy to verify that $\hat{T}$ is design-unbiased in the sense that $E[\hat{T} \mid F_N] = T_N$ (see Cochran 1977).
When a statistical model is present, similar ideas apply. For example, if $\beta_{0N}$ and $\beta_{1N}$ are finite population quantities for a simple linear regression working model that minimize the sum of squares

\sum_{i \in U_N} \left( y_i - \beta_{0N} - \beta_{1N} x_i \right)^2

in the population, then the sample-based estimators $\hat{\beta}_{0S}$ and $\hat{\beta}_{1S}$ are obtained by minimizing the weighted sum of squares

\sum_{i \in S} w_i \left( y_i - \hat{\beta}_{0S} - \hat{\beta}_{1S} x_i \right)^2

in the sample, taking into account the inclusion probabilities.
In model-assisted inference, weighted least squares or pseudo-maximum likelihood estimators are commonly used to solve such estimation problems. Maximum pseudo-likelihood or weighted maximum likelihood estimators for survey data maximize a sample-based estimator of the population likelihood. Assume a working model with uncorrelated responses such that the finite population log likelihood is

\sum_{i \in U_N} l(\theta_{1N}, \dots, \theta_{pN}; y_i)

where $\theta_{1N}, \dots, \theta_{pN}$ are finite population quantities. For independent sampling, one possible sample-based estimator of the population log likelihood is

\sum_{i \in S} w_i\, l(\theta_{1N}, \dots, \theta_{pN}; y_i)

Sample-based estimators $\hat{\theta}_{1S}, \dots, \hat{\theta}_{pS}$ are obtained by maximizing this expression.
Design-based and model-based statistical analysis might employ the same statistical model (for example, a
linear regression) and the same estimation principle (for example, weighted least squares), and arrive at the
same estimates. The design-based estimation of the precision of the estimators differs from the model-based
estimation, however. For complex surveys, design-based variance estimates are in general different from their
model-based counterpart. The SAS/STAT procedures for survey data (SURVEYMEANS, SURVEYFREQ,
SURVEYREG, SURVEYLOGISTIC, and SURVEYPHREG procedures) compute design-based variance
estimates for complex survey data. See the section “Variance Estimation” on page 248, in Chapter 14,
“Introduction to Survey Procedures,” for details about design-based variance estimation.
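As a hypothetical sketch (the data set, design variables, and weight variable are made up), a design-based regression analysis might be requested as follows; the STRATA, CLUSTER, and WEIGHT statements convey the design information that the procedure uses for design-based variance estimation:

proc surveyreg data=mysurvey;
   strata region;
   cluster psu;
   weight samplingwt;      /* sampling weights w_i */
   model y = x1 x2;
run;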
Statistical Background
Hypothesis Testing and Power
In statistical hypothesis testing, you typically express the belief that some effect exists in a population by
specifying an alternative hypothesis H1 . You state a null hypothesis H0 as the assertion that the effect does
not exist and attempt to gather evidence to reject H0 in favor of H1 . Evidence is gathered in the form of
sample data, and a statistical test is used to assess H0 . If H0 is rejected but there really is no effect, this is
called a Type I error. The probability of a Type I error is usually designated "alpha" or $\alpha$, and statistical tests are designed to ensure that $\alpha$ is suitably small (for example, less than 0.05).
If there is an effect in the population but H0 is not rejected in the statistical test, then a Type II error has been
committed. The probability of a Type II error is usually designated "beta" or $\beta$. The probability $1 - \beta$ of avoiding a Type II error (that is, correctly rejecting $H_0$ and achieving statistical significance) is called the power of the test.
An important goal in study planning is to ensure an acceptably high level of power. Sample size plays a
prominent role in power computations because the focus is often on determining a sufficient sample size to
achieve a certain power, or assessing the power for a range of different sample sizes.
There are several tools available in SAS/STAT software for power and sample size analysis. PROC POWER
covers a variety of analyses such as t tests, equivalence tests, confidence intervals, binomial proportions,
multiple regression, one-way ANOVA, survival analysis, logistic regression, and the Wilcoxon rank-sum
test. PROC GLMPOWER supports more complex linear models. The Power and Sample Size application
provides a user interface and implements many of the analyses supported in the procedures.
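As a hypothetical illustration, the following PROC POWER step solves for the per-group sample size of a two-sample t test; the mean difference and standard deviation are made-up planning values:

proc power;
   twosamplemeans test=diff
      meandiff  = 5
      stddev    = 12
      power     = 0.9
      npergroup = .;       /* solve for the sample size per group */
run;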
Important Linear Algebra Concepts
A matrix A is a rectangular array of numbers. The order of a matrix with n rows and k columns is $(n \times k)$. The element in row i, column j of A is denoted as $a_{ij}$, and the notation $\left[a_{ij}\right]$ is sometimes used to refer to the two-dimensional row-column array

A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1k} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2k} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nk}
\end{bmatrix} = \left[ a_{ij} \right]
A vector is a one-dimensional array of numbers. A column vector has a single column ($k = 1$). A row vector has a single row ($n = 1$). A scalar is a matrix of order $(1 \times 1)$, that is, a single number. A square matrix has the same row and column order, $n = k$. A diagonal matrix is a square matrix where all off-diagonal elements are zero, $a_{ij} = 0$ if $i \ne j$. The identity matrix I is a diagonal matrix with $a_{ii} = 1$ for all i. The unit vector 1 is a vector where all elements are 1. The unit matrix J is a matrix of all 1s. Similarly, the elements of the null vector and the null matrix are all 0.
Basic matrix operations are as follows:
Addition        If A and B are of the same order, then A + B is the matrix of elementwise sums,
                A + B = \left[ a_{ij} + b_{ij} \right]

Subtraction     If A and B are of the same order, then A - B is the matrix of elementwise differences,
                A - B = \left[ a_{ij} - b_{ij} \right]

Dot product     The dot product of two n-vectors a and b is the sum of their elementwise products,
                a \cdot b = \sum_{i=1}^{n} a_i b_i
                The dot product is also known as the inner product of a and b. Two vectors are said to be orthogonal if their dot product is zero.

Multiplication  Matrices A and B are said to be conformable for AB multiplication if the number of columns in A equals the number of rows in B. Suppose that A is of order $(n \times k)$ and that B is of order $(k \times p)$. The product AB is then defined as the $(n \times p)$ matrix of the dot products of the ith row of A and the jth column of B,
                AB = \left[ a_i \cdot b_j \right]_{n \times p}

Transposition   The transpose of the $(n \times k)$ matrix A is denoted as A' and is obtained by interchanging the rows and columns,
                A' = \begin{bmatrix}
                a_{11} & a_{21} & a_{31} & \cdots & a_{n1} \\
                a_{12} & a_{22} & a_{32} & \cdots & a_{n2} \\
                a_{13} & a_{23} & a_{33} & \cdots & a_{n3} \\
                \vdots & \vdots & \vdots & \ddots & \vdots \\
                a_{1k} & a_{2k} & a_{3k} & \cdots & a_{nk}
                \end{bmatrix} = \left[ a_{ji} \right]

A symmetric matrix is equal to its transpose, $A = A'$. The inner product of two $(n \times 1)$ column vectors a and b is $a \cdot b = a'b$.
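These operations map directly onto SAS/IML operators, as the following sketch with made-up matrices shows:

proc iml;
   a = {1 2, 3 4};
   b = {5 6, 7 8};
   s   = a + b;             /* elementwise sum                     */
   d   = a - b;             /* elementwise difference              */
   p   = a * b;             /* matrix product (conformable)        */
   at  = t(a);              /* transpose, also written as a`       */
   u   = {1, 2, 3};
   v   = {4, 5, 6};
   dot = u`*v;              /* dot (inner) product of two vectors  */
   print s d p at dot;
quit;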
Matrix Inversion
Regular Inverses
The right inverse of a matrix A is the matrix that yields the identity when A is postmultiplied by it. Similarly, the left inverse of A yields the identity if A is premultiplied by it. A is said to be invertible and B is said to be the inverse of A, if B is its right and left inverse, $BA = AB = I$. This requires A to be square and nonsingular. The inverse of a matrix A is commonly denoted as $A^{-1}$. The following results are useful in manipulating inverse matrices (assuming both A and C are invertible):

\left(A^{-1}\right)^{-1} = A
A\,A^{-1} = A^{-1}A = I
\left(A'\right)^{-1} = \left(A^{-1}\right)'
\left(AC\right)^{-1} = C^{-1}A^{-1}
\mathrm{rank}(A) = \mathrm{rank}\left(A^{-1}\right)
If D is a diagonal matrix with nonzero entries on the diagonal, that is, $D = \mathrm{diag}(d_1, \dots, d_n)$, then $D^{-1} = \mathrm{diag}(1/d_1, \dots, 1/d_n)$. If D is a block-diagonal matrix whose blocks are invertible, then

D = \begin{bmatrix}
D_1 & 0   & 0   & \cdots & 0 \\
0   & D_2 & 0   & \cdots & 0 \\
0   & 0   & D_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0   & 0   & 0   & \cdots & D_n
\end{bmatrix}
\qquad
D^{-1} = \begin{bmatrix}
D_1^{-1} & 0        & 0        & \cdots & 0 \\
0        & D_2^{-1} & 0        & \cdots & 0 \\
0        & 0        & D_3^{-1} & \cdots & 0 \\
\vdots   & \vdots   & \vdots   & \ddots & \vdots \\
0        & 0        & 0        & \cdots & D_n^{-1}
\end{bmatrix}
In statistical applications the following two results are particularly important, because they can significantly
reduce the computational burden in working with inverse matrices.
Partitioned Matrix   Suppose A is a nonsingular matrix that is partitioned as

A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}

Then, provided that all the inverses exist, the inverse of A is given by

A^{-1} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}

where $B_{11} = \left(A_{11} - A_{12}A_{22}^{-1}A_{21}\right)^{-1}$, $B_{12} = -B_{11}A_{12}A_{22}^{-1}$, $B_{21} = -A_{22}^{-1}A_{21}B_{11}$, and $B_{22} = \left(A_{22} - A_{21}A_{11}^{-1}A_{12}\right)^{-1}$.

Patterned Sum        Suppose R is $(n \times n)$ nonsingular, G is $(k \times k)$ nonsingular, and B and C are $(n \times k)$ and $(k \times n)$ matrices, respectively. Then the inverse of R + BGC is given by

\left(R + BGC\right)^{-1} = R^{-1} - R^{-1}B\left(G^{-1} + CR^{-1}B\right)^{-1}CR^{-1}
This formula is particularly useful if k << n and R has a simple form that is easy to invert. This case arises, for example, in mixed models where R might be a diagonal or block-diagonal matrix, and $B = C'$.
Another situation where this formula plays a critical role is in the computation of regression diagnostics, such as in determining the effect of removing an observation from the analysis. Suppose that $A = X'X$ represents the crossproduct matrix in the linear model $E[Y] = X\beta$. If $x_i'$ is the ith row of the X matrix, then $\left(X'X - x_ix_i'\right)$ is the crossproduct matrix in the same model with the ith observation removed. Identifying $B = x_i$, $C = x_i'$, and $G = -I$ in the preceding inversion formula, you can obtain the expression for the inverse of the crossproduct matrix:

\left(X'X - x_ix_i'\right)^{-1} = \left(X'X\right)^{-1} + \frac{\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1}}{1 - x_i'\left(X'X\right)^{-1}x_i}
This expression for the inverse of the reduced data crossproduct matrix enables you to
compute “leave-one-out” deletion diagnostics in linear models without refitting the model.
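The following SAS/IML sketch verifies the formula numerically for a small, made-up design matrix by comparing the updated inverse to the inverse computed directly from the reduced data:

proc iml;
   x    = {1 1, 1 2, 1 3, 1 4, 1 5};     /* made-up design matrix       */
   xpxi = inv(x`*x);                     /* inverse crossproduct matrix */
   i    = 3;                             /* observation to remove       */
   xi   = x[i, ]`;                       /* x_i as a column vector      */
   upd  = xpxi + (xpxi*xi*xi`*xpxi) / (1 - xi`*xpxi*xi);
   xred = x[setdif(1:nrow(x), i), ];     /* design without row i        */
   dir  = inv(xred`*xred);               /* direct computation          */
   print upd dir;
quit;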
Generalized Inverse Matrices
If A is rectangular (not square) or singular, then it is not invertible and the matrix $A^{-1}$ does not exist. Suppose you want to find a solution to simultaneous linear equations of the form

Ab = c

If A is square and nonsingular, then the unique solution is $b = A^{-1}c$. In statistical applications, the case where A is $(n \times k)$ rectangular is less important than the case where A is a $(k \times k)$ square matrix of rank less than k. For example, the normal equations in ordinary least squares (OLS) estimation in the model $Y = X\beta + \epsilon$ are

\left(X'X\right)\beta = X'Y

A generalized inverse matrix is a matrix $A^{-}$ such that $A^{-}c$ is a solution to the linear system. In the OLS example, a solution can be found as $\left(X'X\right)^{-}X'Y$, where $\left(X'X\right)^{-}$ is a generalized inverse of $X'X$.
The following four conditions are often associated with generalized inverses. For the square or rectangular
matrix A there exist matrices G that satisfy

(i)   AGA = A
(ii)  GAG = G
(iii) (AG)' = AG
(iv)  (GA)' = GA
The matrix G that satisfies all four conditions is unique and is called the Moore-Penrose inverse, after the first
published work on generalized inverses by Moore (1920) and the subsequent definition by Penrose (1955).
Only the first condition is required, however, to provide a solution to the linear system above.
Pringle and Rayner (1971) introduced a numbering system to distinguish between different types of generalized inverses. A matrix that satisfies only condition (i) is a g1 -inverse. The g2 -inverse satisfies conditions (i)
and (ii). It is also called a reflexive generalized inverse. Matrices satisfying conditions (i)–(iii) or conditions
(i), (ii), and (iv) are g3 -inverses. Note that a matrix that satisfies the first three conditions is a right generalized
inverse, and a matrix that satisfies conditions (i), (ii), and (iv) is a left generalized inverse. For example, if B is $(n \times k)$ of rank k, then $\left(B'B\right)^{-1}B'$ is a left generalized inverse of B. The notation $g_4$-inverse for the
Moore-Penrose inverse, satisfying conditions (i)–(iv), is often used by extension, but note that Pringle and
Rayner (1971) do not use it; rather, they call such a matrix “the” generalized inverse.
If the $(n \times k)$ matrix X is rank-deficient, that is, $\mathrm{rank}(X) < \min\{n, k\}$, then the system of equations

\left(X'X\right)\beta = X'Y

does not have a unique solution. A particular solution depends on the choice of the generalized inverse. However, some aspects of the statistical inference are invariant to the choice of the generalized inverse. If G is a generalized inverse of $X'X$, then $XGX'$ is invariant to the choice of G. This result comes into play, for example, when you are computing predictions in an OLS model with a rank-deficient X matrix, since it implies that the predicted values

X\hat{\beta} = X\left(X'X\right)^{-}X'y

are invariant to the choice of $\left(X'X\right)^{-}$.
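A small SAS/IML sketch with made-up, rank-deficient data: the GINV function returns the Moore-Penrose generalized inverse, and the predicted values $X\hat{\beta}$ computed from it would be the same for any other choice of generalized inverse of $X'X$:

proc iml;
   x = {1 0 1, 1 0 1, 0 1 1, 0 1 1, 1 1 2};   /* third column = sum of first two    */
   y = {2, 3, 4, 5, 7};
   g    = ginv(x`*x);                          /* one generalized inverse            */
   bhat = g*x`*y;                              /* a solution to the normal equations */
   pred = x*bhat;                              /* invariant to the choice of g       */
   print bhat pred;
quit;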
Matrix Differentiation
Taking the derivative of expressions involving matrices is a frequent task in statistical estimation. Objective
functions that are to be minimized or maximized are usually written in terms of model matrices and/or vectors
whose elements depend on the unknowns of the estimation problem. Suppose that A and B are real matrices whose elements depend on the scalar quantities $\beta$ and $\theta$; that is, $A = \left[a_{ij}(\beta, \theta)\right]$, and similarly for B.
The following are useful results in finding the derivative of elements of a matrix and of functions involving a
matrix. For more in-depth discussion of matrix differentiation and matrix calculus, see, for example, Magnus
and Neudecker (1999) and Harville (1997).
The derivative of A with respect to $\beta$ is denoted $\dot{A}_\beta$ and is the matrix of the first derivatives of the elements of A:

\dot{A}_\beta = \frac{\partial A}{\partial \beta} = \left[ \frac{\partial a_{ij}(\beta, \theta)}{\partial \beta} \right]

Similarly, the second derivative of A with respect to $\beta$ and $\theta$ is the matrix of the second derivatives

\ddot{A}_{\beta\theta} = \frac{\partial^2 A}{\partial \beta\,\partial \theta} = \left[ \frac{\partial^2 a_{ij}(\beta, \theta)}{\partial \beta\,\partial \theta} \right]
The following are some basic results involving sums, products, and traces of matrices:

\frac{\partial}{\partial\beta}\, c_1 A = c_1 \dot{A}_\beta
\frac{\partial}{\partial\beta}\, (A + B) = \dot{A}_\beta + \dot{B}_\beta
\frac{\partial}{\partial\beta}\, (c_1 A + c_2 B) = c_1 \dot{A}_\beta + c_2 \dot{B}_\beta
\frac{\partial}{\partial\beta}\, AB = A\dot{B}_\beta + \dot{A}_\beta B
\frac{\partial}{\partial\beta}\, \mathrm{trace}(A) = \mathrm{trace}\left(\dot{A}_\beta\right)
\frac{\partial}{\partial\beta}\, \mathrm{trace}(AB) = \mathrm{trace}\left(A\dot{B}_\beta\right) + \mathrm{trace}\left(\dot{A}_\beta B\right)
The next set of results is useful in finding the derivative of elements of A and of functions of A, if A is a nonsingular matrix:

\frac{\partial}{\partial\beta}\, x'A^{-1}x = -x'A^{-1}\dot{A}_\beta A^{-1}x

\frac{\partial}{\partial\beta}\, A^{-1} = -A^{-1}\dot{A}_\beta A^{-1}

\frac{\partial}{\partial\beta}\, |A| = |A|\,\mathrm{trace}\left(A^{-1}\dot{A}_\beta\right)

\frac{\partial}{\partial\beta}\, \log\{|A|\} = \frac{1}{|A|}\,\frac{\partial}{\partial\beta}|A| = \mathrm{trace}\left(A^{-1}\dot{A}_\beta\right)

\frac{\partial^2}{\partial\beta\,\partial\theta}\, A^{-1} = A^{-1}\left( \dot{A}_\theta A^{-1}\dot{A}_\beta - \ddot{A}_{\beta\theta} + \dot{A}_\beta A^{-1}\dot{A}_\theta \right)A^{-1}

\frac{\partial^2}{\partial\beta\,\partial\theta}\, \log\{|A|\} = \mathrm{trace}\left(A^{-1}\ddot{A}_{\beta\theta}\right) - \mathrm{trace}\left(A^{-1}\dot{A}_\beta A^{-1}\dot{A}_\theta\right)
Now suppose that a and b are column vectors that depend on $\beta$ and/or $\theta$ and that x is a vector of constants. The following results are useful for manipulating derivatives of linear and quadratic forms:

\frac{\partial}{\partial x}\, a'x = a

\frac{\partial}{\partial x'}\, Bx = B

\frac{\partial}{\partial x}\, x'Bx = \left(B + B'\right)x

\frac{\partial^2}{\partial x\,\partial x'}\, x'Bx = B + B'
Matrix Decompositions
To decompose a matrix is to express it as a function, typically a product, of other matrices that have particular properties such as orthogonality, diagonality, or triangularity. For example, the Cholesky decomposition of a symmetric positive definite matrix A is $CC' = A$, where C is a lower-triangular matrix. The spectral decomposition of a symmetric matrix is $A = PDP'$, where D is a diagonal matrix and P is an orthogonal matrix.
Matrix decompositions play an important role in statistical theory as well as in statistical computations. Calculations in terms of decompositions can have greater numerical stability. Decompositions are often necessary
to extract information about matrices, such as matrix rank, eigenvalues, or eigenvectors. Decompositions are
also used to form special transformations of matrices, such as to form a “square-root” matrix. This section
briefly mentions several decompositions that are particularly prevalent and important.
LDU, LU, and Cholesky Decomposition
Every square matrix A, whether it is positive definite or not, can be expressed in the form $A = LDU$, where L is a unit lower-triangular matrix, D is a diagonal matrix, and U is a unit upper-triangular matrix. (The diagonal elements of a unit triangular matrix are 1.) Because of the arrangement of the matrices, the decomposition is called the LDU decomposition. Since you can absorb the diagonal matrix into the triangular matrices, the decomposition

A = \left(LD^{1/2}\right)\left(D^{1/2}U\right) = L^{*}U^{*}

is also referred to as the LU decomposition of A.

If the matrix A is positive definite, then the diagonal elements of D are positive and the LDU decomposition is unique. If A is also symmetric, then the unique decomposition takes the form $A = U'DU$, where U is unit upper-triangular and D is diagonal with positive elements. Absorbing the square root of D into U, $C = D^{1/2}U$, the decomposition is known as the Cholesky decomposition of a positive definite matrix:

A = U'D^{1/2}D^{1/2}U = C'C

where C is upper triangular.

If A is symmetric but only nonnegative definite of rank k, rather than being positive definite of full rank, then it has an extended Cholesky decomposition as follows. Let C denote the lower-triangular matrix such that

C = \begin{bmatrix} C_{kk} & 0 \\ 0 & 0 \end{bmatrix}

Then $A = CC'$.
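In SAS/IML, the ROOT function returns an upper-triangular matrix U such that $U'U = A$, which corresponds to the factor C in the decomposition $A = C'C$ above; a sketch with a made-up positive definite matrix:

proc iml;
   a = {4 2 2,
        2 3 1,
        2 1 5};              /* symmetric positive definite (made up) */
   u = root(a);              /* upper-triangular Cholesky factor      */
   check = u`*u;             /* reproduces A                          */
   print u check;
quit;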
Spectral Decomposition
Suppose that A is an $(n \times n)$ symmetric matrix. Then there exists an orthogonal matrix Q and a diagonal matrix D such that $A = QDQ'$. Of particular importance is the case where the orthogonal matrix is also orthonormal; that is, its column vectors have unit norm. Denote this orthonormal matrix as P. Then the corresponding diagonal matrix, say $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, contains the eigenvalues of A. The spectral decomposition of A can be written as

A = P\Lambda P' = \sum_{i=1}^{n} \lambda_i\, p_i p_i'
where pi denotes the ith column vector of P. The right-side expression decomposes A into a sum of rank-1
matrices, and the weight of each contribution is equal to the eigenvalue associated with the ith eigenvector.
The sum furthermore emphasizes that the rank of A is equal to the number of nonzero eigenvalues.
Harville (1997, p. 538) refers to the spectral decomposition of A as the decomposition that takes the previous sum one step further and accumulates contributions associated with the distinct eigenvalues. If $\lambda_1^*, \dots, \lambda_k^*$ are the distinct eigenvalues and $E_j = \sum p_i p_i'$, where the sum is taken over the set of columns for which $\lambda_i = \lambda_j^*$, then

A = \sum_{j=1}^{k} \lambda_j^* E_j
You can employ the spectral decomposition of a nonnegative definite symmetric matrix to form a "square-root" matrix of A. Suppose that $\Lambda^{1/2}$ is the diagonal matrix containing the square roots of the $\lambda_i$. Then $B = P\Lambda^{1/2}P'$ is a square-root matrix of A in the sense that $BB = A$, because

BB = P\Lambda^{1/2}P'\,P\Lambda^{1/2}P' = P\Lambda^{1/2}\Lambda^{1/2}P' = P\Lambda P'
Generating the Moore-Penrose inverse of a matrix based on the spectral decomposition is also simple. Denote by $\Delta$ the diagonal matrix with typical element

\delta_i = \begin{cases} 1/\lambda_i & \lambda_i \ne 0 \\ 0 & \lambda_i = 0 \end{cases}

Then the matrix $P\Delta P' = \sum \delta_i\, p_i p_i'$ is the Moore-Penrose ($g_4$-generalized) inverse of A.
Singular-Value Decomposition
The singular-value decomposition is related to the spectral decomposition of a matrix, but it is more general.
The singular-value decomposition can be applied to any matrix. Let B be an $(n \times p)$ matrix of rank k. Then there exist orthogonal matrices P and Q of order $(n \times n)$ and $(p \times p)$, respectively, and a diagonal matrix D such that

P'BQ = D = \begin{bmatrix} D_1 & 0 \\ 0 & 0 \end{bmatrix}
where $D_1$ is a diagonal matrix of order k. The diagonal elements of $D_1$ are strictly positive. As with the spectral decomposition, this result can be written as a decomposition of B into a weighted sum of rank-1 matrices

B = PDQ' = \sum_{i=1}^{n} d_i\, p_i q_i'
The scalars $d_1, \dots, d_k$ are called the singular values of the matrix B. They are the positive square roots of the nonzero eigenvalues of the matrix $B'B$. If the singular-value decomposition is applied to a symmetric, nonnegative definite matrix A, then the singular values $d_1, \dots, d_n$ are the nonzero eigenvalues of A and the singular-value decomposition is the same as the spectral decomposition.
As with the spectral decomposition, you can use the results of the singular-value decomposition to generate the Moore-Penrose inverse of a matrix. If B is $(n \times p)$ with singular-value decomposition $PDQ'$, and if $\Delta$ is a diagonal matrix with typical element

\delta_i = \begin{cases} 1/d_i & |d_i| \ne 0 \\ 0 & d_i = 0 \end{cases}

then $Q\Delta P'$ is the $g_4$-generalized inverse of B.
Expectations of Random Variables and Vectors
If Y is a discrete random variable with mass function p(y) and support (possible values) $y_1, y_2, \dots$, then the expectation (expected value) of Y is defined as

E[Y] = \sum_{j=1}^{\infty} y_j\, p(y_j)

provided that $\sum |y_j|\,p(y_j) < \infty$; otherwise the sum in the definition is not well defined. The expected value of a function h(y) is similarly defined: provided that $\sum |h(y_j)|\,p(y_j) < \infty$,

E[h(Y)] = \sum_{j=1}^{\infty} h(y_j)\, p(y_j)
For continuous random variables, similar definitions apply, but summation is replaced by integration over the support of the random variable. If X is a continuous random variable with density function f(x), and $\int |x| f(x)\,dx < \infty$, then the expectation of X is defined as

E[X] = \int_{-\infty}^{\infty} x\, f(x)\, dx
The expected value of a random variable is also called its mean or its first moment. A particularly important function of a random variable is $h(Y) = (Y - E[Y])^2$. The expectation of h(Y) is called the variance of Y or the second central moment of Y. When you study the properties of multiple random variables, you might be interested in aspects of their joint distribution. The covariance between random variables Y and X is defined as the expected value of the function $(Y - E[Y])(X - E[X])$, where the expectation is taken under the bivariate joint distribution of Y and X:

Cov[Y, X] = E[(Y - E[Y])(X - E[X])] = E[YX] - E[Y]E[X] = \int\!\!\int x\,y\, f(x, y)\, dx\,dy \;-\; E[Y]E[X]

The covariance between a random variable and itself is the variance, $Cov[Y, Y] = Var[Y]$.
In statistical applications and formulas, random variables are often collected into vectors. For example, a
random sample of size n from the distribution of Y generates a random vector of order $(n \times 1)$,

Y = \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}
The expected value of the $(n \times 1)$ random vector Y is the vector of the means of the elements of Y:

E[Y] = \left[ E[Y_i] \right] = \begin{bmatrix} E[Y_1] \\ E[Y_2] \\ \vdots \\ E[Y_n] \end{bmatrix}
It is often useful to directly apply rules about working with means, variances, and covariances of random
vectors. To develop these rules, suppose that Y and U denote two random vectors with typical elements $Y_1, \dots, Y_n$ and $U_1, \dots, U_k$. Further suppose that A and B are constant (nonstochastic) matrices, that a is a constant vector, and that the $c_i$ are scalar constants.
The following rules enable you to derive the mean of a linear function of a random vector:
E[A] = A
E[Y + a] = E[Y] + a
E[AY + a] = A\,E[Y] + a
E[Y + U] = E[Y] + E[U]
The covariance matrix of Y and U is the $(n \times k)$ matrix whose typical element in row i, column j is the covariance between $Y_i$ and $U_j$. The covariance matrix between two random vectors is frequently denoted with the Cov "operator":

Cov[Y, U] = \left[ Cov[Y_i, U_j] \right]
          = E\left[ (Y - E[Y])(U - E[U])' \right] = E\left[YU'\right] - E[Y]\,E[U]'
          = \begin{bmatrix}
            Cov[Y_1, U_1] & Cov[Y_1, U_2] & Cov[Y_1, U_3] & \cdots & Cov[Y_1, U_k] \\
            Cov[Y_2, U_1] & Cov[Y_2, U_2] & Cov[Y_2, U_3] & \cdots & Cov[Y_2, U_k] \\
            Cov[Y_3, U_1] & Cov[Y_3, U_2] & Cov[Y_3, U_3] & \cdots & Cov[Y_3, U_k] \\
            \vdots & \vdots & \vdots & \ddots & \vdots \\
            Cov[Y_n, U_1] & Cov[Y_n, U_2] & Cov[Y_n, U_3] & \cdots & Cov[Y_n, U_k]
            \end{bmatrix}
The variance matrix of a random vector Y is the covariance matrix between Y and itself. The variance matrix is frequently denoted with the Var "operator":

Var[Y] = Cov[Y, Y] = \left[ Cov[Y_i, Y_j] \right]
       = E\left[ (Y - E[Y])(Y - E[Y])' \right] = E\left[YY'\right] - E[Y]\,E[Y]'
       = \begin{bmatrix}
         Cov[Y_1, Y_1] & Cov[Y_1, Y_2] & Cov[Y_1, Y_3] & \cdots & Cov[Y_1, Y_n] \\
         Cov[Y_2, Y_1] & Cov[Y_2, Y_2] & Cov[Y_2, Y_3] & \cdots & Cov[Y_2, Y_n] \\
         Cov[Y_3, Y_1] & Cov[Y_3, Y_2] & Cov[Y_3, Y_3] & \cdots & Cov[Y_3, Y_n] \\
         \vdots & \vdots & \vdots & \ddots & \vdots \\
         Cov[Y_n, Y_1] & Cov[Y_n, Y_2] & Cov[Y_n, Y_3] & \cdots & Cov[Y_n, Y_n]
         \end{bmatrix}
       = \begin{bmatrix}
         Var[Y_1] & Cov[Y_1, Y_2] & Cov[Y_1, Y_3] & \cdots & Cov[Y_1, Y_n] \\
         Cov[Y_2, Y_1] & Var[Y_2] & Cov[Y_2, Y_3] & \cdots & Cov[Y_2, Y_n] \\
         Cov[Y_3, Y_1] & Cov[Y_3, Y_2] & Var[Y_3] & \cdots & Cov[Y_3, Y_n] \\
         \vdots & \vdots & \vdots & \ddots & \vdots \\
         Cov[Y_n, Y_1] & Cov[Y_n, Y_2] & Cov[Y_n, Y_3] & \cdots & Var[Y_n]
         \end{bmatrix}
Because the variance matrix contains variances on the diagonal and covariances in the off-diagonal positions,
it is also referred to as the variance-covariance matrix of the random vector Y.
If the elements of the covariance matrix $Cov[Y, U]$ are zero, the random vectors are uncorrelated. If Y and U are normally distributed, then a zero covariance matrix implies that the vectors are stochastically independent. If the off-diagonal elements of the variance matrix $Var[Y]$ are zero, the elements of the random vector Y are uncorrelated. If Y is normally distributed, then a diagonal variance matrix implies that its elements are stochastically independent.

Suppose that A and B are constant (nonstochastic) matrices and that $c_i$ denotes a scalar constant. The following results are useful in manipulating covariance matrices:

Cov[AY, U] = A\,Cov[Y, U]
Cov[Y, BU] = Cov[Y, U]\,B'
Cov[AY, BU] = A\,Cov[Y, U]\,B'
Cov[c_1Y_1 + c_2U_1,\; c_3Y_2 + c_4U_2] = c_1c_3\,Cov[Y_1, Y_2] + c_1c_4\,Cov[Y_1, U_2] + c_2c_3\,Cov[U_1, Y_2] + c_2c_4\,Cov[U_1, U_2]
Since $Cov[Y, Y] = Var[Y]$, these results can be applied to produce the following results, useful in manipulating variances of random vectors:

Var[A] = 0
Var[AY] = A\,Var[Y]\,A'
Var[Y + x] = Var[Y]
Var[x'Y] = x'\,Var[Y]\,x
Var[c_1Y] = c_1^2\,Var[Y]
Var[c_1Y + c_2U] = c_1^2\,Var[Y] + c_2^2\,Var[U] + 2c_1c_2\,Cov[Y, U]
Another area where expectation rules are helpful is quadratic forms in random variables. These forms arise particularly in the study of linear statistical models and in linear statistical inference. Linear inference is statistical inference about linear functions of random variables, even if those random variables are defined through nonlinear models. For example, the parameter estimator $\hat{\theta}$ might be derived in a nonlinear model, but this does not prevent statistical questions from being raised that can be expressed through linear functions of $\theta$; for example,

H_0\colon \begin{cases} \theta_1 - 2\theta_2 = 0 \\ \theta_2 - \theta_3 = 0 \end{cases}

If A is a matrix of constants and Y is a random vector, then

E\left[Y'AY\right] = \mathrm{trace}\left(A\,Var[Y]\right) + E[Y]'A\,E[Y]
Mean Squared Error
The mean squared error is arguably the most important criterion used to evaluate the performance of a
predictor or an estimator. (The subtle distinction between predictors and estimators is that random variables
are predicted and constants are estimated.) The mean squared error is also useful to relay the concepts of
bias, precision, and accuracy in statistical estimation. In order to examine a mean squared error, you need a
target of estimation or prediction, and a predictor or estimator that is a function of the data. Suppose that the target, whether a constant or a random variable, is denoted as U. The mean squared error of the estimator or predictor T(Y) for U is

MSE[T(Y); U] = E\left[ (T(Y) - U)^2 \right]
The reason for using a squared difference to measure the “loss” between T .Y/ and U is mostly convenience;
properties of squared differences involving random variables are more easily examined than, say, absolute
differences. The reason for taking an expectation is to remove the randomness of the squared difference by
averaging over the distribution of the data.
Consider first the case where the target U is a constant, say the parameter $\beta$, and denote the mean of the estimator T(Y) as $\mu_T$. The mean squared error can then be decomposed as

MSE[T(Y); \beta] = E\left[ (T(Y) - \beta)^2 \right]
                 = E\left[ (T(Y) - \mu_T)^2 \right] + E\left[ (\beta - \mu_T)^2 \right]
                 = Var[T(Y)] + (\beta - \mu_T)^2
The mean squared error thus comprises the variance of the estimator and the squared bias. The two components can be associated with an estimator's precision (small variance) and its accuracy (small bias). If T(Y) is an unbiased estimator of $\beta$, that is, if $E[T(Y)] = \beta$, then the mean squared error is simply the variance of the estimator. By choosing an estimator that has minimum variance, you also choose an estimator
that has minimum mean squared error among all unbiased estimators. However, as you can see from the
previous expression, bias is also an “average” property; it is defined as an expectation. It is quite possible to
find estimators in some statistical modeling problems that have smaller mean squared error than a minimum
variance unbiased estimator; these are estimators that permit a certain amount of bias but improve on the
variance. For example, in models where regressors are highly collinear, the ordinary least squares estimator
continues to be unbiased. However, the presence of collinearity can induce poor precision and lead to an
erratic estimator. Ridge regression stabilizes the regression estimates in this situation, and the coefficient
estimates are somewhat biased, but the bias is more than offset by the gains in precision.
When the target U is a random variable, you need to carefully define what an unbiased prediction means. If the statistic and the target have the same expectation, $E[U] = E[T(Y)]$, then

MSE[T(Y); U] = Var[T(Y)] + Var[U] - 2\,Cov[T(Y), U]

In many instances the target U is a new observation that was not part of the analysis. If the data are uncorrelated, then it is reasonable to assume in that instance that the new observation is also not correlated with the data. The mean squared error then reduces to the sum of the two variances. For example, in a linear regression model where U is a new observation $Y_0$ and T(Y) is the regression estimator

\hat{Y}_0 = x_0'\left(X'X\right)^{-1}X'Y

with variance $Var[\hat{Y}_0] = \sigma^2 x_0'\left(X'X\right)^{-1}x_0$, the mean squared prediction error for $Y_0$ is

MSE\left[\hat{Y}_0; Y_0\right] = \sigma^2\left( x_0'\left(X'X\right)^{-1}x_0 + 1 \right)

and the mean squared prediction error for predicting the mean $E[Y_0]$ is

MSE\left[\hat{Y}_0; E[Y_0]\right] = \sigma^2 x_0'\left(X'X\right)^{-1}x_0
Linear Model Theory
This section presents some basic statistical concepts and results for the linear model with homoscedastic,
uncorrelated errors in which the parameters are estimated by ordinary least squares. The model can be written
as
Y = X\beta + \epsilon, \qquad \epsilon \sim (0, \sigma^2 I)

where Y is an $(n \times 1)$ vector and X is an $(n \times k)$ matrix of known constants. The model equation implies the following expected values:

E[Y] = X\beta

Var[Y] = \sigma^2 I \quad\Longleftrightarrow\quad Cov[Y_i, Y_j] = \begin{cases} \sigma^2 & i = j \\ 0 & \text{otherwise} \end{cases}
Finding the Least Squares Estimators
Finding the least squares estimator of $\beta$ can be motivated as a calculus problem or by considering the geometry of least squares. The former approach simply states that the OLS estimator is the vector $\hat{\beta}$ that minimizes the objective function

SSE = (Y - X\beta)'(Y - X\beta)
Applying the differentiation rules from the section "Matrix Differentiation" on page 43 leads to

\frac{\partial}{\partial\beta}\,SSE = \frac{\partial}{\partial\beta}\left( Y'Y - 2Y'X\beta + \beta'X'X\beta \right) = 0 - 2X'Y + 2X'X\beta

\frac{\partial^2}{\partial\beta\,\partial\beta}\,SSE = 2X'X

Consequently, the solution to the normal equations, $X'X\beta = X'Y$, solves $\frac{\partial}{\partial\beta}SSE = 0$, and the fact that the second derivative is nonnegative definite guarantees that this solution minimizes SSE. The geometric argument to motivate ordinary least squares estimation is as follows. Assume that X is of rank k. For any value of $\beta$, such as $\tilde{\beta}$, the following identity holds:

Y = X\tilde{\beta} + \left(Y - X\tilde{\beta}\right)

The vector $X\tilde{\beta}$ is a point in a k-dimensional subspace of $R^n$, and the residual $(Y - X\tilde{\beta})$ is a point in an $(n - k)$-dimensional subspace. The OLS estimator is the value $\hat{\beta}$ that minimizes the distance of $X\tilde{\beta}$ from Y, implying that $X\hat{\beta}$ and $(Y - X\hat{\beta})$ are orthogonal to each other; that is, $(Y - X\hat{\beta})'X\hat{\beta} = 0$. This in turn implies that $\hat{\beta}$ satisfies the normal equations, since

\hat{\beta}'X'Y = \hat{\beta}'X'X\hat{\beta} \quad\Longleftrightarrow\quad X'X\hat{\beta} = X'Y
Full-Rank Case
If X is of full column rank, the OLS estimator is unique and given by

\hat{\beta} = \left(X'X\right)^{-1}X'Y

The OLS estimator is an unbiased estimator of $\beta$; that is,

E\left[\hat{\beta}\right] = E\left[\left(X'X\right)^{-1}X'Y\right] = \left(X'X\right)^{-1}X'E[Y] = \left(X'X\right)^{-1}X'X\beta = \beta
Note that this result holds if $E[Y] = X\beta$; in other words, the condition that the model errors have mean zero is sufficient for the OLS estimator to be unbiased. If the errors are homoscedastic and uncorrelated, the OLS estimator is indeed the best linear unbiased estimator (BLUE) of $\beta$; that is, no other estimator that is a linear function of Y has a smaller mean squared error. The fact that the estimator is unbiased implies that no other linear estimator has a smaller variance. If, furthermore, the model errors are normally distributed, then the OLS estimator has minimum variance among all unbiased estimators of $\beta$, whether they are linear or not. Such an estimator is called a uniformly minimum variance unbiased estimator, or UMVUE.
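A minimal SAS/IML sketch of the full-rank computations with made-up data; SOLVE is used rather than an explicit inverse to solve the normal equations:

proc iml;
   x = {1 1, 1 2, 1 3, 1 4, 1 5};          /* intercept and one regressor  */
   y = {1.1, 1.9, 3.2, 3.9, 5.1};
   beta  = solve(x`*x, x`*y);              /* OLS estimate                 */
   resid = y - x*beta;
   s2    = ssq(resid)/(nrow(x)-ncol(x));   /* unbiased estimate of sigma^2 */
   covb  = s2 # inv(x`*x);                 /* estimated Var[beta_hat]      */
   print beta s2 covb;
quit;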
Rank-Deficient Case
In the case of a rank-deficient X matrix, a generalized inverse is used to solve the normal equations:

\hat{\beta} = \left(X'X\right)^{-}X'Y

Although a $g_1$-inverse is sufficient to solve a linear system, computational expedience and interpretation of the results often dictate the use of a generalized inverse with reflexive properties (that is, a $g_2$-inverse; see the section "Generalized Inverse Matrices" on page 42 for details). Suppose, for example, that the X matrix is partitioned as $X = [X_1\; X_2]$, where $X_1$ is of full column rank and each column in $X_2$ is a linear combination of the columns of $X_1$. The matrix

G_1 = \begin{bmatrix} \left(X_1'X_1\right)^{-1} & -\left(X_1'X_1\right)^{-1}X_1'X_2 \\ X_2'X_1\left(X_1'X_1\right)^{-1} & 0 \end{bmatrix}

is a $g_1$-inverse of $X'X$ and

G_2 = \begin{bmatrix} \left(X_1'X_1\right)^{-1} & 0 \\ 0 & 0 \end{bmatrix}

is a $g_2$-inverse. If the least squares solution is computed with the $g_1$-inverse, then computing the variance of the estimator requires additional matrix operations and storage. On the other hand, the variance of the solution that uses a $g_2$-inverse is proportional to $G_2$:

Var\left[G_1X'Y\right] = \sigma^2\,G_1X'X\,G_1'
Var\left[G_2X'Y\right] = \sigma^2\,G_2X'X\,G_2' = \sigma^2 G_2
If a generalized inverse G of $X'X$ is used to solve the normal equations, then the resulting solution is a biased estimator of $\beta$ (unless $X'X$ is of full rank, in which case the generalized inverse is "the" inverse), since $E[\hat{\beta}] = GX'X\beta$, which is not in general equal to $\beta$.

If you think of estimation as "estimation without bias," then $\hat{\beta}$ is the estimator of something, namely $GX'X\beta$. Since this is not a quantity of interest and since it is not unique (it depends on your choice of G), Searle (1971, p. 169) cautions that in the less-than-full-rank case, $\hat{\beta}$ is a solution to the normal equations and "nothing more."
Analysis of Variance
The identity

Y = X\tilde{\beta} + \left(Y - X\tilde{\beta}\right)

holds for all vectors $\tilde{\beta}$, but only for the least squares solution is the residual $(Y - X\hat{\beta})$ orthogonal to the predicted value $X\hat{\beta}$. Because of this orthogonality, the additive identity holds not only for the vectors themselves, but also for their lengths (Pythagorean theorem):

||Y||^2 = ||X\hat{\beta}||^2 + ||Y - X\hat{\beta}||^2

Note that $X\hat{\beta} = X\left(X'X\right)^{-1}X'Y = HY$ and that $Y - X\hat{\beta} = (I - H)Y = MY$. The matrices H and M = I - H play an important role in the theory of linear models and in statistical computations. Both are projection matrices; that is, they are symmetric and idempotent. (An idempotent matrix A is a square matrix that satisfies AA = A. The eigenvalues of an idempotent matrix take on the values 1 and 0 only.) The matrix H projects onto the subspace of $R^n$ that is spanned by the columns of X. The matrix M projects onto the orthogonal complement of that space. Because of these properties you have $H' = H$, $HH = H$, $M' = M$, $MM = M$, $HM = 0$.

The Pythagorean relationship now can be written in terms of H and M as follows:

||Y||^2 = Y'Y = ||HY||^2 + ||MY||^2 = Y'H'HY + Y'M'MY = Y'HY + Y'MY
If $X'X$ is deficient in rank and a generalized inverse is used to solve the normal equations, then you work instead with the projection matrix $H = X\left(X'X\right)^{-}X'$. Note that if G is a generalized inverse of $X'X$, then $XGX'$, and hence also H and M, are invariant to the choice of G.

The matrix H is sometimes referred to as the "hat" matrix because when you premultiply the vector of observations with H, you produce the fitted values, which are commonly denoted by placing a "hat" over the Y vector, $\hat{Y} = HY$.
The term $Y'Y$ is the uncorrected total sum of squares (SST) of the linear model, $Y'MY$ is the error (residual) sum of squares (SSR), and $Y'HY$ is the uncorrected model sum of squares. This leads to the analysis of variance table shown in Table 3.2.
Table 3.2 Analysis of Variance with Uncorrected Sums of Squares

Source         df            Sum of Squares
Model          rank(X)       SSM = Y'HY = \hat{\beta}'X'Y
Residual       n - rank(X)   SSR = Y'MY = Y'Y - \hat{\beta}'X'Y = \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2
Uncorr. Total  n             SST = Y'Y = \sum_{i=1}^{n} Y_i^2
When the model contains an intercept term, then the analysis of variance is usually corrected for the mean, as
shown in Table 3.3.
Table 3.3 Analysis of Variance with Corrected Sums of Squares

Source           df            Sum of Squares
Model            rank(X) - 1   SSM_c = \hat{\beta}'X'Y - n\bar{Y}^2 = \sum_{i=1}^{n} (\hat{Y}_i - \bar{Y})^2
Residual         n - rank(X)   SSR = Y'MY = Y'Y - \hat{\beta}'X'Y = \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2
Corrected Total  n - 1         SST_c = Y'Y - n\bar{Y}^2 = \sum_{i=1}^{n} (Y_i - \bar{Y})^2
The coefficient of determination, also called the R-square statistic, measures the proportion of the total
variation explained by the linear model. In models with intercept, it is defined as the ratio
R^2 = 1 - \frac{SSR}{SST_c} = 1 - \frac{\sum_{i=1}^{n} \left(Y_i - \hat{Y}_i\right)^2}{\sum_{i=1}^{n} \left(Y_i - \bar{Y}\right)^2}
In models without intercept, the R-square statistic is a ratio of the uncorrected sums of squares
R^2 = 1 - \frac{SSR}{SST} = 1 - \frac{\sum_{i=1}^{n} \left(Y_i - \hat{Y}_i\right)^2}{\sum_{i=1}^{n} Y_i^2}
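As a hypothetical example (the data set and variable names are made up), the analysis of variance table, the mean squared error, and the R-square statistic of Table 3.3 are part of the default output of PROC REG for a model with an intercept:

proc reg data=mydata;
   model y = x1 x2;
run;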
Estimating the Error Variance
The least squares principle does not provide for a parameter estimator for $\sigma^2$. The usual approach is to use a method-of-moments estimator that is based on the sum of squared residuals. If the model is correct, then the mean square for error, defined to be SSR divided by its degrees of freedom,

\hat{\sigma}^2 = \frac{\left(Y - X\hat{\beta}\right)'\left(Y - X\hat{\beta}\right)}{n - \mathrm{rank}(X)} = SSR/(n - \mathrm{rank}(X))

is an unbiased estimator of $\sigma^2$.
Maximum Likelihood Estimation
To estimate the parameters in a linear model with mean function $E[Y] = X\beta$ by maximum likelihood, you need to specify the distribution of the response vector Y. In the linear model with a continuous response variable, it is commonly assumed that the response is normally distributed. In that case, the estimation problem is completely defined by specifying the mean and variance of Y in addition to the normality assumption. The model can be written as $Y \sim N(X\beta, \sigma^2 I)$, where the notation $N(a, V)$ indicates a multivariate normal distribution with mean vector a and variance matrix V. The log likelihood for Y then can be written as

l(\beta, \sigma^2; y) = -\frac{n}{2}\log\{2\pi\} - \frac{n}{2}\log\{\sigma^2\} - \frac{1}{2\sigma^2}\left(y - X\beta\right)'\left(y - X\beta\right)

This function is maximized in $\beta$ when the sum of squares $(y - X\beta)'(y - X\beta)$ is minimized. The maximum likelihood estimator of $\beta$ is thus identical to the ordinary least squares estimator. To maximize $l(\beta, \sigma^2; y)$ with respect to $\sigma^2$, note that

\frac{\partial l(\beta, \sigma^2; y)}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\left(y - X\beta\right)'\left(y - X\beta\right)

Hence the MLE of $\sigma^2$ is the estimator

\hat{\sigma}^2_M = \frac{\left(Y - X\hat{\beta}\right)'\left(Y - X\hat{\beta}\right)}{n} = SSR/n

This is a biased estimator of $\sigma^2$, with a bias that decreases with n.
Estimable Functions
A function $L\beta$ is said to be estimable if there exists a linear combination of the expected value of Y, such as $KE[Y]$, that equals $L\beta$. Since $E[Y] = X\beta$, the definition of estimability implies that $L\beta$ is estimable if there is a matrix K such that $L = KX$. Another way of looking at this result is that the rows of X form a generating set from which all estimable functions can be constructed.

The concept of estimability of functions is important in the theory and application of linear models because hypotheses of interest are often expressed as linear combinations of the parameter estimates (for example, hypotheses of equality between parameters, $\beta_1 = \beta_2$, or $\beta_1 - \beta_2 = 0$). Since estimability is not related to the particular value of the parameter estimate, but to the row space of X, you can test only hypotheses that consist of estimable functions. Further, because estimability is not related to the value of $\beta$ (Searle 1971, p. 181), the choice of the generalized inverse in a situation with a rank-deficient $X'X$ matrix is immaterial, since

L\hat{\beta} = KX\hat{\beta} = KX\left(X'X\right)^{-}X'Y

where $X\left(X'X\right)^{-}X'$ is invariant to the choice of generalized inverse.

$L\beta$ is estimable if and only if $L\left(X'X\right)^{-}\left(X'X\right) = L$ (see, for example, Searle 1971, p. 185). If X is of full rank, then the Hermite matrix $\left(X'X\right)^{-}\left(X'X\right)$ is the identity, which implies that all linear functions are estimable in the full-rank case.
See Chapter 15, “The Four Types of Estimable Functions,” for many details about the various forms of
estimable functions in SAS/STAT.
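As a hypothetical illustration (data set, class variable, and coefficients are made up), the ESTIMATE and CONTRAST statements in PROC GLM accept the coefficients of L, and the procedure checks whether the requested function is estimable before computing it:

proc glm data=trial;
   class treatment;
   model response = treatment;
   estimate 'trt 1 vs trt 2' treatment 1 -1 0;
   contrast 'trt 1 vs trt 2' treatment 1 -1 0;
run;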
Test of Hypotheses
Consider a general linear hypothesis of the form $H\colon L\beta = d$, where L is a $(k \times p)$ matrix. It is assumed that d is such that this hypothesis is linearly consistent; that is, there exists some $\beta$ for which $L\beta = d$. This is always the case if d is in the column space of L, if L has full row rank, or if $d = 0$; the latter is the most common case. Since many linear models have a rank-deficient X matrix, the question arises whether the hypothesis is testable. The idea of testability of a hypothesis is, not surprisingly, connected to the concept of estimability as introduced previously. The hypothesis $H\colon L\beta = d$ is testable if it consists of estimable functions.
There are two important approaches to testing hypotheses in statistical applications: the reduction principle and the linear inference approach. The reduction principle states that the validity of the hypothesis can be inferred by comparing a suitably chosen summary statistic between the model at hand and a reduced model in which the constraint $L\beta = d$ is imposed. The linear inference approach relies on the fact that $\hat{\beta}$ is an estimator of $\beta$ and that its stochastic properties are known, at least approximately. A test statistic can then be formed using $\hat{\beta}$, and its behavior under the restriction $L\beta = d$ can be ascertained.

The two principles lead to identical results in certain situations, for example, least squares estimation in the classical linear model. In more complex situations the two approaches lead to similar but not identical results. This is the case, for example, when weights or unequal variances are involved, or when $\hat{\beta}$ is a nonlinear estimator.
Reduction Tests
The two main reduction principles are the sum of squares reduction test and the likelihood ratio test. The test statistic in the former is proportional to the difference of the residual sum of squares between the reduced model and the full model. The test statistic in the likelihood ratio test is proportional to the difference of the log likelihoods between the full and reduced models. To fix these ideas, suppose that you are fitting the model $Y = X\beta + \epsilon$, where $\epsilon \sim N(0, \sigma^2 I)$. Suppose that SSR denotes the residual sum of squares in this model and that $SSR_H$ is the residual sum of squares in the model for which $L\beta = d$ holds. Then under the hypothesis the ratio

(SSR_H - SSR)/\sigma^2

follows a chi-square distribution with degrees of freedom equal to the rank of L. Maybe surprisingly, the residual sum of squares in the full model is distributed independently of this quantity, so that under the hypothesis,

F = \frac{(SSR_H - SSR)/\mathrm{rank}(L)}{SSR/(n - \mathrm{rank}(X))}

follows an F distribution with rank(L) numerator and n - rank(X) denominator degrees of freedom. Note that the quantity in the denominator of the F statistic is a particular estimator of $\sigma^2$, namely the unbiased moment-based estimator that is customarily associated with least squares estimation. It is also the restricted maximum likelihood estimator of $\sigma^2$ if Y is normally distributed.
In the case of the likelihood ratio test, suppose that $l(\hat{\beta}, \hat{\sigma}^2; y)$ denotes the log likelihood evaluated at the ML estimators. Also suppose that $l(\hat{\beta}_H, \hat{\sigma}^2_H; y)$ denotes the log likelihood in the model for which $L\beta = d$ holds. Then under the hypothesis the statistic

\lambda = 2\left( l(\hat{\beta}, \hat{\sigma}^2; y) - l(\hat{\beta}_H, \hat{\sigma}^2_H; y) \right)

follows approximately a chi-square distribution with degrees of freedom equal to the rank of L. In the case of a normally distributed response, the log-likelihood function can be profiled with respect to $\beta$. The resulting profile log likelihood is

l(\hat{\sigma}^2; y) = -\frac{n}{2}\log\{2\pi\} - \frac{n}{2}\log\{\hat{\sigma}^2\}

and the likelihood ratio test statistic becomes

\lambda = n\left( \log\{\hat{\sigma}^2_H\} - \log\{\hat{\sigma}^2\} \right) = n\left( \log\{SSR_H\} - \log\{SSR\} \right) = n\,\log\{SSR_H/SSR\}
The preceding expressions show that, in the case of normally distributed data, both reduction principles lead
to simple functions of the residual sums of squares in two models. As Pawitan (2001, p. 151) puts it, there
is, however, an important difference not in the computations but in the statistical content. The least squares
principle, where sum of squares reduction tests are widely used, does not require a distributional specification.
Assumptions about the distribution of the data are added to provide a framework for confirmatory inferences,
such as the testing of hypotheses. This framework stems directly from the assumption about the data’s
distribution, or from the sampling distribution of the least squares estimators. The likelihood principle, on
the other hand, requires a distributional specification at the outset. Inference about the parameters is implicit
in the model; it is the result of further computations following the estimation of the parameters. In the least
squares framework, inference about the parameters is the result of further assumptions.
Linear Inference
The principle of linear inference is to formulate a test statistic for $H\colon L\beta = d$ that builds on the linearity of the hypothesis about $\beta$. For many models that have linear components, the estimator $L\hat{\beta}$ is also linear in Y. It is then simple to establish the distributional properties of $L\hat{\beta}$ based on the distributional assumptions about Y or based on large-sample arguments. For example, $\hat{\beta}$ might be a nonlinear estimator, but it is known to asymptotically follow a normal distribution; this is the case in many nonlinear and generalized linear models.

If the sampling distribution or the asymptotic distribution of $\hat{\beta}$ is normal, then one can easily derive quadratic forms with known distributional properties. For example, if the random vector U is distributed as $N(\mu, \Sigma)$, then $U'AU$ follows a chi-square distribution with rank(A) degrees of freedom and noncentrality parameter $\frac{1}{2}\mu'A\mu$, provided that $A\Sigma A\Sigma = A\Sigma$.
In the classical linear model, suppose that X is deficient in rank and that $\hat{\beta} = \left(X'X\right)^{-}X'Y$ is a solution to the normal equations. Then, if the errors are normally distributed,

\hat{\beta} \sim N\left( \left(X'X\right)^{-}X'X\beta,\; \sigma^2\left(X'X\right)^{-}X'X\left(X'X\right)^{-} \right)

Because $H\colon L\beta = d$ is testable, $L\beta$ is estimable, and thus $L\left(X'X\right)^{-}X'X = L$, as established in the previous section. Hence,

L\hat{\beta} \sim N\left( L\beta,\; \sigma^2 L\left(X'X\right)^{-}L' \right)

The conditions for a chi-square distribution of the quadratic form

\left(L\hat{\beta} - d\right)'\left( L\left(X'X\right)^{-}L' \right)^{-}\left(L\hat{\beta} - d\right)

are thus met, provided that

\left( L\left(X'X\right)^{-}L' \right)^{-} L\left(X'X\right)^{-}L' \left( L\left(X'X\right)^{-}L' \right)^{-} L\left(X'X\right)^{-}L' = \left( L\left(X'X\right)^{-}L' \right)^{-} L\left(X'X\right)^{-}L'

This condition is obviously met if $L\left(X'X\right)^{-}L'$ is of full rank. The condition is also met if $\left(L\left(X'X\right)^{-}L'\right)^{-}$ is a reflexive inverse (a $g_2$-inverse) of $L\left(X'X\right)^{-}L'$.
The test statistic to test the linear hypothesis $H\colon L\beta = d$ is thus

F = \frac{\left(L\hat{\beta} - d\right)'\left( L\left(X'X\right)^{-}L' \right)^{-}\left(L\hat{\beta} - d\right)/\mathrm{rank}(L)}{SSR/(n - \mathrm{rank}(X))}

and it follows an F distribution with rank(L) numerator and n - rank(X) denominator degrees of freedom under the hypothesis.
This test statistic looks very similar to the F statistic for the sum of squares reduction test. This is no accident. If the model is linear and parameters are estimated by ordinary least squares, then you can show that the quadratic form $\left(L\hat{\beta} - d\right)'\left(L\left(X'X\right)^{-}L'\right)^{-}\left(L\hat{\beta} - d\right)$ equals the difference in the residual sums of squares, $SSR_H - SSR$, where $SSR_H$ is obtained as the residual sum of squares from OLS estimation in a model that
satisfies $L\beta = d$. However, this correspondence between the two test formulations does not apply when a different estimation principle is used. For example, assume that $\epsilon \sim N(0, V)$ and that $\beta$ is estimated by generalized least squares:

\hat{\beta}_g = \left(X'V^{-1}X\right)^{-}X'V^{-1}Y
The construction of L matrices associated with hypotheses in SAS/STAT software is frequently based on the properties of the X matrix, not of $X'V^{-1}X$. In other words, the construction of the L matrix is governed only by the design. A sum of squares reduction test for $H\colon L\beta = 0$ that uses the generalized residual sum of squares $\left(Y - X\hat{\beta}_g\right)'V^{-1}\left(Y - X\hat{\beta}_g\right)$ is not identical to a linear hypothesis test with the statistic

F = \frac{\hat{\beta}_g'L'\left( L\left(X'V^{-1}X\right)^{-}L' \right)^{-}L\hat{\beta}_g}{\mathrm{rank}(L)}
Furthermore, V is usually unknown and must be estimated as well. The estimate for V depends on the model, and imposing a constraint on the model would change the estimate. The asymptotic distribution of the statistic F is a chi-square distribution. However, in practical applications the F distribution with rank(L) numerator and $\nu$ denominator degrees of freedom is often used because it provides a better approximation to the sampling distribution of F in finite samples. The computation of the denominator degrees of freedom $\nu$, however, is a matter of considerable discussion. A number of methods have been proposed and are implemented in various forms in SAS/STAT (see, for example, the degrees-of-freedom methods in the MIXED and GLIMMIX procedures).
Residual Analysis
The model errors $\epsilon = Y - X\beta$ are unobservable. Yet important features of the statistical model are connected to them, such as the distribution of the data, the correlation among observations, and the constancy of variance. It is customary to diagnose and investigate features of the model errors through the fitted residuals $\hat{\epsilon} = Y - \hat{Y} = Y - HY = MY$. These residuals are projections of the data onto the null space of X and are also referred to as the "raw" residuals to contrast them with other forms of residuals that are transformations of $\hat{\epsilon}$. For the classical linear model, the statistical properties of $\hat{\epsilon}$ are affected by the features of that projection and can be summarized as follows:
E[\hat{\epsilon}] = 0
Var[\hat{\epsilon}] = \sigma^2 M
\mathrm{rank}(M) = n - \mathrm{rank}(X)

Furthermore, if $\epsilon \sim N(0, \sigma^2 I)$, then $\hat{\epsilon} \sim N(0, \sigma^2 M)$.
Because $M = I - H$, and the "hat" matrix H satisfies $\partial\hat{Y}/\partial Y = H$, the hat matrix is also the leverage matrix of the model. If $h_{ii}$ denotes the ith diagonal element of H (the leverage of observation i), then the leverages are bounded in a model with intercept, $1/n \le h_{ii} \le 1$. Consequently, the variance of a raw residual is less than that of an observation: $Var[\hat{\epsilon}_i] = \sigma^2(1 - h_{ii}) < \sigma^2$. In applications where the variability of the data is estimated from fitted residuals, the estimate is invariably biased low. An example is the computation of an empirical semivariogram based on fitted (detrended) residuals.
More important, the diagonal entries of H are not necessarily identical; the residuals are heteroscedastic. The "hat" matrix is also not a diagonal matrix; the residuals are correlated. In summary, the only property that the fitted residuals $\hat{\epsilon}$ share with the model errors is a zero mean. It is thus commonplace to use transformations of the fitted residuals for diagnostic purposes.
Raw and Studentized Residuals
A standardized residual is a raw residual that is divided by its standard deviation:

\hat{\epsilon}_i^{\,s} = \frac{Y_i - \hat{Y}_i}{\sqrt{Var[Y_i - \hat{Y}_i]}} = \frac{\hat{\epsilon}_i}{\sqrt{\sigma^2\left(1 - h_{ii}\right)}}
Because $\sigma^2$ is unknown, residual standardization is usually not practical. A studentized residual is a raw residual that is divided by its estimated standard deviation. If the estimate of the standard deviation is based on the same data that were used in fitting the model, the residual is also called an internally studentized residual:

\hat{\epsilon}_{is} = \frac{Y_i - \hat{Y}_i}{\sqrt{\widehat{Var}\left[Y_i - \hat{Y}_i\right]}} = \frac{\hat{\epsilon}_i}{\sqrt{\hat{\sigma}^2\left(1 - h_{ii}\right)}}
If the estimate of the residual's variance does not involve the ith observation, it is called an externally studentized residual. Suppose that \hat{\sigma}_{-i}^2 denotes the estimate of the residual variance obtained without the ith observation; then the externally studentized residual is

\hat{\epsilon}_{ir} = \frac{\hat{\epsilon}_i}{\sqrt{\hat{\sigma}_{-i}^2 (1 - h_{ii})}}
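A minimal PROC IML sketch (same hypothetical data as above) computes internally and externally studentized residuals; the leave-one-out variance estimates use the closed form available in the classical linear model.

proc iml;
   /* A minimal sketch with hypothetical data: internally and
      externally studentized residuals computed from the leverages.   */
   X = {1 1, 1 2, 1 3, 1 4, 1 5};
   Y = {2.1, 2.9, 3.8, 5.2, 5.9};
   n = nrow(X);
   H = X*ginv(t(X)*X)*t(X);
   p = round(trace(H));               /* rank(X)                      */
   e = Y - H*Y;                       /* raw residuals                */
   h = vecdiag(H);
   s2 = ssq(e)/(n - p);               /* internal variance estimate   */
   r_int = e / sqrt(s2 # (1 - h));    /* internally studentized       */
   /* leave-one-out variance estimates (classical linear model)       */
   s2_i  = ((n-p)*s2 - e##2/(1-h)) / (n - p - 1);
   r_ext = e / sqrt(s2_i # (1 - h));  /* externally studentized       */
   print r_int r_ext;
quit;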
Scaled Residuals
A scaled residual is simply a raw residual divided by a scalar quantity that is not an estimate of the variance
of the residual. For example, residuals divided by the standard deviation of the response variable are scaled
and referred to as Pearson or Pearson-type residuals:
\hat{\epsilon}_{ic} = \frac{Y_i - \hat{Y}_i}{\sqrt{\widehat{Var}[Y_i]}}
In generalized linear models, where the variance of an observation is a function of the mean \mu and possibly of an extra scale parameter \phi, Var[Y] = a(\mu)\phi, the Pearson residual is

\hat{\epsilon}_{iP} = \frac{Y_i - \hat{\mu}_i}{\sqrt{a(\hat{\mu}_i)}}

because the sum of the squared Pearson residuals equals the Pearson X^2 statistic:

X^2 = \sum_{i=1}^{n} \hat{\epsilon}_{iP}^2

When the scale parameter participates in the scaling, the residual is also referred to as a Pearson-type residual:

\hat{\epsilon}_{iP} = \frac{Y_i - \hat{\mu}_i}{\sqrt{a(\hat{\mu}_i)\hat{\phi}}}
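As a sketch, the following PROC IML step computes Pearson and Pearson-type residuals for hypothetical counts under the variance function a(mu) = mu; the fitted means and the scale value are assumed rather than estimated here.

proc iml;
   /* A minimal sketch with hypothetical counts and fitted means:
      Pearson residuals for a variance function a(mu) = mu.           */
   y   = {2, 0, 5, 3, 1};
   mu  = {1.8, 0.7, 4.2, 3.3, 1.0};       /* fitted means (assumed)   */
   rP  = (y - mu) / sqrt(mu);             /* Pearson residuals        */
   X2  = ssq(rP);                         /* Pearson chi-square       */
   phi = 1.4;                             /* hypothetical scale value */
   rPt = (y - mu) / sqrt(mu # phi);       /* Pearson-type residuals   */
   print rP X2 rPt;
quit;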
Other Residuals
You might encounter other residuals in SAS/STAT software. A “leave-one-out” residual is the difference between the observed value and the predicted value obtained from fitting a model in which the observation in question did not participate. If \hat{Y}_i is the predicted value of the ith observation and \hat{Y}_{i,-i} is the predicted value if Y_i is removed from the analysis, then the “leave-one-out” residual is

\hat{\epsilon}_{i,-i} = Y_i - \hat{Y}_{i,-i}

Since the sum of the squared “leave-one-out” residuals is the PRESS statistic (prediction sum of squares; Allen 1974), \hat{\epsilon}_{i,-i} is also called the PRESS residual. The concept of the PRESS residual can be generalized so that the deletion residual is based on the removal of sets of observations. In the classical linear model, the PRESS residual for case deletion has a particularly simple form:

\hat{\epsilon}_{i,-i} = Y_i - \hat{Y}_{i,-i} = \frac{\hat{\epsilon}_i}{1 - h_{ii}}
That is, the PRESS residual is simply a scaled form of the raw residual, where the scaling factor is a function
of the leverage of the observation.
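The following PROC IML sketch (hypothetical data) computes the PRESS residuals from this closed form and sums their squares to obtain the PRESS statistic.

proc iml;
   /* A minimal sketch with hypothetical data: PRESS residuals from
      the closed form e_i / (1 - h_ii) and the PRESS statistic.       */
   X = {1 1, 1 2, 1 3, 1 4, 1 5};
   Y = {2.1, 2.9, 3.8, 5.2, 5.9};
   H = X*ginv(t(X)*X)*t(X);
   e = Y - H*Y;                       /* raw residuals                */
   h = vecdiag(H);
   e_press = e / (1 - h);             /* PRESS residuals              */
   press   = ssq(e_press);            /* PRESS statistic (Allen 1974) */
   print e_press press;
quit;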
When data are correlated, Var[Y] = V, you can scale the vector of residuals rather than scale each residual separately. This takes the covariances among the observations into account. This form of scaling is accomplished by forming the Cholesky root C'C = V, where C' is a lower-triangular matrix. Then C'^{-1}Y is a vector of uncorrelated variables with unit variance. The Cholesky residuals in the model Y = X\beta + \epsilon are

\hat{\epsilon}_C = C'^{-1}\left(Y - X\hat{\beta}\right)
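A minimal PROC IML sketch (hypothetical data and an assumed V) forms the Cholesky root with the ROOT function and computes the Cholesky residuals; the estimate of \beta is taken here to be the generalized least squares estimate.

proc iml;
   /* A minimal sketch with hypothetical correlated data: Cholesky
      residuals C'^{-1}(Y - X*b), with b taken as the GLS estimate.   */
   X = {1 1, 1 2, 1 3};
   Y = {1.1, 2.2, 2.8};
   V = {1.0  0.5  0.25,
        0.5  1.0  0.5 ,
        0.25 0.5  1.0 };
   U  = root(V);                         /* upper triangular, V = U`*U */
   b  = ginv(t(X)*inv(V)*X)*t(X)*inv(V)*Y;
   eC = inv(t(U)) * (Y - X*b);           /* Cholesky residuals         */
   print eC;
quit;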
In generalized linear models, the fit of a model can be measured by the scaled deviance statistic D^*. It measures the difference between the log likelihood under the model and the maximum log likelihood that is achievable. In models with a scale parameter \phi, the deviance is D = \phi D^* = \sum_{i=1}^{n} d_i. The deviance residuals are the signed square roots of the contributions to the deviance statistic:

\hat{\epsilon}_{id} = \mathrm{sign}\{y_i - \hat{\mu}_i\}\sqrt{d_i}
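As an illustration, the following PROC IML sketch computes deviance residuals for hypothetical counts and assumed fitted means, using the Poisson deviance contribution d_i = 2[y_i log(y_i/mu_i) - (y_i - mu_i)] with the usual convention y log(y/mu) = 0 when y = 0.

proc iml;
   /* A minimal sketch with hypothetical counts and fitted means:
      deviance residuals for a Poisson model.                         */
   y  = {2, 0, 5, 3, 1};
   mu = {1.8, 0.7, 4.2, 3.3, 1.0};
   d  = j(nrow(y), 1, 0);
   do i = 1 to nrow(y);
      ylog = 0;
      if y[i] > 0 then ylog = y[i]*log(y[i]/mu[i]);
      d[i] = 2*(ylog - (y[i] - mu[i]));  /* deviance contribution      */
   end;
   rD = sign(y - mu) # sqrt(d);          /* deviance residuals         */
   D  = sum(d);                          /* deviance                   */
   print rD D;
quit;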
Sweep Operator
The sweep operator (Goodnight 1979) is closely related to Gauss-Jordan elimination and the Forward
Doolittle procedure. The fact that a sweep operation can produce a generalized inverse by in-place mapping
with minimal storage and that its application invariably leads to some form of matrix inversion is important,
but this observation does not do justice to the pervasive relevance of sweeping to statistical computing. In this
section the sweep operator is discussed as a conceptual tool for further insight into linear model operations.
Consider the nonnegative definite, symmetric, partitioned matrix

A = \begin{bmatrix} A_{11} & A_{12} \\ A_{12}' & A_{22} \end{bmatrix}
Sweeping a matrix consists of performing a series of row operations akin to Gauss-Jordan elimination. Basic
row operations are the multiplication of a row by a constant and the addition of a multiple of one row to
another. The sweep operator restricts row operations to pivots on the diagonal elements of a matrix; further
details about the elementary operations can be found in Goodnight (1979). The process of sweeping the matrix A on its leading partition is denoted as Sweep(A, A_{11}) and leads to

Sweep(A, A_{11}) = \begin{bmatrix} A_{11}^{-} & A_{11}^{-}A_{12} \\ -A_{12}'A_{11}^{-} & A_{22} - A_{12}'A_{11}^{-}A_{12} \end{bmatrix}
If the kth row and column are set to zero when the pivot is zero (or in practice, less than some singularity
tolerance), the generalized inverse in the leading position of the swept matrix is a reflexive, g2 -inverse.
Suppose that the crossproduct matrix of the linear model is augmented with a “Y-border” as follows:

C = \begin{bmatrix} X'X & X'Y \\ Y'X & Y'Y \end{bmatrix}

Then the result of sweeping on the rows of X is

Sweep(C, X) = \begin{bmatrix} (X'X)^{-} & (X'X)^{-}X'Y \\ -Y'X(X'X)^{-} & Y'Y - Y'X(X'X)^{-}X'Y \end{bmatrix} = \begin{bmatrix} (X'X)^{-} & \hat{\beta} \\ -\hat{\beta}' & Y'MY \end{bmatrix} = \begin{bmatrix} (X'X)^{-} & \hat{\beta} \\ -\hat{\beta}' & \mathrm{SSR} \end{bmatrix}
The “Y-border” has been transformed into the least squares solution and the residual sum of squares.
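The following PROC IML sketch (hypothetical data) forms the Y-bordered crossproduct matrix and applies the SWEEP function to the X partition; the least squares coefficients appear in the transformed Y-border, and the residual sum of squares appears in the lower-right cell.

proc iml;
   /* A minimal sketch with hypothetical data: sweeping the Y-bordered
      crossproduct matrix on the X rows.                              */
   X = {1 1, 1 2, 1 3, 1 4, 1 5};
   Y = {2.1, 2.9, 3.8, 5.2, 5.9};
   C = (t(X)*X || t(X)*Y) //
       (t(Y)*X || t(Y)*Y);
   S = sweep(C, 1:ncol(X));            /* sweep on the X partition    */
   b_hat = S[1:ncol(X), ncol(X)+1];    /* least squares solution      */
   ssr   = S[ncol(X)+1, ncol(X)+1];    /* residual sum of squares     */
   print b_hat ssr;
quit;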
Partial sweeps are common in model selection. Suppose that the X matrix is partitioned as [X_1 X_2], and consider the augmented crossproduct matrix

C = \begin{bmatrix} X_1'X_1 & X_1'X_2 & X_1'Y \\ X_2'X_1 & X_2'X_2 & X_2'Y \\ Y'X_1 & Y'X_2 & Y'Y \end{bmatrix}
Sweeping on the X_1 partition yields

Sweep(C, X_1) = \begin{bmatrix} (X_1'X_1)^{-} & (X_1'X_1)^{-}X_1'X_2 & (X_1'X_1)^{-}X_1'Y \\ -X_2'X_1(X_1'X_1)^{-} & X_2'M_1X_2 & X_2'M_1Y \\ -Y'X_1(X_1'X_1)^{-} & Y'M_1X_2 & Y'M_1Y \end{bmatrix}
where M_1 = I - X_1(X_1'X_1)^{-}X_1'. The entries in the first row of this partition are the generalized inverse of X_1'X_1, the coefficients for regressing X_2 on X_1, and the coefficients for regressing Y on X_1. The diagonal entries X_2'M_1X_2 and Y'M_1Y are the sum of squares and crossproduct matrices for regressing X_2 on X_1 and for regressing Y on X_1, respectively. As you continue to sweep the matrix, the last cell in the partition contains the residual sum of squares of a model in which Y is regressed on all columns swept up to that point.
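A minimal PROC IML sketch (hypothetical data) of a partial sweep: sweeping only the X1 columns gives the residual sum of squares for the regression of Y on X1, and continuing the sweep over the X2 column gives the residual sum of squares for the full model.

proc iml;
   /* A minimal sketch with hypothetical data: partial sweep on the
      X1 columns only, then continuation over X2.                     */
   X1 = {1 1, 1 2, 1 3, 1 4, 1 5};          /* intercept and x        */
   X2 = {1, 4, 9, 16, 25};                  /* an additional column   */
   Y  = {2.1, 2.9, 3.8, 5.2, 5.9};
   X  = X1 || X2;
   C  = (t(X)*X || t(X)*Y) //
        (t(Y)*X || t(Y)*Y);
   S1 = sweep(C, 1:ncol(X1));               /* sweep the X1 partition */
   ssr_1  = S1[ncol(X)+1, ncol(X)+1];       /* SSR of Y on X1         */
   S12 = sweep(S1, (ncol(X1)+1):ncol(X));   /* continue with X2       */
   ssr_12 = S12[ncol(X)+1, ncol(X)+1];      /* SSR of Y on [X1 X2]    */
   print ssr_1 ssr_12;
quit;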
The sweep operator is useful not only for conceptualizing the computation of least squares solutions, Type I and Type II sums of squares, and generalized inverses; it can also be used to obtain other statistical information. For example, adding the logarithms of the pivots of the rows that are swept yields the log determinant of the matrix.
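The following PROC IML sketch (a small hypothetical positive definite matrix) accumulates the logarithms of the pivots as the rows are swept and compares the sum with the log determinant computed directly.

proc iml;
   /* A minimal sketch: for a positive definite matrix, summing the
      logarithms of the sweep pivots reproduces the log determinant.  */
   A = {4 2 1,
        2 3 1,
        1 1 2};
   S = A;
   logdet = 0;
   do k = 1 to nrow(A);
      logdet = logdet + log(S[k, k]);   /* pivot before sweeping row k */
      S = sweep(S, k);
   end;
   check = log(det(A));                 /* direct computation          */
   print logdet check;
quit;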
References
Allen, D. M. (1974), “The Relationship between Variable Selection and Data Augmentation and a Method of
Prediction,” Technometrics, 16, 125–127.
Cochran, W. G. (1977), Sampling Techniques, 3rd Edition, New York: John Wiley & Sons.
Goodnight, J. H. (1979), “A Tutorial on the Sweep Operator,” American Statistician, 33, 149–158.
Harville, D. A. (1997), Matrix Algebra from a Statistician’s Perspective, New York: Springer-Verlag.
Hastie, T. J., Tibshirani, R. J., and Friedman, J. H. (2001), The Elements of Statistical Learning, New York:
Springer-Verlag.
Jöreskog, K. G. (1973), “A General Method for Estimating a Linear Structural Equation System,” in A. S.
Goldberger and O. D. Duncan, eds., Structural Equation Models in the Social Sciences, New York:
Academic Press.
Keesling, J. W. (1972), Maximum Likelihood Approaches to Causal Analysis, Ph.D. diss., University of
Chicago.
Magnus, J. R. and Neudecker, H. (1999), Matrix Differential Calculus with Applications in Statistics and
Econometrics, New York: John Wiley & Sons.
McCullagh, P. and Nelder, J. A. (1989), Generalized Linear Models, 2nd Edition, London: Chapman & Hall.
Moore, E. H. (1920), “On the Reciprocal of the General Algebraic Matrix,” Bulletin of the American
Mathematical Society, 26, 394–395.
Nelder, J. A. and Wedderburn, R. W. M. (1972), “Generalized Linear Models,” Journal of the Royal Statistical
Society, Series A, 135, 370–384.
Pawitan, Y. (2001), In All Likelihood: Statistical Modelling and Inference Using Likelihood, Oxford:
Clarendon Press.
Penrose, R. A. (1955), “A Generalized Inverse for Matrices,” Proceedings of the Cambridge Philosophical
Society, 51, 406–413.
Pringle, R. M. and Rayner, A. A. (1971), Generalized Inverse Matrices with Applications to Statistics, New
York: Hafner Publishing.
Särndal, C. E., Swensson, B., and Wretman, J. (1992), Model Assisted Survey Sampling, New York: Springer-Verlag.
Searle, S. R. (1971), Linear Models, New York: John Wiley & Sons.
Spearman, C. (1904), “General Intelligence Objectively Determined and Measured,” American Journal of
Psychology, 15, 201–293.
Wedderburn, R. W. M. (1974), “Quasi-likelihood Functions, Generalized Linear Models, and the Gauss-Newton Method,” Biometrika, 61, 439–447.
Wiley, D. E. (1973), “The Identification Problem for Structural Equation Models with Unmeasured Variables,”
in A. S. Goldberger and O. D. Duncan, eds., Structural Equation Models in the Social Sciences, New York:
Academic Press.
Index
analysis of variance
corrected total sum of squares (Introduction to
Modeling), 53
geometry (Introduction to Modeling), 53
model (Introduction to Modeling), 25
sum of squares (Introduction to Modeling), 25
uncorrected total sum of squares (Introduction to
Modeling), 53
Bayesian models
Introduction to Modeling, 32
classification effect
Introduction to Modeling, 25
coefficient of determination
definition (Introduction to Modeling), 54
covariance
matrix, definition (Introduction to Modeling), 48
of random variables (Introduction to Modeling),
47
estimability
definition (Introduction to Modeling), 55
estimable function
definition (Introduction to Modeling), 55
estimating equations
Introduction to Modeling, 23
expected value
definition (Introduction to Modeling), 47
of vector (Introduction to Modeling), 47
exponential family
Introduction to Modeling, 29
function
estimable, definition (Introduction to Modeling),
55
generalized linear model
Introduction to Modeling, 29, 57, 59, 60
heteroscedasticity
Introduction to Modeling, 58
homoscedasticity
Introduction to Modeling, 51
hypothesis testing
Introduction to Modeling, 55
independent
random variables (Introduction to Modeling), 49
inference
design-based (Introduction to Modeling), 21
model-based (Introduction to Modeling), 21
Introduction to Modeling
additive error, 23
analysis of variance, 25, 53
augmented crossproduct matrix, 61
Bayesian models, 32
Cholesky decomposition, 45, 60
Cholesky residual, 60
classification effect, 25
coefficient of determination, 54
column space, 55
covariance, 47
covariance matrix, 48
crossproduct matrix, 61
curvilinear models, 24
deletion residual, 60
dependent variable, 22
deviance residual, 60
diagonal matrix, 40, 58
effect genesis, 28
estimable, 55
estimating equations, 23
expectation operator, 23
expected value, 47
expected value of vector, 47
exponential family, 29
externally studentized residual, 59
fitted residual, 58
fixed effect, 27
fixed-effects model, 27
g1-inverse, 43
g2-inverse, 43, 57, 61
generalized inverse, 42, 55, 61
generalized least squares, 35, 58
generalized linear model, 29, 57, 59, 60
hat matrix, 53, 58
heterocatanomic data, 26
heterogeneous multivariate data, 26
heteroscedasticity, 58
homocatanomic data, 26
homogeneous multivariate data, 26
homoscedasticity, 51
hypothesis testing, 55
idempotent matrix, 53
independent random variables, 49
independent variable, 22
inner product of vectors, 41
internally studentized residual, 59
inverse of matrix, 41
inverse of partitioned matrix, 41
inverse of patterned sum of matrices, 41
inverse, generalized, 42, 55, 61
iteratively reweighted least squares, 35
latent variable models, 29
LDU decomposition, 45
least squares, 33
leave-one-out residual, 60
levelization, 25
leverage, 58, 60
likelihood, 35
likelihood ratio test, 56
linear hypothesis, 55
linear inference, 55, 57
linear model theory, 51
linear regression, 24
link function, 29
LU decomposition, 45
matrix addition, 40
matrix decomposition, Cholesky, 45, 60
matrix decomposition, LDU, 45
matrix decomposition, LU, 45
matrix decomposition, singular-value, 46
matrix decomposition, spectral, 45
matrix decompositions, 45
matrix differentiation, 43
matrix dot product, 40
matrix inverse, 41
matrix inverse, g1, 43
matrix inverse, g2, 43, 57, 61
matrix inverse, Moore-Penrose, 42, 46
matrix inverse, partitioned, 41
matrix inverse, patterned sum, 41
matrix inverse, reflexive, 43, 57, 61
matrix multiplication, 40
matrix order, 40
matrix partition, 61
matrix subtraction, 40
matrix transposition, 40
matrix, column space, 55
matrix, diagonal, 40, 58
matrix, idempotent, 53
matrix, projection, 53
matrix, rank deficient, 55
matrix, square, 40
matrix, sweeping, 60
mean function, 23
mean squared error, 49
model fitting, 21
model-based v. design-based, 21
Moore-Penrose inverse, 42, 46
multivariate model, 26
nonlinear model, 23, 57
outcome variable, 22
parameter, 20
Pearson-type residual, 59
power, 39
PRESS statistic, 60
projected residual, 58
projection matrix, 53
pseudo-likelihood, 35
quadratic forms, 49
quasi-likelihood, 35
R-square, 54
random effect, 27
random-effects model, 27
rank deficient matrix, 55
raw residual, 58
reduction principle, testing, 55
reflexive inverse, 43, 57, 61
residual analysis, 58
residual, Cholesky, 60
residual, deletion, 60
residual, deviance, 60
residual, externally studentized, 59
residual, fitted, 58
residual, internally studentized, 59
residual, leave-one-out, 60
residual, Pearson-type, 59
residual, PRESS, 60
residual, projected, 58
residual, raw, 58
residual, scaled, 59
residual, standardized, 59
residual, studentized, 59
response variable, 22
sample size, 39
scaled residual, 59
singular-value decomposition, 46
spectral decomposition, 45
square matrix, 40
standardized residual, 59
statistical model, 20
stochastic model, 20
studentized residual, 59
sum of squares reduction test, 56, 57
sweep, elementary operations, 61
sweep, log determinant, 61
sweep, operator, 60
sweep, pivots, 60
testable hypothesis, 55, 57
testing hypotheses, 55
uncorrelated random variables, 49
univariate model, 26
variance, 47
variance matrix, 48
variance-covariance matrix, 48
weighted least squares, 34
latent variable models
Introduction to Modeling, 29
least squares
definition (Introduction to Modeling), 33
generalized (Introduction to Modeling), 35, 58
iteratively reweighted (Introduction to Modeling),
35
weighted (Introduction to Modeling), 34
likelihood
function (Introduction to Modeling), 36
Introduction to Modeling, 35
likelihood ratio test
Introduction to Modeling, 56
linear hypothesis
consistency (Introduction to Modeling), 55
definition (Introduction to Modeling), 55
Introduction to Modeling, 55
linear inference principle (Introduction to
Modeling), 55, 57
reduction principle (Introduction to Modeling), 55
testable (Introduction to Modeling), 55, 57
testing (Introduction to Modeling), 55
testing, linear inference (Introduction to
Modeling), 55, 57
testing, reduction principle (Introduction to
Modeling), 55
linear model theory
Introduction to Modeling, 51
linear regression
Introduction to Modeling, 24
link function
Introduction to Modeling, 29
matrix
addition (Introduction to Modeling), 40
Cholesky decomposition (Introduction to
Modeling), 45, 60
column space (Introduction to Modeling), 55
crossproduct (Introduction to Modeling), 61
crossproduct, augmented (Introduction to
Modeling), 61
decomposition, Cholesky (Introduction to
Modeling), 45, 60
decomposition, LDU (Introduction to Modeling),
45
decomposition, LU (Introduction to Modeling),
45
decomposition, singular-value (Introduction to
Modeling), 46
decomposition, spectral (Introduction to
Modeling), 45
decompositions (Introduction to Modeling), 45
determinant, by sweeping (Introduction to
Modeling), 61
diagonal (Introduction to Modeling), 40, 58
differentiation (Introduction to Modeling), 43
dot product (Introduction to Modeling), 40
g1-inverse (Introduction to Modeling), 43
g2-inverse (Introduction to Modeling), 43, 57, 61
generalized inverse (Introduction to Modeling),
42, 55, 61
hat (Introduction to Modeling), 53, 58
idempotent (Introduction to Modeling), 53
inner product (Introduction to Modeling), 41
inverse (Introduction to Modeling), 41
inverse, g1 (Introduction to Modeling), 43
inverse, g2 (Introduction to Modeling), 43, 57, 61
inverse, generalized (Introduction to Modeling),
42, 55, 61
inverse, Moore-Penrose (Introduction to
Modeling), 42, 46
inverse, partitioned (Introduction to Modeling),
41
inverse, patterned (Introduction to Modeling), 41
inverse, reflexive (Introduction to Modeling), 43,
57, 61
LDU decomposition (Introduction to Modeling),
45
leverage (Introduction to Modeling), 58
LU decomposition (Introduction to Modeling), 45
Moore-Penrose inverse (Introduction to
Modeling), 42, 46
multiplication (Introduction to Modeling), 40
order (Introduction to Modeling), 40
partition (Introduction to Modeling), 61
projection (Introduction to Modeling), 53
rank deficient (Introduction to Modeling), 55
reflexive inverse (Introduction to Modeling), 43,
57, 61
singular-value decomposition (Introduction to
Modeling), 46
spectral decomposition (Introduction to
Modeling), 45
square (Introduction to Modeling), 40
subtraction (Introduction to Modeling), 40
sweep (Introduction to Modeling), 60
transposition (Introduction to Modeling), 40
mean function
linear (Introduction to Modeling), 23
nonlinear (Introduction to Modeling), 23
mean squared error
Introduction to Modeling, 49
multivariate data
heterocatanomic (Introduction to Modeling), 26
heterogeneous (Introduction to Modeling), 26
homocatanomic (Introduction to Modeling), 26
homogeneous (Introduction to Modeling), 26
nonlinear model
Introduction to Modeling, 23
parameter
definition (Introduction to Modeling), 20
power
Introduction to Modeling, 39
quadratic forms
Introduction to Modeling, 49
R-square
definition (Introduction to Modeling), 54
residuals
Cholesky (Introduction to Modeling), 60
deletion (Introduction to Modeling), 60
deviance (Introduction to Modeling), 60
externally studentized (Introduction to Modeling),
59
fitted (Introduction to Modeling), 58
internally studentized (Introduction to Modeling),
59
leave-one-out (Introduction to Modeling), 60
Pearson-type (Introduction to Modeling), 59
PRESS (Introduction to Modeling), 60
projected (Introduction to Modeling), 58
raw (Introduction to Modeling), 58
scaled (Introduction to Modeling), 59
standardized (Introduction to Modeling), 59
studentized (Introduction to Modeling), 59
studentized, external (Introduction to Modeling),
59
studentized, internal (Introduction to Modeling),
59
sample size
Introduction to Modeling, 39
statistical model
definition (Introduction to Modeling), 20
stochastic model
definition (Introduction to Modeling), 20
sum of squares
corrected total (Introduction to Modeling), 53
uncorrected total (Introduction to Modeling), 53
sum of squares reduction test
Introduction to Modeling, 56, 57
Sweep operator
and generalized inverse (Introduction to
Modeling), 60
and log determinant (Introduction to Modeling),
61
elementary operations (Introduction to Modeling),
61
Gauss-Jordan elimination (Introduction to
Modeling), 60
pivots (Introduction to Modeling), 60
row operations (Introduction to Modeling), 60
testable hypothesis
Introduction to Modeling, 55, 57
testing hypotheses
Introduction to Modeling, 55
uncorrelated
random variables (Introduction to Modeling), 49
variance
matrix, definition (Introduction to Modeling), 48
of random variable (Introduction to Modeling), 47
variance-covariance matrix
definition (Introduction to Modeling), 48