Identification and Estimation of Nonparametric Panel Data Regressions with Measurement Error∗
Job Market Paper
Daniel Wilhelm†
University of Chicago
January 18, 2012
Abstract
This paper provides a constructive argument for identification of nonparametric
panel data models with measurement error in a continuous explanatory variable.
The approach point identifies all structural elements of the model using only observations of the outcome and the mismeasured explanatory variable; no further
external variables such as instruments are required. Restricting either the structural or the measurement error to be independent over time allows past covariates
or outcomes to serve as instruments. Time periods have to be linked through serial dependence in the latent explanatory variable, but the transition process is left
nonparametric. The paper discusses the general identification result in the context
of a nonlinear panel data regression model with additively separable fixed effects. It
provides a nonparametric plug-in estimator, derives its uniform rate of convergence,
and presents simulation evidence for good performance in finite samples.
∗ I am particularly indebted to Susanne Schennach, Chris Hansen, Alan Bester, and Azeem Shaikh, and thank them for
their advice, encouragement and comments. I also thank participants of the 2011 North American Summer Meetings of
the Econometric Society, the 17th International Panel Data Conference, the 2011 Meetings of the Midwest Econometrics
Group, of various seminars at the University of Chicago as well as Nina Boyarchenko, Kirill Evdokimov, James Heckman,
Roger Koenker, Carlos Lamarche, Yoonseok Lee, Elie Tamer, and Liqun Wang for helpful discussions. Remaining errors
are my own.
† The University of Chicago, Booth School of Business, 5807 S. Woodlawn Ave, Chicago, IL 60637, United States; email
address: [email protected]; website: http://home.uchicago.edu/~dwilhelm1/.
1 Introduction
A vast amount of empirical work in economics, especially in microeconomics, recognizes
the possible presence of measurement error (ME) in relevant data sets and that failure to
account for it may result in misleading estimates and inference.1 These concerns deserve
particular attention in panel data models. Economists view panel data as very appealing
because they offer opportunities to flexibly deal with unobserved, individual-specific heterogeneity in economic behavior. In one form or another, panel data estimators combine
variables from different time periods to remove (“difference out”) those individual-specific
components, typically resulting in new variables with amplified ME relative to the original
variables. In this sense, ME poses an even more severe challenge to panel data models
than to cross-sectional ones.
This paper shows the possibly nonlinear relationship between an individual’s outcome
and unobserved, true covariates can be recovered from only observed variables within the
panel model (outcomes and mismeasured covariates). Specifically, consider the nonlinear
regression model with additively separable fixed effects,
Yit = m(Xit∗ ) + αi + εit ,   (1)
which, in the subsequent discussion, serves as the leading example to which the new approach can be applied. The dependent variable Yit denotes an individual i's outcome in
time period t, αi an individual-specific fixed effect, which may arbitrarily depend on a
continuous regressor Xit∗ , and εit the regression error, also referred to as the structural
error. Instead of the true regressor Xit∗ , only the mismeasured variable Xit is observed.
The ME ηit := Xit − Xit∗ is assumed classical in the sense that it does not depend on the
true regressor, but nonclassical features like serial dependence in the ME are allowed. The
structural relationship between Yit and Xit∗ , in (1) represented by the unknown function
m(x∗ ), is the object of interest. The identification approach in this paper provides explicit
formulae (in terms of observable variables) for m(x∗ ), for the distribution of the ME, and
for the distribution of the true regressor. The identification result does not require any observed variables other than outcomes and observed regressors for two or four time periods,
depending on whether additive fixed effects are present. The idea consists of taking past
observed regressors or past outcomes as instrumental variables. Consequently, the main
identifying assumptions resemble those of standard linear instrumental variable models:
(i) outcomes are determined by the true, not the mismeasured, covariates; (ii) different
time periods are linked through serial dependence in the true covariates (relevance condition); and (iii) past mismeasured covariates or past outcomes do not determine present
1 See the excellent handbook chapter by Bound, Brown, and Mathiowetz (2001).
outcomes, conditional on present, true covariates (exclusion restriction). Depending on
whether covariates or outcomes are used as instruments, I show that the exclusion restrictions require, respectively, the structural error or the ME to be independent over
time. The constructive nature of the approach suggests nonparametric estimation of all
model components by simply substituting nonparametric estimators into the population
formulae and choosing values for smoothing parameters. The resulting estimator can be
computed based only on basic operations such as matrix multiplication, matrix inversion,
and discrete Fourier transforms, which are carried out efficiently on modern computers,
and does not require any optimization routines.
As an example in which the methods developed in this paper may lead to interesting
new conclusions, consider the relationship between investment and Tobin’s q in Lucas and
Prescott (1971)-type investment models. The firm’s profit maximization problem leads
to first-order conditions of the form
Iit /Kit = m(qit∗ ) + αi + εit ,
where Iit denotes investment and Kit capital stock of firm i at time t. The unobserved regressor qit∗ , also called the shadow value of capital or marginal q, is the Lagrange multiplier
for the evolution equation of the capital stock. The ratio of book to market value of the
firm’s capital stock, qit , also called average q, is a popular proxy for qit∗ . To estimate the
model, empirical work typically imposes one or both of the following two assumptions: (i)
firms face a quadratic adjustment cost function and (ii) unobserved, marginal q is equal
to observed, average q. Assumption (i) leads to a linear function m, but has no economic foundation and (ii) eliminates the measurement error problem but imposes strong
conditions on the economic framework (Hayashi (1982), Erickson and Whited (2000)).
Since ME and nonlinearities can manifest themselves in similar ways (Chesher (1991)),
the ability to analyze the investment model without (i) or (ii) is important.
Other examples are the estimation of Engel functions from household panel data (Aasness, Biørn, and Skjerpen (1993)), studies of returns to research and development performed in private firms (Hall, Jaffe, and Trajtenberg (2005)), and analyzing the technology
of skill formation (Cunha, Heckman, and Schennach (2010)).
Existing work on the treatment of ME in panel data models is scarce and exclusively2
focuses on linear specifications. In the well-known approach by Griliches and Hausman
2 Part of the statistics literature has approached the problem by assuming the ME is classical and its
distribution known (see Carroll, Ruppert, Stefanski, and Crainiceanu (2006) and references therein). In
this special case, which is unrealistic in most economic applications, identification is straightforward.
(1986), the linearity facilitates the derivation of explicit formulae for the biases of the
first difference and the within estimator. In both bias expressions, the variance of the ME
is the only unknown, so, from the two different estimators, one can substitute out the
unknown variance and calculate a ME-robust estimator. Clearly, such an approach cannot
be expected to carry through to nonlinear models. More recent approaches such as Holtz-Eakin, Newey, and Rosen (1988), Biørn (2000), Buonaccorsi, Demidenko, and Tosteson
(2000), Wansbeek (2001), Xiao, Shao, and Palta (2008), Galvao Jr. and Montes-Rojas
(2009), Shao, Xiao, and Xu (2011), and Komunjer and Ng (2011) similarly rely heavily on
a linear specification and cannot identify or estimate nonlinear models such as (1).
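For concreteness, the Griliches and Hausman substitution argument can be sketched in a stylized scalar version with i.i.d. classical ME of variance σ²η (the notation here is illustrative, not theirs):

```latex
% Attenuation of OLS on j-period differences of the mismeasured regressor:
\operatorname*{plim}\hat\beta_j
  = \beta\Bigl(1 - \frac{2\sigma_\eta^2}{V_j}\Bigr),
\qquad V_j := \operatorname{Var}(X_{it} - X_{i,t-j}).
% Two difference lengths j \neq k give two equations in the two unknowns
% (\beta, \sigma_\eta^2); eliminating \sigma_\eta^2 yields the ME-robust estimand
\beta = \frac{V_j \operatorname*{plim}\hat\beta_j
            - V_k \operatorname*{plim}\hat\beta_k}{V_j - V_k}.
```

The derivation relies on the bias being a known linear function of the single unknown σ²η, which is what fails in nonlinear models.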
The ME literature for nonlinear cross-sectional models is extensive. Most identification
arguments assume the availability of instruments, repeated measurements, or auxiliary
data. Chen, Hong, and Nekipelov (2008) and Schennach (2011) review this stream of the
literature and provide numerous references. The existing instrumental variable approaches
require the instrument to predict the true covariate in a certain sense, whereas the true
value predicts the mismeasured covariate.3 This asymmetry between assumptions on the
mismeasured and the instrumental variable conflicts with the structure of conventional
panel data models such as (1) when mismeasured covariates or outcomes are the only
candidates for instruments. Looking for suitable instruments excluded from the panel
model is not a solution either because the motivation for using panel data oftentimes
lies in the desire to deal with endogeneity when external variation, for instance in the
form of instruments, is unavailable. Repeated measurements or auxiliary data for panel
data sets are difficult, if not impossible, to acquire,4 so that approaches based on their
availability appear to be of little use for panel models as well. In conclusion, the existing
approaches for nonlinear cross-sectional models with ME either impose assumptions on
instruments incompatible with panel data models such as (1) or require auxiliary data
that is typically not available. In contrast, the present paper provides conditions for
identification based only on observations from within the panel. A separate stream of the
literature combines independence assumptions with information in higher-order moments
of observed variables; see Schennach and Hu (2010) and the many references therein.
Hu and Schennach (2008) make an interesting theoretical contribution by providing
a powerful operator diagonalization result that can be used to identify a large class of
models with latent variables. The cost of their generality is high-level assumptions and
an identification argument that is not constructive. Furthermore, in regression models,
3 Hu and Schennach (2008) is an exception; see the discussion below.
4 Card (1996) describes an interesting and fortunate exception, but he considers misclassification of a
binary regressor, whereas this paper is concerned with continuous covariates.
their assumptions rule out homoskedastic errors with a regression function that is equal
at two points. On the contrary, the new approach of this paper is constructive, leads to
assumptions that are easily interpretable in a panel data context, and allows the treatment
of time dependence in the ME. In general, neither their approach nor mine nests the other.
The present paper also relates to the statistical literature on ill-posed inverse problems
that has gained recent interest in the econometrics community, for example, Newey and
Powell (2003), Hall and Horowitz (2005), Horowitz and Lee (2007), Blundell, Chen, and
Kristensen (2007), Carrasco, Florens, and Renault (2007), Horowitz (2009), Darolles, Fan,
Florens, and Renault (2011), and Horowitz (2011). Part of the identification argument
of this paper consists of solving two ill-posed inverse problems similar to those arising
in nonparametric instrumental variable models considered in these papers. My approach
implements Blundell, Chen, and Kristensen (2007)’s series estimator as an input to a
second-stage estimator of the actual quantity of interest.
Porter (1996), Henderson, Carroll, and Li (2008), Lee (2010), and Evdokimov (2010)
discuss nonparametric identification and estimation of panel data models without ME;
their models are similar to the leading example in this paper, namely, nonparametric
panel data regressions with additively separable fixed effects such as (1).
2 Identification
2.1 A General Instrumental Variable Identification Result
Consider a panel data model with an individual’s outcome Yt , which can be discrete or
continuous, and a continuous independent variable Xt∗ . Both variables are indexed by
the time period t, but for the remainder of the paper, I omit the individual’s subscript i.
The relationship between these two variables, subsequently referred to as the structural
relationship, is the object of interest in this paper. Assume the researcher does not observe
Xt∗ directly, but only a mismeasured version of it, denoted by Xt . The difference between
the two, ηt := Xt − Xt∗ , is referred to as ME. Standard approaches such as regression
cannot recover features of the structural relationship because the independent variable
is unobserved. In this section, I provide assumptions restricting the joint distribution
of (Yt , Xt∗ , Xt ) so that in the population, the structural relationship can be uniquely
determined from observable quantities. Although the paper’s motivation is to treat ME in
panel data models, the approach to identification uses past observed variables in a general
way so other instruments can replace these variables. Therefore, the identification result
may be of more general interest, even in cross-sectional models; see Remark 2 below.
For now suppose only two time periods are available and no individual-specific effects
or any other perfectly measured covariates are present. I discuss such extensions in
the next subsection. Denote by Y (X, X ∗ and η) the vectors (matrices) stacking the
corresponding variables for individual time periods.5
Assumption ID 1. The distributions of X, X ∗ , and η admit densities fX , fX ∗ , and fη
with respect to Lebesgue measure on Rp and have support equal to Rp .
The first assumption restricts the latent and observed independent variables to be
continuously distributed with infinite support.6 The outcome, however, is allowed to be
discrete or continuous.
Assumption ID 2. (i) η ⊥ X ∗ and Eηt = 0 and (ii) the characteristic function of η is
nonzero everywhere.
The first part of Assumption ID 2 specifies that the ME is independent of the true
value and has mean zero. This assumption is strong, but standard in the ME literature
dealing with continuous mismeasured variables. The second part of Assumption ID 2 is
common in the deconvolution literature; see Carroll, Ruppert, Stefanski, and Crainiceanu
(2006), Fan (1991), Fan and Truong (1993), Li and Vuong (1998) and Schennach (2004a,
2007). It excludes, for example, uniform and triangular distributions, but most other
standard distributions such as the Normal, t, χ2 , gamma, and double exponential satisfy
the requirement.
Assumption ID 3. (i) There exist B < ∞ and β ∈ Rp+ such that

sup_{y2 ∈ Y2} | ∂^p/(∂x2∗(1) · · · ∂x2∗(p)) FY2 |X2∗ (y2 |x∗2 ) | ≤ B ∏_{k=1}^p (1 + |x2∗(k)|²)^(−3−βk)

for all x∗2 = (x2∗(1) , . . . , x2∗(p) )′ ∈ Rp , where | · | denotes the Euclidean norm; (ii) fη is
bounded.
In economic models, Assumption ID 3 is a relatively weak assumption on the conditional cdf of Y2 given X2∗ as the conditioning argument becomes large. It rules out
rather pathological cases in which the cdf oscillates too much as |x∗2 | → ∞. The condition
5 By a slight abuse of notation, when the dimension of the covariates is larger than one, I may refer to
the distribution of the random matrices X, X ∗ , or η, which should be interpreted as the distribution of
the random vectors vec(X), vec(X ∗ ), or vec(η), respectively.
6 Relaxing the assumption of unbounded support along the lines of Carrasco and Florens (2011) appears
possible.
guarantees that after appropriate centering, the Fourier transform of the conditional cdf
is an ordinary function even though the conditional cdf is not absolutely integrable in its
conditioning argument; see Schennach (2008).
Definition 1. Let the notation x(k) refer to the k-th element of a vector x. For some
space Ḡ ⊆ L2 (fX2∗ ) that contains FY2 |X2∗ (y2 |·) for every y2 ∈ Y2 , define the set of functions
G := G1 ∪ G2 with

G1 := {h ∈ Ḡ : 0 ≤ h(x) ≤ 1 ∀x ∈ Rp },
G2 := {h ∈ Ḡ : ∃k ∈ {1, . . . , p}, h̄ ∈ G1 : h(x) = x(k) h̄(x) ∀x ∈ Rp }.
The set G contains two types of functions: the first component, G1 , is the set of functions bounded between zero and one, and the second, G2 , contains the bounded functions
from G1 multiplied by a component of its argument.
Assumption ID 4. The conditional distribution of X2∗ given X1∗ is G-complete.
G-completeness of the distribution of X2∗ given X1∗ means that, for all functions h in G,
E[h(X2∗ )|X1∗ ] = 0 almost surely implies h(X2∗ ) = 0 almost surely. If Xt∗ is independent over
time then past or future covariates and outcomes do not contain any information about
the latent variable in the present period, and the identification argument breaks down.
Assumption ID 4 rules out this case. The completeness condition can be interpreted as a
nonparametric analogue of the standard rank condition in linear models. For example, if
the transition process satisfies X2∗ = βX1∗ + U with E[U X1∗ ] = 0 then the rank condition
is E[X1∗ X2∗ ]β = 0 ⇒ β = 0. However, G-completeness can accommodate much more
general nonlinear specifications.
G-completeness is a weaker restriction on the distribution of X2∗ | X1∗ the smaller the
set G is. In the related literature, G typically consists of all integrable functions or of
all bounded functions, amounting to the familiar notions of completeness or bounded
completeness, respectively. The type of functions h ∈ Ḡ for which we need completeness
are potential candidates h(·) for the desired conditional cumulative distribution function (cdf) FYt |Xt∗ (yt |·). Economic theory often implies restrictions on FYt |Xt∗ (yt |·) such as
smoothness, monotonicity, or continuity. Ḡ can then be taken as the space of functions
satisfying those restrictions. One basic requirement all functions in Ḡ must satisfy is Assumption ID 3(i). If FYt |Xt∗ is strictly monotone in its conditioning argument (e.g., because
of a strictly monotone regression function) then one may require all functions in Ḡ to be
asymmetric around zero. In this case, G2 -completeness is implied by G1 -completeness, and
thus G-completeness becomes weaker than bounded completeness. In the absence of such
additional restrictions on Ḡ, however, G-completeness is slightly stronger than bounded
completeness and weaker than completeness for all functions bounded by polynomials
(called P-completeness; D’Haultfœuille (2011)). Completeness conditions have become
popular in the recent econometrics literature, and more primitive sufficient conditions are
known; see Newey and Powell (2003), D’Haultfœuille (2011), Hu and Shiu (2011), and
Andrews (2011). For instance, consider the transition process X2∗ = h(X1∗ ) + U with
U ⊥ X1∗ . Under regularity assumptions, a characteristic function of U that has infinitely
many derivatives and is nonzero then implies bounded completeness of X2∗ | X1∗ ; see
D’Haultfœuille (2011). Because completeness, P-completeness (D’Haultfœuille (2011)),
and L2 (fX2∗ )-completeness (Andrews (2011)) each implies G-completeness, sufficient conditions for any of the former are also sufficient for the latter. Finally, notice that beyond
the nonparametric rank condition, the latent transition process is left completely unspecified. In particular, no functional form assumptions are made.
The next set of assumptions called IVX describes the restrictions necessary for X1 to
be a valid instrument for identification of FY2 |X2∗ . Similar assumptions called IVY that
can replace IVX to make the outcome variable a valid instrument are given below.
Definition 2. Let the observable transition operator DX : L2 (fX2 ) → L2 (fX1 ) be defined
as

DX h := E[h(X2 )|X1 = ·],   h ∈ L2 (fX2 ).
Assumption IVX 1. For any fixed y2 ∈ Y2 , the Fourier transform of [DX⁻¹ FY2 |X1 (y2 |·)](·)
is nonzero on a dense subset of Rp .

This assumption rules out cases in which the Fourier transform of the quantity DX⁻¹ FY2 |X1
vanishes over an interval, but allows for (potentially infinitely many) isolated zeros.
DX⁻¹ FY2 |X1 being a polynomial seems to be the only non-trivial example violating the
assumption, but the proof of the identification result below shows DX⁻¹ FY2 |X1 convolved
with the ME distribution is bounded, so the polynomial case is unlikely. Appendix A
gives sufficient conditions for this assumption. Notice the assumption is stated in terms
of observables only and so, in principle, could be tested.
Assumption IVX 2. (i) (Y2 , X2 ) ⊥ (X1∗ , X1 ) | X2∗ , (ii) Y2 ⊥ X2 | (X2∗ , X1∗ , X1 ) and
(iii) X2∗ ⊥ X1 | X1∗ .
The first part of Assumption IVX 2 says past observed and latent regressors should not
contain any more information about the present outcome and covariate than the present
true regressor already does. This condition rules out time dependence in the ME.7 An extension of these exclusion restrictions to additional, perfectly measured regressors allows
dynamic models with past outcomes as additional regressors as well as for feedback from
past outcomes to the independent variable. The regression error Yt | Xt∗ is allowed to be
serially dependent as well as conditionally heteroskedastic: they can depend on contemporaneous regressors but not on past ones. Part (ii) requires the ME to be independent of
the structural error Yt | Xt∗ , a standard assumption even in linear panel models with ME.
The third exclusion restriction is a redundancy condition that assumes past mismeasured
covariates do not contain more information about true covariates than past true covariates. In the next subsection, I discuss these conditional independence assumptions in the
context of a nonparametric panel data regression model.
With Assumptions ID and IVX, stating the first of two identification results is possible.8
Theorem 1. Under Assumptions ID and IVX, the conditional cdf of Y2 given X2∗ , FY2 |X2∗ ,
is identified. If, in addition, the distribution of the ME ηt is the same for all t then the
distribution of η, Fη , the transition law FX2∗ |X1∗ , and the distribution of X ∗ , FX ∗ are
identified as well.
The theorem shows the structural relationship between the outcome and the latent
independent variable as characterized by the conditional cdf FY2 |X2∗ is identified. In some
sense, this distribution embodies all characteristics of the structural model. In discrete
choice models, for example, it determines choice probabilities and may be of direct interest.
In other settings, the distribution may not be relevant by itself, but I subsequently show
how regression functions and marginal effects can be computed from this conditional cdf.
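For instance, when Y2 has a finite mean, the regression function is obtained from the identified cdf through the standard tail-integral formula (marginal effects then follow by differentiating in x∗):

```latex
E[\,Y_2 \mid X_2^* = x^*\,]
  = \int_0^{\infty} \bigl(1 - F_{Y_2|X_2^*}(y \mid x^*)\bigr)\,dy
  - \int_{-\infty}^{0} F_{Y_2|X_2^*}(y \mid x^*)\,dy .
```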
Remark 1. Part (i) of Theorem 1 identifies only the structural relationship in the second
time period. Obviously, if the relationship is stationary over time, the argument identifies
it for all time periods. If, on the other hand, FYt |Xt∗ varies with t, the argument requires
T time periods to identify T − 1 structural relationships.
Remark 2. As can be seen in the subsequent sketch of the proof, the identification argument uses only X1 from the past period but does not involve Y1 . Therefore, Theorem 1
7 A second identification argument presented below uses the outcome as an instrument and accommodates time series dependence in the ME.
8 Intuitively, identification of a function here means it possesses a unique representation in terms of
observed variables or quantities that can be constructed from observables. For more formal definitions of
identification, see Koopmans and Reiersol (1950), Roehrig (1988), and the handbook chapters by Matzkin
(1994, 2007).
also presents a constructive identification result for cross-sectional models of the following form. Let Y denote an individual’s outcome, X ∗ a latent covariate, and suppose one
observes X and Z with
X = X ∗ + ηX ,   X ∗ ⊥ ηX ,
Z = Z ∗ + ηZ ,   Z ∗ ⊥ ηZ .
Here Z is a noisy observation of an (unobserved) instrumental variable Z ∗ , which is
assumed to depend on X ∗ . Identification of such a model appears to be a new result
relative to the existing constructive identification arguments. The latter either require
asymmetry between how X and Z relate to X ∗ ,
X = X ∗ + ηX ,   X ∗ ⊥ ηX ,
X ∗ = Z + ηZ ,   Z ⊥ ηZ ,   (2)
(Schennach (2007)), or that X and Z are repeated measurements of the exact same variable,
X = X ∗ + ηX ,   X ∗ ⊥ ηX ,
Z = X ∗ + ηZ ,   X ∗ ⊥ ηZ ,   (3)
(Hausman, Newey, Ichimura, and Powell (1991), Li and Vuong (1998), Schennach (2004a,
2008), and Cunha, Heckman, and Schennach (2010)). Model (2) requires the instrument
Z to predict X ∗ in the sense that X ∗ fluctuates around the value of Z. For example, Z
may be an aggregate measure of the variable X ∗ . If the second measurement Z is taken at
a different point in time or in a different environment than X, one may question whether
they really measure the same underlying quantity X ∗ . The approach of this paper allows
for such changes and accommodates instruments Z, which measure a variable related to
the variable of interest, but not necessarily the same. Consequently, the requirements an
instrument must satisfy are weaker than in existing approaches based on (2) and (3).
Remark 3. The same argument as in the proof of Theorem 1 can also be used to directly
identify E[Yt |Xt∗ = ·], provided the conditional expectation function is bounded by polynomials. Unlike for FYt |Xt∗ (y|·), however, the argument would require dealing with generalized
functions, so that nonparametric estimation and inference become considerably more difficult.
Remark 4. In the recent econometrics literature, Kotlarski’s lemma (Kotlarski (1967))
has gained some popularity for identifying nonparametric panel data as well as ME models.
In a panel data model of the form Yt = m(Xt , α) + εt , but without ME, Evdokimov (2010)
views the outcomes in two different periods as repeated measurements of m(Xt , α) given
the regressor Xt does not change over those two periods. This approach cannot be expected
to work in the present setup because conditioning on the observed regressors to be equal
over time does not imply the latent regressors are constant over time as well. Kotlarski’s
lemma identifies cross-sectional models with ME and repeated measurements X1 = X ∗ +η1
and X2 = X ∗ + η2 , say, because they assume X1 and X2 are measurements of the same
underlying latent variable X ∗ ; see the references in Remark 2. This approach is applicable
to panel data models only in the special case when Xt∗ follows a random walk. Theorem 1,
on the other hand, allows for general nonparametric transition processes.
The formal proof of Theorem 1 is given in Appendix B.1. The following discussion provides an overview of the argument. For simplicity, consider the case of univariate covariates Xt∗ . First, I introduce the following short-hand notation: the contamination operator
Ch := E[h(X2 )|X2∗ = ·], the reverse contamination operator Crev h := E[h(X1∗ )|X1 = ·],
and the latent transition operator T h := E[h(X2∗ )|X1∗ = ·]. The following graph illustrates the relationship between the random variables in the model and the operators just
introduced:
[Diagram: DX maps functions of X2 to functions of X1 ; C maps functions of X2 to
functions of X2∗ ; T maps functions of X2∗ to functions of X1∗ ; Crev maps functions of X1∗
to functions of X1 ; the outcomes Y2 and Y1 are attached to X2∗ and X1∗ , respectively.]
For example, in the case of C, functions of X2 are mapped to functions of X2∗ . With
this notation at hand and under appropriate conditional independence assumptions, one
can write the observable quantities dY (x1 ) := FY2 |X1 (y2 |x1 ) and dY X (x1 ) := E[X2 1{Y2 ≤
y2 }|X1 = x1 ] as

dY (x1 ) = ∫∫ FY2 |X2∗ (y2 |x∗2 ) fX2∗ |X1∗ (x∗2 |x∗1 ) fX1∗ |X1 (x∗1 |x1 ) dx∗2 dx∗1

and

dY X (x1 ) = ∫∫ x∗2 FY2 |X2∗ (y2 |x∗2 ) fX2∗ |X1∗ (x∗2 |x∗1 ) fX1∗ |X1 (x∗1 |x1 ) dx∗2 dx∗1 .
With F̃Y2 |X2∗ (y2 |x∗2 ) := x∗2 FY2 |X2∗ (y2 |x∗2 ) and the operator notation just introduced, these
two equations can be rewritten as
dY = Crev T FY2 |X2∗ ,   (4)
dY X = Crev T F̃Y2 |X2∗ ,   (5)
where the operator Crev T is applied with respect to x∗2 , keeping y2 as a fixed parameter.
Next, the figure above suggests the observable transition in the covariates (represented
by DX ) consists of three intermediate transitions: (i) reversing the contamination from
X1 to X1∗ , (ii) performing the latent transition from X1∗ to X2∗ , and (iii) contaminating
X2∗ with ME to get X2 . In terms of operators, we thus have the identity DX = Crev T C.
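The identity DX = Crev T C can be checked numerically in a small discrete analogue of the model, where each operator becomes a conditional-probability matrix and operator composition becomes matrix multiplication. This is only a sketch with hypothetical 3-state distributions, not the paper's continuous setting:

```python
import numpy as np

# Toy discrete analogue (hypothetical 3-state distributions, chosen only for
# illustration): each operator becomes a conditional-probability matrix, and
# the identity DX = Crev T C becomes a matrix product.
pi = np.array([0.3, 0.4, 0.3])             # marginal distribution of X1*
T = np.array([[0.6, 0.3, 0.1],             # latent transition: P(X2* = j | X1* = i)
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
Q = np.array([[0.8, 0.1, 0.1],             # ME channel: P(X = j | X* = i), the
              [0.1, 0.8, 0.1],             # same in both periods and independent
              [0.1, 0.1, 0.8]])            # of everything else

# Contamination operator C: (Ch)(x2*) = E[h(X2) | X2* = x2*], i.e. the matrix Q
C = Q

# Reverse contamination Crev: row i holds P(X1* = . | X1 = i), via Bayes' rule
joint = pi[:, None] * Q                    # P(X1* = i, X1 = j)
Crev = (joint / joint.sum(axis=0)).T

# Observable transition DX: row i holds P(X2 = . | X1 = i), computed directly
# from the joint distribution of (X1, X2) implied by the model
joint12 = np.einsum('a,ai,ab,bj->ij', pi, Q, T, Q)   # P(X1 = i, X2 = j)
DX = joint12 / joint12.sum(axis=1, keepdims=True)

# Under the conditional independence assumptions the identity holds exactly
print(np.max(np.abs(DX - Crev @ T @ C)))   # zero up to floating-point error
```

The identity holds here because the simulated ME channel satisfies the analogues of Assumption IVX 2: X2 depends on the past only through X2∗ , and X2∗ depends on X1 only through X1∗ .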
Solving for Crev T and substituting the expression into (4) and (5) yields

C DX⁻¹ dY = FY2 |X2∗ ,
C DX⁻¹ dY X = F̃Y2 |X2∗ .
Rewrite this system of equations in the form of convolution equations

C sY = FY2 |X2∗ ,   (6)
C sY X = F̃Y2 |X2∗ ,   (7)

in which the functions sY and sY X solve the two integral equations

dY (x1 ) = E[sY (X2 )|X1 = x1 ],   (8)
dY X (x1 ) = E[sY X (X2 )|X1 = x1 ].   (9)
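Empirical analogues of (8) and (9) are nonparametric instrumental variable problems, which a series estimator in the spirit of Blundell, Chen, and Kristensen (2007) can solve. A minimal sketch under hypothetical simplifications (jointly normal design, polynomial sieve, and a linear true solution so that it lies in the sieve span):

```python
import numpy as np

# Sketch of solving d(x1) = E[s(X2) | X1 = x1] by series two-stage least
# squares (sieve IV). Hypothetical toy design, not the paper's estimator.
rng = np.random.default_rng(1)
n, rho = 50_000, 0.7
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)

s0 = lambda x: 1.0 + 2.0 * x                   # true (unknown) solution
d_obs = s0(x2) + rng.standard_normal(n)        # noisy observations; the noise is
                                               # independent of X1, so
                                               # E[d_obs | X1] = E[s0(X2) | X1]

basis = lambda x: np.column_stack([x ** j for j in range(4)])  # 1, x, x^2, x^3
B, P = basis(x2), basis(x1)                    # endogenous basis / instruments

# 2SLS: project B onto the instrument space, then least squares
Bhat = P @ np.linalg.solve(P.T @ P, P.T @ B)   # first-stage fitted values
coef = np.linalg.solve(Bhat.T @ B, Bhat.T @ d_obs)

grid = np.linspace(-1.0, 1.0, 5)
print(basis(grid) @ coef)                      # close to s0(grid) = 1 + 2*grid
```

The completeness condition plays the role of the rank condition that makes the matrix Bhat.T @ B invertible; in practice the number of basis terms acts as a regularization parameter for the ill-posed inverse problem.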
Notice (6) and (7) differ from the usual convolution equations encountered in related
work: convolving the observed functions sY and sY X with the distribution of the ME
produces the unobserved function we want to identify. Typically, the roles of observed
and unobserved quantities are reversed (Schennach (2004a,b, 2007, 2008)).
Since the two equations (8) and (9) involve only observable quantities, sY and sY X
are identified. The G-completeness condition is required to show these two functions
are in fact the unique solutions to (8) and (9), respectively. Finally, taking Fourier
transforms of both convolution equations (6) and (7) yields two algebraic equations with
two unknown functions, the characteristic function of the ME and the Fourier transform
of the conditional cdf. This system can then be solved and results in explicit expressions
as summarized in the following corollary.
Corollary 1. Let F denote the Fourier transform operator, i := √−1, Γ a smooth path
connecting 0 and ζ in Rp , and let ∇z f denote the gradient vector of a function f with
respect to z. Then, under Assumptions ID and IVX,

FY2 |X2∗ (y2 |x∗2 ) = (1/2π) ∫ φ(ζ) σY (ζ) e^(−iζx∗2) dζ + cY (y2 ),

where

φ(ζ) := exp( ∫₀^ζ [iσY X (z) − ∇z σY (z)] / σY (z) · dΓ(z) ),

the Fourier transforms σY (ζ) := [F DX⁻¹ dY ](ζ) and σY X (ζ) := [F DX⁻¹ dY X ](ζ), the observable conditional expectations dY (x1 ) := FY2 |X1 (y2 |x1 ) and dY X (x1 ) := E[X2 1{Y2 ≤
y2 }|X1 = x1 ], and finally the centering constant

cY (y2 ) := lim_{R1→∞} lim_{R2→∞} [ ∫_{R1≤|x∗2|≤R2} FY2 |X2∗ (y2 |x∗2 ) dx∗2 ] / [ ∫_{R1≤|x∗2|≤R2} dx∗2 ].
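The Fourier step underlying Corollary 1 can be illustrated with a toy deconvolution: a convolution equation g = f ∗ k becomes the algebraic equation Fg = Ff · Fk after Fourier transformation, so the unknown factor is recovered by division in the frequency domain. The kernel is assumed known in this sketch, whereas the corollary instead eliminates the unknown ME characteristic function by combining the two observable transforms σY and σY X :

```python
import numpy as np

# Toy illustration of solving a convolution equation by Fourier transforms.
# (Kernel assumed known here; not the paper's two-equation argument.)
n, dx = 1024, 0.05
x = (np.arange(n) - n // 2) * dx
f = np.exp(-x ** 2)                            # function to be recovered
k = np.exp(-x ** 2 / (2 * 0.1 ** 2))           # narrow Gaussian kernel whose
k /= k.sum()                                   # FT is everywhere nonzero (ID 2)

F = lambda a: np.fft.fft(np.fft.ifftshift(a))      # FFT of a centered signal
Finv = lambda A: np.fft.fftshift(np.fft.ifft(A))   # and its inverse

g = Finv(F(f) * F(k)).real                     # "observed" convolution f * k
f_hat = Finv(F(g) / F(k)).real                 # deconvolution by division

print(np.max(np.abs(f_hat - f)))               # small reconstruction error
```

The division step is exactly where Assumption ID 2(ii) (nonvanishing characteristic function) matters: a zero in the denominator would destroy the inversion.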
I now turn to a variant of the above identification argument that gives assumptions
such that a past outcome can replace the past mismeasured covariate as the instrument.
Definition 3. For 1 ≤ r < s ≤ T , introduce the notation Yr:s := (Yr , . . . , Ys )′ . Define the
observable transition operator DY : L2 (fXT ) → L2 (fY1:T −1 ) as

DY h := E[h(XT )|Y1:T −1 = ·],   h ∈ L2 (fXT ).

Also, let M : L2 (fY1:T −1 ) → L2 (fXT∗ −1 ) be the latent model operator defined as

Mh := E[h(Y1:T −1 )|XT∗ −1 = ·],   h ∈ L2 (fY1:T −1 ).
In the second identification result below, the vector Y1:T −1 contains all the past outcomes and serves as the instrument vector for recovering the structural relationship in
period T . Because outcomes are scalar and identification requires at least as many instruments as mismeasured covariates, the number of observed time periods must exceed
the dimension of the mismeasured covariate by at least one (T ≥ p + 1).
The following two assumptions replace the similar counterparts, Assumptions IVX 1
and 2.
Assumption IVY 1. For any fixed yT ∈ YT , the Fourier transform of the function
[DY−1 FYT |Y1:T −1 (yT |·)](·) is nonzero on a dense subset of Rp .
Assumption IVY 2. Suppose (i) (YT , XT ) ⊥ (XT∗ −1 , Y1:T −1 ) | XT∗ , (ii) YT ⊥ XT |
(XT∗ , XT∗ −1 , Y1:T −1 ), and (iii) XT∗ ⊥ Y1:T −1 | XT∗ −1 .
Notice Assumption IVY 2(i) requires the structural error Yt | Xt∗ to be independent
over time. By parts (ii) and (iii), past outcomes must be excluded from the outcome
equation and from the latent transition process.
Assumption IVY 3. (i) $Y_t = m(X_t^*) + \varepsilon_t$ with $\varepsilon_t \perp X_t^*$ for all $t$; (ii) $m(X_t^*)$ and $\varepsilon_t$ have a density with respect to Lebesgue measure on $\mathbb{R}$ and support equal to the whole of $\mathbb{R}$; (iii) $f_{X^*}$ is bounded and there is a constant $C \in \mathbb{R}$ such that $f_{X^*}(x^*) \le C(1 + |x^*|^2)^{-1}$ for all $x^* \in \mathbb{R}^{Tp}$; (iv) the characteristic function of $\varepsilon_t$ is infinitely often differentiable almost everywhere and nonzero on $\mathbb{R}$.
The independence assumption and additive separability in part (i) are strong restrictions, representing the cost of unrestricted time series dependence in the ME and of the
constructive approach taken in this paper.9 The assumption restricts the structural relationship in a way that ensures outcomes contain enough information about the true
covariates. Because the latent covariates are continuous random variables with support
on Rp , part (ii) ensures that the instruments (the past outcomes) have the same support.
The remaining two parts are similar to assumptions made above and satisfied by most
standard distributions.
With Assumptions ID and IVY, the next theorem summarizes the identification result
based on outcomes as instruments.
Theorem 2. Suppose $T = p+1$, and Assumptions ID and IVY hold. Then the conditional cdf of $Y_T$ given $X_T^*$, $F_{Y_T|X_T^*}$, is identified. If, in addition, the distribution of the ME $\eta_t$ is the same for $t \in \{T-1, T\}$, then the distribution of $\eta_{T-1:T}$, $F_{\eta_{T-1:T}}$, the transition law $F_{X_T^*|X_{T-1}^*}$, and the distribution of $X_{T-1:T}^*$, $F_{X_{T-1:T}^*}$, are identified as well.
Remarks similar to those stated after Theorem 1 apply here as well. The proof of the
theorem is similar in spirit to the one of Theorem 1. One difference, however, is worth
pointing out. The discussion relies on adjoint operators, so I have to be explicit about
the inner products associated with the various weighted L2 -spaces. For any density f
occurring in this paper, equip the space $L^2(f)$ with the usual inner product $\langle h_1, h_2 \rangle := \int h_1(u) h_2(u) f(u)\,du$ when $h_1, h_2 \in L^2(f)$. With a slight abuse of notation, I denote all
inner products and the induced norms in the different L2 -spaces by the same symbols,
h·, ·i and k · k. Which space they refer to should be clear from the context. Consider the
univariate case p = 1 and, analogously to the conditional expectation operators defined
above, let C now be the contamination operator in the T -th period and M∗ the adjoint of
M. The following figure illustrates the relationships between the relevant operators and
random variables:
[Diagram: the latent transition $T$ carries $X_1^*$ to $X_2^*$; the contamination operator $C$ maps $X_2^*$ to the observed $X_2$; the adjoint model operator $M^*$ maps $X_1^*$ to $Y_1$; and $X_2^*$ generates $Y_2$.]
⁹ The question of whether more general structural relationships may be allowed for non-constructive identification, e.g. along the lines of Torgovitsky (2010), presents an interesting avenue for future research.

Using the definitions from Corollary 2 and $\tilde{F}_{Y_T|X_T^*}(y_T|x_T^*) := x_T^* F_{Y_T|X_T^*}(y_T|x_T^*)$, consider the two integral equations
$$d_Y^\circ = M^* T F_{Y_T|X_T^*}, \qquad (10)$$
$$d_{YX}^\circ = M^* T \tilde{F}_{Y_T|X_T^*}, \qquad (11)$$
which follow from similar arguments as those that led to equations (4) and (5). Assumption IVY 3 guarantees the observable transition from $Y_1$ to $X_2$, represented by $D_Y = M^* T C$, is invertible and can be used to substitute out the unobserved transition $T$ in (10) and (11):
$$C D_Y^{-1} d_Y^\circ = F_{Y_T|X_T^*}, \qquad C D_Y^{-1} d_{YX}^\circ = \tilde{F}_{Y_T|X_T^*}. \qquad (12)$$
With these two equations, the remainder of the proof closely follows that of Theorem 1.
The illustrative figures above suggest a certain symmetry between $X_t$ and $Y_t$ in the sense
that one can view both as measurements of the underlying latent variable Xt∗ . Notice,
however, the two identification arguments based on covariates or outcomes as instruments
are not symmetric at all. The reason for the asymmetry lies in the different strategies
to extract information about the latent transition T from observable transitions. Even
though it is reasonable to assume a simple convolution relationship between observed
and latent covariates, Xt = Xt∗ + ηt , we want to allow for structural relationships more
general than that. Identification based on past covariates as instruments can use the
observed transition from X1 to X2 for backing out the unobserved transition, but when
the instruments are past outcomes, the observed transition from Y1 to Y2 would not lead to
a solvable convolution problem such as (12). Instead, the present identification argument
is based on the transition from Y1 to X2 , exploiting the convolution relationship between
X2∗ and X2 , and leads to the desired form (12).
Analogously to Corollary 1, the following corollary provides the expression of the
conditional cdf FYT |XT∗ (yT |x∗T ) in terms of observables.
Corollary 2. Let $\mathcal{F}$ denote the Fourier transform operator, $i := \sqrt{-1}$, $\Gamma$ a smooth path connecting $0$ and $\zeta$ in $\mathbb{R}^p$, and let $\nabla_z f$ denote the gradient vector of a function $f$ with respect to $z$. Then, under the assumptions of Theorem 2,
$$F_{Y_T|X_T^*}(y_T|x_T^*) = \frac{1}{2\pi} \int \phi(\zeta)\,\sigma_Y^\circ(\zeta)\,e^{-i\zeta x_T^*}\,d\zeta + c_Y^\circ(y_T),$$
where
$$\phi(\zeta) := \exp\left( \int_0^\zeta \frac{i\sigma_{YX}^\circ(z) - \nabla_z \sigma_Y^\circ(z)}{\sigma_Y^\circ(z)} \cdot d\Gamma(z) \right),$$
the Fourier transforms $\sigma_Y^\circ(\zeta) := [\mathcal{F} D_Y^{-1} d_Y^\circ](\zeta)$ and $\sigma_{YX}^\circ(\zeta) := [\mathcal{F} D_Y^{-1} d_{YX}^\circ](\zeta)$, the conditional expectations $d_Y^\circ(y_{1:T-1}) := F_{Y_T|Y_{1:T-1}}(y_T|y_{1:T-1})$ and $d_{YX}^\circ(y_{1:T-1}) := E[X_T \mathbf{1}\{Y_T \le y_T\}|Y_{1:T-1} = y_{1:T-1}]$, and finally the centering constants
$$c_Y^\circ(y_T) := \lim_{R_1\to\infty} \lim_{R_2\to\infty} \frac{\int_{R_1 \le |x_T| \le R_2} F_{Y_T|X_T}(y_T|x_T)\,dx_T}{\int_{R_1 \le |x_T| \le R_2} dx_T}.$$
2.2 Nonparametric Panel Data Regression with Fixed Effects
In this subsection, I apply the instrumental variable identification approach from the
previous subsection to a nonparametric panel data regression model with fixed effects
and ME. Specifically, consider
$$\tilde{Y}_t = m(\tilde{X}_t^*) + \alpha + \varepsilon_t \qquad (13)$$
$$\tilde{X}_t = \tilde{X}_t^* + \tilde{\eta}_t$$
with individual-specific heterogeneity $\alpha$ and $T = 4$ time periods. The dependence between $\alpha$ and $\tilde{X}_t^*$ is left completely unrestricted. Porter (1996) and Henderson, Carroll, and Li (2008), for example, study such a model without ME. Defining $\Delta\tilde{Y}_4 := \tilde{Y}_4 - \tilde{Y}_3$ (and similarly for other variables), and
$$Y_2 := \Delta\tilde{Y}_4, \quad X_2^* := (\tilde{X}_3^*, \tilde{X}_4^*), \quad X_2 := (\tilde{X}_3, \tilde{X}_4), \quad \eta_2 := (\tilde{\eta}_3, \tilde{\eta}_4),$$
$$Y_1 := \Delta\tilde{Y}_2, \quad X_1^* := (\tilde{X}_1^*, \tilde{X}_2^*), \quad X_1 := (\tilde{X}_1, \tilde{X}_2), \quad \eta_1 := (\tilde{\eta}_1, \tilde{\eta}_2),$$
then allows application of Theorem 1 or Theorem 2 to $(Y, X, X^*, \eta)$, resulting in identification of $F_{\Delta\tilde{Y}_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}$. In the remainder of this subsection, I discuss lower-level sufficient conditions for the hypotheses of the two theorems, using the particular structure imposed in (13). I also show how knowledge of $F_{\Delta\tilde{Y}_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}$ identifies $m$ and marginal effects.
For the case with past covariates as instruments, consider the following two assumptions on the dependence between the variables in the model.
Assumption REG 1. (i) εt ⊥ {X̃s∗ }s<t | X̃t∗ and εt ⊥ {X̃s∗ }s<t | (X̃t∗ , εt−1 ), (ii) ε ⊥ η̃ |
X̃ ∗ , and (iii) η̃t ⊥ η̃s for all s 6= t.
Part (i) of this assumption allows for contemporaneous heterogeneity in the regression
error εt , so it could be of the form εt = σ(X̃t∗ )ut with ut independent of all other variables
in the model. Serial dependence in the regression error is permitted as well, but part (iii)
rules out serial dependence in the ME. The remainder of the assumption requires the ME
and the regression error to be independent, just as in the linear model with ME (Griliches
and Hausman (1986)).
Assumption REG 2. E[εt |X̃t∗ , X̃s∗ ] = 0 for s ∈ {t − 1, t + 1}.
The following lemma shows these assumptions are sufficient for Assumption IVX 2(i) and (ii), and the conditional mean function $m$ is identified.

Lemma 1. Under Assumptions ID, IVX 1, IVX 2(iii), and REG 1-2, $m$ is identified up to an additive constant. If, in addition, the distribution of the ME $\eta_t$ is the same for all $t$, then the distribution of $\tilde{\eta}$, $F_{\tilde{\eta}}$, the transition law $F_{X_2^*|X_1^*}$, and the distribution of $\tilde{X}^*$, $F_{\tilde{X}^*}$, are identified as well.
Remark 5. Just as in the linear panel model with fixed effects, additive time-invariant
effects are not identified. They could be identified, of course, under the additional assumption that m passes through some known point or that Eα = 0.
Remark 6. In a semiparametric specification of the form $m(\tilde{x}^*; \theta)$, a finite-dimensional parameter $\theta$ is identified directly from $F_{\Delta\tilde{Y}_4|\tilde{X}_4^*, \tilde{X}_3^*}(\Delta y|\tilde{x}_4^*, \tilde{x}_3^*; \theta)$, given standard maximum likelihood assumptions. In this case, it suffices to identify $\sigma_Y(\zeta)$ for only a finite number of values $\zeta$, so one could weaken the requirement that the Fourier transform of $D_X^{-1} F_{Y_2|X_1}$ is nonzero almost everywhere.
Remark 7. One can allow for serial dependence in the ME when there are more than
four time periods. As in the linear model (Griliches and Hausman (1986)), regressors
sufficiently far in the past are valid instruments as long as the ME is serially independent
beyond some finite lag. Specifically, suppose the ME follows a moving average process of
order q, M A(q), and that T ≥ 4 + 2q. Then, for t ≥ 2q + 4, the regressors X̃t−2q−2 and
X̃t−2q−3 can be used as instruments to identify the structural relationship in periods t and
t − 1.
Remark 8. Identification in the presence of additional, perfectly measured regressors
simply requires conditioning all operations on those variables.
Because of $E[\varepsilon_t|\tilde{X}_t^*, \tilde{X}_s^*] = 0$, the function $m$ is directly identified from $m(\tilde{x}_2^*) = \text{const.} + \int \Delta y\, dF_{\Delta\tilde{Y}_4|\tilde{X}_4^*, \tilde{X}_3^*}(\Delta y|\tilde{x}_2^*, 0)$ with $F_{\Delta\tilde{Y}_4|\tilde{X}_4^*, \tilde{X}_3^*}$ as defined in Corollary 1. Alternatively, assume $Q_{\Delta\varepsilon_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(\tau|\tilde{x}_t^*, \tilde{x}_{t-1}^*)$, the $\tau$-th conditional quantile of the difference in the structural errors given the latent regressors, is equal to zero.¹⁰ Then
$$F_{\Delta\tilde{Y}_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(q + m(\tilde{x}_t^*) - m(\tilde{x}_{t-1}^*)|\tilde{x}_t^*, \tilde{x}_{t-1}^*) = F_{\Delta\varepsilon_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(q|\tilde{x}_t^*, \tilde{x}_{t-1}^*) = \tau$$
¹⁰ For instance, the difference between two i.i.d. errors $\varepsilon_1, \varepsilon_2$ is symmetric and thus the median of the difference is equal to zero. In general, however, a conditional quantile being zero is difficult to characterize; see also Khan, Ponomareva, and Tamer (2011).
if $q = Q_{\Delta\varepsilon_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(\tau|\tilde{x}_t^*, \tilde{x}_{t-1}^*) = 0$. Therefore, the difference in the regression function at different time points is identified by the conditional quantile
$$m(\tilde{x}_t^*) - m(\tilde{x}_{t-1}^*) = Q_{\Delta\tilde{Y}_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(\tau|\tilde{x}_t^*, \tilde{x}_{t-1}^*).$$
Also, under the stronger assumption that $\varepsilon$ is independent of the latent regressor, we have
$$m(\tilde{x}_t^*) - m(\tilde{x}_{t-1}^*) = q - F_{\Delta\varepsilon_t}^{-1}\left( F_{\Delta\tilde{Y}_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(q|\tilde{x}_t^*, \tilde{x}_{t-1}^*) \right),$$
where $F_{\Delta\varepsilon_t}(q)$ is identified by $F_{\Delta\tilde{Y}_t|\tilde{X}_t^*, \tilde{X}_{t-1}^*}(q|c, c)$ for some constant $c \in \mathbb{R}$. The difference $m(\tilde{x}_t^*) - m(\tilde{x}_{t-1}^*)$ directly identifies the effect of a discrete change in the latent regressors on the outcome. Alternatively, marginal changes $\partial m(\tilde{x}_t^*)/\partial \tilde{x}_t^*$ are identified as soon as $m$ itself is identified.
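The median-zero case of the quantile identification above can be checked with a quick Monte Carlo. In the sketch below, the regression function, error law, and evaluation points are illustrative choices of mine, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def m(x):
    return np.sin(x) + 0.5 * x   # toy regression function (my own choice)

x_t, x_tm1 = 1.2, -0.4                       # latent regressor values
eps = rng.standard_normal((2, 500_000))      # i.i.d. structural errors
# Delta eps is symmetric around 0, so its median is zero and the
# conditional median of Delta Y equals m(x_t) - m(x_{t-1}).
dy = m(x_t) - m(x_tm1) + eps[0] - eps[1]
est = np.median(dy)
```

With half a million draws the sample median recovers $m(\tilde{x}_t^*) - m(\tilde{x}_{t-1}^*)$ to roughly two decimal places.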
The following result provides conditions under which the model (13) is identified when
using past outcomes as instruments.
Assumption REG 3. (i) εt ⊥ {X̃s∗ }s<t | X̃t∗ and εt ⊥ {X̃s∗ }s<t | (X̃t∗ , εt−1 ), (ii) ε ⊥ η̃ |
X̃ ∗ , and (iii) εt ⊥ εs | X̃ ∗ for all s 6= t.
Lemma 2. Suppose $T = 2(p+1)$ and Assumptions ID, IVY 1, 2(iii), 3, and REG 2 and 3 hold. Then $m$ is identified up to an additive constant. If, in addition, the distribution of the ME $\eta_t$ is the same for $t \in \{T-1, T\}$, then the distribution of $\tilde{\eta}_{T-1:T}$, $F_{\tilde{\eta}_{T-1:T}}$, the transition law $F_{X_T^*|X_{T-1}^*}$, and the distribution of $\tilde{X}_{T-1:T}^*$, $F_{\tilde{X}_{T-1:T}^*}$, are identified as well.
Discussion. The comparative advantages of the two identification strategies, using covariates or outcomes as instruments, lie in the strategies' ability to handle serial dependence in the structural error or in the ME. Temporal dependence in the structural error
may be important for a variety of well-known reasons such as omitted variables, omitted
individual-specific effects, or misspecified dynamics. On the other hand, in some applications, serial dependence in the ME could be considered an important deviation from the
classical ME assumptions. For example, suppose the ME is of the form ηt = β + νt , where
β is an individual-specific effect, persistent over time, and νt an i.i.d. error. In survey
data, β could be interpreted as an individual’s ability to answer a question correctly, which
may be correlated with other characteristics of the subject, for example, language skills
or the person’s ability to recall past events. Serial dependence in the ME could also arise
from economic theory directly, for instance, as in Erickson and Whited (2000), or due to
manager fixed effects as in Bertrand and Schoar (2003). A third possibility occurs when
the researcher observes mismeasured flow variables, but the true explanatory variable in
the model represents the corresponding stock variable, so the ME in a certain period
consists of the sum of past ME’s and, thus, exhibits serial correlation by construction.
Finally, Bound and Krueger (1991) find serially correlated ME in a validation study of
earnings data.
The exclusion restrictions for the different instruments (Assumptions REG 1 and
REG 3) are analogous to each other, but the approach based on outcomes as instruments
requires additional restrictions on the structural relationship that are not present when using covariates as instruments. The additional assumption (Assumption IVY 3) imposes
distributional assumptions and additive separability on the structural error.
In conclusion, given a particular application, the choice of instrument should be guided
by the relative importance of serial dependence in the structural error and the ME, and
whether the aforementioned distributional restrictions on the structural error are reasonable.
3 Estimation
The identification arguments in the previous section are constructive and suggest a nonparametric plug-in estimator for regression functions and all unobserved distributions.
This section describes the estimation procedure with past covariates as instruments and,
to reduce the notational burden, considers only panel data models with univariate regressors. However, estimation based on outcomes as instruments can be carried out analogously, and multivariate extensions are straightforward. Following the introduction of the
estimator, I present its uniform convergence rate and conditions for uniform consistency.
3.1 Construction of the Estimator
Suppose we observe an i.i.d. sample {(yi,1 , xi,1 , yi,2 , xi,2 )}ni=1 of (Y1 , X1 , Y2 , X2 ). I suggest
an estimator of FY2 |X2∗ (y2 |x∗2 ) based on the following procedure:
Step 1: Construct regularized series estimators ŝY and ŝY X of the solutions to the two
ill-posed inverse problems D̂X sY = dˆY and D̂X sY X = dˆY X , respectively.
Step 2: Take Fourier transforms of ŝY and ŝY X , resulting in estimators σ̂Y and σ̂Y X .
Step 3: Combine $\hat\sigma_Y$ and $\hat\sigma_{YX}$ into an estimate $\hat\phi$ of $\phi$ using the formula from the identification argument. Substituting $\hat\phi$ and $\hat\sigma_Y$ into the expression for $F_{Y_2|X_2^*}$ then yields an estimator $\hat{F}_{Y_2|X_2^*}$.
Estimation of regression models requires an additional step:
Step 4: Compute either the conditional mean or conditional quantile function of F̂Y2 |X2∗
as an estimator of the regression function m.
I proceed by describing the four steps in more detail and formally define the estimators
for the subsequent derivation of asymptotic properties.
Step 1: Inverting the Observed Transition Operator The first step of the estimation procedure requires solving finite sample counterparts of the two ill-posed inverse
problems, DX sY = dY and DX sY X = dY X . Consider the first equation, which, by the
definition of the observed transition operator DX , is equivalent to
E[sY (X2 )|X1 = x1 ] = dY (x1 ).
Estimating the solution sY to this equation poses an inherently difficult statistical problem. It is further complicated by the fact that the density fX2 |X1 and the function dY are
not known and need to be estimated as well. Theorem 1 guarantees the existence of a
unique solution sY to the population problem, but it is not a continuous functional of dY .
The discontinuity is implied by the underlying function space being infinite dimensional.
As an important consequence, direct application of $D_X^{-1}$ may blow up small estimation errors in $\hat{d}_Y$, leading to inconsistent estimates of $s_Y$.
The following nonparametric estimator of sY sufficiently regularizes the problem and
facilitates consistent estimation.
Definition 4. Let $u_i := F(x_{i,1})$ be the transformed realization of the first-period explanatory variable with $F : \mathbb{R} \to [0,1]$ a continuous, strictly increasing function. Let $\lambda_{[0,1]}$ denote Lebesgue measure on $[0,1]$. For bases $\{b_j(u)\}_{j\ge1}$ and $\{p_j(u)\}_{j\ge1}$ of $L^2(\lambda_{[0,1]})$ and $L^2(f_{X_2})$, respectively, define $B^{J_n}(u) := (b_1(u), \ldots, b_{J_n}(u))'$ and $\mathbf{B} := (B^{J_n}(u_1), \ldots, B^{J_n}(u_n))$, $P^{K_n}(x_2) := (p_1(x_2), \ldots, p_{K_n}(x_2))'$ and $\mathbf{P} := (P^{K_n}(x_{1,2}), \ldots, P^{K_n}(x_{n,2}))$. For some fixed $y_2$, let $\Upsilon_Y := (\mathbf{1}\{y_{1,2} \le y_2\}, \ldots, \mathbf{1}\{y_{n,2} \le y_2\})'$, $\Upsilon_{YX} := (x_{1,2}\mathbf{1}\{y_{1,2} \le y_2\}, \ldots, x_{n,2}\mathbf{1}\{y_{n,2} \le y_2\})'$, let $I$ be the $K_n \times K_n$ identity matrix, and $A^-$ the generalized inverse of a matrix $A$. Then define the series estimators $\hat{s}_Y$ and $\hat{s}_{YX}$ of $s_Y$ and $s_{YX}$ as
$$\hat{s}_Y(x_2) := P^{K_n}(x_2)'\hat\beta_Y, \qquad \hat{s}_{YX}(x_2) := P^{K_n}(x_2)'\hat\beta_{YX},$$
with the series coefficients
$$\hat\beta_Y := \left( \mathbf{P}'\mathbf{B}(\mathbf{B}'\mathbf{B})^{-1}\mathbf{B}'\mathbf{P} + \alpha_n I \right)^- \mathbf{P}'\mathbf{B}(\mathbf{B}'\mathbf{B})^{-1}\mathbf{B}'\Upsilon_Y, \qquad (14)$$
$$\hat\beta_{YX} := \left( \mathbf{P}'\mathbf{B}(\mathbf{B}'\mathbf{B})^{-1}\mathbf{B}'\mathbf{P} + \alpha_n I \right)^- \mathbf{P}'\mathbf{B}(\mathbf{B}'\mathbf{B})^{-1}\mathbf{B}'\Upsilon_{YX}. \qquad (15)$$
The estimators are similar to the series estimators in Hall and Horowitz (2005) and Blundell, Chen, and Kristensen (2007). They take the form of linear instrumental variable estimators with $\mathbf{B}$ and $\mathbf{P}$ playing the roles of instruments and endogenous variables, respectively, except that the penalty term $\alpha_n I$ regularizes the denominator. The parameter $\alpha_n$
is required to vanish as the sample size grows, leading to no regularization in the limit.
In principle, it would be possible to derive the restrictions on sY and sY X implied by the
constraint that FY2 |X2∗ ∈ Ḡ. As in Blundell, Chen, and Kristensen (2007), those restrictions
could then be imposed by changing the penalty term αn I accordingly, but I do not pursue
this approach here because the regularization by αn I worked well in the simulations. The
conditioning variable X1 is transformed to U := F (X1 ) with support equal to [0, 1] to
facilitate the use of existing uniform convergence results for series estimators. F could
be chosen as some cdf, for instance. Given smoothing parameters αn , Kn , and Jn , the
computation of ŝY and ŝY X requires only matrix multiplication and inversion. In most
applications with scalar or bivariate mismeasured covariates, the dimensions of matrices
to be inverted (Kn × Kn and Jn × Jn ) tend to be small, leading to simple implementation.
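As a concrete illustration, the coefficient formulas (14) and (15) amount to a penalized two-stage least squares computation. Below is a minimal NumPy sketch; the function names, the convention that $\mathbf{B}$ is $n \times J_n$ and $\mathbf{P}$ is $n \times K_n$, and the data in the usage example are my own illustrative choices, not from the paper:

```python
import numpy as np

def regularized_series_coef(B, P, upsilon, alpha_n):
    """Regularized series IV coefficients as in eqs. (14)-(15):
    beta = (P'B (B'B)^{-1} B'P + alpha_n I)^- P'B (B'B)^{-1} B' upsilon,
    where B (n x J_n) holds the instrument basis evaluated at u_i and
    P (n x K_n) the basis for the endogenous variable x_{i,2}."""
    PB = P.T @ B                                   # K_n x J_n cross moments
    G = B.T @ B                                    # J_n x J_n Gram matrix
    M = PB @ np.linalg.solve(G, PB.T)              # P'B (B'B)^{-1} B'P
    rhs = PB @ np.linalg.solve(G, B.T @ upsilon)   # P'B (B'B)^{-1} B' upsilon
    K = M.shape[0]
    return np.linalg.pinv(M + alpha_n * np.eye(K)) @ rhs

def s_hat(x2_eval, basis, beta):
    """Evaluate the series estimate s_hat(x2) = P^{K_n}(x2)' beta."""
    return basis(x2_eval) @ beta
```

Feeding the indicator vector $\Upsilon_Y$ as `upsilon` yields $\hat\beta_Y$, and the $x$-weighted indicators $\Upsilon_{YX}$ yield $\hat\beta_{YX}$; larger `alpha_n` shrinks the coefficients, reflecting the regularization.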
Step 2: Computing Fourier Transforms In this step, I describe how to compute the
Fourier transforms σ̂Y and σ̂Y X , estimating σY and σY X . These estimators are Fourier
transforms of the regularized inverses computed in the previous step. The subsequent
asymptotic theory requires these estimated functions converge uniformly over R, which I
achieve by trimming their tails as follows.
Definition 5. For some trimming parameter $\bar{x}_n > 0$ and $y_2 \in \mathcal{Y}_2$, define the limits $c_Y^+(y_2) := \lim_{x_2\to\infty} F_{Y_2|X_2}(y_2|x_2)$, $c_Y^-(y_2) := \lim_{x_2\to-\infty} F_{Y_2|X_2}(y_2|x_2)$, and $c_Y(y_2) := (c_Y^+(y_2) + c_Y^-(y_2))/2$. Then,
$$\check{s}_Y(x_2) := \hat{s}_Y(x_2)\mathbf{1}\{|x_2| \le \bar{x}_n\} - c_Y^-(y_2)\mathbf{1}\{x_2 < -\bar{x}_n\} + c_Y^+(y_2)\mathbf{1}\{x_2 > \bar{x}_n\},$$
$$\check{s}_{YX}(x_2) := \hat{s}_{YX}(x_2)\mathbf{1}\{|x_2| \le \bar{x}_n\} - x_2 c_Y^-(y_2)\mathbf{1}\{x_2 < -\bar{x}_n\} + x_2 c_Y^+(y_2)\mathbf{1}\{x_2 > \bar{x}_n\},$$
$$\check{s}_{xY}(x_2) := i x_2 \check{s}_Y(x_2).$$
The trimming parameter x̄n is required to diverge to ∞ with the sample size, leading
to no trimming in the limit. On the interval [−x̄n , x̄n ], the estimators marked by a hacek
are equal to the corresponding estimators with a circumflex, but their tails are set to the
limits of the estimators with a circumflex. In many economic models, the conditional cdf
FY2 |X2∗ (y2 |x∗2 ) is known to converge to zero or one as the conditioning variable diverges
to $-\infty$ or $\infty$, so, by Lemma 4(i), the centering constant becomes $c_Y(y_2) \equiv 1/2$ with $c_Y^+(y_2) \equiv 1$ and $c_Y^-(y_2) \equiv 0$. In the regression example, this case occurs whenever the regression function diverges to $+\infty$ and $-\infty$ as $x_2^* \to \infty$ and $x_2^* \to -\infty$, respectively. Subsequently, I assume $c_Y^+$ and $c_Y^-$ are known, but the limiting constants are equivalent to the limits of the observed conditional cdf $F_{Y_2|X_2}$, so they could also be estimated.
I define the desired estimators of the Fourier transforms, σ̂Y and σ̂Y X , as the Fourier
transforms of šY and šY X . Since multiplication by ix2 corresponds to differentiation in
the Fourier domain, one can estimate the derivative ∇ζ σY by the Fourier transform of
šxY .
Definition 6. For any $\zeta \in \mathbb{R}$, define the Fourier transforms $\hat\sigma_Y(\zeta) := \int \check{s}_Y(x_2)e^{i\zeta x_2}\,dx_2$, $\hat\sigma_{YX}(\zeta) := \int \check{s}_{YX}(x_2)e^{i\zeta x_2}\,dx_2$, and $\hat\sigma_{xY}(\zeta) := \int \check{s}_{xY}(x_2)e^{i\zeta x_2}\,dx_2$.
Most modern statistical software packages provide an implementation of the Fast
Fourier Transform algorithm, which can perform this step efficiently. In practice, one
would compute šY (x2 ), šY X (x2 ), and šxY (x2 ) for, say, K := 2k̄ , k̄ ∈ N, values of x2 in
the convex hull of the data {x1,2 , . . . , xn,2 }. After stacking those values for each of the
estimators, the resulting vectors can be fed into the Fast Fourier Transform algorithm,
yielding output vectors of the same length K. These output vectors are the estimates
σ̂Y (ζ), σ̂Y X (ζ), and σ̂xY (ζ) at K corresponding points ζ in the frequency domain.11
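To make the FFT step concrete, here is a sketch of approximating a continuous Fourier transform $\sigma(\zeta) = \int f(x)e^{i\zeta x}dx$ from uniformly spaced samples. The helper name and sign-convention bookkeeping are my own, and one should adapt them to the conventions of the FFT implementation at hand, as the footnote warns:

```python
import numpy as np

def continuous_ft(x_grid, f_vals):
    """Approximate sigma(zeta) = int f(x) e^{i zeta x} dx by a Riemann sum
    evaluated with the FFT. `x_grid` must be uniformly spaced; returns the
    angular frequency grid and the transform values."""
    n = len(x_grid)
    dx = x_grid[1] - x_grid[0]
    zeta = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    # ifft uses the e^{+2 pi i jk/n} convention matching e^{+i zeta x};
    # the phase factor accounts for the grid starting at x_grid[0].
    sigma = np.fft.ifft(f_vals) * n * dx * np.exp(1j * zeta * x_grid[0])
    return zeta, sigma
```

For a Gaussian $f(x) = e^{-x^2/2}$, whose transform is $\sqrt{2\pi}e^{-\zeta^2/2}$, the sketch reproduces the analytic answer to high accuracy on a wide grid.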
Step 3: Inverting Fourier Transforms The third step involves estimating the characteristic function of the ME and taking inverse Fourier transforms to get an estimator
of the desired conditional cdf FYt |Xt∗ .
Definition 7. For $x_2^* \in \mathbb{R}$, $y_2 \in \mathcal{Y}_2$, and a trimming parameter $\bar\zeta_n > 0$, define
$$\check{F}_{Y_2|X_2^*}(y_2|x_2^*) := \max\left\{ \min\left\{ \hat{F}_{Y_2|X_2^*}(y_2|x_2^*),\, 1 \right\},\, 0 \right\}$$
with
$$\hat{F}_{Y_2|X_2^*}(y_2|x_2^*) = \frac{1}{2\pi} \int_{|\zeta|\le\bar\zeta_n} \hat\phi(\zeta)\,\hat\sigma_Y(\zeta)\,e^{-i\zeta x_2^*}\,d\zeta + c_Y(y_2) \qquad (16)$$
and
$$\hat\phi(\zeta) := \exp\left( \int_0^\zeta \frac{i\hat\sigma_{YX}(z) - \hat\sigma_{xY}(z)}{\hat\sigma_Y(z)}\,dz \right). \qquad (17)$$
Consistent estimation of FY2 |X2∗ requires ζ̄n → ∞ as the sample size grows, leading to
no trimming in the limit. This additional trimming in (16) is common in deconvolution
¹¹ Due to the many different conventions for computing Fourier transforms, one must pay close attention to the requirements of a particular implementation of the Fast Fourier Transform algorithm. For example, one can standardize a discrete Fourier transform in a variety of ways, and some algorithms require that the input function be sampled only at locations on the positive real line.
problems and necessary because the tails of Fourier transforms are difficult to estimate
and need to be cut off to gain uniform consistency. Since, in finite samples, F̂Y2 |X2∗ (y2 |x∗2 )
can take values outside of $[0, 1]$, I define the version $\check{F}_{Y_2|X_2^*}(y_2|x_2^*)$, which is constrained to the unit interval.
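Step 3 can be sketched as follows: evaluate the inner integral of (17) cumulatively on the trimmed frequency grid, exponentiate, and then carry out the inverse transform in (16) by quadrature. The function name and grid conventions are my own, and the inputs are assumed to be the Step 2 Fourier-transform estimates on a symmetric grid containing zero:

```python
import numpy as np

def cdf_estimate(zeta, sig_y, sig_yx, sig_xy, x_star, c_y):
    """Plug-in evaluation of F_check(y2|x_star) from eqs. (16)-(17).
    `zeta` is sorted and symmetric about 0; sig_* are the estimated
    Fourier transforms already trimmed to |zeta| <= zeta_bar_n."""
    # log phi_hat(zeta) = int_0^zeta (i sig_yx - sig_xy) / sig_y dz  (eq. 17)
    integrand = (1j * sig_yx - sig_xy) / sig_y
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zeta)
    log_phi = np.concatenate([[0.0 + 0.0j], np.cumsum(steps)])
    log_phi = log_phi - log_phi[np.argmin(np.abs(zeta))]  # anchor at zeta = 0
    phi_hat = np.exp(log_phi)
    # eq. (16): trapezoidal inverse Fourier transform plus centering constant
    g = phi_hat * sig_y * np.exp(-1j * zeta * x_star)
    F_hat = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(zeta)).real / (2 * np.pi) + c_y
    return min(max(F_hat, 0.0), 1.0)   # clip to [0, 1] as in Definition 7
```

A quick consistency check: when the ME is absent, $i\hat\sigma_{YX} = \hat\sigma_{xY}$, so $\hat\phi \equiv 1$ and the routine reduces to inverting $\hat\sigma_Y$ directly.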
Step 4: Computing Regression Functions In regression models, a fourth step is
required to compute the regression function from F̌Y2 |X2∗ (y2 |·). As discussed in section 2.2,
either the conditional mean or the conditional quantile of the regression error being zero
facilitates the estimation of the regression function from the conditional expectation or
conditional quantile of the outcome variable.
Definition 8. For some value $\tilde{x}_2^* \in \mathbb{R}$, $y_2 \in \mathcal{Y}_2$, and $\tau \in (0,1)$, define the estimator of the regression function $m(\tilde{x}_2^*)$ as either
$$\hat{m}(\tilde{x}_2^*) := \int_{\mathcal{Y}_2} y\, d\check{F}_{Y_2|X_2^*}(y|\tilde{x}_2^*) \qquad (18)$$
or
$$\hat{m}(\tilde{x}_2^*) := \min\left\{ y_2 \in \mathcal{Y}_2 : \check{F}_{Y_2|X_2^*}(y_2|\tilde{x}_2^*) \ge \tau \right\}. \qquad (19)$$
In finite samples, the integration in (18) has to be performed numerically. One possibility, given a fixed value for $\tilde{x}_2^*$, is to sample random variables from $\check{F}_{Y_2|X_2^*}(y|\tilde{x}_2^*)$ and then compute their mean. Alternatively, integration by parts leads to the formula $\int_{\underline{y}}^{\bar{y}} y\, d\check{F}_{Y_2|X_2^*}(y|\tilde{x}_2^*) = \bar{y} - \int_{\underline{y}}^{\bar{y}} \check{F}_{Y_2|X_2^*}(y|\tilde{x}_2^*)\,dy$, which is valid as long as $\underline{y}$ and $\bar{y}$ are finite. In practice, one can select these lower and upper integration bounds as the minimum and maximum values of outcomes observed in the sample and then perform a standard numerical integration step to compute $\int_{\underline{y}}^{\bar{y}} \check{F}_{Y_2|X_2^*}(y|\tilde{x}_2^*)\,dy$. Both approaches approximate the quantity in (18) but only incur a numerical error, which can be kept as small as desired, independently of the sample size.
For a given x̃∗2 , the quantile in (19) is estimated by computing F̌Y2 |X2∗ (y2 |x̃∗2 ) over a grid
of y2 values, ordering the estimates, and keeping the smallest value that is larger than τ .
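Both Step 4 computations are one-liners once $\check{F}_{Y_2|X_2^*}(\cdot|\tilde{x}_2^*)$ has been evaluated on a grid. The sketch below uses my own function names and assumes the cdf values start at 0 and reach 1 on the chosen grid, as required for the integration-by-parts formula:

```python
import numpy as np

def mean_from_cdf(y_grid, F_vals):
    """Eq. (18) via integration by parts: int y dF = y_bar - int F(y) dy,
    taking the grid endpoints as the finite bounds and assuming F runs
    from 0 to 1 across the grid."""
    area = np.sum(0.5 * (F_vals[1:] + F_vals[:-1]) * np.diff(y_grid))
    return y_grid[-1] - area

def quantile_from_cdf(y_grid, F_vals, tau):
    """Eq. (19): the smallest grid point y2 with F_check(y2|x) >= tau
    (assumes tau is attained somewhere on the grid)."""
    return y_grid[np.argmax(F_vals >= tau)]
```

For example, feeding in the cdf of a uniform distribution on $[0,1]$ returns a mean of $0.5$ and a $\tau = 0.25$ quantile of $0.25$.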
Remark 9. Notice the two estimators of $m$ depend on the particular values chosen for $y_2$ and $\tau$. While deriving optimal choices is beyond the scope of this paper, some recommendations can be given. Since extremal quantiles and the tails of cdf's are difficult to estimate, $y_2$ and $\tau$ should neither be too small nor too large, but perhaps somewhere near the center of the unconditional distribution of $Y_2$. Also, one may want to compute the estimator $\hat{m}$ for various values of $y_2$ or $\tau$ and then take the average.
Remark 10. If m or FY2 |X2∗ (y2 |·) is known to be monotone, one can estimate either of
them for various values x̃∗2 and then simply sort the estimates in ascending or descending
order. The resulting estimator performs well in finite samples as shown in Chernozhukov,
Fernández-Val, and Galichon (2010).
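The rearrangement step from Remark 10 is literally a sort; a minimal sketch (function name my own):

```python
import numpy as np

def rearrange(estimates, increasing=True):
    """Monotone rearrangement in the spirit of Chernozhukov, Fernandez-Val,
    and Galichon (2010): sort the pointwise estimates into monotone order."""
    out = np.sort(np.asarray(estimates, dtype=float))
    return out if increasing else out[::-1]
```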
3.2 Uniform Convergence Rates
In this section, I derive uniform convergence rates of the estimators F̌Y2 |X2∗ and m̂. These
rates differ depending on the tail behavior and smoothness of various model components.
As before, I focus on the univariate case.
Assumption C 1. Let {(yi,1 , xi,1 , yi,2 , xi,2 )}ni=1 be an i.i.d. sample of (Y1 , X1 , Y2 , X2 ) with
p = 1. The transformation F : R → [0, 1] in Definition 4 is such that U = F (X1 )
possesses a density that is bounded away from zero over its support [0, 1].
Choosing the function F approximately equal to the empirical cdf of X1 leads to
a transformed random variable U that is close to uniformly distributed over [0, 1] and
therefore possesses a density bounded away from zero.
Deriving explicit convergence rates for nonparametric estimators typically requires
assuming certain quantities involved, such as densities, regression functions, or characteristic functions, belong to some regularity space with known tail behavior or smoothness.
First, consider the ill-posed inverse problems of solving
$$E[s_Y(X_2)|X_1 = x_1] = d_Y(x_1) \quad \text{and} \quad E[s_{YX}(X_2)|X_1 = x_1] = d_{YX}(x_1) \qquad (20)$$
for $s_Y$ and $s_{YX}$.
Definition 9. Denote by $D^d f$ the $d$-th elementwise derivative of a scalar- or vector-valued function $f$. Let $\mathcal{H}_n$ be the space of functions $h \in L^2(f_{X_2})$ such that there exists $\Pi \in \mathbb{R}^{K_n}$ with $h = P^{K_n\prime}\Pi$ and $\sum_{k=1}^{s} \|D^k h\|^2 < c$ for some finite $c \in \mathbb{R}$, $s \in \mathbb{N}$. Then define
$$\tau_n := \sup_{h \in \mathcal{H}_n : h \ne 0} \frac{\|h\|}{\|(D_X^* D_X)^{1/2} h\|}.$$
The quantity τn is the sieve measure of ill-posedness introduced by Blundell, Chen, and
Kristensen (2007). For a given sequence of approximating spaces {Hn }, also called sieve,
it measures how severely ill-posed the equations in (20) are: a polynomial or exponential
divergence rate of τn classifies the problem as mildly or severely ill-posed, respectively.
The intuition for this distinction is analogous to the finite-dimensional case in which applying the operator DX becomes multiplication by a matrix. Inverting this matrix is
difficult when its eigenvalues are close to zero. In the infinite-dimensional case, however,
the operator DX possesses an infinite number of eigenvalues whose closeness to zero is
measured in terms of how fast their ordered sequence converges to zero. The fast (exponential) rate occurs, for example, when X1 and X2 are jointly Gaussian, leading to severe
ill-posedness.
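The Gaussian example can be checked numerically. By Mehler's formula, the conditional expectation operator of a standardized bivariate normal pair has probabilists' Hermite polynomials as eigenfunctions with geometrically decaying eigenvalues $\rho^j$, which is exactly the severely ill-posed regime. A small Monte Carlo sketch of mine (not from the paper):

```python
import numpy as np

def hermite(j, x):
    """Probabilists' Hermite polynomial He_j via the three-term recurrence
    He_{j+1}(x) = x He_j(x) - j He_{j-1}(x)."""
    h_prev, h = np.ones_like(x), x
    if j == 0:
        return h_prev
    for k in range(1, j):
        h_prev, h = h, x * h - k * h_prev
    return h

rng = np.random.default_rng(0)
rho, n = 0.7, 1_000_000
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Mehler: E[He_j(X2) | X1] = rho^j He_j(X1), so the operator's eigenvalues
# decay geometrically and its inverse blows up: severe ill-posedness.
eigs = [np.mean(hermite(j, x2) * hermite(j, x1)) / np.mean(hermite(j, x1)**2)
        for j in (1, 2, 3)]
```

The estimated coefficients line up with $\rho, \rho^2, \rho^3$, illustrating how quickly the eigenvalues of $D_X$ fall toward zero even for a moderate correlation.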
Assumption C 2. (a) The smallest eigenvalues of $E[B^{J_n}(F(X_1))B^{J_n}(F(X_1))']$ and $E[P^{K_n}(X_2)P^{K_n}(X_2)']$, respectively, are bounded away from zero uniformly in $J_n, K_n$; (b) there is a sequence $\{\omega_{0,n}\}$ in $\mathbb{R}$ such that $\sup_{u\in[0,1]} |B^{J_n}(u)| \le \omega_{0,n}$; (c) $p_j(x)$ are bounded uniformly over $j$ and have $\rho_s > 1/2$ square-integrable derivatives; (d) for any functions $h_p \in L^2(f_{X_2})$ and $h_b \in L^2(\lambda_{[0,1]})$ with $l$ square-integrable derivatives there are $\Pi_p$ and $\Pi_b$ such that $\|h_p - P^{K_n\prime}\Pi_p\| = O(K_n^{-l})$ and $\|h_b - B^{J_n\prime}\Pi_b\| = O(J_n^{-l})$ as $J_n, K_n \to \infty$; (e) $d_Y$, $d_{YX}$, and $E[h(X_2)|X_1 = \cdot]$ have $\rho_d > 1/2$ square-integrable derivatives for all $h \in \mathcal{H}_n$, and each of $s_Y$, $s_{YX}$, and $s_{xY}$ has at least $\rho_s \ge 2$ derivatives; (f) for $k \in \{Y, YX\}$, there is a function $h_k \in \mathcal{H}_n$ so that $\tau_n^2 \|D_X(s_k - h_k)\|^2 \le \text{const} \cdot \|s_k - h_k\|^2$.
The various parts of this assumption are standard in the literature on series estimation; see Newey (1997), for example. Part (a) bounds the second moment matrix of the approximating functions away from singularity. Part (b) bounds the individual series terms, which, by the compact support of $U = F(X_1)$, is not restrictive. The third condition assumes the uniform approximation of the target function by a truncated series expansion incurs an error that vanishes at rate $O(K_n^{-\rho})$. Assumption C 2(f) is taken from Blundell, Chen, and Kristensen (2007) and requires that, for some $h_k \in \mathcal{H}_n$, $D_X h_k$ approximates $D_X s_k$ at least as well as $h_k$ approximates $s_k$ (after standardizing by $\tau_n^2$). The latter approximation incurs an error that vanishes at the rate given in part (d).
Assumption C 3. For some $a > \rho_s$, where $\rho_s$ is defined in Assumption C 2(e), $E[|\eta_2|^{2a}] < \infty$ and $E[|X_2^*|^{2a} \,|\, X_1^* = x_1^*] < \infty$ for all $x_1^* \in \mathbb{R}$.
Definition 10. Given the sequences $\bar{x}_n \to \infty$ and $\bar\zeta_n \to \infty$, define the bounds $\underline{\sigma}_n := \inf_{y_2\in\mathcal{Y}_2} \inf_{|\zeta|\le\bar\zeta_n} |\gamma(\zeta, y_2)/\phi(\zeta)|$ and $\bar{r}_n := \sup_{|\zeta|\le\bar\zeta_n} |\partial\log\phi(\zeta)/\partial\zeta|$. Let
$$T_n := (\bar{r}_n T_{Y,n} + T_{\Delta,n})\,\underline{\sigma}_n^{-1}\bar\zeta_n^2 + T_{\gamma,n}$$
be a tail-trimming bound with
$$T_{\Delta,n} := \max_{k=0,2}\, \max_{j=0,1}\, \max_{d=1,2}\, \max\{T_{Y,n}^{d,j,k},\, T_{YX,n}^{d,j=0,k}\}, \qquad T_{Y,n} := \max_{d=0,1}\, \max_{k=0,1}\, T_{Y,n}^{d,j=0,k},$$
and for $L \in \{Y, YX\}$, $d, j, k \in \{0, 1, 2\}$,
$$T_{L,n}^{d,j,k} := \sup_{y_2\in\mathcal{Y}_2} \bar\zeta_n^k \int_{|x_2|>\bar{x}_n} |x_2|^j \left| \nabla^d \hat{s}_L(x_2, y_2) - \nabla^d s_L(x_2, y_2) \right| dx_2, \qquad T_{\gamma,n} := \sup_{y_2\in\mathcal{Y}_2} \int_{|\zeta|>\bar\zeta_n} |\gamma(\zeta, y_2)|\,d\zeta.$$
Further, define a bound on the density $f_{X_2}$ by $\underline{f}_n := \inf_{|x_2|\le\bar{x}_n} f_{X_2}(x_2)$, $\delta_n := K_n^{-\rho_s} + \tau_n\sqrt{K_n/n}$, and $\omega_{d,n} := \sup_{u\in[0,1]} |D^d B^{J_n}(u)|$.
Assuming the existence of upper and lower bounds of Fourier transforms and densities is standard in the literature on deconvolution (e.g., Fan (1991), Fan and Truong (1993), Li and Vuong (1998), and Schennach (2004b)) and nonparametric regression (e.g., Andrews (1995)). The bound $\omega_{d,n}$ depends only on the particular basis chosen. For splines, $\omega_{d,n} = J_n^{1/2+d}$, whereas for orthogonal polynomials, $\omega_{d,n} = J_n^{1+2d}$.
Assumption C 4. As $n \to \infty$, let the parameter sequences $K_n \to \infty$, $J_n \to \infty$, $\bar{x}_n \to \infty$, $\bar\zeta_n \to \infty$, $\alpha_n \to 0$ satisfy $J_n/n \to 0$, $\lim_{n\to\infty}(J_n/K_n) = c > 1$, $\omega_{0,n}^2 K_n/n \to 0$, $\bar{r}_n\delta_n\omega_{1,n}/(\underline{f}_n^{1/2}\bar\zeta_n\underline{\sigma}_n) \to 0$, and $\bar{r}_n T_{Y,n}/\underline{\sigma}_n \to 0$.
Assumption C 4 states rate conditions on the various trimming and smoothing sequences of the nonparametric estimator. Corollaries 3 and 4 below imply these conditions
are mutually compatible.
With Assumption C at hand, stating the general form of the uniform convergence rate
in terms of the parameter sequences just defined is possible.
Theorem 3. Let $\bar{X}$ be some compact subset of $\mathbb{R}$. Under Assumptions ID, IVX, and C,
$$\sup_{y_2\in\mathcal{Y}_2}\, \sup_{x_2^*\in\bar{X}}\, \left|\check{F}_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*)\right| = O_p\left(\frac{\omega_{2,n}\bar{x}_n + \bar{r}_n\omega_{1,n}\bar\zeta_n}{\underline{f}_n^{1/2}\,\underline{\sigma}_n}\,\delta_n + T_n\right).$$
This expression of the convergence rate provides useful information about when the
p
estimator performs well. First of all, δn := Kn−ρs + τn Kn /n is the well-known convergence rate of nonparametric instrumental variable estimators arising from the estimation
of sY and sY X . Chen and Reiß (2011) and Johannes, Van Bellegem, and Vanhems (2011)
provide conditions under which this rate is minimax optimal in the nonparametric instrumental variable context. Second, due to the division by f n1/2 , the estimator converges
slowly if the density of X2 has thin tails. Intuitively, more variation in X2 improves the
precision of the estimator. Third, the formula for the characteristic function of the ME,
equation (17), suggests estimation errors may be large when the Fourier transform σ̂Y (ζ)
26
in the denominator is close to zero. Because σY (ζ) = γ(ζ)/φ(ζ), the Fourier transform
σY (ζ) is close to zero whenever γ(ζ), the Fourier transform of FY2 |X2∗ (y2 |·), is small relative to φ(ζ), the characteristic function of the ME. Therefore, thicker tails in the Fourier
transform of FY2 |X2∗ (y2 |·) and a ME characteristic function with thin tails result in a faster
convergence rate. Theorem 3 reflects this fact in the division by $\underline{\sigma}_n$. On the other hand, if the characteristic function of the ME decays too quickly, the ill-posed inverse problems may be severely ill-posed, leading to slow convergence rates $\delta_n$. In conclusion, the new estimator is expected to perform well if (i) the ME's characteristic function has moderately thin tails, (ii) the Fourier transform of the conditional cdf $F_{Y_2|X_2^*}$ has thick tails, and (iii) the density of $X_2$ has thick tails.
The smoothness of a function determines how thick the tails of its Fourier transform are. The more derivatives the function possesses, the faster the tails of its Fourier
transform vanish. To derive explicit rates of convergence in terms of the sample size,
the literature on nonparametric estimation typically categorizes functions into different
smoothness classes.
Definition 11. For a function $f(x)$ and a label $f$, let the expression $f(x) \geq (\leq)\ S(f,x)$ mean there exist constants $C_f, \gamma_f \in \mathbb{R}$ and $\alpha_f, \beta_f \geq 0$ such that $\gamma_f\beta_f \geq 0$ and $f(x) \geq (\leq)\ C_f (1+|x|)^{\gamma_f}\exp\{-\alpha_f|x|^{\beta_f}\}$ for all $x$ in the domain of $f$.
This definition conveniently groups the two common smoothness classes, ordinary
smooth and super-smooth functions, into one expression, simplifying the exposition of
the explicit rates below. If, for instance, βf = αf = 0, the function f is ordinary smooth;
otherwise it is super-smooth.12
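Two standard examples make the classification concrete (my own illustration, not from the paper): the Laplace characteristic function $1/(1+\zeta^2)$ fits the envelope of Definition 11 with $\alpha_f = \beta_f = 0$ and $\gamma_f = -2$ (ordinary smooth), while the standard normal characteristic function $\exp\{-\zeta^2/2\}$ needs $\alpha_f = 1/2$, $\beta_f = 2$ (super-smooth) and eventually falls below every polynomial envelope:

```python
import math

def envelope(x, C, gamma, alpha, beta):
    """S(f, x) = C (1+|x|)^gamma exp(-alpha |x|^beta), as in Definition 11."""
    return C * (1 + abs(x)) ** gamma * math.exp(-alpha * abs(x) ** beta)

def cf_laplace(z):   # ordinary smooth: polynomial decay in |z|
    return 1.0 / (1.0 + z ** 2)

def cf_normal(z):    # super-smooth: exponential decay in |z|
    return math.exp(-z ** 2 / 2)

# The Laplace cf is sandwiched by envelopes with beta = alpha = 0, gamma = -2.
for z in (0.5, 1.0, 5.0, 20.0):
    assert envelope(z, 0.25, -2, 0, 0) <= cf_laplace(z) <= envelope(z, 4.0, -2, 0, 0)

# The normal cf eventually drops below ANY polynomial envelope:
assert cf_normal(20.0) < envelope(20.0, 1.0, -10, 0, 0)
```

This is exactly the tail-thickness comparison that drives the rate distinctions in Corollaries 3 and 4 below.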
Assumption S. Suppose $|\partial\log\phi(\zeta)/\partial\zeta| \leq S(r,\zeta)$, $\sup_{y_2\in\mathcal{Y}_2}|\gamma(\zeta,y_2)/\phi(\zeta)| \leq S(g,\zeta)$, $f_{X_2}(x_2) \geq S(f_X, x_2)$, and $\sup_{y_2\in\mathcal{Y}_2}|\partial s_Y(x_2,y_2)/\partial x_2| \geq S(s, x_2)$.
This assumption assigns smoothness parameters α, β, and γ to the various quantities
whose smoothness is to be classified. Different combinations of values for these parameters
generate different convergence rates in terms of the sample size, as summarized in the
following two corollaries.
Corollary 3 (Mildly Ill-posed Case). Let $\bar{\mathcal{X}}$ be some compact subset of $\mathbb{R}$. Suppose $B^{(J_n)}$ is a spline basis, $\tau_n = O(K_n^\omega)$, $\alpha_{f_X} = \beta_{f_X} = 0$, and $\alpha_r = \beta_r = 0$. Let $K_n = O(n^{1/[2(\rho_s+\omega)+1]})$. Then, under Assumptions ID, IDX, C, and S, we have
12
See Fan (1991) for formal definitions.
(i) $\beta_g > 0$ and $\beta_s > 0$. Suppose $\gamma_r - \beta_g + 3 < 0$ and set $\bar\zeta_n = O((\log n)^{1/\beta_g})$. If $\alpha_s > \alpha_g$ and $\rho_s/[2(\rho_s+\omega)+1] > \alpha_g$, set $\bar x_n = O((\log n)^{1/\beta_s})$. If $\bar\kappa := 2\alpha_g/\gamma_{f_X} - 2\rho_s/[2\gamma_{f_X}(\rho_s+\omega)+\gamma_{f_X}] > 0$, set $\bar x_n = O(n^\kappa)$ with $0 < \kappa < \bar\kappa$. In either case,
$$\sup_{y_2\in\mathcal{Y}_2}\;\sup_{x_2^*\in\bar{\mathcal{X}}} \left| \check F_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*) \right| = O_p\!\left( (\log n)^{(\gamma_r-\beta_g+3)/\beta_g} \right) = o_p(1).$$
(ii) $\beta_g > 0$ and $\beta_s = 0$. Suppose $\gamma_r - \beta_g + 3 < 0$ and $\gamma_s < -3$. Let $\bar\zeta_n = O((\log n)^{1/\beta_g})$ and $\bar x_n = O(n^\kappa)$ with $\kappa > -\alpha_g/(\gamma_s+3)$. Then the convergence rate is the same as in (i).
(iii) $\beta_g = 0$ and $\beta_s > 0$. Suppose $\gamma_r < -3$, let $\bar\zeta_n = O(n^{-m/(\gamma_g+1)})$ and $\bar x_n = O((\log n)^{1/\beta_s})$ with $m := \min\{\alpha_s, \rho_s/[2(\rho_s+\omega)+1]\}$. Then
$$\sup_{y_2\in\mathcal{Y}_2}\;\sup_{x_2^*\in\bar{\mathcal{X}}} \left| \check F_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*) \right| = O_p\!\left( \bar\beta(n)\, n^{-m(\gamma_r+3)/(\gamma_g+1)} \right) = o_p(1)$$
with
$$\bar\beta(n) := \begin{cases} (\log n)^{-\gamma_{f_X}/(2\beta_s)}, & \alpha_s \geq \rho_s/[2(\rho_s+\omega)+1] \\ (\log n)^{(\gamma_s-2\beta_s+3)/\beta_s}, & \text{otherwise.} \end{cases}$$
(iv) $\beta_g = 0$ and $\beta_s = 0$. Suppose $\gamma_r, \gamma_s < -3$ and let $\bar\zeta_n = O(n^{\varpi})$ and $\bar x_n = O(n^{\varsigma})$ with $\varsigma := \rho_s/[(2(\rho_s+\omega)+1)(-\gamma_{f_X}/2 - (\gamma_s+3))]$ and $\varpi := \varsigma(\gamma_s+3)/(1+\gamma_g)$. Then
$$\sup_{y_2\in\mathcal{Y}_2}\;\sup_{x_2^*\in\bar{\mathcal{X}}} \left| \check F_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*) \right| = O_p\!\left( n^{\varpi(\gamma_r+3)} \right) = o_p(1).$$
Corollary 4 (Severely Ill-posed Case). Let $\bar{\mathcal{X}}$ be some compact subset of $\mathbb{R}$. Suppose $B^{(J_n)}$ is a spline basis, $\tau_n = O(\exp\{K_n\})$, $\rho_d = \infty$, and $\alpha_{f_X} = \beta_{f_X} = 0$. Let $K_n = O(\log n)$. Then, under Assumptions ID, IDX, C, and S, we have
(i) $\beta_g > 0$ and $\beta_s > 0$. Set $\bar\zeta_n = O((\log\log n)^{1/\beta_r})$ and $\bar x_n = O((\log\log n)^{1/\beta_{f_X}})$. Then
$$\sup_{y_2\in\mathcal{Y}_2}\;\sup_{x_2^*\in\bar{\mathcal{X}}} \left| \check F_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*) \right| = O_p\!\left( \bar\beta_1(n)(\log n)^{-\alpha_r+\alpha_{f_X}/2-\rho_s} + \bar\beta_2(n)(\log n)^{-\alpha_r-\alpha_s} + \bar\beta_3(n)(\log n)^{-\alpha_r} \right) = o_p(1)$$
with
$$\begin{aligned}
\bar\beta_1(n) &:= (\log\log n)^{(2+\gamma_r-\gamma_g)/\beta_r - \gamma_{f_X}/(2\beta_{f_X})} \exp\{\alpha_g(\log\log n)^{\beta_g/\beta_r}\}, \\
\bar\beta_2(n) &:= (\log\log n)^{(2+\gamma_r-\gamma_g)/\beta_r + (\gamma_s-2\beta_s+3)/\beta_{f_X}} \exp\{\alpha_g(\log\log n)^{\beta_g/\beta_r}\}, \\
\bar\beta_3(n) &:= (\log\log n)^{(\gamma_r-\beta_g+3)/\beta_r}.
\end{aligned}$$
(ii) $\beta_g > 0$ and $\beta_s = 0$. Set $\bar\zeta_n = O((\log\log n)^{1/\beta_r})$ and $\bar x_n = O((\log\log n)^{1/\beta_{f_X}})$. Then the convergence rate is the same as in (i) except $\beta_s = \alpha_s = 0$, and it is $o_p(1)$ if $\alpha_r > \alpha_g$, $\beta_r > 0$.
(iii) $\beta_g = 0$ and $\beta_s > 0$. Suppose $\gamma_r < -3$, $\beta_s \geq \beta_{f_X}$, and $\alpha_{f_X}/2 + \varpi(2+\gamma_r-\gamma_g) - \rho_s < 0$ with $\varpi := -\alpha_s/(1+\gamma_g)$. Let $\bar\zeta_n = O((\log n)^{\varpi})$ and $\bar x_n = O((\log\log n)^{1/\beta_s})$. Then we have
$$\sup_{y_2\in\mathcal{Y}_2}\;\sup_{x_2^*\in\bar{\mathcal{X}}} \left| \check F_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*) \right| = O_p\!\left( \bar\beta_1(n)(\log n)^{\varpi(2+\gamma_r-\gamma_g)-\rho_s} + \bar\beta_2(n)(\log n)^{\varpi(\gamma_r+3)} \right) = o_p(1)$$
with
$$\bar\beta_1(n) := (\log\log n)^{-\gamma_{f_X}/(2\beta_s)} \exp\{\alpha_{f_X}(\log\log n)^{\beta_{f_X}/\beta_s}/2\}, \qquad \bar\beta_2(n) := (\log\log n)^{\max\{(\gamma_s-2\beta_s+3)/\beta_s,\,0\}}.$$
(iv) $\beta_g = 0$ and $\beta_s = 0$. Suppose $\alpha_r, \beta_r > 0$. Then let $\bar\zeta_n = O((\log\log n)^{1/\beta_r})$ and $\bar x_n = O((\log\log\log n)^{1/\beta_{f_X}})$ to get
$$\sup_{y_2\in\mathcal{Y}_2}\;\sup_{x_2^*\in\bar{\mathcal{X}}} \left| \check F_{Y_2|X_2^*}(y_2|x_2^*) - F_{Y_2|X_2^*}(y_2|x_2^*) \right| = O_p\!\left( \bar\beta(n)(\log n)^{-\alpha_r} \right) = o_p(1)$$
with
$$\bar\beta(n) := (\log\log n)^{(\gamma_r+3)/\beta_r} + (\log\log n)^{(2+\gamma_r-\gamma_g)/\beta_r}(\log\log\log n)^{(\gamma_s+3)/\beta_{f_X}}.$$
If the problems in (20) are severely ill-posed then the inversion of DX is inherently
difficult and leads to slow, logarithmic convergence rates when estimating the solutions to
the corresponding ill-posed inverse problem. The logarithmic rates given in Corollary 4
reflect this well-known fact. Notice, however, that such slow rates occur also when the
Fourier transform γ of the conditional cdf FY2 |X2∗ decays rapidly relative to the characteristic function of the ME (Corollary 3(i),(ii)). To prevent the denominator from blowing
up the estimation error, the estimator requires a lot of trimming in the sense that ζ̄n
has to increase slowly. This trimming cuts off large parts of the integrand’s tails, creates
large biases, and leads to a slow overall convergence rate. In the remaining scenarios, the
convergence rate is of polynomial order.
The next theorem establishes the convergence rates of the regression function estimators m̂ as in Definition 8. Because the discussion in this section is restricted to univariate
mismeasured regressors, the following convergence rate only covers the model without
fixed effects. However, the extension to accommodate fixed effects is straightforward as
it only requires the analogous result for two dimensions, so that estimation can be based
on differences.
Theorem 4. Let $\bar{\mathcal{X}}$ be some compact subset of $\mathbb{R}$ and $\tilde Y_t = m(\tilde X_t^*) + \varepsilon_t$ for $t = 1, 2$. Suppose either $E[\varepsilon_t|\tilde X_t^*] = 0$, defining $\hat m$ as in (18), or $Q_{\varepsilon_t|\tilde X_t^*}(\tau|\tilde x_t^*) = 0$ for some $\tau \in (0,1)$, defining $\hat m$ as in (19). Assume the distribution of $Y_2\,|\,\tilde X_t^* = \tilde x_t^*$ has compact support for all values $\tilde x_t^*$. If any of the assumptions in the different cases of Corollaries 3 or 4 are satisfied, then
$$\sup_{\tilde x_2^*\in\bar{\mathcal{X}}} |\hat m(\tilde x_2^*) - m(\tilde x_2^*)| = O_p(\beta_m(n)),$$
where $\beta_m(n)$ denotes the convergence rate of $\check F_{Y_2|X_2^*}$ defined in the relevant subcase of Corollary 3 or 4.
4
Simulations
This section studies the finite sample performance of the proposed estimator m̂. I consider
a nonlinear panel data regression without individual-specific heterogeneity,
$$Y_t = m(X_t^*) + \varepsilon_t, \qquad X_t = X_t^* + \eta_t, \qquad t = 1, 2,$$
with the regression function m(x∗ ) := Φ(x∗ ) − 1/2 and Φ(·) the standard normal cdf. All
variables are scalar. The latent true explanatory variables are generated by X1∗ ∼ N (0, 1)
and X2∗ = 0.8 X1∗ + 0.7 N (0, 1). The ME ηt is i.i.d. N (0, ση2 ) with ση ∈ {0.5, 1.5}, which
correspond to the two scenarios called weak ME (ση = 0.5) and strong ME (ση = 1.5).
The structural error is independent over time and drawn from N (0, 1). The simulation
results are based on 1,000 Monte Carlo samples of length n = 200.
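The design just described can be sketched in a few lines (a minimal reimplementation of the data-generating process; function and variable names are my own):

```python
import math
import numpy as np

def std_normal_cdf(x):
    # Phi(x) computed via the error function (avoids a SciPy dependency)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def m(x):
    # regression function m(x*) = Phi(x*) - 1/2
    return std_normal_cdf(x) - 0.5

def simulate_panel(n, sigma_eta, seed=0):
    """One Monte Carlo sample from the design: X1* ~ N(0,1),
    X2* = 0.8 X1* + 0.7 N(0,1), X_t = X_t* + eta_t with eta_t ~ N(0, sigma_eta^2),
    and Y_t = m(X_t*) + eps_t with eps_t ~ N(0,1) independent over time."""
    rng = np.random.default_rng(seed)
    x1s = rng.standard_normal(n)                     # latent X1*
    x2s = 0.8 * x1s + 0.7 * rng.standard_normal(n)   # latent X2*
    x1 = x1s + sigma_eta * rng.standard_normal(n)    # mismeasured X1
    x2 = x2s + sigma_eta * rng.standard_normal(n)    # mismeasured X2
    y1 = np.array([m(v) for v in x1s]) + rng.standard_normal(n)
    y2 = np.array([m(v) for v in x2s]) + rng.standard_normal(n)
    return (y1, y2), (x1, x2), (x1s, x2s)

(y1, y2), (x1, x2), (x1s, x2s) = simulate_panel(200, sigma_eta=0.5)
```

Note that the latent regressors are serially dependent through the AR(1)-type transition, which is exactly the source of identifying variation the theory requires.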
For the new ME-robust estimator, the simulation setup presents a worst-case scenario
in the following sense. The choice of distributions implies (X1 , X2 ) are jointly normal,
resulting in severely ill-posed inverse problems. In addition, the conditional cdf FYt |Xt∗ (yt |·)
is super-smooth because the regression function and the density of the regression error
have infinitely many derivatives. As shown in the previous section, this scenario leads to
slow, logarithmic convergence rates.
The ill-posed inverse problem is regularized with αn ∈ {0.001, 0.01, 0.1}. The series
estimator is based on a quadratic polynomial basis in x2 . For x1 , I consider polynomial
bases of orders {3, 5, 7}, cubic spline and cubic B-spline bases with {5, 10, 15} knots. The
function m is estimated on a grid of 128 equidistant x-values on [−2, 2], which means that
all discrete Fourier transforms involved are calculated at 128 values in the corresponding
range in the frequency domain. The estimator m̂ is computed as the conditional mean of
F̌Y2 |X2∗ , approximating the integral in (18) over a grid of 100 equidistant y-values between
the 5% and the 95% quantile of all Y -draws pooled together. Other combinations of
the various simulation parameters have been considered but did not have any qualitative
effect on the results presented below.
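The final computation, turning a cdf evaluated on a finite $y$-grid into a conditional mean, can be sketched as follows (my own quadrature illustration; in the paper the estimated cdf $\check F_{Y_2|X_2^*}$ would be plugged in, whereas here a known normal cdf is used so the approximation can be checked):

```python
import math

def cond_mean_from_cdf(y_grid, cdf_vals):
    """Approximate E[Y | X* = x*] = integral of y dF(y|x*), given the cdf on a
    grid, using first differences of the cdf as probability weights."""
    mean = 0.0
    for j in range(1, len(y_grid)):
        w = cdf_vals[j] - cdf_vals[j - 1]               # mass between grid points
        mean += 0.5 * (y_grid[j] + y_grid[j - 1]) * w   # midpoint of the cell
    return mean

# Check on a known case: Y | X* ~ N(mu, 1), grid covering +/- 6 sd around mu.
mu = 0.3
grid = [mu - 6 + 12 * j / 400 for j in range(401)]
cdf = [0.5 * (1 + math.erf((y - mu) / math.sqrt(2))) for y in grid]
print(cond_mean_from_cdf(grid, cdf))  # close to mu = 0.3
```

With 100 grid points between the 5% and 95% pooled quantiles, as in the text, the same Riemann-Stieltjes approximation applies; the truncation of the tails is part of the reason the quantile version in (19) can be attractive.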
Tables 1 and 2 summarize the performance of the new ME-robust estimator suggested
in the previous section, compared with that of the standard Nadaraya-Watson estimator,
which ignores the ME. The table reports the absolute value of bias, standard deviation,
and root mean squared error, each averaged over the Monte Carlo samples and over the
grid of 128 x-values. Figure 1 shows the estimated regression functions together with the
range spanning two pointwise empirical standard deviations of the estimators.13
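The benchmark itself is easy to reproduce (a minimal Nadaraya-Watson implementation of my own, run on data in the spirit of the design above, illustrating the attenuation that ignoring the ME produces):

```python
import math
import random

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimator with a Gaussian kernel and bandwidth h."""
    num = den = 0.0
    for xi, yi in zip(x, y):
        k = math.exp(-0.5 * ((x0 - xi) / h) ** 2)
        num += k * yi
        den += k
    return num / den

random.seed(1)
n, sigma_eta = 20000, 1.5                              # strong-ME scenario
xs = [random.gauss(0, 1) for _ in range(n)]            # latent X*
x = [v + random.gauss(0, sigma_eta) for v in xs]       # mismeasured X
m = lambda v: 0.5 * (1 + math.erf(v / math.sqrt(2))) - 0.5  # m = Phi - 1/2
y = [m(v) + random.gauss(0, 1) for v in xs]

# At x = 1 the truth is m(1) ~ 0.34, but regressing Y on the noisy X
# flattens the estimated curve toward zero:
print(m(1.0), nw_estimate(1.0, x, y, h=0.3))
```

The large sample size is used only so the bias is visible through the sampling noise; the flattening is the population feature $E[Y|X=x] = E[m(X^*)|X=x]$, which mixes values of $m$ over the whole conditional distribution of $X^*$ given $X$.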
As mentioned in the previous section, the series estimator of the solution to the ill-posed inverse problems takes the form of a standard instrumental variable estimator for
which the vectors of basis functions in x1 and x2 play the roles of the instruments and of
the endogenous variables, respectively. To assess the reliability of this estimator, Table 3
reports the Cragg-Donald statistic of testing for weak instruments, which is a multivariate
extension of the test based on the first-stage F-statistic; see Cragg and Donald (1993) and
Stock and Yogo (2005) for details of the procedure.14 The test is valid when the model
is correctly specified, which, in the present context, requires the population model be
expressed in terms of the finite basis vectors in B and P. In general, the finite-dimensional
case can, of course, only approximate the truth. The table also lists the critical values
for 5%-level tests of 10% two-stage least-squares bias and 15% two-stage least-squares
size distortion. The Cragg-Donald test rejects weak identification for large values of the
test statistic. Rejection occurs for the polynomial basis, whereas the results based on the
(B-)spline basis appear less reliable.
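The instrument/endogenous-variable structure just described can be sketched in miniature (a hypothetical toy version of my own, not the paper's estimator, which adds trimming and regularization; the diagnostic below is a scalar first-stage F for one basis term rather than the full multivariate Cragg-Donald statistic):

```python
import numpy as np

def poly_basis(x, order):
    """Polynomial basis [1, x, ..., x^order] evaluated at the points in x."""
    return np.column_stack([x ** k for k in range(order + 1)])

def series_iv(y, x2, x1, order=2):
    """2SLS where B(x1) instruments P(x2): beta = (B'P)^{-1} B'y (just-identified)."""
    P, B = poly_basis(x2, order), poly_basis(x1, order)
    return np.linalg.solve(B.T @ P, B.T @ y)

def first_stage_F(x2, x1, order=2):
    """Crude first-stage F for the top endogenous basis term x2^order on B(x1)."""
    B = poly_basis(x1, order)
    target = x2 ** order
    coef, _res, *_ = np.linalg.lstsq(B, target, rcond=None)
    e = target - B @ coef
    k = B.shape[1] - 1
    n = len(x2)
    rss0 = np.sum((target - target.mean()) ** 2)
    rss1 = np.sum(e ** 2)
    return ((rss0 - rss1) / k) / (rss1 / (n - k - 1))

rng = np.random.default_rng(0)
x1s = rng.standard_normal(500)
x2s = 0.8 * x1s + 0.7 * rng.standard_normal(500)
x1 = x1s + 0.5 * rng.standard_normal(500)
x2 = x2s + 0.5 * rng.standard_normal(500)
y = x2s - 0.1 * x2s ** 2 + rng.standard_normal(500)
# large F suggests the polynomial instruments are informative in this design
print(series_iv(y, x2, x1), first_stage_F(x2, x1))
```

Serial dependence in the latent regressor is what makes functions of $x_1$ informative instruments for functions of $x_2$; with independent periods the first-stage F would collapse toward its null distribution.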
In terms of bias, standard deviation, and root mean squared error, the results demonstrate similar performance of the new ME-robust and the standard Nadaraya-Watson
estimator when ME is weak. In particular, the choices of tuning parameters (αn , Kn , Jn )
have almost no impact on the resulting estimates. However, when the ME has a large
13
Notice the asymptotic distribution of the ME-robust estimator is not available, so the displayed
measure of variability has to be interpreted with care.
14
In practice, one would want to test completeness or some notion of weak completeness directly, but
such a test may exist only in very special situations; see Canay, Santos, and Shaikh (2011).
variance, the Nadaraya-Watson estimator is strongly biased, whereas the bias of the ME-robust estimator barely changes relative to the weak ME scenario. This result is expected
and confirms the theoretical finding that in the presence of ME, the ME-robust estimator is consistent, whereas the Nadaraya-Watson estimator is not. The variability of the
ME-robust estimator is higher than (in the case of strong ME) or comparable to (in the
case of weak ME) that of the Nadaraya-Watson estimator. This finding is not surprising
either: just as in the case of linear regression with an endogenous regressor, the OLS
estimator tends to have a smaller asymptotic variance than two-stage least squares, but
the former is centered around the wrong quantity, whereas the latter is not. Similarly,
here the Nadaraya-Watson estimator is biased and less variable, but only the ME-robust
estimator consistently estimates the correct object. This simulation experiment confirms
the theoretical findings from the previous sections in that the new estimator is indeed
robust to ME and significantly reduces bias.
5
Conclusions
The paper presents a constructive identification argument for nonlinear panel data regressions with measurement error and fixed effects. The identifying assumptions are easy to
interpret in the panel data context and resemble the standard conditions for identification
of linear instrumental variable models. They inform the applied researcher about which
types of panel data models are identified in the presence of measurement error, and what
type of variation in the data is required to secure identification. I show that, under regularity conditions, if only outcomes and mismeasured regressors are observed, the model is
identified if either the measurement error or the regression error is serially independent.
Current work in progress by the author provides extensions of the results, which are
beyond the scope of the present paper. An immediate but important extension consists
of a semiparametric estimation procedure that allows for both simple estimation and inference, and explicitly deals with multivariate mismeasured regressors. This procedure
then facilitates a rigorous study of the cross-sectional and time series properties of the
relationship between investment and q, for example.
This paper provides a first step toward a more general theory of identification in nonlinear panel data models with measurement error. For example, studying (partial) identification when individual-specific effects depend on the measurement error or when the
fixed effects enter the regression function in a nonseparable fashion would be interesting.
A
Sufficient Condition for Assumption IVX 1
Lemma 3. Suppose Assumption ID 2(ii) holds, that X2∗ = g(X1∗ ) + U with U ⊥ X1∗ , that
the characteristic function of U is nonzero on a dense subset of Rp and that FY2 |X1 (y2 |·)
satisfies Assumption ID 3 with the obvious modifications. Then Assumption IVX 1 holds.
Proof The proof is straightforward. (i) $FC^{-1}F^{-1}$ is equal to multiplication by the inverse of the characteristic function of the ME (see also the proof of Theorem 1), which is nonzero. (ii) Similarly, $FC_{rev}^{-1}F^{-1}$ is equivalent to multiplication by a nonzero function. (iii) By the assumption of the lemma, $T$ is also a convolution operator whose kernel has a finite Fourier transform (the characteristic function of $U$). Therefore, $FT^{-1}F^{-1}$ is also equal to a multiplication operator whose multiplicator is nonzero over the whole of $\mathbb{R}^p$. (iv) By Lemma 4(ii), $FF_{Y_2|X_1}$ is nonzero as well.
In conclusion, (i)-(iv) above imply
$$FD_X^{-1}F_{Y_2|X_1} = FC^{-1}F^{-1}\; FT^{-1}F^{-1}\; FC_{rev}^{-1}F^{-1}\; FF_{Y_2|X_1}$$
is nonzero on a dense subset of $\mathbb{R}^p$. Q.E.D.

B
Proofs
Constants in this section are denoted by $C, C', C''$, and so on, but the same symbol in different instances does not necessarily refer to the same value. Also, let $\lambda$ denote Lebesgue measure on $\mathbb{R}^p$ and denote by $L^1(\lambda)$ the space of functions that are absolutely integrable. For two sequences $\{a_n\}$ and $\{b_n\}$ in $\mathbb{R}$, $a_n \asymp b_n$ means that the sequence $a_n/b_n$ is bounded away from 0 and $\infty$ uniformly over $n$.
B.1
Identification
Lemma 4. Under Assumption ID 2(i), 3 and Assumption IVX 2(i), the following holds:
(i) $\lim_{x_t\to\pm\infty} F_{Y_t|X_t}(y_t|x_t) = \lim_{x_t^*\to\pm\infty} F_{Y_t|X_t^*}(y_t|x_t^*)$.
(ii) The function $g$ as defined in the proof of Theorem 1 is not in $L^1(\lambda)$, but its Fourier transform $\gamma$ is an ordinary function: $\gamma(\zeta) = [Fg_o](\zeta) - 2c_Y(y_2)/(i\zeta)$ for $\zeta \in \mathbb{R}^p\setminus\{0\}$, where $g_o$ is a function in $L^1(\lambda)$.
(iii) The function $\tilde g$ as defined in the proof of Theorem 1 is not in $L^1(\lambda)$, but its Fourier transform is an ordinary function: $[F\tilde g](\zeta) = [F\tilde g_o](\zeta) - (c_Y^+(y_2) - c_Y^-(y_2))/|\zeta|^2$ for $\zeta \in \mathbb{R}^p\setminus\{0\}$, where $\tilde g_o$ is a function in $L^1(\lambda)$.
Proof Parts (i) and (ii) follow directly from Lemma 1 in Schennach (2008). A modification of that derivation proves part (iii) as follows. For simplicity of exposition, consider the univariate case $p = 1$. Define the function
$$H(x_2^*) := c_Y^-(y_2)\mathbf{1}\{x_2^* \leq 0\} + c_Y^+(y_2)\mathbf{1}\{x_2^* > 0\}$$
and note
$$\begin{aligned}
\int_{-\infty}^0 \left| x_2^*\left(F_{Y_2|X_2^*}(y_2|x_2^*) - H(x_2^*)\right) \right| dx_2^*
&= \int_{-\infty}^0 |x_2^*| \left| c_Y^-(y_2) + \int_{-\infty}^{x_2^*} \frac{\partial F_{Y_2|X_2^*}(y_2|u)}{\partial x_2^*}\, du - c_Y^-(y_2) \right| dx_2^* \\
&\leq \int_{-\infty}^0 |x_2^*| \int_{-\infty}^{x_2^*} \left| \frac{\partial F_{Y_2|X_2^*}(y_2|u)}{\partial x_2^*} \right| du\, dx_2^* \\
&\leq \int_{-\infty}^0 |x_2^*| \int_{-\infty}^{x_2^*} B(1+|u|)^{-3-\beta}\, du\, dx_2^* \\
&\leq B' \int_{-\infty}^0 |x_2^*|(1+|x_2^*|)^{-2-\beta}\, dx_2^* \\
&\leq B' \int_{-\infty}^0 |x_2^*|(1+|x_2^*|)^{-2-\beta}\, dx_2^* + B' \int_{-\infty}^0 (1+|x_2^*|)^{-2-\beta}\, dx_2^* - B' \int_{-\infty}^0 (1+|x_2^*|)^{-2-\beta}\, dx_2^* \\
&\leq B' \int_{-\infty}^0 (1+|x_2^*|)^{-1-\beta}\, dx_2^* - B' \int_{-\infty}^0 (1+|x_2^*|)^{-2-\beta}\, dx_2^* < \infty.
\end{aligned}$$
Similarly, one can show the same integral from 0 to $\infty$ is finite. Then, defining $\tilde g_o(x_2^*) := x_2^*\left(F_{Y_2|X_2^*}(y_2|x_2^*) - H(x_2^*)\right)$, we have that $\int |\tilde g_o(x_2^*)|\, dx_2^* < \infty$. Furthermore, $\tilde g(x_2^*) := x_2^*\left(F_{Y_2|X_2^*}(y_2|x_2^*) - c_Y(y_2)\right)$ can be decomposed as
$$\begin{aligned}
\tilde g(x_2^*) &= x_2^*\left(F_{Y_2|X_2^*}(y_2|x_2^*) - H(x_2^*)\right) + x_2^*\left(H(x_2^*) - c_Y(y_2)\right) \\
&= \tilde g_o(x_2^*) + x_2^*\left( c_Y^-(y_2)\mathbf{1}\{x_2^* \leq 0\} + c_Y^+(y_2)\mathbf{1}\{x_2^* > 0\} - c_Y(y_2) \right) \\
&= \tilde g_o(x_2^*) + x_2^*\left( \frac{c_Y^-(y_2)}{2}\left[2\cdot\mathbf{1}\{x_2^* \leq 0\} - 1\right] + \frac{c_Y^+(y_2)}{2}\left[2\cdot\mathbf{1}\{x_2^* > 0\} - 1\right] \right) \\
&= \tilde g_o(x_2^*) + x_2^*\left( -\frac{c_Y^-(y_2)}{2}\left[2\cdot\mathbf{1}\{x_2^* > 0\} - 1\right] + \frac{c_Y^+(y_2)}{2}\left[2\cdot\mathbf{1}\{x_2^* > 0\} - 1\right] \right) \\
&= \tilde g_o(x_2^*) + x_2^*\,\frac{c_Y^+(y_2) - c_Y^-(y_2)}{2}\,\mathrm{sgn}(x_2^*) \\
&= \tilde g_o(x_2^*) + \left(c_Y(y_2) - c_Y^-(y_2)\right)|x_2^*|,
\end{aligned}$$
where the first term is absolutely integrable as shown above, and $(c_Y(y_2) - c_Y^-(y_2))|x_2^*|$ is not but possesses the Fourier transform $\zeta \mapsto -2(c_Y(y_2) - c_Y^-(y_2))/|\zeta|^2$. The case $p > 1$ follows from a similar argument. Q.E.D.
Lemma 5. Under Assumptions ID 1 and 2, $C : L^2(f_{X_2}) \to L^2(f_{X_2^*})$ and $C_{rev} : L^2(f_{X_1^*}) \to L^2(f_{X_1})$ are injective. Furthermore, $D_X$ is injective on $C^{-1}(\mathcal{G})$ and $D_X C^{-1} h = C_{rev} T h$ holds for all $h \in \mathcal{G}$.
Proof The injectivity of $C$ and $C_{rev}$ is straightforward to prove. See, for example, Proposition 8 of Carrasco and Florens (2011). By similar reasoning, the adjoint operator $C^*$ of $C$ is injective as well. Since $C$ is injective, it possesses an inverse over its range. Since its adjoint $C^*$ is injective as well and $\overline{\mathcal{R}(C)} = \mathcal{N}(C^*)^\perp = L^2(f_{X_2^*})$, where $\mathcal{R}(\cdot)$ and $\mathcal{N}(\cdot)$ denote the range and null space of an operator, the range of $C$ is dense in $L^2(f_{X_2^*})$ and $D_X C^{-1} = C_{rev} T$ holds over the whole of $L^2(f_{X_2^*})$. Therefore, $D_X C^{-1} = C_{rev} T$ holds over $\mathcal{G}$ in particular. Q.E.D.
Proof of Theorem 1 First, notice that by Assumption IVX 2(i) and (ii),
$$\begin{aligned}
d_Y(x_1) &:= E[\mathbf{1}\{Y_2 \leq y_2\}|X_1 = x_1] \\
&= \iint F_{Y_2|X_2^*,X_1^*,X_1}(y_2|x_2^*,x_1^*,x_1)\, f_{X_2^*,X_1^*|X_1}(x_2^*,x_1^*|x_1)\, dx_2^*\, dx_1^* \\
&= \iint F_{Y_2|X_2^*}(y_2|x_2^*)\, f_{X_2^*|X_1^*}(x_2^*|x_1^*)\, f_{X_1^*|X_1}(x_1^*|x_1)\, dx_2^*\, dx_1^* \qquad (21)
\end{aligned}$$
and
$$\begin{aligned}
d_{YX}(x_1) &:= E[X_2\mathbf{1}\{Y_2 \leq y_2\}|X_1 = x_1] \\
&= \int E[X_2\mathbf{1}\{Y_2 \leq y_2\}|X_2^* = x_2^*]\, f_{X_2^*|X_1}(x_2^*|x_1)\, dx_2^* \\
&= \int E[X_2|X_2^* = x_2^*]\, F_{Y_2|X_2^*}(y_2|x_2^*)\, f_{X_2^*|X_1}(x_2^*|x_1)\, dx_2^* \\
&= \iint x_2^*\, F_{Y_2|X_2^*}(y_2|x_2^*)\, f_{X_2^*|X_1^*}(x_2^*|x_1^*)\, f_{X_1^*|X_1}(x_1^*|x_1)\, dx_2^*\, dx_1^*. \qquad (22)
\end{aligned}$$
Letting $\tilde F_{Y_2|X_2^*}(y_2|x_2^*) := x_2^* F_{Y_2|X_2^*}(y_2|x_2^*)$ and with the operators introduced in the main text, these two equations can be rewritten as
$$d_Y = C_{rev} T F_{Y_2|X_2^*}, \qquad (23)$$
$$d_{YX} = C_{rev} T \tilde F_{Y_2|X_2^*}. \qquad (24)$$
Here, as well as in the remainder of the identification argument, I suppress the dependence of various functions on $y_2$ as this value is fixed throughout. Next, notice that by Assumption IVX 2, we have
$$f(x_2|x_1) = \iint f_{X_2|X_2^*}(x_2|x_2^*)\, f_{X_2^*|X_1^*}(x_2^*|x_1^*)\, f_{X_1^*|X_1}(x_1^*|x_1)\, dx_2^*\, dx_1^*,$$
which can also be written as the operator identity $D_X = C_{rev} T C$, i.e. $D_X h = C_{rev} T C h$ for $h \in L^2(f_{X_2})$. Substituting this expression into (23) and (24) yields
$$d_Y = D_X C^{-1} F_{Y_2|X_2^*}, \qquad (25)$$
$$d_{YX} = D_X C^{-1} \tilde F_{Y_2|X_2^*}. \qquad (26)$$
By Lemma 5, $D_X C^{-1} h = C_{rev} T h$ for any $h \in \mathcal{G}$, a space of functions on which $T$ and thus $D_X$ are injective. Therefore, we have
$$C D_X^{-1} d_Y = F_{Y_2|X_2^*}, \qquad (27)$$
$$C D_X^{-1} d_{YX} = \tilde F_{Y_2|X_2^*}. \qquad (28)$$
Next, I show a unique solution $F_{Y_2|X_2^*}$ to these two equations exists. Let $s_Y(x_2) := [D_X^{-1} d_Y](x_2) - c_Y(y_2)$ and $s_{YX}(x_2) := [D_X^{-1} d_{YX}](x_2) - x_2 c_Y(y_2)$ and define the centered counterparts of $F_{Y_2|X_2^*}(y_2|x_2^*)$ and $\tilde F_{Y_2|X_2^*}(y_2|x_2^*)$ as $g(x_2^*) := F_{Y_2|X_2^*}(y_2|x_2^*) - c_Y(y_2)$ and $\tilde g(x_2^*) := \tilde F_{Y_2|X_2^*}(y_2|x_2^*) - x_2^* c_Y(y_2)$. The function $c_Y$ centers the various functions involved such that, by Lemma 4, $g$ and $\tilde g$ possess Fourier transforms that are ordinary functions. Formally, if the covariates are scalars ($p = 1$), define $c_Y(y_2) := (c_Y^+(y_2) + c_Y^-(y_2))/2$ with
$$c_Y^+(y_2) := \lim_{x_2\to\infty} F_{Y_2|X_2}(y_2|x_2), \qquad c_Y^-(y_2) := \lim_{x_2\to-\infty} F_{Y_2|X_2}(y_2|x_2).$$
If $p > 1$, $c_Y$ can be selected as
$$c_Y(y_2) := \lim_{R_1\to\infty}\lim_{R_2\to\infty} \frac{\int_{R_1\leq|x|\leq R_2} F_{Y_t|X_t}(y_t|x)\, dx}{\int_{R_1\leq|x|\leq R_2} dx}.$$
Subtracting the centering constants, (27) and (28) are equivalent to
$$C s_Y = g, \qquad (29)$$
$$C s_{YX} = \tilde g. \qquad (30)$$
Next, let $F$ denote the Fourier transform operator $[Fh](\zeta) := \int h(x) e^{i\zeta\cdot x}\, dx$, $\zeta \in \mathbb{R}^p$. It is well known that $F$ diagonalizes convolution operators^{15} such as $C$; that is, $FCF^{-1} = \Delta_\phi$ with the multiplication operator $[\Delta_\phi h](\zeta) := \phi(\zeta) h(\zeta)$ and $\phi$ the characteristic function of $-\eta_2$. Therefore, $FCs_Y = FCF^{-1}Fs_Y = \phi\,\sigma_Y$ and, similarly, $FCs_{YX} = \phi\,\sigma_{YX}$, where $\sigma_Y$ and $\sigma_{YX}$ are the Fourier transforms of $s_Y$ and $s_{YX}$. Therefore, applying $F$ to both (29) and (30) yields
$$\phi(\zeta)\,\sigma_Y(\zeta) = \gamma(\zeta), \qquad \zeta \neq 0, \qquad (31)$$
$$\phi(\zeta)\,\sigma_{YX}(\zeta) = -i\nabla\gamma(\zeta), \qquad \zeta \neq 0. \qquad (32)$$
The last equality holds because multiplication by $ix_2^*$ corresponds to taking derivatives in the Fourier domain. The equations are valid for all $\zeta \neq 0$ because, by Lemma 4, the Fourier transform $\gamma$ and its partial derivatives have poles at the origin but are ordinary functions everywhere else. Now, differentiate (31) with respect to $\zeta$, substitute in (31) and (32), and divide by $\phi$ (which is allowed by Assumption ID 2(ii)) to get
$$\frac{\nabla_\zeta\phi(\zeta)}{\phi(\zeta)}\,\sigma_Y(\zeta) + \nabla_\zeta\sigma_Y(\zeta) = \frac{\nabla_\zeta\gamma(\zeta)}{\phi(\zeta)} = i\sigma_{YX}(\zeta), \qquad \zeta \neq 0,$$
or the following set of partial differential equations in $\phi$:
$$\frac{\nabla_\zeta\phi(\zeta)}{\phi(\zeta)} = \frac{i\sigma_{YX}(\zeta) - \nabla_\zeta\sigma_Y(\zeta)}{\sigma_Y(\zeta)}, \qquad \zeta \neq 0.$$
This equation holds for all $\zeta \in \{\zeta \neq 0 : \sigma_Y(\zeta) \neq 0\}$, but since the left-hand side is an ordinary continuous function on the whole of $\mathbb{R}^p$ and by Assumption IVX 1, one can uniquely extend the quotient on the right to $\mathbb{R}^p$ by a continuous limiting process. Subsequently, let $\sigma := (i\sigma_{YX} - \nabla_\zeta\sigma_Y)/\sigma_Y$ denote this extension. Solving the partial differential equations with the initial condition $\phi(0) = 1$ yields
$$\phi(\zeta) = \exp\left( \int_0^\zeta \sigma(z)\cdot d\Gamma(z) \right),$$
where the integral is a path integral along some smooth path $\Gamma$ that connects 0 and $\zeta$ in $\mathbb{R}^p$. Then, by equation (31), $g$ is identified as the inverse Fourier transform
$$g(x_2^*) = \frac{1}{2\pi} \int \phi(\zeta)\sigma_Y(\zeta) e^{-i\zeta\cdot x_2^*}\, d\zeta$$
and $F_{Y_2|X_2^*}(y_2|x_2^*) = g(x_2^*) + c_Y(y_2)$.
15
See, for example, section 3 of Carroll, Rooij, and Ruymgaart (1991).
The marginal distribution of the ME $\eta_2$ is identified from the inverse Fourier transform of the characteristic function $\phi$. Knowledge of the marginal ME distribution in the second period implies identification of the contamination operator $C$. If the distribution of the ME is stationary, then $C_{rev}$ can be calculated from $C$, yielding an expression of the transition law $T$ via $T = C_{rev}^{-1} D_X C^{-1}$. Therefore, the distribution of $X_2^*$ given $X_1^*$ is known. From the characteristic function $E[e^{i\zeta\cdot X_1^*}] = E[e^{i\zeta\cdot X_1}]/E[e^{i\zeta\cdot\eta_1}]$, we then get the marginal distribution of $X_1^*$, which, together with the transition law, identifies the joint distribution of $X^*$ as well as that of $\eta$ (from $E[e^{i\zeta\cdot\eta}] = E[e^{i\zeta\cdot X}]/E[e^{i\zeta\cdot X^*}]$). Q.E.D.
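The two computational steps at the heart of this argument, recovering $\phi$ by integrating $\sigma = \partial\log\phi/\partial\zeta$ and recovering a latent characteristic function as a ratio of characteristic functions, can be checked numerically. The following sketch (my own, not from the paper) uses a known normal ME with $\phi(\zeta) = e^{-\sigma_\eta^2\zeta^2/2}$, so that $\sigma(\zeta) = -\sigma_\eta^2\zeta$ is available in closed form; in practice $\sigma$ would of course be estimated:

```python
import math

sigma_eta = 0.5
phi = lambda z: math.exp(-0.5 * sigma_eta ** 2 * z ** 2)  # true cf of the ME
sigma = lambda z: -sigma_eta ** 2 * z                     # d log(phi) / d zeta

def phi_from_sigma(zeta, steps=1000):
    """phi(zeta) = exp( integral_0^zeta sigma(z) dz ), via the trapezoid rule."""
    h = zeta / steps
    integral = 0.0
    for j in range(steps):
        integral += 0.5 * h * (sigma(j * h) + sigma((j + 1) * h))
    return math.exp(integral)

# cf of the latent X* as a ratio: E[e^{i zeta X*}] = E[e^{i zeta X}] / E[e^{i zeta eta}]
# (here X = X* + eta with X* ~ N(0,1) independent of eta, so all cfs are real)
cf_X = lambda z: math.exp(-0.5 * (1 + sigma_eta ** 2) * z ** 2)
cf_Xstar = lambda z: cf_X(z) / phi(z)

for z in (0.5, 1.0, 2.0):
    assert abs(phi_from_sigma(z) - phi(z)) < 1e-6
    assert abs(cf_Xstar(z) - math.exp(-0.5 * z ** 2)) < 1e-12
```

In this Gaussian example both recoveries are exact up to quadrature error, which illustrates why the rate analysis above focuses entirely on how well $\sigma_Y$ and $\sigma_{YX}$, and hence $\sigma$, can be estimated.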
Proof of Corollary 1 The corollary follows from the proof of Theorem 1.
Q.E.D.
Proof of Theorem 2 To simplify the exposition, suppose $p = 2$ and $T = 3$. The argument for larger dimensions $p$ and $T = p + 1$ works analogously; if $T > p + 1$, identification of $F_{Y_T|X_T^*}$ can be based only on the last $p + 1$ periods. Consider the two equations
$$\begin{aligned}
d_Y^\circ(y_{1:2}) &:= E[\mathbf{1}\{Y_3 \leq y_3\}|Y_{1:2} = y_{1:2}] \\
&= \iint F_{Y_3|X_3^*,X_2^*,Y_{1:2}}(y_3|x_3^*,x_2^*,y_{1:2})\, f_{X_3^*|X_2^*,Y_{1:2}}(x_3^*|x_2^*,y_{1:2})\, f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_3^*\, dx_2^* \\
&= \iint F_{Y_3|X_3^*}(y_3|x_3^*)\, f_{X_3^*|X_2^*}(x_3^*|x_2^*)\, f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_3^*\, dx_2^* \qquad (33)
\end{aligned}$$
and
$$\begin{aligned}
d_{YX}^\circ(y_{1:2}) &:= E[X_3\mathbf{1}\{Y_3 \leq y_3\}|Y_{1:2} = y_{1:2}] \\
&= \iint x_3^*\, F_{Y_3|X_3^*}(y_3|x_3^*)\, f_{X_3^*|X_2^*}(x_3^*|x_2^*)\, f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_3^*\, dx_2^*, \qquad (34)
\end{aligned}$$
which hold by Assumption IVY 2. Letting $\tilde F_{Y_3|X_3^*}(y_3|x_3^*) := x_3^* F_{Y_3|X_3^*}(y_3|x_3^*)$ and with the operators introduced in the main text, these two equations are equivalent to
$$d_Y^\circ = M^* T F_{Y_3|X_3^*}, \qquad (35)$$
$$d_{YX}^\circ = M^* T \tilde F_{Y_3|X_3^*}, \qquad (36)$$
keeping $y_3 \in \mathbb{R}$ fixed and implicit for the remainder of the proof. Similarly,
$$\begin{aligned}
f_{X_3|Y_{1:2}}(x_3|y_{1:2}) &= \iint f_{X_3|X_3^*,X_2^*,Y_{1:2}}(x_3|x_3^*,x_2^*,y_{1:2})\, f_{X_3^*|X_2^*,Y_{1:2}}(x_3^*|x_2^*,y_{1:2})\, f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_3^*\, dx_2^* \\
&= \iint f_{X_3|X_3^*}(x_3|x_3^*)\, f_{X_3^*|X_2^*}(x_3^*|x_2^*)\, f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_3^*\, dx_2^*, \qquad (37)
\end{aligned}$$
which is equivalent to $D_Y = M^* T C$. Next, we need to show that $D_Y$ is invertible on $C^{-1}(\mathcal{G})$. As in the proof of Theorem 1, $C$ is a convolution operator whose kernel has a nonzero Fourier transform and therefore is invertible. $T$ is invertible by Assumption ID 4. Therefore, it remains to show $M^*$ is invertible on its range. To that end, denote by $\mathcal{B}$ the set of all bounded functions from $\mathbb{R}^p$ to $\mathbb{R}^p$ and notice that, by Assumption IVY 3(iii), $T$ maps $\mathcal{G}$ into $\mathcal{B}$. Therefore, we only need to show bounded completeness of the conditional distribution of $X_2^*|(Y_1,Y_2)$. Let $h \in \mathcal{B}$ and consider
$$\begin{aligned}
\int h(x_2^*)\, f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_2^* &= \iint h(x_2^*)\, f_{X_1^*,X_2^*|Y_{1:2}}(x_1^*,x_2^*|y_{1:2})\, dx_1^*\, dx_2^* \\
&= \iint h(x_2^*)\, \frac{f_{X_1^*,X_2^*}(x_1^*,x_2^*)}{f_{Y_{1:2}}(y_{1:2})}\, f_{Y_1|X_1^*}(y_1|x_1^*)\, f_{Y_2|X_2^*}(y_2|x_2^*)\, dx_1^*\, dx_2^* \\
&= \iint h(x_2^*)\, \frac{f_{X_1^*,X_2^*}(x_1^*,x_2^*)}{f_{Y_{1:2}}(y_{1:2})}\, f_{\varepsilon_1}(y_1 - m(x_1^*))\, f_{\varepsilon_2}(y_2 - m(x_2^*))\, dx_1^*\, dx_2^*.
\end{aligned}$$
Define $\tilde h(u_1,u_2) := h(m^{-1}(u_2))\, f_{X_1^*,X_2^*}(m^{-1}(u_1), m^{-1}(u_2))$. By the previous equation and Assumption IVY 3(ii), setting $\int h(x_2^*) f_{X_2^*|Y_{1:2}}(x_2^*|y_{1:2})\, dx_2^*$ to zero is equivalent to setting
$$\iint \tilde h(u_1,u_2)\, \frac{f_{\varepsilon_1}(y_1-u_1)\, f_{\varepsilon_2}(y_2-u_2)}{f_{Y_{1:2}}(y_{1:2})}\, du_1\, du_2 = \frac{1}{f_{Y_{1:2}}(y_{1:2})} \int \left[ \int \tilde h(u_1,u_2)\, f_{\varepsilon_1}(y_1-u_1)\, du_1 \right] f_{\varepsilon_2}(y_2-u_2)\, du_2 \qquad (38)$$
to zero. By Assumption IVY 3(ii), $f_{Y_{1:2}}(y_{1:2}) \neq 0$ for all $y_{1:2} \in \mathbb{R}^2$. By Assumptions IVY 3(i), (ii), and (iv), the integrals with respect to $f_{\varepsilon_1}(y_1-u_1)$ and $f_{\varepsilon_2}(y_2-u_2)$ are convolutions with nonzero Fourier transform of their respective kernels. Furthermore, $\int \tilde h(u_1,u_2) f_{\varepsilon_1}(y_1-u_1)\, du_1$ is a bounded function in $u_2$, and so (38) implies that $\int \tilde h(u_1, m(X_2^*)) f_{\varepsilon_1}(y_1-u_1)\, du_1 = 0$ a.s. whenever $E[h(X_2^*)|Y_{1:2}] = 0$ a.s. Similarly, for any $u_2 \in \mathbb{R}^p$, $\tilde h(\cdot, u_2) = 0$ whenever $\int \tilde h(u_1,u_2) f_{\varepsilon_1}(\cdot - u_1)\, du_1 = 0$. By Assumption IVY 3(ii), $f_{X^*}$ is positive everywhere so, in conclusion, $E[h(X_2^*)|Y_{1:2}] = 0$ a.s. implies $h(X_2^*) = 0$ a.s., the desired completeness result.
Having established invertibility of $D_Y$, use the relationship $D_Y = M^* T C$ to rewrite (35) and (36) as
$$C D_Y^{-1} d_Y^\circ = F_{Y_3|X_3^*}, \qquad C D_Y^{-1} d_{YX}^\circ = \tilde F_{Y_3|X_3^*},$$
and the remainder of the proof closely follows that of Theorem 1. Q.E.D.
Proof of Corollary 2 The corollary follows from the proof of Theorem 2.
Q.E.D.
Proof of Lemma 1 To simplify the exposition, I subsequently drop the arguments of conditional densities that should be obvious from the context.
First, by Assumption REG 1(i) and (ii), $(\varepsilon_3,\varepsilon_4) \perp (\tilde\eta_1,\tilde\eta_2)\,|\,\tilde X^*$ and $(\varepsilon_3,\varepsilon_4) \perp (\tilde X_1^*,\tilde X_2^*)\,|\,(\tilde X_3^*,\tilde X_4^*)$. These two independence conditions imply $f_{\varepsilon_3,\varepsilon_4|\tilde X^*,\tilde\eta_1,\tilde\eta_2} = f_{\varepsilon_3,\varepsilon_4|\tilde X^*} = f_{\varepsilon_3,\varepsilon_4|\tilde X_3^*,\tilde X_4^*}$ and thus $f_{\Delta\varepsilon_4|\tilde X^*,\tilde\eta_1,\tilde\eta_2} = f_{\Delta\varepsilon_4|\tilde X_3^*,\tilde X_4^*}$. Therefore,
$$f_{Y_2|X_2^*,X_1^*,X_1} = f_{\Delta\tilde Y_4|\tilde X^*,\tilde X_2,\tilde X_1} = f_{\Delta\varepsilon_4|\tilde X^*,\tilde X_2,\tilde X_1} = f_{\Delta\varepsilon_4|\tilde X^*,\tilde\eta_2,\tilde\eta_1} = f_{\Delta\varepsilon_4|\tilde X_3^*,\tilde X_4^*} = f_{\Delta\tilde Y_4|\tilde X_3^*,\tilde X_4^*} = f_{Y_2|X_2^*}. \qquad (39)$$
Second, Assumption ID 2(i) and Assumption REG 1(iii) imply the weaker statements $(\tilde\eta_3,\tilde\eta_4) \perp (\tilde\eta_1,\tilde\eta_2)\,|\,\tilde X^*$ and $(\tilde\eta_3,\tilde\eta_4) \perp (\tilde X_1^*,\tilde X_2^*)\,|\,(\tilde X_3^*,\tilde X_4^*)$ so that $f_{\tilde\eta_3,\tilde\eta_4|\tilde X^*,\tilde\eta_1,\tilde\eta_2} = f_{\tilde\eta_3,\tilde\eta_4|\tilde X^*} = f_{\tilde\eta_3,\tilde\eta_4|\tilde X_3^*,\tilde X_4^*}$. Therefore,
$$f_{X_2|X_2^*,X_1^*,X_1} = f_{\tilde X_3,\tilde X_4|\tilde X^*,\tilde X_2,\tilde X_1} = f_{\tilde\eta_3,\tilde\eta_4|\tilde X^*,\tilde X_2,\tilde X_1} = f_{\tilde\eta_3,\tilde\eta_4|\tilde X^*,\tilde\eta_2,\tilde\eta_1} = f_{\tilde\eta_3,\tilde\eta_4|\tilde X_3^*,\tilde X_4^*} = f_{\tilde X_3,\tilde X_4|\tilde X_3^*,\tilde X_4^*} = f_{X_2|X_2^*}. \qquad (40)$$
Third, by Assumption REG 1(ii),
$$f_{Y_2|X_2^*,X_1^*,X_2,X_1} = f_{\Delta\tilde Y_4|\tilde X^*,\tilde X} = f_{\Delta\varepsilon_4|\tilde X^*,\tilde\eta} = f_{\Delta\varepsilon_4|\tilde X^*,\tilde\eta_1,\tilde\eta_2} = f_{\Delta\tilde Y_4|\tilde X^*,\tilde X_1,\tilde X_2} = f_{Y_2|X_2^*,X_1^*,X_1}. \qquad (41)$$
Now, (41) implies Assumption IVX 2(ii), which in turn means (39) and (40) together imply Assumption IVX 2(i). Therefore, Theorem 1 can be applied to identify the conditional cdf $F_{\Delta\tilde Y_4|\tilde X_4^*,\tilde X_3^*}$. Because of Assumption REG 2,
$$\begin{aligned}
E[\Delta\tilde Y_4|\tilde X_4^* = \tilde x_4^*, \tilde X_3^* = 0] &= m(\tilde x_4^*) - m(0) + E[\varepsilon_4|\tilde X_4^* = \tilde x_4^*, \tilde X_3^* = 0] - E[\varepsilon_3|\tilde X_4^* = \tilde x_4^*, \tilde X_3^* = 0] \\
&= m(\tilde x_4^*) - m(0) - E[\varepsilon_3|\tilde X_3^* = 0],
\end{aligned}$$
so the regression function $m$ can be written as
$$m(\tilde x_4^*) = \text{const.} + \int \Delta y\; dF_{\Delta\tilde Y_4|\tilde X_4^*,\tilde X_3^*}(\Delta y|\tilde x_4^*, 0)$$
and the statement of the lemma follows. Q.E.D.
Proof of Lemma 2 Analogous to the proof of Lemma 1.
Q.E.D.
B.2
Consistency and Convergence Rates
Proof of Theorem 3 The derivation of the convergence rate proceeds in roughly four steps: (i) bound $\|\hat s_Y - s_Y\|$ and similar estimation errors of the other $s$-functions; (ii) use step (i) to bound $\|\hat\sigma_Y - \sigma_Y\|$ and similar estimation errors for the other $\sigma$-functions; (iii) use step (ii) to bound $\|\hat\sigma_\Delta/\hat\sigma_Y - \sigma_\Delta/\sigma_Y\|$, where $\sigma_\Delta$ ($\hat\sigma_\Delta$) is (an estimator of) the difference in two of the $\sigma$-functions; (iv) use the previous steps to get the desired bound on the estimation error in $\check F_{Y_t|X_t^*}$.
Step (i) By Theorem 2 of Blundell, Chen, and Kristensen (2007), we have $\|\hat s_Y - s_Y\| = O_p(\delta_n)$ and $\|\hat s_{YX} - s_{YX}\| = O_p(\delta_n)$ with $\delta_n := K_n^{-\rho_s} + \tau_n\sqrt{K_n/n}$. These rates hold pointwise for a fixed $y_2$ that is kept implicit in the notation. Similarly, the derivatives can be estimated at the rates $\|\nabla^d\hat s_Y - \nabla^d s_Y\| = O_p(\omega_{d,n}\delta_n)$ and $\|\nabla^d\hat s_{YX} - \nabla^d s_{YX}\| = O_p(\omega_{d,n}\delta_n)$ for $d = 1, 2$, which follows from going through Blundell, Chen, and Kristensen (2007)'s proof and applying Newey (1997)'s Theorem 1 with $d = 1, 2$ instead of $d = 0$.
Step (ii) Consider the estimation error of the Fourier transform σ̂Y and decompose it
as follows:
sup |iζ σ̂Y (ζ) − iζσY (ζ)| = sup |iζ σ̂Y (ζ) − iζ[F ŝY ](ζ)| + sup |iζ[F ŝY ](ζ) − iζσY (ζ)| .
|ζ|≤ζ̄n
|ζ|≤ζ̄n
|ζ|≤ζ̄n
41
Consider each of the two terms separately. First,
Z
iζx2
sup |iζ σ̂Y (ζ) − iζ[F ŝY ](ζ)| = sup iζ [šY (x2 ) − ŝY (x2 )] e dx2 |ζ|≤ζ̄n
|ζ|≤ζ̄n
Z
= sup iζ
[šY (x2 ) − ŝY (x2 )] eiζx2 dx2 |x2 |>x̄n
|ζ|≤ζ̄n
Z
Z
iζx
iζx
[ŝY (x2 ) − sY (x2 )] e 2 dx2 [šY (x2 ) − sY (x2 )] e 2 dx2 + sup iζ
≤ sup iζ
|x2 |>x̄n
|x2 |>x̄n
|ζ|≤ζ̄n
|ζ|≤ζ̄n
Z
Z
iζx
|šY (x2 ) − sY (x2 )| dx2 + sup [∇ŝY (x2 ) − ∇sY (x2 )] e 2 dx2 ≤ ζ̄n
|x2 |>x̄n
|ζ|≤ζ̄n
Z
d=0,j=0,k=1
|∇ŝY (x2 ) − ∇sY (x2 )| dx2 + O(TY,n
)
Z
Z
d=0,j=0,k=1
|∇ŝY (x2 ) − ∇sY (x2 )| dx2 +
|∇ŝY (x2 ) − ∇sY (x2 )| dx2 + O(TY,n
≤
)
≤
|x2 |≤x̄n
|x2 |>x̄n
1
≤
inf |x2 |≤x̄n |fX2 (x2 )|
Z
1/2
2
|∇ŝY (x2 ) − ∇sY (x2 )| fX2 (x2 )dx2
|x2 |≤x̄n
d=0,j=0,k=1
d=1,j=0,k=0
+ O(TY,n
+ TY,n
)
d=0,j=0,k=1
d=1,j=0,k=0
k∇ŝY − ∇sY k + O(TY,n
+ TY,n
)
= f −1/2
n
d=0,j=0,k=1
d=1,j=0,k=0
= Op (f −1/2
δn ω1,n ) + O(TY,n
+ TY,n
).
n
Similarly, the second term is bounded by
Z
iζx2
sup |iζ[F ŝY ](ζ) − iζσY (ζ)| = sup [∇ŝY (x2 ) − ∇sY (x2 )] e dx2 |ζ|≤ζ̄n
|ζ|≤ζ̄n
d=0,j=0,k=1
d=1,j=0,k=0
δn ω1,n ) + O(TY,n
+ TY,n
)
= Op (f −1/2
n
d,j=0,k
so that, in conclusion, letting TY,n := maxd=0,1 maxk=0,1 TY,n
,
n := sup |σ̂Y (ζ) − σY (ζ)| = Op (f n−1/2 δn ω1,n ζ̄n−1 ) + O(TY,n ).
(42)
|ζ|≤ζ̄n
The next estimation error to bound is that of the numerator in the expression of φ(ζ).
To that end, let σ∆ (ζ) := iσY X (ζ) − ∇σY (ζ) and σ̂∆ (ζ) := iσ̂Y X (ζ) − ∇σ̂Y (ζ). Consider
sup ζ 2 σ̂∆ (ζ) − ζ 2 σ∆ (ζ) = sup ζ 2 σ̂∆ (ζ) − ζ 2 [F ŝ∆ ](ζ) + sup ζ 2 [F ŝ∆ ](ζ) − ζ 2 σ∆ (ζ) ,
|ζ|≤ζ̄n
|ζ|≤ζ̄n
|ζ|≤ζ̄n
42
where ŝ∆ (x2 ) := šY X (x2 ) − ix2 šY (x2 ). The first term can be bounded as follows:
The first term can be bounded as follows:

$$
\begin{aligned}
&\sup_{|\zeta|\le\bar\zeta_n}\big|\zeta^2\hat\sigma_\Delta(\zeta)-\zeta^2[F\hat s_\Delta](\zeta)\big|\\
&\quad=\sup_{|\zeta|\le\bar\zeta_n}\Big|\zeta^2\int\left[i\check s_{YX}(x_2)-ix_2\check s_Y(x_2)-\big(i\hat s_{YX}(x_2)-ix_2\hat s_Y(x_2)\big)\right]e^{i\zeta x_2}\,dx_2\Big|\\
&\quad=\sup_{|\zeta|\le\bar\zeta_n}\Big|\zeta^2\int_{|x_2|>\bar x_n}\left[i\check s_{YX}(x_2)-ix_2\check s_Y(x_2)-\big(i\hat s_{YX}(x_2)-ix_2\hat s_Y(x_2)\big)\right]e^{i\zeta x_2}\,dx_2\Big|\\
&\quad\le\sup_{|\zeta|\le\bar\zeta_n}\Big|\zeta^2\int_{|x_2|>\bar x_n}\left[i\check s_{YX}(x_2)-ix_2\check s_Y(x_2)-\big(is_{YX}(x_2)-ix_2 s_Y(x_2)\big)\right]e^{i\zeta x_2}\,dx_2\Big|\\
&\qquad+\sup_{|\zeta|\le\bar\zeta_n}\Big|\zeta^2\int_{|x_2|>\bar x_n}\left[i\hat s_{YX}(x_2)-ix_2\hat s_Y(x_2)-\big(is_{YX}(x_2)-ix_2 s_Y(x_2)\big)\right]e^{i\zeta x_2}\,dx_2\Big|\\
&\quad\le\bar\zeta_n^2\int_{|x_2|>\bar x_n}\left|\check s_{YX}(x_2)-s_{YX}(x_2)\right|dx_2
+\bar\zeta_n^2\int_{|x_2|>\bar x_n}|x_2|\left|\check s_Y(x_2)-s_Y(x_2)\right|dx_2\\
&\qquad+\sup_{|\zeta|\le\bar\zeta_n}\Big|\int\left[\nabla^2\hat s_{YX}(x_2)-\nabla^2 s_{YX}(x_2)\right]e^{i\zeta x_2}\,dx_2\Big|
+\sup_{|\zeta|\le\bar\zeta_n}\Big|\int\left[2\nabla\hat s_Y(x_2)-2\nabla s_Y(x_2)\right]e^{i\zeta x_2}\,dx_2\Big|\\
&\qquad+\sup_{|\zeta|\le\bar\zeta_n}\Big|\int\left[x_2\nabla^2\hat s_Y(x_2)-x_2\nabla^2 s_Y(x_2)\right]e^{i\zeta x_2}\,dx_2\Big|.
\end{aligned}
$$

The two terms in the third-to-last line are $O\big(T_{YX,n}^{d=0,j=0,k=2}+T_{Y,n}^{d=0,j=1,k=2}\big)$ by definition. Splitting the three terms in the last two lines into integrals over $\{|x_2|\le\bar x_n\}$ and over $\{|x_2|>\bar x_n\}$ yields

$$
\begin{aligned}
\sup_{|\zeta|\le\bar\zeta_n}\Big|\int\left[\nabla^2\hat s_{YX}(x_2)-\nabla^2 s_{YX}(x_2)\right]e^{i\zeta x_2}\,dx_2\Big|
&\le f_n^{-1/2}\,\|\nabla^2\hat s_{YX}-\nabla^2 s_{YX}\|+O\big(T_{YX,n}^{d=2,j=0,k=0}\big)\\
\sup_{|\zeta|\le\bar\zeta_n}\Big|\int\left[2\nabla\hat s_Y(x_2)-2\nabla s_Y(x_2)\right]e^{i\zeta x_2}\,dx_2\Big|
&\le 2f_n^{-1/2}\,\|\nabla\hat s_Y-\nabla s_Y\|+O\big(T_{Y,n}^{d=1,j=0,k=0}\big)\\
\sup_{|\zeta|\le\bar\zeta_n}\Big|\int\left[x_2\nabla^2\hat s_Y(x_2)-x_2\nabla^2 s_Y(x_2)\right]e^{i\zeta x_2}\,dx_2\Big|
&\le f_n^{-1/2}\,\bar x_n\,\|\nabla^2\hat s_Y-\nabla^2 s_Y\|+O\big(T_{Y,n}^{d=2,j=1,k=0}\big),
\end{aligned}
$$
so, by collecting terms,

$$
\begin{aligned}
\sup_{|\zeta|\le\bar\zeta_n}\big|\zeta^2\hat\sigma_\Delta(\zeta)-\zeta^2[F\hat s_\Delta](\zeta)\big|
&=O_p\big(f_n^{-1/2}(\omega_{2,n}+\omega_{1,n}+\omega_{2,n}\bar x_n)\delta_n\big)\\
&\qquad+O\big(T_{YX,n}^{d=0,j=0,k=2}+T_{Y,n}^{d=0,j=1,k=2}+T_{YX,n}^{d=2,j=0,k=0}+T_{Y,n}^{d=1,j=0,k=0}+T_{Y,n}^{d=2,j=1,k=0}\big)\\
&=O_p\big(f_n^{-1/2}\omega_{2,n}\bar x_n\delta_n\big)+O\big(T_{YX,n}^{d=0,j=0,k=2}+T_{Y,n}^{d=0,j=1,k=2}\big)\\
&\qquad+O\big(T_{YX,n}^{d=2,j=0,k=0}+T_{Y,n}^{d=1,j=0,k=0}+T_{Y,n}^{d=2,j=1,k=0}\big).
\end{aligned}
$$
Similarly,

$$
\sup_{|\zeta|\le\bar\zeta_n}\big|\zeta^2[F\hat s_\Delta](\zeta)-\zeta^2\sigma_\Delta(\zeta)\big|
=O_p\big(f_n^{-1/2}\omega_{2,n}\bar x_n\delta_n\big)+O\big(T_{YX,n}^{d=2,j=0,k=0}+T_{Y,n}^{d=1,j=0,k=0}+T_{Y,n}^{d=2,j=1,k=0}\big)
$$

and, thus, letting $T_{\Delta,n}:=\max_{k=0,2}\max_{j=0,1}\max_{d=1,2}\big\{T_{Y,n}^{d,j,k},T_{YX,n}^{d,j=0,k}\big\}$, we have

$$
\sup_{|\zeta|\le\bar\zeta_n}\left|\hat\sigma_\Delta(\zeta)-\sigma_\Delta(\zeta)\right|
=O_p\big(f_n^{-1/2}\omega_{2,n}\bar x_n\delta_n\bar\zeta_n^{-2}\big)+O(T_{\Delta,n}).
\tag{43}
$$
Step (iii) By Assumption C 4 and $\bar r_n^{-1}=O_p(1)$, we have $f_n^{-1/2}\delta_n\omega_{1,n}\bar\zeta_n^{-1}\sigma_n^{-1}\to 0$ and $T_{Y,n}\sigma_n^{-1}\to 0$. Therefore, Lemma 3 of Schennach (2008) can be applied and, together with (42) and (43), yields

$$
\begin{aligned}
\bar\mu_n&:=\sup_{|\zeta|\le\bar\zeta_n}\left|\frac{\hat\sigma_\Delta(\zeta)}{\hat\sigma_Y(\zeta)}-\frac{\sigma_\Delta(\zeta)}{\sigma_Y(\zeta)}\right|\\
&=O_p\Big(\Big\{\sup_{|\zeta|\le\bar\zeta_n}\left|\hat\sigma_\Delta(\zeta)-\sigma_\Delta(\zeta)\right|\Big\}\sigma_n^{-1}\Big)
+O_p\Big(\bar r_n\Big\{\sup_{|\zeta|\le\bar\zeta_n}\left|\hat\sigma_Y(\zeta)-\sigma_Y(\zeta)\right|\Big\}\sigma_n^{-1}\Big)\\
&=O_p\Big(\big\{f_n^{-1/2}\omega_{2,n}\bar x_n\delta_n\bar\zeta_n^{-2}+T_{\Delta,n}\big\}\sigma_n^{-1}\Big)
+O_p\Big(\bar r_n\big\{f_n^{-1/2}\delta_n\omega_{1,n}\bar\zeta_n^{-1}+T_{Y,n}\big\}\sigma_n^{-1}\Big)\\
&=O_p\Big(f_n^{-1/2}\big(\omega_{2,n}\bar x_n\delta_n\bar\zeta_n^{-2}+\bar r_n\delta_n\omega_{1,n}\bar\zeta_n^{-1}\big)\sigma_n^{-1}\Big)
+O_p\Big((\bar r_nT_{Y,n}+T_{\Delta,n})\sigma_n^{-1}\Big).
\end{aligned}
$$

The last two equations use the convergence rates from step (ii).
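Lemma 3 of Schennach (2008) is what controls this ratio; its statement is not reproduced here, but the elementary decomposition behind bounds of this type is $\hat a/\hat b-a/b=(\hat a-a)/\hat b+a(b-\hat b)/(b\hat b)$, which separates the numerator and denominator errors exactly as in the two $O_p$ terms above. A numerical check of the resulting bound with arbitrary (hypothetical) values:

```python
import random

def ratio_error_bound(a_hat, a, b_hat, b):
    # from a_hat/b_hat - a/b = (a_hat - a)/b_hat + a (b - b_hat)/(b b_hat)
    return abs(a_hat - a) / abs(b_hat) + abs(a) * abs(b_hat - b) / (abs(b) * abs(b_hat))

random.seed(0)
checked = 0
while checked < 1000:
    a, b = random.gauss(0, 1), random.gauss(0, 1)       # "true" numerator and denominator
    a_hat, b_hat = a + random.gauss(0, 0.01), b + random.gauss(0, 0.01)
    if abs(b) < 0.1 or abs(b_hat) < 0.05:               # keep denominators bounded away from 0
        continue
    assert abs(a_hat / b_hat - a / b) <= ratio_error_bound(a_hat, a, b_hat, b) + 1e-12
    checked += 1
```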
Step (iv) This step is inspired in part by the proof of Theorem 2 in Schennach (2008). Decompose the estimation error into three parts:

$$
\begin{aligned}
2\pi\big|\hat F_{Y_2|X_2^*}(y_2|x_2^*)-F_{Y_2|X_2^*}(y_2|x_2^*)\big|
&=\Big|\int_{|\zeta|\le\bar\zeta_n}\hat\sigma_Y(\zeta,y_2)\hat\phi(\zeta,y_2)e^{-i\zeta x_2^*}\,d\zeta
-\int\sigma_Y(\zeta,y_2)\phi(\zeta,y_2)e^{-i\zeta x_2^*}\,d\zeta\Big|\\
&\le R_1+R_2+R_3,
\end{aligned}
$$

where

$$
\begin{aligned}
R_1&:=\Big|\int_{|\zeta|\le\bar\zeta_n}\hat\sigma_Y(\zeta,y_2)\big[\hat\phi(\zeta,y_2)-\phi(\zeta,y_2)\big]e^{-i\zeta x_2^*}\,d\zeta\Big|\\
R_2&:=\Big|\int_{|\zeta|\le\bar\zeta_n}\big[\hat\sigma_Y(\zeta,y_2)-\sigma_Y(\zeta,y_2)\big]\phi(\zeta,y_2)e^{-i\zeta x_2^*}\,d\zeta\Big|\\
R_3&:=\Big|\int_{|\zeta|>\bar\zeta_n}\sigma_Y(\zeta,y_2)\phi(\zeta,y_2)e^{-i\zeta x_2^*}\,d\zeta\Big|.
\end{aligned}
$$
First, notice that σY(ζ, y2) = γ(ζ, y2)/φ(ζ) diverges not only at the origin but potentially also as |ζ| → ±∞, when φ vanishes faster than γ. For this reason, I split the first remainder R1 further into R1a and R1b, which bound the error for small |ζ| around the origin and for |ζ| up to ζ̄n, respectively. Formally, fix some constant ζ0 ∈ (0, ζ̄n) and write R1 ≤ R1a + R1b with

$$
\begin{aligned}
R_{1a}&:=\Big|\int_{|\zeta|\le\zeta_0}\hat\sigma_Y(\zeta,y_2)\big[\hat\phi(\zeta,y_2)-\phi(\zeta,y_2)\big]e^{-i\zeta x_2^*}\,d\zeta\Big|,\\
R_{1b}&:=\Big|\int_{\zeta_0<|\zeta|\le\bar\zeta_n}\hat\sigma_Y(\zeta,y_2)\big[\hat\phi(\zeta,y_2)-\phi(\zeta,y_2)\big]e^{-i\zeta x_2^*}\,d\zeta\Big|.
\end{aligned}
$$
Consider the first remainder. On the interval (0, ζ0), Lemma 4(ii) bounds |σY(ζ, y2)| from above by σ̄ max{|ζ|⁻¹, 1} for some constant σ̄. Using this fact, we have

$$
\begin{aligned}
R_{1a}&=\Big|\int_{|\zeta|\le\zeta_0}\hat\sigma_Y(\zeta,y_2)\Big[\exp\Big(\int_0^\zeta\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}\,dz\Big)-\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)\Big]e^{-i\zeta x_2^*}\,d\zeta\Big|\\
&\le\int_{|\zeta|\le\zeta_0}\left|\hat\sigma_Y(\zeta,y_2)\right|\,\Big|\exp\Big(\int_0^\zeta\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}\,dz\Big)-\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)\Big|\,d\zeta\\
&=\int_{|\zeta|\le\zeta_0}\left|\sigma_Y(\zeta,y_2)+\varepsilon_n\right|\,\Big|\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)\Big|
\,\Big|\exp\Big(\int_0^\zeta\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}\,dz-\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)-1\Big|\,d\zeta\\
&=C\int_{|\zeta|\le\zeta_0}\left|\sigma_Y(\zeta,y_2)+\varepsilon_n\right|\,\Big|\exp\Big(\int_0^\zeta\Big[\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}-\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\Big]\,dz\Big)-1\Big|\,d\zeta,
\end{aligned}
$$

where $\varepsilon_n=O_p(f_n^{-1/2}\delta_n\omega_{1,n}\bar\zeta_n^{-1})+O(T_{Y,n})$, which follows from the use of the rate in (42). Step (iii) gives a bound for the difference of ratios, so that

$$
\begin{aligned}
R_{1a}&\le C\int_{|\zeta|\le\zeta_0}\big(\bar\sigma\max\{|\zeta|^{-1},1\}+\varepsilon_n\big)\,\big|e^{\bar\mu_n\zeta}-1\big|\,d\zeta\\
&\le C\int_{|\zeta|\le\zeta_0}\big(\bar\sigma\max\{|\zeta|^{-1},1\}+\varepsilon_n\big)\,|\bar\mu_n\zeta|\,d\zeta\\
&\le C\int_{|\zeta|\le\zeta_0}\big(\bar\sigma\max\{1,|\zeta|\}+|\zeta|\varepsilon_n\big)\,\bar\mu_n\,d\zeta\\
&\le 2C\big(\bar\sigma\max\{1,\zeta_0\}+\zeta_0\varepsilon_n\big)\,\zeta_0\,\bar\mu_n\\
&=O_p(\varepsilon_n\bar\mu_n+\bar\mu_n),
\end{aligned}
$$

where the second inequality uses the series expansion of the exponential function. The remainder R1b is treated in a similar way:
$$
\begin{aligned}
R_{1b}&=\Big|\int_{\zeta_0<|\zeta|\le\bar\zeta_n}\hat\sigma_Y(\zeta,y_2)\Big[\exp\Big(\int_0^\zeta\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}\,dz\Big)-\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)\Big]e^{-i\zeta x_2^*}\,d\zeta\Big|\\
&\le\int_{\zeta_0<|\zeta|\le\bar\zeta_n}\left|\hat\sigma_Y(\zeta,y_2)\right|\,\Big|\exp\Big(\int_0^\zeta\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}\,dz\Big)-\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)\Big|\,d\zeta\\
&=\int_{\zeta_0<|\zeta|\le\bar\zeta_n}\left|\sigma_Y(\zeta,y_2)+\varepsilon_n\right|\,|\phi(\zeta)|\,\Big|\exp\Big(\int_0^\zeta\Big[\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}-\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\Big]\,dz\Big)-1\Big|\,d\zeta\\
&\le\big(O(1)+\varepsilon_n\big)\Big(\sup_{\zeta_0<|\zeta|\le\bar\zeta_n}|\phi(\zeta)|\Big)
\int_{\zeta_0<|\zeta|\le\bar\zeta_n}\Big|\exp\Big(\int_0^\zeta\Big[\frac{\hat\sigma_\Delta(z,y_2)}{\hat\sigma_Y(z)}-\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\Big]\,dz\Big)-1\Big|\,d\zeta\\
&\le\big(O(1)+\varepsilon_n\big)\Big(\sup_{\zeta_0<|\zeta|\le\bar\zeta_n}|\phi(\zeta)|\Big)\int_{\zeta_0<|\zeta|\le\bar\zeta_n}|\bar\mu_n\zeta|\,d\zeta\\
&\le\big(O(1)+\varepsilon_n\big)\Big(\sup_{\zeta_0<|\zeta|\le\bar\zeta_n}|\phi(\zeta)|\Big)\big(2\bar\mu_n\bar\zeta_n^2+C''\bar\mu_n\big)\\
&\le C'''\bar\mu_n\bar\zeta_n^2+2\varepsilon_n\bar\mu_n\bar\zeta_n^2+C^{(4)}\bar\mu_n+C^{(5)}\varepsilon_n\bar\mu_n,
\end{aligned}
$$

where the third inequality uses step (iii) as before. The second remainder can be bounded as follows:
$$
\begin{aligned}
R_2&=\Big|\int_{|\zeta|\le\bar\zeta_n}\big[\hat\sigma_Y(\zeta,y_2)-\sigma_Y(\zeta,y_2)\big]\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)e^{-i\zeta x_2^*}\,d\zeta\Big|\\
&\le\int_{|\zeta|\le\bar\zeta_n}\Big|\big[\hat\sigma_Y(\zeta,y_2)-\sigma_Y(\zeta,y_2)\big]\exp\Big(\int_0^\zeta\frac{\sigma_\Delta(z,y_2)}{\sigma_Y(z)}\,dz\Big)\Big|\,d\zeta\\
&\le\Big[\sup_{|\zeta|\le\bar\zeta_n}|\phi(\zeta)|\Big]\int_{|\zeta|\le\bar\zeta_n}\left|\hat\sigma_Y(\zeta,y_2)-\sigma_Y(\zeta,y_2)\right|\,d\zeta\\
&\le 2\bar\zeta_n\varepsilon_n,
\end{aligned}
$$
where the last inequality follows from step (ii). Finally, the last term R3 represents a tail-trimming error:

$$
R_3=\Big|\int_{|\zeta|>\bar\zeta_n}\gamma(\zeta,y_2)e^{-i\zeta x_2^*}\,d\zeta\Big|
\le\int_{|\zeta|>\bar\zeta_n}|\gamma(\zeta,y_2)|\,d\zeta=O(T_{\gamma,n}).
$$

Now, combine the three remainders to get

$$
\begin{aligned}
2\pi\sup_{x_2^*\in\mathbb R}\big|\hat F_{Y_2|X_2^*}(y_2|x_2^*)-F_{Y_2|X_2^*}(y_2|x_2^*)\big|
&=O_p\big(\bar\mu_n+\varepsilon_n\bar\mu_n+\bar\mu_n\bar\zeta_n^2+\varepsilon_n\bar\mu_n\bar\zeta_n^2+\bar\mu_n+\varepsilon_n\bar\mu_n\big)
+O_p\big(\bar\zeta_n\varepsilon_n\big)+O(T_{\gamma,n})\\
&=O_p\big(\bar\mu_n\bar\zeta_n^2+\bar\zeta_n\varepsilon_n\big)+O(T_{\gamma,n}).
\end{aligned}
$$
The first equality holds for the following reason. The conditions $\bar r_n\delta_n\omega_{1,n}/(f_n^{1/2}\bar\zeta_n\sigma_n)\to 0$ and $\bar r_nT_{Y,n}/\sigma_n\to 0$ in Assumption C 4 imply $\bar\mu_n\to 0$. In addition, $\bar r_n^{-1}=O(1)$ and $\sigma_n=O(1)$, so $\delta_n\omega_{1,n}/(f_n^{1/2}\bar\zeta_n)\to 0$ and $T_{Y,n}\to 0$, leading to $\varepsilon_n\to 0$. Next, substituting in the rates of $\bar\mu_n$ and $\varepsilon_n$ yields
in the rates of µ̄n and n yields
∗
∗ ∗
∗
2π sup F̂Y2 |X2 (y2 |x2 ) − FY2 |X2 (y2 |x2 )
x∗2 ∈R
h
i
−1
−2
−1
−1
= Op ζ̄n2 f −1/2
ω
x̄
δ
ζ̄
+
r̄
δ
ω
ζ̄
σ
+
(r̄
T
+
T
)σ
2,n n n n
n n 1,n n
n Y,n
∆,n
n
n
n
i
h
δn ω1,n ζ̄n−1 + TY,n + O(Tγ,n )
+ Op ζ̄n f −1/2
n
−1
= Op f −1/2
+
T
(44)
ω
x̄
δ
+
r̄
δ
ω
ζ̄
σ
n ,
2,n n n
n n 1,n n
n
n
which gives the convergence rate pointwise in y2. The expression in (44) is also an upper bound on the convergence rate of the constrained estimator F̌Y2|X2*(y2|x2*). Since F̌Y2|X2*(y2|x2*) takes values only in [0, 1], the convergence rate holds, in fact, uniformly over y2 ∈ Y2. This conclusion follows by essentially the same argument as the proof of the Glivenko-Cantelli Theorem; see Theorem 19.1 of van der Vaart (1998), for example. This completes the proof.
Q.E.D.
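The uniformity argument above can be made concrete numerically: monotonicity of the estimator and continuity of the limit cdf convert a bound at finitely many grid points, plus the oscillation of the limit between them, into a uniform bound, exactly as in the Glivenko-Cantelli proof. A self-contained illustration with an empirical cdf and a logistic limit standing in for the estimator and F_{Y2|X2*} (all choices hypothetical):

```python
import bisect, math, random

random.seed(1)
F = lambda y: 1.0 / (1.0 + math.exp(-y))                       # continuous limit cdf (logistic stand-in)
sample = sorted(math.log(u / (1.0 - u))
                for u in (random.random() for _ in range(5000)))
Fhat = lambda y: bisect.bisect_right(sample, y) / len(sample)  # empirical cdf: monotone, values in [0, 1]

# With knots t_1 < ... < t_49 chosen as quantiles of F, monotonicity gives
#   sup_y |Fhat(y) - F(y)| <= max_j |Fhat(t_j) - F(t_j)| + max_j [F(t_{j+1}) - F(t_j)]
# (conventions t_0 = -inf, t_50 = +inf), i.e. a pointwise bound becomes uniform.
knots = [math.log(p / (1.0 - p)) for p in (j / 50.0 for j in range(1, 50))]
sup_knots = max(abs(Fhat(t) - F(t)) for t in knots)
Fk = [F(t) for t in knots]
osc = max(b - a for a, b in zip([0.0] + Fk, Fk + [1.0]))
sup_fine = max(abs(Fhat(-10.0 + i * 0.001) - F(-10.0 + i * 0.001)) for i in range(20001))
assert sup_fine <= sup_knots + osc + 1e-12
```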
Proof of Corollary 3 First, I establish the order of the tail-trimming term Tn . To that
end, let γn := supy2 ∈Y |γ(ζ̄n , y2 )| and sn := supy2 ∈Y |∇sY (x̄n , y2 )|.
Consider the first component of Tn. Analogously to the proof of Lemma 1 in Schennach (2008), we have $\lim_{x_2\to-\infty}s_Y(x_2,y_2)=c_Y^-(y_2)$. Therefore, using the definition of ŝY(x2, y2) when x2 < −x̄n and the Fundamental Theorem of Calculus, one can write

$$
\begin{aligned}
\int_{-\infty}^{-\bar x_n}|x_2|\left|c_Y^-(y_2)-s_Y(x_2,y_2)\right|dx_2
&=\int_{-\infty}^{-\bar x_n}|x_2|\Big|\int_{-\infty}^{x_2}\frac{\partial s_Y(u,y_2)}{\partial x_2}\,du\Big|\,dx_2\\
&\le C_s\int_{-\infty}^{-\bar x_n}|x_2|\int_{-\infty}^{x_2}(1+|u|)^{\gamma_s}\exp\big(-\alpha_s|u|^{\beta_s}\big)\,du\,dx_2\\
&=C_s\int_{\bar x_n}^{\infty}x_2\int_{x_2}^{\infty}(1+u)^{\gamma_s}\exp\big(-\alpha_s u^{\beta_s}\big)\,du\,dx_2\\
&=C_s\int_{\bar x_n}^{\infty}x_2\,O\big\{(1+x_2)^{\gamma_s-\beta_s+1}\exp\big(-\alpha_s x_2^{\beta_s}\big)\big\}\,dx_2\\
&=O\big((1+\bar x_n)^{\gamma_s-2\beta_s+3}\exp\big(-\alpha_s\bar x_n^{\beta_s}\big)\big)\\
&=O\big(\bar x_n^{-2\beta_s+3}s_n\big)
\end{aligned}
\tag{45}
$$

by Assumption R 3.2 and repeated application of Lemma 4.2 in Li and Vuong (1998). In the same fashion, one can show that $\int_{\bar x_n}^{\infty}|x_2|\,|c_Y^+(y_2)-s_Y(x_2,y_2)|\,dx_2=O(\bar x_n^{-2\beta_s+3}s_n)$ as well. The second term of the remainder Tn can be bounded as

$$
\begin{aligned}
\int_{-\infty}^{-\bar x_n}\left|x_2c_Y^-(y_2)-s_{YX}(x_2,y_2)\right|dx_2
&=\int_{-\infty}^{-\bar x_n}|x_2|\,\Big|c_Y^-(y_2)-\frac{s_{YX}(x_2,y_2)}{x_2}\Big|\,dx_2\\
&=\int_{-\infty}^{-\bar x_n}|x_2|\left|c_Y^-(y_2)-s_Y(x_2,y_2)\right|dx_2\\
&=O\big(\bar x_n^{-2\beta_s+3}s_n\big),
\end{aligned}
\tag{46}
$$

where the last equality uses (45). The asymptotic equality in (46) can be justified as follows. From the two equations (27) and (28), we have that

$$
E[s_{YX}(X_2,y_2)|X_2^*=x_2^*]=x_2^*\,E[s_Y(X_2,y_2)|X_2^*=x_2^*]=E[X_2s_Y(X_2,y_2)|X_2^*=x_2^*]
$$

for all y2 ∈ Y2. Since C is injective, the distribution of X2 given X2* is complete; that is, $E[s_{YX}(X_2,y_2)-X_2s_Y(X_2,y_2)|X_2^*=x_2^*]=0$ implies $s_{YX}(X_2,y_2)=X_2s_Y(X_2,y_2)$ $P_{X_2}$-almost surely. Next, using again Lemma 4.2 in Li and Vuong (1998), it is easy to see that the third component of Tn, $\int_{|\zeta|>\bar\zeta_n}|\gamma(\zeta,y_2)|\,d\zeta$, is of order $O(\bar\zeta_n^{-\beta_g+1}\gamma_n)$. In conclusion, we have

$$
T_n=O\big(\bar x_n^{-2\beta_s+3}s_n+\bar\zeta_n^{-\beta_g+1}\gamma_n\big).
$$
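Lemma 4.2 of Li and Vuong (1998) enters only through tail bounds of the form $\int_x^\infty(1+u)^\gamma e^{-\alpha u^\beta}du=O\big((1+x)^{\gamma-\beta+1}e^{-\alpha x^\beta}\big)$. A small numerical check that the ratio of the integral to this claimed order stays bounded (illustrative parameter values only):

```python
import math

def tail_integral(xbar, gamma, alpha, beta, width=40.0, n=100_000):
    # trapezoidal rule for \int_{xbar}^{xbar+width} (1+u)^gamma e^{-alpha u^beta} du;
    # the integrand is negligible beyond xbar + width for these parameters
    f = lambda u: (1.0 + u) ** gamma * math.exp(-alpha * u ** beta)
    h = width / n
    s = 0.5 * (f(xbar) + f(xbar + width))
    for i in range(1, n):
        s += f(xbar + i * h)
    return s * h

gamma, alpha, beta = 2.0, 1.0, 1.0   # illustrative values satisfying the decay condition
ratios = []
for xbar in (2.0, 4.0, 6.0, 8.0):
    bound = (1.0 + xbar) ** (gamma - beta + 1.0) * math.exp(-alpha * xbar ** beta)
    ratios.append(tail_integral(xbar, gamma, alpha, beta) / bound)
# ratio of the tail integral to the claimed order stays bounded as xbar grows
assert max(ratios) < 2.0 and ratios == sorted(ratios, reverse=True)
```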
Now consider

$$
\begin{aligned}
\beta_n&:=\bar\zeta_n^2\,\bar r_n\,f_n^{-1/2}\Big[\delta_n+\bar x_n^{-2\beta_s+3}s_n+\bar\zeta_n^{-\beta_g+1}\gamma_n\Big]\sigma_n^{-1}\\
&=O\Big(\bar\zeta_n^{2+\gamma_r-\gamma_g}e^{\alpha_g\bar\zeta_n^{\beta_g}}\bar x_n^{-\gamma_{f_X}/2}e^{\alpha_{f_X}\bar x_n^{\beta_{f_X}}/2}\delta_n
+\bar\zeta_n^{2+\gamma_r-\gamma_g}e^{\alpha_g\bar\zeta_n^{\beta_g}}\bar x_n^{\gamma_s-2\beta_s+3}e^{-\alpha_s\bar x_n^{\beta_s}}
+\bar\zeta_n^{2+\gamma_r-\gamma_g}e^{\alpha_g\bar\zeta_n^{\beta_g}}\bar\zeta_n^{\gamma_g-\beta_g+1}e^{-\alpha_g\bar\zeta_n^{\beta_g}}\Big)\\
&=O\Big(\bar\zeta_n^{2+\gamma_r-\gamma_g}\bar x_n^{-\gamma_{f_X}/2}e^{\alpha_g\bar\zeta_n^{\beta_g}+\alpha_{f_X}\bar x_n^{\beta_{f_X}}/2}\delta_n
+\bar\zeta_n^{2+\gamma_r-\gamma_g}\bar x_n^{\gamma_s-2\beta_s+3}e^{\alpha_g\bar\zeta_n^{\beta_g}-\alpha_s\bar x_n^{\beta_s}}
+\bar\zeta_n^{\gamma_r-\beta_g+3}\Big).
\end{aligned}
\tag{47}
$$

To balance the bias and variance terms in $K_n^{-\rho_s}+K_n^{\omega}\sqrt{K_n/n}$, we select $K_n=O(n^{1/[2(\rho_s+\omega)+1]})$ and get $\delta_n=O(n^{-\rho_s/[2(\rho_s+\omega)+1]})$.
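The choice of Kn can be verified by direct computation: equating $K_n^{-\rho_s}$ with $K_n^{\omega}\sqrt{K_n/n}$ gives $K_n^{\rho_s+\omega+1/2}=n^{1/2}$, hence $K_n=n^{1/[2(\rho_s+\omega)+1]}$ and $\delta_n=n^{-\rho_s/[2(\rho_s+\omega)+1]}$. A numerical confirmation (ρs and ω are arbitrary illustrative values):

```python
import math

# bias K^{-rho_s} vs. standard deviation K^{omega} sqrt(K/n);
# claim: Kn = n^{1/[2(rho_s+omega)+1]} equates the two
rho_s, omega = 2.0, 1.5          # illustrative smoothness/ill-posedness values
for n in (10 ** 4, 10 ** 6, 10 ** 8):
    Kn = n ** (1.0 / (2.0 * (rho_s + omega) + 1.0))
    bias = Kn ** (-rho_s)
    sd = Kn ** omega * math.sqrt(Kn / n)
    assert abs(bias / sd - 1.0) < 1e-9
    # the balanced value is exactly the stated rate n^{-rho_s/[2(rho_s+omega)+1]}
    assert abs(bias - n ** (-rho_s / (2.0 * (rho_s + omega) + 1.0))) < 1e-12
```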
The remainder of the proof consists in merely substituting in the given expressions
for x̄n and ζ̄n , and checking that the rates are op (1) under the stated assumptions on the
various parameters.
Q.E.D.
Proof of Corollary 4 In the severely ill-posed case, we select Kn = log(n) to balance the bias and variance terms in $\delta_n=K_n^{-\rho_s}+\exp\{K_n\}\sqrt{K_n/n}$. Similarly as in the derivation of (47),

$$
\beta_n=O\Big(\bar\zeta_n^{2+\gamma_r-\gamma_g}\bar x_n^{-\gamma_{f_X}/2}e^{-\alpha_r\bar\zeta_n^{\beta_r}+\alpha_g\bar\zeta_n^{\beta_g}+\alpha_{f_X}\bar x_n^{\beta_{f_X}}/2}\delta_n
+\bar\zeta_n^{2+\gamma_r-\gamma_g}\bar x_n^{\gamma_s-2\beta_s+3}e^{-\alpha_r\bar\zeta_n^{\beta_r}+\alpha_g\bar\zeta_n^{\beta_g}-\alpha_s\bar x_n^{\beta_s}}
+\bar\zeta_n^{\gamma_r-\beta_g+3}e^{-\alpha_r\bar\zeta_n^{\beta_r}}\Big).
\tag{48}
$$
As in the proof of the previous corollary, the remainder of this proof consists of merely
substituting in the given expressions for x̄n and ζ̄n , and checking that the rates are op (1)
under the stated assumptions on the various parameters.
Q.E.D.
Proof of Theorem 4 For the quantile estimation case, see the proof of Theorem 3.1 in Ould-Saïd, Yahia, and Necir (2009). If the conditional mean restriction holds, then integration by parts and the fact that the rates in Corollaries 3 or 4 are uniform over Y2 yield the desired result.
Q.E.D.
References
Aasness, J., E. Biørn, and T. Skjerpen (1993): “Engel Functions, Panel Data, and
Latent Variables,” Econometrica, 61(6), 1395–1422.
Andrews, D. W. K. (1995): “Nonparametric Kernel Estimation for Semiparametric
Models,” Econometric Theory, 11(3), 560–596.
(2011): “Examples of L2 -Complete and Boundedly-Complete Distributions,”
Discussion Paper 1801, Cowles Foundation, Yale University.
Bertrand, M., and A. Schoar (2003): “Managing with Style: The Effect of Managers
on Firm Policies,” The Quarterly Journal of Economics, 118(4), 1169–1208.
Biørn, E. (2000): “Panel Data with Measurement Errors: Instrumental Variables and GMM Procedures Combining Levels and Differences,” Econometric Reviews, 19(4), 391–424.
Blundell, R., X. Chen, and D. Kristensen (2007): “Semi-Nonparametric IV Estimation of Shape-Invariant Engel Curves,” Econometrica, 75(6), 1613–1669.
Bound, J., C. Brown, and N. Mathiowetz (2001): “Measurement Error in Survey
Data,” in Handbook of Econometrics, ed. by J. J. Heckman, and E. E. Leamer, vol. V,
pp. 3705–3843. Elsevier Science B.V.
Bound, J., and A. B. Krueger (1991): “The Extent of Measurement Error in Longitudinal Earnings Data: Do Two Wrongs Make a Right?,” Journal of Labor Economics,
9(1), 1–24.
Buonaccorsi, J., E. Demidenko, and T. Tosteson (2000): “Estimation in Longitudinal Random Effects Models with Measurement Error,” Statistica Sinica, 10, 885–903.
Canay, I. A., A. Santos, and A. M. Shaikh (2011): “On the Testability of Identification in Some Nonparametric Models with Endogeneity,” Discussion paper, University
of Chicago.
Card, D. (1996): “The Effect of Unions on the Structure of Wages: A Longitudinal
Analysis,” Econometrica, 64(4), 957–979.
Carrasco, M., and J.-P. Florens (2011): “A Spectral Method for Deconvolving a Density,” Econometric Theory, 27(Special Issue 03), 546–581.
Carrasco, M., J.-P. Florens, and E. Renault (2007): “Linear Inverse Problems in Structural Econometrics: Estimation Based on Spectral Decomposition and Regularization,” in Handbook of Econometrics, ed. by J. J. Heckman, and E. E. Leamer, vol. VI, pp. 5633–5751. Elsevier Science B.V.
Carroll, R. J., A. C. M. Rooij, and F. H. Ruymgaart (1991): “Theoretical
Aspects of Ill-posed Problems in Statistics,” Acta Applicandae Mathematicae, 24(2),
113–140.
Carroll, R. J., D. Ruppert, L. A. Stefanski, and C. M. Crainiceanu (2006):
Measurement Error in Nonlinear Models: A Modern Perspective. Chapman & Hall,
New York.
Chen, X., H. Hong, and D. Nekipelov (2008): “Nonlinear Models of Measurement
Errors,” Journal of Economic Literature, forthcoming.
Chen, X., and M. Reiß (2011): “On Rate Optimality for Ill-Posed Inverse Problems
in Econometrics,” Econometric Theory, 27(Special Issue 03), 497–521.
Chernozhukov, V., I. Fernández-Val, and A. Galichon (2010): “Quantile and
Probability Curves Without Crossing,” Econometrica, 78(3), 1093–1125.
Chesher, A. (1991): “The Effect of Measurement Error,” Biometrika, 78(3), 451–462.
Cragg, J. G., and S. G. Donald (1993): “Testing Identifiability and Specification in
Instrumental Variable Models,” Econometric Theory, 9(2), 222–240.
Cunha, F., J. J. Heckman, and S. M. Schennach (2010): “Estimating the Technology of Cognitive and Noncognitive Skill Formation,” Econometrica, 78(3), 883–931.
Darolles, S., Y. Fan, J. P. Florens, and E. Renault (2011): “Nonparametric
Instrumental Regression,” Econometrica, 79(5), 1541–1565.
D’Haultfœuille, X. (2011): “On the Completeness Condition in Nonparametric Instrumental Problems,” Econometric Theory, 27(Special Issue 03), 460–471.
Erickson, T., and T. M. Whited (2000): “Measurement Error and the Relationship
between Investment and q,” Journal of Political Economy, 108(5), 1027–1057.
Evdokimov, K. (2010): “Identification and Estimation of a Nonparametric Panel Data
Model with Unobserved Heterogeneity,” Discussion paper, Princeton University.
Fan, J. (1991): “On the Optimal Rates of Convergence for Nonparametric Deconvolution
Problems,” The Annals of Statistics, 19(3), 1257–1272.
Fan, J., and Y. K. Truong (1993): “Nonparametric Regression with Errors in Variables,” The Annals of Statistics, 21(4), 1900–1925.
Galvao Jr., A. F., and G. Montes-Rojas (2009): “Instrumental Variables Quantile
Regression for Panel Data with Measurement Errors,” Discussion paper, City University
London.
Griliches, Z., and J. A. Hausman (1986): “Errors in Variables in Panel Data,”
Journal of Econometrics, 31, 93–118.
Hall, B. H., A. B. Jaffe, and M. Trajtenberg (2005): “Market Value and Patent
Citations,” The RAND Journal of Economics, 36(1), 16–38.
Hall, P., and J. L. Horowitz (2005): “Nonparametric Methods for Inference in the
Presence of Instrumental Variables,” The Annals of Statistics, 33(6), 2904–2929.
Hausman, J., W. Newey, H. Ichimura, and J. Powell (1991): “Identification and
Estimation of Polynomial Errors-in-Variables Models,” Journal of Econometrics, 50,
273–295.
Hayashi, F. (1982): “Tobin’s Marginal q and Average q: A Neoclassical Interpretation,”
Econometrica, 50(1), 213–224.
Henderson, D. J., R. J. Carroll, and Q. Li (2008): “Nonparametric Estimation
and Testing of Fixed Effects Panel Data Models,” Journal of Econometrics, 144(1),
257–275.
Holtz-Eakin, D., W. Newey, and H. S. Rosen (1988): “Estimating Vector Autoregressions with Panel Data,” Econometrica, 56(6), 1371–1395.
Horowitz, J., and S. Lee (2007): “Nonparametric Instrumental Variables Estimation
of a Quantile Regression Model,” Econometrica, 75(4), 1191–1208.
Horowitz, J. L. (2009): “Specification Testing in Nonparametric Instrumental Variables Estimation,” Discussion paper, Northwestern University.
(2011): “Applied Nonparametric Instrumental Variables Estimation,” Econometrica, 79(2), 347–394.
Hu, Y., and S. M. Schennach (2008): “Instrumental Variable Treatment of Nonclassical Measurement Error Models,” Econometrica, 76(1), 195–216.
Hu, Y., and J.-L. Shiu (2011): “Nonparametric Identification Using Instrumental Variables: Sufficient Conditions For Completeness,” Discussion paper, Johns Hopkins University.
Johannes, J., S. Van Bellegem, and A. Vanhems (2011): “Convergence Rates for Ill-Posed Inverse Problems with an Unknown Operator,” Econometric Theory, 27(Special Issue 03), 522–545.
Khan, S., M. Ponomareva, and E. Tamer (2011): “Identification of Panel Data
Models with Endogenous Censoring,” Discussion paper, Duke University.
Komunjer, I., and S. Ng (2011): “Measurement Errors in Dynamic Models,” Econometric Theory, forthcoming.
Koopmans, T. C., and O. Reiersol (1950): “The Identification of Structural Characteristics,” The Annals of Mathematical Statistics, 21(2), 165–181.
Kotlarski, I. (1967): “On Characterizing the Gamma and the Normal Distribution,”
Pacific Journal of Mathematics, 20(1), 69–76.
Lee, Y. (2010): “Nonparametric Estimation of Dynamic Panel Models with Fixed Effects,” Discussion paper, University of Michigan.
Li, T., and Q. Vuong (1998): “Nonparametric Estimation of the Measurement Error
Model Using Multiple Indicators,” Journal of Multivariate Analysis, 65(2), 139–165.
Lucas, Jr., R. E., and E. C. Prescott (1971): “Investment Under Uncertainty,”
Econometrica, 39(5), 659–681.
Matzkin, R. L. (1994): “Restrictions of Economic Theory in Nonparametric Methods,”
in Handbook of Econometrics, ed. by R. F. Engle, and D. L. McFadden, vol. IV, pp.
2523–2558. Elsevier Science B.V.
(2007): “Nonparametric Identification,” in Handbook of Econometrics, ed. by
J. J. Heckman, and E. E. Leamer, vol. VI, pp. 5307–5368. Elsevier Science B.V.
Newey, W. K. (1997): “Convergence Rates and Asymptotic Normality for Series Estimators,” Journal of Econometrics, 79(1), 147–168.
Newey, W. K., and J. L. Powell (2003): “Instrumental Variable Estimation of
Nonparametric Models,” Econometrica, 71(5), 1565–1578.
Ould-Saïd, E., D. Yahia, and A. Necir (2009): “A Strong Uniform Convergence
Rate of a Kernel Conditional Quantile Estimator under Random Left-Truncation and
Dependent Data,” Electronic Journal of Statistics, 3, 426–445.
Porter, J. R. (1996): “Essays in Econometrics,” Ph.D. thesis, MIT.
Roehrig, C. S. (1988): “Conditions for Identification in Nonparametric and Parametric
Models,” Econometrica, 56(2), 433–447.
Schennach, S. M. (2004a): “Estimation of Nonlinear Models with Measurement Error,”
Econometrica, 72(1), 33–75.
(2004b): “Nonparametric Regression in the Presence of Measurement Error,”
Econometric Theory, 20, 1046–1093.
(2007): “Instrumental Variable Estimation of Nonlinear Errors-in-Variables
Models,” Econometrica, 75(1), 201–239.
(2008): “Quantile Regression with Mismeasured Covariates,” Econometric Theory, 24, 1010–1043.
(2011): “Measurement Error in Nonlinear Models — A Review,” invited working
paper prepared for the World Congress of the Econometric Society, Tokyo (2010).
Schennach, S. M., and Y. Hu (2010): “The Nonparametric Classical Errors-in-Variables Model,” Discussion paper, University of Chicago.
Shao, J., Z. Xiao, and R. Xu (2011): “Estimation with Unbalanced Panel Data Having
Covariate Measurement Error,” Journal of Statistical Planning and Inference, 141(2),
800–808.
Stock, J. H., and M. Yogo (2005): “Testing for Weak Instruments in Linear IV Regression,” in Identification and Inference for Econometric Models: Essays in Honor of
Thomas Rothenberg, ed. by D. W. K. Andrews, and J. H. Stock, pp. 80–108, Cambridge,
UK. Cambridge University Press.
Torgovitsky, A. (2010): “Identification and Estimation of Nonparametric Quantile
Regressions with Endogeneity,” Discussion paper, Northwestern University.
van der Vaart, A. W. (1998): Asymptotic Statistics. Cambridge University Press,
New York.
Wansbeek, T. (2001): “GMM Estimation in Panel Data Models with Measurement
Error,” Journal of Econometrics, 104(2), 259–268.
Xiao, Z., J. Shao, and M. Palta (2008): “A Unified Theory for GMM Estimation in Panel Data Models with Measurement Error,” Discussion paper, University of
Wisconsin-Madison.
                             weak ME (ση = 0.5)
basis (x1)         αn = 0.001             αn = 0.01              αn = 0.1
                 bias    SD    RMSE     bias    SD    RMSE     bias    SD    RMSE
poly    Kn = 3   0.054  0.140  0.158    0.054  0.140  0.158    0.066  0.140  0.162
        Kn = 5   0.066  0.140  0.162    0.066  0.140  0.162    0.067  0.139  0.162
        Kn = 7   0.067  0.137  0.160    0.067  0.137  0.160    0.067  0.137  0.160
spline  Ln = 5   0.066  0.141  0.163    0.066  0.141  0.163    0.067  0.140  0.163
        Ln = 10  0.068  0.140  0.162    0.068  0.140  0.162    0.068  0.140  0.163
        Ln = 15  0.068  0.140  0.162    0.068  0.139  0.162    0.068  0.140  0.163
B-spline Ln = 5  0.071  0.136  0.160    0.071  0.136  0.161    0.072  0.137  0.162
        Ln = 10  0.078  0.134  0.162    0.078  0.134  0.163    0.078  0.134  0.163
        Ln = 15  0.079  0.139  0.168    0.079  0.139  0.168    0.080  0.138  0.168
NW               0.070  0.125  0.147

Table 1: For different combinations of the basis in x1 and the regularization parameter αn, the table shows average bias, standard deviation (SD), and root mean squared error (RMSE) of the new ME-robust estimator as well as of the Nadaraya-Watson (NW) estimator, which ignores the ME.
                             strong ME (ση = 1.5)
basis (x1)         αn = 0.001             αn = 0.01              αn = 0.1
                 bias    SD    RMSE     bias    SD    RMSE     bias    SD    RMSE
poly    Kn = 3   0.052  0.328  0.334    0.054  0.326  0.332    0.073  0.313  0.322
        Kn = 5   0.051  0.243  0.250    0.051  0.242  0.250    0.057  0.238  0.248
        Kn = 7   0.053  0.251  0.258    0.053  0.251  0.258    0.059  0.246  0.256
spline  Ln = 5   0.052  0.233  0.242    0.053  0.232  0.242    0.057  0.230  0.241
        Ln = 10  0.049  0.221  0.231    0.049  0.221  0.232    0.053  0.219  0.231
        Ln = 15  0.049  0.221  0.231    0.049  0.221  0.231    0.053  0.219  0.231
B-spline Ln = 5  0.051  0.217  0.229    0.052  0.217  0.229    0.055  0.216  0.230
        Ln = 10  0.065  0.197  0.217    0.065  0.197  0.218    0.068  0.196  0.218
        Ln = 15  0.075  0.192  0.218    0.075  0.192  0.218    0.077  0.192  0.219
NW               0.171  0.110  0.207

Table 2: For different combinations of the basis in x1 and the regularization parameter αn, the table shows average bias, standard deviation (SD), and root mean squared error (RMSE) of the new ME-robust estimator as well as of the Nadaraya-Watson (NW) estimator, which ignores the ME.
[Figure 1: two panels, "weak ME" (left) and "strong ME" (right), horizontal axis x*; the graphic itself is not recoverable from the text extraction.]

Figure 1: The figure shows the true regression function (red squares), the ME-robust estimator (αn = 0.01, polynomial bases with Kn = 5; blue circles) with two empirical standard deviations (shaded area), and the Nadaraya-Watson estimator (black stars) with two empirical standard deviations (dashed line).
                        Cragg-Donald statistic
basis (x1)       weak ME   strong ME   CV bias   CV size
poly    Kn = 3    20.722     33.580       –        8.18
        Kn = 5    15.213     27.515      8.78     11.22
        Kn = 7    11.312     21.982      9.92     13.34
spline  Ln = 5    10.318     21.788     10.43     15.24
        Ln = 10    6.444     13.671     10.89     19.72
        Ln = 15    4.779     10.140     11.02     24.09
B-spline Ln = 5   10.135     21.472     10.43     15.24
        Ln = 10    6.444     13.671     10.89     19.72
        Ln = 15    4.779     10.140     11.02     24.09

Table 3: Cragg-Donald statistic for testing the null of weak instruments; large values lead to rejection. Critical values (CV) for 5%-tests of 10% TSLS bias and 15% TSLS size distortion are provided in the last two columns.