Likelihood-based inference for clustered line transect data

Rasmus Waagepetersen
Institute of Mathematical Sciences, Aalborg University
Fredrik Bajersvej 7G, DK-9220 Aalborg
[email protected]
Tore Schweder
Department of Economics, University of Oslo
P.O. Box 1095 Blindern, N-0317 Oslo
The uncertainty in estimation of spatial animal density from line
transect surveys depends on the degree of spatial clustering in the
animal population. To quantify the clustering we model line transect
data as independent thinnings of spatial shot-noise Cox processes.
Likelihood-based inference is implemented using Markov chain Monte
Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap
or by consideration of posterior distributions in a Bayesian setting.
Maximum likelihood estimation and Bayesian inference are compared
in an example concerning minke whales in the northeast Atlantic.
Keywords: minke whales, shot-noise Cox process, simulation-based inference,
spatial point process, thinning.
Introduction
Line transect sampling and point sampling are among the most widely used techniques for estimating the abundance of wild animal or plant populations (Buckland et al., 2004). The spatial point pattern of the animal or plant positions is
often clustered relative to a Poisson process. We consider a Cox point process
model (defined and motivated in Section 3.1) for such clustered populations,
and develop likelihood-based methods to infer the model parameters from
line transect data on animal positions.
In a line transect survey an observer traverses an area at fixed speed along
a predetermined transect line. The transect line is often a zigzag consisting
of a number of transect legs which are possibly broken into segments due
to changes in sighting condition. The observer records the position of each
sighted animal and possibly covariate data on sighting conditions. These
data may in the first place be used for estimating the unknown detection
probability p(x, y) of observing an animal located at (x, y). Our theme, however, is not the estimation of p. Viewing the animal positions as a realization
of a spatial point process, we instead focus on estimation of the animal density and clustering parameters, assuming the detection probability p given.
Assuming independent detection, the set of detected animal positions is a
thinned version of the population point process with thinning probability
1 − p(x, y). Line transect data were first regarded as thinnings of point processes in Schweder (1974) and Schweder (1977).
With n animals observed over the transect,

λ̃ = n / ∫ p(x, y) dx dy    (1)

is the moment estimate of mean animal density, and also the maximum likelihood estimate of the intensity parameter of a spatially homogeneous Poisson population process when animals are detected independently of each other.
Under clustering, this estimate is inefficient and to evaluate its variance it is
necessary to quantify the degree of clustering. We address these issues using
a parametric Cox process model for the clustering, and likelihood methods
to infer the unknown parameters.
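To make (1) concrete, here is a minimal numerical sketch. The half-normal detection probability, the strip limits and the grid size below are illustrative stand-ins, not quantities from the survey.

```python
import numpy as np

def moment_estimate(n_obs, p, strip, grid=400):
    """Moment estimate (1): lambda-tilde = n / integral of p(x, y) dx dy.

    n_obs : number of animals observed over the transect
    p     : vectorized detection probability p(x, y)
    strip : (x0, x1, y0, y1) rectangle containing the support of p
    """
    x0, x1, y0, y1 = strip
    xs = np.linspace(x0, x1, grid)
    ys = np.linspace(y0, y1, grid)
    X, Y = np.meshgrid(xs, ys)
    cell = ((x1 - x0) / grid) * ((y1 - y0) / grid)
    effective_area = np.sum(p(X, Y)) * cell   # Riemann sum for int p dx dy
    return n_obs / effective_area

# hypothetical half-normal fall-off in perpendicular distance y (scale 0.5 km)
p = lambda x, y: np.exp(-y**2 / (2 * 0.5**2))
lam = moment_estimate(50, p, strip=(0.0, 100.0, -2.0, 2.0))
```

Under clustering the variance of this estimate exceeds the Poisson variance, which is why the clustering parameters below matter.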
A computationally easy approach to parameter estimation for a Cox process is to match a non-parametric estimate of a second order summary statistic with its theoretical expression depending on the unknown parameters.
This approach was first taken for line transect data by Hagen and Schweder
(1995) who used the so-called K-function. For a stationary point process
with intensity λ, λK(t) is the expected number of further points within distance t from a typical point of the point process. A non-parametric estimate
of the K-function may be obtained from line transect data as discussed in
Baddeley et al. (2000). Animal positions are observed within narrow strips
along the transect and an unbiased estimate of K(t) can only be obtained
for t smaller than the strip width. The estimate is moreover highly variable,
see Section 5.4.
Other methods of inference are based on projecting the detected points
onto the transect line whereby a one-dimensional point process is obtained.
Cowling (1998) (see also Aldrin et al., 2003) considers the K-function for the
projection of a thinned Neyman-Scott process. The thinning probability is
here assumed to be centred Gaussian with constant scale parameter. In practice, however, the thinning probability usually varies along the transect according to sighting conditions, see e.g. Skaug et al. (2004). In Buckland et
al. (2004, Chapter 4), the projected process is assumed to be inhomogeneous
Poisson with intensity depending on observed covariates. Skaug (2006) considers a one-dimensional Cox point process with random intensity modulated
by a latent two-state Markov process.
In this paper we develop likelihood-based inference for a thinned spatial
Cox process both in a frequentist and a Bayesian setting. The inference
is implemented using simulation methodology. A distinct advantage of our
approach (as opposed to the one in Cowling, 1998) is that we do not need
simplifying assumptions regarding the functional form of the detection probability. Our approach can moreover easily be adapted to other sampling
designs of the distance type (Buckland et al., 2004). It can also be extended
to take into account large scale heterogeneity due to spatially varying covariates (see Section 6). This would extend the approach in Hedley and Buckland
(2004) who consider spatially inhomogeneous Poisson processes.
Applications of maximum likelihood estimation and Bayesian inference
for spatial Cox processes in general are still very rare in the literature. Our
paper therefore also serves as a case study of general computational issues
concerning the implementation of likelihood-based inference for spatial Cox processes.
Our discussion will be focused on a particular line transect study of minke
whale abundance in the northeast Atlantic, see Section 2. Sections 3 and 4
describe our model and the computational approach while Section 5 contains
an application to the minke whale data including model assessment using the
K-function. Section 6 contains some final remarks.
Minke whales
Minke whales (Balaenoptera acutorostrata) are subject to commercial whaling
in the northeast Atlantic. Catch quotas are calculated using, among other
sources of information, periodic abundance estimates (International Whaling
Commission, 2004). Skaug et al. (2004) gave the most recent abundance
estimate of about 107 000 (coef. of var. 0.13) summering minke whales in
the northeast Atlantic including waters around Jan Mayen. The total area
is divided into blocks and the abundance estimate is based on separate line
transect surveys within the blocks. Each block was surveyed once in one of
the years 1996-2001 except the Lofoten block which was surveyed twice.
Here we focus on the survey block named VSS located west of Spitzbergen.
This block was visually surveyed in 1999 with 50 whales observed over a
transect with m = 7 transect legs (see Figure 1). A few comments regarding
maximum likelihood estimation for a neighbouring survey block VSN (not
shown) are given in Section 5 and 6. Following Skaug et al. (2004), we regard
the whales as immobile since the vessel travels much faster than minke whales
usually do.
[Figure 1 about here.]
The probability of detecting a whale is considerably less than one even
when located right on the transect. Let Q(x, y, x̃) denote the hazard probability of initially detecting a whale surfacing at position (x, y) when the ship
is at position (x̃, 0) on a transect leg along the x-axis. Sightings must be
forward of perpendicular to the vessel so Q(x, y, x̃) is zero for x ≤ x̃. Assuming that the whales surface according to a Poisson process in time with intensity φ > 0 and that the ship moves at unit speed, the detection probability is p(x, y) = 1 − exp(−φ ∫_{−∞}^{x} Q(x, y, x̃) dx̃). To estimate Q(x, y, x̃),
a double platform design is used in the minke whale surveys. There is no
communication between the platforms, and from each platform tracks of successive surfacings of detected whales are recorded. An estimate of Q(x, y, x̃)
can then be obtained from trinomial data with outcomes: surfacing whale
observed from a) both platforms b) only from first platform c) only from
second platform, see Skaug et al. (2004) for further details.
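The relation p(x, y) = 1 − exp(−φ ∫ Q(x, y, x̃) dx̃) can be sketched numerically. The hazard function Q below is purely hypothetical (a Gaussian decay in squared distance from ship to whale); only the structure of the computation follows the text.

```python
import numpy as np

def detection_prob(x, y, Q, phi, dx=0.01, behind=10.0):
    """p(x, y) = 1 - exp(-phi * integral of Q(x, y, xt) over ship positions xt).

    Q vanishes for xt >= x (sightings are forward of perpendicular), so the
    integral is truncated to ship positions xt in [x - behind, x].
    """
    xt = np.arange(x - behind, x, dx)
    hazard = np.sum(Q(x, y, xt)) * dx   # Riemann sum of the cumulative hazard
    return 1.0 - np.exp(-phi * hazard)

# hypothetical hazard probability, decaying in squared ship-to-whale distance
def Q(x, y, xt):
    r2 = (x - xt)**2 + y**2
    return 0.8 * np.exp(-r2 / 2.0)

p_near = detection_prob(5.0, 0.0, Q, phi=1.5)   # whale on the transect line
p_far = detection_prob(5.0, 1.5, Q, phi=1.5)    # whale 1.5 km abeam
```

As expected, detection is far from certain even on the line, and decays with perpendicular distance.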
In the Norwegian minke whale surveys, radial distance from the ship to
the surfacing whale is estimated by eye, and the angle between the transect
leg and the sighting line is estimated by way of an angle board fixed to the
rim of the barrel or platform fence. Time and ship positions are accurately
measured, but the angle and particularly radial distance measurements are
rather imprecise. Due to measurement error, tracks and surfacings from the
two platforms might be wrongly matched. This induces bias in the estimation
of p(x, y). The bias is estimated by regression analysis on simulated data,
and a bias-corrected estimate is obtained, see Skaug et al. (2004).
For our present purpose of fitting a spatial cluster model, the detection
probability is given by the estimate obtained in Skaug et al. (2004) and
we ignore the uncertainty due to the estimation of the detection probability.
The right plots in Figure 1 show the estimated detection probabilities for two
transect leg segments in the VSS block. The detection probability depends on
covariates (sea state, glare, observation team etc) recorded every hour, and
cannot easily be explicitly given here. An important feature of our likelihood-based approach is that it easily accommodates the spatially varying detection probability.
Spatial point process modelling of whale positions
In this section we discuss the modelling of whale positions observed along one
transect leg. Within the time-span of traversing a transect leg the whales are
regarded as immobile and occur at spatial locations ξ = (x, y) where these
locations are relative to a coordinate system with the transect leg along the
x-axis and origin at the start of the transect leg. The whales in the vicinity of
the transect leg are regarded as a subset of a planar stationary point process
X whose intensity λ is the parameter of main interest. The process Y of
positions of observed whales is regarded as an independent thinning of X
with thinning probabilities 1 − p(·) where p(ξ) is the probability of detecting
a whale positioned at ξ. In practice p(·) has bounded support so that Y is a
finite point process.
Shot-noise Cox processes
Minke whales in the northeast Atlantic tend to form loose and variable clusters, partially due to stochastic clustering in the prey distribution (Skaug
et al., 2004). Therefore a Cox process seems an appropriate model for the
whale positions. The distribution of a Cox process in the plane is governed
by a non-negative random intensity function Z = {Z(ξ)|ξ ∈ R2 }. Given a
realization z of the random intensity function, the Cox process is a Poisson process with intensity function z. In this paper we consider an example
of a so-called shot-noise Cox process (Brix, 1999; Møller, 2003; Møller and
Waagepetersen, 2003). The random intensity function is given by
Z(ξ|Φ) = Σ_{(c,γ)∈Φ} γ k(ξ − c)    (2)
where the kernel k is a probability density and Φ is a homogeneous marked
Poisson process. That is, Φ = {(c, γ)|c ∈ C} where C is a homogeneous
Poisson process and given C, the marks γ > 0 are independent and identically
distributed. This Cox process can also be viewed as a cluster process, i.e.
conditional on Φ, X is distributed as a superposition of Poisson processes
X(c,γ) , (c, γ) ∈ Φ, each with intensity function γk(· − c) where c is the cluster
centre. Conditional on Φ, γ is the expected number of points in the cluster.
The process Y of observed whales is a shot-noise Cox process with random
intensity function
ZY (·|Φ) = p(·)Z(·|Φ).
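The cluster-process view suggests a direct simulation of the observed process Y: Poisson cluster centres with intensity exp(κ), gamma marks, Gaussian offspring, and independent thinning retaining a point at ξ with probability p(ξ). The detection function and parameter values below are illustrative stand-ins; the padding of the centre region mimics the extended region E introduced later.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_thinned_sncp(kappa, alpha, omega, p, region, pad):
    """Simulate the observed process Y on region = (x0, x1, y0, y1).

    Centres: Poisson, intensity exp(kappa), on the region padded by `pad`
    (clusters centred just outside can still contribute points inside);
    marks gamma(alpha, 1); offspring Poisson(gamma) placed N(c, omega^2 I);
    finally independent thinning retains a point at xi with probability p(xi).
    """
    x0, x1, y0, y1 = region
    lo = np.array([x0 - pad, y0 - pad])
    hi = np.array([x1 + pad, y1 + pad])
    n_c = rng.poisson(np.exp(kappa) * np.prod(hi - lo))
    centres = rng.uniform(lo, hi, size=(n_c, 2))
    pts = [np.empty((0, 2))]
    for c in centres:
        gamma = rng.gamma(alpha, 1.0)      # mark: expected cluster size
        n_off = rng.poisson(gamma)         # realized offspring count
        pts.append(c + omega * rng.standard_normal((n_off, 2)))
    X = np.vstack(pts)
    keep = rng.uniform(size=len(X)) < p(X[:, 0], X[:, 1])
    return X[keep]

p = lambda x, y: np.exp(-y**2 / 2.0)       # hypothetical detection probability
Y = simulate_thinned_sncp(-3.7, 2.4, 0.6, p, (0.0, 50.0, -2.0, 2.0), pad=1.8)
```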
Parametric model
In the minke whale context we parametrize the model by θ = (κ, α, ω) where
exp(κ) is the intensity of cluster centres, α = Eγ is the mean number of
whales per cluster, and ω is the scale parameter (spread) of the kernel. The
whale intensity λ = α exp(κ) is the parameter of main interest.
We assume that γ, the number of whales per cluster, is standard gamma
distributed with shape and scale parameter α and 1, respectively, and that
the kernel is a truncated bivariate Gaussian density with scale parameter
ω > 0, i.e.
k((x, y); ω) = 1[max(|x|, |y|) < Tω] exp(−(x² + y²)/(2ω²)) / (2πω² c(T))    (3)
where T > 0 and c(T ) is a normalizing constant ensuring that k integrates
to 1. In our application, T = 3 so that c(3) = 0.9973. Working with a k of
bounded support is advantageous for computational reasons, see Appendix B.
For a region A the overdispersion index (i.e. the ratio between the variance
and mean of the number of points in X ∩ A) is approximately 2 + α.
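The kernel (3) can be written out as follows; here the normalizing constant c(T) is computed as the mass that the untruncated Gaussian places on the square max(|x|, |y|) < Tω, which is what makes k integrate to 1. The grid check at the end is for illustration only.

```python
import numpy as np
from math import erf, sqrt, pi

def c_trunc(T):
    """Mass of a standard bivariate normal on the square max(|x|,|y|) < T."""
    return erf(T / sqrt(2.0))**2

def kernel(x, y, omega, T=3.0):
    """Truncated bivariate Gaussian kernel k((x, y); omega) of eq. (3)."""
    inside = np.maximum(np.abs(x), np.abs(y)) < T * omega
    dens = np.exp(-(x**2 + y**2) / (2 * omega**2)) / (2 * pi * omega**2)
    return inside * dens / c_trunc(T)

# check: the kernel integrates to 1 over its bounded support
omega, T = 0.6, 3.0
g = np.linspace(-T * omega, T * omega, 801)
X, Yg = np.meshgrid(g, g)
h = g[1] - g[0]
total = np.sum(kernel(X, Yg, omega, T)) * h**2
```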
In addition to being an example of a shot noise G Cox process (see Appendix A), our shot-noise Cox process is an example of a Neyman-Scott process with negative binomial numbers of points in each cluster. From
the point of view of constructing a random intensity function, the use of
gamma distributed marks γ in (2) adds additional flexibility compared to
more common examples of Neyman-Scott processes like the Thomas process
(first used in the line transect context by Hagen and Schweder, 1995). For
the Thomas process, the random intensity function is obtained by a super-
position of Gaussian kernels all multiplied with the same positive parameter.
The Thomas model does not seem to allow for the amount of dispersion seen
in our example, see Figure 1 where there are many isolated points and at
least one cluster with many points.
Likelihood-based inference
The set of spatial locations with positive probability of detecting a whale
is essentially a union of narrow bands around the transect legs. The geometry of this set is rather complicated from a computational point of view.
We therefore use a composite likelihood approach: log likelihood functions
are computed for each transect leg separately and then added to obtain a
composite log likelihood function based on all of the transect legs.
For a survey with m transect legs, we use the composite log likelihood l(θ) = Σ_{i=1}^{m} log Li(θ) where Li(θ) is the likelihood of the data from the ith
leg. Dependence between the likelihood components Li (θ) is due to clusters of whales which can be observed from more than one transect leg. For
our whale data the spatial extent (determined by ω) of the clusters is small
relative to the separation between the transect legs. It thus seems reasonable to consider the likelihood components independent. We therefore in the
following refer to the composite likelihood function and maximum composite likelihood estimates as the likelihood function and maximum likelihood
estimates, respectively.
In Section 5.1 we use a profile likelihood approach where l(θ) is maximized
with respect to (κ, α) for a finite set of ω values using Newton-Raphson. The
reason for using the profile likelihood approach is that it is computationally
very involved to compute the first and second derivatives with respect to the
kernel scale parameter ω. The likelihood may further be highly multimodal
as a function of ω, see Figure 2, in which case gradient based maximization
is not reliable. We are not aware of theoretical results concerning the properties of maximum likelihood estimates for spatial Cox processes so we use a
parametric bootstrap to investigate the repeated sampling properties of our estimates.
The score function and information matrix are obtained by summing the
corresponding quantities obtained from the log likelihood functions log Li (θ)
for each transect leg. Similarly, for two values θ1 = (κ1 , α1 , ω1 ) and θ2 =
(κ2 , α2 , ω2 ) of the parameter vector, the log likelihood ratio l(θ1 ) − l(θ2 ) is
given by the sum Σ_{i=1}^{m} log(Li(θ1)/Li(θ2)). It therefore suffices to work out
the likelihood function for one generic transect leg.
Likelihood function for one transect leg
To simplify notation we drop the transect leg index i in this and the following
section. Let S denote the bounded support of the detection probability for
a transect leg - in our application S is a narrow rectangular strip around
the transect leg, see the right plots in Figure 1. The conditional density of
YS = Y ∩ S given Φ in (2) is the Poisson process density
f(y | ZY(·|Φ; ω)) = exp(|S| − ∫_S ZY(ξ|Φ; ω)dξ) Π_{η∈y} ZY(η|Φ; ω)    (4)
Note that the conditional density only depends on Φ through the finite point
process of cluster centres with positive probability of contributing with offspring inside S. More specifically, this finite point process is ΦE = {(c, γ) ∈
Φ|c ∈ E} where E is the rectangle {ξ ∈ R2 |∃η ∈ S : k(ξ − η; ω) > 0}. The
likelihood function for the transect leg is thus
L(θ) = E(κ,α) f(y | ZY(·|Φ; ω)) = E(κ,α) f(y | ZY(·|ΦE; ω))
where E(κ,α) denotes expectation with respect to Φ or ΦE whose distributions
depend on κ and α. Denote by v the number of points in ΦE and let w = Σ_{(c,γ)∈ΦE} log γ. The marginal density of ΦE is given by the Poisson process density of the cluster centres times the densities of the standard gamma marks:

f(φ; κ, α) = exp(|E|(1 − exp(κ))) exp(κ)^v Π_{(c,γ)∈φ} γ^{α−1} exp(−γ)/Γ(α)
= exp(|E|(1 − exp(κ)) + (κ − log Γ(α))v + (α − 1)w − Σ_{(c,γ)∈φ} γ).    (5)
Approximations of likelihood ratios L(θ2 )/L(θ1 ) are obtained using bridge
sampling, see Appendix C. To compute approximate derivatives of log L(θ)
(see Section 4.2) or bridge sampling likelihood ratios L(θ2 )/L(θ1 ) we need
conditional simulations of the “missing data” ΦE given YS . An algorithm for
this is discussed in Appendix B. This algorithm also forms the backbone in
an algorithm for posterior simulation in a Bayesian setting, see Section 4.3.
Computation of log likelihood derivatives
Consider a fixed ω and let Vθ(YS, ΦE) = d log f(YS, ΦE; θ)/d(κ, α) where

f(y, φ; θ) ∝ f(y | ZY(·|φ; ω)) f(φ; κ, α)

is the joint density of (YS, ΦE), see (4) and (5). Following Section 8.6.2 in Møller and Waagepetersen (2003), the score function is given by

u(κ, α) = Eθ,y Vθ(YS, ΦE) = (Eθ,y v − exp(κ)|E|, Eθ,y w − (Γ'(α)/Γ(α)) Eθ,y v)    (6)
where Eθ,y denotes conditional expectation with respect to ΦE given YS = y
and Γ(α) is the gamma function. Similarly, the observed information matrix
j(κ, α) = −Eθ,y dVθ(YS, ΦE)/d(κ, α)^T − Varθ,y Vθ(YS, ΦE) = A − B    (7)

where A = diag(exp(κ)|E|, (d² log Γ(α)/dα²) Eθ,y v) and B is the symmetric matrix with entries B11 = Varθ,y v, B12 = Covθ,y[v, w] − (Γ'(α)/Γ(α)) Varθ,y v, and B22 = Varθ,y w − 2(Γ'(α)/Γ(α)) Covθ,y[v, w] + (Γ'(α)/Γ(α))² Varθ,y v, and where Varθ,y and Covθ,y denote conditional variance and covariance. The
first term in (7) is the conditional expectation of the observed information
in the case where v and w are observed. The first and second derivatives of
the gamma function are known as the digamma and trigamma functions and
are available in many statistical or mathematical software packages.
We could reparametrize by letting κ̃ = κ − log Γ(α), in which case an exponential family density with sufficient statistic t = (v, w) would be obtained for ΦE, cf. (5). Then we obtain particularly neat expressions for
the score function and observed information: u(κ, α) = Eθ,y t − Eθ t and
j(κ, α) = Varθ t − Varθ,y t. However, with the original parametrization a more
well-conditioned observed information matrix is obtained. A third option is
to parametrize in terms of (κ, log λ) = (κ, κ + log α). This gives a somewhat
better conditioned observed information than with the (κ, α) parametrization
but the expression is rather messy and omitted here.
The expectations appearing in the score function and the information
matrix cannot be evaluated analytically. In order to estimate the expectations using importance sampling methods (see Section 8.6.2 in Møller and
Waagepetersen, 2003) we use conditional simulations of ΦE given YS = y,
see Appendix B.
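Given conditional simulations of ΦE, the score (6) and the information (7) reduce to conditional moments of t = (v, w). A sketch of the resulting Monte Carlo estimator, fed with hypothetical conditional samples (the arrays v and w below are toy values, not survey output), could look as follows; the digamma and trigamma functions come from scipy.

```python
import numpy as np
from scipy.special import digamma, polygamma

def score_and_info(v, w, kappa, alpha, area_E):
    """Monte Carlo estimates of the score u(kappa, alpha) and observed
    information j(kappa, alpha).

    v[j], w[j] : number of centres and sum of log-marks in the j-th
                 conditional simulation of Phi_E given Y_S = y.
    """
    t = np.column_stack([v, w])
    Et = t.mean(axis=0)
    Vt = np.cov(t, rowvar=False)                  # conditional (co)variances
    dg, tg = digamma(alpha), polygamma(1, alpha)  # Gamma'(a)/Gamma(a) etc.
    u = np.array([Et[0] - np.exp(kappa) * area_E,  # score, cf. (6)
                  Et[1] - dg * Et[0]])
    j_complete = np.diag([np.exp(kappa) * area_E, tg * Et[0]])
    var_score = np.array([                         # Var of complete-data score
        [Vt[0, 0], Vt[0, 1] - dg * Vt[0, 0]],
        [Vt[0, 1] - dg * Vt[0, 0],
         Vt[1, 1] - 2 * dg * Vt[0, 1] + dg**2 * Vt[0, 0]]])
    return u, j_complete - var_score               # cf. (7)

v = np.array([3, 4, 2, 5, 3, 4])                   # toy conditional samples
w = np.array([1.1, 1.8, 0.4, 2.3, 0.9, 1.5])
u, j = score_and_info(v, w, kappa=-3.7, alpha=2.4, area_E=120.0)
```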
Bayesian approach
In the Bayesian framework we introduce a prior density p(θ) and consider the joint posterior distribution of (θ, (ΦEi)_{i=1}^{m}) where we reintroduce the index i = 1, . . . , m, for the m transect legs. Assuming independence between transect legs, the posterior density is given by

p(θ, (φi)_{i=1}^{m} | (yi)_{i=1}^{m}) ∝ p(θ) Π_{i=1}^{m} fi(yi, φi; θ).
A Markov chain Monte Carlo (MCMC) algorithm for posterior simulation
can be obtained by combining the MCMC algorithm from Appendix B with
Metropolis-Hastings updates for θ (see e.g. Robert and Casella, 2004, for
background on MCMC).
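A single Metropolis-Hastings update of θ within such a scheme can be sketched generically. The random-walk proposal and the toy target below are placeholders: in the actual algorithm, log_post would evaluate log p(θ) + Σ_i log fi(yi, φi; θ) at the current latent configurations φi.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_update(theta, log_post, step):
    """One random-walk Metropolis update of theta = (kappa, alpha, omega)."""
    prop = theta + step * rng.standard_normal(len(theta))
    accept = np.log(rng.uniform()) < log_post(prop) - log_post(theta)
    return (prop, True) if accept else (theta, False)

# placeholder target (standard normal), just to exercise the update
log_post = lambda th: -0.5 * np.sum(th**2)
theta = np.zeros(3)
n_acc = 0
for _ in range(2000):
    theta, acc = mh_update(theta, log_post, step=0.5)
    n_acc += acc
```

Alternating such θ-updates with the latent-process updates of Appendix B gives the posterior sampler.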
Application to whale data
Precise Monte Carlo estimation of the score function and in particular the
observed information and log likelihood ratios requires large MCMC samples. Hence our approach to maximum likelihood estimation is demanding
in terms of computing time. The Bayesian approach on the other hand is
computationally less demanding, see Section 5.3. To give an idea of the computational complexity we report below computing times on a 2.4 GHz/256
MB Intel 4 processor.
Maximum likelihood estimation
Estimates (κl , αl ) = arg max(κ,α) l(κ, α, ωl ) and λl = exp(κl )αl are obtained
for different values ωl = l/10km, l = 2, . . . , 30, using Newton-Raphson. Occasionally, Monte Carlo error results in negative definite Monte Carlo estimates
of the observed information so we use a Marquardt-Levenberg variant of the
Newton-Raphson algorithm where positive terms are added to the diagonal
of the estimated observed information when it is negative definite.
The left plot in Figure 2 shows the profile log likelihood function for
ω obtained by cumulating log likelihood ratios l(θl+1 ) − l(θl ) (with θl =
(κl , αl , ωl )) obtained using bridge sampling.
[Figure 2 about here.]
The profile likelihood function for VSS has a well-defined maximum for ω =
ω6 = 0.6 with corresponding estimates κ6 = −3.7, α6 = 2.4 and λ6 =
0.06. In Section 6 we comment on the second more flat and multimodal
profile likelihood function for the VSN block. The small vertical bars in
Figure 2 indicate Monte Carlo confidence intervals for the log likelihood ratios
l(θl+1 )−l(θl ). We consider the Monte Carlo error for the estimates (κl , αl , λl )
in the simulation study in Section 5.2. The computation of an estimate
(κl , αl ) and a log likelihood ratio l(θl+1 ) − l(θl ) took around 70 minutes.
To illustrate how (κl , αl , λl ) depends on ωl , a collection of estimates are
given in Table 1.
[Table 1 about here.]
The estimates κl and αl vary considerably as a function of ωl whereas λl
is essentially constant.
Parametric bootstrap
The repeated sampling properties of the parameter estimates are studied
using a parametric bootstrap based on 100 independent simulated data sets.
The data sets are simulated under the fitted model with parameters equal to
the maximum likelihood estimates obtained in Section 5.1. It is very time
consuming to repeat the whole profile likelihood procedure for each simulated
data set. We therefore use an adaptation of the parametric bootstrap where ω
is assumed known and equal to the maximum likelihood estimate. In a full
parametric bootstrap we should also maximize with respect to ω. However,
Table 1 suggests that regarding the estimate of λ, it does not matter much
whether we maximize the likelihood function over all three parameters κ, α, and ω or maximize only over κ and α for fixed ω.
Bootstrap estimates of the means for the sampling distributions of the
estimates κ6 , α6 and λ6 for fixed ω = ω6 = 0.6 and the moment estimate (1)
are -3.6 (-3.7), 2.3 (2.4), 0.06 (0.06), and 0.06 (0.06), respectively, with the
parameter values used for the bootstrap simulation given in parentheses. The
2.5% and 97.5% quantiles are κ6 : (-4.2;-2.9), α6 : (0.7;4.5), λ6 : (0.03;0.08),
and moment estimate: (0.03;0.08). The estimates of κ, α, and λ for fixed ω seem close to unbiased but display considerable variation. The right plot
in Figure 2 shows a so-called confidence net for λ obtained from its confidence distribution (Schweder and Hjort, 2002) estimated from the bootstrap
simulations. For each level of confidence on the vertical axis, the horizontal
interval from the left to the right branch of the net provides a tail-symmetric
confidence interval.
Our estimates are affected by sampling variation, but also by Monte Carlo
error due to the likelihood derivatives being evaluated using MCMC. We estimate the Monte Carlo error by pairwise comparison of two independent
optimizations for each simulated data set. The estimated Monte Carlo standard deviations for κ6 , α6 , and λ6 are 0.1, 0.2, and 0.002. The Monte Carlo
standard deviations seem reasonably small compared with the variability of
the bootstrap distributions.
The average time used for a bootstrap simulation and subsequent optimization is around 30 minutes.
Bayesian inference
From a numerical point of view the Bayesian approach is very advantageous
since Monte Carlo estimation of posterior expectations is rather simple compared with maximization of the likelihood function.
Hedley and Buckland (2004) mention that minke whales in the Antarctic come in pods of 1-3 animals. We use this information to illustrate a Bayesian approach. About 90% of the probability mass of a negative binomial distribution with mean α = 2 and variance 2α = 4 falls on {0, 1, . . . , 5}.
It therefore seems reasonable to use an informative N (2, 1) prior (truncated
at zero) for α. We further impose uniform priors on exp(κ) and ω on the
bounded intervals ]0.01, 0.2[ and ]0.1, 1.5[, respectively.
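The prior reasoning can be checked directly: a negative binomial with mean μ = 2 and variance 4 has size r = μ²/(var − μ) = 2 and success probability p = r/(r + μ) = 0.5, and its mass on {0, 1, . . . , 5} is easily computed.

```python
from scipy.stats import nbinom

mu, var = 2.0, 4.0
r = mu**2 / (var - mu)      # size parameter: 2
p = r / (r + mu)            # success probability: 0.5
mass = nbinom.cdf(5, r, p)  # P(X <= 5)
```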
The marginal posterior means and 2.5% and 97.5% quantiles for κ, α, ω,
and λ are -3.6 (-4.3;-2.8), 2.2 (1.0;3.5), 0.7 (0.4;1.0), and 0.06 (0.04;0.08). A
posterior credibility net for λ is shown in the right plot in Figure 2, i.e., for
a probability q on the vertical axis, the horizontal interval from the left to
the right branch of the credibility net provides a tail-symmetric q posterior
credibility interval. The posterior means are very similar to the maximum
likelihood estimates. The credibility net for λ agrees well with the confidence
net, but is slightly narrower due to the use of prior information.
The MCMC computations for the Bayesian analysis took about 20 minutes.
Spatial K-function for whales
We conclude our study of the VSS whale data by considering a non-parametric
estimate of the K-function for the whale process X. Based on the data yi for
the ith transect leg, a non-parametric estimate of the K-function is given by
K̂i(t) = λ̃^{−2} Σ_{ξ,η∈yi: ξ≠η} wξ,η 1[0 < ‖ξ − η‖ < t]
where λ̃ is the moment estimate (1) of the intensity and wξ,η is an edge
correction factor, see Baddeley et al. (2000) or Section 4.3.2 in Møller and
Waagepetersen (2003). The edge correction requires that t is less than the
width of the strip Si (4 km in our application). Our estimate based on
all the transect legs is simply the average K̂(t) = Σ_{i=1}^{m} K̂i(t)/m. Figure 3 shows L̂(t) − t where L̂(t) = √(K̂(t)/π). For a Poisson process, L(t) − t = √(K(t)/π) − t is zero for all t so our estimate L̂(t) − t which takes values larger than zero indicates clustering (the theoretical value of L(t) − t is √(t² + [1 − exp(−t²/(4ω²))]/(π exp(κ))) − t > 0 under the shot noise Cox process).
The dotted curves in Figure 3 are 95% pointwise confidence bands: for each
t > 0 the corresponding values of the dotted curves provide a 95% confidence
interval for L̂(t) − t under the shot noise Cox process. The confidence bands
illustrate the large variability of K̂(t). Except for very small t, L̂(t) − t falls
within the confidence bands. Thus the plot does not provide strong evidence
against our model (the p-value obtained from a Monte Carlo test based on
the integrated squared distance between L̂(t) − t and the theoretical value of
L(t) − t under the fitted shot noise Cox process is 21%).
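A translation-corrected estimator of this kind can be sketched as follows for a single rectangular strip. The point configuration at the end is a toy example; the precise edge-correction weights of Baddeley et al. (2000) used in the paper are only approximated here by the standard translation correction.

```python
import numpy as np

def khat_strip(points, lam, width, length, ts):
    """Estimate K(t) from points in the strip [0, length] x [-width/2, width/2]
    with translation edge correction; valid only for t below the strip width.
    """
    K = np.zeros_like(ts, dtype=float)
    n = len(points)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            dx, dy = np.abs(points[a] - points[b])
            d = np.hypot(dx, dy)
            w = (length - dx) * (width - dy)   # overlap area of shifted strip
            if w > 0:
                K += (d < ts) / (lam**2 * w)
    return K

def lhat_minus_t(points, lam, width, length, ts):
    """L(t) - t with L(t) = sqrt(K(t)/pi); positive values suggest clustering."""
    return np.sqrt(khat_strip(points, lam, width, length, ts) / np.pi) - ts

pts = np.array([[1.0, 0.0], [2.0, 0.0], [10.0, 1.0]])   # toy configuration
ts = np.array([0.5, 2.0])
K = khat_strip(pts, lam=1.0, width=4.0, length=20.0, ts=ts)
```

In the toy configuration only the first pair lies within distance 2, so only the larger t picks up a contribution.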
[Figure 3 about here.]
The K-function uniquely characterizes the second order properties of a
stationary point process. We also investigated the first order properties of
our model by comparing the observed counts of whales within each transect
leg with their sampling distributions under the fitted shot noise Cox process.
Only the high count (18) of whales in the lower left transect leg in Figure 1
falls outside the 2.5% and 97.5% quantiles (0 and 11, respectively).
Final remarks
The profile likelihood function (Figure 2) for the VSS data has a well-defined
maximum at ω = 0.6. In addition to the VSS block, we considered another
block, VSN, for which the profile likelihood function was nearly flat with
multiple local maxima, see Figure 2. One may speculate whether this indicates aggregation at different spatial scales where large scale aggregation
might be explained by spatial covariates concerning, say, sea depth or water
temperature. It is also plausible that such covariates might explain the high
count of whales in the lower left transect leg in Figure 1 which also causes
the slight lack of fit seen in Figure 3 for small t. Failing to take possible
extra sources of variation into account may lead to an underestimation of
the uncertainty of the whale intensity estimate. Including too many sources
of variation might on the other hand lead to lack of identification given the
relatively sparse data.
Covariates can easily be incorporated in our model by multiplying the random intensity function (2) by a positive function e(·) depending on the covariates. The random intensity function for the observed whales is then
ZY (ξ|Φ) = e(ξ)p(ξ)Z(ξ|Φ)
which accounts for both small scale clustering and large scale aggregation
due to the covariates. Note that ZY (ξ|Φ) = p̃(ξ)M Z(ξ|Φ), ξ ∈ S, where
M = maxξ∈S e(ξ)p(ξ) and p̃(ξ) = e(ξ)p(ξ)/M . Hence, our simulation algorithm from Appendix B is still applicable if we replace p(·) with p̃(·) and the
standard gamma marks with Γ(α, M ) marks.
Another very flexible class of Cox processes is given by log Gaussian
Cox processes (Møller et al., 1998) where the log random intensity function
is a Gaussian process. For these processes we would not need to consider
extended regions E since the marginal distribution of X ∩ B is known for a
log Gaussian Cox process X on R2 and a bounded region B. On the other
hand, for computational reasons it is required to discretize the Gaussian
process and this introduces the problem of choosing a suitable discretization,
see Waagepetersen (2004).
Skaug et al. (2004) estimated the detection probability p through a parametric model for the hazard probability Q. A simulation approach accounting
for uncertainty in this estimate and that of the surfacing rate φ was employed to obtain an approximate confidence distribution for the abundance
of minke whales in the survey area. A similar approach might be taken in our
shot-noise Cox process setup to account also for uncertainty in the detection probability.
Our approach to maximum likelihood estimation is computationally demanding, but the computing times do not seem prohibitive. The computing
time can moreover easily be reduced by running optimization and bridge sampling computations in parallel on several computers. The Bayesian approach
is a computationally much easier alternative. However, the specification of
priors may be a controversial issue. For instance, not everyone might agree
on the informative prior for α used in Section 5.3 which in turn induces an
informative prior on λ.
Acknowledgement We are grateful to the editor and the two referees for
detailed and valuable comments and suggestions.
References
Aldrin, M., Holden, M. & Schweder, T. (2003). Comment on Cowling’s ”Spatial Methods for Line Transect Surveys”. Biometrics 59, 186–188.
Baddeley, A. J., Møller, J. & Waagepetersen, R. (2000). Non- and semiparametric estimation of interaction in inhomogeneous point patterns. Statistica Neerlandica 54, 329–350.
Brix, A. (1999). Generalized gamma measures and shot-noise Cox processes.
Advances in Applied Probability (SGSA) 31, 929–953.
Buckland, S. T., Anderson, D. R., Burnham, K. P., Laake, J. L., Borchers,
D. L. & Thomas, L. (2004). Advanced distance sampling. Oxford University Press, Oxford.
Cowling, A. (1998). Spatial methods for line transect surveys. Biometrics 54,
Gelman, A. & Meng, X.-L. (1998). Simulating normalizing constants: from
importance sampling to bridge sampling to path sampling. Statistical Science 13, 163–185.
Hagen, G. S. & Schweder, T. (1995). Point clustering of minke whales in the
northeastern Atlantic. In: Whales, Seals, Fish and Man (eds. A. Schytte
Blix, L. Walløe and Ø. Ulltang), Elsevier, Amsterdam, 27–33.
Hedley, S. L. & Buckland, S. T. (2004). Spatial models for line transect
sampling. Journal of Agricultural, Biological, and Environmental Statistics
9, 181–199.
International Whaling Commission (2004). Report of the scientific committee. Journal of Cetacean Research and Management 6, 171–179.
Møller, J. (2003). Shot noise Cox processes. Advances in Applied Probability
35, 614–640.
Møller, J. & Waagepetersen, R. P. (2003). Statistical inference and simulation
for spatial point processes. Chapman and Hall/CRC, Boca Raton.
Møller, J., Syversveen, A. R. & Waagepetersen, R. P. (1998). Log Gaussian
Cox processes. Scandinavian Journal of Statistics 25, 451–482.
Robert, C. P. & Casella, G. (2004). Monte Carlo Statistical Methods.
Springer-Verlag, New York, 2nd edition.
Schweder, T. (1974). Transformation of point processes: Application to Animal Sighting and Catch Problems, with special Emphasis on Whales. Ph.D.
thesis, University of California, Berkeley, 183 pp.
Schweder, T. (1977). Point process models for line transect experiments.
In: Recent developments in statistics (eds. J. R. Barra, B. Van Cutsem,
F. Brodeau and G. Romier), North Holland, 221– 242.
Schweder, T. & Hjort, N. L. (2002). Confidence and likelihood. Scandinavian
Journal of Statistics 29, 309–332.
Skaug, H. J. (2006). Markov modulated Poisson processes for clustered line
transect data. Environmental and Ecological Statistics 13, to appear.
Skaug, H. J., Øien, N., Schweder, T. & Bøthun, G. (2004). Abundance of minke whales (Balaenoptera acutorostrata) in the northeast Atlantic: variability in time and space. Canadian Journal of Fisheries and Aquatic Sciences 61, 870–886.
Waagepetersen, R. (2004). Convergence of posteriors for discretized log Gaussian Cox processes. Statistics and Probability Letters 66, 229–235.
Wolpert, R. L. & Ickstadt, K. (1998). Poisson/gamma random field models
for spatial statistics. Biometrika 85, 251–267.
In Appendices A–C a working knowledge of spatial point process densities and Markov chain Monte Carlo methods is assumed. Background material on these subjects can be found in e.g. Møller and Waagepetersen (2003) and Robert and Casella (2004).
A Relation to shot noise G Cox processes
Our shot-noise Cox process is a special case of a shot noise G Cox process (Brix, 1999), which is obtained when Φ is a Poisson process with intensity function of the form

ζ(c, γ) = exp(κ) β^{−α} γ^{α−1} exp(−γ/β)/Γ(α)

where κ ∈ R, α > −1, and β > 0. With our rather small data set we cannot determine both α and β well, and hence we choose to fix β = 1. Alternatively one might fix α = 0, whereby a so-called Poisson-gamma process (Wolpert and Ickstadt, 1998) is obtained. With α ≤ 0 the process of cluster centres C = {c ∈ R² : (c, γ) ∈ Φ for some γ} is not locally finite, which is a nuisance for computational reasons.
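For α > 0 and β = 1 the intensity ζ factorizes, so Φ restricted to a bounded window can be simulated directly: the cluster centres form a homogeneous Poisson process with rate exp(κ), and the marks are i.i.d. Gamma(α, 1). A minimal Python sketch of this (the window and the parameter values are illustrative, not the paper's):

```python
import numpy as np

def simulate_marked_centres(kappa, alpha, window, rng):
    """Simulate the marked Poisson process Phi on a rectangle.

    With beta = 1 and alpha > 0 the intensity
    zeta(c, gamma) = exp(kappa) gamma^(alpha-1) exp(-gamma) / Gamma(alpha)
    factorizes: centres are homogeneous Poisson with rate exp(kappa),
    marks are i.i.d. Gamma(alpha, 1).
    """
    xmin, xmax, ymin, ymax = window
    area = (xmax - xmin) * (ymax - ymin)
    n = rng.poisson(np.exp(kappa) * area)          # number of cluster centres
    centres = np.column_stack([rng.uniform(xmin, xmax, n),
                               rng.uniform(ymin, ymax, n)])
    marks = rng.gamma(alpha, 1.0, n)               # cluster sizes gamma_j
    return centres, marks

rng = np.random.default_rng(1)
# illustrative values only (kappa and alpha in the range of Table 1)
centres, marks = simulate_marked_centres(-2.2, 0.5, (0, 100, 0, 100), rng)
```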
The notation in the following Appendices B and C is as in Sections 4.1 and 4.2, where we omit the index i for the different transect legs.

B Conditional simulation using MCMC
The conditional density of ΦE given YS = y is proportional to the joint
density (6). Simulations from the conditional distribution of ΦE can be
obtained using a birth/death MCMC algorithm as described in Chapter 7
in Møller and Waagepetersen (2003). In each MCMC iteration it is then
required to compute the integral
∫_S Z_Y(ξ|φ′; ω) dξ = ∫_S p(ξ) Z(ξ|φ′; ω) dξ

appearing in f(y, φ′; θ) (cf. (4) and (6)), where φ′ is the proposal for a new state of the MCMC chain and Z_Y(·|φ′; ω) = p(·)Z(·|φ′; ω) is the intensity function of the observed whales. Numerical quadrature is required to compute the integral due to the rather irregular form of the detection probability p(·).
Instead we consider a data augmentation approach where we simulate the
joint distribution of Φ and the unobserved whales XS = (X ∩ S) \ YS within
S given YS = y. Given ΦE , XS and YS are independent Poisson processes
and the joint density of (YS, XS, ΦE) is given by

f^A(y, x, φ; θ) ∝ p(y|y ∪ x) f(y ∪ x | Z(·|φ; ω)) f(φ)    (9)

where

p(y|y ∪ x) = ∏_{ξ∈y} p(ξ) ∏_{η∈x} (1 − p(η))

is the probability of observing y given all whales YS ∪ XS = y ∪ x, and

f(y ∪ x | Z(·|φ; ω)) = exp(|S| − ∫_S Z(ξ|φ; ω) dξ) ∏_{η∈y∪x} Z(η|φ; ω)

is the Poisson process density of YS ∪ XS given ΦE = φ.
The conditional density of ΦE given YS = y and XS = x is proportional to
f A (y, x, φ; θ) and easy to evaluate since Z(η|φ; ω) is just a sum of scaled truncated Gaussian densities, cf. Section 3.1. The full conditional of XS given YS
and ΦE is simply a Poisson process with intensity function (1−p(·))Z(·|φ; ω).
We then simulate (XS, ΦE) given YS = y using a Gibbs/Metropolis-within-Gibbs algorithm where we alternate between the following two steps:
1. Gibbs update for XS given ΦE and YS where the current state of XS
is replaced by a simulation of a Poisson process with intensity function
(1 − p(·))Z(·|φ; ω).
2. Single point Metropolis birth/death updates for ΦE given XS and YS .
For computational speed it is convenient to work with a kernel of bounded
range since a birth or death of a marked cluster centre then only influences
the intensity function for the whales in a neighbourhood of the added or
removed marked cluster centre.
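The two steps can be sketched in Python. This is a minimal illustration under simplifying assumptions, not the paper's implementation: it uses an untruncated Gaussian kernel (the paper recommends a bounded-range kernel), a constant stand-in for the detection probability p(·), β = 1, the kernel mass treated as integrating to one over S (edge effects ignored), and made-up values for ω, κ, α, the window E, and the observed points y.

```python
import numpy as np

OMEGA = 0.5                      # kernel bandwidth omega (illustrative)
KAPPA, ALPHA = -2.2, 0.5         # intensity parameters (illustrative)
E = (0.0, 10.0, 0.0, 10.0)       # extended window (xmin, xmax, ymin, ymax)
AREA_E = (E[1] - E[0]) * (E[3] - E[2])

def p_detect(pts):
    """Stand-in for the detection probability p(.); the real p is survey-specific."""
    return np.full(len(pts), 0.6)

def kernel(pts, centre):
    """Bivariate Gaussian kernel k_omega (untruncated here for simplicity)."""
    d2 = np.sum((pts - centre) ** 2, axis=1)
    return np.exp(-d2 / (2 * OMEGA**2)) / (2 * np.pi * OMEGA**2)

def Z(pts, centres, marks):
    """Shot-noise intensity Z(xi | phi; omega) = sum_j gamma_j k_omega(xi - c_j)."""
    z = np.zeros(len(pts))
    for c, g in zip(centres, marks):
        z += g * kernel(pts, c)
    return z

def gibbs_update_X(centres, marks, rng):
    """Step 1: X_S ~ Poisson with intensity (1 - p(.)) Z(. | phi; omega),
    simulated cluster-wise and thinned with retention probability 1 - p."""
    pts = [rng.normal(c, OMEGA, size=(rng.poisson(g), 2))
           for c, g in zip(centres, marks)]
    pts = np.concatenate(pts) if pts else np.empty((0, 2))
    keep = rng.uniform(size=len(pts)) < 1 - p_detect(pts)
    return pts[keep]

def birth_death_update_Phi(centres, marks, x, y, rng):
    """Step 2: one Metropolis birth/death update for Phi_E given X_S and Y_S.
    Prior: Poisson process with rate exp(kappa) and Gamma(alpha, 1) marks."""
    pts = np.concatenate([y, x])
    n = len(centres)
    if rng.uniform() < 0.5 or n == 0:              # propose a birth
        c = rng.uniform([E[0], E[2]], [E[1], E[3]])
        new_c = np.vstack([centres, c])
        new_g = np.append(marks, rng.gamma(ALPHA, 1.0))
        log_r = KAPPA + np.log(AREA_E / (n + 1))   # prior/proposal ratio
    else:                                          # propose a death
        i = rng.integers(n)
        new_c, new_g = np.delete(centres, i, axis=0), np.delete(marks, i)
        log_r = np.log(n) - KAPPA - np.log(AREA_E)
    eps = 1e-300                                   # guard against log(0)
    log_r += np.sum(np.log(Z(pts, new_c, new_g) + eps)
                    - np.log(Z(pts, centres, marks) + eps))
    log_r -= np.sum(new_g) - np.sum(marks)         # change in int_S Z (edges ignored)
    if np.log(rng.uniform()) < log_r:
        return new_c, new_g
    return centres, marks

rng = np.random.default_rng(0)
y = np.array([[2.0, 3.0], [2.3, 3.1], [7.0, 6.5]])  # "observed whales" (made up)
centres, marks = np.empty((0, 2)), np.empty(0)
for _ in range(200):
    x = gibbs_update_X(centres, marks, rng)
    centres, marks = birth_death_update_Phi(centres, marks, x, y, rng)
```

With a bounded-range kernel, as recommended above, only the points of y ∪ x near the added or removed centre would enter the log-ratio, which is what makes each update cheap.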
C Computation of log likelihood ratios
A likelihood ratio L(θ_2)/L(θ_1) can be calculated using bridge sampling (Gelman and Meng, 1998; Møller and Waagepetersen, 2003):

L(θ_2)/L(θ_1) = ∏_{l=0}^{k−1} E_{θ^b_{2l},y}[f^A(y, XS, ΦE; θ^b_{2l+1})/f^A(y, XS, ΦE; θ^b_{2l})] / E_{θ^b_{2l+2},y}[f^A(y, XS, ΦE; θ^b_{2l+1})/f^A(y, XS, ΦE; θ^b_{2l+2})]    (10)

where f^A is given by (9), θ^b_0 = θ_1, θ^b_{2k} = θ_2, and θ^b_j, j = 1, . . . , 2k − 1, are "intermediate" parameter values between θ_1 and θ_2, e.g. obtained by linear interpolation. In each factor in (10), f^A(y, XS, ΦE; θ^b_{2l+1}) is a "bridge" between f^A(y, XS, ΦE; θ^b_{2l}) and f^A(y, XS, ΦE; θ^b_{2l+2}). An approximation of the likelihood ratio is obtained by replacing the conditional expectations with Monte Carlo estimates.
If θ^b_{j+1} and θ^b_j are not sufficiently "close", the conditional variance of the ratio f^A(y, XS, ΦE; θ^b_{j+1})/f^A(y, XS, ΦE; θ^b_j) may be huge, so that very large Monte Carlo samples are needed to compute the conditional expectation of the ratio. Some pilot experiments are typically needed in order to determine a suitable number k.
Note that for each expectation E_{θ^b_{2l},y}[f^A(y, XS, ΦE; θ^b_{2l+1})/f^A(y, XS, ΦE; θ^b_{2l})], the extended window E must be chosen according to the maximal ω value in θ^b_{2l} and θ^b_{2l+1}; see Section 4.1.
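The telescoping product can be illustrated on a toy problem where the normalizing constants are known. In the Python sketch below, an unnormalized Gaussian family stands in for f^A, exact sampling stands in for MCMC output, and a geometric bridge serves as the intermediate density; all names and values are illustrative assumptions:

```python
import numpy as np

def q(x, theta):
    """Unnormalized density exp(-x^2 / (2 theta)); its normalizing constant
    is sqrt(2 pi theta), so the ratio can be checked analytically."""
    return np.exp(-x**2 / (2 * theta))

def bridge_ratio(theta1, theta2, k=5, n=20000, seed=42):
    """Estimate Z(theta2)/Z(theta1) via a ladder of k steps, using a
    geometric bridge sqrt(q_l q_{l+1}) as the intermediate density in
    each factor of the telescoping product."""
    rng = np.random.default_rng(seed)
    thetas = np.linspace(theta1, theta2, k + 1)
    log_ratio = 0.0
    for t0, t1 in zip(thetas[:-1], thetas[1:]):
        x0 = rng.normal(0.0, np.sqrt(t0), n)   # exact draws; MCMC output in general
        x1 = rng.normal(0.0, np.sqrt(t1), n)
        num = np.mean(np.sqrt(q(x0, t0) * q(x0, t1)) / q(x0, t0))
        den = np.mean(np.sqrt(q(x1, t0) * q(x1, t1)) / q(x1, t1))
        log_ratio += np.log(num) - np.log(den)
    return np.exp(log_ratio)

est = bridge_ratio(1.0, 4.0)   # true value: sqrt(4/1) = 2
```

Widening a step of the ladder inflates the variance of the two Monte Carlo ratios, which is the "closeness" issue described above; pilot runs choosing k play the same role here.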
Figure 1: Left: transect and observed whales for the VSS block with distances in km. The transecting was broken when sighting conditions became
unsuitable due to sea state, fog, or darkness, and restarted along the transect
leg when conditions improved. Right: gray scale plots of the detection probability for the two rightmost transect leg segments in the VSS block. Dark
means high detection probability and white dots show positions of observed
whales. The numbers on the x and y axes refer to distances in km.
Figure 2: Left: profile log likelihood functions l_p(ω) = max_{(κ,α)} l(θ) obtained for the VSS (solid line) and VSN (dotted line) blocks by cumulating log likelihood ratios l(θ_{l+1}) − l(θ_l). The small vertical bars indicate Monte Carlo confidence intervals for the differences l(θ_{l+1}) − l(θ_l). Right: confidence net (solid line) and posterior credibility net (dotted line) for λ; see Sections 5.2 and 5.3. For each confidence level/posterior probability on the vertical axis, the horizontal intervals between the left and right branches of the curves provide a confidence and a posterior credibility interval, respectively.
Figure 3: Solid line, L̂(t) − t (distance t in km); dotted lines, 95% confidence band based on simulations of the fitted shot noise Cox process; dashed line, L(t) − t for the fitted shot noise Cox process, for which L(t) − t > 0. A Poisson process has L(t) − t = 0.
ω_l     0.2    0.4    0.6    0.8
κ_l    −2.2   −3.0   −3.7   −4.1   −4.4
α_l     0.5    1.2    2.4    3.7    4.9
λ_l     0.06   0.06   0.06   0.06   0.06

Table 1: A collection of estimates κ_l, α_l and λ_l obtained by maximizing the log likelihood l(κ, α, ω_l) with respect to κ and α for each fixed ω_l.