Iris Pupil Detection by Structure Tensor Analysis
Fernando Alonso-Fernandez and Josef Bigun
Halmstad University. Box 823. SE 301-18 Halmstad, Sweden
{Fernando.Alonso-Fernandez,Josef.Bigun}@hh.se
Abstract—This paper presents a pupil detection/segmentation algorithm for iris images based on Structure Tensor analysis. Eigenvalues of the structure tensor matrix have been observed to be high at pupil boundaries and specular reflections of iris images. We exploit this fact to detect the specular reflections region and the boundary of the pupil in a sequential manner. Experimental results are given using the CASIA-IrisV3-Interval database (249 contributors, 396 different eyes, 2,639 iris images). Results show that our algorithm works especially well in detecting the specular reflections (98.98% success rate), while the pupil boundary is correctly detected in 84.24% of the images.
I. INTRODUCTION
Biometric authentication has been receiving considerable attention in recent years due to the increasing demand for automatic person recognition. The term “biometrics” refers here to the automatic recognition of an individual based on behavioral and/or physiological characteristics (e.g., fingerprints, face, iris, voice, signature, etc.), which cannot be stolen, lost, or copied [1]. Among all biometric techniques, iris recognition has traditionally been regarded as one of the most reliable and accurate biometric identification systems available [2].
The iris is a colored ring of tissue around the pupil
through which light enters the interior of the eye. Figure 1
shows an example image of a captured iris. The sclera is
the white region that surrounds the outer part of the iris.
The pupil region generally appears darker than the iris
and may also have specular reflections as a result of light
sources typically used in commercial acquisition systems.
The details of the iris texture are believed to be different
between different people and also between the left and
right eye of the same person, thus providing a valuable
source for personal recognition [3].
Iris analysis begins with the detection of the inner and
outer boundaries of the iris. The success of this task is
crucial for the good performance of an iris recognition system. Early works include Daugman’s approach
using an integro-differential operator [4] and the method
of Wildes involving edge detection and circular Hough
transform [5]. They are based on the assumption that the
boundaries of the iris can be modeled as two concentric
circles. Much of the subsequent research in this area has
tried to improve Wildes’ idea of using edge detection and a Hough transform, with suggestions to reduce its inherent computational burden or to cope with the lack of enough edge points to define a circle [2].
In this paper, we propose a pupil detection/segmentation algorithm for iris images based on analysis of the Structure Tensor. In a sequential fashion, we
first detect the specular reflections that typically appear
within the pupil and then, we detect the pupil region.
In the proposed method, no assumption is made about
circularity of the pupil, thus allowing the detection of non-circular or irregular pupil boundaries. Reported results
show the effectiveness of the proposed algorithm with
good quality images. One drawback of the algorithm
is the assumption that the pupil boundary is a closed
curve. Relaxing this assumption would allow the correct
detection in worse conditions, namely in the presence
of eyelash/eyelid occlusion, spurious reflections, off-angle images, etc. [6].
Fig. 1. Iris image with typical elements labeled: eyelashes, specular reflections, sclera, iris, pupil and eyelid.
II. THE 2D STRUCTURE TENSOR
Given a gaussian smoothed image I[p], where p = [x, y] denotes the coordinates of a point in 2D, the 2D Structure Tensor Sw[p] at a given pixel p is the 2×2 matrix [7]:

S_w[p] = \sum_{r} w[r] \begin{bmatrix} (I_x[p-r])^2 & I_x[p-r]\, I_y[p-r] \\ I_x[p-r]\, I_y[p-r] & (I_y[p-r])^2 \end{bmatrix}    (1)

with the summation index r ranging over a set of coordinate pairs {−m, ..., m} × {−m, ..., m} and w a weighting window centered at p (typically gaussian) such that the sum of all weights is 1. The values Ix[p] and Iy[p] are the estimated partial derivatives of the image I[p] at pixel p.
If we rewrite the matrix of Eq. 1 as

S_w[p] = \begin{bmatrix} a & c \\ c & b \end{bmatrix}    (2)

then the eigenvalues λ1, λ2 of Sw[p] (ordered so that λ1 ≥ λ2) are found to be:

\lambda_1 = 0.5 \left( a + b + \sqrt{(a-b)^2 + 4c^2} \right)    (3)

\lambda_2 = 0.5 \left( a + b - \sqrt{(a-b)^2 + 4c^2} \right)    (4)
The importance of the 2D Structure Tensor Sw [p]
is given by the fact that the eigenvalues λ1 , λ2 (and
their corresponding eigenvectors e1 , e2 ) summarize the
distribution of the gradient ∇I = (Ix , Iy ) of image
I[p] within the window w[p]. If for example λ1 > λ2 ,
then eigenvector e1 gives the direction that is maximally
aligned with the gradient within w[p]. In particular, if λ1 > 0 and λ2 = 0, then the values of I[p] within the window vary along the direction e1 and are constant along e2.
On the other hand, if λ1 = λ2 , the gradient in w[p] has
no predominant direction (balanced directions), which is
the case for instance when there is rotational symmetry
within that window. In particular, if λ1 = λ2 = 0, then
I[p] is constant within w[p] (∇I = (Ix , Iy ) = (0, 0)).
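As an illustration of Eqs. 1-4, the per-pixel eigenvalue maps can be computed as sketched below. This is a minimal sketch assuming a Python/NumPy environment with SciPy available; the function name, the Sobel-based derivative estimates and the use of a gaussian filter as the weighting window are our assumptions, not details given in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_eigenvalues(image, pre_sigma=1.5, window_sigma=1.5):
    """Per-pixel eigenvalues (lambda1 >= lambda2) of the 2D structure tensor.

    pre_sigma    : gaussian pre-smoothing of the image (Section III uses sigma = 1.5).
    window_sigma : standard deviation of the gaussian weighting window w of Eq. 1.
    """
    I = gaussian_filter(image.astype(float), pre_sigma)  # gaussian smoothed image I[p]
    Ix = sobel(I, axis=1)                                # estimated partial derivative along x
    Iy = sobel(I, axis=0)                                # estimated partial derivative along y

    # Window-weighted gradient products: the entries a, b, c of Eq. 2, for every pixel
    a = gaussian_filter(Ix * Ix, window_sigma)
    b = gaussian_filter(Iy * Iy, window_sigma)
    c = gaussian_filter(Ix * Iy, window_sigma)

    # Closed-form eigenvalues of the symmetric 2x2 matrix (Eqs. 3 and 4)
    root = np.sqrt((a - b) ** 2 + 4.0 * c ** 2)
    return 0.5 * (a + b + root), 0.5 * (a + b - root)
```

Increasing window_sigma reproduces the averaging effect discussed in Section III: the eigenvalue maps become smoother and boundaries more diffuse, as in Figure 2.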
III. IRIS PUPIL DETECTION BY STRUCTURE TENSOR ANALYSIS
We propose the use of the Structure Tensor to detect
and segment the pupil region of iris images. In Figure 2,
we depict eigenvalues λ1 and λ2 of the iris image shown
in Figure 1 for different sizes of the weighting gaussian
window w[p]. The iris image is pre-smoothed using a gaussian window of size 3×3 and standard deviation σ = 1.5.
The overall model of our detection/segmentation system
is depicted in Figure 3, which is described next.
It can be observed in Figure 2 that eigenvalue λ2 exhibits its highest values at the specular reflections, especially when the window w[p] is large enough to capture the rotational symmetry around the center of the reflection point (recall from Section II that a high λ2 indicates a balanced distribution of gradient directions, as produced by rotational symmetry). For a window w[p] of size 20 × 20, we first detect the maximum of λ2. We then
binarize the image of λ2 by using the Otsu method
[8] and select the connected binary element which falls
on the detected maximum. The border of the selected
binary element will give us the boundaries of the specular
reflections region. Also, the centroid of the binary element
will be the centroid of the specular reflections. The whole
process can be seen in Figure 3, top.
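The reflection-detection step just described could be realized as in the following sketch, which assumes the structure_tensor_eigenvalues helper sketched in Section II and the scikit-image library; the function name and the exact choice of routines (threshold_otsu, label, regionprops, find_boundaries) are our assumptions rather than the authors' implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.segmentation import find_boundaries

def detect_specular_reflections(image):
    """Locate the specular reflections region from lambda2 (Figure 3, top)."""
    # lambda2 with a large weighting window (the paper uses a 20x20 window, sigma = 10)
    _, lam2 = structure_tensor_eigenvalues(image, pre_sigma=1.5, window_sigma=10)

    # Otsu binarization of the lambda2 map [8]
    binary = lam2 > threshold_otsu(lam2)

    # Keep only the connected binary element that falls on the maximum of lambda2
    peak = np.unravel_index(np.argmax(lam2), lam2.shape)
    labels = label(binary)
    reflections = labels == labels[peak]

    # Border and centroid of the selected element give the reflections boundary/centroid
    border = find_boundaries(reflections, mode='inner')
    centroid = regionprops(reflections.astype(int))[0].centroid
    return reflections, border, centroid
```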
After detecting the border and centroid of the specular
reflections region, we use λ1 as follows to find the
boundary of the pupil. It can be observed in Figure 2
that λ1 exhibits its highest values at the boundaries
of the pupil region and the specular reflections, as well
as on the eyelashes region. Increasing the size of the
weighting window w[p] has an averaging effect, resulting
in lower spatial resolution and more diffuse boundaries.
We first binarize with the Otsu method [8] the image of
λ1 obtained with a window w[p] of size 3 × 3. Then,
we remove from the binarized image the region of the
specular reflections previously detected and finally, we
apply the watershed transform [9]. When doing so, the
connected element which falls on the centroid of the
specular reflections region should correspond to the pupil. This process can be seen in Figure 3, bottom. We
impose the condition that the size of the detected pupil
should not exceed a percentage of the total image size
(50% in our experiments), otherwise the image is marked
as “non passing” the segmentation stage.
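A possible realization of this second step is sketched below, again assuming the structure_tensor_eigenvalues helper and scikit-image; in particular, applying scikit-image's marker-free watershed to the binarized λ1 map is our reading of the method, since the paper does not specify an implementation of the watershed transform [9].

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def detect_pupil(image, reflections, centroid, max_area_ratio=0.5):
    """Segment the pupil from lambda1 (Figure 3, bottom).

    reflections, centroid : mask and centroid from detect_specular_reflections().
    """
    # lambda1 with a small weighting window (the paper uses a 3x3 window, sigma = 1.5)
    lam1, _ = structure_tensor_eigenvalues(image, pre_sigma=1.5, window_sigma=1.5)

    # Otsu binarization of lambda1, then removal of the detected reflections region
    binary = lam1 > threshold_otsu(lam1)
    binary = binary & ~reflections

    # Watershed transform: each closed bright curve in the binary map encloses one basin
    basins = watershed(binary.astype(np.uint8))

    # The basin that falls on the centroid of the reflections should be the pupil
    row, col = int(round(centroid[0])), int(round(centroid[1]))
    pupil = basins == basins[row, col]

    # Size condition: reject regions larger than 50% of the image ("non passing")
    if pupil.sum() > max_area_ratio * image.size:
        return None
    return pupil
```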
Fig. 2. Eigenvalues λ1 (left column) and λ2 (right column) of the iris image of Figure 1 for different sizes of the weighting gaussian window w[p]: 3×3 (σ=1.5), 10×10 (σ=5), 20×20 (σ=10) and 40×40 (σ=20). To depict the eigenvalues, they are re-scaled with a linear mapping to 256 gray tones, so that white represents the maximum eigenvalue of the image and black represents 0.
For a given size of the pupil, one can also detect the pupil by means of the Generalized Structure Tensor (GST) [10], which is essentially template matching in the tensor domain and can be conveniently expressed using the complex version of the structure tensor, i.e.

I_{20} = \sum_{p} c(p) \left( I_x(p) + i\, I_y(p) \right)^2    (5)

where c(p) is defined as the complex version of the structure tensor response of a circle:

c(p) = \exp(-i 2\varphi) \, (x^2 + y^2)^{\gamma} \exp\!\left( -\frac{x^2 + y^2}{2\sigma_2^2} \right)    (6)

Here γ and σ2 are parameters that together determine the radius of the pupil and the precision of the filter (the width of the pupil boundary region). It can be shown that a high response in terms of the magnitude of I20, together with a zero argument of I20, is obtained at a point if there are edges at the prescribed (same) distance from that point and their local orientations (structure tensors) agree with those of a circle. The GST-based detection can be selectively applied to the candidate centers to verify the found pupil and avoid false acceptances.
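A sketch of how the response of Eqs. 5 and 6 could be evaluated at every candidate center is given below, using NumPy/SciPy; the values of gamma, sigma2 and the filter support are illustrative only, and treating Eq. 5 as a correlation over a local neighborhood of each candidate center is our assumption.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.signal import fftconvolve

def gst_circle_response(image, gamma=8.0, sigma2=4.0, half_size=25):
    """Complex GST response I20 (Eq. 5) for the circle template of Eq. 6, per pixel."""
    I = image.astype(float)
    Ix = sobel(I, axis=1)
    Iy = sobel(I, axis=0)
    squared_gradient = (Ix + 1j * Iy) ** 2        # orientation-doubled complex gradient

    # Circle template c(p) of Eq. 6 on a local grid; gamma and sigma2 jointly set the
    # target radius (roughly sigma2 * sqrt(2 * gamma) pixels) and the boundary width
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1].astype(float)
    phi = np.arctan2(y, x)
    r2 = x ** 2 + y ** 2
    c = np.exp(-2j * phi) * r2 ** gamma * np.exp(-r2 / (2.0 * sigma2 ** 2))
    c /= np.abs(c).sum()                          # normalize the filter energy

    # c(p) is symmetric under p -> -p (phi -> phi + pi leaves exp(-i*2*phi) unchanged),
    # so this convolution equals the correlation of Eq. 5 evaluated around every pixel.
    i20 = fftconvolve(squared_gradient, c, mode='same')
    return i20   # a circle center shows large |i20| with argument close to zero
```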
Fig. 3. System model for pupil detection/segmentation by Structure Tensor analysis. Top: detection of the specular reflections from λ2 (detection of the maximum of λ2; binarization and object selection; border and centroid computation). Bottom: detection of the pupil border from λ1 (binarization of the image of λ1; removal of the specular reflections region; watershed transform), yielding the segmented pupil.
TABLE I
RESULTS OF THE DIFFERENT STAGES OF OUR PUPIL DETECTION/SEGMENTATION ALGORITHM (TOTAL TEST IMAGES: 2,259).

Correct detection of specular reflections: 2,236 (98.98%)
Images passing the segmentation stage: 1,946 (86.14%)
Images correctly segmented: 1,903 (84.24%)
IV. DATABASE AND PROTOCOL
We have used for the experiments the “Interval” set of
the CASIA-IrisV3 database [11], captured at the Chinese
Academy of Sciences’ Institute of Automation (CASIA).
Iris images of this dataset were acquired with a close-up iris camera that includes a circular NIR LED array.
This array produces a circular set of specular reflection
points as can be seen in Figure 1. CASIA-IrisV3-Interval
includes 2,639 iris images, of 280 pixels in height and 320 pixels in width, from 249 contributors acquired in two
sessions. The number of images per contributor and per
session is not constant and not all the individuals have
images of the two eyes. The number of different eyes
included in the database is 396.
The experimental protocol is as follows. The training
set comprises two iris images from each individual (one
from each eye, if available), resulting in 396 images. Images from the training set are used to infer the parameters of Section III after running the detection/segmentation algorithm over a comprehensive set of combinations of the different parameters involved. The remaining 2,259 iris images are used as the test set to compute the success rates of our algorithm given in Section V. No training or inference of
parameters is done on the test set.
V. EXPERIMENTAL RESULTS
In Table I, we give the success rates of the different
stages of our detection/segmentation algorithm on the test
set. Except for the number of images passing the segmentation stage (which is computed automatically), the
other two figures are manually obtained after reviewing
the resulting segmented images.
Correct detection of specular reflections is done in
98.98% of the images (Figure 4 depicts some examples
of correct and non-correct images). When the detection is not made correctly, it is mainly due to eyelid occlusion, secondary specular reflections and/or the lack of one of the reflection spots (probably due to a blown lamp). In these cases, although the shape of the detected region is not as expected, the estimation of the centroid is not greatly affected.
According to Table I, 1946 images (86.14%) pass the
segmentation stage, meaning that a connected element is
detected by the watershed transform and it complies with
the size condition imposed (see Section III). However,
after manual review, it turns out that the pupil region is well detected in 1903 images (84.24%). Some examples
are shown in Figure 5a. In the remaining 1946-1903=43
images, the boundary of the pupil is not well detected,
mainly due to eyelids/eyelashes occlusion or to secondary
specular reflections. Some examples of these issues are
shown in Figure 5b. Finally, there are 2259-1946=313
images that do not pass the segmentation stage (some
examples are depicted in Figure 5c). In all cases, it is
because the pupil boundary in λ1 is not a closed curve,
so the watershed algorithm is not able to detect it. Reasons for an open curve, as can be seen in Figure 5c, are occlusion by eyelashes or the presence of primary/secondary specular reflections on the boundary of the pupil.
Fig. 4. Examples of images with correct detection of specular reflections (a) and non-correct detection (b).

Fig. 5. Examples of images with correct pupil segmentation (a), incorrect pupil segmentation (b) and errors in the segmentation (c). For (c), the left part shows the iris image and the right part the binarized λ1 with the region of the specular reflections removed.

VI. CONCLUSION
A pupil detection/segmentation algorithm for iris images by Structure Tensor analysis has been proposed. It is based on the observations that pupil boundaries and specular reflections of the image are regions with high values of the eigenvalues λ1 and λ2, respectively, and that specular reflections typically appear in iris images within the pupil boundary. These facts are exploited to first detect the specular reflections region and then find the boundary of the pupil.
Experimental results show that our algorithm works
especially well in detecting the specular reflections from
λ2 . Although the specular reflections of the database used
here have a particular distribution as a circular array
of points, the above observation remains valid for any
given distribution, because reflection points are usually
the brightest points in iris images. Since the number and
distribution of reflection points is typically known for a
given acquisition sensor, the proposed algorithm could be
adapted accordingly.
Concerning the detection of the pupil boundary, it is correctly detected in 84.24% of the test images. Incorrect
functioning of the algorithm mainly occurs due to eyelashes occluding part of the pupil boundary or due to
spurious specular reflections appearing in different regions of the pupil. Our algorithm relies on the fact that the pupil boundary appears in λ1 as a closed curve, which is not always true in the presence of the mentioned factors. In this sense, our algorithm would benefit from complementary alternatives such as the traditional Hough transform or active shape models [2] when the curve that surrounds the pupil is not closed. This capability is essential in images acquired in less-controlled conditions, and it will be the subject of future work.
ACKNOWLEDGMENT
Author F. A.-F. thanks the Swedish Research Council
(Vetenskapsrådet) for its financial support. Portions of this
research use the CASIA-IrisV3 collected by the Chinese
Academy of Sciences’ Institute of Automation (CASIA).
REFERENCES
[1] A. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Trans. CSVT, vol. 14, no. 1, pp. 4–20, 2004.
[2] K. W. Bowyer, K. Hollingsworth, and P. Flynn, “Image understanding for iris biometrics: a survey,” Computer Vision and Image Understanding, vol. 110, pp. 281–307, 2007.
[3] E. Newton and P. Phillips, “Meta-analysis of third party evaluations of iris recognition,” NISTIR 7440, 2007.
[4] J. Daugman, “How iris recognition works,” IEEE Trans. CSVT, vol. 14, pp. 21–30, 2004.
[5] R. P. Wildes, “Iris recognition: An emerging biometric technology,” Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, 1997.
[6] J. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. LoIacono, S. Mangru, M. Tinker, T. Zappia, and W. Zhao, “Iris on the move: acquisition of images for iris recognition in less constrained environments,” Proc. IEEE, vol. 94, no. 11, pp. 1936–1946, 2006.
[7] J. Bigun and G. Granlund, “Optimal orientation detection of linear symmetry,” Intl. Conf. on Computer Vision, ICCV, pp. 433–438, 1987.
[8] N. Otsu, “A threshold selection method for gray-level histograms,” IEEE Trans. SMC, vol. 9, pp. 62–66, 1979.
[9] F. Meyer, “Topographic distance and watershed lines,” Signal Processing, vol. 38, pp. 113–125, 1994.
[10] J. Bigun, Vision with Direction, Springer, 2006.
[11] CASIA Iris Image Database, http://biometrics.idealtest.org.