Master Thesis
A Texture Analysis of 3D GPR Images
by
Daniela Deiana
IRCTR-A-023-08
30 July 2008
Abstract
In this thesis, image processing algorithms are applied to 3D GPR images in order to
improve the detection capabilities of a radar system. Detection based on the magnitude
of the reflected signals may miss weak targets. On the other hand, an analysis of the
texture properties of a target, i.e. the repeating pattern over its surface, which is
independent of the signal intensity, discriminates it better from the clutter.
The texture analysis algorithm applied to 2D and 3D radar images is called the "Texture
Feature Coding Method" (TFCM). It highlights neighboring volume pixels (voxels) with
high correlation, and it has been applied iteratively to global and local volumes of the
3D image in order to improve the detection of weak targets. The measurement of the
correlation between neighboring voxels is based on a tolerance value, and a threshold
algorithm to detect this value automatically has been customized. Image visualization is
performed with automatic threshold selection, extracted from the histogram of the 3D
images.
The algorithm has been applied to images of landmines or mine-simulant objects lying
on the surface, giving remarkable results. The method successfully detects the targets
and highlights their edges, allowing a realistic visualization of the shapes of the targets.
Further research in this direction is suggested: tests on buried targets should be performed
in order to validate the algorithm. A degradation of the results is expected when buried
targets are used; however, texture features can still be extracted, and object classification
techniques can be used to discriminate between clutter and targets.
Acknowledgments
I would like to thank my supervisors, Alex Yarovoy and Xiaodong Zhuge, for their precious
comments and suggestions, for always being available for discussions, and for all the
human support that they gave me when it was needed. I would like to thank all the
new friends that I met at the IRCTR department, and in particular the friends of the
22nd floor, for the nice time spent together. I would like to thank Walter and my
acquired Dutch family for their support, and last but not least, I would like to thank my
family who, even if far away, have always been very close to me.
Contents

Abstract
Acknowledgments
1 Introduction
  1.1 Overview of the Antipersonnel Landmine Problem
  1.2 Ground Penetrating Radar
      1.2.1 GPR Fundamentals
      1.2.2 GPR state of the art
  1.3 Research objectives
  1.4 Thesis outline
2 Background
  2.1 Image Processing
      2.1.1 Basic relationships between pixels
      2.1.2 Image segmentation
      2.1.3 Thresholding
      2.1.4 Image representation: Mathematical Morphology
  2.2 Texture Analysis
      2.2.1 Texture Feature Coding Method
      2.2.2 The TFCM technique
3 GPR Radar: System Description and 3D Imaging
  3.1 System Configuration and Data Acquisition
  3.2 Study of the received signals
      3.2.1 A-scans
      3.2.2 B-scans
  3.3 Imaging algorithm
      3.3.1 Imaging with one loop
      3.3.2 Imaging with the mini-array
4 TFCM applied to radar images
  4.1 GPR Image conversion to Gray Level Images
  4.2 TFCM of two-dimensional images
  4.3 TFCM applied to linear and logarithmic images: a comparison
5 Three-dimensional TFCM
  5.1 3D TFCM
  5.2 3D TFCM applied to Volumes
      5.2.1 Automatic Threshold Selection
      5.2.2 Volume splitting
      5.2.3 Adaptive Thresholding
      5.2.4 Analysis of the two threshold methods
  5.3 Discussion of the results
      5.3.1 Possible improvements
6 Conclusions
  6.1 Discussion
  6.2 Algorithm improvements and suggestions for future research
A Example of 2D TFCM
B TFCM of unfocused data
1 Introduction

1.1 Overview of the Antipersonnel Landmine Problem
One of the biggest threats to the population of a country during and after a conflict
is the antipersonnel (AP) landmine. AP landmines are conventional weapons initially
developed to protect anti-tank landmines from theft by enemy soldiers. After World War
II they also started to be used massively as an offensive weapon, targeting civilians. It is
estimated that more than 50,000,000 mines are still buried in about 70 developing
countries. An extreme case is Cambodia, where the ratio between population and number
of mines is 0.85: there are more mines than inhabitants [1].
During conflicts, landmines are placed in strategic areas, such as near bridges, rivers
and borders, and along paths that connect villages or lead to natural resources, in order
to restrict the movement of enemy forces. Moreover, they are designed not to kill soldiers
but to maim them, since in war a dead soldier costs fewer resources than an injured one.
Minefields are almost never marked or mapped, and when a conflict ends the landmines
are forgotten, or deliberately left in the fields, remaining silently active for decades. Often
the only way a new minefield is discovered is when someone steps on a landmine. The
discovery of two or three mines is sufficient reason to abandon a field, which could have
been a potential agricultural resource or the main connecting path between villages,
causing an additional economic loss for the people who survived the conflict.
Every year thousands of people are victims of landmines, and hundreds of thousands are
reported maimed. In 2007 the official number of victims was 5,751, while 473,000 people
were reported injured [3]. In other terms, there is an injury every 70 seconds and a death
every 90 minutes. These numbers, even if dramatically high, are underestimated, since
many accidents are not reported.
In 1980 the United Nations approved the Convention on 'Certain Conventional Weapons'
(CCW), annexed to the Geneva Conventions of 1949, which concern the treatment of
non-combatants and prisoners of war. The CCW regulates the use in armed conflicts
of certain conventional weapons which may be deemed to be excessively injurious or to
have indiscriminate effects [4]. The convention has five protocols, one for each group of
conventional weapons. Protocol II regulates the use of landmines. It prohibits the use
and transfer of non-detectable AP landmines, the use of non-self-destructing and
non-self-deactivating mines outside fenced, monitored and marked areas, and the use of
landmines that explode when detected, injuring or killing the operator; it also prohibits
the indiscriminate use of landmines and calls for penal sanctions in case of violation.
Despite these good intentions, the convention failed to ban landmines, since every
signatory country had the option of adopting a minimum of 2 of the 5 protocols, choosing
those that best fit its political agenda.
In 1997 the International Campaign to Ban Landmines [2], a coalition of non-governmental
organizations, launched a petition to ban the use, stockpiling, production and transfer of
antipersonnel landmines, and in 1999 the 'Ottawa Convention' was signed by 135
countries and ratified by 84 states.
In the last 10 years the number of States Parties to the Mine Ban Treaty has grown,
thanks to the spread of public awareness of the landmine problem, and the Monitor
Report of 2007 puts the number of signatory countries at 155, representing 80% of the
world's nations [3].
The Ottawa Convention has drastically reduced the number of new mines laid every
year, as well as the mines stockpiled by the signatory countries; however, there are still
thousands of hectares of fields polluted by landmines, which urgently need to be cleared
and returned to use for the economic revival of the populations that suffered the war.
There are two types of field clearance: military demining and humanitarian demining.
Military demining is an approximate clearance method which aims at creating mine-free
paths for troops who are leaving or moving into an area. This method is efficient, but
accepts the risk that some mines may remain. Humanitarian demining, on the other
hand, is the process that aims at clearing minefields completely, in order to make the
land accessible and usable again for civilian activities.
The UN requires a high clearance efficiency for humanitarian demining, equal to or
greater than 99.6%. Nowadays this constraint is achieved only by manual clearing
methods, i.e. the use of metal detectors or of sticks to prod the ground, which however
are time-consuming and often dangerous for the operators. The metal detector scans the
shallow subsurface, looking for objects with metal content. Since metal detectors are
sensitive to
any kind of metal, they give high false alarm rates, slowing down the operations. Moreover,
they are not able to detect mines with low metal content. The second method, prodding
the ground, is a very dangerous procedure, which puts the life of the operator at stake.
Figure 1.1 shows two operators at work using conventional methods.

Figure 1.1: (a) Metal detector; (b) Prodding the ground
A safer humanitarian demining technique consists in using mechanical deminers, remotely
controlled tracked vehicles that destroy the mines by applying pressure to the field. The
results are satisfactory, but unfortunately these vehicles cannot be used in mountainous
or rocky areas.
Biological systems, such as dogs, rats and bees, are nowadays also used to detect
landmines. The explosive materials used in landmines have a specific smell, and these
animals are able to detect it. Dogs have been proven able to thoroughly clear a minefield.
On the other hand, the animals tire quickly and can work for only 2 hours a day.
The Apopo project [5] in Mozambique trains rats to detect landmines. The animals are
much faster than dogs in the training process, and they are lighter, so they can move
freely without triggering the mines (see figure 1.2). However, the signals that the animals
produce during their work are open to interpretation, and usually only the rat's personal
trainer is able to interpret them.
Biological systems have the big advantage of being cheap, so they can be deployed in
poor countries.
Figure 1.2: A rat of the APOPO project finds a mine
Each of the methods mentioned above has advantages and drawbacks: if integrated they
can result in higher efficiency and efficacy. However, they do not completely compensate
for each other's weaknesses, and leave room for uncertainty.
The scientific community is actively researching and developing alternative methods to
support the conventional ones in humanitarian demining. The fusion of multiple sensors
appears to be the best solution at the moment, where the electromagnetic induction
(EMI) sensor can be accompanied by other non-invasive sensors, such as ground
penetrating radars, microwave radiometers, infrared sensors and nuclear sensors. The
use of electromagnetic waves, in particular, has proven to be a powerful and promising
tool in the demining process, mainly because the ground penetrating radar is able to
generate images of the scanned area.
Images of the shallow subsurface give precious information to the operators, who can
discriminate between targets and clutter also on the basis of the shapes of objects. Ground
penetrating radar, and in particular Ultra Wideband GPR, is able to provide very high
resolution 2D and 3D images, and it is the sensor used to generate the data processed in
this thesis. Its working principle is briefly introduced in section 1.2.
1.2 Ground Penetrating Radar

1.2.1 GPR Fundamentals
A ground penetrating radar is a non-invasive geophysical sensor that uses the scattering
of electromagnetic waves to generate high resolution images of the subsurface and locate
objects [6]. It consists of transmitting and receiving antennas in monostatic or multistatic
mode. The transmitter radiates electromagnetic waves into the ground. When the EM
pulse crosses an area with different electromagnetic properties, expressed in terms of
dielectric permittivity ε, magnetic permeability µ and conductivity σ, a reflection occurs
and the radar receiver collects it. The earth surface is usually made of nonmagnetic
material, thus µr = 1 and the reflection of the EM wave is mainly caused by the contrast
in permittivity. The conductivity affects the absorption of the wave by the ground: high
conductivity soils, like soils with high water content, strongly attenuate the waves, so the
penetration depth, which is also frequency dependent, is very low.
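The role of the permittivity contrast can be illustrated with the normal-incidence Fresnel reflection coefficient for two non-magnetic (µr = 1) media. The following sketch is not part of the thesis, and the soil permittivities used are illustrative values only:

```python
import math

def reflection_coefficient(eps1: float, eps2: float) -> float:
    """Amplitude reflection coefficient at normal incidence between two
    non-magnetic, lossless media with relative permittivities eps1
    (incidence side) and eps2 (transmission side)."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

# Air (eps_r = 1) over dry sand (eps_r ~ 3): weak reflection.
r_dry = reflection_coefficient(1.0, 3.0)
# Air over wet soil (eps_r ~ 20): much stronger reflection.
r_wet = reflection_coefficient(1.0, 20.0)
print(r_dry, r_wet)
```

The stronger contrast of wet soil yields a larger reflection magnitude, consistent with the attenuation and reflectivity behavior described above.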
Given the properties of the ground, which are fixed, a good radar system design has to
take two main factors into account: penetration depth and image resolution. Lower
frequency bands allow greater penetration depth, but the image resolution decreases,
preventing the radar from resolving closely spaced objects, and vice versa. Thus a good
trade-off has to be found, and it depends on the objectives of each project.
A GPR can collect data in one, two or three dimensions. A single waveform collected by
a receiver at a given fixed position and in a determined time window is called an A-scan.
The collection of A-scans along a scan line is called a B-scan, and it is a space-time
representation of the shallow subsurface. An object in a B-scan has a hyperbola-like
structure, which is a de-focused energy representation, dependent on the time of arrival
of the reflected signals. The collection of multiple parallel B-scans is a 3D data matrix,
called a C-scan. The depth axis usually indicates the time window, while the horizontal
plane indicates space.
Figure 1.3: Examples of: (a) A-scan, (b) B-scan, (c) C-scan. Images taken from [7]
The raw data generated by the radar are normally pre-processed. The reflections of the
targets are enhanced, the clutter is reduced and migration methods are applied to increase
object position accuracy. In high-clutter scenarios, however, these methods may not be
sufficient to suppress the clutter, whose high intensity could shadow the targets. It is
then necessary to apply additional methods which do not depend directly on the intensity
values of the target responses, but exploit other properties, such as the texture of an
object. Texture is a pattern that repeats itself regularly over the surface of an object. A
weak target still has a texture, which differs from the background and which can be
analyzed and used to highlight the target. Thus, next to intensity-based methods,
texture-feature-based methods can be implemented to detect targets.
In chapter 3 the scan data matrices and a migration method are discussed and explained
with examples. In chapters 4 and 5 a texture feature method is applied to detect targets,
and the two approaches are compared.
1.2.2 GPR state of the art
The first use of GPR for landmine detection was suggested by the US military in the
seventies. In the last 40 years the scientific community has made a substantial
contribution to this field, and nowadays there are several multi-sensor fusion systems for
landmine detection which make use of GPR.
The additional value of GPR, as anticipated in the previous section, is its ability to
create 2D and 3D images, from which it is possible to extract information about the
shape and the structure of the objects, permitting better detection and recognition.
The images can be analyzed in 2D as well as in 3D, focused and unfocused. The literature
has several examples of these approaches, of which the most interesting is certainly the
processing of 3D images: firstly because the analysis is performed on the whole data set,
and not only on a fraction of it; secondly because 3D imaging makes it possible to see
the shapes and the volumes of objects, allowing better discrimination from clutter.
Two important results recently obtained with 3D data sets are now briefly described.
In 2003, E. Ligthart [12] worked on 3D object detection and classification of 3D
post-processed data sets generated by the video impulse radar (VIR) system developed
at IRCTR of Delft University of Technology. By using an adaptive depth-based threshold
system for detection, and then by extracting statistical, structural, shape- and size-based
features, she was able to correctly classify all the targets buried in the shallow subsurface.
Her method is based on intensity images.
In 2007 P. Torrione [24] presented a classification method applied to unfocused 3D
matrices, which extracts the texture features of points in consecutive B-scans and then
classifies the point targets by training on 12 statistical features.
The International Research Centre for Telecommunications-transmission and Radar
(IRCTR) of Delft University of Technology is researching Ultra Wideband Ground
Penetrating Radars for detection, ranging, positioning and classification of targets.
The latest GPR system that has been developed, and is now being tested, is the UWB
array-based time-domain ground penetrating radar. This system generates high
resolution 3D images, whose processing is the subject of this thesis. The system is
described in chapter 3.
1.3 Research objectives
Target detection and imaging are performed with signal processing. The decision-making
techniques mainly focus on the magnitudes of the reflected signals. GPR images are
scenario dependent, and often the intensity of a target is overshadowed by stronger
signals, which can come from the environment or from the radar system itself, e.g.
antenna coupling.
On the other hand, objects have properties which can be extracted independently of the
magnitude of the signal intensity. For example, they can also be classified on the basis of
their shape and structure features, as was successfully demonstrated by E. Ligthart in a
previous project at IRCTR [12].
This thesis attempts to detect 3D objects by exploiting their structural properties, their
textures, defined as "the disposition of the several parts of any body in connection with
each other, or the manner in which the constituent parts are united".
The research was subdivided into three milestones:

1. Learn to process GPR raw data in order to generate focused 3D images;

2. Apply image processing tools in order to analyze and extract relevant features from
an object which help improve detection;

3. Try to create a system which makes detection decisions automatically.
The imaging method used to focus the images is the diffraction-stack algorithm, frequently
used to process seismic data.
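The idea behind a diffraction stack is to sum, for every image point, the recorded amplitudes along the hyperbolic travel-time curve that a point scatterer at that position would produce. The following is only a minimal 2D sketch under a constant-velocity, monostatic assumption; all parameter names are illustrative, and the thesis's actual imaging algorithm is described in chapter 3:

```python
import numpy as np

def diffraction_stack(b_scan, dx, dt, v, depths):
    """Minimal 2D diffraction-stack (Kirchhoff-style summation) sketch.

    b_scan : (n_x, n_t) array of A-scans along one scan line.
    dx, dt : spatial and temporal sampling intervals.
    v      : assumed constant propagation velocity.
    depths : 1D array of image depths to evaluate.
    """
    n_x, n_t = b_scan.shape
    xs = np.arange(n_x) * dx
    image = np.zeros((n_x, len(depths)))
    for ix, x0 in enumerate(xs):                  # image column
        for iz, z in enumerate(depths):           # image depth
            # Two-way travel time along the diffraction hyperbola.
            t = 2.0 * np.sqrt(z**2 + (xs - x0)**2) / v
            it = np.round(t / dt).astype(int)     # nearest time sample
            valid = it < n_t
            image[ix, iz] = b_scan[valid, it[valid]].sum()
    return image
```

Summing along the hyperbola collapses the de-focused energy of a point scatterer back onto its true position, which is why objects appear focused after migration.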
The approach selected to analyze texture features is the one applied by Torrione to
unfocused data, modified here for 3D focused volumes. This method, called the Texture
Feature Coding Method, has been chosen on the basis of the good results obtained on
medical ultrasonic images [20] and in unfocused landmine detection by Torrione.
GPR images are first converted to gray-level intensity images, in order to apply the
standard image processing and texture analysis tools.
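Such a conversion might look like the following sketch, which assumes a simple linear min-max mapping to 256 levels; the thesis's own conversion, including a logarithmic variant, is detailed in chapter 4:

```python
import numpy as np

def to_gray_levels(volume: np.ndarray, n_levels: int = 256) -> np.ndarray:
    """Linearly rescale real-valued radar intensities to 8-bit gray levels.
    A min-max normalization is assumed here for illustration."""
    v = volume.astype(float)
    v = (v - v.min()) / (v.max() - v.min())        # normalize to [0, 1]
    # Clamp before casting so the maximum maps to 255, not wrapping to 0.
    return np.minimum(v * n_levels, n_levels - 1).astype(np.uint8)

demo = np.array([[0.0, 0.5], [0.75, 1.0]])
print(to_gray_levels(demo))   # gray levels span 0..255
```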
The feature coding method has first been applied to two-dimensional images, and then
extended to 3D images. Given the uneven intensity distribution of clutter and targets in
a volume, adaptive threshold algorithms have been applied, and an iterative method has
been used to improve the detection of weak targets. The use of binary images has been
restricted to the extraction of volumes that encapsulate the targets, while the main
algorithms have been applied to gray-level images. This choice is justified in chapters 4
and 5, where the feature coding methods are applied.
The novelty of this thesis is the development of algorithms which automatically detect
3D objects on the basis of texture analysis. The texture feature method is applied for
the first time to three-dimensional post-processed images, showing better detection
capability than the intensity-based approach. Furthermore, a threshold algorithm based
on texture histograms is used iteratively, first globally and then locally, in order to
improve the detection of weak targets.
1.4 Thesis outline
Chapter 2 introduces the image processing and texture analysis methods implemented
in the algorithms. Chapter 3 describes the GPR system developed at IRCTR and the
imaging algorithm used to produce the 2D and 3D images.
Chapter 4 analyzes landmines' texture features in 2D, showing the potential of the
method, while chapter 5 extends it to three dimensions. Automatic threshold techniques
are also discussed and the results are shown.
In chapter 6 conclusions and suggestions for future research are given.
Although the GPR has been developed to detect objects below the ground, the
experiments have been performed with objects lying on the surface.
2 Background
This chapter describes the image processing techniques used to generate a 3D image of
the targets. Image processing techniques focus mainly on 2D images, but they can be
extended to the three-dimensional case.
The next section introduces the basic concepts of 2D images, the tools used to analyze
them, and their extension to 3D images. Section 2.2 introduces texture analysis and
describes the method that, by using the texture properties of an image, enhances the
detection of the targets.
2.1 Image Processing
A digital image is an array of numbers, real or complex, represented by an intensity
function of two or three spatial dimensions, f(x, y, z), where the value of f at the spatial
coordinates (x, y, z) gives the intensity (gray level value) of the image at that
voxel [25][26]. The intensity is a form of energy, finite and positive: 0 ≤ f(x, y, z) < ∞.
Image pixels are commonly stored with 8 bits per sampled pixel (or voxel), which gives
2^8 = 256 different gray level values. The lowest intensity, 0, corresponds to no energy,
while at the other extreme the value 255 corresponds to maximum energy.
2.1.1 Basic relationships between pixels
Some important relationships between pixels, which have been used in the imaging
algorithms, are now introduced. These relationships are described for the 2D case,
followed by a short explanation for 3D images.

Neighbors of a pixel A pixel p at coordinates (x, y) has four horizontal and vertical
neighbors, called the 4-neighbors of p and denoted by N4(p), shown in figure 2.1(a). It
also has 4 diagonal neighbors, denoted by ND(p) and shown in figure 2.1(b). The union
of these two sets of pixels is called the 8-neighbors of p, denoted by N8(p) and shown in
figure 2.1(c).
In a 3D image, a voxel centered at coordinates (2,2,2) of a 3x3x3 matrix has 6 vertical
and horizontal neighbors (figure 2.1(d)), 18 neighbors in the main planes (figure 2.1(e)),
and is totally surrounded by 26 neighbors (figure 2.1(f)).
Figure 2.1: Neighbors of a pixel: (a) N-4, (b) N-d, (c) N-8; and of a voxel: (d) N-6, (e) N-18, (f) N-26
Connectivity Two or more pixels are connected if they belong to the same neighborhood
set (N4 or N8 in a 2D image) and if their gray levels satisfy a specified criterion of
similarity.

Distance between pixels Because the pixels lie on a spatial grid, the distance between
two pixels can be calculated in different ways. The three main distance metrics are:
1. Euclidean Distance: Given two pixels p and q, with coordinates (xp, yp) and (xq, yq)
respectively, the Euclidean distance is

D_Eu = sqrt((xq − xp)^2 + (yq − yp)^2)   (2.1)
2. City Block Distance: Also known as the Manhattan distance; diagonal moves are not
allowed:

D_City = |xq − xp| + |yq − yp|   (2.2)
3. Chessboard Distance: The N-8 neighbors of a pixel are all at the same distance:

D_Chess = max(|xq − xp|, |yq − yp|)   (2.3)
The texture analysis method makes use of the Chessboard Distance metric.
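The three metrics can be written directly from equations (2.1)-(2.3); this small sketch is only an illustration:

```python
import math

def euclidean(p, q):
    """Straight-line distance between pixels p and q (eq. 2.1)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def city_block(p, q):
    """Manhattan distance: no diagonal moves allowed (eq. 2.2)."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

def chessboard(p, q):
    """All N-8 neighbors sit at distance 1 (eq. 2.3); this is the
    metric used by the texture analysis method."""
    return max(abs(q[0] - p[0]), abs(q[1] - p[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q), city_block(p, q), chessboard(p, q))  # 5.0 7 4
```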
2.1.2 Image segmentation
Segmentation is the process of subdividing the image into homogeneous, meaningful
areas which share similar characteristics. Conversely, two adjacent homogeneous areas
show a discontinuity at their boundary. These two properties, similarity and
discontinuity, correspond to two different approaches to image segmentation. In the first
case, called region-based segmentation, regions are partitioned according to image
properties such as intensity, texture and spectral profile; one such approach is
thresholding. In the second case, called edge-based segmentation, the image is partitioned
on the basis of abrupt changes in the gray level values of the pixels; the goal is to
demarcate the regions' boundaries.
2.1.3 Thresholding
Thresholding is the simplest method of image segmentation [28]. It consists of separating
the pixels of an image into two groups, object and background, based on their intensity
values. The assumption is that different regions in the image have different gray level
distributions. Discrimination between object and background can then be done on the
basis of the mean and standard deviation of each distribution.

Histogram The distribution of gray level values in an image is represented by a
function, the histogram h(x), where the variable x takes the gray level values and h(x)
gives the number of occurrences of each value in the image. Given the image in figure
2.2(a), its histogram is shown in figure 2.2(b).
Figure 2.2: (a) Example image; (b) Histogram of the image
The outcome of thresholding is a binary image, where pixels with intensity values below
a certain threshold are set to 0, while pixels above the threshold are set to 1. In this
project, thresholding algorithms are used to detect the threshold value, but the image is
segmented only at the last step, when visualization occurs; the intermediate operations
are performed on gray-level images (section 4.2 justifies this choice in detail).
Two types of thresholding can be applied to an image: global thresholding or local
thresholding.
Global Thresholding selects a fixed threshold for the whole image, on the basis of
the histogram distribution. This method gives a good outcome only if the intensity of
the objects differs strongly from the intensity of the background. If this is not the case,
low intensity objects will not be detected.

Local Thresholding, also called adaptive thresholding, calculates a different threshold
for each region of the image, adapting itself to the local intensity distributions. This is a
good alternative to global thresholding when the image is not uniformly illuminated or,
as is the case for GPR images, when not all targets respond with the same intensity
values, resulting in a varying contrast across the image.
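The contrast between the two strategies can be sketched as follows; the block-wise mean rule and the block size are assumptions for illustration, not the thesis's exact procedure:

```python
import numpy as np

def global_threshold(img: np.ndarray, t: float) -> np.ndarray:
    """One fixed threshold for the whole image -> binary image."""
    return (img > t).astype(np.uint8)

def local_threshold(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Adaptive thresholding sketch: a separate threshold per block,
    here simply the block mean (an assumed, simplified rule)."""
    out = np.zeros_like(img, dtype=np.uint8)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            region = img[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = region > region.mean()
    return out
```

With a dim target sitting in a dark region and a bright target in a bright region, a single global threshold misses the dim one, while the block-wise thresholds recover both.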
Threshold selection

Threshold selection is a heuristic procedure, and there is no universal way to obtain an
optimal result. The threshold of an image is the gray level value that detects objects
with minimum segmentation error. Given the probability distributions of the background
and the object, shown in figure 2.3(a), the corresponding histogram with optimal
threshold is obtained in figure 2.3(b). If object and background have a strong intensity
contrast, the histogram is bimodal and the error between the optimal and conventional
threshold is small (left figures). Vice versa, when the probability distributions overlap, it
is more difficult to find a threshold (right figures).
Figure 2.3: (a) Probability distributions; (b) corresponding histograms [30]
In this project two threshold selection methods have been used:

Triangle Algorithm This algorithm is used when there are small high intensity objects
on a uniform background. The histogram of the image is unimodal, with a peak at the
low intensity values, as shown in figure 2.4. The algorithm constructs a line between the
maximum and the lowest value of the histogram; the threshold is the gray level value
with maximum distance from this line [26].
Figure 2.4: The Triangle Algorithm
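A possible implementation of the triangle algorithm is sketched below. It uses the vertical distance from the histogram to the line as a simplification of the geometric distance, an assumption of this sketch rather than a detail taken from the thesis:

```python
import numpy as np

def triangle_threshold(hist: np.ndarray) -> int:
    """Triangle algorithm sketch: draw a line from the histogram peak to
    the last non-empty bin and return the gray level farthest below it.
    Assumes a unimodal histogram with its peak at the low intensities."""
    peak = int(hist.argmax())
    last = int(np.nonzero(hist)[0][-1])          # highest occupied bin
    # Line through (peak, hist[peak]) and (last, hist[last]).
    x = np.arange(peak, last + 1)
    y = hist[peak] + (hist[last] - hist[peak]) * (x - peak) / (last - peak)
    # Vertical distance from the line down to the histogram; its maximum
    # defines the threshold Th.
    d = y - hist[peak:last + 1]
    return peak + int(d.argmax())
```

For a histogram dominated by a low-intensity background peak with a long tail, the returned threshold falls on the "knee" of the decay, separating the few bright target pixels from the background.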
Mean Value This algorithm is used for local thresholds. It calculates a weighted mean
of the histogram.
These two methods have been chosen on the basis of the expected image content. A
radar image of a minefield consists of several small high intensity areas surrounded by
ground. The gray levels representing the background are far more numerous than those
representing the targets, so the triangle algorithm is well suited to this kind of image.
The mean value threshold algorithm is used locally, once a target has been isolated from
the rest of the image: in that case the background pixels are no longer the majority, and
an averaging algorithm is more suitable for threshold selection.
2.1.4 Image representation: Mathematical Morphology
Image segmentation generates a binary image divided into two sets: object pixels,
represented by 1, and background pixels, represented by 0. The new image can be
described in terms of regions, or sets, where each pixel is a member of a set of pixels
that share a common property. Mathematical morphology is a tool that extracts those
image components, or sets of pixels, that are useful for representation and
description [25]. Each set can be represented in terms of its external characteristics, i.e.
its boundaries, or its internal characteristics, i.e. the pixels in the region.
Mathematical morphology was originally introduced for binary images, and its
application has later been extended to gray-scale and multi-band images. Hereafter,
operations on binary images are presented.
Morphological Operations
Morphological operations probe an object with a structuring element, with the goal of
revealing the object's shape. The two fundamental morphological transformations are
erosion and dilation, which involve the interaction between an object A and a structuring
set B, called the structuring element. The structuring element can be a circular disk in
the plane, a 3x3 square, a cross, or any other shape.
Some basic set operations that are used for erosion and dilation are now introduced.
Let A and B be two sets in Z 2 , with components a = (a1 , a2 ) and b = (b1 , b2 ), respectively.
A set A of pixels α is defined as the group of pixels that share some common property:
A = {α | property(α) = TRUE}    (2.4)
The complement of A is:

Ac = {α | α ∉ A}    (2.5)
The translation of A by x, denoted as (A)x , is defined as:
(A)x = {c|c = a + x, ∀a ∈ A}
(2.6)
The reflection of B, denoted by B̂, is defined as:
B̂ = {x|x = −b, ∀b ∈ B}
(2.7)
The difference of two sets is defined as:
A − B = {x | x ∈ A, x ∉ B} = A ∩ Bc    (2.8)
Dilation Given the sets A and B in Z2, and denoting by ∅ the empty set, the dilation of A by B, denoted as A ⊕ B, is defined as:

A ⊕ B = {x | (B̂)x ∩ A ≠ ∅}
      = {x | [(B̂)x ∩ A] ⊆ A}
      = {c ∈ Z2 | c = a + b, a ∈ A, b ∈ B}    (2.9)
Dilation is an expansion operation.
Erosion Given the sets A and B in Z2, the erosion of A by B, denoted by A ⊖ B, is defined as:

A ⊖ B = {x | (B)x ⊆ A}
      = {c ∈ Z2 | c + b ∈ A, ∀b ∈ B}    (2.10)
Erosion is a shrinking operation.
Two other important morphological operations are opening and closing, obtained by combining the two fundamental morphological transformations. Opening smooths the contour of an image and eliminates thin protrusions. Closing smooths the image too, but it fuses narrow breaks, eliminates small holes and fills gaps in the contour. Opening smooths from the inside of the object contour, while closing smooths from the outside.
The mathematical operation of the opening is defined as:
A ◦ B = (A ⊖ B) ⊕ B    (2.11)
The opening of A by B is the erosion of A by B, followed by a dilation of the result by
B.
The closing of set A by structuring element B, is defined as:
A • B = (A ⊕ B) ⊖ B    (2.12)
The closing of A by B is the dilation of A by B, followed by the erosion of the result by B.
Figure 2.5(a) shows a gray level image to which morphological operations are applied.
Figure 2.5: Morphological operations:(a) Original image, (b) Dilation, (c) Erosion, (d)
Opening, (e) Closing
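The four operations can be sketched directly from eqs. 2.9-2.12 with boolean arrays; this is a minimal implementation of our own (using wrap-around shifts, so image borders are not handled carefully), not code from this thesis:

```python
import numpy as np

def dilate(A, B):
    # A ⊕ B (eq. 2.9, Minkowski-sum form): OR of A translated by every b in B.
    out = np.zeros_like(A)
    cy, cx = B.shape[0] // 2, B.shape[1] // 2
    for dy in range(B.shape[0]):
        for dx in range(B.shape[1]):
            if B[dy, dx]:
                out |= np.roll(A, (dy - cy, dx - cx), axis=(0, 1))
    return out

def erode(A, B):
    # A ⊖ B (eq. 2.10): keep x only where the translated B fits inside A.
    out = np.ones_like(A)
    cy, cx = B.shape[0] // 2, B.shape[1] // 2
    for dy in range(B.shape[0]):
        for dx in range(B.shape[1]):
            if B[dy, dx]:
                out &= np.roll(A, (cy - dy, cx - dx), axis=(0, 1))
    return out

def opening(A, B):   # eq. 2.11: erosion followed by dilation
    return dilate(erode(A, B), B)

def closing(A, B):   # eq. 2.12: dilation followed by erosion
    return erode(dilate(A, B), B)

# A 3x3 square eroded by a 3x3 structuring element shrinks to its center pixel.
A = np.zeros((7, 7), dtype=bool)
A[2:5, 2:5] = True
B = np.ones((3, 3), dtype=bool)
```

Opening and closing of this simple square return it unchanged, since it has no protrusions to smooth and no holes to fill.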
2.2
Texture Analysis
The distinction between an object and the background can also be made based on a texture analysis of the image. The texture of an object is a repeating pattern that characterizes the object itself: a tiled floor or a rough wall, for example, are textures of those two objects. The same holds for image textures, where a pattern of intensity values repeats itself across an image. Textures are a powerful tool because, unlike threshold selection, they do not depend on the contrast of the image: a repeating pattern can be detected even if the intensity is low.
Different texture analysis methods are nowadays used for feature extraction and pattern recognition. The method chosen here is called the Texture Feature Coding Method; it is an edge detection method which extracts the features of an image by exploiting the correlation of neighboring pixels. It was introduced by Horng in 2002 for the classification of 2D medical images, and in 2007 it was used for the first time to classify 2D and 3D unfocused radar images of landmines.
2.2.1
Texture Feature Coding Method
The texture feature coding method (TFCM) is a coding scheme which transforms an
intensity image into a texture feature image whose pixels are encoded into texture feature
numbers, which represent a certain type of local texture[19].
In order to describe the TFCM scheme it is necessary to introduce three methods of
texture analysis:
1. Gray-Level Cooccurrence Matrix (GLCM)
2. Texture Spectrum (TS)
3. Cross-Diagonal Texture Matrix (CDTM)
Gray-Level Cooccurrence Matrix is a tabulation of how often transitions between pairs of gray levels occur in an image[21]. The gray-level transitions are calculated
based on two parameters, displacement d and angular orientation θ. The displacement
d is the shortest distance between two pixels, and the angular orientation θ is the angle
that the line connecting the two pixels forms with a horizontal line.
Figure 2.6: N-8 neighborhood, distance d = 1, 0 ≤ θ ≤ 360
In a 3x3 pixel image, the central pixel has a distance d = 1 to all its neighbors, and
the angle θ can take 8 different values, as shown in figure 2.6. Given two
gray levels i and j, distant d pixels apart and having angular orientation θ, Nd,θ (i, j) is
the number of occurrences of these two pixels. In mathematical terms, given the pixels
locations (x, y) and (w, z), with gray level values G(x, y) = i, G(w, z) = j, Nd,θ (i, j) is
the number of pixels that satisfies the following condition:
‖(x, y) − (w, z)‖d,θ = (d, θ)    (2.13)
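A brute-force sketch of the co-occurrence count N_{d,θ}(i, j): for every pixel, look up its neighbor at displacement d and orientation θ and count the gray-level pair. Function name and conventions are ours, not the thesis':

```python
import numpy as np

def glcm(img, d=1, theta=0.0, levels=4):
    # Displacement in array coordinates (rows grow downward, hence the minus).
    dy = -int(round(d * np.sin(np.radians(theta))))
    dx = int(round(d * np.cos(np.radians(theta))))
    N = np.zeros((levels, levels), dtype=int)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W:
                N[img[y, x], img[y2, x2]] += 1  # count the (i, j) transition
    return N

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [3, 3, 3]])
N = glcm(img, d=1, theta=0)   # horizontal neighbor pairs, left to right
```

For this toy image there are six horizontal pairs; the (3, 3) transition occurs twice along the bottom row.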
Texture Spectrum The authors[22] introduce the notion of texture unit, which is
a 3x3 matrix calculated based on the difference between the central pixel V0 and its
neighbors. Given an image V, a 3x3 pixel area is considered, as shown here:
V2 V3 V4
V1 V0 V5
V6 V7 V8
The central pixel V0 is compared to the neighboring pixels and the texture unit is created
according to the following conditions:
Ei = −1   if Vi − V0 < −∆
Ei =  0   if |Vi − V0| ≤ ∆
Ei =  1   if Vi − V0 > ∆
where ∆ is a tolerance of variation. The choice of ∆ is decisive for the outcome of the texture unit, so particular care has to be taken in selecting this value.
The texture unit E is thus determined. It represents the local texture information of a given pixel and its neighborhood, and it is arranged as follows:
E2 E3 E4
E1 E0 E5
E8 E7 E6
Cross Diagonal Texture Matrix Once the texture unit is found, this 3x3 matrix is decomposed into a cross-texture unit (CTU) and a diagonal-texture unit (DTU)[23], as shown in the following matrices. The cross-texture unit is called the primary connectivity set, while the DTU is called the secondary connectivity set.
(a) CTU:
     E3
 E1  E0  E5
     E7

(b) DTU:
 E2      E4
     E0
 E8      E6
2.2.2
The TFCM technique
This method encodes the intensity pixels of an image into texture feature numbers (TFN)
in five steps[19].
(i) Convert the intensity image V into the texture unit image E using a tolerance value ∆, as explained above. The elements of the new quantized matrix take values from the set {−1, 0, 1}, corresponding respectively to a decreasing, unchanged or increasing gray level value with respect to the central pixel;
(ii) Extract the CTU and DTU connectivity sets. Each set has two vectors: horizontal and vertical for CTU, diagonal and cross-diagonal for DTU;
(iii) For each vector, calculate the difference of the central pixel V0 with its two neighbors and determine the gray level (GL) variation, as follows. Given the vertical vector of CTU, represented by pixels (a, b, c) with corresponding GLs (Ga, Gb, Gc), calculate the GL changes between the two pairs (Ga, Gb) and (Gb, Gc). There are 4 possible types of variation:
1. [(|Ga − Gb| ≤ ∆) ∩ (|Gb − Gc| ≤ ∆)]
2. [(|Ga − Gb| ≤ ∆) ∩ (|Gb − Gc| > ∆)] ∪ [(|Ga − Gb| > ∆) ∩ (|Gb − Gc| ≤ ∆)]
3. [(Ga − Gb > ∆) ∩ (Gb − Gc > ∆)] ∪ [(Gb − Ga > ∆) ∩ (Gc − Gb > ∆)]
4. [(Ga − Gb > ∆) ∩ (Gc − Gb > ∆)] ∪ [(Gb − Ga > ∆) ∩ (Gb − Gc > ∆)]
Figure 2.7: GL graphical structure variations and class numbers
(iv) Determine the values of initial feature numbers (IFN) α and β relative to the
primary and secondary connectivity set respectively, by combining the pairs of
gray-level graphical structure variations. The total number of combinations is 10,
as shown in table 2.1.
The columns of the IFN table represent the horizontal vector for α and the diagonal
vector for β. The rows represent the vertical vector for α and the cross-diagonal
vector for β.
GLC1 \ GLC2    1    2    3    4
     1         1    2    3    4
     2         2    5    6    7
     3         3    6    8    9
     4         4    7    9   10

Table 2.1: Initial Feature Number mapping table
(v) The TFN is calculated with the following mapping table.
IFN1 \ IFN2    1   2   3   4   5   6   7   8   9  10
      1        0   1   2   3   4   5   6   7   8   9
      2        1  10  11  12  13  14  15  16  17  18
      3        2  11  19  20  21  22  23  24  25  26
      4        3  12  20  27  28  29  30  31  32  33
      5        4  13  21  28  34  35  36  37  38  39
      6        5  14  22  29  35  40  41  42  43  44
      7        6  15  23  30  36  41  45  46  47  48
      8        7  16  24  31  37  42  46  49  50  51
      9        8  17  25  32  38  43  47  50  52  53
     10        9  18  26  33  39  44  48  51  53  54

Table 2.2: Texture Feature Number mapping table
Higher TFN values correspond to higher degrees of gray-level variation.
This method is explained with an example in Appendix A.
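The classification of step (iii) and the two lookup tables can be sketched as follows; the function names are our own, and the tfn() formula simply reproduces the symmetric numbering of Table 2.2, whose diagonal entries are 0, 10, 19, 27, 34, 40, 45, 49, 52, 54:

```python
def gl_class(ga, gb, gc, delta):
    # Gray-level variation class (step iii) for a vector with GLs (ga, gb, gc).
    d1, d2 = ga - gb, gb - gc
    small1, small2 = abs(d1) <= delta, abs(d2) <= delta
    if small1 and small2:
        return 1                              # both changes within tolerance
    if small1 or small2:
        return 2                              # exactly one large change
    if (d1 > delta and d2 > delta) or (-d1 > delta and -d2 > delta):
        return 3                              # monotone decrease or increase
    return 4                                  # peak or valley at the center

def ifn(c1, c2):
    # Initial feature number from a pair of variation classes (Table 2.1).
    table = [[1, 2, 3, 4],
             [2, 5, 6, 7],
             [3, 6, 8, 9],
             [4, 7, 9, 10]]
    return table[c1 - 1][c2 - 1]

def tfn(alpha, beta):
    # Texture feature number (Table 2.2). The table is symmetric; base[]
    # lists its diagonal, and off-diagonal entries count up from there.
    i, j = min(alpha, beta), max(alpha, beta)
    base = [0, 10, 19, 27, 34, 40, 45, 49, 52, 54]
    return base[i - 1] + (j - i)
```

For example, a flat neighborhood gives class 1 for both vectors of a connectivity set, hence IFN = 1 for both sets and TFN = tfn(1, 1) = 0, consistent with "higher TFN values correspond to higher degrees of gray-level variation".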
3
GPR Radar: System Description and 3D Imaging
The IRCTR group at TU Delft has designed and developed a novel GPR system[9] that aims at a resolution high enough to detect small objects, and at more efficient data acquisition. Ultra-wideband technology has been used to improve the downrange
resolution ∆R, which is inversely proportional to the radar bandwidth B:
∆R = v/(2B)    (3.1)
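For the bandwidth stated later in this chapter (3.56 GHz), eq. 3.1 gives a downrange resolution of roughly 4 cm; a quick numeric check, assuming v = c for propagation in air:

```python
# Down-range resolution from eq. 3.1 for B = 3.56 GHz, taking v = c.
c = 3.0e8            # propagation velocity in air, m/s
B = 3.56e9           # radar bandwidth, Hz
dR = c / (2 * B)     # ~0.042 m, i.e. about 4.2 cm
```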
The high cross-range resolution has been achieved by using a mini-array of antennas.
The radar consists of a single transmitter antenna and a linear array of 13 receivers, and
it is shown in figure 3.1.
Figure 3.1: IRCTR mini-array antenna system
The transmitter is a dielectric wedge antenna whose footprint (at the -10dB level) illuminates an area 84cm in diameter, as shown in figure 3.2(a), while the multichannel receiving system consists of 13 loops placed symmetrically beneath the transmitter. The loops are spaced 7cm apart, giving a swath of 84cm. A digital delay is used to produce near-field focusing in the cross-scan direction[10], as shown in figure 3.2(b). This digital delay, called the vector Tzero later in this chapter, calibrates the mini-array by compensating the time of arrival of the signal for each loop. After focusing, the object is correctly placed at its real position.
(a)
(b)
Figure 3.2: (a) Footprint of the transmitter; (b)Mini-array time domain footprint[10]
This radar system presents several novelties: the number of components is reduced, since
there is only one transmitter that illuminates the swath area of the array; the mechanical
scanning becomes one-dimensional, since the receiving arrays are steered electronically
along the cross-scan direction; the large bandwidth of 3.56GHz starting at 240MHz gives
high resolution and good ground penetration; the combination of near-field footprint formation in the cross-scan direction, and synthetic aperture focusing in the scan direction
results in high resolution 3D radar images.
This chapter describes the procedure used to obtain 3D GPR images, focusing on the main
steps: measurement configuration and data acquisition (section 3.1), focusing method
(section 3.2).
3.1
System Configuration and Data Acquisition
The GPR system scans an area of 84x60 cm2 at the center of which two metal disks are
placed (see figure 3.3). The radar scans along the x-axis, while the array lies along the
y-axis. For each A-scan, 2048 points are collected, within a time window of 10ns. The x-line is 60cm long, within which 448 points are scanned, with a step of 1.33mm along
this axis. Once the scan is finished, a C-scan matrix has been collected. The surface is scanned a second time, without the targets.
Figure 3.3: 2 metal disks on a surface
Collected Data Two C-scan matrices are obtained:
1. C matrix: C-scan of size [2048,448,13]. Thirteen B-scans, one relative to each
channel.
2. background matrix: C-scan of size [2048,448,13]. C-scan of the unwanted reflections.
3.2
Study of the received signals
3.2.1
A-scans
An A-scan is a one dimensional plot of the signal at a point in the scan line. Consider
the geometric configuration shown in figure 3.4. Only one loop is considered here.
The transmitter is at 44.2cm from the surface, while the loops are 26.5cm beneath the
transmitter. The surface is represented by a foam material with relative permittivity εr = 1.1.
Two metal disks, 5cm in diameter and 0.5cm thick, are placed on the surface 5cm apart, just under loops 6 and 8, as shown in figure 3.4.
The GPR system is located above the center of the disk and the signal collected by
loop number 8 is studied. The transmitter is at coordinates T = (xt , yt , zt ) = (0.3, 0, 0),
the receiving loop number 8 is at R = (xr , yr , zr ) = (0.3, 0.07, −0.265). A point on the
surface of the disk is at P = (x, y, z) = (0.3, 0.07, −0.437).
d1, d2, d3 are the transmitter-receiver, transmitter-disk and receiver-disk distances, respectively.
Figure 3.4: Geometric configuration
Figure 3.5 shows the A-scan of the data collected by channel 8 (counting from left to right)
when it is above the targets during the two scans (with and without targets). The position
of the targets is marked. The direct wave reaches the receiver after propagating along distance d1 , arriving (after calibration) at time = d1 /c = 0.9143ns + T zero(8) = 1.4443ns,
while at time = (d2 + d3 )/c + T zero(8) = 2.58ns there is the first reflection from the
target, followed at time = 2.6117ns by the reflection from the surface.
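These arrival times follow directly from the stated geometry; the short check below recomputes them (Tzero(8) ≈ 0.53 ns is inferred from the quoted calibrated times, since it is not stated explicitly):

```python
import math

c = 0.29979  # speed of light, m/ns
T = (0.3, 0.0, 0.0)        # transmitter
R = (0.3, 0.07, -0.265)    # receiving loop 8
P = (0.3, 0.07, -0.437)    # point on the disk surface

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

t_direct = dist(T, R) / c                  # d1/c, about 0.914 ns
t_target = (dist(T, P) + dist(R, P)) / c   # (d2 + d3)/c, about 2.05 ns
tzero8 = 0.53                              # inferred calibration delay, ns
```

Adding the calibration delay reproduces the quoted 1.4443 ns for the direct wave and 2.58 ns for the target reflection.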
Figure 3.5: A-scan of the signal and background
The C-scan of the background is subtracted from the first measured data: a big part
of the unwanted reflections is removed and the reflection of the target remains. In Figure
3.6(a) the signal due to antenna crosstalk is still present, and its amplitude is stronger
than the reflection of the target. However the target can be easily located. Figure 3.6(b)
plots all the signals in the same scale.
Figure 3.6: A-scan of channel 8 above the targets: (a) A-scan of the signal, (b) A-scan of the three signals
3.2.2
B-scans
When the GPR system moves along the scanning line, a set of A-scans is acquired at
equidistant positions. This set is assembled into a 2D matrix and visualized as a gray-scale image[11]. Figure 3.7 shows the B-scan collected by loop 8. The x-axis represents the
scanning direction, while the y-axis represents the time. In this image the direct wave
and various unwanted reflections are present. Their amplitude almost completely masks the weak signal scattered by the two metal disks.
Figure 3.7: B-scan raw data
After background subtraction, the reflections of the disks become visible. In the image,
the targets have a hyperbola-like structure. This is due to the fact that target reflection
occurs at different times when the GPR system is scanning through the top of the target
area. The position of the target is located at the maximum of the hyperbola, that is
when the GPR is above it and the travel path between the antennas is the shortest[11].
Figure 3.8: B-scan after background subtraction
3.3
Imaging algorithm
The hyperbolae shown in figure 3.8 represent the energy backscattered by the targets for each position of the GPR system along the scan line. This space-time
image can be easily interpreted when one target is present in a homogeneous scene: the
maximum of the arc represents the position of the target. When there are multiple
targets in a complex scene, the interpretation of the B-scan becomes more difficult. The
focusing technique, originally used for seismic data, migrates the energy spread over the
arcs into a focused area, giving the spatial location of the targets.
The imaging algorithm used here aims at creating a high resolution 3D image by
combining array-based imaging in the cross-scan direction and synthetic aperture imaging
(SAR) in the scan direction. The cross-range focusing is performed by digital steering of
the receivers, as shown in figure 3.2(b), while the synthetic aperture focusing is obtained
with the diffraction stacking algorithm[16]. The algorithm can be intuitively explained
in the following way. Consider the mini-array GPR and set the origin of the coordinates
at the transmitter. Suppose that there is a point scatterer situated within the footprint area of the transmitter, at coordinates (x1, y1, z1). For every position of the array along
the scan line, the point scatterer backscatters energy to each of the 13 receivers. These
energies are sequentially collected in a C-scan matrix. The diffraction stacking algorithm
calculates the travel times of each transmission-backscattering pair and stacks the relative
energy into the point. A coherent sum is done and the energy is now focused. In a real case, three-dimensional targets with a certain volume are used. Mathematically, the imaging principle is expressed by equation 3.2[9], and the imaging geometry is shown in figure 3.9:

s(xl, ym, zn) = Σ(channel=1..13) Σ(j=1..N) schannel(ti, xj, ym, zn)    (3.2)
Figure 3.9: Imaging Geometry[9]
(xl, ym, zn) is a point in the volume of the target, x is the scan line direction, y is the array line, z is the depth. ti is the time of arrival of the wave relative to each position of the receivers and to each depth. Thus, the synthetic aperture, realized with the mechanical movement of the array, gives high resolution along the scan line, while the digital steering
along the cross-line is obtained by summing the times of arrival of one point over all 13 channels.
3.3.1
Imaging with one loop
This section shows the results of imaging in 2D and 3D when only one loop is used, highlighting the system's limitation in cross-range resolution. In section 3.3.2 this limitation is overcome by using the whole array, resulting in high cross-range resolution.
The system configuration of section 3.2.1, figure 3.4, is considered. The one-loop algorithm can be summarized as follows:
1. Define a volume within the scanned area and select the spatial resolution of a voxel;
2. Set the coordinate of the transmitter and define the coordinate of the loop receiver
with respect to the transmitter;
3. For each voxel in the volume, calculate the time of arrival of the wave for each position of the loop along the scanning line, and then associate to this voxel the sum of the collected amplitude values.
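The steps above can be sketched as a direct (and deliberately slow) diffraction-stacking loop; function name, array shapes and the synthetic test geometry are our own assumptions, not the thesis code:

```python
import numpy as np

def stack_one_loop(data, t_axis, scan_xs, T, R_offset, voxels, c=0.3):
    # data[i, j]: A-scan sample at time t_axis[i] (ns) for scan position
    # scan_xs[j] (m). T is the transmitter offset, R_offset the loop offset
    # relative to the transmitter; c is the propagation velocity in m/ns.
    image = np.zeros(len(voxels))
    dt = t_axis[1] - t_axis[0]
    for v, (vx, vy, vz) in enumerate(voxels):
        for j, x in enumerate(scan_xs):
            tx = np.array([x + T[0], T[1], T[2]])
            rx = tx + R_offset
            # Two-way travel time: transmitter -> voxel -> receiver.
            t = (np.linalg.norm([vx, vy, vz] - tx)
                 + np.linalg.norm([vx, vy, vz] - rx)) / c
            i = int(round((t - t_axis[0]) / dt))
            if 0 <= i < len(t_axis):
                image[v] += data[i, j]   # stack the amplitude into the voxel
    return image
```

For a synthetic point scatterer, the voxel at the scatterer's true position collects a coherent contribution from every scan position, while other voxels do not.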
Once the whole volume has been computed, a 2D image of the target is visualized in figure 3.10(a). The positive and negative intensity values in the image are due to the zero-crossings of the A-scans.
Figure 3.10(b) shows the top view of the two targets. A single loop has no cross-range resolution, thus along the array-line it is not possible to focus the image.
Figure 3.10: (a) Focused B-scan of the target, (b) Top view of two targets when one loop
is used
3.3.2
Imaging with the mini-array
The previous section showed the limitations of imaging when one single loop is used. Despite the high downrange resolution, the cross-range resolution is null and it is not possible to visualize 3D images. It is instead necessary to integrate the signals received by all thirteen loops. The imaging process is the same as the one described before, with the difference that now, for each voxel, it is necessary to calculate the time of arrival for each receiver and each scanning position. After calibration, the two disks are focused and the top view is plotted in figure 3.11:
Figure 3.11: 2D image, top view
The 3D volume is plotted in figure 3.12:
Figure 3.12: 3D image, front view
4
TFCM applied to radar images
In chapter 3 the diffraction stack imaging method has been described, and high resolution 3D images have been shown in figure 3.12. When more complex cases are analyzed, e.g. buried landmines, the intensities of the targets may not be strong enough to be clearly visualized, and a lower threshold has to be used. As a consequence, unwanted clutter is also detected.
It is then necessary to use a method that increases the probability of detection based not only on the intensity values, but also on other properties of the targets. The TFCM method exploits the correlation between neighboring points in an image, and highlights the contrast.
In this chapter the TFCM method is applied to 2D images. Its extension to 3D images will be discussed in chapter 5.
4.1
GPR Image conversion to Gray Level Images
Standard image processing techniques make use of positive and finite intensity values, normally encoded as 8-bit or 16-bit gray levels. GPR images, on the other hand, are represented by A-scans with positive and negative values, both carrying information about the targets. The similarity between the two types of images is that intensity values close to zero correspond to background.
In order to apply the image processing algorithms described in chapter 2, the intensity values of a radar image first have to be converted from bipolar to unipolar, and then from unipolar to gray level values.
There are various methods to transform negative values into positive ones, for example taking the absolute value. Previous work with GPR imaging suggests the use of the Hilbert transform[12] to extract and use the envelope of an A-scan, plotted in red in
figure 4.1.
Figure 4.1: A-scan and its envelope
The envelope of an A-scan x(t) is derived from the analytic signal of x(t). An analytic
signal is a complex signal, defined as:
xa(t) = A(t)e^(jφ(t))    (4.1)
where A(t) is the amplitude envelope of x(t), and φ(t) is the instantaneous phase.
A(t) is defined as:

A(t) = |xa(t)| = √(x²(t) + x̂²(t))    (4.2)
where x̂(t) is the Hilbert transform of x(t), defined as:

x̂(t) = (1/π) ∫ from −∞ to ∞ of x(η)/(t − η) dη    (4.3)
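In discrete form, eqs. 4.1-4.3 are usually evaluated through the FFT construction of the analytic signal (the standard numerical equivalent of the Hilbert transform); a sketch on a made-up toy A-scan, not data from this work:

```python
import numpy as np

def envelope(x):
    # A(t) = |x_a(t)|: build the analytic signal by zeroing negative
    # frequencies and doubling positive ones (discrete Hilbert transform).
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0     # keep the Nyquist bin for even n
    return np.abs(np.fft.ifft(X * h))

# Toy A-scan: a 2 GHz carrier with a known Gaussian envelope.
t = np.linspace(0, 5e-9, 1024)
a_true = np.exp(-((t - 2.5e-9) / 0.5e-9) ** 2)
x = a_true * np.cos(2 * np.pi * 2e9 * t)
env = envelope(x)      # recovers a_true up to small numerical error
```

Because the carrier frequency is well above the envelope bandwidth, the recovered envelope matches the true one closely, which is exactly the property exploited when converting bipolar A-scans to unipolar intensities.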
4.2
TFCM of two-dimensional images
The TFCM scheme is first applied to a binary image. The two disks of figure 3.11 are
segmented with a threshold value equal to 0.4, and the binary image is plotted in figure
4.2(a). Figure 4.2(b) shows the outcome of TFCM on figure 4.2(a), where a tolerance
value ∆ = 0.5 has been used.
Figure 4.2: Binary images: (a) Binary image of figure 3.11, (b) TFN image after TFCM,
with tolerance ∆ = 0.5
The TFCM method applied to a binary image detects the edges of the image itself. However, in real cases it is preferable not to use binary images, because image diversity is suppressed and some weak targets might not be detected. The method is therefore now tested on 8-bit gray level images. The 8-bit gray level image of the two disks is shown in figure 4.3(a). The targets are very well defined, and their intensity values range from 100 to 255 on a linear scale. Figure 4.3(b) shows the TFCM outcome when a tolerance ∆ = 5 is used. Choosing a tolerance value of 5 means that all pixels that differ in intensity by at least 5 from one or more neighbors will get a TFN greater than zero and will be visualized.
Figure 4.3: (a)Gray level image of the two disks, (b)TFN image after TFCM, with
tolerance ∆ = 5
Figures 4.4(a) to 4.4(d) show the results when different values of ∆ are used: 10, 15, 25 and 35. It is important to choose the correct tolerance ∆ in order to avoid selecting unwanted reflections as targets, as in the case of ∆ = 5, where the sidelobes are also detected. With a tolerance ∆ = 10, shown in figure 4.4(a), it is possible to detect the approximate contours of the two disks. The shapes are more ovoidal than in the case of a binary image, because the difference between adjacent pixels is more accentuated. As the value of ∆ increases, the sizes of the two disks decrease, as do their TFN values.
Figure 4.4: TFN images after TFCM, with tolerances: (a) ∆ = 10, (b) ∆ = 15, (c)
∆ = 25, (d) ∆ = 35
Despite the fact that the two targets are perfectly circular, the outcome of TFCM shows an elliptical target shape. This change of shape is due to the fact that the spatial resolution of radar images is not the same along the scan line and the array line (cross-scan line). The radar scans in only one mechanical direction, and along the scan line the data is sampled every 1.33mm, allowing a high spatial resolution of the image.
The spatial resolution is usually between 1/5 and 1/20 of the size of the target. Each disk in figure 4.3(a) has a diameter of 5cm, and the chosen pixel resolution is 0.5x0.5 cm2.
The resolution along the cross-scan line is accomplished by means of electronic steering,
and it is not as high as the resolution along the scan line. This is the reason why the two
disks are stretched in the vertical direction.
These tests have shown the ability of the method to extract the needed information, given a proper tolerance value. The data set used as input is, however, very simple: there is almost no noise and thus no challenge for the TFCM algorithm. Secondly, the method has been applied to linear intensity values, while GPR images are usually plotted on a logarithmic scale. Last but not least, a proper method of tolerance selection has to be developed, since a correct value of ∆ is crucial for the outcome of TFCM, especially when it is applied to noisier images.
In section 4.3 a new data set is used and the TFCM is analyzed for both linear and logarithmic scale images. Automatic tolerance value selection will be discussed in chapter 5, where three dimensional images are processed.
4.3
TFCM applied to linear and logarithmic images: a comparison
A deeper discussion of the TFCM method is now carried out with a more complex data set. Figure 4.5 shows four landmines in a row. They are, starting from the left: an NR22 C1 mine, a plastic mine with little metal content, a butterfly mine and a PMN mine.
Figure 4.5: Photo of 4 mines
Figure 4.6 shows the volume of the four mines after focusing, when segmentation with a threshold Ds = 288 is applied. The intensities of this volume are logarithmic positive values.
The mines do not have the same electromagnetic properties, and the second mine has a smaller volume than the other three, due to a lower intensity reflection. Furthermore, the left side of the volume shows a sort of tail: it is residual clutter left after background subtraction.
Figure 4.6: 3D view of 4 mines with Isosurfaces
This three dimensional image is the result of the stacking of eleven 2D horizontal slices.
The TFCM method is applied to the 7th horizontal slice of the 3D data matrix, since in this slice all four mines are visible.
Before discussing the results, it is necessary to describe the meaning of two thresholding variables which will be used from now on: the tolerance value ∆ and the threshold value T0.
∆ is the tolerance value which determines the outcome of the TFCM method. Its value represents the minimum contrast that neighboring pixels must have in order to be considered target pixels.
T0 is the threshold value which is used to segment a gray level image or to highlight targets. All the intensity values greater than T0 represent targets.
In TFCM, ∆ is certainly the most important variable, since it determines whether an object is detected or not. An image with a low dynamic range of intensity values should also have a low tolerance value ∆; otherwise, as will be shown later with logarithmic images, the targets will not be detected by TFCM.
The threshold variable T0 can be applied to the image before or after TFCM. For 2D images it is applied before TFCM, while for 3D images, as will be shown in chapter 5, it is applied after. The combination of the two variables results in improved detection.
The intensity scale of GPR images is usually shown in dB. However, TFCM has proved to work better with images with linear intensity values.
Both the linear and dB scale images of the horizontal slice have been converted to 8-bit gray level images, as shown in figures 4.7(a) and 4.8(a). The logarithmic image shows a higher contrast for the 2nd mine, but its TFCM transform does not give any valuable information, as shown in figure 4.8(b). The linear image, on the other hand, responds very well to the algorithm and, as shown in figure 4.7(b), the four mines are localized when a tolerance ∆ = 30 is used. Applying a tolerance ∆ means, in terms of TFCM, detecting all pixels which have a contrast of ∆ or higher with one or more of their neighbors. These pixels are, in the original image, situated at the edges of the targets and in their close surroundings.
If the intensity values of the targets are known, the gray level image can be segmented with a threshold equal to T0. The edges of the targets can then be easily detected with the TFCM method, as shown in figures 4.7(c) and 4.8(c). For the linear gray level image a threshold T0 = 90 has been chosen, while the logarithmic gray level image has a higher value, T0 = 200. In this case, since the logarithmic scale highlights lower intensity values, the contours of the third mine are better defined.
Figure 4.7: Linear Scale Images: (a) Original image in linear scale, (b) TFN image after
TFCM, with tolerance ∆ = 30, (c) TFN image after TFCM on binary image, segmented
with a threshold T 0 = 90, tolerance ∆ = 0.5
Figure 4.8: Logarithmic Scale Images: (a) Original image in dB scale, (b) TFN image
after TFCM, with tolerance ∆ = 30, (c) TFN image after TFCM on binary image,
segmented with a threshold T 0 = 200, tolerance ∆ = 0.5
Despite the good results obtained with binary images, binarization should be applied
only after detection, to extract the boundaries or to calculate the area of the targets. A
less discriminating approach consists in choosing a threshold T0 and setting all the pixels
with intensity values above T0 equal to the maximum gray level value. In this way the
detected targets are highlighted, and the targets with intensity values lower than T0 can
possibly still be detected with the TFCM method.
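The highlighting step described above can be sketched as follows. This is a minimal Python illustration (not the thesis' Matlab code), using a toy 2D image and an assumed 8-bit maximum gray level of 255:

```python
def highlight_above(image, t0, max_level=255):
    """Set every pixel with intensity above t0 to the maximum gray level,
    leaving weaker pixels untouched so TFCM can still pick them up."""
    return [[max_level if px > t0 else px for px in row] for row in image]

# Toy 2D gray-level image (illustrative values, not real GPR data)
img = [[12, 95, 40],
       [160, 30, 91],
       [20, 88, 200]]
print(highlight_above(img, 90))  # [[12, 255, 40], [255, 30, 255], [20, 88, 255]]
```

The pixels already above T0 saturate to white, while the weaker ones keep their original values for the subsequent TFCM pass.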
For the linear image the threshold T0 = 90 is chosen, in such a way that all 4 mines have
intensity values above the selected threshold. Then TFCM is applied with ∆ = 30. The
result is shown in figure 4.9(a). The contours of the objects are detected and there is
almost no clutter. When the threshold is set to a higher value, i.e. T0 = 150, only the
strongest object in the gray level image is detected (Fig. 4.9(b)). Figure 4.9(c) shows
the result: the edges are not as clear anymore, but the objects are clearly detected and
the image presents almost no clutter.
Figure 4.9: Linear Images, Depth Slice 7: (a) TFN image after TFCM, segmented with a threshold T0 = 90, tolerance ∆ = 30, (b) Original image with highlighted targets, T0 = 150, (c) TFN image after TFCM of image (b), tolerance ∆ = 30
The same procedure is applied to the logarithmic image (fig. 4.10(b)), where the threshold is set to T0 = 200. The outcome of TFCM with ∆ = 55 is shown in figure 4.10(a).
The edges of the four objects are clearly depicted. Some clutter appears, but it looks like
random noise with a punctiform structure, while the objects have a clear circular shape.
When the threshold is set to T0 = 230, the TFCM method is not able to give any valuable
information (fig. 4.10(c)).
Figure 4.10: Logarithmic Images, Depth Slice 7: (a) TFN image after TFCM, segmented with a threshold T0 = 200, tolerance ∆ = 55, (b) Original dB image with highlighted targets, T0 = 230, (c) TFN image after TFCM of image (b), tolerance ∆ = 30
These tests have shown that the TFCM method has good detection capabilities when applied to linear scale images, while the logarithmic images introduce too much noise and
reasonable results are obtained only when the gradient between targets and background
is high. The binary images show the best results.
The background noise, shown by punctiform TFN values, e.g. in figure 4.7(b), can be suppressed by setting those pixels to zero, since an object is usually represented by several
TFN pixels with similar values.
These results suggest continuing the research using only linear intensity images, applying
the TFCM first to gray level images with an automatically chosen tolerance value, and
then converting the outcome to a binary image and applying the TFCM again in order to
extract the edges of the objects.
5 Three-dimensional TFCM
The TFCM method exploits the correlation between neighboring pixels. In chapter 4
the two-dimensional method has been discussed. The 3D-TFCM method calculates the
intensity variations between neighboring voxels in a 3x3x3 matrix, resulting in a higher
correlation between neighbors than in the 2D case, as also shown by Torrione [24].
Torrione applies 3D-TFCM to a C-scan matrix and uses it to classify the targets.
In this chapter the method is applied to focused data in order to detect and visualize
targets in a volume.
This chapter is organized as follows. Section 5.1 describes the extension to 3D TFCM
and section 5.2 shows the application to two different 3D-volumes.
5.1 3D TFCM
The 3D TFCM method converts an intensity voxel of a volume into a TFN number. The
3D method differs from the two-dimensional one in several aspects. For each voxel an
N-26 connectivity is considered (see section 2.1.1, figure 2.1(f)). The voxel is centered at
(2,2,2) in the 3x3x3 matrix.
For simplicity and comparison, the same steps of section 2.3 are followed:
1. Convert the intensity matrix into a texture unit matrix using a tolerance value ∆.
The elements of the new quantized matrix take values from the set {-1, 0, 1}.
2. The central voxel is connected with its 26 surrounding voxels and is crossed by 13
unique vectors, as shown in figures 5.1 and 5.2. Each vector is composed of 3 voxels
and will be called a connectivity vector (CV).
Figure 5.1: 13 connectivity vectors in an N-26 connectivity set. The orange dot is the central voxel, while the blue dots are its 26 neighbors. The 5 planes contain the connectivity vectors.

Figure 5.2 shows in detail all 13 connectivity vectors (continuous lines in the figures), with the coordinates of the edge pixels with respect to the central pixel, located at coordinates (x, y, z).
[Figure content: panel (a) lists the 18 CV endpoints on the main planes, i.e. the face and edge neighbors such as (x±1, y, z), (x, y±1, z), (x, y, z±1), (x±1, y±1, z), (x±1, y, z±1) and (x, y±1, z±1); panel (b) lists the 8 corner neighbors (x±1, y±1, z±1).]
Figure 5.2: (a) CV of the main planes, (b) CV of the secondary planes
3. Associate to each connectivity vector its class value, as explained in section 2.2.2,
step iii. The central voxel is now characterized by a texture unit vector of 13 gray
level variations, each taking a class value from 1 to 4.
4. In 3D-TFCM the Initial Feature Number (IFN) is not calculated. Instead, each
texture unit vector is mapped to a texture feature number (TFN).
Given 13 elements, each taking a class value from 1 to 4, there are $4^{13}$ possible combinations. Torrione reduces this number in the following way: he considers equivalent those vectors with equal numbers of occurrences of the class values (1, 2, 3, 4),
independently of their position in the vector. Thus, by considering translational
and rotational invariance, the unique TFNs now have the number of possibilities
expressed as:

$$\binom{13+3}{3} = \frac{16!}{3!\,(16-3)!} = 560$$
5. The $4^{13}$ vectors have to be mapped to 560 unique TFNs. The mapping is based
on the prime factorization theorem, which states that each number greater than 1 can be
uniquely expressed as a product of prime numbers. Consider a 13-element vector
T with a texture class number in each element taking values 1 to 4, and let n(m)
represent the number of elements of T taking the value m. Let P(x) represent the
x-th prime: P(1) = 2, P(2) = 3, P(3) = 5, P(4) = 7. The unique TFN is calculated
in this way:

$$TFN_{3D} = \prod_{m=1}^{4} P(m)^{n(m)} \qquad (5.1)$$
The maximum number is now $7^{13}$, which is reduced to $5^{13}$ due to the mutual
dependence of the class numbers, given by:

$$\sum_{m=1}^{4} n(m) = 13 \qquad (5.2)$$
However, the total number of TFNs is 560, and $7^{13}$ is associated with the TFN value
560 (since it is the highest value that can be obtained).
Figure 5.3 plots the TFN Mapping Vector. The value indicated corresponds to the
mapping of $7^{13}$.
[Plot: the TFN Mapping Vector, i.e. the sorted prime products $TFN_{3D}$ versus TFN index; the marked data point (X: 560, Y: 9.689e+10) corresponds to the mapping of $7^{13}$.]
Figure 5.3: Distribution of the TFN Mapping Vector
The TFN vector of 560 elements has been created as follows. For each combination of the four class values, starting from

n(1) = 13, n(2) = n(3) = n(4) = 0    (5.3)

up to

n(1) = n(2) = n(3) = 0, n(4) = 13    (5.4)
given the constraint of equation 5.2, calculate the product of the prime numbers by using equation
5.1. Then, once the lookup table has been created, for each voxel in a volume
its unique vector of 13 elements is calculated, and equation 5.1 is used to find the
$TFN_{3D}$ value. This value is searched in the lookup table and the corresponding
index is the TFN number for the voxel. This procedure is applied to all voxels in
the volume.
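The lookup-table construction of equation 5.1 can be sketched in Python. This is a minimal illustration (not the thesis' Matlab code), under the assumption that the TFN is simply the 1-based position of the prime product in the sorted 560-entry table:

```python
from math import prod

PRIMES = {1: 2, 2: 3, 3: 5, 4: 7}  # P(m) for class values m = 1..4

def tfn3d_product(class_vector):
    """Equation 5.1: the product of P(m)^n(m) over the 13-element class vector."""
    return prod(PRIMES[m] for m in class_vector)

# Lookup table: every count combination (n1, n2, n3, n4) with n1+n2+n3+n4 = 13
# gives one unique prime product, so the table has C(13+3, 3) = 560 entries.
table = sorted({2**a * 3**b * 5**c * 7**d
                for a in range(14)
                for b in range(14 - a)
                for c in range(14 - a - b)
                for d in (13 - a - b - c,)})
tfn_index = {p: i + 1 for i, p in enumerate(table)}  # TFN values 1..560

print(len(table))                          # 560 unique TFNs
print(tfn_index[tfn3d_product([4] * 13)])  # all class 4 -> 7^13 -> TFN 560
```

Because prime factorizations are unique, no two count combinations collide, and the all-class-4 vector indeed maps to the highest TFN, 560, as stated above.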
5.2 3D TFCM applied to Volumes
Data set: 4 landmines
This three-dimensional technique is now applied to the four targets in a row, which
were the topic of the 2D-TFCM study in the previous chapter. Figure 4.6 showed a
volume obtained with Isosurfaces, assigning a threshold value manually, based on visual
inspection. Figure 5.4 shows the four targets before and after applying 3D-TFCM respectively.
Figure 5.4: 3D view of 4 mines with isosurfaces: (a) image, Ds = 288; (b) image after TFCM, Ds = 30
The procedure followed to create the volume in figure 5.4(b) is schematized in figure
5.5: the original 3D matrix is transformed into an 8-bit gray-level one, called the Gray-Level
matrix. The 3D-TFCM method is then applied with a tolerance ∆ = 30 to the gray level
matrix and a TFN matrix with values in the range [0, 560] is obtained. All the voxels
of the original gray level matrix associated with TFN values equal to zero are set to zero
and a new gray level matrix is obtained.
ORIGINAL 3D MATRIX → [8-bit gray level conversion] → GRAY-LEVEL 3D MATRIX → [3D-TFCM] → TFN 3D MATRIX → [discriminate TFN = 0] → NEW GL 3D MATRIX

Figure 5.5: 3D-TFCM Matrix Transform
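The last step of the figure 5.5 chain can be sketched in Python; this is a minimal stand-in for the Matlab processing, with nested lists standing in for 3D arrays:

```python
def suppress_background(gray, tfn):
    """Last step of the figure 5.5 chain: voxels whose TFN value is 0
    (no gray-level variation in their 3x3x3 neighborhood) are treated
    as background and set to zero in the new gray-level matrix."""
    return [[[g if t != 0 else 0 for g, t in zip(g_row, t_row)]
             for g_row, t_row in zip(g_plane, t_plane)]
            for g_plane, t_plane in zip(gray, tfn)]

gray = [[[10, 20], [30, 40]]]  # toy 1x2x2 gray-level "volume"
tfn = [[[0, 5], [1, 0]]]       # toy TFN volume of the same shape
print(suppress_background(gray, tfn))  # [[[0, 20], [30, 0]]]
```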
Both volumes in figure 5.4 visualize the four targets correctly. However, the selection
of the tolerance value ∆ and the threshold for the visualization have been made on
the basis of a priori knowledge about the presence and the position of the targets
in the scanned area. In a real situation the position and the strength of the targets are
unknown. It is then necessary to find an automatic detection method based on adaptive
threshold techniques, both for the tolerance ∆ and for the threshold Ds.
The next section analyzes the 3D matrices and describes the threshold method that has been
suggested.
5.2.1 Automatic Threshold Selection
The idea behind the TFCM is to find an optimal value of ∆ so that the target gets high
TFN values while the background is mapped to lower ones. In order to obtain this
value automatically, the distribution of the gray level variations over the whole 3D matrix
has to be studied.
For each voxel, centered at (2,2,2) in its 3x3x3 neighborhood, the difference with its 26 neighbors
is calculated, and the maximum value is collected and associated with the voxel. The 3D
image is thus mapped to a max-difference 3D image. The threshold selection is histogram-based, as introduced in section 2.1.3. The histogram of the 3D image is calculated slice by
slice, as shown by the red plots in figure 5.6, while the blue line represents the maximum
occurrence of each gray level. Interpolation has been used to eliminate the zeroes in the
histogram. The tolerance value ∆ is selected from this histogram.
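The max-difference mapping described above can be sketched as follows; this is a minimal pure-Python version on nested lists, and leaving the border voxels at zero is an assumption (the thesis does not state how borders are handled):

```python
def max_difference(volume):
    """For each interior voxel, the maximum absolute intensity difference
    with its 26 neighbors (N-26 connectivity); border voxels are left at 0."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    out = [[[0] * X for _ in range(Y)] for _ in range(Z)]
    for z in range(1, Z - 1):
        for y in range(1, Y - 1):
            for x in range(1, X - 1):
                c = volume[z][y][x]
                out[z][y][x] = max(abs(c - volume[z + dz][y + dy][x + dx])
                                   for dz in (-1, 0, 1)
                                   for dy in (-1, 0, 1)
                                   for dx in (-1, 0, 1)
                                   if (dz, dy, dx) != (0, 0, 0))
    return out

# Toy 3x3x3 volume: uniform value 5 with one bright corner voxel
vol = [[[5 for _ in range(3)] for _ in range(3)] for _ in range(3)]
vol[0][0][0] = 9
print(max_difference(vol)[1][1][1])  # centre voxel: max difference is 4
```

The histogram of this max-difference volume is what the slice-by-slice plots of figure 5.6 are computed from.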
Figure 5.6: Histogram of max GL Difference
The histogram of a linear 3D GPR max-difference image has a peak at the low gray
values and then decreases to zero. Based on this property of the images, the
triangle threshold algorithm described in section 2.1.3 is chosen for global thresholding.
Once ∆ has been calculated, the 3D-TFCM method can be applied and a TFN 3D matrix
is generated. Low feature numbers in this matrix mean zero or little gray level variation
in a volume of 27 neighboring voxels. If a conservative method is chosen, then all the
intensity voxels of the gray level image that are associated with a value TFN = 0 in the
TFN image are considered background and get a value equal to zero. A less conservative
approach can set to zero all the voxels whose TFN number is below a certain
value, e.g. TFN = 10. In this report the conservative method has been chosen.
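A sketch of the triangle algorithm on a 1D histogram follows. The thesis refers to section 2.1.3 for the exact formulation; this is one common variant, which picks the bin with the largest distance to the line drawn from the histogram peak to the last non-empty bin, and so suits the one-peak, decaying histograms of the max-difference images:

```python
def triangle_threshold(hist):
    """Triangle algorithm: draw a line from the histogram peak to the last
    non-empty bin and return the bin with the largest perpendicular
    distance to that line."""
    b_peak = max(range(len(hist)), key=lambda i: hist[i])
    b_end = max(i for i, h in enumerate(hist) if h > 0)
    # Line through (b_peak, hist[b_peak]) and (b_end, hist[b_end])
    dx, dy = b_end - b_peak, hist[b_end] - hist[b_peak]
    norm = (dx * dx + dy * dy) ** 0.5
    best, best_d = b_peak, -1.0
    for i in range(b_peak, b_end + 1):
        d = abs(dy * (i - b_peak) - dx * (hist[i] - hist[b_peak])) / norm
        if d > best_d:
            best, best_d = i, d
    return best

# Toy decaying histogram (illustrative counts, not the thesis data)
hist = [0, 100, 60, 30, 12, 5, 2, 1, 0, 0]
print(triangle_threshold(hist))  # -> 4
```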
The histogram of this new gray level image is shown in figure 5.7(a). Again, the triangle
algorithm is used to determine the optimal threshold value Ds that visualizes the targets.
This value is equal to 53. The result is shown in figure 5.7(b).
Figure 5.7: (a) Histogram of new gray level matrix, (b) Visualization of 4 targets
In this figure it is clearly possible to see 3 targets with similar shapes. At coordinates
(0, 1.165, 0.18) a small object is detected. In a situation with no a priori knowledge, it
would not be possible to infer with certainty that it is a target.
This result shows that an automatic threshold algorithm can be used at the initial stage
of the detection procedure, but it is then necessary to start a second procedure, using
an adaptive threshold algorithm, which changes with the variation of the contrast in the
matrix.
5.2.2 Volume splitting
A local threshold has to be applied separately to every target and to those areas that
after the first detection showed only clutter or no signal at all.
The first step consists in isolating the targets. There are various ways to create a 3D-window around a target. Here the connectivity property of the pixels is used.
First of all, the new gray level matrix is segmented into a binary matrix with the automatic
global threshold found in the previous section. All pixels of this matrix which are N-26
connected are extracted, and the outcome of this operation gives 88 objects in the matrix,
shown in figure 5.8.
Figure 5.8: N26 Connected Objects
The several small objects that occupy only one depth slice of the matrix have to be
removed, since a target is always present in more than one slice. The small objects are
removed by the morphological operation of opening, introduced in section 2.1.4, where a
vertically oriented 3x3x3 structuring element is used. All objects with less than 3 pixels
in depth are removed. The outcome is shown in figure 5.9.
Figure 5.9: N26 connected objects after opening
There are now 8 objects in the scene. A 3D-window that surrounds each object is
extracted with the Matlab function 'regionprops' and its 'BoundingBox' property, which
give the smallest 3D-window that encapsulates the object. It is necessary that the whole
matrix is studied, thus the 3D-windows are enlarged until they partially overlap with
each other.
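The connected-component extraction and bounding-box step can be sketched in pure Python, as a stand-in for Matlab's `regionprops` with the 'BoundingBox' property; the depth filter mirrors the "at least 3 depth slices" rule described above:

```python
from collections import deque

def label_n26(binary):
    """Label N-26 connected components in a 3D binary matrix (nested lists);
    return, per component, its voxel list and bounding box
    ((zmin, ymin, xmin), (zmax, ymax, xmax))."""
    Z, Y, X = len(binary), len(binary[0]), len(binary[0][0])
    seen = [[[False] * X for _ in range(Y)] for _ in range(Z)]
    comps = []
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                if binary[z][y][x] and not seen[z][y][x]:
                    voxels, q = [], deque([(z, y, x)])
                    seen[z][y][x] = True
                    while q:  # breadth-first flood fill over the 26 neighbors
                        cz, cy, cx = q.popleft()
                        voxels.append((cz, cy, cx))
                        for dz in (-1, 0, 1):
                            for dy in (-1, 0, 1):
                                for dx in (-1, 0, 1):
                                    nz, ny, nx = cz + dz, cy + dy, cx + dx
                                    if (0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X
                                            and binary[nz][ny][nx] and not seen[nz][ny][nx]):
                                        seen[nz][ny][nx] = True
                                        q.append((nz, ny, nx))
                    lo = tuple(map(min, zip(*voxels)))
                    hi = tuple(map(max, zip(*voxels)))
                    comps.append((voxels, (lo, hi)))
    return comps

def deep_enough(comp, min_depth=3):
    """Keep only objects present in at least min_depth depth slices."""
    (zmin, _, _), (zmax, _, _) = comp[1]
    return zmax - zmin + 1 >= min_depth

# Toy 4x4x4 binary volume: one 4-slice column plus one isolated voxel
vol = [[[0] * 4 for _ in range(4)] for _ in range(4)]
for z in range(4):
    vol[z][0][0] = 1
vol[0][3][3] = 1
comps = label_n26(vol)
print(len(comps))                          # 2 objects found
print(sum(deep_enough(c) for c in comps))  # 1 survives the depth filter
```

The surviving bounding boxes would then be enlarged until they partially overlap, as described above.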
5.2.3 Adaptive Thresholding
Once the targets are isolated, the whole 3D-TFCM algorithm is applied to each 3D-window.
The local threshold is first applied to the second target, the weakest one.
Figure 5.10 shows the difference in intensity of target 2 with respect to the other targets:
Figure 5.10: Intensity of the four targets along the scan line
The intensity of the second target is only 1/8 of the intensity of the strongest target, and
only 1/3 stronger than the clutter. It is expected that the 3D-TFCM method enhances the
second target with respect to the background, since the correlation of the pixels, and hence
the TFN numbers, are higher for the target than for the background.
Threshold selection
The histograms of the 3D-window volumes have a different distribution, since a large
part of the background is cut off. Two threshold selections for the 3D-TFCM have been
applied: the triangle algorithm and the mean value algorithm. Figure 5.11(a) shows the
histogram of the max-difference matrix, and figure 5.11(b) the histogram of the new gray-level image
after applying 3D-TFCM.
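The mean value algorithm on a 3D-window can be sketched as follows; whether the zero (background) voxels are excluded from the average is not specified in the text, so excluding them here is an assumption:

```python
def mean_threshold(volume):
    """Mean gray-level algorithm: the threshold is the average intensity of
    the non-zero voxels in the 3D-window (zeros are assumed to be background
    so the cut-off is not dragged down by empty space around the target)."""
    vals = [v for plane in volume for row in plane for v in row if v > 0]
    return sum(vals) / len(vals) if vals else 0

win = [[[0, 10], [30, 0]]]   # toy 1x2x2 window
print(mean_threshold(win))   # (10 + 30) / 2 = 20.0
```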
Figure 5.11: (a) Histogram of max difference, pre-TFCM; (b) Histogram of image, post-TFCM
Figure 5.12(a) shows the 2nd target obtained after the first detection procedure, while
5.12(b) shows the result obtained with the mean value threshold.
Figure 5.12: 2nd target: (a) texture image after 3D-TFCM, (b) texture image after second iteration
Figure 5.13 shows all four targets after local 3D-TFCM has been applied. The
sizes of the targets are now comparable, thus target 2 can now be classified as a target.
Figure 5.13: 4 targets, (a)-(d), after local 3D-TFCM
5.2.4 Analysis of the two threshold methods
In this section the outcomes of local 3D-TFCM are discussed and the two threshold
algorithms are compared.
It is important to notice that the volumes can be visualized based either on their intensity
values or on their feature numbers. An analysis of intensity values is followed by an
analysis of feature numbers.
Intensity value Threshold
The histogram of each 3D-window is plotted in figure 5.14. Targets 1 and 3 have similar
distributions; the histogram of target 2 presents some noise, but its distribution is similar
to those of the other two targets. The histogram of target 4 is completely different. For target 4 a
3D-window 1 meter long has been used; the target is situated between 0.5 and 1 meter,
and between 0 and 50 cm only background is present. This justifies the high occurrences of
low gray level values. The distributions of the histograms suggest that the mean threshold
algorithm should perform better than the triangle algorithm (except for target 4).

Figure 5.14: Histograms of 4 targets
The two groups of volumes are plotted: figure 5.15 shows the results obtained with the
mean threshold, while figure 5.16 shows the results obtained with the triangle algorithm.
For each volume the threshold has been indicated.
Figure 5.15: Mean Threshold: (a) Target 1, Ds = 70; (b) Target 2, Ds = 56; (c) Target 3, Ds = 65; (d) Target 4, Ds = 41
Figure 5.16: Triangle Threshold: (a) Target 1, Ds = 51; (b) Target 2, Ds = 10; (c) Target 3, Ds = 49; (d) Target 4, Ds = 25
As expected, figure 5.15 shows a better result than figure 5.16. The values of the thresholds give very important information. Targets 1, 2 and 3 in figure 5.15 have similar
thresholds, while target 4 has a lower one. The same holds for target 2 in figure 5.16. This
also explains the bigger volume occupied by these two targets.
At this point it is suggested to calculate the mean value of the thresholds and to correct
those thresholds with high variance.
The thresholds for targets 4 and 2 are recalculated and the results plotted again here:
Figure 5.17: (a) Target 4 with Ds = 65, (b) Target 2 with Ds = 51
Notice the high threshold value used to visualize target 2. This value is the gray-level
transform of the original one, which is much smaller: converted back to the original
scale, it would correspond to a threshold equal to 28. This is due to the fact that the
3D-TFCM and the histogram-based threshold work only with gray level values.
Feature number Threshold
The 3D-TFCM method transforms a gray level image into a TFN image. This new image
can be used to create the volume images just as the gray level one can.
The histograms of the TFN matrices are plotted in figure 5.18. The high occurrences at
the low gray-level values correspond to the background. With this kind of histogram it is
easier to find a threshold with the triangle algorithm.
Figure 5.18: Histograms of texture targets
Figures 5.19 and 5.20 show the imaging results when the two threshold algorithms are used.

Figure 5.19: Triangle Threshold: (a) Ds = 23, (b) Ds = 26, (c) Ds = 15, (d) Ds = 12
Figure 5.20: Mean Threshold: (a) Ds = 26, (b) Ds = 29, (c) Ds = 24, (d) Ds = 16
Also for TFN images, the mean threshold algorithm gives a better result: the volumes are smoother, the variance between the threshold values is acceptable, and a
correction is not needed.
Comparison of the two types of visualization
In general, the feature number representation gives a better result than the intensity
value one. When the triangle threshold is used, figure 5.16(b) shows that the intensity
image cannot plot a realistic volume, while the feature images in figure 5.20 show more
reasonable volumes. When the mean threshold is used, both types of images show targets
with similar volumes. In this case the intensity value images have a shape stretched
along the length, while the feature number images show a rounder shape. This difference
is due to the fact that gray level images are used instead of binary ones. When binary
images are used, the shapes are more realistic, as shown in figure 5.9 for targets 3 and 4.
Data set: 7 Targets
In this section a second data set is introduced. Figure 5.21 shows the picture of the
7 targets used for the experiment. At the extremes and at the center of the picture there
are metal discs, which give high reflections, while the other 3 objects are weaker. In
particular, the second object from the left is an empty pipe, and its reflection is very
weak, as shown by the two-dimensional plot of one depth slice in figure 5.22.
Figure 5.21: Photo of 7 targets
Figure 5.22: Horizontal slice: (a) Linear intensity values; (b) Logarithmic intensity values
Figure 5.23 shows the 3D image of the 7 targets when no TFCM method is used. The
intensity values are in logarithmic scale and the image has been segmented with a threshold Ds = −10 dB.
In the image only 6 targets are found and one of them, the second from the left, is very
weak and does not appear in more than one slice of the 3D matrix. Also the white disk
on the right is very weak in this image.
Figure 5.23: Volume of 7 targets with intensity threshold Ds = −10dB
The TFCM method will now be applied in order to detect all the targets and eventually
extract their volumes.
The algorithms used to process this data set are the same as those used in the previous section, with the only difference that a double interpolation has been used to generate the
histograms, resulting in a smoother curve and a better selection of the threshold. Only
the best results are shown here. Figure 5.24 shows the intensity volume of the 7 targets
with its histogram.
Figure 5.24: (a) Histogram of image, (b) Intensity value image after 3D-TFCM, triangle threshold
There are several ghosts in this image, which looks stretched in depth. Not all objects
are resolved and the shapes are not representative of the objects.
The TFN image is processed with the two types of threshold, triangle and mean. Figures
5.25 and 5.26 show the results.
Figure 5.25: TFN Image, Triangle Threshold
Figure 5.26: TFN Image, Mean Threshold
In this case, unlike for the 4 targets, after the first iteration the mean threshold
performs better than the triangle threshold, showing all 7 targets with fewer ghost
images. This difference is due to the choice of the double interpolation in the histogram.
The mean threshold, however, is not able to completely resolve the two central disks, and
several small ghosts are still plotted.
Figure 5.27 shows the outcome of morphological operations on the previous figure:
Figure 5.27: N26 connected objects after opening
All seven targets can now be extracted and used as input for a successive study, e.g.
target classification.
5.3 Discussion of the results
The results obtained with the two data sets show that the TFCM method improves the
detection of targets, reveals weak targets and at the same time eliminates clutter.
Once the objects are detected with TFCM and a binary image is created, it is possible to notice that the shapes in the radar images are similar to the real ones. If, for
example, the last two mines of figure 4.5 in section 4.3 are compared with their corresponding radar images, shown in figure 5.28, the similarity is clear.
Figure 5.28: Binary images of: (a) Butterfly mine, (b) PNM mine
This is an important result obtained with TFCM, and it suggests further investigations in
the area of shape extraction.
The strength of the TFCM also lies in the good selection of the tolerance value ∆
and then in the choice of a good threshold Ds. The two thresholds have been chosen
on the basis of the intensity, the gradient of the intensity, and the texture feature number distributions of the 3D matrices. The histograms have been derived layer by layer and it has
been found that, despite the decrease of magnitude with increasing depth, each slice
maintains the same distribution.
The histograms of texture feature numbers have a nice bimodal distribution, which allows
an easy choice of the threshold value by means of the triangle algorithm.
The algorithms are quite fast: it takes about 117 seconds to completely process the
data set of section 5.2.1, whose input is a matrix of 73x18x11 values, on an Intel Core 2
T5600 processor at 1.83 GHz with 2 GB of RAM.
5.3.1 Possible improvements
It would be interesting to rotate or to permute the targets of an image. While the
histogram of the intensity values would remain the same, the tolerance value ∆ would
change since the neighbors of a voxel change.
In addition, when a new data set is used as input, the histogram distribution should be
studied in order to select the proper threshold algorithm.
6 Conclusions

6.1 Discussion
The steps followed in this thesis are schematized in figure 6.1.
This research demonstrates that texture feature methods improve object detection by
exploiting the correlation between neighboring voxels in a 3D image.
The global threshold Texture Feature Coding Method (TFCM) discriminates the targets,
leaving small clutter, which an iterative process then eliminates.
Two types of thresholds are needed to detect objects: a tolerance value ∆ that discriminates a target from the background with the texture feature method, and a second
threshold that, applied to the outcome of the first operation, detects the targets, which
are then plotted as a volume.
After working on manual threshold selection to test the potential of the TFCM, two
histogram-based threshold methods have been suggested and applied iteratively: the
adaptive triangle algorithm and the mean gray level algorithm.
The iterative segmentation method proposed works with gray level 3D images, and the
outcome is still a gray level image. Binarization is applied only in image morphology,
in order to extract the 3D windows that encapsulate each detected object. The choice
of working with gray level values is due to the fact that weak objects may not be found
with global threshold methods, and a binarization in the first iteration would distort the
outcome.
In the first iteration a global threshold is used to detect the strongest targets, which are
windowed and separated from the context. The remaining areas are also windowed. The
split data matrix is the input for a second iteration, where local thresholding is applied.

RAW DATA: C-SCAN → [diffraction stack algorithm] → BIPOLAR FOCUSED IMAGE → [envelope extraction & gray level conversion] → GRAY LEVEL IMAGE → [2D or 3D TFCM] → TFN IMAGE → [gray level image extraction] → GRAY LEVEL IMAGE WITH LESS NOISE → [segmentation] → BINARY IMAGE → [morphology, detection] → [volume splitting, GL conversion] → MULTIPLE VOLUMES IMAGE → [adaptive TFCM] → TFN IMAGE → [gray level image extraction] → GRAY LEVEL SPLIT IMAGE

Figure 6.1: Flow chart of the signal and image processing steps
The weakest targets now assume a volume comparable to the strongest ones, as shown in
figure 5.12, and the clutter is eliminated on the basis of the threshold values. Local thresholding normalizes the local values to an 8-bit gray level scale. The thresholds of the targets
have small variance, while those of the clutter have high variance. Since the 3D window
perfectly encapsulates the area under study, centered on the object, the volumes which
contain only clutter have very low thresholds.
A third iteration could be used to decrease the variance between the thresholds of the
detected targets, and improve in this way the plotted volumes.
The data set obtained after the second iteration can be used as input to a classification
method. The texture feature numbers associated with each voxel can be used to extract
statistical features, while the shape features can be directly extracted from the volume.
This study has been performed with above-surface targets. A degradation of the results
is expected when buried objects are used. However, given the fact that the magnitude
of the signal is relative in TFCM, the method is probably still able to detect the targets.
6.2 Algorithm improvements and suggestions for future research
The TFCM method has been applied by Torrione [24] to unfocused C-scan data, and statistical features were extracted for target classification.
This thesis focuses on object detection and imaging, thus the algorithm has been adapted
to the focused data matrix in order to extract the volumes of the objects.
Future research could continue from Torrione's results and extract from the unfocused
matrix all those voxels that are classified as targets. Then focusing should be applied in
order to obtain 3D GPR images.
Appendix B introduces a study of TFCM as a surface subtraction method for unfocused
data. The TFCM, indeed, is a type of edge detection method. If the targets are close to
the surface, it can extract for each B-scan the shape of the surface. Then, if the
method is extended to the 3D case, the correlation between neighboring pixels increases
and the whole surface can be extracted.
Target classification is the next step that has to be performed in order to validate these
results. A method similar to the one suggested by E. Ligthart [12] can be used, extracting the statistical and structural features from the TFCM and the shape features from
the original image.
Other types of edge detection methods which are intensively used in optical images,
e.g. Gaussian derivatives, could be adapted to 2D and 3D radar images, possibly giving a
better performance than TFCM.
A Example of 2D TFCM
Consider this 3x3 pixel image:

63 28 45
88 40 35
67 40 21
Step 1: Convert the image into a texture unit image, with ∆ = 1:

1 -1  1
1  x -1
1  0 -1

Here the central pixel has been indicated with an x. Its initial value is 0 and in the last
step x will get the TFN value.
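Step 1 can be sketched in Python for this example matrix (a minimal illustration, with the centre marked 0 instead of x so the result stays numeric):

```python
def quantize(image, delta):
    """Step 1 of 2D TFCM on a 3x3 window: map each neighbor to 1 (brighter
    than the centre by more than delta), -1 (darker by more than delta)
    or 0 (within the tolerance). The centre itself is marked 0 here."""
    c = image[1][1]
    return [[0 if (i, j) == (1, 1)
             else 1 if image[i][j] > c + delta
             else -1 if image[i][j] < c - delta
             else 0
             for j in range(3)] for i in range(3)]

img = [[63, 28, 45],
       [88, 40, 35],
       [67, 40, 21]]
print(quantize(img, 1))  # [[1, -1, 1], [1, 0, -1], [1, 0, -1]]
```

This reproduces the texture unit image of step 1 above.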
Step 2: Extract the CTU (the plus-shaped set of neighbors):

   -1
 1  0 -1
    0

and the DTU (the diagonal neighbors):

 1    1
    0
 1   -1
Step 3: Convert Quantized Difference Vectors to Gray-Level Class Numbers.
We have to give a value between 1 and 10 to α and β.
We consider first the CTU:
Vertical vector:
[-1,0,0] → [(-1,0),(0,0)] → |−1−0| ≥ 0 ∪ |0−0| = 0 → Type 2
Horizontal vector:
[1,0,-1] → [(1,0),(0,-1)] → (1−0) > 0 ∪ (0−(−1)) > 0 → Type 3
β is found from the DTU in the same way:
Diagonal vector (45°):
[1,0,1] → [(1,0),(0,1)] → (1−0) > 0 ∪ (0−1) < 0 → Type 4
Cross-diagonal vector (135°):
[1,0,-1] → [(1,0),(0,-1)] → (1−0) > 0 ∪ (0−(−1)) > 0 → Type 3
Summarizing: α = (2, 3), β = (4, 3).
Step 4: Associate an initial class number (ICN) to α and β:

ICNα = 6,  ICNβ = 9    (A.1)
Step 5: Calculate the TFN number of the central pixel of the matrix introduced in Step 1. ICNα indexes the columns of the TFN table, while ICNβ indexes its rows:

TFN = 43    (A.2)

This process is performed at each pixel location, so that the TFN image has the same size as the intensity image.
B  TFCM of unfocused data
When the TFCM method is applied to unfocused data, it not only detects the object but also appears to be a good surface detection method, as shown later in this appendix.
Figure B.1(a) shows the intensity image of the B-scan of channel 8 of the two disks introduced in chapter 3. When TFCM is applied to it with a tolerance value ∆ = 10, a texture image is created, shown in figure B.1(b):
Figure B.1: (a) Intensity image of B-scan 8; (b) TFN image after TFCM, ∆ = 10
If both the texture and intensity images are zoomed in, it is possible to notice that the threshold has eliminated two of the five hyperbolae.
Figure B.2: Zoomed TFCM of B-scan 8 with ∆ = 10
If the tolerance value is decreased to ∆ = 5, four hyperbolae are clearly visible.
Figure B.3: TFCM of B-scan 8 with ∆ = 5
TFCM as a method of background removal
TFCM is able to detect the edges of objects. The reflection of the surface is very strong; therefore the TFN image will present high and continuous feature numbers in the area corresponding to the surface.
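A surface picker exploiting these high, continuous feature numbers can be sketched as follows. This is an illustrative assumption, not the thesis implementation: the function `pick_surface`, the toy TFN image, and the threshold of 40 are all made up here for demonstration.

```python
import numpy as np

def pick_surface(tfn_image, threshold):
    """Return, for each column (A-scan), the first row index where the
    texture feature number exceeds the threshold, taken as the
    air-ground interface; -1 where no sample qualifies."""
    mask = tfn_image > threshold
    return np.where(mask.any(axis=0), mask.argmax(axis=0), -1)

# Toy TFN image with a flat "surface" of high feature numbers at row 3.
tfn = np.zeros((8, 5), dtype=int)
tfn[3, :] = 50
tfn[3, 4] = 0          # one A-scan with no strong texture
print(pick_surface(tfn, threshold=40))
# [ 3  3  3  3 -1]
```

On real data the per-column picks would additionally be smoothed or median-filtered, since the surface varies slowly from one A-scan to the next.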
Figure B.4(a) shows the B-scan of four mines buried in the ground. Two hyperbolae are visible at depth = 60, but their intensity is quite low. When the TFCM method is applied with tolerance ∆ = 1, the whole area of the surface gets high values, and the area of the two hyperbolae is highlighted, as shown in figure B.4(b). The high values at coordinate (30,60) suggest that the strong reflection is due to a metallic object, or to more than one object.
Figure B.4: (a) B-scan; (b) TFN image after TFCM, tolerance ∆ = 1
Figure B.5(a) shows the surface extracted with the TFCM method, and figure B.5(b) shows the final result. The surface is not perfectly subtracted because it is suspected that an object is present at coordinates (10,43). This hypothesis is based on the high TFN values of the TFCM in figure B.4(b) at the same coordinates.
Figure B.5: (a) Surface detected with TFCM; (b) B-scan after surface subtraction
If the TFCM method is applied iteratively to the B-scan of figure B.5(b), a large part of the background is removed, as shown in figure B.6(b).
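The iterate-and-subtract scheme can be sketched as follows. Everything here is an illustrative assumption: `texture_activity` is a simplified stand-in for the full TFCM of Appendix A (it merely counts 4-neighbours differing by more than the tolerance), and the values of `delta` and `thresh` are invented for the toy example.

```python
import numpy as np

def texture_activity(img, delta):
    """Simplified stand-in for the TFN image: per pixel, count the
    4-neighbours whose value differs from it by more than delta.
    (In the thesis the full TFCM plays this role.)"""
    act = np.zeros(img.shape, dtype=int)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbour = np.roll(img, shift, axis=(0, 1))
        act += np.abs(img - neighbour) > delta
    return act

def iterative_background_removal(bscan, delta, thresh, iterations=2):
    """Repeatedly compute the texture image and zero out low-texture
    (background) samples, mimicking the iteration of figure B.6."""
    out = bscan.copy()
    for _ in range(iterations):
        act = texture_activity(out, delta)
        out[act < thresh] = 0.0      # smooth background is removed
    return out

# Toy B-scan: weak noise plus a strong 2x2 target.
rng = np.random.default_rng(1)
bscan = rng.uniform(-0.3, 0.3, size=(10, 10))
bscan[4:6, 4:6] = 10.0
cleaned = iterative_background_removal(bscan, delta=1.0, thresh=1)
print(cleaned[4, 4], cleaned[0, 0])   # target kept, background zeroed
```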
Figure B.6: 2nd iteration: (a) TFCM applied to figure B.5(b), ∆ = 55; (b) outcome after background subtraction
Figure B.6(b) shows the potential of this method also as a background remover. However, particular care has to be taken in this case, since shallowly buried objects are not clearly separable from the air-ground interface, and a removal based on TFCM may also remove the objects.
Bibliography
[1] The Cambodian Land Mine Museum, www.cambodialandminemuseum.org
[2] International Campaign to Ban Landmines, www.icbl.org
[3] Landmine Monitor Report, 2007, www.icbl.org/lm/2007/es/toc.html
[4] United Nations: The Convention on Certain Conventional Weapons, www.ccwtreaty.com
[5] Apopo Project, www.apopo.org
[6] http://en.wikipedia.org/wiki/Ground-penetrating_radar
[7] B. Scheers, Ultra-wideband ground penetrating radar with application to the detection of anti-personnel landmines, Ph.D. thesis, Catholic University of Louvain - Royal Military Academy, Belgium, 2001.
[8] A. Yarovoy, P. Aubry, P. Lys, L. Ligthart, UWB array-based radar for landmine detection, Proc. of the 3rd European Radar Conference, 13-15 September 2006, Manchester, UK, pp. 186-189.
[9] A.G. Yarovoy, T.G. Savelyev, P.J. Aubry, P.E. Lys, L.P. Ligthart, UWB Array-Based Sensor for Near-Field Imaging, IEEE Transactions on Microwave Theory and Techniques, Vol. 55, No. 6, pp. 1288-1295, June 2007.
[10] A. Yarovoy, Ultra-Wideband Radars for High-Resolution Imaging and Target Classification, Proc. of the 4th European Radar Conference, October 2007, Munich, Germany.
[11] L. van Kempen, H. Sahli, Ground Penetrating Radar Data Processing: A Selective Survey of the State of the Art Literature, Technical report (1999), Vrije Universiteit Brussel, Faculty of Applied Sciences ETRO, Brussel.
[12] E. Ligthart, Landmine Detection in High Resolution 3D GPR Images, MSc. thesis (2003), Delft University of Technology, Department of IRCTR, Delft.
[13] E.M. Johansson, J.E. Mast, Three-dimensional ground penetrating radar imaging using synthetic aperture time-domain focusing, Lawrence Livermore National Laboratory, Livermore, 1994.
[14] T. Savelyev, A. Yarovoy, L. Ligthart, Experimental Evaluation of an Array GPR for Landmine Detection, Proc. of the 4th European Radar Conference, EuMA 2007, Munich, Germany, pp. 220-223.
[15] T.G. Savelyev, A.G. Yarovoy, L.P. Ligthart, Weighted Near-Field Focusing in an Array GPR for Landmine Detection, Electromagnetic Theory Symposium, EMTS 2007, July 26-28, 2007, Ottawa, ON, Canada.
[16] Data Processing and Imaging in GPR System Dedicated for Landmine Detection, Subsurface Sens. Technol. Applicat., Vol. 3, No. 4, pp. 387-402, October 2002.
[17] Energy Focusing Ground Penetrating Radar (EFGPR) Overview, Geo-Centers, 28
January 2003
[18] D.J.Daniels, Ed., Ground Penetrating Radar, 2nd ed., London, UK, IEE, 2004
[19] M.H. Horng, Texture Feature Coding Method for Texture Classification, Opt. Eng., Vol. 42, No. 1, pp. 228-238, January 2003.
[20] M.H. Horng, Y.N. Sun, X.Z. Lin, Texture feature coding method for classification of
liver sonography, Computerized Medical Imaging and Graphics, Vol. 26, Nr. 1, pp.
33-42, Jan. 2002, Elsevier
[21] M. Hall-Beyer, GLCM Texture: A Tutorial, v. 2.3, October 2000, http://www.cas.sc.edu/geog/rslab/Rscc/mod6/6-5/texture/tutorial.html
[22] D.C. He, L. Wang, Texture Feature Extraction from Texture Spectrum, Geoscience and Remote Sensing Symposium, IGARSS '90, 'Remote Sensing Science for the Nineties', 10th Annual International, 20-24 May 1990, pp. 1987-1990.
[23] A. Al-Janobi, Performance evaluation of cross-diagonal texture matrix method of texture analysis, Pattern Recognition 34, pp. 171-180, 2001.
[24] P. Torrione, L.M. Collins, Texture Features for Antitank Landmine Detection Using Ground Penetrating Radar, IEEE Transactions on Geoscience and Remote Sensing, Vol. 45, No. 7, pp. 2374-2382, July 2007.
[25] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley Publishing Company, ISBN 0-201-50803-6.
[26] I.T. Young, J.J. Gerbrands, L.J. van Vliet, Fundamentals of Image Processing, Lecture Notes, 1998.
[27] B. Yang, A.G. Yarovoy, L.P. Ligthart, Performance Analysis of UWB Antenna Array for Short-Range Imaging, EuCAP 2007, Antennas and Propagation, 11-16 Nov. 2007, pp. 1-6.
[28] Wikipedia, http://en.wikipedia.org/wiki/Thresholding_(image_processing)
[29] http://la.wikipedia.org/wiki/Murex_ferreus
[30] http://www.icaen.uiowa.edu/~dip/LECTURE/Segmentation1.html
[31] http://www.humanitarian-demining.org/NewDesign/resources/HSTAMIDS_FS.pdf
[32] http://www.humanitarian-demining.org/NewDesign/resources/Nemesis_FS.pdf