RESOLUTION ENHANCEMENT OF MULTI-LOOK IMAGERY

by
Amy E. Galbraith
A Dissertation Submitted to the Faculty of the
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
In Partial Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
In the Graduate College
THE UNIVERSITY OF ARIZONA
2004
THE UNIVERSITY OF ARIZONA
GRADUATE COLLEGE
As members of the Final Examination Committee, we certify that we have read the dissertation prepared by Amy Elizabeth Galbraith entitled Resolution Enhancement of Multi-Look Imagery and recommend that it be accepted as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy.

Richard W. Ziolkowski, Ph.D.
Kurtis J. Thome, Ph.D.
John Reagan, Ph.D.
Jack D. Gaskill, Ph.D.
Final approval and acceptance of this dissertation is contingent upon
the candidate's submission of the final copy of the dissertation to the
Graduate College.
I hereby certify that I have read this dissertation prepared under my direction and recommend that it be accepted as fulfilling the dissertation requirement.
Co-Dissertation Director Richard W. Ziolkowski, Ph.D.
Co-Dissertation Director Kurtis J. Thome, Ph.D.
STATEMENT BY AUTHOR
This dissertation has been submitted in partial fulfillment of requirements for an
advanced degree at The University of Arizona and is deposited in the University
Library to be made available to borrowers under rules of the Library.
Brief quotations from this dissertation are allowable without special permission,
provided that accurate acknowledgment of source is made. Requests for permission
for extended quotation from or reproduction of this manuscript in whole or in part
may be granted by the head of the major department or the Dean of the Graduate
College when in his or her judgment the proposed use of the material is in the
interests of scholarship. In all other instances, however, permission must be obtained
from the author.
SIGNED
ACKNOWLEDGEMENTS
A work of this magnitude is not possible without the help and support of many
people. I would like to express deep gratitude to Dr. Richard Ziolkowski for being
a true advisor in every sense of the word. His constant support and encouragement were the highlights of my graduate career - thank you for everything you have taught me. It has been a privilege. I'd also like to thank Dr. Kurtis Thome for his
expertise and enthusiasm as my co-dissertation director - you have been a wonderful
inspiration and role model. Dr. James Theiler deserves special recognition for being
my mentor at Los Alamos National Laboratory - it would be impossible to repay
you for your time and energy spent helping me in my research. I'd also like to thank
my other committee members, Dr. John Reagan and Dr. Jack Gaskill, for their
valuable input and support - you've both played an integral role in my education
and I sincerely appreciate your help with this dissertation.
Thanks also to the ISR-2 group management and the MTI project management
at Los Alamos National Laboratory, who provided the professional and financial
support that made this work possible. There are too many people to thank individually, but several of my co-workers have provided endless help and useful discussions - thanks for being willing to listen and for the valuable advice.
To my friends and family, thanks for being there to give me encouragement,
motivation, and your friendship and love. Thanks especially to my husband John
for your love and for always believing in me. Thank you, Mom and Dad, for always
encouraging me to reach for the stars. Dad, I'll bet you had no idea that teaching
me about logic gates when I was in elementary school would lead to this! Thanks to
my brother Mark for letting me play with his LEGOS and Star Wars figures when
we were little...I'd like to think they are also to blame for starting me down the path
of engineering and remote sensing. Last but certainly not least, my in-laws Arvid
and Mary Jo have been wonderfully supportive and fun to have around - thanks to
you both!
TABLE OF CONTENTS

LIST OF FIGURES

LIST OF TABLES

ABSTRACT

CHAPTER

1. INTRODUCTION
1.1. Motivation
1.2. Multispectral Thermal Imager: MTI
1.3. Overview of Multi-look Resolution Enhancement

2. RESOLUTION ENHANCEMENT ALGORITHMS
2.1. Resolution: Terms and Definitions
2.2. Goal of Resolution Enhancement
2.3. Image Restoration
2.4. Image Resampling and Interpolation
2.5. Image Fusion
2.6. Super-resolution
2.7. Interpolative Image Resolution Enhancement
2.8. Probabilistic Image Resolution Enhancement
2.9. Set Theoretic Image Resolution Enhancement: The Projection Onto Convex Sets (POCS) Algorithm
2.10. Image Registration

3. SIMULATING MULTI-ANGLE IMAGERY
3.1. Image Formation
3.2. Modeling the Optics
3.3. Modeling the Changing View Angle Between Frames
3.4. Modeling the Detector Array Subsampling
3.5. The Elliptical Gaussian System PSF Model

4. VALIDATION OF THE POCS ALGORITHM
4.1. Validation Overview
4.2. Error Metrics
4.3. The Effect of Aliasing: Recreating Points and Edges
4.4. Combining Two-look Imagery
4.5. The Off-nadir Sampling Grid Problem
4.6. Combining Multi-look Imagery
4.7. Combining Symmetric Three-look Imagery
4.8. Aliasing and Three-look Imagery
4.9. Resolution Enhancement of Multispectral Imagery

5. PRE-PROCESSING ISSUES
5.1. Overview
5.2. Sensor Calibration
5.3. Atmospheric Correction
5.4. Bidirectional Reflectance Distribution Function (BRDF) Correction
5.5. Experiment: Atmospheric and BRDF Effects

6. CONCLUSIONS
6.1. Major findings
6.2. Directions for future research

Appendix RADIOMETRIC TERMS

REFERENCES
LIST OF FIGURES

1.1. Pushbroom sensor
1.2. Layout of the MTI focal plane
1.3. Two-look imaging, showing nadir and off-nadir viewing
1.4. Off-nadir vs. nadir ground sample distance
1.5. Flow diagram of resolution enhancement processing

2.1. Subpixel shifts provide unique information
2.2. Observation model
2.3. The POCS algorithm for two constraint sets
2.4. The POCS algorithm using the data consistency constraint
2.5. Mapping the LR pixel to the HR estimate

3.1. Optical imaging system
3.2. Linear, shift-invariant imaging system model
3.3. Circular aperture function
3.4. Incoherent OTF, H(u, v), for a circular aperture
3.5. High resolution object scene blurred by a circular aperture
3.6. Off-nadir geometry for a single band
3.7. Nadir and off-nadir image at 45°
3.8. Blurred image vs. blurred, 4X downsampled image

4.1. Input test pattern containing lines and points
4.2. Uniform and Gaussian blur kernels
4.3. Nyquist sampling of a test pattern
4.4. Sixteen unique LR frames
4.5. Bicubic interpolation of one LR frame containing lines and points
4.6. HR estimate after running POCS for 1 epoch
4.7. HR estimate after running POCS for 10 epochs
4.8. HR estimate after running POCS for 40 epochs
4.9. ISNR for the Nyquist-bandlimited test pattern
4.10. Spectrum of test pattern, k = 2.0
4.11. HR estimate after 200 epochs, k = 2.0
4.12. HR estimates for various amounts of aliasing
4.13. ISNR for various levels of aliasing
4.14. Original "Cameraman" image
4.15. Original "Isotopes" MTI image
4.16. Nadir LR input "Cameraman" frame
4.17. Simulated multi-angle LR "Cameraman" frames
4.18. HR estimate, "Cameraman", two frames, θ = 5°, 10°, 15°, 20°
4.19. HR estimate of the "Cameraman" image using two frames
4.20. ISNR for the "Cameraman" image using two frames
4.21. Nadir LR input "Isotopes" frame
4.22. Simulated multi-angle LR "Isotopes" frames
4.23. HR estimate of the "Isotopes" image using two frames
4.24. HR estimate of the "Isotopes" image using two frames
4.25. ISNR for the "Isotopes" image using two frames
4.26. Grids of different sizes overlaid
4.27. Distribution of sampling positions for two frames
4.28. HR estimate, "Cameraman", θ = 0°, 8°, 16°, 24°, 32°, 40°
4.29. ISNR, 2 to 10 frames, "Cameraman", θ = 0°, 8°, 16°, ..., 72°
4.30. HR estimate, six frames, "Isotopes", θ = 0°, 8°, 16°, 24°, 32°, 40°
4.31. ISNR, 2 to 10 frames, "Isotopes", θ = 0°, 8°, 16°, ..., 72°
4.32. HR estimate, six frames, "Cameraman", θ = 0°, 5°, 10°, 15°, 20°, 25°
4.33. ISNR, 2 to 10 frames, "Cameraman", θ = 0°, 5°, 10°, ..., 45°
4.34. HR estimate, six frames, "Isotopes", θ = 0°, 5°, 10°, 15°, 20°, 25°
4.35. ISNR, 2 to 10 frames, "Isotopes", θ = 0°, 5°, 10°, ..., 45°
4.36. HR estimate, five frames, "Cameraman", θ = 0°, 5°, 15°, 25°, 35°
4.37. ISNR, 2 to 6 frames, "Cameraman", θ = 0°, 5°, 15°, ..., 45°
4.38. HR estimate, five frames, "Isotopes", θ = 0°, 5°, 15°, 25°, 35°
4.39. ISNR, 2 to 6 frames, "Isotopes", θ = 0°, 5°, 15°, ..., 45°
4.40. HR estimate, seven frames, "Cameraman", equally spaced angles
4.41. ISNR, 2 to 10 frames, "Cameraman", equally spaced angles to 40°
4.42. HR estimate, seven frames, "Isotopes", equally spaced angles
4.43. ISNR, 2 to 10 frames, "Isotopes", equally spaced angles to 40°
4.44. HR estimate, "Cameraman", three frames, θ = ±5°, 10°, 15°, 20°
4.45. HR estimate, "Cameraman", three frames, θ = ±25°, 30°, 35°, 40°
4.46. ISNR, "Cameraman", three frames, ±θ
4.47. HR estimate, "Isotopes", three frames, θ = ±5°, 10°, 15°, 20°
4.48. HR estimate, "Isotopes", three frames, θ = ±25°, 30°, 35°, 40°
4.49. ISNR, "Isotopes", three frames, ±θ
4.50. "Cameraman" aliased three-look images for k = 2.0
4.51. Bilinear interpolation of nadir LR "Cameraman" frame for k = 2.0
4.52. Aliased three-look "Isotopes" images for k = 2.0
4.53. Bilinear interpolation of nadir LR "Isotopes" frame for k = 2.0
4.54. HR "Cameraman", 3 frames, k = 2.0, σ = 0.9
4.55. ISNR, "Cameraman", varying σ, k = 2.0
4.56. HR "Isotopes", 3 frames, k = 2.0, σ = 0.9
4.57. ISNR, "Isotopes", varying σ, k = 2.0
4.58. HR "Cameraman", 3 frames, k = 1.4, σ = 1.29
4.59. ISNR, "Cameraman", varying σ, k = 1.4
4.60. HR "Isotopes", k = 1.4, σ = 1.29
4.61. ISNR, "Isotopes", varying σ, k = 1.4
4.62. HR "Cameraman", k = 1.0, σ = 1.8
4.63. ISNR, "Cameraman", varying σ, k = 1.0
4.64. HR "Isotopes", k = 1.0, σ = 1.8
4.65. ISNR, "Isotopes", varying σ, k = 1.0
4.66. HR "Cameraman", k = 0.8, σ = 2.25
4.67. ISNR, "Cameraman", varying σ, k = 0.8
4.68. HR "Isotopes", k = 0.8, σ = 2.25
4.69. ISNR, "Isotopes", varying σ, k = 0.8
4.70. ISNR, "Cameraman", varying Nyquist factors k
4.71. ISNR, "Isotopes", varying Nyquist factors k
4.72. Formation of a multispectral high-resolution image
4.73. One LR frame (out of nine) of a two-spectral-band cell image
4.74. HR estimate of multispectral cell image
4.75. One LR frame of a two-spectral-band cell image
4.76. HR estimate of multispectral cells
4.77. One LR frame of fluorescence beads
4.78. HR estimate of the fluorescence beads

5.1. Main paths of solar energy to a sensor
5.2. Flow diagram of BRDF correction
5.3. Three-look images with modeled atmospheric path radiance
5.4. HR estimate without atmospheric correction
5.5. ISNR with uncorrected atmospheric path radiance
5.6. HR estimate with bias subtracted
5.7. HR estimate with BRDF effects for θ = −20°, 0°, +20°
5.8. ISNR for uncorrected BRDF effects
LIST OF TABLES

5.1. BRF and scale factor vs. angle for grass turf
ABSTRACT
This dissertation studies the feasibility of enhancing the spatial resolution of multi-look remotely-sensed imagery using an iterative resolution enhancement algorithm known as Projection Onto Convex Sets (POCS). A multi-angle satellite image modeling tool is implemented, and simulated multi-look imagery is formed to test the resolution enhancement algorithm. Experiments are done to determine the optimal configuration and number of multi-angle low-resolution images needed for a quantitative improvement in the spatial resolution of the high-resolution estimate. The important topic of aliasing is examined in the context of the POCS resolution enhancement algorithm performance. In addition, the extension of the method to multispectral sensor images is discussed and an example is shown using multispectral confocal fluorescence imaging microscope data. Finally, the remote sensing issues of atmospheric path radiance and directional reflectance variations are explored to determine their effect on the resolution enhancement performance.
CHAPTER 1
INTRODUCTION
1.1 Motivation
Remote sensing imagery plays a key role in our understanding of the composition of the earth's surface, both spatially on a global scale and temporally with each season or from year to year. Since the launch of the first Landsat satellite in the early 1970s, new sensor approaches have continually been developed to improve our ability to quantitatively assess the earth's resources. A sensor design that has been exploited recently is one based upon multiple look angles, used to improve the knowledge of directional radiance leaving the earth's surface or to assist in understanding the atmospheric effect on surface-leaving radiance. Increased resolution of satellite sensors has been a major goal over the years; imaging technology has improved dramatically over the last three decades. New commercial satellites such as IKONOS [1,2] and Quickbird [3,4] have ground resolution finer than one meter. Although such fine resolution is not needed for some applications, such as global mapping of ocean currents or drought conditions, many customers of remote sensing imagery desire the highest spatial-resolution images that are available. Urban planning, military planning, intelligence, and disaster monitoring/evaluation are several tasks in which very high spatial resolution is needed. However, many
objects of interest for these tasks, such as cars, roads, and buildings, are small compared to the ground resolution of most satellite-based sensors. In places that are difficult or impossible to reach on foot, the ability to zoom in on an area of interest using a space-based camera is invaluable. Even for tasks requiring global coverage, higher spatial resolution is desired. If a computer existed that was powerful enough to store imagery of the entire earth at a one-inch ground resolution, there would be someone who could use that data (to measure urban boundaries, to monitor the total number of fires burning globally, to detect the appearance of new forest fires, etc.).
There are several issues to consider pertaining to spatial resolution. Size and weight limitations of the satellite lead to a problem of attaining higher spatial resolutions. Telescopes, for example, become more powerful as the lens or light-focusing mirror gets larger; the Hubble telescope had to fit within the Space Shuttle cargo bay, fundamentally limiting its resolution. For multispectral sensors, there is a tradeoff between spectral and spatial resolution when designing a system. Multispectral systems often require multiple detector arrays, one for each spectral band, which are expensive; by lowering the spatial resolution of each detector, more bands may be put on the sensor for the same cost. Another difficulty is the storage and processing of high-resolution imagery: as the desired resolution increases, more and more storage and processing time are needed to handle the increased file sizes.
An alternative way to increase image resolution is to design the sensor system to acquire images at a lower resolution and use the sensor system's pointing capabilities to allow collection of multiple low-resolution images within a short time span as the sensor travels over a target area. Then, image processing methods may be used to fuse the multiple low-resolution images into a single high resolution image [5-7]. These methods are called resolution enhancement algorithms, or super-resolution image reconstruction. Resolution enhancement algorithms rely on the fact that subpixel surface features move around in the sensor's field of view (FOV) from frame to frame, allowing these features to be resolved on a finer sampling grid. Applying these resolution enhancement algorithms to satellite imagery acquired by rapid pointing maneuvers adds considerable complexity to the problem. The images have geometric distortions due to different satellite view angles, have a variable ground sample distance (GSD) due to the changing optical path length from the sensor to the ground at different view angles, and are affected by atmospheric conditions and directional reflectance effects from surface materials. These other effects should be removed if feasible, so that changes in the sensor output are due only to the surface features and not to atmospheric and bidirectional reflectance distribution function (BRDF) effects.
Several satellite-based imaging systems have the ability to quickly acquire images
at different view angles. These include the Multispectral Thermal Imager (MTI)
[8,9], IKONOS [1,2], Quickbird [3,4], the Multi-angle Imaging Spectro-Radiometer
(MISR) [10], the Along Track Scanning Radiometers (ATSR-1, ATSR-2, AATSR)
[11], and the Compact High Resolution Imaging Spectrometer (CHRIS) [12, 13].
The "agility" of a pointable spacecraft varies from platform to platform, limiting
the number of images that may be imaged in a single pass over a ground target.
For instance, the MTI instrument was designed to acquire two images of a tasked
ground target, while the CHRIS instrument on the Project For On-Board Autonomy
(PROBA) mission is capable of collecting five images on a single overpass [13]. The
MISR instrument collects nine images, but does so by using nine cameras at fixed
angles rather than pointing a single camera. It is difficult to estimate the maximum
number of images that a particular sensor is capable of collecting during an overpass;
many factors are involved, including the slew rate of the platform under constraints
of maximal torque, the amount of on-board memory, data transfer rates, attitude
control procedures, and other overhead processing that is highly dependent on the
design of the image acquisition control software.
This dissertation focuses primarily on studying the feasibility of enhancement of
data from the MTI, which was designed to acquire two images for each scene of
interest. However, the experiments use simulated off-nadir imagery in order to have
control over the image registration and view angle parameters, as well as to provide
sufficient data for analysis. A detailed description of the MTI sensor follows in
Sect. 1.2. An overview of steps required to correctly apply resolution enhancement
algorithms to multi-look remotely-sensed images is given in Section 1.3. Subsequent
chapters will address the details of each processing step required.
1.2 Multispectral Thermal Imager: MTI
The Multispectral Thermal Imager (MTI) is a U.S. Department of Energy research satellite with fifteen spectral bands, from the visible to the thermal infrared [8,9]. The MTI is a pushbroom sensor; i.e., it has linear detector arrays oriented in the cross-track direction. The cross-track direction is perpendicular to the direction of motion of the satellite. Each spectral band has its own linear detector array that is filtered to collect light in a desired spectral region. As the satellite travels, a two-dimensional image is formed for each band, with a GSD in the cross-track direction determined by the detector element size, the elevation of the satellite above the ground, and the satellite look angle (discussed below); the GSD in the along-track direction also depends on the detector integration time, and the speed and pitch of the satellite motion. An illustration of the pushbroom sensor design is shown in Fig. 1.1. For near-nadir images, the ground sample distance is approximately 5 meters for the visible and the near-infrared bands (bands A through D), and 20 meters for the midwave and the thermal bands (bands E through O).
Figure 1.2 shows the layout of the MTI focal plane. The focal plane contains
three sensor chip assemblies (SCAs), each consisting of linear detector arrays for
each band. The purpose of multiple SCAs is to increase the crosstrack field of view
(nominally 12 km). The line detectors for the visible and near infrared (VNIR)
bands are located near the center of the focal plane where the optical quality is
highest. Each line detector has a delayed start time for readout, so that the 48
Figure 1.1: Pushbroom sensor, showing the satellite motion and the cross-track and along-track directions at times T and T + ΔT.
images (3 SCAs, 16 line detectors each) are collected sequentially in time, taking about 4.5 seconds for all of the bands to acquire a 12 km by 12 km image [14]. The visible/NIR bands (A-D) are arranged in a staggered pattern to reduce crosstalk and increase the signal-to-noise ratio. Also, there are redundant line detectors for bands E to O. Both line detectors for band H are read out, resulting in an H1 and an H2 band. Bands E-G and I-O also have two line detectors apiece, but active and inactive detectors are selected, resulting in only the active detectors being turned on for a given image acquisition. In practice, one complete line detector is activated to acquire each band, rather than picking and choosing detectors as shown in Fig. 1.2. Nonetheless, the satellite does have that capability.
Figure 1.2: Layout of the MTI focal plane, showing the layouts of bands E-G and I-O, band H, and bands A-D, with active and inactive pixels indicated; the detector pitch is 49.6 μm for the infrared bands and 12.4 μm for bands A-D, with the along-track and cross-track directions as marked.
To increase the system's image acquisition flexibility, the MTI satellite has pointing capabilities, meaning that an image can be taken off to a side, forward, or behind at a given look angle. Pointing allows the MTI to acquire a two-look image. First, an image is taken with the camera pointing (almost) straight down at a target on the Earth's surface. This image is called the nadir image. The target is usually located some distance away from the satellite's orbital track in the cross-track direction, so the nadir image look angle is not exactly zero; rather, it is as close to a nadir angle as is possible given the current orbital position of the satellite, but at least within 20 degrees of nadir. After the satellite has continued in its orbit for a short time, on the order of a few minutes, it then points back at the first target location and acquires a second, off-nadir image. The off-nadir image is typically taken with a look angle of 50 to 60 degrees. The number of images that may be taken in a single overpass is limited by the time required to reorient the satellite. In theory, more images could be taken, but this has not been done during MTI collects. The cross-track pointing has a range of ±20 degrees, and the along-track pointing has a range of ±60 degrees. With an orbit height of 575 km and a satellite speed of 7 km/sec, the MTI has a little under 5 minutes in which it must point at and acquire its target scene. An illustration of two-look image acquisition is shown in Fig. 1.3.
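As a rough, flat-geometry check on these numbers (a back-of-the-envelope sketch, not a figure from the MTI design documents): ±60° of along-track pointing at h = 575 km spans roughly 2h tan 60° ≈ 2 × 575 × 1.73 ≈ 1990 km of ground track; at 7 km/sec, the satellite traverses that span in about 1990/7 ≈ 284 seconds, or roughly 4.7 minutes, consistent with the "little under 5 minutes" quoted above.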
The spatial resolution of the off-nadir image is poorer than that of the nadir image because the extent of the projection of a detector onto the ground, or GSD, is larger by a factor of 1/cos² θ, as shown in Fig. 1.4. The off-nadir GSD in the along-track direction is given by

    GSD_θ = βh / cos²θ = GSD_n / cos²θ,                    (1.1)

where GSD_n is the nadir ground sample distance, θ is the angle from nadir, β is the angular instantaneous field of view (IFOV), and h is the distance from the sensor to the ground [15,16]. This simplified diagram does not account for satellite motion or the detector integration time, which increase the nadir and the off-nadir along-track GSD. The cross-track GSD is similar to the along-track GSD, but scales by cos θ rather than cos² θ, resulting in an oblong projection of each pixel onto the ground.

Figure 1.3: Two-look imaging, showing nadir and off-nadir viewing.

Figure 1.4: Off-nadir vs. nadir ground sample distance.
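To make Eq. (1.1) concrete, the short Python sketch below scales a nadir GSD to an off-nadir look angle; the 5 m nadir value matches the MTI VNIR bands quoted in Sect. 1.2, but the function and numbers are illustrative only, not part of any MTI processing software.

import numpy as np

def off_nadir_gsd(gsd_nadir_m, theta_deg):
    """Scale a nadir GSD to look angle theta per Eq. (1.1): the along-track
    GSD grows as 1/cos^2(theta) and the cross-track GSD as 1/cos(theta),
    giving an oblong ground projection of each detector."""
    theta = np.radians(theta_deg)
    return gsd_nadir_m / np.cos(theta) ** 2, gsd_nadir_m / np.cos(theta)

# MTI-like 5 m nadir GSD viewed at a 55-degree off-nadir look angle:
along, cross = off_nadir_gsd(5.0, 55.0)
print(f"along-track {along:.1f} m, cross-track {cross:.1f} m")  # ~15.2 m, ~8.7 m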
1.3 Overview of Multi-look Resolution Enhancement
The goal of this dissertation is to create a single high-resolution image from
multiple low-resolution images taken at different look angles. Several processing
steps are used to combine multi-angle images into a single high resolution image,
as shown in Fig. 1.5. Each image, or "frame," may be optionally pre-processed to
compensate for atmospheric and BRDF effects. Then, the frames are aligned with
each other, or registered. The co-registered, corrected frames are then input to a
resolution enhancement algorithm that synthesizes a high resolution image.
Figure 1.5: Flow diagram of resolution enhancement processing for multi-angle remotely-sensed imagery. Each of frames 1 through N passes through atmospheric correction and BRDF correction; the corrected frames are then registered and input to the resolution enhancement step, which produces the high resolution image.
An overview of each step required for multi-angle resolution enhancement is given in subsequent chapters, using Fig. 1.5 as a guide. Since resolution enhancement is a well-developed field of research, the primary goal for this dissertation was to determine which type of enhancement algorithm is best suited for the unique problems inherent to multi-look remotely-sensed imagery, and to analyze the effects of those problems on the performance of a chosen enhancement algorithm. A review of resolution enhancement algorithms is given in Ch. 2, with emphasis on those algorithms most useful for combining multi-angle remote sensing imagery. The Projection Onto Convex Sets (POCS) resolution enhancement algorithm is applied for the first time to multi-angle satellite imagery in this dissertation. Chapter 3 describes the modeling of multi-angle imagery, for use in testing the applicability of resolution enhancement algorithms to a set of remotely-sensed images acquired at different angles. Results of applying the resolution enhancement algorithm to simulated multi-angle imagery are shown in Ch. 4. We demonstrate that multi-look resolution enhancement is feasible by combining as few as two images. A major contribution of this work is a study of the effect of aliasing on resolution enhancement. Aliasing is shown to be an important issue for resolution enhancement in the context of remote sensing; for multispectral sensors, the aliasing present in the different spectral band images can vary. Better resolution enhancement occurs if the set of low-resolution input frames has more aliasing, while it is hard to improve upon non-aliased imagery. Prior research assumed that the low-resolution frames were aliased, but did not quantify the amount of aliasing or model its effect on POCS performance. The preprocessing steps of atmospheric correction and BRDF correction allow further refinements to the resolution enhancement processing; their effects are discussed in Ch. 5. It is shown that atmospheric correction is important for images at off-nadir angles of θ > 40°, and that BRDF effects are detrimental for θ > 20°. These results demonstrate that multi-angle resolution enhancement is possible, but that modifications to the image acquisition process to minimize atmospheric and BRDF effects may be necessary for good results with real rather than simulated imagery.
CHAPTER 2
RESOLUTION ENHANCEMENT ALGORITHMS

2.1 Resolution: Terms and Definitions
Before reviewing the field of resolution enhancement, some definitions are necessary to clarify terminology used in remote sensing and in digital image processing. Remote sensing is the process of measuring an object at a distance (or observing a scene) using an imaging system or sensor. A propagation medium separates the sensor from the object to be measured. The output of the system is called an image. Most remote sensing imaging systems today are digital, meaning that light is collected on detector arrays rather than film. Photons reaching the detector array are recorded at discrete positions, leading to the concept of digital image spatial resolution. A two-dimensional CCD array, for instance, contains a finite number of detectors. The detector measurements are stored in a computer as a digital image, with each pixel corresponding to the output of one detector. The spatial resolution of a digital image may be given in numbers of pixels in each of two dimensions, or, equivalently, in the number of detectors. However, in remote sensing, spatial resolution must be more precisely defined. The distance from the detectors to the ground, as well as the number of detectors, determines the smallest observable feature in a scene. The size of each detector determines the size of the spatial area in the scene from which photons are collected for a single pixel. This occurs because an imaging system focuses light from a large area onto a small detector with a lens or a mirror. In the image space, a detector has a spatial dimension that maps to an area in the object space through the imaging system magnification factor. Projecting the optical path from a detector back through an imaging system, through a medium, and onto the object being measured (on the ground) gives the ground sample distance (GSD) for each detector. For a pushbroom system like the MTI, the integration time determines the along-track GSD, as discussed earlier in Sect. 1.2. Spatial resolution may be given in either the coordinates of object space or of image space; for remote sensing, the GSD is the measurement of interest for determining spatial resolution. Resolution, in this dissertation, refers to the GSD or spatial resolution in object space. As an aside, the digital numbers (DNs) corresponding to the values stored for each pixel also have a DN resolution, which describes the quantization of the DN values. Clearly, resolution is an overloaded term and must be used carefully.
2.2 Goal of Resolution Enhancement
Given a set of low resolution images, the goal of resolution enhancement is to form a single image with higher resolution, so that the effective GSD is improved. Improving the GSD by combining data from several images will allow finer detail to be seen in the image by a human analyst, and will allow better localization of the edges of the features in an image. Forming an image at a higher resolution than that of any of the low resolution images is possible if enough unique information is available; luckily, this is usually the case. Low resolution digital images are typically aliased (subsampled). If they are shifted with respect to one another with subpixel precision, more information is available than with a single image. Conceptually, this is shown in Fig. 2.1.
Figure 2.1: Subpixel shifts provide unique information.
The interleaving of low resolution images is useful to visualize the idea behind resolution enhancement, but is overly simplistic. Quantitatively, an observation model is needed to relate the desired high resolution (HR) image to the observed low resolution (LR) images, or frames. This model is shown in Fig. 2.2.
The LR frames are warped, blurred, and downsampled observations of the HR image, and may have additive noise. Warping can involve translations, rotations, skew, or projective distortion from one LR frame to another. Blur may be due to many causes, including diffraction-limited optics, motion of the imaging system, motion of the viewed objects, or atmospheric turbulence. Sampling by a CCD array discretizes the locations of incoming photons. Noise from the imaging system's detectors (shot noise) and other electronics also degrades the LR images.

Figure 2.2: Observation model relating low resolution (LR) images to a high resolution (HR) image. A continuous scene is ideally sampled to the HR image, which is then warped, blurred, downsampled, and corrupted by noise to form each LR image.
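The observation model can be made concrete with a short simulation sketch. The Python fragment below (an illustration, not code from this dissertation) generates LR frames from an HR image using a purely translational warp, a Gaussian blur, integer downsampling, and additive Gaussian noise; multi-angle frames would additionally require the projective warps and variable GSD discussed in Ch. 1.

import numpy as np
from scipy import ndimage

def simulate_lr_frame(hr, shift, blur_sigma, factor, noise_sigma, rng):
    """One LR frame from the HR image: warp -> blur -> downsample -> noise."""
    warped = ndimage.shift(hr, shift, order=3)             # subpixel translation
    blurred = ndimage.gaussian_filter(warped, blur_sigma)  # system PSF blur
    lr = blurred[::factor, ::factor]                       # detector subsampling
    return lr + rng.normal(0.0, noise_sigma, lr.shape)     # detector/electronics noise

rng = np.random.default_rng(0)
hr = rng.random((256, 256))                                # stand-in HR scene
frames = [simulate_lr_frame(hr, s, 1.0, 4, 0.01, rng)
          for s in [(0.0, 0.0), (0.5, 0.25), (1.25, 0.75)]]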
To briefly digress, one could ask why an optical system's detectors are not simply made smaller to acquire images with well-localized edges, rather than using algorithms to improve the resolution. Unfortunately, there is a physical limit on the detector size: the signal-to-noise ratio (SNR) becomes too small if the detector is made too small. The current lower limit for detector area is close to 40 μm² using a 0.35 μm CMOS process [7]. Consequently, for some imaging systems, the option to reduce the detector size does not exist. And, for most systems currently in orbit, the detectors cannot be replaced.
Using the observation model above, the LR images may be input to an algorithm that tries to remove the degradations of the imaging process while using the unique information contained in each LR frame, resulting in an estimated HR image. The resulting HR estimate is discrete (sampled), but is not aliased, and is sampled to a finer grid than any of the sampling grids of the LR frames.
With the goal of resolution enhancement now explained, the theory of resolution
enhancement may be discussed. Resolution enhancement of images combines several
concepts of digital image processing in a single algorithm. These concepts are:
• Image restoration
• Image resampling and interpolation
• Image fusion
• Super-resolution
They will be explained in the following sections.
2.3 Image Restoration
Image restoration attempts to remove degradations in an image, such as blur and noise. A model for the degradations is assumed or estimated, then the inverse of the model is used to estimate the undegraded object. A comprehensive review of digital image restoration is given in Ref. [17], with relevant points summarized here. Early motivation for digital image restoration came from the desire to correct the degradations seen in imagery acquired by the first planetary science missions such as the Lunar Orbiter and Mariner [17]. In classic image restoration algorithms, the output grid has the same number of pixels as the input grid. To do resolution enhancement, the output grid must have a greater number of pixels than the input grid; thus, image restoration is only part of the process.
A linear model for the degradations is typically used in restoration algorithms,

    g(i,j) = Σ_{k=1}^{M} Σ_{l=1}^{N} h(i,j; k,l) f(k,l) + n(i,j),        (2.1)

where g is the M × N degraded image, f is the M × N object to be restored, h is the two-dimensional point spread function (PSF) of the imaging system, and n is additive noise. If the blur does not vary spatially, this imaging system model may be simplified to a linear shift-invariant model,

    g(i,j) = Σ_{k=1}^{M} Σ_{l=1}^{N} h(i−k, j−l) f(k,l) + n(i,j)         (2.2)
           = h(i,j) ** f(i,j) + n(i,j),                                  (2.3)

where ** represents a two-dimensional convolution operator [18]. The imaging model for an N × N degraded image may also be written as

    g = Hf + n,                                                          (2.4)

where the image g, object f, and noise n are N² × 1 vectors formed by scanning the 2D images row by row (or column by column), and H is an N² × N² blur matrix. To solve for the object f, the inverse of the blur, H⁻¹, must be found and applied to the degraded image g.
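For a small image, the blur matrix H of Eq. (2.4) can be constructed explicitly, one impulse response per column, as in the sketch below. This is illustrative only: for realistic image sizes H is far too large to form and invert directly, which motivates the iterative methods discussed later.

import numpy as np
from scipy.ndimage import convolve

def blur_matrix(psf, n):
    """Build the N^2 x N^2 blur matrix H of Eq. (2.4) for an n x n image.

    Column k of H holds the vectorized response to an impulse at pixel k,
    i.e., the PSF centered on that pixel (row-by-row scanning order)."""
    H = np.zeros((n * n, n * n))
    for k in range(n * n):
        impulse = np.zeros((n, n))
        impulse[k // n, k % n] = 1.0
        H[:, k] = convolve(impulse, psf, mode='constant').ravel()
    return H

n = 8
psf = np.outer([1.0, 2.0, 1.0], [1.0, 2.0, 1.0]) / 16.0  # small separable blur
f = np.random.default_rng(1).random((n, n))
H = blur_matrix(psf, n)
g = H @ f.ravel() + 0.01 * np.random.default_rng(2).standard_normal(n * n)
# Direct inversion of H amplifies noise badly, which is one reason iterative
# and regularized restoration methods are preferred in practice.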
2.4 Image Resampling and Interpolation
Resampling algorithms re-orient images and may increase or decrease the size of the sampling grid. Image resampling is often needed in remote sensing. It is used to put the samples from different bands of a multispectral sensor on a common grid, to align images so that the top of the image points north (called georectification), and to allow fusion of images from multiple sensors. Images from different sensors must be resampled to facilitate their use with one another because the sensors acquire their imagery in different orientations and with differing ground sample distances (GSDs). Resampling must be done with care to preserve the image quality. Multiple resampling of data is considered non-ideal because it degrades the radiometric quality of the data. The resampling of MTI imagery is discussed in [19].

Resampling an image from one regular grid to another, more finely-spaced, regular grid is called interpolation. Digital image processing techniques to interpolate an image to a larger sampling grid have been used for decades. These methods include nearest neighbor, bilinear interpolation, cubic convolution, and cubic b-spline. They may not be appropriate for interpolation of remotely sensed images because nonuniform sampling of pixels, pixel overlap on the ground, and changing pixel size with image acquisition geometry all present problems for these traditional methods [20]. However, they are computationally fast compared with more sophisticated algorithms, so they are widely used.
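For reference, the classic single-image interpolators listed above are available in standard toolkits; the sketch below uses scipy's spline-based zoom, where the spline order selects the method. It is a generic illustration, not the resampling used for MTI imagery.

import numpy as np
from scipy import ndimage

lr = np.random.default_rng(0).random((64, 64))  # stand-in LR image

# Classic single-image interpolation onto a 4x finer grid; the spline order
# selects the method: 0 = nearest neighbor, 1 = bilinear, 3 = bicubic.
nearest = ndimage.zoom(lr, 4, order=0)
bilinear = ndimage.zoom(lr, 4, order=1)
bicubic = ndimage.zoom(lr, 4, order=3)
# All three assume a uniform input grid, which is why they struggle with the
# nonuniform, overlapping samples of off-nadir satellite imagery noted above.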
There are some interpolation methods that have been used with nonuniform sample grids. A resampling technique called "optimal" interpolation has been applied to irregularly-spaced data points from the Advanced Very High Resolution Radiometer (AVHRR) sensor [20]. AVHRR is a line scanner (or whiskbroom system), which forms an image using a single detector and a mirror that spins to "scan" each line in the cross-track direction as the satellite moves in the along-track direction [21]. The detector output is sampled to create each pixel. Each cross-track sweep covers a distance on the ground called the field of view (FOV). At the far edges of each sweep, the pixel size is larger since the detector is viewing off at an angle. Using this off-nadir data is difficult since successive lines overlap and the pixel projections onto the ground change size with varying angle. This "bowtie" effect is also discussed in [21]. To meet the challenge of using AVHRR data at increasingly off-nadir angles, Moreno and Melia [20] applied an "optimal" interpolation method that had been previously applied to microwave image data [22]. The theory was originally developed by Backus and Gilbert to find localized averages of the earth's structure at different depths [23]. The method calculates a resampled pixel value using a weighted linear combination of neighboring pixel values, where the weights are determined by each pixel's point spread function. Similar research on resampling of irregularly-spaced data goes back many decades. Yen wrote a paper in 1956 on nonuniform sampling [24]; he refers to even earlier observations by Cauchy in 1841 on the topic. Chen and Allebach discussed errors in reconstructing signals from irregular samples in [25]. An area of research called signal and image reconstruction has resulted in direct and iterative methods to interpolate irregularly-spaced data points, including the estimation of missing data points in an image. Comparisons of iterative and direct reconstruction methods are given in [26-28] and reviewed in [29].
2.5 Image Fusion
Multiple images are combined into one image during image fusion, but increased resolution is not always the goal. Often, the goal in image fusion is to create an output image that is more useful for further analysis than any of the individual input images. There are many types of image fusion, including multi-sensor fusion, multi-temporal fusion, multi-band fusion, and multi-resolution fusion.

In multi-resolution image fusion, for instance, high-resolution panchromatic bands are fused onto lower-resolution IR bands, which aids image analysis by getting better edge localization in the IR bands. Many ad-hoc algorithms of this type have been devised; the images combined are not necessarily of the same physical units (such as radiance or reflectance), resulting in radiometric mixing. However, it is important for most remote sensing algorithms to preserve the radiometric fidelity of the data collected by a sensor - the images are carefully calibrated in order to determine the physical quantities, such as temperature, of materials in a scene. These direct measurements are then used by remote sensing researchers to derive secondary measurements, such as estimates of the warming or cooling of the atmosphere. Therefore, multi-resolution image fusion can enhance the visual analysis of low-resolution data, but in general should not be used for deriving secondary data products.

Resolution enhancement may be considered a subset of image fusion, because multiple images are combined to form an image that may have increased interpretability, in this case because the pixel size is made smaller. By contrast, compositing is an image fusion method used in remote sensing that does not enhance resolution [30]. Remote sensors with a wide field of view and a polar orbit, such as MODIS and AVHRR, allow global coverage of surface features over time. Compositing algorithms are used in order to make global maps using data from these sensors. After registration of the separate images, co-located pixels/data points may be combined or a single data point may be selected for that pixel location in the composite image. However, the output pixels are the same size as the input pixels; no resolution enhancement occurs. As mentioned in Ref. [30], an algorithm has been implemented that removes the BRDF effects prior to compositing to form combined nadir estimates of the surface reflectance, the NDVI, and the surface temperature [31].
2.6 Super-resolution
Super-resolution is the extrapolation of the frequency content in an image, permitting the reconstruction of a digital image onto a grid with smaller pixels. Early super-resolution algorithms used a single input image and generated a single output image on a finer, or higher-resolution, grid. In these methods, prior knowledge about the spatial extent of the objects in an image and analytic continuation theory are used to extrapolate the spectrum of the image [32]. More recent super-resolution methods use multiple low-resolution input images to generate either a single high-resolution image or, in the case of video super-resolution, multiple high-resolution video frames. These recent super-resolution methods are also called super-resolution image reconstruction algorithms, and are the starting point for the multi-angle resolution enhancement technique explored in this dissertation. Reviews of approaches for super-resolution using multiple images are given in [5-7].

Super-resolution image reconstruction methods are divided into two approaches: frequency domain methods and spatial domain methods. Frequency domain methods are limited to global translational shifts between frames and spatially-invariant blurring. Consequently, they cannot be used for combining multi-angle remote sensing data, since the multi-angle frames may be rotated with respect to one another, may have perspective distortion, and may have irregular sample spacing. Spatial domain methods are much more flexible, allowing sophisticated imaging models to be used. Three main categories of spatial domain resolution enhancement are interpolative, probabilistic, and set-theoretic methods. Each of these spatial domain methods is reviewed in the following sections.
2.7 Interpolative Image Resolution Enhancement
Interpolative image reconstruction techniques take the simple approach of aligning or registering multiple low-resolution images followed by interpolating the values from each image onto a high-resolution grid [7]. Multiple images are used rather than a single image, as is the case for the standard image interpolation discussed in Sect. 2.4.

Interpolative reconstruction techniques include shift-and-add and interlacing [33]. The shift-and-add method is a simplistic method of combining multiple images. Pixel values are mapped, or shifted, to their proper location in a high-resolution grid, then added to the grid values without modification. Interlacing is another linear method of merging images where the input pixels are interleaved to form a single output image. The positions of the pixel centers in the input images determine the order of interleaving: if image "a" contains pixels shifted one-half of a pixel to the left of those in image "b", the output pixels would be arranged as "ababab". Neither method attempts to remove the effects of blur. Variable-Pixel Linear Reconstruction, also known as Drizzle [33], is an algorithm that linearly reconstructs a high-resolution image from a set of undersampled images. It has been applied to the problem of reconstructing images from the wide field cameras (WFCs) of the Hubble Space Telescope (HST). Interpolation of nonuniformly-spaced data samples from several images has been considered. Sauer and Allebach [34] interpolate between irregular samples but do not account for system PSF blur. Chen and Allebach [25] study the relation of mean squared error to the spacing of irregular samples. Scambos et al. [35] use a linear technique called data cumulation to increase the resolution of AVHRR data by averaging multiple images onto a finer sampling grid. Like other resolution enhancement algorithms, accurate coregistration, or alignment, of the images is required; data cumulation requires coregistration accuracy of better than 0.2 pixels.
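A minimal sketch of shift-and-add (an illustration, assuming known subpixel shifts and using edge wrap-around for brevity) shows how LR pixels are placed on a finer grid and averaged, with no attempt to remove blur:

import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive shift-and-add: place each LR pixel at its rounded position on a
    grid `factor` times finer and average overlapping contributions. Shifts
    are known subpixel offsets in LR-pixel units; no deblurring is done, and
    out-of-range samples simply wrap for brevity."""
    ny, nx = frames[0].shape
    acc = np.zeros((ny * factor, nx * factor))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.rint((np.arange(ny) + dy) * factor).astype(int) % (ny * factor)
        xs = np.rint((np.arange(nx) + dx) * factor).astype(int) % (nx * factor)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1.0
    return acc / np.maximum(hits, 1.0)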
2.8 Probabilistic Image Resolution Enhancement
A popular approach for super-resolution image reconstruction uses Bayesian estimation methods, such as the maximum likelihood (ML) and the maximum a posteriori (MAP) methods [5-7]. The Bayesian approach has the advantages of giving a unique solution subject to its constraints, and of using a priori information to constrain the HR estimate. In the MAP method, the goal is to find the HR estimate f̂ of the object or scene f that maximizes the a posteriori probability distribution function (PDF) P(f | g),

    f̂ = arg max P(f | g_1, g_2, ..., g_N),                     (2.5)

or, using Bayes' theorem,

    f̂ = arg max [ln P(g_1, g_2, ..., g_N | f) + ln P(f)],      (2.6)

where each g_k, k = 1, ..., N, is the k-th observed LR frame, P(f) is the image model, which defines the statistical correlation between neighboring pixels, and P(g|f) is the likelihood function determined by the PDF of the noise in the image. Typically, Gaussian noise is assumed and a Markov random field (MRF) image model is used [5]. A few researchers have used Bayesian methods for enhancing the resolution of remotely-sensed images. A Bayesian maximum posterior estimation (BMPE) method was applied to AVHRR imagery in [36]. This work assumed that the observed surfaces were Lambertian and that there was no atmosphere, but did remove the effect of the AVHRR's PSF. A Bayesian approach was also used by Cheeseman et al. for the super-resolution of images from the Mars Pathfinder mission [37]. However, their paper showed that a Gaussian neighbor correlation prior is not suitable for Landsat imagery due to the high energy in the tails of the distribution. They concluded that "individual pixels in Earth imagery can differ substantially from their neighbors, e.g., a road traversing ground" [37], meaning that Bayesian resolution enhancement can oversmooth the HR estimate when applied to remotely-sensed images. Because the goal of this dissertation is to enhance edge features in remote sensing imagery, Bayesian resolution enhancement methods were therefore not chosen for further study.
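Although MAP methods are not pursued further here, the update implied by Eq. (2.6) can be sketched for concreteness. The fragment below assumes Gaussian noise (so the log-likelihood is quadratic) and substitutes a simple quadratic smoothness prior for a full MRF model; it is purely illustrative.

import numpy as np
from scipy import ndimage

def map_gradient_step(f_hat, frames, shifts, blur_sigma, factor, lam, step):
    """One gradient-ascent step on the MAP objective of Eq. (2.6), assuming
    Gaussian noise (quadratic log-likelihood) and a quadratic smoothness
    prior standing in for a full MRF model."""
    grad = lam * ndimage.laplace(f_hat)  # gradient of the smoothness log-prior
    for g, shift in zip(frames, shifts):
        # forward model of Fig. 2.2: warp -> blur -> downsample
        pred = ndimage.gaussian_filter(ndimage.shift(f_hat, shift), blur_sigma)
        resid = np.zeros_like(f_hat)
        resid[::factor, ::factor] = g - pred[::factor, ::factor]
        # adjoint of the forward model applied to the residual
        grad += ndimage.shift(ndimage.gaussian_filter(resid, blur_sigma),
                              [-s for s in shift])
    return f_hat + step * grad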
2.9 Set Theoretic Image Resolution Enhancement: The Projection Onto Convex Sets (POCS) Algorithm
Set theoretic image resolution enhancement is a popular alternative to the Bayesian resolution enhancement approach. The most widely used algorithm of this type is the projection onto convex sets (POCS) algorithm. The POCS algorithm iteratively estimates a high resolution image from a set of low resolution images by performing simultaneous interpolation and restoration. Thus, POCS both increases the size of the image and removes the effects of blur due to the system PSF. The POCS method has many advantages for the resolution enhancement of images [38], including the flexible incorporation of prior knowledge, the ability to acquire the LR frames using any detector or scanning geometries including nonuniform sampling, the capability to compensate for different amounts and types of blur in each LR frame, and the ability to combine LR frames with different pixel sizes. The ability to input satellite images with different GSDs was an important factor for selecting POCS for the resolution enhancement. In addition, the POCS algorithm may be used with other optimization methods, such as the statistical ML and MAP approaches, resulting in powerful hybrid methods. Hybrid methods will not be explored in this dissertation, but would be a good starting point for future research. The POCS algorithm does have the drawback of slow convergence, but continues to be the most popular implementation of convex set theoretic image recovery [39]. Another algorithm, called the extrapolated method of parallel subgradient projections (EMOPSP), uses approximate projections and applies constraints in parallel to speed convergence [39].
Convex set theoretic methods for resolution enhancement have been around for several decades. In an early paper that introduced POCS to the engineering community, Youla and Webb [40] described an iterative image restoration method based on convex projections. In contrast to earlier algorithms by Papoulis and Gerchberg, which are based on linear image reconstruction concepts, the Youla-Webb algorithm allowed multiple nonlinear constraints to be applied. Sezan and Stark [41] then applied Youla and Webb's algorithm to an example, in the companion paper to Ref. [40]. Lent and Tuy describe a POCS-based image reconstruction algorithm that is similar to that of Youla and Webb [42]. They incorporate the ideas of Gerchberg, Papoulis, and Youla [43], but their paper appeared before, and independently of, Youla and Webb's nonlinear restoration paper. Their algorithm is iterative and also allows a priori constraints. More recently, Stark and Oskoui [38] applied POCS to the recovery of tomographic images, and this paper led to the work of Tekalp, Ozkan, and Sezan, who extended POCS to handle linear-shift-varying (LSV) blurs [44,45]. Their algorithm implementation forms the main multi-angle resolution enhancement algorithm of this dissertation. Yeh and Stark have also developed iterative and one-step POCS for nonuniform samples [46]. Granrath and Lersch [47] used POCS for the fusion of remote sensing imagery, but they used a 2D affine transform assumption; for multi-angle remote sensing imagery, a projective transform is needed rather than an affine transform.
The POCS algorithm uses prior knowledge about the imaging system to impose constraints on a high-resolution (HR) estimate of the original scene. The use of a priori knowledge helps the algorithm converge to a reasonable solution. Constraint sets may be formed for data consistency, energy bounds, amplitude bounds, spatial support bounds, etc. More precisely, each constraint defines a closed convex set C, which, for image reconstruction, is a set of images with a particular property. For example, C could be defined as the set of all objects in continuous space that could have resulted in one particular discretized LR image: if a red car and an orange car are imaged with a panchromatic sensor, the LR image could contain the exact same DN pixel values in each case. The object f is known a priori to belong to the intersection C_s of m closed convex sets C_1, C_2, ..., C_m,

    C_s = ∩_{i=1}^{m} C_i,                                   (2.7)

where C_s is found by iteratively computing projections onto the convex sets (thus the name POCS):

    f_{n+1} = P_m P_{m−1} ⋯ P_2 P_1 f_n.                      (2.8)

The projection operator P_i maps the current estimate f̂ to the closest point in the set C_i. Fig. 2.3 illustrates the POCS method, closely following a figure from Ref. [48].
Figure 2.3: The POCS algorithm for two constraint sets, shown converging to a solution f in the intersection set C_s.

One important constraint used in POCS ensures data consistency - that is, the HR estimate, when blurred and downsampled, must be consistent with the observed LR frames. This constraint is defined as

    C_D = { f̂ : |r| ≤ δ },                                   (2.9)

where the residual r is the difference between a LR pixel value and a blurred region in the HR estimate,

    r(i,j) = g(i,j) − Σ_k Σ_l f̂(k,l) h(i,j; k,l),            (2.10)

and δ = cσ_n, the confidence of the user in the observation for a noise standard deviation of σ_n, with the constant c > 0 found using a statistical confidence bound [44]. The projection of f̂ onto the constraint set C_D is then

    f_{n+1} = P_D f_n = { f_n + h(r − δ)/(Σ_k Σ_l h²)   if r > δ
                        { f_n                            if −δ ≤ r ≤ δ      (2.11)
                        { f_n + h(r + δ)/(Σ_k Σ_l h²)   if r < −δ
The POCS image reconstruction method using only the data consistency constraint
may be summarized in the procedure shown in Figs. 2.4 and 2.5.
• Align (register) the LR images using known translation, rotation, and skew
• Until the estimate f̂ has converged:
  – For each pixel in each LR frame:
    * Find the non-integer location of the pixel center in the HR estimate
    * Apply blur to a neighborhood (window) of HR pixels corresponding to the estimated or known point spread function (PSF) of that LR frame
    * Subtract the average of the blurred region in the HR image from the LR pixel to find the residual r
    * Compute the new HR image values within the PSF window by adding the residual r multiplied by the PSF

Figure 2.4: The POCS algorithm using the data consistency constraint.
Figure 2.5: Mapping the LR pixel to the HR estimate and computing the residual from the PSF applied to a local neighborhood; each LR pixel is compared to a blurred region of the HR frame.
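To make the procedure of Fig. 2.4 concrete, the following Python sketch implements the data consistency projection of Eq. (2.11) under strong simplifications: integer translational shifts on the HR grid, a single shared PSF, and skipped border pixels. It illustrates the method's structure only; it is not the shift-varying-blur implementation used in this dissertation.

import numpy as np

def pocs_data_consistency(frames, shifts, psf, factor, epochs=20, delta=0.0):
    """POCS with only the data consistency constraint of Eq. (2.11).

    frames : list of LR images (2D arrays)
    shifts : per-frame integer offsets (dy, dx) of the LR grid on the HR grid
    psf    : HR-grid PSF window (odd-sized 2D array, summing to 1)
    factor : LR-to-HR upsampling factor
    delta  : residual tolerance (c times the noise standard deviation)
    """
    ny, nx = frames[0].shape
    hr = np.kron(frames[0], np.ones((factor, factor)))  # crude initial estimate
    h2 = np.sum(psf ** 2)
    w = psf.shape[0] // 2
    for _ in range(epochs):
        for frame, (sy, sx) in zip(frames, shifts):
            for i in range(ny):
                for j in range(nx):
                    ci, cj = i * factor + sy, j * factor + sx  # LR pixel center on HR grid
                    win = hr[ci - w:ci + w + 1, cj - w:cj + w + 1]
                    if win.shape != psf.shape:
                        continue  # borders are skipped in this simplified sketch
                    r = frame[i, j] - np.sum(win * psf)  # residual of Eq. (2.10)
                    if r > delta:                        # project per Eq. (2.11)
                        win += psf * (r - delta) / h2
                    elif r < -delta:
                        win += psf * (r + delta) / h2
    return hr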
2.10 Image Registration

One issue that has not yet been addressed is image registration in the context of resolution enhancement. Image registration methods compute the relative motion or shifts between two images in order to align them to one another. Resolution enhancement algorithms must register the LR frames to one another in order to
determine the location of each LR data point in the HR output grid to subpixel
accuracy. If the registration parameters are not correct, resolution enhancement
algorithms will give poor results. Misregistration and its effect on change detection
are discussed in several papers [30,49,50]. In [50], the authors found that a change
detection error of ten percent requires the registration between the images to be
accurate to within one-fifth of a pixel.
For remotely-sensed imagery, registration may be achieved by determining a given subpixel location in each image's sampling grid that corresponds
to the same location on the surface of the earth. If the images are co-registered to
each other, and also are aligned to absolute latitude and longitude coordinates,
they are said to be geo-registered. Multispectral sensors such as the MTI require
co-registration of the images for each spectral band because each band image is
formed by a separate line detector [51]. Image-to-image registration is often done
using ground control points or cross-correlation [52]. Cross-correlation methods are
simple, but they require images with the same geometry (i.e., no rotation or skew).
The availability of the same geometry is rare in remotely-sensed imagery because
the imaging system is in motion and very far from the object being imaged. Instead,
the most common techniques to register satellite images are ground control point
(GCP) methods. Ground control points (GCP's) are features in each image that
match each other. Typically, these include road intersections or other features that
can be precisely located in each image. GCP's are chosen by a user or by using an
automated technique. User selection of ground control points is labor-intensive, but
it gives very accurate results. First, matching GCP's are selected in each image.
Then, a mapping function is computed that specifies how to warp one image onto
the other. Automated methods for selecting GCP's are more error-prone. Other,
more advanced automated registration algorithms exist, using a variety of image
features and iterative methods to achieve subpixel registration.
Although image registration is critical for resolution enhancement, it is not studied in this dissertation. Instead, simulated imagery is used so that the registration parameters are known absolutely. Other resolution enhancement research has addressed misregistration issues as well as incorporating image registration within the resolution enhancement algorithm.
CHAPTER 3
SIMULATING MULTI-ANGLE IMAGERY
3.1 Image Formation
To simulate imagery for input into our resolution enhancement algorithm, a
model of the image formation process for multi-angle, remotely-sensed imagery is
needed. Our model must account for the optics of the imaging system, the changing
view angle, and the sampling process of the detector array. We discuss each of these
components in the following sections.
3.2 Modeling the Optics
In its most basic form, an imaging system consists of four components: an object,
a propagation medium, an optical system, and an image. Energy in the form of
light travels from an object /, through a propagation medium and optical system,
and forms an image g, as shown in Fig. 3.1. The quality of the image is determined
by the optical wave properties of each component of the system.
Several simplifying assumptions are used to make the implementation of our optical system model tractable. First, we assume that objects are imaged using quasi-monochromatic incoherent light. Second, we assume an aberration-free, diffraction-limited optical system. The image is assumed to be z-coordinate independent. This
Figure 3.1: Optical imaging system.
means the image g(x, y, z) is approximately equal to g(x, y). We assume the image is formed in the Fraunhofer (far field) region [18,53],
\[ z \gg \frac{k R_0^2}{2} = \frac{\pi R_0^2}{\lambda}, \qquad (3.1) \]
where z is the distance from the optical system's limiting aperture to the detector,
and Ro is the radial extent of the object at the diffracting aperture. The exact nature
of the aperture is described later, but for the time being may be treated as a telescope
lens in front of a focal plane comprised of a 2D CCD array. These simplifications
result in a linear, shift-invariant (LSI) imaging model. The wavelength-dependent
scaling between the aperture and image plane coordinate systems is ignored for
simplicity in the following discussion. Since we assume spatially incoherent object
illumination, our optical system model is linear in intensity rather than amplitude
[18,53]. Thus the incoherent system impulse response is the squared magnitude of
the coherent system impulse response. It is real-valued and nonnegative. The system
impulse response for an optical system is often called the point spread function
(PSF). The intensity image, g, of an object / formed by an incoherent, LSI optical
system with a PSF h may be illustrated as shown in Fig. 3.2.
Figure 3.2: Linear, shift-invariant imaging system model.

The PSF may be
thought of as a two-dimensional lowpass filter and will be described more precisely
below. The imaging system may be described equivalently as a two-dimensional
spatial convolution of the object f with the PSF h,
\[ g(x,y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(\xi,\eta)\, h(x-\xi,\, y-\eta)\, d\xi\, d\eta \qquad (3.2) \]
\[ \phantom{g(x,y)} = (h * f)(x,y), \qquad (3.3) \]
where x, y are the coordinates in the image plane and ξ, η are the coordinates in the object plane.
It is useful to describe the optical system in terms of its frequency response, be­
cause the sharpness of edges and fine detail in a scene correspond to high frequencies
in the Fourier domain. A brief description of an optical system from a Fourier optics
viewpoint is now provided.
The two-dimensional Fourier transform of a function s is defined as
\[ S(u,v) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} s(x,y)\, e^{-j2\pi(ux+vy)}\, dx\, dy. \qquad (3.4) \]
Taking the Fourier transform of Eq. 3.3, and after some rearranging of terms, we can express the incoherent optical system response in the frequency domain as
\[ G(u,v) = H(u,v)\, F(u,v), \qquad (3.5) \]
where F is the Fourier transform of the object, G is the Fourier transform of the image, and H(u,v) is the incoherent optical transfer function (OTF) of the system [18,32]. The OTF governs how spatial frequencies are preserved from the object to the image. Specifically, the incoherent OTF is defined as the normalized complex autocorrelation of either the aperture stop or the exit pupil, with a scaling factor [18].
For our purposes, it is sufficient to define an aperture function, a(x, y), that could be a physically real pupil or a limiting aperture within the optical system. This limiting aperture restricts the total amount of light entering the imaging system [32]. Inside the aperture, a(x,y) = 1, while outside the aperture, a(x,y) = 0 and no light can pass through the system. In this case, the OTF is simply
\[ H(u,v) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} a(x,y)\, a(x+u,\, y+v)\, dx\, dy, \qquad (3.6) \]
where the scaling factor has been neglected between the x, y space coordinates and the u, v spatial frequency coordinates [32]. The OTF is normalized such that H(0,0) = 1. The maximum of the OTF occurs at the origin since it is found by an autocorrelation operation. An important point to make is that the OTF H(u,v)
goes to zero outside of a region of support, again, from properties of the autocorrelation. Therefore, spatial frequencies outside of the region where H(u,v) > 0 will not be present in the image that is output from the optical system. The blur due to the limiting aperture, and thus the OTF, fundamentally limits the resolution or amount of detail that may be seen in the image. The spatial frequency at which the OTF goes to zero is called the optical cutoff frequency, f_c. It is given by
\[ f_c = \frac{D}{\lambda l}, \qquad (3.7) \]
where D is the aperture diameter, λ is the wavelength of the light, and l is the focal length of the optical system.
The aperture function we will use to generate blurred imagery is a simple circular pupil [53], a(x,y), of radius r,
\[ a(x,y) = \mathrm{circ}\!\left(\frac{\sqrt{x^2+y^2}}{r}\right), \qquad (3.8) \]
and is shown in Fig. 3.3. The incoherent OTF, or complex autocorrelation of the aperture a, is given by
\[ H_0(u,v) = \begin{cases} \dfrac{2}{\pi}\left[\cos^{-1}\!\left(\dfrac{f}{f_c}\right) - \dfrac{f}{f_c}\sqrt{1-\left(\dfrac{f}{f_c}\right)^2}\,\right] & f \le f_c \\[6pt] 0 & \text{otherwise} \end{cases} \qquad (3.9) \]
where \( f = \sqrt{u^2+v^2} \) is the radial distance in the frequency plane [53]. The OTF is shown in Fig. 3.4.
Figure 3.3: Circular aperture function.

Figure 3.4: Incoherent OTF, H(u,v), for a circular aperture.
To simulate diffraction-limited image blur due to the optical system, we first
compute the two-dimensional Fourier transform of a high resolution "object" scene
to obtain its spectrum. A standard FFT is used in the implementation. A circular
aperture size is selected and the corresponding OTF is calculated. Next, the object
spectrum is multiplied by the OTF, which removes the high frequencies present in
the high-resolution image. The output of this step is a blurred image with the same
number of pixels as the high resolution image. An example of an image blurred by
diffraction effects by a circular aperture is shown in Fig. 3.5. For better visibility of
the high frequencies, the object and image spectra are displayed using a \( \ln(1+|\cdot|^2) \) compression on the scaling.
Figure 3.5: High resolution object scene blurred by a circular aperture (object and image spectra shown scaled with logarithmic compression).
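The blur-simulation procedure above is compact enough to sketch in code. The following is a minimal illustration (assuming the circular-aperture OTF of Eq. 3.9 and a cutoff f_c expressed in cycles per pixel; the function names are ours), not the actual implementation used for the experiments.

    # A minimal sketch of the blur simulation described above: FFT the
    # high-resolution scene, multiply its spectrum by the incoherent OTF of
    # a circular aperture (Eq. 3.9), and inverse-FFT.
    import numpy as np

    def circular_aperture_otf(shape, f_c):
        """Incoherent OTF of a circular aperture, sampled on the FFT grid."""
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        f = np.clip(np.hypot(fx, fy) / f_c, 0.0, 1.0)  # 0 beyond cutoff
        return (2.0 / np.pi) * (np.arccos(f) - f * np.sqrt(1.0 - f ** 2))

    def simulate_optical_blur(scene, f_c):
        """Blurred image with the same number of pixels as the input scene."""
        spectrum = np.fft.fft2(scene)                  # object spectrum
        return np.real(np.fft.ifft2(spectrum * circular_aperture_otf(scene.shape, f_c)))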
With the generic optical system model in place, we must now incorporate additional knowledge of the satellite into our multi-angle imaging system model. To
generate the multi-angled frames that would be viewed by a satellite-based sensor,
two additional steps are required. First, the blurred image has a projective transform
applied to it to simulate an off-nadir view angle. Then, the image is subsampled to
model the detector array of the satellite system. These components of the system
are presented next.
3.3 Modeling the Changing View Angle Between Frames
Between frames, we assume that the satellite carrying our imaging system has moved. Satellite motion can be very complex, but a simple model will be implemented to study the most important issue for the multi-angle resolution enhancement problem: the view angle. Other issues, such as terrain, slope, elevation, and curvature of the Earth, are important, but are secondary to the issue of the increasing view angle from nadir and the subsequent increase in the pixel size due to the increased path length to the imaged scene as the angle increases.
It takes the MTI satellite 4.5 seconds to acquire a 12 km by 12 km image for all 16 spectral bands. The velocity of the satellite is 7 km/sec, thus the satellite travels
approximately 31.5 km while acquiring a single image. The change in angle between
the beginning of acquisition and the end of acquisition results in a change in the
size of the projection of the detector line. The changing view angle (perspective)
between nadir and the satellite with respect to the location of a feature on the ground may be modeled with a projective transform. The simplified geometry of an off-nadir image formed by a pushbroom sensor is shown in Fig. 3.6. The off-nadir field of view (FOV) in the cross-track direction, FOV_{o,c}, or swath width, is determined by the cross-track extent of the linear detector array projected onto the ground. Over Δt = t₂ − t₁ = 4.5 sec, the path length increases from l₁ to l₂, and the swath width becomes smaller. The off-nadir FOV in the along-track direction, FOV_{o,a}, is determined by the image acquisition time and the off-nadir pointing angle at the midpoint of image acquisition, θ_ave. Compared to the nadir along-track FOV, it is reduced by a factor of cosine squared (just as for the along-track off-nadir GSD shown previously in Fig. 1.4).
Using the homogeneous coordinate system to perform two-dimensional coordinate transforms, the projective transform is given by [54]:
\[ [x',\, y',\, w'] = [u,\, v,\, w] \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}. \qquad (3.10) \]
To model an off-nadir image with a projective transform, the forward mapping is applied to convert the u, v input coordinates to x, y output coordinates, given by
\[ x = \frac{x'}{w'} = \frac{a_{11}u + a_{21}v + a_{31}}{a_{13}u + a_{23}v + a_{33}}, \qquad (3.11) \]
\[ y = \frac{y'}{w'} = \frac{a_{12}u + a_{22}v + a_{32}}{a_{13}u + a_{23}v + a_{33}}. \qquad (3.12) \]
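A short sketch of the forward mapping of Eqs. 3.11 and 3.12 follows; the 3 × 3 matrix A holding the coefficients a_ij is a placeholder for whatever transform models the chosen off-nadir geometry.

    # An illustrative forward projective mapping (Eqs. 3.11-3.12), using
    # the row-vector homogeneous convention [x', y', w'] = [u, v, w] A.
    import numpy as np

    def project_point(u, v, A):
        """Map input coordinates (u, v) to output (x, y)."""
        xp, yp, wp = np.array([u, v, 1.0]) @ A
        return xp / wp, yp / wp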
Figure 3.6: Off-nadir geometry for a single band (side and top views). The cross-track swath scales as FOV_{o,c}(θ_i) = FOV_{n,c} cos θ_i, and the along-track FOV is FOV_{o,a} = FOV_{n,a} cos² θ_ave.
An example of projective distortion due to a satellite viewing a scene from nadir and off-nadir is shown in Fig. 3.7. The off-nadir image appears scaled, but not much keystoning is visible because the distance from the sensor to the target (575 km for the MTI at nadir) is much larger than the extent of the image acquired (12 km by 12 km for the MTI).

Figure 3.7: Nadir and off-nadir image at 45° simulated using a projective transform with an orbital height of 575 km.
3.4 Modeling the Detector Array Subsampling
The detector array effectively discretizes our image. This may be modeled by
downsampling the blurred and transformed images to a grid with fewer pixels in
each dimension. The subsampling process can introduce aliasing into the images.
Recall from Sect. 3.2 that the optics of the system have an impulse response, or
PSF_optics. If a point source is imaged by this system, it forms a diffraction pattern in the far field that consists of bright concentric rings called the Airy pattern [53,55]. The diameter of the central lobe is given by
\[ d_{\mathrm{Airy}} = 2.44\,\frac{\lambda f}{D}, \qquad (3.13) \]
where λ is the wavelength of the point source, f is the focal length (the distance from the lens to the focal plane), and D is the diameter of the aperture or lens. The detectors located at the focal plane also have a PSF,
\[ \mathrm{PSF}_{\mathrm{detector}}(x,y) = \mathrm{rect}\!\left(\frac{x}{d_x}\right)\mathrm{rect}\!\left(\frac{y}{d_y}\right), \qquad (3.14) \]
where d_x and d_y are the sizes of a rectangular detector in the x and the y directions, respectively. For most detectors, the detector shape is square, or d_det = d_x = d_y.
Assuming a 100% fill-factor for the detector array, when d_Airy > d_det, the system is called optics-limited [55]. When d_Airy < d_det, it is called detector-limited. These terms arise because the system PSF is given by the convolution of the optical and the detector PSF's,
\[ \mathrm{PSF}_{\mathrm{system}} = \mathrm{PSF}_{\mathrm{optics}} * \mathrm{PSF}_{\mathrm{detector}}, \qquad (3.15) \]
and the system OTF is given by the multiplication of the component OTF's,
\[ \mathrm{OTF}_{\mathrm{system}} = \mathrm{OTF}_{\mathrm{optics}}\,\mathrm{OTF}_{\mathrm{detector}}. \qquad (3.16) \]
The cutoff frequency of the optics OTF versus that of the detector OTF determines whether aliasing is present. If high frequencies are allowed through the optics, but the detectors are not small enough, the high frequencies are aliased to low frequencies in the discretized output image.
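As an illustration of Eqs. 3.14-3.16 in the frequency domain, the sketch below multiplies the circular-aperture optics OTF (from the earlier sketch) by the detector OTF, the Fourier transform of the rect detector PSF, which is a separable sinc. The detector size d_det in pixels is an assumed parameter.

    # A hedged sketch of the system OTF of Eq. 3.16. Reuses
    # circular_aperture_otf from the earlier blur-simulation sketch.
    import numpy as np

    def system_otf(shape, f_c, d_det=1.0):
        """Optics OTF times detector OTF on the FFT frequency grid."""
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        # FT of the rect detector PSF (Eq. 3.14) is a separable sinc
        otf_detector = np.sinc(d_det * fx) * np.sinc(d_det * fy)
        return circular_aperture_otf(shape, f_c) * otf_detector  # Eq. 3.16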
It is important to understand the effects of aliasing when applying resolution enhancement algorithms to multispectral imagery because the degree of aliasing present varies with wavelength. For a system like the MTI, the same optics are used for both the visible and the IR bands, while the size of the detectors changes, resulting in more aliasing at the shorter wavelengths. For the MTI, D = 36 cm and f = 125 cm. In band A, λ ≈ 485 nm, so d_Airy ≈ 4.1 μm, much smaller than d_det = 12.4 μm; the system is detector-limited in band A. In the IR bands, λ ≈ 10 μm and d_Airy ≈ 84.7 μm, about twice the value of d_det, meaning the IR bands for the MTI are optics-limited.
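The band-A value can be checked directly from Eq. 3.13:
\[ d_{\mathrm{Airy}} = 2.44\,\frac{\lambda f}{D} = 2.44 \times \frac{(485\ \mathrm{nm})(125\ \mathrm{cm})}{36\ \mathrm{cm}} \approx 4.1\ \mu\mathrm{m}. \]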
An example of an image after downsampling is shown in Fig. 3.8. Compared to the blurred image, the pixel size of the downsampled image is four times larger in each dimension, making high frequency detail much more difficult to see after sampling.
3.5 The Elliptical Gaussian System PSF Model
The POCS algorithm requires knowledge of the system PSF. With the added
complication of off-nadir frames, the known or estimated PSF must be warped; the
projective transform model requires a mapping of the system PSF from every pixel
in each LR frame to the HR estimate, resulting in a distorted PSF in the HR grid.
Figure 3.8: Blurred image vs. blurred, 4X downsampled image
The computation of the projected system PSF can be time consuming [6]. However,
it is possible to use an approximate mapping of the PSF using an affine transform
rather than a projective transform and keep the processing time reasonable. For
off-nadir images simulated using the parameters of the MTI sensor, the primary
transformation of the off-nadir image is a change in scale; the perspective keystoning
is small due to the very large distance from the satellite to the ground. Therefore,
the affine transform is reasonable for mapping the PSF of a satellite sensor from
the off-nadir LR frames to the HR estimate grid. Using the affine mapping, the
projected PSF will be the same for every pixel in a given LR frame, thus reducing
the computational time considerably.
First, a model for the system PSF in a nadir viewing case must be selected. Two-dimensional Gaussian functions, both circular and elliptical, have been used previously to model the system PSF of a satellite. Moreno and Melia [20] show that an elliptical Gaussian function is best for approximating the system PSF of the AVHRR instrument. Baldwin et al. [36] use a circular Gaussian function with the assumption that it is an adequate approximation for angles less than 20 degrees, and cite Oleson et al. [56] for the justification of using a Gaussian function to approximate an Airy disk. Following the justification in Ref. [20], a 2D Gaussian function is used to approximate the system PSF in this dissertation. The Gaussian PSF is circularly symmetric at nadir, assuming the optics do not have distortions. Mapping the PSF from a given off-nadir LR frame to the HR estimate grid with an
affine transform results in an elliptical Gaussian PSF, given by
\[ \mathrm{PSF}(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\, \exp\!\left[-\left(\frac{x^2}{2\sigma_x^2} + \frac{y^2}{2\sigma_y^2}\right)\right], \qquad (3.17) \]
where σ_x and σ_y are determined by scaling the circularly symmetric nadir PSF of width σ by a cosine or a cosine-squared factor of the off-nadir look angle θ, so that
\[ \sigma_x = \frac{\sigma}{\cos\theta} \qquad (3.18) \]
in the cross-track direction and
\[ \sigma_y = \frac{\sigma}{\cos^2\theta} \qquad (3.19) \]
in the along-track direction. The PSF is normalized so that the area under the
PSF is constant. This ensures that the weighting during the blur step of the POCS
algorithm (see Fig. 2.4) is correct.
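A brief sketch of Eqs. 3.17-3.19, under the stated normalization, might look as follows (the function name and odd-sized grid convention are ours):

    # An illustrative elliptical Gaussian PSF (Eqs. 3.17-3.19), normalized
    # to unit area as required by the POCS blur step.
    import numpy as np

    def elliptical_gaussian_psf(size, sigma, theta_deg):
        """Unit-area PSF on an odd size x size grid for nadir width sigma
        and off-nadir look angle theta_deg."""
        theta = np.radians(theta_deg)
        sx = sigma / np.cos(theta)          # cross-track width, Eq. 3.18
        sy = sigma / np.cos(theta) ** 2     # along-track width, Eq. 3.19
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        psf = np.exp(-(x ** 2 / (2 * sx ** 2) + y ** 2 / (2 * sy ** 2)))
        return psf / psf.sum()              # unit area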
CHAPTER 4
VALIDATION OF THE POCS ALGORITHM
4.1 Validation Overview
In this chapter, multi-angle imagery is simulated using the sensor model of Ch. 3
and the POCS resolution enhancement algorithm of Sect. 2.9 is applied to the simulated data sets. The validation of POCS for use with remotely-sensed multi-angle imagery covers several areas. First, a description of how the algorithm will be quantitatively tested is needed; image quality metrics are defined in Sect. 4.2. Aliasing
is an important issue to examine when using resolution enhancement algorithms for
remotely-sensed imagery. Therefore, a simulated test image with varying amounts
of aliasing is used to measure the performance of the POCS algorithm with aliased
and non-aliased images in Sect. 4.3. In Sect. 4.4, the POCS algorithm is applied to
simulated two-look images. Using images at off-nadir view angles is a challenge for
the algorithm due to the irregular sample spacing of the data; a brief analysis of the
effect of sample spacing on POCS performance is provided in Sect. 4.5. Resolution
enhancement results are extended to multiple looks in Sect. 4.6. The performance
of POCS using symmetric three-look images is discussed in Sect. 4.7, and aliasing
in three-look imagery is examined in Sect. 4.8. In Sect. 4.9, the POCS algorithm
is applied to unique imagery from a multispectral microscope in order to show how
the POCS resolution enhancement algorithm may be used to improve multispectral
data.
4.2 Error Metrics
One of the challenges of comparing resolution enhancement algorithms is the
paucity of quantitative results in the literature; many papers use only a visual
comparison of imagery before and after resolution enhancement. To truly measure
the performance of resolution enhancement algorithms, quantitative image quality
metrics are needed. Two frequently used image error metrics are the mean squared
error (MSE) and the peak signal-to-noise ratio (PSNR). These error metrics do not
always correspond well to a subjective visual comparison between images, but are
widely used because they provide an objective measure to compare image processing
methods. The MSE between two discrete images of size M × N is given by
\[ \mathrm{MSE}(f,\hat{f}) = \frac{1}{MN} \sum_{n=1}^{N} \sum_{m=1}^{M} \left[ f(m,n) - \hat{f}(m,n) \right]^2, \qquad (4.1) \]
where f is the desired object and \(\hat{f}\) is the resolution-enhanced image. The MSE is
dependent on intensity scaling (the number of bits used to store each pixel value),
so the PSNR metric is often used to avoid this problem. The PSNR (in dB) is given
by
\[ \mathrm{PSNR}(f,\hat{f}) = 10\log_{10}\frac{[\max(f)]^2}{\mathrm{MSE}(f,\hat{f})} \qquad (4.2) \]
\[ \phantom{\mathrm{PSNR}(f,\hat{f})} = 10\log_{10}\frac{[\max(f)]^2}{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M}\left[f(m,n)-\hat{f}(m,n)\right]^2}, \qquad (4.3) \]
where max(f) is the maximum pixel value possible, e.g., \(2^k - 1\) for a k-bit image.
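For reference, Eqs. 4.1-4.3 reduce to a few lines of code; this sketch (with names of our choosing) assumes floating-point images and a k-bit peak value:

    # An illustrative implementation of the MSE and PSNR metrics.
    import numpy as np

    def mse(f, f_hat):
        """Mean squared error between the object f and the estimate f_hat."""
        return np.mean((f - f_hat) ** 2)

    def psnr(f, f_hat, k=8):
        """Peak signal-to-noise ratio in dB, with peak value 2**k - 1."""
        peak = 2 ** k - 1
        return 10.0 * np.log10(peak ** 2 / mse(f, f_hat))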
The SNR improvement, or ISNR (measured in dB), is a useful metric for image restoration [17]. It is given by
\[ \mathrm{ISNR}(f,\hat{f},g) = 10\log_{10} \frac{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M}\left[f(m,n)-g(m,n)\right]^2}{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=1}^{M}\left[f(m,n)-\hat{f}(m,n)\right]^2}. \qquad (4.4) \]
It measures the distance between the original scene f and a restored image \(\hat{f}\) as well as the distance between the original scene f and the degraded image g. This metric assumes that the original, degraded, and restored images are the same size. Since the degraded images here are the LR frames, which are at a lower resolution than that of the original and the HR estimate, some modification to the ISNR metric must be made in order to use it to measure resolution enhancement performance. A baseline "degraded" image g may be formed by interpolating (i.e., with bilinear interpolation) the nadir LR frame to the same size as the original image and the HR estimate. Then, the ISNR will provide a measure of how much better the restored image is compared to the interpolation of just one LR frame, or roughly, how much additional information has been gained from the off-nadir frames. Obviously, when real imagery is used rather than simulated imagery, the ISNR cannot be computed since the "original" image f is not known.
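A sketch of this modified ISNR follows; forming the baseline g with scipy's zoom is an assumption consistent with the text, not necessarily the tool actually used:

    # The ISNR of Eq. 4.4; the 1/MN factors cancel in the ratio.
    import numpy as np

    def isnr(f, f_hat, g):
        """Improvement (dB) of the estimate f_hat over the baseline g."""
        return 10.0 * np.log10(np.sum((f - g) ** 2) / np.sum((f - f_hat) ** 2))

    # Example baseline (assumption, per the text): bilinear interpolation of
    # the 64 x 64 nadir LR frame up to the 128 x 128 HR grid, e.g. with
    # scipy.ndimage.zoom(lr_nadir, 2.0, order=1).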
In addition to the above objective error metrics, simple visual differences between images and their spectra may be used to observe the algorithm performance. A difference image between the original high resolution object scene f and the resolution-enhanced image \(\hat{f}\) is a useful qualitative measure of how the algorithm performs. The difference image is simply
\[ d = f - \hat{f}. \qquad (4.5) \]
Areas where convergence is more difficult, such as along edges, may be examined using this visual measure of error, since not only the magnitude of the error is given, but the locations of errors are given as well.
Finally, comparing the spectra of low resolution images to the spectrum of the
original high resolution image and to the spectrum of the resolution-enhanced image can show improvement of high resolution detail by the presence of spectrum
components beyond the cutoff frequency of the low resolution images. It is expected
that a resolution enhancement algorithm would restore high frequencies present in
the original high resolution image that have been lost through the imaging process.
The appearance of high frequency components in the restored image's spectrum
that do not match the original object's spectrum would indicate undesirable ringing
artifacts and noise amplification.
4.3 The Effect of Aliasing: Recreating Points and Edges
Aliasing occurs in most imaging systems [55], as was discussed in Sect. 3.4. Aliasing is usually implicitly assumed to exist when generating subsampled frames as test data for resolution enhancement algorithms [44,57]. However, the effect of aliasing has not been amply treated in the literature; no experiments or simulations to study the issue in detail have been done. Therefore, some simulations of super-resolution image reconstruction with varying amounts of aliasing were performed to quantify this behavior.
To demonstrate the ability of POCS to enhance the resolution of aliased images,
a simple test data set was formed. The test pattern contains both lines and points,
features that show how well the algorithm can reconstruct point sources and how
well it can localize edges. The test image is 64 x 64 pixels in size, and is shown in
Fig. 4.1.
Sixteen low-resolution (LR) input frames, each 16 × 16 pixels in size, may be formed by decimating the test pattern. The decimation step must be done carefully because it has an important effect on the amount of aliasing in each LR frame, and, as will be shown, on the super-resolution results. In some earlier resolution enhancement experiments [44,57], the simulated LR frames were generated from the input test image by first blurring the test image with a neighborhood operator (either a 4 × 4 uniform or Gaussian kernel), then selecting every fourth pixel to form sixteen distinct LR frames. This method does not allow the user to easily measure the effects of aliasing on the resulting HR estimate because it does not give the
Figure 4.1: Input test pattern containing lines and points
user enough control to accurately model the behavior of a specific imaging system,
as described in Ch. 3. Carefully modeling the optics, detector array, and viewing
geometry is important for studying the applicability of resolution enhancement algorithms to sets of low-resolution imagery taken at different sensor pointing angles.
Thus, in the experiments performed for this dissertation, blurring was done with a modeled circular aperture to remove all frequency components above a well-defined spatial cutoff frequency, f_c. By varying f_c, the effect of aliasing on the resolution enhancement performance may be studied for specific remote sensing sensor configurations.
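For illustration, one way such a set of frames might be generated is sketched below, reusing simulate_optical_blur from the Sect. 3.2 sketch; the phase-offset decimation is an assumption matching the one-fourth-pixel shifts described later in this section.

    # Generate factor**2 shifted LR frames: circular-aperture blur with
    # cutoff f_c = k * f_N, then decimation at every (dy, dx) phase offset.
    def make_lr_frames(scene, f_c, factor=4):
        blurred = simulate_optical_blur(scene, f_c)
        return [blurred[dy::factor, dx::factor]
                for dy in range(factor)
                for dx in range(factor)]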
Figure 4.2 illustrates the aliasing issue, showing the spectrum of the test pattern blurred with both the 4 × 4 uniform kernel and the 4 × 4 Gaussian kernel. To correctly model a set of 16 × 16 LR frames without aliasing, the required spatial cutoff frequency, f_c, is shown by a white circle. The spectrum in both cases
Figure 4.2: The spectrum of a test pattern blurred by a 4 × 4 uniform kernel (left) and by a 4 × 4 Gaussian kernel with σ = 1.5 (right), with the Nyquist cutoff frequency represented by the white circles.
shows energy at frequencies above the Nyquist frequency, meaning that the test pattern will contain aliasing artifacts. The spectrum for the Gaussian kernel has moderate
levels of energy at frequencies nearly double the Nyquist frequency. The spectrum
for the uniform kernel contains very high frequencies compared to the Nyquist cutoff
frequency. The uniform blur kernel does not accurately model a real imaging system.
The Gaussian kernel is more accurate than the uniform kernel, but it is difficult to
relate the kernel size to a system's physical parameters such as mirror size or detector
size.
It is important to note that a few other researchers have taken the more rigorous Fourier-optics-based approach in their resolution enhancement simulations. As mentioned in Sect. 3.5, Baldwin et al. [36] used a Gaussian model of the AVHRR
PSF in their simulations. However, the effects of the optics versus the detector size on the resolution enhancement algorithm were not studied; their Gaussian PSF model combined the optics PSF and the detector PSF. Thus, their results were specific to the AVHRR instrument. Hardie et al. [58] modeled a system PSF for an infrared imaging system. The system was detector-limited (aliased) and the resolution enhancement results are specific to that instrument. Borman and Stevenson [6] modeled the PSF of an aliased camera system for the resolution enhancement of video.
The spectrum of the HR test pattern bandlimited to the Nyquist frequency is
shown in Fig. 4.3, along with the spectrum of a LR frame after downsampling that
shows the preservation of the spectral information without aliasing. Spectra are
displayed with logarithmic compression to show detail. The complete set of sixteen
Nyquist-bandlimited LR images is shown in Fig. 4.4. In this experiment, the goal is
to increase the resolution by four in both the x and the y directions. In principle, for
a spatial resolution increase of N,
low-resolution frames are needed, each with
an A^th of a pixel shift with respect to the other frames. Thus, we create sixteen
frames, each of dimension 16 x 16, with one-fourth pixel shifts. Once the LR frames
have been simulated, they are used to test the performance of the POCS resolution
enhancement algorithm.
Figure 4.3: Spectrum of the test pattern after bandlimiting to the Nyquist frequency (upper left) and spectrum of a LR frame (upper right), corresponding blurred image (lower left), and blurred and downsampled image (lower right).
Figure 4.4: Sixteen unique LR frames formed by bandlimiting to the Nyquist frequency, f_c = f_N, and downsampling.
Borrowing terminology from machine learning, an epoch is defined as the presentation of all of the LR frames to the resolution enhancement algorithm. An iteration is the presentation of all pixels in a single LR frame to the algorithm.
The Nyquist-limited simulation was run with the 16 LR frames over 40 epochs, or 16 × 40 = 640 iterations. This means the algorithm saw every LR frame 40 times before it was stopped, ensuring time for its convergence. The elliptical Gaussian function with σ = 3.5 pixels was used to estimate the system PSF. To provide an
initial HR estimate, a single LR frame was upsampled using bicubic interpolation.
The interpolated LR frame is shown in Fig. 4.5. The estimated HR image after
one epoch is shown in Fig. 4.6, after ten epochs in Fig. 4.7, and after 40 epochs in
Fig. 4.8.
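In these terms, the experiment's outer loop can be sketched as follows; project_pixel is the earlier sketch, and map_psf_to_hr is a hypothetical geometry helper that returns the PSF window for a given LR pixel:

    # 40 epochs x 16 frames = 640 iterations, as in the text.
    import numpy as np

    for epoch in range(40):
        for frame in lr_frames:                       # one iteration per frame
            for (i, j), g_val in np.ndenumerate(frame):
                win, h = map_psf_to_hr(frame, i, j)   # hypothetical helper
                project_pixel(f_hat, win, h, g_val, delta)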
Figure 4.5: Bicubic interpolation of one LR frame containing lines and points.
Figure 4.6: The HR estimate after running POCS for 1 epoch, f_c = f_N.

Figure 4.7: The HR estimate after running POCS for 10 epochs, f_c = f_N.

Figure 4.8: The HR estimate after running POCS for 40 epochs, f_c = f_N.
It is interesting to compare the interpolated LR frame to the result obtained
by running POCS on multiple frames. The pixel size is smaller in the upsampled
image, but interpolation alone does not remove the blur, and the point sources are
still not visible. The HR estimate shows more detail after only one epoch, with
the point sources now visible and the width of the lines reduced compared to the
interpolated initial image. After 10 epochs, the image has been restored with good
visual quality, though not perfect. After 40 epochs, ringing is apparent in the image.
Some POCS implementations use an energy constraint to reduce such ringing at the
edges. Nonetheless, the resulting HR image demonstrates the ability of POCS to
enhance the resolution of a Nyquist-bandlimited image with a large blur relative
to the size of a feature. The point sources are nearly restored to their original
1 × 1 pixel size. A plot of the ISNR after 200 epochs for the Nyquist-bandlimited case, f_c = f_N, is shown in Fig. 4.9. Even though the ISNR is increasing, meaning
that the algorithm has not converged, the visual quality is not improving due to
ringing artifacts. Thus, stopping criteria for POCS are not easy to define. A visual
inspection of the algorithm's progress may be needed to stop the convergence at a
suitable point; when real rather than simulated data is used, the original image is not
available for comparison, and the quantitative ISNR metric cannot be computed. It
is evident that stopping criteria are an important aspect of resolution enhancement
algorithms.
Figure 4.9: The ISNR for the Nyquist-bandlimited test pattern, f_c = f_N, for 200 epochs.
Next, a case with aliasing present is examined. The limiting aperture is set to be twice as large as before. The spectrum of the HR test pattern for this case and the downsampled LR image spectrum are shown in Fig. 4.10. The spectrum is truncated during downsampling because f_c > f_N, resulting in aliased LR frames.
Figure 4.10: Spectrum of the test pattern after bandlimiting to twice the Nyquist frequency, f_c = 2f_N (upper left), and spectrum of a LR frame (upper right), corresponding blurred image (lower left), and blurred and downsampled image (lower right).
The HR estimate after running for 200 epochs with f_c = 2f_N is shown in Fig. 4.11. Compared to the result in Fig. 4.8, where f_c = f_N, the HR estimate computed using aliased LR frames is visually better than the Nyquist-limited case.

Figure 4.11: HR estimate after running POCS with f_c = 2f_N for 200 epochs.
The HR estimates for varying levels of aliasing are shown in Fig. 4.12, and the corresponding ISNR curves are shown in Fig. 4.13. The cutoff frequency f_c was chosen using f_c = k f_N, where k is a factor that indicates the amount of aliasing present. Aliasing occurs when k > 1.0, and the image is Nyquist-sampled when k = 1.0. For the simulations, the standard deviation of the Gaussian PSF estimate was chosen to match the actual system PSF as closely as possible, with σ = 3.5/k.
Figures 4.12 and 4.13 show several things. First, aliasing in a satellite sensor is not
always undesirable. A sensor system is often designed to have the largest diameter
Figure 4.12: The HR estimates for various amounts of aliasing (k = 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0), with cutoff frequency f_c = k f_N, after 200 epochs. Each image has a final ISNR that is shown on its corresponding curve in Fig. 4.13.
Figure 4.13: The effect of aliasing on the resolution enhancement of the test pattern, with cutoff frequency f_c = k f_N. Aliasing is present for k > 1.0. Every tenth epoch is plotted for clarity.
of telescope that is affordable or will fit on the launch vehicle. The blur that occurs
when reducing the size of the detectors to ensure Nyquist-bandlimited data for a
given telescope size can be large, and difficult to remove using image restoration
techniques. Inverting large blur functions results in undesirable ringing. Second,
the results with aliasing are much better than those without aliasing. Each aliased,
shifted LR frame contains information not present in the other frames; there is a
lower correlation between frames that are highly aliased. For 1.2 < k < 1.8, aliasing
gives us LR images with different shifts in the edge locations, and good restoration
of the plus sign edges. For k > 2.0, the shifts between frames allow very good
qualitative enhancement of both the plus sign and the point sources.
The results of this section have shown the effect of aliasing on the resolution
enhancement performance of LR frames simulated using only translation and downsampling. However, a simple test image was used rather than MTI data. The test
image is particularly challenging to the POCS algorithm's convergence due to its
maximal contrast (0.0 to 255.0, which is the same range enforced in this simulation by the POCS algorithm using a bounds constraint) and small image features.
Satellite imagery does not typically contain such large contrast changes; the pixels
in "real world" images are statistically highly correlated with their neighbors [59].
Therefore, to study POCS convergence with a more realistic example, the aliasing
of multi-angle satellite images will be examined further in Sect. 4.8.
4.4 Combining Two-look Imagery
The performance of the POCS algorithm with simulated two-look imagery is examined in this section. Using the multi-angle image simulation tools developed in the previous chapter, test data consisting of a nadir image and one off-nadir image are formed. One dataset is generated from the standard "Cameraman" image, shown
in Fig. 4.14. This image is often used in image processing literature when testing an
algorithm's performance, since there are strong contrasts, as well as a human face,
which allows a qualitative measure of image improvement. The other test input
used is an image acquired by the MTI instrument over Albuquerque, NM, over the
Isotopes Park baseball field. The image is a 128x128 crop from band A (the visible
"blue" band), and is shown in Fig. 4.15. This image allows a quantitative measure of
the image improvement possible for remotely-sensed imagery at visible wavelengths,
specifically for the MTI sensor.
The POCS algorithm is applied to two images, one at nadir and one at an off-nadir angle θ, to observe the effect of the off-nadir view angle on convergence. The
basic parameters of the MTI instrument are used to model the off-nadir image
geometry. The height of the sensor is assumed to be 575 km, and the field-of-view
(FOV) is approximately that of the MTI instrument. Unlike the MTI, the sensor
is modeled by a circular aperture in front of a 64 x 64 pixel 2D CCD array. The
HR grid is defined to be 128 x 128 pixels, meaning that the output resolution will
Figure 4.14: The original "Cameraman" image, cropped to 128x128 pixels.
Figure 4.15: The original "Isotopes" MTI image, sized 128x128 pixels.
be twice that of the input images in both the x and y directions (four times the
number of pixels). However, this sampling grid is finer than the "true" resolution
expected, since two images with optimum shifts and no angular movement allow a theoretical resolution increase of only √2 ≈ 1.4 from simple geometry. The modeled
input LR frames for the "Cameraman" image are shown in Fig. 4.16 and Fig. 4.17.
The HR estimates found by applying POCS resolution enhancement are shown in
Figs. 4.18 and 4.19 with the corresponding ISNR curves (compared to bilinearly
interpolating the nadir image) shown in Fig. 4.20. Observing the HR estimates
of the "Cameraman" image, small details in facial features are enhanced. Since
each LR frame contained different details, the HR estimates vary on how well they
bring out features of interest. Nonetheless, combining only two images results in
improvement over the LR nadir image in Fig. 4.16. Repeating the above experiment
with the "Isotopes" image, the modeled input LR frames are shown in Fig. 4.21 and
Fig. 4.22. The HR estimates are shown in Figs. 4.23 and 4.24 with the ISNR curves
shown in Fig. 4.25. Details such as the divided road lanes and parking lot rows are
more apparent in the restored HR estimates, with details again varying depending
on which off-nadir angle was used. The ISNR values for the "Isotopes" image are
approximately 1 dB higher than those for the "Cameraman" image, with the best two-angle combination at θ = 0°, 10° and an ISNR of 5.4 dB.
Figure 4.16: The nadir LR input "Cameraman" frame, sized 64 × 64 pixels, enlarged to show detail.
Figure 4.17: The simulated multi-angle LR "Cameraman" frames used as input to the POCS algorithm, 64 × 64 pixels, at θ = 0°, 5°, ..., 40°.
Figure 4.18: The HR estimate of the "Cameraman" image using two frames, one nadir and one off-nadir at look angles θ = 5°, 10°, 15°, 20°.
Figure 4.19: The HR estimate of the "Cameraman" image using two frames, one nadir and one off-nadir at look angles θ = 25°, 30°, 35°, 40°.
Figure 4.20: Improvement in SNR for the "Cameraman" image using two frames, one nadir and one off-nadir at various look angles θ.
Figure 4.21: The nadir LR input "Isotopes" frame, sized 64 x 64 pixels, enlarged
to show detail.
Figure 4.22: The simulated multi-angle LR "Isotopes" frames used as input to the POCS algorithm, 64 × 64 pixels, at θ = 0°, 5°, ..., 40°.
Figure 4.23: The HR estimate of the "Isotopes" image using two frames, one nadir and one off-nadir at look angles θ = 5°, 10°, 15°, 20°.
Figure 4.24: The HR estimate of the "Isotopes" image using two frames, one nadir and one off-nadir at look angles θ = 25°, 30°, 35°, 40°.
Figure 4.25: Improvement in SNR for the "Isotopes" image using two frames, one nadir and one off-nadir at various look angles θ.
4.5 The Off-nadir Sampling Grid Problem
The placement of an impulse or test pattern within a LR grid affects the algorithm's convergence because the sample spacing goes into and out of phase for the LR frames at different angles, with the spacing increasing by a non-integer amount as the off-nadir angle increases. Some regions will converge well, while
others not as well. When the samples are coincident, the resolution enhancement
algorithm has a difficult time converging. For off-nadir images, some samples will
be well-spaced, while others will not. This leads to a linear shift varying (LSV)
sample grid. As an example, consider Fig. 4.26. Sample positions are shown for a
"nadir" image of 67 x 67 pixels, a 10° "off-nadir" image of 65 x 65 pixels, and a 14°
"off-nadir" image of 63 x 63 pixels. Note that this example does not include per­
spective skew, but only the relative pixel sizes. The effective spacing of the sample
points is unevenly distributed, resulting in areas with poor coverage. The POCS
algorithm (or any other multi-frame resolution enhancement algorithm) will have
difficulties improving the resolution in these regions. The non-integer ratios of the
nadir and off-nadir sampling grids are problematic since it is nearly impossible to
ensure the optimal spacing of sample points when acquiring multiple-look images.
To form a resolution-enhanced image, one must hope that the sampled observations
in a region of interest are sufficiently shifted between looks to provide the additional
information required. However, the POCS algorithm can partially compensate for
insufficient samples because additional constraint sets may be applied to improve
the algorithm convergence. In Ref. [46], it was shown that an amplitude constraint
improved the convergence rate of the POCS method when the number of samples
of a one-dimensional function was insufficient to allow reconstruction.
To quantitatively show how the convergence of POCS varies with the distribution
of sample points, the algorithm is applied to a two-look "Cameraman" dataset.
Figure 4.27 shows the locations of sampling points for a nadir frame at θ = 0° and an off-nadir frame at θ = 7°. The black box surrounds an area with suboptimal sample spacing, with a distance between the corresponding sample points
in each frame less than a tenth of a pixel. After twenty epochs of running the POCS
algorithm, the ISNR over the entire image is 3.00 dB. In the region with "bad"
sample spacing, the ISNR is 2.47 dB, while in the surrounding region with "good"
sample spacing the ISNR is 3.26 dB.
Figure 4.26: Grids of different sizes overlaid. Blue points represent the pixel positions for a "nadir" image of 67 × 67 pixels, green points represent a 10° "off-nadir" image of 65 × 65 pixels, and red points represent a 14° "off-nadir" image of 63 × 63 pixels. Non-integer sampling ratios result in areas with good sampling coverage (areas where blue, green, and red dots are equally spaced) and bad sampling coverage (the region in the center).
Figure 4.27: Distribution of sampling positions for two frames, one nadir and one off-nadir at a look angle of 7°. The region inside the box has poor sample spacing compared to the region outside the box, resulting in different ISNR's in each restored region after 20 iterations of POCS (the ISNR is 2.47 dB inside the box and 3.26 dB outside the box).
4.6 Combining Multi-look Imagery
In this section, the performance of the POCS algorithm with simulated multi-look imagery is examined. Ten images, one nadir and nine off-nadir, are combined to form a HR image. The parameters used to model the off-nadir image geometry are the same as those for the MTI instrument.
For the first multi-look experiment, varying numbers of images with a Δθ = 8° separation are combined. Figures 4.28 and 4.30 show the HR estimates from the θ = 0°, 8°, ..., 40° case for the "Cameraman" and the "Isotopes" image, respectively. Figures 4.29 and 4.31 show the corresponding ISNR curves. For the "Isotopes" test image, the highest ISNR occurs at 40°, using six LR frames, but the final ISNR of approximately 8.2 dB may be achieved using only four or five images. The 40° image provides little additional information. For the "Cameraman" image, the 40° image does help the final result by about 0.5 dB over the θ = 32° maximum case. The inclusion of the image at 48° makes the results worse; at this angle, the off-nadir GSD is more than twice that of the nadir GSD. Therefore, off-nadir images at θ ≥ 48° do not help improve the resolution. Since the MTI acquires its second look near 50° or 55°, it is unlikely that standard two-look MTI data could be used successfully for resolution enhancement.
Ten images with 5° separation are generated next, with a maximum angle of 45°
(designed to be less than 48° since the images fail to provide helpful information
Figure 4.28: The HR estimate for six frames of the "Cameraman" image, θ = 0°, 8°, 16°, 24°, 32°, 40°.
Figure 4.29: The ISNR for 2 to 10 frames of the "Cameraman" image, θ = 0°, 8°, 16°, ..., 72°.
Figure 4.30: The HR estimate for six frames of the "Isotopes" MTI image, θ = 0°, 8°, 16°, 24°, 32°, 40°.
Figure 4.31: The ISNR for 2 to 10 frames of the "Isotopes" MTI image, θ = 0°, 8°, 16°, ..., 72°.
past that point). Figures 4.32 and 4.34 show the six-frame HR estimates for the two images using θ = 0°, 5°, 10°, 15°, 20°, 25°; Figs. 4.33 and 4.35 show the corresponding ISNR curves with Δθ = 5° up to a maximum off-nadir angle of θ = 45°. In both cases, the maximum ISNR is close to 9 dB. After combining five or six images, the ISNR is not made better by combining more images, but the ISNR values do not get worse, either. Therefore, six LR frames are ideal for this viewing scenario, as long as the images are acquired for θ < 48° off-nadir so that they do not cause divergence of the POCS algorithm.
Figures 4.36 and 4.38 show the HR estimates using fewer frames by removing some of the intermediate frames from the previous experiment, with Figs. 4.37 and 4.39 giving the ISNR curves using fewer frames. Combining five or six images gives a final ISNR near 8 dB, between 0.5 dB and 1 dB less than the cases with more images. For the "Isotopes" image, an approximately 8.5 dB improvement is achieved using five images with either θ = 0°, 5°, 10°, 15°, 20° or θ = 0°, 5°, 15°, 25°, 35°. Thus, the algorithm does not require a large (> 10) number of images or images with exact Δθ spacing.
If the maximum angle is set at 40°, and the two to ten LR frames occur at
equally-spaced angle intervals, the results are practically the same as before; the
HR estimates are shown in Figs. 4.40 and 4.42 and the ISNR curves are shown in
Figs. 4.41 and 4.43.
Figure 4.32: The HR estimate for six frames of the "Cameraman" image, θ = 0°, 5°, 10°, 15°, 20°, 25°.
Figure 4.33: The ISNR for 2 to 10 frames of the "Cameraman" image, θ = 0°, 5°, 10°, ..., 45°.
Figure 4.34: The HR estimate for six frames of the "Isotopes" image, θ = 0°, 5°, 10°, 15°, 20°, 25°.
Figure 4.35: The ISNR for 2 to 10 frames of the "Isotopes" MTI image, θ = 0°, 5°, 10°, ..., 45°.
Figure 4.36: The HR estimate for five frames of the "Cameraman" image, θ = 0°, 5°, 15°, 25°, 35°.
Figure 4.37: The ISNR for 2 to 6 frames of the "Cameraman" image, θ = 0°, 5°, 15°, ..., 45°.
Figure 4.38: The HR estimate for five frames of the "Isotopes" image, θ = 0°, 5°, 15°, 25°, 35°.
Figure 4.39: The ISNR for 2 to 6 frames of the "Isotopes" MTI image, θ = 0°, 5°, 15°, ..., 45°.
Figure 4.40: The HR estimate for seven frames of the "Cameraman" image, θ = 0°, 6.7°, 13.3°, 20°, 26.7°, 33.3°, 40°.
Figure 4.41: The ISNR for 2 to 10 frames of the "Cameraman" image, using equally spaced angles from 0° to 40°.
Figure 4.42: The HR estimate for seven frames of the "Isotopes" image, θ = 0°, 6.7°, 13.3°, 20°, 26.7°, 33.3°, 40°.
Figure 4.43: The ISNR for 2 to 10 frames of the "Isotopes" MTI image, using equally spaced angles from 0° to 40°.
4.7 Combining Symmetric Three-look Imagery
The two-look experiments of Sect. 4.4 assumed that the imaging system operated like the MTI sensor, first acquiring a nadir image and then pointing back to acquire a second image. A logical improvement in the image collection is to begin taking images as soon as the ground target is within view at some forward-pointing off-nadir angle, and then continue to point the satellite and take imagery until the sensor has passed over the ground target and is pointing backward at some maximum off-nadir angle. This approach allows more images at look angles close to nadir to be collected. Since near-nadir imagery has the highest resolution (the IFOV is smallest when the distance from the satellite to the ground is at a minimum), as many images as possible should be taken close to nadir. The number of images that may be taken is limited by the time required to re-point the satellite. At a minimum, three images, one at nadir and two at symmetric off-nadir positions ±θ, should be used to maximize the quality of the data input to a resolution enhancement algorithm.
To explore the idea of using symmetric off-nadir images, the POCS algorithm is applied to simulated three-look imagery. The LR dataset is formed from a nadir image (θ = 0°), a forward-pointing off-nadir image at −θ, and a backward-pointing off-nadir image at +θ. The angle θ is varied for each simulation to determine the useful range of off-nadir angles for resolution enhancement. The HR estimates for the "Cameraman" three-look imagery for symmetric off-nadir angles of ±θ are shown in Figs. 4.44 and 4.45. The corresponding ISNR curves are shown in Fig. 4.46. Comparing the ISNR results here with the equally-spaced nadir/backward-pointing results shown in Fig. 4.41, the asymmetric case with three looks at θ = 0°, 20°, 40° has an ISNR of 5 dB. For the symmetric case using the same Δθ = 20°, so that θ = −20°, 0°, 20°, the ISNR is 6.2 dB, a 1.2 dB improvement. The asymmetric acquisition geometry requires four images rather than the three needed in the symmetric case to attain nearly the same ISNR of approximately 7 dB. The three-look HR estimates for the "Isotopes" image are shown in Figs. 4.47 and 4.48, with corresponding ISNR curves shown in Fig. 4.49. The Δθ = 20° case for symmetric viewing gives an ISNR of 6.2 dB, while the asymmetric case gives an ISNR of 6.9 dB, an improvement of 0.7 dB. Symmetric viewing with θ = −15°, 0°, 15° gives a 7.5 dB improvement compared to the similar 7.7 dB improvement using θ = 0°, 13.3°, 26.7°, 40°; three symmetric frames are needed compared to four asymmetric frames for equivalent improvement of the "Isotopes" image. Therefore, symmetric viewing is better than asymmetric viewing for POCS resolution enhancement.
Figure 4.44: The HR estimate of the "Cameraman" image using three frames, one nadir and two off-nadir at look angles θ = ±5°, ±10°, ±15°, ±20°.
Figure 4.45: The HR estimate of the "Cameraman" image using three frames, one nadir and two off-nadir at look angles θ = ±25°, ±30°, ±35°, ±40°.
Figure 4.46: Improvement in SNR for the "Cameraman" image using three frames, one nadir and two off-nadir at look angles ±θ.
Figure 4.47: The HR estimate of the "Isotopes" MTI image using three frames, one nadir and two off-nadir at look angles θ = ±5°, ±10°, ±15°, ±20°.
Figure 4.48: The HR estimate of the "Isotopes" MTI image using three frames, one nadir and two off-nadir at look angles θ = ±25°, ±30°, ±35°, ±40°.
Figure 4.49: Improvement in SNR for the "Isotopes" MTI image using three frames, one nadir and two off-nadir at look angles ±θ.
4.8 Aliasing and Three-look Imagery
The topic of aliasing is now revisited using three-look simulated imagery. Two issues to explore are the selection of the elliptical Gaussian PSF width, σ, and how aliasing affects the resolution enhancement of multi-angle images. Recall that aliasing is present in the LR frames with an optical cutoff frequency of f_c = k f_N when the Nyquist factor k > 1.0. Aliased and non-aliased data sets, for 0.8 ≤ k ≤ 2.0, are created at symmetric look angles θ = −20°, 0°, +20°. The POCS algorithm is then applied to the three-look imagery using an estimated elliptical Gaussian system PSF with varying widths σ.
The aliased three-look 64 x 64 pixel "Cameraman" and "Isotopes" data sets with
k = 2.0 are shown in Figs. 4.50 and 4.52, along with their original 128 x 128 pixel
images. Figures 4.51 and 4.53 show the nadir LR frames bilinearly interpolated to
the HR grid at twice the size of the LR frames. The POCS algorithm is initialized
with the interpolated image. Note that the road medians seen in the original 128 x
128 "Isotopes" image (upper left of Fig. 4.52) are not visible in the interpolated
image.
Figures 4.54 to 4.67 show the "Cameraman" HR estimates using a value for σ that gives the best ISNR, along with their corresponding ISNR curves. Likewise, Figs. 4.56 to 4.69 show the "Isotopes" HR estimates using a value for σ that gives the best ISNR, along with their corresponding ISNR curves.
Figure 4.50: The original "Cameraman" image (upper left) and the aliased three-look images for k = 2.0 (upper right at −20°, lower left at 0°, and lower right at +20°).
Figure 4.51: The 64 x 64 nadir LR "Cameraman" frame for k = 2.0 after bilinear
interpolation to a 128 x 128 pixel grid.
Figure 4.52: The original "Isotopes" MTI image (upper left) and the aliased three-look images for k = 2.0 (upper right at −20°, lower left at 0°, and lower right at +20°).
Figure 4.53: The 64 × 64 nadir LR "Isotopes" frame for k = 2.0 after bilinear interpolation to a 128 × 128 pixel grid.
The aliased image cases are studied first, using k = 2.0 and k = 1.4. Figures 4.54 and 4.58 show the "Cameraman" HR estimates at twice the resolution of the LR frames. When k = 2.0, the best ISNR is 6.49 dB, and occurs when the elliptical Gaussian PSF has width σ = 1.8/k = 1.8/2.0 = 0.9, as shown in Fig. 4.55. Similarly, from Fig. 4.59, it is observed that when k = 1.4, the best ISNR is 4.35 dB, and occurs when the estimated PSF has width σ = 1.8/k = 1.8/1.4 = 1.29. Figures 4.56 and 4.60 show the "Isotopes" HR estimates at twice the resolution of the LR frames. When k = 2.0, the best ISNR is 7.73 dB, also using an elliptical Gaussian PSF width of σ = 0.9, shown in Fig. 4.57. In Fig. 4.61, when k = 1.4, the best ISNR is 6.16 dB, also using an estimated PSF width of σ = 1.29. As was observed with the aliased, translated test images in Sect. 4.3, an increase in aliasing, or the Nyquist factor k, results in better HR estimates. The best estimate for the "Cameraman" image has an ISNR that is more than 2 dB better for k = 2.0 compared to k = 1.4. For the "Isotopes" MTI image, the best HR estimate has an ISNR that is nearly 1.6 dB better for k = 2.0 compared to k = 1.4. The ISNR plots for k = 2.0 and k = 1.4 also show that using an estimated PSF that is too wide (σ > 0.9 for k = 2.0 and σ > 1.29 for k = 1.4) results in unwanted ringing artifacts and POCS algorithm divergence. If σ is too small, POCS does not give optimal improvement because not enough deblurring occurs. Therefore, correctly estimating the system PSF is critical for obtaining optimal resolution enhancement. The estimated PSF width must account for the combined PSF of the optics and the detector. For a multispectral system such as the MTI, this means that the POCS algorithm must use estimated system PSF's of different widths for each band.
Figure 4.54: The HR "Cameraman" image for 3 frames at θ = −20°, 0°, 20°, k = 2.0, and σ = 0.9.
Figure 4.55: The ISNR versus epoch of the enhanced "Cameraman" image for varying widths σ of the estimated PSF (σ = 0.25 to 1.25), using three frames at θ = -20°, 0°, 20°, and f_c = 2f_N.
Figure 4.56: The HR "Isotopes" image for 3 frames at θ = -20°, 0°, 20°, k = 2.0, and σ = 0.9.
"1
J
r
— a =0.25
-©- a =0.50
0 =0.75
^ o=0.90
a =1.00
^ 0=1.05
^ o=1.12
o =1.25
L
10
15
20
Epoch
Figure 4.57: The ISNR of the enhanced "Isotopes" image for varying widths a
of the estimated PSF, using three frames at 6 = —20°,0°,20°, and
fc = VN.
Figure 4.58: The HR "Cameraman" image for 3 frames at θ = -20°, 0°, 20°, k = 1.4, and σ = 1.29.
Figure 4.59: The ISNR versus epoch of the enhanced "Cameraman" image for varying widths σ of the estimated PSF (σ = 0.36 to 1.79), using three frames at θ = -20°, 0°, 20°, and f_c = 1.4f_N.
Figure 4.60: The HR "Isotopes" image for 3 frames at θ = -20°, 0°, 20°, k = 1.4, and σ = 1.29.
Figure 4.61: The ISNR versus epoch of the enhanced "Isotopes" image for varying widths σ of the estimated PSF (σ = 0.36 to 1.79), using three frames at θ = -20°, 0°, 20°, and f_c = 1.4f_N.
Next, the Nyquist-sampled and non-aliased data sets are considered, using k = 1.0 and k = 0.8. The "Cameraman" HR estimate and the ISNR curves for k = 1.0 are shown in Figs. 4.62 and 4.63, respectively. Similarly, the "Isotopes" HR estimate and the ISNR curves for k = 1.0 are shown in Figs. 4.64 and 4.65. The HR images contain undesirable ringing artifacts. Nyquist-sampled imagery is difficult to improve qualitatively even though the ISNR indicates resolution improvement. The ISNR is 2.50 dB for the "Cameraman" image and is 4.55 dB for the "Isotopes" image when k = 1. However, several details are not resolved compared to the original images. The roads in the "Isotopes" image do not show two lanes with a median, and the parking lot above Isotopes Park no longer shows rows of cars. Facial features are not improved in the "Cameraman" image. Figures 4.66 and 4.67 show the HR "Cameraman" estimate and the ISNR curves for k = 0.8. The optimal PSF for the "Cameraman" image has a width of σ = 1.5/k = 1.5/0.8 = 1.88. Figures 4.68 and 4.69 show the HR estimate and the ISNR curves for the "Isotopes" image using k = 0.8. For this non-aliased case, the optimal PSF for the "Isotopes" image has a width of σ = 1.8/k = 1.8/0.8 = 2.25. As was the case for Nyquist-sampled imagery, ringing prevents a visually good HR estimate even when the optimal estimated system PSF is used in the POCS algorithm. It may be concluded that little qualitative improvement may be expected by applying the POCS resolution enhancement algorithm to images that are not aliased, despite efforts to obtain imagery at symmetric off-nadir angles and to use the best estimated system PSF width, σ, of the elliptical Gaussian PSF.

To summarize the results of this section, Figs. 4.70 and 4.71 show the ISNR curves for the "Cameraman" and the "Isotopes" images for differing Nyquist factors k, using a PSF width of σ = 1.8/k.
Figure 4.62: The HR "Cameraman" image for 3 frames at θ = -20°, 0°, 20°, k = 1.0, and σ = 1.8.
Figure 4.63: The ISNR versus epoch of the enhanced "Cameraman" image for varying widths σ of the estimated PSF (σ = 0.50 to 2.50), using three frames at θ = -20°, 0°, 20°, and f_c = 1.0f_N.
Figure 4.64: The HR "Isotopes" image for 3 frames at θ = -20°, 0°, 20°, k = 1.0, and σ = 1.8.
Figure 4.65: The ISNR versus epoch of the enhanced "Isotopes" image for varying widths σ of the estimated PSF (σ = 0.50 to 2.50), using three frames at θ = -20°, 0°, 20°, and f_c = 1.0f_N.
Figure 4.66: The HR "Cameraman" image for 3 frames, k = 0.8, and σ = 2.25.
Figure 4.67: The ISNR versus epoch of the enhanced "Cameraman" image for varying widths σ of the estimated PSF (σ = 0.62 to 3.12), using three frames at θ = -20°, 0°, 20°, and f_c = 0.8f_N.
Figure 4.68: The HR "Isotopes" image for 3 frames, k = 0.8, and σ = 2.25.
Figure 4.69: The ISNR versus epoch of the enhanced "Isotopes" image for varying widths σ of the estimated PSF (σ = 0.62 to 3.12), using three frames at θ = -20°, 0°, 20°, and f_c = 0.8f_N.
Figure 4.70: The ISNR versus epoch of the enhanced "Cameraman" image for varying Nyquist factors k (k = 0.80, 1.00, 2.00), where f_c = kf_N, using three frames at θ = -20°, 0°, 20°, and σ = 1.8/k.
Figure 4.71: The ISNR versus epoch of the enhanced "Isotopes" image for varying Nyquist factors k (k = 0.80, 1.00, 2.00), where f_c = kf_N, using three frames at θ = -20°, 0°, 20°, and σ = 1.8/k.
4.9 Resolution Enhancement of Multispectral Imagery
Resolution enhancement for an instrument like the MTI has an additional component not yet addressed: multiple spectral bands. It is possible to perform multispectral spatial-resolution enhancement. During each "look," the MTI instrument collects many images with the same view angle, one for each spectral band. Multiple frames, each at the same wavelength but at different angles, may be combined using the standard POCS algorithm already discussed. Repeating this process for each band, a multispectral HR estimate may be formed by stacking the HR estimates for the individual bands. The concept is shown in Fig. 4.72.
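A minimal sketch of this per-band processing loop is given below; the registration and single-band POCS routines (register_frames, pocs_enhance) are hypothetical placeholders for the algorithms described earlier:

    import numpy as np

    def multispectral_pocs(frames, register_frames, pocs_enhance):
        """Form a multispectral HR estimate band by band (cf. Fig. 4.72).

        frames: array of shape (n_frames, n_bands, h, w); each "look"
        contributes one LR image per spectral band.
        register_frames / pocs_enhance: user-supplied registration and
        single-band POCS routines (placeholders here).
        """
        n_frames, n_bands = frames.shape[:2]
        hr_bands = []
        for b in range(n_bands):
            lr_stack = frames[:, b]                # all looks for band b
            shifts = register_frames(lr_stack)     # per-band registration
            hr_bands.append(pocs_enhance(lr_stack, shifts))
        return np.stack(hr_bands)                  # HR multispectral cube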
In this section, real data from a multispectral fluorescence confocal imaging system are used to show how POCS may be used to form high resolution images using multiple spectral bands. The imaging system, developed in the Spectroscopy, Imaging, and Molecular Chemistry Group at Los Alamos National Laboratory (LANL), uses multiple lasers to scan across a living sample that has been stained with a fluorescent dye, or fluorophore. The fluorescence emission signals at incremental spatial positions and at each wavelength are spectrally filtered, detected by a photomultiplier tube (PMT), and stored as a digital image. The use of more than one laser allows multiple types of cellular structures to be imaged simultaneously. Two lasers are used in this imaging system: an Argon laser at 488 nm and a He-Ne laser at 543 nm. The fluorophores excited by the laser illumination give emissions at longer
Figure 4.72: Formation of a multispectral high-resolution image. For each band, the N frames (one per look) are registered and combined by the resolution enhancement algorithm into a high-resolution image for that band; the M per-band results are then stacked into a high-resolution multispectral image.
wavelengths than that of the laser. The filters record emissions in a "green" channel
from 500 to 530 nm and a "red" channel for emissions above 550 nm. Figure 4.73
shows an image of a cell acquired by the imaging system. The green channel allows
imaging of actin fibers and the red channel allows imaging of mitochondria.
To collect data for input to the POCS algorithm, nine LR frames were acquired
by the microscope by translating the cell sample with respect to the optics in the x
and the y directions in increments of 100 nm using a precision stepper motor. Each
LR image has a 75 nm pixel size. The 100 nm step size was the smallest step size the
stepper motor could take. The lasers cause photobleaching of the sample to occur,
meaning that successive frames have reduced contrast with respect to the previous
frame. Therefore, the LR frames must be contrast-matched prior to forming the
HR estimate using POCS. One of the nine LR frames is shown in Fig. 4.73, and is 400 x 400 pixels in size. The HR estimate is computed on a grid that is three
times larger, at 1200 x 1200 pixels, or 25 nm on a side for each pixel. The estimated
HR image at 3x the original resolution is shown in Fig. 4.74. Compared to the LR
image, details of the cell's structure are qualitatively better in the estimated HR
image (according to the author as well as the microscope's designers at LANL). The
green actin fibers appear more continuous rather than "speckled," and the shape and contrast of the mitochondria are improved. A second example is shown in Figs. 4.75
and 4.76. Note the detail of the thin actin fibers, especially the thin fiber connecting
the two cells, and the enhanced visibility of the red mitochondria.
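The contrast matching mentioned above can be done with a simple per-frame gain and offset. The sketch below is a first-order approach under the assumption that photobleaching mainly rescales the frame statistics; it matches each frame's mean and standard deviation to those of the first frame:

    import numpy as np

    def contrast_match(frames):
        """Match each LR frame's mean and standard deviation to frame 0.

        A linear (gain/offset) match; photobleaching mainly reduces
        contrast from frame to frame, so this first-order correction
        is applied before running POCS.
        """
        ref = frames[0]
        matched = [ref]
        for f in frames[1:]:
            gain = ref.std() / f.std()
            matched.append((f - f.mean()) * gain + ref.mean())
        return np.stack(matched)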
Figure 4.73: One LR frame (out of nine) of a two-spectral-band cell image
Figure 4.74: The HR estimate of the multispectral cell image with 3x resolution
increase
Figure 4.75: One LR frame of a two-spectral-band cell image.
Figure 4.76: HR estimate of multispectral cells
Although a visual comparison of the cell images shows a qualitative improvement, a quantitative measure of improvement is difficult since the ISNR cannot be computed without knowledge of the "original" object. As an alternative way to quantitatively assess the POCS algorithm performance, images were acquired of standard-sized 100 nm latex fluorescence beads, or microspheres, to see if their resolution could be enhanced. Microspheres are commonly used to adjust the depth focus of scanning microscopes. The acquired imagery was very large, containing hundreds of identical microspheres, so a small region of the image was cropped to form a representative test image containing only a few beads. The before and after images are shown in Figs. 4.77 and 4.78. The LR frames make the beads appear to be 450 nm across. After applying POCS, the beads are not resolved to 100 nm, but appear about 300 nm in diameter. The results are highly sensitive to the different contrast between the frames, the registration, and the width of the estimated PSF. The stepper motor has a precision of ±5 nm, meaning the registration error between any two frames could be up to 10 nm in both the x and the y directions, or nearly one-half pixel on the HR grid. To achieve improved resolution enhancement results from this microscope data, registration estimation methods could be used rather than assuming perfect stepper motor translation, as was done in this case. Also, using microspheres that have a thin layer of dye rather than dye throughout would ensure that the depth focus is correct; the microspheres in the acquired images appear to be different sizes, indicating that the bead cross-sections are not necessarily through the center of each bead. In addition, the imagery was acquired at the highest resolution possible for the microscope to test whether POCS could reconstruct the beads at a resolution better than that of the microscope. Since these images are nearly diffraction-limited, not much improvement beyond the 300 nm optical resolution can be expected. However, POCS could still be useful for scanning microscope imagery; by reducing the exposure time and acquiring multiple images, less photobleaching of the sample occurs. Nonetheless, the microscope examples demonstrate the ability of POCS to improve real multispectral data under non-ideal conditions.
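One way to estimate the frame-to-frame shifts, rather than trusting the nominal stepper-motor translations, is phase correlation. The sketch below assumes scikit-image is available; upsample_factor controls the sub-pixel precision of the estimated shifts:

    import numpy as np
    from skimage.registration import phase_cross_correlation

    def estimate_shifts(frames, upsample_factor=20):
        """Estimate sub-pixel shifts of each frame relative to frame 0.

        Uses phase correlation instead of the nominal stepper-motor
        translations; upsample_factor=20 resolves shifts to 1/20 of
        an LR pixel.
        """
        shifts = [np.zeros(2)]
        for f in frames[1:]:
            shift, _, _ = phase_cross_correlation(
                frames[0], f, upsample_factor=upsample_factor)
            shifts.append(shift)
        return np.array(shifts)  # (row, col) shift per frame, in LR pixels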
Figure 4.77: One LR frame of fluorescence beads
Figure 4.78: HR estimate of the fluorescence beads
CHAPTER 5
PRE-PROCESSING ISSUES
5.1 Overview
Several pre-processing steps can improve the results of resolution enhancement algorithms when applied to multi-angle, remotely-sensed images. This chapter discusses remote sensing issues that do not always arise when applying resolution enhancement algorithms to standard images in the field of image processing. These issues include sensor calibration, atmospheric correction, and reflectance properties of materials. Sensor calibration ensures that the pixel values in an image are accurate measurements of the objects or materials in a scene. Also, satellite-based sensors acquire images through the earth's atmosphere, and the effects of the atmosphere ideally should be removed from the multi-angle images before fusing them: the path length from the sensor to the surface of the earth becomes larger with increasing angles with respect to nadir, and the sun's position is different from one image to another. For small angles and short times between subsequent images, atmospheric effects may not change dramatically and could be neglected. However, removing the atmospheric effects allows a retrieval of ground reflectance, which is needed for another pre-processing step: bidirectional reflectance distribution function correction. Each of these pre-processing steps is discussed in detail in the following sections.
5.2 Sensor Calibration
Most sensor systems store their data digitally, typically as an integer value, with a precision given by the number of bits per pixel [21]. The stored data value for a pixel is commonly referred to as the digital count (DC) or the digital number (DN). Sensor calibration is required to convert the digital numbers at the focal plane of an instrument into physical units, such as at-sensor radiance. Without calibration, there could be differences from pixel to pixel due to variations in detector fabrication, or even condensation on the system optics, for instance. This would lead to sub-optimal results when applying a resolution enhancement algorithm. Sensors are characterized and calibrated before flight, but once launched, the performance of the system changes. Therefore, calibration is usually done periodically throughout the lifetime of a sensor. Some sensors, such as the MTI, are designed with an onboard calibration system to allow frequent and precise data calibration. When no onboard calibration system exists, as is the case for many sensors, calibration may be done by using deep-space looks or, alternatively, by collecting concurrent "ground truth" measurements. In this dissertation, the images are assumed to be correctly calibrated. However, all remote sensing imagery has some degree of uncertainty in its calibration, and poorly calibrated images cannot be combined to form high-resolution imagery. Therefore, sensor calibration is important to consider when doing resolution enhancement using multiple low-resolution images.
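For reference, a common linear calibration model converts digital numbers to at-sensor radiance with per-band gain and offset coefficients. The sketch below uses made-up coefficients purely for illustration:

    import numpy as np

    def dn_to_radiance(dn, gain, offset):
        """Convert raw digital numbers to at-sensor radiance.

        Assumes the common linear model L = gain * DN + offset, with
        per-band (or per-detector) gain and offset derived from the
        sensor's calibration; the coefficients here are illustrative.
        """
        return gain * np.asarray(dn, dtype=float) + offset

    # Example with hypothetical coefficients for a single band:
    radiance = dn_to_radiance(dn=np.array([[512, 600], [488, 700]]),
                              gain=0.02, offset=-1.5)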
5.3 Atmospheric Correction
One must also consider the effects of the atmosphere when taking measurements of the ground from a space-based or high altitude airborne sensor [21]. Radiation leaving the ground may be absorbed or scattered by the atmosphere, resulting in attenuation of the desired signal. Additionally, direct scattering of solar radiation by the atmosphere increases the radiance at the sensor, and these effects are different in each spectral band. These two effects make it a challenge to determine the true radiance or reflectance of an object at the Earth's surface. In remote sensing, compensating for the effects of the atmosphere is called atmospheric correction. The primary inputs to atmospheric correction are the path length from the ground target to the sensor and the angle between the sun and the sensor. The desired output of an atmospheric correction algorithm is the ground reflectance, ρ_g, at each pixel. Ground reflectance is a more useful measurement than the at-sensor radiance because it is a material-specific property. Reflectance properties of materials are discussed in more detail in Sect. 5.4.
For visible and near infrared wavelengths (the solar regime), radiance reaching the sensor follows three primary paths from the sun [21]. These paths are shown in Fig. 5.1. Upwelled radiance from the atmosphere is the component removed during atmospheric correction. These photons are scattered by the atmosphere before reaching the ground, and reduce the contrast of objects on the ground as viewed by the sensor. Direct radiance may be partially or entirely absorbed by the atmosphere before reaching the sensor. The direct radiance photons illuminate objects on the ground. A human observer would perceive this illumination as a sunlit area. The photons that comprise skylight are scattered by the atmosphere and then illuminate objects on the ground. Objects that are not directly illuminated by the sun (that is, they are in the shadow of another object) are still visible due to skylight illumination.
Figure 5.1: Main paths of solar energy to a sensor: upwelled radiance, direct radiance, and skylight.
Applying atmospheric correction prior to running a resolution enhancement algorithm is desirable for any image analysis task that requires high radiometric precision. A procedure for atmospheric correction is as follows. Each satellite image that is input to the resolution enhancement algorithm will have a sun and view angle associated with it. Given these sun and view angles (zenith and azimuth), a radiative transfer code may be run with these sun and view angles for multiple surface reflectances, assuming a fixed aerosol case. The output from the radiative transfer code will contain a relative radiance output for the rough center wavelength of each sensor spectral band for a set of assumed surface reflectances. The relative radiances are converted to absolute radiances by multiplying them by the incident solar irradiance, E_0, at the top of the atmosphere. The value for E_0 must be band-averaged, incident normal to the atmosphere, and corrected for sun-earth distance. This set of reflectance to absolute radiance values may then be used as a lookup table. To correct a pixel in a scene, given the at-sensor radiance at that pixel, L_sensor, and the wavelength λ, or band, of that image, the nearest two predicted radiances from the radiative transfer code runs are located in the proper lookup table for that band and the values are linearly interpolated to determine the ground reflectance, ρ_g, corresponding to that sensor radiance. For resolution enhancement preprocessing, it is reasonable to assume that the same lookup table may be used for all of the pixels within a single image, as long as the imagery does not contain haze or partial cloud cover, since the wavelength, path length, sun angle, and view angle are essentially the same over an image.
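The lookup-table inversion in the last step amounts to a one-dimensional linear interpolation. The sketch below illustrates it with a hypothetical single-band table; the radiance values stand in for radiative transfer code output:

    import numpy as np

    def radiance_to_reflectance(l_sensor, lut_radiance, lut_reflectance):
        """Invert a (reflectance -> radiance) lookup table for one band.

        lut_radiance / lut_reflectance hold the absolute radiances
        predicted for a set of assumed surface reflectances (radiance
        increasing); np.interp linearly interpolates between the two
        nearest predicted radiances.
        """
        return np.interp(l_sensor, lut_radiance, lut_reflectance)

    # Hypothetical table for one band: assumed reflectances and the
    # radiances a radiative transfer code predicted for them.
    refl = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
    rad = np.array([12.0, 31.0, 50.0, 68.0, 87.0, 105.0])
    rho_g = radiance_to_reflectance(55.0, rad, refl)  # ~0.23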
5.4 Bidirectional Reflectance Distribution Function (BRDF) Correction
The BRDF of a surface describes the reflectance of a surface as a function of view angle for a given illumination source at a fixed angle. It is an intrinsic property of materials that should be taken into consideration when analyzing remotely-sensed images. The BRDF, f_r, in units of [sr^-1], is defined as

    f_r(θ_i, φ_i; θ_r, φ_r) = dL_r(θ_i, φ_i; θ_r, φ_r) / dE_i(θ_i, φ_i),    (5.1)

where θ_i and φ_i are the elevation and azimuth angles of the incident light, E_i [W/m^2] is the irradiance of the incident light, and θ_r and φ_r are the elevation and azimuth angles of the reflected radiance L_r [W/m^2/sr] [60]. In practice, the BRDF as defined above cannot be measured directly because all detector systems have a nonzero field-of-view (FOV), meaning that a measurement must be made over a cone defined by a small range of angles θ_r, φ_r. Therefore, the BRDF is assumed to vary slowly over the FOV and measurements are made using instruments with a small FOV.
The main steps for BRDF correction of an image are shown in Fig. 5.2.

Figure 5.2: Flow diagram of BRDF correction: the uncorrected image is classified, the material type of each class is identified, and a BRDF lookup by material type and image angle is applied to produce the BRDF-corrected image.

To do BRDF correction, the spectral profile at each pixel is needed to determine the identity of the material, such as soil, vegetation, water, concrete, etc. Then, standard tables of the BRDF response for a given material as a function of viewing and illumination angle may be used to adjust the pixel values to look as if the identified material had been viewed by the imaging system from nadir.
To identify the materials present, the pixels in the image must be assigned to different classes using an image classification algorithm. Image classification algorithms identify the pixel locations of various materials in an input image and assign a "class" or label to each material. Classification algorithms come in two varieties: supervised and unsupervised. In supervised classification, the classifier is trained with a labeled dataset and the error, or cost, is minimized for the training data. Then, the trained classifier may be used to label similar data outside of the training set. Unsupervised learning does not use labeled training data, but tries to group the input data into several clusters. The number of desired clusters is often the only input to this type of classifier. Unsupervised classification gives the location and the number of different materials in a scene, but does not normally perform the more difficult task of identifying the materials. Standard classification methods include k-means, minimum-distance-to-means, parallelepiped, maximum likelihood, and spectral angle mapping (SAM) [16,61]. Sophisticated supervised classification methods like GENIE [62-64], an image classification system based on genetic algorithms, are becoming increasingly popular.
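As a small illustration of the unsupervised case, the sketch below clusters pixel spectra with k-means; it assumes scikit-learn is available, and the cluster count is the only input, as noted above:

    import numpy as np
    from sklearn.cluster import KMeans

    def unsupervised_classes(image, n_clusters=5):
        """Cluster pixel spectra into classes with k-means.

        image: (h, w, n_bands) multispectral cube; returns an (h, w)
        map of class labels. Clustering locates and counts distinct
        materials but does not identify them; labels must still be
        mapped to material types before a BRDF table can be applied.
        """
        h, w, n_bands = image.shape
        pixels = image.reshape(-1, n_bands)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
        return labels.reshape(h, w)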
Standard BRDF responses for various materials are generated by measuring real materials in the field using specialized instrumentation. As an example, the Remote Sensing Group (RSG) of the Optical Science Center at The University of Arizona has developed an imaging radiometer to take field measurements of the relative angular reflectance of a material [65]. The system uses a four-band 2D CCD array and a fisheye lens. This arrangement allows retrieval of surface BRDF without having to rotate the system to collect data at a wide range of angles. Data sets of phase value versus scattering angle are collected. The scattering angle, Θ(x, y), is computed for each pixel location (x, y) in the BRDF camera image as

    Θ(x, y) = arccos[cos θ_i cos θ_v + sin θ_i sin θ_v cos(φ_v - φ_i)],    (5.2)

where θ_i is the incident solar zenith angle, θ_v is the viewed zenith angle with respect to the camera at pixel location (x, y), φ_v is the viewed azimuth angle with respect to the camera at pixel location (x, y), and φ_i is the angle of the solar principal plane [65,66]. The BRDF camera collects nadir-normalized radiance values sampled
at one degree phase angle increments [65]. Data are next fit to a modified Pinty-Verstraete equation to generate a table of phase value versus scatter angle,

    P(Θ) = a + b cos Θ + ···,    (5.3)

where P is the scattering phase function, Θ is the scattering angle, and a, b, and c are the coefficients found by applying a least-squares fit to the data [67]. These tables of phase value P(Θ) versus scatter angle Θ may then be used to do BRDF correction of the images for a given date. Points are found on the phase value curve at the nadir and off-nadir scatter angles Θ_n and Θ_o of the input low-resolution frames. Then, the ratio of the corresponding phase values, P_n and P_o, is used to adjust the reflectance in a given pixel to appear like a nadir reflectance:

    ρ_new = ρ_old · (P_n / P_o).    (5.4)
The BRDF curves must be known for each identified material in the scene. If materials are not known, as is often the case, then any BRDF correction is problematic to implement. One can make an educated guess based on a visual identification of features; a city road or parking lot is likely to be made of asphalt. However, if resolution enhancement is the task at hand, a user is unlikely to know the identity of the material(s) in a region of interest because there is not high enough resolution in a single image. If, after applying a resolution enhancement algorithm, a user can see the shape of the feature of interest, an educated guess may be made about the material type. This new knowledge would then allow a refinement to the image based on a BRDF correction.
5.5 Experiment: Atmospheric and BRDF Effects
In this section, the "Isotopes" MTI image is used to generate multi-angle datasets
with BRDF or atmospheric effects. The upwelled radiance, also called the path
radiance, is modeled to determine how it affects the resolution enhancement of
multi-angle imagery. In a separate simulation, the grass lawn of the baseball field in
the "Isotopes" image is altered to simulate the effect of BRDF with changing view
angle.
As discussed previously, a radiative transfer code is normally used to model the
path radiance. However, with some reasonable assumptions, a very simple model
of atmospheric path radiance may be formed. Assuming a sensor view angle of no
greater than 60° and a thin atmosphere, the optical depth as a function of the angle
may be considered to be directly proportional to the increase in path length ( [68],
p. 105):
S ( 9 ) oc sec 6.
(5.5)
An estimate of the path radiance at nadir, L p { 0 ) , may be found by measuring the
pixel value in an area of dark vegetation, such as the baseball field in the "Isotopes"
image. An off-nadir image at view angle 9 with atmospheric path radiance effects is
then simulated by adding a constant off-nadir path radiance value, Lp{d), to all pixels
in the original image prior to applying the projective transform and downsampling
steps:
Lp{6) = Lp(0) * (sec0- 1.0).
(5.6)
The "Isotopes" image with modeled path radiance is shown in Fig. 5.3, using look
angles of 0 = —50°, 0°, +50°. The off-nadir images are brighter due to the increased
upwelled radiance at the longer off-nadir path length. If the POCS algorithm is
applied to the uncorrected data set, the algorithm diverges, as shown in Fig. 5.4,
resulting in an unacceptable "speckled" HR estimate. Uncorrected three-look data
sets are formed for various off-nadir angles to determine at what angle the POCS
algorithm diverges if given data that has not had atmospheric correction applied.
The resulting ISNR curves are shown in Fig. 5.5. Compared to the ISNR values
for three-look imagery without modeling path radiance, shown in Fig. 4.49, the
ISNR values are approximately 1.1 dB lower. Imagery dX 0 > 40° begins to pose a
problem for POCS. After a few iterations, the inconsistent LR data causes ringing
in the HR estimate. It may be concluded that atmospheric correction is needed
when the LR frames are acquired at 0 > 40° to ensure convergence of the algorithm.
However, assuming the off-nadir path radiance is constant over the image, it is
easy to pre-correct the imagery by subtracting the bias due to atmospheric path
radiance even if the off-nadir angles are not known. Alternatively, the contrast of
the LR frames could be matched using histogram equalization. The HR estimate
with the atmospheric term subtracted for the 6 = —50°, 0°, -1-50° three-look imagery
is shown in Fig. 5.6. The final ISNR is 2.79 dB with the correction. For comparison,
without correcting the atmospheric path radiance, the ISNR is 2.07 dB. Thus, the
bias subtraction improves the HR estimate by 0.72 dB. The simple bias removal is
172
therefore an effective method to ensure that the LR frames are "consistent," as long
as one can assume that the atmospheric path radiance is constant over the entire
image, and if the enhanced HR image is going to be used for visual interpretation
rather than to derive secondary remote sensing products. The best correction when
physical units are needed is to model the atmosphere with a radiative transfer code,
as discussed in Sect. 5.3.
Figure 5.3: The original "Isotopes" MTI image (upper left) and the three-look images with modeled atmospheric path radiance (upper right at -50°, lower left at 0°, and lower right at +50°).
Figure 5.4: The HR estimate for θ = -50°, 0°, +50° without atmospheric correction. The different contrast in the LR frames prevents POCS from giving good results. The ISNR is 2.07 dB.
Figure 5.5: The ISNR versus epoch for three-look imagery with uncorrected atmospheric path radiance, using different off-nadir angles.
Figure 5.6: The HR estimate with the bias subtracted from the LR frames to compensate for off-nadir atmospheric path radiance. The pixels in the baseball field no longer look speckled; however, some artifacts remain due to the large off-nadir angles of ±50°. The ISNR is 2.79 dB here, an improvement of 0.72 dB over the uncorrected case.
Next, the effects of directional reflectance are examined. The "Isotopes" image contains a baseball field that is composed of grass turf. To simulate the effect of BRDF on the resolution enhancement, the pixels composing the field are scaled to simulate the change in radiance expected at off-nadir angles. The simulated BRDF-scaled images are input to the POCS algorithm and the ISNR is computed. The BRDF scaling used is taken from a study by Sandmeier et al. [69] that measured the reflectance of a grass lawn in the laboratory using a transportable field goniometer (FIGOS). The data were provided in a graph of bidirectional reflectance factor (BRF) versus view zenith angle; for this experiment, the values were measured from the graph and stored in a lookup table of BRF vs. angle, so the values are approximate. The BRF R is the ratio of the reflected energy from a material of interest to the energy that is reflected from an ideal Lambertian surface. A Lambertian surface is one that reflects energy equally in all directions, or

    L_reflected = ρ E_incident / π,    (5.7)

where L is radiance in Watts/m^2/sr, E is the incident irradiance from the sun, and ρ is the reflectance. The BRF is given by

    R = L_sample / L_lambertian.    (5.8)

To scale the grass baseball field pixels to model BRDF effects, the BRF at the view angle of an off-nadir image is found in the lookup table, then the BRF at nadir is found. The ratio of the BRFs provides the scale factor for the grass pixels:

    L_sample(θ_o) = [R(θ_o) / R(θ_n)] · L_sample(θ_n).    (5.9)
θ [°]     BRF      S
-30.0    0.062    0.886
-25.0    0.060    0.857
-20.0    0.062    0.886
-15.0    0.064    0.914
-10.0    0.066    0.943
-5.0     0.068    0.971
0.0      0.070    1.000
5.0      0.074    1.057
10.0     0.080    1.143
15.0     0.085    1.214
20.0     0.092    1.314
25.0     0.100    1.429
30.0     0.115    1.643

Table 5.1: The view angle and corresponding BRF and scaling factor S = R(θ_o)/R(θ_n) for grass turf at 550 nm.
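Applying Table 5.1 reduces to a table lookup and a ratio. The sketch below interpolates the tabulated BRF and returns the scale factor of Eq. (5.9) for an arbitrary view angle:

    import numpy as np

    # View angles [deg] and BRF for grass turf at 550 nm (Table 5.1).
    angles = np.arange(-30.0, 35.0, 5.0)
    brf = np.array([0.062, 0.060, 0.062, 0.064, 0.066, 0.068, 0.070,
                    0.074, 0.080, 0.085, 0.092, 0.100, 0.115])

    def grass_scale_factor(theta_deg):
        """Scale factor S = R(theta_o) / R(theta_n) from Eq. (5.9),
        linearly interpolating the tabulated BRF between angles."""
        r_off = np.interp(theta_deg, angles, brf)
        r_nadir = np.interp(0.0, angles, brf)
        return r_off / r_nadir

    # The grass pixels of the +20 degree look are brightened by ~1.31:
    print(grass_scale_factor(20.0))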
Figure 5.7 shows the HR estimate for the three-look case at θ = -20°, 0°, +20°. Ringing is visible along the edges of the baseball field, indicating inconsistent data due to the BRDF effect. The ISNR curves for the three-look imagery with uncorrected BRDF effects are shown in Fig. 5.8. For θ > 20°, the POCS algorithm starts to diverge. Note that BRDF results will vary depending on the type and amount of non-Lambertian material. However, these results indicate that if highly non-Lambertian materials are in the scene that is to be resolution-enhanced, then BRDF correction may be necessary.
Figure 5.7: HR estimate with BRDF effects for θ = -20°, 0°, +20°.
Figure 5.8: The ISNR versus epoch for three-look imagery with uncorrected BRDF effects, using different off-nadir angles (θ = ±5°, ±10°, ±15°, ±20°, ±25°, ±30°).
CHAPTER 6
CONCLUSIONS
This dissertation showed that multiframe resolution enhancement algorithms can quantitatively and qualitatively improve the spatial resolution of images from a remote-sensing imaging system by combining sequences of multi-look images. A simulation tool was developed that allowed the modeling of off-nadir imagery. The multi-look image simulation tool was based on a Fourier-optics-based sensor model and modeled the sensor viewing geometry with a projective transform, using the basic sensor platform parameters of the Multispectral Thermal Imager (MTI). The multi-look image simulation tool was used to generate two-look, three-look, and multi-look imagery of a standard image-processing test image and of an image acquired by the MTI. The Projection Onto Convex Sets (POCS) resolution enhancement algorithm was implemented with standard data consistency and amplitude-bound constraints and applied to the simulated multi-look imagery. The effects of aliasing in a sensor system were modeled to test the POCS algorithm under a range of aliasing conditions. Data from a multispectral microscope were used to demonstrate how POCS may be used to improve the resolution of multispectral imagery. Finally, atmospheric path radiance and directional reflectance effects were modeled to determine their effects on POCS resolution enhancement.
6.1 Major findings
If the sensor is configured to acquire imagery with symmetric view angles in at least a set of three, with one forward-pointing off-nadir image, one nadir image, and one backward-pointing image, the resolution is optimally enhanced. The ideal is to acquire as many images as possible, as close to nadir as possible, and within as short a time as possible. Practical constraints on the number of images that may be acquired during one overpass limit the resolution improvement that is possible for remotely-sensed imagery.

Images acquired too far off-nadir can lead to divergence of the POCS algorithm. For the simulated images here, using views farther than 48° off-nadir was counterproductive for this reason.

From the "Isotopes" simulations here, three symmetric LR frames can improve the imagery by 7.5 dB when restored on a grid twice the size of the LR frames. Seven images can provide an improvement in SNR of 9.5 dB, while six images give an 8.2 dB improvement. The "Cameraman" results are similar: three symmetric frames give an ISNR of 7.1 dB, six asymmetric frames give an ISNR of 8.6 dB, and seven asymmetric frames give an ISNR of 10 dB. Using more than seven asymmetric frames did not improve the ISNR results.
Atmospheric and BRDF effects also should be considered when combining multi-angle images. Atmospheric correction is needed when θ > 40°, and directional reflectance prevents convergence of POCS when θ > 20°. Thus, the LR frames should be acquired at small angles, or θ < 20°.
The aliasing present for a given sensor band also affects how much resolution increase can be expected. Aliased images benefit the most from resolution enhancement processing, since they provide imagery with different details. Non-aliased images are difficult to improve as they already are of high quality.
Although not studied here, the registration of the LR images is also critical for
good resolution improvement.
6.2 Directions for future research
A Gaussian estimated system PSF was used in the POCS algorithm implementation in this dissertation. The elliptical Gaussian system PSF approximation was chosen because it has been previously used for remote sensing resolution enhancement, it is easily computed for off-nadir imagery by scaling by the appropriate factor of cosine separately for the along-track and cross-track directions, and its affine mapping from the LR frame to the HR estimate grid is simple to compute, thus avoiding the time-consuming computation of a projective mapping for each LR pixel. The Gaussian width, or standard deviation, was chosen to account for both the optics and detector subsampling. The Gaussian function smoothly rolls off; for a detector-limited sensor, the shape of the Gaussian is not the best approximation for the system PSF. However, a function that can be mapped with an affine transform is important for keeping the processing time reasonable. For the strongly detector-limited case, it may be possible to use an indicator function (rectangle) and get acceptable results. Due to the sensitivity of the results to the choice of the Gaussian PSF width σ, this issue is clearly important for further research.
The sensor model in this dissertation only included the effects of the optics and
detectors. Refinements of the model would be valuable to study the effects of other
system components on the POCS resolution enhancement results, including the PSF
due to image motion, the electronics PSF, and sensor noise. The detectors here
were modeled simply as a 2D CCD array rather than line detectors with variable
integration times, as is the case for the MTI. The detector fill factor (spacing between
detectors) was neglected as well. In addition, the image geometry could be better
modeled. Sensor pointing in the cross-track direction was neglected; many sensors
can point "to the side," allowing acquisition of targets in the previous or next orbital
track of the satellite. Using these images would increase the total number of LR
frames available to the POCS algorithm, and thus improve the HR estimate. In
addition, the surface of the earth is not flat, as was modeled here, and terrain and
elevation were not considered.
The POCS algorithm is exceeded in popularity only by Bayesian image resolution enhancement methods. The Bayesian approach was not considered here because it did not appear to be appropriate for remotely-sensed imagery in previous research [37]. However, Bayesian methods do provide additional constraints that can prevent the ringing observed in enhanced imagery when the sensor is nearly Nyquist-sampled. Recent reviews [7] have indicated the possibility of combining POCS with the MAP Bayesian approach to form a hybrid resolution enhancement algorithm. Thus, the performance of hybrid image resolution enhancement methods for systems with and without aliasing would be a good avenue to explore for enhancement of remote sensing imagery as well as for other types of imagery.
Appendix

RADIOMETRIC TERMS

It is helpful to define some radiometric terms that may be used in the dissertation.

The radiant energy, Q [Joules], is the energy traveling in the form of electromagnetic waves.

The radiant flux, Φ [Watts = Joules/sec], is the rate at which radiant energy is transferred from a point on a surface to another surface.

The radiance, L [W/m^2/sr], in a given direction at a point on the surface being sensed, is the radiant flux leaving an element of the surface surrounding the point and propagated in directions defined by an elementary cone containing the given direction, divided by the product of the solid angle of the cone and the area of the orthogonal projection of the element of the surface on a plane perpendicular to the given direction [70]. It is given by

    L = d²Φ / (dΩ dA cos θ).

The reflectance, ρ, is the ratio of reflected radiant flux to incident radiant flux.
REFERENCES

[1] M. Cook, B. Peterson, G. Dial, L. Gibson, F. Gerlach, K. Hutchins, R. Kudola, and H. Bowen, "IKONOS technical performance assessment," in Proc. SPIE, vol. 4381, 2001, pp. 94-108.

[2] Space Imaging: IKONOS. [Online]. Available: http://www.spaceimaging.com/products/ikonos/index.htm

[3] T. Miers and R. Munro, "Ball global imaging system for earth remote sensing," in Proc. SPIE, vol. 4169, 2001, pp. 362-373.

[4] DigitalGlobe. [Online]. Available: http://www.digitalglobe.com/about/quickbird.html

[5] S. Borman and R. L. Stevenson, "Super-resolution from image sequences - a review," in IEEE Midwest Symposium on Circuits and Systems, Aug. 1998, pp. 374-378.

[6] S. Borman and R. Stevenson, Encyclopedia of Optical Engineering. Dekker, 2003, ch. Image Sequence Processing.

[7] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine, pp. 21-36, May 2003.

[8] J. J. Szymanski, W. Atkins, L. Balick, C. C. Borel, W. B. Clodius, W. Christensen, A. B. Davis, J. C. Echohawk, A. Galbraith, K. Hirsch, J. B. Krone, C. Little, P. Maclachlan, A. Morrison, K. Pollock, P. Pope, C. Novak, K. Ramsey, E. Riddle, C. Rhode, D. Roussel-Dupre, B. W. Smith, K. Smith, K. Starkovich, J. Theiler, and P. G. Weber, "MTI science, data products and ground data processing overview," Proc. SPIE, vol. 4381, pp. 195-203, 2001.

[9] P. G. Weber, B. C. Brock, A. J. Garrett, B. W. Smith, C. C. Borel, W. B. Clodius, S. C. Bender, R. R. Kay, and M. L. Decker, "Multispectral Thermal Imager mission overview," Proc. SPIE, vol. 3750, pp. 340-346, 1999.

[10] D. Diner, J. Beckert, T. Reilly, C. Bruegge, J. Conel, R. Kahn, J. Martonchik, T. Ackerman, R. Davies, S. Gerstl, H. Gordon, J.-P. Muller, R. Myneni, P. Sellers, B. Pinty, and M. Verstraete, "Multi-angle imaging spectroradiometer (MISR) instrument description and experiment overview," IEEE Transactions on Geoscience and Remote Sensing, vol. 36, no. 4, pp. 1072-87, July 1998.
[11] M. C. Edwards, D. Llewellyn-Jones, and H. Tait, "The Advanced Along Track Scanning Radiometer: validation and early data," IEEE International Geoscience and Remote Sensing Symposium, pp. 614-616, June 2002.

[12] J. Bermyn, "PROBA - PRoject for On-Board Autonomy," Air & Space Europe, vol. 2, no. 1, pp. 70-76, Jan./Feb. 2000.

[13] European Space Agency. Proba: Facts and figures. [Online]. Available: http://www.esa.int/export/esaMI/Proba_web_site/ESA6AKTHN6D_0.html

[14] R. Kay, S. Bender, T. Henson, D. Byrd, J. Rienstra, M. Decker, N. Rackley, R. Akau, P. Claassen, R. K., et al., "Multispectral Thermal Imager (MTI) payload overview," Proc. SPIE, vol. 3753, pp. 347-358, July 1999.

[15] J. A. Richards, Remote Sensing Digital Image Analysis. Springer-Verlag, 1993, p. 51.

[16] T. M. Lillesand and R. W. Kiefer, Remote Sensing and Image Interpretation. John Wiley & Sons, 2000.

[17] M. R. Banham and A. K. Katsaggelos, "Digital image restoration," IEEE Signal Processing Magazine, pp. 24-41, Mar. 1997.

[18] J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics. John Wiley & Sons, 1978.

[19] A. Galbraith, J. Theiler, and S. Bender, "Resampling methods for the MTI coregistration product," Proc. SPIE, vol. 5093, pp. 283-293, 2003.

[20] J. F. Moreno and J. Melia, "An optimum interpolation method applied to the resampling of NOAA AVHRR data," IEEE Trans. Geosci. Remote Sensing, vol. 32, pp. 131-151, 1994.

[21] J. R. Schott, Remote Sensing: The Image Chain Approach. Oxford University Press, 1997.
[22] G. A. Poe, "Optimum interpolation of imaging microwave radiometer data," IEEE Trans. Geosci. Remote Sensing, vol. 28, pp. 800-810, 1990.

[23] G. Backus and F. Gilbert, "Uniqueness in the inversion of inaccurate gross earth data," Philosophical Transactions of the Royal Society of London, vol. 266, pp. 123-192, 1970.

[24] J. Yen, "On nonuniform sampling of bandwidth-limited signals," IRE Trans. Circuit Theory, pp. 251-257, Dec. 1956.

[25] D. S. Chen and J. P. Allebach, "Analysis of error in reconstruction of two-dimensional signals from irregularly spaced samples," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, no. 2, pp. 173-180, Feb. 1987.

[26] T. Strohmer, "Irregular sampling, frames, and pseudo-inverse," Master's thesis, University of Vienna, 1991.

[27] H. Feichtinger, C. Cenker, and H. Steier, "Fast iterative and non-iterative reconstruction methods in irregular sampling," in Conf. ICASSP'91, Toronto, May 1991, pp. 1773-1776.

[28] H. Feichtinger and K. Grochenig, Wavelets: Mathematics and Applications. CRC Press, 1993, ch. Theory and Practice of Irregular Sampling, pp. 305-363.

[29] T. Strohmer, "Efficient methods for digital signal and image reconstruction from nonuniform samples," Ph.D. dissertation, University of Vienna, 1993.

[30] D. P. Roy, "The impact of misregistration upon composited wide field of view satellite data and implications for change detection," IEEE Trans. Geo. Rem. Sens., vol. 38, no. 4, pp. 2017-2032, July 2000.

[31] J. Cihlar, H. Ly, Z. Li, J. Chen, H. Pokrant, and F. Huang, "Multitemporal, multichannel AVHRR data sets for land biosphere studies - artifacts and corrections," Remote Sensing of Environment, vol. 60, no. 1, pp. 35-57, Apr. 1997.

[32] B. R. Hunt, "Super-resolution of images: Algorithms, principles, performance," International Journal of Imaging Systems and Technology, vol. 6, pp. 297-304, 1995.
[33] A. S. Fruchter and R. N. Hook, "Drizzle: A method for the linear reconstruction of undersampled images," Publ. Astron. Soc. Pac., vol. 114, no. 792, pp. 144-152, Feb. 2002.

[34] K. D. Sauer and J. P. Allebach, "Iterative reconstruction of band-limited images from non-uniformly spaced samples," IEEE Trans. Circuits Systems, vol. CAS-34, pp. 1497-1505, 1987.

[35] T. A. Scambos, G. Kvaran, and M. A. Fahnestock, "Improving AVHRR resolution through data cumulation for mapping polar ice sheets," Remote Sens. Environ., vol. 69, pp. 56-66, 1999.

[36] D. G. Baldwin, W. J. Emery, and P. B. Cheeseman, "Higher resolution earth surface features from repeat moderate resolution satellite imagery," IEEE Trans. Geosci. Remote Sensing, vol. 36, no. 1, pp. 244-255, Jan. 1998.

[37] P. Cheeseman, B. Kanefsky, R. Kraft, J. Stutz, and R. Hanson, "Super-resolved surface reconstruction from multiple images," in Maximum Entropy and Bayesian Methods, G. R. Heidbreder, Ed. Kluwer Academic Publishers, 1996, pp. 293-308.

[38] H. Stark and P. Oskoui, "High-resolution image recovery from image-plane arrays using convex projections," Journal of the Optical Society of America A, vol. 6, no. 11, pp. 1715-1726, Nov. 1989.

[39] P. L. Combettes, "Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections," IEEE Trans. Image Processing, vol. 6, no. 4, pp. 493-506, April 1997.

[40] D. C. Youla and H. Webb, "Image restoration by the method of convex projections: Part 1 - theory," IEEE Trans. Medical Imaging, vol. MI-1, no. 2, pp. 81-94, Oct. 1982.

[41] M. I. Sezan and H. Stark, "Image restoration by the method of convex projections: Part 2 - applications and numerical results," IEEE Trans. Medical Imaging, vol. MI-1, no. 2, pp. 95-101, Oct. 1982.

[42] A. Lent and H. Tuy, "An iterative method for the extrapolation of band-limited functions," Journal of Mathematical Analysis and Applications, vol. 83, pp. 554-565, 1981.
[43] D. C. Youla, "Generalized image restoration by the method of alternating orthogonal projections," IEEE Trans. Circuits and Systems, vol. CAS-25, no. 9, 1978.

[44] A. M. Tekalp, M. K. Ozkan, and M. I. Sezan, "High-resolution image reconstruction from lower-resolution image sequences and space-varying image restoration," IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 169-172, March 1992.

[45] M. K. Ozkan, A. M. Tekalp, and M. I. Sezan, "POCS-based restoration of space-varying blurred images," IEEE Trans. Im. Proc., vol. 3, no. 4, pp. 450-454, July 1994.

[46] Yeh and Stark, "Iterative and one-step reconstruction from nonuniform samples by convex projections," J. Optical Society of America A, vol. 7, no. 3, pp. 491-499, March 1990.

[47] D. Granrath and J. Lersch, "Fusion of images on affine sampling grids," J. Opt. Soc. Am. A, vol. 15, no. 4, pp. 791-801, April 1998.

[48] M. I. Sezan, "An overview of convex projections theory and its application to image recovery problems," Ultramicroscopy, vol. 40, pp. 55-67, 1992.

[49] J. R. G. Townshend, C. O. Justice, C. Gurney, and J. McManus, "The impact of misregistration on change detection," IEEE Trans. Geo. Rem. Sens., vol. 30, no. 5, pp. 1054-1060, Sept. 1992.

[50] X. Dai and S. Khorram, "The effects of image misregistration on the accuracy of remotely sensed change detection," IEEE Trans. Geo. Rem. Sens., vol. 36, no. 5, pp. 1566-1577, Sept. 1998.

[51] J. Theiler, A. Galbraith, P. Pope, K. Ramsey, and J. Szymanski, "Automated coregistration of MTI spectral bands," Proc. SPIE, vol. 4725, pp. 314-327, 2002.

[52] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing. Academic Press, 1997.

[53] J. Goodman, Introduction to Fourier Optics. McGraw-Hill, 1996.

[54] G. Wolberg, Digital Image Warping. IEEE Computer Society Press, 1990.
[55] G. C. Holst, Sampling, Aliasing, and Data Fidelity for Electronic Imaging Systems, Communications, and Data Acquisition. SPIE Optical Engineering Press, 1998.

[56] K. W. Oleson, S. Sarlin, J. Garrison, S. Smith, J. L. Privette, and W. J. Emery, "Unmixing multiple land-cover type reflectances from coarse spatial resolution satellite data," Remote Sensing of Environment, vol. 54, no. 2, pp. 98-112, November 1995.

[57] N. X. Nguyen, "Numerical algorithms for image superresolution," Ph.D. dissertation, Stanford University, 2000.

[58] R. Hardie, K. Barnard, J. Bognar, E. Armstrong, and E. Watson, "High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system," Optical Engineering, vol. 37, no. 1, pp. 247-260, Jan. 1998.

[59] A. K. Jain, Fundamentals of Digital Image Processing. Prentice Hall, 1989.

[60] F. Nicodemus, J. Richmond, and J. Hsia, "Geometrical considerations and nomenclature for reflectance," National Bureau of Standards, Tech. Rep., October 1977.

[61] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. John Wiley & Sons, Inc., 2001.

[62] N. Harvey, J. Theiler, S. Brumby, S. Perkins, J. Szymanski, J. Bloch, R. Porter, M. Galassi, and A. Young, "Comparison of GENIE and conventional supervised classifiers for multispectral image feature extraction," IEEE Trans. Geoscience and Remote Sensing, vol. 40, no. 2, pp. 393-404, Feb. 2002.

[63] S. Brumby, P. Pope, A. Galbraith, and J. Szymanski, "Evolving feature extraction algorithms for hyperspectral and fused imagery," in Fifth International Conference on Information Fusion, 8-11 July 2002, Annapolis, MD, USA, vol. 2, 2002, pp. 986-993.

[64] S. Perkins, J. Theiler, N. Harvey, R. Porter, J. Szymanski, and J. Bloch, "GENIE: a hybrid genetic algorithm for feature classification in multispectral images," in Proc. SPIE, vol. 4120, Aug. 2000, pp. 52-62.
[65] P. Nandy, K. Thome, and S. Biggar, "Characterization and field use of a CCD camera system for retrieval of bidirectional reflectance distribution function," Journal of Geophysical Research, vol. 106, no. D11, pp. 11957-11966, June 2001.

[66] B. Hapke, "Bidirectional reflectance spectroscopy, 1, Theory," J. Geophys. Res., vol. 86, pp. 3039-3060, 1981.

[67] B. Pinty, M. Verstraete, and R. Dickenson, "A physical model for predicting bidirectional reflectances over bare soil," Remote Sens. Environ., vol. 27, pp. 272-288, 1989.

[68] J. R. Schott, Remote Sensing: The Image Chain Approach. Oxford University Press, 1997.

[69] S. Sandmeier, C. Müller, B. Hosgood, and G. Andreoli, "Sensitivity analysis and quality assessment of laboratory BRDF data," Remote Sens. Environ., no. 64, pp. 176-191, 1998.

[70] P. Slater, Remote Sensing: Optics and Optical Systems. Addison-Wesley, 1980.