This manuscript has been reproduced from the microfilm master. UMI
films the text directly from the original or copy submitted. Thus, some
thesis and dissertation copies are in typewriter face, while others may
be from any type of computer printer.
The quality of this reproduction is dependent upon the quality of the
copy submitted. Broken or indistinct print, colored or poor quality
illustrations and photographs, print bleedthrough, substandard margins,
and improper alignment can adversely affect reproduction.
In the unlikely event that the author did not send UMI a complete
manuscript and there are missing pages, these will be noted. Also, if
unauthorized copyright material had to be removed, a note will indicate
the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by
sectioning the original, beginning at the upper left-hand corner and
continuing from left to right in equal sections with small overlaps. Each
original is also photographed in one exposure and is included in
reduced form at the back of the book.
Photographs included in the original manuscript have been reproduced
xerographically in this copy. Higher quality 6" x 9" black and white
photographic prints are available for any photographs or illustrations
appearing in this copy for an additional charge. Contact UMI directly
to order.
University Microfilms International
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
Order Number 9301685
New methods of non-linear restoration and reconstruction
Oh, Choonsuck, Ph.D.
The University of Arizona, 1992
300 N. Zeeb Rd.
Ann Arbor, MI 48106
Choonsuck Oh
A Dissertation Submitted to the Faculty of the
In Partial Fulfillment of the Requirements
For the Degree of
In the Graduate College
1992
As members of the Final Examination Committee, we certify that we have
read the dissertation prepared by Choonsuck Oh
and recommend that it be accepted as fulfilling the dissertation
requirement for the Degree of
Doctor of Philosophy
Dr. Robin N. Strickland
Final approval and acceptance of this dissertation is contingent upon
the candidate's submission of the final copy of the dissertation to the
Graduate College.
I hereby certify that I have read this dissertation prepared under my
direction and recommend that it be accepted as fulfilling the dissertation
requirement.
Dissertation Director
Dr. B. Roy Frieden
This dissertation has been submitted in partial fulfillment of requirements for
an advanced degree at The University of Arizona and is deposited in the University
Library to be made available to borrowers under rules of the library.
Brief quotations from this dissertation are allowable without special permission,
provided that accurate acknowledgment of source is made. Requests for permission
for extended quotation from or reproduction of this manuscript in whole or in
part may be granted by the head of the major department or the Dean of the
Graduate College when in his or her judgment the proposed use of the material is
in the interests of scholarship. In all other instances, however, permission must be
obtained from the author.
To my wife and daughters
Many individuals have influenced this work. However, in greatest part, the ideas
developed here were sparked by discussions with my dissertation advisor, Professor
B. Roy Frieden. I would like to express my gratitude for his invaluable guidance,
encouragement, and support. I would also like to thank my co-advisor Professor
Robert A. Schowengerdt for his suggestions and his generous permission to use the
computer facilities in Digital Image Analysis Laboratory (DIAL). I wish to thank
Professor Robin N. Strickland for his help in completing my education and my
degree requirements. I appreciate all their time and efforts.
I gratefully thank the present and previous heads, and the directors, at the Electronics and Telecommunications Research Institute (ETRI) for their financial support and encouragement.
Thanks to my friends Hsien-huang Wu and Dr. Raymond White for invaluable
discussions and computer code development.
My sincere appreciation goes to my dear wife, Youngsook. Without her assistance, patience, and understanding, I would not have been able to pursue my career.
My two daughters, Haejin and Haeyoung, have supported my efforts with forbearance and goodwill. Finally, I would like to express my appreciation to my parents
and my wife's parents for their emotional support.
This research was supported by the Strategic Defense Initiative through the Office of Naval Research, under grant N00014-90-J-4047, as part of the Unconventional Imaging Program. I thank F. Quelle and W. Miceli for their encouragement and support.
1.1. Invariance
1.2. Phase Retrieval
Conformal Mapping
Translation Invariance of Fourier Transform
Mellin Transform
Fast Mellin Transform
Fourier-Mellin Transform
2.7. Direct Mellin Transform
3.1. Introduction
3.2. Derivation
3.3. Scale-invariant Property
3.4. Rotation-invariant Property
3.5. Inversion
3.6. Optical Implementation
3.7. Applications
RECONSTRUCTION OF PHASE FROM MODULUS DATA
4.1. Background
4.2. Wiener Phase-Filter Approach: Derivation
4.3. Wiener Phase-Filter Approach: Examples
4.4. Discussion
Introduction
Image Formation Theory
Basic Speckle Interferometry
Shift-and-Add Method
Knox-Thompson Method
Bispectrum Method
Turbulent Image Reconstruction from a Superposition Model
5.7.1. Superposition Process
5.7.2. Theory
5.7.3. Experimental Results
5.8. Discussion
REFERENCES
3.1. Optical implementation of the integral logarithmic transform.
3.2. Formation of the logarithmic smoothing filter.
3.3. Flow of operations in the data smoothing experiment.
3.4. Input images with additive Gaussian noise. (a) 64 x 64 image; (b) 128 x 128 scaled version image with the same noise statistics as in (a).
3.5. Scale-invariant filtering study, using LT (secondary) image space. (a) The output by scale-invariant Wiener filtering of Fig. 3.4(a). (b) The output by Wiener filtering of Fig. 3.4(b) using the same filter as in (a).
3.6. Scale-variant filtering study, using primary (direct) image space. (a) The output by Wiener filtering of Fig. 3.4(a) using the filter derived at a different scale, that of Fig. 3.4(b). (b) The output by Wiener filtering of Fig. 3.4(b) using the filter derived at the same scale, that of Fig. 3.4(b).
3.7. Two input images to apply the integral logarithmic transform.
3.8. Outputs of the two-dimensional integral logarithmic transform. (a) Logarithmic transform of Fig. 3.7(a); (b) Logarithmic transform of Fig. 3.7(b).
4.1. Object training set.
4.2. The nine modulus filters. (The origin is the upper left corner, and the folding frequency is the center, of each frame.)
4.3. The nine phase filters. (The origin is the upper left corner, and the folding frequency is the center, of each frame.)
4.4. Nine-filter outputs (no noise).
4.5. Nine-filter outputs (10% noise).
4.6. Nine-filter outputs (20% noise).
4.7. Output, with 10% noise in filters.
4.8. Output, from data with 10% obscuration.
4.9. Output, from data with 50% obscuration.
5.1. Formation of point spread function s(x) from lens suffering phase errors Δ(β).
5.2. Short-term images formed (to be processed). (a) Ideal one-point object; (b) Ideal two-point object; (c) Turbulent image (psf) of (a); (d) Another image (psf) of (a); (e) Image of (b) via psf (c); (f) Image of (b) via psf (d).
5.3. Outputs of image modelling algorithm. (a) Output o(x, y) of algorithm based upon data F.T.{Fig. 5.2(c)}/F.T.{Fig. 5.2(d)}. (b) Output o(x, y) based upon data F.T.{Fig. 5.2(e)}/F.T.{Fig. 5.2(f)}. Use of first starting solution. (c) Output o(x, y) as in (b), using second starting solution. (d) Output o(x, y) as in (b), using third starting solution.
5.1. The three different starting points.
Integral logarithmic transforms are defined for both one-dimensional and two-dimensional input functions. These have the desirable properties of linearity and invariance to scale change of the input. The two-dimensional integral logarithmic transform is additionally invariant to rotation. The integral logarithmic transforms are conveniently inverted by simple differentiation. They are amenable to optical analog implementation using incoherent light and simple collimating lenses. As an application, the problem of noise suppression of an arbitrarily scaled image, by using Wiener filtering, is considered. Also, application to a problem of character recognition and matched filtering is proposed.
A new approach is given for the problem of reconstruction of phase from modulus data. A set of Wiener-filter functions is formed that multiply, in turn, displaced versions of the modulus data in frequency space such that the sum is a minimum L2-error-norm solution for the object. The modulus data are permitted to contain both noise and object signal components. The required statistics are power spectra of the signal and noise, and correlation between modulus data at given frequencies and complex object spectral values at adjacent frequencies. Reconstructions are formed in the presence of data noise, data gaps, and filter-construction noise, in varying amounts.
Finally, a new technique is proposed to reconstruct a turbulent image from a
superposition model. Imagery through random atmospheric turbulence is modeled
as a stochastic superposition process. By this model, each short-exposure point
spread function is a superposition of randomly weighted and displaced versions of
one intensity profile, e.g. an Airy disc. If we could somehow estimate the weights
and displacements for a given image, then by the superposition model we would
know the spread function, and consequently, could invert the imaging equation for
the object. In principle, this allows an object scene to be reconstructed from but
two short-exposure images, and without the need for a point reference source in
the field. By comparison, all other methods of reconstruction require 20 or more
images for decent quality in the output.
There has been growing interest in the exploitation of invariant pattern recognition. The goal is to describe computational techniques for recognizing patterns, invariant to the distortions they might have been subjected to. The distortions can
be geometric in nature, or they can be caused by changes in illumination and/or
motion. Specifically, obtaining object classification in the face of geometrical distortions [1] in the input object (due to translation, scale, and rotation) is a major
pattern recognition problem that has received extensive attention. Such problems arise in
a variety of situations such as inspection and packaging of manufactured parts [2],
classification of chromosomes [3], target identification [4, 5], and scene analysis [6].
The current approaches to invariant two-dimensional shape recognition include extraction of the global image information using regular moments [5], boundary-based
analysis via Fourier descriptors [7, 8, 9] or autoregressive models [10], image representation by circular harmonic expansion [11], syntactic approaches [3], and artificial neural networks [12, 13, 14, 15, 16]. In this dissertation, however, we will concentrate on realizing invariance via mathematical transforms.
Translation invariance can be given via the magnitude of the Fourier transform
of the image. If the translation between two object functions needs to be known, it can be found by calculating their cross-correlation. Scale-invariant pattern recognition is one of the basic requirements for a general-purpose image processing system. Scale
invariance can be realized either via a description of objects by image primitives and
their relation or via mathematical transforms. Recent interest concentrated on the
Mellin transform, which is identical to the Fourier transform of the function with
logarithmically distorted coordinates [17, 18]. The logarithmic distortion converts
scaling to translation; since the absolute value of the subsequent Fourier transform
is translation-invariant, the magnitude of the Mellin transform is scale-invariant.
That two logarithmically distorted functions differ by translation can be detected
alternatively via their cross correlation function. If the original images differ by a
scale factor only, the cross correlation has a prominent peak for some relative shift
and the scale factor can be obtained from the peak location.
This principle of logarithmic mapping seems to play a role in biological sensory
systems. In the simian visual system, as well as in the visual system of the cat, the
mapping of the central 20° - 30° of retinal space onto area 17 of the visual cortex
approximates a polar coordinate transformation together with a logarithmic distortion of the radial axis [19, 20]. In the mammalian auditory system there is also a
logarithmically scaled representation of frequencies along the basilar membrane. A
model of the auditory system, based on the Fourier-Mellin transform, has been proposed by Altes [21, 22]. Casasent and Psaltis [23] describe a transformation that is
invariant to rotation and scale changes in an input image. They present an optical
implementation of the transformation. The optical transformation combines the geometric polar transformation with the conventional optical Fourier transform. They
demonstrate the extension of the transformation to optical pattern recognition. The
transformation sequence can be realized by the following steps [23]:
1. The magnitudes |F₁(ω₁, ω₂)| and |F₂(ω₁, ω₂)| of the Fourier transforms of the two input objects are formed;
2. These functions in step (1) are converted to polar coordinates as F₁(r, θ) and F₂(r, θ);
3. These polar functions are then logarithmically scaled in r to yield F₁(e^ρ, θ) and F₂(e^ρ, θ), where ρ = ln r;
4. The magnitudes of the Fourier transforms of the functions in step (3) yield the translation-, rotation-, and scale-invariant functions |F_M1(ω_ρ, ω_θ)| and |F_M2(ω_ρ, ω_θ)|.
Despite the powerful properties of translation, rotation, and scale invariance,
the combined Fourier and Mellin transform, in its present form, is not suitable for
feature extraction because it obscures much of the discriminatory information contained in the original data. The combined Fourier and Mellin transform has three operations that result in attenuation of information.
1. Magnitude of Fourier transform: Translation information contained in the
phase of the Fourier transform is discarded.
2. Magnitude of Mellin transform: Scale information contained in the phase of
the Mellin transform is discarded.
3. Fourier transform followed by exponential sampling: Low frequencies are accentuated.
So as not to distort the data to the extent that valuable information is lost,
integral logarithmic transforms [24], in Chapter three, are defined for both one-dimensional and two-dimensional input functions. These have the desirable properties of linearity and invariance to scale change of the input. The two-dimensional
integral logarithmic transform is additionally invariant to rotation. The integral
logarithmic transforms are conveniently inverted by simple differentiation. Also,
they are amenable to optical analog implementation by using incoherent light and
simple collimating lenses. As an application, the problem of noise suppression of
an arbitrarily scaled image, by using Wiener filtering, is considered. Use of the
integral logarithmic transform of the image data as a preprocessing step permits
the creation of a single Wiener filter optimized for use at all scales of magnification.
Finally, application to a problem of character recognition and matched filtering is proposed.
Phase Retrieval
The reconstruction of a signal from the magnitude of its Fourier transform
(Fourier intensity), generally referred to as the phase-retrieval problem, arises in a
variety of different contexts and applications and such diverse fields as astronomy,
x-ray crystallography, electron microscopy, optics, wave-front sensing, and signal processing. One wishes to reconstruct f(x, y), an object function, from |F(ω₁, ω₂)|, the modulus of its Fourier transform

F(ω₁, ω₂) = FT{f(x, y)} = |F(ω₁, ω₂)| exp[jθ(ω₁, ω₂)],

where FT denotes the Fourier transform, and θ(ω₁, ω₂) indicates the phase part of the Fourier transform. Since the autocorrelation of the object can be computed from the Fourier modulus by FT⁻¹[|F(ω₁, ω₂)|²], this problem is equivalent to reconstructing an object from its autocorrelation.
One must have sufficiently strong a priori information about the object to make
the solution unique. Of course, one has the omnipresent ambiguities that f(x, y), exp(jθc)f(x − x₀, y − y₀), and exp(jθc)f*(−x − x₀, −y − y₀), where θc is a constant phase and f*(x, y) is the complex conjugate of f(x, y), all have the same Fourier modulus. If these omnipresent ambiguities (phase constant, translation, and conjugate image) are the only ones, then we consider the phase-retrieval problem to be unique.
To overcome the difficulties associated with the reconstruction of a signal from
its Fourier intensity, a number of different methods have been proposed for adding
additional information or constraints to the phase-retrieval problem. It has been
shown, for example, that if the boundary values of a finite support two-dimensional
signal are specified along with the Fourier intensity of the signal, then a simple
recursive algorithm [25] can often be used to reconstruct the signal. A number of
researchers have also considered the problem of reconstructing a signal from more
than one intensity function. Gerchberg and Saxton [26, 27], for example, presented
both noniterative and iterative algorithms that use two intensities - one in the
diffraction plane and one in the image plane. A similar problem was considered by
Misell [28, 29], who presented an iterative algorithm to recover phase information
from image intensities measured in two defocused planes. Gonsalves [30], on the
other hand, presented two closed-form solutions to the phase-retrieval problem for
one-dimensional signals, using differential intensity measurements by changing parameters such as the position of the focal plane and transmission of the aperture.
More recently, Nakajima [31, 32] has considered a linear method for phase retrieval from two intensity measurements that are obtained with and without an exponential
filter in the object plane.
In Chapter four a new approach is given for the problem of reconstruction of
phase from modulus data [33]. A set of Wiener-filter functions is formed that multiply, in turn, displaced versions of the modulus data in frequency space such that the sum is a minimum L2-error-norm solution for the object. The modulus data are permitted to contain both noise and object signal components. The required statistics are power spectra of the signal and noise, and correlation between modulus data at given frequencies and complex object spectral values at adjacent frequencies.
In a numerical simulation, a 3 x 3 filter array is used to reconstruct any member of
an object class consisting of 16 pictures of space shuttles in various combinations.
The 16 pictures are used as a learning set to form the required power spectra and
correlations mentioned above. Reconstructions are formed in the presence of data
noise, data gaps, and filter-construction noise, in varying amounts.
In Chapter five we formulate a new method for the restoration of atmospherically degraded images [34]. In this problem we must compensate for severe wave
aberrations caused by atmospheric turbulence. First, we review some methods of
obtaining diffraction-limited information through the turbulent atmosphere. These
are stellar interferometers and speckle interferometry. This is followed by some conventional approaches for handling astronomical speckle data. Finally, a new technique is proposed to reconstruct a turbulent image from a superposition model.
Imagery through random atmospheric turbulence is modeled as a stochastic superposition process. By this model, each short-exposure point spread function is a
superposition of randomly weighted and displaced versions of one intensity profile,
e.g. an Airy disc. If we could somehow estimate the weights and displacements for
a given image, then by the superposition model we would know the spread function, and consequently, could invert the imaging equation for the object, using any
conventional deconvolution approach such as inverse filtering [35]. In principle, this
allows an object scene to be reconstructed from but two short-exposure images,
and without the need for a point reference source in the field. By comparison, all
other methods of reconstruction require 20 or more images for decent quality in the
output. Some computer-simulated demonstrations of the approach are given.
Chapter six will conclude the dissertation by summarizing the results, showing
the main contributions, and suggesting further research related to our work.
We consider in this chapter conventional mathematical transformations to achieve
invariance to two-dimensional geometrical distortions such as scale, rotation, and
translation. The image is complex-log conformally mapped so that rotation and
scale changes become translation in the transform domain. The magnitude of the
Fourier transform is invariant to translation of the object in the image. A scale- and translation-invariant description of the image can be obtained via the absolute value of the Mellin transform of its Fourier amplitude spectrum. Casasent and
Psaltis [23] have shown that rotation invariance, as well as scale and translation
invariance, can be achieved with a rectangular to polar transformation followed by a Fourier-Mellin transform. Since the absolute value of the Fourier transform or Mellin transform, i.e., the amplitude spectrum, contains no information on the relative phases of the spectral components, valuable structural information is lost. This review illustrates the need for the integral logarithmic transform that is examined in detail in Chapter 3.
Conformal Mapping
One useful technique used to achieve invariance to rotation and scale changes
is the complex-logarithmic (CL) conformal mapping. Conformal mapping transforms an image from rectangular coordinates to polar exponential coordinates.
This transformation changes rotation and scale into translation. Specifically, assume that Cartesian plane points are given by (x, y) = (Re(z), Im(z)), where z = x + jy. Thus we can write z = r exp(jθ), where r = (x² + y²)^(1/2) and θ = arg(z) = arctan(y/x). Now the CL mapping is simply the conformal mapping of points z onto points w defined by

w = ln(z) = ln[r exp(jθ)] = ln r + jθ.

Therefore, points in the target domain are given by (ln r, θ) = (Re(w), Im(w)), and
logarithmically spaced concentric rings and radials of uniform angular spacing are
mapped into uniformly spaced straight lines. More generally, after CL mapping,
rotation and scaling about the origin in the Cartesian domain correspond to simple linear shifts in the θ (mod 2π) and ln r directions, respectively.
There are several problems associated with conformal mapping. First, since
interpolation is necessary at exponentially spaced sample points for the logarithmic
distortion of the image, the image reconstructed from the samples will not carry
all the information from the original. In particular, details close to the edge of the
original image will be smeared by sampling and reconstruction. A second problem is
sensitivity to center misalignment of the sampled image. Small shifts from the center cause dramatic distortions in the conformally mapped image. A third problem that occurs in the conformal mapping is related to its size-invariant aspect. A change in
scale does not appear as a direct translation in practice. When an image is scaled from smaller to larger, a translation occurs in the conformally mapped image, but the points left vacant by the translation are filled with more samples from the center of the image. If the object in the image has no hole in its center, the new samples which take the place of the translating points will in general be very similar to those translating points. This has the effect of stretching, not simple translation, in the conformally mapped image.
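The CL mapping itself is a one-liner; the following minimal Python sketch (the numerical values are our own arbitrary choices) shows how scaling by m and rotation by φ become pure translations of w:

```python
import numpy as np

def cl_map(z):
    """Complex-log conformal mapping w = ln z = ln r + j*theta."""
    return np.log(np.abs(z)) + 1j * np.angle(z)

# Scaling z by m and rotating it by phi shift w by ln m and phi:
z = 0.5 + 1.2j
m, phi = 3.0, np.pi / 5
w0 = cl_map(z)
w1 = cl_map(m * np.exp(1j * phi) * z)
# w1 - w0 = ln m + j*phi  (theta taken mod 2*pi)
```

The real part of w shifts by ln m and the imaginary part by φ (mod 2π), exactly as described above.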
Translation Invariance of Fourier Transform
The bread-and-butter approach for dealing with geometrical distortion is based
on the Fourier transform (FT). We consider a two-dimensional image characterized by f(x, y). The corresponding FT is given by F(ω₁, ω₂), where

F(ω₁, ω₂) = ∫∫ f(x, y) exp[−j(ω₁x + ω₂y)] dx dy.

The discrete version of the FT is given by the discrete Fourier transform (DFT), which takes a discrete spatial image into another discrete and periodic frequency representation. Formally, the DFT is given as

F(k, l) = (1/MN) Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} f(n, m) exp[−j2π(kn/N + lm/M)],

where 0 ≤ k ≤ N − 1 and 0 ≤ l ≤ M − 1.
It is easy to show that the magnitudes of both the FT and DFT are shift invariant. Assume that f(x, y) is shifted by (x₀, y₀). Then, the FT of f(x − x₀, y − y₀) is given by

∫∫ f(x − x₀, y − y₀) exp[−j(ω₁x + ω₂y)] dx dy.

Changing variables, a = x − x₀ and b = y − y₀, it follows that the FT of the shifted image is

exp[−j(ω₁x₀ + ω₂y₀)] ∫∫ f(a, b) exp[−j(ω₁a + ω₂b)] da db = exp[−j(ω₁x₀ + ω₂y₀)] F(ω₁, ω₂),

whose magnitude is |F(ω₁, ω₂)|. In a similar way one can show the same property regarding the DFT.
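The DFT version of this property is easy to check numerically; the toy example below (our own) shifts a random image circularly and compares DFT magnitudes:

```python
import numpy as np

# |DFT| is invariant to circular translation: shifting f multiplies each
# F(k, l) by a unit-magnitude linear phase, leaving the magnitude unchanged.
rng = np.random.default_rng(0)
f = rng.random((32, 32))
f_shifted = np.roll(np.roll(f, 5, axis=0), -3, axis=1)
assert np.allclose(np.abs(np.fft.fft2(f)), np.abs(np.fft.fft2(f_shifted)))
```

Note the invariance is exact only for circular shifts; for an object translated within a larger frame, it holds to the extent that nothing moves across the frame boundary.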
The magnitude component of the Fourier transform, which is invariant to translation, carries much of the contrast information of the image. The phase component
of the Fourier transform carries information about how things are placed in the
image. Translation of f(x, y) corresponds to the addition of a linear phase component. The conformal mapping transforms rotation and scale into translation, and the magnitude of the Fourier transform is invariant to those translations, so that it will not change significantly with rotation and scale of the object in the image.
The phase of the Fourier transform holds the spatial layout of the image. Oppenheim and Lim [36] examined the importance of the phase and showed that under
fairly loose conditions the entire image could be reconstructed to within a constant
multiple of the magnitude given only the phase. This implies that most of the
information allowing discrimination between real images lie in the phase. However,
Lane et al. [:37] showed that the intrinsic form of a finite positive image is uniquely
related to the magnitude of its Fourier transform, except under contrived conditions or trivial situations. This suggests that reasonable discrimination can still be
obtained using the magnitude of the Fourier transform of an image.
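The magnitude/phase split behind these observations is easy to exhibit; in the sketch below (our own toy images) the polar recombination reconstructs the image exactly, and swapping in the phase of a second image moves that image's spatial structure into the hybrid, in the spirit of the Oppenheim-Lim experiment:

```python
import numpy as np

# Decompose the DFT into magnitude and phase and recombine.
rng = np.random.default_rng(1)
img = rng.random((16, 16))
F = np.fft.fft2(img)
mag, phase = np.abs(F), np.angle(F)
recon = np.fft.ifft2(mag * np.exp(1j * phase)).real   # exact reconstruction

# Hybrid: magnitude of img, phase of a second image (a centered square).
img2 = np.zeros((16, 16))
img2[4:12, 4:12] = 1.0
hybrid = np.fft.ifft2(mag * np.exp(1j * np.angle(np.fft.fft2(img2)))).real
```

Here `recon` equals `img` to machine precision, while `hybrid` inherits the layout of `img2`, not of `img`.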
Mellin Transform
In this section, the Mellin transform is reviewed, and the scale-invariance property is shown. Given a function f(t), t ≥ 0, the continuous Mellin transform in one dimension is defined as [18]

M(s) = ∫₀^∞ f(t) t^(s−1) dt.
Introducing an exponential distortion of the independent variable, t = Te^x, the Mellin transform can be implemented by the Fourier transform:

M(s) = T^s ∫_{−∞}^{∞} f(Te^x) e^{xs} dx.     (2.7)

Letting s = −jω, and noting that the magnitude of T^{−jω} is unity, the magnitude of M(−jω) is the magnitude of the Fourier transform of the exponentially distorted function:

|M(−jω)| = |T^{−jω}| · |∫_{−∞}^{∞} f(Te^x) e^{−jωx} dx| = |∫_{−∞}^{∞} f(Te^x) e^{−jωx} dx|.
Combining the exponential distortion with the shift invariance property of the
magnitude of the Fourier transform, results in the magnitude of the Mellin transform
being scale invariant. For example, letting g(t) = f(mt) and applying (2.7) gives

G(s) = T^s ∫_{−∞}^{∞} f(mTe^x) e^{xs} dx = T^s ∫_{−∞}^{∞} f(Te^{x+ln m}) e^{xs} dx.

A change of variable, y = x + ln m, and evaluating gives

G(s) = m^{−s} T^s ∫_{−∞}^{∞} f(Te^y) e^{ys} dy = m^{−s} M(s).     (2.12)

Taking the magnitude of both sides of Eq. (2.12), with s = −jω so that |m^{−s}| = 1, gives

|G(s)| = |M(s)|.
The scale factor is reduced to a translation term by the exponential distortion. The
translation term is further transformed into a pure phase component by the Fourier
transform. The magnitude of the resultant Mellin transform, is thus invariant to
the scale factor.
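This invariance can be checked numerically. The sketch below (our own; the test function f(t) = t²e^{−t} is chosen to vanish at t = 0, so the nonzero-f(0) issue discussed next does not arise) evaluates M(−jω) via the substitution t = e^x:

```python
import numpy as np

# With t = e^x, M(-j*omega) = integral of f(e^x) * exp(-j*omega*x) dx.
f = lambda t: t ** 2 * np.exp(-t)
m, omega = 3.0, 2.5                       # arbitrary scale factor and frequency
x = np.linspace(-20.0, 10.0, 300001)
dx = x[1] - x[0]
kernel = np.exp(-1j * omega * x)
M_f = np.sum(f(np.exp(x)) * kernel) * dx      # Mellin transform of f(t)
M_g = np.sum(f(m * np.exp(x)) * kernel) * dx  # Mellin transform of g(t) = f(mt)
# The scale factor survives only as the unit-magnitude phase m^{j*omega},
# so |M_g| = |M_f|.
```

The two magnitudes agree to within the quadrature error, while the phases differ by ω ln m, exactly as Eq. (2.12) predicts.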
If f(0) is nonzero, then f(Te^x) will be nonzero at x = −∞. Although the infinite domain in x is unrealizable, the continuous Mellin transform can still be approximated as follows. Assume f(t) is constant for 0 ≤ t ≤ T; then

M(s) = ∫₀^T f(t) t^{s−1} dt + ∫_T^∞ f(t) t^{s−1} dt
     = f(0) T^s / s + T^s ∫₀^∞ f(Te^x) e^{xs} dx.     (2.14)

Eq. (2.14) will be exact, and scale invariant, if f(t) is constant in 0 ≤ t ≤ kT for the largest scale factor k of interest. Thus, for nonzero f(0) the continuous-time correction term f(0)T^s/s, evaluated at s = −jω, must be added to the imaginary part of the Fourier transform, at all desired frequencies.
Fast Mellin Transform
For sampled data, the discrete implementation of the continuous Mellin transform is referred to as the fast Mellin transform (FMT). This terminology reflects the fact that the Fourier transform operation is performed by an FFT. Suppose that f(t) is represented by N samples at

t = 0, T, 2T, ..., (N − 1)T.

Exponential sampling must both cover the domain of the f(t) samples and not exceed the sample spacing (assuming f(t) is sampled at the Nyquist rate). This is accomplished by selecting the uniform spacing in x to be

Δx = 1/N.

The required number M of samples in x has been derived by Casasent and Psaltis [38] to be

M = N ln N.
To complete the FMT the M exponentially sampled data points are then transformed using an FFT.
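A minimal Python sketch of this pipeline follows (the function name and the number of log-domain samples are our own choices; for brevity `n_log = 1024` is used rather than the M = N ln N ≈ 1419 points the rule above would give for N = 256, and linear interpolation performs the exponential resampling):

```python
import numpy as np

def fast_mellin_magnitude(f_samples, n_log=1024):
    """FMT sketch: resample f(t) at exponentially spaced points
    (linear interpolation), then take the FFT magnitude.
    f_samples holds f(t) on the unit-spaced grid t = 0, 1, ..., N-1."""
    N = len(f_samples)
    x = np.linspace(0.0, np.log(N - 1), n_log)          # uniform in x = ln t
    f_exp = np.interp(np.exp(x), np.arange(N), f_samples)
    return np.abs(np.fft.fft(f_exp))
```

As a rough check, a Gaussian bump and a half-width copy (g₂(t) = g₁(2t)) yield nearly identical magnitude spectra, the residual difference coming from the interpolation.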
Fourier-Mellin Transform
It has been suggested that the Mellin transform be applied to the magnitude of the Fourier transform data [39, 23, 40, 22]. This compound transform is called the
Fourier-Mellin (FM) transform. The Fourier transform property of shift invariance
eliminates the problem of translation while retaining the scaling problem for the Mellin transform. Moreover, the intermediate Fourier transform operation has the advantage of increasing the spectral detail by the artifact of zero filling the data sequence [41]. If the increase in detail is sufficiently fine, then the interpolation
required by the exponential sampling can be eliminated.
The Fourier-Mellin operation is divided into three processing steps: normalization, exponential sampling, and FFT. The normalization step is required because a scale change in f(t) translates into a scale change in both the transform variable and the transform value, i.e.,

F{f(mt)} = (1/m) F(ω/m).
The scaling on the transform variable will be counteracted by the Fourier-Mellin transform, but the scaling on the transform value must be removed either by normalizing f(t) by its area, or by normalizing the Fourier spectrum by its dc component.
Despite the powerful properties of both scale and translation invariance, the
Fourier-Mellin transform, in its present form, is not suitable for feature extraction
because it obscures much of the discriminatory information contained in the original
data. The Fourier-Mellin transform has three operations that result in attenuation
of information.
1. FFT Magnitude: Translation information contained in the phase of the FFT
is discarded.
2. FMT Magnitude: Scale information contained in the phase of the FMT is discarded.
3. FFT Followed by Exponential Sampling: Low frequencies are accentuated.
Direct Mellin Transform
An alternative to the Fourier-Mellin transform can be implemented via a direct
expansion of (2.7). The resulting implementation is referred to as the direct Mellin
Expanding (2.7) using an integration step size of T gives
F(s) =
](t)tS-ldt + l2T ](t)tS-ldt + ... + /,NT
Assuming f( t) is constant in any T interval then the subintegrals are readily evaluated,
Regrouping and defining
](0) = ]1, f(T) = ]2,"', ]((N - l)T) = ]N
and without loss of generality, letting T be unity, (2.21) is expressed as
sF(s) =
P(Jk - fk+l)
+ NS]N.
The input data is expressed in a more useful manner by defining the incremental
differences

Δ_k = f_k − f_{k+1}.

Assuming f_N is zero (the data record length can be adjusted to ensure this), the
DMT becomes

s F(s) = Σ_{k=1}^{N−1} k^s Δ_k,

and, using s = −jω,

−jω F(ω) = Σ_{k=1}^{N−1} (cos(ω ln k) − j sin(ω ln k)) Δ_k.   (2.26)
The magnitude of the DMT is

|ω F(ω)| = [ (Σ_k cos(ω ln k) Δ_k)² + (Σ_k sin(ω ln k) Δ_k)² ]^{1/2},

with the sums over k = 1, ..., N−1. The DMT is an exact implementation of the
Mellin transform when performed on sampled data. The FMT, on the other hand,
requires interpolation between data points.
The DMT operation is more clearly expressed in matrix notation. From (2.26),

[ −jω_1 G(ω_1) ]   [ φ_{1,1}  ···  φ_{1,N−1} ] [ Δ_1     ]
[ −jω_2 G(ω_2) ] = [ φ_{2,1}  ···  φ_{2,N−1} ] [ Δ_2     ]
[      ⋮       ]   [    ⋮             ⋮      ] [  ⋮      ]
[ −jω_M G(ω_M) ]   [ φ_{M,1}  ···  φ_{M,N−1} ] [ Δ_{N−1} ]   (2.28)

where

φ_{i,k} = cos(ω_i ln k) − j sin(ω_i ln k)

and ω_i, i = 1, 2, ..., M, are arbitrary spectral components.
The magnitude of the DMT can also be expressed in matrix notation. First,
define c_{ik} and s_{ik} to be the real and imaginary components of the DMT matrix
elements; and c_i and s_i to be column vectors corresponding to the real and imaginary
row components of the DMT matrix,

c_i^T = [ c_{i1}  c_{i2}  ···  c_{i,N−1} ],
s_i^T = [ s_{i1}  s_{i2}  ···  s_{i,N−1} ],
Δ = [ Δ_k ].

The DMT given in (2.28) becomes

−jω_i G(ω_i) = (c_i^T − j s_i^T) Δ,  i = 1, 2, ..., M.

The magnitude squared of component i is then

|ω_i G(ω_i)|² = (c_i^T Δ)² + (s_i^T Δ)² = Δ^T M_i Δ,  M_i = c_i c_i^T + s_i s_i^T.

The lm element of matrix M_i is given by

(M_i)_{lm} = c_{il} c_{im} + s_{il} s_{im} = cos(ω_i ln l) cos(ω_i ln m) + sin(ω_i ln l) sin(ω_i ln m) = cos[ω_i ln(l/m)].
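As a check on the regrouping algebra above, the DMT sum can be computed directly from unit-interval samples. The helper `dmt` and the short test record below are our own illustrative assumptions, not data from the dissertation.

```python
import numpy as np

def dmt(samples, omegas):
    # Direct Mellin transform of unit-interval samples f_1..f_N with f_N = 0:
    #   -j w F(w) = sum_{k=1}^{N-1} k^{-jw} (f_k - f_{k+1})
    f = np.asarray(samples, dtype=float)
    delta = f[:-1] - f[1:]                    # incremental differences Delta_k
    k = np.arange(1, len(f))                  # k = 1, ..., N-1
    # rows are phi_{i,k} = cos(w_i ln k) - j sin(w_i ln k)
    phi = np.exp(-1j * np.outer(omegas, np.log(k)))
    return phi @ delta                        # one value of -j w F(w) per w_i

# Verify the regrouping (Abel summation) against the interval-by-interval sum
# sum_k f_k (k^s - (k-1)^s), s = -j w, the k = 1 lower-endpoint term being zero.
f = np.array([4.0, 3.0, 1.5, 0.5, 0.0])
w = 0.7
s = -1j * w
kc = np.arange(1, len(f) + 1).astype(complex)
prev = np.concatenate(([0j], kc[:-1] ** s))
lhs = np.sum(f * (kc ** s - prev))
assert abs(lhs - dmt(f, np.array([w]))[0]) < 1e-12
```

No interpolation is needed: the sum is evaluated exactly at the sample points, which is the DMT's advantage over the FMT noted above.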
Transform theory [42, 43] has played a key role in optical signal processing for a
number of years. It continues to be a topic of interest in theoretical as well as applied
work in this field [18,39]. We have been seeking a new transform that is potentially
applicable to a wide class of processing problems. It is often desirable to preprocess,
or transform data into an output that is invariant to the scale of magnification of
the data. The scale of magnification is assumed to be unknown. In addition, the
transformation should be linear (for example, the complex modulus operation is
nonlinear) so as not to distort the data to the extent that valuable information,
e.g., phase information, is lost.
Thus, we seek a linear operator L upon given data f(x) (one-dimensional notation is used for simplicity) such that

L[f(x)] = L[f(mx)],   (3.1)

where m is an arbitrary, unknown magnification factor. The data function f(x) is also
considered arbitrary. An important limitation on the choice of operator L is as
follows.

Theorem 3.1 A simple 1:1 mapping of data f(x) by means of a distortion rule on
coordinate x cannot achieve Eq. (3.1).
Let each coordinate x be mapped to a new coordinate g(x), function g to be
found. Thus a solution of the form

L[f(x)] = f[g(x)]   (3.2)

is sought for some g(·). Then directly

L[f(mx)] = f[g(mx)].   (3.3)

In order for scale invariance (3.1) to be obeyed, g(·) must then obey

f[g(x)] = f[g(mx)].   (3.4)

But since f is arbitrary, the only solution g(·) to Eq. (3.4) is

g(mx) = g(x).   (3.5)

Since also magnification m is arbitrary, the only solution to Eq. (3.5) is

g(x) = const.,   (3.6)

i.e., all input points map into one point. This is an unacceptable solution, since if
it were used in Eq. (3.2), the resulting transformation would be noninvertible for
the general input f(x).
A second class of candidates for operator L consists of functionals of f(x), i.e.,
integral transforms. One candidate is the Mellin Transform (MT). It is known [44]
that the magnitude of the MT obeys the scale invariance. That is, if

MT[f(x)] = F_M(ω),   (3.7)

then

MT[f(mx)] = m^{−jω} F_M(ω),  j = √−1,   (3.8)

which has the same magnitude |F_M(ω)|. However, taking the magnitude of the MT causes
a loss of important phase information. This is undesirable for purposes of inverting
the MT back into direct (signal) space. In the absence of phase information, such
inversion problems are ill-posed and formidable to solve [25].
In this chapter an integral logarithmic transform (LT) is defined which has the
required properties of linearity and geometrical scale invariance. Also, the LT is
easily inverted for its input.
Other types of logarithmic operations have been 1:1 point mappings, as follows.
Schwartz [19] has provided evidence that the retinotopic mapping of the visual field
to the surface of the striate cortex may be characterized as a logarithmic conformal
mapping. Weiman and Chaikin [45] describe a logarithmic spiral grid for picture
digitization. Schenker et al. [46] have defined the logarithmic conformal mapping on
a polar exponential grid for image understanding applications which have included
image correlation and target boundary estimation.
Unfortunately, these mappings cannot satisfy our requirement (3.1) of invariance
to change of scale. For example, with logarithmic conformal mapping, a change of
scale transforms into a shifted version of the original image. Messner and Szu [47,48]
experimentally confirmed this effect using images of an airplane at various scales. A
scale-dependent shift is not detrimental to scale-invariant Wiener image smoothing.
But it is detrimental to character recognition by matched filtering, because of the
resulting registration problem.
As motivation for the form of the LT, consider the simple 1:1 logarithmic mapping of the linear function f(x) = x. Its differential in the presence of magnification
scale m is

d[log f(x)] = d[log mx] = dx/x = d[log x],  m = const.,   (3.9)

regardless of m. The derivative operation accomplishes invariance by deleting the
magnification amount m. Such logarithmic mapping has also been proposed as
a model for the brain/cortex function [49]. Property (3.9) suggests the following
transformation of a general function f(x).
Definition 3.1 Given a real, one-dimensional function f(x) with finite support
ε ≤ x ≤ x_0, its logarithmic transform LT[f(x)] is defined as

LT[f(x)] = F(y) = ∫_ε^{x_0} f(xy) dx/x,  ε/x_0 ≤ y ≤ 1.   (3.10)

This transform is linear in its input f(x). The finite support requirement avoids
a potential pole at x = 0, in particular. In practical problems, an image f(x) will
always have finite support. Then the lower support value ε can be attained by
simply translating the image away from the origin by an amount ε. The choice
ε = 1 pixel is usually most convenient.
Scale-invariant Property
One important property of the LT is shown next.
Theorem 3.2 The LT of a scaled version of f(x) is the same as the LT of f(x).

Let f̂(x) be a scaled version of f(x), f̂(x) = f(mx), m constant. Since f(x) = 0
for x > x_0 or for x < ε, necessarily f̂(x) = f(mx) = 0 for x > x_0/m or x < ε/m,
and therefore the LT F̂(y) of f̂(x) obeys

F̂(y) = ∫_{ε/m}^{x_0/m} f̂(xy) dx/x = ∫_{ε/m}^{x_0/m} f(mxy) dx/x,   (3.11)

for ε/x_0 ≤ y ≤ 1. The lower limit on y is still ε/x_0, since in general it is the ratio
of the lower support value to the upper; here, (ε/m)/(x_0/m). We make a change of
variable t = mx in Eq. (3.11), and obtain

F̂(y) = ∫_ε^{x_0} f(ty) dt/t = F(y),   (3.12)

as was to be shown.
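Theorem 3.2 is easy to confirm numerically. The quadrature helper `lt`, the quadratic test profile, and the tolerances below are our own illustrative choices, not the dissertation's code.

```python
import numpy as np

def lt(f, eps, x0, y, n=20000):
    # F(y) = integral over [eps, x0] of f(x*y)/x dx, by the trapezoid rule
    x = np.linspace(eps, x0, n)
    v = f(x * y) / x
    return np.sum(0.5 * (v[:-1] + v[1:]) * np.diff(x))

eps, x0, m = 1.0, 50.0, 2.0
f = lambda x: np.where((x >= eps) & (x <= x0), (x - eps) * (x0 - x), 0.0)
fs = lambda x: f(m * x)                  # scaled input, support [eps/m, x0/m]
for y in (0.1, 0.5, 1.0):
    a = lt(f, eps, x0, y)
    b = lt(fs, eps / m, x0 / m, y)       # same transform over the scaled support
    assert abs(a - b) < 1e-6 * abs(a)    # F-hat(y) = F(y)
```

The substitution t = mx in the proof maps the scaled integration grid exactly onto the original one, so the two quadratures agree to rounding error.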
More generally, it can be shown in the same way that a nonlinear, power-law
distortion of scale, t = m x^k, k, m real and m > 0, produces a log transform

F̂(y) = (1/|k|) F(y^k).   (3.13)

Hence, the transform is distorted 1:1 in the same way as is the input scale. The
power k = 1 checks with result (3.12) for the linear scale change case. Change
of scale t = m x^k also is the most general functional form for causing a mere 1:1
distortion of the original log transform. The utility of result (3.13) is that once
F(y) is known, the log transform for any power-law scale change is trivially known
as well.
It is interesting to consider an alternative definition of the LT:

L(y) = ∫_ε^{x_0} [f(x)]^y dx/x   (3.14)

for 0 ≤ y ≤ 1. This definition would also satisfy the property of scale invariance,
as is easily verified. However, because of the variable exponent y, transform L(y) is
not linear in f(x), violating our linearity requirement. It also appears difficult
to invert Eq. (3.14) for f(x).
Other mathematical properties of F(y) are described following Eq. (3.25) below.
Definition 3.2 Given a real, two-dimensional function f(x, w), its LT can be defined
analogously as

F(y, z) = ∫_ε^{x_0} ∫_ε^{w_0} f(xy, wz) dx dw/(xw),   (3.15)

ε/x_0 ≤ y ≤ 1,  ε/w_0 ≤ z ≤ 1.

Higher-dimensional LT's would have the analogous form. It is easy to show that
F(y, z) is likewise invariant to scale of magnification. The proof is an obvious
generalization of the proof of Theorem 3.2.
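The two-dimensional invariance can be checked the same way as the one-dimensional case. The sketch below is our own; it assumes a Gaussian test image whose tails are negligible inside the support window, standing in for a truly finite-support image.

```python
import numpy as np

def lt2(f, eps, x0, w0, y, z, n=300):
    # F(y,z) = double integral of f(x*y, w*z)/(x*w) over [eps,x0] x [eps,w0]
    x = np.linspace(eps, x0, n)
    w = np.linspace(eps, w0, n)
    X, W = np.meshgrid(x, w, indexing="ij")
    V = f(X * y, W * z) / (X * W)
    wt = np.ones(n)
    wt[[0, -1]] = 0.5                        # trapezoid weights
    return (V * np.outer(wt, wt)).sum() * (x[1] - x[0]) * (w[1] - w[0])

eps, x0, w0, m = 1.0, 40.0, 40.0, 2.0
f = lambda a, b: np.exp(-((a - 10.0) ** 2 + (b - 15.0) ** 2) / 20.0)
fs = lambda a, b: f(m * a, m * b)            # magnified image
v1 = lt2(f, eps, x0, w0, 0.5, 0.5)
v2 = lt2(fs, eps / m, x0 / m, w0 / m, 0.5, 0.5)
assert abs(v1 - v2) < 1e-6 * abs(v1)
```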
The LT was applied to the two 64 × 64 images in Fig. 3.7. As shown in Fig. 3.8,
the LT outputs are monotonically increasing. Comparing the two LT outputs, it is
found that their slopes of increase differ.
Rotation-invariant Property
It is interesting to consider the question of whether F(y, z) is invariant to rotation
of the image f(x, w). This cannot be satisfied, since rotation by the particular angle
θ = π/2 interchanges the roles of x and w. For this rotated image the LT would
produce an output F(z, y) ≠ F(y, z) generally.
However, there is an alternative two-dimensional version of the LT that is indeed
invariant to both magnification and rotation. Given the image f(x, w), first form
the polar coordinate image

f_p(r, θ) = f(r cos θ, r sin θ).   (3.16)

This is a 1:1 remapping of coordinates (x, w) of f into polar coordinates (r, θ) of
f_p. Define the two-dimensional integral LT of f_p as

F_p(y, z) = ∫_0^{2π} ∫_ρ^{r_0} f_p(ry, θz) dr dθ/(rθ),   (3.17)

ρ/r_0 ≤ y ≤ 1,  0 < z ≤ 1.

Polar image f_p is assumed to have a limited support region ρ ≤ r ≤ r_0 and
0 ≤ θ ≤ 2π. Transforms (3.15) and (3.17) are analogous in form, but with
coordinate product rθ replacing coordinate product xw in the denominator.
Consider a polar image

f̂_p(r, θ) = f_p(mr, θ − Δθ)   (3.18)

resulting from an arbitrary scale stretch m and rotation Δθ of the initial image f_p(r, θ).
Substituting image (3.18) into (3.17) results in a LT

F̂_p(y, z) = F_p(y, z)

after change of integration variables to t = mr and α = θ − Δθ. In summary, two-dimensional transform (3.17) is linear in its input f_p(r, θ), and invariant to both
change of scale and rotation of the input.
Here we derive expressions for the inverses of the one- and two-dimensional LT's.

Theorem 3.3 If F(y) is the one-dimensional LT of f(x), the inverse logarithmic
transform is given by

f(x) = (x/x_0) F′(x/x_0)   (3.20)

for ε ≤ x ≤ x_0, where the prime denotes derivative.
The logarithmic transform F(y) is defined by Eq. (3.10). If the derivative d/dy
is taken of both sides of Eq. (3.10), it becomes

F′(y) = ∫_ε^{x_0} f′(xy) dx.

Making a change of variable t = xy and integrating give

F′(y) = (1/y) ∫_{εy}^{x_0 y} f′(t) dt = (1/y)[f(x_0 y) − f(εy)].   (3.22)

By the finite support condition, f(εy) = 0, since εy ≤ ε. Hence only the first term
remains,

F′(y) = f(x_0 y)/y.

Furthermore, since ε/x_0 ≤ y ≤ 1, necessarily ε ≤ x_0 y ≤ x_0,
the required support of f(x). Hence we let x = x_0 y in Eq. (3.22). This allows
Eq. (3.22) to be solved directly for f(x) over its entire support region, yielding the
required result (3.20).
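The inversion formula can be checked numerically by tabulating the cumulative form of F on a fine grid and differentiating. The sine test profile and grid sizes below are our own illustrative choices.

```python
import numpy as np

eps, x0 = 1.0, 50.0
f = lambda x: np.where((x >= eps) & (x <= x0),
                       np.sin(np.pi * (x - eps) / (x0 - eps)), 0.0)

# cumulative form of the LT: F(y) = integral over [eps, x0*y] of f(t)/t dt,
# tabulated on one fine grid so that F'(y) can be taken numerically
t = np.linspace(eps, x0, 200001)
g = f(t) / t
F = np.concatenate(([0.0], np.cumsum(0.5 * (g[:-1] + g[1:]) * np.diff(t))))
y = t / x0                                 # upper limit x0*y = t  <=>  y = t/x0
recon = y * np.gradient(F, y)              # (x/x0) F'(x/x0) with x = x0*y
assert np.max(np.abs(recon[1:-1] - f(t)[1:-1])) < 1e-3
```

The recovered profile matches f(x) over the whole support, up to quadrature and finite-difference error at the grid resolution.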
By analogous steps, inverses to the two-dimensional LT's (3.15) and (3.17) may
be found. The inverse LT of Eq. (3.15) is

f(x, w) = (xw / x_0 w_0) ∂²F(y, z)/∂y∂z,  evaluated at y = x/x_0, z = w/w_0.   (3.23)

Also, the inverse LT of Eq. (3.17) is

f_p(r, θ) = (rθ / 2π r_0) ∂²F_p(y, z)/∂y∂z,  evaluated at y = r/r_0, z = θ/2π.   (3.24)

Once f_p(r, θ) is known, f(x, w) can be formed as well by reversing the sense of the
1:1 polar mapping defined by Eq. (3.16).
Optical Implementation
Since the LT operations [Eq. (3.10) or (3.15)] are linear in the inputs f(x),
f(x, w), it should be possible to implement them optically. We suggest one method
of implementation here. The one-dimensional transform is treated first.

It is convenient to change the integration variable to t = xy in Eq. (3.10). The
direct result is

F(y) = ∫_ε^{x_0 y} f(t) dt/t   (3.25)

for ε/x_0 ≤ y ≤ 1.
This shows that F(y) is a cumulative function, analogous to a cumulative probability
law in statistics, or to an edge response function in optics. Note in particular that
Xo must be finite for F to remain a function of y (and hence be a usable transform).
The LT requires inputs that have truly finite extension. Further regarding limits,
representation (3.25) shows that the limit ε → 0 can be taken. The proviso is
that f(t) approach zero with t as fast or faster than denominator t. For example,
f(t) = t or t² could be so used.
Figure 3.1 shows an optical arrangement that implements F(y) of Eq. (3.25) as
a spatial image. The pinhole source S radiates white light that is collimated by lens
A. The light is incoherent and passes, in turn, through the image energy transparency
profile f(t) in plane I, a transmittance mask profile 1/t in plane M, and the variable
diaphragm D opening ε ≤ t ≤ x_0 y, before being refocused by lens A′ about point y
in output plane O. The total light energy collected in O then obeys F(y) given by
Eq. (3.25). This is for one particular y value.
Parameter y is varied over its domain (ε/x_0, 1) by moving the upper jaw position
t = x_0 y of diaphragm D. Synchronous with this motion, output plane O moves
vertically (in scanning mode) so that energy F is focused spatially about a new
position y in O. If a piece of film is attached to plane O, it will produce F(y) as a
spatial image.
This implementation is one-dimensional. An implementation of the two-dimensional
transform (3.15) would use, analogously, an image f(t, s) of finite support area
(ε ≤ t ≤ x_0; ε ≤ s ≤ w_0), a mask 1/(ts), and a square diaphragm covering the
variable area (ε ≤ t ≤ x_0 y; ε ≤ s ≤ w_0 z).
Motion of the output plane O in the directions (t, s) synchronous with jaw positions
(t = x_0 y, s = w_0 z) will now produce a spatial, two-dimensional image F(y, z) in O.
The LT may be used as a linear preprocessing step to render data f(x) invariant to its scale of magnification. The resulting secondary data, if now optimally processed by any problem-specific algorithm (e.g., Wiener filtering for a
noise-suppression problem), will be optimized for use by this algorithm independent of scale in the original data. We discuss two such applications below: image
noise suppression and character recognition.
Suppose that an image f(x) suffers from additive noise n(x). It is desired to
estimate the object o( x) which gave rise to the image. This is the familiar problem
of image smoothing or noise suppression.
Let us regard o(x) and n(x) as stochastic processes o(x; λ) and n(x; λ′), with
stochastic parameters λ, λ′. Let o(x) and n(x) each obey arbitrary probability laws
p_O(o) and p_N(n) at x. Then f(x) is also stochastic:

f(x; λ, λ′) = o(x; λ) + n(x; λ′).   (3.28)
It is convenient to Fourier transform both sides of Eq. (3.28), producing

F(ω) = O(ω) + N(ω)   (3.29)

in terms of corresponding stochastic spectral quantities (parameters λ, λ′ suppressed).
It is well known that the minimum L²-norm filter solution Y(ω) to this stochastic
problem is the Wiener smoothing filter [35]

Y(ω) = Φ_O(ω) / (Φ_O(ω) + Φ_N(ω)),   (3.30)

where

Φ_O(ω) = ⟨|O(ω)|²⟩   (3.31)

is the average power spectrum of the object o(x), and

Φ_N(ω) = ⟨|N(ω)|²⟩   (3.32)

is the average power spectrum of the additive noise n(x).
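Filter (3.30) is straightforward to demonstrate numerically. The sketch below is our own illustration, with a hypothetical ensemble of smoothed random profiles standing in for the object class; the power spectra are formed as ensemble averages, as in (3.31) and (3.32).

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 256, 200
# hypothetical object ensemble: smoothed white noise (a stand-in object class)
objs = np.array([np.convolve(rng.normal(size=n), np.ones(9) / 9, 'same')
                 for _ in range(M)])
noise_sigma = 0.5
Phi_O = np.mean(np.abs(np.fft.fft(objs, axis=1)) ** 2, axis=0)  # <|O(w)|^2>
Phi_N = noise_sigma ** 2 * n * np.ones(n)      # white-noise power spectrum
Y = Phi_O / (Phi_O + Phi_N)                    # Wiener smoothing filter (3.30)

o = objs[0]                                    # one member of the object class
fimg = o + rng.normal(0.0, noise_sigma, n)     # primary data f = o + n
o_hat = np.real(np.fft.ifft(Y * np.fft.fft(fimg)))
assert np.mean((o_hat - o) ** 2) < np.mean((fimg - o) ** 2)
```

The filter passes frequencies where the object dominates the noise and suppresses the rest, lowering the mean-squared error relative to the raw image.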
Let us call the directly observable image f(x) primary data. We show next that
we cannot optimally filter the primary data f(x), which is at one scale of magnification, with the filter Y(ω) derived at another scale. (This is also demonstrated
below in Fig. 3.6.)
Regard filter Y(ω) of Eq. (3.30) as formed at initial scale of magnification m = 1.
The new filter Ŷ(ω) for a scaled object, ô(x) = o(mx), m > 0, will of course have
the same form as Eq. (3.30),

Ŷ(ω) = Φ̂_O(ω) / (Φ̂_O(ω) + Φ̂_N(ω)),   (3.33)

where a caret mark denotes that the quantity beneath it is in the scaled space.
There are two components Φ̂_O(ω) and Φ̂_N(ω) to the Wiener smoothing filter Ŷ(ω).
Since additive noise is independent of the object, it has the same statistics for any
scale of the object. Thus Φ̂_N(ω) is the same as Φ_N(ω). However, of course, the
second component Φ̂_O(ω) is dependent on object scale size. A change of scale in
the x axis by a scale factor m causes

Φ̂_O(ω) = m^{−2} Φ_O(ω/m).   (3.34)
Combining results, the Wiener smoothing filter Ŷ(ω) for a scaled object is then
given by

Ŷ(ω) = m^{−2} Φ_O(ω/m) / [m^{−2} Φ_O(ω/m) + Φ_N(ω)] ≠ Y(ω).   (3.35)

The last inequality is made by comparison with Eq. (3.30). Hence the filter that is
designed for the initial object scale would not generally work for a new object scale.
[An exception is the particular object class Φ_O(ω) = aω^{−2}, as substitution into
Eq. (3.35) verifies.]
However, if we use the LT of the image data f{x) as secondary image data, and
seek Wiener filtering of the secondary data, then we can obtain a scale-invariant
Wiener filter. This is shown next.
According to Theorem 3.2, the LT of a linearly scaled object, ô(x) = o(mx),
m > 0, is the same as the LT of the original object o(x),

LT[ô(x)] = LT[o(x)].   (3.36)

Hence their power spectra

Φ_{LT[o(x)]}(ω) ≡ ⟨|FT{LT[o(x)]}|²⟩   (3.37)

are also equal,

Φ_{LT[ô(x)]}(ω) = Φ_{LT[o(x)]}(ω).   (3.38)

The power spectrum of LT[n(x)] must also remain invariant to scale change of the
object, since the noise is additive [see the argument below Eq. (3.33)]. The Wiener filter
Y_LT(ω) for the secondary image data is then

Y_LT(ω) = Φ_{LT[o(x)]}(ω) / (Φ_{LT[o(x)]}(ω) + Φ_{LT[n(x)]}(ω)),   (3.39)

which is explicitly independent of the object change of scale parameter m.
which is explicitly independent of the object change of scale parameter m. Filter
(3.39) will optimally process any linearly scaled version of an object in the object
class (3.31).
We can now use the above results to demonstrate the smoothing of images with
additive noise. The smoothing process was applied to real images with added
computer-generated noise. The scale-invariant filter (3.39) was constructed as in
Fig. 3.2. The particular two-dimensional LT operation (3.15) was used, since invariance to magnification (but not rotation) is the issue here. The actual object
spectrum was used in formation of the power spectrum. Many representative noise
profiles n_i(x), i = 1, 2, ..., N, were averaged to form the noise power spectrum, as
in Fig. 3.2.
The data processing procedure is shown in Fig. 3.3. As indicated, it is the
secondary data LT[f(x)] that are actually filtered. The filtering procedure was
applied to the LT's of both the 64 × 64 image and the 128 × 128 image in Fig. 3.4.
The image in Fig. 3.4(b) is a scaled 2:1 version of the image in Fig. 3.4(a). The images
suffer from independent sets of noise values. The filter was constructed by using the
smaller-scale object of Fig. 3.4(a). The LT-filter outputs are shown, respectively,
in Figs. 3.5(a) and 3.5(b). They both show good noise suppression properties, as
usual at the expense of resolution. Hence, the same filter works on either image scale.
For comparison, we also repeated the study using primary image data. As we
saw in Eq. (3.35), results should now not be good when the filter derived at one
scale is applied to data at another scale. Images 3.4(a) and 3.4(b) were used as
primary image data at scales m = 2 and m = 1, respectively. The filter Y(w) was
constructed at the scale of Fig. 3.4(b). The output that is due to filtering the image
in Fig. 3.4(b) is shown in Fig. 3.6(b). The output that is due to using Y(w) upon
the image data in Fig. 3.4(a), i.e., at a different scale, gives the output image in
Fig. 3.6(a). As expected, this output is poor compared with that in Fig. 3.6(b).
It is important to note that the LT preprocessing step is not unique in permitting
Wiener filter outputs that are invariant to scale of magnification. In the Wiener
application, the requirement (3.1) can be replaced by the looser requirement

L[f(mx)] = L[f(x + a)]   (3.40)
for any a. The same Wiener filter operates optimally on shifted or unshifted data.
Shifted data simply produce a shifted output.
For example, a polar-log coordinate 1:1 mapping is a preprocessing step L that
obeys Eq. (3.40) and that allows for scale- (and orientation-) invariant Wiener
filtering (Messner and Szu [47]). This preprocessing would probably be easier to
implement digitally than LT filtering since it is a simple point-to-point mapping.
On the other hand, since polar-log coordinate mapping is a non-linear operation, the
linear LT outputs (3.15) might be easier to analyze for noise-propagation and signal-to-noise effects. Regarding analog implementation, the LT might have an advantage
because of the simplicity of the proposed optical implementation in Fig. 3.1. Further
comparisons of the two approaches would require detailed studies of such factors as
operational complexity, speed, cost, convenience, and accuracy.
Another potential application of the LT is to character recognition. Consider the
basic problem of recognizing individual letters of the alphabet as they are presented,
in turn, to a processing system. Assume that the letters are detected in perfect
registration, but can be at any scale of magnification. One way of recognizing the
letters is to use matched filtering [50]. The working principle of this approach is
that the maximum value of the cross-correlation of two images is largest when the
two images are the same. However, given a continuously variable magnification for
the letter images, it is impossible to have on hand the exactly matching template
image that will optimally identify each. Hence we seek a transformation of the
detected images that is invariant to magnification.
One possibility is conformal logarithmic mapping. However, this approach suffers from magnification-dependent translation of the input. Hence, letter
images that are initially in registration will be transformed into randomly misregistered images. This would complicate the matched-filtering steps to follow.
One way around this registration problem would be to work with the modulus
of the Fourier transform of each log-mapped image [50]. The modulus is, of course,
invariant to lateral shift. Other methods [51, 52, 53, 54, 55, 56] have been proposed
as well.
It is interesting to consider applications of the integral logarithmic transform
concept to this problem. Two such approaches are described next.
One is to use the two-dimensional LT [Eq. (3.15)] of each primary letter image
as secondary image data to be processed. Secondary templates are also formed as
two-dimensional LT's of the 26 possible letter images. The intensities and registration states of the secondary images and templates are now independent of the scale
of magnification. Hence the matchups between images and templates can be (theoretically) perfect and the identifications made independently of scale. No other
filtering operations (aside from the matched filtering) would be necessary.
This approach lends itself to fairly direct electro-optical implementation. Let a
test character be the input f(t) in the optical processor of Fig. 3.1, with jaw setting
y fixed. The output F_test(y) is sensed by a fixed detector located in plane O on
the OA. A second processor (Fig. 3.1) uses a template character as its input f(t),
simultaneously with formation of F_test(y). Call the second output F_temp(y). The
product F_test(y) F_temp(y) is formed electronically from the two detector outputs.
The diaphragm D jaws open synchronously in the two processors so that they share
common openings y at any time. The detectors remain fixed in O on the OA. The
detector output products are electronically added, forming

Σ_y F_test(y) F_temp(y).

This is the correlation, at zero lag, between test character and template. It is
a maximum when F_temp(y) is proportional to F_test(y), as is usual with matched
filtering. However, the matchup additionally is independent of the scale of magnification of the test characters and the template characters. An additional virtue of
the approach is that the LT's are made on the fly and do not have to be formed as
two-dimensional hard-copy images, e.g., transparencies, prior to their use.
A final suggested approach to the magnification-registration problem is to use
LT's [Eq. (3.16) and Eq. (3.17)] of each primary letter image and template. The
secondary images are now invariant to magnification and rotation, without suffering
random misregistration. Optical implementation of this approach might not be as
convenient as the previous one, however, because of the necessity for producing
the polar image [Eq. (3.16)]. Computer-generated holograms [52, 54] have been
used for the analogous problem of implementing the log-polar transform. This is
cumbersome, however, compared with the on-the-fly implementation that is possible
by use of the two-dimensional LT [Eq. (3.15)].
Figure 3.1: Optical implementation of the integral logarithmic transform.
Figure 3.2: Formation of the logarithmic smoothing filter.
Figure 3.3: Flow of operations in the data smoothing experiment.
Figure 3.4: Input images with additive Gaussian noise. (a) 64 × 64 image, (b)
128 × 128 scaled version with the same noise statistics as in (a).
Figure 3.5: Scale invariant filtering study, using LT (secondary) image space. (a)
The output by scale-invariant, Wiener filtering of Fig. 3.4 (a). (b) The output by
Wiener filtering of Fig. 3.4 (b) using the same filter as in (a).
Figure 3.6: Scale variant filtering study, using primary (direct) image space. (a)
The output by Wiener filtering of Fig. 3.4 (a) using the filter derived at a different
scale, that of Fig. 3.4 (b). (b) The output by Wiener filtering of Fig. 3.4 (b) using
the filter derived at the same scale, that of Fig. 3.4 (b).
Figure 3.7: Two input images to apply the integral logarithmic transform.
Figure 3.8: Outputs of the two-dimensional integral logarithmic transform. (a)
Logarithmic transform of Fig. 3.7(a), (b) Logarithmic transform of Fig. 3.7 (b).
All detectors react only to the energy conveyed by light; in other words, they
measure the intensity of the light, and not the complex amplitude, so that phase
information is lost. This can be a big problem in those coherent optical systems
where part of the information is represented by the phase. As an example, consider
the case of obtaining the Fourier transform of an image by use of a lens. If we try
to detect the Fourier transform in the back focal plane of the lens, we will only get
the power spectrum of the image, instead of the spectrum itself.
Estimating the phase part of a generally complex spectrum from knowledge of its
modulus is a classical problem in signal processing. This problem is widespread,
occurring in (a) application to atmospheric turbulence speckle reduction, for which
the modulus is known as either the visibility function from optical interferometric
data [57] or as the power spectrum from Labeyrie speckle data [58]; (b) in optical
or electron microscopy [59], for which the modulus data is the image intensity and
from this the surface thickness profile of the specimen is to be determined; and (c) in
reconstruction of quantum-dot semiconductor clusters [60] using the modulus-squared
structure factor |F(ω)|² as data.
Methods of reconstructing phase from image data generally break into two
classes: iterative or non-iterative. The phase-from-modulus problem, in particular,
seems to be most amenable to iterative approaches. Among these are the input-output method of Fienup [25], the exponential filter method of Walker [61], and
the Newton-Raphson approach of Currie and Frieden [62]. Unfortunately, these
approaches sometimes suffer from either slow convergence or nonconvergence. A
second class of phase reconstruction problems arises when many stochastic images
of one object are given, as in astronomy. Multiple complex images convey a good
deal more object information than does a single modulus image. Perhaps not surprisingly, then, the many-images problem has been solved by using noniterative
approaches. Among these are the Knox-Thompson approach [63] and the triple-correlation approach of Lohmann et al. [64]. However, it is currently uncertain
how sensitive the Knox-Thompson approach is to the statistics of the turbulence [58], while the triple-correlation method requires for its use the computation
and storage of a four-dimensional function, the average bispectrum. For a modest-sized image of size 256 × 256, for example, this requires the storage of 10^8 data values.
We report here on a new approach to phase retrieval from modulus data, called
the multiple Wiener phase-filter approach. This approach reconstructs the phase
by a simple filtering of the given modulus data. It is noniterative, and is described next.
Wiener Phase-Filter Approaches: Derivation
For simplicity of notation, the derivation is in one dimension. Two-dimensional
results, in application to pictures, are strictly analogous to these and are explicitly
stated where needed.
The spectrum O(ω), ω a spatial frequency, of an object o(x), x a pixel coordinate,
is defined as

O(ω) = (2π)^{−1/2} ∫_{−∞}^{∞} o(x) exp(−jωx) dx,   (4.1)

where j = √−1. Both object o(x) and spectrum O(ω) are, in general, complex
functions. However, in the applications below, we use real-only objects o(x) arising
from incoherent radiation.
The problem that we are attacking is the reconstruction of O(ω) from imperfect
modulus data

D(ω) = |O(ω) + N(ω)|,  ω ∈ Ω,   (4.2)

where Ω is the passband and N(ω) is the possible random noise. We take a Wiener-filter approach [65], because of its noise-suppression capabilities and some well-known practical advantages of implementation. Among these are (a) non-iterative
estimation, (b) modest computer memory requirements, and (c) possible optical
analog processing.
The Wiener approach is inherently statistical. It models reality, defined by the
data D, as a random sample from stochastic processes [35], here 0 and N. The
unknown complex object 0 that is to be reconstructed is assumed to be a member
of a statistical class, or ensemble, of objects. Likewise, the actual noise N that
contributes to data D is assumed to be a member of a class of noise profiles. All
second-order statistics of the object and noise processes are assumed to be known,
either on the basis of a theoretical model or by direct averaging over a typical set
of object and noise profiles. We take the latter point of view in the applications,
although noise profiles, in particular, are not averaged over.
Ordinarily, Wiener filtering uses a single filter Y(ω). For reasons that will become
clear, we seek a finite set of filter functions instead,

Y_n(ω),  n = 1, ..., N,   (4.3)

such that

Ô(ω) = Σ_{n=1}^{N} Y_n(ω) D(ω + mΔω),  m = n − 1 − (N − 1)/2,   (4.4)

is a good reconstruction of O(ω). Frequency increment Δω is at the user's disposal.
We often use N = 3, corresponding to a 3 × 3 array of filters Y_mn in two
dimensions. Then Eq. (4.4) becomes a representation

Ô(ω) = Y_1(ω) D(ω − Δω) + Y_2(ω) D(ω) + Y_3(ω) D(ω + Δω).   (4.5)
A rationale for the use of the multiple-filter representation (4.4) now becomes apparent:
the value of estimate Ô at ω is made to depend upon the data at not only ω, but
also at adjacent frequencies. This takes advantage of possible correlation between
the data and the signal at neighboring values in frequency space; these can only
aid in the reconstruction. This means that frequency increment Δω should not
be made so large as to lose such correlation. In practice Δω = 1 pixel width in
frequency space, the minimum possible Δω, worked best.
The sense of goodness of fit of representation (4.4) to O(ω) is taken to be in the
minimum L²-norm error sense,

e = ∫_{−Ω}^{Ω} dω ⟨|O(ω) − Ô(ω)|²⟩ = minimum.   (4.6)

The indicated average is an ensemble average over both stochastic processes O, N.
In applications below, the ensemble average is carried through as a sample average
over a discrete learning set of objects. This particular ensemble consists of 16 discrete, typical objects of one class, the class of space shuttles. The statistical nature
of this object class originates in the random three-dimensional position, orientation, and magnification of each shuttle image. (However, the latter information is
assumed to be unavailable to the user. Otherwise, the reconstruction problem could
be parameterized, completely changing its character.) An example of the averaging
process is discussed next.
Typical among the average-squared terms in Eq. (4.6) is the term

Y_n^*(ω) ⟨|O(ω + Δω) + N(ω + Δω)| O(ω)⟩.   (4.7)

Denote the learning set of object and noise profiles as O_m(ω), N_m(ω), m = 1, ..., M.
Then the ensemble average in quantity (4.7) is evaluated, at each ω, from these
profiles as

(1/M) Σ_{m=1}^{M} |O_m(ω + Δω) + N_m(ω + Δω)| O_m(ω).   (4.8)
We use the case M = 16 in applications below.
The filters (4.3) are solved for by substituting Eqs. (4.2) and (4.4) into Eq. (4.6),
explicitly squaring out the integrand, taking the expectation of each term, and then
using the Euler-Lagrange solution

∂e/∂Y_n^* = 0,  n = 1, ..., N.   (4.9)

Whereas Eq. (4.6) is quadratic in the filters Y_n(ω), Eqs. (4.9) are linear in them. The
result is an N × N set of linear equations that may be inverted. In practice, for the
two-dimensional problem with N = 3, we have 9 linear equations in 9 unknowns
Y_mn, which are quickly inverted at each ω by a suitable linear inversion technique.
It is found empirically (see below) that the resulting filters Ymn yield good results
when used in reconstruction formula (4.4). On the basis of prior information, one
can see why this is so. Quantity (4.7) is the known correlation between the modulus
data at one frequency and the desired complex signal 0 at another frequency. The
inclusion of many terms of the type of expression (4.7) in Eq. (4.6) represents
the building of a great deal of prior information; in particular how the complex
object, and noise, relate statistically to the data. The more filters Yn one uses in
reconstruction formula (4.4), the more information terms (4.7) exist, and hence the
better are the reconstructions.
Because of its insights, we show results for a one-dimensional case $N = 1$. Now there is only one filter $Y(\omega)$ to find. [Note: Empirical use of a single filter did not yield good reconstructions. This was, in fact, the immediate motivation for the multiple-filter approach (4.4).] Relation (4.6) becomes
$$\int d\omega\, \big\langle \big|\, Y\,|O + N| - O \,\big|^2 \big\rangle = \int d\omega \left[\, YY^* \langle |O + N|^2 \rangle - Y \langle |O + N|\, O^* \rangle - Y^* \langle |O + N|\, O \rangle + \langle |O|^2 \rangle \,\right] \equiv \int d\omega\, E.$$
Then
$$\frac{\partial E}{\partial Y^*} = Y \langle |O + N|^2 \rangle - \langle |O + N|\, O \rangle = 0$$
according to Eq. (4.9). The result is a filter
$$Y(\omega) = \frac{\big\langle |O(\omega) + N(\omega)|\, O(\omega) \big\rangle}{\big\langle |O(\omega) + N(\omega)|^2 \big\rangle}.$$
Notice that this allows for the presence of noise in the data and uses knowledge of
the noise statistics to minimize the reconstruction error. This is one of the strengths
of the approach.
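A compact sketch of this single-filter estimate (Python/NumPy, with a synthetic stand-in learning set; the arrays and noise level here are illustrative, not the dissertation's data), replacing the ensemble averages of Eq. (4.12) by sample averages:

```python
import numpy as np

rng = np.random.default_rng(1)
M, W = 16, 64                                   # learning-set size, grid points

# Stand-in learning set of object spectra and additive spectral noise
O = rng.normal(size=(M, W)) + 1j * rng.normal(size=(M, W))
N = 0.1 * (rng.normal(size=(M, W)) + 1j * rng.normal(size=(M, W)))

D = np.abs(O + N)                               # modulus data, one per sample

# Eq. (4.12) with ensemble averages replaced by sample averages:
#   Y(w) = <|O + N| O> / <|O + N|^2>
Y = (D * O).mean(axis=0) / (D**2).mean(axis=0)

O_hat = Y * D[0]        # reconstruct sample 0's complex spectrum from modulus
```

Note that the noise statistics enter only through the averages over the learning set, which is what allows the filter to subdue noise without any per-image iteration.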
In the absence of noise, result (4.12) takes the interesting form
$$Y(\omega) = \frac{\big\langle |O(\omega)|\, O(\omega) \big\rangle}{\big\langle |O(\omega)|^2 \big\rangle}.$$
The reconstructing filter is then the correlation of the modulus with the complex
spectrum, relative to the power in the spectrum. The all-important phase part of
$Y(\omega)$ comes from the numerator, where the known correlation of the phase with
the modulus is an input. In summary, then, in the single-filter case the filter is
essentially the average correlation between modulus and phase of the unknown
object. This is an intuitively correct dependence.
Wiener Phase-Filter Approach: Examples
We show in the following eight figures some examples of the use of the two-dimensional version of algorithm (4.4). Figure 4.1 shows the object training set, a set of 16 views $o_m(x,y)$, $m = 1, \dots, 16$, of space shuttles. To form the filters $Y_{mn}(\omega_1, \omega_2)$ by using Eq. (4.9), the object spectra $O_m(\omega_1, \omega_2)$, $m = 1, \dots, 16$, were formed from the $o_m(x,y)$ inputs, and all necessary averages [see Eq. (4.7) for example] were taken over this set of 16 objects. In these examples, for simplicity, we assumed the noise $N(\omega_1, \omega_2)$ to be zero. However, we add noise to the actual
data (see below). This made for an acid test of the approach. Would knowledge of
the object correlations alone make the procedure robust to data noise? As it turns
out, the answer is yes.
We assumed a two-dimensional $N = 3$ case, i.e., nine filters $Y_{mn}(\omega_1, \omega_2)$. This was the minimum number of filters needed to attain good reconstructions. The modulus of each of the constructed filters is shown in Fig. 4.2, and the phases are shown in Fig. 4.3 (all phases reflected into the fundamental interval). Although
the phases appear to be random, they actually are appropriate signals for the given
training set (Fig. 4.1). The substitution of random noise for the phase filters, in
one experiment, obliterated the output.
In Fig. 4.4, 16 independent reconstructions are shown that use the fixed filters $Y_{mn}$ of Figs. 4.2 and 4.3. The modulus of the spectrum of each image in Fig. 4.1 was used, in turn, as a new data set for reconstruction formula (4.4). As an example, when the modulus data of the upper left-hand object of Fig. 4.1 was
presented to the filters, the output was the upper left-hand image in Fig. 4.4. In
this way, each output in Fig. 4.4 is the reconstruction of modulus data from the
corresponding image in Fig. 4.1. This emphasizes the problem: to represent 16
different objects by the use of only 9 different filters. A comparison of Figs. 4.1
and 4.4 shows that the results are good. Each output is a recognizable rendition of its ideal image. Notice that there is no blurring present in the outputs. Departure from the ideal images takes the form of distorted intensity values. This is not a
particularly bothersome effect, because the object edge details are reconstructed
crisply and without geometrical distortion. The objects are easily recognized.
In order to test the effect of data noise on the approach, we added 10% additive
Gaussian noise to the 16 data sets and again reconstructed each. Outputs are shown
in Fig. 4.5. Results are still good. There is hardly any difference from the noise-free
outputs in Fig. 4.4.
In Fig. 4.6 we show outputs when 20% noise was added to the data. Now the
backgrounds are becoming noisy enough to start affecting recognition of foreground
object details. However, the objects are still recognizable.
Because the overall approach is linear, it can conceivably be performed in analog
fashion using a coherent Fourier plane processor. [Note that physical realizability with respect to finite bandwidth is not a problem here because the filters $Y_n(\omega)$ are in frequency space and, hence, already have a finite bandwidth, that of the data.]
Any number of incoherent processing arrangements could also be used.
With such applications in mind, it becomes crucial to test whether noise in
the filters, which would now have to be physically constructed, would seriously
bother the outputs. Accordingly, we added 10% additive Gaussian noise to both
the modulus and phase parts of the filters. The resulting outputs are shown in Fig.
4.7. Little degradation has taken place. We conclude that analog use of the filtering
approach is a real possibility.
Finally, we anticipated the use of the approach when part of the frequency space
of data is missing. For example, in astronomical use the telescope might have a
significant central obscuration because of the secondary mirror. Then a central,
circular area of frequencies would be missing from the data set. Essentially, all low-frequency information would be gone. The filter optimization approach was adapted to the case of missing frequencies by a simple rule: If a frequency $(\omega + m\Delta\omega)$ in relation (4.4) lies inside the missing region, it is replaced by the frequency obtained by radial translation away from the missing frequency by a fixed amount, into
the region of given data. It was hoped that the correlation information provided
by these stand-in frequencies would be sufficient to overcome the missing band of
information. This was, in fact, the case.
It was found that data gaps of considerable size may be overcome in this way.
For example, consider two cases of 10% and 50% central obscuration. Figures 4.8
and 4.9 show the outputs when each central hole of diameter 10% and 50% of the
total passband diameter was deleted from each data set. In Fig. 4.8 the foreground objects are still recognizable, but the distortions are much more significant in Fig. 4.9.
The utility of this filtering procedure rests upon statistical prior knowledge of
object and noise classes. Such statistical prior knowledge, gleaned from a learning
set of object and noise profiles, has a counterpart in the deterministic object constraint information that many other phase retrieval approaches require [58]. This
statistical knowledge appears to offer an advantage in that it leads to a noniterative
solution to the problem, whereas deterministic constraints require iterative solutions (e.g., Fienup's approach [58]) for their satisfaction. Also, the recognition that
noise can enter the data at the outset, Eq. (4.2), permits the statistical approach
to subdue it in the reconstruction optimally (by the L2 norm). No other approach
to the phase problem seems to acknowledge, and attack, noise in its formulation.
The Wiener approach, however, has its limitations. It presupposes the unknown
object to be a member of the given object class. As we saw in the applications
section, this often leads to good reconstructions. Conversely, the attempted reconstruction of modulus data for an object that does not belong to the given object
class is not usually successful.
Figure 4.1: Object training set.
Figure 4.2: The nine modulus filters. (The origin is the upper left corner, and the
folding frequency is the center of each filter.)
Figure 4.3: The nine phase filters. (The origin is the upper left corner, and the
folding frequency is the center of each filter.)
Figure 4.4: Nine-filter outputs (no noise).
Figure 4.5: Nine-filter outputs (10% noise).
Figure 4.6: Nine-filter outputs (20% noise).
Figure 4.7: Output, with 10% noise in filters.
Figure 4.8: Output, from data with 10% obscuration.
Figure 4.9: Output, from data with 50% obscuration.
In optical astronomy the angular resolution of an imaging system is approximately $\lambda/D$, where $\lambda$ is the wavelength of the light used and $D$ is the diameter of the aperture. For the 5 meter (200 inch) telescope, the above relation gives a resolution of 0.02 arc second at a wavelength of 400 nm. However, this resolution is not even approached by the telescope because of atmospheric turbulence. The typical resolution obtained is approximately 1 arc second. This resolution is roughly
characteristic of all large earth-based telescopes. This limitation is the result of
atmospheric turbulence, which causes the incoming wavefronts to be distorted in
a random manner so that most light is scattered into a disc much larger than the
Airy disc. One arc second corresponds to the Rayleigh resolution for the 25 cm (10
inch) optical aperture. The primary reason for building telescopes larger than 25 cm
has been the collection of more light so that dimmer objects may be detected. But
it is difficult to imagine a telescope with a diameter greater than about 50 m in the
foreseeable future. [58]
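The quoted resolution figure is easy to check numerically; the short calculation below (Python, our addition; the 1.22 Rayleigh factor is included to recover the diffraction-limited form quoted in the text):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0        # ~206265 arc seconds per radian

wavelength = 400e-9                             # m
D = 5.0                                         # m, the 200-inch aperture

theta = wavelength / D * RAD_TO_ARCSEC          # ~0.017 arc second
theta_rayleigh = 1.22 * theta                   # ~0.02 arc second, as quoted
```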
For a few decades there has been increasing interest in the problem of how to
compensate for the effects of atmospheric turbulence on images. To overcome the
problem of limited resolution, the method of high-resolution imaging, so-called stellar speckle interferometry, has been developed. The basis of these approaches is the
Van Cittert-Zernike theorem, which, in the form used in astronomy, states that the spatial coherence function in the far field of a thermal light source is proportional to the Fourier transform of the object intensity. The speckle interferometry
invented by Labeyrie [66] gives only an autocorrelation of the object. Therefore,
the reconstruction of a real image using data obtained by speckle interferometry
has been studied extensively [67, 68, 69, 63, 70, 71].
High angular resolution imaging can be achieved by various interferometric methods in spite of image degradation by atmospheric turbulence. Methods of reconstructing high angular resolution images are generally divided into two main approaches.
One method is phase retrieval using numerical algorithms. [67, 68, 69] A characteristic of methods using numerical algorithms is that the procedure can be easily
performed for a two-dimensional image. However, the uniqueness and convergence properties of the obtained solution are not proved under the given conditions. In addition, the effect of various amounts of noise existing in the
process in those methods is not sufficiently known. The second approach, based
on phase retrieval using the mathematical properties of the Fourier modulus of
a physical object, [63, 70, 71] mathematically ensures the uniqueness of the solution of a reconstructed image, but it is generally difficult to apply to actual image reconstruction because of high sensitivity to noise in the observed images.
We shall review Labeyrie speckle interferometry, the shift-and-add method, the Knox-Thompson method, and the bispectrum method. Finally, we shall give the development of some new reconstruction methods based on Fourier image division. Before
proceeding with the treatment of restoration methods, it will be useful to present
some theory relevant to the formation of images degraded by turbulence.
Image Formation Theory
Here we shall now derive some basic deterministic relations governing image
formation. They may be generalized to two dimensions in an obvious way. The
imaging problem we are considering is depicted in Fig. 5.1 [35].
In Fig. 5.1 a point source of light, of wavelength $\lambda$, is located at infinity along the optical axis of the given lens. Spherical waves emanate from it, but after traveling the infinite distance to the lens they appear plane, as illustrated at the far left. The lens has a coordinate $y$ which relates to a reduced coordinate $\beta$ by
$$\beta = ky/F,$$
where $k = 2\pi/\lambda$, and $F$ is the distance from the lens to the image plane. At each coordinate $\beta$ of the pupil there is a phase distance error $\Delta$ from a sphere whose radius is $F$, which would perfectly focus light in the image plane.
Huygens' principle states that each point of the wavefront just to the right of the lens acts like a new source of spherical waves. This defines a complex pupil function
$$u(\beta) = e^{ik\Delta(\beta)}$$
at each point $\beta$ of the pupil.
The focused image has an intensity profile $s(x)$, called the point spread function, which is located in the image plane. This may be represented in terms of a point amplitude function $a(x)$ as
$$s(x) = |a(x)|^2.$$
Quantity $a(x)$ relates to the pupil function Eq. (5.2) by Fraunhofer's integral [72]
$$a(x) = \int_{-\beta_0}^{\beta_0} d\beta\, e^{i[k\Delta(\beta) + \beta x]},$$
which is basically a finite Fourier transform of the pupil function. Eq. (5.4) is proved by representing $a(x)$ as a superposition integral over the pupil of Huygens' waves $r^{-1} e^{ikr}$, where $r$ is the distance from a general point $\beta$ on the wavefront in the pupil to a general point $x$ in the image plane. Since
$$r^2 = (y - x)^2 + (F + \Delta)^2,$$
$r \approx F + \Delta$ by Taylor series where $(F + \Delta) \gg |y - x|$. It is assumed that the lens has a compensating phase $ky^2/2F$ and $x$ is close to the optical axis.
Finally, the optical transfer function $T(\omega)$ is defined as basically the normalized Fourier transform of $s(x)$,
$$T(\omega) = \frac{\int dx\, s(x)\, e^{-i\omega x}}{\int dx\, s(x)}.$$
By combining Eqs. (5.3), (5.4), and (5.5), we obtain $T(\omega)$, the autocorrelation of the pupil function,
$$T(\omega) = (2\beta_0)^{-1} \int d\beta\, e^{ik[\Delta(\beta) - \Delta(\beta - \omega)]}.$$
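The autocorrelation form of the transfer function is easy to check numerically. The sketch below (Python, our illustration; the grid size and aberration strength are arbitrary choices) builds a discrete pupil $e^{ik\Delta(\beta)}$ and forms $T(\omega)$ as its normalized autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete pupil over 2*B+1 points; k*Delta(beta) is a random phase aberration.
B = 32
k_delta = rng.normal(scale=0.5, size=2 * B + 1)
pupil = np.exp(1j * k_delta)

def otf(shift):
    """T(w) at lag `shift`: autocorrelation of the pupil, Eq. (5.6)."""
    overlap = pupil[shift:] * np.conj(pupil[:len(pupil) - shift])
    return overlap.sum() / len(pupil)       # (2*beta_0)^{-1} normalization

T0 = otf(0)      # exactly 1: perfect response at zero frequency
T5 = otf(5)      # |T| < 1: aberrations and the finite pupil reduce contrast
```

Dividing by the full pupil length (rather than the overlap length) reproduces the diffraction-limited roll-off of the finite aperture as well as the aberration losses.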
Basic Speckle Interferometry
Speckle interferometric imaging is a set of techniques which astronomers use to attempt to overcome the blurring of astronomical images due to atmospheric turbulence. In optical astronomy, the traditional way to obtain an object map or object intensity is to use a single large telescope to record conventional long exposure images. This long exposure imaging might cause much high-frequency information
to be lost. In 1970, Labeyrie developed a method for obtaining diffraction limited
information while utilizing the full aperture of a telescope. The method is based
on processing a large number of short exposure images. If a stellar object is bright
enough, it can be imaged by the telescope during a time exposure of 10 msec or less.
This causes atmospheric turbulence, which changes at a slower pace, to be effectively
frozen. The result is a phase function across the telescope pupil that is temporally
constant during the exposure but spatially random. Such a pupil function causes,
via Eq.(5.3) and Eq.(5.4), a point spread function s(x,y) that is said to suffer
from short-term turbulence. Typically, such a spread function consists of randomly
scattered blobs or speckles. Suppose that many such short-term exposures of the
object are taken. Since each randomly causes a new pupil phase function to be recorded, $s(x,y)$ must be considered a stochastic process. Since all the while one object is being observed, the object $o(x,y)$ is fixed. In mathematical terms, one
estimates the power spectrum $\langle |I(\omega)|^2 \rangle$ of the image intensity $i(x,y)$, where $\langle\,\rangle$ denotes the ensemble average, from finite frames. It can be shown that the image power spectrum is related to that of the object, $|O(\omega)|^2$, through the speckle transfer function $\langle |T(\omega)|^2 \rangle$,
$$\langle |I(\omega)|^2 \rangle = |O(\omega)|^2\, \langle |T(\omega)|^2 \rangle.$$
Hence, Eq. (5.8) allows $|O(\omega)|^2$ to be found. The modulus $|O(\omega)|$ of the unknown object is then known.
Although the phase of the spectrum $O(\omega)$ is still unknown, knowledge of the modulus $|O(\omega)|$ by itself often allows the object $o(x,y)$ to be found by using phase retrieval algorithms.
An advantage of the Labeyrie technique is that it can be implemented by coherent optical processing methods. The collection of images is recorded photographically. The power spectrum $\langle |I(\omega)|^2 \rangle$ can then be formed by making a multiple exposure of the Fraunhofer diffraction patterns of the successive speckle images. As
an example of the study of closely-spaced binary stars, consider the case where the
object is a double star of equal intensities B and separation a. Then parameters a
and $B$ are all that are needed to specify the object. Moreover, since for this object
$$o(x) = B\,\delta(x - a/2) + B\,\delta(x + a/2),$$
we have
$$|O(\omega)|^2 = 4B^2 \cos^2(a\omega/2).$$
Thus $|O(\omega)|^2$ will be a set of fringes. Then $a$ can be estimated from the frequency of the fringes, while $B$ is known from their contrast.
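The fringe formula for the binary star is easily verified numerically (a short Python check, our addition; the values of $B$ and $a$ are arbitrary illustrative choices):

```python
import numpy as np

B, a = 1.0, 4.0                       # intensity and separation (illustrative)
w = np.linspace(-3.0, 3.0, 101)       # 1-D frequency axis

# O(w) = B e^{-i w a/2} + B e^{+i w a/2} = 2B cos(a w / 2)
O = B * np.exp(-1j * w * a / 2) + B * np.exp(1j * w * a / 2)
power = np.abs(O) ** 2                # fringe pattern 4 B^2 cos^2(a w / 2)

assert np.allclose(power, 4 * B**2 * np.cos(a * w / 2) ** 2)
```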
Shift-and-Add Method
One of the earliest methods proposed to retain high spatial frequency information
in astronomical images is simple shift-and-add. The technique works just as might be expected:
• The brightest point in each of a series of short exposure images is located
• The brightest point is relocated to a reference point
• The shifted image is added to the summed shifted images
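The three steps above can be sketched in a few lines (Python, our illustration; a toy 1-D object and synthetic random shifts stand in for real speckle frames):

```python
import numpy as np

rng = np.random.default_rng(3)

size, frames, ref = 32, 50, 16
obj = np.zeros(size)
obj[ref] = 1.0                        # toy object: one bright pixel

stack = np.zeros(size)
for _ in range(frames):
    # Short-exposure frame: randomly shifted object plus weak noise
    img = np.roll(obj, rng.integers(-8, 9)) + 0.05 * rng.normal(size=size)
    peak = int(np.argmax(img))        # 1. locate the brightest point
    img = np.roll(img, ref - peak)    # 2. relocate it to the reference point
    stack += img                      # 3. add to the running sum
result = stack / frames               # sharpened long-exposure estimate
```

Because the aligned frames add coherently at the reference pixel while the noise averages down, the summed image retains the high-frequency content that a conventional long exposure would smear.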
This technique is based on the assumption that a speckle image can be modeled
as the convolution of a set of delta functions with the object. The idea is due to
Lynds et al. [73] and has been used with notable success over the years. A similar
technique, called weighted shift-and-add [74], locates a number of maxima in each
image. This technique has met with even greater success than simple shift-and-add.
As might be expected, the shift-and-add techniques have a number of drawbacks.
The most serious of these is the fact that shift-and-add seems to produce an object-dependent point spread function. This clearly would make accurate reconstruction
of the object very difficult.
Knox-Thompson Method
We now examine a method that has considerable potential for overcoming atmospheric turbulence in astronomical imaging. The algorithm by Knox and Thompson
[63] is the first real candidate for overcoming atmospheric turbulence, because it can handle large phase fluctuations of several waves and also overcomes the problem of noise propagation in the phase estimates. The Knox and Thompson (KT) method
is an ad hoc addition to Labeyrie speckle interferometry. In the KT method, the
modulus II(w)1 of the image spectrum is obtained by the Labeyrie method and the
phase of the image spectrum is obtained by averaging the product of closely spaced
frequency components.
Suppose now that $N$ short exposure images $i_n(x,y)$, $n = 1, \dots, N$ of one incoherent object $o(x,y)$ are given. These are Fourier transformed to spectra $I_n(\omega)$. The spectra are then autocorrelated at 2 or 3 lags $\Delta\omega$, forming associated data
$$D(\omega; \Delta\omega) = N^{-1} \sum_{n=1}^{N} I_n(\omega)\, I_n^*(\omega + \Delta\omega)$$
over the passband $\omega$ of the imaging system. This quantity $D(\omega; \Delta\omega)$, unlike the power spectrum, is complex and retains phase information. The asterisk denotes the complex conjugate.
By the transfer theorem each spectrum obeys
$$I_n(\omega) = T_n(\omega)\, O(\omega), \qquad n = 1, \dots, N,$$
where $T_n(\omega)$ is the $n$'th stochastic transfer function of the atmosphere, and $O(\omega)$
is the fixed object spectrum. Substituting Eq. (5.11) into Eq. (5.10) gives
$$D(\omega; \Delta\omega) = M_{KT}(\omega; \Delta\omega)\, O(\omega)\, O^*(\omega + \Delta\omega),$$
where
$$M_{KT}(\omega; \Delta\omega) = N^{-1} \sum_{n=1}^{N} T_n(\omega)\, T_n^*(\omega + \Delta\omega)$$
is the net transfer function linking the associated data function $D(\omega; \Delta\omega)$ with the object information $O(\omega)\, O^*(\omega + \Delta\omega)$.
It is assumed that transfer function $M_{KT}$ can either be observed by imaging a nearby point source (e.g., a star or a glint), or is known from theory. Then representing each complex spectrum $O(\omega)$, $O(\omega + \Delta\omega)$ by its modulus and phase,
$$O(\omega) = |O(\omega)|\, e^{i\phi(\omega)},$$
in Eq. (5.12) allows the all-important object phase function $\phi(\omega)$ to be estimated,
$$\phi(\omega) - \phi(\omega + \Delta\omega) = \text{Phase}\big[ D(\omega; \Delta\omega) / M_{KT}(\omega; \Delta\omega) \big].$$
Now the phase of $D(\omega; \Delta\omega)$ may be computed from the data. Also, the phase of $M_{KT}(\omega; \Delta\omega)$ can be known from the image of an isolated star. This allows the unknown object phase difference $\phi(\omega) - \phi(\omega + \Delta\omega)$ to be found in Eq. (5.15), and is the touchstone of the approach.
The phase $\phi$ at any particular vector $\omega$ may be found by noting that $\phi(0,0) = 0$, since $o(x,y)$ is a real function, and by stepwise using Eq. (5.15) to find $\phi(\Delta\omega)$, then $\phi(2\Delta\omega)$, etc., until $\omega$ is reached. In fact, being a two-dimensional problem, there are any number of paths in frequency space that can be taken to produce $\phi$ at the required $\omega$. Each path should give rise to the correct $\phi(\omega)$ value. Therefore,
in the case of noisy data, the different path values may be averaged to effect some
gain in accuracy.
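In one dimension the stepwise integration is just a cumulative sum of the measured differences. A minimal sketch (Python, our illustration; a synthetic phase stands in for the data-derived differences of Eq. (5.15)):

```python
import numpy as np

rng = np.random.default_rng(4)

W = 16
phi = np.concatenate(([0.0], rng.uniform(-np.pi, np.pi, W - 1)))  # true phase

# Knox-Thompson data supply the differences phi(w) - phi(w + dw), Eq. (5.15)
diff = phi[:-1] - phi[1:]

# Stepwise recursion from phi(0) = 0:  phi(w + dw) = phi(w) - diff(w)
phi_hat = np.zeros(W)
for w in range(W - 1):
    phi_hat[w + 1] = phi_hat[w] - diff[w]

assert np.allclose(phi_hat, phi)      # exact recovery with noise-free data
```

With noisy data the errors accumulate along the path, which is exactly why averaging over many frequency-space paths is worthwhile in the two-dimensional case.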
The modulus $|O(\omega)|$ part of the object may more easily be obtained, by the use
of Labeyrie's approach. Then the modulus and phase functions are combined, via
Eq. (5.14), to form the final spectrum, which is Fourier transformed to get the final
estimated object.
Bispectrum Method
The bispectrum is defined as the product of the short exposure image transforms at three spatial frequencies,
$$I^{(3)}(\omega_1, \omega_2) = I(\omega_1)\, I(\omega_2)\, I^*(\omega_1 + \omega_2),$$
where $\omega_1$ and $\omega_2$ are two-dimensional spatial frequency coordinates and the asterisk denotes the complex conjugate. Also, the triple correlation can be represented by its inverse Fourier transform, where we use the notation $z = (x,y)$, $z_1 = (x_1, y_1)$, and $z_2 = (x_2, y_2)$.
Since the image is a convolution of an object with a point spread function, then in frequency space,
$$I(\omega) = O(\omega)\, T(\omega),$$
where $O(\omega)$ equals the spatial spectrum of the object intensity and $T(\omega)$ the instantaneous optical transfer function. Using Eq. (5.18), the average bispectrum of many short exposures is related to that of the object through a bispectrum transfer function $T_3(\omega_1, \omega_2)$ as
$$\langle I^{(3)}(\omega_1, \omega_2) \rangle = O^{(3)}(\omega_1, \omega_2)\, \langle T_3(\omega_1, \omega_2) \rangle.$$
It is clear that
$$\phi_I(\omega_1, \omega_2) = \phi_T(\omega_1, \omega_2) + \phi_o(\omega_1, \omega_2),$$
where $\phi_I(\omega_1, \omega_2)$ represents the phase of the average image bispectrum, $\phi_T(\omega_1, \omega_2)$ represents the phase of the bispectral transfer function, and $\phi_o(\omega_1, \omega_2)$ is the phase of the object bispectrum. It has been stated in the literature [70] that $\phi_T(\omega_1, \omega_2)$
is very small if not equal to zero. Also, if it is assumed that complex amplitude
in the telescope pupil has Gaussian statistics, then it can be shown [75] that the
bispectrum transfer function is real and nonzero over the diffraction-limited portion.
Thus, from the above discussion,
$$\hat{\phi}_o(\omega_1 + \omega_2) = \hat{\phi}_o(\omega_1) + \hat{\phi}_o(\omega_2) - \phi_I(\omega_1, \omega_2),$$
where $\hat{\phi}_o(\omega)$ represents an estimate of the object phase $\phi_o(\omega)$ at frequency $\omega$.
To see how the phase of the object spectrum can be calculated recursively from that of the object bispectrum, consider Eq. (5.22) with $\omega_1 = (0,0)$ and $\omega_2 = (0,1), (0,2), \dots$. In fact, this process can be repeated for different values of $\omega_1$ to yield a more accurate estimate of the phase at each spatial frequency. Since $\phi_o(0,0)$ equals zero for a real object, and $\phi_o(0,1)$ and $\phi_o(1,0)$ define the phase slope and are free to be chosen, the recursive scheme can be started. This method is in some ways similar to, but more general than, the Knox-Thompson method described above.
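A one-dimensional sketch of the bispectrum phase recursion (Python, our illustration; the true phase is synthetic, and the free phase slope is pinned to its true value here rather than to zero, which is an allowed choice):

```python
import numpy as np

rng = np.random.default_rng(5)

W = 12
phi = np.concatenate(([0.0], rng.uniform(-np.pi, np.pi, W - 1)))  # true phase

def beta(p, q):
    """Bispectrum phase: beta(p, q) = phi(p) + phi(q) - phi(p + q)."""
    return phi[p] + phi[q] - phi[p + q]

# Recursion, Eq. (5.22): phi(p + 1) = phi(p) + phi(1) - beta(p, 1), phi(0) = 0
phi_hat = np.zeros(W)
phi_hat[1] = phi[1]                   # the free phase slope, pinned here
for p in range(1, W - 1):
    phi_hat[p + 1] = phi_hat[p] + phi_hat[1] - beta(p, 1)

assert np.allclose(phi_hat, phi)      # exact recovery for noise-free data
```

Unlike Knox-Thompson, each phase value here can be reached through many $(\omega_1, \omega_2)$ pairs, which is the extra redundancy that makes the bispectrum method more robust to noise.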
Turbulent Image Reconstruction from a Superposition Model
Superposition Process
In many physical processes, data are formed as the random superposition [35]
over space, or over time, of one function. The latter is called a disturbance function
h(x,y) and has a known form. For example, we found that short-term atmospheric
turbulence causes the image of a star to be a random scatter of Airy-like functions.
Each Airy-like function has about the same form h(x, y), and these are centered
upon random points (xm' Ym). Let there be M fixed disturbances in all. Accordingly,
we postulate the superposition process to obey the following statistical model:
• A known disturbance function h( x, y) exists.
• It is centered upon $M$ random positions $(x_m, y_m)$, $m = 1, \dots, M$.
• The disturbances add with weights $w_m$, so that the total disturbance at any point $(x,y)$ obeys
$$s(x,y) = \sum_{m=1}^{M} w_m\, h(x - x_m,\, y - y_m).$$
• The $\{(x_m, y_m)\}$ are random variables that are independent samples from one known probability law $p_{XY}(x,y)$.
From the last model assumption and Eq. (5.23), $s(x,y)$ is a random variable at each $(x,y)$; indeed, $s(x,y)$ is a stochastic process. We are interested in establishing the statistics of $s$ at any fixed point $(x,y) = (x_0, y_0)$. But this problem has already been solved in detail in Frieden's book [35]. We will consider only the parameter estimation of the weights $w_m$ and displacements $(x_m, y_m)$.
It has often been noted [71] that, in the presence of short-term turbulence, the image of a star resembles a random superposition of speckles of approximately constant size. There is also theoretical justification for such a model [76]. The model underlies the previously described shift-and-add algorithm [73] for reconstructing objects consisting of impulses. In this section, we show that the model, in principle, permits the efficient reconstruction of general object scenes. Application to a particular case is also given, with encouraging results.
Model the short-term speckle point spread function $s(x,y)$ as a superposition
$$s(x,y) = \sum_{m=1}^{M} w_m\, h(x - x_m,\, y - y_m).$$
In statistical parlance, a disturbance function $h(x,y)$ of known form, e.g., a Gaussian, is assumed present. The weights $w_m$ and displacements $(x_m, y_m)$ are stochastic, i.e., unpredictable from exposure to exposure. Thus, each short-term spread function is presumed to be characterized by a randomly different set of weights and displacements.
The model (5.25) is, of course, an approximation. The shape of each disturbance
function h(x,y) will vary somewhat with exposure number n. Hence, the function
$h(x,y)$ used in Eq. (5.25) may be regarded as an average entity. In any real case, the departure of the model (5.25) from the true $s(x,y)$ may be regarded as noise.
The model (5.25) is used to form reconstructions as follows. Assume that $N$ short-exposure images $i_n(x,y)$, $n = 1, 2, \dots, N$ of a constant object scene $o(x,y)$ are at hand. These obey the linear transform theorem
$$I_n(\omega_1, \omega_2) = T_n(\omega_1, \omega_2)\, O(\omega_1, \omega_2)$$
relating the spectra of the $n$'th image $i_n(x,y)$, the $n$'th spread function $s_n(x,y)$, and the object $o(x,y)$. The object spectrum $O(\omega_1, \omega_2)$ follows once the transfer function $T_n(\omega_1, \omega_2)$ can be found, for any one image $n$. We accomplish the latter as follows.
Taking the Fourier transform of Eq. (5.25) gives
$$T(\omega_1, \omega_2) = H(\omega_1, \omega_2) \sum_m w_m\, e^{-i(\omega_1 x_m + \omega_2 y_m)},$$
where $H(\omega_1, \omega_2)$ is the Fourier transform of $h(x,y)$. This is the imaging characteristic of the $n$'th short-term image, or
$$T_n(\omega_1, \omega_2) = H(\omega_1, \omega_2) \sum_m w_{mn}\, e^{-i(\omega_1 x_{mn} + \omega_2 y_{mn})}.$$
We had to double subscript the unknowns because there is a new set of unknowns $w_m$ and $(x_m, y_m)$ for each image $n$. Using result Eq. (5.28) gives for the ratio of two images $n$ and $k$,
$$\frac{I_n(\omega_1, \omega_2)}{I_k(\omega_1, \omega_2)} = \frac{T_n(\omega_1, \omega_2)\, O(\omega_1, \omega_2)}{T_k(\omega_1, \omega_2)\, O(\omega_1, \omega_2)} = \frac{\sum_m w_{mn}\, e^{-i(\omega_1 x_{mn} + \omega_2 y_{mn})}}{\sum_m w_{mk}\, e^{-i(\omega_1 x_{mk} + \omega_2 y_{mk})}}.$$
Remarkably, both the object and the disturbance function drop out. This is quite
convenient, since the object is of course unknown, and the disturbance function
input would necessarily suffer from error, which conceivably would propagate into
the outputs $w_{mn}$, $x_{mn}$, $y_{mn}$.
For a fixed pair $(n, k)$ of images, Eq. (5.29) has $2M$ unknown weights $w_{mn}$, $w_{mk}$ and $4M$ unknown displacements $x_{mn}$, $y_{mn}$, $x_{mk}$, $y_{mk}$, or a total of $6M$ unknowns. However, Eq. (5.29) holds
at each frequency (WI,W2) value. This enables many equations in the unknowns
to be formed. If, e.g., the image space is 64-by-64 pixels, by the use of the Fast
Fourier Transform, the frequency space will likewise contain 64-by-64 pixels, for a
total of 4096. Thus, 4096 equations in the unknowns could be formed. If, as well, there are on the order of a few hundred speckles, we see that an overdetermined set of
equations is produced. Seeking a least-square solution then enables the unknowns
to be found. With these known, the turbulence point spread functions of Eq. (5.25)
for the two images nand k are known. Then use of Eq. (5.28) for each image gives
two estimates of the object. These may be averaged to produce the output object.
It may be noted that this procedure holds for a general object intensity profile $o(x,y)$; the details of the object spectrum $O(\omega_1, \omega_2)$ dropped out during the division in Eq. (5.29). In particular, the approach is not limited to impulse-type objects.
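The cancellation of the object in Eq. (5.29) can be demonstrated directly. In the sketch below (Python, our illustration; the Gaussian disturbance, weights, and shifts are arbitrary choices), two images of the same object are formed under the superposition model and their spectral ratio is compared against the ratio of transfer functions alone:

```python
import numpy as np

size = 64
x = np.arange(size)

h = np.exp(-(x - size / 2) ** 2 / (2 * 1.5**2))   # disturbance function h(x)

def psf(weights, shifts):
    """Superposition model, Eq. (5.25): weighted, shifted copies of h."""
    return sum(w * np.roll(h, s) for w, s in zip(weights, shifts))

obj = np.zeros(size)
obj[20], obj[28] = 2.0, 1.0                        # two-point object

s1 = psf([1.0, 0.7, 0.5, 0.9], [3, -5, 10, -2])    # spread function, image 1
s2 = psf([0.8, 1.1, 0.6, 0.4], [7, -1, -9, 4])     # spread function, image 2

T1, T2, O = np.fft.fft(s1), np.fft.fft(s2), np.fft.fft(obj)
I1, I2 = T1 * O, T2 * O                            # transfer theorem

ratio = I1 / I2
assert np.allclose(ratio, T1 / T2)    # the unknown object has dropped out
```

The ratio depends only on the weights and displacements, which is what makes it a usable data function for estimating them.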
Experimental Results
Suppose a binary star is modeled as a pair of point sources separated by some pixel distance. The object function $o(x,y)$ can be represented by the equation
$$o(x,y) = a\,\delta(x - x_1,\, y - y_1) + b\,\delta(x - x_2,\, y - y_2),$$
where $a$ and $b$ are the intensities of the point sources at $(x_1, y_1)$ and $(x_2, y_2)$, respectively. The Fourier transform of the object is easily found to be
$$O(\omega_1, \omega_2) = a\, e^{-i(\omega_1 x_1 + \omega_2 y_1)} + b\, e^{-i(\omega_1 x_2 + \omega_2 y_2)}.$$
The image $i(x,y)$ can be formed by the telescope during a short-term exposure of the object $o(x,y)$. The Fourier spectrum $I(\omega_1, \omega_2)$ is obtained by Fourier transform of the image $i(x,y)$. Thus, it is assumed that estimates are available for the image spectrum $I(\omega_1, \omega_2)$ and the transfer function $T(\omega_1, \omega_2)$ in Eq. (5.27).
The object Fourier spectrum $O(\omega_1, \omega_2; \theta)$ then is a function of unknown parameters $\theta = (w_{mn}, x_{mn}, y_{mn})$ in Eq. (5.31). The problem is to find a parameter set such that the figure of merit functional $S(\theta)$, a sum over the independent variables $\omega_1$ and $\omega_2$ which range over the passband, is minimized as a function of $\theta$. It is clear that the equation for the object Fourier spectrum is non-linear. Any convenient algorithm for seeking a least-squares solution to the problem may then be used. Though many algorithms exist for the least-squares optimization problem at hand, the Levenberg-Marquardt algorithm for non-linear least squares is a common choice and will be employed here. It has become
one of the most successful and widely used algorithms for nonlinear optimization.
It is briefly described in Appendix A.
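For orientation, a minimal scalar Levenberg-Marquardt iteration is sketched below (Python, our illustration; the one-parameter exponential model is a toy stand-in for the multi-parameter functional $S(\theta)$, and the damping constants are arbitrary):

```python
import numpy as np

# Toy problem: fit the rate k in y = exp(-k x) by non-linear least squares.
x = np.linspace(0.0, 4.0, 40)
y = np.exp(-0.7 * x)                  # noise-free data, true k = 0.7

def residual(k):
    return np.exp(-k * x) - y

def jacobian(k):
    return -x * np.exp(-k * x)        # d(residual)/dk

k, lam = 2.0, 1e-3                    # initial guess and damping parameter
for _ in range(50):
    r, J = residual(k), jacobian(k)
    step = -(J @ r) / (J @ J + lam)   # damped Gauss-Newton step
    if np.sum(residual(k + step) ** 2) < np.sum(r**2):
        k, lam = k + step, lam / 3    # accept: behave more like Gauss-Newton
    else:
        lam *= 3                      # reject: behave more like gradient descent
```

The accept/reject rule on the damping parameter is what lets the method interpolate between gradient descent far from the minimum and fast Gauss-Newton convergence near it.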
Using Eq. (5.29), we take the ratio of the image spectrum for image number $n = 1$ to that for image number $k = 2$, with $M = 4$ disturbances in each. For a fixed pair of images, Eq. (5.33) has 8 unknown weights $w_{mn}$ and 16 unknown displacements, or a total of 24 unknowns.
For the first test of the algorithm, initial guesses were assigned to the weights $w_{mn}$ and displacements $(x_{mn}, y_{mn})$ of the $M = 4$ disturbances in each of the two images; representative displacement guesses included $(8.0, -2.0)$, $(-7.0, -7.0)$, and $(0.0, -5.0)$.
This first starting point was chosen to roughly correspond to what will be seen to
be the answer to the problem. The choice is based on experience with running the
algorithm. The problem of how many iterations to perform is of some importance.
Since the algorithm will iterate for some time after the function value S(O) has
ceased to change, some stopping criterion should be used. The criterion that will
be employed here is based on the function value $S(\theta)$. Thus if $S(\theta_l)$ is the function value at the $l$'th iteration, then if
$$|S(\theta_l) - S(\theta_{l-1})| < 1 \times 10^{-2},$$
the iterations are halted. The stopping criterion of $1 \times 10^{-2}$ is somewhat arbitrarily
chosen since it is based on experience with running the non-linear least squares
routine. In the absence of a more rigorous method, the criterion appears to be
entirely sufficient.
For other trials, three different starting points were chosen. These starting points are summarized in Table 5.1. When dealing with non-linear least squares, an important question to ask is how sensitive the results are to the starting point. If more prior knowledge, such as a good starting point, is used, local minima can be avoided and convergence can be sped up.
A computer-based demonstration
Table 5.1: The three different starting points. (The table lists the initial guesses for the displacements $(x_{mn}, y_{mn})$; representative entries include $(1.0, 1.0)$, $(-6.0, -8.0)$, and $(-6.2, -6.5)$.)
In Fig. 5.2(a) is shown an ideal single-point object, and Fig. 5.2(b) is a two-point object with the left-hand impulse at half the intensity of the right-hand one.
The aim will be to reconstruct each object from two short-exposure images. This
constitutes two distinct reconstruction problems. The fields are 64 x 64 pixels in
area. In Fig. 5.2(c) and 5.2(d) are images of the one-point object (hence, these are
the same two spread functions), with M = 4 disturbances to each. Fig. 5.2(e) and
5.2(f) show the images of object Fig. 5.2(b) due to the same two spread functions.
The two images Fig. 5.2(c) and 5.2(d) are input into algorithm Eq. (5.29). Since $M = 4$, there are $6M = 24$ unknown weights and displacements to recover. We found it efficient to use the $20 \times 20$ central frequencies $(\omega_1, \omega_2)$ in Eq. (5.29), for a total of 400 equations in the 24 unknowns. This is a very overdetermined system of equations. A least-squares solution was found using a Levenberg-Marquardt algorithm. The output is the object in Fig. 5.3(a). It is seen to be a good approximation
to the ideal single-point object input Fig. 5.2(a).
Next, the two images Fig. 5.2(e) and 5.2(f) are input into algorithm Eq. (5.29).
Again there are 24 unknown weights and displacements, and we again use the 20 x 20
central frequencies (ω1, ω2) in Eq. (5.29). The output is shown in Fig. 5.3(b). Again
it is a good approximation to the ideal two-point object Fig. 5.2(b).
The Levenberg-Marquardt approach requires a first guess at the unknown weights
and displacements. These were randomly generated. To test the sensitivity of the
approach to the starting solution, we repeated the algorithm for the last problem
with two different randomly generated initial solutions. The outputs are shown in Fig. 5.3(c)
and Fig. 5.3(d). They are good approximations to the true object Fig. 5.2(b).
It may be noted that the reconstruction approach avoids the need for observation of a reference point source in the image field. Also, it requires but two
short-exposure images as input. In these regards, the approach offers an advantage over some past approaches to reconstruction, e.g. those due to Labeyrie or Knox-Thompson [63].
A few conclusions can be drawn from this chapter. Turbulent image reconstruction from a superposition model has some advantages. The approaches previously described
in Sections 5.3, 5.4, 5.5 and 5.6 require, for implementation, either empirical or theoretical knowledge of the ensemble-average image of a point source.
They also, in practice, require N = 20 or more short-term images for successful resolution of object details. Most importantly, the proposed approach avoids the need
for these inputs. The object is reconstructed without the need for a reference point
source in the image field, and by the observation of but 2 short-exposure images. A
possible way out of the problem of determining a good disturbance function would
be to regard it as an average speckle intensity profile.
It may be noted that the proposed approach holds for a general object intensity
profile o(x, y); the details of the object spectrum O(ω1, ω2) dropped out during the
division in Eq. (5.29). In particular, the approach is not limited to impulse-type
object scenes.
Image Plane
Figure 5.1: Formation of point spread function s(x) from lens suffering phase errors
Figure 5.2: Short-term images formed (to be processed). (a) Ideal one-point object;
(b) Ideal two-point object; (c) Turbulent image (psf) of (a); (d) Another image (psf)
of (a); (e) Image of (b) via psf (c); (f) Image of (b) via psf (d).
Figure 5.3: Outputs of image modelling algorithm. (a) Output o(x,y) of algorithm
based upon data F.T.{Fig. 5.2(c)}/F.T.{Fig. 5.2(d)}. (b) Output o(x,y) based
upon data F.T.{Fig. 5.2(e)}/F.T.{Fig. 5.2(f)}. Use of first starting solution. (c)
Output o(x,y) as in (b), using second starting solution. (d) Output o(x,y) as in
(b), using third starting solution.
This chapter summarizes the main results presented in this dissertation and
provides recommendations for further research.
In chapter three, one- and two-dimensional integral logarithmic transforms were
defined. These transforms obey the useful properties of linearity and scale invariance. The two-dimensional integral logarithmic transform is additionally invariant
to rotation. The one-dimensional transform is analogous to an optical edge response
function or to a cumulative probability law. The one-dimensional LT of a function
undergoing a nonlinear, power-law change of scale is itself a power-law change of
scale on the unscaled log transform. The inverse one-dimensional LT was found, as
were the inverse two-dimensional transforms. Any scaled version of a given image
has the same LT. This allows scale-invariant Wiener filtering and character recognition to be performed. A computer simulation of scale-invariant Wiener filtering
was carried through, showing good noise suppression at two different scales of magnification. An optical implementation of scale-invariant character recognition was also proposed.
We have proposed but a few applications of the concept of the LT. The transform
should find a host of other applications, in optics as well as in other fields of engineering. For example, a polar-log coordinate 1:1 mapping is a preprocessing step
that obeys Eq. (3.40) and that allows for scale- (and orientation-) invariant Wiener
filtering (Messner and Szu [47]). This preprocessing would probably be easier to
implement digitally than LT filtering since it is a simple point-to-point mapping.
On the other hand, since polar-log coordinate mapping is a non-linear operation,
the linear LT outputs Eq. (3.15) might be easier to analyze for noise-propagation
and signal-to-noise effects. Regarding analog implementation, LT might have an
advantage because of the simplicity of the proposed optical implementation in Fig.
3.1. Further comparisons of the two approaches would require detailed studies of
such factors as operational complexity, speed, cost, convenience, and accuracy.
In chapter four, we showed that the concept of the Wiener filter can be extended
to the phase-retrieval-from-modulus problem. However, in contrast to conventional
Wiener filtering, more than one filter is needed to achieve high-quality outputs. For
example, a 3 x 3 array of Wiener filters is needed to reconstruct successfully one
class of objects - 16 space shuttle images. The approach was found, by computer
simulation, to be relatively insensitive to noise in the modulus data and in the
filters. Also, considerable gaps in the data passband, such as those from a central
obscuration in the imaging telescope, can be overcome by suitable modification of
the rules of filter construction.
As further research, consider an object that is a translated, magnified, and rotated version of an object in the learning set. Such an object can probably be
accommodated by a modification of the filtering approach, as follows. The object's
autocorrelation function is invariant to translation, and suffers the same magnification and rotation effects as the object itself. This suggests that we Fourier transform the object's squared modulus data, producing the object's autocorrelation
function, and then take its integral logarithmic transform. The two-dimensional integral logarithmic transform has the property that it is invariant to the magnification
and rotation distortions of its input, except for a translation that depends on the
magnification and rotation. Hence, the transformed data merely suffer from lateral
translation. A subsequent inverse-Fourier transform and then modulus operation
eliminates the translation and places the output back in frequency space. If this
modulus data is now used in place of IQ(w)1 in the development Eqs. (4.2) - (4.13),
the result should be a set of filters Yn (w) optimized for use on this transformed
data. The filters should then work on data arising from any translated, scaled, and
rotated version of a shuttle image as in Fig. 4.1.
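The key step above, that a subsequent modulus operation eliminates the residual translation, follows from the Fourier shift theorem. A small 1-D numerical check (illustrative only):

```python
import numpy as np

# A circular shift multiplies the DFT by a unit-modulus phase factor,
# so the modulus of the spectrum is unchanged by translation.
rng = np.random.default_rng(1)
f = rng.random(64)
g = np.roll(f, 17)                      # f translated by 17 samples

assert np.allclose(np.abs(np.fft.fft(f)), np.abs(np.fft.fft(g)))
```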
In chapter five, we reconstructed one-point and two-point objects without the
need for a reference point source in the image field, and with the observation of but
2 short-exposure images. The Knox-Thompson approach depends, for its use,
upon the observation of many (N ≥ 20) short-term images of a point source, in
addition to the N images. The proposed approach requires no such inputs. It
uses but N = 2 images, working from their quotient in frequency space, Eq. (5.29).
Each point spread function s(x,y) is modeled [Eq. (5.25)] as a superposition, with
unknown weights and displacements, of a known disturbance function h(x, y). Since
the quotient is known over a wide band of frequencies, this enables the weights and
displacements for two short-term spread functions sn(x,y), Sk(X,y) to be solved
for in a least-squares sense. Once these are known, the object spectrum may be
solved for, by the use of a transfer function in conjunction with either image. Thus,
reconstruction is attained by the use of two images instead of 20 or more, and the
need for observation of a reference point source is replaced by modeling the point
spread function appropriately and working with the division of two image spectra.
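The final step, dividing an image spectrum by the recovered transfer function, can be sketched numerically. Everything here (the Gaussian psf, the two-point object values, the small regularizer eps) is an illustrative assumption, not the dissertation's data:

```python
import numpy as np

# Two-point object in the spirit of Fig. 5.2(b): left-hand impulse at half
# the intensity of the right-hand one (values illustrative).
obj = np.zeros((64, 64))
obj[30, 30], obj[30, 40] = 0.5, 1.0

# Assumed Gaussian point spread function, normalized to unit volume.
r2 = (np.arange(64) - 32.0)[:, None]**2 + (np.arange(64) - 32.0)[None, :]**2
psf = np.exp(-0.5 * r2)
psf /= psf.sum()

S = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function
I = np.fft.fft2(obj) * S                 # image spectrum (noise-free)

# Object spectrum by spectral division; eps guards against small |S|
# (a Wiener-style regularization, added here as an assumption).
eps = 1e-12
O_est = I * np.conj(S) / (np.abs(S)**2 + eps)
recon = np.real(np.fft.ifft2(O_est))
```

In this noise-free sketch the division recovers both impulses and their 2:1 intensity ratio.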
A case of M = 4 disturbances corresponds to a weak-turbulence situation where,
effectively, the optical aperture contains about 4 regions of correlated phase. Work
needs to be continued to extend the application of the algorithm to cases of stronger
turbulence, where M ≈ 100 disturbances are more common.
The following section is a brief summary of the Levenberg-Marquardt optimization
algorithm. Some of the optimization routines used in this dissertation are from the
Numerical Recipes mathematical subroutine library. Many of the same optimization algorithms are also available through IMSL. The IMSL routines are strictly a
black box, but the Numerical Recipes [77] text contains a very readable overview
of optimization. A more detailed review is found in a series of articles edited by
Murray [78].
Optimization is the branch of mathematics which attempts to find the maximum
or minimum of a given function. These extrema may be either local or global,
with global extremization being the more difficult task. The optimization
considered here will be restricted to finding the minima of a function. For the
applications considered here, moreover, the class of functions will be restricted
to ones composed of sums of squares. Specifically, if F(x) is the function to be
minimized, then

F(x) = Σ_{i=1}^{M} [f_i(x)]²,    (A.1)

where the sum is over the M data points at which the function is evaluated. The
function F(x) depends on N variables x_j, which are represented by the N-dimensional
vector x. In addition, all the functions considered will be non-linearly
dependent on the variables x_j.
Proofs for the convergence of optimization algorithms are often based on an
assumption that the function to be optimized can be approximated by a quadratic
form. Quadratic form means that the function F(x) can be represented by

F(x) = a + bᵀx + (1/2) xᵀA x,

where b is an N × 1 vector and A is an N × N symmetric matrix. The representation
of F(x) as a Taylor series is an important example of a quadratic form. The Taylor
approximation to F(x) about a point x₀ is then written as

F(x) ≈ F(x₀) + gᵀ(x − x₀) + (1/2)(x − x₀)ᵀG(x − x₀).

In this case the elements g_i of g are the first partial derivatives

g_i = ∂F/∂x_i,

and likewise the elements G_ij of G are the second partial derivatives

G_ij = ∂²F/∂x_i ∂x_j.
The matrix G is often referred to as the Hessian in the literature. When the
function to be minimized is a sum of squares, a useful approximation to the
Hessian may be derived. The approximation is found by differentiating the function
twice with respect to each of the variables x_j. The result is a sum composed of first-
and second-derivative terms. The second-derivative terms can be neglected, either
because they are small [79] or because they tend to cancel out [77].
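With the second-derivative terms dropped, the Hessian of a sum of squares reduces to G ≈ 2 JᵀJ, where J is the M × N Jacobian of the residuals f_i. A minimal sketch (exact for residuals that are linear in x):

```python
import numpy as np

def gauss_newton_hessian(jac):
    """Gauss-Newton approximation to the Hessian of F(x) = sum_i f_i(x)^2.

    jac is the (M, N) Jacobian, jac[i, j] = df_i/dx_j. The exact Hessian is
    2 J^T J + 2 sum_i f_i H_i; the second term is the one neglected above.
    """
    jac = np.asarray(jac, dtype=float)
    return 2.0 * jac.T @ jac
```

For example, for residuals f = (x0 − 1, 2 x1) the Jacobian is diag(1, 2) and the approximation gives diag(2, 8), which in this linear case is also the exact Hessian.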
The algorithm proposed by Levenberg [80] and Marquardt [81] has become one
of the most successful and widely used algorithms for nonlinear optimization. The
algorithm is actually a clever combination of two other optimization schemes. The
first scheme, steepest descent, is one of the oldest methods in existence and was
first suggested by Cauchy in 1847. Suppose g is the N × 1 gradient vector of F(x),
which has as its elements

g_i = ∂F/∂x_i.

Suppose further that F(x) is well approximated by the quadratic form. Then the
steepest descent iteration is defined as

x_{k+1} = x_k − h g_k,

where x_k represents the values of the variables at the k-th step and where h is
some constant. The second algorithm combined in Levenberg-Marquardt is often
referred to as Newton-Raphson. The method is generated by differentiation of the
N-dimensional Taylor series approximating F(x). The procedure which results is
the iterative step

x_{k+1} = x_k − G⁻¹ g_k,

where G⁻¹ is the inverse of the Hessian matrix. Combining these two algorithms,
the Levenberg-Marquardt algorithm [80, 81] is thus defined as the iterative step

x_{k+1} = x_k − (G + λI)⁻¹ g_k,
where I denotes the identity matrix. The form of the algorithm depends on the
constant λ at each step. When λ is large, the algorithm behaves more like
steepest descent. When λ is small, the algorithm resembles Newton-Raphson. The
constant λ is initially set to a small value (such as 0.001). If, during the iteration, the
function value increases, λ is increased by a factor of 10. If the function value
decreases, then λ is decreased by a factor of 10.
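This scheme can be sketched in a few lines (a minimal illustration using the Gauss-Newton Hessian approximation and the factor-of-10 rule; not the Numerical Recipes routine itself, and the Rosenbrock test problem is an illustrative choice):

```python
import numpy as np

def levmar(f, jac, x0, lam=1e-3, iters=200):
    """Minimal Levenberg-Marquardt: x <- x - (G + lam*I)^(-1) g.

    f(x) returns the residual vector, jac(x) its Jacobian; F(x) = sum f^2.
    lam follows the factor-of-10 rule described above.
    """
    x = np.asarray(x0, dtype=float)
    cost = float(np.sum(f(x) ** 2))
    for _ in range(iters):
        J, r = jac(x), f(x)
        G = 2.0 * J.T @ J                  # Gauss-Newton Hessian approximation
        g = 2.0 * J.T @ r                  # gradient of F
        trial = x - np.linalg.solve(G + lam * np.eye(x.size), g)
        trial_cost = float(np.sum(f(trial) ** 2))
        if trial_cost < cost:              # success: lean toward Newton-Raphson
            x, cost, lam = trial, trial_cost, lam / 10.0
        else:                              # failure: lean toward steepest descent
            lam *= 10.0
    return x

# Quick check on the Rosenbrock residuals, minimized at (1, 1).
f = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
jac = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
x_min = levmar(f, jac, [-1.2, 1.0])
```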
[1] R. A. Schowengerdt, Techniques for Image Processing and Classification in
Remote Sensing. New York: Academic Press, 1983.
[2] E. Persoon and K. S. Fu, "Shape discrimination using Fourier descriptors,"
IEEE Trans. Syst., Man, and Cybern., vol. SMC-7, pp. 170-179, Mar. 1977.
[3] K. S. Fu, Syntactic Pattern Recognition and Application. Englewood Cliffs,
NJ: Prentice-Hall, 1982.
[4] S. A. Dudani, K. J. Breeding, and R. B. McGhee, "Aircraft identification by
moment invariants," IEEE Trans. Comput., vol. C-26, no. 1, pp. 39-45, Jan. 1977.
[5] A. P. Reeves, R. J. Prokop, S. E. Andrews, and F. Kuhl, "Three-dimensional
shape analysis using moments and Fourier descriptors," IEEE Trans. Pattern
Anal. Machine Intell., vol. PAMI-10, no. 6, pp. 937-943, Nov. 1988.
[6] R. C. Gonzalez and P. Wintz, Digital Image Processing. Reading, MA:
Addison-Wesley, 1977.
[7] H. Wu and R. A. Schowengerdt, "Shape discrimination using invariant Fourier
representation and a neural network classifier," SPIE, vol. 1569, pp. 147-155,
July, 1991.
[8] A. Krzyzak, S. Y. Leung, and C. Y. Suen, "Reconstruction of two-dimensional
patterns by Fourier descriptors," in Proc. 9th ICPR, Rome, Italy, pp. 555-558,
Nov., 1988.
[9] C. T. Zahn and R. Z. Roskies, "Fourier descriptors for plane closed curves,"
IEEE Trans. Comput., vol. C-21, pp. 269-281, Mar. 1972.
[10] R. L. Kashyap and R. Chellappa, "Stochastic models for closed boundary
analysis: Representation and reconstruction," IEEE Trans. Inform. Theory,
vol. IT-27, pp. 627-637, Sept. 1981.
[11] Y. N. Hsu, H. H. Arsenault, and G. April, "Rotational invariant digital pattern
recognition using circular harmonic expansion," Appl. Opt., vol. 21, pp. 4012-4015, 1982.
[12] R. P. Lippmann, "An introduction to computing with neural nets," IEEE
ASSP Magazine, vol. 4, pp. 4-22, April 1987.
[13] T. Kohonen, Self-Organization and Associative Memory. Berlin: Springer-Verlag, 1984.
[14] G. M. Edelman, Neural Darwinism. NY: Basic Books, 1987.
[15] K. Fukushima, "Neural network model for selective attention in visual pattern
recognition and associative recall," Appl. Opt., vol. 26, pp. 4985-4992, 1987.
[16] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press, 1986.
[17] E. C. Titchmarsh, Introduction to the Theory of Fourier Integrals. Oxford:
Oxford Univ. Press, 1959.
[18] R. Bracewell, The Fourier Transform and Its Applications. New York:
McGraw-Hill Book Co., 1965.
[19] E. L. Schwartz, "Computational anatomy and functional architecture of striate
cortex: A spatial mapping approach to perceptual coding," Vision Res., vol. 20,
pp. 645-669, 1980.
[20] E. L. Schwartz, "Cortical mapping and perceptual invariance: a reply to Cavanagh," Vision Res., vol. 23, pp. 831-835, 1983.
[21] R. A. Altes, "Mechanism for aural pulse compression in mammals," J. Acoust.
Soc. Amer., vol. 57, pp. 513-515, 1975.
[22] R. A. Altes, "The Fourier-Mellin transform and mammalian hearing," J.
Acoust. Soc. Amer., vol. 63, pp. 174-183, 1978.
[23] D. Casasent and D. Psaltis, "New optical transforms for pattern recognition,"
Proc. IEEE, vol. 65, pp. 77-84, 1977.
[24] B. R. Frieden and C. Oh, "Integral logarithmic transform: theory and applications," Appl. Opt., vol. 31, pp. 1138-1145, 1992.
[25] J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt., vol. 21,
pp. 2758-2769, 1982.
[26] R. W. Gerchberg and W. O. Saxton, "Phase determination from image and
diffraction plane pictures in electron microscopy," Optik, vol. 34, pp. 275-284, 1971.
[27] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35,
pp. 237-246, 1972.
[28] D. L. Misell, "An examination of an iterative method for the solution of the
phase problem in optics and electron optics: I. test calculations," J. Phys.,
vol. D 6, pp. 2200-2216, 1973.
[29] D. L. Misell, "An examination of an iterative method for the solution of the
phase problem in optics and electron optics: II. sources of error," J. Phys.,
vol. D 6, pp. 2217-2225, 1973.
[30] R. A. Gonsalves, "Phase retrieval from differential intensity measurements,"
J. Opt. Soc. Am., vol. A 4, pp. 166-170, 1987.
[31] N. Nakajima, "Phase retrieval from two intensity measurements using the
Fourier series expansion," J. Opt. Soc. Am., vol. A 4, pp. 154-158, 1987.
[32] N. Nakajima, "Phase retrieval using the logarithmic Hilbert transform and
Fourier series expansion," J. Opt. Soc. Am., vol. A 5, pp. 257-262, 1988.
[33] B. R. Frieden and C. Oh, "Multiple-filter approach to phase retrieval from
modulus data," Appl. Opt., vol. 31, pp. 1103-1108, 1992.
[34] B. R. Frieden and C. Oh, "Turbulent image reconstruction from a superposition
model," Opt. Commun., 1993.
[35] B. R. Frieden, Probability, Statistical Optics, and Data Testing. Heidelberg:
Springer-Verlag, 2nd ed., 1991.
[36] A. V. Oppenheim and J. S. Lim, "The importance of phase in signals," Proc.
IEEE, vol. 69, no. 5, pp. 529-541, 1981.
[37] R. G. Lane, W. R. Fright, and R. H. Bates, "Direct phase retrieval," IEEE
Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 520-526, 1987.
[38] D. Casasent and D. Psaltis, "Space-bandwidth product and accuracy of the
optical Mellin transform," Appl. Opt., vol. 16, p. 1472, 1977.
[39] D. Casasent and D. Psaltis, "Position, rotation, and scaling invariant optical
correlation," Appl. Opt., vol. 15, pp. 1795-1799, 1976.
[40] D. Casasent and M. Kraus, "Polar camera for space-variant pattern recognition," Appl. Opt., vol. 17, pp. 1559-1561, 1978.
[41] L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing. Englewood Cliffs: Prentice-Hall, 1975.
[42] L. Mertz, Transformations in Optics. New York: John Wiley & Sons, 1965.
[43] W. T. Rhodes, J. R. Fienup, and B. E. Saleh, "Transformations in Optical
Signal Processing," SPIE, vol. 373, 1983.
[44] H. Wechsler, Computational Vision. San Diego: Academic Press, 1990.
[45] C. Weiman and G. Chaikin, "Logarithmic spiral grids for image processing
and display," Computer Graphics and Image Processing, vol. 11, pp. 197-226, 1979.
[46] P. S. Schenker, K. M. Wong, and E. G. Cande, "Fast adaptive algorithms for
low-level scene analysis: Application of polar exponential grid (IPEG) representation to high-speed scale and rotation invariant target segmentation,"
SPIE, vol. 281, pp. 47-57, 1981.
[47] R. A. Messner and H. H. Szu, "Simultaneous image processing and feature
extraction for two-dimensional non-uniform sensors," SPIE, vol. 449, pp. 693-710, 1983.
[48] R. A. Messner, "Smart visual sensors for real-time image processing and pat-
tern recognition based upon human visual system characteristics," Ph.D. dissertation, Clarkson Univ., Potsdam, N.Y., 1984.
[49] E. L. Schwartz, "Spatial mapping in the primate sensory projection: Analytic
structure and relevance to perception," Biol. Cybern., vol. 25, pp. 181-194, 1977.
[50] A. VanderLugt, F. B. Rotz, and J. A. Kloester, "Character reading by optical spatial filtering," in Optical and Electro-Optical Information Processing,
Cambridge, Mass.: MIT Press, 1965.
[51] T. Szoplik and H. H. Arsenault, "Shift and scale-invariant anamorphic Fourier
correlation using multiple circular harmonic filters," Appl. Opt., vol. 24,
pp. 3179-3183, 1985.
[52] D. Casasent, S. Xia, A. Lee, and J. Song, "Real-time deformation invariant optical pattern recognition using coordinate transformations," Appl. Opt., vol. 26,
pp. 938-942, 1987.
[53] J. Rosen and J. Shamir, "Scale invariant pattern recognition with logarithmic
radial filters," Appl. Opt., vol. 28, pp. 240-244, 1989.
[54] D. Mendlovic, N. Konforti, and E. Marom, "Scale and projection invariant
pattern recognition," Appl. Opt., vol. 28, pp. 4982-4986, 1989.
[55] Y. Sun, Z. Wang, and G. Mu, "Amplitude compensated matched filters using circular harmonic expansion and a Mellin transform," Appl. Opt., vol. 29,
pp. 4779-4783, 1990.
[56] D. Mendlovic, N. Konforti, and E. Marom, "Scale and projection invariant pattern recognition using logarithmic harmonics," Appl. Opt., vol. 29, pp. 4784-4789, 1990.
[57] D. G. Currie, "On a detection scheme for an amplitude interferometer," in
Synthetic-Aperture Optics (J. W. Goodman, ed.), Appendix II in Volume 2,
Woods Hole Summer Study, Defense Documentation Center, Alexandria, Virginia, 1967.
[58] J. C. Dainty and J. R. Fienup, "Phase Retrieval and Image Reconstruction for
Astronomy," in Image Recovery: Theory and Application (H. Stark, ed.), ch. 7,
Orlando: Academic Press, 1987.
[59] H. A. Ferwerda in Inverse Source Problems (H. P. Baltes, ed.), ch. 2, Berlin:
Springer-Verlag, 1978.
[60] L. C. Liu and S. H. Risbud, "Analysis of TEM image contrast of quantum-dot
semiconductor clusters in glasses," Phil. Mag. Lett., vol. 61, pp. 327-332, 1990.
[61] J. G. Walker, "The phase retrieval problem: A solution based on zero location
by exponential apodization," Opt. Acta, vol. 28, pp. 735-738, 1981.
[62] B. R. Frieden and D. G. Currie, "On unfolding the autocorrelation function,"
J. Opt. Soc. Am., vol. 66, p. 1111A, 1976.
[63] K. T. Knox and B. J. Thompson, "Recovery of images from atmospherically
degraded short exposure photographs," Astrophys. J. Lett., vol. 193, pp. L45-L48, 1974.
[64] A. W. Lohmann, G. Weigelt, and B. Wirnitzer, "Speckle masking in astronomy:
triple correlation theory and applications," Appl. Opt., vol. 22, pp. 4028-4037, 1983.
[65] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York: Wiley, 1949.
[66] A. Labeyrie, "Attainment of diffraction limited resolution in large telescopes
by Fourier analyzing speckle patterns in star images," Astron. and Astrophys.,
vol. 6, pp. 85-87, 1970.
[67] J. R. Fienup, "Space object imaging through the turbulent atmosphere," Opt.
Eng., vol. 18, pp. 529-534, 1979.
[68] G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its
applications," Opt. Lett., vol. 13, pp. 547-549, 1988.
[69] R. H. T. Bates, "A stochastic image restoration procedure," Opt. Commun.,
vol. 19, pp. 240-244, 1976.
[70] G. Weigelt and B. Wirnitzer, "Image reconstruction by the speckle masking
method," Optics Lett., vol. 8, pp. 389-391, 1983.
[71] J. C. Dainty, "Laser Speckle and Related Phenomena," in Topics in applied
physics (J. C. Dainty, ed.), Heidelberg: Springer-Verlag, 2nd ed., 1984.
[72] J. W. Goodman, Introduction to Fourier Optics. New York: McGraw-Hill, 1968.
[73] C. R. Lynds, S. P. Worden, and J. W. Harvey, "Digital image reconstruction
applied to Alpha Orionis," Astrophys. J., vol. 207, pp. 174-180, 1976.
[74] J. C. Christou, E. K. Freeman, and E. Ribak, "A self-calibrating shift-and-add
technique for speckle imaging," J. Opt. Soc. Am., vol. A3, pp. 204-209, 1986.
[75] A. W. Lohmann, G. Weigelt, and B. Wirnitzer, "Speckle masking in astronomy:
triple correlation theory and applications," Appl. Opt., vol. 22, pp. 4028-4037, 1983.
[76] F. Roddier, "The effect of atmospheric turbulence in optical astronomy," in
Progress in optics, Amsterdam: North-Holland, 1981.
[77] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing. Cambridge: Cambridge
University Press, 1988.
[78] W. Murray, "Fundamentals," in Numerical Methods in Unconstrained Optimization (W. Murray, ed.), New York: Academic Press, 1972.
[79] D. R. Sadler, Numerical Methods for Nonlinear Regression. St. Lucia, Queensland: University of Queensland Press, 1975.
[80] K. Levenberg, "A method for the solution of certain problems in least squares,"
Quarterly of Applied Mathematics, vol. 2, pp. 164-168, 1944.
[81] D. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM Journal on Applied Mathematics, vol. 11, pp. 431-441, 1963.