INFORMATION TO USERS


This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

UMI

A Bell & Howell Information Company

300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA

313/761-4700 800/521-0600

A DIFFRACTIVE OPTIC IMAGE SPECTROMETER (DOIS)

by

Denise Marie Blanchard Lyons

Copyright © Denise Marie Blanchard Lyons 1997

A Dissertation submitted to the Faculty of the

COMMITTEE ON OPTICAL SCIENCES (GRADUATE)

In Partial Fulfillment of the Requirements

For the Degree of

DOCTOR OF PHILOSOPHY

In the Graduate College

THE UNIVERSITY OF ARIZONA

1997

UMI Number: 9738976

Copyright 1997 by

Blanchard Lyons, Denise Marie

All rights reserved.

UMI Microform 9738976

Copyright 1997, by UMI Company. All rights reserved.

This microform edition is protected against unauthorized copying under Title 17, United States Code.

UMI

300 North Zeeb Road

Ann Arbor, MI 48103

THE UNIVERSITY OF ARIZONA ®

GRADUATE COLLEGE

As members of the Final Examination Committee, we certify that we have read the dissertation prepared by Denise Marie Blanchard Lyons entitled A Diffractive Optic Image Spectrometer (DOIS) and recommend that it be accepted as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy

Eustace Dereniak                                  Date

Wyant                                             Date

Jonathan Mooney                                   Date

Final approval and acceptance of this dissertation is contingent upon the candidate's submission of the final copy of the dissertation to the Graduate College.

I hereby certify that I have read this dissertation prepared under my direction and recommend that it be accepted as fulfilling the dissertation requirement.

Eustace Dereniak

Dissertation Director                             Date


STATEMENT BY AUTHOR

This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.

Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the copyright holder.


ACKNOWLEDGMENTS

I must start by thanking my best friend and husband, Steven Lyons, for his love, enthusiasm and constant support while I have been pursuing this degree.

I have been generously supported under a Long-Term-Full-Time-Study fellowship sponsored by my employer, the USAF Rome Laboratory, and my supervisors Greg Zagar and Dr. Donald Hanson. I will always share a sense of camaraderie with my closest co-workers, James Davis, Pierre Talbot, and especially my research partner Kevin Whitcomb.

I owe a special debt of gratitude to Herb Klumpe who managed to provide the outstanding computer resources that this project required.

I sincerely thank my academic and technical advisors Dr. Eustace Dereniak and Dr. Jonathan Mooney. They have persevered with me through the difficult task of finishing my dissertation in absentia. Additional appreciation goes to Michelle Hinnrichs of Pacific Advanced Technology, who encouraged our parallel efforts and periodic collaborations in investigating spectral imaging with a diffractive optic.

I am indebted to my parents and step-parents, Linda and Lawrence Ryan and Richard and Lewellyne Blanchard, for fostering ambition and perseverance in me, and for constant praise and encouragement which has made this endeavor much more pleasant. I would also like to thank my sister and brother-in-law, Melissa and Tony Chiffy, for sharing my precious and always joyful niece, Ashley Lynn, who has provided a welcome break from this project.

I need to thank the strong, intelligent and independent women in my family who have acted as role models and have always expected that I could do anything, my grandmothers, Marie Blanchard and Vita LaVenia, and my mother, Linda Ryan.

In memory of my Great Aunt Francis Bono, a lifelong educator.

TABLE OF CONTENTS

LIST OF FIGURES    8
ABSTRACT    13
1. INTRODUCTION    15
  1.1 Image Spectrometry    15
  1.2 Figures of Merit    16
  1.3 Common Image Spectrometers    17
  1.4 Spectrotomography Techniques    19
  1.5 DOIS    20
  1.6 Dissertation's Objectives    22
2. DOIS - DIFFRACTIVE OPTIC IMAGE SPECTROMETER    26
  2.1 Basic Scenario    26
  2.2 Experimental Demonstration with an RGB Monitor    29
    2.2.1 Blue Background Demonstration    33
  2.3 Continuous Magnification    34
    2.3.1 Post-detection image resampling    34
    2.3.2 Optical designs for constant magnification    35
3. DIFFRACTIVE OPTICAL ELEMENTS    38
  3.1 Fresnel Zone Plate    38
  3.2 DOE design and characteristic equations    40
  3.3 Fabrication of Multilevel Profiles    43
  3.4 The Incoherent Point Spread Function    46
    3.4.1 Diffraction limited incoherent psf    46
    3.4.2 Experimentally measured incoherent psf    49
      3.4.2.1 Coherent vs. Incoherent Imaging    49
      3.4.2.2 Experimentally measured results    55

4. THREE DIMENSIONAL IMAGE FORMATION WITH A DIFFRACTIVE OPTICAL ELEMENT (DOE)    59
  4.1 Imaging with a Diffractive Optic    60
    4.1.1 Spectral Gaussian Coordinate Transform    61
    4.1.2 Three-dimensional Transfer Theory    63
  4.2 Isotomic Imaging in a narrow waveband    64
    4.2.1 Reconstruction    65
    4.2.2 Define isotome bandwidth    66
  4.3 Full Spectrum Imaging    67
    4.3.1 Isotome Coordinate Transform    68
    4.3.2 Reconstruction    70
    4.3.3 Inverse Spectral Coordinate Transform    70
5. PROTOTYPE CHARACTERIZATION    71
  5.1 DOIS Prototype    71
  5.2 The Diffractive Optic Element    72
  5.3 The CCD camera    78
  5.4 DOIS prototype characteristics    80
  5.5 Spatial and Spectral point spread function (psf)    82
6. IMAGE FORMATION    85
  6.1 Mercury 577/579 nm doublet with an X aperture    85
  6.2 Spectral/spatial analysis    89
    6.2.1 Monochromatic target, increasing aperture    92
    6.2.2 Doublet target, increasing aperture    92
7. PROTOTYPE DEMONSTRATION WITH FOUR TARGETS    96
  7.1 Testbed Specifications    96
  7.2 Results    101
8. OBJECT RECONSTRUCTION ALGORITHMS    107
  8.1 Digital Representation    109
  8.2 Nearest Neighbor Reconstruction    109
  8.3 Inverse Filtering    115
    8.3.1 Apodization Inverse Filter    116
    8.3.2 Regularized Inverse Filter    119
    8.3.3 Singular Value Decomposition Inverse Filter    124
      8.3.3.1 SVD results    134
  8.4 Constrained Iterative Deconvolution    143
9. CONCLUSION    150
  9.1 DOIS Advantages    150
  9.2 Suggestions for future work    152
    9.2.1 Dual Waveband design using multiple orders    152
    9.2.2 Programmable DOE with an SLM, variable focus device    153
    9.2.3 Array of DOEs    153
APPENDIX A: CATALOG OF IMAGES    155
REFERENCES    207


LIST OF FIGURES

Figure 1.1: Typical spectral line used to define spectral resolution    17
Figure 1.2: Three-dimensional image cube for a color filter wheel image spectrometer    17
Figure 1.3: Grating image spectrometer three-dimensional image cube    18
Figure 1.4: Three-dimensional data cubes for a Fourier Transform image spectrometer    19
Figure 1.5: Three-dimensional spectral imaging with a diffractive optical element    21
Figure 1.6: The spectral image distances and magnifications for an object at z_o = 2f_d    21
Figure 1.7: The six Cartesian Coordinate systems used in DOIS    23
Figure 2.1: DOIS - Diffractive Optic Image Spectrometer basic scenario    27
Figure 2.2: Algorithm for calculating wavelength from image distance, Si    28
Figure 2.3: Example of the color picker window in Adobe Photoshop™ for the blue AFMC target    30
Figure 2.4: Phosphor emission curves for red (P22R), green (P22G), and blue (P22B) phosphor; the measured blue and green agree with wideband textbook curves, however the measured red differs dramatically    31
Figure 2.5: Spectral images recorded with DOIS. The target is text displayed on an AppleColor™ high resolution RGB monitor    32
Figure 2.6: Images from green ROME and red LAB on a blue background (a) blue focus (b) green (c) red    33
Figure 2.7: Define a pixel footprint    34
Figure 2.8: Relay DOIS design with minimal magnification change    35
Figure 2.9: Magnification change vs. wavelength for original, relay and zoom designs    37
Figure 2.10: Zoom DOIS design with constant magnification    37
Figure 3.1: Fresnel zone plate phase profiles    39
Figure 3.2: Diffraction efficiency vs. number of masks and phase levels    39
Figure 3.3: Ray trace that determines zone radii    40
Figure 3.4: Diffraction Efficiency vs. Wavelength for N=2 and N=16    42
Figure 3.5: Photolithography multilevel fabrication algorithm    44
Figure 3.6: SEM photos of DOE rings, (a) Top view, feature size is ≈10 μm. (b) Side view of the outermost rings, feature size is ≈2.5 μm    45
Figure 3.7: A density plot of the diffraction limited intensity psf, h(v,u); it is rotationally symmetric around the u axis    47
Figure 3.8: A theoretical plot of h(v,0) in the focal plane    48
Figure 3.9: A theoretical plot of h(0,u) along the optical axis    48
Figure 3.10: A theoretical plot of h(v,v) where u=v    49
Figure 3.11: (a) The theoretical intensity distribution in the coherent and incoherent images of a slit, (b) the recorded image of a coherent slit (reprinted from Reynolds 1989, page 115)    52
Figure 3.12: The diffraction limited impulse response for (a) coherent and (b) incoherent illumination    53

Figure 3.13: Cross-sections of various magnified pinholes, M_T = -1 (left column), alongside the resulting diffraction limited intensity distributions from coherent (bold line) and incoherent (thin line) illumination plotted together (right column)    54
Figure 3.14: Series of images (32 x 32 pixels) of a 5 μm pinhole illuminated by a HeNe laser recorded with DOIS through focus    55
Figure 3.15: The experimentally measured intensity point spread function, h(r,z), (a) a density plot (b) a 3D plot    56
Figure 3.16: The theoretical and measured on-axis/spectral intensity psf, h(z(λ)) = h(0,u)    57
Figure 3.17: The experimentally measured OTF(r,z), (a) a density plot (b) a 3D plot    58
Figure 4.1: A flow chart depicting the manipulation of the three-dimensional object cube as it is imaged and reconstructed    59
Figure 4.2: A diffractive optical element imaging various spectral point sources    61
Figure 4.3: The variance on (a) R and (b) T at z_o = 10f_d    67
Figure 4.4: Visual representation of the isotome coordinate transform forming an image space with unit magnification    69
Figure 5.1: Photo of the DOIS prototype    71
Figure 5.2: DOE figures of merit    73
Figure 5.3: A partial image of the DOE rings    73
Figure 5.4: Diffraction efficiency experimental setup (a) determine the total power, (b) first order power at r/2, (c) first order power at full aperture    75
Figure 5.5: Theoretical and measured diffraction efficiency vs. wavelength    75
Figure 5.6: DOE MTF, measured at RFC Diffractive Optic Testing Facility    77
Figure 5.7: DOIS camera specifications    78
Figure 5.8: The experimentally determined spectral response curve of the SONY CCD. The dashed line is the raw data, and the solid line has the spectral curve of the lamp and monochromator grating divided out    79
Figure 5.9: The experimentally determined spectral response curve of the SONY CCD after the removal of a factory placed filter. The dashed line is the raw data, and the solid line has the spectral curve of the lamp and monochromator grating divided out    80
Figure 5.10: A pictorial representation of the system's MTF, a DOIS image of an Air Force Resolution Target    81
Figure 5.11: Schematic of DOIS    81
Figure 5.12: Excel spreadsheet calibrating the DOE and detector carrier locations z_i to a corresponding wavelength, λ    82
Figure 5.13: Series of images (32 x 32 pixels) of a 5 μm pinhole illuminated by a HeNe laser recorded with DOIS through focus    83
Figure 5.14: Spatial impulse response h(r) at 632.8 nm, plotted as greylevel vs. pixel number for locations (a) approaching focus, (b) leaving focus    83
Figure 5.15: Spectral impulse response h(λ), plotted as intensity of one pixel through focus (a) for 632.8 nm, (b) for 542 nm    84

Figure 5.16: The experimentally determined spectral resolution    84
Figure 6.1: Mathematica code that implements the linear system algorithm    85
Figure 6.2: Simulated spectral object & image channels of a Mercury doublet X, using experimentally measured psf(x,y,z(λ))    87
Figure 6.3: Spectral radiance plots of a Mercury doublet 577/579 nm (a) the assumed input source object, (b) the computer generated results, (c) the experimental result    88
Figure 6.4: 3D object cube and zy view of a rectangular aperture emitting (a) monochromatically (b) a doublet source    90
Figure 6.5: Cross-sections of the three-dimensional point spread functions used in CG image formation and object reconstruction    91
Figure 6.6: Computer generated monochromatic target of increasing size    92
Figure 6.7: Computer generated (CG) and experimentally recorded doublets of increasing size    93
Figure 6.8: A point spread function askew from off-axis aberrations    94
Figure 6.9: The psf of a pixel at the edge of the detection area    95
Figure 7.1: The four targets: a Tungsten Halogen Lamp, a Mercury Lamp, a HeNe and a GreNe    96
Figure 7.2: Spectral distribution of the Newport Tungsten Halogen Lamp (a) reprinted from the Newport operations manual, (b) measured with DOIS    97
Figure 7.3: Spectral radiance curve of the Mercury Lamp (a) reprinted from Applied Optics, (b) measured with DOIS    98
Figure 7.4: Schematic of the four targets testbed    99
Figure 7.5: Sample of the key spectral DOIS images in this demonstration    100
Figure 7.6: Peak spectral lines of the target, calculated from the locations of best focus    101
Figure 7.7: Tungsten Halogen data (a) spectral images of the 50 μm pinhole, (b) the measured spectral radiance curve    103
Figure 7.8: Sketch of image drifting between pixels    103
Figure 7.9: HeNe data (a) spectral images of the 5 μm pinhole, (b) the measured spectral radiance curve (dz=0.005"=0.0125cm, dλ=0.3-0.4nm)    104
Figure 7.10: GreNe data (a) spectral images of the 10 μm pinhole, (b) the measured spectral radiance curve (dz=0.005"=0.0125cm, dλ=0.2-0.3nm)    105
Figure 7.11: The measured spectral radiance curves of each Mercury line with the in-focus 32x32 pixel spectral image(s) of the center of the cross, (a) 365nm line, (b) 404nm line, (c) 435nm line, (d) 546nm line, (e) 577/579nm doublet    106
Figure 8.1: Mathematica implementation of the Nearest Neighbor algorithm for 20 images    112
Figure 8.2: Recorded Mercury images and corresponding restored objects with Nearest Neighbor technique    113

Figure 8.3: Restoration of Mercury spectral radiance curve with Nearest Neighbor technique, (a) the recorded spectra, and the reconstructed spectra with (b) n=0.45    114
Figure 8.4: The Mathematica code for inverse reconstruction with apodization    117
Figure 8.5: List plot of the flattened (a) psf, (b) OTF    117
Figure 8.6: Object results from apodization reconstruction with CG images (left column) and recorded iris images (right column)    118
Figure 8.7: Mathematica code for the Regularized Inverse Filter    120
Figure 8.8: Resulting object sets after Regularized Inverse reconstruction with various α    121
Figure 8.9: Spectral radiance of pixel 16,16 before and after (bold) the Regularized Inverse reconstruction with α=0.005    121
Figure 8.10: CG doublet objects, images and reconstructed objects    122
Figure 8.11: Measured Mercury doublet images and Regularized Inverse reconstructed    123
Figure 8.12: Results of Regularized Inverse filter on measured 400nm and iris images    123
Figure 8.13 (c): Cross-section of the 3D combination psf used in SVD object reconstruction (contrast enhanced to show detail)    128
Figure 8.14: Mathematica code to calculate the "basis" OTF, P[[Δz,x,z]]    128
Figure 8.15: Mathematica code to form the OTF Matrix    129
Figure 8.16: Mathematica code to create inim which converts the NxN Complex matrix into a 2NxN of all Real values    130
Figure 8.17: Mathematica code to create the Fourier space matrix IMAGE    131
Figure 8.18: Mathematica code finds the SVD matrices, inverts the OTF and finds the reconstructed OBJ matrix    131
Figure 8.19: Mathematica code takes the inverse Fourier transform of the OBJ matrix    132
Figure 8.20: Singular Values w_n(x,z) matrix plots    133
Figure 8.21: Cross-section shows missing cone    133
Figure 8.22: Table of the z values for the following output series    134
Figure 8.23: z(λ) vs. y cross-section and spectral plots from SVD applied to CG Doublet    136
Figure 8.24: SVD reconstructed object series from the CG 8x8 pixel doublet aperture, previously depicted as cross-sections in 8.23    137
Figure 8.25: SVD reconstructed object series from the CG 16x16 pixel doublet aperture, previously depicted as cross-sections in 8.23    138
Figure 8.26: SVD reconstructed object series from the experimentally measured Hg 577/579 nm X    139
Figure 8.27: SVD reconstructed object series from the experimentally measured 542 nm GreNe point-source with a trapezoidal 546 nm Mercury target    140
Figure 8.28: SVD reconstructed object series from the experimentally measured Mercury Rectangle    141

Figure 8.29: SVD reconstructed object series from the experimentally measured iris illuminated by a Mercury bulb (notice the hot spot)    142
Figure 8.30: Mathematica code for the Jansson-vanCittert algorithm    144
Figure 8.31: Resulting images after Jansson-vanCittert reconstruction with various iterations, (a) original recorded data, (b) n=10, (c) n=20    144
Figure 8.32: Spectral radiance of pixel 16,16 Jansson-vanCittert reconstruction at various iterations    145
Figure 8.33: Mathematica code for Regularized Inverse Filtered vanCittert iterative reconstruction    146
Figure 8.34: rms error after each iteration for (a) the CG doublet and (b) the iris    146
Figure 8.35: CG doublet; results of Regularized Inverse Filtered vanCittert iterative reconstruction with μ=0.001    148
Figure 8.36: Results of iris reconstruction after (a) 3D Regularized Inverse filter and (b-e) with Regularized Inverse Filtered vanCittert iterative reconstruction, α=0.01    149
Figure 8.37: Reconstructed iris objects after 10 & 20 iterations    149
Figure 9.1: Two waveband design, mid IR and far IR, with λ_d=7.3 μm, (a) spectral efficiency for each order, (b) the two wavelengths at each focal position    152
Figure 9.2: Design for a static DOIS with a controllable, variable focus lens    153
Figure 9.3: Design for a static DOIS with a DOE lens array    154


ABSTRACT

The diffractive optic imaging spectrometer, DOIS, is a high resolution, compact, economical, rugged, programmable, multi-spectral imager. The design implements a conventional CCD camera and emerging diffractive optical element (DOE) technology in an elegant configuration, adding spectroscopy capabilities to current imaging systems [Lyons 1995].

One limitation of DOEs, also known as zone plate lenses, is abundant chromatic aberration. DOIS exploits this typically unwanted effect, utilizing a DOE to perform the imaging and provide the dispersion necessary to separate a multi-spectral target into separate spectral images. The CCD is stepped or scanned along the optical axis, recording a series of these spectral images. This process is referred to as diffractive spectral sectioning.

Under this dissertation, three-dimensional spectral/spatial DOE imaging theory was developed to describe and predict the system's performance. The theory was implemented in a software model to simulate DOIS image cubes. A visible spectrum DOIS prototype was designed, fabricated and characterized. The system's incoherent point spread function was theoretically modeled and experimentally determined. To verify the simulations, the prototype's performance was demonstrated with a variety of known targets and compared to simulated image cubes. To reconstruct the three-dimensional object cubes, various deconvolution algorithms (nearest neighbor, inverse filtering and constrained iterative deconvolution) were developed and applied to both computer generated and experimentally measured image cubes. The best results were obtained using an SVD inverse Fourier deconvolution algorithm with regularization for noise suppression. The results demonstrate a resolving power greater than 288 (λ/Δλ = 577nm/2nm). Finally, three additional DOIS designs are presented as suggestions for future work, including a configuration with no moving parts which records the entire 3D image cube in one "snapshot".

DOIS is a practical image spectrometer that can be built to operate at ultraviolet, visible or infrared wavelengths for applications in surveillance, remote sensing, medical imaging, law enforcement, environmental monitoring, and laser counter intelligence.


1. INTRODUCTION

The field of multi- and hyper-spectral imaging, also known as Imaging Spectrometry, has a long history [Goetz 1995] and has been receiving increased attention for several years. As discussed elsewhere [Descour 1995], Imaging Spectrometry adds the ability to examine the spectral distribution of two-dimensional scenes to the fundamental power of imaging systems. The availability of known spectral radiance, reflectance and absorption curves coupled with an imaging spectrometer allows identification and classification of targets with an accuracy and resolution previously unknown.

An entire chapter of the IR Handbook [Wolfe 1989] is dedicated to the spectral properties of natural sources. Recorded spectra can be compared to these and other known spectra to identify the material composition of a target. A spectrometer is an instrument for resolving and recording these spectra. The combination of an imager and a spectrometer, an imaging spectrometer, provides a conventional spatial image as well as the spectral content of each pixel, forming a three-dimensional spatial/spectral image cube. The result is a complete system capable of target detection, classification and identification.

Image spectrometers can be built to operate in the ultra-violet (UV), visible or infrared (IR) spectrum and measure spectral reflectance, emission and/or absorption. Spectra can be used as a "fingerprint" to determine exactly what material or mix of materials is present. Whether the data is from the paint of a military tank, the plume of a missile, the ground in the desert or the air we breathe, spectral information can lead to recognition of a detected target or even detect the presence of unwanted materials.

1.1 Image Spectrometry

The goal of an image spectrometer is to obtain a set of spectral images of a target to form an image cube. Each image represents a spectral band or channel, defined by a central wavelength, λc, and its spectral bandwidth, Δλ. The total number of channels is equal to the entire wavelength range divided by the channel bandwidth.

    number of channels = (λmax - λmin) / Δλ    (1.1)

Common image spectrometers include three components: a dispersion element, an imager and a detection device [Wolfe 1989]. Examples include a lens and color filter wheel in front of a two-dimensional detector array, or a single pixel aperture illuminating a diffraction grating, dispersing the light onto a linear detector array. The detector records one pixel at all wavelengths, which is then scanned in the x and y directions to form an image.
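The channel count of Eq. (1.1) is trivial to evaluate; the short Python sketch below does so for a hypothetical visible-band instrument (the 400-700 nm range and 10 nm bandwidth are illustrative assumptions, not parameters of the DOIS prototype):

```python
def num_channels(lambda_min_nm, lambda_max_nm, bandwidth_nm):
    """Eq. (1.1): total wavelength range divided by the channel bandwidth."""
    return (lambda_max_nm - lambda_min_nm) / bandwidth_nm

# Illustrative example: 400-700 nm sampled in 10 nm channels
print(num_channels(400.0, 700.0, 10.0))  # 30 channels
```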

1.2 Figures of Merit

When designing or comparing imaging spectrometers it is customary to discuss the spectral resolution of the system [Wolfe 1994]. Spectral lines usually have some predetermined shape, such as a Gaussian or Lorentzian. The line width, sometimes defined as the spectral slit width, is usually specified as the full width at half maximum (FWHM), as shown in Figure 1.1, noted as Δλ.

    spectral resolution = Δλ    (1.2)

The relative resolution is the spectral resolution divided by the center wavelength.

    relative resolution = Δλ / λc    (1.3)

Since bigger is always better, the resolving power is defined as the reciprocal of the relative resolution.

    resolving power = λc / Δλ    (1.4)


Figure 1.1: Typical spectral line used to define spectral resolution.
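These figures of merit are simple ratios; the Python sketch below (function names are ours, for illustration) evaluates them for the 2 nm line width at 577 nm that the abstract quotes as DOIS's demonstrated resolution:

```python
def relative_resolution(delta_lambda_nm, center_nm):
    """Eq. (1.3): spectral resolution (FWHM) divided by the center wavelength."""
    return delta_lambda_nm / center_nm

def resolving_power(delta_lambda_nm, center_nm):
    """Eq. (1.4): reciprocal of the relative resolution, lambda_c / delta_lambda."""
    return center_nm / delta_lambda_nm

# A 2 nm FWHM line at 577 nm, as in the abstract's lambda/delta-lambda = 577/2
print(resolving_power(2.0, 577.0))  # 288.5
```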

1.3 Common Image Spectrometers

The most simplistic image spectrometer is a standard camera (lens and detector) with various band-pass color filters. Typically the filters are placed on a wheel that is rotated so that they are in the optical path one at a time. Figure 1.2 shows that the data recorded is a series of two-dimensional (x,y) images stepped in wavelength. The number of filters determines how completely the spectrum is sampled, and the spectral resolution of each filter is the spectral resolution of the spectrometer. However, the system's size, complexity and cost are directly related to the number and quality of the filters, and the materials needed for some filters have limited availability. Additionally, once designed and built it is inflexible; interest in additional wavelengths will require an upgrade.


Figure 1.2: Three-dimensional image cube for a color filter wheel image spectrometer.


Another popular design is the grating spectrometer. A narrow, rectangular entrance slit is imaged onto a linear diffraction grating and then onto a two-dimensional detector. The grating's dispersion is perpendicular to the long axis of the slit. The detector records a spatial image in one direction and the spectra along the other. The data is a set of spatial/spectral images, (x,λ) or (y,λ), that are recorded as the aperture is scanned along the scene in a pushbroom manner to form the image cube shown in Figure 1.3. Its advantage is that there is a continuous spectrum from the diffraction grating, so the resolution is determined by the optical magnification and detector size (parameters that a designer should have full control over). However, in order to view a conventional (x,y) image for target acquisition and tracking, it requires a scanning mechanism which can be sensitive to vibration and motion of the target.

Figure 1.3: Grating image spectrometer three-dimensional image cube.


A third design is a Michelson interferometer setup with an oscillating mirror in one path. As the mirror moves, the interference term varies as the cosine of the oscillation, with a frequency proportional to the mirror velocity divided by the wavelength, for a monochromatic source. A source of a different wavelength or frequency will provide an interference pattern with a different frequency. If the two are used at the same time then the pattern will be the sum of two cosines, each modified by its amplitude. With a multi-spectral source, the interferogram obtained as the mirror moves is the sum of a collection of monochromatic interference patterns, each with its own amplitude. The interference pattern is recorded over time, correlated to the optical path difference caused by the mirror, and Fourier transformed to yield the spectrum. A CCD focal plane array can be placed at the detection plane to simultaneously record the frequency spectrum of multiple pixels, forming an image once the transform is performed, as shown in Figure 1.4. This technique provides very high resolution data; however, it is sensitive to vibrations and target motion during recording.


Figure 1.4: Three-dimensional data cubes for a Fourier Transform image spectrometer.
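The sum-of-cosines picture can be made concrete with a small numerical sketch (the sampling parameters and line wavelengths are illustrative, not those of any real instrument): two closely spaced lines, such as the Mercury 577/579 nm doublet, produce a beating interferogram in optical path difference (OPD), and a Fourier transform recovers the spectrum.

```python
import numpy as np

# Interferogram of a two-line source: each wavelength contributes one cosine
# in OPD; the Fourier transform of the interferogram recovers the spectrum.
opd = np.linspace(0.0, 200.0, 4096)        # OPD samples, in micrometers
lines_um = np.array([0.5768, 0.5791])      # e.g. the Hg doublet, in micrometers
signal = sum(np.cos(2 * np.pi * opd / lam) for lam in lines_um)

spectrum = np.abs(np.fft.rfft(signal))
wavenumber = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])  # cycles per micrometer
peak_sigma = wavenumber[np.argmax(spectrum)]
print(1.0 / peak_sigma)  # recovered wavelength, near 0.577-0.579 um
```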

1.4 Spectrotomography Techniques

Spectrotomography is a new branch of optical tomography where the three-dimensional data cube is integrated along various axes, forming two-dimensional projections of the 3D data. These projections, containing integral data on the spectral and spatial properties of the object of interest, are recorded then processed with computer tomography algorithms to reconstruct the 3D data cube. Researchers use several techniques with various dispersion, scanning, and reconstruction methods for applying computer tomography to imaging spectrometry. A major advantage of these systems is high throughput efficiency: the projections integrate all possible photons and therefore no light is discarded, which is useful in low light scenarios, although the recorded data requires much processing to form a recognizable image cube. Also, much emphasis has been placed on the future ability to build a system with no moving parts and with a single snap-shot approach for moving targets [Descour 1995].

1.5 DOIS

The diffractive optic image spectrometer design is a spectro-tomographic technique which integrates over a narrow band of the image cube. As shown in Figure 1.5, the DOE provides both the spatial imaging and the spectral dispersion in this image spectrometer [Hinnrichs 1995, Lyons 1995].

A panchromatic CCD camera is used for detection. This camera, mounted parallel to the DOE, is stepped along the optical axis, z. Each detector location z corresponds to a specific spectral channel of the target's image. As an example, Figure 1.6 lists the calculated image distances z_i and magnifications for particular spectral slices of an object located 2f_d from the DOE, where f_d is the focal length at the DOE design wavelength.

The camera is interfaced to a frame grabber board within a computer for image capture, analysis, display and post detection processing. At each step along the optical axis, both an image frame and the corresponding detector location is recorded.


Figure 1.5: Three-dimensional spectral imaging with a diffractive optical element (DOE): the object cube (x,y,λ) maps to spectral images (x,y,z(λ)).

λ [nm]   z_i        M_transverse   M_longitudinal
650      1.7 f_d    0.83           0.68
600      1.9 f_d    0.96           0.92
588      2.0 f_d    1.00           1.00
550      2.3 f_d    1.15           1.32
500      2.9 f_d    1.43           2.04
450      3.8 f_d    1.88           3.55
400      5.5 f_d    2.77           7.69

Figure 1.6: The spectral image distances z_i and magnifications for an object at z_o = 2f_d.
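The values in Figure 1.6 follow from the thin-lens (Gaussian) imaging equation once the zone plate's first-order focal length is scaled inversely with wavelength, f(λ) = f_d·λ_d/λ. A short Python sketch reproduces the table to rounding; the 588 nm design wavelength is taken from the table's unit-magnification row:

```python
# Gaussian-imaging sketch of the DOE's chromatic focal shift (Figure 1.6).
# A zone plate's first-order focal length scales inversely with wavelength:
# f(lambda) = f_d * lambda_d / lambda, with lambda_d the design wavelength.
def spectral_image(lambda_nm, lambda_d_nm=588.0, z_obj_fd=2.0):
    f = lambda_d_nm / lambda_nm               # focal length, in units of f_d
    z_img = 1.0 / (1.0 / f - 1.0 / z_obj_fd)  # thin-lens equation, units of f_d
    m_t = z_img / z_obj_fd                    # transverse magnification
    return z_img, m_t, m_t ** 2               # longitudinal magnification = m_t^2

for lam in (650, 600, 588, 550, 500, 450, 400):
    z, mt, ml = spectral_image(lam)
    print(f"{lam} nm: z_i = {z:.1f} f_d, M_T = {mt:.2f}, M_L = {ml:.2f}")
```

Note how quickly the image distance grows toward the blue: the 400 nm slice focuses nearly 5.5 f_d away, which is what motivates the constant-magnification designs of Section 2.3.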

The images captured by the CCD are a superposition of one channel's in-focus image and defocused images of the surrounding channels. These defocused components cause blurring of the desired image.

Like a color filter, DOIS integrates over a bandpass, dictated by the DOE's incoherent point spread function. However, DOIS can provide greater spectral resolution by scanning along the z axis, shifting the center of the bandpass and sampling the spectra at several spectral points for every point that the filter could sample. Once sampled, digital image restoration techniques can be applied to remove the blurred information and reconstruct the original spectra.
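The restoration idea can be illustrated in one dimension along the spectral axis. The sketch below blurs a synthetic doublet with an assumed Gaussian spectral response (the real instrument response is the DOE psf characterized in Chapters 3 and 5) and then applies a regularized inverse filter of the kind developed in Chapter 8; all numbers are illustrative.

```python
import numpy as np

# 1D illustration: each recorded spectral sample is the true spectrum blurred
# by the instrument's spectral response (here a Gaussian, purely illustrative);
# a Fourier-domain regularized inverse filter undoes much of the blur.
n = 256
k = np.arange(n)
spectrum = np.zeros(n)
spectrum[100], spectrum[110] = 1.0, 0.8        # a synthetic "doublet"

blur = np.exp(-0.5 * ((k - n // 2) / 4.0) ** 2)
blur /= blur.sum()
blur = np.roll(blur, -n // 2)                  # center the kernel at index 0

recorded = np.real(np.fft.ifft(np.fft.fft(spectrum) * np.fft.fft(blur)))

H = np.fft.fft(blur)
alpha = 1e-3                                   # regularization constant
restored = np.real(np.fft.ifft(np.fft.fft(recorded) * np.conj(H) /
                               (np.abs(H) ** 2 + alpha)))
print(recorded.max(), restored.max())          # the restored peaks are sharper
```

The regularization constant trades noise amplification against sharpness, the same trade-off the regularized inverse filter of Chapter 8 makes on measured DOIS image cubes.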


The DOE dispersion is so great that each image is an integral of data in only a narrow spectral band, not the entire spectrum. This leads to a system that is useful both with and without post-detection processing. Without processing, DOIS provides spectral images with spectral resolution that rivals color filter wheels; the spectral resolution, defined by the region of spectral integration, was measured to be as good as 1.5 nm for spatially unresolved targets [Chapter 7]. Higher resolving power is achievable with DOIS by applying digital image restoration algorithms which improve the resolution by eliminating spectral and spatial blur [Chapter 8].

1.6 Dissertation's Objectives

Parallel to the efforts under this dissertation, Michele Hinnrichs from Pacific Advanced Technology (PAT) conceived and patented the mechanical design of the diffractive optic image spectrometer described in this dissertation [Hinnrichs 1995].

Although an IR system had been built and demonstrated on real-world targets, there was a lack of analysis theory and understanding of spectral imaging with a diffractive optic. This theory was needed for analytical modeling and simulation of the image spectrometer design, and required to develop object reconstruction algorithms for imaging resolved targets with high spectral and spatial resolution.

The objectives of this dissertation were to determine the theory behind diffractive spectral imaging, implement the theory in a software model to simulate DOIS image cubes, experimentally verify the simulated results with a hardware prototype imaging known spectral/spatial targets, and develop and apply object reconstruction algorithms to the simulated and experimentally measured image cubes. Additionally, an improved design was conceived to provide diffractive optic image spectrometry with constant magnification [Lyons 1997].


In the next chapter, the capabilities of the diffractive optic image spectrometer are first demonstrated. A visible spectral range DOIS prototype imaged an AppleColor™ high resolution RGB computer monitor with the words AFMC ROME LAB spelled out in blue, green and red phosphor respectively. Although a much more analytical characterization is presented later in Chapter 7, this experiment is useful for discussion and introduction to DOIS's operation.

Since the main component of this image spectrometer is a diffractive optical element (DOE) providing both imaging and dispersion, Chapter 3 details some important theory, design and characteristic equations, fabrication techniques and modeling schemes of the Fresnel Zone Plate, which is the DOE used in DOIS.

The theory of image formation with the diffractive optic is developed in Chapter 4.

The three-dimensional image cube is collected by recording a set of images taken at sequential focal planes. By analogy with physical sectioning techniques, this method is called diffractive spectral sectioning. For convenience, Figure 1.7 lists the six Cartesian coordinate systems used throughout this dissertation.

object space => obj(x_o, y_o, λ)
Gaussian image space => o(x, y, z)
image space => i(x_i, y_i, z_i)
isotomic image space => i'(x_i', y_i', z_i')
isotomic Gaussian image space => o'(x', y', z') at z_d
reconstructed object space => obj'(x_o, y_o, λ_j)

Figure 1.7: The six Cartesian Coordinate systems used in DOIS.

The DOIS prototype was fully characterized, first as individual components and then as a complete system. This characterization is presented in Chapter 5.

Chapter 6 presents both computer generated (CG) and experimentally determined image sets of known targets. As you will see, the linear system theory of Chapter 4 accurately predicts the output images formed by DOIS. The CG images are very similar to those determined experimentally.

A multi-spectral, multi-spatial four target testbed was designed and built to further test the performance of DOIS. The four targets include a Tungsten-Halogen grey body source, a multi-line Mercury source and two Helium Neon lasers, a 632.8 nm HeNe and a 542 nm GreNe. Chapter 7 summarizes the testbed's known spectral characteristics and includes a collection of spectral images and spectral radiance curves measured with DOIS.

Appendix A is a catalog of almost 600 images of the four targets that were recorded with the DOIS prototype. The most compelling results are the images and spectral plots of the Mercury doublet at 577 nm and 579 nm. Without any post-detection processing this doublet is resolved, demonstrating a spectral resolution of less than 2 nm!

Chapter 8 presents the object reconstruction algorithms to generate in-focus spectral images, spectral distributions, and full three-dimensional representations. Since not all applications require the same spectral resolution, processing techniques which fall in three general categories are presented, each providing different amounts of deblurring at various computational expense: nearest neighbor, inverse filtering and constrained iterative deconvolution.

The algorithms were implemented in Mathematica and applied to reconstruct both CG and experimentally measured objects. The Nearest Neighbor algorithm estimates any spectral image with only three "snapshots" and minimal computational expense. However, greater resolution can be obtained using an SVD inverse Fourier deconvolution with regularization for noise suppression. For completeness, constrained iterative deconvolution is presented, although in practice this method has a relaxation function which must be optimized to a particular application.

All of the deconvolution algorithms provide improved resolution and remove the effects of large targets, higher diffraction orders and stray light. The choice between


reconstruction approaches will ultimately depend on the application dictating such issues as the required spectral/spatial resolution, data storage, computation time and memory.

Chapter 9 summarizes the conclusions drawn during the course of this research, recommending the most effective approach to transition this work into a fieldable image spectrometer. The advantages of DOIS over conventional image spectrometers are discussed. Finally, three designs for future DOIS systems are presented: a dual-waveband design for simultaneous spectral imaging in two spectral bands, such as the mid IR (3 to 5 µm) and far IR (8 to 12 µm), and two configurations that perform image spectrometry with no moving parts, including a design which records the entire 3D image cube simultaneously in one "snapshot".


2. DOIS - DIFFRACTIVE OPTIC IMAGE SPECTROMETER

The principal shortcoming of DOEs for polychromatic imaging is abundant chromatic aberration. This chapter presents a patented design that exploits this typically unwanted effect to create an image spectrometer [Hinnrichs 1995, Lyons 1995]. Under this dissertation a visible regime Diffractive Optic Image Spectrometer (DOIS), shown in Figure 2.1, was designed, simulated, fabricated, demonstrated and characterized. A DOIS can be designed to operate at ultraviolet, visible or infrared wavelengths for multispectral and hyperspectral imaging in medicine, forensics, industrial and environmental monitoring, as well as military applications.

From Chapter 1, the goal of an image spectrometer is to obtain a set of spectral images of a target, forming an image cube. Each spectral image represents a spectral band or channel, defined by a central wavelength and its spectral bandwidth, Δλ. Referring to equation (1.1), the total number of channels is equal to the total wavelength range divided by the channel bandwidth:

# of channels = (λ_max − λ_min) / Δλ    (1.1)

2.1 Basic Scenario

A color computer monitor serves as the polychromatic source object; it will be referred to as the target. The DOE provides both the spatial imaging and the spectral dispersion in this image spectrometer. A conventional monochromatic CCD camera is used to record the image cube. This camera, mounted parallel to the DOE, is stepped along the optical axis, z. Each detector location, z, corresponds to a specific spectral channel of the image cube. At each step both an image frame and the corresponding detector location are recorded.

[Figure: the RGB monitor target imaged through the DOE at z = 0; the CCD steps along the optical axis in increments Δz. Peak phosphor emissions: P22R = 625 nm, P22G = 520 nm, P22B = 450 nm; image distances s_i(625 nm) = 20.63 cm, s_i(520 nm) = 25.30 cm. Note: images will be inverted.]

Figure 2.1: DOIS - Diffractive Optic Image Spectrometer basic scenario.

The camera is interfaced to a frame grabber board within a computer for image capture, analysis, display and post-detection processing. The images captured by the CCD are a superposition of one channel's in-focus image and defocused images of the surrounding channels. These defocused components cause blurring of the desired image. Post-detection, digital image restoration algorithms [Gonzalez 1987] are used to remove the unwanted blurred components, improving the imager's spatial and spectral resolution [Chapter 8].

The diffractive optic 1st order imaging equation relating wavelength to the object and image distances, s_o and s_i, is given in equation (2.1) [see Chapter 3 for the derivation]:

1/s_o + 1/s_i = 1/f_λ    (2.1)

where f_λ is the focal length at wavelength λ, calculated from f_d and λ_d, the DOE design focal length and wavelength.


Solving for wavelength:

λ = λ_d f_d (1/s_o + 1/s_i)    (2.2)

If s_o → ∞ then s_i = f_λ, and

λ = λ_d f_d / s_i    (2.3)

Figure 2.2 shows the use of equation (2.2) for the scenario in Figure 2.1.
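The Figure 2.2 calculation can be sketched numerically; the values below (λ_d = 588 nm, f_d = 20 cm, target at 7 ft ≈ 213.36 cm) are the prototype scenario's, and the helper name `wavelength_nm` is illustrative:

```python
# Wavelength in focus at a given detector position, equation (2.2):
#   lambda = lambda_d * f_d * (1/s_o + 1/s_i)
LAMBDA_D_NM = 588.0   # DOE design wavelength
F_D_CM = 20.0         # DOE design focal length

def wavelength_nm(s_i_cm, s_o_cm):
    """Wavelength (nm) brought to focus at image distance s_i for a target at s_o."""
    return LAMBDA_D_NM * F_D_CM * (1.0 / s_o_cm + 1.0 / s_i_cm)

# Target at 7 ft = 213.36 cm, detector at s_i = 20.63 cm:
print(round(wavelength_nm(20.63, 213.36), 1))  # 625.2 — the red P22R line
```

Stepping the detector to 25.30 cm gives the 520 nm green channel, matching the scenario of Figure 2.1.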

With the target relatively close to DOIS, as in the laboratory demonstration, it is best to keep the DOE fixed and step the detector, since an accurate target-to-DOE distance s_o is required. The practical field implementation would be designed for far range targets, s_o = ∞. At infinite conjugates the object distance s_o isn't sensitive to changes in the lens location, so the DOE can be scanned and the detector kept stationary, a preferable scenario.

[Figure: worked example with s_o = 7 ft = 213.36 cm, f_d = 20 cm, λ_d = 588 nm; image distances s_i1 = 20.63 cm, s_i2 = 25.3 cm, s_i3 = 29.73 cm give, from equation (2.2), λ = 588 × 20 × (1/20.63 + 1/213.36) = 625 nm, then 520 nm and 450 nm.]

Figure 2.2: Algorithm for calculating wavelength from image distance, s_i.

When a target such as AFMC ROME LAB, spelled out on a black background in blue, green and red respectively, is imaged with DOIS, each color or word will come to focus in a different position z_λ. The chromatic focal shift, Δf = λ_d f_d Δλ / (λ_1 λ_2), is so severe that the separation between a blue AFMC and a red LAB is almost 10 cm. This extreme falloff with color provides this image spectrometer with a FWHM spectral bandwidth ranging from 1 to 1.5 nm throughout the visible spectrum and over 100 possible spectral channels without post-detection image processing. Once a series of images is captured, the image


cube can be displayed as a series of spectral images, a normalized spectral radiance curve (intensity versus wavelength) for each pixel of interest, or a two-dimensional spatial/spectral cross-section, xz_λ or yz_λ, of the image cube. The stepping can be controlled to tune to one particular channel, or to step through a narrowband, wideband or the entire visible range in equal increments of dz, dλ, or any variation of user controlled steps. With proper control, a continuous video can be recorded matching the frame and scan rates, to calibrate the wavelength to time or frame number.

2.2 Experimental Demonstration with an RGB monitor

The Image Spectrometer of Figure 2.1 was built and demonstrated at Rome Laboratory's Photonic Center and is described below. This monitor demonstration is presented to facilitate an understanding of the basic operation and performance of DOIS. A much more analytical testing of the characteristics, performance and limitations of this Image Spectrometer is found in the following chapters: Chapter 5, Prototype Characterization, and Chapter 7, Prototype demonstration with four targets.

The DOE is a 5 cm diameter, BK-7, 2 level phase grating with a design focal length f_d = 20 cm at the design wavelength λ_d = 588 nm. The CCD camera is a SONY XC-75, monochrome video camera. The sensing area is 1/2 inch with 768 (H) x 494 (V) pixel elements. Each pixel is 8.4 µm x 9.8 µm. The approximate pixel size of 10 µm x 10 µm will be used for simplicity and to account for dead space between pixels. The camera was connected to a framegrabber board in a Macintosh PowerPC 8100/100AV. The images were recorded and displayed with MacPhase by Otter Solution, an image capture, analysis and visualization software package.


The target simulation monitor used in this demonstration is an AppleColor™ High Resolution RGB monitor. Each pixel of an RGB monitor is a cluster of three phosphor dots: a red, a green and a blue. All colors displayed on the monitor are a combination of these three basic colors. For instance, white is equal amounts of all three, bright yellow is equal amounts of red and green, pink is blue and red, and turquoise is blue and green.

Adobe Photoshop™ is a software program which allows precise selection of text colors and was used to create the display target. The words AFMC ROME LAB were spelled out in Photoshop™ on a black background. The color picker was used to select 100% blue font color for the AFMC, and 100% green and red for the ROME and LAB respectively. Figure 2.3 is an example of the color picker window in Adobe Photoshop™, shown here with 100% blue for the AFMC text.

Figure 2.3: Example of the color picker window in Adobe Photoshop™ for the blue AFMC target.

Figure 2.4 shows the relative spectral emission curves of the red, green and blue phosphors: the general wideband spectral emission curves for the three phosphors, blue P22B, green P22G and red P22R [Fink 1989]. Experimental characterization of the monitor's spectral radiance was performed with an Oriel grating monochromator and Newport power meter. This characterization confirmed the wideband spectral emissions of the blue and green phosphors. However, the red phosphor emitted in narrow lines, with two strong red lines, 625 and 617 nm, a few orange lines around 594 nm and even a few blips in the green region. Since the red emission differed so greatly from the textbook P22R spectral curve, both are plotted in Figure 2.4. Unsuccessful attempts were made to verify the spectral radiance of the monitor with the manufacturer. The line nature of the red could indeed be from the P22R phosphor or a possible coating or filter on the monitor screen, and the lines in the green region could be from misalignment of the electron beam accidentally exciting the green phosphor dots. In any event, the band and line emission of this monitor makes an excellent demonstration target.

The horizontal curve in Figure 2.4 is the DOIS minimum detection threshold. DOIS is able to detect a faint green, an orange and the two strong red lines of the red phosphor and the continuous wideband emissions of the blue and green.

[Figure: normalized spectral radiance vs. wavelength, 400-650 nm, showing the wideband blue P22B and green P22G curves, the line-structured measured red emission, and the DOIS detection threshold.]

Figure 2.4: Phosphor emission curves for the red (P22R), green (P22G), and blue (P22B) phosphors; the measured blue and green agree with the wideband textbook curves, however the measured red differs dramatically.

Spectral images of the monitor taken with DOIS are shown in Figure 2.5. These are raw (256 x 256) images, displayed without any processing. The images depict the band emission of P22B and P22G and the line nature of P22R. Scanning from 400 nm to 650 nm, the blue AFMC is present alone (a) from 400 nm to 490 nm; at 490 nm the green ROME cuts on and both AFMC and ROME can be seen in varying intensities (b) until the blue falls off completely at 540 nm. The green ROME is continually present until 595 nm. The intensity of

32

Figure 2.5: Spectral images recorded with DOIS. The target is text displayed on an AppleColor™ high resolution RGB monitor.
(a) λ = 452 nm, blue AFMC, mid blue focus. Blurring is from the wideband emissions of P22B.
(b) λ = 520 nm, green ROME, mid green focus with the blur from the band emissions of P22G. Note that, corresponding to Figure 2.4, P22B is still emitting.
(c) λ = 594 nm, green ROME and red LAB. Both P22G and P22R have slight emissions at this orange line.
(d) λ = 617 nm, red LAB. First strong P22R line with a halo from the 625 nm line.
(e) λ = 625 nm, red LAB. Strongest P22R line. Note that P22G is completely cut off.


the red LAB pops in and out as the scan progresses from 540 nm to 650 nm. A very dim LAB shows up at 542 nm, then again at 594 nm as in (c). It is absent until 615 nm, with a crisp focus at 617 and 625 nm, finally disappearing at 630 nm.

2.2.1 Blue Background Demonstration

The monitor demonstration was repeated with a blue background instead of the black. This simulates a scenario with a target surrounded by a bright background (the sky).

Figure 2.6: Images of the green ROME and red LAB on a blue background: (a) blue focus, (b) green focus, (c) red focus.

As shown above, DOIS is able to distinguish the targets. The defocused blue light adds to the noise floor and decreases the SNR. Notice that the green ROME is clear even though the blue phosphor is still emitting in that region.


2.3 Continuous Magnification

Recall from Figures 1.5 and 1.6 that the magnification changes with wavelength. Since the CCD pixel configuration is fixed, the image cube is recorded with non-uniform sampling. This creates problems in registering target information between spectral slices and limits the accuracy of image reconstruction algorithms.

2.3.1 Post-detection image resampling

To compensate for the changing magnification, the image cube can be resampled to approximate constant magnification. Illustrated in Figure 2.7, a pixel footprint or field-of-view (FOV) is defined at the smallest magnification. Each spectral image is then resampled by integrating over the same pixel FOV, Ω_pixel:

Ω_pixel = A_pixel / z_smallest²    (2.4)

Ω_pixel = A_pixel / z_1² = (integration area) / z_2²    (2.5), (2.6)

(10 µm × 10 µm) / (20 cm)² = (integration area) / (30 cm)²
integration area = 225 µm², i.e. 1.5 pixels on a side    (2.7)

Figure 2.7: Define a pixel footprint Ω_pixel (shown between the red and blue focal planes).

In order to form a constant magnification image cube with each spectral image having 256x256 spatial pixels, the red image at z = 20 cm is recorded with 256x256 pixels and the blue image at z = 30 cm will need to be recorded with 384x384 pixels, then interpolated to fit the 256x256 spectral slices of the image cube.

Problems can arise with this technique. Resampling the detected image cube requires averaging a non-integral number of pixels. This can cause blurring of edges and inaccurate pixel registration. A more accurate optical method of obtaining an image cube with constant magnification is presented in section 2.3.2.
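The resampling arithmetic above can be sketched as follows (prototype numbers assumed: 10 µm pixels, smallest-magnification plane at 20 cm, largest at 30 cm; the helper names are illustrative):

```python
# Pixel-footprint resampling from section 2.3.1.
PIXEL_UM = 10.0

def integration_area_um2(z_cm, z_smallest_cm=20.0):
    """Equations (2.4)-(2.7): area one resampled pixel integrates at plane z,
    holding the pixel solid-angle footprint constant."""
    scale = z_cm / z_smallest_cm          # linear magnification ratio
    return (PIXEL_UM * scale) ** 2

def raw_pixels_needed(n_final, z_cm, z_smallest_cm=20.0):
    """Raw pixels to record at plane z so resampling yields n_final pixels."""
    return round(n_final * z_cm / z_smallest_cm)

print(integration_area_um2(30.0))    # 225.0 um^2, i.e. 1.5 pixels on a side
print(raw_pixels_needed(256, 30.0))  # 384, the blue-plane count quoted above
```

The non-integral 1.5-pixel side length is exactly the source of the edge blurring and registration error noted in the text.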

2.3.2 Optical designs for constant magnification

A second approach to DOIS imaging with constant magnification is a redesigned optical train. A refractive lens is added between the DOE and CCD to relay and magnify the DOE images [Lyons 1997]. The relay lens is the scanned element, and its position dictates the spectrum of the recorded image.

[Figure: the DOE image space (600, 500, 400 nm foci) relayed by a scanned lens onto a fixed CCD.]

Figure 2.8: Relay DOIS design with minimal magnification change.

Shown in Figure 2.8, the total tube length or optical train length, T, is held constant. The DOE still performs the dispersion and forms an image space; however, a refractive, non-dispersive lens is placed after the DOE image space, relaying the DOE images to the CCD camera. The images formed by this intermediate lens on the CCD will be erect, and the spectra will depend on the lens' position, l, and the target's object distance, s_target. The following equations derive the relationship between spectra and lens location.

Starting from equation (2.2), with the DOE image distance s_iDOE taking the role of s_i and the target distance s_target that of s_o:

λ = λ_d f_d (1/s_iDOE + 1/s_target)    (2.8)

The relay lens at position l images the plane at s_iDOE onto the CCD:

s_iDOE = l − s_olens    (2.9)

1/s_olens + 1/s_ilens = 1/f_lens    (2.10)

s_olens = s_ilens f_lens / (s_ilens − f_lens)    (2.11)

s_ilens = T − l    (2.12)

Combining (2.8) through (2.12):

λ = λ_d f_d [ 1 / (l − (T − l) f_lens / (T − l − f_lens)) + 1/s_target ]    (2.13)

The selected wavelength λ is calculated in equation (2.13) from the DOE design constants λ_d and f_d, the tube length T, the focal length of the relay lens f_lens, the lens position l, and the target's object distance s_target.

[Figure: magnification change (0-50%) vs. wavelength (400-650 nm) for the original DOIS, the relay design and the zoom design.]

Figure 2.9: Magnification change vs. wavelength for original, relay and zoom designs.

The change in magnification versus wavelength for this relay design is plotted in Figure 2.9 along with that of the original design. Notice that although the magnification isn't constant, the change over the entire spectral band is substantially reduced, to less than 10%. To further improve the changing magnification, the relay lens can be replaced with a pair of lenses forming a zoom lens, shown in Figure 2.10. The constant magnification of the zoom lens design is shown as a dotted line in Figure 2.9.

[Figure: zoom DOIS layout, DOE to CCD, over a 300 mm optical train.]

Figure 2.10: Zoom DOIS design with constant magnification.


3. DIFFRACTIVE OPTICAL ELEMENTS

The three main considerations in designing an optical system are cost, weight and performance. Diffractive optical elements (DOEs) have the potential to improve all three.

DOEs have a spectral dispersion inversely proportional to wavelength, whereas refractive optical elements have dispersion proportional to wavelength. Typically lens designers combine these dispersion characteristics to obtain an achromat [Ward 1971, Stone 1990]. However, in this system the DOE dispersion is used alone, and exploited to create an imaging spectrometer.

3.1 Fresnel Zone Plate

The potential usefulness of diffractive optical elements has been known for years [Miyamoto 1961, Lesem 1969]. The problem with implementing them in real systems has been the lack of a process that can design and reliably produce the elements to the tolerances necessary for high diffraction efficiency. Figure 3.1 (a) shows an example of a Fresnel zone plate phase profile needed to achieve high efficiency. The 2π phase depth corresponds to a material etch depth of about 2 µm for mid-infrared radiation and 1 µm for visible. Several research teams are exploring techniques to produce this continuous phase profile with laser writers or greyscale masks [Morris 1994, O'Shea 1994]. However, it is common practice to approximate the continuous phase profile by quantized discrete phase levels [Goodman 1970, Swanson 1989]. Figures 3.1 (b) and (c) show the Fresnel zone plate phase profile quantized into two and four phase levels, respectively. The two level profile results in an efficiency of 40% and the four level profile in an efficiency of 81%. For most systems it is necessary to achieve diffraction efficiency of 99% or greater; this can be achieved with 16 levels created with 4 masks. Figure 3.2 shows the diffraction efficiency as a function of the number of phase levels and required masks. Since this


dissertation is only meant as a proof-of-concept, 40% is tolerable and a 2 level phase plate was fabricated with one mask.

[Figure: Fresnel zone plate phase profiles, radial distribution: (a) continuous, (b) two level, (c) four level.]

Figure 3.1: Fresnel zone plate phase profiles.

[Figure: multi-level zone plate 1st order diffraction efficiency vs. number of masks (1-4) and phase levels (2-16).]

Figure 3.2: Diffraction efficiency vs. number of masks and phase levels.

40

3.2 DOE design and characteristic equations

[Figure: ray from the edge of zone m converging to the focus over a path of length f + mλ/2.]

Figure 3.3: Ray trace that determines zone radii.

The zones of the Fresnel Zone Plate are defined so that the optical path difference between adjacent zones is λ/2, causing constructive interference, shown in Figure 3.3 and characterized by equation (3.1) [Hecht 1989]. From the Pythagorean theorem, the radius of the m-th zone, r_m, satisfies:

r_m² + f² = (f + mλ/2)²,  so  r_m = √(mλf + m²λ²/4)    (3.1)

where m is the zone counter, r_m is the value of the m-th radius, λ is the design wavelength, and f is the primary focal length, corresponding to the 1st diffractive order. Since mλ/2 ≪ f,

r_m² = f m λ    (3.2)

Solving for f:

f = r_m² / (mλ)    (3.3)


The zone radii are calculated to focus a particular design wavelength, λ_d, at a design focal length, f_d:

r_m² = f_d m λ_d    (3.4)

Once the DOE is fabricated with these fixed zones determined from the design parameters, the focal position at wavelengths other than the design wavelength can be found by substituting equation (3.4) into (3.3):

f(λ) = r_m² / (mλ) = f_d m λ_d / (mλ) = λ_d f_d / λ    (3.5)

This wavelength dependence is seen as chromatic aberration, where the effective focal length is inversely proportional to wavelength. Adding it to the Gaussian lens formula yields the diffractive optic 1st order imaging equation:

1/s_i(λ) = 1/f(λ) − 1/s_o    (3.6)

where s_o is the object distance and s_i(λ) is the spectral image distance.
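Equations (3.5) and (3.6) can be sketched with the prototype values (λ_d = 588 nm, f_d = 20 cm; the 213.36 cm object distance is the Chapter 2 monitor demonstration's, and the helper names are illustrative):

```python
# Focal length and spectral image distance of the DOE vs. wavelength.
LAMBDA_D_NM, F_D_CM = 588.0, 20.0

def focal_cm(lam_nm):
    """Equation (3.5): f(lambda) = lambda_d * f_d / lambda."""
    return LAMBDA_D_NM * F_D_CM / lam_nm

def image_dist_cm(lam_nm, s_o_cm):
    """Equation (3.6): 1/s_i(lambda) = 1/f(lambda) - 1/s_o."""
    return 1.0 / (1.0 / focal_cm(lam_nm) - 1.0 / s_o_cm)

# Monitor demonstration, target at 7 ft = 213.36 cm:
print(round(image_dist_cm(625.0, 213.36), 2))  # 20.64 cm, the red focus
print(round(image_dist_cm(520.0, 213.36), 2))  # 25.3 cm, the green focus
```

These reproduce the 20.63 cm and 25.30 cm detector positions of the Figure 2.1 scenario.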

A critical figure of merit for this application is the first order diffraction efficiency, η_1. It is dependent on the etch depth, the operational wavelength and the number of phase levels etched into the grating. The diffraction efficiency can be calculated from equation (3.7) below, and is plotted in Figure 3.4 for a 2 level zone plate and a high efficiency 16 level DOE [Swanson 1989]:

η_m = [ sin(π((n−1)d/λ − m)) / (N sin(π((n−1)d/λ − m)/N)) ]² [ sin(π(n−1)d/(λN)) / (π(n−1)d/(λN)) ]²    (3.7)

where d = λ_d/(n−1), and:

η is the diffraction efficiency
n is the refractive index of the substrate
d is the etch depth
m is the diffracted order of interest, typically m=1
λ_d is the design wavelength
λ is the operational wavelength
N is the number of levels in the grating (N=2 for the DOIS prototype)

" ^ • 3" order

2°lorder

1" order

.N=16

0.7 '

0.6 •

0.3 •

0 . 2 •

0.0

0.0 0.2

O A

Ofi 1.4 1.6 1.6 2.0 2.2 2.4 2.6 2.8 3.0

Figure 3.4: Di^raction Efficiency vs. Wavelength for N=2 and N=16.

The curves in Figure 3.4 are useful in selecting the proper design wavelength for applications where it is important to have high efficiency through a spectral range and not at just one wavelength, such as in DOIS. Note that the efficiency falls off much faster for smaller wavelengths; therefore λ_d should be selected near the shorter wavelength end of the range, not simply at the center wavelength.

An interesting note is that the peak for the two level grating is not at λ/λ_d = 1 but at 1.3. This leads to the conclusion that the calculation for etch depth, d, which is formulated for a continuous grating, isn't optimal when you have only 2 levels.
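A numerical sketch of the N-level efficiency follows (the detuning formula is the multilevel-DOE analysis attributed to [Swanson 1989]; the full 2π design depth d = λ_d/(n−1) is assumed, so (n−1)d/λ = λ_d/λ):

```python
import numpy as np

def efficiency(lam_over_lamd, N, m=1):
    """First-order efficiency of an N-level DOE vs. normalized wavelength,
    assuming design etch depth d = lambda_d/(n-1)."""
    alpha = 1.0 / lam_over_lamd              # (n-1)d/lambda = lambda_d/lambda
    x = alpha - m
    # np.sinc(x) = sin(pi x)/(pi x), so this ratio is sin(pi x)/(N sin(pi x/N))
    quantization = (np.sinc(x) / np.sinc(x / N)) ** 2
    blaze = np.sinc(alpha / N) ** 2
    return quantization * blaze

print(round(float(efficiency(1.0, 2)), 3))   # 0.405 — the 40% two-level figure
print(round(float(efficiency(1.0, 16)), 3))  # 0.987 — the ~99% sixteen-level figure
ratios = np.linspace(1.0, 1.6, 601)
peak = float(ratios[int(np.argmax(efficiency(ratios, 2)))])
print(peak)  # two-level peak lies near 1.3, not at 1.0
```

The scan over λ/λ_d reproduces the off-design peak of the two-level grating noted above.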


3.3 Fabrication of Multilevel Profiles

Swanson and Veldkamp developed a scheme that can accurately and reliably produce these multilevel diffraction surfaces with diffraction efficiencies in excess of 90% [Veldkamp 1989]. Their technique takes advantage of technology the integrated circuit industry has developed for the miniaturization of circuits. Three essential tools that were developed are electron beam pattern generators, reactive ion etchers, and mask aligners. E-beam pattern generators are capable of drawing binary amplitude patterns with feature sizes of 0.1 µm and positioning the features to an even greater accuracy. Reactive ion etchers can etch a binary profile to depths of a few microns with an accuracy on the order of tens of angstroms. Mask aligners are used routinely to align two patterns with an accuracy of fractions of a micron. These are the key technological advances that make it possible to produce high quality diffractive phase profiles.

Electron beam pattern generators produce masks that have binary transmittance profiles. A thin layer of chrome on an optically flat quartz substrate is patterned by the e-beam machine. The input to the e-beam pattern generator is a file stored on a computer tape and properly formatted for the particular machine. For multilevel diffractive elements the number of phase levels in the final diffractive element constructed from these masks is 2^n, where n is the number of masks. For example, only four masks will produce 16 phase levels, resulting in an efficiency of 99%.

The binary amplitude masks produced from the pattern generator are actually low efficiency zone plates themselves, but are used to construct the multilevel optical element.

The fabrication process using the first mask is shown in Figure 3.5. The optical surface on which the diffractive profile is to be etched is coated with a layer of photoresist. The e-beam generated mask is then placed over the substrate and illuminated with a standard UV photoresist exposure system. The photoresist is then developed, resulting in a properly patterned layer of photoresist. The photoresist acts as an etch stop for the reactive ion etching.

[Figure: binary element fabrication sequence - binary amplitude mask over the substrate; expose and wet etch; reactive ion etch; remove photoresist; binary phase element.]

Figure 3.5: Photolithography multilevel fabrication algorithm.

Reactive ion etching is a process in which an RF electric field excites a gas, producing ions. The ions react with the material of the substrate and etch away the surface at a controlled rate. The reactive ion etching process is anisotropic, so the vertical side walls of the discrete phase profile are retained. Typical etch rates are on the order of 100 Å to 200 Å per minute. As an example, the required first-level etch depth for a BK-7 substrate to be used at a wavelength of 0.588 µm is 1.176 µm. The necessary etch time is on the order of an hour. Numerous elements can be etched simultaneously. After the pattern of the first mask has been etched into the substrate, any residual photoresist is stripped away.

For multilevel structures the same procedure is then repeated on the optical substrate, only this time with the second mask and etching to one-half the depth of the first etch. For the second and subsequent masks an additional complication arises: these masks have to be accurately aligned to the already existing pattern produced from the first etch.


Commercially available mask aligners are capable of aligning two patterns to a fraction of a micron. This accuracy is sufficient to retain diffraction-limited performance for most multilevel structures designed to operate in the visible and infrared.

This technique was used at the Nanofabrication Facility at Cornell University to fabricate the DOE lens for the DOIS prototype [Lyons 1994]. The DOE is a BK-7, 2 level Fresnel Phase Plate with approximately 2200 zones and a minimum feature size of 2.5 µm. Figure 3.6 shows two scanning electron microscope (SEM) photos of the rings.

Figure 3.6: SEM photos of DOE rings. (a) Top view. (b) Side view of the outermost rings, feature size ≈ 2.5 µm.


3.4 The Incoherent Point Spread Function

The key parameter for a successful simulation and reconstruction is an accurate psf, h. Since the equivalence between DOEs and lenses has been established [Sweatt 1979], the three-dimensional psf of a lens can be adapted to the DOE. The theoretical diffraction limited and experimentally measured intensity point spread functions are presented below.

3.4.1 Diffraction limited incoherent psf

Born and Wolf describe the normalized three-dimensional distribution of intensity in the neighborhood of focus, I(v,u), for a rotationally symmetric optical system as a convergent series of Bessel functions, with the two equivalent expressions [Born 1989]:

h(v,u) = I(v,u) = (2/u)² [U_1²(v,u) + U_2²(v,u)]    (3.8)

h(v,u) = I(v,u) = (2/u)² { 1 + V_0²(v,u) + V_1²(v,u) − 2 V_0(v,u) cos[½(u + v²/u)] − 2 V_1(v,u) sin[½(u + v²/u)] }    (3.9)

where:

U_n(v,u) = Σ_{s=0}^∞ (−1)^s (u/v)^(n+2s) J_(n+2s)(v)    (3.10)

V_n(v,u) = Σ_{s=0}^∞ (−1)^s (v/u)^(n+2s) J_(n+2s)(v)

u = (2π/λ) (a/f)² z    (3.11)

v = (2π/λ) (a/f) r    (3.12)

Figure 3.7 is a density plot of equation (3.8), in terms of the dimensionless variables u and v, and the scaled dimensions r [µm] and z [µm] calculated from the specific parameters of the DOIS prototype design. In three dimensions h(v,u) is circularly symmetric about the optical axis, u. When evaluated with specific design parameters, λ, f, and aperture radius, a, the intensity point spread function will be scaled but maintain the fundamental spheroidal form.

Figure 3.7: A density plot of the diffraction limited intensity psf, h(v,u); it is rotationally symmetric around the u axis.

For graphical analysis it is appropriate to look at three different views or cross-sections. The first is a two-dimensional spatial psf in the plane perpendicular to the optical axis at focus, h(v,0). Shown in Figure 3.8, this is a cross-section of an in-focus monochromatic point source. It displays the expected amount of image degradation or blurring applied to spatially adjacent coordinates and is a limiting factor in determining the spatial resolution. Substituting u=0 into equation (3.8):

h(v,0) = [2 J₁(v) / v]²    (3.13)

Figure 3.8: A theoretical plot of h(v,0) in the focal plane.

When rotated around the optical axis this pattern is known as the Airy disk. The majority of the intensity falls between the first zeros of the Bessel function at Δr = ±1.22 λ F/#. For the F/4 DOE designed for the prototype, Δr = 5.7 μm. This is smaller than the 10 μm pixels of the CCD detector. Therefore, once detected, the in-focus diffraction limited incoherent psf will look like a delta function.
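As a quick numerical check of this claim (a sketch, not code from the dissertation; the 588 nm design wavelength and F/4 are taken from the prototype parameters of Chapter 5):

```python
# Full width between the first zeros of the Airy pattern, 2 x 1.22*lambda*F/#,
# for the F/4 DOE at the 588 nm design wavelength, compared with the 10 um
# CCD pixel pitch.
wavelength_um = 0.588   # design wavelength [um]
f_number = 4.0          # F/4 DOE
pixel_um = 10.0         # CCD pixel width [um]

airy_radius_um = 1.22 * wavelength_um * f_number   # first-zero radius
airy_width_um = 2.0 * airy_radius_um               # full width, ~5.7 um

print(f"Airy full width: {airy_width_um:.1f} um")
print(f"Fits in one {pixel_um:.0f} um pixel: {airy_width_um < pixel_um}")
```

The computed 5.7 μm width falls entirely within one pixel, which is why the detected in-focus psf looks like a delta function.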

The second view of interest is an intensity plot through focus, h(0,u), along the optical axis, normally considered the depth of focus. Since DOIS interprets z as wavelength, h(0,u) represents the expected amount of blurring applied to adjacent spectral slices. It is the limiting factor that determines the spectral resolution and bandwidth.

Substituting v=0 into equation (3.9):

h(0,u) = [sin(u/4) / (u/4)]²    (3.14)

Figure 3.9: A theoretical plot of h(0,u) along the optical axis.


In Figure 3.9, the first zero in intensity along the axis is at u = ±4π. For the prototype Δz = ±75 μm, predicting a theoretical incoherent diffraction limited spectral resolution of Δλ = 0.256 nm.

The third view of the psf is along the geometrical boundary, where u=v. Calculated in equation (3.15) and plotted in Figure 3.10, h(v,v) dictates the blur seen from coordinates that differ in both spectral and spatial location. Substituting u=v into equation (3.9):

h(v,v) = (1/v²) [1 + J₀²(v) − 2 J₀(v) cos v]    (3.15)

Figure 3.10: A theoretical plot of h(v,v) where u=v.
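The three cross-sections of equations (3.13)–(3.15) can be evaluated directly. The sketch below (not code from the dissertation) implements them with SciPy's Bessel functions; the v=0 and u=0 limits are filled in by hand:

```python
import numpy as np
from scipy.special import j0, j1

def h_v0(v):
    """In-focus transverse psf, eq. (3.13): (2 J1(v)/v)^2."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    out = np.ones_like(v)                      # limiting value 1 at v = 0
    nz = v != 0
    out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
    return out

def h_0u(u):
    """On-axis psf, eq. (3.14): (sin(u/4)/(u/4))^2."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    return np.sinc(u / (4.0 * np.pi)) ** 2     # np.sinc(x) = sin(pi x)/(pi x)

def h_vv(v):
    """Boundary psf, eq. (3.15): [1 + J0(v)^2 - 2 J0(v) cos v] / v^2."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    out = np.ones_like(v)                      # limiting value 1 at v = 0
    nz = v != 0
    out[nz] = (1.0 + j0(v[nz])**2 - 2.0*j0(v[nz])*np.cos(v[nz])) / v[nz]**2
    return out
```

All three reduce to 1 at the focal point, and h(v,0) has its first zero at v ≈ 3.83, the first zero of J₁.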

3.4.2 Experimentally measured incoherent psf

Unfortunately, because of aberrations, noise and other errors, the blurring encountered in real systems is greater than predicted by the diffraction limited intensity point spread function. Another approach to obtain the three-dimensional incoherent psf is by direct experimental measurement using a laser illuminated pinhole to simulate a monochromatic point source. The following discusses the validity of simulating the incoherent intensity psf with images of a coherent pinhole.

3.4.2.1 Coherent vs. Incoherent Imaging

Coherent imaging is described by the amplitude convolution equation [Goodman 1968]:

U_im(x_i, y_i) = ∬ h(x_i − x_o, y_i − y_o) U_obj(x_o, y_o) dx_o dy_o    (3.16)

where U_im(x_i, y_i) represents the amplitude distribution in image space from the amplitude distribution in object space, and h(x_i, y_i) is the coherent or amplitude impulse response. However, it is actually the image's intensity which is detected:

coherent: I_im(x_i, y_i) = | ∬ h(x_i − x_o, y_i − y_o) U_obj(x_o, y_o) dx_o dy_o |²    (3.17)

Incoherent imaging is a linear mapping of intensity described by the intensity convolution integral:

incoherent: I_im(x_i, y_i) = ∬ |h(x_i − x_o, y_i − y_o)|² I_obj(x_o, y_o) dx_o dy_o    (3.18)

where I_obj(x_o, y_o) is the object's intensity distribution, equal to |U_obj|², and |h(x_i, y_i)|² is the incoherent or intensity impulse response, previously represented by h(x_i, y_i).

To find the impulse response, the input object U_obj(x_o, y_o) is a perfect point source represented by a delta function, δ(x_o, y_o). Substituting the delta function into equation (3.17) for coherent imaging yields:

coherent: I_im(x_i, y_i) = | ∬ h(x_i − x_o, y_i − y_o) δ(x_o, y_o) dx_o dy_o |² = |h(x_i, y_i)|²    (3.19)

Likewise for incoherent imaging, substituting the delta function into equation (3.18), where I_obj(x_o, y_o) = |δ(x_o, y_o)|² = δ(x_o, y_o):

incoherent: I_im(x_i, y_i) = ∬ |h(x_i − x_o, y_i − y_o)|² δ(x_o, y_o) dx_o dy_o = |h(x_i, y_i)|²    (3.20)

This shows that if the input object were indeed a delta function point source, the recorded intensity would represent the incoherent or intensity impulse response, |h(x_i, y_i)|², of the system. However, in practice every target has some finite width, so the perfect delta function point source does not exist.

The implications of the difference between coherent and incoherent illumination can be seen if the object being imaged is a pair of delta function point sources separated by the Rayleigh distance, denoted by x_r [Gaskill 1978]. If the two sources are incoherent, the total image irradiance is simply the sum of the image irradiance distributions of the individual sources:

incoherent: I_im(x_i, y_i) = |h(x_i, y_i)|² + |h(x_i − x_r, y_i)|²    (3.21)

On the other hand, if the two sources are mutually coherent, the overall image irradiance is given by the squared modulus of the sum of the amplitude responses of the individual sources. If the phase difference between the two sources is Φ, the irradiance may be expressed as:

coherent: I_im(x_i, y_i) = |h(x_i, y_i)|² + |h(x_i − x_r, y_i)|² + e^(−iΦ) h(x_i, y_i) h*(x_i − x_r, y_i) + e^(iΦ) h*(x_i, y_i) h(x_i − x_r, y_i)    (3.22)

Notice that this is the incoherent result of equation (3.21) with additional interference terms. It is these interference terms which create the difference in images formed from coherent and incoherent targets, and the difference between the images of a laser illuminated pinhole and the desired incoherent intensity point spread function.
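The two-source comparison of equations (3.21) and (3.22) can be sketched in one dimension. A Gaussian is used here as a hypothetical stand-in for the amplitude impulse response (the real system's h is the Airy amplitude pattern), and the separation x_r and phase Φ are arbitrary illustration values:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
xr = 2.0                                 # source separation (illustrative)
h = lambda t: np.exp(-t**2)              # stand-in amplitude impulse response
phi = 0.0                                # phase difference between the sources

a1, a2 = h(x), h(x - xr)
incoherent = np.abs(a1)**2 + np.abs(a2)**2            # eq. (3.21): sum of intensities
coherent = np.abs(a1 + np.exp(1j * phi) * a2)**2      # eq. (3.22): |sum of amplitudes|^2

# The cross (interference) term is exactly the difference between the two:
interference = coherent - incoherent   # = 2 Re[e^{i phi} a1 a2*] for this real h
```

With Φ = 0 the interference term is positive between the sources, so the coherent image peaks higher than the incoherent one, mirroring the fringing in Figure 3.11(a).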

Reynolds et al. depict the difference between the coherent and incoherent illumination of a slit object in the book The New Physical Optics Notebook: Tutorials in Fourier Optics, reprinted here in Figure 3.11. The theoretical intensity distribution of the coherent and incoherent images of a slit is shown in (a); notice the coherent illumination fringes caused by the interference terms of equation (3.22).

Figure 3.11: (a) The theoretical intensity distribution in the coherent and incoherent images of a slit; (b) the recorded image of a coherent slit. (Reprinted from Reynolds 1989, page 115.)

For this dissertation, the image distribution from a coherently illuminated 5 μm diameter pinhole is detected, which is a circularly symmetric version of the slit. The diffraction limited intensity distribution in the coherent and incoherent images of various pinholes was calculated using the diffraction limited coherent and incoherent impulse responses of Figure 3.12.


Figure 3.12: The diffraction limited impulse response for (a) coherent illumination, the amplitude impulse response h(x_i, y_i), and (b) incoherent illumination, the intensity impulse response |h(x_i, y_i)|².

Figure 3.13 shows cross-sections of various magnified pinholes in the left column (M_T = −0.1), alongside the resulting diffraction limited intensity distributions from coherent and incoherent illumination plotted together in the right column. Notice that for a large 160 μm pinhole, Figure 3.13(a), the distribution resembles the slit image of Figure 3.11(a). The incoherent image (thin line) is a smoothed, slightly expanded version of the pinhole's rectangle function, whereas the coherent image (bold line) has additional ringing about the incoherent distribution due to the interference terms of equation (3.22). For smaller pinholes such as the 20 μm in (d), the image distribution looks less like the slit image and more like the incoherent intensity impulse response. For the 5 μm pinhole used in the experimental psf measurement, shown in (f), the intensity distribution matches the intensity

Figure 3.13: Cross-sections of various magnified pinholes (left column): (a) 160 μm, (b) 100 μm, (c) 50 μm, (d) 20 μm, (e) 10 μm, (f) 5 μm, alongside the resulting diffraction limited intensity distributions from coherent (bold line) and incoherent (thin line) illumination plotted together (right column).

impulse response and the difference between the coherent and incoherent illumination is negligible.

Another experimental consideration is that the detector pixels are 10 μm wide. This creates a scenario where all of the distributions of Figure 3.13 are unresolved. The diffraction limited images of pinholes smaller than 100 μm fall entirely within the boundaries of one pixel. Therefore, the interference effects of the coherent source are not resolved and the coherent illumination can accurately simulate incoherent illumination.

3.4.2.2 Experimentally measured results

A set of images of the coherently illuminated 5 μm pinhole taken at different levels of defocus is shown in Figure 3.14. For this data set the CCD was stepped along the optical axis with a step size of dz = 0.125 mm. This series shows the effect of aberrations on the incoherent point spread function. The predominant aberration is spherical, visualized as a lack of symmetry about best focus. The spherical aberration caustic creates a hard ring of intensity on the marginal side of focus and a softer, more evenly distributed blurring on the paraxial side of focus. Theoretically the coherent source will cause interference fringes on top of the aberrated intensity distributions; however, again they are unresolved by the 10 μm pixels and average to an appropriate representation of the incoherent intensity impulse response.

marginal → best focus → paraxial

Figure 3.14: Series of images (32 x 32 pixels) of a 5 μm pinhole illuminated by a HeNe laser recorded with DOIS through focus.

A measured 3D incoherent intensity point spread function was assembled by stacking 32 x 32 pixel images of a pinhole illuminated with a 542 nm GreNe laser at 32 stages of defocus. The incremental step size was dz = 0.125 mm, corresponding to a wavelength step size dλ = 0.25 nm. A cross-section of the psf cube, h(r,z), is plotted in Figure 3.15 as (a) a density plot and (b) a three-dimensional plot. The 3D distribution is circularly symmetric about the r=0 optical axis.

Figure 3.15: The experimentally measured intensity point spread function, h(r,z): (a) a density plot, (b) a 3D plot.

The measured spectral intensity point spread function, h(z(λ)) = h(0,u), is plotted in Figure 3.16 with the theoretical h(z(λ)) determined from both geometric and diffraction limited theory. The z axis labels include the spectral conversion. The full width at half max (FWHM) represents the spectral bandwidth of DOIS. The FWHM diffraction limited bandwidth is Δλ = 0.06 nm and the measured spectral bandwidth is Δλ = 1.5 nm, both determined at λ = 542 nm. See Chapter 5 for more on the experimental characterization of DOIS.

Figure 3.16: The theoretical (geometric and diffraction limited) and measured on-axis/spectral intensity psf, h(z(λ)) = h(0,u).

The three-dimensional intensity psf of Figure 3.15 can be Fourier transformed to yield the three-dimensional optical transfer function (OTF) shown in Figure 3.17. The OTF is required for several of the digital image restoration techniques in Chapter 8. It is important to note that there is a significant difference in the behavior of the experimental and theoretical OTFs. Thus it is extremely important to use the experimental OTF wherever possible. As there is probably less difference between similar DOE lenses than there is between experimental and theoretical functions, it is reasonable to simply interpolate from experimental curves to correct for changes in the design parameters.
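The psf-to-OTF step can be sketched numerically. The cube below is a synthetic stand-in for the measured 32 x 32 x 32 psf, not the dissertation's actual data; the essential steps are normalizing the psf, centering it, and taking a 3D FFT:

```python
import numpy as np

# Stand-in for the measured psf cube: a normalized 3D Gaussian blob.
zz, yy, xx = np.mgrid[-16:16, -16:16, -16:16]
psf = np.exp(-(xx**2 + yy**2) / 8.0 - zz**2 / 32.0)
psf /= psf.sum()                           # unit volume -> OTF(0,0,0) = 1

otf = np.fft.fftn(np.fft.ifftshift(psf))   # center the psf before the FFT
mtf = np.abs(otf)                          # modulation transfer function
```

Normalizing the psf to unit volume makes the zero-frequency OTF value exactly 1, the usual convention for transfer functions.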

Figure 3.17: The experimentally measured OTF(ρ,ζ): (a) a density plot, (b) a 3D plot.

4. THREE DIMENSIONAL IMAGE FORMATION WITH A DIFFRACTIVE OPTICAL ELEMENT (DOE)

The Diffractive Optic Image Spectrometer (DOIS) approach to image spectrometry has been broken into the series of coordinate transformations and operators depicted in the flow chart of Figure 4.1 [Sitter 1990, Frieden 1967, Barrett 1981]. The top box represents imaging with the diffractive optical element (DOE) in DOIS and is described in section 4.1. The DOE performs two operations: Gaussian imaging, with a Spectral Gaussian Coordinate Transform, and image degradation, characterized by Three-dimensional Shift Variant Transfer Theory. The image space is then sampled by the CCD camera.

The acronyms used below include coordinate transform (CT), shift variant (SV) and shift invariant (SIV).

object obj(x_o, y_o, λ_o) at z_o
    ↓ Spectral Gaussian CT
    ↓ 3D SV Transfer Theory
image i(x_i, y_i, z_i)

Left path (narrow waveband):
    SIV Reconstruction → Gaussian image o(x, y, z)
    ↓ Inverse Spectral Gaussian CT
    reconstructed object obj_r(x_o, y_o, λ_o) at z_o

Right path (full spectrum; shift invariant model for shift variant imaging):
    Isotome CT → isotomic image i'(x_i', y_i', z_i')
    SIV Reconstruction → isotomic Gaussian image o'(x', y', z')
    ↓ Inverse Spectral CT
    reconstructed object obj_r(x_o, y_o, λ_o) at z_o

Figure 4.1: A flow chart depicting the manipulation of the three-dimensional object cube as it is imaged and reconstructed.

Image processing can be applied to reconstruct the object under one of two conditions, shown as two possible paths. Selection of a path will depend on the extent of the waveband desired. The left path can be taken within a narrowband image volume called an isotome and is discussed in section 4.2. In an isotome the point spread function (psf) is approximately unchanging and is considered shift invariant for modeling and reconstruction algorithms. It will be shown in section 4.2.2 that achievable isotomes have a bandwidth up to 20 nm in the visible range for the prototype built for this dissertation.

When the spectrum of interest includes the entire visible range it is necessary to perform the coordinate transform (CT) shown in the right path of Figure 4.1 and discussed in section 4.3. The CT converts the image space with a shift variant (SV) psf into an alternate coordinate system with a shift invariant (SIV) psf. The coordinate transform designed for this system is called the Isotome Coordinate Transform. Once applied, reconstruction algorithms using linear SIV filtering techniques can be applied to obtain an isotomic Gaussian image. The final step is to again transform the data; this time an Inverse Spectral Coordinate Transform is used to reconstruct the object.

4.1 Imaging with a Diffractive Optic

Although one normally thinks of a diffractive optical element (DOE) as a monochromatic imaging device, it can be used to image polychromatic sources. Sweatt has shown a mathematical equivalence between a diffractive optical element and an ultra-high-index lens [Sweatt 1979]. The DOE performs as a lens with severe chromatic aberration. It is this aberration which provides the dispersion for DOIS.

It is important to remember that all imaging systems are imperfect, introducing aberrations which distort the image. Although high-quality, diffractive/refractive optical kinoforms can be designed to minimize image distortion and aberrations such as spherical and coma [Stone 1988], the DOIS spectral sectioning technique is plagued with defocus from the chromatic aberration. This results in errors in the recorded spectral slices which have to be corrected with post-detection image processing.


Figure 4.2: A diffractive optical element imaging various spectral point sources.

A good place to start this discussion is to consider imaging a point source. Figure 4.2 shows a DOE imaging point sources of various wavelengths. Notice that the DOE does several things. The spectrum is spread out along the optical axis and the magnification is different in each focal plane. These functions are described by Gaussian imaging.

Additionally, the images of the point sources are not perfect delta functions. Each is a spheroidal intensity distribution around the "ideal" Gaussian image point. This blurring is described by three-dimensional transfer theory. The spheroid intensity distributions represent the three-dimensional point spread functions of the DOE, and cause both spatial (transverse) and spectral (longitudinal) blurring.

4.1.1 Spectral Gaussian Coordinate Transform

The imaging can be represented by a coordinate transform, performing both a scaling function and transforming the spectral data into a spatial coordinate, z. Starting with the imaging equation:

1/f_λ = 1/z_o + 1/z    (4.1)

where f_λ is the focal length at wavelength λ, z_o is the object distance, and z is the Gaussian image distance. Introducing the diffractive optic spectral response [Swanson 1989]:

f_λ λ = f_d λ_d = D    (4.2)

D is the DOE design constant, where f_d is the design focal length at the design wavelength, λ_d. Substituting (4.2) into (4.1) results in the diffractive optic imaging equations:

λ/D = 1/z_o + 1/z    (4.3)

λ = D (z + z_o)/(z z_o)  ⇔  z = D z_o/(λ z_o − D)    (4.4)

The conversion between spectra and image distance, z, at a fixed object distance, z_o, is described by equation (4.4).
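Equations (4.2)–(4.4) can be checked numerically with the prototype values of Chapter 5 (f_d = 200 mm at λ_d = 588 nm); the object distance below is a hypothetical finite conjugate:

```python
f_d = 200.0          # design focal length [mm]
lam_d = 588e-6       # design wavelength [mm]
D = f_d * lam_d      # DOE design constant, eq. (4.2) [mm^2]

def image_distance(lam_mm, z_o):
    """Gaussian image distance z for wavelength lam at object distance z_o, eq. (4.4)."""
    return D * z_o / (lam_mm * z_o - D)

# At the design wavelength and an object 10 focal lengths away, the thin-lens
# result z = f*z_o/(z_o - f) must be recovered.
z = image_distance(lam_d, 10 * f_d)
```

A sanity check on the dispersion: since f_λ = D/λ, shorter wavelengths have longer focal lengths and therefore larger image distances at a fixed z_o.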

The transverse and longitudinal magnifications, M_T and M_L, are:

M_T = −z/z_o = D/(D − λ z_o)  and  M_L = −(z/z_o)² = −M_T²    (4.5)

Notice that the magnification is dependent on the design constant, D, and the fixed object distance, z_o, leaving wavelength, λ, as the only variable.

The DOE transforms the object obj(x_o, y_o, λ_o) at z_o into a Gaussian image, o(x,y,z), with the following Spectral Gaussian Coordinate Transform:

o(x, y, z) = obj(M_T x_o, M_T y_o, −M_T z_o)    (4.6)

o(x, y, z) = obj( [−D/(λ z_o − D)] x_o, [−D/(λ z_o − D)] y_o, [D/(λ z_o − D)] z_o )    (4.7)


4.1.2 Three-dimensional Transfer Theory

As previously mentioned, the actual image of a point source formed by a DOE is a blurred version of the Gaussian point, seen in Figure 4.2. The blurring is represented by Three-dimensional Transfer Theory, based upon the assumptions of (1) superposition and (2) stationarity. It is well known that these conditions are met in two dimensions by an object which radiates incoherently from an "isoplanatic" area of an object plane. These conditions can be extended to three-dimensional imaging within "isotomic" volumes of the object space [Frieden 1967]. Frieden's theory is extended below to describe the DOIS three-dimensional spectral/spatial system.

The theory of superposition implies that since the object is assumed to be incoherent, the intensities from different elements of the object, o(x,y,z), are additive. The total intensity in the image, i(x_i, y_i, z_i), is given by:

i(x_i, y_i, z_i) = ∭ h(x, y, z; x_i, y_i, z_i) o(x, y, z) dx dy dz    (4.8)

where h(x, y, z; x_i, y_i, z_i) is the space variant incoherent impulse response or psf. This impulse response physically represents the intensity distribution in image space from the illumination by a monochromatic point source in object space. It is particularly important for estimating the image degradation caused by surrounding coordinates. The three-dimensional psf predicts cross-talk and blurring from coordinates both within the same spectral image and from surrounding spectra.

Classical two-dimensional transfer theory defines an "isoplanatic" plane where the psf is slowly varying and approximately stationary. For DOIS an "isotomic" volume of the object or Gaussian image space, called an isotome, can be defined where the three-dimensional psf is dependent on the displacement from the Gaussian point and not the location of the Gaussian point itself, described by equation (4.9).


h(x, y, z; x_i, y_i, z_i) = h(x_i − x, y_i − y, z_i − z)    (4.9)

Physically, this requires that the psf must remain invariant under changes in position of the Gaussian point. This will be true if each point within an isotome has essentially the same wave aberrations and magnification.

Image space can be broken into several isotomic volumes, V. Each isotome, which represents a narrow waveband, can be treated by linear shift invariant filtering theory. If a particular spectral image or a narrowband spectral curve is needed, the detection and processing can be confined to an isotome following the left path of Figure 4.1.

4.2 Isotomic Imaging in a narrow waveband

Substituting equation (4.9) into (4.8), the way in which an arbitrary incoherent spectral image is distorted can be predicted from the resulting intensity convolution equation. It describes the convolution of the Gaussian image with the incoherent psf in an isotome:

i(x_i, y_i, z_i) = ∭ h(x_i − x, y_i − y, z_i − z) o(x, y, z) dx dy dz    (4.10)

in shorthand notation:

i(x_i, y_i, z_i) = h(x, y, z) *** o(x, y, z)    (4.11)

Applying the Fourier integrals:

I(φ, ψ, ζ) = ∭ i(x_i, y_i, z_i) e^(−i2π(φx_i + ψy_i + ζz_i)) dx_i dy_i dz_i

O(φ, ψ, ζ) = ∭ o(x, y, z) e^(−i2π(φx + ψy + ζz)) dx dy dz    (4.12)

The Fourier transform of the psf, H(φ, ψ, ζ), is called the optical transfer function (OTF), or in three-dimensional microscopy texts, the contrast transfer function (CTF):

I(φ, ψ, ζ) = H(φ, ψ, ζ) O(φ, ψ, ζ)    (4.13)

Here capitals are used to refer to the Fourier transforms and the convolution is now simplified to a multiplication. In matrix form, dropping the variables:

I = H O    (4.14)

If the isotome object and psf are known, the image can be modeled with equation (4.14), finishing with:

i = InverseFourier{I},  i(x_i, y_i, z_i) = InverseFourier{I(φ, ψ, ζ)}    (4.15)

This will simulate the image without the errors and noise introduced by detection.
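The forward model of equations (4.10)–(4.15) can be sketched as a Fourier-domain multiplication. The object and psf cubes below are synthetic (a single on-axis point and a Gaussian blur), not the dissertation's data:

```python
import numpy as np

n = 32
o = np.zeros((n, n, n))
o[16, 16, 16] = 1.0                     # a single monochromatic point source

# Stand-in shift invariant psf, normalized to unit volume.
zz, yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2, -n//2:n//2]
h = np.exp(-(xx**2 + yy**2) / 4.0 - zz**2 / 16.0)
h /= h.sum()

O = np.fft.fftn(o)
H = np.fft.fftn(np.fft.ifftshift(h))    # OTF, eq. (4.13)
i = np.fft.ifftn(H * O).real            # image cube, eqs. (4.14)-(4.15)
```

For a centered point source the simulated image is simply the psf itself, shifted to the point's location, and the total intensity is conserved.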

4.2.1 Reconstruction

Within the isotome, if the recorded spectral images i(x_i, y_i, z_i), the object distance z_o, and the OTF are known, an Inverse Shift Invariant Filter and Inverse Spectral Gaussian Coordinate Transform can be applied to reconstruct the object, obj_r(x_o, y_o, λ_o) at z_o. The above transfer theory can be applied in reverse using:

O = H⁻¹ I    (4.16)

o = InverseFourier{O},  o(x, y, z) = InverseFourier{O(φ, ψ, ζ)}    (4.17)

where I is the Fourier transform of the image, and H⁻¹ is the inverse of the OTF matrix, which when applied as an inverse filter performs the required deconvolution.


To convert o(x,y,z) into a reconstructed spectral/spatial object, obj_r(x_o, y_o, λ_o) at z_o, the following Inverse Spectral Gaussian Coordinate Transform has been derived:

obj_r(x_o, y_o, λ_o) = o( x/M_T, y/M_T, λ(z) )  at z_o    (4.18)

obj_r(x_o, y_o, λ_o) = o( −(z_o/z) x, −(z_o/z) y, D (z + z_o)/(z z_o) )  at z_o    (4.19)

4.2.2 Define isotome bandwidth

To determine which path of Figure 4.1 should be taken, it must be determined whether the image cube, or bandwidth of the image cube, can be modeled as a shift invariant isotome. To define an isotome the invariance of h must be quantified [Frieden 1967]. Refer to section 3.4.1 for the theoretical description of the incoherent point spread function h. The invariance over the isotome can be measured by the invariance of h(v,0) and h(0,u) under changes Δz or Δλ. This can be quantified by the radial distance Δr = R_o to the first zero of h(v,0) in equation (3.13), and the axial distance Δz = Z_o to the first zero of h(0,u) in equation (3.14):

Δr = R_o = ±1.22 λ (z/2a)    (4.20)

Δz = Z_o = ±2 λ (z/a)²    (4.21)

recall that D is the design constant f_d λ_d. The differentials ΔR_o and ΔZ_o measure the variance:

ΔR_o/R_o = Δλ/λ + Δz/z = Δλ/λ + ΔM_T/M_T    (4.22)

ΔZ_o/Z_o = Δλ/λ + 2 Δz/z = Δλ/λ + 2 ΔM_T/M_T    (4.23)

In practice, Frieden defines an isotome as a region where the magnification changes less than 10% (ΔM_L/M_L ≤ 10%). Therefore, over small spectral regions or small amounts of axial translation the image space can be considered isotomic and SIV linear filtering algorithms will hold. Using the differentials in equations (4.22) and (4.23), which are plotted in Figure 4.3, one sees that for a target at an object distance of z_o > 10f and a bandwidth, Δλ/λ, up to 5%, the variance ΔR/R < 1% and ΔZ/Z < 6%. This indicates that the conditions for an isotome hold within a spectral region up to a 20 nm bandwidth.

Figure 4.3: The variance of (a) R_o and (b) Z_o at z_o = 10 f_d, for bandwidths Δλ of 5 nm, 10 nm, and 20 nm, plotted over λ = 400-700 nm.

4.3 Full Spectrum Imaging

The technique of section 4.2 can be applied to individual isotomes to model and reconstruct data from a narrowband source. However, when the spectrum of interest includes the entire visible range, it is necessary to perform the coordinate transform shown in the right path of Figure 4.1, which creates a shift invariant model for shift variant imaging.

Barrett and Sitter introduced a technique to model the shift variant blurring of three-dimensional images [Barrett 1981, Sitter 1990]. The model consists of a coordinate transformation of the object followed by a shift invariant blur operation, which is in turn followed by a second coordinate transformation of the image. This technique can be re-derived to apply to the reconstruction of the object when the image space is shift variant.

Recall from equation (4.8) that if an imaging system is not isotomic, it is modeled by the shift variant integral of the form:

i(x_i, y_i, z_i) = ∭ h(x, y, z; x_i, y_i, z_i) o(x, y, z) dx dy dz    (4.8)

where i(x_i, y_i, z_i) represents the image space distribution, o(x,y,z) is the Gaussian object distribution, and h is the space variant incoherent psf. Sitter showed that for incoherent imaging systems, there exist coordinate transformations from object space and image space to an intermediate space (denoted by primed quantities) where the integral of equation (4.8) can be expressed as a convolution. That is, there exists a coordinate transformation from i(x_i, y_i, z_i) to i'(x_i', y_i', z_i') and from o(x,y,z) to o'(x', y', z') so that:

i'(x_i', y_i', z_i') = ∭ h'(x', y', z'; x_i', y_i', z_i') o'(x', y', z') dx' dy' dz'    (4.24)

where:

h'(x', y', z'; x_i', y_i', z_i') = h'(x_i' − x', y_i' − y', z_i' − z')    (4.25)

resulting in:

i'(x_i', y_i', z_i') = ∭ h'(x_i' − x', y_i' − y', z_i' − z') o'(x', y', z') dx' dy' dz'    (4.26)

where i'(x_i', y_i', z_i') and o'(x', y', z') are the coordinate transformed irradiance distributions and h' is the incoherent psf in the transformed space.

4.3.1 Isotome Coordinate Transform

A coordinate transform was derived for the DOE to transform the shift variant image space i to the shift invariant isotome i', under the assumption that aberrations are minimized and that magnification is the major cause of the space variance. The isotome coordinate transform applies an inverse magnification, transforming a polygon shaped space with variable magnification into a unit magnification space, pictured in Figure 4.4.

Figure 4.4: Visual representation of the isotome coordinate transform forming an image space with unit magnification (Δz is variable; Δz' is constant).

In order to remove the variable magnification, which is a function of z(λ), the following coordinate transform is used:

M_T(z) = −z/z_o,  M_L(z) = −M_T²    (4.27)

i'(x', y', z') = i( x_i/M_T, y_i/M_T, z_i/M_L )    (4.28)

h'(x', y', z') = h( x_i/M_T, y_i/M_T, z_i/M_L )    (4.29)

Equations (4.28) and (4.29) completely reverse the magnification of the Gaussian imaging. However, in some applications it may be practical to transform the image cube to an intermediate magnification, such as to match the magnification of the measured image cube to the magnification of a measured point spread function.
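The slice-by-slice rescaling behind the isotome coordinate transform can be sketched as follows. The per-slice magnifications here are hypothetical, SciPy's `zoom` does the interpolation, and the pad/crop is corner-aligned for brevity (a careful implementation would keep the slices centered):

```python
import numpy as np
from scipy.ndimage import zoom

def isotome_ct(cube, mags, target_mag=1.0):
    """Rescale each z-slice of cube (nz, ny, nx) from mags[k] to target_mag."""
    ny, nx = cube.shape[1:]
    out = []
    for k in range(cube.shape[0]):
        slice_ = zoom(cube[k], target_mag / mags[k], order=1)
        # Pad/crop back to the original grid so the cube stays rectangular.
        s = np.zeros((ny, nx))
        sy, sx = min(ny, slice_.shape[0]), min(nx, slice_.shape[1])
        s[:sy, :sx] = slice_[:sy, :sx]
        out.append(s)
    return np.stack(out)

cube = np.random.rand(4, 32, 32)
mags = np.array([1.00, 1.05, 1.10, 1.15])   # hypothetical |M_T| per slice
cube_prime = isotome_ct(cube, mags)
```

A slice whose magnification already equals the target passes through unchanged, which is a convenient sanity check on the transform.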


4.3.2 Reconstruction

Now that the image space is essentially all one isotome, the entire full spectrum object cube can be reconstructed with the inverse transfer reconstruction that was used in section 4.2.1, substituting the primed variables:

i'(x_i', y_i', z_i') = h'(x', y', z') *** o'(x', y', z')    (4.30)

I' = H' O'    (4.31)

O' = H'⁻¹ I'    (4.32)

o' = InverseFourier{O'},  o'(x', y', z') = InverseFourier{O'(φ, ψ, ζ)}    (4.33)

4.3.3 Inverse Spectral Coordinate Transform

The magnification from the x_i and y_i coordinates was removed in the coordinate transform of equation (4.28). The final coordinate transform reconstructs the spectral information from z_i', using equation (4.4):

obj_r(x_o, y_o, λ_o) = o'(x_o, y_o, z'),  with λ_o = D(1/z' + 1/z_o),  at z_o    (4.34)

5. PROTOTYPE CHARACTERIZATION

5.1 DOIS Prototype

A prototype of the DOIS described in Chapter 2 was assembled and characterized at Rome Laboratory's Photonics Center. Figure 5.1 is a photo of the DOIS prototype. The DOE is on the right and connected to the CCD camera with a tubular baffle. Although a system designed for field use would include electronically controlled translation stages, the laboratory prototype is manually controlled. The DOE is mounted on an Oriel rail carrier which slides on a 60 cm Oriel rail marked with a metric scale for location measurement.


Figure 5.1: Photo of the DOIS prototype.

The camera is first secured to a micropositioning translation stage for fine control, then on another rail carrier. Each carrier is marked with a graduated rule for positioning on the rail within 1 mm, and the micropositioning vernier on the detector is accurate to 0.001 inches (25.4 μm). Manual translation of the camera or DOE is possible. With a target at infinity, the lens would be moved without adversely impacting the object distance s_o, and the detector to DOE distance for each wavelength would be the spectral focal distance, s_λ = f_λ. However, for target distances restrained by the dimensions of the laboratory, the best results are obtained by keeping the DOE stationary and moving the detector. The detector carrier is stepped with coarse increments, dz = 1 mm, corresponding to spectral step sizes dλ ranging from 1 nm to 3 nm, at 400 nm and 650 nm respectively. For finer spectral measurements, the detector carrier was placed at a set position and the micropositioning vernier was turned with dz = 0.005", corresponding to a dλ of 0.1 nm to 0.4 nm depending on the wavelength region.
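The quoted stage-to-spectrum conversion can be roughly checked. For a distant target, z ≈ D/λ, so equation (4.4) gives dλ/dz ≈ λ²/D; the finite laboratory conjugates shift these numbers somewhat (a sketch, not the dissertation's calibration):

```python
D = 200.0 * 588e-6            # design constant f_d * lambda_d [mm^2]

def dlambda_per_dz(lam_nm):
    """Spectral step [nm] per 1 mm of detector travel, object at infinity."""
    lam_mm = lam_nm * 1e-6
    return (lam_mm**2 / D) * 1e6   # convert mm back to nm

# dz = 1 mm steps: roughly 1.4 nm at 400 nm and 3.6 nm at 650 nm,
# consistent with the quoted 1-3 nm range.
step_400 = dlambda_per_dz(400.0)
step_650 = dlambda_per_dz(650.0)
```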

5.2 The Diffractive Optic Element

The DOE fabricated for this prototype is a single etch Fresnel zone phase plate. Figure 5.2 lists the figures of merit for the DOE in DOIS. The dominant effect of only a single etch DOE is the low first order diffraction efficiency. This limits the amount of light which is properly imaged and increases the minimum detectable target radiance, but it performs sufficiently for this proof of concept laboratory demonstration. A well designed DOIS would incorporate a high efficiency phase grating with η = 100% at the design wavelength. The DOE could be a multilevel (N ≥ 16) photolithography phase plate, or a continuous profile grating generated by holography, diamond turning or a laserwriter, as discussed in Chapter 3.

Figure 5.2: DOE figures of merit

DOE type: 2-level, phase etched, zone plate
Fabrication: National Nanofabrication Facility of Cornell University; e-beam mask generation, contact lithography exposure, reactive ion etched
Substrate: BK-7 optical flat, 8 mm thick
Diameter: 2", 5 cm clear aperture
Design focal length f_d: 200 mm
Design wavelength λ_d: 588 nm
Theoretical diffraction efficiency: 40% at 588 nm
Experimentally measured efficiency: 27% at 670 nm laser diode; 35% at 632.8 nm HeNe; 30% at 542 nm GreNe
MTF @ 1/e: 32 lp/mm


The ring pattern of Figure 5.3 was recorded by illuminating DOIS with a high intensity HeNe beam. The first order focus is super saturated and is seen on the bottom left. The DOE's zero order diffraction efficiency is approximately 5% at 633 nm. The resulting zero order beam has no optical power, passing straight through the lens. It adds background photon noise but is normally not intense enough to be detected as signal. However, when illuminated with the laser beam, the undiffracted light projects a shadow of the zone plate on the detector.

The first order diffraction efficiency is a measure of the percentage of the collected light that is imaged to the desired detector location. This figure of merit dictates the minimum target radiance that will be detected. The setup in Figure 5.4 was used to experimentally determine the DOE's diffraction efficiency. To determine the spectral dependence of the diffraction efficiency the DOE was characterized at three wavelengths: a laser diode was used which emitted at 670 nm, and measurements were obtained at 632.8 nm and 542 nm with two Helium Neon lasers. Figure 5.5 shows the experimental results plotted along with the theoretical spectral first order diffraction efficiency [Chapter 3].

The laser beam was directed into an integrating sphere/detector apparatus and the power was recorded, P_total. The DOE was then placed in the beam and the integrating sphere was aligned to collect only the first diffracted order, and again the power was recorded, P_first order. The diffraction efficiency is the ratio of these two values:

η = P_first order / P_total    (5.1)

To account for the efficiency at different radii and zone spacings, η was determined with the beam hitting the DOE at half radius and repeated at almost full aperture. The two radii measurements were averaged.
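The measurement reduction of equation (5.1) amounts to two power ratios and an average. The power readings below are hypothetical values chosen to reproduce the ~35% figure quoted for the HeNe line, not the actual laboratory readings:

```python
def diffraction_efficiency(p_first, p_total):
    """Eq. (5.1): first-order power over total beam power."""
    return p_first / p_total

p_total_mw = 2.00              # laser power into the sphere [mW] (hypothetical)
p_half_radius_mw = 0.71        # first order, beam at r/2 [mW] (hypothetical)
p_full_aperture_mw = 0.69      # first order, near full aperture [mW] (hypothetical)

# Average of the two radial positions, as described above.
eta = 0.5 * (diffraction_efficiency(p_half_radius_mw, p_total_mw)
             + diffraction_efficiency(p_full_aperture_mw, p_total_mw))
print(f"eta = {eta:.1%}")      # 35.0%
```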


Figure 5.4: Diffraction efficiency experimental setup. (a) determine the total power, (b) first order power at r/2, (c) first order power at full aperture.


Figure 5.5: Theoretical and measured diffraction efficiency vs. wavelength.


The DOE MTF was measured at the Diffractive Optics Test Facility at Rochester Photonics Corporation, RPC. This is an automated testbed that measures the MTF of a diffractive optic using a double knife edge technique [Faklis 1993]. Figure 5.6(a) shows the measured tangential and sagittal MTFs of the DOE at 540 nm, as well as the diffraction limited MTF. The dramatic drop from the DC term is due to the low diffraction efficiency of the DOE. The second curve (b) is the MTF normalized with the diffraction efficiency.

Warren Smith, in his book on lens design [Smith 1992], gives the goal of an MTF of 50% at 50 lp/mm for high quality photographic lenses, and 20% at 30 lp/mm over 90% of the field. This DOE falls well below this requirement; however, Figure 5.6(c) is the measured MTF of a well designed, high efficiency DOE lens and is presented to show that this technology can greatly exceed standard design requirements. Additionally, the DOE design can be optimized using computer design codes such as ZEMAX to reduce aberrations both on and off axis. The substrate surface curvatures are used for minimizing aberrations by producing a kinoform, which adds a low power refractive surface for balancing aberrations such as spherical and coma.

Figure 5.6: DOE MTF, measured at the RPC Diffractive Optic Testing Facility. (a) measured and diffraction limited MTF at 540 nm. (b) Normalized 546 nm MTF, MTF/η. (c) MTF performance of a well designed high efficiency DOE measured at 559 nm, provided as an example of the technology's capability.


5.3 The CCD camera

The CCD camera in the prototype is a SONY XC-75 whose specifications are found in Figure 5.7.

SONY CCD Monochrome Video Camera XC-75

Pickup device: 1/2 inch interline-transfer CCD
Effective picture elements (pixels): 768 x 494 (horizontal / vertical)
Pixel size: 8.4 x 9.8 µm (horizontal / vertical)
Sensing area (chip size): 7.95 x 6.45 mm (horizontal / vertical)
CCD drive frequency: 14.3 kHz horizontal, 15.7 kHz vertical
Scanning: 525 lines, 2:1 interlace
Gain settings: AGC automatic gain control, F fixed or M manual
Video S/N ratio: 56 dB
Minimum illumination: 3.0 lux (sensitivity 400 lux with AGC ON)

Figure 5.7: DOIS camera specifications.

The CCD camera was used without the conventional lens attachment, and the DOE was mounted approximately 200 mm away from the camera with a homemade baffle of concentric tubes, depicted in Figure 5.1, blocking most stray photons from the room. The output of the camera was connected to a monitor, a framegrabber board and a video recorder. The images were recorded digitally, one image at a time. The computer has the capability of recording many frames consecutively, but memory constraints prevented this. The low efficiency of the DOE made it necessary to have the automatic gain control on for most of the experimental demonstrations. This is unfortunate because the capability of absolute intensity readings is lost. However, the AGC did a wonderful job of automatically adjusting to the spectral efficiency η_λ of the DOE and camera. In an improved system, the gain will be set constant and the intensity will be corrected by dividing the spectral intensity data by the DOE spectral efficiency, η_λ, and the camera spectral response curve, measured below.
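The proposed correction is a per-channel division. A minimal sketch, assuming the DOE efficiency and camera response have already been resampled onto the same wavelength grid as the data (the channel values here are hypothetical):

```python
import numpy as np

def correct_spectral_intensity(measured, eta_doe, cam_response):
    """Divide raw spectral intensity data, channel by channel, by the DOE
    spectral diffraction efficiency and the camera spectral response."""
    measured = np.asarray(measured, dtype=float)
    eta = np.asarray(eta_doe, dtype=float)
    resp = np.asarray(cam_response, dtype=float)
    return measured / (eta * resp)

# Hypothetical three-channel example (values are illustrative only):
corrected = correct_spectral_intensity([10.0, 20.0, 30.0],
                                       [0.25, 0.40, 0.30],
                                       [0.8, 1.0, 0.6])
print(corrected)
```

Note that this only recovers relative radiometry; absolute readings additionally require the fixed gain described above.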

The camera spectral response was measured using a Tungsten Halogen lamp and an Oriel grating monochromator. Figure 5.8 is the relative spectral response of the CCD camera in its original packaging. This curve suggested that there was a filter on the camera, possibly to mimic the response of the human eye. After careful removal of the faceplate, a filter was found between the protective window and the CCD chip. Figure 5.9 is the spectral response of the same CCD with the unknown filter removed. The images of Chapter 2 were recorded with this filter, but the analytical demonstration in the next chapter was performed after the filter was removed.


Figure 5.8: The experimentally determined spectral response curve of the SONY CCD. The dashed line is the raw data, and the solid line has the spectral curve of the lamp and monochromator grating divided out.



Figure 5.9: The experimentally determined spectral response curve of the SONY CCD after the removal of a factory placed filter. The dashed line is the raw data, and the solid line has the spectral curve of the lamp and monochromator grating divided out.

5.4 DOIS prototype characteristics

Since imaging is a large part of the requirements of the system, and many readers may not have seen the imaging capabilities of a simple Fresnel Zone Plate, a 256 x 256 image of the Air Force Resolution Target is included as Figure 5.10. The resolution target used was made of chrome on glass. It was illuminated with the Tungsten Halogen lamp and filtered with a 10 nm line width filter at 550 nm. The resolution target was placed over ground glass so that the target and not the filament was imaged. The resolution target was 12.5 x 12.5 mm, s_o = 219.5 cm and s_i = 23.5 cm. These distances correspond to a wavelength of 552 nm. The x and y extents of the image were measured in MacPhase to be 134 x 134 pixels, corresponding to a transverse magnification of 0.107 with 10 µm pixels.

m = s_i / s_o = 23.5 cm / 219.5 cm = 0.107 = (134 pixels × 10 µm) / 12.5 mm        (5.2)


These measurements confirm that the spectral imaging and transverse magnification equations hold true for this diffractive optic.
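The consistency check behind equation (5.2) is easy to reproduce; this sketch simply redoes the arithmetic with the measured quantities from the text:

```python
# Redo the arithmetic of equation (5.2) with the measured quantities.
s_o_cm = 219.5            # object distance
s_i_cm = 23.5             # image distance
pixels = 134              # measured image extent in pixels
pixel_pitch_mm = 0.010    # 10 um pixels
target_mm = 12.5          # resolution target size

m_from_distances = s_i_cm / s_o_cm
m_from_image = pixels * pixel_pitch_mm / target_mm

print(round(m_from_distances, 3), round(m_from_image, 3))  # 0.107 0.107
```

Both routes to the magnification agree to three decimal places, which is the claim the equation makes.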

Figure 5.10: A pictorial representation of the system's MTF, a DOIS image of an Air Force Resolution Target.

[Figure 5.11 schematic content: DOE offset = -1.5 cm; detector offset = +3 cm with inch vernier; DOE scale and detector scale read along the rail; s_i = z_Det - z_DOE.]

Figure 5.11: Schematic of DOIS.

It is necessary to determine the exact location of both the DOE and detector to properly determine the spectral value. Figure 5.11 is a schematic diagram of DOIS and illustrates the distances used in the Excel spreadsheet, Figure 5.12, which calculates λ from s_o and the DOE and detector positions. Precise offsets between the carrier markings and the DOE and CCD surfaces are required. These were first roughly determined from the mounting parameters and then calibrated with the known HeNe and GreNe wavelengths.


Once set at -1.5 cm and +3 cm, they remained accurate for a variety of source targets, wavelengths and distances.

Data is entered for s_o [cm], the DOE scale reading [cm], the Det scale reading [cm] and the inch scale reading [in]. The spreadsheet then calculates:

z_DOE [cm] = DOE scale - 1.5 (offset)
z_Detector [cm] = Det scale + 3 (offset) - inch scale × 2.54
s_i [cm] = z_Detector - z_DOE
λ [nm] = λ_d · f_d · (1/s_o + 1/s_i)

Figure 5.12: Excel spreadsheet calibrating the DOE and detector carrier locations, z, to a corresponding wavelength, λ.
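The spreadsheet logic can be sketched as a small function. The offsets and the inch-to-centimeter conversion come from the text; the design values lambda_d_nm and f_d_cm are assumed placeholders (chosen so that the HeNe row of Figure 7.6 lands near 633 nm), not figures from the dissertation:

```python
def carrier_to_wavelength(s_o_cm, doe_scale_cm, det_scale_cm, inch_scale_in,
                          lambda_d_nm=588.0, f_d_cm=20.0):
    """Reproduce the Figure 5.12 spreadsheet: carrier readings -> wavelength.
    lambda_d_nm and f_d_cm are assumed design values, not from the source."""
    z_doe = doe_scale_cm - 1.5                         # DOE offset
    z_det = det_scale_cm + 3.0 - inch_scale_in * 2.54  # detector offset
    s_i = z_det - z_doe
    # Diffractive lens: f(lambda) * lambda = f_d * lambda_d, combined with
    # the imaging equation 1/f = 1/s_o + 1/s_i:
    return lambda_d_nm * f_d_cm * (1.0 / s_o_cm + 1.0 / s_i)

# HeNe row of Figure 7.6 -- should come out near 633 nm:
print(round(carrier_to_wavelength(232.41, 7.0, 24.0, 0.508), 1))
```

With these assumed design values the rows of Figure 7.6 reproduce their listed wavelengths to within about a nanometer.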

5.5 Spatial and Spectral point spread function (psf)

Figure 5.13 is a series of images (32 x 32 pixels) of a 5 µm pinhole illuminated by a HeNe laser recorded with DOIS through focus. The separation between each image is dz = .005", which is approximately a spectral step size of dλ = 0.4 nm. Each image is plotted with its own min, max plot range to maximize the contrast and boldly show the spot diagram pattern. The geometrical image at best focus is contained within one pixel, so this will experimentally represent the system's spatial psf for various stages of defocus. These images not only tell us the amount of blur added to our image, but they also graphically demonstrate the dominant lens aberration, third order spherical. The spot diagrams correspond to under corrected spherical aberration. Notice the soft even blur as the images progress into focus, but then the hard ring of light after focus. Figure 5.14 is a collection of the cross sections of each of these psfs, grouped as "approaching focus" and "leaving focus" respectively, and plotted as greyscale intensity value versus pixel number. The spectral images are each 32x32 pixels. Notice that the 255 greylevel maximum of the psf is centered on pixel number 16 and falls off quickly to a greylevel of zero within 6 pixels on either side (shown are pixel numbers 10 through 22; beyond them the psf is zero).


Figure 5.13: Series of images (32 x 32 pixels) of a 5 µm pinhole illuminated by a HeNe laser recorded with DOIS through focus (panels span approaching focus through leaving focus).


Figure 5.14: Spatial impulse response h(r) at 632.8 nm, plotted as greylevel vs. pixel number for locations (a) approaching focus, (b) leaving focus.

Another important system parameter can be obtained from this data set. Plotting the intensity distribution of one pixel through focus becomes a measure of the influence of one slice of the data on the next plane of data. In 3D microscopy this measures the degradation in image quality of one spatial plane z on the next, but in this configuration it is a measure of the degradation of one wavelength by the surrounding spectral channels. This translates into the spectral impulse function which determines the spectral bandwidth or resolution of the system. The experimentally determined coherent spectral impulse function h(λ) is plotted in Figure 5.15, the intensity distribution of the center pixel through focus at (a) 632.8 nm HeNe and (b) 542 nm GreNe.
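One generic way to extract the FWHM values quoted in Figure 5.16 from such a through-focus curve is linear interpolation at the half-maximum crossings. This is a sketch, not the author's procedure, and the sample data are hypothetical (it assumes the peak is interior to the sampled range):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled peak, with linear
    interpolation at the two half-maximum crossings.
    Assumes the peak does not sit at either end of the data."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

# Hypothetical through-focus curve (wavelength [nm] vs. greylevel):
print(fwhm([631.0, 632.0, 633.0, 634.0, 635.0],
           [0.0, 100.0, 200.0, 100.0, 0.0]))  # 2.0
```

Applied to the center-pixel curves of Figure 5.15, this kind of estimate yields the 1.25 nm and 1.45 nm widths tabulated below.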


Figure 5.15: Spectral impulse response h(λ), plotted as intensity of one pixel through focus (a) for 632.8 nm, (b) for 542 nm.

calibration source:         HeNe                    GreNe
center wavelength:          632.8 nm                542 nm
spectral resolution Δλ:     1.25 nm (-0.25, +1.0)   1.45 nm (-0.55, +0.9)
relative resolution Δλ/λ:   0.197 %                 0.267 %
resolving power λ/Δλ:       527                     373

Figure 5.16: The experimentally determined spectral resolution.

These spectral psfs can be scaled with wavelength to translate through the entire visible range. Each step size is dz = .005" = .127 mm; for the HeNe, dz/z_i = .0127/20.93 = .06%, and for the GreNe, dz/z_i = .0127/24.91 = .05%. To use the experimentally determined GreNe psf data, increments of dz/z_λ = .05% should be used. For example, to filter properly at 450 nm, z = 29.4 cm, the step size dz should be equal to 0.15 mm ≈ .006".
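The wavelength-scaled stepping rule can be sketched directly; the 0.05% relative step is the GreNe-derived value from the text:

```python
def step_for_wavelength(z_cm, relative_step=0.0005):
    """Detector step dz that holds dz/z (and so the relative spectral
    sampling) constant; 0.05% is the GreNe-derived value."""
    return z_cm * relative_step

dz_cm = step_for_wavelength(29.4)  # focus position for 450 nm
print(round(dz_cm * 10.0, 3), "mm,", round(dz_cm / 2.54, 4), "inch")
```

For z = 29.4 cm this reproduces the 0.15 mm (about .006") step quoted above.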


6. IMAGE FORMATION

The previous chapters have concentrated on describing DOIS's operation theoretically. At this point it is useful to try and verify that the theory does indeed predict the image formation function of DOIS. This chapter presents both computer generated (CG) and experimentally determined image sets of known targets. As you will see, the linear system theory of Chapter 4 accurately predicts output images formed by DOIS. The CG images are very similar to those experimentally measured, which verifies that the theory is appropriate.

6.1 Mercury 577/579 nm doublet with an X aperture

An excellent example of DOIS's performance is its ability to resolve a two nanometer doublet (577/579 nm) from a Mercury Lamp. To test both spectral and spatial imaging, a known aperture was placed in front of the lamp. The aperture was two crossed slits in metal that make an X. The lamp was positioned at the same object distance that the GreNe pinhole psf images were recorded at, and images were recorded every dz = 0.125mm, corresponding to a spectral step size of dλ = 0.3nm.

The algorithm in section 4.2 was implemented in Mathematica and used to simulate the experimental scenario. Figure 6.1 highlights the Mathematica code for this model. It creates a three-dimensional (32x32x32) image matrix from two three-dimensional input matrices, a spectral object matrix and the measured psf matrix [Figure 2.12]. Each is formed by stacking 32 images that are 32 by 32 pixels in dimension.

OBJ = centeredFourier[Flatten[object]]                        (6.1)
OTF = centeredFourier[Flatten[psf]]                           (6.2)
IMAGE = OTF * OBJ                                             (6.3)
image = Partition[Abs[centeredInverseFourier[IMAGE]]]         (6.4)

Figure 6.1: Mathematica code that implements the linear system algorithm.
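For readers without Mathematica, a rough NumPy restatement of the same four steps might look like the following (the ifftshift handling is an assumption about what the author's centeredFourier does):

```python
import numpy as np

def form_image(obj_cube, psf_cube):
    """3D Fourier image formation: IMAGE = OTF * OBJ, then inverse
    transform and take the magnitude, as in equations (6.1)-(6.4)."""
    OBJ = np.fft.fftn(obj_cube)
    OTF = np.fft.fftn(np.fft.ifftshift(psf_cube))  # psf assumed centered
    return np.abs(np.fft.ifftn(OTF * OBJ))

# Sanity check: a delta-function psf should leave the object unchanged.
obj = np.zeros((8, 8, 8)); obj[4, 4, 4] = 1.0
psf = np.zeros((8, 8, 8)); psf[4, 4, 4] = 1.0
print(np.allclose(form_image(obj, psf), obj))  # True
```

The multiplication in the Fourier domain is what makes the shift-invariant (circulant) assumption discussed in section 6.2 necessary.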


For this simulation it was assumed that Mercury emits two narrow lines at 577 nm and 579 nm. Pictured in Figure 6.2 is the series of assumed input objects, paired with the CG output images that were generated from the Mathematica linear system algorithm, alongside the experimentally measured images. The input object was assumed to have non-zero values in channels 7, 8, 9 and 14, 15, 16 exclusively, corresponding to wavelengths of 576.6, 576.9, 577.2, 578.7, 579.0 and 579.3 nm. The corresponding object spectral radiance curve is Figure 6.3(a), showing emission only around 577 nm and 579 nm. The experimentally measured OTF matrix from Figure 2.14 was used.

The images created by this algorithm accurately depict the image formation properties of DOIS. In Figure 6.2 you can see the excellent correlation between the CG and the experimentally recorded images in the second and third columns. The spectral radiance curve of the CG images is Figure 6.3(b). As expected, the psf does degrade the spectral separation of the doublet. Yet, the doublet can still be detected and can be restored with various deconvolution techniques [Chapter 8]. The corresponding spectral curve from the experimental image set is Figure 6.3(c), which shows even better spectral separation.


Figure 6.2: Simulated spectral object & image channels of a Mercury doublet X, using experimentally measured psf(x, y, z(λ)).



Figure 6.3: Spectral radiance plots of a Mercury doublet 577/579nm (a) the assumed input source object, (b) the computer generated results, (c) the experimental result.


6.2 Spectral/spatial analysis

The above target demonstrated high spectral resolution by being able to discern a 2 nm doublet while imaging the narrow slits of the X. Small targets encounter only minimal blurring; however, as the target increases, so does the spectral/spatial blurring. This is investigated by generating CG images of targets of varying spatial extent with the Fourier model using the measured OTF.

The data can be displayed in many ways. A conventional representation is a series of spectral images like Figure 6.2, or a spectral curve at one pixel, Figure 6.3. However, to visualize the spectral/spatial relationship, the following discussion will utilize a spectral/spatial cross-section image. Figure 6.4 shows two three-dimensional data cubes. The cross-section is formed as a z-x or z-y plane, shown as the images above the cubes. Notice that the monochromatic z-y image looks like a slit and the image from the doublet emission looks like a double slit. The height of the slit represents a dimension of the aperture of the target and the width is the spectral bandwidth of the emission line.

Figure 6.5 presents the spectral/spatial on-axis cross-sections of the point spread functions used during this project. The contrast has been enhanced to show features.

Figure 6.4: 3D object cube and z-y view of a rectangular aperture emitting (a) monochromatically, (b) a doublet source.

Figure 6.5: Cross-sections of the three-dimensional point spread functions used in CG image formation and object reconstruction: (a) measured psf, (b) geometric psf, (c) combination psf.


6.2.1 Monochromatic target, increasing aperture

Linear system theory was used to model the images formed by DOIS of a monochromatic target with six rectangular apertures of different sizes. The cross-sections are shown in Figure 6.6. The images were generated by forming a 3D matrix, 32x32x32, where the only non-zero elements are a square of ones in the 16th plane, simulating a square monochromatic target. The size of the square is increased from 1 pixel to the full 32x32 plane (1, 2, 4, 8, 16 and 32 pixels respectively). As expected, as the target size is increased its total radiance is increased, causing increased blurring in both the spectral and spatial directions. Each two-dimensional image is a 32x32 z-y cross-section of the total three-dimensional cube with the corresponding spectral curve plotted below it.

Figure 6.6: Computer generated monochromatic target of increasing size: (a) CG assumed objects, (b) CG images.

6.2.2 Doublet target, increasing aperture

A more interesting target is the same increasing square aperture in front of a doublet. The images in Figure 6.7(a) are the CG assumed input sources with the square aperture at z=14 and 20. The series in Figure 6.7(b) are the CG images which were calculated with the measured OTF. For comparison, a mercury lamp with 577nm and 579nm line emissions illuminated various apertures. The measured z-y images were extracted from the measured cubes and shown in Figure 6.7(c). Notice the strong agreement between the simulated and experimental images and curves.

Figure 6.7: Computer generated (CG) and experimentally recorded doublets of increasing size (apertures from 1 pixel to 32 pixels, and 50 µm to 500 µm): (a) CG assumed objects, (b) CG images, (c) experimentally measured images.


Looking at the experimentally measured images of the 50 µm and 100 µm apertures on the left side of Figure 6.7, notice that the target drifted to a lower pixel in the y direction as the detector was scanned along the z axis. This misregistration visualizes the effect of both a changing magnification and off axis aberrations which skew the orientation of the point spread function, Figure 6.8. This artifact can be eliminated with a constant magnification design which is optimized for minimum aberrations.

Figure 6.8: A point spread function askew from off-axis aberrations.

Another instance to be discussed while investigating targets of increasing size is when the target is imaged to the edge pixels of the CCD. If the target is imaged to the edge of the detector array field-of-view (FOV), the information about the target which is blurred outside of the FOV will be lost. Additionally, the simulation and reconstruction algorithms are based on transfer theory assuming a shift-invariant point spread function. Mathematically this assumes that the psf and OTF matrices are circulant, that each row has equivalent elements where the values are shifted and the end values wrap around to the other side as shown in equation (6.5).


[a b c d]
[d a b c]
[c d a b]
[b c d a]        (6.5)
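The circulant assumption can be demonstrated numerically: a circulant matrix product is a circular convolution, which the Fourier transform diagonalizes. A small sketch:

```python
import numpy as np

def circulant(first_row):
    """Each row is the previous one rotated right by one, so the end
    values wrap around exactly as in equation (6.5)."""
    c = np.asarray(first_row, dtype=float)
    return np.array([np.roll(c, k) for k in range(len(c))])

C = circulant([1.0, 2.0, 3.0, 4.0])   # rows: abcd, dabc, cdab, bcda
x = np.array([1.0, 0.0, 0.0, 0.0])

# A circulant product is a circular convolution, which the FFT
# diagonalizes -- the basis of the shift-invariant transfer model.
direct = C @ x
via_fft = np.real(np.fft.ifft(np.fft.fft(C[:, 0]) * np.fft.fft(x)))
print(np.allclose(direct, via_fft))  # True
```

The wrap-around in the matrix is exactly the wrap-around of Figure 6.9, which is why a field stop is needed when a target sits at the edge of the FOV.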

Figure 6.9 shows the wrap around in the y direction that is assumed if the psf is on the edge of the field. A field stop is proposed to limit the FOV to be smaller than the useable detector area. Pixels outside of the FOV will record the blurred intensity from targets imaged to the edge of the FOV. By recording the fully blurred target the reconstruction will be improved.

Figure 6.9: The psf of a pixel at the edge of the detection area.


7. PROTOTYPE DEMONSTRATION WITH FOUR TARGETS

The DOIS Prototype was demonstrated with four known multi-spectral/spatial targets. The sources include a Tungsten Halogen lamp, a Mercury lamp, a HeNe laser and a GreNe laser. For complete control, each source illuminated a known template which became the target. Figure 7.1 is a view of the targets imaged by a conventional lens and recorded with the monochrome CCD camera.


Figure 7.1: The four targets: a Tungsten Halogen Lamp, a Mercury Lamp, a HeNe and a GreNe.

7.1 Testbed Specifications

The Tungsten Halogen lamp is a broadband source, as shown in Figure 7.2. It illuminated a 50 µm pinhole, creating a several pixel object and preventing images of the filament. The spectral emission of the Mercury lamp is shown in Figure 7.3. Mercury makes an excellent spectrometer target; it is a multi-line emitter with six major lines in the DOIS spectral range. Of particular interest are a pair of emission lines, called a doublet, at 577nm and 579nm. DOIS can resolve this doublet, demonstrating that the system's spectral resolution is better than 2nm! The aperture used with the Mercury lamp is a 1 x 1 mm cross, X, with 10 µm slits. The remaining sources were two Helium Neon lasers. The HeNe emits at 632.8 nm and the GreNe at 542 nm. They illuminated a 5 µm and 10 µm pinhole respectively, simulating monochromatic point sources.


Figure 7.2: Spectral Distribution of the Newport Tungsten Halogen Lamp (a) reprinted from the Newport operations manual, (b) measured with DOIS.


Figure 7.3: Spectral radiance curve of the Mercury Lamp, with lines at 365, 404, 435, 546 and 577/579 nm, (a) reprinted from Applied Optics, (b) measured with DOIS (step size: dz = 0.1cm, dλ = 0.8nm-0.3nm).


The four targets were placed an equal distance from DOIS and beamsplitters were aligned to create the illusion that they were all in the same object plane. Figure 7.4 is the schematic of the demonstration testbed. The Tungsten Halogen lamp was directly in line with DOIS. The Mercury and laser sources were reflected into the field of view with beamsplitters. The object distance s_o was 91.5" and the DOE was held constant. The CCD camera was manually scanned and the detector carrier location and inch scale readings were recorded for each spectral image. The 256x256 monochromatic images of each target at different wavelengths are given in Figure 7.5.


Figure 7.4: Schematic of the four target testbed.

Figure 7.5: Sample of the key spectral DOIS images in this demonstration: 415nm (Tungsten Halogen lamp), 435nm (lamp & Mercury), 470nm (Tungsten Halogen lamp), 542nm (lamp, Mercury & GreNe), 546nm (lamp & Mercury), 560nm (Tungsten Halogen lamp), 578nm (lamp & Mercury), 600nm (Tungsten Halogen lamp), 632.8nm (lamp & HeNe).


7.2 Results

DOIS was used to image the multi spectral/spatial targets. Appendix A is a catalog of all of the images recorded for this experiment. As previously discussed, the DOE was kept stationary and the detector was stepped through a wide range of distances. As the detector was moved, the various targets popped in and out of the image depending on their spectral content, as seen in Figure 7.5. However, since Tungsten Halogen is a greybody emitting over the entire visible range, the image of the 50 µm pinhole was present continuously in the z range for λ from 383nm to 650nm. The Tungsten pinhole's intensity grew at longer wavelengths and its center shifted several pixels because of the magnification changes. The best focus of each spectral line was determined and the corresponding detector locations were recorded. The table in Figure 7.6 shows the calculation of the emission line peak wavelengths from this experiment. The results correspond perfectly to the known emissions of the lasers and the Mercury lamp of Figure 7.3, and are remarkably precise considering the manual stepping and reading of the carrier location.

source     s_o [cm]   DOE carrier [cm]   Det carrier [cm]   inch scale [in]   s_i [cm]   peak wavelength [nm]
HeNe       232.41     7                  24                 0.508             20.2       633
Hg Lamp    232.41     7                  26.1               0.515             22.3       579
Hg Lamp    232.41     7                  26.1               0.481             22.4       577
Hg Lamp    232.41     7                  27.5               0.4875            23.8       546
GreNe      232.41     7                  27.7               0.504             23.9       542
Hg Lamp    232.41     7                  34.3               0.488             30.6       435
Hg Lamp    232.41     7                  36.8               0.51              33.0       407
Hg Lamp    232.41     7                  36.8               0.396             33.3       404
Hg Lamp    232.41     7                  41                 0.43              37.4       365

Figure 7.6: Peak spectral lines of the target, calculated from the locations of best focus.


Attempts were made to experimentally verify the Mercury lamp's spectral emission curve with the grating monochromator. Of particular interest is the doublet at 577 nm and 579 nm. The signal from the lamp was low and the smallest useable exit slit width was approximately 0.2 mm, corresponding to a bandwidth of 5 nm, making the results unusable. However, this emphasizes the usefulness of even this inefficiently designed DOIS.

To determine the spectral radiance curves of the targets, series of images were recorded with known increments of dz. The spectral radiance is defined as the intensity value of the image at various wavelengths. A C program was written to open each image and record the value of a specific pixel in a list. This list was plotted against wavelength creating a measured spectral radiance curve. A different approach needed to be implemented when the change in magnification throughout the spectral range caused the image to drift across several pixels. Under this condition, a window was defined around the image location and the value extracted for each image is the maximum pixel value in the window.
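The extraction step might be sketched as follows (in Python rather than the original C; the cube layout and window coordinates are assumptions for illustration):

```python
import numpy as np

def spectral_radiance(cube, pixel=None, window=None):
    """Extract a spectral radiance curve from an image stack (z, y, x):
    either the value of one fixed pixel, or -- when magnification drift
    moves the image -- the maximum value inside a window around it."""
    if window is not None:
        (y0, y1), (x0, x1) = window
        return cube[:, y0:y1, x0:x1].max(axis=(1, 2))
    y, x = pixel
    return cube[:, y, x]

# Hypothetical 3-image stack with a target that drifts one pixel:
cube = np.zeros((3, 4, 4))
cube[0, 1, 1], cube[1, 1, 2], cube[2, 2, 2] = 10.0, 20.0, 30.0
print(spectral_radiance(cube, window=((1, 3), (1, 3))))  # [10. 20. 30.]
```

The window maximum is what keeps the curve continuous when the image wanders across pixel boundaries, at the cost of a slight bias toward bright noise.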

The experiment was executed twice, each time with a different spectral resolution.

The first data run was low resolution, testing the entire visible spectrum. It incorporated manual translation of the detector carrier in increments of dz=0.1mm. The inch scale vernier was at a constant setting. The second was high resolution, concentrating on a limited spectral range around each emission line. The detector carrier was kept stationary and the inch vernier was adjusted in 0.005" or 0.01" increments.

The results from the dz=0.1mm, low resolution recording of the Mercury Lamp are plotted in Figure 7.3(b). There is excellent agreement between the measured and known spectral emission lines of the characteristic curve 7.3(a) from Applied Optics [1968]. However, the DOE diffraction efficiency [Figure 5.5] weighted the relative intensity of each spectral line.

The results from the Tungsten Halogen pinhole are found in Figure 7.7. Included are 6 spectral images clipped from the full images to show just the 50 µm pinhole. The spectral radiance plot is constructed from data recorded in low resolution steps. The measured data points are plotted as • and a line represents the best fit. An artifact of the changing magnification is that the image drifts from a condition of being centered on a pixel to one where the image straddles several pixels, as shown in Figure 7.8. This alignment problem explains the ripple effect on the intensity versus wavelength curve.

Figure 7.7: Tungsten Halogen Data (a) spectral images of the 50 µm pinhole at λ = 400, 450, 500, 550, 600 and 650 nm, (b) the measured spectral radiance curve, showing the data points, the best fit and the saturation level.

Figure 7.8: Sketch of image drifting between pixels: maximum when the image is centered on a pixel, minimum when the image is between pixels.


Another difficulty of this data run was that the signal beyond 525nm saturated the detector array. A more accurate plot of the Tungsten Halogen is found in Figure 7.2(b). The experiment was run again with a larger source whose image was several pixels, avoiding pixel hopping, and with the gain set to prevent saturation. The results accurately measure the spectral radiance when compared to the handbook curve in Figure 7.2(a).

The results from the GreNe and HeNe laser sources are shown in Figures 7.9 and 7.10. In addition to the spectral radiance plots, a representative series of images is given in each figure. The images are 32x32 windows which have been clipped from the 256x256 recorded images. As expected, DOIS recorded narrowband high intensity spikes for both. Figure 7.9 shows the HeNe narrowband spectral curve centered at 632.9nm. Figure 7.10 shows the same for the GreNe at 542nm. Notice that with constant spatial steps, dz=0.005", the change in magnification causes the spectral step size to change within the spectral range. At 634nm, the spectral steps were 0.4nm, yet at 541.5nm, dλ=0.2nm.

Figure 7.9: HeNe Data (a) spectral images of the 5 µm pinhole from 631.8 to 634.4 nm, (b) the measured spectral radiance curve (dz=0.005"=0.0125cm, dλ~0.3-0.4nm).


Figure 7.10: GreNe Data (a) spectral images of the 10 µm pinhole from 541.5 to 542.5 nm, (b) the measured spectral radiance curve (dz=0.005"=0.0125cm, dλ~0.2-0.3nm).

The Mercury target was examined a second time with small step sizes around each Hg emission line. Figure 7.11 shows the resulting spectral curves for the 365, 404, 435, 546 and 577/579nm lines with the clipped in-focus spectral images.

Close observation of the 576.9nm image in Figure 7.11(e) reveals a focused cross with a highly defocused cross, and a center that looks like a diamond. Moving to 579nm, the diamond comes to focus as the center of the 579nm image, and the 577 cross defocuses and disappears. This corresponds to the computer generated images in Chapter 6.1. This recorded data around the 577nm and 579nm Hg doublet shows that the doublet is resolved without processing. However, still greater discrimination is obtained with the processing applied in Chapter 8.


Figure 7.11: The measured spectral radiance curves of each Mercury line with the in-focus 32x32 pixel spectral image(s) of the center of the cross. (a) 365 nm line (dz=0.01"=0.025cm), (b) 404nm line (dz=0.01"=0.025cm, dλ~0.3nm), (c) 435nm line (dz=0.01"=0.025cm, dλ~0.3nm), (d) 546nm line (dz=0.005"=0.0125cm, dλ~0.3nm), (e) 577/579nm doublet (dz=0.005", dλ~0.3nm).


8. OBJECT RECONSTRUCTION ALGORITHMS

The results in Chapter 7 show that DOIS performs image spectrometry. The image quality and spectral resolution available directly from DOIS will be useful in many applications. Nevertheless, applying an object reconstruction algorithm can improve

DOIS's performance by removing the blur from surrounding channels, resulting in a more accurate representation of the target's object cube. Several object reconstruction/ deconvolution techniques are presented here. They have been applied to both computer generated (CG) and experimentally measured 3D DOIS image cubes.

In Chapters 1 and 4, DOIS was described as a diffractive spectral sectioning device.

The output of the DOE, described by the Spectral Gaussian Coordinate Transform, is an image space which is mathematically equivalent to the output image space of a three-dimensional imaging microscope, described as spatial sectioning microscopy.

There are proven digital image restoration techniques used in 3D microscopy that eliminate blurring from out-of-focus adjacent slices of the 3D object. It follows that these same techniques can be adapted and applied to DOIS images to remove blurring caused by the out-of-focus spectral images. This chapter reviews the algorithms which have been adapted and applied to DOIS.

Since not all applications require the same resolution, three general processing techniques will be discussed in this chapter: nearest neighbor, inverse filtering and constrained iterative deconvolution. Each approach provides a different amount of deblurring at a different computational expense.

The nearest neighbor algorithm reconstructs an image by looking only at the impact from the adjacent images on either side. This requires very little prior knowledge and could be applied in real time.


Inverse filtering tries to eliminate the effect of the instrument function on the output image cube. The system OTF must be known from either theoretical prediction or direct measurement. The main task is to invert the OTF and multiply the image set by this inverse.

Variations include applying an apodization or regularized inverse filter for noise suppression. Additionally, tools such as singular value decomposition may be required to invert the large, possibly singular, OTF matrix. Inverse filtering reconstruction is limited by the cutoff frequencies of the OTF; frequencies beyond the cutoff can't be recovered.

Iterative techniques which apply physical constraints, such as positivity, to the reconstructed object can theoretically reconstruct frequencies beyond the OTF cutoff. An additional advantage of iterative algorithms is that they don't require the possibly difficult task of inverting the OTF matrix.

These algorithms should provide equal or greater reconstruction for the DOIS spectral sectioning system than is available with spatial sectioning 3D microscopy, because of two advantages. First, when imaging a 3D spatial object such as a photo-luminescent cell, the cell itself will cause blurring and information loss due to absorption and scattering through the object. The spectral dimension in DOIS won't cause this degradation.

Secondly, 3D microscopy has the demanding task of removing the blur from slices with very low z-spatial resolution. The spatial image in the z direction is normally on the order of the psf blur, requiring reconstruction in the very low and noisy regions of the OTF.

However in DOIS the dispersion is so great that the total spectral range translates into a dz spatial region which is many orders of magnitude greater than the spatial z width of the psf, with a ratio similar to the radial x,y spatial imaging. Therefore, in most situations reconstruction around and beyond the OTF cutoff isn't necessary.

It is important to note that even the simplest nearest neighbor method produces a substantial improvement. However, the most accurate results are obtained with the SVD inverse filtering algorithm.


8.1 Digital Representation

In Chapter 4 the object and image cubes were represented as continuous functions of x, y, z and λ. However, as seen in Figure 2.1, the images are sampled in x and y by the detector array and in z by stepping the array along the optical axis. This is best modeled by discrete object and image functions. In practice the object, image and psf cubes are more accurately represented by three-dimensional matrices. Recall equation (4.10):

i(x_i, y_i, z_i) = ∫∫∫_{−∞}^{∞} o(x, y, z) h(x_i − x, y_i − y, z_i − z) dx dy dz

This equation can be written as summations rather than integrals with discrete axis intervals Δx, Δy and Δz:

i(lΔx, pΔy, jΔz) = Σ_{m=1}^{N} Σ_{q=1}^{N} Σ_{k=1}^{N} o(mΔx, qΔy, kΔz) h(lΔx − mΔx, pΔy − qΔy, jΔz − kΔz)   (8.1)

Using subscripts to describe the Δz location:

i_j(x, y) = Σ_{m=1}^{N} Σ_{q=1}^{N} Σ_{k=1}^{N} o_k(mΔx, qΔy) h_{j−k}(x − mΔx, y − qΔy)   (8.2)

where x = lΔx and y = pΔy on the image side, and mΔx, qΔy index the object samples. The value of j − k represents the number of steps of defocus.
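Equation (8.2) is just a discrete spatial-spectral convolution: each spectral image is a sum of object channels blurred by defocus psfs. A minimal Python sketch of this forward model (a hypothetical toy cube with numpy; `psf(dz)` is an assumed function returning the 2D kernel at `dz` steps of defocus):

```python
import numpy as np

def forward_model(obj, psf):
    """Discrete model of eq. (8.2): each spectral image i_j is the sum of
    every object channel o_k blurred by the defocus psf h_{j-k}."""
    N = obj.shape[0]
    img = np.zeros_like(obj, dtype=float)
    for j in range(N):
        for k in range(N):
            # circular 2D convolution via FFT (the sketch ignores edge effects)
            H = np.fft.fft2(psf(j - k))
            img[j] += np.fft.ifft2(H * np.fft.fft2(obj[k])).real
    return img
```

With a delta-function psf at zero defocus this reduces to the identity, which is a quick sanity check on the indexing.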

8.2 Nearest Neighbor Reconstruction

The Nearest Neighbor technique is routinely used in three-dimensional microscopy for deblurring optically sectioned image sets, using data from a single focal plane above and a single focal plane below to correct the central plane. This is a reasonable approximation for two reasons. First, since spectral images from adjacent focal planes will contribute most strongly to the central plane, their effects are most important to consider. Secondly, these adjacent image planes are actually a collection of all of the object channels with various amounts of defocus. This rationale was first used by Castleman [1979] and later by Agard [1989] to justify using only the nearest neighbors.

Simplifying equation (8.2) by dropping the x and y variables models the system as a series of spatial slices:

i_j = Σ_{k=1}^{N} o_k * h_{j−k} = Σ_{k=1−j}^{N−j} o_{j+k} * h_{−k}   (8.3)

This simply states that the image is a sum of convolutions of the various object spectra with the appropriate defocus psfs. Pulling out and solving for the in-focus k=0 term:

i_j = o_j * h₀ + Σ_{k=1−j}^{−1} o_{j+k} * h_{−k} + Σ_{k=1}^{N−j} o_{j+k} * h_{−k}   (8.4)

o_j * h₀ = i_j − Σ_{k=1−j}^{−1} o_{j+k} * h_{−k} − Σ_{k=1}^{N−j} o_{j+k} * h_{−k}   (8.5)

Here h₀ is the in-focus psf of the DOE. This equation states that the object at level j, convolved with the in-focus psf, is given by the image at level j minus a sum of adjacent object channels that have been blurred by the out-of-focus psfs h_{−k}.

This suggests that the object at channel j can be recovered by subtracting from the image at channel j a series of adjacent objects blurred by the defocus transfer function. If a simultaneous solution approach is abandoned, the adjacent object channels are not available. However, the adjacent images are available. From equation (8.4), each image contains the corresponding object channel plus a sum of defocused adjacent object channels. Furthermore, ignoring the effect of the in-focus psf, recognize that both the recorded image i_{j−1} and image i_{j+1} contain the out-of-focus contributions from the entire stack. In the positive direction, blurring image i_{j−1} by one step of defocus provides a good approximation to the out-of-focus contributions that contaminate i_j. Combining this with a similar defocus in the negative direction:

o_j ≈ i_j − c (i_{j−1} * h_{+1} + i_{j+1} * h_{−1})   (8.6)

where h_{−1} and h_{+1} are psfs that approximate the blurring due to defocus by one step in either direction (−Δz and +Δz), and c is an adjustable constant (c=0.45 was found to be appropriate). This suggests that the defocused structures can be partially removed by subtracting images from adjacent planes that have been convolved with the appropriate defocus psf.

The nearest neighbor algorithm of equation (8.6) was implemented in Mathematica and applied to twenty 2D spectral images of the Mercury doublet. The code is shown in

Figure 8.1. The resulting reconstructed objects are shown alongside the recorded images in Figure 8.2. This technique significantly reduced the image blurring and greatly improved the spectral resolution. The original and reconstructed spectral radiance curves of a pixel are in Figure 8.3. The dip between 577nm and 579nm was increased from 20% in (a) to 55% in (b). Additionally, since the psf is non-symmetric around focus, the recorded data had an incorrect higher value at 579nm. The use of separate psfs for opposite defocus steps Δz = +1 and −1 reconstructed the equal intensities of each line. The best results were obtained with the section 3.4.2 experimentally measured psf images, one from each side of the in-focus plane, for h_{−1} and h_{+1}.

The nearest neighbor algorithm is particularly useful when one specific spectral image is needed. It is very fast and requires little memory. Computations could be done using generally available image processing hardware. Thus, it should be possible to do all of the required calculations in essentially real time. While one cannot expect this technique to recover the object function exactly, it does promise to improve images and spectral resolution at reasonable expense.
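Equation (8.6) amounts to one FFT-blur of each neighboring slice and a weighted subtraction. A hedged Python sketch of the same idea (hypothetical numpy arrays, not the Mathematica of Figure 8.1; end slices reuse their lone neighbor twice, mirroring that listing):

```python
import numpy as np

def nearest_neighbor(images, h_plus, h_minus, c=0.45):
    """Eq. (8.6): o_j ~ i_j - c*(i_{j-1}*h_{+1} + i_{j+1}*h_{-1}).
    images: (N, Nx, Ny) recorded cube; h_plus/h_minus: one-step defocus psfs."""
    Hp, Hm = np.fft.fft2(h_plus), np.fft.fft2(h_minus)
    blur = lambda im, H: np.fft.ifft2(np.fft.fft2(im) * H).real
    N = len(images)
    est = np.empty_like(images, dtype=float)
    for j in range(N):
        if j == 0:                       # end slices reuse the lone neighbor
            est[j] = images[j] - 2 * c * blur(images[1], Hm)
        elif j == N - 1:
            est[j] = images[j] - 2 * c * blur(images[N - 2], Hp)
        else:
            est[j] = images[j] - c * (blur(images[j - 1], Hp)
                                      + blur(images[j + 1], Hm))
    return est
```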


For[z=1, z<=20, z++,
  IMAGE[[z]] = cfft[image[[z]]];]

For[i=1, i<=32, i++,
  For[j=1, j<=32, j++,
    For[z=1, z<=20, z++,
      IMAGEP1[[z,i,j]] = IMAGE[[z,i,j]] OTF1[[i,j]];
      IMAGEN1[[z,i,j]] = IMAGE[[z,i,j]] OTFn1[[i,j]];]]]

For[z=1, z<=20, z++,
  imagen1[[z]] = Abs[cinfft[IMAGEN1[[z]]]];
  imagep1[[z]] = Abs[cinfft[IMAGEP1[[z]]]];]

For[z=2, z<=19, z++,
  estimate[[z]] = image[[z]] - n (imagen1[[z-1]] + imagep1[[z+1]]);]
estimate[[1]] = image[[1]] - 2 n imagep1[[2]];
estimate[[20]] = image[[20]] - 2 n imagen1[[19]];

Figure 8.1: Mathematica implementation of the Nearest Neighbor algorithm for 20 images.

[Figure 8.2 panels: recorded → reconstructed image pairs at λ = 574.8 to 580.5nm in 0.3nm steps]

Figure 8.2: Recorded Mercury images and corresponding restored objects with Nearest Neighbor technique.


[Figure 8.3 panels: (a) measured spectral radiance, (b) filtered spectral radiance; normalized signal vs. wavelength, 575-581nm]

Figure 8.3: Restoration of Mercury spectral radiance curve with Nearest

Neighbor technique, (a) the recorded spectra, and the reconstructed spectra with (b) n=0.45.

The Nearest Neighbor technique worked for the above application; however, it is important to recognize that the doublet X target has only a few high spatial and spectral frequencies. The computer simulation of section 6.2 predicts that this target would create little blurring, but as the target size increases and the emission broadens the blurring becomes stronger and the reconstruction more challenging. It becomes important to choose an algorithm which takes into account the influence of the whole image cube, not just the adjacent slices, such as Inverse Filtering.


8.3 Inverse Filtering

A more accurate approach to removing the out-of-focus information requires that the full contributions of all of the observed data be utilized. The most direct way to utilize the entire image cube is to perform the inverse filter operations presented in sections 4.2 and 4.3:

O = H⁻¹ I

Starting with the object, image and point-spread function three-dimensional data cubes, recall equation 4.11:

i(x,y,z) = h(x,y,z) *** o(x,y,z)

A technique used in representing multi-dimensional data sets is lexicographical ordering. Each three-dimensional matrix, such as o(x,y,z), can be converted to an N³ × 1 column vector by using a stacking operator [Hunt 1977]. Applying lexicographical ordering to 32x32x32 data cubes forms vectors with 32768 elements each, leaving a one-dimensional convolution:

i(v) = h(v) * o(v)   (8.7)

Taking the Fourier transform of each vector:

I(ψ) = H(ψ) O(ψ)   (8.8)

The object's Fourier spectrum vector O(ψ) can be computed from:

O(ψ) = I(ψ) / H(ψ)   (8.9)

An inverse Fourier transform is applied to O(ψ) to reconstruct o(v), which can be re-partitioned in three dimensions.

The three-dimensional matrix calculation has been reduced to a scalar division.

Notice that consideration must be given to the possibility of dividing by zero. Another concern is that the regions in the image spectrum where the OTF values are very small correspond to frequencies that are mainly noise. Dividing these frequencies by the low values of the OTF would cause an over-amplification of noise. Two methods are generally used to deal with this: apodization and applying a regularized inverse filter.

8.3.1 Apodization Inverse Filter

Apodization is a technique to deal with dividing the image spectrum by the lowest values of the OTF, by simply setting a maximum for the value of 1/H(ψ) equal to a constant, a. Values greater than a are set to zero [Erhardt 1985]:

1/H(ψ)   if |1/H(ψ)| < a
0        if |1/H(ψ)| ≥ a   (8.10)

An apodized inverse filter was implemented in Mathematica. The code is presented in Figure 8.4. The lexicographical ordering function is performed by the command Flatten[ ] and was used throughout the image processing, allowing the data cubes to be "unwrapped". The function Partition[ ] is used after the processing has been performed to regenerate the three-dimensional organization of the data cube. Erhardt used this approach with a theoretical OTF. However, the differences between experimental and theoretical OTFs are significant; hence better results were obtained using the experimental OTF of Figure 3.17. Figure 8.5 contains plots of lexicographical vector versions of the measured 3D psf and OTF. Figure 8.6 shows the results obtained from applying the apodized inverse filter to the challenging iris target and its computer generated 16 pixel counterpart.


For[p=1, p<=5, p++,
  For[m=1, m<=32768, m++,
    If[Abs[1./FLATOTF[[m]]] <= (20*(p-1) + 10),
      INOTF[[p,m]] = 1./FLATOTF[[m]],
      INOTF[[p,m]] = 0.];];];

FLATMIMAGE = cflatfft[Flatten[timage3D]];
FLATMIMAGE = cflatfft[Flatten[iris]];

For[m=1, m<=5, m++,
  OBJ = FLATMIMAGE*INOTF[[m]];
  obj = Abs[cflatinfft[OBJ]];
  parimagemono = Partition[obj, 1024];
  For[k=1, k<=32, k++,
    obj3D[[m,k]] = Partition[parimagemono[[k]], 32];
    tobj3D[[m,k]] = Transpose[RotateRight[Transpose[RotateRight[obj3D[[m,k]], 16]], 16]];];
  For[z=1, z<=32, z++,
    trans[[z]] = Transpose[tobj3D[[m,z]]];
    heightrecon[[m,z]] = trans[[z,16]];
    zrrecon[[m,z]] = tobj3D[[m,z,16]];];
  heightrecon[[m]] = Transpose[heightrecon[[m]]];
  zrrecon[[m]] = Transpose[zrrecon[[m]]];];

Figure 8.4: The Mathematica code for inverse reconstruction with apodization.

[Figure 8.5 panels: list plots of normalized amplitude vs. vector element # v, 0 to 32768]

Figure 8.5: List plot of the flattened (a) psf, (b) OTF.

[Figure 8.6 panels: reconstructions with a = 10, 20, 30, 40, 50, 100, 1000, 10000]

Figure 8.6: Object results from apodization reconstruction with CG images (left column) and recorded iris images (right column).


8.3.2 Regularized Inverse Filter

In inverse filtering situations, it is standard practice to use a regularized inverse filter to minimize the effects of noise that can dominate at high spatial frequencies [Sezan 1992]. The form of the regularized inverse filter for the scalar equation (8.9) is:

O(ψ) = [ H*(ψ) / ( H*(ψ) H(ψ) + α ) ] I(ψ)   (8.11)

where α is a constant whose choice is based on the signal-to-noise ratio of the data and should be in the range of 0.001 to 0.1. In the experimental reconstruction a value of α=0.005 was found to be appropriate. Notice that, like the apodization technique, the α in the denominator of equation (8.11) essentially sets a threshold maximum of 1/H(ψ) to deal with the low values of the OTF.
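The scalar filter of equation (8.11) is one line of array arithmetic; a sketch with hypothetical Fourier-domain vectors:

```python
import numpy as np

def regularized_inverse(I, H, alpha=0.005):
    """Eq. (8.11): O = conj(H)*I / (conj(H)*H + alpha); alpha trades
    noise amplification against reconstruction bandwidth."""
    return np.conj(H) * I / (np.conj(H) * H + alpha)
```

As alpha tends to zero this reduces to the plain inverse filter I/H of equation (8.9).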

A regularized inverse filter can also be applied in the three-dimensional inverse equation:

O(ξ,ψ,ζ) = H⁻¹(ξ,ψ,ζ) I(ξ,ψ,ζ)   (8.12)

O = [H*H + α]⁻¹ H* I   (8.13)

However, this requires taking the inverse of the [H*H + α] matrix, a task which can be difficult if the data is singular. Further discussion can be found below in section 8.3.3.

The scalar regularized inverse filter of equation (8.11) was implemented in

Mathematica and applied to the 32x32x32 recorded image cube of the same Mercury X target. The code is printed in Figure 8.7.

flatimage = Flatten[image];
flatpsf = Flatten[psf];
FLATIMAGE = cfft[flatimage];
FLATOTF = cfft[flatpsf];
a = .005;
FLATOBJ = FLATIMAGE*Conjugate[FLATOTF]/((FLATOTF*Conjugate[FLATOTF]) + a);
flatobj = Abs[cinfft[FLATOBJ]];
parobj = Partition[flatobj, 1024];
obj3D = Table[0., {z,32}, {x,32}, {y,32}];
For[j=1, j<=32, j++,
  obj3D[[j]] = Partition[parobj[[j]], 32];];
tobj3D = Table[0., {z,32}, {x,32}, {y,32}];
For[j=1, j<=32, j++,
  tobj3D[[j]] = Transpose[RotateRight[Transpose[RotateRight[obj3D[[j]], 16]], 16]];];

Figure 8.7: Mathematica code for the regularized inverse filter.

The results from applying the regularized inverse filter to the Mercury X target are presented in Figure 8.8. As pictured in Figure 8.8(a), when α=0 the regularized inverse filter defaults to the standard inverse filter of equation (8.9). The results show that the reconstruction is ineffective because of the noise. The addition of the regularized inverse filter noise term, α, significantly improved the image quality. The image sets in Figure 8.8 (b), (c) and (d) show the reconstructed object sets with α equal to 0.1, 0.01 and 0.005 respectively. A large α such as 0.1 in Figure 8.8 (b) is similar to a low pass filter. Setting α to a smaller number increases the band pass for better reconstruction but also increases the possibility of noise. Figure 8.9 shows the normalized spectral radiance of one pixel before and after processing.

Figure 8.8: Resulting object sets after regularized inverse reconstruction with various α: (a) α=0, (b) α=0.1, (c) α=0.01, (d) α=0.005.

Figure 8.9: Spectral radiance of pixel 16,16 before and after (bold) regularized inverse reconstruction with α=0.005.

Continuing the exploration of the spectral/spatial relationship, this algorithm was applied to both the computer generated and experimentally measured increasing aperture doublet targets from Figure 6.7. The results are pictured in Figures 8.10, 8.11 and 8.12.

As before, each 2D image is a z(λ) vs. y cross-section of a 3D data cube with a corresponding spectral curve plotted below it.

Figure 8.10: CG doublet objects, images and reconstructed objects: (a) computer generated doublet objects, (b) CG images, (c) inverse reconstruction without regularized noise reduction, (d) regularized inverse reconstruction with α=0.0005, (e) regularized inverse reconstruction with α=0.00005.

Figure 8.11: Measured and regularized inverse reconstructed Hg doublet images for increasing apertures up to the iris: (a) measured doublet objects, (b) regularized inverse reconstruction with α=0.005.

Figure 8.12: Results of regularized inverse filter on measured 400µm and iris images: (a) α=0.0001, too much noise; (b) α=0.001, noise and reconstruction balanced but not very good; (c) α=0.005, low noise, best reconstruction; (d) α=0.01, low noise, little reconstruction.


This lexicographic regularized inverse filter worked very well on the measured X images of Figure 8.8 and the CG data of Figure 8.10. Even the long 32 pixel aperture of the right hand column is reconstructed. However, it was ineffective on the experimentally measured image cubes of the increasing apertures shown in Figures 8.11 and 8.12. Notice the unpredicted peaks at the edges of the spectral curves caused by both the apodization in Figure 8.6 and the regularized inverse filter applied to measured image cubes. These peaks are artifacts of an invalid assumption during lexicographical ordering that the psf is shift invariant and circular. The limited success that has been achieved is due to the fact that the targets have been spatially and spectrally located in the center of our 3D data cube, similar to being padded with zeros. However, as the target increased, i.e. the iris aperture, the circular approximation is seen as the psf wrapping around to the other side of our data cube. This problem is addressed by not making the circular assumption, by explicitly performing the z dimension matrix multiplication, and by implementing an SVD algorithm which also addresses probable difficulties such as a non-square data cube (MxNxN) and singular OTF matrices.

8.3.3 Singular Value Decomposition Inverse Filter

DOIS records a series of 2D spectral images, i₁, i₂, …, i_N. These are saved as a 3D image cube image[[x,y,z]]. The following expands equation (8.2) for an N=4, four color target. It describes each spectral image i_j(x,y) as the superposition of each spectral object o_k(x,y) convolved with a defocused point spread function h_{jk}(x,y) [Mooney 1995].

i₁(x,y) = h₁₁(x,y)**o₁(x,y) + h₁₂(x,y)**o₂(x,y) + h₁₃(x,y)**o₃(x,y) + h₁₄(x,y)**o₄(x,y)
i₂(x,y) = h₂₁(x,y)**o₁(x,y) + h₂₂(x,y)**o₂(x,y) + h₂₃(x,y)**o₃(x,y) + h₂₄(x,y)**o₄(x,y)
i₃(x,y) = h₃₁(x,y)**o₁(x,y) + h₃₂(x,y)**o₂(x,y) + h₃₃(x,y)**o₃(x,y) + h₃₄(x,y)**o₄(x,y)
i₄(x,y) = h₄₁(x,y)**o₁(x,y) + h₄₂(x,y)**o₂(x,y) + h₄₃(x,y)**o₃(x,y) + h₄₄(x,y)**o₄(x,y)   (8.14)

Following the theory of section 4.2 for simulation and reconstruction, 2D Fourier transforms are applied:

I_j(ξ,ζ) = H_{j1}(ξ,ζ)O₁(ξ,ζ) + H_{j2}(ξ,ζ)O₂(ξ,ζ) + H_{j3}(ξ,ζ)O₃(ξ,ζ) + H_{j4}(ξ,ζ)O₄(ξ,ζ),  j = 1…4   (8.15)

Rewritten in matrix form (every element evaluated at the same spatial frequency (ξ,ζ)):

[I₁]   [H₁₁ H₁₂ H₁₃ H₁₄] [O₁]
[I₂] = [H₂₁ H₂₂ H₂₃ H₂₄] [O₂]
[I₃]   [H₃₁ H₃₂ H₃₃ H₃₄] [O₃]
[I₄]   [H₄₁ H₄₂ H₄₃ H₄₄] [O₄]   (8.16)

This equation can be evaluated for each spatial frequency (ξ,ζ). Let's take a closer look at the H matrix:

    [H₁₁(ξ,ζ) H₁₂(ξ,ζ) H₁₃(ξ,ζ) H₁₄(ξ,ζ)]
H = [H₂₁(ξ,ζ) H₂₂(ξ,ζ) H₂₃(ξ,ζ) H₂₄(ξ,ζ)]
    [H₃₁(ξ,ζ) H₃₂(ξ,ζ) H₃₃(ξ,ζ) H₃₄(ξ,ζ)]
    [H₄₁(ξ,ζ) H₄₂(ξ,ζ) H₄₃(ξ,ζ) H₄₄(ξ,ζ)]   (8.17)

In Chapter 4 great lengths were taken to show that the point spread function is shift invariant. Applied here, it means that each H_{ij} can be expressed by the Fourier transform of a point spread function with an appropriate amount of defocus, H_{Δz}, where Δz = i − j; i.e. H₀ is in focus, H₁ is 1 step away from focus and H₋₁ is 1 step in the other direction.

    [H₀(ξ,ζ)  H₋₁(ξ,ζ) H₋₂(ξ,ζ) H₋₃(ξ,ζ)]
H = [H₁(ξ,ζ)  H₀(ξ,ζ)  H₋₁(ξ,ζ) H₋₂(ξ,ζ)]
    [H₂(ξ,ζ)  H₁(ξ,ζ)  H₀(ξ,ζ)  H₋₁(ξ,ζ)]
    [H₃(ξ,ζ)  H₂(ξ,ζ)  H₁(ξ,ζ)  H₀(ξ,ζ)]   (8.18)

This is a block Toeplitz matrix formed from Fourier transforming images of a point source at various amounts of defocus. Typically, when considering a large number of spectral channels it will become a sparse band diagonal matrix, since the OTF approaches zero at significant amounts of defocus.
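At one spatial frequency, the matrix of equation (8.18) can be filled from a 1-D table of defocus OTF values. A sketch (the array `P` is hypothetical, indexed so the in-focus value sits at position N−1):

```python
import numpy as np

def toeplitz_H(P):
    """Build the N x N Toeplitz matrix of eq. (8.18): entry (i, j) is the
    OTF at defocus dz = i - j. P has 2N - 1 entries for dz = -(N-1)..N-1."""
    N = (len(P) + 1) // 2
    H = np.empty((N, N), dtype=complex)
    for i in range(N):
        for j in range(N):
            H[i, j] = P[(i - j) + N - 1]
    return H
```

Each diagonal is constant, and entries far from the main diagonal correspond to large |Δz|, where the OTF values are near zero, which is exactly the band-diagonal structure described above.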

The task is to invert this H matrix; however, it can have various problems which cause it to be singular and difficult to invert [Mooney 1995]. There can fail to be a unique solution if one or more of the N equations is a linear combination of the others, a condition called row degeneracy. Or, if all equations contain certain variables in the exact same linear combination, inversion can fail due to column degeneracy. A set of equations that is degenerate is called singular. Singular value decomposition is a technique to solve for the inverse matrix even with singular matrices, and in fact to diagnose the singularities.

This is the method of choice for solving most linear least squares problems [Press 1988].

The SVD method is based on the linear algebra theorem that any MxN matrix H whose number of rows M is greater than or equal to its number of columns N, can be written as the product of an MxN column-orthogonal matrix U, an NxN diagonal matrix W with positive or zero elements, and the transpose of an NxN orthogonal matrix V.

H = U · W · Vᵀ = U · diag(w_j) · Vᵀ   (8.19)

The matrices U and V are each orthogonal in the sense that their columns are orthonormal:

Uᵀ · U = Vᵀ · V = 1   (8.20)


Since V is square, it is also row-orthonormal, V · Vᵀ = 1. The inverse of the diagonal matrix W is easily determined by replacing the diagonal elements with 1/w_j:

W⁻¹ = diag(1/w_j)   (8.21)

The decomposition and determination of U, W, and Vᵀ is accomplished by a standard routine found in Numerical Recipes. In Mathematica the function

SingularValues[H] outputs the three matrices: U, the elements of W, and V. The inverse of H is found from:

H⁻¹ = V · diag(1/w_j) · Uᵀ   (8.22)

The elements of W are called the singular values of the system. They help visualize the missing cone of our data (Figures 8.20 and 8.21). The noise can be suppressed by adding a regularization filter to the w_j values, replacing 1/w_j with:

w_j / (w_j² + α)   (8.23)

Substituting this SVD into the inverse reconstruction equation:

O(ξ,ζ) = V(ξ,ζ) · diag( w_j / (w_j² + α) ) · Uᵀ(ξ,ζ) · I(ξ,ζ)   (8.24)
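Equations (8.19) through (8.24) amount to a regularized pseudoinverse. A sketch with numpy's `svd` standing in for Mathematica's `SingularValues`:

```python
import numpy as np

def svd_regularized_inverse(H, alpha=0.0):
    """Invert H per eq. (8.22), but replace each 1/w_j with the
    regularized w_j/(w_j**2 + alpha) of eq. (8.23)."""
    U, w, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt.conj().T @ np.diag(w / (w**2 + alpha)) @ U.conj().T
```

With alpha = 0 this is the ordinary pseudoinverse; a small positive alpha caps the amplification of the small singular values that populate the missing cone.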

This SVD inverse filtering algorithm was implemented in Mathematica. The first step was to form the OTF matrix. The best results were obtained from using a combined point spread function which is an average of the experimentally measured and theoretical psfs found in Figure 8.13. The measured psf provides information about the influence of aberrations from our DOE, while the theoretical psf, calculated from the geometric cone, provides the very small values which fall below the detectable range of the detector and would mistakenly be set to zero.

Figure 8.13: Cross-section of the 3D combination psf used in SVD object reconstruction, plotted against spectral step z(λ) (contrast enhanced to show detail).

Looking again at equation (8.18), one might notice that it takes seven values of H_{Δz} to form the four z(λ) rows in the H matrix. To reconstruct an OTF with 32 z(λ) rows, values from 63 z(λ) planes are required, as shown in Figure 8.13. This provides images of a point source at focus (plane number 32) and 31 defocus steps on either side of focus. Since it has been established that the psf is shift invariant, a "basis" OTF matrix P is formed by taking the 2D Fourier transform of the image at each step of defocus with the procedure listed in Figure 8.14.

For[z=1, z<=63, z++,
  P[[z]] = cfft[combinedpsf[[z]]];];
P = P/Max[Abs[P]];

This code takes the Fourier transform of the (32x, 32y) images at 63 steps of defocus, z, in the combined psf of Figure 8.13, then normalizes the results to form the normalized OTF, P[[Δz,ξ,ζ]].

Figure 8.14: Mathematica code to calculate the "basis" OTF, P[[Δz,ξ,ζ]].

The OTF matrix is formed by substituting the appropriate defocus data P[[Δz,ξ,ζ]]. OTF is a four-dimensional matrix, NxN (32x32) for each spatial frequency (ξ,ζ), shown in Figure 8.15. The values are complex, Re + iIm, yet the standard SVD code is written for real elements. However, SVD can handle non-square matrices, MxN, so our NxN complex matrix is converted into a 2NxN real matrix where the first N rows are the real values and the remainder are the imaginary values. It is this matrix which will be inverted with the SVD routine and then rewritten as an NxN complex inverse matrix using the conversion matrix inim, formed in Figure 8.16.

OTF = Table[0., {ξ,32}, {ζ,32}];
SessionTime[]
For[ξ=1, ξ<=32, ξ++,
  For[ζ=1, ζ<=32, ζ++,
    (* puts the proper values into the 4D OTF matrix for each spatial
       frequency (ξ,ζ): row i, column j holds the OTF at defocus
       Δz = i - j, i.e. the basis value P[[32 + i - j, ξ, ζ]] *)
    OTF[[ξ,ζ]] = Table[P[[32 + i - j, ξ, ζ]], {i,32}, {j,32}];];];
SessionTime[]

Figure 8.15: Mathematica code to form the OTF matrix.


im = Table[0., {z,32}, {y,64}];
For[x=1, x<=32, x++,
  For[y=1, y<=32, y++,
    im[[x,y]] = reali[[x,y]];
    im[[x,y+32]] = I*reali[[x,y]];];];
ListDensityPlot[Abs[im]];
inim = PseudoInverse[im];
ListDensityPlot[Abs[inim]];

The SVD algorithms in Numerical Recipes and Mathematica only work on real matrices, and the OTF matrix has complex numbers. However, it is possible to rewrite the square NxN complex matrix as a 2NxN matrix, called ri, with the real values in the first N rows and the imaginary values in the lower N rows. The SVD is used to decompose and invert this 2NxN ri matrix.

The code above forms the matrix im, whose first 32 columns hold an identity matrix (reali) and whose last 32 columns hold a diagonal matrix with the complex value i along the diagonal. Its pseudoinverse, inim, can be multiplied by the inverse of ri to reorder the inverse matrix into an NxN complex inverse matrix.

Figure 8.16: Mathematica code to create inim, which converts the NxN complex matrix into a 2NxN matrix of all real values.
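A standard, exactly invertible cousin of this stacking is worth noting: an NxN complex matrix embeds in the 2Nx2N real matrix [[Re, −Im], [Im, Re]], inversion commutes with the embedding, and real-only SVD routines apply. A sketch (this is the textbook embedding offered for illustration, not the thesis's 2NxN code):

```python
import numpy as np

def complex_to_real(H):
    """Embed N x N complex H in the equivalent 2N x 2N real matrix."""
    return np.block([[H.real, -H.imag], [H.imag, H.real]])

def real_to_complex(R):
    """Read the complex matrix back out of its real embedding."""
    N = R.shape[0] // 2
    return R[:N, :N] + 1j * R[N:, :N]
```

Because the embedding preserves matrix products, inverting the real matrix and reading it back gives exactly the complex inverse.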

The code in Figure 8.17 creates the matrix IMAGE by taking the Fourier transform of the 2D spectral images at each z(λ). As shown, an additional dimension was added to the IMAGE matrix, m, where each value of m represents a different target. Since the OTF doesn't change, the algorithm can be applied to many image cubes while having to invert the matrix only once. In fact, once a proper OTF and α value are established, the inverse need only be performed once. It can be stored in memory for future use.


"forms a matrix with Fourier Transforms of 6 measured image cubes to reconstruct 6 object cubes simultaneously "

I M A G E ^ T a b X c [ 0 . r { S r 6 } r { Z r 3 2 ^ f ' { Z r 3 2 } f { y f 3 2 ^ ]

}

[0 ,{m,6>,{z,32

} - , { z

,32

} - , { Y

;

For[•»!,*<=6,•++,

For[r*l,t<«32,z++,

IMAGE[[BrZll^cfft[doubletinage[[BrZ]]];

For[y=l,y<=32,y++,

Figure 8.17: Mathematica code to create the Fourier space matrix IMAGE.

INOTF=Table[0.,{x,32},{y,32}];
ri=Table[0.,{x,64},{y,32}];
singularvalues=Table[0.,{x,32},{y,32},{i,32}];
OBJ=Table[0.,{m,6},{z,32},{x,32},{y,32}];
SessionTime[]
For[x=1,x<=32,x++,
For[y=1,y<=32,y++,
For[k=1,k<=32,k++,
ri[[k]]=Re[OTF[[x,y,k]]]; ri[[k+32]]=Im[OTF[[x,y,k]]];];
"form the 2NxN real valued matrix"
{u,w,v}=SingularValues[ri]; singularvalues[[x,y]]=w;
"use the SVD command to find the three matrices U, W & V*"
inri=Transpose[v].DiagonalMatrix[w].u; "find the inverse"
INOTF=inri.inim; "reorder into the complex inverse matrix"
For[m=1,m<=6,m++, "Find the object from O=H^-1 I"
OBJ[[m,x,y]]=INOTF.REIMAGE[[m,x,y]];];];
Print[x];];
SessionTime[]
71630.88
73563.42

{Max[singularvalues],Min[singularvalues]}

Figure 8.18: Mathematica code finds the SVD matrices, inverts the OTF and finds the reconstructed OBJ matrix.

The singular value decomposition, inverse and object reconstruction are performed with the code listed in Figure 8.18. The matrix ri is the 2NxN matrix of real values. The function SingularValues[ri] outputs the three matrices u, w, and v. The singular values w are stored for future investigation and the α noise suppression is applied. The inverse is stored in INOTF, then multiplied by the IMAGE, resulting in OBJ. This is then inverse Fourier transformed to reconstruct the object cube in the program listed in Figure 8.19.

"Reorder the matrices"
REOBJ=Table[0.,{m,6},{i,32},{j,32},{k,32}];
For[z=1,z<=32,z++,
For[x=1,x<=32,x++,
For[y=1,y<=32,y++,
For[m=1,m<=6,m++,
REOBJ[[m,z,x,y]]=OBJ[[m,x,y,z]];];];];];

"Take the inverse Fourier Transform of OBJ to find the object cubes"
For[m=1,m<=6,m++,
For[z=1,z<=32,z++,
object[[m,z]]=Abs[cinfft[REOBJ[[m,z]]]];];
Print[m];];

Figure 8.19: Mathematica code takes the inverse Fourier transform of the OBJ matrix.

The singular values of the OTF matrix are plotted in Figure 8.20. Each image is a density plot of the spatial frequencies contained at a given spectral frequency.

Notice that there is a region of black pixels in the center of each image which grows with increasing spectral frequency. This represents low frequency spatial/spectral information that isn't imaged by DOIS or recoverable with processing. Plotted in Figure 8.21 as a cross-section at each spectral frequency, the frequently discussed missing cone which plagues such imaging systems is seen [Barrett 1981, Mooney 1995, Descour 1995].

Singular value decomposition inverse reconstruction provides a unique and valuable opportunity to quantify the missing cone. The missing spatial and spectral frequencies can be determined from the design parameters to predict the cutoff frequencies, and can be used as a figure of merit when designing and comparing future imaging spectrometers.
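One way to make that figure of merit concrete is to count the singular values that survive a noise-dependent threshold. The NumPy sketch below is illustrative only: the smooth Gaussian kernel is a synthetic stand-in for a DOIS transfer matrix, and the 1% noise floor is an assumption.

```python
import numpy as np

# Hedged sketch: the number of singular components above a noise threshold
# quantifies how much of the object is recoverable; components below it lie
# in the "missing cone" of the transfer matrix.
N = 32
z = np.arange(N)
H = np.exp(-((z[:, None] - z[None, :]) / 6.0) ** 2)  # smooth low-pass stand-in

w = np.linalg.svd(H, compute_uv=False)
threshold = 1e-2 * w.max()                 # assumed 1% noise floor
recoverable = int(np.sum(w > threshold))   # components usable in reconstruction
```

For this smooth kernel only a handful of the 32 components survive the threshold; a sharper (wider-band) transfer matrix would keep more.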

(a) increasing spectral frequencies →

(b) increased contrast on 1st 10 increasing spectral frequencies →

(c) the 10 images of (b), only larger

Figure 8.20: Singular value w_n(ξ,ζ) matrix plots.

(spatial frequency cross-sections at increasing spectral frequencies)

Figure 8.21: Cross-section shows missing cone.


8.3.3.1 SVD results

The following figures present results from applying the SVD object reconstruction algorithm from equation 8.24 to computer generated (CG) and experimentally measured image cubes, (32x, 32y, 32z(λ)) in size. Figure 8.23 depicts the results from reconstructing the CG object cubes of increasing square apertures in front of a Hg doublet that were generated in section 6.2.2. As expected, the doublet is indeed reconstructed using the SVD. Looking closer at the results, Figures 8.24 and 8.25 picture the same CG data sets but displayed as series of spectral images. The first is from the 8x8 square aperture, and the second from the 16x16 square. Objects were reconstructed with the regularization filter value α set to 0.5, 0.1 and 0.05 from equation 8.23. Notice that only 30 images are printed for each image series; for convenience the 1st and 32nd were omitted from the figure. Figure 8.22 lists the location values z(λ) corresponding to the (32x, 32y) images pictured in the series. Recall that each step represents dz = 0.125 mm and a spectral step size dλ ≈ 0.25 nm in both the CG and experimental cubes. The entire 32 step series records an 8 nm band, a resolution that rivals any spectral filter.

 2   3   4   5   6   7   8   9  10  11
12  13  14  15  16  17  18  19  20  21
22  23  24  25  26  27  28  29  30  31

Figure 8.22: Table of the z values for the following output series.

In (a) of Figures 8.24 and 8.25, the square aperture emits at locations 14 and 20. The effect of DOIS imaging is seen in (b): the edges of the squares are blurred, and the presence of the square is inaccurately measured in more than the two locations; the image is now blurred over most z locations. The SVD inverse filter algorithm was applied with α equal to 0.5, 0.1 and 0.05, with the results displayed in (c), (d) and (e) respectively. The reconstruction does reduce the occurrence of the square in the inaccurate locations, and the square is sharpened. As α is made lower, the reconstruction is better; however, noise and ghost images are introduced which aren't present in the (b) image cube.

Interestingly, the reconstructed object squares appear to be shifted by two z locations, dλ = 0.5 nm. The cause of this shift is indeterminate; an investigation into this matter is recommended as a suggestion for further work.

The SVD algorithm was applied to four different experimentally measured image cubes. Each target was assembled to demonstrate a variety of spectral/spatial characteristics. For consistency, Figure 8.26 shows that the SVD reconstruction works well on the experimentally measured 577/579 nm Mercury doublet with the X aperture. The noise regularization value α was set to 0.5, 0.1 and 0.05 for (b), (c) and (d) respectively in the figures of all four targets.

The second target combines a 542 nm GreNe laser point source imaged alongside the 546 nm Hg lamp with a trapezoidal aperture. The recorded image cube and reconstructed object cubes are shown in Figure 8.27. The SVD reconstruction improved both the spectral and spatial quality of the images without the artifacts demonstrated with the other algorithms.

To touch on larger targets, an almost full FOV rectangle aperture was placed in front of the Hg doublet in Figure 8.28. Although the 2 nm doublet can't be resolved, DOIS measures it to be a line with about 4 nm FWHM using the SVD reconstruction, and the spatial information of the rectangle aperture is reconstructed. Finally, the mercury illuminated iris image cube was processed with the SVD. The results are shown in Figure 8.29; notice that a "hot spot" on the bulb has been reconstructed.


(a) The cross-section plots from the CG objects.

(b) The cross-section plots from the CG images.

(c) SVD object reconstruction results α=.5

(d) SVD object reconstruction results α=.1

(e) SVD object reconstruction results α=.05

Figure 8.23: z(λ) vs. y cross-section and spectral plots from SVD applied to CG doublet.

(a) CG doublet square 8x8 pixels

(b) CG image set

(c) SVD reconstruction with α=.5

(d) SVD reconstruction with α=.1

(e) SVD reconstruction with α=.05

Figure 8.24: SVD reconstructed object series from the CG 8x8 pixel doublet aperture, previously depicted as cross-sections in 8.23.

(a) CG doublet square (16x16 pixels) target at #13 & 20

(b) CG image set with measured psf

(c) SVD reconstruction with α=0.5

(d) SVD reconstruction with α=.1

(e) SVD reconstruction with α=.05

Figure 8.25: SVD reconstructed object series from the CG 16x16 pixel doublet aperture, previously depicted as cross-sections in 8.23.


(a) experimentally measured images

(b) SVD reconstruction α=1.5

(c) SVD reconstruction α=1

(d) SVD reconstruction α=.75

Figure 8.26: SVD reconstructed object series from the experimentally measured Hg 577/579 nm X aperture.


(a) experimentally measured images

(b) SVD reconstruction α=1.5

(c) SVD reconstruction α=1

(d) SVD reconstruction α=.75

Figure 8.27: SVD reconstructed object series from the experimentally measured 542 nm GreNe point-source with a trapezoidal 546 nm Mercury target.

(a) experimentally measured images

(b) SVD reconstruction α=1.5

(c) SVD reconstruction α=1

(d) SVD reconstruction α=.75

Figure 8.28: SVD reconstructed object series from the experimentally measured Mercury rectangle (notice poor reconstruction when the target extends to the edges).


(a) experimentally measured images

(b) SVD reconstruction α=1.5

(c) SVD reconstruction α=1

(d) SVD reconstruction α=.75

Figure 8.29: SVD reconstructed object series from the experimentally measured Iris illuminated by a Mercury bulb (notice the hot spot).


8.4 Constrained Iterative Deconvolution

The SVD inverse reconstruction algorithm with a regularized filter for noise suppression provides a linear restoration. One drawback of such linear methods is that negative ripples build up around strong features from the low values of the OTF, and no reconstruction is possible beyond the OTF cutoff. Nonlinear restoration methods that constrain the result to be positive can eliminate this problem while using positivity and other a priori information to provide a better restoration. The drawback is that these methods are invariably iterative and hence quite computationally expensive in three dimensions. An additional challenge is to develop a relaxation parameter optimized for a particular application and image cube.

The strategy is to develop a positively constrained solution, or guess object cube go, that when convolved with the known smearing function h will regenerate the observed image cube i. The pixel-by-pixel differences between the convolved guess and the observed data are used to update the guess.

gi^k = go^k * h

go^(k+1) = go^k + γ(i - gi^k);  if go^(k+1) < 0 then go^(k+1) = 0

k = k + 1

(8.25)

This is the Jansson-vanCittert constrained iterative deconvolution algorithm with relaxation parameter γ [Jansson 1984], which applies the a priori information that the intensity is a positive value and that the spectrum is a transmission function with a maximum value of 1. The best results were obtained with an initial guess equal to the regularized inverse filtered data set and a weighted relaxation parameter, γ.
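For readers outside Mathematica, the update of equation (8.25) can be sketched in NumPy; circular convolution via FFT is assumed, and the 1-D doublet-like signal, Gaussian kernel width and γ = 1 are illustrative stand-ins rather than DOIS data.

```python
import numpy as np

# Hedged 1-D sketch of the Jansson-vanCittert iteration: blur the guess with
# the known smearing function, add gamma times the residual, clip negatives.
def jansson_vancittert(i, H, gamma=1.0, iters=50):
    go = i.copy()                                        # initial guess: the data
    for _ in range(iters):
        gi = np.real(np.fft.ifft(np.fft.fft(go) * H))    # gi^k = go^k * h
        go = np.clip(go + gamma * (i - gi), 0.0, None)   # update + positivity
    return go

n = 64
obj = np.zeros(n); obj[[20, 28]] = 1.0                   # doublet-like object
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2); h /= h.sum()
H = np.fft.fft(np.roll(h, -n // 2))                      # centred kernel -> transfer fn
img = np.real(np.fft.ifft(np.fft.fft(obj) * H))          # simulated blurred "image"
rec = jansson_vancittert(img, H)
```

After a few dozen iterations the two peaks sharpen toward their true locations while the positivity clip keeps the baseline from ringing negative.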

(8.26)


The vanCittert iterative technique with a Jansson relaxation parameter, equation (8.25), was implemented in Mathematica and is listed in Figure 8.30. The algorithm was applied to the X Mercury doublet. The results after 10 and 20 iterations are pictured in Figure 8.31. The spectral radiance curve plotted in Figure 8.32 shows that the doublet is better resolved with every iteration.

gama=flatmimage/Max[flatmimage];
gobj=regularizedobj;
Print["=start time"SessionTime[]]; Print["=original maxobj"Max[gobj]];
Do[
GOBJ=cflatfft[gobj];
gi=Abs[cflatinfft[GOBJ*FLATOTF]];
diff=flatmimage-gi;
gobj=gobj+gama*diff;
gobj=(gobj+Abs[gobj])/2;
objnum[[c]]=gobj;
Print["iteration"];Print[c];
Print["=maxobj"Max[gobj]];
Print["=maxdiff"Max[diff]];
Print["=mindiff"Min[diff]];
Print["time"SessionTime[]];
,{c,20}]

Figure 8.30: Mathematica code for the Jansson-vanCittert algorithm.

(a) recorded images

(b) after 10 iterations

(c) after 20 iterations

Figure 8.31: Resulting images after Jansson-vanCittert reconstruction with various iterations: (a) original recorded data, (b) n=10, (c) n=20.


Figure 8.32: Spectral radiance of pixel (16,16) after Jansson-vanCittert reconstruction at various iterations (horizontal axis: z(λ) spectral step #, 0-30).

The Jansson-vanCittert algorithm didn't work for large targets or data with excessive noise. Various relaxation functions were implemented and several hundred iterations were run without ever reaching convergence. This inspired the addition of another step to the algorithm: the difference term is inverse filtered with a regularized inverse filter, as described in equation (8.27). This achieved much better results that converge within a few iterations.

gi^k = go^k * h

diff = i - gi^k

DIFF = Fourier[diff]

regulardiff = Abs[ InverseFourier[ DIFF H* / (H H* + α) ] ]

go^(k+1) = go^k + regulardiff;  if go^(k+1) < 0 then go^(k+1) = 0

k = k + 1

(8.27)
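As an illustrative NumPy sketch (the 1-D signal, Gaussian kernel and α = 0.01 are made-up stand-ins, not DOIS data), the regularized-difference step of equation (8.27) can be written as follows: the residual is sharpened by H*/(|H|² + α) before being added back to the guess.

```python
import numpy as np

# Hedged sketch of the regularized inverse filtered vanCittert iteration.
def regularized_vancittert(i, H, alpha=0.01, iters=5):
    go = i.copy()
    for _ in range(iters):
        gi = np.real(np.fft.ifft(np.fft.fft(go) * H))   # reblur the guess
        DIFF = np.fft.fft(i - gi)                       # residual spectrum
        reg = np.conj(H) / (np.abs(H) ** 2 + alpha)     # regularized inverse filter
        go = np.clip(go + np.real(np.fft.ifft(DIFF * reg)), 0.0, None)
    return go

n = 64
obj = np.zeros(n); obj[[20, 28]] = 1.0                  # doublet-like object
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2); h /= h.sum()
H = np.fft.fft(np.roll(h, -n // 2))
img = np.real(np.fft.ifft(np.fft.fft(obj) * H))
rec = regularized_vancittert(img, H)
```

Because each residual pass is pre-sharpened by the regularized inverse, useful convergence arrives in a handful of iterations instead of hundreds.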

The regularized inverse filtered vanCittert algorithm of equation (8.27) was implemented in Mathematica with the code listed in Figure 8.33. It was applied to the CG doublet and the experimentally measured iris images. The rms difference between the recorded images and the guess images is plotted against the number of iterations in Figure 8.34; both show convergence in fewer than 5 iterations.

"start with measured image"
image=flatmimage/Max[flatmimage];
gobj=image;
a=.01;
Print["=start time"SessionTime[]];
Do[
gimage=Abs[cflatinfft[cflatfft[gobj]*FLATOTF]];
gimage=gimage/Max[gimage];
diff=image-gimage;
diff=diff-Min[diff];
n=Ceiling[c];
REGULARDIFF=cflatfft[diff]*Conjugate[FLATOTF]/((FLATOTF*Conjugate[FLATOTF])+a);
regulardiff[[n]]=Abs[cflatinfft[REGULARDIFF]];
gobj=gobj+regulardiff[[n]];
rms[[c]]=Mean[Abs[regulardiff[[n]]]];
objnum[[n]]=gobj;
Print["iteration"c]; Print["=rms"rms[[c]]];
,{c,20}]
Print["time"SessionTime[]];

For[i=1, i<=1, i++,
par[[i]]=Partition[reblurredobject,1024];
For[j=1, j<=32, j++,
obj[[i,j]]=Partition[par[[i,j]],32];
obj[[i,j]]=Transpose[RotateRight[Transpose[RotateRight[obj[[i,j]],16]],16]];];];

For[m=1, m<=1, m++,
For[z=1, z<=32, z++,
trans[[z]]=Transpose[obj[[m,z]]];
h[[m,z]]=trans[[z,16]];
w[[m,z]]=obj[[m,z,16]];];
h[[m]]=Transpose[h[[m]]];
w[[m]]=Transpose[w[[m]]];];

Figure 8.33: Mathematica code for regularized inverse filtered vanCittert iterative reconstruction.

(rms error vs. iterations, plotted over 20 iterations for the doublet and 80 for the iris)

Figure 8.34: rms error after each iteration for (a) the CG doublet and (b) the iris.


The results are depicted in Figures 8.35, 8.36 and 8.37. The objects' spectral and spatial information is reconstructed, including the lamp hot spot and spectral doublet in the iris data. Although this algorithm does look promising, it requires the regularized inverse filter, which was shown to make approximations that aren't always appropriate; see the wrap-around problems in the iris plots, which result in improper peaks in the tails of the spectral plots. It was also found that the algorithm had to be manipulated to fit the target. This aside, the constrained iterative technique could still be a solution to the deconvolution problem for certain applications, particularly if DOIS were required to image the same or similar targets repeatedly.

(zx and zy cross-section plots vs. z(λ) after 1, 5, 10, 15 and 20 iterations)

Figure 8.35: CG doublet; results of regularized inverse filtered vanCittert iterative reconstruction with α = 0.001.


(zx and zy cross-sections of the original measured data, then (a) after the 3D regularized inverse with α = 0.01, and after (b) 1, (c) 5, (d) 10 and (e) 20 iterations)

Figure 8.36: Results of iris reconstruction after (a) 3D regularized inverse filter and (b-e) regularized inverse filtered vanCittert iterative reconstruction with α = .01.

(a) after 10 iterations

(b) after 20 iterations

Figure 8.37: Reconstructed iris objects after 10 & 20 iterations.


9. CONCLUSION

The Diffractive Optic Image Spectrometer (DOIS) is a viable multi-spectral imager.

This dissertation demonstrated and proved the concept of spectral imaging with a diffractive optic. More importantly, the theory of spectral imaging with a diffractive optic, referred to as diffractive spectral sectioning, was developed and an appropriate digital image restoration algorithm was derived to deconvolve the DOE transfer function from the recorded image cube, providing a high resolution spectral/spatial object cube.

The spectral resolving power of 288 (λ/Δλ = 577nm/2nm) demonstrated by the DOIS prototype can be utilized to predict the capability of future diffractive optic image spectrometer designs applied in any spectral range: UV, visible or IR.

I conclude that the best image spectrometer designed with a diffractive optic would include a kinoform: a high efficiency, multi-level DOE with a refractive surface to balance the spherical and off-axis aberrations. An intermediate zoom lens should be used to perform the spectral scanning while maintaining a constant magnification. The most accurate and powerful reconstruction algorithm is the singular value decomposition inverse filter with a regularization term for noise suppression.

9.1 DOIS Advantages

DOIS has several advantages over current image spectrometers. It incorporates a simple one-axis translation on a rugged platform, making it insensitive to the vibrations which limit Fourier transform spectrometers. DOIS is programmable, providing single spectra, narrowband or full spectrum image cubes. It can provide coarse or fine spectral resolution by choosing the stepping increment and an object reconstruction algorithm at various levels of computational expense.


DOIS uses mainly off the shelf components. The DOE fabrication does have a large first time expense to generate a master; however, multiple copies can be replicated at minimal expense. The design is not limited by the availability of materials like conventional thin film spectral filters. Once built for one application, additional wavelengths within a wide spectral range can be viewed for other applications without changing components.

DOIS solves a common problem associated with spectral filters: the central wavelength of a filter's bandpass can shift due to environmental factors such as temperature. This can be corrected for in a DOIS system with a simple change in position, making on-board calibration and realignment possible.

DOIS provides enough spectral and spatial image quality without post-detection processing that there are applications where the recorded image cube can be utilized to represent the target. Contrary to computer-tomography approaches, even when object reconstruction processing is required, the pre-processed images are at least recognizable, so the operator can have confidence while recording the data.

The DOIS spectrometry feature can be added to existing camera systems with a simple lens/mount replacement, providing additional information for the difficult task of target identification. DOIS can also be a cost effective solution to spectroscopy applications where imaging isn't required but would serve as a bonus. The imaging will help in minimizing misalignment and improve tracking of moving targets. The additional pixels within the field of view could act as simultaneous data measurements that can be averaged to improve accuracy and reduce noise.

As with most imaging spectrometers, the scanning mechanism will limit use in applications with short lifetimes; however, as presented below in section 9.2.3, a DOIS system can be designed with no moving parts, recording the entire image cube in a single "snapshot".


9.2 Suggestions for future work

During this dissertation several ideas were conceived for continued research with diffractive optic imaging spectrometers; they are presented below. Additionally, it was discussed in section 6.2.2 that a field stop should be added to the optical train to aid in reconstructing targets on the edge of the field of view, and that an investigation should be performed into the cause of a spectral shift in the reconstructed data in Figure 6.7.

9.2.1 Dual Waveband design using multiple orders

While working with the 2 level zone plate, 5 separate diffractive orders were observed by scanning the detector closer to the DOE. Each order is located at f/m, where f is the focal length of the first diffractive order and m is the diffractive order number. While viewing wideband emitting targets, the orders started to superimpose one another. Why not use this effect to design a multi-band spectral imager, where the DOE images far IR radiation in the first order and the mid IR in the second order?

Figure 9.1: Two waveband design, mid IR and far IR, with λo = 7.3 μm: (a) spectral diffraction efficiency for each order, and (b) the two wavelengths at each focal position.

A 16 level DOE designed at λo = 7.3 μm images 8 to 12 μm far IR radiation in its first order and 3 to 5 μm mid IR in its second diffractive order. Figure 9.1 shows the 1st and 2nd order spectral diffraction efficiency for a λo = 7.3 μm DOE. The two wavelengths that are imaged at each focal position are listed in Figure 9.1(b). One can think of various detector configurations, as well as an array of both mid and far IR detectors.

This multi-order concept was demonstrated with the DOIS visible prototype: the second order image of a 633 nm HeNe laser came to focus at the same plane as the third order image of a 422 nm monochromator slit.
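The arithmetic behind that observation can be checked directly. Under the standard zone-plate scaling consistent with the f/m spacing described above, the m-th order of wavelength λ focuses where the product m·λ matches the design value, so two orders share a focal plane when m1·λ1 = m2·λ2:

```python
# Two diffractive orders focus at the same plane when m1*lam1 == m2*lam2
# (wavelengths in nm). Checking the prototype observation: 2nd-order 633 nm
# HeNe vs. 3rd-order 422 nm monochromator slit.
lam_hene, lam_slit = 633.0, 422.0
same_plane = abs(2 * lam_hene - 3 * lam_slit) < 1e-9   # 1266 vs. 1266
```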

9.2.2 Programmable DOE with an SLM, variable focus device

Depicted in Figure 9.2, an electronically controlled variable focus diffractive lens or spatial light modulator (SLM) can replace the DOE in the diffractive optic image spectrometer to create a DOIS with no moving parts. The components are mounted with a fixed separation. The SLM will select the image spectra by electronically changing the DOE ring spacing, which will change the focal length [Domash 1996].

Spatial Light Modulator

Figure 9.2: Design for a static DOIS with a controllable, variable focus lens.

9.2.3 Array of DOEs

Finally, an image spectrometer can be designed with diffractive optics and no moving parts to record the entire 3D image cube simultaneously. Figure 9.3 is the schematic of a DOIS "snapshot" technique which records a set of images at predetermined spectral lines. Each lens of the array is fabricated to have the same design focal length but at different wavelengths. The detector array is segmented into a region for each wavelength, or an array of detector arrays could be assembled. This configuration eliminates scanning and magnification differences. However, unlike a segmented detector with an array of filters, if the temperature shifts or requirements change, the array can easily be repositioned at a different distance to detect another set of spectra. Additionally, by fabricating one lens to be refractive, such as a Fresnel lens, a conventional full-band image of the entire scene can be recorded as well.

segmented CCD

DOE lens array

Figure 9.3: Design for a static DOIS with a DOE lens array.

In conclusion, DOIS, the diffractive optic imaging spectrometer presented in this dissertation, is a practical, high resolution, compact, economical, rugged, programmable, multi-spectral imager. With proper selection of the DOE substrate and detector material, it can be built to operate at ultraviolet, visible or infrared wavelengths for applications in surveillance, remote sensing, law enforcement, environmental monitoring, laser communications, medical imaging and laser counter intelligence.

APPENDIX A: CATALOG OF IMAGES

Sources of interest | spectrum [nm] | # of pages
12 spectral images from the composite target, showing all four targets. | 365 - 632.8 | 1 p. (156)
Images from a coarse resolution run of the entire visible spectrum. | 357.6 - 644.4 | 16 pp. (157-172)
Fine resolution of X at 365 nm Mercury line. | 362.2 - 368.6 | 3 pp. (173-175)
Fine resolution of X at 404 nm Mercury line and Tungsten Halogen point source. | 396.0 - 413.6 | 6 pp. (176-181)
Fine resolution of X at 435 nm Mercury line and Tungsten Halogen point source. | 420.4 - 445.7 | 7 pp. (182-188)
Fine resolution of GreNe pinhole at 542 nm, X at 546 nm Mercury line and Tungsten Halogen point source. | 531.3 - 554.8 | 8 pp. (189-196)
Fine resolution of X at 577 and 579 nm Mercury doublet line and Tungsten Halogen point source. | 563.2 - 593.2 | 9 pp. (197-205)
Fine resolution of HeNe pinhole at 632.8 nm and Tungsten Halogen point source. | 628.9 - 636.3 | 2 pp. (206-207)

Note: All images have been printed at maximum contrast to enhance details.


365nm Mercury

383nm Tungsten Halogen Lamp 404nm Lamp & Mercury

415nm Tungsten Halogen Lamp 435nm Lamp & Mercury

470nm Tungsten Halogen Lamp

542nm Lamp, Mercury & GreNe 546nm Lamp & Mercury

560nm Tungsten Halogen Lamp

578nm Lamp & Mercury

600nm Tungsten Halogen Lamp 632.8nm Lamp & HeNe

[Pages 157-207 contain the remaining catalog images, each labeled with its center wavelength in nm over the ranges listed in the table above.]

REFERENCES

Agard D.A., Hiraoka Y., Sedat J.W., "Three-dimensional Microscopy: image processing for high resolution subcellular imaging," SPIE Vol. 1161 New Methods in Microscopy and Low Light Imaging, pp. 24-30 (1989).

Agard D.A., Hiraoka Y., Shaw P., Sedat J.W., "Fluorescence Microscopy in Three Dimensions," Methods in Cell Biology Vol. 30 Ch. 13, pp. 353-377, Academic Press (1989).

Aikens R.S., Agard D.A., Sedat J.W., "Solid-State Imagers for Microscopy," Methods in Cell Biology Vol. 29 Ch. 16, pp. 291-313, Academic Press (1989).

Andrews H.C., Hunt B.R., Digital Image Restoration, Prentice-Hall (1977).

Barrett H.H., Swindell W., Radiological Imaging, Academic Press (1981).

Bernhardt P.A., "Direct reconstruction methods for hyperspectral imaging with rotational spectrotomography," J. Opt. Soc. Am. 12,9 (September 1995).

Blass W.E., Halsey G.W., Deconvolution of Absorption Spectra, Academic Press (1981).

Born M., Wolf E., Principles of Optics, Pergamon Press, pp. 439-441 (1989).

Bronson R., Matrix Methods: An Introduction, Academic Press (1970).

Byrne C.L., "Block Methods for Image Reconstruction from Projections," IEEE Transactions on Image Processing Vol. 5 No. 5, pp. 792-794 (May 1996).

Byrne C.L., "Iterative Image Reconstruction Algorithms Based on Cross-Entropy Minimization," IEEE Transactions on Image Processing Vol. 2 No. 1, pp. 96-103 (January 1993).

Castleman K.R., Digital Image Processing, Prentice-Hall (1979).

Chiu M.Y., Barrett H.H., Simpson R.G., "Three-dimensional reconstruction from planar projections," J. Opt. Soc. Am. 70,7 (July 1980).

Chiu M.Y., Barrett H.H., Simpson R.G., Chou C., Arendt J.W., Gindi G.R., "Three-dimensional radiographic imaging with a restricted view angle," J. Opt. Soc. Am. 69, 10 (October 1979).

Chmelik R., "Focusing and the optical transfer function in a rotationally symmetric optical system," Applied Optics Vol. 33(17), pp. 3702-3704 (June 1994).

Cochran W.T., Cooley J.W., Favin D.L., Helms H.D., Kaenel R.A., Lang W.W., Maling G.C. Jr., Nelson D.E., Rader C.M., Welch P.D., "What Is the Fast Fourier Transform?," Proceedings of the IEEE Vol. 55 No. 10, pp. 1664-1677 (October 1967).


Conchello J.A., Hansen E.W., "Enhanced 3-D reconstruction from confocal scanning microscope images. I: Deterministic and maximum likelihood reconstructions," Applied Optics Vol. 29(26), pp. 3795-3804 (10 September 1990).

Conchello J.A., Kim J.J., Hansen E.W., "Enhanced three-dimensional reconstruction from confocal scanning microscope images. II. Depth discrimination versus signal-to-noise ratio in partially confocal images," Applied Optics Vol. 33(17), pp. 3740-3750 (10 June 1994).

Conchello J.A., "Deconvolution of fluorescence microscopy images," Proc. SPIE 2655 (1996).

Descour M.R., Dereniak E.L., "Nonscanning no-moving-parts image spectrometer," Proc. SPIE 2480, pp. 48-64 (1995).

Descour M.R., Mooney J.M., Perry D.L., Illing L., Imaging Spectrometry, Proc. SPIE 2480 (1995).

Demoment G., "Image Reconstruction and Restoration: Overview of Common Estimation Structures and Problems," IEEE Transactions on Acoustics, Speech, and Signal Processing Vol. ASSP-37(12), pp. 2024-2036 (December 1989).

Domash L.H., Chen T., Gomatam B.N., Gozewski C.M., "Computer-generated switchable diffractive elements in a polymer-dispersed liquid crystal composite," Proc. SPIE 2689-26 (1996).

Driscoll W.G., Vaughan W., Handbook of Optics, McGraw-Hill (1978).

Erhardt A., Zinser G., Komitowski D., Bille J., "Reconstructing 3-D light-microscope images by digital image processing," Applied Optics Vol. 24(2), pp. 194-200 (15 January 1985).

Faklis D., Blough C., "Diffractive optics test facility," RPC Technical Report 1316-088-G-AR (1993).

Fay F.S., Carrington W., Lifshitz L.M., Fogarty K., "Three-dimensional analysis of molecular distribution in single cells using the digital imaging microscope," SPIE Vol. 1161 New Methods in Microscopy and Low Light Imaging (1989).

Fink D.G., Christiansen D., Electronics Engineers' Handbook, pp. 11-44, 11-45, McGraw-Hill (1989).

Frieden B.R., "Optical Transfer of the Three-Dimensional Object," J. Opt. Soc. Am. 57, 1 (January 1967).

Frieden B.R., Picture Processing and Digital Filtering, Springer-Verlag (1975).

Frieden B.R., Probability, Statistical Optics, and Data Testing, Springer-Verlag (1983).


Gaskill J.D., Linear Systems, Fourier Transforms, and Optics, John Wiley & Sons (1978).

Goetz A.F.H., "Imaging spectrometry for remote sensing: Vision to reality in 15 years," Proc. SPIE 2480, pp. 2-13 (1995).

Gonzalez R.C., Wintz P., Digital Image Processing, Addison-Wesley (1987).

Goodman J.W., Introduction to Fourier Optics, McGraw-Hill (1968).

Goodman J.W., Silvestri A.M., "Some effects of Fourier-domain phase quantization," IBM J. Res. Dev. 14, 478 (1970).

Hect,

Optics,

Addison-Wesley (1989).

Herman G.T., Meyer L.B., "Algebraic Reconstruction Techniques Can Be Made Computationally Efficient," IEEE Transactions on Medical Imaging, Vol. 12 No. 3 (September 1993).

Hinnrichs M., Morris G.M., "Image multispectral sensing," United States Patent #5,479,258 (December 26, 1995).

Hinnrichs M., Massie M.A., "Image multispectral sensing: a new and innovative instrument for hyperspectral imaging using dispersive techniques," Proc. SPIE 2480, pp. 93-104 (1995).

Jansson P.A., Deconvolution with Applications in Spectroscopy, Academic Press (1984).

Javidi B., Caulfield H.J., Horner J.L., "Image deconvolution by nonlinear signal processing," Applied Optics Vol. 28(15), pp. 3106-3111 (August 1989).

Jones P.F., Aitken G.J.M., "Comparison of three-dimensional imaging systems," J. Opt. Soc. Am. 11, 10 (October 1994).

Kawata S., Ichioka Y., "Iterative image restoration for linearly degraded images. II. Reblurring procedure," J. Opt. Soc. Am. Vol. 70(7), pp. 768-772 (July 1980).

Lesem, Hirsch, and Jordan, "The kinoform: a new wavefront reconstruction device," IBM J. Res. Dev. 13, 150 (1969).

Levi L., Applied Optics: A Guide to Optical System Design, Volume I, John Wiley & Sons (1968).

Lyons D., Mikolas D., "Fabrication of large aperture optical elements on Germanium and BK-7 Glass substrates," Research Accomplishments, National Nanofabrication Facility at Cornell (1994).

Lyons D., "Image Spectrometry with a Diffractive Optic," Imaging Spectrometry, Proc. SPIE 2480, pp. 123-131 (April 1995).

Lyons D., Whitcomb K., "The DOE in DOIS, a diffractive optic image spectrometer," Diffractive & Holographic Optics Technology III, Proc. SPIE Vol. 2689 (February 1996).

Lyons D., Whitcomb K., "Image reconstruction algorithms for DOIS," Algorithms for Multispectral Imagery, Proc. SPIE Vol. 2758 (April 1996).

Lyons D., Whitcomb K., "Characterization of the DOIS prototype," Imaging Spectrometry II: New Sensors, Proc. SPIE Vol. 2819 (August 1996).

Lyons D., Whitcomb K., "Diffractive optic image spectrometer with constant magnification," United States Patent pending (March 1997).

Madjidi-Zolbanine H., Froehly C., "Holographic correction of both chromatic and spherical aberrations of single glass lenses," Applied Optics Vol. 18(14), pp. 2385-2393 (July 15, 1979).

Miyamoto, "The phase Fresnel lens," J. Opt. Soc. Am. 51, 17 (1961).

Mooney J.M., "Angularly multiplexed spectral imager," Proc. SPIE 2480, pp. 65-77 (1995).

Morris M.G., Rochester Photonics Corporation, Rochester NY, (716) 272-3010, private communications (1994).

O'Shea D.C., "Grayscale masks for diffractive optics," in Diffractive Optics, Vol. 11, 1994 OSA Technical Digest Series, pp. 119-124 (1995).

Press W.H., Flannery B.P., Teukolsky S.A., Vetterling W.T., Numerical Recipes in C, Cambridge University Press (1988).

Reynolds G.O., DeVelis J.B., Parrent G.B., Thompson B.J., The New Physical Optics Notebook: Tutorials in Fourier Optics, SPIE Optical Engineering Press (1989).

Robbins G.M., Huang T.S., "Inverse Filtering for Linear Shift-Variant Imaging Systems," Proceedings of the IEEE Vol. 60 No. 7, pp. 862-872 (July 1972).

Russ J.C., The Image Processing Handbook, CRC Press (1995).

Sawchuk A.A., "Space-Variant Image Motion Degradation and Restoration," Proceedings of the IEEE Vol. 60 No. 7, pp. 854-861 (July 1972).

Sawchuk A.A., "Space-variant image restoration by coordinate transformations," J. Opt. Soc. Am. 64, 2 (February 1974).

Schowengerdt R.A., Techniques for Image Processing and Classification in Remote Sensing, Academic Press (1983).

Sezan M.I., Selected Papers on Digital Image Restoration, SPIE Milestone Series, Vol. MS 47 (1992).

Sezan M.I., Tekalp A.M., "Survey of recent developments in digital image restoration," Optical Engineering Vol. 29 No. 5, pp. 393-404 (May 1990).

Shaw P.J., "3-dimensional microscopy using tilted views," SPIE Vol. 1161 New Methods in Microscopy and Low Light Imaging (1989).

Sitter D.N. Jr., Rhodes W.T., "Three-dimensional imaging: a space-invariant model for space-variant systems," Applied Optics Vol. 29(26), pp. 3789-3794 (10 September 1990).

Smith W.J., Modern Lens Design, McGraw-Hill (1992).

Stone T.W., Thompson B.J., Holographic and Diffractive Lenses and Mirrors, SPIE Milestone Series, Vol. MS 34 (1991).

Stone T., George N., "Hybrid diffractive-refractive lenses and achromats," Applied Optics, Vol. 27(14), pp. 2960-2971 (1988).

Swanson G.J., "Binary Optics Technology: The Theory and Design of Multi-level Diffractive Optical Elements," Lincoln Laboratory Technical Report #854 (1989).

Sweatt W.C., "Mathematical equivalence between a holographic optical element and an ultra-high index lens," J. Opt. Soc. Am. Vol. 69(3), pp. 486-487 (March 1979).

Veldkamp, Swanson G.J., "Diffractive optical elements for use in infrared systems," Optical Engineering Vol. 28 No. 6, 605 (1989).

Waggoner A., DeBiasio R., Conrad P., Bright G.R., Ernst L., Ryan K., Nederlof M., Taylor D., "Multiple Spectral Parameter Imaging," Methods in Cell Biology Vol. 30, Ch. 17, pp. 449-478, Academic Press (1989).

Ward J.E., Auth D.C., Carlson P.P., "Lens Aberration Corrected by Holography," Applied Optics, Vol. 10(4), pp. 896-900 (April 1971).

Wilson R.G., McCreary S.M., Thompson J.L., "Optical transformations in three-space: Simulations with a PC," Am. J. Phys. 60(1) (January 1992).

Wolfe W.L., Zissis G.J., The Infrared Handbook, Infrared Information Analysis Center at ERIM (1989).

Wolfe W.L., "Multispectral Imaging Devices," SPIE short course notes SC54, 27 July 1994.

Wolfram S., Mathematica, Addison-Wesley (1991).
