INTERFEROMETRIC CHARACTERIZATION OF TEAR FILM DYNAMICS
by
Brian Christopher Primeau
_____________________
A Dissertation Submitted to the Faculty of the
COLLEGE OF OPTICAL SCIENCES
In Partial Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
In the Graduate College
THE UNIVERSITY OF ARIZONA
2011
THE UNIVERSITY OF ARIZONA
GRADUATE COLLEGE
As members of the Dissertation Committee, we certify that we have read the dissertation
prepared by Brian Christopher Primeau
entitled Interferometric Characterization of Tear Film Dynamics
and recommend that it be accepted as fulfilling the dissertation requirement for the
Degree of Doctor of Philosophy
_______________________________________________________________________
Date: September 16, 2011
John Greivenkamp
_______________________________________________________________________
Date: September 16, 2011
James Schwiegerling
_______________________________________________________________________
Date: September 16, 2011
Rongguang Liang
Final approval and acceptance of this dissertation is contingent upon the candidate’s
submission of the final copies of the dissertation to the Graduate College.
I hereby certify that I have read this dissertation prepared under my direction and
recommend that it be accepted as fulfilling the dissertation requirement.
________________________________________________ Date: September 16, 2011
Dissertation Director: John Greivenkamp
STATEMENT BY AUTHOR
This dissertation has been submitted in partial fulfillment of requirements for an
advanced degree at the University of Arizona and is deposited in the University Library
to be made available to borrowers under rules of the Library.
Brief quotations from this dissertation are allowable without special permission, provided
that accurate acknowledgment of source is made. Requests for permission for extended
quotation from or reproduction of this manuscript in whole or in part may be granted by
the head of the major department or the Dean of the Graduate College when in his or her
judgment the proposed use of the material is in the interests of scholarship. In all other
instances, however, permission must be obtained from the author.
SIGNED: Brian C. Primeau
ACKNOWLEDGEMENTS
I first would like to thank John Greivenkamp for his advice and mentoring throughout my
time at the College of Optical Sciences. His guidance has gone beyond just my academic
career, and I truly appreciate his support in every way that has been provided. I would
also like to thank the rest of my committee, Jim Schwiegerling and Ron Liang, for their
advice and feedback for this dissertation. Additionally, I would like to acknowledge Jim
Wyant, who was unable to be on the committee, but provided valuable insight throughout
this research nonetheless.
Special thanks must be given to my entire family for their continuous support throughout
my seemingly endless desire to be a student. You have always pushed me to realize my
full potential and I could not be in this position without your guidance.
Finally, to Christina, words are not enough to thank you for your love, patience and
support. Your confidence kicked me into beginning the doctoral program, your
encouragement gave me strength to continue through the most difficult parts, and your
persistence led to me completing this work.
TABLE OF CONTENTS
LIST OF FIGURES ............................................................................................................. 9
LIST OF TABLES ............................................................................................................ 12
ABSTRACT ...................................................................................................................... 13
1 INTRODUCTION TO TEAR FILM MEASUREMENTS............................................ 15
1.1 Tear Film Measurement System Requirements ...................................................... 18
1.1.1 In Vivo Tear Film Measurement Requirements ............................................... 19
1.1.2 In Vitro Fluid Layer Measurement Requirements............................................ 21
1.2 Existing Tear Film Measurement Techniques ........................................................ 22
1.2.1 In Vitro Fluid Layer Measurement Methods .................................................... 22
1.2.2 In Vivo Tear Film Measurement Methods........................................................ 23
1.3 University of Arizona Tear Film Interferometer ..................................................... 36
2 WAVE NATURE OF LIGHT AND POLARIZATION................................................ 38
2.1 Description of Light’s Electric Field ....................................................................... 38
2.2 Polarization Basics .................................................................................................. 40
2.2.1 Controlling the Polarization State .................................................................... 44
2.3 Jones Vectors and Calculus ..................................................................................... 48
2.4 Interference.............................................................................................................. 51
2.4.1 Conditions for Interference .............................................................................. 53
3 TWYMAN-GREEN INTERFEROMETRY FOR SURFACE METROLOGY ............ 57
3.1 Measuring Flat Surfaces .......................................................................................... 60
3.2 Measuring Spherical Surfaces ................................................................................. 61
3.3 Measuring Aspheres ................................................................................................ 65
3.4 Phase shifting .......................................................................................................... 67
3.4.1 Temporal Phase Shifting .................................................................................. 71
3.4.2 Spatial Phase Shifting....................................................................................... 72
3.5 Source Requirements............................................................................................... 77
3.6 Detector Requirements ............................................................................................ 78
3.7 Collimator requirements.......................................................................................... 83
3.8 Beam splitter requirements...................................................................................... 83
3.9 Imaging Lens Design .............................................................................................. 85
3.9.1 Imaging Lens Aberrations ................................................................................ 87
3.9.2 Maximum Supported Fringe Frequency .......................................................... 88
3.10 Polarization TGI .................................................................................................... 90
3.11 Instrument Transfer Function ................................................................................ 93
3.12 Closing .................................................................................................................. 95
4 PHASE I: IN VITRO FLUID LAYER INTERFEROMETER ....................................... 96
4.1 Phase Shifting Method ............................................................................................ 96
4.2 Interferometer Design ........................................................................................... 100
4.2.1 Camera ........................................................................................................... 103
4.2.2 Laser Source ................................................................................................... 105
4.2.3 Collimator....................................................................................................... 106
4.2.4 Diverger .......................................................................................................... 107
4.2.5 Imaging Lens .................................................................................................. 115
4.2.6 Reference Surface .......................................................................................... 122
4.3 Contact Lens Mount .............................................................................................. 122
4.4 System Calibration, Accuracy and Precision ........................................................ 132
5 FLUID LAYER INTERFEROMETER ANALYSIS METHODS AND RESULTS .. 141
5.1 Fluid Layer Measurement Process ........................................................................ 143
5.2 Unaltered Measurements ....................................................................................... 146
5.3 Subtracting the Fourth Order Aberrations ............................................................. 150
5.4 Referencing to a Baseline Measurement ............................................................... 152
5.5 Blob Analysis ........................................................................................................ 153
5.6 Results ................................................................................................................... 158
5.7 Closing .................................................................................................................. 167
6 SIMULATED IMAGE QUALITY EFFECTS OF TEAR FILM BREAKUP ............ 169
6.1 MTF Simulation .................................................................................................... 177
6.2 Vision Quality Conclusions .................................................................................. 181
7 OCULAR GEOMETRY AND MOVEMENT ............................................................ 183
7.1 Structure of the Human Eye .................................................................................. 185
7.2 Selected Cornea Geometry Studies ....................................................................... 188
7.3 Fixation Stability and Eye Movement ................................................................... 189
7.4 Implication on TFI Design .................................................................................... 192
8 PHASE II: IN VIVO TEAR FILM INTERFEROMETER ........................................... 194
8.1 Core Interferometer ............................................................................................... 195
8.1.1 Camera ........................................................................................................... 197
8.1.2 Laser Source ................................................................................................... 197
8.1.3 Collimator....................................................................................................... 198
8.1.4 Reference Mirror ............................................................................................ 199
8.1.5 Null System .................................................................................................... 200
8.1.6 Imaging Lens .................................................................................................. 214
8.1.7 Calibration Surface ......................................................................................... 220
8.1.8 Photographs .................................................................................................... 221
8.2 Laser Safety Strategy ............................................................................................ 224
8.3 Interferometer Calibration, Accuracy and Precision ............................................. 227
8.4 Human Interface and Motion Control ................................................................... 234
8.5 Eye Tracker ........................................................................................................... 237
8.6 Fixation Target ...................................................................................................... 239
8.7 Light Budget .......................................................................................................... 242
8.8 TFI Design Summary ............................................................................................ 244
9 OPTICAL HAZARD ANALYSIS FOR AN ON-EYE INTERFEROMETER .......... 245
9.1 Classes of Laser Systems ...................................................................................... 246
9.2 Tear Film Interferometer Optical Sources ............................................................ 248
9.3 Laser Exposure Limits .......................................................................................... 251
9.4 Retina Maximum Permissible Exposure and Accessible Emission Limit ............ 252
9.4.1 MPE Implications of Accidental vs. Intentional Laser Exposure .................. 255
9.4.2 Retinal MPE Calculations for the TFI............................................................ 257
9.5 Exposure Limits for the Cornea and Lens ............................................................. 258
9.6 Additional Analysis: Eye Movements, System Alignment, Ocular Variations .... 260
9.7 Laser Safety Conclusions ...................................................................................... 265
9.8 Incoherent Guidelines ........................................................................................... 267
9.9 Optical Hazard Conclusions .................................................................................. 268
10 SIMULATED TFI PERFORMANCE: EFFECTS OF EYE MOVEMENT AND
VARIABILITY ............................................................................................................... 270
10.1 Effects of Ocular Variability and Movement ...................................................... 271
10.2 Tolerancing Approach to Estimating Effects of Ocular Variation and Movement
..................................................................................................................................... 272
10.2.1 The Tolerancing Model ................................................................................ 275
10.2.2 Effects of Eye Movement ............................................................................. 278
10.2.3 Effects of Ocular Variations ......................................................................... 281
10.2.4 Tolerancing Summary .................................................................................. 283
10.3 Performance Summary ........................................................................................ 284
11 FUTURE WORK AND CONCLUSIONS ................................................................ 286
11.1 In Vitro Interferometer Future Work ................................................................... 286
11.2 In Vivo Interferometer Future Work.................................................................... 288
11.3 Conclusions ......................................................................................................... 292
12 REFERENCES ........................................................................................................... 295
LIST OF FIGURES
Figure 1.1: Cross section of the tear film showing component sub layers....................... 16
Figure 1.2. Contact angles for a bead of water................................................................. 23
Figure 1.3. Corneal topographer and recorded images .................................................... 25
Figure 1.4: Thin film interference schematic resulting in thickness dependent fringes ... 29
Figure 1.5. Shearing interferometer ................................................................................. 32
Figure 2.1. Wavefronts corresponding to a plane wave and focusing spherical wave .... 40
Figure 2.2 Polarization ellipse showing the location of the electric field vector tip ......... 42
Figure 2.3 Polarization ellipses for common polarization states ...................................... 43
Figure 2.4 Randomly polarized light incident on wire grid vertical polarizer .................. 46
Figure 3.1 Layout of a TGI ............................................................................................. 60
Figure 3.2. Layout of a TGI used for measuring spherical surfaces ................................ 62
Figure 3.3. Configuration for testing concave and convex surfaces ................................ 63
Figure 3.4. Cat's eye test configuration ............................................................................ 64
Figure 3.5. Four 90° phase shifted interferograms for an arbitrary wavefront ................ 69
Figure 3.6. Spatial carrier interferogram parsed and 4-shift algorithm ............................ 73
Figure 3.7. Fourier transform method of spatial carrier interferometry ........................... 74
Figure 3.8. Simultaneous phase shifting interferometer using four cameras ................... 76
Figure 3.9. Pixelated phase mask method ........................................................................ 77
Figure 3.10. Detector response for 10 micron pixels. ...................................................... 81
Figure 3.11. Plot showing 10 waves of defocus and spherical aberration ....................... 82
Figure 3.12. Beam splitter arrangements in a TGI ........................................................... 84
Figure 3.13. Layout of the test imaging arms of a TGI.................................................... 85
Figure 3.14. Determining the maximum supported fringe frequency .............................. 89
Figure 3.15. Polarization TGI .......................................................................................... 92
Figure 3.16. Polarization states through the interferometer ............................................. 92
Figure 3.17. Example instrument transfer function.......................................................... 94
Figure 3.18. Object surface profile and resulting measured surface profile .................... 94
Figure 4.1. Orientation of the micropolarizers in a 2x2 "superpixel.” ............................. 97
Figure 4.2. Layout of the 2x2 and 3x3 grids for phase calculation .................................. 99
Figure 4.3. Layout of the Fluid Layer Interferometer .................................................... 101
Figure 4.4. Layout of the fluid layer interferometer ...................................................... 103
Figure 4.5. Layout and transmitted wavefront OPD of the collimating lens ................. 107
Figure 4.6. Layout and prescription for the FLI diverger .............................................. 110
Figure 4.7. Transmitted doublepass wavefront of the diverger...................................... 115
Figure 4.8. Unfolded imaging path within the FLI ........................................................ 117
Figure 4.9. Allowable transverse aberrations as a function of wavefront slope ............ 120
Figure 4.10. Transverse ray aberration for the imaging system ..................................... 121
Figure 4.11. Initial holder with rubber ball and machined ring ..................................... 124
Figure 4.12. Sandblasted contact lens mold ................................................................... 128
Figure 4.13. Surface contours of a fluid layer on the 4 potential contact lens holders
Figure 4.14. Drawing of the final-generation lens holder
Figure 4.15. Final iteration of the contact lens holder
Figure 4.16. Average of the raw calibration measurements
Figure 4.17. Reference subtraction process
Figure 4.18. Average of the reference-subtracted calibration measurements
Figure 5.1. Example interferograms and corresponding surface measurement ............. 143
Figure 5.2. Surface measurement before, during and after the fluid application ........... 144
Figure 5.3. Example results of an RMS surface height over the full 200 frames .......... 145
Figure 5.4. t = 0 measurement with near central artifact present ................................... 146
Figure 5.5. Evolution of a raw measurement through 30 seconds ................................. 147
Figure 5.6. Power present in an example fluid layer measurement ............................... 148
Figure 5.7. Power present in 6 different measurements ................................................. 150
Figure 5.8. Process of removing a 4th order Zernike polynomial fit ............................. 151
Figure 5.9. Data manipulation methods for the FLI....................................................... 153
Figure 5.10. Two measurements taken 1 second apart having identical RMS value..... 154
Figure 5.11. Flow of the blob analysis ........................................................................... 156
Figure 5.12. Blob analysis GUI...................................................................................... 158
Figure 5.13. Sample results of three lens materials........................................................ 160
Figure 5.14. Sample results of three lens materials with 4th order removed .................. 161
Figure 5.15. Sample results of three lens materials with t = 0 result subtracted ............ 162
Figure 5.16. RMS surface height and standard deviation throughout a measurement .. 163
Figure 5.17. The average number of blobs as the fluid layer evolves............................ 165
Figure 5.18. Average blob area as the fluid layer evolves ............................................. 166
Figure 6.1. Contrast sensitivity function test with example limit overlaid
Figure 6.2. Airy disk in log scale to visualize outer rings
Figure 6.3. Diffraction-limited MTF for f/# = 5 and λ = 589 nm
Figure 6.4. Arizona Eye Model
Figure 6.5. MTF of the eye using the Arizona Eye Model
Figure 6.6. Measured fluid layer surface topographies for the 2 contact lens materials
Figure 6.7. Mean MTF at 10s, 20s, and 30s for each contact lens material
Figure 6.8. MTF of the eye model with fluid layer surfaces
Figure 7.1: Cross section of the eye reproduced from MIL-HDBK-141 (1962) ............ 185
Figure 8.1. Layout of the in vivo FLI
Figure 8.2. Layout and OPD of the collimating lens
Figure 8.3. Layout and prescription for the TFI diverger
Figure 8.4. Primary astigmatism and residual power vs. lens spacing
Figure 8.5. Primary astigmatism and residual power vs. radius of curvature
Figure 8.6. Primary astigmatism and residual power vs. lens thickness ........................ 210
Figure 8.7. Full null assembly including the partial null and diverger lenses................ 212
Figure 8.8. OPD of the null assembly when testing on no corneal astigmatism............ 213
Figure 8.9. OPD of the null assembly when testing on 35λ of corneal astigmatism ..... 213
Figure 8.10. Residual wavefront error and maximum slope with corneal astigmatism. 214
Figure 8.11. Unfolded imaging path within the TFI ...................................................... 215
Figure 8.12. Transverse ray aberration for the imaging system ..................................... 217
Figure 8.13. Field curvature and distortion in the TFI imaging system ......................... 218
Figure 8.14. Maximum interferometer accuracy as a function of wavefront slope ....... 219
Figure 8.15. Figure error in the golden surface .............................................................. 221
Figure 8.16. Photograph of the TFI core interferometer showing the beam path .......... 222
Figure 8.17. Photograph of the TFI system .................................................................... 223
Figure 8.18. Subject's view in the TFI ........................................................................... 223
Figure 8.19. Safety shut-off and interlock circuit showing normal operation ............... 227
Figure 8.20. Average of the raw calibration measurements ........................................... 229
Figure 8.21. Average of the reference-subtracted calibration measurements ................ 231
Figure 8.22. Astigmatism introduced by the partial null system ................................... 233
Figure 8.23. Residual aberration introduced by the partial null system ......................... 233
Figure 8.24. Photographs of the head positioner ............................................................ 235
Figure 8.25. Arrangement of the eye tracker .................................................................. 238
Figure 8.26. Layout of the fixation module ................................................................... 240
Figure 8.27. Potential fixation targets ............................................................................ 242
Figure 9.1: Geometry of the accessible laser beam exiting the TFI................................ 249
Figure 9.2. Photograph of the LED system .................................................................... 250
Figure 9.3: The visual angle of the source ..................................................................... 253
Figure 9.4: Schematic of the converger system with varying distance to the eye .......... 261
Figure 9.5: Pupil transmission as a function of interferometer-to-cornea distance ........ 262
Figure 9.6: Allowable laser power levels for different eye configurations ..................... 265
Figure 10.1. Simulated interferograms with eye movement and variability .................. 285
Figure 11.1. Lens holding assembly............................................................................... 287
LIST OF TABLES
Table 2.1. Jones vectors for common polarization states ................................................. 49
Table 2.2. Jones matrices for selected polarization elements .......................................... 51
Table 4.1. Tolerances and compensators for the FLI diverger
Table 4.2. Metrology results and the sandblasted contact lens molds
Table 4.3. Calibration measurement results. All quantities in waves.
Table 4.4. Accuracy and repeatability of the Fluid Layer Interferometer
Table 4.5. Difference between the raw calibration measurements and their average
Table 4.6. Calibrated accuracy and repeatability of the FLI
Table 5.1. Description of the Zernike polynomial terms removed ................................ 152
Table 6.1. Percent MTF degradation compared with the baseline Arizona Eye Model 181
Table 8.1. Calibration measurement results. All quantities in waves.
Table 8.2. Uncalibrated accuracy and repeatability of the Fluid Layer Interferometer
Table 8.3. Difference between the raw calibration measurements and their average
Table 8.4. Calibrated accuracy and repeatability of the TFI
Table 9.1. Irradiance measurements of the LED illuminator for the eye tracking system
Table 9.2. Laser safety calculations varying with converger-to-cornea separation ........ 263
Table 9.3: Laser Safety Calculation Results ................................................................... 266
Table 10.1. Eye movement sensitivity analysis ............................................................. 279
Table 10.2. Eye movement Monte Carlo analysis results .............................................. 280
Table 10.3. Eye variability sensitivity analysis ............................................................... 282
Table 10.4. Eye variability Monte Carlo analysis results .............................................. 282
ABSTRACT
The anterior refracting surface of the eye is the thin tear film that forms on the surface of
the cornea. When a contact lens is worn, the tear film covers the contact lens as it
would a bare cornea, and is affected by the contact lens material properties. Tear film
irregularity can cause both discomfort and vision quality degradation. Under normal
conditions, the tear film is less than 10 microns thick and the thickness and topography
change in the time between blinks. In order to both better understand the tear film, and to
characterize how contact lenses affect tear film behavior, two interferometers were
designed and built to separately measure tear film behavior in vitro and in vivo.
An in vitro method of characterizing dynamic fluid layers applied to contact lenses
mounted on mechanical substrates has been developed using a phase-shifting Twyman-Green interferometer. This interferometer continuously measures light reflected from the
surface of the fluid layer, allowing precision analysis of the dynamic fluid layer. Movies
showing this fluid layer behavior can be generated. The fluid behavior on the contact
lens surface is measured, allowing quantitative analysis beyond what typical contact
angle or visual inspection methods provide. The in vivo interferometer is a similar
system, with additional modules included to provide capability for human testing. This
tear film measurement allows analysis beyond capabilities of typical fluorescein visual
inspection or videokeratometry and provides better sensitivity and resolution than
shearing interferometry methods.
The in vitro interferometer system has measured the formation and breakup of fluid
layers. Different fluid and contact lens material combinations have been used, and
significant fluid layer properties have been observed in some cases. This dissertation
discusses the design of this interferometer along with analysis methods used. Example
measurement results for different contact lenses are presented, highlighting the capabilities
of the instrument.
This dissertation also provides the in vivo interferometer design, along with the
considerations that must be taken when designing an interferometer for on-eye
diagnostics. Discussions include accommodating eye movement, design of null optics
for a range of ocular geometries, and laser emission limits for on-eye interferometry in
general.
1 INTRODUCTION TO TEAR FILM MEASUREMENTS
The tear film is the thin layer of fluid on the human eye separating the cornea and air.
The function of the tear film is to maintain a smooth optical surface required for vision
quality, provide nutrients and oxygen to the cornea and to provide protection against
infections of the cornea. Although the exact makeup and thickness of the tear film is
often debated, the general consensus is that it is between four and ten microns thick and
consists of three sub-layers (Mishima, 1965; Holly, 1977). The outermost layer is the
lipid layer, consisting of oils that provide a hydrophobic barrier at the anterior of the tear
film which holds the layer in place. The middle is the aqueous layer which consists of
water and other substances such as nutrients or proteins. The bottom layer is the mucous
layer, which consists of mucin and coats the cornea with a hydrophilic layer causing an
even distribution of the tear film across the cornea. A cross section of the tear film’s sub
layers is shown in Figure 1.1. In a healthy eye, the tear film stabilizes and becomes a
smooth surface over the first few seconds after blinking (Benedetto, 1984). Without
blinking, the tear film begins to thin after five seconds and may begin to break up after 15
seconds (Németh, 2002).
Tear film breakup is a clinically important occurrence and typically begins with holes,
pits or canyons forming in the surface of the tear film which grow as the time between
blinks increases. Eye care professionals often diagnose dry eye syndrome by measuring
the tear film breakup time, which is the elapsed time from a blink to the first indication of
tear film breakup. Tear film breakup time is also a common clinical measurement used to
diagnose different ocular conditions. One cause of tear film breakup is that the film as a
whole begins to evaporate and diffusion occurs from the lipid layer to the mucous layer,
causing the mucous to become hydrophobic, further pushing fluid away from the cornea
(Holly, 1980).
Figure 1.1: Cross section of the tear film showing component sub layers for a continuously smooth
tear film (left) and a tear film presenting some breakup characteristics (right)
A drying or turbulent tear film can lead to discomfort, itching and redness of the eyes and
in some cases increased sensitivity to light. In some people, tear film abnormalities are
diagnosed as Keratoconjunctivitis Sicca, also known as Dry Eye Syndrome. The tear
film is a convex optical surface sitting on the cornea and creates the eye’s largest change
in refractive index at its air to liquid interface. Consequently, the tear film contributes
roughly 70% of the human eye’s refractive power. Therefore, irregularities in the tear
film can result in decreased visual performance in addition to discomfort. An irregular
tear film affects both the point-spread function (PSF) and modulation transfer function
(MTF) and has been shown to broaden the PSF and reduce the modulation of mid-spatial frequencies (Kasprzak, 1999).
A stable tear film is necessary for comfortable and visually effective contact lens wear.
Nominally, a contact lens sits on a thin layer of tear film between the lens itself and the
cornea. This is known as the post-lens tear film and is usually around two microns thick
(Nichols, 2003). Additionally, a pre-lens tear film forms a continuous surface coating the
exterior of the contact lens. For a lens to function as intended, both in terms of comfort
and visual acuity, the form of the pre-lens tear film should be identical to a normal no-lens tear film (Guillon, 1986). This means it should have the same thickness, structure
and breakup behavior. Lens discomfort, often the result of abnormal tear film behavior,
is a factor in a patient terminating their contact lens use and therefore efforts are
underway to engineer materials that allow the tear film to behave in a normal manner.
The makeup of a contact lens affects the behavior of the tear film. A wide variety of contact
lens materials exist, each having its own advantages and different effects on the tear
film. Silicone hydrogel contact lenses are complex copolymers intended to increase
oxygen permeability which allows a user to wear a contact lens for a longer duration
without removal. Some of the lens constituents are hydrophobic (Thai, 2004) and may
contribute to tear film irregularity and in some cases discomfort. Therefore, the goal of
this research was to develop an instrument capable of measuring the anterior surface
topography of the tear film in real time, and display the surface changes over time in
order to compare the fluid interactions among different contact lens materials.
The development of a tear film measurement system was split into two phases, an in vitro
fluid layer measurement system in the first phase, and an in vivo tear film measurement
system in the second. The reasoning behind splitting the research into two phases was
twofold. First, while an on-eye tear film measurement is the end goal, there is a desire to
characterize fluid behavior on a contact lens in a laboratory in vitro environment where
human testing might not be appropriate. Such an instrument can be used to characterize a
new contact lens material’s wettability and compare the fluid behavior to existing
designs. Second, there is additional risk in developing a system for measurements on
humans. An eye-safe system requires significant resources, both financially and in the
time required. Without preliminary results verifying the capabilities of the technology,
the likelihood of human research approval through an Institutional Review Board is
reduced. Therefore, the results of phase I of this work determined the feasibility of
moving on to phase II.
1.1 Tear Film Measurement System Requirements
At the initiation of this research, the project sponsor approached our research group with
the question “Can you measure the topography of the tear film and produce a movie
showing how it changes over time?” Since this type of measurement had never been
performed before, it was not immediately clear what the system requirements should be.
Although the in vitro system was designed and built prior to the in vivo system, the
eventual requirements of the in vivo system drove the design of each system. While some
modifications between the two phases were necessary, a complete redesign and overhaul
of the measurement technology was not acceptable. In order to reduce cost and rework
while evaluating the feasibility of capturing on-eye measurements, both systems had to
use the same technology and a similar platform. The requirements applying to in vivo
tear film measurements were first derived and then applied to both systems. Therefore,
the in vivo measurement requirements will be presented first, even though that system
encompassed the second phase of this research.
1.1.1 In Vivo Tear Film Measurement Requirements
Given some of the tear film breakup features (i.e., holes, pits or canyons) that were
believed to exist, but not yet visualized, it was determined that the system should have
better than λ/4 height resolution, where λ is the wavelength of light. For a wavelength of
633 nm, for example, λ/4 corresponds to 158.25 nanometers. Resolution is the
instrument’s ability to identify changes in surface height as small as λ/4. Meeting this
requirement provides the capability to detect tear film breakup features early in their
formation when their depth is as small as a few hundred nanometers. This would allow
the instrument to measure artifacts in the tear film surface that are 10-20 times smaller
than what the current state-of-the-art in tear film thickness measurement can detect
(Optical Coherence Tomography, see Chapter 1.2.2).
The pupil of the eye typically varies between three and eight millimeters in diameter
depending on brightness of the viewing scene. Under bright or daylight conditions the
pupil diameter is usually between four and six millimeters, and therefore the tear film
measurement instrument should be able to measure a central six millimeter diameter zone
on the eye. This will allow measurement over a sufficient area that can be used to
determine the tear film’s effect on the visual performance. A six millimeter diameter also
samples a significant portion of a contact lens, and the tear film behavior over that area is
sufficient to estimate behavior over the entire contact lens.
As discussed previously, material makeup of a contact lens could lead to localized
dimples or pockets of abnormal tear film behavior due to local hydrophobic properties of
the material. Since the localized pockets could be a fraction of a millimeter in diameter,
the system should have spatial resolution capable of resolving features on the order of 50
microns in diameter in order to detect early formation of tear film abnormalities.
In order to record how the tear film evolves in time, sequential measurements must be
captured on the eye. Based on customer feedback, a goal of at least one frame per second
is desired for the in vivo system. While a higher frame rate will likely be achieved, at
least one frame per second should contain usable measurement data. Since the surface
changes in time, and the eye will move during the measurement cycle, the integration
time of the detector must be fast enough that motion blur is not a factor. Based on best
practices in fixation and stabilization, a camera integration time of one millisecond or
faster is desired.
One additional complication in the design of this instrument is in the ergonomics of the
system. Since it must interface with a person, some non-optical factors must be
considered. First and foremost, the instrument’s form must interface with a person in a
comfortable manner. Most likely this means that the subject sits in a normal position and
the instrument is aligned to them. By aligning the system to the person, rather than
physically moving the person, it is easier to maintain comfort and keep their movement to
a minimum (Miller, 2009).
In summary, the requirements of the tear film measuring system are:
• Make measurements over a 4-6 millimeter diameter on the tear film,
• Have 50 micron spatial resolution,
• Have λ/4 height measurement resolution,
• In general, be 10-20 times more accurate than current OCT methods,
• Capture at least one measurement per second to create a “movie” of tear film evolution,
• Have an integration time faster than 1.0 ms,
• Be eye safe and ergonomic.
1.1.2 In Vitro Fluid Layer Measurement Requirements
Since a main goal of the in vitro measurement system was to evaluate the feasibility of
using similar technology to measure a human tear film in vivo, the requirements for the in
vitro system were derived from the on-eye measurement requirements. All of the
resolution, sensitivity and accuracy requirements described in the previous section were
identical for the in vitro device. The requirement on the frame rate is increased on the in
vitro system as the contact lens itself will not move. In this system, at least three frames
per second should be captured to create a movie of tear film behavior over time. Since
the contact lenses will not be held on an eye, an additional requirement of this system
was that it supports the contact lens in a way that does not interfere with fluid layer
behavior on the lens.
1.2 Existing Tear Film Measurement Techniques
Before developing any new technology, a review of prior art is necessary in order to
evaluate the feasibility of either improving upon existing methods or developing an
entirely new method. This section reviews the current methods of in vitro fluid-on-lens
measurement along with in vivo tear film examination and measurement. The standard
optometric practices along with more advanced methods will be described and their
advantages and disadvantages discussed. This review of existing technology was used to
evaluate possible solutions to the system requirements previously discussed. Building
upon these systems was considered in some cases, and the merits of such an effort are
discussed.
1.2.1 In Vitro Fluid Layer Measurement Methods
A common in vitro characterization method relies on applying a drop of fluid to a contact
lens, or any surface of interest, where the fluid forms a bead on the surface. The angle formed between the substrate surface and the edge of the fluid bead is
the contact angle and is a common measure of the interaction between a liquid and solid
sample. When water is applied to a substrate the contact angle describes the material’s
wettability; an obtuse angle corresponds to a more hydrophobic (water repellant)
material, while an acute angle corresponds to a more hydrophilic material. If the
substrate is strongly hydrophilic, the water drop will spread out evenly over the surface
forming a contact angle near 0°, while on a hydrophobic material the water will form a
more spherical bead and the contact angle will be larger than 90° (Förch, 2009). Figure
1.2 displays a representative contact angle measurement for both a hydrophobic and
hydrophilic material.
Figure 1.2. Contact angles for a bead of water on a hydrophilic and hydrophobic sample. The
hydrophilic sample has a shallow contact angle while the hydrophobic sample has a contact angle
larger than 90°.
While the contact angle method measures material properties at a local region, it does not
necessarily predict behavior over an entire surface area. In the case of contact lenses
infused with compounds to increase oxygen permeability for example, there could be
spatial variations across the lens surface that a contact angle measurement will not
capture. The contact angle measurement also does not provide a method of comparing
how the fluid layer is behaving over the contact lens compared to a normal tear film.
1.2.2 In Vivo Tear Film Measurement Methods
Standard Clinical Practices
The most common method of tear film examination consists of an eye care professional
applying fluorescein to the tear film and examining the eye with a slit lamp. The
fluorescein colors the tear film, allowing easy visual inspection by which some general
properties are viewed (Norn, 1986). When illuminated with a slit lamp, the fluorescein
glows and the viewed brightness corresponds to the thickness of the tear film. An
optometrist can qualitatively visualize the evolution of the tear film and determine the
tear film breakup time by recording how much time elapses until breakup features are
visualized.
This method of tear film examination does not provide a quantitative measurement of
either topography or thickness, and the consistency in its measurement and application
can vary between eye care professionals. Furthermore, introducing a foreign agent such
as fluorescein will alter tear film behavior, reducing the method’s accuracy (Patel, 1985).
Basing the design of an improved tear film measurement system on the fluorescein
method is not feasible due to these factors.
Corneal Topographers
Corneal topographers can be used to measure tear film irregularity since the topographer
is actually measuring reflections from the tear film, not the cornea directly. Tear film
abnormalities result in distortions in the topographic measurement (Németh, 2002). Most
corneal topographers rely on the Placido method of projecting a pattern onto the cornea
and imaging the reflected pattern. Deviations in the imaged pattern correspond to
features in the tear film topography. Often, a pattern of concentric rings, known as
a Placido disk, is used which provides a measurement of the local radius of curvature at
different points on the cornea. Figure 1.3 shows a corneal topographer system along with
the recorded image of a circular pattern reflected off of the tear film. These systems
typically have an accuracy of 0.25 diopters in measuring the power distribution of the
cornea (Kasprzak, 1995).
Figure 1.3. Corneal topographer and recorded images (Image from Schwiegerling, 2011).
Variations of the Placido disk method have been developed by either arranging the
concentric rings on a curved bowl-like surface or by using a different projected pattern
altogether. Some efforts have been made to improve the accuracy of corneal
topographers by using a variety of different patterns, such as a grid (Mengher, 1985; Cho,
1993). In any arrangement, the pattern must align with the discontinuities in the tear
film, otherwise the features will not be detected. Again, the spatial resolution is limited
by the grid spacing and the system is only capable of measuring tear film topography at
the location of the grid. Even though these methods are an improvement, techniques
based on imaging the reflected grid pattern typically have an accuracy no better than 10
microns (Licznerski, 1999). Spatial resolution is driven by the spacing and size of the
pattern, and is typically on the order of 200 microns in commercially available systems.
While these topographers are capable of taking high speed measurements sufficient for
most corneal topography applications, they lack the resolution and sensitivity required to
measure the features of interest on the tear film surface.
Optical Coherence Tomography
Optical Coherence Tomography (OCT) is the most widely attempted system for
quantitative, high resolution measurements of interocular thicknesses, including tear film
thickness. Measuring the tear film thickness can in turn provide the surface topography if
the cornea is assumed to be a smooth and consistent surface. OCT is an interferometric
measurement method where a low coherence source (such as a super luminescent diode)
is focused onto or into a sample under test, often biologic tissue. Light reflected or backscattered from the sample under test is then interfered with a reference beam. Due to
interference, the irradiance at a detector is a function of the OPD between the reference
and test arms. For a low coherence source, maximum irradiance only occurs where the
optical path lengths of the test and reference arms are equal. By scanning the optical path
length of the reference arm, the measured irradiance at each location corresponds to
the OPD between the two arms, and the thickness of the sample can be determined. In
other words, as the reference path length is increased, light reflected from an increasing
depth within the sample interferes at the detector. Depending on the wavelength of the
source, light can penetrate into tissue and provide a thickness or depth measurement.
OCT, in general, can be further divided into two system types: time domain (TD-OCT)
and frequency domain (FD-OCT), the details of which are beyond the scope of this
dissertation.
OCT systems measure interference at a single point and use a scanning system to measure
tear film thickness over the desired area. One possible method is to scan either the test
beam or the sample itself while using a single detector element. In this method each
individual measurement is known as an A-scan. Another method is to image the sample
onto an area detector while shifting the reference and analyzing the interference pattern
as a function of reference mirror location. In either method, an extended amount of time is
required to scan over an area and the entire area measurement cannot be captured within
a camera’s single integration time.
Commercial OCT systems for measuring thicknesses within the eye have been developed
and are routinely used in ophthalmic and optometric offices (for example: Zeiss
Meditech, Dublin, CA, or Heidelberg Engineering, Heidelberg, Germany). These FD-OCT systems have a depth resolution around 3 µm (Wang, 2008). A number of
researchers have used either similar commercial or custom OCT systems to attempt to
measure tear film thickness (Christopoulos, 2007; Wang, 2003; Wang, 2008). These
systems typically scan over a single line up to 15 mm in length with one measurement
about every 12 µm spatially. As the scan area grows, the time required to complete the
measurement increases as well.
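To put the scanning burden in perspective, the short Python sketch below estimates how many A-scans are needed for the 15 mm line scan described above and for a 6 mm by 6 mm area at the same 12 µm sampling. The A-scan rate used here is an assumed, illustrative figure, not a specification of any particular instrument.

    # Rough estimate of the number of A-scans needed to cover a line or an area
    # at the lateral sampling quoted above.  The per-second A-scan rate is an
    # assumed, illustrative value.
    line_length_mm = 15.0        # length of a single scan line
    lateral_spacing_um = 12.0    # spacing between adjacent A-scans
    a_scan_rate_hz = 25_000      # assumed A-scan rate of the OCT engine

    scans_per_line = line_length_mm * 1000 / lateral_spacing_um   # 1250 A-scans
    line_time_ms = scans_per_line / a_scan_rate_hz * 1000

    scans_per_area = (6.0 * 1000 / lateral_spacing_um) ** 2       # 2-D raster over 6 mm x 6 mm
    area_time_s = scans_per_area / a_scan_rate_hz

    print(f"15 mm line: {scans_per_line:.0f} A-scans, about {line_time_ms:.0f} ms")
    print(f"6 mm x 6 mm area: {scans_per_area:.0f} A-scans, about {area_time_s:.1f} s")

Even with these optimistic assumptions, an area measurement spans many camera integration times, which is why OCT cannot capture topography over an area of the tear film in a single snapshot.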
Resolution depends on the bandwidth of the source along with the f/# of the illumination
optics. As the source coherence length increases, the depth resolution degrades. As the
longitudinal blur of the imaging system increases, the resolution degrades as well.
While OCT does provide an accurate measurement of tear film thickness, accuracy is
limited by the depth resolution of these systems. A healthy tear film has a thickness on
the order of six microns, and as it breaks up and thins an OCT system with three micron
resolution cannot accurately measure the thickness. Instead, OCT is used to measure the
cornea and tear film together along with the thickness of only the cornea. To determine
the tear film thickness, the cornea’s thickness is subtracted from the combined
measurement, thus increasing the uncertainty in the measurement. Given the lack of
resolution and OCT’s inability to measure topography over an area on the tear film
instantaneously, OCT is not a suitable solution for this work.
Thin Film Interferometry
Interferometry can be used to measure the thickness of thin transparent fluid layers,
including the tear film. Consider the system shown in Figure 1.4 showing two rays
emitted from a point source incident on a tear film. The dotted ray refracts at the anterior
surface and is reflected by the posterior tear film after which it exits the tear film and is
imaged to an observation plane. The solid ray reflects at the anterior surface and is then
imaged to the same point. The optical path difference (OPD) between the two rays is
equal to the tear film’s refractive index, n, multiplied by the distance the ray travels
through the film. For near normal incidence, the distance is two times the thickness, t.
Therefore, the OPD is

OPD = 2·n·t                                                                   (1.1)
Figure 1.4: Thin film interference schematic resulting in thickness dependent fringes
When the optical path difference between the two rays is equal to an integer multiple of
the wavelength (m·λ), there is constructive interference at the observation plane. When
the OPD between the two rays is an odd multiple of one half of the wavelength (m·λ/2, m odd),
destructive interference occurs. By imaging specular reflections over an area on any thin
film, a pattern of interference fringes corresponding to the thickness of the film is
localized on the surface of the layer. Since the OPD is proportional to two times the film
thickness, a bright fringe occurs when the optical thickness of the tear film is an even multiple of λ/4 and a dark
fringe occurs when it is an odd multiple of λ/4.
Thin film interference is often observed in nature in the colorful fringes that occur in soap
bubbles or oil slicks. The spectral patterns observed in those occurrences are a function
of the thickness of the film. The interference pattern in a thin film depends on the
wavelength and is described by the equation
m·λ = 2·n·t·cos(θ′)                                                           (1.2)

where m is the fringe order, n the refractive index of the medium, t the
thickness and θ′ the refracted angle through the film. When using a polychromatic light
source and the thickness of the film is on the order of a few wavelengths, as is the case
with the tear film, colorful fringes are formed corresponding to variations in the thickness
of the film.
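As a numerical illustration of Equations 1.1 and 1.2, the Python sketch below evaluates the two-beam irradiance formed by the anterior and posterior reflections for a few film thicknesses. The refractive index and wavelength are assumed, representative values, and unit fringe visibility is used for simplicity.

    import numpy as np

    # Thin-film fringe condition (Equations 1.1-1.2) at near-normal incidence.
    # The index and wavelength are assumed, representative values.
    n_tears = 1.337            # assumed tear film refractive index
    wavelength_um = 0.633      # wavelength in microns

    thickness_um = np.linspace(0.0, 6.0, 7)     # example film thicknesses
    opd_um = 2.0 * n_tears * thickness_um       # Equation 1.1: OPD = 2nt

    # Two-beam interference: bright where the OPD is an integer number of waves,
    # dark where it is an odd multiple of half a wave.
    irradiance = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd_um / wavelength_um))

    for t, i in zip(thickness_um, irradiance):
        print(f"t = {t:3.1f} um -> OPD = {2 * n_tears * t:6.3f} um, relative irradiance = {i:4.2f}")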
Doane (1989) has developed a thin-film interferometer to measure the tear film thickness
by detecting the interference of light reflected from the posterior and anterior tear film
surfaces. The light source is a broadband tungsten-halogen bulb along with an eight
nanometer bandwidth filter which is used to illuminate a screen to form an extended
source. Appropriate condensing lenses are used to illuminate the tear film providing a
five millimeter diameter measurement area. A drawback of this instrument is that the
optics used to illuminate and image the fringes contain central obscurations which block
out the central portion of the measurement.
The reflectances of the anterior and posterior tear film surfaces are a function of the
change of refractive index at that interface. For normal incidence, the reflectance is:
R = ((n₂ − n₁) / (n₂ + n₁))²                                                  (1.3)

where n₁ and n₂ are the refractive indices on either side of the interface.
The air-tear interface has a much larger refractive index difference than the tear-cornea
interface, resulting in a relatively large difference in the reflectance of the two surfaces.
As a consequence, the interference fringes from the Doane interferometer have very low
contrast, complicating their analysis.
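The low contrast can be made quantitative with Equation 1.3 and the standard two-beam visibility formula. The sketch below uses assumed, nominal refractive indices for air, the tears and the cornea; the resulting visibility of roughly 0.2 is consistent with the difficulty noted above.

    # Fresnel reflectance at normal incidence (Equation 1.3) for the two tear film
    # interfaces, using assumed nominal indices, and the resulting fringe visibility.
    def reflectance(n1, n2):
        return ((n2 - n1) / (n2 + n1)) ** 2

    n_air, n_tears, n_cornea = 1.000, 1.337, 1.376

    r_anterior = reflectance(n_air, n_tears)       # air-tear interface
    r_posterior = reflectance(n_tears, n_cornea)   # tear-cornea interface

    # Two-beam fringe visibility for beams of unequal irradiance.
    visibility = 2 * (r_anterior * r_posterior) ** 0.5 / (r_anterior + r_posterior)

    print(f"anterior reflectance : {r_anterior:.4%}")    # roughly 2%
    print(f"posterior reflectance: {r_posterior:.4%}")   # roughly 0.02%
    print(f"fringe visibility    : {visibility:.2f}")    # roughly 0.2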
Fogt, King-Smith and Tuell (1998) describe a similar system where they refer to the
fringes as “wavelength dependent fringes” or “spectral oscillations.” While they
eliminated the problem of the central obscuration, the system is still limited by the fringe
visibility. Furthermore, the parameter measured in all of these systems is tear film
thickness, not topography. While it can be assumed that the cornea is a perfectly smooth
surface and therefore spatial distributions in the tear film thickness correspond to
topography, this is not necessarily the case.
Lateral Shearing Interferometry
Shearing interferometry is a method in which a wavefront under test is interfered with a
shifted version of itself. Fringes corresponding to the wavefront slope exist in the area of
overlap between the original and sheared version of the beam as seen in Figure 1.5.
Figure 1.5. Shearing interferometer
A number of ways exist to introduce shear to a wavefront, one of which is to laterally
shear the beam with a plane-parallel plate, sometimes with wedge between the surfaces.
The interferogram produced by such a lateral shearing interferometer (LSI) has the
intensity distribution

I(x, y) = I₁ + I₂ + 2√(I₁ I₂)·cos[φ(x, y) − φ_s(x, y)]    (1.4)

where I₁ and I₂ are the intensities of the original and sheared beams and the
difference within the cosine term is the phase difference between the original and sheared
beams. This interferogram provides information about the wavefront slope in the shear
direction, S, of the wavefront under test. For shear along the x direction, the differences
in the optical path and fringe pattern are related by

W(x − S, y) − W(x, y) = mλ    (1.5)

where W(x − S, y) and W(x, y) are the sheared and original wavefronts, m is an integer
representing the fringe order and λ is the wavelength. The difference term in Equation
1.5 is the wavefront's average slope over the shear distance times the shear distance
itself. From this equation, the wavefront slope along the shear direction can be
determined as follows:

ΔW(x, y) = W(x, y) − W(x − S, y)    (1.6)

∂W(x, y)/∂x ≈ ΔW(x, y) / S    (1.7)
The sensitivity of an LSI depends on the amount of shear introduced to the wavefront.
As the shear is increased, the sensitivity increases as well. However, this comes at the
cost of reduced resolution and a smaller measurement area, since a shearing
interferometer measures only the average wavefront slope over the shear distance.
Although LSI is fundamentally limited by the sensitivity vs. resolution compromise based
on shear distance, it is advantageous in that it is less sensitive to eye movement or
variations in ocular geometry.
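To make the slope relationship in Equations 1.6 and 1.7 concrete, the short sketch below compares the sheared-difference estimate with the analytic slope for an assumed one-dimensional defocus-like wavefront; the wavefront shape, sample spacing, and shear are illustrative values only.

import numpy as np

# Assumed illustrative wavefront: W(x) = x^2 / 0.2 (defocus-like), in meters
dx = 10e-6                              # sample spacing, meters
x = np.arange(-5e-3, 5e-3, dx)          # 10 mm aperture
W = x**2 / 0.2

shear_samples = 20                      # shear expressed in samples
S = shear_samples * dx                  # shear distance, meters

# Equation 1.6: difference between the original and sheared wavefronts
delta_W = W[shear_samples:] - W[:-shear_samples]

# Equation 1.7: dividing by the shear approximates the local slope dW/dx
slope_estimate = delta_W / S
slope_true = x[shear_samples:] / 0.1    # analytic dW/dx = 2x / 0.2

# The residual reflects the averaging of the slope over the shear distance
print(np.max(np.abs(slope_estimate - slope_true)))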
Researchers at the Wroclaw University of Technology have developed a lateral shearing
interferometer to analyze tear film irregularities and breakup on the human cornea
(Licznerski, 1998(1); Licznerski, 1998(2); Szczensa, 2005). That system illuminates the
eye with visible laser illumination and the reflected wavefront is sheared in one direction
and the fringes imaged to a detector. The tear film irregularity is evaluated by taking the
Fourier transform of the interferogram, in which case larger disturbances in the fringe
pattern corresponding to tear film irregularity result in a wider Fourier spectrum. The
tear film breakup time is determined by calculating a variety of statistics on the Fourier
spectra of consecutive interferograms. This instrument relies on the Fourier statistics to
quantify certain features of the tear film; the tear film topography and the spatial extent
of breakup features are not determined.
Dubra, Patterson and Dainty (2004) improved upon the prior LSI system by adding
capability to shear the wavefront in two directions. Doing so allows analysis of the
wavefront slope over two directions, providing a full measurement of the tear film
surface. They also introduced a spatial carrier to the interferogram by adding a known
amount of tilt between the two beams, allowing phase measurement using the Fourier-based phase recovery method described by Takeda et al. (1982), which is discussed in
more detail in Chapter 3.4.2. Their system captured measurements over a tear film
diameter of 3.5 mm that were then used to create topographic maps of tear film behavior.
Although this system provides improvement over other tear film measurement methods,
LSI is still limited by the sensitivity vs. resolution compromise based on shear distance.
Also, the Takeda phase recovery method, while providing a method of direct phase
measurement, limits the resolution of the measurement and is further discussed in
Chapter 3 of this dissertation.
Twyman-Green Interferometry
Some research was completed in the late 1990s using a Twyman-Green interferometer to
measure the tear film topography over time (Licznerski, 1999) but appears to have been
discontinued after the initial studies, in favor of shearing interferometry. In that work, a
non-phase shifting system was used to make measurements of a live tear film by
obtaining interferograms with a camera having a 1 ms integration time captured at 25 Hz
with a CCD, frame grabber and VHS video tape. A Pentacon camera lens was used as
the objective to focus the test beam accordingly, contributing significant aberration to the
measurement since it was not designed to accommodate the nominal ocular geometry.
Interferograms over a 4.5 mm diameter on the eye were captured and analyzed through
fringe tracing and ordering. Since the topography was calculated by fringe tracing,
without the ability to phase shift, the sign of the topography remained ambiguous. While
some assumptions could be made that areas of breakup have a lower surface height, the
topography could not be determined with complete certainty. The fringe frequencies
existing in many of the interferograms captured by the Twyman-Green system were too
high to evaluate, so a 256x256 pixel area covering roughly a two millimeter diameter
area on the tear film was used for analysis.
Given the ability of the past Twyman-Green interferometer to at least capture
interferograms from a live human tear film, this technology shows promise as a
successful tear film evaluation tool. By improving the design of the interferometer along
with adding phase shifting capability, a Twyman-Green system will be capable of
measuring tear films in vivo. A modern system will be capable of producing movies of
the tear film’s surface topography with excellent resolution capable of identifying early
indicators of tear film breakup beyond the capabilities of the other technologies thus far
discussed. In addition to identifying micro-features in the tear film related to tear film
breakup, such a system will be capable of characterizing the bulk-behavior of a tear film
on the human eye both with and without a contact lens in place.
1.3 University of Arizona Tear Film Interferometer
Based on the requirements for an improved tear film measurement system along with the
advantages and disadvantages of prior systems, using interferometry to measure both the
in vitro fluid layer and in vivo tear film topographies is a logical selection. By improving
upon past designs through advanced technology, refined optical design, and improved
data analysis methods, a Twyman-Green interferometer was proposed for both
measurement systems. Designing a phase shifting system is a necessary improvement to
the prior art in order to improve measurement accuracy. Traditional interferometer
designs that achieve phase shifting temporally must be ruled out since the surface
changes over time. Instead, an instantaneous phase shifting technology allowing
simultaneous capture of all the phase shifted interferograms is used. The rationale behind
phase shifting, along with a background of leading instantaneous methods are presented
in Chapter 3.
The Phase I in vitro Fluid Layer Interferometer (FLI) is a visible (HeNe laser) polarizing
Twyman-Green system, implementing a pixelated phase mask approach for phase
shifting. The Phase II in vivo Tear Film Interferometer (TFI) is a similar system, but
modified to have a near infrared source wavelength of 785 nanometers and to include
appropriate systems for safe ocular testing. The details behind the design and operation
of these two systems are included in Chapters 4 and 8 of this dissertation.
2 WAVE NATURE OF LIGHT AND POLARIZATION
Interferometry is an optical phenomenon resulting from the wave nature of light. In order
to understand how an interferometer operates, one must first understand how light
behaves. This chapter describes the basic principles of light behavior as a wave, the
polarization of light, and finally interference. An understanding of all three is required
for a successful interferometer design. It is important to mind the sign conventions when
mathematically describing light as a wave. The descriptions in this chapter follow the
sign convention from Optics by Eugene Hecht (Hecht, 2002). Specifically, all
mathematical descriptions will follow the “decreasing phase” convention, where the
phase of an electric field traveling through space decreases as the field propagates in a
positive direction, apparent by the presence of the “-ωt” term in the phase. While it is
equally accurate to use other conventions (i.e., increasing phase), one should take care to
ensure the same convention is used throughout all calculations.
2.1 Description of Light’s Electric Field
Light is a transverse electromagnetic wave having orthogonally coupled electric and
magnetic fields. Since these fields are coupled, a wave’s behavior is typically
mathematically described by only the electric field. For the purpose of this discussion, a
monochromatic plane wave will be described. A monochromatic plane wave describes a
collimated beam of light with a single wavelength, and simplifies the mathematics
describing the electric field. As the wave travels along the optical axis, defined as the
z-direction, the electric field is perpendicular to the propagation direction and varies
sinusoidally in the xy-plane. For such a plane wave traveling along the z-axis, the
electric field is described as:

E(z, t) = A_x cos(kz − ωt + φ_x) x̂ + A_y cos(kz − ωt + φ_y) ŷ    (2.1)

where A_x and A_y are the electric field amplitudes, k is the wavenumber, ω is the angular
frequency and φ_x and φ_y are the phases of the two orthogonal components. The terms x̂
and ŷ are the directional unit vectors (1,0,0) and (0,1,0), respectively. The wavenumber
of a field is described by its wavelength where

k = 2π / λ    (2.2)
The angular frequency is related to the frequency, along with the wave number and speed
of light, c, as follows:

ω = 2πν = kc    (2.3)
The phase of the wave is the term (kz − ωt + φ), as shown within the cosine term of
Equation 2.1, and oscillates with time and position along the axis of travel, z.
The wavefront of an optical wave is defined as the surface formed by the points (x, y, z) for
which the phase is constant at a given time t. For an ideal plane wave of light, such as one
originating from a point source at infinity, the wavefront is flat and the phase is constant across each plane.
Light can also be treated as a ray, where the rays are perpendicular to the wavefront as
seen in Figure 2.1. When light originates from a nearby point source or when light is
focused by a lens or mirror, the wavefront is ideally spherical.
Figure 2.1. Wavefronts corresponding to a plane wave and focusing spherical wave
While one definition of a wavefront is a surface of constant phase (Hecht, 2002), the
optical metrology community often defines the wavefront as the distribution of phase at a
given plane, such as the detector in an interferometer. This should be kept in mind
throughout this dissertation when the term “wavefront” is referred to while discussing the
interferometer design or results.
2.2 Polarization Basics
The polarization of light refers to the orientation of its electric field as it propagates
through space. The polarization state of the field is a function of the amplitude and phase
components from Equation 2.1. Light is known as having linear polarization if the
electric field is oriented along a constant plane as it travels in position and time.
Conversely, light has circular or elliptical polarization if the orientation of its electric
field varies with time and position. Light is considered unpolarized if the electric field
orientation varies randomly. As light is generally described as an electric field as shown
in Equation 2.1 above, that equation can be broken into the sum of two orthogonal
components which can be used to describe its polarization:

E(z, t) = E_x(z, t) + E_y(z, t)    (2.4)

where

E_x(z, t) = A_x cos(kz − ωt + φ_x) x̂    (2.5)

and

E_y(z, t) = A_y cos(kz − ωt + φ_y) ŷ    (2.6)
The phase difference between the x and y components of the electric field is
Δφ_xy = φ_x − φ_y. The two components are in phase when Δφ_xy is equal to 0 or an
integer multiple of 2π radians. For a single point in time, the electric field orientation is a
function of the ratio A_x/A_y. The change in orientation is a function of the phase
difference Δφ_xy. If the phase difference is constant in time the field is polarized, and if
the phase difference varies randomly in time it is unpolarized. Situations can also arise
where the phase difference has a partially constant, partially random distribution, in
which case the field is partially polarized.
Along with describing an electromagnetic wave’s degree of polarization, the orientation
of polarization can also be described as having either a linear, circular or elliptical
polarization state. The shape traced out by the tip of the electric field vector as it
propagates in space describes the polarization ellipse, or polarization state (Hecht, 2002).
An illustration of the polarization ellipse for a wavefront with elliptical polarization is
seen in Figure 2.2.
Figure 2.2 Polarization ellipse showing the location of the electric field vector tip as it travels towards
the observer during one period.
Linear Polarization
When the phase difference between the orthogonal components of the electric field is
zero or an integer multiple of 2π, the electric field simplifies to:

E(z, t) = (A_x x̂ + A_y ŷ)·cos(kz − ωt)    (2.7)
Under this condition, the orthogonal components are in phase and the field is linearly
polarized. The polarization ellipse traces a straight line in the x-y plane as the beam
propagates along the z-direction. The orientation of the linear polarization can vary
between ±180° and is determined by the arc-tangent of the ratio of the components’
amplitudes. Using typical sign conventions, the field is horizontally polarized when Ay=
0, vertically polarized when Ax= 0, and polarized at ±45° when Ax= Ay.
Elliptical and Circular Polarization
When the electric field components are out of phase (i.e., Δφ_xy ≠ 0 or 2π) the beam is
elliptically polarized. In this case, the tip of the electric field vector follows an ellipse as
it travels through space. Unlike linear polarization, the magnitude of the electric field
never goes to zero. In other words, E both rotates and varies in magnitude in time and
space. A special case of elliptical polarization occurs when the amplitudes are equal,
A_x/A_y = 1, and the phase difference Δφ_xy is ±π/2 radians. In this case, the electric field
traces a circle and is known as having circular polarization. Using the conventions used
for this dissertation, when Δφ_xy = π/2 (90 degrees) the electric field has left-circular
polarization and when Δφ_xy = −π/2 (−90 degrees) it has right-circular polarization.
Figure 2.3 Polarization ellipses for common polarization states
2.2.1 Controlling the Polarization State
When light is emitted from a source having many independently radiating components, as
is the case in typical thermal sources, the phases of the electric fields are random and
uncorrelated and therefore the light is unpolarized. For a beam of either polarized or
unpolarized light, certain devices can be used to control the polarization either through
filtering or altering the incident signal. These devices function by altering either the
amplitudes of the components or phase difference between orthogonal electric field
components. The interferometers designed and built for the purpose of this dissertation
rely on polarizers to alter the component amplitudes originating from a laser source, and
retarders (also known as wave plates) to alter the electric field phase difference.
Polarizers
Polarizers filter an incident beam of light having random polarization into a beam having
a well-defined polarization state by removing or separating the desired and unwanted
polarization vector components from an incident beam. A linear polarizer filters a given
field to have linear polarization while a circular polarizer filters a field to one of circular
polarization. It should be stressed that these devices do not change the electric field
orientation of a given lightwave, but rather filter the signal through selective absorption
(dichroism), reflection, scattering or birefringence (Hecht, 2002).
In general, the transmission of an electric field through a linear polarizer is a function of
the angular difference between the electric field orientation and the orientation of the
polarizer. For an ideal polarizer, incident light having the same polarization as the
polarizer will be completely transmitted, while light having orthogonal polarization will
be completely blocked, and the transmission varies sinusoidally between those angles.
The irradiance of light transmitted through a polarizer follows Malus's Law:

I(θ) = I₀ cos²(θ)    (2.8)

where I₀ is the incident irradiance and θ is the angle between the electric field and
polarizer orientations.
Absorptive filters function by absorbing incident light having a certain electric field
orientation while transmitting other orientations through the material. A common method
of achieving this effect is through the use of a wire-grid polarizer composed of a regular
array of very thin parallel metallic wires. Light with its electric field oriented parallel to
the wires either loses its energy through Joule heating of the wires or is reflected back
along its original path. Light having its electric field oriented perpendicular to the wires
causes little or no electron movement along the width of the wires to cause Joule heating,
and the beam is transmitted through the medium. In transmission, a wire grid polarizer
produces a highly polarized beam having linear polarization perpendicular to the wire’s
orientation. Figure 2.4 shows a schematic of a wire grid polarizer and its effect on
randomly polarized light.
Figure 2.4 Randomly polarized light incident on wire grid vertical polarizer (Image from Mellish,
2011).
In addition to wire grid polarizers, dichroic materials having anisotropic absorption
properties will produce linearly polarized light since different electric field orientations
are absorbed at a different rate as light propagates through the medium. The extinction
ratio of a dichroic absorptive polarizer is a function of the level of dichroism along with
the path length through the device.
Retarders
Since the polarization of light depends on the phase difference between its orthogonal
components, a phase retarder or wave plate can be used to alter the polarization of an
incident light wave by introducing additional phase difference between the components.
In general, retarders are manufactured from birefringent materials whose refractive index
is a function of the polarization traveling through the medium. As a light wave travels
through a retarder, the device alters the phase between the orthogonal electric field
components E_x and E_y. Retarders can be produced such that the extraordinary axis is
parallel to the surface of the plate. Therefore, light having polarization parallel to the
surface travels through a refractive index ne (extraordinary) and light having polarization
perpendicular to the surface travels through no (ordinary). For light having field
components along each of these directions, the opposing polarizations will travel through
the retarder at different velocities, inducing a phase difference between the components.
The axis with the smaller refractive index is known as the fast axis since polarizations
aligned in this direction will travel faster than polarizations corresponding to the larger
refractive index, or slow axis. For a field normally incident on a retarder plate with
thickness t, the phase difference between the orthogonal field components is

Δφ = (2π/λ)·(n_e − n_o)·t    (2.9)
The phase retardation between two orthogonal fields can be controlled by altering the
thickness of a retarder plate. Common examples of these devices are half-wave and
quarter-wave plates in which the phase difference between orthogonal field components
will change by π and π/2 radians, respectively.
These wave plates are typically made from quartz, mica or organic polymers and must be
precisely manufactured so that their thickness introduces the proper amount of retardance
(Hecht, 2002).
The equations to determine the thickness of half-wave and quarter-wave
plates are:

t_HWP = (2m + 1)·λ / (2·|n_e − n_o|)    (2.10)

and

t_QWP = (4m + 1)·λ / (4·|n_e − n_o|)    (2.11)
where m = 0, 1, 2, …, and represents the order of the wave plate. When m is zero, the
retarder is known as a zero-order wave plate. Zero order wave plates are quite thin, so
the wave plates are quite often manufactured using a higher value of m, resulting in an
easier to manufacture device known as a multi-order wave plate. However, multi-order
retarders are very sensitive to wavelength, incident angle and temperature (Hecht, 2002),
so zero order retarders are commonly used when there are very tight polarization
tolerances.
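As an illustration of Equation 2.11, the short calculation below estimates zero-order and multi-order quarter-wave plate thicknesses. The birefringence value used for quartz is an assumed, approximate figure, and the 785 nm wavelength simply matches the near-infrared source described in Chapter 1; the sketch is illustrative only.

wavelength = 785e-9      # meters (near-infrared, as used by the Phase II TFI)
delta_n = 0.009          # assumed |n_e - n_o| for crystalline quartz (approximate)

def qwp_thickness(m, wavelength, delta_n):
    """Quarter-wave plate thickness from Equation 2.11."""
    return (4 * m + 1) * wavelength / (4 * delta_n)

print(qwp_thickness(0, wavelength, delta_n))    # zero-order plate, roughly 22 microns
print(qwp_thickness(10, wavelength, delta_n))   # multi-order plate, roughly 0.9 millimeters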
2.3 Jones Vectors and Calculus
The electric field described in Equation 2.1 can alternatively be described in an
exponential form:

E(z, t) = Re{ A_x exp[i(kz − ωt + φ_x)] x̂ + A_y exp[i(kz − ωt + φ_y)] ŷ }    (2.12)
Since an electric field can be written in terms of its orthogonal components, a vector
notation provides a convenient method of making polarization calculations. Equation
2.12 can be rewritten in vector form:

E(z, t) = Re{ exp[i(kz − ωt)]·[A_x e^(iφ_x) ; A_y e^(iφ_y)] }    (2.13)

where [a ; b] denotes a column vector with a as the upper element and b as the lower element.
The vector form of the electric field can be easily manipulated using Jones Calculus,
which was developed to describe the polarization state of coherent, polarized light
propagating in one direction. The Jones vector, E, represents the polarization state of a
monochromatic plane wave propagating along one axis. The Jones vector is a two
element complex vector, and for a plane wave propagating along the z-axis is

E = [E_x ; E_y] = [A_x e^(iφ_x) ; A_y e^(iφ_y)]    (2.14)
The Jones vector has the units of an electric field (volts/meter), and is usually normalized
to unit magnitude. The normalized Jones vectors for common polarization states follow in
Table 2.1.
Table 2.1. Jones vectors for common polarization states

Polarization State      Jones Vector for Decreasing Phase Convention (kz − ωt)
Horizontal              [1 ; 0]
Vertical                [0 ; 1]
Linear at 45°           (1/√2)[1 ; 1]
Linear at 135°          (1/√2)[1 ; −1]
Right Circular          (1/√2)[1 ; i]
Left Circular           (1/√2)[1 ; −i]
The normalized Jones vectors from Table 2.1 can be used to describe the electric field in
vector form for a given polarization state. For example, the electric field of a
horizontally polarized beam having a Jones vector of [1 ; 0] is

E = A e^(iφ) [1 ; 0]    (2.15)
where A is the amplitude and φ is the phase. The Jones vector for a polarization state
having an angular orientation different from those listed can be calculated by applying a
two-dimensional rotation matrix

R(α) = [cos α, −sin α ; sin α, cos α]    (2.16)
where α is the rotation angle. For example, a linear polarized state having an arbitrary
orientation, L(α), is obtained by multiplying the horizontal Jones vector by the rotation
matrix:

L(α) = R(α)·[1 ; 0] = [cos α, −sin α ; sin α, cos α]·[1 ; 0] = [cos α ; sin α]    (2.17)
The corresponding electric field for a linearly polarized field L(α) is then

E = A e^(iφ)·[cos α ; sin α]    (2.18)
Now that the Jones vectors for a variety of polarization states have been described, the
vectors can be used to calculate the polarization of light as it interacts with different
polarization altering elements. Where polarization states were previously described as a
two element vector, a Jones matrix is a two-by-two matrix characterizing the polarizing
effect of different media. The Jones matrices for commonly used polarization elements
are shown in Table 2.2. The relationship between an incident Jones vector E and the
exiting Jones vector E’ transmitted through a medium having a Jones matrix J is as
follows:

E′ = J E    (2.19)

[E′_x ; E′_y] = [j₁₁, j₁₂ ; j₂₁, j₂₂]·[E_x ; E_y]    (2.20)
Table 2.2. Jones matrices for selected polarization elements

Element                                       Jones Matrix
Horizontal Polarizer                          [1, 0 ; 0, 0]
Vertical Polarizer                            [0, 0 ; 0, 1]
Linear Polarizer at +45°                      (1/2)[1, 1 ; 1, 1]
Linear Polarizer at −45°                      (1/2)[1, −1 ; −1, 1]
Quarter-wave plate, vertical fast axis        e^(iπ/4)[1, 0 ; 0, −i]
Quarter-wave plate, horizontal fast axis      e^(iπ/4)[1, 0 ; 0, i]
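To illustrate how the entries of Tables 2.1 and 2.2 are applied through Equation 2.19, the minimal numerical sketch below (an illustrative example, not part of the instrument software) passes 45° linearly polarized light through a quarter-wave plate with a horizontal fast axis and verifies that the output is circularly polarized.

import numpy as np

# Jones vector and matrix taken from Tables 2.1 and 2.2
E_45 = np.array([1, 1]) / np.sqrt(2)                      # linear polarization at 45 degrees
J_qwp_h = np.exp(1j * np.pi / 4) * np.array([[1, 0],
                                             [0, 1j]])    # QWP, horizontal fast axis

# Equation 2.19: the exiting Jones vector is the matrix times the incident vector
E_out = J_qwp_h @ E_45

# The output has equal |x| and |y| amplitudes with a 90-degree relative phase,
# i.e., circular polarization (it matches a circular Jones vector from Table 2.1
# up to an overall phase factor, which carries no polarization information)
print(np.abs(E_out))                                       # [0.707, 0.707]
print(np.angle(E_out[1]) - np.angle(E_out[0]))             # pi/2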
2.4 Interference
When two or more waves of light are combined under certain conditions, they interfere.
In this phenomenon, the resulting combined intensity is either amplified or reduced
depending on the phase of each of the waves. In other words, when two or more
electromagnetic fields interact, the resulting field is the vector sum of the incident
fields; the resulting irradiance is not simply the sum of the individual irradiances (Hecht,
2002). Instead, the complex field amplitude can be examined where the resulting
complex amplitude of two interfered beams is the sum of the original beams’ complex
amplitudes:
A = A₁ + A₂    (2.21)
Since irradiance is equal to the modulus-squared of the complex amplitude, the irradiance
of the interference pattern is:

I = |A|² = (A₁ + A₂)(A₁ + A₂)* = |A₁|² + |A₂|² + A₁A₂* + A₁*A₂
  = I₁ + I₂ + 2√(I₁ I₂)·cos(Δφ)    (2.22)
This last equation is commonly referred to as the two beam interference equation, and
shows that the irradiance of the interference pattern is a function of the irradiance of the
two independent beams, along with the phase difference between the beams, ∆φ. The
maximum irradiance (constructive interference) occurs when the waves are in-phase,
having a phase difference between the two beams of 0 or an integer multiple of ±2π.
Similarly, the minimum irradiance occurs when the phase difference is an odd-integer
multiple of ±π (i.e., ±π, ±3π, ±5π). The maximum and minimum possible irradiances both
occur when I1=I2=I0. Under this condition, the combined irradiance for total destructive
interference is 0 and the maximum irradiance under total constructive interference is 4I0.
If interference is now examined over an area, the resulting spatial irradiance distribution
is again a function of the phase difference between the interfered components. If the
phase difference is constant over the entire observation area the resulting irradiance will
have a constant value of Imin≤ I≤ Imax. If instead the phase difference spatially varies
between the two beams, the resulting irradiance distribution will vary, displaying a
pattern of bright and dark fringes known as an interferogram. The fringes are zones of
equal phase difference between the two beams.
The visibility of an interferogram is a description of the fringe pattern’s contrast, and is a
function of the maximum and minimum irradiances within the pattern. Ideally, the
visibility is:

V_ideal = 2√(I₁ I₂) / (I₁ + I₂)    (2.23)
The previous equation describes the ideal visibility. In practice other factors can reduce
the contrast, and the visibility is calculated by:

V_measured = (I_max − I_min) / (I_max + I_min)    (2.24)
In order to maximize fringe visibility, the irradiances of the two component fields should
be matched. Other factors can reduce visibility as well, and will be discussed shortly.
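The effect of an irradiance mismatch on visibility, as described by Equations 2.22 through 2.24, can be checked numerically. The sketch below uses assumed, illustrative irradiance values (for example, a weak test reflection against a stronger reference) and evaluates the two beam interference equation over one fringe period.

import numpy as np

I1, I2 = 1.0, 0.04          # assumed beam irradiances (25:1 mismatch), illustrative only
delta_phi = np.linspace(0, 2 * np.pi, 1000)

# Two beam interference equation (Equation 2.22)
I = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(delta_phi)

# Visibility from the fringe extrema (Equation 2.24) and the ideal form (Equation 2.23)
V_measured = (I.max() - I.min()) / (I.max() + I.min())
V_ideal = 2 * np.sqrt(I1 * I2) / (I1 + I2)
print(V_measured, V_ideal)   # both about 0.38 for this irradiance mismatch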
2.4.1 Conditions for Interference
As mentioned previously, light must meet some conditions in order to interfere. Depending
on characteristics of the interfering waves, the fringe visibility is either reduced or
eliminated altogether. One of these conditions is that the polarizations of two beams
cannot be orthogonal. That is, a beam with vertical polarization will not interfere with a
horizontally polarized beam, a beam having right-circular polarization will not interfere
with a left-circular polarized beam, and so on. Maximum visibility occurs when identical
polarizations are interfered. When the polarizations are neither equal nor orthogonal, the
visibility is reduced, going as the dot-product of the electric field vectors. In the case of
linear polarizations, the resulting visibility varies as a function of the cosine of the angle
between the polarizations.
The visibility of an interferogram depends on the temporal and spatial coherence of the
interferometer’s light source. Temporal coherence depends on the bandwidth of the
source, while spatial coherence is a function of the physical source size. The OPD over
which visible fringes occur is a function of the temporal coherence. Spatial coherence
determines the maximum visibility that can be achieved over any OPD, and causes a
uniform reduction in fringe visibility over the entire interferogram as the spatial
coherence becomes smaller.
The temporal coherence function is proportional to the Fourier transform of the source's spectral
distribution. A source having a very narrow bandwidth, as is created by most gas lasers,
has a broad coherence function in the OPD domain and therefore a long coherence length. Temporal coherence
is usually described in terms of the coherence length, which is a function of the source’s
center wavelength λ_c and bandwidth Δλ as follows:

l_c = λ_c² / Δλ    (2.25)
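Equation 2.25 can be used to compare source types. The short example below contrasts the coherence length of a narrow-line HeNe laser with that of a broadband superluminescent diode; the bandwidth values are assumed, representative figures used only for illustration.

def coherence_length(center_wavelength, bandwidth):
    """Coherence length from Equation 2.25 (both arguments in meters)."""
    return center_wavelength**2 / bandwidth

# Assumed representative values, for illustration only
print(coherence_length(632.8e-9, 1e-12))   # HeNe with ~1 pm linewidth: about 0.4 m
print(coherence_length(785e-9, 20e-9))     # 785 nm SLD with ~20 nm bandwidth: about 31 um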
As the bandwidth increases, as is the case in sources such as super luminescent diodes
having bandwidths of tens of nanometers, the temporal coherence decreases and the OPD
over which fringes are achieved is reduced. As the bandwidth grows even wider, even up
to broadband white light sources, fringes are only visible within a very small OPD range.
While white light cannot be used for most interferometers, it is useful to use in measuring
the thickness of very thin films such as oil or soap where polychromatic fringes
corresponding to variations in thickness are visible. The two beam interference equation
taking into account temporal coherence visibility reduction as a function of OPD,
V_T(OPD), is:

I = I₁ + I₂ + 2√(I₁ I₂)·V_T(OPD)·cos(Δφ)    (2.26)
Spatial coherence is proportional to the inverse of the source size. As the source size
grows, the visibility decreases due to the spatial coherence by a factor of VS, with values
between zero and one. The spatial coherence of a source can be increased by either
focusing the beam into a single mode fiber or by using an objective to focus the beam
onto a small pinhole with a diameter of the beam’s Airy disk. Both of these methods
effectively create a point source with a near-infinite spatial coherence. The two beam
interference equation can be modified to include the effects of spatial coherence as
follows:

I = I₁ + I₂ + 2√(I₁ I₂)·V_S·cos(Δφ)    (2.27)
All of the factors discussed in this section have the effect of reducing the visibility of an
interference fringe pattern. Some reduction in visibility is expected and does not cause
problems in most systems. A visibility of at least 0.20 is usually required to discern a
fringe pattern (Goodwin, 2006), and in some special cases, even lower visibilities can be
used. Reduction in visibility as a function of OPD is sometimes intentionally built into
low coherence interferometers. In these systems, the OPD can be scanned through a
known range and the fringe visibility throughout the scan recorded. Through a priori
knowledge of the OPDs existing in the interferometer, the visibility distribution can be
used to determine measurement parameters such as thicknesses or heights.
3 TWYMAN-GREEN INTERFEROMETRY FOR SURFACE METROLOGY
Throughout the course of this dissertation, it became apparent that there was not a clear
“one stop” reference outlining how to design an interferometer for surface metrology.
While many references provide valuable information on certain aspects of the design and
analysis of these systems (i.e., Malacara, 2007; Malacara, 2005; Hariharan, 2006), a
reference outlining the entire design process for an interferometer such as a TwymanGreen was not found. This is likely the case since an interferometer’s design is driven by
its application; a system for measuring surface flatness of silicon wafers will have
different design constraints than one designed to measure surface figure of an eight meter
astronomical mirror, for example. The purpose of this chapter is to describe the design
methodology that went into designing the interferometers for this dissertation. Where
possible, the constraints on the system design will be explained in a general way so the
information herein can be used by those looking to design their own custom
interferometer for a variety of surface metrology applications.
Interferometry, in general, is used for a variety of measurements in many different
applications. In terms of optical testing, a Twyman-Green interferometer (TGI) is
generally used to measure the transmitted wavefront through a lens or other optic, or the
reflected wavefront of a mirror or other surface. The reflected wavefront can be used to
determine the surface topography of the surface under test. When a coherent lightwave is
reflected off of any surface, the spatial distribution of the reflected lightwave’s phase
corresponds to the topography of that surface. By detecting the spatial phase variations,
the surface topography can be calculated. While phase cannot be measured directly, it
can be detected by interfering the test beam with a reference signal, producing a pattern
of fringes corresponding to the phase variations. These fringes are known as fringes of
equal thickness since a given fringe corresponds to a constant surface height, much like
the contour lines on a geographical topography map.
While many interferometers can be used for surface metrology, Fizeau and Twyman-Green types are the most common. In general, Fizeau interferometers are sometimes
advantageous since most of the optics in the system are common path, allowing lower
quality optical components to be used without sacrificing system accuracy. However,
these systems are more difficult to phase shift and often require a larger diameter
reference surface than a TGI. Twyman-Green systems, on the other hand, are not
common path and require higher quality optics in the system to maintain accuracy.
However, their geometry allows for adjustment of the reference path to accommodate
multimode or low coherence sources, they are easy to phase shift, and can accommodate
different test part reflectances more easily than with a Fizeau.
The interferometers designed for this dissertation must measure small diameters on low
reflectivity surfaces using instantaneous phase shifting. They also must test higher
reflectivity calibration parts, so adjustable contrast is required. Since the in vivo system
will likely be modified for either refinement of the tear film measurement or for other on-eye testing altogether, it must be easy to modify. This can be accomplished by changing
individual modules within the system rather than redesigning the entire interferometer.
Based on these needs, a TGI design was chosen for this work.
The general layout of a TGI system is shown in Figure 3.1. The source is collimated and
the light directed to a beam splitter, where it is split between the reference and the test
arms of the interferometer. A reference mirror is located in the reference arm, and the
surface under test and appropriate null optics are located in the test arm. When light is
reflected from the reference mirror and test part, they are recombined by the beam splitter
and directed to the imaging arm of the system. An imaging lens images the beam onto
the camera so that the detector and test part are conjugate. The reference is a flat mirror
aligned normal to the incident wavefront such that the reflected wavefront is planar and
parallel to that mirror. The test part reflects a wavefront having the shape and orientation
of the surface under test. When combined at the image plane, the two beams interfere to
create a fringe pattern corresponding to the optical path difference between the test and
reference beams.
Figure 3.1 Layout of a TGI. The paths of the interfering waves are shown by the solid lines and the
imaging path is shown with the dotted lines.
3.1 Measuring Flat Surfaces
No additional modifications to the TGI pictured in Figure 3.1 are necessary to measure a
flat surface. The resulting interference pattern corresponds to the surface error in the part
under test, within the accuracy of the interferometer based on the quality of the other
optics in the system. The interferometer can be calibrated in order to increase the
accuracy of the measurement by removing the effects of aberrations inherent in the
optical system.
3.2 Measuring Spherical Surfaces
In order to measure spherical surfaces with a TGI, modifications to the interferometer are
typically required. If a spherical test wavefront is interfered with a
flat reference, the resulting fringe frequency will often be beyond the resolvable limits
of the detector. It is therefore desirable to measure a spherical surface in a null
condition. A null condition exists when the interferometer is designed so that the
interferogram from a perfect surface contains either a single null fringe or a pattern of
straight line tilt fringes. A null condition is created by either using a spherical reference
mirror matching the power of the surface under test, or by including additional optics in
the test arm of the interferometer. These additional optics focus the beam at the test
surface’s center of curvature so that the incident wavefront is concentric with the surface.
These optics are called either null lenses, objectives, divergers or convergers depending
on the author. The rays focused by the diverger are at normal incidence on the surface,
and upon reflection will travel back through the interferometer along the same path as
they entered. This ensures that the wavefront at the interferometer’s detector will be flat
for a perfect spherical surface. A layout of a TGI for measuring a spherical surface is
shown in Figure 3.2.
Figure 3.2. Layout of a TGI used for measuring spherical surfaces
Since the diverger must focus the test beam at the center of curvature of the surface under
test, the design of the null optics are driven by the curvature and required test diameter on
the surface. The R-Number of a testing configuration is defined as the radius of
curvature, R, divided by the measurement diameter, D, on the part:

R/# = R / D    (3.1)
The working f/# of the diverger must be equal to or faster than the R-number defined by
the test configuration. As the measurement diameter grows or as the radius of curvature
shortens, the null optics must become faster.
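As a worked example of Equation 3.1, with hypothetical numbers chosen only for illustration, a concave surface with an 8 mm radius of curvature measured over a 6 mm diameter gives the following requirement on the diverger:

def r_number(radius_of_curvature, measurement_diameter):
    """R-number of a test configuration, Equation 3.1."""
    return radius_of_curvature / measurement_diameter

# Hypothetical test geometry: R = 8 mm concave surface, 6 mm measurement diameter
print(r_number(8.0, 6.0))   # about 1.33, so the diverger must work at f/1.33 or faster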
Designing divergers for concave surfaces tends to be easier than designing for convex
surfaces. Since the diverger must focus the beam at the test part’s center of curvature, the
focus is between the diverger and the surface for a concave part. For a convex part, the
focus is beyond the test part, as seen in the lower portion of Figure 3.3. As the radius of
curvature increases, the test part must be positioned closer to the interferometer or the
diverger focal length must increase. Given that the f/# must be maintained, an increase in
focal length requires an increase in beam diameter, sometimes resulting in a prohibitively
expensive design.
Figure 3.3. Configuration for testing concave and convex surfaces so that the test wavefront is
concentric to the surface under test.
Commercial off-the-shelf optical systems such as infinite conjugate microscope
objectives or well corrected camera lenses may be used as the null optic for testing
spherical surfaces, while in other cases the optics must be designed specific for the
application at hand. In order to provide the required f/# and sufficient working distance
to measure convex surfaces, the addition of a beam expander prior to the diverger is often
necessary in interferometers with a relatively small internal beam diameter. The beam
expander is also a non-common path optic, and must have wavefront errors within the
desired accuracy of the system as well.
When measuring a spherical surface using a diverger, a perfect surface would produce a
single fringe over the entire interferogram. While variations in the interferogram provide
information about irregularity in the surface topography, the measurement does not
provide the radius of curvature of that surface. In order to determine the base radius of
curvature in addition to the irregularity, the interferometer must be arranged to measure
in the cat’s eye configuration. In this configuration, the optic is aligned so that the
diverger focuses the beam directly onto the surface, as seen in Figure 3.4. When
positioned at the diverger’s focus, the resulting interferogram is again a single null fringe.
Longitudinal displacement from the proper focus results in circular fringes corresponding
to defocus. The radius of curvature is the longitudinal displacement between the surface
when it is positioned at cat’s eye and when it is in the correct concentric position (Figure
3.3).
Figure 3.4. Cat's eye test configuration
3.3 Measuring Aspheres
Aspheric surfaces have a sag that deviates from a spherical form. These include conic
sections (parabola, ellipse, etc.) as well as surfaces described by a polynomial having an
order higher than two or even free-form surfaces following much more complicated
mathematical prescriptions. Aspheric surfaces are sometimes used in optical systems to
improve system performance or reduce the total number of elements in the system.
While the use of aspheric surfaces often results in an improved design, manufacturing
and testing the parts is more difficult than for spherical surfaces. When measuring an
asphere interferometrically, deviations in the surface from a best-fit sphere result in
fringe frequencies that can make testing difficult.
When using a TGI to measure topography of an aspheric surface, the slope of the surface
may merit additional design considerations. If the wavefront slope is within the limits of
the detector (see Section 3.6), the aspheric surface can be measured in a non-null
configuration. However, if the slope exceeds the Nyquist limit, null testing is typically
required. In a null configuration, conventional null optics similar to those used for testing
spherical surfaces can be used in order to achieve straight fringes for many conic surfaces
(for example, see Chapter 12 of Malacara (2007)). In addition to using traditional optics,
computer generated holograms can be used to null an aspheric wavefront. A
disadvantage of testing in a null configuration is that the optics used are specific to a
single surface geometry. If the prescription of the surface under test changes, the testing
accessories will change as well. Oftentimes, the components used to measure an asphere
are more expensive than the component itself.
As an alternative to testing in the null configuration, non-null testing may be used in
some cases. For example, a detector with a large number of pixels will result in a larger
resolvable wavefront slope that allows interferometric testing with either non-null or partial-null configurations. Additionally, sub-Nyquist methods can make use of aliased
interferograms by assuming that both the first and second derivatives of the wavefront are
continuous. One commonly used method of measuring optical surfaces in the early
stages of fabrication uses an interferometer with a long wavelength to reduce the fringe
frequency since every fringe corresponds to one wavelength of OPD. 10.6 micron CO2
lasers are often used for this application, effectively reducing the fringe frequency by
almost 20-times below the frequency in visible interferometers. In all of these non-null
methods extra care is required in calibration of the interferometer. The aspheric part
introduces retrace error into the system when the wavefront travels from the test part back
through the interferometer. That is, a ray leaving the interferometer and reflected off of
the aspheric surface will not travel along the same path back through the system. A
calibration method known as reverse ray tracing should be used in this case in order to
calibrate out the effects of non-common path wavefront travel.
3.4 Phase shifting
Phase shifting interferometry (PSI) is a modification of classical interferometry utilizing
electronic detectors to digitize the interferograms for more robust analysis. In PSI, a
number of interferograms related by a known phase shift are captured either over time or
simultaneously. The fundamental two beam interference equation describes the recorded
intensity of each individual phase shifted interferogram:

I_n(x, y) = I′(x, y) + I″(x, y)·cos[φ(x, y) + δ_n]    (3.2)

where I_n(x, y) is the intensity recorded at each pixel in the detector for each of the n
interferograms, I′(x, y) is the average intensity, I″(x, y) is the intensity modulation,
φ(x, y) is the phase difference between the two beams, and δ_n is the known phase shift
between the phase shifted interferograms. This equation presents three unknown
variables: I′(x, y), I″(x, y) and φ(x, y). Meanwhile I_n(x, y) is recorded by the detector
and δ_n is known based on the instrument's design. By capturing at least three
interferograms, these three unknowns can be determined and the phase difference
between the beams ultimately calculated. A number of algorithms have been developed
to calculate phase, many of which require a different number of interferograms. An
interested reader should refer to Chapter 14 of Optical Shop Testing (Greivenkamp,
1992) for an extensive list of phase shifting algorithms, along with their advantages. The
commonly used four-step algorithm will now be derived, as an example. To make use of
this algorithm, four interferograms having a 90° phase difference between each
interferogram are captured where:
δ_n = 0, π/2, π, 3π/2    (3.3)
By placing these phase shifts into Equation 3.2 and applying trigonometric identities to
the cosine terms, the four measured intensity patterns are:

I₁(x, y) = I′(x, y) + I″(x, y)·cos[φ(x, y)]    (3.4)

I₂(x, y) = I′(x, y) − I″(x, y)·sin[φ(x, y)]    (3.5)

I₃(x, y) = I′(x, y) − I″(x, y)·cos[φ(x, y)]    (3.6)

and

I₄(x, y) = I′(x, y) + I″(x, y)·sin[φ(x, y)]    (3.7)
At this point, there are still three unknowns along with a system of four equations. The
first unknown, I′(x, y), is eliminated through subtraction:

I₄(x, y) − I₂(x, y) = 2I″(x, y)·sin[φ(x, y)]    (3.8)

and

I₁(x, y) − I₃(x, y) = 2I″(x, y)·cos[φ(x, y)]    (3.9)
The modulation can be eliminated by dividing Equation 3.8 by Equation 3.9, leaving only
the known intensity values along with the unknown phase term:

[I₄(x, y) − I₂(x, y)] / [I₁(x, y) − I₃(x, y)] = sin[φ(x, y)] / cos[φ(x, y)]    (3.10)
By rearranging and using trigonometric definitions the phase is:

φ(x, y) = arctan{ [I₄(x, y) − I₂(x, y)] / [I₁(x, y) − I₃(x, y)] }    (3.11)
This equation provides a method for calculating the phase from four recorded
interferograms with 90° relative phase shifts between each subsequent measurement, as
shown in Figure 3.5
Figure 3.5. Four 90° phase shifted interferograms for an arbitrary wavefront
When the phase is calculated using Equation 3.11, the value of the phase is
ambiguous because the arctangent can only determine the phase over a single 2π range. Therefore, the
calculated phase is actually the phase modulo 2π. To account for this, integer multiples
of 2π are added to the calculated phase in order to produce a smooth phase distribution.
This process is known as phase unwrapping and requires that the measured phase be
continuous.
Once phase is known, the OPD can be calculated. OPD is related to the phase φ by:

OPD(x, y) = φ(x, y)·λ / (2π)    (3.12)

When the interferometer is used to measure a surface in reflection, the OPD calculated at
the detector is two times the deviation in the surface, since the test beam traverses each
surface deviation twice. For a surface in reflection, the surface height, h(x, y), is
related to the measured phase by:

h(x, y) = OPD(x, y) / 2 = φ(x, y)·λ / (4π)    (3.13)
70
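The chain from Equation 3.11 through Equation 3.13 can be summarized in a few lines of numerical code. The sketch below is a simplified illustration rather than the instrument's analysis software; it assumes four registered interferogram arrays and a phase map smooth enough for simple row-by-row unwrapping.

import numpy as np

def height_from_four_step(I1, I2, I3, I4, wavelength):
    """Four-step phase (Eq. 3.11), unwrapping, and surface height (Eq. 3.13)."""
    # Wrapped phase modulo 2*pi; arctan2 resolves the quadrant from the signs
    phase = np.arctan2(I4 - I2, I1 - I3)
    # Simple row-by-row unwrapping; real data generally needs a 2D unwrapper
    phase = np.unwrap(phase, axis=1)
    # Reflection doubles the OPD, so height is phase * wavelength / (4*pi)
    return phase * wavelength / (4.0 * np.pi)

Here the wavelength argument is the source wavelength of the interferometer, and the simple unwrapping step would need to be replaced by a more robust two-dimensional algorithm whenever the phase is not continuous across the measurement.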
PSI provides a number of advantages, the main of which is that it is no longer necessary
to identify the extrema of the fringes. In fringe tracing, defining the centers of the fringes
is difficult for an interferogram with only a few widely spaced fringes, so tilt is usually
added to better identify the fringes. However, adding additional wavefront departure to
the interferometer in order to create the additional fringes decreases the precision of the
measurement, so a trade-off must be made between accuracy and precision (Malacara,
2007). In PSI, rather than identifying, tracing and ordering the fringe maxima and
minima, the phase is instead calculated pixel by pixel based on the series of recorded
interferograms. PSI is capable of accurately measuring phase for a single broad fringe
over the entire interferogram. In addition to increasing measurement accuracy and
precision, the sign of the phase is no longer ambiguous. PSI provides a method of
knowing if a feature on a measured surface is a bump or a hole without manually pushing
the test optic as is done in non-phase shifting interferometers. The results of PSI are not
affected by variations in intensity across the interferogram, allowing variations in source
uniformity. In general, PSI provides an extremely accurate method of measuring
wavefront phase with accuracy of λ/100 routine in commercially available systems and
accuracies exceeding λ/1000 available for demanding applications (Malacara, 2007).
Different methods of achieving phase shifting have been developed, and in general can be
broken into two categories: temporal and spatial phase shifting. Temporal phase shifting
captures the required number of measurements over time. Spatial phase shifting records
the interferograms within a single camera integration time. The spatial method is
sometimes referred to as instantaneous phase shifting, simultaneous phase shifting, or
dynamic interferometry, and is appropriate where the testing configuration changes over
the time required to temporally phase shift.
3.4.1 Temporal Phase Shifting
A majority of phase shifting systems capture the required measurements over time by
translating the reference mirror by the required amount, usually with a piezoelectric
transducer (PZT). In this case, the reference mirror is translated to introduce the phase
shift, a measurement is captured, and the process is repeated the required number of
times. In some cases, stopping and starting the mirror must be avoided due to either
damping characteristics in the mirror acceleration, or the need to decrease the
measurement time. In this case, temporal phase ramping is performed where the mirror is
set to move continuously and an integrating bucket data collection method is used to
average the measurement over time (Wyant, 1975).
A number of other methods exist to achieve phase shifting in addition to the use of a
PZT. For example, a diffraction grating can be scanned along the beam to utilize the
diffracted orders having π/2 relative phase shifts. An acousto-optic modulator, or Bragg
Cell, can similarly be used to utilize diffracted orders having a shifted frequency. If the
polarization states of the interferometer allow, polarizers can be used to introduce relative
phase shifts as well. Aside from using optics within the interferometer, frequency
shifting sources may be used to induce phase shifts, although care must be taken to use
stabilized sources when a frequency drift is not accounted for in other methods.
3.4.2 Spatial Phase Shifting
In some measurement conditions temporal phase shifting is not practical, especially in the
case of dynamic systems where the test part changes over time, or in systems where
mechanical vibration or air turbulence are prevalent. In this case, spatial phase shifting
can provide a method of direct phase measurement over a single integration period,
which is significantly faster than relying on temporal methods. Spatial phase shifting
relies on inducing the required phase shifts at different locations in the system all at one
time.
One method of spatial phase shifting is to introduce a spatial carrier across the
interferogram, and is usually achieved by introducing a large tilt between the reference
and test beams in the interferometer. If the tilt is introduced by simply tilting the
reference mirror in the interferometer, the test and reference beams will not be common
path through the imaging arm of the interferometer, introducing additional OPD to the
wavefront. Instead, if the reference and test beams have orthogonal polarization, a
Wollaston prism followed by a linear polarizer can be inserted prior to the detector to
introduce tilt between the beams without introducing additional OPD.
If one wave of tilt per every four pixels across the detector is introduced into the
interferogram, every column of pixels will have a relative phase shift of 90° between
itself and the next column. In a 1,000 × 1,000 pixel detector, for example, 250 waves
of tilt must be introduced between the reference and test beams. By assuming that the
wavefront is constant across four pixels, the interferogram can be parsed column by
column in order to create four phase shifted interferograms for use with the four-step
algorithm to calculate phase. This method results in a loss of resolution along one
direction of the detector since the phase is averaged over four columns at a time. Figure
3.6 shows this method of analyzing a spatial carrier interferogram.
Figure 3.6. Spatial carrier interferogram parsed and 4-shift algorithm applied to calculate the phase
modulo 2π
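A minimal sketch of this parsing scheme follows; it is illustrative only and assumes the tilt has been adjusted to give exactly one wave per four pixels and that the wavefront is effectively constant across each group of four columns.

import numpy as np

def phase_from_spatial_carrier(I, unwrap=True):
    """Parse a tilt-carrier interferogram into four 90-degree shifted frames
    and apply the four-step algorithm group by group across the columns."""
    rows, cols = I.shape
    cols -= cols % 4                       # trim so the width is a multiple of 4
    I1 = I[:, 0:cols:4]
    I2 = I[:, 1:cols:4]
    I3 = I[:, 2:cols:4]
    I4 = I[:, 3:cols:4]
    phase = np.arctan2(I4 - I2, I1 - I3)   # wrapped phase, modulo 2*pi
    return np.unwrap(phase, axis=1) if unwrap else phase

The recovered phase map has one quarter of the detector's column count, consistent with the loss of resolution along the shear-carrier direction noted above.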
Instead of sorting the spatial carrier interferogram into individual phase shifted
interferograms, a Fourier transform method is often used to calculate phase (Takeda,
1982). When the Fourier transform of a tilt-carrier interferogram is calculated, the result
in frequency space is a central peak with symmetric side lobes, as seen in Figure 3.7. The
spacing between the peak and the lobes is a function of the frequency of the tilt carrier in
the interferogram. The information corresponding to the wavefront phase is contained in
either of the side lobes. By filtering out the opposite lobe along with the central peak, the
information corresponding to the phase is isolated. The phase is calculated by taking the
arc-tangent of the imaginary part divided by the real component of the inverse Fourier
transform.
Figure 3.7. Fourier transform method of spatial carrier interferometry
In the Fourier transform method, the tilt between the reference and test beam must be
large enough that the central peak and side lobes of the Fourier spectrum do not overlap.
On the other hand, the tilt must not exceed the sampling limits of the camera. The
amount of tilt must be balanced to compromise between these two factors. The
separation of the peaks in the Fourier domain determines the maximum spatial frequency
that is unaffected by the filtering process, corresponding to the minimum size of a
resolvable feature in the measurement. Accuracy in the Fourier transform method is
reduced due to the use of Fast Fourier Transforms, as well as the filtering method applied
to isolate a single lobe for the inverse Fourier transform.
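A highly simplified sketch of this Fourier transform approach is given below. It assumes a tilt carrier along x sitting near one quarter of the sampling frequency, uses a crude rectangular filter with no windowing, and is intended only to illustrate the sequence of operations; a practical implementation requires the more careful filtering discussed above.

import numpy as np

def takeda_phase(I, carrier_fraction=0.25, halfwidth_fraction=0.1):
    """Recover wrapped phase from a tilt-carrier interferogram via the FFT method."""
    rows, cols = I.shape
    spectrum = np.fft.fftshift(np.fft.fft2(I))

    # Select one side lobe, assumed to sit near carrier_fraction of the x band
    cx = cols // 2 + int(carrier_fraction * cols)
    hw = int(halfwidth_fraction * cols)
    mask = np.zeros_like(spectrum)
    mask[:, cx - hw:cx + hw] = 1.0

    # Shift the isolated lobe back to the origin to remove the carrier, then invert
    lobe = np.roll(spectrum * mask, -(cx - cols // 2), axis=1)
    field = np.fft.ifft2(np.fft.ifftshift(lobe))
    return np.arctan2(field.imag, field.real)   # wrapped phase, modulo 2*pi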
Another method of spatial phase shifting is to use multiple detectors, where components
in each detection arm induce a separate phase shift. Figure 3.8 shows a schematic of how
such a system could be arranged with three cameras. There are a number of methods of
inducing the phase shift, including the use of dielectric beamsplitters or polarization
elements, the first of which is illustrated in Figure 3.8.
In this method the optical system
becomes quite complex since the beam must be split into multiple imaging legs. Since
the imaging legs are physically separate and imaged on multiple cameras, vibration or air
turbulence differences at each detector will introduce errors into the measurement. The
cameras must be aligned with subpixel accuracy, and difficult calibration is required to
account for differences in camera response. Additionally, camera control becomes
complicated due to the need to synchronize the exposure times.
Figure 3.8. Example of a simultaneous phase shifting interferometer using four cameras (Image
from Wyant, 2008)
The interferometers designed for this dissertation make use of a polarizing interferometer
coupled with a pixelated phase mask to create the spatial carrier (Millerd, 2004). In this
method, an array of micropolarizers is adhered to the detector. When the reference and
test beams have opposite circular polarizations, the phase shift at a given pixel is a
function of its polarizer’s orientation. By using four different polarizer orientations, four
phase shifts are induced at points across the detector, and the phase is calculated pixel by
pixel. Some advantages of the pixelated phase mask approach are that the phase
measurement is vibration insensitive, wavelength independent, and consists of a simple
optical design while maintaining precision and accuracy. A more detailed description of
this phase shifting method is available in Chapter 4.1.
Figure 3.9. Pixelated phase mask method (Image from Wyant, 2008)
3.5 Source Requirements
Three common variables in the source selection for a TGI are the source’s wavelength,
power output and coherence. The output power of the source is usually not a limiting
design factor. Modern sources coupled with responsive detectors are usually sufficient to
test parts with a wide range of reflectivity.
The wavelength is often determined by the application. If a lens assembly is to be used in
the visible spectrum, it likely should be tested in the visible regime as well. However,
this does not necessarily hold true for reflective systems since they are inherently
achromatic. Oftentimes, mirrors intended for use outside of the visible waveband are
tested using a visible interferometer due to the availability of these systems. Also, since
fringe spacing is a function of wavelength, systems having a shorter wavelength are more
78
sensitive to small surface deformities. On the other hand, interferometers having longer
wavelengths are often used when higher fringe frequencies, and therefore dynamic range,
is required.
The coherence of an interferometer’s source is a very important parameter in system
design. As discussed in Chapter Two, the coherence length limits the OPD over which
fringes are visible. The source used in the interferometer must take into account the
range of OPDs that will be measured with the device, or the interferometer must be
arranged in a way that the OPD can be adjusted depending on the test part. If a laser with
a long coherence length is used, the OPD between the interfering beams can become
quite large. This arrangement is commonly referred to as a Laser Unequal Path
Interferometer, or LUPI, and is routinely used to measure mirrors with large radii of
curvature where the test arm must become quite long in order to focus the beam at the
proper position.
3.6 Detector Requirements
The interferometer detector limits the dynamic range of the system. This component
limits how much variation from the reference wavefront the system is capable of
measuring before fringes become unresolvable. To be resolved, the wavefront phase
must change by no more than π over one pixel, resulting in a minimum of two pixels per
fringe. Fringe frequency is a function of the wavefront slope: a test wavefront that is
steep with respect to the reference wavefront results in a higher number of fringes over
that area of the detector. The unit waves/radius is used to describe the wavefront slope
and is the amount of wavefront departure across a length equal to the wavefront’s radius
due to the local wavefront slope. This is calculated across the entire wavefront, and the
largest slope corresponds to the greatest fringe frequency. One advantage of using
waves/radius to describe the wavefront slope is that it does not depend on the
interferometer’s magnification. That is, the requirements stemming from the
waves/radius calculation do not vary if the imaging lens or conjugate locations within the
system are changed.
In conventional imaging systems a detector must have at least two pixels per resolvable
feature in the image. This criterion is known as the Nyquist limit, and holds true for
interferometers. There must be at least two pixels per fringe existing in the recorded
interferogram. Therefore, the Nyquist frequency limit is defined as one over twice the
pixel spacing. This criterion sets a limit on the maximum resolvable wavefront slope in
the system. As a consequence, increasing the number of pixels over the interferogram
will increase the instrument’s dynamic range. The Nyquist frequency of a detector is
equal to one-half of the sampling frequency, where the sampling frequency is the inverse
of the center-to-center pixel spacing on that sensor (Gappinger, 2004). The relationship
between Nyquist and sampling frequencies along with pixel spacing is:
$$\xi_{nyq} = \frac{\xi_{samp}}{2} = \frac{1}{2\,d_{pixel}} \qquad (3.14)$$
where $\xi_{nyq}$ is the Nyquist frequency, $\xi_{samp}$ is the sampling frequency and $d_{pixel}$ is the pixel
spacing. The maximum resolvable wavefront slope in waves/radius is calculated by
either multiplying the Nyquist frequency by the detector radius or by dividing the number
of pixels across the detector’s width by four:
$$S_{max} = \xi_{nyq}\,r_{detector} = \frac{N_{pixels}}{4} \qquad (3.15)$$
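As a quick numerical check of Equations 3.14 and 3.15, the minimal Python sketch below computes the Nyquist frequency and the corresponding maximum resolvable wavefront slope; the 9 micron pitch and 1000-pixel width are illustrative values matching the camera discussed in Chapter 4.

```python
# Sketch: detector Nyquist limit and maximum resolvable wavefront slope
# (Equations 3.14 and 3.15). Pixel values are illustrative.

d_pixel_mm = 0.009          # 9 micron pixel pitch, in mm
n_pixels = 1000             # pixels across the detector width

f_samp = 1.0 / d_pixel_mm            # sampling frequency (cycles/mm)
f_nyq = f_samp / 2.0                 # Nyquist frequency (cycles/mm), Eq. 3.14

detector_radius_mm = n_pixels * d_pixel_mm / 2.0
max_slope = f_nyq * detector_radius_mm          # waves/radius, Eq. 3.15
assert abs(max_slope - n_pixels / 4.0) < 1e-9   # equivalent to N_pixels / 4

print(f"Nyquist frequency: {f_nyq:.1f} cycles/mm")                 # ~55.6 cycles/mm
print(f"Maximum resolvable slope: {max_slope:.0f} waves/radius")   # 250
```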
While the resolvable frequency is determined by the Nyquist frequency, the camera can
detect frequencies up to the sampling frequency. In other words, a detector can detect
frequencies twice those that it can resolve. When a frequency is present on a detector
that is beyond the Nyquist frequency but still within the sampling frequency it will
“wrap” and present itself as a frequency below the limit, known as aliasing. Figure 3.10
shows how frequencies beyond the Nyquist limit wrap to lower frequencies. Increasing
the wavefront slope beyond the Nyquist limit causes aliasing in the detected interference
patterns, adding confusion to interferogram analysis. While in some cases an aliased
interferogram might appear normal, applying standard unwrapping algorithms to it
produces erroneous results, analogous to the problem of spurious resolution in imaging
systems.
Figure 3.10. Detector response for 10 micron pixels, showing Nyquist limit at 50 cycles/mm and
sampling limit at 100 cycles/mm. The dotted line shows frequencies beyond the Nyquist limit that are
wrapped to lower frequencies.
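The folding behavior illustrated in Figure 3.10 can be expressed in a few lines. The sketch below assumes the same 10 micron pixels as the figure and reports the apparent frequency for inputs between the Nyquist and sampling limits.

```python
# Sketch: aliasing of fringe frequencies beyond the Nyquist limit for a
# detector with 10 micron pixels (Nyquist 50 cycles/mm, sampling 100 cycles/mm).

d_pixel_mm = 0.010
f_samp = 1.0 / d_pixel_mm     # 100 cycles/mm
f_nyq = f_samp / 2.0          # 50 cycles/mm

def apparent_frequency(f):
    """Frequency reported after sampling, folded about the Nyquist limit."""
    f_folded = f % f_samp
    return f_folded if f_folded <= f_nyq else f_samp - f_folded

for f in (30.0, 50.0, 70.0, 90.0):
    print(f"{f:5.1f} cycles/mm appears as {apparent_frequency(f):5.1f} cycles/mm")
# 70 cycles/mm wraps to 30 cycles/mm, and 90 cycles/mm wraps to 10 cycles/mm.
```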
As previously discussed, the fringe frequency is a function of the wavefront slope.
Wavefronts having steeper slopes in turn have a higher fringe frequency, or more closely
spaced fringes. Therefore the maximum resolvable wavefront depends on the form of the
wavefront present. A wavefront dominated by tilt which has a constant slope across the
pupil will have a different effect than spherical aberration whose slope varies cubically
over the pupil. For two wavefronts having different peak-to-valley wavefront deviations,
it is possible that the wavefront having larger error will be resolvable while the wavefront
with lower total OPD will be unresolvable over portions of the interferogram. Figure
3.11 shows the OPD plot for 10 waves of spherical aberration along with 10 waves of
defocus, and the corresponding slope for each. Both systems have identical peak to
valley wavefront departures, but spherical aberration has a greater maximum wavefront
slope. At large enough OPDs, the spherically aberrated wavefront slope will become
unresolvable before the defocused wavefront.
Figure 3.11. Plot showing 10 waves of defocus and spherical aberration, along with their wavefront
slope over the entire pupil.
Oftentimes the spatial resolution requirement for the interferometer will exceed the
wavefront slope requirement. In that case, the number of pixels on the detector is driven
by the spatial resolution requirements rather than the wavefront slope limit. Proper
interferometer design requires that the system’s detector have both the minimum number
of pixels for wavefront resolution along with the minimum pixel frequency for spatial
resolution.
3.7 Collimator Requirements
The quality of the collimator is not of great concern since wavefront errors introduced by
this optic are present in both the test and reference beams of the interferometer.
However, if the OPD under test is relatively large, the quality of the collimator must
improve. Since an aberrated wavefront changes shape as it propagates, large OPD
between the test and reference arms coupled with aberrations from the collimator result in
a difference in the wavefront incident on the reference and test surfaces, contributing
erroneous OPD to the measurement results. For large OPDs, the wavefront should be flat
to within a fraction of a wavelength (Malacara, 2005). Such a flat wavefront ensures that
the beams incident upon the reference and test surfaces will be identical.
3.8 Beam Splitter Requirements
The beam splitter in a TGI must be high quality since its surfaces are not common to both
the reference and the test arm of the interferometer. Therefore, any wavefront
deformations introduced by these surfaces will contribute error to the measurement
results. Reflective surfaces must be flat with a maximum error of roughly twice the
desired accuracy of the interferometer, while transmitting surfaces are slightly less
restrictive with a required accuracy of one-half the desired measurement accuracy
(Malacara, 2007). Either a plate or cube beam splitter can be used. A plate beam splitter
has two surfaces, the first of which is transmissive and the second both transmits and
reflects one beam in the interferometer. A cube beam splitter has four transmissive
surfaces with a reflective interface running through the diagonal of the cube. Figure 3.12
shows the arrangement of both a plate and cube beam splitter in a TGI.
Figure 3.12. Beam splitter arrangements in a TGI, showing common (C), uncommon (U), reflective
(R) and transmissive (T) surfaces
In addition to accuracy requirements on the surfaces, the homogeneity of the beam
splitter material must be of high quality since the two interfering beams travel different
distances through the glass in the plate beam splitter, and through different areas of the
cube beamsplitter. Variations in refractive index will contribute error to the
interferometer’s measurement.
If a plate beamsplitter is used in an interferometer with a relatively short coherence
source, an additional optic is necessary in the transmitted arm of the system. The
reflected beam travels through the thickness of the beamsplitter twice, while the
transmitted path only travels through the thickness once, adding OPD between the
interfering beams. If the coherence length is on the order of the OPD, this factor could
reduce or eliminate fringe visibility. Therefore, in such an arrangement a compensating
plate having nearly identical thickness as the beamsplitter should be placed in the
transmitting arm of the interferometer, at the same angle as the beamsplitter in order to
eliminate the OPD between the two arms.
3.9 Imaging Lens Design
The imaging lens images the interferogram onto the observation plane so that it is
conjugate to the optic under test. If the interferometer is used to measure a surface in
reflection, the surface is conjugate with the observation plane. If an optical system in
transmission is being tested, the pupil is instead used as the object location. Locating the
test object and observation plane conjugate with one another ensures that the resulting
interferogram geometry corresponds to a known location on the test part. When null
optics are used in the testing configuration, those optics must be considered part of the
optical system as well. Figure 3.13 shows the test and imaging arms of a TGI displaying
the conjugate locations of the mirror under test and the interferogram.
Figure 3.13. Layout of the test and imaging arms of a TGI showing both the interference path (solid) and
imaging path (dotted).
Neglecting diffraction, any non-planar wave changes shape as it propagates through
space. Spherical waves remain spherical, with only the radius of curvature changing as they
travel. Distortions in the wavefront also change in shape as the wave propagates in
space. A wavefront reflecting off of a distorted surface will reflect those distortions, but
their shape will vary as the wavefront travels. If the wavefront is measured at an
arbitrary location in an interferometer, the fringe pattern does not correspond solely to the
distorted surface, but rather to the surface defects plus the distortions from
propagation. In order to directly relate the interferogram to the surface under test, the
fringe pattern must be observed at a location conjugate to the surface (Malacara, 1995).
This arrangement ensures that the observed measurement corresponds to deformations on
the surface under test.
The major requirements driving the design of an imaging lens are three-fold. First, it
must have suitable image quality, especially in distortion, when imaging the test surface
onto the observation plane. Next, the lens must support reproduction of the maximum
fringe frequency determined by the maximum required wavefront slope in the
interferometer. Finally, the imaging lens must have sufficient resolution to meet the
spatial resolution requirements of the system, taking care to match or exceed the
resolution limit of the detector.
3.9.1 Imaging Lens Aberrations
When designing an imaging lens, the entire optical system including any null optics in
the test arm, must be considered. Since the null optics are primarily designed to focus the
test beam of the interferometer appropriately for the testing configuration, they often
degrade the image quality of the imaging system. However, the stop diameter is
generally very small, greatly reducing the aberrations in the system. The stop of the
imaging system is at the aperture placed at the focus of the interfering beams and should
have a diameter equal to the spot diameter at that location. Setting the diameter to match
the spot size allows only the desired signal from the test and reference beams through to the
observation plane, so this aperture blocks out stray reflections from within the
interferometer that could contribute to coherent noise in the system. The diameter of the
stop depends on the geometry of the surface under test, the error in the surface and the tilt
between the interfering beams (Malacara, 1995). The half-diameter of the stop is
$$a = \frac{\lambda m R}{\sigma} \qquad (3.16)$$
where m is the imaging system magnification, R is the radius of curvature of the mirror
under test and σ is the minimum fringe separation on the detector. Generally this
diameter will be on the order of a couple of hundred microns.
Although the imaging lens is not required to produce a perfect wavefront, there are limits
on the aberrations in the imaging lens design. As derived by Malacara (2005), the
maximum allowable imaging system transverse aberration depends on the desired
accuracy of the interferometer. The desired accuracy is the maximum allowable
wavefront measurement error, expressed as λ/n (tenth-wave, for example, corresponds to
n = 10). The maximum allowable transverse ray error is equal to:
$$TA_{max} = \frac{\sigma_{min}}{n} \qquad (3.17)$$
This shows that the imaging lens quality is a fraction of the fringe spacing on the
detector. For example, an interferometer required to have tenth-wave maximum
measurement error with 200 micron fringe spacing must have an imaging lens with at
most 20 microns of transverse ray error. As the fringe frequency increases, as is the case
in non-null or spatial carrier interferometers, the image quality of the imaging system
must increase accordingly, adding to the complexity and cost of the interferometer.
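The worked example above can be verified directly; the short sketch below evaluates Equation 3.17 for an assumed λ/n accuracy requirement and fringe spacing on the detector.

```python
# Sketch: maximum allowable imaging-lens transverse ray error (Eq. 3.17)
# for an accuracy requirement of lambda/n and a given fringe spacing.

def max_transverse_aberration_um(fringe_spacing_um, n):
    """Allowed transverse ray error (microns) for a lambda/n accuracy goal."""
    return fringe_spacing_um / n

# Example from the text: tenth-wave accuracy with 200 micron fringe spacing.
print(max_transverse_aberration_um(200.0, 10))   # -> 20.0 microns
```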
3.9.2 Maximum Supported Fringe Frequency
In order to determine the maximum fringe frequency supported by the imaging lens,
examine the imaging lens layout in Figure 3.14 showing two rays focused onto the
detector by an imaging lens. Assuming two plane waves are travelling along those two
rays, their wavefronts are tilted with respect to one another at the detector plane, resulting
in a pattern of parallel fringes. The maximum fringe frequency occurs when the two rays
span the largest possible angle, with one ray originating at each of the upper and lower
extremes of the imaging lens and meeting at the center of the observation plane.
Figure 3.14. Determining the maximum supported fringe frequency by an imaging lens
The maximum supported frequency is calculated as follows. The electric field of a plane
wave is
+œ p +œ 16ž
^ . —˜™šP5›
where
and
¢
+œ 'Ÿ (¡ -›

+›œ 05€Ÿ £¡ ”›
¢ 6)
3.18
3.19
3.20
The terms L, M and N are the direction cosines of the propagation direction. Therefore, the phase difference between the
two plane waves is
$$\Delta\phi = (\vec{k}_1\cdot\vec{r} - \omega_1 t) - (\vec{k}_2\cdot\vec{r} - \omega_2 t). \qquad (3.21)$$
For two rays originating from the same source, $\lambda_1 = \lambda_2$ and $\omega_1 = \omega_2$. Given
the relationship between OPD and phase difference, the OPD between the two beams at
the detector is:
$$OPD = \frac{\Delta\phi}{2\pi}\,\lambda = (L_1 - L_2)\,x. \qquad (3.22)$$
For small angles this simplifies to:
$$OPD \approx \frac{xD}{z'} = \frac{x}{f/\#_w} \qquad (3.23)$$
where D is the ray separation, equal to the lens or beam diameter, z' is the image distance
from the rear principal plane and $f/\#_w$ is the working f-number of the imaging system.
Since each fringe corresponds to one wave of OPD, the location of consecutive fringes, m
and m+1 are:
$$x_m = m\,\lambda\,f/\#_w \qquad \text{and} \qquad x_{m+1} = (m+1)\,\lambda\,f/\#_w. \qquad (3.24)$$
Therefore the fringe spacing is:
<( 9㤠)
3.25
The fringe frequency is the reciprocal of the fringe spacing, $\Delta x$, and is:
$$\xi_{supported} = \frac{1}{\lambda\,f/\#_w} \qquad (3.26)$$
which is identical to the incoherent cutoff frequency of a lens (Gaskill, 1978). The
maximum supported fringe frequency of the imaging lens must be larger than the
maximum fringe frequency the interferometer is designed to measure.
3.10 Polarization TGI
A TGI can make use of a polarizing beamsplitter (PBS) cube in order to produce
orthogonal polarizations in the test and reference arms of the system, as shown in Figure
3.15. If a PBS is used as the system’s beamsplitter and the source has linear polarization,
the P-polarization is transmitted through the cube while the S-polarization is reflected.
By placing quarter-wave plates (QWP) with the fast axis at 45° immediately following
the PBS in both arms, orthogonal circular polarizations are incident on the test and
reference surfaces. Upon reflection, each circular polarization reverses its orientation;
right-circular becomes left-circular and vice versa. When transmitted back through the
QWP, the polarization entering the PBS is rotated 90° relative to its initial condition, so
the beams are combined in the imaging arm of the interferometer where they now have
orthogonal linear polarizations. An additional QWP in the imaging leg converts the
polarizations to orthogonal circular orientations. At this point, a linear polarizing
analyzer is located prior to the observation plane in order to obtain visible fringes.
One advantage of a polarizing TGI is that the intensities in each arm of the interferometer
can be adjusted in order to maximize fringe visibility. A half-wave plate prior to the PBS
can be rotated to adjust the angle of the linear polarization incident on the PBS. This
allows control over the intensity in each arm of the interferometer to perform
measurements on parts with a large range of reflectivities. Additionally, the polarization
states throughout this interferometer can be used for phase shifting interferometry.
Figure 3.16 shows the polarization states at each point in the interferometer. By placing
an additional QWP in the imaging arm, followed by a linear polarizer, interference is
observed. A relative phase difference between the two arms of twice the angle of the
linear analyzer is introduced into the measurement.
Figure 3.15. Polarization TGI
Figure 3.16. Polarization states through the interferometer
3.11 Instrument Transfer Function
The instrument transfer function (ITF) of an interferometer is a measure of the system’s
ability to correctly report the surface topography as a function of the feature size on the
surface (deGroot, 2005). Typically, as the feature size decreases (the spatial frequency
increases), the reported height of the feature is less than the real height. An
interferometer’s ITF is analogous to the Optical Transfer Function (OTF) of an imaging
system, which describes the contrast of an image as a function of its spatial frequency
content. Typical interferometers can more accurately measure surface topography
features having lower frequencies than higher frequency features with the same
amplitude.
The ratio between the actual object height and reproduced image height for a given
spatial frequency is the ITF of a surface measuring interferometer. An ITF equal to one
at a given frequency means that the interferometer exactly reports the surface height at
that frequency, while an ITF of zero at another frequency implies that the interferometer
is incapable of reporting the surface height at the second frequency. Figure 3.17 shows
an example of an ITF for an interferometer system, and Figure 3.18 shows the effects of
that ITF.
Figure 3.17. Example instrument transfer function
Figure 3.18. Object surface profile (left) and resulting measured surface profile based on ITF in
Figure 3.17 (right)
The ITF of an interferometer is affected by the components discussed previously in this
chapter, with the major factors being the illumination scheme, imaging system, detector
sampling and the phase calculation method (Kimbrough, 2010). As long as the detector
samples at the Nyquist limit of the imaging system, detector sampling does not generally affect the ITF.
However, other factors such as detector electronics controlling camera bandwidth and
readout speed can cause the ITF to roll off prior to the cutoff frequency (Gappinger,
2004). In terms of the imaging lens, any frequency content beyond the spatial frequency
cutoff of the imaging system will not be reproduced by the interferometer. Additionally,
aberrations in the imaging system have the effect of gradually reducing the ITF towards
the cutoff frequency, reducing the accuracy of the height measurements at the affected
spatial frequencies. The phase calculation method also affects the ITF and can reduce the
measurement accuracy at different frequencies. For example, different temporal phase
shifting algorithms have frequency dependent differences, and the carrier wave Fourier
transform method attenuates the ITF at some high spatial frequencies.
3.12 Closing
The purpose of this chapter was to explain the methodology used in designing a TGI for
surface metrology. Both the in vitro Fluid Layer Interferometer and the in vivo Tear Film
Interferometer are polarizing TGIs used to measure dynamic surface topography. The
practices described thus far were applied to the design of both of those systems, and the
design results are discussed in the following chapters.
4 PHASE I: IN VITRO FLUID LAYER INTERFEROMETER
This chapter presents the design results of the Fluid Layer Interferometer (FLI) for
measuring fluid layer behavior on the surface of a contact lens, in vitro. The overall goal
of the measurement is to detect features in the layer’s surface topography that can be used
to characterize fluid behavior on different contact lens material types. The end goal of
this instrument is to produce a series of measurements over time that can be used to
evaluate the dynamic surface structure on the fluid layer.
4.1 Phase Shifting Method
Phase shifting is achieved by coupling a polarizing interferometer with a camera having a
pixelated phase mask bonded to the detector. The pixelated phase mask consists of an
array of pixel-sized wire grid polarizers, each of which overlaps a single pixel when
bonded to the detector. The pixel-sized linear polarizers are arranged at four angles, 0°,
45°, 90°, and 135°, in a 2x2 “superpixel” with orientations as seen in Figure 4.1. This
method is advantageous over other simultaneous phase shifting methods since it is a
simpler design than those requiring multiple cameras, and provides greater resolution and
increased accuracy compared to carrier wave methods.
Figure 4.1. Orientation of the micropolarizers in a 2x2 "superpixel.”
In order to describe how the phase shift is achieved, the following provides an overview
of the polarization states throughout the interferometer, and the consequences of using
this arrangement. The designed interferometer has orthogonal polarization states in the
test and reference arms of the system. For a left-circular test beam and a right-circular
reference beam, the respective electric fields in Jones vector form are:
$$\vec{E}_{test} = \frac{A_t}{\sqrt{2}}\,e^{i\phi_t}\begin{bmatrix} 1 \\ i \end{bmatrix} \qquad (4.1)$$
and
$$\vec{E}_{ref} = \frac{A_r}{\sqrt{2}}\,e^{i\phi_r}\begin{bmatrix} 1 \\ -i \end{bmatrix}. \qquad (4.2)$$
At the detector plane, these polarizations are transmitted through wire-grid linear
polarizers having a Jones matrix:
$$P(\alpha) = \begin{bmatrix} \cos^2\alpha & \sin\alpha\cos\alpha \\ \sin\alpha\cos\alpha & \sin^2\alpha \end{bmatrix}. \qquad (4.3)$$
When transmitted through the polarizer at an arbitrary angle α, the test beam electric field
becomes:
$$\vec{E}'_{test} = \begin{bmatrix} \cos^2\alpha & \sin\alpha\cos\alpha \\ \sin\alpha\cos\alpha & \sin^2\alpha \end{bmatrix}\frac{A_t}{\sqrt{2}}\,e^{i\phi_t}\begin{bmatrix} 1 \\ i \end{bmatrix} \qquad (4.4)$$
$$= \frac{A_t}{\sqrt{2}}\,e^{i\phi_t}\begin{bmatrix} \cos\alpha\,(\cos\alpha + i\sin\alpha) \\ \sin\alpha\,(\cos\alpha + i\sin\alpha) \end{bmatrix}. \qquad (4.5)$$
By factoring out the common vector terms and rewriting in exponential form, the test
beam has an electric field:
$$\vec{E}'_{test} = \frac{A_t}{\sqrt{2}}\,e^{i(\phi_t + \alpha)}\begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix}. \qquad (4.6)$$
Similarly, the reference beam after transmitting through the polarizer becomes:
$$\vec{E}'_{ref} = \frac{A_r}{\sqrt{2}}\,e^{i(\phi_r - \alpha)}\begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix}. \qquad (4.7)$$
The phase of each of the test and reference beams is the term within the exponential, and the
phase difference between the two is $\Delta\phi = (\phi_t - \phi_r) + 2\alpha$. The phase difference between a test and
reference beam with orthogonal polarizations passing through a linear polarizer is equal
to two times the polarizer’s angle. By having four different polarizer angles included in
the pixelated phase mask, four different relative phase shifts are induced between the
beams.
Since each pixel on the detector now contains a discrete phase shift, typical methods for
calculating the phase can be used. Algorithms using nearly any number of pixels to
calculate phase can be applied to calculate the phase across the detector, but two have
been studied in depth by Novak et al. (2005-1), and rely on calculating phase over either
a 2x2 or 3x3 grid. Figure 4.2 shows the layouts of both of those grid patterns, the
corresponding phase shifts and the form of the irradiance as a function of phase in each
pixel. In the 2x2 grid, the phase is calculated at the intersection of the four pixels, while
in the 3x3 routine the phase is calculated at the central pixel. Once the phase is
calculated at that point, the algorithm shifts by one pixel and repeats the calculation for
the next intersection. In this way, the phase is calculated at N-1 points across the detector
(or N-2 for a 3x3 grid), where N is the number of pixels, effectively maintaining the
original detector resolution.
Figure 4.2. Layout of the 2x2 and 3x3 grids for phase calculation showing phase shift and irradiance
at each pixel. Phase is calculated at the position of the black dot for each pattern.
In the 2x2 grid, the four-shift algorithm discussed in Chapter 3 is used, and the phase is
calculated at the center of the grid. In the 3x3 case, the phase calculation is more
complicated. In this case, the phase is calculated at the central pixel of the grid using the
equation:
$$\phi_{22} = \tan^{-1}\!\left[\frac{-\sum_{m,n} w_{mn}\,I_{mn}\sin(\delta_{mn})}{\sum_{m,n} w_{mn}\,I_{mn}\cos(\delta_{mn})}\right] \qquad (4.8)$$
where the sums run over the 3x3 grid, the subscripts m and n correspond to the row and
column numbers of the grid, $I_{mn}$ and $\delta_{mn}$ are the measured irradiance and induced
phase shift at each pixel, and $w_{mn}$ are window weights of 4 at the center, 2 at the edges,
and 1 at the corners of the grid. Since the phase is calculated at an actual data point, the 3x3 method is more
conducive to data manipulation and analysis and it is insensitive to linear irradiance
variation (Novak, 2005-2). In both phase calculation algorithms, it is assumed that the
phase is constant over the pixels used in the calculation. Since this is likely not the case,
some error is inherent in the calculation method. As wavefront slope increases, the error
in the phase calculation increases as well. Interestingly, calculating the phase with the
3x3 method results in about half as much phase calculation error as the 2x2 method
(Novak, 2005). The 3x3 phase calculation method is built into the 4Sight™ analysis
software (4D Technology Corp.) and is used for interferogram analysis throughout this
dissertation.
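As an illustration of how a pixelated mask is processed, the sketch below applies a simple 2x2 four-bucket reconstruction to a synthetic interferogram. The superpixel phase-shift arrangement and unit fringe modulation are assumed for the example; this is not the 3x3 windowed algorithm used by 4Sight.

```python
import numpy as np

# Sketch: simplified 2x2 "four-bucket" phase recovery from a synthetic
# pixelated-phase-mask interferogram. The superpixel arrangement below
# (0, pi/2 on one row; 3*pi/2, pi on the next) is assumed for illustration.

N = 256
y, x = np.mgrid[0:N, 0:N]
phase_true = 4 * np.pi * ((x - N / 2) ** 2 + (y - N / 2) ** 2) / N ** 2

delta = np.zeros((N, N))               # per-pixel phase shift from the mask
delta[0::2, 1::2] = np.pi / 2
delta[1::2, 0::2] = 3 * np.pi / 2
delta[1::2, 1::2] = np.pi

irradiance = 1.0 + np.cos(phase_true + delta)   # unit bias and modulation assumed

# Group each 2x2 superpixel and apply the four-step formula
# phi = atan2(I(3pi/2) - I(pi/2), I(0) - I(pi)).
I0, I90 = irradiance[0::2, 0::2], irradiance[0::2, 1::2]
I270, I180 = irradiance[1::2, 0::2], irradiance[1::2, 1::2]
phase_recovered = np.arctan2(I270 - I90, I0 - I180)

# Compare against the (wrapped) true phase at the first pixel of each superpixel;
# the residual reflects the phase variation across the superpixel.
wrapped_true = np.angle(np.exp(1j * phase_true[0::2, 0::2]))
residual = np.angle(np.exp(1j * (phase_recovered - wrapped_true)))
print("max |error| (radians):", float(np.max(np.abs(residual))))
```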
4.2 Interferometer Design
The interferometer designed and built for this application is a polarized TGI system, in
the form of the system discussed in Chapter 3. In order to achieve phase shifting during a
single camera integration time, the custom interferometer was built around a Pixelated
Camera Kit from 4D Technology Corp. (Tucson, AZ). This kit includes a camera with
bonded pixelated phase mask, which is discussed in detail later in this chapter, along with
a computer, frame grabber and 4Sight™ interferometry analysis software. A schematic
layout of the interferometer is shown below in Figure 4.3.
Figure 4.3. Layout of the Fluid Layer Interferometer
The light source of the interferometer is a stabilized single mode 632.8 nm Helium-Neon
laser which is focused with an infinity-corrected 40X microscope objective onto a five
micron diameter pinhole for spatial filtering. The objective creates a diverging beam
which is then collimated by a 200 mm focal length collimator into an 18 mm diameter
beam. The collimated beam transmits through a half-wave plate (HWP) followed by a
polarizing beamsplitter (PBS) cube which splits the incident beam into the reference and
test arms of the interferometer, having orthogonal polarizations. The reference beam is
transmitted through the PBS and the test beam is reflected at the PBS.
In the reference arm, the beam first encounters a quarter-wave plate (QWP) oriented with
its fast axis at 45° and is then reflected off of an uncoated glass λ/20 flat reference
surface. The reference beam is then transmitted back through the QWP so that its
polarization is orthogonal to when it entered the reference arm and is instead reflected by
the PBS into the imaging arm of the interferometer.
In the test arm the beam transmits through a QWP and is then expanded and focused by a
custom null diverger which nominally focuses the beam at the contact lens/fluid layer’s
center of curvature. The beam is reflected off of the fluid layer under test and back
through the null diverger, quarter-wave plate and PBS into the imaging arm of the
system. The contact lens is held on a plastic surface (Zeonor 1060R), which is described
in detail in Section 4.3.
The reference and test beams have orthogonal polarizations in the imaging arm where
they are focused with an aplanatic imaging lens onto the detector. An additional QWP
with its fast axis at 45° is placed prior to the detector to convert the orthogonal linear
polarizations of the test and reference beams to opposite circular polarizations.
The purpose of the aforementioned half-wave plate prior to the PBS is to rotate the
polarization state allowing adjustment of the irradiance into each arm of the
interferometer. This allows the interferometer to accommodate a variety of test part
reflectances while maintaining high visibility fringes. Since the fluids used with this
interferometer have a refractive index near 1.337, an uncoated reference was used to
more closely match the reflectance in the test arm.
A photograph of the FLI, along with an overlaid beam path is shown in Figure 4.4. The
interferometer is built in a vertical orientation so that the contact lens could be mounted
and fluid applied without fighting against gravity to stay in place.
Figure 4.4. Layout of the fluid layer interferometer
4.2.1 Camera
The main criterion driving camera selection for the fluid layer interferometer was the
desired phase shifting method. Since relying on the pixelated phase mask approach was
identified as the best option, the camera selection is driven by those sensors compatible
with the phase mask technology. The manufacturer of the Pixelated Camera Kit (4D
Technology) only has access to micropolarizer arrays with a pitch of 9 microns or 7
microns center-to-center spacing; each individual wire-grid polarizer has a width of 9 or
7 microns, and a detector with matching pixel sizes must be used. It is possible to use a
detector with pixel sizes of one-half the phase mask pitch (i.e., 4.5 or 3.5 microns) where
the pixels could be grouped into super-pixels matching the extent of the phase mask. The
second criterion in order to use the phase mask technology is that the sensor must be
exposed so that the phase mask can be bonded directly to the detector array. This
excludes many cooled or infrared detectors where the sensor itself is inaccessible.
4D Technology offers a standard Pixelated Camera Kit which includes either a one
megapixel or four megapixel camera with phase mask, frame grabber, computer and
software package for controlling and reading out the interferograms. In order to
minimize cost, the one megapixel camera was selected for use in this system. Given that
the one megapixel camera has 1000 pixels across its width, the maximum resolvable
fringe frequency is 500 cycles/diameter, or 250 waves/radius. The array is made up of 9
micron pixels resulting in an active detector area of 9 millimeters square. The maximum
fringe frequency in cycles per millimeter is then 56 mm-1, which is the Nyquist frequency
of the detector. For a 6.5 millimeter diameter on the fluid layer imaged onto the nine
millimeter wide detector plane, the interferometer operates at a magnification of |m|=
1.38.
Since the OPD measured in a Twyman-Green interferometer is twice the OPD of the
surface under test, the maximum fringe frequency of 250 waves/radius at the detector
corresponds to 125 waves/radius of departure in the fluid layer. For a wavelength of
632.8 nm, this corresponds to about 79 microns/radius of slope error on the surface.
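A brief sketch of the arithmetic behind these camera numbers is given below; the values simply restate the detector format and wavelength quoted above.

```python
# Sketch: dynamic-range figures for the one-megapixel, 9 micron camera.

n_pixels = 1000
pixel_mm = 0.009
wavelength_um = 0.6328
magnification = 9.0 / 6.5                      # detector width / measurement diameter

nyquist_cyc_per_mm = 1 / (2 * pixel_mm)        # ~56 cycles/mm at the detector
detector_slope = n_pixels / 4                  # 250 waves/radius at the detector
surface_slope = detector_slope / 2             # reflection doubles the OPD -> 125
surface_slope_um = surface_slope * wavelength_um

print(round(magnification, 2))                 # 1.38
print(round(nyquist_cyc_per_mm, 1))            # 55.6
print(surface_slope, round(surface_slope_um, 1))   # 125 waves/radius, ~79.1 um/radius
```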
As discussed in Chapter 3, the instrument transfer function is affected by the phase
calculation method, in this case the 3x3 grid algorithm. The ITF of a pixelated phase
mask based interferometer has been both modeled and measured empirically by
Kimbrough and Millerd (2010). In that work, the authors determined that an
interferometer using the pixelated phase mask approach and the 3x3 phase calculation
method has a cutoff frequency the same as the full detector array, and the pixelated phase
mask approach does not reduce the maximum frequency response. However, this method
does slightly attenuate the ITF at higher frequencies, such that the ITF curve for a
1000x1000 pixel detector with pixelated phase mask has a form similar to that of a
700x700 pixel array, but with the response extending to the 1000x1000 cutoff frequency
at a lower modulation. While the accuracy at some higher spatial frequencies is reduced
through use of the convolution, that method provides increased accuracy in the phase
calculation, so it is a worthwhile tradeoff.
4.2.2 Laser Source
The source in the FLI is a single mode, frequency or intensity stabilized 632.8 nm
wavelength Helium Neon laser commercially available from Edmund Optics (p/n NT59939). The laser is a class IIIa device, with an output power of at least 1.5 mW. The
HeNe wavelength was chosen due to the availability of off the shelf components
corrected and coated for a 632.8 nm wavelength. The control electronics of the laser
allow an operator to select either power or frequency stabilization. Operating the laser in
frequency stabilized mode ensures that the coherence length is sufficient for creating visible
fringes and allows proper calculation of surface height from phase, which is a function of
wavelength. When operated in frequency stability mode, the frequency is stable to ±1
MHz over a one hour period and the intensity is also stable to within ±1%.
The coherence length of the laser source is longer than ten meters, ensuring fringe
visibility over large OPDs. Such a long coherence length was chosen since it was
desirable to build a system that could be easily modified for either this or other
applications. As a consequence however, the long coherence length allows coherent
noise from stray reflections off of surfaces or dust particles in the system. To minimize
these effects, all optics where undesired reflection could reach the detector were antireflection coated with laser-line coatings, ensuring less than 0.5% reflectivity per surface.
4.2.3 Collimator
The anticipated OPDs in the interferometer are not large, so the requirements on the
collimating optics are somewhat lenient. The collimating system consists of a telescope
to expand the beam to the 18 mm diameter existing within the interferometer. The initial
laser beam is first incident on a 40X microscope objective, where it is focused onto a five
micron diameter spatial filter. A 200 mm focal length doublet is positioned at its focal
length away from the pinhole in order to collimate the diverging beam. An aperture is
also located within the expanding beam in order to limit the beam diameter at the
collimator to 18 millimeters. The doublet consists of two 400 mm focal length singlets
(Edmund Optics p/n KPX115AR.14) arranged in a lens barrel separated by a two
millimeter spacing ring. The resulting collimator has an effective focal length of 200 mm
with a transmitted RMS wavefront better than λ/4. Figure 4.5 shows the collimating lens
along with its transmitted OPD adjusted to best focus.
Figure 4.5. Layout and transmitted wavefront OPD of the collimating lens
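A thin-lens estimate confirms the collimator focal length quoted above. The sketch below treats the two 400 mm singlets as thin lenses with the 2 mm spacer; the actual elements are thick, so this is only an approximation.

```python
# Sketch: thin-lens estimate of the collimator doublet focal length
# (two 400 mm singlets separated by a 2 mm spacing ring).

f1 = f2 = 400.0   # mm
d = 2.0           # mm

power = 1 / f1 + 1 / f2 - d / (f1 * f2)   # combined power of two separated thin lenses
efl = 1 / power
print(round(efl, 1))   # ~200.5 mm, consistent with the 200 mm effective focal length
```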
4.2.4 Diverger
As described in Chapter 3, the diverger (also referred to as a null, objective or converger)
focuses the interferometer’s test beam at the center of curvature of the surface under test,
which in this case is the fluid layer on a contact lens. The major criterion driving the
design of the diverger is that it must focus the beam at the fluid layer's center of
curvature while covering the desired measurement area on that surface. Assuming the
fluid layer has a base curvature equal to that of the underlying contact lens, the radius of
curvature of the anterior surface is approximated as eight millimeters, per conversations
with the sponsoring contact lens company. While the minimum required measurement
area is six millimeters, the diverger was designed to provide measurements over a 6.5
mm diameter on the fluid layer. The purpose of this increase in diameter is two-fold.
First, the 6.5 mm diameter allows the system to be used to measure longer radii of
curvature while still providing measurements over the minimum required area, providing
functionality for future measurements on lenses having larger base curvatures. Second,
providing a larger diameter allows for analysis masks to be applied to the captured data in
order to block out any undesired areas in the measurement, which often occur at the edge
of the pupil.
Based on the 8 mm radius of curvature and 6.5 mm measurement diameter, the f/# of the
lens must be 1.23 or faster. At least one inch of working distance is required between the
last surface and the contact lens in order to allow space to apply fluid to the lens as
needed. Given the constant f/# requirement, as the working distance increases, the
diameter of the null optics must increase as well. Given the existing beam diameter of 18
mm within the interferometer, a beam expander must be used to provide a large enough
beam to achieve both the f/# and working distance required.
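The required diverger speed follows directly from the geometry; the minimal sketch below restates that calculation.

```python
# Sketch: required diverger f/# from the fluid-layer geometry.

radius_of_curvature_mm = 8.0      # approximate anterior radius of curvature
measurement_diameter_mm = 6.5     # designed measurement diameter

required_f_number = radius_of_curvature_mm / measurement_diameter_mm
print(round(required_f_number, 2))   # 1.23 -> the diverger must be f/1.23 or faster
```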
Since the interferometer is used to measure relative changes in the fluid layer over time,
low-order wavefront error introduced by the diverger can be calibrated and removed from
the measurement. However, it is more difficult to remove high-order terms without also
removing the characteristics of the fluid layer, so the diverger optics were designed to
minimize the overall wavefront error. The transmitted wavefront of the diverger system
must be better than the desired accuracy of the interferometer. Although it is possible to
run a calibration routine to remove the wavefront aberrations introduced by the diverger,
its wavefront error will contribute to higher order induced aberrations from other optics
such as the beamsplitter or imaging lens since the higher order terms are a function of the
incident wavefront. Furthermore, the test wavefront will pass through the diverger twice,
so the wavefront must be specified and qualified in doublepass. Therefore, a
requirement of λ/10 RMS transmitted doublepass wavefront error was specified. This
requirement minimizes high-order aberrations, while providing a transmitted wavefront
sufficient for absolute surface metrology should it be desired in future work. In order to
ensure that no large defects exist within the central six millimeter measurement diameter,
an additional requirement of a maximum λ/4 peak-to-valley transmitted doublepass error
over the central 90% of the pupil was desired. All requirements only apply to an on-axis
field illuminating the surface under test.
The beam expander and null optics were combined into a single system so that they could
be manufactured and aligned as one permanent assembly. The first step in the design
process was to independently design the null optics along with an afocal beam expander
to provide the required beam diameter entering the null. The preliminary design
consisted of a beam expander in the form of a reversed Galilean telescope with a single
negative element followed by a positive element, followed by the null optics formed by a
two element system of positive power meniscus lenses. The two modules were then
combined into a single system and optimized for combined performance. The afocal
restriction on the beam expander was lifted, allowing it to introduce some power in order
to best correct spherical aberration by distributing refractive power over a larger number
of optics. The resulting design is shown in Figure 4.6, and is discussed in detail below.
Figure 4.6. Layout and prescription for the FLI diverger
The as-designed diverger consists of four elements forming the nearly afocal beam
expander followed by the focusing doublet. The four elements are all manufactured from
a relatively high index SF57 glass in order to reduce spherical aberration. To achieve the
f/# of 1.23, the effective focal length was minimized in order to also minimize the beam
diameter, in turn lowering spherical aberration content along with the cost. The resulting
system took a form similar to a reverse telephoto lens where the rear principal plane is
beyond the last surface of the lens, increasing the working distance while maintaining the
focal length. The effective focal length is 20.7 mm, and the rear principal plane is 13.8
mm behind the last surface, resulting in a working distance of 28.5 mm between the
vertex of the last lens and the contact lens. The working f/# of the diverger is 1.14,
slightly faster than the required speed.
The as-designed doublepass wavefront has a quality better than λ/50 PV, but this does not
take into account manufacturing and alignment tolerances. Tolerancing a system in
doublepass using lens design software poses added challenges in setting up the initial
model. Built in tolerancing operands cannot be used in a doublepass system since the
software views each surface as a new, independent surface rather than the second pass
through a common surface. For example, if a random decenter is applied to the first lens
on the first pass, the software will introduce a new random decenter to that surface during
the return pass rather than applying the same initial magnitude and direction, as actually
exists. Therefore it is necessary to build the model with coordinate breaks around every
surface in order to analyze surface tilts and decenters, and around every element to
do the same for tolerances on the entire element position. Tolerances are then applied to
the coordinate breaks present on the first pass through the system and then pick-ups and
solves are used to relay those tolerances onto the appropriate coordinate breaks in the
second pass. In order to just account for tilts and decenters, a single two-surface lens that
is normally represented by two surfaces in the lens design model now consists of eight
surfaces. For the FLI diverger system, the model to tolerance the system in doublepass
consists of nearly 80 surfaces and 85 tolerance operands!
The company Special Optics (Wharton, NJ) was chosen to manufacture the lens based on
their ability to build the custom lenses and provide assembly using active alignment,
along with the success of prior systems they have provided to the research group. For
precision systems such as this, Special Optics takes the metrology data on each of the
manufactured surfaces and reoptimizes the design to account for some of the
manufacturing errors.
For this tolerance analysis, tolerances were placed on: surface and element tilts and
decenters, surface radii of curvature, glass and air thicknesses, surface irregularity, and
the glass refractive index and Abbe number. The mechanical system is built to allow
variation in the air gaps between the first and second, and second and third elements, so
those thicknesses were set for compensators. When used in the interferometer, the
longitudinal spacing between the test part and the diverger is adjusted to minimize the
fringe frequency, so an additional compensator was added on that distance as well.
Given input from the manufacturer, tolerances were applied to the lens and sensitivity
analysis in Zemax was used to examine their effects. Some of the parameters were
adjusted in an attempt to distribute the tolerance effects across the parameters as best as
possible. As expected, tilts and decenters dominate the toleranced performance
degradation. Variation in the radii of curvature, thicknesses, and refractive index do not
affect the system greatly since their effects are for the most part minimized by the
compensators built into the system. Table 4.1 lists the values for the tolerances and
compensators used in the tolerance analysis.
Table 4.1. Tolerances and compensators for the FLI diverger

    Parameter (units)                                Tolerance   Notes
    Radius Tolerance (Fringes)                       ±4          Newton ring doublepass test
    Thickness (mm)                                   ±0.05       Air and glass
    Element Tilt (degrees)                           ±0.025
    Surface Tilt (degrees)                           ±0.005
    Element Decenter (mm)                            ±0.025
    Surface Decenter (mm)                            ±0.005
    Surface Irregularity (Fringes)                   ±0.5        Newton ring doublepass test
    Glass Index                                      ±0.001      Nominally 1.847 for SF57
    Abbe Number                                      ±0.238      Nominally 23.780 for SF57
    Compensator: Lens 1-2 Spacing (mm)               ±1.5        Nominally 49.91 mm
    Compensator: Lens 2-3 Spacing (mm)               ±1.5        Nominally 20.15 mm
    Compensator: Diverger-Test Part Spacing (mm)     ±1.5        Nominally 26.50 mm
During the sensitivity analysis portion of tolerancing, the as-built performance is
estimated by combining the resulting sensitivities in a Root-Sum-Square (RSS) manner,
accounting for the effects of each individual tolerance. The resulting estimated RMS wavefront
error was 0.17λ in doublepass, outside of the system specification of λ/10. However, the
RSS method provides a worst-case estimate and does not account for how different
parameters interact with one another. Furthermore, active alignment will be used when
building the diverger, which should allow the manufacturer to tune the quality to
performance better than this estimate. In order to provide a better sense of how the
system will actually perform, a Monte Carlo tolerance analysis was run using these
tolerances. In the Monte Carlo analysis 100 systems were modeled independently. In
each of those systems the toleranced parameters were randomly perturbed within the
limits of the tolerances, the compensators were optimized and the system performance
was evaluated. The resulting performances of all 100 perturbed systems were combined
in order to estimate the as-built performance of the diverger. The Monte Carlo analysis
showed a mean performance of 0.07λ and estimated that 80% of systems built would
exceed the λ/10 RMS requirement. Given that the design will be reoptimized based on
manufacturing metrology data and active alignment will be used, the manufacturer was
confident they could exceed this performance for this design.
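The distinction between the RSS and Monte Carlo estimates can be illustrated with a toy calculation. The sensitivities below are made-up numbers, not the actual Zemax results, and compensator reoptimization is ignored; the point is only that random perturbations within the tolerance limits typically combine to something well below the worst-case RSS value.

```python
import numpy as np

# Sketch: worst-case RSS estimate vs. a Monte Carlo estimate of as-built
# wavefront error. Sensitivities are illustrative, not the diverger's.

rng = np.random.default_rng(0)
sensitivities = np.array([0.02, 0.05, 0.08, 0.03, 0.10, 0.06])  # waves RMS at limit

rss_estimate = float(np.sqrt(np.sum(sensitivities ** 2)))

trials = []
for _ in range(100):
    # Each parameter perturbed uniformly within +/- its tolerance; contributions
    # are assumed to scale linearly with the perturbation and add in quadrature.
    scale = rng.uniform(-1.0, 1.0, size=sensitivities.size)
    trials.append(np.sqrt(np.sum((scale * sensitivities) ** 2)))
trials = np.array(trials)

print("RSS (worst-case) estimate:", round(rss_estimate, 3), "waves RMS")
print("Monte Carlo mean:         ", round(float(trials.mean()), 3), "waves RMS")
print("Fraction of trials < 0.10:", float(np.mean(trials < 0.10)))
```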
The resulting tilt and decenter tolerances definitely fall into the “precision” class of
optical systems, and special care is required when aligning the system. The mechanical
housing was designed to allow positional adjustment of the lenses with push screws
located around the perimeter of the lens barrel. Once roughly aligned, the manufacturer
positioned the system in an interferometer and actively aligned the diverger by adjusting
the lens position in order to minimize the transmitted wavefront error. Once finally
aligned, the set screws were sealed to ensure the lenses remain in the correct position.
Through use of the active alignment process and re-optimization based on manufacturing
metrology results, the as-built diverger exceeded the requirements. Figure 4.7 displays
the doublepass transmitted wavefront from the diverger. The wavefront is flat to within
0.01λ RMS and 0.125λ PV over the full 18 mm entrance aperture.
Figure 4.7. Transmitted doublepass wavefront of the diverger, with piston, tilt and power removed.
4.2.5 Imaging Lens
The imaging lens is a commercially available f/3.3 aplanat lens having an 83 mm focal
length and a maximum 25 mm clear aperture (CVI Laser, APM-125.0-25.0). In the
current embodiment, the beam diameter entering the imaging lens is 18 mm, and the lens
works at finite conjugates to image the fluid layer onto the detector plane. When imaged
through the null optics, a real intermediate image of the surface under test is not present
since the object distance is less than the effective focal length of the null system. Instead,
a virtual image exists within the null system, roughly 80 mm inside its first surface. For a
6.5 mm measurement diameter on the fluid layer, the virtual image diameter is 17.65 mm.
In order to image the fluid layer onto the full extent of the detector (9 mm, square), the
imaging lens must provide a magnification of 0.51. This magnification, along with the
focal length of the imaging lens, is used to determine object and image distances for a
first order system layout. The magnification in air is simply the ratio of the image
distance z’ divided by the object distance z:
$$m = \frac{z'}{z} \qquad (4.9)$$
The following equation relates the focal length and conjugate distances, and can be
simplified by substituting a rearranged version of Equation 4.9:
$$\frac{1}{z'} - \frac{1}{z} = \frac{1}{f} \qquad (4.10)$$
$$\frac{1}{mz} - \frac{1}{z} = \frac{1}{f}. \qquad (4.11)$$
For a focal length of 83 mm, and a magnification of -0.51 (negative accounts for image
inversion), the object distance is z = -245.75 mm. In turn, the image distance is z’ =
125.33 mm. The object distance is measured from the front principal plane of the
imaging lens to the virtual intermediate image location, and the image distance is
measured from the rear principal plane to the image plane. From modeling the system in
optical design software, it was determined that the front principal plane of the imaging
lens is 2.9 mm from the front vertex of the lens (inside of the lens), and the rear principal
plane is -13.0 mm from the rear vertex (also inside of the lens). Therefore, the spacing
between the virtual intermediate image and the front vertex of the imaging lens is 242.85
mm and the back focal distance from the rear vertex is 112.33 mm. The distance from
the last surface of the diverger, through the remainder of the test arm, beamsplitter, and
imaging arm to the front vertex of the imaging lens is 162.85 mm. Figure 4.8 shows the
imaging path from the contact lens, through the null optics and the imaging lens, and to
the detector, along with the locations of the virtual intermediate image and principal
planes within the imaging lens. For clarity, the figure has been unfolded by removing the
beamsplitter and fold mirrors within the path.
Figure 4.8. Unfolded imaging path within the FLI, showing the interferometer mode (top) and
imaging mode (bottom)
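The first-order conjugate calculation of Equations 4.9 through 4.11 can be reproduced in a few lines; the sketch below uses the focal length, magnification, and principal plane locations quoted above.

```python
# Sketch: first-order conjugates for the imaging lens (Eqs. 4.9 - 4.11).

f = 83.0      # mm, imaging lens focal length
m = -0.51     # magnification (negative for image inversion)

# From 1/z' - 1/z = 1/f with z' = m*z:
z = f * (1 / m - 1)          # object distance from the front principal plane
z_prime = m * z              # image distance from the rear principal plane

front_pp_from_front_vertex = 2.9     # mm, quoted in the text
rear_pp_from_rear_vertex = -13.0     # mm, quoted in the text

object_to_front_vertex = -z - front_pp_from_front_vertex
back_focal_distance = z_prime + rear_pp_from_rear_vertex

print(round(z, 2), round(z_prime, 2))         # -245.75, 125.33
print(round(object_to_front_vertex, 2))       # 242.85
print(round(back_focal_distance, 2))          # 112.33
```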
For these nominal conjugate locations, along with the in-use pupil diameter, the imaging
system has an effective f/# of f/#w = 6.7. Given the cutoff frequency derived in Chapter 3
(Equation 3.26), the maximum supported fringe frequency by the imaging lens is:
$$\xi_{supported} = \frac{1}{\lambda\,f/\#_w} = \frac{1}{(0.6328\ \mu\mathrm{m})(6.7)} \approx 236\ \mathrm{mm}^{-1}. \qquad (4.12)$$
Given the detector limit of 56 fringes per millimeter, the imaging lens does support fringe
frequencies well beyond the limits of the detector. In fact, the cutoff frequency of the
imaging lens is 4.2X higher than the highest resolvable fringe frequency, ensuring that
even at the detector limit the fringes will still have high modulation.
If desired for a particular application, the imaging leg of the interferometer can be
modified to achieve a range of magnification values. Given the current embodiment of
the system, the imaging lens can be moved roughly two inches in each direction to either
increase or decrease the corresponding object distance, and therefore magnification. This
movement results in a range of magnifications from 0.39 to 0.74, with corresponding
working f/#s ranging from 6.2 to 7.7, respectively. Therefore, should there be a need to
magnify certain features of the lens, it is possible though there is a slight loss in imaging
lens resolution due to the slower f/#.
An aplanatic lens is free of both coma and spherical aberration for infinite conjugates,
and contributes little image quality degradation in the fluid layer interferometer.
However, since the null optics are designed only for on-axis performance with a focus at
the fluid layer center of curvature, its behavior in the imaging path does have a negative
effect on the imaging aberration content in the system. As derived in Equation 3.17, the
maximum allowable transverse ray aberration in the imaging system depends on the
minimum fringe spacing along with the allowable measurement error. Given the
requirements of the fluid layer interferometer, the maximum allowable measurement
error when measuring a nominal spherical surface is λ/4. At the detector limit, the
minimum fringe spacing is the inverse of the maximum fringe frequency:
$$\Delta x = \frac{1}{\xi_{fringe}} = \frac{1}{56\ \mathrm{mm}^{-1}} = 0.018\ \mathrm{mm}. \qquad (4.13)$$
Therefore, for the proposed accuracy, coupled with the minimum fringe spacing, the
maximum allowable transverse ray aberration is:
$$TA_{max} = \frac{\sigma_{min}}{n} = \frac{0.018\ \mathrm{mm}}{4} = 4.5\ \mu\mathrm{m}. \qquad (4.14)$$
This is the required image quality for performance at the detector limit in terms of fringe
frequency. Since the interferometer is used in a null configuration, the fringes are
typically spaced much further apart. Although fringe frequency will increase in areas of
fluid layer breakup, these areas of higher slope make up a small fraction of the overall
detector area. Figure 4.9 shows the required imaging system quality as a function of
wavefront slope. As the wavefront slope at the detector decreases, the allowable
aberration in the imaging system becomes more relaxed. In general, the image quality
requirement for an interferometer in a null configuration is relaxed, and it becomes more
stringent in those applications where null optics are not used or where large wavefront
errors are present.
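The numbers in Equations 4.13 and 4.14, and the margin over the detector limit from Equation 4.12, can be restated compactly as below.

```python
# Sketch: FLI imaging-quality limit at the detector's fringe-frequency limit
# (Eqs. 4.12 - 4.14).

detector_limit_cyc_per_mm = 56.0     # detector Nyquist limit (Section 4.2.1)
lens_cutoff_cyc_per_mm = 236.0       # imaging lens cutoff from Eq. 4.12
accuracy_fraction = 4                # lambda/4 maximum measurement error

min_fringe_spacing_mm = 1.0 / detector_limit_cyc_per_mm
max_transverse_aberration_um = 1000 * min_fringe_spacing_mm / accuracy_fraction

print(round(lens_cutoff_cyc_per_mm / detector_limit_cyc_per_mm, 1))   # ~4.2x margin
print(round(min_fringe_spacing_mm, 3))            # 0.018 mm
print(round(max_transverse_aberration_um, 1))     # ~4.5 microns
```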
Distortion introduced by the imaging system is typically of greatest concern. With
distortion, the magnification across the field varies, and a point in the image plane does
not linearly correspond to the same point in the object plane. Therefore, if it is important
to know the location of a feature in an interferogram, the imaging system must either
minimize distortion or be calibrated to account for distortion effects. Other aberrations
contribute blur to the imaging system, reducing the fringe contrast. A reduction in fringe
contrast can reduce measurement accuracy, but significant blur is needed to seriously
affect the system performance.
Figure 4.9. Allowable transverse aberrations as a function of wavefront slope
The transverse ray error was evaluated for the entire imaging system shown in Figure 4.8. Figure 4.10
shows the transverse ray aberration across the field of the imaging system. The
maximum transverse ray error occurs at the edge of the field, and has a peak to valley
magnitude of about 15 microns. The ray error at the 0.7 field is about five microns, and
at the center of the field is about one micron.
Figure 4.10. Transverse ray aberration for the imaging system for field points at a radius of 0mm,
2mm and 3mm on the fluid layer
While the image quality at the edge of the field is outside of the 4.5 micron limit derived
in Equation 4.14, the quality is near or within that limit over most of the field.
Furthermore, the aberration limit is derived for the maximum wavefront slope of 250
waves per radius, which represents the limit of the interferometer's dynamic range.
These aberrations in the imaging system have an effect of slightly attenuating the
instrument transfer function of the interferometer at high spatial frequencies for features
towards the outside of the pupil. However, since the wavefront slopes are typically much
lower than the Nyquist limit, aberrations in the imaging system have little effect for most
measurements.
4.2.6 Reference Surface
The reference optic is an uncoated flat surface with an RMS flatness better than λ/20.
The surface is the front surface of a 10 mm thick fused silica window where the rear
surface is anti-reflection coated to minimize reflection at λ=632.8 nm (CVI Laser, p/n
W1-02FPS007-633-0). An uncoated reference was chosen in order to more closely
match the reflectance in the test and reference arms. Fused silica has a refractive index of
1.457 at the HeNe wavelength, so its reflectance at normal incidence is
$$R_{fused\ silica} = \left(\frac{1.457 - 1}{1.457 + 1}\right)^2 = 3.46\%. \qquad (4.15)$$
The fluid layer, in contrast, has a refractive index near 1.37 and a reflectance of
$$R_{fluid} = \left(\frac{1.37 - 1}{1.37 + 1}\right)^2 = 2.44\%. \qquad (4.16)$$
Although the interferometer makes use of the half-wave plate prior to the beamsplitter to
balance intensity in each of the arms, more closely matching the reflectance was desired
to ensure high visibility fringes should use of the HWP be problematic.
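The two reflectance values follow from the normal-incidence Fresnel equation; the sketch below reproduces Equations 4.15 and 4.16.

```python
# Sketch: normal-incidence Fresnel reflectance used in Eqs. 4.15 and 4.16.

def reflectance_percent(n):
    return 100 * ((n - 1) / (n + 1)) ** 2

print(round(reflectance_percent(1.457), 2))   # uncoated fused silica reference: ~3.46 %
print(round(reflectance_percent(1.37), 2))    # fluid layer: ~2.44 %
```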
4.3 Contact Lens Mount
A number of lens holding methods were empirically studied in the course of designing
the FLI system. The initial approach was to simply rest the lens on a glass slide and allow
the contact lens to support itself. This method posed many problems in terms of the
stability of the lens. When fluid was applied with a syringe, lens deformations were
observed. Additionally, without something holding the lens in place, it tended to slide
laterally on the underlying surface. Between the lens deformation and the lens movement,
there was no way of determining if the measured dynamic wavefront was a result of fluid
layer behavior or lens movement.
Initial attempts at mechanical mounting involved resting the contact lens on a variety of
black plastic and black rubber balls having a radius of curvature similar to that of the
contact lens. In order to
hold the lens in place, a ring was machined that could be placed over the lens to keep it
stationary. A photograph of the rubber ball and supporting ring is shown in Figure 4.11
below. While this method showed some improvement over the unsupported technique, it
was still problematic. The materials used tended to be hydrophobic to the point that
saline solution would bead on the surface. Since water is a significant ingredient in the
contact lens material, the hydrophobic mounting surface tended to cause the lens to fold,
bubble or stick to the surface. When measurements were possible, dry spots were
immediately apparent in the measurement, and often the lens could not be rewetted.
In terms of the underlying geometry, since the ball bearing only matched the lens base
curvature within a couple of millimeters, the mismatch between contact lens and
mounting curvatures resulted in an aspheric anterior surface profile which introduced
high frequency fringes to the measurement.
Figure 4.11. Initial holder with rubber ball and machined ring
Based on these preliminary findings, it was determined that a custom contact lens holder
was necessary in order to successfully measure the fluid layer on a contact lens in vitro.
To separate movement in the contact lens from dynamics in the fluid layer, a rigid
method of holding the lens is necessary. The mount must have a curvature closely
matching the base curvature of the contact lens while providing a method of minimizing
its movement and holding it in place. Since the reflectance of the fluid layer is less than
3%, the mounting substrate should be either diffuse or optically black to minimize stray
reflections. Finally, the material used for the mechanical substrate must be somewhat
hydrophilic in order to attract the lens to the surface and not drive fluid out of the
material. Based on these requirements, using a structure similar to the plastic molds used
for manufacturing lenses seemed like a promising choice.
These plastic molds have geometries matching the desired contact lens, with a different
mold for each of the anterior and posterior surfaces. During manufacturing, two molds are
placed together with a separation matching the required lens thickness. The void between
the molds is then injected with the liquid contact lens material which is then exposed to
UV light to cure the lens material. Additional processes are used to properly hydrate the
lenses and prepare them for packaging and use.
Since these plastic molds match the base
curvature of the contact lenses, and since they are made of an easily wettable material,
they were used for the next iteration of the contact lens holder. Unfortunately, the
contact lens molds have semi-polished smooth surfaces allowing specular reflections
from their surfaces. When a contact lens is mounted on the plastic mold, the reflection
from the lens-to-mold interface is not as strong as the air-to-fluid interface, but is still
visible and results in extra fringe patterns in the interferogram. Efforts towards blacking
out the surfaces with either marker or paint were unsuccessful since even permanent
marker seemed to wash away on the surface and paint tended to have undesired
hydrophobic properties. Therefore, the specular reflection was reduced by means of
creating a diffuse surface on the contact lens mold.
Molds made from Zeonor plastic having a nominal radius of curvature of 8.15 mm were
measured with a Fisba interferometer and a Talysurf contact profilometer prior to any
diffusing surface treatment in order to determine the baseline surface figure and
roughness. Four plastic molds were then treated with the following sandblasting process
to create a diffuse surface:
1. Set up the sandblast system to use 90 micron grit sand at 60 PSI of pressure,
2. Hold the tip of the sandblasting gun 2” – 2.5” from the mold,
3. Sandblast two parts for approximately 10 seconds using a circular motion,
4. Sandblast the remaining two parts for approximately 15 seconds using a circular
motion,
5. Clean the parts with compressed air, touch Scotch tape to the surface to remove
residual grit, and finally wipe with methanol on a soft cloth.
Following the surface treatment the parts were again measured with the Fisba
interferometer and Talysurf. The interferometer was unable to measure the diffuse
surface, indicating that it is rough enough to be used in the FLI without contributing
additional unwanted fringes. The Talysurf data shows that the surfaces for the most part
retained their radius of curvature and surface figure, while the roughness increased. The
results of all four sandblasted molds were similar, with an average radius of curvature
change of 0.002 mm. The parts were then shipped to the University of Arizona for
further analysis with the FLI system.
In order to further evaluate the sandblasted molds for use as the contact lens holder, they
were measured in the FLI system. The holder was first placed in the test arm of the
interferometer and aligned accordingly. Fringes were observed when the interferometer
was focused on the cat’s eye position, but no fringes were seen at the confocal test
configuration, indicating the holders are in fact diffuse enough to eliminate unwanted
reflections. Distilled water was then streamed onto the holders with a syringe in order
to measure the fluid layer on the lens holder (with no contact lens in place). The purpose
of the wet-measurement was to evaluate the general surface figure of the holders and how
it may affect future measurements. A series of measurements were captured after fluid
application, and the measurement with the smallest error and no visible dry spots was
saved. The metrology results showing the pre- and post- sandblasting radius of curvature
along with the post-sandblasting wet-measurement surface irregularity are shown in
Table 4.2. The results of the best-captured fluid layer-on-holder measurements are
shown in Figure 4.13.
Since the water layer on the holder is very dynamic and dries quickly, some of the
measured surface topography surely corresponds to behavior of the fluid rather than the
underlying surface shape. This is especially apparent in the measurement on mold #4,
where there is a peak near the upper left of the measurement. This peak is the result of a
wave or ripple in the fluid, and not underlying structure. However, due to quick drying
time, these are the best available results. A photograph of the sandblasted lens mold is
shown in Figure 4.12.
Table 4.2. Metrology results for the sandblasted contact lens molds

          Radius of Curvature (Talysurf)                Fluid Layer Measurement
          Before Sandblasting    After Sandblasting     RMS Surface (waves)
Mold #1   8.1582 mm              8.1578 mm              0.1241
Mold #2   8.1481 mm              8.1524 mm              0.1505
Mold #3   8.1531 mm              8.1533 mm              0.1584
Mold #4   8.1527 mm              8.1555 mm              0.1768
Figure 4.12. Sandblasted contact lens mold
Figure 4.13. Surface contours of a fluid layer on the four potential contact lens holders. Mold #1 is
on the top-left, #2 top-right, #3 lower left, and #4 lower right.
Overall, the performance of these sandblasted lens molds is suitable for use in the FLI
system. Similar measurements made on the rubber and plastic balls showed wavefront
errors on the order of 6λ RMS and 20λ PV. It is estimated that the improved lens holders
have errors on the order of λ/10 RMS when the effects of the dynamic fluid are removed
from the previously discussed results. Since the purpose of the measurements is to
capture relative measurements of fluid behavior and an absolute measurement is not
necessary, the small errors introduced by the underlying surface are of little or no
consequence.
The next step in qualification of the lens holder was to position a lens on the mold and
capture measurements with the FLI. Since there is no structure on the mold to keep the
lens in place, positioning the contact lens so that it is centered on the mold was quite
difficult. It was possible to observe an interferogram of a fluid layer applied to the
contact lens, and no stray reflections from other interfaces were observed, indicating the
sandblasting routine had in fact eliminated specular reflections. Throughout the testing
however, it did become apparent that the Zeonor material is slightly hydrophobic. Water
tended to bead on the surface and the lens would routinely stick to the surface, causing
the same problems as existed for the rubber ball. Therefore, it was desired to treat the
surface with a hydrophilic coating in order to allow an interaction between the contact
lens and holder that is more similar to the lens on the eye.
It turns out that anti-fog spray for goggles and masks is composed of hydrophilic
chemicals. When sprayed on the internal surface of goggles or masks, the coating causes
water condensing on the surface to form a small contact angle, effectively forming a flat,
smooth film of water on the surface, rather than an irregular collection of water droplets
that would block the wearer’s vision. The anti-fog spray is available at most sporting
goods stores, and a bottle of Speedo brand Anti-Fog spray was purchased for testing for
this application. Per the instructions, the lens holder was wetted with water, and then
sprayed with the anti-fog solution. After waiting for roughly 10 seconds, the surface was
rinsed with water and allowed to dry. After that process, the water was applied to the
holder, and this time it did evenly spread across the surface, without any apparent
beading. When a contact lens was applied, the lens conformed cleanly to the surface
without sticking, folding or bubbling. Based on these results, the lens holders were
routinely re-treated with the anti-fog spray for all further testing.
Based on the difficulty of positioning the contact lens in the proper centered location on
the mold, further engineering of the mounting structure was necessary. In the second-generation lens holder, a cylinder of Zeonor 1060R was diamond turned to have the same
form as the lens molds, matching the curvature of a contact lens with a base curvature of
8.3 mm. In addition to having the base curvature, a lip was machined around the nominal
diameter of the contact lens (14 mm) so that once the lens is positioned on the holder it
cannot slide around. The lip had a depth of 250 microns, deeper than the nominal contact
lens thickness for a low power lens which is typically 175 microns (thickness increases
with power). Figure 4.14 shows the mechanical drawing for the lens holder, which again
was manufactured by Apollo Optical Systems. Once manufactured, the holder was
sandblasted in the same manner as was done for the previous work on the lens molds.
Figure 4.14. Drawing of the final-generation lens holder. All units are in millimeters
The lens holders all perform in a similar manner, again with the sandblast process only
altering the radius of curvature by 0.007 mm. Again, no specular reflections are seen
during measurements. As before, the surfaces are treated with anti-fog spray periodically
to create a more lens-friendly hydrophilic surface. A photograph of this iteration of the
contact lens mount is shown in Figure 4.15 below.
Figure 4.15. Final iteration of the contact lens holder
4.4 System Calibration, Accuracy and Precision
Accuracy of an interferometer is the system’s ability to reproduce the actual value of the
surface under test, known as the quaesitum (Selberg, 1991). Accuracy includes all error
sources in the system, including factors such as non-common path wavefront, retrace or
phase shifting errors. The precision of an interferometer is the repeatability and
represents the measurement variance. Precision is the ability of the instrument to produce
the same measurement value in independent measurements, given identical conditions.
Precision accounts for random or time-varying errors in the interferometer, such as source
frequency drift, vibration, or imperfect temperature stabilization.
An interferometer may be characterized and calibrated for improved accuracy, and the
precision calculated by measuring a standardized calibration part multiple times. In order
to measure accuracy, repeatability and precision of the interferometer, the calibration part
should be removed and replaced back into the system, and the alignment and
measurement process repeated. The uncertainty in the calibration is reduced as the
number of individual calibration measurements increases. The surface measured during
calibration must be known with certainty better than the desired accuracy of the system.
After the individual measurements are recorded, the average of the measurements is
calculated. Assuming that the calibration surface has a height of zero, accuracy is
represented by the average surface height. Peak to valley and RMS accuracy are
calculated by the peak to valley and RMS surface heights of the average surface,
respectively. Repeatability is a measure of an instrument’s precision, and is represented
by the standard deviation over all of the individual raw calibration measurements. As
with accuracy, the PV and RMS standard deviations are both calculated in order to
determine the PV and RMS repeatability.
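A minimal sketch of this bookkeeping is shown below, assuming the individual calibration measurements are available as 2-D height maps in waves (with invalid pixels stored as NaN); it is illustrative only and is not the 4Sight processing used for the actual calibration:

```python
import numpy as np

def rms_height(surface):
    """RMS surface height (piston assumed already removed), ignoring NaN pixels."""
    return np.sqrt(np.nanmean(surface ** 2))

def pv_height(surface):
    """Peak-to-valley surface height, ignoring NaN pixels."""
    return np.nanmax(surface) - np.nanmin(surface)

def calibration_summary(measurements):
    """Accuracy from the average surface; repeatability from the spread of the
    per-measurement PV and RMS values."""
    stack = np.array(measurements)
    average = np.nanmean(stack, axis=0)
    accuracy = {"PV": pv_height(average), "RMS": rms_height(average)}
    repeatability = {"PV": np.std([pv_height(m) for m in stack], ddof=1),
                     "RMS": np.std([rms_height(m) for m in stack], ddof=1)}
    return accuracy, repeatability
```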
Once built, the FLI was calibrated by measuring a Grade 3 ball bearing having an 11 mm
radius of curvature. While this does not match the curvature of the fluid layer and
underlying contact lens, reflections from the ball bearing do fill the aperture of the
interferometer and allow characterization of the intrinsic aberrations over the full pupil.
The interferometer was arranged to have the normal operating magnification that is used
for contact lens testing, and the imaging lens and camera focused accordingly. When the
ball bearing is located in the test arm and properly positioned so that the focus is at its
center of curvature, the reflected wavefront fills the pupil of the interferometer, allowing
for calibration over its full extent.
Once inserted into the system, the location of the ball bearing was adjusted using
translation stages to minimize the number of fringes observed in the live interferogram.
Once properly aligned, a measurement set was captured consisting of eight averaged
single measurements. The measurement was then saved for future use, and the ball
bearing was removed, rotated to a random configuration, placed back into the system,
and the measurement was repeated. The entire process of removing, replacing and
re-measuring the ball bearing was repeated 10 times, where each of the 10 individual
measurements themselves consisted of eight averaged sub-measurements. The 10
calibration measurements were then averaged together and the average RMS and
Q-Peak-to-Valley (PVq) wavefront was calculated. Peak to valley error measures the error
between the highest and lowest values in the measurement, and is susceptible to noise
spikes. Instead, the PVq is calculated by only considering the central Q-percent of the
measurement values, so that results corresponding to noise spikes are thrown out. For
all work in this dissertation, a Q value of 99.5% was used, so the PV is measured only
over the central 99.5% of the measurement values. Additionally, the standard deviation of
both terms was calculated. The resulting PVq and RMS surface heights for each raw
measurement are shown in Table 4.3.
Table 4.3. Calibration measurement results. All quantities in waves.

Measurement          Raw PVq   Raw RMS
1                    0.1632    0.0344
2                    0.1625    0.0337
3                    0.1504    0.0305
4                    0.1589    0.0331
5                    0.1357    0.0274
6                    0.1744    0.0323
7                    0.1218    0.0219
8                    0.1448    0.0278
9                    0.1503    0.0303
10                   0.1538    0.0301
Average              0.1516    0.0302
Standard Deviation   0.0150    0.0037
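The PVq statistic reported above can be computed by clipping the extreme tails of the height distribution before taking the peak-to-valley; a minimal sketch, assuming a NumPy height map with NaNs marking invalid pixels:

```python
import numpy as np

def pvq(surface, q=0.995):
    """Q-peak-to-valley: span of the central q fraction (default 99.5%) of the
    height values, which discards isolated noise spikes."""
    tail = 100.0 * (1.0 - q) / 2.0
    low, high = np.nanpercentile(surface, [tail, 100.0 - tail])
    return high - low
```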
After the 10 raw measurements were recorded, they were averaged together in order to
calculate the interferometer’s accuracy. The resulting average surface is displayed in
Figure 4.16. The PVq surface height of the average measurement is 0.1378 waves, and
the RMS height is 0.0280 waves. Based on these values along with the data in Table 4.3,
the accuracy and repeatability capabilities for the FLI are shown in Table 4.4.
Figure 4.16. Average of the raw calibration measurements, representing the uncalibrated accuracy
of the FLI. The PVq surface height is 0.1378 waves, and the RMS height is 0.0280 waves.
The calibration measurement is dominated mostly by astigmatism, which is likely the
result of the diverger being slightly misaligned to the interferometer. Additionally, a
pattern of nearly vertical linear ridges is seen in the average measurement pictured in
Figure 4.16. These ridges are the result of background low-contrast fringes resulting
from interference between the front and rear surfaces of the reference optic. While these
fringes do not phase shift since they are from the same arm of the interferometer, they
contribute some print-through to the measurement that is visible when there is low
surface error, as is the case when measuring the calibration ball. Some circular bulls-eye
patterns are seen throughout the measurement as well. These are the result of spurious
reflections from dust particles or defects in the interferometer optics. While the
astigmatism, ridges and bulls-eyes are visible in the calibration measurement, their error
is still well within the desired accuracy of the interferometer since the RMS error is
0.028λ and the PVq is 0.138λ.
Table 4.4. Accuracy and repeatability of the Fluid Layer Interferometer

PVq Uncalibrated Accuracy   0.13783 λ
RMS Uncalibrated Accuracy   0.028 λ
PV Repeatability            0.014987 λ
RMS Repeatability           0.003731 λ
Given the quarter-wave requirement on the accuracy of the interferometer for testing
spherical surfaces, even the PV accuracy is within the limits. It should be noted that the
values discussed thus far represent the uncalibrated accuracy of the interferometer. That
is, they do not account for subtraction of a calibration measurement from subsequent
measurement results, which can remove some of the systematic measurement error.
For example, it is possible to use the average of the calibration
measurements shown in Figure 4.16 as the reference measurement for the FLI. The
calibrated accuracy of the system is calculated by subtracting the average surface from
each of the 10 individual raw measurements. This process is displayed in Figure 4.17,
and the results of this subtraction are listed in Table 4.5, showing the PVq and RMS
surface heights of each of the reference-subtracted results.
Figure 4.17. Reference subtraction process displayed for calibration measurement number 1
Table 4.5. Difference between the raw calibration measurements and their average

Measurement          Difference PVq   Difference RMS
1                    0.0519           0.0099
2                    0.0761           0.0115
3                    0.0522           0.0099
4                    0.0523           0.0101
5                    0.0518           0.0093
6                    0.0768           0.0137
7                    0.0978           0.0157
8                    0.0729           0.0153
9                    0.0565           0.0101
10                   0.0677           0.0088
Average              0.0656           0.0114
Standard Deviation   0.0154           0.0025
As was done previously, these surfaces are averaged in order to determine the calibrated
measurement accuracy. The average surface representing the calibrated interferometer
accuracy is shown in Figure 4.18. The PVq surface height is 0.0138 waves, and the RMS
surface height is 0.0028 waves.
Figure 4.18. Average of the reference-subtracted calibration measurements, representing the
calibrated accuracy of the FLI. The PVq surface height is 0.0138 waves, and the RMS height is
0.0028 waves.
The calibrated accuracy and repeatability of the FLI are shown in Table 4.6 below. The
accuracy, in both PVq and RMS, has improved by a factor of 10 with the calibration. The
repeatability has remained the same. This is expected since repeatability is affected by
random or cyclical instrument errors, which cannot be calibrated out. While the
measurement uncertainty resulting from repeatability cannot be removed with calibration,
averaging over many measurements will mitigate its effects.
Table 4.6. Calibrated accuracy and repeatability of the FLI

PVq Calibrated Accuracy   0.0138 λ
RMS Calibrated Accuracy   0.0028 λ
PVq Repeatability         0.0154 λ
RMS Repeatability         0.0025 λ
5 FLUID LAYER INTERFEROMETER ANALYSIS METHODS AND RESULTS
The Fluid Layer Interferometer (FLI) provides a method of quantitatively characterizing
the fluid behavior on a contact lens in a way never before achieved. Consequently,
significant effort also went into determining how to analyze the data once captured.
Since the FLI provides a new method of analyzing fluid behavior on a contact lens, there
was uncertainty as to what the data would look like and how it could be used to describe
the performance of a contact lens. Moreover, it was unknown exactly what metrics
should be used to objectively describe the lens/fluid interaction, and how those metrics
correspond to the “quality” of a contact lens, however that is defined. The goal of the
data analysis is to provide a description of the fluid layer behavior as it changes over time
in a consistent manner so that results from many measurement sets on identical materials
show the same result. The data analysis should provide a method of distinguishing
between different materials based on the fluid layer behavior. This analyzed data can be
used by materials scientists in the development of new lens materials.
At this time, it is unknown if certain metrics correspond to a “good” or “bad” contact
lens. Optical characteristics such as RMS surface error or Zernike coefficients do not
necessarily have the same influence as they do in traditional surface metrology. For
example, in traditional optical manufacturing, a surface having the smallest RMS surface
error is usually desired. However, in describing fluid layer behavior, there is more
interest in how much the RMS surface error changes over time and at what rate, as
opposed to its absolute value. Where lower order aberrations such as spherical
aberration, coma or astigmatism are usually of interest in traditional optical analysis, fluid
layer behavior tends to exhibit characteristics that cannot be described with a simple loworder polynomial description. Rather, it may be of interest to subtract a low-order fit
from the surface, highlighting the residual higher order fluid behavior. Typical optical
components have an associated scratch-dig rating, which is mostly of cosmetic interest.
In the fluid layer analysis however, dry spots analogous to digs are observed during the
measurements and likely are quite important in regards to contact lens comfort. An RMS
surface or polynomial fit applied to a fluid layer measurement will not identify these dry
spots, since they normally consume a small portion of the measured area. Instead, a form
of texture analysis can be applied to the measurement to quantify both the frequency and
extent of these drying pits.
The remainder of this chapter will present example results from fluid layer
measurements, discuss analysis routines used to highlight different aspects of the fluid
layer behavior, and provide measurement results of many samples portraying how these
analysis methods can be used in characterizing material behavior. To provide context for
the following discussions, Figure 5.1 shows the interferograms and corresponding surface
topography for an example fluid measurement over time. In the topographies, piston, tilt
and power have been removed from the measurement. Piston and tilt are removed from
the interferometric measurements since they do not correspond to any actual error in the
surface. Power represents a change in surface curvature and could be the result of either
fluid layer behavior or longitudinal displacement of the entire contact lens. Since the
longitudinal location of the contact lens is not held constant for all measurements, power
is removed. Any interferogram or surface contours displayed were created in 4Sight™
software from 4D Technology Corporation.
Figure 5.1. Example interferograms and corresponding surface during a fluid layer measurement.
Piston, tilt and power are removed from the surface contour.
5.1 Fluid Layer Measurement Process
In order to take a measurement of a fluid layer on a contact lens, the lens is first manually
mounted and properly positioned on the lens holder. Clean soft-tipped tweezers are used
to handle the lens in order to ensure no oil or residue from handling is transferred to the
contact lens. The lens holder is then positioned in the test arm of the interferometer, and
the live video output of the interferometer is viewed. The position of the contact lens is
adjusted via the translation stages in order to best null the live interferogram. In order to
maintain hydration of the contact lens, packing solution is dripped onto the lens with a
syringe roughly every 10 seconds during the mounting and alignment process. Once
nulled, the interferometer is set to begin capturing measurements at a rate of three
measurements per second, for a total of 200 measurements (67 seconds). As soon as
possible after beginning the measurement acquisition, five deciliters of artificial tear fluid
are applied to the contact lens with a syringe. This tear fluid is a proprietary solution
provided by the project sponsor, and is intended to mimic the behavior of human tears.
In a typical measurement sequence of 67 seconds, application of the fluid layer occurs
within the first few seconds after the acquisition begins. The application of the fluid
layer results in unresolvable interferograms due to the syringe being within the field of
view along with the very turbulent nature of the fluid at this time. The fluid layer begins
to smooth and the resulting interferogram usually becomes resolvable within three
seconds of applying the fluid. Figure 5.2 shows the resulting topography measurement
immediately before, during, and immediately after the fluid application.
Figure 5.2. Surface measurement before, during and after the fluid application
Given the variability in the fluid application time and occurrence of resolvable
interferograms, one of the difficulties in comparing independent measurement cycles to
one another is in defining which measurement frame to set as the first measurement of
the cycle. In order to compare all measurement sequences from a similar point in time,
the time-equals-zero measurement frame is defined as the frame within the measurement
sequence having the lowest RMS surface height with only tilt and power subtracted from
the surface. Figure 5.3 shows a plot of the RMS surface height of 200 frames with
notations showing areas before, during, and after fluid application along with the frame
identified as t = 0.
Figure 5.3. Example results of an RMS surface height over the full 200 frames of a measurement
cycle highlighting regions before fluid application (yellow), during fluid application (red) and after
fluid application (green).
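A minimal sketch of this frame-selection rule, assuming each frame is a 2-D surface map that has already had tilt and power subtracted (invalid pixels stored as NaN):

```python
import numpy as np

def rms_height(frame):
    """RMS surface height of one frame, with piston removed and NaNs ignored."""
    residual = frame - np.nanmean(frame)
    return np.sqrt(np.nanmean(residual ** 2))

def find_t0_index(frames):
    """Index of the frame with the lowest RMS surface height, used as t = 0."""
    return int(np.argmin([rms_height(f) for f in frames]))
```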
The process to identify the t = 0 point is repeated for every measurement cycle so that
different measurements can be similarly compared using identical time comparisons. In
some cases dry spots or bubbles are observed in the t = 0 measurement, as shown in
Figure 5.4. These artifacts are likely the result of one or a combination of: improper lens
mounting with bubbles beneath the lens allowing the contact lens to partially dry during
the mounting and alignment procedure, or improper application of the tear solution. It is
assumed that a fresh fluid layer is smooth and should not have any of these artifacts, so
any measurements where artifacts are present at t = 0 were thrown out and not used in the
data analysis process.
Figure 5.4. t = 0 measurement with near central artifact present. Peak to valley height is 12.6 waves
(~8 microns). This measurement cycle was thrown out for analysis due to the dry spot in the initial
condition.
5.2 Unaltered Measurements
The first method of analyzing data from the FLI is by examining the unedited surface
measurements. Once the phase is unwrapped it is converted to a contour map showing
the topography of the anterior fluid layer surface. As is typical in optical metrology,
piston and tilt are both subtracted from the measurement. Figure 5.5 shows the raw
measurement of a fluid layer over the first 30 seconds of the measurement. It is apparent
that the difference between the first and last measurement in the sequence is dominated
by a change in curvature (power), which is explained by the fluid layer thinning out after
application of the fluid. By the time 30 seconds have passed, the bulk form of the fluid
layer has flattened out, indicating its curvature now matches the underlying contact lens’s
shape. At this point, canyons and pits are easily visible by eye, indicating significant
fluid layer breakup.
Figure 5.5. Evolution of a raw measurement through 30 seconds
Some macro-scale behavior of the fluid layer can be described by examining aberration
terms fit to the surface. A set of polynomials, such as Zernike terms, can
be fit to the surface topography and its components examined as the fluid layer
topography changes over time. For example, examining a second order power term as it
changes over time provides a measure of the time it takes for the fluid layer thickness to
stabilize and conform to the underlying curvature. Figure 5.6 shows an example of the
power component of a fluid layer measurement over time. The t = 0 point is defined as
discussed above, and occurs roughly one second after application of the artificial tear
fluid. The power changes rapidly over the first 10 seconds of the measurement after t =
0, indicating that the thickness of the fluid is changing quickly. Beyond 15 seconds, the
slope of the plot flattens out, indicating that the fluid layer has evolved into a somewhat
stable form, at least in terms of its bulk behavior. Since the power only indicates the low
spatial frequency behavior of the fluid, examining power in this manner does not fully
quantify fluid layer breakup.
[Plot: power term (waves) in a fluid layer topography versus time (s)]
Figure 5.6. Power present in an example fluid layer measurement. T = 0 occurs at roughly 4 seconds.
Any aberration or polynomial term can be fit to the surface and examined in the same
manner to evaluate their behavior over time. For that purpose, a software script written
in the Python language has been developed that runs in the 4Sight interface. The script
exports any of the desired Zernike coefficients into a format compatible with Excel or a
similar package for further analysis. The Python programming language was used since
4Sight is developed in Python, and provides a custom scripting interface built in to the
software package.
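The 4Sight scripting calls themselves are not reproduced here; the sketch below shows the general shape of such an export script, with a hypothetical get_zernike_coefficients() standing in for whatever the 4Sight interface provides:

```python
import csv

def get_zernike_coefficients(frame_index):
    """Hypothetical placeholder for the 4Sight scripting call that returns the
    fitted Zernike coefficients (in waves) for one measurement frame."""
    raise NotImplementedError("replace with the 4Sight scripting interface")

def export_zernike_terms(frame_indices, term_numbers, path):
    """Write the selected Zernike terms for each frame to a CSV file that Excel can open."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame"] + ["Z%d" % n for n in term_numbers])
        for i in frame_indices:
            coefficients = get_zernike_coefficients(i)
            writer.writerow([i] + [coefficients[n] for n in term_numbers])
```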
Low-order aberrations, such as power, depend on both the behavior of the fluid layer and
on the alignment of the contact lens with respect to the interferometer. For example, if
the longitudinal spacing between the diverger and the contact lens is not properly
adjusted, the resulting topography is dominated by second order power. Since the lens
stays in this position throughout the entire measurement, examining how the power varies
over time does provide insight into the bulk behavior of the fluid layer. However, since it
is not certain that the longitudinal positions are identical in separate measurement cycles,
the magnitude of the power terms in different measurements should not be compared to
one another. Figure 5.7 shows the power exported from the results of six measurements
on one type of silicone hydrogel contact lens. While the forms of the curves are similar,
the magnitudes vary which is likely the effect of differences in the lens’s longitudinal
position in the interferometer.
[Plot: power (waves) versus time (s) for six measurements]
Figure 5.7. Power present in 6 different measurements on a silicone hydrogel contact lens
5.3 Subtracting the Fourth Order Aberrations
The fluid layer topography structure of interest, such as holes or canyons indicating tear
film breakup, is not described by a low-order polynomial surface fit but rather behaves in
a higher frequency fashion. Additionally, misalignment of the contact lens with respect
to the interferometer contributes significant low-order aberrations to the wavefront that
do not correspond to actual fluid layer topography. In order to highlight the high
frequency topography of interest along with removing the low-order effects of lens
misalignment, Zernike polynomials through the fourth order are fit to the surface and
then are subtracted from the original measurement, leaving the high-order topography
fluctuations resulting from dynamic fluid layer behavior (Figure 5.8). Similar methods
are used in corneal topography where low-order Zernike terms are removed from the
topography measurement to highlight the detailed features in the surface shape
(Schwiegerling, 1995).
Figure 5.8. Process of removing a 4th order Zernike polynomial fit
Zernike polynomials were selected for use in the surface fit for three reasons. First,
optical design and analysis software, including 4Sight and Zemax which were routinely
used in this dissertation, include built in methods of fitting and manipulating Zernike
polynomials on a surface or wavefront. Second, Zernike polynomials are orthogonal and
each of the terms are independent. Therefore, if a higher order polynomial is fit to the
surface, the magnitude of the lower fourth order terms will remain unchanged. While a
need to provide a higher order fit is not used at this point, using Zernike polynomials is a
convenient choice. Finally, Zernike polynomials are commonly used in the ophthalmic
community, and since the customer is part of that group, the data and analysis methods
will be more familiar than if other fitting methods were used. For reference, the Zernike
terms fit to the surface and subsequently removed from the original measurement are
shown in Table 5.1. For a given set of measurements, an independent polynomial is fit
and subtracted from each individual measurement. That is, the term subtracted is
dynamic and evolves as the fluid layer evolves.
Table 5.1. Description of the Zernike polynomial terms removed from the raw measurements

Term   Form                          Name
Z0     1                             Piston
Z1     ρ cos θ                       Tilt X
Z2     ρ sin θ                       Tilt Y
Z3     2ρ² − 1                       Power
Z4     ρ² cos 2θ                     Astigmatism X
Z5     ρ² sin 2θ                     Astigmatism Y
Z6     (3ρ² − 2) ρ cos θ             Coma X
Z7     (3ρ² − 2) ρ sin θ             Coma Y
Z8     6ρ⁴ − 6ρ² + 1                 Spherical
Z9     ρ³ cos 3θ                     Trefoil X
Z10    ρ³ sin 3θ                     Trefoil Y
Z11    (4ρ² − 3) ρ² cos 2θ           Secondary Astigmatism X
Z12    (4ρ² − 3) ρ² sin 2θ           Secondary Astigmatism Y
Z13    (10ρ⁴ − 12ρ² + 3) ρ cos θ     Secondary Coma X
Z14    (10ρ⁴ − 12ρ² + 3) ρ sin θ     Secondary Coma Y
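4Sight and Zemax perform this fit internally; the sketch below illustrates the fit-and-subtract step with NumPy using the polynomial forms of Table 5.1, assuming a square surface map sampled on the unit circle with invalid pixels stored as NaN:

```python
import numpy as np

# Zernike terms of Table 5.1 as functions of (rho, theta), Z0 through Z14.
ZERNIKE_TERMS = [
    lambda r, t: np.ones_like(r),                              # Z0  piston
    lambda r, t: r * np.cos(t),                                # Z1  tilt x
    lambda r, t: r * np.sin(t),                                # Z2  tilt y
    lambda r, t: 2 * r**2 - 1,                                 # Z3  power
    lambda r, t: r**2 * np.cos(2 * t),                         # Z4  astigmatism x
    lambda r, t: r**2 * np.sin(2 * t),                         # Z5  astigmatism y
    lambda r, t: (3 * r**2 - 2) * r * np.cos(t),               # Z6  coma x
    lambda r, t: (3 * r**2 - 2) * r * np.sin(t),               # Z7  coma y
    lambda r, t: 6 * r**4 - 6 * r**2 + 1,                      # Z8  spherical
    lambda r, t: r**3 * np.cos(3 * t),                         # Z9  trefoil x
    lambda r, t: r**3 * np.sin(3 * t),                         # Z10 trefoil y
    lambda r, t: (4 * r**2 - 3) * r**2 * np.cos(2 * t),        # Z11 secondary astig x
    lambda r, t: (4 * r**2 - 3) * r**2 * np.sin(2 * t),        # Z12 secondary astig y
    lambda r, t: (10 * r**4 - 12 * r**2 + 3) * r * np.cos(t),  # Z13 secondary coma x
    lambda r, t: (10 * r**4 - 12 * r**2 + 3) * r * np.sin(t),  # Z14 secondary coma y
]

def remove_fourth_order(surface):
    """Least-squares fit Z0-Z14 over the valid (non-NaN) pixels of a square
    surface map defined on the unit circle, and return the residual surface."""
    n = surface.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    valid = np.isfinite(surface) & (rho <= 1.0)
    basis = np.column_stack([z(rho[valid], theta[valid]) for z in ZERNIKE_TERMS])
    coeffs, *_ = np.linalg.lstsq(basis, surface[valid], rcond=None)
    fit = np.full_like(surface, np.nan)
    fit[valid] = basis @ coeffs
    return surface - fit
```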
5.4 Referencing to a Baseline Measurement
As an alternative to subtracting a polynomial fit from each measurement, an additional
analysis method was developed to evaluate only how the surface changes over time. In
order to examine the changes in the surface over time, all measurements in a single
measurement cycle are referenced to a baseline measurement. Specifically, it is
convenient to reference every measurement in a sequence to the results at the t = 0 frame.
As opposed to the previously discussed polynomial method, this procedure identifies a
static reference to be subtracted from an entire measurement sequence. Assuming errors
in the interferometer and contact lens mounting are constant over time, subtracting the
initial measurement removes those effects from all other measurements in the cycle. This
method provides a tool showing only how the fluid layer has changed in time, and
removes some of the process induced variability between measurement cycles. Analysis
methods similar to those previously discussed can then be applied to the referenced
measurements in order to evaluate how certain performance metrics change in time.
Figure 5.9 displays representative results of the analysis methods discussed thus far,
including the routine of referencing each frame to the t = 0 measurement.
Figure 5.9. Data manipulation methods for the FLI. Zernike terms are not subtracted from the
“Referenced to First” set.
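The referencing step itself is a simple frame-by-frame subtraction; a minimal sketch, assuming NumPy surface maps and the t = 0 index found as described in Section 5.1:

```python
def reference_to_first(frames, t0_index):
    """Subtract the t = 0 frame from every frame in a measurement cycle, so that
    only changes in the fluid layer relative to the baseline remain."""
    baseline = frames[t0_index]
    return [frame - baseline for frame in frames]
```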
5.5 Blob Analysis
While the measured fluid layer surfaces can be described with traditional optical metrics
such as RMS surface height or aberration terms, these metrics do not adequately describe
the fluid layer topography. For example, examine the two surface measurements shown
in Figure 5.10, which are part of a measurement sequence and were taken one second
apart. Each measurement has an identical RMS surface height to the fourth decimal
place, essentially within the uncertainty of the interferometer. However, the later
measurement (seen on the right of Figure 5.10) shows some small pits in the fluid layer.
that figure, the visually apparent artifacts are roughly 50 microns in diameter and 1
micron in depth. Features such as these pits in the fluid layer are an early indication of
fluid layer breakup, and it is important to know precisely when and where these artifacts
begin. The presence of these artifacts in the fluid layer indicates the layer is beginning to
degrade, which on the eye can cause discomfort or vision degradation. Therefore, a
“blob analysis” routine was developed to analyze the measurements in order to identify
and quantify holes or pits in the fluid layer.
Figure 5.10. Two measurements taken 1 second apart having identical RMS surface measurements
to the 4th decimal point. The measurement on the right is beginning to exhibit some breakup areas
(circled).
Figure 5.11 shows the four major steps in the blob analysis routine developed in
conjunction with this instrument. The first step of the blob analysis procedure is to
import the surface measurement into IDL (ITT Visual Information Solutions, Boulder,
CO). In step two an unsharp mask is applied to the measurement which effectively
amplifies the high frequency areas on the surface corresponding to the perimeter of the
blobs. The original measurement is then subtracted from the unsharp masked version,
leaving data only at those areas enhanced by the unsharp mask, and a smoothing kernel
is applied to remove noise, leaving only the areas of interest (the blobs) in the fluid layer
visible. At this point (end of step 3) the processed measurement is a binary data array
consisting of high values where a blob exists and low values everywhere else. In the
final step perimeters are fit to every individual blob and the number of blobs along with
their areas and locations are recorded.
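The actual routine was implemented in IDL; the following Python/SciPy sketch shows the same sequence of steps (high-pass enhancement, smoothing, thresholding, and labeling), with illustrative filter sizes and threshold rather than the values used in the IDL code:

```python
import numpy as np
from scipy import ndimage

def find_blobs(surface, blur_sigma=3.0, threshold=0.1):
    """Rough sketch of the blob-detection steps on a 2-D height map (waves,
    NaN for invalid pixels). Returns the blob count, areas (in pixels), and
    centroid locations; areas would be scaled by the pixel footprint to get mm^2."""
    data = np.nan_to_num(surface, nan=0.0)
    blurred = ndimage.gaussian_filter(data, blur_sigma)
    highpass = data - blurred                                    # unsharp-mask style edge enhancement
    smoothed = ndimage.uniform_filter(np.abs(highpass), size=3)  # 3x3 smoothing kernel
    binary = smoothed > threshold                                # binary map: True where a blob exists
    labels, count = ndimage.label(binary)
    index = list(range(1, count + 1))
    areas = ndimage.sum(binary, labels, index=index)
    centroids = ndimage.center_of_mass(binary, labels, index=index)
    return count, areas, centroids
```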
Figure 5.11. Flow of the blob analysis: 1.) raw surface measurement, 2.) enhanced, 3.) smoothing
applied, 4.) ellipses fit to blobs overlaid on raw measurement.
An unsharp mask, contrary to what the name implies, is a tool that is used to sharpen
edges or high frequency content of an image. Unsharp masking is built into most image
processing software, and is likely part of the engine behind a generic “sharpen image”
button in simple commercially available software. The mask sharpens an image by
generating a slightly blurred reproduction, and then subtracting the blurred version from
the original image. This process enhances edges in the image, and is effectively a high-pass filter. While an unsharp mask is a routine procedure in digital image processing, its
roots are in traditional film-based photography. In photography, a slightly out of focus
version of the negative is created. The blurred negative is processed into a positive
image, and then sandwiched with the negative of the original in-focus image. When light
is transmitted through the blurred-positive/original-negative combination to create the
final positive print, the blurred positive cancels out some of the low frequency content of
the original image, effectively increasing the apparent sharpness, known as the acutance
(McHugh, 2011).
The unsharp mask subtraction process leaves behind some noise on the order of a couple
of pixels in width, as seen in part 2 of Figure 5.11. Therefore, a smoothing kernel is
applied to the image to remove the noise. Typically a kernel having a size of 3x3 is
sufficient to remove noise while retaining the information corresponding to the fluid layer
features of interest. After smoothing is applied, the resulting image is a binary image
where zones with a value of one correspond to the “blobs.” This image is passed to a
blob analysis routine, developed as open source software by David Fanning (Fanning,
2011) and slightly modified to fit this work. The blob analysis routine identifies all zones
in the binary image having a high value and fits perimeters to them. The number of
features, the area of the zones enclosed by the perimeters, and their locations are
calculated, and routines have been developed to use the data to calculate distances
between the fluid layer features. A GUI was developed to allow for automated analysis
of a data set containing all of the measurements from a measurement cycle. A screen
capture of the GUI interface is shown in Figure 5.12.
Figure 5.12. Blob analysis GUI
5.6 Results
In order to evaluate the performance and utility of the FLI, contact lenses made of three
different materials were studied. The goal of the study was to show that FLI
measurements and corresponding analysis methods are capable of differentiating between
contact lens materials. For each lens material, multiple lenses were measured using a
brand new lens for each measurement cycle. The three materials tested were a hydrogel
contact lens and two different silicone hydrogel lenses, referred to as silicone hydrogel A
and silicone hydrogel B from here on. Five hydrogel lenses, eight silicone hydrogel A
lenses and eight silicone hydrogel B lenses were successfully measured for this work.
The figures on the following pages compare material behavior throughout the
measurement for one sample of each of the materials. Figure 5.13 shows the data with
piston, tilt and power removed, Figure 5.14 shows the data with a 4th order Zernike fit
removed, and Figure 5.15 shows the data referenced to the t = 0 measurement of each set.
Data through 30 seconds beyond the calculated t = 0 frame is shown for the remainder of
this discussion.
Figure 5.13. Sample results of three lens materials with piston, tilt and power removed
Figure 5.14. Sample results of three lens materials with 4th order Zernike polynomial removed. The
Zernike terms evolve with the measurement.
Figure 5.15. Sample results of three lens materials with t = 0 result subtracted. The referenced
measurement is static throughout the measurement sequence.
In order to compare the general behavior of the fluid layer on the contact lens, the RMS
surface height from each individual measurement within the “referenced to first” analysis
method was recorded, and the mean and standard deviation across each material type were
calculated. The stability of the fluid layer on the contact lens can be analyzed by
plotting the average RMS surface height for each material over the 30 second
measurement. By subtracting the t = 0 frame from every measurement in the 30 second
cycle, the relative behavior and stability of the fluid layer on each lens material can be
compared. Figure 5.16 shows the mean RMS
surface error for each material type with error bars corresponding to the standard
deviation. As discussed previously, N=5 for the hydrogel lens, and N=8 for each of the
silicone hydrogel materials.
[Plot: mean RMS surface height (waves) versus time (s) for Hydrogel, Silicone Hydrogel A, and Silicone Hydrogel B]
Figure 5.16. RMS surface height and standard deviation throughout a 30 second measurement for
three contact lens materials. All data points are referenced to the surface at t = 0.
As seen in that data, there are some behavioral differences between the materials,
especially when comparing the hydrogel to silicone hydrogel B. However, the error
bars on the data are quite large, indicating a wide variation across the collected
measurements. Interestingly, the most stable of the materials (the hydrogel) also has the
tightest error bars, indicating a repeatable fluid layer behavior across different
measurement cycles.
The RMS surface height data provides some insight into the optical quality of the fluid
layer surface on each of these materials. For example, a surface having high-order
errors of λ/4 (with power and astigmatism removed) is typically considered to be of good
quality for most applications, and the time the fluid layer takes to reach this threshold can
be examined. The hydrogel lens reaches the λ/4 criterion in about 12 seconds on average,
silicone hydrogel A in about 9 seconds, and silicone hydrogel B reaches the threshold
relatively quickly in about 4 seconds. While these times are likely not identical to what
occurs naturally on the eye due to environmental differences, the data imply that a
hydrogel lens will maintain its optical quality longer between blinks than either of the
silicone hydrogel materials, especially silicone hydrogel B.
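A small helper for extracting this threshold-crossing time from the referenced RMS curves of Figure 5.16 might look like the following sketch (per-frame time stamps and RMS values are assumed to be available as arrays):

```python
import numpy as np

def time_to_quarter_wave(times, rms_values, threshold=0.25):
    """Return the first time at which the referenced RMS surface height reaches
    the lambda/4 (0.25 wave) threshold, or None if it never does."""
    crossing = np.nonzero(np.asarray(rms_values) >= threshold)[0]
    return times[crossing[0]] if crossing.size else None
```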
The blob analysis routine previously described was applied to all of the measurements.
The number and area of blobs in the fluid layer were compiled and averaged for each
material. Figure 5.17 shows the average number of blobs over a 30 second measurement
for each of the three materials. The error bars correspond to the standard deviation over
all of the measurements. Figure 5.18 shows the average blob diameter over the same 30
second measurement along with the error bars again corresponding to the standard
deviation.
[Plot: number of blobs versus time (s) for Hydrogel, Silicone Hydrogel A, and Silicone Hydrogel B]
Figure 5.17. The average number of blobs as the fluid layer evolves for three different contact lens
materials
[Plot: average blob area (mm²) versus time (s) for Hydrogel, Silicone Hydrogel A, and Silicone Hydrogel B]
Figure 5.18. Average blob area as the fluid layer evolves for three different contact lens materials
The results displayed in Figure 5.17 and Figure 5.18 show behavioral differences
between the three different contact lens materials. A significant result is the difference in
the total number of blobs. At the end of the 30 second measurement time the hydrogel
lens, on average, has 7.5 ± 3.0 blobs where the silicone hydrogel A and silicone hydrogel
B lenses display 15.7 ± 5.6 and 18.8 ± 6.8 blobs, respectively (N=5 for the hydrogel, and
N=8 for silicone hydrogels A and B). This implies the hydrogel material is capable of
withstanding longer periods of time before the fluid layer on its surface begins to break
up, as indicated by blob formation in the topography measurement. While the results for
the silicone hydrogel A and silicone hydrogel B materials at the end of the 30 second
measurement time are identical within the uncertainty of the measurement, there is some
separation between the materials’ respective behavior for times between 5 seconds and 15
seconds, where the fluid layer on the silicone hydrogel A lens exhibits fewer blobs in its
topography.
Along with differences in the number of blobs in the fluid layer, significant differences in
the area of the features are observed. In general, the average blob area on the hydrogel
material is smaller than the areas on the silicone hydrogel A or silicone hydrogel B contact
lenses. Since the hydrogel has both fewer blobs and blobs of lesser surface area
compared to silicone hydrogel A or silicone hydrogel B materials, it can be hypothesized
that blob formation has a lesser effect on vision quality degradation in the hydrogel
material. It should be noted that this does not necessarily mean that the hydrogel contact
lens is the “best” lens. Instead, this is just one data point in the numerous factors that
define the quality of a contact lens. While this material appears to be superior in this
particular sense, it may have drawbacks in other criteria that this research does not
address.
5.7 Closing
The FLI was successfully used to measure a variety of contact lens materials. In
particular, a hydrogel and two different silicone hydrogel materials were measured, and
each produced repeatable results. In general, hydrogel materials are more wettable than
their silicone hydrogel counterparts. Silicone hydrogels contain hydrophobic silicone in
order to increase oxygen permeability through the contact lens. The results of
measurements with the FLI align with the expected behavior, since the hydrogel material
stayed wet longer (fewer blobs) and had a more stable fluid layer (RMS measurement).
Both of the silicone hydrogel materials behaved in a similar manner, although minor
differences were seen between them, with silicone hydrogel A being more wettable and
stable, on average. Based on the performance of the FLI, it was decided to pursue the in
vivo method of tear film measurement using a similar interferometer and analysis
methods, which is discussed beginning in Chapter 7.
6 SIMULATED IMAGE QUALITY EFFECTS OF TEAR FILM BREAKUP
One criterion defining a “good” fluid layer on a contact lens is that it provides an optically
smooth surface capable of providing good image quality. The tear film has a radius of
curvature of 7.8 mm on average, along with a refractive index of 1.336. Therefore, the
tear film contributes roughly 43 diopters of refractive power. Considering the average
human eye has a total power of approximately 60D, about 70% of the eye’s bulk
refractive power is attributed to the tear film. Therefore, irregularities in the tear film
will result in decreased visual performance.
The total thickness of the tear film does not have a significant effect on vision quality.
As the tear film uniformly thins over time, the power of the eye only changes by a
relatively small amount. Additionally, the eye will accommodate for any uniform
changes in power introduced by tear film thinning. Assuming a 6 micron thick tear film
uniformly thins to zero, the change in refractive power ∆Φ is:
\Delta\Phi = \Phi_{6\,\mu m} - \Phi_{0\,\mu m} = \frac{1.336 - 1}{7.8\ \mathrm{mm}} - \frac{1.336 - 1}{7.8\ \mathrm{mm} - 0.006\ \mathrm{mm}} = -0.03\ \mathrm{D}    (6.1)
Indeed, a change in refractive power of 0.03D is negligible, especially given that vision is
generally only corrected to within 0.25D with spectacles or contact lenses. However,
irregular thinning or breakup of the tear film can create local radii of curvature variations
across the pupil. Local changes in the tear film radius of curvature can contribute over
one diopter of power error in the pupil zone corresponding to the breakup feature
(Montés-Micó, 2007). These localized variations result in high-order aberrations in the
retinal image, reducing vision quality. In fact, measurements of ocular aberrations have
been shown to progressively change post-blink, increasing by an average of 21% as the
tear film evolves and breaks up (Montés-Micó, 2004).
Image quality in the human visual system is a somewhat subjective measure. An
acceptable image must both contain enough detail for a person to understand the scene,
and be visually pleasing. In addition to the optical quality of an image falling on the retina,
the brain’s processing and perception of the image comes into play. The human eye is
not a diffraction-limited optical system, and is affected by wavefront aberrations.
However, there is some evidence that the human visual system has in fact evolved to
prefer some blur in the image, and a diffraction-limited wavefront would actually be
detrimental (Held, 1980).
In order to quantify vision quality, the resolution of the eye is often measured.
Resolution in the visual system is the ability of a person to identify certain spatial
frequencies at a specific contrast. The highest resolvable spatial frequency is the limiting
resolution at that contrast. A person’s resolution limit is different for different contrast
levels, and is often described by the contrast sensitivity function (CSF). The CSF is a
measure of the lowest detectable contrast as a function of spatial frequency, and is
typically measured by presenting a subject with a variety of sinusoidal patterns with
varying periods and contrast (Schwiegerling, 2004). Figure 6.1 shows a sample target for
measuring the CSF, with an example of a limit drawn corresponding to the threshold
between resolved and unresolved spatial frequencies.
Figure 6.1. Contrast sensitivity function test with example limit overlaid
The Modulation Transfer Function (MTF) of the eye is directly related to the CSF. The
MTF is the maximum achievable contrast in an image as a function of the component
spatial frequencies.
MTF is measured as the ratio of the image contrast to the
object contrast at the spatial frequencies of interest. For example, if an object has a
spatial frequency with contrast of one, and the imaged contrast is 0.5, the MTF at that
spatial frequency is 0.5.
MTF is calculated by taking the Fourier transform of the Point Spread Function (PSF).
The PSF is the Fourier transform of the pupil function, where the pupil function of an
optical system is (Shannon, 1997):

P(x, y) = A(x, y)\, e^{i 2\pi W(x, y)}    (6.2)

where A(x, y) is the transmission factor for the pupil and W(x, y) is the wavefront
aberration at the pupil. The transmission factor is a function of the pupil geometry. For a
circular pupil in the eye, the transmission factor is a cylinder function with a diameter
matching the pupil diameter of the eye. The function has a value of one within the
pupil’s diameter and a value of zero beyond its extent. The optical path length through
the tear film is equal to the tear film refractive index, n, times the thickness of the tear
film, t(x, y). For an aberrated tear film, the optical path difference (OPD) is then a
function of the location on the tear film:

OPD(x, y) = n \, t(x, y)    (6.3)

The wavefront aberration through the tear film is the OPD divided by the wavelength:

W(x, y) = \frac{n \, t(x, y)}{\lambda}    (6.4)

The phase of the wavefront transmitted through the tear film is then

\phi(x, y) = \frac{2\pi n \, t(x, y)}{\lambda}    (6.5)

Combining these results, the pupil function through an aberrated tear film is

P(x, y) = \mathrm{cyl}\!\left(\frac{r}{D}\right) e^{i 2\pi n \, t(x, y)/\lambda}    (6.6)

The Point Spread Function (PSF) defines the system response for a single point object,
and is the squared modulus of the Fourier transform of the pupil function:

PSF(x', y') = \left| \mathcal{F}\{ P(x, y) \} \right|^2    (6.7)
The resulting PSF for a circular pupil is a normalized Bessel function, and is described by
the Airy disk when there is no aberration in the system. A simulated Airy disk image is
shown in Figure 6.2, in log scale to visualize energy in the outer rings. As aberration is
introduced into the system, the central peak of the PSF decreases and more energy is
shifted to the side lobes.
Figure 6.2. Airy disk in log scale to visualize outer rings
The MTF is related to the PSF through Fourier transform. Specifically, the MTF is a
normalized modulus of the Fourier transform of the PSF:
MTF(\xi, \eta) = \frac{\left| \mathcal{F}\{ PSF(x', y') \} \right|}{\left| \mathcal{F}\{ PSF(x', y') \} \right|_{\xi = \eta = 0}}    (6.8)
For a system with no aberration, the diffraction-limited MTF is the Fourier transform of
the Airy disk, and is seen in Figure 6.3. The MTF is a function of the working f/#, and
has a maximum cutoff frequency equal to
\xi_{cutoff} = \frac{1}{\lambda \, (f/\#)}    (6.9)
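A minimal NumPy sketch of the Equation 6.6–6.8 chain is shown below; it assumes a square tear film thickness map (in micrometers) sampled across the pupil, follows the OPD convention of Equation 6.3, and omits the eye's own aberrations, so it is only an illustration of the computation, not the CODE V model used later:

```python
import numpy as np

def psf_and_mtf(thickness, wavelength_um=0.589, n_tear=1.336):
    """Build the pupil function through a tear film thickness map, FFT it to get
    the PSF, and FFT the PSF to get a normalized MTF."""
    n = thickness.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil_mask = np.hypot(x, y) <= 1.0                         # cyl(r/D)
    phase = 2 * np.pi * n_tear * thickness / wavelength_um     # Eq. 6.5
    pupil = pupil_mask * np.exp(1j * phase)                    # Eq. 6.6
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2     # Eq. 6.7
    otf = np.fft.fft2(psf)
    mtf = np.abs(otf) / np.abs(otf[0, 0])                      # Eq. 6.8 (normalized)
    return psf, np.fft.fftshift(mtf)
```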
[ZEMAX plot: polychromatic diffraction MTF, modulus of the OTF versus spatial frequency in cycles per mm, λ = 0.589 µm]
Figure 6.3. Diffraction-limited MTF for f/# = 5 and λ=589nm. The cutoff frequency is 340 cycles/mm
(ZEMAX)
As mentioned previously, the MTF of the eye is not diffraction-limited. Since
aberrations are a function of the pupil size and field of view, the MTF varies across
different pupil diameters and fields of view. The MTF of an eye with a 4 mm diameter
pupil for 0° and 10° half-fields of view and a wavelength of 589 nm was calculated by
modeling the Arizona Eye Model in CODE V. The Arizona Eye model (Schwiegerling,
2004) is a schematic eye developed at the University of Arizona designed to have the
same aberration content as an average eye, and therefore is suitable for this simulation.
Proper focus position of the eye was adjusted by optimizing the posterior chamber length
to maximize MTF up to 60 cycles/degree, which corresponds to the typical limits of
human vision. The layout of the eye model is shown in Figure 6.4 and the MTF is shown
in Figure 6.5. All MTFs in this section list the spatial frequency in object-space
cycles/degree rather than the image space cycles/mm. To convert between cycles/degree
and cycles/mm, the following equation is used:
‹ 4|CŽ ‹4|CŽ w 9C4C w
zC©}CC
~~
8
)
G¾a
6.10
where feye is the focal length of the eye, which is approximately 17 mm.
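As a quick check of Equation 6.10, the conversion can be computed directly; a small sketch using the 17 mm focal length quoted above:

    import math

    def cycles_per_mm_to_cycles_per_degree(xi_cycles_per_mm, f_eye_mm=17.0):
        """Convert image-space spatial frequency (cycles/mm) to object-space
        cycles/degree using Equation 6.10."""
        return xi_cycles_per_mm * f_eye_mm * math.pi / 180.0

    # About 202 cycles/mm on the retina corresponds to the 60 cycles/degree
    # limit of human vision quoted above
    print(cycles_per_mm_to_cycles_per_degree(202.0))  # ~59.9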
Figure 6.4. Arizona Eye Model (CODE V)
Figure 6.5. MTF of the eye using the Arizona Eye Model with a pupil diameter of 4 mm and
λ=589nm (curves shown: diffraction limit, 0° field, and 10° sagittal and tangential fields)
In order to estimate the effect of tear film breakup on vision quality, Kasprzak and
Licznerski (1999) modeled a variety of tear film breakup geometries and used Fast
Fourier Transforms (FFTs) to determine the tear film influence on the Point Spread
Function (PSF) of the human eye. In that work, they built a model of the tear film with
breakup characteristics in the form of canyons and pits in the tear film surface. Real
measured tear film topography was not used, and the unaberrated eye was modeled as a
diffraction-limited optical system. Their modeling estimates a reduction in Strehl ratio
on the order of 25% or more for canyon-like tear film breakup three microns in depth. In
order to improve upon these simulations, the topographies of the fluid layers measured
empirically with the Fluid Layer Interferometer (FLI) were used to determine their effects
on the ocular MTF. The results of these simulations are discussed in the following
section.
6.1 MTF Simulation
The measurement results from the hydrogel and silicone hydrogel A contact lenses
discussed in Chapter 5 were used to simulate the degradation in ocular MTF as the fluid
layer on the lens breaks up. The purpose of this study was to determine how vision
quality is affected by changes in the fluid layer over time. Therefore, a 4th order Zernike
fit was subtracted from the surface to remove low-order aberration attributed to
systematic or procedural measurement error, and every measurement within a sequence
was referenced to the t = 0 result to isolate the dynamic behavior of the fluid layer. As
before, five hydrogel and eight silicone hydrogel lenses were measured. The data
collected at 10s, 20s and 30s into the measurement was used for this analysis. Figure 6.6
shows one example measurement from each of the materials at these measurement times.
Figure 6.6. Measured fluid layer surface topographies for the two contact lens materials over 30
seconds with a 4th order Zernike fit removed, and referenced to the t = 0s result. Scale is ±4.0 waves
at λ=632.8 nm.
The surface topographies from each of the contact lens measurements at t = 10s, 20s and
30s were saved as interferogram files (“.int” file extension) and imported into CODE V
optical design software (Synopsys, Pasadena, CA) as a phase error corresponding to
irregularity of the anterior cornea surface of the eye model. The MTF was then
calculated, and saved. The MTFs for identical measurement times (i.e., 10s, 20s, and
30s) on each material were then averaged and the standard deviation calculated. Figure
6.7 shows the average MTF for each of the materials. The MTFs at times of 10s, 20s and
30s are shown for a field angle of 0° and 10°. In order to compare the behavior of each
material on the same chart, Figure 6.8 displays the data in a different form. In Figure 6.8,
the MTFs of the eye with fluid layers from a hydrogel lens and from a silicone hydrogel lens are
shown on the same plot. The different plots correspond to different measurement times
and field angles.
Figure 6.7. Mean MTF at 10s, 20s, and 30s for each contact lens material, at a field of 0° and 10°.
The error bars correspond to the average deviation across the 5 hydrogel and 8 silicone hydrogel
measurements.
Figure 6.8. MTF of the eye model with fluid layer surface for a hydrogel (dotted) and silicone
hydrogel (dashed) contact lens.
The hydrogel lens does not reduce the on-axis ocular MTF at 10 seconds into the fluid
layer evolution, where the silicone hydrogel does show some degradation at this time.
For the on-axis field of view, the MTF with the hydrogel fluid layer is higher than that of
the silicone hydrogel throughout the entire measurement duration. In terms of the 10°
field, the fluid layer on the hydrogel material does degrade the MTF by a few percent
through the first 10 seconds, indicating the immediate presence of high-order field-dependent
aberrations. The silicone hydrogel material shows more degradation, on average. At 20s and 30s
into the measurement, the fluid layers on the hydrogel and silicone hydrogel materials appear to
have similar effects. It should be noted that the
retina’s resolution at a 10° field is much worse than the foveal resolution at the center of
the field of view. While the MTF for the eye with and without the effects of tear film
breakup were calculated, the retina still limits the resolution at this field.
Table 6.1 lists the percent mean degradation in the ocular MTF due to fluid layers on a
hydrogel and silicone hydrogel contact lens. The percent MTF degradation is calculated
as:
\%\Delta MTF = \frac{MTF_{eye} - MTF_{fluid}}{MTF_{eye}}                                                          6.11
where MTFeye is the baseline MTF from the Arizona Eye model and MTFfluid is the
average of the MTF of the modeled eye with the corresponding fluid layer on the anterior
cornea surface. For the on-axis field, the MTF is generally affected more by fluid layers
at higher spatial frequencies. For the off-axis field, the MTF tends to be more affected in
the mid-spatial frequencies.
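Equation 6.11 is straightforward to apply to the exported MTF data; a minimal sketch (assuming the baseline and fluid-layer MTF curves are available as NumPy arrays sampled at the same spatial frequencies) is:

    import numpy as np

    def percent_mtf_degradation(mtf_eye, mtf_fluid):
        """Percent MTF degradation relative to the baseline eye model (Eq. 6.11)."""
        mtf_eye = np.asarray(mtf_eye, dtype=float)
        mtf_fluid = np.asarray(mtf_fluid, dtype=float)
        return 100.0 * (mtf_eye - mtf_fluid) / mtf_eye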
181
10° Field
0° Field
Table 6.1. Percent MTF degradation compared with the baseline Arizona Eye Model
Hydrogel
Silicone Hydrogel
Frequency
(cycles/degree)
10 s
20 s
30 s
10 s
20 s
30 s
15
30
45
2.04%
0.67%
0.00%
7.45%
11.60%
12.35%
15.42%
27.14%
32.03%
7.88%
14.48%
20.73%
19.59%
34.92%
44.53%
32.22%
50.99%
59.93%
60
1.81%
18.39%
37.92%
21.62%
44.88%
60.29%
15
30
9.67%
17.07%
23.83%
43.76%
33.92%
57.67%
15.91%
27.51%
32.10%
52.57%
43.40%
68.13%
45
20.85%
52.68%
64.23%
28.03%
52.89%
60.67%
60
19.19%
50.40%
61.27%
18.72%
42.29%
51.37%
6.2 Vision Quality Conclusions
By importing the measured fluid layer topographies into a model of the human eye,
the simulated effects of tear film breakup on human vision were analyzed. Measurements on
contact lenses made from a hydrogel and a silicone hydrogel material were made to
compare the vision quality effects of each material as the tear film evolves and breaks up.
Fluid behavior on the hydrogel material has a lesser effect on vision quality than the
silicone hydrogel based on MTF calculations at different times in the fluid layer breakup.
The results of the tear film-reduced MTF measurements match what was expected prior
to the measurement. The hydrogel lens is composed of 58% water and the silicone
hydrogel is composed of 47% water. Hydrogels are generally known to be very wettable,
while silicone hydrogels are somewhat hydrophobic. The data collected shows that, in
general, the hydrogel material maintains an optically smooth fluid layer longer than the
silicone hydrogel. As a result, the aberrations which reduce vision quality remain lower
for a longer duration in the hydrogel contact lens.
The data captured with the fluid layer interferometer was able to differentiate between
fluid layer behavior on two different contact lens materials. The fluid layer
interferometer can be used to characterize how fluid layers behave on well-understood
contact lens materials. Once the known materials are characterized, the fluid behavior on
newly developed lens materials can be compared in order to estimate their vision quality
implications. While the in vivo behavior of the tear film is likely different than the in
vitro behavior of the fluid layer, the trends can be used to provide feedback during
contact lens material design.
7 OCULAR GEOMETRY AND MOVEMENT
The Tear Film Interferometer (TFI) for in vivo measurement and characterization of the
human tear film must be designed for testing the nominal cornea geometry. A perfectly
smooth tear film showing no signs of breakup will have a topography and curvature
closely matching the cornea’s topography. In fact, since most corneal topographers rely
on measuring an image reflected from the anterior surface of the eye, they are actually
measuring tear film topography, but lack the resolution to examine fine tear film
structure. The corneal geometry used in this work is derived from the Arizona Eye
Model having a radius of 7.8 mm and a conic constant of -0.25 (Schwiegerling, 2004).
As will be shown below, these parameters closely match the mean values of cornea shape
published in a number of studies. Obviously not all eyes will match this shape, so the
interferometer must be designed to accommodate a range of corneas. It should be
understood that this system is not intended as a regular ophthalmic screening device, so
does not have to accommodate every eye available. Instead, it should be compatible with
a range of corneal shapes such that a sufficient population can be studied to analyze tear
film properties.
Numerous authors have published studies discussing the range and nominal value of the
cornea’s radius of curvature, although published information on the asphericity is less
established. When discussing asphericity of the cornea, two major components are
usually considered: the conic constant and corneal astigmatism. A conic surface where
the z-axis is the axis of revolution is defined by the equation
x^2 + y^2 - 2\,r\,z + (K+1)\,z^2 = 0                                                          7.1
where r is the radius of curvature at the vertex and K is the conic constant. Solving the
equation for z provides the sag of a conic surface:
z(x,y) = \frac{(x^2 + y^2)/r}{1 + \sqrt{1 - (K+1)\,(x^2 + y^2)/r^2}}                                                          7.2
Instead of the conic constant, conic surfaces are often defined by their eccentricity which
is a measure of how much a surface varies from being spherical. The relationship
between eccentricity, ε, and conic constant is
K = -\varepsilon^2                                                          7.3
The cornea generally has a conic constant of K = -0.25, a shape known as a prolate
ellipsoid.
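The sag expression of Equation 7.2 is simple to evaluate for the nominal cornea; a brief sketch follows (the default parameter values are the Arizona Eye Model cornea described above):

    import math

    def conic_sag(x, y, r=7.8, k=-0.25):
        """Sag of a conic surface (Eq. 7.2) with vertex radius r (mm) and conic constant K."""
        s2 = x**2 + y**2
        return (s2 / r) / (1.0 + math.sqrt(1.0 - (k + 1.0) * s2 / r**2))

    # Sag at the edge of the 6.5 mm measurement diameter used later in this work
    print(conic_sag(0.0, 3.25))  # ~0.70 mm for the nominal cornea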
The second aspheric aspect considered is keratometric astigmatism. Also known as
corneal astigmatism, this occurs when there is a difference between radii of curvature in
the horizontal and vertical meridians on the cornea. In this case, the cornea takes the
shape of a toroid. In most cases of corneal astigmatism, the larger radius (flatter) occurs
on the horizontal meridian (Daily and Coe, 1962). This condition is known as "with-the-rule"
astigmatism, while the opposite condition is referred to as "against-the-rule" astigmatism.
7.1 Structure of the Human Eye
Compared to typical imaging systems, the human eye is quite sophisticated. The eye has
the ability to accommodate its focus for a range of object distances by automatically
altering the form of its internal optics. It has a large dynamic range in detectivity and
automatically adjusts its f/# depending on the incident power. It is rare to find another
optical system having the same abilities or quality of the eye. Figure 7.1 shows a
schematic of the eye from the MIL 141 Handbook (MIL-HDBK-141, 1962).
Some of the components are described below.
Figure 7.1: Cross section of the eye reproduced from MIL-HDBK-141 (1962)
Cornea: The cornea is the outermost tissue layer of the eye. This body forms a transparent
window at the entrance to the pupil. The cornea is often incorrectly considered the first
optical surface of the eye and is credited with providing a majority of the eye’s optical
power due to the large refractive index change at this surface. In fact, the cornea is
covered by the tear film which has the first interaction with incoming light and
contributes the bulk refractive power. However, since the refractive index of the cornea
nearly matches the tear film and it is assumed that the tear film shape matches that of the
anterior cornea, the tear film-cornea combination can be considered one single refracting
body. Understanding the geometry of the cornea is of great importance in vision
correction where contact lenses are used. Additionally, refractive surgery (LASIK,
Radial Keratotomy, etc.) takes place at the cornea. Understanding the cornea's geometry
and its associated population statistics is critical in the design of the Tear Film
Interferometer, and these topics are accordingly discussed in detail in Section 7.2.
Crystalline lens: The crystalline lens provides the variable focusing ability of the eye.
Muscles attached to the lens alter its shape to adjust the eye’s focus for different viewed
object distances. This process is known as accommodation. Over time, the crystalline
lens hardens and the eye loses its ability to accommodate, such that only distant objects
are in focus and it is impossible to focus on nearby objects. This condition is known as
presbyopia, and is common in people beginning at about 40 years of age.
Retina: The retina is the detector in the eye, consisting of rods and cones which are two
types of receptors for absorbing and detecting incident light. The cones are used in bright
conditions, as they are sensitive to high irradiance levels, and this vision condition is
known as photopic vision. There are three subtypes of cones, known as L, M and S types
which refer to their peak sensitivities in long, middle and short wavelengths, respectively.
L-type cones are most sensitive in the red region of the visible spectrum, M-type in the
green, and S-type in the blue wavelengths. These cones are solely responsible for color
vision. Cones have their highest density near the center of the field of view within the
fovea and macula.
Rods, on the other hand, are responsive in low light levels, have no color sensitivity, and
become bleached under bright illumination. Although rods have no ability to
differentiate color, their peak wavelength response occurs at 500 nm (Schwiegerling,
2004). This vision condition is known as scotopic vision and has its highest sensitivity
in the periphery of the field of view. Rods become saturated at light levels above roughly
3 cd/m2, and cones have no response below about 0.03 cd/m2. Between those thresholds,
both rods and cones contribute to vision.
Iris and Pupil: The iris is the stop of the ocular optical system, and as such it sets the
diameter of the entrance pupil and how much light can enter the eye. Just
as the f-stop of a camera lens does, the iris controls how much light reaches the retina
along with the aberrations and depth of focus of the optical system. The image of the iris
in air is commonly referred to as the pupil. The iris itself is smaller than the pupil
appears due to magnification from the cornea.
7.2 Selected Cornea Geometry Studies
In a study by Kiely et al. (1982) the geometry of 176 corneas was measured using a
Placido disk keratometer where the conic constant and the radius at the vertex were
recorded. The mean and standard deviation radius was R = 7.7 ± 0.27 mm, and the mean and
standard deviation conic constant was K = -0.26 ± 0.18. Both measurements exhibited nearly
normal distributions, so roughly 68% of corneas will fall within one standard deviation of
the mean value. Stenstrom (1948) published the measurement results on about 1,000
corneas and found a mean and standard deviation radius of curvature of R = 7.86 ± 0.259
mm.
Corneal astigmatism is the result of the cornea’s curvature having a different value in
each meridian. It is likely that corneal astigmatism is present even when a person does
not have any total visual astigmatism. It is thought that the crystalline lens and the
cornea grow to compensate for each other's astigmatism component. An empirical study
was published by Javal in the 1890’s describing a relation between a person’s total
refractive astigmatism and their keratometric astigmatism, which has come to be known
as Javal’s Rule (Schwartz, 2002):
\mathrm{Refractive\ Astigmatism} = 1.25\,(\mathrm{Keratometric\ Astigmatism}) + 0.50\ \mathrm{D\ X90}                                                          7.4
where the refractive and keratometric astigmatism values are measured in diopters, and
the “X90” term indicates against-the-rule astigmatism.
In the time since Javal’s initial study, additional studies with more sophisticated
measurement devices have found the relationship to be somewhat different than
originally thought. Perhaps the most widely accepted equation was published by
Grosvenor et al. which is often referred to as either the Simplified or Modified Javal’s
Rule (Grosvenor, 1990):
\mathrm{Refractive\ Astigmatism} = 1.00\,(\mathrm{Keratometric\ Astigmatism}) + 0.50\ \mathrm{D\ X90}                                                          7.5
Since Grosvenor’s modifications, multiple studies have all concluded that the Modified
Javal’s rule is a sufficient relationship for predicting total refractive astigmatism from
corneal astigmatism, or vice versa (Grosvenor, 1990; Auger, 1998; Keller, 1996). An
interesting effect of this is that, on average, a person having no corneal astigmatism likely
presents with refractive astigmatism. It also seems as if there is a natural astigmatism
balance between the cornea and crystalline lens where each element has a toroidal shape
that is corrected by the other.
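The Modified Javal's Rule of Equation 7.5 can be written as a small sketch; the sign convention below (with-the-rule astigmatism positive, the 0.50 D term as an against-the-rule contribution) is an assumption made for illustration only:

    def modified_javal_rule(keratometric_astigmatism_d):
        """Estimate total refractive astigmatism (D) from keratometric astigmatism (D)
        using the Modified Javal's Rule (Eq. 7.5). Positive values are taken here as
        with-the-rule; the 0.50 D term is the against-the-rule lenticular contribution."""
        return 1.00 * keratometric_astigmatism_d - 0.50

    # A cornea with no astigmatism is predicted to show roughly 0.50 D of
    # against-the-rule refractive astigmatism, as noted in the text above
    print(modified_javal_rule(0.0))  # -0.5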
7.3 Fixation Stability and Eye Movement
When a subject is asked to fixate on a target, the gaze direction of the eye is constantly
changing due to involuntary movements. The pattern the eye follows is comprised of
three movement types: saccades, drifts and physiological nystagmus (Steinman, 1973).
Saccadic eye movements appear as quick flicks of the eye and are very fast changes in
eye position usually occurring about once per second, with a magnitude on the order of
five arc-minutes. Between saccades, the eye continuously slowly drifts back and forth
over a range of a few minutes of arc (Steinman, 1965). Physiological nystagmus is a high
frequency (50 – 100 Hz) tremor that is always present in the eye, and has a magnitude
less than one arc minute (Steinman, 1973). During fixation the eye continuously moves
about its mean position due to these three movement patterns, though it doesn’t wander
far from the mean. Standard deviations from the mean position of two to five arc-minutes are typically reported (Steinman, 1973).
One theory explaining the purpose of involuntary movements is that the rods and cones
on the retina only respond to changes in their signal and therefore need to refresh, much
like how a pyroelectric detector must be chopped or dithered to measure incident power.
Another explanation is that the eye’s gaze direction continuously drifts at a low velocity
until the eye quickly flicks back to adjust the image location to an “optimal locus” on the
retina (Cornsweet, 1956). The nystagmus movements are thought to exist in order to
sharpen an aberrated image created by an imperfect human eye. By rapidly dithering, the
image scans across different receptors on the retina, where the brain averages the signal
in order to identify edges in the image where shifting illumination gradients occur in the
blurred image (Marshall, 1942).
While efforts can be made to stabilize voluntary eye movements using proper fixation
targets and well trained subjects, involuntary movements are unavoidable due to their
necessary function in the human visual system. Interestingly, Steinman et al. (1967)
provide some evidence that instructing a subject to “hold” their gaze rather than “fixate”
on the target reduces the frequency of involuntary eye movement. They hypothesize that
the psychology behind the instruction to “fixate” forces the subject to adjust their eyes for
best image quality through saccades, while instructing them to "hold" ignores a potentially
blurry image, and thus reduces the number of saccades used in the previously described
sharpening process. During clinical testing of the TFI, if the operator finds that a subject
exhibits too many involuntary movements, changing the verbal instructions accordingly
may result in improvement.
In addition to evaluating eye movement present during fixation, the movement of the
entire head, and in turn the eye, while positioned in an ophthalmic head rest must be
evaluated. A variety of head rest designs exist, ranging from a simple chin cup, to a
system making contact with the chin, cheek bones and forehead or even wrapping around
the entire head. For applications requiring increased positional accuracy, bite bars can be
used where the subject bites onto a soft dental wax mold that is rigidly mounted to the
system. While bite bars provide increased positional stability, they are more invasive,
uncomfortable, and can cause obscurations over parts of the eye when the subject’s
cheeks push up while clenching their jaw. It is desirable to avoid the use of a bite bar
unless absolutely necessary.
Kasprzak and Iskander (2010) studied head movements in a typical ophthalmic headrest
using ultrasound transducers with positional sensing accuracy of ±2 microns. The
headrest used in that study was a typical headrest used in ophthalmic devices such as
corneal topographers, and positions the subject using a chin cup and forehead rest. In
terms of longitudinal movement, breathing was observed to cause cheek movements with
an average amplitude of about 55 microns at a frequency of 0.32 Hz, and it can be
assumed the eye would translate in a similar manner. Blood pulsation was also observed
to create displacements of 18 microns at a frequency of 1.43 Hz. Lateral movements
were shown to have an amplitude of 44 microns measured at the subject’s temple. The
study also revealed that instructing the subject to close their jaw minimized head rotation.
7.4 Implication on TFI Design
In order to be a useful instrument, the TFI must be capable of measuring a range of
corneal geometries while accommodating the movement that occurs when a subject is
stabilized in an ophthalmic head rest. Based on the expected ocular variations previously
discussed, the interferometer will be designed for a conic test surface having a base
curvature of 7.8 mm and conic constant of -0.25, while functioning over a range of at
least ±300 microns on the radius of curvature and ±0.2 on the conic constant. These
variations represent roughly one standard deviation in each direction from the mean,
based on the study data in Section 7.2. To be capable of testing a significant portion of
the population, the system must also accommodate at least ±1.0 D of corneal
astigmatism, which is approximately ±5.75λ at the TFI’s 785nm wavelength.
In terms of eye movement, translations of ±100 microns around the nominal position
should be expected when using an ophthalmic headrest. Based on expected eye
movement while fixating on a target, the TFI must be capable of supporting tilts of at
least ±15 arc-minutes (±0.25°) around the nominal line-of-sight.
8 PHASE II: IN VIVO TEAR FILM INTERFEROMETER
This chapter presents the design of the Tear Film Interferometer (TFI) for in vivo
measurement of a human tear film. Similar to the in vitro Fluid Layer Interferometer, the
TFI is designed to detect features in the tear film topography that are indicative of overall
tear film health and behavior. The purpose of the instrument is to measure the tear film
on both people who wear and do not wear contact lenses. In the former case, the tear
film-contact lens interaction can be characterized as was previously done in an in vitro
manner.
In addition to measuring the tear film, a purpose of this research is to evaluate the
feasibility of on-eye interferometry, in general. Future research could potentially include
measuring the posterior tear film surface, corneal topography, or even on-eye contact lens
reconstruction. Therefore an additional goal in designing the interferometer was to
design it in a modular manner so that it could be more easily modified for future use. For
example, the fixation target for gaze stability is a stand-alone system that can be
independently redesigned without completely redesigning the entire interferometer,
should it be necessary.
At its core, the TFI is a polarization Twyman-Green Interferometer similar to the in vitro
system previously discussed. Additional modules which are included in order to measure
a human’s dynamic tear film are: the converger objective, fixation system, head
stabilization mechanism, motion control, laser power measurement and safety system and
eye tracker. The remainder of this chapter will discuss the rationale for and design of
these modules.
8.1 Core Interferometer
The core interferometer is again a polarizing Twyman-Green system built around a
pixelated camera kit from 4D Technology Corporation which is calibrated for a
wavelength of 785 nm. For a review on how phase shifting is achieved in this system,
refer to Chapter 4.1. A layout of the interferometer is shown below in Figure 8.1.
Figure 8.1. Layout of the in vivo FLI
The light source in the interferometer is a wavelength-stabilized diode laser with a
wavelength of 785 nm, providing an invisible and non-threatening beam onto the eye.
The laser beam is focused with an infinity-corrected 40X microscope objective onto a
five micron diameter pinhole for spatial filtering. The objective creates a diverging beam
which is then collimated by a 150 mm focal length collimator into an 18 mm diameter
beam. The collimated beam transmits through a half-wave plate (HWP) followed by a
polarizing beamsplitter (PBS) cube which splits the incident beam into the reference and
test arms of the interferometer, having orthogonal polarizations. The test beam is
transmitted through the PBS and the reference beam is reflected at the PBS.
After reflection from the PBS, the reference beam first transmits through a quarter-wave
plate oriented with its fast axis at 45° and is then reflected off of a λ/20 flat mirror. The
reference beam is then transmitted back through the quarter-wave plate so that it now
transmits through the PBS into the imaging arm of the interferometer.
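The polarization bookkeeping in the reference arm can be verified with a short Jones-calculus sketch; the matrices are standard forms, and the choice of vertical (s) polarization for the PBS-reflected reference beam and the treatment of the mirror as an identity are assumptions made for illustration:

    import numpy as np

    def qwp(theta):
        """Jones matrix of a quarter-wave plate with its fast axis at angle theta (radians)."""
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        retarder = np.array([[1, 0], [0, 1j]])  # quarter-wave retardance between axes
        return rot @ retarder @ rot.T

    # Reference beam reflected by the PBS, taken here as vertically (s) polarized
    e_in = np.array([0.0, 1.0])

    # Out through the 45 degree QWP, reflect from the flat mirror (treated as identity),
    # and back through the same QWP
    e_out = qwp(np.pi / 4) @ qwp(np.pi / 4) @ e_in

    print(np.round(e_out, 3))  # [1, 0]: the linear polarization has rotated by 90 degrees,
                               # so the returning beam now transmits through the PBS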
The test beam transmits through a quarter-wave plate in the same manner as the reference
arm and then through the null module. The null optics consist of a pair of cylindrical
lenses followed by a diverger which expands and focuses the beam at the tear film’s
center of curvature. The light is reflected off of the anterior tear film surface and back
through the null system, QWP and reflected by the PBS into the imaging arm of the
system.
The reference and test beams have orthogonal polarizations in the imaging arm where
they are focused with an aplanatic imaging lens through an additional 45°-fast axis
quarter-wave plate and onto the detector.
8.1.1 Camera
The camera is the same megapixel device from 4D Technology Corp. as used in the FLI.
For a full discussion of the camera and calculations pertaining to the resolution and fringe
frequencies the camera can support, refer to Chapter 4.2.1.
8.1.2 Laser Source
The laser is a wavelength stabilized diode laser with a wavelength of 785 nm (Innovative
Photonics, p/n I0785SM0100B-TH). The laser system is built around a Gallium Arsenide
diode laser which is wavelength locked through use of holographic gratings which
provide spectrally filtered feedback to the diode. The normally relatively broad spectrum
of a laser diode is locked to within 20 MHz (4.11x10^-14 m) through proper arrangement of
the gratings and laser cavity. The details of such a laser are beyond the scope of this
dissertation, but an interested reader should refer to Rudder (2006). This particular laser
has a coherence length of approximately 15 meters, so path matching in the
interferometer is not necessary. It has a maximum power output of 100 mW, but the
interferometer is internally limited to emit less than 1.0 mW from the system.
The 785 nm wavelength was selected for a few reasons. First, the human visual response
at this wavelength is quite low, if not completely invisible. Therefore, the illumination
will not cause any reflex or tearing even if the beam is visible. Second, since fringe
spacing is a function of wavelength, the longer wavelength increases the interferometers
dynamic range. While this comes at the cost of sensitivity, accommodating a larger
amount of ocular variations and movement is desirable. Finally, 785 nm is a commonly
used wavelength, so optics with appropriate coatings are readily available. At 785 nm,
the silicon based detector in the Pixelated camera kit has a quantum efficiency of 11%.
A Faraday optical isolator (Edmund Optics, p/n NT62-287) is located at the exit port of
the laser. The isolator is in place to prevent stray reflections from traveling back into the
laser cavity. If reflections were to make it back into the laser, the laser frequency could
become unstable. The Faraday isolator consists of a horizontal polarizer, Faraday rotator,
and a 45° linear polarizer. The device allows the laser to transmit, while blocking any
back-reflected signal. This particular device is wavelength tunable, and has been tuned in
the interferometer for the 785 nm laser.
8.1.3 Collimator
The collimating system expands and collimates the beam to an 18 mm diameter. The
laser is first focused by a 40X microscope objective, where it is focused onto a five
micron diameter spatial filter. A 150 mm focal length aplanatic doublet is positioned a
focal length away from the pinhole in order to collimate the diverging beam. The aplanat
is a commercially available optic (CVI Laser, p/n LAP-150.0-30.0-780) which is
corrected and coated for a wavelength of 780 nm. Since the microscope objective is well
corrected and an aplanat has no spherical aberration, a high quality flat wavefront is
created. The wavefront exiting the collimating telescope has an error smaller than 0.01λ
RMS when properly aligned. The layout and OPD of the collimator are shown in Figure
8.2.
Figure 8.2. Layout and OPD of the collimating lens
8.1.4 Reference Mirror
The reference mirror is a 2” diameter flat with accuracy of at least λ/20 (OptoSigma, p/n
033-1220). The reflective coating is a protected aluminum with a reflectance of roughly
80% at 785 nm. The decision to use a mirror rather than an uncoated flat was made for
two reasons. First, using a mirror eliminates the reflections from the second surface that
caused some error in the in vitro FLI. Second, a higher reflectance allows for a lower
total laser power to be used within the interferometer, increasing the safety of the system.
Although the reflectance of the mirror does not match the tear film, the half-wave plate
prior to the polarizing beamsplitter allows for adjustment of power into each arm of the
interferometer in order to maximize fringe visibility.
8.1.5 Null System
The null system in the TFI consists of two sub-systems: the diverger lens and the
astigmatism null. The diverger functions in a similar manner as in the in vitro system,
focusing the beam at the tear film’s center of curvature. The purpose of the astigmatism
null is to compensate for corneal astigmatism present in most eyes, and is also referred to
as the partial null module throughout this work.
Diverger
The design of the diverger is driven by the tear film geometry, along with the desired
measurement area. Based on the ocular geometries discussed in Chapter 7, the diverger
is designed to compensate for a wavefront reflected off of an aspheric tear film having a
7.8 mm radius of curvature and a -0.25 conic constant. Just as in the in vitro
interferometer, the diverger is designed to measure a 6.5 mm diameter on the tear film.
Based on the 7.8 mm radius of curvature and 6.5 mm measurement diameter, the f/# of
the lens must be 1.2 or faster. In order to provide a comfortable and non-threatening
distance between the subject and the instrument, the system is designed to provide a
working distance of at least 70 mm between the last surface of the diverger and the eye.
One major change between this lens and the diverger in the in vitro FLI is that the surface
under test is now aspheric due to the cornea's conic geometry. In reflection, the prolate
ellipsoid shape of the cornea contributes spherical aberration to the wavefront. A conic
surface contributes 4th order spherical aberration as follows:
W_{040} = -\frac{\varepsilon^2\, y^4\, \Delta n}{8\, r^3} = \frac{K\, y^4\, \Delta n}{8\, r^3}                                                          8.1

where

\varepsilon^2 = -K.                                                          8.2
The term ε is the eccentricity, r the radius of curvature, y maximum ray height and ∆n is
the change in refractive index. For a conic constant of -0.25, the eccentricity is ε = 0.5.
The radius of curvature is 7.8 mm and the ray height is 3.25 mm for a 6.5 mm diameter
measurement area. For calculation purposes, the change in refractive index at a reflecting
surface is -2. Using these parameters:
\varepsilon^2 = (0.5)^2 = 0.25                                                          8.3

and

W_{040} = -\frac{(0.25)(3.25)^4(-2)}{8\,(7.8)^3}\ \mathrm{mm} = 0.0147\ \mathrm{mm} = 18.7\lambda \quad (\lambda = 785\ \mathrm{nm}).                                                          8.4
Therefore, the converger is designed to have -9.35 waves of primary spherical aberration
in singlepass. In doublepass, this nulls out the spherical aberration contribution from the
aspheric cornea. For an on-axis field of view, only spherical aberration is contributed by
the conic surface.
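The arithmetic of Equations 8.1 through 8.4 is reproduced in the short sketch below; the sign convention follows the reconstruction above and should be treated as illustrative:

    def conic_reflection_w040(r_mm, k, y_mm, wavelength_um, delta_n=-2.0):
        """4th-order spherical aberration from a conic surface in reflection
        (Eqs. 8.1-8.2), returned in mm and in waves at the given wavelength."""
        eps_sq = -k                                               # Eq. 8.2
        w040_mm = -eps_sq * y_mm**4 * delta_n / (8.0 * r_mm**3)   # Eq. 8.1
        return w040_mm, w040_mm * 1000.0 / wavelength_um

    # Nominal cornea: r = 7.8 mm, K = -0.25, 3.25 mm ray height, lambda = 0.785 um
    w_mm, w_waves = conic_reflection_w040(7.8, -0.25, 3.25, 0.785)
    print(w_mm, w_waves)  # ~0.0147 mm, ~18.7 waves; the diverger takes half of this
                          # (-9.35 waves) in a single pass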
The design process used in designing the in vivo diverger started with modification of the
in vitro diverger design. The first step in the design process was to reoptimize the in vitro
converger for a wavelength of 785 nm. Once this was complete, the radius of curvature
of the test surface was changed to 7.8 mm, and a conic constant was added. The conic
was stepped in increments of -0.05 where the radii of curvature in the diverger were
reoptimized to account for the added spherical aberration. This process was iterated until
the conic constant reached the final value of -0.25. At this point, the design only had a
working distance between the converger and cornea of approximately 25 mm. Rather
than increase this to the full working distance all in one step, the distance was increased
in steps, with the curvatures and thicknesses in the converger design reoptimized at each
step. At some point during this process, it became clear that a fifth lens
element would be required in order to provide the spherical aberration correction
necessary as the lens diameter grew to accommodate the growing working distance.
After adding the lens, the process was repeated until a sufficient working distance of over
75 mm was achieved. Based on conversations with the eventual manufacturer, the glass
type was adjusted to TIH6 for their manufacturing preferences (n=1.785 at 785 nm). The
resulting design is shown in Figure 8.3.
The as-designed converger consists of five elements which first expand the beam and
then focus it, and has a working f/# of 1.23. All elements are manufactured from a
relatively high index TIH6 glass in order to minimize spherical aberration more easily.
The rear principal plane is outside of the lens, providing a longer working distance while
maintaining a shorter effective focal length. The working distance between the vertex of
the last surface and the cornea is 77.4 mm, leaving sufficient room for mounting structure
while maintaining comfortable spacing between the subject and the instrument.
Figure 8.3. Layout and prescription for the TFI diverger
Tolerancing the system in doublepass was done in the same manner as in the in vitro FLI
diverger, and similar active alignment procedures were anticipated while determining the
tolerances. Following the procedure described in Chapter 4.2.4, the tolerances listed in
Chapter 4 were applied to this design. The resulting tolerance analysis showed that the
diverger is very sensitive to alignment error, but that λ/10 RMS performance is achievable through active
alignment and proper compensators. Photon Gear Inc. (Ontario, NY) was chosen to
manufacture this diverger. The manufacturer of the in vitro system was unable to bid on
this project due to their inability to provide interferometric alignment at a wavelength of
785 nm. Photon Gear was recommended by the project sponsor, has experience in
designing and building null optics such as this diverger, and has metrology capabilities at
785 nm.
In order to interferometrically align the system, a calibration surface was required. For
null optics designed for testing spherical surfaces, a spherical mirror or high grade ball
bearing can be used as the calibration part. However, since this assembly is designed to
null the wavefront reflected from an asphere, a matching asphere must be used for
calibration. While it is possible to align the assembly using a spherical surface and subtract
spherical aberration, this does not exactly predict the behavior on the nominal conic
surface. Therefore, a “golden surface” was required for alignment of the diverger. That
surface was manufactured by QED (Rochester, NY), and is discussed in more detail in
Section 8.1.7. After final alignment, the as-built doublepass performance of the TFI
diverger was 0.061λ RMS.
Cylindrical Astigmatism Null
As described in Chapter 7, a person having no refractive visual astigmatism on average
has 0.5D of corneal astigmatism. In order to accommodate a significant portion of the
population, the interferometer must compensate for at least ±1.0 D of corneal
astigmatism. The ophthalmic community tends to discuss refractive power and
aberrations in units of diopters. For reference, error in diopters (δΦ) is related to the
wavefront error (W) as:
\delta\Phi = \frac{1}{r}\,\frac{dW}{dr}                                                          8.5
where r is the wavefront radius. For astigmatism, the wavefront error is:
W = W_{22}\,\rho^2 \cos^2\theta                                                          8.6

where

\rho = \frac{r}{r_{max}}.                                                          8.7
The derivative in Equation 8.5 then becomes:
\frac{dW}{dr} = \frac{2\,W_{22}\,r}{r_{max}^2}                                                          8.8
Combining Equations 8.5 and 8.8 yields:
\delta\Phi = \frac{2\,W_{22}}{r_{max}^2}                                                          8.9
For a pupil diameter of 6 mm, rmax = 3mm. Therefore, 1.0 diopter of astigmatism over a
6 mm diameter pupil is:
W_{22} = \frac{1}{2}\,\delta\Phi\, r_{max}^2 = \frac{1}{2}\,(1.0\ \mathrm{D})(3\ \mathrm{mm})^2 = 4.5\ \mu\mathrm{m} = 5.73\lambda \quad (\lambda = 785\ \mathrm{nm})                                                          8.10
Therefore, the interferometer must be capable of compensating for at least ±5.73 waves
of primary astigmatism (11.46λ PV). While this is the minimum requirement, additional
correction is desirable in order to accommodate a larger percentage of the population.
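The diopter-to-waves conversion of Equations 8.9 and 8.10 is reproduced below as a small sketch; the default pupil diameter and wavelength are the values used in this chapter:

    def astigmatism_diopters_to_waves(power_d, pupil_diameter_mm=6.0, wavelength_um=0.785):
        """Convert corneal astigmatism in diopters to waves of primary astigmatism W22
        over the pupil (Eq. 8.10). Diopters are 1/m, so lengths are converted to meters."""
        r_max_m = (pupil_diameter_mm / 2.0) * 1e-3
        w22_m = 0.5 * power_d * r_max_m**2        # Eq. 8.10
        return w22_m / (wavelength_um * 1e-6)

    print(astigmatism_diopters_to_waves(1.0))  # ~5.73 waves, matching Eq. 8.10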
The orientation of the astigmatism compensation must be adjustable as well, since the
angle varies across the population. This is achieved through use of two plano-cylinder
lenses, each having opposite radii of curvature. When rotated with respect to each other,
and with respect to the interferometer, continuously variable astigmatism compensation is
provided.
There are four primary variables in the design of the cylinder null lenses: the glass type,
the radii of curvature, the lens thickness, and the lens spacings. A glass type of BK7 was
selected based on its availability and preference by the manufacturer. The radii of
curvature, thicknesses and lens spacing all have an effect on the magnitude of
astigmatism that can be created, along with the amount of undesired residual aberration
that is present. Therefore, the system was modeled to analyze the effects of these different
design parameters. A model containing two lenses, each having one planar and one
toroidal surface, was built, with appropriate coordinate breaks to rotate the lenses as
needed. The standard Zernike polynomial fit for the wavefront transmitted through the
two lenses was then calculated.
For this model containing the cylindrical lenses, the only terms within the first eight
Zernike polynomials that varied with the different parameters were Z4, Z5 and Z6 where
according to the conventions used in Zemax:
Z_4 = \sqrt{3}\,(2\rho^2 - 1)                                                          8.11

Z_5 = \sqrt{6}\,\rho^2 \sin 2\theta                                                          8.12

and

Z_6 = \sqrt{6}\,\rho^2 \cos 2\theta.                                                          8.13
The term Z4 represents a combination of power and piston, and Z5 and Z6 are a balance
of astigmatism with power. Each of the Zernike terms corresponds to a balance of
aberrations resulting in minimum wavefront variance. For combinations of Z4, Z5 and
Z6, the wavefront aberration terms for power and astigmatism can be calculated from the
Zernike polynomials by:
W_{20} = 2\sqrt{3}\,Z_4 \pm \sqrt{\left(\sqrt{6}\,Z_5\right)^2 + \left(\sqrt{6}\,Z_6\right)^2}                                                          8.14

and

W_{22} = \mp 2\sqrt{\left(\sqrt{6}\,Z_5\right)^2 + \left(\sqrt{6}\,Z_6\right)^2}.                                                          8.15
The sign of the ‘±’ term in Equation 8.14 is chosen to minimize the absolute magnitude
of W20, and the sign in Equation 8.15 is opposite. In some texts, the Zernike terms are
not multiplied by the √6 and √3 terms. In that case, the Zernike terms discussed are not
normalized, and are sometimes referred to as Zernike Fringe coefficients. Standard and
Fringe Zernike coefficients are related by the normalization factor, and the ordering is
slightly different. More information on the ordering and use of Zernike terms is available
in Chapter 13 of Malacara’s Optical Shop Testing (2007).
Adjusting the lens radii, thicknesses and spacings results in variation in the astigmatism
and power present in the wavefront. For the lens orientations studied (0° and 90°), only
the astigmatism term Z6 is present, so it is all that is used in the astigmatism calculation.
While other higher-order terms within the Zernike fit did have some value, their
magnitude was quite small compared to the power and astigmatism terms, so they are not
considered further in the analysis. The astigmatism correction capability of the
configuration was calculated using Equation 8.15. The residual power error in the
wavefront resulting from the lens configuration is shown in only the Z4 term. The
residual power error, in waves, is calculated by:
W_{20,\,residual} = 2\sqrt{3}\,Z_4                                                          8.16
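The conversion in Equations 8.14 through 8.16 can be expressed compactly in code; the sketch below assumes standard (normalized) Zernike coefficients given in waves, as in the Zemax convention described above:

    import math

    def zernike_to_power_astigmatism(z4, z5, z6):
        """Convert standard Zernike coefficients Z4 (defocus) and Z5/Z6 (astigmatism),
        in waves, into the power (W20) and astigmatism (W22) terms of Eqs. 8.14-8.15.
        The sign of the '+/-' term is chosen to minimize the magnitude of W20."""
        a = math.sqrt((math.sqrt(6) * z5)**2 + (math.sqrt(6) * z6)**2)
        w20_plus = 2 * math.sqrt(3) * z4 + a
        w20_minus = 2 * math.sqrt(3) * z4 - a
        if abs(w20_plus) <= abs(w20_minus):
            return w20_plus, -2 * a   # '+' chosen in Eq. 8.14, so '-' in Eq. 8.15
        return w20_minus, 2 * a

    # With only Z4 present, the result reduces to the residual power of Eq. 8.16
    print(zernike_to_power_astigmatism(0.1, 0.0, 0.0))  # (~0.346, -0.0)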
The following charts show the results of this analysis. Figure 8.4 shows the primary
astigmatism and residual power as a function of the spacing between the two cylindrical
lenses for lenses having radii of curvature of ±1500 mm and a thickness of five
millimeters. Figure 8.5 shows the primary astigmatism and residual power as a function
of the radius of curvature on the cylindrical surface for lenses having a thickness of five
millimeters and spaced 10 mm apart. Figure 8.6 shows the primary astigmatism and
residual power as a function of the lens thickness for lenses having radii of curvature of
±1500 mm and spaced 10 mm apart.
Figure 8.4. Primary astigmatism and residual power vs. lens spacing with axes orthogonally aligned,
for 5mm thick lenses with radii of curvature of ±1500 mm.
Figure 8.5. Primary astigmatism and residual power vs. radius of curvature with axes orthogonally
aligned, for lenses 5 mm thick and spaced 10 mm apart.
Figure 8.6. Primary astigmatism and residual power vs. lens thickness with axes orthogonally
aligned, for lenses with ±1500 radii of curvature and spaced 10 mm apart.
The analysis of the cylindrical lenses design parameters shows that the lens spacing does
not have a large effect on the amount of primary astigmatism, while the residual power
increases with lens spacing. Based on the available motorized rotation mounts, the
nearest the lenses can be spaced is 10 mm apart. The radius of curvature controls the
maximum amount of astigmatism that is induced by the partial null module; however,
more powerful lenses also introduce more residual power. Therefore, the longest radius
of curvature that can provide the required astigmatism should be used. A radius of
curvature of ±1500 mm was chosen in order to provide up to 35 waves (±4.95 D) of
astigmatism compensation while minimizing the residual power introduced by the
system.
The cylindrical lenses have a clear aperture of 25 mm, and have a ±1500 mm radius of
curvature on their respective convex or concave surfaces. The lenses are manufactured
from BK7 glass, and are five millimeters thick. The lenses were manufactured by
Optimax (Rochester, NY). Optimax was selected since they were able to provide the
lowest surface figure error of all vendors contacted. Most vendors could guarantee no
better than about 1λ of figure error on the cylindrical surface, and it was desired to have
better than λ/10 error for use in the TFI. In most cases, the limitations on lens quality
were based on available metrology methods for measuring the cylindrical surfaces.
Optimax was able to meet the tenth wave specification on the concave surface by using a
diffraction grating in conjunction with an interferometer to create a cylindrical wavefront
null for measuring the cylindrical surface. However, the same metrology process was not
possible on the convex surface, so a test plate was used which could only guarantee about
λ/5 error or better. The as-built concave cylindrical surface has a PV error less than λ/20,
well within the manufacturing requirements. The convex cylindrical surface has an error
of about λ/5 PV and λ/15 RMS using the test plate measurement method. All surfaces
were antireflection coated to minimize stray reflections at 785 nm.
Combined Design
The complete null assembly containing the diverger and the partial null cylinders is
shown in Figure 8.7. The distance between the partial null and the diverger is 25 mm.
This is as close as the elements can be spaced with the available mechanical components.
Figure 8.7. Full null assembly including the partial null and diverger lenses
When the two independent null modules are combined as described, there is some
residual aberration which increases as the corneal astigmatism increases. Figure 8.8
displays the wavefront OPD of the null assembly in doublepass when testing on a person
with no corneal astigmatism. The longitudinal focus position was optimized first for
RMS wavefront error and again, separately, to minimize the maximum slope occurring in
the wavefront. While the RMS wavefront error optimization typically provides the best
overall wavefront for image quality, instead optimizing to minimize wavefront slope
reduces the fringe frequency, and is appropriate in interferometer design. The same
process was repeated for an eye having 35 waves of corneal astigmatism (the limit of the
null capability), and the results are shown in Figure 8.9.
Figure 8.8. OPD of the null assembly when testing on a person with no corneal astigmatism. The
focus and cylinder angles were optimized for best RMS wavefront (left) and minimum wavefront
slope (right). Scale is 0.05 waves.
Figure 8.9. OPD of the null assembly when testing on a person with 35 waves of corneal astigmatism.
The focus and cylinder angles were optimized for best RMS wavefront (left) and minimum wavefront
slope (right). Scale is 5 waves.
In order to evaluate how the wavefront and wavefront slope change as a function of
corneal astigmatism, Figure 8.10 shows the residual wavefront error and corresponding
maximum wavefront slope from zero to 35 waves of corneal astigmatism. While there is
clearly some residual aberration resulting from inclusion of the partial null module, that
device allows the interferometer to function over a wider population range. At the
maximum designed accommodation of 35 waves of astigmatism, the residual maximum
wavefront slope is approximately one-fifth of what it would be without the partial null
cylindrical lenses.
Figure 8.10. Residual RMS wavefront error (left) and corresponding maximum wavefront slope
(right) as a function of corneal astigmatism.
The diverger is mounted in a kinematic mount providing tilt and translation in all
directions required for alignment. The partial null cylinder lenses are mounted in cells
contained in motorized rotation stages which allow each to be independently rotated by
computer control. The last surface of the partial null is spaced 25 mm from the first
optical surface of the diverger assembly.
8.1.6 Imaging Lens
The imaging lens in the TFI is a commercially available aplanatic doublet (CVI Laser,
p/n LAP-125.0-25.0-780) with an effective focal length of 125 mm and maximum clear
aperture of 25 mm. As in the in vitro FLI, the beam diameter at the imaging lens is 18
mm, and the lens works at finite conjugates to image the test surface onto the detector.
Again, the diverger must be considered in the analysis of the imaging system, and a
similar configuration as in the in vitro system results in a virtual image within the
diverger. The layout of the imaging system was determined from first order properties in
the same manner as in Chapter 4.2.5, and the result is shown in Figure 8.11. The beam
paths for the interferometer and imaging modes are both shown in that figure. For
clarity, the figure has been unfolded by removing the beamsplitter and fold mirrors within
the path.
Figure 8.11. Unfolded imaging path within the TFI, showing the interferometer mode (top) and
imaging mode (bottom)
For these nominal conjugate locations, along with the in-use pupil diameter, the imaging
system has an effective f/# of f/#w = 6.76. Given the cutoff frequency derived in Chapter
3 (Equation 3.26), the maximum supported fringe frequency by the imaging lens is:
\xi_{max,\,imaging} = \frac{1}{\lambda\,(f/\#_w)} = \frac{1}{(0.785\ \mu\mathrm{m})(6.76)} = 188\ \frac{\mathrm{cycles}}{\mathrm{mm}}                                                          8.17
Given the detector limit of 56 fringes per millimeter, the imaging lens does support fringe
frequencies well beyond the limits of the detector.
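This margin is easy to verify numerically; a minimal sketch using the values quoted above:

    def imaging_cutoff_cycles_per_mm(wavelength_um, f_number):
        """Incoherent imaging cutoff frequency 1/(lambda * f/#) in cycles/mm (Eq. 8.17)."""
        return 1.0 / (wavelength_um * 1e-3 * f_number)

    cutoff = imaging_cutoff_cycles_per_mm(0.785, 6.76)
    print(cutoff, cutoff / 56.0)  # ~188 cycles/mm, more than 3x the 56 fringes/mm detector limit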
While the aplanatic lens has good wavefront quality, the diverger contributes aberration
to the imaging system. The diverger is designed for on-axis performance, with a nominal
focus at the tear film’s center of curvature. When imaging the tear film onto the detector,
the curved, conic object surface results in distortions in the image. As in the in vitro FLI,
the maximum allowable transverse ray aberration to achieve λ/4 accuracy is 4.5 microns.
As the fringe frequency decreases, the requirements on the imaging system are relaxed.
The transverse ray error of the entire imaging system was calculated; Figure 8.12 shows
the transverse ray aberration across the field of the imaging system, with the effects of
distortion removed. The maximum transverse ray error
occurs at the edge of the field, and has a peak to valley magnitude of about 80 microns.
The ray error at the 0.7 field is about 30 microns, and at the center of the field is about 20
microns. The aberrations in the imaging system are dominated by field curvature and
distortion, shown in Figure 8.13. Field curvature is largely the result of a curved object
plane. Distortion is only a change in magnification across the field, and doesn’t result in
any blur in the system. The result of distortion is that positions in the measurement result
do not correspond to the same position on the test surface. For the purpose of the TFI,
this is not a problem since the device is intended only to measure relative differences in
tear film topography over time. Should distortion become problematic at some point, a
calibration can be completed in order to scale the measurement results to their
corresponding positions on the test surface.
Figure 8.12. Transverse ray aberration for the imaging system for field points at a radius of 0mm,
2mm and 3mm on the fluid layer
Figure 8.13. Field curvature and distortion in the TFI imaging system
While the image quality at the edge of the field is outside of the 4.5 micron limit
previously described, the quality is near or within that limit over most of the field.
Furthermore, the aberration limit is derived for the maximum wavefront slope of 250
waves per radius, which represents the limits of the interferometer’s dynamic range.
Figure 8.14 shows the maximum achievable measurement accuracy as a function of
wavefront slope for the TFI imaging system. The values on the y-axis correspond to the
fraction of a wavelength. For example, a y-axis value of 4 corresponds to λ/4 accuracy.
Figure 8.14. Maximum interferometer accuracy as a function of wavefront slope, based on the
transverse ray error at different fields of view in the TFI. A larger value on the y-axis corresponds to
better accuracy (i.e., 4 means λ/4 accuracy).
While the imaging lens performance is not ideal, it is likely good enough to perform the
desired measurements. Typical wavefront slopes are likely less than 50 waves per radius,
which occurs right near the knee of the curves in Figure 8.14 where the accuracy
increases at a faster rate. An improved imaging system with a faster f/# along with
correction for field curvature, astigmatism and distortion could improve the performance,
but a custom design would be required. Such an effort was outside of the budget for the
instrument, so the off-the-shelf aplanatic doublet was instead used. If the eventual on-eye
testing shows a need to increase the system accuracy, the imaging system would be at or
near the top of the list of potential improvements.
8.1.7 Calibration Surface
In order to align the diverger optics and calibrate the interferometer, a “golden surface”
was required. Since the system is designed to test a nominally aspheric surface, spherical
mirrors or ball bearings could not be used for active alignment of the diverger or final
alignment and calibration of the interferometer. The golden surface must match the
nominal tear film geometry with a 7.8 mm radius of curvature and -0.25 conic constant,
and have figure error better than the desired accuracy of the interferometer.
After contacting a number of potential manufacturers for this part, QED (Rochester, NY)
was identified as the best option. Through their magnetorheological finishing (MRF)
method, QED was able to start with a base sphere nearly matching the desired geometry
and aspherize it to the form required for this work. The as-built golden surface has a
figure error of roughly λ/20 PV and λ/100 RMS from the nominal conic shape (at the 633
nm HeNe wavelength). The metrology results from the golden surface are shown in
Figure 8.15.
Figure 8.15. Figure error in the golden surface. PV error is 0.0515λ and RMS is 0.0097λ at
λ=632.8nm
8.1.8 Photographs
The following figures show photographs of the TFI system. Figure 8.16 shows the core
interferometer in the TFI, with the beam path overlaid, Figure 8.17 shows the TFI
enclosed along with the motion control stages, and Figure 8.18 shows the subject’s view
as they approach the interferometer.
Figure 8.16. Photograph of the TFI core interferometer showing the beam path
Figure 8.17. Photograph of the TFI system
Figure 8.18. Subject's view in the TFI
8.2 Laser Safety Strategy
The laser within the TFI is a Class 3B device with a maximum power output near 100 mW.
The power is user-adjustable, though the laser is most stable when operated at 100%.
Therefore, the laser is always operated at full power, and optics internal to the TFI limit
the power exiting the system to safe levels. This configuration also prevents someone
from accidentally turning the laser power up to unsafe levels during a measurement. The
laser power emitted from the diverger of the interferometer varies depending on settings
of the continuously variable neutral density filter (CVND) and half-wave plate (HWP) in
the TFI. The HWP orientation controls the fraction of light that enters each arm of the
interferometer in order to maximize fringe visibility, while the CVND controls the total
amount of power incident on the HWP. In order to adjust these components without an
eye in place, the glass calibration surface was aligned to the interferometer. Since the
reflectance of the glass is slightly higher than the tear film, fluid was applied to the glass
surface to simulate the reflectance of the tear film. The live interferogram was viewed
and the orientation of the HWP was adjusted to maximize the fringe visibility. The glass
surface was then removed and replaced with a silicon photodetector (Newport p/n 918DSL-OD2) connected to a dual-channel power meter (Newport p/n 2936-C), and the
CVND was adjusted so that the power exiting the interferometer is 0.5 mW. This power
is half of the level approved by the university IRB, and roughly 1/60th of the safety limit
for the TFI configuration (discussed in detail in Chapter 9). The TFI is enclosed in a box
which requires tools and intent to open; when properly used, there is no way someone
could accidentally turn up the laser power during a measurement. Once the CVND and
HWP are adjusted, the enclosure is closed and locked. The entire process is repeated
anytime it is necessary to adjust those components. The photodetector and power meter
are both calibrated to NIST traceable standards annually by the supplier.
Per the university-approved testing protocol, the laser power exiting the interferometer
must be measured prior to all measurements on a live eye. To do this, a photodetector is
placed at the focus of the diverger and the power meter is used to measure the output power.
The interferometer is translated in all directions until the power reading is at its maximum
level, and the power must be less than 1.0 mW to proceed with the measurement.
In addition to measuring the power prior to every session, the laser power is continuously
monitored during a measurement. The pick-off beamsplitter in the interferometer divides
the laser power between the interferometer and an internal silicon photodetector. This
detector is connected to the second channel of the dual channel power meter previously
discussed. Although the power at the internal photodetector is not equal to the laser
power incident on the eye, the system is calibrated so that internal power measurement
corresponds to a known laser power at the eye. This calibration must be repeated every
time the HWP is adjusted.
In addition to viewing the laser power on the display, the raw analog signal from the
detector is captured by an HP 34970A Data Acquisition/Switch (DAS) unit. The DAS is
connected to a safety shut-off/interlock circuit shown in Figure 8.19 and described below.
The DAS unit is programmed to trigger an alarm signal if the input voltage from the
photodetector exceeds the safe power level. The alarm line is a pin-out normally having
a +5V TTL voltage out. When the alarm is triggered, the alarm line voltage falls to 0V.
The alarm line is connected to a normally open relay which closes for an input TTL
voltage of +5V. If the laser power exceeds its limits and the alarm is triggered, the
resulting 0V signal opens the relay. The relay is connected to both a laser interlock
circuit and the high speed shutter control. When the 0V signal opens the relay, the laser
is turned off and the shutter closes. This safety trigger and interlock system is designed
so that if any of the devices are powered off, the laser turns off and shutter closes. The
laser and shutter cannot be reactivated until the operator physically resets the alarm line
through adjustment of the DAS control interface. Therefore even if the laser power just
temporarily spikes, the entire system remains turned off. As was done with correlating
the internal power measurement to the incident power on the eye, a calibration routine
determined the relationship between the voltage monitored by the DAS and the on-eye
incident power.
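The logic of this monitor can be summarized with a short sketch. The calibration constants and function names below are hypothetical placeholders, and in the real system the protection is provided by the DAS alarm hardware and relay described above, not by software:

```python
# Minimal sketch of the power-monitoring logic, assuming a linear calibration
# between the internal detector voltage and the on-eye power.  The threshold,
# calibration constants, and function names are illustrative, not the values
# used in the actual TFI.

ON_EYE_LIMIT_MW = 1.0          # maximum allowed power at the eye (mW)
CAL_SLOPE_MW_PER_V = 2.5       # hypothetical calibration slope (mW per volt)
CAL_OFFSET_MW = 0.0            # hypothetical calibration offset (mW)

def on_eye_power(detector_voltage_v: float) -> float:
    """Convert the internal detector voltage to an estimated on-eye power (mW)."""
    return CAL_SLOPE_MW_PER_V * detector_voltage_v + CAL_OFFSET_MW

def check_and_latch(detector_voltage_v: float, alarm_latched: bool) -> bool:
    """Return True if the alarm is (or becomes) latched.

    Once latched, the alarm stays latched until an operator resets it,
    mirroring the behavior of the hardware interlock.
    """
    if alarm_latched:
        return True
    return on_eye_power(detector_voltage_v) > ON_EYE_LIMIT_MW

if __name__ == "__main__":
    latched = False
    for v in [0.10, 0.15, 0.60, 0.12]:     # example voltage readings
        latched = check_and_latch(v, latched)
        print(f"{v:.2f} V -> {on_eye_power(v):.2f} mW, alarm latched: {latched}")
```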
The laser safety strategy uses passive monitoring to ensure the laser power is below the
threshold for safe use both prior to beginning the measurement and during it. Should the
power exceed the limits, there are redundant controls between the shutter and laser
interlock system that block access to the laser power. Furthermore, the laser itself cannot
be turned on until all systems are running and report normal operation via the alarm line.
The system is enclosed to block access to higher laser powers, and requires tools and
intent to open in order to eliminate accidental changes to the system.
Figure 8.19. Safety shut-off and interlock circuit showing normal operation. When an alarm is
triggered, the relay opens breaking the laser and shutter control circuits, turning off the laser and
closing the shutter.
8.3 Interferometer Calibration, Accuracy and Precision
The interferometer was calibrated in the same manner as described in Chapter 4.4.
Rather than use a ball bearing, the golden calibration surface matching the nominal
geometry of the eye was used. As in the in vitro calibration, ten measurements of the
calibration surface were made. In the first, the interferometer was aligned to the mounted
calibration surface, and a measurement was taken. The mount was then removed,
replaced, and the interferometer realigned for the remaining nine measurements. The
PVq and RMS surface heights for each of the ten measurements are shown in Table 8.1.
As before, a Q-value of 99.5% was used, so the peak to valley surface was calculated
over the median 99.5% of the data points. Piston, tilt and power were removed from each
measurement.
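As an illustration of how these metrics are computed, a minimal sketch using NumPy is shown below; it assumes the surface is a 2-D height map in waves with NaN outside the aperture, and is not the 4Sight analysis code:

```python
import numpy as np

def pvq_and_rms(surface, q=0.995):
    """Compute PVq and RMS of a surface height map (2-D array, NaN = no data).

    PVq is the peak-to-valley computed over the central q fraction of the
    sorted height values; RMS is the root-mean-square height.  Sketch only.
    """
    h = surface[np.isfinite(surface)].ravel()
    lo, hi = np.percentile(h, [50 * (1 - q), 100 - 50 * (1 - q)])
    rms = np.sqrt(np.mean(h ** 2))
    return hi - lo, rms

def remove_piston_tilt_power(surface):
    """Least-squares removal of piston, tilt and power (defocus) terms."""
    ny, nx = surface.shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    mask = np.isfinite(surface)
    basis = np.column_stack([np.ones(mask.sum()), x[mask], y[mask],
                             x[mask] ** 2 + y[mask] ** 2])
    coeffs, *_ = np.linalg.lstsq(basis, surface[mask], rcond=None)
    fit = np.full_like(surface, np.nan)
    fit[mask] = basis @ coeffs
    return surface - fit

if __name__ == "__main__":
    # Synthetic example: smooth surface plus small random figure error (waves)
    y, x = np.mgrid[-1:1:256j, -1:1:256j]
    surf = 0.05 * (x ** 2 + y ** 2) + 0.01 * np.random.randn(256, 256)
    surf[x ** 2 + y ** 2 > 1] = np.nan          # keep a circular pupil
    residual = remove_piston_tilt_power(surf)
    print(pvq_and_rms(residual, q=0.995))
```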
Table 8.1. Calibration measurement results. All quantities in waves.
Measurement            Raw PVq    Raw RMS
1                      0.1657     0.0322
2                      0.1632     0.0314
3                      0.1618     0.0318
4                      0.2031     0.0391
5                      0.1660     0.0314
6                      0.1919     0.0381
7                      0.1638     0.0308
8                      0.1650     0.0309
9                      0.1636     0.0308
10                     0.1578     0.0308
Average                0.1702     0.0327
Standard Deviation     0.0148     0.0031
After the 10 raw measurements were recorded, they were averaged together in order to
calculate the interferometer’s accuracy. The resulting average surface is displayed in
Figure 8.20. The PVq surface height of the average measurement is 0.1479 waves, and
the RMS height is 0.0276 waves. Based on these values along with the data in Table 8.1,
the uncalibrated accuracy and repeatability capabilities for the TFI are shown in Table
8.2. The accuracy is determined by the error in the average measurement, and
repeatability by the standard deviation of the 10 calibration measurements.
Figure 8.20. Average of the raw calibration measurements, representing the uncalibrated accuracy
of the TFI. The PVq surface height is 0.1479 waves, and the RMS height is 0.0276 waves.
Table 8.2. Uncalibrated accuracy and repeatability of the Tear Film Interferometer

PVq Uncalibrated Accuracy     0.14788 λ
RMS Uncalibrated Accuracy     0.0276 λ
PVq Repeatability             0.014826 λ
RMS Repeatability             0.003138 λ
The calibrated accuracy of the TFI is determined by subtracting the averaged reference
measurement from each of the ten calibration measurements, as shown in Figure 4.17 of
Chapter 4. The difference between each of the 10 raw measurements and the average
shown in Figure 8.20 are listed in Table 8.3, showing the PVq and RMS surface heights
of each result.
Table 8.3. Difference between the raw calibration measurements and the average of those
measurements
Measurement            Difference PV    Difference RMS
1                      0.0739           0.0138
2                      0.1743           0.0316
3                      0.0640           0.0123
4                      0.0666           0.0127
5                      0.1285           0.0241
6                      0.0636           0.0113
7                      0.1305           0.0222
8                      0.0680           0.0123
9                      0.0694           0.0127
10                     0.0669           0.0121
Average                0.0906           0.0165
Standard Deviation     0.0392           0.0070
As was done previously, these surfaces are averaged in order to determine the calibrated
measurement accuracy. The average surface representing the calibrated interferometer
accuracy is shown in Figure 8.21. The PVq surface height is 0.0067 waves, and the RMS
surface height is 0.0012 waves.
Figure 8.21. Average of the reference-subtracted calibration measurements, representing the
calibrated accuracy of the TFI. The PVq surface height is 0.0067 waves, and the RMS height is
0.0012 waves.
The calibrated accuracy and repeatability of the TFI are shown in Table 8.4 below. The
accuracy, in both PVq and RMS, has improved with the calibration. The repeatability
has remained the same. This is expected since repeatability is affected by random or
cyclical instrument errors, which cannot be calibrated out. While the measurement
uncertainty resulting from repeatability cannot be removed with calibration, averaging
over many measurements will mitigate their effects.
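The reference-subtraction procedure can be sketched in a few lines; this is illustrative only, and the array shapes and names are assumptions:

```python
import numpy as np

def calibration_stats(measurements):
    """Sketch of the reference-subtraction calibration described in the text.

    'measurements' is a list of 2-D surface maps of the calibration surface
    (same grid, NaN outside the aperture).  The average map serves as the
    reference; the reference-subtracted maps characterize the calibrated
    accuracy, while their spread reflects measurement-to-measurement
    variation.  Illustrative only.
    """
    stack = np.stack(measurements)               # shape (N, ny, nx)
    reference = np.nanmean(stack, axis=0)        # averaged reference surface
    differences = stack - reference              # reference-subtracted maps
    rms_each = [np.sqrt(np.nanmean(d ** 2)) for d in differences]
    return reference, np.mean(rms_each), np.std(rms_each, ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = [0.03 * rng.standard_normal((64, 64)) for _ in range(10)]
    _, mean_rms, std_rms = calibration_stats(maps)
    print(f"mean RMS difference: {mean_rms:.4f}, std: {std_rms:.4f}")
```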
Table 8.4. Calibrated accuracy and repeatability of the TFI
PVq Calibrated Accuracy     0.0906
RMS Calibrated Accuracy     0.0165
PVq Repeatability           0.0392
RMS Repeatability           0.0070
To this point, all calibration routines have been performed with the partial null cylindrical
lenses aligned to contribute no astigmatism to the system; only the residual wavefront
error from the components is present. The next stage of the calibration was to run the
cylinder lenses through their range of motion to determine the amount of primary
astigmatism introduced to the wavefront along with the amount of undesired residual
aberration as the astigmatism is varied. Figure 8.22 shows the amount of primary
astigmatism (W222) introduced as a function of the difference between the cylindrical lens
orientations. An angle of 0° or 180° between the cylinder lenses indicates the axes are
aligned and the introduced astigmatism is minimized. An angle of 90° or 270° indicates
the axes are orthogonal, and induced astigmatism is maximized. Figure 8.23 shows the
residual aberration introduced by the partial null module as the amount of astigmatism
compensation is varied. Piston, tilt, power and primary astigmatism were subtracted
from the measurement and the remaining components are the residual aberration.
Although power can be considered as a component of the residual aberration, it was
subtracted since it is corrected by proper longitudinal adjustment of the interferometer
during testing.
[Plot: primary astigmatism (waves of astigmatism, 0 to 35) versus cylinder angle difference (0° to 350°)]
Figure 8.22. Astigmatism introduced by the partial null system. An angle of 0° indicates the cylinder
axes are aligned, and ideally no astigmatism is present.
[Plot: residual aberration (waves, PVq and RMS curves) versus cylinder angle difference (0° to 350°)]
Figure 8.23. Residual aberration introduced by the partial null system as the rotation between the
lenses is varied. The PVq (q=99.5%) and RMS are both shown.
As expected, the partial null module is capable of introducing approximately 30 waves of
astigmatism compensation. The residual aberration in the system increases with the
amount of astigmatism introduced. At 0° and 180° the wavefront error in the
interferometer is identical to the uncalibrated results previously discussed.
8.4 Human Interface and Motion Control
During a measurement, the subject is stabilized using a commercially available
ophthalmic head rest. The head rest is the “Ultra Precision Head Positioner” sold by
Arrington Research (Scottsdale, AZ). This device, pictured in Figure 8.24, consists of a
chin cup, forehead rest, and head stabilization bars. The forehead rest also has a nose bridge similar in form to eyeglasses, which helps stabilize the head position. The
stabilization bars rest against the side of the head to limit rotation of the head. This
positioner should keep the eye stable to within 100 microns, based on the studies
discussed in Chapter 7. If it becomes necessary to stabilize the head better than this
mount allows, the chin rest can be replaced with a dental impression bite bar. Bite bars
provide greater stability, but are less comfortable for the subject and require additional
processes for safe human subjects use.
Figure 8.24. Photographs of the head positioner
In order to keep the eye in a stable position, it is important that the subject be
comfortable. Therefore, the motion control strategy in the TFI is to allow the subject to
sit in front of the instrument in a comfortable position, and then align the interferometer
to the eye. In order to achieve this, the entire interferometer was built on a vertically
adjustable table. In this way, the entire system can be adjusted to be at a comfortable
position for subjects of different height. The interferometer is built on a platform affixed
to a two-axis horizontal translation stage in order to provide lateral and longitudinal
alignment of the system. The head rest itself is mounted on a vertical translation stage in
order to vertically adjust the eye’s position. While this means that the subject will move
slightly, no stages capable of lifting the entire interferometer were on hand.
The total translation of the eye is less than a centimeter, and will not pose any discomfort
to potential subjects. All of the stages are computer controlled so that the interferometer
can be remotely adjusted by the operator. A GUI written in IDL is used for this purpose.
The stages and drive control used for horizontal and vertical translation were selected
since they were on-hand in the labs. Without completing some feasibility studies on
human subjects, the requirements driving the stage design are somewhat unknown. For
example, if it turns out that 50 microns of head movement is common, it does not make
sense to use stages with one micron positional resolution. Also, at this point it is not
clear if repeatability of stage position is important, since the entire system will be
realigned to the subject for every measurement, and their position in the commercial head
positioner is likely less repeatable than even low-precision stages. Rather than
purchasing new stages, testing of the TFI with on-hand equipment provides insight into
what the future generation motion control system requirements should be.
The three motorized stages are all built by New England Affiliated Technologies
(NEAT), and are controlled by the NEAT 3000 drive electronics system which
communicates with a PC via serial communications. Linear translation is accomplished
with the NEAT XYL-1515-SM which provides two axis translation up to 12 inches in
each direction. That stage is capable of supporting a load up to 800 lbs, well over the
weight of the TFI. The motion is driven by stepper motors having 200 steps per
revolution. Coupled with the lead screw included in the stages, a full step results in a
translation of 10 microns. The motors can be ordered to move in half-step increments,
providing a maximum positional resolution of 5 microns. The vertical lift stage mounted
to the head positioner is also a NEAT stage, though the model number is unknown. The
vertical lift stage is capable of providing roughly two inches of translation, and supported
roughly 200 lbs during empirical testing. The lead screw in the vertical stage has a more
coarse resolution than the XY tables, resulting in 15 micron single step and 7.5 micron
half step resolutions.
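As a simple illustration of the step-size arithmetic above, the sketch below converts a requested translation into half steps; the rounding scheme is an assumption and is not part of the NEAT 3000 command set:

```python
# Sketch of converting a requested translation into stepper-motor half steps,
# using the resolutions stated above (10 um full step / 5 um half step for the
# XY tables).  Values and names are illustrative.

def microns_to_half_steps(translation_um: float, half_step_um: float = 5.0) -> int:
    """Round a requested translation to the nearest whole number of half steps."""
    return round(translation_um / half_step_um)

if __name__ == "__main__":
    for move in (12.0, 100.0, 7.5):
        n = microns_to_half_steps(move)
        print(f"request {move} um -> {n} half steps -> actual {n * 5.0} um")
```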
8.5 Eye Tracker
The eye tracker is the commercially available ViewPoint Eye Tracker® from Arrington
Research, Inc. (Scottsdale, AZ). This package includes an infrared illuminator, camera
and ViewPoint software. The eye tracker operates in a dark-pupil mode, where the light
source illuminates the cornea from an angle of roughly 45° under the eye. When the
camera is similarly located beneath the eye, the ocular pupil acts as an infrared sink and
no light is reflected back to the camera from that region. The image of the eye captured
by the camera contains the dark pupil along with a specular reflection on the cornea.
After capturing the image, an ellipse is fit to the pupil and the vector between the center
of the pupil and the corneal glint is calculated. Infrared light is used since the reflectance
characteristics of the anterior segment at infrared wavelengths allow for easy
identification of the pupil. The layout of the eye tracker and an example image of the eye
using an illuminator and camera in this configuration are shown in Figure 8.25.
Figure 8.25. Arrangement of the eye tracker (left), and corresponding infrared illuminated eye
(right) showing the dark pupil and corneal glint directly beneath the pupil (Image of the eye from:
Arrington Research, 2010).
By determining the center of the pupil ellipse, the direction of gaze is calculated.
However, this calculation alone is sensitive to head movement. If the head moves, the
eye tracking software will interpret the movement as a change in gaze direction. In order
to account for head movement, the location of the corneal glint is used as well. The pupil
center and corneal glint move together when the head moves, and move independently
when the eye moves. Eye movements can be isolated by determining the difference
between the two signals, which is known as the vector difference method in eye tracking.
The Arrington Research device has an accuracy in determining the gaze direction of at
least 0.25°.
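The vector difference idea can be illustrated with a few lines of Python; this is a conceptual sketch, not the Arrington ViewPoint algorithm:

```python
import numpy as np

def gaze_vector(pupil_center, corneal_glint):
    """Vector-difference gaze estimate: pupil center minus corneal glint.

    Both inputs are (x, y) image coordinates in pixels.  Head motion moves the
    pupil and the glint together, so the difference isolates eye rotation.
    """
    return np.asarray(pupil_center, float) - np.asarray(corneal_glint, float)

if __name__ == "__main__":
    # Pure head translation: both features shift by the same amount,
    # so the gaze vector is unchanged.
    before = gaze_vector((320.0, 240.0), (322.0, 260.0))
    after = gaze_vector((330.0, 245.0), (332.0, 265.0))
    print(before, after)              # identical vectors
    # Eye rotation: the pupil center moves relative to the glint.
    rotated = gaze_vector((326.0, 240.0), (322.0, 260.0))
    print(rotated)
```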
By running the eye tracker while capturing interferometric data, the movements of the
eye during testing are recorded. If a feature in the tear film appears to translate during a
measurement, the eye tracking data can be compared in order to determine if the feature
moved due to tear film behavior or eye movement. If excessive movements occur during
the measurement, the eye tracking data can be used to register the topographic
measurement results to compensate for ocular movement.
Data acquisition with the interferometer and eye tracking software is synchronized with a
Python script that runs within the 4Sight scripting interface. The Python script remotely
controls the eye tracking software, enabling eye tracker data acquisition. When
4Sight begins collecting data, a note is made in the eye tracking data file indicating the
beginning and end of the interferometric measurement set.
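The synchronization concept is sketched below. The function names are hypothetical placeholders; the actual TFI script drives 4Sight and the ViewPoint software directly rather than a simple log file:

```python
# Conceptual sketch: bracket an interferometric acquisition with time-stamped
# markers in an eye-tracking log so the two data streams can be aligned later.

import time

def insert_marker(log_path: str, label: str) -> None:
    """Append a time-stamped marker line to the eye-tracking log."""
    with open(log_path, "a") as f:
        f.write(f"{time.time():.3f}\tMARKER\t{label}\n")

def run_measurement(log_path: str, acquire_interferograms) -> None:
    """Mark the start and end of an interferometric acquisition."""
    insert_marker(log_path, "interferometry_start")
    acquire_interferograms()                    # placeholder for the capture step
    insert_marker(log_path, "interferometry_end")

if __name__ == "__main__":
    run_measurement("eyetracker_log.txt", lambda: time.sleep(0.1))
```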
8.6 Fixation Target
A fixation target is necessary to provide potential subjects a target to stare at in order to
keep the eye in a stable position. To simplify the design of the interferometer, the
fixation module was designed as a stand-alone system. The interferometer is designed to
measure the non-dominant eye while providing the fixation target to the dominant eye.
Normally, when a person’s dominant eye fixates on a target, the non-dominant eye will
remain stable as well. Since most people are right-eye dominant, the interferometer is
usually arranged to provide fixation to the right eye and measure the tear film on the left
eye. The system can be modified to reverse that configuration, if necessary.
The fixation module consists of a target plane, collimating lens, and fold optics which
direct a collimated target to the fixating eye. A layout of the fixation target with respect
to the interferometer is shown in Figure 8.26. A white light LED is directed onto the
target plane in order to illuminate the fixation target to the preference of the subject. The
collimating lens is a 250 mm focal length achromatic doublet (Thorlabs p/n AC508-250A-ML) located one focal length from the target, allowing relaxed (unaccommodated)
viewing of the fixation target. The entire module is mounted on a linear rail, allowing for
adjustment to match the subject’s interocular distance.
Figure 8.26. Layout of the fixation module
Since the system has not been tested on human subjects at this time, it is unknown what
sort of target will provide the best fixation ability. Some research has shown that fixation
targets covering a field of view on the order of 50 minutes of arc minimize the frequency
and extent of saccadic eye movement (Steinman, 1965). However, the best type of
of fixation target is often debated. In some applications a dot or crosshair is used, while
in others a natural scene is appropriate. As part of the eventual human subjects study on
the TFI, different fixation targets will be used. A variety of targets can be simply printed
out and located in the target holder. For reference, a target having a diameter of 3.6 mm
corresponds to an angular field of 50 arc minutes.
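This figure follows directly from the 250 mm collimator focal length; a quick check:

```python
import math

# Angular subtense of a 3.6 mm target at the focal plane of the 250 mm
# collimating lens: angle = diameter / focal length.
diameter_mm = 3.6
focal_length_mm = 250.0
angle_arcmin = math.degrees(diameter_mm / focal_length_mm) * 60.0
print(f"{angle_arcmin:.1f} arc minutes")       # ~49.5, i.e. roughly 50 arcmin
```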
Three example targets are shown in Figure 8.27 below. The first target is a photograph of
a fountain in front of Old Main at the University of Arizona. This target provides a
natural scene with a central focal point in the fountain. This scene can provide a relaxed
viewing condition, and the subject would be asked to focus on the center of the fountain.
The second target is a photograph of railroad tracks vanishing in the distance. The tracks
provide some guidance to the eye, drawing the gaze direction towards their vanishing
point. Finally, the bullseye
is an artificial target where the subject would be asked to fixate or stare at the center of
the pattern (at the arrow tip). The size of this target can be adjusted when printed out to
cause the different circular zones to cover a desired field of view. These targets will be
tested during eventual human trials with the TFI to determine which provides the most
stable gaze position.
Figure 8.27. Potential fixation targets
8.7 Light Budget
Given that the interferometer is testing on an eye, the light budget in the system was of
concern. In most other interferometer applications, the laser power can be increased until
sufficient light is reflected from the test part to meet the detector’s requirements. In this
system, the laser power incident on the eye is limited to 1.0 mW (the details of the laser
hazard analysis are provided in Chapter 9). Therefore, an analysis determining whether this
power is sufficient to meet the specifications of the 4D Technology camera was completed
prior to the system design.
When 1.0 mW is incident on the tear film having a reflectance of 2.4%, only 24 µW of
laser power is reflected. Since all optics in the interferometer are laser-line coated for
λ=785nm, the transmission through the interferometer back to the detector is
conservatively estimated to be 90%. Therefore, at most 21.6 µW of power reaches the
phase mask prior to the detector. Conservatively estimating the transmission through the
polarizers in the phase mask to be 35%, the power reaching the detector is 7.56 µW.
The sensor in the detector is a Kodak KAI-1010 with a full-well capacity of 50,000
electrons, and 1,000 x 1,000 pixels. When a circular pupil is imaged onto the square
detector so that its diameter matches the width of the detector, only 79% of the
pixels are used. This results in a power per pixel of 9.57x10^-12 W from only the test arm.
Assuming the irradiance of the test and reference arms are matched, the power incident in
a bright fringe is:
Pbright = 4 × Ptest = 4 × (9.57x10^-12 W) = 3.83x10^-11 W per pixel
8.18
For an integration time of 1 ms, this power corresponds to an incident energy of
3.83x10^-14 J/pixel. At λ=785 nm, a photon has an energy of:
Ephoton = hc/λ = 2.53x10^-19 J
8.19
Given the energy incident on a bright-fringe pixel, along with the energy per photon,
there are 151,266 photons incident on a pixel in a 1 ms integration time. The KAI-1010
detector claims a quantum efficiency of at least 11%, meaning there are 16,639 electrons
per pixel in a single integration time. The camera has adjustable gains of G = 1, 2, 4, or
8. For a gain of one, the pixels on a bright fringe are 33% saturated, 66% saturated for
G=2 and completely saturated for G=4. Maximum visibility is increased as the percent-
saturation increases, meaning there is sufficient power in the system to provide adequate
light at the detector.
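The chain of estimates in this section can be reproduced with a short calculation; the values are taken from the text, and this is a consistency check rather than part of the instrument software:

```python
# Light-budget check for the bright-fringe signal at the detector.
P_incident = 1.0e-3                # W, power incident on the tear film
R_tear = 0.024                     # tear film reflectance
T_optics = 0.90                    # conservative interferometer transmission
T_polarizer = 0.35                 # conservative phase-mask polarizer transmission
pixels_used = 0.79 * 1000 * 1000   # circular pupil inscribed in 1000 x 1000 array

P_detector = P_incident * R_tear * T_optics * T_polarizer     # ~7.6 uW
P_per_pixel = P_detector / pixels_used                        # test arm only
P_bright = 4.0 * P_per_pixel                                  # matched-arm bright fringe

t_int = 1.0e-3                                                # s, integration time
E_photon = 6.626e-34 * 2.998e8 / 785e-9                       # J per photon at 785 nm
photons = P_bright * t_int / E_photon
electrons = photons * 0.11                                    # 11% quantum efficiency
print(f"{P_per_pixel:.2e} W/pixel, {photons:.0f} photons, "
      f"{electrons:.0f} e- ({electrons / 50000:.0%} of full well at G=1)")
```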
Testing of the interferometer with the glass calibration surface shows that the calculations
provided are conservative. An incident power of 0.5 mW on the 4% reflective glass
surface saturated the detector for a bright fringe at G=2. The estimates for transmission
of the optics and polarizers used in the previous calculations are likely less than what
actually exists, and it is possible that the detector is slightly more efficient than 11% at
the system wavelength. Regardless, both the calculations and empirical testing show that
sufficient optical power is present to safely measure the tear film.
8.8 TFI Design Summary
An interferometer for the in vivo measurement of tear film topography has been designed
and built. The system consists of a core interferometer, along with additional systems
necessary for safe testing on humans. The interferometer has been designed to
accommodate a range of ocular geometries, and can measure tear film behavior both with
and without contact lenses in place. The following chapters discuss the optical hazard
analysis in the TFI along with the simulated performance analysis that was completed
prior to readiness for human testing.
9 OPTICAL HAZARD ANALYSIS FOR AN ON-EYE INTERFEROMETER
One of the major challenges in reengineering the in vitro system to be practical for
human testing was evaluating the optical hazards associated with an on-eye
interferometer. This evaluation determined the allowable ocular exposure limits for both
the laser illumination from the interferometer along with the exposure from the LED
illumination utilized by the eye tracker and fixation target modules. The most difficult of
these is evaluating the laser safety standards as they apply to the Tear Film Interferometer
(TFI) along with other potential ocular interferometers. While the standards taught in a
typical laser safety course seem relatively straightforward, they often do not apply to this
system. The safety standards behind the red and black or yellow signs so often seen on the
doors of optical laboratories typically address accidental exposure to a
collimated laser beam, and are in place to prevent damage to the retina. In that case, it is
assumed that if, for example, a visible laser strikes someone in the eye their blink reflex
would cause them to block the laser or look away within a short time, or that natural eye
movement eliminates the risk of a high irradiance spot forming on the retina for a long
period of time. In an instrument designed for clinical use however, the subject will be
asked to fixate into the laser beam while keeping their eyes open. The assumptions used
to derive the most common and well known laser safety standards no longer apply.
Further complicating the laser safety analysis is the fact that a number of safety standards
exist and all tend to vary slightly from one another. Major contributors in the United
States are the American National Standards Institute (ANSI, 2007), the Center for
Devices and Radiological Health (CDRH, 1995), the US Food and Drug Administration
(US FDA, 2007) and the American Conference of Governmental Industrial Hygienists (ACGIH,
2010; ACGIH, 2007). In the U.S. the most commonly used of these is likely the ANSI
Z136.1-2007 publication (ANSI, 2007) which is referenced by most laser safety courses,
including the version at the University of Arizona. Internationally, the situation becomes
even more convoluted, with the International Commission on Non-Ionizing
Radiation Protection (ICNIRP, 2000; ICNIRP, 1997), World Health Organization (WHO,
1982), International Standards Organization (ISO, 2007) or International Electrotechnical
Commission (IEC, 2007). Each of these standards bodies has published its own set of
standards, and they often differ or address certain situations that others do not.
The purpose of this chapter is to describe the laser and optical device safety standards
that apply to the Tear Film Interferometer and how those standards affect the design of
the system. While the specific discussion will apply to the already designed system, the
same concepts and analysis can be used in the design of other systems introducing a laser
to the eye.
9.1 Classes of Laser Systems
The ANSI standards (ANSI, 2007) break laser systems into different classes, or
subgroups, based on their risk. It should be noted that the class defines the system as a
whole, and not the internal components. For example, a laser system could contain a
Class 3 laser internally but as long as the accessible radiation (i.e., that which the system
emits) meets the Class 1 requirements, the system could be classified as a Class 1 laser
system.
Class 1 Laser Systems
These systems are incapable of producing damaging radiation levels and are exempt from
control measures other than surveillance. Lasers in this class may operate at any
wavelength, as long as their output is below the Class 1 Accessible Emission Limit. The
maximum allowable power level for a Class 1 device depends on factors such as the
wavelength and duration of exposure (either accidental or intentional).
The limits set in the ANSI Standard include at least a 10x safety factor below the retinal
hazard point, defined as the point where there is a 50% probability of damage to the
retina (ANSI, 2007). Any on-eye interferometer must operate at a Class 1 level, among
other requirements, to be considered a non-significant risk device safe for human subjects
applications.
Class 2 Laser Systems
These laser systems only operate in the visible wavelengths between 400 and 700
nanometers. At these power levels, the natural human aversion response (blinking,
turning head away, etc) provides enough protection to guard against damage. Lasers in
this class emit power higher than Class 1 systems, but less than 1 mW. Laser systems
with wavelengths outside of the visible regime cannot be classified as Class 2 systems
since the aversion response is not a factor. Additionally, systems used for intentional
staring where a person will be asked to fixate into the laser beam cannot be classified as
Class 2 since the exposure duration is now longer than the aversion response.
Class 3 Laser Systems
These lasers may be hazardous when viewed under both direct and specular reflections,
but normally do not produce a diffuse reflection threat. These lasers may operate at any
wavelength, and have powers larger than 1 milliwatt but less than 0.5 watts.
Class 4 Laser Systems
These lasers are hazardous to the eye or skin with both direct and diffuse exposures, and
may pose a fire hazard. Class 4 lasers may operate at any wavelength and have powers
greater than 0.5 watts.
9.2 Tear Film Interferometer Optical Sources
There are three light sources in the TFI system that must be considered for the optical
hazard analysis: the interferometer’s laser, the near infrared LEDs used by the eye tracker
and the white light LED for the fixation target.
As discussed in Chapter 8, the laser is a frequency stabilized diode system with a
wavelength of 785 nm, falling in the IR-A spectrum. This wavelength is slightly visible
to most humans and appears as a deep cherry-red color. The laser is capable of
producing power outputs of over 100mW, but is internally limited to have a maximum
output power of 1.0 mW leaving the interferometer. In its nominal configuration the
interferometer focuses the beam at the tear film’s center of curvature where the exiting
beam has a cone angle of 45.3 degrees, or 790 milliradians. The geometry of the laser
exiting the interferometer is shown in Figure 9.1 for the test beam incident on an eye,
modeled with the Arizona Eye Model (Schwiegerling, 2004). The eye model is included
in the figure in order to show the out-of-focus laser spot size on the retina when the
system is properly aligned.
Figure 9.1: Geometry of the accessible laser beam exiting the tear film interferometer used for laser
safety calculations
The illuminator used by the eye tracker consists of six 880 nanometer wavelength LEDs
that illuminate the eye from an angle of roughly 45 degrees beneath it, as required for
dark pupil eye tracking. A schematic showing the size and spacings of the LED system
along with a photograph is shown in Figure 9.2. The manufacturer of the eye tracking
system specified a maximum irradiance from this module to be well below 10 mW/cm2.
The irradiances at five locations from the LED illuminator were measured and the results
are shown in Table 9.1. Nominally the LED illuminator is located roughly 80 mm from
the eye, although it is conceivable that the distance could be as small as 40 mm during
alignment. Even when the power meter was placed as close as possible to the LEDs, the
irradiance was less than 10 mW/cm2, within the limits quoted by the supplier.
Figure 9.2. Photograph of the LED system with centimeter scale (left) and size and spacings of the
individual LED elements (right)
Table 9.1. Irradiance measurements of the LED illuminator for the eye tracking system

Distance from LED     Irradiance (mW/cm2)
5 mm                  5.7 (limited by FOV)
20 mm                 1.8
40 mm                 0.88
60 mm                 0.34
80 mm                 0.19
The second LED in the interferometer is the white light LED used to illuminate the
fixation target in order to view the scene at a comfortable level. The light from the LED
is diffusely reflected from a printed scene in a manner similar to shining a flashlight on a
wall. The luminance from the LED is far below 1 cd/cm2 and therefore requires no
further safety assessment (ICNIRP, 1997) and will not be discussed further in this
chapter.
9.3 Laser Exposure Limits
The general method of calculating the laser exposure limits in the eye is a two-step
process. The first step is to calculate the maximum permissible exposure (MPE), which
corresponds to how much energy density is permitted at different ocular surfaces (in
joules per area, for example). The MPE is generally a function of wavelength, beam
geometry and exposure time. Once the MPE is calculated, that value is used to determine
the Accessible Emission Limit (AEL) which is the accessible power limit (in watts, for
example) exiting a laser system. The AEL is the quantity most laser users are likely
familiar with, and is typically the number printed on laser labels or safety placards; when
it is said that Class 1 lasers must be 1 mW or less, that power level corresponds to the
AEL of the system. Both the MPE and AEL of a laser system vary for exposure to the
retina, lens and cornea. While the limits on retinal exposure are usually most restrictive,
there are some instances where the exposure limits on other ocular surfaces determine the
AEL. This is the case for the TFI, where the AEL is limited by corneal exposure, as will be
discussed later in this chapter.
The following sections describe generally how to calculate the MPE and AEL for
the retina, lens and cornea and will provide the specific calculations used for the laser
system in the TFI.
9.4 Retina Maximum Permissible Exposure and Accessible Emission Limit
The exposure limits are a function of the laser’s wavelength, beam geometry and
exposure time. It is important to keep in mind whether the calculations are being done
for accidental exposures or intentional viewing conditions (discussed in detail in section
9.4.1). Once these are known, the Maximum Permissible Exposure (MPE) is determined
via a look-up table found in the ANSI standards. The MPE defines the “level of laser
radiation to which an unprotected person may be exposed without adverse biological
changes in the eye or skin” (ANSI, 2007). Its units are either in energy per area (J/cm2)
or power per area (W/cm2). The MPE varies with wavelength and exposure duration, and
its value may be different when describing the thermal or photochemical damage
thresholds to different ocular components.
The MPE also depends on if the laser is a point or extended source. For the purposes of
laser safety calculations, the definitions of point and extended sources are somewhat
different than the conventional. The extent of the source is determined by the
convergence angle of the beam at the eye, and determines if a point or extended image is
formed on the retina. A point source exists in a system where the laser is focused onto a
spatial filter and then collimated entering the eye. If additional optics are used between
the collimated beam and the eye, as is the case in the TFI, the source may have an
extended geometry. The visual angle, α, is used to identify the source’s full extent and is
defined using the following equation:
α = dr / fe
9.1
where dr is the diameter of the beam on the retina and fe is the eye’s focal length,
approximated to be 17 mm. A point source is defined as a system whose visual angle is
less than or equal to 1.5 milliradians (α ≤ 1.5 mrad). Any source having a visual angle
larger than this limit is treated as an extended source.
Figure 9.3: The visual angle of the source is defined by the field covered by the illumination
geometry in object space or the image size on the retina corresponding to Equation 9.1.
Once these parameters are known, the corresponding MPE can be obtained from the
ANSI document (ANSI, 2007). The locations of the tables of interest in the ANSI
document are as follows:
ANSI Table 5a listing the MPEs for laser point sources – page 74,
ANSI Table 5b listing the MPEs for extended sources – page 75,
ANSI Table 6 listing the wavelength-dependent correction factors – page 76.
The TFI laser functions as an extended source with a wavelength of 785 nm. Its MPE is
derived from Table 5b in the ANSI document and is:
MPE = 1.8 × CA × CE × t^0.75 × 10^-3  J/cm2
9.2
where CA and CE are correction parameters found in ANSI Table 6, and t is the exposure
duration. The correction parameters for this laser as given in the ANSI tables are:
CA = 10^[2(λ – 0.7)]
9.3
CE = α / αmin,  for αmin < α ≤ αmax
9.4a
CE = α^2 / (αmin × αmax),  for α > αmax
9.4b
The wavelength, λ, in Equation 9.3 has the units of microns. The correction term CE is a
piecewise function dependent on the laser’s visual angle. The term αmin is equal to 1.5
mrad and the term αmax is equal to 100 mrad. The visual angle in the TFI is 790 mrad
(corresponding to the f/1.2 beam), so Equation 9.4b applies.
Once the MPE (J/cm2 or W/cm2) is determined using the equations and tables above, the
AEL is calculated (in Watts). The AEL takes into account exposure time as well as the
diameter of the beam compared to a standardized pupil diameter at a given wavelength.
Regardless of the actual pupil diameter that exists in the measurement conditions, the
Limiting Aperture diameter must always be considered and is specified by the ANSI
standards. These diameters are shown in ANSI Table 8b on page 78 of that document,
and are a function of both wavelength and exposure duration. The limiting aperture takes
into account the time it takes the pupil to dilate in addition to the extent of dilation when
viewing visible beams. For the TFI laser having a wavelength of 785 nm the standard
aperture diameter to be used for all calculations is 7.0 mm having an area of 0.385 cm2
(ANSI, 2007).
In order to determine the AEL, the MPE is multiplied by the Limiting Aperture Area:
AEL = MPE x AREAlimiting
9.5
9.4.1 MPE Implications of Accidental vs. Intentional Laser Exposure
It is important to distinguish between accidental and intentional exposure. Accidental
exposure limits are less restrictive than intentional staring conditions, and cannot be used
for this evaluation since several factors help protect the human eye in the case of
accidental viewing of a laser beam (blink response, pupil constriction, head/eye
movement, etc.).
When intentionally staring at a target, there are three terms that must be defined to
accurately calculate the MPE: the visual angle (α), the exposure time (t), and pupil
constriction (P). The visual angle must be assumed to be fixed for intentional staring,
and the method of the visual angle calculation has already been described in Equation
9.1. CE is a function of the visual angle for a specific system. For point source systems,
CE = 1, while for extended sources CE is a function of the visual angle α as shown in
Equations 9.4a and 9.4b. In terms of exposure duration, the MPE varies with time over
the first 100 seconds of laser exposure. For times exceeding 100 seconds, the MPE is a
constant value and a value of t = 100 s may be used for any calculations.
The last term to consider is pupil constriction. ANSI Z136.1 Section 8.3 describes
the effect of pupil constriction. If operating with visible wavelengths, 400-700 nm, the
pupil will constrict after exposures of 0.7 seconds. Since the TFI wavelength is 785 nm,
pupil constriction effects are nonexistent.
The assumption that head and eye movement exists is built into some of the MPE
equations listed in the ANSI standards, specifically in how exposure duration is handled.
The MPE equations listed in ANSI Z136.1 Table 6 are often a function of the parameter
T2. This T2 factor is equal to 10 s for α < 1.5 mrad (point source) and 100 seconds for α >
100 mrad (extended source) and applies only to accidental viewing conditions since its
use assumes natural eye and head movement. When referring to ANSI Z136.1 Tables 5a
and 5b for intentional staring applications, only the MPE equations that are a function of
the actual exposure duration, t, rather than T2 should be used. The proper equations all
have the units of J-cm-2 and are a function of the real exposure duration. Using equations
that are a function of T2 will greatly underestimate the risk for intentional staring
situations.
9.4.2 Retinal MPE Calculations for the TFI
All calculations and discussions in this section use an operating wavelength of 785 nm
and an exposure duration of 100 seconds. In terms of exposure duration, the MPE varies
with time over the first 100 seconds of laser exposure. For times exceeding 100 seconds,
the MPE is a constant value and a value of t = 100 s should be used in calculations.
Therefore, even though the exposure duration is longer than 100 seconds, that value is
used for all calculations. The laser illumination geometry in the TFI has been previously
discussed and is shown in Figure 9.1.
Given the geometry discussed in section 9.2, the visual angle is α = 790 mrad. The
Maximum Permissible Exposure (retinal) can be calculated from ANSI Z136.1 Table 5b
(Equation 9.2 above). The factors from Equations 9.3 and 9.4b can now be calculated
given the parameters of the TFI:
CA = 10^[2(λ – 0.7)] = 10^[2(0.785 – 0.7)] = 1.479
9.6
CE = α^2/(αmin × αmax) = (790 mrad)^2/(1.5 mrad × 100 mrad) = 4161
9.7
MPE = 1.8 × CA × CE × t^0.75 × 10^-3 J/cm2
    = 1.8 × 1.479 × 4161 × (100)^0.75 × 10^-3 J/cm2
    = 350 J/cm2, or 3.5 W/cm2 averaged over the 100 s exposure
9.8
This value of 3.5 W/cm2 is the MPE for retinal exposure and is multiplied by the area of
the standardized 7 mm diameter pupil in order to calculate the AEL. Therefore the AEL
for retina exposure of the TFI system is
AEL = MPE × AREAlimiting = 3.5 W/cm2 × 0.385 cm2 = 1.35 W
9.9
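The calculation chain of Equations 9.2 through 9.9 can be reproduced with a short script. The sketch below is for cross-checking the numbers only; the ANSI tables remain the authoritative source:

```python
def retinal_mpe_and_ael(wavelength_um=0.785, alpha_mrad=790.0, t_s=100.0):
    """Sketch of the extended-source retinal MPE/AEL calculation (Eqs. 9.2-9.9).

    Implements the 1.8*CA*CE*t^0.75*1e-3 J/cm^2 rule with the 7 mm (0.385 cm^2)
    limiting aperture used in the text.
    """
    alpha_min, alpha_max = 1.5, 100.0                 # mrad
    ca = 10.0 ** (2.0 * (wavelength_um - 0.7))
    if alpha_mrad <= alpha_min:
        ce = 1.0
    elif alpha_mrad <= alpha_max:
        ce = alpha_mrad / alpha_min
    else:
        ce = alpha_mrad ** 2 / (alpha_min * alpha_max)
    mpe_j_cm2 = 1.8 * ca * ce * t_s ** 0.75 * 1e-3    # J/cm^2 over t_s
    mpe_w_cm2 = mpe_j_cm2 / t_s                       # average irradiance
    ael_w = mpe_w_cm2 * 0.385                         # 7 mm aperture area
    return ca, ce, mpe_w_cm2, ael_w

if __name__ == "__main__":
    ca, ce, mpe, ael = retinal_mpe_and_ael()
    print(f"CA={ca:.3f}, CE={ce:.0f}, MPE={mpe:.2f} W/cm2, AEL={ael:.2f} W")
    # Expected output near CA=1.479, CE=4161, MPE=3.5 W/cm2, AEL=1.35 W
```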
Although the ANSI standards do not mention it, an additional factor must be considered
according to the IEC standard 60825-1. This standard requires a test called “Condition
2” which says that for a focusing beam, as is the case in the TFI, the diverging beam
beyond the focal point must be safe when viewed with an eye loupe at a distance of 35
mm beyond the focal point. The test required by the IEC standard is to either measure or
calculate the radiant power through a 7 mm diameter aperture located 70 mm beyond the
laser focus. In this case the beam is so divergent that a majority of the power does not
enter the eye, and the system remains Class 1 per the IEC standard.
9.5 Exposure Limits for the Cornea and Lens
Discussion thus far has been related to laser safety issues as it pertains to the retina. Once
it is clear that the laser system is retina-safe, exposure to other surfaces within the eye
should be considered. While a majority of systems will be safe on the cornea and lens if
they are retina-safe, some measurement configurations exist where this is not true. For
example, when a converging beam is incident on the eye, the spot size on the retina is
quite large, and therefore so is the retina-safe exposure limit. However, if a beam waist
forms somewhere on the cornea or within the eye, this retina-safe system may cause
damage in other areas of the eye.
The ANSI Z136.1 Standard does not explicitly discuss exposure limits to the cornea or
lens. The International Commission on Non-Ionizing Radiation Protection (ICNIRP)
published a paper discussing these issues (Sliney, 2005). This paper presents the
guidelines to prevent photochemical and thermal damage to different surfaces of the eye
for unrestricted exposure durations. In the case of photochemical damage, wavelengths
longer than 700 nanometers have no effect, and can be ignored for this system. In order
to determine the thermal damage exposure limits, the ICNIRP derived guidelines based
on thermal modeling of focal beams in the eye to determine the energy required to raise
the temperature of ocular tissues to threatening levels. In the case of thermal exposure
damage, a constant maximum irradiance limit of 4 W/cm2 is specified for all wavelengths
between 380 nm and 1400 nm. This standard applies to the entire anterior segment of the
eye, which includes the full cornea through the sclera along with the crystalline lens.
This paper states that due to the beam waist within the eye, “thermal model calculations
were necessary to develop guidance for limiting apertures” and that a “1-mm averaging
aperture is then appropriate for ophthalmic instruments.” Therefore, all calculations
involving laser safety at the cornea and lens should take into account a 1 mm limiting
aperture which yields a cornea/lens AEL of 31.4 mW given the 4 W/cm2 requirement.
In discussing these tables, the ICNIRP (Sliney, 2005) document states “exceeding the
guidelines shown in Tables 3 and 4 would not be expected to cause injury unless the
limits were exceeded by a substantial factor of 2-10.” It goes on to say that the smallest
of these safety factors are incorporated for the UV spectrum. It is fair to say the safety
factor in the NIR wavelengths is closer to 10 than 2.
9.6 Additional Analysis: Eye Movements, System Alignment, Ocular Variations
Up to this point all calculations have been performed for a nominal geometry; that is, the
interferometer is already aligned and the cornea is at the proper test position in the
system. In reality, the distances between the interferometer and the subject’s eye will
vary as the system is aligned which will change the illumination geometry. While these
changes do not affect the cornea and crystalline lens exposure limits since those are
defined by constant values, the retinal exposure calculations will be affected. The retinal
MPE is a function of the laser source’s visual angle which will vary as the eye moves
with respect to the interferometer. While not required by the laser safety standards, a full
analysis of retinal exposure limits as a function of different ocular parameters was
completed. The purpose of this analysis was to determine if it is possible to create a high
irradiance spot on the retina in some “accidental” configuration of the interferometer.
As the interferometer is adjusted to properly position the eye, the visual angle, and
therefore maximum allowable power into the pupil, will vary. Additionally, for a
constant pupil diameter, the transmission through the eye will vary since the beam
diameter on the cornea will become larger than the pupil. Figure 9.4 below shows how
both the retinal spot size (visual angle) and transmission vary as the eye moves away
from its nominal position.
Figure 9.4: Schematic of the converger system with varying distance to the Arizona Eye. The
separation between the cornea and null objective is 20 mm (left), 77 mm (center) and 120 mm (right).
The MPE and AEL previously calculated for the eye in its nominal position were
recalculated for a range of interferometer-to-cornea distances varying from 0 mm to 170
mm, with 77.354 mm being the nominal position. The geometric spot diameter on the
retina for each of the configurations was calculated, and the visual angle was calculated
from that value using Equation 9.1. From here, the MPE was calculated in the
manner described in Section 9.4, and the MPE was used to determine the total
power allowed to enter the pupil.
In most cases, the beam diameter on the cornea is larger than the 7 mm pupil, so much of
the optical power is not transmitted to the retina. The transmission through a 7 mm
diameter pupil for each of the configurations was calculated, which in turn was used to
determine the total allowable beam power. Transmission as a function of the eye to lens
distance is shown in Figure 9.5. The only limiting aperture in the system was the pupil
diameter. All other surfaces in the eye were allowed to grow such that all of the incident
power was transmitted completely. Table 9.2 provides the results of this calculation.
Additionally, Figure 9.5 shows the pupil transmission as a function of axial spacing in
order to demonstrate how much of the laser reaches the retina for different eye-to-interferometer distances.
[Plot: transmission through the pupil (%, 0 to 100) versus interferometer-to-cornea spacing (0 to 150 mm)]
Figure 9.5: Pupil transmission as a function of interferometer-to-cornea distance for a 7 mm
diameter standardized pupil
Table 9.2. Laser safety calculations varying with converger-to-cornea separation for λ = 785 nm and t
= 2 hours using the relaxed Arizona Eye model

Separation   RMS Retinal     Visual Angle   MPE Energy       MPE Irradiance   Allowable Power      Transmission        Allowable Incident
(mm)         Spot Dia. (cm)  (mrad)         Density (J/cm2)  (W/cm2)          into Pupil (mW)      Through Pupil (%)   Power on Cornea (mW)
0            0.149           90.48          5.08             0.051            19.55                1.34                1458.59
10           0.170           102.94         5.95             0.059            22.89                1.7                 1346.42
20           0.191           115.51         7.49             0.075            28.82                2.27                1269.59
30           0.221           133.62         10.02            0.100            38.57                3.16                1220.44
40           0.270           163.39         14.98            0.150            57.67                4.72                1221.78
50           0.343           207.25         24.11            0.241            92.78                7.74                1198.77
55           0.399           240.65         32.51            0.325            125.09               10.5                1191.38
60           0.476           286.51         46.07            0.461            177.31               14.96               1185.25
65           0.591           354.54         70.55            0.706            271.52               22.86               1187.75
70           0.779           463.63         120.65           1.207            464.32               38.59               1203.21
77.354       1.408           806.77         365.32           3.653            1405.93              100                 1405.93
80           1.409           807.08         365.60           3.656            1407.01              100                 1407.01
85           1.408           806.46         365.05           3.650            1404.86              100                 1404.86
90           1.349           776.12         338.09           3.381            1301.14              100                 1301.14
100          0.671           401.20         90.34            0.903            347.68               33.53               1036.93
110          0.415           249.96         35.07            0.351            134.96               13.57               994.58
120          0.297           179.28         18.04            0.180            69.42                7.21                962.89
130          0.229           138.57         10.78            0.108            41.47                4.44                934.11
150          0.155           93.99          5.28             0.053            20.30                2.2                 922.85
170          0.116           70.03          3.93             0.039            15.13                1.3                 1163.64
Up to this point all calculations have assumed a perfect, relaxed eye. However, both
accommodation and refractive error in the form of defocus will cause the retinal spot size
to change. While the population used in testing can be controlled, some worst-case
scenarios have been modeled to show that the system is retina-safe even with
accommodation or refractive error. Therefore the calculations seen in Table 9.2 were
repeated for an eye having both 5 and 10 diopters of accommodation and again for ±10D
of refractive error. Accommodation was modeled using the parameters of the Arizona
Eye Model. Refractive error was modeled in the optical design by calculating the amount
of longitudinal defocus required to create 10D of defocus. The first step in determining
the appropriate focus shift is to relate diopters of defocus to the defocused wavefront
aberration coefficient W020. The relation between diopters, ΔΦ, and wavefront
aberration, W, is:
ΔΦ = (1/r)(dW/dr)
9.10
where r is the pupil radius. For defocus, the wavefront W is:
W = W020 (r/rmax)^2
9.11
where rmax is the maximum pupil radius. In the case of the TFI, a maximum pupil radius
of 3 mm is assumed. The derivative of the wavefront is:
dW/dr = 2 W020 r / rmax^2
9.12
Combining the results of Equations 9.10 and 9.12 yields
ΔΦ = 2 W020 / rmax^2
9.13
Using Equation 9.13, ±10D of refractive error is equal to W020 = ±45µm (for a pupil
diameter of 6mm). We can calculate the longitudinal shift δz to produce this defocus by:
δz = 8 (f/#)^2 W020 / n'
9.14
where the f/# is approximately 2.667 (f=16mm, D=6mm) and n’ is the refractive index of
the vitreous chamber, n’ = 1.336. For W020 = ±45µm, the corresponding focus shift is δz
= ±1916µm. This value was used to change the length of the vitreous chamber in the eye
model. Figure 9.6 plots the resulting allowable power incident on the cornea, taking into
account the visual angle and pupil transmission for each configuration. The results of the
calculations accounting for both visual accommodation and refractive error are all
grouped in the same relative area of the plot, well above the limits on exposure to the
anterior segment.
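The conversion in Equations 9.10 through 9.14 can be checked numerically; a minimal sketch using the stated f/# and vitreous index:

```python
def focus_shift_um(diopters, pupil_radius_mm=3.0, f_number=16.0 / 6.0, n_prime=1.336):
    """Convert refractive error in diopters to a longitudinal focus shift (um).

    Follows Equations 9.10-9.14: W020 = (delta_Phi * r_max^2) / 2, then
    delta_z = 8 (f/#)^2 W020 / n'.  Diopters are in 1/m, the radius in mm.
    """
    r_max_m = pupil_radius_mm * 1e-3
    w020_um = 0.5 * diopters * r_max_m ** 2 * 1e6        # wavefront coefficient in um
    delta_z_um = 8.0 * f_number ** 2 * w020_um / n_prime
    return w020_um, delta_z_um

if __name__ == "__main__":
    w020, dz = focus_shift_um(10.0)
    print(f"W020 = {w020:.0f} um, delta_z = {dz:.0f} um")   # ~45 um and ~1916 um
```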
[Plot: allowable laser power (mW, log scale from 0.10 to 1000) versus axial separation (0 to 160 mm), with curves for the unaccommodated eye, 5D and 10D of accommodation, ±10D of refractive error, the cornea/lens limit, and the operating power]
Figure 9.6: Allowable laser power levels for different eye configurations (log scale)
9.7 Laser Safety Conclusions
The AEL for exposure to the retina, cornea and lens have now been calculated and are
shown in Table 9.3. Due to the converging nature of the laser in the TFI, the limiting
factor in the allowable laser power is from exposure on the cornea or lens. This
intuitively makes sense as a focused beam could form on the cornea or lens while there is
no way for a small spot to form on the retina. Given a maximum TFI laser power of 1.0
mW exiting the system, a significant safety factor exists beyond what is already built into
the safety standards. Table 9.3 summarizes the calculations from this chapter. The AEL
is shown for the retina for both a normal, relaxed eye along with the worst case of the
additional analysis taking into account accommodation and refractive error. This worst
case value occurred for an eye with +10D of accommodation and an interferometer-to-eye spacing of 100 mm. Even this worst case AEL, which is not required by the ANSI
standards, is significantly less restrictive than the limits on laser exposure to the cornea
and lens. In all cases, there is a 30X or more safety margin over the already conservative
standards.
Table 9.3: Laser Safety Calculation Results
Surface                                     Limiting Aperture Diameter (mm)   Document   Parameter                        AEL (mW)
Retina, no variations                       7                                 ANSI       1.8 x CA x CE x t^0.75 x 10^-3   1350
Retina, worst case (10D accommodation)      7                                 ANSI       1.8 x CA x CE x t^0.75 x 10^-3   577
Cornea and lens                             1                                 ICNIRP     4 W/cm2                          31
Throughout the discussion of this device, the question of “what happens if the beam
focuses on the cornea?” has come up repeatedly. Although the standards specify using a
1 mm limiting aperture for such a calculation, the spot size will be smaller and therefore
the irradiance larger. The rationale behind these limiting apertures is explained in an
excerpt from “Laser Safety” by Henderson and Schulmeister (2004):
"It is an important principle in laser safety that an averaged value of irradiance
or radiant exposure, which might be significantly smaller than the local 'true'
physical irradiance, is compared to the exposure limit for the eye or the skin. In
the field of laser safety...specific averaging aperture which are related to
biological parameters such as pupil size and eye movements are defined together
with the exposure limits for the eye and skin...Because of biophysical phenomena,
irradiance hotspots which are smaller than the specified apertures are not
relevant for laser safety assessments...When one uses an averaging area smaller
than the one specified, the level of hazard to the eye or the skin would be
overestimated."
Therefore, even though the beam has a focused spot diameter of less than 1 mm, the
specified standard apertures are always used.
In the case of a fairly fast converging beam, the large visual angle of the source results in
a large spot diameter on the retina. Therefore, the retinal irradiance is relatively low but
the irradiances at other ocular planes are higher. In this specific case, the exposure limit
is set by the cornea and crystalline lens limits, and the Tear Film Interferometer maximum
laser power of 1.0 mW is well below these safety thresholds.
9.8 Incoherent Guidelines
The ISO standards provide cornea and lens exposure limits for incoherent illumination.
For IR-A wavelengths longer than 780 nm, which is the case for the eye tracking
illuminator, the maximum irradiance is 20 mW/cm2 (ISO, 2007). In addition to the
cornea and lens limits, the retinal radiance exposure must be less than 6 W/(cm2·sr). In order to
evaluate the retinal safety of the eye tracker illuminator, the source size must be used in
order to calculate the source radiance. At its nominal position of 80 mm from the eye,
the radiance of the source is conservatively approximated by dividing the irradiance
measured at the 80 mm position (Table 9.1) by the solid angle subtended by only a single
LED in the array. Although the measured irradiance corresponds to that emitted by all of
the LEDs, using the area of only a single LED provides a conservative limit calculation.
The solid angle Ω is approximated by the area of the LED divided by the square of the
distance to the LED:
Ω ≈ ALED / d^2 ≈ 1.1x10^-3 sr
9.15
The radiance from a single LED is then:
L = E / Ω = (0.19x10^-3 W/cm2) / (1.1x10^-3 sr) ≈ 0.17 W/(cm2·sr)
9.16
The radiance of the eye tracking illuminator is roughly 3% of the emission limit per the ISO
guidelines, and poses no significant risk to human use.
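A quick numerical check of Equations 9.15 and 9.16 is given below; the single-LED emitting area is an assumption (a 3 mm diameter emitter) chosen only to reproduce the quoted solid angle:

```python
import math

# Eye-tracker radiance estimate: measured irradiance at 80 mm divided by the
# solid angle subtended by a single LED of the array.
irradiance_w_cm2 = 0.19e-3               # measured at 80 mm (Table 9.1)
led_area_mm2 = math.pi * (3.0 / 2) ** 2  # assumed single-LED emitting area
distance_mm = 80.0
solid_angle_sr = led_area_mm2 / distance_mm ** 2
radiance = irradiance_w_cm2 / solid_angle_sr
print(f"Omega = {solid_angle_sr:.2e} sr, L = {radiance:.2f} W/(cm2 sr), "
      f"{radiance / 6.0:.0%} of the 6 W/(cm2 sr) limit")
```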
9.9 Optical Hazard Conclusions
The analysis in this chapter shows that both the laser and LED optical powers are
significantly below published safety thresholds, which have built in safety factors
themselves. Proper use of the Tear Film Interferometer poses no significant risk to
potential subjects. To confirm the analysis in this work, David Sliney, PhD,
independently reviewed it. Dr. Sliney is a world-renowned expert on the safe
use of lasers in ophthalmic devices and is a coauthor of many of the ANSI, ISO and
ICNIRP guidelines. His report came to the same conclusions as this analysis. In fact,
some of the analysis in this chapter, such as the study of eye movement and ocular variations,
was not strictly necessary and makes this treatment more conservative than Dr. Sliney's. In
addition to Dr. Sliney’s analysis, the parameters of the system were provided to the
Office of Radiation Control at the University of Arizona for pre-approval prior to
submission to the Institutional Review Board for research involving human subjects.
10 SIMULATED TFI PERFORMANCE: EFFECTS OF EYE MOVEMENT AND
VARIABILITY
Interferometry is most often used in applications where both the test environment and
variations in the part under test are controlled or understood. A typical interferometer is
mounted on a vibration isolating optical table in a laboratory environment designed to
control factors such as vibration, temperature and air turbulence, all of which can
contribute significant variability to measurement results. Furthermore, the instrument is
generally used to test a well understood optical component. Take an example of
interferometry used to measure a telescope mirror with a well-known radius of curvature
and asphericity. In that case, the interferometer is likely used to identify defects in the
mirror surface on the order of a couple of waves for final polishing before delivery.
Surely the metrology station for this application will be in an environment with adequate
environmental controls.
Designing an interferometer for testing parameters of the human eye throws many of the
typical controlled factors out the window. Every eye is different, from base curvatures to
surface irregularity to even facial structure that could interfere with the measurement.
The eyelids will present additional difficulties since they could potentially obscure part of
the measurement area. An additional complication not present in traditional
interferometry applications is that by its nature, the eye moves. Ocular variations and
movement add complexity to an already complicated interferometer system. While
attempts are made to stabilize the subject and their eye with proper use of head rests, no
system is perfect and residual movements do exist, as discussed previously in Chapter 7.
The potential subject pool from which the Tear Film Interferometer (TFI) testing
candidates are drawn can be limited to an acceptable range of ocular geometries, but the
system must have a dynamic range capable of accommodating an adequate population to
deliver results.
The analysis discussed in this chapter was used to evaluate the TFI performance for the
anticipated range of corneal geometries, and to aid in determining what limits, if any,
should be put on the range acceptable for entry into clinical studies with the TFI.
10.1 Effects of Ocular Variability and Movement
An important consideration in designing any ocular interferometer must be to analyze the
effects of eye movements. Eye movements will introduce unwanted wavefront error into
the test beam of the interferometer, which reduce measurement accuracy or even render
the interferograms unresolvable if the error is great enough. For a spherical surface
measured in reflection, lateral displacements and tilts of the surface result in wavefront
tilt, power and astigmatism added to the measurement. Longitudinal displacement results
in power for a curved surface such as the tear film. For aspheres, such as the prolate
ellipsoidal cornea, other aberrations such as coma are introduced as well. These errors
also contribute to retrace error, and could introduce non-common path aberrations into
the final measurement. Therefore it is important to understand the extent of anticipated
eye movement, along with how those movements affect the measurement, and how much
movement is acceptable.
The headrest selected for use in the TFI is designed to provide better stability than
standard ophthalmic headrests discussed in Chapter 7. The TFI headrest (Arrington
Research, Tempe, AZ) is comprised of a chin rest, forehead rest and a nose bridge with a
form similar to eyeglasses. Additionally, two bars are attached to the headrest making
contact with the side of the head. These modifications to the standard ophthalmic
headrest are designed to provide increased positional stability compared to the results of
the previously discussed study without requiring the use of an invasive bite bar.
Lateral and longitudinal eye movements of ±100 microns are anticipated to occur in a
subject during testing with the TFI. The movement is the result of breathing, pulsation
and random vibrations (Chapter 7), and assumed to be normally distributed about the
nominal position. Based on the selected research regarding involuntary eye movements,
a change in gaze direction of ±7.5 arc-minutes will be assumed as well. The eye
movements are also normally distributed around the nominal on-axis position.
10.2 Tolerancing Approach to Estimating Effects of Ocular Variation and
Movement
Tolerancing of an optical system is the method of analyzing a system’s performance
while taking into account the deviations from the ideal system design. While this process
typically involves analyzing the effects of manufacturing and assembly limitations, the
same process can be applied to evaluating the performance of the TFI with the expected
ocular variations and movements (i.e., tolerances). Variations in ocular geometry are
analogous to manufacturing variability, while eye movement is analogous to alignment
errors during assembly of an optical system. Optical design software, including Zemax
which was used for this analysis, has built-in tools to assign tolerances to any design
parameter in the optical system. Tolerances can be assigned to the cornea’s radius of
curvature, conic constant, tilt and decenter based on the previously discussed variability
in those parameters. Their effects on the interferometer’s ability to successfully measure
the tear film can then be analyzed through either sensitivity analysis or statistical analysis
based on random perturbation of those parameters.
The overall goal of the tolerancing process is to predict the system’s performance for the
anticipated variability in ocular parameters. Similarly, limits on the population
acceptable for study with the TFI can be derived from the tolerance analysis.
Tolerancing can be broken into two major steps: sensitivity analysis and Monte Carlo
statistical analysis. The goal of sensitivity analysis is to determine the degradation of
system performance due to each individual tolerance parameter. For an arbitrary
performance metric, $\Phi$, the sensitivity of the system to a given design parameter $p_i$ is:

$$\text{Sensitivity}_i = \frac{\partial \Phi}{\partial p_i} \qquad (10.1)$$
The performance metric can be any appropriate measure of system quality, including
measures such as RMS wavefront, MTF, or wavefront slope. The change in system
performance based on a single parameter is therefore:
$$\Delta\Phi_i = \frac{\partial \Phi}{\partial p_i}\,\Delta p_i = \text{Sensitivity}_i \times \text{Tolerance}_i \qquad (10.2)$$
Assuming the tolerances behave independently from one another, their effects are
combined in a root-sum-square (RSS) method to estimate the performance of the
toleranced optical system. Based on a nominal performance equal to $\Phi_0$, the estimated
as-built system performance is

$$\Phi_{est} = \Phi_0 + \sqrt{\Delta\Phi_1^2 + \Delta\Phi_2^2 + \cdots + \Delta\Phi_n^2} \qquad (10.3)$$
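A minimal sketch of the bookkeeping in Equations 10.1 through 10.3 is given below; the sensitivity and tolerance values are placeholders, not the TFI values listed later in this chapter:

import math

def rss_performance(nominal, sensitivities, tolerances):
    # Change in the metric for each parameter (Eq. 10.2), combined by
    # root-sum-square and added to the nominal performance (Eq. 10.3).
    deltas = [s * t for s, t in zip(sensitivities, tolerances)]
    return nominal + math.sqrt(sum(d * d for d in deltas))

# Placeholder numbers for illustration only:
print(rss_performance(nominal=0.05,
                      sensitivities=[0.8, 1.2, 0.3],
                      tolerances=[0.1, 0.05, 0.2]))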
Sensitivity analysis is useful since it provides insight into the effects of each individual
tolerance parameter. However, it does have limitations. First, in combining the
estimated performance via an RSS method, it is assumed that all tolerances behave
independently. In reality, there is some interplay between the parameters that this
analysis does not take into account. Second, sensitivity analysis predicts the worst-case
performance of the system. Take an example of a mirror having a certain radius of
curvature and allowed tolerance on that parameter. If a number of mirrors are built within
the specified tolerances, it is fair to assume that the radius of curvature of each mirror
will vary somewhat randomly between the limits of the tolerances, and it may be useful
to know how many of the as-built mirrors fall within certain ranges within the toleranced
region. In this case, additional statistical analysis of the toleranced system is necessary.
The second step often used in tolerancing an optical system is to run a Monte Carlo
analysis where the optical system is modeled, the toleranced parameters are randomly
perturbed, and the performance recorded. The process is repeated a specified number of
times to build up a statistical model of the as-built performance of the system based on
random fluctuation in the toleranced parameters. This approach takes into account
interactions between tolerances and provides a measure of how many “acceptable”
systems would be built given an allowable tolerance range. For systems where a number
of systems will be manufactured, it is useful to run a Monte Carlo analysis to provide a
statistical distribution of the as-built performance by modeling and perturbing a large
number of toleranced optical systems. For example, Monte Carlo analysis consisting of
100 perturbed models could show that an optical system designed to nominally have an
RMS wavefront of λ/10 will have an RMS wavefront of λ/4 or better in at least 90% of the built
systems. For the TFI, where each perturbed model corresponds to one frame of a measurement
sequence, the Monte Carlo analysis instead provides a statistical measure of how much data will be usable within that sequence.
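The sketch below outlines this Monte Carlo workflow under stated assumptions: the perturbations are treated as normally distributed, the merit function is a stand-in for the ray trace of the perturbed model, and 100 trials are run as in the analyses later in this chapter. It is not the Zemax implementation used for this work:

import random
import statistics

def evaluate_metric(perturbations):
    # Hypothetical stand-in for ray tracing a perturbed model and returning,
    # for example, RMS wavefront error or maximum wavefront slope.
    return sum(abs(p) for p in perturbations)

tolerances = [0.1, 0.1, 0.1]                # placeholder +/- tolerance ranges

results = []
for _ in range(100):                        # 100 perturbed systems
    perturbed = [random.gauss(0.0, tol / 2.0) for tol in tolerances]
    results.append(evaluate_metric(perturbed))

results.sort()
print("mean :", statistics.mean(results))
print("stdev:", statistics.stdev(results))
print("90th percentile:", results[int(0.9 * len(results))])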
10.2.1 The Tolerancing Model
The purpose of the tolerance analysis is to evaluate the effects of both ocular variations
and movement on the interferometer. Specifically of interest is how the variations in the
eye affect the wavefront at the detector. While the effects of ocular variations can be
removed from the final measurement by subtracting low-order aberrations, the major
concern is that the error could introduce enough wavefront error that the fringes can no
longer be resolved at the detector.
The model used for analysis consisted of the diverger focusing the input beam to the
center of curvature of the cornea with a beam diameter of six millimeters at the tear film.
The beam is reflected from the tear film and back through the diverger. The image
surface used for analysis is the virtual image location of the test surface. This is the plane
that is imaged onto the interferometer's detector by the imaging lens, so it is the logical
choice for analysis without modeling the entire system. The wavefront at this location
can be compared to a flat reference wavefront in order to determine the behavior at the
interferometer's image plane. The tear film was modeled as a biconic surface, allowing
the radii in the x- and y-directions along with the conic constant in the x- and y-directions
to vary independently. The sag of a biconic surface is
$$z = \frac{c_x x^2 + c_y y^2}{1 + \sqrt{1 - (1+k_x)\,c_x^2 x^2 - (1+k_y)\,c_y^2 y^2}} \qquad (10.4)$$

where $c_x = 1/R_x$ and $c_y = 1/R_y$ are the curvatures and $k_x$ and $k_y$ are the conic
constants in the x- and y-directions. Once modeled, the tolerances on movement, tilt and
geometry can be applied to the parameters of the biconic ocular surface. Tolerances were
placed on the following parameters in the design:
• Tear film x- and y-radii of curvature
• Tear film x- and y-conic constants
• Tear film x- and y-decenter
• Tear film tilt about the x- and y-directions
• Separation between objective and tear film
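As a concrete reference for the biconic model of Equation 10.4, the sketch below evaluates the sag directly; the radii and conic constants are placeholder values near a typical prolate cornea, not fitted data:

import math

def biconic_sag(x, y, Rx, Ry, kx, ky):
    # Sag of a biconic surface (Eq. 10.4) with cx = 1/Rx and cy = 1/Ry.
    cx, cy = 1.0 / Rx, 1.0 / Ry
    num = cx * x ** 2 + cy * y ** 2
    den = 1.0 + math.sqrt(1.0 - (1.0 + kx) * cx ** 2 * x ** 2
                              - (1.0 + ky) * cy ** 2 * y ** 2)
    return num / den

# Placeholder geometry (mm) near a typical prolate cornea:
print(biconic_sag(x=3.0, y=0.0, Rx=7.8, Ry=7.7, kx=-0.25, ky=-0.25))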
When the interferometer is first aligned to a subject, the angle of the partial null cylinders
will be adjusted to remove the effects of corneal astigmatism. In order to account for the
interferometer’s ability to null out the astigmatic wavefront errors, the rotational
orientation of the partial null lenses were set as compensators in the tolerancing model.
The interferometer itself will also be moved into the proper position laterally and
longitudinally. The longitudinal spacing between the diverger and the subject’s tear film
will be adjusted in order to minimize the focus error in the interferogram. By focusing at
the proper position, any cornea radius of curvature can be compensated, aside from the
effects of corneal astigmatism. Therefore, a compensator on the axial separation was also
added to the tolerancing model.
In order to evaluate the toleranced performance of the TFI, appropriate metrics must be
selected. In this case, the tolerance analysis was repeated for two separate measures of
quality. The first measure is the RMS wavefront and the second is the maximum
wavefront slope. Evaluating how the tolerances affect the RMS wavefront in the TFI
provides a method of describing degradation in wavefront quality as a function of ocular
variations or movements. However, RMS wavefront cannot be used to predict the fringe
frequency at the detector, and in turn cannot be used to determine whether the measurements can be
resolved. Therefore, the maximum wavefront slope was calculated as part of the
tolerance analysis as well. A macro written in the Zemax programming language by a
former University of Arizona College of Optical Sciences student (Sullivan, 2011) was
used to determine the maximum wavefront slope, in waves/radius, that exists within the
entire wavefront as part of the tolerancing routine. The macro calculates the wavefront
slope at a user-defined number of points across the wavefront. For the purposes of this
work, the slope was calculated at 11 points along 11 different evenly spaced radial arms,
for a total of 121 sampling points across the wavefront. Wavefront slope is used to
determine the maximum fringe frequency that exists in the interferogram, and in turn if
that interferogram can be resolved by the TFI detector. As previously discussed, the
system is capable of resolving wavefront slopes of up to 250 waves per radius, but the
instrument accuracy decreases as the slope increases.
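A minimal sketch of this slope-sampling scheme is shown below. It is not the Zemax macro itself; the wavefront function is a placeholder (pure defocus) used only to exercise the sampling of 11 points along 11 radial arms:

import math

def wavefront(rho, theta):
    return 5.0 * rho ** 2            # placeholder wavefront (waves): pure defocus

def max_radial_slope(n_arms=11, n_points=11, d_rho=1.0e-4):
    max_slope = 0.0
    for i in range(n_arms):
        theta = 2.0 * math.pi * i / n_arms
        for j in range(1, n_points + 1):
            rho = j / n_points       # normalized radius in (0, 1]
            slope = (wavefront(rho + d_rho, theta)
                     - wavefront(rho - d_rho, theta)) / (2.0 * d_rho)
            max_slope = max(max_slope, abs(slope))
    return max_slope                 # waves per radius

print(max_radial_slope())            # ~10 waves/radius for 5 waves of defocus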
The tolerance analysis was split into two components in order to evaluate the system
performance as a function of only eye movements and only ocular variations. The
methods and results of those analyses are discussed in the following sections.
10.2.2 Effects of Eye Movement
The purpose of this analysis was to isolate the effects of eye movement. Tolerances on
lateral and longitudinal movement and tilt of the tear film were entered into the model.
Since the eye rotates about a point well behind the cornea, the tilt was applied 13.5 mm behind
the corneal vertex, which approximately corresponds to the eye's center of rotation. This effectively creates a
lever arm so that when the eye rotates, the tear film exhibits both tilt and decenter. No
compensators were used in this model since it is estimating the errors during a
measurement. In the current embodiment of the interferometer, the system cannot
compensate for error during a measurement.
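As a rough illustration of this lever-arm effect (using the 13.5 mm arm and the tilt tolerance listed in Table 10.1), a 0.25 degree rotation of the eye decenters the tear film by roughly 60 microns:

import math

lever_arm_mm = 13.5                  # rotation applied 13.5 mm behind the cornea
tilt_deg = 0.25                      # tilt tolerance from Table 10.1

decenter_mm = lever_arm_mm * math.tan(math.radians(tilt_deg))
print(f"Induced tear film decenter: {decenter_mm * 1000:.0f} microns")   # ~59 microns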
Table 10.1 shows the results of the eye movement sensitivity analysis using this model.
The table lists the eye movement parameters which were varied, the tolerance range, and
the estimated change in both waves per radius and RMS wavefront error based on the
given tolerance.
Table 10.1. Eye movement sensitivity analysis
Parameter       Tolerance (±)    Wavefront Slope Sensitivity    RMS Wavefront Error Sensitivity
                                 (waves/radius)                 (waves)
Longitudinal    0.1 mm           57.0                           5.7
X-Decenter      0.1 mm           149.3                          1.4
Y-Decenter      0.1 mm           145.1                          1.4
X-Tilt          0.25°            31.9                           0.3
Y-Tilt          0.25°            31.2                           0.3
RSS                              220.4                          6.1
Table 10.2 shows the results of the Monte Carlo analysis. The Monte Carlo analysis
modeled 100 perturbed systems using the same tolerances shown in Table 10.1. After the
100 perturbed systems were modeled, statistics on the change in both wavefront slope
(waves/radius) and RMS wavefront were calculated. The mean error and standard
deviation were calculated, along with the thresholds on wavefront error that a given
percentage of systems will fall within.
Table 10.2. Eye movement Monte Carlo analysis results
Monte Carlo     Wavefront Slope (waves/radius)     RMS Wavefront (waves)
Mean                      103.13                            2.04
Std. Dev.                  39.90                            1.15
90% >                     153.13                            3.71
80% >                     135.74                            3.22
50% >                     101.39                            1.74
20% >                      67.07                            1.07
10% >                      52.21                            0.77
The sensitivity analysis shows that decentering the eye has significant effects on the
interferometer system. Given the allowable tolerances of this study, these eye
movements still result in resolvable interferograms since the slope is less than 250 waves
per radius. Since the eye will consistently move about the nominal position, it is
expected that the actual wavefront error will be much less than the results of the
sensitivity analysis. Instead, the Monte Carlo results provide a measure of how many
frames of a measurement sequence will be under a certain threshold. For example, if
only measurements having less than 100 waves per radius are to be used, roughly 50% of
the measurements will be kept for analysis. By capturing data at a few frames per
second, one useable measurement per second will likely be gathered on average.
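As a back-of-the-envelope check of the roughly 50% figure, the sketch below applies a normal-distribution assumption to the Table 10.2 slope statistics and evaluates the fraction of frames falling under the 100 waves/radius threshold:

import math

mean_slope = 103.13          # waves/radius, Table 10.2
std_slope = 39.90            # waves/radius, Table 10.2
threshold = 100.0            # acceptance threshold discussed above

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

kept_fraction = normal_cdf(threshold, mean_slope, std_slope)
print(f"Estimated fraction of frames kept: {kept_fraction:.0%}")   # ~47%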
One additional interesting result of the sensitivity analysis is in the comparison between
wavefront slope and RMS wavefront error. The maximum wavefront slope is the most
sensitive to decentering, while the RMS wavefront error is most sensitive to longitudinal
translation. This provides some insight into the effects of each term. Longitudinal
translation, for example, results in low-order power in the interferogram. While the RMS
wavefront error is relatively large for this term, the wavefront slope is lower, meaning the
fringe frequency in the resulting interferogram is less. On the other hand, the wavefront
slope is very sensitive to decentration, which does not affect the overall RMS wavefront
error all that much. This implies that decenter introduces higher-order variation into the
wavefront, producing regions of high wavefront slope and, in turn, local zones of high
fringe frequency in the resulting interferogram.
10.2.3 Effects of Ocular Variations
This second phase of tolerance analysis was used to isolate the effects of ocular variations
on the interferometer. In this analysis, the anticipated variations in cornea geometry were
entered as the tolerances in the system. Since a change in base radius of curvature is
completely compensated by adjusting the longitudinal distance between the diverger and
the eye, only the difference between corneal horizontal and vertical radii of curvature was
entered in order to study the effects of corneal astigmatism. Additionally, tolerances on
the x- and y-conic constants were entered in the model. Since the interferometer will be
adjusted pre-measurement to compensate for the subject’s ocular geometry, compensators
on the longitudinal distance and the cylindrical null rotation angles were allowed to vary.
Table 10.3 shows the results of the ocular variation sensitivity analysis. The table lists
the eye variation parameters studied, the tolerance range, and the estimated change in
both waves per radius and RMS wavefront based on the given tolerance.
Table 10.3. Eye variability sensitivity analysis
Parameter       Tolerance (±)    Wavefront Slope Sensitivity    RMS Wavefront Error Sensitivity
                                 (waves/radius)                 (waves)
Delta Radius    0.1 mm           19.27                          0.46
Y-Conic         0.1              7.85                           0.36
X-Conic         0.1              7.85                           0.36
RSS                              22.24                          0.69
Table 10.4 shows the results of the Monte Carlo analysis resulting from the same
procedure as previously used on the eye movement study.
Table 10.4. Eye variability Monte Carlo analysis results
Monte Carlo     Wavefront Slope (waves/radius)     RMS Wavefront (waves)
Mean                        5.59                            0.25
Std. Dev.                   4.86                            0.03
90% >                       8.55                            0.44
80% >                       6.73                            0.37
50% >                       4.70                            0.25
20% >                       2.39                            0.13
10% >                       1.90                            0.07
For the most part, the wavefront errors introduced by ocular geometry can be compensated for
through proper adjustment of the null optics. Adjusting the longitudinal spacing between
the diverger and the cornea completely compensates for changes to the base curvature of
the cornea and overlying tear film. The only negative implication of this adjustment is
that as the radius of curvature becomes smaller than 7.8 mm, the eye must move further
away from the diverger, reducing the measurement area on the surface. However, since
the system was designed to provide measurement capability over 6.5 mm for a nominal
7.8 mm radius of curvature, the base radius of curvature of the cornea would have to
shrink all the way to 7.2 mm before the measurement diameter falls below the required 6
mm. Corneal astigmatism is compensated for by proper adjustment of the cylindrical null
optics. While those optics do introduce some residual aberration (discussed in Chapter
8), their inclusion increases the dynamic range of the interferometer by nulling the
astigmatic wavefront reflected from most corneas.
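The 7.2 mm figure quoted above follows from a simple scaling sketch, assuming the measured diameter scales linearly with the corneal radius of curvature for a fixed converging beam (an assumption consistent with the 6.5 mm coverage at the 7.8 mm design radius):

design_radius_mm = 7.8        # nominal corneal radius of curvature
design_diameter_mm = 6.5      # measurement diameter at the design radius
required_diameter_mm = 6.0    # minimum required measurement diameter

limiting_radius_mm = design_radius_mm * required_diameter_mm / design_diameter_mm
print(f"Coverage falls to 6 mm near R = {limiting_radius_mm:.1f} mm")   # ~7.2 mm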
10.2.4 Tolerancing Summary
Overall, the interferometer is quite sensitive to eye movements. However, given the
predicted amounts of movement, the detector is capable of resolving most of the
interferograms in a measurement sequence. Since the eye moves with a nearly normal
distribution about the nominal position, most measurements will occur while the eye is
near the nominal location. Due to the variable null capabilities, the interferometer is not
very sensitive to ocular variations. Compared to the effects of eye movement, variations
in ocular geometry are inconsequential and the potential errors can be approximated by
only the effects of movement.
Both the displacements and tilts of the eye vary over time in a somewhat predictable
manner. Saccades occur roughly once per second, and regular respiration and pulse
contribute to eye movement. By collecting interferometer data at a rate of at least a few
frames per second, it is expected that a sufficient number of measurements will be
collected when the eye is near its nominal position. When the measured wavefront has a
large error resulting in a high fringe frequency, the accuracy of the instrument is reduced.
Therefore, it may be beneficial to throw out those measurements with large errors. The
threshold for an acceptable measurement should be determined empirically after
completing clinical testing with the TFI, but setting a “passing” qualification of
approximately 100 waves/radius is reasonable. Based on the Monte Carlo analysis
discussed in this chapter, slightly fewer than 50% of the measurements in a sequence will
be useable. By collecting data at a few frames per second, over half of the data can be
thrown out while still achieving one useable measurement per second.
10.3 Performance Summary
Although human testing of the TFI has not been approved to date, the estimated
performance of the system has been analyzed by using a tolerance analysis process.
Perhaps the largest uncertainty in attempting on-eye interferometry is in how eye
movements and variations affect measurement feasibility. Through proper use of null
optics, ocular variability can be almost completely accommodated. Since the instrument
is not intended as a general screening tool, the population can be controlled within the
limits of the instrument’s capabilities as well. Eye movement does introduce significant
error into the system, reducing the dynamic range of resolvable tear film features along
with reduction of system accuracy overall. However, the effects of the anticipated
amounts of movement are within the limits of the system's measurement capabilities, and
individual measurements that are beyond those limits can be thrown out.
Figure 10.1 displays some simulated interferograms with ocular variations and
movements present. These interferograms were created by randomly perturbing the
parameters discussed in this chapter, and calculating the resulting interference pattern.
While there are more fringes present than is typical in most interferometry applications,
all of these interferograms are resolvable using the megapixel detector of the TFI. The
interferograms are dominated by 4th order aberrations, which will be subtracted during
analysis and so do not affect the final measurement results. While human testing has not
yet been approved, the analysis in this chapter shows that the system is capable of
providing measurements of tear film topography.
Figure 10.1. Simulated interferograms with eye movement and variability
11 FUTURE WORK AND CONCLUSIONS
11.1 In Vitro Interferometer Future Work
One area of improvement in the in vitro Fluid Layer Interferometer (FLI) is in the lens
handling and placement procedure. In the current method, the lens holder is mounted on
translation stages and a rail, as shown in Figure 11.1. When a new lens is tested, the
assembly is slid out of the interferometer and the lens placed on the holder. Once the lens
is in place, the assembly is slid back into the interferometer and aligned by adjusting the
stages to minimize the number of fringes in the live interferogram. Since the lens must
be kept hydrated during the alignment, packing solution is routinely applied during the
process. This results in a very dynamic interferogram, making alignment quite difficult.
A typical interferogram from a smooth fluid layer contained on the order of 20 fringes
across the measurement area. When similar measurements were made on just the lens
holder, with no contact lens in place, interferograms with just a few fringes are routinely
observed.
Figure 11.1. Lens holding assembly
Manual application of the fluid also leads to some variability in the measurement. While
efforts were made to apply the same volume of fluid in every measurement, there was
variability in the location and rate at which fluid was applied. By removing the
variability in this process, the error bars on the plots in Chapter 5 would likely
be significantly reduced. The process of keeping the lens hydrated was also a manual
one, and the lens under test in each measurement cycle was likely at a different stage of
drying by the time data was acquired, due to variations in alignment times. This further
leads to variability, especially in the blob analysis results. It was often observed that
once a lens dried out on the lens holder, the “blobs” could not be completely eliminated
by just reapplying packing solution or artificial tears. Instead, the lens would have to
soak in packing solution for some time to return to its hydrated state.
New holders should be built to fit a contact lens and have a retaining ring in place to keep
it in position, providing better lens positioning and stability than the current method.
Additionally, the entire lens and holder can be mounted and submerged in a bath of
packing solution until the measurement is ready, keeping the lens hydrated more reliably
than with manual application. When the measurement begins, the lens and holder will be
vertically lifted out of the bath. This creates a more uniform distribution of fluid on the
surface at the initiation of the measurement, and is likely more repeatable than manual
application with a syringe.
As more contact lenses are measured with the FLI and the results examined by contact
lens designers and material scientists, it is likely that new analysis methods will be
needed. At present, the capability of the instrument is being explored, and a wealth of
information regarding fluid layer behavior on contact lenses has been gathered. While
procedures such as blob analysis show interesting results, it is not clear that this will be
the ultimate measure of fluid layer breakup, either in vitro or in vivo.
11.2 In Vivo Interferometer Future Work
The major future work on the in vivo Tear Film Interferometer (TFI) is to test the system
on human subjects. The project has been approved by the university radiation control
office and Institutional Review Board (IRB), but is still awaiting final approval by the
sponsoring company. While simulations show that the interferometer is capable of
measuring a range of tear film geometries while accounting for eye movement, testing is
necessary to confirm the instrument's ability. The first step of a test will be to measure
the subject's corneal topography using a commercially available Keratron Piccolo
topographer. The radii of curvature accounting for any corneal astigmatism can be
entered into the software model of the system, and the position of the cylinder lenses
optimized to compensate for the astigmatic wavefront. The results of that process will be
used to set the rotation angles of the partial null lenses in the TFI. At this point, the
subject will be seated in front of the instrument in the head positioner, and the instrument
aligned by viewing the live video from the eye tracker and interferometer camera. The
fixation target can be adjusted if necessary to match the subject’s interocular distance, as
well as preferred luminance level. Once aligned, data will be collected in bursts of up to
60 seconds. The procedure will be repeated as many times as is comfortable in a 30
minute session, with up to three sessions. It is likely that the first session will serve to
allow the subject to be comfortable with the instrument, and to train them to properly
fixate and hold steady during the measurements. The second and third sessions will be
used to collect the bulk of the useable measurement data.
The feasibility testing of the TFI will likely identify potential areas for improvement in
the system design, but those are unknown as of now. One area that can likely be
improved is in the motion control. The current strategy was selected since the NEAT
stages were already available and had the size and resolution required by the system. In
testing of the system with calibration surfaces, it was observed that there is some ringing
vibration after ordering the stages to move. Once the motion has stopped, the fringes
visibly vibrate, reducing their contrast. Using integration times of 0.5 ms reduces the
vibration effects on the measurement, but they are still apparent immediately after
ordering a move. The acceleration and velocity of the stage movement can be controlled,
and efforts have been made to slow down the movements and acceleration to reduce the
ringing, but it cannot be eliminated. The weight of all of the components moved by the
stages is roughly 100 lbs, so small and quick steps of the motors will result in some
ringing. While the stages do not move during a measurement, the ringing will likely
increase the time required for alignment, and could make alignment more difficult in
general. Additionally, the stage vibrations likely exclude these particular stages for being
used in future active alignment schemes.
It may be possible to replace the stages with improved versions which damp the vibration
better than the current embodiment. Another possible strategy should the vibrations be
problematic is to scan only the location of the test arm of the interferometer rather than
moving the entire system. For example, the test arm location could be adjusted with
periscopic mirrors. While this reduces the strain on the motion control system, it
complicates the optical design. Depending on the amount of test arm adjustment, the
focus position of the camera may have to be adjusted as well. Since the length of the test
arm is variable, the magnification of the imaging system will vary as well. Finally, the
system will have to be calibrated throughout the range of movement to account for non-common path error in the scanning optics. Such a scheme would open the system to the
possibility of using active eye tracking so that the system could compensate for the
position of the eye. In the current embodiment, the eye's location and gaze direction are
passively monitored, and not used for any active tracking procedures. Given the added
optical complexities of scanning the test arm, efforts should first be made to move the
entire interferometer.
One additional feature that would likely further improve the TFI is to include additional
variable null capability. While the system already has a continuously variable astigmatic
null, this only accounts for ocular variations. Compensating for eye movement could
provide added dynamic range and functionality. Ideally, such a system would use the
data from the live interferogram in a feedback loop to adjust the system to account for
eye movements. In that process, the position of the interferometer could be continuously
adjusted during measurements to compensate for eye movement, reducing the fringe
frequency throughout the measurement. Rather than adjusting the position of the entire
interferometer or test arm, it could be possible to use optical elements or adaptive optics
to provide the compensation as well. For example, QED has developed a system
(Variable Optical Null®) consisting of a pair of prisms which are rotated and tilted in an
interferometer to null an aspheric wavefront during their manufacturing process
(Kulawiec, 2009). While the dynamic range required for the TFI is likely not as large as
in the QED system, a similar method of providing continuously variable null capability
with a feedback loop to the live interferogram could potentially compensate for eye
movements during a measurement. Such an effort of active eye tracking and real-time
compensation is likely a large undertaking and should only be attempted after initial
studies prove the feasibility of using interferometry for tear film measurement.
11.3 Conclusions
In the early stages of this research, the goal of the in vitro Fluid Layer Interferometer was
mainly to determine the capability of using the same platform for an eventual on-eye
interferometer. As the instrument was developed and eventually used to measure fluid
layers on a contact lens, it became clear that the interferometer not only proves the
capability of measurement on dynamic surfaces but also has the potential to itself be a
valuable tool in contact lens material development. Therefore, the scope of the in vitro
system was increased to include extra development on the lens holding and positioning
along with data analysis methods. Numerous measurements were made on a number of
different contact lens materials, a few of which were discussed in Chapter 5 of this
dissertation.
The in vitro FLI has successfully been used to measure contact lenses made from
different materials. The results show material-dependent differences in fluid layer
behavior in a repeatable manner. This tool provides information about how the tear film
interacts with a contact lens that has not before been observed, providing valuable insight
which can be used in material design. A variety of contact lens materials are
commercially available, and their effects on comfort and vision quality are understood
based on patient feedback. Characterizing these well understood materials on the FLI
provides information about what fluid behavior is typical in a comfortable contact lens.
When developing new contact lens materials, they can be measured with the FLI prior to
any clinical trials, and the results compared to known materials in order to predict their
on-eye performance. While the results of an in vitro measurement do not necessarily
predict the exact behavior of a live tear film, the trends of the testing should hold true in
vivo.
Lessons learned from the FLI system were used to develop and build the in vivo Tear
Film Interferometer. The TFI system has been designed to provide similar testing on a
live eye to analyze tear film behavior both with and without contact lenses in place. In
addition to the core interferometer, variable null, eye monitoring and fixation modules
were integrated into the system, and additional components are in place to monitor the
laser power to ensure safe operation. Simulation shows that the instrument is capable of
measuring tear film behavior on the eye, and the analysis methods developed for use with
the in vitro results can be applied to data collected with the TFI.
Significant effort has been put into showing that safe on-eye interferometry can be
performed. A range of published guidelines and standards pertaining to safe exposures of
laser illumination to different parts of the eye were analyzed to determine how they apply
to not only this interferometer, but on-eye interferometry in general. These calculations
and analyses are crucial not only for the Tear Film Interferometer, but for other
laser-based eye measurement systems, whether interferometers or otherwise.
Proving the feasibility of the TFI opens the door for using on-eye interferometry for other
diagnostic purposes. The next logical step building upon the TFI platform is to modify
the system to provide corneal topography measurements. To do this, an absolute
calibration is necessary in order to determine the radius of curvature of the cornea. A
number of methods are used in Placido disk topographers that could be integrated into the
interferometer to determine the absolute position of the cornea in order to determine the
curvature. Similarly, the topography of other intraocular surfaces could be determined,
though the amount of light reflected is much less due to the more closely matched
refractive indices. The on-eye form of contact lenses or even intraocular lenses could
potentially be accurately measured interferometrically. Interferometry has the possibility
of drastically improving on-eye metrology. Improved metrology could in turn improve a
number of areas in vision care, potentially including custom vision correction or disease
mitigation.
12 REFERENCES
American Conference of Governmental Industrial Hygienists (ACGIH), “TLV's,
Threshold Limit Values and Biological Exposure Indices for 2010,” American
Conference of Governmental Industrial Hygienists, Cincinnati, OH (2010).
American Conference of Governmental Industrial Hygienists (ACGIH), “Documentation
for the Threshold Limit Values, 5th Edn.,” American Conference of
Governmental Industrial Hygienists, Cincinnati, OH (2007)
ANSI, “American National Standard for safe use of lasers (ANSI 136.1),” ANSI 136.1-2000, The Laser Institute of America, Orlando, FL (2000).
Arrington Research, Inc., ViewPoint EyeTracker Users Manual, (2010).
Auger, P., “Confirmation of the simplified Javal’s rule,” Am. J. Optom. Physiol. Opt. 65,
915 (1998).
Bendetto, D.A., Clinch, T.E., and Laibson, P.R., “In vivo observation of tear film
dynamics using fluorophotometry,” Arch. Ophthalmol. 102(3), 410-412 (1984).
Center for Devices and Radiological Health (CDRH), Laser Product Performance
Standard, Title 21, Code of Federal Regulations, Part 1040, Washington, DC,
Government Printing Office (1995).
Cho, P., ‘‘Reliability of a portable noninvasive tear breakup time test on Hong Kong Chinese,’’ Optom. Vis. Sci. 70, 1049–1054 (1993).
Christopoulos, V., Kagemann, L., Wollstein, G., Ishikawa, H., Gabriele, M., Wojtkowski,
M., Srinivasan, V., Fujimoto, J., Duker, J., Dhaliwal, D., and Schuman, J., “In
vivo corneal high-speed, ultra-high-resolution optical coherence tomography,”
Arch. Ophthalmol. 125(8), 1027-1035 (2007)
Cornsweet, T.N.,”Determination of the stimuli for involuntary drifts and saccadic eye
movements,” J. Opt. Soc. Am. 46(11), 987-988 (1956).
Daily, L. and Coe, R. E., “Lack of effect of anesthetic and mydriatic solutions on the
curvature of the cornea,” Am. J. Ophthalmol., 53, 49-51 (1962).
Doane, M.G., “An instrument for in vivo tear film interferometry,’’ Optom. Vis. Sci. 66,
383–388 (1989).
deGroot, P., and Lega, X.C, “Interpreting interferometric height measurements using the
instrument transfer function,” in Fringe 2005, Osten, W. (ed.), Springer, 30-37
(2005).
Dubra, A., Paterson, C., and Dainty, C., “Study of the tear topography dynamics using a
lateral shearing interferometer” Opt. Express 12(25), 6278-6288 (2004).
Fanning, D., “Blob_Analyzer,” Coyote’s Guide to IDL Programming. Retrieved from
http://www.idlcoyote.com/programs/blob_analyzer__define.pro (2011)
Fogt, N., King-Smith, P.E., and Tuell, G., ‘‘Interferometric measurement of tear film
thickness by use of spectral oscillations,’’ J. Opt. Soc. Am. A 15(1), 268–275
(1998).
Förch, R., Schönherr, H., and Jenkins, A.T.A., Surface Design: Applications in
Bioscience and Nanotechnology, Wiley-VCH, 471-472 (2009).
Gappinger, R.O., Greivenkamp, J.E., and Borman, C., “High-modulation camera for use
with a non-null interferometer,” Opt. Eng 43(3), 689-696 (2004).
Gaskill, J., Linear Systems, Fourier Transforms, and Optics, Wiley (1978).
Goodwin, E., and Wyant, J., Field Guide to Interferometric Optical Testing, SPIE Press
(2006).
Grosvenor, T. and Ratnakaram, R., “Is the relationship between keratometric astigmatism
and refractive astigmatism linear?” Optom. Vis. Sci. 67(8), 606-609 (1990).
Greivenkamp, J.E., and Bruning, J.H., “Phase shifting interferometry” in Optical Shop
Testing, 2nd Edition, Malacara, D. (ed.), Wiley (1992).
Guillon, P.J., “Tear film structure and contact lenses,” in The Preocular Tear Film: In
Health, Disease and Contact Lens Wear, Holly, F.J. (ed.), Dry Eye Institute, 914-939 (1986).
Hariharan, P., Basics of Interferometry, Second Edition, Academic Press, (2006).
Hecht, E., Optics, 4th Edition, Addison Wesley, (2002).
Held, R., “The rediscovery of adaptability in the visual system: Effects of extrinsic and
intrinsic chromatic dispersion,” in Visual Coding and Adaptability, Harris, C. S.
(ed.), Lawrence Erlbaum, 69-94 (1980).
Henderson, R. and Schulmeister, K., Laser Safety, Taylor and Francis Group, 32 (2004).
Holly, F.J., and Lemp, M.A., “Tear physiology and dry eyes,” Surv. Ophthalmol. 22(2),
69-87 (1977).
Holly, F.J., “Tear film physiology,” Am. J. Optom. Phys. Opt. 57, 252-257 (1980).
International Commission on Non-Ionizing Radiation Protection (ICNIRP), “Revision of
guidelines on limits of exposure to laser radiation of wavelengths between 400 nm
and 1.4 µm,” Health Physics 79(4), 431-440 (2000).
International Commission on Non-Ionizing Radiation Protection (ICNIRP), “Guidelines
on limits of exposure to broad-band incoherent optical radiation (0.38 to 3 µm),”
Health Physics 73, 539-554 (1997).
International Electrotechnical Commission (IEC), “Safety of Laser Products—Part I:
Equipment Classification, Requirements and Users Guide,” IEC Publication
60825-1, Edn. 2, IEC, Geneva (2007).
International Standards Organization (ISO), “Ophthalmic Instruments - Fundamental
requirements and test methods—Part 2: Light hazard protection,” International
Standard, ISO 15004-2:2007, Geneva (2007).
Kasprzak, H.T., and Iskander, D.R., “Ultrasonic measurement of fine head movements in
a standard ophthalmic headrest,” IEEE Transactions on Instrumentation and
Measurement 59(1), 0018-9456 (2010).
Kasprzak, H., Kowalik, W., and Jaronski, J., “Interferometric measurements of fine
corneal topography,” Proc. SPIE 2329, 32-39 (1995).
Kasprzak, H.T., Licznerski, T.J., “Influence of the characteristics of the tear film breakup on the point spread function of the eye model,” Proc. SPIE 3820, 390 (1999).
Keller, P.R., Collins, M.J., Carney, L.G., Davis, B.A., and van Saarloos, P.P., “The
relation between corneal and total astigmatism,” Optom. Vis. Sci. 73(2), 86-91
(1996).
Kiely, P.M., Smith, G., and Carney, L.G., “The mean shape of the human cornea,” Optica
Acta 29(8), 1027-1040, (1982).
Kimbrough, B., and Millerd, J., “The spatial frequency response and resolution
limitations of pixelated mask spatial carrier based phase shifting interferometry,”
Proc. SPIE 7790, 77900K (2010).
Kulawiec, A., Bauer, M., DeVries, G., Fleig, J., Forbes, G., Miladinovic, D., and
Murphy, P., “Subaperture stitching interferometry of high-departure aspheres by
incorporating configurable null optics,” Proc. SPIE TD06-44 (2009).
Licznerski, T.J., Kasprzak, H.T., and Kowalik, W. ‘‘Analysis of shearing interferograms
of the tear film by the use of fast Fourier transform,’’ J. Biomed. Opt. 3(1), 32–44
(1998-1).
Licznerski, T.J., Kasprzak, H.T., and Kowalik, W., ‘‘Two interference techniques for in
vivo assessment of the tear film stability on a cornea and a contact lens,’’ Proc.
SPIE 3320, 183–186 (1998-2).
Licznerski, T.J., Kasprzak, H.T., Kowalik, W., “Application of Twyman-Green
interferometer for evaluation of in vivo breakup characteristics of the human tear
film,” J. Biomed. Opt. 4(1), 176-182 (1999).
Malacara, D., Optical Shop Testing, Third Edition, Wiley, 564-568 (2007).
Malacara, D., Malacara, Z., and Servin, M., Interferogram Analysis For Optical Testing,
Second Edition, CRC Press (2005).
Marshall, W.H., and Talbot, S.A., “Visual Mechanisms,” in Biological Symposia 7,
Cattell Press, 117 (1942).
Malacara, Z., and Malacara, D., “Design of lenses to project the image of a pupil in
optical testing interferometers,” App. Opt. 34(4), 739-742 (1995).
Mchugh, S., “Sharpening using an unsharp mask,” Cambridge In Colour website,
Retrieved from http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm
(July 2011).
Mellish, B., “Wire-grid-polarizer.svg,” Image available under GNU Free Documentation
License, Retrieved from http://en.wikipedia.org/wiki/File:Wire-grid-polarizer.svg
(2011).
Mengher, L.S., Bron, A.J., Tonge, S.R., Gilbert, and D.J., ‘‘A noninvasive instrument for
clinical assessment of the precorneal tear stability,’’ Curr. Eye Res. 4(1), 1–7
(1985).
MIL-HDBK-141. Optical Design. US Department of Defense (1962).
Miller, J., Personal communication, Department of Opthalmology, University of Arizona,
655 N. Alvernon Way, Tucson, AZ 85711 (2009).
Millerd, J.E., Brock, N.J., Hayes, J.B., North-Morris, M.B., Novak, M., and Wyant, J.C.,
“Pixelated phase-mask dynamic interferometer,” Proc. SPIE 5531, 304 (2004).
Mishima, S., “Some physiological aspects of the precorneal tear film,” Arch.
Ophthalmol. 73(2), 233-241 (1965).
Montés-Micó, R., Alió, J.L., Muñoz, G, Pérez-Santonja, J.J., and Charman, W.N.,
“Postblink changes in total and corneal aberrations,” Ophthalmol. 111(4), 758-767 (2004)
Montés-Micó, R., “Role of the tear film in the optical quality of the human eye,” J.
Cataract Refract. Surg. 33, 1631-1635 (2007).
Németh, J., Erdélyi, B., Csákány, B., Gáspár, P., Soumelidis, A., Kablesz, F and Lang,
Z., “High-speed videotopographic measurement of tear film build-up time,”
Invest. Ophthalmol. Vis. Sci. 43(6), 1783-1790 (2002).
Nichols, J.J., and King-Smith, P.E., “Thickness of the pre- and post-contact lens tear film
measured in vivo by interferometry,” Invest. Opthalmol. Vis. Sci. 44(1), 68-77
(2003).
Norn, M.S., ‘‘Tear film breakup time. A review,’’ in The Preocular Tear Film: In
Health, Disease and Contact Lens Wear, Holly, F.J. (ed.), Dry Eye Institute, 52–
55 (1986).
Novak, M., Millerd, J., Brock, N., North-Morris, M., Hayes, J., and Wyant, J., “Analysis
of a micropolarizer array-based simultaneous phase-shifting interferometer,” App.
Opt. 44(32), 6861-6868 (2005-1)
Novak, M., “Micropolarizer Phase-Shifting Array for Use In Dynamic Interferometry,”
Ph.D. Dissertation, University of Arizona, Optical Sciences (2005).
Patel, S. et al., ‘‘Effects of fluorescein on tear breakup time and on tear thinning time,’’
Am. J. Optom. Physiol. Opt. 62, 188–190 (1985).
Rudder, S.L., Connoly, J.C., and Stechman, G.J., “Hybrid ECL/DBR wavelength and
spectrum stabilized laser demonstrate high power and narrow spectral linewidth,”
Proc. SPIE 6101, 61010I (2006).
Schwartz, S.H., Geometrical and Visual Optics: A Clinical Introduction, McGraw-Hill,
219-221 (2002).
Schwiegerling, J., OPTI 435/535 Visual Optics, Class Notes (2010).
Schwiegerling, J., Field Guide to Visual and Ophthalmic Optics, SPIE Press (2004).
Schwiegerling, J., Greivenkamp, J.E., and Miller, J.M., "Representation of
videokeratoscopic height data with Zernike polynomials," J. Opt. Soc. Am. A
Opt. Image Sci. Vis. 12(10), 2105-2113 (1995).
Selberg, L.A., “Interferometer accuracy and precision,” Proc. SPIE 1400, 24 (1991).
Shannon, R.R., The Art and Science of Optical Design, Cambridge University Press
(1997).
Sliney D., Aron-Rosa D., DeLori F., Fankhouser F., Landry R., Mainster M., Marshall J.,
Rassow B., Stuck B., Trokel S., West T., and Wolfe M., “Adjustment of
guidelines for exposure of the eye to optical radiation from ocular instruments: a
statement from a task group of the International Commission on Non-Ionizing
Radiation Protection,” App. Opt. 44(11), 2162-2176 (2005).
Stenstrom, S., “Investigation of the variation and the correlation of the optical elements
of human eyes,” Am. J. Optom. Am. Acad. Optom. 25(8), 388-397 (1948).
Steinman, R.M., Haddad, G.M., Skavenski, A.A., and Wyman, D., “Miniature eye
movement,” Science, New Series 181(4102), 810-819 (1973).
Steinman, R.M., Cunitz, R.J., Timberlake, G.T., and Herman, M., “Voluntary control of
microsaccades during maintained monocular fixation,” Science, New Series
155(3769), 1577-1579 (1967).
Steinman, R.M., “Effect of target size, luminance, and color on monocular fixation,” J.
Opt. Soc. Am. 55(9), 1158-1165 (1965).
Sullivan, J.J., Personal communication, College of Optical Sciences, University of
Arizona, 1630 E University Blvd., Tucson, AZ 85721 (2010).
Szczesna, D.H., Jaronski, J., Kasprzak, H.T, and Stenevi, U., “Interferometric
measurements of the tear film irregularities on the human cornea,” Proc. SPIE
5959, 59590A (2005).
Takeda, M., Ina, H., and Kobayashi, S., “Fourier-transform method of fringe-pattern
analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72,
156-160 (1982).
Thai, L.C., Tomlinson, A., Doane, M.G., “Effect of contact lens materials on tear
physiology,” Optom. Vis. Sci. 81(3), 194-204 (2004).
U.S. Food and Drug Administration (US FDA), "Laser Products—Conformance with
IEC 60825-1, Am. 2 and IEC 60601-2-22; Final Guidance for Industry and FDA,"
Laser Notice No. 50, FDA/CDRH Office of Compliance, Rockville, MD (2007).
Wang, J., Fonn, D., Simpson, T.L., and Jones, L., “Precorneal and pre- and postlens tear
film thickness measured indirectly with optical coherence tomography,” Invest.
Opthalmol. Vis. Sci. 44(6), 2524-2528 (2003).
Wang, J., Jiao, S., Ruggeri, M., and Wehbe, H., “Direct visualization of tear film on soft
contact lens using ultra-high resolution spectral domain optical coherence
tomography,” Proc. of SPIE 6844, 68441E (2008).
World Health Organization (WHO), “Environmental Health Criteria No. 23, Lasers and
Optical Radiation,” joint publication of the United Nations Environmental
Program, the International Radiation Protection Association and the World Health
Organization, Geneva (1982).
Wyant, J.C., “Use of an ac heterodyne lateral shear interferometer with real-time
wavefront correction systems,” App. Opt. 14, 2622 (1975).
Wyant, J.C., OPTI 513R Optical Testing, Class Notes (2008).