digital light field photography
a dissertation
submitted to the department of computer science
and the committee on graduate studies
of stanford university
in partial fulfillment of the requirements
for the degree of
doctor of philosophy
Ren Ng
July 2006
© Copyright by Ren Ng 2006
All Rights Reserved
I certify that I have read this dissertation and that, in my opinion, it is fully
adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Patrick Hanrahan
Principal Adviser
I certify that I have read this dissertation and that, in my opinion, it is fully
adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Marc Levoy
I certify that I have read this dissertation and that, in my opinion, it is fully
adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy.
Mark Horowitz
Approved for the University Committee on Graduate Studies.
Acknowledgments

I feel tremendously lucky to have had the opportunity to work with Pat Hanrahan, Marc
Levoy and Mark Horowitz on the ideas in this dissertation, and I would like to thank them
for their support. Pat instilled in me a love for simulating the flow of light, agreed to take me
on as a graduate student, and encouraged me to immerse myself in something I had a passion
for. I could not have asked for a finer mentor. Marc Levoy is the one who originally drew me
to computer graphics, has worked side by side with me at the optical bench, and is vigorously
carrying these ideas to new frontiers in light field microscopy. Mark Horowitz inspired me
to assemble my camera by sharing his love for dismantling old things and building new ones.
I have never met a professor more generous with his time and experience.
I am grateful to Brian Wandell and Dwight Nishimura for serving on my orals committee. Dwight has been an unfailing source of encouragement during my time at Stanford.
I would like to acknowledge the fine work of the other individuals who have contributed
to this camera research. Mathieu Brédif worked closely with me in developing the simulation
system, and he implemented the original lens correction software. Gene Duval generously
donated his time and expertise to help design and assemble the prototype, working even
through illness to help me meet publication deadlines. Andrew Adams and Meng Yu contributed software to refocus light fields more intelligently. Kayvon Fatahalian contributed
the most to explaining how the system works, and many of the ray diagrams in these pages
are due to his artistry.
Assembling the prototype required custom support from several vendors. Special thanks
to Keith Wetzel at Kodak Image Sensor Solutions for outstanding support with the photosensor chips. Thanks also to John Cox at Megavision, Seth Pappas and Allison Roberts at
Adaptive Optics Associates, and Mark Holzbach at Zebra Imaging.
In addition, I would like to thank Heather Gentner and Ada Glucksman at the Stanford Graphics Lab for providing mission-critical administrative support, and John Gerth
for keeping the computing infrastructure running smoothly.
Thanks also to Peter Catrysse, Brian Curless, Joyce Farrell, Keith Fife, Abbas El Gamal,
Joe Goodman, Bert Hesselink, Brad Osgood, and Doug Osheroff for helpful discussions
related to this work.
A Microsoft Research Fellowship has supported my research over the last two years. This
fellowship gave me the freedom to think more broadly about my graduate work, allowing me
to refocus my graphics research on digital photography. A Stanford Birdseed Grant provided
the resources to assemble the prototype camera. I would also like to express my gratitude
to Stanford University and Scotch College for all the opportunities that they have given me
over the years.
I would like to thank all my wonderful friends and colleagues at the Stanford Graphics Lab.
I can think of no finer individual than Kayvon Fatahalian, who has been an exceptional
friend to me both in and out of the lab. Manu Kumar has been one of my strongest supporters, and I am very grateful for his encouragement and patient advice. Jeff Klingner is a
source of inspiration with his infectious enthusiasm and amazing outlook on life. I would
especially like to thank my collaborators: Eric Chan, Mike Houston, Greg Humphreys, Bill
Mark, Kekoa Proudfoot, Ravi Ramamoorthi, Pradeep Sen and Rui Wang. Special thanks
also to John Owens, Matt Pharr and Bennett Wilburn for being so generous with their time
and expertise.
I would also like to thank my friends outside the lab, the climbing posse, who have helped
make my graduate years so enjoyable, including Marshall Burke, Steph Cheng, Alex Cooper,
Polly Fordyce, Nami Hayashi, Lisa Hwang, Joe Johnson, Scott Matula, Erika Monahan, Mark
Pauly, Jeff Reichbach, Matt Reidenbach, Dave Weaver and Mike Whitfield. Special thanks
are due to Nami for tolerating the hair dryer, spotlights, and the click of my shutter in the
name of science.
Finally, I would like to thank my family, Yi Foong, Beng Lymn and Chee Keong Ng, for
their love and support. My parents have made countless sacrifices for me, and have provided
me with steady guidance and encouragement. This dissertation is dedicated to them.
Contents

1 Introduction
  1.1 The Focus Problem in Photography
  1.2 Trends in Digital Photography
  1.3 Digital Light Field Photography
  1.4 Dissertation Overview
2 Light Fields and Photographs
  2.1 Previous Work
  2.2 The Light Field Flowing into the Camera
  2.3 Photograph Formation
  2.4 Imaging Equations
3 Recording a Photograph's Light Field
  3.1 A Plenoptic Camera Records the Light Field
  3.2 Computing Photographs from the Light Field
  3.3 Three Views of the Recorded Light Field
  3.4 Resolution Limits of the Plenoptic Camera
  3.5 Generalizing the Plenoptic Camera
  3.6 Prototype Light Field Camera
  3.7 Related Work and Further Reading
4 Digital Refocusing
  4.1 Previous Work
  4.2 Image Synthesis Algorithms
  4.3 Theoretical Refocusing Performance
  4.4 Theoretical Noise Performance
  4.5 Experimental Performance
  4.6 Technical Summary
  4.7 Photographic Applications
5 Signal Processing Framework
  5.1 Previous Work
  5.2 Overview
  5.3 Photographic Imaging in the Fourier Domain
    5.3.1 Generalization of the Fourier Slice Theorem
    5.3.2 Fourier Slice Photograph Theorem
    5.3.3 Photographic Effect of Filtering the Light Field
  5.4 Band-Limited Analysis of Refocusing Performance
  5.5 Fourier Slice Digital Refocusing
  5.6 Light Field Tomography
6 Selectable Refocusing Power
  6.1 Sampling Pattern of the Generalized Light Field Camera
  6.2 Optimal Focusing of the Photographic Lens
  6.3 Experiments with Prototype Camera
  6.4 Experiments with Ray-Trace Simulator
7 Digital Correction of Lens Aberrations
  7.1 Previous Work
  7.2 Terminology and Notation
  7.3 Visualizing Aberrations in Recorded Light Fields
  7.4 Review of Optical Correction Techniques
  7.5 Digital Correction Algorithms
  7.6 Correcting Recorded Aberrations in a Plano-Convex Lens
  7.7 Simulated Correction Performance
    7.7.1 Methods and Image Quality Metrics
    7.7.2 Case Analysis: Cooke Triplet Lens
    7.7.3 Correction Performance Across a Database of Lenses
8 Conclusion
A Proofs
  A.1 Generalized Fourier Slice Theorem
  A.2 Filtered Light Field Imaging Theorem
  A.3 Photograph of a Four-Dimensional Sinc Light Field
List of Figures

1 Introduction
  Coupling between aperture size and depth of field
  Demosaicking to compute color
  Refocusing after the fact in digital light field photography
  Dissertation roadmap
2 Light Fields and Photographs
  Parameterization for the light field flowing into the camera
  The set of all rays flowing into the camera
  Photograph formation in terms of the light field
  Photograph formation when focusing at different depths
  Transforming ray-space coordinates
3 Recording a Photograph's Light Field
  Sampling of a photograph's light field provided by a plenoptic camera
  Overview of processing the recorded light field
  Raw light field photograph
  Conventional photograph computed from the light field photograph
  Sub-aperture images in the light field photograph
  Epipolar images in the light field photograph
  Microlens image variation with main lens aperture size
  Generalized light field camera: ray-space sampling
  Generalized light field camera: raw image data
  Prototype camera body
  Microlens array in prototype camera
  Schematic and photographs of prototype assembly
4 Digital Refocusing
  Examples of refocusing and extended depth of field
  Shift-and-add refocus algorithm
  Aliasing in under-sampled shift-and-add refocus algorithm
  Comparison of sub-aperture image and digitally extended depth of field
  Improvement in effective depth of focus in the light field camera
  Experimental test of refocusing performance: visual comparison
  Experimental test of refocusing performance: numerical analysis
  Experimental test of noise reduction using digital refocusing
  Refocusing and extending the depth of field
  Light field camera compared to conventional camera
  Fixing a mis-focused portrait
  Maintaining a blurred background in a portrait of two people
  The sensation of discovery in refocus movies
  High-speed light field photographs
  Extending the depth of field in landscape photography
  Digital refocusing in macro photography
  Moving the viewpoint in macro photography
5 Signal Processing Framework
  Fourier-domain relationship between photographs and light fields
  Fourier-domain intuition for theoretical refocusing performance
  Range of Fourier slices for exact refocusing
  Photographic Imaging Operator
  Classical Fourier Slice Theorem
  Generalized Fourier Slice Theorem
  Fourier Slice Photograph Theorem
  Filtered Light Field Photography Theorem
  Fourier Slice Refocusing Algorithm
  Source of artifacts in Fourier Slice Refocusing
  Two main classes of artifacts
  Correcting rolloff artifacts
  Reducing aliasing artifacts by oversampling
  Reducing aliasing artifacts by filtering
  Aliasing reduction by zero-padding
  Quality comparison of refocusing in the Fourier and spatial domains
  Quality comparison of refocusing in the Fourier and spatial domains II
6 Selectable Refocusing Power
  A family of plenoptic cameras with decreasing microlens size
  Different configurations of the generalized light field camera
  Derivation of the generalized light field sampling pattern
  Predicted effective resolution and optical mis-focus as a function of β
  Decreasing β trades refocusing power for image resolution
  Comparison of physical and simulated data for generalized camera
  Simulation of extreme microlens defocus
  mtf comparison of trading refocusing power and image resolution
  mtf comparison of trading refocusing power and image resolution II
7 Digital Correction of Lens Aberrations
  Spherical aberration
  Ray correction function
  Comparison of epipolar images with and without lens aberrations
  Aberrations in sub-aperture images of a light field
  Classical reduction in spherical aberration by stopping down the lens
  Classical reduction in aberrations by adding glass elements to the lens
  Ray-space illustration of digital correction of lens aberrations
  Ray weights in weighted correction
  Set-up for plano-convex lens prototype
  Image evaluation of digital correction performance
  Comparison of physical and simulated data for digital lens correction
  Comparison of weighted correction with stopping down lens
  psf and rms measure
  Effective pixel size
  Aberrated ray-trace and ray-space of a Cooke triplet lens
  Spot diagrams for triplet lens
  Histogram of triplet-lens psf size across imaging plane
  mtf of triplet lens with and without correction (infinity focus)
  mtf of triplet lens with and without correction (macro focus)
  Ray-space of triplet lens at infinity and macro focus
  Database of lenses
  Performance of digital correction on lens database
1 Introduction

This dissertation introduces a new approach to everyday photography, which solves the longstanding problems related to focusing images accurately. The root of these problems is missing information. It turns out that conventional photographs tell us rather little about the
light passing through the lens. In particular, they do not record the amount of light traveling along individual rays that contribute to the image. They tell us only the sum total of light
rays striking each point in the image. To make an analogy with a music-recording studio,
taking a conventional photograph is like recording all the musicians playing together, rather
than recording each instrument on a separate audio track.
In this dissertation, we will go after the missing information. With micron-scale changes
to its optics and sensor, we can enhance a conventional camera so that it measures the light
along each individual ray flowing into the image sensor. In other words, the enhanced camera samples the total geometric distribution of light passing through the lens in a single
exposure. The price we will pay is collecting much more data than a regular photograph.
However, I hope to convince you that the price is a very fair one for a solution to a problem
as pervasive and long-lived as photographic focus. In photography, as in recording music, it
is wise practice to save as much of the source data as you can.
Of course simply recording the light rays in the camera is not a complete solution to the
focus problem. The other ingredient is computation. The idea is to re-sort the recorded light
rays to where they should ideally have terminated, to simulate the flow of rays through the
virtual optics of an idealized camera into the pixels of an idealized output photograph.
chapter . introduction
1.1 The Focus Problem in Photography
Focus has challenged photographers since the very beginning. In 1839, the Parisian magazine Charivari reported the following problems with Daguerre's brand-new photographic process [Newhall ].
You want to make a portrait of your wife. You fit her head in a fixed iron collar
to give the required immobility, thus holding the world still for the time being.
You point the camera lens at her face; but alas, you make a mistake of a fraction
of an inch, and when you take out the portrait it doesn’t represent your wife –
it’s her parrot, her watering pot – or worse.
Facetious as it is, the piece highlights the practical difficulties experienced by early photographers. In doing so, it identifies three manifestations of the focus problem that are as real today as they were back in 1839.
The most obvious problem is the burden of focusing accurately on the subject before
exposure. A badly focused photograph evokes a universal sense of loss, because we all take
it for granted that we cannot change the focus in a photograph after the fact. And focusing
accurately is not easy. Although modern auto-focus systems provide assistance, a mistake
of a “fraction of an inch” in the position of the film plane may mean accidentally focusing past your model onto the wall in the background – or worse. This is the quintessential
manifestation of the focus problem.
The second manifestation is closely related. It is the fundamental coupling between the
size of the lens aperture and the depth of field – the range of depths that appears sharp in the
resulting photograph. As a consequence of the nature in which a lens forms an image, the
depth of field decreases as the aperture size increases. This relationship establishes one of the
defining tensions in photographic practice: how should I choose the correct aperture size?
On the one hand, a narrow aperture extends the depth of field and reduces blur of objects away from the focal plane – in Figures 1.1a–c, the arches in the background become clearer as the aperture narrows. On the other hand, a narrow aperture requires a longer exposure, increasing the blur due to the natural shake of our hands while holding the camera and movement in the scene – notice that the woman's waving hand blurs out in Figures 1.1a–c.
[Figure 1.1: Coupling between aperture size and depth of field, shown for (a) wide, (b) medium and (c) narrow apertures; narrower apertures require correspondingly longer exposures. An aperture of f/n means that the width of the aperture is 1/n the focal length of the lens.]
Today’s casual picture-taker is slightly removed from the problem of choosing the aperture size, because many modern cameras automatically try to make a good compromise
given the light level and composition. However, the coupling between aperture size and
depth of field affects the decisions made before every photographic exposure, and remains
one of the fundamental limits on photographic freedom.
The third manifestation of the focus problem forces a similarly powerful constraint on
the design of photographic equipment. The issue is control of lens aberrations. Aberrations
are the phenomenon where rays of light coming from a single point in the world do not
converge to a single focal point in the image, even when the lens is focused as well as possible.
This failure to converge is a natural consequence of using refraction (or reflection) to bend
rays of light to where we want them to go – some of the light inevitably leaks away from the
desired trajectory and blurs the final image. It is impossible to focus light with geometric
perfection by refraction and reflection, and aberrations are therefore an inescapable problem
in all real lenses.
chapter . introduction
Controlling aberrations becomes more difficult as the lens aperture increases in diameter, because rays passing through the periphery of the lens must be bent more strongly to
converge accurately with their neighbors. This fact places a limit on the maximum aperture
of usable lenses, and limits the light gathering power of the lens. In the very first weeks of
the photographic era in 1839, exposures with small lens apertures were so long (measured
in minutes) that many portrait houses actually did use a “fixed iron collar” to hold the subject’s head still. In fact, many portraits were taken with the subject’s eyes closed, in order to
minimize blurring due to blinking or wandering gaze [Newhall ]. One of the crucial developments that enabled practical portraiture in 1841 was Petzval's mathematically-guided
design of a new lens with reduced aberrations and increased aperture size. This lens was 
times as wide as any previous lens of equivalent focal length, enabling exposures that were 
times shorter than before. Along with improvements in the sensitivity of the photographic
plates, exposure times were brought down to seconds, allowing people who were being photographed to open their eyes and remove their iron collars.
Modern exposures tend to be much shorter – just fractions of a second in bright light –
but the problem is far from solved. Some of the best picture-taking moments come upon
us in the gentle light of early morning and late evening, or in the ambient light of building
interiors. These low-light situations require such long exposures that modern lenses can
seem as limiting as the portrait lenses before Petzval. These situations force us to use the
modern equivalents of the iron collar: the tripod and the electronic flash.
Through these examples, I hope I’ve conveyed that the focus problem in photography
encompasses much more than simply focusing on the right thing. It is fundamentally also
about light gathering power and lens quality. Its three manifestations place it at the heart of
photographic science and art, and it loves to cause mischief in the crucial moments preceding the click of the shutter.
1.2 Trends in Digital Photography
If the focus problem is our enemy in this dissertation, digital camera technology is our arsenal. Commoditization of digital image sensors is the most important recent development
in the history of photography, bringing a new-found sense of immediacy and freedom to
picture making. For the purposes of this dissertation, there are two crucial trends in digital
camera technology: an excess in digital image sensor resolution, and the notion that images
are computed rather than directly recorded.
Digital image sensor resolution is growing exponentially, and today it is not uncommon to see commodity cameras with ten megapixels (mp) of image resolution [Askey ].
Growth has outstripped our needs, however. There is a growing consensus that raw sensor
resolution is starting to exceed the resolving power of lenses and the output resolution of displays and printers. For example, for the most common photographic application of printing 4” × 6” prints, more than 2 mp provides little perceptible improvement [Keelan ].
What the rapid growth hides is an even larger surplus in resolution that could be produced, but is currently not. Simple calculations show that photosensor resolutions in excess
of  mp are well within today’s level of silicon technology. For example, if one were to
use the designs for the smallest pixels present in low-end cameras (. micron pitch) on the
large sensor die sizes in high-end commodity cameras ( mm× mm) [Askey ], one
would be able to print a sensor with resolution approaching  mp. There are at least two
reasons that such high resolution sensors are not currently implemented. First, it is an implicit acknowledgment that we do not need that much resolution in output images. Second,
decreasing pixel size reduces the number of photons collected by each pixel, resulting in
lower dynamic range and signal-to-noise ratio (snr). This trade-off is unacceptable at the
high-end of the market, but it is used at the low-end to reduce sensor size and miniaturize the
overall camera. The main point in highlighting these trends is that a compelling application
for sensors with a very large number of small pixels will not be limited by what can actually be printed in silicon. However, this is not to say that implementing such high-resolution
chips would be easy. We will still have to overcome significant challenges in reading so many
pixels off the chip efficiently and storing them.
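As a sanity check on the "simple calculations" mentioned above, here is the back-of-the-envelope arithmetic with illustrative numbers; the 24 mm × 36 mm die and 1.9 micron pitch are assumptions chosen for the example, not figures from the text:

```python
# Back-of-the-envelope pixel count for a hypothetical sensor:
# a full-frame die populated with small low-end pixels.
die_w_mm, die_h_mm = 36.0, 24.0  # assumed die size
pitch_um = 1.9                   # assumed pixel pitch

pixels = (die_w_mm * 1000 / pitch_um) * (die_h_mm * 1000 / pitch_um)
print(f"{pixels / 1e6:.0f} mp")  # ~239 mp
```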
Another powerful trend is the notion that, in digital photography, images are computed,
not simply recorded. Digital image sensors enabled this transformation by eliminating the
barrier between recording photographic data and processing it. The quintessential example
of the computational approach to photography is the way color is handled in almost all commodity digital cameras. Almost all digital image sensors sample only one of the three rgb
(red, green or blue) color primaries at each photosensor pixel, using a mosaic of color filters
chapter . introduction
Figure .: Demosaicking to compute color.
in front of each pixel as shown in Figure 1.2a. In other words, each pixel records only one of the red, green or blue components of the incident light. Demosaicking algorithms [Ramanath et al. ] are needed to interpolate the mosaicked color values to reconstruct full rgb color at each output image pixel, as shown in Figures 1.2b and c. This approach enables
color imaging using what would otherwise be an intensity-only, gray-scale sensor. Other
examples of computation in the imaging pipeline include: combining samples at different sensitivities [Nayar and Mitsunaga 2000] in order to extend the dynamic range [Debevec and Malik 1997]; using rotated photosensor grids and interpolating onto the final image grid to better match the perceptual characteristics of the human eye [Yamada et al. ]; automatic white-balance correction to reduce color cast due to imbalance in the illumination spectrum [Barnard et al. ]; in-camera image sharpening; and image warping to
undo field distortions introduced by the lens. Computation is truly an integral component of modern photography.

[Figure 1.2: Demosaicking to compute color.]
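To make the demosaicking step concrete, here is a minimal sketch of bilinear interpolation on an RGGB Bayer mosaic. It is a toy stand-in for the more sophisticated algorithms surveyed by Ramanath et al., not the method of any particular camera:

```python
import numpy as np

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Bilinearly interpolate an RGGB Bayer mosaic to full RGB.

    raw holds one color sample per pixel: R at (even, even),
    G at (even, odd) and (odd, even), B at (odd, odd).
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3))
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; known[0::2, 0::2, 0] = 1  # R
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; known[0::2, 1::2, 1] = 1  # G
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; known[1::2, 0::2, 1] = 1  # G
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; known[1::2, 1::2, 2] = 1  # B

    def window_sum(a):  # sum over each pixel's 3x3 neighborhood
        return sum(np.roll(np.roll(a, di, 0), dj, 1)
                   for di in (-1, 0, 1) for dj in (-1, 0, 1))

    for c in range(3):  # average the known samples near each pixel
        counts = window_sum(known[:, :, c])
        rgb[:, :, c] = window_sum(rgb[:, :, c]) / np.maximum(counts, 1)
    return rgb
```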
In summary, present-day digital imaging provides a very rich substrate for new photographic systems. The two key nutrients are an enormous surplus in raw sensor resolution,
and the proximity of processing power for flexible computation of final photographs.
1.3 Digital Light Field Photography
My proposed solution to the focus problem exploits the abundance of digital image sensor
resolution to sample each individual ray of light that contributes to the final image. This
super-representation of the lighting inside the camera provides a great deal of flexibility and
control in computing final output photographs. The set of all light rays is called the light field
in computer graphics. I call this approach to imaging digital light field photography.

[Figure 1.3: Refocusing after the fact in digital light field photography.]
To record the light field inside the camera, digital light field photography uses a microlens
array in front of the photosensor. Each microlens covers a small array of photosensor pixels.
The microlens separates the light that strikes it into a tiny image on this array, forming a
miniature picture of the incident lighting. This samples the light field inside the camera
in a single photographic exposure. A microlens should be thought of as an output image
pixel, and a photosensor pixel value should be thought of as one of the many light rays that
contribute to that output image pixel.
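In data terms, this means the raw sensor image can be rearranged into the four-dimensional light field L(x, y, u, v): the microlens index supplies the spatial coordinates and the pixel's position under its microlens supplies the directional coordinates. A minimal sketch, assuming the microlens images tile into neat k × k pixel blocks (real sensor data would first require calibrating the microlens centers):

```python
import numpy as np

def raw_to_light_field(raw: np.ndarray, k: int) -> np.ndarray:
    """Rearrange a raw plenoptic image into L[x, y, u, v].

    raw: (ny*k, nx*k) sensor image, where each k-by-k block of
    pixels is the tiny image formed under one microlens.
    x, y index microlenses (spatial axes); u, v index the pixel's
    position under its microlens (directional axes).
    """
    ny, nx = raw.shape[0] // k, raw.shape[1] // k
    blocks = raw.reshape(ny, k, nx, k)  # axes: (y, v, x, u)
    return blocks.transpose(2, 0, 3, 1)  # axes: (x, y, u, v)
```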
To process final photographs from the recorded light field, digital light field photography
uses ray-tracing techniques. The idea is to imagine a camera configured as desired, and trace
the recorded light rays through its optics to its imaging plane. Summing the light rays in this
imaginary image produces the desired photograph. This ray-tracing framework provides a
general mechanism for handling the undesired non-convergence of rays that is central to
the focus problem. What is required is imagining a camera in which the rays converge as
desired in order to drive the final image computation.
For example, let us return to the first manifestation of the focus problem – the burden of
having to focus the camera before exposure. Digital light field photography frees us of this
chore by providing the capability of refocusing photographs after exposure (Figure 1.3). The
solution is to imagine a camera with the depth of the film plane altered so that it is focused
chapter . introduction
as desired. Tracing the recorded light rays onto this imaginary film plane sorts them to a
different location in the image, and summing them there produces the images focused at
different depths.
The same computational framework provides solutions to the other two manifestations
of the focus problem. Imagining a camera in which each output pixel is focused independently severs the coupling between aperture size and depth of field. Similarly, imagining
a lens that is free of aberrations yields clearer, sharper images. Final image computation
involves taking rays from where they actually refracted and re-tracing them through the
perfect, imaginary lens.
1.4 Dissertation Overview
Organizational Themes
The central contribution of this dissertation is the introduction of the digital light field photography system: a general solution to the three manifestations of the focus problem discussed in this introduction. The following four themes unify presentation of the system and
analysis of its performance in the coming chapters.
• System Design: Optics and Algorithms This dissertation discusses the optical principles
and trade-offs in designing cameras for digital light field photography. The second part of
the systems contribution is the development of specific algorithms to address the different
manifestations of the focus problem.
• Mathematical Analysis of Performance Three mathematical tools have proven particularly useful in reasoning about digital light field photography. The first is the traditional
tracing of rays through optical systems. The second is a novel Cartesian ray-space diagram that unifies visualizations of light field recording and photograph computation. The
third is Fourier analysis, which yields the simplest way to understand the relationship between light fields and photographs focused at different depths. These tools have proven
remarkably reliable at predicting system performance.
• Computer-Simulated Validation Software ray-tracing enables computer-aided plotting
of ray traces and ray-space diagrams. Furthermore, when coupled with a complete computer graphics rendering system, it enables physically-accurate simulation of light fields
and final photographs from hypothetical optical designs.
• A Prototype Camera and Experimental Validation The most tangible proof of system viability is a system that works, and this dissertation presents a second-generation prototype
light field camera. This implementation provides a platform for in-depth physical validation of theoretical and simulated performance. The success of these tests provides some
reassurance as to the end-to-end viability of the core design principles. In addition, I have
used the prototype to explore real live photographic scenarios beyond the reach of theoretical analysis and computer simulation.
Dissertation Roadmap
I have tried to write and illustrate this dissertation in a manner that will hopefully make it accessible and interesting to a broad range of readers, including lay photographers, computer graphics researchers, optical engineers and those who enjoy mathematics. Figure 1.4 is a map of some paths that one might choose through the coming chapters, and the territory that one would cover. Photographers will be most interested in the images and discussion of Sections . – ., and may wish to begin their exploration there. Chapters 2 – 4 assume knowledge of calculus at the level of a first year college course, but it is not essential to develop the right intuition and digest the main ideas. Chapters 5 – 7 may be read in any order. They present more sophisticated analysis and variations of the system, and employ more specialized mathematics and abstract reasoning.

[Figure 1.4: Roadmap.]
Chapter Descriptions
• Chapter  introduces notation and reviews the link between light fields and photographs.
• Chapter  presents the design principles, optics, and overall processing concepts for the
chapter . introduction
system. It also describes the prototype camera.
• Chapter  develops theory and algorithms to compute photographs with arbitrary focus
and depth of field, and presents experimental validation of predicted performance. It contains a gallery of images shot with the prototype camera.
• Chapter  applies Fourier analysis to refocusing performance. This style of thinking leads
to a fast Fourier-domain algorithm for certain kinds of light field post-processing.
• Chapter  continues the development of performance trade-offs, presenting a dynamic
means to exchange image resolution and refocusing power in the light field camera.
• Chapter  develops processing algorithms to reduce the effects of lens aberrations, testing
performance with the prototype camera and computer simulation.
• Chapter  summarizes lessons learned and points to future directions.
2 Light Fields and Photographs
Photographs do not record most of the information about the light entering the camera. For
example, if we think about the light deposited at one pixel, a photograph tells us nothing
about how the light coming from one part of the lens differs from the light coming from
another. It turns out that these differences are the crucial pieces of missing information that
lead to the focus problem in conventional photography.
This chapter introduces the notion of the total geometric distribution of light flowing into
a camera, the light field inside the camera, and defines a very useful graphical representation
of the light field called the ray-space. By plotting the footprint of a conventional photograph
on the ray-space, we can visualize which parts of the light field we measure, and which parts
are missing. These are the building blocks for later chapters that deal with recording and
processing the full light field.
2.1 Previous Work
Thinking about the geometric distribution of light flowing in the world has a long history.
Adelson and Bergen [1991] point to examples in the notebooks of Leonardo da Vinci. Levoy and Hanrahan [1996] trace the mathematical formulations of the total distribution back to early work on illumination engineering by Gershun [1936] in Moscow, and advanced by Moon and Spencer [1981] and others in America.
In the last decade and a half, thinking in terms of the total flow of light has become very
chapter . light fields and photographs
popular in vision and computer graphics, finding a well-deserved, central position in the
theories of these fields. Adelson and Bergen were amongst the first in this modern intellectual movement, leading the way with an influential paper that defined the total geometric
distribution of light as a plenoptic function over the 5d space of rays – 3d for each spatial position and 2d for each direction of flow. They introduced the plenoptic function in order to systematically study how our visual systems might extract geometric information from the images that we see [Adelson and Bergen 1991]. The ray-based model is a natural abstraction for the light flowing in the world, because light is conserved as it propagates along a ray. The precise measure for the light traveling along a ray is defined in radiometry as radiance [Preisendorfer ], but for the purposes of this thesis it will largely suffice to think
of light as a scalar value traveling along each ray (or a scalar value for each color channel).
The notion of the total distribution as a 4d light field, which is the one used in this thesis, was introduced to computer graphics by Levoy and Hanrahan [1996] and Gortler et al. [1996] (who called it the Lumigraph). The reduction from 5d plenoptic function to 4d
light field works by restricting attention to rays passing through free-space – regions free of
occluders, such as opaque objects, and scattering media, such as fog. In this case, the light
traveling along a ray is constant along its length, eliminating one dimension of variation.
The resulting light field is closely related to the epipolar volumes developed by Bolles, Baker
and Marimont [] in studying robot vision.
One reason for reducing the dimensionality was making measurement feasible. For example, the 1996 papers described methods for taking thousands of pictures of an object to sample the 4d space, allowing synthetic views of it to be computed from any viewpoint outside its convex hull. The idea of computing synthetic views from a database of images originated with Chen and Williams [1993], and was first cast in terms of the plenoptic function
by McMillan and Bishop []. Following this early work, research on image-based rendering techniques exploded in popularity, presenting an alternative to traditional methods
based on explicit modeling of surface geometry, reflection properties and source lighting.
Shum and Kang [2000] survey some of the earlier work, and the best place to sample the
current state of the art is each year’s proceedings of the siggraph conference. The next two
1. From the Latin plenus for complete or full, they explain.
2. Light field was a term first used by Gershun [1936] in Russian, translated by Moon and Timoshenko.
chapters review more related work in the context of recording light fields, refocusing and
Fourier analysis.
With respect to this chapter, the one issue worth calling out is the choice of parameterization. The original parameterizations [Levoy and Hanrahan 1996; Gortler et al. 1996]
were based on the intersection of rays with two planes, which had the prime advantage of
simplicity. One limitation was uneven sampling density, motivating explorations in more
uniform representations [Camahort et al. ] and representations projected onto the surface of objects [Miller et al. ; Wood et al. ; Chen et al. ]. Another limitation
was the inability to reliably extrapolate viewing position into the scene because of occluders, motivating representations that incorporated depths or opacities along rays [Shade et al.
; Buehler et al. ; Matusik et al. ]. These improved performance in specific scenarios, but at the expense of increased complexity in representation and processing. Practical and general representations are still something of an unsolved problem in image-based rendering.
2.2 The Light Field Flowing into the Camera
One of the core ideas in this thesis is to restrict ourselves to looking at the light field inside
the camera body. With this narrowing of scope, the appropriate representation becomes
refreshingly simple and general. The issue of uniformity is easily solved because all the light
originates from the window of the lens aperture. In addition, the problems with occlusion
are gone – the inside of a camera is empty by definition. Nevertheless, I hope you’ll be
convinced that the light field inside the camera is a rich light field indeed, and that studying
it teaches us a great deal about some of the oldest problems in photography.
In this thesis we will use the parameterization of the light field shown in Figure 2.1, which
describes each ray by its intersection points with two planes: the film and the aperture inside
the lens. The two-plane parameterization is a very natural fit for the light field inside the
camera because every ray that contributes to a photograph passes through the lens aperture
and terminates somewhere on the film.
[Figure 2.1: Parameterization for the light field flowing into the camera.]

In the ray diagram on the left of Figure 2.1, a single ray is shown passing through the
lens aperture at u as it refracts through the glass of the lens, and terminating at position x on
chapter . light fields and photographs
Figure .: Parameterization for the light field flowing into the camera.
the photosensor. Let us refer to u as the directional axis, because the u intercept on the lens
determines the direction at which the ray strikes the sensor. In addition, let us refer to x as
the spatial axis. Of course in general the ray exists in 3d and we would consider intersections (u, v) at the lens and (x, y) on the film plane. Let us refer to the value of the light field along the depicted ray as L(x, y, u, v), or L(x, u) if we are considering the 2d simplification.
The Cartesian ray-space diagram on the right in Figure 2.1 is a more abstract representation of the two-dimensional light field. The ray depicted on the left is shown as a point (x, u) on the Cartesian ray-space. In general each possible ray in the diagram on the left corresponds to a different point on the ray-space diagram on the right, as suggested by Figure 2.2. The function defined over the ray-space plane is the 2d light field. Adelson and Bergen [1991] used these kinds of diagrams to illustrate simple features in the plenoptic function. Levoy and Hanrahan [1996] used it to visualize the density of rays in a sampled light field, and it has become very common in the light field literature.

[Figure 2.2: The set of all rays flowing into the camera.]
2.3 Photograph Formation
In a conventional camera, a photograph forms on a piece of photosensitive material placed
inside the camera at the imaging plane. The material may be silver-halide film in traditional
cameras, where photons cause the development of silver crystals, or a ccd or cmos photosensor in a digital camera, where photons generate free electrons that accumulate in each
sensor pixel. Each position on the photosensitive imaging plane sums all the rays of light
that terminate there.
In general, the weight of each ray in the sum depends on its incident direction with the
sensor plane. For example, radiometry predicts that rays from the periphery of the lens,
chapter . light fields and photographs
Figure .: The cone of rays summed to produce one pixel in a photograph.
which arrive at the sensor from more oblique angles, contribute less energy to the value of
the pixel. Another example is that the photosensitive portion of a pixel in a cmos sensor is
typically obscured by an overlay of metal wires [Catrysse and Wandell ], so rays from
unshadowed directions will contribute more light. Nevertheless, these directional effects
are in some sense undesirable artifacts due to physical or implementation limitations, and
Figure . neglects them in illustrating the formation of somewhat idealized photographs.
Figure . draws in blue the cone of rays contributing to one photograph pixel value.
This cone corresponds (in d) to the blue vertical strip on the ray-space diagram because
the rays in the cone share the same x film intercept, but vary over all u positions on the lens.
Of course different pixels in the photograph have different x intercepts, so they correspond
to different vertical lines on the ray-space.
In fact, the ray-space drawn in Figure 2.3 is overlaid with vertical strips, where each strip
is the set of rays summed by a different photograph pixel. This drawing shows that the formation of a full photograph corresponds on the ray-space diagram to a vertical projection of
the light field values. The projection preserves the spatial x location of the rays, but destroys
the directional u information.
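Numerically, this vertical projection is just an integral over u for each x. A minimal sketch for a sampled 2d light field, with the uniform ray weights of the idealized photograph described above:

```python
import numpy as np

def photograph(L: np.ndarray, du: float) -> np.ndarray:
    """Project a sampled 2d light field L[x, u] to a 1d photograph.

    Each output pixel sums the rays over all lens positions u --
    a vertical projection of the ray-space diagram that keeps the
    spatial x location but discards the directional u information.
    """
    return L.sum(axis=1) * du
```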
The preceding discussion is, however, limited to the photograph that forms on a piece of film that coincides with the x parameterization plane. Later chapters study the computation of photographs focused at different depths from a recording of the light field, and this kind of digital refocusing depends on an understanding of the representation of photographs focused at different depths in terms of the ray-space diagram in Figure 2.3.
Figure . illustrates how the projection of the ray space changes as the camera is focused
at different depths. In these diagrams, the x plane is held fixed at a canonical depth while the
film plane of the camera moves. Changing the separation between the film and the lens is
how we focus at different depths in a conventional camera. For example, turning the focus
ring on a photographic lens simply slides the glass elements along the axis of the lens. In
Figure .a, the film plane is moved further from the lens, and the world focal plane moves
closer to the camera. The cone of blue rays corresponds to the blue strip with positive slope
on the ray-diagram. In contrast, Figure .b shows that when the camera is focused further
in the world, the corresponding vertical strip on the ray space has negative slope.
The slope of the ray-space strip can be understood by the fact that the convergence point
of the rays moves away from the x film plane. As the intercept of a ray moves linearly across
the u lens plane, the resulting intercept moves linearly across the x film plane. If the convergence point is further from the lens than the x plane, then the movement across the u plane is
in the same direction as the movement across the x plane (Figure 2.4a). These two directions are opposed if the convergence point is in front of the x plane (Figure 2.4b). These figures
make it visually clear that the relative rates of the movements, hence slopes on the ray-space
diagram, depend on the separation between the convergence point and the x plane.
This separation is the same for every pixel in the photograph because the world focal
plane is parallel to the x plane, so the slant of each pixel is the same. Indeed, Figure 2.4 shows that the entire sampling grid shears to the left if the focus is closer than the x plane, and to the right if the focus is further. This is the main point of this chapter: a photograph is an integral projection of the canonical light field, where the trajectory of the projection depends on the depth at which the photograph is focused.

1. This is limited to the most common type of cameras, where the film and the lens are parallel, but not to view cameras, where they may be tilted relative to one another. In a view camera, the projection of the ray-space is fan-shaped.

[Figure 2.4: The projection of the light field corresponding to focusing further and closer than the chosen x parameterization plane for the light field.]
2.4 Imaging Equations
The d simplification above is well suited to visualization and high-level intuition, and will
be used for that purpose throughout this thesis. However, a formal mathematical version
of the d representation is also required for the development of analysis and algorithms. To
conclude this introduction to photographs and light fields, this section derives the equations
relating the canonical light field to photographs focused at different depths. As we will see
in Chapter , this mathematical relationship is a natural basis for computing photographs
focused at different depths from the light fields recorded by the camera introduced in the
next chapter.
The image that forms inside a conventional camera, as depicted in Figure 2.3, is proportional to the irradiance on the film plane. Classical radiometry shows that the irradiance from the aperture of a lens onto a point on the film is equal to the following weighted integral of the radiance coming through the lens [Stroebel et al. 1986]:

$$E_F(x, y) = \frac{1}{F^2} \iint L_F(x, y, u, v)\, \cos^4\theta \;\mathrm{d}u\,\mathrm{d}v, \tag{2.1}$$

where F is the separation between the exit pupil of the lens and the film, E_F(x, y) is the irradiance on the film at position (x, y), L_F is the light field parameterized by the planes at separation F, and θ is the angle between ray (x, y, u, v) and the film plane normal. The cos⁴θ term is a well-known falloff factor sometimes referred to as optical vignetting. It represents the reduced effect of rays striking the film from oblique directions. However, Equation 2.1 ignores all other directional dependences such as surface microstructure in cmos sensors. For simplicity, Equation 2.1 also assumes that the uv and xy planes are infinite in extent, and that L is simply zero beyond the physical bounds of the lens and sensor. To further simplify the equations in the derivations throughout the thesis, let us also absorb the cos⁴θ term into the definition of the light field itself, by re-defining L(x, y, u, v) := L(x, y, u, v) cos⁴θ.
chapter . light fields and photographs
This re-definition is possible without reducing accuracy for two reasons. First, we will only be dealing with re-parameterizations of the light field that change the separation between the parameterization planes. Second, θ depends only on the angle that the ray makes with the light field planes, not on their separation.
Let us now turn our attention to the equations for photographs focused at depths other than the x parameterization plane. As shown in Figure 2.4, focusing at different depths corresponds to changing the separation between the lens and the film plane, resulting in a shearing of the trajectory of the integration lines on the ray-space. If we consider the photograph focused at a new film depth of F′, then deriving its imaging equation is a matter of expressing L_{F′}(x′, u) in terms of L_F(x, u) and then applying Equation 2.1.

[Figure 2.5: Transforming ray-space coordinates.]
The diagram above is a geometric construction that illustrates how a ray parameterized by the x and u planes for L_F may be re-parameterized by its intersection with planes x′ and u for L_{F′}. By similar triangles, the illustrated ray that intersects the lens at u and the film plane at x′ also intersects the x plane at u + (x′ − u) F/F′. Although the diagram only shows the 2d case involving x and u, the y and v dimensions share an identical relationship. As a result, if we define α = F′/F as the relative depth of the film plane,
$$L_{F'}(x', y', u, v) = L_F\!\left(u + \frac{x' - u}{\alpha},\; v + \frac{y' - v}{\alpha},\; u, v\right) = L_F\!\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{x'}{\alpha},\; v\left(1 - \frac{1}{\alpha}\right) + \frac{y'}{\alpha},\; u, v\right). \tag{2.2}$$
This equation formalizes the 4d shear of the canonical light field that results from focusing at different depths. Although the shear of the light field within the camera has not been studied
in this way before, it has been known for some time that the two-plane light field shears when one changes its parameterization planes. It was observed by Isaksen et al. [2000] in a study of dynamic reparameterization and synthetic focusing, and is a basic transformation in much of the recent research on the properties of the light field [Chai et al. 2000; Stewart et al. 2003; Durand et al. 2005; Vaish et al. 2004; Ng 2005]. The shear can also be seen in the fundamental matrix for light propagation in the field of matrix optics [Gerrard and Burch 1975].
Combining Equations 2.1 and 2.2 leads to the final equation for the pixel value (x′, y′) in the photograph focused on a piece of film at depth F′ = α · F from the lens plane:

$$E_{(\alpha \cdot F)}(x', y') = \frac{1}{\alpha^2 F^2} \iint L_F\!\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{x'}{\alpha},\; v\left(1 - \frac{1}{\alpha}\right) + \frac{y'}{\alpha},\; u, v\right) \mathrm{d}u \,\mathrm{d}v. \tag{2.3}$$
This equation formalizes the notion that imaging can be thought of as shearing the 4d light field, and then projecting down to 2d.
3 Recording a Photograph's Light Field
This chapter describes the design of a camera that records the light field on its imaging plane
in a single photographic exposure. Aside from the fact that it records light fields instead of
regular photographs, this camera looks and operates largely like a regular digital camera. The
basic idea is to insert an array of microlenses in front of the photosensor in a conventional
camera. Each microlens covers multiple photosensor pixels, and separates the light rays that
strike it into a tiny image on the pixels underneath (see, for example, Figure 3.3).
The use of microlens arrays in imaging is a technique called integral photography that was pioneered by Lippmann [1908] and greatly refined by Ives [1928; 1930]. Today microlens
arrays are used in many ways in diverse imaging fields. Parallels can be drawn between
various branches of engineering, optics and the study of animal vision, and the end of this
chapter surveys related work.
If the tiny images under each microlens are focused on the main lens of the camera,
the result is something that Adelson and Wang [1992] refer to as a plenoptic camera. This configuration provides maximum directional resolution in recorded rays. Section 3.5 of this chapter introduces a generalized light field camera with configurable microlens focus.
Allowing the microlenses to defocus provides flexible trade-offs in performance.
Section . describes the design and construction of our prototype light field camera.
This prototype provides the platform for the experiments in subsequent chapters, which
elaborate on how to post-process the recorded light field to address different aspects of the
focus problem.
chapter . recording a photograph’s light field
3.1 A Plenoptic Camera Records the Light Field
The plenoptic camera comprises a main photographic lens, a microlens array, and a digital
photosensor, as shown in Figure 3.1. The scale in this figure is somewhat deceptive, because
the microlenses are drawn artificially large to make it possible to see them and the overall
camera at the same scale. In reality they are microscopic compared to the main lens, and so is
the gap between the microlenses and the photosensor. In this kind of camera, the microlens
plane is the imaging plane, and the size of the individual microlenses sets the spatial sampling rate.

A grid of boxes lies over the ray-space diagram in Figure 3.1. This grid depicts the sampling of the light field recorded by the photosensor pixels, where each box represents the
bundle of rays contributing to one pixel on the photosensor. To compute the sampling grid,
rays were traced from the boundary of each photosensor pixel out into the world through
its parent microlens array and through the glass elements of the main lens. The intercept
of the ray with the microlens plane and the lens plane determined its ( x, u) position on the
ray-space diagram. As an aside, one may notice that the lines in the ray-space diagram are
not perfectly straight, but slightly curved. The vertical curvature is due to aberrations in the
optical design of the lens. Correcting such defects is the subject of Chapter . The curvature
of the boundary at the top and bottom is due to movement in the exit pupil of the aperture
as one moves across the x plane.
Each sample box on the ray-space corresponds to a narrow bundle of rays inside the
camera. For example, Figure . depicts two colored sample boxes, with corresponding ray bundles on the ray diagram. A column of ray-space boxes corresponds to the set of all rays striking a microlens, which are optically sorted by direction onto the pixels underneath the microlens, as shown by the gray boxes and rays in Figure .. If we summed the gray photosensor pixel samples in Figure . b, we would compute the value of an output pixel the size of the microlens, in a photograph focused on the optical focal plane.

Figure .: Sampling of a photograph's light field provided by a plenoptic camera.
These examples highlight how different the plenoptic camera sampling grid is compared
to that for a conventional camera (compare Figure .). In the conventional camera, all the
grid cells extend from the top to the bottom of the ray space, and their corresponding set of
rays in the camera is a cone that subtends the entire aperture of the lens. In the conventional
camera the width of a grid column is the width of a photosensor pixel. In the plenoptic
camera, on the other hand, the grid cells are shorter and wider. The column width is the
width of a microlens, and the column is vertically divided into the number of pixels across
the width of the microlens. In other words, the plenoptic camera sampling grid provides
more specificity in the u directional axis but less specificity in the x spatial axis, assuming a
constant number of photosensor pixels.
This is the fundamental trade-off taken by the light field approach to imaging. For a fixed
sensor resolution, collecting directional resolution results in lower resolution final images,
with essentially as many pixels as microlenses. On the other hand, using a higher-resolution
sensor allows us to add directional resolution by collecting more data, without necessarily
sacrificing final image resolution. As discussed in the introductory chapter, much higher
resolution sensors may be possible in today’s semiconductor technology. Finally, Section .
chapter . recording a photograph’s light field
shows how to dynamically configure the light field camera to record a different balance of
spatial and directional resolution.
Computing Photographs from the Light Field
This thesis relies on two central concepts in processing the light field to produce final photographs. The first is to treat synthesis of the photograph as a simulation of the image formation process that would take place in a desired virtual camera. For example, Figure .a
illustrates in blue all the rays that contribute to one pixel in a photograph refocused on the
indicated virtual focal plane. As we know from Chapter , this blue cone corresponds to a
slanted blue line on the ray-space, as shown in Figure . b. To synthesize the value of the
pixel, we would estimate the integral of light rays on this slanted line.
The second concept is that the radiance along rays in the world can be found by ray-tracing. Specifically, to find the radiance along the rays in the slanted blue strip, we would
geometrically trace the ideal rays from the world through the main lens optics, through the
microlens array, down to the photosensor surface. This process is illustrated macroscopically by the blue rays in Figure .a. Figure . b illustrates a close-up of the rays tracing
through the microlenses down to the photosensor surface. Each of the shaded sensor pixels
corresponds to a shaded box on the ray-space of Figure . b. Weighting and summing
these boxes estimates the ideal blue strip on Figure . b. The number of rays striking each
photosensor pixel in Figure . b determines its weight in the integral estimate. This process can be
thought of as rasterizing the strip onto the ray-space grid and summing the rastered pixels.
There are more efficient ways to estimate the integral for the relatively simple case of
refocusing, and optimizations are presented in the next chapter. However, the ray-tracing
concept is very general and subsumes any camera configuration. For example, it handles the
case of the generalized light field camera model where the microlenses defocus, as discussed
in Section . and Chapter , as well as the case where the main lens exhibits significant
optical aberrations, as discussed in Chapter .
A final comment regards the first concept of treating image synthesis as simulating the
flow of light in a real camera. This approach has the appealing quality of producing final
images that look like ordinary, familiar photographs, as we shall see in Chapter . However,
it should be emphasized that the availability of a light field permits ray-tracing simulations of
much more general imaging configurations, such as view cameras where the lens and sensor
are not parallel, or even imaging where each pixel is focused with a different depth of field
or a different depth. This final example plays a major role in the next chapter.

Figure .: Overview of processing the recorded light field.
Three Views of the Recorded Light Field
Figure .: Raw light field photograph read off the photosensor underneath the microlens array. The figure shows a crop of approximately one quarter the full image so that the microlenses are clearly visible in print.

In sampling the light traveling along all rays inside the camera, the light field provides rich information about the imaged scene. One way to think of the data is that it captures the lighting incident upon each microlens. A second, equivalent interpretation is that the light
field provides pictures of the scene from an array of viewpoints spread over the extent of the
lens aperture. Since the lens aperture has a finite area, these different views provide some
parallax information about the world. This leads to the third property of the light field—it
provides information about the depth of objects in the scene.
The sections below highlight these different interpretations with three visualizations of
the light field. Each of these visualizations is a flattening of the 4D light field into a 2D array of
images. Moving across the array traverses two dimensions of the light field, and each image
represents the remaining two dimensions of variation.
Raw Light Field Photograph
The simplest view of the recorded light field is the raw image of pixel values read off the
photosensor underneath the microlens array, as shown in Figure .. Macroscopically, the
raw image appears like a conventional photograph, focused on the girl wearing the white
cap, with a man and a girl blurred in the background. Looking more closely, the raw image is actually composed of an array of disks, where each
disk is the image that forms underneath one microlens. Each of these microlens images is circular because it is a picture of the round aperture of
the lens viewed from that position on the film. In
other words, the raw light field photograph is an
( x, y) grid of images, where each image shows us
the light arriving at that film point from different
(u, v) positions across the lens aperture.
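To make this (x, y) grid of (u, v) images concrete, here is a small numpy sketch that rearranges a raw sensor image into a 4D array. It assumes idealized data in which each microlens image occupies an axis-aligned p × p block of pixels; a real camera would require calibration for grid rotation and offset, which is omitted here.

```python
import numpy as np

def raw_to_4d(raw, p):
    """Rearrange a raw light field photograph into a 4D array L[y, x, v, u].

    raw : 2D sensor image, assumed perfectly aligned so that each microlens
          covers one p-by-p block of pixels.
    p   : pixels across one microlens image (directional samples per axis).
    """
    ny, nx = raw.shape[0] // p, raw.shape[1] // p
    blocks = raw[:ny * p, :nx * p].reshape(ny, p, nx, p)
    return blocks.transpose(0, 2, 1, 3)   # -> [y, x, v, u]
```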
Figure .: Conventional photograph computed from the light field photograph in Figure ..

Figure .: Sub-aperture images of the light field photograph in Figure .. Images z and z are close-ups of the indicated regions at the top and bottom of the array, respectively.

The zoomed images at the bottom of Figure . show detail in the microlens images in two parts of the scene: the nose of the man in the background who is out of focus (Image z), and the nose of the girl in the foreground who is in focus (Image z). Looking at Image z, we see that the light coming from different parts of the lens is not the same. The light coming from the left side of the microlens images originates on the man's nose. The light coming from the right side originates from behind the man. This effect can be understood by thinking of the rays
tracing from different parts of the aperture out into the world. They pass through a point on
the focal plane in front of the man, and diverge — some striking the man and others missing
him to strike the background. In a conventional photograph, all the light would be summed
up, leading to a blur around the profile of the man’s nose as in Figure ..
In contrast, the zoomed image of the girl's nose in Figure . z reveals microlens images of constant color. Since she is on the world focal plane, each microlens receives light coming from a single point on her face. Since her skin reflects light diffusely, that is, equally in all directions, all the rays have the same color. Although not all materials are diffuse, most reflect the same light over the relatively small extent of a camera aperture in most photographic situations.
Sub-Aperture Images
The second view of the recorded light field is an array of what I call its sub-aperture images,
as shown in Figure .. I computed this view by transposing the pixels in the raw light field
photograph. Each sub-aperture image is the result of taking the same pixel underneath each
microlens, at the offset corresponding to (u, v) for the
desired sub-region of the main lens aperture.
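In array terms, and continuing with the hypothetical L[y, x, v, u] layout sketched earlier, extracting a sub-aperture image is a single slice:

```python
def sub_aperture(lf, v, u):
    """Sub-aperture image: the same (v, u) pixel taken under every microlens."""
    return lf[:, :, v, u]          # one 2D image, one pixel per microlens
```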
Macroscopically, the array of images is circular
because the aperture of the lens is circular. Indeed,
each image in the array is a picture of the scene from
the corresponding (u, v) position on the circular aperture.

Figure: Difference in pixel values between the two zoomed images in Figure ..

The zoomed images at the bottom of Figure . show sub-aperture images from the top and bottom of the lens aperture. Although these images look similar, examining the differences between the two images, as shown on the right, reveals a parallax shift between the girl in the foreground and the man in the background. The man appears several pixels higher in Figure . z, from the bottom of the array, because of the lower viewpoint.
This disparity is the source of blurriness in the image of the man in the conventional
chapter . recording a photograph’s light field
Figure .: Epipolar images of the light field photograph in Figure .. Images z and z are
close-ups of the indicated regions at the top and bottom of the array, respectively.
.. three views of the recorded light field
photograph of Figure .. Computing the conventional photograph that would have formed
with the full lens aperture is equivalent to summing the array of sub-aperture images, summing all the light coming through the lens.
Epipolar Images
The third view is the most abstract, presenting an array of what are called epipolar images of
the light field (see Figure .). Each epipolar image is the 2D slice of the light field where y
and v are fixed, and x and u vary. In Figure ., y varies up the array of images and v varies
to the right. Within each epipolar image, x increases horizontally (with a spatial resolution
of  pixels), and u varies up the image (with a directional resolution of about  pixels). Thus, the
zoomed images in Figure . show five ( x, u) epipolar images, arrayed vertically.
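Under the same assumed L[y, x, v, u] layout, an epipolar image is another slice, fixing y and v:

```python
def epipolar(lf, y, v):
    """Epipolar image for fixed (y, v): x varies horizontally, u vertically."""
    return lf[y, :, v, :].T        # rows index u (direction), columns x (space)
```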
These zoomed images illustrate the well-known
fact that depth of objects in the scene can be estimated from the slope of lines in the epipolar images [Bolles et al. ; Forsyth and Ponce ].
The greater the slope, the greater the disparity as we
move across u on the lens, indicating a further distance from the world focal plane.

Figure: Rows of pixels shown in epipolar images of Figures . z and z.

For example, the zoomed image of Figure . z corresponds to five rows of pixels in a conventional photograph that cut through the nose of the girl in the foreground and the arm of the girl in blue in the background, as shown on the image on this page. In Image z, the negative slope of the blue lines corresponds to the further distance of the girl in blue. The vertical lines of the nose of the
girl in the foreground show that she is on the focal plane. As another example, Figure . z
comes from the pixels on the nose of the man in the middle ground. The intermediate slope
of these lines indicates that the man is sitting between the two girls.
An important interpretation of these epipolar images is that they are graphs of the light
field in the parameterization of the 2D ray-space diagrams such as Figure . b. Figure . provides a database of such graphs for different (y, v) slices of the 4D light field.
chapter . recording a photograph’s light field
Resolution Limits of the Plenoptic Camera
As we have seen, the size of the microlenses determines the spatial resolution of the light field
sampling pattern. This section describes how to optimize the microlens optics to maximize
the directional resolution, how diffraction ultimately limits the 4D resolution, and concludes with a succinct way to think about how to distribute samples in space and direction in a diffraction-limited plenoptic imaging system.
Microlens Optics
The image that forms under a microlens dictates the directional u resolution of the light
field camera system. In the case of the plenoptic camera, optimizing the microlens optics to
maximize the directional resolution means producing images of the main lens aperture that
are as sharp and as large as possible.
Maximizing the sharpness requires focusing the microlenses on the aperture plane of
the main lens. This may seem to require dynamically changing the separation between the
photosensor plane and microlens array in order to track the aperture of the main lens as
it moves during focusing and zooming. However, the microlenses are vanishingly small
compared to the main lens (for example the microlenses are  times smaller than the main
lens in the prototype camera described below), so regardless of its zoom or focus settings
the main lens is effectively fixed at the microlenses’ optical infinity. As a result, focusing the
microlenses in the plenoptic camera means cementing the microlens plane one focal length
away from the photosensor plane. This is a very convenient property, because it means that
the light field sensor comprising the microlens array and the photosensor can be constructed
as a completely passive unit if desired. However, there are benefits to dynamically changing
the separation, as explored in Section ..
The directional resolution relies not only on the clarity of the image under each microlens, but also on its size. We want it to cover as many photosensor pixels as possible.
The idea here is to choose the relative sizes of the main lens and microlens apertures so
that the images are as large as possible without overlapping. This condition means that the
f -numbers of the main lens and microlens array must be matched, as shown in Figure ..
Footnote: Principal plane of the lens, to be exact.
f /.
f /
f /
Figure .: Close-up of microlens images formed with different main lens aperture sizes.
The microlenses are f /.
The figure shows that increasing or decreasing the main lens aperture simply increases or
decreases the field of view of each microlens image. This makes sense because the microlens
image is just a picture of the back of the lens. The f -number of the microlenses in these
images is f /. At f / the field of view are maximal without overlapping. When the aperture
width is reduced by half to f /, the images are too small, and resolution is wasted. When
the main lens is opened up to f /., the images are too large and overlap.
In this context, the f -number of the main lens is not simply its aperture diameter divided
by its intrinsic focal length. Rather, we are interested in the image-side f-number, which is
the aperture diameter divided by the separation between the principal plane of the main lens
and the microlens plane. This separation is larger than the intrinsic focal length in general,
when focusing on subjects that are relatively close to the camera.
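Stating the matching condition in symbols (the notation below is mine, not the dissertation's):

$$\frac{s}{D_{\text{main}}} \;=\; \frac{f_{\mu}}{d_{\mu}},$$

where s is the separation between the main lens' principal plane and the microlens plane, D_main is the main lens aperture diameter, f_µ is the microlens focal length, and d_µ is the microlens pitch. The left-hand side is the image-side f-number of the main lens; the right-hand side is the f-number of the microlenses.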
As an aside, Figure . shows that a significant fraction of the photosensor is black because of the square packing of the microlens disks. The square packing is due to the square
layout of the microlens array used to acquire the light field. The packing may be optimized by
using different microlens array geometries such as a hexagonal grid, which is fairly common
in integral cameras used for 3D imaging [Javidi and Okano ]. The hexagonal grid would
need to be resampled onto a rectilinear grid to compute final images, but implementing this is straightforward using the ray-tracing approach of Section .. A different approach to
reducing the black regions would be to change the aperture of the camera’s lens from a circle
to a square, allowing tight packing with a rectilinear grid.
chapter . recording a photograph’s light field
Diffraction-Limited Resolution
If we were to rely purely on ray-based, geometrical arguments, it might seem that arbitrarily
high d resolution could be achieved by continually reducing the size of the microlenses and
pixels. However, at sufficiently small scales the ray-based model breaks down and the wave
nature of light must be considered. The ultimate limit on image resolution at small scales
is diffraction. Rather than obtaining a geometrically sharp image on the imaging plane, the
effect of diffraction is to blur the 2D signal on the imaging plane, which limits the effective
resolvable resolution.
In the plenoptic camera, the diffraction blur reduces the clarity of the circular images
formed on the photosensor under each microlens. Assuming that the microlenses are larger
than the diffraction spot size, the blur appears as a degradation in the resolution along the
directional axes of the recorded light field. In other words it corresponds to a vertical blurring
of the ray-space diagram within each column of the light field sampling grid in Figure .a,
for example.
Classical wave optics predicts that the micron size of the diffraction blur on an imaging plane is determined by the aperture of highest f -number (smallest size) in the optical
train [Hecht ; Goodman ]. This is based on the principle that light spreads out
more in angle the more it is forced to pass through a small opening. The exact distribution
of the diffraction blur depends on such factors as the shape of the aperture, the wavelength
of light, and whether or not the light is coherent. Nevertheless, the dominant sense of scale
is set by the f -number. Assuming that the highest f -number aperture in the optical system
is f /n, then a useful rule of thumb is that the blur spot size (hence resolvable resolution) is
roughly n microns on the imaging plane.
Let us apply this rule of thumb in considering the design of spatial and angular resolutions in a hypothetical light field camera. Assume that the lens and microlenses are f /
and diffraction limited, and that the sensor is a standard  mm format frame measuring
 mm ×  mm. The f -number means that the diffraction-limited resolution is roughly 
microns, so let us assume that the sensor will contain pixels that are  microns wide. This
provides a raw sensor resolution of , × ,.
Our control over the final distribution of spatial and angular resolution is in the size of
the microlenses. If we choose a microlens size of  microns that covers  pixels, then we
will obtain a spatial microlens resolution of  ×  (roughly . mp) with a directional
resolution of about  × . Alternatively, we could choose a smaller microlens size, say 
microns, for roughly . mp of spatial resolution, with just  ×  directional resolution.
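The arithmetic of this design exercise is easy to script. The numbers below are stand-ins of my own choosing, since the exact values do not survive in this text: an f/4 diffraction-limited system with 4-micron pixels on an assumed 36 mm × 24 mm sensor.

```python
# Hypothetical design calculator for a diffraction-limited plenoptic camera.
f_number = 4.0
blur_um = f_number                 # rule of thumb: ~n-micron blur spot at f/n
pixel_um = blur_um                 # match pixel size to the diffraction blur
sensor_um = (36_000.0, 24_000.0)   # assumed 36 mm x 24 mm sensor
microlens_um = 40.0                # the free design parameter

directional = microlens_um / pixel_um            # pixels across each microlens
spatial = [s / microlens_um for s in sensor_um]  # microlenses across the sensor
print(f"spatial: {spatial[0]:.0f} x {spatial[1]:.0f} microlenses, "
      f"directional: {directional:.0f} x {directional:.0f} per microlens")
```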
Generalizing the Plenoptic Camera
As discussed in the previous section, the plenoptic camera model requires the microlenses
to focus on the aperture of the main lens, which means that the photosensor is fixed at the
focal plane of the microlenses. This section introduces a generalization of the camera model
to allow varying the separation between the microlenses and sensor from one focal length
down to zero, as shown in Figure .. The motivating observation is that a magnifying glass stops magnifying when it is pressed flat against a piece of paper. This intuition
suggests that we can effectively transform the plenoptic camera back into a conventional
camera by pressing the photosensor against the microlens surface.
On first consideration, however, this approach may seem problematic, since decreasing
the separation between the microlenses and photosensor corresponds to defocusing the images formed by the microlenses. In fact, analysis of microlens focus in integral photography
generally does not even consider this case [Arai et al. ]. The macroscopic analogue
would be as if one focused a camera further than infinity (real cameras prevent this, since
nothing would ever be in focus in this case). I am not aware of any other researchers who
have found a use for this kind of camera in their application domain.
Nevertheless, defocusing the microlenses in this way provides tangible benefits for our
application in digital photography. Figure . illustrates that each level of defocus provides
a different light field sampling pattern, and these have different performance trade-offs.
Figure .a illustrates the typical plenoptic separation of one microlens focal length, where
the microlenses are focused on the main lens. This configuration provides maximal directional u resolution and minimal x spatial resolution on the ray-space.
Figure .b illustrates how this changes when the separation is halved. The horizontal
lines in the ray-space sampling pattern tilt, becoming more concentrated vertically and less
so horizontally. This effect can be intuitively understood in terms of the shearing of the
light field when focusing at different depths. In Chapter , this shearing was described in
chapter . recording a photograph’s light field
Figure .: Generalized light field camera: changes in the light field sampling pattern due to
reducing the separation between the microlenses and photosensor.
terms of focus of the main lens. Here the focus change takes place in the microscopic camera composed of each microlens and its patch of pixels, and the shearing takes place in the
microscopic light field inside this microlens camera. For example, Figure .a shows  such
microlens cameras. Their microscopic light fields correspond to the  columns of the ray-space diagram, and the sampling pattern within these columns is what shears as we change
the focus of the microlenses in Figure .b and c.

Figure .: Generalized light field camera: changes in the light field sampling pattern due to reducing the separation between the microlenses and photosensor.
Figure .c illustrates how the ray-space sampling converges to a pattern of vertical
columns as the separation between microlenses and photosensor approaches zero. In this
case the microlenses are almost completely defocused, and we obtain no directional resolution, but maximal spatial resolution. That is, the values read off the photosensor for zero
separation approach the values that would appear in a conventional camera in the absence
of the microlens array.
Figure .: Simulated raw image data from the generalized light field camera under the three
configurations of microlens focus in Figure ..
Figure . shows a portion of simulated raw light field photographs of a resolution chart
for the three configurations of the generalized camera shown in Figure .. These images
illustrate how the microlens images evolve from sharp disks of the back of the main lens at
full separation (Figure .a), to filled squares of the irradiance at the microlens plane when
the separation reaches zero (Figure .c). Notice that the finer rings in the resolution chart
only resolve as we move towards smaller separations.
The prototype camera provides manual control over the separation between the microlenses and photosensor. It enables a study of the performance of the generalized light
field camera. An analysis of its properties is the topic of Chapter .
Prototype Light Field Camera
We had two goals in building a prototype light field camera. The first was to create a device
that would allow us to test our theories and simulations regarding the benefits of recording and processing light field photographs. Our second goal was to build a camera that
chapter . recording a photograph’s light field
could be used as much like an ordinary camera as possible, so that we would be able to
explore applications in a range of traditional photographic scenarios, such as portraiture,
sports photography, macro photography, etc.
Our overall approach was to take an off-the-shelf digital camera and attach a microlens
array in front of the photosensor. The advantage of this approach was simplicity. It allowed us
to rapidly prototype a working light field camera by leveraging the mechanical and electronic
foundation in an existing system. A practical issue with this approach was the restricted
working volume inside the camera body. Most digital cameras are tightly integrated devices
with little room for adding custom mechanical parts and optics. As discussed below, this
issue affected the choice of camera and the design of the assembly for attaching the microlens
array in front of the photosensor.
The main disadvantage of using an off-the-shelf camera was that we were limited to a
prototype of modest resolution, providing final images with  ×  pixels, with roughly
 ×  directional resolution at each pixel. The reason for this is that essentially all existing
sensors are designed for conventional imaging, so they provide only a moderate number of
pixels (e.g. mp) that match current printing and display resolutions. In contrast, the ideal
photosensor for light field photography is one with a very large number (e.g.  mp) of
small pixels, in order to match the spatial resolution of conventional cameras while providing extra directional resolution. As discussed in the introductory chapter, such high sensor
resolutions are theoretically possible in today's VLSI technology, but it is not one of the goals
of this thesis to address the problem of constructing such a sensor.
The issues of resolution and working volume led us to choose a medium format digital camera as the basis for our prototype. Medium format digital cameras provide the maximum
sensor resolution available on the market. They also provide the easiest access to the sensor because the digital "back," which contains the sensor, detaches completely from the body, as
shown in Figure .b.
Our digital back is a Megavision fb. The image sensor that it contains is a Kodak
kaf-ce color sensor, which has effectively  ×  pixels that are . microns
wide. For the body of our camera we chose a medium-format Contax . We used a variety
of lenses, including a  mm f /. and an  mm f /.. The wide maximum apertures on
these lenses meant that, even with extension tubes attached for macro photography, we could
achieve an f / image-side f-number to match the f-number of the microlenses.

Figure .: The medium format digital camera used in our prototype.
In choosing our microlens array we focused on maximizing the directional resolution, in
order to best study the post-processing advantages of having this new kind of information.
This goal meant choosing microlenses that were as large as possible while still allowing us
to compute final images of usable resolution.
We selected an array that contains  ×  lenslets that are  microns wide (Figure .). Figure .c is a micrograph showing that the microlenses are square shaped,
and densely packed in a rectilinear array. The fill-factor is very close to . The focal
length of the microlenses is  microns; their f -number is f /.
Figure .: The microlens array in our prototype camera.
A good introduction to microlens fabrication technology can be found in Dan Daly’s
book [], although the state of the art has advanced rapidly and a wide variety of different
chapter . recording a photograph’s light field
techniques are used in the industry today. Our microlens array was made by Adaptive Optics
Associates (part -.-s). They first manufactured a nickel master for the array using a
diamond-turning process to carve out each microlens. The master was then used to mold
our final array of resin lenses on top of a square glass window.
The two main design issues in assembling the prototype were positioning the microlenses
close enough to the sensor surface, and avoiding contact with the shutter of the camera body.
The required position of the microlenses is  microns away from the photosensor surface: the focal length of the microlenses. This distance is small enough that we were required
to remove the protective cover glass that usually seals the chip package. While this was a challenge for us in constructing this prototype, it illustrates the desirable property that a chip for
a light field sensor is theoretically no larger than the chip for a conventional sensor.
To avoid contact with the shutter of the camera we had to work within a volume of only
a few millimeters of separation, as can be seen in Figure .b. This is the volume within which
we attached the microlens array. To maximize this volume, we removed the infrared filter
that normally covers the sensor. We replaced it with a circular filter that screws on to the
front of the main camera lens.
Figure . illustrates the assembly that we used to control the separation between the
microlens array and the photosensor. We glued the microlens array window to a custom
lens holder, screwed a custom base plate to the digital back over the photosensor, and then
attached the lens holder to the base plate with three screws separated by springs. Adjusting
the three screws provided control over separation and tilt. The bottom of Figure . shows
a cross-section through the assembled parts.
We chose adjustment screws with  threads per inch to ensure sufficient control over the
separation. We found that adjusting the screws carefully provided a mechanical resolution
of roughly  microns. The accuracy required to focus the microlens images accurately is on
the order of - microns, which is the maximum error in separation for which the circle of
confusion falls within one microlens pixel. The formula for this error is just the usual depth
of focus [Smith ] given by twice the product of the pixel width (. micron) and the
f -number of the microlenses (f /).
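Restated in symbols (my notation, not the dissertation's), the tolerance is the usual depth of focus

$$\Delta \;=\; 2\, w\, N,$$

where w is the pixel width and N is the f-number of the microlenses; holding the separation error below Δ keeps the circle of confusion within one microlens pixel.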
Figure .: Left: Schematic of light field sensor assembly for attaching microlens array to
digital back. Exploded view on top, and cross-sectional view on bottom. Right: Photographs
of the assembly before and after engaging adjustment screws.
To calibrate the separation at one focal length for the plenoptic camera configuration,
we took images of a pin-hole light source with the bare light field sensor. The image that
forms on the sensor when illuminated by such a light is an array of sharp spots (one under
each microlens) when a separation of one microlens focal length is achieved. The procedure took – iterations of screw adjustments. The separation was mechanically stable after calibration
and did not require re-adjustments.
We created a high contrast pin-hole light source by stopping down the  mm main lens
to its minimum aperture and attaching  mm of extension tubes. This created an aperture
of approximately f /, which we aimed at a white sheet of paper. The resulting spot images
subtend about  sensor pixel underneath each microlens image.
Unlike most digital cameras, the Megavision digital back does not read images to on-camera
storage such as Flash memory cards. Instead, it attaches to a computer by Firewire ieee-,
chapter . recording a photograph’s light field
and reads images directly to the computer. We store raw light field photographs and use custom software algorithms, as described in later chapters, to process these raw files to produce
final photographs.
Two subsystems of our prototype require manual adjustments where an ideal implementation would use electronically-controlled motors. The first is control over the separation
between the microlens array and the photosensor. To test the performance of the generalized camera model in Chapter , we configured the camera with different separations by
manually adjusting the screws in the lens holder for the microlens array.
The second system requiring manual control is the aperture size of the main lens relative
to the focal depth. In the plenoptic camera, the optimum choice of aperture size depends on
the focal depth, in contrast to a conventional camera. The issue is that to maintain a constant
image-side f-number of f /, the aperture size must change in proportion to the separation
between the main lens and the imaging plane. This separation changes when focusing the
camera. For example, consider an aperture width that is image-side f / when the camera
is focused at infinity (with a separation of one focal length). The aperture must be doubled
when the camera is focused close-by in macro photography to produce unit magnification
(with a separation of two focal lengths).
3.7 Related Work and Further Reading
Integral Photography, Plenoptic Cameras and Shack-Hartmann Sensors
Over the course of the last century, cameras based on integral photography techniques [Lippmann ; Ives ] have been used in many fields, under different names. In the area
of d imaging, many “integral camera” variants have been studied [Okoshi ; Javidi and
Okano ]. In the last two decades especially, the advent of cheap digital sensors has
seen renewed interest in using such cameras for 3D imaging and stereo display applications [Davies et al. ; Okano et al. ; Naemura et al. ; Yamamoto and Naemura ]. This thesis takes a different approach, focusing on enhanced post-processing of ordinary 2D photographs, rather than trying to provide the sensation of 3D viewing.
Georgiev et al. [] have also explored capturing directional ray information for enhanced 2D imaging. They use an integral camera composed of a coarse array of lenses and prisms in front of the lens of an ordinary camera, to capture an array of sub-aperture images similar to the one shown in Figure .. Although their angular resolution is lower than that used here, they interpolate in the directional space using algorithms from computer vision.
A second name for integral cameras is the “plenoptic camera” that we saw earlier in this
chapter. Adelson and Wang introduced this kind of camera to computer vision, and tried
to estimate the shape of objects in the world from the stereo correspondences inherent in
its data [Adelson and Wang ]. In contrast, our application of computing better photographs turns out to be a fundamentally simpler problem, with a correspondingly more
robust solution. As an extreme example, it is impossible to estimate the depth of a red card
in front of a red wall using the plenoptic camera, but it is easy to compute the correct (all-red)
photograph. This is a general theme that the computer graphics and computer vision communities have discovered over the last thirty years: programming a computer to interpret
images is very hard, but programming it to make images turns out to be relatively easy.
A third name for integral cameras is the “Shack-Hartmann sensor” [Shack and Platt
; Platt and Shack ]. It is used to measure aberrations in optical systems by analyzing what happens to a laser beam as it passes through the optics and the microlens array.
In astronomy, Shack-Hartmann sensors are used to measure distortions in the atmosphere,
and deformable mirrors are used to optically correct for these aberrations in real-time. Such
adaptive optics took about twenty years from initial proposal [Babcock ] to the first
working systems [Hardy et al. ], but today they appear in almost all new land-based
telescopes [Tyson ]. Adaptive optics have also been applied to compensate for the aberrations of the human eye, to produce sharp images of the retina [Roorda and Williams ].
In ophthalmology, Shack-Hartmann sensors are used to measure aberrations in the human
eye for refractive surgery planning [Liang et al. ; Haro ]. Chapter  gives these approaches a twist by transforming the Shack-Hartmann sensor from an optical analysis tool
into a full-fledged imaging device that can digitally correct final images collected through
aberrated optics.
Other Related Optical Systems
To acquire light fields, graphics researchers have traditionally taken many photographs sequentially by stepping a single camera through known positions [Levoy and Hanrahan ;
chapter . recording a photograph’s light field
Isaksen et al. ]. This is simple, but slow. Instantaneous capture, allowing light field
videography, is most commonly implemented as an array of (video) cameras [Yang et al.
; Wilburn et al. ]. Sampling the light field via integral photography can be thought
of as miniaturizing the camera array into a single device. This makes acquisition as simple
as using an ordinary camera, but sacrifices the large baseline and flexibility of the array.
I would like to make comparisons between the light field camera and three other optical
systems. The first is the modern, conventional photosensor array that uses microlenses in
front of every pixel to concentrate light onto the photosensitive region [Ishihara and Tanigaki
; Gordon et al. ; Daly ]. One can interpret the optical design in this dissertation as an
evolutionary step in which we use not one detector underneath each microlens, but rather
an array of detectors capable of forming an image.
The second comparison is to artificial compound eye sensors (insect eyes) composed of a
microlens array and photosensor. This is akin to the light field sensor in our prototype without a main lens. The first 2D version of such a system appears to have been built by Ogata
et al. [], and has been replicated and augmented more recently using updated microlens
technology [Tanida et al. ; Tanida et al. ; Duparré et al. ]. These projects endeavor to flatten the traditional camera to a plane sensor, and have achieved thicknesses as
thin as a sheet of paper. However, the imaging quality of these optical designs is fundamentally inferior to a camera system with a large main lens; the resolution past these small lens
arrays is severely limited by diffraction, as noted in comparative studies of human and insect
eyes [Barlow ; Kirschfeld ].
As an aside from the biological perspective, it is interesting to note that our optical design
can be thought of as taking a human eye (camera) and replacing its retina with an insect
eye (microlens / photosensor array). No animal has been discovered that possesses such
a hybrid eye [Land and Nilsson ], but this dissertation, and the work of Adelson and
Wang, show that such a design possesses unique and compelling capabilities when coupled
with sufficient processing power.
The third comparison is to holographic stereograms, which are holograms built up by
sequentially illuminating from each direction with the appropriate view. Halle has studied
such systems in terms of discrete sampling [Halle ], identifying the cause of aliasing
artifacts that are also commonly seen in multi-camera light field sampling systems. The
similarities are more than passing, and Camahort showed that holographic stereograms can
be synthesized from light fields [Camahort ]. In fact, I have worked with Zebra Imaging
to successfully print a hologram from one of the light fields taken with our prototype light
field camera. These results open the door to single-shot holography using light field cameras,
although the cost of printing is still an issue.
Digital Refocusing
This chapter explores the simplest application of recording the light field: changing the focus
of output photographs after a single exposure. In Figure ., a-a are five photographs
computed from the raw light field discussed in Section .. That is, these five images were
computed from a single / second exposure of the prototype light field camera. They
show that we can refocus on each person in the scene in turn, extracting a striking amount
of detail that would have been irretrievably lost in a conventional photograph. Image a,
focused on the girl wearing the white cap, is what I saw in the camera viewfinder when I
clicked the shutter. It is the photograph a conventional camera would have taken.
People who see these images for the first time are often surprised at the high fidelity of
such digital refocusing from light fields. Images a-a look like the images that I saw in the
viewfinder of the camera as I turned the focus ring to focus on the girl in the white cap. The
underlying reason for this fidelity is that digital refocusing is based on a physical simulation
of the way photographs form inside a real camera. In essence, we imagine a camera focused
at the desired depth of the output photograph, and simulate the flow of rays within this
virtual camera. The software simulates a real lens, tracing the recorded light rays to where
they would have terminated in the virtual camera, and simulates a real sensor, summing the
light deposited at each point on the virtual imaging plane.
Figure .b is a different kind of computed photograph, illustrating digitally extended
depth of field. In this image, every person in the scene is in focus at the same time. This is
the image that a conventional camera would have produced if we had reduced the size of the
chapter . digital refocusing
Figure .: Examples of refocusing (a–a) and extended depth of field (b).
lens aperture in the classical photographic method of optically extending the depth of field.
Image b was computed by combining the sharpest portions of images a-a [Agarwala et al.
], and can be thought of as refocusing each pixel at the depth of the closest object in
that direction.

Figure .: Examples of refocusing (a–a) and extended depth of field (b).
A crucial advantage of the digitally extended depth of field photograph over the classical
photograph is that the former uses the light coming through a larger lens aperture. This
means that recording a light field to obtain high depth of field captures light more efficiently,
allowing less grainy images with higher signal-to-noise ratio (SNR). Sections .–. study these improvements theoretically and through numerical experimentation, demonstrating
linear improvement with the directional u resolution.
This chapter assumes that the light fields are recorded with a light field camera in the plenoptic configuration that provides maximum directional resolution. This
simplification enables easier discussion of the refocusing algorithm and performance characteristics. The added complexity of general light field camera configurations is considered
in Chapter .
Previous Work
Isaksen, McMillan and Gortler [] were the first to demonstrate virtual refocusing from
light fields. It was proposed in the original graphics paper on light field rendering [Levoy
and Hanrahan ], and sometimes goes by the name of synthetic aperture photography
in more recent work [Yang et al. ; Levoy et al. ; Vaish et al. ; Wilburn et al.
]. Vaish et al. [] consider the interesting variation of a tilted focal plane, such as
one might achieve in a view camera where the film plane may be set at an angle to the main
lens [Adams ].
These demonstrations of refocusing suffer from two problems, however. First, it is difficult to capture the required light field datasets, requiring lengthy scanning with a moving
camera, or large arrays of cameras that are not suitable for the spontaneous shooting that
we associate with regular photography. Second, the results tend to exhibit high aliasing in
blurred regions due to incomplete sampling of the virtual lens aperture (e.g. due to gaps between cameras). Light field photography addresses both these issues: the light field camera
is as easy to use as a conventional hand-held camera. In addition, the optical design reduces
aliasing drastically by integrating all the rays of light passing through the aperture.
A more theoretical point is that the method described here is the first attempt to accurately formulate refocusing in terms of the image formation process that occurs inside a real
camera. This was probably of greater interest to me because this dissertation shares a much
closer relationship with conventional photography than previous work. In the past, the relationship was much more abstract (the virtual lens diameters were sometimes over a meter
across), and the practitioners’ primary goal was simply to qualitatively reproduce the visual
effect of finite depth of field.
chapter . digital refocusing
Focus and Depth from Defocus and Auto-Focus
Other researchers have studied the problem of trying to refocus from two images focused at
different depths [Subbarao et al. ; Kubota and Aizawa ]. These methods are usually
based on a class of computer vision algorithms called depth from defocus, which estimate the
depth of objects based on the relative blur in two images focused at different depths [Krotkov
; Pentland ; Subbarao ]. Depth from defocus eventually led to impressive
systems for estimating depth from video in real-time [Nayar et al. ; Pentland et al. ].
However, the systems for refocusing from two images were less successful. Although they
could generate reasonable images for virtual focal depths close to the optical focal planes in
the input images, artifacts quickly increased and resolved detail decreased at further depths.
The fundamental problem is that mis-focus is a low-pass filter that powerfully truncates
high-frequency detail for objects at increasing distances from the optical focal plane. Basic
signal processing principles make it clear that it is unrealistic to recover the attenuated high
frequencies with good fidelity.
Extended Depth of Field
Like the algorithms discussed later in this chapter, the Wavefront Coding system of Dowski and
Johnson [] can be used to decouple the trade-off between aperture size and depth of
field. In their system they use aspheric lenses that produce images with a depth-independent
blur. Deconvolution of these images retrieves image detail at all depths. In contrast, as
described below, the approach in light field photography is to record a light field, process it
to refocus at all depths, and combine the sharpest parts of these images to extend the depth
of field, albeit with a trade-off in image resolution or amount of data collected. Dowski and
Johnson’s method may permit higher resolution at the expense of noise at edge boundaries,
but light field photography avoids the problems of deconvolving blurry images and provides
greater flexibility in image formation.
The algorithms for extending the depth of field described below rely on methods of
producing extended depth of field images from a set of images focused at different depths.
This problem has been studied by many researchers, including Ogden et al. [], Haeberli [], Eltoukhy and Kavusi [] and Agarwala et al. [] (whose software I use).
More recently there has been some attention towards maximizing depth of field in classical
light field rendering [Takahashi et al. ].
Image Synthesis Algorithms
The first half of this section describes how to compute refocused photographs from the light
field by numerical integration of the imaging equation derived in Chapter . The second half
describes how to use collections of refocused images to extend the depth of field so that the
entire image is as sharp as possible.
The ideal set of rays contributing to a pixel in digital refocusing is the set of rays that converge
on that pixel in a virtual conventional camera focused at the desired depth. Chapter  derived
Equation ., which formally specifies in its integral the set of light rays for the pixel at
position (x′, y′):

$$E_{(\alpha \cdot F)}(x', y') \;=\; \frac{1}{\alpha^2 F^2} \iint L_F\!\left( u\Big(1-\frac{1}{\alpha}\Big) + \frac{x'}{\alpha},\; v\Big(1-\frac{1}{\alpha}\Big) + \frac{y'}{\alpha},\; u,\; v \right) du\, dv.$$
Recall that in this equation $L_F$ is the light field parameterized by an xy plane at a depth of F from the uv lens plane, α is the depth of the virtual film plane relative to F, and $E_{(\alpha \cdot F)}$ is the photograph formed on virtual film at a depth of (α · F).
One way to evaluate this integral is to apply numerical quadrature techniques, such as sampling the integrand for different values of u and v and summing them. The ray-tracing procedure described in Section . is used to evaluate the integrand for these different samples of u and v. The idea is to trace the ray $(u(1 - 1/\alpha) + x'/\alpha,\; v(1 - 1/\alpha) + y'/\alpha,\; u,\; v)$ through the microlens array and down to the photosensor. The intersection point is where the ray deposited its energy in the camera during the exposure, and the value of $L_F$ is estimated from the photosensor values near this point.
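A sketch of this quadrature approach follows. Instead of the full geometric ray-tracer through the lens and microlens optics that the text describes, this illustration estimates $L_F$ by nearest-neighbor lookup directly in a recorded 4D array with the hypothetical L[y, x, v, u] layout used earlier; all names and coordinate conventions are assumptions of the sketch.

```python
import numpy as np

def refocus_quadrature(lf, alpha, n_samples=64, seed=0):
    """Estimate E_(alpha*F) by sampling the integrand at random (u, v).

    lf : 4D light field array [y, x, v, u]. Nearest-neighbor lookup stands
         in for estimating L_F from sensor values near the traced ray.
    """
    rng = np.random.default_rng(seed)
    ny, nx, nv, nu = lf.shape
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float)   # output pixels (y', x')
    image = np.zeros((ny, nx))
    for _ in range(n_samples):
        u = rng.uniform(0, nu - 1)
        v = rng.uniform(0, nv - 1)
        # Integrand position on the x plane: u(1 - 1/alpha) + x'/alpha,
        # with (u, v) measured from the aperture center.
        x = (u - (nu - 1) / 2) * (1 - 1 / alpha) + xs / alpha
        y = (v - (nv - 1) / 2) * (1 - 1 / alpha) + ys / alpha
        xi = np.clip(np.rint(x).astype(int), 0, nx - 1)
        yi = np.clip(np.rint(y).astype(int), 0, ny - 1)
        image += lf[yi, xi, int(round(v)), int(round(u))]
    return image / n_samples
```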
However, a more efficient method is suggested by the linearity of the integral with respect
to the underlying light field. Examining Equation . reveals the important observation that
refocusing is conceptually a summation of dilated and shifted versions of the sub-aperture
chapter . digital refocusing
images over the entire uv aperture.
This point is made clearer by explicitly defining the sub-aperture image at lens position (u, v) in the light field $L_F$. Let us represent this sub-aperture image by the 2D function $L_F^{(u,v)}$ such that the pixel at position (x, y) in the sub-aperture image is given by $L_F^{(u,v)}(x, y)$. With this notation, we can re-write Equation . as:

$$E_{(\alpha \cdot F)}(x', y') \;=\; \frac{1}{\alpha^2 F^2} \iint L_F^{(u,v)}\!\left( u\Big(1-\frac{1}{\alpha}\Big) + \frac{x'}{\alpha},\; v\Big(1-\frac{1}{\alpha}\Big) + \frac{y'}{\alpha} \right) du\, dv,$$

where $L_F^{(u,v)}\big(u(1 - 1/\alpha) + x'/\alpha,\; v(1 - 1/\alpha) + y'/\alpha\big)$ is simply the sub-aperture image $L_F^{(u,v)}$, dilated by a factor of α and shifted by a factor of $(u(1 - 1/\alpha),\, v(1 - 1/\alpha))$. In other words,
digital refocusing can be implemented by shifting and adding the sub-aperture images of
the light field. This technique has been applied in related work on synthetic aperture imaging
using light fields acquired with an array of cameras [Vaish et al. ; Levoy et al. ]. Let
us take a closer look at the dilation and the shift in the present case of refocusing ordinary photographs.
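Before looking at those two factors, here is a minimal sketch of the shift-and-add procedure itself, under the same assumed L[y, x, v, u] layout; scipy's ndimage.shift performs the sub-pixel resampling, and the dilation factor is ignored, for the reason discussed next.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus_shift_and_add(lf, alpha):
    """Digital refocusing by shifting and adding sub-aperture images.

    lf    : 4D light field [y, x, v, u] (layout assumed, as before).
    alpha : virtual film depth relative to F; alpha = 1 means no refocusing.
    """
    ny, nx, nv, nu = lf.shape
    out = np.zeros((ny, nx))
    for v in range(nv):
        for u in range(nu):
            # Shift of this sub-aperture image: (u(1-1/alpha), v(1-1/alpha)),
            # with (u, v) measured from the center of the aperture.
            dy = (v - (nv - 1) / 2) * (1 - 1 / alpha)
            dx = (u - (nu - 1) / 2) * (1 - 1 / alpha)
            out += nd_shift(lf[:, :, v, u], (dy, dx), order=1)
    return out / (nv * nu)
```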
The dilation factor α in Equation . actually plays no real part in computing final images—
it can simply be ignored. The reason for this is that the dilation factor is the same for all
images. Scaling all images and summing does not change the ultimate output resolution of
the synthesized photograph.
The real meaning of the dilation factor has to do with the property that digital refocusing does not alter the field of view on the subject. With most photographic lenses, optically
focusing the lens closer causes the field of view of the subject to decrease, because the magnification of the subject increases. In contrast, with digital refocusing the field of view stays
constant regardless of chosen focal depth, because the synthesized image has the field of
view of the original sub-aperture images. Only the focus changes. In other words, the dilation factor in Equation . represents the dilation in field of view relative to the field of view
that would have been obtained if we had optically focused the lens at the desired depth.
With respect to the characteristic of changing focus without changing magnification, digital
refocusing is functionally similar to a telecentric lens [Smith ], although the underlying
causes are quite different.
The shift, (u(1 −1/α), v(1 − 1/α)), of each sub-aperture image in Equation . increases
with both the distance of the sub-aperture from the center of the lens (u, v), and the relative
extent, α, to which we want to refocus away from the optical focal plane. Figure . visualizes
the shifts for three different virtual film planes, summing just two sub-aperture images, x and
y, for illustrative purposes. Using two sub-aperture images causes out-of-focus regions to
appear as twice-repeated edges rather than a uniform blur, making it easier to see the shift
effect. Figure .a corresponds to no refocusing, with α = 1, and a shift of 0 for both subaperture images. The remaining two images show that the direction of shifts depends on
whether we are focusing closer or farther to align features at the desired depth.
The minimal discretization of this algorithm is to shift-and-add just the  ×  sub-aperture images in the raw light field shown in Figure .. For most applications, the quality
of the resulting photographs is quite good. However, this process may generate undesired
chapter . digital refocusing
(a): Unrefocused
(b): Sub-aperture
(c): Undersampled, aliased
(c): Adequately sampled
Figure .: Aliasing of blurred regions in under-sampled shift-and-add refocus algorithm.
step edges in out-of-focus regions when focusing at greater distances from the optical focal
plane. These kinds of artifacts can be seen in Figure .-c, which illustrates digital refocusing of a photograph onto a chain-link fence in the extreme foreground. The step-edge
artifacts are visible as banding in the close-up of the out-of-focus region. While these artifacts are relatively subtle in still images, they become much more apparent in animations of
continuous refocusing. For reference, the unrefocused, conventional photograph is shown
in Figure .-a and one sub-aperture image is shown in Figure .-b.
These step-edge artifacts are a form of aliasing due to undersampling the (u, v) aperture
in numerical integration of Equation .. The problem occurs when the shift in neighboring
sub-aperture images differs by more than one pixel, so edges in one sub-aperture image may not be blended smoothly into the neighboring image during the summation process.
A solution is to super-sample the aperture plane, interpolating sub-aperture images at a
resolution finer than  × . The super-sampling rate is chosen so that the minimum shift
is less than one output pixel. The resulting artifact-free image is shown in Figure . c.
The extra sub-aperture images are interpolated from the nearest values in the light field.
Quadrilinear interpolation in the 4D space performs well. This process may be interpreted as a higher-order quadrature method for numerically integrating Equation ..
Another kind of artifact is a darkening around the borders of the image as we refocus
away from the focal plane. Vignetting of this kind is visible in Figure . c. The cause of this
artifact is that some of the shifted sub-aperture images do not cover this border region (see
Figure .b and c). Another way to interpret the problem is that some of the rays required
to estimate Equation . are missing. These rays fell outside the physical boundary of the
light field sensor, were never measured by the camera, and the corresponding values in the
recorded L F function are zero.
A solution for vignetting is to normalize the values of the border pixels by the fraction
of rays that were actually found in the recorded light field. For example, this fraction will be
the smallest in the most extreme border pixels. Dividing out by this fraction normalizes the
value so that its intensity matches neighboring pixels. Figure . c was computed with this
normalization procedure, eliminating the darkening evident in Figure . c.
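A sketch of this normalization, continuing the shift-and-add example above: shifting an image of ones by the same offsets counts, per output pixel, the fraction of sub-aperture images that actually covered it.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def coverage_map(shape, nv, nu, alpha):
    """Fraction of sub-aperture images covering each output pixel."""
    cover = np.zeros(shape)
    ones = np.ones(shape)
    for v in range(nv):
        for u in range(nu):
            dy = (v - (nv - 1) / 2) * (1 - 1 / alpha)
            dx = (u - (nu - 1) / 2) * (1 - 1 / alpha)
            cover += nd_shift(ones, (dy, dx), order=1, cval=0.0)
    return cover / (nv * nu)

# Usage, guarding against division by zero at the extreme borders:
# image = refocus_shift_and_add(lf, alpha)
# image /= np.maximum(coverage_map(image.shape, lf.shape[2], lf.shape[3], alpha), 1e-6)
```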
Extending the Depth of Field
There are many ways to compute an image with large depth of field from a light field. Perhaps the simplest is to extract one of the sub-aperture images. This approach can be
thought of as digitally stopping down the lens, because it corresponds to producing the image
that results from light coming through a reduced size aperture. “Stopping down the lens” is
photography jargon for selecting a smaller aperture size. The problem with digital stopping
down, as in its physical counterpart, is that it wastes the majority of the light that passes
through the full aperture. The result is grainier images with lower snr (see Figure .b).
It is useful to think concretely in terms of the number of photons involved. Let us assume
that each microlens in the prototype camera collects , photons during a particular
exposure. These , photons are distributed across the  ×  patch of pixels underneath the microlens. A sub-aperture image uses only one of the pixels
in this patch, so it uses only about  photons out of the ,.
It is possible to extend the depth of field in a far superior manner by using all the information in the light field. The main concept is to refocus each pixel, assuming a full lens
aperture, on the depth of the closest object in that direction. By using a full aperture in the
chapter . digital refocusing
(a): Unrefocused
(b): Sub-aperture image
(c): Extended dof
Figure .: Comparison of a sub-aperture image and an image computed with digitally extended depth of field.
numerical integration of Equation ., we obtain high snr by combining the contributions
of the , photons from all over the aperture. The resulting image, shown in Figure .c,
matches the depth of field of the sub-aperture image, but is far less grainy. In this dissertation, the phrase digitally extending the depth of field will be reserved for the process of computing high depth of field in this high snr manner.

Figure .: Comparison of a sub-aperture image and an image computed with digitally extended depth of field. (a): Unrefocused; (b): Sub-aperture image; (c): Extended dof.
The epipolar images in Figure . z provide a visual way to conceptualize this process.
Ordinary refocusing corresponds to projecting the entire epipolar image at a specific trajectory, as described in Chapter . This is how the images in Figure .a were produced. In
contrast, digitally extending the depth of field corresponds to projecting each column of the
epipolar image along an individual trajectory that best aligns with the local features of the
light field. Figure .b can be thought of as projecting the blue pixels in Figure . z along
a trajectory of negative slope and the brown pixels along a vertical trajectory.
The implementation of digitally extending the depth of field used in this dissertation
begins by refocusing at all depths in the scene to create a focal stack of images. For example,
Figure . shows five of the frames in such a focal stack. Computing an extended depth
of field image from a focal stack is a well-studied problem, as reviewed in Section ., and
I make use of the digital photomontage algorithm developed by Agarwala et al. [] to
compute the final extended depth of field image.
Digital photomontage accepts a set of images of a scene and an objective function defined
on its pixels. It computes an output image that combines segments of the input images to
optimize the objective function over the output image. It uses an iterative algorithm called
graph-cut optimization [Boykov et al. ] to compute where to cut the input images, and
gradient-domain diffusion [Pérez et al. ] to blend across the seams. For extending the
depth of field, the objective function maximizes local contrast and minimizes cuts across
image boundaries. By choosing other objective functions, some defined through interactive
painting, the algorithm can be used to produce a wide variety of montages.
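The graph-cut photomontage is beyond a short example, but the following sketch conveys the core idea with a greedy stand-in: for each output pixel, pick the focal-stack frame with the highest local contrast. The 5-pixel window and the variance-based contrast measure are arbitrary choices for illustration, and the sketch ignores the seam quality that graph-cut optimization and gradient-domain blending provide.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extend_dof_greedy(focal_stack, window=5):
    """focal_stack: (depths, h, w) array of refocused photographs.

    Returns an extended depth-of-field image by selecting, per pixel,
    the frame that is locally sharpest (highest local variance).
    """
    contrast = np.empty_like(focal_stack)
    for k, img in enumerate(focal_stack):
        mean = uniform_filter(img, size=window)
        contrast[k] = uniform_filter((img - mean) ** 2, size=window)
    best = np.argmax(contrast, axis=0)          # sharpest depth per pixel
    return np.take_along_axis(focal_stack, best[None], axis=0)[0]
```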
Theoretical Refocusing Performance
The method of estimating the refocus imaging equation by shifting and adding sub-aperture
images provides intuition about the theoretical performance of digital refocusing. The shift-and-add method suggests that we should be able to render a desired focal plane as sharp as it
appears in the sub-aperture images. Assuming that the images underneath each microlens
are N pixels across, the sub-aperture images are ideally N times sharper than the full aperture
images, since they correspond to imaging through apertures that are N times smaller.
Chapter  formalizes this intuition by applying Fourier analysis. The assumption underlying the analysis is that the recorded light fields are band-limited in the d space. Bandlimited assumptions are very common in signal processing analysis, and such an assumption
is a very reasonable model for the plenoptic camera. The mathematical details will be deferred until Chapter , but it is useful to state the theoretical performance in three different
but equivalent ways.
First, digital refocusing allows us to reduce the amount of blur anywhere in the output
photograph by a factor of up to N compared to the optical mis-focus blur. As an example
consider our prototype camera where N = . If the optical blur in a particular region were
less than  pixels, then we would be able to refocus the region “exactly” in the sense that
chapter . digital refocusing
we could drive the blur down to at most one output image pixel. If the blur were more than
 pixels, refocusing would make the region  times sharper, but still leave a residual blur.
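This characterization reduces to a one-line rule; the function below is just that rule written out, with the one-output-pixel floor made explicit. The value N = 12 in the comment is an arbitrary example, not the prototype's actual directional resolution.

```python
def residual_blur_px(optical_blur_px, N):
    """Blur left after digital refocusing, in output pixels: refocusing
    sharpens by up to a factor of N but cannot do better than one pixel."""
    return max(optical_blur_px / N, 1.0)

# e.g. with N = 12: a 10-pixel optical blur refocuses exactly (to 1 pixel),
# while a 48-pixel blur is only reduced to 4 pixels.
```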
A second way to state the performance is to quantify the range of desired focal depths in
the world for which we can compute an exact refocused photograph. Since we can refocus
exactly in regions where the optical mis-focus is less than N pixels, this range of depths is
equal to the depth of field of an aperture N times smaller than the physical aperture of the
lens. In terms of our prototype, where the main lens is f / and N = , we would ideally
be able to refocus exactly on any depth within the depth of field of an f / lens. Put in
these terms, photographers will recognize that digital refocusing presents a very significant
extension in the effective depth of field.
Figure . makes these concepts clearer visually. In Image b, the depth of field includes
the building in the background and the duck in the middle ground, which are crisp. In
contrast, the chain-link fence in the foreground is out of the depth of field: it appears slightly
blurry. Thus, we can refocus exactly onto the building and the duck, but we cannot refocus
onto the chain link fence perfectly sharply. In Image c, note that the refocused chain link
fence is slightly blurry. Of course this is not to say that refocusing has failed—a striking
amount of detail has been retrieved over the conventional photograph in Image a. The point
is simply that the chain link fence cannot be rendered as sharp as if we had optically focused
onto it. Interestingly, at just  cm from the camera, the fence was closer than the minimum
focusing distance of the lens, so it would not have been possible to optically focus on it.
Digital refocusing can improve the minimum focusing distance of lenses.
The third way of stating the performance is in terms of an effective depth of focus for the
light field camera. The traditional depth of focus is the very narrow range of sensor depths
inside the camera for which a desired plane will be rendered crisply. It is usually defined
as the range for which the optical mis-focus falls within a single pixel. It is the job of the
auto-focus mechanism in a conventional camera to ensure that the sensor is placed within
the depth of focus of the desired subject before releasing the shutter of the camera. In the
case of the light field camera, let us define the effective depth of focus as the range of sensor
depths for which we can compute a crisp photograph of the desired subject plane, using digital refocusing if necessary. For this comparison let us assume that the output photographs of
the light field camera have the same resolution as the conventional camera being compared
Figure .: Improvement in effective depth of focus in the light field camera compared to a
conventional camera. The range is  times wider in the light field camera, assuming that the
directional resolution of the light field is  × .
against (i.e. that the microlenses in the light field camera are the size of the conventional
camera sensor pixels, and the light field camera pixels are N times narrower). The first characterization of performance above states that we can refocus exactly if the blur radius is less
than N pixels. This is N times larger than the one-pixel tolerance in the definition of the
conventional depth of focus.
This observation directly implies that the effective depth of focus in the light field camera
is N times larger than the regular depth of focus in the conventional camera, as shown in
Figure .. This means that the auto-focus mechanism can be N times less precise in the
light field camera for equal output image quality, because digital refocusing can be used
after the fact. This statement of performance makes very intuitive sense: the post-exposure
capabilities of digital refocusing reduce the pre-exposure requirements on optical focusing.
Theoretical Noise Performance
A subtle but important implication of the theoretical refocusing performance is that it provides a superior way to produce images with high depth of field. For the following discussion,
let us again assume that we have a plenoptic camera and conventional camera that have the
same sensor size and that produce the same resolution output images (i.e. the plenoptic camera microlenses are the same size as the conventional camera pixels). However, assume that
chapter . digital refocusing
the light field camera has N × N pixels across each microlens' image (i.e. it pays the price of
collecting N² times more data than the conventional camera).
Let us compare the clarity of an image produced by a conventional camera with aperture stopped down from f /A to f /( A · N ), and an image produced by a light field camera
with the full f /A aperture. According to the theoretical refocusing performance, the light
field camera can match the sharpness of the conventional photograph on any plane by digitally refocusing onto that plane. Furthermore, by digitally extending the depth of field the
light field camera can match the entire depth of field of the conventional f /( A · N ) camera.
However, the light field camera would collect N² times more light per unit time because of its larger aperture. It would only require an exposure duration 1/N² times as long to collect as much light. Alternatively, for equal exposure duration, the light field camera would collect N² times higher signal.
A critical question is, how much higher will the snr be in the case of the same exposure duration? That is, how much improvement would the N² times higher signal provide
relative to the inherent noise? This is a complicated question in general, depending on the
characteristics of the sensor, but the worst-case performance is easy to understand, and that
is what we test in the experiments below. The most important concept is that the standard
deviation in the number of photons collected at a pixel follows Poisson statistics [Goodman
], such that the standard deviation is proportional to the square root of the mean light
level. For example, if an average of , photons would be collected, then the standard
deviation for the detected value would be  photons. Given this fact, the least improvement occurs when the light level is sufficiently high that the Poisson noise exceeds any fixed,
constant sources of noise in the sensor. Under this worst-case assumption, the snr of the
light field camera is N times higher, because its signal is N² times higher, but the Poisson
noise is only N times higher.
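This worst-case scaling is easy to check numerically. The sketch below simulates pure Poisson detection at two light levels that differ by N² (with N = 10 chosen arbitrarily) and confirms that the empirical snr grows by roughly a factor of N.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_snr(mean_photons, trials=200_000):
    """SNR of a pixel collecting `mean_photons` on average, with Poisson
    (shot) noise as the only noise source."""
    samples = rng.poisson(mean_photons, trials)
    return samples.mean() / samples.std()

N = 10
low = empirical_snr(100)            # ~ sqrt(100)  = 10
high = empirical_snr(100 * N**2)    # ~ sqrt(10^4) = 100
print(low, high, high / low)        # ratio is close to N
```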
That said, it is worth addressing a point in the original assumption that sometimes causes
confusion. Looking back at the original assumptions, one may wonder how the analysis
would change if we had assumed the conventional camera and the light field camera had
photosensors with the same resolution, rather than the light field camera having N × N
higher resolution. In other words, let us now consider the case where the conventional camera pixels are N times narrower. There is an incorrect tendency to suspect that the noise of
the conventional camera would somehow improve under this assumption. The reason that
it cannot improve is that the same total number of photons enter the camera and strike the
sensor. This number is fixed by the size of the lens aperture and the area of the sensor, not by
the resolution of the pixels. In fact, since there are N² more pixels on the sensor under the current assumptions, each pixel now has 1/N² the area and collects 1/N² as many photons
as before. The snr would be worse by an additional factor of N . Of course this factor would
be eliminated by down-sampling the higher-resolution image to the resolution of the light
field camera, converging on the analysis above.
Experimental Performance
The following subsections present experimental data from the prototype camera to test both
the theoretical refocusing and noise reduction predictions. In summary, the results show
that the prototype can refocus to within a factor of  of the theoretically ideal performance,
and that the measured noise closely corroborates the theoretical discussion above.
Experimental Method
The basic experimental approach is to choose a fixed main lens focus, and compare the
prototype camera with digital refocusing against a conventional camera with various main
lens aperture sizes. To make this comparison, I designed a scene with a single plane target at
a small distance away from the optical focus of the main lens. In this regime, the sharpness
of conventional photographs increases as the aperture size decreases.
In these experiments I chose to explicitly compare a hypothetical conventional camera
with  ×  -micron pixels against our prototype with microlenses of the same size
and  ×  pixels across each microlens image. Thus, the question that is addressed is, how
much improvement is obtained by capturing × directional samples at each output pixel
instead of a single light value?
A difficulty in implementing this comparison is how to obtain the images for the conventional camera, since no real device exists with -micron wide pixels. I approximated
the output of such a hypothetical device by summing all the pixels in each microlens image.
Since all the light that strikes a -micron microlens is deposited in some pixel in the image
chapter . digital refocusing
underneath it, summing this light counts all the photons that would have struck the desired
-micron conventional pixel.
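In code, this approximation is a block sum over the raw sensor image. The sketch below assumes the microlens images are axis-aligned N × N pixel tiles, which glosses over the alignment and resampling a real prototype requires.

```python
import numpy as np

def simulate_conventional_pixels(raw, N):
    """Sum each N x N pixel patch under a microlens into one output pixel,
    approximating a conventional camera with microlens-sized pixels."""
    h, w = raw.shape
    assert h % N == 0 and w % N == 0
    return raw.reshape(h // N, N, w // N, N).sum(axis=(1, 3))
```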
An important advantage of this approach to estimating the conventional photographs
is that it allows use of exactly the same set-up for acquiring the light fields for digital refocusing and the conventional photographs for comparison. The only change that occurs is
in the aperture size of the camera, which is easily adjusted without disturbing the position
or focus of the camera. If I had used different cameras for each, meaningful comparison
would require factoring out differences in position and focus of the two cameras, as well as
differences in the noise characteristics of two separate sensors.
A potential disadvantage of the chosen approach is that it could result in artificially
higher sources of constant sensor noise due to the summation of read noise from multiple
pixels. Nevertheless, as already discussed, this increase is insignificant if the Poisson noise
dominates the constant sources of noise. The experiments below test the change in snr with
light level to ensure that this is the case.
Refocusing Results
I recorded an f / light field photograph, and a series of conventional photographs in half-stop increments from f / to f /. These images were shot without changing the focus setting on the lens. The higher f-number photographs were sharper because they have smaller
aperture sizes and the resulting circle of confusion is smaller. The light field photograph was
sharpened by digitally refocusing onto the resolution chart as a post-process.
Figure . presents a visual comparison of the light field camera performance compared
to select conventional images. Note how much more blurry the f / conventional photograph is compared to the f / light field camera image. The light field image seems sharper
than the conventional f / photograph, but not quite as sharp as the f /. It appears to
most closely match the sharpness of the f / image.
Figure . is a numerical version of the same experiment, comparing experimentally
measured modulation transfer function (mtf) curves. I computed the mtfs by shooting
photographs of a point light source and computing their Fourier transforms [Williams ].
For the light field camera, I shot the light field, computed the sharpest possible refocused
photograph of the point light, and computed the Fourier transform of the refocused image.
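A minimal version of that measurement, assuming a dark-background image of the point source has already been cropped out: normalize the point-spread image and take the magnitude of its Fourier transform.

```python
import numpy as np

def mtf_from_point_image(psf):
    """Estimate the MTF as the magnitude of the Fourier transform of a
    normalized point-source image (background subtraction not shown)."""
    psf = psf / psf.sum()
    return np.abs(np.fft.fftshift(np.fft.fft2(psf)))
```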
f / Refocused Light Field
f / Conventional
f / Conventional
f / Conventional
f / Conventional
Figure .: Experimental test of refocusing performance: visual comparison.
chapter . digital refocusing
Figure .: Experimental test of refocusing performance: mtf analysis. Horizontal axis
plots spatial frequency from 0 up to the Nyquist rate of the sensor.
Figure . illustrates the classical increase in mtf curve with aperture number. The graph
contains a series of mtf plots for a conventional camera with different aperture sizes. The
unlabeled gray plots from left to right are for f-numbers , ., , , ,  and . As
the f -number increases, the aperture size decreases, leading to sharper images and a higher
mtf. The two plots in black are of primary interest, being the plots for the f / conventional
image and the f / refocused light field image. The mtf of the light field camera most closely
matches that of the f / conventional camera, providing numerical corroboration for the
visual comparison in Figure ..
According to theory, the light field camera, with  pixels across each microlens image, would have matched the f / conventional camera in ideal conditions. In reality, the experiment shows the ability to refocus within the depth of field of an f / aperture, a loss of approximately a factor of .
The main source of loss is the resampling that takes place to estimate the slanted strip
of rays corresponding to each pixel value. As shown in Figure ., estimating the slanted
strip with the sample boxes of the light field introduces error, because the boxes exceed the
boundaries of the strip by some amount. Another, relatively minor source of loss is diffraction, as discussed in Section .. While diffraction is not the limiting factor on resolution
in this camera given the f / aperture of our microlenses, the diffraction blur has a width of
approximately / of a sensor pixel, which contributes slightly to the loss in effective directional resolution.
Noise Results
I measured the improvement in noise when the exposure duration is kept constant. As predicted by theory, I found that the snr of the f / light field refocused photograph was higher than that of the f / conventional photograph with equivalent depth of field. The computed increase in snr was ., which is close to the square root of the increase in light level (the lens
aperture is approximately . times wider in the f / light field camera). This square root
scaling of snr indicates that Poisson sources of noise are dominant, as discussed in Section .. I calculated the snr of each image by comparing the deviations of the photograph against a noise-free standard, which I obtained by averaging multiple photographs shot under identical conditions. The noise is measured as the root-mean-square of the difference between a given image and the noise-free standard.
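The measurement itself is a few lines, sketched below; `reference` stands for the noise-free standard obtained by averaging many identically exposed shots.

```python
import numpy as np

def measure_snr(image, reference):
    """SNR against a noise-free standard: the mean signal level divided by
    the rms deviation of this image from the standard."""
    noise_rms = np.sqrt(np.mean((image - reference) ** 2))
    return reference.mean() / noise_rms
```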
f / Conventional
f / Conventional
f / Refocused Light Field
Figure .: Experimental test of noise reduction using digital refocusing.
chapter . digital refocusing
Figure .: Refocusing and extending the depth of field. (a): Closeup of raw light field photograph; (b1): Unrefocused; (b2): Refocused on face; (b3): Extended dof.
Figure . presents some of the images used in this numerical analysis. The three photographs were produced under the same conditions as in Figure ., except that the light
level was deliberately kept low enough to make noise visible, and the shutter duration was
held constant regardless of aperture size. In Figure ., the conventional f / photo is completely blurred out due to optical mis-focus of the main lens. The f / image is relatively
sharp, but much noisier. Note that the f / image is scaled by a factor of approximately  to match the brightness of the f / image. The third image is an f / light field photograph, with digital refocusing onto the resolution chart. Note that this image did not have to be scaled, because every pixel integrated all the light coming through the f / aperture. The snr increase of . times is between the f / conventional image and the f / light field photograph.

Figure .: Comparison between a light field photograph with digitally extended depth of field and conventional photographs with optically extended depth of field. (a): f /, / sec, light field, extended dof; (b): f /, / sec, conventional photograph; (c): f /, / sec, conventional photograph.
Technical Summary
This section presents an experiment that visually summarizes the main improvements in
flexibility and light-gathering performance provided by digital refocusing as compared to
conventional imaging.
Figure . illustrates a light field recorded with the prototype camera, and a set of flexible
photographic edits applied to this single exposure in computing final photographs. Note that
the unrefocused photograph computed in Image b1 is optically focused on the woman's hair,
and the motion of her hair has been frozen by the short / second exposure enabled by
chapter . digital refocusing
the relatively fast light-gathering power of the f / lens. However, her facial features are
blurred because of the shallow f / depth of field. In Image b2, the photograph has been
digitally refocused onto the woman’s face to bring her eyes into focus. Note, however, that
her hair is now out of focus. Whether this is desirable depends on the artist’s intentions.
Image b illustrates the digitally extended depth of field image in which both her hair and
features are sharp.
Figure . compares the digitally extended depth of field image against comparable
photographs taken with a conventional camera. The digitally extended depth of field image
of Figure . is replicated in Figure .a. In Image b, the exposure duration was kept at
/ second, and I used the classical method of reducing the aperture size (down to f /) to
extend the depth of field. Although the woman’s hair and face are both in focus and sharp, the
resulting image was over  times darker because of the  times smaller aperture. I scaled
the image to match the brightness of Image a, revealing much higher graininess due to a
relative increase in Poisson photon noise. In contrast, in Image c, I kept the f / aperture,
but used a -times longer exposure of / second to capture the same amount of light as
in Image a. This is the exposure setting that would be chosen by an auto-exposure program
on most conventional cameras. The resulting image contains as many photons as the light
field photo and matches its low noise grain. However, in Image c the woman’s hair is blurred
out due to motion over the relatively long exposure time. Figure . makes it visually clear why light field photography may have significant implications for reducing noise in low-light photography.
In summary, this experiment highlights two improvements of the light field paradigm
over conventional imaging. The first is the unprecedented flexibility of choosing what is in
focus in final photographs after a single photographic exposure. The second is the capability
of shooting with significantly shorter exposure durations or lower image noise.
Photographic Applications
To conclude this chapter, this section presents examples of applying digital refocusing in
common photographic scenarios. The focus is on improvements in flexibility and performance enabled by digital refocusing and extending the depth of field.
Portraiture
Good photographs of people commonly have blurry backgrounds in order to concentrate
attention on the human subject of interest. In conventional photography, a blurry background is achieved by selecting a large lens aperture to obtain a shallow depth of field. For
example, in some point-and-shoot cameras, the camera may be able to detect that the scene
is a picture of a person, in which case it automatically selects the largest available aperture
size. In other cameras, the photographer may have to choose the correct aperture manually.
In any case, use of a large aperture presents certain practical difficulties. One example
is that because the depth of field is shallow, it becomes essential to focus very accurately.
Usually the correct focal point is on the eyes of the subject. However, with a large aperture,
the depth of field may cover only a few centimeters. When photographing a moving subject
or taking a candid picture, it is quite easy to accidentally focus on the ear or the hair of the
subject, leaving the eyes slightly out of focus. Figure .a illustrates this kind of accident,
where the camera is mis-focused by just  cm on the girl’s hair, rendering her features
blurry. Since the photograph was captured with the light field camera, digital refocusing
may be applied, bringing the focal plane onto the girl's eyes in Figure .c.
Figure .: Fixing a mis-focused portrait. (a): Mis-focused original; (b): Raw light field of eye; (c): Refocusing touch-up.
A different challenge is having more than one subject, such as a portrait of two people
offset in depth. In this case there is no single focal plane of interest, so using a large aperture
chapter . digital refocusing
means that only one face will be in focus (see, for example, Figure .a). With a conventional camera, it would be impossible to achieve a nicely blurred background and yet capture
both faces in sharp focus. Of course one could stop the aperture of the main lens down to
extend the depth of field, but this would make the background sharp and distracting (see
Figure .b.
Figure .: Maintaining a blurred background in a portrait of two people. (a): Unrefocused; (b): Extended dof; (c): Partially extended dof.
With the light field camera, a superior solution is to only partially extend the depth of
field, to selectively include only the human subjects while leaving the background blurry.
This can be achieved by digitally refocusing onto each person in turn, and combining the
sharpest portions of these images. Figure .c illustrates this approach. Notice that both
faces are sharp, but the blur in the background is maintained.
Figure . illustrates a novel application of digital refocusing in portraiture. This figure
contains three frames of a refocus movie that sweeps the focal plane from front to back in the
scene. The entire movie was computed from one light field, and is really just another light
field output format. The result is a digital snapshot that presents a new kind of interactivity.
A common experience in viewing such movies is a sensation of discovery as a subject of
interest “pops” into focus and can finally be recognized. Many of the people who have seen
these movies find this effect quite absorbing, and replay the movies many times.
Figure .: The sensation of discovery in refocus movies.
Action Photography
Action photography is the most demanding in terms of light capture efficiency and focus
accuracy. Large apertures are usually needed in all but the brightest conditions, in order
to enable short shutter durations to freeze the motion in the acquired photograph. The resulting shallow depth of field, coupled with moving targets, makes it challenging to focus accurately.

Figure . illustrates that our camera can operate with very short exposures. The light
field photographs in a-a and b-b illustrate refocusing photographs of swimmers diving
into a pool at the start of a race. The exposure duration in these images is /th of a second.
Images c-c show refocusing through water that is splashing out of a broken wine glass.
This photograph was shot in a dark room with a flash, for an effective exposure duration of
/th of a second.
Landscape Photography
While sports photography is a natural fit for the capabilities provided by digital refocusing,
landscape photography may at first seem like it presents relatively few opportunities. The
reason for this is that in landscape photography it is typically desirable for everything in the
scene to be in focus at the same time, from the flowers in the foreground to the mountains in
the background. It is very common to shoot landscapes using the smallest aperture available
on a lens to obtain this large depth of field.
chapter . digital refocusing
Figure .: Digital refocusing of high-speed light field photographs.
At the same time, the best landscapes are often taken in the hours closest to sunrise and
sunset, when the light is most dramatic and colorful. Shooting at these times means that the
light level is relatively low. Coupled with the use of a small aperture for large depth of field,
exposure durations are often quite long. For this reason, use of a tripod is usually considered mandatory to hold the camera still, which is unfortunate because it limits freedom of composition.

The light field approach to photography is useful when taking landscapes, because it allows for shorter exposure durations using digitally extended depth of field, enabling a much
greater range of conditions over which the camera may be hand-held and better compositions achieved.
(a): Refocus far
(a): Refocus close
(a): Extended dof
(b): Refocus far
(b): Refocus close
(b): Extended dof
Figure .: Extending the depth of field in landscape photography.
Figure . illustrates the benefits with respect to two light field landscapes. In each case,
chapter . digital refocusing
Figure .: Digital refocusing of light field macro photographs of proteas flowers.
using a conventional camera with a large aperture would result in either the foreground or background being blurry, as shown by the first two images in each sequence. The right-most images are the result of digitally extending the depth of field, providing large depth of field with a  times shorter exposure than would be possible with a conventional camera.
Macro Photography
The depth of field decreases with the proximity of the focal plane to the camera, so macro
photography, where the subject is extremely close to the camera lens, places heavy demands
on focus accuracy. To give a sense of scale, to achieve 1:1 magnification, where one millimeter on the real subject is imaged to one millimeter on the sensor inside the camera, the
subject needs to be positioned at a distance of just two focal lengths away from the camera.
For example, some of the macro photographs shown here were shot with an  mm lens,
where the subject was placed just  mm away from the camera.
Because of the proximity of the subject, macro photography is very sensitive to the exact
placement of the camera. Movement of a few centimeters can drastically change the composition of the scene. In addition, unlike most other kinds of photography, focusing in macro
photography is often achieved by moving the entire camera rather than turning the focus
ring. The photographer selects the desired magnification, and moves the camera back and
forth until the desired subject intersects the optical focal plane. As a result, getting the correct composition by moving parallel to the subject plane, and the correct focus, by moving
chapter . digital refocusing
perpendicularly, can be a challenging three-dimensional motion on the part of the photographer. The ability to digitally refocus reduces the requirements in at least the perpendicular axis of motion, making it easier to compose the photograph as desired. Figure . illustrates digital refocusing of two close-up shots of protea flowers.

Figure .: Digital refocusing of light field macro photographs of protea flowers.
A different advantage afforded by light field macro photography is being able to virtually
move the camera position by synthesizing pin-hole images rather than full-aperture images.
Since the lens is large compared to the subject in macro photography, the cone of rays that
enters the camera from each point subtends a relatively wide angle. By moving the virtual
pin-hole camera within this cone of rays, one can obtain changes in parallax or perspective,
as illustrated in Figure ..
The top row of Figure . illustrates movement of the viewpoint laterally within the
plane of the lens aperture, to produce changes in parallax. The bottom row illustrates changes
in perspective by moving along the optical axis, away from the scene to produce a near-orthographic rendering (Image c) and towards the scene to produce a medium wide angle
(Image d).
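For a light field stored as a 4D array L[x, y, u, v], a lateral viewpoint change within the aperture is just a directional slice; the sketch below assumes that storage layout.

```python
def virtual_pinhole_view(L, iu, iv):
    """Extract the pin-hole image seen from aperture position (iu, iv):
    one directional sample per microlens."""
    return L[:, :, iu, iv]

# Sweeping iu (or iv) across the aperture produces the parallax changes in
# the top row of the figure; the perspective changes in the bottom row
# additionally require recombining samples across microlenses.
```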
Signal Processing Framework
As it has with so many scientific phenomena and engineering systems, Fourier analysis provides an entirely different perspective on photograph formation and digital light field photography. This chapter applies signal processing methods based on Fourier theory to provide
fresh insight into these matters. For example, this style of thinking allows more rigorous
analysis of the performance of digital refocusing. It also leads directly to the design of a very
different kind of algorithm for computing refocused photographs, which is asymptotically
faster than the integral projection methods described in Chapter .
Section . is an overview of the chapter, keeping the Fourier interpretation at a high
level. The focus is on intuition and geometry rather than equations, but basic exposure to
Fourier transforms is assumed. Later sections delve into the formal mathematical details,
and assume familiarity with Fourier transforms and linear systems theory at the level of a
first year graduate level course, such as a course based on Bracewell’s book [].
Previous Work
The closest related Fourier analysis is the plenoptic sampling work of Chai et al. [].
They show that, under certain assumptions, the angular band-limit of the light field is determined by the closest and furthest objects in the scene. They focus on the classical problem
of rendering pin-hole images from light fields, whereas this thesis analyzes the formation of
photographs through lenses.
chapter . signal processing framework
Imaging through lens apertures was first demonstrated by Isaksen et al. []. They
qualitatively analyze the reconstruction kernels in Fourier space, showing that the kernel
width decreases as the aperture size increases. This chapter continues this line of investigation, explicitly deriving the equations for full-aperture imaging from the radiometry of
photograph formation.
More recently, Stewart et al. [] have developed a hybrid reconstruction kernel that
combines full-aperture imaging with band-limited reconstruction. This allows them to optimize for maximum depth-of-field without distortion. In contrast, this chapter focuses on
fidelity with full-aperture photographs that have finite depth of field. As we have seen in the
previous chapter, narrow depth of field is often purposely used in photography to visually
isolate a subject and direct the viewer's gaze.
Finally, Durand et al. [] have constructed a framework for analyzing the Fourier
content of the light field as it propagates through a scene and is transformed by reflections,
propagation, and other phenomena. Where their analysis focuses on the transformations
in the light field’s frequency spectrum produced by the world, this chapter focuses on the
transformations imposed by the detector and the ramifications on final image quality.
Overview

The advantage of moving into the Fourier domain is that it presents a simpler view of the process of photographic imaging. While the spatial relationship between photographs and light fields (Chapter ) can be understood fairly intuitively, the relationship itself, integral projection, is a relatively heavyweight mathematical operation. In contrast, in the Fourier domain the relationship is simpler: a photograph is simply a 2D slice of the 4D light field. That is, the values of the Fourier transform of a photograph simply lie along a 2D plane in the 4D Fourier transform of the light field. This "Fourier Slice Photograph Theorem," which is derived in Section ., is one of the main theoretical contributions of this thesis.
Figure . is a graphical illustration of the relationship between light fields and photographs, in both the spatial and Fourier domains. The graphs in the middle row of the
figure review the fact that, in the spatial domain, photographs focused at different depths
correspond to slices at different trajectories through the ray-space. The graphs in the bottom row of Figure . visualize the situation in the Fourier domain. These graphs illustrate the use of "k-space" notation for the Fourier axes, a practice borrowed from the magnetic resonance (mr) literature and used throughout this chapter. In this notation, k_x and k_u are the Fourier-domain variables that correspond to the x and u spatial variables (similarly for k_y and k_v). The graphs in the bottom row illustrate the Fourier-domain slices that provide the values of the photograph's Fourier transform. The slices pass through the origin. The slice trajectory is horizontal for focusing on the optical focal plane, and as the chosen virtual film plane deviates from this focus, the slicing trajectory tilts away from horizontal.

Figure .: The relationship between photographs and light fields is integral projection in the spatial domain (middle row) and slicing in the Fourier domain (bottom row). Columns: not refocused; refocused closer; refocused further.
The underlying reason for the simplification when transforming into the Fourier domain is a consequence of a well-known result due to Ron Bracewell [] called the Fourier
Slice Theorem (also known as the Central Section Theorem and the Fourier Projection-Slice
Theorem). Bracewell originally discovered the theorem while studying reconstruction problems in synthetic aperture radar. However, it has grown to make its greatest contributions
in medical imaging, where it is of fundamental theoretical significance for reconstruction of
volumes in computed tomography (ct) and positron emission tomography (pet) [Macovski
], and the design of excitation pulse sequences in mr imaging [Nishimura ].
The classical theorem applies to 1D projections of 2D functions, so applying it to photographs and light fields requires generalization to the relatively unusual case of 2D projections from 4D functions. Nevertheless, the connection with the classical theorem remains strong, and provides an important channel for cross-fertilization between light field photography and more mature imaging communities. For example, mining the mr literature produced some of the algorithmic improvements described in Section . for computing refocused photographs in the Fourier domain.
Perhaps the most important theoretical application of the Fourier Slice Photograph Theorem is a rigorous derivation of the performance of digital refocusing from light fields sampled with a plenoptic camera. Section . presented geometric intuition in terms of sub-aperture images, but it is difficult to make those arguments formal and exact. In this chapter,
Section . presents a formal derivation under the assumption that the light fields recorded
by the plenoptic camera are band-limited.
Figure .c illustrates what it means for the recorded light field to be band-limited: it
provides perfect information about the continuous light field within a box about the origin, and is zero outside this box. The texture shown in the background of the graphs of Figure . is for illustrative purposes only. In the spatial domain, which is not shown, the band-limited assumption means that the signal is sufficiently blurred that the finest details match the sampling rate of the microlenses and pixels in the camera. While perfect band-limiting is physically impossible, it is a plausible approximation in this case because the camera system blurs the incoming signal through imperfections in its optical elements, through area integration over the physical extent of microlenses and photosensor pixels, and ultimately through diffraction.

Figure .: Fourier-domain intuition for theoretical refocusing performance.
With the band-limited assumption, analysis of the performance of digital refocusing becomes exact. Figure . illustrates the basic concepts, for the case of simplified 2D light
chapter . signal processing framework
fields. Since photographs are simply slices at different trajectories through the origin of the
ray-space, a computed photograph is the line segment obtained when we intersect the slice
with the band-limited box of the recorded light field (Figures .d–f).
The complete slice, which extends to arbitrarily high frequencies away from the origin,
corresponds to a perfect photograph (Figure .b). The crucial observation is that the computed line segment is therefore a band-limited version of the perfect photograph. By calculating the band-limit of this segment, we can make precise statements about the effective
resolution of the refocused photograph.
The mathematical details of this analysis are left to Section ., but Figures .e and f
illustrate the critical turning point. They show that, when focusing at large distances from
the optical plane, the slice trajectory tilts sufficiently far from the horizontal that it crosses the corner of the band-limited box. Until this point, the slice intersects the vertical border of the band-limit, and the resulting photograph has full resolution (limited by the spatial resolution of the recorded light field). However, for extreme tilts the slice intersects the horizontal border, and the resulting photograph has reduced resolution (partially limited by directional resolution). The range of slice trajectories that provide full resolution is shown in blue on Figure .. Analyzing the angular bounds of this region provides an expression for the range of virtual film depths that permit exact refocusing. Section . performs this analysis, corroborating the intuition developed in Chapter  that the range increases linearly with directional resolution.

Figure .: The range of slice trajectories in the Fourier space for exact refocusing.
Section . applies the Fourier Slice Photograph Theorem in a very different manner
to derive a fast Fourier Slice Digital Refocusing algorithm. This algorithm computes photographs by extracting the appropriate d slice of the light field’s Fourier transform and performing an inverse d Fourier transform. The asymptotic complexity of this algorithm is
O(n2 log n), where n is the resolution of the light field in each of its four dimensions. This
.. photographic imaging in the fourier domain
complexity compares favorably to the O(n4 ) approach of existing algorithms, which are essentially different approximations of numerical integration in the d spatial domain.
Photographic Imaging in the Fourier Domain
Chapter  introduced the imaging integral in Equation ., which relates light fields and photographs
focused at different depths. Our first step here is to
codify Equation . in the operator notation that will
be used throughout this chapter.
Operator notation provides a higher level of mathematical abstraction, allowing the theorems derived below to express the relationship between transformations that we are interested in (e.g. image formation) rather than being tied up in the underlying functions being acted upon (e.g. light fields and photographs). Throughout this chapter, calligraphic letters, such as A, are reserved for operators. If f is a function in the domain of A, then A[f] denotes the application of A to f.

Figure .: Photographic Imaging Operator, illustrated for α > 1 and α < 1.
Photographic Imaging Operator  Let P_α be the operator that transforms an in-camera light field parameterized by a separation of depth F into the photograph formed on film at depth (α · F):

\[
\mathcal{P}_\alpha[L_F](x, y) \;=\; \frac{1}{\alpha^2 F^2} \iint L_F\!\left( u\Big(1 - \frac{1}{\alpha}\Big) + \frac{x}{\alpha},\; v\Big(1 - \frac{1}{\alpha}\Big) + \frac{y}{\alpha},\; u,\; v \right) du\, dv.
\]
This operator is what is implemented by a digital refocusing code that accepts light fields
and computes refocused photographs (Figure .). As noted in Chapter , the operator can
be thought of as shearing the 4D space, and then projecting down to 2D.
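A direct numerical sketch of the operator, for a light field sampled as a 4D array with aperture indices centered on zero, unit sample spacing, and bilinear resampling: it is a literal discretization of the integral above, not an optimized refocusing code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def imaging_operator(L, alpha, F=1.0):
    """Numerically evaluate P_alpha[L_F](x, y) for a light field L[x, y, u, v]."""
    nx, ny, nu, nv = L.shape
    xs, ys = np.meshgrid(np.arange(nx, dtype=float),
                         np.arange(ny, dtype=float), indexing="ij")
    out = np.zeros((nx, ny))
    for iu in range(nu):
        for iv in range(nv):
            u = iu - (nu - 1) / 2.0
            v = iv - (nv - 1) / 2.0
            # Sample L_F(u(1-1/alpha) + x/alpha, v(1-1/alpha) + y/alpha, u, v).
            cx = u * (1 - 1 / alpha) + (xs - nx / 2) / alpha + nx / 2
            cy = v * (1 - 1 / alpha) + (ys - ny / 2) / alpha + ny / 2
            out += map_coordinates(L[:, :, iu, iv], [cx, cy], order=1)
    return out / (alpha ** 2 * F ** 2)
```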
chapter . signal processing framework
As described in the chapter overview, the key to analyzing the imaging operator is the
Fourier Slice Theorem. The classical version of the Fourier Slice Theorem [Deans ]
states that a d slice of a d function’s Fourier spectrum is the Fourier transform of an orthographic integral projection of the d function. The slicing line is perpendicular to the
projection lines, as illustrated in Figure .. Conceptually, the theorem works because the
value at the origin of frequency space gives the dc value (integrated value) of the signal, and
rotations do not fundamentally change this fact. From this perspective, it makes sense that
the theorem generalizes to higher dimensions. It also makes sense that the theorem works
for shearing operations as well as rotations, because shearing a space is equivalent to rotating
and dilating the space.
These observations mean that we can expect that photographic imaging, which we have
observed is a shear followed by projection, should be proportional to a dilated 2D slice of the light field's 4D Fourier transform. With this intuition in mind, Sections .. and .. are simply the mathematical derivations specifying this slice precisely, culminating in
Equations . and ..
Generalization of the Fourier Slice Theorem
Let us first digress to study a generalization of the theorem to higher dimensions and projections, so that we can apply it in our 4D space. A closely related generalization is given by
the partial Radon transform [Liang and Munson ], which handles orthographic projections from N dimensions down to M dimensions. The generalization here formulates a
broader class of projections and slices of a function as canonical projection or slicing following an appropriate change of basis (e.g. an N -dimensional rotation or shear). This approach
is embodied in the following operator definitions.
Integral Projection  Let I^N_M be the canonical projection operator that reduces an N-dimensional function down to M dimensions by integrating out the last N − M dimensions:

\[
\mathcal{I}^N_M[f](x_1, \ldots, x_M) \;=\; \int f(x_1, \ldots, x_N)\; dx_{M+1} \cdots dx_N.
\]

Slicing  Let S^N_M be the canonical slicing operator that reduces an N-dimensional function down to an M-dimensional one by zeroing out the last N − M dimensions:

\[
\mathcal{S}^N_M[f](x_1, \ldots, x_M) \;=\; f(x_1, \ldots, x_M, 0, \ldots, 0).
\]
Change of Basis  Let B denote an operator for an arbitrary change of basis of an N-dimensional function. It is convenient to also allow B to act on N-dimensional column vectors as an N × N matrix, so that B[f](x) = f(B^{-1} x), where x is an N-dimensional column vector, and B^{-1} is the inverse of B.

Scaling  Let any scalar a be used to denote the operator that scales a function by that constant, so that a[f](x) = a · f(x).

Fourier Transform  Let F^N denote the N-dimensional Fourier transform operator, and let F^{-N} be its inverse:

\[
\mathcal{F}^N[f](\mathbf{u}) \;=\; \int f(\mathbf{x}) \exp\left(-2\pi i\, (\mathbf{x} \cdot \mathbf{u})\right) d\mathbf{x},
\]

where x and u are N-dimensional vectors, and the integral is taken over all of N-dimensional space.
With these definitions, we can state a generalization of the classical theorem as follows:

generalized fourier slice theorem  Let f be an N-dimensional function. If we change the basis of f, integral-project it down to M of its dimensions, and Fourier transform the resulting function, the result is equivalent to Fourier transforming f, changing the basis with the normalized inverse transpose of the original basis, and slicing it down to M dimensions. Compactly in terms of operators, the theorem says:

\[
\mathcal{F}^M \circ \mathcal{I}^N_M \circ \mathcal{B} \;=\; \mathcal{S}^N_M \circ \frac{\mathcal{B}^{-T}}{\left|\mathcal{B}^{-T}\right|} \circ \mathcal{F}^N,
\]

where the transpose of the inverse of B is denoted by B^{-T}, and |B^{-T}| is its scalar determinant. A proof of the theorem is presented in Appendix a..
Figure . summarizes the relationships that are implied by the theorem between the N dimensional signal, M-dimensional projected signal, and their Fourier spectra. One point
to note about the theorem is that it reduces to the classical version (compare Figures .
and .) for N = , M =  and the change of basis being a d rotation matrix (B = Rθ ). In
this case, the rotation matrix is its own inverse transpose (Rθ = Rθ −T ), and the determinant
−T Rθ equals 1. As a result, the basis change in the Fourier domain is the same as in the
spatial domain.
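The classical case is easy to verify numerically: the 1D Fourier transform of an integral projection of a 2D function equals the corresponding slice of its 2D Fourier transform. The snippet below checks exactly this (no change of basis, so B is the identity).

```python
import numpy as np

f = np.random.default_rng(1).standard_normal((64, 64))
projection = f.sum(axis=1)            # integrate out the second dimension
lhs = np.fft.fft(projection)          # Fourier transform of the projection
rhs = np.fft.fft2(f)[:, 0]            # slice of the 2D spectrum at k2 = 0
assert np.allclose(lhs, rhs)
```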
chapter . signal processing framework
Figure .: Classical Fourier Slice Theorem, in terms of the operator notation used in this chapter. Computational complexities for each transform are given in square brackets, assuming n samples in each dimension.
Figure .: Generalized Fourier Slice Theorem (Equation .). Transform relationships between an N-dimensional function G_N, an M-dimensional integral projection of it, G_M, and their respective Fourier spectra, Ĝ_N and Ĝ_M.
This is a special case, however, and in general the Fourier slice is taken with the normalized transpose of the inverse basis, B^{-T}/|B^{-T}|. In 2D, this fact is a special case of the so-called Affine Theorem for Fourier transforms [Bracewell et al. ]. It is also related to
so-called Affine Theorem for Fourier transforms [Bracewell et al. ]. It is also related to
the well-known fact in geometry that transforming a surface by a particular matrix means
transforming the surface normals by the transpose of the inverse matrix. This is an important factor that is taken into account in computing the shaded color of surfaces in computer
.. photographic imaging in the fourier domain
graphics, for example. Readers interested in understanding why this occurs in the theorem
may consult the proof in Appendix a.. As a final point, dividing out by the (scalar) determinant of the matrix normalizes the equations for changes in the integrals due to dilations
of the space caused by the basis change.
The theorem provides both computational and theoretical advantages. From the computational perspective, the theorem leads directly to a fast method for calculating orthographic projections of an N-dimensional function, G_N, if we have its Fourier spectrum, Ĝ_N. For example, assume we are trying to compute a projection down to M dimensions, to obtain function G_M. As shown in Figure ., naïve projection via numerical integration (left downward arrow) takes O(n^N) time, where we assume that there are n samples in each of the N dimensions. A faster approach is extracting a slice of Ĝ_N (right downward arrow) and applying an inverse Fourier transform (bottom leftward arrow), which takes only O(n^M log n) via the Fast Fourier Transform algorithm.
From the theoretical perspective, the theorem is an analytic tool for characterizing the
spectral information content of a particular projection of a function. For example, in the
case of ct scanning, it allows us to analyze whether a family of x-ray projections fully characterizes the densities of the tissue volume that is being scanned. In general, the theorem
implies that a family of projections fully defines a function if the family of corresponding
Fourier slices spans the Fourier transform of the function. We will return to this issue later
in considering whether the set of all photographs focused at all depths captures the same
information as a light field.
Fourier Slice Photograph Theorem
This section derives the Fourier Slice Photograph Theorem, which lies at the heart of this
signal processing framework for light field photography. This theorem factors the Imaging
Operator using the Generalized Fourier Slice Theorem.
The first step is to recognize that the Imaging Operator (Equation .) indeed corresponds to integral projection of the light field following a change of basis (shear):
\[
\mathcal{P}_\alpha[L_F] \;=\; \frac{1}{\alpha^2 F^2}\; \mathcal{I}^4_2 \circ \mathcal{B}_\alpha[L_F],
\]
chapter . signal processing framework
which relies on the following specific change of basis:
Imaging Change of Basis  B_α is a 4D change of basis defined by the following matrices:

\[
\mathcal{B}_\alpha = \begin{bmatrix} \alpha & 0 & 1-\alpha & 0 \\ 0 & \alpha & 0 & 1-\alpha \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
\qquad
\mathcal{B}_\alpha^{-1} = \begin{bmatrix} 1/\alpha & 0 & 1 - 1/\alpha & 0 \\ 0 & 1/\alpha & 0 & 1 - 1/\alpha \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
\]

Note that B_α and B_α^{-1} should not be confused with the unscripted symbol, B, used for a generic change of basis in the statement of the Generalized Fourier Slice Theorem. Directly applying these definitions and the definition for I⁴₂ verifies that Equation . is consistent with Equation ..
We can now apply the Generalized Fourier Slice Theorem (Equation .) to turn the integral projection in Equation . into a Fourier-domain slice. First, let us re-write Equation . as

\[
\mathcal{P}_\alpha \;=\; \frac{1}{\alpha^2 F^2}\, \mathcal{F}^{-2} \circ \left( \mathcal{F}^2 \circ \mathcal{I}^4_2 \circ \mathcal{B}_\alpha \right).
\]

In this form, the Generalized Fourier Slice Theorem (Equation .) applies directly to the terms in brackets, allowing us to write

\[
\mathcal{P}_\alpha \;=\; \frac{1}{\alpha^2 F^2}\, \mathcal{F}^{-2} \circ \left( \mathcal{S}^4_2 \circ \frac{\mathcal{B}_\alpha^{-T}}{\left|\mathcal{B}_\alpha^{-T}\right|} \right) \circ \mathcal{F}^4.
\]

Finally, noting that |B_α^{-T}| = |B_α^{-1}| = 1/α², we arrive at the following result:

\[
\mathcal{P}_\alpha \;=\; \frac{1}{F^2}\, \mathcal{F}^{-2} \circ \mathcal{S}^4_2 \circ \mathcal{B}_\alpha^{-T} \circ \mathcal{F}^4,
\]

namely that a photograph (P_α) is obtained from the 4D Fourier spectrum of the light field by: extracting an appropriate 2D slice (S⁴₂ ∘ B_α^{-T}), applying an inverse 2D transform (F^{-2}), and scaling the resulting image (1/F²).
Before stating the final theorem, let us define one last operator that combines all the
action of photographic imaging in the Fourier domain:
Fourier Photographic Imaging Operator  Let P̂_α be defined as

\[
\hat{\mathcal{P}}_\alpha \;=\; \frac{1}{F^2}\, \mathcal{S}^4_2 \circ \mathcal{B}_\alpha^{-T}.
\]

It is easy to verify that P̂_α has the following explicit form, directly from the definitions of S⁴₂ and B_α. This explicit form is required for calculations:

\[
\hat{\mathcal{P}}_\alpha[G](k_x, k_y) \;=\; \frac{1}{F^2}\, G\big(\alpha \cdot k_x,\; \alpha \cdot k_y,\; (1-\alpha)\cdot k_x,\; (1-\alpha)\cdot k_y\big).
\]
Applying Equation . to Equation . brings us, finally, to our goal:
fourier slice photograph theorem
P α = F −2 ◦ Pα ◦ F 4 .
A photograph is the inverse d Fourier transform of a dilated d slice in the d Fourier transform of the light field.
Figure . illustrates the relationships between light fields and photographs that are implied
by this theorem. The figure makes it clear that P̂_α is the Fourier dual to P_α. The left half of the diagram represents quantities in the spatial domain, and the right half is the Fourier domain. In other words, P̂_α acts exclusively in the Fourier domain to produce the Fourier spectrum of
a refocused photograph from the Fourier spectrum of the light field. It is worth emphasizing
that the derivation of the theorem is rooted in geometrical optics and radiometry, and it is
consistent with the physics of image formation expressed by these models of optics.
The value of the theorem lies in the fact that P̂_α, a slicing operator, is conceptually simpler than P_α, an integral operator. This point is made especially clear by reviewing the explicit definitions of P̂_α (Equation .) and P_α (Equation .). By providing a Fourier-based interpretation, the theorem provides two equivalent but very different perspectives on image
formation. In this regard, the Fourier Slice Photograph Theorem is not unlike the Convolution Theorem, which provides different viewpoints on filtering in the two domains.
From a practical standpoint, the theorem provides a faster computational pathway for
certain kinds of light field processing. The computational complexities for each transform
chapter . signal processing framework
O(n4 )
log n)
O(n2 )
F 2 O(n2 log n)
Figure .: Fourier Slice Photograph Theorem. Transform relationships between the d light
field L F , a lens-formed d photograph Eα· F , and their respective Fourier spectra, LF and Eα· F .
are illustrated in Figure ., assuming a resolution of n in each dimension of the d light field.
The most salient point is that slicing via Pα (O(n2 )) is asymptotically faster than integration
via Pα (O(n4 )). This fact is the basis for the algorithm in Section ..
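A flatland (2D light field) sketch of this Fourier-slice pathway: transform the light field once, after which each refocused photograph costs only a slice extraction and a 1D inverse transform. The centering conventions and the bilinear interpolation of the spectrum are simplifying assumptions; a careful implementation must treat the resampling more delicately.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fourier_slice_refocus(L, alpha):
    """L[x, u]: square 2D (flatland) light field. Returns the photograph
    for refocus parameter alpha, up to constant scaling."""
    n = L.shape[0]
    c = n // 2
    # Centered 2D spectrum of the light field.
    G = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(L)))
    k = np.arange(n) - c
    # Slice trajectory (alpha * k_x, (1 - alpha) * k_x) through the origin.
    rows = alpha * k + c
    cols = (1 - alpha) * k + c
    vals = (map_coordinates(G.real, [rows, cols], order=1) +
            1j * map_coordinates(G.imag, [rows, cols], order=1))
    # Inverse 1D transform of the slice gives the refocused photograph.
    photo = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(vals)))
    return photo.real
```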
Photographic Effect of Filtering the Light Field
A light field produces exact photographs focused at various depths via Equation .. If we
distort the light field by filtering it, and then form photographs from the distorted light field,
how are these photographs related to the original, exact photographs? The following theorem provides the answer to this question.
filtered light field imaging theorem    A 4D convolution of a light field results in a 2D convolution of each photograph. The 2D filter kernel is simply the photograph of the 4D filter kernel focused at the same depth. Compactly in terms of operators,

$$P_\alpha \circ C^4_h = C^2_{P_\alpha[h]} \circ P_\alpha,$$

where we have expressed convolution with the following operator:
Convolution    $C^N_h$ is an $N$-dimensional convolution operator with filter kernel $h$, such that $C^N_h[F](\mathbf{x}) = \int F(\mathbf{x} - \mathbf{u})\, h(\mathbf{u})\, d\mathbf{u}$, where $\mathbf{x}$ and $\mathbf{u}$ are $N$-dimensional vector coordinates, and $F$ and $h$ are $N$-dimensional functions.
Figure . illustrates the theorem diagrammatically. On the diagram, L F is the input d
light field, and L F is a d filtering of it with d kernel h. Eα· F and Eα· F are the photographs
formed from the two light fields, respectively. The theorem states that Eα· F is a d filtering
of Eα· F , where the d kernel is the photograph of the d kernel, h.
In spite of its plausibility, the theorem is not obvious, and proving it in the spatial domain
is quite difficult. Appendix a. presents a proof of the theorem. At a high level, the approach is to apply the Fourier Slice Photograph Theorem and the Convolution Theorem to move
the analysis into the Fourier domain. In that domain, photograph formation turns into a
simpler slicing operator, and convolution turns into a simpler multiplication operation.
In the next section we will use this theorem to theoretically analyze a simple model of digital refocusing using the plenoptic camera.

Figure .: Filtered Light Field Imaging Theorem. Transform relationships between a 4D light field $L_F$, a filtered version of the light field, $\overline{L}_F$, and the photographs $E_{\alpha \cdot F}$ and $\overline{E}_{\alpha \cdot F}$ formed from them.

As an example of how to apply the theorem, let us consider trying to compute photographs from a light field that has been convolved by a box filter that is of unit value within the unit box about the origin. In 4D, the box function is defined by $\Box^4(x, y, u, v) = \Box(x)\,\Box(y)\,\Box(u)\,\Box(v)$, where the 1D box is defined by:

$$\Box(x) = \begin{cases} 1, & |x| < \frac{1}{2} \\ 0, & \text{otherwise.} \end{cases}$$

With $\Box^4(x, y, u, v)$ as the filter, the theorem shows that

$$P_\alpha[L_F \ast\!\ast\!\ast\!\ast\ \Box^4] = (P_\alpha \circ C^4_{\Box^4})[L_F] = (C^2_{P_\alpha[\Box^4]} \circ P_\alpha)[L_F] = P_\alpha[\Box^4] \ast\!\ast\ P_\alpha[L_F].$$

For explicitness, these equations use the “star” notation for convolution, such that $f \ast\!\ast\!\ast\!\ast\ g$ represents the 4D convolution of 4D functions $f$ and $g$, and $a \ast\!\ast\ b$ represents the 2D convolution of 2D functions $a$ and $b$.
The left-hand side of the equation is the photograph computed from the convolved light field. The right-hand side is the exact photograph from the unfiltered light field ($P_\alpha[L_F]$) convolved by the 2D kernel function $P_\alpha[\Box^4]$. This kernel is the 2D photograph of the box filter treated as a 4D light field. What exactly is a photograph of a 4D box light field? The following diagram visualizes things in terms of a 2D box light field.
The blue lines show the projection trajectories for focusing photographs at three different depths. The resulting 1D projected photographs are a box function, a flat-top pyramid, and a triangle function, as shown in the following diagram.
By analogy in the d light field, the d blur kernel over the exact photograph is a d box
function, a tent function or a flat-top tent function. Exactly which depends on the amount
of refocusing, α, as in the d versions above.
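To make the 2D intuition concrete, here is a minimal numerical sketch (illustrative only; the function and sampling choices are not the dissertation's code) that projects a 2D box light field along the refocusing lines of the photography operator and reproduces the box, flat-top, and triangle profiles:

```python
import numpy as np

def project_box(alpha, n=801):
    """Project a 2D box light field L(x, u) = box(x) * box(u) along the
    refocusing lines x = u * (1 - 1/alpha) + x'/alpha. Constant factors
    of the photography operator are omitted."""
    xs = np.linspace(-2.0, 2.0, n)        # photograph coordinate x'
    us = np.linspace(-0.5, 0.5, n)        # samples across the u aperture
    du = us[1] - us[0]
    photo = np.empty(n)
    for i, xp in enumerate(xs):
        x = us * (1.0 - 1.0 / alpha) + xp / alpha   # ray-space line for pixel x'
        photo[i] = np.count_nonzero(np.abs(x) < 0.5) * du
    return xs, photo

# alpha = 1 projects to a box; alpha = 1.25 to a flat-top profile;
# alpha = 0.5 to a triangle.
for alpha in (1.0, 1.25, 0.5):
    xs, photo = project_box(alpha)
    print(f"alpha={alpha}: peak={photo.max():.3f}")
```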
This example illustrates how the Filtered Light Field Imaging Theorem can be used as a
tool for assessing the design of light field imaging systems. For example, in studying light
field acquisition devices, such as the plenoptic camera, the impulse response of the recording
system is the d filter kernel in the theorem statement, h( x, y, u, v). As another example,
in processing the light field, the resampling strategy (e.g. quadrilinear or windowed sinc
interpolation) defines h( x, y, u, v). Given this kernel, the theorem shows that the resulting
filter over the output photograph is simply Pα [h]. Computing this d kernel is very practical:
it uses the same code as computing refocused photographs from the light field. Analyzing
the changes in the filter over output photographs provides a simple and practical procedure
for optimizing the design of light field camera systems.
Of course this analysis is an idealized view, and the model does not capture all the features of real systems. First, real systems are unlikely to be completely linear, and one can
draw reasonable conclusions from this kind of Fourier analysis only if a meaningful approximation to a system impulse response exists. This limitation makes it difficult to apply these
techniques to analysis of digital lens correction (Chapter ), for example, where the resampling strategy can be highly spatially variant. A second limitation is that real discrete systems
are not band-limited, so there will inherently be some aliasing which is not properly modelled by the convolution framework above. In spite of these limitations, the signal-processing
style of thinking developed here provides a new perspective to study light field imaging, and
reveals many insights that are not at all clear in the spatial domain.
chapter . signal processing framework
Band-Limited Analysis of Refocusing Performance
This section applies the Filtered Light Field Imaging Theorem to digital refocusing from a
plenoptic camera, to answer the following questions. What is the quality of a photograph refocused from a recorded light field? How is this photograph related to the exact photograph, such as the one that might have been taken by a conventional camera that was optically focused at the same depth?
The central assumption here, as introduced in Section ., is that the plenoptic camera
captures band-limited light fields. Section . summarized the intuition in terms of the intersection of the ideal refocusing lines with the rectangular bounds of the light field’s bandwidth. This section presents the algebraic details of this analysis, working in the spatial
domain and using the Filtered Light Field Imaging Theorem.
The band-limited assumption means that the Fourier spectrum of the recorded light field is multiplied by a dilated version of the 4D box filter described earlier, $\Box^4(x, y, u, v)$. By the Convolution Theorem, multiplication by the box function in the Fourier domain means that the signal is convolved by the Fourier transform of the box function in the spatial domain. It is well known that the Fourier transform of the box function is the perfect low-pass filter, the sinc function. Let us adopt multi-dimensional notation for the sinc function also, so that $\operatorname{sinc}(x, y, u, v) = \operatorname{sinc}(x)\operatorname{sinc}(y)\operatorname{sinc}(u)\operatorname{sinc}(v)$, with

$$\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.$$
In other words, the band-limited assumption means that the recorded light field, $\hat{L}_F$, is simply the exact light field, $L_F$, convolved by a 4D sinc:

$$\hat{L}_F = C^4_{\text{lowpass}}[L_F], \quad \text{where} \quad \text{lowpass}(x, y, u, v) = \frac{1}{(\Delta x\, \Delta u)^2}\, \operatorname{sinc}\!\left(\frac{x}{\Delta x}\right) \operatorname{sinc}\!\left(\frac{y}{\Delta x}\right) \operatorname{sinc}\!\left(\frac{u}{\Delta u}\right) \operatorname{sinc}\!\left(\frac{v}{\Delta u}\right).$$

In this equation, $\Delta x$ and $\Delta u$ are the linear spatial and directional sampling rates of the plenoptic camera, respectively. The $1/(\Delta x\, \Delta u)^2$ is an energy-normalizing constant to account for dilation of the sinc.
Analytic Form for Refocused Photographs
Our goal is an analytic solution for the digitally refocused photograph, $\hat{E}_{F'}$, computed from the band-limited light field, $\hat{L}_F$. This is where we apply the Filtered Light Field Imaging Theorem. Letting $\alpha = F'/F$,

$$\hat{E}_{F'} = P_\alpha\big[\hat{L}_F\big] = \big(P_\alpha \circ C^4_{\text{lowpass}}\big)[L_F] = C^2_{P_\alpha[\text{lowpass}]}\big[P_\alpha[L_F]\big] = C^2_{P_\alpha[\text{lowpass}]}[E_{F'}],$$

where $E_{F'}$ is the exact photograph at depth $F'$. This derivation shows that the digitally refocused photograph is a 2D-filtered version of the exact photograph. The 2D kernel is simply a photograph of the 4D sinc function interpreted as a light field, $P_\alpha[\text{lowpass}]$.
It turns out that photographs of a 4D sinc light field are simply 2D sinc functions:

$$P_\alpha[\text{lowpass}] = P_\alpha\!\left[\frac{1}{(\Delta x\, \Delta u)^2}\, \operatorname{sinc}\!\left(\frac{x}{\Delta x}\right) \operatorname{sinc}\!\left(\frac{y}{\Delta x}\right) \operatorname{sinc}\!\left(\frac{u}{\Delta u}\right) \operatorname{sinc}\!\left(\frac{v}{\Delta u}\right)\right] = \frac{1}{D_x^2}\, \operatorname{sinc}\!\left(\frac{x}{D_x}\right) \operatorname{sinc}\!\left(\frac{y}{D_x}\right),$$

where the Nyquist rate of the 2D sinc depends on the amount of refocusing, $\alpha$:

$$D_x = \max\big(\alpha \Delta x,\ |1 - \alpha|\, \Delta u\big).$$
This fact is difficult to derive in the spatial domain, but applying the Fourier Slice Photograph Theorem moves the analysis into the frequency domain, where it is easy (see Appendix a.). The critical point here is that since the 2D kernel is a sinc, the digitally refocused photographs are just band-limited versions of the exact photographs. The performance of digital refocusing is therefore defined by the variation of the 2D kernel bandwidth (Equation .) with the extent of refocusing.
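The crossover in this band-limit is easy to explore numerically. The following sketch (with hypothetical sampling rates chosen purely for illustration) evaluates $D_x$ for several refocusing depths, showing the spatial term $\alpha\Delta x$ dominating near $\alpha = 1$ and the directional term $|1-\alpha|\Delta u$ taking over for larger refocusing:

```python
def refocus_bandlimit(alpha, dx, du):
    """Nyquist sample spacing D_x of a digitally refocused photograph:
    D_x = max(alpha * dx, |1 - alpha| * du)."""
    return max(alpha * dx, abs(1.0 - alpha) * du)

# Hypothetical sampling rates (same units, e.g. mm).
dx, du = 0.01, 0.10
for alpha in (0.85, 0.95, 1.0, 1.05, 1.15):
    term = "spatial" if alpha * dx >= abs(1 - alpha) * du else "directional"
    print(f"alpha={alpha:.2f}  D_x={refocus_bandlimit(alpha, dx, du):.4f}  ({term} term dominates)")
```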
Interpretation of Refocusing Performance
Recall that the spatial and directional sampling rates of the camera are $\Delta x$ and $\Delta u$. Let us further define the width of the camera sensor as $W_x$, and the width of the lens aperture as $W_u$. With these definitions, the spatial resolution of the sensor is $N_x = W_x/\Delta x$ and the directional resolution of the light field camera is $N_u = W_u/\Delta u$.

Since $\alpha = F'/F$ and $\Delta u = W_u/N_u$, it is easy to verify that

$$|\alpha \Delta x| \geq |(1 - \alpha)\Delta u| \iff |F' - F| \leq \frac{\Delta x\, N_u\, F'}{W_u}.$$
The claim here is that this is the range of focal depths, $F'$, where we can achieve “exact” refocusing, i.e. compute a sharp rendering of the photograph focused at that depth. What
we are interested in is the Nyquist-limited resolution of the photograph, which is the number
of band-limited samples within the field of view.
Precisely, by applying Equation . to Equation ., we see that the bandwidth of the computed photograph is $(\alpha \Delta x)$. Next, the field of view is not simply the size of the light field sensor, $W_x$, but rather $(\alpha W_x)$. This dilation is due to the fact that digital refocusing scales the image captured on the sensor by a factor of $\alpha$ in projecting it onto the virtual focal plane (see Equation .). If $\alpha > 1$, for example, the light field camera image is zoomed in slightly compared to the conventional camera, the telecentric effect discussed in Section ..

Thus, the Nyquist resolution of the computed photograph is

$$\frac{\alpha W_x}{\alpha \Delta x} = \frac{W_x}{\Delta x} = N_x.$$

This is simply the spatial resolution of the camera, the maximum possible resolution for
the output photograph. This justifies the assertion that digital refocusing is “exact” for the
range of depths defined by Equation .. Note that this range of exact refocusing increases
linearly with the directional resolution, Nu , as described in Section ..
If we exceed the exact refocusing range, i.e.

$$|F' - F| > \frac{\Delta x\, N_u\, F'}{W_u},$$

then the band-limit of the computed photograph, $\hat{E}_{F'}$, is $|1 - \alpha|\Delta u$, which will be larger than $\alpha \Delta x$ (see Equation .). The resulting resolution is not maximal, but rather

$$\frac{\alpha W_x}{|1 - \alpha|\, \Delta u},$$

which is less than the spatial resolution of the light field sensor, $W_x/\Delta x$. In other words, the resulting photograph is blurred, with reduced Nyquist-limited resolution.
Re-writing this resolution in a slightly different form provides a more intuitive interpretation of the amount of blur. Since $\alpha = F'/F$ and $\Delta u = W_u/N_u$, the resolution is

$$\frac{\alpha W_x}{|1 - \alpha|\, \Delta u} = \frac{W_x\, N_u\, F'}{W_u\, |F' - F|} = \frac{W_x}{\dfrac{W_u}{N_u F'}\, |F' - F|}.$$

Since $\dfrac{N_u F'}{W_u}$ is the $f$-number of a lens $N_u$ times smaller than the actual lens used on the camera, we can now interpret $\dfrac{W_u}{N_u F'}\, |F' - F|$ as the size of the conventional circle of confusion cast through this smaller lens when the film plane is mis-focused by a distance of $|F' - F|$.

In other words, when refocusing beyond the exact range, we can only make the desired focal plane appear as sharp as it appears in a conventional photograph focused at the original depth, with a lens $N_u$ times smaller, as described in Section .. Note that the sharpness increases linearly with the directional resolution, $N_u$. During exact refocusing, it simply turns out that the resulting circle of confusion falls within one pixel and the spatial resolution completely dominates.
In summary, a band-limited assumption about the recorded light fields enables a mathematical analysis that corroborates the geometric intuition of refocusing performance presented in Chapter .
Fourier Slice Digital Refocusing
This section applies the Fourier Slice Photograph Theorem in a very different way, to derive
an asymptotically fast algorithm for digital refocusing. The presumed usage scenario is as
follows: an in-camera light field is available (perhaps having been captured by a plenoptic
camera). The user wishes to digitally refocus in an interactive manner, i.e. select a desired
chapter . signal processing framework
focal plane and view a synthetic photograph focused on that plane.
In previous approaches to this problem [Isaksen et al. ; Levoy et al. ; Ng et al.
; Vaish et al. ], spatial integration as described in Chapter  results in an O(n4 )
algorithm, where n is the number of samples in each of the four dimensions. The algorithm
described in this section provides a faster O(n2 log n) algorithm, with the penalty of a single
O(n4 log n) pre-processing step.
The algorithm follows trivially from the Fourier Slice Photograph Theorem. Figure . illustrates the steps of the algorithm, and a minimal code sketch follows the list below.
• Prepare the given light field, $L_F$, by pre-computing its 4D Fourier transform, $\mathcal{F}^4[L_F]$, via the Fast Fourier Transform. This step takes $O(n^4 \log n)$ time.

For each choice of desired virtual film plane at a depth of $F'$:

• Extract the Fourier slice (via Equation .) of the pre-processed Fourier transform, to obtain $(\mathcal{P}_\alpha \circ \mathcal{F}^4)[L_F]$, where $\alpha = F'/F$. This step takes $O(n^2)$ time.

• Compute the inverse 2D Fourier transform of the slice, to obtain $(\mathcal{F}^{-2} \circ \mathcal{P}_\alpha \circ \mathcal{F}^4)[L_F]$. By the theorem, this final result is $P_\alpha[L_F] = E_{F'}$, the photograph focused at the desired depth. This step takes $O(n^2 \log n)$ time.
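The structure of these steps can be sketched in a few lines of code. The 2D analogue below (a hypothetical illustration operating on an $n \times n$ light field array $L[x, u]$, not the dissertation's implementation; a real version would use the 4D array and the resampling filters of the next section) uses nearest-neighbor slice extraction for brevity:

```python
import numpy as np

def precompute_spectrum(light_field):
    """Pre-processing step: the (2D here, 4D in general) FFT of the light field."""
    return np.fft.fftshift(np.fft.fft2(light_field))

def refocus_fourier_slice(spectrum, alpha):
    """Per-photograph step: extract the Fourier slice along the line
    (alpha * k, (1 - alpha) * k) and inverse-transform it. Constant
    factors such as 1/F^2 are omitted."""
    n = spectrum.shape[0]
    k = np.arange(n) - n // 2                      # photograph frequencies
    # Nearest-neighbor resampling for brevity; the text recommends a
    # Kaiser-Bessel filter to control rolloff and aliasing artifacts.
    ix = np.rint(alpha * k).astype(int) + n // 2
    iu = np.rint((1.0 - alpha) * k).astype(int) + n // 2
    valid = (ix >= 0) & (ix < n) & (iu >= 0) & (iu < n)
    slice_spectrum = np.zeros(n, dtype=complex)
    slice_spectrum[valid] = spectrum[ix[valid], iu[valid]]
    return np.real(np.fft.ifft(np.fft.ifftshift(slice_spectrum)))

# Usage: pay the FFT cost once, then each refocused photograph is cheap.
L = np.random.rand(256, 256)          # stand-in for a recorded 2D light field
spectrum = precompute_spectrum(L)
for alpha in (0.9, 1.0, 1.1):
    photo = refocus_fourier_slice(spectrum, alpha)
```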
This approach is best used to quickly synthesize a large family of refocused photographs,
since the O(n2 log n) Fourier-slice method of producing each photograph is asymptotically
much faster than the O(n4 ) method of brute-force numerical integration via Equation ..
Implementation and Results
The complexity in implementing this simple algorithm has to do with ameliorating the artifacts that result from discretization, resampling and Fourier transformation. Unfortunately,
our eyes tend to be very sensitive to the kinds of ringing artifacts that are easily introduced
by Fourier-domain image processing. These artifacts are conceptually similar to the issues
tackled in Fourier volume rendering [Levoy ; Malzbender ], and Fourier-based
medical reconstruction techniques [Jackson et al. ] such as those used in ct and mr.
Figure .: Fourier Slice Digital Refocusing algorithm.
chapter . signal processing framework
Sophisticated signal processing techniques have been developed by these communities to address these problems. The sections below describe the most important issues for digital refocusing, and how to address them with adaptations of the appropriate signal processing techniques.
Sources of Artifacts
In general signal-processing terms, when we sample a signal it is replicated periodically in
the dual domain. When we reconstruct this sampled signal with convolution, it is multiplied
in the dual domain by the Fourier transform of the convolution filter. The goal is to perfectly
isolate the original, central replica, eliminating all other replicas. This means that the ideal filter is band-limited: it is of unit value for frequencies within the support of the light field, and zero for all other frequencies. Thus, the ideal filter is the sinc function, which has infinite extent in the spatial domain.

Figure .: Source of artifacts. (a): Example bilinear reconstruction filter; (b): its frequency spectrum (solid line) compared to the ideal spectrum (dotted line).
In practice we must use an imperfect, finite-extent filter, which will exhibit two important
defects (Figure .). First, the filter will not be of unit value within the band-limit, instead
gradually decaying to smaller fractional values as the frequency increases. Second, the filter
will not be truly band-limited, containing energy at frequencies beyond the desired band-limit. Figure .b illustrates these deviations as shaded regions compared to the ideal filter.
Figure .: Two main classes of artifacts. (a): Reference image; (b): Rolloff artifacts; (c): Aliasing artifacts.
The first defect leads to so-called rolloff artifacts [Jackson et al. ]. The most obvious
manifestation is a darkening of the borders of computed photographs. Figure .b illustrates this roll-off with the use of the Kaiser-Bessel filter described below. Decay in the filter’s frequency spectrum with increasing frequency means that the spatial light field values,
which are modulated by this spectrum, also “roll off ” to fractional values towards the edges.
The reference image in Figure .a was computed with spatial integration via Equation ..
The second defect, energy at frequencies above the band-limit, leads to aliasing artifacts (post-aliasing, in the terminology of Mitchell and Netravali []) in computed photographs. The non-zero energy beyond the band-limit means that the periodic replicas are
not fully eliminated, leading to two kinds of aliasing. First, the replicas that appear parallel
to the slicing plane appear as 2D replicas of the image encroaching on the borders of the final
photograph. Second, the replicas positioned perpendicular to this plane are projected and
summed onto the image plane, creating ghosting and loss of contrast. Figure . illustrates
these artifacts when the filter is quadrilinear interpolation.
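Both defects can be seen in the simplest separable case. The linear-interpolation (tent) kernel is the convolution of two box functions, so its frequency spectrum is $\operatorname{sinc}^2$; the short sketch below (illustrative only, not from the dissertation) prints how that spectrum decays within the band-limit (the source of rolloff) while remaining non-zero beyond it (the source of aliasing):

```python
import numpy as np

# Spectrum of the tent (linear interpolation) kernel: sinc^2.
# np.sinc(x) = sin(pi x) / (pi x), so np.sinc(f) ** 2 is the tent spectrum
# at frequency f (in cycles per sample spacing; band edge at 0.5).
for f in (0.0, 0.25, 0.5, 1.0, 1.5, 2.0):
    print(f"f = {f:4.2f}: |H(f)| = {np.sinc(f) ** 2:.4f}")
```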
Correcting Rolloff Error
Rolloff error is a well understood effect in medical imaging and Fourier volume rendering.
The standard solution is to multiply the affected signal by the reciprocal of the filter’s inverse
Fourier spectrum, to nullify the effect introduced during resampling. In our case, directly
chapter . signal processing framework
Figure .: Rolloff correction. (a): Without pre-multiplication; (b): With pre-multiplication.
analogously to Fourier volume rendering [Malzbender ], the solution is to spatially pre-multiply the input light field by the reciprocal of the filter's 4D inverse Fourier transform. This is performed prior to taking its 4D Fourier transform in the pre-processing step of the algorithm. Figure . illustrates the effect of pre-multiplication for the example of a Kaiser-Bessel resampling filter, described in the next subsection.
Unfortunately, this pre-multiplication tends to accentuate the energy of the light field near its borders, maximizing the energy that folds back into the desired field of view as aliasing.
Suppressing Aliasing Artifacts
The three main methods of suppressing aliasing artifacts are oversampling, superior filtering and zero-padding. Oversampling means drawing more finely spaced samples in the frequency domain, extracting a higher-resolution 2D Fourier slice $(\mathcal{P}_\alpha \circ \mathcal{F}^4)[L_F]$. Increasing the sampling rate in the Fourier domain increases the replication period in the spatial
domain. This means that less energy in the tails of the in-plane replicas will fall within the
borders of the final photograph.
Exactly what happens computationally will be familiar to those experienced in discrete
Fourier transforms. Specifically, increasing the sampling rate in one domain leads to an
increase in the field of view in the other domain. Hence, by oversampling we produce an
image that shows us more of the world than desired, not a magnified view of the desired
portion. Aliasing energy from neighboring replicas falls into these outer regions, which we
crop away to isolate the central image of interest.
(a): × oversampling (Fourier domain)
(b): Inverse Fourier transform of (a)
(c): No oversampling
(d): Cropped version of (b)
Figure .: Reducing aliasing artifacts by oversampling in the Fourier domain.
Figure . illustrates this approach. Image a illustrates an extracted Fourier slice that
has been oversampled by a factor of . Image b illustrates the resulting image with twice the
normal field of view. Some of the aliased replicas fall into the outer portions of the field of
view, which are cropped away in Image d. For comparison, the image pair in c illustrates
the results with no oversampling. The image with oversampling contains less aliasing in the
desired field of view.
chapter . signal processing framework
Oversampling is appealing because of its simplicity, but oversampling alone cannot produce good quality images. The problem is that it cannot eliminate the replicas that appear
perpendicular to the slicing plane, which are projected down onto the final image as described in the previous section.
This brings us to the second major technique of combating aliasing: superior filtering.
As already stated, the ideal filter is a sinc function with a band-limit matching the spatial
bounds of the light field. Our goal is to use a finite-extent filter that approximates this perfect spectrum as closely as possible. The best methods for producing such filters use iterative techniques to jointly optimize the band-limit and narrow spatial support, as described
in Jackson et al. [] in the medical imaging community, and Malzbender [] in the
Fourier volume rendering community.
Figure .: Aliasing reduction by superior filtering; rolloff correction is applied. (a): Quadrilinear filter (width ); (b): Kaiser-Bessel filter (width .); (c): Kaiser-Bessel filter (width .).
Jackson et al. show that a much simpler, and near-optimal, approximation is the Kaiser-Bessel function. They also provide optimal Kaiser-Bessel parameter values for minimizing aliasing. Figure . illustrates the striking reduction in aliasing provided by such optimized Kaiser-Bessel filters compared to inferior quadrilinear interpolation. Surprisingly, a Kaiser-Bessel window of just width . suffices for excellent results.
As an aside, it is possible to change the aperture of the synthetic camera and bokeh of
the resulting images by modifying the resampling filter. Using a different aperture can be
Figure .: Aliasing reduction by padding with a border of zero values; Kaiser-Bessel resampling filter used. (a): Without padding; (b): With padding.
thought of as multiplying L F ( x, y, u, v) by an aperture function A(u, v) before image formation. For example, digitally stopping down would correspond to a mask A(u, v) that is one
for points (u, v) within the desired aperture and zero otherwise. In the Fourier domain, this
multiplication corresponds to convolving the resampling filter by the Fourier spectrum of
A(u, v). This is related to work in shading in Fourier volume rendering [Levoy ].
The third and final method to combat aliasing is to pad the light field with a small border of zero values before pre-multiplication and taking its Fourier transform [Levoy ; Malzbender ]. This pushes energy slightly further from the borders, and minimizes the amplification of aliasing energy by the pre-multiplication described in Section .. In Figure ., notice that the small amount of aliasing present near the top left border of Image a is eliminated in Image b with the use of zero-padding.
Implementation Summary
Implementing the algorithm proceeds by directly discretizing the algorithm presented at the beginning of Section ., applying the following four techniques to suppress artifacts. In the pre-processing phase,

1. Pad the light field with a small border of zero values.

2. Pre-multiply the light field by the reciprocal of the Fourier transform of the resampling filter.
Figure .: Comparison of refocusing in the Fourier and spatial domains (Fourier slice algorithm vs. spatial integration).
In the refocusing step, which involves extracting the 2D Fourier slice,

3. Use a linearly-separable Kaiser-Bessel resampling filter. A filter width of . produces excellent results. For fast previewing, an extremely narrow filter of width . produces results that are superior to (and faster than) quadrilinear interpolation.

4. Oversample the 2D Fourier slice by a factor of 2. After Fourier inversion, crop the resulting photograph to isolate the central quadrant.
Performance Summary
This section compares the image quality and efficiency of the Fourier Slice algorithm for digital refocusing against the spatial-domain methods described in Chapter .
Dealing first with image quality, Figures . and . compare images produced with
Figure .: Comparison of refocusing in the Fourier and spatial domains II (Fourier slice algorithm vs. spatial integration).
the Fourier domain algorithm and spatial integration. The images in the middle columns are the ones that correspond to no refocusing ($\alpha = 1$). Figure . illustrates a case that is
particularly difficult for the Fourier domain algorithm, because it has bright border regions
and areas of over-exposure that are  times as bright as the correct final exposure. Ringing
and aliasing artifacts from these regions, which are relatively low energy compared to the
source regions, are more easily seen when they overlap with dark regions.
Nevertheless, the Fourier-domain algorithm performs well even with the filter kernel of
width .. Although the images produced by the two methods are not identical, the comparison shows that the Fourier-domain artifacts can be controlled with reasonable cost.
In terms of computational efficiency, my cpu implementations of the Fourier and spatial algorithms run at the same rate with × directional resolution. With × directional resolution, the Fourier algorithm is an order of magnitude faster [Ng ]. The Fourier
chapter . signal processing framework
Slice method outperforms the spatial methods as the directional uv resolution increases,
because the number of light field samples that must be summed increases for the spatial
integration methods, but the cost of slicing in the Fourier domain stays constant per pixel.
Light Field Tomography
The previous section described an algorithm that computes refocused photographs by extracting slices of the light field’s Fourier spectrum. This raises the intriguing theoretical
question as to whether it is possible to invert the process and reconstruct a light field from
sets of photographs focused at different depths. This kind of light field tomography would
be analogous to the way ct scanning reconstructs density volumes of the body from x-ray projections.

Let us first consider the reconstruction of 2D light fields from 1D photographs. We have
seen that, in the Fourier domain of the 2D ray-space, a photograph corresponds to a line passing through the origin at an angle that depends on the focal depth, $\alpha$. As $\alpha$ varies over all possible values, the angle of the slice varies from $-\pi/2$ to $\pi/2$. Hence, the set of 1D photographs refocused at all depths is equivalent to the 2D light field, since it provides a complete sampling of its Fourier transform. Another way of saying this is that the set of all 1D refocused photographs is the Radon transform [Deans ] of the light field. The 2D Fourier Slice Theorem is a classical way of inverting the 2D Radon transform to obtain the original distribution.
An interesting caveat to this analysis is that it is not physically clear how to acquire the photographs corresponding to negative $\alpha$, which are required to acquire all radial lines in the Fourier transform. If we collect only those photographs corresponding to positive $\alpha$, then it turns out that we omit a full fourth of the 2D spectrum of the light field.
In any case, this kind of thinking unfortunately does not generalize to the full 4D light field. It is a direct consequence of the Fourier Slice Photograph Theorem (consider Equation . for all $\alpha$) that the footprint of all full-aperture 2D photographs lies on the following 3D manifold in the 4D Fourier space:

$$\big(\alpha \cdot k_x,\ \alpha \cdot k_y,\ (1 - \alpha) \cdot k_x,\ (1 - \alpha) \cdot k_y\big), \quad \text{where } \alpha \in [0, \infty) \text{ and } k_x, k_y \in \mathbb{R}.$$
In other words, a set of conventional 2D photographs focused at all depths is not equivalent to the 4D light field. It formally provides only a small subset of its 4D Fourier transform.

One attempt to extend this footprint to cover the entire 4D space is to use a slit aperture in front of the camera lens. Doing so essentially masks out all but a 3D subset of the light field inside the camera. One can tomographically reconstruct this 3D slit light field from 2D slit aperture photographs focused at different depths. This works in much the same way that one can build up the 2D light field from 1D photographs (although the same caveat about negative $\alpha$ holds). By moving the slit aperture over all positions on the lens, this approach builds up the full 4D light field by tomographically reconstructing each of its constituent 3D slit light fields.
Selectable Refocusing Power
The previous two chapters concentrated on an in-depth analysis of the plenoptic camera
where the microlenses are focused on the main lens, since that is the case providing maximal
directional resolution and differs most from a conventional camera. The major drawback of
the plenoptic camera is that capturing a certain amount of directional resolution requires a
proportional reduction in the spatial resolution of final photographs. Chapter  introduced a
surprisingly simple way to dynamically vary this trade-off, by simply reducing the separation
between the microlenses and photosensor. This chapter studies the theory and experimental
performance of this generalized light field camera.
Of course the more obvious way to vary the trade-off in space and direction is to exchange the microlens array for one with the desired spatial resolution. The disadvantage
of this approach is that replacing the microlens array is typically not very practical in an
integrated device like a camera, especially given the precision alignment required with the
photosensor array. However, thinking about a family of plenoptic cameras, each customized
with a different resolution microlens array, provides a well-understood baseline for comparing the performance of the generalized light field camera. Figure . illustrates such a
family of customized plenoptic cameras. Note that the resolution of the photosensor is only  pixels in these diagrams, and the microlens resolutions are similarly low for illustrative purposes.

The ray-trace diagrams in Figure . illustrate how the set of rays that is captured by one
photosensor pixel changes from a narrow beam inside the camera into a broad cone as the
chapter . selectable refocusing power
(a):  microlenses
(b):  microlenses
(c):  microlenses
(d):  microlenses
Figure .: Plenoptic cameras with custom microlens arrays of different resolutions.
microlens resolution increases. The ray-space diagrams show how the ray-space cell for this
set of rays transforms from a wide and short rectangle into a tall skinny one – this is the
ray-space signature of trading directional resolution for spatial resolution.
Sampling Pattern of the Generalized Light Field Camera
Each separation of the microlens array and photosensor is a different configuration of the
generalized light field camera. Let us define β to be the separation as a fraction of the depth
that causes the microlenses to be focused on the main lens. For example, the typical plenoptic camera configuration corresponds to β = 1, and the configuration where the microlenses
Figure .: Different configurations of a single generalized light field camera. (a): β = 1; (b): β = .; (c): β = .; (d): β = 0.
are pressed up against the photosensor is β = 0. As introduced in Section ., decreasing the
β value defocuses the microlenses by focusing them beyond the aperture of the main lens.
Figure . illustrates four β-configurations. The configurations were chosen so that the
effective spatial resolution matched the corresponding plenoptic camera in Figure .. Note
that the changes in beam shape between Figures . and . are very similar at a macroscopic level, although there are important differences in the ray-space. As the highlighted
blue cells in Figure . show, reducing the β value results in a shearing of the light field sampling pattern within each column. The result is an increase in effective spatial resolution
(reduction in x extent), and a decrease in directional resolution (increase in u extent). An
important drawback of the generalized camera’s ray-space sampling pattern is that it is more
chapter . selectable refocusing power
anisotropic, which causes a moderate loss in effective directional resolution. However, experiments at the end of the chapter suggest that the loss is not more than a factor of  in
effective directional resolution.
Recovering higher resolution in output images is possible as the β value decreases, and
works well in practice. However, it requires changes in optical focus of the main lens and
final image processing as discussed below. At β = 0 the microlenses are pressed up against
the sensor, lose all their optical power and the effective spatial resolution is that of the sensor.
The effective resolution decreases linearly to zero as the β value increases, with the resolution
of the microlens array setting a lower bound. In equations, if the resolution of the sensor
is Msensor × Msensor and the resolution of the microlens array is Mlenslets × Mlenslets , the output
images have effective resolution Meffective × Meffective , where
Meffective = max((1 − β) Msensor , Mlenslets ).
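A tiny worked example of this formula, with hypothetical per-axis sensor and microlens counts:

```python
def effective_resolution(beta, m_sensor, m_lenslets):
    # M_effective = max((1 - beta) * M_sensor, M_lenslets)
    return max((1.0 - beta) * m_sensor, m_lenslets)

# Hypothetical per-axis counts: 4096-pixel sensor, 296-microlens array.
for beta in (0.0, 0.5, 0.9, 1.0):
    print(beta, effective_resolution(beta, 4096, 296))
```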
Deriving the Sampling Pattern
Figure . was computed by ray-tracing through a virtual model of the main lens, procedurally generating the ray-space boundary of each photosensor pixel. This section provides
additional insight in the form of a mathematical derivation of the observed sampling pattern.
An important observation in looking at Figure . is that the changes in the sampling
pattern of the generalized light field camera are localized within the columns defined by the
microlens array. Each column represents the microscopic ray-space between one microlens
and the patch of photosensors that it covers. Defocusing the microlens by reducing the β
value shears the microscopic ray-space, operating under the same principles discussed in
Chapter  for changing the focus of the main lens. The derivation below works from the
microscopic ray-space, where the sampling pattern is trivial, moving out into the ray-space
of the full camera.
Figure .a is a schematic for the light field camera, showing a microlens, labeled i,
whose microscopic light field is to be analyzed further. Figure .b illustrates a close-up
of the microlens, with its own local coordinate system. Let us parameterize the microscopic
light field’s ray-space by intersection of rays with the three illustrated planes: the microlens
plane, xi , the sensor plane, si , and the focal plane of the microlens, wi . In order to map
Figure .: Derivation of the sampling pattern for the generalized light field camera.
this microscopic ray-space neatly into a column of the macroscopic ray-space for the whole
camera, it is convenient to choose the origins of the three planes to lie along the line passing
through the center of the main lens and the center of microlens i, as indicated on Figure .b.
Also note on the figure that the focal length of the microlenses is f , and the separation between the microlenses and the sensor is β f .
Figure .c illustrates the shape of the sampling pattern on the ray-space parameterized
by the microlens xi plane and the photosensor si plane. This choice of ray-space parameterization makes it is easy to see that the sampling is given by a rectilinear grid, since each
photosensor pixel integrates all the rays passing through its extent on si , and the entire surface of the microlens on xi . Let us denote the microscopic light field as l βi f ( xi , wi ), where
the subscript β f refers to the separation between the parameterization planes. The eventual
chapter . selectable refocusing power
goal is to transform this sampling pattern in this ray-space into the macroscopic ray-space,
by a change in coordinate systems.
Figure .d illustrates the first transformation: re-parameterizing l βi f by changing the
lower parameterization plane from the sensor plane si to the microlens focal plane wi . Let
us denote the microscopic light field parameterized by xi and wi as l fi ( xi , wi ), where the f
subscript reflects the increased separation of one microlens focal length. Re-parameterizing
into this space is the transformation that introduces the shear in the light field. It is directly
analogous to the transformation illustrated in Figure .. Following the derivation described
there it is easy to show that
l βi f ( xi , si ) = l fi
xi , xi 1 −
The transformation from the microscopic light fields under each microlens into the macroscopic ray-space of the camera is very simple. It consists of two steps. First, there is a horizontal shift of $\Delta x_i$, as shown on Figure .a, to align their origins. The second step is an inversion and scaling in the vertical axis. Since the focal plane of the microlens is optically focused on the main lens, every ray that passes through a given point on $l^i_f$ passes through the same, conjugate point on the $u$ plane. The location of this conjugate point is opposite in sign due to optical inversion (the image of the main lens appears upside down under the microlens). It is also scaled by a factor of $F/f$ because of optical magnification. Combining these transformation steps,

$$l^i_f(x_i, w_i) = L_F\!\left(\Delta x_i + x_i,\ -\frac{F}{f}\, w_i\right).$$
Combining Equations . and . gives the complete transformation from l βi f to the macroscopic space:
l βi f ( xi , si )
= L Δx + x , x
F si
−1 −
f β
In particular, note that this equation shows that the slope of the grid cells in Figure .e is
−1 ,
a fact that will be important later.
Note that the preceding arguments hold for the microscopic light fields under each microlens. Figure .e illustrates the transformation of all these microscopic light fields into
the macroscopic ray space, showing how they pack together to populate the entire space. As
the final step in deriving the macroscopic sampling pattern, Figure .f illustrates that the
main lens truncates the sampling pattern vertically to fall within the range of u values passed
by the lens aperture.
As a final note, the preceding analysis assumes that the main lens is ideal, and that the
f -numbers of the system are matched to prevent cross-talk between microlenses. The main
difference between the idealized pattern derived in Figure .f and the patterns procedurally
generated in Figure . is a slight curvature in the grid lines. These are real effects due to
aberrations in the main lens, which are the subject of the next chapter.
The Problem with Focusing the Microlenses Too Close
The previous section examines what happens when we allow the microlenses to defocus by focusing beyond the main lens ($\beta < 1$). An important question is whether there is a benefit to focusing closer than the main lens, corresponding to moving the photosensor plane further than one focal length ($\beta > 1$). The difficulty with this approach is that the microlens images would grow in size and overlap. The effect could be balanced to an extent by
reducing the size of the main lens aperture, but this cannot be carried very far. By the time
the separation increases to two (microlens) focal lengths, the micro-images will have defocused to be as wide as the microlenses themselves, even if the main lens is stopped down to
a pin-hole. For these reasons, this chapter concentrates on separations less than or equal to
one focal length.
Optimal Focusing of the Photographic Lens
An unusual characteristic of the generalized camera is that we must focus its main lens differently than in a conventional or plenoptic camera. In the conventional and plenoptic cameras
best results are obtained by optically focusing on the subject of interest. In contrast, for intermediate β values the highest final image resolution is obtained if we optically focus slightly
chapter . selectable refocusing power
beyond the subject, and use digital refocusing to pull the virtual focal plane back onto the
subject of interest.
Figure .b and c illustrate this phenomenon quite clearly. Careful examination reveals
that maximal spatial resolution of computed photographs would be achieved if we digitally
refocus slightly closer than the world focal plane, which is indicated by the gray line on the
ray-trace diagram. The refocus plane of greatest resolution corresponds to the plane passing
through the point where the convergence of world rays is most concentrated (marked by
asterisks). The purpose of optically focusing further than usual would be to shift the asterisks
onto the gray focal plane. Notice that the asterisk is closer to the camera for higher β values.
The ray-space diagrams shown in Figure .b and c provide additional insight. Recall that on the ray-space, refocusing closer means integrating along projection lines that tilt clockwise from vertical. Visual inspection makes it clear that we will be able to resolve the projection line integrals best when they align with the diagonals of the sampling pattern – that is, the slope indicated by the highlighted, diagonal blue cell on each ray-space. From the imaging equation for digital refocusing (Equation .), the slope of the projection lines for refocusing is

$$\frac{1}{1 - 1/\alpha}.$$

Recall that $\alpha = F'/F$, where $F'$ is the virtual film depth for refocusing, and $F$ is the depth of the microlens plane. The slope of the ray-space cells in the generalized light field sampling pattern was calculated above, in Equation .. Equating these two slopes and solving for the relative refocusing depth, $\alpha$, gives

$$\alpha = \frac{(1 - \beta)F}{(1 - \beta)F - \beta f}.$$
This tells us the relative depth at which to refocus to produce maximum image resolution. If we wish to eventually produce maximum resolution at depth $F$, we should therefore optically focus the main lens by positioning it at depth

$$F_{\text{opt}} = \frac{(1 - \beta)F - \beta f}{1 - \beta} = F + \frac{\beta f}{\beta - 1}.$$

For the range of $\beta$ values that we are looking at ($0 < \beta \leq 1$), $F_{\text{opt}} < F$, indicating that the microlens plane should be brought closer to the main lens. This means optically focusing
further in the world, which is consistent with shifting the asterisks in the ray-trace view in
Figure . onto the desired world plane. The optical mis-focus is the difference between Fopt
and F, given by
( β −1)
focus asymptotes to negative infinity, which is
Note that as β approaches 1, the optical mismeaningless. The reason for this is that the slope
of the sampling grid cells become too horizontal,
and the optimal resolution is dominated by the
vertical columns of the the sampling pattern set
by the resolution of the microlens array, not by
the slope of the cells within each column. When
this occurs, it is best to set the optical focal depth
at the desired focal depth (i.e. Fopt = F), to provide the greatest latitude in refocusing about that
center. In practice it makes sense to stop using
Equation . once the effective resolution for that
β configuration falls to less than twice the resolu-
Figure .: Predicted effective resolu-
tion of the microlens array.
tion and optical mis-focus as a func-
This cutoff is shown as a dotted vertical line
tion of β for the prototype camera.
on Figure .. The graphs in this figure plot effective resolution and optical mis-focus for the prototype camera described in Chapter ,
where Msensor = , Mlenslets = , and f = . mm.
Recall that the predicted effective resolution Meffective × Meffective of the output images is
Meffective = max((1 − β) Msensor , Mlenslets ).
As with Fopt , the predicted value for Meffective derives from an analysis of the ray-space sampling pattern. Refocusing optimally aligns the imaging projection lines with the slope of
the grid cells, enabling extraction of the higher spatial resolution inherent in the sheared
sampling pattern. By visual inspection, the effective resolution of the computed image is
equal to the number of grid cells that intersect the x axis. Within each microscopic light
chapter . selectable refocusing power
field, Equation . shows that the number of grid cells crossed is proportional to (1 − β),
because of the shearing of the microscopic light field sampling patterns. Hence the overall
resolution is proportional to (1 − β). The maximum possible resolution is the resolution of
the sensor, and the minimum is the resolution of the microlens array. Experiments below
test this predicted linear variation in effective resolution with (1 − β).
In summary, when recording light fields with intermediate β, if the auto-focus sensor
indicates that the subject is at depth F, the sensor should be positioned at Fopt . Digital refocusing onto depth F after exposure will then produce the image with the maximum possible
effective resolution of max((1 − β) Msensor , Mlenslets ).
Experiments with Prototype Camera
The prototype camera allowed photographic testing of performance for β ≥ ., by manually adjusting the screws on the microlens array to choose different separations between the
microlens and photosensor. It was not possible to screw down to smaller β values because
the separation springs bottomed out. The overall set-up of our scene was similar to that in
Section .: I shot light fields of a resolution chart at varying levels of main lens mis-focus,
and tested the ability to digitally refocus to recover detail. In order to examine exactly how
much the effective spatial resolution changes with β, I computed final photographs with
 ×  pixels to match the full resolution of the photosensor.
Figure . illustrates the major trade-off that occurs when decreasing β: maximum spatial resolution increases (potentially allowing sharper final images), and directional resolution decreases (reducing refocusing power). The images show extreme close-up views of the
center /th of final images computed from recorded light fields of the iso  image
resolution chart. World focus was held fixed at a depth of approximately . meters from
the camera. Optical mis-focus refers to the distance that the target was moved closer to the
camera than this optical focal depth.
Notice that at β = . the resolution is never enough to resolve the finer half of rings on
the chart, but it is able to resolve the coarser rings over a wide range of main lens mis-focus
out to at least  cm. In contrast, at β = ., it is possible to resolve all the rings on the chart
at a mis-focus of  cm. However, notice that resolution is much more sensitive to mis-focus,
β = .
β = .
0 cm
 cm
 cm
Figure .: Decreasing β trades refocusing power for maximum image resolution.
blurring completely by  cm. Indeed, the images for optical mis-focus of . cm and .
cm (not shown) were noticeably more blurry than for  cm.
The fact that the highest image resolution is achieved when digitally refocusing  cm
closer is numerically consistent with theory. The lens of the camera has a focal length of 
mm, so the world focal depth of . meters implies F ≈ . mm. Equation . therefore
implies an optimal refocus separation of . mm, with a corresponding world refocus
plane at . meters, which is indeed a mis-focus of  cm closer than the optical world focal
plane, as observed on Figure ..
As an aside, the slight ringing artifacts visible in the images with 0 optical mis-focus are
due to pre-aliasing [Mitchell and Netravali ] in the raw light field. The high frequencies
in the resolution chart exceed the spatial resolution of these β configurations, and the square
microlenses and square pixels do not provide an optimal low-pass filter. Note that digital
refocusing has the desirable effect of anti-aliasing the output images by combining samples
chapter . selectable refocusing power
Figure .: Comparison of data acquired from the prototype camera and simulated with a
ray-tracer, for β = . (left half), and β = 1.
from multiple spatial locations, so the images with non-zero mis-focus actually produce
higher-quality images.
6.4 Experiments with Ray-Trace Simulator
To explore the performance of the generalized camera for low β configurations, we enhanced
Pharr and Humphreys’ physically-based rendering system [] to compute the irradiance
distribution that would appear on the photosensor in our prototype. Figure . illustrates
the high fidelity that can be achieved with such a modern ray-tracer. The images presented
in this figure correspond to the set-up for the middle row of Figure ., that is, world misfocus of  cm. The left half of Figure . is for β = ., the right half for β = .. The top two
rows illustrate the raw light field data. The bottom row shows final images refocused onto
β =.
β =.
β =.
Figure .: Simulation of extreme microlens defocus.
the resolution chart. These images reveal very good agreement between the simulation and our physically-acquired data, not only in final computed photographs but in the raw data as well.

Figure . presents purely simulated data, extrapolating performance for lower β values
of ., . and close to 0, which could not be physically acquired with the prototype. Each of
the light fields was simulated and resampled assuming that the optimal main lens focus for
each β was achieved according to Equation .. In other words, the optical focus for β = . and . was slightly further than the target. The top row of images shows zoomed views of a
× section of microlens-images, illustrating how decreasing β causes these micro-images to
evolve from blurred images of the circular aperture to filled squares containing the irradiance
striking each microlens square.
chapter . selectable refocusing power
The bottom two rows of images illustrate how resolution continues to increase as β decreases to 0. The area shown in the extreme close-up in the bottom row contains the finest lines on the iso  chart to the right of the right border of the black box. These lines project onto the width of roughly  photosensor pixels per line-pair. As the right-most column shows, as β converges on zero separation, final photographs are recorded at close to the full spatial resolution of the photosensor.
MTF Analysis
The ray-tracing system enabled synthetic mtf analysis, which provided further quantitative
evidence for the theory predicted by ray-space analysis. The overall goal was to visualize
the decay in refocusing power and increase in maximum image resolution as β decreases.
Another goal was to compare the performance of each β configuration against a light field
camera customized with a microlens array of equivalent spatial resolution. Based on earlier discussion, we would expect the customized light field camera to provide slightly more
refocusing power due to its isotropic sampling grid.
The analysis consisted of computing the variation in mtf for a number of different β
configurations of the prototype camera. For each configuration, a virtual point light source
was held at a fixed depth, and the optical focus was varied about that depth to test the ability of the system to compensate for mis-focus. The resulting light fields were processed to
compute a refocused photograph of the point light source that was as sharp as possible. The
Fourier transform of the resulting photograph provided the mtf for the camera for that configuration and that level of mis-focus.
The following graphs summarize the performance of the mtf for each of these tests as a
single number. The summary measure that was chosen is the spatial frequency at which the
computed mtf first drops below . Although this measure is a simplistic summary of the
full mtf, it is still well correlated with the ability of the system to resolve fine details, rising
as the refocused image of the point light source becomes sharper and its mtf increases.
Figure .a illustrates the plots of the described measure for five β configurations of
the prototype camera. The horizontal axis is the relative optical mis-focus on the image side, such that a deviation of . means that the separation between the main lens and the microlens plane was  greater than the depth at which the point light source forms an ideal
image. Note that the vertical axis is plotted on a log scale.
Figure .a illustrates three effects clearly. First, the maximum image resolution (the
maximum height of each plot) decreases as β increases, as expected by theory. The β configurations that are plotted move half-way closer to β =  with each step ( β = 0, ., ., .
and ), and the maximum spatial frequency halves as well, indicating roughly linear variation with (1 − β) as predicted by theory. The second effect is a broadening of the peak with
increasing β, indicating greater tolerance to optical mis-focus. The breadth of the peak is a
measure of the range of depths for which the camera can produce sharp images. The third
effect is clear evidence for the shift in optimal optical focal plane, as predicted by Equation .. As β increases, the maximum of the plot migrates to the left (the microlens plane
moves closer to the lens). This indicates optical focus further than the point light source,
corroborating the discussion earlier in the chapter.
Figure .b re-centers each graph about the optimal optical depth predicted by Equation ., in order to ease comparison with the customized plenoptic cameras, which are
presented in Figure .c. The resolution of the microlenses in the customized cameras were
chosen to match the maximum spatial resolution of the generalized camera configurations,
as predicted by Equation .. The legend in Figure .c gives the width of these microlenses.
Figure . re-organizes the curves in Figure .b and Figure .c for more direct comparison. The curve for each β value of the generalized camera is plotted individually, on a
graph with the closest curves from the customized plenoptic cameras. These graphs make
three technical points. The first point is that the overlap of the two curves in Figure .a for
β = . corroborates the prediction that the generalized light field camera can be made to
have the performance of a conventional camera with full spatial resolution.
The second point is corroboration for the predicted effective resolution in Equation ..
This is seen in the matching peaks of the two curves with the same color in Figures .b,
.c and .d. Recall that the dotted curves of the same color represents the performance
of a customized plenoptic camera with a microlens array resolution given by Equation ..
The third point is that the effective refocusing power of the generalized camera is less
than the customized camera of equivalent spatial resolution, but that the reduction in power
is quite moderate. In Figures .b, .c and .d, the generalized light field camera is compared not only against the plenoptic camera with equivalent spatial resolution, but also the
chapter . selectable refocusing power
Figure .: mtf comparison of trading refocusing power and image resolution.
one that has twice as much spatial resolution (hence half the directional resolution). In
these graphs, the plot for the generalized camera lies between the two plenoptic cameras,
bounding its effective refocusing power within a factor of  of the ideal performance given
by a customized plenoptic camera. The loss disappears, of course, as β increases to 1 and the
generalized camera converges on an ordinary plenoptic camera, as shown in Figure .e.
Figure .: mtf comparison of trading refocusing power and image resolution II.
chapter . selectable refocusing power
This chapter shows that defocusing the microlenses by moving the photosensor plane closer
is a practical method for recovering the full spatial resolution of the underlying photosensor. This simple modification causes a dramatic change in the performance characteristics
of the camera, from a low-resolution refocusing camera to a high-resolution camera with
no refocusing. A significant practical result is that decreasing the separation between the
microlenses and photosensor by only one half recovers all but  ×  of the full photosensor
resolution. This could be important in practice, as it means that it is not necessary for the
photosensor to be pressed directly against the microlenses, which would be mechanically difficult.
For intermediate separations, the spatial resolution varies continuously between the resolution of the microlens array and that of the photosensor. A very nice property is that the
refocusing power decreases roughly in proportion to the increase in spatial resolution. There
is some loss in the effective directional resolution compared to the ideal performance of a
plenoptic camera equipped with a custom microlens array of the appropriate spatial resolution, but simulations suggest that the loss in directional resolution is well contained within
a factor of  of the ideal case.
These observations suggest a flexible model for a generalized light field camera that can
be continuously varied between a conventional camera with high spatial resolution, and a
plenoptic camera with more moderate spatial resolution but greater refocusing power. The
requirement would be to motorize the mechanism separating the microlenses and the photosensor and provide a means to select the separation to best match the needs of the user for
a particular exposure. For example, the user could choose high spatial resolution and put
the camera on a tripod for a landscape photograph, and later choose maximal directional
resolution to maximize the chance of accurately focusing an action shot in low light.
This approach greatly enhances the practicality of digital light field photography by eliminating one of its main drawbacks: that one must trade spatial resolution for refocusing
power. By means of a microscopic adjustment in the configuration of the light field camera,
the best of all worlds can be selected to serve the needs of the moment.
Digital Correction of Lens Aberrations
A lens creates ideal images when it causes all the rays that originate from a point in the
world to converge to a point inside the camera. Aberrations are imperfections in the optical
formula of a lens that prevent perfect convergence. Figure . illustrates the classical case of
spherical aberration of rays refracting through a plano-convex lens, which has one flat side
and one convex spherical side. Rays passing through the periphery of the spherical interface
refract too strongly, converging at a depth closer to the lens than rays that pass close to the
center of the lens. As a result, the light from the desired point is blurred over a spot on the
image plane, reducing contrast and resolution.
Maxwell established how fundamental a problem aberrations are in the s. He proved
that no optical system can produce ideal imaging at all focal depths, because such a system
would necessarily violate the basic mechanisms of reflection and refraction [Maxwell ].
Nevertheless, the importance of image quality has motivated intense study and optimization
over the last  years, including contributions from such names as Gauss, Galileo, Kepler,
Newton, and innumerable others. A nice introduction to the history of aberration theory is
presented in a short paper by Johnson [], and Kingslake’s classic book [] presents
greater detail in the context of photographic lenses.
Correction of aberrations has traditionally been viewed as an optical design problem.
The usual approach has been to combine lens elements of different shapes and glass types,
balancing the aberrations of each element to improve the image quality of the combined
chapter . digital correction of lens aberrations
system. The most classical example of this might be the historical sequence of improvements in the original photographic objective, a landscape lens designed by Wollaston in
 [Kingslake ]. It consisted of a single-element meniscus lens with concave side to
an aperture stop. In , Chevalier improved the design by splitting the meniscus into a
cemented doublet composed of a flint glass lens and a crown glass lens. Finally, in ,
Dallmeyer split the crown lens again, placing one on either side of the central flint lens.
Today the process of correcting aberrations by combining glass elements has been carried to remarkable extremes. Zoom lenses provide perhaps the most dramatic illustration of
this phenomenon. Zooming a lens requires a non-linear shift
of at least three groups of lens elements relative to one another,
making it very challenging to maintain a reasonable level of
aberration correction over the zoom range. However, the convenience of the original zoom systems was so desirable that it
quickly launched an intense research effort that led to the extremely sophisticated, but complex design forms that we see
today [Mann ]. As an example, commodity  mm zoom
lenses contain no fewer than  different glass elements, and
some have as many as  [Dickerson and Lepp ]! Today,
all modern lens design work is computer-aided [Smith ],
where design forms are iteratively optimized by a computer.
One reason for the large numbers of lens elements is that they
provide greater degrees of freedom for the optimizer to achieve
the desired optical quality [Kingslake ].
Figure .: Spherical
This chapter introduces a new pure-software approach to
compensating for lens aberrations after the photograph is
taken. This approach complements the classical optical techniques. The central concept is simple: since a light field camera records the light traveling
along all rays inside the camera, we can use the computer to re-sort aberrated rays of light
to where they should ideally have converged. Digital correction of this kind improves the
quality of final images by reducing residual aberrations present in any given optical recipe.
For simplicity, this chapter assumes a light field camera configured with the plenoptic
separation ( β = 1) for maximum directional resolution. In addition, the analysis assumes
monochromatic light – that is, light of a single wavelength. This simplification neglects the
important class of so-called chromatic aberrations, which are due to wavelength-dependent
refraction of lens elements. The techniques described here may be extended to colored light,
but this chapter focuses on the simplest, single-wavelength case.
Previous Work
Ray-tracing has a long history in lens design. Petzval’s design of his famous portrait lens
was the first example of large-scale computation. In order to compete in the lens design
competition sponsored by the Société d’Encouragement in Paris, he recruited the help of
“Corporals Löschner and Haim [of the Austrian army] and eight gunners skilled in computing” [Kingslake ]. After about six months of human-aided computation, he produced
a lens that, at f /., was  times brighter than any other lens of its time. The lens was
“revolutionary,” and along with the use of “quickstuff,” new chemical coatings designed to
increase the sensitivity of the silver-coated photographic plates, “exposures were reduced to
seconds” from the minutes that were required previously [Newhall ].
Kingslake reports from the perspective of  years in studying lens design that “by far the
most important recent advance in lens-design technology has been the advent of the digital
computer” [Kingslake ]. The reason for this is that lens design involves the tracing
of a large number of rays to iteratively test the quality of an evolving design. One of the
earliest uses of computer-aided ray-tracing in optimizing lenses seems to have been the lasl
program at Los Alamos [Brixner ].
In computer graphics, ray-tracing of camera models has progressed from simulation of
the computationally efficient pin-hole camera to one with a real lens aperture [Potmesil and
Chakravarty ; Cook et al. ], to simulation of multi-element lenses with a quantitative consideration of radiometry [Kolb et al. ]. The method of Cook et al. differs from
Potmesil and Chakravarty in the sense that it is based on a numerically unbiased Monte-Carlo evaluation of the rendering equation [Kajiya ]. It is one of the great algorithms
of computer graphics, forms the basis of most modern ray-tracing techniques, and lies at
the heart of the ray-tracing system [Pharr and Humphreys ] that I used to compute the
chapter . digital correction of lens aberrations
simulated light fields in this chapter and the previous one. One of the limitations of these
kinds of ray-tracing programs is that they do not take into account the wave-nature of light.
In particular, the simulated images in this chapter are free of diffraction effects – incorporating these requires a more costly simulation of the optical transfer function of the imaging
system [Maeda et al. ]. The implicit assumption here is that the aberrations under study
dominate the diffraction blur.
Terminology and Notation
This chapter introduces an extra level of detail to the ray-space notation used in previous
chapters. The new concept is the notion of two sets of ray-spaces inside the camera: the
ideal ray-space, which is the one we encountered in previous chapters, and the aberrated ray-space, which is composed of the rays physically flowing inside the camera body. Ideal rays
are what we wish we had recorded with the light field camera, and aberrated rays are what we
actually recorded. This subtlety was not necessary in previous chapters, where the implicit
assumption was that the main lens of the camera was aberration-free. Let us differentiate
between these two spaces by denoting an ideal ray as (x, y, u, v) and an aberrated ray as (x′, y′, u′, v′).
The two ray-spaces are connected by the common space of rays in the world. An aberrated camera ray maps to a world ray via geometric refraction through the glass elements
of the main lens. In contrast, an ideal camera ray maps to a world ray via tracing through
an idealized approximation of the lens’ optical properties that is free of aberrations. In this
chapter, we will use the standard Gaussian idealization of the lens based on paraxial optics,
which is also known as the thick lens approximation [Smith ]. The Gaussian approximation is the linear term in a polynomial expansion of the lens’ properties, derived by considering the image formed by rays passing an infinitesimal distance from the center of the
lens. The process of transforming a ray through these ideal optics is sometimes referred to
as Gaussian conjugation.
These two mappings into the world space define a mapping, C, directly from the aberrated space to the ideal space:
C : ℝ⁴ → ℝ⁴,
C(x′, y′, u′, v′) = (x, y, u, v).
I call this map the ray correction function, and its inverse the ray distortion function. These are
the fundamental mappings that must be calculated in computing digitally corrected images.
Conceptually, C results from composing the mapping from aberrated rays to world rays
with the inverse of the mapping from ideal rays to world rays. A procedure to compute this
mapping, as illustrated by Figure . is to take the input aberrated camera ray, trace it out
into the world through the real optics of the lens, and then compute its Gaussian conjugate
back into the camera
The ray correction function encodes the extent to which a real lens deviates from paraxial
imaging. In a well-corrected lens where the residual aberrations are small, the ray-correction
function is close to the identity mapping.
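To make the mapping concrete, here is a minimal sketch of how C could be computed, assuming two hypothetical helpers (trace_out_through_real_lens and gaussian_conjugate_into_camera) standing in for the two tracing operations described above; neither name comes from the dissertation's actual system.

    def correction_C(aberrated_ray,
                     trace_out_through_real_lens,
                     gaussian_conjugate_into_camera):
        """Ray correction function C: maps an aberrated ray (x', y', u', v')
        to the ideal ray (x, y, u, v) where its light should have converged."""
        # Trace out into the world through the real glass elements of the lens.
        world_ray = trace_out_through_real_lens(aberrated_ray)
        # Conjugate back into the camera through the ideal thick-lens model.
        return gaussian_conjugate_into_camera(world_ray)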
Figure .: Ray correction function. (a): Aberrated. (b): Trace rays out optically. (c): Conjugate rays in ideally. (d): Ideal.

Figure .: Comparison of epipolar images with and without lens aberrations. (a): Six-element lens. (b): Single-element lens.

The light field sampling grid that is recorded by the light field camera is rectilinear in the aberrated ray-space. Projection into the ideal ray-space warps the grid. The light field sampling diagrams throughout this thesis visualize this warped grid. In earlier chapters where
aberrations were minimal and the ray correction function was close to the identity map, the
resulting sampling grid was close to rectilinear. In this chapter, the grid lines become curved
(Figure .d illustrates the curving of one vertical grid line). The footprint of each photosensor pixel in the light field camera is, in general, a curved quadrilateral in the ray-space,
rather than a simple box.
Visualizing Aberrations in Recorded Light Fields
Given the notion of aberrations as curvature in the ray-space, it is natural to look at the
epipolar images of the light field for manifestations of such curves. Figure . illustrates
epipolar images from two light fields recorded with the prototype camera. Figure .a was
shot with a consumer-grade lens that contains six glass elements and is well-corrected for
aberrations. This lens is close to Gaussian ideal, and the boundaries between objects are
straight lines as one expects in perfect light fields. In contrast, Figure .b was shot with a
single-element, plano-convex lens that exhibits heavy aberrations. The boundaries between
objects are ‘S’ shaped curves, corresponding to the warping of ray-space lines due to aberrations in the lens.
One might observe that the S-shaped curves in Figure .b are flipped horizontally compared to Figure .d. This is because the epipolar images are a map from the ideal
rays in the world into the aberrated space inside the camera. In other words, the epipolar images are plotted in the aberrated ray-space, and are actually a view of the distortion function
Figure .: Aberrations in sub-aperture images caused by a plano-convex lens.
chapter . digital correction of lens aberrations
– the inverse of the correction function visualized in Figure .d.
Tracing across a row of pixels in Figure .b reveals an interesting feature of the aberrated
epipolar images: the slopes of the curves are different for different points along the same
row. This effect appears in rows away from the center of the epipolar image. A moment’s
consideration will convince the reader that a row of pixels in an epipolar image corresponds
to a row of pixels in a sub-aperture image. Furthermore, from the discussion in Section .,
we know that the depth of an object in the world is related to its slope on the epipolar image.
These two facts imply that in the aberrated sub-aperture images, different pixels will appear
focused at different depths in the scene.
Evidence for this effect is visible in the zoomed sub-aperture images of the light field
in Figure .. The scene being photographed is a resolution test chart. Figure . shows
that in sub-aperture images from the periphery of the lens, part of the image is blurred and
part is in focus. It is easy to verify that the pixels which are blurry are focused closer in the
world – moving the resolution chart closer to the camera brings these pixels into focus, while
defocusing the rest of the resolution chart.
Review of Optical Correction Techniques
The visualization of the correction function on the ideal ray-space provides a different way
to visualize the action of traditional optical correction techniques. Two classical techniques
are stopping down the lens, and adding lens elements to balance aberrations.
Figure . illustrates stopping down the lens. The ray-space shown is ideal, and the
curved columns are the rays integrated by pixels in a conventional camera. The curved vertical lines separating columns are referred to as ray-intercept curves in traditional optical
engineering, although it is not common practice to illustrate their variation across the field
in the same diagram, as shown on the ray-space diagrams here. The curves in Figure .
show that each pixel collects light from a broad spatial range. The diagram shows that the
most highly aberrated regions come from the edges of the lens aperture (extremal u values), where the slope is greatest. Stopping down the aperture prevents these extremal rays
from reaching the sensor pixel, reducing its x extent and image blur. Of course the price is
reduced light sensitivity – far fewer rays are captured, so longer exposures are required.
Figure .: Classical reduction in spherical aberration by stopping down the lens.
Figure .: Classical reduction in aberrations by adding glass elements to the lens.
Figure . illustrates the second class of optical corrections – adding lens elements and
tuning their respective curvatures and glass types. The ray-space diagrams show how adding
elements provides the ability to shape the ray-space curves so that they are closer to vertical.
chapter . digital correction of lens aberrations
In contrast to these optical techniques, digital correction is an attempt to “straighten the
curves” in software. This is possible because, in collecting ray information, the light field
camera essentially splits the vertical curved columns in Figures . and . into multiple
cells (Figure .).
Digital Correction Algorithms
At a high level, digital correction of lens aberrations is simply a repetition of the basic concept
at the heart of this thesis: resorting the rays in the recorded light field to where we ideally
wanted them to converge. To determine where we want the rays to converge we will raytrace a paraxial idealization of the lens, and to determine where the rays actually went in
the recorded light field we will ray-trace an optical model of the real lens. In the latter case,
we must accurately model the geometry of all the lens' curved glass elements, as in optical engineering.
Figure . is an overview of digital correction in terms of the ray-space, illustrating how
to compute a single output pixel in a corrected image. Figure .a illustrates a set of rays
from a single point in the world, tracing into the camera through a double-convex lens. This
highly aberrated lens was chosen for illustrative purposes. Figure .b illustrates the ideal
(x, u) ray-space inside the camera, with the aberrated (x′, u′) light field sampling grid super-
imposed. Each cell in the grid represents the rays integrated by a single photosensor pixel
inside the camera. The vertical blue strip represents the set of rays shown on Figure .a.
Figure .c illustrates estimation of the desired vertical strip using the recorded photosensor
values. The procedure can be thought of as rasterizing the vertical strip onto the warped grid
and summing the rasterized pixels. In contrast, Figure .d illustrates all the rays collected
by a single microlens in the camera – this is the pixel value that would have been recorded
in a conventional photograph without digital correction. Note that the spatial extent of the
curved strip is wider, hence more blurry, than the digitally-corrected estimate in Figure .c.
Figures .a–c illustrate the first implementation of digital correction: the pixel-order
method, which involves iteration over the pixels of the output image. A second implementation is the ray-order method, involving iteration over the samples in the recorded light field.
Figure .: Ray-space illustration of digital correction of lens aberrations.
These are similar to the gather and scatter methods of texture resampling in computer graphics. The operations at the core of the two correction methods are the same: tracing through
real optics with aberrations and tracing through idealized paraxial optics without aberrations. The two methods differ in the order that they apply these operations. The images in
the remainder of the chapter are computed with the pixel-order algorithm. The ray-order
algorithm is more convenient in numerical analysis of performance later in the chapter.
Pixel-Order Image Synthesis
The pixel-order method can be thought of as extracting the unaberrated energy for an output
image pixel from different cells in the aberrated light field. It comprises the following steps
to compute the value of each output image pixel.
1. Sample all the ideal camera rays converging to that output pixel. A Monte-Carlo method is to draw random samples distributed over the corresponding sensor pixel's area and over the aperture of the lens.

2. Compute the world-space conjugates of the rays using the ideal paraxial approximation for the camera lens.

3. Reverse the direction of the world rays and ray-trace them back into the camera through the geometrically accurate model of the camera's lens, through the microlens array and down to the sensor surface.

4. Estimate the radiance along each ray from the neighborhood of sensor pixel values in the recorded light field. The images below use quadrilinear interpolation of the nearest 16 samples in the 4d space. Lower-quality nearest-neighbor interpolation can be used for speed. Slower, wider reconstruction filters can be used for higher image quality.

5. Average the radiance estimates to compute the final output pixel value.
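As a rough illustration (not the dissertation's actual implementation), the five steps might be organized as follows, with a hypothetical tracer object supplying the sampling and tracing operations named above:

    def pixel_order_correct(pixel, tracer, light_field, n_samples=64):
        """Gather-style estimate of one output pixel (steps 1-5 above).

        `tracer` is a hypothetical object assumed to provide:
          sample_ideal_ray(pixel)  -- step 1: random ray over pixel area and aperture
          paraxial_to_world(ray)   -- step 2: ideal thick-lens conjugate to the world
          trace_into_camera(ray)   -- step 3: real glass + microlenses -> sensor hit
        `light_field.sample(hit)`  -- step 4: quadrilinear lookup in the recording
        """
        total = 0.0
        for _ in range(n_samples):
            ideal_ray = tracer.sample_ideal_ray(pixel)            # step 1
            world_ray = tracer.paraxial_to_world(ideal_ray)       # step 2
            hit = tracer.trace_into_camera(world_ray.reverse())   # step 3
            total += light_field.sample(hit)                      # step 4
        return total / n_samples                                  # step 5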
Ray-Order Re-Projection of the Light Field
The ray-order method can be thought of as re-projecting the aberrated energy in the light
field into an unaberrated output photograph. It comprises the following steps for each cell
in the recorded light field.
1. Sample the bundle of rays inside the camera that would converge to the corresponding sensor pixel in the light field camera. A simple Monte-Carlo method for sampling this bundle of rays is to draw random samples over the area of the sensor pixel, and random directions over the pixel's parent microlens.

2. Trace these rays away from the sensor surface, through the microlenses, through the geometrically accurate model of the camera's lens and out into the world.

3. Reverse the direction of the world rays and compute their optical conjugates back into the camera using the ideal paraxial approximation of the camera's lens.

4. Intersect these rays with the imaging plane. At each location, add the light field sample value into a running sum of the values at the pixel in the corresponding location.
After this process concludes, normalize the value of each output image pixel, dividing by the
number of rays summed there over the course of processing the entire light field.
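Again as a sketch only, with the same kind of hypothetical tracer object as before, the scatter-style method might look like this:

    import numpy as np

    def ray_order_correct(light_field, tracer, out_shape, n_samples=16):
        """Scatter-style re-projection of an aberrated light field (steps 1-4
        above), followed by the per-pixel normalization described in the text."""
        image = np.zeros(out_shape)
        counts = np.zeros(out_shape)
        for cell in light_field.cells():        # each recorded (x',y',u',v') cell
            for _ in range(n_samples):
                ray = tracer.sample_cell_ray(cell)                  # step 1
                world_ray = tracer.trace_out_of_camera(ray)         # step 2
                ideal = tracer.paraxial_into_camera(
                    world_ray.reverse())                            # step 3
                i, j = ideal.image_plane_pixel()                    # step 4
                if 0 <= i < out_shape[0] and 0 <= j < out_shape[1]:
                    image[i, j] += cell.value
                    counts[i, j] += 1
        return image / np.maximum(counts, 1)    # normalize by ray counts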
Confidence Weighting for Increased Contrast Enhancement
The non-linear distortions introduced by aberrations mean that some light field cells pollute
the corrected photograph more than others. We have seen this effect in two different ways
so far. First, in looking at aberrated sub-aperture images in Figure ., we saw that the
same region of the scene can appear with very different amounts of blur when viewed from
different parts of the lens. Second, in looking at the projection of ideal vertical strips of rayspace onto the aberrated light field sampling grid, we saw that some grid cells could be much
wider than the ideal strip, leading to larger amounts of blur. For example, in Figure .d the
widest grid cells contributing to the estimate are at the top of the grid.
These observations motivate an optional enhancement in the resampling process for digital correction, designed to further raise the contrast and clarity of the corrected image. The
idea is to weight the contribution of each photosensor pixel in inverse proportion to its spatial extent when projected onto the output image plane. This modification means computing
a weighted average of light field sample values in the final step of the pixel-order algorithm.
In the corrected images shown later in this chapter, I used the following weight function,
where Δx and Δy are the projected width and height of the light field cell in the output image.
For convenience, the units are in terms of output pixel widths.
w(Δx, Δy) = h(Δx) · h(Δy),   where
h(x) = 1                        if x ≤ 1,
h(x) = exp(−(x − 1)²/(2σ²))     if x > 1.
In words, the weighting function decreases according to a Gaussian fall-off as the projected
width of the cell increases beyond one output image pixel. The x and y dimensions are treated
separately, with the overall weight being the product of the weights for each dimension. I
used a standard deviation of σ =  for the Gaussian fall-off. Calculation of Δx and Δy, which
varies as a function of ( x, y, u, v), is discussed at the end of this section.
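A minimal sketch of this weighting, assuming the piecewise form reconstructed above and treating the value of σ as a free parameter:

    import math

    def h(extent, sigma=1.0):
        """Per-dimension weight: full weight within one output pixel width,
        Gaussian fall-off beyond it."""
        if extent <= 1.0:
            return 1.0
        return math.exp(-((extent - 1.0) ** 2) / (2.0 * sigma ** 2))

    def w(dx, dy):
        """Confidence weight of a light field cell whose projected size on the
        output image plane is (dx, dy), in output-pixel widths."""
        return h(dx) * h(dy)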
Figure . visualizes the weighting function of the aberrated light field cells. Each pixel’s
weight is proportional to how blue it appears in this figure. The figure illustrates that the weight
tends to be higher for rays passing through the center of the lens, where the aberrations
are least.

Figure .: Weighting of rays (light field cells) in weighted correction. Each ray's weight is proportional to how blue it appears.

A more subtle and interesting phenomenon is that the weight varies across the pixels in the same sub-aperture image, as shown in the three zoomed images (z – z).
Close examination reveals that the weights are higher for areas in sharp focus, exactly as the
weighting function was designed to do.
Reducing the weight of the blurry samples reduces residual blur in the corrected photograph. Equation . defines one weighting function, but of course we are free to design
others. Choosing a weighting function that reduces the weight of cells with larger projected
area more aggressively results in greater contrast and resolution. The trade-off, however, is
that reducing the average weight (normalized to a maximum weight of 1) decreases the effective light gathering power of each output pixel. For example, the average weight of the
cells in Figure . is , which in some sense matches the light gathering power of a conventional camera with an aperture reduced to  area. However, stopping down the lens
imposes the same sub-aperture on every output image pixel. Weighted correction provides
the extra freedom of varying the aperture across the image plane. As shown in the experiments below, this allows weighted correction to produce a sharper image.
Computing the Projected 2D Size of Light Field Samples
Computing Δx and Δy for the weighting function in Equation . involves projecting the
aberrated light field cell onto the output image plane and calculating its 2d size. In practice,
it is sufficient to approximate the projected size by assuming that the correction function,
C, is locally linear over the light field cell. In this case, Δx can be approximated using the
first-order partial derivatives of the correction function:

Δx ≈ (1/Δx) · |(∂Cx/∂x′) Δx′ + (∂Cx/∂y′) Δy′ + (∂Cx/∂u′) Δu′ + (∂Cx/∂v′) Δv′|,

where I have defined the four components of C explicitly:

C(x′, y′, u′, v′) = (Cx(x′, y′, u′, v′), Cy(x′, y′, u′, v′), Cu(x′, y′, u′, v′), Cv(x′, y′, u′, v′)) = (x, y, u, v).

The analogous equation for Δy is

Δy ≈ (1/Δy) · |(∂Cy/∂x′) Δx′ + (∂Cy/∂y′) Δy′ + (∂Cy/∂u′) Δu′ + (∂Cy/∂v′) Δv′|.
Let me focus your attention on three features of these equations. First, dividing by Δx
and Δy normalizes the units so that they are relative to the size of output image pixels, as
required by the weighting function in Equation ..
The second point is that the partial derivatives in these equations vary as a function of the light field cell position (x, y, u, v). For example, in Figure ., ∂C/∂x′ and ∂C/∂u′ are the vectors parallel to the distorted horizontal and vertical lines of the sampling grid, and the distortion varies over the ray-space. I compute the value of the partial derivatives using simple finite differences of the sampled correction function, C. Recall that computing C(x′, y′, u′, v′) is a matter of tracing ray (x′, y′, u′, v′) out of the camera into the world using a model of the real optics, then ideally conjugating it back into the camera using idealized paraxial optics.
The third point to note is that Δx′, Δy′, Δu′ and Δv′ are constants in Equations . and .. Δx′ and Δy′ are the width and height of the microlenses in the light field camera (. microns in the prototype). Δu′ and Δv′ represent the projected size of the sensor pixels on the (u′, v′) lens plane. For example, the experiment described in the next section uses a plano-convex lens with a clear aperture diameter of approximately  mm. With a directional resolution of  × , Δu′ and Δv′ are approximately . mm for that experiment.
Correcting Recorded Aberrations in a Plano-Convex Lens
There were two over-arching goals to the experiments in this section. The first was to visually demonstrate that digital correction could raise contrast and resolution in real images
acquired with the prototype camera. The second goal was to use the prototype camera data
to provide a measure of validation for our simulation software, used to compute raw light
fields and digital corrections. This software is used in the last part of the chapter in quantitative performance tests of a wider range of lenses at much higher light field resolutions.
The lens tested in this section is a plano-convex lens with a focal length of  mm. It
is made out of standard bk glass. It is similar to the one illustrated in Figure ., and it
Figure .: Set-up for plano-convex lens prototype.
appears in the bottom right of the photograph in Figure .a. This simple lens was chosen
because it produces aberrations extreme enough to be visible and correctable in the relatively
low-resolution  ×  photographs produced by our prototype.
I recorded an aberrated light field using the prototype camera by replacing its usual photographic lens with this plano-convex lens (convex side up). A manual aperture was placed
against the planar side of the lens, and stopped down to achieve an f / aperture. I set the
separation between lens and image plane to focus on a resolution test-chart approximately
 cm away, as shown in Figure ., and tuned focus by adjusting the height of the target
until maximum sharpness was achieved.
I simulated a matching raw light field using Monte-Carlo ray-tracing. The computer
model of the lens, microlens array and sensor were matched to the manufacturer’s physical
specifications. The separation between the main lens and the microlens plane was matched
to measurements on the prototype set-up. As with the physical set-up, I tuned focus by
adjusting the distance of the virtual resolution chart until maximum sharpness was achieved.
Figure .: Comparison of uncorrected and corrected images from a light field recorded with the prototype camera. (a): No correction (conventional photograph). (b): Digital correction, no confidence weighting. (c): Digital correction, confidence weighting.
Figure . visually compares the quality of images computed from the recorded light field
with and without correction. Column a, computed without correction, is equivalent to a
conventional photograph. It exhibits the classical softness in resolution across the image
plane due to spherical aberration. In addition, the zoomed image at the bottom of Figure .a illustrates significant loss in contrast at the edge of the frame, where regions that
should be black and white appear as gray due to cross-pollution. Column b illustrates that
correction raises the contrast, particularly along the edges of the image but less so in the center of the frame where aberrations are less. Column c illustrates that weighted correction raises the contrast and resolution further still.

Figure .: Comparison of recorded and simulated data for digital lens correction. (a): Recorded light field. (b): Simulated light field. (A): Corrected photograph from a. (B): Corrected photograph from b.
Figure . compares the recorded data with the simulated version. Images a and b
compare close-ups of the raw light field data. Even at this extreme level of zoom, the overall
match is quite good, although small differences are visible due to error in calibrating the
physical and virtual geometry. Figure . A and B illustrate that these calibration errors
cause only small differences in output images. These two images are for correction without
weighting, and similarly good agreement is found in uncorrected images and correction
with weighting.
Comparison of Weighted Correction with Reduced-Aperture Conventional Imaging
From the discussion in Section ., we know that the weighted correction used on the plano-convex experiment results in an average light usage of  of the light field.
Figure . compares the corrected image with weighting against a conventional image
chapter . digital correction of lens aberrations
Digitally corrected image with weighting
Conventional photograph,  aperture
Figure .: Comparison of weighted correction with conventional imaging where the lens
aperture is stopped down for equivalent light gathering power.
where the aperture is reduced to  area ( diameter). The conventional image was
computed by only summing the rays under each microlens that passed through the reduced
aperture, without resorting of rays. Although the aberrations in the stopped-down conventional image are reduced compared to the full-aperture version in Figure ., the weighted
correction still provides significantly better contrast. For example, the black bars at the top
and bottom are much darker in the corrected image.
Weighted correction produces a superior image for two reasons. First, it resorts rays,
which improves the convergence of the rays that are used. Second, weighted correction has
greater flexibility in choosing to use rays that converge well on the resolution chart. In the
conventional case, stopping down the lens excludes sub-aperture images from the periphery
of the lens that tend to contain a larger fraction of blurry pixels, but even the reduced aperture contains some of these artifacts. In contrast, weighted correction can use an effectively
larger aperture and discard the worst rays, as shown in the zoomed images of Figure ..
Simulated Correction Performance
In contrast to the previous section, the results in this section derive purely from computer
simulation. The experiments here apply traditional, numerical analyses [Smith ] of the
ray-traced point spread function (psf) to compare the performance of various lenses with
and without digital correction. The psf is the spot of energy that appears in an output image
in response to a point source of light in the world. Ideal imaging produces a diffraction-limited
spot, but in practice aberrations usually result in a larger blur. One of the main goals of this
section was to explore how digital correction works across a range of lenses. Is it likely to
be a general-purpose technique for improving the quality of lenses in optical engineering?
To keep the comparison as simple as possible, this section examines only digital correction
without weighting.
Methods and Image Quality Metrics
The cameras simulated in this section assume the following geometry and resolutions. A
 mm format sensor is assumed – that is, the microlens array and photosensor both measure  mm ×  mm. The spatial resolution, that is the resolution of the microlens array, is
assumed to be constant at × (. mp). A range of N × N directional resolutions are
tested, from N =  (uncorrected), up to N = . Since the spatial resolution and sensor size
are fixed, increasing N assumes increasing photosensor resolution. N = , requiring . micron pixels, lies at the limit of technology currently shipping in commodity cameras [Askey
]. . micron pixels have been demonstrated in the laboratory [Micron ], and the
maximum resolution simulated here, N = , assumes a further reduction by  down to
. micron.
The imaging performance of various lenses was quantified by computing psfs to analyze
imaging performance at different points on the imaging plane. Computing a psf means
tracing rays from a point light source in the world through all parts of the lens, down to
the imaging plane to produce an image. In the case of simulating a light field camera, the
rays are traced through the microlens array down to the photosensor surface to produce a
raw light field. Final corrected photographs of the psf were computed using the ray-order
method, which is more efficient than the pixel-order method in this case, because so few
chapter . digital correction of lens aberrations
photosensor pixels are illuminated by the psf calculation.
Although the psf can be used to provide very detailed analysis of imaging performance,
it can be cumbersome for comparison across lenses because it is very high-dimensional. The
psf is a d function that varies with position on the
imaging plane, as well as focal depth. For example, it
is very common for the psf to become broader at the
edge of the image. As another example, most lenses
do not focus well at the short focal distances required
for high magnification (except for specially-designed
macro lenses). For such close-focusing, the psf tends to spread out.

Figure .: psf and rms spot radius. (a): Spherical aberration. (b): Coma.

In any case, to compare performance across lenses, I used a series of summary statistics derived from the psf. The first level of summary is provided by the root mean square (rms) spot radius, which is sometimes used in optical design texts. The rms spot radius can be thought of as the standard deviation of the psf interpreted as a probability distribution function. As with all statistics, the rms measure is a double-edged sword. On the one hand, it provides a very compact summary of one of the most important characteristics of the point spread: its gross size. Figure .a is a spot diagram illustrating
the density of a psf exhibiting spherical aberration, overlaid with a circle of one rms radius.
On the other hand, the rms measure discards information about the shape of the spot,
which can greatly affect the qualitative appearance of aberrations in the final image. For
example, Figure .b illustrates a psf exhibiting the classical aberration known as coma.
This comet-shaped psf is oriented radially away from the center of the image – the illustrated
psf comes from the bottom right of an aberrated image. Coma is well known to cause objects
to appear as if they are “flying out of the field of view” [Smith ]. The overlaid rms circle
clearly does not capture this visual characteristic, giving only the standard deviation.
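As a small illustration, the rms radius can be computed directly from a set of traced ray hit points (a sketch, assuming the hits are supplied as an N × 2 array of image-plane coordinates):

    import numpy as np

    def rms_spot_radius(hits):
        """RMS spot radius of a psf sampled by N ray-trace hit points (an
        (N, 2) array): the standard deviation of the psf about its centroid,
        treating the psf as a probability distribution."""
        hits = np.asarray(hits, dtype=float)
        centroid = hits.mean(axis=0)
        return float(np.sqrt(((hits - centroid) ** 2).sum(axis=1).mean()))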
The rms spot radius is a large reduction in information, but it is still a 2d function over the image plane. The next level of summary is provided by averaging the rms measure across the 2d plane, producing a single-number summary of the entire imaging system's performance. Although it is somewhat crude, this measure does track performance trends faithfully, improving with the quality of the photographic lens and with the amount of directional
resolution available in digital correction.
An equivalent, but slightly more intuitive measure of average rms radius is effective resolution. This measure is designed to give a rough idea of the “resolution” of output images. I have, somewhat arbitrarily, defined the effective pixel size as the square that can be inscribed within a circle of one average rms spot radius (Figure .). The effective resolution is then just the number of squares that fit within the sensor. In the experiments below, the effective resolution is therefore defined to be

( mm ×  mm) / (√2 R)²,

where R is the average rms spot radius (in mm), and √2 R is the width of the inscribed square.

Figure .: Effective pixel size.

The thought underlying the concept of effective resolution is that the total computed resolution of the output image is irrelevant if the psf is much broader than one output image pixel. Effective resolution provides a measure of the number of psf-limited spots that we can discriminate in the output image.
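In code, the definition is nearly a one-liner (a sketch; the sensor dimensions are parameters here rather than the specific format used in the experiments):

    import math

    def effective_resolution(avg_rms_radius_mm, sensor_w_mm, sensor_h_mm):
        """Sensor area divided by the area of the square inscribed in a circle
        of one average rms spot radius (side = sqrt(2) * R)."""
        side = math.sqrt(2.0) * avg_rms_radius_mm
        return (sensor_w_mm * sensor_h_mm) / (side ** 2)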
The final measure of image quality used in this section is the mtf. The mtf is one of the
most popular measures of image quality, providing a very sensitive measure of the ability of
the imaging system to resolve contrast and fine details in the output image. It is defined as
the magnitude of the Fourier transform of the point spread function. Indeed, that is how I
computed the mtf in the experiments below: by first computing the psf and then computing
its discrete Fourier transform. A detail of this implementation is that it incorporates the
effective mtf due to the finite sized microlenses and pixels of the sensor [Park et al. ].
This means that the computed mtf does not exceed the Nyquist sampling rate of the 
micron spatial sampling grid.
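A sketch of that computation, assuming the psf has been accumulated into a 2d array on the sensor's spatial sampling grid (so the finite microlens/pixel aperture is already folded in):

    import numpy as np

    def mtf_from_psf(psf):
        """mtf as the normalized magnitude of the discrete Fourier transform
        of a sampled 2d psf, with zero frequency shifted to the center."""
        psf = np.asarray(psf, dtype=float)
        psf = psf / psf.sum()                     # unit total energy
        otf = np.fft.fftshift(np.fft.fft2(psf))   # optical transfer function
        return np.abs(otf)                        # contrast = 1 at dc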
Since the mtf is the Fourier transform of the psf, it is as high-dimensional as the psf.
chapter . digital correction of lens aberrations
This chapter employs one of the most common summaries of the mtf, which is to plot a
small number of spatial frequencies as a function of distance on the imaging plane away
from the optical axis. If image quality degrades towards the edge of the image, the reduction
in quality would be visible as a decay in the plotted mtf. Additional features of the mtf are
described in relation to the plots presented below.
Case Analysis: Cooke Triplet Lens
Let us first analyze the psf of a particular lens in some detail. The chosen lens is an f /.
Cooke triplet [Tronnier and Eggert ]. Figure . illustrates the tracing of rays through
the lens’ three glass elements in the simulation of the light field camera’s psf. Columns a,
b and c show rays converging on three different positions in the image: at the optical axis,
half-way towards the edge of the image, and at the edge.
The middle row zooms in on a  micron patch of the imaging plane at the convergence
of rays. Ten microlenses are shown across the field of view, and the rays terminate at the
bottom on the photosensor surface. These close-up views illustrate the complexity of the
ray structure inside the camera. The shape of rays converging on the microlens plane is far
from a simple cone as assumed in ideal imaging. In addition, the shape of rays and quality of
convergence change across the imaging plane. For example, Diagram a shows that the rays
in the center of the frame converge onto  microlenses. In contrast, Diagram c shows worse
convergence near the edge of the image, with rays spread over approximately  microlenses.
The bottom row of Figure . provides a different view of the aberrations, in ray-space.
The horizontal, x, field of view is  microns, but the vertical, u, field of view spans the
entire lens plane. The directional resolution is N = . The distortion of the sampling grid
provides more information about the nature of the aberrations at different parts of the field.
In Diagram c, one can see how optical correction has been used to force rays coming from
the edge of the lens to converge reasonably well. In fact, the worst aberrations come from
the parts of the lens midway between the optical axis and the edge of the lens. In contrast,
Diagram c shows that at the right edge of the image, the worst aberrations come from the
left-most portion of the lens, since the grid is most distorted near the bottom.
Figure .: Aberrated ray-trace and ray-space of a Cooke triplet lens.

These ray-diagrams highlight the crucial concept of scale relative to image resolution when considering the seriousness of aberrations. If the spatial resolution were less, the curvature would be negligible relative to the spacing of the grid columns. Conversely, any residual aberration will exhibit significant curvature if the spatial resolution is high enough. This
means that to correct for aberrations, the number of vertical cuts needed in each column, corresponding to the directional resolution, is proportional to the spatial resolution. This makes intuitive sense: increasing the spatial resolution increases our sensitivity to aberrations in the output image, and raises the amount of directional resolution required to correct for the residual aberrations.

Figure .: Spot diagrams for the Cooke triplet psf. (a)–(c): Uncorrected (center, middle, edge). (d)–(f): Digitally corrected (center, middle, edge).
Figure . illustrates the uncorrected and corrected psf for the triplet lens at the three
positions shown in Figure .. The overlaid circles show the rms spot. With this amount
of directional resolution ( × ), the average rms spot radius is roughly  microns, and
the effective resolution of output images is close to the full . mp of the microlens array.
Figure . illustrates an important characteristic of digital correction that is not captured by
the summary statistics. It shows that digital correction tends to make the psf more radially
symmetric, reducing qualitative problems introduced by aberrations such as coma.
Figure . illustrates the improvement in correction performance as the directional
.. simulated correction performance
Figure .: Histogram of triplet-lens psf size across imaging plane.
resolution ( N × N ) increases from N =  (uncorrected conventional imaging) to . The
histograms for each value of N illustrate the distribution of rms radii across the imaging
plane. The vertical axis of the graph is measured in terms of output image pixels. For example, for N = ,  of the imaging plane has an rms spot radius of approximately  pixel
widths. These histograms were computed by sampling the psf at over  different positions distributed evenly but randomly (using stratified sampling) across the  mm × mm
imaging plane. , rays were cast in estimating the rms radius for each psf.
The histograms illustrate how the corrected rms spot size converges as the directional
resolution increases. The rate of convergence depends on how much aberration is present in
the lens, with greater distortions requiring larger amounts of directional resolution for accurate correction. The effective resolution of the output image grows from . mp to the full
. mp as the directional resolution increases from N =  to . The starting, uncorrected
resolution may seem surprisingly low. One factor is that we are using the lens with its aperture wide-open, where aberrations are worst. Second, effective resolution decays rapidly as
the spot size increases. If the effective resolvable spot covers just  pixels, as it almost does
in the triplet lens without correction, then the effective resolution decreases by almost an
order of magnitude.
The histograms give a sense for how the average performance increases across the image, but not how the performance varies from the center to the edge. Figure . measures
variation along that axis, in the form of mtf plots.

Figure .: mtf of triplet lens with and without correction (infinity focus).
Figure .: mtf of triplet lens with and without correction (macro focus).

With microlenses of  micron width, the Nyquist spatial sampling rate is  cycles per mm. The graphs plot the mtf at three
spatial frequencies: /, / and / the Nyquist rate. The curve for / the Nyquist rate
is generally correlated with “sharpness,” the ability of the system to resolve fine details. Curves
above  are extremely rare at these higher frequencies. The curves for lower frequencies
generally correlate with how “contrasty” the final image is. Good lenses have very high mtf
at these frequencies, and conventional wisdom is that differences of a few percentage points
are visible in final images.
Figure . shows that, even without digital correction, the contrast of the triplet lens
is quite good across the imaging plane, dipping only slightly at the very edge of the image.
Sharpness, however, is relatively poor at these resolutions, especially out towards the edge.
Digital correction improves sharpness and contrast across the field of view, and makes performance more even.
Figure . illustrates the triplet used to focus much closer, at a distance of  mm
for : magnification (i.e. macro photography). The lens was not designed for such close
focusing, and its performance is quite poor at this depth. This phenomenon reflects the
kind of trade-off that is inherent in all real optical engineering, as a consequence of Maxwell’s
principle that perfect imaging at all focal depths cannot be achieved using only refraction
and reflection. The designers of the triplet lens prioritized simplicity of the optical recipe
(using only three elements), and performance when focused at infinity. Those constraints
were incompatible with good imaging performance in the macro range.
The corrected mtf in Figure . illustrates dramatic improvement in the contrast of
the system. This example illustrates that digital correction extends the useful focal range of
a lens. However, the improvement in the sharpness is moderate, because the directional
resolution of the system was not sufficient relative to the distortion in the light field at this
focal depth. The reason for this is two-fold. First, the light field distortion is much greater at
the macro focal depth. Figure . illustrates this fact by comparing the ray-space for infinity
and macro focus at the three image positions shown in Figure .. From the discussion
previously in this chapter, it is evident that a higher directional resolution would be required
to correct the greater aberrations at the macro focus.
Unfortunately, the second factor is that the directional resolution is halved at macro focal
depths. This can be seen in Figure . by the fact that there are only half as many vertical
cuts in the grid columns of d-f as there are in a-c. The underlying cause of the reduction is
that the separation between the main lens and the imaging plane increases by a factor of 
when changing focus from infinity to macro. As a result, the aperture appears half as large
radially from the perspective of the microlenses, and the images that appear under each
microlens span only half as many pixels.
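This halving also follows from a back-of-the-envelope relation (the symbols here are assumed for illustration, not taken from the text): if A is the clear aperture diameter, F the separation between the main lens and the microlens plane, fμ the microlens focal length, and p the photosensor pixel width, then the image of the aperture formed under each microlens spans roughly

    N ≈ (1/p) · (A · fμ / F)

pixels. Doubling F while holding the other quantities fixed, as happens when moving from infinity focus to unit-magnification macro focus, halves N and hence the usable directional resolution.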
These observations suggest that in designing lenses to be used with a light field camera,
the designer should optimize optical image quality for close focus distances rather than infinity. Higher directional resolution at further focal distances can be used to offset slightly
worse optical performance in that range.
chapter . digital correction of lens aberrations
(c): Edge
(a): Center
(b): Middle
Ray-space of triplet lens, focusing at infinity.
(d): Center
(e): Middle
(f): Edge
Figure .: Ray-space of triplet lens at macro focal depth.
Correction Performance Across a Database of Lenses
The previous section illustrated how digital correction could be used to improve the performance of one multi-element lens. It should be clear from the ray-trace diagrams and
visualizations of the aberrated ray-space that the amount of improvement will depend on
the exact formula of the lens and the shape of the distortions in the recorded light field.
To provide some feeling for the performance of digital correction across a range of lenses,
this chapter concludes with a summary of simulated correction performance for a database
of  lenses. The optical formulas for these lenses were obtained by manually extracting
every fixed-focal-length lens in the Zebase [Zemax ] database for which the description
implied that photographic application was possible.
The initial set of lenses selected from Zemax was modified in two ways. First, the set
was pruned to exclude lenses that contained aspheric elements, because the simulation system does not currently support analytic ray-tracing of such aspheric surfaces. Second, the
remaining lenses were linearly scaled in size so that their projected image circles matched
the diagonal length of the  mm format sensor in our virtual camera. The issue is that the
scaling of the original lens formulas was quite arbitrary. For example, many formulas were
normalized in size to a nominal focal length of  mm, regardless of the effective focal
length in  mm format. Figure . lists the final set of lenses used in this experiment, and
some of their basic properties.
The database spans an eclectic range of design forms, including triplets, double Gauss
variations, telephotos and wide-angle lenses. They also span a fairly wide range of focal
lengths and f -numbers. Many of these lens recipes originated in classical optical engineering
textbooks, or from expired patents. Nevertheless, many of the designs are quite old, and this
may best be thought of as an experiment on variations of some classic design patterns. There
are many lens design forms used in diverse imaging applications today, and this database
does not attempt to sample the breadth of current art.
In spite of these caveats, the -lens database provides useful experimental intuition. The
results imply that digital correction will be quite generally applicable at resolutions that are
achievable in present or near-term technology.
Figure . presents the experimental data. Each lens appears on a different row, sorted
in order of increasing uncorrected performance. For each lens, effective resolution with unweighted correction is plotted across the row as a function of directional resolution. The
directional resolution is N × N , beginning at N = 1 (conventional uncorrected), and increasing up to N = , as shown by the key near the bottom of the figure. Note that the horizontal
axis is plotted on a log scale.
The most striking feature of the data is the regular improvement provided with digital
correction. Effective resolution rises steadily with directional resolution for all the lenses,
indeed as one would expect from the interpretation of the algorithm in ray-space. In other
words, the experiment provides evidence that digital correction is a robust and general-purpose technique for improving image quality.
Figure .: Database of lenses. (Columns: lens file, glass elements, focal length, aperture, and corresponding row in Figure .. Lenses: Cooke triplet anastigmat; Triplet objective usp; Cooke triplet usp; Wide angle usp; Wide-angle objective; slr telephoto; Photographic objective; Double Gauss; Double Gauss; Photographic lens usp; Photographic objective usp; Large aperture objective usp; Modified Gauss usp; Large aperture photo lens usp; Lrg. aper. photographic usp; Gauss type objective usp; Double gauss objective usp; -glass double Gauss usp; Momiyama usp; Double Gauss objective.)

A second important point is that the data show that in many cases the amount of improvement is roughly proportional to the recorded directional resolution. In particular, the experiment indicates that a relatively moderate amount of directional data can provide significant improvements in image quality. For example, a directional resolution of  × , which
is viable in current vlsi technology, produces increases in effective resolution by an average
factor of . across the database, and by up to a factor of  (for rows  and ). Taking a step
back, this is an important validation of the required directional resolution for useful digital
correction. From first principles it was not immediately clear whether correcting for aberrations across the lens aperture would work without extremely high resolution images of the
lens aperture. The experiment here shows that it does, although of course more resolution
provides greater effective resolution up to the limiting resolution of the microlens array.
Figure .: Improvement in effective resolution with digital correction.
chapter . digital correction of lens aberrations
Another feature of the data is that the effective resolution without digital correction (the
left-most data point for each row) is quite low for many lenses. This corresponds to the
performance of conventional photographic imaging. For  out of the  lenses, the effective resolution is actually below one quarter of the assumed . mp output image resolution. Although some lenses score unusually low, probably reflecting a prioritization of maximum light-gathering power (aperture size) over image quality, it is not surprising that many lenses suffer from low effective resolution with their apertures wide open, as tested here. It is common photographic wisdom that, to obtain the sharpest images from any lens, one should stop the lens down to half its maximum diameter or less [Adams ]. Of course this reduces the light-gathering power, requiring longer exposures. The results here reiterate that digital correction can raise image quality without paying as great a penalty in light-gathering power.
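To put that penalty in concrete terms (a standard photographic calculation, not a result of this dissertation): exposure scales with the area of the aperture, so stopping down from diameter $D$ to $D/2$ cuts the collected light by
$$\frac{(D/2)^2}{D^2} = \frac{1}{4},$$
that is, two f-stops, requiring a 4× longer exposure for the same image brightness. Digital correction recovers sharpness while keeping the wide aperture's 4× light advantage.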
In Closing
To exploit digital correction to maximum advantage, it must be considered during the design phase of the lens, when the optical requirements are determined. One example was already
discussed earlier in the chapter: lenses designed to be used with digital correction should
prioritize optical performance at focal planes close to the camera. A different consideration
is that digital correction expands the space of possible solutions given a set of image quality
requirements. It will allow the use of larger apertures than before, or it may enable simplification of the optical formula (e.g. a reduction in the number of glass elements) required to
satisfy the target requirements.
In order to reach the full potential of the technique, further study is needed into methods for jointly optimizing the optical recipe and the digital correction software. As one example of the complexity of the design task, rows  –  in Figure . illustrate very different growth rates with directional resolution, even though they all start with similar uncorrected effective resolutions. From a qualitative examination of the shape of the distortions in ray space, it seems plausible that a major limitation on the rate of improvement is regions of extremely high slope that are localized in u, such as those produced by uncorrected spherical aberration. On the one hand, this suggests that unweighted digital correction will work best on lenses
without this characteristic. On the other hand, localized regions of high curvature in the ray
space are especially amenable to weighted correction, which can eliminate them without
greatly reducing the collecting power.
The concept of weighted correction raises an important question about aberrated light
fields: are cells of low weight fundamentally less useful? To answer this question, recall that
the discussion of aberrated sub-aperture images in Section . showed that different light
field cells are maximally focused at different depths in the world. What this means is that
the weight of a light field cell depends on the refocus depth. Maximum weight will result
when refocusing at the cell’s plane of maximal focus. The warped ray-space sampling grid of
Figure .d provides one way to visualize this process. Recall that refocusing corresponds
to straight, parallel projection lines, which will be overlaid on the warped grid. When the slope of the lines matches the slope of the grid cells, the weight will be highest. A different
way to visualize the changing weight is in terms of the blue-tinted sub-aperture images of
Figure .. In this kind of diagram, the distribution of blue weight shifts as we refocus at
different depths. In other words, aberrations do not result in useless light field samples, but
rather in samples that extend the range of sharp refocusing at the price of reduced effective
light usage at any one refocus depth.
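To make the role of the weights concrete, the following sketch shows weighted correction in the spirit described above: each light field cell is traced to the output pixel where its rays converge at the chosen refocus depth and accumulated with a confidence weight, and the image is normalized by the accumulated weight. This is a minimal illustration under stated assumptions, not the dissertation's implementation; the precomputed `pixel_xy` mapping and the weighting scheme are placeholders standing in for the lens's true ray-space mapping.

    import numpy as np

    def weighted_correction(samples, pixel_xy, weights, shape):
        """Weighted digital correction (illustrative sketch).

        samples  : (n,) radiance of each light field cell
        pixel_xy : (n, 2) output-pixel position of each cell's rays at the
                   chosen refocus depth (stand-in for the true ray trace)
        weights  : (n,) per-cell confidence; low for cells whose rays are
                   spread out (poorly focused) at this refocus depth
        shape    : (height, width) of the output image
        """
        image = np.zeros(shape)
        norm = np.zeros(shape)
        for s, (x, y), w in zip(samples, pixel_xy, weights):
            ix, iy = int(round(y)), int(round(x))
            if 0 <= ix < shape[0] and 0 <= iy < shape[1]:
                image[ix, iy] += w * s
                norm[ix, iy] += w
        # Normalizing by the accumulated weight keeps exposure uniform
        # where low-confidence cells contribute little light.
        return image / np.maximum(norm, 1e-12)

In this sketch, down-weighting a cell near its badly focused depths trades a small loss in collecting power for the elimination of residual blur, exactly the trade-off discussed above.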
These fundamental non-linearities are what make the study of aberrations and their correction such a challenging subject. Continued study of digital correction will have to address
much of the complexity of traditional lens design, and add to it the additional nuances of
weighting function design.
Conclusion

Through a pioneering career as one of the original photojournalists, Henri Cartier-Bresson (1908 – 2004) inspired a generation of photographers, indeed all of us, to seek out and capture the Decisive Moment in our photography.
The creative act lasts but a brief moment, a lightning instant of give-and-take,
just long enough for you to level the camera and to trap the fleeting prey in your
little box.
Armed only with his manual Leica, and “no photographs taken with the aid of flashlight,
either, if only out of respect for the actual light,” Cartier-Bresson made it seem as if capturing
the decisive moment were as easy as turning one’s head to casually observe perfection.
But most of us do not find it that easy. I love photography, but I am not a great photographer. The research in this dissertation grew out of my frustration at losing many shots
to mis-focus. One of the historical lines in photography has been carried by generations
of camera engineers. From the original breakthrough by Daguerre in 1839, which was instantly popular in spite of the toxicity of the chemical process, to the development of the
hand-camera, derided by even the great Alfred Stieglitz before he recognized its value, to
the rise of digital photography in the last ten years, we have seen continuous progress in the
photographic tools available to us. But picture-making science is still young, and there are
still many problems to be solved.
chapter . conclusion
The main lesson that I have learned through my research is that significant parts of the
physical process of making photographs can be executed faithfully in software. In particular,
the problems associated with optical focus are not fundamental characteristics of photography. The solution advanced in this dissertation is to record the light field flowing into
conventional photographs, and to use the computer to control the final convergence of rays
in our images. This new kind of photography means unprecedented capabilities after exposure: refocusing, choosing a new depth of field, and correcting lens aberrations.
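For readers who want the flavor of how software controls the final convergence of rays, the sketch below implements the simplest form of digital refocusing, shift-and-add over sub-aperture images: each view through a sub-region of the aperture is translated in proportion to its aperture position and the chosen refocus depth, then all views are averaged. It is a minimal illustration of the principle under simplifying assumptions (integer-pixel shifts, no global magnification), not the processing pipeline of this dissertation.

    import numpy as np

    def refocus_shift_and_add(subaperture, alpha):
        """Digitally refocus a light field by shift-and-add (sketch).

        subaperture : (nu, nv, h, w) array; subaperture[u, v] is the image
                      seen through aperture position (u, v)
        alpha       : refocus parameter; 1.0 reproduces the original focal
                      plane, values away from 1.0 focus nearer or farther
        """
        nu, nv, h, w = subaperture.shape
        out = np.zeros((h, w))
        for u in range(nu):
            for v in range(nv):
                # Shift each view in proportion to its aperture offset.
                du = int(round((1 - 1 / alpha) * (u - nu // 2)))
                dv = int(round((1 - 1 / alpha) * (v - nv // 2)))
                out += np.roll(subaperture[u, v], (du, dv), axis=(0, 1))
        # (This minimal version ignores the global 1/alpha magnification
        # that full refocusing includes.)
        return out / (nu * nv)
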
Future cameras based on these principles will be physically simpler, capture light more
quickly, and provide greater flexibility in finishing photographs. There is a lot of work to be
done on re-thinking existing camera components in light of these new capabilities. The last
chapter discussed how lens design will change to exploit digital correction of aberrations.
With larger-aperture lenses, it may be possible to use a weaker flash system or do away with
it in certain scenarios. Similarly, the design of the auto-focus system will change in light of
digital refocusing and the shift in optimal lens focus required by selectable refocusing power.
Perhaps the greatest upheaval will be in the design of the photosensor. We need to maximize
resolution with good noise characteristics – not an easy task. And the electronics will need to
read it out at reasonable rates and store it compactly. This is the main price behind this new
kind of photography: recording and processing a lot more data. Fortunately, these kinds of
challenges map very well to the exponential growth in our capabilities for electronic storage
and computing power.
In photography, one of the most promising areas for future work is developing better
processing tools for photo-finishing. In this dissertation, I chose methods that stayed close
to physical models of image formation in real cameras. Future algorithms should boldly
pursue non-physical metaphors, and should actively interpret the scene to compute a final
image with the best overall composition. The quintessential example would be automatic
refocusing of the people detected in the picture while softening focus on the background, as
I tried to suggest in the treatment of the two-person portrait in Figure .. Such automatic
photo-finishing would be a boon for casual photographers, but it is inappropriate for the
professional photographer or serious enthusiast. Experts like these need interactive tools
that give them artistic control. A simple idea in this vein is a virtual brush that the user
would “paint” over the photograph on the computer to push the local focus closer or farther, analogous to dodging and burning in the old darkroom. Having a record of the light arriving at every pixel in a photograph will enable all kinds of new computer graphics like this.
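As a sketch of how such a focus brush might work on a light field, suppose we precompute a focal stack (the same photograph refocused at a series of depths) using digital refocusing, and let the brush edit a per-pixel depth map that selects among the slices. Everything here, including the function names `paint_focus` and `composite` and the linear blending between slices, is an illustrative assumption rather than a described implementation.

    import numpy as np

    def paint_focus(depth_map, center, radius, delta):
        """Apply one brush stroke: push local focus nearer/farther (+/- delta)."""
        h, w = depth_map.shape
        ys, xs = np.ogrid[:h, :w]
        mask = (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2
        depth_map[mask] = np.clip(depth_map[mask] + delta, 0.0, 1.0)
        return depth_map

    def composite(focal_stack, depth_map):
        """Blend a focal stack with a per-pixel depth map in [0, 1].

        focal_stack : (d, h, w) array of refocused photographs
        depth_map   : (h, w) array selecting a fractional stack slice per pixel
        """
        d = focal_stack.shape[0]
        idx = depth_map * (d - 1)
        lo = np.floor(idx).astype(int)
        hi = np.minimum(lo + 1, d - 1)
        t = idx - lo
        rows, cols = np.indices(depth_map.shape)
        # Linear interpolation between the two nearest refocus depths.
        return (1 - t) * focal_stack[lo, rows, cols] + t * focal_stack[hi, rows, cols]
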
The ideas in this dissertation have already begun to make an impact in scientific imaging. A light field camera attached to a microscope enables 3d reconstruction of the specimen from a single photographic exposure [Levoy et al. ], because it collects rays passing through the transparent specimen at different angles. Telescopes present another interesting opportunity. Would it be possible to discard the real-time deformable mirror used
in modern adaptive-optics telescopes [Tyson ], instead recording light field video and
correcting for atmospheric aberrations in software? In general, every imaging device that
uses optics in front of a sensor may benefit from recording and processing ray directional
information according to the principles described in this dissertation.
This is a very exciting time to be working in digital imaging. We have two powerful
evolutionary forces acting: an overabundance of resolution, and processing power in close
proximity. I hope I have convinced you that the cameras we know today are just one evolutionary step toward where we are going, and I feel that we are on the verge of an explosion
in new kinds of cameras and computational imaging.
But thankfully, some things are sure to stay the same. Photography will celebrate its 167th birthday this year, and photographs are much older even than that – we had them
floating on our retinas long before we could fix them on metal or paper. To me, one of
the chief joys in light field photography is that it feels like photography – it very much is
photography as we know it. Refocusable images are compelling exactly because they look
like the images we’ve always seen, except that they retain a little more life by saving the power
to focus for later. I find that this new kind of photography makes taking pictures that much
more enjoyable, and I hope you will too. I look forward to the day when I can stand in the
tall grass and learn from fellow light field photographers shooting in the field.
A.1 Generalized Fourier Slice Theorem
generalized fourier slice theorem. Let $\mathcal{F}^N$ denote the $N$-dimensional Fourier transform, $\mathcal{I}_N^M$ integral projection of an $N$-dimensional function down to its first $M$ arguments, $\mathcal{S}_N^M$ slicing (restriction) of an $N$-dimensional function to its first $M$ arguments, and $\mathcal{B}$ an invertible change of basis. Then
$$\mathcal{F}^M \circ \mathcal{I}_N^M \circ \mathcal{B} \;=\; \mathcal{S}_N^M \circ \frac{\mathcal{B}^{-T}}{\left|\mathcal{B}^{-T}\right|} \circ \mathcal{F}^N.$$
Proof. The following proof is inspired by one common approach to proving the classical
2d version of the theorem. The first step is to note that
$$\mathcal{F}^M \circ \mathcal{I}_N^M \;=\; \mathcal{S}_N^M \circ \mathcal{F}^N, \tag{a.1}$$
because substitution of the basic definitions shows that, for an arbitrary function $f$, both $\left(\mathcal{F}^M \circ \mathcal{I}_N^M\right)[f](u_1, \dots, u_M)$ and $\left(\mathcal{S}_N^M \circ \mathcal{F}^N\right)[f](u_1, \dots, u_M)$ are equal to
$$\int f(x_1, \dots, x_N)\, \exp\!\big(-2\pi i\,(x_1 u_1 + \cdots + x_M u_M)\big)\; dx_1 \cdots dx_N.$$
The next step is to observe that if basis change operators commute with Fourier transforms
via $\mathcal{F}^N \circ \mathcal{B} = \big(\mathcal{B}^{-T}/\left|\mathcal{B}^{-T}\right|\big) \circ \mathcal{F}^N$, then the proof of the theorem would be complete, because for every function $f$ we would have
$$\left(\mathcal{F}^M \circ \mathcal{I}_N^M \circ \mathcal{B}\right)[f] \;=\; \left(\mathcal{S}_N^M \circ \mathcal{F}^N \circ \mathcal{B}\right)[f]$$
by Equation (a.1), and the commutativity relation would give us the final theorem:
$$\left(\mathcal{F}^M \circ \mathcal{I}_N^M \circ \mathcal{B}\right)[f] \;=\; \left(\mathcal{S}_N^M \circ \frac{\mathcal{B}^{-T}}{\left|\mathcal{B}^{-T}\right|} \circ \mathcal{F}^N\right)[f].$$
Thus, the final step is to show that $\mathcal{F}^N \circ \mathcal{B} = \big(\mathcal{B}^{-T}/\left|\mathcal{B}^{-T}\right|\big) \circ \mathcal{F}^N$. Directly substituting the operator definitions establishes these two equations:
$$\left(\mathcal{F}^N \circ \mathcal{B}\right)[f](\mathbf{u}) \;=\; \int f\!\left(\mathcal{B}^{-1}\mathbf{x}\right) \exp\!\left(-2\pi i\, \mathbf{x}^T \mathbf{u}\right) d\mathbf{x}; \tag{a.2}$$
$$\left(\frac{\mathcal{B}^{-T}}{\left|\mathcal{B}^{-T}\right|} \circ \mathcal{F}^N\right)[f](\mathbf{u}) \;=\; \frac{1}{\left|\mathcal{B}^{-T}\right|} \int f(\mathbf{x})\, \exp\!\left(-2\pi i\, \mathbf{x}^T \mathcal{B}^T \mathbf{u}\right) d\mathbf{x}. \tag{a.3}$$
In these equations, $\mathbf{x}$ and $\mathbf{u}$ are $N$-dimensional column vectors (so $\mathbf{x}^T\mathbf{u}$ is the dot product of $\mathbf{x}$ and $\mathbf{u}$), and each integral is taken over all of $N$-dimensional space.

Let us now apply the change of variables $\mathbf{x} = \mathcal{B}\mathbf{x}'$ to Equation (a.2). The first substitution is $\mathbf{x}' = \mathcal{B}^{-1}\mathbf{x}$. Furthermore, $d\mathbf{x} = \left|\mathcal{B}\right| d\mathbf{x}'$. However, since $\left|\mathcal{B}\right| = 1/\left|\mathcal{B}^{-1}\right| = 1/\left|\mathcal{B}^{-T}\right|$ by basic properties of determinants, $d\mathbf{x} = \big(1/\left|\mathcal{B}^{-T}\right|\big)\, d\mathbf{x}'$, which is the second substitution. Making these substitutions,
$$\left(\mathcal{F}^N \circ \mathcal{B}\right)[f](\mathbf{u}) \;=\; \frac{1}{\left|\mathcal{B}^{-T}\right|} \int f(\mathbf{x}')\, \exp\!\left(-2\pi i\, (\mathcal{B}\mathbf{x}')^T \mathbf{u}\right) d\mathbf{x}' \;=\; \frac{1}{\left|\mathcal{B}^{-T}\right|} \int f(\mathbf{x}')\, \exp\!\left(-2\pi i\, \mathbf{x}'^T \mathcal{B}^T \mathbf{u}\right) d\mathbf{x}',$$
where the second equality relies on the linear algebra rule for transposing matrix products. Equations (a.2) and (a.3) now show that $\mathcal{F}^N \circ \mathcal{B} = \big(\mathcal{B}^{-T}/\left|\mathcal{B}^{-T}\right|\big) \circ \mathcal{F}^N$, completing the proof.
a.. filtered light field imaging theorem
A.2 Filtered Light Field Imaging Theorem
filtered light field imaging theorem. Let $\mathcal{C}^N_h$ denote $N$-dimensional convolution with kernel $h$, and let $\mathcal{P}_\alpha$ be the photography operator. Then
$$\mathcal{P}_\alpha \circ \mathcal{C}^4_h \;=\; \mathcal{C}^2_{\mathcal{P}_\alpha[h]} \circ \mathcal{P}_\alpha.$$
To prove the theorem, let us first establish a lemma involving the closely related modulation operator.

Modulation. $\mathcal{M}^N_\beta$ is an $N$-dimensional modulation operator, such that $\mathcal{M}^N_\beta[F](\mathbf{x}) = F(\mathbf{x}) \cdot \beta(\mathbf{x})$, where $\mathbf{x}$ is an $N$-dimensional vector coordinate.

Lemma. Multiplying an input 4d function by another one, $h$, and transforming the result by $\mathcal{P}^F_\alpha$, the Fourier photography operator, is equivalent to transforming both functions by $\mathcal{P}^F_\alpha$ and then multiplying the resulting 2d functions. In operators,
$$\mathcal{P}^F_\alpha \circ \mathcal{M}^4_h \;=\; \mathcal{M}^2_{\mathcal{P}^F_\alpha[h]} \circ \mathcal{P}^F_\alpha. \tag{a.4}$$
Algebraic verification of the lemma is direct given the basic definitions, and is omitted
here. On an intuitive level, however, the lemma makes sense because $\mathcal{P}^F_\alpha$ is a slicing operator: multiplying two functions and then slicing the product is the same as slicing each of them and multiplying the resulting functions.
Proof of theorem.
The first step is to translate the classical Fourier Convolution Theorem
(see, for example, Bracewell []) into useful operator identities. The Convolution Theorem states that a multiplication in the spatial domain is equivalent to convolution in the
Fourier domain, and vice versa. As a result,
$$\mathcal{F}^N \circ \mathcal{C}^N_h \;=\; \mathcal{M}^N_{\mathcal{F}^N[h]} \circ \mathcal{F}^N \tag{a.5}$$
$$\text{and} \qquad \mathcal{F}^N \circ \mathcal{M}^N_h \;=\; \mathcal{C}^N_{\mathcal{F}^N[h]} \circ \mathcal{F}^N. \tag{a.6}$$
Note that these equations also hold for negative N , since the Convolution Theorem also
applies to the inverse Fourier transform.
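These operator identities are easy to verify numerically in the discrete setting, where convolution is circular; the following check is an added illustration, not part of the proof.

    import numpy as np

    # Discrete check of F ∘ C_h = M_{F[h]} ∘ F (circular convolution).
    rng = np.random.default_rng(1)
    n = 128
    f, h = rng.standard_normal(n), rng.standard_normal(n)

    # Circular convolution computed directly from its definition.
    conv = np.array([np.sum(f * h[(k - np.arange(n)) % n]) for k in range(n)])

    # The Fourier transform of the convolution equals pointwise
    # modulation of F[f] by F[h].
    assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(h))
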
With these facts and the lemma in hand, the proof of the theorem proceeds swiftly:
$$\begin{aligned}
\mathcal{P}_\alpha \circ \mathcal{C}^4_h \;&=\; \mathcal{F}^{-2} \circ \mathcal{P}^F_\alpha \circ \mathcal{F}^4 \circ \mathcal{C}^4_h \\
&=\; \mathcal{F}^{-2} \circ \mathcal{P}^F_\alpha \circ \mathcal{M}^4_{\mathcal{F}^4[h]} \circ \mathcal{F}^4 \\
&=\; \mathcal{F}^{-2} \circ \mathcal{M}^2_{\left(\mathcal{P}^F_\alpha \circ\, \mathcal{F}^4\right)[h]} \circ \mathcal{P}^F_\alpha \circ \mathcal{F}^4 \\
&=\; \mathcal{C}^2_{\left(\mathcal{F}^{-2} \circ\, \mathcal{P}^F_\alpha \circ\, \mathcal{F}^4\right)[h]} \circ \mathcal{F}^{-2} \circ \mathcal{P}^F_\alpha \circ \mathcal{F}^4 \\
&=\; \mathcal{C}^2_{\mathcal{P}_\alpha[h]} \circ \mathcal{P}_\alpha,
\end{aligned}$$
where we apply the Fourier Slice Photography Theorem (Equation .) to derive the first and last lines, the Convolution Theorem (Equations (a.5) and (a.6)) for the second and fourth lines, and the lemma (Equation (a.4)) for the third line.
A.3 Photograph of a Four-Dimensional Sinc Light Field
This appendix derives Equation ., which states that a photograph from a 4d sinc light field is a 2d sinc function. The first step is to apply the Fourier Slice Photographic Imaging Theorem to move the derivation into the Fourier domain:
$$\begin{aligned}
\mathcal{P}_\alpha&\!\left[ \frac{1}{(\Delta x\, \Delta u)^2}\, \operatorname{sinc}\!\left(\frac{x}{\Delta x}, \frac{y}{\Delta x}, \frac{u}{\Delta u}, \frac{v}{\Delta u}\right) \right] \\
&=\; \frac{1}{(\Delta x\, \Delta u)^2} \left(\mathcal{F}^{-2} \circ \mathcal{P}^F_\alpha \circ \mathcal{F}^4\right)\!\left[\operatorname{sinc}\!\left(\frac{x}{\Delta x}, \frac{y}{\Delta x}, \frac{u}{\Delta u}, \frac{v}{\Delta u}\right)\right] \\
&=\; \left(\mathcal{F}^{-2} \circ \mathcal{P}^F_\alpha\right)\!\left[\Pi\!\left(k_x \Delta x,\; k_y \Delta x,\; k_u \Delta u,\; k_v \Delta u\right)\right],
\end{aligned}$$
where $\Pi$ denotes the rect function. Now we apply the definition of the Fourier photography operator $\mathcal{P}^F_\alpha$ (Equation .) to arrive at
$$\mathcal{P}_\alpha\!\left[ \frac{1}{(\Delta x\, \Delta u)^2}\, \operatorname{sinc}\!\left(\frac{x}{\Delta x}, \frac{y}{\Delta x}, \frac{u}{\Delta u}, \frac{v}{\Delta u}\right) \right] = \mathcal{F}^{-2}\!\left[\Pi\!\left(\alpha k_x \Delta x,\; \alpha k_y \Delta x,\; (1-\alpha) k_x \Delta u,\; (1-\alpha) k_y \Delta u\right)\right].$$
a.. photograph of a four-dimensional sinc light field
Note that the d rect function now depends only on k x and k y , not k u or k v . Since the product
of two dilated rect functions is equal to the smaller rect function,
Pα 1/(ΔxΔu)2 · sinc( x/Δx, y/Δx, u/Δu, v/Δu)
= F −2 (k x Dx , k y Dx )
where Dx = max(αΔx, |1 − α|Δu).
Applying the inverse d Fourier transform completes the proof:
Pα 1/(ΔxΔu)2 · sinc( x/Δx, y/Δx, u/Δu, v/Δu)
= 1/Dx2 · sinc(k x /Dx , k y /Dx ).
Adams, A. . The Camera. Bulfinch Press.
Adelson, E. H., and Bergen, J. R. . The plenoptic function and the elements of early
vision. In Computational Models of Visual Processing, edited by Michael S. Landy and J.
Anthony Movshon. Cambridge, Mass.: mit Press.
Adelson, E. H., and Wang, J. Y. A. . Single lens stereo with a plenoptic camera. ieee
Transactions on Pattern Analysis and Machine Intelligence ,  (Feb), –.
Agarwala, A., Dontcheva, M., Agrawala, M., Drucker, S., Colburn, A., Curless, B.,
Salesin, D., and Cohen, M. . Interactive digital photomontage. acm Transactions
on Graphics (Proceedings of siggraph ) , , –.
Arai, J., Hoshino, H., Okui, M., and Okano, F. . Effects of focusing on the resolution
characteristics of integral photography. J. Opt. Soc. Am. A ,  (June ), –.
Askey, P., . Digital cameras timeline: . Digital Photography Review, http://www.
Babcock, H. W. . The possibility of compensating astronomical seeing. Publications of
the Astronomical Society of the Pacific , , –.
Barlow, H. B. . The size of ommatidia in apposition eyes. Journal of Experimental
Biology , , –.
Barnard, K., Cardei, V., and Funt, B. . A comparison of computational color constancy algorithms – part I: Methodology and experiments with synthesized data. ieee
Transactions on Image Processing , , –.
Bolles, R., Baker, H., and Marimont, D. . Epipolar-plane image analysis: An approach to determining structure from motion. International Journal of Computer Vision ,
, –.
Boykov, Y., Veksler, O., and Zabih, R. . Fast approximate energy minimization via
graph cuts. ieee Transactions on Pattern Analysis and Machine Intelligence , , –
Bracewell, R. N., Chang, K.-Y., Jha, A. K., and Wang, Y. H. . Affine theorem for two-dimensional Fourier transform. Electronics Letters , –.
Bracewell, R. N. . Strip integration in radio astronomy. Aust. J. Phys. , –.
Bracewell, R. N. . The Fourier Transform and Its Applications, nd Edition Revised.
New York: McGraw-Hill.
Brixner, B. . Automatic lens design for nonexperts. Applied Optics , ,  –.
Buehler, C., Bosse, M., McMillan, L., Gortler, S., and Cohen, M. . Unstructured lumigraph rendering. In siggraph ’: Proceedings of the th annual conference on
Computer graphics and interactive techniques, –.
Camahort, E., Lerios, A., and Fussell, D. . Uniformly sampled light fields. In Proceedings of Eurographics Rendering Workshop , –.
Camahort, E. . d Light Field Modeling and Rendering. PhD thesis, The University of
Texas at Austin.
Catrysse, P. B., and Wandell, B. A. . Optical efficiency of image sensor pixels. J. Opt.
Soc. Am. A , , –.
Chai, J., Tong, X., Chan, S.-C., and Shum, H.-Y. . Plenoptic sampling. In siggraph
, –.
Chen, S. E., and Williams, L. . View interpolation for image synthesis. Computer
Graphics , –.
Chen, W., Bouguet, J., Chu, M., and Grzeszczuk, R. . Light field mapping: Efficient
representation and hardware rendering of surface light fields. In siggraph , –.
Cook, R. L., Porter, T., and Carpenter, L. . Distributed ray tracing. In siggraph
’, –.
Daly, D. . Microlens Arrays. New York: Taylor & Francis Inc.
Davies, N., McCormick, M., and Yang, L. . 3d imaging systems: a new development. Appl. Opt. , –.
Deans, S. R. . The Radon Transform and Some of Its Applications. New York: Wiley-Interscience.
Debevec, P. E., and Malik, J. . Recovering high dynamic range radiance maps from
photographs. In siggraph , –.
Dickerson, J., and Lepp, G. . Canon Lenses. Sterling Pub. Co., Inc.
Dowski, E. R., and Johnson, G. E. . Wavefront coding: A modern method of achieving
high performance and/or low cost imaging systems. In spie Proceedings, vol. .
Duparré, J., Dannberg, P., Schreiber, P., and Bräuer, A. . Micro-optically fabricated
artificial apposition compound eye. In Electronic Imaging – Science and Technology, Proc.
spie .
Durand, F., Holzschuch, N., Soler, C., Chan, E., and Sillion, F. . A frequency
analysis of light transport. acm Transactions on Graphics (Proceedings of siggraph )
, , –.
Eltoukhy, H., and Kavusi, S. . A computationally efficient algorithm for multi-focus
image reconstruction. In Proceedings of spie Electronic Imaging.
Forsyth, D. A., and Ponce, J. . Computer Vision: A Modern Approach. Upper Saddle
River, N.J.: Prentice Hall.
Georgiev, T., Zheng, C., Curless, B., Salesin, D., Nayar, S., and Intwala, C. .
Spatio-angular resolution tradeoff in integral photography. To appear in Proceedings of the
Eurographics Symposium on Rendering .
Gerrard, A., and Burch, J. M. . Introduction to Matrix Methods in Optics. New York:
Gershun, A. . The light field. Moscow, . Journal of Mathematics and Physics xviii,
–. Translated by P. Moon and G. Timoshenko.
Goodman, J. W. . Statistical Optics. New York: Wiley-Interscience.
Goodman, J. W. . Introduction to Fourier Optics, nd edition. New York: McGraw-Hill.
Gordon, N. T., Jones, C. L., and Purdy, D. R. . Application of microlenses to infrared
detector arrays. Infrared Physics , , –.
Gortler, S. J., Grzeszczuk, R., Szeliski, R., and Cohen, M. F. . The Lumigraph. In
siggraph , –.
Haeberli, P., . A multifocus method for controlling depth of field. Graphics Obscura,
url: http://www.sgi.com/grafica/depth.
Halle, M., . Holographic stereograms as discrete imaging systems.
Hardy, J. W., Lefebvre, J. E., and Koliopoulos, C. K. . Real-time atmospheric compensation. J. Opt. Soc. Am. ,  (March ), –.
Haro, L. D. S. . Wavefront Sensing in the Human Eye with a Shack-Hartmann Sensor.
PhD thesis, University of London.
Hecht, E. . Optics, th edition. Reading, Mass.: Addison-Wesley.
Isaksen, A., McMillan, L., and Gortler, S. J. . Dynamically reparameterized light
fields. In siggraph , –.
Ishihara, Y., and Tanigaki, K. . A high photosensitivity il-ccd image sensor with
monolithic resin lens array. In Proceedings of ieee iedm, –.
Ives, H. E. . Parallax panoramagrams made with a large diameter lens. J. Opt. Soc. Amer.
, –.
Ives, H. E. . Optical properties of a Lippmann lenticulated sheet. J. Opt. Soc. Amer. ,
Jackson, J. I., Meyer, C. H., Nishimura, D. G., and Macovski, A. . Selection of
convolution function for Fourier inversion using gridding. ieee Transactions on Medical
Imaging ,  (September), –.
Javidi, B., and Okano, F., Eds. . Three-Dimensional Television, Video and Display Technologies. New York: Springer-Verlag.
Johnson, R. B. . A historical perspective on understanding optical aberrations. In Lens
design : proceedings of a conference held - January , Los Angeles, California, Warren
J. Smith, editor, Bellingham, Wash., USA: spie Optical Engineering Press, –.
Kajiya, J. . The rendering equation. In siggraph , –.
Keelan, B. W. . Handbook of Image Quality : Characterization and Prediction. New
York: Marcel Dekker.
Kingslake, R. . Development of the photographic objective. J. Opt. Soc. Am. , –.
Kingslake, R. . Lens Design Fundamentals. New York: Academic Press.
Kingslake, R. . A History of the Photographic Lens. Boston: Academic Press.
Kirschfeld, K. . The resolution of lens and compound eyes. In Neural Principles in
Vision edited by F. Zettler and R. Weiler. New York: Springer, –.
Kolb, C., Mitchell, D., and Hanrahan, P. . A realistic camera model for computer
graphics. In siggraph , –.
Krotkov, E. . Focusing. International Journal of Computer Vision , , –.
Kubota, A., and Aizawa, K. . Reconstructing arbitrarily focused images from two
differently focused images using linear filters. ieee Transactions on Image Processing , 
(Nov ), –.
Land, M. F., and Nilsson, D.-E. . Animal Eyes. New York: Oxford University Press.
Levoy, M., and Hanrahan, P. . Light field rendering. In siggraph , –.
Levoy, M., Chen, B., Vaish, V., Horowitz, M., McDowall, I., and Bolas, M. . Synthetic aperture confocal imaging. acm Transactions on Graphics (Proceedings of siggraph
) , , –.
Levoy, M., Ng, R., Adams, A., Footer, M., and Horowitz, M. . Light field microscopy. acm Transactions on Graphics (Proceedings of siggraph ).
Levoy, M. . Volume rendering using the fourier projection-slice theorem. In Proceedings
of Graphics Interface ’, –.
Liang, Z.-P., and Munson, D. C. . Partial Radon transforms. ieee Transactions on
Image Processing ,  (Oct ), –.
Liang, J., Grimm, B., Goelz, S., and Bille, J. F. . Objective measurement of wave aberrations of the human eye with use of a Hartmann-Shack wave-front sensor. J. Opt. Soc. Am. A ,  (July ), –.
Lippmann, G. . La photographie intégrale. Comptes-Rendus, Académie des Sciences ,
Macovski, A. . Medical Imaging Systems. Englewood Cliffs, N.J.: Prentice Hall.
Maeda, P., Catrysse, P., and Wandell, B. . Integrating lens design with digital camera
simulation. In Proceedings of the spie Electronic Imaging ’ Conference, Santa Clara, CA.
Malzbender, T. . Fourier volume rendering. acm Transactions on Graphics ,  (July),
Mann, A., Ed. . Selected Papers on Zoom Lenses, vol. ms. spie Optical Engineering
Matusik, W., Pfister, H., Ngan, A., Beardsley, P., Ziegler, R., and McMillan, L. .
Image-based d photography using opacity hulls. In Proceedings of siggraph , –
Maxwell, J. C. . On the general laws of optical instruments. The Quarterly Journal of
Pure and Applied Mathematics , –.
McMillan, L., and Bishop, G. . Plenoptic modeling: an image-based rendering system.
In Proceedings of siggraph , –.
Micron, . Micron Technology, Inc. demonstrates leading imaging technology with industry’s first .-micron pixel cmos sensor. http://www.micron.com/news/product/-_first_-_pixel-cmos_sensor.html.
Miller, G., Rubin, S., and Poncelon, D. . Lazy decompression of surface light fields
for precomputed global illumination. In Proceedings of Eurographics Rendering Workshop
, –.
Mitchell, D. P., and Netravali, A. N. . Reconstruction filters in computer graphics.
In siggraph , –.
Moon, P., and Spencer, D. E. . The Photic Field. Cambridge, Mass.: mit Press.
Naemura, T., Yoshida, T., and Harashima, H. . 3-d computer graphics based on integral photography. Optics Express , , –.
Nayar, S. K., and Mitsunaga, T. . High dynamic range imaging: Spatially varying
pixel exposures. In ieee Conference on Computer Vision and Pattern Recognition (cvpr),
vol. , –.
Nayar, S. K., Watanabe, M., and Noguchi, M. . Real-Time Focus Range Sensor. In
ieee International Conference on Computer Vision (iccv), –.
Newhall, B. . The Daguerreotype in America, 3rd rev. ed. New York: Dover.
Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., and Hanrahan, P. . Light
field photography with a hand-held plenoptic camera. Tech. Rep. ctsr -, Stanford
University Computer Science.
Ng, R. . Fourier slice photography. acm Transactions on Graphics (Proceedings of siggraph ) , , –.
Nishimura, D. G. . Principles of Magnetic Resonance Imaging. Department of Electrical
Engineering, Stanford University.
Ogata, S., Ishida, J., and Sasano, T. . Optical sensor array in an artificial compound eye. Optical Engineering , , –.
Ogden, J. M., Adelson, E. H., Bergen, J. R., and Burt, P. J. . Pyramid-based computer
graphics. RCA Engineer , , –.
Okano, F., Arai, J., Hoshino, H., and Yuyama, I. . Three-dimensional video system
based on integral photography. Optical Engineering ,  (June ), –.
Okoshi, T. . Three-Dimensional Imaging Techniques. New York: Academic Press.
Park, S. K., Schowengerdt, R., and Kaczynski, M.-A. . Modulation-transfer-function analysis for sampled image systems. Applied Optics , .
Pentland, A., Scherock, S., Darrell, T., and Girod, B. . Simple range cameras based
on focal error. J. Opt. Soc. Am. A ,  (Nov), –.
Pentland, A. P. . A new sense for depth of field. ieee Transactions on Pattern Analysis
and Machine Intelligence , , –.
Pérez, P., Gangnet, M., and Blake, A. . Poisson image editing. acm Transactions on
Graphics (Proceedings of siggraph ) , , –.
Pharr, M., and Humphreys, G. . Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann.
Platt, B., and Shack, R. . History and principles of Shack-Hartmann wavefront sensing. Journal of Refractive Surgery,  (Sept/Oct ).
Potmesil, M., and Chakravarty, I. . A lens and aperture camera model for synthetic
image generation. In siggraph ’: Proceedings of the th annual conference on Computer
graphics and interactive techniques, –.
Preisendorfer, R. W. . Radiative Transfer on Discrete Spaces. Oxford: Pergamon Press.
Ramanath, R., Snyder, W. E., Bilbro, G. L., and Sander, W. A. . Demosaicking
methods for Bayer color arrays. Journal of Electronic Imaging , , –.
Roorda, A., and Williams, D. R. . The arrangement of the three cone classes in the
living human eye. Nature ,  ( Feb ), –.
Shack, R. V., and Platt, B. C. . Production and use of a lenticular Hartmann screen (abstract). J. Opt. Soc. Am. , .
Shade, J., Gortler, S., He, L.-W., and Szeliski, R. . Layered depth images. In siggraph ’: Proceedings of the th annual conference on Computer graphics and interactive
techniques, –.
Shum, H.-Y., and Kang, S. B. . A review of image-based rendering techniques. In
ieee/spie Visual Communications and Image Processing (vcip) , –.
Smith, W. J. . Modern Optical Engineering, rd edition. New York: McGraw-Hill.
Smith, W. J. . Modern Lens Design, nd edition. New York: McGraw-Hill.
Stewart, J., Yu, J., Gortler, S. J., and McMillan, L. . A new reconstruction filter
for undersampled light fields. In Proceedings of the Eurographics Symposium on Rendering
,  – .
Stroebel, L., Compton, J., Current, I., and Zakia, R. . Photographic Materials and
Processes. Boston: Focal Press.
Subbarao, M., Wei, T.-C., and Surya, G. . Focused image recovery from two defocused
images recorded with different camera settings. ieee Transactions on Image Processing ,
 (Dec ), –.
Subbarao, M. . Parallel depth recovery by changing camera parameters. In ieee International Conference on Computer Vision (iccv), –.
Takahashi, K., Kubota, A., and Naemura, T. . All in-focus view synthesis from
under-sampled light fields. In Proc. VRSJ Intl. Conf. on Artificial Reality and Telexistence
Tanida, J., Kumagai, T., Yamada, K., Miyatake, S., Ishida, K., Morimoto, T., Kondou, N.,
Miyazaki, D., and Ichioka, Y. . Thin observation module by bound optics (tombo):
concept and experimental verification. Applied Optics ,  (April), –.
Tanida, J., Shogenji, R., Kitamura, Y., Yamada, K., Miyamoto, M., and Miyatake, S.
. Color imaging with an integrated compound imaging system. Optics Express , 
(September ), –.
Tronnier, E., and Eggert, J., . Three lens photographic objective. United States Patent
Tyson, R. K. . Principles of Adaptive Optics. Boston: Academic Press.
Vaish, V., Wilburn, B., Joshi, N., and Levoy, M. . Using plane + parallax for calibrating
dense camera arrays. In Proceedings of cvpr, vol. , –.
Vaish, V., Garg, G., Talvala, E.-V., Antunez, E., Wilburn, B., Horowitz, M., and Levoy,
M. . Synthetic aperture focusing using a shear-warp factorization of the viewing
transform. In Workshop on Advanced d Imaging for Safety and Security (in conjunction
with cvpr ).
Wilburn, B., Joshi, N., Vaish, V., Talvala, E.-V., Antunez, E., Barth, A., Adams, A.,
Levoy, M., and Horowitz, M. . High performance imaging using large camera
arrays. –. acm Transactions on Graphics (Proceedings of siggraph ).
Williams, C. S. . Introduction to the Optical Transfer Function. New York: Wiley.
Wood, D., Azuma, D., Aldinger, K., Curless, B., Duchamp, T., Salesin, D., and Stuetzle, W. . Surface light fields for 3d photography. In siggraph , –.
Yamada, T., Ikeda, K., Kim, Y.-G., Wakoh, H., Toma, T., Sakamoto, T., Ogawa, K.,
Okamoto, E., Masukane, K., Oda, K., and Inuiya, M. . A progressive scan ccd
image sensor for dsc applications. ieee Journal of Solid-State Circuits , , –.
Yamamoto, T., and Naemura, T. . Real-time capturing and interactive synthesis of d
scenes using integral photography. In Stereoscopic Displays and Virtual Reality Systems xi,
Proceedings of spie, –.
Yang, J., Everett, M., Buehler, C., and McMillan, L. . A realtime distributed light
field camera. In Proceedings of Eurographics Workshop on Rendering , –.
Zemax. . Zebase, Optical Design Database, Version .. San Diego: Zemax Development