Revised Edition: 2016
ISBN 978-1-280-29381-8
© All rights reserved.
Published by:
Learning Press
48 West 48 Street, Suite 1116,
New York, NY 10036, United States
Email: [email protected]

Table of Contents
Chapter 1 - Angle of View
Chapter 2 - Aperture
Chapter 3 - Circle of Confusion
Chapter 4 - Color Temperature and Color Balance
Chapter 5 - Depth of Field
Chapter 6 - Exposure
Chapter 7 - Exposure Value
Chapter 8 - F-number
Chapter 9 - Pinhole Camera
Chapter 10 - Science of Photography
Chapter-1
Angle of View
A camera's angle of view can be measured horizontally, vertically, or diagonally.
In photography, angle of view describes the angular extent of a given scene that is
imaged by a camera. It is used interchangeably with the more general term field of view.
It is important to distinguish the angle of view from the angle of coverage, which
describes the angle of projection by the lens onto the focal plane. For most cameras, it
may be assumed that the image circle produced by the lens is large enough to cover the
film or sensor completely. If the angle of view exceeds the angle of coverage, however,
then vignetting will be present in the resulting photograph.
Calculating a camera's angle of view
In 1916, Northey showed how to calculate the angle of view using ordinary carpenter's
tools. The angle that he labels as the angle of view is the half-angle or "the angle that a
straight line would take from the extreme outside of the field of view to the center of the
lens;" he notes that manufacturers of lenses use twice this angle.
For lenses projecting rectilinear (non-spatially-distorted) images of distant objects, the
effective focal length and the image format dimensions completely define the angle of
view. Calculations for lenses producing non-rectilinear images are much more complex
and in the end not very useful in most practical applications.
Angle of view may be measured horizontally (from the left to right edge of the frame),
vertically (from the top to bottom of the frame), or diagonally (from one corner of the
frame to its opposite corner).
For a lens projecting a rectilinear image, the angle of view (α) can be calculated from the
chosen dimension (d) and effective focal length (f) as follows:

α = 2 arctan(d / (2f))

d represents the size of the film (or sensor) in the direction measured. For example, for
film that is 36 mm wide, d = 36 mm would be used to obtain the horizontal angle of view.

Because this is a trigonometric function, the angle of view does not vary quite linearly
with the reciprocal of the focal length. However, except for wide-angle lenses, it is
reasonable to approximate α ≈ d/f radians, or α ≈ 180d/(πf) degrees.
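As a quick check of this relationship, here is a minimal Python sketch (the function name is ours, purely illustrative) that evaluates α = 2 arctan(d/(2f)) in degrees:

```python
import math

def angle_of_view(d_mm, f_mm):
    """Angle of view, in degrees, for frame dimension d and effective focal length f."""
    return math.degrees(2 * math.atan(d_mm / (2 * f_mm)))

# Horizontal angle of view of a 50 mm lens on a 36 mm-wide frame:
print(round(angle_of_view(36, 50), 1))  # 39.6
```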
The effective focal length is nearly equal to the stated focal length of the lens (F), except
in macro photography, where the lens-to-object distance is comparable to the focal length.
In this case, the magnification factor (m) must be taken into account:

f = F (1 + m)

(In photography m is usually defined to be positive, despite the inverted image.) For
example, with a magnification ratio of 1:2 (m = 0.5), we find f = 1.5F, and thus the angle of
view is reduced by about 33% compared to focusing on a distant object with the same lens.
A second effect which comes into play in macro photography is lens asymmetry (an
asymmetric lens is a lens where the aperture appears to have different dimensions when
viewed from the front and from the back). The lens asymmetry causes an offset between
the nodal plane and pupil positions. The effect can be quantified using the ratio (P)
between the apparent exit pupil diameter and the entrance pupil diameter. The full formula for
angle of view now becomes:

α = 2 arctan(d / (2F (1 + m/P)))
Angle of view can also be determined using FOV tables or paper or software lens
calculators.
Example
Consider a 35 mm camera with a normal lens having a focal length of F = 50 mm. The
dimensions of the 35 mm image format are 24 mm (vertical) × 36 mm (horizontal),
giving a diagonal of about 43.3 mm.

At infinity focus, f = F, and the angles of view are:

• horizontally, α = 2 arctan(36 / (2×50)) ≈ 39.6°
• vertically, α = 2 arctan(24 / (2×50)) ≈ 27.0°
• diagonally, α = 2 arctan(43.3 / (2×50)) ≈ 46.8°
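These three values can be reproduced with the small sketch given earlier:

```python
import math

def angle_of_view(d_mm, f_mm):  # same helper as in the sketch above
    return math.degrees(2 * math.atan(d_mm / (2 * f_mm)))

for d in (36, 24, 43.3):  # horizontal, vertical, diagonal dimensions in mm
    print(round(angle_of_view(d, 50), 1))  # 39.6, 27.0, 46.8
```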
Derivation of the angle-of-view formula
Consider a rectilinear lens in a camera used to photograph an object at a distance S1, and
forming an image that just barely fits in the dimension, d, of the frame (the film or image
sensor). Treat the lens as if it were a pinhole at distance S2 from the image plane
(technically, the center of perspective of a rectilinear lens is at the center of its entrance
pupil):
Now α / 2 is the angle between the optical axis of the lens and the ray joining its optical
center to the edge of the film. Here α is defined to be the angle-of-view, since it is the
angle enclosing the largest object whose image can fit on the film. We want to find the
relationship between:
• the angle α
• the "opposite" side of the right triangle, d / 2 (half the film-format dimension)
• the "adjacent" side, S2 (distance from the lens to the image plane)

Using basic trigonometry, we find:

tan(α/2) = (d/2) / S2

which we can solve for α, giving:

α = 2 arctan(d / (2 S2))

To project a sharp image of distant objects, S2 needs to be equal to the focal length, F,
which is attained by setting the lens for infinity focus. Then the angle of view is given by:

α = 2 arctan(d / (2f)), where f = F
Macro photography
For macro photography, we cannot neglect the difference between S2 and F. From the
thin lens formula,

1/F = 1/S1 + 1/S2.

We substitute for the magnification, m = S2 / S1, and with some algebra find:

S2 = F (1 + m)

Defining f = S2 as the "effective focal length", we get the formula presented above:

α = 2 arctan(d / (2f)), where f = F (1 + m).
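A quick numeric check (the values are arbitrary) that S2 = F(1 + m) is consistent with the thin lens formula:

```python
F, m = 50.0, 0.5        # focal length in mm, magnification
S2 = F * (1 + m)        # predicted image distance: 75.0 mm
S1 = S2 / m             # object distance, since m = S2/S1: 150.0 mm
# The thin lens formula 1/F = 1/S1 + 1/S2 should hold exactly:
assert abs(1 / F - (1 / S1 + 1 / S2)) < 1e-12
```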
As noted above, lens asymmetry (a ratio P between the apparent exit pupil diameter and
the entrance pupil diameter different from unity) offsets the nodal plane and pupil
positions, and the full formula for angle of view becomes:

α = 2 arctan(d / (2F (1 + m/P)))
Measuring a camera's field of view
Schematic of collimator-based optical apparatus used in measuring the FOV of a camera.
In the optical instrumentation industry the term field of view (FOV) is most often used,
though the measurements are still expressed as angles. Optical tests are commonly used
for measuring the FOV of UV, visible, and infrared (wavelengths about 0.1–20 µm in the
electromagnetic spectrum) sensors and cameras.
The purpose of this test is to measure the horizontal and vertical FOV of a lens and
sensor used in an imaging system, when the lens focal length or sensor size is not known
(that is, when the calculation above is not immediately applicable). Although this is one
typical method that the optics industry uses to measure the FOV, there exist many other
possible methods.
UV/visible light from an integrating sphere (and/or other source such as a black body) is
focused onto a square test target at the focal plane of a collimator (the mirrors in the
diagram), such that a virtual image of the test target will be seen infinitely far away by
the camera under test. The camera under test senses a real image of the virtual image of
the target, and the sensed image is displayed on a monitor.
Monitor display of sensed image from the camera under test
The sensed image, which includes the target, is displayed on a monitor, where it can be
measured. Dimensions of the full image display and of the portion of the image that is the
target are determined by inspection (measurements are typically in pixels, but can just as
well be inches or cm).
D = dimension of full image
d = dimension of image of target
The collimator's distant virtual image of the target subtends a certain angle, referred to as
the angular extent of the target, that depends on the collimator focal length and the target
size. Assuming the sensed image includes the whole target, the angle seen by the camera,
its FOV, is this angular extent of the target times the ratio of full image size to target
image size.
The target's angular extent is:

α = 2 arctan(L / (2 fc))

where L is the dimension of the target and fc is the focal length of the collimator.

The total field of view is then approximately:

FOV ≈ α (D / d)

or more precisely, if the imaging system is rectilinear:

FOV = 2 arctan(L D / (2 fc d))
This calculation could be a horizontal or a vertical FOV, depending on how the target and
image are measured.
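A minimal sketch of this measurement calculation, using the rectilinear form (all the numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

def fov_from_collimator(L, fc, D, d):
    """FOV in degrees for a rectilinear imaging system.

    L: target dimension, fc: collimator focal length (same units as L),
    D: full-image dimension, d: target-image dimension (same units as D).
    """
    return math.degrees(2 * math.atan((L * D) / (2 * fc * d)))

# 25 mm target, 500 mm collimator; target spans 120 of 1200 pixels:
print(round(fov_from_collimator(25, 500, 1200, 120), 1))  # 28.1
```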
Lens types and effects
Focal length
How focal length affects perspective: varying focal lengths at identical field size,
achieved by different camera–subject distances. Note that as the focal length shortens
and the angle of view widens, perspective distortion and size differences increase.
Lenses are often referred to by terms that express their angle of view:

• Ultra wide-angle lenses (less than 24 mm of focal length in 35 mm film format),
  also known as fisheye lenses if not rectilinear, cover up to 180° (or even wider in
  special cases)
  o A circular fisheye lens (as opposed to a full-frame fisheye) is an example
    of a lens where the angle of coverage is less than the angle of view. The
    image projected onto the film is circular because the diameter of the projected
    image is narrower than that needed to cover the widest portion of the film.
• Wide-angle lenses (24–35 mm) cover between 84° and 64°
• Normal, or standard, lenses (36–60 mm) cover between 62° and 40°
• Telephoto lenses generally cover between 30° and 10°
• Super-telephoto lenses generally cover from 8° down to less than 1°
Zoom lenses are a special case wherein the focal length, and hence angle of view, of the
lens can be altered mechanically without removing the lens from the camera.
Characteristics
Longer lenses magnify the subject more, apparently compressing distance and (when
focused on the foreground) blurring the background because of their shallower depth of
field. Wider lenses tend to magnify distance between objects while allowing greater
depth of field.
Another result of using a wide angle lens is a greater apparent perspective distortion
when the camera is not aligned perpendicularly to the subject: parallel lines converge at
the same rate as with a normal lens, but converge more due to the wider total field. For
example, buildings appear to be falling backwards much more severely when the camera
is pointed upward from ground level than they would if photographed with a normal lens
at the same distance from the subject, because more of the subject building is visible in
the wide-angle shot.
Because different lenses generally require a different camera–subject distance to preserve
the size of a subject, changing the angle of view can indirectly distort perspective,
changing the apparent relative size of the subject and foreground.
Examples
An example of how lens choice affects angle of view. The photos below were taken with a
35 mm still camera at a constant distance from the subject:
28 mm lens, 65.5° × 46.4°
50 mm lens, 39.6° × 27.0°
70 mm lens, 28.9° × 19.5°
210 mm lens, 9.8° × 6.5°
Common lens angles of view
This table shows the diagonal, horizontal, and vertical angles of view, in degrees, for
lenses producing rectilinear images, when used with 36 mm × 24 mm format (that is, 135
film or full-frame 35mm digital using width 36 mm, height 24 mm, and diagonal
43.3 mm for d in the formula above). Digital compact cameras state their focal lengths in
35mm equivalents, which can be used in this table.
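The rows of such a table follow directly from the formula given earlier; here is a small sketch that generates them for a few illustrative focal lengths:

```python
import math

def aov(d, f):
    """Angle of view in degrees for frame dimension d and focal length f (mm)."""
    return math.degrees(2 * math.atan(d / (2 * f)))

print("focal    diag  horiz   vert")
for f in (13, 24, 35, 50, 85, 135, 300):
    print(f"{f:4} mm {aov(43.3, f):6.1f} {aov(36, f):6.1f} {aov(24, f):6.1f}")
```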
Five images using 24, 28, 35, 50 and 72mm equivalent zoom lengths, portrait format, to
illustrate angles of view
Five images using 24, 28, 35, 50 and 72mm equivalent step zoom function, to illustrate
angles of view
Three-dimensional digital art
Displaying 3D graphics as a 3D projection of the models onto a 2D surface uses a series of
mathematical calculations to render the scene. The angle of view of the scene is thus
readily set and changed; some renderers even express the angle of view as the focal
length of an imaginary lens. The angle of view can also be projected onto the surface at
an angle greater than 90°, effectively creating a fisheye-lens effect.
Cinematography and video gaming
Modifying the angle of view over time, or zooming, is a frequently used cinematic
technique.
For a visual effect, some first-person video games (especially racing games) widen the
angle of view beyond 90° to exaggerate the distance the player is travelling, thus
exaggerating the player's perceived speed and giving a tunnel effect (like pincushion
distortion). Narrowing the view angle gives a zoom-in effect.
Chapter-2
Aperture
A large (1) and a small (2) aperture
Aperture mechanism of Canon 50mm f/1.8 II lens
Definitions of Aperture in the 1707 Glossographia Anglicana Nova
In optics, an aperture is a hole or an opening through which light travels. More
specifically, the aperture of an optical system is the opening that determines the cone
angle of a bundle of rays that come to a focus in the image plane. The aperture determines how collimated the admitted rays are, which is of great importance for the
appearance at the image plane. If an aperture is narrow, then highly collimated rays are
admitted, resulting in a sharp focus at the image plane. If an aperture is wide, then
uncollimated rays are admitted, resulting in a sharp focus only for rays coming from
objects at or near the focused distance. This means that a wide aperture results in an
image that is sharp around what the lens is focusing on and blurred otherwise. The aperture also determines how
many of the incoming rays are actually admitted and thus how much light reaches the
image plane (the narrower the aperture, the darker the image for a given exposure time).
An optical system typically has many openings, or structures that limit the ray bundles
(ray bundles are also known as pencils of light). These structures may be the edge of a
lens or mirror, or a ring or other fixture that holds an optical element in place, or may be
a special element such as a diaphragm placed in the optical path to limit the light
admitted by the system. In general, these structures are called stops, and the aperture
stop is the stop that determines the ray cone angle, or equivalently the brightness, at an
image point.
In some contexts, especially in photography and astronomy, aperture refers to the
diameter of the aperture stop rather than the physical stop or the opening itself. For
example, in a telescope the aperture stop is typically the edges of the objective lens or
mirror (or of the mount that holds it). One then speaks of a telescope as having, for
example, a 100 centimeter aperture. Note that the aperture stop is not necessarily the
smallest stop in the system. Magnification and demagnification by lenses and other
elements can cause a relatively large stop to be the aperture stop for the system.
Sometimes stops and diaphragms are called apertures, even when they are not the
aperture stop of the system.
The word aperture is also used in other contexts to indicate a system which blocks off
light outside a certain region. In astronomy for example, a photometric aperture around a
star usually corresponds to a circular window around the image of a star within which the
light intensity is summed.
Application
The aperture stop is an important element in most optical designs. Its most obvious
feature is that it limits the amount of light that can reach the image/film plane. This can
either be undesired, as in a telescope where one wants to collect as much light as
possible; or deliberate, to prevent saturation of a detector or overexposure of film. In both
cases, the size of the aperture stop is constrained by things other than the amount of light
admitted; however:
• The size of the stop is one factor that affects depth of field. Smaller stops (larger
  f-numbers) produce a longer depth of field, allowing objects at a wide range of
  distances to all be in focus at the same time.
• The stop limits the effect of optical aberrations. If the stop is too large, the image
  will be distorted. More sophisticated optical system designs can mitigate the
  effect of aberrations, allowing a larger stop and therefore greater light-collecting
  ability.
• The stop determines whether the image will be vignetted. Larger stops can cause
  the intensity reaching the film or detector to fall off toward the edges of the
  picture, especially when, for off-axis points, a different stop becomes the aperture
  stop by virtue of cutting off more light than did the stop that was the aperture stop
  on the optic axis.
• A larger aperture stop requires larger diameter optics, which are heavier and more
  expensive.
In addition to an aperture stop, a photographic lens may have one or more field stops,
which limit the system's field of view. When the field of view is limited by a field stop in
the lens (rather than at the film or sensor) vignetting results; this is only a problem if the
resulting field of view is less than was desired.
The pupil of the eye is its aperture; the iris is the diaphragm that serves as the aperture
stop. Refraction in the cornea causes the effective aperture (the entrance pupil) to differ
slightly from the physical pupil diameter. The entrance pupil is typically about 4 mm in
diameter, although it can range from 2 mm (f/8.3) in a brightly lit place to 8 mm (f/2.1)
in the dark.
In astronomy, the diameter of the aperture stop (called the aperture) is a critical
parameter in the design of a telescope. Generally, one would want the aperture to be as
large as possible, to collect the maximum amount of light from the distant objects being
imaged. The size of the aperture is limited, however, in practice by considerations of cost
and weight, as well as prevention of aberrations (as mentioned above).
In photography
The aperture stop of a photographic lens can be adjusted to control the amount of light
reaching the film or image sensor. In combination with variation of shutter speed, the
aperture size will regulate the film's or image sensor's degree of exposure to light.
Typically, a fast shutter speed will require a larger aperture to ensure sufficient light
exposure, and a slow shutter speed will require a smaller aperture to avoid excessive
exposure.
Diagram of decreasing aperture sizes (increasing f-numbers) for "full stop" increments
(factor of two aperture area per stop)
A device called a diaphragm usually serves as the aperture stop, and controls the aperture.
The diaphragm functions much like the pupil of the eye – it controls the effective
diameter of the lens opening. Reducing the aperture size increases the depth of field,
which describes the extent to which subject matter lying closer than or farther from the
actual plane of focus appears to be in focus. In general, the smaller the aperture (the
larger the number), the greater the distance from the plane of focus the subject matter
may be while still appearing in focus.
The lens aperture is usually specified as an f-number, the ratio of focal length to effective
aperture diameter. A lens typically has a set of marked "f-stops" that the f-number can be
set to. A lower f-number denotes a greater aperture opening which allows more light to
reach the film or image sensor. The photography term "one f-stop" refers to a factor of √2
(approx. 1.41) change in f-number, which in turn corresponds to a factor of 2 change in
light intensity.
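A short illustrative sketch of the full-stop sequence and the corresponding halving of light per stop (the marked values 5.6, 11, and 22 are conventional roundings of powers of √2):

```python
import math

stops = [math.sqrt(2) ** i for i in range(9)]
print([round(N, 1) for N in stops])
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]

# Relative light admitted goes as 1/N^2, so each full stop halves it:
print([round(1 / N ** 2, 4) for N in stops])
```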
Aperture priority is a semi-automatic shooting mode used in cameras. It allows the
photographer to choose an aperture setting and lets the camera decide the shutter
speed and sometimes ISO sensitivity for the correct exposure. This is sometimes referred
to as Aperture Priority Auto Exposure, A mode, Av mode, or semi-auto mode.
Typical ranges of apertures used in photography are about f/2.8–f/22 or f/2–f/16,
covering 6 stops, which may be divided into wide, middle, and narrow of 2 stops each,
roughly (using round numbers) f/2–f/4, f/4–f/8, and f/8–f/16 or (for a slower lens) f/2.8–
f/5.6, f/5.6–f/11, and f/11–f/22. These are not sharp divisions, and ranges for specific
lenses vary.
Maximum and minimum apertures
The specifications for a given lens typically include the minimum and maximum
apertures, for example f/22 and f/1.4. In this case f/22 is the smallest, or minimum,
aperture opening, and f/1.4 is the widest, or maximum, aperture. The maximum aperture
tends to be of most interest, and is always included when describing a lens. This value is
also known as the lens speed, because it directly affects the exposure time. The aperture
is proportional to the square root of accepted light, and thus inversely proportional to the
square root of required exposure time, such that an aperture of f/2 allows for exposure
times one quarter that of f/4.
The aperture range of a 50mm "Minolta" lens, f/1.4-f/16
Lenses with maximum apertures wider than f/2.8 are typically known as "fast" lenses,
though this threshold has changed historically (in the past, wider than f/6 was considered
fast, for example by the 1911 Encyclopaedia Britannica). The fastest lenses in general
production are f/1.2 or f/1.4, with more at f/1.8 and f/2.0, and many at f/2.8 or slower;
f/1.0 is unusual, though it sees some use.
In exceptional circumstances lenses can have f-numbers below f/1.0. For instance, in
photography, both the current Leica Noctilux-M 50mm ASPH and a 1960s-era Canon
50mm rangefinder lens have a maximum aperture of f/0.95. Such lenses tend to be
optically exotic and very expensive; at launch, in September 2008, the Leica Noctilux
retailed for $11,000. Professional lenses for some movie cameras have f-numbers as low
as f/0.75. Stanley Kubrick's film Barry Lyndon has scenes shot with the largest relative
aperture in film history: f/0.7. Beyond the expense, these lenses have limited application
due to the correspondingly shallower depth of field – the scene must either be shallow,
shot from a distance, or will be significantly defocused, though this may be a desired
effect.
Zoom lenses typically have a maximum aperture (minimum f-number) of f/2.8 to f/6.3
through their range. High-end lenses will have a constant aperture, such as f/2.8 or f/4,
which means that the relative aperture will stay the same throughout the zoom range. A
more typical consumer zoom will have a variable relative aperture, since it is harder and
more expensive to keep the effective aperture proportional to focal length at long focal
lengths; f/3.5 to f/5.6 is an example of a common variable aperture range in a consumer
zoom lens.
By contrast, the minimum aperture does not depend on the focal length – it is limited by
how narrowly the aperture closes, not the lens design – and is instead generally chosen
based on practicality: very small apertures have lower sharpness due to diffraction, while
the added depth of field is not generally useful, and thus there is generally little benefit in
using such apertures. Accordingly, DSLR lenses typically have a minimum aperture of f/16,
f/22, or f/32, while large-format lenses may go down to f/64, as reflected in the name of Group
f/64. Depth of field is a significant concern in macro photography, however, and there
one sees smaller apertures. For example, the Canon MP-E 65mm can have effective
aperture (due to magnification) as small as f/96.
f/32 - narrow aperture and slow shutter speed
f/5.6 - wide aperture and fast shutter speed
Aperture area
The amount of light captured by a lens is proportional to the area of the aperture, which is
equal to:

Area = π (f / (2N))² = π f² / (4 N²)

where f is the focal length and N is the f-number.

The focal length value is not required when comparing two lenses of the same focal
length; a value of 1 can be used instead, and the other factors can be dropped as well,
leaving area proportional to the reciprocal square of the f-number N.
If two cameras of different format sizes and focal lengths have the same angle of view,
and the same aperture area, they gather the same amount of light from the scene. The
relative focal-plane illuminance, however, depends only on the f-number N, independent
of the focal length, so is less in the camera with the larger format, longer focal length, and
higher f-number. This assumes both lenses have identical transmissivity.
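A sketch of this equivalence between formats, using a hypothetical half-size format at half the focal length (same angle of view) and half the f-number (same aperture area):

```python
import math

def aperture_area(f_mm, N):
    """Area of the aperture opening, pi * (f / 2N)^2, in mm^2."""
    return math.pi * (f_mm / (2 * N)) ** 2

full_frame = aperture_area(50, 4.0)   # 50 mm lens at f/4
half_format = aperture_area(25, 2.0)  # 25 mm lens at f/2
print(round(full_frame, 1), round(half_format, 1))
# Equal areas, so both cameras gather the same light from the scene.
```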
Aperture control
Most SLR cameras provide automatic aperture control, which allows viewing and
metering at the lens’s maximum aperture, stops the lens down to the working aperture
during exposure, and returns the lens to maximum aperture after exposure.
The first SLR cameras with internal (“through-the-lens” or “TTL”) meters (e.g., the
Pentax Spotmatic) required that the lens be stopped down to the working aperture when
taking a meter reading. With a small aperture, this darkened the viewfinder, making
viewing and composition difficult. Subsequent models soon incorporated mechanical
coupling between the lens and the camera body, indicating the working aperture to the
camera while allowing the lens to be at its maximum aperture for composition and
focusing; this feature became known as automatic aperture control or automatic
diaphragm control.
For some lenses, including a few long telephotos, lenses mounted on bellows, and
perspective-control and tilt/shift lenses, the mechanical linkage was impractical, and
automatic aperture control was not provided. Many such lenses incorporated a feature
known as a “preset” aperture, which allows the lens to be set to working aperture and
then quickly switched between working aperture and full aperture without looking at the
aperture control. Typical operation might be to establish rough composition, set the
working aperture for metering, return to full aperture for a final check of focus and
composition, and finally return to the working aperture just before exposure.
Although slightly easier than stopped-down metering, operation is less convenient than
automatic operation. Preset aperture controls have taken several forms; the most common
has been the use of essentially two lens aperture rings, with one ring setting the aperture
and the other serving as a limit stop when switching to working aperture. Examples of
lenses with this type of preset aperture control are the Nikon PC Nikkor 28 mm f/3.5 and
the SMC Pentax Shift 6×7 75 mm f/4.5. The Nikon PC Micro-Nikkor 85 mm f/2.8D lens
incorporates a mechanical pushbutton that sets working aperture when pressed and
restores full aperture when pressed a second time.
Canon EF lenses, introduced in 1987, have electromagnetic diaphragms, eliminating the
need for a mechanical linkage between the camera and the lens, and allowing automatic
aperture control with the Canon TS-E tilt/shift lenses. Nikon PC-E perspective-control
lenses, introduced in 2008, also have electromagnetic diaphragms. Automatic aperture
control is provided with the newer Nikon digital SLR cameras; with some earlier
cameras, the lenses offer preset aperture control by means of a pushbutton that controls
the electromagnetic diaphragm.
Optimal aperture
Optimal aperture depends both on optics (the depth of the scene versus diffraction), and
on the performance of the lens.
Optically, as a lens is stopped down, the defocus blur at the DOF limits decreases but
diffraction blur increases. The presence of these two opposing factors implies a point at
which the combined blur spot is minimized (Gibson 1975, 64); at that point, the f-number
is optimal for image sharpness, for this given depth of field – a wider aperture (lower f-number) causes more defocus, while a narrower aperture (higher f-number) causes more
diffraction.
As a matter of performance, lenses often do not perform optimally when fully opened,
and thus generally have better sharpness when stopped down some – note that this is
sharpness in the plane of critical focus, setting aside issues of depth of field. Beyond a
certain point there is no further sharpness benefit to stopping down, and the diffraction
begins to become significant. There is accordingly a sweet spot, generally in the f/4 – f/8
range, depending on the camera, where sharpness is optimal, though some lenses are
designed to perform optimally when wide open. How significant this is varies between
lenses, and opinions differ on how much practical impact this has.
While optimal aperture can be determined mechanically, how much sharpness is required
depends on how the image will be used – if the final image is viewed under normal
conditions (e.g., an 8″×10″ image viewed at 10″), it may suffice to determine the f-number using criteria for minimum required sharpness, and there may be no practical
benefit from further reducing the size of the blur spot. But this may not be true if the final
image is viewed under more demanding conditions, e.g., a very large final image viewed
at normal distance, or a portion of an image enlarged to normal size (Hansma 1996).
Hansma also suggests that the final-image size may not be known when a photograph is
taken, and obtaining the maximum practicable sharpness allows the decision to make a
large final image to be made at a later time.
In scanning or sampling
The terms scanning aperture and sampling aperture are often used to refer to the opening
through which an image is sampled, or scanned, for example in a drum scanner, an
image sensor, or a television pickup apparatus. The sampling aperture can be a literal
optical aperture, that is, a small opening in space, or it can be a time-domain aperture for
sampling a signal waveform.
For example, film grain is quantified as graininess via a measurement of film density
fluctuations as seen through a 0.048 mm sampling aperture.
Chapter-3
Circle of Confusion
In optics, a circle of confusion is an optical spot caused by a cone of light rays from a
lens not coming to a perfect focus when imaging a point source. It is also known as disk
of confusion, circle of indistinctness, blur circle, or blur spot.
In photography, the circle of confusion (“CoC”) is used to determine the depth of field,
the part of an image that is acceptably sharp. A standard value of CoC is often associated
with each image format, but the most appropriate value depends on visual acuity, viewing
conditions, and the amount of enlargement. Properly, this is the maximum permissible
circle of confusion, the circle of confusion diameter limit, or the circle of confusion
criterion, but is often informally called simply the circle of confusion.
Real lenses do not focus all rays perfectly, so that even at best focus, a point is imaged as
a spot rather than a point. The smallest such spot that a lens can produce is often referred
to as the circle of least confusion.
The depth of field is the region where the CoC is less than the resolution of the human
eye (or of the display medium).
Two uses
Two important uses of this term and concept need to be distinguished:
1. For describing the largest blur spot that is indistinguishable from a point. A
camera can precisely focus objects at only one distance; objects at other distances
are defocused. Defocused object points are imaged as blur spots rather than
points; the greater the distance an object is from the plane of focus, the greater the
size of the blur spot. Such a blur spot has the same shape as the lens aperture, but
for simplicity, is usually treated as if it were circular. In practice, objects at
considerably different distances from the camera can still appear sharp (Ray 2000,
50); the range of object distances over which objects appear sharp is the depth of
field (“DoF”). The common criterion for “acceptable sharpness” in the final
image (e.g., print, projection screen, or electronic display) is that the blur spot be
indistinguishable from a point.
2. For describing the blur spot achieved at a lens’s best focus. Recognizing that real
lenses do not focus all rays perfectly under even the best conditions, the circle of
confusion of a lens is a characterization of that blur spot. The term circle of least
confusion is often used for the smallest blur spot a lens can make (Ray 2002, 89),
for example by picking a best focus position that makes a good compromise
between the varying effective focal lengths of different lens zones due to spherical
or other aberrations. Diffraction effects from wave optics and the finite aperture
of a lens can be included in the circle of least confusion, or the term can be
applied in pure ray (geometric) optics.
In idealized ray optics, where rays are assumed to converge to a point when perfectly
focused, the shape of a defocus blur spot from a lens with a circular aperture is a hard-edged circle of light. A more general blur spot has soft edges due to diffraction and
aberrations (Stokseth 1969, 1317; Merklinger 1992, 45–46), and may be non-circular due
to the aperture shape. So the diameter concept needs to be carefully defined to be
meaningful. Suitable definitions often use the concept of encircled energy, the fraction of
the total optical energy of the spot that is within the specified diameter. Values of the
fraction (e.g., 80%, 90%) vary with application.
Circle of confusion diameter limit in photography
In photography, the circle of confusion diameter limit (“CoC”) for the final image is
often defined as the largest blur spot that will still be perceived by the human eye as a
point.
With this definition, the CoC in the original image (the image on the film or electronic
sensor) depends on three factors:
1. Visual acuity. For most people, the closest comfortable viewing distance, termed
the near distance for distinct vision (Ray 2000, 52), is approximately 25 cm. At
this distance, a person with good vision can usually distinguish an image
resolution of 5 line pairs per millimeter (lp/mm), equivalent to a CoC of 0.2 mm
in the final image.
2. Viewing conditions. If the final image is viewed at approximately 25 cm, a final-image CoC of 0.2 mm often is appropriate. A comfortable viewing distance is
also one at which the angle of view is approximately 60° (Ray 2000, 52); at a
distance of 25 cm, this corresponds to about 30 cm, approximately the diagonal of
an 8″×10″ image. It often may be reasonable to assume that, for whole-image
viewing, a final image larger than 8″×10″ will be viewed at a distance correspondingly greater than 25 cm, for which a larger CoC may be acceptable; the
original-image CoC is then the same as that determined from the standard final-image size and viewing distance. But if the larger final image will be viewed at
the normal distance of 25 cm, a smaller original-image CoC will be needed to
provide acceptable sharpness.
3. Enlargement from the original image to the final image. If there is no enlargement
(e.g., a contact print of an 8×10 original image), the CoC for the original image is
the same as that in the final image. But if, for example, the long dimension of a
35 mm original image is enlarged to 25 cm (10 inches), the enlargement is
approximately 7×, and the CoC for the original image is 0.2 mm / 7, or 0.029 mm.
The common values for CoC may not be applicable if reproduction or viewing conditions
differ significantly from those assumed in determining those values. If the original image
will be given greater enlargement, or viewed at a closer distance, then a smaller CoC will
be required. All three factors above are accommodated with this formula:
CoC (mm) = viewing distance (cm) / desired final-image resolution (lp/mm) for a 25 cm
viewing distance / enlargement / 25
For example, to support a final-image resolution equivalent to 5 lp/mm for a 25 cm
viewing distance when the anticipated viewing distance is 50 cm and the anticipated
enlargement is 8:
CoC = 50 / 5 / 8 / 25 = 0.05 mm
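A direct transcription of this formula (the function name is ours), reproducing the worked example:

```python
def coc_limit_mm(viewing_distance_cm, resolution_lpmm, enlargement):
    """Original-image CoC limit per the formula above; resolution_lpmm is the
    desired final-image resolution for a 25 cm viewing distance."""
    return viewing_distance_cm / resolution_lpmm / enlargement / 25

print(coc_limit_mm(50, 5, 8))            # 0.05 mm, as in the example
print(round(coc_limit_mm(25, 5, 7), 3))  # ~0.029 mm for 35 mm film enlarged 7x
```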
Since the final-image size is not usually known at the time of taking a photograph, it is
common to assume a standard size such as 25 cm width, along with a conventional final-image CoC of 0.2 mm, which is 1/1250 of the image width. Conventions in terms of the
diagonal measure are also commonly used. The DoF computed using these conventions
will need to be adjusted if the original image is cropped before enlarging to the final
image size, or if the size and viewing assumptions are altered.
Using the “Zeiss formula”, the circle of confusion is sometimes calculated as d/1730
where d is the diagonal measure of the original image (the camera format). For full-frame
35 mm format (24 mm × 36 mm, 43 mm diagonal) this comes out to be 0.025 mm. A
more widely used CoC is d/1500, or 0.029 mm for full-frame 35 mm format, which
corresponds to resolving 5 lines per millimeter on a print of 30 cm diagonal. Values of
0.030 mm and 0.033 mm are also common for full-frame 35 mm format. For practical
purposes, d/1730, a final-image CoC of 0.2 mm, and d/1500 give very similar results.
Criteria relating CoC to the lens focal length have also been used. Kodak (1972, 5)
recommended 2 minutes of arc (the Snellen criterion of 30 cycles/degree for normal
vision) for critical viewing, giving CoC ≈ f /1720, where f is the lens focal length. For a
50 mm lens on full-frame 35 mm format, this gave CoC ≈ 0.0291 mm. This criterion
evidently assumed that a final image would be viewed at “perspective-correct” distance
(i.e., the angle of view would be the same as that of the original image):
Viewing distance = focal length of taking lens × enlargement
However, images seldom are viewed at the “correct” distance; the viewer usually doesn't
know the focal length of the taking lens, and the “correct” distance may be uncomfortably short or long. Consequently, criteria based on lens focal length have generally
given way to criteria (such as d/1500) related to the camera format.
If an image is viewed on a low-resolution display medium such as a computer monitor,
the detectability of blur will be limited by the display medium rather than by human
vision. For example, the optical blur will be more difficult to detect in an 8″×10″ image
displayed on a computer monitor than in an 8″×10″ print of the same original image
viewed at the same distance. If the image is to be viewed only on a low-resolution device,
a larger CoC may be appropriate; however, if the image may also be viewed in a high-resolution medium such as a print, the criteria discussed above will govern.
Depth of field formulas derived from geometrical optics imply that any arbitrary DoF can
be achieved by using a sufficiently small CoC. Because of diffraction, however, this isn't
quite true. Using a smaller CoC requires increasing the lens f-number to achieve the same
DOF, and if the lens is stopped down sufficiently far, the reduction in defocus blur is
offset by the increased blur from diffraction.
Circle of confusion diameter limit based on d/1500

Image Format          Frame size          CoC
Small Format
  Four Thirds System  13.5 mm × 18 mm     0.015 mm
  APS-C               15.0 mm × 22.5 mm   0.018 mm
  35 mm               24 mm × 36 mm       0.029 mm
Medium Format
  645 (6×4.5)         56 mm × 42 mm       0.047 mm
  6×6                 56 mm × 56 mm       0.053 mm
  6×7                 56 mm × 69 mm       0.059 mm
  6×9                 56 mm × 84 mm       0.067 mm
  6×12                56 mm × 112 mm      0.083 mm
  6×17                56 mm × 168 mm      0.12 mm
Large Format
  4×5                 102 mm × 127 mm     0.11 mm
  5×7                 127 mm × 178 mm     0.15 mm
  8×10                203 mm × 254 mm     0.22 mm
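The table values can be regenerated from the frame dimensions; a small sketch:

```python
import math

def coc_d1500_mm(width_mm, height_mm):
    """d/1500 criterion computed from the frame dimensions."""
    return math.hypot(width_mm, height_mm) / 1500

print(round(coc_d1500_mm(36, 24), 3))  # 0.029 (35 mm)
print(round(coc_d1500_mm(69, 56), 3))  # 0.059 (6x7)
```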
Adjusting the circle of confusion diameter for a lens’s DoF scale
The f-number determined from a lens DoF scale can be adjusted to reflect a CoC different
from the one on which the DoF scale is based. It is shown in the Depth of field article that

DoF = 2 N c (m + 1) / (m² − (N c / f)²)

where N is the lens f-number, c is the CoC, m is the magnification, and f is the lens focal
length. Because the f-number and CoC occur only as the product Nc, an increase in one is
equivalent to a corresponding decrease in the other, and vice versa. For example, if it is
known that a lens DoF scale is based on a CoC of 0.035 mm, and the actual conditions
require a CoC of 0.025 mm, the CoC must be decreased by a factor of 0.035 / 0.025 =
1.4; this can be accomplished by increasing the f-number determined from the DoF scale
by the same factor, or about 1 stop, so the lens can simply be closed down 1 stop from the
value indicated on the scale.
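The 1-stop figure follows because each stop multiplies the f-number by √2; a quick check:

```python
import math

factor = 0.035 / 0.025                  # required increase in N to keep N*c fixed
stops = math.log(factor, math.sqrt(2))  # number of stops: log base sqrt(2)
print(round(factor, 2), round(stops, 2))  # 1.4, ~0.97 (about 1 stop)
```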
The same approach can usually be used with a DoF calculator on a view camera.
Determining a circle of confusion diameter from the object field
Lens and ray diagram for calculating the circle of confusion diameter c for an out-of-focus subject at distance S2 when the camera is focused at S1. The auxiliary blur circle C
in the object plane (dashed line) makes the calculation easier.
An early calculation of CoC diameter (“indistinctness”) by “T.H.” in 1866
To calculate the diameter of the circle of confusion in the image plane for an out-of-focus
subject, one method is to first calculate the diameter of the blur circle in a virtual image
in the object plane, which is simply done using similar triangles, and then multiply by the
magnification of the system, which is calculated with the help of the lens equation.
The blur circle, of diameter C, in the focused object plane at distance S1, is an unfocused
virtual image of the object at distance S2, as shown in the diagram. It depends only on
these distances and the aperture diameter A, via similar triangles, independent of the lens
focal length:

C = A |S2 − S1| / S2

The circle of confusion in the image plane is obtained by multiplying by the magnification m:

c = C m

where the magnification m is given by the ratio of focus distances:

m = f1 / S1

Using the lens equation, 1/f = 1/f1 + 1/S1, we can solve for the auxiliary variable f1, which yields

f1 = f S1 / (S1 − f)

and express the magnification in terms of focused distance and focal length:

m = f / (S1 − f)

which gives the final result:

c = A (|S2 − S1| / S2) · f / (S1 − f)

This can optionally be expressed in terms of the f-number N = f/A as:

c = (|S2 − S1| / S2) · f² / (N (S1 − f))
This formula is exact for a simple paraxial thin lens or a symmetrical lens, in which the
entrance pupil and exit pupil are both of diameter A. More complex lens designs with a
non-unity pupil magnification will need a more complex analysis, as addressed in depth
of field.
More generally, this approach leads to an exact paraxial result for all optical systems if A
is the entrance pupil diameter, the subject distances are measured from the entrance pupil,
and the magnification is known:

c = A m |S2 − S1| / S2
If either the focus distance or the out-of-focus subject distance is infinite, the equations
can be evaluated in the limit. For infinite focus distance:

c = f A / S2

And for the blur circle of an object at infinity when the focus distance is finite:

c = m A = f A / (S1 − f)
If the c value is fixed as a circle of confusion diameter limit, either of these can be solved
for subject distance to get the hyperfocal distance, with approximately equivalent results.
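A minimal sketch of the symmetrical-lens result (the function name and values are ours, for illustration):

```python
def blur_circle_mm(f_mm, N, S1_mm, S2_mm):
    """Blur circle diameter c in the image plane for a subject at S2,
    camera focused at S1 (simple symmetrical-lens formula from the text)."""
    A = f_mm / N                # aperture diameter, from N = f/A
    m = f_mm / (S1_mm - f_mm)   # magnification at the focused distance
    return A * m * abs(S2_mm - S1_mm) / S2_mm

# 50 mm lens at f/2, focused at 1 m, subject at 2 m:
print(round(blur_circle_mm(50, 2.0, 1000, 2000), 3))  # ~0.658 mm
```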
History
Henry Coddington 1829
Before it was applied to photography, the concept of circle of confusion was applied to
optical instruments such as telescopes. Coddington (1829, 54) quantifies both a circle of
least confusion and a least circle of confusion for a spherical reflecting surface.
"This we may consider as the nearest approach to a simple focus, and term the
circle of least confusion."
Society for the Diffusion of Useful Knowledge 1832
The Society for the Diffusion of Useful Knowledge (1832, 11) applied it to third-order
aberrations:
"This spherical aberration produces an indistinctness of vision, by spreading out every
mathematical point of the object into a small spot in its picture; which spots, by mixing
with each other, confuse the whole. The diameter of this circle of confusion, at the focus
of the central rays F, over which every point is spread, will be L K (fig. 17.); and when
the aperture of the reflector is moderate it equals the cube of the aperture, divided by the
square of the radius (...): this circle is called the aberration of latitude."
T.H. 1866
Circle-of-confusion calculations: An early precursor to depth of field calculations is the
T.H. (1866, 138) calculation of a circle-of-confusion diameter from a subject distance, for
a lens focused at infinity; this article was pointed out by von Rohr (1899). The formula he
comes up with for what he terms "the indistinctness" is equivalent, in modern terms, to

c = f A / S

for focal length f, aperture diameter A, and subject distance S. But he does not invert this
to find the S corresponding to a given c criterion (i.e. he does not solve for the hyperfocal
distance), nor does he consider focusing at any other distance than infinity.
He finally observes "long-focus lenses have usually a larger aperture than short ones, and
on this account have less depth of focus" [his italic emphasis].
Dallmeyer and Abney
T Dallmeyer (1892, 24), in an expanded re-publication of his father John Henry
Dallmeyer's 1874 pamphlet On the Choice and Use of Photographic Lenses (in material
that is not in the 1874 edition and appears to have been added from a paper by J.H.D. "On
the Use of Diaphragms or Stops" of unknown date) says:
"Thus every point in an object out of focus is represented in the picture by a disc, or
circle of confusion, the size of which is proportionate to the aperture in relation to the
focus of the lens employed. If a point in the object is 1/100 of an inch out of focus, it will
be represented by a circle of confusion measuring but 1/100 part of the aperture of the
lens."
This latter statement is clearly incorrect, or misstated, being off by a factor of focal
distance (focal length). He goes on:
"and when the circles of confusion are sufficiently small the eye fails to see them as such;
they are then seen as points only, and the picture appears sharp. At the ordinary distance
of vision, of from twelve to fifteen inches, circles of confusion are seen as points, if the
angle subtended by them does not exceed one minute of arc, or roughly, if they do not
exceed the 1/100 of an inch in diameter."
Numerically, 1/100 of an inch at 12 to 15 inches is closer to two minutes of arc. This
choice of COC limit remains (for a large print) the most widely used even today. Abney
(1881, 207–08) takes a similar approach based on a visual acuity of one minute of arc,
and chooses a circle of confusion of 0.025 cm for viewing at 40 to 50 cm, essentially
making the same factor-of-two error in metric units. It is unclear whether Abney or
Dallmeyer was earlier to set the COC standard thereby.
Wall 1889
The common 1/100 inch COC limit has been applied to blur other than defocus blur. For
example, Wall (1889, 92) says:
"To find how quickly a shutter must act to take an object in motion that there may be a
circle of confusion less than 1/100in. in diameter, divide the distance of the object by 100
times the focus of the lens, and divide the rapidity of motion of object in inches per
second by the results, when you have the longest duration of exposure in fraction of a
second."
Chapter-4
Color Temperature and Color Balance
Color Temperature
The CIE 1931 x,y chromaticity space, also showing the chromaticities of black-body light
sources of various temperatures (Planckian locus), and lines of constant correlated color
temperature.
Color temperature is a characteristic of visible light that has important applications in
lighting, photography, videography, publishing, manufacturing, astrophysics, and other
fields. The color temperature of a light source is the temperature of an ideal black-body
radiator that radiates light of comparable hue to that light source. The temperature is
conventionally stated in units of absolute temperature, kelvin [K]. Color temperature is
related to Planck's law and to Wien's displacement law.
Higher color temperatures (5,000 K or more) are called cool colors (blueish white); lower
color temperatures (2,700–3,000 K) are called warm colors (yellowish white through
red).
Categorizing different lighting
Because it is the standard against which other light sources are compared, the color
temperature of the thermal radiation from an ideal black body radiator is defined as equal
to its surface temperature in kelvins, or alternatively in mired (micro-reciprocal Kelvin).
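Since a mired is a micro-reciprocal kelvin, the two scales convert as 1,000,000/T; a one-line sketch:

```python
def kelvin_to_mired(T):
    return 1_000_000 / T  # the same expression also converts mired back to kelvin

print(kelvin_to_mired(5000))  # 200.0 mired
```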
To the extent that a hot surface emits thermal radiation but is not an ideal black-body
radiator, the color temperature of the light is not the actual temperature of the surface. An
incandescent light bulb's light is thermal radiation and the bulb is very close to an ideal
black-body radiator, so its color temperature is essentially the temperature of the
filament.
Many other light sources, such as fluorescent lamps, emit light primarily by processes
other than thermal radiation. This means the emitted radiation does not follow the form of
a black-body spectrum. These sources are assigned what is known as a correlated color
temperature (CCT). CCT is the color temperature of a black body radiator which to
human color perception most closely matches the light from the lamp. Because such an
approximation is not required for incandescent light, the CCT for an incandescent light is
simply its unadjusted temperature, derived from the comparison to a black body radiator.
The Sun
As the Sun crosses the sky, it may appear to be red, orange, yellow or white depending on
its position. The changing color of the sun over the course of the day is mainly a result of
scattering of light, and is unrelated to black body radiation. The blue color of the sky is
caused by Rayleigh scattering of the sunlight from the atmosphere, which tends to scatter
blue light more than red.
Daylight has a spectrum similar to that of a black body with a correlated color
temperature of 6,500 K.
Hues of the Planckian locus, in the mired scale
For colors based on the black body, blue occurs at higher temperatures, while red occurs
at lower, cooler, temperatures. This is the opposite of the cultural associations that colors
have taken on, with "red" as "hot", and "blue" as "cold".
Color temperature applications
Lighting
Color Temperature comparison of common electric lamps.
For lighting building interiors, it is often important to take into account the color
temperature of the lights used. For example, a warmer (i.e., lower color temperature) light
is often used in public areas to promote relaxation, while a cooler (higher color
temperature) light is used to enhance concentration in offices.
Aquaculture
In fishkeeping, color temperature has different functions and foci for different branches.

• In freshwater aquaria, color temperature is generally of concern only for
  producing a more attractive display. Lights tend to be designed to produce an
  attractive spectrum, sometimes with secondary attention to keeping plants alive.
• In saltwater/reef aquaria, color temperature is an essential part of tank health.
  Cooler temperatures are seen as getting through the water better, providing
  essential energy to the algae hosted in coral, which sustain it. Because coral
  receives intense, direct tropical sunlight, the focus was once on simulating this
  with 6,500 K lights. Higher-temperature light sources have become more popular
  as their success became widely known: first 10,000 K, more recently 16,000 K and
  20,000 K. Meanwhile, actinic lighting is used to make the somewhat fluorescent
  colors of many corals and fish "pop", creating brighter "display" tanks.
Digital photography
In digital photography, color temperature is sometimes used interchangeably with white
balance, which allows a remapping of color values to simulate variations in ambient color
temperature. Most digital cameras and RAW image software provide presets simulating
specific ambient values (e.g., sunny, cloudy, tungsten, etc.), while others allow explicit
entry of white balance values in kelvins. These settings vary color values along the blue–
yellow axis, while some software includes additional controls (sometimes labeled tint)
adding the magenta–green axis.
Film photography
Film sometimes appears to exaggerate the color of the light, since it does not adapt to
lighting color as our visual perception does. An object that appears to the eye to be white
may turn out to look very blue or orange in a photograph. The color balance may need to
be corrected while shooting or while printing to achieve a neutral color print.
Film is made for specific light sources (most commonly daylight film and tungsten film),
and used properly, will create a neutral color print. Matching the sensitivity of the film to
the color temperature of the light source is one way to balance color. If tungsten film is
used indoors with incandescent lamps, the yellowish-orange light of the tungsten
incandescent bulbs will appear as white (3,200 K) in the photograph.
Filters on a camera lens, or color gels over the light source(s) may also be used to correct
color balance. When shooting with a bluish light (high color temperature) source such as
on an overcast day, in the shade, in window light or if using tungsten film with white or
blue light, a yellowish-orange filter will correct this. For shooting with daylight film
(calibrated to 5,600 K) under warmer (low color temperature) light sources such as
sunsets, candle light or tungsten lighting, a bluish (e.g., #80A) filter may be used.
If there is more than one light source with varied color temperatures, one way to balance
the color is to use daylight film and place color-correcting gel filters over each light
source.
Photographers sometimes use color temperature meters. These are usually designed to
read only two regions along the visible spectrum (red and blue); more expensive ones
read three regions (red, green, and blue). However, they are ineffective with sources such
as fluorescent or discharge lamps, whose light varies in color and may be harder to
correct for. Because such light is often greenish, a magenta filter may correct it. More
sophisticated colorimetry tools can be used where such meters are lacking.
Desktop publishing
In the desktop publishing industry, it is important to know a monitor’s color temperature.
Color matching software, such as ColorSync, will measure a monitor's color temperature
and then adjust its settings accordingly. This enables on-screen color to more closely
match printed color. Common monitor color temperatures, along with matching standard
illuminants in parentheses, are as follows:
• 5,000 K (D50)
• 5,500 K (D55)
• 6,500 K (D65)
• 7,500 K (D75)
• 9,300 K
Note: D50 is scientific shorthand for a Standard illuminant: the daylight spectrum at a
correlated color temperature of 5,000 K. (Similar definition for D55, D65 and D75.)
Designations such as D50 are used to help classify color temperatures of light tables and
viewing booths. When viewing a color slide at a light table, it is important that the light
be balanced properly so that the colors are not shifted towards the red or blue.
Digital cameras, web graphics, DVDs, etc. are normally designed for a 6,500 K color
temperature. The sRGB standard commonly used for images on the internet stipulates
(among other things) a 6,500 K display whitepoint.
TV, video, and digital still cameras
The NTSC and PAL TV norms call for a compliant TV screen to display an electrically
black and white signal (minimal color saturation) at a color temperature of 6,500 K. On
many consumer-grade televisions, there is a very noticeable deviation from this requirement. However, higher-end consumer-grade televisions can have their color temperatures
adjusted to 6,500 K by using a preprogrammed setting or a custom calibration. Current
versions of ATSC explicitly call for the color temperature data to be included in the data
stream, but old versions of ATSC allowed this data to be omitted. In this case, current
versions of ATSC cite default colorimetry standards depending on the format. Both of the
cited standards specify a 6,500 K color temperature.
Most video and digital still cameras can adjust for color temperature by zooming into a
white or neutral colored object and setting the manual "white balance" (telling the camera
that "this object is white"); the camera then shows true white as white and adjusts all the
other colors accordingly. White-balancing is necessary especially when indoors under
fluorescent lighting and when moving the camera from one lighting situation to another.
Most cameras also have an automatic white balance function that attempts to determine
the color of the light and correct accordingly. While these settings were once unreliable,
they are much improved in today's digital cameras, and will produce an accurate white
balance in a wide variety of lighting situations.
Artistic application via control of color temperature
The house above appears light cream at midday, but seems a bluish white here in the dim
light before full sunrise. Note the different color temperature of the sunrise in the
background.
Experimentation with color temperature is obvious in many Stanley Kubrick films; for
instance in Eyes Wide Shut the light coming in from a window was almost always
conspicuously blue, whereas the light from lamps on end tables was fairly orange. Indoor
lights typically give off a yellow hue; fluorescent and natural lighting tends to be more
blue.
Video camera operators can white-balance objects which aren't white, downplaying the
color of the object used for white-balancing. For instance, they can bring more warmth
into a picture by white-balancing off something light blue, such as faded blue denim; in
this way white-balancing can serve in place of a filter or lighting gel when those aren't
available.
Cinematographers do not "white balance" in the same way as video camera operators;
they can use techniques such as filters, choice of film stock, pre-flashing, and after
shooting, color grading (both by exposure at the labs and also digitally). Cinematographers also work closely with set designers and lighting crews to achieve the desired
effects.
For artists, most pigments and papers have a cool or warm cast, as the human eye can
detect even a minute amount of saturation. Gray mixed with yellow, orange, or red is a
"warm gray"; gray mixed with green, blue, or purple is a "cool gray". Note that this sense of
temperature is the reverse of that of real temperature; bluer is described as "cooler" even
though it corresponds to a higher-temperature blackbody.
Warm grey: mixed with 6% yellow.
Cool grey: mixed with 6% blue.
Lighting designers sometimes select filters by color temperature, commonly to match
light that is theoretically white. Since fixtures using discharge type lamps produce a light
of considerably higher color temperature than tungsten lamps, using the two in
conjunction could potentially produce a stark contrast, so sometimes fixtures with HID
lamps, commonly producing light of 6,000–7,000 K, are fitted with 3,200 K filters to
emulate tungsten light. Fixtures with color mixing features or with multiple colors (if
including 3,200 K) are also capable of producing tungsten-like light. Color temperature
may also be a factor when selecting lamps, since each is likely to have a different color
temperature.
Correlated color temperature
The correlated color temperature (Tcp) is the temperature of the Planckian radiator
whose perceived colour most closely resembles that of a given stimulus at the same
brightness and under specified viewing conditions
— CIE/IEC 17.4:1987, International Lighting Vocabulary (ISBN 3900734070)
Motivation
Black body radiators are the reference by which the whiteness of light sources is judged.
A black body can be described by its color temperature, whose hues are depicted above.
By analogy, nearly-Planckian light sources such as certain fluorescent or high-intensity
discharge lamps can be judged by their correlated color temperature (CCT); the color
temperature of the Planckian radiator that best approximates them. The question is: what
is the relationship between the light source's relative spectral power distribution and its
correlated color temperature?
Background
Judd's (r,g) diagram. The concentric curves indicate the loci of constant purity.

Judd's Maxwell triangle, with the Planckian locus in red. Translating from trilinear
co-ordinates into Cartesian co-ordinates leads to the next diagram.

Judd's uniform chromaticity space (UCS), with the Planckian locus and the isotherms
from 1,000 K to 10,000 K, perpendicular to the locus. Judd calculated the isotherms in
this space before translating them back into the (x,y) chromaticity space, as depicted in
the diagram at the top.
Close up of the Planckian locus in the CIE 1960 UCS, with the isotherms in mireds. Note
the even spacing of the isotherms when using the reciprocal temperature scale, and
compare with the similar figure below. The even spacing of the isotherms on the locus
implies that the mired scale is a better measure of perceptual color difference than the
temperature scale.
The notion of using Planckian radiators as a yardstick against which to judge other light
sources is not a new one. In 1923, writing about "grading of illuminants with reference to
quality of color…the temperature of the source as an index of the quality of color", Priest
essentially described CCT as we understand it today, going so far as to use the term
apparent color temperature, and astutely recognized three cases:
• "Those for which the spectral distribution of energy is identical with that given by
the Planckian formula."
• "Those for which the spectral distribution of energy is not identical with that
given by the Planckian formula, but still is of such a form that the quality of the
color evoked is the same as would be evoked by the energy from a Planckian
radiator at the given color temperature."
• "Those for which the spectral distribution of energy is such that the color can be
matched only approximately by a stimulus of the Planckian form of spectral
distribution."
Several important developments occurred in 1931. In chronological order:
1. Davis published a paper on correlated color temperature (his term). Referring to
the Planckian locus on the r-g diagram, he defined the CCT as the average of the
primary component temperatures (RGB CCTs), using trilinear coordinates.
2. The CIE announced the XYZ color space.
3. Judd published a paper on the nature of "least perceptible differences" with
respect to chromatic stimuli. By empirical means he determined that the
difference in sensation, which he termed ΔE for a "discriminatory step between
colors…Empfindung" (German for sensation) was proportional to the distance of
the colors on the chromaticity diagram. Referring to the (r,g) chromaticity
diagram depicted aside, he hypothesized that:
KΔE = |c1 - c2| = max(|r1 - r2|, |g1 - g2|)
These developments paved the way for the development of new chromaticity spaces that
are more suited to the estimation of correlated color temperatures and chromaticity
differences. Bridging the concepts of color difference and color temperature, Priest made
the observation that the eye is sensitive to constant differences in reciprocal temperature:
A difference of one micro-reciprocal-degree (μrd) is fairly representative of the doubtfully perceptible difference under the most favorable conditions of observation.
Priest proposed to use "the scale of temperature as a scale for arranging the chromaticities
of the several illuminants in a serial order." Over the next few years, Judd published three
more significant papers:
1. The first verified the findings of Priest, Davis, and Judd, with a paper on
sensitivity to change in color temperature.
2. The second proposed a new chromaticity space, guided by a principle that has
become the holy grail of color spaces: perceptual uniformity (chromaticity distance should be commensurate with perceptual difference). By means of a projective transformation, Judd found a more uniform chromaticity space (UCS) in
which to find the CCT. Judd determined the nearest color temperature by simply
finding the nearest point on the Planckian locus to the chromaticity of the
stimulus on Maxwell's color triangle, depicted aside. The transformation matrix
he used to convert X,Y,Z tristimulus values to R,G,B coordinates was:

[R]   [ 3.1956   2.4478  −0.1434] [X]
[G] = [−2.5455   7.0492   0.9963] [Y]
[B]   [ 0.0000   0.0000   1.0000] [Z]

From this one can find these chromaticities:

u = (0.4661x + 0.1593y) / (y − 0.15735x + 0.2424)
v = 0.6581y / (y − 0.15735x + 0.2424)
3. The third depicted the locus of the isothermal chromaticities on the CIE 1931 x,y
chromaticity diagram. Since the isothermal points formed normals on his UCS
diagram, transformation back into the xy plane revealed them still to be lines, but
no longer perpendicular to the locus.
MacAdam's "uniform chromaticity scale" diagram; a simplification of Judd's UCS.
Calculation
Judd's idea of determining the nearest point to the Planckian locus on a uniform
chromaticity space is current. In 1937, MacAdam suggested a "modified uniform
chromaticity scale diagram", based on certain simplifying geometrical considerations:

u = 4x / (−2x + 12y + 3),   v = 6y / (−2x + 12y + 3)

This (u,v) chromaticity space became the CIE 1960 color space, which is still used to
calculate the CCT (even though MacAdam did not devise it with this purpose in mind).
Using other chromaticity spaces, such as u'v', leads to non-standard results that may
nevertheless be perceptually meaningful.
Close up of the CIE 1960 UCS. The isotherms are perpendicular to the Planckian locus,
and are drawn to indicate the maximum distance from the locus that the CIE considers
the correlated color temperature to be meaningful: Δuv = ±0.05.
The distance from the locus (i.e., degree of departure from a black body) is traditionally
indicated in units of Δuv; positive for points above the locus. This concept of distance
has evolved to become Delta E, which continues to be used today.
Robertson's method
Before the advent of powerful, personal computers, it was common to estimate the
correlated color temperature by way of interpolation from look-up tables and charts. The
most famous such method is Robertson's, who took advantage of the relatively even
spacing of the mired scale (see above) to calculate the CCT Tc using linear interpolation
of the isotherm's mired values:

1/Tc = 1/Ti + (θ1/(θ1 + θ2)) · (1/Ti+1 − 1/Ti)

Computation of the CCT Tc corresponding to the chromaticity coordinate (uT, vT) in the
CIE 1960 UCS.

where Ti and Ti+1 are the color temperatures of the look-up isotherms and i is chosen
such that Ti < Tc < Ti+1. (Furthermore, the test chromaticity lies between the only two
adjacent lines for which di/di+1 < 0.)

If the isotherms are tight enough, one can assume θ1/θ2 ≈ sin θ1/sin θ2, leading to

1/Tc = 1/Ti + (di/(di − di+1)) · (1/Ti+1 − 1/Ti)
The distance of the test point to the i-th isotherm is given by

di = ((vT − vi) − mi·(uT − ui)) / √(1 + mi²)

where (ui, vi) is the chromaticity coordinate of the i-th isotherm on the Planckian locus
and mi is the isotherm's slope. Since it is perpendicular to the locus, it follows that
mi = −1/li, where li is the slope of the locus at (ui, vi).
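For illustration, here is a minimal Python sketch of this interpolation. The isotherm rows quoted are the low-mired end of Robertson's published look-up table; a practical implementation needs the full table (31 rows, 0 to 600 mired), so treat this subset as illustrative and verify the values against a published copy before use.

```python
import math

# First rows of Robertson's (1968) isotherm look-up table:
# (mired, u, v, slope). Illustrative subset only; the full 31-row
# table is required to cover ordinary light sources.
ISOTHERMS = [
    (0.0,  0.18006, 0.26352, -0.24341),
    (10.0, 0.18066, 0.26589, -0.25479),
    (20.0, 0.18133, 0.26846, -0.26876),
    (30.0, 0.18208, 0.27119, -0.28539),
    (40.0, 0.18293, 0.27407, -0.30470),
    (50.0, 0.18388, 0.27709, -0.32675),
]

def signed_distance(u, v, row):
    """Perpendicular signed distance d_i from (u, v) to one isotherm."""
    _, ui, vi, mi = row
    return ((v - vi) - mi * (u - ui)) / math.sqrt(1.0 + mi * mi)

def robertson_cct(u, v):
    """CCT by linear interpolation of mired values between two isotherms."""
    for i in range(len(ISOTHERMS) - 1):
        d1 = signed_distance(u, v, ISOTHERMS[i])
        d2 = signed_distance(u, v, ISOTHERMS[i + 1])
        if d1 / d2 < 0:  # test chromaticity lies between isotherms i and i+1
            r1, r2 = ISOTHERMS[i][0], ISOTHERMS[i + 1][0]
            mired = r1 + d1 / (d1 - d2) * (r2 - r1)
            return 1.0e6 / mired
    raise ValueError("chromaticity outside the range of the table")

# A point essentially on the 30-mired isotherm should give roughly 33,000 K.
print(round(robertson_cct(0.1821, 0.2712)))
```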
Precautions
Although the CCT can be calculated for any chromaticity coordinate, the result is
meaningful only if the light sources are nearly white. The CIE recommends that "The
concept of correlated color temperature should not be used if the chromaticity of the test
source differs more than [Δuv = 5×10⁻²] from the Planckian radiator." Beyond a certain
value of Δuv, a chromaticity co-ordinate may be equidistant from two points on the locus,
causing ambiguity in the CCT.
Approximation
If a narrow range of color temperatures is considered—those encapsulating daylight
being the most practical case—one can approximate the Planckian locus in order to
calculate the CCT in terms of chromaticity coordinates. Following Kelly's observation
that the isotherms intersect in the purple region near (x=0.325, y=0.154), McCamy
proposed this cubic approximation:
CCT(x, y) = −449n³ + 3525n² − 6823.3n + 5520.33
where n = (x − xe)/(y − ye) is the inverse slope of the line to the "epicenter"
(xe = 0.3320, ye = 0.1858), quite close to the intersection point mentioned by Kelly. The
maximum absolute error for color temperatures ranging from 2856 K (illuminant A) to
6504 K (D65) is under 2 K.
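Since the formula is a plain cubic in n, it is easily checked in code. A minimal Python sketch, using the standard 2° observer chromaticity of D65 as a test point:

```python
def mccamy_cct(x, y):
    """McCamy's cubic approximation of CCT from CIE 1931 (x, y)."""
    n = (x - 0.3320) / (y - 0.1858)  # inverse line slope from the epicenter
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

# Illuminant D65 (x = 0.31271, y = 0.32902) comes out near 6,504 K.
print(round(mccamy_cct(0.31271, 0.32902)))
```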
A more recent proposal, using exponential terms, considerably extends the applicable
range by adding a second epicenter for high color temperatures:
CCT(x, y) = A0 + A1·exp(−n/t1) + A2·exp(−n/t2) + A3·exp(−n/t3)

where n = (x − xe)/(y − ye) as before and the other constants are defined below:

Constant   3–50 kK        50–800 kK
xe         0.3366         0.3356
ye         0.1735         0.1691
A0         −949.86315     36284.48953
A1         6253.80338     0.00228
t1         0.92159        0.07861
A2         28.70599       5.4535×10⁻³⁶
t2         0.20039        0.01543
A3         0.00004        —
t3         0.07125        —
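The exponential form can be evaluated the same way. The sketch below hard-codes only the 3–50 kK column of the table above; the 50–800 kK branch would follow the same pattern with the second column's constants (and no A3 term).

```python
import math

# Constants for the 3-50 kK branch (first column of the table above).
XE, YE = 0.3366, 0.1735
A0, A1, T1 = -949.86315, 6253.80338, 0.92159
A2, T2 = 28.70599, 0.20039
A3, T3 = 0.00004, 0.07125

def cct_exponential(x, y):
    """CCT via the exponential-series approximation, 3-50 kK branch."""
    n = (x - XE) / (y - YE)
    return (A0 + A1 * math.exp(-n / T1)
               + A2 * math.exp(-n / T2)
               + A3 * math.exp(-n / T3))

print(round(cct_exponential(0.31271, 0.32902)))  # D65: about 6,500 K
```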
Color rendering index
The CIE color rendering index (CRI) is a method to determine how well a light source's
illumination of eight sample patches compares to the illumination provided by a reference
source. Cited together, the CRI and CCT give a numerical estimate of what reference
(ideal) light source best approximates a particular artificial light, and what the difference
is.
Spectral power distribution
An incandescent lamp's SPD graph differs markedly from that of a fluorescent lamp.
Light sources and illuminants may be characterized by their spectral power distribution
(SPD). The relative SPD curves provided by many manufacturers may have been
produced using 10-nanometre (nm) increments or more on their spectroradiometer. The
result is what would seem to be a smoother ("fuller spectrum") power distribution than
the lamp actually has. Owing to their spiky distribution, much finer increments are
advisable for taking measurements of fluorescent lights, and this requires more expensive
equipment.
Color Balance
The left half shows the photo as it came from the digital camera. The right half shows the
photo adjusted to make a gray surface neutral in the same light.
A seascape photograph at Clifton Beach, South Arm, Tasmania, Australia. The white
balance has been adjusted towards the warm side for creative effect.
In photography and image processing, color balance is the global adjustment of the
intensities of the colors (typically red, green, and blue primary colors). An important goal
of this adjustment is to render specific colors – particularly neutral colors – correctly;
hence, the general method is sometimes called gray balance, neutral balance, or white
balance. Color balance changes the overall mixture of colors in an image and is used for
color correction; generalized versions of color balance are used to get colors other than
neutrals to also appear correct or pleasing.
Image data acquired by sensors – either film or electronic image sensors – must be
transformed from the acquired values to new values that are appropriate for color
reproduction or display. Several aspects of the acquisition and display process make such
color correction essential – including the fact that the acquisition sensors do not match
the sensors in the human eye, that the properties of the display medium must be
accounted for, and that the ambient viewing conditions of the acquisition differ from the
display viewing conditions.
The color balance operations in popular image editing applications usually operate
directly on the red, green, and blue channel pixel values, without respect to any color
sensing or reproduction model. In shooting film, color balance is typically achieved by
using color correction filters over the lights or on the camera lens.
Generalized color balance
Sometimes the adjustment to keep neutrals neutral is called white balance, and the phrase
color balance refers to the adjustment that in addition makes other colors in a displayed
image appear to have the same general appearance as the colors in an original scene. It is
particularly important that neutral (gray, achromatic, white) colors in a scene appear
neutral in the reproduction. Hence, the special case of balancing the neutral colors
(sometimes gray balance, neutral balance, or white balance) is a particularly important –
perhaps dominant – element of color balancing.
Normally, one would not use the phrase color balance to describe the adjustments needed
to account for differences between the sensors and the human eye, or the details of the
display primaries. Color balance is normally reserved to refer to correction for differences in the ambient illumination conditions. However, the algorithms for transforming
the data do not always clearly separate out the different elements of the correction.
Hence, it can be difficult to assign color balance to a specific step in the color correction
process. Moreover, there can be significant differences in the color balancing goal. Some
applications are created to produce an accurate rendering – as suggested above. In other
applications, the goal of color balancing is to produce a pleasing rendering. This
difference also creates difficulty in defining the color balancing processing operations.
Illuminant estimation and adaptation
Most digital cameras have a means to select a color correction based on the type of scene
illumination, using either manual illuminant selection, or automatic white balance
(AWB), or custom white balance. The algorithm that performs this analysis performs
generalized color balancing, known as illuminant adaptation or chromatic adaptation.
Many methods are used to achieve color balancing. Setting a button on a camera is a way
for the user to indicate to the processor the nature of the scene lighting. Another option
on some cameras is a button which one may press when the camera is pointed at a gray
card or other neutral object. This "custom white balance" step captures an image of the
ambient light, and this information is helpful in controlling color balance.
There is a large literature on how one might estimate the ambient illumination from the
camera data and then use this information to transform the image data. A variety of
algorithms have been proposed, and their quality has been debated. Examples include
Retinex, artificial neural networks, and Bayesian methods; examining these and the
references therein will lead the reader to many others.
Color balance and chromatic colors
Color balancing an image affects not only the neutrals, but other colors as well. An image
that is not color balanced is said to have a color cast, as everything in the image appears
to have been shifted towards one color or another. Color balancing may be thought of in
terms of removing this color cast.
Color balance is also related to color constancy. Algorithms and techniques used to attain
color constancy are frequently used for color balancing, as well. Color constancy is, in
turn, related to chromatic adaptation. Conceptually, color balancing consists of two steps:
first, determining the illuminant under which an image was captured; and second, scaling
the components (e.g., R, G, and B) of the image or otherwise transforming the
components so they conform to the viewing illuminant.
Viggiano found that white balancing in the camera's native RGB tended to produce less
color inconstancy (i.e., less distortion of the colors) than in monitor RGB for over 4000
hypothetical sets of camera sensitivities. This difference typically amounted to a factor of
more than two in favor of camera RGB. This means that it is advantageous to get color
balance right at the time an image is captured, rather than edit later on a monitor. If one
must color balance later, balancing the raw image data will tend to produce less distortion
of chromatic colors than balancing in monitor RGB.
Mathematics of color balance
Color balancing is sometimes performed on a three-component image (e.g., RGB) using a
3x3 matrix. This type of transformation is appropriate if the image were captured using
the wrong white balance setting on a digital camera, or through a color filter.
Scaling monitor R, G, and B
In principle, one wants to scale all relative luminances in an image so that objects which
are believed to be neutral appear so. If, say, a surface with R = 240 was believed to be a
white object, and if 255 is the count which corresponds to white, one could multiply all
red values by 255/240. Doing analogously for green and blue would result, at least in
theory, in a color balanced image. In this type of transformation the 3×3 matrix is a
diagonal matrix:

[R]   [255/R'w     0        0   ] [R']
[G] = [   0     255/G'w     0   ] [G']
[B]   [   0        0     255/B'w] [B']

where R, G, and B are the color balanced red, green, and blue components of a pixel in
the image; R', G', and B' are the red, green, and blue components of the image before
color balancing, and R'w, G'w, and B'w are the red, green, and blue components of a pixel
which is believed to be a white surface in the image before color balancing. This is a
simple scaling of the red, green, and blue channels, and is why color balance tools in
Photoshop and the GIMP have a white eyedropper tool. It has been demonstrated that
performing the white balancing in the phosphor set assumed by sRGB tends to produce
large errors in chromatic colors, even though it can render the neutral surfaces perfectly
neutral.
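A minimal sketch of this diagonal scaling in Python (with the caveat just noted about errors in chromatic colors), assuming 8-bit channel values and a caller-supplied reference pixel; the function and variable names here are illustrative, not from any particular editor:

```python
def white_balance_rgb(pixels, white):
    """Scale R, G, B so the chosen reference pixel becomes pure white.

    pixels: iterable of (r, g, b) tuples in 0-255
    white:  (r, g, b) of a surface believed to be white
    """
    rw, gw, bw = white
    gains = (255.0 / rw, 255.0 / gw, 255.0 / bw)
    return [tuple(min(255, round(c * g)) for c, g in zip(px, gains))
            for px in pixels]

# A gray card photographed under warm light reads (240, 225, 180);
# after balancing it maps to (255, 255, 255).
print(white_balance_rgb([(240, 225, 180), (120, 110, 90)],
                        (240, 225, 180)))
```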
Scaling X, Y, Z
If the image may be transformed into CIE XYZ tristimulus values, the color balancing
may be performed there. This has been termed a “wrong von Kries” transformation.
Although it has been demonstrated to offer usually poorer results than balancing in
monitor RGB, it is mentioned here as a bridge to other things. Mathematically, one
computes:

X = (Xw/X'w)·X',   Y = (Yw/Y'w)·Y',   Z = (Zw/Z'w)·Z'

where X, Y, and Z are the color-balanced tristimulus values; Xw, Yw, and Zw are the
tristimulus values of the viewing illuminant (the white point to which the image is being
transformed to conform); X'w, Y'w, and Z'w are the tristimulus values of an object
believed to be white in the un-color-balanced image, and X', Y', and Z' are the tristimulus
values of a pixel in the un-color-balanced image. If the tristimulus values of the monitor
primaries are in a matrix P so that:

[X Y Z]ᵀ = P · [LR LG LB]ᵀ

where LR, LG, and LB are the un-gamma-corrected monitor RGB, one may use:

[LR LG LB]ᵀ = P⁻¹ · diag(Xw/X'w, Yw/Y'w, Zw/Z'w) · P · [L'R L'G L'B]ᵀ
Von Kries's method
Johannes von Kries, whose theory of rods and three different color-sensitive cone types
in the retina has survived as the dominant explanation of color sensation for over 100
years, motivated the method of converting color to the LMS color space, representing the
effective stimuli for the Long-, Medium-, and Short-wavelength cone types that are
modeled as adapting independently. A 3x3 matrix converts RGB or XYZ to LMS, and
then the three LMS primary values are scaled to balance the neutral; the color can then be
converted back to the desired final color space:

L = L'/L'w,   M = M'/M'w,   S = S'/S'w

where L, M, and S are the color-balanced LMS cone tristimulus values; L'w, M'w, and S'w
are the tristimulus values of an object believed to be white in the un-color-balanced
image, and L', M', and S' are the tristimulus values of a pixel in the un-color-balanced
image.
Matrices to convert to LMS space were not specified by von Kries, but can be derived
from CIE color matching functions and LMS color matching functions when the latter are
specified; matrices can also be found in reference books.
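The following Python sketch applies a von Kries-style adaptation using the Bradford matrix, one common published choice of XYZ-to-LMS transform (as noted above, von Kries specified no matrix himself); the white points are standard 2° observer values for illuminants A and D65.

```python
import numpy as np

# Bradford matrix: a common XYZ-to-(cone-like) LMS transform.
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ color from src_white to dst_white.

    Each LMS component is scaled independently by the ratio of the
    destination white's LMS to the source white's LMS, then the
    result is converted back to XYZ.
    """
    gains = (M @ np.asarray(dst_white)) / (M @ np.asarray(src_white))
    return np.linalg.inv(M) @ (gains * (M @ np.asarray(xyz)))

# Map a color captured under illuminant A to its appearance under D65.
a_white = [1.09850, 1.00000, 0.35585]
d65_white = [0.95047, 1.00000, 1.08883]
print(von_kries_adapt([0.5, 0.4, 0.2], a_white, d65_white))
```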
Scaling camera RGB
By Viggiano's measure, and using his model of Gaussian camera spectral sensitivities,
most camera RGB spaces performed better than either monitor RGB or XYZ. If the
camera's raw RGB values are known, one may use the 3×3 diagonal matrix:

R = R'/R'w,   G = G'/G'w,   B = B'/B'w

(with R'w, G'w, and B'w the raw values of an object believed to be white) and then
convert to a working RGB space such as sRGB or Adobe RGB after balancing.
Preferred chromatic adaptation spaces
Comparisons of images balanced by diagonal transforms in a number of different RGB
spaces have identified several such spaces that work better than others, and better than
camera or monitor spaces, for chromatic adaptation, as measured by several color
appearance models; the systems that performed statistically as well as the best on the
majority of the image test sets used were the "Sharp", "Bradford", "CMCCAT", and
"ROMM" spaces.
General illuminant adaptation
The best color matrix for adapting to a change in illuminant is not necessarily a diagonal
matrix in a fixed color space. It has long been known that if the space of illuminants can
be described as a linear model with N basis terms, the proper color transformation will be
the weighted sum of N fixed linear transformations, not necessarily consistently
diagonalizable.
Chapter-5
Depth of Field
A macro photograph with very shallow depth of field
Shallow depth of field can yield dramatic results and greatly emphasize the subject.
In optics, particularly as it relates to film and photography, the depth of field (DOF) is
the portion of a scene that appears acceptably sharp in the image. Although a lens can
precisely focus at only one distance, the decrease in sharpness is gradual on each side of
the focused distance, so that within the DOF, the unsharpness is imperceptible under
normal viewing conditions.
In some cases, it may be desirable to have the entire image sharp, and a large DOF is
appropriate. In other cases, a small DOF may be more effective, emphasizing the subject
while de-emphasizing the foreground and background. In cinematography, a large DOF
is often called deep focus, and a small DOF is often called shallow focus.
The DOF is determined by the camera-to-subject distance, the lens focal length, the lens
f-number, and the format size or circle of confusion criterion.
For a given format size, at moderate subject distances, DOF is approximately determined
by the subject magnification and the lens f-number. For a given f-number, increasing the
magnification, either by moving closer to the subject or using a lens of greater focal
length, decreases the DOF; decreasing magnification increases DOF. For a given subject
magnification, increasing the f-number (decreasing the aperture diameter) increases the
DOF; decreasing f-number decreases DOF.
When the “same picture” is taken in two different format sizes from the same distance at
the same f-number with lenses that give the same angle of view, and the final images
(e.g., in prints, or on a projection screen or electronic display) are the same size, the
smaller format has greater DOF.
Many small-format digital SLR camera systems allow using many of the same lenses on
both full-frame and “cropped format” cameras. If the subject distance is adjusted to
provide the same field of view at the subject, at the same f-number and final-image size,
the smaller format has greater DOF, as with the “same picture” comparison above. If
pictures are taken from the same distance using the same f-number, and the final images
are the same size, the smaller format has less DOF. If pictures taken from the same
subject distance are given the same enlargement, both final images will have the same
DOF. The final images will, of course, have different sizes.
Cropping an image and enlarging to the same size final image as an uncropped image
taken under the same conditions is equivalent to using a smaller format under the same
conditions, so the cropped image has less DOF.
When focus is set to the hyperfocal distance, the DOF extends from half the hyperfocal
distance to infinity, and the DOF is the largest possible for a given f-number.
The advent of digital technology in photography has provided additional means of
controlling the extent of image sharpness; some methods allow extended DOF that would
be impossible with traditional techniques, and some allow the DOF to be determined after
the image is made.
Acceptable sharpness
A 35 mm lens set to f/11. The depth-of-field scale (top) indicates that a subject which is
anywhere between 1 and 2 meters in front of the camera will be rendered acceptably
sharp. If the aperture were set to f/22 instead, everything from 0.7 meters to infinity
would appear to be in focus.
Precise focus is possible at only one distance; at that distance, a point object will produce
a point image. At any other distance, a point object is defocused, and will produce a blur
spot shaped like the aperture, which for the purpose of analysis is usually assumed to be
circular. When this circular spot is sufficiently small, it is indistinguishable from a point,
and appears to be in focus; it is rendered as “acceptably sharp”. The diameter of the circle
increases with distance from the point of focus; the largest circle that is indistinguishable
from a point is known as the acceptable circle of confusion, or informally, simply as the
circle of confusion. The acceptable circle of confusion is influenced by visual acuity,
viewing conditions, and the amount by which the image is enlarged (Ray 2000, 52–53).
The increase of the circle diameter with defocus is gradual, so the limits of depth of field
are not hard boundaries between sharp and unsharp.
Several other factors, such as subject matter, movement, and the distance of the subject
from the camera, also influence when a given defocus becomes noticeable.
The area within the depth of field appears sharp, while the areas in front of and beyond
the depth of field appear blurry.
The image format size affects the depth of field. If the original image is enlarged to make
the final image, the circle of confusion in the original image must be smaller than that in
the final image by the ratio of enlargement. Moreover, the larger the format size, the
longer a lens will need to be to capture the same framing as a smaller format. In motion
pictures, for example, a frame with a 12 degree horizontal field of view will require a
50 mm lens on 16 mm film, a 100 mm lens on 35 mm film, and a 250 mm lens on 65 mm
film. Conversely, using the same focal length lens with each of these formats will yield a
progressively wider image as the film format gets larger: a 50 mm lens has a horizontal
field of view of 12 degrees on 16 mm film, 23.6 degrees on 35 mm film, and 55.6 degrees
on 65 mm film. What this all means is that because the larger formats require longer
lenses than the smaller ones, they will accordingly have a smaller depth of field.
Therefore, compensations in exposure, framing, or subject distance need to be made in
order to make one format look like it was filmed in another format.
For a 35 mm motion picture, the image area on the negative is roughly 22 mm by 16 mm
(0.87 in by 0.63 in). The limit of tolerable error is usually set at 0.05 mm (0.002 in)
diameter. For 16 mm film, where the image area is smaller, the tolerance is stricter,
0.025 mm (0.001 in). Standard depth-of-field tables are constructed on this basis,
although generally 35 mm productions set it at 0.025 mm (0.001 in). Note that the
acceptable circle of confusion values for these formats are different because of the
relative amount of magnification each format will need in order to be projected on a
full-sized movie screen.
(A table for 35 mm still photography would be somewhat different since more of the film
is used for each image and the amount of enlargement is usually much less.)
Effect of lens aperture
Effect of aperture on blur and DOF. The points in focus (2) project points onto the image
plane (5), but points at different distances (1 and 3) project blurred images, or circles of
confusion. Decreasing the aperture size (4) reduces the size of the blur spots for points
not in the focused plane, so that the blurring is imperceptible, and all points are within the
DOF.
For a given subject framing and camera position, the DOF is controlled by the lens
aperture diameter, which is usually specified as the f-number, the ratio of lens focal
length to aperture diameter. Reducing the aperture diameter (increasing the f-number)
increases the DOF; however, it also reduces the amount of light transmitted, and
increases diffraction, placing a practical limit on the extent to which DOF can be
increased by reducing the aperture diameter.
Motion pictures make only limited use of this control; to produce a consistent image
quality from shot to shot, cinematographers usually choose a single aperture setting for
interiors and another for exteriors, and adjust exposure through the use of camera filters
or light levels. Aperture settings are adjusted more frequently in still photography, where
variations in depth of field are used to produce a variety of special effects.
DOF with various apertures: f/22, f/8, f/4, and f/2.8.
Obtaining maximum DOF
Lens DOF scales
Many lenses for small- and medium-format cameras include scales that indicate the DOF
for a given focus distance and f-number; the 35 mm lens in the image above is typical.
That lens includes distance scales in feet and meters; when a marked distance is set
opposite the large white index mark, the focus is set to that distance. The DOF scale
below the distance scales includes markings on either side of the index that correspond to
f-numbers; when the lens is set to a given f-number, the DOF extends between the
distances that align with the f-number markings.
Zone focusing
Detail from the lens shown above. The point half-way between the 1 m and 2 m marks
represents approximately 1.3 m.
When the 35 mm lens above is set to f/11 and focused at approximately 1.3 m, the DOF
(a “zone” of acceptable sharpness) extends from 1 m to 2 m. Conversely, the required
focus and f-number can be determined from the desired DOF limits by locating the near
and far DOF limits on the lens distance scale and setting focus so that the index mark is
centered between the near and far distance marks; the required f-number is determined by
finding the markings on the DOF scale that are closest to the near and far distance marks
(Ray 1994, 315). For the 35 mm lens above, if it were desired for the DOF to extend from
1 m to 2 m, focus would be set so that index mark was centered between the marks for
those distances, and the aperture would be set to f/11. The focus so determined would be
about 1.3 m, the approximate harmonic mean of the near and far distances. If the marks
for the near and far distances fall outside the marks for the largest f-number on the DOF
scale, the desired DOF cannot be obtained; for example, with the 35 mm lens above, it is
not possible to have the DOF extend from 0.7 m to infinity.
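As a check on the focus setting in this example: the harmonic mean of the 1 m and 2 m limits is 2 × 1 × 2 / (1 + 2) ≈ 1.33 m, which agrees with the approximately 1.3 m read from the lens scale.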
The DOF limits can be determined visually, by focusing on the farthest object to be
within the DOF and noting the distance mark on the lens distance scale, and repeating the
process for the nearest object to be within the DOF.
Some distance scales have markings for only a few distances; for example, the 35 mm
lens above shows only 3 ft and 5 ft on its upper scale. Using other distances for DOF
limits requires visual interpolation between marked distances; because the distance scale
is nonlinear, accurate interpolation can be difficult. In most cases, English and metric
distance markings are not coincident, so using both scales to note focused distances can
sometimes lessen the need for interpolation. Many autofocus lenses have smaller distance
and DOF scales and fewer markings than do comparable manual-focus lenses, so that
determining focus and f-number from the scales on an autofocus lens may be more
difficult than with a comparable manual-focus lens. In most cases, determining these
settings using the lens DOF scales on an autofocus lens requires that the lens or camera
body be set to manual focus.
On a view camera, the focus and f-number can be obtained by measuring the focus spread
and performing simple calculations; the procedure is described in more detail in the
section Focus and f-number from DOF limits. Some view cameras include DOF
calculators that indicate focus and f-number without the need for any calculations by the
photographer (Tillmanns 1997, 67–68; Ray 2002, 230–31).
Hyperfocal distance
The hyperfocal distance is the nearest focus distance at which the DOF extends to
infinity; focusing the camera at the hyperfocal distance results in the largest possible
depth of field for a given f-number (Ray 2000, 55). Focusing beyond the hyperfocal
distance does not increase the far DOF (which already extends to infinity), but it does
decrease the DOF in front of the subject, decreasing the total DOF. Some photographers
consider this wasting DOF; however, see Object field methods below for a rationale for
doing so. If the lens includes a DOF scale, the hyperfocal distance can be set by aligning
the infinity mark on the distance scale with the mark on the DOF scale corresponding to
the f-number to which the lens is set. For example, with the 35 mm lens shown above set
to f/11, aligning the infinity mark with the ‘11’ to the left of the index mark on the DOF
scale would set the focus to the hyperfocal distance. Focusing on the hyperfocal distance
is a special case of zone focusing in which the far limit of DOF is at infinity.
Object field methods
Traditional depth-of-field formulas and tables assume equal circles of confusion for near
and far objects. Some authors, such as Merklinger (1992), have suggested that distant
objects often need to be much sharper to be clearly recognizable, whereas closer objects,
being larger on the film, do not need to be so sharp. The loss of detail in distant objects
may be particularly noticeable with extreme enlargements. Achieving this additional
sharpness in distant objects usually requires focusing beyond the hyperfocal distance,
sometimes almost at infinity. For example, if photographing a cityscape with a traffic
bollard in the foreground, this approach, termed the object field method by Merklinger,
would recommend focusing very close to infinity, and stopping down to make the bollard
sharp enough. With this approach, foreground objects cannot always be made perfectly
sharp, but the loss of sharpness in near objects may be acceptable if recognizability of
distant objects is paramount.
Other authors (Adams 1980, 51) have taken the opposite position, maintaining that slight
unsharpness in foreground objects is usually more disturbing than slight unsharpness in
distant parts of a scene.
Moritz von Rohr also used an object field method, but unlike Merklinger, he used the
conventional criterion of a maximum circle of confusion diameter in the image plane,
leading to unequal front and rear depths of field.
Limited DOF: selective focus
At f/32, the background competes for the viewer’s attention.
At f/5.6, the flowers are isolated from the background.
At f/2.8, the cat is isolated from the background.
Depth of field can be anywhere from a fraction of a millimeter to virtually infinite. In
some cases, such as landscapes, it may be desirable to have the entire image sharp, and a
large DOF is appropriate. In other cases, artistic considerations may dictate that only a
part of the image be in focus, emphasizing the subject while de-emphasizing the
background, perhaps giving only a suggestion of the environment (Langford 1973, 81).
For example, a common technique in melodramas and horror films is a closeup of a
person's face, with someone just behind that person visible but out of focus. A portrait or
close-up still photograph might use a small DOF to isolate the subject from a distracting
background. The use of limited DOF to emphasize one part of an image is known as
selective focus, differential focus or shallow focus.
Although a small DOF implies that other parts of the image will be unsharp, it does not,
by itself, determine how unsharp those parts will be. The amount of background (or
foreground) blur depends on the distance from the plane of focus, so if a background is
close to the subject, it may be difficult to blur sufficiently even with a small DOF. In
practice, the lens f-number is usually adjusted until the background or foreground is
acceptably blurred, often without direct concern for the DOF.
Sometimes, however, it is desirable to have the entire subject sharp while ensuring that
the background is sufficiently unsharp. When the distance between subject and background is fixed, as is the case with many scenes, the DOF and the amount of background
blur are not independent. Although it is not always possible to achieve both the desired
subject sharpness and the desired background unsharpness, several techniques can be
used to increase the separation of subject and background.
For a given scene and subject magnification, the background blur increases with lens
focal length. If it is not important that background objects be unrecognizable, background
de-emphasis can be increased by using a lens of longer focal length and increasing the
subject distance to maintain the same magnification. This technique requires that
sufficient space in front of the subject be available; moreover, the perspective of the
scene changes because of the different camera position, and this may or may not be
acceptable.
The situation is not as simple if it is important that a background object, such as a sign, be
unrecognizable. The magnification of background objects also increases with focal
length, so with the technique just described, there is little change in the recognizability of
background objects. However, a lens of longer focal length may still be of some help;
because of the narrower angle of view, a slight change of camera position may suffice to
eliminate the distracting object from the field of view.
Although tilt and swing are normally used to maximize the part of the image that is
within the DOF, they also can be used, in combination with a small f-number, to give
selective focus to a plane that isn't perpendicular to the lens axis. With this technique, it is
possible to have objects at greatly different distances from the camera in sharp focus and
yet have a very shallow DOF. The effect can be interesting because it differs from what
most viewers are accustomed to seeing.
Near:far distribution
The DOF beyond the subject is always greater than the DOF in front of the subject. When
the subject is at the hyperfocal distance or beyond, the far DOF is infinite, so the ratio is
1:∞; as the subject distance decreases, near:far DOF ratio increases, approaching unity at
high magnification. For large apertures at typical portrait distances, the ratio is still close
to 1:1. The oft-cited rule that 1/3 of the DOF is in front of the subject and 2/3 is beyond
(a 1:2 ratio) is true only when the subject distance is 1/3 the hyperfocal distance.
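This special case is easy to verify with the formulas given later (DN = Hs/(H + s), DF = Hs/(H − s)): at s = H/3, DN = H/4 and DF = H/2, so the DOF in front of the subject is H/3 − H/4 = H/12 and the DOF beyond it is H/2 − H/3 = H/6, exactly a 1:2 ratio.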
DOF vs. format size
The comparative DOFs of two different format sizes depend on the conditions of the
comparison; the DOF for the smaller format can be either more than or less than that for
the larger format. In the discussion that follows, it is assumed that the final images from
both formats are the same size, are viewed from the same distance, and are judged with
the same circle of confusion criterion.
Derivations of the effects of format size are given under Derivation of the DOF formulas
in the subsection DOF vs. format size.
“Same picture” for both formats
For the common “same picture” comparison, i.e., the same camera position and angle of
view, DOF is, to a first approximation, inversely proportional to format size (Stroebel
1976, 139). More precisely, if photographs with the same final-image size are taken in
two different camera formats at the same subject distance with the same angle of view
and f-number, the DOF is, to a first approximation, inversely proportional to the format
size. Though commonly used when comparing formats, the approximation is valid only
when the subject distance is large in comparison with the focal length of the larger format
and small in comparison with the hyperfocal distance of the smaller format.
To maintain the same angle of view, the lens focal lengths must be in proportion to the
format sizes. Assuming, for purposes of comparison, that the 4×5 format is four times the
size of 35 mm format, if a 4×5 camera used a 300 mm lens, a 35 mm camera would need
a 75 mm lens for the same field of view. For the same f-number, the image made with the
35 mm camera would have four times the DOF of the image made with the 4×5 camera.
Same focal length for both formats
Many small-format digital SLR camera systems allow using many of the same lenses on
both full-frame and “cropped format” cameras. If the subject distance is adjusted to
provide the same field of view at the subject, at the same f-number and final-image size,
the smaller format has more DOF, as with the “same picture” comparison above. But the
pictures from the two formats will differ because of the different angles of view and the
different viewpoints.
If pictures are taken from the same distance using the same lens and f-number, and the
final images are the same size, the original image (that recorded on the film or electronic
sensor) from the smaller format requires greater enlargement for the same size final
image, and the smaller format has less DOF. The pictures from the two formats will
differ because of the different angles of view. If the larger format is cropped to the
captured area of the smaller format, creating final printed images with the same field of
view, then they will have the same DOF.
Cropping
Cropping an image and enlarging to the same size final image as an uncropped image
taken under the same conditions with a smaller format is equivalent to using the smaller
format; the cropped image has less DOF than the original image from the larger format
(Stroebel 1976, 134, 136–37).
Same DOF for both formats
In many cases, the DOF is fixed by the requirements of the desired image. For a given
DOF and field of view, the required f-number is proportional to the format size. For
example, if a 35 mm camera required f/11, a 4×5 camera would require f/45 to give the
same DOF. For the same ISO speed, the exposure time on the 4×5 would be sixteen times
as long; if the 35 mm camera required 1/250 second, the 4×5 camera would require 1/15
second. The longer exposure time with the larger camera might result in motion blur,
especially with windy conditions, a moving subject, or an unsteady camera.
Adjusting the f-number to the camera format is equivalent to maintaining the same
absolute aperture diameter; when set to the same absolute aperture diameters, both
formats have the same DOF.
Advantages and disadvantages of greater DOF
The greater DOF with the smaller format when taking the “same picture” can be either an
advantage or a disadvantage, depending on the desired effect. For the same amount of
foreground and background blur, a small-format camera requires a smaller f-number and
allows a shorter exposure time than a large-format camera; however, many point-and-shoot digital cameras cannot provide a very shallow DOF. For example, a point-and-shoot digital camera with a 1/1.8″ sensor (7.18 mm × 5.32 mm) at a normal focal length
and f/2.8 has the same DOF as a 35 mm camera with a normal lens at f/13.
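The f/2.8-to-f/13 equivalence follows from the ratio of sensor diagonals: √(7.18² + 5.32²) ≈ 8.9 mm for the 1/1.8″ sensor versus √(36² + 24²) ≈ 43.3 mm for the 35 mm frame, a factor of about 4.8; and 2.8 × 4.8 ≈ 13.6, or roughly f/13.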
Camera movements and DOF
When the lens axis is perpendicular to the image plane, as is normally the case, the plane
of focus (POF) is parallel to the image plane, and the DOF extends between parallel
planes on either side of the POF. When the lens axis is not perpendicular to the image
plane, the POF is no longer parallel to the image plane; the ability to rotate the POF is
known as the Scheimpflug principle. Rotation of the POF is accomplished with camera
movements (tilt, a rotation of the lens about a horizontal axis, or swing, a rotation about a
vertical axis). Tilt and swing are available on most view cameras, and are also available
with specific lenses on some small- and medium-format cameras.
When the POF is rotated, the near and far limits of DOF are no longer parallel; the DOF
becomes wedge-shaped, with the apex of the wedge nearest the camera (Merklinger
1993, 31–32; Tillmanns 1997, 71). With tilt, the height of the DOF increases with
distance from the camera; with swing, the width of the DOF increases with distance.
In some cases, rotating the POF can better fit the DOF to the scene, and achieve the
required sharpness at a smaller f-number. Alternatively, rotating the POF, in combination
with a small f-number, can minimize the part of an image that is within the DOF.
DOF formulas
The basis of these formulas is given in the section Derivation of the DOF formulas; refer
to the diagram in that section for illustration of the quantities discussed below.
Hyperfocal Distance
Let f be the lens focal length, N be the lens f-number, and c be the circle of confusion for
a given image format. The hyperfocal distance H is given by

H = f²/(Nc) + f
Moderate-to-large distances
Let s be the distance at which the camera is focused (the “subject distance”). When s is
large in comparison with the lens focal length, the distance DN from the camera to the
near limit of DOF and the distance DF from the camera to the far limit of DOF are

DN = Hs/(H + s)

and

DF = Hs/(H − s)   for s < H.

When the subject distance is the hyperfocal distance,

DN = H/2   and   DF = ∞.

The depth of field DF − DN is

DOF = 2Hs²/(H² − s²)   for s < H.

For s ≥ H, the far limit of DOF is at infinity and the DOF is infinite; of course, only
objects at or beyond the near limit of DOF will be recorded with acceptable sharpness.
Substituting for H and rearranging, DOF can be expressed as

DOF = 2Ncf²s²/(f⁴ − N²c²s²)

Thus, for a given image format, depth of field is determined by three factors: the focal
length of the lens, the f-number of the lens opening (the aperture), and the
camera-to-subject distance.
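These formulas translate directly into code. The minimal Python sketch below assumes c = 0.03 mm, a conventional circle-of-confusion value for 35 mm still photography, and reproduces the zone-focusing example given earlier (35 mm lens at f/11 focused at 1.3 m, giving a DOF of roughly 1 m to 2 m).

```python
def hyperfocal(f, N, c):
    """Hyperfocal distance H = f^2/(N c) + f (all lengths in mm)."""
    return f * f / (N * c) + f

def dof_limits(s, f, N, c):
    """Near and far DOF limits for subject distance s (mm).

    Uses the moderate-to-large-distance approximations above; the far
    limit is infinite once s reaches the hyperfocal distance.
    """
    H = hyperfocal(f, N, c)
    near = H * s / (H + s)
    far = H * s / (H - s) if s < H else float("inf")
    return near, far

print(hyperfocal(35, 11, 0.03))        # about 3,747 mm
print(dof_limits(1300, 35, 11, 0.03))  # about (965, 1991) mm
```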
Close-up
The integrated circuit package, which is in focus in this macro shot, is 2.5 mm higher
than the circuit board it is mounted on. In macro photography even small distances can
blur an object out of focus. At f/32 every object is within the DOF, whereas the closer
one gets to f/5, the fewer the objects that are sharp. The images were taken with a
105 mm f/2.8 macro lens. At f/5 the small dust particles at the bottom right corner
illustrate the circle of confusion phenomenon.
When the subject distance s approaches the focal length, using the formulas given above
can result in significant errors. For close-up work, the hyperfocal distance has little
applicability, and it usually is more convenient to express DOF in terms of image
magnification. Let m be the magnification; when the subject distance is small in
comparison with the hyperfocal distance,

DOF ≈ 2Nc(m + 1)/m²

so that for a given magnification, DOF is independent of focal length. Stated otherwise,
for the same subject magnification, at the same f-number, all focal lengths used on a
given image format give approximately the same DOF. This statement is true only when
the subject distance is small in comparison with the hyperfocal distance, however.
The discussion thus far has assumed a symmetrical lens for which the entrance and exit
pupils coincide with the front and rear nodal planes, and for which the pupil magnification (the ratio of exit pupil diameter to that of the entrance pupil) is unity. Although
this assumption usually is reasonable for large-format lenses, it often is invalid for
medium- and small-format lenses.
When the subject distance is small in comparison with the hyperfocal distance, the DOF
for an asymmetrical lens is

DOF ≈ (2Nc/m²) · (1 + m/P)

where P is the pupil magnification. When the pupil magnification is unity, this equation
reduces to that for a symmetrical lens.
Except for close-up and macro photography, the effect of lens asymmetry is minimal. At
unity magnification, however, the errors from neglecting the pupil magnification can be
significant. Consider a telephoto lens with P = 0.5 and a retrofocus wide-angle lens with
P = 2, at m = 1.0. The asymmetrical-lens formula gives DOF = 6Nc and DOF = 3Nc,
respectively. The symmetrical-lens formula gives DOF = 4Nc in either case. The errors
are −33% and 33%, respectively.
Focus and f-number from DOF limits
For given near and far DOF limits DN and DF, the required f-number is smallest when
focus is set to

s = 2DNDF/(DN + DF),

the harmonic mean of the near and far distances. When the subject distance is large in
comparison with the lens focal length, the required f-number is

N ≈ (f²/c) · (DF − DN)/(2DNDF)

When the far limit of DOF is at infinity,

s = 2DN

and

N ≈ f²/(2cDN)
In practice, these settings usually are determined on the image side of the lens, using
measurements on the bed or rail with a view camera, or using lens DOF scales on
manual-focus lenses for small- and medium-format cameras. If vN and vF are the image
distances that correspond to the near and far limits of DOF, the required f-number is
minimized when the image distance v is

v = (vN + vF)/2

In practical terms, focus is set to halfway between the near and far image distances. The
required f-number is

N ≈ (vN − vF)/(2c)
The image distances are measured from the camera's image plane to the lens's image
nodal plane, which is not always easy to locate. In most cases, focus and f-number can be
determined with sufficient accuracy using the approximate formulas above, which require
only the difference between the near and far image distances; view camera users
sometimes refer to the difference vN − vF as the focus spread (Hansma 1996, 55). Most
lens DOF scales are based on the same concept.
The focus spread is related to the depth of focus. Ray (2000, 56) gives two definitions of
the latter. The first is the tolerance of the position of the image plane for which an object
remains acceptably sharp; the second is that the limits of depth of focus are the image-side conjugates of the near and far limits of DOF. With the first definition, focus spread
and depth of focus are usually close in value though conceptually different. With the
second definition, focus spread and depth of focus are the same.
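A companion sketch for the object-side case, again with c = 0.03 mm assumed; it recovers the settings of the 1 m to 2 m zone-focusing example.

```python
def focus_and_fnumber(dn, df, f, c):
    """Object-side focus distance and minimum f-number for DOF limits.

    dn, df: near and far limits of the desired DOF (mm)
    f, c:   focal length and circle of confusion (mm)
    Valid when the subject distance is large compared with f.
    """
    s = 2.0 * dn * df / (dn + df)                  # harmonic mean
    N = (f * f / c) * (df - dn) / (2.0 * dn * df)  # required f-number
    return s, N

# DOF from 1 m to 2 m with a 35 mm lens: focus near 1.33 m at about
# f/10, consistent with the f/11 marking in the example above.
print(focus_and_fnumber(1000, 2000, 35, 0.03))
```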
Foreground and background blur
If a subject is at distance s and the foreground or background is at distance D, let the
distance between the subject and the foreground or background be indicated by

xd = |D − s|
The blur disk diameter b of a detail at distance xd from the subject can be expressed as a
function of the focal length f, subject magnification ms, and f-number N according to

b = (f·ms/N) · xd/(s ± xd)

The minus sign applies to a foreground object, and the plus sign applies to a background
object.
The blur increases with the distance from the subject; when b ≤ c, the detail is within
the depth of field, and the blur is imperceptible. If the detail is only slightly outside the
DOF, the blur may be only barely perceptible.
For a given subject magnification, f-number, and distance from the subject of the
foreground or background detail, the degree of detail blur varies with the lens focal
length. For a background detail, the blur increases with focal length; for a foreground
detail, the blur decreases with focal length. For a given scene, the positions of the subject,
foreground, and background usually are fixed, and the distance between subject and the
foreground or background remains constant regardless of the camera position; however,
to maintain constant magnification, the subject distance must vary if the focal length is
changed. For a small distance between the subject and the foreground or background detail, the effect of focal length is small; for a large distance, the effect can be significant. For a reasonably distant background detail, the blur disk diameter is

$b \approx \dfrac{f m_s}{N},$

depending only on focal length.
The blur diameter of foreground details is very large if the details are close to the lens.
The magnification of the detail also varies with focal length; for a given detail, the ratio
of the blur disk diameter to imaged size of the detail is independent of focal length,
depending only on the detail size and its distance from the subject. This ratio can be
useful when it is important that the background be recognizable (as usually is the case in
evidence or surveillance photography), or unrecognizable (as might be the case for a
pictorial photographer using selective focus to isolate the subject from a distracting
background). As a general rule, an object is recognizable if the blur disk diameter is one-tenth to one-fifth the size of the object or smaller (Williams 1990, 205), and unrecognizable when the blur disk diameter is the object size or greater.
The effect of focal length on background blur is illustrated in van Walree's article on
Depth of field.
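The behavior is easy to explore numerically. The following sketch evaluates the blur formula above for a background detail, using a hypothetical lens and distances:

```python
# Blur disk diameter for a detail x_d behind the subject, from
# b = (f m_s / N) x_d / (s + x_d).  Hypothetical lens and distances.

def background_blur(f, m_s, N, s, x_d):
    return (f * m_s / N) * x_d / (s + x_d)

f, N = 85.0, 2.8              # 85 mm lens at f/2.8 (mm throughout)
s = 2000.0                    # subject at 2 m
m_s = f / (s - f)             # subject magnification, m = f/(s - f)
for x_d in (100.0, 1000.0, 1e6):
    b = background_blur(f, m_s, N, s, x_d)
    print(f"x_d = {x_d:>9.0f} mm: b = {b:.3f} mm")
# As x_d grows, b approaches the limiting value f m_s / N for a
# distant background (about 1.35 mm at the image plane here).
```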
Practical complications
The distance scales on most medium- and small-format lenses indicate distance from the
camera’s image plane. Most DOF formulas, including those here, use the object distance
s from the lens’s front nodal plane, which often is not easy to locate. Moreover, for many
zoom lenses and internal-focusing non-zoom lenses, the location of the front nodal plane,
as well as focal length, changes with subject distance. When the subject distance is large
in comparison with the lens focal length, the exact location of the front nodal plane is not
critical; the distance is essentially the same whether measured from the front of the lens,
the image plane, or the actual nodal plane. The same is not true for close-up photography;
at unity magnification, a slight error in the location of the front nodal plane can result in a
DOF error greater than the errors from any approximations in the DOF equations.
The asymmetrical lens formulas require knowledge of the pupil magnification, which
usually is not specified for medium- and small-format lenses. The pupil magnification
can be estimated by looking into the front and rear of the lens and measuring the
diameters of the apparent apertures, and computing the ratio of rear diameter to front
diameter (Shipman 1977, 144). However, for many zoom lenses and internal-focusing
non-zoom lenses, the pupil magnification changes with subject distance, and several
measurements may be required.
Limitations
Most DOF formulas, including those discussed here, employ several simplifications:
1. Paraxial (Gaussian) optics is assumed, and technically, the formulas are valid only
for rays that are infinitesimally close to the lens axis. However, Gaussian optics
usually is more than adequate for determining DOF, and non-paraxial formulas
are sufficiently complex that requiring their use would make determination of
DOF impractical in most cases.
2. Lens aberrations are ignored. Including the effects of aberrations is nearly impossible, because doing so requires knowledge of the specific lens design. Moreover,
in well-designed lenses, most aberrations are well corrected, and at least near the
optical axis, often are almost negligible when the lens is stopped down 2–3 steps
from maximum aperture. Because lenses usually are stopped down at least to this
point when DOF is of interest, ignoring aberrations usually is reasonable. Not all
aberrations are reduced by stopping down, however, so actual sharpness may be
slightly less than predicted by DOF formulas.
3. Diffraction is ignored. DOF formulas imply that any arbitrary DOF can be
achieved by using a sufficiently large f-number. Because of diffraction, however,
this isn't really true, as is discussed further in the section DOF and diffraction.
4. For digital capture with color filter array sensors, demosaicing is ignored. Demosaicing alone would normally decrease sharpness, but the demosaicing algorithm
used might also include sharpening.
5. Post-capture manipulation of the image is ignored. Sharpening via techniques
such as deconvolution or unsharp mask can increase the apparent sharpness in the
final image; conversely, image noise reduction can reduce sharpness.
6. The resolutions of the imaging medium and the display medium are ignored. If
the resolution of either medium is of the same order of magnitude as the optical
resolution, the sharpness of the final image is reduced, and optical blurring is
harder to detect.
The lens designer cannot restrict analysis to Gaussian optics and cannot ignore lens
aberrations. However, the requirements of practical photography are less demanding than
those of lens design, and despite the simplifications employed in development of most
DOF formulas, these formulas have proven useful in determining camera settings that
result in acceptably sharp pictures. It should be recognized that DOF limits are not hard
boundaries between sharp and unsharp, and that there is little point in determining DOF
limits to a precision of many significant figures.
DOF and diffraction
If the camera position and image framing (i.e., angle of view) have been chosen, the only
means of controlling DOF is the lens aperture. Most DOF formulas imply that any
arbitrary DOF can be achieved by using a sufficiently large f-number. Because of diffraction, however, this isn't really true. Once a lens is stopped down to where most
aberrations are well corrected, stopping down further will decrease sharpness in the plane
of focus. At the DOF limits, however, further stopping down decreases the size of the
defocus blur spot, and the overall sharpness may still increase. Eventually, the defocus
blur spot becomes negligibly small, and further stopping down serves only to decrease
sharpness even at DOF limits (Gibson 1975, 64).
There is thus a tradeoff between sharpness in the POF and sharpness at the DOF limits.
But the sharpness in the POF is always greater than that at the DOF limits; if the blur at
the DOF limits is imperceptible, the blur in the POF is imperceptible as well.
For general photography, diffraction at DOF limits typically becomes significant only at
fairly large f-numbers; because large f-numbers typically require long exposure times,
motion blur may cause greater loss of sharpness than the loss from diffraction. The size
of the diffraction blur spot depends on the effective f-number $N(1 + m)$, however, so
diffraction is a greater issue in close-up photography, and the tradeoff between DOF and
overall sharpness can become quite noticeable (Gibson 1975, 53; Lefkowitz 1979, 84).
Optimal f-number
As a lens is stopped down, the defocus blur at the DOF limits decreases but diffraction
blur increases. The presence of these two opposing factors implies a point at which the
combined blur spot is minimized (Gibson 1975, 64); at that point, the f-number is optimal
for image sharpness. If the final image is viewed under normal conditions (e.g., an
8″×10″ image viewed at 10″), it may suffice to determine the f-number using criteria for
minimum required sharpness, and there may be no practical benefit from further reducing
the size of the blur spot. But this may not be true if the final image is viewed under more
demanding conditions, e.g., a very large final image viewed at normal distance, or a
portion of an image enlarged to normal size (Hansma 1996). Hansma also suggests that
the final-image size may not be known when a photograph is taken, and obtaining the
maximum practicable sharpness allows the decision to make a large final image to be
made at a later time.
Determining combined defocus and diffraction
Hansma (1996) and Peterson (1996) have discussed determining the combined effects of
defocus and diffraction using a root-square combination of the individual blur spots.
Hansma's approach determines the f-number that will give the maximum possible
sharpness; Peterson's approach determines the minimum f-number that will give the
desired sharpness in the final image, and yields a maximum focus spread for which the
desired sharpness can be achieved. In combination, the two methods can be regarded as
giving a maximum and minimum f-number for a given situation, with the photographer
free to choose any value within the range, as conditions (e.g., potential motion blur)
permit. Gibson (1975, 64) gives a similar discussion, additionally considering blurring
effects of camera lens aberrations, enlarging lens diffraction and aberrations, the negative
emulsion, and the printing paper. Couzin (1982, 1098) gives a formula essentially the
same as Hansma’s for optimal f-number, but does not discuss its derivation.
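The root-square idea can be illustrated numerically. The sketch below is not Hansma's or Peterson's actual procedure; it simply combines, root-square, a defocus blur taken as (vN − vF)/(2N) from the focus-spread relation with the Airy diffraction diameter of approximately 2.44λN, and minimizes over N. The wavelength and focus spread are assumed sample values:

```python
# Tradeoff between defocus and diffraction versus f-number: root-square
# combination of the two blur spots, minimized over N.  A sketch of the
# idea only, not Hansma's or Peterson's exact method; values are assumed.
import math

wavelength = 0.00055          # 550 nm, in mm
focus_spread = 2.0            # v_N - v_F in mm (arbitrary example)

def combined_blur(N):
    defocus = focus_spread / (2 * N)         # from N ~ (v_N - v_F)/(2c)
    diffraction = 2.44 * wavelength * N      # Airy disk diameter
    return math.hypot(defocus, diffraction)  # root-square combination

best_N = min((n / 10 for n in range(10, 1280)), key=combined_blur)
print(f"optimal f-number ~ f/{best_N:.1f}, blur {combined_blur(best_N):.4f} mm")
# Stopping down beyond this point increases total blur even at the DOF limits.
```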
Hopkins (1955), Stokseth (1969), and Williams and Becklund (1989) have discussed the
combined effects using the modulation transfer function. Conrad's Depth of Field in
Depth (PDF), and Jacobson's Photographic Lenses Tutorial discuss the use of Hopkins's
method specifically in regard to DOF.
Photolithography
In semiconductor photolithography applications, depth of field is extremely important as
integrated circuit layout features must be printed with high accuracy at extremely small
size. The difficulty is that the wafer surface is not perfectly flat, but may vary by several
micrometres. Even this small variation causes some distortion in the projected image, and
results in unwanted variations in the resulting pattern. Thus photolithography engineers
take extreme measures to maximize the optical depth of field of the photolithography
equipment. To minimize this distortion further, semiconductor manufacturers may use
chemical mechanical polishing to make the wafer surface even flatter before lithographic
patterning.
Ophthalmology and optometry
A person may sometimes experience better vision in daylight than at night because of an
increased depth of field due to constriction of the pupil (i.e., miosis).
Digital techniques for extending DOF
A series of images demonstrating a six-image focus bracket of a tachinid fly. The first two images illustrate the typical DOF of a single image at f/10, while the third image is a composite of six images.
Focus stacking
Focus stacking is a digital image processing technique which combines multiple images
taken at different focus distances to give a resulting image with a greater depth of field
than any of the individual source images. Available programs for multi-shot DOF
enhancement include Adobe Photoshop, Syncroscopy AutoMontage, PhotoAcute Studio,
Helicon Focus and CombineZM.
Getting sufficient depth of field can be particularly challenging in macro photography.
The images to the right illustrate the extended DOF that can be achieved by combining
multiple images.
Wavefront coding
Wavefront coding is a method that optically codes the image-forming rays so that every plane in the scene is out of focus by the same constant amount; the uniform blur can then be removed digitally, yielding an image with extended depth of field.
Plenoptic cameras
A plenoptic camera uses a microlens array to capture 4D light field information about a
scene.
Derivation of the DOF formulas
DOF for symmetrical lens.
DOF limits
A symmetrical lens is illustrated at right. The subject, at distance s, is in focus at image
distance v. Point objects at distances DF and DN would be in focus at image distances vF
and vN, respectively; at image distance v, they are imaged as blur spots. The depth of field
is controlled by the aperture stop diameter d; when the blur spot diameter is equal to the
acceptable circle of confusion c, the near and far limits of DOF are at DN and DF. From
similar triangles,

$\dfrac{v_N - v}{v_N} = \dfrac{c}{d}$

and

$\dfrac{v - v_F}{v_F} = \dfrac{c}{d}.$
It usually is more convenient to work with the lens f-number than the aperture diameter;
the f-number N is related to the lens focal length f and the aperture diameter d by

$N = \dfrac{f}{d};$

substitution into the previous equations gives

$\dfrac{v_N - v}{v_N} = \dfrac{Nc}{f}$ and $\dfrac{v - v_F}{v_F} = \dfrac{Nc}{f}.$
Rearranging to solve for vN and vF gives

$v_N = \dfrac{fv}{f - Nc}$

and

$v_F = \dfrac{fv}{f + Nc}.$
The image distance v is related to an object distance s by the thin-lens equation

$\dfrac{1}{s} + \dfrac{1}{v} = \dfrac{1}{f};$

applying this to vN and vF gives

$\dfrac{1}{D_N} + \dfrac{1}{v_N} = \dfrac{1}{f}$

and

$\dfrac{1}{D_F} + \dfrac{1}{v_F} = \dfrac{1}{f};$
solving for v, vN, and vF in these three equations, substituting into the two previous equations, and rearranging gives the near and far limits of DOF:

$D_N = \dfrac{s f^2}{f^2 + Nc\,(s - f)}$

and

$D_F = \dfrac{s f^2}{f^2 - Nc\,(s - f)}.$
Hyperfocal distance
Solving for the focus distance s and setting the far limit of DOF DF to infinity gives

$s = H = \dfrac{f^2}{Nc} + f,$

where H is the hyperfocal distance. Setting the subject distance to the hyperfocal distance and solving for the near limit of DOF gives

$D_N = \dfrac{H}{2}.$
For any practical value of H, the focal length is negligible in comparison, so that

$H \approx \dfrac{f^2}{Nc}.$
Substituting the approximate expression for hyperfocal distance into the formulas for the near and far limits of DOF gives

$D_N \approx \dfrac{Hs}{H + s}$

and

$D_F \approx \dfrac{Hs}{H - s} \quad (s < H).$
Combining, the depth of field DF − DN is

$\mathrm{DOF} \approx \dfrac{2Hs^2}{H^2 - s^2} \quad (s < H).$
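These approximate formulas are straightforward to evaluate; a minimal sketch with assumed sample values (all lengths in mm):

```python
# Hyperfocal distance and DOF limits from the approximate formulas
# H ~ f^2/(Nc), D_N ~ Hs/(H + s), D_F ~ Hs/(H - s).  Sample values only,
# all lengths in mm.

def hyperfocal(f, N, c):
    return f**2 / (N * c)

def dof_limits(s, H):
    D_N = H * s / (H + s)
    D_F = H * s / (H - s) if s < H else float("inf")
    return D_N, D_F

f, N, c = 50.0, 8.0, 0.03            # 50 mm lens at f/8, c = 0.03 mm
H = hyperfocal(f, N, c)
D_N, D_F = dof_limits(3000.0, H)     # focused at 3 m
print(f"H = {H/1000:.1f} m; DOF from {D_N/1000:.2f} m to {D_F/1000:.2f} m")
# H = 10.4 m; DOF from 2.33 m to 4.21 m
```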
Hyperfocal magnification
Magnification m can be expressed as

$m = \dfrac{f}{s - f};$

at the hyperfocal distance, the magnification mh then is

$m_h = \dfrac{f}{H - f}.$

Substituting $\dfrac{f^2}{Nc} + f$ for H and simplifying gives

$m_h = \dfrac{Nc}{f}.$
DOF in terms of magnification
It is sometimes convenient to express DOF in terms of magnification m. Substituting

$m = \dfrac{f}{s - f},$ or equivalently $s = \dfrac{(1 + m)\,f}{m},$

into the exact formula for DOF and rearranging gives

$\mathrm{DOF} = \dfrac{2Ncf^2\,(1 + m)}{f^2 m^2 - N^2 c^2},$

after Larmore (1965, 163).
DOF vs. focal length
Multiplying the numerator and denominator of the exact formula above by $1/f^2$ gives

$\mathrm{DOF} = \dfrac{2Nc\,(1 + m)}{m^2 - \left(\dfrac{Nc}{f}\right)^2}.$
If the f-number and circle of confusion are constant, decreasing the focal length f
increases the second term in the denominator, decreasing the denominator and increasing
the value of the right-hand side, so that a shorter focal length gives greater DOF.
The term in parentheses in the denominator is the hyperfocal magnification mh, so that

$\mathrm{DOF} = \dfrac{2Nc\,(1 + m)}{m^2 - m_h^2}.$
As subject distance is decreased, the subject magnification increases, and eventually becomes large in comparison with the hyperfocal magnification. Thus the effect of focal length is greatest near the hyperfocal distance, and decreases as subject distance is
decreased. However, the near/far perspective will differ for different focal lengths, so the
difference in DOF may not be readily apparent.
When $m \gg m_h$, the $m_h^2$ term in the denominator becomes negligible, and

$\mathrm{DOF} \approx \dfrac{2Nc\,(1 + m)}{m^2},$

so that for a given magnification, DOF is essentially independent of focal length. Stated
otherwise, for the same subject magnification and the same f-number, all focal lengths for
a given image format give approximately the same DOF. This statement is true only
when the subject distance is small in comparison with the hyperfocal distance, however.
Moderate-to-large distances
When the subject distance is large in comparison with the lens focal length,

$D_N \approx \dfrac{Hs}{H + s}$

and

$D_F \approx \dfrac{Hs}{H - s},$

so that

$\mathrm{DOF} \approx \dfrac{2Hs^2}{H^2 - s^2}.$
For $s \ge H$, the far limit of DOF is at infinity and the DOF is infinite; of course, only
objects at or beyond the near limit of DOF will be recorded with acceptable sharpness.
Close-up
When the subject distance s approaches the lens focal length, the focal length no longer is
negligible, and the approximate formulas above cannot be used without introducing
significant error. At close distances, the hyperfocal distance has little applicability, and it
usually is more convenient to express DOF in terms of magnification. The distance is
small in comparison with the hyperfocal distance, so the simplified formula

$\mathrm{DOF} \approx \dfrac{2Nc\,(1 + m)}{m^2}$

can be used with good accuracy. For a given magnification, DOF is independent of focal
length.
Near:far DOF ratio
From the “exact” equations for near and far limits of DOF, the DOF in front of the subject is

$s - D_N = \dfrac{Ncs\,(s - f)}{f^2 + Nc\,(s - f)},$

and the DOF beyond the subject is

$D_F - s = \dfrac{Ncs\,(s - f)}{f^2 - Nc\,(s - f)}.$
The near:far DOF ratio is

$\dfrac{s - D_N}{D_F - s} = \dfrac{f^2 - Nc\,(s - f)}{f^2 + Nc\,(s - f)}.$

This ratio is always less than unity; at moderate-to-large subject distances, $s \gg f$, and

$\dfrac{s - D_N}{D_F - s} \approx \dfrac{f^2 - Ncs}{f^2 + Ncs} = \dfrac{H - s}{H + s}.$
When the subject is at the hyperfocal distance or beyond, the far DOF is infinite, and the
near:far ratio is zero. It’s commonly stated that approximately 1/3 of the DOF is in front
of the subject and approximately 2/3 is beyond; however, this is true only when $s = H/3$.
At closer subject distances, it’s often more convenient to express the DOF ratio in terms
of the magnification

$m = \dfrac{f}{s - f};$

substitution into the “exact” equation for the DOF ratio gives

$\dfrac{s - D_N}{D_F - s} = \dfrac{mf - Nc}{mf + Nc}.$
As magnification increases, the near:far ratio approaches a limiting value of unity.
DOF vs. format size
When the subject distance is much less than hyperfocal, the total DOF is given to good
approximation by

$\mathrm{DOF} \approx \dfrac{2Nc\,(1 + m)}{m^2}.$

When additionally the magnification is small compared to unity, the value of m in the numerator can be neglected, and the formula further simplifies to

$\mathrm{DOF} \approx \dfrac{2Nc}{m^2}.$

The DOF ratio for two different formats is then

$\dfrac{\mathrm{DOF}_2}{\mathrm{DOF}_1} = \dfrac{N_2 c_2}{N_1 c_1}\,\dfrac{m_1^2}{m_2^2}.$
Essentially the same approach is described in Stroebel (1976, 136–39).
“Same picture” for both formats
The results of the comparison depend on what is assumed. One approach is to assume
that essentially the same picture is taken with each format and enlarged to produce the
same size final image, so the subject distance remains the same, the focal length is
adjusted to maintain the same angle of view, and to a first approximation, magnification
is in direct proportion to some characteristic dimension of each format. If both pictures
are enlarged to give the same size final images with the same sharpness criteria, the circle
of confusion is also in direct proportion to the format size. Thus if l is the characteristic
dimension of the format,

$\dfrac{m_2}{m_1} = \dfrac{c_2}{c_1} = \dfrac{l_2}{l_1}.$

With the same f-number, the DOF ratio is then

$\dfrac{\mathrm{DOF}_2}{\mathrm{DOF}_1} = \dfrac{l_1}{l_2},$
so the DOF ratio is in inverse proportion to the format size. This ratio is approximate, and
breaks down in the macro range of the larger format (the value of m in the numerator is
no longer negligible) or as distance approaches the hyperfocal distance for the smaller
format (the DOF of the smaller format approaches infinity).
If the formats have approximately the same aspect ratios, the characteristic dimensions
can be the format diagonals; if the aspect ratios differ considerably (e.g., 4×5 vs. 6×17),
the dimensions must be chosen more carefully, and the DOF comparison may not even be
meaningful.
If the DOF is to be the same for both formats, the required f-number is in direct proportion to the format size:

$\dfrac{N_2}{N_1} = \dfrac{l_2}{l_1}.$
Adjusting the f-number in proportion to format size is equivalent to using the same
absolute aperture diameter for both formats, discussed in detail below in Use of absolute
aperture diameter.
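A rough numeric sketch of the “same picture” comparison, with assumed format dimensions and settings (the scaling of c and m with format size follows the argument above):

```python
# "Same picture" DOF comparison between two formats, using DOF ~ 2Nc/m^2
# with c and m proportional to the characteristic format dimension l.
# Sample numbers are illustrative assumptions.

def dof_small_m(N, c, m):
    """Total DOF for small magnification: DOF ~ 2Nc/m^2."""
    return 2 * N * c / m**2

l1, l2 = 43.3, 21.6           # e.g. full-frame vs Four Thirds diagonals, mm
c1, m1, N = 0.030, 0.05, 8.0  # CoC, magnification, f-number on format 1
scale = l2 / l1
c2, m2 = c1 * scale, m1 * scale   # CoC and magnification scale with format

ratio = dof_small_m(N, c2, m2) / dof_small_m(N, c1, m1)
print(f"DOF2/DOF1 at the same f-number: {ratio:.2f} (= l1/l2 = {l1/l2:.2f})")
print(f"f-number giving format 1 the same DOF as format 2: f/{N * l1 / l2:.1f}")
```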
Same focal length for both formats
If the same lens focal length is used in both formats, magnifications can be maintained in
the ratio of the format sizes by adjusting subject distances; the DOF ratio is the same as
that given above, but the images differ because of the different perspectives and angles of
view.
If the same DOF is required for each format, an analysis similar to that above shows that
the required f-number is in direct proportion to the format size.
Another approach is to use the same focal length with both formats at the same subject
distance, so the magnification is the same, and with the same f-number,

$\dfrac{\mathrm{DOF}_2}{\mathrm{DOF}_1} = \dfrac{c_2}{c_1} = \dfrac{l_2}{l_1},$

so the DOF ratio is in direct proportion to the format size. The perspective is the same for
both formats, but because of the different angles of view, the pictures are not the same.
Cropping
Cropping an image and enlarging to the same size final image as an uncropped image
taken under the same conditions is equivalent to using a smaller format; the cropped
image requires greater enlargement and consequently has a smaller circle of confusion.
The cropped image has less DOF than the uncropped image.
Use of absolute aperture diameter
The aperture diameter is normally given in terms of the f-number because all lenses set to
the same f-number give approximately the same image illuminance (Ray 2002, 130),
simplifying exposure settings. In deriving the basic DOF equations, the substitution of f/N for the absolute aperture diameter d can be omitted, giving the DOF in terms of the absolute aperture diameter:

$\mathrm{DOF} = \dfrac{2\,(f/d)\,c\,(1 + m)}{m^2 - (c/d)^2},$

after Larmore (1965, 163). When the subject distance s is small in comparison with the hyperfocal distance, the second term in the denominator can be neglected, leading to

$\mathrm{DOF} \approx \dfrac{2fc\,(1 + m)}{d\,m^2}.$

With the same subject distance and angle of view for both formats, s2 = s1 and $\dfrac{m_2}{m_1} = \dfrac{f_2}{f_1} = \dfrac{c_2}{c_1}$, so that

$\dfrac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \dfrac{d_1}{d_2},$
so the DOFs are in inverse proportion to the absolute aperture diameters. When the
diameters are the same, the two formats have the same DOF. Von Rohr (1906) made this
same observation, saying “At this point it will be sufficient to note that all these formulae
involve quantities relating exclusively to the entrance-pupil and its position with respect
to the object-point, whereas the focal length of the transforming system does not enter
into them.” Lyon’s Depth of Field Outside the Box describes an approach very similar to
that of von Rohr.
Using the same absolute aperture diameter for both formats with the “same picture”
criterion is equivalent to adjusting the f-number in proportion to the format sizes,
discussed above under “Same picture” for both formats.
Focus and f-number from DOF limits
Object-side relationships
The equations for the DOF limits can be combined to eliminate Nc and solve for the
subject distance. For given near and far DOF limits DN and DF, the subject distance is

$s = \dfrac{2 D_N D_F}{D_N + D_F},$

the harmonic mean of the near and far distances. The equations for DOF limits also can be combined to eliminate s and solve for the required f-number, giving

$N = \dfrac{f^2}{c}\,\dfrac{D_F - D_N}{D_N\,(D_F - f) + D_F\,(D_N - f)}.$

When the subject distance is large in comparison with the lens focal length, this simplifies to

$N \approx \dfrac{f^2}{c}\,\dfrac{D_F - D_N}{2 D_N D_F}.$
When the far limit of DOF is at infinity, the equations for s and N give indeterminate
results. But if all terms in the numerator and denominator on the right-hand side of the
equation for s are divided by DF, it is seen that when DF is at infinity, $s = 2 D_N$.
Similarly, if all terms in the numerator and denominator on the right-hand side of the equation for N are divided by DF, it is seen that when DF is at infinity,

$N = \dfrac{f^2}{c\,(2 D_N - f)}.$
Image-side relationships
Most discussions of DOF concentrate on the object side of the lens, but the formulas are
simpler and the measurements usually easier to make on the image side. If the basic
image-side equations

$\dfrac{v_N - v}{v_N} = \dfrac{Nc}{f}$

and

$\dfrac{v - v_F}{v_F} = \dfrac{Nc}{f}$

are combined and solved for the image distance v, the result is

$v = \dfrac{2 v_N v_F}{v_N + v_F},$

the harmonic mean of the near and far image distances. The basic image-side equations can also be combined and solved for N, giving

$N = \dfrac{f}{c}\,\dfrac{v_N - v_F}{v_N + v_F}.$
The image distances are measured from the camera's image plane to the lens's image
nodal plane, which is not always easy to locate. The harmonic mean is always less than
the arithmetic mean, but when the difference between the near and far image distances is reasonably small, the two means are close to equal, and focus can be set with sufficient accuracy using

$v \approx \dfrac{v_N + v_F}{2}.$
This formula requires only the difference $v_N - v_F$ between the near and far image
distances. View camera users often refer to this difference as the focus spread; it usually
is measured on the bed or focusing rail. Focus is simply set to halfway between the near
and far image distances.
Substituting $v \approx \dfrac{v_N + v_F}{2}$ into the equation for N and rearranging gives

$N \approx \dfrac{f}{c}\,\dfrac{v_N - v_F}{2v}.$

One variant of the thin-lens equation is $v = (1 + m)f$, where m is the magnification; substituting this into the equation for N gives

$N \approx \dfrac{v_N - v_F}{2c\,(1 + m)}.$
At moderate-to-large subject distances, m is small compared to unity, and the f-number
can often be determined with sufficient accuracy using

$N \approx \dfrac{v_N - v_F}{2c}.$
For close-up photography, the magnification cannot be ignored, and the f-number should
be determined using the first approximate formula.
As with the approximate formula for v, the approximate formulas for N require only the
focus spread $v_N - v_F$ rather than the absolute image distances. When the far limit of DOF is at infinity, $v_F = f$.
On manual-focus small- and medium-format lenses, the focus and f-number usually are
determined using the lens DOF scales, which often are based on the approximate
equations above.
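A small sketch of the image-side procedure, using the approximate formulas above; the measured image distances and circle of confusion are hypothetical:

```python
# Focus and f-number from a measured focus spread (image side), using
# v ~ (v_N + v_F)/2 and N ~ (v_N - v_F)/(2c(1 + m)).  The measured image
# distances and circle of confusion are hypothetical.

def settings_from_spread(v_N, v_F, c, m=0.0):
    v = (v_N + v_F) / 2                   # focus halfway between the limits
    N = (v_N - v_F) / (2 * c * (1 + m))   # required f-number
    return v, N

# View-camera example (mm): a 3 mm spread about 150 mm, with c = 0.1 mm.
v, N = settings_from_spread(151.5, 148.5, c=0.1)
print(f"focus at v = {v} mm, N ~ f/{N:.0f}")   # focus at 150 mm, about f/15
```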
Defocus blur for background object at B.
Foreground and background blur
If the equation for the far limit of DOF is solved for c, and the far distance replaced by an
arbitrary distance D, the blur disk diameter b at that distance is

$b = \dfrac{f m_s}{N}\,\dfrac{D - s}{D}.$
When the background is at the far limit of DOF, the blur disk diameter is equal to the
circle of confusion c, and the blur is just imperceptible. The diameter of the background
blur disk increases with the distance to the background. A similar relationship holds for
the foreground; the general expression for a defocused object at distance D is

$b = \dfrac{f m_s}{N}\,\dfrac{|D - s|}{D}.$
For a given scene, the distance between the subject and a foreground or background
object is usually fixed; let that distance be represented by $x_d = |D - s|$; then

$b = \dfrac{f m_s}{N}\,\dfrac{x_d}{D},$
or, in terms of subject distance,

$b = \dfrac{f m_s}{N}\,\dfrac{x_d}{s \pm x_d},$

with the minus sign used for foreground objects and the plus sign used for background objects. For a relatively distant background object,

$b \approx \dfrac{f m_s}{N}.$
In terms of subject magnification, the subject distance is

$s = \dfrac{(1 + m_s)\,f}{m_s},$

so that, for a given f-number and subject magnification,

$b = \dfrac{m_s^2\, f\, x_d}{N\left[(1 + m_s)f \pm m_s x_d\right]}.$
Differentiating b with respect to f gives

$\dfrac{\mathrm{d}b}{\mathrm{d}f} = \dfrac{\pm\, m_s^3\, x_d^2}{N\left[(1 + m_s)f \pm m_s x_d\right]^2}.$
With the plus sign, the derivative is everywhere positive, so that for a background object,
the blur disk size increases with focal length. With the minus sign, the derivative is
everywhere negative, so that for a foreground object, the blur disk size decreases with
focal length.
The magnification of the defocused object also varies with focal length; the magnification
of the defocused object is

$m_d = \dfrac{v_s}{D},$

where vs is the image distance of the subject. For a defocused object with some characteristic dimension y, the imaged size of that object is

$m_d\, y = \dfrac{v_s\, y}{D}.$

The ratio of the blur disk size to the imaged size of that object then is

$\dfrac{b}{m_d\, y} = \dfrac{m_s}{1 + m_s}\,\dfrac{x_d}{N y},$
so for a given defocused object, the ratio of the blur disk diameter to object size is
independent of focal length, and depends only on the object size and its distance from the
subject.
Asymmetrical lenses
The discussion thus far has assumed a symmetrical lens for which the entrance and exit
pupils coincide with the object and image nodal planes, and for which the pupil
magnification is unity. Although this assumption usually is reasonable for large-format
lenses, it often is invalid for medium- and small-format lenses.
For an asymmetrical lens, the DOF ahead of the subject distance and the DOF beyond the
subject distance are given by

$\mathrm{DOF_N} = \dfrac{Nc\,(1 + m/P)}{m^2\left(1 + \dfrac{Nc}{fm}\right)}$

and

$\mathrm{DOF_F} = \dfrac{Nc\,(1 + m/P)}{m^2\left(1 - \dfrac{Nc}{fm}\right)},$

where P is the pupil magnification. Combining gives the total DOF:

$\mathrm{DOF} = \dfrac{2Nc\,(1 + m/P)}{m^2\left[1 - \left(\dfrac{Nc}{fm}\right)^2\right]}.$

When $\dfrac{Nc}{fm} \ll 1$, the second term in the denominator becomes small in comparison with the first, and (Shipman 1977, 147)

$\mathrm{DOF} \approx \dfrac{2Nc\,(1 + m/P)}{m^2}.$
When the pupil magnification is unity, the equations for asymmetrical lenses reduce to
those given earlier for symmetrical lenses.
Effect of lens asymmetry
Except for close-up and macro photography, the effect of lens asymmetry is minimal. A
slight rearrangement of the last equation gives

$\mathrm{DOF} \approx \dfrac{2Nc}{m}\left(\dfrac{1}{m} + \dfrac{1}{P}\right).$
As magnification decreases, the 1 / P term becomes smaller in comparison with the 1 / m
term, and eventually the effect of pupil magnification becomes negligible.
Chapter-6
Exposure
A long exposure showing stars rotating around the southern and northern celestial poles.
Credit: European Southern Observatory
A photograph with an exposure time of 1/13 second blurs the motion of flying birds.
A photograph of the sea after sunset with an exposure time of 15 seconds. The swell from
the waves appears as fog.
In photography, exposure is the total amount of light allowed to fall on the photographic
medium (photographic film or image sensor) during the process of taking a photograph.
Exposure is measured in lux seconds, and can be computed from exposure value (EV)
and scene luminance over a specified area.
In photographic jargon, an exposure generally refers to a single shutter cycle. For
example: a long exposure refers to a single, protracted shutter cycle to capture enough
low-intensity light, whereas a multiple exposure involves a series of relatively brief
shutter cycles, effectively layering a series of photographs in one image. For the same
film speed, the accumulated photometric exposure (H) should be similar in both cases.
Photometric and radiometric exposure
Photometric or luminous exposure is the accumulated physical quantity of visible light
energy (weighted by the luminosity function) applied to a surface during a given
exposure time. It is defined as

$H = E\,t,$

where

• H is the luminous exposure (usually in lux seconds)
• E is the image-plane illuminance (usually in lux)
• t is the exposure time (seconds)
The radiometric quantity radiant exposure is sometimes used instead; it is the product of
image-plane irradiance and time, the accumulated amount of incident light energy per
area. If the measurement is adjusted to account only for light that reacts with the photosensitive surface, that is, weighted by the appropriate spectral sensitivity, the exposure is
still measured in radiometric units (joules per square meter), rather than photometric units
(weighted by the nominal sensitivity of the human eye). Only in this appropriately
weighted case does the H measure the effective amount of light falling on the film, such
that the characteristic curve will be correct independent of the spectrum of the light.
Many photographic materials are also sensitive to "invisible" light, which can be a
nuisance, or a benefit. The use of radiometric units is appropriate to characterize such
sensitivity to invisible light.
In sensitometric data, such as characteristic curves, the log exposure is conventionally
expressed as log10(H). Photographers more familiar with base-2 logarithmic scales (such
as exposure values) can convert using log2(H) ≈ 3.32 log10(H).
Exposure settings
"Correct" exposure may be defined as an exposure that achieves the effect the photographer intended. The purpose of exposure adjustment (in combination with lighting
adjustment) is to control the amount of light from the subject that is allowed to fall on the
film, so that it falls into an appropriate region of the film's characteristic curve and yields
a "correct" or acceptable exposure.
Overexposure and underexposure
White chair: Deliberate use of overexposure for aesthetic purposes.
A photograph may be described as overexposed when it has a loss of highlight detail, that
is, when the bright parts of an image are effectively all white, known as "blown out
highlights" (or "clipped whites"). A photograph may be described as underexposed when
it has a loss of shadow detail, that is, the dark areas indistinguishable from black, known
as "blocked up shadows" (or sometimes "crushed shadows," "crushed blacks," or "clipped
blacks," especially in video). As the image to the right shows, these terms are technical
ones rather than artistic judgments; an overexposed or underexposed image may be
"correct", in that it provides the effect that the photographer intended. Intentionally overor under- exposing (relative to a standard or the camera's automatic exposure) is casually
referred to as "shooting to the right" or "shooting to the left", respectively, as these shift
the histogram of the image to the right or left.
Manual exposure
In manual mode, the photographer adjusts the lens aperture and/or shutter speed to
achieve the desired exposure. Many photographers choose to control aperture and shutter
independently because opening up the aperture increases exposure, but also decreases the
depth of field, and a slower shutter increases exposure but also increases the opportunity
for motion blur.
'Manual' exposure calculations may be based on some method of light metering with a
working knowledge of exposure values, the APEX system and/or the Zone System.
Automatic exposure
A camera in automatic exposure (AE) mode automatically calculates and adjusts exposure settings in order to match (as closely as possible) the subject's mid-tone to the
mid-tone of the photograph. For most cameras this means using an on-board TTL
exposure meter.
Aperture priority mode gives the photographer manual control of the aperture, whilst the
camera automatically adjusts the shutter speed to achieve the exposure specified by the
TTL meter. Shutter priority mode gives manual shutter control, with automatic aperture
compensation. In each case, the actual exposure level is still determined by the camera's
exposure meter.
Exposure compensation
Exposure compensation is a technique for adjusting the exposure indicated by a
photographic exposure meter, in consideration of factors that may cause the indicated
exposure to result in a less-than-optimal image. Factors considered may include unusual
lighting distribution, variations within a camera system, filters, non-standard processing,
or intended underexposure or overexposure. Cinematographers may also apply exposure
compensation for changes in shutter angle or film speed, among other factors.
Exposure compensation on still cameras
Snowy Mountains without exposure compensation
Same place with +2EV exposure compensation
In photography, some cameras include exposure compensation as a feature to allow the
user to adjust the automatically calculated exposure. Compensation can be either positive
(additional exposure) or negative (reduced exposure), and is commonly available in third- or half-step increments, usually up to two or three steps in either direction; some digital
cameras allow a greater range. Camera exposure compensation is commonly stated in
terms of EV units; 1 EV is equal to one exposure step (or stop), corresponding to a
doubling of exposure.
Exposure can be adjusted by changing either the lens f-number or the exposure time;
which one is changed usually depends on the camera's exposure mode. If the mode is
aperture priority, exposure compensation changes the exposure time; if the mode is
shutter priority, the f-number is changed. If a flash is being used, some cameras will
adjust it as well.
Adjustment for lighting distribution
The earliest reflected-light exposure meters were wide-angle, averaging types, measuring
the average scene luminance. Exposure meter calibration was chosen to result in the
“best” exposures for typical outdoor scenes; when measuring a single scene element
(such as the side of a building in open shade), the indicated exposure is in the approximate middle of the film or electronic sensor's exposure range. When measuring a scene
with atypical distribution of light and dark elements, or a single element that is lighter or
darker than a middle tone, the indicated exposure may not be optimal. For example, a
scene with predominantly light tones (e.g., a white horse) often will be underexposed,
while a scene with predominantly dark tones (e.g., a black horse) often will be overexposed. That both scenes require the same exposure, regardless of the meter indication,
becomes obvious from a scene that includes both a white horse and a black horse. A
photographer usually can recognize the difference between a white horse and a black
horse; a meter usually cannot. When metering a white horse, a photographer can apply
exposure compensation so that the white horse is rendered as white.
Many modern cameras incorporate metering systems that measure scene contrast as well
as average luminance, and employ sophisticated algorithms to infer the appropriate
exposure from these data. In scenes with very unusual lighting, however, these metering
systems sometimes cannot match the judgment of a skilled photographer, so exposure
compensation still may be needed.
Exposure compensation using the Zone System
An early application of exposure compensation was the Zone System developed by Ansel
Adams and Fred Archer. Although the Zone System has sometimes been regarded as
complex, the basic concept is quite simple: render dark objects as dark and light objects
as light, according to the photographer's visualization. Developed for black and white
film, the Zone System divided luminance into 11 zones, with Zone 0 representing pure
black and Zone X representing pure white. The meter indication would place whatever
was metered on Zone V, a medium gray. The tonal range of color negative film is slightly
less than that of black and white film, and the tonal range of color reversal film and
digital sensors even less; accordingly, there are fewer zones between pure black and pure
white. The meter indication, however, remains Zone V.
The relationship between exposure compensation and exposure zones is straightforward:
an exposure compensation of one EV is equal to a change of one zone; thus exposure
compensation of −1 EV is equivalent to placement on Zone IV, and exposure
compensation of +2 EV is equivalent to placement on Zone VII.
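This relationship is simple enough to express in a few lines of code; the sketch below assumes only the usual Roman-numeral zone names:

```python
# Zone placement from exposure compensation: one EV step equals one zone,
# and the meter places its reading on Zone V.  A minimal sketch.

ROMAN = {3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII"}

def zone_for_compensation(ev_comp):
    return ROMAN[5 + ev_comp]

print(zone_for_compensation(0))    # V   (metered placement)
print(zone_for_compensation(-1))   # IV  (-1 EV)
print(zone_for_compensation(+2))   # VII (+2 EV)
```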
The Zone System is a very specialized form of exposure compensation, and is used most
effectively when metering individual scene elements, such as a sunlit rock or the bark of
a tree in shade. Many cameras incorporate narrow-angle spot meters to facilitate such
measurements. Because of the limited tonal range, an exposure compensation range of
±2 EV is often sufficient for using the Zone System with color film and digital sensors.
Exposure time
A 1/30s exposure showing motion blur on fountain at Royal Botanic Gardens, Kew
A 1/320s exposure showing individual drops on fountain at Royal Botanic Gardens, Kew
The exposure for a photograph is determined by the sensitivity of the medium used. For
photographic film, sensitivity is referred to as film speed and is measured on a scale
published by the International Organization for Standardization (ISO). Faster film
requires less exposure and has a higher ISO rating. Exposure is a combination of the
length of time and the level of illumination received by the photosensitive material.
Exposure time is controlled in a camera by shutter speed and the illumination level by the
lens aperture. Slower shutter speeds (exposing the medium for a longer period of time)
and greater lens apertures (admitting more light) produce greater exposures.
An approximately correct exposure will be obtained on a sunny day using ISO 100 film,
an aperture of f/16 and a shutter speed of 1/100th of a second. This is called the sunny 16
rule: at an aperture of f/16 on a sunny day, a suitable shutter speed will be one over the
film speed (or closest equivalent).
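A minimal sketch of the rule; the loop lists equivalent settings obtained by opening one stop and halving the time, with assumed starting values:

```python
# Sunny 16 rule: at f/16 on a sunny day, shutter time ~ 1/(ISO speed) s.
# The equivalent pairs assume exact stops; real cameras use rounded values.

iso = 100
t = 1.0 / iso
print(f"f/16.0 at 1/{iso} s")

aperture = 16.0
for _ in range(4):
    aperture /= 2**0.5               # open up one stop: f/16 -> f/11 -> ...
    t /= 2                           # halve the exposure time to compensate
    print(f"f/{aperture:.1f} at 1/{round(1/t)} s")
```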
A scene can be exposed in many ways, depending on the desired effect a photographer
wishes to convey.
Reciprocity
A demonstration of the effect of exposure in night photography. Longer shutter speeds
result in increased exposure.
In photography and holography, reciprocity refers to the inverse relationship between the
intensity and duration of light that determines the reaction of light-sensitive material.
Within a normal exposure range for film stock, for example, the reciprocity law states
that the film response will be determined by the total exposure, defined as intensity ×
time. Therefore, the same response (for example, the optical density of the developed
film) can result from reducing duration and increasing light intensity, and vice versa.
The reciprocal relationship is assumed in most sensitometry, for example when
measuring a Hurter and Driffield curve (optical density versus logarithm of total
exposure) for a photographic emulsion. Total exposure of the film or sensor, the product
of focal-plane illuminance times exposure time, is measured in lux seconds.
History
The idea of reciprocity, once known as Bunsen–Roscoe reciprocity, originated from the
work of Robert Bunsen and Henry Roscoe in 1862.
Deviations from the reciprocity law were reported by Captain William de Wiveleslie
Abney in 1893, and extensively studied by Karl Schwarzschild in 1899. Schwarzschild's
model was found wanting by Abney and by Englisch, and better models were proposed in subsequent decades of the early twentieth century. In 1913, Kron formulated
an equation to describe the effect in terms of curves of constant density, which J. Halm
adopted and modified, leading to the "Kron–Halm catenary equation" or "Kron–Halm–
Webb formula" to describe departures from reciprocity.
In chemical photography
In photography, reciprocity refers to the relationship whereby the total light energy –
proportional to the total exposure, the product of the light intensity and exposure time,
controlled by aperture and shutter speed, respectively – determines the effect of the light
on the film. That is, an increase of brightness by a certain factor is exactly compensated
by a decrease of exposure time by the same factor, and vice versa. In other words there is
under normal circumstances a reciprocal proportion between aperture area and shutter
speed for a given photographic result, with a wider aperture requiring a faster shutter
speed for the same effect. For example, an EV of 10 may be achieved with an aperture (f-number) of f/2.8 and a shutter speed of 1/125 s. The same exposure is achieved by
doubling the aperture area to f/2 and halving the exposure time to 1/250 s, or by halving
the aperture area to f/4 and doubling the exposure time to 1/60 s; in each case the
response of the film is expected to be the same.
Reciprocity failure
For most photographic materials, reciprocity is valid with good accuracy over a range of
values of exposure duration, but becomes increasingly inaccurate as we depart from this
range: reciprocity failure, reciprocity law failure, or Schwarzschild effect. As the light
level decreases out of the reciprocity range, the increase in duration, and hence of total
exposure, required to produce an equivalent response becomes higher than the formula
states; for instance, at half of the light required for a normal exposure, the duration must
be more than doubled for the same result. Multipliers used to correct for this effect are
called reciprocity factors.
At very low light levels, film is less responsive. Light can be considered to be a stream of
discrete photons, and a light-sensitive emulsion is composed of discrete light-sensitive
grains, usually silver halide crystals. Each grain must absorb a certain number of photons
in order for the light-driven reaction to occur and the latent image to form. In particular,
if the surface of the silver halide crystal has a cluster of approximately four or more
reduced silver atoms, resulting from absorption of a sufficient number of photons
(usually a few dozen photons are required), it is rendered developable. At low light
levels, i.e. few photons per unit time, photons impinge upon each grain relatively
infrequently; if the four photons required arrive over a long enough interval, the partial
change due to the first one or two is not stable enough to survive before enough photons
arrive to make a permanent latent image center.
This breakdown in the usual tradeoff between aperture and shutter speed is known as
reciprocity failure. Each different film type has a different response at low light levels.
Some films are very susceptible to reciprocity failure, and others much less so. Some
films that are very light sensitive at normal illumination levels and normal exposure times
lose much of their sensitivity at low light levels, becoming effectively "slow" films for
long exposures. Conversely some films that are "slow" under normal exposure duration
retain their light sensitivity better at low light levels.
For example, for a given film, if a light meter indicates a required EV of 5 and the
photographer sets the aperture to f/11, then ordinarily a 4 second exposure would be
required; a reciprocity correction factor of 1.5 would require the exposure to be extended
to 6 seconds for the same result. Reciprocity failure generally becomes significant at
exposures of longer than about 1 sec for film, and above 30 sec for paper.
Reciprocity also breaks down at extremely high levels of illumination with very short
exposures. This is a concern for scientific and technical photography, but rarely for general
photographers, as exposures significantly shorter than a millisecond are only required for
subjects such as explosions and particle physics experiments, or when taking high-speed
motion pictures with very high shutter speeds (1/10,000 sec or faster).
Schwarzschild law
In response to astronomical observations of low intensity reciprocity failure, Karl
Schwarzschild wrote (circa 1900):
"In determinations of stellar brightness by the photographic method I have recently been
able to confirm once more the existence of such deviations, and to follow them up in a
quantitative way, and to express them in the following rule, which should replace the law of reciprocity: Sources of light of different intensity I cause the same degree of blackening under different exposures t if the products $I\,t^{0.86}$ are equal."
Unfortunately, Schwarzschild's empirically determined 0.86 coefficient turned out to be
of limited usefulness. A modern formulation of Schwarzschild's law is given as

$E = I\,t^p,$

where E is a measure of the "effect of the exposure" that leads to changes in the opacity
of the photosensitive material (in the same degree that an equal value of exposure H = It
does in the reciprocity region), I is illuminance, t is exposure duration and p is the
Schwarzschild coefficient.
However, a constant value for p remains elusive, and has not replaced the need for more
realistic models or empirical sensitometric data in critical applications. When reciprocity
holds, Schwarzschild's law uses p = 1.0.
Since the Schwarzschild law formula gives unreasonable values for times in the region where reciprocity holds, a modified formula has been found that fits better across a wider range of exposure times. The modification is in terms of a factor that multiplies the ISO film speed:

Relative film speed $= (t + 1)^{p - 1},$

where the t + 1 term implies a breakpoint near 1 second separating the region where reciprocity holds from the region where it fails.
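A short sketch of these relations; the coefficient p is a hypothetical value, since, as noted, a constant p that fits real materials remains elusive:

```python
# Schwarzschild-style reciprocity correction.  The coefficient p here is a
# hypothetical value; no single constant fits real materials well.

def effective_exposure(I, t, p):
    """E = I t^p; p = 1 recovers ordinary reciprocity."""
    return I * t**p

def relative_speed(t, p):
    """Relative film speed (t + 1)^(p - 1) from the modified formula."""
    return (t + 1) ** (p - 1)

p = 0.9
for t in (0.01, 1, 10, 100):          # exposure times in seconds
    print(f"t = {t:>6} s: relative speed {relative_speed(t, p):.2f}")
# Short exposures lose almost no speed; long ones lose substantially,
# so the indicated exposure time must be extended.
```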
Simple model for t > 1 second
Some microscope camera systems use automatic electronic compensation for reciprocity failure, generally of a form in which the corrected time, Tc, is expressible as a power law of the metered time, Tm, that is, $T_c = (T_m)^p$, for times in seconds. Typical values of p are 1.25 to 1.45, but some are as low as 1.1 and as high as 1.8.
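For example, a minimal sketch of this power-law correction with an assumed p:

```python
# Power-law compensation for long exposures: Tc = Tm^p, times in seconds.
# The value of p is an assumed typical figure; real systems calibrate it.

def corrected_time(Tm, p=1.3):
    return Tm ** p

for Tm in (2, 10, 60):
    print(f"metered {Tm:>2} s -> corrected {corrected_time(Tm):.0f} s")
# A metered 60 s exposure becomes roughly 200 s at p = 1.3.
```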
The Kron–Halm catenary equation
Kron's equation as modified by Halm states that the response of the film is a function of $\dfrac{It}{\psi(I)}$, with the factor $\psi$ defined by a catenary (hyperbolic cosine) equation accounting for reciprocity failure at both very high and very low intensities:

$\psi(I) = \dfrac{1}{2}\left[\left(\dfrac{I}{I_0}\right)^{a} + \left(\dfrac{I_0}{I}\right)^{a}\right],$

where I0 is the photographic material's optimum intensity level and a is a constant that characterizes the material's reciprocity failure.
Quantum reciprocity-failure model
Modern models of reciprocity failure incorporate an exponential function, as opposed to
power law, dependence on time or intensity at long exposure times or low intensities,
based on the distribution of interquantic times (times between photon absorptions in a
grain) and the temperature-dependent lifetimes of the intermediate states of the partially exposed grains.
Baines and Bomback explain the "low intensity inefficiency" this way:

“Electrons are released at a very low rate. They are trapped and neutralised and must remain as isolated silver atoms for much longer than in normal latent image formation. It has already been observed that such extreme sub-latent image is unstable, and it is postulated that inefficiency is caused by many isolated atoms of silver losing their acquired electrons during the period of instability.”
Astrophotography
Reciprocity failure is an important effect in the field of film-based astrophotography.
Deep-sky objects such as galaxies and nebulae are often so faint that they are not visible
to the unaided eye. To make matters worse, many objects' spectra do not line up with the
film emulsion's sensitivity curves. Many of these targets are small and require long focal
lengths, which can push the focal ratio far above f/5. Combined, these parameters make
these targets extremely difficult to capture with film; exposures from 30 minutes to well
over an hour are typical. As a typical example, capturing an image of the Andromeda
Galaxy at f/4 will take about 30 minutes; to get the same density at f/8 would require an
exposure of about 200 minutes.
When a telescope is tracking an object, every additional minute of exposure is difficult; therefore, reciprocity
failure is one of the biggest motivations for astronomers to switch to digital imaging.
Electronic image sensors have their own limitations at long exposure times and low
illuminance levels, not usually referred to as reciprocity failure, namely noise from dark
current, but this effect can be controlled by cooling the sensor.
Holography
A similar problem exists in holography. The total energy required when exposing
holographic film using a continuous wave laser (i.e. for several seconds) is significantly
less than the total energy required when exposing holographic film using a pulsed laser
(i.e. around 20–40 nanoseconds) due to a reciprocity failure. It can also be caused by very
long or very short exposures with a continuous wave laser. To try to offset the reduced
brightness of the film due to reciprocity failure, a method called latensification can be
used. This is usually done directly after the holographic exposure and using an incoherent
light source (such as a 25-40 W light bulb). Exposing the holographic film to the light for
a few seconds can increase the brightness of the hologram by an order of magnitude.
Determining exposure
A fair ride taken with a 2/5 second exposure.
A photograph of the Forth Rail Bridge with an exposure time of 13 seconds. The effect of
a long exposure shot on moving water is to make it seem creamy and opalescent.
The Zone System is another method of determining exposure and development combinations to achieve a greater tonality range than conventional methods allow, by varying the
contrast of the 'film' to fit the print contrast capability. Digital cameras can achieve
similar results (high dynamic range) by combining several different exposures (varying
only the shutter speeds) made in quick succession.
Today, most cameras automatically determine the correct exposure at the time of taking a
photograph by using a built-in light meter, or multiple point meters interpreted by a built-in computer.
Negative/print film tends to be biased toward exposure for the shadow areas (film dislikes being starved of light), with digital favouring exposure for the highlights.
Latitude
Latitude is the degree by which one can overexpose or underexpose an image and still recover an acceptable level of quality from the exposure. Typically negative film has a better
ability to record a range of brightness than slide/transparency film or digital. Digital
should be considered to be the reverse of print film, with a good latitude in the shadow
range, and a narrow one in the highlight area; in contrast to film's large highlight latitude,
and narrow shadow latitude. Slide/Transparency film has a narrow latitude in both
highlight and shadow areas, requiring greater exposure accuracy.
Negative film's latitude increases somewhat with high-ISO material; in contrast, digital latitude tends to narrow at high ISO settings.
Highlights
Example image exhibiting blown-out highlights. Top: original image, bottom: blown-out
areas marked red
Areas of a photo where information is lost due to extreme brightness are described as
having "blown-out highlights" or "flared highlights".
In digital images this information loss is often irreversible, though small problems can be
made less noticeable using photo manipulation software. Recording to RAW format can
ameliorate this problem to some degree, as can using a digital camera with a better
sensor.
Film can often have areas of extreme overexposure but still record detail in those areas.
This information is usually somewhat recoverable when printing or transferring to digital.
A loss of highlights in a photograph is usually undesirable, but in some cases can be
considered to "enhance" appeal. Examples include black-and-white photography and
portraits with an out-of-focus background.
Blacks
Areas of a photo where information is lost due to extreme darkness are described as
"crushed blacks". Digital capture tends to be more tolerant of underexposure, allowing
better recovery of shadow detail, than same-ISO negative print film.
Crushed blacks cause loss of detail, but can be used for artistic effect.
Chapter-7
Exposure Value
Fast shutter speed, short exposure of a water wave.
Slow shutter speed, long exposure of the wave.
In photography, exposure value (EV) denotes all combinations of a camera's shutter
speed and relative aperture that give the same exposure. The concept was developed in
Germany in the 1950s (Ray 2000, 318), in an attempt to simplify choosing among
combinations of equivalent camera settings. Exposure value also is used to indicate an
interval on the photographic exposure scale, with 1 EV corresponding to a standard
power-of-2 exposure step, commonly referred to as a stop.
Exposure value was originally indicated by the quantity symbol Ev; this symbol continues
to be used in ISO standards, but the acronym EV is now more common elsewhere.
Although all camera settings with the same exposure value nominally give the same
exposure, they do not necessarily give the same picture. The exposure time (“shutter
speed”) determines the amount of motion blur, as illustrated by the two images at the
right, and the relative aperture determines the depth of field.
Formal definition
Exposure value is a base-2 logarithmic scale defined by (Ray 2000, 318)

$\mathrm{EV} = \log_2 \dfrac{N^2}{t},$

where

• N is the relative aperture (f-number)
• t is the exposure time (“shutter speed”) in seconds
EV 0 corresponds to an exposure time of 1 s and a relative aperture of f/1.0. If the EV is
known, it can be used to select combinations of exposure time and f-number, as shown in
Table 1.
Each increment of 1 in exposure value corresponds to a change of one “step” (or, more
commonly, one “stop”) in exposure, i.e., half as much exposure, either by halving the
exposure time or halving the aperture area, or a combination of such changes. Greater
exposure values are appropriate for photography in more brightly lit situations, or for
higher ISO speeds.
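A minimal sketch of the definition, which also enumerates equivalent settings for a given EV (the listed apertures are assumed sample values):

```python
# Exposure value from camera settings: EV = log2(N^2 / t).
# The apertures in the loop are assumed sample values.
import math

def exposure_value(N, t):
    return math.log2(N**2 / t)

print(exposure_value(1.0, 1.0))      # 0.0: EV 0 is f/1.0 for 1 s
print(exposure_value(2.8, 1/125))    # ~9.9, i.e. about EV 10

# All settings with the same EV give the same nominal exposure:
for N in (2.8, 4.0, 5.6):
    t = N**2 / 2**10                 # solve EV = 10 for the time t
    print(f"f/{N}: t = 1/{round(1/t)} s")
```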
Camera settings vs. photometric exposure
“Exposure value” is somewhat of a misnomer, because it indicates combinations of
camera settings rather than photometric exposure, which is given by (Ray 2000, 310)

$H = E\,t,$

where

• H is the photometric exposure
• E is the image-plane illuminance
• t is the exposure time
The illuminance E is controlled by the f-number but also depends on the scene
luminance. To avoid confusion, some authors (Ray 2000, 310) have used camera
exposure to refer to combinations of camera settings. The 1964 ASA standard for automatic exposure controls for cameras, ASA PH2.15-1964, took the same approach, and
also used the more descriptive term camera exposure settings.
Common practice among photographers is nonetheless to use “exposure” to refer to
camera settings as well as to photometric exposure.
Tabulated exposure values
An exposure meter may not always be available, and using a meter to determine exposure
for some scenes with unusual lighting distribution may be difficult. However, natural
light, as well as many scenes with artificial lighting, is predictable, so that exposure often
can be determined with reasonable accuracy from tabulated values.
________________________WORLD TECHNOLOGIES________________________
Table 2. Exposure values (ISO 100 speed) for various lighting conditions

Lighting Condition                                                          EV100

Daylight
  Light sand or snow in full or slightly hazy sunlight (distinct shadows)a   16
  Typical scene in full or slightly hazy sunlight (distinct shadows)a, b     15
  Typical scene in hazy sunlight (soft shadows)                              14
  Typical scene, cloudy bright (no shadows)                                  13
  Typical scene, heavy overcast                                              12
  Areas in open shade, clear sunlight                                        12

Outdoor, Natural Light
  Rainbows
    Clear sky background                                                     15
    Cloudy sky background                                                    14
  Sunsets and skylines
    Just before sunset                                                       12–14
    At sunset                                                                12
    Just after sunset                                                        9–11
  The Moon,c altitude > 40°
    Full                                                                     15
    Gibbous                                                                  14
    Quarter                                                                  13
    Crescent                                                                 12
  Moonlight, Moon altitude > 40°
    Full                                                                     −3 to −2
    Gibbous                                                                  −4
    Quarter                                                                  −6
  Aurora borealis and australis
    Bright                                                                   −4 to −3
    Medium                                                                   −6 to −5

Outdoor, Artificial Light
  Neon and other bright signs                                                9–10
  Night sports                                                               9
  Fires and burning buildings                                                9
  Bright street scenes                                                       8
  Night street scenes and window displays                                    7–8
  Night vehicle traffic                                                      5
  Fairs and amusement parks                                                  7
  Christmas tree lights                                                      4–5
  Floodlit buildings, monuments, and fountains                               3–5
  Distant views of lighted buildings                                         2

Indoor, Artificial Light
  Galleries                                                                  8–11
  Sports events, stage shows, and the like                                   8–9
  Circuses, floodlit                                                         8
  Ice shows, floodlit                                                        9
  Offices and work areas                                                     7–8
  Home interiors                                                             5–7
  Christmas tree lights                                                      4–5
a. Values for direct sunlight apply between approximately two hours after sunrise
and two hours before sunset, and assume front lighting. As a rough general rule,
decrease EV by 1 for side lighting, and decrease EV by 2 for back lighting
b. This is approximately the value given by the sunny 16 rule.
c. These values are appropriate for pictures of the Moon taken at night with a long
lens or telescope, and will render the Moon as a medium tone. They will not, in
general, be suitable for landscape pictures that include the Moon. In a landscape
photograph, the Moon typically is near the horizon, where its luminance changes
considerably with altitude. Moreover, a landscape photograph usually must take
account of the sky and foreground as well as the Moon. Consequently, it is nearly
impossible to give a single correct exposure value for such a situation.
Exposure values in Table 2 are reasonable general guidelines, but they should be used
with caution. For simplicity, they are rounded to the nearest integer, and they omit
numerous considerations described in the ANSI exposure guides from which they are
derived. Moreover, they take no account of color shifts or reciprocity failure. Proper use
of tabulated exposure values is explained in detail in the ANSI exposure guide, ANSI
PH2.7-1986.
The exposure values in Table 2 are for ISO 100 speed (“EV100”). For a different ISO
speed S, increase the exposure values (decrease the exposures) by the number of exposure
steps by which that speed is greater than ISO 100; formally,

EVS = EV100 + log2 (S/100)

For example, ISO 400 speed is two steps greater than ISO 100:

log2 (400/100) = 2

To photograph outdoor night sports with an ISO 400–speed imaging medium, find the
tabular value of 9 and add 2 to get EV400 = 11.
For lower ISO speed, decrease the exposure values (increase the exposures) by the
number of exposure steps by which the speed is less than ISO 100. For example, ISO 50
speed is one step less than ISO 100:

log2 (50/100) = −1

To photograph a rainbow against a cloudy sky with an ISO 50–speed imaging medium,
find the tabular value of 14 and subtract 1 to get EV50 = 13.
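A short Python sketch of this ISO shift, under the same convention (the function name is ours):

    import math

    def ev_for_iso(ev100, iso):
        # EV_S = EV_100 + log2(S / 100)
        return ev100 + math.log2(iso / 100)

    print(ev_for_iso(9, 400))  # 11.0: night sports on ISO 400
    print(ev_for_iso(14, 50))  # 13.0: rainbow against a cloudy sky on ISO 50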
Setting EV on a camera
A Kodak Pony II camera with exposure value setting ring
On most cameras, there is no direct way to transfer an EV to camera settings; however, a
few cameras, such as the Kodak Pony II shown in the photo, allowed direct setting of
exposure value. Some medium-format cameras from Rollei (Rolleiflex, Rolleicord
models) and Hasselblad allowed EV to be set on the lenses. The set EV could be locked,
coupling shutter and aperture settings, such that adjusting either the shutter speed or
aperture made a corresponding adjustment in the other to maintain a constant exposure.
Use of the EV scale on Hasselblad cameras is discussed briefly by Adams (1981, 39).
Exposure compensation in EV
Many current cameras allow for exposure compensation, and usually state it in terms of
EV (Ray 2000, 316). In this context, EV refers to the difference between the indicated
and set exposures. For example, an exposure compensation of +1 EV (or +1 step) means
to increase exposure, by using either a longer exposure time or a smaller f-number.
The sense of exposure compensation is opposite that of the EV scale itself. An increase
in exposure corresponds to a decrease in EV, so an exposure compensation of +1 EV
results in a smaller EV; conversely, an exposure compensation of −1 EV results in a
greater EV. For example, if a meter reading of a lighter-than-normal subject indicates
EV 16, and an exposure compensation of +1 EV is applied to render the subject appropriately, the final camera settings will correspond to EV 15.
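A minimal Python sketch of this sign convention (names are illustrative):

    def compensated_ev(metered_ev, compensation_ev):
        # +1 EV of compensation means more exposure, i.e. a camera
        # setting one EV lower than the meter indicates
        return metered_ev - compensation_ev

    print(compensated_ev(16, +1))  # 15, matching the example above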
Meter indication in EV
Some light meters (e.g., Pentax spot meters) indicate directly in EV at ISO 100. Some
other meters, especially digital models, can indicate EV for the selected ISO speed. In
most cases, this difference is irrelevant; with the Pentax meters, camera settings usually
are determined using the exposure calculator, and most digital meters directly display
shutter speeds and f-numbers.
Recently, articles on many web sites have used light value (LV) to denote EV at ISO 100.
However, this term does not derive from a standards body, and has had several conflicting definitions.
Relationship of EV to lighting conditions
The recommended f-number and exposure time for given lighting conditions and ISO
speed are given by the exposure equation

N²/t = L S / K

where

• N is the relative aperture (f-number)
• t is the exposure time (“shutter speed”) in seconds
• L is the average scene luminance
• S is the ISO arithmetic speed
• K is the reflected-light meter calibration constant

Applied to the right-hand side of the exposure equation, exposure value is

EV = log2 (L S / K)
Camera settings also can be determined from incident-light measurements, for which the
exposure equation is

N²/t = E S / C

where

• E is the illuminance
• C is the incident-light meter calibration constant

In terms of exposure value, the right-hand side becomes

EV = log2 (E S / C)
When applied to the left-hand side of the exposure equation, EV denotes actual
combinations of camera settings; when applied to the right-hand side, EV denotes
combinations of camera settings required to give the nominally “correct” exposure. The
formal relationship of EV to luminance or illuminance has limitations. Although it
usually works well for typical outdoor scenes in daylight, it is less applicable to scenes
with highly atypical luminance distributions, such as city skylines at night. In such
situations, the EV that will result in the best picture often is better determined by
subjective evaluation of photographs than by formal consideration of luminance or
illuminance.
For a given luminance and film speed, a greater EV results in less exposure, and for fixed
exposure (i.e., fixed camera settings), a greater EV corresponds to greater luminance or
illuminance.
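The two right-hand-side forms can be sketched in Python as follows (function names are illustrative):

    import math

    def ev_from_luminance(L, iso=100, K=12.5):
        # reflected-light form: EV = log2(L * S / K)
        return math.log2(L * iso / K)

    def ev_from_illuminance(E, iso=100, C=250):
        # incident-light form: EV = log2(E * S / C)
        return math.log2(E * iso / C)

    print(round(ev_from_luminance(4096)))  # 15: a bright sunlit scene at ISO 100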
EV and APEX
The Additive system of Photographic EXposure (APEX) proposed in the 1960 ASA
standard for monochrome film speed, ASA PH2.5-1960, extended the concept of
exposure value to all quantities in the exposure equation by taking base-2 logarithms,
reducing application of the equation to simple addition and subtraction. In terms of
exposure value, the left-hand side of the exposure equation became

Ev = Av + Tv

where Av (aperture value) and Tv (time value) were defined as:

Av = log2 A²

and

Tv = log2 (1/T)

with

• A the relative aperture (f-number)
• T the exposure time (“shutter speed”) in seconds

Av and Tv represent the numbers of stops from f/1 and 1 second, respectively.
Use of APEX required logarithmic markings on aperture and shutter controls, however,
and these never were incorporated in consumer cameras. With the inclusion of built-in
exposure meters in most cameras shortly after APEX was proposed, the need to use the
exposure equation was eliminated, and APEX saw little actual use.
Though it remains of little interest to the end user, APEX has seen a partial resurrection
in the Exif standard, which calls for storing exposure data using APEX values.
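A small Python sketch, assuming the definitions above, confirms that the APEX sum reproduces the EV computed directly:

    import math

    def aperture_value(A):
        # Av = log2(A^2): the number of stops from f/1
        return math.log2(A ** 2)

    def time_value(T):
        # Tv = log2(1/T): the number of stops from 1 second
        return math.log2(1 / T)

    A, T = 8.0, 1 / 125
    print(aperture_value(A) + time_value(T))  # about 12.97
    print(math.log2(A ** 2 / T))              # the same EV, computed directly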
EV as a measure of luminance and illuminance
For a given ISO speed and meter calibration constant, there is a direct relationship
between exposure value and luminance (or illuminance). Strictly, EV is not a measure of
luminance or illuminance; rather, an EV corresponds to a luminance (or illuminance) for
which a camera with a given ISO speed would use the indicated EV to obtain the
nominally correct exposure. Nonetheless, it is common practice among photographic
equipment manufacturers to express luminance in EV for ISO 100 speed, as when
specifying metering range (Ray 2000, 318) or autofocus sensitivity. And the practice is
long established; Ray (2002, 592) cites Ulffers (1968) as an early example. Properly, the
meter calibration constant as well as the ISO speed should be stated, but this seldom is
done.
Values for the reflected-light calibration constant K vary slightly among manufacturers; a
common choice is 12.5 (Canon, Nikon, and Sekonic). Using K = 12.5, the relationship
between EV at ISO 100 and luminance L is then

L = 2^(EV−3) cd/m²
Values of luminance at various values of EV based on this relationship are shown in
Table 3. Using this relationship, a reflected-light exposure meter that indicates in EV can
be used to determine luminance.
As with luminance, common practice among photographic equipment manufacturers is to
express illuminance in EV for ISO 100 speed when specifying metering range.
The situation with incident-light meters is more complicated than that for reflected-light
meters, because the calibration constant C depends on the sensor type. Two sensor types
are common: flat (cosine-responding) and hemispherical (cardioid-responding). Illuminance is measured with a flat sensor; a typical value for C is 250 with illuminance in lux.
Using C = 250, the relationship between EV at ISO 100 and illuminance E is then

E = 2.5 × 2^EV lx
Values of illuminance at various values of EV based on this relationship are shown in
Table 3. Using this relationship, an incident-light exposure meter that indicates in EV can
be used to determine illuminance.
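Both conversions are easy to sketch in Python (illustrative names; the printed values match Table 3):

    def luminance_cd_m2(ev):
        # L = 2^(EV - 3) cd/m^2 for ISO 100 and K = 12.5
        return 2 ** (ev - 3)

    def illuminance_lx(ev):
        # E = 2.5 * 2^EV lx for ISO 100 and C = 250
        return 2.5 * 2 ** ev

    for ev in (0, 10, 16):
        print(ev, luminance_cd_m2(ev), illuminance_lx(ev))
    # 0  0.125  2.5
    # 10 128    2560.0
    # 16 8192   163840.0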
Although illuminance measurements may indicate appropriate exposure for a flat subject,
they are less useful for a typical scene in which many elements are not flat and are at
various orientations to the camera. For determining practical photographic exposure, a
hemispherical sensor has proven more effective. With a hemispherical sensor, typical
values for C are between 320 (Minolta) and 340 (Sekonic) with illuminance in lux. If illuminance is interpreted loosely, measurements with a hemispherical sensor indicate “scene
illuminance”.
Exposure meter calibration is discussed in detail in the Light meter article.
Table 3. Exposure value vs. luminance (ISO 100, K = 12.5) and illuminance
(ISO 100, C = 250)

        Luminance            Illuminance
EV      cd/m²      fL        lx         fc
−4      0.008      0.0023    0.156      0.015
−3      0.016      0.0046    0.313      0.029
−2      0.031      0.0091    0.625      0.058
−1      0.063      0.018     1.25       0.116
 0      0.125      0.036     2.5        0.232
 1      0.25       0.073     5          0.465
 2      0.5        0.146     10         0.929
 3      1          0.292     20         1.86
 4      2          0.584     40         3.72
 5      4          1.17      80         7.43
 6      8          2.33      160        14.9
 7      16         4.67      320        29.7
 8      32         9.34      640        59.5
 9      64         18.7      1280       119
10      128        37.4      2560       238
11      256        74.7      5120       476
12      512        149       10,240     951
13      1024       299       20,480     1903
14      2048       598       40,960     3805
15      4096       1195      81,920     7611
16      8192       2391      163,840    15,221
Chapter-8
F-number
Diagram of decreasing apertures, that is, increasing f-numbers, in one-stop increments;
each aperture has half the light gathering area of the previous one. The actual size of the
aperture will depend on the focal length of the lens.
In optics, the f-number (sometimes called focal ratio, f-ratio, f-stop, or relative
aperture) of an optical system expresses the diameter of the entrance pupil in terms of
the focal length of the lens; in simpler terms, the f-number is the focal length divided by
the "effective" aperture diameter. It is a dimensionless number that is a quantitative
measure of lens speed, an important concept in photography.
Notation
The f-number (f/#) is often notated as N and is given by

N = f/D

where f is the focal length, and D is the diameter of the entrance pupil. By convention,
"f/#" is treated as a single symbol, and specific values of f/# are written by replacing the
number sign with the value. For example, if the focal length is 16 times the pupil
diameter, the f-number is f/16, or N = 16. The greater the f-number, the less light per unit
area reaches the image plane of the system; the amount of light transmitted to the film (or
sensor) decreases with the f-number squared. Doubling the f-number increases the
necessary exposure time by a factor of four.
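A minimal Python sketch of the definition and the square-law exposure trade-off (names are ours):

    def f_number(focal_length, pupil_diameter):
        # N = f / D (any consistent length unit)
        return focal_length / pupil_diameter

    def relative_exposure_time(n_new, n_old):
        # required exposure time scales with the square of the f-number
        return (n_new / n_old) ** 2

    print(f_number(50, 25))               # 2.0, i.e. f/2
    print(relative_exposure_time(16, 8))  # 4.0: doubling N needs 4x the time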
The pupil diameter is proportional to the diameter of the aperture stop of the system. In a
camera, this is typically the diaphragm aperture, which can be adjusted to vary the size of
the pupil, and hence the amount of light that reaches the film or image sensor. The
common assumption in photography that the pupil diameter is equal to the aperture
diameter is not correct for many types of camera lens, because of the magnifying effect of
lens elements in front of the aperture.
Diagram showing why two lenses, with different focal lengths but the same aperture
setting, will produce the same illuminance in the focal plane.
A 100 mm lens with an aperture setting of f/4 will have a pupil diameter of 25 mm. A
135 mm lens with a setting of f/4 will have a pupil diameter of about 33.8 mm. The
135 mm lens' f/4 opening is larger than that of the 100 mm lens but both will produce the
same illuminance in the focal plane when imaging an object of a given luminance.
Other types of optical system, such as telescopes and binoculars, may have a fixed
aperture, but the same principle holds: the greater the focal ratio, the fainter the images
created (measuring brightness per unit area of the image).
Stops, f-stop conventions, and exposure
A Canon 7 mounted with a 50 mm lens capable of an exceptional f/0.95
A 35 mm lens set to f/11, as indicated by the white dot above the f-stop scale on the
aperture ring. This lens has an aperture range of f/2.0 to f/22
The term stop is sometimes confusing due to its multiple meanings. A stop can be a
physical object: an opaque part of an optical system that blocks certain rays. The aperture
stop is the aperture that limits the brightness of the image by restricting the input pupil
size, while a field stop is a stop intended to cut out light that would be outside the desired
field of view and might cause flare or other problems if not stopped.
In photography, stops are also a unit used to quantify ratios of light or exposure, with one
stop meaning a factor of two, or one-half. The one-stop unit is also known as the EV
(exposure value) unit. On a camera, the f-number is usually adjusted in discrete steps,
known as f-stops. Each "stop" is marked with its corresponding f-number, and represents
a halving of the light intensity from the previous stop. This corresponds to a decrease of
the pupil and aperture diameters by a factor of √2, or about 1.414, and hence a halving
of the area of the pupil.
Modern lenses use a standard f-stop scale, which is an approximately geometric sequence
of numbers that corresponds to the sequence of the powers of the square root of 2: f/1,
f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32, f/45, f/64, f/90, f/128, etc. The
values of the ratios are rounded off to these particular conventional numbers, to make
them easier to remember and write down. The sequence above can be obtained as
follows: f/1 = f/(√2)^0, f/1.4 = f/(√2)^1, f/2 = f/(√2)^2, f/2.8 = f/(√2)^3, ...
Shutter speeds are arranged in a similar scale, so that one step in the shutter speed scale
corresponds to one stop in the aperture scale. Opening up a lens by one stop allows twice
as much light to fall on the film in a given period of time, therefore to have the same
exposure at this larger aperture, as at the previous aperture, the shutter speed is set twice
as fast (i.e., the shutter is open half as long); the film will usually respond equally to these
equal amounts of light, since it has the property known as reciprocity. Alternatively, one
could use a film that is half as sensitive to light, with the original shutter speed.
Photographers sometimes express other exposure ratios in terms of 'stops'. Ignoring the
f-number markings, the f-stops make a logarithmic scale of exposure intensity. Given this
interpretation, one can then think of taking a half-step along this scale, to make an
exposure difference of "half a stop".
Fractional stops
Most old cameras had an aperture scale graduated in full stops, but the aperture is
continuously variable, allowing the selection of any intermediate aperture.
Click-stopped aperture became a common feature in the 1960s; the aperture scale was
usually marked in full stops, but many lenses had a click between two marks, allowing a
gradation of one half of a stop.
On modern cameras, especially when aperture is set on the camera body, f-number is
often divided more finely than steps of one stop. Steps of one-third stop (1/3 EV) are the
most common, since this matches the ISO system of film speeds. Half-stop steps are also
seen on some cameras. As an example, the aperture that is one-third stop smaller than
f/2.8 is f/3.2, two-thirds smaller is f/3.5, and one whole stop smaller is f/4. The next few
f-stops in this sequence are
f/4.5, f/5, f/5.6, f/6.3, f/7.1, f/8, etc.
To calculate the steps in a full stop (1 EV) one could use

2^(0×0.5), 2^(1×0.5), 2^(2×0.5), 2^(3×0.5), 2^(4×0.5), etc.

The steps in a half stop (1/2 EV) series would be

2^(0/2×0.5), 2^(1/2×0.5), 2^(2/2×0.5), 2^(3/2×0.5), 2^(4/2×0.5), etc.

The steps in a third stop (1/3 EV) series would be

2^(0/3×0.5), 2^(1/3×0.5), 2^(2/3×0.5), 2^(3/3×0.5), 2^(4/3×0.5), etc.
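These series are easy to generate; here is a short Python sketch, with the conventional marked roundings noted in the comments:

    full  = [2 ** (i * 0.5) for i in range(8)]      # 1 EV steps
    half  = [2 ** (i / 2 * 0.5) for i in range(8)]  # 1/2 EV steps
    third = [2 ** (i / 3 * 0.5) for i in range(8)]  # 1/3 EV steps

    print([round(n, 2) for n in full])
    # [1.0, 1.41, 2.0, 2.83, 4.0, 5.66, 8.0, 11.31] -> marked f/1, f/1.4, ... f/11
    print([round(n, 2) for n in half])
    # [1.0, 1.19, 1.41, 1.68, 2.0, 2.38, 2.83, 3.36] -> marked f/1, f/1.2, f/1.4, ...
    print([round(n, 2) for n in third])
    # [1.0, 1.12, 1.26, 1.41, 1.59, 1.78, 2.0, 2.24] -> marked f/1, f/1.1, f/1.2, ...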
As in the earlier DIN and ASA film-speed standards, the ISO speed is defined only in
one-third stop increments, and shutter speeds of digital cameras are commonly on the
same scale in reciprocal seconds. A portion of the ISO range is the sequence
... 16/13°, 20/14°, 25/15°, 32/16°, 40/17°, 50/18°, 64/19°, 80/20°, 100/21°,
125/22°...
while shutter speeds in reciprocal seconds have a few conventional differences in their
numbers (1/15, 1/30, and 1/60 second instead of 1/16, 1/32, and 1/64).
In practice the maximum aperture of a lens is often not an integral power of √2
(i.e., √2 to the power of a whole number), in which case it is usually a half or third
stop above or below an integral power of √2.
Modern electronically-controlled interchangeable lenses, such as those from Canon and
Sigma for SLR cameras, have f-stops specified internally in 1/8-stop increments, so the
cameras' 1/3-stop settings are approximated by the nearest 1/8-stop setting in the lens.
Standard full-stop f-number scale
Including aperture value AV:
AV -2 -1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
f/# 0.5 0.7 1.0 1.4 2 2.8 4 5.6 8 11 16 22 32 45 64 90 128
Typical one-half-stop f-number scale
f/# 1.0 1.2 1.4 1.7 2 2.4 2.8 3.3 4 4.8 5.6 6.7 8 9.5 11 13 16 19 22
Typical one-third-stop f-number scale
f/# 1.0 1.1 1.2 1.4 1.6 1.8 2 2.2 2.5 2.8 3.2 3.5 4 4.5 5.0 5.6 6.3 7.1 8 9 10 11 13 14 16 18 20 22
Notice that sometimes a number shows on several scales; for example, f/1.2 may be used
in either a half-stop or a one-third-stop system; sometimes f/1.3 and f/3.2 and other
differences are used for the one-third stop scale.
T-stops
Since all lenses absorb some portion of the light passing through them (particularly zoom
lenses containing many elements), T-stops are sometimes used instead of f-stops for
exposure purposes, especially for motion picture camera lenses. The practice became
popular in cinematographic usage before the advent of zoom lenses, where fixed focal
length lenses were calibrated to T-stops: this allowed the turret-mounted lenses to be
changed without affecting the overall scene brightness. Lenses were bench-tested
individually for actual light transmission and assigned T-stops accordingly (the T in
T-stop stands for transmission), but modern cinematographic lenses are now usually
factory-calibrated in T-stops. T-stops measure the amount of light transmitted through the
lens in practice (actually on T-stops the amount of light is measured at the film plane),
and are equivalent in light transmission to the f-stop of an ideal lens with 100%
transmission. Since all lenses absorb some quantity of light, the T-number of any given
aperture on a lens will always be greater than the f-number. In recent years, advances in
lens technology and film exposure latitude have reduced the importance of T-stop values.
In short, f-stops express the focal ratio, while T-stops express transmission.
Sunny 16 rule
An example of the use of f-numbers in photography is the sunny 16 rule: an approximately
correct exposure will be obtained on a sunny day by using an aperture of f/16 and a
shutter speed close to the reciprocal of the ISO speed of the film; for example, using
ISO 200 film, an aperture of f/16 and a shutter speed of 1/200 second. The f-number may
then be adjusted downwards for situations with lower light.
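A minimal Python sketch of this rule of thumb (the function name is ours; it treats the rule strictly as the stated approximation):

    def sunny_16_shutter(iso, n=16):
        # at f/16 in full sun, exposure time is about 1/ISO seconds;
        # other apertures keep the same exposure via t = (N/16)^2 / ISO
        return (n / 16) ** 2 / iso

    print(sunny_16_shutter(200))     # 0.005   -> 1/200 s at f/16
    print(sunny_16_shutter(200, 8))  # 0.00125 -> 1/800 s at f/8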
Effects on image quality
Comparison of f/32 (top-left corner) and f/5 (bottom-right corner)
Depth of field increases with f-number, as illustrated in the image here. This means that
photographs taken with a low f-number will tend to have one subject in focus, with the
rest of the image out of focus. This is frequently useful for nature photography, portraiture, and certain special effects. The depth of field of an image produced at a given f-number is dependent on other parameters as well, including the focal length, the subject
distance, and the format of the film or sensor used to capture the image. Smaller formats
will have a deeper field than larger formats at the same f-number for the same distance of
focus and same angle of view. Therefore, reduced–depth-of-field effects, like those
shown below, will require smaller f-numbers (and thus larger apertures and so potentially
more complex optics) when using small-format cameras than when using larger-format
cameras.
Picture sharpness also varies with f-number. The optimal f-stop varies with the lens
characteristics. For modern standard lenses having 6 or 7 elements, the sharpest image is
often obtained around f/5.6–f/8, while for older standard lenses having only 4 elements
(Tessar formula) stopping to f/11 will give the sharpest image. The reason the sharpness
is best at medium f-numbers is that the sharpness at high f-numbers is constrained by
diffraction, whereas at low f-numbers limitations of the lens design known as aberrations
will dominate. The larger number of elements in modern lenses allows the designer to
compensate for aberrations, so that the lens can give better pictures at lower f-stops.
Light falloff is also sensitive to f-stop. Many wide-angle lenses will show a significant
light falloff (vignetting) at the edges for large apertures. To measure the actual resolution
of the lens at the different f-numbers it is necessary to use a standardized measurement
chart like the 1951 USAF resolution test chart.
Photojournalists have a saying, "f/8 and be there", meaning that being on the scene is
more important than worrying about technical details. The aperture of f/8 gives adequate
depth of field, assuming a 35 mm or DSLR camera, minimum shutter-speed, and ISO
film rating within reasonable limits subject to lighting.
Human eye
Computing the f-number of the human eye involves computing the physical aperture and
focal length of the eye. The pupil can be as large as 6–7 mm wide open, which translates
into the maximum physical aperture.
The f-number of the human eye varies from about f/8.3 in a very brightly lit place to
about f/2.1 in the dark. The presented maximum f-number has been questioned, as it
seems to only match the focal length that assumes outgoing light rays. According to the
incoming rays of light (what we actually see), the focal length of the eye is a bit longer,
resulting in maximum f-number of f/3.2.
Note that computing the focal length requires that the light-refracting properties of the
liquids in the eye are taken into account. Treating the eye as an ordinary air-filled camera
lens may result in a different focal length, thus yielding an incorrect f-number.
Toxic substances and poisons (like Atropine) can significantly reduce the range of
aperture. Pharmaceutical products such as eye drops may also cause similar side-effects.
Focal ratio in telescopes
Diagram of the focal ratio of a simple optical system where f is the focal length and D is
the diameter of the objective
In astronomy, the f-number is commonly referred to as the focal ratio (or f-ratio) notated
as N. It is still defined as the focal length f of an objective divided by its diameter D or by
the diameter of an aperture stop in the system.
For example, a 12", f/8 telescope will have a focal length of 96", which means that light
from distant objects must focus 96" behind the lens or 96" in front of the concave mirror.
Even though the principles of focal ratio are always the same, the application to which
the principle is put can differ. In photography the focal ratio varies the focal-plane
illuminance (or optical power per unit area in the image) and is used to control variables
such as depth of field. When using an optical telescope in astronomy, there is no depth of
field issue, and the brightness of stellar point sources in terms of total optical power (not
divided by area) is a function of absolute aperture area only, independent of focal length.
The focal length controls the field of view of the instrument and the scale of the image
that is presented at the focal plane to an eyepiece, film plate, or CCD.
For example, the SOAR 4 m telescope has a small field of view (~f/16), which is useful
for stellar studies, whereas the LSST 8.4 m telescope, which will cover the entire sky
every 3 days, has a very large field of view (f/1.2), due to a special optical design.
Working f-number
The f-number accurately describes the light-gathering ability of a lens only for objects an
infinite distance away. This limitation is typically ignored in photography, where objects
are usually not extremely close to the camera, relative to the distance between the lens
and the film. In optical design, an alternative is often needed for systems where the object
is not far from the lens. In these cases the working f-number is used. The working
f-number Nw is given by

Nw = 1/(2 NA) ≈ (1 − m) N,
where N is the uncorrected f-number, "NA" is the numerical aperture of the lens, and m is
the lens's magnification for an object a particular distance away. (Note that the
magnification m here is negative for the common case where the image is inverted.) In
photography, the working f-number is described as the f-number corrected for lens
extensions by a "bellows factor". This is of particular importance in macro photography.
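A one-line Python sketch of this correction (assuming the (1 − m) N approximation quoted above; the function name is ours):

    def working_f_number(n, m):
        # Nw ~= (1 - m) * N, with m negative for the usual inverted image
        return (1 - m) * n

    print(working_f_number(11, -1.0))  # 22.0: at 1:1 macro, f/11 acts like f/22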
History
The system of f-numbers for specifying relative apertures evolved in the late nineteenth
century, in competition with several other systems of aperture notation.
Origins of relative aperture
In 1867, Sutton and Dawson defined "apertal ratio" as essentially the reciprocal of the
modern f-number:
In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the
diameter of the stop to the focal length), a certain distance of a near object from it,
between which and infinity all objects are in equally good focus. For instance, in a single
view lens of 6 inch focus, with a 1/4 in. stop (apertal ratio one-twenty-fourth), all objects
situated at distances lying between 20 feet from the lens and an infinite distance from it (a
fixed star, for instance) are in equally good focus. Twenty feet is therefore called the
'focal range' of the lens when this stop is used. The focal range is consequently the
distance of the nearest object, which will be in good focus when the ground glass is
adjusted for an extremely distant object. In the same lens, the focal range will depend
upon the size of the diaphragm used, while in different lenses having the same apertal
ratio the focal ranges will be greater as the focal length of the lens is increased. The terms
'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that
they should, in order to prevent ambiguity and circumlocution when treating of the
properties of photographic lenses.
In 1874, John Henry Dallmeyer called the ratio 1 / N the "intensity ratio" of a lens:
The rapidity of a lens depends upon the relation or ratio of the aperture to the equivalent
focus. To ascertain this, divide the equivalent focus by the diameter of the actual working
aperture of the lens in question; and note down the quotient as the denominator with 1, or
unity, for the numerator. Thus to find the ratio of a lens of 2 inches diameter and 6 inches
focus, divide the focus by the aperture, or 6 divided by 2 equals 3; i.e., 1/3 is the intensity
ratio.
Although he did not yet have access to Ernst Abbe's theory of stops and pupils, which
was made widely available by Siegfried Czapski in 1893, Dallmeyer knew that his
working aperture was not the same as the physical diameter of the aperture stop:
WT
It must be observed, however, that in order to find the real intensity ratio, the diameter of
the actual working aperture must be ascertained. This is easily accomplished in the case
of single lenses, or for double combination lenses used with the full opening, these
merely requiring the application of a pair of compasses or rule; but when double or triple-combination lenses are used, with stops inserted between the combinations, it is
somewhat more troublesome; for it is obvious that in this case the diameter of the stop
employed is not the measure of the actual pencil of light transmitted by the front combination. To ascertain this, focus for a distant object, remove the focusing screen and
replace it by the collodion slide, having previously inserted a piece of cardboard in place
of the prepared plate. Make a small round hole in the centre of the cardboard with a
piercer, and now remove to a darkened room; apply a candle close to the hole, and
observe the illuminated patch visible upon the front combination; the diameter of this
circle, carefully measured, is the actual working aperture of the lens in question for the
particular stop employed.
This point is further emphasized by Czapski in 1893. According to an English review of
his book, in 1894, "The necessity of clearly distinguishing between effective aperture and
diameter of physical stop is strongly insisted upon."
J. H. Dallmeyer's son, Thomas Rudolphus Dallmeyer, inventor of the telephoto lens,
followed the intensity ratio terminology in 1899.
Aperture numbering systems
At the same time, there were a number of aperture numbering systems designed with the
goal of making exposure times vary in direct or inverse proportion with the aperture,
rather than with the square of the f-number or inverse square of the apertal ratio or
intensity ratio. But these systems all involved some arbitrary constant, as opposed to the
simple ratio of focal length and diameter.
For example, the Uniform System (U.S.) of apertures was adopted as a standard by the
Photographic Society of Great Britain in the 1880s. Bothamley in 1891 said "The stops of
all the best makers are now arranged according to this system." U.S. 16 is the same
aperture as f/16, but apertures that are larger or smaller by a full stop use doubling or
halving of the U.S. number, for example f/11 is U.S. 8 and f/8 is U.S. 4. The exposure
time required is directly proportional to the U.S. number. Eastman Kodak used U.S. stops
on many of their cameras at least in the 1920s.
By 1895, Hodges contradicts Bothamley, saying that the f-number system has taken over:
"This is called the f/x system, and the diaphragms of all modern lenses of good
construction are so marked."
Piper in 1901 discusses five different systems of aperture marking: the old and new Zeiss
systems based on actual intensity (proportional to reciprocal square of the f-number); and
the U.S., C.I., and Dallmeyer systems based on exposure (proportional to square of the f-number). He calls the f-number the "ratio number," "aperture ratio number," and "ratio
aperture." He calls expressions like f/8 the "fractional diameter" of the aperture, even
though it is literally equal to the "absolute diameter" which he distinguishes as a different
term. He also sometimes uses expressions like "an aperture of f 8" without the division
indicated by the slash.
Beck and Andrews in 1902 talk about the Royal Photographic Society standard of f/4,
f/5.6, f/8, f/11.3, etc. The R.P.S. had changed their name and moved off of the U.S.
system some time between 1895 and 1902.
Typographical standardization
By 1920, the term f-number appeared in books both as F number and f/number. In
modern publications, the forms f-number and f number are more common, though the
earlier forms, as well as F-number, are still found in a few books; not uncommonly, the
initial lower-case f in f-number or f/number is set in a hooked italic form: ƒ.
Notations for f-numbers were also quite variable in the early part of the twentieth
century. They were sometimes written with a capital F, sometimes with a dot (period)
instead of a slash, and sometimes set as a vertical fraction.
The 1961 ASA standard PH2.12-1961 American Standard General-Purpose Photographic Exposure Meters (Photoelectric Type) specifies that "The symbol for relative
apertures shall be f/ or f: followed by the effective f-number." Note that they show the
hooked italic f not only in the symbol, but also in the term f-number, which today is more
commonly set in an ordinary non-italic face.
Chapter-9
Pinhole Camera
Principle of a pinhole camera. Light rays from an object pass through a small hole to
form an image.
Holes in the leaf canopy project images of a solar eclipse on the ground.
A home-made pinhole camera (on the left), wrapped in black plastic to prevent light
leaks, and related developing supplies.
A pinhole camera is a simple camera without a lens and with a single small aperture —
effectively a light-proof box with a small hole in one side. Light from a scene passes
through this single point and projects an inverted image on the opposite side of the box.
The human eye in bright light acts similarly, as do cameras using small apertures.
Up to a certain point, the smaller the hole, the sharper the image, but the dimmer the
projected image. Optimally, the size of the aperture should be 1/100 or less of the
distance between it and the projected image.
A pinhole camera's shutter is usually manually operated because of the lengthy exposure
times, and consists of a flap of some light-proof material to cover and uncover the
pinhole. Typical exposures range from 5 seconds to hours and sometimes days.
A common use of the pinhole camera is to capture the movement of the sun over a long
period of time. This type of photography is called Solargraphy.
WT
The image may be projected onto a translucent screen for real-time viewing (popular for
observing solar eclipses). Pinhole cameras with CCDs are often used for surveillance
because they are difficult to detect.
Invention of pinhole camera
As far back as the 4th century BC, Greeks such as Aristotle and Euclid wrote on
naturally-occurring rudimentary pinhole cameras. For example, light may travel through
the slits of wicker baskets or the crossing of tree leaves. (The circular dapples on a forest
floor, actually pinhole images of the sun, can be seen to have a bite taken out of them
during partial solar eclipses opposite to the position of the moon's actual occultation of
the sun because of the inverting effect of pinhole lenses.)
Ibn al-Haytham (Alhazen) published this idea in the Book of Optics in 1021 AD. He
improved on the camera after realizing that the smaller the pinhole, the sharper the
image (though the less light). He provided the first clear description for construction
of a camera obscura (Lat. dark chamber). As a side benefit of his invention, he was
credited with being the first man to shift physics from a philosophical to an
experimental basis.
In the 5th century BC, the Mohist philosopher Mo Jing (墨經) in ancient China mentioned the effect of an inverted image forming through a pinhole. The image of an
inverted Chinese pagoda is mentioned in Duan Chengshi's (d. 863) book Miscellaneous
Morsels from Youyang written during the Tang Dynasty (618–907). Along with
experimenting with the pinhole camera and the burning mirror of the ancient Mohists, the
Song Dynasty (960–1279 AD) Chinese scientist Shen Kuo (1031–1095) experimented
with camera obscura and was the first to establish geometrical and quantitative attributes
for it.
Ancient pinhole camera effect caused by balistrarias in the Castelgrande in Bellinzona
In the 13th century, Robert Grosseteste and Roger Bacon commented on the pinhole
camera. Between 1000 and 1600, men such as Ibn al-Haytham, Gemma Frisius, and
Giambattista della Porta wrote on the pinhole camera, explaining why the images are
upside down. Pinhole devices provide safety for the eyes when viewing solar eclipses
because the event is observed indirectly, the diminished intensity of the pinhole image
being harmless compared with the full glare of the Sun itself.
Around 1600, Giambattista della Porta added a lens to the pinhole camera. It was not
until 1850 that a Scottish scientist by the name of Sir David Brewster actually took the
first photograph with a pinhole camera. Until recently it was believed that Brewster
himself coined the term "Pinhole" in "The Stereoscope". The earliest reference to the
term "Pinhole" has been traced back to almost a century before Brewster to James
Ferguson's Lectures on select Subjects. Sir William Crookes and William de Wiveleslie
Abney were other early photographers to try the pinhole technique.
Selection of pinhole size
An example of a 20 minute exposure taken with a pinhole camera
A photograph taken with a pinhole camera using an exposure time of 2s
Generally, a smaller pinhole (with a thinner surface that the hole goes through) will result
in sharper image resolution as the projected circle of confusion is smaller at the image
plane. An extremely small hole, however, can produce significant diffraction effects and
a less clear image due to the wave properties of light. Additionally, vignetting occurs as
the diameter of the hole approaches the thickness of the material in which it is punched,
because the sides of the hole obstruct the light entering at anything other than 90 degrees.
The best pinhole is perfectly round (since irregularities cause higher-order diffraction
effects), and in an extremely thin piece of material. Industrially produced pinholes benefit
from laser etching, but a hobbyist can still produce pinholes of sufficiently high quality
for photographic work.
Some examples of photographs taken using a pinhole camera.
One method is to start with a sheet of brass shim, metal reclaimed from an aluminium
drinks can, or tin foil/aluminum foil, using fine sandpaper to reduce the thickness of
the centre of the material to a minimum before carefully creating a pinhole with a
suitably sized needle.
A method of calculating the optimal pinhole diameter was first attempted by Jozef
Petzval. The formula used today was evolved by Lord Rayleigh:

d = 1.9 √(f λ)

where d is diameter, f is focal length (distance from pinhole to focal plane) and λ is
the wavelength of light.
For standard black-and-white film, a wavelength of light corresponding to yellow-green
(550 nm) should yield optimum results. For a pinhole-to-film distance of 1 inch (25 mm),
this works out to a pinhole 0.22 mm in diameter; for 5 cm, the appropriate diameter
is 0.32 mm.
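A small Python sketch of Rayleigh's formula, reproducing the two worked values above (function name is ours):

    import math

    def optimal_pinhole_diameter_mm(focal_length_mm, wavelength_mm=550e-6):
        # Rayleigh: d = 1.9 * sqrt(f * lambda); 550 nm = 550e-6 mm (yellow-green)
        return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

    print(round(optimal_pinhole_diameter_mm(25), 2))  # 0.22 mm for f = 1 inch
    print(round(optimal_pinhole_diameter_mm(50), 2))  # 0.32 mm for f = 5 cm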
The depth of field is basically infinite, but this does not mean that no optical blurring
occurs. The infinite depth of field means that image blur depends not on object distance,
but on other factors, such as the distance from the aperture to the film plane, the aperture
size, and the wavelength(s) of the light source.
Pinhole camera construction
Pinhole cameras are usually handmade by the photographer for a particular purpose. In its
simplest form, the photographic pinhole camera consists of a light-tight box with a
pinhole in one end, and a piece of film or photographic paper wedged or taped into the
other end. A flap of cardboard with a tape hinge can be used as a shutter. The pinhole is
usually punched or drilled using a sewing needle or small diameter bit through a piece of
tinfoil or thin aluminum or brass sheet. This piece is then taped to the inside of the light
tight box behind a hole cut through the box. An oatmeal box can be made into an excellent pinhole camera.
Pinhole cameras are often constructed with a sliding film holder or back so that the
distance between the film and the pinhole can be adjusted. This allows the angle of view
of the camera to be changed and also the effective f-stop ratio of the camera. Moving the
film closer to the pinhole will result in a wide angle field of view and a shorter exposure
time. Moving the film farther away from the pinhole will result in a telephoto or narrow
angle view and a longer exposure time.
Pinhole cameras can also be constructed by replacing the lens assembly in a conventional
camera with a pinhole. In particular, compact 35 mm cameras whose lens and focusing
assembly has been damaged can be reused as pinhole cameras—maintaining the use of
the shutter and film winding mechanisms. As a result of the enormous increase in
f-number while maintaining the same exposure time, one must use a fast film in direct
sunshine.
Pinholes (homemade or commercial) can be used in place of the lens on an SLR. Use
with a digital SLR allows metering and composition by trial and error, and is effectively
free, so is a popular way to try pinhole photography.
Calculating the f-number & required exposure
A pinhole camera made from an oatmeal box. The pinhole is in the centre. The black
plastic which normally surrounds this camera has been removed.
A fire hydrant photographed by a pinhole camera made from a shoe box, exposed on
photographic paper (top). The length of the exposure was 40 seconds. There is noticeable
flaring in the bottom-right corner of the image, likely due to extraneous light entering the
camera box.
The f-number of the camera may be calculated by dividing the distance from the pinhole
to the imaging plane (the focal length) by the diameter of the pinhole. For example, a
camera with a 0.02 inch (0.5 mm) diameter pinhole, and a 2 inch (50 mm) focal length
would have an f-number of 2/0.02 (50/0.5), or 100 (f/100 in conventional notation).
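A short Python sketch of this calculation, plus the square-law exposure penalty relative to an ordinary lens aperture (names are ours):

    def pinhole_f_number(focal_length_mm, pinhole_diameter_mm):
        # same definition as for a lens: N = focal length / aperture diameter
        return focal_length_mm / pinhole_diameter_mm

    def exposure_factor(n, n_ref=16):
        # how many times longer the exposure is than at a reference f-number
        return (n / n_ref) ** 2

    print(pinhole_f_number(50, 0.5))  # 100.0 -> f/100
    print(exposure_factor(100))       # about 39x the exposure needed at f/16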
Due to the large f-number of a pinhole camera, exposures will often encounter reciprocity
failure. Once exposure time has exceeded about 1 second for film or 30 seconds for
paper, one must compensate for the breakdown in linear response of the film/paper to
intensity of illumination by using longer exposures.
Other special features can be built into pinhole cameras such as the ability to take double
images, by using multiple pinholes, or the ability to take pictures in cylindrical or
spherical perspective by curving the film plane.
These characteristics could be used for creative purposes. Once considered an obsolete
technique from the early days of photography, pinhole photography is from time to time a
trend in artistic photography.
Related cameras, image forming devices, or developments from it include Franke's
widefield pinhole camera, the pinspeck camera, and the pinhead mirror.
NASA (via the NASA Institute for Advanced Concepts) has funded initial research into
the New Worlds Mission project, which proposes to use a pinhole camera with a diameter
of 10 m and focal length of 200,000 km to image Earth-sized planets in other star
systems.
World's largest pinhole camera
In an abandoned F-18 hangar at the closed El Toro fighter base in Irvine, California, a
team of six photographer artists and an army of assistants created the world's largest
pinhole camera, using 1.5 miles (2.4 km) of 2 inches (5.1 cm) wide black Gorilla Tape
and 40 US gallons (150 l) of black spray paint to make the hangar light-tight. The aim
was to make a black-and-white negative print of the Marine Corps air station with its
control tower and runways, with the San Joaquin Hills in the background. The purpose
was to subscribe to the Legacy Project, a photographic compilation and record of the
airfield's history before it is transformed into a giant urban park, as well as to demonstrate
to the digital world the value of print making the 168-year-old way.
A huge piece of muslin cloth was made light sensitive by coating it with 80 litres of
gelatin silver halide, and it was hung from the ceiling at a distance of about 80 feet (24
m) from a pinhole, just under .25 inches (0.64 cm) in diameter, situated 15 feet (4.6 m)
above ground level in the wall. The distance between the pinhole and the cloth was
determined to be 80 feet (24 m) for best coverage, and the exposure time was calculated
at 35 minutes. The opaque negative image print was developed in an Olympic-swimming-pool-size tray with 600 US gallons (2,300 l) of traditional developer and
1,200 US gallons (4,500 l) of fixer, and was washed using fire hoses attached to two fire
hydrants. The resulting finished print was nearly 108 ft (33 m) wide and 85 ft (26 m) high
and was exhibited for the first time at the Art Center College of Design in Pasadena,
California, on September 6, 2007.
Chapter-10
Science of Photography
The science of photography refers to the use of science, such as chemistry and physics,
in all aspects of photography. This applies to the camera, its lenses, physical operation of
the camera, electronic camera internals, and the process of developing film in order to
take and develop pictures properly.
Law of Reciprocity
Exposure ∝ ApertureArea × ExposureTime × SceneLuminance
The law of reciprocity describes how light intensity and duration trade off to make an
exposure—it defines the relationship between shutter speed and aperture, for a given total
exposure. Changes to any of these elements are often measured in units known as
"stops"; a stop is equal to a factor of two.
Halving the amount of light exposing the film can be achieved by any of the following:
1. Closing the aperture by one stop
2. Decreasing the shutter time (increasing the shutter speed) by one stop
3. Cutting the scene lighting by half
Likewise, doubling the amount of light exposing the film can be achieved by the opposite
of one of these operations.
The luminance of the scene, as measured on a reflected light meter, also affects the
exposure proportionately. The amount of light required for proper exposure depends on
the film speed, which can be varied in stops or fractions of stops. With either of these
changes, the aperture or shutter speed can be adjusted by an equal number of stops to get
to a suitable exposure.
Light is most easily controlled through the use of the camera's aperture (measured in
f-stops), but it can also be regulated by adjusting shutter speed. Using faster or slower
film is not usually something that can be done quickly, at least with roll film. Large
format cameras use individual sheets of film, and each sheet can be a different speed.
Also, a larger-format camera with a Polaroid back allows switching between backs
containing different speed polaroids. Digital cameras can easily adjust the film speed they
are simulating by adjusting the exposure index, and many digital cameras can do so
automatically in response to exposure measurements.
For example, starting with an exposure of 1/60th of a second at f/16, the depth of field
could be made shallower by opening up the aperture to f/4, an increase in exposure of
4 stops. To compensate, the shutter speed would also need to be increased by 4 stops,
that is, the exposure time adjusted down to 1/1000th of a second. Closing down the
aperture limits the resolution due to the diffraction limit.
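A minimal Python sketch of this stop arithmetic (the function name is ours):

    def compensated_shutter(t_seconds, stops_opened):
        # each stop of extra aperture halves the required exposure time
        return t_seconds / 2 ** stops_opened

    # f/16 -> f/4 opens up 4 stops, so 1/60 s becomes 1/960 s, which a shutter
    # dial rounds to the conventional marking of 1/1000 s.
    print(compensated_shutter(1 / 60, 4))  # about 0.00104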
The reciprocity law specifies the total exposure, but the response of a photographic
material to a constant total exposure may not remain constant for very long exposures in
very faint light, such as photographing a starry sky, or very short exposures in very bright
light, such as photographing the sun. This is known as reciprocity failure of the material
(film, paper, or sensor).
Lenses
A photographic lens is usually composed of several lens elements, which combine to
reduce the effects of chromatic aberration, coma, spherical aberration, and other
aberrations. A simple example is the three-element Cooke triplet, still in use over a
century after it was first designed, but many current photographic lenses are much more
complex.
Most but not all aberrations can be reduced by using a smaller aperture. They can also be
reduced dramatically by using an aspheric element, but these are more complex to grind
than spherical or cylindrical lenses. However, with modern manufacturing techniques the
extra cost of manufacturing aspherical lenses is decreasing, and small aspherical lenses
can now be made by molding, allowing their use in inexpensive consumer cameras.
Fresnel lenses are not used in cameras even though they are extremely light and cheap,
because they produce poor image quality.
All lens design is a compromise between numerous factors, not excluding cost. Zoom
lenses (i.e. lenses of variable focal length) involve additional compromises and therefore
normally do not match the performance of prime lenses.
When a camera lens is focused to project an object some distance away onto the film or
detector, the objects that are closer in distance, relative to the distant object, are also
approximately in focus. The range of distances that are nearly in focus is called the depth
of field. Depth of field generally increases with decreasing aperture diameter (increasing
f-number). The unfocused blur outside the depth of field is sometimes used for artistic
effect in photography. The subjective appearance of this blur is known as bokeh.
If the camera lens is focused at or beyond its hyperfocal distance, then the depth of field
becomes large, covering everything from half the hyperfocal distance to infinity. This
effect is used to make "focus free" or fixed-focus cameras.
Motion blur
Motion blur is caused when either the camera or the subject moves during the exposure.
This causes a distinctive streaky appearance to the moving object or the entire picture (in
the case of camera shake).
Motion blur can be used artistically to create the feeling of speed or motion, as with
running water. An example of this is the technique of "panning", where the camera is
moved so it follows the subject, which is usually fast moving, such as a car. Done
correctly, this will give an image of a clear subject, but the background will have motion
blur, giving the feeling of movement. This is one of the more difficult photographic
techniques to master, as the movement must be smooth, and at the correct speed. A
subject that gets closer or further away from the camera may further cause focusing
difficulties.
Light trails are another photographic effect where motion blur is used. The lines of
light visible in long-exposure photographs of roads at night are one example of this effect.
This is caused by the cars moving along the road during the exposure. The same principle
is used to create star trail photographs.
Generally, motion blur is something that is to be avoided, and this can be done in several
different ways. The simplest way is to limit the shutter time so that there is very little
movement of the image during the time the shutter is open. At longer focal lengths, the
same movement of the camera body will cause more motion of the image, so a shorter
shutter time is needed. A commonly cited rule of thumb is that the shutter speed in
seconds should be about the reciprocal of the 35 mm equivalent focal length of the lens in
millimeters. For example, a 50 mm lens should be used at a minimum speed of 1/50 sec,
and a 300 mm lens at 1/300 of a second. This can cause difficulties when used in low
light scenarios, since exposure also decreases with shutter time.
Motion blur due to subject movement can usually be prevented by using a faster shutter
speed. The exact shutter speed will depend on the speed at which the subject is moving.
For example, a very fast shutter speed will be needed to "freeze" the rotors of a
helicopter, whereas a slower shutter speed will be sufficient to freeze a runner.
A tripod may be used to avoid motion blur due to camera shake. This will stabilize the
camera during the exposure. A tripod is recommended for exposure times longer than
about 1/15 second. There are additional techniques which, in conjunction with use of a
tripod, ensure that the camera remains very still. These may employ use of a remote
actuator, such as a cable release or infrared remote switch to activate the shutter, so as to
avoid the movement normally caused when the shutter release button is pressed directly.
The use of a "self timer" (a timed release mechanism that automatically trips the shutter
release after an interval of time) can serve the same purpose. Most modern single-lens
reflex cameras (SLRs) have a mirror lock-up feature that eliminates the small amount of
shake produced by the mirror flipping up.
Focus
Focus is the tendency for light rays to reach the same place on the image sensor or film,
independent of where they pass through the lens. For clear pictures, the focus must be adjusted for subject distance, because rays from objects at different distances strike the lens at different angles. In modern photography, focusing is often accomplished automatically.
The autofocus system in modern SLRs uses a sensor in the mirror box to measure contrast. The sensor's signal is analyzed by an application-specific integrated circuit (ASIC), which tries to maximize the contrast pattern by moving lens elements. The ASICs in modern cameras also include algorithms for predicting subject motion, and other advanced features.
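The contrast-maximizing search can be pictured as a simple hill climb. The actual in-camera algorithms are proprietary, so the sketch below is purely illustrative: contrast_at() is a hypothetical stand-in that models measured contrast as peaking at the in-focus lens position.

    # Illustrative hill-climbing sketch of contrast-maximising autofocus.
    # contrast_at() is a hypothetical stand-in for the AF sensor reading;
    # real cameras use proprietary, far more sophisticated algorithms.

    def contrast_at(lens_position, in_focus_at=42.0):
        return 1.0 / (1.0 + (lens_position - in_focus_at) ** 2)

    def autofocus(position=0.0, step=8.0, min_step=0.25):
        best = contrast_at(position)
        while step >= min_step:
            moved = False
            for candidate in (position + step, position - step):
                c = contrast_at(candidate)
                if c > best:          # keep moving while contrast improves
                    position, best, moved = candidate, c, True
                    break
            if not moved:             # overshot the peak: refine the step
                step /= 2.0
        return position

    print(f"lens settled at {autofocus():.2f}")  # converges near 42.0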
Film grain resolution
Black-and-white film has a "shiny" side and a "dull" side. The dull side is the emulsion, a gelatin that suspends an array of silver halide crystals. These crystals contain silver grains that determine how sensitive the film is to light exposure and how fine or grainy the negative and the print will look. Larger grains mean faster exposure but a grainier appearance; smaller grains look finer but require more exposure to activate. The graininess of film is indicated by its ISO rating, generally a multiple of 10 or 100; lower numbers mean finer grain but slower film, and vice versa.
Diffraction limit
Since light propagates as waves, the patterns it produces on the film are subject to the wave phenomenon known as diffraction, which limits the image resolution to features on the order of several times the wavelength of light. Diffraction is the main effect limiting the sharpness of optical images from lenses stopped down to small apertures (high f-numbers), while aberrations are the limiting effect at large apertures (low f-numbers). Since diffraction cannot be eliminated, the best possible lens for a given operating condition (aperture setting) is one that produces an image whose quality is limited only by diffraction. Such a lens is said to be diffraction limited.
The diffraction-limited optical spot size on the CCD or film is proportional to the f-number (it is about equal to the f-number times the wavelength of light, which is near 0.0005 mm), making the overall detail in a photograph proportional to the size of the film or CCD divided by the f-number. For a 35 mm camera at f/11, this limit corresponds to about 6,500 resolution elements across the width of the film (36 mm / (11 × 0.0005 mm) ≈ 6,500).
In other words, a camera cannot resolve two distant points unless their path lengths to the two sides of the open lens aperture differ by at least half a wavelength; otherwise they will both be in phase at the same point on the film or CCD.
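The arithmetic above is easy to reproduce. The sketch below assumes, as the text does, a wavelength of about 0.0005 mm, and estimates the number of resolvable elements as film width divided by the diffraction spot size.

    # Reproducing the diffraction estimate from the text: spot size on the
    # film is roughly f-number times the wavelength of light, so the number
    # of resolvable elements is film width / (f-number * wavelength).

    WAVELENGTH_MM = 0.0005   # ~500 nm, the value assumed in the text

    def resolution_elements(film_width_mm, f_number):
        spot_mm = f_number * WAVELENGTH_MM
        return film_width_mm / spot_mm

    n = resolution_elements(36.0, 11)
    print(f"~{n:.0f} elements across a 36 mm frame at f/11")  # ~6545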
Contribution to noise (grain)
Quantum efficiency
Light comes in particles, and the energy of a light particle (the photon) is the frequency of the light times Planck's constant. A fundamental property of any photographic method is how efficiently it collects light on its photographic plate or electronic detector.
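As a worked example of that relation, E = h × f, the snippet below computes the energy of a photon of green light; the 550 nm wavelength is an illustrative choice.

    # Worked example of E = h * f, with the frequency obtained from the
    # speed of light divided by the wavelength.

    PLANCK_H = 6.626e-34      # Planck's constant, J*s
    LIGHT_C = 2.998e8         # speed of light, m/s

    wavelength_m = 550e-9     # green light (illustrative choice)
    frequency_hz = LIGHT_C / wavelength_m
    energy_j = PLANCK_H * frequency_hz
    print(f"photon energy ~ {energy_j:.2e} J "
          f"(~{energy_j / 1.602e-19:.2f} eV)")   # ~3.6e-19 J, ~2.25 eV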
CCDs and other photodiodes
Photodiodes are reverse-biased semiconductor diodes in which an intrinsic layer with very few charge carriers prevents electric current from flowing. Depending on the material, photons may have enough energy to raise an electron from the highest filled band (the valence band) to the lowest empty band (the conduction band). The electron and the "hole", the empty space it leaves behind, are then free to move in the electric field and carry a current, which can be measured. The fraction of incident photons that produce carrier pairs depends largely on the semiconductor material.
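A rough way to picture this material dependence: a photon can create an electron-hole pair only if its energy exceeds the band gap. The sketch below assumes silicon's band gap of about 1.12 eV; the 2.25 eV green-light figure comes from the worked example above, and the 0.5 eV infrared value is an illustrative contrast.

    # Sketch of the band-gap condition: a photon creates a carrier pair
    # only if its energy exceeds the semiconductor's band gap.
    # Silicon's ~1.12 eV gap is an assumed, approximate value.

    SILICON_GAP_EV = 1.12

    def creates_carrier_pair(photon_energy_ev, band_gap_ev=SILICON_GAP_EV):
        return photon_energy_ev >= band_gap_ev

    for label, ev in (("green light, 2.25 eV", 2.25),
                      ("mid-infrared, 0.5 eV", 0.5)):
        print(f"{label}: detected by silicon? {creates_carrier_pair(ev)}")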
Photomultiplier tubes
Photomultiplier tubes are vacuum phototubes that amplify light by accelerating the
photoelectrons to knock more electrons free from a series of electrodes. They are among
the most sensitive light detectors but are not well suited to photography.
Aliasing
Aliasing can occur in optical and chemical processing, but it is most common and most easily understood in digital processing. It occurs whenever an optical or digital image is sampled or re-sampled at a rate too low for its resolution. Some digital cameras and scanners include anti-aliasing filters, which reduce aliasing by intentionally blurring the image to match the sampling rate. Film-developing equipment used to make prints of different sizes commonly increases the graininess of the smaller prints through aliasing.
It is usually desirable to suppress, before sampling, both noise such as grain and any real detail of the object too fine to be represented at the sampling rate, since otherwise that content will alias.
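A one-dimensional sketch makes the effect concrete: a 9 Hz signal sampled at only 10 Hz is indistinguishable from a 1 Hz signal. The frequencies are arbitrary choices for the demonstration.

    # Minimal 1-D illustration of aliasing: a signal sampled below its
    # Nyquist rate shows up as a spurious lower frequency.

    import math

    SIGNAL_HZ = 9.0    # real frequency in the scene/image
    SAMPLE_HZ = 10.0   # sampling rate: too low (Nyquist needs > 18 Hz)

    samples = [math.sin(2 * math.pi * SIGNAL_HZ * n / SAMPLE_HZ)
               for n in range(10)]

    # The samples equal a phase-inverted 1 Hz sine: the alias frequency
    # is |9 Hz - 10 Hz| = 1 Hz.
    alias = [math.sin(2 * math.pi * 1.0 * n / SAMPLE_HZ) for n in range(10)]
    print(all(abs(s + a) < 1e-9 for s, a in zip(samples, alias)))  # True

An anti-aliasing filter suppresses the too-fine content (here, the 9 Hz component) before sampling; the intentional optical blur described above does the same thing for images.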