23  Imaging Using Wave Optics
A number of standard and novel methods to optically image biological and other
materials are discussed in this chapter to give the reader a sense of the large variety
of tools available. We start with a survey of the arsenal of newer light microscopies
available for the study of biological materials, in particular. Another wavelike property of light, its polarization, can be used in several optical polarization methods to
study biomolecules. Earlier, we discussed the important imaging technique of MRI
in Chapter 18; this chapter concludes with a discussion of two other wave-related
imaging techniques: electron microscopy and x-ray diffraction/computed tomography (CT) imaging.
1. THE NEW LIGHT MICROSCOPIES
Aside from the resolution needed to form an image of a microscopic object, discussed in the previous chapter for a standard compound microscope, a minimal
amount of contrast is also needed to clearly detect an image. Contrast can be defined
in terms of the visibility of a sample object compared to the background using the
percent contrast,
% Contrast = [(I_bkgd - I_sample)/I_bkgd] × 100,      (23.1)
where the intensities are average values over those portions of the image. Contrast is
determined both by the properties of the object and by those of the microscope. We
can distinguish two fundamental types of contrast: amplitude and phase contrast.
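As an illustrative sketch, Equation (23.1) is straightforward to apply to a digitized micrograph. The short Python example below assumes the image is available as a NumPy array and that masks marking the sample and background regions have already been chosen (for instance, by thresholding); those choices are assumptions, not part of the text.

```python
import numpy as np

def percent_contrast(image, sample_mask, bkgd_mask):
    """Percent contrast per Eq. (23.1): 100 * (I_bkgd - I_sample) / I_bkgd.

    image:       2-D array of pixel intensities
    sample_mask: boolean array, True over the object region
    bkgd_mask:   boolean array, True over the background region
    """
    i_sample = image[sample_mask].mean()   # average intensity over the object
    i_bkgd = image[bkgd_mask].mean()       # average intensity over the background
    return 100.0 * (i_bkgd - i_sample) / i_bkgd

# Toy example: a slightly darker 10 x 10 object on a bright background.
img = np.full((100, 100), 200.0)
img[40:50, 40:50] = 150.0
sample = np.zeros_like(img, dtype=bool)
sample[40:50, 40:50] = True
print(percent_contrast(img, sample, ~sample))   # prints 25.0
```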
Amplitude contrast is due to direct differences in the wave amplitude of the
imaged sample and background light due to absorption or scattering from the sample. This is the basis for several types of microscopy including normal or bright-field
microscopy discussed in the previous chapter. In this technique, the background
appears bright white and objects are imaged by their darker or colored appearance
due to absorption or scattering. Because most biological materials do not absorb
much visible light, usually a colored stain that preferentially sticks to the sample and
is washed from the background is used to enhance the contrast. Before defining phase
contrast, we take a look at several microscopy methods that use amplitude contrast
enhancing schemes.
Very small or thin objects are difficult to see in bright-field microscopy because
of the light background and low contrast. If sufficient scattering occurs from an
object, it can be better viewed using a variation known as dark-field microscopy in
which the background light is blocked by a central stop and only the scattered light
from the object is imaged. Figure 23.1 shows this microscope arrangement. A hollow
cone of light from a special annular aperture is focused on the specimen and the
collection optics are arranged so that only the scattered light, and not the directly transmitted cone of light, is collected and focused by the microscope. Figure 23.2 shows an example of a dark-field image.

FIGURE 23.1 Schematic of dark-field microscope optics. An annular aperture in front of the condenser lens (focusing light on the sample) ensures that none of the unscattered incident light (smooth red line) passes through the collection optics. Only light scattered from the sample (dotted red line) reaches the image plane (not shown).

FIGURE 23.2 A dark-field image of a mosquito head.
Fluorescence microscopy is an important variation of amplitude contrast microscopy. Since most samples are not sufficiently fluorescent, usually fluorescent dyes are used to bind to specific sites on the sample and only fluorescent light is imaged in the microscope. To accomplish this, filters must be used to block other wavelengths of light. Unless a laser is used as a light source of the proper excitation wavelength, an excitation filter is used to limit the incident light to the shorter wavelengths capable of exciting the fluorescent dye. The incident light is used in either a dark-field microscope arrangement or in the arrangement shown in Figure 23.3 to direct excitation light onto the sample. Fluorescent light emitted by the sample is then collected and filtered using a barrier filter that passes only longer wavelength fluorescent light, blocking the incident light. In this way there is no background light except for a stray unwanted fluorescent signal from imperfections in the optics. In Figure 23.3, the dichroic mirror is specially coated to reflect only shorter wavelengths but to transmit only longer wavelengths of light, thereby acting both as two filters and as a beamsplitter. Figure 23.4 shows an image of a multiply labeled fluorescent endothelial cell.

FIGURE 23.3 Schematic of a fluorescence microscope without the imaging optics shown. The dichroic mirror reflects the shorter wavelength light, but transmits the longer wavelength fluorescent light.

FIGURE 23.4 Three-color fluorescence image of an endothelial cell showing the tubulin (green), nucleus (blue), and actin cytoskeleton (red).

FIGURE 23.5 A wave of increase in calcium ion concentration sweeps across an egg cell just after fertilization as monitored by the green fluorescence from a Ca-sensitive dye attached to small dextran molecules. The images are taken 5 s apart and show the Ca wave starting around the 1:30 o'clock position and spreading across the cell.

Recent developments of new multicolor fluorescent dyes for use in microscopy have been partly responsible for a revolution in fluorescent microscopy. Aside from advances in scientists' ability to label specific molecules with a dye, many of the newer dyes have their fluorescence controllable by specific environmental changes. For example, certain dyes can serve as sensors of local pH, with their fluorescence properties depending on pH, whereas others can serve to monitor calcium ions, Ca2+, the important messenger and regulating ion in a cell, because their fluorescence is affected by the binding of
calcium. An even newer class of fluorescent dyes can be used as optical biosensors to detect conformational changes in macromolecules or binding of ligands (small molecules with specific binding sites) to those molecules. In this way, not only can the locations of specific macromolecules to which the dyes are bound be monitored, but so can their physiological state (Figure 23.5).

As already mentioned, most biological samples for microscopy are essentially completely transparent to visible light, absorbing and scattering very little light, and therefore having very poor contrast (hence, the use of stains and fluorescent dyes). However, all such samples do have somewhat different refractive indices than the surrounding solvent and are therefore called phase objects. These produce a phase shift in the light waves they transmit relative to those through the background, more or less as shown in the last chapter in Figure 22.2. If light is simply allowed to pass through the sample and be imaged, the relative phase shifts will not change the intensity of the light and the objects will be invisible. However, encoded phase information in the light passing through the sample can be used to provide phase contrast in several types of microscopies. We discuss two major types: phase contrast and differential-interference-contrast (DIC) microscopy. In both cases the crux of the technique is to separately change the relative phase of the light that interacts with the sample and the undeviated light so that when they are recombined, there will be intensity differences in the images due to interference effects.

FIGURE 23.6 Optics of the phase contrast microscope. A cone of light is produced by the annulus and focused on the sample. The undeviated beam is focused by the objective onto a groove in a phase plate located at the source image plane. The groove both attenuates the undeviated beam intensity and shifts its phase with respect to the diffracted light, most of which passes through the rest of the phase plate and is brought to focus at the object image plane. This image is then further magnified by the eyepiece (not shown).
Phase contrast microscopy is similar to dark-field microscopy in that a hollow cone of light is focused onto the sample, but now that light is collected by the objective lens (Figure 23.6). In the absence of a sample, the objective lens produces an image of the annulus used to produce the cone of light at a plane known as the "source image plane." However, light that interacts with the sample will be diffracted from that path (dotted lines in Figure 23.6), with a small phase shift from passing through the more optically dense sample as well. The intensity of this diffracted light will be much less than that of the undeviated light and it will be brought to a focus at a different plane (because the object distance is much less than the light source distance from the objective), known as the "object image plane."

FIGURE 23.7 Phase contrast micrograph of a paramecium.

FIGURE 23.8 Optics of the differential interference contrast (DIC) microscope. The Wollaston prisms are used to create and recombine two beams with slightly offset centers, as well as to introduce a 180° phase shift between the two. If the relative phase of these two is shifted by the sample, then when allowed to interfere after the analyzer, a high-contrast image is formed (not shown). Note magenta = red + blue.
In the phase contrast microscope a device known as a phase plate is
inserted at the source image plane to improve the contrast. A groove in
the phase plate aligned with the image of the annulus is used to shift the
phase of the undeviated light relative to the diffracted light. An absorption coating in the groove also decreases the intensity of the undeviated
beam so that it more closely matches the intensity of the diffracted light, providing even better contrast. Phase plates are usually built
into objective holders and matched pairs of condenser and objective
lenses are used to ensure proper alignment. The total phase difference
between undeviated and diffracted portions results in intensity variations
in the image that are directly proportional to optical path differences
between the sample and background regions. Depending on whether the
phase plate gives an additional positive or negative phase shift with
respect to the diffracted light, the background can be made dark or bright
(Figure 23.7).
In differential-interference-contrast (DIC) microscopy there is a
complete physical separation of the incident light into two closely
spaced beams that probe adjacent portions of the sample. These beams
are then used to generate an interference pattern that produces intensity differences in the object image plane. Two special prisms, known
as Wollaston prisms, are used both to produce two in-phase beams
from one and to recombine them after passing through the sample into
one final beam with a 180° phase shift introduced between the two
(Figure 23.8). In the absence of a sample and with a uniform background, the two beams completely cancel after recombination due to
the 180° phase difference. With a sample present in one beam but not the other, the
extra phase differences between the two beams give rise to bright interference light.
In this case the image intensities are not proportional to optical path differences,
but rather, because of the two spatially separated beams, to the rate of change of
optical path transversely (in the direction of the separation of the two beams) across
the object. That’s the reason for the term “differential interference”. Because the
rate of change, rather than the absolute optical path difference, is important in DIC
microscopy, edge contrast is greater and thinner samples can be better imaged
(Figure 23.9).
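To make the "differential" character of DIC concrete, the sketch below compares a made-up optical path profile of a thin, flat transparent object with its transverse derivative; the profile and its numbers are illustrative assumptions, not values from the text. The derivative is essentially zero everywhere except at the edges, which is why DIC images show such strong edge contrast.

```python
import numpy as np

# Hypothetical optical path difference (OPD) profile across a thin, flat
# transparent object on a uniform background (position in micrometers, OPD in nm).
x = np.linspace(0.0, 100.0, 1001)
opd = np.where((x > 40) & (x < 60), 50.0, 0.0)   # 50 nm of extra path over the object

# A bright-field-like signal follows the OPD itself: the flat top gives little contrast.
# A DIC-like signal follows d(OPD)/dx: it is large only at the two edges of the object.
dic_signal = np.gradient(opd, x)

print("OPD on the plateau (nm):", opd[500])                 # 50 nm, but uniform
print("max |d(OPD)/dx| at the edges:", np.abs(dic_signal).max())
```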
Wollaston prisms function by spatially separating the two different polarization
components of light. They are able to do this because the calcite crystal of which they
are made has different refractive indices along two different crystal axes as discussed
further in the next section. After the two beams of light travel through the sample this
process is reversed in a second matched prism and the two beams recombine after a
180° phase shift introduced by an asymmetric placement of the second prism. At this
point, even though the beams are out of phase and overlapping, they cannot interfere
with each other because their polarization directions are orthogonal and hence independent. A polarizer oriented at a 45° angle between these directions serves to analyze that portion of each beam and to allow them to subsequently interfere and
produce the image. Thus, the Wollaston prisms are serving solely as a beamsplitter
and recombiner, whereas the polarization properties of the beams are not used to produce the DIC image. Polarized light can be used in microscopy in the polarization
microscope discussed in the next section.
Most current versions of the above microscopic techniques use modern methods
of digital recording and computers to further increase the resolution and contrast
of images. Developments in detector technology have made use of CCDs (charge-coupled devices) and image intensifiers very commonplace in microscopy. CCD
video cameras, based on arrays of discrete light-sensitive detectors, allow digital
recording of time-dependent processes in two-dimensional arrays of picture elements,
or pixels. These arrays are now relatively inexpensive and are widely used in digital
cameras, whose pictures can then be printed out on ordinary computer
printers. Digitally stored video frames from microscopy can be computer-enhanced and manipulated to allow improved resolution, contrast,
and quantitative measurements using special software.
Within the last ten years or so many new microscope techniques
have been developed that use laser illumination, including confocal
microscopy and multiphoton microscopy. Laser-scanning confocal
microscopy focuses a laser beam to an extremely small spot within the
sample and images light only from that spot onto the detector. A pinhole in front of the detector serves to eliminate out-of-focus light from
other regions of the sample, only allowing light from the focused spot
to be collected. The spot is then scanned over the sample, by moving
either the microscope stage or laser beam, in a raster pattern to map out
the sample image, which has remarkable depth and a lifelike appearance
(Figure 23.10).
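The image-forming bookkeeping of a laser-scanning confocal microscope is a simple raster loop: park the focused spot, record the light passing the pinhole, store it in a pixel, and move on. The sketch below is schematic only; measure_intensity(x, y) is a hypothetical stand-in for the scanner and detector hardware, which is not described in the text.

```python
import numpy as np

def measure_intensity(x_um, y_um):
    """Hypothetical stand-in for the pinhole detector reading with the focused
    laser spot parked at (x_um, y_um); here just a synthetic fluorescent blob."""
    return np.exp(-((x_um - 25.0)**2 + (y_um - 25.0)**2) / 100.0)

def confocal_scan(width_um=50.0, height_um=50.0, step_um=0.5):
    """Raster-scan the spot over the field and assemble the image pixel by pixel."""
    nx, ny = int(width_um / step_um), int(height_um / step_um)
    image = np.zeros((ny, nx))
    for j in range(ny):            # slow scan axis
        for i in range(nx):        # fast scan axis
            image[j, i] = measure_intensity(i * step_um, j * step_um)
    return image

img = confocal_scan()
print(img.shape, img.max())        # (100, 100) and a peak of 1.0 at the blob center
```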
Multiphoton microscopy uses a pulsed laser to provide an intense beam of
low-energy photons that is scanned across the sample similar to confocal
microscopy. When two (for two-photon microscopy) or more (for three- or multiphoton microscopy) of these photons with identical energy are simultaneously
absorbed by a fluorescent molecule they can provide the same total energy that a
single photon would in the usual fluorescence microscope. The incident photon
beam is tuned to the proper wavelength so that two or three or more photons, when
combined, give an energy resonant with the fluorescent material, producing subsequent fluorescence emission. Quantum mechanics allows this additive resonance only when the multiple photons are absorbed nearly simultaneously,
requiring very high laser intensities. One important advantage of this method is
that there is virtually no absorption of these lower-energy photons at any other
location in the sample where the beam is not focused and the density of photons
is not sufficient to allow multiphoton absorption. Thus instead of using high-energy photons that can damage the sample to produce fluorescence, one can use
much lower energy photons and excite the fluorescent molecules through the combined energy of several photons only where the beam is focused. This technique
is sensitive enough to image the intrinsic or autofluorescent light from the amino
acid tryptophan and other fluorescent macromolecular groups within the sample
itself without the addition of fluorescent dyes. High-resolution, high-contrast,
three-dimensional images can be obtained using these methods even with samples
as thick as 0.5 mm (Figure 23.11).
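The resonance condition described above amounts to requiring that n incident photons together supply the energy of the single photon that would normally excite the fluorophore, so the incident wavelength must be n times the one-photon excitation wavelength. A minimal sketch (the 450 nm fluorophore is just an illustrative value):

```python
H = 6.626e-34   # Planck's constant (J s)
C = 2.998e8     # speed of light (m/s)

def incident_wavelength_nm(one_photon_nm, n_photons):
    """Wavelength at which n photons together match the one-photon excitation energy."""
    e_required = H * C / (one_photon_nm * 1e-9)   # energy of the one-photon transition (J)
    e_per_photon = e_required / n_photons         # each incident photon supplies 1/n of it
    return H * C / e_per_photon * 1e9             # back to nm; equals n * one_photon_nm

print(incident_wavelength_nm(450.0, 2))   # 900 nm for two-photon excitation
print(incident_wavelength_nm(450.0, 3))   # 1350 nm for three-photon excitation
```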
FIGURE 23.9 DIC image of a deer tick. Note the sharp edges and high contrast.

FIGURE 23.10 Laser-scanning confocal microscopic images of mouse oocytes showing microtubules in red and actin filaments in green.

FIGURE 23.11 Confocal microscopic image of anaphase in a cultured epithelial cell showing chromosomes (blue), spindle apparatus (green), and actin (red).
2. OPTICAL ACTIVITY; APPLICATIONS OF LIGHT POLARIZATION
FIGURE 23.12 Combining two orthogonally polarized waves (red and blue E field vectors) at 45° with respect to the vertical. (a) Equal amplitude waves in phase to give a vertically polarized wave (magenta), (b) unequal amplitude waves still in phase to give a linearly polarized wave at a fixed angle with the vertical (magenta), and (c) equal amplitude waves 90° out of phase (red and blue), so that the tip of the net E field vector rotates around in a circle as the wave propagates (magenta). In this case a left-handed circularly polarized wave is shown, handedness defined looking backward at the source.
In Chapter 19, we introduced the concept of polarization of a light beam and discussed linearly polarized light as well as the use of Polaroid as a polarizing device to
preferentially absorb light with its electric field oriented along one direction. Here,
we further discuss the notion of circularly and elliptically polarized light and the use
of polarization methods in the study of biomolecules.
Consider two light waves with the same frequency linearly polarized along perpendicular directions as shown in Figure 23.12. If the amplitudes and phases of the
two waves are equal, then the superposition of the two waves results in a linearly
polarized wave along the vertical direction in (a). With different amplitudes for the
two waves, the resultant wave will still be linearly polarized so long as the phases are
equal (b). If two waves of equal amplitude are 90° (π/2 rad, or λ/4) out of phase, then when one component is at zero the other will be at a maximum or minimum. The superposition of those two waves describes a helical path: the tip of the electric field vector executes circular motion in the transverse wavefront plane, which itself travels along at speed c in a vacuum (c). Depending on the relative phases, the circular polarization can be left- or right-handed. Handedness is defined in terms of an observer looking back at the source; the light is right-handed if E rotates clockwise.
We can make these ideas quantitative by writing out expressions for the two linearly polarized electric fields (say, along the x- and y-axes) as

E_x = E_ox cos(ωt),    E_y = E_oy sin(ωt),      (23.2)

where we have assumed that E_x leads E_y by 90° (at time 0, E_x is at a maximum and E_y is zero; after 1/4 of a period, E_x is zero and E_y has increased to a maximum, etc.), and E_ox and E_oy are the amplitudes of the fields. By using the trigonometric identity cos²θ + sin²θ = 1, we find that the components of the vector E satisfy

(E_x/E_ox)² + (E_y/E_oy)² = 1,      (23.3)
which is the equation of an ellipse. If the two amplitudes are equal (so that E_ox = E_oy = E_o), then Equation (23.3) becomes the equation of a circle (E_x² + E_y² = E_o², with radius E_o), the case shown in Figure 23.12c. In the transverse plane the tip of E will describe these closed ellipses or circles, but the light wave is actually propagating at the speed of light along the z-direction and the tip of E actually describes a helical path in space. The projection of the helix onto the x–y plane will be a circle or an ellipse, depending on the amplitudes of the x- and y-components of E. In a similar way one can show that linearly polarized light can be considered to be the sum of in-phase right- and left-handed circularly polarized light. For example, if the left-handed circularly polarized beam shown in Figure 23.12c is added to its mirror-image right-handed beam, the resulting beam has an E that is vertically polarized (imagine the summation in the figure: the horizontal components will always cancel with the mirror-image beam). This idea is used below in a discussion of optical activity.
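These statements are easy to verify numerically. The sketch below, using the component conventions of Equations (23.2), checks that equal-amplitude components 90° out of phase keep a constant field magnitude (Equation (23.3) with equal amplitudes) and that adding a beam to its opposite-handed mirror image cancels one component, leaving linearly polarized light; which laboratory direction survives depends on the axis conventions of Figure 23.12.

```python
import numpy as np

omega = 2 * np.pi              # arbitrary angular frequency (rad/s)
t = np.linspace(0, 1, 1000)    # one full period
E0 = 1.0

# One circularly polarized beam, per Eq. (23.2): E_x leads E_y by 90 degrees.
Ex_1, Ey_1 = E0 * np.cos(omega * t), E0 * np.sin(omega * t)
print("constant magnitude (Eq. 23.3):", np.allclose(Ex_1**2 + Ey_1**2, E0**2))

# Its mirror-image beam of opposite handedness: the sine component is reversed.
Ex_2, Ey_2 = E0 * np.cos(omega * t), -E0 * np.sin(omega * t)

# Superposition: the sine components cancel, leaving light linearly polarized
# along the remaining (cosine) axis with twice the amplitude.
Ex_sum, Ey_sum = Ex_1 + Ex_2, Ey_1 + Ey_2
print("one component cancels:", np.allclose(Ey_sum, 0.0), " surviving amplitude:", Ex_sum.max())
```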
Circularly polarized light can be produced most easily
by sending linearly polarized light through a special device
known as a quarter-wave plate, or λ/4 plate. These are made
from a birefringent (double-refracting) material, one having
two crystal axes with different refractive indices, as mentioned in the last section in
connection with a Wollaston prism. When linearly polarized light passes through
such a material, the different polarization components along either axis travel at
different speeds, because v = c/n₁ or c/n₂, and will develop phase differences.
Furthermore, one beam, called the “ordinary” beam, will be transmitted undeviated,
whereas the other, called the “extraordinary” beam, will be refracted and physically
separated from the ordinary beam (see Figure 23.8). By adjusting the thickness of the
material, a quarter-wave phase difference can be introduced between the two beams,
producing fields governed by Equations (23.2). In general this will produce elliptically polarized light, but if the wave plate is adjusted to have its axes at 45° to the incident polarization direction, then circularly polarized light is produced.
Example 23.1 Suppose that a vertically polarized beam of 500 nm light is incident on a birefringent crystal of mica with a mean index of refraction of 1.552 and which has its crystal axes making a 45° angle with respect to the vertical. If the birefringence of the crystal is Δn = 0.006, find the minimum thickness of the crystal along the transmission direction of the beam so as to produce circularly polarized light.

Solution: If we call the unknown thickness t, then the optical path difference of the two equal components of the vertical polarization along the two crystal axes will be tΔn (see Equation (22.2)). In order to produce circularly polarized light, this difference should be set equal to 1/4 wavelength of the light, so that, as in Figure 23.12c, after leaving the crystal there will be two equal components of electric field that are 90° out of phase, combining to produce a circularly polarized beam. We therefore require tΔn = (1/4)(500 nm), so that t = 2.1 × 10⁻⁵ m = 0.021 mm. Mica can be cleaved and polished to produce such quarter-wave plates designed for different wavelengths.
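The arithmetic of Example 23.1 generalizes directly: the minimum plate thickness giving a quarter-wave of optical path difference is t = λ/(4Δn). A minimal sketch using the mica numbers from the example:

```python
def quarter_wave_thickness_m(wavelength_nm, delta_n):
    """Minimum thickness giving a quarter-wave optical path difference:
    t * delta_n = wavelength / 4  =>  t = wavelength / (4 * delta_n)."""
    return (wavelength_nm * 1e-9) / (4.0 * delta_n)

t = quarter_wave_thickness_m(500.0, 0.006)
print(f"t = {t:.2e} m = {t * 1e3:.3f} mm")   # about 2.1e-5 m, i.e., 0.021 mm
```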
Many biological systems contain components that are anisotropic. These are ordered
structures that look different in different directions; for example, the fibrils within a muscle fiber or the crystal-like proteins of the lens of the eye. Polarized light will interact with
electrons in such a material in different ways depending on relative orientations and can
be used to gain information about such structures. Because of the anisotropy there will be
changes in the polarization of transmitted light. Polarization microscopy is yet another
way to get images of such anisotropic structures. Linearly polarized light is used as a light
source and the imaged light through the objective is passed through a crossed-polarizer.
In the absence of any sample, the background light is completely extinguished by the
crossed-polarizer. Any resolvable structures that produce some depolarization of the incident light will then produce a bright image (Figure 23.13).
FIGURE 23.13 Polarization microscope image of rat skin color-coded by the birefringence retardation (see text), which is related to the degree of depolarization of the transmitted light.
FIGURE 23.14 Handedness changes in a plane mirror; the left-handed slinky helix (spiraling counterclockwise around the helix axis) changes to right-handed in a plane mirror image (seen on the left).
Most individual biological macromolecules are asymmetric,
meaning that they appear different from their mirror image. Most simple molecules are symmetric. Water, carbon dioxide, and many more
complex molecules look the same as their mirror images. Biopolymers
tend to be formed, at least partially, from helical arrays of molecules,
and these will have a handedness. Handedness is a property that
changes when viewed in a mirror. As shown in Figure 23.14, a right-handed coiled spring will appear to be a left-handed spring when
viewed in a mirror.
On the other hand, a solution of randomly oriented asymmetric molecules will not produce an image in a polarization microscope because
the solution as a whole is isotropic. However, asymmetric molecules do
have an effect on the polarization properties of light that can be used to
gain information about the macromolecules. Asymmetric molecules are
said to have optical activity and are characterized by different refractive indices for
left- and right-handed circularly polarized light. Asymmetric molecules will interact
differently with left- and right-handed circularly polarized light because of their
handedness.
A simple example may help to clarify this. Imagine a solution of small left-handed
helical molecules. Because the electric field vector of the light interacts with the electrons of the helical molecule, left-handed circularly polarized light will allow a stronger
interaction with the electrons of a left-handed helical molecule, with the ability to drive
them around the helix, and therefore a larger fraction of such light will be absorbed than
would be the case for right-handed circularly polarized light. This is somewhat similar
to the reason why Polaroid film, with its oriented long polymers, preferentially absorbs
light polarized along the polymers: the electric field can then interact more strongly
with polymer electrons.
Because linearly polarized light can be considered a sum of left and right circularly polarized light, a solution of optically active molecules probed with linearly
polarized light will interact differently with each of these components and affect the
polarization of the transmitted light. If the sample absorbs no light, then the light
remains linearly polarized, but has its direction of polarization rotated due to different effective optical paths for each polarization. Molecules that rotate the polarization in a left-handed sense are called levorotatory (L) and those that rotate the
polarization in a right-handed sense are called dextrorotatory (D). It is a fact that all
proteins and most other biological molecules are found only in the L form in nature.
When linearly polarized light is incident on an optically active solution, there can be both phase and amplitude changes associated with the equivalent left- and right-handed circular polarization components making up the incident linear polarization. These can be characterized by two quantities: the circular birefringence Δn,

Δn = (n_L - n_R),      (23.4)

for the phase changes, where n_L and n_R are the refractive indices for left and right circularly polarized light; and the circular dichroism Δε,

Δε = ε_L - ε_R,      (23.5)

for the amplitude changes, where ε_L and ε_R are the absorption coefficients for left and right circularly polarized light. Recall from Chapter 19 (Section 6) that the absorption coefficient is a measure of the intensity of light absorbed in a unit path length and per unit concentration of sample.
Both the circular birefringence and dichroism values depend on the wavelength of
light used on a given optically active sample. Spectra showing the wavelength dependence of the birefringence (using the technique known as optical rotary dispersion or
ORD experiments) and of the dichroism (using circular dichroism or CD experiments)
can be used to characterize biological materials. These techniques are used most to
probe the optically active regions of macromolecules, determining their helical content
or following relatively slow kinetic changes that can occur
from conformational changes due to environmental factors or
to the binding of small ligands. Figure 23.15 shows an example CD spectrum for standards in particular conformations
and for a real protein, myoglobin.
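The "LSQ fit" shown in Figure 23.15 amounts to modeling a measured ellipticity spectrum as a linear combination of the standard helix, sheet, and coil spectra and solving for the three fractions by least squares. The sketch below is illustrative only: the basis spectra and the "measured" data are synthetic stand-ins for the real tabulated standards.

```python
import numpy as np

# Hypothetical standard spectra (ellipticity vs. wavelength) and a synthetic measurement.
wavelengths = np.arange(190, 251, 1)                    # nm
rng = np.random.default_rng(0)
helix = 60000 * np.exp(-((wavelengths - 192) / 8.0)**2) - 30000 * np.exp(-((wavelengths - 220) / 12.0)**2)
sheet = 30000 * np.exp(-((wavelengths - 196) / 9.0)**2) - 15000 * np.exp(-((wavelengths - 216) / 10.0)**2)
coil = -40000 * np.exp(-((wavelengths - 198) / 7.0)**2)

true_fractions = np.array([0.70, 0.10, 0.20])           # pretend composition of the protein
basis = np.column_stack([helix, sheet, coil])           # one column per conformation
measured = basis @ true_fractions + rng.normal(0, 500, wavelengths.size)   # add noise

# Least-squares estimate of the helix/sheet/coil fractions from the spectrum.
fit, *_ = np.linalg.lstsq(basis, measured, rcond=None)
print("fitted fractions (helix, sheet, coil):", np.round(fit, 3))
```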
FIGURE 23.15 (top) Molecular model of the protein sperm whale myoglobin showing helical and coil regions; (middle) standard CD curves for pure helix, sheet, or coil; (bottom) CD spectrum of the sperm whale myoglobin, showing the best fit to the experimental data as a mix of three different standard components.

3. ELECTRON MICROSCOPY
In our discussion of the resolution possible in a microscope,
the resolving power, or the closest distance at which two distinct objects can lie and still be distinguished under optimal conditions, was given by Equation (22.15) to be no less than λ/4. For visible light this limits the resolution under the best
conditions to about 200 nm. Any further improvement on
this limit requires that the wavelength of the probing radiation be decreased. Although ultraviolet microscopes have
been developed, the most feasible method for improving
resolution is to use electrons in place of light. We show in
the next chapter that electrons have an associated wavelength that depends on their momentum (or, in turn, on their
energy). Just as with photons, where higher-energy photons
have a correspondingly shorter wavelength, we show that
higher-energy electrons also have a shorter wavelength.
Exactly what it means for an electron or another elementary
“particle” to have a wavelength is explored further in the
next chapter. For now, we can use the notion of a wave
packet introduced in Chapter 19 (Section 5) to picture an
electron as having wavelike properties.
Electrons accelerated through a potential difference of
50 kV, typical for an electron microscope (EM), have a
wavelength of 0.005 nm, allowing a theoretical improvement in resolution over a light microscope by a factor of
40,000. Unfortunately, other problems limit the practical
resolution of the EM, although using a particular variation
of electron microscopy has allowed resolutions approaching 0.1 nm at which individual atoms can be directly
imaged. The recently developed method of scanning tunneling microscopy (STM), described in the next chapter,
allows even higher resolution of surface topography with a
resolution of better than 0.1 nm.
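As a preview of the de Broglie relation developed in the next chapter, the sketch below estimates the electron wavelength for a given accelerating voltage using the non-relativistic expression λ = h/√(2 m_e eV); at 50 kV this gives roughly 0.005 nm, consistent with the value quoted above (a relativistic correction changes the result by only a few percent at these voltages).

```python
import math

H = 6.626e-34      # Planck's constant (J s)
M_E = 9.109e-31    # electron mass (kg)
Q_E = 1.602e-19    # electron charge (C)

def electron_wavelength_nm(voltage_v):
    """Non-relativistic de Broglie wavelength of an electron accelerated through
    voltage_v: momentum p = sqrt(2 m e V), wavelength = h / p."""
    p = math.sqrt(2.0 * M_E * Q_E * voltage_v)
    return H / p * 1e9

print(electron_wavelength_nm(50e3))   # about 0.0055 nm at 50 kV
```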
The general plan of an EM is shown in Figure 23.16. An
electron “gun,” or filament and anode combination, is the
source of electrons boiled off a tungsten filament heated to
very high temperature, similar to a light bulb. The electrons
are accelerated through a large potential difference of typically 40–100 kV, reducing the wavelength of the electron as
it gains kinetic energy. The entire microscope column is
evacuated to a fairly high vacuum, reducing energy losses of
the electrons from collisions with air molecules. Because
electrons can be steered in a magnetic field, a “magnetic
condenser lens” is used to focus the electron beam at or near
the sample plane down to a spot size of several microns.
Samples are supported on copper grids with an array of
typically 100 μm × 100 μm square holes coated with a
thin uniform layer of a supporting material, such as carbon,
that is essentially transparent to electrons. Copper is used
because it is a good electrical and thermal conductor, carrying away any heat from the interaction with the beam, and also minimizing the distortion of the focusing magnetic field. The sample is mounted on a movable stage for positioning it in the focused electron beam.

FIGURE 23.16 Schematic diagram of a transmission electron microscope. The lenses are electromagnets; the entire electron beam path is evacuated. Typical detectors are fluorescent screens, photographic film, or image intensifiers to record digital images.

FIGURE 23.17 TEM of two fd virus particles with one single-stranded DNA from a virus in the lower center.

FIGURE 23.18 Schematic diagram of the final portion of the scanning electron microscope showing the scanned electron beam, in multiple images, steered by magnetic scanning coils, and the backscattered electron detector.
After interacting with the sample, electrons are collected by a (magnetic) objective lens and a magnified image is projected onto a detector
by a system of other lenses. Overall magnifications can range from 1000
to over 300,000 times, limited mainly by aberrations in the magnetic
lenses. The simplest detector is a fluorescent screen that emits light
when struck by the electrons and can be viewed directly by eye or with
some further magnification using optical lenses. Other detectors include
photographic film or image intensifiers that allow digitization and computer enhancement of images.
Three types of EMs can be distinguished: transmission (TEM),
scanning (SEM), and the less common scanning transmission (STEM).
Normal TEM, developed in the 1940s, basically creates a greatly
enlarged shadow of the sample at the detector. Samples must be very
thin for good resolution and thin sections or evaporated deposits of solutions are used. Biological materials are made of smaller atoms (mostly
H, C, O, N, P, S) that do not strongly interact with the electron beam and
so the contrast is very poor. In order to “see” the sample, some contrast
improvement is needed in order to cast a shadow. The usual method is
to deposit a heavy metal with high electron scattering power (such as
osmium, platinum, gold, or uranium) to coat the structures of interest.
This is done in a variety of ways including “shadowing” by direct
deposit of heavy metals on the grid, or by negative staining in which
heavy metal salt solution fills the region immediately around particles of
interest producing a dark background edge around bright images of the transparent
objects of interest. Figure 23.17 shows a TEM image of two virus particles with a
closed loop of its single-stranded DNA.
SEM uses a tightly focused electron beam (spot size of ~10 nm) directed off-axis at a heavy-metal-coated sample as shown schematically in Figure 23.18. The
beam is made to scan along the sample in a raster, or TV-like, pattern by a set of scanning coils that steer the electron beam and are coupled to the detectors. Electrons or
radiation “reflected” from the sample at each scanned point are collected and used to
create an image on a TV screen as the electron beam is scanned across the sample.
The spatial pattern of the scanned beam is reproduced in the spatial pattern displayed
on the TV screen. A variety of different signals from the electron–sample interaction
can be measured using different detectors in the SEM, including backscattered electrons and secondary electrons released from the sample itself, as well as x-rays and
emitted light. Although the resolution of this method is much lower (~10 nm at best)
than that of the TEM, the depth of focus is extremely large and the images are very three-dimensional and lifelike (Figure 23.19).
STEM was developed to try to collect not only the “reflected” electrons and radiation as in SEM, but also the transmitted electrons that have interacted with the sample. These transmitted electrons undergo two basic types of interactions, elastic and
inelastic, aside from the bulk of the electrons that simply pass through without any
interaction at all. Inelastically scattered electrons lose some energy to the sample
through excitation of target atoms, whereas elastically scattered electrons, fewer in
number, are simply deflected from their path through much larger angles by interaction with the nuclei of target atoms without a change in their energy. The ratio of the
intensities of the elastic to inelastic electron scattering is a characteristic of the particular target atom and increases with the number of protons in the nucleus of the
atom. STEM scans an even more tightly focused electron beam (~0.5 nm) across the
sample simultaneously measuring the elastic and inelastic transmitted electron intensities. Furthermore, the inelastically scattered electrons can be energy-analyzed to
determine their energy loss. STEM pictures are at very high resolution (Figure 23.20)
and can also determine the elemental composition of the sample from point to point. Unfortunately, the fundamental limitation of sample degradation in the electron beam has made STEM less useful in biological imaging than first expected when it was developed in the 1970s.

FIGURE 23.19 SEM of the head of a house fly at 200× magnification (the bar is 100 μm). The structure on the right is a multifaceted eye.

FIGURE 23.20 STEM images of a particular chaperonin, one of a family of large (~10⁶ Da) complexes involved in the folding of proteins, under different solvent conditions, showing images used to reconstruct the detailed shapes shown in the insets on the right. The bar represents 20 nm.
4. X-RAYS: DIFFRACTION AND COMPUTED TOMOGRAPHY (CT)
X-ray photons have wavelengths in the range from about 0.01–10 nm, short enough
to provide atomic resolution according to the equation for resolving power.
Unfortunately, until recently x-rays could not be easily focused and magnified
images, such as have been made with light and electron beams, have not yet been produced with x-rays. (In 1996 scientists developed a simple and effective way to focus
x-rays; this method is expected to lead to many new applications, particularly in
microelectronics.) Even if we had the ability to focus x-rays, their interaction with
biological tissue is so weak that there would be virtually no contrast seen in normal
thin samples used in microscopy. However, x-rays have two properties that make
them extremely useful in both medicine and science. First, because x-rays are a form
of electromagnetic radiation, they diffract from objects of comparable dimension to
their wavelength, similar to the diffraction of light. Because of their atomic-sized
wavelength, x-ray diffraction effects can be used to probe the atomic structure of matter and have been used to determine the structure of many complex biological macromolecules at atomic resolution. Second, because x-ray energies are high, these
photons are capable of passing through otherwise opaque materials and x-rays can be
used to produce “shadow” pictures of internal structures within thick samples, for
example, the human body.
Crystalline materials have a three-dimensional periodic array of their atoms that
can diffract x-rays and produce a pattern of detected x-rays containing information
about the spatial array of the atoms. In a similar way that a one-dimensional array of
slits gives rise to a diffraction pattern with light, the crystalline array of atoms results
in a more complicated pattern of diffracted x-rays. In this case the x-rays are scattered, or diffracted, in all directions from the crystalline array of atoms and interference effects result in a detected pattern of x-ray spots.
FIGURE 23.21 Cross-section of a cubic lattice (shown in two dimensions) showing two sets of Bragg planes with different spacings and the diffraction of an x-ray beam from one set with spacing d. The extra path difference of the lower beam is shown in red and is equal to d sin θ for each of the triangles shown, totaling 2d sin θ.
Consider a simple cubic crystal made of identical atoms in a periodic array, or lattice, with separation distance d, as shown in cross-section in Figure 23.21. The atoms form planes, known as Bragg planes, and the pattern of diffracted x-rays can be determined by imagining that the x-ray beam reflects from these planes in a process known as Bragg diffraction. This picture greatly simplifies the analysis but gives the correct general result. For the x-ray beams shown in the figure, there will be a path difference for beams reflecting from neighboring planes. From the figure, we see that this path difference will depend on the angle θ between the ray and the Bragg plane and is given by 2d sin θ. (Note that θ is
not the usual angle of reflection between the ray and the normal, but is the angle
between the ray and the line of atoms in the plane of reflection.) Constructive interference will occur when this path difference is equal to a whole number of wavelengths and the Bragg equation,
mλ = 2d sin θ,      (23.6)
where m is an integer called the order, defines the location of an interference maximum. X-rays incident at an angle given by Equation (23.6), known as a Bragg angle,
will produce a diffraction peak, or spot, at some distant detector located at the
“reflected” ray. In a noncubic crystal with three different repeat distances along different directions there will be two additional order numbers for the other directions
and a generalized Bragg equation. In this case, the dimensions of the "unit cell," or basic repeating structure, can be found from the locations of the Bragg spots.
Example 23.2 In an x-ray diffraction experiment on a cubic crystal with λ = 0.40 × 10⁻¹⁰ m, find the crystal plane spacing if the first-order maximum occurs at an angle of 6.4°. At what angle will the third-order maximum be found?

Solution: Using Equation (23.6) with m = 1, we have that d = λ/(2 sin θ) = 1.79 × 10⁻¹⁰ m. The third-order maximum will then be found at the angle given by sin θ = 3λ/2d = 0.34, so that θ = 19.6°.
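The same arithmetic as Example 23.2, written out so that any order m can be checked at once; note that a maximum of order m exists only while mλ/2d remains less than 1.

```python
import math

def bragg_angle_deg(wavelength_m, spacing_m, order):
    """Angle theta satisfying m*lambda = 2*d*sin(theta); None if no maximum exists."""
    s = order * wavelength_m / (2.0 * spacing_m)
    return math.degrees(math.asin(s)) if s <= 1.0 else None

lam = 0.40e-10                                      # x-ray wavelength (m)
d = lam / (2.0 * math.sin(math.radians(6.4)))       # spacing from the first-order angle
print(f"d = {d:.3e} m")                             # ~1.79e-10 m
for m in (1, 2, 3):
    print(m, bragg_angle_deg(lam, d, m))            # 6.4, ~12.9, ~19.6 degrees
```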
FIGURE 23.22 The structure of myoglobin.
In the study of macromolecular structure, if a crystal of the macromolecule
can be formed, then x-ray diffraction can often be used to determine the threedimensional arrangement of all its atoms. In such a crystal, the individual scattering centers, or unit cells, may consist of thousands of individual
atoms. In addition to the unit cell dimensions affecting the
observed diffraction pattern, x-ray scattering from the molecules
within the unit cell will affect the pattern due to its “structure
factor.” In general, if there are N atoms per unit cell, there will
be N² peaks in the diffraction pattern from the atoms within the
unit cell. As N increases for larger molecular crystals, the diffraction patterns become extremely complex and rich with information. From detailed studies of such patterns, together with as
much independent information on the macromolecular structure
as possible, the detailed three-dimensional atomic arrangement
has been determined for many macromolecules. Figure 23.22
shows the three-dimensional structure of myoglobin, a protein similar to a subunit of hemoglobin, consisting of 153 amino acids with a total of
1260 atoms. To obtain the current resolution of better than
0.2 nm, more than 9600 diffraction spots were measured and
analyzed. These pictures of the structure conceal the fact that
most macromolecules have extensive flexibility and motion.
Because the x-ray pictures are obtained over relatively long
times, the resulting 3-D structures represent average positions of
the constituent atoms. X-ray diffraction is one of the best methods we have for determining macromolecular structure at an
atomic resolution.
Not all biological materials can be made to crystallize so that
they can be studied by x-ray diffraction. A large class of filamentous macromolecules can, however, be oriented into fibers and
studied by x-ray diffraction even though they are not in regular
crystalline arrays. Special techniques have been developed
for helical proteins and nucleic acids that reveal the symmetries present even
when neighboring oriented helices may not be “in register” along the axial direction. Such methods first led to the structural determination of the helical nature of
DNA by Watson and Crick and to the basic ideas on how DNA transmits genetic
information.
On the much larger dimensional scale of human organs and internal structures,
x-rays penetrate through skin and other soft tissue and travel in straight lines without diffraction. In this geometrical optics limit, they can be used to produce
shadow images of, for example, bones within the body, based entirely on differences in absorption of x-rays. In fact, when Roentgen first discovered x-rays in
1895, within a week he had obtained the first x-ray picture of a hand (Figure 23.23).
The depth of penetration of x-rays depends on the density of the material; denser
materials, such as lead, are more effective in absorbing x-rays. Medical technology uses x-rays to obtain pictures of such structures as bone and teeth in x-ray
radiography. Softer tissues can be pictured best if a dense material is introduced
to increase the contrast. The gastrointestinal tract can be imaged if it is filled with
a dense barium solution that casts a shadow in an x-ray picture. Similarly, water-soluble organic compounds with iodine are used to give contrast for pictures of the
cardiovascular system, the urinary tract or the brain. Mammography can be done
without a contrast agent using low-energy x-rays because these give the greatest
contrast for soft tissues.
These pictures produce two-dimensional projection images, lacking resolution
along the beam direction because the intensity of the x-rays at the detector is determined by an integration or sum through the body along the beam. Thus three-dimensional information is lost on conversion to a two-dimensional picture. Put
another way, there is no depth information in an x-ray picture and doctors must infer
relative depths of neighboring features in these pictures with much care.
Furthermore, it is more difficult to detect small differences in x-ray absorption at
neighboring points because there is no resolution along the beam and therefore many
minor abnormalities in x-ray radiography are not detectable.
To improve this situation, computed tomography (CT; the Greek word tomo
means cut or slice) is able to obtain three-dimensional information from a collection of x-ray pictures taken at different orientations. The original CT machines
developed in the 1970s used a single x-ray source and detector held in precise register on opposite sides of a patient. These were translated across the sample
region, rotated by 1° and scanned across the sample again, and so on, in steps all
around the body, so that a sequence of many pictures was obtained in a few minutes that could then be used to reconstruct the depth information in a three-dimensional image. Today, CT machines use a wide fanlike beam and an array of several hundreds to a few thousand x-ray detectors to decrease the time required to a few seconds (Figure 23.24). The newest machines have stationary
detector arrays with an x-ray beam made to sweep in a circular pattern around the
patient with no moving parts. We show in Chapter 25 that x-rays are generated by
transitions of outer electrons to inner empty electron shells after the inner electrons have been ejected by bombardment with high-speed electrons. In modern CT machines a scanning high-energy electron beam that generates the x-rays is an integral part of this design. Projection data can then be obtained in about 50 ms, fast enough to image a beating heart without motion artifacts.

FIGURE 23.23 The first x-ray picture, obtained in 1896, of the hand of Mrs. Roentgen.
However the various projection data are recorded, a computer
will have a record of digitized intensities that needs to be
processed to reconstruct the image of a cross-sectional slice in the
body. The image consists of a large number of two-dimensional
spots, or pixels, each having some grayscale level, a digital value
representing the brightness. Grayscale displays are sometimes
converted to false color images where the colors represent the
brightness level but have nothing to do with the color of the original tissue being imaged. These brightness scales are normally set
according to the absorption coefficient of the tissue, ε, compared to that of water, ε_w, by the CT number

CT number = 1000 (ε - ε_w)/ε_w.      (23.7)

FIGURE 23.24 Modern CT machine used in hospitals and medical imaging facilities.
Table 23.1 shows the CT numbers for different tissue and media for 60 keV x-rays.
The absorption coefficients of tissue depend on the x-ray beam energy and corrections usually need to be made for this fact. Note that negative CT numbers indicate
that there is less absorption of x-rays than in water.
Table 23.1 CT Numbers for Various Materials*

Material             CT Number
Water                        0
Air                      -1000
Bone                       808
Striated muscle            -48
Fat                       -142

* Using 60 keV x-rays.
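Equation (23.7) is simple to check against Table 23.1: a CT number is just the fractional deviation of a tissue's absorption coefficient from that of water, scaled by 1000. The sketch below converts in both directions; the numerical value used for the water coefficient is purely illustrative.

```python
def ct_number(eps, eps_w):
    """CT number per Eq. (23.7)."""
    return 1000.0 * (eps - eps_w) / eps_w

def eps_from_ct(ct, eps_w):
    """Invert Eq. (23.7) to recover the absorption coefficient."""
    return eps_w * (1.0 + ct / 1000.0)

eps_w = 0.8   # illustrative absorption coefficient of water (arbitrary units)
for name, ct in [("Water", 0), ("Air", -1000), ("Bone", 808), ("Striated muscle", -48), ("Fat", -142)]:
    eps = eps_from_ct(ct, eps_w)
    print(f"{name:16s} eps = {eps:6.3f}  ->  CT number = {ct_number(eps, eps_w):7.1f}")
```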
We briefly try to give the reader a sense of how projection data can be used to
determine the CT numbers for an array of pixels in order to generate a cross-sectional
picture of the body based on x-ray contrast. In our context, the absorption coefficient appears in the relation

I = I_o e^(-εx),  or  log(I_o/I) = εx,      (23.8)

where I and I_o are the transmitted and incident intensities for a tissue thickness x and the log is to base e. Each x-ray beam can be imagined to have traveled through a distance x in the body and the transmitted intensity detected. We imagine that each of N such neighboring parallel beams (the rows) is divided into N intervals of length x/N (the columns), forming a two-dimensional cross-sectional grid of N rows by N columns, with N typically in the range 256–1024. In the pixel display of this slice, the term εx in Equation (23.8) for the ith row, for example, is given by the sum

ε_i x = Σ_(j=1 to N) ε_ij Δx,

where we have labeled the ε_ij values according to the pixel number (ith row and jth column) and have assumed that the pixel width, Δx = x/N, is the same in any direction. In the simplest case, imagine that two sets of parallel x-ray beams are used to define a square grid as shown in Figure 23.25 (left) and that the projected (transmitted) intensity is measured for each beam. Using values for the projected intensities, computer algorithms can determine the ε_ij for the N × N pixels, giving a two-dimensional absorption image.

FIGURE 23.25 (left) An N × N grid of pixels defined by sets of parallel beams. The transmitted (projected) intensities are used to reconstruct the absorption coefficients of each pixel and thus a two-dimensional image based on x-ray absorption. (right) A test object with varying shape and absorption coefficient (symbolized by shading) probed by several x-ray beams to do a backprojection determining the absorption coefficient ε_ij of the overlap region.
In general, more complex patterns of beams can be used (Figure 23.25 right).
Because a set of N × N pixels is needed to image a given plane, a minimum of N² values for ε_ij are needed. These can be obtained from at least that many data points for log(I_o/I), or εx, obtained by imaging the same region of the body at many, many different orientations. Large numbers of equations must be simultaneously solved on a computer; with N = 256, there are at least N² ≈ 65,500 equations to solve.
Various computational techniques have been developed to do these calculations
rapidly.
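A toy version of the reconstruction problem makes the counting argument above concrete: each measured value of log(I_o/I) is a sum of ε_ij Δx over the pixels a beam crosses, so collecting enough beams at different orientations yields a linear system that can be solved for the ε_ij. The sketch below uses a tiny 3 × 3 grid with "beams" along the rows, columns, and both diagonal directions and a brute-force least-squares solve; real scanners use far larger grids and specialized reconstruction algorithms, not this approach.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
true_eps = rng.uniform(0.1, 1.0, size=(N, N))   # unknown per-pixel absorption coefficients
dx = 1.0                                        # pixel width (arbitrary units)

# Each "beam" is the list of (i, j) pixels it crosses: rows, columns, and both diagonal families.
beams = []
beams += [[(i, j) for j in range(N)] for i in range(N)]                                            # rows
beams += [[(i, j) for i in range(N)] for j in range(N)]                                            # columns
beams += [[(i, j) for i in range(N) for j in range(N) if i - j == k] for k in range(-(N - 1), N)]  # diagonals
beams += [[(i, j) for i in range(N) for j in range(N) if i + j == k] for k in range(2 * N - 1)]    # anti-diagonals

# Each measured projection is log(Io/I) = sum of eps_ij * dx along the beam, as in Eq. (23.8).
A = np.zeros((len(beams), N * N))
for b, cells in enumerate(beams):
    for (i, j) in cells:
        A[b, i * N + j] = dx
projections = A @ true_eps.ravel()

# Solve the overdetermined linear system for the eps_ij by least squares.
recovered, *_ = np.linalg.lstsq(A, projections, rcond=None)
print(np.allclose(recovered.reshape(N, N), true_eps))   # True: the slice is reconstructed
```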
With current technology, multiple cross-sectional images can be rapidly obtained
and computer techniques allow these to be superposed to produce 3-D images
(Figure 23.26). These same tomography methods can be applied to other types of
imaging, including ultrasonic (Chapter 11), magnetic resonance (Chapter 18), and to
such nuclear decay imaging as positron emission tomography (PET; discussed in
Chapter 26). The quality of images from CT and MRI scans is often comparable, and
the choice of method depends on the type of tissue to be imaged.
FIGURE 23.26 Three-dimensional rendering of a human heart by CT imaging.

CHAPTER SUMMARY

Contrast is the other major factor, in addition to resolution discussed in the previous chapter, that determines whether an object can be imaged in a microscope. We can distinguish two types of contrast: amplitude and phase. Microscopes that use amplitude contrast include the standard bright-field compound microscope discussed in the previous chapter, as well as the dark-field and fluorescence microscopes. Phase contrast and DIC (differential interference contrast) microscopes use phase contrast to image objects. Newer microscopies use laser-scanning methods to do point-by-point imaging. These include confocal and multiphoton microscopies.

Optical activity refers to the effect of anisotropic molecules on the circular polarization of light. Such molecules have a different effective index of refraction for left- and right-handed circularly polarized light and are said to have circular birefringence

Δn = (n_L - n_R).      (23.4)

They also absorb left- and right-handed circularly polarized light differently and produce circular dichroism,

Δε = ε_L - ε_R,      (23.5)

where ε is the absorption coefficient. This effect can be measured using the optical technique of circular dichroism (CD), and is an important method to determine the percent of helix, beta sheet, and random coil composition of macromolecules.

Transmission electron microscopy uses a high-energy beam of electrons to produce a "shadow" image of microscopic objects with a resolution approaching atomic resolution. Scanning electron microscopy (SEM) is a lower-resolution variation that scans a tightly focused electron beam over the sample and detects backscattered, rather than transmitted, electrons. This method gives a greater depth of focus so that the images look three-dimensional. A less used method combines these methods in the high-resolution scanning transmission EM (STEM).

X-rays can be used to study the structure of crystals, even crystals made from complex macromolecules. The basic crystalline array can be determined using the Bragg equation

mλ = 2d sin θ,      (23.6)

where λ is the wavelength of the x-ray beam, d is the spacing between Bragg planes of the crystal, and θ is the diffraction angle (between the Bragg plane and the incident or exit beam). X-ray beams can be used to produce shadow images through the body because transmission through bone and types of tissue are different. The medical imaging technique known as computed tomography (CT) uses fanlike x-ray beams and multiple detectors to allow images to be reconstructed by computer of cross-sections through the human body at a resolution of about 1 mm.
QUESTIONS
1. Compare image contrast with resolution for a brightfield microscope. How does each enter into producing
an image?
2. What is the function of the dichroic mirror in a fluorescent microscope? (See Figure 23.3.)
3. What are the origins of phase and amplitude contrast?
Are both always present to some extent?
4. Describe the main differences, in your own words,
between phase contrast and differential interference
contrast microscopy.
5. What is the function of the Wollaston polarizing
prisms in DIC optics? Is the fact that the two beams
have different polarizations important in the final
image seen?
6. What are the advantages of multiphoton microscopy
over single-photon methods?
7. Discuss the superposition of two linear polarized
light beams of the same frequency and equal amplitude, one polarized along the x- and one along the
y-axis. What is the result if the two are in phase?
90° out of phase? 180° out of phase?
8. Because a plane mirror reverses left and right, but
does not reverse up and down, if you hold a coiled
right-handed spring and look at its image in a mirror
is there an orientation of the spring that results in a
right-handed image?
9. Simple molecules produced in chemical reactions, even if they have a handedness, are usually produced in nearly equal quantities of left- and right-handed molecules. Biological molecules, on the other hand, are nearly always found in pure left-handed form. What benefits might be derived from only having one form in living materials?
10. A linearly polarized light beam passes through a birefringent material and two beams emerge. If the beams are each made to pass through one slit of a double-slit experiment, will a standard double-slit interference pattern be produced on a distant screen?
11. What is the difference between circular birefringence and circular dichroism?
12. As the accelerating voltage in an electron microscope is increased, what happens to the theoretical magnification? To the sample degradation? To the magnetic field needed to focus the electron beam?
13. What is the purpose of heavy metal deposition in TEM? How does it affect resolution?
14. Can you argue why the backscattered electrons in SEM allow the images to appear much more three-dimensional than the images transmitted electrons produce in TEM?
15. Fill in the details in the derivation of the Bragg equation, Equation (23.6), using Figure 23.21.
16. Why, when you have a dental x-ray taken, are you covered with a heavy lead-coated gown?
17. Contrast how a CT image is obtained with how you perceive depth with two eyes.
MULTIPLE CHOICE QUESTIONS
1. In dark-field microscopy (a) the sample images
darker than the background, (b) an annular aperture is
inserted between the sample and the objective lens,
(c) the image contrast is usually better than that of
bright-field, (d) the samples must be stained to show
up.
2. Fluorescent dyes can be used for all but which of the
following? (a) Imaging calcium concentration variations, (b) imaging pH variations, (c) localizing specific molecules, (d) high-resolution imaging of
molecules.
3. Which of the microscopic techniques usually
requires that the sample be stained? (a) Phase contrast, (b) bright field, (c) DIC, (d) polarizing
microscopy.
4. In DIC microscopy, the edges of microscopic objects
are sharp because (a) that’s where the most stain is,
(b) that’s where there is an extra π phase shift,
(c) that’s where there is the greatest change in index
of refraction, (d) that’s where the greatest polarization
difference occurs.
5. In three-photon microscopy, to excite a fluor at
450 nm the incident wavelength of light should be (a)
150 nm, (b) 450 nm, (c) 900 nm, (d) 1350 nm.
6. In laser-scanning confocal microscopy all of the following are true except (a) the beam is focused to a
very small spot, (b) the beam is moved across the
sample, (c) two or more photons are absorbed at the
same time, (d) the images appear three-dimensional.
7. A circularly polarized beam of light (a) travels in a
spiral around its magnetic field, (b) travels in a spiral
around its propagation direction, (c) has an electric
field vector whose tip rotates in a closed circle, (d) has
an electric field vector whose tip travels in a spiral.
8. Which is not true of a birefringent material? (a) It
must be a solid because it has different indices of
refraction along two different directions, (b) it can
produce two beams of light from one, (c) it can produce circularly polarized light, (d) light can travel
through it with two different speeds.
9. Which of the following is not true of an optically
active molecule? (a) It produces a circular birefringence signal, (b) it produces a circular dichroism signal, (c) it must be asymmetric, (d) a solution of them
can always be imaged in a polarizing microscope.
10. A typical accelerating voltage used in an electron microscope is (a) 100 kV, (b) 1 kV, (c) 10 MV, (d) 100 V.
11. Electron microscope samples must be stained or metal-coated because (a) the atoms are too small to detect otherwise, (b) the samples are not colored otherwise, (c)
the samples do not interact with electrons otherwise, (d)
the samples would evaporate from the grid otherwise.
12. All of the following are consequences of using high
accelerating voltages and small focused spot sizes
in scanning electron microscopy except (a) higher
resolution, (b) decreased heating of the sample,
(c) increased backscattered electrons, (d) more
accurate elemental analysis.
13. Which of the following is not true? 60 keV x-rays are
absorbed by (a) water more than fat, (b) water more
than air, (c) bone more than striated muscle, (d) fat
more than striated muscle.
14. The intensity remaining in a beam after traveling
10 cm through a sample with an absorption coefficient
of 0.2 cm⁻¹ is (a) 1%, (b) 1.4%, (c) 14%, (d) 20%.
PROBLEMS
1. With a compound microscope adjusted poorly, the %
contrast for a certain sample is only 5%. If the microscope is adjusted and the sample intensity is reduced
by 10% and the background intensity is increased by
20%, what is the new % contrast?
2. In three-photon microscopy, if the peak in the absorption band of a fluorescent molecule to be imaged is at
360 nm, what incident frequency of light should be
used?
3. Show that two in-phase linearly polarized beams with
the same frequency but along orthogonal axes (x and
y) superpose to produce a linearly polarized beam
with a polarization direction that depends on the ratio
of their amplitudes. What is this polarization angle if
Eox = Eoy? If Eox = 3Eoy?
4. Show that the tip of the electric field vector produced
by the superposition of equal amplitude electric fields
given in Equation (23.2) rotates in a circle. Viewed
from a location at which the beam is approaching
you, does the E vector rotate clockwise or counterclockwise?
5. A birefringent crystal has a birefringence given by Δn = n1 − n2 = 0.01, where n1 and n2 are the indices of refraction along its two transverse crystal axes at right angles to each other. Suppose a linearly polarized wave with a 550 nm wavelength is polarized at 45° to the crystal axes. If the crystal has a thickness of 1 cm, what will be the path difference between the two waves polarized along the crystal axes when they exit the crystal? What will be the net phase difference (as a fraction of 2π rad, or modulo 2π rad) of the two waves?
6. Suppose that the spot size in an SEM is 10 nm and
that the beam is scanned over a region of 100 μm × 100 μm in a raster pattern, producing a single scanned image in 10 ms. If the overall region is digitized into a 200 × 200 pixel area,
(a) What sample area is represented by 1 pixel?
(b) How long is the beam exposure in each pixel?
(This determines resolution time of the detector.)
7. X-rays with a 0.12 nm wavelength produce a first-order diffraction peak at a Bragg angle of 24°. What
crystal spacing gave rise to this diffraction?
8. A cubic crystal with identical atoms separated by
distance d has sets of Bragg planes separated by
distance d. It also has other symmetry planes, as shown,
for example, in Figure 23.21. Using simple trigonometry, draw a two-dimensional square lattice projection of
the crystal (as in Figure 23.21) and find two other crystal plane spacings in terms of d.
9. If an x-ray beam is incident on a 1.5 cm thick sample
and 98% of the beam is transmitted what is the average
absorption coefficient of the material in units of m⁻¹?
10. Two samples for an x-ray absorption experiment have
the same thickness. With the same incident intensity
one has a 95% transmission and the second has an
85% transmission. What is the ratio of their absorption coefficients?
11. Suppose that an x-ray beam is directed on a tissue
sample and suppose that 99.3% of the beam is transmitted. If a dummy blank sample of water is used
99.5% of the x-rays are transmitted using exactly the
same geometry and beam. What is the CT number of
this sample?