Image Processing System Applications:
from Barbie Cams to the Space Telescope
Robert Kremens, Ph.D
Rochester Institute of Technology
Center for Imaging Science
Digital Imaging and Remote Sensing Group
and
Pixelphysics, Inc.
May 2001
Outline
• Fundamentals of Image Processing for Digital Cameras
• Solid State Image Sensors - CCDs, CMOS, etc.
• System Requirements for Several Applications
– Break
• Hardware Analysis: The Jam Cam
• Hardware Analysis: The Kodak DC210
• Hardware Analysis: The Chandra Orbital X-ray Telescope
Fundamentals of Camera Image
Processing
Robert Kremens
May 2001
Digital Camera Image Processing Pipeline

Color Filter Array (CFA) → Analog Processing & A/D Conv. → White Balance →
Scene Balance → CFA Interpolation → Gamma Correction → RGB to YCC Conv. →
Blurring* → Unsharp Masking (Edge Enh.) → Chroma Subsample → JPEG Compress →
Finished File Format
White Balance
• Usually performed on raw CFA pixels
• White balance attempts to adjust for variations in the illuminant (D65, Tungsten, etc.) by adjusting analog amplifier gain in the R, G, B channels
• What is white?
  – R=G=B= ~255
Implementing White Balance
• Method A - Predetermine the illuminant.
  – Acquire image.
    • Camcorder method - white lens cap pointed at light source.
    • Can be a problem if scene has no white.
  – R, G and B adjustment values calculated to make them scale to ~255.
  – Subsequent images have incoming raw pixels multiplied by adjustment value.
• Can be done on the fly with hardware, or in analog stages (preferable).

[Histogram: pixel count vs. pixel level, 0-255]
Implementing White Balance (cont’d)
• Method B - Adjust each image after acquisition
  – Find area with R~G~B at highest intensity.
    • Examine full image - time consuming.
    • Predetermined small image area - what if no white?
    • Subsampled image - faster.
  – Determine adjustment values for R, G and B.
  – If G isn’t high enough, there is no white.
    • Still possible to scale to gray by adjusting R and B?
    • Or just leave it alone.
• Flash illumination removes much of the need for determining the adjustment parameters, since the color spectrum of the illumination source is known.
Scene Balance
• Adjusts the color balance of an image so that neutral objects are seen as neutral.
• Adjust color planes throughout their range; can use an adjustment that is a function of pixel value.
• Unless the subject is holding a neutral density chart, this is much more of an art than a science.
Implementing Scene Balance
• Look for areas of the image with approximately equal R, G and B values.
  – Look at entire image - time consuming.
  – Look at sub-sampled image - can miss data.
  – Look at blocks of image.
    • Create areas of image by averaging over 20x20 pixels.
• Histogram and scene classification are basic methods - force color histogram to be ‘correct’.
• Adjustment (multiplication) values must not affect overall brightness of image.
  – If R and B need to be increased, G should be decreased also.
• Be careful of interaction w/ White Balance.
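The block-search step above can be sketched as follows — a toy illustration under assumed names and a flat list-of-tuples image layout; the 20x20 block size comes from the slide, the neutrality tolerance is an assumption.

```python
def block_averages(img, w, h, block=20):
    """Average R, G, B over block x block tiles; img is a row-major
    list of (R, G, B) tuples covering a w x h image."""
    tiles = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            sums = [0, 0, 0]
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    for c in range(3):
                        sums[c] += img[y * w + x][c]
            n = block * block
            tiles.append(tuple(s / n for s in sums))
    return tiles

def near_neutral(tiles, tol=10):
    """Keep tiles whose channel averages agree within tol counts —
    candidate gray areas for computing the adjustment values."""
    return [t for t in tiles if max(t) - min(t) <= tol]
```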
CFA Interpolation
• CFA interpolation creates 3 separate bit planes for each pixel location of the sensor.
• Weighted averages are typical, but algorithms vary depending on filter array patterns.
• Can be clever and use adaptive algorithms to reduce subsampling effects and undesirable artifacts (‘zippers’).
Implementing CFA Interpolation (Median method)
• Bayer neighborhood (rows 0-3, columns 0-3):

  R00 G01 R02 G03
  G10 B11 G12 B13
  R20 G21 R22 G23
  G30 B31 G32 B33

• Mean Interpolation (Green)
  – G11 = (G01 + G10 + G12 + G21)/4
• Median Interpolation (Green)
  – G11 = [(G01 + G10 + G12 + G21) - MAX(G01, G10, G12, G21) - MIN(G01, G10, G12, G21)]/2
• Red and Blue may be interpolated differently than green.
  – R11 = (R00 + R02 + R20 + R22) / 4
  – B11 = B11 (measured directly at this site)
  – R12 = (R02 + R22) / 2
  – B12 = (B11 + B13) / 2
  – B22 = (B11 + B13 + B31 + B33) / 4
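The two green formulas translate directly into code — a sketch of just the per-site arithmetic (the surrounding loop over the Bayer mosaic and the boundary handling are omitted):

```python
def interp_green_mean(g01, g10, g12, g21):
    """Mean interpolation: average of the four green neighbors."""
    return (g01 + g10 + g12 + g21) / 4

def interp_green_median(g01, g10, g12, g21):
    """'Median' interpolation per the slide: discard the largest and
    smallest of the four neighbors and average the remaining two —
    more robust near edges than the plain mean."""
    s = g01 + g10 + g12 + g21
    return (s - max(g01, g10, g12, g21) - min(g01, g10, g12, g21)) / 2
```

Note that on a smooth gradient the two agree, while across an edge the median variant ignores the two outlying neighbors.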
Some Observations on CFA Interpolation
•
The math is not complicated (adds, compares and shifts)
– Data re-organization is key to speed.
• Barrel shifters and/or byte extraction instructions.
• MMX style pack and unpack.
– SIMD instructions (such as in MMX) can greatly accelerate math.
– A good candidate for hardware acceleration.
• Arithmetic compares, adders and shifters are easy to implement.
•
Interpolation schemes can cause image artifacts.
– Edges, corners and stripes present a problem.
Color Spaces and Standards
• How is the image represented in R,G,B space?
  – Attempt to maximize color gamut and psycho-visual quality while minimizing non-linear effects.
• CCIR 601 - now ITU-R BT.601
  – Digital Video Standard
  – Y’CrCb (Y’Cr’Cb’) Color Space, 4:2:2 Subsampling
    • Y excursion 0 - 219, offset = 16 (Y = 16 to 235)
    • Cx excursion +/- 112, offset = 128 (Cx = 16 to 240)
  – No assumptions about white point
• CCIR 709 - now ITU-R BT.709
  – HDTV Studio Standard
  – Y’CrCb Color Space
    • Y excursion 0 - 219, offset = 16 (Y = 16 to 235)
    • Cx excursion +/- 112, offset = 128 (Cx = 16 to 240)
  – Specifies White Point (x = .3127, y = .3290, z = .3582) (D65)
  – Specifies dark viewing conditions.
Color Spaces and Standards (cont’d)
• sRGB (called NIFRGB by Kodak)
  – Default color space for HP and Microsoft
  – Same as CCIR 709 except
    • Specifies DIM viewing environment
    • Full 0 - 255 encoding of YCrCb values
• Photo YCC
  – Also uses CCIR 709
    • White is 189 instead of 219
    • Results in RGB values from 0 - 346 when reconverted
    • Chroma channels are unbalanced (supposedly follows distribution of colors in a real scene)
Gamma Correction
• Gamma describes the nonlinear response of a display device (CRT) to an applied signal.
• The Rec. 709 transfer function (video signal L′ vs. light intensity L):

  L′709 = 4.5·L,                 L <= 0.018
  L′709 = 1.099·L^0.45 − 0.099,  0.018 < L

• RGB values must be corrected for Gamma before they are transformed into a video space.
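The two-segment transfer function above is a one-liner per branch — a direct transcription, with L taken as normalized linear light in [0, 1]:

```python
def rec709_gamma(L):
    """Rec. 709 opto-electronic transfer function: a linear segment
    near black (avoids infinite slope at L = 0), a 0.45 power law
    elsewhere. L is linear light normalized to [0, 1]."""
    if L <= 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099
```

The constants are chosen so the two segments meet continuously at L = 0.018 and so that L = 1 maps to 1.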
R’G’B’ to Y’CrCb Conversion
• Conversion to YCrCb occurs for 2 reasons:
  – Chroma can be subsampled.
    • Eye is more responsive to intensity changes (G) than color changes (R,B)
    • Can compress color channels (R,B) for smaller stored image
  – Video output potential.
• Well known conversion matrices convert Gamma Corrected RGB to Y’CrCb:

  | Y’ |   |  0.257  0.504  0.098 | | R’ |   |  0* |
  | Cr | = |  0.439 -0.368 -0.071 | | G’ | + | 128 |
  | Cb |   | -0.148 -0.291  0.439 | | B’ |   | 128 |

  * 0 for UPF format, 16 for CCIR 601
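The matrix multiply above is simple enough to spell out — a sketch using the slide's coefficients, with the Y offset left as a parameter (16 for CCIR 601, 0 for UPF):

```python
# Rows of the conversion matrix from the slide: Y', Cr, Cb.
M = [( 0.257,  0.504,  0.098),
     ( 0.439, -0.368, -0.071),
     (-0.148, -0.291,  0.439)]

def rgb_to_ycrcb(r, g, b, y_offset=16):
    """Gamma-corrected R'G'B' (0-255) -> Y'CrCb per the matrix above."""
    y  = M[0][0] * r + M[0][1] * g + M[0][2] * b + y_offset
    cr = M[1][0] * r + M[1][1] * g + M[1][2] * b + 128
    cb = M[2][0] * r + M[2][1] * g + M[2][2] * b + 128
    return y, cr, cb
```

As a sanity check, reference white (255, 255, 255) lands at the top of the CCIR 601 Y excursion (~235) with both chroma channels at their 128 offset.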
Implementing RGB - YCrCb Conversion
• Color space conversion can be implemented in several ways.
  – Straight software implementation
    • Flexible, but inefficient
  – Hardware assist
    • Single cycle MAC - 9 clock cycles
    • SIMD instructions - 3 clock cycles
  – Straight Hardware
    • Fast - 3 clock cycles
    • Costly
  – 3 Dimensional Lookup Tables
Implementing Nonlinear Color Space Conversion - 3D Lookup Tables
• CMY Space is very device dependent and the conversions from RGB, L*ab or YCrCb are not linear.

[RGB cube diagram: corners at (255,0,0), (0,255,0), (0,0,255) and (255,255,255)]
Implementing Nonlinear Color Space Conversion - 3D Lookup Tables
• A 3-D Lookup Table defines the conversion for specific colors.
• A fully populated table would have over 16M entries (256x256x256).
• A subset of entries is chosen to populate the table.
Implementing Nonlinear Color Space Conversion - 3D Lookup Tables
• The actual values for the conversion are interpolated using various mechanisms:
  – Tri-linear interpolation (10 */ 7+-)
  – Prism Interpolation (8 */ 5+-)
  – Tetrahedral Interpolation (6 */ 3+-)
  – Pyramid Interpolation (7 */ 4+-)
  – Fuzzy Logic methods
Simple Table Example

  C,M,Y        →  Y,Cr,Cb
  0,0,0        →  219, 120.2, 128.0
  0,0,128      →  192, 120.5, 0.8
  0,0,255      →  .
  0,128,0      →  .
  0,128,128    →  .
  …            →  .
  255,255,255  →  0.2, 120.5, 127
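Tri-linear interpolation over such a sparse lattice can be sketched as follows — an illustration, not a production color engine: the lattice spacing, the scalar (single output channel) table, and the stand-in conversion function `f` are all assumptions for the demo.

```python
def build_lut(f, step=64, top=256):
    """Sparse lattice: sample the exact conversion f at every `step`
    counts per axis (f stands in for a measured device mapping)."""
    pts = list(range(0, top + 1, step))
    return {(r, g, b): f(r, g, b) for r in pts for g in pts for b in pts}, step

def trilinear(lut, step, r, g, b):
    """Interpolate between the 8 lattice points surrounding (r, g, b):
    10 multiplies and 7 adds per output channel, matching the slide's
    operation count when factored carefully."""
    r0, g0, b0 = (r // step) * step, (g // step) * step, (b // step) * step
    fr, fg, fb = (r - r0) / step, (g - g0) / step, (b - b0) / step
    out = 0.0
    for dr, wr in ((0, 1 - fr), (step, fr)):
        for dg, wg in ((0, 1 - fg), (step, fg)):
            for db, wb in ((0, 1 - fb), (step, fb)):
                out += wr * wg * wb * lut[(r0 + dr, g0 + dg, b0 + db)]
    return out
```

Because tri-linear interpolation is exact for any function that is linear in each axis, seeding the table from a linear mapping reproduces that mapping exactly — a convenient self-test before loading measured, nonlinear data.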
Chroma Subsampling
• The human eye is much more sensitive to intensity variations than color variations.
• Some color information can be discarded without loss of image quality.

[Figure: (Y) Luminance (RS-170) + Chrominance (I & Q) = full-color image]
Chroma Subsampling
• 4:2:2 and 4:2:0 are typical subsampling ratios
  – 4:2:2 is typically used in video.
  – 4:2:0 is prevalent in still photography.
• 4:4:4 (no subsampling): one Cb and one Cr sample for every Y sample.
• 4:2:2: Cb and Cr sampled at half the horizontal rate of Y.
• 4:2:0: Cb and Cr sampled at half the horizontal and half the vertical rate of Y.
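The 4:2:0 case can be sketched as a 2x2 averaging of each chroma plane — a minimal illustration using flat row-major lists and assuming even image dimensions; real encoders may use different siting and filtering:

```python
def subsample_420(cb, cr, w, h):
    """4:2:0 chroma subsampling: the full-resolution Y plane is kept
    (not shown); each 2x2 block of Cb and Cr is averaged, halving
    the chroma resolution in both directions."""
    def pool(plane):
        out = []
        for y in range(0, h, 2):
            for x in range(0, w, 2):
                s = (plane[y * w + x] + plane[y * w + x + 1]
                     + plane[(y + 1) * w + x] + plane[(y + 1) * w + x + 1])
                out.append(s / 4)
        return out
    return pool(cb), pool(cr)
```

Each chroma plane shrinks to a quarter of its samples, so a 4:4:4 image drops to half its original data volume with Y untouched.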
What about other image processing?
• In consumer cameras, a large amount of post-capture processing takes place to enhance the quality of the image.
• Blurring and subsequent unsharp-masking are common image improvements in processing chains.
• Blurring with a 3 X 3 convolution kernel after CFA interpolation reduces artifacts (Moire patterns, ‘zippers’, color banding).
• Implementation of 3 X 3 convolution with various kernels is well known.
• Unsharp masking:
  – Add an edge-enhanced image to the original image to sharpen lines:

    R = O + αS

    where on a pixel-by-pixel basis the original image (O) is added to a fraction (α) of a sharpened or edge-extracted image (S)
  – Requires sharpened copy of image - can be performed in 3 row blocks for 3X3 sharpening kernel
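R = O + αS can be sketched on a single channel as follows — an illustration, with a standard Laplacian chosen as the edge-extraction kernel (the slide does not specify one) and borders left unprocessed for brevity:

```python
def convolve3x3(img, w, h, k):
    """3x3 convolution on a row-major single-channel image;
    edge pixels are left at 0 for brevity."""
    out = [0.0] * (w * h)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += k[(dy + 1) * 3 + (dx + 1)] * img[(y + dy) * w + (x + dx)]
            out[y * w + x] = acc
    return out

# Simple edge extractor (assumed here; any sharpening kernel works).
LAPLACIAN = [0, -1, 0, -1, 4, -1, 0, -1, 0]

def unsharp(img, w, h, alpha=0.5):
    """R = O + alpha * S per the slide, clipped to 8-bit range."""
    s = convolve3x3(img, w, h, LAPLACIAN)
    return [max(0, min(255, o + alpha * e)) for o, e in zip(img, s)]
```

On a flat region the Laplacian is zero, so the image passes through unchanged — only edges are boosted.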
What is the future direction of the digital camera processing chain?
• Increased sensor size will require higher computing power to maintain image quality.
• Multiple output CCDs can use parallel processing for some image functions.
• Smaller pixel size and decreased exposure latitude will require more accurate image processing to achieve accurate images.
• Sensor development will halt around 3-4 Mpixels barring surprises in process development.
• Movie modes (MPEG or AVI stream) may be desirable as storage devices increase in capacity.
• The appearance of high capacity random access magnetic recording, video, audio, HDTV, and high speed radio packet networks could usher in new devices.
Modern Sensor Characteristics
Solid State Imager Basics
Robert Kremens
May 2001
Outline
• What Are The Basic Characteristics of Solid State Imagers?
• What are the Different Solid State Imager Implementations?
• Where are We Headed in The Future?
Imager Characteristics: Pixel Size & Pitch
•
Smaller pixels create marketing trade-offs
– More pixels per specific die size (increased resolution)
– Smaller die size for same number of pixels (lower cost)
•
Creating smaller pixels is an engineering tradeoff
– Smaller pixels have less dynamic range (total volume of charge
collecting depletion region is smaller)
– For some transport mechanisms, smaller pixels will be noisier.
– Unless gate metalization is also shrunk, sensitivity will suffer (more of
pixel is covered by metal)
•
Current commercially available minimum pixel size is 3.6 µm
(obtained in several ‘consumer’ 2-3 megapixel CCDs)
Imager Characteristics: Fill Factor
• Fill factor (aperture ratio) is the ratio of usable sensor area to the total pixel area.

[Figure: active area within the total pixel area, one pixel pitch square]

• Impediments to a good fill factor
  - Gate metalization
  - Interline shift registers
  - Active pixel amplifiers
  - Anti-blooming structures
• Current Fill Factors range from 25% up to near 100%, depending mainly on the transfer technology used.
Imager Characteristics: Dark Current
• Dark current is thermally induced charge carriers generated by impurities and silicon defects.
• It manifests itself as a DC offset, which in turn lowers the dynamic range of the device.
• Dark current is non-uniform from pixel to pixel, resulting in fixed pattern noise - only continuously clocked CCDs avoid this effect.
Imager Characteristics: Noise and Defects
•
Fixed Pattern Noise - constant “speckle” under uniform
illumination conditions
– Dependent on transfer method and usage
– Not easily removed by process or design changes
•
Thermal Noise - shot noise
– Independent of transfer method (silicon is silicon)
•
Readout noise - clocking noise, active circuitry noise
– Shading
•
Sensor Defects
– Bad pixels, missing columns or rows, non-uniformity of response,
mis-aligned color filter array
•
Reset noise - (kTC noise) discharge resistance thermal noise
– Not easily removed by process or
design changes
Imager Characteristics: Reset and Fixed Pattern Noise Removal
• Correlated Double Sampling (CDS)
  – [Block diagram: the amplified imager output is sampled twice per pixel (SH1 pulse at the reset level, SH2 pulse at the signal level) and the two samples are differenced to produce the output signal]
• Delayed Data Sampling (DDS)
  – [Block diagram: the amplified imager output is differenced against a delayed copy of itself, gated by a single SH pulse]
Imager Characteristics: Readout Speed
• Measured in pixels/sec - range 50K to 10M
• Integration Time
  – Too little - Affects sensitivity (not enough electrons stored)
  – Too much - Affects noise floor (thermal noise also accumulates)
• Read Out Rate
  – Too fast - Affects charge transfer, burdens clock driver circuits
  – Too slow - increased integration time, can’t output standard video
• Quick Calculation
  – Want VGA (640x480, ~300 Kpixels/frame) at 30 frames/sec
  – Assume 4 phase horizontal and 4 phase vertical clocks
  – 640 x 480 x 30 = 9.2 MHz pixel rate (clock generator would need ~36 MHz)
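The quick calculation generalizes to a two-line helper — a sketch that, like the slide, ignores blanking and line-transfer overhead:

```python
def pixel_clock_hz(width, height, fps, phases=4):
    """Pixel rate for a given frame size and frame rate, plus the
    clock-generator rate needed when each pixel shift takes `phases`
    clock edges (4-phase clocking per the slide's assumption)."""
    pixel_rate = width * height * fps
    return pixel_rate, pixel_rate * phases
```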
Imager Characteristics: Dynamic Range
• Dynamic range is the pixel well capacity (in electrons) divided by the r.m.s. noise floor (in electrons).
• Sometimes expressed in dB
  – 8 bits = 48.2 dB, 10 bits = 60.2 dB
• Dynamic range is related to both processing (fixed) and voltage bias levels (easily screwed up)
• Typical Numbers
  – ‘Full well’ capacity: 30,000 - 2,000,000 e-
  – Noise: <5 - 300 e-
  – Dynamic Range: 40 - 100 dB
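The definitions above reduce to two small formulas — a direct transcription, reproducing the slide's 48.2 dB / 60.2 dB figures for 8- and 10-bit quantization:

```python
import math

def dynamic_range_db(full_well_e, noise_e):
    """Dynamic range in dB: 20*log10(well capacity / r.m.s. noise floor)."""
    return 20 * math.log10(full_well_e / noise_e)

def bits_to_db(bits):
    """Equivalent dB span of an n-bit quantizer: 20*log10(2**n)."""
    return 20 * math.log10(2 ** bits)
```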
Imager Characteristics: Blooming & Smearing
• Blooming occurs when a charge collector overflows into its neighbor.
  – More of a problem with CCD transfer mechanisms
  – Anti-blooming structures shunt excess charge away from active area
• Smearing is the effect of continued integration during the read out phase
  – Can occur with all un-shuttered sensor/camera designs
Charge Coupled Devices (CCD)
• CCDs are bucket brigade devices
  – Multiphase clocks transfer the charge

[Figure: charge packets shifted along a row of gates by four-phase clocks Φ1-Φ4; two pixels shown]

• Blooming is a problem with CCDs; anti-blooming structures are added, which increases processing complexity and cost.
Full Frame CCDs
• Full Frame CCDs transfer each row incrementally.
  – Full Frame CCDs exhibit significant smearing if they are not shuttered during readout.
  – A Full Frame CCD can have an excellent fill factor.
Frame Transfer CCDs
• Frame Transfer CCDs combat smearing by rapidly transferring pixels through the active area into a shielded storage area.
  – Smearing is still an issue.
  – Extra silicon is needed.
  – Fill factor can be very good.
  – Next image can be acquired while previous is being read out.
Interline Transfer CCDs
• Interline Transfer CCDs have shielded transfer shift registers along each column.
  – Smearing is eliminated at the cost of reduced Fill Factor.
  – A second image can be acquired during readout.
Progressive Scan CCDs
• Camcorders and video are a significant market driver for CCDs, hence many CCDs have interlaced readout.
• Progressive scan devices are non-interlaced, but can be frame transfer, interline transfer or any other transfer method.

[Figure: interlaced vs. progressive scan readout order]
Charge Injection Devices (CID)
• Charge Injection Devices have individually addressable pixels (via horizontal and vertical scan registers) and non-destructive readout.
  – CIDs do not have blooming problems.
  – Fixed pattern noise is high; dynamic range is low.
  – CIDs are RAD hard.
  – Electronic window and zoom capabilities.
Passive Pixel CMOS
• Passive Pixel CMOS Sensors are very low cost.
  – Fixed Pattern noise is very high, dynamic range is low.
  – VERY low power.
  – Small output signal level - charge is placed on entire row for readout (many pF).
  – Electronic windowing and zooming.
  – Fill Factor can be nearly 100%.
  – On board clocking can greatly simplify interface.
  – Not prone to blooming.
Active Pixel CMOS
• Active pixel CMOS sensors trade off Fill Factor for individual pixel electronics and reduced noise.
  – Current Fill Factor ~35%.
  – Same advantages as passive CMOS, but significantly less noise.
  – On board electronics - amps, A/Ds, Processing!
  – Excellent dynamic range.
Active Pixel CMOS (cont’d)
• There are two approaches to solving the Active CMOS Fill Factor problem.
  – Make the array bigger. This causes problems with the optics by widening the image field - bad for fixed focus devices.
  – Fabricate microlenses over each pixel.
    • Microlenses have been manufactured, but are difficult to manufacture in high volume and add significant cost.
    • Anti-reflection coating these highly curved surfaces is difficult.
Side by Side Comparison of the visible sensor technologies

                        CCD     CID   Passive Pixel  Active Pixel
                                          CMOS           CMOS
  Pixel Size            ++      +         ++             +
  Readout Noise         ++      --        --             +
  Fill Factor           ++(1)   -         ++             -
  Dynamic Range         +       -         -              +
  RAD Hardness          --      ++        ++             +
  Single Supply         --      -         ++             ++
  System Power          --      +         ++             ++
  System Volume         -       +         ++             ++
  System Integration    --      --        ++             ++
  System Noise          +       --        -              ++
  System Cost           --      --        ++             +
  Ease of Use           --      --        ++             ++
  Electronic Shutter    +       -         ++             ++
  Electronic Windowing  --      ++        ++             ++
  Electronic Zoom       --      ++        ++             ++

  (1) - Frame transfer can be very good, interline transfer is poor.
CCDs are radiation soft, but otherwise tend to be the highest performance optical array sensors

• CCD family strengths: mature technology with mass production lessons; large sizes available (4K X 4K); very low noise; widest experience base.
• CCD family deficiencies: radiation soft (20 - 30 krad); capacitive device with high power consumption (W to many W); cannot operate cryo; specialized production lines required; low QE on shuttered devices.
  – Full frame CCD: excellent fill factor; smear with moving objects.
  – Backthinned full frame CCD: lowest noise devices (~2e), highest QE (~80%) of any visible detector; difficult process, low yield.
  – Interline CCD: smear eliminated; low fill factor due to added structure.
  – Frame transfer CCD: smear eliminated; up to 1/2 silicon area wasted in storage register.
  – Interline CCD with microlens: high frame rates possible, improved fill factor (~60%); microlenses radiation soft (plastic); microlens works best with high f/ system.
CMOS sensors have not been manufactured in large sizes and lag CCDs in image performance

• CMOS family strengths: many lessons from CMOS manufacturing; becoming the highest volume sensor; on-chip integration with amps, CDS, A/D; very low power (10 - 100 mW); very high frame rates possible (60 MHz pixel clock X 8 outputs); radiation hard (~200 - 1000 krad); single power supply.
• CMOS family deficiencies: high fixed pattern noise; small devices (1K X 1K); high read noise; low QE and low fill factor (poor sensitivity due to gate structure on top of pixel); operating parameters change with radiation exposure; multiple outputs complicate the electronics package; no shutter capability - smear.
  – CMOS passive pixel sensor: simplest structure CMOS device; high sensitivity from large fill factor (70-80%).
  – CMOS amp per pixel (APS): high signal output, lower read noise; fixed pattern noise problems due to amplifier mismatch.
  – CMOS APS with microlens: improved fill factor (~40 - 50%) for higher sensitivity; microlens works best with high f/ system.
  – CMOS amp per row (APR): higher signal output than PPS, simpler, better fill factor (~50%), less fixed pattern noise; signal still small because of large column capacitance.
Several CMOS alternatives exist

• CID: very radiation hard (~1000 krad); randomly addressable; high readout speed. Weaknesses: low signal output; high fixed pattern noise.
• Hybrid (CMOS multiplexer bonded to detector panel): simple structure eases manufacturing; extendable to large arrays; high readout speed; 100% fill factor, high QE; multiple wavelengths by altering the photosensor plane. Weaknesses: large scale devices only recently produced; difficult to butt?
CMOS sensors have advantages in radiation tolerance and readout speed
• A natural extension of the original Reticon photodiode readout arrays.
• Each pixel may have active transistors to amplify and buffer the signal. This is very desirable from a S/N standpoint.
• No charge transfer across the sensor.
• APS / CMOS can be cooled to low temperatures to reduce dark current - no CTE limitations. Passive cooling in space provides ample noise reduction.
• Since there is no long-range charge transport, dislocations from massive charged particles do not degrade sensor performance.
• CID (w/o active pixel) has high read noise, on the order of 300 e-. Increased sensor size will increase read noise (due to column/row capacitance increase). The CID is a passive pixel sensor.
CMOS sensors still have process-related image quality problems
• Widely variable, poor sensitivity across the array in many early CMOS sensors. CMOS sensors got a ‘bad rap’.
• Non-uniformity of response and resultant fixed pattern noise due to differences in size of photodiodes and storage capacitors.
• Recent designs with smaller features may allow improved fill factor, spectral response and sensitivity for a given pixel pitch (Motorola and others).
  – 0.35 µm design rules currently used in US on 6 and 8 inch wafers.
  – 7.8 µm pixel pitch with pinned photodiode.
  – Electronic exposure including rolling mode (usual for CMOS) and fully shuttered mode.
  – QE ~ 22%, fill factor 35%
CMOS sensor performance issues can be solved today with more transistors per pixel
• 5-transistor per pixel sensors have similar noise performance to CCDs, but (presently) low fill factor.
• Need larger pixels (20 µm) and smaller feature size (0.18 µm design rules).

[Figure: 4-transistor-per-pixel cell]
CMOS noise performance may not be an issue
• Most noisy sensors are of the ‘commercial’ variety - computer cams, Barbie cameras, etc.
• Read noise can be reduced; fixed pattern noise can be corrected post-capture.
• When feature size is reduced, the design must pay attention to the maximum signal from a pixel to avoid overload.
• Also needed is a plot of number of transistors vs. fill factor (or QE), parameterized for pixel size.
CCD vs. Active Pixel CMOS
• CCD imagers are still the reigning champion in everything except low cost, low quality (toys & security) and RAD hard (space) applications.
• Active pixel imagers are a revolution in progress.
  – Powerhouses in the imaging, microprocessor and memory fields are all throwing gobs of money into Active Pixel R&D.
• CCDs may soon give way to AP CMOS imagers in the consumer arena, but CCDs will remain dominant in high end scientific applications (astronomy).
  – What will happen to all those specialized CCD fabs when the consumer market dries up?
System Requirements
Consumer Cameras to the Space Telescope
Bob Kremens
May 2001
The electronic image systems in use today present a conflicting set of design parameters for sensors and electronics
• Consumer Cameras (Sony Mavica series, Kodak DCS 2XX, etc.)
  – Low cost
  – Large number of relatively small pixels
    • Need high resolution to rival film
    • Small pixels OK for 8 bit dynamic range (more DR desirable, but…)
  – Low power for long battery life
  – Rapid readout for shortened click-to-click time
• Professional digital cameras (Kodak DCS660, Fuji
  – Cost not as significant an issue
  – Very large number of larger pixels
    • Need high resolution (to match high end 35mm and 70mm formats) and high dynamic range
    • 12 bit color planes (‘36 bit images’) are de facto standard
    • Need high readout speed to suit pro shooting style
Scientific applications are not cost sensitive but present other sets of challenges
• Scientific applications
  – Cost usually not an issue
  – Require highest dynamic range and lowest noise
  – Astronomical (ground and space based) imaging
    • Huge number of pixels desirable (replace 20-30 cm film)
    • Extremely low noise for extended exposures
      – Cooling to LN2 temperatures or less ‘expected’
    • Wide, flat spectral response
    • Non-destructive readout
      – Allows longer exposures on dimmer objects
    • Freedom from blooming and adjacent pixel overload effects
      – Bright objects often next to dim objects of interest
    • Radiation hardness necessary for space applications
The electronic image systems in use today present
a conflicting set of design parameters for sensors
and electronics (2)
•
Remote Sensing applications (satellites and aircraft)
– Can use linear or array sensors
• Aircraft or satellite moves, can scan the area like a ‘pushbroom’
– Require sensitivity to reduce constraints on optics and stability to
avoid constant re-calibration
– May require radiation hardness in space applications
Some example sensing systems: Astronomy
• Ground based astronomy: Lincoln Laboratory 2K X 4K 3-side edge buttable CCD
  – Can be made into an 8K X 2NK array (N = 1 to …)
  – Backthinned (no front-surface structure) for ultra high quantum efficiency (~80%)
  – Optimized for low noise and slow speed readout with 16 bit digitizer systems
Some example sensing systems: Astronomy
• Sloan Digital Sky Survey instrument
  – Uses 30 2K x 2K SITe backthinned arrays.
  – Arrays cooled to -80 C.
  – That’s 126 megapixels per frame!
Some example sensing systems: Professional camera back
• Phase One LightPhase 16 megapixel camera back
  – 16 Mpixel large pixel Philips CCD
  – 14 bit digitizer for ~42 bit ‘deep’ images
  – Firewire (IEEE 1394) readout to PC or Mac
System Hardware
A Digital Consumer Camera - The JamCam
Bob Kremens
May 2001
The KB-Gear JamCam versions have sold close to millions (?) of units
• VGA resolution CMOS sensor camera
• RS-232 and USB data communications
• Fixed focus, fixed iris, no shutter
• Simple ‘camera’ type interface
• TWAIN driver, MS Picture-It!, ArcSoft PhotoFantasy bundled software
What are the basic components of a digital camera?
• Imaging Optics - lens or mirror and mount
• Image Sensor
• Power Supply - potentially 7 - 9 power supplies for CCD camera with flash and LCD/backlighter
• Processing electronics - CPU / DSP, memory, program store
• Exposure and focus control - shutter, iris
• Removable media
• Camera communication - serial, USB, IrDA, Firewire, radio
The future market and profit leaders may not be today’s camera/electronics companies
• Fine details of image quality (where great IP is required) are not necessarily an issue with low-end cameras.
• All present low-end cameras have ‘acceptable’ images for the use intended.
• These cameras have a fundamentally different architecture from previous ‘high-end’ cameras.
• The popularity of these cameras modifies some fundamental digital camera ‘truths’ that have been held since 1995.
• Ample opportunity exists for other camera ideas/modalities to come to dominate the electronic marketplace.
• A 1.3 Mpixel camera produces quite acceptable 4” X 6” prints (why the need for more pixels?)
The present camera architecture makes some assumptions which may be invalid
• Assumptions: removable storage, in-camera processing, finished files, image review on camera.
• Typical architecture: a zoom lens/focus/aperture assembly (with motor control) feeds the image sensor; a sensor timing generator and an analog front end with analog processing drive an image processing microprocessor plus auxiliary ASICs, with system ROM and RAM on the memory bus; interfaces include USB and RS-232 I/O ports, RS-170 video, an image display LCD, and removable flash media for image storage.
The architecture of the JamCam is similar to older digital cameras produced by first-line camera companies
• Off-camera image processing, fixed storage, no image display.
• Architecture: fixed focus, fixed aperture lens assembly; CMOS integrated sensor with digital output; microprocessor with embedded USB/RS-232 controller plus SRAM and system ROM; USB physical driver; fixed flash ROM image store; status display LCD.
The JamCam provides a totally acceptable camera experience - and this is all the consumer asks for!
• Extremely simple to use
  – Shutter button, mode switch
  – Mode switch selects resolution
  – Modes:
    • Capture (normal power-on)
    • Delete (all)
    • Set resolution (VGA, 1/2 VGA, 1/4 VGA)
  – Display indicates number of remaining pictures, like a throw-away camera
  – RS232 and USB interfaces
  – Fast turn on (~1.5 seconds)
  – Fast image store (~1.5 - 7 seconds)
  – Infinite battery life with 9V cell
The computer-centric JamCam model has the PC as the center of entertainment.
• The computer-centric world has the home CPU performing the functions of DVD, MP3, still image player, audio/radio/CD player, etc.
• A single I/O device moves to the living room with the display unit (HDTV).
• Uses the high CPU power of the PC to perform entertainment and web tasks simultaneously.
• Radio/IR links to cameras and remote I/O devices.
In the present digital camera model, the portable device is (or can be) an ‘entertainment center’.
• The camera, as a powerful portable computing device with large local memory, can also be a:
  – Full featured still camera
  – Audio player (MP3)
  – Movie camera (short MPEGs)
  – Radio
  – Cell/radio telephone
  – Audio recorder
• Will the future consumer desire an all-in-one portable ‘entertainment center’?
The JamCam is composed of six major components
• The CMOS sensor minimizes the number of analog front-end components.
• Flash ROM, DRAM and ROM memory components couple with the processor.
• One Atmel microprocessor component provides ample computing power for this size sensor and the limited processing done on-board.
• Two PLD ‘glue’ parts interface the LCD display and pushbuttons.
• Double sided SMT PCB!!
• PCB-mounted image sensor and lens assembly
The JamCam is memory rich and processor-poor
• 2 MByte Flash image storage
• 2 MByte ROM
• 256 KByte static RAM
• One low power, 8 bit RISC processor with embedded USB and other interface functionality
Limiting the number of mechanical components increases ruggedness, reduces power consumption and cost
• Shutter- and iris-less lens assembly
• Adequate exposure range is provided by electronic shuttering of the sensor
• This system is similar to a primitive ‘box’ camera, with some exceptions:
  – Short focal length lens provides large depth of field
  – Large exposure latitude provided by wide range electronic shutter on sensor
• Image acquisition limitations:
  – Fixed depth of field
    • Not an issue in consumer cameras
  – Limited focus range
    • 2’ - infinity: similar to best AF film point and shoot cameras
  – Limited exposure range
    • Shutter speed typically
The component count in the JamCam may be nearing a minimum
• Double sided PCB
• One ‘major’ CPU component
• Processing time (to compressed finished file): 38.5 kpixels/sec.

  Component    Vendor    Function
  AT43320      Atmel     AVR RISC core with USB hub and embedded USB
  AT49LV1614   Atmel     1M X 16 sectored flash RAM
  AT27BV1024   Atmel     1M X 16 non-window EPROM
  ATF16LV8     Atmel     200 gate EPLD (x2)
  TC55V2001    Toshiba   2 Mbit (256K X 8) SRAM
  DS14C232               RS232 driver
  74HC273                Glue
  74HC4040               Glue
  74HC00                 Glue
  74HC244                Glue
  CD4013                 Glue
  LCD                    Display
  Gates                  Individual gates
  12 MHz Xtal            Clock crystal
  Switches               SPST
  USB conn               USB connector
  1/8" Mini              1/8" mini stereo phone jack
  HDCS-2000    HP        CMOS 640 X 480 image sensor

  Passives: 22 resistors, 29 capacitors, 4 transistors. Total component count: 78.
System Hardware
A Mid-Range Consumer Digital Camera
Kodak DC210
Bob Kremens
May 2001
The Kodak DC210 was a ‘second generation’ 1 Mpixel digital camera that sold well
• 2:1 f/4 auto-focus zoom lens
• Powerful flash unit
• Flash and ambient light exposure sensors
Consumer digital cameras integrate optics, image sensors and image processing electronics
• Extremely compact three-dimensional packaging
• Flex board, double side SMT essential for density on mid-range and high-end cameras
• Separation of boards according to function is common to reduce noise
  – Flash unit almost always separate
  – CCD/sensor PCB attaches to rear of optics package with flex cable
  – LCD and backlighter (discharge lamp) separate
What are the basic components of a digital camera?
• Imaging Optics - lens or mirror and mount
• Image Sensor
• Power Supply - potentially 7 - 9 power supplies for CCD camera with flash and LCD/backlighter
• Processing electronics - CPU / DSP, memory, program store
• Exposure and focus control - shutter, iris
• (Removable media)
• Camera communication - serial, USB, IrDA, Firewire, radio
The optics/imager/focus/zoom package is a wonder
of miniaturization
• Sensor attaches to rear of optics package - fairly universal in this arena
The goal of the image processing package is the
production of a ‘finished’ file on removable media.
• There are several ways to assemble these components from basic components or custom ASIC devices
• Depending on performance level of camera and size of image sensor, a single CPU may perform all camera functions
[Block diagram: Image Sensor → Analog Preprocessing → Main CPU (Basic Image Creation, Image Intellectual Property, Camera Control, JPEG or Other Compression) → Removable Media; the Main CPU also handles Aperture, Zoom and Focus Control, Focus and Exposure Sensors, User Input (Buttons, Dials) and the Display]
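The data flow in the block diagram can be sketched as a simple stage chain. The stage names follow the diagram; the function bodies are placeholders for illustration only, not any real camera firmware:

```python
# Sketch of the block diagram's data path: sensor -> finished file.
# All stage bodies are hypothetical placeholders.
def analog_preprocessing(raw):   return raw   # gain, offset, A/D conversion
def basic_image_creation(data):  return data  # CFA interpolation, color, gamma
def image_ip(img):               return img   # vendor image 'intellectual property'
def compression(img):            return img   # JPEG or other compression

STAGES = [analog_preprocessing, basic_image_creation, image_ip, compression]

def main_cpu(sensor_frame):
    """Run the frame through every stage, then hand it to removable media."""
    for stage in STAGES:
        sensor_frame = stage(sensor_frame)
    return sensor_frame

print(main_cpu("raw-frame"))   # identity placeholders pass the data through
```

Whether these stages run on one CPU or are split across an ASIC and a processor is exactly the architectural choice the slide describes.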
The image processing electronics must be
inexpensive, low power and provide adequate
processing
• Consumer cameras are still low (no?) margin products - pennies count!
• Short battery life has been a serious user complaint for first and second generation digital cameras - much effort has been expended to increase battery life
• The requirements of high processing horsepower and low power consumption tend to be mutually exclusive - designers are often forced to compromise and idle components when not in use.
The electronic architecture must also be familiar
enough to develop the camera application easily
• Product lifetime is very short in this market (< 1 year)
• Development cycle is correspondingly short, so exotic architectures or components are less popular even when providing superior performance
• Complex interaction between system elements and processors may require a real-time operating system (RTOS), multi-tasking capability and complex development system support
• The large amount of code necessary for implementation of company intellectual property requires source code reuse and control
• Good development tools required!
• Often, the best development tools are available for ‘general purpose’ processors, so these are found in many camera designs (Motorola PowerPC, Hitachi SH-DSP, SPARC, etc.)
The camera electronics performs other important
internal functions in addition to image processing
• These systems are incredibly complex and the image processor is only a part of the ‘picture’ (multi-processor)
• Much near-real-time processing for auto-focus, auto-exposure, power management and flash control (RTOS)
• Camera must remain responsive to user input in spite of other operations occurring simultaneously (multi-tasking)
[Block diagram: Processor (1) and Processor (2) divide duties among the ready light, IrDA drive, motor drive, lens position, rear switch, shutter, CCD carrier, CCD driver/interface, LCD interface, exposure sensor, status LCD, power/interface and analog sections (RF, serial, power, flashlamp, DC-DC converter), plus the batteries, flash lamp and photoflash capacitor]
Kodak has chosen a ‘computer-centric’ architecture
for most of their consumer cameras
• It is clear that the Kodak computer + camera approach is one that facilitates change, code control and ease of development
• Ancillary camera functions relegated to an 8-bit micro with a link to the main processor
• Lots of memory - like a real computer!
[Block diagram:
  – 1160 X 872 CCD + Gain, A/D, CCD Timing Generation: input signal conditioning
  – DSP - RISC Microprocessor: image processing, compression
  – 2 MB Working DRAM: local storage, working memory
  – 0.5 MB Flash RAM: CCD parameters, system code
  – 0.5 MB Dual Port Video DRAM: LCD and composite video buffer
  – Control and Routing ASICs: data routing and camera management
  – 8 Bit Microcontroller: exposure control, control inputs, voltage monitor, status LCD driver
  – LCD rear panel display; status LCD
  – 4 MB CompactFlash memory card: removable picture storage]
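A rough sanity check on those memory sizes (a sketch; assumes 8-bit raw CFA samples, a bit depth the slides do not state):

```python
# Why 2 MB of working DRAM is enough for the DC210's CCD (rough sketch;
# assumes 8 bits per raw pixel, which is an assumption, not a spec).
ccd_pixels = 1160 * 872
raw_frame_mb = ccd_pixels / 2**20        # one 8-bit frame, in MB
print(f"{ccd_pixels} pixels -> {raw_frame_mb:.2f} MB raw frame")
# ~0.96 MB: one raw frame plus working buffers fit in the 2 MB DRAM
```

The much smaller 0.5 MB dual-port video DRAM only has to hold a subsampled frame for the LCD and composite video output.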
System Hardware
The Chandra Orbital X-Ray Observatory
Bob Kremens
May 2001
Astronomers study the full spectrum of
electromagnetic radiation from the cosmos
Chandra is quite a different camera, but has the
same systems and requirements as the JamCam
Basic camera components:
• Imaging Optics
• Image Sensor
• Power Supply
• Processing electronics
• Exposure and focus control
• (Removable media)
• Camera communication
X-rays are energetic photons, and can be detected
by most solid state optical sensors
• Incident X-ray deposits energy in a pixel via the photoelectric effect.
• Chandra observations are centered in the region under 10 keV
• A photon will generate an electron-hole pair for each 2.6 eV of energy, e.g. a 2.6 keV X-ray generates 1000 electron-hole pairs
• The incident flux is very low: incoming photons can be counted. The detectors are read out rapidly and the image ‘accumulated’. Each photon’s ‘signature’ (energy and position) can be precisely known if the detector is energy sensitive
• This is a very different situation from most optical photography applications, where zillions of photons impinge on the detector simultaneously and the detector is read once per image
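The carrier arithmetic above is simple enough to sketch directly, using the slide's figure of one electron-hole pair per 2.6 eV deposited:

```python
# Electron-hole pairs produced by an absorbed X-ray photon, at one pair
# per 2.6 eV of deposited energy (the figure quoted in the slide).
EV_PER_PAIR = 2.6

def eh_pairs(photon_energy_ev):
    return photon_energy_ev / EV_PER_PAIR

print(f"{eh_pairs(2600):.0f}")   # 2.6 keV photon -> 1000 pairs
print(f"{eh_pairs(8000):.0f}")   # 8 keV photon -> about 3077 pairs
```

A signal of a thousand electrons per photon is why a CCD can measure each photon's energy directly, as ACIS does.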
X-ray optics are operated at glancing incidence
Chandra’s optics are ultraprecise cylindrical mirrors
of parabolic and hyperbolic section
• Aperture of telescope ~ 1.2 m
• It was very difficult to provide the accuracy necessary for these concentric cylindrical optics (Eastman Kodak)
• This instrument was characterized before orbital insertion!!!
Chandra has two focal plane instruments
• High Resolution Camera (HRC) - high spatial resolution and very high sensitivity, but limited energy resolution
  – Stacked microchannel plates and crossed wire grid readout array
  – Pulse from incident X-ray impinges at intersection of two wires, defining location of incident photon
• Advanced CCD Imaging Spectrometer (ACIS) - simultaneous energy and spatial location using CCDs
The CCD in the ACIS focal plane is read out rapidly
• Most CCDs in astronomical applications are read out infrequently
• The 10 CCDs in Chandra’s ACIS instrument are read out rapidly and continuously when on target to ensure only a single photon will be captured in each pixel
• This readout method assures that energy and spatial information may be obtained for each photon - photons are rare!!!
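The single-hit accumulation scheme can be sketched as building an image and a spectrum from the same event stream (the event values below are invented for illustration):

```python
# 'Single hit' accumulation: each rapid readout yields at most one photon
# per pixel, recorded as an (x, y, energy) event. An image and a spectrum
# are accumulated from the same event list. Event values are made up.
from collections import Counter

events = [(10, 12, 2600), (10, 12, 1800), (40, 7, 5200)]  # x, y, energy in eV

image = Counter()     # photon counts per pixel -> spatial image
spectrum = Counter()  # photon counts per 1 keV energy bin -> spectrum

for x, y, energy in events:
    image[(x, y)] += 1
    spectrum[energy // 1000] += 1

print(image[(10, 12)])    # 2 photons accumulated at this pixel
print(sorted(spectrum))   # occupied keV bins: [1, 2, 5]
```

A detector read once per exposure could only report total charge per pixel; per-event readout is what preserves each photon's energy.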
The CCDs are operated in ‘single hit’ mode and
create spectra and images simultaneously
• Eight frame transfer CCDs are used in the 0.5° focal plane
• Quick frame transfer to storage area - then readout to determine energy and position
• Total pixels:
The block diagram for the ACIS instrument looks
similar to a consumer digital camera!
• Except for thermal control in Chandra, a CCD camera is a CCD camera
The data stream is processed by the Earth station
• Not much image processing is done on the ‘camera’ - the raw data stream is telemetered to Earth for subsequent image re-creation
Chandra is proving to be as spectacular an
instrument as the HST
• This is the first decent glimpse of the universe at these wavelengths, uncovering new physics and providing a host of surprises
• This instrument has tag-teamed with the Hubble Space Telescope to probe the universe over a wide spectral range.
• Especially important are the differences in the visual and X-ray images of the same object
Compact galaxy group HCG 62 in a 50,000-second exposure from Chandra