Photometric device

United States Patent [19]                [11] Patent Number: 4,962,425
Rea                                      [45] Date of Patent: Oct. 9, 1990

[54] PHOTOMETRIC DEVICE
[75] Inventor: Mark S. Rea, Orleans, Canada
[73] Assignee: National Research Council of Canada/Conseil National de Recherches Canada, Ottawa, Canada
[21] Appl. No.: 263,023
[22] Filed: Oct. 27, 1988
[51] Int. Cl.5 ...................... H04N 17/00
[52] U.S. Cl. ...................... 358/139; 358/163; 358/168; 358/169; 358/903
[58] Field of Search .............. 358/139, 163, 168, 169, 903

[56] References Cited
U.S. PATENT DOCUMENTS
4,862,265  8/1989  Bartow .............. 358/139

Primary Examiner: Howard W. Britton
Attorney, Agent, or Firm: Francis W. Lemon

[57] ABSTRACT

The equipment and calibration of a luminance and image analysis device is provided for acquiring and interpreting calibrated images. The device is comprised of a solid state video camera with V-lambda (photopic) correction filter for acquiring light (luminance) and spatial information from a scene and a personal computer with image capture board for storing and analyzing these data. From the acquired spatial-luminance information the software may, for example, predict Relative Visual Performance, or RVP. Essentially, the RVP is computed on the basis of three stimulus variables contained in a captured image: the age dependent adaptation luminance and apparent contrast of the target against its background, and the apparent size of the target. The device is not limited to assessments of RVP, but can acquire and process images according to any set of algorithms where light (luminance) and size information is required. The device is capable of providing information for almost every vision algorithm. The two essential functions of the device are image acquisition and image processing.

5 Claims, 8 Drawing Sheets
[Front-page figure: block diagram of the photometric device showing the V-lambda filter, video camera, variable aperture zoom lens, image capture board, video monitor, computer (640 kbyte, 80287, 70 Mbyte disk), computer monitor, and keyboard.]
U.S. Patent    Oct. 9, 1990    Sheet 1 of 8    4,962,425

[FIG. 1: Block diagram of the photometric device: V-lambda filter, video camera, variable aperture zoom lens, image capture board, video monitor, computer (640 kbyte, 80287; 70 Mbyte and 1.2 Mbyte drives), computer monitor, and keyboard.]
U.S. Patent    Oct. 9, 1990    Sheet 2 of 8    4,962,425

[FIG. 2: Test arrangement on an optical bench: camera, standard lamp 30, barium sulfate reflectance standard 34, regulated dc power supply.]
U.S. Patent    Oct. 9, 1990    Sheet 3 of 8    4,962,425

[FIG. 3: Graph of response (count) against luminance, L (cd/m²), from 0 to 800 cd/m².]
U.S. Patent    Oct. 9, 1990    Sheet 4 of 8    4,962,425

[FIG. 4: Graph of response against luminance, L (cd/m²), from 40 to 220 cd/m².]
U.S. Patent    Oct. 9, 1990    Sheet 5 of 8    4,962,425

[FIG. 5: Spectral response against wavelength (nm, 500 to 800): V-lambda response, filter/camera response, and filter transmittance.]
U.S. Patent    Oct. 9, 1990    Sheet 6 of 8    4,962,425

[FIG. 6: Relative response (0.90 to 1.00) by light source: HPS, LPS, MH, M, CWF, WWF, VLF.]
U.S. Patent    Oct. 9, 1990    Sheet 7 of 8    4,962,425

[FIG. 7: Modulation transfer function plotted as modulation against cycles/frame.]
U.S. Patent    Oct. 9, 1990    Sheet 8 of 8    4,962,425

[FIG. 8: Field size (degrees) plotted against focal length (mm).]
PHOTOMETRIC DEVICE

This invention relates to a photometric device.

Lighting and the spectral sensitivity thereto of life forms are closely linked; for example, lighting and human vision are closely linked. Interior rooms and exterior roadways are illuminated for discernment. Surprisingly however, this link between lighting and vision is technically weak. The ability to relate visual responses to a given lighting condition suffers on two counts. First, the scientific understanding of visual response is rudimentary, although perhaps functional for some applications. Human visual processing is more complex than any computational model available. For example, it cannot be explained how a mother's face can be recognized from different perspectives and under different lighting geometries or spectral compositions. However, simple responses can be predicted fairly accurately (reaction times or magnitude estimations) to visual stimuli of different contrast or size. Thus, for some practical applications, how these responses will be improved or degraded under different illumination levels or lighting geometries can be predicted once we can specify the stimulus conditions.

A second limitation is an inability to easily specify the visual stimulus. Therefore, even with a satisfactory model of vision, visual responses to realistic materials cannot be predicted because current technology seriously restricts the ability to accurately specify the visual stimulus. Many hours are required to acquire the information necessary to describe, for example, the visibility of even a single letter. It is not trivial to specify its luminance or its size, and indirect techniques are required to make even these measurements; see, for example, Rea, M. S., Ouellette, M. J., and Pasini, I., Contrast measurements in the laboratory and the field, Proceedings of the 21st Session of the Commission Internationale de l'Eclairage, Venice, 1987.

This technical limitation has impeded progress in lighting. Indeed, there has been little reason to extend the understanding of the links between lighting and vision because there have been no technical means of acquiring the information necessary to make this link. Importantly too, the tools have not been readily available for processing information according to a visual performance model.

There is a need for an image acquisition and an image processing device whereby a relationship between lighting and spectral sensitivity thereto of life forms (e.g. humans and plants) is obtainable.

According to the present invention there is provided a photometric device, comprising;
(a) a video camera having a pixel sensor array and known pixel value output signals, relative to a black reference zero light value storage element in the sensor array, in response to the spatial-light intensity information being viewed by the camera, the camera having a low geometric distortion,
(b) filter means on the variable aperture lens for, in operation, transforming the camera spectral sensitivity to match a known spectral sensitivity,
(c) an image acquisition board connected to the output from the camera and having a spatial resolution closely related to that of the camera, the board having a dc restoration circuit for correcting any drift in the camera output signal, a pixel value programmable gain and offset amplifier, and means for storing the pixel values in digital form in a frame memory spatial array,
(d) a video target viewer connected to the camera, and
(e) means connected to the output of the image acquisition board for computing visual angle, and scaling the pixel output signals for computing contrast from the absolute value in relation to a predetermined light intensity received by the camera, and providing a substantially constant and linear relationship capability between the input luminance and pixel value output signals over substantially the entire pixel sensor array and the light range of operation.

The video camera may have a variable aperture lens and the predetermined light intensity received by the camera may be determined by the setting of the variable aperture lens.

The filter means may be a V-lambda filter for, in operation, producing a photopic response by the device.

The filter means may be a V-lambda' filter for, in operation, producing a scotopic response by the device.

The filter means may be one of a plurality of different filter means which are used sequentially to filter different wavelengths, and the means connected to the output of the image acquisition board may, in operation, deduce colour information from the filtered wavelengths.

In this specification light intensity means the level of electromagnetic flux received by an object. The spectral sensitivity (responsivity) of the object may be modelled through filters and the inherent spectral sensitivity of the detector so that the intensity of light on that object can be correctly measured. The object may, for example, be animal (human), vegetable (plants and trees) or mineral (artifacts).

In the accompanying drawings, which illustrate by way of example an embodiment of the present invention;
FIG. 1 is a diagrammatic view of a photometric device,
FIG. 2 is a diagrammatic view of the device shown in FIG. 1 being used in tests to verify the present invention,
FIG. 3 is a graph of the linearity response plotted as the response value against luminance, for the device shown in FIG. 1 with the camera aperture at f/16 and without using a luminance correction filter,
FIG. 4 is a similar graph to that of FIG. 3 but with the camera aperture at f/2 and with the luminance filter attached,
FIG. 5 is a graph of the spectral sensitivity of the device shown in FIG. 1, shown as relative distribution plotted against wavelength, with the luminance filter attached,
FIG. 6 is a graph of the relative luminance response of the device shown in FIG. 1, relative to another commercially available photometric device, plotted as a ratio against light source,
FIG. 7 is a graph of the modulation transfer function in horizontal and vertical directions, plotted as modulation against cycles/frame, for the device shown in FIG. 1, and
FIG. 8 is a graph of the camera field size in the horizontal and vertical directions, in degrees, of the device shown in FIG. 1, plotted as a function of the focal length of the camera lens.

In FIG. 1 there is shown a photometric device, comprising;
(a) a video camera generally designated 1 having, in this embodiment, a variable aperture lens 2, a pixel sensor array, a portion of which is shown and designated 4, and known pixel value output signals, relative to black reference zero light value storage elements, four of which are shown and designated 6 to 9, in the sensor array 4, in response to spatial-light intensity information being viewed by the camera 1, the camera 1 having a low geometric distortion,
(b) filter means 10 on the variable aperture lens 2 for, in operation, transforming the camera spectral sensitivity to match a known spectral sensitivity,
(c) an image acquisition board, generally designated 12, connected to the output from the camera 1 and having a spatial resolution closely related to that of the camera 1, the board 12 having a dc restoration circuit for correcting any drift in the camera output signal, a pixel value programmable gain and offset amplifier, and means for storing the pixel values in digital form in a frame memory spatial array, a portion of which is shown and designated 14,
(d) a video target viewer 16 connected to the camera, and
(e) means, in the form of a programmed computer 18, connected to the output of the image acquisition board for computing visual angle, and scaling the pixel output signals for computing contrast from the absolute value in relation to a predetermined light intensity received by the camera 1, and providing a substantially constant and linear relationship capability between the light input and pixel value output signals over substantially the entire pixel sensor array and the light range of operation.

The predetermined light intensity received by the camera 1 is determined in this embodiment by the setting of the variable aperture lens 2. However, in other embodiments this may be achieved by, for example, using spectrally neutral values.

In tests to verify the present invention the video camera 1 was an RCA model TC-101 charge-coupled device (CCD) video camera. The CCD camera was used because of its inherent linearity and lower geometric distortion. Another reason for choosing this camera was because it was possible to modify the camera to obtain the accurate spatial-light data required.

The camera contained a 532 horizontal by 504 vertical element (9 mm by 6.5 mm) interline transfer CCD sensor. The sensor array 4 was a silicon based semiconductor that collects photons at discrete locations, called storage elements, and converts these photon counts into an electrical signal. Images were produced from 250,920 storage elements, 510 horizontal by 492 vertical. (As will be discussed later, however, only 480 vertical lines were used since this is the maximum vertical resolution with the memory spatial array 14 of the image acquisition board 12.) The manufacturer guaranteed that there were no more than six defective storage elements in the sensor array 4.

As has been previously stated, storage elements, such as those designated 6 to 9, in the sensor array 4 were not used as part of the image but were used as "black" reference values. Because the output of the CCD camera was temperature dependent, these "black" elements 6 to 9 were used to define the zero light value and thus set the output values for the picture elements, or pixels. In this way, data generated by the camera 1 was minimally affected by temperature variations. All tests were performed, however, at a room temperature of 21 degrees C.

Since the camera was intended for commercial video applications, the sensor array was sampled at 9.46 MHz and the signals from the storage elements were output according to the RS-170 television standard. This standard requires a composite signal containing both image and synchronization signals having a 1 V peak-to-peak amplitude into a 75 ohm load.

Except for the following three modifications, the camera was utilized as delivered from the factory. First, the infra-red (IR) blocking filter, anterior to the sensor array, was removed since its transmission characteristics were unknown. Second, an adjustment was performed inside the camera to establish a linear relationship between input light (luminance) and output. Thus, if output = input^γ, then by this modification γ = 1. With γ = 1 there was equal brightness resolution over the entire (unsaturated) image at the expense of a larger dynamic range within a given image. Finally, the automatic gain control (AGC) was disabled so that the input/output relationship would be constant over the full range of scene light (luminances). Adjustments for different scene light (luminances) were accomplished with the variable aperture lens 2.

The variable aperture lens 2 was that marketed as a Cosmicar 2/3 inch, f/1.8, 12.5 to 75 mm multi-refractive element zoom lens, and was equipped with a standard C mount. A zoom lens was employed because it afforded closer inspection of small targets without moving the camera. The lens 2 was equipped with standard apertures from f/1.8 to f/22 with a detent at each f stop. The lens focal length was continuously variable from 12.5 to 75 mm, although for target size calculations it was always set by the operator to one of six labeled values (12.5, 15, 20, 30, 50, or 75 mm). Focal distances ranged from 1 m to infinity.

The spectral sensitivity of the camera (without the IR blocking filter) was specified by the manufacturer. These data were used to design the filter means 10 in the form of a V-lambda filter package that would convert the camera's spectral sensitivity to that of the CIE standard observer. The filter package comprised three glass filters 20 to 22, details of which are given in the following Table 1.

TABLE 1
V-lambda Filter Package

Filter    Glass Type      Glass Thickness
20        Schott BG38     3.30 mm
21        Schott KG5      4.00 mm
22        Corning 3307    3.16 mm

With this filter package, the response of each pixel in the sensor array 4 to the electromagnetic spectrum was related to luminance. The output from the camera 1 was calibrated in units of nits, or cd/m². A filter mount (not shown) was specially constructed for this correction filter package and fixed anterior to the first refractive element of the zoom lens 2.

The image acquisition board 12 used was that marketed by Imaging Technology Inc. as PCVISIONplus™, which was an image acquisition board for a PC/AT personal computer. Although several image acquisition boards were commercially available, this product was chosen because the spatial resolution was very close to that provided by the CCD camera 1, and any greater resolution by the board could not have been
utilized, and any less would have degraded that available from the camera 1. Except for a change in the memory buffer address, the image acquisition board 12 was used as delivered from the factory. The board 12 could accept two RS-170 video signals and several synchronization signals. Video signals from the camera 1 were passed to a dc restoration circuit in the board 12 which corrected for any drift in the video signal level; the signal then went to a programmable gain and offset amplifier and on to an 8-bit analog-to-digital (A/D) converter. The A/D converter digitized the video signal and stored the pixel values in the frame memory 14, which was organized as two frame buffers of 512 × 480 pixels each. The output digital-to-analog (D/A) converters could be connected to either of the frame buffers and used to reconstruct the RS-170 video signal for display on the video target viewer 16.

The video target viewer or monitor 16 was a Panasonic WV-5410 monochrome monitor and it was connected as shown to view the images acquired by the camera 1 and processed by the image acquisition board 14. Only a monochrome, or black-and-white, monitor was required because the luminance data contained no color information. This particular monitor was chosen because it had a desirable underscan feature which allowed all of the acquired image to be displayed on the screen.

The monitor served essentially as a view-finder for the system operator. The images displayed on the screen were not intended to be accurate representations of the luminance values stored in the computer 18. Indeed, the monitor has non-linear characteristics and could not be used for this purpose. Therefore, the monitor was only used for locating targets of interest and to set the general exposure level for the camera 1.

The image acquisition board was mounted inside the computer 18, which was an IBM PC/AT compatible computer and included an 80287 math coprocessor. Other conventional hardware of the computer 18 included disk drives, a monitor 24, and a keyboard 26. A mouse was used as an optional feature in conjunction with the ImageActionplus™ software.

To facilitate calibration, several software packages were installed on the personal computer 18. All software was run under the MS-DOS V3.20 operating system.

The main piece of software used throughout the calibration was ImageActionplus™, which was produced by the image acquisition board manufacturer (Imaging Technology, Inc.). This program came with mouse and menu support and could easily acquire, manipulate and process images.

A Microsoft™ C compiler, RS/1™ and Lotus 1-2-3™ were used to perform some calibration calculations. A diagnostic program, PCPLUSCD, was used to verify the correct operation of the hardware. ITEX PCplus™, a library of routines for creating user programs, was also used.

An important part of the linear response system was to establish a correct zero value. Without a fixed zero value it would have been impossible to perform arithmetic manipulations of the data and retain accurate values.

As stated earlier, the camera 1 utilized some storage elements in the sensor array to correct for temperature variations; these storage elements, on each of the 492 horizontal lines of the array, were used as reference "black" values to maintain a constant black-level in the entire image.

The image acquisition board 14 employed two features to further ensure correct zero values: a dc restoration circuit and a programmable zero offset adjustment. The dc restoration circuitry corrected for any drift in the black-level signal from the camera 1. The programmable offset adjustment set the output black signal voltage to correspond to a particular input signal; the purpose of this adjustment was to fine-tune the response level and, although irrelevant for this application, to permit non-standard video signal inputs.

In principle, every pixel should always have generated zero output in darkness. Empirically, however, captured dark images (i.e., with the lens cap on) produced variations in pixel values of five or more counts. These variations were likely due to thermal effects and nonuniformities in the CCD camera response. With the programmable offset of the board adjusted to 73, the average pixel count in the dark was about seven counts. With this zero offset setting, positive values were associated with the true dark values for every one of the approximately 250,000 pixels. Thus no pixel value ever dropped to or below zero. In each subsequent image acquired, the positive value representing the dark value was subtracted from the corresponding pixel value in the image. In this way the largest dynamic range of voltage could be assigned to the scene luminance in the acquired images without introducing a low luminance "clipping" non-linearity.

All calibrations were performed with this offset setting. Once set, this value was intended to be maintained in non-volatile memory. Some difficulty was experienced with this feature, however, so in the application software the offset value was always reset to 73 prior to image acquisition.

Thirty-two dark images were acquired and averaged to give a black "correction image" used for calibrating subsequent image measurements. The majority of pixels in this image had counts ranging between three and nine with a mode of seven. Interestingly, this dark image was composed of five equally wide horizontal bands differing by one count in luminance; the brightest band was in the center of the image, falling off symmetrically to the top and bottom. The dark image was found to be likely to change with temperature, and so new dark images were obtained prior to image acquisition of actual scenes.

With the video camera and image board modifications described above, together with the V-lambda correction filter 10 on the objective lens 2, the output video signal was linearly related to the number of photons impinging on each photosite. That is, the output of the system was linearly related to scene luminance between noise and response saturation. The slope of the linear function relating scene light (luminance) to system output could be changed, however, by a programmable gain amplifier in the input section of the image acquisition board 14. This modification set the amplification of the RS-170 composite signal. The gain could be adjusted from 0.67 to 1.33 in 100 steps. Large values increased the resolution at the expense of dynamic range between noise and saturation, and vice versa. Consequently, a middle gain value was preferred for most applications. The factory setting of 50 was found satisfactory in this regard; this value was reset before each image acquisition.
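The dark-image correction described above can be sketched as follows. This is a minimal illustration, not the patent's software: the 32-image average and the roughly seven-count dark level come from the text, while the NumPy representation, the function names, and the synthetic Poisson dark data are assumptions for the example.

```python
import numpy as np

def make_dark_correction(dark_frames):
    """Average a set of lens-cap ("dark") captures into a per-pixel
    correction image, as done with the 32 dark images in the text."""
    return np.mean(np.stack(dark_frames), axis=0)

def correct_image(raw, dark_correction):
    """Subtract the per-pixel dark value so that zero counts correspond
    to zero luminance; clip so no pixel drops below zero."""
    corrected = raw.astype(np.float64) - dark_correction
    return np.clip(corrected, 0.0, None)

# Illustrative 512 x 480 frames with a dark level of about 7 counts.
rng = np.random.default_rng(0)
darks = [rng.poisson(7.0, size=(480, 512)) for _ in range(32)]
dark_corr = make_dark_correction(darks)

# A hypothetical scene sitting about 100 counts above the dark level.
scene = rng.poisson(7.0, size=(480, 512)) + 100
luminance_counts = correct_image(scene, dark_corr)
```

Averaging many dark frames before subtracting reduces the influence of the frame-to-frame variation of five or more counts that a single dark capture would carry into every measurement.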
The CCD sensor array was inherently linear. Coupled with the video camera 1 and image processing board 14, however, non-linearities between input and output could be produced. Tests were therefore performed to ensure that data produced by the system, after the system modifications described above, accurately scaled light (luminance) information in the visual scene.

In FIG. 2, similar parts to those shown in FIG. 1 are designated by the same reference numerals and the previous description is relied upon to describe them. FIG. 2 shows the way that the camera 1 of the device shown in FIG. 1 was used in tests to verify the present invention.

In FIG. 2 there is shown an optical bench 28, a calibrated light source 30, a regulated dc electrical power supply 32 and a calibrated barium sulfate reflectance standard 34.

In the tests, the light (luminance) of the reflectance standard 34 at different distances from the source 30 was calculated using the inverse square law. Thus, it was possible to produce precisely known luminance values for measurement by the device shown in FIG. 1. (These expected values were verified with a light (luminance) photometer.)

FIG. 3 shows the data obtained with a camera aperture of f/16 and the linear equation best fitting those data using a least squares criterion. The V-lambda filter 10 was removed for this test to increase sensitivity. The filter 10 has no effect on the linearity of the system as long as the spectral power distribution of the source does not change, as was the case for this test. These data establish, then, that the device shown in FIG. 1 responds linearly to changes in scene light (luminance), in the response range between noise and saturation.

Adjustments to the exposure of the sensor array 4 must be made for different brightness levels in the visual scene. Since the automatic gain control in the camera 1 was disconnected, the sensor array exposure was controlled by varying the lens aperture of lens 2. Although the system retained its response linearity (between noise and saturation) with these changes, the slope of the response curve changed by factors related to the areas of the lens apertures of the lens 2. Thus, if the slope of the response curve was 1.0 for a given f stop, then reducing exposure by one stop (nominally a factor of 2) produced a response curve slope of 0.5 with the new aperture.

Under the experimental setup shown in FIG. 2, the areas of the different apertures of the lens 2 were deduced. With a fixed amount of light falling on the reflectance standard 34, output from the (linear) device shown in FIG. 1 was measured for two successive aperture settings demarcated by the lens aperture detents. The ratios of outputs from successive aperture settings are presented in the following Table 2. Depending upon the sensitivity range, measurements were made with and without the V-lambda filter 10. All measurements were obtained from pixels in the center of the captured images. By restricting the measurement area to the center of the image, vignetting (response falloff at the image edges) was avoided. (Vignetting was a problem with this system and is discussed in detail later.) The ratios for different aperture settings were different from the expected values of 2.0. These values were considered accurate to within about 3%. This uncertainty is caused by mechanical inconsistency in the aperture mechanism.

TABLE 2
The relative areas of successive f stops (Aperture Ratios)

F/Stop    Ratio
1.8       1.02
2.0       1.77
2.8       1.89
4.0       1.84
5.6       1.95
8.0       2.12
11.0      2.22
16.0      2.01
22.0      -

To determine the response function for the device shown in FIG. 1 with every aperture under actual operating conditions (i.e., when measuring luminance), it was necessary to obtain data with the V-lambda correction filter 10 in place with an aperture of f/2, again using the experimental setup shown in FIG. 2. These data are also described well by a straight line of slope 1.095, thus providing a gain of 0.913 cd/m² per count (system response value) for the f/2 aperture. Using the ratios in Table 2, it was then possible to determine the gain values of every other aperture with the V-lambda correction filter 10 in place. It should be noted, however, that with a 3% uncertainty for a given aperture value, some accumulated error possibly occurs when using the ratios in Table 2.

It was necessary to evaluate the spectral response of the device shown in FIG. 1 with the V-lambda correction filter 10 attached. It was thought that the filter 10 would make the spectral sensitivity of the device shown in FIG. 1 exactly equal to V-lambda. The spectral response of the device shown in FIG. 1 is given in FIG. 5, and this was compared to that of a high quality Pritchard laboratory photometer whose spectral sensitivity is documented to be very close to V-lambda. A comparison between the two devices was performed using a variety of commercially available light sources, each having different spectral power distributions. Since the device shown in FIG. 1 was designed for use in actual environments, this procedure was thought to be sufficient for estimating error magnitudes for most lighting applications.

Eight light sources having different spectral compositions were selected: incandescent (I), high pressure sodium (HPS), low pressure sodium (LPS), metal halide (MH), mercury (M), cool-white fluorescent (CWF), warm white fluorescent (WWF), and vita-lite™ fluorescent (VLF). Using the standard 0-45 degree photometric geometry (Wyszecki and Stiles, 1982), these sources illuminated, in turn, a barium sulfate plate which was viewed, again in turn, by the two photometric devices from a distance of about 1 m.

FIG. 6 shows the ratio of the camera output to the Pritchard luminance values, normalized for the incandescent source. All camera values were obtained with an aperture of f/2.8 except that for the incandescent lamp, which was taken at f/2. To minimize potential errors from vignetting, only the pixels from the central area of the image were considered.

The differences between the output from the device shown in FIG. 1 and the Pritchard device were found to be small, never exceeding 8%. It should be noted, however, that those light sources with mercury line emissions (254, 313, 365, 405, 436, 546 and 728 nm) were associated with the largest error. This error may be due
to improper UV blocking for the V-lambda filter 10, or to using "typical" spectral sensitivity data rather than that for the particular camera 1; this may be corrected by using filters. Correction factors taken from FIG. 7 can be used to minimize these small errors while acquiring images illuminated by sources with mercury line emissions.

The device shown in FIG. 1 should ideally produce the same response for the same scene luminance anywhere in the image. In order to check for any inconsistencies in pixel responses to the same scene luminance, it was necessary to develop a technique that would provide equal scene luminances throughout the captured image.

Images were acquired of the interior of a 1 m integrating sphere, illuminated with either a 100 or 300 W incandescent lamp. The camera lens 2 was aimed at the opposite interior wall of the sphere and defocussed during image acquisition to minimize the effects of paint imperfections. Thirty-two images were acquired and averaged to further reduce the impact of these imperfections.

Using this technique it was determined that the camera lens 2 produced vignetting. Thus, more light from the equal-luminance scene reaches the center of the focused image than the edges. Consequently, points of equal luminance in a scene did not create equal pixel responses throughout the image. The magnitude of lens vignetting depended on the aperture setting and the zoom lens focal length. In the device shown in FIG. 1, 9 lens apertures and 6 focal lengths were employed. Without correction the same visual scene produced 54 different sets of luminance data, one for each combination of lens aperture and zoom lens focal length. It was found that vignetting could be overcome to a large degree through software manipulation. This was not an ideal solution, and proper optical components can be incorporated into the device shown in FIG. 1. Fifty-four calibration images, based upon an average of thirty-two images of the hemisphere wall, were obtained for use in multiplying each acquired image by the inverse of its corresponding calibration image.

As for all imaging systems, there is a loss in image fidelity with the device shown in FIG. 1 at higher spatial frequencies. In other words, the image contrast of small details was less than it actually was in the visual scene. Ignored, these losses produced errors in measured contrast and, consequently, in calculated levels of visual performance. Importantly, these losses are also found in conventional optical photometers, but are rarely if ever reported. Therefore, such losses must be considered for every optical system.

To adequately define the spatial frequency response of the device shown in FIG. 1 it was necessary to perform measurements in both the horizontal and vertical directions, because the camera array 4 and imaging board 12 were not isotropic. To minimize aliasing in the horizontal direction there was an electronic filter for the video signal in the input stages of the image acquisition board 12. It had a 3 dB cutoff frequency (70% of maximum) of 4.2 MHz. Since the horizontal scanning rate was 63.5 μsec/line, the 3 dB cutoff frequency of 4.2 MHz limited the resolution to 270 cycles/frame in that direction. To avoid aliasing, the sampling theorem required that the video signal be sampled at the Nyquist rate, i.e., at a rate which is twice the highest frequency contained in the video signal. The line-by-line sampling rate of 512 pixels/line was approximately twice the filter cutoff frequency of 270 cycles/frame, as required by the sampling theorem. There was no corresponding filter in the vertical direction, so aliasing was to be expected.

Normally, the MTF is specified in terms of cycles per degree subtended by the target. Since the device shown in FIG. 1 was equipped with the zoom lens 2, it was necessary to define the MTF in terms of cycles per image frame. This was the number of cycles subtended by the active area of the sensor array 4 in the horizontal or vertical direction. (The horizontal direction was larger than the vertical by a factor of about 4/3.)

The target comprised 48 dark bars on 203 mm wide white paper. The target was produced with a 300 dot per inch laser printer. A bar stimulus was used instead of the more conventional sine wave stimulus because it was easier to produce, and each spatial frequency (luminance cycles per distance) could be resolved.
tained and stored for subsequent image corrections. By
The target used in the tests was a periodic stimulus
respective calibration image, the same scene luminance
produced the same pixel response throughout the image
for any aperture and zoom focal length.
As with other optical devices, errors can be created
by dust and dirt on the optical components. These er
produce and provides similar results. The stimulus was
placed 1585 mm from the plane of the camera sensor
rors are particularly noticeable with the device shown
array for the horizontal line measurements and 2092 mm
in FIG. 1. Dust particles on the lens 2 and sensor array
for the vertial line measurements. The zoom lens 2 was
surface cause circular dark spots in the image. The 50 , used to vary the spatial frequency of the target on the
larger the aperture the larger the spot diameter; the
sensor array 4. The target was illuminated with ambient
closer the particle to the sensor array 4, the sharper the
room lighting from cool-white ?uorescent luminaires.
image. The luminance of the areas shaded by the dust
The V-lambda filter 10 was removed to achieve greater
were of the order of 3% darker than unshaded areas.
sensitivity at a lens aperture of f/2.
- Before calibration the optical components were thor
All acquired images were offset corrected and mea
oughly cleaned, but, unfortunately, it was impossible to
surements were taken only in the center of the image to
remove all of the dust. Thus, the calibration images
record these occasional spots. Consequently small er
avoid vignetting. The maximum and minimum pixel
values over several cycles were measured. Contrast (C),
rors, of approximately 3%, were observed in some areas
as defined in equation 1, was calculated and modulation,
of the scene if between calibration and subsequent 60 relative to the observed contrast at the lowest measured
image acquisition the location of the spots had changed,
spatial frequency (C=0.89 at 28 cycles/frame), was
the spots were removed, or more dust accumulated on
plotted as a function of cycles per frame in FIG. 8.
the optical components.
The image quality of any optical device depends
upon its refracting (and re?ecting) elements as well as 65
the spatial resolution of the photosensitive medium.
Imaging systems could be characterized by the modula
tion transfer function (MTF) which describes how well
Lb=average luminance of the white paper
Lt=average luminance of a dark bar
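The flat-field (vignetting) correction and the bar-target contrast computation described above amount to simple array arithmetic. The following is a minimal sketch with NumPy, not the patent's actual software; the array sizes, vignetting profile, and grey-level values (200 for white paper, 22 for a dark bar) are hypothetical.

```python
import numpy as np

def flat_field_correct(image, calibration):
    """Compensate lens vignetting by multiplying the acquired image
    by the (normalized) inverse of the calibration image stored for
    the same aperture and zoom focal length."""
    return image * (calibration.mean() / calibration)

def bar_contrast(samples):
    """Contrast of a bar target from the maximum and minimum pixel
    values over several cycles: C = (Lb - Lt) / Lb."""
    lb, lt = samples.max(), samples.min()
    return (lb - lt) / lb

# Synthetic check: a uniform 200-unit scene viewed through ~10%
# corner vignetting, corrected with a calibration image taken of
# the sphere wall through the same optics.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
vignette = 1.0 - 0.1 * r2
calibration = 200.0 * vignette       # stored calibration image
scene = 200.0 * vignette             # uniform scene, same vignetting
corrected = flat_field_correct(scene, calibration)
print(float(corrected.std()) < 1e-9)   # True: pixel response now uniform

# Bar-target samples: white paper at 200, dark bars at 22
print(bar_contrast(np.tile([200.0, 22.0], 24)))  # 0.89
```

With the hypothetical grey levels Lb=200 and Lt=22, the computed contrast happens to match the 0.89 figure quoted for the lowest measured spatial frequency.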
FIG. 7 shows that errors occurred in measuring the luminances of targets smaller than 58 cycles/frame. To know where measurement errors would occur with the device shown in FIG. 1, it was found that the actual size of a target must be related to the size of the image frame for a given focal length. The number of cycles/degree in the target can be related to the number of cycles displayed in a frame and the focal length of the lens by equation 2:

cycles/frame = cycles/degree · k/f    (2)

where
k = degrees·mm/frame
  = 420 in the horizontal direction
  = 320 in the vertical direction
and
f = focal length of lens, in mm.

These values of k were determined empirically from the lens focal length, the number of cycles/degree in the bar stimulus, and the number of cycles displayed in an image frame.

To avoid this problem with the device shown in FIG. 1, it was deduced that objects must fill at least 2% of the imaging frame 14. This was determined from the data in FIG. 7, which showed that the luminances of objects having a fundamental frequency greater than 58 cycles per frame (either vertically or horizontally) will be attenuated by the high frequency cut off. At maximum zoom (focal length of 75 mm) the (vertical) image frame covers 4.3 degrees (FIG. 9). Thus, objects 0.086 degrees (5 minutes of arc) or larger were found to have negligible luminance attenuation due to the high spatial frequency cut shown in FIG. 7. This limit is better than that for most conventional luminance photometers. Values for other focal lengths may be determined from the data in FIG. 9, where the field (deg) is plotted against the focal length of the camera 1. A macro lens will be affixed to the camera 1 for measurements of still smaller objects.

In some embodiments of the present invention the sensor array 4 could rapidly scan a visual scene to produce a two-dimensional image.

In other embodiments of the present invention the filter means 10 may be one of a plurality of different filter means 10, 36 and 38 which are used sequentially to deduce colour information. For example, long, medium and short wave filters 10, 36 and 38, respectively, could be used sequentially to deduce colour information.

It is within the scope of the present invention to use more than one camera 1, filter 10 and image capture board 12 in the device in order to obtain colour information.

Embodiments of the present invention may be used, for example, to align lamps in reflectors; to measure different light intensities for horticultural purposes at different positions in, for example, greenhouses in order to adjust the lighting towards uniformity throughout the greenhouse; for measuring the different light intensities in studios to enhance photographic and television reproductions; and for measuring different light intensities to improve the visibilities of, for example, roads or airport runways.

The following are the relevant pages of a user's manual that has been compiled for the device shown in FIG. 1. The particular form of the device shown in FIG. 1 is referred to in these pages as the CapCalc system, and the manual is based on tests carried out with this system. The following trademarks are used in these pages: IBM and IBM Personal Computer AT are registered trademarks of International Business Machines Corporation. PCVISIONplus is a registered trademark of Imaging Technology Inc. RCA is a registered trademark of RCA Corporation. COSMICAR is a registered trademark of ASAHI Precision Company Ltd. PANASONIC is a registered trademark of Panasonic Corporation.

1.0 PROGRAM OVERVIEW

1.1 Introduction to Version 1.0 of System

CapCalc stands for Capture and Calculate. The CapCalc system accurately measures a large number of luminances and quickly performs lighting analyses on those data. The system is a synthesis of state of the art components including a solid state Charge Coupled Device (CCD) video camera with a photopic spectral correction filter, and a personal computer with a digital image processing board. The capability and potential of the system make it valuable for a wide range of applications.

The calibrated video camera acquires luminance data much like the typical spot luminance meter, but unlike a spot meter, it simultaneously resolves an entire scene into approximately 250,000 luminance measurements. These data are then stored by the digital image processing board.

There are other important aspects of the visual scene made available for evaluation by use of the video camera. Not only is a large number of luminance values available, but their precise spatial relation is maintained. Therefore, the acquired image also allows for the determination of object size, shape, contrast, and viewing distance within the visual scene.

The personal computer dramatically reduces the time required to understand and evaluate lighting analyses. Currently, the software calculates Relative Visual Performance (RVP). However, the menu driven software will be expanded to perform other procedures. Selection and learning of the various procedures are made easy by using help screens. Any information required from the user is prompted for and checked by the software upon entry so that mistakes are detected. In short, the system is a practical tool for both lighting application and education.

This system is also a tool for research. The convenient ability to capture and have access to such a complete array of luminance values within an image has never been possible before. Issues regarding brightness, size, and shape will be easier to investigate. Having this information available will facilitate a more complete understanding of human response to light and lighting.

This manual discusses how to use the capabilities which are currently available with the CapCalc system. Although every attempt has been made to produce a fail-safe system, the National Research Council Canada assumes no responsibility for the validity, accuracy, or applicability of any of the results obtained from the use of CapCalc. However, any comments, suggestions or errors encountered in either the results or the documentation should be brought to our attention.

1.2 System Capabilities
CapCalc is an extensive measurement and analysis system. The software is designed and documented for ease of use. Menu driven activities permit complete flexibility and control of the system capabilities. Its major capabilities include:

a. With the use of a calibrated video camera and digital image processing board, an image is quickly acquired, digitized, and stored as approximately a quarter million luminance values. The reader should refer to Appendix A where a technical report is provided which discusses the camera and computer.

b. All image and luminance information can be saved on disk under a user specified image file name for future use. This information can also be easily retrieved or erased.

c. Portions of the image can be isolated by placing a user specified rectangular frame around the area of interest. The details of the image within the frame can be more easily observed by scaled enlargement (magnification).

d. The user can scan an image with a cursor, observing the luminance at any desired pixel location.

e. The resolution of luminances within the framed area of an image can be reduced. This process is used for converging luminances of a similar level, and will be explained in more detail later. The visual result produced on the image by doing so is a contouring of the luminances to a new specified number of steps. This is helpful for purposes of separating areas of the image, such as target and background, for calculations.

f. Relative Visual Performance (RVP) can be calculated for any user specified target, background, and size within an image, as well as determining the consequences of observer age in the investigation. The results are immediately displayed to the user. The reader should refer to Appendix B where three technical reports are provided which explain RVP.

g. On-line documentation is available to help the user during system use. This is user documentation which can be displayed on the computer screen for assisting in system use. Status lines are also located at the bottom of the screen to inform the user of current activities and errors encountered by the system.

1.3 What You Need to Use CapCalc System

To insure proper system operation and complete use of all the features and capabilities of the CapCalc system, you should have the following:

1.3.1 Personal Computer and Configuration

IBM Personal Computer AT, or fully compatible microcomputer configured as below:
IBM AT System Unit with at least one 1.2 Mbyte high density diskette drive and a 20 Mbyte hard disk
Expanded memory to 640K with DOS Version 3.0 or higher
80 column monitor
80287 Numerical Data Processor chip: "coprocessor" (Optional but strongly recommended)

1.3.2 Calibrated Video Camera and Video Monitor

RCA Solid State CCD Video Camera model TC100 (electronically modified for luminance measurement). The camera should always be mounted on a tripod or other rigid device.
COSMICAR TV ZOOM LENS (fitted with additional optical filter for luminance measurement) 12.5mm-75mm 1:1.8. The lens cap should be kept on the lens when the camera is not being used.
Panasonic WV-5410 black and white video monitor. Any RGB or black and white video monitor of equal specification will suffice (refer to the Panasonic WV-5410 Operation Instruction manual for specifications).

1.3.3 Digital Image Processing Board

Imaging Technology's PCVISIONplus Frame Grabber and cable that connects it to the calibrated camera and display video monitor.
Imaging Technology's PCVISIONplus Frame Grabber User's Manual (this is necessary for installation of the Frame Grabber board and other video equipment).

The combination of camera, lens, and digital image processing board has been calibrated at the National Research Council Canada, and delivered to you along with this manual and the Frame Grabber manual. Due to the unique characteristics of each camera, lens, and processing board, the results of calibration for each system are slightly different. These differences are compensated for by unique calibration factors which are used by your system software. For this reason, your system is given a unique number which is recorded at the beginning of this manual. The serial number for each of these system components is also recorded for your reference. Only these components should be used with your CapCalc system software to insure accurate luminance measurement.

1.3.4 User's Manual and Master Program Diskettes

CapCalc user's manual and master program diskettes. The following diskettes comprise the CapCalc system software:
CapCalc System Software (CC1)
CapCalc Run Data 1 (CC2)
CapCalc Run Data 2 (CC3)
CapCalc Run Data 3 (CC4)
A sufficient number of blank high-density diskettes for master program diskette back-up and image file and luminance information storage.

1.4 Getting Started

The CapCalc user's manual and system software provide all of the information needed to operate the CapCalc system successfully and to have it become a useful tool for luminance measurement and analysis applications. The following sections provide instructions to help you set up the system and get started.

1.4.1 Backing up Master Program Diskettes

The master program diskettes included as part of the CapCalc system package contain the software and run data which is used by the software. They must be carefully protected to insure against loss or damage to the software. Therefore, before attempting to install the software onto the computer hard disk and run CapCalc, it is important that you do the following:
1) The four master diskettes have been tested prior to shipment. If you suspect that any of the master diskettes you received have been damaged, contact the National Research Council Canada immediately.
2) Make a copy of each CapCalc master diskette which you have received. To make the necessary copies the master diskette should be copied to an empty directory on the C drive. Then, a blank, formatted high density diskette should be placed in the A drive and all files from the chosen directory on the C drive copied to the A drive. This should
be repeated for each master diskette. This set should be labeled as the "back-up version", while the master set should be saved in a safe place where it will not be damaged.

Note: All diskettes used to back-up the master program diskettes should be double-sided and high-density.

The CapCalc system software must be operated from the hard disk of the IBM AT. For an explanation of the system software installation on the hard disk, please refer to the next section.

1.4.2 Installation of Software onto Hard Disk

Due to the disk space necessary to store image file and luminance information, the CapCalc system software has been designed to run on an IBM AT that includes a hard disk. The recommended arrangement for installing the CapCalc software involves placing all of the contents of the system software diskettes into a single subdirectory on the hard disk. This subdirectory is assumed to be named "CAPCALC". To perform this you need to do the following:
Step 1: With the computer on and at the <C> prompt, initialize subdirectory CAPCALC by typing "MKDIR CAPCALC"
Step 2: Successively insert each of the CapCalc system software diskettes into drive A, and type "COPY A:*.* C:\CAPCALC".
After all files have been copied to this subdirectory, the installation of CapCalc on the hard disk is complete. Each time you wish to run the CapCalc system software you should be in the CAPCALC subdirectory. To get to the CAPCALC subdirectory, type "CD CAPCALC".

1.4.3 Installation of Video Equipment

The PCVISIONplus Frame Grabber is a video digitizer and frame memory capable of digitizing the standard RS-170/330 video signal received from the calibrated camera. The digitized image is then stored in a frame memory on the Frame Grabber. This image is simultaneously displayed on the video monitor.

The PCVISIONplus Frame Grabber must be placed into the IBM AT expansion slot to allow the CapCalc system software to perform the various image analysis processing. In order to perform the proper configuration and installation of the PCVISIONplus Frame Grabber, the reader is directed to the PCVISIONplus Frame Grabber User's Manual which accompanies the Frame Grabber. Chapters 2-3 of the Frame Grabber User's Manual contain the information necessary to perform this task. Chapter 3 will also explain how the video camera and monitor are connected to the Frame Grabber, which completes the process of installing the video equipment. Note: the address of the Frame Grabber must be changed to D0000.

1.4.4 Additional Diskettes

In addition to the floppy diskettes to which you copy the CapCalc system software diskettes, you may want to keep handy additional blank formatted diskettes to store image file and luminance information. The image file and luminance information can take considerable disk space (approximately 500K), so for your own hard disk maintenance purposes, you may wish to store old files and information to floppy diskettes.

1.4.5 Using This Manual

The remaining chapters of this manual contain information concerning the operation of the CapCalc system. Chapter 2 discusses the concept and control of menus, as well as describing the online help and status lines which further aid in making effective and efficient use of the system. Chapter 3 describes in more detail each main menu and sub-menu activity. A step by step example of how to use the CapCalc system can be found in Chapter 4. The Appendices contain various technical information, reports, and references for a deeper understanding of the system.

It is recommended that you carefully continue through all of the information in the following chapters. Once you are familiar with the structure, terminology, and use of the system, this manual will take on the role of a reference document, and will find only occasional use.

2.0 CAPCALC SYSTEM SOFTWARE TECHNIQUES

Luminance measurement and analysis with the CapCalc system is performed with the supplied equipment and software. The system software gives step by step instructions on what the user must do with the equipment to acquire and analyze the luminances. All of the activities to perform the steps are arranged as menu items.

This chapter will introduce the CapCalc system software initialization, structure, and techniques for use of the system. It will discuss how you are able to move around within the software to perform the task of interest, and some features that will facilitate this process.

Note: All information in this manual which appears on the screen will be shown in bold print to help distinguish the screen display from descriptive text.

2.1 Preparing Your IBM Personal Computer AT

To begin a session with the CapCalc system, your IBM AT must first be on and at the system prompt within the CAPCALC subdirectory of the hard disk. Be sure that your video equipment is plugged in and turned on. You may want to check the date and time kept by your computer so that all files on the diskette directory will be properly recorded. To initialize the system software, type CAPCALC after the <C:\CAPCALC> prompt. The program title will appear in a window. A window is a rectangular area on your screen, usually bounded by a border, which is used for various applications. Such applications will become apparent as you move along through this manual. The instructions "Hit Enter" will be printed at the bottom center of the window border.

2.2 Introducing the Main Menu and Sub-Menu

After you have read the title window, press the [Enter] key to move ahead to the following main menu:

Acquire    File    Frame    Calculate    Exit

A menu is a special application of a window which consists of a list of items. Each menu item performs a special function or activity, and is selected by typing the first character of the item or moving the highlight bar with the cursor arrow keys ([→], [↑], [↓], [←]) to the
desired item and pressing the [Enter] key. To leave the CapCalc system software and return to DOS, the user selects the Exit item. The main menu in CapCalc consists of items displayed in a single-row multiple-column formatted window. For example, select the main menu item "Acquire". Upon selection of this item, the user is branched to another window containing a sub-menu as follows:

Acquire    File    Frame    Calculate    Exit
  Long
  Short
  Refresh
  Number
  Clear
  Zeroing

A sub-menu is a special type of menu which consists of activities relating to the previous menu item it has branched from. The control of a sub-menu is just like a standard menu. The user presses the [Esc] key to return to the item of the previous menu. The sub-menus in CapCalc consist of activities displayed in a multiple-row single-column formatted window. For example, select the sub-menu activity "Number". This activity performs a specific function, so upon selection the user is branched to yet another window containing the following instructions:

Select number of images to be averaged.

The user can perform the instructions to accomplish the selected activity or return to the previous menu by pressing the [Esc] key. The purpose of this exercise has been to introduce the structure of, and techniques for moving around within, the software. At this time it is not intended to perform any activities, so please hit the [Esc] key twice to return back to the main menu. By the same method, one can observe sub-menu activities associated with the other main menu items. The selection of main menu item [Exit] will terminate the session with the CapCalc system software, and return to DOS.

2.3 On-line Documentation and Status Lines

In addition to the help provided by this user's manual, there are two more convenient forms of assistance as follows:
1) The user can obtain online documentation by pressing the [F1] function key. This documentation is a reduced version of the information in the manual. Pressing the [F1] key will bring a window onto the bottom of the screen. In it will be documentation concerning the area of the software where the user is located. In most cases the explanation is larger than will fit into the provided window. However, the user can scroll to various parts of this documentation by using the cursor arrow keys. Pressing the [Esc] key removes this window and returns control of the menu system to the user.
2) The two status lines at the bottom of the screen also supply helpful information. The first line keeps a current status of the file and path with which the user is working. The file is the name associated to the image and luminance information. A path is used to search a specified directory for a file. The second line is a short explanation of the particular activity where the user is located, and is also used for error and warning messages when encountered by the system software. The behaviour of these two status lines will be illustrated in Chapter 3.

3.0 DETAILS OF THE MAIN MENU AND SUB-MENU ACTIVITIES

This chapter will cover the details of each main menu item and related sub-menu activities. It will cover the purpose of and user response to each activity. All of the activities have instructions which are displayed on the screen. However, the explanations given here are more complete. A status line at the bottom of the screen gives a short explanation of the activity in which the user is currently involved. Should the user need more detailed information, the online help is available at any time by pressing [F1].

3.1 Acquire

The luminance measurement process involves the selection of a scene with the camera and acquiring its image using the Acquire main menu item. To acquire an image it is necessary first to select a scene with the desired visual detail and then to adjust the zoom and aperture setting.

The zoom setting is used to increase the spatial resolution within the image. The system software keeps track of the original size for calculation purposes. Therefore the user is responsible for supplying this zoom information to the CapCalc system at image acquisition time. This information is maintained with the image. The losses of small spatial detail within the final image are due to the optical and the electronic imaging process. These losses are reduced if one moves closer to the object of interest to increase its size. The same effect can be produced by zooming in on the object. Essentially, objects of interest should fill 2% or more of a captured frame to avoid losses in spatial detail.

The final image is produced in several steps. First, an initial image of the scene is produced by focusing on the photosensitive CCD array within the camera. The dimensions of this array are 510 columns by 492 rows. Second, every discrete element of this array integrates the luminous portion of the image which falls onto it and converts it into a digital signal. Third, the digital image is transformed into the standard RS-170 analog video signal for transportation to the Frame Grabber within the IBM AT. The Frame Grabber then constructs a digital image for storage in the frame memory by digitizing the analog signal. This frame memory consists of an array with dimensions of 512 columns by 480 rows. Notice that the array dimensions of the Frame Grabber do not match those of the CCD sensor. Therefore, information will be lost in the digitizing process to reconstruct the final digital image stored on the Frame Grabber.

The lens aperture is used to control the exposure of the CCD array. Therefore, the measured luminance levels must be scaled by the aperture setting to obtain the true luminance information within the scene. For this reason the camera is calibrated as a function of aperture setting, and the user is responsible for supplying this information to the CapCalc system at time of image acquisition. This information is maintained along with the image.
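The arithmetic just described, together with the image averaging and zero-level (lens-cap) subtraction performed by the Acquire activities, can be sketched as simple array operations. The following is a rough illustration, not the actual CapCalc code; the 8-bit grey-level ceiling, the aperture factor, and all pixel values are hypothetical.

```python
import numpy as np

GREY_MAX = 255.0  # assumed maximum grey level of the frame memory

def acquire_luminance(frames, zero_level, aperture_factor):
    """Average repeated frames, subtract the lens-cap (zero level)
    image to remove the dark-current offset, and scale by the
    calibration factor for the current aperture setting.  Pixels at
    the maximum grey level are flagged as saturated, since their
    true luminance is unknown."""
    mean = np.mean(frames, axis=0)
    saturated = mean >= GREY_MAX
    luminance = (mean - zero_level) * aperture_factor
    luminance[saturated] = np.nan   # saturated: do not trust the value
    return luminance, saturated

# Hypothetical example: three noisy frames of a uniform patch whose
# true signal is 120 grey levels above a dark-current offset of 8.
rng = np.random.default_rng(0)
zero = np.full((4, 4), 8.0)
frames = np.array([120.0 + zero + rng.normal(0, 2, (4, 4))
                   for _ in range(3)])
lum, sat = acquire_luminance(frames, zero, aperture_factor=1.0)
print(bool(sat.any()))             # False: nothing saturated
print(abs(lum.mean() - 120) < 3)   # True: offset removed, noise averaged
```

Flagging saturated pixels instead of clamping them mirrors the manual's warning that information lost to saturation or noise cannot be recovered by rescaling.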
The dynamic range of the camera is defined by the following system characteristics:
1) the maximum signal which can be tolerated by the sensor, and
2) the minimum signal it can resolve above the electronic noise (dark current). Electronic noise is an undesirable electrical disturbance of random amplitude and frequency which constitutes an irreducible limit on signal-resolving capability.

As mentioned above, the aperture is used to scale the scene luminance within this range. It is important to point out the following consequences of doing so:
1) If the aperture setting is such that parts of the image are above the dynamic range of the camera, then those portions are assigned the maximum luminance value and are referred to as "saturated."
2) If the aperture setting is such that parts of the image are below the dynamic range of the camera, then those portions of the image are indistinguishable from black (or noise).

Under some circumstances, the luminance range of a scene is greater than that which can fit inside the range of the camera at a single aperture. Therefore, one of the following two conditions will arise:
1) In order to keep parts of the image from going dark, you must allow part of the image to remain saturated.
2) In order to keep parts of the image from being saturated, you must allow parts of the image to remain dark.
In either case you are sacrificing the ability of the system to generate accurate luminance data, because information is lost through saturation or noise. It is for this reason that multiple aperture image construction is available with the CapCalc system. This is a sub-menu activity of Acquire.

Due to the noise of the system, it is also necessary to perform the two following sub-menu activities to generate accurate luminances:
1) To improve reliability of a final image it is best to average the results of multiple images.
2) To estimate the noise level (dark current) produced by the system, an image is captured with the lens cap on. This zero level image is then subtracted from all subsequent images (without the lens cap) to scale the image luminances above the noise.
The noise stabilizes considerably once the system components have been on for at least one hour. The system components are on if the computer is turned on and the camera is plugged in. If the green LED on the back of the camera is illuminated, then the camera is on.

Upon selection of the Acquire main menu item, the user is branched to the sub-menu of activities which give instructions to perform these functions.

3.1.1 Long

The Long sub-menu activity permits the user to acquire an image using multiple apertures, which takes longer to perform than the Short activity (section 3.1.2). As discussed above, this process permits the accurate acquisition of an image which has luminances in the scene greater than the dynamic range of the camera.

Note: It is important that the image be static and the … 2) Successive images at each aperture setting may be created by averaging multiple images (section 3.1.4).

Upon selection of the Long activity the user is branched from the sub-menu to a window with the following message:

Select image of interest

This instructs the user to position the camera on the scene of interest. The image can be observed on the video monitor. The camera should always be mounted in a stationary position, usually on a tripod. Once satisfied with an image, you can select it by pressing any key. Another window will appear on the screen with the following message:

Select zoom setting from lens.
12.5
15.0
20.0
30.0
50.0
75.0

This informs the user to set the zoom setting on the lens to a position which produces the best spatial resolution of the image without losing any area of interest within the image. The setting must line up with one of the designated focal lengths of 12.5, 15.0, 20.0, 30.0, 50.0, or 75.0 mm as shown on the lens barrel. This information is needed by the software to compute the actual size information within the image. Once the zoom has been set, the user should select the appropriate focal length from the above window using the arrow cursor keys and hitting the [Enter] key. Another window will appear on the screen with the following message:

Set aperture to 1.8

This informs the user to set the aperture at the position of highest exposure (aperture is fully open). Once the user does this, a window will temporarily appear on the screen with the following message:

Processing . . .

The system is acquiring an image or multiple images (section 3.1.4) at the 1.8 aperture setting, subtracting the zero level (see section 3.1.6), and storing the information. After the processing is complete, if there is no saturation in the image, the user will be informed with a new instruction indicating the process is complete (below). If there is saturation within the image, then these portions of the image will begin flashing black and white, and another window will appear with the following message:

Flashing areas are saturated.
To measure higher luminance, set aperture to 2.0 and hit <ENTER>.
To accept picture as is, hit <ESC>.

This permits more of those portions of the image that were saturated at aperture setting 1.8 to come within the dynamic range of the camera. After hitting [Enter], the software will once again acquire an image (or multi-
ple images) and subtract the zero level, but this time
considering only those areas that have now been re
camera not move during this entire process for two
duced below saturation This process will continue
reasons:
through successive aperture settings (2.8, 4, 5.6, 8, ll,
of
1) The ?nal image data are constructed from portions 65 16, 22) until no part of the image is saturated. Hitting
[ESC] at any time terminates this sequence, leaving
multiple images captured at different aperture set
some saturation within the image (This implies that the
tings.
saturated areas are of no interest to the user). Once the
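The Long acquisition sequence described above (acquire wide open, flag saturated pixels, stop down, merge the usable portions) can be sketched in code. This is a hypothetical reconstruction, not the CapCalc software itself: the 8-bit camera model, the dark-current level, and all function names are assumptions. The one physical fact it relies on is that exposure varies inversely with the square of the f-number, so a pixel captured at f-stop f can be rescaled by f² onto a common relative-luminance scale.

```python
import numpy as np

# Aperture sequence used by the Long activity (f-numbers on the lens barrel).
F_STOPS = [1.8, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0]
SATURATED = 255.0   # ceiling of a hypothetical 8-bit capture board
DARK_LEVEL = 8.0    # hypothetical zero-level (dark current) offset

def capture(scene, f_stop):
    """Toy camera model: signal falls off as 1/f_stop**2, a fixed
    dark-current offset is added, and the result clips at saturation."""
    return np.clip(scene / f_stop**2 + DARK_LEVEL, 0.0, SATURATED)

def long_acquire(scene):
    """Multi-aperture composite: each pixel keeps the value from the
    widest aperture at which it is below saturation, rescaled by f**2
    so every pixel shares one relative-luminance scale."""
    composite = np.full(scene.shape, np.nan)
    for f in F_STOPS:
        frame = capture(scene, f) - DARK_LEVEL            # zero-level subtraction
        usable = np.isnan(composite) & (frame < SATURATED - DARK_LEVEL)
        composite[usable] = frame[usable] * f**2          # undo the 1/f**2 falloff
        if not np.isnan(composite).any():                 # nothing left saturated
            break
    return composite

scene = np.array([500.0, 5000.0, 50000.0])  # relative scene luminances
result = long_acquire(scene)                # all three recovered, none saturated
```

At f/1.8 alone, the brightest of these three pixels would clip at the 255 ceiling; the merged composite preserves the full range, which is exactly why the Long activity exists.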
Once the acquisition process has ended, a window will appear with the following message:

  Long capture phase completed.
  Select an aperture setting for which you are comfortable with the image.

Much of the image may go dark in order to bring all areas below saturation. For this reason, these instructions permit the user to select the preferred aperture setting which produces the best image for viewing. This information must also be maintained by the software, so once the aperture has been set, hitting any key will display another window on the screen with the following instructions:

  Select Aperture setting from lens.
  1.8
  2.0
  2.8
  4.0
  5.6
  8.0
  11.0
  16.0
  22.0

The user then selects the appropriate aperture setting from the above menu which matches the setting on the lens barrel. This is done by use of the arrow cursor keys, then hitting the [Enter] key. Once this is done, the user is returned to the sub-menu. This last step in no way affects the stored data from the Long image acquisition process.
3.1.2 Short
The Short sub-menu activity is exactly like the Long activity (section 3.1.1), except the image is acquired with only one aperture setting. This requires a shorter period of time than the Long activity (section 3.1.1). The luminance range within a scene may be beyond the dynamic range of the camera, in which case the user must consider the unfavorable circumstances of saturation and noise described above (section 3.1).
Upon selection of the Short activity the user is branched from the sub-menu to a window with the following message:

  Select image of interest

Once satisfied with the image the user continues by pressing any key. A window with the following message will appear:

  Select zoom setting from lens.
  12.5
  15.0
  20.0
  30.0
  50.0
  75.0

This informs the user to set the zoom to line up with one of the designated focal lengths shown on the lens barrel. The user then selects the appropriate focal length from the above menu using the arrow cursor keys, then hitting the [Enter] key. Another window will appear on the screen with the following message:

  Select Aperture setting from lens.

This informs the user to set the aperture on the lens to the desired position. Once again, the user should select an aperture setting that is the best balance for lost image due to saturation and noise. Flashing black areas of the image designate the saturated portion of the image. The user then selects the appropriate aperture setting from the above menu using the arrow cursor keys, then hitting the [Enter] key. Once the user does this, a window will temporarily appear on the screen with the following message:

  Processing . . .

The system is acquiring an image or multiple images (section 3.1.4) at the selected aperture setting, subtracting the zero level (section 3.1.6), and storing the information. The user is then returned to the sub-menu.
3.1.3 Refresh
The Refresh sub-menu activity displays on the monitor the image which has the current image status (sections 3.2, 3.2.1 and 3.2.2) at the bottom of the screen. The image on the video monitor can be modified by performing any one of a number of sub-menu activities (section 3.3). It can also be completely cleared from the screen (section 3.1.5). Therefore, this activity is helpful to return to an unmodified display of the image.
3.1.4 Number
The Number sub-menu activity allows the user to select the number of images to be averaged during the Long (section 3.1.1), Short (section 3.1.2), and Zeroing (section 3.1.6) sub-menu activities. The purpose of averaging is to reduce the error associated with the electronic noise of the system (section 3.1). It is also important to mention that the time necessary to perform the averaging process increases with number. Under circumstances where a high order of accuracy is necessary, the user is recommended to use a high number. Acquiring 32 images takes approximately five minutes. For preliminary applications the user may find one image to be sufficient; this takes approximately twenty seconds to complete.
Upon selection of the Number activity the user is branched from the sub-menu to a window containing the following message:

  Select number of images to be averaged.

The user should select the desired number of images needed for his application using the arrow cursor keys and then hit the [Enter] key.
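The rationale behind the Number activity can be illustrated numerically. The patent gives only the timing figures (one frame in about twenty seconds, 32 frames in about five minutes); the noise magnitude and simulated camera below are invented for illustration. For independent zero-mean electronic noise, averaging n frames reduces the residual noise by roughly a factor of the square root of n.

```python
import numpy as np

def average_frames(capture_once, n):
    """Average n captured frames; independent zero-mean noise shrinks
    by roughly 1/sqrt(n) in the averaged image."""
    return sum(capture_once() for _ in range(n)) / n

rng = np.random.default_rng(0)
TRUE_LEVEL = 100.0   # hypothetical noise-free pixel value
NOISE_SD = 4.0       # hypothetical electronic noise, in grey levels

def capture_once():
    # One simulated frame: the true level plus zero-mean camera noise.
    return TRUE_LEVEL + rng.normal(0.0, NOISE_SD, size=10_000)

single = capture_once()
averaged = average_frames(capture_once, 32)
# Residual noise: about 4.0 grey levels in one frame, but only about
# 4.0 / sqrt(32) ~ 0.7 grey levels in the 32-frame average.
```

This is why the manual recommends a high number when a high order of accuracy is necessary, at the cost of proportionally longer acquisition time.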