University of Kentucky
UKnowledge
Computer Science Faculty Patents — Computer Science
11-7-2006

Dynamic Shadow Removal from Front Projection Displays
Christopher O. Jaynes, University of Kentucky

Follow this and additional works at: http://uknowledge.uky.edu/cs_patents
Part of the Computer Sciences Commons

Recommended Citation:
Jaynes, Christopher O., "Dynamic Shadow Removal from Front Projection Displays" (2006). Computer Science Faculty Patents. 3. http://uknowledge.uky.edu/cs_patents/3

This Patent is brought to you for free and open access by the Computer Science at UKnowledge. It has been accepted for inclusion in Computer Science Faculty Patents by an authorized administrator of UKnowledge. For more information, please contact [email protected].
US007133083B2

(12) United States Patent: Jaynes et al.
(10) Patent No.: US 7,133,083 B2
(45) Date of Patent: Nov. 7, 2006

(54) DYNAMIC SHADOW REMOVAL FROM FRONT PROJECTION DISPLAYS

(75) Inventors: Christopher O. Jaynes, Lexington, KY (US); Stephen B. Webb, Lexington, KY (US); Robert M. Steele, Lexington, KY (US)

(73) Assignee: University of Kentucky Research Foundation, Lexington, KY (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 542 days.

(21) Appl. No.: 10/315,377

(22) Filed: Dec. 9, 2002

(65) Prior Publication Data: US 2003/0128337 A1, Jul. 10, 2003

(60) Provisional application No. 60/339,020, filed on Dec. 7, 2001.

(51) Int. Cl.: H04N 3/22 (2006.01)

(52) U.S. Cl.: 348/745; 348/189; 353/94

(58) Field of Classification Search: 348/745, 189, 607, 383, 36, 121; 345/1.3, 157, 619, 626, 624, 625; 353/69, 70, 94. See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS
5,506,597 A     4/1996   Thompson et al.   345/85
5,871,266 A     2/1999   Negishi et al.    353/98
5,923,381 A *   7/1999   Demay et al.      348/592
6,115,022 A *   9/2000   Mayer et al.      345/418
6,310,650 B1 *  10/2001  Johnson et al.    348/383
6,361,438 B1 *  3/2002   Morihira          463/31
6,377,269 B1 *  4/2002   Kay et al.        345/589
6,377,277 B1 *  4/2002   Yamamoto          345/629
(Continued)

OTHER PUBLICATIONS
Han Chen, R. Sukthankar, G. Wallace, and Tat-Jen Cham, "Calibrating Scalable Multi-Projector Displays Using Camera Homography Trees." Included here by way of background, only; noting that a corresponding citation of this group was made to [1] Technical Report TR-639-01, Princeton University, dated Sep. 2001; this document was available on-line for viewing on Dec. 4, 2001 and printed on that date. This document was identified as Attachment D of prov. app.
(Continued)

Primary Examiner: Victor R. Kostak
(74) Attorney, Agent, or Firm: Macheledt Bales & Heidmiller LLP

(57) ABSTRACT

A technique and system for detecting radiometric variation/artifacts of a front-projected dynamic display region under observation by at least one camera. The display is comprised of one or more images projected from one or more of a plurality of projectors; the system is preferably calibrated by using a projective relationship. A predicted image of the display region by the camera is constructed using frame-buffer information from each projector contributing to the display, which has been geometrically transformed for the camera and its relative image intensity adjusted. A detectable difference between a predicted image and the display region under observation causes corrective adjustment of the image being projected from at least one projector. The corrective adjustment may be achieved by way of a pixel-wise approach (an alpha-mask is constructed from delta pixels/images), or a bounding region approach (a difference/bounding region is sized to include the area of the display affected by the radiometric variation). Also: a technique, or method, for detecting a radiometric variation of a display region under observation, as well as associated computer executable program code on a computer readable storage medium, therefor.

21 Claims, 6 Drawing Sheets
US 7,133,083 B2 - Page 2

U.S. PATENT DOCUMENTS
6,456,339 B1 *  9/2002   Surati et al.     348/745
6,469,710 B1 *  10/2002  Shum et al.       345/619
6,512,507 B1 *  1/2003   Furihata et al.   345/157
6,554,431 B1 *  4/2003   Binsted et al.    353/28
6,570,623 B1 *  5/2003   Li et al.         348/383
6,611,241 B1 *  8/2003   Firester et al.   345/1.3
6,733,138 B1 *  5/2004   Raskar            353/94
2002/0164074 A1    11/2002  Matsugu et al.  382/173
2003/0085867 A1 *  5/2003   Grabert         345/156

OTHER PUBLICATIONS
R. Sukthankar, Tat-Jen Cham, and G. Sukthankar, "Dynamic Shadow Elimination for Multi-Projector Displays," Vol. 2, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Kauai, Hawaii, Dec. 8-14, 2001. Included here by way of background, only; noting that the conference took place after the provisional app. filing date granted, although the document was available on-line for viewing on Dec. 4, 2001.
R. Sukthankar, Tat-Jen Cham, and G. Sukthankar, "Dynamic Shadow Elimination for Multi-Projector Displays," on-line Compaq Tech Report CRL Apr. 2001, document dated Jun. 2001. Included here by way of background, only; noting that this document was available on-line for viewing on Dec. 4, 2001 and printed on that date from: http://citeseer.nj.nec.com/update/201288 (listed on cover page).
R. Samanta, J. Zheng, T. Funkhouser, K. Li and J. Pal Singh, "Load Balancing for Multi-Projector Rendering Systems." Included here by way of background, only; noting that this document was available on-line for viewing on Dec. 4, 2001 and printed on that date from: http://citeseer.nj.nec.com/.
R. Samanta and T. Funkhouser, "Dynamic Algorithms for Sorting Primitives Among Screen-Space Tiles in a Parallel Rendering System." Included here by way of background, only; noting that this document was available on-line for viewing on Dec. 4, 2001 and printed on that date from: http://citeseer.nj.nec.com/.
C. Jaynes, S. Webb, and R. Matt Steele, "Dynamic Shadow Removal from Front Projection Displays." Abstract from 12th IEEE Visualization 2001 Conference, VIS 2001, San Diego, CA, Oct. 24-26, 2001. Included here by way of background, only, as it is authored by applicants; noting that the conference dates are within 1 year prior to the provisional app. filing date.

* cited by examiner
[Drawing Sheet 1 of 6: FIG. 1.]
[Drawing Sheet 2 of 6: FIG. 2, system schematic. Legible labels: geometric transformation of framebuffer information; colormetric transformation; Framebuffers 1-3 (within dashed box 30); Cameras 1 and 2 (21, 22); predicted view in camera of each framebuffer via homographies (C1HP1, C1HP2, C2HP1, C2HP2, C2HP3, and transforms C1TP1, C1TP2, C2TP3); Predicted Image 1 for Camera 1 and Predicted Image 2 for Camera 2 at time t (23, 24); Captured/observed Images 1 and 2 at time t; Image Processing (comparator) yielding Detected Difference (delta image or bounding region); map delta image or bounding region back to framebuffer using inverse homographies (29A, 29B); Display Surface(s) S.]
[Drawing Sheets 3 and 4 of 6: FIG. 3 (pictorial); FIG. 4 (plot of observed intensity vs. projected intensity, R, G, B); FIGS. 5A-5C, captioned "Comparison of predicted image with and without color correction to the observed image."; FIGS. 6A and 6B, captioned "6A: Difference image (pixels detected in the Δ+I image); and 6B: resulting alpha mask (after warping the difference image into the display projector's frame buffer) from a shadow event of person with silhouette shown."]
[Drawing Sheet 5 of 6: FIG. 8, flow diagram of method 80. Legible steps: provide a front-projected display by projecting at least one image from at least one of a plurality of projectors (72); identify a projective relationship between devices: a projector-to-camera homography for each projector and a projector-to-projector homography between projectors (74); display region under observation by at least one camera (76); using the projective relationship, geometrically transform framebuffer information from every projector contributing to the display, to a frame of each camera observing the display; adjust image intensity of (each) geometrically transformed framebuffer information (e.g., using earlier constructed color transfer function for selected color channels) and construct a predicted image of the display region for each observing camera (80); is there a detectable difference between a camera's predicted image and that camera's observed image? if NO, continue observing; if YES, compute the difference of the predicted image for a camera vs. its observed image, either computing set(s) of delta image pixels and constructing an alpha-mask therefrom, or defining a bounding region sized to encompass the radiometric variation; make corrective adjustment of the image projected from at least one projector with unobstructed view-path (86); has display ended? (89) if not, continue observing.]
[Drawing Sheet 6 of 6: FIG. 9 (reference 101) and FIG. 10.]
DYNAMIC SHADOW REMOVAL FROM FRONT PROJECTION DISPLAYS

This application claims priority to two pending U.S. provisional patent applications filed on behalf of the assignee hereof: Ser. No. 60/339,020 filed on Dec. 7, 2001, and to provisional patent application Ser. No. 60/430,575 filed Dec. 3, 2002 entitled "Monitoring and Correction of Geometric Distortion in Projected Displays."

The invention disclosed herein was made with United States government support awarded by the following agency: National Science Foundation, under contract number NSF-4-62699. Accordingly, the U.S. Government has certain rights in this invention.
BACKGROUND OF THE INVENTION

Field of the Invention

In general, the present invention relates to multi-projector systems subject to shadows and other radiometric artifacts in the display environment. More particularly, the instant invention is directed to a technique and system for detecting and minimizing or removing shadows/shadowing caused by an occlusion (such as a human figure or some object blocking one or more projected displays), reflectance variations across the display surface, inconsistencies between projectors, inter-reflections from the display surface(s) itself, changes in display color and intensity, and other radiometric inconsistencies, of a multi-projector front-projection display system. Radiometric correction takes place in the context of an adaptive, self-configuring display environment. At least one camera and a plurality of projectors are used in concert to determine relative device positions, derive a depth map for the display surface, and cooperatively render a blended image that is correct for the user's viewing location. For each rendering pass, a head-tracker may supply an estimate of the viewer's location. This information is used to compute what each projector should display in order to produce a complete, perspectively correct view. Here, a synchronization protocol capable of supporting dozens of PC (personal computer) rendering pipelines in a projection display uses standard multicast over UDP. Novel geometric warping (or mapping), based on a known display surface geometry, enables projection-based display in environments where flat surfaces are unavailable, and where multiple surfaces are used (for example, two walls, ceiling and floor of a room used to create an immersive environment). In order to remove shadows and other radiometric artifacts in the display environment, the regions of pixels that are radiometrically incorrect, i.e., the delta pixels, are first discovered or, in the case of a bounding region approach, a bounding region is identified/sized. The delta pixels making up the delta image are identified by comparing a predicted image with that observed by a camera(s), and then associated with corresponding projector pixels to determine how the currently rendered image should be modified. As such, according to the invention, a unique approach has been outlined by applicants as further detailed herein as well as in applicants' technical manuscript labeled ATTACHMENT [A]: calibration, prediction, and correction.

In front-projection systems, shadows and other radiometric variations to the display are easily created and, though transient, are extremely distracting. Shadows, regardless of position, provide a perceptual cue that removes the user from the visually immersive experience. While back-projection can be used to avoid shadows, it introduces other problems including space considerations, intensity and sharpness attenuation, and mechanical complexity. Constraining user movement to prevent shadows is not acceptable for interactive display environments that adaptively render a model based on the user's position. Requiring a user to move in order to avoid shadows forbids particular views of the model from being visualized. Here, according to the invention, using an automatically-derived relative position of cameras and projectors in the display environment and a straightforward novel color correction scheme, the system renders an expected/predicted image for each camera location. Cameras observe the displayed image, which is compared with the expected/predicted image to detect regions of difference (e.g., shadows or other radiometric variations). These regions are transformed back to associated projector frames, where corresponding pixel values are adjusted. In display regions where more than one projector contributes to the image, shadow regions may be eliminated, all as further detailed herein.
the camera(s) and projector(s) are set up for operation, the
one cameras and a plurality of proj ectors are used in concert
to determine relative device positions, derive a depth map
for the display surface, and cooperatively render a blended
image that is correct for the user’s vieWing location. For
each rendering pass, a head-tracker may supply an estimate
of the vieWer’s location. This information is used to compute
What each projector should display in order to produce a
complete, perspectively correct vieW. Here, a synchroniZa
tion protocol capable of supporting doZens of PC (personal
computer) rendering pipelines in a projection display using
35
system is preferably calibrated by ?nding homography rela
40
tionships offering mappings from device to device. A pre
dicted image of the display region by the camera is con
structed using framebulfer information from each of the
projectors contributing to the display. This framebulfer
information is geometrically transformed for the camera and
standard multicast over UDP. Novel geometric Warping (or
its relative image intensity (color) is adjusted to provide
mapping), based on a knoWn display surface geometry
useful comparison information With that actually observed
enables projection-based display in environments Where ?at
45
by the camera. A detectable difference betWeen a predicted
50
image and the display region under observation at a time, t,
causes a corrective adjustment of the image being projected
from at least one of the projectors, preferably the projector
having an unobstructed vieW-path to the display surface. The
corrective adjustment may be achieved by Way of a pixel
surfaces are unavailable, and Where multiple surfaces are
used (for example, tWo Walls, ceiling and ?oor of a room
used to create an immersive environment). In order to
remove shadoWs and other radiometric artifacts in the dis
play environment, the regions of pixels that are radiometri
cally incorrect, i.e., the delta pixels, are ?rst discovered or in
the case of a bounding region approach a bounding region is
Wise approach (an alpha-mask is constructed from delta
images), or a bounding region approach (difference/bound
identi?ed/sized. The delta pixels making up the delta image,
are identi?ed by comparing a predicted image With that
by the radiometric variation).
observed by a camera(s), and then associated With corre
ing region is siZed to include the area of the display affected
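To make the predict/compare/correct cycle just summarized concrete, a minimal per-frame sketch in Python follows. It is illustrative only: the camera and projector helper methods (warp_framebuffer_to, color_transfer, capture, has_unobstructed_path, apply_correction) are hypothetical stand-ins for the geometric transformation, colormetric adjustment, observation, and correction steps described in this disclosure, not an API taken from it.

```python
import numpy as np

def correction_cycle(camera, projectors, threshold=10.0):
    """One predict/compare/correct pass for a single camera (sketch)."""
    # Predicted view: warp each contributing framebuffer into the camera
    # frame and color-correct it with the per-channel transfer functions.
    predicted = np.zeros(camera.frame_shape, dtype=np.float32)
    for p in projectors:
        warped = p.warp_framebuffer_to(camera)   # geometric transform (homography)
        predicted += p.color_transfer(warped)    # colormetric adjustment
    predicted /= max(len(projectors), 1)

    observed = camera.capture().astype(np.float32)
    delta = observed - predicted
    if np.abs(delta).max() < threshold:
        return                                   # no detectable difference

    # Corrective adjustment from a projector with an unobstructed view-path:
    # pixel-wise alpha mask, or a bounding region sized to the variation.
    for p in projectors:
        if p.has_unobstructed_path():
            p.apply_correction(delta)
```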
As one will appreciate, additional features included and disclosed herein provide advantages of display clarity, additional functionalities, speed, efficiency, and overall system cost reduction, allowing for more-accurate, reliable display information to be communicated to a viewer. Specific advantages of providing the new filter and associated method include any expressly identified herein as well as the following, without limitation:

(a) Ease of operability: The invention provides an ability to automatically detect and make corrective adjustments to displays without operator intervention, allowing the party presenting the display full opportunity to focus on the presentation material, or in the case of fully-automated
presentation/entertainment, fewer (if any) display distractions will need maintenance operator intervention.

(b) Flexibility of design and use: The technique of the invention can be tailored for use to detect and address a wide variety of radiometric variances/artifacts that may affect a display in a wide variety of front-projection display applications.

(c) Manufacturability: The unique technique and system of the invention can be tailored to current, as well as under-development or yet-to-be-developed, multi-projector camera projection systems, providing a cost-effective means by which systems can be upgraded, or sold initially as a complete package.

Briefly described, once again, the invention includes a system for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location (see summary description above). Also characterized is a technique, or method, for detecting a radiometric variation of a front-projected display region under observation by at least one camera, as well as associated computer executable program code on a computer readable storage medium, therefor.

As one will appreciate, there are many further distinguishing features of the system and technique, and associated code, of the invention. The detectable difference may comprise a first and second set of delta image pixels which are, thereafter, mapped to a framebuffer of the projector for the corrective adjustment (preferably, that projector has an unobstructed projection-path to the radiometric variation of the display). In this case, the corrective adjustment includes blending an alpha-mask constructed from the sets of delta image pixels. The geometrically transformed framebuffer information comprises pixel information for the image projected by each respective projector, that has been transformed into a frame of the camera using, preferably, a projective relationship determined by identifying a projector-to-camera homography for each projector to each camera, and a projector-to-projector homography(ies) for projectors. Once homographies have been identified, the mapping back to the framebuffer of the projector (for corrective adjustment) comprises using an inverse of the projector-to-camera homography. See FIG. 2 for reference.
The adjustment of the image intensity preferably includes constructing a color transfer function, fc(x), for at least one color channel, c, to provide a mapping of the intensity of the pixel information for the image projected by a respective projector, into the frame of the camera. As explained further, the color transfer function may be of the form:

fc(x) = a / (1 + e^(-α(x-b))) + k

where fc(x) represents an intensity value in the camera frame for a pixel projected from the respective projector at channel, c, and having an intensity value of x; and where a, α, b and k represent parameters obtained during a camera-projector pre-calibration phase.

In another aspect of the invention, the focus is on making the corrective adjustment using bounding regions identified to encompass the radiometric variation detected. Here, preferably each projector (there may be more than one) employed in connection with making the corrective adjustment has an unobstructed projection-path to a bounding region sized to encompass the radiometric variation of the display. The corrective adjustment, then, will include projecting (from the projector(s) with an unobstructed view-path) image information for that portion of the display within the bounding region. Here, the image information is projected for a time period after the difference is detectable; if and when the radiometric variation is removed or dissipates, preferably the bounding region is reduced in size until it is negligible in size (for example, the projector(s) employed to make the corrective adjustment is effectively no longer projecting any image contributing to the corrective adjustment).

Further unique to the invention is the opportunity to, while the display is active, continue observation with at least one camera (with an unobstructed view of the display region) for other additional radiometric variations while a corrective adjustment is being made (e.g., while a bounding region of selected size is being projected). As can also be appreciated in FIG. 8, several corrective adjustments (whether pixel-wise or as bounding region adjustments) are possible during a presentation of the display. For example, in connection with the bounding region approach, in the event a bounding region of a selected size is projected to address one radiometric variation (e.g., a shadow of an object/individual) and that variation is removed (the object moves, yet is still partially in front of one or more projectors, casting a smaller shadow on the display), a new bounding region is sized for use in instructing the projector(s) to address the 'new' radiometric variation. Then, should that object/individual step away, new successively smaller bounding regions are sized to provide a phase-out of the image being projected in the corrective adjustment.

It may be preferable to dedicate one or more projectors to make the corrective adjustment, for either approach: the pixel-wise or bounding region approach. For example, a third projector positioned such that it has an unobstructed view-path to the display region may be employed for making a bounding region corrective adjustment while two other projectors are producing the display under observation. Here, the corrective adjustment will include projecting, from the third projector, image information for that portion of the display within the bounding region for a time period after the difference is detectable. During that time period, each of the other projectors affected by the radiometric variation may be instructed to project no image within the bounding region.
BRIEF DESCRIPTION OF THE DRAWINGS AND ATTACHMENT A

For purposes of illustrating the innovative nature plus the flexibility of design and versatility of the preferred system and technique disclosed hereby, the invention will be better appreciated by reviewing the accompanying drawings (in which like numerals, if included, designate like parts) and applicants' ATTACHMENT A. One can appreciate the many features that distinguish the instant invention from known attempted techniques. The drawings and ATTACHMENT A have been included to communicate the features of the innovative platform structure and associated technique of the invention by way of example, only, and are in no way intended to unduly limit the disclosure hereof.

FIG. 1 schematically depicts a multi-projector system 10, having for example, projectors P1, P2, P3 (while all are projecting to contribute to the display region of surface S, they need not be) and cameras C1 and C2, according to the invention.

FIG. 2 is a system schematic depicting data/information flow in connection with the multi-projector system of FIG.
1 (framebuffers 1, 2, and 3 each associated with a respective projector, and cameras 1 and 2).

FIG. 3 is a pictorial representing a display region comprising two orthogonal display surfaces (corner of a room); the display is composed of multiple images from multiple projectors (not shown), with an object/individual/occlusion causing a radiometric variation (shadow of a human) of the display on both display surfaces.

FIG. 4 graphically depicts observed color intensity vs. projected intensity for three color channels (blue, green, and red, as labeled on the curves) for a single camera-projector pair in connection with the colormetric calibration/transformation phase (FIG. 2) of constructing the predicted image for that camera.

FIGS. 5A-5C are pictorials depicting, respectively: the predicted image without colormetric transformation/'correction' (FIG. 5A); an image captured/observed by camera while the display environment is in use (FIG. 5B); and the predicted image of FIG. 5A after three channel transfer functions have been applied according to the invention (FIG. 5C).

FIGS. 6A and 6B are pictorials depicting, respectively: a pixel-wise difference image (pixels detected in the Δ+I image) from a shadow event of a human with silhouette shown; and the resulting alpha mask after warping the difference image in FIG. 6A into the display projector's framebuffer.

FIGS. 7A-7C are pictorials depicting, respectively: a predicted image (FIG. 7A); a captured/observed image (FIG. 7B); and the resulting alpha mask after 10 iterations in the projector's framebuffer which will be employed in the corrective adjustment (FIG. 7C).

FIG. 8 is a flow diagram depicting details of a method 80 for detecting a radiometric variation of a display region under observation of at least one camera; illustrated are core, as well as further distinguishing, features of the invention for producing displays and images such as those represented and depicted in FIGS. 3, 5A-5C, 6A-6B, and 7A-7C, using features illustrated in FIGS. 1 and 2.

FIGS. 9 and 10 depict point and mapping references in connection with certain aspects of applicants' rigorous technical analysis.

ATTACHMENT A: Jaynes, Christopher O. and Stephen B. Webb, "Dynamic shadow removal from front projection displays," a technology disclosure manuscript numbered pgs. 2-18, confidentially submitted and which has remained confidential, included herewith and incorporated by reference herein to the extent it provides technical background information and support of the unique technique and system of the invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS DEPICTED IN THE DRAWINGS
In connection with discussing FIGS. 1 and 2, reference will be made to FIG. 8 (detailing features of a technique of the invention 80 in flow-diagram format) as well as other figures, so that one can better appreciate the features of the system and technique of the invention. FIG. 1 schematically depicts a multi-projector system 10, having for example, projectors P1, P2, P3 (while all are projecting to contribute to the display region of surface S, they need not be) and cameras C1 and C2, according to the invention. Correction of radiometric artifacts is performed for display regions that are illuminated by at least two projectors and observed by at least one camera, as illustrated. Preferably, at least one camera is able to observe the screen surface at all times for which the process of detecting is engaged. For example, the cameras in the display environment might be mounted overhead to minimize the chance of occlusion by the user. Preferably, projectors may be placed arbitrarily, without regard for the potential for occlusion, so as to maximize the usefulness of the display environment (surface area coverage or resolution).

Calibration of each device within the system engaged in producing the display is critical to detection and a resulting corrective adjustment. Initially, changes due to unexpected radiometric artifacts on the display surface are detected. Predicted imagery is constructed for a specific camera position and color transfer function and compared to captured images. Predicted images 23, 24 (FIG. 2) are constructed using the identified position of the camera with respect to each projector as well as a unique color (transfer function) calibration phase applied in a straightforward manner. The features of system 20 depicted in FIG. 2 are herein referenced in connection with a multi-projector system of the invention, such as that in FIG. 1. Given a camera (21 and 22) and projector pair, geometric calibration comprises the transformation from pixels in the camera plane (shown within the box defined at 21 and the box defined at 22) to their corresponding positions in the projectors' frame buffers (depicted within dashed box 30 are three framebuffers identified as 1-3). Given this transform, regions in shadow, observed in a camera, can then be correctly adjusted in the projected imagery. Once the homography between each projector and the camera has been recovered, a composition homography can be constructed to relate projector pixels to one another. Each projector projects a grid pattern that is parallel to the axes of its own framebuffer. Given the known calibration, a coherent grid can be drawn by all projectors in the respective reference frame of a single projector.
While a planar assumption is not a requirement, it is used by way of example in the analysis done here. Presuming that the camera devices observe a plane, the calibration problem becomes a matter of finding the collineation A such that:

pi = A pj    (Equation 1)

for all points pi in the camera and all pj in the projector. Because A is a planar projective transform (a collineation in P2), it can be determined, up to an unknown scale factor λ, by four pairs of matching points in general configuration. Iteratively projecting a random point from the projector onto the display surface and observing that point in the camera generates matching points. Each image point center pi is computed by fitting a 2D Gaussian to the observed greyscale response whose variance is related to expected image noise. The resulting center point of the function is then stored with its matching projector pixel pj. Given at least four random pairs, compute A up to an unknown scale factor λ. A may be computed using 10 matching pairs.
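The collineation can be recovered from four or more matchpoint pairs with the standard direct linear transform (DLT). The sketch below (Python/NumPy; an assumed implementation, not code from this disclosure) builds the linear system whose null space is A and resolves the free scale factor λ by normalizing the last element of A:

```python
import numpy as np

def homography_dlt(proj_pts, cam_pts):
    """Estimate A (up to scale) with p_i = A p_j from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(proj_pts, cam_pts):
        # Each correspondence contributes two rows of the DLT system.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    M = np.asarray(rows, dtype=np.float64)
    # A is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(M)
    A = vt[-1].reshape(3, 3)
    return A / A[2, 2]          # fix the unknown scale factor

def apply_homography(A, pt):
    """Map a projector pixel into the camera frame (homogeneous divide)."""
    q = A @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```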
Alternatively, and preferably, the subpixel location of each matchpoint center in the camera frame may be estimated by fitting a 2D Gaussian function governed by two parameters {mean and variance}, with the distortion parameters being the eight independent values of the distorting homography. Initially, a bounding box is fit/constructed around a detectable 'blob' of pixels in the projector framebuffer whose center and size provide the initial estimate of the unknown homography matrix. For this bounding box, say that its top is at py+sigma, its bottom is at py-sigma, its left edge is at px-sigma, and its right edge is at px+sigma. Note that the projector bounding box has four corners, as
does a bounding box calculated for the blob mapped to the camera. One can then list four correspondences, matchpoints, consisting of: [(upper-left corner of projector's bounding box), (upper-left corner of camera's bounding box)]; [(upper-right corner of projector's bounding box), (upper-right corner of camera's bounding box)]; and so on. These four correspondences can be used to compute a homography matrix, call it H for temporary reference, here. Next, take what the projector projected, and warp it through H to build a predicted view of what the camera should have seen. All ten parameters are then optimized so as to minimize the sum of the squared distances between the observed blob pixels and the distorted Gaussian predicted by the unknown parameters. This technique has provided very good subpixel estimates, with simulated data, accurate to within ~0.25 pixels. The resulting subpixel camera coordinates are then stored with the matching projector pixel pj.
In addition to the planar mapping between each device (camera-to-projector and projector-to-projector) in the display system, the viewer position is computed with respect to the display surface geometry. Under the assumption that the display is piecewise planar, one can compute a viewer's position with respect to the surface as the correct planar homography between the user's frame and the camera. This mapping is computed at each frame in order for the camera to correctly predict the observed image based on the image currently being viewed by the user and distortion introduced by the display surface. One can construct a homography to reflect the current {frame-wise} mapping between the viewer's rendered view and that of camera i. See FIG. 9 for reference.

The accuracy of the recovered A can be measured as a pixel projection error on the projector's frame buffer for a number of matching points. Specifically, one can make calibration error estimates by illuminating the scene with a known projector pixel p, observing its corresponding position in the camera, and then computing a (sub)pixel difference (Equation 2).
After geometric calibration, the mapping between color values for each camera-projector pair is estimated using a color transfer function between camera and projectors, as the relationship between a projected color and the image captured by a camera is a complex multi-dimensional function including the projector's gamma curve, the camera's gain function, projection surface properties, and wavelength. For simplicity, the three color channels (Red, Green, Blue) are presumed independent, and calibrated separately by approximating this complex function for each color channel. A given camera C observes the display surface, while uniform color images of increasing intensity are iteratively projected from projector P. For each projected color image, the mean color intensity is computed over the corresponding observed image. This can be computed for each color channel, holding the other two color values constant at zero. The mean value over ten trials was computed for each color channel, by way of example. The resulting color transfer functions provide a way of predicting how a color in projector space will appear in the camera image. By using this information, predicted color images can be more accurately compared with the observed images to detect areas of difference.

Results of the colormetric pixel intensity adjustment (box 80, FIG. 8) can be seen by way of example in the pictorials depicting, respectively: the predicted image without colormetric transformation/'correction' (FIG. 5A); an image captured/observed by camera while the display environment is in use (FIG. 5B); and the predicted image of FIG. 5A after three channel transfer functions have been applied according to the invention (FIG. 5C).

FIG. 4 graphically depicts observed color intensity vs. projected intensity for three color channels (blue, green, and red, as labeled on the curves) for a single camera-projector pair in connection with the colormetric calibration/transformation phase (FIG. 2) of constructing the predicted image for that camera. The three transfer functions depicted in FIG. 4 were measured by the color calibration process for a single camera-projector pair (where values of particular known intensities are projected by the projector and observed by the camera). The transfer function, fc(x), Eqn. 3, computes the expected value of channel c in the camera image for a projected value of x:

fc(x) = a / (1 + e^(-α(x-b))) + k    (Equation 3)
Typically, in data projector systems for human viewing, three color channels, {R, G, B}, corresponding to Red, Green, and Blue color components, respectively, are used. Thus, if the projector projects a pixel with a red-channel value of x, the camera sees that pixel as having a red-channel value of fr(x). Preferably, a separate color transfer function is computed for each channel independently; thus, in addition to the red-channel value of fr(x), for the green and blue channels, respectively, a value for fg(x) and fb(x) is also computed. The parameters a, α, b and k used in the various channels {r, g, b} for Eqn. 3 are independent and may be different for each function. These four parameters are preferably discovered/estimated by way of a calibration phase where values of particular known intensities are projected by the projector and observed by the camera. For instance, the projector is instructed to project an image which is completely filled with pixels having an RGB value of (100, 0, 0). Once this image has been observed in the camera, the corresponding red channel value (in the camera) can be obtained. The process, to project an image that is observed by the camera, is repeated for several different intensities in the red channel, spanning the dynamic range of the projector under calibration (i.e., near zero, some intermediate values, and close to 255). The process is then carried out for the blue and green channels. For example, one might project four different color values for each color channel, and observe them in the camera. As an approximation, it is presumed that the color channels are independent: that the red channel value in the camera only depends on the red channel value of the projected image; likewise, any combination of blue and green will give a zero response in the red channel of the camera. Below is an example set of values observed during calibration, whereby all three-tuples are R, G, B color values:

Project (0, 0, 0) from the projector, observe (15, 20, 31)
Project (100, 100, 100) from the projector, observe (76, 82, 103)
Project (200, 200, 200) from the projector, observe (113, 128, 154)
Project (255, 255, 255) from the projector, observe (125, 151, 161)

Using the values from the calibration one can compute values for the parameters a, α, b and k for each of the color transfer functions; for example, in connection with the R transfer function [fr(x)], the values of interest fall within the following ranges: 0→15; 100→76; 200→113; 255→125; and so on. A single transfer function can adequately model the spectral relationship between the two devices regardless of spatial position. In addition, computing global transfer functions avoids coupling the measurement of the transfer function with spatial errors produced in an earlier geometric calibration phase.
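As a worked sketch of this parameter estimation (assumed tooling, SciPy, not named in this disclosure), the four red-channel samples above can be fit to the Equation 3 sigmoid with a standard least-squares routine; with four samples and four parameters the fit interpolates the data exactly, and more samples would over-determine it:

```python
import numpy as np
from scipy.optimize import curve_fit

def transfer(x, a, alpha, b, k):
    # Equation 3: f_c(x) = a / (1 + e^(-alpha * (x - b))) + k
    return a / (1.0 + np.exp(-alpha * (x - b))) + k

# Red-channel calibration samples from the example: 0->15, 100->76,
# 200->113, 255->125.
x = np.array([0.0, 100.0, 200.0, 255.0])
y = np.array([15.0, 76.0, 113.0, 125.0])

# Initial guesses: amplitude ~ dynamic range, shallow slope, midpoint b,
# offset k near the dark response.
p0 = (110.0, 0.02, 80.0, 15.0)
(a, alpha, b, k), _ = curve_fit(transfer, x, y, p0=p0, maxfev=10000)
```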
Once the display has been calibrated, a corrective adjustment automatically takes place. A filtering process produces a set of delta pixels that must be corrected in the next frame. Given a set of delta pixels, the system then determines corresponding projector pixels that are attenuated or intensified to bring the intensities of the observed image into agreement with the predicted imagery. FIGS. 6A and 6B are pictorials depicting, respectively: a pixel-wise difference image (pixels detected in the Δ+I image) from a shadow event of a human with silhouette shown; and the resulting alpha mask after warping the difference image in FIG. 6A into the display projector's framebuffer.

A preferred alternative to this approach uses a region-based technique. Rather than performing a pixelwise computation of a new region for which pixels should be either illuminated or darkened upon detection of the shadowed regions (represented in the delta image), a bounding region can be fit/sized to the detected shadowed regions. This bounding region can then be unwarped (steps 29A, 29B in FIG. 2) to the appropriate projector framebuffers to illuminate or darken. In a complex multi-projector environment this more-compressed representation can be efficiently transmitted to each rendering device in the display. The pixelwise radiometric correction approach presented here has certain advantages: in contrast with region-based shadow removal, in which rectangular regions are either completely on (alpha = 1) or off (alpha = 0), the pixel-wise approach operates on individual pixels and accommodates all intensity values by incrementally adjusting the alpha channel values.
Preferably, the camera is able to observe the display surface without a significant chance of occlusion by the user/viewer. For example, the camera may be mounted overhead and oriented down onto a display wall. Ideally, see FIG. 2, each camera observes as large a screen area as possible. An important aspect of the novel approach taken according to the invention is that radiometric changes are detected directly in camera space. This removes the need for an explicit model of the occluding object. In addition, image-based change detection removes the requirement for full Euclidean calibration of the camera, and the position of the occluding object does not need to be known. A predicted image is made available to each camera so that it can be compared to the currently observed camera view. In situations where the image is fixed (exploration of a high-resolution still, for example), the predicted image for each display camera can be pre-computed prior to running the shadow removal phase. Predicted imagery may also be updated according to known mouse and keyboard input for scenarios where the system does not correct for underlying surface distortions. This approach is useful if the projected imagery is a simple desktop environment, and is readily extended to interactive display environments.

In a dynamic display the imagery may change in an unpredictable way (user movement, simulations, video data, etc.). The predicted imagery must account for the changing display. This is accomplished by warping the rendered projector framebuffer to camera coordinates (through the earlier-recovered homography). Given a desired image I, each projector in the display environment computes an appropriate framebuffer for projection in order to correctly contribute to the overall image I. This can be accomplished in a manner that accounts for display surface geometry, relative projector positions, and the user's headtracked viewpoint in the display. Given the recovered collineation between the camera c and a projector p, A_pc, a predicted image is recovered by warping all projector pixels into the camera frame. For a single projector, the predicted image is given by:

Î = A_pc I    (Equation 4)

Because the predicted image is the basis for subsequent modification of projector frame buffer pixels in the display environment, it is important that the image is generally accurate. For example, one can super-sample the predicted image by computing the position of each pixel corner on the projector plane to recover an interpolated subpixel estimate of the pixel value at its center. That is, for a pixel a = [i j 1]^T on the camera plane, compute

b = [A_pc]^(-1) a ± δa    (Equation 5)

where δa is the effective size of half a pixel in the camera. Each b is a vertex of a quadrilateral on the image plane of the projector. The correct pixel value I(i,j) for the predicted image Î is estimated as a weighted average of the pixels contained within the quadrilateral, weighted by the percentage of each pixel that is contained in the back-projected region:

Î(i,j) = (1/N) Σ (k=0 to N-1) pk ω(pk)    (Equation 6)

where pk is a projector pixel contained in the quadrilateral and ω(pk) is a weighting factor equal to the total percentage of pixel pk contained in the quadrilateral. Finally, in the case where more than one projector contributes to a single camera pixel, the mean of all corresponding projector regions, computed using Equation 6, is stored in the predicted image.
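In practice the Equation 4 warp is a standard perspective resampling. A possible implementation (assuming OpenCV, which is not named in this disclosure) is sketched below; the built-in bilinear interpolation is a cheaper stand-in for the quadrilateral-averaged supersampling of Equations 5-6:

```python
import cv2
import numpy as np

def predicted_view(framebuffer: np.ndarray, A_pc: np.ndarray,
                   cam_size: tuple) -> np.ndarray:
    """Warp one projector framebuffer into the camera frame (Equation 4).

    framebuffer: projector image I; A_pc: 3x3 projector-to-camera
    homography; cam_size: (width, height) of the camera frame.
    """
    return cv2.warpPerspective(framebuffer, A_pc, cam_size,
                               flags=cv2.INTER_LINEAR)

# For multiple contributing projectors, average the single-projector
# predictions for camera pixels covered by more than one projector
# (cf. the Equation 6 note above).
```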
Note that the predicted image differs from the image actually captured by the camera due to sensor nonlinearities and properties of the projection surface. The transfer functions discovered in the color calibration phase (discussed earlier, see also Equation 3) are applied to the predicted image Î to recover a color corrected, predicted image, Ĩ, that can then be compared directly to the captured imagery. Each color component in the predicted image is adjusted according to:

Ĩ(x,y,c) = fc(Î(x,y,c)), c = {R, G, B}    (Equation 7)

Color corrected predicted images are compared to captured imagery by a subtraction of color components to derive two delta images, each representing pixels that are either too dark and should be intensified (the Δ+I image) or too bright and should be attenuated (the Δ-I image).
Each delta image, ΔI, is then filtered with a 3x3 median filter to remove spurious pixels that may emerge due to sensor noise. The size of the filter is directly related to the expected calibration error and expected image noise. Detected difference(s) in predicted images vs. observed images are 'unwarped' from the camera frame back into each contributing projector's frame to determine the appropriate projector pixels for adjustment (see FIG. 2 at 29A, 29B).
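A compact sketch of this comparison step follows (Python with NumPy/SciPy assumed; illustrative, not the disclosure's own code). It derives the two delta images from the color-corrected prediction and the captured frame, then median-filters each:

```python
import numpy as np
from scipy.ndimage import median_filter

def delta_images(predicted, observed):
    """Compute the two delta images from the Equation 7 comparison step.

    Returns (delta_plus, delta_minus): per-pixel magnitudes for regions
    that are too dark (to intensify) and too bright (to attenuate).
    Inputs are same-shape single-channel arrays; apply per channel for
    color imagery.
    """
    diff = predicted.astype(np.float32) - observed.astype(np.float32)
    delta_plus = np.where(diff > 0, diff, 0.0)    # observed darker than predicted
    delta_minus = np.where(diff < 0, -diff, 0.0)  # observed brighter than predicted
    # A 3x3 median filter suppresses isolated sensor-noise pixels.
    return median_filter(delta_plus, size=3), median_filter(delta_minus, size=3)
```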
Delta images are directly related to the difference of the observed and predicted imagery for each camera in the display system. Therefore they are computed in the coordinate frame of the camera and must be projected to the reference frame of each projector for correction. This is accomplished using the earlier recovered homography between the camera and projector (used in the geometric calibration phase). In practice, camera devices and projectors are loosely coupled in the display environment through a communication mechanism such as TCP/IP, and therefore recovered delta images must be transmitted as efficiently as possible to projectors for rendering. FIG. 10 depicts how each vertex of a camera pixel (depicted at 101) is 'back-projected' (notation A^(-1) in FIG. 10) to a projector's framebuffer (depicted at 102), whereby pixels overlapped by the resulting quadrilateral (b1, b2, b3, b4) contribute to a weighted value for the image pixel using the relationship guided by Equation 4-3 of ATTACHMENT A. Targeting transmission efficiency, a single delta image may be constructed from the two images identified, using a representation scheme that encodes the sign of a pixel value in the high-order bit of a single byte. This single image represents the alpha values that should be added to and subtracted from the current alpha mask to correct for observed differences. Due to the typical structure of a difference image, a resulting encoded image can typically be reduced from approximately 500 k bytes to less than 3 k bytes. The encoded mask is multicast to all rendering client projectors that have an overlapping field of view with the camera that has detected the radiometric variation. Each rendering client that receives a delta image, ΔI, first decodes the image and then warps the resulting alpha mask, based on the relative position of the projector to the camera, to recover the delta image in the projector coordinate frame, ΔI^p. Because the homography between the projector and camera has been recovered in the calibration phase, this warping is implemented as:

ΔI^p = [A_pc]^(-1) ΔI    (Equation 8)
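One plausible reading of that sign-in-the-high-bit scheme is sketched below (an assumption; the disclosure does not spell out the exact packing): store a 7-bit magnitude per pixel and use bit 7 to mark values belonging to the subtractive (Δ-I) image. A further, unspecified compression stage exploiting the mostly-zero structure of a typical difference image would account for the roughly 500 kB to under 3 kB reduction cited above.

```python
import numpy as np

def encode_delta(delta_plus, delta_minus):
    """Pack both delta images into one byte per pixel (sign in bit 7)."""
    mag_p = np.clip(delta_plus, 0, 127).astype(np.uint8)
    mag_m = np.clip(delta_minus, 0, 127).astype(np.uint8)
    use_minus = mag_m > mag_p            # dominant direction per pixel
    return np.where(use_minus, mag_m | 0x80, mag_p).astype(np.uint8)

def decode_delta(encoded):
    """Recover (delta_plus, delta_minus) from the packed byte image."""
    neg = (encoded & 0x80) != 0
    mag = (encoded & 0x7F).astype(np.float32)
    return np.where(neg, 0.0, mag), np.where(neg, mag, 0.0)
```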
Once a delta image has been aligned to a projector, an appropriate alpha mask is computed as follows:

αp(i,j) = ρ[Δ+I^p(i,j) - Δ-I^p(i,j)]    (Equation 9)

where ρ is the maximum allowed intensity change between any two frames and is used to avoid rapid fluctuations on the display surface. By way of the example presented below, a ρ value of 25 may be used. Although it may seem important to change the projected imagery as quickly as possible, sensor noise may lead to over-correction for shadowed regions. Potentially, this can result in the system iteratively over- and under-correcting the shadowed region. Due to minor calibration error, this feedback may propagate the region of fluctuation on the display. The ρ parameter acts as a dampener to avoid this situation.
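The per-frame mask update can then be damped as sketched below (illustrative; reading ρ as a clamp on the per-frame step, which is one way to realize "maximum allowed intensity change between any two frames"):

```python
import numpy as np

def update_alpha(alpha, delta_plus_p, delta_minus_p, rho=25.0):
    """Damped alpha-mask update in projector coordinates (cf. Equation 9)."""
    step = delta_plus_p - delta_minus_p       # signed correction request
    step = np.clip(step, -rho, rho)           # rho limits per-frame change
    return np.clip(alpha + step, 0.0, 255.0)  # keep alpha in display range
```

Without the clamp, sensor noise and small calibration errors can drive the over-/under-correction oscillation described above; ρ trades convergence speed for stability.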
Once again, FIG. 6A is an image of a pixel-wise difference image (pixels detected in the Δ+I image) from a shadow event of a human with silhouette shown; and FIG. 6B is the resulting alpha mask after warping the difference image in FIG. 6A into the display projector's framebuffer. The image in FIG. 7A is of a predicted image; FIG. 7B is of a captured/observed image; and FIG. 7C is an image of the resulting alpha mask after 10 iterations in the projector's framebuffer which will be employed in the corrective adjustment.

By way of example only, the radiometric correction algorithm has been integrated with a front-projection research display environment. The display is composed of four high-resolution projectors, two infrared head-tracking units and two digital video cameras. Each projector is connected to a single PC that contains a commodity, OpenGL-compliant graphics accelerator and a standard network interface. The cameras are connected to a single PC that contains two frame-grabbers and a network interface. The PC rendering clients are connected via a 100 Mb network hub. The display ran on the LINUX operating system. The example system was calibrated using the techniques described herein, then used to demonstrate the shadow removal technique. In order to estimate the accuracy of calibration, the mean error contained in the recovered homographies between each device and all other devices was estimated. This was accomplished by selecting 10 matching pairs for all devices with a view frustum overlap. Using these matching pairs, the mean pixel error for a particular device was computed by projecting known matchpoints from other devices to the image plane and measuring a pixel disparity. Although we do not calibrate to a Euclidean coordinate system, metric errors on the screen can be estimated by back-projecting a line of known pixel length for each device and measuring the pixel sample distance on the screen for each device. The mean pixel error can then be multiplied by this scale to arrive at an estimate for calibration error in millimeters. The table below reports these errors for all rendering clients in the display environment:

Rendering Client    Mean Pixel Error    Mean Screen Error (mm)
Projector 1         0.583               1.23
Projector 2         0.603               1.61
Projector 3         0.616               1.64
Projector 4         0.664               1.72
Camera A            0.782               1.02
Camera B            0.793               1.18

EXAMPLE 1

Summary of Process

A. Projector Calibration
Flat display: homography from display surface, S, to each projector, P (collineation).
Variable-surface display: recover surface shape (structured light); compute projection matrix.

B. Camera-Projector Pre-Calibration
Compute mapping from camera to each projector: compute the relative or absolute position of each projector and each camera in the system (may use a relative projective relationship between devices, or an absolute projective relationship between each device and a 'world' coordinate system);
Construct a homography A (transform/mapping) of points between devices, A_ij: p_i = A_ij p_j, where A_ij is a 3x3 matrix which can be recovered from 4 (or more) corresponding pairs.
For projector i and a camera j, iteratively project 50 points; discover the corresponding pixel in the camera.
Compute A_ij from 4 randomly selected match pairs.
Estimate the re-projection error for each random subset using the k-4 remaining correspondences; select the A_ij with the lowest error over many trials (preferably, where matchpoint pairs are given by (a, b): a represents a pixel in the camera image and b represents a known corresponding pixel in the projector framebuffer; A is constructed such that the difference between point a and its predicted location A×a is minimized).
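This best-of-many-random-subsets selection can be sketched as follows, reusing the hypothetical homography_dlt and apply_homography helpers from the earlier sketch (again an assumed implementation, not code from this disclosure):

```python
import numpy as np

def best_homography(proj_pts, cam_pts, trials=50, seed=0):
    """Keep the 4-pair estimate A_ij with the lowest re-projection error
    over the remaining k-4 correspondences."""
    rng = np.random.default_rng(seed)
    k = len(proj_pts)
    best_A, best_err = None, np.inf
    for _ in range(trials):
        idx = set(rng.choice(k, size=4, replace=False).tolist())
        A = homography_dlt([proj_pts[i] for i in idx],
                           [cam_pts[i] for i in idx])
        rest = [i for i in range(k) if i not in idx]
        err = np.mean([np.linalg.norm(apply_homography(A, proj_pts[i])
                                      - np.asarray(cam_pts[i]))
                       for i in rest])
        if err < best_err:
            best_A, best_err = A, err
    return best_A
```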
C. Obtaining the Predicted Camera(s) Image
Geometric construction/calibration/transformation:
Render next frame.
Warp frame using A_ij^(-1) to obtain the view of the framebuffer in the camera.
Colormetric construction/calibration/adjustment:
Compute values for the color transfer function (below); presume channel independence.
Apply the transfer function to each pixel in the image; in a pre-calibration stage, by projecting and observing/recording intensity with the camera, determine parameters for a transfer function of the form fc(x) = a / (1 + e^(-α(x-b))) + k.
Wait for synchronization signal.
Observe the display region with each respective camera, producing observed images for comparison.

D. Shadow Detection
Compare captured/observed images with predicted to detect transient artifacts (the projected image need not be fixed; it may be a series of still images/slides, video, etc.); distinguish between framebuffer changes (intended) and artifacts (unintended).
Image Processing:
Subtract to compute two new delta/difference images, Δ+I and Δ-I; Δ+I and Δ-I contain regions that are too dark and too bright, respectively.
In the pixelwise approach, using a median filter (3x3), filter each delta image to remove spurious pixels, producing a set of delta pixels (in the camera frame).
In the bounding region approach, identify/define a bounding region encompassing the area where a difference has been detected.

E. Computing α-Mask (for Pixelwise Approach)
Each delta image (or delta bounding region) is unwarped to the corresponding projector framebuffers [point-wise mapping between delta image or bounding region and associated framebuffer]:
pixelwise warp using calibration (per projector);
bounding region approach: (per region).
Alpha mask constructed from delta images;
blur mask with gaussian, g(x,y);
scale by single-frame gain, λ.

F. Channel Blending Using α-Mask (Pixelwise Approach)/Corrective Adjustment
If pixelwise approach, blending of the α-mask constructed with alpha channels for each active projector aids in making the corrective adjustment to the projected image (from an unobstructed projector).
If bounding region approach, make the corrective adjustment.
While certain representative embodiments and details have been shown for the purpose of illustrating the invention, those skilled in the art will readily appreciate that various modifications, whether specifically or expressly identified herein, may be made to these representative embodiments without departing from the novel teachings or scope of this technical disclosure. Accordingly, all such modifications are intended to be included within the scope of the claims. Although the commonly employed preamble phrase "comprising the steps of" may be used herein in a method claim, Applicants do not intend to invoke 35 U.S.C. §112 ¶6. Furthermore, in any claim that is filed herewith or hereafter, any means-plus-function clauses used, or later found to be present, are intended to cover at least all structure(s) described herein as performing the recited function and not only structural equivalents but also equivalent structures.

What is claimed is:
1. A system for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location, the system comprising:
the front-projected display comprising at least one image projected from each of a plurality of projectors; and
at least one processing unit for (a) constructing a predicted image of the display region by the camera using framebuffer information from each of said projectors that has been geometrically transformed for the camera using a projective relationship, and for which image intensity has been adjusted, and (b) comparing said predicted image with the display region under observation for the detecting.
2. The system of claim 1 wherein a detectable difference between said predicted image and the display region under observation causes a corrective adjustment of said image being projected from at least one of said plurality of projectors.
3. The system of claim 2 wherein said detectable difference comprises a first and second set of delta image pixels which are, thereafter, mapped to a framebuffer of said at least one projector for said corrective adjustment; said at least one projector has an unobstructed projection-path to the radiometric variation of the display; and said corrective adjustment comprises blending an alpha-mask constructed from said first and second sets of delta image pixels.
4. The system of claim 3 wherein said geometrically transformed information comprises pixel information for said image projected by a respective one of said projectors,
US 7,133,083 B2
15
16
information from another of said plurality of projectors
that has been transformed into a frame of the camera using
said projective relationship by identifying a projector-to
for a time period after said difference is detectable.
10. A method for detecting a radiometric variation of a
camera homography for each said proj ector and a proj ector
to-projector homography for said projectors; and said map
front-projected display region under observation by at least
ping to said framebuffer of said proj ector comprises using an
one camera at a ?rst location, the method comprising the
inverse of said projector-to-camera homography.
5. The system of claim 2 wherein said at least one projector has an unobstructed projection-path to a bounding region sized to encompass the radiometric variation of the display; said corrective adjustment comprises projecting, from said projector with said unobstructed projection-path, image information for that portion of the display within said bounding region, said image information being projected for a time period after said difference is detectable.

6. The system of claim 1 wherein said plurality comprises a first and second projector; and further comprising a third projector having an unobstructed projection-path to a bounding region sized to encompass the radiometric variation of the display; and wherein a detectable difference between said predicted image and the display region under observation causes a corrective adjustment, said corrective adjustment to comprise projecting, from said third projector, image information for that portion of the display within said bounding region, said image information being projected for a time period after said difference is detectable; and during said time period, each of said plurality of projectors affected by the radiometric variation projects no image within said bounding region.
7. The system of claim 1 wherein said geometrically transformed information comprises pixel information for said image projected by a respective one of said projectors, that has been transformed into a frame of the camera using said projective relationship; and said adjustment of said image intensity comprises constructing a color transfer function, f_c(x), for at least one color channel, c, to provide a mapping of said intensity of said pixel information for said image projected by said respective projector into said frame of the camera.

8. The system of claim 7 wherein said color transfer function is of the form:

f_c(x) = a / (1 + e^(-k(x - α))) + b

where: f_c(x) represents an intensity value in said frame of the camera for a pixel projected from said respective projector at said channel, c, and having an intensity value of x; and a, α, b and k represent parameters obtained during a camera-projector pre-calibration phase.
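By way of illustration only: the printed equation in claims 8 and 17 did not survive text extraction cleanly, so the logistic (sigmoid) form used above and below is an assumption, chosen to be consistent with the surviving parameter list (a, α, b and k) and with sigmoid transfer models commonly used in projector photometric calibration.

import numpy as np

def make_color_transfer(a, alpha, b, k):
    # Assumed sigmoid form: f_c(x) = a / (1 + exp(-k * (x - alpha))) + b,
    # with a, alpha, b, k fit during camera-projector pre-calibration.
    def f_c(x):
        x = np.asarray(x, dtype=np.float32)
        return a / (1.0 + np.exp(-k * (x - alpha))) + b
    return f_c

# Example with invented parameter values, one function per color channel:
f_red = make_color_transfer(a=240.0, alpha=128.0, b=8.0, k=0.035)
print(f_red(np.array([0, 128, 255])))  # predicted camera intensities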
9. A system for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location, the system comprising:
the display comprising at least one image projected from at least one of a plurality of projectors;
at least one processing unit for (a) constructing a predicted image of the display region by the camera using framebuffer information from said at least one projector that has been geometrically transformed for the camera, and for which image intensity has been adjusted, and (b) comparing said predicted image with the display region under observation for the detecting; and
wherein a detectable difference between said predicted image and the display region under observation causes a corrective adjustment comprising projecting image information from another of said plurality of projectors for a time period after said difference is detectable.

10. A method for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location, the method comprising the steps of:
providing the front-projected display by projecting at least one image from each of a plurality of projectors;
constructing a predicted image of the display region using framebuffer information from each said projector, said step of constructing to comprise geometrically transforming, using a projective relationship, each said framebuffer information to the camera, and adjusting image intensity of said geometrically transformed framebuffer information; and
comparing said predicted image with the display region under observation for the detecting.

11. The method of claim 10 wherein, upon performing said step of comparing, a difference detected between said predicted image and the display region under observation causes a corrective adjustment of said image being projected from at least one of said plurality of projectors.
12. The method of claim 11 further comprising, in the event said difference is detected, the step of mapping a first and second set of delta image pixels to a framebuffer of said at least one projector for said corrective adjustment; and wherein said at least one projector has an unobstructed projection-path to the radiometric variation of the display; and said corrective adjustment comprises blending an alpha-mask constructed from said first and second sets of delta image pixels with an alpha channel of said image projected from said at least one projector.

13. The method of claim 12 further comprising the steps of identifying a projector-to-camera homography for each said projector and a projector-to-projector homography for said projectors for use in said step of geometrically transforming; and wherein:
each said framebuffer information comprises pixel information for said image projected by a respective one of said projectors;
said step of geometrically transforming further comprises using said projective relationship comprising said homographies to transform said pixel information for said image projected by said respective projector, into a frame of the camera; and
said step of mapping said first and second set of delta image pixels to said framebuffer of said projector comprises using an inverse of said projector-to-camera homography.

14. The method of claim 11 wherein said at least one projector has an unobstructed projection-path to a bounding region sized to encompass the radiometric variation of the display; said corrective adjustment comprises projecting, from said projector with said unobstructed projection-path, image information for that portion of the display within said bounding region, said image information being projected for a time period after said difference is detectable; and during said time period, each of said plurality of projectors affected by the radiometric variation projects no image within said bounding region.
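By way of illustration only: the bounding-region corrective adjustment of claims 5, 6 and 14 can be sketched as follows. The margin value and the assumption that the region has already been mapped into each projector's framebuffer (via the calibration homographies) are simplifications of this sketch.

import numpy as np

def bounding_region(delta_mask, margin=8):
    # Smallest axis-aligned box (plus a margin) covering the delta pixels.
    ys, xs = np.nonzero(delta_mask)
    if len(xs) == 0:
        return None
    h, w = delta_mask.shape
    return (max(xs.min() - margin, 0), max(ys.min() - margin, 0),
            min(xs.max() + margin, w - 1), min(ys.max() + margin, h - 1))

def hand_off(region, occluded_fbs, clear_path_fb, source_image):
    # Projectors affected by the occlusion project no image within the
    # region, while the projector with an unobstructed projection-path
    # projects the display content there (claims 6 and 14).
    x0, y0, x1, y1 = region
    for fb in occluded_fbs:
        fb[y0:y1 + 1, x0:x1 + 1] = 0
    clear_path_fb[y0:y1 + 1, x0:x1 + 1] = source_image[y0:y1 + 1, x0:x1 + 1]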
15. The method of claim 10 wherein said plurality comprises a first and second projector; and further comprising a third projector having an unobstructed projection-path to a bounding region sized to encompass the radiometric variation of the display; and wherein, upon performing said step of comparing, a difference detected between said predicted image and the display region under observation causes a corrective adjustment, said corrective adjustment to comprise projecting, from said third projector, image information for that portion of the display within said bounding region, said image information being projected for a time period after said difference is detectable.
16. The method of claim 10 wherein:
each said framebuffer information comprises pixel information for said image projected by a respective one of said projectors;
said step of geometrically transforming further comprises using said projective relationship to transform said pixel information for said image projected by said respective projector, into a frame of the camera; and
said step of adjusting image intensity comprises constructing a color transfer function, f_c(x), for at least one color channel, c, to provide a mapping of said intensity of said pixel information for said image projected by said respective projector into said frame of the camera.
17. The method of claim 16 wherein said color transfer function is of the form:

f_c(x) = a / (1 + e^(-k(x - α))) + b

where: f_c(x) represents an intensity value in said frame of the camera for a pixel projected from said respective projector at said channel, c, and having an intensity value of x; and a, α, b and k represent parameters obtained during a camera-projector pre-calibration phase.
18. A method for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location, the method comprising the steps of:
providing the display by projecting at least one image from at least one of a plurality of projectors;
constructing a predicted image of the display region using framebuffer information from said at least one projector, said step of constructing to comprise geometrically transforming said framebuffer information to the camera, and adjusting image intensity of said geometrically transformed framebuffer information; and
upon comparing said predicted image with the display region under observation, a difference detected therebetween causes a corrective adjustment comprising projecting image information from another of said plurality of projectors for a time period after said difference is detectable.

19. A computer executable program code on a computer readable storage medium for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location, the program code comprising:
a first program sub-code for providing the front-projected display by projecting at least one image from each of a plurality of projectors;
a second program sub-code for constructing a predicted image of the display region using framebuffer information from each said projector, said second sub-code comprising instructions for geometrically transforming, using a projective relationship, each said framebuffer information to the camera, and instructions for adjusting image intensity of said geometrically transformed framebuffer information; and
a third program sub-code for comparing said predicted image with the display region under observation for the detecting.
20. The program code of claim 19 wherein: each said framebuffer information comprises pixel information for said image projected by a respective one of said projectors; and said instructions for geometrically transforming comprise instructions for using said projective relationship to transform said pixel information for said image projected by said respective projector, into a frame of the camera; and further comprising a fourth sub-code for causing a corrective adjustment of said image being projected from at least one of said plurality of projectors in the event a difference is detected between said predicted image and the display region under observation.

21. A computer executable program code on a computer readable storage medium for detecting a radiometric variation of a front-projected display region under observation by at least one camera at a first location, the program code comprising:
a first program sub-code for providing the display by projecting at least one image from at least one of a plurality of projectors;
a second program sub-code for constructing a predicted image of the display region using framebuffer information from said at least one projector, said second sub-code comprising instructions for geometrically transforming said framebuffer information to the camera, and instructions for adjusting image intensity of said geometrically transformed framebuffer information; and
a third program sub-code for comparing said predicted image with the display region under observation, whereupon a difference detected therebetween causes a corrective adjustment comprising projecting image information from another of said plurality of projectors for a time period during which said difference is detectable.