United States Patent
Margulis et al.

(10) Patent No.: US 6,340,994 B1
(45) Date of Patent: Jan. 22, 2002

(75) Inventors: Neal Margulis, Woodside, CA (US); Chad Fogg, Seattle, WA (US)

(73) Assignee: Pixonics, LLC, Palo Alto, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/277,100

(22) Filed: Mar. 26, 1999

Related U.S. Application Data

(60) Provisional application No. 60/096,322, filed on Aug. 12, 1998.

(51) Int. Cl.7: H04N 5/21; H04N 9/64

(52) U.S. Cl.: 348/625; 348/630

(58) Field of Search: 348/571, 607, 606, 620, 708, 718, 719, 720, 721, 674, 675; H04N 5/21, 5/14, 9/64

(56) References Cited

U.S. PATENT DOCUMENTS

4,843,380 A   6/1989   Oakley et al.
5,092,671 A   3/1992   Van Os
5,559,676 A   9/1996   Gessaman
5,579,027 A  11/1996   Sakurai et al.
5,633,956 A   5/1997   Burl
5,666,164 A   9/1997   Kondo et al.
5,668,569 A   9/1997   Greene et al.
5,696,848 A  12/1997   Patti et al.
5,754,260 A   5/1998   Ooi et al.

OTHER PUBLICATIONS

Foley, van Dam, Feiner, Hughes: Computer Graphics: Principles and Practice; 1990; pp. 155-165, 564-567, 822-834; Addison Wesley.
Patrick Candry: Projection Systems; Display Systems; 1997; pp. 237-256; John Wiley and Sons.
Charles McLaughlin, Dave Armitage: Prospects for Microdisplay Based Rear Projection; 1996.
Faroudja Labs: VP401 On Line Product Overview; Jul. 28.
Faroudja Labs: VS50 On Line Product Overview; Jul. 28.
Snell & Wilcox: Kudos NRS500 Online Product Guide.
Snell & Wilcox: Kudos NRS30 Online Product Guide.
Darim: M-Filter Product Overview; 1997; pp. 1-3.
D. W. Parker: The Dynamic Performance of CRT and LC Displays; Getting the Best from State-of-the-Art Display Systems; Feb. 21-23, 1995; London, UK; Society for Information Display.
Pixelworks: PW364 Image Processor Data Sheet; www.pixelworksinc.com/products/364datasheet.html.

Primary Examiner: Sherrie Hsia
(74) Attorney, Agent, or Firm: Wells, St. John, Roberts, Gregory & Matkin

(57) ABSTRACT

An image processing system including a display output processor using Temporal Gamma Processing (TGP) and Reverse Super-Resolution (RSR) techniques to process images. TGP assures that the time-related representation of an image is as accurate as possible, and thus, based on a previous frame value and a known transfer function of the display modulation system, adjusts its output values to provide a desired output value during display of a desired image frame. RSR performs a superset of the frame rate conversion process for converting between disparate input and output frame rates. RSR, improving display quality when intended display images have apparent resolution higher than can be supported by an image modulator, sequences lower resolution images at higher frame rates to simulate higher resolution outputs. RSR also spatially filters the lower resolution images into transitional images by shifting the pixel matrix of each image frame such that the weighted filter center of each sequenced frame is constant or such that motion artifacts are not generated. Additionally, RSR, to prevent motion artifacts, enables moving the viewed images at the display refresh screen rate and uses motion-adaptive filters and motion tracking information to track image objects.

32 Claims, 11 Drawing Sheets
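The reverse super-resolution sequencing summarized in the abstract (and echoed later in the 8x8-image-on-a-4x4-display example of FIG. 8) can be sketched as follows. This is a minimal illustration, not the patented method: the fixed 2x2 phase decomposition, the function names, and the box-filter average used to check the result are all assumptions.

```python
def rsr_subframes(frame):
    """Decompose a 2N x 2N frame into four N x N subframes, each sampled
    at a different sub-pixel phase (dy, dx). Sequencing the subframes at
    a higher frame rate is the core RSR idea: a lower-resolution
    modulator simulates a higher-resolution output."""
    n = len(frame) // 2
    phases = [(0, 0), (0, 1), (1, 0), (1, 1)]
    subframes = []
    for dy, dx in phases:
        sub = [[frame[2 * r + dy][2 * c + dx] for c in range(n)]
               for r in range(n)]
        subframes.append(sub)
    return subframes

# An 8x8 ramp image, matching the FIG. 8 example of an 8x8 source
# shown on a 4x4 modulator.
hi = [[r * 8 + c for c in range(8)] for r in range(8)]
subs = rsr_subframes(hi)

# The eye temporally integrates the sequenced subframes; the average of
# the four phases equals a 2x2 box filter of the original frame.
avg = [[sum(s[r][c] for s in subs) / 4 for c in range(4)] for r in range(4)]
```

Displaying the four phase-shifted subframes in rapid succession lets temporal integration in the eye approximate the box-filtered high-resolution frame, which is the effect the abstract describes as simulating higher resolution outputs.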
[Drawing sheets 1-11 (FIGS. 1-11) not reproduced.]
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application Ser. No. 60/096,322, filed on Aug. 12, 1998, and is related to co-pending U.S. patent application Ser. No. 09/250,424, entitled "System and Method for Using Bitstream Information to Process Images for Use in Digital Display Systems," filed on Feb. 16, 1999, which is hereby incorporated by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates generally to digital image display systems and more particularly to using temporal gamma and reverse super-resolution techniques to process images for optimizing appearance quality.

2. Discussion of Prior Art

Cathode Ray Tubes (CRTs), used in conventional televisions and computer monitors, are analog devices which scan an electron beam across a phosphor screen to produce an image. Digital image processing products that enhance display graphics and video on CRTs have been increasingly available because CRTs can work with many different input and output data formats. Further, CRTs can display moving images with high quality screen brightness and response. However, CRTs have considerable limitations in applications such as portable flat screen displays where size and power are important. Additionally, as direct view CRT display size increases, achieving high quality completely across the display becomes more difficult and expensive.

Many recent portable and desktop systems include digital displays using liquid crystal displays (LCDs), a term which generally describes flat-panel display technologies and in particular active matrix liquid crystal displays (AMLCDs), silicon reflective LCDs (si-RLCDs), ferroelectric displays (FLCs), field emission displays (FEDs), electroluminescent displays (ELDs), plasma displays (PDs), and digital mirror displays (DMDs).

Compared to traditional CRT displays, LCDs have the advantages of being smaller and lighter, consuming less power, and having discrete display elements, which can provide consistent images across the entire display. However, manufacturing LCDs requires special processing steps to achieve acceptable visual quality. Further, large screen direct view LCDs are expensive, and LCDs usually require a display memory.

Both CRT and LCD technologies can provide economical projection system large screen displays. CRT-based projection systems usually require three CRTs and three projection tubes, one for each of the Red (R), Green (G), and Blue (B) color components. Each tube must produce the full resolution display output at an acceptable brightness level, which makes the tubes expensive. Achieving proper tolerances for mechanical components in projection systems, including alignment hardware and lenses, is also expensive. Consequently, manufacturing CRT-based projection systems is costly. Since CRTs are analog, applying digital image processing techniques to CRT-based systems usually requires a frame buffer memory to effectively represent the digital image data.

Projection display systems also use transmissive or reflective LCD "microdisplay" technologies. Achieving the desired full color gamut in LCD-based parallel color projection systems, as in CRT-based projection systems, uses three separate LCD image modulators, one for each of the R, G, and B color components. A single LCD image modulator which produces R, G, and B either through spatial color filters or with sequential color fields at a sufficiently high rate can provide a low cost system.

FIG. 1 shows a prior art projection system 150 that includes a light system 100, mirrors 102, 104, 106, and 108, transmissive image modulators 110, 112, and 114, dichroic recombiners 116 and 118, and a projection lens 120. Light system 100 includes an illumination source such as a Xenon lamp and a reflector system (not shown) for focusing light. Mirrors 102, 104, 106, and 108, together with other components (not shown), constitute a separation subsystem that separates the light system 100 output white light beam into color components Red, Green, and Blue; separation can use X-cube dichroic prism pairs or polarizing beam splitters.

Each image modulator 110, 112, and 114 receives a corresponding separated R, G, or B color component and functions as an active, full resolution, monochrome light valve that, according to the desired output images, modulates light intensities for the respective R, G, or B color component. Each image modulator 110, 112, and 114 can include a buffer memory and associated digital processing unit (not shown). A projection system may use only one image modulator which is responsible for all three color components, but the three image modulator system 150 provides better chromaticity and is more efficient.

Dichroic recombiners 116 and 118 combine modulated R, G, and B color components to provide color images to projection lens 120, which focuses and projects images onto a screen (not shown).

FIG. 1 system 150 can use transmissive light valve technology which passes light on axis 1002 through an LCD shutter matrix (not shown). Alternatively, system 150 can use reflective light valve technology (referred to as reflective displays) which reflects light off of digital mirror display (DMD) image modulators 110, 112, and 114. Because each image modulator 110, 112, and 114 functions as an active, full resolution, monochrome light valve that modulates the corresponding color component, system 150 requires significant buffer memory and digital image processing capability.

Because of inherent differences in the physical responses of CRT and LCD materials, LCD-based projection and direct view display systems both have different flicker characteristics and exhibit different motion artifacts than CRT-based display systems. Additionally, an intense short pulse depends on the properties of CRT phosphors to excite a CRT pixel, whereas a constant external light source is intensity modulated during the frame period of an LCD display. Further, LCDs switch in the finite time it takes to change the state of a pixel. Active matrix thin film transistor (TFT) displays, which have an active transistor controlling each display pixel, still require a switching time related to the LCD material composition and thickness, and the techniques of switching.

Most LCD-based image modulators (110, 112, 114, etc.) are addressed in raster scan fashion and require refreshing each pixel during each display frame interval. Accordingly, every output pixel is written to the display during every refresh cycle regardless of whether the value of the pixel has changed since the last cycle. In contrast, active matrix display technologies and some plasma display panel technologies allow random access to the display pixels. Other, simpler panels use a simpler row by row addressing scheme similar to the raster scan of a CRT. Additionally, some displays have internal storage to enable output frames to self-refresh based on residual data from the previous output frame.

Field Emission Displays (FEDs) may include thousands of microtips grouped in several tens of mesh cells for each pixel. The field emission cathodes in FEDs can directly address sets of row or column electrodes in FEDs, and FEDs have fast response times. FEDs can use external mesh addressing for better resolution images, but this requires increased input/output (I/O) bandwidth outside of the FED.

Opto-mechanical systems can provide uniform brightness and high chromaticity for high quality displays. Additionally, high quality projection lens systems can provide bright and uniform images. However, component and assembly tolerances in opto-mechanical systems can result in system imperfections including imprecise image modulator alignment and geometric lens distortion.

Commercially available digital image processing systems, usually a part of an electronic control subsystem, can process analog or digital input data and format the data into higher resolution output modes. These processing systems typically perform operations such as de-interlacing, format conversion, and line doubling or quadrupling for interlaced analog input data. Some systems include a decompression engine for decompressing compressed digital data, and input data scaling to match the resolution and aspect ratio to the display device. However, these systems do not perform advanced image processing specific to a digital imaging LCD or to the display system. Additionally, these digital image processing systems do not often accommodate digital or compressed digital image data which can include bitstream information for enhanced outputs.

Image sensing algorithms, for example in remote sensing and computer vision applications, use special sampling and image warping techniques to correct input sensor distortions and to reconstruct images.

Data compression tools such as those standardized by the Moving Pictures Experts Group (MPEG) can compact video data prior to transmission and reconstruct it upon reception. MPEG-2 can be applied to both standard definition (SDTV) and high definition television (HDTV).

Multiple camera systems are commonly used to improve display quality on curved screen displays. For example, two cameras record respective halves of a scene to improve output. A layered coding technique may include a standard MPEG-2 stream as a base layer and enhancement information as a supplemental layer. Even if the two views are from slightly different angles, the compression ratio for the two camera views combined is less than the total compression ratio would be if each view were captured and compressed independently. Additionally, the second camera can provide a view that may be occluded from the first camera. Systems using additional camera angles for different views can provide additional coded and compressed data for later use. Multiple camera systems can also compensate for the limited focal depth of a single camera and can substitute for the use of a depth finding sensor which senses and records depth information for scenes. Image processing can improve the outputs of multiple camera systems.

Stereoscopic photography also uses multi-camera systems in which a first camera records a left eye view and a second camera records a right eye view. Because camera lenses focus at a certain distance, one camera uses one focal plane for all objects in a scene. A multi-camera system can use multiple cameras each to capture a different focal plane of a single scene. This effectively increases the focal depth. Digital image processing can further improve focusing for these multi-camera systems.

Types of three dimensional binocular display systems include anaglyph displays, frame sequence displays, autostereoscopic displays, and single and multi-turn helix displays. These normally have multiple camera data channels. Anaglyph systems usually require a user to wear red and green glasses so that each eye perceives a different view. Frame sequencing systems use shutter glasses to separate left and right views. Autostereoscopic displays use lenticular lenses and holographic optical elements. Single or multi-turn helix displays use multiple semi-transparent display screens which can be seen by multiple observers without special glasses. Multiple camera data channels systems can benefit from image processing.

Projecting an image from a projector on a tabletop to a flat screen which is closer to the projector at the bottom than the top results in an image which is narrower at the bottom than at the top, in what is known as the "Keystone" effect.

Radial distortion occurs when an image pixel is displaced from its ideal position along a radial axis of the image. Because an image has the largest field angles in the display corners, the corners exhibit worse radial distortion than other display areas. Radial distortion includes barrel distortion, where image magnification decreases towards the corners, and pin cushion distortion, where the magnification increases towards the corners. Lens related distortions including radial distortion can cause image deformation. Distortion can also result from non-flat screens or the earth's magnetic field.

Image modulators (110, 112, 114, etc.) have a fixed number of pixels spaced uniformly in a pattern. Projecting an image from an image modulator to a display screen deforms the uniformity of pixel spacing, that is, pixels are not correlated one to one from the image modulator to the display screen. Therefore, some screen display regions have more image modulator pixels than screen pixels while other screen display regions have fewer image modulator pixels than screen pixels.

Motion artifacts appear where image objects move near the edges of curved screens. Even when a flat screen projection is motion-adaptive filtered, the difference in the distances of objects from the projector causes an apparent motion of moving objects on a curved screen. Additionally, extremely large curved screens can achieve necessary resolution and brightness only with film projectors.

Each R, G, and B color component has different intensity values which are digitally represented by a number of bits. For example, if 8 bits represent each R, G, and B color component, then each component has 256 (=2^8) intensity values from 0 to 255. Changing the intensity value of a color component in an ideal digital device from a number X, for example, to a number Y takes just as long whatever the Y value. Consequently, changing a color component value from 2 to 3 takes as long as changing the value from 2 to 200. However, because of the nature of LCD image modulator pixels, the transitions for modulating light intensities are not purely digital, and various analog distortions remain.

What is needed is an image processing system to enhance display quality and provide the best possible visual images.

SUMMARY OF THE INVENTION

The present invention is directed to an image processing system for enhancing visual quality and improving performance in digital display systems.
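The barrel and pin cushion distortions discussed in the background above can be illustrated with the standard one-coefficient polynomial lens model. The model and the coefficient k1 are textbook conventions, not taken from the patent, which only characterizes the distortions qualitatively.

```python
def radial_distort(x, y, k1):
    """Standard one-term polynomial distortion model (an assumption, not
    the patent's): a pixel at normalized position (x, y) is displaced
    along its radial axis by the factor (1 + k1 * r^2). k1 < 0 gives
    barrel distortion (magnification decreases toward the corners);
    k1 > 0 gives pin cushion distortion (magnification increases)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Corners have the largest field angles (largest r), so they are
# displaced the most, as the background section notes.
edge_x, _ = radial_distort(1.0, 0.0, -0.1)           # edge midpoint
corner_x, corner_y = radial_distort(1.0, 1.0, -0.1)  # corner
```

With k1 = -0.1 the edge midpoint shrinks by 10% while the corner shrinks by 20%, matching the observation that corners exhibit the worst radial distortion.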
The processing system, useful for DTV displays and electronic theaters, can receive different types of data inputs including analog, digital, and compressed (or coded) display images. When lower resolution input images are used, the images are converted into higher quality display images using multiframe restoration. The processing system uses known information to provide unique enhancements to the image modulator and to the display system, and thus takes into account system characterization, manufacturing defects, calibration data, environment effects, and user controlled setup information. The processing system also employs Temporal Gamma Processing (TGP) and Reverse Super-Resolution (RSR) techniques to process images.

TGP independently processes each R, G, or B color component and compensates for modulating transition characteristics to assure that the time related representation of an image is as accurate as possible. TGP uses a previous frame value and a known transfer function of the display modulating system to adjust TGP outputs to desired values. TGP can overdrive an LCD image modulator to compensate for the LCD material characteristics so that the desired output value can be quickly achieved.

RSR performs a superset of the frame rate conversion process for converting disparate input and output frame rates. RSR improves display quality when intended display images have a higher apparent resolution than can be supported by the number of pixels of the image modulator. RSR, block by block, spatially filters one frame in a video sequence having a higher resolution into numerous frames having a higher frame rate and a lower resolution than the original frame. RSR, while sequencing the higher frame rate and lower resolution frames, shifts the pixel matrix of each sequenced frame and uses motion tracking to filter the images and to prevent motion artifacts. RSR can also be used with a moveable image modulator that shifts the pixel matrix by a sub-pixel distance during each sequenced frame refresh; that is, each high resolution image frame is sequenced into lower resolution frames and the frames are displayed during the cycle of frame refreshes.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a prior art projection display system using three transmissive LCD imaging elements;

FIG. 2 is a block diagram of an image processing system in accordance with the invention;

FIG. 3 is a block diagram of FIG. 2 DIP 210;

FIG. 4 is a block diagram of FIG. 2 DOP 230;

FIG. 5 is a block diagram of FIG. 4 Transformation 404;

FIG. 6 illustrates the operation of FIG. 4 TGP 412;

FIG. 7 illustrates the operation of FIG. 6 TG LUT 608;

FIG. 8 shows an image having an 8x8 pixel resolution and a display having a 4x4 pixel resolution;

FIG. 9 illustrates reverse super-resolution operating on an image;

FIG. 10 is a flowchart illustrating DIP 210's image processing; and

FIG. 11 is a flowchart illustrating DOP 230's image processing.

DETAILED DESCRIPTION

The present invention provides an image processing system which digitally processes one or more input data for output to a digital image modulator or to a digital memory from which a display device retrieves data.

FIG. 2 shows an image processing system 200 which includes a Display Input Processor (DIP) 210, a Display Output Processor (DOP) 230, and a buffer memory 240, all coupled to a databus 250. System 200 also includes an image modulator 245 (comparable to FIG. 1 modulators 110, 112, and 114) coupled to DOP 230 and to an external display screen 260. DIP 210 preferably receives images on line 2050 and reconstructs the images both spatially and temporally. DIP 210 outputs are processed by DOP 230 to enhance image visual quality. DOP 230 outputs, preferably in frame format, are stored in frame buffer 2400, which is part of buffer memory 240. Buffer memory 240 stores data for use by DIP 210 and DOP 230. Frame buffer 2400, which stores image data for outputting to image modulator 245 or to a digital memory (not shown), is preferably part of buffer memory 240, but alternatively can be part of the digital memory, which can in turn be part of buffer memory 240.

Image modulator 245 can be part of a CRT- or LCD-based direct view system, displaying images that can be in pixel format on display screen 260. However, if image modulator 245 is part of a projection system, then image modulator 245 provides images to be projected and enlarged onto display screen 260. In a projector system, image modulator 245 is small (inches) and may be either a stationary or a movable element. To increase the apparent resolution of the displayed images, a reverse super-resolution technique in accordance with the invention adjusts the data values written into a stationary image modulator 245 at an increased frame rate. For a movable image modulator 245, the invention, preferably during each output frame in a cycle, moves image modulator 245 to effectively shift the display pixel matrix a fraction of a pixel in the X and Y directions, preferably at the screen refresh rate.

System 200 processes image data in a high-resolution internal format to preserve detailed image information because such information can be lost in each of the various image processing steps if the internal image format has lower resolution than the output of image modulator 245. System 200, for example, can assume that the processed image has four times (doubled vertically and horizontally) better pixel resolution than the (spatial resolution) output of image modulator 245.

FIG. 3 is a block diagram of FIG. 2 DIP 210, including image processing modules Analog Input Control 302, Digital Input Control 304, Compressed Input Control 312, and Image Reconstruction (IR) 318, all connected to a databus 350. DIP 210 also includes one or more input data connectors 300 for receiving on line 2050 image data input to system 200, which can be one or more of analog video, digital video, non-tuned data, graphics data, or compressed data. Analog or digital video data may be in a native video format such as composite video, S-video, or some component YUV/YCrCb. Non-tuned data, received from a broadcast delivery system that may have many channels on a common carrier, may require a tuner (not shown) included in or separate from DIP 210 so that relevant data can be tuned from the channels. Compressed data may be in MPEG-2 format, which includes video and audio content, the data containing control or video overlay information for DOP 230. Image data on line 2050 may be encrypted for security and thus require decryption by DIP 210. Accompanying the image data, DIP 210 also receives control data including, for example, selected inputs, data types, vertical blanking interval (VBI) data, overlay channel information for the on-screen display (OSD), etc., and provides this control data to DOP 230.
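The TGP overdrive behavior described in the Summary, driving an LCD past the target value based on the previous frame value, can be sketched with a simple first-order model. The linear response assumption and the gain constant are illustrative; the patent instead uses a known, display-specific transfer function of the modulating system.

```python
def tgp_overdrive(desired, previous, gain=0.5, lo=0, hi=255):
    """Sketch of temporal-gamma overdrive (model assumed, not the
    patent's transfer function): because an LCD pixel only moves part
    way toward its target within one frame period, drive past the
    target in proportion to the remaining transition, then clamp the
    result to the valid code range."""
    drive = desired + gain * (desired - previous)
    return max(lo, min(hi, round(drive)))

# A rising 100 -> 200 transition is overdriven to 250, while a static
# pixel is simply driven at its target value.
```

With a per-display transfer function, `gain` would become a two-dimensional look-up indexed by (previous, desired), which is consistent with the temporal gamma look-up table (TG LUT 608) shown in FIG. 7.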
Each of the image processing modules, Analog Input Control 302, Digital Input Control 304, and Compressed Input Control 312, preferably receives image data from connector 300. A system microcontroller (not shown) preferably uses user-selected input controls to select image data, which is appropriately processed by each of modules 302, 304, and 312, and then preferably stored in buffer memory 240. The system microcontroller also uses the user input commands to control windowing for picture-in-picture displays, OSD information, and other system windowing capabilities.

DIP 210 preferably processes images in RGB formats. However, if applicable, for example when the UV vectors are the same as the Y vector, DIP 210 processes images in YUV formats. In this example DIP 210 requires three times less processing power and the saved power can be applied to per pixel processing.

Analog Input Control 302 preferably includes an analog-to-digital converter (ADC) 3002, which samples the analog data inputs and produces digital data outputs. ADC 3002, to achieve high quality, samples its input data frequently and precisely enough that the image can be reconstructed from the sampled data points. For example, if an analog input signal is Red, Green, and Blue having data for an 800x600 resolution screen that is being refreshed at 70 Hz, then the data rate for each channel is approximately 50 MHz. Consequently, ADC 3002, to properly sample the input for image reconstruction, runs at 100 MHz. ADC 3002 preferably uses Nyquist sampling to determine the appropriate sampling rate.

Digital Input Control 304 preferably includes a synchronization engine 3040 and processes digital data, which may be in a YUV video or a digital RGB format. Since the data is already in digital format, Digital Input Control 304 does not include an analog-to-digital converter. Digital Input Control 304 also uses high-speed digital data transmittal techniques that are described in the Institute of Electrical and Electronics Engineering (IEEE) standard 1394, Low Voltage Differential Signaling (LVDS), and Panel Link. These standards include line termination, voltage control, data formatting, phase lock loops (PLLs), and data recovery to assure that Digital Input Control 304 properly receives the digital data input.

Compressed Input Control 312, preferably including a decompression engine 3120 and a Bitstream Engine 3125, processes compressed data that usually includes audio, video, and system information. DIP 210, via Compressed Input Control 312, preferably receives compressed data because transmitting non-compressed high-speed digital data outside of system 200 (FIG. 2) is difficult and can be expensive in the cabling and component design. Further, current high speed digital interfaces are not fast enough to perform all of the panel-related processing in a separate system that requires a cable interface to a high resolution and high quality display system. Compressed Input Control 312, prior to decompression by decompression engine 3120, preferably demodulates the compressed digital data. Alternatively, a preprocessing system (not shown) may demodulate the data and provide it to Compressed Input Control 312. Compressed Input Control 312, performing additional steps such as error correction, assures that it properly receives the data and that the data is not corrupted. If the data is corrupted, Compressed Input Control 312 may conceal the corruption or request that the data be retransmitted. Compressed Input Control 312, once having correctly received the data, de-multiplexes the data into audio, video, and system streams, and provides the audio streams to an audio subsystem (not shown) for decoding and playback. Compressed Input Control 312 decompresses an encoded bitstream input, but retains relevant motion vector information for DOP 230 processing.

Bitstream Engine 3125 optimizes reconstruction of MPEG-2 input bitstreams into enhanced video frames in a manner that has not been used in prior art video enhancement products. The bitstream information includes side information such as picture and sequence header flags, which can be used as control data to optimize presentation for a particular display. Lower-layer coded data can reveal object shapes and other information that can be exploited to provide enhanced spatial and temporal rendering of blocks constituting images. Decompression engine 3120, its decoding engine (not shown), and IR 318 can use the bitstream information, for example, to better track, and thereby produce higher quality, moving objects.

Bitstream Engine 3125, to MPEG-2 encode an image frame, uses image blocks (or macroblocks). Since most video frames within a sequence are highly correlated, Bitstream Engine 3125 exploits this correlation to improve rendering. Bitstream Engine 3125 also employs motion estimation techniques for motion compensated prediction as a method of temporal processing across image frames. Bitstream Engine 3125 preferably uses low-complexity and block-based prediction. Bitstream Engine 3125 can track the flow of video data prescribed by the prediction blocks belonging to the macroblocks within the bitstream, rather than re-estimating motion or creating the macroblocks similarly to a second pass encoding process. Bitstream Engine 3125 tracks the prediction blocks over several frames, in which the temporal path of the prediction blocks delineates a coarse trajectory of moving objects. This coarse trajectory can be refined by additional motion estimation and bitstream processing. The motion vector of the prediction blocks includes a horizontal component and a vertical component. Bitstream Engine 3125 preserves the motion vector information for later use in generating DOP 230 output frames in conjunction with motion compensated temporal filtering and reverse super-resolution. The information can also be used for constructing a special block filter for post-decompression filtering of the coded input stream so that IR 318 can filter artifacts of block boundary edges.

Buffer memory 240 receives data from Analog Input Control 302, Digital Input Control 304, and Compressed Input Control 312, and provides the data to Image Reconstruction 318. Buffer memory 240 also stores IR 318 output.

IR 318 preferably includes a Motion Estimator 3180 and receives image data from Analog Input Control 302, Digital Input Control 304, Compressed Input Control 312, or from buffer memory 240. IR 318 processes data based on data types. For example, if data in YUV format requires a conversion to the RGB domain, then IR 318, through either mathematics calculations or a look-up table, converts YUV values to RGB color space. However, IR 318 preferably processes image frames while they are still in the YUV color space and, if required, RGB color space conversion is performed during one of the last image processing steps by DOP 230. Additionally, YUV data is often sub-sampled, that is, one UV pair may correspond to two or four Y values. Consequently, IR 318 uses the UV values to interpolate and create RGB pixels. If YUV data is interlaced, then IR 318 converts the data from field based (sequential half frames) to frame based. IR 318 stores each field in buffer memory 240, then filters, analyzes, and combines the fields to generate an input image frame. IR 318, if required, retransmits the processed input frames in analog video format.
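The YUV-to-RGB conversion and chroma interpolation performed by IR 318 can be sketched as follows. The BT.601 matrix coefficients and the nearest-neighbor 4:2:2 upsampling are standard stand-ins, since the patent does not specify the exact mathematics, look-up table contents, or interpolation filter.

```python
def yuv_to_rgb(y, u, v):
    """BT.601 YCbCr-to-RGB conversion (standard coefficients assumed;
    the patent leaves the exact matrix unspecified). u and v are 8-bit
    chroma values centered on 128."""
    u, v = u - 128, v - 128
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

def upsample_422(ys, us, vs):
    """In 4:2:2 sub-sampling one UV pair covers two Y values, so each
    chroma sample is replicated before conversion: a nearest-neighbor
    stand-in for IR 318's interpolation of UV values into RGB pixels."""
    return [yuv_to_rgb(y, us[i // 2], vs[i // 2]) for i, y in enumerate(ys)]

# Neutral chroma (U = V = 128) converts to equal R, G, and B.
```

A look-up table implementation, which the text mentions as the alternative to direct calculation, would simply precompute these products per 8-bit input value.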
US 6,340,994 B1
processed input frames in analog video format.
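The chroma interpolation and color space conversion described above can be sketched as follows. This is a minimal illustration assuming standard BT.601-style conversion coefficients and 2:1 chroma sharing; the patent does not specify the exact matrix:

```python
# Sketch of YUV -> RGB conversion with chroma up-sampling, as IR 318
# might perform it. The BT.601 full-range coefficients below are an
# assumption; the patent does not give the matrix.

def yuv_to_rgb(y, u, v):
    """Convert one YUV triple (0-255, U/V biased by 128) to RGB."""
    u, v = u - 128, v - 128
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

def upsample_chroma(y_row, uv_row):
    """One UV pair may correspond to two Y values: replicate each
    shared UV pair across two luma samples, as described above."""
    pixels = []
    for i, y in enumerate(y_row):
        u, v = uv_row[i // 2]  # two luma samples share one chroma pair
        pixels.append(yuv_to_rgb(y, u, v))
    return pixels

# Neutral chroma (128, 128) leaves luma values gray.
print(upsample_chroma([16, 235], [(128, 128)]))
```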
Nevertheless, IR 318 preferably uses the processed image frames and the motion information created by DIP 210 while the frames and the information are still in their digital format. If IR 318 processes data, such as overlay information, relevant to image modulator 245 (FIG. 2), IR 318 provides such data to DOP 230 to later be combined with the image data frames. IR 318 may process multiple input data streams in parallel and provide such data to DOP 230 to later produce a picture-in-picture display of multiple images. IR 318 also does post decompression filtering based on block boundary information included in the input bitstream.

IR 318 preferably uses techniques from, for example, Faroudja Labs, Snell & Wilcox, and Darim, that can sample and reconstruct input video, which includes composite, S-Video, and Component (Y, Cr, Cb) that may follow one of the industry standards such as Phase Alternating Line (PAL) or the National Television Standards Committee (NTSC). IR 318, to spatially filter for high quality image frames, preferably uses various techniques for noise reduction, such as recursive filtering, median filtering, and time base correction.

Because detecting motion is important in restoring images, Motion Estimator 3180 (and other processing modules according to the invention) tracks motion on a sub- (or smaller) block basis. For example, instead of on an 8x8 (pixels) block, Motion Estimator 3180 tracks motions on a 2x2 block, which tracks more refined motions. To reduce the need to track refined sub-blocks, Motion Estimator 3180 uses the image error terms to pre-qualify a block, and thus does not perform refined tracking on a block that has motion vectors with a high value for the error terms. Conversely, Motion Estimator 3180 does perform refined tracking on a block with a low value for the error terms.

When receiving motion estimation vectors, such as those provided in an MPEG-2 data stream, IR 318 stores them based on the magnitude of the image error terms. IR 318 then uses vectors with high error terms only in compliant MPEG-2 decoding, and vectors with low error terms both in compliant MPEG-2 decoding and in analyzing refined motions for restoring multiple frames. Analyzing refined motions can produce motion vectors for sub-block pixel sizes, which can be used in multiframe restoration to better produce high resolution output frames.
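The pre-qualification rule described above — refine only blocks whose error terms are low — can be sketched as follows (the threshold and record layout are illustrative assumptions; the patent gives no numeric values):

```python
# Sketch of Motion Estimator 3180's pre-qualification step: refined
# (e.g. 2x2 sub-block) tracking is performed only on blocks whose
# motion vectors carry low error terms. Threshold is illustrative.

ERROR_THRESHOLD = 50  # hypothetical cutoff for a "low" error term

def blocks_for_refined_tracking(blocks):
    """Return ids of blocks that qualify for refined sub-block tracking."""
    return [b["id"] for b in blocks if b["error"] < ERROR_THRESHOLD]

blocks = [
    {"id": 0, "error": 12},   # good match: track on 2x2 sub-blocks
    {"id": 1, "error": 300},  # poor match: keep the 8x8 result only
    {"id": 2, "error": 47},
]
print(blocks_for_refined_tracking(blocks))  # → [0, 2]
```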
IR 318 takes account of multiple input images and then, to enhance the resolution of those images, uses super-resolution techniques that employ data shared by different input frames to restore an image, and thereby to produce each output frame. This cannot be done by independently using one input image at a time. The invention is thus advantageous over prior art systems which use super-resolution techniques for generating high-resolution still images from a video sequence, but not for generating output frames. The super-resolution techniques used by the invention depend on a high correlation of the data between frames, and require a sub-pixel shift of the input images, typically based on slight movements of objects in the images. IR 318, in correlating images to restore multiple frames, uses motion vectors provided by Motion Estimator 3180 or preserved from the input bitstream. IR 318, while generating still frames, can use mathematical equations from, for example, deterministic techniques of Projections On Convex Sets (POCS) and stochastic techniques of Bayesian enhancements.

IR 318 preferably separates its output images into video fields or frames, and creates a pointer to the start of each field (or frame). Either the actual field (or frame) data or a pointer to the field (or frame) data may serve as inputs to DOP 230. Processing input video fields and producing frames that combine fields is useful for de-interlacing video in the image restoration process, which in turn is useful for increasing image resolution and for restoring the vertical detail that was lost during interlacing. IR 318 outputs (and DOP 230 outputs), having been reconstructed in accordance with the invention, can have a higher resolution than can be supported by the number of pixels of image modulator 245. IR 318 outputs can be stored in buffer memory 240 or in a metafile that includes a description of the image both in a spatial RGB frame buffer format and in a semantic description of the image objects, textures, and motions.
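The core super-resolution idea above — combining input frames that are offset by a sub-pixel shift — can be sketched in one dimension as a toy shift-and-add, far simpler than the POCS or Bayesian formulations the text names:

```python
# Toy 1-D shift-and-add super-resolution: two low-resolution frames,
# offset by half a pixel, interleave into one signal at twice the
# sampling density. Real multiframe restoration (POCS, Bayesian) is
# far more involved; this only illustrates why sub-pixel shifts
# between frames carry extra resolution.

def shift_and_add(frame_a, frame_b):
    """frame_b is assumed shifted by +0.5 low-res pixels relative to
    frame_a; interleaving their samples doubles the resolution."""
    out = []
    for a, b in zip(frame_a, frame_b):
        out.extend([a, b])
    return out

low1 = [10, 30, 50]   # samples at positions 0, 1, 2
low2 = [20, 40, 60]   # samples at positions 0.5, 1.5, 2.5
print(shift_and_add(low1, low2))  # → [10, 20, 30, 40, 50, 60]
```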
When an image does not include MPEG-2 motion vector bitstream information, Motion Estimator 3180 preferably uses techniques such as optical flow, block matching, or Pel-recursion to estimate motion that tracks the image object. Motion Estimator 3180 can also use the same motion estimation techniques in conjunction with MPEG-2 motion vector bitstream information. Motion Estimator 3180 compares groups of pixels from one image field to those of subsequent and previous image fields to correlate object motion. Motion Estimator 3180 then records the motion in time. Motion Estimator 3180 can also use the detected motion relative to the field position so that DOP 230, together with input frame information and IR 318 motion information, can later generate motion-compensated image frames. For compression systems, Motion Estimator 3180 finds the best match between frames, then codes the mismatches as image "error terms." For motion compensation, Motion Estimator 3180 masks out motion vectors that do not meet a certain level of matching criteria, and tags the vectors that have a high level of matching so that these vectors can subsequently be used in more refined motion tracking operations, which are performed on smaller image blocks or on individual pixels. Motion Estimator 3180 thus differs from prior art techniques in which video compression systems use the detected motion as one of the steps to compress the number of bits needed to represent a video sequence. Consequently, the invention, via Motion Estimator 3180, advantageously provides better quality images than prior art techniques.
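Of the estimation techniques named above, block matching is the most direct to sketch: exhaustively test displacements and keep the one with the smallest mismatch, which is also the "error term" discussed above. A minimal one-dimensional sketch (real searches use 2-D windows):

```python
# Minimal 1-D block-matching sketch: find the displacement of a block
# between a previous and a current line of pixels by minimizing the
# sum of absolute differences (SAD). The residual SAD at the best
# displacement plays the role of the "error term."

def best_match(prev_line, block, search_range):
    best = None
    for d in range(search_range):
        if d + len(block) > len(prev_line):
            break
        sad = sum(abs(p - b)
                  for p, b in zip(prev_line[d:d + len(block)], block))
        if best is None or sad < best[1]:
            best = (d, sad)
    return best  # (displacement, error term)

prev_line = [0, 0, 9, 7, 5, 0, 0]
block = [9, 7, 5]                       # appears at offset 2
print(best_match(prev_line, block, 5))  # → (2, 0)
```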
FIG. 4 is a block diagram of FIG. 2 DOP 230, which has a display map memory (DMM) 402 and image processing modules including Geometric Transformation 404, Post GT Filtering 406, Color/Spatial Gamma Correction 410, Temporal Gamma Processing (TGP) 412, Reverse Super-Resolution 414, and Display Modulation (DM) 420, all connected to a databus 450. Databus 450 satisfies system bandwidth and concurrency requirements for parallel image processing. DOP 230 also connects to buffer memory 240, which stores data frames for use by each of the processing modules 402, 404, 406, 410, 412, 414, and 420, although each of these modules may include a local memory buffer (not shown).

DOP 230 receives DIP 210 outputs either directly or via buffer memory 240. DOP 230 can use pointers (if applicable) to directly access DIP 210 output data. DOP 230 also receives multiple DIP 210 output images for performing picture-in-picture operations where a single image frame includes more than one processed input video frame. DOP 230 combines overlay data both from the input coded data and from any on-screen display (OSD) information such as a user menu selection provided by the system microcontroller. DOP 230 processes its input images and outputs image data including display coordination for both video and data output, and data and control signals for each R, G, and B image color component. Frame buffer 2400 (FIG. 2) can store DOP 230 outputs.
DMM 402 stores data corresponding to image modulator 245 (FIG. 2) characteristics at chosen pixel or screen locations. DMM 402, where applicable, also stores a memory description corresponding to each display pixel or a shared description of groups of display pixels or pixel sectors. Because the description does not change on a frame-by-frame basis, DMM 402 preferably reads the description only once during the display process. DOP 230 then uses the description information to generate image frames. DMM 402, when reading data, uses a set of control registers (not shown) that provide references to the data blocks.

DMM 402 data varies and includes, for illustrative purposes, manufacturing related information, system configuration information, and user data. Manufacturing related information includes, usually at assembly time, for example, a map of locations of defective or weak pixel display bits, correlation data of ideal radial imperfections and of optically distorted projection, and correlation data for alignment points for image modulator 245. System configuration information, through an automatic self-calibration, includes, for example, a registration map having adjustable intensity values for each R, G, and B color component and the color component pixel offset at given locations. DMM 402, where applicable, preferably uses sensor techniques, such as sonar range finding, infrared range finding, or laser range finding, to measure distances from a projector (not shown) to different parts of display screen 260. DMM 402 then uses these measurements to mathematically characterize and model a projection display system. DMM 402 thus allows projecting images onto a mathematical approximation of a display screen 260 surface. User data includes user preference information, such as brightness, color balance, and picture sharpness, that is input by a user during a setup sequence. DMM 402 preferably provides data, either directly or through buffer memory 240, to Geometric Transformation module 404.
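As one illustration of how the manufacturing data in DMM 402 might be consumed, the sketch below applies a defect map to a row of pixels by substituting a neighbor average. The compensation rule is purely hypothetical; the patent only says that the map of defective or weak pixel bits is stored:

```python
# Hypothetical use of a DMM-style defect map: for each pixel location
# marked defective, replace its value with the average of its left and
# right neighbors. The patent does not specify a compensation rule;
# this is only a plausible placeholder.

def compensate_row(row, defect_map):
    out = list(row)
    for x in defect_map:
        if 0 < x < len(row) - 1:
            out[x] = (row[x - 1] + row[x + 1]) // 2
    return out

row = [100, 0, 104, 106]   # pixel 1 is a known weak/defective bit
print(compensate_row(row, defect_map=[1]))  # → [100, 102, 104, 106]
```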
Geometric Transformation 404 redefines the spatial relationship between pixel points of an image to provide to frame buffer 2400 compensated digital images that, when displayed, exhibit the highest possible image quality. Geometric transformation, also referred to as warping, includes image scaling, rotation, and translation. Geometric Transformation 404 resamples data to produce an output image that can readily map onto FIG. 2 image modulator 245. However, the Geometric Transformation 404 output data points, due to scaling or resampling, may not correspond one-to-one to data points on the image modulator 245 grid. Consequently, DOP 230 includes Post Geometric Transform Filtering 406 to filter the transformed data samples from Geometric Transformation 404 and produce an output pixel value for each data point of image modulator 245. Post Geometric Transform Filtering 406 uses spatial filtering methods to smooth the image and to resample, and thus properly space, the data samples.

Geometric Transformation 404 preferably uses filtering algorithms, such as nearest neighbor, bilinear, cubic convolution, sync filters, or cubic spline interpolation, to process images and thus produce accurate interpolated image pixel values. Further, where multiframe restoration requires, Geometric Transformation 404 uses time varying multiframe filtering methods, including deterministic techniques such as projection onto convex sets (POCS) and stochastic techniques such as Bayesian filtering. Based on the computation complexity, Geometric Transformation 404 chooses an appropriate filtering technique.

Geometric Transformation 404 can improve image deficiencies related to the screen 260 environment. Geometric Transformation 404 performs a spatial projection which warps the image to compensate for a curved display screen 260, as is usually used in front projection theater systems, and subsequently uses bitstream information to improve the image. For example, if it can acquire the depth of moving objects, Geometric Transformation 404 can reduce the distorted motions at the edges of a curved screen 260. Geometric Transformation 404 constructs an optical flow field of the moving objects along with the object distance information. Geometric Transformation 404 then uses motion adaptive filtering to construct a sequence of output frames that position the objects at the proper spatial coordinates in the time domain. Geometric Transformation 404 thus, during projection on a curved screen 260, conveys the proper motion of all objects in a scene. Geometric Transformation 404 also works in conjunction with an optical correction to improve distortions resulting from the different focal distances from a projector (not shown) to different parts of screen 260. Geometric Transformation 404 uses range finding techniques (discussed above) to construct a model of the screen 260 environment and then uses the information from the model and the optical system to mathematically construct a formula to compensate for image distortions. Geometric Transformation 404, to correct a warping distortion produced by an optical system, uses the same mathematical basis for a flat screen geometric transformation to apply to a curved screen.
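The resampling at the heart of these transformations can be sketched with the bilinear option named above: each output coordinate is mapped back into the source image and interpolated from its four neighbors. A minimal sketch showing only the interpolation step (the warp formula itself is omitted):

```python
# Minimal bilinear interpolation, the simplest of the filter choices
# listed above: sample a source image at a fractional (x, y)
# coordinate by blending the four surrounding pixels.

def bilinear(img, x, y):
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

src = [[0, 10],
       [20, 30]]
# The exact center of the 2x2 source is the average of all four values.
print(bilinear(src, 0.5, 0.5))  # → 15.0
```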
Geometric Transformation 404 uses special processing, similar to the curved screen 260 processing, for various head-mounted displays (HMDs). A HMD is a display unit combined with a helmet or glasses that a user wears and usually includes two image modulators 245, one for the right eye and one for the left eye. HMDs are useful for a single viewer and, because of their physically smaller area, display high quality images.

Geometric Transformation 404, without considering motion adaptive filtering, treats image spatial projection with warping onto a curved screen 260 in the context of 3D graphics. Geometric Transformation 404 considers a display image frame as a 2D texture and considers a curved surface as a 3D surface. Geometric Transformation 404 then maps the 2D texture onto a surface that is the mathematical inverse of the curved screen 260. Geometric Transformation 404 thus pre-corrects the image frame so that, when projected, the mapped image will have filtered out the distortions associated with a curved screen 260. Geometric Transformation 404 preferably uses techniques such as anisotropic filtering to assure that the best texture is used in generating output pixels.

Geometric Transformation 404 also improves display image characteristics related to image modulator 245 and the display system. For image modulator 245 screen regions that have more image modulator 245 pixels than screen 260 pixels, Geometric Transformation 404 adjusts the pixel values by spatial filtering to reduce differences in neighboring pixel values. Consequently, the corresponding image (stored in frame buffer 2400) is smooth and does not contain artifacts. For screen display regions that have fewer image modulator 245 pixels than screen 260 pixels, Geometric Transformation 404 uses edge enhancement filtering to increase differences between neighboring pixel values to pre-compensate for distortion that will be introduced when image projection spreads out neighboring pixels.
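The two complementary corrections just described — smoothing where modulator pixels outnumber screen pixels, edge enhancement where they are outnumbered — can be illustrated with one-dimensional convolution kernels (the kernel weights are illustrative assumptions; the patent specifies no coefficients):

```python
# Sketch of the two opposite corrections: a smoothing kernel reduces
# differences between neighboring pixel values, while an
# edge-enhancement kernel increases them to pre-compensate for
# projection blur. Kernel weights are illustrative only.

def convolve3(row, kernel):
    out = []
    for i in range(1, len(row) - 1):
        out.append(sum(row[i - 1 + k] * kernel[k] for k in range(3)))
    return out

row = [10, 10, 50, 10, 10]
smooth = convolve3(row, [0.25, 0.5, 0.25])   # reduce neighbor differences
sharpen = convolve3(row, [-0.5, 2.0, -0.5])  # increase neighbor differences
print(smooth)   # → [20.0, 30.0, 20.0]
print(sharpen)  # → [-10.0, 90.0, -10.0]
```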
Geometric Transformation 404 also preferably uses filtering techniques such as sync filters, Wiener deconvolution, and POCS, and/or other multipass filtering techniques to filter the images off-line and then output the filtered images onto a film recorder. Geometric Transformation 404 preferably allows more computationally intensive image operations to be performed off-line.

Geometric Transformation 404 processes video as 3D texture mapping, preferably using systems that accommodate multiple textures in images. For example, Geometric Transformation 404 can use high quality texturing techniques, such as bump mapping and displacement mapping, which apply multiple texture maps to an image. For another example, Geometric Transformation 404, modeling the graininess inherent in film, applies multi-surface texturing to give video a more film-like appearance. Geometric Transformation 404 can allow a user to select the graininess modeling feature as part of the setup procedure, similarly to selecting room effects such as "Hall," "Stadium," etc., in an audio playback option.
Geometric Transformation 404 can process digital data from a multi-camera system to improve the focus, and thereby provide higher quality images for image modulator 245. Geometric Transformation 404 evaluates which of the multiple camera views provides the best focus for an object and then reconstructs the object in proper perspective. Geometric Transformation 404 then combines the multiple camera views on a regional or object basis to produce output frames. Geometric Transformation 404 can also use multi-camera bitstream information included in the image data to determine the object depth of a scene and to construct a 3D model of the shape and motion pattern of the moving objects. Geometric Transformation 404 then uses the same bitstream information to solve problems related to a curved screen 260 projection to achieve proper object motion completely across the screen 260.
Geometric Transformation 404 can also improve autostereoscopic 3D display systems in which multiple camera channels present a binocular display and each of a viewer's eyes sees a different monocular view of a scene. Geometric Transformation 404 can construct each of the monocular views in accordance with the focus and motion adaptive filtering techniques described above.

Color and Spatial Gamma Correction 410 converts YUV to RGB color space and determines the intensity values for each of the R, G, and B color components. Those skilled in the art will recognize that a color space conversion is not necessary if it has been done previously or if the image is otherwise already in the RGB color space. Color and Spatial Gamma Correction 410 preferably uses a look-up table, in which each of the R, G, and B color components has values corresponding to color intensities, to translate image colors. Each R, G, and B intensity value represents an index into the look-up table, and the table provides the output (or "translated") value. Color and Spatial Gamma Correction 410 independently processes each R, G, or B color component. Color and Spatial Gamma Correction 410 maps each color component based both on a combination of individual RGB values and on RGB values of surrounding pixels. For example, if FIG. 2 image modulator 245 requires a certain brightness for an identified area on display screen 260, then Color and Spatial Gamma Correction 410 may use the RGB values of the pixels in the identified area and of the pixels in the neighboring area. Color and Spatial Gamma Correction 410 uses mathematical calculations, or preferably a color look-up table (CLUT), to provide the RGB values for the desired image outputs. Color and Spatial Gamma Correction 410 prefers using a CLUT to mathematical calculations because a CLUT allows a non-linear mapping of the input RGB values to the translated (output) RGB values. A non-linear mapping enables input colors represented by RGB values to be adjusted (emphasized or de-emphasized) during the mapping process, which is useful for crosstalk suppression and for compensation of shortcomings in a color gamut of image modulator 245. Color and Spatial Gamma Correction 410, to realize a non-linear relationship, uses a translation table represented by a number of bits that is larger than the number of data input bits. For example, if eight bits represent 256 (=2^8) color component intensity values, then Color and Spatial Gamma Correction 410 uses, for example, 10 bits to represent 1024 (=2^10) translated values; the system manufacturer maps the 256 values to 1024 translated values.
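The eight-bit-in, ten-bit-out translation table described above can be sketched as follows. The gamma curve is an illustrative stand-in; the actual non-linear mapping would come from characterizing image modulator 245:

```python
# Sketch of an 8-bit -> 10-bit CLUT as described above: 256 input
# intensity values index a table of up to 1024 translated values.
# A gamma curve serves as the example non-linear mapping; the real
# curve would come from display characterization.

GAMMA = 2.2  # illustrative

clut = [round(((i / 255.0) ** GAMMA) * 1023) for i in range(256)]

def translate(component):
    """Each R, G, or B intensity value indexes the look-up table."""
    return clut[component]

print(len(clut), translate(0), translate(255))  # → 256 0 1023
```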
TGP 412 assures that the time related representation of an image is as accurate as possible. TGP 412 thus, based on a previous frame value and a known transfer function of the display modulation system, adjusts its output values to provide a desired output value during display of a desired frame. TGP 412 independently processes each R, G, or B color component and compensates for modulating transition characteristics that, due to the nature of an LCD image modulator 245, are not purely digital. TGP 412 also overdrives the LCD image modulator 245 to compensate for the LCD material characteristics so that the desired output can be achieved more quickly. Consequently, TGP 412 overcomes the video quality limitation of prior art systems having materials that produce fast-motioned and blurred outputs. TGP 412 can also reduce the cost of the display system because the materials used for image modulation in prior art systems that provide faster image response are usually expensive. TGP 412 is described in detail with reference to FIGS. 6 and 7.
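The overdrive behavior of TGP 412 can be sketched as follows. The linear boost factor is an illustrative assumption; as noted above, a real implementation would use the known transfer function of the display modulation system (often stored as a per-transition look-up table):

```python
# Sketch of LCD overdrive as TGP 412 performs it: based on the
# previous frame value and the desired value, output an exaggerated
# drive value so the slow-settling LCD material reaches the target
# sooner. The boost factor is illustrative, standing in for the
# modulator's measured transfer function.

BOOST = 1.5  # illustrative overdrive gain

def overdrive(previous, desired):
    drive = previous + BOOST * (desired - previous)
    return max(0, min(255, round(drive)))  # clamp to the 8-bit range

print(overdrive(100, 100))  # steady state: no overdrive  → 100
print(overdrive(100, 180))  # rising: drive past target   → 220
print(overdrive(180, 100))  # falling: drive below target → 60
```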
Reverse Super-Resolution (RSR) 414 performs a superset of the frame rate conversion process for converting between disparate input and output frame rates, and can improve display quality when intended display images have a higher apparent resolution than can be supported by the number of pixels of image modulator 245. RSR 414 simulates higher resolution outputs by sequencing lower resolution images at higher frame rates. Thus, for example, RSR 414, block by block, spatially filters one frame in a video sequence having a transfer rate of X frames per second (fps) into Y number of RSR frames having a transfer rate of Z fps, where Z = X × Y. RSR 414 then shifts, by the same pixel (or pixel fraction) amount, the pixel matrix representing each RSR image block. For example, because there are Y RSR frames, RSR 414 shifts the pixel matrix block Y times, once for each RSR frame, and each shift is by the same pixel (or pixel fraction) amount. The number of pixel fractions to be shifted depends on the physical characteristics of the display system and of image modulator 245. Where a system adjusts the position of the viewed image, the shift fraction corresponds to the physical movement of the viewed displayed image. Where there is no actual movement of the displayed image, the fractional adjustment is based on the physical nature of the display device, such as the pixel size relative to the size of image modulator 245 and to the projection characteristics of the system. RSR 414 then produces each RSR frame with a motion compensated weighted filtered center so that the center of the input image for each RSR frame is maintained such that no motion artifacts are introduced. A pixel matrix weighted filtered center is the center of a pixel matrix taking account of filter weights in a filter transfer function. Filter weights, varying depending on the filter characteristics, are the values (usually of multiplications and additions) which …
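The frame rate relationship and per-frame shift schedule described above can be sketched as follows (the shift fraction is an illustrative assumption; the patent ties it to the physical characteristics of the modulator and projection optics):

```python
# Sketch of the RSR rate relationship: one input frame at X fps
# becomes Y RSR frames at Z = X x Y fps, each successive frame
# shifted by the same fractional amount. The quarter-pixel fraction
# below is illustrative only.

def rsr_schedule(x_fps, y_frames, shift_fraction):
    z_fps = x_fps * y_frames                       # Z = X x Y
    shifts = [i * shift_fraction for i in range(y_frames)]
    return z_fps, shifts

z, shifts = rsr_schedule(x_fps=30, y_frames=4, shift_fraction=0.25)
print(z)       # → 120
print(shifts)  # → [0.0, 0.25, 0.5, 0.75]
```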