(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2007/0035706 A1
     Margulis                          (43) Pub. Date: Feb. 15, 2007

(54) IMAGE AND LIGHT SOURCE MODULATION FOR A DIGITAL DISPLAY SYSTEM

(75) Inventor: Neal D. Margulis, Woodside, CA (US)

Correspondence Address:
JAMES EPPA HITE, III
2318 LOUIS ROAD
PALO ALTO, CA 94303-3635 (US)

(73) Assignee: Digital Display Innovations, LLC

(21) Appl. No.: 11/255,353

(22) Filed: Oct. 21, 2005

Related U.S. Application Data

(63) Continuation-in-part of application No. 11/158,476, filed on Jun. 20, 2005.

Publication Classification

(51) Int. Cl. G03B 21/00 (2006.01)
(52) U.S. Cl. 353/122

(57) ABSTRACT

Image processing enhances display quality by controlling both the image modulator and the light sources to produce the best possible visual images. The image processing system utilizes a combination of user inputs, system configuration, design information, sensor feedback, pixel modulation information, and lighting control information to characterize the display environment on a per pixel basis. This per pixel characterization information is combined with one or more frames of the incoming real time display data. The image processing system processes each incoming pixel and produces a modified corresponding output pixel. Each pixel of each frame is processed accordingly. In addition to producing modified output pixels, the image processing system produces control information for the light sources. The light source control information can control individual lamps, tubes or LEDs, or can control a block or subset of the light sources. The resulting display has improved image consistency, enhanced color gamut, higher dynamic range and is better able to portray high motion content.
[Representative drawing, FIG. 2: block diagram of image processing system 200 — Display Input Processor (DIP) 210 and Display System 240 with Display Output Processor (DOP) 230 on databus 250; DOP 230 drives Modulator Drivers 242 and Light Source Drivers 252, which control Image Modulator 244 and Display Light Sources 270; Optical Sensors 246 return Display Sensor Feedback via path 248; Display Screen 260; image data input 2050.]
[Sheet 1 of 8: FIGS. 1A, 1B and 1C — prior art light source and modulator arrangements, including LED array 102f of LEDs 104 (front view), subsystem 100 with LED array 102s, lenses 108 and 112, and image modulator 110 (side view), and backlight element 122.]
[Sheets 2 through 7 of 8: the remaining figures (FIGS. 2 through 7), block diagrams described in paragraphs [0034] through [0039] below; the drawing text is not legible in this scan.]
[Sheet 8 of 8, FIG. 8: flowchart — Input Selection 804, Input Processing 808, Display Setup 806, Display Output Image Processing 810, Display Controls 812, Image Modulator Sequencing 814, Light Source Sequencing 816, Sensor Readings 818, with loop paths 830, 832 and 834.]
IMAGE AND LIGHT SOURCE MODULATION FOR A DIGITAL DISPLAY SYSTEM

[0001] This application is a Continuation-in-Part of U.S. application Ser. No. 11/158,476 filed Jun. 20, 2005.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The present invention relates generally to an image processing and modulation system for a digital display system, and more particularly to performing lighting control along with advanced image processing and image modulation to produce high-quality output images.

[0004] 2. Discussion of Prior Art

[0005] The many types of Flat Panel Display (FPD) technologies include Active Matrix Liquid Crystal Displays (AMLCDs), also referred to as Thin Film Transistor (TFT) displays, silicon Reflective LCDs (si-RLCDs), Liquid Crystal On Silicon (LCOS), ferroelectric displays (FLCs), Field Emission Displays (FEDs), Carbon Nanotube based Nano-Emissive Displays (NEDs), ElectroLuminescent Displays (ELDs), Light Emitting Diodes (LEDs), Organic LEDs (OLEDs), Plasma Displays (PDs), Passive matrix Liquid Crystal Displays (LCDs), Silicon X-tal Reflective Displays (SXRDs) and Digital Mirror Displays (DMDs).

[0006] Manufacturing FPDs requires special processing steps and it is often difficult to achieve acceptable visual quality in terms of consistency across the entire area of the display. Besides being a yield issue for new displays, display characteristics can change over operating conditions and over the lifetime of the display. Most FPD technologies also require light sources, such as fluorescent lamps, Cold Cathode Fluorescent lamps, Ultra High Power (UHP) lamps or Light Emitting Diodes (LEDs), to illuminate the display images. LEDs may be made from a variety of materials and each may have its own "color temperature" characteristics. Other systems may use a hybrid of a traditional lamp along with one or more white or colored LED arrays. In one example system, Sharp Corporation of Japan uses a cold cathode fluorescent lamp along with red LEDs in a hybrid back light approach for 37" and 57" LCD TVs.

[0007] Some of these technologies can be viewed directly as FPDs and others are used as microdisplay devices where the image is projected and a user views the projected image.

[0008] Projection display systems may use transmissive or reflective "microdisplay" technologies. To achieve the desired full color gamut in microdisplay based parallel color projection systems, three separate LCD image modulators, one for each of the R, G, and B (RGB) components, may be used. A single LCD image modulator which produces R, G, and B, either through spatial color filters or with sequential color fields at a sufficiently high rate, can cost less. Other light sources such as LEDs and lasers may also be used for the different colors with one, three or more imagers. Reflective technologies such as Texas Instruments Digital Light Processing (TI DLP) and technologies such as LCOS imagers are other examples of popular microdisplay technologies. Sony uses a Silicon X-tal Reflective Display (SXRD) microdisplay in some of their rear projection television systems. Other display devices may be based on High Temperature Poly-Silicon (HTPS) technology, Micro Electro-Mechanical Systems (MEMS) or Carbon Nanotubes.

[0009] Projection displays have a projection path with associated additional properties. For example, projecting an image from a projector on a level below the middle of a flat screen results in an image which is narrower at the bottom than at the top, in what is known as the "keystone" effect. Radial distortion occurs when an image pixel is displaced from its ideal position along a radial axis of the image. Because an image has the largest field angles in the display corners, the corners exhibit worse radial distortion than other display areas. Radial distortion includes barrel distortion, where image magnification decreases towards the corners, and pin cushion distortion, where the magnification increases towards the corners. Lens related distortions including radial distortion can cause image deformation. Distortion can also result from non-flat screens or the Earth's magnetic field. Shortening the projection path, often done to thin a rear projection system, also increases distortion. The non-uniform projection characteristics of a display system may be represented by a "distortion map" which is based on a single pixel or a group of pixels. A "tessellation map" for characterizing the display distortion may be made up of non-uniform triangles where the mesh of triangles is denser where the amount of distortion is highest.

[0010] Image modulators have a fixed number of pixels spaced uniformly in a pattern. Projecting an image from an image modulator to a display screen deforms the uniformity of pixel spacing. In other words, pixels are not correlated one-to-one from a sub-area of the image modulator to the corresponding sub-area of the display screen. Therefore, some screen display areas have more image modulator pixels than screen pixels while other screen display areas have fewer image modulator pixels than screen pixels.

[0011] For panoramic displays, motion artifacts appear where image objects move near the edges of curved screens. Even when a flat screen projection is motion-adaptive filtered, the difference in the distances of objects from the projector causes an apparent motion of moving objects on a curved screen. Additionally, extremely large curved screen systems can achieve the necessary resolution and brightness only with multiple light sources. Even very large flat screens typically require multiple light sources to accurately control brightness and color. Control of multiple light sources becomes a critical factor in achieving high quality images for large screen format displays.

[0012] During the production of content, multiple camera systems are commonly used to improve display quality on curved screen and very large screen displays. For example, two cameras record overlapping halves of a scene to increase the content of the scene being captured. The increase of content results from capturing a wider scene and different angles of the same scene. A layered coding technique may include a standard codec stream as a base layer and enhancement information as a supplemental layer. Even if the two views are from slightly different angles, the compression ratio for the two camera views combined is less than the total compression ratio would be if each view were captured and compressed independently. Additionally, the second camera can provide a view that may be occluded from the first camera.

[0013] Systems using additional camera angles for different views can provide additional coded and compressed data for later use. Multiple camera systems can also compensate for the limited focal depth of a single camera and can substitute for the use of a depth-finding sensor which senses and records depth information for scenes. Multiple cameras can also be used to allow a more interactive playback environment where the user can either choose the angle from which to watch a scene, zoom in or out of a scene, or, as a result of game play or similar controls, be shown a different view of a scene.
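The radial displacement described in paragraph [0009] above, from which a per-pixel "distortion map" can be built, can be illustrated with a minimal sketch. The quadratic model and the coefficient k below are illustrative assumptions, not measured display data from this disclosure (k < 0 behaves like barrel distortion, k > 0 like pin cushion):

    # Map an ideal pixel position to its radially displaced position.
    def distort(x, y, k, cx=0.5, cy=0.5):
        dx, dy = x - cx, y - cy          # offset from the optical center
        r2 = dx * dx + dy * dy           # squared radial distance
        scale = 1.0 + k * r2             # magnification varies with radius
        return cx + dx * scale, cy + dy * scale

    # A per-pixel distortion map for a small modulator, in normalized coordinates:
    w, h = 8, 6
    distortion_map = [[distort(i / (w - 1), j / (h - 1), k=-0.12)
                       for i in range(w)] for j in range(h)]

Because the displacement grows with the squared radial distance, such a map reproduces the text's observation that the display corners, having the largest field angles, exhibit the worst distortion.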
[0014] Stereoscopic video capture also uses multi-camera systems in which a first camera records a left-eye view and a second camera records a right-eye view. Because camera lenses focus at a certain distance, one camera uses one focal plane for all objects in a scene. A multi-camera system can use multiple cameras each to capture a different focal plane of a single scene. This effectively increases the focal depth. Digital image processing can further improve focusing for these multi-camera systems.

[0015] Types of three dimensional binocular display systems include anaglyph displays, frame sequence displays, autostereoscopic displays, and single and multi-turn helix displays. These normally have multiple camera data channels. Anaglyph systems usually require a user to wear red and green glasses so that each eye perceives a different view. Frame sequencing systems use shutter glasses to separate left and right views. Autostereoscopic displays use lenticular lenses and holographic optical elements. Single or multi-turn helix displays use multiple semi-transparent display screens which can be seen by multiple observers without special glasses.

[0016] Multiple camera systems can benefit from image processing during both the capture and the playback of the data. During the capture, the different camera images can be registered to each other to create a combined video. The registration gives a way to relate the video from the cameras to each other and to the viewer. In a more sophisticated capture system, the various video images can be captured into a 3D environment where a 3D triangle map of the video images is used to make a video database. During playback, the view seen by the user can be manipulated and, by using the multiple video registration or the 3D triangle map, can be manipulated to give the user the best view. Since the user viewpoint at times will be unique compared to the captured video, the display output system can manipulate the video data via geometric transforms to produce a new view for the user which was not one of the captured views.

[0017] Because of inherent response characteristics of LCD materials, LCD-based projection and direct view display systems each have unique flicker characteristics and exhibit different motion artifacts. Further, LCDs switch in the finite time it takes to change the state of a pixel. Active matrix thin film transistor (TFT) displays, which have an active transistor controlling each display pixel, require a switching time related to the LCD material composition and thickness, and to the techniques of switching. The transitions from one state to another are also not necessarily linear and can vary depending on the sequence of pixel values.

[0018] The ability to accurately portray each color component for each pixel at each screen location is another desired function of a high quality display device. Since a display output pixel typically consists of a triad (red, green, blue) or other combination of color components, each color component for each pixel is typically made up of multiple sub-pixels. However, the term "pixel" is also used to describe the triad of sub-pixels. Each color component of the pixels mapped to the screen display pixels may transfer non-uniformly where some pixels end up too bright and others end up not bright enough. This non-uniformity may be a function of the light path characteristics, the mirrors, the lenses, the lamps or any combination thereof. Similarly, for LED backlit FPDs, the spacing and positioning of the LEDs may cause non-uniform brightness patterns where the image modulator pixels closer to the LEDs appear brighter than the image modulator pixels further from the LEDs. In the case of colored LEDs or LEDs with color filters, the modulator pixels may be further affected not just with brightness differences, but with color uniformity differences. The brightness of LEDs can be modulated using various Pulse Width Modulation (PWM) techniques or varied by adjusting the voltage or current. The LEDs may be controlled collectively, by proximity in groups, by color, by some combination, or, in the most sophisticated case, each LED can be controlled individually.

[0019] Display systems typically include various user controls for color, brightness, white balance (color temperature) and other settings. Users adjust the controls based on preference, ambient lighting conditions, and type of content (video versus computer). Over time, lifetime effects of the display may require the user to make further adjustments or the FPD may automatically make such adjustments. The user adjustments are typically on the entire system, not on subsections of the display, though factory trained technicians using instruments could perform other more sophisticated adjustments. Some display systems include capabilities to measure light output feedback from both lighting sources as well as the ambient lighting conditions. Such a feedback system could be used to compensate for temperature and lifetime effects of the display components and the changing ambient light conditions. The timescale for such adjustments will vary based on the specific measurements and the criticality of the adjustments. Adjustments based on the type of content may be made in multiple seconds, while frame to frame adjustments for dynamic control need to be done much faster, on the order of milliseconds. Adjustments based on ambient light conditions may naturally vary over the time of day as well as with changes to the lamps and other light sources in a room.

[0020] FIG. 1A shows a front view of an LED array 102f made up of a structure of multiple LEDs 104. Each LED is a light source which can be either a white LED or one of a variety of colors such as red, green or blue. Other LED colors are also used and the color may be part of the LED structure, may be a color filter as part of each LED or may be a filter mechanism for the array of LEDs. The array of LEDs may be arranged in a pattern of colors so as to produce a controllable and consistent light source that, when combined with an image modulator, is capable of producing the full gamut of display colors.

[0021] FIG. 1B shows subsystem 100 which is a side view of an LED array 102s structured from multiple LEDs 104, a multi-lens system 108 and 112, and a microdisplay imager or image modulator 110 that may use a variety of light modulation techniques. The LEDs may be white or multi-colored and an additional color system (not shown), such as a color wheel, may also be included in system 100. In one example system, lens 108 concentrates the light through the microdisplay 110, and lens 112 controls the light projection. The projection system may be front or rear projection and use other lenses and mirrors, dichroic recombiners and filters to project the image on a screen (not shown). Microdisplay 110 may be either transmissive, passing modulated amounts of light through each pixel, or reflective, reflecting modulated amounts. The resolution of the microdisplay 110 is typically much higher than the resolution of the components that make up the light source.
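As a concrete illustration of the PWM brightness control summarized in paragraph [0018], the sketch below converts per-LED brightness targets into duty-cycle counts. The 8-bit PWM resolution, the group layout and the dictionary representation are assumptions made for illustration, not details taken from the disclosure:

    # Convert a 0.0-1.0 brightness target to a PWM duty-cycle count.
    def brightness_to_duty(brightness, pwm_bits=8):
        max_count = (1 << pwm_bits) - 1          # 255 for an 8-bit PWM timer
        return max(0, min(max_count, round(brightness * max_count)))

    # LEDs may be driven collectively, in groups, by color, or individually:
    led_groups = {"red": [0.80, 0.75], "green": [0.60, 0.60], "blue": [0.90, 0.85]}
    duty = {color: [brightness_to_duty(b) for b in levels]
            for color, levels in led_groups.items()}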
[0022] FIG. 1C shows subsystem 140 which is a side view of an LCD panel system where a backlight 122 is made up of either strips 122 or an array 102 of LEDs. The LCD display or image modulator 130 typically has a much higher resolution than the resolution of the components that make up the light source. Exceptions include OLEDs, where each LCD pixel has a light source as part of the pixel and as such the resolution of the light sources is the same as that of the display, and NEDs, where carbon nanotubes are arranged such that a properly applied voltage excites each nanotube which in turn bombards the color phosphors for the pixels or subpixels with electrons and "lights them up." The brightness and color wavelength is based on the quantity of electrons, the phosphors used and a variety of other factors.

[0023] For LED backlit displays, various filters, color and light diffusion gradients, Brightness Enhancement Films (BEF), diffusion plates, light guides and mixing light guides 126 may be used as an option to improve the light uniformity for the display. Some backlit displays may also include one or more optical sensors (not shown in FIG. 1) that can detect changes in brightness and wavelength (see Hamamatsu product brief).

[0024] Most LCD-based image modulators (110 and 130) are addressed in raster scan fashion and each pixel is refreshed during each display frame interval. Accordingly, every output pixel is written to the display during every refresh cycle regardless of whether the value of the pixel has changed since the last cycle. Each R, G, and B color component normally has a different intensity value which is digitally represented by a number of bits. For example, if 8 bits represent each R, G, and B color component, then each component has 2^8 (=256) intensity values from 0 to 255. Changing the intensity value of a color component in an ideal digital device from a number X, for example, to a number Y, takes just as long regardless of the Y value. So in an ideal system, changing a color component value from 2 to 3 takes as long as changing the value from 2 to 200. However, because of the nature of LCD image modulator pixels, the transitions for modulating light intensities are not purely digital, and are not necessarily linear. U.S. Pat. No. 6,340,994 by Margulis et al. teaches Temporal Gamma Processing (TGP), which assures that the time-related representations of an image are as accurate as possible, based on a previous frame value and a known transfer function of the display modulation system, and adjusting its output to a desired value during display of a desired frame.

[0025] The light sources for a modulator based system may not be able to accurately and uniformly reproduce the intended color gamut, and different modulator pixel positions on the screen may be affected by the light sources differently, causing a display of non-uniform color and brightness. Additionally, the light sources' output characteristics may change over time and need to be compensated for. The display system may include appropriate sensors to determine the status of the light sources.

[0026] Therefore, for all the foregoing reasons, what is needed is an image processing system to effectively enhance display quality by controlling both the image modulator and the light sources and thereby provide better visual images.

SUMMARY OF THE INVENTION

[0027] The present invention provides an apparatus and method for performing image processing that controls both the image modulator and the light sources to enhance display quality.

[0028] The image processing system considers user inputs, system configuration and design information, sensor feedback, pixel modulation information, and lighting control information to characterize the display environment on a per-pixel basis. This per pixel characterization information is combined with one or more frames of the incoming real time display data. The image processing system processes each incoming pixel and produces a modified corresponding output pixel. Each pixel of each frame is processed accordingly.

[0029] In addition to producing modified output pixels, the image processing system produces light source control information which can control individual lamps, tubes or LEDs, or can control a block or subset of the light sources. The light sources will include the lighting elements within the display system and may include lights that affect the ambient lighting around the display as well as around the entire room.

[0030] An exemplary Flat Panel Display system has a patterned array of red, green and blue LEDs as the back light and a TFT panel as the image modulator. The 240 LEDs are individually controlled but the resolution of the TFT panel is 1600x1200 pixels. Various filters and diffusion films are used to produce color and brightness as consistent and uniform as possible across the panel. Depending on its position on the panel, each TFT pixel is affected by one or more RGB LED triplets. The Display Output Processing (DOP) for this exemplary system modulates each LED to control its brightness and adjusts the per pixel values for the TFT panel. The resulting display has improved image consistency, enhanced color gamut, and higher dynamic range, and is better able to portray high motion content.

BRIEF DESCRIPTION OF THE DRAWINGS

[0031] FIG. 1A is a block diagram of a prior art LED array of individual LED elements;

[0032] FIG. 1B is a block diagram showing a side view of a light source, microdisplay image modulator and lenses of a prior art projection system;

[0033] FIG. 1C is a block diagram showing a side view of a light source, a filter and an image modulator of a prior art flat panel display system;

[0034] FIG. 2 is a block diagram of one embodiment of the Display Input Processor (DIP) and Display Output Processor (DOP) system in accordance with the invention;

[0035] FIG. 3 is a block diagram of one embodiment of the FIG. 2 DIP 210 in accordance with the invention;
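Paragraph [0024] describes Temporal Gamma Processing choosing an output from a previous frame value and a known transfer function of the modulator. A minimal overdrive-style sketch of that idea follows; the +25% boost on transitions is an invented stand-in for a measured LCD response model, not the actual transfer function taught by U.S. Pat. No. 6,340,994:

    # Precompute drive values for each (previous value, desired value) pair.
    def build_overdrive_table(levels=256):
        table = [[0] * levels for _ in range(levels)]
        for prev in range(levels):
            for want in range(levels):
                step = want - prev
                drive = want + step // 4     # overdrive in the step direction
                table[prev][want] = max(0, min(levels - 1, drive))
        return table

    od = build_overdrive_table()
    # A large 2 -> 200 transition is driven harder than a small 2 -> 3 step:
    assert od[2][200] > 200 and od[2][3] == 3

A lookup table of this shape addresses exactly the non-ideality the paragraph notes: on a real LCD, a 2-to-200 change does not complete as quickly as a 2-to-3 change unless the drive signal compensates.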
[0036] FIG. 4 is a block diagram of one embodiment of the FIG. 2 DOP 230 in accordance with the invention;

[0037] FIG. 5 is a block diagram of one embodiment of the FIG. 4 Geometric Transformation 404 in accordance with the invention;

[0038] FIG. 6 is a block diagram of an LED based backlight system for a Flat Panel Display including light sensors for feedback;

[0039] FIG. 7 is a block diagram of a System-On-Chip embodiment combining various video and graphics processing with display processing and the light source processing; and

[0040] FIG. 8 is a flowchart of steps in a method of processing images in accordance with one embodiment of the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0041] The present invention provides improvements in electronic image processing technology and an image processing apparatus for use in an image display system. The apparatus includes a display output processing module that controls both a digital image modulator and one or more light sources illuminating the image modulator.

[0042] FIG. 2 shows an image processing system 200 which includes a Display Input Processor (DIP) 210 and a Display System 240 including a Display Output Processor (DOP) 230, with a common databus 250 coupling the DIP 210 and the DOP 230. Display System 240 also includes an image modulator 244 (comparable to FIG. 1 modulators 110 and 130) coupled to a display screen 260 and via modulator drivers 242 to DOP 230. DOP 230 also includes light source drivers 252 coupled to the display light sources 270 and receives display sensor feedback via path 248 from optical sensors 246. DIP 210 preferably receives image data on line 2050 and reconstructs images both spatially and temporally. DIP 210 outputs are processed by DOP 230 to enhance visual quality of images and perform display-specific processing.

[0043] Image modulator 244 can be part of an LCD-based direct view system such as a TFT display where the display screen 260 is an integrated part of a flat panel display and the modulator drivers 242 control the screen pixels. DOP 230 may also include Timing Controls (TCON) for a flat panel display. If image modulator 244 is part of a projection system, then image modulator 244 provides images to be projected and enlarged onto display screen 260. In a projection system, image modulator 244 is more likely a relatively small (inches) microdisplay element and may be either stationary or movable. The modulator drivers 242 may be relatively simple or, as in the case of a DLP projection system, include complex processing specific to the microdisplay.

[0044] The choice of the lighting elements for both FPD direct view screens and for microdisplay based projection systems is a key element to achieving high display quality. In addition to providing the overall brightness to the display, a lighting system of controllable light elements according to the invention can be combined with image modulation techniques to achieve improved display results. With advanced image processing and by controlling both the light elements and the image modulator, digital displays with high dynamic range and an expanded color gamut can be developed. The human eye can often perceive high dynamic range more readily than higher resolution alone; thus a higher apparent quality display can result from a higher dynamic range. LCD material is not able to fully block out very high intensity light, making achieving proper black levels another problem that can be relieved with intelligent processing and controls of the light sources.

[0045] Projection systems using scanning lasers with the ability to adjust the light intensity at different points along each scan can produce improved dynamic range displays. Even though the scanning laser may not fully vary the light intensity across the full light range for each pixel, the varied light intensities at the lower resolution can be combined with a full resolution microdisplay to produce a high dynamic range projection system. Multi-wavelength scanning lasers can combine with the microdisplay for a high dynamic range projection system and can also be used to enhance the color gamut. Enhancements for motion portrayal can also be included in the controls for a scanning laser where the scan path of the laser light illumination is coordinated with the scanning of the microdisplay image.

[0046] Because the control of the lighting elements is so critical, in addition to processing the pixels for the image modulator 244, DOP 230 performs the processing for controlling the display light sources 270. In the case of a single lamp, the processing may be very simple, though the signaling may still pass through a light source driver 252 for isolation from a high voltage lamp. In the case where the display light sources 270 include multiple individually controlled elements, such as with an LED array, DOP 230 may perform extensive processing to determine intensity levels for each of the LEDs. Based on the levels determined by DOP 230, the light source drivers 252 can control brightness signaling for a group of LEDs or for each individual LED. Brightness may be controlled via analog current or voltage techniques, or via a digital Pulse Width Modulation (PWM) means.

[0047] In one example, a projection system may use an array of individually controlled LEDs as the light source combined with a DLP microdisplay as the image modulator. The LEDs may be white and be used along with a color wheel or other color system, or multi-color LEDs may be used. Hybrid lighting systems consisting of multiple planes of lighting may also be designed. For example, a UHP lamp can be combined with controlled LEDs, or an LED backlight system can be further enhanced with OLED, NED or other FED based imaging planes. Special location based systems and game machines may combine projection systems with other light sources to achieve different effects. Enhanced environments can be created by combining various light sources, such as lasers and strobe lights, both within the display and within the viewing environment.

[0048] Optical Sensors 246 of various types may detect light levels and light wavelengths within the display system and may also detect ambient external light conditions. Sensors may be single wavelength based or may be tristimulus sensors that more accurately mimic the behavior of the human eye. Other more sophisticated optical sensor systems may include imaging sensors, such as cameras that record the images produced by a projection system. The cameras may be part of the assembled display product, or may be used in the production and factory setting of the system and not shipped as part of the product. The optical sensors 246 may require various conversions from analog to digital as well as other signal processing steps that may be performed within the optical sensors 246 module, with an external processing module (not shown) or within the DOP 230. The sensor feedback may be parameterized separately or incorporated into the general display parameters.
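A minimal sketch of the kind of per-LED intensity computation that paragraph [0046] attributes to DOP 230: each LED's level is taken from the peak luminance of the modulator pixels it illuminates, so dark zones get dimmed LEDs. The 4x4 zone layout and the max-based policy are assumptions for illustration only:

    # frame: 2D list of luminance values in 0.0-1.0, with dimensions
    # assumed divisible by the LED grid for simplicity.
    def led_levels(frame, led_rows=4, led_cols=4):
        h, w = len(frame), len(frame[0])
        zh, zw = h // led_rows, w // led_cols
        levels = [[0.0] * led_cols for _ in range(led_rows)]
        for r in range(led_rows):
            for c in range(led_cols):
                zone = [frame[y][x]
                        for y in range(r * zh, (r + 1) * zh)
                        for x in range(c * zw, (c + 1) * zw)]
                levels[r][c] = max(zone)   # peak luminance of the zone
        return levels

Levels of this kind would then be handed to the light source drivers 252, while the per-pixel modulator values are adjusted to compensate for the locally reduced backlight.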
[0049] Both the DIP 210 and the DOP 230 may have one or more memory buffers, frame buffers, line buffers or caches that can be used by their respective processing. The use of memory buffers is an important consideration as it can affect cost and performance in the design of a system. Of particular note is the use of pipelining operations to reduce overall latency through the system. Where possible, line buffers are used for on the fly processing and frames are stored for processing of subsequent frames. If processing of a current frame requires full access to that frame, then a full frame of latency is added.

[0050] DIP 210 and DOP 230 may process image data in a high-precision internal format to preserve detailed image information, because such information can be lost in each of the various image processing steps if the internal image format has lower precision than the output of image modulator 244. DIP 210 and DOP 230, for example, can produce and maintain a processed image having four times (doubled vertically and horizontally) higher pixel resolution than the spatial resolution output of image modulator 244. Similarly, the internal format for each color component may be maintained at 32 bits or more, such as with floating point values, during the image processing even though the image modulator may be limited to 8 bits per color component. Pixel address information may also be processed with much greater precision, using an extended integer or floating point representation, than will be ultimately output to the image modulator 244. In a later processing step, the higher internal resolution, color representation and pixel address information may be dithered or otherwise processed to match the required output resolution and format of the image modulator 244.

[0051] FIG. 3 is a block diagram of FIG. 2 DIP 210 which includes image processing modules Analog Input Control 302, Digital Input Control 304, Compressed Input Control 312, and Image Reconstruction (IR) 318, all connected to a common databus 350. Interface 306 can isolate the DIP 210 from the DOP 230 and utilize an interface databus 250 that may follow an industry standard or proprietary format. DIP 210 also includes one or more input data interfaces 300 for receiving image data input on line 2050 from system 200. The image data may include one or more of analog video, digital video, non-tuned data, graphics data, or compressed data. Analog video data may be in a native video format such as composite video, S-video, or some component YUV/YCrCb. Non-tuned data, received from a broadcast delivery system that may have many channels on a common carrier, may require a tuner included in or separate from DIP 210 so that relevant data can be tuned from the channels.

[0052] Digital input data may be in an RGB data format or a YUV based video format. Compressed data may be encoded in MPEG-2, MPEG-4 including H.264, wavelet encoded or another compressed format, which may include video and audio content. The compressed format may include subband encoded data that includes multiple resolutions of the input image frames. Input data may be in a variety of standard and high definition field or frame based formats that also may differ in the aspect ratio of the input image and may differ in the frame rate of the input image. Image data on line 2050 may be encrypted for security and thus require decryption by DIP 210.

[0053] DIP 210 also receives, accompanying the image data, various control data including for example selected inputs, data types, Vertical Blanking Interval (VBI) data, and overlay channel information for the On-Screen Display (OSD), and provides this control data to DOP 230. Each of the image processing modules, Analog Input Control 302, Digital Input Control 304, and Compressed Input Control 312, preferably receives image data from interface 300. A system processor (not shown) preferably uses user-selected input controls to select image data, which is appropriately processed by each of modules 302, 304, and 312, and then preferably stored in buffer memory 308. The system processor also uses the user input commands to control windowing for picture-in-picture displays, OSD information, and other system windowing capabilities. DIP 210 preferably processes images in either YUV or RGB formats.

[0054] Analog Input Control 302 preferably includes an Analog-to-Digital Converter (ADC) 3002 which samples the analog data inputs and produces digital data outputs. ADC 3002, to achieve high quality, samples its input data frequently and precisely enough that the image can be reconstructed from the sampled data points. Additional prior art techniques for sub-carrier demodulation are used to extract the video data from the analog input signal.

[0055] Digital Input Control 304 preferably includes a synchronization engine 3040 and processes digital data, which may be in a YUV video or a digital RGB format. Since the data is already in digital format, Digital Input Control 304 does not include an ADC. Digital Input Control 304 also uses high-speed digital data transmission techniques and may include physical or electrical interfaces such as Low Voltage Differential Signaling (LVDS), Digital Visual Interface (DVI), High Definition Multimedia Interface (HDMI), Video Electronics Standards Association (VESA) DisplayPort, or Serial Digital Video Output (SDVO). These interfaces may include line termination, voltage control, data formatting, Phase Lock Loops (PLLs), and data recovery to assure that Digital Input Control 304 properly receives the digital data input. Other packet based inputs such as VESA Digital Packet Video Link (DPVL) may also be supported over the DVI or HDMI inputs. Other packet and descriptor based inputs may be supported over a network input.

[0056] Compressed Input Control 312 may support interface inputs such as a version of 1394A, 1394B or another 1394 format, Universal Serial Bus (USB) or a type of network interface, and preferably includes a decompression engine 3120 and a Bitstream Engine 3125. Network interfaces may include wired interfaces such as 10/100 Ethernet, Gigabit Ethernet, Multimedia Over Coax Alliance (MOCA), Home Phone Network Alliance (HPNA), various powerline based networks, or some other type of wired interface. Wireless networks may include WiFi (also known as 802.11 with A, B, G, N, I and numerous other variations), Ultra Wide Band (UWB), WiMAX or a variety of other wireless interfaces. In each case, a network interface (not shown) manages and terminates the network data traffic to provide a digital bitstream to the input data interface 300. The data may be additionally formatted either inside or outside the DIP.
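Paragraph [0050] describes maintaining a high-precision internal format and later dithering it down to the modulator's 8 bits per color component. The sketch below shows one way such a reduction could work; the simple one-dimensional error diffusion is an illustrative choice, not a method specified by this disclosure:

    # row: list of float color values in 0.0-1.0 (the internal format).
    def dither_row_to_8bit(row):
        out, err = [], 0.0
        for v in row:
            target = v * 255.0 + err       # carry quantization error forward
            q = max(0, min(255, int(round(target))))
            err = target - q               # diffuse the residual to the next pixel
            out.append(q)
        return out

    print(dither_row_to_8bit([0.3, 0.3, 0.3, 0.3]))   # -> [76, 77, 76, 77]

The alternating output preserves the average value 76.5, which a plain per-pixel truncation to 8 bits would lose.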
[0057] The compressed data will usually include audio, video, and system information. System information may identify the bitstream format. Compressed Input Control 312, by performing additional steps such as error checking of coded input, assures that it properly receives the data and that the data is not corrupted. If the data is corrupted, Compressed Input Control 312 may correct, conceal or report the corruption. Compressed Input Control 312, once having correctly received the data, de-multiplexes the data into audio, video, and system streams, and provides the audio streams to an audio subsystem (not shown) for decoding and playback. Compressed Input Control 312 decompresses an encoded bitstream input, but retains relevant motion vector information for use in further processing.

[0058] Bitstream Engine 3125 may be combined with Decompression Engine 3120 to optimize reconstruction of compressed input bitstreams into enhanced video frames. If the compressed input bitstream was encoded using subbands, the reconstruction steps preserve the highest quality subband images at each resolution that was encoded. The bitstream information may include compliant video coded bitstreams, bitstreams with side information, layered codings for video and special bitstreams that have additional detail information leaked into a compliant bitstream. Lower layer coded data can reveal object shapes and other information that can be exploited to provide enhanced spatial and temporal rendering of blocks constituting images. Decompression engine 3120 can perform the prior art steps of decoding a standards-compliant bitstream into a decoded frame.

[0059] Bitstream Engine 3125 processes MPEG-2 bitstreams including the image blocks (or macroblocks). Most video frames within a sequence are highly correlated and Bitstream Engine 3125 exploits this correlation to improve rendering. Bitstream Engine 3125 also employs motion estimation techniques for motion compensated prediction as a method of temporal processing across image frames. Bitstream Engine 3125 can track the flow of video data prescribed by the prediction blocks belonging to the macroblocks within the bitstream, rather than re-estimating motion or creating the macroblocks similarly to a second pass encoding process.

[0060] Bitstream Engine 3125 tracks the prediction blocks over several frames in which the temporal path of the prediction blocks delineates a coarse trajectory of moving objects. This coarse trajectory can be refined by additional sub-block motion estimation and bitstream processing performed either in the Bitstream Engine 3125 or by the Motion Estimator 3180. Bitstream Engine 3125 preserves the motion vector information for later use in generating DOP 230 output frames in conjunction with motion compensated temporal filtering and other filtering and enhancements related to color representations. The information can also be used for constructing a special block filter for post decompression filtering of the coded input stream so that IR 318 can filter artifacts of block boundary edges.

[0061] Other information from the Bitstream Engine 3125, such as the error terms for the macroblocks, indicates how much quantization took place for each block, and the IR 318 enhancement steps can utilize this information for noise reduction filtering and any color enhancement processing.

[0062] Each input control block may include its own line buffers and memory for local processing and each may make use of a common memory 308. Memory 308 receives data from Analog Input Control 302, Digital Input Control 304, and Compressed Input Control 312, and provides the data to Image Reconstruction 318. Memory 308 also stores IR 318 input frames and output frames. The Image Reconstruction 318 can provide the data through interface 306 to databus 250, or a Direct Memory Access (DMA) engine may be used to transfer the data to databus 250. Databus 250 may be used to transfer data to DOP 230 in an expanded resolution, expanded color range or other enhanced format generated by DIP 210 which differs from the format of the input data that entered DIP 210 via 2050. IR 318 preferably includes a Motion Estimator 3180 and receives image data from Analog Input Control 302, Digital Input Control 304, Compressed Input Control 312, or from buffer memory 308. IR 318 processes data based on data types. For example, if data in YUV format requires a conversion to the RGB domain, then IR 318, through either mathematical calculations or a look-up table, converts YUV values to RGB color space and may use an extended range representation of the color component values such as 32 bit integer or floating point. Additionally, YUV data is often sub-sampled, that is, one UV pair may correspond to two or four Y values. Consequently, IR 318 uses the UV values to interpolate and create RGB pixels. If YUV data is interlaced then IR 318 converts the data from field based (sequential half frames) to frame based. IR 318 stores each field in buffer memory 308, then filters, analyzes, and combines the fields to generate an input image frame. IR 318 preferably uses the processed image frames and the motion information created by DIP 210 while the frames and the information are still in their digital format. If IR 318 processes data, such as overlay information, relevant to image modulator 244 (FIG. 2), IR 318 provides such data to DOP 230 to later be combined with the image data frames. IR 318 may process multiple input data streams in parallel and provide such data to DOP 230 to later produce a picture-in-picture display of multiple images. IR 318 also does post decompression filtering based on block boundary information included in the input bitstream.

[0063] For analog video inputs, IR 318 preferably uses techniques, for example, from Faroudja Labs, that can sample and reconstruct input video, which includes composite, S-Video, and Component (Y, Cr, Cb) that may follow one of the industry standards such as Phase Alternating Line (PAL) or the National Television Standards Committee (NTSC). To spatially filter for high quality image frames, IR 318 preferably uses various noise reduction techniques such as recursive, median filter, and time base correction.

[0064] In the present invention, IR 318 takes account of multiple input images and then, to enhance the resolution of those images, uses super-resolution techniques that employ data shared by different input frames to reconstruct an image, and thereby to produce each output frame. This cannot be done by independently using one input image at a time. The invention is thus advantageous over prior art systems which use super-resolution techniques for generating high-resolution still images from a video sequence, but not for generating real time output frames. This multiframe super-resolution technique can generate an image data superband that represents the input image with higher, typically double, the resolution of the input data. The super-resolution techniques used by the invention depend on a high correlation of the data between frames, and require a sub-pixel shift of the input images which is typically based on the movements of objects in the video image sequence. IR 318, in correlating images to reconstruct output frames, uses motion vectors provided by Motion Estimator 3180 or preserved from the input bitstream. IR 318, while generating still frames, can use mathematical equations from, for example, deterministic techniques of Projections On Convex Sets (POCS) and stochastic techniques of Bayesian enhancements.
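Paragraph [0062] has IR 318 convert YUV values to RGB "through either mathematical calculations or a look-up table". A minimal calculation version using the standard BT.601 conversion equations follows; the patent does not specify which matrix IR 318 uses, so the coefficients here are the common textbook values rather than details of the disclosure:

    # y in 0-255; u and v in 0-255 with 128 representing zero chroma.
    def yuv_to_rgb(y, u, v):
        d, e = u - 128.0, v - 128.0
        r = y + 1.402 * e
        g = y - 0.344136 * d - 0.714136 * e
        b = y + 1.772 * d
        clamp = lambda x: max(0, min(255, int(round(x))))
        return clamp(r), clamp(g), clamp(b)

    # With 4:2:0 sub-sampling one U,V pair serves two or four Y values,
    # so the chroma samples are reused (or interpolated) across neighbors:
    print(yuv_to_rgb(128, 128, 128))   # mid gray -> (128, 128, 128)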
[0065] When an image does not include motion vector bitstream information, Motion Estimator 3180 preferably uses techniques such as optical flow, block matching, or Pel-recursion to estimate motion that tracks the image object motion in time. Motion Estimator 3180 can also use the same motion estimation techniques in conjunction with MPEG-2 motion vector bitstream information. Motion Estimator 3180 compares groups of pixels from one image field to those of subsequent and previous image fields to correlate object motion. Motion Estimator 3180 then records the detected motion relative to the field position so that DOP 230, together with input frame information and IR 318 motion information, can later generate motion-compensated image frames. For compression systems, Motion Estimator 3180 finds the best match between frames, then codes the mismatches. Motion Estimator 3180 masks out motion vectors that do not meet a certain level of matching criteria, and tags the vectors that have a high level of matching so that these vectors can subsequently be used in more refined motion tracking operations, which are performed on smaller image blocks or on individual pixels. Motion Estimator 3180 thus differs from prior art techniques in which video compression systems use the detected motion as one of the steps to compress the number of bits needed to represent a video sequence. Motion estimation is not used in a standard compliant decoder that simply performs motion compensation using the coded motion vectors and macroblocks. Consequently, the invention, via Motion Estimator 3180, advantageously provides better quality images than prior art techniques.

[0066] Because detecting motion is important in restoring images, Motion Estimator 3180 (and other processing modules according to the invention) tracks motion on a sub (or smaller) block basis. For example, instead of on an 8x8 (pixels) block, Motion Estimator 3180 tracks more refined motions on a 2x2 block. To reduce the need to track refined sub-blocks, Motion Estimator 3180 uses the coarse block matching differences to pre-qualify a block, and thus does not perform refined tracking on blocks that are poor matches. Conversely, Motion Estimator 3180 does perform refined tracking on blocks that are close matches.

[0067] When receiving motion estimation vectors, such as those provided in an MPEG data stream or another temporal encoding scheme, Decompression Engine 3120 uses all of the vectors for compliant MPEG decoding. IR 318 then uses vectors with better block matching in analyzing refined motions for restoring multiple frames. Analyzing refined motions can produce motion vectors for sub-block pixel sizes, which can be used in multiframe reconstruction to better produce high resolution output frames.

[0068] IR 318 preferably separates its output images into video fields or frames, and creates a pointer to the start of each field (or frame). Either the actual field (or frame) data or a pointer to the field (or frame) data may serve as inputs to DOP 230. Processing input video fields and producing frames that combine fields is useful for de-interlacing video in the image reconstruction process, which in turn is useful for increasing image resolution and for restoring the vertical detail that was lost during interlacing, as the technique of interlacing trades off the vertical image detail in each field in order to provide half of the lines of information twice as often. IR 318 outputs (and DOP 230 outputs), having been reconstructed in accordance with the invention, can have a higher resolution than the standard input resolution and can be stored as an image superband. IR 318 outputs can be stored in memory 308 or in a metafile that includes a description of the image both in a spatial RGB frame buffer format and in a semantic description of the image objects, textures, and motions. The digital processing system of the DIP 210 utilizes techniques such as super-resolution to produce images that have higher resolution than the individual input images. Other analog techniques are used in the DIP 210 combined with the super-resolution techniques for producing the high-resolution internal representation of the images.

[0069] IR 318 also includes a subband decomposition block 3185 for generating multiple lower resolution images of just-constructed very high quality image frames from input data that was not previously subband encoded. Subbands are typically constructed at steps of one half resolution of the prior step. Subbands can be generated from any image frame including from frames that were decoded from compressed data input that was not prepared for subband encoding. The various techniques described to improve image quality, along with other filtering modes, are used to construct the highest quality image subbands. These subband images, combined with the superband image generated at higher resolution, provide the image data for the Display Output Processing (DOP) steps described with reference to FIGS. 4 and 5 below.

[0070] Other than the previously described scheme of utilizing wavelet encoded subbands of image data as the input, in accordance with the invention, a geometric transformation may also be used for processing the input data that includes other forms of layered coding video bitstreams. This geometric transformation may either be performed as part of GT 404 in the DOP, or alternately a Geometric Transform Module may be included as part of the Image Reconstruction 318 in order to reconstruct input video frames. One technique for tracking image flow is to compare the coefficient data of the input bitstream to find the same patterns across time. If the same pattern is found, it may represent the flow of an object across the frames. This technique may be applied to single camera or multiple camera systems.

[0071] With layered coding, the conjecture of image flow can be further tested in the different layers and different camera views to either confirm or reject the conjecture. Layered video coding is a technique for scalability which, for example, transmits multiple resolutions of video bitstreams where the higher resolutions utilize the bits from the lower resolution transmissions. In this coding technique, a lower resolution decoder can utilize just the portion of the bitstream required to generate the required resolution. A decoder that requires higher resolution will use the bitstream of the lower resolution and of the additional layers in order to create the higher resolution image.
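The pre-qualification strategy of paragraph [0066] — use a coarse 8x8 block match to decide whether refined 2x2 tracking is worth performing — can be sketched as below. The sum-of-absolute-differences metric and the threshold value are assumed details for illustration, not taken from the disclosure:

    # Sum of absolute differences between two equal-size 2D blocks.
    def sad(a, b):
        return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

    def refine_if_close(block_cur, block_ref, coarse_threshold=200):
        if sad(block_cur, block_ref) > coarse_threshold:
            return None                      # poor match: skip refined tracking
        refined = []                         # close match: track 2x2 sub-blocks
        for y in range(0, 8, 2):
            for x in range(0, 8, 2):
                sub_c = [row[x:x + 2] for row in block_cur[y:y + 2]]
                sub_r = [row[x:x + 2] for row in block_ref[y:y + 2]]
                refined.append(((y, x), sad(sub_c, sub_r)))
        return refined

The early exit is the point of the technique: the expensive fine-grained comparison runs only on the blocks the coarse pass has already shown to be promising.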
Feb. 15, 2007
US 2007/0035706 A1
for example, transmits multiple resolutions of video bit
techniques using standard decoders that do not keep previ
streams Where the higher resolutions utiliZe the bits from the
loWer resolution transmissions. In this coding technique, a
loWer resolution decoder can utiliZe just the portion of the
decode and display output frames. Additionally, the standard
ous ?elds and frame information any longer than required to
decoder cannot recogniZe the instructional cues across GOPs
bitstream required to generate the required resolution. A
decoder that requires higher resolution Will use the bitstream
of the loWer resolution and of the additional layers in order
to create the higher resolution image.
and Will just utiliZe motion vectors for the best match Within
adjacent frames. Also, While the enhanced decoder of the
[0072] Layered coding techniques may include other types
video bitstream in a standard-compliant manner. Instruc
of compressed data, such as Wavelet data, to enhance a base
level transmission. For example, Wavelet data may be
included as a layered stream of data. Wavelet data does not
use the same Discrete Cosine Transform (DCT) compression
scheme as the standard video portion of MPEG video data.
As part of the MPEG syntax, the Wavelet data could be
coded as a private video data stream, or could be part of the
video program stream and indicated in the program header
information. The Wavelet information represents a higher
resolution image for a complete or partial frame for some or
all of the MPEG frames. When an MPEG frame that has
invention can use the instructional cues to achieve a higher
quality display output, the standard decoder can use the
tional cues require adding only a minor amount of data to the
bitstream.
[0075] FIG. 4 is a block diagram of FIG. 2 DOP 230,
Which includes the processing blocks for a projection system
along With the processing for a display modulator and light
source controls. Speci?c to a projection system is the
Geometric Transformation 404 and Post GT ?ltering 406.
The other processing modules including color, intensity and
Spatial Gamma Processing (SGP) 408, Temporal Gamma
Processing (TGP) 412, display Modulator and Light source
Controls (MLC) 420 and Reverse Super-Resolution (RSR)
corresponding Wavelet information is decoded, the IR 318
414, are all connected to a common databus 450. Databus
combines the MPEG data With the Wavelet data. Because of
the different characteristics of DCT and Wavelet-based com
450 satis?es system bandWidth and concurrency require
ments for parallel image processing. DOP 230 also includes
buffer memory 424, Which stores data frames and image
subbands for use by each of the processing modules 402,
pression, the combination is used to produce a single high
quality output frame.
[0073] Another example of layered coding is Where
supplemental bitstream data includes motion estimator
information that is an enhancement beyond the standard X
and Y macroblock motion estimator vectors that are part of
the MPEG standard. For example, motion estimator infor
mation that relates to the scale, rotation and shear of image
elements can also be provided as supplemental bitstream
404, 406, 408, 412, 414 and 420, although each of these
modules may include a local memory bulfer (not shoWn).
[0076]
There is also a display modulator and light source
control block 420 Which connects to the image modulator
244 via driver circuits (not shoWn) and the display light
controls 602-608 circuits and may include a Timing CON
data. For example, if a camera is Zooming in or out of a
trol (TCON) block, not shoWn. A system designed for ?at
panel display systems Would not necessarily include the
scene, improved block matching for the encoder system can
be achieved by using a scale-based compare instead of the
X and Y displacement compare. As a second example, a
Geometric Transform 404 or the Post GT ?ltering 406
processing modules. While both a projection system and a
moving object may rotate instead of move in the X or Y
?at panel display system Would typically have a Display
Map 402, the speci?c information stored and utiliZed Within
direction. A rotation compare Will have a more accurate
the Display Map 402 Would differ depending on the type of
motion estimator comparison than standard motion vectors.
Both the encoder system and the enhanced decoder system
display system.
need to use a commonly de?ned protocol to take full
[0077]
advantage of layered coding techniques. IR 318 can use the
over databus 250, via buffer memory 308 or via buffer
supplemental information relating to scale, rotation and
shear of image elements to reconstruct, preferably-using
memory 424. UtiliZing databus 250 alloWs a separation of
image transform techniques, a higher quality image from the
partitioning Where the DIP 210 may be able to perform
input bitstream. A multiple camera system combined With a
multi dimension playback system can also take advantages
knowledge of the type of display. Databus 250 may be
of a layered coding technique.
[0074] Another enhanced decoder operation of IR 318 uses instructional cues embedded in a bitstream for interpreting the video stream, so that the macroblock and motion vector information can be used to enhance output images. The advantages of instructional cues are very significant compared with extracting frame-to-frame and Group-Of-Pictures (GOP)-to-GOP correlation without the cues. Because IR 318 may maintain complete GOPs in buffer memory 308, IR 318 can utilize these cues, which provide information across fields, frames, and GOPs. For example, the enhanced decoder of the invention uses the macroblock information from two GOPs. For another example, IR 318, recognizing the enhanced instructional cues, improves image quality by using macroblock information from both a current GOP and an adjacent GOP. This is therefore advantageous over prior art systems.

[0076] There is also a display modulator and light source control block 420 which connects to the image modulator 244 via driver circuits (not shown) and the display light controls 602-608 circuits, and may include a Timing CONtrol (TCON) block, not shown. A system designed for flat panel display systems would not necessarily include the Geometric Transform 404 or the Post GT Filtering 406 processing modules. While both a projection system and a flat panel display system would typically have a Display Map 402, the specific information stored and utilized within the Display Map 402 would differ depending on the type of display system.

[0077] DOP 230 receives DIP 210 outputs either directly over databus 250, via buffer memory 308 or via buffer memory 424. Utilizing databus 250 allows a separation of the DIP 210 and DOP 230 that can allow for a convenient partitioning where the DIP 210 may be able to perform image reconstruction of various inputs without specific knowledge of the type of display. Databus 250 may be instantiated as an internal bus within a System On Chip (SOC), a bus on a Printed Circuit Board (PCB) between chips, some type of ribbon cable or connector between subsystems, or another type of interface. The DOP 230 processing could begin from the processed and reconstructed input image and specifically process the images for the specifics of the display. Alternatively, in a more highly integrated system, some or all of the DIP 210 processing elements may be combined with some or all of the DOP 230 processing elements, and the DIP 210 memory 308 may be combined with DOP 230 memory 424. DOP 230 processing could also utilize image data that has undergone wavelet or subband decomposition during the DIP 210 processing steps.
[0078] DOP 230 can use pointers (if applicable) to directly access DIP 210 output data. DOP 230 also receives multiple DIP 210 output images for performing picture-in-picture operations where a single image output frame will include more than one processed input video frame. DOP 230 combines overlay data both from the input coded data and from any On-Screen Display (OSD) information such as a user menu selection provided by the system microcontroller (not shown). DOP 230 processes its input images and outputs image data including display coordination for both video and data output, and data and control signals for each R, G, and B image color component.
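As a minimal illustration (not from the original disclosure; numpy, the array shapes and the alpha-blend rule are assumptions), a picture-in-picture composite with an OSD overlay might be sketched as:

    import numpy as np

    def composite(main, pip, osd_rgba, pip_pos=(16, 16)):
        """Place a processed PiP frame inside the main frame, then
        alpha-blend an On-Screen Display (OSD) layer on top."""
        out = main.astype(np.float32).copy()
        y, x = pip_pos
        h, w, _ = pip.shape
        out[y:y + h, x:x + w] = pip          # second processed input frame
        alpha = osd_rgba[..., 3:4] / 255.0   # per-pixel OSD opacity
        out = (1.0 - alpha) * out + alpha * osd_rgba[..., :3]
        return out.astype(np.uint8)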
[0079] Due to processing complexity or other partitioning reasons, a single DOP 230 or MLC 420 may not be able to fully control the entire display. Some systems may be designed with multiple DOPs 230 or MLCs 420 where each performs the controls for one of the color planes. In another partitioning, the light control portion of the MLC 420 is separate from the rest of the DOP 230, and multiple MLCs 420 may be used to control different color planes. In such a configuration, the display modulator controls may be performed by another block within the DOP 230, by an MLC 420 or by another controller. DOP 230 processing may also be split spatially, where multiple DOPs 230 are used for different portions of the display.

[0080] SGP 408 converts YUV to RGB color space and determines the intensity values for each of the R, G, and B color components. Those skilled in the art will recognize that a color space conversion is not necessary if the image is already in the RGB color space. SGP 408 preferably uses a look-up table, in which each of the R, G, and B color components has values corresponding to color intensities, to translate image colors. Each R, G, and B intensity value represents an index into the look-up table, and the table provides the output (or "translated") value. SGP 408 independently processes each R, G, or B color component and maps each color component based both on a combination of individual RGB values and on RGB values of surrounding pixels.
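For illustration only (a sketch, not the filing's implementation; the BT.601 conversion constants and numpy usage are assumptions), the conversion and per-component translation described above might look like:

    import numpy as np

    def yuv_to_rgb(yuv):
        """Approximate BT.601 YUV-to-RGB conversion (values 0..255)."""
        m = np.array([[1.0, 0.0, 1.402],
                      [1.0, -0.344136, -0.714136],
                      [1.0, 1.772, 0.0]])
        yuv = yuv.astype(np.float32) - np.array([0.0, 128.0, 128.0])
        return np.clip(yuv @ m.T, 0, 255).astype(np.uint8)

    def translate(rgb, lut_r, lut_g, lut_b):
        """One look-up table per color component: each 8-bit intensity
        indexes its translated output value."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.stack([lut_r[r], lut_g[g], lut_b[b]], axis=-1)

A simple identity table would be lut_r = np.arange(256, dtype=np.uint8); a real system would load calibrated tables instead.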
[0081] Some displays have non-uniform brightness and color issues that result from the design of the system. For example, if the LED light sources are not able to provide a uniform light intensity, then a mapping can be created where SGP 408 may adjust the RGB values of the pixels in the identified area and of the pixels in the neighboring area to compensate for non-uniformities of the display. Color and Spatial Gamma Processing 408 uses mathematical calculations, a Color Look-Up Table (CLUT), or a combination of the two to provide the RGB values for mapping the input pixels to the desired image outputs.

[0082] A non-linear mapping enables input colors represented by RGB values to be adjusted (emphasized or de-emphasized) during the mapping process, which is useful for crosstalk suppression and for compensation of shortcomings in a color gamut of image modulator 244. In another example, colors that were produced for a movie screen or CRT based system may require further enhancement in order to achieve the desired levels when used with an LCD system. SGP 408, to realize a non-linear relationship, uses a translation table represented by a number of bits that is larger than the number of data input bits. For example, if eight bits represent 2^8 (=256) color component intensity values, then Color and Spatial Gamma Processing 408 uses, for example, 10 bits to represent 2^10 (=1024) translated values. A system manufacturer maps 256 values to 1024 translated values.

[0083] Displays that can display both very dark black as well as very bright light colors are said to have a high contrast ratio. Achieving high contrast ratios with LCD based flat panel displays is typically difficult, as the LCD material has difficulty blocking out all of the light from very bright backlights for display images that are meant to be very dark. Additionally, display modulators may attenuate the light source for display images that are meant to be very bright. Using low reflective coatings on the display front glass is helpful in achieving darker black colors, though it might attenuate and dull the brightness and colors. In a preferred embodiment of this invention, by combining the image data processing for the image modulator 244 values and the light intensity values, the resulting display image has a higher contrast ratio and increased color gamut. Color, Intensity, Spatial Gamma Processing (SGP) 408 provides processing not only for the image modulator 244 pixel data, but also for the light intensity values. The SGP 408 operates on a per pixel basis, where the gamma processing per pixel will vary based on the parameters for each pixel location. In a simple embodiment for high contrast spatial processing, contrast for lights and darks may dynamically be increased, where blacks are made darker and whites are made brighter, with the controls for darkening and lightening including both light source modulation and image modulation.

[0084] Beyond just the traditional contrast ratio, by controlling the light levels for different display frames, and within different regions of the display for a single frame, the system is able to achieve High Dynamic Range (HDR). High dynamic range usually corresponds to the physical values of luminance and radiance that can be observed in the real world. HDR image formats are sometimes considered scene-referenced instead of device-referenced, and the SGP 408, TGP 412 and MLC 420 will process such HDR content with different algorithms to preserve the HDR. The system 200 can make use of additional bits of color information that were either transferred to the system via input 2050 or extrapolated by Image Reconstruction 318 from another type of input stream. Newer content sources such as High Definition DVD players (HD-DVD) utilizing Microsoft Windows Media, H.264 or another encoding scheme may include additional bits of color information to provide HDR content.
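As an illustrative sketch only (the square-root factorization rule and clamp limits are assumptions, not the disclosed algorithm), splitting a target pixel luminance between a regional backlight level and the modulator value might look like:

    import numpy as np

    def split_luminance(target, backlight_levels=256, modulator_levels=256):
        """Factor target luminance (0..1) into one regional backlight
        level and per-pixel modulator transmittances whose product
        approximates the target, extending contrast beyond what the
        modulator alone can deliver."""
        region_peak = target.max()  # brightest pixel in this zone
        backlight = np.clip(np.sqrt(region_peak),
                            1.0 / backlight_levels, 1.0)
        modulator = np.clip(target / backlight, 0.0, 1.0)
        return backlight, np.round(modulator * (modulator_levels - 1))

The square-root split is a common heuristic: dark regions get a dim backlight (deeper blacks), bright regions a strong one, with the modulator supplying the in-zone detail.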
[0085] In another embodiment an image has a higher resolution than the image modulator can natively support. This may be a source image or an image that has been reconstructed by IR 318 or by another means. In order to display the full resolution of the source image, special processing is performed with respect to dynamic range. The image is filtered down to a lower resolution image to match the image modulator. The enhanced dynamic range of the display is used so that the user perceives he is viewing a higher resolution image than would otherwise be possible with the image modulator and standard dynamic range.
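A minimal sketch of the down-filtering step (illustrative only; the box filter and integer-factor assumption are mine, not the filing's):

    import numpy as np

    def filter_down(img, factor):
        """Low-pass and decimate a high-resolution grayscale image by an
        integer factor to match the modulator's native resolution."""
        h = img.shape[0] // factor * factor
        w = img.shape[1] // factor * factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))  # simple box-filter average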
[0086] Temporal Gamma Processor 412 assures that the time-related representation of an image is as accurate as possible. TGP 412 thus, based on a previous frame value and a known transfer function of the display modulation system, adjusts its output values to provide a desired output value during a desired frame. TGP 412 independently processes each R, G, or B color component and compensates for modulating transition characteristics that, due to the nature of an LCD image modulator 244, are not purely digital. TGP 412 also provides the Display Modulator and Light source controls 420 with the required information to overdrive the LCD image modulator 244 to compensate for the LCD material characteristics, so that the desired output can be achieved more expeditiously. Consequently, TGP 412 overcomes the video quality limitation of prior art systems having materials that produce blurred outputs. TGP 412 can also reduce the cost of the display system, because the materials used for image modulation in prior art systems that provide faster image responses are usually expensive. Like SGP 408, TGP 412 may perform per pixel gamma processing.
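For illustration only (a sketch; the linear overdrive model and gain constant are assumptions standing in for a measured panel transfer function), per-pixel overdrive based on the previous frame value might be expressed as:

    import numpy as np

    def overdrive(target, previous, gain=0.5):
        """Drive the LCD past the target value in proportion to the
        requested transition so the slow material settles on target
        within the frame; a characterized transfer function would
        replace the linear 'gain' model used here."""
        drive = target + gain * (target.astype(np.float32) - previous)
        return np.clip(drive, 0, 255).astype(np.uint8)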
[0087] In addition to compensating via control of the image modulator 244, the Temporal Gamma Processing 412 also provides the temporal processing for the various light sources. By utilizing the temporal processing for the light sources, the response time of the image modulator 244 pixel values can be improved. For example, if a pixel value for the color green is to change from 32 to the maximum of 255, the green value for that pixel cannot be overdriven, since it is already at the maximum value. However, if there is a green LED that affects that pixel, the green LED intensity value can be effectively increased such that the viewed pixel value at that location can be 255 right away, even though the value at the imager may still not have fully transitioned to 255. Since the LEDs affect more than one pixel on the modulator, other compensation for both surrounding pixels and possibly for the other LEDs needs to be included in determining the right values for the overall image. The Color, Intensity, Spatial Gamma Processing 408 works in conjunction with the Temporal Gamma Processing 412 to achieve optimized controls for both the image modulator 244 and the display light controls 602-608.
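A minimal sketch of that idea (illustrative; the single-zone averaging and boost formula are assumptions):

    import numpy as np

    def led_boost(target, lagging, max_boost=2.0):
        """When modulator pixels cannot reach their targets within one
        frame, raise the intensity of the LED lighting that zone so the
        viewed product (LED x modulator) approaches the target at once.
        'lagging' holds the values the slow LCD will actually reach."""
        boost = np.where(lagging > 0,
                         target / np.maximum(lagging, 1), 1.0)
        # One LED covers many pixels, so apply one boost per lighting
        # zone (here: the mean) and let SGP 408 re-compensate neighbors.
        return min(float(boost.mean()), max_boost)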
[0088] For systems where the DOP 230 has knowledge of the image motion for various pixels, the SGP 408 and the TGP 412 may use motion information associated with each pixel or block of pixels in order to determine the amount of color enhancement performed. This type of processing is based on the ability of the eye to discern color information, and image resolution is affected by the amount of image motion. The motion information and subsequent color enhancement criteria may be part of the original motion vectors, error terms associated with the original motion vectors, or motion information determined from a motion estimator within the display system. By understanding the motion of the pixels, the amount of color enhancement can be increased or decreased to best optimize viewing.
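Illustratively (a sketch; the inverse-motion weighting is an assumption, not the disclosed criterion), scaling color enhancement by local motion magnitude could look like:

    import numpy as np

    def enhancement_gain(motion_vectors, base_gain=1.2):
        """Reduce color enhancement where motion is large, since the eye
        resolves less color detail in fast-moving regions."""
        speed = np.linalg.norm(motion_vectors, axis=-1)  # pixels/frame
        return 1.0 + (base_gain - 1.0) / (1.0 + speed)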
[0089] Reverse Super-Resolution (RSR) 414 performs a superset of the frame rate conversion process for converting between disparate input and output frame rates, and can improve display quality when intended display images have a higher resolution than can be supported by the number of pixels of image modulator 244. RSR 414 simulates higher resolution outputs by sequencing lower resolution images at higher frame rates, where each of the lower resolution images is slightly different and at a slightly different position. Thus, for example, RSR 414, block by block, spatially filters one frame in a video sequence having a transfer rate of X frames per second (fps) into Y RSR frames having a transfer rate of Z fps, where Z = X x Y. RSR 414 then shifts the pixel matrix representing each RSR image block. Because there are Y RSR frames, RSR 414 shifts the pixel matrix block Y times, once for each RSR frame, and each shift is by the same pixel (or pixel fraction) amount. The number of pixel fractions to be shifted depends on the physical characteristics of the display system and of image modulator 244. Where a system adjusts the position of the viewed image, the shift fraction corresponds to the physical movement of the viewed displayed image. Where there is no actual movement of the displayed image, the fractional adjustment is based on the physical nature of the display device, such as the pixel size relative to the size of image modulator 244 and the projection characteristics of the system.

[0090] RSR 414 then produces each RSR frame with a motion-compensated weighted filtered center, so that the center of the input image for each RSR frame is maintained such that no motion artifacts are introduced. A pixel-matrix weighted filtered center is the center of a pixel matrix taking account of filter weights in a filter transfer function. Filter weights, which vary depending on the filter characteristics, are the values (usually of multiplications and additions) which are combined with the input pixel values to produce the filtered image output. A filter transfer function uses filter weights to transform an input image to an output image. Output image pixels, based on a transfer function, can be adjusted to move the corresponding image. RSR 414 preferably uses image blocks having 8x8 to 256x256 pixels, where each block has uniquely processed motion information. For static images, RSR 414 produces a sequence of frame-rate-adjusted output frames based on the difference between the input and output frame rates. For motion pictures, RSR 414, at the time of the output frame, portrays the intermediate position of the image and compensates for the image motion. With increased processing, each pixel or sub-pixel will have its motion information processed uniquely.
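A minimal sketch of the subframe sequencing (illustrative only; scipy's shift function and the quarter-pixel step are assumptions about one possible realization):

    import numpy as np
    from scipy.ndimage import shift

    def rsr_subframes(frame, y_subframes=4, step=0.25):
        """From one input frame, generate Y subframes, each displaced by
        the same fractional-pixel amount, to be shown at Y times the
        input frame rate (Z = X * Y)."""
        return [shift(frame.astype(np.float32), (i * step, i * step),
                      order=1)
                for i in range(y_subframes)]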
[0091] In an alternative embodiment of the invention, the RSR 414 utilizes a stationary display (image) modulator with multiple moving light sources, and utilizes the RSR 414 processing to determine the lighting patterns. The lighting elements may move in a fixed pattern as dictated by a mechanical structure, such as one or more rotating color wheels of LEDs. Alternatively, the lighting elements may be lit in any pattern, such as by a voltage field applied to an FED or NED based backlight. The RSR processing performs lighting optimizations multiple times within a frame display to improve the dynamic range and perceived resolution of the display.

[0092] While traditional filtering techniques may be used by DIP 210, after Geometric Transformation the GT 404 has the ability to perform the mapping of the DIP 210 subband outputs to the image modulator 244 to pre-compensate for distortions that occur during projection from the modulator to the display screen. Similarly, the GT 404 can use the DIP 210 subband outputs to perform non-linear filtering and scaling to produce the luminance mapping for the light sources. Since the resolution of the light source is typically lower than that of the original image, starting from the subband data saves on the amount of computation for determining each light source value. The filtering to produce the luminance map may follow a variety of techniques, from simple weighted averaging to more complex de-convolution filtering and other types of filtering.
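For illustration (a sketch; uniform weights and the zone geometry are assumptions, and simple averaging is only one of the techniques the text names), deriving per-zone light source values from a low-resolution subband could look like:

    import numpy as np

    def luminance_map(subband, zones_y, zones_x):
        """Average a low-resolution luminance subband into one drive
        value per light source zone (simple weighted averaging with
        uniform weights; de-convolution filtering could replace this)."""
        h, w = subband.shape
        zh, zw = h // zones_y, w // zones_x
        zones = subband[:zones_y * zh, :zones_x * zw].reshape(
            zones_y, zh, zones_x, zw)
        return zones.mean(axis=(1, 3))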
[0093] In the FIG. 4 embodiment, DM 402 stores various forms of system transfer information for use by the DOP 230 to pre-compensate the image data prior to writing to the imaging device. This information varies based on the type of display system and may have any number of parameters. For example, the DM data may be a simple one-to-one map of data corresponding to image modulator 244 (FIG. 2) characteristics at chosen pixel or screen locations. DM 402, where applicable, also stores a memory description corresponding to each display pixel, or a shared description of groups of display pixels or pixel sectors. Such a description may be stored in a delta or difference format, where the complete information exists for one pixel, referred to as an anchor pixel, and the information for the surrounding pixels is a difference from the anchor pixel and requires less storage. Because the description does not change on a frame-by-frame basis, DM 402 preferably reads the description only once during the display process. DOP 230 then uses the description information to generate image frames. DM 402, when reading data, uses a set of control registers (not shown) that provide references to the data blocks.
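A minimal sketch of such a delta format (illustrative; the int16 delta width and block shape are assumptions):

    import numpy as np

    def encode_block(desc_block):
        """Store one anchor pixel description in full and the rest of
        the block as small differences from the anchor."""
        anchor = desc_block[0, 0].copy()
        deltas = desc_block.astype(np.int16) - anchor.astype(np.int16)
        return anchor, deltas  # deltas[0, 0] is zero by construction

    def decode_block(anchor, deltas):
        return (deltas + anchor.astype(np.int16)).astype(np.uint8)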
[0094] DM 402 data varies and may include, for illustrative purposes, manufacturing related information, system configuration information, and user data. Manufacturing related information may include, for example, a map of locations, usually compiled at assembly time, of defective or weak pixel display bits, correlation data of ideal radial imperfections and of optically distorted projection, and correlation data for alignment points for image modulator 244. System configuration information, through an automatic self-calibration, may include, for example, a registration map having adjustable intensity values for each R, G, and B color component and the color component pixel offset at given locations. The format for such a registration map may be based on anchor pixels, where only difference information is stored for surrounding pixels. Such a registration map may be stored in flash memory or another type of non-volatile memory.
[0095] DM 402, where applicable, preferably uses sensor techniques, such as sonar range finding, infrared range finding, laser range finding, or optical techniques of displaying and capturing known patterns, to measure distances and distortions from a projector to different parts of a display screen (not shown). The use of image sensors and digital camera technology for the capture of images from the display screen represents a good example of optical techniques. Combined with known pattern projections, sophisticated models can be built of the projection path that include X, Y and Z displacements for different pixel locations and for triangle mesh patterns. DM 402 then uses these measurements to mathematically characterize and model a projection display system. DM 402 thus allows projecting images onto a mathematical approximation of a display screen surface. User data includes user preference information such as brightness, color balance, and picture sharpness that are input by a user during a setup sequence.

[0096] In a different implementation, DM 402 includes a triangle mesh to represent the transfer characteristics of a projection system. The triangle mesh may be non-uniform, where the triangles' sizes and orientations vary over the display. The triangle mesh may be built from an importance map whose density varies in accordance with the amount of distortion for a particular area of the display. Techniques such as adaptive isosurface extraction may be used to generate the triangle mesh. Optical techniques of using cameras both inside and outside the display system can be combined with a series of known image patterns to determine the triangle mesh information. Information on the characteristics of the texture mapping hardware may also be used in determining the triangle mesh. The triangle mesh may also be modified to perform other system functions, such as format conversion of 4:3 input to match a display output resolution of 16:9 aspect ratio.

[0097] Though both a flat display screen and cameras include only 2D (X and Y) dimensions, the projection distortion includes a 3D depth (Z) component. If the Z value is ignored, various types of perspective distortion may be introduced into the system. In some systems that may be acceptable, and in others higher quality may be desirable. By using known patterns for projection and comparing those images with the captured representation of the image, the Z values can be determined. For example, the interpolation of a line based on X and Y vertices where no Z value is included may appear as an arc, which indicates perspective distortion, since the combination of the X and Y displacement and the Z displacement all contribute to the warping of a line into an arc. Those skilled in the art will recognize that, by varying the input pattern along with recording the viewable pattern, a "curve fitting" algorithm can conclude the proper X, Y and Z vertex values for a tessellation map to best represent the distortion of the projection path. Once the proper X, Y and Z values for the vertex are used, the interpolation across triangles will result in straight lines that are not distorted into arcs. The tessellation map of the proper vertex values is used during the texture mapping modes as described elsewhere in this disclosure.
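Illustratively (a sketch; the quadratic fit is an assumed stand-in for whatever curve-fitting algorithm a real setup procedure would use), detecting that a projected straight line was captured as an arc can be as simple as:

    import numpy as np

    def arc_curvature(xs, ys):
        """Fit a quadratic to the captured image of a projected straight
        line; a significant second-order coefficient means the line was
        warped into an arc, i.e., uncorrected Z (depth) distortion."""
        a, b, c = np.polyfit(xs, ys, 2)
        return a  # approximately zero for an undistorted line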
[0098] The display tessellation map will include the location information for each triangle vertex, and may also include other color and intensity information, either as part of the same triangle mesh or in a separate triangle mesh. In a system that includes a more sophisticated setup procedure, the vertex information will include differing Z values (depths) for each vertex. Each color component may require different mappings to represent how the color path varies over the complete display surface. A triangle mesh can be ordered as individual triangles, triangle strips, triangle fans or some other organization. Other mathematical representations such as B-Splines may also be used. DM 402 preferably provides data, either directly or through buffer memory 424, to Geometric Transformation module 404.
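For illustration only (a sketch; the vertex record layout and barycentric interpolation are assumptions about one way such a map could be evaluated), interpolating per-vertex data inside one tessellation-map triangle might look like:

    import numpy as np

    def interpolate_vertex(vertices, px, py):
        """Barycentric interpolation of per-vertex data (here the Z
        depth) at point (px, py) inside one mesh triangle, given as
        three (x, y, z) rows."""
        (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = vertices
        det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
        w0 = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / det
        w1 = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / det
        w2 = 1.0 - w0 - w1
        return w0 * z0 + w1 * z1 + w2 * z2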
[0099] In accordance with the invention, Geometric Transformation 404 advantageously redefines the spatial relationship between pixel points of a compensated digital image that, when displayed, exhibits the highest possible image quality. Geometric transformation, also referred to as warping, includes image scaling, rotation, and translation. Geometric Transformation 404 resamples data to produce an output image that can readily map onto FIG. 2 image modulator 244. However, the Geometric Transformation 404 output data points, due to scaling or resampling, may not correspond one-to-one to data points of the image modulator 244 grid. Consequently, DOP 230 includes Post GT Filtering 406.
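Illustratively (a sketch; bilinear weights are an assumed choice of resampling filter), sampling a warped source at the non-integer grid points produced by scaling or resampling can be done as follows:

    import numpy as np

    def bilinear_sample(img, x, y):
        """Resample an image at fractional coordinates (x, y), as needed
        when warped output points fall between modulator grid points."""
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bot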