Automated conversion of a visual presentation into digital data format

US006697569B1

(12) United States Patent
Gomez et al.
(10) Patent No.: US 6,697,569 B1
(45) Date of Patent: Feb. 24, 2004

(54) AUTOMATED CONVERSION OF A VISUAL PRESENTATION INTO DIGITAL DATA FORMAT

(75) Inventors: … Lindahl, Umeå (SE); Peter Gomez, Stockholm (SE); Göthe … (SE)

(73) Assignee: Telefonaktiebolaget LM Ericsson (publ), Stockholm (SE)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/326,410
(22) Filed: Jun. 4, 1999

Related U.S. Application Data
(60) Provisional application No. 60/099,951, filed on Sep. 11, 1998.

(51) Int. Cl.7 .......................... H04N 5/91
(52) U.S. Cl. .......................... 386/120; 386/65; 386/112
(58) Field of Search .......................... 386/120, 95, 121, 52, 65, 111, 112; 358/906, 909.1; 348/14.07, 14.08, 14.09, 461

(56) References Cited

U.S. PATENT DOCUMENTS
5,331,345 A    7/1994  Akimoto et al. .......... 348/297
5,742,329 A    4/1998  Masunaga et al. ........ 348/15
5,745,161 A *  4/1998  Ito ........................ 348/14.09
5,751,445 A    5/1998  Masunaga ................ 348/426
6,426,778 B1 * 7/2002  Valdez, Jr. .............. 348/461

FOREIGN PATENT DOCUMENTS
GB  2 282 506 A   4/1995
GB  2 306 274 A   4/1997
JP  10257463 A    9/1998

OTHER PUBLICATIONS
Zhang and Kittler, "Using Background Memory for Efficient Video Coding", Centre for Vision, Speech and Signal Processing, School of EEIT&M, University of Surrey, United Kingdom, 1998 IEEE.
Zhang and Kittler, "Global Motion Estimation and Robust Regression for Video Coding", Centre for Vision, Speech and Signal Processing, School of EEIT&M, University of Surrey, United Kingdom, 1998 IEEE.

* Cited by examiner

Primary Examiner: Huy Nguyen

(57) ABSTRACT

A full multimedia production such as a seminar, conference, lecture, etc. can be captured in real time using multiple cameras. A live movie of a speaker together with the speaker's flipping still images or slide show can be viewed interactively within the same video display screen. The complete production can be stored on a hard drive for retrieval on demand, or sent live to a host server for live distribution throughout a data network. It is also possible to store the complete presentation on portable storage media and/or to send the complete presentation as an e-mail.

9 Claims, 15 Drawing Sheets

[Representative drawing (the flow chart of FIG. 8) is not reproduced here.]
[Drawing sheets 1-15 are not reproduced here. The content recoverable from the flow-chart and block-diagram sheets is summarized below; each figure is described in full in the detailed description.]

FIG. 3 (Sheet 3), flow chart: create JPEG image from still video (310); give the JPEG file a unique name based on the current system time (hhmmss.JPG) and store it (320); create a corresponding wrapping HTML file (330); name the HTML file with a name based on the same time (hhmmss.HTM) as for the JPEG file (340); send the HTML file name as a relative URL to the encoder, creating an ASF script object (350); include the URL in the ASF file at the corresponding time stamp (360).

FIG. 4 (Sheet 4), flow chart: in browser, the ASF player detects a URL (410); in browser, interpret the URL, connect to the web server and request the HTML document (420); in web server, access the HTML document and extract the JPEG file name (430); in web server, retrieve the JPEG file and send it to the browser (440); in browser, display the JPEG image at the proper time (450).

FIG. 5 (Sheet 5), block diagram: delay timer (520) and delay calculator (530), the latter fed by the bit rate (515) and the JPEG file size (560).

FIGS. 6 and 7 (Sheet 6), block diagrams: still image grab/convert (610), file creator (620), current picture (650) and last picture (660) storage, difference determiner (630) built from pixel XOR stages (710, 720) and a compare stage (730), and a threshold block (640).

FIG. 8 (Sheet 7), flow chart (805-860): activate timer output; couple the grab still button to the still image grabber/converter (810); couple the timer output to the still image grabber/converter (820); compute the delay value and load the timer; deactivate the timer output.

FIG. 9 (Sheet 8), flow chart: grab still? (910); grab picture and convert to JPEG (920); auto? (930); compare the last picture to the current picture (940); create JPEG file; pass the JPEG file size to the delay calculator (990).

FIGS. 10 and 11 (Sheets 9-10), camera preset control: a camera control block receives user preset selection inputs and issues camera control signals to camera 13; a table relates preset indices to camera position info and video info (preset 1, preset 2, position 1, position 2, video 1, video 2) for the GUI. Flow chart: store preset information (1110); preset selected? (1120); retrieve video and camera position info (1130); apply video to the encoder and move the camera to the preset position (1140).
AUTOMATED CONVERSION OF A VISUAL PRESENTATION INTO DIGITAL DATA FORMAT

This application claims the priority under 35 USC 119(e)(1) of copending U.S. Provisional Application No. 60/099,951, filed on Sep. 11, 1998.

FIELD OF THE INVENTION

The invention relates generally to streaming video through a data network and, more particularly, to integrating still images into a stream of live, moving images.

BACKGROUND OF THE INVENTION

Many companies and educational institutes have tried to use the Internet for distributing documentation of seminars, meetings, lectures, etc., including video. Those who have tried know that one problem is the cost of the video post-production, and that it is impossible to send the material out on the Internet while it is still fresh and topical. One day of normal video production of a seminar may cost $25,000-$37,500. The production is typically accessible on the Internet from one week to fourteen days after the event. Experts who know about filming, digitization and encoding are typically engaged, and an expert in creating web pages is also usually needed.

A first problem in producing multimedia for the Internet is the large number of tools a user has to manage. A second big issue is that it is time consuming: one hour of video takes about one hour to encode. A third problem is that if the user wants synchronized flipping images (from, e.g., an overhead projector), there is a lot of work in finding the synchronization points and creating the control files. A fourth problem is that the use of several cameras requires several camera operators.

Around the presentation or seminar there has to be at least one camera operator, and often one person responsible for the sound and for taking still pictures, with a digital still picture camera, of drawings at, for example, a whiteboard, notepad or overhead. A stopwatch is used to keep a record of when each still picture is presented, because it is not always possible to figure out, from after-the-fact viewing of the movie of the speaker, which pictures to show as JPEG. Powerpoint slide shows and other computer-based presentations are often sent as e-mail the day after the presentation, for conversion to JPEG or another suitable format by the production staff. It is, of course, possible to take stills at the same time as the pictures are presented, which is done when external presenters hold presentations.

The Powerpoint slides, when they arrive by e-mail, are (as mentioned above) converted to JPEG by the streaming production staff. The slides are also resized to fit in an HTML page together with the video window.

The production of streaming videos for 28.8K, 56K and 100K bit rates needs an extra window for the real information shown on slides, etc., because the video window is very small and the information in it is unreadable.

The video film is often manually edited with software like Adobe Premiere. After editing, if any, the encoder is used to compress the video and audio to the correct baud rate and encode them into a streaming format like ASF (Active Streaming Format) or RMFF (Real Media File Format). The encoding takes the same amount of time as it takes to run through the movie. This is time consuming.

To be able to show the JPEG images (e.g., a slide show) at the right time relative to the movie events, synchronization points (time stamps) must be inserted in the stream file. Both the Real Networks products and Microsoft Netshow include utilities to do this. This typically requires an expert, even though the aforementioned tools have good user interfaces.

After all this has been accomplished, it is time to create HTML pages and frames. There has to be an HTML page for each picture.

It is therefore desirable in view of the foregoing to provide for distributing a visual presentation using the Internet or another data network, without the aforementioned disadvantages of the prior art.

Embodiments of the present invention offer the ability to capture a full multimedia production such as a seminar, conference, lecture, etc. in real time using multiple cameras. A live movie of the speaker together with the speaker's flipping still images or slide show can be viewed interactively within the same video display screen. The complete production can be stored on a hard drive or sent live to a host server. The information can be retrieved on demand or distributed live throughout a data network such as the Internet or a corporate Intranet. It is also possible to store the complete presentation on CD or other portable storage media, and/or to send it as an e-mail to a PC.

According to the principles of the invention, the tools are handled automatically in the background, shielded from the user, and the encoding is done in real time. The synchronization points are set while the event is actually happening. In one example, overhead-projector plastic slides, computer VGA graphics, whiteboard drawings, etc. are captured and converted to JPEG, and the video encoding is done in MPEG and stored together with the sound and synchronization points in an ASF file for RTSP (Real Time Streaming Protocol; see RFC 2326 published by the IETF (www.IETF.org)) streaming.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed invention will be described with reference to the accompanying drawings, which show important sample embodiments of the invention, wherein:

FIG. 1 illustrates one example system according to principles of the invention for automated conversion of a visual presentation into digital data format;

FIG. 2 diagrammatically illustrates an exemplary embodiment of the system of FIG. 1;

FIG. 3 illustrates in flow diagram format exemplary operations of the synchronizing section of FIG. 1;

FIG. 4 illustrates in flow diagram format exemplary operations of the broadcast server of FIG. 1 or the web server of FIG. 2;

FIG. 5 illustrates pertinent portions of an exemplary embodiment of the still image controller of FIG. 2;

FIG. 6 illustrates pertinent portions of an exemplary embodiment of the still image grabber and converter of FIG. 2;

FIG. 7 illustrates an exemplary embodiment of the difference determiner of FIG. 6;

FIG. 8 illustrates exemplary operations which can be performed by the exemplary still image controller embodiment of FIG. 5;

FIG. 9 illustrates exemplary operations which can be performed by the exemplary still image grabber and converter embodiment of FIG. 6;

FIG. 10 illustrates an arrangement for implementing camera preset control techniques according to principles of the invention;
FIG. 11 illustrates exemplary operations which can be performed by the arrangement of FIG. 10;

FIG. 12 illustrates one example of an interactive visual display provided by a graphical user interface according to principles of the invention;

FIG. 13 illustrates exemplary operations performed by the exemplary audio control for feeding the exemplary audio level graph display and handling of activation of the input source;

FIGS. 14A-14C illustrate exemplary operations performed by the exemplary audio level graph display handling of the GUI for audio level feedback and activation/launching of the standard mixer system installed in the operating system;

FIG. 15 illustrates diagrammatically an exemplary modification of the system of FIG. 2;

FIG. 16 illustrates diagrammatically a further exemplary modification of the system of FIG. 2;

FIG. 17 illustrates diagrammatically a further exemplary modification of the system of FIG. 2; and

FIG. 18 illustrates diagrammatically a further exemplary modification of the system of FIG. 2.

DETAILED DESCRIPTION

As shown in FIG. 1, an exemplary system according to principles of the invention for automated conversion of a visual presentation into digital data format includes video cameras 11 and 13, a microphone 12, an optional laptop computer 10, and a digital field producer unit 14, also referred to herein as the DFP unit or DFP computer. One of the video cameras 13 covers the speaker and provides video information to the live video section 1, and the other video camera 11 covers the slide show, flip chart, whiteboard, etc. and provides the corresponding video information to the still video section 3. The microphone provides the audio to the sound section 2. In the example DFP unit of FIG. 1, the live video is encoded 4 (e.g., in MPEG) in real time during the speaker's visual presentation, and the still video of the slide show etc. is converted 5 into JPEG files in real time during the presentation.

A synchronizing section 16 of FIG. 1 operates automatically during the speaker's presentation to synchronize the still video information from the slide show, flip chart, etc. with the live video information from the speaker. Both the live video and the still video can then be streamed live through a server 15 to multiple individual users via a data network 18 such as, for example, the Internet, a LAN, or a data network including a wireless link.

Alternatively, the live video and still video can be stored in storage unit 17 (e.g., a hard disk), for a later replay broadcast via the server 15 and data network 18. Also, the information in the storage unit 17 can be transferred to portable storage media 9 such as floppy disks or CDs, for local replay by individual users later. The stored information can also be sent to a viewer as an e-mail.

A graphical user interface GUI 8 permits a user to control operations of the FIG. 1 system using a monitor 6 and input apparatus 7 (e.g., a keyboard, mouse, touch screen, etc.) suitably coupled to the GUI 8.

The encoding 4, converting 5 and synchronizing portions 16 of FIG. 1, and the GUI 8, can be implemented using one (or more) suitably programmed data processor(s), as will be evident from the following description.

FIG. 2 is a detailed block diagram of an exemplary embodiment of the system of FIG. 1. A video grabber card 23 receives as an input a video signal from the still image camera 11, and converts this video signal into digital video data which is in turn output to the still image grabber and converter 21. Similarly, a video grabber card 20 receives as input a video signal from the live video camera 13, and produces therefrom corresponding digital video data which is in turn output to the encoder and streamer 27. The video grabber card 23 also includes a port for receiving VGA data output from the laptop computer 10. Video grabber cards such as 20 and 23 are well known in the art.

The audio input and audio output sections can be realized, for example, by a conventional audio card such as a SOUNDBLASTER PCI card. The audio input section receives as input an audio signal from the microphone 12, and outputs a corresponding audio signal to the encoder and streamer 27. The audio output section also receives the audio signal output from the audio input section, and provides this signal to a speaker, for example an audio headset, so an operator can monitor the recorded sound quality.

The graphical user interface GUI 30 includes a camera control section 29 that permits a user to control the cameras 11 and 13 via a conventional serial card 31. The serial card 31 also couples a user command input device 32, for example a touch screen as shown in FIG. 2, to the graphical user interface GUI 30.

The digital video data output from video grabber card 20 is also input to a live image display portion 33 of the graphical user interface GUI 30. The live image display portion 33 of the GUI 30 causes the direct digital video data obtained from the live image camera 13 to be displayed to a user on a monitor 34 such as the flat XVGA monitor illustrated in FIG. 2. A conventional video card 24 coupled between the GUI 30 and the monitor can suitably interface the digital video data from the live image display section 33 to the monitor 34.

The GUI 30 also includes a still image display portion 35 coupled to the output of video grabber card 23 in order to receive therefrom the digital video data associated with the still video camera 11. The still image display section 35 of the GUI 30 uses the video card 24 to cause the digital video data associated with still image camera 11 to be displayed to the user via the monitor 34.

A DFP application section 19 (e.g., a software application running on the DFP computer 14) includes a still image grabber and converter portion 21 which receives as input the digital still video data from video grabber card 23, and produces therefrom, as an output, image data files 173 such as JPEG or GIF files. Taking JPEG files as an example output, each JPEG file produced by the still image grabber and converter portion 21 represents a freezing of the digital video data received from video grabber card 23 in order to produce, at a desired point in time, a still image associated with the video being recorded by the still video camera 11. The GUI 30 includes a still image control section 25 having an output coupled to the still image grabber and converter 21. The still image control portion 25 applies an appropriate control signal to the still image grabber and converter 21 when the still image grabber and converter 21 is to "grab" (capture) the next still image and create the corresponding JPEG file. The serial card 31 connecting the user command input (e.g., the touch screen) to the GUI 30 permits a user to access the still image control section 25, and thereby control the still image "grabbing" operation of the still image grabber and converter 21.

The DFP application section 19 further includes an encoder and streamer module 27 which receives the digital video output from video grabber card 20, and continuously
encodes and compresses this data into a digitally transferrable stream with low bandwidth. The corresponding audio information from the audio input section is also encoded and compressed into the digitally transferrable stream. The encoding process is also conventionally referred to as streaming or streaming video. Encoding modules such as shown at 27 are conventionally known in the art. One example is the NetShow encoder and streamer conventionally available from Microsoft. In one example, the video encoding can be done in MPEG. The encoder and streamer module 27 can assemble the encoded video data in a document file, for example an ASF file.

The ASF file can be output from the encoder and streamer module 27 for live streaming out of the DFP unit 14, and also for storage at 171 in the storage unit 17. The encoder and streamer module 27 also encodes the digitized audio signal received from the audio input section. The encoded video information is also output from the encoder and streamer module 27 to a streamed image display portion 260 of the GUI 30, whereby the streaming video can be displayed on the monitor 34 via the video card 24. The encoder and streamer module 27 receives a control input from an encoder control portion 36 of the GUI 30. The encoder control portion 36 permits a user, via the user command input and serial card 31, to control starting and stopping of the encoding process. In addition, the encoder control 36 provides a recording counter which tracks the passage of time during the encoding of the video event.

The still image control section 25 in the GUI 30 of FIG. 2 controls the still image grabber and converter 21, which receives an input from video grabber card 23. Video grabber card 23 interfaces with the still image camera 11 which is used to record the slide show, flip chart, etc. The still image grabber and converter 21 can create, for example, JPEG files as an output.

The still image control 25 can be automated according to principles of the invention to cause the still image grabber and converter 21 to periodically create a JPEG image of the still video source, give the JPEG image file a unique name based on the current system time, and store the file at 173. One example file name is hhmmss.jpg, where hh is the current hour, mm is the current minute and ss is the current second. Along with the creation of the JPEG file, a corresponding wrapping HTML (Hyper Text Markup Language) file is created by an HTML & URL generator 26 and stored at 172 in the data storage unit 17. In one example, the HTML file can be created by copying a template from a template directory, and inserting the aforementioned file name hhmmss in a name field of the template.

The HTML file name, hhmmss.htm, is then sent as a relative URL (Uniform Resource Locator) from generator 26 to the encoder and streamer 27 for inclusion, at time stamp hhmmss, in the encoded streaming video data (e.g., in an ASF file) output by the encoder and streamer 27. This synchronizes the still video information from the slide show with the "live" video information from the speaker. In addition, other files can be synchronized to the "live" video, such as sound, VRML, JAVA script, text files, voice-to-text files and files containing translations of voice-to-text files into other languages.

Conventional encoder and streamer products, such as the aforementioned NetShow product, provide functionality for passing URLs to the encoder for inclusion in the encoded streaming video data output. For example, the "SendScript" function in NetShow provides this functionality. SendScript can insert the URL into the ASF file if the URL is provided in the form of a Script Command Object, which can be easily done by workers in this art. The NetShow encoder can then insert the Script Command Object (i.e., the URL) at the desired time stamp, hhmmss.

FIG. 3 illustrates the above-described exemplary operations which can be performed, for example, by the components 21, 26 and 27 of FIG. 2 to implement the synchronizing function 16 illustrated diagrammatically in FIG. 1. At 310, the still image grabber and converter 21 creates a JPEG image file from a still video picture that has been grabbed from the still image camera. At 320, the JPEG file is given a unique name (e.g., hhmmss.jpg) based on the system time at which the picture was grabbed (available, for example, from the recording counter in the encoder control portion of the GUI), and the JPEG file is stored. At 330, the HTML & URL generator 26 uses the received JPEG file to create a corresponding wrapping HTML file. At 340, the generator 26 names the HTML file (e.g., hhmmss.htm) based on the system time at which the picture was grabbed, which can be provided by the grabber/converter 21 to the generator 26 along with the JPEG file. At 350, the generator 26 sends the HTML file name to the encoder 27 as a relative URL. At 360, the encoder and streamer 27 receives the URL and includes it in its output ASF file at the time stamp corresponding to the system time on which the HTML file name was based.
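For illustration only, the following minimal Python sketch shows one way the operations of blocks 310-360 might look in software: the grabbed picture is written as hhmmss.jpg, a wrapping hhmmss.htm is produced from a template, and the relative URL is handed on for insertion at the matching time stamp. The directory names, the template markup and the send_script_command callback are assumptions made for the sketch, not details taken from the patent; in particular, the actual encoder hookup (e.g., the SendScript facility mentioned above) is abstracted behind the callback.

```python
import time
from pathlib import Path

# Hypothetical stand-ins for storage areas 173 (JPEG files) and 172 (HTML files).
JPEG_DIR = Path("storage/jpeg")
HTML_DIR = Path("storage/html")

# Assumed wrapping-HTML template; the text only says the hhmmss name is
# inserted into a name field of a copied template.
HTML_TEMPLATE = """<html>
  <head><title>{name}</title></head>
  <body><img src="{name}.jpg"></body>
</html>
"""

def publish_still(jpeg_bytes: bytes, send_script_command) -> str:
    """Sketch of blocks 310-360: store hhmmss.jpg, create the wrapping
    hhmmss.htm, and pass the relative URL on for insertion into the
    encoded stream at the corresponding time stamp."""
    name = time.strftime("%H%M%S")                      # 320/340: name from system time
    JPEG_DIR.mkdir(parents=True, exist_ok=True)
    HTML_DIR.mkdir(parents=True, exist_ok=True)
    (JPEG_DIR / f"{name}.jpg").write_bytes(jpeg_bytes)  # 310/320: create and store the JPEG
    (HTML_DIR / f"{name}.htm").write_text(HTML_TEMPLATE.format(name=name))  # 330/340
    relative_url = f"{name}.htm"                        # 350: relative URL (no server/directory)
    send_script_command(relative_url)                   # 360: encoder embeds it at hhmmss
    return relative_url
```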
In the example of FIG. 2, the streaming video output (e.g., an ASF file) provided by the encoder and streamer 27 is input to a web server 37, which can forward the streaming video output to a desired destination (e.g., a viewer's web browser 40) via a suitable data network 39, thereby providing a live broadcast of the event. The web server 37 can also add the appropriate server name and directory name to the relative URL (which typically does not include such information). The web server 37 is also coupled to the storage section 17, for example via an FTP client for publishing 41, to receive the JPEG documents, HTML documents and ASF files stored therein. In addition, the storage section 17 may be coupled to a CD-R burner 42 or to a Zip drive 43 for external storage.

The web server 37 is coupled to a network card 38 that gives the web server 37 access to a data network 39, for example a local area network (LAN) or the Internet. In one embodiment, the network card 38 can be an Ethernet network (PCI) card, which can handle TCP/IP and UUCP traffic in conventional fashion. In another embodiment, a modem can be utilized by the web server 37 to access the data network.

FIG. 2 illustrates a web browser 40 coupled to the web server 37 via the data network 39 and the network card 38. Examples of suitable web browsers include the conventional Netscape and Internet Explorer browsers. During a live video streaming broadcast, a viewer can connect to the web server 37 via the World Wide Web by typing a URL into the viewer's web browser (note that the web browser could also be provided as part of the DFP unit itself). The live video stream is distributed over the web, with the "live" video synchronized to the still images from the still image camera. Both the live video stream and the still video image can be shown on the same web page.

After an event (for example a seminar) has been recorded, a viewer can replay the video recording by making a similar web connection as in the above-described live broadcast case. A URL is typed into the viewer's web browser, which connects the viewer to the web server 37 in the DFP computer. The web server 37 will then stream out the recorded video information the same as it would be streamed
during the live streaming broadcast. The still video images are synchronized as in the live case, and they change in the output video stream at the same relative time as they did during the actual event. The viewer can decide when to start (or restart) the video stream in order to view the event as desired, and can navigate to a particular part of the recorded event, for example by using a slider control provided by the web browser.

A viewer also has the option of viewing the event locally from a disk or CD-ROM. All that is needed to view an event recorded on disk or CD-ROM is a web browser with a conventional video streaming plug-in such as supported by Internet Explorer.

The web browser 40 preferably includes an ASF player, executing as a plug-in or an ActiveX control, that processes the ASF file and presents the audio/video to the viewer. When the player, for example a conventional multimedia player such as Microsoft Windows Media Player, encounters a Script Command Object in the ASF file, it interprets and executes the Script Command Object. When the player identifies the Script Command Object as a URL, it passes the URL to the browser. The browser executes the URL as if it had been embedded inside an HTML document. According to one embodiment, the URL points to the HTML document hhmmss.htm, which in turn contains a pointer to the corresponding JPEG document hhmmss.jpg.

If the Windows Media Player control is embedded in an HTML file that uses frames, the URL can be launched in a frame that is also specified by the Script Command Object. This allows the Windows Media Player control to continue rendering the multimedia stream in one frame, while the browser renders still images or web pages in another frame. If the Script Command Object does not specify a frame, then the URL can be launched in a default frame.

FIG. 4 illustrates exemplary operations of the web browser and web server of FIG. 2. The operations of FIG. 4 are advantageously executed during the web browser's processing of the ASF file. When a URL is detected (for example in the form of a Script Command Object) at 410 by the ASF player, the web browser at 420 interprets the URL for server destination and protocol to use (e.g., HTTP), connects to the web server and sends the web server a request for the HTML document. At 430, the web server accesses the HTML document from storage 172 and extracts therefrom the JPEG file name. At 440, the web server retrieves the JPEG file from storage 173 and sends it to the browser. At 450, the browser displays the JPEG image at the appropriate time with respect to the video streaming presentation.
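The server-side part of these operations (blocks 430 and 440) can be pictured with the short sketch below. It assumes, as in the earlier sketch, that the wrapping HTML document references its image through an ordinary img src attribute and that the stored documents live in local directories; a real web server would return the JPEG over HTTP rather than as raw bytes, and the function name is purely illustrative.

```python
import re
from pathlib import Path

HTML_DIR = Path("storage/html")   # assumed location of the wrapping HTML documents (172)
JPEG_DIR = Path("storage/jpeg")   # assumed location of the JPEG documents (173)

def resolve_still(relative_url: str) -> bytes:
    """Sketch of blocks 430-440: open the requested wrapping HTML document,
    extract the JPEG file name from it, and return the JPEG data that the
    web server would send on to the browser."""
    html = (HTML_DIR / relative_url).read_text()
    match = re.search(r'src="([^"]+\.jpg)"', html, re.IGNORECASE)
    if match is None:
        raise FileNotFoundError(f"no JPEG reference found in {relative_url}")
    return (JPEG_DIR / match.group(1)).read_bytes()
```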
During replay broadcasts, the web server retrieves and forwards the stored ASF file (containing the encoded/compressed "live" video data) from storage at 171, and also accesses the stored HTML documents, and retrieves and forwards the stored JPEG documents, generally as described above with respect to live streaming operation. The web browser receives the ASF file and JPEG documents, and synchronously integrates the "still" video images into the "live" video stream using generally the same procedure discussed above with respect to live streaming operation.

As mentioned above, the still image control portion 25 of the GUI 30 can, either in response to user input or automatically, direct the still image grabber and converter portion 21 to grab a still image and convert it into JPEG format. FIG. 5 illustrates pertinent portions of an exemplary embodiment of the still image controller 25. The exemplary arrangement of FIG. 5 permits the still image grabber and converter 21 (see FIG. 2) to be controlled either by user input (i.e., manually) or automatically. A selector 510 receives a user (manual) input from a grab still button 590 which is provided in the user command input portion 32 (e.g., the touch screen) and can be actuated by a user whenever the user desires to grab a still picture from the still image camera. The selector 510 has another input connected to an output 570 of a delay timer 520. The selector 510 is controlled by a manual/automatic signal which can be preset from the user command input into the settings & wizards section 22 of the GUI 30. When the manual/automatic signal indicates manual operation, the selector 510 connects the grab still button input to its output 550, and when the manual/automatic signal indicates automatic operation, the selector 510 connects the output 570 of the timer 520 to the selector output 550. The selector output 550 is coupled to the still image grabber and converter 21, and provides thereto a grab still signal. The manual/automatic signal is also coupled at 540 to the still image grabber and converter 21.

A delay calculator 530 calculates a delay value which is preloaded at 580 from the delay calculator 530 into the timer 520. The delay calculator 530 calculates the desired delay value as a function of the bit rate 515 used in the live video streaming operation and the size of the JPEG file created by the still image grabber and converter 21 in response to the grab still signal at 550. The JPEG file size information from the still image grabber and converter 21 is provided at 560 to the delay calculator 530.
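The text specifies only that the delay is calculated as a function of the streaming bit rate 515 and the JPEG file size 560. One plausible reading, sketched below as an assumption rather than as the patented method, is that the timer 520 is reloaded with at least the time needed to transfer the previous JPEG at the configured bit rate, so that automatic grabs do not outpace the stream; the margin factor and minimum interval are invented for the example.

```python
def compute_delay_seconds(jpeg_size_bytes: int,
                          stream_bit_rate_bps: int,
                          margin: float = 1.5,
                          minimum_s: float = 5.0) -> float:
    """Illustrative delay calculator (530): wait at least as long as it takes
    to push the last JPEG through the stream before the delay timer (520)
    triggers the next automatic grab."""
    transfer_time_s = (jpeg_size_bytes * 8) / stream_bit_rate_bps
    return max(minimum_s, margin * transfer_time_s)

# Example: a 40 kB JPEG over a 56 kbit/s stream yields roughly 8.6 seconds,
# which would be preloaded into the timer at 580.
delay_value = compute_delay_seconds(40_000, 56_000)
```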
FIG. 6 illustrates pertinent portions of an exemplary embodiment of the still image grabber and converter 21 of FIG. 2, particularly those portions which interface with the exemplary still image controller embodiment of FIG. 5. A still image grab/convert portion 610 receives the output of video grabber card 23 and, responsive to activation of the grab still signal from the still image controller, grabs a picture, converts it into JPEG format, and notes the current time (e.g., from the counter in the encoder control portion of the GUI). A file creator portion 620 receives the JPEG data and current time information (indicative of the time that the picture was grabbed) from the still image grab/convert portion 610, and creates a JPEG file in response to a create file input 680. When the file creator portion 620 creates a JPEG file, it outputs the file size information to the delay calculator 530 of FIG. 5.

The still image grab/convert portion 610 provides the pixel data received from the video grabber card 23 to a data storage section at 650 and 660. Each time a still image is grabbed, the pixel data is provided to a current picture storage section 650, whose previous contents are then loaded into a last picture storage section 660. In this manner, the pixel data associated with the current still image and the most recently grabbed previous still image (i.e., the last still image) are respectively stored in the data storage sections 650 and 660. A difference determiner receives the current and last picture data from the storage sections 650 and 660, and determines a difference measure, if any, between the current still image and the last still image. If the difference determiner determines that a difference exists between the two still images, then information indicative of this difference is provided to a threshold portion 640, which compares the difference to a threshold value to determine whether the images differ enough to warrant creation of a new JPEG file corresponding to the current image. If the difference information received from difference determiner 630 exceeds the threshold of threshold portion 640, then the output 690 of threshold portion 640 is activated, whereby the create file signal 680 is activated by operation of an OR gate 685 that
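To make the difference-and-threshold test of FIGS. 6 and 7 concrete (pixel XOR stages 710 and 720 feeding a compare stage 730, with threshold 640 gating creation of a new JPEG file), here is a minimal Python sketch. Treating the current picture 650 and last picture 660 as equal-length pixel buffers and counting the positions whose XOR is nonzero is an assumption about the unspecified difference measure, and the default threshold is illustrative.

```python
def differs_enough(current: bytes, last: bytes | None, threshold: int = 1000) -> bool:
    """Sketch of difference determiner 630 plus threshold 640: XOR the pixel
    data of the current and last pictures, count changed positions, and
    report whether a new JPEG file is warranted (output 690)."""
    if last is None or len(last) != len(current):
        return True  # nothing comparable was grabbed before; treat as changed
    changed = sum(1 for a, b in zip(current, last) if a ^ b)
    return changed > threshold
```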