A Digital Video Primer:
An Introduction to DV Production,
Post-Production, and Delivery
Contents
Introduction
Video basics
Digital video formats and camcorders
Configuring your system
The creative process: an overview of movie-making
Acquiring source material
Nonlinear editing
Correcting the color
Digital audio for video
Visual effects and motion graphics
Getting video out of your computer
Conclusion
How to purchase Adobe software products
For more information
Glossary
The world of digital video (DV) encompasses a large amount of technology. There are
entire industries focused on the nuances of professional video, including cameras, storage,
and transmission. But you don’t need to feel intimidated. As DV technology has evolved,
it has become increasingly easier to produce high-quality work with a minimum of
underlying technical know-how.
This primer is for anyone getting started in DV production. The first part of the primer
covers the basics of DV technology, and the second part, starting with the section titled
“The creative process,” shows you how DV technology and the applications in Adobe®
Production Studio, a member of the Adobe Creative Suite family, come together to help
you produce great video.
Video basics
One of the first things you should understand when talking about video or audio is the
difference between analog and digital.
An analog signal is a continuously varying voltage that appears as a waveform when
plotted over time. Each vertical line in Figures 1a and 1b, for example, could represent
1/10,000 of a second. If the minimum voltage in the waveform is 0 and the maximum is
1, point A would be about .25 volts (Figure 1a).
A digital signal, on the other hand, is a numerical representation of an analog signal. A
digital signal is really a stream of bits (a long list of binary numbers). Each number in
the list is a snapshot, or sample, of the analog signal at a point in time (Figure 1b). The
sample rate of a digital stream is the number of snapshots per second. For example, if 0
volts is represented by the numerical value 0 and 1 volt by the value 255, point A would
be represented by the number 64, which in binary form is a string of ones and zeros like
this: 1000000.
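To make the sampling step concrete, here is a small Python sketch (not part of the original primer; the function name and the 0-to-1 volt range are assumptions chosen to match the example above) that quantizes an analog voltage into an 8-bit sample:

```python
def quantize(voltage, v_min=0.0, v_max=1.0, bits=8):
    """Quantize an analog voltage into an unsigned integer sample.

    With 8 bits, the range v_min..v_max maps onto the values 0..255.
    """
    levels = (1 << bits) - 1  # 255 for 8-bit samples
    code = round((voltage - v_min) / (v_max - v_min) * levels)
    return code

# Point A at 0.25 V becomes sample 64, which is 1000000 in binary:
sample = quantize(0.25)
print(sample, format(sample, "b"))
```

A real analog-to-digital converter does exactly this mapping, just in hardware and millions of times per second.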
Digital has a number of advantages over analog. One of the most important is the very
high fidelity of the content. An analog device, like a cassette recorder or television set,
simply renders the variations in voltage as sound or pictures, and has no way of distinguishing between a voltage variation that comes from the original signal and one that
comes from electrical interference caused by a power line, for example (Figure 2a).
Electrical interference or noise can come from an external source or from the tape, or
components in a recorder or the television itself. When you duplicate a tape, noise
recorded on the original tape is added to the new tape. If you were to then duplicate the
new tape, noise from the two previous tapes would be added to the third tape and so on.
Each copy of the tape adds to the generation loss, or loss in fidelity, from the original.
Figure 1a: Analog signal
Figure 1b: Digital signal
Figure 2a: Analog signal with noise from system electronics or recording tape
Figure 2b: Digital (binary) signal with noise
With digital, the signal recorded on a tape or sent through the air consists of nothing more than
a string of ones and zeroes that a digital player converts to numbers and then to sounds or pictures. Because a digital player only reads ones and zeroes, it can more easily discriminate between
the original signal and noise (Figure 2b). So a digital signal can be transmitted and duplicated as
often as you want with no (or very little) loss in fidelity.
Digital audio basics
Analog audio is an electrical representation of sound. A microphone or some other transducer
converts rapid changes in air pressure to variations in voltage. Electronic devices amplify and
process the audio signal. Digital audio is a digital representation of analog audio: An analog-to-digital converter samples the variations in voltage at set intervals, generates a list of binary numbers, and then saves the bit stream to a computer hard disk, records it on tape, or streams it over
a network. The quality of digitized audio and the size of the audio file depend on the sample rate
(the number of samples per second) and the bit depth (the number of bits per sample). Higher
sample rates and bit depths produce higher-quality sound, but with correspondingly larger file sizes.
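As a quick illustration (not from the primer), the size relationship can be computed directly; the CD-quality defaults of 44,100 samples per second, 16 bits, and 2 channels are assumptions used purely as an example:

```python
def audio_file_bytes(seconds, sample_rate=44100, bit_depth=16, channels=2):
    """Uncompressed audio size: samples/second x bytes/sample x channels."""
    return seconds * sample_rate * (bit_depth // 8) * channels

# One minute of CD-quality stereo audio:
size_mb = audio_file_bytes(60) / 1_000_000
print(round(size_mb, 1), "MB")  # roughly 10.6 MB
```

Halving the sample rate or the bit depth halves the file size, which is exactly the trade-off the paragraph above describes.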
Digital video has come of age
Video has made the transition from analog to digital. This transition has happened at every level
of the industry. At home and at work, viewers watch crystal-clear video delivered via DVDs, digital cable, and digital satellite. In broadcasting, standards have been set and stations are moving
towards programming only digital television (DTV). In time, we will all be watching DTV.
The full transition to digital, however, won’t happen overnight. And although there is much
digital content available now, TV programming is, for the most part, still engineered for analog
production. Nonetheless, the U.S. Government has mandated a full conversion of U.S. television
broadcasting to DTV in order to make better use of the available frequency spectrum.
There are two types of digital television: standard definition (SDTV) and high definition (HDTV).
SDTV offers resolution roughly equivalent to a conventional analog signal, with image display
ratios or aspect ratios of 4:3 or 16:9. The ATSC HDTV format used in the U.S. offers the potential for approximately twice the horizontal and vertical resolution of current analog television,
which can result in about five times as much visual information. It also takes approximately five
times the bandwidth to broadcast as SDTV. HDTV has a 16:9 aspect ratio.
Not all digital TV sets on the market support HDTV, even the sets with 16:9 widescreens. But
virtually all new sets today are, at least, SDTV-ready, meaning that they are equipped to accept
a digital signal. You can connect DV camcorders, digital VCRs, and DVD players to new digital
TV sets through an IEEE 1394 or DVI connector to achieve a pristine, noiseless picture.
Even high-end filmmaking is transitioning to digital. Today, viable HD digital video formats
deliver magnificent quality for both high-end motion pictures and broadcast TV. Many major
motion pictures contain footage that was digitally generated or enhanced. A number of films are completely digital: shot, edited, and finished using digital cameras and computers. For those who prefer
the look of film, digital effects can be used to add the texture of film to the impeccably clean images.
In fact, producers have a virtually limitless choice of grain and textures they can add.
While the continuous-tone contrast range of film is still greater than even the highest definition video, there are many compelling arguments for shooting digitally, not the least of which is
cost. Many independent filmmakers used to have to scavenge leftover film remnants to complete
a project; today the lower cost of digital video is making it possible for more indies than ever
before to be produced and distributed. In consumer electronics, an ever-growing selection of
digital video camcorders offers impressive quality at an affordable price.
Video post-production has moved from analog tape-to-tape editing to the world of digital nonlinear editing (NLE).
The advantages of using a computer for video production activities such as nonlinear editing
are enormous. Traditional tape-to-tape editing follows a linear path, like writing a letter with
a typewriter. If you want to insert new video at the beginning or middle of a finished project,
you have to reedit everything after that point. Desktop video, however, enables you to work with
moving images in much the same way you write with a word processor. Your movie document
can quickly and easily be edited and reedited, including adding music, titles, and special effects.
Frame rates and fields
When a series of sequential pictures is shown in rapid succession, an amazing thing happens.
Instead of seeing each image separately, we perceive a smoothly moving animation. This is the
basis for film and video. The number of pictures shown per second is called the frame rate.
It takes a minimum frame rate of about 10 frames per second (fps) for the viewer to perceive
smooth motion. Below that speed, a person can perceive the individual still images and motion
appears jerky. To avoid that flicker between frames, you need a frame rate of between 30 and 45
fps. Film has a frame rate of 24 fps. Television has a frame rate of approximately 30 fps (29.97
fps) in the U.S. and other countries that use the National Television Systems Committee (NTSC)
standard, and 25 fps in countries that use the Phase-Alternating Line (PAL) and Sequentiel Couleur Avec Memoire (SECAM) standards.
Before nonlinear editing systems and graphical interfaces, projects were edited linearly with multiple videotape recorders using timecode.
There are two ways that a frame can be presented to a viewer: progressively or with interlaced
scanning. With film, the shutter in a projector displays each frame in its entirety, and then
displays the next frame. This progressive method of displaying complete frames is similar to the
manner in which a computer display is refreshed. A whole new image is scanned about 60 times
a second. Digital television sets are also capable of progressive display.
Interlaced scanning was developed in the early days of television to accommodate the old
cathode ray tube (CRT). Inside the tube, an electron beam scans across the inside of the screen,
which contains a light-emitting phosphor coating. Unlike the phosphors used in today’s computer monitors, those used when televisions were first invented had a very short persistence. That
means the amount of time they could remain illuminated was short. In the time it took the electron beam to scan to the bottom of the screen, the phosphors at the top were already going dark.
To solve this problem, the early television engineers designed an interlaced system for scanning
the electron beam. With an interlaced system, the beam only scans the odd-numbered lines the
first time, and then returns to the top and scans the even-numbered lines. These two alternating
sets of lines are known as the upper (or odd) and lower (or even) fields in the television signal.
A television that displays 30 frames per second is really displaying 60 fields per second—two
interlaced images per frame.
Why is the frame/field issue of importance? Imagine that you are watching a video of a ball flying
across the screen. In the first 1/60th of a second, the TV scans all of the odd lines in the screen
and shows the ball in position at that instant. Because the ball continues to move, the even lines
that are scanned in the next 1/60th of a second show the ball in a slightly different position.
With progressive scan, all lines of a frame show an image that occurs at one point in time; with
interlaced scan, even lines occur 1/60th of a second later than the odd lines. Because of this difference, you need to consider fields and frames when you want to display an interlaced image on
a progressive-scanned monitor. This situation most often occurs when you edit interlaced video
on a computer. If the video is destined for computer playback, you can convert or deinterlace the
video using your editing program or capture device. However, if the final video will be played on
a standard television, through a DVD or tape, you need to maintain interlacing while you edit.
In either case, if you are using Adobe® Premiere® Pro software for video editing or Adobe After
Effects® software for creating motion graphics and visual effects, you can easily work with either
scanning method.
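The field structure described above can be sketched in code. The snippet below is an illustration only, not Adobe's implementation: it models a frame as a list of scan lines, splits it into its two fields, and shows the simplest form of deinterlacing, line doubling. The function names are invented for this example.

```python
def split_fields(frame):
    """Split an interlaced frame (a list of scan lines) into its two fields."""
    upper = frame[0::2]  # lines 1, 3, 5, ... (the upper, or odd, field)
    lower = frame[1::2]  # lines 2, 4, 6, ... (the lower, or even, field)
    return upper, lower

def deinterlace_line_double(frame):
    """Keep only the upper field and repeat each line to fill the frame."""
    upper, _ = split_fields(frame)
    return [line for line in upper for _ in (0, 1)]

frame = ["line1", "line2", "line3", "line4"]
print(split_fields(frame))             # (['line1', 'line3'], ['line2', 'line4'])
print(deinterlace_line_double(frame))  # ['line1', 'line1', 'line3', 'line3']
```

Line doubling discards half the vertical detail, which is why editing programs offer smarter deinterlacing methods, but it shows why the two fields must be treated as snapshots taken 1/60th of a second apart.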
Converting film
The term telecine refers to the combination of processes, equipment, and software used to perform film-to-video conversion. Pulldown techniques are used in the telecine process to convert
the 24 fps rate of film to the approximately 30 fps rate of NTSC video and to handle the conversion from progressive frames to interlaced fields.
Pulldown performs its magic by inserting redundant fields as the film is being transferred, without speeding the film up. Here’s how it works:
1. Film frame 1 transfers to the first two fields of video frame 1.
2. Film frame 2 transfers to the two fields of video frame 2 as well as the first field of video frame 3.
3. Film frame 3 transfers to the second field of video frame 3 and the first of video frame 4.
4. Film frame 4 transfers to the second field of video frame 4 and the two fields of video frame 5
and the process repeats.
By inserting two extra fields every 1/6th of a second, four film frames fill five video frames, and
24 frames fill 30.
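The four steps above can be simulated in a few lines of Python (a sketch with invented names, not a production telecine):

```python
def pulldown_2_3(film_frames):
    """Allocate film frames to video fields with a 2-3 cadence,
    then pair successive fields into interlaced video frames."""
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 2 if i % 2 == 0 else 3  # alternate 2 fields, then 3
        fields.extend([frame] * copies)
    return [tuple(fields[j:j + 2]) for j in range(0, len(fields), 2)]

# Four film frames fill five video frames (ten fields):
print(pulldown_2_3(["f1", "f2", "f3", "f4"]))
# [('f1', 'f1'), ('f2', 'f2'), ('f2', 'f3'), ('f3', 'f4'), ('f4', 'f4')]
```

Note how video frames 3 and 4 each mix fields from two different film frames, which is why footage that has been through pulldown shows a telltale stutter when examined field by field.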
Frame rate differences and video interlacing complicate the process of converting film to video when
motion pictures are to be shown on TV. It’s even more
complicated when converting from video to film.
Standards conversion (converting from one format
to another) often causes motion artifacts and softens
crisp images.
The advent of digital television has underscored the
need for a better way to move between formats. In
the U.S., the digital television mandate by the FCC
allows broadcasters to choose from among 18 different SD and HD formats. These formats are specified
with a number indicating the lines of vertical resolution, and a letter indicating whether the display is
interlaced (i) or progressive (p). CBS and NBC have
chosen 1080i; ABC prefers 720p; and FOX works with
480p, 480i, and 720p. It sounds like the commencement of chaos, doesn’t it? Imagine the poor producer
who must be prepared to deliver in one or more of these formats.
The benefits to the film industry of having a suitable
digital production format are enormous. Savings on
the cost of film stock and film processing, not to
mention the time that processing requires, are huge.
When digital effects are incorporated into footage,
film must be digitized anyway, so being able to begin
with digital material makes sense.
The solution may be the 24P format—a 24 fps, progressive scanned HD image with 1080 lines of vertical
resolution. 24P digital cameras are delivering major
motion picture quality content, such as Star Wars: Episode III. Film is easily converted to 24P video, because
film is 24 fps and compatible with progressive scanning. Because it’s digital, you can make a single digital
master from which multiple formats can be produced
with virtually no generation loss: from NTSC or PAL, to
any of the HD formats, even film.
3-2 or 2-3 pulldown is used to match the frame rate of film (24 fps) to that of video (29.97 fps) for transferring.
The term cadence refers to the allocation of frames to fields. With a 2-3 cadence, the first film
frame is transferred to 2 fields, the second frame to 3 fields; with a 3-2 cadence, the first frame is
transferred to 3 fields, the second frame to 2.
For PAL telecine, 2-2 pulldown is used, which allocates each film frame to two video fields, yielding the required 50 fields per second. The film runs through the telecine machine at 25 fps, 4%
faster than normal speed, to compensate for the frame rate difference.
Pulldown is also used to convert 24P video to 30p and 60i formats.
Video resolution
The quality of the images you see in film and video is not only dependent upon frame rate. The
amount of information in each frame, or image resolution, is also a factor. All other things being
equal, a higher resolution results in a better quality image.
The resolution of analog video is represented by the number of scan lines per image, that is, the number of lines the electron beam draws across the screen, also known as the vertical resolution.
• NTSC is based on 525 vertical lines of resolution, displayed as two interlaced fields. However,
some of these lines are used for synchronization and blanking, so only 486 lines are actually
visible in the active picture area. The blanking part of a television signal can be seen as the
black bars above and below the active area.
• PAL is based on 625 vertical lines of resolution, displayed as two interlaced fields. As with
NTSC, some of these lines are used for synchronization and blanking, so only 576 lines are
actually visible in the active picture area.
Horizontal resolution in analog video refers to the number of black-to-white transitions or the
detail that can be displayed on each scan line. The horizontal resolution of video equipment is
usually described in lines of resolution to depict an equivalent resolution to the actual lines of
resolution in the vertical domain. Be careful when comparing the horizontal resolution of an
analog signal (NTSC or PAL) to a digital signal. With an analog signal, you are really looking at
bandwidth (or frequency response), which translates to the sharpness of an image or how much
detail can be seen. With digital video, the horizontal resolution is more easily measurable, because
there are a fixed number of pixels.
A section of a resolution chart expanded to show how resolution degrades with DV25 and MPEG-2 compression,
and after being transferred to VHS tape.
Analog signals have no limitation on horizontal resolution, and many high-end studio cameras
can produce images with very high horizontal resolution (greater than 720 lines). However, when
the signal is recorded on an analog tape, processed, or run through a broadcast transmitter, you
effectively reduce the horizontal resolution. In general, the horizontal resolution of VHS is considered to be ~250 lines, and SVHS is considered to be ~410 lines.
Resolution for digital images, on computer displays and digital television sets, for example, is
represented by a fixed number of individual picture elements (pixels) on the screen, and is often
expressed as a dimension: the number of horizontal pixels by the number of vertical pixels. For
example, 640 x 480 and 720 x 486 are full-frame SD resolutions, and 1920 x 1080 is a full-frame
HD resolution.
In the U.S., there is one FCC-approved standard for analog video and 18 standards for digital TV.
Currently, the three most commonly encountered digital resolutions are:
• 480p* or 480 lines of vertical resolution, scanned progressively
• 720p or 720 lines of vertical resolution, scanned progressively
• 1080i or 1080 lines of vertical resolution, interlaced
* Note that when the letter P is used to denote progressive in the 24P video format, it is capitalized; but when it
is used to refer to television display resolution, it is typically lower case, such as 720p.
Another factor to be aware of regarding resolution on digital TVs is the physical size of the
screen. There are more dots placed horizontally across a 50-inch plasma screen than on a 27-inch
direct-view screen. Although a 1080i image may be fed to an HDTV display, that display may
not be able to reproduce all the dots in the image received. Digital TVs reprocess (upconvert or
downconvert) the image to conform to the number of dots actually available on the screen. A
1080i image created for HDTV with a resolution of 1920 x 1080 may be downconverted to fit
1366 x 768, 1280 x 960, 1024 x 768, or any other pixel field. As you may expect, downconversion
results in a loss of detail.
You may find yourself working with a wide variety of frame rates and resolutions. For example, if
you are producing a video that is going to be distributed on HDTV, DVD, and the web, then you
need to produce videos in three different resolutions and frame rates. Frame rate and resolution
are very important in digital video because they determine to a great extent how much data must
be stored and streamed to view your video. There are often trade-offs between the desire for great
quality video and the requirements imposed by storage and bandwidth limitations.
More data is required to produce higher-quality images and sound.
Aspect ratios
The width-to-height ratio of an image is called its aspect ratio. The 35mm still photography film
frames on which motion picture film was originally based have a 4:3 (width:height) ratio, which
is often expressed as 1.33:1 or 1.33 aspect ratio (multiplying the height by 1.33 yields the width).
From 1917 to 1952, the 4:3 aspect ratio was used almost exclusively to make movies and to
determine the shape of theater screens. When television was developed, existing camera lenses
all used the 4:3 format, so the same aspect ratio was chosen as the standard for the new broadcast
medium. This 4:3 format is now known as fullscreen TV.
In the 1950s, the motion picture industry began to worry about losing audiences to broadcast
television. So the movie studios began to introduce a variety of enhancements to give audiences a
bigger, better, and more exciting experience than they could have in their own living rooms. One
of those enhancements was a wider screen. Studios produced widescreen films in a number of
scope formats, such as Cinemascope (the original), Warnerscope, Techniscope, and Panascope.
Widescreen became a hit with audiences, and eventually a standard aspect ratio of 1.85 was
adopted for the majority of films.
One problem with the widescreen format was that it did not translate well to television. For many
years, when widescreen films were shown on television, the sides of the image were lopped off to
accommodate the 4:3 ratio of TV. Eventually, letterboxing came into vogue, whereby black bars
were positioned above and below the widescreen image, in order to fit the full width of the image on
the TV screen.
4:3 aspect ratio of fullscreen TV
16:9 aspect ratio of widescreen TV
A comparison of the sizes, shapes, and resolutions of standard frame dimensions: CIF (Common Intermediate Format), QCIF (Quarter CIF), NTSC DV, and two HD formats
Today, as a result of the popularity of letterboxed films on DVD, broadcast TV, and HDTV, many
new televisions come with wider screens. The aspect ratio of widescreen TV is 16:9 (1.78), which
is well-suited for the most-popular film aspect ratio of 1.85. For movies with wider aspect ratios,
such as 2.35:1, the new TVs display narrow letterbox bars.
[Table: broadcast standards, listing each format’s aspect ratio, horizontal resolution (pixels per line), vertical resolution (scan lines), frame rate (interlaced or progressive), and bit rate (megabits per second). The analog rows are NTSC (USA, Canada, Japan, Korea), 525 scan lines (480 visible); PAL (Australia, China, most of Europe, South America), 625 scan lines (576 visible); and SECAM (France, Middle East, much of Africa), 625 scan lines (576 visible). The remaining rows cover the 18 U.S. DTV formats, with bit rates ranging from roughly 3 to 18 Mbps.]
*330 lines of resolution assumes that the bandwidth of the analog video signal has been limited to 4.2MHz for transmission over the air.
Broadcast standards including the 18 DTV options authorized in the U.S. by the FCC.
Video color systems
Most of us are familiar with the concept of RGB color, referring to the red, green, and blue
components of a color. Each pixel we see is actually the product of the light coming from a red,
a green, and a blue phosphor. Because the phosphors are very small and placed close together,
our eyes blend the primary light colors into a single color. The three color components are often
referred to as the channels.
Computers typically store and transmit color with 8 bits of information for each of the red,
green, and blue components. With these 24 bits of information, over 16 million (2^24) different
variations of color can be represented for each pixel. In the computer world, this type of representation is known as 24-bit color; in the video industry, it is referred to as 8-bit-per-channel color.
While 8-bit-per-channel color is in common use, much of today’s high-end professional hardware
and software deliver even higher quality color with 10-bit-per-channel color. An 8-bit number
has 2^8, or 256, possible values, while a 10-bit number has 2^10, or 1024, possible values. Therefore,
10-bit-per-channel color has the potential for as much as four times the color resolution of 8-bit
color. If you are concerned with the very highest quality output, you can even work in 32-bit-per-channel color in After Effects. When you work with high-resolution images that use a wide range
of colors, such as when you’re creating film effects or output for HDTV, the difference is easily
visible. Gradual transitions between colors are smoother with less visible banding, and more
detail is preserved, which is critical when applying filters and special effects.
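The arithmetic behind these bit-depth figures is easy to check (an illustration only; the function name is invented for this example):

```python
def channel_values(bits):
    """Number of distinct values a color channel can hold at a given bit depth."""
    return 2 ** bits

for bits in (8, 10):
    per_channel = channel_values(bits)
    print(f"{bits}-bit: {per_channel} values per channel, "
          f"{per_channel ** 3:,} total colors")
```

The jump from 256 to 1024 steps per channel is what smooths out the banding in gradients mentioned above.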
Like computer monitors, televisions also display video using red, green, and blue phosphors.
However, television signals are not transmitted or stored in RGB. Why not? When television was
first invented, the system was optimized to work in only black and white. The term black-and-white is actually something of a misnomer because what you really see are the shades of gray
between black and white. With black-and-white television, the only information being transmitted is brightness or luminance.
When color television was being developed, it was imperative that the new system be compatible
with the black and white system, so that millions of people didn’t have to throw out the sets they
already owned. Instead of transmitting RGB, the component signal is converted to something
called YUV. The Y component is the same old luminance signal that is used by black-and-white
televisions, and the U and V components contain the color information or chrominance. The
two color components determine the hue of a pixel, while the luminance component determines
its brightness. With a YUV signal, a color television can reproduce a color image, and a black-andwhite television can simply ignore the U and V components and display a black-and-white image.
YUV is typically associated with analog video, whereas YCrCb is used in the digital realm.
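The conversion from RGB to YUV can be sketched using the widely published BT.601 luma weights; the function name and the normalized 0-to-1 value ranges here are assumptions for this example, not a broadcast-accurate implementation:

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0..1) to YUV.

    Y is a weighted sum of R, G, and B (the BT.601 luma weights);
    U and V are scaled blue and red color-difference signals.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# A neutral gray carries luminance but zero chrominance:
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
print(y, u, v)
```

Notice that green contributes the most to Y, reflecting the eye's greater sensitivity to green light, and that any gray produces U = V = 0, which is exactly what lets a black-and-white set ignore the color components.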
Color sampling
When working with RGB images, the same number of bits is used to store the three color
components. When working with YCrCb video, on the other hand, a phenomenon of human
perception is used to reduce the amount of data required for each pixel. Because the eye is much
more sensitive to changes in the luminance of an image than to its chrominance, broadcast-quality
video uses only half as much color information as it does luminance information. Using less
color information helps save bandwidth for transmission, as well as storage space.
In technical terms, the NTSC broadcast specifications call for video to provide 8-bit samples at
13.5 MHz with a 4:2:2 sampling ratio. What does all this mean?
• 13.5 million times per second an 8-bit sample of the black-and-white or luminance (Y) component is taken.
• 4:2:2 is the ratio between the luminance (Y), and the Cr and Cb color samples. It means that
for every four samples of the luminance (Y) component, two samples of each of the two color
components (Cr and Cb) are taken—360 samples per scan line.
YCrCb can be reduced even further to what is known as 4:1:1 color, in which for every four samples of the luminance (Y) component, one sample of each of the two color components (Cr and
Cb) is taken—180 samples per scan line. 4:1:1 color provides adequate quality for most consumer
or prosumer (nonbroadcast) needs. The reduced information in 4:1:1 color is not a problem in
most usages, but it can cause issues such as visual artifacts around composited images.
Figure 3 shows what happens when each pixel is sampled from right to left across each horizontal line.
Figure 3: Color sampling
As you can see, in 4:4:4 color sampling, each pixel contains a Y, Cr, and Cb sample. With 4:2:2,
each group of four pixels contains four Y samples, two Cr samples, and two Cb samples. With
4:1:1, each group of four pixels contains four Y samples, one Cr sample, and one Cb sample (180 of each per scan line).
How the Y, U, and V color components are sampled to convert from 4:4:4 to 4:2:2, 4:1:1, and 4:2:0. Note that 4:2:0 sampling treats odd and even lines differently.
You may also encounter 4:2:0 color. This notation does not mean that the second chrominance
(Cb) component is not sampled. In 4:2:0 color, the chrominance resolution is half the luminance resolution in the horizontal domain (like 4:2:2 color), but is also half the resolution in the
vertical domain. The original 4:2:0 color space is only used for progressively scanned images,
because reduced vertical resolution means that every other line has no chrominance component.
If 4:2:0 were used for interlaced video, then all the color would be removed from the second field.
Video codecs that use 4:2:0 (MPEG-2 and Microsoft’s VC-1) get around this limitation by using a
modified 4:2:0 scheme, in which the locations of the chrominance pixels are shifted so that color
information is evenly divided between fields.
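To see how much data each scheme saves, the per-frame sample counts can be tallied as follows (a sketch; the divisor table and function name are assumptions for this example):

```python
def frame_bytes(width, height, scheme):
    """Bytes per frame with 8-bit samples under common subsampling schemes."""
    # (horizontal divisor, vertical divisor) applied to the chroma planes
    divisors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:1:1": (4, 1), "4:2:0": (2, 2)}
    h, v = divisors[scheme]
    luma = width * height                      # one Y sample per pixel
    chroma = 2 * (width // h) * (height // v)  # Cr plane plus Cb plane
    return luma + chroma

for scheme in ("4:4:4", "4:2:2", "4:1:1", "4:2:0"):
    print(scheme, frame_bytes(720, 480, scheme))
```

Running this for a 720 x 480 frame shows that 4:1:1 and 4:2:0 carry the same total amount of chroma data, one quarter of the 4:4:4 amount; they differ only in whether the reduction is taken horizontally or split between the two axes.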
Color space issues
When producing video, knowledge of color sampling is a plus, but you will rarely have to
think about it. Typically, the only time you’ll run into problems is when converting or crossing
between color spaces. In most situations, conversion happens automatically and the result is
acceptable or unnoticed. However, you should be aware of one situation in particular that can
significantly reduce color fidelity.
The DV video format, discussed in the next section, uses 4:1:1 color, while DVDs use 4:2:0 color.
Quite often, producers shoot on DV to reduce costs but distribute on DVD, because of its wide
availability. The problem arises when converting from DV (4:1:1) to DVD (4:2:0). Here’s why: The
color components in 4:1:1 are reduced to 1/4 resolution in the horizontal domain, and the color
components in 4:2:0 are reduced to 1/4 resolution by going to 1/2 resolution in both the horizontal and vertical domains. When you convert directly from 4:1:1 to 4:2:0, a great deal of color
resolution is lost. To avoid the loss of resolution when video is destined for DVD, make sure your
source video uses the 4:2:2 or 4:2:0 color space.
Video compression
Whether you use a capture card or a digital camcorder, in most cases, your digitized video will
be compressed. Compression is necessary because of the enormous amount of data required for
uncompressed video.
A single frame of uncompressed video takes about 1MB of space to store. You can calculate
this by multiplying the horizontal resolution (720 pixels) by the vertical resolution (486 pixels),
and then multiplying that by 3 bytes for the RGB color information. At the standard video rate
of 29.97 fps, uncompressed video consumes about 30MB of storage for each and every second
of video and over 1.5 gigabytes (GB) to hold a minute of video. In order to view and work with
uncompressed video, you would need a very expensive disk array, a very fast CPU, and a whole
lot of RAM to move and process all that data in real time.
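The storage figures above follow directly from the frame dimensions; here is the calculation spelled out (an illustration only):

```python
width, height = 720, 486   # SD frame dimensions used in the text
bytes_per_pixel = 3        # 8 bits each for R, G, and B
fps = 29.97

frame_size = width * height * bytes_per_pixel
per_second = frame_size * fps
per_minute = per_second * 60

print(frame_size)                        # 1049760 bytes, about 1 MB per frame
print(round(per_second / 1e6), "MB/s")
print(round(per_minute / 1e9, 1), "GB/min")
```

The per-minute figure, nearly 1.9 GB, is why the text calls for disk arrays and why compression is unavoidable for most workflows.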
The goal of compression is to reduce the data rate while keeping the image quality high. The
amount of compression depends on how the video will be used. The popular DV25 format
compresses at a 5:1 ratio. In other words, the video is compressed to one-fifth of its original size.
Video you access on the web might be compressed at 50:1 or even more. Generally, the higher the
compression ratio, the lower the quality.
How compression works
Before applying compression, there are a number of ways to reduce the size and bit rate of a video
file or stream. One method is to simply reduce the dimensions of each video frame. A 320 x 240
image has only one fourth the number of pixels of a 640 x 480 image. Reducing the frame rate
will also reduce the data rate. An uncompressed 15 fps video has only half the data of a 30 fps
video. These simple methods won’t work, however, if a video is to be displayed on a television
monitor at full resolution and frame rate.
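A quick sketch of how these two reductions multiply, using the sizes and rates mentioned above:

```python
# Data reduction from shrinking the frame and halving the frame rate.
pixel_ratio = (320 * 240) / (640 * 480)   # one fourth the pixels
rate_ratio = 15 / 30                      # half the frames per second

combined = pixel_ratio * rate_ratio       # together, 1/8 of the original data
print(combined)  # 0.125
```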
(Figure: color detail is lost when converting from DV25 compression, which uses 4:1:1 sampling, to MPEG-2, which uses 4:2:0.)
Beyond reducing the dimensions and frame rate, compression is almost always required to bring video down to a workable size. The compression and decompression of audio and video are handled by a codec, which may be implemented in hardware (for example, in DV camcorders and capture cards) or in software. Some codecs use a fixed compression ratio and therefore compress video at a fixed data rate. Others can compress each frame
differently depending on the content, resulting in a data rate that varies over time. Many codecs
enable you to select a quality setting that controls the data rate, or a data rate that controls the
quality. Such settings can be useful for editing. For example, you may want to capture a large
quantity of video at a low-quality setting to edit a rough cut of your program, and then recapture
just the portions that will go into the final edit at a high-quality setting. This process enables you
to edit large quantities of video with a smaller hard disk, because you do not need to store the
high-data-rate video that will not be used.
Most codecs compress video using intraframe compression. With intraframe, or spatial, compression, each frame of video is compressed separately. Many video compression schemes start
by discarding color detail in the picture. As long as this type of compression is not too severe, it
is generally acceptable.
A number of codecs also use interframe, or temporal, compression. This type of compression
takes advantage of the fact that any given frame of video is often very similar to the frames before
and after it. Instead of storing all complete frames, interframe compression saves just the image
data that is different, by generating three types of frames:
• I frames (which serve as the keyframes) contain a full representation of a frame of video and
use intraframe compression. I frames preserve more information than P or B frames and are,
therefore, the largest, in terms of the amount of data needed to describe them.
• P frames are predicted frames, computed from previous frames, and each may require less than
a tenth of the data needed for an I frame.
• B frames, or bidirectional frames, are interpolated from previous frames and those that follow.
B frames can be even smaller than P frames.
A typical sequence might look something like this: I B B P B B P B B P B B, followed by the next I frame.
How each frame is compressed depends on the type of content. If the content is fairly static (for
example, a talking head shot against a plain, still background with not much changing from
frame to frame), then few I frames will be needed, and the video can be compressed into a relatively small amount of data. But if the content is action-oriented (for example, a soccer game, in
which either the action or the background moves or changes rapidly or dramatically from frame
to frame), then more I frames are required, a greater amount of data is needed to maintain good
quality, and the video cannot be compressed as much.
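To make the savings concrete, here is a rough sketch in Python; the per-frame byte counts are hypothetical (real sizes depend on the codec and the content):

```python
# Estimated data for one group of pictures versus all-intraframe coding.
I_SIZE = 100_000          # bytes for an I frame (hypothetical)
P_SIZE = I_SIZE // 10     # "less than a tenth" of an I frame
B_SIZE = P_SIZE // 2      # B frames can be even smaller (hypothetical)

gop = "IBBPBBPBBPBB"      # a typical 12-frame sequence
sizes = {"I": I_SIZE, "P": P_SIZE, "B": B_SIZE}

interframe_total = sum(sizes[frame] for frame in gop)
intraframe_total = I_SIZE * len(gop)   # if every frame were an I frame

print(interframe_total, intraframe_total)  # 170000 1200000
```

With these assumed sizes, interframe compression needs roughly a seventh of the data that coding every frame as an I frame would.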
(Figure: B and P frames contain only those portions of the adjacent I frames that change, reducing the amount of data required for a video. If more than half of a frame changes, an I frame is automatically generated.)
DV25 compression
DV25 is the compression format used for the standard DV format employed by most consumer
and prosumer camcorders. DV25 is compressed at a fixed rate of 5:1 and delivers video data at 25
megabits per second (Mbps). Audio and control information is also included in the data stream, bringing the total data rate to about 3.6 megabytes (MB) per second. This
means that one hour of DV25-compressed footage will require about 13 billion bytes (gigabytes
or GB) of storage. It is impressive to realize that each 60-minute mini-DV cassette is actually
13GB of offline storage. DV25 compression uses 4:1:1 color sampling. The audio is uncompressed,
and there are two stereo audio pairs. The audio can be digitized at either 12 bits with a sampling
rate of 32kHz or 16 bits with a sampling rate of 48kHz. You should generally use the highest
quality setting (16 bit, 48kHz).
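These figures are easy to verify with a short calculation (numbers from the paragraph above):

```python
# DV25 storage, from the figures in the text.
TOTAL_BYTES_PER_SECOND = 3_600_000   # ~3.6MB/s including audio and control

hour_bytes = TOTAL_BYTES_PER_SECOND * 3600
print(f"{hour_bytes / 1e9:.1f} GB per hour")  # 13.0 GB per hour
```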
MPEG-2 compression
MPEG stands for the Moving Picture Experts Group, an organization of film and video professionals involved in establishing industry standards. The 2 refers to version 2 of the compression standard.
MPEG-2 can provide very high-quality video. Readily supporting data rates in excess of 8 Mbps
(equivalent to 1MB per second), MPEG-2 is ideal for DVD with its high-end data rate of 9.8
Mbps. It is also one of the compression schemes used in the upcoming high definition optical
disc formats, and is used in the new HDV format.
While MPEG-2 is an excellent compression choice for distribution, it was only recently that
computer speed and memory reached the point where MPEG-2 video could easily be edited. And
only recently has there been a need to edit MPEG-2 video. With the introduction of the HDV
format, the impetus to move toward MPEG-2 editing has increased. HDV enables producers on
a budget to produce video in high definition. Adobe Premiere Pro includes support for native
HDV editing, which means you can capture and edit high definition video with one of the new
relatively inexpensive HDV camcorders, a standard computer, and Adobe Premiere Pro.
It is important to note that not all MPEG-2 codecs are the same. MPEG-2 is not a patent; it is
a set of standards and specifications that must be met for the codec to qualify as MPEG-2 and
for the encoding and decoding sides of the process to mesh. Codec developers have created and
continue to create a wide variety of applications based on MPEG standards, some more efficient
than others. This variance is most significant when considering the encoding side of the process,
which can greatly impact the quality of the resulting decoded video. As long as a stream conforms to the MPEG-2 standard, the decoder chip in playback devices does not need to change; better encoding technology simply yields better-looking video on existing players.
Getting video into your computer
Because a computer only understands digital (binary) information, video has to be converted to
a supported digital format before you can work with it.
• Analog. Traditional (analog) video camcorders record what they “see and hear” in an analog
format. So, if you are working with an analog video camera or other analog source material
(such as videotape), you will use a video capture device to digitize and then store the video
on your computer. Most capture devices are cards that you install in your computer. A wide
variety of analog video capture cards are available, with many different features and levels of
quality, including support for different types of video signals and formats, such as composite
and component. Make sure you understand what you are getting. An inexpensive capture card
may lack features, produce low-quality video, and be incompatible with your editing software.
The digitization process may be controlled by software such as Adobe Premiere Pro. Once the
video has been digitized, it can be manipulated in your computer with Adobe Premiere Pro
and After Effects, or other software. After you have finished editing, you can then produce your
final video for distribution by exporting a digital format, or by recording to an analog format
like VHS or Beta-SP.
MPEG-1, limited to a 352 x 240-pixel frame size, was
the first MPEG standard established and is still used
for CD-ROMs, VideoCD (VCD), and some web video.
The specifications for MPEG-3 were abandoned, as
the industry moved on to complete MPEG-4. Note
that MP3, which stands for MPEG-1, Layer 3, is an
audio-only compression format and should not be
confused with MPEG video formats.
MPEG-4 Part 10, better known as AVC and H.264, is
currently in use in the latest releases of the QuickTime
and Microsoft® Windows Media architectures. The
codec facilitates streaming video on the web and over
wireless networks, as well as providing mechanisms
for multimedia interactivity. MPEG-4, with its lower-bit-rate approach, is one of three codecs adopted for
HD-DVD and Blu-ray DVD for high-definition video.
The names MPEG-5 and MPEG-6 will not be used; the
next release is expected to be MPEG-7, which will not
advance compression, but will focus on the incorporation of metadata, enabling sophisticated indexing
and retrieval of multimedia content.
MPEG-21, also in the planning stages, is expected to
create a complete structure for the management and
use of digital assets, incorporating e-commerce that
will make sharing creative products more commercially viable.
• Digital. Digital video camcorders have become widely available and affordable for both consumers
and professionals. Digital camcorders translate what they record into digital format inside the
camera. Your computer can work with this digital information as it is fed straight from the
camera via a digital interface such as IEEE 1394 or SDI. Digital capture is far easier and less
expensive than analog capture, and produces much better results. A capturing program, such
as Adobe Premiere Pro, can also control playback of a device through the IEEE 1394 interface
or through RS-232C and RS-422 ports.
A few words about analog video connections
The music industry has already converted to digital. Most music today is mastered, edited, and
distributed in digital form, primarily on CD and via the web. While video today is generally
captured digitally, it doesn’t mean that you can ignore the analog video world. Many professional
video devices are still analog, as well as tens of millions of consumer cameras, tape devices, and
of course, televisions. You should understand some of the basics of analog video.
Because of the noise concerns mentioned on page 1 of this primer, in analog video the connection between devices is extremely important. There are three basic types of analog video connections. Typically, the higher the quality of the recording format, the higher the quality of the connection type.
• Composite: The simplest type of analog connection carries the complete video signal over a cable with a single wire. The luminance and chrominance information is combined into one signal using the NTSC, PAL, or SECAM standard. Though this is the most common type of connection, composite video has the lowest quality because of the amount of processing required to merge the two signals.
• S-Video: The next higher quality analog connection is called S-Video. This cable separates the luminance signal onto one wire and the combined color signals onto another wire. The separate wires are encased in a single cable.
• Component: The highest quality analog connection is component video, in which the Y, U, and V signals are carried over separate cables.
Making digital video connections
Whichever interface you use for getting digital video into your computer from your camcorder or from any digital video recording device, it’s as simple as plug-and-play.
• IEEE 1394: Originally developed by Apple Computer, IEEE 1394 is the most common form of connection used by standard DV camcorders. Also known by the trade names FireWire® (Apple Computer) and i.LINK (Sony Corporation), this high-speed serial interface currently allows up to 400 Mbps to be transferred (and higher speeds are coming soon). IEEE 1394 cards are inexpensive to add to your computer; however, most of today’s computers come equipped with built-in ports. The single IEEE 1394 cable transmits all of the information, including video, audio, timecode, and device control, which enables you to control a camcorder or deck from the computer. IEEE 1394 is not exclusively used for video transfer; it is a general-purpose digital interface that can also be used for other connections, such as hard drives and networks.
• SDI: Serial Digital Interface (SDI) is the high-end professional connection for digital video. It was originally meant for SD, but it is now also used for HD transport of uncompressed video. SDI is typically only supported in high-end gear, although the price is dropping dramatically.
Know Your Cables and Connectors
If you are new to video, figuring out all of those audio and video cables and connectors can be as difficult as untangling a bowl of spaghetti one noodle at a time. This chart is intended to help. The pictured connectors (two audio and three video) are all male; female counterparts also exist.
• XLR connectors are used to connect microphones and other balanced audio devices and for the AES/EBU digital audio connection.
• An RCA connector is also called a phono plug and is often used to connect consumer audio and video equipment like VCRs, tuners, and CD players.
• BNC is used to connect various video sources, including analog composite, analog component, and serial digital video interface (SDI).
• The S-Video connector is for transferring video at a higher quality between devices that support S-Video, like S-VHS camcorders and video decks.
• The IEEE 1394 connector (also known as i.LINK and FireWire) is used to transfer audio and video digitally between a camcorder, digital tape recorder, and computer.
Digital video formats and camcorders
DV is often used to denote digital video in general. However, DV has typically been used to refer to a specific digital video format based on DV25 compression that primarily addresses the consumer and prosumer markets. The tape cassettes for this
standard DV format come in two sizes: one about the size of an audio cassette; the other, known
as mini-DV, about half that size. Standard DV is a standard-definition (SD), interlaced signal
using DV25 compression, which outputs a 5:1-compressed stream with a bit rate of 25 Mbps. For
NTSC, the color sampling is 4:1:1; for PAL, it’s 4:2:0.
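As a rough sanity check on the 5:1 figure, the raw 4:1:1 data rate for the DV active picture (720 x 480 for NTSC and one byte per sample, both assumptions for this sketch) works out to about five times the 25 Mbps DV25 stream:

```python
# Raw NTSC 4:1:1 data rate versus the 25 Mbps DV25 stream.
WIDTH, HEIGHT, FPS = 720, 480, 29.97   # DV active picture (assumed)

luma_samples = WIDTH * HEIGHT                    # one Y sample per pixel
chroma_samples = 2 * (WIDTH // 4) * HEIGHT       # Cb + Cr at 1/4 horizontal res
bytes_per_frame = luma_samples + chroma_samples  # one byte per sample

raw_mbps = bytes_per_frame * FPS * 8 / 1e6
print(f"raw: {raw_mbps:.0f} Mbps, after 5:1: {raw_mbps / 5:.0f} Mbps")
```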
When someone refers to a standard DV camcorder, they are usually talking about a digital video
camcorder that uses miniDV tape, records in the standard DV format using DV25 compression,
and has a port for connecting to a computer via the IEEE 1394 interface. DV camcorders are used
by consumers, prosumers, and even professionals shooting nonbroadcast-quality material (for example, events like weddings and meetings).
Digital video terminology can be confusing. As you’ll learn by reading on, there are also variations of DV that refer to professional and broadcast-quality formats.
What makes DV better than analog video?
There are many benefits of the standard DV format, particularly when compared to analog
devices like VHS decks and Hi-8 cameras:
• Superior images and sound: A DV camcorder can capture much higher quality video than
other consumer video devices. DV video provides 500 lines of horizontal resolution, compared
to about 250 for VHS, resulting in a much sharper and higher quality image. Not only is the
video resolution better, so is the color accuracy. DV sound, too, is of much higher quality. DV
can provide better-than-CD quality stereo sound recorded at a sampling rate of 48 kHz, and bit
depth of 16 bits.
• No generation loss: Since the connection to your computer is digital, there is no generation loss
when transferring DV. You can make a copy of a copy of a copy of a DV tape, and it will still be
as good as the original.
• No need for a video capture card: Because digitization occurs in the camera, you don’t need an
analog-to-digital video capture card in your computer.
• Better engineering: The quality of DV videotape is better than what analog devices provide.
Plus, the smaller size and smoother transport mechanism of the tape means DV cameras can
be smaller and have more battery life than their analog counterparts.
Is DV perfect?
The image quality of the DV format has been tested by both human and mechanical means.
This testing ranks DV quality with Beta-SP, which has been the mainstay for professional video
production for decades. But DV is not perfect.
Because the video is compressed, it may include visible quality degradations, known as compression artifacts. These artifacts, which result from the color compression, are most noticeable
around sharp color boundaries like white text on a black background. The lower color sampling
(4:1:1) in DV compression can also cause problems when performing professional compositing.
Also, compression adds noise to the picture. If DV is repeatedly decompressed and then recompressed, the quality of the image degrades noticeably. This process is different from copying DV
from generation to generation without processing, which is lossless.
While DV isn’t perfect, it is certainly the highest-quality, most cost-effective standard definition video format ever made for the average consumer and many professionals. The entire video
industry has been transformed by the low cost and high quality of the DV solution.
DV variations
There are many variations of the DV format, including but not limited to:
• Sony Digital8: A prosumer-targeted variation that offers the same data rate and color sampling
as DV25, but at a slightly lower resolution. The Digital8 camcorder is designed to accommodate
customers who want to move up to digital video, but who might have a significant investment
in analog Hi-8 movies. The Digital8 camcorder records digitally on Hi8 videotape cassettes,
but it can also play back analog Hi-8.
• Sony DVCAM and Panasonic DVPRO or DVCPRO: These formats use the same DV25 compression
as DV, but record less video on each tape. Putting less data on the tape makes the recording more durable and facilitates better interchange between devices. Both the DVCAM and
DVCPRO systems are designed with the professional in mind for applications such as electronic news gathering. The DVCAM and DVCPRO tape and tape shells are more durable than
standard DV or miniDV, and the gear is typically more rugged and higher quality overall.
• Sony DV50, Panasonic DVPRO50 or DVCPRO50, and JVC D-9 (Digital-S): As the name suggests,
DV50 video streams at 50 Mbps. The format offers 4:2:2 color sampling and lower compression
than DV25, making the video quality of this standard extremely high, suitable for the most
demanding professional broadcast purposes. Variations allow for progressive scanning.
• DV100, DVPROHD and D-9 HD: Used for HD (high definition) recording, DV100 offers a data
rate of 100 Mbps and 4:2:2 color sampling.
In October 2003, four leading video equipment
manufacturers (Canon, Sharp, Sony, and JVC) finalized
the specification for a new consumer/prosumer digital video format that records and plays back HD video
on standard DV or miniDV cassettes. Since then, HDV
camcorders have been released by all of the above
manufacturers, at steadily decreasing prices: from
$1,500 for consumer models to $10,000 for models
aimed at professionals.
HDV uses MPEG-2 compression to record 720p (progressive) or 1080i (interlaced) HD formats, supporting
frame rates of 25p, 30p, 50p, and 60p for 720p at a
data rate of 19 Mbps, and 50i and 60i for 1080i at a
data rate of 25 Mbps. Audio is recorded using 16-bit,
48 kHz, MPEG-1 Audio Layer-2 encoding at 384 Kbps.
Although the size of the picture is larger (1280 x
720 for 720p and 1440 x 1080 for 1080i), the actual
resolution is about the same as standard DV, using
4:2:0 color sampling. Newer models have added 24P
support for digital cinema use.
•Sony Digital Betacam, DigiBeta, or Betacam SX, IMX, and HDCAM: These formats are the choices
of high-end broadcast professionals. The formats provide superior image quality, and the highend equipment required to work in these formats is proportionately costly. The video interface
is SDI or HD-SDI, which provides an uncompressed bitstream at 270 Mbps for DigiBeta, and
up to 1.5 Gbps for HDCAM.
• Sony XDCAM (SD and HD) and Panasonic P2: These DV variations use the same formats as others
(DV25 or DV50), without tape. The P2 camcorders record to solid-state memory cards, and the
XDCAMs record to Professional Disc optical recording media. The biggest advantage of recording
to a disc or memory card is that you can skip the capture process entirely and perform nonlinear
editing directly from the source media—a real time-saver for broadcast news. There is little
doubt that the tapeless solutions are the future of recording media.
Camcorder basics
A video camera is called a camcorder when it includes a recording device, such as a video cassette recorder (VCR) or optical disc recorder. Most camcorders also include a microphone and
other features, such as lighting, that make them a complete production unit in one portable
package. The line between consumer and professional can be somewhat blurry, but understanding
the basics of camcorder technology will help you make the best decisions when purchasing or
selecting a camcorder for production.
The better the lens, the better the quality. Camcorders are similar to still cameras, in that a better
lens (and that usually means a more expensive one) produces clearer, sharper images. Lower-end,
consumer-targeted DV camcorders have permanent lenses that are typically not of the same
high quality as professional video camcorder lenses. If you want the flexibility of interchangeable
lenses, you’ll probably have to use a professional-grade camcorder.
Optical or digital zoom. Whether fixed or interchangeable, most camcorders come with zoom
lenses, which allow you to achieve more of a close-up view of your subject without actually moving the camera closer. But you’ll want to know if the camcorder lens you’re getting offers true
optical zoom, or only digital zoom. For true optical zoom, the lens physically varies the focal
length, which is measured in millimeters. The longer the focal length, the closer you can get to
your subject. An optical zoom gives you the highest-quality picture.
Image quality based on CCD setup:
• OK: 1 CCD with a small (1/4”) chip
• Good: 1 CCD with a large (1/3”) chip
• Very good: 1 CCD with an extra large (2/3”) chip
• Excellent: 3 CCDs with small (1/4”) chips
• Best: 3 CCDs with large (1/3”) chips
Digital zoom, on the other hand, is not really a zoom feature at all. It’s more of a cropping feature
that enlarges a small area of the image to simulate a close-up. As the image is enlarged, so are the
pixels, so what you get is degraded quality. If you want clarity, use optical zoom only.
One CCD or three? CCD stands for charge-coupled device. CCDs detect the light coming through
the lens into a camcorder and convert it into electrical signals. The factors that determine the
quality of the resulting images are: the number of CCDs, size of each CCD chip, number of active
pixel elements on each chip, and the quality of CCD electronics. Camcorders with one CCD rely
on a single chip to capture light from all three primary colors (red, green, and blue); those with
three CCDs dedicate a chip to each color and are, therefore, able to produce higher-quality images.
Expect a significant difference in price between 1-CCD and 3-CCD camcorders.
What about lux? A CCD’s responsiveness to light also impacts video quality. Lux is a measure
of illumination (reflected light) used to specify a camcorder’s low light responsiveness limit and
the amount of light recommended for achieving good quality video. The more light a camcorder
requires, the higher its lux rating. Some camcorders have infrared (IR) capabilities that will
record in 0 lux situations (for example, at night). You may also want to note a camcorder’s signal-to-noise ratio: a camcorder may be able to achieve a low lux rating by producing a very noisy picture. A higher signal-to-noise ratio produces better quality images in low light conditions, while a low ratio records images that appear grainy or smudged.
Optical image stabilization is best. There are three kinds of image stabilization in handheld
camcorders: optical, digital, and electronic. Optical image stabilization uses a system of motion
detectors and lenses to mechanically reduce the effects of vibration and camera movement.
Electronic and digital image stabilization merely manipulates the digital image and may degrade
video quality. If you plan to record your summer vacation or amateur sports events, optical
image stabilization may not be an important issue for you; but if you want professional quality,
choose optical image stabilization.
Want to override automatic settings? Camera controls such as zoom, focus, audio gain, white
balance, exposure, and shutter speed are likely to be adjusted automatically in most consumer
camcorders. If you want to do more professional work, be sure your camcorder lets you override
automatic mode, so that you can adjust camera controls manually.
What about widescreen? Many camcorders let you toggle between standard 4:3 and widescreen
16:9 modes. If widescreen is important to you, find a camera that provides anamorphic widescreen for a better-quality image (read about anamorphic in the sidebar on this page).
Those little LCD screens are mighty small. If you plan to shoot professional-quality video, you’ll
want a video output to support an external video monitor so you, your crew, and possibly a client
can have a clear review of the tape.
Do you want progressive scan mode? DV camcorders with progressive scan mode are becoming
more popular. If you want to shoot 24P (see the sidebar on 24P on page 4 of this primer), you’ll
need progressive scan capability, but you’ll want to be sure your progressive scan camcorder can
also shoot at a full 29.97 fps. Progressive scan video is better for desktop editing and for delivery
over progressive scan monitors (DTV or computer viewing) because it eliminates interlace artifacts. It’s also much better if you plan to capture still images from your video.
What about HD? High-definition video used to be used exclusively by a select group of professional producers with lots of money to spend. Today, a number of HD camcorders are available
from under U.S. $2,000 to over $10,000, and the prices are sure to drop as the selection increases.
This new breed of HD camcorder uses the HDV format, which uses MPEG-2 compression and
records onto miniDV tapes. For higher quality, a wide variety of HD camcorders are available
using formats such as Panasonic’s DVCPRO HD and Sony’s HDCAM. Before investing in SD, you
should look into HD.
How about audio recording? The DV specification allows for up to four channels of 32 kHz, 12-bit
audio (four mono tracks or two stereo tracks) or two channels of 48 kHz, 16-bit audio (better-than-CD quality). Most camcorders support both of these formats. If you want the best audio,
make sure the camcorder has an audio level meter and the ability to adjust audio levels manually.
You’ll also want a jack so you can plug in high-quality headphones to monitor the audio.
Cameras and camcorders record the images that
comprise film or video in standard 4:3 aspect ratio. If
your video camcorder has a switch to toggle between
the standard 4:3 and widescreen 16:9 aspect ratios,
it may simply be masking off (that is, letterboxing)
the top and bottom of the image as it records. If that
is the case, then 25% of the available pixels (not to
mention bandwidth) are being squandered on black
bars, meaning that there are fewer pixels available for
the actual image. With only 75% of the available pixels
used for essential information, your video will suffer a
loss in resolution.
Anamorphic video, on the other hand, uses all
available pixels to store as much video information as
possible, so your image resolution is as high as possible. How does it work?
If your camcorder provides anamorphic widescreen
or if you can get an anamorphic adapter, then as the
image is recorded, it is squeezed horizontally to fill
the 4:3 space. When the image is played back on a
widescreen TV, the display device stretches the image
back to normal. The stretching is accomplished in the
display device using nonsquare pixels—pixels that
are actually rectangular. To make post-production
easy, Adobe Premiere Pro and After Effects support
anamorphic aspect ratios.
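The sidebar’s 75% figure follows directly from the two aspect ratios; a minimal check:

```python
# Fraction of a 4:3 frame occupied by a letterboxed 16:9 image.
frame_w, frame_h = 4, 3
image_w, image_h = 16, 9

# Letterboxing keeps the full width, so only the used height shrinks.
used_fraction = (frame_w * image_h) / (image_w * frame_h)
print(used_fraction)  # 0.75; the remaining 25% goes to black bars
```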
(Figures: shot in 4:3 mode; shot in nonanamorphic 16:9 mode, where only 75% of the pixels are used.)
Consumer camcorders use a mini-plug microphone connector like the ones used for headphones
for portable radios. The audio system that uses this type of connector is prone to electrical
interference, so you should avoid running cables longer than 10 feet when using an external
microphone. Mini-plugs and jacks are also easier to break and the connection is not particularly
dependable. Professional camcorders come with low-impedance, balanced-line inputs using XLR
connectors that provide a much better connection and higher audio quality. If you want to use a
professional microphone, you can insert an adapter between the XLR connector on the microphone and the camcorder’s mini-jack.
Do you need analog in? Some digital camcorder models let you input an analog video signal, usually
through an S-video connector. The camcorder then digitizes the video, and you can use the IEEE
1394 connector to send the video directly to your computer for editing. With the analog input option,
you can use your camcorder instead of a capture card for analog-to-digital capturing.
(Figure: in anamorphic 16:9 mode, all pixels are used; the image is squeezed, but looks right when played back.)
Configuring your system
Whether you’re a professional or a hobbyist, choosing the right combination of software and
hardware can be a tricky guessing game about future technology developments. You need to
purchase enough power, storage, and flexibility to meet your current needs, while being mindful
that technology is inexorably advancing, so you had better conserve enough capital to keep your
systems current, as well as to fund anticipated growth. Not long ago, the more money you paid,
the more capability you bought. But the difference in results is increasingly a matter of the artist’s vision rather than the cost of the system being used. Today, you can put
together a powerful, desktop-based video production setup for under U.S. $5,000. Here are a few
more questions to consider:
What kind of video will you be putting into the computer? Will you only be working with DV
footage? Do you need to edit footage captured in component or composite video? For example,
many industrial and broadcast users need to capture and record video in the component format
for use with Beta-SP decks, in addition to DV. It would make little sense for such a user to have a
DV-only system.
How time-critical will your productions be? When you add effects like transitions and titles to
video, they usually have to be rendered by the computer into their final form. The rendering
time can vary from minutes to hours depending on the complexity of your productions. If you
are producing home videos, the time lag isn’t much of a problem. But, if you have clients looking
over your shoulder asking for changes, you might want to purchase a system that can produce
these effects instantly—in real time.
How much video will you be working with? Remember that one hour of standard DV video takes
about 13GB of disk storage. If you are producing a one-hour documentary, you’ll want at least
enough storage for several hours of raw footage. You will often find yourself working with four or
five times as much raw footage as you will eventually use. If you are doing professional editing,
you could be working with 20 or even 50 times the amount of final footage! Of course, you don’t
need to have all of it available at all times, but you will need to think about the amount of footage
you’ll need to access when configuring your storage.
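A back-of-the-envelope storage plan might look like this (the shooting ratio is an assumption; adjust it to your own workflow):

```python
# Rough storage planning for a DV project.
DV_GB_PER_HOUR = 13      # one hour of DV25 footage, per the text
final_hours = 1          # a one-hour documentary
shooting_ratio = 5       # five hours of raw footage per finished hour (assumed)

raw_storage_gb = final_hours * shooting_ratio * DV_GB_PER_HOUR
print(raw_storage_gb)  # 65
```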
How will you distribute your finished video? Do you intend to distribute on film, in SD or HD, on
VHS tape, DVD, or the web?
It’s important that you choose a computer with a CPU (central processing unit) that’s powerful
enough to meet the demands your creative process will place on it. Post-production is all about
processing and moving huge amounts of data, while maintaining a steady data rate. Rendering complex edits, transitions, filters, composites, and effects places enormous demands on the
system. Although the video captured by your system is compressed, it must be decompressed to
be processed, and then, once rendered, it must be recompressed to be saved and stored.
For example, each frame of uncompressed standard NTSC video consists of 720 x 486 pixels. That's 349,920 pixels per frame. There are 29.97 frames in every second of video,
so that’s approximately 10,500,000 pixels per second. Each pixel is made up of 3 bytes of color
(RGB), meaning that nearly 31,500,000 bytes (31.5MB) of information must be processed
for every second of video that’s altered in any way. Even for something as seemingly simple as
adjusting brightness or contrast, millions of calculations must be made to get the job done. The
speed at which the task can be completed is dependent upon the power and speed of the processor. Moreover, your creative process, flowing from task to task, can only proceed as rapidly as
each operation is executed, so that it, too, is ultimately dependent on the processor.
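The arithmetic above can be sketched in a few lines of Python; the figures come straight from the text, and byte counts are treated as decimal megabytes:

```python
# Back-of-the-envelope data rate for uncompressed NTSC video.
WIDTH, HEIGHT = 720, 486     # NTSC frame dimensions in pixels
FPS = 29.97                  # NTSC frame rate
BYTES_PER_PIXEL = 3          # RGB color, one byte per channel

pixels_per_frame = WIDTH * HEIGHT             # 349,920 pixels
pixels_per_second = pixels_per_frame * FPS    # ~10.5 million pixels
bytes_per_second = pixels_per_second * BYTES_PER_PIXEL

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{bytes_per_second / 1e6:.1f} MB per second")  # ~31.5 MB
```

Every one of those roughly 31.5 million bytes must pass through the processor for each second of video you alter.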
When all is said and done, output is CPU-dependent, as well. If you’re planning to export your completed production in a compressed format, such as one of the MPEG variations or a web-streaming
format, then the power of the CPU will determine the speed of the final file-creation process.
Even when the processing load is shared with or shunted to a video card (there’s more on video
cards later in this section), the performance of the CPU is still critical. In most cases, the video-editing software relies on the CPU to handle functions like real-time previews and transcoding
video for export. A number of computer manufacturers offer workstations specifically recommended for digital video editing. There are many single and dual-processor computers that
provide appropriate CPUs as well as other important features, such as necessary I/O interfaces,
that make them well-suited to video creation and other post-production tasks.
How much RAM do you need?
First, check the system requirements for the software you’ll be using. But keep in mind that system
requirements are typically established using a clean computer. In the real world, you’re likely to
want more than what is recommended. Some experts will tell you that when it comes to RAM, bigger is better and biggest is best; others will say that above a certain amount, adding more RAM is moot, a case of diminishing returns on investment. But it's always a good idea to hedge your
bets. While you may be able to struggle along with 512MB of RAM, you’ll probably be much
happier with at least 1GB. Most professionals opt for 4GB of RAM. Make sure you can add more
RAM down the road.
How much bandwidth do you need?
You’ll need to transfer the data for each frame of video to and from the processor at the video frame
rate of 29.97 fps (NTSC) regardless of how much data is contained in each frame. For uncompressed
SD video, this is approximately 1MB per frame, which translates to a data rate of almost 30 megabytes per second (MBps); for HD video, it’s 6MB per frame or a data transfer rate of 180 MBps. The
transfer rate for standard DV, compressed 5:1, is approximately 5-6 MBps. Real-time editing often
entails accessing two video streams, combining them in a dissolve, for example, and then merging the
result into a single stream. This process triples the required data rate. When you start thinking about
compositing three or more streams of video and previewing or rendering the results in real time, the
rate multiplies even more.
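As a rough sketch of this multiplication, using the per-stream rates quoted above (the "source streams plus one output stream" rule is a simplification of what real-time editing hardware actually does):

```python
# Sustained data rates per stream, as quoted in the text, in MB per second.
RATES_MBPS = {
    "uncompressed SD": 30,   # ~1 MB per frame at 29.97 fps
    "uncompressed HD": 180,  # ~6 MB per frame
    "DV (5:1)": 5.5,         # standard DV, roughly 5-6 MBps
}

def required_rate_mbps(fmt, source_streams):
    # Each source stream must be read while the combined result is
    # written back, so add one output stream to the total.
    return RATES_MBPS[fmt] * (source_streams + 1)

# A two-stream dissolve in uncompressed SD needs about 90 MBps.
print(required_rate_mbps("uncompressed SD", 2))
```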
Video requires not only moving a great deal of data rapidly, but also at a steady, sustained pace.
If the transfer rate falls below what’s required, frames may be dropped, resulting in poor quality
video. Because systems with faster disks typically cost more, you may opt for a system that is fast
enough, but not so fast that you’re paying a premium for speed you don’t need. If you are working
with uncompressed video or HD, check the requirements for data transfer rate recommended by
the manufacturer of your video card.
How much storage do you need?
You cannot avoid the fact that digitized video is big. We’ve seen that one minute of uncompressed video requires 1.5GB of storage. An hour-long program can therefore consume 90GB of
storage, without even considering all the unused footage. If a production has a 5:1 shooting ratio
(5 minutes shot for every 1 minute used), you would need to store 450GB. High-end productions
may end up with 20:1 or 50:1 ratios: 1,800GB to 4,500GB. And, for HD, you need 600GB of storage per hour.
To figure the amount of storage you need for DV (compressed 5:1), you can calculate based on
approximately 216MB per minute of stored video. Or, looking at it from the opposite direction,
each gigabyte of storage holds about 4 minutes, 45 seconds of video. For an hour of DV, you
would need a 13GB disk.
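In Python, the same DV storage math looks like this; 216MB per minute is the figure used above, with 1GB treated as 1,024MB:

```python
# DV storage math: standard DV consumes about 216 MB per minute.
DV_MB_PER_MINUTE = 216

def dv_storage_gb(minutes):
    """Disk space, in GB (1 GB = 1,024 MB), for a running time in minutes."""
    return minutes * DV_MB_PER_MINUTE / 1024

print(round(dv_storage_gb(60), 1))        # one hour of DV: ~12.7 GB
print(round(1024 / DV_MB_PER_MINUTE, 2))  # ~4.74 minutes of DV per GB
```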
Let’s say that you’re an event videographer shooting standard DV and creating DVDs for your
clients. To figure out how much storage you would need to make a two-hour DVD, here’s how
you might do the math:
• Start with what you need for your finished production: two hours of DV footage
• Add a conservative amount for unused footage, a 2:1 ratio
• Figure in some additional elements, such as titles and audio tracks
• Space for the MPEG-2 files you export for your DVD
• Total minimum storage needed
Example estimate of storage needed for a two-hour DVD
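Here is one way the numbers might work out, using the 13GB-per-hour figure from above; the allowances for extra elements and the MPEG-2 export are illustrative assumptions, not figures from this primer:

```python
# Hypothetical storage estimate for a two-hour DVD project, in GB.
finished_dv  = 2 * 13   # two hours of finished DV footage (13 GB/hour)
unused_2to1  = 2 * 13   # a 2:1 shooting ratio doubles the raw footage
extras       = 2        # titles, audio tracks, etc. (assumed allowance)
mpeg2_export = 8        # room for the exported MPEG-2 files (assumed)

total_gb = finished_dv + unused_2to1 + extras + mpeg2_export
print(total_gb)  # 62 GB minimum for this set of assumptions
```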
It is unlikely that the amount of storage that comes as standard equipment with your computer
will be adequate for your video production needs. If you intend to produce more than just very
short video clips, you’ll want to consider a storage subsystem. There are three general scales of
subsystems, as outlined in the sidebar.
Companies that specialize in disk storage for video production applications often rate their systems
based on the amount of video they can store. When assessing such systems, be sure to check
whether the ratings are based on uncompressed or compressed video, and if compressed, by how
much. A storage system rated for 15 hours of DV video (compressed 5:1) would only hold three
hours of uncompressed (1:1) video.
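Converting a rated capacity between compression ratios is simple division, as in the 15-hour example just given:

```python
# A system rated for N hours of compressed video holds N / ratio hours
# of uncompressed (1:1) video.
def hours_uncompressed(rated_hours, compression_ratio):
    return rated_hours / compression_ratio

print(hours_uncompressed(15, 5))  # 3.0 hours of uncompressed video
```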
Do you need a video capture card?
With IEEE 1394 ports built into most computers these days, support for DV devices built into
editing software, and analog-in with pass-through capability available in camcorders, if you’re
not going to be capturing much analog video, why do you need a video capture card at all?
If you are a professional editor who captures a large amount of analog footage, you will probably
be best served by investing in a good-quality capture card. Make sure your capture card provides
the capabilities you need to work with your acquisition formats. Beyond capturing, you may also
want to consider other capabilities and features:
• Some cards come with software tools that can be used to augment the capture capabilities
found in your editing software, cutting down capture time and saving wear and tear on camcorders and tapes.
• If your CPU speed is less than 3 GHz, you may not be able to take full advantage of the real-time
editing features of Adobe Premiere Pro. A number of high-end cards take over a significant
amount of the CPU-intensive processing, so you can increase the power and speed of your system.
• Some capture card solutions enhance the capabilities of your editing or effects software,
enabling you to work with 3D effects or real-time HD.
• Video cards can also boost productivity when you are delivering your finished productions,
speeding up the process of rendering to a variety of formats.
Six basic features define video capture cards:
• Types of analog video input/output supported
• Types of digital video input/output supported
• Types of video compression supported
• Types of special processing supported
• Types of software included or supported
• Types of audio supported
Storage Subsystems
Individual external hard disks, available in the hundreds-of-dollars range, can now hold over 500GB of data. They are typically small and compact, usually quite portable, and take advantage of convenient hot-swappable IEEE 1394 or USB 2.0 interfaces. Such drives provide excellent and affordable "sneakernet" solutions that can be physically picked up and moved from one workstation to another.
RAID (redundant array of independent disks) is fast, fault-tolerant, and relatively expensive, typically costing from just under a thousand to thousands of dollars. A RAID consists of multiple hard disks that appear to the workstation operating system as a single volume. RAID is a technology that specifies at least 10 different ways to coordinate multiple disks, each method optimized for different types of storage requirements. Because all the disks in a RAID can read and write simultaneously, a RAID can access and deliver information faster than a single hard disk. Most RAID configurations also store parity information to reconstruct lost data in the event of a crash. RAIDs may be connected to workstations via IEEE 1394, SCSI, or Fibre Channel interfaces.
A storage area network (SAN) is a centralized storage subnetwork that can provide terabytes of storage and be simultaneously accessed by multiple users. A SAN may be JBOD (just a bunch of disks) or composed of multiple RAIDs. Data may be accessed in real time and at very high speeds, most often via Fibre Channel or SCSI interfaces, although IEEE 1394-based SANs are available. Anyone with authorization can access any digitized content on the SAN, so the need for multiple copies of large media files is eliminated, making this a very efficient solution for large production facilities and workgroups. Depending on the software interface, a SAN may be administered remotely, providing incredible flexibility to mobile workgroups whose workflow must be reorganized on the fly. With the ever-increasing demand for more digital video content, SANs are becoming more common, even in smaller production environments.
Your choice will depend on the type of video and how much video you will be working with, as
well as how time-critical your workflow is. Other factors may include cost and compatibility.
The creative process: an overview of movie-making
Let’s assume you have a story to tell. Whether you are making a very short video for the web,
an industrial or training presentation, a television commercial, a feature film, or just doing a
personal project, the process is virtually the same. As you can see from the following chart, the
stages of the production process often overlap. You’ll end up tailoring your own process to fit the
project, or to your own, individual working style. Depending upon your personal working preferences, you may choose to shoot, create, or gather all your clips before you begin the assembly
process. Or, you may prefer to go back and forth between production and post-production tasks.
If you have a team, you may choose to work on production and post-production tasks concurrently. With digital video, your movie making tasks can flow over and around one another in an
extremely fluid manner.
Preproduction is the planning stage. Typically, it includes the steps you take before you begin
production (shooting film or video). When you begin your project, you may have already shot
some or all of the video you’ll need. You may be repurposing content, such as existing video, still
photography, charts, graphs, illustrations, or animations. Or you may be starting with a blank
slate. The preproduction phase includes all the steps you need to take to be sure that you are
prepared to move from concept to completion.
Virtually all productions follow this basic process.
•Outline: No matter how simple you intend your project to be, begin with an outline. An outline
helps you plan. It can be shared with co-workers or clients to make sure everyone has the same
expectations. Your outline will help you identify what materials you need to create, assemble, or
acquire to get your process underway. You can also use your outline to plan the budget for your project.
•Script: An outline may be enough for you to work from, or you may want a more complete
script that includes dialogue, narration, notes about shooting locations and settings, the action,
the lighting, the camera angles and movements, the edits, as well as visual and sound effects.
Think of a script as the blueprint for your production.
•Storyboards: You may also choose to do storyboards, which are sketches of key moments in
the action, like a comic strip. Storyboards can include notes about the action, sound, camera
angles, or movement. They can even be translated into movies called animatics, using a tool
like Adobe Premiere Pro or After Effects. This step is called previsualization, and may be helpful for working out complicated sequences, sharing ideas with coworkers, or selling a concept
to a client.
•Budgeting: Whether you are doing a personal or a professional project, it is definitely a good
idea to add a budget to your production plan as early as possible. For professionals, you’ll need
a budget to secure financing. Your budget should include wages for yourself, your co-workers,
actors, and other talent, such as effects specialists, graphic designers, musicians, a narrator, and
animal trainers. You should figure in costs for location fees, costumes, props, equipment rentals,
catering, and anything else you can think of, such as videotape or DV cassettes, lunch, and
miscellaneous expenses.
•Production details: Even a small production can include a million details, like casting, locations,
props, costumes, equipment rentals, and catering. Every project is different. Plan adequately
for yours. Pay attention to the details. It is far easier and less expensive to do it now than when
you’re in the middle of production. Here’s a very brief list of tips to get you started thinking
about some of those details:
Get to know your cast to make sure they work well together. For example, a conversation between a very tall and a very short person might not work well on camera.
If you are shooting real people, be sure to give them guidance about what to wear.
For example, white shirts generally don’t photograph well, as they contrast poorly
with facial tones; stripes and small patterns may be problematic. On-camera talent
should be reminded to pay special attention to their grooming (hair and makeup)
or you can have professional help on hand.
If necessary, secure permission to use locations.
Be sure your costumes, sets, and props are ready when you need to shoot.
Make sure you have all the rental and borrowed equipment you need, that it all
functions, and that you know how to use it well in advance of production.
“Quiet on the set! Action! Roll ‘em!” Capturing live or animated action and sound on film or
videotape, in other words, shooting the raw footage, is called production. During production,
your concerns include: lighting, working out the movements of the talent and camera or blocking, and finally shooting—getting the images and sound on tape or film. There are many good
references available regarding production, including books, websites, classes, and more.
What comes out of production is a collection of clips: shots taken in different places at different
times. To actually develop and deliver your story, you need to edit and assemble your clips and,
perhaps, add visual effects, graphics, titles, and a soundtrack. This part of the process is called
post-production, and this is where Adobe enters the picture, with Adobe Production Studio,
which includes four of the industry’s leading software applications specifically designed for post-production:
•Adobe Premiere Pro: real-time editing for HD, SD, and DV
•Adobe After Effects: the industry standard for motion graphics and visual effects
•Adobe Audition®: integrated audio recording, mixing, editing, and mastering
•Adobe Encore® DVD: the essential tool for DVD creation
Production Studio applications work seamlessly with Adobe’s
desktop imaging software:
•Adobe Photoshop®: the professional standard in desktop digital imaging
•Adobe Illustrator®: vector graphics reinvented
Adobe Production Studio brings new power and efficiency to your film, video, DVD, and web
workflows. Part of the Adobe Creative Suite family, Adobe Production Studio Premium software
is a complete post-production solution that combines Adobe’s video and graphics software with
the timesaving integration and workflow features Adobe Dynamic Link and Adobe Bridge.
In the next sections, you’ll find useful information about post-production techniques. We have
used our own products to illustrate these techniques because Adobe software products adhere to,
and in many cases have established, industry standards for digital video post-production. Whatever software you choose, the material in this primer will help you learn about what’s involved in
the post-production process.
Acquiring source material
You’ve configured your system. You’ve shot or gathered some video. You are eager to begin post-production, but first you need to gather all of your raw material together on your computer.
You often do not know what file formats you’ll need to handle, or what the media requirements
will be for every project. Adobe Premiere Pro imports and exports all of the leading video and
audio formats natively, and supports almost any codec that the Windows XP® operating system supports.
You can import and work with these leading formats in Adobe Premiere Pro:
•Video files in MPEG-1, MPEG-2, DV, AVI, Windows Media 9 Series, QuickTime, HDV, and
Open DML
•Audio files in WAV, MP3, and AIFF, as well as audio-only AVI and QuickTime
•Still-image and sequence files in AI, AI sequence, PSD, PSD sequence, JPEG, TGA, TGA
sequence, TIFF, TIFF sequence, PCX, BMP, and BMP sequence
Capturing analog video
You may still need to capture analog footage, so it’s best to choose format-independent software,
like Adobe Premiere Pro, that is designed to handle a wide variety of video formats, such as
composite, component, S-Video, SDI, and HD. You can digitize analog video directly into Adobe
Premiere Pro by connecting your analog video player or camcorder to your computer through
digitizing hardware, like a video capture card. Digitizing capability is built into some personal
computers, but in most cases, must be added to a system by installing a compatible hardware
capture card. For more information, see “Do you need a video capture card?” on page 18 of this primer.
DV without delay
If you shot DV or HDV, or if your raw material is on DV tape, capturing your clips can be as easy
as plug-and-play with Adobe Premiere Pro. Built-in support for the IEEE 1394 interface allows
frame-accurate control for all types of DV and HDV devices. You can review footage, set In and
Out points, and use edit decision lists (EDLs) to perform automated batch captures, without
leaving your NLE application. Adobe Premiere Pro lets you customize a wide range of settings to
streamline and optimize your workflow.
•Device control customization: Specify the DV device (deck or camcorder) manufacturer and
model, and Adobe Premiere Pro optimizes its built-in device control for maximum reliability,
efficiency, and editing precision. Scene detection controls let you automatically detect scenes
and divide raw DV footage into separate scene-based clips that are faster and easier to work
with. You can also scan tapes to create low-resolution, scene-based clips for offline editing.
After editing your rough-cut, batch-capture full-resolution versions of the clips for the final
edit. By default, Adobe Premiere Pro uses the new Adobe DV Codec to capture DV clips in
their native YUV color space to preserve color quality.
•Project presets: Adobe Premiere Pro stores groups of project settings in files called presets,
which include settings for codec, frame size, pixel aspect ratio, frame rate, depth, audio, and
field order. When you start a project, you’ll be prompted to select a preset, or select individual
settings to customize your own.
For a list of Adobe-compatible third-party capture cards, and to find out whether your video equipment (camcorders or decks) is compatible with the built-in DV support in Adobe Premiere Pro, visit the Adobe website.
Color without compromise
Adobe Premiere Pro provides native support for YUV color, enabling you to preserve the native
color space of the original video material. This support ensures higher color quality in your final
video productions because the source footage no longer passes through a lossy conversion to
RGB color. It also improves overall performance because the application isn’t performing processor-intensive color conversions. With native YUV processing, you get better results faster.
Batch capture
If you have the proper setup for device control and a videotape recorded with timecode, you can
set up Adobe Premiere Pro for automatic, unattended capture of multiple clips from the same
tape. This process is called batch capture. You can batch capture clips from camcorders or decks.
First, set In and Out points, and log each segment you want to capture. The segments you log appear as offline files in the Project panel.
When you’re done, select the offline files and open the Batch Capture dialog box. You can enter
a handle value, which automatically captures additional frames before and after each segment.
Then start the batch capture and Adobe Premiere Pro automatically controls the DV device
and captures each segment to a file. Batch capturing is very useful in a professional production
environment, and can be especially helpful when you need to redigitize footage when returning
to an old project.
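The effect of a handle value is easy to picture as frame arithmetic; this sketch assumes simple frame counts at a fixed frame rate rather than real timecode:

```python
# A capture "handle" pads each logged segment with extra frames
# on both ends, giving room to trim or add transitions later.
def with_handles(in_frame, out_frame, handle_frames):
    start = max(0, in_frame - handle_frames)  # can't run before frame 0
    end = out_frame + handle_frames
    return start, end

# A segment logged from frame 300 to 900, captured with 30-frame handles:
print(with_handles(300, 900, 30))  # (270, 930)
```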
Importing still images
The ability to import still images, such as photographs and illustrations, is also an important
feature to look for. You may want to import photographs to create movie montages or acquire
illustrations to incorporate in animations. Tight integration with industry-standard image-editing software like Photoshop and leading vector-drawing software like Illustrator facilitates
this type of work.
Importing computer graphics
You can import or export many different types of video, audio, and image formats. Support for
input and output formats in Adobe digital video software is extensive. If support for the format
you want is not built into Adobe Premiere Pro or After Effects, chances are a third-party plug-in
will provide it.
Find a list of plug-ins for Adobe Premiere Pro on
the Adobe website at
plugins/premiere/main.html and a list of plug-ins for
After Effects at
The Adobe Premiere Pro Capture panel lets you specify a target bin in the Project panel, then capture clips directly to the bin. The panel provides a status area, a preview panel, and device controls, and displays metadata such as source tape name and clip name. You can also keep an eye on available hard-disk space, deck activities, and other data during capture.
Capturing audio
Be sure your software provides support for all the significant audio formats. You should be able
to import separate digital audio clips from tracks in video files or from audio files stored on a
hard disk or other digital media such as a CD or DAT tape.
Adobe Premiere Pro and Adobe Audition both provide excellent support for capturing audio.
For example, both applications can import and export the highest-quality 24-bit, 96 kHz audio
files, and support any ASIO-compliant audio card or device.
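For a sense of scale, the data rate of that 24-bit, 96 kHz audio (assuming a stereo pair, which is an assumption for this example) is easy to work out:

```python
# Data rate for 24-bit, 96 kHz stereo audio.
SAMPLE_RATE = 96_000   # samples per second
BYTES_PER_SAMPLE = 3   # 24 bits per sample
CHANNELS = 2           # stereo

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
print(f"{bytes_per_second / 1e6:.3f} MB per second")  # 0.576
```

Even at its highest quality, audio is a small fraction of the data rate that video demands.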
In Adobe Audition, you’ll find support for 20 different input audio file formats, as well as support
for recording from all the standard input devices and cards you would expect. The Preferences
dialog box gives you all the controls you need to work with your favorite input and output audio
hardware. Record up to 32 different sources at once, and specify different output devices for different tracks or buses in your session. Adobe Audition also makes extracting audio from a CD (CD ripping) as easy as working from any other source audio file.
Some Capture Tips
Use a separate hard disk for capturing video—the
fastest one you’ve got. You can use Adobe Premiere
Scratch Disks preferences to select the disk to which
you want to record. Faster disk rotational speeds
allow for faster sustained throughput without dropping frames. While 7200-rpm IEEE 1394 drives will get
the job done, 10,000 or 15,000 rpm Ultra 320 (U320)
SCSI or SATA hard drives are a better choice for professional production environments.
If your system barely meets the minimum requirements and you have problems capturing, defragment
the capture disk just prior to capture, so free space is
available in large contiguous blocks. A fragmented
hard disk can reduce the capture frame rate.
Place as few other demands on the system as possible, to gain the undivided attention of the CPU. If
other programs are running, like anti-virus programs,
virtual memory, network connections, or unnecessary
drivers or extensions, they may interrupt capture with
calls for processing time.
Capture audio at the highest quality settings your
computer can handle, even if those settings are
higher than those you’ll specify for final export or
playback. Capturing at the highest quality provides
headroom or extra data that helps preserve quality
if you adjust audio gain (volume) or apply effects,
such as equalization. Make sure the audio level is set
correctly when you capture. You can adjust the levels
later in the editing program, but if audio in a clip is
too low, raising the level can emphasize noise or
introduce distortion. Also, if you are capturing digitally, be sure to capture audio at the same sample rate
and bit depth as the source format, to avoid
resampling artifacts.
Nonlinear editing
It’s finally time to put it all together. Nonlinear editing (NLE) makes editing and assembling
your production as easy and as flexible as word processing. Once your raw materials are in your
computer, you can edit, alter, adjust, and reconfigure them, over and over again, with a few
mouse clicks. In this section, we’ll introduce you to some of the basic concepts of NLE, as well as
give you an overview of Adobe Premiere Pro.
Getting to know NLE tools
The Adobe Premiere Pro interface provides many of the tools and methods that are familiar to
seasoned professionals who may have learned their craft working on costly high-end systems.
But even though Adobe Premiere Pro is loaded with professional features, it is also easy for
beginning video enthusiasts to learn and use. Because of its flexibility and many customization
options, Adobe Premiere Pro is a good choice for beginners and experts alike.
Most of the work takes place in these panels of the workspace:
• Project panel, where assets are managed
• Monitor panels, where video being edited is viewed
• Timeline panel, where the actual editing takes place
In the main Adobe Premiere Pro workspace, manage content with the Project panel, trim edit points, and view clips and edited video
in the Program Monitor, and build your project on the Timeline.
The Adobe Premiere Pro user interface consists of a workspace containing multiple panels for
tasks such as editing, monitoring, managing a project, capturing video, creating titles, applying
and controlling effects, and mixing audio. You can create any number of customized workspaces
by selecting, grouping, and laying out the panels. For example, you could create a workspace for
general editing and another for working with effects. You can also create a free-floating panel by
undocking it from a group. For example, you can undock the Program Monitor and drag it to a
second video monitor in a dual-monitor system. Additionally, Adobe Premiere Pro comes with a
number of preconfigured workspaces.
Staying organized
A short production may include only a few clips; longer productions may require hundreds
or even thousands of assets. With the current propensity for repurposing, it has become more
important than ever for videographers to keep assets well organized. Make sure your software
includes a good asset management system that lets you preview clips, identify clips visually with
still or poster frames you select, annotate clips with essential information, and easily access
detailed information about all your video and audio assets.
In Adobe Premiere Pro, the Project panel manages all the assets in your video project including
video, audio, stills, titles, and nested timelines. You can organize your assets into folders called
bins, which can be given custom names, such as Scene 12, Voiceovers, or Chase Scene. The
Project panel displays assets and associated metadata in columns, with which you can sort and
search data. The Project panel can be displayed in a variety of different ways, depending on the
task at hand:
•As shown on the previous page, the Project panel can display Preview and Bin areas in List
view, providing a convenient overview of the files associated with a project.
Preview area: Click the Play button under a thumbnail-sized poster frame to preview a video clip. The Preview area includes basic information about the clip, such
as frames per second and average data rate. The poster frame used to represent a
clip can be changed from the default (first frame) to any frame you select.
Bin area: The Bin area provides a hierarchical representation of the files in your
project. Use the Search button to find what you need, fast. Command buttons let
you quickly delete selected clips and bins, and add new items. The fields available in
List view include columns for media start/end, video and audio In and Out points,
offline properties, scene, shot/take, client, log notes, and more. You can rearrange,
add, remove, rename, hide, and show any column. In addition, you can create any
number of user-defined columns that offer text-entry fields or check boxes. For
example, you could create a Legal Signoff column and check off each clip as usage
approvals come in for a video shot or piece of audio.
• Icon view: Presents media in an orderly grid. You can select and rearrange icons anywhere in
the grid, even in noncontiguous arrangements, and create storyboards.
Online and Offline Editing
Online editing: In online editing, you assemble and
edit all the elements to create your final cut. Online
editing used to be done only on high-end workstations that could meet the quality and data-processing
requirements of broadcast video. Editors who could
not afford an online system had to rent time at a
post-production facility. As personal computers and
affordable workstations have become more powerful,
online editing has become practical for a wider range
of high-quality productions.
For online editing using analog source material, you
capture clips once at the highest level of quality your
computer and peripherals can handle. With standard
DV source material, all editing is typically done online, because DV compression makes standard DV footage manageable at full quality.
Offline editing: In offline editing, you first edit a
final version of your project using low-quality clips.
Then you go into online editing and use the offline
version to create a final version of the project using
high-quality clips. Offline editing was originally
developed to save money by preparing rough cuts
on less expensive systems. Although offline editing
can be as simple as writing down time points for clips
while watching them on a VCR, it is increasingly done
using personal computers and capable software such
as Adobe Premiere Pro.
If you are working with analog source material,
offline editing techniques can be useful even if your
computer can edit at the quality of your final cut. By
batch-capturing video using low-quality settings,
you can edit faster, using smaller files. In most cases,
you need only enough quality to identify the correct
beginning and ending frames for each clip. When
you’re ready to create the final cut, you can redigitize
the video at the final-quality settings. This is another
example of where the logging and batch-capture
techniques in Adobe Premiere Pro can be useful.
Professional editors looking for a powerful, affordable
offline editor will appreciate the way Adobe Premiere
Pro software facilitates quickly building an offline edit
and exporting an advanced authoring format (AAF)
file. AAF files can be exported from Adobe Premiere
Pro for use with other editing systems. For more
information about AAF files, see “Good housekeeping” on page 45.
The Adobe Premiere Pro Project panel showing Preview and Bin areas
A Digital Video Primer
Looking for approval?
With Adobe Premiere Pro, you can assemble a storyboard or rough cut in minutes. Using
the Icon view in the Project panel, you can quickly assemble stills, such as photos or concept
sketches, into a storyboard-style slide show or, if you have clips, into a rough cut. Just drag and
drop poster-frame icons, arranging and rearranging them until you, your colleagues, and your
clients are completely satisfied. Then use the Automate To Sequence command to instantly send
your sequenced material to the Timeline, where it will be automatically assembled using a default
transition you specify. Add music and voiceover for a smooth presentation you can use to share
your concepts.
After Effects is also a wonderful visualization tool that can be used to help you share and sell
your concepts. Read more about After Effects in the next section.
Thank You, MAM!
As the world's gone digital, the facts, photos, and
footage we used to archive and catalog in file cabinets and on library shelves have found their way into our
computers. What an incredible opportunity! Digital
media assets can and should be searchable, accessible, and easily exchangeable across workgroups and
even around the world, via intranets and the Internet.
A whole industry has emerged, focused on Media
Asset Management (MAM), also known as Digital
Asset Management (DAM).
One of the major challenges in the development of
media asset management solutions was how to make
the content (the images, animations, video, and audio
clips that have been created and stored in a host of
different formats) and its associated metadata (ancillary data that describes and specifies content, such as
source location, timecode, transitions, descriptive key
words, and so on) exchangeable across different computing platforms and between various multimedia
and post-production applications. An open standard
was needed; one that would be accepted by the many
video-related industries.
Putting the pieces together
Enough of your assets have been captured, coordinated, corrected, and created for you to begin
putting your production together. With capable and cost-effective NLE programs, like Adobe
Premiere Pro, you can work just as you would on any high-end proprietary system, with precise
trimming tools and support for three-, four-, and six-point edits. Adobe Premiere
Pro also facilitates the slip, slide, ripple, and rolling edits described in this section, and lets
you work with industry-standard keyboard shortcuts.
For piecing together your production, you’ll typically work back and forth between the three main
panels in your workspace: the Project panel, the Timeline panels, and the Monitors.
The timeline graphically shows the placement of each clip in time, its duration, and its relationship to the other clips in the program. Once you’ve captured or imported clips into your project,
you can use the Timeline panel to organize your clips sequentially; make changes to a clip’s duration,
speed, and location; add transitions; superimpose clips; and apply effects, opacity, and motion.
The Adobe Premiere Pro Timeline panel is easy to use, understand, and manage; audio, video,
and graphics clips can be moved, trimmed, and adjusted with simple mouse clicks or with keyboard
commands. Up to 99 video and 99 audio tracks can be created for your program, and each track
can be given a descriptive ID. Tracks can be hidden to reduce screen clutter or locked to avoid
accidental changes. Each track in Adobe Premiere Pro is collapsible, so you can free up screen
space. You can expand tracks to make precise adjustments to transitions between video clips. The
preview indicator area (directly under the time ruler) is color-enhanced: green means that a
preview exists on disk for the segment; red indicates that the segment needs to be prerendered
before it can be previewed; and no color indicates a cuts-only segment that can play in real time.
In Adobe Premiere Pro, you can set up a virtually unlimited number of timelines and nest any
number of timelines inside others with complete flexibility. The ability to create and nest multiple timelines streamlines a range of editing tasks. You could, for example, divide a complicated
video project into parts with each part assembled on a separate timeline, and then combine
those parts together by nesting the timelines into one main timeline. You could also set up one
timeline and then duplicate it several times to try out different cuts or visual effects for a client or
director without affecting the original version. Quickly comparing the original against several
variations can speed up editing decisions and client approval time significantly. In addition,
you can use separate timelines to manage how effects are applied. For example, you could apply
different effects to several timelines, and then nest the timelines to apply an effect, such as a color
correction, to all of them.
You can use the Source Monitor to view a wide range of media including individual clips, still
images, audio, color mattes, and titles. Resizing the monitor dynamically resizes the video image
displayed in each view. To help you position on-screen elements, you can switch on safe zone
guides. A magnification setting lets you examine the image in detail or zoom out when you need
to see the off-screen pasteboard area. In addition, you can manually adjust display quality, which
can reduce rendering times.
The answer began to emerge with OMF (or OMFI),
the Open Media Framework Interchange format,
a media and metadata exchange solution introduced
by Avid. OMF adoption has been slow, but as the
industry transitions to the more widely accepted
AAF standard, more applications and utilities are also
including support for the OMF interchange.
AAF, the Advanced Authoring Format, has emerged
as the open standard of choice. Sometimes described
as a super EDL solution, AAF is, essentially, a wrapper
technology that can include the content itself or links
(pointers) to it, along with relevant metadata.
Although AAF files may contain the actual content,
the emphasis of this format is on the exchange of composition metadata (in other words, the information
that describes how content is handled in a composition) rather than on the exchange of the content itself.
In addition to AAF, a related standard is now also coming into broader use, MXF, the Material eXchange
Format. Like AAF, MXF is an open standard for the
exchange of content and its associated metadata
across platforms and between applications. MXF was
designed for less complex metadata applications
than AAF. Where AAF may include the actual content
or only a link to it, MXF always includes the content
along with the metadata. The primary objective of
MXF is the streamlined exchange of content and
associated metadata. MXF files may be used as a
source for AAF. With its greater emphasis on actual
content exchange, MXF is better optimized than AAF
for real-time streaming of video and audio assets,
making it an excellent solution for such applications
as broadcast news editing.
Use Source Monitor controls to play, pause, and scrub a clip. Use tools to set video, audio, and
program In and Out points. Set and move among clip and sequence markers, perform insert and
overlay edits, move forward and backward frame-to-frame, or edit point to edit point. Editing
clips in the Source Monitor dynamically updates the clip in the timeline (or timelines).
Use the Program Monitor to play back your timelines with effects and transitions.
In addition to the three main areas of the workspace, there are numerous other panels that provide information and functionality. For example:
• The Trim monitor provides even more precise control than the Source Monitor over ripple,
rolling, slip, and slide edits. You can view live updates in the Trim monitor, which shows an
edit in progress as you're adjusting the clip.
• You'll use the bins in the Effects panel to keep your video and audio effects and transitions
organized; use the Effect Controls panel to apply effects and transitions to your clips.
• In the Audio Mixer, you can adjust settings while listening to audio and viewing video tracks.
• The Titler gives you the ability to design sophisticated titles for use in your productions, by using
preconfigured templates or working from scratch.
• The Info and History panels will be familiar if you've worked in other Adobe applications. The
Info panel displays vital information about the selected item; the History panel lets you navigate among the available levels of Undo.
Most panels include menus that appear when you click a button. All panels have context menus,
whose content depends on the current task or mode.
The Timeline panel, showing multiple timelines and nested timelines on video track 1. A tab for
each timeline appears in the Program Monitor; clicking a tab brings the related timeline forward
in the Timeline panel, or sends it back.
The Program Monitor
The Tool panel can be docked to the Timeline panel or left free-floating.
Ripple edits
In this example of a ripple edit, the Out point of a
clip is moved two frames to the right in the timeline,
resulting in the duration of the clip being lengthened
by two frames. The adjacent clip is not altered by a
ripple edit; therefore, the overall program duration is
lengthened from eight frames to 10.
In this ripple edit, the Out point of a clip is moved
two frames to the left in the timeline, resulting in the
duration of the clip being shortened by two frames.
Because the adjacent clip is not altered by a ripple
edit, the overall program duration is shortened from
10 frames to eight.
Rolling edits
In this example of a rolling edit, the Out point of a
clip is moved two frames to the right in the timeline,
resulting in the duration of the clip being lengthened
by two frames. The rolling edit shortens the beginning of the adjacent clip by two frames, thereby preserving the duration of the overall program.
In this rolling edit, the Out point of a clip is moved two
frames to the left in the timeline resulting in the duration of the clip being shortened by two frames. The
rolling edit correspondingly lengthens the beginning
of the following clip by two frames, thereby preserving the duration of the overall program.
Slip edits
The slip edit moves the In and Out points of a clip, but
does not change the duration of the clip, does not
affect the adjacent clips, and does not alter the duration of the overall program.
You can slip the In and Out points of the clip to the
right or to the left on the timeline; neither the adjacent clips nor the overall program length are affected.
Slide edits
The slide edit moves the In and Out points of a clip
without changing its duration, while the Out and In
points of the adjacent clips are moved, so the overall
program duration is preserved.
You can slide the In and Out points of the clip to the
right or to the left on the timeline; the overall program length is maintained because the Out and In
points of the adjacent clips slide accordingly.
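The duration arithmetic behind these edits can be sketched in a few lines of Python. This is a simplified model for illustration only, not Premiere Pro's actual internals: each clip is just a pair of source In/Out frame numbers, and the edit types differ only in which points move together.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    src_in: int   # first source frame used
    src_out: int  # frame after the last source frame used

    @property
    def duration(self):
        return self.src_out - self.src_in

def program_duration(clips):
    return sum(c.duration for c in clips)

def ripple_out(clips, i, delta):
    # Ripple edit: move clip i's Out point; the neighbor is untouched,
    # so the overall program gets longer or shorter by delta frames.
    clips[i].src_out += delta

def rolling_out(clips, i, delta):
    # Rolling edit: move the cut between clips i and i+1; what one clip
    # gains the next one loses, so the program duration is preserved.
    clips[i].src_out += delta
    clips[i + 1].src_in += delta

def slip(clip, delta):
    # Slip edit: shift which portion of the source is shown; neither the
    # clip's duration nor the program duration changes.
    clip.src_in += delta
    clip.src_out += delta
```

Running the model on a two-clip program confirms the behavior illustrated above: only the ripple edit changes the overall duration.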
Making transitions
Transitions are the methods you use to get from one clip to the next. The basic transition is a cut.
Slower transitions can be useful in setting a mood or adding a creative element to your project.
Examples of transitions include dissolves, wipes, zooms, and page peels. Adobe Premiere Pro
includes a whole library of transitions, and you can add others, such as QuickTime transitions.
You’ll find transitions in the Video Transitions bin in the Effects panel. Within this bin, transitions are organized into nested bins by type. You can customize these groupings, putting the
transitions you prefer into bins you name, or by hiding transitions that you don’t often use.
Useful Editing Techniques
Changing clip speed: Clip speed is the playback
rate of action or audio compared to the rate at which
it was recorded. When the speed is accelerated,
everything appears to move faster; when the speed is
reduced, the action or audio plays back in slow
motion. Changing a clip’s speed alters its source
frame rate. Some frames may be omitted when the
speed is increased; when the speed is decreased,
frames may be repeated. Changing the speed to a
negative value, such as -100, plays the clip in reverse.
You can change a clip’s speed numerically in the
Project panel, or in the timeline by choosing Clip >
Speed/Duration from the menu bar. You can change
speed visually in the Timeline panel by using the rate
stretch tool to drag either end of the clip. A four-point edit can also change the speed of a clip.
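The frame omission and repetition described above can be modeled with a short Python sketch. This illustrates the principle only, not the resampling Premiere Pro actually performs.

```python
import math

def retime(num_src_frames, speed):
    """Return the source frame shown at each output frame for a given
    playback speed. |speed| > 1 omits source frames, |speed| < 1
    repeats them, and a negative speed plays the clip in reverse."""
    num_out = math.ceil(num_src_frames / abs(speed))
    frames = [min(int(i * abs(speed)), num_src_frames - 1)
              for i in range(num_out)]
    if speed < 0:
        frames = [num_src_frames - 1 - f for f in frames]
    return frames
```

For example, doubling the speed of a six-frame clip shows frames 0, 2, and 4; halving it shows each frame twice.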
Altering clip duration: The duration of a clip is the
length of time it plays: from its In point to its Out
point. The initial duration of a clip is the same as it
was when the clip was captured or imported; if you
alter the source In and Out points, the duration of
the clip changes. In Adobe Premiere Pro, you can edit
In and Out points in the Project panel, the Source
Monitor, or directly in the timeline. You can change
duration numerically in the Project panel or in the
Timeline panel by choosing Clip > Speed/Duration
from the menu bar. You can change duration visually in
the Timeline panel by dragging either end of the clip
with the selection tool.
It’s important to note that when you perform any
action that extends the duration of a clip (which may
include ripple or rolling edits), additional frames must
be available in the source clip (the clip you originally
captured or imported) before the current In point
or after the current Out point. This is why it’s a good
practice, whenever possible, to capture extra material, sometimes referred to as a handle.
Ripple edit: A ripple edit changes the duration of a
clip, correspondingly changing the duration of the
entire program. When you use the ripple edit tool to
shorten or lengthen a clip by dragging its beginning or ending in the timeline, the adjacent clip is
not affected and, consequently, the duration of the
program is shortened or lengthened.
Rolling edit: A rolling edit changes the duration of
the selected clip and of an adjacent clip, while maintaining the overall duration of the program. When
you use the rolling edit tool to shorten or lengthen a
clip by dragging its beginning or ending in the Timeline, the adjacent clip is correspondingly lengthened
or shortened.
Find “More Useful Editing Techniques” on the next page.
To add a transition, drag the icon from the Effects panel to a point in the timeline where two
clips meet. Alternatively, you can specify a default transition, and automate the process of adding
transitions. You can use the Effect Controls panel to apply, remove, or adjust the settings of a
transition at any time.
All transitions, except a cut, have duration, alignment, and direction parameters. Duration
refers to the length of the transition in frames. Transitions use frames from the end of the first
clip, called tail material, and frames from the beginning of the second clip, called head material.
Alignment refers to the position of the transition in relation to the cut between the two clips. The
options are Center at Cut, Start at Cut, and End at Cut. Direction indicates how the transition
operates on the two clips. Normally, the direction will be from the first clip to the second, from
left to right on the timeline, but for some types of transitions, you may want to change the direction.
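As a concrete illustration, the stretch of the timeline a transition occupies follows directly from its duration and alignment. This is a hypothetical helper working in frame numbers, not an actual Premiere Pro API.

```python
def transition_span(cut_frame, duration, alignment):
    """Return the (start, end) timeline frames occupied by a transition.
    Frames before the cut come from the first clip's tail material;
    frames after it come from the second clip's head material."""
    if alignment == "center":       # Center at Cut
        start = cut_frame - duration // 2
    elif alignment == "start":      # Start at Cut
        start = cut_frame
    elif alignment == "end":        # End at Cut
        start = cut_frame - duration
    else:
        raise ValueError(f"unknown alignment: {alignment}")
    return start, start + duration
```

A 30-frame transition centered on a cut at frame 100 spans frames 85 through 115, drawing 15 frames of tail material and 15 of head material.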
Adding effects
Video and audio effects, sometimes called filters, serve many useful purposes. You can use them
to fix defects in video or audio, such as correcting the color balance of a video clip or removing
background noise from dialogue. Effects are also used to create qualities not present in the raw
video or audio, such as softening focus, giving a sunset tint, or adding reverb or echo to a sound
track. Multiple effects may be applied to a clip, but note that the result may vary depending on
the order in which effects are rendered.
Adobe Premiere Pro includes dozens of effects, many of them shared with After Effects. Additional effects are available as plug-ins: Adobe Premiere Pro comes with several After Effects plug-ins that can be used in your video work, and many others are available from third-party
vendors or can be acquired from other compatible applications. Video effects are found in the
Video Effects bin in the Effects panel; audio effects are found in the Audio Effects bin. As with
transitions, effects are grouped by type in nested bins. You can reorganize effects and customize
bins as you prefer and hide effects or bins that you rarely use.
To apply an effect, drag it to a clip in the Timeline panel. Or, if the clip is selected in the Timeline
panel, you can drag the effect to the Effect Controls panel, where you can modify attributes and,
if multiple effects have been applied, adjust the order in which they are rendered. You can apply,
disable, or remove an effect at any time.
By default, when an effect is added, keyframes are set at the beginning and end of the clip,
so the effect is applied to the entire clip. If an effect has adjustable controls, you can
change the start or end point of the effect by adjusting those keyframes, or add keyframes in the
Timeline panel or the Effect Controls panel to create an animated effect.
More Useful Editing Techniques
Slip edit: A slip edit shifts the In and Out points of a
clip without changing the clip’s duration, without
affecting adjacent clips, and without altering the
overall program duration. You can use the slip edit
tool in the Timeline panel to drag a clip left or right,
and its In and Out points will shift accordingly. In
other words, a slip edit alters which specific portion
of the source clip is included, but does not alter the
duration of the selection. The slip edit is useful when
you want to create a rough cut quickly, and then fine-tune individual clips later without affecting the clips
around them or the overall duration.
Slide edit: A slide edit preserves the duration of a clip
and of the overall program by changing the Out point
of the preceding clip and the In point of the following
clip. When you use the slide edit tool, sliding an entire
clip forward or backward in the timeline, the adjacent
clips are correspondingly lengthened or shortened by
the same number of frames; therefore, the duration
of the program stays the same. A slide edit affects
three clips: the location of the clip being slid (the
duration of which stays the same), as well as the two
clips before and after the slid clip (the durations of
which are both altered). The overall program duration
is maintained.
Three-point edits: When you lift and replace footage
in a video program, four points must be specified.
Those four are the In and Out points of the source clip
(the segment you are inserting) and the In and Out
points of the program (the segment you are replacing). With three-point editing in Adobe Premiere Pro,
you need only specify any three of these four In and
Out points. The software then automatically calculates the fourth point to ensure a proper edit. Monitor
window controls and keyboard shortcuts make three-point editing quick and easy in Adobe Premiere Pro.
Four-point edits: A four-point edit is useful when the
starting and ending frames in both the source and
program are critical. In a four-point edit, you mark
all four points. If the durations are different for the
marked clips, Adobe Premiere Pro alerts you to the
discrepancy and provides alternatives to resolve it.
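The calculation behind a three-point edit is simple duration matching. This hypothetical Python sketch shows how the missing fourth point can be derived from the other three; it is an illustration of the idea, not Premiere Pro's implementation.

```python
def complete_three_point(src_in=None, src_out=None,
                         prog_in=None, prog_out=None):
    """Given any three of the four edit points (in frames), compute the
    fourth so the source and program segments have equal duration."""
    points = dict(src_in=src_in, src_out=src_out,
                  prog_in=prog_in, prog_out=prog_out)
    missing = [k for k, v in points.items() if v is None]
    if len(missing) != 1:
        raise ValueError("exactly three points must be specified")
    if missing[0] == "src_out":
        points["src_out"] = src_in + (prog_out - prog_in)
    elif missing[0] == "src_in":
        points["src_in"] = src_out - (prog_out - prog_in)
    elif missing[0] == "prog_out":
        points["prog_out"] = prog_in + (src_out - src_in)
    else:
        points["prog_in"] = prog_out - (src_out - src_in)
    return points
```

Marking a 30-frame program segment and only the source In point, for example, yields a source Out point 30 frames later.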
Enter keyframes on a timeline to control how effects and motion parameters change over time.
The effect controls in Adobe Premiere Pro work similarly to the ones in After Effects. The settings provide exacting control over every aspect of an effect because you can set keyframes for
individual effect parameters to vary how a clip is affected over time. When you apply an effect to
a clip in the timeline, the Effect Controls panel displays all of the parameters associated with that
effect. For example, if you were to apply a Radial Blur effect to a clip in the timeline, you would
go to the Effect Controls panel to select and set independent keyframes for the amount of blur
and the X and Y position of the blur. Rather than applying a uniform effect, you could start out
with a clip that looks sharply focused and gradually blur the clip over time by using keyframes.
You can then evaluate the effect design choices you’re making through the real-time editing
experience described earlier in this document. Note that keyframes are preserved with Adobe
Premiere Pro projects when you move the projects to After Effects.
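Under the hood, an animated effect parameter is just interpolation between keyframes. A minimal sketch of linear keyframe interpolation in Python (Premiere Pro also supports other interpolation modes; this only illustrates the basic idea):

```python
import bisect

def value_at(keyframes, t):
    """Linearly interpolate an effect parameter between keyframes.
    keyframes: a time-sorted list of (time, value) pairs. Before the
    first or after the last keyframe, the parameter holds that
    keyframe's value."""
    times = [k[0] for k in keyframes]
    if t <= times[0]:
        return keyframes[0][1]
    if t >= times[-1]:
        return keyframes[-1][1]
    i = bisect.bisect_right(times, t)
    (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

With a blur amount keyframed from 0 at time 0 to 100 at time 10, the halfway point evaluates to 50, producing the gradual defocus described above.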
Six-point edits: More commonly called a split edit,
in a six-point edit, a clip’s video and audio start or
end at different times. In one version of a split edit,
called an L-cut, the audio Out point is later than the
video Out point, so the audio continues to play after
the video transitions to the next clip. The audio from
a concert, for example, could extend into the next
shot of a nature scene. Another kind of split edit is
the J-cut, also known as an audio lead, which you use
when you want a clip’s audio to begin playing before
the corresponding video appears. For example, you
may want to begin hearing a speaker’s voice while
showing a relevant scene, then transition to the shot
of the person speaking.
Still more ways to enhance your productions
Adobe Premiere Pro lets users create motion, picture-in-picture, and keying effects. You can create smooth keyframed animations of flying video, controlling such parameters as rotation, scale,
and distortion. Chroma, luminance, and alpha keying are also supported in Adobe Premiere
Pro. You can also use Photoshop images as mattes, then superimpose clips with transparency to
create composited sequences. But for even more advanced control over compositing and animation (and to learn a little bit about the techniques mentioned in this paragraph) you’ll want to
look ahead to the section of this primer that describes some of the sophisticated features found
in After Effects.
Marking time
Markers can be used to indicate important points in time, help you position and arrange clips,
and perform a number of other functions. Working with markers is much the same as working
with In and Out points, but markers are only for reference and do not alter the video program.
In Adobe Premiere Pro, each sequence and each clip can contain up to 100 numbered markers,
labeled from 0 to 99, and any number of unnumbered markers.
In general, you add markers to clips to identify important points within individual clips; you add markers
to sequences in the Timeline panel to identify significant time points that affect multiple clips, such as
when you need to synchronize video and audio on different clips. Timeline markers can include:
• A comment, which will appear in the Program Monitor
• A chapter link, which can initiate a jump to a specified point in a QuickTime movie or on a DVD
• A web link, which will initiate a jump to a web page in the browser when the video is playing
on a computer connected to the Internet or an intranet
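A sketch of how such markers might be represented, enforcing the limit of 100 numbered markers described above. The types and fields here are hypothetical, chosen for illustration, and are not Premiere Pro's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Marker:
    frame: int
    number: Optional[int] = None  # 0-99 for numbered markers; None if unnumbered
    comment: str = ""             # shown in the Program Monitor
    chapter_link: str = ""        # jump point in a QuickTime movie or on a DVD
    web_link: str = ""            # URL opened during computer playback

class Sequence:
    """Holds timeline markers: at most 100 numbered (0-99), plus any
    number of unnumbered markers."""
    def __init__(self):
        self.markers = []

    def add_marker(self, marker):
        if marker.number is not None:
            if not 0 <= marker.number <= 99:
                raise ValueError("numbered markers run from 0 to 99")
            if any(m.number == marker.number for m in self.markers):
                raise ValueError(f"marker {marker.number} is already set")
        self.markers.append(marker)
```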
Don’t forget titles, graphics, and credits
Text and graphics can play an integral role in conveying information in a video program. And,
when you’re proud of all that you’ve accomplished, you’ll want to include credits that acknowledge your hard work and that of everyone else who helped create your production. Titles may
include lines, shapes, images, animations, video, and text. You can create titles using still graphics software applications, like Illustrator and Photoshop; using motion graphics software, like
After Effects; or simply by using the Titler in Adobe Premiere Pro.
The Titler gives you the ability to design complex titles using customizable templates and styles
created by professional designers, or develop your own custom styles that you can save and use in
other title documents. Use familiar spline-based drawing tools to create and freely manipulate
shapes. Import still backgrounds to appear behind your titles or view a frame of video footage in
the drawing area as you create a title to ensure that your titles will look their best as video plays
behind them. Add logos or other custom graphics with ease and use the Align and Distribute
features, similar to those found in Illustrator, to facilitate the design process. Incorporate any
vector type font in your system, including Type 1 (PostScript), OpenType®, and TrueType fonts.
The Titler gives you the artistic control you’d expect from an Adobe product, letting you easily
adjust such properties as font size, aspect, leading, kerning, tracking, baseline shift, slant, and
small cap size. You can also apply strokes, fills, gradients, sheens, textures, shadows, and glows to
both objects and type to create exactly the look you want.
Editing a video project means choosing and arranging audio and video segments from the elements you
have shot or gathered. In the first stage of the process,
capturing, you record the elements you think you
might want to use to your hard disk. Typically, you
capture more material than you will actually use.
When you insert clips into your video project, the
clips do not become part of the project file; rather,
the project file contains references to the source
clips stored on your hard disk. Clips become part of
a finished project only when you export your project
to a delivery medium, such as videotape or a file to
be posted on the web. Unless you are absolutely sure
you will not be using some of the source clips you
captured, it’s best not to delete any of them from your
hard disk until your project is completed.
Trimming clips
You define the beginning of the clip’s appearance by
marking an In point (the first frame that will appear in
your program). You define the end by marking an Out
point (the last frame that will appear). During capture,
you select rough In and Out points that contain
extra footage before and after the parts you want to
use. These extra frames are called handles. You can
remove the handles later during editing or use them
to provide overlapping footage for transitions.
It is common to fine-tune the beginning and end
of a clip just before moving a clip into a project. For
numerical precision, you can set In and Out points
in the Monitor window in Adobe Premiere Pro. For
visual precision, or if you prefer to use the mouse,
you can edit directly in the timeline. Even if you use
only a small portion of a captured clip in your project,
the entire clip remains available on your hard disk,
enabling you to make adjustments at any point in the
editing process. “Trimming clips” usually refers to this
process of selecting In and Out points for individual clips.
In Adobe Premiere Pro, you can use the Trim
monitor to trim two clips at once, setting the In point
of the second clip simultaneously while setting the
Out point of the first.
Trimming a project
The term trimming is also used to refer to the practice
of removing frames from clips when you have completed your project and you want to tidy up your files.
This function in Adobe Premiere Pro is nondestructive, meaning that the original footage remains intact.
When you use the Project Manager to trim a project,
Adobe Premiere Pro creates a new version of a project, called a trimmed project, that contains only those
portions of clips actually used (including specified
handles). You can then delete or archive the original
clips to save disk space. The Project Manager can also
help you consolidate or collect a project in one location for sharing or archiving.
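The range of source frames a trimmed project keeps for each clip is easy to express: the frames actually used plus a handle on each side, clipped to the bounds of the captured material. This is a hypothetical helper for illustration.

```python
def trimmed_range(used_in, used_out, handle, src_len):
    """Return the (start, end) source frames kept for one clip in a
    trimmed project: the used frames plus `handle` extra frames on each
    side, never extending past the captured source."""
    return max(0, used_in - handle), min(src_len, used_out + handle)
```

With a 100-frame capture of which frames 30-90 are used and a 25-frame handle, the trimmed project keeps frames 5 through 100.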
Make an automatic music video!
Organize a sequence of clips in the Icon view of the Project panel. Then drop a series of unnumbered
markers onto the timeline, highlighting rhythmic
features as you listen to your audio track. When you
perform Automate to Sequence, your clips will be
choreographed to the music, cutting in and out on
the beats you marked.
The Titler in Adobe Premiere Pro
Although static titles, graphics, and logos may suffice for some projects, many others require
titles that move across the screen in front of your footage. Titles that move vertically (up or
down) are called rolls; titles that move horizontally are called crawls. The Titler provides choices
and settings that facilitate creating smooth, expert rolls and crawls.
Correcting the color
Assets aren’t always perfect. After assembling your production, you may want to clean up imperfections and inconsistencies, especially when it comes to color.
Color can have a dramatic impact on a movie. Emotional overtones change when the colors
on-screen look lush and vibrant, or when they look more muted. It’s critical to ensure that colors
are consistent from cut to cut because jumps in color can appear jarring to an audience. Editors
commonly perform scene-by-scene color correction to make sure that all of the shots in a scene
match, to give scenes the right look, and to correct exposure, color balance, and other production problems caused by lighting, cameras, and environment.
Also, if your production is destined for broadcast, the chrominance (color hue and saturation)
and luminance (brightness and contrast) must meet broadcast standards. When video exceeds
these limits, colors tend to bleed, blacks and whites look washed out, and the picture signal can
even get distorted.
Use the color correction controls for precise adjustment of most color parameters.
Adobe Premiere Pro provides built-in vectorscope, waveform, YCbCr Parade, and RGB Parade
monitors to provide accurate representations of chrominance and luminance levels. With these
tools, you can see whether clips share a common color spectrum and make sure that your color
adjustments fall within broadcast limits. For color adjustments, Adobe Premiere Pro provides
a number of options ranging from the Fast Color Corrector, for simple adjustments that render
in real time, to the Three Way Color Corrector, which provides control over hue, saturation, and
luminance for highlights, midtones, and shadows. Many of the color correction modules also
feature optional secondary color correction, which allows you to limit the range of the image
that is corrected. Secondary color correction can be used for fine adjustments or for achieving
special effects.
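For example, broadcast-legal 8-bit luma nominally runs from 16 (black) to 235 (white) under Rec. 601/709 video levels. A minimal Python sketch of a broadcast-safe clamp (a simplification; real legalizers also constrain chroma and apply softer limiting):

```python
def broadcast_safe(luma_samples, black=16, white=235):
    """Clamp 8-bit luma samples to the nominal broadcast range
    (16 = black, 235 = white in Rec. 601/709 video levels). Values
    outside this range can look washed out or distort the signal."""
    return [min(max(y, black), white) for y in luma_samples]
```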
What is real-time editing?
Previewing involves rendering (displaying) the frames
of a sequence for playback. Sequences that consist of
cuts between single tracks of video and audio render
quickly, whereas sequences that include layered
video and audio and complex effects require more
processing time.
Rendering: Desktop software used to make you
wait while it rendered (some still does). Sometimes,
rendering an effect on a desktop system would take
minutes or even hours, which would slow production
to a crawl. If you wanted to generate results in real
time, you had to purchase a real-time video card that
was compatible with your editing software.
Background rendering: Background rendering still
requires you to wait before you can preview your
work. You can move on to something else while your
adjustments are rendering, but if the next thing
you want to do is dependent on the results, you’re
no better off. In effect, background rendering is like
being able to do something else while your dinner
cooks, but not being able to taste the food until it’s
completely done.
Use built-in color monitors to see if clips meet color broadcast standards.
Merging creativity and productivity
One of the more time-consuming aspects of editing video on a desktop has been waiting for
productions to render before you can see how effects, transitions, and other edit choices look. As
computers have become faster, video editing systems have introduced real-time previewing, but
usually with artificial boundaries that limit their effectiveness. Adobe Premiere Pro enables you
to see exactly how your video will look without waiting for sequences to render.
Whether you’re making on-the-fly changes for a client or preparing to export your final production, you’ll deliver results quickly. Adobe Premiere Pro plays back full-resolution frames, including
titles, transitions, effects, motion paths, and color correction on two channels, in real time with
no additional hardware support. Because it’s fast and efficient to preview editing decisions as you
make them, you can experiment more freely. You could, for example, try different settings for
the effects you’re creating, and then play back each combination to check the results and decide
which one works best. You can also view scenes played back in real time on an external NTSC
or PAL video monitor, a real time-saver when you need to check how a work in progress will look on a
final viewing device.
Real-time software: Real-time software (such as
Adobe Premiere Pro) offers you a better option,
one that’s more supportive of your creativity, while
promoting your productivity. The Real Time Preview
capability in Adobe Premiere Pro renders the frames
of the sequence on the fly, so that in most cases,
previewing simply involves playing the sequence
using any of the controls in the Program view or
Timeline. When Adobe Premiere Pro can’t achieve
the sequence’s full frame rate, you have the choice of
playing the segment right away at a reduced quality
and frame rate, or waiting to render a preview file
that can play at the full frame rate. Sequences that
have been rendered at full frame rate for previewing
need not be rerendered for export. Real Time Preview
supports all Adobe Premiere Pro effects, transitions,
transparencies, motion settings, and titles.
Real-time hardware: Real-time hardware shunts
the processor-intensive work of rendering from the
CPU to a specialized processor on a video card. Most
real-time cards can handle the most common types
of effects, such as transitions and titles; more costly
cards can handle a much wider array of effects and
other techniques, even the capability to fly your video
around in 3D, in real time.
Note: The real-time editing experience is designed to take advantage of Pentium 4 systems, 3 GHz
and faster. Playback frame rates and quality degrade gracefully on less powerful systems.
Digital audio for video
Just as you polish your video with color correction, you can polish the audio, so that sound
levels and tonal quality are consistent throughout, and transitions between audio elements
are smooth. And just as effects add an element of magic to your video, you can sweeten the audio
track with music, sound effects, and additional dialogue or voice-overs.
You can use Adobe Premiere Pro to perform basic audio sweetening, and then open your editing
project in Adobe Audition for more advanced control of your audio. By using Audition, you can
work easily with multiple audio tracks and elements, add audio effects and processing, and then
fine-tune the mix.
Sweetening and mixing audio
Sweetening means adding audio elements, such as music, sound effects, and additional dialogue,
and processing the audio with software or hardware to change the tonal quality and volume of
the sound. The final stage of sweetening is mixing, when you combine the elements by adjusting
the audio levels of each track to create an overall balanced sound. For example, you might mix
dialogue clips with ambient background sounds and a music track.
You can perform any combination of the following tasks in Adobe Premiere Pro:
• Adding audio elements and tracks: Just as you can add and edit video clips on the timeline, you
can add and edit audio elements. All of the same tools and techniques apply to audio clips, such
as setting In and Out points, speed, and duration. For example, you can add an audio track for
sound effects, and then add the sound of a door closing. You can then use the editing tools to
adjust the In and Out points of the clip, and change its position on the timeline to synchronize
(sync) with the video.
• Fading audio clips over time: While watching the video program, you can increase or decrease
the audio gain (volume level) of a track at precise time points in the Adobe Premiere Pro
Timeline panel, or use the volume faders in the Audio Mixer to adjust and record the volume
levels for each audio track. The mixer channels include automation, so the level changes you
make are reproduced exactly when you preview or render a timeline.
• Panning/balancing stereo clips: When panning an audio clip, you create the illusion of a
sound coming from somewhere between the speakers by adjusting the amount of sound that
is sent to each speaker. For example, as you increase the amount sent to the right channel and
decrease the amount sent to the left, the sound appears to move to the right. If the audio level
is equal in both speakers, the sound appears to be centered. You could use panning to match a
dialogue clip to a person’s movement in the video frame. You can adjust pan and balance in the
Timeline panel, or by using the Pan control in the Audio Mixer to precisely position audio in a
stereo channel.
• Adding audio effects: Adobe Premiere Pro provides a wide range of built-in controls for processing audio. For example, the Compressor/Expander effect fine-tunes dynamic range; the
Notch/Hum effect removes distracting hum; the Reverb effect acoustically simulates an environment, like a large hall; and the Parametric Equalizer effect lets you tweak specific frequency
ranges. As with video effects, you can add multiple effects to a single audio clip, and use keyframes
to modify effects over time. A variety of audio effects are included with Adobe Premiere
Pro, and built-in support for industry-standard VST audio plug-ins enables you to use your
favorite third-party plug-ins as well.
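The pan behavior described above can be sketched with a constant-power pan law, a common approach in audio mixers. This is an illustration of the concept only; the function name and exact curve are assumptions, not Premiere Pro's internal implementation.

```javascript
// Constant-power pan: pan runs from -1 (full left) through 0 (center)
// to +1 (full right). Mapping the pan position onto a quarter circle
// keeps left^2 + right^2 constant, so perceived loudness stays even
// as the sound moves between the speakers.
function panGains(pan) {
  const angle = (pan + 1) * Math.PI / 4; // map [-1, 1] to [0, 90 degrees]
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

// Centered: both channels get ~0.707 of the signal.
const center = panGains(0);
// Hard right: the left channel is silent, the right carries everything.
const hardRight = panGains(1);
```

Moving the pan value over time, as the Audio Mixer automation does, simply re-evaluates these gains at each point.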
Adobe Premiere Pro Audio Mixer panel
In Adobe Premiere Pro, you can create and work with multichannel audio to produce surround
sound and other richly layered audio experiences. With support for editing audio clips at the
subframe, audio-sample level, you can adjust audio clips with sample-accurate precision (up to
1/96,000th of a second) to perfectly sync audio elements on different tracks or precisely edit a
clip, such as removing a pop or click.
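As a rough check of that precision: at a 96 kHz sample rate, each sample lasts 1/96,000 of a second, so thousands of sample-level edit points fit inside a single video frame (a back-of-the-envelope calculation, not editing code).

```javascript
// How many 96 kHz audio samples fit in one NTSC video frame.
const sampleRate = 96000;          // samples per second
const frameRate = 30000 / 1001;    // NTSC "29.97" fps
const samplesPerFrame = sampleRate / frameRate; // about 3203 samples

// Each sample spans 1/96,000 s, so a sample-accurate edit can land
// thousands of times more precisely than a one-frame adjustment.
const samplePeriod = 1 / sampleRate;
```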
When you import or capture a video clip that contains audio, the audio and video tracks are
linked by default, so that they move together in order to maintain sync. When you edit or move
a video clip linked to an audio clip, the changes apply to both the audio and video. However,
there are situations when you may want to work with the audio and video as separate clips. Then
you can unlink the tracks, make your separate edits, and then relink the clips if you want. For
example, you can unlink clips to create an L-cut.
You can process an audio clip in several ways: choose a menu command for a selected clip, apply
an audio effect, or adjust volume and pan/balance levels either directly in the timeline or by using
the Audio Mixer. The Adobe Premiere Pro Audio Mixer supports many features. Use the Audio
Mixer to capture audio directly to the timeline. For example, you can record live professional
voiceovers to the timeline as it plays back or record notes about an edit sequence as you watch it.
Adobe Premiere Pro automatically records the voiceover live as the video plays and inserts a new
clip on the specified track.
Advanced audio post-production
Adobe Premiere Pro is primarily a video editing application. When your production requires
more advanced audio editing and processing, you can hand off the audio to Adobe Audition,
which specializes in audio production. Adobe Audition includes support for the Edit Original
command found in both After Effects and Adobe Premiere Pro. When working in either of those
programs, select an audio file or clip in your project and use the Edit Original command to open
either that single file or the entire session that created it in Adobe Audition. The process is seamless, with Adobe Premiere Pro taking care of all the necessary file management.
Adobe Audition
Lets you create and mix audio in a professional multitrack recording studio environment while watching
your video.
The integrated wave editing view in Adobe Audition means you don’t need to leave the application for any of
your digital audio tasks.
Adobe Audition is comprehensive and versatile enough to satisfy the demands of broadcast
sound engineers and professional musicians, but intuitive enough for anyone to grasp. Adobe
Audition can be thought of as a professional multitrack recording studio on a computer, which
means you can record, play, edit, process, and mix multiple tracks of audio with the same high
level of quality you would expect in a professional studio. To build a complete studio, you can
add multichannel audio hardware, microphones, and a studio space. Then you can
sweeten your video with musical underscores, music beds, and foley effects, and replace and
synchronize dialogue.
You can import AVI files and sweeten audio tracks while you watch video playback, then resave
the AVI file with a new audio track. The editing tools in Adobe Audition enable you to be as
precise in your cuts as you like, with editing control down to the sample level and automatic
zero-crossing detection to avoid pops when you make cuts. You can also add crossfades
and automation envelopes to smooth transitions and balance the overall volume; and you can
change tempo without shifting pitch or shift pitch without changing tempo.
When you need to produce audio quickly, you can build a soundtrack from thousands of high-quality royalty-free loops that are included with Adobe Audition. The loops come in a wide
variety of musical styles, and exceptional looping controls in Adobe Audition make them easy to
work with. In addition, the loops automatically conform to the global session tempo and key.
The tools in Adobe Audition give you the power to create rich, nuanced audio at 32-bit resolution using any sample rate up to 10 MHz. Precise sample rate conversion guarantees high-quality
results, and is ideal for upsampling CD material from 44.1 kHz to 48 kHz for video or 96 kHz for
audio DVD. Adobe Audition also includes sophisticated audio restoration features. When you’re
ready for the final mix, you can use the powerful mastering and analysis tools, which all run
natively at 32-bit resolution. Batch processing tools save you time by automating repetitive tasks,
such as file format conversion, and matching the volume of multiple files. With the multichannel
encoder, you can easily transform any mix into a surround sound experience.
Adobe Audition provides extensive support for industry-standard audio file formats, including WAV, AIFF, MP3,
mp3PRO, WMA, and WMAPro.
Synchronization issues
To make sure the audio tracks synchronize properly with the video, you need to consider audio
sample rates in relation to the timebase and frame rate of your project. It is a common mistake
to create a movie at 30 fps with audio at 44.1 kHz, and then play back the movie at 29.97 fps (for
NTSC video). With the video playing at 29.97 fps and the audio still timed against 30 fps, at some
point you will notice that the audio starts to get ahead of the video. The difference in frame rates results in a
synchronization discrepancy that appears at a rate of one frame per 1000 frames, or one frame
per 33.3 seconds (just under two frames per minute). If you notice audio and video drifting apart
at about this rate, check for a project frame rate that doesn’t match the timebase.
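The drift arithmetic can be verified with a short back-of-the-envelope calculation (NTSC's nominal rate is exactly 30000/1001 fps):

```javascript
// Drift between audio timed at an even 30 fps and video playing at
// NTSC's actual 29.97 fps (exactly 30000/1001).
const videoFps = 30000 / 1001;
const audioFps = 30;

// Drift (in frames) after a given number of video frames have played:
// the audio effectively advances 1.001 frames per video frame.
function driftFrames(framesPlayed) {
  return framesPlayed * (audioFps / videoFps - 1);
}

// One full frame of drift accumulates after exactly 1000 frames,
// which at 29.97 fps is just over 33 seconds.
const framesForOneFrameDrift = 1 / (audioFps / videoFps - 1);
const secondsForOneFrameDrift = framesForOneFrameDrift / videoFps;
```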
Visual effects and motion graphics
Adobe Premiere Pro provides a wide range of transitions and effects, as well as powerful capabilities for titling, motion graphics, transparency, and compositing. However, just as Adobe Audition
enables you to do more with your audio, Adobe After Effects gives you more control over the
visual aspects of your production, providing the tools to work with effects and create motion
graphics. After Effects lets you do more advanced tasks, including sophisticated compositing of
moving imagery and precisely controlled 2D and 3D animations.
After Effects offers the speed, precision, and creative control you need to produce superb motion
graphics and visual effects for film, video, multimedia, or the web. With its professional compositing tools, keyframe-based animation system, and extensive selection of visual effects, After
Effects delivers an unparalleled set of powerful production tools for generating dynamic openers,
bumpers, titles, games, web animations, and more. After Effects has also spawned an entire
category of third-party software and training support products.
At almost any time, the work of After Effects artists can be seen in broadcast, cable, and satellite
programming in every part of the world. The list of major motion pictures that have been created with the help of After Effects is extensive, including effects-heavy films such as The Aviator,
Monsters Inc., Gladiator, Tomb Raider, Hannibal, Spy Kids 3D, Hulk, Bruce Almighty, The Italian
Job, Cold Mountain, and Hollow Man.
If you are new to the art of motion graphics and visual effects, some of what you are going to
read about in the next few pages may sound pretty complicated, but After Effects makes it easy to
learn. Context-sensitive menus make commands available right where you need them, and tool
tips help new users see what a tool or option does.
After Effects
Choose the Edition that’s right for you
After Effects is available in two editions: Standard and
Professional. You can find a detailed description of
the features of both editions at
Commonly used panels in After Effects include the Composition panel, the Character and Paragraph panels,
the Project panel, the Motion Sketch and Time Controls panels, the Effects & Presets panel, and the Timeline panel.
Video compositing
Compositing is the process of combining two or more images to yield a single, enriched
image; at its simplest, it means playing one clip on top of another. Composites can be made
with still or moving images.
The terms keying and matting, in video and film production, refer to specific compositing techniques:
• Keying uses different types of transparency keys to find pixels in an image that match a specified color or brightness and makes those pixels transparent or semitransparent. For example, if
you have a clip of a weatherman standing in front of a blue-screen background, you can key out
the blue using a blue-screen key, and replace it with a weather map.
• Matting uses a mask or matte to apply transparency or semitransparency to specified areas
of an image. When you use keying or matting to apply transparency to portions of an image
layered on top of another, portions of the lower image show through.
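At its simplest, a blue-screen key can be sketched as a per-pixel color-distance test. The function name and tolerance below are illustrative assumptions; production keyers also compute partial transparency, soften edges, and suppress color spill.

```javascript
// Minimal blue-screen key sketch: any pixel close enough to the key
// color becomes fully transparent (alpha 0); everything else stays
// fully opaque (alpha 255). Pixels are [r, g, b] triples.
function blueScreenKey(pixels, keyColor, tolerance) {
  return pixels.map(([r, g, b]) => {
    const distance = Math.hypot(r - keyColor[0], g - keyColor[1], b - keyColor[2]);
    const alpha = distance <= tolerance ? 0 : 255; // hard cut for simplicity
    return [r, g, b, alpha];
  });
}

// A near-blue background pixel is keyed out; the foreground pixel is kept.
const keyed = blueScreenKey([[10, 20, 250], [200, 150, 90]], [0, 0, 255], 60);
```

Compositing then uses the resulting alpha channel to reveal the weather map (or any lower layer) wherever the key made the upper layer transparent.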
The Auto-trace feature in After Effects converts alpha channels into vector-based masks. This feature makes it
easy to use the edge of an object or any key you’ve created as a path. For example, you can use an alpha channel
from a green-screen shot to create an animated vector shape or use it as the basis for text on a path.
Combining diverse types of media elements is one of the things for which After Effects is best known.
After Effects is the optimal program for layering media in motion because of its extensive transfer
mode support (just like in Photoshop), and its powerful masking capabilities, along with its wide
selection of keying methodologies.
Editing: In order to composite video clips, you first edit and assemble them onto a timeline. Place
the clips to which you want to apply keys or mattes on superimpose tracks above the Video 1
track footage. After Effects includes tools and commands that streamline the process of constructing and refining compositions by turning time-consuming manual tasks into operations
that can be completed with a simple tool or command.
Masking: You can create, edit, and animate an unlimited number of masks on every layer in
After Effects. Draw paths to create transparencies or add new objects to an animation such as
stroked lines. Combine paths to make unusual shapes using operations such as Add, Subtract, and
Intersect. Rotate and scale masks, and apply opacity settings to make masks appear and disappear over time. Lock masks to protect them from change. Extensive masking capabilities give
you extraordinary control:
• Edit masks in the Composition panel: Copy and paste masks into your compositions from Illustrator
and Photoshop, or create masks on the fly by drawing them directly in the After Effects Composition panel. This process saves time and can make it easier to adjust a mask precisely, relative to
other layers. You can also continue to create masks in the Layer panel.
• Assign mask colors: Assign colors to masks for easy identification.
• Feather the mask edge: Create and adjust the inner or outer feather of a mask by insetting or
outsetting the mask edge from the mask shape.
• Apply motion blur: By adding motion blur to masks, you create realistic-looking mask animations.
We applied the Glow and Stroke effects to the
mask created by the Auto-trace command.
Because the alpha channel is traced in each
frame, the mask animates smoothly.
2D and 3D compositing: You can animate images in either two or three dimensions. With either type,
you can move objects horizontally (x axis) or vertically (y axis), but 3D animation adds depth
(z axis), letting you change the z-position, z-rotation, and orientation or perspective.
And you can animate the object to interact with light direction, shadows, and cameras (points of
view). In addition, you can use different types of animation on each layer. For example, you could
composite a 2D title animation over a 3D animation that synchronizes movement with video on
a third layer.
Making things move is only one aspect of animation. After Effects offers a wide range of features
and tools to augment your animation capabilities.
Timeline implementation: Animation revolves around the concept of elements changing over
time. The ability to selectively display control curves with linear keyframe information directly
inside the Timeline panel lets you fine-tune timings of multiple elements. The Timeline panel
provides flexibility for viewing and editing all object parameters.
Keyframe control: Keyframes are the heart and soul of moving objects, and After Effects provides
precise control over keyframe type, generation, placement, and all other aspects of keyframe
functionality. Full curve-based editing of keyframe data delivers the ability to exactly tweak
motion and animation data to fit a desired requirement for all aspects of motion and effects over
time. Use the Graph Editor in the Timeline panel to view and work with changes in effects and
animations as a two-dimensional graph.
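The simplest keyframe type, linear interpolation, can be sketched in a few lines. This is an illustration of the idea only, not After Effects internals; After Effects also offers Bezier and hold interpolation, which the Graph Editor lets you shape directly.

```javascript
// Linear interpolation between two keyframes. Each keyframe pairs a
// time with a property value (opacity, position on one axis, etc.).
function interpolate(k1, k2, t) {
  const u = (t - k1.time) / (k2.time - k1.time); // 0 at k1, 1 at k2
  return k1.value + u * (k2.value - k1.value);
}

// Halfway between opacity 0 at t=0 and opacity 100 at t=2 gives 50.
const midOpacity = interpolate({ time: 0, value: 0 }, { time: 2, value: 100 }, 1);
```

Curve-based editing replaces the straight line between keyframes with a curve, which is why adjusting handles in the Graph Editor changes the feel of a motion without moving the keyframes themselves.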
Motion Sketch and Smoother: Plotting complex motion can be difficult if you must enter
keyframes manually. By using the Motion Sketch panel, you can draw animation paths on the
screen, varying the velocity of a path by adjusting your drawing speed. After Effects then automatically creates the keyframes for you. Use the Smoother to smooth the shape of the path and
fine-tune it until the animation moves exactly as you want.
Parenting: You can synchronize the motion and other properties of objects in two or more layers
by defining a parent layer and one or more child layers. By defining a parent-child relationship
between layers, you ensure that the child layers inherit all of the transformations applied to the
parent. Parenting is useful for making objects in multiple layers appear to move and change as
one object. For example, when the scale and position of the parent layer are animated, the child
layers behave the same way. Parent-child relationships aren’t limited to footage layers. You can
also define relationships between light and camera layers in 3D compositions. For example,
define a camera as the child to a key footage element in a composition, so the camera will automatically track the movement of that element. Or, a light might have a camera as a parent, so the
elements that a camera is pointing at are always illuminated.
Parent-child relationships are defined between different layers to quickly create a dancing skeleton. As a parent
part moves (the upper arm), so do its children (the lower arm and hand).
Text/character generation: With After Effects, you can type and edit text directly in the Composition panel using the Adobe-standard Type tool, and format text using familiar, Adobe-standard
Character and Paragraph panels, as well as keyboard shortcuts. You can then composite or
animate the text, like any other video source. If you’ve ever worked with text in Photoshop or
Illustrator, you’ll be right at home using the text tools in After Effects. You can fine-tune the look
of text using kerning, tracking, baseline shift, and other interactive options that provide instant
visual feedback.
A single text layer and only two keyframes were used to create this 2D text animation. Scale, Opacity, Rotation,
and Character Offset properties were animated for a single text selector, so that the property changes resolve
into a clear, recognizable word.
Text animation: Animating text used to be a labor-intensive process, in which every letter was
placed on a separate layer and individually animated. With After Effects, you can animate characters, words, or lines within a single text layer, animate properties that move smoothly across
the same range, and animate the entire text layer as a unit. Animated text remains fully
editable throughout the design process, so making late-stage copy changes is easy. To choose
which part of a text layer you want to animate, you define a selector that applies to specific
characters or a certain percentage range of the overall text string. Because you can animate the
selector, for example, by moving it from the start of the text to the end, it’s easy to create animations that ripple a property change, such as a change in color or scale, across the text on a layer.
Each selector you create can animate multiple properties, from standard ones, such as position
and opacity, to text-specific options, such as baseline shift and tracking. You can animate a random wiggle across a range of text, and you can also apply a wiggle to other animated properties
that apply to text. For example, you could create an animation in which a random scale change
ripples across an entire range of text, and, at the same time, wiggle the rotation of each letter in
the range.
Adding effects
After Effects provides precision tools for creating a limitless range of visual and audio effects,
from the most utilitarian color correction and audio sweetening tools to extremely sophisticated
distortion and time-remapping features. After Effects comes with hundreds of effect plug-ins and
animation presets, and you can expand your effects toolkit even further with numerous third-party plug-ins. You can apply an unlimited number of effects to every layer, and save your most
frequently used effects (including keyframes) as animation presets.
3D image created in After Effects
The tools in After Effects make it easy to create elaborate 3D motion graphics and visual effects.
View 3D compositions from different perspectives: View a composition from six different preset
vantage points (front, back, top, bottom, left, and
right), the active camera, and three additional user-definable custom views. You can switch views easily
with keyboard shortcuts.
Define cameras and lenses: Create one or more
cameras to define the perspectives from which your
audience views your 3D animation, and then cut
between cameras to create complex scenes. For
example, you might define a camera using a wide-angle
15mm preset, then cut to a second camera
created using a 200mm lens to capture close-ups
from a different perspective. In addition to standard
preset lenses, you can create and save custom camera settings.
Define lights to illuminate layers in 3D space:
Create as many lights as you need, and then adjust
and animate each light’s properties, controlling its
illumination and color, as well as the shadow it casts.
For example, spotlights provide dramatic lighting
effects by pointing a cone of light at the point you specify.
Control how layers interact with light sources:
Specify material properties that define how a light
affects the surface of a layer, as well as how layers
interact with lights. You can define and animate
Ambient, Diffusion, Specular, and Shininess values.
Animate 3D layer properties: Animate many properties of 3D layers, lights and cameras, such as position,
rotation, and orientation, to create a wide range of
effects. You can also automatically orient 3D layers
towards a camera, or animate lights and cameras
along a path or towards a point of interest you define.
Visual excitement: For each effect that comes with After Effects or that you add to your toolkit,
there are an unlimited number of ways to apply that effect. The effects functionality in Adobe
Premiere Pro is based on the toolset in After Effects, and the two work quite similarly. You organize
your effects in the Effects & Presets panel, and manipulate the properties of effects in the Effect
Controls panel.
Using the Scribble effect, you can vary the look of
animated scribbles by adjusting how much the fill
wiggles, where the fill starts and ends, where it is
applied to a mask, how it is composited with the layer,
and how random the variations in the fill appear.
The Effect Controls panel in After Effects
• Liquify: When you apply Liquify, you can distort footage using brush-based Liquify tools
similar to those in Photoshop. For example, the Turbulence tool smoothly scrambles pixels
and is great for creating clouds, smoke, and other similar effects. The Clone Stamp tool makes
it easy to clone the distortion from one part of an image to another, and the Twirl tools rotate
pixels clockwise or counter-clockwise. You can use the Shift Pixels and Reflection tools to
move pixels perpendicular to the brush stroke to create the effect of reflections in water. Work
with the Reconstruction tool to make dramatic distortions more subtle or return the footage to
its original state. You can customize settings for each tool, and use masks to protect, or freeze,
areas of the footage so that the Liquify tools don’t modify them. You can control how quickly
a distortion animates by setting keyframes for Distortion Percentage; if you want to apply a
distortion to tracked footage, you can offset the distortion mesh by applying tracking data to
the Distortion Mesh Offset property.
• Warp: Transform layers with a Warp effect. Fifteen preset warp styles give you options that
range from transforming layers into regular geometric shapes, such as Arcs, Wave, and Flag,
to simulating the look of objects viewed through a fisheye lens or inflated like a balloon. You
can animate the effect easily by setting keyframes for the Bend and Distortion properties, and
you can customize each Warp Style by changing its axis and specifying a more or less extreme
Bend value.
The Liquify tool was used to make the
cat’s eyes bulge and to add a small smile.
In addition, After Effects includes Turbulent Displace and Magnify effects for creating specialized distortions. Turbulent Displace uses fractal noise to create turbulent distortions, such as
for flowing water, waving flags, or fun-house mirrors. Magnify simulates the placement of a magnifying glass over an area of the image, making it possible to scale an image beyond 100% while
maintaining resolution.
After Effects also delivers a comprehensive set of audio effects for full-featured audio processing. For
example, you can synchronize animation elements to audio amplitude and drive video effects using
audio data. In addition to applying audio effects to your footage, you can also change the volume
levels of audio layers, preview them at a specified quality, and identify and mark locations. Use the
convenient Audio panel to set the volume levels of an audio layer, or use the Timeline panel to view
the waveform values and apply time remapping.
Using expressions
Expressions enable you to create complex animations by linking properties. For example, you could link the rotation of a
wheel on one layer with its shadow on another layer to synchronize the rotation. The expression
translates the motion, so you don’t need to enter keyframes. You can create relationships between
the behavior of a property and the behavior of almost any other property on any other layer,
opening up an infinite number of animation possibilities.
The easiest way to create an expression is to drag the expression pick whip from one property to
another. For example, you could drag the opacity property of one layer to the scale property of
another, so that as one layer increases in size, the opacity of the other increases. Or the tracking
path of text could be linked to the rotation of another layer, so that the text tracks more tightly as
the layer rotates in one direction, then tracks more loosely as it rotates back. After Effects automatically creates the expression for you. You can even drag the pick whip between the Timeline
and Effect Controls panels. If you have some familiarity with JavaScript, you can create powerful,
complex expressions with scripting.
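The core idea of an expression, one property computed from another on every frame, can be sketched in plain JavaScript. The function below only models the concept; it is not literal After Effects expression syntax, although the expression language itself is also JavaScript-based.

```javascript
// Model of an expression linking two properties: a layer's opacity
// follows another layer's scale, clamped to the valid 0-100 range.
// At 100% scale the linked layer is fully opaque; at 0% it vanishes.
function linkedOpacity(otherLayerScale) {
  return Math.min(100, Math.max(0, otherLayerScale));
}
```

In After Effects itself, dragging the pick whip from one property to another writes the equivalent one-line expression for you, and the link is re-evaluated automatically on every frame.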
A few examples of the Distort effects
available in After Effects
An expression is used to tie a directional blur effect to text tracking.
Leverage graphics experience into new opportunities
If you are a graphic designer, you are probably acutely aware that motion is finding its way into
your world, in everything from animated web banners to business presentations. Your experience with Illustrator and Photoshop will make it easy for you to migrate to the world of motion
graphics, expanding your creative and business potential. After Effects lets you directly animate
layered media from Illustrator and Photoshop. The layering and compositing methodologies in
After Effects build on similar functionality in the Adobe software applications you already know.
Many graphic designers have found new markets for their talents in work ranging from the web
to the world of music videos and even film titles by adding After Effects to their tool kits.
Build on your Adobe product skills
If you already use Photoshop, Illustrator, or Adobe Premiere Pro, you’ll recognize the award-winning Adobe user interface featured in After Effects. You’ll find the familiar tools and
common keyboard shortcuts that make it possible for you to work more efficiently and move
among the programs with ease. Productivity-boosting features such as the pen tool, Align panel,
rulers and guides, editing tools, and Free Transform mode work in After Effects just as they do
in other Adobe products. Plus After Effects and Adobe Premiere Pro use a similar time-based
interface. Here are just a few of the ways you benefit from After Effects integration with other
Adobe applications.
Adobe Bridge is your control center for managing
audio, video, and image assets that you use in Adobe
Production Studio applications. Simply drag a file
from Bridge into your layout, project, or composition,
or work with the asset directly in Bridge.
Manage media files: Organize, browse, locate, and
view assets, including audio files.
Preview and apply presets and templates: Use
Adobe Bridge to apply After Effects project templates,
and animation and behavior presets.
Search metadata: Find files on hard disks or networks using extensive metadata that you create,
such as title, author, keywords, and camera information.
Access stock photos: Use Adobe Stock Photos to
locate and purchase images or search for royalty-free images.
Process camera raw images: Adjust, crop, and process
camera raw images, and copy settings between files.
Manage color: Synchronize image color settings, so
the colors look the same regardless of which Adobe
Creative Suite 2 application you open the image in.
A Digital Video Primer
•Adobe Photoshop: You can transform layered Photoshop images into animations. Import
Photoshop files as compositions, one at a time or in batches. After Effects preserves layers,
common layer effects, adjustment layers, alpha channels, transfer modes, vector masks, guides,
and more. You can then apply visual effects to color correct, stylize, or manipulate each layer,
and animate these layers over time. Use Photoshop paths as masks or motion points. Text also
remains fully editable and formatting is preserved when you import Photoshop files.
•Adobe Illustrator: Add carefully crafted typography or eye-catching graphics to your video productions. Import layered Illustrator files as compositions, one at a time or in batches. Choose whether
After Effects preserves the layers or merges them on import. Then resize the Illustrator layers to any
resolution without losing detail, and animate them with complete control. Copy paths in Illustrator and paste them into After Effects files as masks or motion points. Preserve transparency and
transfer modes. Continuously rasterize Illustrator layers in both 2D and 3D.
•Adobe Premiere Pro: Import Adobe Premiere Pro projects as compositions. Each video, audio,
and still-image clip appears on its own layer, arranged in the correct time-based sequence in
the Timeline panel. Nested sequences in Adobe Premiere Pro appear as nested compositions
when the project is opened in After Effects; transparency, Cross Dissolve, and motion keyframes in Adobe Premiere Pro appear as keyframes in After Effects; cropping in Adobe Premiere Pro appears as a mask in After Effects. You can then manipulate these clips to create the
sophisticated effects and animations best produced in After Effects. If you use the After Effects
filters included with Adobe Premiere Pro, those effects and their associated keyframes are also
imported. In addition, you can embed a link in the After Effects movies you output so that you
can use the Edit Original command in Adobe Premiere Pro to open the original project.
•Adobe Audition: Import and export audio from Adobe Audition. Use the Edit Original command in After Effects to open either a single audio file or the session that created it in Adobe
Audition. After Effects recognizes the changes and updates your project automatically.
•Adobe Encore DVD: Use Adobe After Effects to create motion menus for the DVDs you author
in Adobe Encore DVD. As with other Adobe applications, you can use the Edit Original command in Adobe Encore DVD to open and adjust source files in After Effects.
The integrated vector paint engine in After
Effects provides Photoshop-style brushes
and powerful cloning capabilities.
Getting video out of your computer
Once you have finished assembling and editing clips, it’s time to get your final production out of
your computer and on its way to distribution. These days, creative professionals are expected to
deliver video that can be used in multiple media. Broadcast and film professionals alike are now
creating web-based work, while web designers may need to create animations that are output in
video formats. DVDs have also become an extremely popular way to combine high-quality video
and audio content with menu-driven interactivity. To address this growing need for flexibility,
Adobe Premiere Pro and After Effects offer a wide range of options that enable you to produce
high-quality deliverables for any medium.
The program you edited in the timeline does not actually contain the material from which it was
pieced together. Rather, it references your source files. Before export, make sure that the timeline
is ready to output at the quality you require. For example, replace any offline files with high-resolution files suitable for final export. To get your edited program out of your computer in one
piece, you can:
• Record the timeline to physical media including videotape or motion picture film, assuming
that you have the proper hardware for video or film transfer, or have access to a service provider that offers the appropriate equipment and services.
• Export a video file for viewing from a hard disk, removable cartridge, CD, DVD, or the web.
• Export portions of your timeline as clips.
• Capture stills or sequences of stills.
From Adobe Premiere Pro, you can also export:
• An EDL (edit decision list)
• An AAF (Advanced Authoring Format) file
Rotoscoping involves painting on individual frames
over a series of frames to create an animation or to
remove unwanted details from your footage. This
type of painting can be accomplished either in After
Effects or Photoshop.
For rotoscoping in Photoshop, export the Filmstrip
format from Adobe Premiere Pro or After Effects. You
can render all or part of a composition as a filmstrip, a
single file that contains all the frames of a composition or only a portion of them.
Video compression is not used in creating a filmstrip
file, because rotoscoping requires each and every
frame to be available in its entirety. Filmstrip files can
be very large, but you can break a filmstrip file into
any number of smaller files.
A filmstrip opens in Photoshop as a series of frames
in a column, with each frame labeled by number, reel
name, and timecode. If the column created by the
filmstrip frames is more than 30,000 pixels tall, the
frames continue in a second column. The number
of frames displayed depends on the duration of the
footage and the frame rate selected when you render
the filmstrip.
Good housekeeping
In professional production environments, after a video project has been completed, it is typically
cleared from the editing system to make room for new work. Because the multigigabyte storage
media that would be needed is costly, and the process of uploading can be very time-consuming,
projects and source files are not usually saved in their entirety. If you do want to save your entire
project, you can trim unused frames from some or all of your source clips and remove unused
clips in their entirety from Project Bins.
Typically, however, a digital master file is exported and archived, the original raw footage is
stored on tapes, and an EDL is saved. If the project needs to be revised later, the master file can
often be edited. For more extensive repurposing, the EDL can be used to recapture the necessary
clips from the original tapes. Files used to develop titles, graphics, and animations, as well as
portions of the project that have undergone extreme manipulation to achieve special effects, can
also be archived.
Today, more and more production professionals are exporting AAF files, rather than EDLs, to
archive or exchange projects. AAF is a widely supported industry standard for high-end
exchange of data, such as the information necessary to transfer a video project from one platform
to another. An AAF file helps you preserve as much of the project’s integrity as possible when
you transfer it to another system. However, not all elements of a project can be successfully
transferred using AAF. Also, the application you use to open the AAF file may not support all
features. In general, an AAF file dependably translates editing data and commonly used transitions, such as cross-dissolves and wipes, but does not support effects (filters) or audio fade and
pan information, including audio transitions.
Exporting to videotape
You can record your edited program onto videotape directly from your computer. This process
can be as simple as playing the timeline and recording on a connected device. When you record
standard DV video back to standard DV tape, all that is required is an IEEE 1394 connection.
However, if you plan to record DV audio and video to an analog format, such as VHS tape, you’ll
need a device that can convert DV to analog using the connectors supported by your analog
video recorder. Most DV cameras and all DV video tape recorders are capable of this conversion; some DV cameras require you to record the video to DV tape, then copy the DV tape to the
analog video recorder.
Exporting to digital files
You can prepare variations of a program or clip for a variety of different uses. For example, you
can create separate versions for DVD distribution and web viewing. Adobe Premiere Pro and
After Effects both offer built-in support for exporting the following digital video file formats:
Microsoft AVI, Animated GIF, QuickTime, MPEG-1 and -2, as well as RealMedia and Windows
Media files for the web. After Effects also exports Adobe Flash (SWF) files. Several audio-only
formats and a variety of still-image and sequence formats are also supported by both applications. Additional file formats may be available if provided with your video capture card or if you
add third-party plug-in software.
To start the export process, you enter settings that determine the properties of the final file.
These settings may include the data rate for playback, the color depth, the frame size and frame
rate, the quality, and what type of compression method, or codec, to use. Choosing compression
settings is a balancing act that varies depending on the type of video material, the target delivery format, and the intended audience. Often, you discover the optimal compression settings
through trial and error. Prior to distribution, you should always test the files you export on the
type of platform or equipment you expect your audience to use.
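To see why choosing compression settings is a balancing act, it helps to run the numbers. This sketch (the frame geometry and rates are assumptions based on standard NTSC DV: 720×480 pixels, 3 bytes of RGB per pixel, 29.97 fps) estimates the uncompressed data rate and the compression ratio needed to reach a target delivery rate:

```javascript
// Uncompressed video data rate, in bytes per second.
function uncompressedBytesPerSecond(width, height, bytesPerPixel, fps) {
  return width * height * bytesPerPixel * fps;
}

// How much the codec must shrink the stream to hit a target rate.
function requiredCompressionRatio(uncompressedBps, targetBps) {
  return uncompressedBps / targetBps;
}

// Assumed NTSC DV-size frames: roughly 31 MB of raw data every second.
const raw = uncompressedBytesPerSecond(720, 480, 3, 29.97);
// A 25 Mbps target (DV25's fixed rate) needs roughly 10:1 compression.
const ratio = requiredCompressionRatio(raw, 25e6 / 8);
```

Lower target rates for web delivery push the required ratio far higher, which is why heavier codecs and smaller frame sizes come into play there.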
Web video
The web is rapidly gaining importance as a vehicle for distributing video content. From training
programs, to sharing the experience of personal events such as weddings, to full-length feature
films, video delivered via the Internet or a corporate intranet is big business.
Adobe Encore DVD adds creative authoring for professional DVD production to the Adobe
Production Studio solution set. To learn more about DVD production and Adobe Encore DVD,
take a look at the Adobe DVD Primer on the Adobe website at
We hope this Digital Video Primer has answered enough of your questions to encourage you to
get started. We know that once you do, you and your audience will be thrilled when you screen
your first motion picture project—whether personal or professional. The best thing to do is to
jump right in and learn as you go. Finding the information you need is easy with the comprehensive HTML-based Help included with Adobe products. You can also access additional help and
training materials through Adobe Online, located on the Help menu. The Adobe Production Studio
is a great way to get started, with a comprehensive set of tools that are easy to learn and use and
that include features to grow with.
How to purchase Adobe software products
Via web:
Via phone:
Call the Adobe Digital Video and Audio Hotline at: (888) 724-4507
Education customers:
Contact an Adobe Authorized Education Reseller at:
Free tryouts:
To find the reseller nearest you, visit:
A variety of products and information are available that can be helpful to learning and working
with digital video. The following information is provided as a courtesy. Adobe does not endorse
third-party products or services. This listing was last updated March 2004.
For more information
Adobe Classroom in a Book
published by Peachpit Press
Series of hands-on software training workbooks for Adobe products; includes CD
Visual QuickStart Guides
published by Peachpit Press
Concise, step-by-step instructions to get you up and running quickly, and later provide a great
visual reference
Creating Motion Graphics with After Effects
by Trish and Chris Meyer
published by CMP Books (2004)
ISBN: 0879306068
Techniques for creating animation, composites, and effects; includes CD
Training resources
Adobe Certified Expert (ACE) Program
To become an Adobe Certified Expert you must pass an Adobe Product Proficiency Exam for the
product for which you want to be certified. For more information see the Adobe website at:
Adobe Certified Training Provider (ACTP) Program
For more information see the Adobe website at:
CD-ROM, Videotape, and Web-based Training
Adita Video Inc.
Class On Demand Premiere Pro Fast Track (4 DVD Set)
Online training libraries on Adobe products. Learn at your own pace with unlimited subscription
access for one full year.
Total Training Inc.
In-depth training on DVD for both Adobe Premiere Pro and After Effects
Toll-free: 888-368-6825
Phone: 760-517-9001
Fax: 760-517-9060
Trish and Chris Meyer provide in-depth information about motion graphics and special effects in
After Effects
VTC: The Virtual Training Company
CD and Web-based training on Premiere Pro and After Effects
Free information on the web
Adobe After Effects:
Adobe Audition:
Adobe Encore DVD:
Adobe Premiere Pro:
Adobe Digital Video Primer
Download a PDF copy from
Master the Art of Digital Video
Download a PDF copy from
Design and film school product tips
Adobe Product Support Announcements
Creative Mac
Premiere and After Effects tutorials
After Effects Portal
Multipurpose site with tutorials and tips for After Effects
Tips on using network rendering, 3D channels, and Mesh Warp in After Effects
Tips for low budget video-web production
Video Guys
Helpful resource that explains many of the new technologies related to digital video
Information about MPEG
Information about DVD
Online glossaries
PC Technology Guide
AV Video Multimedia Producer
Covers video production, multimedia, and presentation
Phone: 847-559-7314
Fax: 847-291-4816
Broadcast Engineering
Covers broadcast technology
Computer Videomaker
Covers camcorders, computers, tools and techniques for creating video
Toll-free: 800-284-3226
Phone: 530-891-8410
Fax: 530-891-8443
Digital Editor Online
Master the tools needed to make non-linear and digital editing profitable
Phone: 888-261-9926
DV (Digital Video Magazine)
Covers mainstream digital video
For digital studio professionals who capture, edit, encode, publish, and stream digital content
Film & Video
Covers film and video production
Phone: 847-559-7314
Fax: 847-291-4816
Resource for technology trends in animation, production and post-production for film, video
and streaming
US Toll Free: 866-505-7173
Fax: 402-293-0741
Post Magazine
Resource for video, audio, and film post-production
Toll-free: 888-527-7008
Phone: 218-723-9477
Fax: 218-723-9437
Covers the professional video production market
Phone: 323-634-3401
Fax: 323-634-2615
Video Systems
Covers the video production process from acquisition through presentation
US Toll-free: 866-505-7173
Fax: 402-293-0741
Sign up for technical support announcements
Desktop video
Digital Media Net
Topics related to digital content creation
Digital video industry news
Adobe User to User Forums
DMN Forums
Home of worldwide users groups for Adobe Premiere Pro and Adobe After Effects users
DVD Forum
International association of hardware manufacturers, software firms and other users of digital
versatile discs
Canopus Users Forums
Creative Cow
Online creative communities of the world, including Adobe Premiere Pro and After Effects user forums
Mailing lists
Use e-mail to exchange information and distribute questions and answers on a particular topic.
Adobe Premiere Pro and After Effects mailing lists
DV-L List Server
DV and FireWire technologies
Discussions for video and television professionals
If you use an internet application that lets you access newsgroups, you can read and respond to
postings in the following digital video newsgroups:
Professional associations
Digital Video Professionals Association
Society of Motion Picture and Television Engineers
Digital Editors
DV Expo
NAB (National Association of Broadcasters)
Third-party software and hardware
For Adobe Premiere Pro
For detailed descriptions of third-party plug-ins for Adobe Premiere Pro, visit the Adobe Premiere Pro page on the Adobe website:
For Adobe After Effects
For detailed descriptions of third-party plug-ins for After Effects, visit the After Effects page on
the Adobe website:
Capture cards
For a list of video capture cards that Adobe has tested and certified for use with Adobe Premiere
Pro, visit the Adobe website:
Encoding software
Main Concept
One Chagrin Highlands
2000 Auburn Drive
Suite 200
Beachwood, Ohio 44122
Phone: 216-378-7655
Fax: 216-378-7656
QDesign Corporation
QDesign Music Codec
Phone: 604-451-1527
Fax: 604-451-1529
Helix Producer
Toll-free: 800-444-8011
Phone: 206-674-2700
Windows Media Technologies
Sorenson Media
Sorenson Video Developer and Basic Edition
Phone: 888-767-3676
4:1:1 color: Nonbroadcast color-sampling system in which for every four samples of the luminance (Y)
component, one sample of each of the two chrominance components (Cr and Cb) is taken.
4:2:0 color: Color-sampling system used for PAL video in which for every four samples of the
luminance (Y) component, two samples of each of the two chrominance components (Cr and
Cb) are taken but, unlike 4:2:2 color, only on every other line of each field.
4:2:2 color: Color-sampling system used for NTSC video in which for every four samples of the
luminance (Y) component, two samples of each of the two chrominance components (Cr and
Cb) are taken.
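The sampling ratios in these definitions translate directly into data savings, which is the point of color sampling as a compression step. A small sketch, counting samples over the smallest repeating pixel group implied by each definition (the group sizes in the comments are derived from the definitions above):

```javascript
// Average samples stored per pixel for a color-sampling scheme,
// counted over its smallest repeating group of pixels.
function samplesPerPixel(ySamples, crSamples, cbSamples, pixelsInGroup) {
  return (ySamples + crSamples + cbSamples) / pixelsInGroup;
}

// 4:4:4 baseline: 3 samples per pixel (full chrominance).
// 4:2:2: (4 Y + 2 Cr + 2 Cb) over 4 pixels = 2, two-thirds the data.
// 4:1:1: (4 Y + 1 Cr + 1 Cb) over 4 pixels = 1.5, half the data.
// 4:2:0: chroma only on every other line, so over a 4x2 block:
//        (8 Y + 2 Cr + 2 Cb) over 8 pixels = 1.5, also half the data.
```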
8-bit-per-channel color: Type of color representation that stores and transmits 8 bits of information
for each of the red, green, and blue (RGB) components. In computer terms, known as 24-bit color.
24-bit color: Type of color representation used by most computers. For each of the red, green,
and blue components, 8 bits of information are stored and transmitted—24 bits total. With these
24 bits of information, more than 16 million different variations of color can be represented. In digital
video, known as 8-bit-per-channel color.
24P: A high-definition (1080 lines of vertical resolution), 24 fps, progressive-display video format.
AAF: The Advanced Authoring Format, which was developed to provide a common (open)
file format to allow content to be used by different multimedia authoring and post-production software applications. AAF is an open standard for the interchange of program content
(actual images, video and audio clips, and so forth) and its associated metadata (ancillary
data that describes source location, timecode, transitions, and effects applied) across platforms
and between applications. Sometimes described as a super EDL solution, AAF is, essentially, a
wrapper technology that can carry either the content itself or merely links (pointers) to it, along
with relevant metadata. Although AAF files may contain the actual content, the emphasis of
this format (contrast with MXF) is on the exchange of composition metadata (that is, the information that describes how content is handled in a composition) rather than on the exchange of the
content itself.
Aliasing: The jaggy appearance of unfiltered angled lines. Aliasing is often caused by sampling
frequencies too low to faithfully reproduce an image. There are several types of aliasing that can
affect a video image including temporal aliasing (for example, wagon wheel spokes
apparently reversing) and raster scan aliasing (such as flickering effects on sharp horizontal lines).
Alpha channel: Color in an RGB video image is stored in three color channels (see channel). An
image can also contain a matte (also known as a mask) stored in a fourth channel called the
alpha channel.
Analog: The principal feature of analog representations is that they are continuous. For example,
clocks with hands are analog—the hands move continuously around the clock face. As the minute hand goes around, it not only touches the numbers 1 through 12, but also the infinite number
of points in between. Similarly, our experience of the world, perceived in sight and sound, is
analog. We perceive infinitely smooth gradations of light and shadow; infinitely smooth modulations of sound. Traditional (nondigital) video is analog.
Animatic: A limited animation used to work out film or video sequences. It consists of artwork
shot on film or videotape and edited to serve as an on-screen storyboard. Animatics are often
used to plan out film sequences without incurring the expense of the actual shoot.
Anti-aliasing: The manipulation of the edges of a digital image, graphic, or text to make them
appear smoother. On zoomed inspection, anti-aliased edges appear blurred, but at normal viewing distance, the apparent smoothing is dramatic. Anti-aliasing is important when working with
high-quality graphics for broadcast use.
Artifact: Visible degradations of an image resulting from any of a variety of processes. In digital
video, artifacts usually result from color compression and are most noticeable around sharply
contrasting color boundaries such as black next to white.
Aspect ratio: The ratio of an image’s width to its height. For example, a standard video display
has an aspect ratio of 4:3.
Assets: Typically refers to video and audio clips, stills, titles, and any other elements that comprise the content of a video production. With the recent proliferation of media asset management
solutions, asset has come to mean a piece of content and its associated metadata.
Audio gain: Audio level or volume.
Audio lead: See J-cut.
Audio sweetening: Processing audio to improve sound quality or to achieve a specific effect.
AVI: Audio Video Interleave. AVI is one of the video file formats on the Microsoft Windows platform.
Balancing: Adjusting the balance of sound between the two channels (left and right) in a stereo clip.
Batch capture: Automated process of capturing a specified list of clips from a digital or analog
videotape source.
Batch list: List of clips to be captured by batch capture. Each clip is identified by In and Out
points using timecode on the videotape.
Binary: A type of digital system used to represent computer code in which numerical places can
be held only by zero or one (on or off).
Bit depth: In digital audio, video, and graphics, the number of bits used to represent a sample.
For example, bit depth determines the number of colors the image can display. A high-contrast
(no gray tones) black-and-white image is 1-bit. As bit depth increases, more colors become available. 24-bit color allows for millions of colors to be displayed. Similarly, in digital audio, a higher
bit depth produces better sound quality.
BNC connector: A connector typically used with professional video equipment for connecting
cables that carry the video signal.
Camcorder: A video camera that includes a device for recording audio and video, and typically a
microphone and other devices and controls to make it a complete portable production unit. Most
camcorders record to tape. However, a number record to other media such as hard disks and
optical discs.
Capture: If the source is analog, the process of converting audio or video footage to digital form
for use on a computer. Capture typically also involves the simultaneous application of compression to reduce the data rate of the content, so that it is easier to process and store. If the source
is digital, the content can be transferred directly to the computer hard disk, typically without
conversion or processing.
Capture card: See Video capture card.
CCD: Charge-coupled device. The sensor that detects light inside a digital camera or camcorder.
In single-chip camcorders, the CCD detects all three colors of light (red, green, and blue); in a
camcorder with three chips, each chip is dedicated to one of the three colors, typically resulting
in better-quality images.
CG: see Character generator.
CGI: Computer graphic imagery.
Channel: Each component color defining a computer graphic image (red, green, and blue). By
carrying each component on a separate channel, the colors can be individually adjusted. Channels may also be added to a computer graphic file to define masks.
Character generator: Stand-alone device or computer program used to create text for video display.
Chrominance: The color portion of a video signal.
Clip: A digitized portion of video, also called a shot.
CMX: A standard file format for EDLs.
Codec: Compressor/decompressor or encoder/decoder; hardware or software that handles the
compression of audio and video to make it easier to work with and store, as well as decompression for playback.
Color sampling: A method of compression that reduces the amount of color information (chrominance) while maintaining the amount of intensity information (luminance) in images.
Component video: A video signal with three separate signals: Y for luminance, Cr for chroma/
red, and Cb for chroma/blue. Component signals offer the maximum luminance and chrominance bandwidth. Some component video, like Betacam and Betacam-SP, is analog; other
component video, like D1, is digital.
Composite video: An analog video signal that includes chrominance and luminance information. NTSC, PAL, and SECAM are the international standard formats for composite video.
Compositing: The process of combining two or more images to yield a resulting, or composite image.
Compression: Reducing the amount of data in digital video or audio.
Compression ratio: A comparison of the amount of data before and after compression is applied.
Crawling title: Text or graphics that move horizontally across the screen.
Cut: The simplest type of transition, in which the last frame of one clip is simply followed by the
first frame of the next.
DAM: Digital asset management; see Media asset management.
Data rate: Amount of data transferred over a period of time, such as 10MB per second. In digital
media, data rate is the amount of data required each second to render audio or video in real time.
Digital: A system that uses numbers, such as a computer system. Digital media are sounds and
images represented by binary numbers.
Digital asset management (DAM): See Media asset management (MAM).
Digitize: To convert an analog audio or video signal into a digital bitstream.
Dissolve: A fade from one clip to another.
DTV: Digital television.
Duration: The length of time a video or audio clip or sequence of clips plays; the difference in
time between an In point and Out point.
DV: Generally refers to digital video, but current usage suggests a variety of nuances. DV can
refer to the type of compression used by DV systems or a format that incorporates DV compression. DV camcorders employ a DV format; more specifically, a standard consumer DV
camcorder uses mini-DV tape, compresses the video using the DV25 standard, and has a port
for connecting to a desktop computer. The DV designation is also used for a special type of tape
cartridge used in DV camcorders and DV tape decks.
DVD: A digital storage medium that looks like a CD but has higher storage capacity. A DVD can
store a feature length film compressed with MPEG-2.
DVI: Digital Visual Interface, a connection interface for high-end digital video equipment.
DV25: The most common form of DV compression, using a fixed data rate of 25 megabits per
second (Mbps).
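Because the rate is fixed, storage needs are easy to estimate. A rough sketch, counting the 25 Mbps video stream only (a real DV capture also carries audio and format overhead, so actual files are somewhat larger):

```javascript
// Bytes of DV25 video data produced over a given duration.
function dv25BytesForSeconds(seconds) {
  const bitsPerSecond = 25e6; // DV25's fixed 25 megabits per second
  return (bitsPerSecond / 8) * seconds;
}

// One hour of DV25 video: about 11.25 GB (decimal gigabytes).
const oneHourGB = dv25BytesForSeconds(3600) / 1e9;
```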
EDL: Edit decision list, a master list of all edit In and Out points, plus any transitions, titles, and
effects used in a film or video production. An EDL can be sent to an edit controller, which is a
device that interprets the list of edits and automatically controls the decks or other gear in the
system to create a final edit from original sources.
Effect: A process used to modify the quality of audio or video. In digital media, effects are
typically programs or plug-ins that manipulate data to change the appearance of video or the
character of the audio.
Fields: The sets of upper (odd) and lower (even) lines drawn by the electron gun when illuminating the
phosphors on the inside of a standard television screen, thereby resulting in displaying an interlaced
image. In the NTSC standard, one field contains 262.5 lines; two fields make up a complete television
frame. The lines of field 1 are vertically interlaced with field 2 to produce 525 lines of resolution.
Final cut: The final video production, assembled from high-quality clips, and ready for export to
the selected delivery media.
FireWire: The Apple Computer trade name for IEEE 1394.
Four-point edit: An edit used for replacing footage in a program when the precise In and Out
points of the clip to be inserted and the portion of the program to be replaced are critical and
are, therefore, specified by the editor. The four-point editing feature in Adobe Premiere Pro alerts
the editor to any discrepancy in the two clips and automatically suggests alternatives.
fps: Frames per second, the measurement of frame rate.
Frame: A single still image in a sequence of images which, when displayed in rapid succession, creates
the illusion of motion. The more frames per second (fps), the smoother the motion appears.
Frame rate: The number of video frames displayed per second (fps). In interlaced scanning, a
complete frame consisting of two fields. Video formatted using the NTSC standard has a frame
rate of 29.97 fps. PAL and SECAM standards use a frame rate of 25 fps.
Fullscreen: Format that fills the entire standard (4:3) television screen.
Generation loss: Incremental reduction in image or sound quality caused when analog audio or video
is copied, and then the copy is copied, and so on. Generation loss does not occur when copying digital
media unless the media is repeatedly processed or compressed and decompressed.
Handles: Extra frames specified before the In and Out points of a clip that may be needed to
accommodate transitions or editing adjustments.
Headroom: The practice of capturing digital media at a higher quality setting than will be used
in the final product in order to preserve quality through editing and processing. In audio, extra
audio gain above the average level to help prevent peak levels from distorting.
Horizontal resolution: The number of pixels across each horizontal scan line on a television.
IEEE 1394: The interface standard that enables the direct transfer of DV between devices such as a DV
camcorder and computer; also used to describe the cables and connectors utilizing this standard.
i.LINK: The Sony trade name for IEEE 1394.
In point: The point in a source clip at which the material used in a video program begins.
Insert edit: An edit in which a series of frames is added, lengthening the duration of the overall program.
Interframe compression: Reduces the amount of video information by storing only the differences
between a frame and those that precede and follow it. (Also known as temporal compression.)
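The store-only-the-differences idea can be sketched in a few lines. This toy example (frames as flat arrays of pixel values, each frame diffed against the one before it) illustrates the concept only; it is not any real codec's algorithm:

```javascript
// Toy interframe compression: keep the first frame as a keyframe,
// then for each later frame store only [pixelIndex, newValue] pairs
// for the pixels that changed since the preceding frame.
function diffFrames(frames) {
  const key = frames[0];
  const deltas = frames.slice(1).map((frame, i) => {
    const prev = frames[i]; // frames[i] precedes frames[i + 1]
    const changed = [];
    frame.forEach((value, p) => {
      if (value !== prev[p]) changed.push([p, value]);
    });
    return changed;
  });
  return { key, deltas };
}
```

When little changes between frames, the delta lists stay short, which is exactly where interframe compression saves the most data.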
Interlacing: System developed for early television and still in use in standard television displays.
To compensate for limited persistence, the electron gun used to illuminate the phosphors coating
the inside of the screen alternately draws even, then odd horizontal lines. By the time the even
lines are dimming, the odd lines are illuminated. We perceive these interlaced fields of lines as
complete pictures.
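The even/odd split described above can be sketched with list slicing; the "frame" here is just a toy list of numbered scan lines standing in for real image rows:

```python
# Sketch: splitting a frame's scan lines into the two interlaced fields.
frame = [f"line {n}" for n in range(8)]  # lines 0..7, top to bottom

upper_field = frame[0::2]  # even lines 0, 2, 4, 6 (drawn first)
lower_field = frame[1::2]  # odd lines 1, 3, 5, 7 (drawn second)
print(upper_field)
print(lower_field)
```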
Intraframe compression: Reduces the amount of video information within each frame. (Also
known as spatial compression.)
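A minimal sketch of the interframe idea, under the simplifying assumption that frames are flat lists of pixel values (real codecs use motion estimation, not raw per-pixel deltas): the keyframe is stored whole, and the next frame is stored only as its differences from the keyframe.

```python
keyframe = [10, 10, 10, 200, 200, 10]    # toy "pixel" values, stored complete
next_frame = [10, 10, 10, 200, 210, 10]  # one pixel changed

# Store only (index, new_value) pairs where the frames differ.
delta = [(i, b) for i, (a, b) in enumerate(zip(keyframe, next_frame)) if a != b]
print(delta)  # only the changed pixel is stored

# Reconstruction: apply the stored differences to a copy of the keyframe.
rebuilt = list(keyframe)
for i, v in delta:
    rebuilt[i] = v
```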
J-cut: A type of split edit where the audio In point is earlier than the video In point so that the
audio begins to be heard during the previous video clip. Also known as an audio lead.
JPEG: File format defined by the Joint Photographic Experts Group of the International Organization for Standardization (ISO) that sets a standard for compressing still computer images.
Because video is a sequence of still computer images played one after another, JPEG compression
can be used to compress video (see MJPEG).
Key: A method for creating transparency, such as a blue-screen key or a chromakey.
Keyframe: A frame that is used as a reference for any of a variety of functions. For example, in
interframe video compression, keyframes typically store complete information about the image,
while the frames in between may store only the ways in which they differ from one or more
keyframes; in video editing, a frame can be designated as a keyframe in order to define certain
properties of the audio or video at a particular time. Keyframes are typically used by effects programs or plug-ins to define properties, like image color or frame position and size, at a number
of points on a timeline in order to change the properties over time. For example, keyframes can be
used to define the movement of elements in an animation.
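A minimal sketch of the animation use of keyframes: two keyframes fix a hypothetical opacity property at frames 0 and 30, and in-between values are linearly interpolated (editing applications offer more interpolation methods than this).

```python
keyframes = {0: 0.0, 30: 1.0}  # frame number -> opacity at that keyframe

def opacity_at(frame, k0=0, k1=30):
    # Linear interpolation between the two keyframes.
    t = (frame - k0) / (k1 - k0)
    return keyframes[k0] + t * (keyframes[k1] - keyframes[k0])

print(opacity_at(15))  # halfway between the keyframes: 0.5
```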
Keyframing: The process of creating an animated clip by selecting a beginning image and an
ending image, whereby the software automatically generates the frames in between.
Keying: The technique of using a key to apply transparency when superimposing video clips.
L-cut: A type of split edit where the audio Out point is later than the video Out point so that the
audio continues to be heard with the next video clip.
Log: A list of shots described with information pertinent to content or other attributes; or the
process of creating such a list.
Lossless: A process that does not result in a loss of signal fidelity or data; for example, compression by run-length encoding or the transfer of DV via an IEEE 1394 connection.
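Run-length encoding, mentioned above as a lossless scheme, can be sketched as a round trip in a few lines (a toy string stands in for image data):

```python
def rle_encode(data):
    # Collapse each run of identical characters into a [char, count] pair.
    out = []
    for ch in data:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def rle_decode(runs):
    # Expand the pairs back into the original sequence.
    return "".join(ch * n for ch, n in runs)

encoded = rle_encode("aaaabbbcc")
print(encoded)  # [['a', 4], ['b', 3], ['c', 2]]
```

Because decoding reproduces the input exactly, no signal fidelity is lost.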
Lossy: Generally refers to a compression scheme or other process, such as duplication, that causes
degradation of signal fidelity and loss of data.
Luminance: Brightness portion of a video signal.
MAM: Media asset management.
Markers: Can be added during editing to indicate important points in the Timeline or in individual clips. Markers are for reference only; they do not alter the video program.
Mask: See Matte. The term mask is usually used in working with still images, while the term
matte is typically used in film and video post-production.
Matte: An image that specifies an area of another image on which to apply transparency, semitransparency, or some other effect.
Matting: The technique of using a matte to specify transparency when superimposing video clips.
Media asset management (MAM): The warehousing of digital media content in such a way that
it can be easily referenced and retrieved using a relational database. Also known as digital asset
management (DAM). Content (images, graphics, animations, video, and audio) is linked to
critical information about that content, known as metadata, which can include creation date, a
description, the equipment (camera or recorder) that recorded the material, timecode, and so
on. Together, the content and the metadata for a single item comprise an asset. One of the most
significant features and benefits of a MAM is that assets can also be linked to other systems, such
as financial databases.
Metadata: In media asset management formats such as AAF, that portion of the data consisting
of ancillary information such as description, source, and time-code, and so forth.
Motion control photography: A system for using computers to precisely control camera
movement so that multiple shots can be made with matching movement. The shots can then be
composited to appear as one shot.
Motion effect: Speeding up, slowing down, or strobing of video.
MPEG: Moving Pictures Expert Group of the International Organization for Standardization
(ISO), which has defined multiple standards for compressing audio and video sequences. Unlike
JPEG, which compresses individual frames, MPEG compression also calculates and encodes the
differences between one frame and its predecessor. MPEG is both a type of compression and a
video format. MPEG-1 was initially designed to deliver near-broadcast-quality video from a
standard-speed CD-ROM. Playback
of MPEG-1 video requires either a software decoder coupled with a high-end computer, or a
hardware decoder. MPEG-2 is the broadcast quality video found on DVDs.
MXF: Material eXchange Format, a wrapper technology designed to facilitate asset interchange
between different multimedia and post-production software applications. Like AAF (of which
it can be an object subset), MXF is an open standard for the exchange of content (actual images,
video and audio clips, and so on) and its associated metadata across platforms and between
applications. MXF was designed for less complex metadata applications than AAF. While AAF
can contain the actual content or only a link to it, MXF always contains the actual content along
with the metadata. The primary objective of MXF is the streamlined exchange of the content with
its associated metadata. MXF files may be used as a source for AAF. With its greater emphasis
on actual content exchange, MXF is better optimized than AAF for real-time streaming of video
and audio assets, making it an excellent solution for such applications as broadcast news editing.
NLE: Nonlinear editing.
Noise: Unwanted distortions, usually caused by interference, that degrade the pure audio or
video signal representing the original sounds and images recorded.
Nonlinear editing (NLE): Random-access editing of video and audio on a computer, allowing for
edits to be processed and reprocessed at any point in the timeline, and at any point in the editing
process. Traditional videotape editing is linear, requiring that video be edited sequentially, from
beginning to end.
NTSC: National Television System Committee, the standard for color television transmission
used in the United States, Japan, and elsewhere. NTSC incorporates an interlaced display at 29.97
frames per second.
Offline editing: The practice of editing a preliminary version of a program using low-quality clips. The offline version then guides online editing, which produces the final version for distribution using high-quality clips.
OMF or OMFI: Open Media Framework or Open Media Framework Interchange format, a media
and metadata exchange solution introduced prior to AAF. It was not broadly adopted. However,
as the industry transitions to the more widely accepted AAF standard, more applications and
utilities are including support for OMF interchange.
Online editing: The practice of editing that results in a final product for distribution.
Out point: The point in a source clip at which the material used in a video program ends.
PAL: Phase-Alternating Line, the television standard used in most European and South American countries. PAL uses an interlaced display at 25 frames per second.
Panning: Sweeping a camera horizontally across a scene as it is being shot; vertical movement is
called tilting. Also, shifting stereo sound between the left and right channels.
Phosphor: A luminescent substance, used to coat the inside of a television or computer display,
that is illuminated by an electron gun in a pattern of graphical images as the display is scanned.
Pixel: Picture element, the smallest computer display element, represented as a point with a
specified color and intensity level. One way to measure image resolution is by the number of
pixels used to create the image.
Poster frame: A single frame of a video clip used as an icon to represent and identify that clip in
parts of the Adobe Premiere Pro interface.
Post-production: The phase of a film or video project that involves editing and assembling footage, and adding effects, graphics, titles, and sound.
Preproduction: The planning phase of a film or video project, usually completed prior to commencing production.
Previsualization: A method of communicating a project concept by creating storyboards or
rough animations.
Print to tape: A command for exporting a digital video file for recording onto videotape.
Production: The phase of a film or video project that includes shooting or recording raw footage.
Program monitor: Window on the Adobe Premiere Pro interface that displays the edited program.
Progressive display: A method for displaying sequential images, such as the frames comprising
film or video, whereby the entire image is shown at once; contrast with interlacing.
Project: File with all information pertaining to a job, including settings and source material.
Prosumer: Defines a market segment for video equipment and software, comprising serious hobbyists and those whose primary profession is not video production.
Pulldown: Technique used during the telecine process in which the 24 fps rate of film is converted
to a video frame rate: 29.97 fps for NTSC; 25 fps for PAL and SECAM.
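For NTSC, the common telecine technique is 3:2 (2:3) pulldown, which maps 4 film frames onto 10 interlaced fields, that is, 5 video frames. A sketch of the cadence (the frame labels are illustrative):

```python
film_frames = ["A", "B", "C", "D"]  # 4 consecutive film frames (24 fps)
cadence = [2, 3, 2, 3]              # fields generated from each film frame

fields = []
for frame, n in zip(film_frames, cadence):
    fields.extend([frame] * n)

print(fields)       # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields))  # 10 fields = 5 video frames per 4 film frames
```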
QuickTime: A multiplatform, industry-standard, multimedia software architecture developed by
Apple and used by software developers, hardware manufacturers, and content creators to author
and publish synchronized graphics, sound, video, text, music, VR, and 3D media.
RAID: Redundant array of independent disks, a digital data storage subsystem composed of multiple hard disks that are handled as a single volume in a computer.
RCA connector: A connector typically used for cabling in both audio and video applications.
RealMedia: Format designed specifically for the web by RealNetworks, featuring streaming and
low data-rate compression options; works with or without a RealMedia server.
Real-time: In an NLE, refers to the processing of effects and transitions, so that playback of an
edit is continuous and there is no wait for rendering or processing.
Rendering: The processing of digital media into a final form.
Resolution: The amount of information in each frame of video, normally represented for digital
displays by the number of horizontal pixels times the number of vertical pixels (such as 720 x 480);
for television, by the number of vertical scan lines (for example, 525 for NTSC). All other things
being equal, a higher resolution will result in a better quality image.
RGB: Red, green, blue, a way of describing the color of a pixel using the three primary colors (in
the additive color system).
Ripple edit: Automatic forward or backward movement of program material in relationship to an
inserted or extracted clip, or to a change in the duration of a clip.
Rolling edit: An edit that moves the cut point between two adjacent clips, trimming one clip's
Out point and the other clip's In point simultaneously so that overall program duration is unchanged.
Rolling title: Text that moves vertically up or down across the screen.
Rotoscoping: Painting on individual frames over a series of frames to create an animation or to
remove unwanted details in film or video footage.
Rough cut: A preliminary version of a video edit, often assembled from lower quality clips than
those used for the final cut. Rough cuts are created to communicate an editorial concept, or
provide a guide for the final edit.
Sample rate: In digital audio, the number of times per second the amplitude of the analog waveform is
measured and converted to a binary number; the higher the number, the better the sound quality.
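A minimal sketch of sampling: measuring a 440 Hz sine wave's amplitude at a 48 kHz sample rate (both values illustrative; 48 kHz is a common rate for video audio):

```python
import math

sample_rate = 48000  # samples per second
freq = 440.0         # tone frequency in Hz (the A above middle C)
duration = 0.01      # seconds of audio to sample

# One amplitude measurement per sample period.
samples = [math.sin(2 * math.pi * freq * n / sample_rate)
           for n in range(int(sample_rate * duration))]
print(len(samples))  # 480 samples for 10 ms of audio
```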
SAN: Storage area network, a data storage subsystem that can provide terabytes of capacity and
be simultaneously accessed by multiple users. A SAN may be JBOD (just a bunch of disks) or
composed of multiple RAIDs.
Scrubbing: Variable-rate backward or forward movement through audio or video material using
a mouse, keyboard, or other device.
SECAM: Similar to PAL at 25 fps, the SECAM analog broadcast television standard is used in
France, the Middle East, and Africa. In countries employing the SECAM standard, PAL format
cameras and decks are used.
SDI: Serial Digital Interface, a professional digital video connection format with a 270 Mbps
transfer rate. SDI uses standard 75-ohm BNC connectors and coaxial cable.
Six-point edit: See Split edit.
Slide edit: An edit that adjusts the previous clip’s Out point and the next clip’s In point without
affecting the clip being slid or the overall program duration.
Slip edit: An edit that adjusts the In and Out points of a clip without affecting the adjacent clips
or affecting overall program duration.
Spatial compression: See Intraframe compression.
Speed: The playback rate of a video or audio clip compared to the rate at which it was recorded.
Split edit: A technique resulting in a clip’s video and audio beginning or ending at different
times. Also see L-cut and J-cut.
Storyboard: A series of sketches or still images outlining material to be shot on film or video, or
indicating a sequence of clips to be edited together.
Streaming: Process of sending digital media over the Web or other network, allowing playback on the
desktop as the video is received, rather than requiring that the file be downloaded prior to playback.
Superimposition: A composite, or layered image involving transparency; see also compositing.
S-Video: Super-Video, a technology for transmitting analog video signals over a cable by dividing
the video information into two separate signals: one for luminance and the other for chrominance.
(S-Video is synonymous with Y/C video.)
Telecine: Refers to the combination of process, equipment, and software used to acquire and
convert film to video.
Temporal compression: See interframe compression.
Three-point edit: An edit in which a clip is inserted into a Timeline using three of the four In and
Out points. The fourth point is automatically calculated by Adobe Premiere Pro.
Timecode: Time reference that identifies each video frame on a tape, used to locate video segments and implement frame-accurate tape-to-tape editing. When video is captured digitally, the
timecode is transferred to the computer. Though timecode is not necessary for frame-accurate
editing on a computer, it can be used to build batch capture lists and locate source footage.
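A sketch of converting a frame count into a non-drop-frame timecode string (HH:MM:SS:FF) at a nominal 30 fps. Drop-frame timecode, which keeps NTSC's 29.97 fps counts clock-accurate, is more involved and omitted here.

```python
def to_timecode(frame_count, fps=30):
    # Non-drop-frame: simple integer arithmetic on the frame count.
    ff = frame_count % fps
    total_seconds = frame_count // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(to_timecode(107892))  # 00:59:56:12
```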
Timecode log: See Batch list.
Timeline: On an NLE interface, the graphical representation of program length onto which
video, audio, and graphics clips are arranged.
Titler: See Character generator.
Track: In the Adobe Premiere Pro Timeline panel, a horizontal row on which clips are arranged.
Tracks are similar to the layers found in many other Adobe applications. When clips are placed
one above another, both clips play back simultaneously. The Video 1 track is the main video editing track; all tracks above Video 1 are for superimposing clips over the Video 1 track; all tracks
below Video 1 are for audio.
Transcoding: Converting a file from one file format into another; that is, reencoding the data.
Transition: A change in video from one clip to another. Often these visual changes involve effects
where elements of one clip are blended with another.
Transparency: Percentage of opacity of a video clip or element.
Trimming: May refer to setting the In and Out points of a clip (usually with handles) or
to actually removing unwanted portions of clips.
Uncompressed: Raw digitized video displayed or stored in its native size.
Vertical resolution: The number of horizontal scan lines (counting from top to bottom)
that the electron beam draws across a television screen to form the picture.
Video capture card (or board): Installed inside a computer, adds the functionality
needed to digitize analog video for use by the computer. Using a hardware or software
codec, the capture card may also compress video as it is captured and decompress video
as it is played or transferred back to a videotape.
Voice over: A voice, such as a narrator, coming from off camera.
Widescreen: Any aspect ratio for film and video wider than the standard 4:3 format;
previously used to refer to wide-aspect film formats; now typically used to refer to the
16:9 format, which is the standard aspect ratio for HDTV.
XLR connector: A connector with three conductors used in professional audio applications, typically with a balanced signal.
Y/C video: A video signal in which the chrominance and luminance are physically separated to provide superior images (synonymous with S-Video).
YCrCb: A video signal comprised of three components: luminance (Y) and two chrominance (Cr and Cb).
YUV: Another term for YCrCb.
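The luminance/chrominance split can be sketched with the full-range Rec. 601 (JPEG-style) conversion weights; broadcast video uses scaled "studio range" variants of these equations, so this is illustrative rather than a broadcast-accurate implementation.

```python
def rgb_to_ycbcr(r, g, b):
    # Full-range Rec. 601 weights: Y carries brightness,
    # Cb and Cr carry the color difference signals, centered on 128.
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

print(rgb_to_ycbcr(255, 255, 255))  # pure white: Y ~ 255, Cb ~ 128, Cr ~ 128
```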
Zooming: Enlarging or decreasing the apparent size of the subject within the frame by
either optical or digital means.
Adobe Systems Incorporated • 345 Park Avenue, San Jose, CA
95110-2704 USA •
Adobe, the Adobe logo, Adobe Audition, Adobe Encore, Adobe
Premiere, After Effects, Flash, Illustrator, and Photoshop are registered trademarks or trademarks of Adobe Systems Incorporated
in the United States and/or other countries. Apple and Mac are
trademarks of Apple Computer, Inc., registered in the United
States and other countries. Microsoft, Windows, and Windows
Media are either registered trademarks or trademarks of Microsoft
Corporation in the United States and/or other countries. All other
trademarks are the property of their respective owners.
© 2006 Adobe Systems Incorporated. All rights reserved.