Tektronix: Glossary Video Terms and Acronyms

This Glossary of Video Terms and Acronyms is a compilation of material gathered over time from numerous sources. It is provided "as-is" and in good faith, without any warranty as to the accuracy or currency
of any definition or other information contained herein. Please contact Tektronix if you believe that any of
the included material violates any proprietary rights of other parties.
Video Terms and Acronyms
0V – The reference point of vertical (field) sync. In both NTSC and PAL
systems the normal sync pulse for a horizontal line is 4.7 µs. Vertical sync
is identified by broad pulses, which are serrated in order for a receiver to
maintain horizontal sync even during the vertical sync interval. The start
of the first broad pulse identifies the field sync datum, 0V.
1/4” Phone – A connector used in audio production that is characterized
by its single shaft with locking tip.
1/8th Mini – A small audio connector used frequently in consumer equipment.
1:1 – Either a perfectly square (9:9) aspect ratio or the field:frame ratio
of progressive scanning.
0H – The reference point of horizontal sync. Synchronization at a video
interface is achieved by associating a line sync datum, 0H, with every
scan line. In analog video, sync is conveyed by voltage levels “blacker-than-black”. 0H is defined by the 50% point of the leading (or falling)
edge of sync. In component digital video, sync is conveyed using digital
codes 0 and 255 outside the range of the picture information.
125M – See SMPTE 125M.
100 Field Per Second – Field rate of some European proposals for a
world standard for ATV (Advanced Television).
1410 NTSC Test Signal Generator – Discontinued analog circuit-based Tektronix test signal generator used to generate full-field composite analog test signals. It has been replaced by the Tektronix TSG-170A.
100% Amplitude, 100% Saturation – Common reference for 100/7.5/100/7.5 NTSC color bars.
100/0/75/7.5 – Short form for color bar signal levels, usually describing four amplitude levels:
1st number: white amplitude
2nd number: black amplitude
3rd number: white amplitude from which color bars are derived
4th number: black amplitude from which color bars are derived
In this example: 75% color bars with 7.5% setup in which the white bar has been set to 100% and the black to 0%.
1450 Demodulator – Tektronix high quality demodulator that provides envelope and synchronous demodulation.
1480 Waveform Monitor – Discontinued Tektronix waveform monitor. It has been replaced by the 1780R.
16 QAM (16 Quadrature Amplitude Modulation) – A modulation scheme in which four bits are carried per symbol by 16 combinations of carrier amplitude and phase.
1780R Waveform Monitor/Vectorscope – Tektronix microprocessor-controlled combination waveform monitor and vectorscope.
1080i – 1080 lines of interlaced video (540 lines per field). Usually refers
to 1920 x 1080 resolution in 1.78 aspect ratio.
1080p – 1080 lines of progressive video (1080 lines per frame). Usually
refers to 1920 x 1080 resolution in 1.78 aspect ratio.
12.5T Sine-Squared Pulse with 3.579545 MHz Modulation –
Conventional chrominance-to-luminance gain and delay measurements
are based on analysis of the baseline of a modulated 12.5T pulse. This
pulse is made up of a sine-squared luminance pulse and a chrominance
packet with a sine-squared envelope as shown in the figure below. This
waveform has many advantages. First, it allows for the evaluation of both
gain and delay differences with a single signal. It also eliminates the
need to separately establish a low-frequency amplitude reference with
a white bar. Since a low-frequency reference pulse is present along
with the high-frequency information, the amplitude of the pulse itself
can be normalized. The HAD of 12.5T was chosen in order to occupy
the chrominance bandwidth of NTSC as fully as possible and to produce
a pulse with sufficient sensitivity to delay distortion.
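The construction described above can be illustrated numerically. This is a rough, hypothetical sketch (not a Tektronix reference implementation), assuming the NTSC unit interval T = 125 ns and the 3.579545 MHz subcarrier:

```python
import math

# Sketch of a modulated 12.5T pulse: a sine-squared luminance pulse plus a
# chrominance packet whose envelope is the same sine-squared shape.
# For NTSC, T = 125 ns, so HAD (half-amplitude duration) = 12.5 * T.
T = 125e-9
HAD = 12.5 * T                     # 1.5625 us
DURATION = 2 * HAD                 # a sine-squared pulse spans twice its HAD
FSC = 3_579_545.0                  # NTSC color subcarrier, Hz

def modulated_pulse(t):
    """Normalized amplitude of the modulated 12.5T pulse at time t (seconds)."""
    if not 0 <= t <= DURATION:
        return 0.0
    env = math.sin(math.pi * t / DURATION) ** 2   # sine-squared envelope
    luma = 0.5 * env                               # low-frequency reference
    chroma = 0.5 * env * math.cos(2 * math.pi * FSC * (t - HAD))
    return luma + chroma

print(f"peak amplitude at t = HAD: {modulated_pulse(HAD):.3f}")
```

Because the low-frequency (luminance) half and the subcarrier packet share one envelope, gain or delay differences between the two paths show up directly as baseline distortion of this pulse.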
16 VSB – Vestigial sideband modulation with 16 discrete amplitude levels.
16 x 9 – A widescreen television format in which the aspect ratio of the
screen is 16 units wide by 9 high as opposed to the 4 x 3 of normal TV.
1910 Digital Generator/Inserter – Tektronix VITS test signal generator.
1-H – Horizontal scan line interval, usually 64 µs for PAL or 63.5 µs
for NTSC.
2:1 – Either an aspect ratio twice as wide as it is high (18:9) or the
field:frame ratio of interlaced scanning.
2:2 Pull-Down – The process of transferring 24-frames/sec film format
into video by repeating each frame as two video fields.
2:3 Pull-Down – See Pull-Down.
2-1/2D (Two and One-Half Dimensions) – This term refers to the kind
of dimensionality (i.e., 2D, 3D) that can be created using multiplane animation. Since a layer in such animation can lie in front of one cel (or plane),
or in back of another layer, the resulting effect is of a 3 dimensional world.
This is a limited 3D world, however, because the layers are fixed in relation
to each other. For this reason, multiplane animation is referred to as 2-1/2
dimensions. It is a very useful technique, however, even for computer
graphics, because by ordering the layers in the way a painter does, you
can save the computer the need to compare objects that are in different
layers (that is, compare them for purposes of hidden surface removal).
24 Frames Per Second – International standard for motion picture film
shooting and projection, though film shot for television in 625 scanning-line countries is usually shot at 25 frames per second (even if not, it is
transferred to television at 25 frames per second). There are moves afoot
in the U.S. to increase the film frame rate to 30 for improved temporal resolution. The ImageVision HDEP system and other electronic cinematography
systems use 24 frames per second. RCA once proposed an electronic
cinematography system with 2625 scanning lines (2475 active), a 2.33:1
aspect ratio, and a frame rate of 23.976023 frames/sec.
24-Bit Color – Color for which each red, green and blue component
stores 8 bits of information. 24-bit color is capable of representing over 16 million different variations of color.
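For illustration, a minimal sketch of how a 24-bit color packs its three 8-bit components (the R-G-B packing order used here is a common convention, not something mandated by the definition above):

```python
# Pack an 8-bit-per-component RGB triple into a single 24-bit value.
def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b

def unpack_rgb(v):
    return (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF

# 8 bits each for R, G and B gives 2^24 distinct colors.
print(f"distinct colors: {2 ** 24:,}")
print(hex(pack_rgb(255, 128, 0)))
```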
25 Frames Per Second – Frame rate of television in all countries not
conforming to CCIR system M (NTSC). Also the frame rate of film shot for
television in those countries.
25 Hz HDTV Bitstream – A bitstream which contains only Main Profile,
High Level (or simpler) video at 25 Hz or 50 Hz frame rates.
25 Hz HDTV IRD – An IRD (Integrated Receiver Decoder) that is capable
of decoding and displaying pictures based on a nominal video frame rate
of 25 Hz or 50 Hz from MPEG-2 Main Profile, High Level bitstreams, in
addition to providing the functionality of a 25 Hz SDTV IRD.
25 Hz SDTV Bitstream – A bitstream which contains only Main Profile,
Main Level video at 25 Hz frame rate.
25 Hz SDTV IRD – An IRD (Integrated Receiver Decoder) which is capable
of decoding and displaying pictures based on a nominal video frame rate of
25 Hz from MPEG-2 Main Profile, Main Level bitstreams.
29.97 Frames Per Second – Frame rate of NTSC color television,
changed from 30 so that the color subcarrier could be frequency-interleaved with both the horizontal line frequency and the sound carrier.
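The resulting rate can be checked numerically. The sketch below assumes the standard NTSC relationships: the line rate is the 4.5 MHz sound-carrier spacing divided by 286, the subcarrier is 455/2 times the line rate, and there are 525 lines per frame:

```python
# NTSC color frequency relationships (numerical check).
SOUND_CARRIER_HZ = 4_500_000           # intercarrier sound offset

line_rate = SOUND_CARRIER_HZ / 286     # horizontal line frequency (~15734.27 Hz)
subcarrier = line_rate * 455 / 2       # color subcarrier (~3.579545 MHz)
frame_rate = line_rate / 525           # frames per second (~29.97)

print(f"line rate  : {line_rate:.2f} Hz")
print(f"subcarrier : {subcarrier:.0f} Hz")
print(f"frame rate : {frame_rate:.5f} fps")
```

Note that 29.97 is shorthand: the exact rate works out to 30000/1001 frames per second.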
2K – A film image scanned into a computer file at a resolution of 2048
horizontal pixels per line.
2T Pulse – See the discussion on Sine-Squared Pulses.
3.579545 MHz – This is the frequency of the NTSC color subcarrier.
3:2 Pull-Down – a) The technique used to convert 24 frames per second
film to 30 frames per second video. Every other film frame is held for 3
video fields resulting in a sequence of 3 fields, 2 fields, 3 fields, 2 fields,
etc. b) A frame cadence found in video that has been telecined or converted from film to video. This cadence is produced because the frame rates
for film and video are different. During the process of compression, some
compression hardware recognizes this cadence and can further compress
video because of it. Material which is video to start with gains no extra
compression advantage. Material edited after being telecined may not gain
a compression advantage.
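A minimal sketch of the cadence described in a): alternating film frames become three fields, then two:

```python
# Sketch of 3:2 pull-down: map 24 fps film frames onto 60 field/s video.
def three_two_pulldown(frames):
    fields = []
    for i, frame in enumerate(frames):
        repeat = 3 if i % 2 == 0 else 2   # alternate 3 fields, 2 fields
        fields.extend([frame] * repeat)
    return fields

# 4 film frames -> 10 video fields (5 interlaced frames), so 24 film
# frames per second fill 60 fields (30 frames) per second of video.
print(three_two_pulldown(["A", "B", "C", "D"]))
```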
30 Frames Per Second – Frame rate of NTSC prior to color. Frame rate
of the ATSC/SMPTE HDEP standard. A potential new film standard.
30 Hz HDTV Bitstream – A bitstream which contains only Main Profile,
High Level (or simpler) video at 24000/1001, 24, 30000/1001, 30, 60000/1001 or 60 Hz frame rates.
30 Hz HDTV IRD – An IRD (Integrated Receiver Decoder) that is capable
of decoding and displaying pictures based on nominal video frame rates of
24000/1001, 24, 30000/1001, 30, 60000/1001 or 60 Hz from MPEG-2 Main
Profile, High Level bitstreams, in addition to providing the functionality of a 30 Hz SDTV IRD.
30 Hz SDTV Bitstream – A bitstream which contains only Main Profile,
Main Level video at 24000/1001, 24, 30000/1001 or 30 Hz frame rate.
30 Hz SDTV IRD – An IRD (Integrated Receiver Decoder) which is capable
of decoding and displaying pictures based on a nominal video frame rate of
24000/1001 (approximately 23.98), 24, 30000/1001 (approximately 29.97)
or 30 Hz from MPEG-2 Main Profile at Main Level bitstreams.
3D (Three Dimensional) – Either as in stereoscopic television (NHK has
suggested alternating 3DTV transmissions with HDTV), or more often, when
referring to ATV, relating to the three dimensions of the spatio-temporal
spectrum: horizontal, vertical, and time.
3D Axis (Menu) – The 3D function that moves the image away from the
center of rotation. The image can be moved along, or off, any of the three axes.
3D Space – Three dimensional space is easily imagined by looking at a
corner of a rectangular room. The corner is called the origin. Each edge
leaving from the origin (there are three of them) is called an axis. Each
axis extends infinitely in two directions (up/down, left/right, and front/back).
Imagine laying long measuring sticks on each axis. These are used to
locate specific points in space. On the Cubicomp, or any other graphics
systems, the yardsticks are not infinitely long, and 3D space on these
devices is not infinite; it is more like an aquarium.
3-Perf – A concept for saving money on film stock by shooting each 35
mm frame in an area covered by three perforations rather than four. The
savings is more than enough to compensate for switching from 24 frames
per second to 30. Three-perf naturally accommodates a 1.78:1 (16:9)
aspect ratio and can be easily masked to the 1.85:1 common in U.S. movie
theaters. It changes the shoot-and-protect concept of using theatrical film
on television, however, from one in which the protected area is extended
vertically to one in which the shooting area is reduced horizontally.
3XNTSC – A Zenith proposal for an HDEP scheme that would use three times as many scanning lines as NTSC (1575), but would otherwise retain NTSC characteristics. It is said to allow easy standards conversion to 525- or 625-scanning line systems and to accept material shot in 1125 scanning lines in a 16:9 aspect ratio without difficulty. 3XNTSC would have 1449 active scanning lines, 2:1 interlace, a 4:3 aspect ratio, and a bandwidth of 37.8 MHz.
4:1:1 – 4:1:1 indicates that Y’ has been sampled at 13.5 MHz, while Cb
and Cr were each sampled at 3.375 MHz. Thus, for every four samples of
Y’, there is one sample each of Cb and Cr.
4:2:0 – a) A sampling system used to digitize the luminance and color
difference components (Y, R-Y, B-Y) of a video signal. The four represents
the 13.5 MHz sampling frequency of Y, while the R-Y and B-Y are sampled
at 6.75 MHz – effectively between every other line only. b) The component
digital video format used by DVD, where there is one Cb sample and
one Cr sample for every four Y samples (i.e., 1 pixel in a 2 x 2 grid). 2:1
horizontal downsampling and 2:1 vertical downsampling. Cb and Cr are
sampled on every other line, in between the scan lines, with one set of
chroma samples for each two luma samples on a line. This amounts to a
subsampling of chroma by a factor of two compared to luma (and by a
factor of four for a single Cb or Cr component).
4:2:0 Macroblock – A 4:2:0 macroblock has four 8 x 8 blocks of luminance (Y) and two 8 x 8 blocks of chrominance (one block of Cb and one block of Cr).
4:2:2 – a) A commonly used term for a component digital video format.
The details of the format are specified in the ITU-R BT.601 standard
document. The numerals 4:2:2 denote the ratio of the sampling frequencies of the single luminance channel to the two color difference channels.
For every four luminance samples, there are two samples of each color
difference channel. b) ITU-R BT.601 digital component waveform sampling
standard where the luminance signal is sampled at the rate of 13.5 MHz,
and each of the color difference signals, (Cr and Cb) are sampled at the
rate of 6.75 MHz each. This results in four samples of the luminance signal
for each two samples of the color difference signals. See ITU-R BT.601-2.
(Figure: 4:2:2 multiplexed sample sequence – 10-bit samples in the repeating order Y, Cr, Y, Cb.)
4:2:2 Profile at Main Level – An MPEG-2 profile that serves the needs of video contribution applications. Features include high chrominance resolution.
4:2:2:4 – Same as 4:2:2 with the addition of a key channel sampled at the
same frequency as the luminance.
4:2:2p (Professional Profile) – 4:2:2p refers to a higher quality, higher
bitrate encoding designed for professional video usage. It allows multiple
encodings/decodings before transmission or distribution.
4:3 – The aspect ratio of conventional video, television and computer displays.
4:4:4 – A sampling ratio that has equal amounts of the luminance and
both chrominance channels.
4:4:4:4 – Same as 4:2:2 with the addition of a key channel, and all channels are sampled at the same frequency as the luminance.
45 Mbps – Nominal data rate of the third level of the hierarchy of ISDN in
North America. See also DS3.
480i – 480 lines of interlaced video (240 lines per field). Usually refers to
720 x 480 (or 704 x 480) resolution.
4C – The four-company entity: IBM, Intel, Matsushita, Toshiba.
4fsc – Composite digital video as used in D2 and D3 VTRs. Stands for 4
times the frequency of subcarrier, which is the sampling rate used. In NTSC
4FSC is 14.3 MHz and in PAL it is 17.7 MHz.
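As a quick check of the quoted rates (the subcarrier values used here are the nominal NTSC and PAL frequencies):

```python
# 4fsc sampling rates: four times the color subcarrier frequency.
NTSC_FSC = 3_579_545.0      # NTSC color subcarrier, Hz (~3.58 MHz)
PAL_FSC = 4_433_618.75      # PAL color subcarrier, Hz (~4.43 MHz)

print(f"NTSC 4fsc: {4 * NTSC_FSC / 1e6:.4f} MHz")   # ~14.3 MHz
print(f"PAL  4fsc: {4 * PAL_FSC / 1e6:.4f} MHz")    # ~17.7 MHz
```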
4K – A film image scanned into a computer file at a resolution of 4096
horizontal pixels per line. 4K is considered to be a full-resolution scan of
35 mm film.
5.1 Channel Audio – An arrangement of five audio channels (left, center,
right, left-surround and right-surround) and one subwoofer channel.
50 Fields Per Second – Field rate of 25 frame-per-second interlaced television.
520A Vectorscope – Discontinued Tektronix vectorscope. It has been
replaced by the 1780R.
525/60 – Another expression for NTSC television standard using 525
lines/frame and 60 fields/sec.
59.94 Fields Per Second – Field rate of NTSC color television.
5C – The five-company entity: IBM, Intel, Matsushita, Toshiba, Sony.
60 Fields Per Second – Field rate of the ATSC/SMPTE HDEP standard.
60 Frames Per Second – Frame rate of Showscan and some progressively scanned ATV schemes.
601 – See ITU-R BT.601-2.
625/50 – Another expression for PAL television standard using 625
lines/frame and 50 fields/sec.
720p – 720 lines of progressive video (720 lines per frame). Higher
definition than standard DVD (480i or 480p). 720p60 refers to 60 frames
per second; 720p30 refers to 30 frames per second; and 720p24 refers
to 24 frames per second (film source). Usually refers to 1280 x 720
resolution in 1.78 aspect ratio.
75% Amplitude, 100% Saturation – Common reference for
75/7.5/75/7.5 NTSC/EIA color bars.
75%/100% Bars – See Vectorscope.
8 mm – A compact videocassette record/playback tape format which uses
eight millimeter wide magnetic tape. A worldwide standard established in
1983 allowing high quality video and audio recording. Flexibility, lightweight
cameras and reduced tape storage requirements are among the format’s advantages.
8 PSK (8 Phase Shift Keying) – A variant of QPSK used for satellite links
to provide greater data capacity under low-noise conditions.
8 VSB – Vestigial sideband modulation with 8 discrete amplitude levels,
used in the ATSC digital television transmission standard.
8/16 Modulation – The form of modulation block code used by DVD to
store channel data on the disc. See Modulation.
480p – 480 lines of progressive video (480 lines per frame). 480p60
refers to 60 frames per second; 480p30 refers to 30 frames per second;
and 480p24 refers to 24 frames per second (film source). Usually refers to
720 x 480 (or 704 x 480) resolution.
A – Abbreviation for Advanced.
A and B Cutting – A method of assembling original material in two separate rolls, allowing optical effects to be made by double printing.
A and B Rolls, Tape – Separation of material into two groups of reels (A
rolls and B rolls), with alternate scenes on each reel pair (A reel and B reel)
to allow transitions between reels.
AAC (Advanced Audio Coding) – Part 7 of the MPEG-2 standard. It is a
multichannel coding standard that defines the highest quality multichannel
audio known today. It also has modes that perform extremely well for
audio, speech and music at <16 kbps.
A Bus Keyer – A keyer that appears only on top of an “A” bus background
video on an M/E.
AAF (Advanced Authoring Format) – Used to describe the standardized
metadata definitions that are used to exchange metadata between creative
content workstations. This metadata format can contain much more
information than the description implies. Nevertheless, this open standard
“format” has been created primarily for post-production use. It is worth
noting that the definition of AAF does provide for essence exchange as
well as metadata exchange.
A/A (A/X/A) Roll Editing – Editing from a single source using effects to
transition from the source to itself (source “A” to “A”) using a picture freeze
at the end of one scene to transition the start of the next scene.
AAL (ATM Adaption or Adaptation Layer) – The layer of the ATM protocol stack that maps large data packets into ATM cells; it is defined by segmentation and reassembly protocols.
A/B Roll – a) Creating fades, wipes and other transitions from one video
source to another. b) Typically, A/B roll is an editing technique where
scenes or sounds on two source reels (called Roll A and Roll B) are played
simultaneously to create dissolves, wipes and other effects. On nonlinear
editing systems, A/B roll refers to using two source streams (.avi,.wav,.tga
and so on) to create an effect.
AAL5 (ATM Adaption or Adaptation Layer 5) – Connection-oriented, Unspecified Bit Rate (UBR). Provides the least amount of error checking and overhead.
A Bus – The top row of the two rows of video source select buttons associated with a given M/E.
A/B Roll Editing – Editing from two source VCRs (“A” and “B”) to a third
(recording) VCR. Typically a switcher or mixer, such as the Digital Video
Mixer, is used to provide transition effects between sources. Control over
the machines and process can be done manually or automatically using an
edit controller.
A/B Roll Linear Editing – Recording edits from two video sources, such
as two VCRs to a third, to achieve transition effects. See also, B-Roll.
A/D – See A-to-D Converter.
A/V (Audio/Video) – Frequently used as a generic term for the audio and video components and capabilities in home entertainment systems and in related product descriptions and reviews.
A/V Drive (Audio/Video Drive) – A high-end hard drive capable of
storing high-bandwidth (i.e., high data rate) audio/video data.
A/V Edit – An edit that records new audio and video tracks. Also called
Straight Cut.
A/V Mixer – See Audio/Video Mixer.
A:B:C Notation – The a:b:c notation for sampling ratios, as found in the
ITU-R BT.601 specifications, has the following meaning: a) 4:2:2 means
2:1 horizontal downsampling, no vertical downsampling. Think 4 Y samples
for every 2 Cb and 2 Cr samples in a scan line. b) 4:1:1 ought to mean
4:1 horizontal downsampling, no vertical. Think 4 Y samples for every 1 Cb
and 1 Cr sample in a scan line. It is often misused to mean the same as 4:2:0. c) 4:2:0 means 2:1 horizontal and 2:1 vertical downsampling. Think 4 Y samples for every 1 Cb and 1 Cr sample in a 2 x 2 block. Not only is this notation not internally consistent, but it is incapable of being extended to represent any unusual sampling ratios, that is, different ratios for the Cb and Cr channels.
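The three cases above can be captured in a small sketch. The decoding convention used here (divide a by b for the horizontal factor; c = 0 flags vertical subsampling) is a simplification that matches this entry's reading, not a formal rule:

```python
# Derive chroma downsampling factors from a:b:c notation (simplified reading).
def subsampling_factors(a, b, c):
    horiz = a // b                 # horizontal downsampling of chroma
    vert = 2 if c == 0 else 1      # c == 0: chroma dropped on alternate lines
    return horiz, vert

for ratio in [(4, 2, 2), (4, 1, 1), (4, 2, 0)]:
    h, v = subsampling_factors(*ratio)
    print(f"{ratio[0]}:{ratio[1]}:{ratio[2]} -> {h}:1 horizontal, {v}:1 vertical")
```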
AAU (Audio Access Unit) – See Access Unit.
A-B Rolls – Duplicate rolls of videotape information having identical time
code; required to achieve effects of dissolves.
ABC – Television network financially supporting development of ACTV and
pioneering the use of digital video transmission.
Aberration – A term from optics that refers to anything affecting the fidelity of the image with respect to the original scene.
ABKW – See Audio Breakaway.
Abort – Halts the program and returns control to the operator or operating system.
Absolute Time Code – Absolute time code (ATC) is generally recorded
in the subcode or control track region of any digital tape. This is the
code that digital tape machines use to locate specific points on a tape
for autolocation or other functions. In some machines it is even used to
synchronize the tape to other equipment. ATC is precisely accurate and
usually conforms to the IEC standard which is easily converted to the more
commercially used SMPTE time code. Unlike SMPTE, ATC always begins
at zero at the beginning of a digital tape and increments one frame at a
time until recording stops. Some DAT machines have the ability to function
without ATC on a tape while others simply will not play a tape without it.
These days almost all machines record it automatically so it will always be
on every tape.
Absorption – In acoustics, the opposite of reflection. Sound waves are
“absorbed” or soaked up by soft materials they encounter. Studio designers
put this fact to work to control the problem of reflections coming back to
the engineer’s ear and interfering with the primary audio coming from the
monitors. The absorptive capabilities of various materials are rated with an
“Absorption Coefficient”.
Absorption Coefficient – a) A measurement of the absorptive characteristics of a material in comparison to air. b) A measure of the relative amount of sound energy absorbed by the material when a sound strikes its surface.
ABU (Asia-Pacific Broadcasting Union) – The Asia-Pacific Broadcasting
Union (ABU) is a professional association of television and radio broadcasters. It has over 100 members in 52 countries. The ABU was established in
1964 to promote the development of broadcasting in the Asia-Pacific
region and to organize cooperative activities amongst its members.
AC Bias – The alternating current, usually of frequency several times higher than the highest signal frequency, that is fed to a record head in addition to the signal current. AC bias serves to linearize the recording process
and is universally used in analog recording. Generally, a large AC bias is
necessary to achieve maximum long wavelength output and linearity, but a
lower value of bias is required to obtain maximum short-wavelength output.
The mechanism of AC bias can best be explained in terms of anhysteresis.
AC Coefficient – Any discrete cosine transform (DCT) coefficient for which
the frequency in one or both dimensions is non-zero.
AC Coupled – a) AC coupling is a method of inputting a video signal to a
circuit to remove any DC offset, or the overall voltage level that the video
signal “rides” on. One way to find the signal is to remove the DC offset by
AC coupling, and then do DC restoration to add a known DC offset (one
that we selected). Another reason AC coupling is important is that it can
remove harmful DC offsets. b) A connection that removes the constant
voltage (DC component) on which the signal (AC component) is riding.
Implemented by passing the signal through a capacitor.
AC Erasure – See Erasure.
AC’97, AC’98 – These are definitions by Intel for the audio I/O implementation for PCs. Two chips are defined: an analog audio I/O chip and a digital controller chip. The digital chip will eventually be replaced by a software
solution. The goal is to increase the audio performance of PCs and lower cost.
AC-3 – Audio Coding algorithm number 3. An audio-coding technique used
with ATSC. The audio compression scheme invented by Dolby Laboratories
and specified for the ATSC Digital Television Standard. In the world of consumer equipment it is called Dolby Digital.
ACC – See Automatic Color Correction.
Acceleration – Graphic accelerators function like application-specific
microprocessors whose purpose is to work in conjunction with a PC’s host
microprocessor to display graphics. In general, graphic accelerators control
frame memory, color processing, resolution, and display speed. With the
advent of the high-speed local buses and higher clock rates, accelerators
operate on 32-, 64-, and 128-bit pixel data.
Access Channels – Channels set aside by a cable operator for use by
third parties, including the public, educational institutions, local governments, and commercial interests unaffiliated with the operator.
Access Time – a) The time required to receive valid data from a memory
device following a read signal. b) This is the time it takes from when a disk
command is sent, until the disk reaches the data sector requested. Access
time is a combination of latency, seek time, and the time it takes for the
command to be issued. Access time is important in data intensive situations like hard disk recording, multimedia playback, and digital video applications. Lower access times are better. Keeping your drives in good shape
with periodic de-fragging, etc. will ensure that your drive is providing the
fastest access times it can.
Access Unit (AU) – a) The coded data for a picture or block of sound and
any stuffing (null values) that follows it. b) A coded representation of a
presentation unit. In the case of audio, an access unit is the coded representation of an audio frame. In the case of video, an access unit includes
all the coded data for a picture, and any stuffing that follows it, up to but
not including the start of the next access unit. If a picture is not preceded
by a group_start_code or a sequence_header_code, the access unit
begins with a picture_start_code. If a picture is preceded by a
group_start_code and/or a sequence_header_code, the access unit begins
with the first byte of the first of these start codes. If it is the last picture
preceding a sequence_end_code in the bit stream, all bytes between the
last byte of the coded picture and the sequence_end_code (including the
sequence_end_code) belong to the access unit.
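A toy scanner illustrates how these start codes delimit access units. The code values are the MPEG-2 video start codes named above; the bitstream itself is a fabricated example:

```python
# Locate MPEG-2 video start codes: the byte pattern 00 00 01 plus one code byte.
PICTURE_START = 0x00
SEQUENCE_HEADER = 0xB3
SEQUENCE_END = 0xB7
GROUP_START = 0xB8

def find_start_codes(data):
    """Return (offset, code_byte) for every start code in the bitstream."""
    out = []
    i = 0
    while (i := data.find(b"\x00\x00\x01", i)) != -1:
        if i + 3 < len(data):
            out.append((i, data[i + 3]))
        i += 3
    return out

# Fabricated stream: sequence header, stuffing, group start, picture start,
# sequence end -- the boundaries an access-unit parser would key on.
bits = (b"\x00\x00\x01\xb3" + b"\x00" * 4 +
        b"\x00\x00\x01\xb8" + b"\x00\x00\x01\x00" + b"\x00\x00\x01\xb7")
print(find_start_codes(bits))
```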
Access Unit Header (AU Header) – Optional information preceding an
Access Unit Payload. This information consists of decoding and/or presentation time stamps. This information may be defaulted, resulting in an
empty AU header. The format of the AU header is determined in the ES
Academy – Pertaining to specifications that meet the Academy of Motion
Picture Arts and Sciences standards, such as academy leader, academy
format (for film stock), academy countdown, and so forth.
Access Unit Payload (AU Payload) – The data field of an access unit.
ACATS (Advisory Committee on Advanced Television Service) –
A group comprised almost exclusively of presidents, chief executive
officers, and chairs of the boards of major broadcasting, CATV, consumer
electronics, and entertainment production companies. It is currently
supported by a planning subcommittee (with two advisory groups and
six working parties), a systems subcommittee (with four working parties),
and an implementation subcommittee (with two working parties). ACATS
is an entity under the FCC, and is the approving body of advanced TV
in the USA. ACATS recommended the ATSC digital TV system to the FCC
in November 1995.
Accumulator – One or more registers associated with the Arithmetic and
Logic Unit (ALU), which temporarily store sums and other arithmetical and
logical results of the ALU.
Account – See Login Account.
Accuracy – The closeness of the indicated value to the true value.
ACD/ACD – Application Control Data/Application Communication Data
Acicular – Needle-shaped, used to describe the shape of oxide particles.
ACLE (Analog Component Link Equipment) – A form of MAC optimized
for remote broadcasting links.
Acoustic Echo Canceller – Full-duplex audio technology; used for the
elimination of acoustically-coupled return echoes within a teleconference
room. Note that all microphones connected to an AEC are active at all
times. Consequently, as more microphones are added, the total transmitted
noise level (caused by picking up room ambient noise) increases. See also
Tail Time, Echo Suppresser and Echo Return Loss Enhancement.
Active Pixel Region – On a computer display, the area of the screen
used for actual display of pixel information.
Acoustic Shadow – An area in which sound waves are attenuated due to
the presence of an acoustic absorber or reflector in the path of the sound
Active Video – The part of the video waveform that is not specified to be
blanking, burst, or sync information. Most of the active video, if not all of it,
is visible on the display screen.
Acoustic Suspension – A type of speaker design using a sealed cabinet.
Primarily used for low frequency enclosures, acoustic suspension designs
use the air mass within the cabinet as a “spring” to help return the
relatively massive speaker to the rest position. This allows heavier, longer
throw drivers to be used, but results in a less efficient design requiring
more amplifier power.
Active Video Lines – All video lines that are not in the horizontal and
vertical blanking intervals.
ACT (Anti-Comet-Tail) – This is a complex technique of preventing
picture highlights from “comet-tailing” due to lack of beam current in
the camera tube. (The usually colored trail behind a moving, very bright
light/reflection in a picture is called a “comet-tail” since the effect looks
similar to an astronomical comet.) The technique involves a special tube
and circuitry to drive it. Basically, the charge due to a very bright object
is never allowed to build up to an unmanageable level by discharging the
target above a preset level during horizontal retrace time when the ACT
action is turned on, with an increased beam current.
Active Line (PAL) – The part of the video waveform (usually 64 µs) which occupies the visible part of the signal (without sync, blanking or burst). The active line time is usually 52 µs. Also called Active Line Time.
Active Picture Area – The part of a TV picture that contains actual
picture as opposed to sync or other data. Vertically the active picture area
is 487 lines for NTSC and 576 lines for PAL. The inactive area is called blanking.
Active Window – On a PC, the only window that recognizes input (activity) from the keyboard and mouse; only one window is active at a time.
ActiveMovie – Microsoft’s architecture for the control and processing of
streams of multimedia data and software that uses this architecture to play
digital video and sound. It is intended to supersede Video for Windows®.
Activity Detection – Refers to a method built into some multiplexers for
detecting movement within the camera’s field of view (connected to the
multiplexer), which is then used to improve camera recording update rate.
ACTV (Advanced Compatible Television) – Techniques for ATV transmission developed by the DSRC, with support initially from NBC and
RCA/GE Consumer Electronics (now Thomson Consumer Electronics) and
with later support from such organizations as ABC and HBO. There are
two ACTVs. a) ACTV I is a channel-compatible, receiver-compatible system
utilizing many different techniques to add widescreen panels and increase
horizontal and vertical resolution. Among the techniques are the filling
of a Fukinuki hole, time compression, seam-elimination, spatio-temporal
filtering, and quadrature modulation of the picture carrier. The last prevents
direct compatibility with videotape recorders and with ordinary satellite
transmission techniques. b) ACTV II is ACTV I plus an augmentation channel to improve resolution and sound.
Acuity – See Visual Acuity.
Adaptation – Visual process whereby approximate compensation is made
for changes in the luminances and colors of stimuli, especially in the case
of changes in illuminants.
Adaptation Field – Ancillary program data (especially PCR) which are
uncoded and are transmitted at least every 100 ms after the TS header of
a data stream (PID) belonging to a program.
Adaptation Layer Entity (AL Entity) – An instance of an MPEG-4
systems resource that processes AL PDUs associated to a single FlexMux
channel.
Active Line Time – The duration of a scanning line minus that period
devoted to the horizontal blanking interval.
Active Lines – The total number of scanning lines minus those scanning
lines devoted to the vertical blanking interval.
Active Picture – That portion of the ITU-R BT.601 digital picture signal
between the SAV and EAV data words.
Adaptation Layer Protocol Data Unit (AL PDU) – The smallest protocol
unit exchanged between peer AL entities. It consists of AL PDU header
and AL PDU payload. One or more AL PDUs with data from one or more
elementary streams form the payload of a FlexMux PDU.
Adaptation Layer Protocol Data Unit Header (AL PDU Header) –
Optional information preceding the AL PDU payload. It is mainly used
for error detection and framing of the AL PDU payload. The format of the
AL PDU header is determined when opening/configuring the associated
FlexMux channel.
Adaptation Layer Protocol Data Unit Payload (AL PDU Payload) –
The data field of an AL PDU.
Adaptation Layer Service Data Unit (AL-SDU) – An information unit
whose integrity is preserved in transfer from one AL user to the peer AL
user.
ADC – See A-to-D Converter.
Add Edit – An edit added between consecutive frames in a sequence
segment within the timeline. An add edit separates segment sections so
the user can modify or add effects to a subsection of the segment.
Adaptation Layer User (AL User) – A system entity that makes use of
the services of the adaptation layer, typically an elementary stream entity.
Added Calibrator – This is a feature of some waveform monitors which
allows an internal 1 volt calibrator signal to be used as a reference for
amplitude measurements.
Adapter – A device used to achieve compatibility between two items of
audio/video equipment.
Adder – Device that forms, as output, the sum of two or more numbers
presented as inputs.
Adaptive – Changing according to conditions.
Additive – Any material in the coating of magnetic tape other than the
oxide and the binder resins; for example, plasticizers (materials used to
soften an otherwise hard or brittle binder), lubricants (materials used
to lower the coefficient of friction of an otherwise high-friction binder),
fungicides (materials used to prevent fungus growth), dispersants (to
uniformly distribute the oxide particles) or dyes.
Adaptive Bit Allocation – The allocation of more bits to image areas of
high activity, a technique that does not lend itself to all types of video
compression, especially when interframe sampling is used.
Adaptive Compression – Data compression software that continually
analyzes the data and adapts its algorithm, depending on the type and
content of the data and the storage medium.
Adaptive Differential Pulse Code Modulation – a) A compression technique that encodes the predictive residual instead of the original waveform
signal so that the compression efficiency is improved by a predictive gain.
Rather than transmitting PCM samples directly, the difference between the
estimate of the next sample and the actual sample is transmitted. This
difference is usually small and can thus be encoded in fewer bits than the
sample itself. b) Differential pulse code modulation that also uses adaptive
quantizing; an audio coding algorithm which provides a modest degree of
compression together with good quality. c) A technique for compressing
the transmission requirements of digital signals. ADPCM has been used by
ABC between New York and Washington to allow NTSC transmission on a
45 Mbps (DS3) telephone company data transmission circuit. d) A pulse
code modulation system typically operating at a high sampling rate whereby coding is based on a prior knowledge of the signal to be processed (i.e.,
greater than, equal to, or less than the previous sample). The system is
adaptive in that digital bits of code signify different sizes of signal change
depending on the magnitude of the signal.
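A minimal sketch of the idea in definitions (a) and (d): transmit a small quantized code for the difference between a prediction and the actual sample, with a step size that adapts to the magnitude of recent changes. The 4-bit code range and the adaptation rule below are invented for illustration; this is not any standardized ADPCM variant.

```python
def adpcm_encode(samples):
    """Encode each sample as a 4-bit quantized difference from a prediction."""
    codes, predicted, step = [], 0, 4
    for s in samples:
        diff = s - predicted                        # predictive residual
        code = max(-8, min(7, round(diff / step)))  # quantize to 4 bits
        codes.append(code)
        predicted += code * step                    # track the decoder's state
        # adapt: widen the step after large codes, narrow it after small ones
        step = max(1, step * 2 if abs(code) >= 6 else
                      step // 2 if abs(code) <= 1 else step)
    return codes

def adpcm_decode(codes):
    """Rebuild samples by accumulating the quantized differences."""
    out, predicted, step = [], 0, 4
    for code in codes:
        predicted += code * step
        out.append(predicted)
        step = max(1, step * 2 if abs(code) >= 6 else
                      step // 2 if abs(code) <= 1 else step)
    return out
```

Because both sides apply the same adaptation rule, the decoder stays in lock-step with the encoder without any side information.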
Adaptive Emphasis – An ATV technique for improving detail of dark parts
of the picture by increasing their level. If a complementary de-emphasis
is performed at the receiver, noise can be reduced. Dolby B noise reduction
(the form of Dolby noise reduction most common in consumer cassette
recorders) is a classic example of complementary adaptive emphasis.
Adaptive Filter – A filter which changes its parameters on a continual
basis to guarantee a constant or desired output value.
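As an illustration, one widely used adaptive filter is the LMS (least mean squares) algorithm, which nudges its tap weights after every sample so that the output tracks a desired value. A sketch (the step size mu and the example values are arbitrary choices, not from the source):

```python
def lms_step(weights, inputs, desired, mu=0.1):
    """One LMS update: y = w . x, then w <- w + mu * (desired - y) * x."""
    y = sum(w * x for w, x in zip(weights, inputs))
    error = desired - y
    new_weights = [w + mu * error * x for w, x in zip(weights, inputs)]
    return new_weights, y
```

With a step size small enough relative to the input signal power, repeated updates drive the output error toward zero.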
Adaptive Multichannel Prediction – Multichannel data reduction exploiting statistical inter-channel dependencies in audio.
Adaptive Noise Allocation – Variable assignment of coding noise in
audio frequency bands based on a psychoacoustic model.
Adaptive Quantization – Varying quantization values are applied based
on some model analysis of the data characteristics.
Adaptor – A device that allows an ordinary NTSC television to receive pictures from a non-receiver-compatible ATV system.
Additive Color – Color produced by “adding” colors, usually the combination of red, green and blue.
Additive Color System – Color specification system in which primary
colors are added together to create a desired color. An example is the
red/green/blue (RGB) system. Additive systems are generally associated
with light emitting devices (CRTs).
Additive Mix – A mix wherein the instantaneous video output signal is
equal to the weighted sum of the input video signals. Unless otherwise
specified, “mix” is taken to mean “additive mix”.
Address – Number that indicates the position of a word in the memory.
Address Bus – Set of wires (typically 32) used to transmit addresses,
usually from the microprocessor to a memory or I/O device.
Address Decoding – Process of selecting a specific address or field of
addresses to enable unique devices.
Address Dial – See SCSI Address Dial.
Addressable – Capable of being activated or accessed remotely by signals sent from a cable system’s headend (usually refers to descramblers
and other set-top boxes).
Addressability – The capability to selectively and remotely activate,
disconnect or descramble television signals in individual subscribers’
homes. A functionality of pay-per-view systems.
Addressing Modes – Various methods of specifying an address as part
of an instruction. See Direct Addressing, Indirect Addressing, Immediate
Addressing and Indexed Addressing.
Adhesion – The degree to which the coating adheres to the base film.
Anchorage may be checked by measuring the force required to separate
the coating from the base film by means of a specially designed plow blade
or, more simply, by determining whether the coating can be peeled from
the base film by means of ordinary pressure-sensitive adhesive tape.
ADIF (Audio Data Interchange Format) – ADIF consists of a single
header at the beginning of the AAC file; the rest of the data is the same
as a raw Advanced Audio Coding (AAC) file.
www.tektronix.com/video_audio 9
Video Terms and Acronyms
Adjacent Channel – A television transmission channel immediately adjacent to an existing channel. For example, channel 3 is adjacent to channels
2 and 4. There are three exceptions to what might otherwise be considered
adjacent channels: there is a small gap between channels 4 and 5, there
is a large gap between channels 6 and 7, and there is an enormous gap
between channels 13 and 14. Adjacent channels figure into ATV in two
ways. a) First, it is currently illegal to broadcast on adjacent channels in
a single location. Some ATV proponents feel that augmentation channels
might someday be allowed to be placed in adjacent channels. If half-size
(3 MHz) or smaller augmentation channels are used, all current broadcasters could then be allowed an augmentation channel. Some proponents feel
the use of a low power digital augmentation channel will allow adjacent
channels to be used without interference. b) Second, some ATV proposals
require that the augmentation channel be adjacent to the transmission
channel or require a larger than normal transmission channel, thus occupying a channel and one of its adjacent channels.
Adjust input video timing to match a reference video input. Eliminates the
need for manual timing adjustments.
Administrator – See System Administrator and Network Administrator.
ADO (Ampex Digital Optics) – Trade name for digital effects system
manufactured and sold by Ampex.
ADPCM – See Adaptive Differential Pulse Code Modulation.
ADR (Automatic Dialog Replacement) – The process of looping playback of a selected region in a sequence and automatically recording multiple replacement takes.
ADSL – See Asymmetrical Digital Subscriber Line.
ADSR (Attack, Decay, Sustain and Release) – These are the four
parameters found on a basic synthesizer envelope generator. An envelope
generator is sometimes called a transient generator and is traditionally
used to control the loudness envelope of sounds, though some modern
designs allow for far greater flexibility. The Attack, Decay, and Release
parameters are rate or time controls; Sustain is a level control. When a key
is pressed, the envelope generator begins to rise to its full level at the
rate set by the attack parameter; upon reaching peak level, it begins to
fall at the rate set by the decay parameter to the level set by the sustain
control. The envelope remains at the sustain level as long as the key is
held down. When the key is released, the envelope returns to zero at the
rate set by the release parameter.
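The behavior described above can be sketched as a simple linear envelope generator. This is a hypothetical illustration (real synthesizers usually use exponential segments): attack, decay, and release are per-step rates, and sustain is a level.

```python
def adsr_envelope(attack, decay, sustain, release, hold_steps, total_steps):
    """Return envelope levels; the key is held for the first hold_steps steps."""
    levels, level, phase = [], 0.0, "attack"
    for step in range(total_steps):
        if step >= hold_steps:
            phase = "release"                      # key released
        if phase == "attack":
            level = min(1.0, level + attack)       # rise to full level
            if level >= 1.0:
                phase = "decay"
        elif phase == "decay":
            level = max(sustain, level - decay)    # fall to the sustain level
        else:
            level = max(0.0, level - release)      # fall back to zero
        levels.append(level)
    return levels
```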
ADTS (Audio Data Transport Stream) – ADTS headers are present
before each Advanced Audio Coding (AAC) raw_data_block or block of 2
to 4 raw_data_blocks. Until the MPEG revision from December 2002 for
MPEG-4 AAC ADTS headers, this was basically the same as a MP3 header,
except that the emphasis field was not present for MPEG-2 AAC, only for
MPEG-4 AAC.
ADTV (Advanced Definition Television) – A term sometimes used for
both EDTV and HDTV.
Advance – The separation between a point on the sound track of a film
and the corresponding picture image.
Advanced Coding Efficiency (ACE) – The ACE profile supports coding
efficiency for both rectangular and arbitrary shaped objects. It is suitable
for applications such as mobile broadcast reception, acquisition of image
sequences, and other applications where high coding efficiency is requested and a small footprint isn’t the prime concern.
Advanced Encoder – A device that changes RGB or YUV into NTSC
utilizing some form or forms of pre-filtering to reduce or eliminate NTSC
artifacts. Some advanced encoders also offer image enhancement, gamma
correction, and the like.
Advanced Real-Time Simple (ARTS) – The ARTS profile provides
advanced error resilient coding techniques of rectangular video objects
using a back channel and improved temporal resolution stability with the
low buffering delay. Use it for real-time coding applications, such as the
videophone, teleconferencing and remote observation.
Advanced Television Systems Committee (ATSC) – The US-based
organization that is defining the high definition television standard for the
U.S. A sort of NTSC for ATV. It is comprised of three technology groups
and a number of smaller committees. T1 Group is studying receiver-compatible improved NTSC. T2 Group is studying non-receiver-compatible
525 scanning line production, distribution, and display systems. T3 Group
is studying HDTV.
Advanced TV – Although sometimes used interchangeably, advanced and
high-definition television (HDTV) are not one and the same. Advanced television (ATV) would distribute wide-screen television signals with resolution
substantially better than current systems. It requires changes to current
emission regulations, including transmission standards. In addition, ATV
would offer at least two-channel, CD-quality audio.
AEA (American Electronics Association) – An organization of manufacturers more associated with computers and communications than is the
EIA. The AEA has established an ATV Task Force, the members of which
include: AT&T, Apple Computer, Hewlett-Packard, IBM and Motorola.
AEC – See Acoustic Echo Canceller.
AES (Audio Engineering Society) – The official association of technical
personnel, scientists, engineers and executives in the audio field. Of
potential interest in electronic production are the following: SC-2,
Subcommittee on Digital Audio; SC-3, Subcommittee on the Preservation
and Restoration of Audio Recording; and SC-4, Subcommittee on Acoustics.
AES/EBU – a) Informal name for a digital audio standard established
jointly by the Audio Engineering Society and European Broadcasting Union
organizations. b) The serial transmission format standardized for professional digital audio signals (AES3-1992 AES Recommended Practice for
Digital Audio Engineering – Serial Transmission Format for Two-Channel
Linearly Represented Digital Audio Data). c) A specification using time
division multiplex for data, and balanced line drivers to transmit two
channels of digital audio data on a single twisted-pair cable using 3-pin
(XLR) connectors. Peak-to-peak values are between 3 and 10 V, with driver
and cable impedance specified as 110 ohms.
AES/EBU Digital Audio – Specification titled “AES recommended practice
for digital audio engineering – Serial transmission format for two channel
linearly represented digital audio data”. AES/EBU digital audio standard
that is the result of cooperation between the US based AES and the
European based EBU.
AES3 – See AES/EBU Digital Audio.
AGC – See Automatic Gain Control.
AF – See Adaptation Field.
AI (Amplitude Imbalance) – The purpose of the AI measurement is to
assess the QAM distortions resulting from amplitude imbalance of the I and
Q signals.
AFC – See Automatic Frequency Control.
AFC/Direct – See Waveform Monitors.
AFI (Authority and Format Identifier) – Part of the network level address
header.
AFL (After Fade Listen) – Used in mixing boards to override the normal
monitoring path in order to monitor a specific signal at a predefined point
in the mixer. Unlike PFL, the AFL signal definition is taken after the fader
of a channel or group buss such that the level of the fader will affect the
level heard in the AFL monitor circuit. AFL is sometimes also taken after
the pan pot which also allows the engineer to monitor the signal with the
pan position as it is in the mix. AFL is a handy way to monitor a small
group of related instruments by themselves with all of their eq, level, and
pan information reproduced as it is in the overall mix. An AFL circuit that
includes pan information is often called “solo” or “solo in place” depending
upon who builds the mixer.
AFM (Audio Frequency Modulation) – The most common form of audio
recording found in most consumer and professional video recording decks,
especially in VHS and 8 mm recorders. AFM audio is limited in dynamic
range and frequency response, and can include stereo and multitrack
audio.
AFNOR (Association Francaise de Normalisation) – The French standards
body.
A-Frame Edit – A video edit which starts on the first frame of the 5 video
frame (4 film frame) sequence created when 24 frame film is transferred
to 30 frame video. The A-frame is the only frame in the sequence where a film
frame is completely reproduced on one complete video frame. Here is the
full sequence. (The letters correspond to the film frames.) A-frame = video
fields 1&2, B-frame = video fields 1&2&1, C-frame = video fields 2&1,
D-frame = video fields 2&1&2.
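The 2-3 field cadence described above can be sketched as a small helper (names invented for illustration):

```python
def pulldown_fields(film_frames):
    """Repeat successive film frames in a 2,3,2,3 field cadence."""
    cadence = [2, 3, 2, 3]
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 4])
    return fields
```

Four film frames (A through D) become ten fields, i.e., five video frames; only the A-frame lands cleanly on a single video frame.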
Aftertouch – MIDI data sent when pressure is applied to a keyboard after
the key has been struck, and while it is being held down or sustained.
Aftertouch is often routed to control vibrato, volume, and other parameters.
There are two types: the most common is Channel Aftertouch which looks
at the keys being held, and transmits only the highest aftertouch value
among them. Less common is Polyphonic Aftertouch, which allows each
key being held to transmit a separate, independent aftertouch value. While
polyphonic aftertouch can be extremely expressive, it can also be difficult
for the unskilled to control, and can result in the transmission of a great
deal of unnecessary MIDI data, eating bandwidth and slowing MIDI response.
AFV – See Audio Follow Video.
AFX (Animation Framework Extension) – AFX is an integrated toolbox
that uses existing MPEG-4 tools to create powerful synthetic MPEG-4
environments. This collection of interoperable tool categories (with each
tool providing a functionality, such as an audiovisual stream) works together to produce a reusable architecture for interactive animated content.
AIFF (Audio Interchange File Format) – This is the format for both
compressed and uncompressed audio data.
AIFF-C (Audio Interchange File Format-Condensed) – A sampled-sound file format that allows for the storage of audio data. This format is
primarily used as data interchange format but can be used as a storage
format as well. OMF Interchange includes AIFF-C as a common interchange
format for non-compressed audio data.
Air Tally – The ability of a switcher console to indicate to an operator
which video sources and keys are on air at any given time. Ampex switchers have “true” air tally in that they sense actual presence of sources.
AIT (Application Information Table) – Provides information about the
activation state of service bound applications.
A-Law – A pulse code modulation (PCM) coding and companding standard
that is used in Europe for digital voice communications.
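The A-law characteristic (A = 87.6) can be sketched as follows. This is the continuous companding curve; the G.711 standard builds an 8-bit segmented approximation on top of it:

```python
import math

def alaw_compress(x, A=87.6):
    """Map a sample in [-1, 1] through the A-law companding characteristic."""
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    if x < 1.0 / A:
        y = A * x / (1.0 + math.log(A))                    # linear region
    else:
        y = (1.0 + math.log(A * x)) / (1.0 + math.log(A))  # logarithmic region
    return sign * y
```

Small signals are boosted and large signals compressed, so quantization noise is spread more evenly across the dynamic range.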
ALC – See Automatic Level Control.
ALC (Automatic Light Control) – A part of the electronics of an
automatic iris lens that has a function similar to backlight compensation
in photography.
Algorithm – a) A set of rules or processes for solving a problem in a
finite number of steps. In audio, video and data coding, the step-by-step
procedure (often including repetition) which provides suitable compression
and/or encryption for the specific application. When used for compression,
this mathematical process results in a significant reduction in the number
of bits required for transmission and may be either lossless or lossy.
b) Step-by-step procedure for the solution to a problem. First the problem
is stated and then an algorithm is devised for its solution.
Alias, Aliasing – Something other than what it appears to be. Stairsteps
on what should be a smooth diagonal line are an example of a spatial alias.
Wagon wheels appearing to move backwards are an example of a temporal
alias. Aliases are caused by sampling and can be reduced or eliminated
by pre-filtering, which can appear to be a blurring effect. Defects in the
picture are typically caused by insufficient sampling (violation of the
Nyquist sampling rate) in the analog to digital conversion process or poor
filtering of digital video. Defects are typically seen as jaggies on diagonal
lines and twinkling or brightening in picture detail. Examples are: Temporal
Aliasing – such as rotating wagon wheel spokes appearing to rotate in
the reverse direction. Raster Scan Aliasing – such as sparkling or pulsing
effects in sharp horizontal lines. Stair-Stepping – stepped or jagged
edges in diagonal lines or the diagonal parts of a letter.
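A small numeric illustration of the wagon-wheel example (the helper below is invented, not from any standard): a wheel advancing 0.9 of a revolution per frame is indistinguishable, after sampling, from one moving backwards by 0.1 revolution, because sampling folds true rates into a range of plus or minus half a revolution per frame.

```python
def apparent_rate(rev_per_frame):
    """Fold a true rotation rate into the observable range [-0.5, 0.5)."""
    return (rev_per_frame + 0.5) % 1.0 - 0.5
```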
Alignment – Most commonly, Head Alignment, but also used to describe
the process of adjusting a recorder’s Bias and Equalization for optimum
results from a specific tape.
Alignment Jitter – The variation in time of the significant instants (such
as zero crossings) of a digital signal relative to a hypothetical clock
recovered from the signal itself. This recovered clock will track jitter in the
signal up to its upper clock recovery bandwidth, typically 1 kHz to 100 kHz.
Measured alignment jitter includes those terms above this frequency.
Alignment jitter shows signal-to-latch clock timing margin degradation.
The allowed specification for SMPTE 292 is 0.2 unit intervals.
Alpha – See Alpha Channel and Alpha Mix.
Alpha Channel – The alpha channel is used to specify an alpha value
for each color pixel. The alpha value is used to control the blending, on a
pixel-by-pixel basis, of two images:
new pixel = (alpha)(pixel A color) + (1 – alpha)(pixel B color)
Alpha typically has a normalized value of 0 to 1. In a computer environment, the alpha values can be stored in additional bit planes of framebuffer memory. When you hear about 32-bit frame buffers, what this really
means is that there are 24 bits of color, 8 each for red, green, and blue,
along with an 8-bit alpha channel. Also see Alpha Mix.
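The per-pixel blend above can be sketched as follows, assuming 8-bit (R, G, B) tuples and an alpha normalized to the range 0 to 1:

```python
def alpha_blend(pixel_a, pixel_b, alpha):
    """Blend two (R, G, B) pixels: alpha weights pixel A, 1 - alpha weights B."""
    return tuple(round(alpha * a + (1.0 - alpha) * b)
                 for a, b in zip(pixel_a, pixel_b))
```

An alpha of 1 shows pixel A, an alpha of 0 shows pixel B, and intermediate values mix the two.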
Alpha Map – The representation of the transparency parameters associated to a texture map.
Alpha Mix – This is a way of combining two images. How the mixing is
performed is provided by the alpha channel. The little box that appears
over the left-hand shoulder of a news anchor is put there by an alpha
mixer. Wherever the pixels of the little box appear in the frame buffer, an
alpha number of “1” is put in the alpha channel. Wherever they don’t
appear, an alpha number of “0” is placed. When the alpha mixer sees a
“1” coming from the alpha channel, it displays the little box. Whenever it
sees a “0”, it displays the news anchor. Of course, it doesn’t matter if a
“1” or a “0” is used, but you get the point.
Alpha Plane – Image component providing transparency information.
Alphanumeric – Set of all alphabetic and numeric characters.
ALU – See Arithmetic and Logic Unit.
AM – A form of modulation where the level of the baseband information
affects the level of the carrier. See Amplitude Modulation.
A-MAC – A MAC (Multiplexed Analog Component) with audio and data
frequency multiplexed before modulation. See also MAC.
Ambient – Natural, or surrounding light in a clip.
Ambient Lighting – Light that emanates from no particular source,
coming from all directions with equal intensity.
Ambient Sound – A representative sample of background audio (such
as a refrigerator hum or crowd murmur) particular to a shooting location.
Ambient sound is gathered in the course of a production to aid the
sound editor in making cuts or filling in spaces between dialog. Also
called Room Tone.
American Television and Communications – See ATC.
A-Mode – A linear method of assembling edited footage. In A-mode, the
editing system performs edits in the order in which they will appear on the
master, stopping whenever the edit decision list (EDL) calls for a tape that
is not presently in the deck. See also B-Mode, C-Mode, D-Mode, E-Mode,
Source Mode.
A-Mode Edit – An editing method where the footage is assembled in the
final scene order. Scene 1, scene 2, etc.
Amplitude – a) The height of a waveform above or below the zero line.
The maximum value of a varying waveform. b) The maximum distance an
oscillating body (e.g., a pendulum) or wave travels from a mean point.
Amplitude Modulation (AM) – a) The process used for some radio (AM
broadcast, the North American audio service broadcast over 535 kHz to 1705
kHz) and television video transmission. A low frequency (program) signal
modulates (changes) the amplitude of a high frequency RF carrier signal
(causing it to deviate from its nominal base amplitude). The original program signal is recovered (demodulated) at the receiver. This system is
extensively used in broadcast radio transmission because it is less prone to
signal interference and retains most of the original signal quality. In video,
FM is used in order to record high quality signals on videotape. b) The
process by which the amplitude of a high-frequency carrier is varied in
proportion to the signal of interest. In the PAL television system, AM is
used to encode the color information and to transmit the picture. Several
different forms of AM are differentiated by various methods of sideband filtering and carrier suppression. Double sideband suppressed carrier is used
to encode the PAL color information, while the signal is transmitted with a
large-carrier vestigial sideband scheme.
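Definition (b) can be sketched numerically. The helper below is a plain illustration: m is the modulation index, and the carrier is a cosine sampled at sample_rate.

```python
import math

def am_modulate(program, carrier_freq, sample_rate, m=0.5):
    """Return samples of (1 + m * program[n]) * cos(2*pi*fc*n/fs)."""
    return [(1.0 + m * p) * math.cos(2.0 * math.pi * carrier_freq * n / sample_rate)
            for n, p in enumerate(program)]
```

The program signal rides on the carrier's envelope, which is what a simple envelope detector recovers at the receiver.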
Amplitude Non-Uniformity – A term used in connection with magnetic
tape testing and refers to the reproduced peak-to-peak voltage and its
variation from what was recorded.
Amplitude Versus Frequency Response – Refer to the Frequency
Response discussion.
AM-VSB (Amplitude Modulation with Vestigial Sideband) – The form
of modulation used in broadcast and cable television transmission. It is
more efficient than dual-sideband amplitude modulation and is easier to
implement than single-sideband amplitude modulation.
Analog – a) A continuous electrical signal that carries information in the
form of variable physical values, such as amplitude or frequency modulation. b) A signal which moves through a continuous range of settings or
levels. c) An adjective describing any signal that varies continuously as
opposed to a digital signal that contains discrete levels representing the
binary digits 0 and 1. d) A signal that is an analogy of a physical process
and is continuously variable, rather than discrete. See also Digitization.
Analog Components – Video signals in which a continuously variable
voltage or current (rather than a set of digital numbers) represents a pixel.
Analog Interface – An interface between a display controller and a
display in which pixel colors are determined by the voltage levels on three
output lines (RGB). Theoretically, an unlimited number of colors can be
supported by this method (24 bits per pixel allows 16,777,216 colors).
The voltage level on any line varies between zero volts (for black) to about
700 millivolts (for maximum brightness).
Analog Monitor – A video monitor which accepts analog signals. Several
types of inputs are accepted by analog monitors: composite video, RGB &
sync, Y/C, YUV and any combination of these formats. The signals transmitted to an analog monitor are usually between 0 and 1 V and use 75 ohm
coaxial cables.
Analog Recording – The common form of magnetic recording where the
recorded waveform maintains the shape of the original waveform signal.
Analog Signal – Representation of data by continuously varying quantities. An analog electrical signal has a different value of volts or amperes
for electrical representation of the original excitement (sound, light) within
the dynamic range of the system.
Analog Video – a) A video signal represented by a smooth and infinite
number of video levels. b) A video signal made of a continuous electrical
signal. A television and VCR can be analog video devices. To be stored
and manipulated on a computer, analog video must be converted to
digital video.
Analysis Filterbank – Filterbank that transforms a broadband signal into
a set of subsampled sub-band samples. An audio encoder function.
Analysis-By-Synthesis Coding – A method of coding in which the
analysis procedure (encoder) has embedded in it the synthesis procedure
(decoder). The reconstructed and original signals are compared and the
difference is minimized. Used in many recent speech coding standards.
Anamorphic – a) Unequally scaled in vertical and horizontal dimensions.
Applies to lenses used for widescreen movies. b) Distortion in viewing of
images or geometry related to the difference between computer monitor
screen aspect ratio (in which pixels are square) and broadcast, projected
or frame aspect ratio (in which image pixels are wider than they are high).
Anamorphic Squeeze – A change in picture geometry to compress one
direction (usually horizontal) more than the other. Anamorphic squeeze
lenses made CinemaScope possible. Occasionally, when widescreen movies
are transferred to video, an anamorphic squeeze will be used (usually
only in credits) to allow the smaller aspect ratio of television to accommodate the larger movie aspect ratio. Some ATV proponents have suggested
a gentle anamorphic squeeze as a technique to assist in aspect ratio
accommodation.
Anamorphic Video – Found on a large number of DVDs, anamorphic
video squeezes a 1.78:1 picture shape into a 1.33:1 image area. If you
view an anamorphic video image on a 1.33 set, the characters will look
tall and thin. This format is designed for the 1.78 aspect ratio TV sets
where the horizontal is stretched back out to the full width of the set.
Unsqueezing an anamorphic image on a 1.33 set is accomplished by
squeezing the vertical size. The advantage of the anamorphic video system
is 33% more vertical information in a widescreen picture.
Anchor Frame – A video frame that is used for prediction. I-frames and
P-frames are generally used as anchor frames, but B-frames are never
anchor frames.
Anchor Point – A bit stream location that serves as a random access
point. MPEG I-frames are the most common anchor points.
Anchorage – For recording tape, the degree to which the magnetic tape
oxide coating adheres to the base film.
Ancillary Timecode (ATC) – BT.1366 defines how to transfer VITC and
LTC as ancillary data in digital component interfaces.
Anechoic – Literally, without echoes. Anechoic refers to the absence of
audio reflections. The closest thing to this situation in nature is the great
outdoors, but even here there are reflections from the ground, various
objects, etc. It is almost impossible to create a truly anechoic environment,
as there is no such thing as a perfect sound absorber. At high frequencies,
it is possible to create near-anechoic conditions, but the lower the frequency, the harder that is.
Anechoic Chamber – A room which has totally sound absorbent walls,
so that no reflected waves can exist and only the direct waves are heard.
Angle – A scene recorded from different viewpoints. Each angle is equal
in time length, and an Angle Block may contain up to nine angles.
Angle Menu – Menu used to select the Angle number.
Anhysteresis – The process whereby a material is magnetized by applying
a unidirectional field upon which is superimposed an alternating field of
gradually decreasing amplitude. One form of this process is analogous to
the recording process using AC Bias.
Animatic – Limited animation consisting of art work shot and edited to
serve as a videotape storyboard. Commonly used for test commercials.
Animation – a) Animation is the process of fooling the human eye into
perceiving a moving object by presenting the eye with a rapid succession
of still pictures. Each still is called a frame. On the Cubicomp, animation
consists of moving objects which, in themselves, stay unchanged. b) The
recording of a sequence of still artwork or objects in a way that makes
them appear to move on film or video. 24 fps is considered the appropriate
speed for animation.
Animation Curve – A curve depicting the interpolation between the
various keyframes.
Animation Path – The motion of an object as it flies through space is
called its animation or motion path.
Anisotropy – Directional dependence of magnetic properties, leading to
the existence of easy or preferred directions of magnetization. Anisotropy
of a particle may be related to its shape, to its crystalline structure or to
the existence of strains within it. Shape anisotropy is the dominant form
in acicular particles.
ANRS, Super ANRS – A noise reduction system used by JVC. ANRS
operates on principles similar to those used by the Dolby system.
Therefore, there is a degree of compatibility between recordings made
with either system.
ANSI (American National Standards Institute) – ANSI is a voluntary
and privately funded business standards group in the USA. ANSI seeks
to promote and to facilitate consensus standards nationally, and is
internationally engaged as the sole US member of the ISO. The members
of ANSI consist of about 1,300 American and international companies,
30 government agencies and some 250 organizations of trade, labor,
professionals, consumers, etc.
ANSI 4.40 – See AES/EBU Digital Audio.
Antialias – Smoothing, removing, or reducing jagged edges along the lines
and curves in text, images, or geometry.
Answer Print – The first print combining picture and sound submitted by
the laboratory for the customers’ approval.
Anti-Alias Filter – A filter (typically a lowpass filter) used to bandwidth-limit the signal to less than half the sampling rate before sampling.
Anti-Aliased Fonts – Computer generated fonts that have been digitally
rounded for smooth edges.
Anti-Aliasing – The process of reducing aliasing effects. Aliasing occurs
because a raster system is “discrete”, i.e., made up of pixels that have
finite size. Representing a line with black and white pixels results in “jaggies”, or “aliases”. These are particularly disturbing during animation. To
correct them, “anti-aliasing” techniques are used. These techniques compute, for each pixel an edge crosses, a blend of the pixel’s existing color
(background) and the edge’s value, in proportion to the pixel area covered.
This isn’t possible in color-mapped mode because each color map location
is already allocated; there aren’t enough map locations.
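The proportional blend described above can be sketched numerically. In this illustrative Python fragment the per-pixel coverage values are assumed rather than derived from real edge geometry:

```python
def blend(background, foreground, coverage):
    """Blend two gray levels by the fraction of the pixel covered (0.0-1.0)."""
    return round(background * (1.0 - coverage) + foreground * coverage)

# A black edge crossing white pixels: hard threshold vs. anti-aliased ramp.
coverages = [0.0, 0.25, 0.5, 0.75, 1.0]   # assumed per-pixel edge coverage
hard = [255 if c < 0.5 else 0 for c in coverages]
aa = [blend(255, 0, c) for c in coverages]
print(hard)  # [255, 255, 0, 0, 0] – abrupt "jaggy" transition
print(aa)    # [255, 191, 128, 64, 0] – smooth anti-aliased ramp
```

The smooth ramp is what makes edges appear stable during animation instead of crawling from pixel to pixel.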
AOE (Applications and Operational Environments)
A-Only Edit (Audio-Only Edit)
AP – See Active Picture.
Aperture – a) An adjustable opening in a lens which, like the iris in the
human eye, controls the amount of light entering a camera. The size of the
aperture is controlled by the iris adjustment and is measured in F-stops.
A smaller F-stop number corresponds to a larger opening that passes
more light. b) As applied to ATV, the finite size and shape of the point of
the electron beam in a camera or picture tube. As the beam does not come
to an infinitesimal point, it affects the area around it, reducing resolution.
c) The opening of a lens that controls the amount of light reaching the
surface of the pickup device. The size of the aperture is controlled by the
iris adjustment. By increasing the F-stop number (F/1.4, F/1.8, F/2.8, etc.)
less light is permitted to pass to the pickup device.
Aperture Correction – a) Signal processing that compensates for a loss
of detail caused by the aperture. It is a form of image enhancement adding
artificial sharpness and has been used for many years. b) Electrical compensation for the distortion introduced by the (limiting) size of a scanning
aperture. c) The properties of the camera lens, optical beam-splitting
installation, and camera tube all contribute to a reduced signal at higher
spatial frequencies generally falling off as an approximate sin (x)/x function. Additionally, it is obvious in a scanning system that the frequency
response falls off as the effective wavelength of the detail to be resolved
in the image approaches the dimension of the scanning aperture and
becomes zero when the effective wavelength equals the dimension of the
scanning aperture. Aperture correction normally introduced in all video
cameras restores the depth of modulation to the waveform at higher
frequencies with the objective of flat response to 400 TV lines (in NTSC)
for a subjective improvement in image quality.
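The sin(x)/x roll-off and the corrective boost it calls for can be illustrated with a small calculation; the normalized frequencies below are assumed values for illustration, not broadcast parameters:

```python
import math

def aperture_response(f, f_aperture):
    """sin(x)/x response of a scanning aperture; falls to zero when the
    detail wavelength equals the aperture dimension (f == f_aperture)."""
    x = math.pi * f / f_aperture
    return 1.0 if x == 0 else math.sin(x) / x

def correction_gain(f, f_aperture):
    """Gain needed to restore the depth of modulation at frequency f."""
    return 1.0 / aperture_response(f, f_aperture)

# Assumed numbers: response at half the aperture cutoff frequency.
print(round(aperture_response(0.5, 1.0), 3))   # 0.637 – modulation already down ~4 dB
print(round(correction_gain(0.5, 1.0), 3))     # 1.571 – boost that restores flatness
```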
Aperture Delay – In ADCs, aperture delay is the time from an edge of the
input clock of the ADC until the time the part actually takes the sample.
The smaller this number, the better.
Aperture Jitter – The uncertainty in the aperture delay. This means the
aperture delay time changes a little bit over time, and that little bit of
change is the aperture jitter.
Aperture, Camera – The available maximum dimensions of the optical
image on the active surface of the photo-sensor, within which good quality
image information is being recorded. The camera aperture determines
the maximum usable scene information captured and introduced into the
system, and available for subsequent processing and display. These dimensions are usually defined by standards. (Note: Not to be confused with lens
aperture, which defines the luminous flux transmission of the optical path.)
Aperture, Clean – The concept of a clean aperture in a digital system
defines an inner picture area (within the production aperture) within which
the picture information is subjectively uncontaminated by all edge transient
distortions (SMPTE 260M). Filtrations for bandwidth limitation, multiple
digital blanking, cascaded spatial filtering, etc., introduce transient disturbances at the picture boundaries, both horizontally and vertically. It is
not possible to impose any bounds on the number of cascaded digital
processes that might be encountered in the practical post-production
system. Hence, the clean aperture is defined to represent an acceptable
(and practical) worst-case level of production.
Aperture, Display – The available maximum dimensions (mapped back
into the camera aperture) for the system’s ability to display good quality
image information. The information available for display is usually cropped
from the total captured by the cascade of tolerances that may be incorporated in the system, and also by intentional design features that may be
introduced in the display.
Aperture, Production – A production aperture for a studio digital device
defines an active picture area produced by signal sources such as cameras, telecines, digital video tape recorders, and computer-generated
pictures. It is recommended that all of this video information be carefully
produced, stored, and properly processed by subsequent digital equipment.
In particular, digital blanking in all studio equipment should rigorously
conform to this specified production aperture (SMPTE 260M). The width
of the analog active horizontal line is measured at the 50% points of the
analog video signal. However, the analog blanking may differ from equipment to equipment, and the digital blanking may not always coincide with
the analog blanking.
Aperture, Safe Action – As defined by a test pattern, a safe action aperture indicates the safe action image area within which all significant action
must take place, and the safe title image area, within which the most
important information must be confined, to ensure visibility of the information on the majority of home television receivers. SMPTE RP 27.3 defines
these areas for 35 mm and 16 mm film and for 2 x 2-inch slides.
API (Application Program Interface) – a) The software used within an
application program to activate various functions and services performed
by the operating system. b) The Windows operating system refers to API
functions as those which open and close windows, interpret mouse movement, read the keyboard, etc. These control-type functions are called
“hooks” to the operating system. c) APIs define the interfaces to the library
of tools that are made available by the MPEG-4 systems, and the interfaces
of the pieces of code that can be downloaded to the MPEG-4 systems.
APL (Average Picture Level) – The average signal level (with respect
to blanking) during active picture time, expressed as a percentage of the
difference between the blanking and reference white levels.
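As an illustration of the definition, APL for a set of digitized active-picture samples can be computed as follows (the sample values and the 0-to-100 blanking-to-white scale are assumptions for the example):

```python
def apl(samples, blanking=0.0, white=100.0):
    """Average Picture Level: mean active-picture level expressed as a
    percentage of the blanking-to-reference-white range."""
    mean = sum(s - blanking for s in samples) / len(samples)
    return 100.0 * mean / (white - blanking)

# Assumed active-line samples on a 0 (blanking) to 100 (white) scale.
print(apl([0, 50, 100, 50]))   # 50.0 – a mid-gray average
print(apl([100] * 4))          # 100.0 – full white field
```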
Application Format – A specification for storing information in a particular way to enable a particular use.
Arithmetic Coding – Perhaps the major drawback to each of the Huffman
encoding techniques is their poor performance when processing texts
where one symbol has a probability of occurrence approaching unity.
Although the entropy associated with such symbols is extremely low, each
symbol must still be encoded as a discrete value. Arithmetic coding
removes this restriction by representing messages as intervals of the real
numbers between 0 and 1. Initially, the range of values for coding a text is
the entire interval (0, 1). As encoding proceeds, this range narrows while
the number of bits required to represent it expands. Frequently occurring
characters reduce the range less than characters occurring infrequently,
and thus add fewer bits to the length of an encoded message.
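The interval narrowing can be demonstrated with a toy encoder; the two-symbol probability model below is an assumption chosen to make the effect obvious:

```python
import math

def encode_interval(message, probs):
    """Narrow (low, high) over [0, 1) symbol by symbol; the final
    width equals the product of the symbol probabilities."""
    low, high = 0.0, 1.0
    for sym in message:
        width = high - low
        cum = 0.0                      # cumulative probability below sym
        for s in sorted(probs):
            if s == sym:
                break
            cum += probs[s]
        low, high = low + width * cum, low + width * (cum + probs[sym])
    return low, high

probs = {"a": 0.9, "b": 0.1}           # assumed two-symbol model
low, high = encode_interval("aaaa", probs)
bits_likely = math.ceil(-math.log2(high - low))
low, high = encode_interval("bbbb", probs)
bits_rare = math.ceil(-math.log2(high - low))
print(bits_likely, bits_rare)   # 1 14 – frequent symbols add far fewer bits
```

Four highly probable symbols fit in a single bit, whereas a plain Huffman code would still spend at least one bit per symbol.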
Application Window – The workspace (window) available to an application. The size can be adjusted by the user and limited only by the size of
the monitor’s display.
A-Roll – A method of conforming that requires the compositing of all
multilayer effects into a single layer (including laboratory-standard
dissolves and fades) before assembly. Also called Single-Strand Editing.
APS (Advanced Photo System) – A new photographic system conceived
by Kodak and developed jointly with Canon, Fuji, Minolta, and Nikon. The
APS was launched in April 1996. APS also represents the file format used
to store data on the new film’s magnetic coating.
ARP (Address Resolution Protocol) – A TCP/IP protocol used to obtain
a node’s physical address. A client station broadcasts an ARP request onto
the network with the IP address of the target node it wishes to communicate with, and the node with that address responds by sending back its
physical address so that packets can be transmitted. ARP returns the layer
2 address for a layer 3 address. Since an ARP gets the message to the
target machine, one might wonder why bother with IP addresses in the first
place. The reason is that ARP requests are broadcast onto the network,
requiring every station in the subnet to process the request.
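The cache-then-broadcast behavior can be modeled in a few lines; all addresses below are invented examples, and the "broadcast" is simulated by a lookup over every known node:

```python
# Toy model of ARP resolution: check a local cache first; on a miss,
# "broadcast" a who-has request that every node on the subnet sees,
# and only the node owning that IP answers with its physical address.
network = {                               # assumed subnet: IP -> MAC
    "192.168.1.10": "aa:bb:cc:00:00:10",
    "192.168.1.20": "aa:bb:cc:00:00:20",
}
arp_cache = {}

def resolve(ip):
    if ip in arp_cache:                   # cache hit: no traffic needed
        return arp_cache[ip]
    mac = network.get(ip)                 # miss: broadcast, target replies
    if mac is not None:
        arp_cache[ip] = mac               # remember the layer-2 address
    return mac

print(resolve("192.168.1.10"))  # aa:bb:cc:00:00:10 – learned via broadcast
print(resolve("192.168.1.10"))  # same answer, now served from the cache
```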
Apostilb – A photometric unit for measuring luminance where, instead of
candelas, lumens are used to measure the luminous flux of a source.
Application – An application runs in a module, communicating with the
host, and provides facilities to the user over and above those provided
directly by the host. An application may process the transport stream.
Apt-X100 – The apt-X100 is a proprietary audio compression algorithm
from APT, Ltd., which features an adaptive differential PCM (ADPCM)
algorithm in four sub-bands. The algorithm provides a fixed 4:1 compression with low delay and bandwidths ranging from 7.5 kHz to 22.5 kHz and
output bit rates from 64 to 384 kbit/s, depending on the sampling rate.
APU (Audio Presentation Unit 13818-1) – A 13818-1 audio frame.
Architecture – a) Logical structure of a computer system. b) In digital
video, architecture (also known as format) refers to the structure of the
software responsible for creating, storing and displaying video content.
An architecture may include such things as compression support, system
extensions and browser plug-ins. Different multimedia architectures offer
different features and compression options and store video data in different
file formats. QuickTime, RealVideo and MPEG are examples of video architectures (though MPEG is also a type of compression).
Archive – a) Off-line storage of video/audio onto backup tapes, floppy
disks, optical disks, etc. b) A collection of several files bundled into one file
by a program (such as ar, tar, bar, or cpio) for shipment or archiving. This
method is very reliable and can contain large amounts of data. c) Longterm off-line storage. In digital systems, pictures are generally archived
onto some form of hard disk, magnetic tape, floppy disk or DAT cartridge.
ARIB (Association of Radio Industries and Businesses) – ARIB conducts studies and R&D, provides consultation services for radio spectrum
coordination, cooperates with other organizations around the world and
provides frequency change support services for the smooth introduction of
digital terrestrial television broadcasting.
ARQ (Automatic Repeat Request) – An error-control technique in which the
receiver requests retransmission of data received in error.
Array Processor – A compute engine that efficiently performs operations
on large amounts of data with a regular structure (array).
ARS – See Automatic Route Selection.
Arithmetic and Logic Unit (ALU) – One of three essential components
of a microprocessor. The other two are the registers and the control block.
The ALU performs various forms of addition, subtraction, and logic operations, such as ANDing the contents of two registers or masking the contents of a register.
Artifacts – a) Artifacts can range from noise and snow, to spots. Anything
that is visually wrong with the picture is an artifact. Artifacts however do
not include picture errors caused by improperly adjusted displays. Artifacts
are visual errors caused by the signal being sent to the display. b) A defect
or distortion of the image, introduced along the sequence from origination
and image capture to final display. Artifacts may arise from the overload
of channel capacity by excess signal bandwidth. Artifacts may also result
from: sampling effects in temporal, spatial, or frequency domains; processing by the transfer functions; compromises and inadequacies in the system
employed; cascading of minor defects; basically any other departure of
the total system from “complete transparency”. c) Visible (or audible)
consequences of various television processes. Artifacts are usually referred
to only when they are considered defects. Artifact elimination is often
more apparent than quality increases such as resolution enhancement.
d) Interference or other unwanted “noise” in video such as flickering,
changes in color and macroblocking. Some artifacts, such as macroblocking, can be remedied in video compression and some cannot. The quality
of the finished product is, in large part, no better than the source material.
See also Filter Artifacts, Impairments, and NTSC Artifacts.
ASA – Exposure index or speed rating that denotes the film sensitivity,
defined by the American National Standards Institute. Actually defined
only for black-and-white films, but also used in the trade for color films.
ASCII (American Standard Code for Information Interchange) –
a) Character code used for representing information as binary data in most
computer systems. b) A standard code for transmitting data, consisting
of 128 letters, numerals, symbols and special codes each of which is
represented by a unique binary number.
ASF (Active Streaming Format) – a) A Microsoft file format for digital
video playback over the Internet, or on a standalone computer. Kind of a
wrapper around any of a number of compression types, including MPEG.
b) Part of a NetShow, a proprietary streaming media solution from
Microsoft. Biggest competitor is Real Networks. While this ‘wrapper’
supports many standard formats, ASF files are themselves proprietary.
ASI (Asynchronous Serial Interface) – Transmission standard defined
by the digital video broadcast (DVB) used to connect video delivery equipment within a cable, satellite or terrestrial plant.
ASIC (Application Specific Integrated Circuit) – An integrated circuit
designed for special rather than general applications.
ASN.1 (Abstract Syntax Notation 1) – OSI language for describing data
types independent of particular computer structures and representation
techniques. Described by ISO International Standard 8824.
ASPEC (Adaptive Spectral Perceptual Entropy Coding) – An
algorithm developed by Fraunhofer Institut, AT&T, Thomson Brandt, and
the CNET. The ASPEC algorithm was later used for developing the MPEG
audio Layer 3 specification.
Aspect Ratio – The ratio of the width of the picture to the height. For
most current TVs, this ratio is 4:3. For HDTV, the ratio will be 16:9. The
aspect ratio, along with the number of vertical scan lines that make up
the image, determines what sample rate should be used to digitize the
video signal.
Typical formats, in ascending order of aspect ratio:
Square photographic formats, including Instamatic 126 (1:1)
Existing television, old movies, Pocket Instamatic 110 (1.33:1)
IMAX film
35mm still photographs, proposed for theatrical release (1.5:1)
Faroudja HDTV proposal
Original NHK proposal, theatrical projection outside the U.S. (5:3)
ATSC/SMPTE HDEP standard, optimized for shoot and protect (16:9)
Theatrical projection in the U.S. (1.85:1)
Most forms of VistaVision
Some widescreen movie formats
CinemaScope and similar movie formats (2.35:1)
Dimension-150, Ultra-Panavision
Dynavision widescreen 3D film format
Aspect Ratio Accommodation – Techniques by means of which
something shot in one aspect ratio can be presented in another, such as
pan and scan, anamorphic squeeze, and shoot and protect. It is also
possible to combine techniques. Current ATV aspect ratio debates
concentrate on the problems of presenting widescreen images to
existing TV sets; the same problems (in an opposite direction) will occur
when current aspect ratio images are presented on widescreen TV sets.
In movie theaters these problems are usually solved with movable drapes.
Asperities – Small projecting imperfections on the surface of the tape
coating that limit and cause variations in head-to-tape contact.
Aspherical Lens – A lens that has an aspherical surface. It is harder and
more expensive to manufacture, but it offers certain advantages over a
normal spherical lens.
Assemble – One of the two editing modes that are possible with video
tapes. All tracks on the tape are added free of disturbances at the cutting
point, but all tracks are newly written. The other editing method is known
as Insert Edit.
Assembled Edit – a) Electronic edit that replaces all previously recorded
material with new audio and video and a new control track, starting at
the edit point. Inserting a new control track allows for a constant speed
reference throughout the entire tape. b) Adding material that has a different signal to the end of a pre-recorded section of a videotape. Adding
an assemble edit to the middle of an existing segment causes an abrupt
and undesirable change in the sync of the video signal. Contrast with
Insert Edit.
Assembler Program – Translates assembly language statements
(mnemonics) into machine language.
Assembly Language – Machine-oriented language. A program is normally
written as a series of statements using mnemonic symbols that suggest
the definition of the instruction. It is then translated into machine language
by an assembler program.
Astigmatism – The uneven foreground and background blur that is in an image.
ASV (Audio Still Video) – A still picture on a DVD-Audio disc.
ASVP (Application-Specific Virtual Prototype)
Asymmetric Compression – Compression in which the encoding and
decoding require different processing power (the encoding is normally
more demanding).
Asymmetrical Digital Subscriber Line – Bellcore’s term for one-way
T-1 to the home over the plain old, single twisted pair wiring already going
to homes. ADSL is designed to carry video to the home. ADSL, like ISDN,
uses adaptive digital filtering, which is a way of adjusting itself to overcome noise and other problems on the line. According to Northern Telecom,
initial ADSL field trials and business cases have focused on ADSL’s potential for Video on Demand service, in competition with cable pay-per-view
and neighborhood video rental stores. But ADSL offers a wide range of
other applications, including education and health care. Once telephone
companies are able to deliver megabits to the home, Northern Telecom
expects an explosion in potential applications including work-at-home
access to corporate LANs, interactive services such as home shopping and
home banking and even multi-party video gaming, interactive travelogues,
and remote medical diagnosis. Multimedia retrieval will also become
possible, enabling the home user to browse through libraries of text, audio,
and image data – or simply subscribe to CD-quality music services. In
the field of education, ADSL could make it possible to provide a low-cost
“scholar’s workstation” – little more than a keyboard, mouse and screen –
to every student, providing access to unlimited computer processing
resources from their home. For a more modern version of ADSL, see DMT,
which stands for Discrete Multi-Tone.
Asynchronous – a) A transmission procedure that is not synchronized by
a clock. b) Any circuit or system that is not synchronized by a common
clock signal. c) Lacking synchronization. In video, a signal is asynchronous
when its timing differs from that of the system reference signal. A foreign
video signal is asynchronous before a local frame synchronizer treats it.
Asynchronous Data Streaming – Streaming of data only, without any
timing requirements. See also Synchronous Data Streaming.
Asynchronous Signals – Data communication transmission of signals
with no timing relationship between the signals. Stop and start bits may be
used to avoid the need for timing clocks.
Asynchronous Transfer Mode (ATM) – a) A digital transmission system
using packets of 53 bytes for transmission. ATM may be used for LANs
and WANs. ATM is a switching/transmission technique where data is
transmitted in small, fixed-size 53-byte cells (5-byte header, 48-byte
payload). The cells lend themselves both to the time-division-multiplexing
characteristics of the transmission media, and the packet switching
characteristics desired of data networks. At each switching node, the ATM
header identifies a virtual path or virtual circuit that the cell contains data
for, enabling the switch to forward the cell to the correct next-hop trunk.
The virtual path is set up through the involved switches when two endpoints wish to communicate. This type of switching can be implemented in
hardware, almost essential when trunk speeds range from 45 Mbps to 1
Gbps. The ATM Forum, a worldwide organization aimed at promoting ATM
within the industry and the end-user community, was formed in October
1991 and currently includes more than 500 companies representing all
sectors of the communications and computer industries, as well as a
number of government agencies, research organizations and users.
b) A digital signal protocol for efficient transport of both constant-rate
and bursty information in broadband digital networks.
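The fixed 53-byte cell structure can be sketched as a simple framing routine. The header here is reduced to a made-up circuit tag plus padding, not the real GFC/VPI/VCI/PT/CLP/HEC bit layout:

```python
CELL, HEADER, PAYLOAD = 53, 5, 48

def make_cells(data: bytes, vci: int):
    """Split data into fixed 48-byte payloads, each behind a 5-byte
    header (here just a 2-byte circuit tag plus padding – a sketch)."""
    header = vci.to_bytes(2, "big") + b"\x00" * (HEADER - 2)
    cells = []
    for i in range(0, len(data), PAYLOAD):
        payload = data[i:i + PAYLOAD].ljust(PAYLOAD, b"\x00")  # pad last cell
        cells.append(header + payload)
    return cells

cells = make_cells(b"x" * 100, vci=42)
print(len(cells), len(cells[0]))   # 3 53 – 100 bytes need three 53-byte cells
```

The fixed size is what lets a switch inspect the header and forward the cell to the next-hop trunk entirely in hardware.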
AT&T – Consumer electronics manufacturer and long distance telephone,
television, and data carrier. Its Bell Labs has worked on the development
of ATV systems.
ATAPI (Advanced Technology Attachment Packet Interface) –
An interface between a computer and its internal peripherals such as
DVD-ROM drives. ATAPI provides the command set for controlling devices
connected via an IDE interface. ATAPI is part of the Enhanced IDE (E-IDE)
interface, also known as ATA-2. ATAPI was extended for use in DVD-ROM
drives by the SFF 8090 specification.
ATC – See Ancillary Timecode.
ATC (Adaptive Transform Coding) – A method used to encode voice
transmissions using only 16 kbps.
ATC (American Television and Communications) – Time Inc.’s CATV
multiple system operator (MSO), a co-proposer with HBO of C-HDTV and a
supporter of ACTV.
ATEL (Advanced Television Evaluation Laboratory) – World-unique
facility for conducting subjective assessments of picture quality for
advanced television, digital video and multimedia services delivered using
a wide range of formats, from low resolution to high-definition television
(HDTV) and three-dimensional television (3D-TV).
A-Time (Absolute Time) – Elapsed time, referenced to the program start
(00:00:00), on a DVD. A-time is measured in minutes, seconds and frames.
ATM – See Asynchronous Transfer Mode.
ATTC (Advanced Television Test Center) – Created by seven broadcasting organizations to test different broadcast ATV systems. See also CableLabs.
ATM Cell – An ATM packet of 53 bytes: 5 bytes for the header, 48 bytes for the payload.
ATM Forum – An international body of technical representatives defining
ATM as a delivery mechanism, including ATM-based transfer and switching.
A-to-D Converter – a) A circuit that uses digital sampling to convert
an analog signal into a digital representation of that signal. An ADC for
digitizing video must be capable of sampling at 10 to 150 million samples
per second (MSPS). b) Converts analog voltages and currents to the digital
representation used by computer systems. This enables the computer to
sense real-world signals.
ATR (Audiotape Recorder) – A device for recording and reproducing
sound on magnetic recording tape.
ATRAC (Adaptive Transform Acoustic Coding) – An algorithm that
splits an audio signal into three non-uniform sub-bands.
ATRP (Advanced Television Research Program) – ATRP was established at MIT in 1983 by a consortium of U.S. companies. The major
objectives of the ATRP are: to develop the theoretical and empirical basis
for the improvement of existing television systems, as well as the design
of future television systems; to educate students through television-related
research and development and to motivate them to undertake careers in
television-related industries; to facilitate continuing education of scientists
and engineers already working in the industry; to establish a resource
center to which problems and proposals can be brought for discussion
and detailed study; to transfer the technology developed from this program
to the industries.
ATSC – See Advanced Television Systems Committee.
ATSC A/49 – Defines the ghost cancellation reference signal for NTSC.
ATSC A/52 – Defines the (Dolby Digital) audio compression for ATSC HDTV.
ATSC A/53, A/54 – Defines ATSC HDTV for the USA.
ATSC A/57 – Defines the program, episode, and version ID for ATSC HDTV.
ATSC A/63 – Defines the method for handling 25 and 50 Hz video for ATSC HDTV.
ATSC A/65 – Defines the program and system information protocol (PSIP) for ATSC HDTV.
ATSC A/70 – Defines the conditional access system for ATSC HDTV.
ATSC A/90 – Defines the data broadcast standard for ATSC HDTV.
ATSC A/92 – Defines the IP multicast standard for ATSC HDTV.
Attack – In audio terms, the beginning of a sound. What type of attack a
sound has is determined by how long it takes for the volume of the sound
to go from silence to maximum level. It is critical to consider the attack
time of sounds when applying processing; compression, gating, and other
types of processors may destroy a sound’s attack, changing the character
and quality of the audio. Reverbs can also be affected by attack time;
careful use of a reverb’s predelay parameter will allow you to optimize
the reverb for different types of attacks.
ATT-C (AT&T Communications) – The long-distance arm of AT&T.
Attenuation – A decrease in the level of a signal is referred to as attenuation. In some cases this is unintentional, as in the attenuation caused by
using wire for signal transmission. Attenuators (circuits which attenuate a
signal) may also be used to lower the level of a signal in an audio system
to prevent overload and distortion.
Attenuator – A circuit that provides reduction of the amplitude of an electrical signal without introducing appreciable phase or frequency distortion.
Attic Folder – The folder containing backups of your files or bins. Every
time you save or the system automatically saves your work, copies of
your files or bins are placed in the attic folder, until the folder reaches
the specified maximum. The attic folder copies have the file name extension .bak and a number added to the file name. The number of backup
files for one project can be changed (increased or decreased) in the Bin
Settings dialog box.
Attribute Clip – A mechanism that applications can use to store supplemental information in a special track that is synchronized to the other track
in a track group.
ATV – See Advanced TV.
AU – See Access Unit.
Audio – a) Signals consisting of frequencies corresponding to a normally
audible sound wave ranging between the frequencies of 20 Hz to 20,000
Hz. b) A DC signal with varying amounts of ripple. It is sometimes possible
to see the ripple on the DC signal to convey information of widely variable
degrees of usefulness. c) The sound portion of a program.
Audio Balanced Signals – These are signals with two components, equal
in amplitude but opposite in polarity. The impedance characteristics of
the conductors are matched. Current practice designates these as non-inverted and inverted, + and –, or positive and return. Interconnect cables
usually have three conductors. Two, arranged as a twisted pair, carry the
non-inverted and inverted signals. By employing a twisted pair of conductors for
the signal leads, the loop area responsible for magnetic interference is
minimized. The third conductor is a shield.
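The noise rejection that motivates this arrangement can be shown numerically: interference couples equally into both conductors (common mode), while the wanted signal is the difference between them. A sketch with assumed sample values:

```python
# Balanced-line sketch: the same noise is induced on both conductors,
# so taking the difference at the receiver cancels it and leaves the signal.
signal = [0.0, 0.5, 1.0, -0.5]          # assumed wanted audio samples
noise = [0.2, 0.2, -0.1, 0.3]           # assumed induced interference

non_inverted = [s + n for s, n in zip(signal, noise)]
inverted = [-s + n for s, n in zip(signal, noise)]

received = [(p - m) / 2 for p, m in zip(non_inverted, inverted)]
print(received)   # recovers the original signal; the noise is cancelled
```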
Audio Bandwidth – The range of audio frequencies which directly influence the fidelity of the audio. The higher the audio bandwidth, the better
the audio fidelity. The highest practical frequency the human ear can
normally hear is 20 kHz. An audio amplifier that processes all frequencies
equally (flat response to 20 kHz) and has a reasonably high signal-to-noise
ratio will accurately amplify the audio signal.
Audio Breakaway (ABKW) – The ability to independently select audio
sources regardless of which video source is selected, even though the
audio is normally associated with a particular video (as opposed to follow).
Audio Buffer – A decoder buffer for storage of compressed audio data.
Audio Channel Number – These are consecutive numbers assigned to
the audio channels of the audio stream. They range from “0” to “7” in the
order described in the video title set manager area. ACH0 and ACH1 are
assigned to left channel and right channel respectively for two-channel
stereo audio signals.
Audio Coding Mode – In general this is often used to show an audio
coding method such as linear PCM, AC-3 or MPEG audio, etc., but in
some contexts it refers to the channel constitution in AC-3 tracks and
the speaker layout.
Audio Control Packet – Transmitted once per field in the second horizontal ancillary data space after the vertical interval switch point. It contains
information on audio frame number, sampling frequency, active channels,
and relative audio-to-video delay of each channel. Transmission of audio
control packets is optional for 48 kHz synchronous operation and required
for all other modes of operation.
Audio Dub – Process which allows for the replacement of audio signals
on a previously recorded tape without disturbing the video signal.
Audio Mixer – A component that combines more than one sound input
for composite output.
Audio Mixing – The blending of two or more audio signals to generate a
combined signal which is often used for audio dub. During video processing, audio mixing may be used to insert narration or background music.
Audio Modulation – A carrier is modified with audio information and is
mixed with the video information for transmission.
Audio Modulation Decoders – Converts sound carrier elements of the
video waveform into left and right audio channels for stereo monitoring.
Audio Modulation Monitors – Displays sound carrier elements of the
video waveform.
Audio On ISDN – Through use of the MPEG audio specification, the ISDN
(Integrated Services Digital Network) may be turned into an audio transmission medium. Data compression techniques like MPEG Layer II allow a
tailored mix of cost and quality, and are now thought of implicitly when
talking audio on ISDN.
Audio Editing – Portions of the audio material are combined and recorded
onto the videotape. Examples include creating a sound track that includes
signals such as background music, voice narration or sound effects.
Audio Scrub – See Scrubbing.
Audio Effects Board – Similar to a switcher, an audio effects board is the
primary router and mixer for source audio, and for adjusting, mixing and
filtering audio. Usually, a digital audio workstation is used to perform more
complex audio work.
Audio Signals – XLR connectors provide dual-channel audio signals. The
left channel can be set to click as a means of easily distinguishing the left
channel from the right channel in audio tests.
Audio Follow Video (AFV) – Audio selections made simultaneously upon
selection of associated video sources (as opposed to audio breakaway).
Audio Level Measurements – Typically within audio measurements a
dBm value is specified. This means that a reference power of 1 mW was
used with a 600 Ω termination. Therefore, using the equations below, 0 dBm is
equivalent to a voltage of 0.775 V rms into a 600 Ω load. You may encounter
several different types of dB measurements used within audio. The following list indicates the typically used equations.
dBm = 10 log (P1/0.001 W)
dBV = 20 log (V2/1 V rms)
dBu = 20 log (V2/0.775 V rms)
dBSPL = 20 log (P1/P2)
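These reference levels can be sketched as small helper functions (a minimal Python illustration; the function names are invented for this example). The 0.775 V rms figure follows from dissipating 1 mW in a 600 Ω load:

```python
import math

def dbm(power_watts):
    """Power relative to 1 mW: dBm = 10 * log10(P / 0.001 W)."""
    return 10 * math.log10(power_watts / 0.001)

def dbv(volts_rms):
    """Voltage relative to 1 V rms."""
    return 20 * math.log10(volts_rms / 1.0)

def dbu(volts_rms):
    """Voltage relative to 0.775 V rms (the voltage of 1 mW in 600 ohms)."""
    return 20 * math.log10(volts_rms / 0.775)

# 0.775 V rms across a 600-ohm load dissipates about 1 mW,
# so the dBm and dBu scales agree at this operating point.
p = 0.775 ** 2 / 600            # ~0.001 W
print(round(dbm(p), 1))         # 0.0
print(round(dbu(0.775), 1))     # 0.0
```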
Audio Levels – The level of the audio signal in either voltage or current.
Audio levels are measured and indicated by mechanical VU-meters or
electronic LED bar graph meters. It is important to maintain the proper
audio level. If the audio level is too high when recording, overload of the
input electronics and audio distortion will result. When audio levels are
low, the signal-to-noise ratio is compromised.
Audio Matrix – That portion of the switcher electronics used to switch
audio sources. Usually this matrix is controlled by AFV selections on the
primary matrix, ABKW selections on an aux audio bus, or by an external
editor or computer control.
Audio Menu – Menu used to select the audio stream.
Audio Sequence – A series of audio frames with the same global
parameters.
Audio Stream Number – These are consecutive numbers assigned to the
audio streams for a title in a VTS. They range from 0 to 7 in the order
described in the video title set manager area. For menus the number of
audio streams is limited to 0 or 1.
Audio Subcarrier – A specific frequency that is modulated with audio
data before being mixed with the video data and transmitted.
Audio Subframe – There are 100 subframes of audio for every frame of
video.
Audio Sweetening – The mixing of sound effects, music and announcer
audio tracks with the audio track of the edited master tape, usually during
the mixing stages of a production. Also called Audio Post-Production for
Video.
Audio Timecode – Longitudinal timecode (LTC) recorded on an audio
track.
Audio Visual Objects (AV Objects) – An AV object is a representation of
a real or virtual object that can be manifested aurally and/or visually. AV
objects are generally hierarchical, in that they may be defined as composites of other AV objects, which are called sub-objects. AV objects that are
composites of sub-objects are called compound AV objects. All other AV
objects are called primitive AV objects.
Audio Visual Scene (AV Scene) – A set of media objects together with
scene description information that defines their spatial and temporal
attributes including behavior resulting from object and user interaction.
Audio/Video Mixer – A single electronic component that consists of
an audio mixer and a video mixer, switcher, or special effects generator.
Also called an A/V Mixer.
www.tektronix.com/video_audio 19
Video Terms and Acronyms
Audio-Follow-Video – During video recording or editing, the video signal
is usually accompanied by its associated audio signal. While editing
video, it is sometimes necessary to separate the audio and video signals.
Audio-follow-video mixers allow the audio to follow, or not follow, the
video when switching video signals.
AudioVision – A registered trademark of Avid Technology, Inc. A digital,
nonlinear audio editing system that locks digital video in sync with audio
for audio editing and sweetening.
Auditory Masking – A phenomenon that occurs when two sounds of
similar frequencies occur at the same time: the louder sound drowns out
the softer sound and makes it inaudible to the human ear. Auditory
masking is exploited in MPEG and Dolby Digital compression, whose
coding is based on the range of frequencies that human ears can detect.
Augmentation Channel – A transmission channel carrying information
that can augment that being transmitted in an ordinary transmission
channel such that a special television set that can receive both channels
can get a better picture than those available from the main channel alone.
Some ATV schemes require the augmentation channel to be adjacent to
the main channel. Others can theoretically accept a non-adjacent augmentation channel, though, at the time of this writing, the acceptability of
non-adjacent channels has not been proven to everyone’s satisfaction.
Authoring – The encoding of material from various sources, all the conversion processes of the encoded data, incorporating the required control
structures and other signals for playback sequences in the DVD-video
format. The final product of authoring is a DLT tape with DVD image files
in DDP format.
Authoring Platform – Computer hardware and software used to create
material for use on a multimedia system. The video quality of the authoring
platform has to be high enough that the playback equipment is the limiting
factor.
Authoring System – Software that helps developers design interactive
courseware easily, without the painstaking detail of computer programming.
Auto Assembly – a) Process of assembling an edited videotape on a
computerized editing system under the control of an edit decision list
(EDL). A computer automatically conforms source footage into an edited
video program under the direction of a list of preprogrammed edit instructions. b) An edit in which an off-line edit decision list is loaded into an
on-line edit computer and all the edits are assembled automatically with
little or no human intervention. c) The automatic assembling of an edited
video tape on a computerized editing system (controller), based on an
edit decision list (EDL).
Automatic Color Correction (ACC) – A circuit found in many consumer
viewing devices that attempts to compensate for the “Never Twice the
Same Color” broadcast problems. This circuit can go far beyond the Auto
Tint function in that it changes color saturation as well as type of color.
In most cases where ACC is present, it cannot be defeated. Adjusting the
color and tint controls, using the SMPTE Color Bar pattern and the blue
filter will result in a gross misadjustment of color level on the set. The color
level may have to be reduced by as much as half the value calibrated with
the SMPTE Color Bar pattern.
Automatic Focus – A feature on most consumer and industrial video
cameras and camcorders that automatically makes minor focal length
adjustments, thus freeing the videographer from focusing concerns.
Automatic Frequency Control (AFC) – Automatic frequency control.
Commonly used to lock onto and track a desired frequency.
Automatic Gain Control (AGC) – a) Circuitry used to ensure that output
signals are maintained at constant levels in the face of widely varying input
signal levels. AGC is typically used to maintain a constant video luminance
level by boosting weak (low light) picture signals electronically. Some
equipment includes gain controls that are switchable between automatic
and manual control. b) Electronic circuitry that compensates for either
audio or video input level changes by boosting or lowering incoming signals
to match a preset level. Using AGC, changing input levels can output at a
single constant setting. c) A feature on most video cameras and camcorders that, when engaged, boosts the signal to its optimum output level.
Automatic gain control (AGC) is available for video and, less frequently,
for audio use.
Automatic Iris – A feature on most video cameras and camcorders that
automatically creates the lens aperture that allows the imaging device to
perform under optimum conditions.
Automatic Level Control (ALC) – Circuitry used to automatically adjust
the audio recording level to compensate for variations in input volume.
Some equipment includes level controls that are switchable between automatic and manual control.
Automatic Picture Stop – The disc player will automatically take the
program from the play mode to a still frame mode according to information
programmed in the vertical interval of the disc’s video.
Automatic Retransmission Tool (ARQ) – One of the error correction
tools of the Protection Layer. This tool is used to correct errors detected
by the error detection tool by requesting retransmission of the corrupted
information. A bidirectional connection is necessary in order to use ARQ.
Automatic Route Selection – An important part of an automatic least-cost routing system.
Auto Iris (AI) – An automatic method of varying the size of a lens
aperture in response to changes in scene illumination.
Automatic Shut-Off – A device (usually a mechanical switch) incorporated into most tape recorders that automatically stops the machine when the
tape runs out or breaks.
Automated Measurement Set – Device that automatically performs
tests on audio and video signals and generates pass/fail results by testing
the signals against predetermined parameters.
Auto-Pan – A feature exclusive to AVC series switchers causing a positioned pattern to center itself as it grows in size.
Automatic – In recorders, refers to either electrical or mechanical automatic bias switching devices.
AutoSave – A feature that saves your work at intervals you specify.
Backups are placed in the attic folder.
Auto-Transition – a) The ability to electronically simulate a fader motion
over an operator specified duration. b) An automatic transition where the
motion of the switcher lever arm is electronically simulated when the AUTO
TRANS push-button is pressed. The duration of the transition in television
frames or seconds is indicated by the rate display LED.
AVR (Avid Video Resolution) – The compression level at which visual
media is stored by the Avid system. The system creates media in a
particular AVR using proprietary conversion algorithms to convert analog
video to digital form.
AUX (Auxiliary Track) – In a video editing system, a channel reserved for
connecting an external audio device, video device or both.
AVSS (Audio-Video Support System) – DVI system software for DOS.
It plays motion video and audio.
Auxiliary Bus – A bus which has the same video sources as the switcher
but whose crosspoints may be remotely controlled, independently of the
switcher console.
AWG (American Wire Gauge) – A wire diameter specification based on
the American standard. The smaller the AWG number, the larger the wire
diameter.
Auxiliary Channel (AUX) – In a video editing system, a channel reserved
for connection to an external audio and/or video device.
AWGN (Additive White Gaussian Noise) – This is an additive noise
source in which each element of the random noise vector is drawn
independently from a Gaussian distribution.
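As a rough illustration of the definition, the sketch below (Python; the helper name and the SNR parameterization are choices made for this example, not part of the definition) corrupts a signal with independently drawn zero-mean Gaussian noise samples scaled to a target signal-to-noise ratio:

```python
import math
import random

def awgn(signal, snr_db):
    """Add white Gaussian noise at a given signal-to-noise ratio (dB).
    Each noise sample is drawn independently from a zero-mean Gaussian."""
    sig_power = sum(x * x for x in signal) / len(signal)
    noise_power = sig_power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [x + random.gauss(0.0, sigma) for x in signal]

# 50 full cycles of a sine tone, then corrupt it at 20 dB SNR.
tone = [math.sin(2 * math.pi * n / 20) for n in range(1000)]
noisy = awgn(tone, snr_db=20)
```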
Available Bitrate (ABR) – An ATM service that allows users to access
unused network capacity.
AVI (Audio Video Interleaved) – The Video for Windows® file format
for digital video and audio. An AVI (.avi) file is a RIFF file format used
with applications that capture, edit and playback audio/video sequences.
AVI files contain multiple streams of different types of data. Most AVI
sequences will use both audio and video data streams. Specialized AVI
sequences might include control track as an additional data stream. See
Video for Windows®.
AVO – See Audio Visual Objects.
Axis – a) An imaginary line through the video image used as a reference
point for rotation and movement. The three axes are X (horizontal),
Y (vertical) and Z (depth). b) The component of an object that you use to
determine its two or three dimensional space and movement.
Azimuth – The angle of a tape head’s recording gap relative to the tape.
Avid Disk – The disk on the Macintosh platform that contains the operating system files. The computer needs operating system information in order
to run.
Azimuth Alignment – Alignment of the recording and reproducing gaps so
that their center lines lie parallel with each other and at right angles to the
direction of head/tape motion. Misalignment of the gaps causes a loss in
output at short wavelengths. For example, using a track width of 50 mils, a
misalignment of only 0.05 degrees will cause a 3 dB loss at a wavelength
of 0.1 mil.
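The quoted figure can be checked against the standard azimuth-loss expression, loss = 20·log10(x / sin x) with x = π·w·tan(θ)/λ (a sketch assuming this textbook formula; it is not stated in the definition itself):

```python
import math

def azimuth_loss_db(track_width, wavelength, misalign_deg):
    """Azimuth loss in dB: 20*log10(x / sin(x)) with
    x = pi * w * tan(theta) / wavelength, track width and
    wavelength in the same units."""
    x = math.pi * track_width * math.tan(math.radians(misalign_deg)) / wavelength
    return 20 * math.log10(x / math.sin(x))

# The example from the definition: 50 mil track, 0.1 mil wavelength,
# 0.05 degree misalignment -> roughly 3 dB.
print(round(azimuth_loss_db(50, 0.1, 0.05), 1))  # ~2.9
```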
Avid Projects Folder – The folder containing your projects.
Azimuth Loss – High frequency losses caused by head misalignment.
A-Vision – An ATV system proponent.
AVK (Audio Video Kernel) – DVI system software designed to play motion
video and audio across hardware and operating system environments.
B Bus – The bottom row of the two rows of video source select buttons
associated with a given mixed effect (M/E).
BAB (Binary Alpha Blocks) – Binary shapes coded into blocks 16 pixels
square, like the macroblock used for textures, are known as binary alpha
blocks (BABs). There are three classes of block in a binary mask; those
where all pixels are transparent (not part of the video object); those where
all pixels are opaque (part of the video object); and those where some
pixels are transparent and others opaque.
Baby Bell – A term commonly used for one of the seven regional holding
companies established when AT&T divested itself of its local telephone
companies. The Baby Bells are: Ameritech, Bell Atlantic, BellSouth, NYNEX,
Pacific Telesis, Southwestern Bell, and US West.
Back Focus – a) A physical repositioning of the CCD, the camera element
that translates light into electronic pulses for recording on videotape.
The effect is to lengthen or shorten the distance between the lens and
the CCD. b) A procedure of adjusting the physical position of the CCD chip/lens to achieve the correct focus for all focal length settings (especially critical with zoom lenses).
Background – May be thought of as the deepest layer of video in a given
picture. This video source is generally selected on a bus row, and buses
are frequently referred to as the background source.
Background Generator – A video generator that produces a solid-color
output which can be adjusted for hue, chroma, and luminance using the
controls in the MATTE/BKGD control group.
Background Transition – A transition between signals selected on the
Preset Background and Program Background buses, or between an “A”
bus and “B” bus on an M/E.
Background Video (BGD) – a) Video that forms a background scene into
which a key may be inserted. Background video comes from the Preset
Background and/or Program Background bus or from an M/E “A” or “B”
bus. b) A solid-color video output generated by the color Background
generator within the switcher for use as background video.
Backhaul – In television, the circuits (usually satellite or telephone) used
to transmit or “haul” a signal back from a remote site (such as a sports
stadium) to a network headquarters, TV station or other central location for
processing before being distributed.
Back Haul – Long distance digital data transport service such as Sonet,
SDH or Telecos.
Backplane – The circuit board that other boards in a system plug into.
Usually contains the system buses. Sometimes called a Motherboard.
Back Hauler – Company that provides back haul services.
Back-Timing – a) Timing of a program from the end to the beginning.
A reversal of clock-order so that remaining time or time left to the end of
the program can be easily seen. b) A method of calculating the IN point
by subtracting the duration from a known OUT point so that, for example,
music and video or film end on the same note.
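The calculation in (b) is simple frame arithmetic. A minimal sketch for non-drop 30 fps timecode (helper names are illustrative):

```python
def tc_to_frames(tc, fps=30):
    """Convert 'HH:MM:SS:FF' non-drop timecode to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=30):
    """Convert a frame count back to 'HH:MM:SS:FF'."""
    f = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def back_time(out_point, duration, fps=30):
    """IN point = OUT point minus edit duration."""
    return frames_to_tc(tc_to_frames(out_point, fps) - tc_to_frames(duration, fps), fps)

print(back_time("01:00:10:00", "00:00:30:15"))  # 00:59:39:15
```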
Back Light – a) A switch on some camcorders used to compensate exposure for situations where the brightest light is coming from behind the subject. b) A light source that illuminates a subject from behind, used to separate the subject from the background and give them depth and dimension.
Back Porch – a) The portion of the video signal that lies between the
trailing edge of the horizontal sync pulse and the start of the active picture
time. Burst is located on the back porch. b) The back porch of a horizontal
synchronizing pulse is that area from the uppermost tip of the positive-going right-hand edge of a sync pulse to the start of active video. The back
porch of a color video sync pulse includes the 8 to 9 cycles of reference
color burst. The back porch is at blanking level.
Backup – A duplicate copy of a file or disk in another location if the
original file or disk becomes corrupted. See also Attic Folder.
Backup Tape – A tape that contains a copy of a set of files and directories that are on your hard disk. A full backup tape contains a copy of all
files and directories, including IRIX, which are on your hard disk.
Back Porch Tilt – The slope of the back porch from its normal horizontal
position. Positive or negative refer respectively to upward or downward tilt
to the right.
Backward Compatibility – A new coding standard that is backward compatible with an existing coding standard if existing decoders (designed to
operate with the existing coding standard) are able to continue to operate
by decoding all or part of a bit stream produced according to the new coding standard.
Back Time – Calculation of a tape in-point by finding the out-point and
subtracting the duration of the edit.
Backward Motion Vector – A motion vector that is used for motion
compensation from a reference picture at a later time in display order.
Back Up – To copy a certain set of files and directories from your hard
disk to a tape or other non-volatile storage media.
Backward Prediction – Prediction from the future reference vop.
Backbone – Transmission and switching equipment that provides connections in distributed networks.
Backcoating – A conductive additional coating used on the reverse
side of magnetic tape to control mechanical handling and minimize static
electricity.
Baffles – Sound absorbing panels used to prevent sound waves from
entering or leaving a certain space.
Balanced Cable – In audio systems, typically refers to a specific cable
configuration that cancels induced noise.
Balanced Line – A line using two conductors to carry the signal, neither
of which is connected to ground.
Balanced Signal – a) A video signal is converted to a balanced signal to
enable it to be transmitted along a “twisted pair” cable. b) In CCTV this
refers to a type of video signal transmission through a twisted pair cable.
It is called balanced because the signal travels through both wires, thus
being equally exposed to the external interference, so by the time the
signal gets to the receiving end, the noise will be cancelled out at the input
of a differential buffer stage.
Bark – An audio measure in units of critical band rate. The Bark Scale is a
non-linear mapping of the frequency scale over the audio range. It closely
corresponds to the frequency selectivity of the human ear across the band.
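A widely used approximation of the Hz-to-Bark mapping is Zwicker and Terhardt's formula (shown here as an illustrative sketch; the glossary entry itself does not specify a formula):

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt's approximation of the Bark critical-band scale."""
    return 13 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500) ** 2)

# The audible range of roughly 20 Hz to 20 kHz spans about 24 critical bands.
print(round(hz_to_bark(20000), 1))
print(round(hz_to_bark(1000), 1))   # 1 kHz sits near 8.5 Bark
```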
Balun – A device used to match or transform an unbalanced coaxial cable
to a balanced twisted pair system.
Base – See Radix.
Banding – The presence of extraneous lines.
Bandpass Filter – Circuit that passes a selected range of frequencies.
Bandwidth – The range of frequencies over which signal amplitude
remains constant (within some limits) as it is passed through a system.
More specific definitions include: a) The difference between the upper and
lower limits of a frequency, often measured in megahertz (MHz). b) The
complete range of frequencies over which a circuit or electronic system
can function with less than a 3 dB signal loss. c) The information carrying
capability of a particular television channel. d) A measure of information
capacity in the frequency domain. The greater the bandwidth of a transmission channel, the more information it can carry. e) In television, bandwidth
is usually expressed in MHz.
Bandwidth Efficient – Phrase sometimes used to describe techniques to
carry the maximum amount of picture information within a prescribed
bandwidth; also, name applied to one MIT ATV proposal that would transmit
only the spatio-temporal resolution necessary for a particular scene. For
example, it would transmit no more than 24 frames per second when
showing a movie shot at that rate.
Bandwidth Limiting – A reduction in the effective bandwidth of a signal,
usually to facilitate recording, transmission, broadcast, display. etc. The
reduction is usually accomplished through the action of an algorithm,
which may involve simple lowpass filtering, more complex processing such
as interleaving or quadrature modulation, or complete resampling. The term
bandwidth limiting is normally applied in analog systems, although it also
has a comparable meaning in digital systems.
Bandwidth Segmented Orthogonal Frequency Division Multiplexing
(BST-OFDM) – Attempts to improve on COFDM by modulating some OFDM
carriers differently from others within the same multiplex. A given transmission channel may therefore be “segmented”, with different segments being
modulated differently.
Bandwidth, Monitor – Monitor bandwidth is proportional to the speed at
which a monitor must be turned on and off to illuminate each pixel in a
complete frame and is proportional to the total number of pixels displayed.
For example, a monitor with a resolution of 1000 x 1000 pixels which is
refreshed at 60 times a second, requires a minimum theoretical bandwidth
of over 45 MHz. Once overhead is considered for scanning and small spot
size, the bandwidth could be as much as 100 MHz.
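The example figure can be reproduced with a common rule of thumb, assuming one signal cycle can represent two adjacent pixels and that blanking inflates the pixel clock by roughly 1.5x (both assumptions are illustrative, not stated in the entry):

```python
def monitor_bandwidth_mhz(h_pixels, v_pixels, refresh_hz, blanking_factor=1.5):
    """Rule-of-thumb monitor bandwidth estimate. Assumes one signal cycle
    per two pixels and a ~1.5x pixel-clock inflation for blanking; both
    factors are illustrative choices, not from the glossary."""
    pixel_rate = h_pixels * v_pixels * refresh_hz   # pixels per second
    return pixel_rate * blanking_factor / 2 / 1e6   # MHz

# 1000 x 1000 pixels refreshed 60 times per second.
print(monitor_bandwidth_mhz(1000, 1000, 60))  # 45.0
```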
BAP (Body Animation Parameters) – Set of parameters used to define
and to animate body objects. See also BDP.
Bar Code – A pattern of vertical stripes of varying width and spacing that
encodes information. Bar codes can be used to encode timecode on film.
Barn Doors – a) Two- or four-leafed metal blinders mounted onto lights
to control brightness or direction. b) A term used in television production
to describe the effect that occurs when a 4:3 image is viewed on a 16:9
screen. Viewers see black bars (barn doors) on the sides of the screen.
Base Bandwidth – The amount of bandwidth required by an unmodulated
signal, such as video or audio. In general, the higher the quality of the
signal, the greater the base bandwidth it requires.
Base Board – Printed circuit board (and mounted components such as
integrated circuits, etc.) that is inserted into the computer’s expansion slot.
A module board is often attached to the base board.
Base Film – For magnetic tapes, the plastic substrate that supports the
coating. The base film of most precision magnetic tape is made of polyester.
Base Film Thickness – The thickness of the polyester material used for
magnetic tape, varying from 0.24 mil in C120 cassette tape to 1.5 mil for
audio mastering tape and instrumentation tape.
Base Layer – The minimum subset of a scalable hierarchy that can be
decoded.
Baseband – a) Refers to the composite video signal as it exists before
modulating the picture carrier. Not modulated. Composite video distributed
throughout a studio and used for recording is at baseband. b) Video and
audio signals are considered to be “prime”, or baseband. Video and audio
can be broken down into more basic elements, but those elements no
longer constitute the desired signal as a single element. Baseband video
and audio signals are often AM or FM modulated onto a carrier frequency,
so that more than one set of “prime” signals can be transmitted or recorded at the same time. c) In DTV, baseband also may refer to the basic
(unmodulated) MPEG stream.
Baseband Signal – A baseband signal is an analog or digital signal in its
original form prior to modulation or after demodulation.
Baseline IRD – An IRD (Integrated Receiver Decoder) which provides the
minimum functionality to decode transmitted bitstreams. It is not required
to have the ability to decode Partial Transport Streams (TS) as may be
received from a digital interface connected to digital bitstream storage
device such as a digital VCR.
Baseline Restorer – An information processing unit intended to remove
the DC and low order frequency distortion terms that result from use of
record/reproduce transfer function which cannot pass DC in conjunction
with a binary code that requires low frequency response to DC (i.e., zero
frequency) for accurate recovery of such a code.
Baseline Shift – A form of low-frequency distortion resulting in a shift in
the DC level of the signal.
BASIC – An easy-to-learn, easy-to-use language, which is available on
most microcomputer systems.
Basic Cable Service – Package of programming on cable systems
eligible for regulation by local franchising authorities under 1992 Cable
Act, including all local broadcast signals and PEG (public, educational
and government) access channels.
Basic Rate – ISDN’s basic rate interface (BRI) consists of two B-channels
(128 kbps) and a D-channel (data) of 16 kbps.
BAT (Body Animation Table) – A downloadable function mapping from
incoming Body Animation Parameters (BAPs) to body surface geometry that
provides a combination of BAPs for controlling body surface geometry.
BAT (Bouquet Association Table) – a) The BAT provides information
regarding bouquets (collections of services marketed as a single entity).
b) A table describing a bouquet of programs offered by a broadcaster.
DVB only.
Batch Capture – a) Combining your video capture card with deck control
so that you can define your in and out points first, then capture only the
footage you want. b) The automated process of capturing clips in a list.
See Batch List.
Batch Digitize – The automated process in which groups of clips,
sequences, or both are digitized (recorded digitally).
BCD (Binary Coded Decimal) – A 4-bit representation of the 10 decimal
digits “0” through “9”. Six of the sixteen possible codes are unused. Two
BCD digits are usually packed into one byte.
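Packing two BCD digits into a byte is a pair of shift-and-mask operations (a minimal sketch; function names are invented):

```python
def pack_bcd(d1, d2):
    """Pack two decimal digits into one byte, one per 4-bit nibble."""
    assert 0 <= d1 <= 9 and 0 <= d2 <= 9
    return (d1 << 4) | d2

def unpack_bcd(byte):
    """Recover the two decimal digits from a packed-BCD byte."""
    return byte >> 4, byte & 0x0F

b = pack_bcd(4, 2)
print(hex(b))          # 0x42
print(unpack_bcd(b))   # (4, 2)
```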
BCDM (Broadcast Cable Digital Multiplexer) – Provides off-line multiplexing of existing transport streams and TSMF information in order to produce ISDB-C streams (TSMF streams). It can also be used to demultiplex
existing TSMF streams and enables the TSMF information to be edited.
B-Channel – A “bearer” channel in ISDN user-to-network interfaces carrying 64 kbps of digitized voice, video or data.
BDP (Body Definition Parameters) – Set of parameters used to define
and to animate body objects. See also BAP.
BDR – See Border.
Beam – The directed flow of bombarding electrons in a TV picture tube.
Beam-Splitter Prism – The optical block in a video camera onto which
three CCD sensors are mounted. The optics split the red, green and blue
wavelengths of light for the camera.
Bearding – An overloading condition in which highly saturated or white
areas of a television picture appear to flow irregularly into darker areas.
Batch List – A list of clips to be batch captured.
Beat Frequency – The difference between color subcarrier frequency and
sound subcarrier frequency, expressed in Hz.
Batch Record – The automated process in which groups of clips,
sequences, or both are digitized (recorded digitally).
Beats – Variation in the amplitude of a mixture of two signals of close
frequency as a result of constructive and destructive interference.
Baud – A unit of signaling speed equal to the number of signal events per
second. Baud is equivalent to bit per second in cases where each signal
event represents exactly one bit. Often the term baud rate is used informally to mean baud, referring to the specified maximum rate of data
transmission along an interconnection. Typically, the baud settings of two
devices must match if the devices are to communicate with each other.
Bel – A measure of voltage, current or power gain. One bel is defined as a
tenfold increase in power. If an amplifier increases a signal’s power by 10
times, its power gain is 1 bel or 10 decibels (dB). If power is increased by
100 times, the power gain is 2 bels or 20 decibels. 3 dB is considered a
doubling of power.
Baud Rate – a) The speed (calculated as bits per second) at which the
computer sends information to a serial device, such as a modem or terminal. b) Measure of data flow: the number of signal elements per second.
When each element carries one bit, the baud rate is numerically equal to
bits per second (BPS). For example, teletypes transmit at 110 baud. Each
character is 11 bits, and the TTY transmits 10 characters per second.
c) The rate at which data is transmitted. The baud rates must match if two
devices are to communicate with each other. d) The number of electrical
oscillations that occur each second. Baud was the prevalent measure for
bandwidth or data transmission capacity, but bps (bits per second) is used
most often now and is more accurate.
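The teletype example in (b) works out as follows (the 11-bit framing breakdown in the comment, a start bit plus data, parity and stop bits, is a common convention rather than something the entry specifies):

```python
# 110 baud with 11 bits per character (e.g., 1 start bit, 7 data bits,
# 1 parity bit, 2 stop bits) yields 10 characters per second.
baud = 110
bits_per_char = 11
chars_per_second = baud / bits_per_char
print(chars_per_second)  # 10.0
```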
BB – See Baseband.
BBC – See British Broadcasting Corporation.
BCH (Broadcast Channel) – The broadcast channel is a downlink UMTS
(Universal Mobile Telecommunication System) transport channel that is
used to broadcast cell and system information.
BCA (Burst Cutting Area) – A circular section near the center of a DVD
disc where ID codes and manufacturer information can be inscribed in
bar-code format.
Bell Labs – Originally Bell Telephone Laboratories, the research arm of
the Bell System. When AT&T divested itself of its regional telephone companies, Bell Labs was split. One division, still called Bell Labs, belongs
to AT&T and is a proponent of a particular ATV system (SLSC). The other
division, called Bellcore for short, belongs to the Bell regional holding
companies (RHC) and is, among many other R&D projects, investigating
mechanisms for reducing the bit rate of digital video transmission, which
may be applicable to ATV. Bellcore has formed a joint venture with NHK
for HDTV research.
Bellcore – See Bell Labs.
Benchmark – Method used to measure performance of a computer in a
well-defined situation.
Bento – A registered trademark of Apple Computer, Inc. A general container format and software API (application programming interface). Bento is
used by OMF interchange as a storage and access system for the information in an OMF interchange file.
BEP (Bit Error Probability)
BER – See Bit Error Rate.
Best Light – A telecine transfer performed with optimum settings of the
color grade controls but without precise scene-by-scene color correction.
Betacam SP – A superior-performance version of Betacam that uses
metal particle tape and a wider bandwidth recording system. The interconnect standard is the same as Betacam and there is also limited tape interchangeability with standard Betacam.
Betacam SX – A digital tape recording format developed by Sony which
used a constrained version of MPEG-2 compression at the 4:2:2 Profile,
Main Level (422P@ML) using 1/2-inch tape cassettes.
Betacam, Betacam Format – A camera/recorder system and related
equipment originally developed by Sony; the name may also be used for
just the recorder or for the interconnect format. Betacam uses a version
of the (Y, R-Y, B-Y) component set.
Betamax – Consumer videocassette record/playback tape format using
half-inch-wide magnetic tape. Developed by Sony, Betamax was the first
home VCR format.
Bezel – The frame that covers the edge of the picture tube in some TV
sets and can therefore hide edge information transmitted in an ATV system
(such as ACTV) not meant for the viewer to see. See also Overscanning.
Bézier – A curve that connects the vertices of a polygon; each vertex has
two tangents, or handles, which you can use to adjust the slope of the
adjacent curve or side of a polygon.
Bézier Spline – A type of smooth curve or surface bound to its control
points, always passing through its first and last control point.
B-Frame (Bidirectional Frame) – The frame in an MPEG sequence
created by comparing the difference between the current frame and the
frames before and after it.
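A toy sketch of the idea (heavily simplified: real MPEG codecs average motion-compensated blocks and then encode only the residual; the rounding convention shown is the common (a + b + 1) >> 1):

```python
def bidirectional_predict(past_block, future_block):
    """Toy B-frame prediction: average the past and future reference
    blocks sample by sample, with round-to-nearest via (a + b + 1) // 2.
    A real codec motion-compensates each block first and then codes
    only the difference from this prediction."""
    return [(p + f + 1) // 2 for p, f in zip(past_block, future_block)]

past = [100, 102, 104, 106]
future = [104, 106, 108, 110]
print(bidirectional_predict(past, future))  # [102, 104, 106, 108]
```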
BG (Also BKG and BKGND) – See Background.
BH Loop Tracer – See BH Meter.
BH Meter – A device for measuring the intrinsic hysteresis loop of a sample of magnetic material. Usually, the sample is magnetized in a 60 Hz field
supplied by a solenoid and the intrinsic flux is detected by integrating the
emf produced in an opposing pair of search coils, one of which surrounds
the sample. The hysteresis loop may be displayed on an oscilloscope by
feeding the X and Y plates with voltages proportional to the magnetizing
coil current and the integrated search coil emf respectively.
Bi O-L – Bi-Phase Level (Code). Also called Manchester (Code).
Bias – a) A steady-state signal applied to the tape (usually a high-frequency oscillation of 50,000 to 100,000 Hz or more) to minimize distortion and noise and increase frequency response and efficiency in recording.
Every tape formulation has slightly different bias requirements. b) Current
or voltage applied to a circuit to set a reference operating level for proper
circuit performance, such as the high frequency bias current applied to an
audio recording head to improve linear performance and reduce distortion.
Bias Adj. – The control which regulates the amount of bias mixed in with
the signal to be recorded.
Bias Cal. – A control which calibrates the VU meter on a recorder so it
reads 0 VU in the bias position of the output selector switch when bias is
properly set.
Bias Switch – Switch used on cassette recorder to change the amount
of bias current required for different types of tapes.
Bicubic Surface – A surface that you can add to a layer with four control
handles that you can use for four-point tracking.
Bid Sheet – A written estimate, or quote, for video or production services.
Bidirectional – a) Indicates that signal flow may be in either direction.
Common bidirectional buses are three-state or open collector TTL. b) In
open reel or cassette recorders, the ability to play (and, in some cases,
record) both stereo track pairs on a tape by reversing the tape’s direction
of motion without removing and replacing the tape reels or cassette.
Bidirectional Prediction – A form of compression in which the codec
uses information not only from frames that have already been decompressed, but also from frames yet to come. The codec looks in two directions: ahead as well as back. This helps avoid large spikes in data rate
caused by scene changes or fast movement, improving image quality.
See Forward Prediction.
BIFS (Binary Format for Scenes) – a) Describes the spatio-temporal arrangements of the objects in the scene. b) BIFS provides a complete framework for the presentation engine of MPEG-4 terminals. BIFS makes it possible to mix various MPEG-4 media together with 2D and 3D graphics, handle interactivity, and deal with local or remote changes of the scene over time. BIFS has been designed as an extension of the VRML 2.0 specification in a binary form.
Big Endian – A process which starts with the high-order byte and ends with the low-order byte. Motorola 68000 processors used the big endian format.
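The byte-order distinction can be illustrated with Python's standard struct module (an illustrative sketch; the value 0x12345678 is an arbitrary example, not from the glossary):

```python
import struct

# Pack the 32-bit value 0x12345678 in both byte orders.
value = 0x12345678
big = struct.pack(">I", value)     # big endian: high-order byte first
little = struct.pack("<I", value)  # little endian: low-order byte first

assert big == b"\x12\x34\x56\x78"
assert little == b"\x78\x56\x34\x12"
```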
Bi-Level Keyer – A keyer where two levels of hole cutting are independently adjustable. The top level, or insert, cuts a hole and fills with the key
video. In a luminance key the second level forms the border of the key, and
in a chroma key the second level forms the shadow. The second level has
adjustable luminance allowing borders to be varied from black to white and
shadows to be varied in density. This is the type of keying provided on all
Ampex switchers.
Bilinear Surface – A surface that you can add to a layer with more than
four control handles for creating non-linear effects.
BIM (Broadcast Interface Module)
Bin – A database in which master clips, subclips, effects and sequences
are organized for a project. Bins provide database functions to simplify
organizing and manipulating material for recording, digitizing and editing.
Binary – A base-2 numbering system using the digits 0 and 1 (as opposed to the 10 digits, 0-9, of the decimal system). In computer systems, the binary digits are represented by two different voltages or currents, one corresponding to 0 and the other corresponding to 1. All computer programs are executed in binary form. Binary representation requires a greater number of digits than the base 10 decimal system more commonly used. For example, the base 10 number 254 is 11111110 in binary. The result of a binary multiplication contains as many digits as the sum of the digit counts of the original numbers. So,
in binary: 10101111 x 11010100 = 1001000011101100
in decimal: 175 x 212 = 37,100
From right to left, the digits represent 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768. Each digit is known as a bit. This example multiplies two 8-bit numbers to produce a 16-bit result, a very common process in digital television equipment.
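The multiplication above can be checked directly (a quick Python sketch; note the product is the 16-bit pattern 1001000011101100):

```python
# Verify the glossary's binary multiplication example.
a = 0b10101111          # 175 in decimal
b = 0b11010100          # 212 in decimal
product = a * b

assert product == 37100
assert format(product, "b") == "1001000011101100"  # 16-bit result
```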
Binary File – An executable file that contains a relocatable machine code program; in other words, a program ready to be run.
Binary Search – Technique in which the search interval is divided by two at every iteration.
Binary Shape – A bit map that indicates the shape of a video object plane (VOP).
Binaural Effect – The human ability to localize the direction from which a sound comes due to the fact that people have two ears.
Binder – On recording tape, the binder is usually composed of organic resins used to bond the oxide particles to the base material. The actual composition of the binder is considered proprietary information by each magnetic tape manufacturer. The binder is required to be flexible and still maintain the ability to resist flaking or shedding binder material during extended wear passes.
Bit Bucket – Any device capable of storing digital data, whether it be video, audio or other types of data.
Bit Budget – The total amount of bits available on the media being used. In DVD, the bit budget of a single sided/single layer DVD-5 disc is actually 4.7 GB.
Bit Density – See Packing Density.
Bit Depth – The number of levels that a pixel might have, such as 256 with an 8-bit depth or 1024 with a 10-bit depth.
Bit Error – The incorrect interpretation of a binary bit by a message processing unit.
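The interval-halving described under Binary Search can be sketched as follows (illustrative Python, not part of the original glossary):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent.

    The search interval [lo, hi) is divided by two at every iteration.
    """
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid       # discard the upper half
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
```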
BIOP (Broadcast Inter-Object Request Broker Protocol) – Defines a
way of exchanging information in a broadcast carousel environment about
an object, including a directory and broadcast file systems and information
on the object itself. A BIOP message contains an internationally agreed
method to exchange information about an object being broadcast in a
carousel. The BIOP may also indicate how to use the object, including
possibly providing the application software.
BIOS (Basic Input/Output System) – Settings for system components,
peripherals, etc. This information is stored in a special battery-powered
memory and is usually accessible for changes at computer startup.
Bi-Phase – Electrical pulses from the tachometer of a telecine, used to
update the film footage encoder for each new frame of film being transferred.
Bi-Phase Sync – Bi-phase is an older synchronization technology used in
the film industry. Typically, the clock was derived from a box that hung off
of large film mag recorders. This box emitted a pulse that provided sync.
Working with pulses alone, bi-phase sync did not provide location information, making it a rather limited solution.
Bipolar – A signal containing both positive-going and negative-going
amplitude. May also contain a zero amplitude state.
Birefringence – a) An optical phenomenon where light is transmitted at
slightly different speeds depending on the angle of incidence. b) Light
scattering due to different refractions created by impurities, defects, or
stresses within the media substrate.
B-ISDN (Broadband Integrated Services Digital Network) – A mechanism by means of which telephone companies will be able to carry television signals (and, probably, ATV signals) digitally, probably via optical fibers.
ISDN systems are considered broadband if they carry at least 45 Mbps, the
DS3 rate, currently used for delivery of broadcast television signals. If and
when B-ISDN reaches homes it will be a powerful competitor to other delivery mechanisms, potentially able to perform a computer-television function.
Bit (Binary Digit) – a) A single digit in a binary number. b) A binary
representation of 1 or 0. One of the quantized levels of a pixel. c) An
instruction in a data transmission, usually part of a word (byte) with
high status = 1, and low status = 0. d) An eight-bit byte can define 256
brightness or color values.
Bit Error Rate (BER) – a) This term is used in High Density Digital Recording (HDDR), or High Density Recording (HDR), or other such names and refers to the number of errors a specific magnetic tape may contain, and is expressed in errors per data bits, such as one in 10^6, or one error in one million data bits. b) The average probability of a digital recording system reproducing a bit in error. Note: IEEE 100 defines error rate as “the ratio of the number of characters of a message incorrectly received to the number of characters of the message received”. Bit error rates typical of current digital tape recording are: digital video tape, about 1 in 10^6; digital instrumentation tape, about 1 in 10^9; digital computer tape, about 1 in 10^12.
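Expressed as a ratio, the computation is simply errors divided by bits transferred (a minimal sketch for illustration):

```python
def bit_error_rate(error_bits, total_bits):
    """Average probability of a bit being reproduced in error."""
    return error_bits / total_bits

# One error in one million data bits, as in the definition above:
assert bit_error_rate(1, 1_000_000) == 1e-6
```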
Bit Packing Density – The number of bits recorded per track length unit, usually expressed in terms of kilobits per inch (KBPI) or bits per millimeter.
Bit Parallel – Byte-wise transmission of digital video down a multi-conductor cable where each pair of wires carries a single bit. This standard is covered under SMPTE 125M, EBU 3267-E and ITU-R BT.656 (CCIR 656).
Bit Plane – Video RAM containing formatted graphics data for VGA and
SVGA systems where four or more bit planes can be addressed in parallel.
A bit plane is sometimes called a map.
Bit Rate – a) The rate at which the compressed bit stream is delivered
from the storage medium to the input of a decoder. The digital equivalent
of bandwidth. b) The speed at which bits are transmitted, usually
expressed in bits per second (IEEE 100). Video information, in a digitized
image for example, is transferred, recorded, and reproduced through the
production process at some rate (bits/s) appropriate to the nature and
capabilities of the origination, the channel, and the receptor. c) The amount
of data transported in a given amount of time, usually defined in Mega
(million) bits per second (Mbps). Bit rate is one means used to define the
amount of compression used on a video signal. Uncompressed D1 has a
bit rate of 270 Mbps. MPEG-1 has a bit rate of 1.2 Mbps.
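The 270 Mbps figure for uncompressed D1 follows from the Rec 601 4:2:2 sampling structure (luma at 13.5 MHz plus two chroma channels at 6.75 MHz each, 10 bits per sample); a quick check, assuming those standard rates:

```python
# Rec 601 4:2:2 sampling: Y at 13.5 MHz, Cb and Cr at 6.75 MHz each.
luma_rate = 13_500_000          # samples per second
chroma_rate = 6_750_000         # samples per second, per chroma channel
bits_per_sample = 10

bit_rate = (luma_rate + 2 * chroma_rate) * bits_per_sample
assert bit_rate == 270_000_000  # 270 Mbps
```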
Bit Rate Reduction – a) Schemes for compressing high bit rate signals
into channels with much lower bit rates. b) A reduction in the real-time
transmission rate in digital format, usually to facilitate recording, transmission, broadcast, display, etc., or even to comply with fixed limitations.
Various algorithms appropriate for video signals may be employed from
arbitrary resampling to more complex processing with the objective of
reducing the transmission of redundant information in the image and
possibly eliminating image content that will not be obvious in the final
specified display. Bit rate reduction is also appropriate and employed in
audio recording, either associated with video or standing alone.
Bit Rate, Real-Time – When the information is obtained from a continuously varying source, and the information is being transmitted continuously
without buffering, it is exchanged at the real-time bit rate. Within the production sequence, it is actually only the image capture (i.e., camera and its
recording system) that is required to be in real-time. The balance of production, including post-production operations, can be at a fraction of realtime if a more desirable result is achieved. (Subsequent to production, the
final display must of course also be in real-time.)
Bit Rate, Recording – The bit rate required of a recorder mated to a
video camera or functioning in the origination, post-production, or distribution is generally greater than the concurrent bit rate, real-time because of
the error correction designed into the recording format. This “overhead”
may increase the bit rate, sometimes by as much as one-third, and frequently sets a practical limit in systems design. Examples in the following
table are intended only to clarify the definition. They indicate the range of
some systems currently considered and a first estimate of their challenges.
Probable Recording Rate, Mbits/s – [table garbled in source; it listed probable gross recording rates for Rec 601-2 and Rec 709 component systems at 8- and 10-bit sampling; only the 227 Mbits/s figure (note 4) survives]
(1) All systems postulated employ field rates of 60 or 59.94 fields/s, component encoding and 2:1 interlace. Progressive scan systems at the same frame rates would have double these bit rates.
(2) Estimates for gross data recording rates assume the same ratio of overhead to data bits in component format recording as that in the D-1 format.
(3) CCIR Recommendations 601-2 and 709 document 8-bit and 10-bit sampling, based upon sampling frequencies that are integral multiples of 2.25 MHz (i.e., 13.5 MHz for Rec 601-2).
(4) The D-1 standard recording format is defined by SMPTE 224M and its related SMPTE Recommended Practices and Engineering Guidelines.
Bit Serial – Bit-wise transmission of digital video down a single conductor
such as coaxial cable. May also be sent through fiber optics. This standard
is covered under ITU-R BT.656 (CCIR 656).
Bit Slip – The condition in a message processing unit where the bit rate
clock has gained (or lost) more than 180 degrees phasing with respect to
synchronism with the binary message bits.
Bit Specifications – Number of colors or levels of gray that can be displayed at one time. Controlled by the amount of memory in the computer’s graphics controller card. An 8-bit controller can display 256 colors or levels of gray; a 16-bit controller, 64,000 colors; a 24-bit controller, 16.8 million colors.
Bit Stream (also Bitstream) – a) A continuous series of bits transmitted
on a line. b) A binary signal without regard to grouping according to character.
Bit Synchronizer – An information processing unit intended to extract the
binary message and associated bit rate clock included in a PCM signal.
BitBLT (Bit Block Transfer) – The transfer of blocks of screen data
(rather than a byte at a time) from one place to another in memory. The
microprocessor tells the graphic chip what block to move and where to put
it. The graphics chip carries out this operation freeing the microprocessor
to work on the next operation.
BITC – See Burn In Time Code.
Bitmap (BMP) – a) A bitmap is the digital representation of an image, in
terms of pixel values. Storing an image as a bitmap is the most space-consumptive method of storing an image. b) An image consisting of an array
of pixels that can be displayed on a computer monitor. c) A pixel-by-pixel
description of an image. Each pixel is a separate element. Also a computer
file format.
Bitmapped Graphics – Images, which are created with matrices of pixels,
or dots. Also called Raster Graphics.
Bits Per Pixel (BPP) – The number of bits used to represent the color
information of a pixel. One bit allows only two values (black and white), two
bits allows four values, and so on. Also called color depth or bit depth.
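The number of representable values doubles with each added bit, i.e. 2^n for n bits per pixel (illustrative sketch):

```python
def color_values(bits_per_pixel):
    """Number of distinct values a pixel of the given depth can hold."""
    return 2 ** bits_per_pixel

assert color_values(1) == 2            # black and white only
assert color_values(2) == 4
assert color_values(8) == 256
assert color_values(24) == 16_777_216  # the "16.8 million colors" figure
```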
Bit-Slice – Method that implements n-bits of the CPU on each of several
chips, or slices, usually n=4. A bit-slice processor chip implements a complete data path across the CPU. A 32-bit processor could be constructed
by using eight 4-bit CPU slices.
Bitstream Recorder – A device capable of recording a stream of digital
data but not necessarily able to process the data.
Black (BLK) – A black video output generated within the switcher and
selected by the Black push-buttons on the crosspoint buses and by the
Fade to Black push-button in the downstream mixer.
Black A Tape – The process of recording a black burst signal across the
entire length of a tape. Often done before recording edited footage on the
tape to give the tape clean, continuous video and sync and to ensure there
is no video already on the tape.
Black and Code – Video black, timecode and control track that are prerecorded onto videotape stock. Tapes with black and code are referred to as
striped or blacked tapes.
Bit Slippage – a) Occurs when word framing is lost in a serial signal so
that the relative value of a bit is incorrect. This is generally reset at the
next serial signal, TRS-ID for composite and EAV/SAV for component.
b) The erroneous reading of a serial bit stream when the recovered clock
phase drifts enough to miss a bit. c) A phenomenon which occurs in parallel digital data buses when one or more bits gets out of time in relation to
the rest. The result is erroneous data. Differing cable lengths is the most
common cause.
Black and White – Monochrome or luminance information. Monochrome means one color. In the color television system the black and white portion of the picture has to be one “color” gray, D6500, 6500°K as defined by x and y values in the 1931 CIE color coordinate system. The black and white signal in the S or Component video path is separate from the color information.
Black Box – A term used to describe a piece of equipment dedicated to
one specific function, usually involving a form of digital video magic.
Black Burst – a) Black burst is a composite video signal consisting of all
horizontal and vertical synchronization information, burst and in North
America NTSC, setup. Also called “color black”, “house sync” or “house
black”. Typically used as the house reference synchronization signal in
television facilities. b) A composite color video signal. The signal has
composite sync, reference burst and a black video signal, which is usually
at a level of 7.5 IRE (50 mV) above the blanking level.
Black Compression – a) The reduction in gain applied to a picture signal
at those levels corresponding to dark areas in a picture with respect to the
gain at that level corresponding to the midrange light value in the picture.
b) Amplitude compression of the signals corresponding to the black
regions of the picture, thus modifying the tonal gradient.
Black Edits – a) A video source with no image. b) A special source you
can fade into, out of, or use for other effects.
Black Level – a) This voltage defines the picture’s black level. Video that dips below this level, such as sync pulses, is called blacker than black.
b) Strictly interpreted, denotes the light level at which a video signal representing picture black is reproduced on your TV screen. In terms of light
output from a TV set, black areas of the picture should be represented by
an absence of light. Something that is black or below black in the video
signal shouldn’t produce any light from the display. c) Some TV sets actually use Black Level as a control name. It is a far better description of the
function than the most commonly found name for it, Brightness. d) A part
of the video signal, close to the sync level, but slightly above it (usually
20 mV – 50 mV) in order to be distinguished from the blanking level. It
electronically represents the black part of an image, whereas the white
part is equivalent to 0.7 V from the sync level.
Black Level Setup – Refer to the Setup discussion.
Black Level, Monitor – The luminance produced on the monitor display
by a signal at reference black level. Since the monitor brightness control
should be adjusted to align CRT beam cutoff with reference black level
signal, this provides zero excitation light from the CRT (only room ambient
light reflected from the CRT faceplate). Monitor black level is normally set
by use of a pluge signal to adjust CRT beam cutoff subjectively.
Black Level, Reference – The video signal level which is intended to
produce monitor black level in the reproduced image. In systems with a
setup level, i.e., the 7.5 IRE setup in a 525/59.94/2:1/NTSC composite
video documented by ANSI/EIA TIA 250-C and SMPTE 170M, reference
black is at the setup level. In systems with no setup level, reference black
is at blanking level.
Black Peak – The maximum excursion of the picture signal in the black direction at the time of observation.
Black Point – The luminance value in a video image that you set to be
equal to reference black when making a color adjustment. Compare with
White Point.
Black Stripe – See Striping.
Black to White Excursion – The excursion from reference black to reference white: conventionally 92.5 IRE (37/56 V or 660 mV) in System M and EIA-343A; 100 IRE (or 700 mV) in other analog systems; and codes 16-235 in component digital systems.
Black, Absolute – a) Optical black is no light. An absolute black can only be produced in a scene via a light-trap, “a black hole”. b) A capped lens on the camera is the equivalent of an absolute scene black and should produce a reference black level video signal from a properly adjusted studio camera.
Black, Projection – The luminance level in a projected image that is correlated with subjective scene black has two sources: in photographic and other light-modulating systems there will be luminance from whatever transmitted light passes through the maximum modulating density representing scene black; additional luminance may be produced by non-image-forming light (flare, room illumination, stray light, etc.).
Black, Subjective, Monitor – The luminance level which produces the
perception of black on the monitor display. This subject has not been
explored extensively, but Bartleson and Novick present evidence that it is
relative to the high-light or white level, such that the luminance ratio to
produce subjective black on a monitor is higher than that in a televised
scene. They propose a luminance ratio of 100:1 for subjective white to
black on TV monitors in a control room “dimly lighted”. This luminance
ratio specification has been formalized in SMPTE RP 166.
Black, Subjective, Scene – That fraction of the high-light luminance
required in a scene reproduced on a television display to convey the perception of black. The luminance of subjective black on a CRT has been
studied by Lowry and Jarvis, who measured luminances on the original
scenes, and compared the subjective appearance on a CRT display, as
evaluated by viewing audiences. They found that the perception of black
depends on a great many factors both in the reproduced scene and in the
viewing conditions such as average scene reflection, luminance of areas
adjacent to the display, etc. In most situations, luminance levels of 1/40 to
1/60 of the highlight luminance produce the perception of black even
though the scene luminance range may reach 200:1 or more. It follows
then that a scene element that is perceived as black may not necessarily
be at reference black level in a video signal.
Blacked Tapes – See Black and Code.
Blacker-than-Black – The amplitude region of the composite video signal
below reference black level in the direction of the synchronizing pulses.
Blackout – The fading of a video signal to black to indicate, for example,
the end of a show.
Blanket Fee – Typically used for musical selections. One who pays a
blanket fee has permission to use the musical selection the fee covers in
an unlimited number of released projects and videos.
Blanking – A video signal level below which no light should be emitted
from a TV screen (the level at which the screen is blanked); also, that
portion of the time that a video signal is transmitted when it is at or below
blanking. These time portions can be divided into a horizontal blanking
interval (HBI) and a vertical blanking interval (VBI). Since no picture information is carried in either blanking interval in an NTSC signal, various ATV
schemes propose using them for carrying augmentation information, such
as higher quality sound or widescreen panel information. Potentially conflicting with those schemes are other schemes that already use the blanking intervals for descrambling codes, test transmission, time code, and
test and reference signals. Reducing the duration of the blanking intervals
to allow more picture information to be transmitted potentially conflicts
with the demands of the scanning circuitry of older TV sets. Sometimes
this conflict is said to be resolved by bezel coverage and overscanning.
Blanking (Picture) – The portion of the composite video signal whose
instantaneous amplitude makes the vertical and horizontal retrace invisible.
Blanking Adjustment – A technique proposed in some ATV schemes to
increase the VBI (and, sometimes, decrease the HBI) to deal with wide
aspect ratios. See also Burn.
Blanking Interval – The horizontal blanking interval is the time between
the end of one horizontal scanning line and the beginning of the next. The
vertical blanking interval is the time between the end of one video field
and the beginning of the next. Blanking occurs when a monitor’s electron
beam is positioned to start a new line or a new field. The blanking interval
is used to instantaneously reduce the beam’s amplitude so that the return
trace is invisible.
Blanking Level – a) Refers to the 0 IRE level which exists before and
after horizontal sync and during the vertical interval. This voltage level
allows the electron beam to be turned off while it is being repositioned
(retracing) across the face of the CRT into the position needed to start
tracing the next visible line. b) The level of the front and back porches of
the composite video signal. c) The level of a composite picture signal
which separates the range containing picture information from the range
containing synchronizing information. Note: This term should be used for
controls performing this function (IEEE 100). d) The beginning of the video
signal information in the signal’s waveform. It resides at a reference point
taken as 0 V, which is 300 mV above the lowest part of the sync pulses.
Also known as pedestal, the level of a video signal that separates the
range that contains the picture information from the range that contains
the synchronizing information.
Blanking Panel – A piece of black plastic attached to the front plastic
panel of the Indigo chassis that covers either the top or middle drive slot.
The blanking panel is removed after installing a drive in the slot that it was covering.
Blanking Processor (Sync Proc) – A circuit on the video module which
strips blanking sync and burst from the program output of the switcher and
replaces it with blanking and sync from a reference source. This process
ensures that sync and blanking do not contain any unwanted timing shifts,
and the record VPR is always receiving constant relationships of sync,
blanking and burst.
Blanking Stuffing – An ATV technique that adds information to blanking
areas that is supposed to be invisible to ordinary sets but can be used by
an ATV set for increased resolution and/or widescreen panels.
Blast Filter – A dense mesh screen on a microphone, which minimizes
overload caused by loud, close sounds.
Bleach – a) Converting a metallic silver image to a halide or other salt
which can be removed from the film with hypo. When bleaching is not
carried to completion, it is called reducing. b) Any chemical reagent that
can be used for bleaching.
Bleeding Whites – An overloading condition in which white areas appear
to flow irregularly into black areas.
Blink – A modification to a key to cause it to flash on and off. The speed
at which a key blinks.
Blitting – The process of using BitBLT to copy video data such as a
bitmap from one area in memory to another.
Block – An 8-row by 8-column matrix of pels, or 64 DCT coefficients
(source, quantized or dequantized). A block is the entity on which the DCT
operates and it represents luminance or chrominance information. This
term is used for both the actual picture information, and the corresponding
DCT coefficients.
Block Companding – Digital representation of an audio signal that has
been normalized within a certain time period.
Block Matching – A method of motion estimation. A search for the picture area that best matches a specific macro block of preceding and/or
subsequent pictures.
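A common match criterion for this search is the sum of absolute differences (SAD) between a macroblock and each candidate area; the following is a minimal exhaustive-search sketch (illustrative only, using plain nested lists for small integer frames):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(ref_frame, block, block_size):
    """Exhaustively search ref_frame for the area best matching block."""
    h, w = len(ref_frame), len(ref_frame[0])
    best_score, best_pos = None, (0, 0)
    for y in range(h - block_size + 1):
        for x in range(w - block_size + 1):
            candidate = [row[x:x + block_size]
                         for row in ref_frame[y:y + block_size]]
            score = sad(block, candidate)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# A 2x2 block of 9s sits at row 1, column 2 of the reference frame.
ref = [[0, 0, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0]]
assert best_match(ref, [[9, 9], [9, 9]], 2) == (1, 2)
```

A real codec constrains the search to a small window around the block's original position rather than scanning the whole frame.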
Blockiness – An artifact that refers to the tile-like appearance of a
compressed image where the 8 x 8 blocks have become visible due to a
(too) hard compression.
Blocking – a) Occurs in a multistage routing system when a destination
requests a source and finds that source unavailable. In a tie line system,
this means that a destination requests a tie line and receives a tie line
busy message, indicating that all tie lines are in use. b) Distortion of the
received image characterized by the appearance of an underlying block
encoding structure.
Blooming – a) This effect is sometimes called whiter-than-white.
Blooming occurs when the white voltage level is exceeded and screen
objects become fuzzy and large. b) The defocusing of regions of a picture
where brightness is excessive.
BLT (Block Transfer) – The process of moving blocks of data from one
place to another rather than a byte at a time in order to save processor
time and to expedite screen display in operations such as vertical rolling
of video.
Blue Aging – A tendency for blue phosphors to age more rapidly than red
or green. See also Phosphor Aging.
Blue Book – The document that specifies the CD extra interactive music
CD format (see also Enhanced CD). The original CDV specification was also
in a blue book.
Blue Screen – A special effects procedure in which a subject is photographed in front of a uniformly illuminated blue or green background.
A new background image can be substituted for the blue or green during
the shoot or in post-production through the use of chroma key.
Blur – A state of reduced resolution. Blur can be a picture defect, as when
a photograph is indistinct because it was shot out of focus or the camera
was moved during exposure. Blur can also be a picture improvement, as
when an unnaturally jagged-edged diagonal line or jerky motion is blurred
to smoothness.
Blurring/Smearing – In a single frame (spatial example), reducing the number of pixels per horizontal line causes a blurring or smearing effect.
In multiple frames (temporal example), the causes become more complicated. They may include reduction of bandwidth, degree of image movement,
algorithm type, and motion prediction/compensation techniques.
B-MAC – A MAC (Multiplexed Analog Component) with audio and data time
multiplexed before modulation, which forms the basis for the HDB-MAC
ATV scheme, currently used for satellite transmission and scrambling in the
U.S. See also MAC.
B-Mode – A “checkerboard” or non-sequential method of assembly. In
B-mode, the edit decision list (EDL) is arranged by source tape number.
The edit system performs all edits from the tapes currently assigned to
decks, leaving gaps that will be filled by material from subsequent reels.
See also A-Mode, C-Mode, D-Mode, E-Mode, Source Mode.
Boot Up – To start up. Most computers contain a system operating
program that they load into memory from disk after power up or restart.
The process of reading and running that program is called boot up.
Bootstrap – Program used to initialize the computer. Usually clears memory, sets up I/O devices, and loads the operating system.
BMP – A bitmapped graphic files format for Windows which stores images
as a grid of dots or pixels. The BMP file defines the size of the image, the
number of color planes, and the palette used.
Border – a) The boundary between two merged video pictures, as created
with chroma key or wipe effects. b) May be thought of as the frame which
surrounds a given pattern or key. In the case of a key, the border is one or two lines wide, adjustable anywhere from black to white, and may
be symmetrical about the key or to the right and bottom (drop shadow).
An outline is a special key border where the insert video appears in the
border area and the background video fills the hole where the insert
would normally be. In the case of a pattern, the border is adjustable in
width and color. A pattern border may be hard colored, soft colored
(halo), or soft with no color. AVC switchers can also do half halo borders,
hard on one side and soft on the other.
BNC – A cable connector used extensively in television. The abbreviation
has several different meanings depending on who you ask; four common
expansions are Baby N Connector, Bayonet Neill Concelman Connector,
British Naval Connector, and British National Connector.
Border (Key) – A title (caption, super) enhancement option which produces a black or white border or dropshadow around a key or changes
the key into a matte filled outline in the shape of the key. The Border,
Dropshadow, and Outline push-buttons select these optional modes. If
the Border option is not installed, these push-buttons do not function.
Board – The audio control console used in radio and television.
Border (Menu) – A function that uses ADO 100’s internal key to place a
border around the image and adjust width and color (saturation, luminance
and hue).
B-Mode Edit – An editing method where the footage is assembled in the
order it appears on the source reels. Missing scenes are left as black holes
to be filled in by a later reel. Requires fewer reel changes and generally
results in a faster edit session.
Board Fade – A radio term, used to designate the process of gradually
fading the volume of sound by means of a master fading control on the
board.
Board Tester – Product programmed to automatically stimulate the
circuits on a PC board and check the responses. Electrical failures can
be detected and diagnosed to facilitate board repair.
BOC (Bell Operating Company) – A local telephone company formerly
owned by AT&T.
Book A – The document specifying the DVD physical format (DVD-ROM).
Finalized in August 1996.
Book B – The document specifying the DVD-Video format. Mostly finalized
in August 1996.
Book C – The document specifying the DVD-Audio format.
Book D – The document specifying the DVD record-once format (DVD-R).
Finalized in August 1997.
Book E – The document specifying the rewritable DVD format (DVD-RAM).
Finalized in August 1997.
Boolean – In digital picture manipulation, a method of working on
polygonal objects.
Boolean Logic – Named after George Boole, who defined binary
arithmetic and logical operations such as AND, OR, NOT, and XOR.
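The four operations named above can be shown as truth tables in a few lines of Python. This is an illustrative sketch, not part of the glossary; `boolean_ops` is a helper name chosen here.

```python
# Truth-table demo of the Boolean operations named above.
# Python's &, |, ^ operators act bitwise on integers; 1 - a gives NOT
# for a single-bit operand.

def boolean_ops(a: int, b: int) -> dict:
    """Return AND, OR, XOR of (a, b) and NOT of a, for single-bit operands."""
    return {"AND": a & b, "OR": a | b, "XOR": a ^ b, "NOT": 1 - a}

for a in (0, 1):
    for b in (0, 1):
        print(a, b, boolean_ops(a, b))
```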
Boom – A mechanical cantilevering device used to hold a microphone
closer to a set by positioning it above the set while keeping it out of view
of the cameras.
Boot – To start up the system by turning on the workstation and monitor;
the system is fully booted when you see a prompt or the login screen.
Short for Bootstrap.
Border (Wipe) – The boundary area between the “A” video and “B” video
when doing a wipe, to which hard, soft, halo or 1/2 halo edges and matte
color can be added.
Border Luminance – The brightness of a border.
Border Modify – A feature exclusive to AVC series switchers, allowing
key borders to be extended to the right and bottom up to 14 lines deep.
Several special key effects can be accomplished with this including
delayed and decayed keys.
Border Modify (Key) – An enhancement to the basic key border function
allowing up to 14 lines of dropshadow or reinserted insert video in a
decaying mode. This uses a patented circuit which increases the creative
possibilities of key borders.
Bottom Field – One of two fields that comprise a frame of interlaced
video. Each line of a bottom field is spatially located immediately below
the corresponding line of the top field.
Bounce – a) An unnatural sudden variation in the brightness of the
picture. b) Oscillations and noise generated when a mechanical switch
is opened or closed. See Debounce.
Boundary Representation Modeling – This modeling technique defines
a world in terms of its edges. The primary components of a boundary rep
world are vertices and polygons. PictureMaker is a boundary rep system.
Bounding Box – A relatively simple object, usually a rectangle or box with
the overall dimensions, or bounds, of a more complex object. A bounding
box is used in place of that exact, more complex, modeled shape to represent
it in an animation preview, or to predict the inclusion of that object in the
scene. This reduces the calculation/production time and expense when
previewing computer animation sequences to check continuity, positions,
and timing.
Box – Electronic equipment used to process television signals in a consumer’s home, usually housed in a “box” that sits atop a TV set or VCR.
Bouquet – a) A group of transport streams in which programs are identified by a combination of network ID and PID (part of DVB-SI). b) A collection of services marketed as a single entity.
Box House – A slang term for a mail-order business for audio and video
components. Box houses frequently offer little or no consumer support or
equipment repair.
Bowtie Test Signal – Each of three component signals is fed to a different channel of the CAV system and used to evaluate the relative amplitudes and relative timing on some CAV waveform monitors. In standard
definition the first signal is a 500 kHz sine wave packet, which is fed to
video channel 1. The other two signals are identical 502 kHz packets, fed
to the two color difference channels. The three
sine wave packets are generated to be precisely in phase at their centers.
Because of their 2 kHz offset, the color difference channels become
increasingly out of phase with the luminance channel on either side of
center. If the three signals are properly timed, their sum results in the
bowtie waveform.
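The effect of the 2 kHz offset can be checked numerically. The sketch below is an illustration of the principle only, not a Tektronix measurement procedure; `phase_error_cycles` is a name invented here.

```python
# Phase difference (in cycles) between 500 kHz and 502 kHz tones that are
# aligned in phase at t = 0, i.e., at the packet centers. Because of the
# 2 kHz offset, the error grows linearly with distance from center.

def phase_error_cycles(t_us: float) -> float:
    f1, f2 = 0.500, 0.502          # frequencies in cycles per microsecond
    return (f2 - f1) * t_us

print(phase_error_cycles(0.0))     # zero error at the packet center
print(phase_error_cycles(25.0))    # ~0.05 cycle (18 degrees) 25 us out
```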
BPF – See Bandpass Filter.
BPI – Bits per linear inch down a recorded track.
B-Picture (Bidirectionally Predictive-Coded Picture) – An MPEG
picture that is coded using motion compensated prediction from past
and/or future reference pictures. Motion vectors pointing forward and
backwards are used, and they may point at either I-pictures or P-pictures.
The B-pictures provide the highest compression, but demand knowledge
of several pictures. Consequently, B-pictures give a higher delay and call
for a larger picture memory. B-pictures are never used as a reference in
a prediction. When B-pictures are part of a sequence, the pictures are not
sent in chronological order owing to the fact that future P-pictures and/or
I-pictures are needed (and therefore must be decoded) for the decoding
of B-pictures. The P- and I-pictures have to be sent earlier than the actual
point of time to which they relate.
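The reordering described above can be sketched in Python. This is an illustrative example, not from the glossary; `display_to_coded_order` is a name invented here. It shows how each I- or P-picture is emitted before the B-pictures that depend on it.

```python
# Reorder a GOP from display order to coded (transmission) order so that
# every reference picture (I or P) precedes the B-pictures that point at it.

def display_to_coded_order(display):
    coded, pending_b = [], []
    for pic in display:
        if pic.startswith("B"):
            pending_b.append(pic)      # hold B-pictures back...
        else:
            coded.append(pic)          # ...until their future reference is sent
            coded.extend(pending_b)
            pending_b = []
    coded.extend(pending_b)
    return coded

print(display_to_coded_order(["I0", "B1", "B2", "P3", "B4", "B5", "P6"]))
# ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```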
BPS – Abbreviation for Bits Per Second.
BPSK (Binary Phase Shift Keying) – A modulation technique that has
proven to be extremely effective for LowFER and MedFER operation, as
well as for amateur HF work.
BR (Radiocommunication Bureau) – The Radiocommunication Bureau
(BR), the executive arm of the Radiocommunication Sector, is headed by a
Director who organizes and coordinates the work of the
Radiocommunication Sector.
BRA (Basic Rate Access) – Two 64 kbps B channels + one 16 kbps
D channel (2B + D), carrying user traffic and signaling information respectively to the user via twisted pair local loop.
Braid – A group of textile or metallic filaments interwoven to form a tubular structure that may be applied over one or more wires or flattened to
form a strap.
Branch – See Jump.
Break Elongation – The relative elongation of a specimen of magnetic
tape or base film at the instant of breaking when it has been stretched at
a given rate.
Breakdown – A written accounting of the shooting schedule and production resources.
Break-Down – The separation of a roll of camera original negative into its
individual scenes.
Breakpoint – a) A break in the smoothness of a curve. b) Software or
hardware device that stops the program and saves the current machine
status, under user-specified conditions.
Breakup – Disturbance in the picture or sound signal caused by loss of
sync or by videotape damage.
Breathing – Amplitude variations similar to “bounce” but at a slow,
regular rate.
Breezeway – The portion of the video signal which lies between the
trailing edge of the horizontal sync pulse and start of burst. The Breezeway
is part of the back porch. Also refer to the Horizontal Timing discussion.
Bridge – Bridges are devices that connect similar and dissimilar LANs at
the Data Link Layer (OSI layer 2), regardless of the Physical Layer protocols
or media being used. Bridges require that the networks have consistent
addressing schemes and packet frame sizes. Current introductions have
been termed learning bridges since they are capable of updating node
address (tracking) tables as well as overseeing the transmission of data
between two Ethernet LANs.
Brightness – a) Overall DC voltage level of the video signal. The brightness control is an adjustment of setup (black level, black reference).
b) Attribute of a visual sensation according to which an area appears to
emit more or less light. The subjective counterpart of objective luminance.
c) The value of a pixel along the black-white axis. d) In NTSC and PAL
video signals, the brightness information at any particular instant in a
picture is conveyed by the corresponding instantaneous DC level of active
video. Brightness control is an adjustment of setup (black level, black
reference).
Brightness Signal – Same as the luminance signal (Y). This signal carries
information about the amount of light at each point in the image.
Broad Pulses – Another name for the vertical synchronizing pulses in the
center of the vertical interval. These pulses are long enough to be distinguished from all others and are the part of the signal actually detected by
vertical sync separators.
B-Roll – a) Off-the-shelf video sequences for various needs. b) Refers to
secondary or duplicated footage of a fill or secondary nature usually played
from the B source player in an A/B roll linear editing system. B-roll does
not refer to all tapes played from the B source player.
Brouter – Brouters are bridge/router hybrid devices that offer the best
capabilities of both devices in one unit. Brouters are actually bridges
capable of intelligent routing and therefore are used as generic components to integrate workgroup networks. The bridge function filters information that remains internal to the network and is capable of supporting
multiple higher-level protocols at once. The router component maps out
the optimal paths for the movement of data from one point on the network
to another. Since the brouter can handle the functions of both bridges and
routers, as well as bypass the need for the translation across application
protocols with gateways, the device offers significant cost reductions in
network development and integration.
Brown Stain – A non-magnetic substance that forms on that area of
a magnetic head’s surface over which tape passes. Its origin is not well
understood but it is known to occur primarily in the presence of low
humidity.
Browse – To scan a database or a list of files, either for a particular item
or for anything that seems to be of interest. Browsing implies observing
rather than changing information.
Browse Station – A viewing station that provides browsing of stored
images or video. Browse stations are internal and connected via ethernet.
BRR – See Bit Rate Reduction.
Broadband – a) A response that is the same over a wide range of frequencies. b) Capable of handling frequencies greater than those required
for high-grade voice communications (higher than 3 to 4 kilohertz).
Bruch Blanking – A 4-field burst blanking sequence employed in PAL
signals to ensure that burst phase is the same at the end of each vertical
interval.
Broadcast – A one-to-many transmission of information that may be
simultaneously received by many, but unknown, receivers.
BS – Bandwidth of the frequency slot allocated to a service.
Broadcast Communications System – A network such as a cable system capable of delivering multiple high capacity services simultaneously.
Broadcast Monitor – Television set without receiving circuitry, wired
directly to a VTR or other output device.
Broadcast Quality – a) A nebulous term used to describe the output of a
manufacturer’s product no matter how bad it looks. b) A standard of 525
lines of video picture information at a rate of 60 Hz – NTSC in the USA; or
625 lines at a rate of 50 Hz – PAL in Europe (except France). c) A quality
standard for composite video signals set by the NTSC and conforming to
FCC rules. When recording video signals or videotape for broadcast, it is
important to note that devices providing NTSC signals do not necessarily
meet FCC broadcast standards.
Broadcast Television – Conventional terrestrial television broadcasting,
the most technically constrained delivery mechanism for ATV, faced with
federal regulations and such potential problems as multipath distortion and
co-channel interference.
Broadcaster (Service Provider) – An organization which assembles a
sequence of events or programs to be delivered to the viewer based upon
a schedule.
BS.707 – This ITU recommendation specifies the stereo audio specifications (Zweiton and NICAM 728) for the PAL and SECAM video standards.
BS1, BS2, BS3 – DVB-RCT burst structures for data transmission.
BSI (British Standards Institution) – The British Standards Institution
was the first national standards body in the world. There are now
more than 100 similar organizations which belong to the International
Organization for Standardization (ISO) and the International Electrotechnical
Commission (IEC).
BSLBF (Bit String, Left Bit First) – Bit string, left bit first, where “left”
is the order in which bit strings are written in ISO/IEC 11172. Bit strings
are written as a string of 1s and 0s within single quote marks, e.g.
‘1000 0001’. Blanks within a bit string are for ease of reading and
have no other significance.
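As a quick illustration (a hypothetical helper, not part of ISO/IEC 11172), a bslbf string written with blanks can be interpreted left bit first like this:

```python
# Parse a bit string as written in ISO/IEC 11172: left bit first,
# blanks ignored. '1000 0001' therefore reads as binary 10000001.

def parse_bslbf(bits: str) -> int:
    return int(bits.replace(" ", ""), 2)

print(parse_bslbf("1000 0001"))  # 129
```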
B-Spline – a) A type of smooth curve (or surface) bound to its control
points. b) A smooth curve that passes on the inner side of the vertices of
a polygon to connect the vertices to interpolate or draw the polygon.
c) A curve used to define a motion path.
BSS (Broadcast Satellite Services) – Typically used to refer to a range
of frequencies intended for direct reception of satellite television and entertainment services. These frequencies are subject to internationally-agreed
upon regulations that govern their use and are designed to ensure that all
countries are able to offer services of this nature.
BT.656 – Defines a parallel interface (8-bit or 10-bit, 27 MHz) and a serial
interface (270 Mbps) for the transmission of 4:3 BT.601 4:2:2 YCbCr
digital video between pro-video equipment. See also SMPTE 125M.
BST-OFDM – See Bandwidth Segmented Orthogonal Frequency Division
Multiplexing.
BT.709 – This ITU recommendation specifies the 1920 x 1080 RGB and
4:2:2 YCbCr interlaced and progressive 16:9 digital video standards. Frame
refresh rates of 60, 59.94, 50, 30, 29.97, 25, 24 and 23.976 Hz are supported.
BT.1119 – Defines the widescreen signaling (WSS) information for NTSC
and PAL video signals. For (B, D, G, H, I) PAL systems, WSS may be
present on line 23, and on lines 22 and 285 for (M) NTSC.
BT.799 – Defines the transmission of 4:3 BT.601 4:4:4:4 YCbCr and RGBK
digital video between pro-video equipment. Two parallel interfaces (8-bit or
10-bit, 27 MHz) or two serial interfaces (270 Mbps) are used.
BT.1124 – Defines the ghost cancellation reference (GCR) signal for NTSC
and PAL.
BTA – Japan’s Broadcast Technology Association. A national standards-making organization comprising manufacturers and broadcasters, not
unlike SMPTE. A proponent of an ATV system.
BT.1197 – Defines the PALplus standard, allowing the transmission of
16:9 programs over normal PAL transmission systems.
BT.1302 – Defines the transmission of 16:9 BT.601 4:2:2 YCbCr digital
video between pro-video equipment. It defines a parallel interface (8-bit or
10-bit, 36 MHz) and a serial interface (360 Mbps).
BT.1303 – Defines the transmission of 16:9 BT.601 4:4:4:4 YCbCr and
RGBK digital video between pro-video equipment. Two parallel interfaces
(8-bit or 10-bit, 36 MHz) or two serial interfaces (360 Mbps) are used.
BT.1304 – Specifies the checksum for error detection and status for
pro-video digital interfaces.
BT.1305 – Specifies the digital audio format for ancillary data for pro-video
digital interfaces. See also SMPTE 272M.
BT.1358 – 720 x 480 (59.94 Hz) and 720 x 576 (50 Hz) 4:2:2 YCbCr
pro-video progressive standards. See also SMPTE 293M.
BT.1362 – Pro-video serial interface for the transmission of BT.1358 digital video between equipment. Two 270 Mbps serial interfaces are used.
BT.1364 – Specifies the ancillary data packet format for pro-video digital
interfaces. See also SMPTE 291M.
BT.1365 – Specifies the 24-bit digital audio format for pro-video HDTV
serial interfaces. See also SMPTE 299M.
BT.1366 – Specifies the transmission of timecode as ancillary data for provideo digital interfaces. See also SMPTE 266M.
BT.1381 – Specifies a serial digital interface-based (SDI) transport interface for compressed television signals in networked television production
based on BT.656 and BT.1302.
BT.470 – Specifies the various NTSC, PAL and SECAM video standards
used around the world. SMPTE 170M also specifies the (M) NTSC video
standard used in the U.S. BT.470 has replaced BT.624.
BT.601 – 720 x 480 (59.94 Hz), 960 x 480 (59.94 Hz), 720 x 576 (50 Hz)
and 960 x 576 (50 Hz) 4:2:2 YCbCr pro-video interlaced standards.
BT.653 – Defines the various teletext standards used around the world.
Systems A, B, C and D for both 525-line and 625-line TV systems are
defined.
BTS (Broadcast Television Systems) – A joint venture of Bosch Fernseh
and Philips established to sell television production equipment. BTS offered
the first multi-standard HDTV camera.
BTSC – This EIA TVSB5 standard defines a technique of implementing
stereo audio for NTSC video. One FM subcarrier transmits a L+R signal,
and an AM subcarrier transmits a L-R signal.
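The sum/difference matrixing described above can be sketched numerically. This Python fragment is an illustration only — it ignores the modulation and noise-reduction processing of the actual BTSC system — showing how L and R are recovered from L+R and L-R.

```python
# Matrix stereo audio into the transmitted sum and difference signals,
# then recover left/right at the receiver. A mono receiver simply uses L+R.

def btsc_matrix(left: float, right: float):
    return left + right, left - right                        # (L+R, L-R)

def btsc_dematrix(sum_sig: float, diff_sig: float):
    return (sum_sig + diff_sig) / 2, (sum_sig - diff_sig) / 2  # (L, R)

s, d = btsc_matrix(0.8, 0.2)
left, right = btsc_dematrix(s, d)
print(left, right)   # recovers approximately (0.8, 0.2)
```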
Buckling – Deformation of the circular form of a tape pack which may be
caused by a combination of improper winding tension, adverse storage
conditions and/or poor reel hub configuration.
Buffer – a) An IC that is used to restore the logic drive level. b) A circuit
or component that isolates one electrical circuit from another. c) A digital
storage device used to compensate for a difference in the rate of flow of
information or the time of occurrence of events when transmitting information from one device to another. d) In telecommunications, a protective
material used in cabling optical fiber to cover and protect the fiber. The
buffer material has no optical function.
Buffer Control – The feedback algorithms used by the encoder to avoid
overflow of the video rate buffer. The video rate buffer is a FIFO which
holds the coded video prior to output into the channel.
Buffer Model – A model that defines how a terminal complying with this
specification manages the buffer resources that are needed to decode a
Bug – An error in a computer program. Eliminating errors is known as
debugging.
Built-In Reference Tones – Refers to adjustment tones which are available within the recorder for adjusting record level and bias.
Bulk Eraser – A device used to erase an entire tape at one time. Bulk
erasers are usually more effective than recorders’ erase heads.
Bump Up – Copying from one recording medium onto another that is more
suitable for post-production purposes because, for example, it offers better
bandwidth or timecode capabilities.
Bumping Up – Transferring a program recorded on a lower quality videotape to a higher quality videotape (e.g., from Hi-8 to Betacam). Bumping
up to a higher format allows footage to be preserved on a more stable tape
format and makes it possible to edit in a higher-end editing environment.
Burn – An image or pattern appearing so regularly on the screen of a
picture tube that it ages the phosphors and remains as a ghost image even
when other images are supposed to be shown. On computer terminals,
the areas occupied by characters are frequently burned, particularly in
the upper left corner. In television transmission centers, color bars are
sometimes burned onto monitors. There is some concern that some ATV
schemes will burn a widescreen pattern on ordinary TV sets due to
increased vertical blanking or will burn a non-widescreen pattern on ATV
sets due to reception of non-ATV signals. In production, refers to long-term
or permanent image retention of camera pickup tubes when subjected to
excessive highlights.
Burned-In Image – An image which persists in a fixed position in the
output signal of a camera tube after the camera has been turned to a
different scene.
Burned-In Time Code (BITC) – Time code numbers that are superimposed on the picture. This is time code that is displayed on the monitor
along with the video it pertains to. BITC can either be Vertical Interval Time
Code (VITC) or Longitudinal Time Code (LTC).
Burn-In – a) Component testing method used to screen out early failures
by running the circuit for a specified length of time. b) A visible time code
permanently superimposed on footage, usually in the form of white numbers in a black rectangle.
Burn-In Dub – A duplicate of an original or master tape that includes the
time code reference on-screen and is used as a reference for logging and
locating scenes.
Burst – A small reference packet of the subcarrier sine wave, typically
8 or 9 cycles, which is sent on every line of video. Since the carrier is suppressed, this phase and frequency reference is required for synchronous
demodulation of the color information in the receiver. Refer to the
Horizontal Timing discussion.
Burst Gate – This signal tells the receiver when valid color burst is present
and ready for use.
Bus – a) Any row of video crosspoints that allows various sources to be
selected, and the associated row of buttons for such selection. Buses are usually associated with a given M/E or the DSK although
they may be independent as in aux buses. Also, any row of video or key
source selections which may or may not be selected by push buttons on a
bus row. For example, key video selections on Ampex switchers appear
on buses which are accessed and selected by keypads. Due to the fact
that there is no associated row of buttons, this arrangement is called a
“phantom bus”. b) A parallel data path in a computer. c) In computer
architecture, a path over which information travels internally among various
components of a system and is available to each of the components.
Bus Address – A code number sent out to activate a particular device on
a shared serial or parallel bus interface. Also the identification number of a
device.
Bus Conflict – Conflict that occurs when two or more device outputs of
opposite logic states are placed on a three-state bus at the same time.
Bus Controller – Generates bus commands and control signals.
Bus Driver – An IC that is added to a bus to provide sufficient drive
between the CPU and the other devices that are tied to the bus. These are
necessary because of capacitive loading, which slows down the data rate
and prevents proper time sequencing of microprocessor operation and/or
to overcome resistive loading when fan out requirements increase.
Bus Keyer – A keyer that does a key on top of the bus video before the
signal gets to the M/E. On the 4100, these are packaged as “dual bus
keyers” and are the modules between the bus rows and the M/Es. On
the AVC, bus keyers are integral with the M/E module, with controls in a
similar location.
Bus Row – Any row of video source select buttons allowing immediate
selection of switcher video sources.
Bus Termination – Method of preventing reflections at the end of a bus.
Necessary only in high-speed systems.
Business Television – One-way television broadcasts (usually by satellite)
by corporations to multiple sites. The return path for interactivity is typically
audio only.
Buss – In video switching equipment, a wire carrying line level signals
(anything greater than mike level).
Button – a) On a mouse, a button is a switch that you press with a finger.
b) In a window on your screen, a button is a labeled rectangle that you
click using the cursor and mouse. c) This is a rectangular area in the Subpicture display area highlighted by the Highlight Information (HLI) that is
used to define the active area on a menu associated with a specific action.
Button Menu – These are consecutive numbers assigned to each button
on a menu, ranging from “1” to “36”.
BVB (Black-Video-Black) – A preview mode that displays black, newly
inserted video, and then black again.
B-vop (Bidirectionally Predictive-Coded Video Object Plane) – A vop
that is coded using motion compensated prediction from past and/or future
reference vops.
BW – See Bandwidth.
BWF (Broadcast WAV Format) – Broadcast WAV Format is an audio file
format based on Microsoft’s WAV Format that carries PCM or MPEG encoded audio. BWF adds the metadata, such as a description, originator, date
and coding history, needed for interchange between broadcasters.
B-Y – One of the color difference signals used in the NTSC system,
obtained by subtracting luminance from the blue camera signal. This is
the signal that drives the horizontal axis of a vectorscope. The human
visual system has much less acuity for spatial variation of color than for
brightness. Rather than conveying RGB, it is advantageous to convey luma
in one channel, and color information that has had luma removed in the
two other channels. In an analog system, the two color channels can have
less bandwidth, typically one-third that of luma. In a digital system each
of the two color channels can have considerably less data rate (or data
capacity) than luma. Green dominates the luma channel: about 59% of
the luma signal comprises green information. Therefore it is sensible, and
advantageous for signal-to-noise reasons, to base the two color channels
on blue and red. The simplest way to remove luma from each of these is
to subtract it to form the difference between a primary color and luma.
Hence, the basic video color-difference pair is (B-Y), (R-Y) [pronounced “B
minus Y, R minus Y”]. The (B-Y) signal reaches its extreme values at blue
(R=0, G=0, B=1; Y=0.114; B-Y=+0.886) and at yellow (R=1, G=1, B=0;
Y=0.886; B-Y=-0.886). Similarly, the extremes of (R-Y), ±0.701, occur at
red and cyan. These are inconvenient values for both digital and analog
systems. The color spaces YPbPr, YCbCr, Photo YCC and YUV are simply
scaled versions of (Y, B-Y, R-Y) that place the extreme of the color difference channels at more convenient values.
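The extreme values quoted above can be verified with the luma weights implied by the text (Y = 0.299R + 0.587G + 0.114B); a small Python check, illustrative only:

```python
# Compute luma and the two color-difference signals from R'G'B' in [0, 1],
# using the Rec. 601 luma weights implied by the values quoted above.

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def color_difference(r, g, b):
    y = luma(r, g, b)
    return b - y, r - y            # (B-Y, R-Y)

print(color_difference(0, 0, 1))   # blue:   B-Y is about +0.886
print(color_difference(1, 1, 0))   # yellow: B-Y is about -0.886
print(color_difference(1, 0, 0))   # red:    R-Y is about +0.701
```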
Byte – a) A complete set of quantized levels containing all of the bits.
Bytes consisting of 8 to 10 bits per sample are typical. b) Group of eight
bits. Can be used to represent a character. Microcomputer instructions
require one, two, or three bytes. A word can be one or more bytes. c) A
group of adjacent binary digits operated upon as a unit, capable of holding
one character in the local character set, and usually shorter than a computer word (frequently connotes a group of eight bits). Current usage within
the context of electronic production concerns is tending to define a byte as
eight bits to have a consistent data unit for measuring memory capacities,
etc. d) 8 bits. The combination of 8 bits into 1 byte allows each byte to
represent 256 possible values. See Megabyte, Gigabyte, Terabyte.
byte = 8 bits = 256 discrete values (brightness, color, etc.)
kilobyte = 1,024 bytes (not 1000 bytes)
megabyte = 1,048,576 bytes (not one million bytes)
gigabyte = 1,073,741,824 bytes (not one billion bytes)
terabyte = 1,099,511,627,776 bytes (not one trillion bytes)
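The values listed above are binary (power-of-two) sizes, not the decimal powers of ten the prefixes might suggest; computing them directly:

```python
# Binary (power-of-two) sizes matching the table above.
for name, exp in [("kilobyte", 10), ("megabyte", 20),
                  ("gigabyte", 30), ("terabyte", 40)]:
    print(f"{name} = 2**{exp} = {2 ** exp:,} bytes")
```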
Byte Aligned – a) A bit in a coded bit stream is byte-aligned if its position
is a multiple of 8-bits from the first bit in the stream. b) Data in a coded
bit stream that is positioned a multiple of 8-bits from the first bit in the
stream. For example, MPEG video and system streams are byte-aligned.
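The definition above amounts to a modulo test; a one-line illustrative check (hypothetical helper name):

```python
# A bit position is byte-aligned when it is a whole multiple of 8 bits
# from the first bit of the stream.

def is_byte_aligned(bit_position: int) -> bool:
    return bit_position % 8 == 0

print(is_byte_aligned(0), is_byte_aligned(8), is_byte_aligned(12))
# prints: True True False
```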
C/N – Ratio of RF or IF signal power to noise power.
CA (Conditional Access) – Information describing, or indicating whether
the program is scrambled.
Camera Control Unit (CCU) – Remote control device for video cameras
usually placed in the editing suite. Controls usually include video levels,
color balancing and iris control.
Cable Equalization – The process of altering the frequency response of a
video amplifier to compensate for high-frequency losses in coaxial cable.
Camera Log – A record sheet giving details of the scene photographed on
a roll of original negative.
Cable Network – Group of radio or television outlets linked by cable or
microwave that transmit identical programs simultaneously, or the company
that produces programs for them. Cable networks include companies such
as: The Discovery Channel, ESPN, C-SPAN. National broadcast commercial
television networks in the U.S. include ABC, NBC, CBS.
Camera Match – Shot-to-shot picture fidelity. Improperly matched
cameras may exhibit differences in level, balance, colorimetry, or defects
that will cause the picture quality to change from shot to shot. These
differences may present problems during editing, as the editor attempts
to minimize differences.
Cable Television – System that transmits original programming and
programming of broadcast television stations, to consumers over a wired
network.
Camera Supply – Most video cameras use an external DC voltage supply
which is derived either from a battery belt worn by the camera operator,
from a battery within the video recorder itself, or from the mains power
supply (after voltage conversion).
Cable Virtual Channel Table (CVCT) – An ATSC table that identifies a
set of one or more channels within a cable network. The table includes
major and minor channel numbers, carrier frequency, short channel name,
and information for navigation and tuning.
Cablecasting – To originate programming over a cable system. Includes
public access programming.
CAD (Computer-Aided Design) – This usually refers to a design system
that uses specialized computer software.
Camera Tube – See Pickup Tube.
Candela (cd) – A unit for measuring luminous intensity. One candela is
approximately equal to the amount of light energy generated by an ordinary
candle. Since 1948 a more precise definition of a candela has become:
“the luminous intensity of a black body heated up to a temperature at
which platinum passes from a liquid state to a solid”.
Candlepower – The unit measure of incident light.
Calibrate – To fine-tune video levels for maximum clarity during digitizing
(from videotape).
Canned – In the can, old movie term still used occasionally to mean
completed or finished.
Calibrated Delay Fixture – This fixture is another way of measuring
Chrominance to Luminance delay. The fixture allows the delay to be incrementally adjusted until there is only one peak in the baseline indicating all
the delay errors have been dialed out. The delay value can be read from
the fixture while the gain can be calculated from the remaining peaks.
Capstan – The driven spindle or shaft in a tape recorder, sometimes the
motor shaft itself, which rotates against the tape (which is backed up by a
rubber pressure or pinchroller), pulling it through the machine at constant
speed during recording and playback modes of operation.
Call – Jump to a subroutine. A jump to a specified address is performed,
but the contents of the program counter are saved (usually in the stack) so
that the calling program flow can resume when the subroutine is finished.
Camcorder – The combination of camera and video tape recorder in one
device. Camcorders permit easy and rapid photography and recording
simultaneously. Camcorders are available in most home video formats:
8 mm, Hi-8, VHS, VHS-C, S-VHS, etc.
Camera Analysis – The measurement and evaluation of the spectral
sensitivities of the three color channels of a television camera. The camera
and matrixing are identified and measured.
Camera Analysis, Ideal – For optimum image quality, both objective and
perceived, the spectral sensitivities of the three color channels of a television camera should be matched to the primary colors of the R, G, B color
space. Note: Some practice still exists matching the color channels of the
camera to the display phosphors. This reduces the color gamut and carries
unnecessary noise penalties. The practice is deprecated.
Camera Chain – Television camera and associated equipment, consisting
of power supply and sync generator.
Capstan Crease – Wrinkles or creases pressed into the tape by the
capstan/pinch roller assembly.
Capstan Idler – A rubber wheel which presses the magnetic tape against
the capstan so that the capstan can move the tape.
Capstan Servo – The regulating device of the capstan as it passes tape
through a videotape recorder.
Caption – See Title.
Capture – The process of digitizing the analog video signal. See Digitize.
Capture Card – Sometimes called a capture or video board, the logic card
installed into a computer and used to digitize video. Or, for video that is
already digitized, the device that simply transfers the file to the hard disk.
Using a hardware or software codec, the capture card also compresses
video in and decompresses video out for display on a television monitor.
Capture Mask Effect – An effect that converts the format of source data
during playback. For example, it could convert video frame data between
PAL (25 fps) and NTSC (29.97 fps) formats.
Card Guides – Narrow metal or plastic tracks at the top and bottom of the
chassis into which you slide printed circuit boards.
Cardioid – The quasi-heart-shaped sensitivity pattern of most unidirectional microphones. Hypercardioid and supercardioid microphones have
basically similar patterns, but with longer, narrower areas of sensitivity at
the front, and slightly increased rear sensitivity.
Carriage – A cable system’s procedure of carrying the signals of television
stations on its various channels. FCC rules determine which signals cable
systems must or may carry.
Carrier – A signal which is modulated with data to be transmitted.
Carry Flag – Flag bit in the microprocessor’s status register, which is
used to indicate the overflow of an operation by the arithmetic logic unit.
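The overflow behavior can be sketched as follows (the function name and 8-bit width are illustrative assumptions):

```python
def add8(a, b):
    """8-bit addition as an ALU performs it: returns the 8-bit result
    and the carry flag, which is set when the sum overflows 0xFF."""
    total = (a & 0xFF) + (b & 0xFF)
    return total & 0xFF, int(total > 0xFF)
```

For example, 0xF0 + 0x20 overflows the 8-bit range, yielding result 0x10 with the carry flag set.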
Cartridge – A plastic container that holds tape for easy loading into a
matching recorder or player.
CAS – See Conditional Access System.
Cassette – A tape cartridge in which the tape passes from one hub to
another.
Casting – The ability to distribute live video (or audio) broadcasts over
local or wide area networks that may optionally be received by many
viewers.
CAT (Conditional Access Table) – Provides information on the conditional access systems used, carried in packets with PID code 1, together with
information about the scrambling system. See ECM and EMM.
Cathode-Ray Tube (CRT) – a) An electron tube assembly containing an
electron gun arranged to direct a beam upon a fluorescent screen.
Scanning by the beam can produce light at all points in the scanned raster.
b) Display device, or picture tube, for video information.
CATV (Community Access Television) – Acronym for cable TV, derived
from the older term, community antenna television.
CATV Penetration – The ratio of the number of subscribers to the total
number of households passed by the cable system.
CAV (Component Analog Video) – Analog video signal format in which
the picture information is conveyed in three signals. CAV formats include:
RGB; Y, R-Y, B-Y; Y, I, Q; Y, U, V; Y, Pb, Pr. Refer to the definition for Analog
Components.
CB – Scaled version of the B-Y signal.
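Under ITU-R BT.601, for example, the B-Y difference is scaled by about 0.564 so that CB occupies the same coding range as luma. A minimal sketch; the function name and the normalized 0..1 RGB inputs are assumptions for illustration:

```python
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma
    cb = 0.564 * (b - y)                   # CB: scaled B-Y
    cr = 0.713 * (r - y)                   # CR: scaled R-Y
    return y, cb, cr
```

For saturated blue, B-Y is 0.886, and the 0.564 scaling brings CB to roughly 0.5, half the luma range.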
C-Band – The group of microwave frequencies from 4 to 6 GHz. C-band
satellites use a band of satellite downlink frequencies between 3.7 and 4.2
GHz. C-band is also used by terrestrial, line-of-sight microwave links.
CBC – See Canadian Broadcasting Corporation.
CBPS (Coded Bits Per Symbol)
CBR – See Constant Bit Rate.
CC – See Closed Captioning.
CCD – See Charge Coupled Device.
CCD Aperture – The proportion of the total area of a CCD chip that is
photosensitive.
CCETT (Centre Commun d’Etudes de Telecommunications et de
Telediffusion, France) – The CCETT is one of the three licensors of the
MPEG Layer II coding algorithm. The audio coding technique, originally
developed for DAB under EUREKA 147 jointly with IRT and Philips, was
selected by ISO/MPEG as Layer II of the MPEG-1 standard.
CCI (Copy Control Information) – Information specifying if content is
allowed to be copied.
CCIR (Comite Consultatif Internationale des Radiocommunications)
– International Radio Consultative Committee, an international standards
committee that has been absorbed by the parent body, the ITU. A permanent organization within the ITU with the duty to study technical and
operating questions relating specifically to radio communications and to
make recommendations on them. The CCIR does not prepare regulations;
it draws up recommendations and reports, produced by experts from both
public and private entities, which provide guidance on the best operational
methods and techniques. The CCIR is expected to base its recommendations upon ISO and IEC international standards, but when no relevant one
exists, the CCIR has been known to initiate standardization. These recommendations and reports provide a basis for international standardization.
CCIR-468 – Specifies the standard for weighted and unweighted noise
measurements. The weighted standard specifies the weighting filter and
quasi-peak detector. The unweighted standard specifies a 22 Hz to 22 kHz
bandwidth limiting filter and RMS detector.
CCIR-500 – Method for the Subjective Assessment of the Quality of
Television Pictures. CCIR-500 is a detailed review of the recommendations
for conducting subjective analysis of image quality. The problems of defining perceived image quality are reviewed, and the evaluation procedures
for interval scaling, ordinal scaling, and ratio scaling are described – along
with the applications for which each is best employed.
CCIR-601 – See ITU-R BT.601.
CCIR-656 – The physical parallel and serial interconnect scheme for ITU-R
BT.601. CCIR-656 defines the parallel connector pinouts as well as
the blanking, sync, and multiplexing schemes used in both parallel and
serial interfaces. Reflects definitions in EBU Tech 3267 (for 625 line signals) and in SMPTE 125M (parallel 525) and SMPTE 259M (serial 525).
CCIR-6601 – Consultative Committee International Radio. A standard that
corresponds to the 4:2:2 format.
CCIR-709 – The recommendation considers that the HDTV studio standard
must be harmonized with those of current and developing television systems and with those of existing motion-picture film. In a review of current
systems, a consensus was identified in specifications for opto/electronic
conversion, picture characteristics, picture scanning characteristics, and
signal format (both analog and digital representations). Work is underway in
the editing of national and CCIR related documents to determine whether
these consensus values may be affirmed in the next review of the individual documents. The values in Rec 709 are considered interim, and CCIR
notes that continuing work is expected to define target parameters for
future improved image rendition.
CCIR-801 – At present, the first results on studies related to Study
Programme 18U/11 have been collected. It must be recognized that these
studies must be intensified in close cooperation with such organizations as
the IEC and ISO to take fully into account the requirements for implementation of HDTV for media other than broadcasting, i.e., cinema, printing,
medical applications, scientific work, and video conferencing. In addition,
the transmission of HDTV signals via new digital transmission channels or
networks has to be considered and taken into account.
CDDI (Copper Data Distributed Interface) – A high-speed data interface, like FDDI but using copper. See FDDI.
CCITT (Comite Consultatif Internationale Telegraphique et
Telephonique) – A committee of the International Telecommunications
Union responsible for making technical recommendations about telephone
and data communication systems for PTTs and suppliers. Plenary sessions
are held every four years to adopt new standards. Now part of ITU-TSS.
CDT (Carrier Definition Table)
CCITT 0.33 – Recommendation 0.33 of the CCITT Specification for
Measuring Equipment, Volume IV, Series O Recommendations-1988. This
defines the automatic test sequences that are used to check on the
different parameters that are important to signal quality. Recommendation
0.33 has defined sequences for both monaural and stereo audio testing.
Also called EBU Recommendation R27.
CCK – See Composite Chroma Key.
CCTV – See Closed Circuit TV.
CCTV Camera – A unit containing an imaging device that produces a
video signal in the basic bandwidth.
CCTV Installation – A CCTV system, or an associated group of systems,
together with all necessary hardware, auxiliary lighting, etc., located at the
protected site.
CCTV System – An arrangement comprised of a camera and lens with all
ancillary equipment required for the surveillance of a specific protected
area.
CCU – See Camera Control Unit.
CCVE (Closed Circuit Video Equipment) – An alternative acronym for
CCTV.
CD (Committee Draft) – This is the first public form of a proposed international standard.
CD (Compact Disc) – a) A 4.75” disc used to store optical, machine-readable, digital data that can be accessed with a laser-based reader such
as a CD player. b) A standard medium for storing digital data in machine-readable form, accessible with a laser-based reader. Readers are typically
referred to as CD-ROM drives.
CD+G (Compact Disc Plus Graphics) – A variation of CD which embeds
graphical data in with the audio data, allowing video pictures to be displayed periodically as music is played. Primarily used for karaoke.
CD-DA (Compact Disc-Digital Audio) – Standard music CDs. CD-DA
became CD-ROMs when people realized that you could store 650 MB of
computer data on a 12cm optical disc. CD-ROM drives are simply another
kind of digital storage media for computers, albeit read-only. They are
peripherals just like hard disks and floppy drives. (Incidentally, the convention is that when referring to magnetic media, it is spelled disk. Optical
media like CDs, laserdisc, and all the other formats are spelled disc.)
CD-I – See Compact Disc Interactive.
CD-ROM – See Compact Disc Read Only Memory.
CDS (Correlated Double Sampling) – A technique used in the design
of some CCD cameras that reduces the video signal noise generated by
the chip.
CDTV – See Conventional Definition Television.
CD-XA – CD-XA is a CD-ROM extension being designed to support digital
audio and still images. Announced in August 1988 by Microsoft, Philips,
and Sony, the CD-ROM XA (for Extended Architecture) format incorporates
audio from the CD-I format. It is consistent with ISO 9660 (the volume
and file structure of CD-ROM) and is an application extension. CD-XA defines
another way of formatting sectors on a CD-ROM, including headers in the
sectors that describe the type (audio, video, data) and some additional info
(markers, resolution in case of a video or audio sector, file numbers, etc.).
The data written on a CD-XA can still be in ISO9660 file system format
and therefore be readable by MSCDEX and UNIX CD-ROM file system
translators. A CD-I player can also read CD-XA discs even if its file system
only resembles ISO9660 and isn’t fully compatible. However, when a disc
is inserted in a CD-I player, the player tries to load an executable application from the CD-XA, normally some 68000 application in the /CDI directory. Its name is stored in the disc’s primary volume descriptor. CD-XA bridge
discs, like Kodak’s Photo CDs, do have such an application, ordinary CD-XA
discs don’t. A CD-XA drive is a CD-ROM drive but with some of the compressed audio capabilities found in a CD-I player (called ADPCM). This
allows interleaving of audio and other data so that an XA drive can play
audio and display pictures (or other things) simultaneously. There is special
hardware in an XA drive controller to handle the audio playback. This
format came from a desire to inject some of the features of CD-I back into
the professional market.
CED (Capacitance Electronic Disk) – Technology used by RCA in their
Videodisk product.
Cel – Refers to a transparent sheet of glass or acetate on which a “layer”
or “level” of artwork is painted. Since the sheet is clear where there is
no artwork, several sheets can be superimposed, allowing “automatic
hidden-surface removal”, or simply, the “painter’s algorithm”.
Celanar – Trade name for polyester produced by Celanese.
Cell – In DVD-Video, a unit of video anywhere from a fraction of a second
to hours long. Cells allow the video to be grouped for sharing content
among titles, interleaving for multiple angles, and so on.
Cell Animation – Also called Onion Skinning, an animation technique in
which a background painting is held in place while a series of transparent
sheets of celluloid containing objects are placed over the background
painting, producing the illusion of movement. One of the two main types
of animation associated with digital video. Compare with Frame-Based
2D Animation.
Cell Command – A navigation command executed when the presentation
of a cell has been completed.
Cell Compression – Cell is a compression technique developed by Sun
Microsystems. The compression algorithms, the bit stream definition, and
the decompression algorithms are open; that is, Sun will tell anybody who
is interested about them. Cell compression is similar to MPEG and H.261
in that there is a lot of room for value-add on the compressor end. Getting
the highest quality image from a given bit count at a reasonable amount
of computation is an art. In addition, the bit stream completely defines the
compression format and what the decoder must do, so there is
less art in the decoder. There are two flavors of Cell: the original, called Cell
or CellA, and a newer flavor called CellB.
Cell Loss Priority (CLP) – A flag in the ATM cell header which indicates
the priority (normal or low) of the payload.
Cell Loss Ratio (CLR) – A QoS specification in an ATM network. It measures the number of cells that can be lost to the network relative to the total
number of cells transmitted.
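As a worked example of the ratio (the function name is illustrative):

```python
def cell_loss_ratio(cells_lost, cells_transmitted):
    """CLR: cells lost to the network divided by total cells transmitted."""
    if cells_transmitted <= 0:
        raise ValueError("cells_transmitted must be positive")
    return cells_lost / cells_transmitted

# e.g., 3 cells lost out of one million transmitted
clr = cell_loss_ratio(3, 1_000_000)
```

A QoS contract might, for instance, require the measured CLR to stay below a negotiated threshold such as 1e-5.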
Cell Side – The base (celluloid) surface of a strip of film.
CellB – A video coding scheme based on quadtree decomposition of each
frame.
CELP – See Code-Excited Linear Prediction.
CEN (Comite Europeen de Normalisation) – European committee for
standardization.
CENELEC (Comite Europeen de Normalisation Electrotechnique) –
European committee for electrotechnical standardization.
Center Channel – The central component of a front stereo audio presentation channel.
Central Processing Unit – Computer module in charge of fetching,
decoding, and executing instructions. It incorporates a control unit, an ALU,
and related facilities (registers, clocks, drivers).
Centralized Network – A network where a central server controls services and information; the server is maintained by one or more individuals
called network administrators. On a centralized network that uses NIS, this
server is called the NIS master, and all other systems on the network are
called NIS clients. See also Network Administrator, NIS, NIS Client, NIS
Domain, and NIS Master.
Ceramic Microphone – See Piezoelectric Microphone.
Certified Tape – Tape that is electrically tested on a specified number
of tracks and is certified by the supplier to have less than a certain total
number of permanent errors.
Certifier – Equipment that evaluates the ability of magnetic tape to record
and reproduce. The equipment normally counts and charts each error on
the tape, including level and duration of dropouts. In the Certify Mode, it
stops on error to allow for visually inspecting the tape to see if the error
cause is correctable or permanent.
CES – Consumer Electronics Show – A semi-annual event sponsored by
the Consumer Electronics Group of EIA, at which IDTV and HDTV schemes
have been demonstrated.
CFA (Color Filter Array) – A set of optical pixel filters used in single-chip
color CCD cameras to produce the color components of a video signal.
CG – See Character Generator.
CGA (Color Graphics Adapter) – A low-resolution video display standard,
invented for the first IBM PC. CGA pixel resolution is 320 x 200.
CGI – Abbreviation for Computer Graphic Imagery.
CGM (Computer Graphics Metafile) – A standard format that allows for
the interchanging of graphics images.
CGMS (Copy Guard Management System) – For NTSC systems, a
method of preventing copies or controlling the number of sequential copies
allowed. CGMS is transmitted on line 20 for odd fields and line 283 for
even fields for NTSC. For digital formats it is added to the digital signal
conforming to IEEE 1394.
CGMS-A (Copy Generation Management System – Analog) –
See EIA-608.
Challenge Key – Data used in the authentication key exchange process
between a DVD-ROM drive and a host computer, where one side
determines if the other side contains the necessary authorized keys
and algorithms for passing encrypted (scrambled) data.
Change List – A list of instructions produced by the film composer that
is used to track and compare the differences between two versions of a
digital sequence. A change list is used to update a work print cutting with
specified new edits and revisions.
Change-Over – a) In projection, the act of changing from one projector
to another, preferably without interrupting the continuity of projection.
b) The points in the picture at which such a change is made.
Changing Pixel – In shape coding, first pixel with color change from the
previous pixel (opaque to transparent or vice versa).
Channel – a) An independent signal path. Stereo recorders have two such
channels. Quadraphonic ones have four. b) A digital medium that stores or
transports a digital television stream. c) A term mainly used to describe the
configuration of audio tracks. For Dolby Digital there are 6 channels (left,
center, right, left rear, right rear and low frequency effects). For linear PCM
and MPEG audio, there are 8 channels. All DVD players are required to
have a two-channel downmix output, which is a stereo version produced
from the intrinsic channels on the disc if there are more than two channels
on the disc.
Channel Bit – The bits stored on the disc, after being modulated.
Channel Capacity – The maximum number of 6 MHz channels which can
be simultaneously carried on a CATV system.
Channel Code – A modulation technique that converts raw data into a
signal that can be recorded or transmitted by radio or cable.
Channel Coding – a) Describes the way in which the 1s and 0s of the
data stream are represented on the transmission path. b) Refers to any
processing to use a particular communication channel or medium.
Examples are forward error correction coding and prioritization of different
parts of the coded video bit stream.
Channel Data – The bits physically recorded on an optical disc after error-correction encoding and modulation. Because of the extra information and
processing, channel data is larger than the user data contained within it.
Channel Editor – The tool used to set keyframes and modify animation
curves of the channels.
Channel Hierarchy – A set of animation parameters arranged and displayed in a logical group. A group, or upper-level, channel is called a
folder. For example, the camera folder contains channels for camera
settings such as position, interest and focal length.
Channel Stuffing – Techniques for adding information to an NTSC
channel without increasing its bandwidth or eliminating its receiver-compatibility.
Channel-Compatible – An ATV transmission scheme that will fit within
the confines of a standard, 6 MHz NTSC transmission channel. A higher
level of channel-compatibility demands NTSC-like AM-VSB transmission so
that the ATV channel will not cause any interference to other channels that
would not otherwise be caused by an NTSC channel. Channel-compatible
ATV schemes need not necessarily also be receiver-compatible.
Chaoji VideoCD – Another name for Super VideoCD.
CHAP (Challenge Handshake Authentication Protocol) – Network
logon authentication. Three-way handshaking occurs. A link is established.
The server agent sends a message to the machine originating the link. This
machine then computes a hash function from the challenge and sends it to
the server. The server determines if this is the expected response and, if
so, authenticates the connection. The authentication procedure can take
place once or multiple times during a session and each time it takes place
the challenge can change.
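The challenge-response exchange above can be sketched as follows. This is a simplified illustration using SHA-256, not the MD5-based wire format of RFC 1994; the secret value and function name are assumptions:

```python
import hashlib
import os

def chap_response(secret: bytes, challenge: bytes) -> bytes:
    # The responder hashes the challenge together with the shared
    # secret; the secret itself never crosses the link.
    return hashlib.sha256(secret + challenge).digest()

# Server side: issue a fresh challenge for this handshake.
secret = b"shared-secret"       # known to both ends in advance
challenge = os.urandom(16)

# Client side: compute and send the response.
response = chap_response(secret, challenge)

# Server side: recompute the expected response and compare.
authenticated = response == chap_response(secret, challenge)
```

Because the challenge changes on each handshake, a captured response cannot be replayed later.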
C-HDTV (Cable HDTV) – A seemingly impossible concept calling for channel-compatible ATV transmission of 850 lines of both static and dynamic
horizontal and vertical resolution, among other characteristics. Its feasibility
is being studied at ATRP.
Check Box – Used to select from a list of related items. An “x” marks the
selected options in the corresponding box. (Select as many items as
desired – one, none, or all.)
Checkerboard – Automatic assembly process where all edits from mounted reels are made, and edits for unmounted reels are skipped. Example:
Reels 5, 29 and 44 are mounted on VTRs. The editing system looks at the
list and assembles all edits that have reel numbers 5, 29 and 44 assigned
to them, inserting these events at the exact spot on the master tape where
they belong.
Checkerboard Cutting – A method of assembling alternate scenes of
negative in A and B rolls allowing prints to be made without visible splices.
Checksum – a) An error-detecting scheme which is the sum of the data
values transmitted. The receiver computes the sum of the received data
values and compares it to the transmitted sum. If they are equal, the
transmission was error-free. b) Method used to verify the integrity of data
loaded into the computer. c) A simple check value of a block of data,
calculated by adding all the bytes in a block. It is easily fooled by typical
errors in data transmission systems; so that for most applications, a more
sophisticated system such as CRC is preferred.
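A minimal sketch of definition c), including the kind of transposition error that a simple additive checksum fails to catch:

```python
def checksum(block: bytes) -> int:
    # Sum of all byte values in the block, kept to 8 bits.
    return sum(block) & 0xFF

data = b"\x01\x02\x03"
swapped = b"\x03\x02\x01"  # a typical transposition error in transit
# checksum(data) == checksum(swapped): the error goes undetected,
# which is why a CRC is preferred for most applications.
```

Since addition is order-independent, any reordering of bytes leaves the checksum unchanged.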
Chapter – A chapter in a video disc is a section divider. Chapters are subsets of the video disc. In the DVD format, a chapter is a division of a title.
Chip – a) Common name for all ICs. b) An integrated circuit in which all
the components are micro-fabricated on a tiny piece of silicon or similar
material.
Chapter Stop – Programming that allows a viewer to jump immediately to
a particular part of a title. A book with chapters is the common metaphor
for a DVD.
Chip Chart – A black and white test chart. It contains “chips” in varying
intensities that make up a gray scale. It is used to check the gray scale
taking characteristics of a camera, including the parameter of gamma.
Character Generator (CG) – a) A computer used to electronically generate text and sometimes graphics for video titles or captions which can be
superimposed over a video signal. Text is usually entered via a keyboard,
allowing selection of various fonts, sizes, colors, styles and background
colors, then stored as multiple pages for retrieval. b) An electronic device
that generates video letters for use as captions in television productions.
The output of the character generator is often used as an external key
input to the switcher. c) Circuit that forms the letters or numbers on a
display or printer.
Chip Enable (CE) – See Chip Select.
Characteristic – An aspect or parameter of a particular television system
that is different from another system’s, but not necessarily a defect.
Characteristics include aspect ratio, colorimetry, resolution, and sound.
Charge Coupled Device (CCD) – a) A semiconductor device that converts optical images to electronic signals. CCDs are the most commonly
found type of image sensor in consumer camcorders and video cameras.
b) Serial storage technology that uses MOS capacitors. c) A solid-state
image sensor that converts light energy to electricity.
Chassis – The housing for removable disk modules. The chassis contains
a power supply, drives and connectors for each module.
Chip Select (CS) – Usually enables three-state drivers on the chip’s
output lines. Most LSI chips have one or more chip selects. The CS line is
used to select one chip among many.
Choose – Choose means make a choice to select an action that will take
place, i.e., press the left mouse button to bring up a pop-up menu, move
the cursor to highlight the command that you want to run, then release the
button.
Chroma – a) The depth or saturation of color. The saturation control
adjusts the amplitude of color of the switcher’s matte and background
outputs. b) The (M) NTSC or (B, D, G, H, I) PAL video signal contains two
pieces that make up what you see on the screen: the black and white
(luma) part, and the color part. Chroma is the color part. Chroma can
be further broken down into two properties of color: hue and saturation.
Chroma can also be described as a matrix, block or single pel representing
one of the two color difference signals related to the primary colors in
the manner defined in the bit stream. The symbols used for the color
difference signals are Cr and Cb.
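Hue and saturation can be read as the angle and magnitude of the two color difference signals treated as a phasor. A minimal sketch; the function name and normalized inputs are assumptions:

```python
import math

def chroma_polar(cb, cr):
    """Treat the two color difference signals as a 2D phasor:
    the magnitude is saturation, the angle is hue (in degrees)."""
    saturation = math.hypot(cb, cr)
    hue = math.degrees(math.atan2(cr, cb)) % 360.0
    return saturation, hue
```

Scaling both components equally changes saturation without moving the hue; rotating the pair changes hue without changing saturation.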
Video Terms and Acronyms
Chroma Bandpass – In an (M) NTSC or (B, D, G, H, I) PAL video signal,
the luma (black and white) and the chroma (color) information are combined together. To decode an NTSC or PAL video signal, the luma and chroma must be separated. The chroma bandpass filter removes the luma from
the video signal, leaving the chroma relatively intact. This works fairly well
except in certain images where the luma information and chroma information overlap, meaning chroma and luminance information occupy the same
frequency space. Depending on the filtering technique used, it can be difficult for the filter to separate the chroma from the luminance information.
This results in some luminance information being interpreted as chroma
and some chroma information being interpreted as luminance. The effects
of this improper separation of luminance and chroma are especially noticeable when the television scene contains objects with thin, closely spaced
black and white lines. As the camera moves across this object, a rainbow
of colors will appear in the object, indicating the improper separation of the luminance and chroma information.
Chroma Burst – See Color Burst.
Chroma Comp – A deliberate distortion of colors, usually used to
achieve unusual matching. By detecting the quadrant the color is in
(normally by deciding whether R-Y and B-Y are positive or negative), the amplitude of R-Y and B-Y just for colors in that quadrant can be changed; hence,
the hue and saturation can be changed for those colors without affecting
the others.
Chroma Key (CK) – a) A method of combining two video images. The
most common example of chroma keying is the news weather person
standing in front of a weather map. The details of the process are: a camera is pointed at the weather person, who is standing in front of a bright
blue or green background. The weather person and bright-blue or green
background image is fed along with the image of the weather map into a
computing device. Wherever the computing device sees the bright-blue or
green background, it displays the weather map. Wherever the computing
device does not see bright blue or green, it shows the weather person.
b) A process for controlling the overlay of one video image over another,
the areas of overlay being defined by a specific color or chrominance in
one of the images. More versatility is available when working in the digital
mode than in the analog since the color to define the effective mask
can be more precisely specified. Effective use of chroma key frequently
requires high definition in the color image and, therefore, full bandwidth R,
G, B is preferred. Linear key provides an alternate method for control of
the overlay. c) Chroma keying is the process of controlling the overlay of
one video image over another. The overlay is defined by a specific color or
chrominance in one of the images.
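The per-pixel decision the computing device makes can be sketched as a simple distance test against the key color. The pixel lists, tolerance value, and RGB key color here are illustrative assumptions, not a broadcast-quality keyer:

```python
def chroma_key(fg_pixels, bg_pixels, key=(0, 0, 255), tolerance=60):
    """Per pixel: wherever the foreground is close to the key color,
    show the background; otherwise keep the foreground."""
    out = []
    for fg, bg in zip(fg_pixels, bg_pixels):
        dist = sum((a - b) ** 2 for a, b in zip(fg, key)) ** 0.5
        out.append(bg if dist < tolerance else fg)
    return out

weather_person = [(200, 180, 170), (0, 0, 255)]  # skin tone, blue screen
weather_map = [(10, 200, 10), (10, 200, 10)]
composite = chroma_key(weather_person, weather_map)
```

The blue-screen pixel is replaced by the map pixel; the skin-tone pixel is kept. A linear keyer would instead blend the two images proportionally near the key color.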
Chroma Noise – a) Noise that manifests itself in a video picture as
colored snow. b) Colors appear to be moving on screen. In color areas of
picture, usually most noticeable in highly saturated reds.
Chroma Corrector – A device used to correct problems related to the
chroma of the video signal, as well as color balance and color noise.
Chroma Nulling – A process of generating a matte color 180 degrees out
of phase with a background color and summing them, hence removing all
color.
Chroma Crawl – An NTSC artifact also sometimes referred to as moving
dots, a crawling of the edges of saturated colors in an NTSC picture.
Chroma Crawl is a form of cross-luminance, a result of a television set
decoding color information as high-detail luminance information (dots).
Most ATV schemes seek to eliminate or reduce chroma crawl, possibly
because it is so immediately apparent.
Chroma Resolution – The amount of color detail available in a television
system, separate from any brightness detail. In almost all television
schemes, chroma resolution is lower than luminance resolution, matching
visual acuity. Horizontal chroma resolution is only about 12 percent of
luminance resolution in NTSC; in advanced schemes it is usually 50
percent. See also Resolution.
Chroma Demodulation – The process of removing the color video information from a composite video signal where chrominance information is
modulated on a color subcarrier. The phase reference of the subcarrier
is the color burst, which is a phase-coherent sample of the color subcarrier.
Chroma Simulcast – A type of scalability (which is a subset of SNR
scalability) where the Enhancement Layer(s) contain only coded refinement
data for the DC coefficients and all the data for the AC coefficients of the
chroma components.
Chroma Demodulator – Refer to the NTSC Composite Receiver Model at
the end of this glossary when studying this definition. After the (M) NTSC
or (B, D, G, H, I) PAL video signal makes its way through the Y/C separator,
by either the chroma bandpass, chroma trap, or comb filter method, the
colors are then decoded by the chroma demodulator. Using the recovered
color subcarrier, the chroma demodulators take the chroma output of the
Y/C separator and recover two color difference signals (typically I and Q
or U and V).
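The synchronous demodulation described above can be sketched numerically: multiplying the chroma signal by the recovered subcarrier (sine for one component, cosine for the other) and averaging recovers each color difference signal. A simplified sketch, assuming ideal sampling at four times the NTSC subcarrier and hypothetical U/V amplitudes:

```python
import math

FSC = 3_579_545.0   # NTSC color subcarrier frequency, Hz
FS = 4 * FSC        # sample rate: four samples per subcarrier cycle
N = 4000            # covers an integer number of subcarrier cycles

# Hypothetical color difference amplitudes modulated onto the subcarrier.
U, V = 0.3, -0.2
phase = [2 * math.pi * FSC * (i / FS) for i in range(N)]
chroma = [U * math.sin(p) + V * math.cos(p) for p in phase]

# Multiply by the recovered subcarrier and average: the in-phase
# component survives, the quadrature component averages to zero.
u_out = sum(2 * c * math.sin(p) for c, p in zip(chroma, phase)) / N
v_out = sum(2 * c * math.cos(p) for c, p in zip(chroma, phase)) / N
```

The averaging step plays the role of the low-pass filter in a hardware demodulator.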
Chroma Trap – In an (M) NTSC or (B, D, G, H, I) PAL video signal, the
luma (black and white) and the chroma (color) information are combined
together. To decode the video signal, the luma and chroma must be separated. The chroma trap is a method of doing this.
Chroma Flutter – A rapid coherent variation in the chroma saturation.
Chroma Format – Defines the number of chrominance blocks in a macroblock.
Chroma Gain – In video, the gain of an amplifier as it pertains to the
intensity of colors in the active picture.
Chrominance – a) The data that represents one of the two color-difference signals Cr and Cb. b) The color portion of a video signal that is a
mixture of hue and saturation, but not of luminance (brightness). Every
color signal has both chrominance and luminance. c) Chrominance refers
to the color information in a television picture. Chrominance can be further
broken down into two properties of color: hue and saturation. See Chroma.
Chrominance Component – A matrix, block or single sample representing one of the two color difference signals related to the primary colors in
the manner defined in the bitstream. The symbols used for the chrominance signals are Cr and Cb.
Chrominance Format – Defines the number of chrominance blocks in a
macroblock.
Chrominance Frequency Response – Describes the frequency response
of the chrominance channel.
Chrominance Luminance Delay Inequality – Appears as the change
in relative timing of the chrominance component relative to the luminance
component of the test signal when a test signal having defined chrominance and luminance components is applied to the sending end of a
television facility.
Chrominance to Luminance Gain Distortion – This is the difference
between the gain of the chrominance components and the gain of the
luminance components as they pass through the system. The amount of
distortion can be expressed in IRE, percent or dB. The number given is
negative for low chrominance and positive for high chrominance. This distortion most commonly appears as attenuation or peaking of the chrominance information that shows up in the picture as incorrect color saturation. Any signal containing a 12.5T sine-squared pulse with 3.579545 MHz
modulation can be used to measure chrominance-to-luminance gain distortions. Many combination signals such as FCC Composite and NTC-7
Composite contain this pulse.
Chrominance Luminance Gain Inequality – Appears as the change
in amplitude of the color component relative to the luminance component
(of the test signal) when a test signal having defined chrominance and
luminance components is applied to the sending end of a television facility.
Chrominance Nonlinear Gain – Present if chrominance gain is affected
by chrominance amplitude. Chrominance nonlinear gain distortion is
expressed in IRE or percent. It should be measured at different APL levels
and typically the worst error is quoted. Picture effects include incorrect
color saturation due to nonlinear gain in relatively high amplitude chrominance signals. The modulated pedestal test signal is used to test for this distortion.
Chrominance Nonlinear Phase – This distortion is present if a signal’s
chrominance phase is affected by chrominance amplitude. These phase
errors are a result of the system’s inability to uniformly process all amplitudes of high-frequency chrominance information. Chrominance nonlinear
phase distortion is expressed in degrees of shift of subcarrier phase. This
parameter should be measured at different APL (Average Picture Level); the
worst result is quoted as the amount of distortion. Chrominance nonlinear
phase distortion will cause picture hue to shift as color saturation increases. A modulated pedestal signal is used to measure this distortion. The
modulated pedestal signal consists of three chrominance packets with the
same phase and luminance level but each chrominance packet has
increasing amplitudes of 20, 40 and 80 IRE.
Chrominance Signal – The high-frequency portion of the video signal
which is obtained by quadrature amplitude modulation (QAM) of a
4.43 MHz (PAL) or 3.579545 MHz (NTSC) subcarrier with the R-Y and B-Y
color difference signals.
Chrominance Subsampling – Reduction of the amount of color information by either rejecting chrominance samples or by averaging adjacent
chrominance samples.
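A minimal sketch of the averaging variant, assuming a toy 4x4 chroma plane and 2x2 block averaging (one common 4:2:0-style arrangement):

```python
import numpy as np

# Hypothetical 4x4 plane of one color-difference component (Cb)
cb = np.arange(16, dtype=float).reshape(4, 4)

# Average each 2x2 block of chroma samples, halving resolution both
# horizontally and vertically (the averaging variant of subsampling)
cb_sub = cb.reshape(2, 2, 2, 2).mean(axis=(1, 3))
```

Each output sample replaces four input samples, quartering the amount of color information.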
Chrominance to Burst Phase – The difference between the expected
phase and the actual phase of the chrominance portion of the video signal
relative to burst phase.
Chrominance to Luminance Delay Distortion – The difference between
the time it takes for the chrominance portion of the signal to pass through
a system and the time it takes for the luminance portion to pass through.
The amount of distortion is typically expressed in nanoseconds. The number is positive for delayed chrominance and negative for advanced chrominance. This distortion manifests itself in the picture as smearing or bleeding of the color particularly at the edges of objects in the picture. It may
also cause poor reproduction of sharp luminance transitions. Any signal
containing a 12.5T sine-squared pulse with 3.579545 MHz modulation can
be used to measure chrominance-to-luminance delay distortions. Many
combination signals such as FCC Composite and NTC-7 Composite contain
this pulse.
Chrominance to Luminance Intermodulation – This distortion is also
known as crosstalk or cross-modulation. It is present when luminance
amplitude is affected by the superimposed chrominance. The luminance
change may be caused by clipping of high-amplitude chrominance peaks,
quadrature distortion or crosstalk. The modulated pedestal is used to test
for this distortion. Distortions can be expressed as: IRE with the pedestal
level normalized to 50 IRE, as a percentage of the pedestal level, as a percentage of the measured white bar amplitude, as a percentage of 714 mV.
These definitions will yield different results under some conditions so it is
very important to standardize on a single method of making intermodulation measurements. Picture effects include unwarranted brightness variations due to color saturation changes affecting the luminance.
Chromium Dioxide (CrO2) – A modern magnetic particle oxide of the
high energy type used in magnetic recording tape. Chromium dioxide is a
highly acicular particle with the crystal structure of rutile. Tapes made of
CrO2 exhibit a coercivity of 425 to 475 oersteds.
Chunking – The transfer of media files in segments so other workgroup
users can access and use the media before complete files have been sent.
CI (Common Interface) – CI is used for satellite receivers. Manufacturers
have agreed to use a common interface for satellite decoding cards. For
CI these cards (called CAMs) look like PCMCIA cards, as seen with laptops,
which can hold one smart card. This smart card holds the keys to the
subscribed service. The CAM holds the hardware and software required
for decoding the data stream (after decoding this is video and audio).
CIE (Commission Internationale de l’Eclairage) – French acronym for
the International Illumination Commission. An international standardization
organization that created the chromaticity diagrams (color charts) used
to define the colorimetry of all television systems. The CIE is concerned
with methods of measurement plus recommended practices and standards
concerning the properties and applications of light.
CIE 1931 Standard Colorimetric System (XYZ) – A system for determining the tristimulus values of any spectral power distribution using the
set of reference color stimuli X, Y, Z, and the three CIE color matching
functions x(lambda), y(lambda), z(lambda), adopted by the CIE in 1931.
CIELab Color Space – Three-dimensional, approximately uniform color
space produced by plotting in rectangular coordinates the quantities L*, a*, b*
defined by the following equations. X, Y, Z describe the color stimulus considered, and Xn, Yn, Zn describe a specified white achromatic stimulus
(i.e., white reference). Equal distances in the color space represent approximately equal color differences.
L* = 116 (Y/Yn)^(1/3) – 16, Y/Yn > 0.008 856
a* = 500 [(X/Xn)^(1/3) – (Y/Yn)^(1/3)], X/Xn > 0.008 856
b* = 200 [(Y/Yn)^(1/3) – (Z/Zn)^(1/3)]
CIELuv Color Space – Three-dimensional, approximately uniform color
space produced by plotting in rectangular coordinates the quantities L*, u*, v*
defined by the following equations. Y, u′, v′ describe the color stimulus
considered, and Yn, u′n, v′n describe a specified white achromatic stimulus (white reference). The coordinates of the associated chromaticity diagram are u′ and v′. L* is the approximate correlate of lightness; u* and
v* are used to calculate an approximate correlate of chroma. Equal distances in the color space represent approximately equal color differences.
L* = 116 (Y/Yn)^(1/3) – 16, Y/Yn > 0.008 856
u* = 13 L* (u′ – u′n)
v* = 13 L* (v′ – v′n)
Circuit Switching – A dedicated path is formed for the duration of the
communication through switching nodes between a number of locations.
CK – See Chroma Key.
Cladding – The outer part of a fiber optics cable, which is also a fiber
but with a smaller material density than the center core. It enables a total
reflection effect so that the light transmitted through the internal core
stays inside.
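The CIELab equations above can be sketched in code. The D65 white-point defaults and the linear branch used below the 0.008856 threshold are conventional CIE details added for illustration; they are not taken from this glossary:

```python
def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    # Defaults are the D65 white point (an illustrative assumption)
    def f(r):
        # Cube root above the 0.008856 threshold; the linear branch below
        # it is the standard CIE extension for very dark stimuli
        return r ** (1.0 / 3.0) if r > 0.008856 else 7.787 * r + 16.0 / 116.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116.0 * fy - 16.0             # lightness
    a = 500.0 * (fx - fy)             # red-green axis
    b = 200.0 * (fy - fz)             # yellow-blue axis
    return L, a, b
```

Feeding the white reference back in yields L* = 100 and a* = b* = 0, the achromatic white point.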
CIF – See Common Image Format, Common Interchange Format, Common
Interface Format or Common Intermediate Format.
Cinch – Interlayer slippage of magnetic tape in roll form, resulting in
buckling of some strands of tape. The tape will in many cases fold over
itself causing permanent vertical creases in the tape. Also, if not fixed, it
will cause increased dropouts. See Windowing.
Cinch Marks – Short scratches on the surface of a motion picture film,
running parallel to its length; these are caused by improper winding of the
roll, permitting one coil of film to slide against another.
Cinching – a) Longitudinal slippage between the layers of tape in a tape
pack when the roll is accelerated or decelerated. b) The wrinkling, or
folding over, of tape on itself in a loose tape pack. Normally occurs when
a loose tape pack is stopped suddenly, causing outer tape layers to slip,
which in turn causes a buckling of tape in the region of slip. The result
is large dropouts or high error rates. c) Videotape damage due to creasing
or folding.
CinemaScope – a) Trade name of a system of anamorphic widescreen
presentation. b) The first modern widescreen movie format, achieving a
2.35:1 aspect ratio through the use of a 2:1 anamorphic squeeze.
Cinepak – Cinepak is a compression scheme dedicated to PC environments, based on a vector quantization algorithm. Cinepak is a highly
asymmetrical algorithm, i.e., the encoding takes much more processing
power than the decoding process. The Cinepak algorithm was developed by
Radius and is licensed by a range of companies. Both Microsoft Windows
95 and Apple’s QuickTime have built-in Cinepak support, for instance.
Cinex Strip – A short test print in which each frame has been printed at a
different exposure level.
CIRC (Cross-Interleaved Reed Solomon Code) – An error-correction
coding method which overlaps small frames of data.
Circle Take – A take from a film shot that has been marked for use or
printing by a circled number on the camera report.
Clamp – a) A device which functions during the horizontal blanking or
sync interval to fix the level of the picture signal at some predetermined
reference level at the beginning of each scanning line. b) Also known as
a DC-restoration circuit; the term can also refer to a switch used within
the DC-restoration circuit. In the context of DC restoration, the usual
term is “clamping”; in the switch context, it is simply “clamp”.
Clamper – A device which functions during the horizontal blanking or
sync interval to fix the level of the picture signal at some predetermined
reference level at the beginning of each scanning line.
Clamping – a) The process that establishes a fixed level for the picture
signal at the beginning of each scanning line. b) The process whereby a
video signal is referenced or “clamped” to a DC level to prevent pumping
or bouncing under different picture levels. Without clamping, a dark picture
would bounce if a white object appeared. Changes in APL would cause
annoying pulsations in the video. Clamping is usually done at zero DC level
on the breezeway of the back porch of horizontal sync. This is the most
stable portion of a TV picture.
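A toy sketch of per-line clamping, assuming the back-porch samples occupy a known slice at the start of each digitized line (the slice position and target level here are hypothetical):

```python
import numpy as np

def clamp_line(line, porch=slice(0, 16), target=0.0):
    # Average the back-porch samples (assumed here to be the first 16
    # samples of the line) and shift the whole line so they sit at the
    # target DC level
    offset = np.mean(line[porch]) - target
    return line - offset

# A line whose DC level has drifted up by 0.1
line = np.concatenate([np.full(16, 0.1), np.full(84, 0.6)])
clamped = clamp_line(line)
```

After clamping, the porch sits at the target level and the rest of the line is shifted by the same offset, removing APL-dependent bounce.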
Clamping Area – The area near the inner hole of a disc where the drive
grips the disc in order to spin it.
Class – In the object-oriented methodology, a class is a template for a set
of objects with similar properties. Classes in general, and MPEG-4 classes
in particular, are organized hierarchically. This hierarchy specifies how a
class relates to others, in terms of inheritance, association or aggregation,
and is called a Class Library.
Clean List (Clean EDL) – An edit decision list (EDL) used for linear editing
that has no redundant or overlapping edits. Changes made during offline
editing often result in edits that overlap or become redundant. Most computer-based editing systems can clean an EDL automatically. Contrast with
Dirty List (Dirty EDL).
Clean Rooms – Rooms whose cleanliness is measured by the number
of particles of a given size per cubic foot of room volume. For example,
a class 100,000 clean room may have no more than 100,000 particles
one-half micron or larger per cubic foot. Similarly, for class 10,000 and
class 100 rooms. In addition, a class 10,000 room may have no more
than 65 five-micron particles per cubic foot, while class 100,000 may
have no more than 700.
Clear – Set a circuit to a known state, usually zero.
Clear Channel – AM radio station allowed to dominate its frequency with
up to 50 kW of power; their signals are generally protected for distances of
up to 750 miles at night.
Click – To hold the mouse still, then press and immediately release a
mouse button.
Click and Drag – A computer term for the user operation of clicking on
an item and dragging it to a new location.
Cliff Effect – An RF characteristic that causes DTV reception to change
dramatically with a small change in power. At the fringes of reception, current analog TV pictures degrade by becoming “snowy”. With DTV, relatively
small changes in received power in weak signal areas will cause the DTV
picture to change from perfect to nothing and hence the name, cliff effect.
Clock – Reference timing source in a system. A clock provides regular
pulses that trigger or synchronize events.
Clip – a) A video file. b) In keying, the trigger point or range of a key
source signal at which the key or insert takes place. c) The control that
sets this action. To produce a key signal from a video signal, a clip control
on the keyer control panel is used to set a threshold level to which the
video signal is compared. d) In digital picture manipulators, a manual
selection that blanks portions of a manipulated image that leave one side
of the screen and “wraps” around to enter the other side of the screen.
e) In desktop editing, a pointer to a piece of digitized video or audio that
serves as source material for editing.
Clock Doubling – Many processor chips double the frequency of the clock
for central processing operations while maintaining the original frequency
for other operations. This improves the computer’s processing speed without requiring expensive peripheral chips like high-speed DRAM.
Clock Frequency – The master frequency of periodic pulses that are used
to synchronize the operation of equipment.
Clock Jitter – a) Timing uncertainty of the data cell edges in a digital
signal. b) Undesirable random changes in clock phase.
Clock Phase Deviation – See Clock Skew.
Clock Recovery – The reconstruction of timing information from digital data.
Clip (Insert Adjust) – To produce a key signal from a video signal, a clip
insert control on the front panel is used to set a threshold level to which
the video signal is compared. In luminance keying, any video (brightness)
level above the clip level will insert the key; any level below the clip level
will turn the key off. The clip level is adjusted to produce an optimum key
free of noise and tearing. In the Key Invert mode, this clip relationship is
reversed, allowing video below the clip level to be keyed in. This is used for
keying from dark graphics on a light background.
Clock Reference – A special time stamp that conveys a reading of a time clock.
Clip Level – The level that determines at what luminance a key will cut its
hole. On AVC switchers, these are the insert and border adjust controls. On
4100 series, the corresponding controls are foreground and background.
See Bi-Level Keyer.
Close Miking – Placing a mike close to the sound source in order to pick
up mainly direct sound and avoid picking up reverberant sound.
Clip Properties – A clip’s specific settings, including frame size, compressor, audio rate, etc.
Clip Sheet – A nonlinear editing term for the location of individual
audio/video clips (or scenes). Also known as clip bin.
Clipping – a) An electronic limit usually imposed in cameras to avoid
overly bright or dark signals. When improperly applied, it can result in loss of
picture information in very bright or very dark areas. Also used in switchers
to set the cutoff point for mixing video signals. b) The electronic process
of shearing off the peaks of either the white or black excursions of a video
signal for limiting purposes. Sometimes, clipping is performed prior to
modulation, and sometimes to limit the signal, so it will not exceed a predetermined level.
Clipping (Audio) – When recording audio, if an input signal is louder than
can be properly reproduced by the hardware, the sound level will be cut off
at its maximum. This process often causes distortion in the sound, so it is
recommended that the input signal level be reduced in order to avoid this.
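A minimal numeric illustration of hard clipping, assuming a full-scale range of ±1.0:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
x = 1.5 * np.sin(2 * np.pi * 440 * t)   # tone exceeding full scale (+/-1.0)

# Hard clipping: peaks beyond full scale are cut off at the maximum,
# flattening the waveform and introducing harmonic distortion
clipped = np.clip(x, -1.0, 1.0)
```

The flattened peaks are what produce the audible distortion; reducing the input gain before recording avoids it.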
Clipping (Video) – With video signals, clipping refers to the process of
recording a reduced image size by ignoring parts of the source image. Also
referred to as cropping.
Clipping Logic – Circuitry used to prevent illegal color conversion. Some
colors can be legal in one color space but not in another. To ensure a converted color is legal in one color format after being converted (transcoded)
from another, the clipping logic clips the information until a legal color
is produced.
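A sketch of the idea using a Y'CbCr-to-RGB conversion; the full-range BT.601-style coefficients are an illustrative choice, and the clamping to 0-255 stands in for the clipping logic:

```python
def ycbcr_to_rgb_clipped(y, cb, cr):
    # Full-range BT.601-style conversion (illustrative coefficients);
    # some Y'CbCr triplets map outside 0..255 RGB and must be clipped
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clip = lambda v: max(0.0, min(255.0, v))
    return clip(r), clip(g), clip(b)
```

A neutral input passes through unchanged, while a bright input with maximum Cr computes R well above 255 and is clipped back to the legal limit.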
Clock Skew – A fixed deviation from proper clock phase that commonly
appears in D1 digital video equipment. Some digital distribution amplifiers
handle improperly phased clocks by reclocking the output to fall within D1
specifications.
Clock Timecode – See Drop-Frame Timecode.
Closed Captioning – Service that provides decoded text information
transmitted with the audio and video signal and displays it at the bottom
of the display. See (M) NTSC EIA-608 specification. Transmitted on line
21 of NTSC/525 transmissions, it contains subtitling information only. For
HD, see the EIA-708 specification. CC has no support for block graphics or
multiple pages, but it can support eight colors and the use of an italic typeface.
Frequently found on pre-recorded VHS cassettes and LDs, also used
in broadcast. Also found on PAL/625 pre-recorded VHS cassettes in a
modified version.
Closed Circuit – The method of transmission of programs or other material that limits its target audience to a specific group rather than the general public.
Closed Circuit TV (CCTV) – a) A video system used in many commercial
installations for specific purposes such as security, medical and educational. b) A television system intended for only a limited number of viewers, as
opposed to broadcast TV.
Closed GOP – A group of pictures in which the last pictures do not need
data from the next GOP for bidirectional coding. Closed GOP is used to
make a splice point in a bit stream.
Closed Subtitles – See Subtitles.
Closed-Loop – Circuit operating with feedback, whose inputs are a
function of its outputs.
Closed-Loop Drive – A tape transport mechanism in which the tape’s
speed and tension are controlled by contact with a capstan at each end of
the head assembly.
Closeup (CU) – A camera shot that is tightly framed, with its figure or
subject filling the screen. Often qualified as medium closeup or extreme
closeup. See also ECU.
CLUT – See Color Lookup Table.
CLV (Constant Linear Velocity) – Spiral format of audio compact disks
and some video laser disks.
C-MAC – A MAC (Multiplexed Analog Component) with audio and data time
multiplexed after modulation, specified for some European DBS. See also MAC.
C-Mode – A non-sequential method of assembly in which the edit decision
list (EDL) is arranged by source tape number and ascending source timecode. See also A-Mode, B-Mode, D-Mode, E-Mode, Source Mode.
C-Mount – The first standard for CCTV lens screw mounting. It is defined
with a thread of 1’’ (25.4 mm) in diameter and 32 threads/inch, and a
back flange-to-CCD distance of 17.526 mm (0.69’’). The C-mount description applies to both lenses and cameras. C-mount lenses can be put on
both C-mount and CS-mount cameras; only in the latter case is an adaptor
required.
CMTT – French acronym for the Mixed Telephone and Television
Committee, an international standardization committee concerned with
such issues as B-ISDN.
CMYK – Refers to the colors that make up the subtractive color system
used in pigment printers: cyan, magenta, yellow and black. In the CMYK
subtractive color system these pigments or inks are applied to a white
surface to filter that color light information from the white surface to create
the final color. Black is used because cyan, magenta and yellow cannot be
combined to create a true black.
CMYK Color Space – A subtractive color space with cyan, magenta, and
yellow as primary color set with an optional addition of black (K). For
such a color set subtractive color mixture applies. The CMYK values used
represent the amount of colorant placed onto the background medium.
They include the effects of dot gain.
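A simplified RGB-to-CMYK conversion sketch, ignoring dot gain and device profiles (which real printing workflows must account for):

```python
def rgb_to_cmyk(r, g, b):
    # r, g, b in 0..1. K removes the shared dark component so C, M, Y
    # need not approximate black on their own; dot gain is ignored.
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```

Pure red maps to full magenta plus full yellow with no black, while pure black maps entirely onto the K channel.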
CNG (Comfort Noise Generator) – During periods of transmit silence,
when no packets are sent, the receiver has a choice of what to present
to the listener. Muting the channel (playing absolutely nothing) gives the
listener the unpleasant impression that the line has gone dead. A receiver-side CNG generates a local noise signal that it presents to the listener
during silent periods. The match between the generated noise and the true
background noise determines the quality of the CNG.
Coating Thickness – The thickness of the magnetic coating applied to
the base film of a mag tape. Modern tape coatings range in thickness from
170 to 650 microinches. Coating thickness is normally optimized for the
intended application. In general, thin coatings give good resolution at the
expense of reduced output at long wavelengths; thick coatings give a high
output at long wavelengths at the expense of degraded resolution.
Coaxial Cable – a) A transmission line with a concentric pair of signal
carrying conductors. There is an inner conductor and an outer conductor
metallic sheath. The sheath aids in preventing external radiation from
affecting the signal on the inner conductor and minimizes signal radiation
from the transmission line. b) A large cable composed of fine foil wires
that is used to carry high bandwidth signals such as cable TV or cable
modem data streams. c) The most common type of cable used for copper
transmission of video signals. It has a coaxial cross-section, where the
center core is the signal conductor, while the outer shield protects it from
external electromagnetic interference.
Cobalt Doped Oxide – A type of coating used on magnetic recording
tape. This is normally a gamma ferric oxide particle which has been doped
with cobalt to achieve a higher coercivity. Modern forms of this oxide are
acicular and have been used to make tapes with coercivities in excess of
1000 oersteds.
Co-Channel Interference – Interference caused by two or more television
broadcast stations utilizing the same transmission channel in different
cities. It is a form of interference that affects only broadcast television.
Code – a) In computers, the machine language itself, or the process of
converting from one language to another. b) A plan for representing each
of a finite number of values or symbols as a particular arrangement or
sequence of discrete conditions or events. To encode is to express given
information by means of a code. c) A system of rules defining a one-to-one
correspondence between information and its representation by characters,
symbols, or signal elements.
CODEC (Coding/Decoding) – a) The algorithm used to capture analog
video or audio onto your hard drive. b) Used to implement the physical
combination of the coding and decoding circuits. c) A device for converting
signals from analog to coded digital and then back again for use in digital
transmission schemes. Most codecs employ proprietary coding algorithms
for data compression. See Coder-Decoder.
Coded Audiovisual Object (Coded AV Object) – The representation of
an AV object as it undergoes parsing and decompression that is optimized
in terms of functionality. This representation consists of one stream
object, or more in the case of scalable coding. In this case, the coded representation may consist of several stream objects associated to different
scalability layers.
CNR (Carrier to Noise Ratio) – Indicates how far the noise level is down
relative to the carrier level.
Coded Bitstream – A coded representation of a series of one or more
pictures and/or audio signals.
Coating – The magnetic layer of a magnetic tape, consisting of oxide
particles held in a binder that is applied to the base film.
Coded Data – Data elements represented in their encoded (compressed) form.
Coating Resistance – The electrical resistance of the coating measured
between two parallel electrodes spaced a known distance apart along the
length of tape.
Coded Description – A description that has been encoded to fulfill
relevant requirements such as compression efficiency, error resilience,
random access, etc.
Coded Order – The order in which the pictures are stored and decoded.
This order is not necessarily the same as the display order.
Coded Orthogonal Frequency Division Multiplex – A modulation
scheme used for digital transmission that is employed by the European
DVB system. It uses a very large number of carriers (hundreds or thousands), each carrying data at a very low rate. The system is relatively
insensitive to Doppler frequency shifts, and can use multipath signals constructively. It is, therefore, particularly suited for mobile reception and for
single-frequency networks. A modified form of OFDM.
Coded Picture – An MPEG coded picture is made of a picture header, the
optional extensions immediately following it, and the following compressed
picture data. A coded picture may be a frame picture or a field picture.
Coded Representation – A data element as represented in its encoded form.
Coded Video Bitstream – A coded representation of a series of one or
more VOPs as defined in this specification.
Code-Excited Linear Prediction – a) Audio encoding method for low bit
rate codecs. b) CELP is a speech coding algorithm that produces high
quality speech at low rates by using perceptual weighting techniques.
Coder-Decoder – Used to implement the physical combination of the
coding and decoding circuits.
Coding – Representing each level of a video or audio signal as a number,
usually in binary form.
Coding Parameters – The set of user-definable parameters that characterize a coded video bit stream. Bit streams are characterized by coding
parameters. Decoders are characterized by the bit streams that they are
capable of decoding.
Coefficient – a) A number (often a constant) that expresses some property of a physical system in a quantitative way. b) A number specifying the
amplitude of a particular frequency in a transform.
Coefficient of Friction – The tangential force required to maintain
(dynamic coefficient) or initiate (static coefficient) motion between two
surfaces divided by the normal force pressing the two surfaces together.
Coefficient of Hygroscopic Expansion – The relative increase in the
linear dimension of a tape or base material per percent increase in relative
humidity measured in a given humidity range.
Coefficient of Thermal Expansion – The relative increase in the linear
dimension of a tape or base material per degree rise in temperature
(usually Fahrenheit) measured in a given temperature range.
Coefficient Recording – A form of data bit-rate reduction used by Sony
in its digital Betacam format and with its D-2 component recording accessory, the DFX-C2. Coefficient recording uses a discrete cosine transformation and a proprietary information handling scheme to lower the data rate
generated by a full bit-rate component digital signal. Such a data bit-rate
reduction system allows component digital picture information to be
recorded more efficiently on VTRs.
Coercivity – Measured in oersteds, the measurement of a magnetic
characteristic. The demagnetizing force required to reduce the magnetic
induction in a magnetic material to zero from its saturated condition.
COFDM (Coded Orthogonal Frequency Division Multiplex) – A digital
coding scheme for carrying up to 6875 single carriers 1 kHz apart which
are QAM modulated with up to 64 states. “Coded” means that the data to
be modulated has error control. Orthogonality means that the spectra of
the individual carriers do not influence each other as a spectral maximum
always coincides with a spectrum zero of the adjacent carriers. A single-frequency network is used for the actual transmission.
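The orthogonality property can be checked numerically; the 64-sample symbol length and carrier indices below are arbitrary illustrative values:

```python
import numpy as np

N = 64                                  # samples per symbol (illustrative)
n = np.arange(N)
c1 = np.exp(2j * np.pi * 3 * n / N)     # carrier at bin 3
c2 = np.exp(2j * np.pi * 4 * n / N)     # adjacent carrier at bin 4

# Over one symbol, distinct carriers are orthogonal (inner product ~ 0),
# while each carrier correlates perfectly with itself
cross = abs(np.vdot(c1, c2)) / N
self_corr = abs(np.vdot(c1, c1)) / N
```

The near-zero cross term is the numerical counterpart of each carrier's spectral maximum falling on the spectral zeros of its neighbors.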
Coherent – Two or more periodic signals that are phase-locked to a common submultiple. The subcarrier of a studio quality composite video signal
is coherent with its sync.
Collision – The result of two devices trying to use a shared transmission
medium simultaneously. The interference ruins both signals, requiring both
devices to retransmit the data lost due to collision.
Color Back Porch – Refer to the Horizontal Timing discussion.
Color Background Generator – a) A circuit that generates a full-field
solid color for use as a background in a video picture. b) A device that
produces a full-frame color, normally used as a background for various
graphics effects, the output of which is selectable on the last button of all
switcher buses.
Color Balance – Adjustment of color in the camera to meet a desired
standard, i.e., color bar, sponsor’s product, flesh tones. Also may be
referred to as “white balance”.
Color Bar Test Signal – Originally designed to test early color camera
encoders, it is commonly (albeit incorrectly) used as a standard test signal.
The saturated color bars and luminance gray bar are usually used to check
monitors for color accuracy. The saturated color bars are a poor test of any
nonlinear circuit or system and at best, show video continuity. Testing a
video system using color bars is analogous to testing an audio system
using a simple set of monotonal frequencies. Many color TV test signals
have been developed to accurately assess video processing equipment
such as ADCs, compressors, etc.
Color Bars – A video test signal widely used for system and monitor
setup. The test signal, typically containing eight basic colors: white, yellow,
cyan, green, magenta, red, blue and black, is used to check chrominance
functions of color TV systems. There are two basic types of color bar signals in common use. The terms “75% bars” and “100% bars” are generally
used to distinguish between the two types. While this terminology is widely
used, there is often confusion about exactly which parameters the 75%
versus 100% notation refer to. a) RGB Amplitudes – The 75%/100%
nomenclature specifically refers to the maximum amplitudes reached by
the Red, Green and Blue signals when hey form the six primary and secondary colors required for color bars. For 75% bars, the maximum amplitude of the RGB signals is 75% of the peak white level. For 100% bars, the
RGB signals can extend up to 100% of peak white. Refer to the following
two figures. b) Saturation – Both 75% and 100% amplitude color bars
are 100% saturated. In the RGB format, colors are saturated if at least one
of the primaries is at zero. Note in the two associated figures that the zero
signal level is at setup (7.5 IRE) for NTSC. c) The Composite Signal –
In the composite signal, both chrominance and luminance amplitudes
vary according to the 75%/100% distinction. However, the ratio between
chrominance and luminance amplitudes remains constant in order to
maintain 100% saturation. d) White Bar Levels – Color bar signals can
also have different white bar levels, typically either 75% or 100%. This
parameter is completely independent of the 75%/100% amplitude distinction and either white level may be associated with either type of bars.
e) Effects of Setup – Because of setup, the 75% signal level for NTSC is
at 77 IRE. The maximum available signal amplitude is 100-7.5 or 92.5 IRE.
75% of 92.5 IRE is 69.4 IRE, which when added to the 7.5 IRE pedestal
yields a level of approximately 77 IRE.
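The setup arithmetic in (e) can be written out directly:

```python
setup = 7.5                      # NTSC setup pedestal (IRE)
available = 100.0 - setup        # 92.5 IRE of usable amplitude
bar_75 = setup + 0.75 * available
```

bar_75 works out to 76.875 IRE, which rounds to the approximately 77 IRE quoted above.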
Color Demodulator – See Chroma Demodulators.
Color Decoder – a) A device that divides a video signal into its basic
color components. In TV and video, color decoding is used to derive signals
required by a video monitor from the composite signals. b) Video function
that obtains the two color difference signals from the chrominance part of
an NTSC/PAL signal. See Chroma Demodulators.
Color Depth – The number of levels of color (usually including luma and
chroma) that can be represented by a pixel. Generally expressed as a
number of bits or a number of colors. The color depth of MPEG video in
DVD is 24 bits, although the chroma component is shared across 4 pixels
(averaging 12 actual bits per pixel).
Color Cycling – A means of simulating motion in a video by changing colors.
Color Difference Signals – Signals used by color television systems to
convey color information (not luminance) in such a way that the signals
go to zero when there is no color in the picture. Color difference signal
formats include: R-Y and B-Y; I and Q; U and V; PR and PB. The following
figure shows general color difference waveforms along with the Y signal.
The color difference signals shown above must first be converted to their
RGB form before they can recreate the picture. Refer to the RGB discussion
to view what the RGB version of the color bar signal looks like. The color
difference signals in the figure described above are centered around 0
volts but this is only true for the SMPTE/EBU N10 standard. The NTSC
and M11 color difference standards have the most negative portions of
the color difference signals riding on a voltage of 0 volts or close to it.
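As an illustrative sketch (using the standard NTSC/Rec. 601 luma weights, which the entry itself does not quote), the difference signals can be formed from R'G'B' and shown to go to zero on a neutral gray, as the definition states:

```python
# Form Y and the B-Y / R-Y color difference signals from normalized RGB.
def color_difference(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # NTSC/601 luma weights
    return y, b - y, r - y                  # Y, B-Y, R-Y

# On a neutral gray (R = G = B) the difference signals vanish.
y, b_y, r_y = color_difference(0.5, 0.5, 0.5)
print(y, round(b_y, 6), round(r_y, 6))
```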
Color Black – A composite video signal that produces a black screen
when viewed on a television receiver.
Color Burst – a) The portion of a color video signal that resides on the
backporch between the breezeway and the start of active video which
contains a sample of the color subcarrier used to add color to a signal. It is
used as a color synchronization signal to establish a reference for the color
information following it and is used by a color monitor to decode the color
portion of a video signal. The color burst acts as both amplitude and phase
reference for color hue and intensity. The color oscillator of a color television receiver is phase locked to the color burst. b) A nine-cycle NTSC
burst of color subcarrier which is imposed on blanking after sync. Color
burst serves as the reference for establishing the picture color.
Color Carrier – The sub-frequency in a color video signal (4.43 MHz for
PAL) that is modulated with the color information. The color carrier frequency is chosen so its spectrum interleaves with the luminance spectrum
with minimum interference.
(Figure: color difference waveforms, axes labeled PB/B-Y/V/Q and PR/R-Y/U/I.)
Color Edging – Spurious colors appearing along the edges of color
pictures, but that do not have a color relationship to the picture.
Color Encoder – Performs the reverse function of the chroma demodulator in that it combines the two color difference signals into the single
chroma signal.
Color Coordinate Transformation – Computation of the tristimulus
values of colors in terms of one set of primaries from the tristimulus values
of the same colors in another set of primaries. Note: This computation may
be performed electrically in a color television system.
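As a hedged sketch of such a computation, tristimulus values in one primary set are re-expressed in another by a 3x3 matrix multiply; the matrix below is purely illustrative, not taken from any standard:

```python
# Illustrative color coordinate transformation: tristimulus values in one
# set of primaries re-expressed in another set via a 3x3 matrix.
M = [[1.0, 0.1, 0.0],
     [0.0, 0.9, 0.1],
     [0.0, 0.0, 1.0]]   # made-up coefficients for demonstration

def transform(tristimulus, matrix):
    """Apply a 3x3 primary-conversion matrix to a tristimulus triple."""
    return [sum(m * t for m, t in zip(row, tristimulus)) for row in matrix]

print(transform([1.0, 1.0, 1.0], M))
```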
Color Field – In the NTSC system, the color subcarrier is phase-locked to
the line sync so that on each consecutive line, subcarrier phase is changed
180º with respect to the sync pulses. In the PAL system, color subcarrier
phase moves 90º every frame. In NTSC this creates four different field
types, while in PAL there are eight. In order to make clean edits, alignment
of color field sequences from different sources is crucial.
Color Correction – a) A process by which the coloring in a television
image is altered or corrected electronically. Care must be taken to ensure
that the modified video does not exceed the limits of subsequent processing or transmission systems. b) The adjustment of a color reproduction
process to improve the perceived-color conformity of the reproduction to
the original.
Color Frame – a) In NTSC color television, it takes four fields to complete
a color frame. In PAL, it takes eight fields. b) Polarity of the video frame.
Color frame must alternate polarity with each frame to keep the video
signal in phase. c) A sequence of video fields required to produce a complete pattern of both field and frame synchronization and color subcarrier
synchronization. The NTSC system requires four fields; PAL requires eight.
www.tektronix.com/video_audio 47
Color Frame Timed – See the Color Framed discussion.
Color Framed – Two signals are said to be color framed at a switcher or
router when their field 1, line 10 events (field 1, line 7 in PAL) occur at
the same time at the input to the switcher or router. To prevent picture
distortions when changing signals at a switcher or router, the signals must
be color framed.
Color Gamut – In a system employing three color primaries to encode
image color, each primary can be located on a CIE chromaticity diagram
and these points connected as a plane figure. If the apexes are then connected with an appropriate value on the white point axis, a so) id figure is
produced enclosing the color gamut for that system. (On the CIE chromaticity diagrams, the points in x, y, z space approximate an inverted tetrahedron. In u, v, w space, they become a somewhat irregular four-cornered
solid.) Colors within the color gamut solid volume can be reproduced by the
system as metameric matches. Colors outside the color gamut solid volume
cannot be matched. Note: The area of the cross-section from the color
gamut solid is a function of the luminance. Although it is advantageous to
have the widest possible color gamut for the ability to provide metameric
matches for the largest number of colors, the required transformations
from origination colorimetry to colorimetry matched to available display
primaries, for example, may require large matrix coefficients and, therefore, a signal-to-noise penalty. The choice of color gamut is a compromise
between color rendition and signal-to-noise.
Color Key – See Chroma Key.
Color Keying – To superimpose one image over another for special effects.
Color Killer – Circuitry which disables the receiver’s color decoder if the
video does not contain color information.
Color Lookup Table (CLUT) – The CLUT is a compression scheme where
pixel values in the bitmap represent an index into a color table where the
table colors have more bits-per-pixel than the pixel values. In a system
where each pixel value is eight bits, there are 256 possible values in the
lookup table. This may seem a constraint but, since multiple lookup tables
can be referenced, there can be many tables with varying 256 color
schemes. CLUTs work best for graphics where colors do not have to be
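A minimal sketch of the CLUT indexing described above; the table contents are invented for illustration:

```python
# 8-bit pixel values index a lookup table whose entries carry full
# 24-bit RGB color; the bitmap stores only the small indices.
clut = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0)}  # index -> (R, G, B)
bitmap = [0, 1, 1, 2]                                  # 8-bit pixel values

# Expanding the bitmap through the table recovers the RGB triples.
decoded = [clut[p] for p in bitmap]
print(decoded)
```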
Color Map – A color map is just a numbered list of colors. Each color is
specified in terms of its red, green, and blue components.
Color Map Animation – In normal animation, the images representing
separate frames are written on separate pieces of artwork. In computer
color map animation, many images can be written into a frame buffer,
each with a different color number. By ‘cycling’ white, for example, through
the color map, so that only one image at a time is visible, the illusion of
animation can be achieved very quickly. PictureMaker’s wireframe test
mode works this way.
Color Mapping – Color mapping is distinguished by the following: a) Each
pixel contains a color number (or address) referring to a position in a color
map. Each pixel has ‘n’ bits, so there are ‘2 to the n’ color map addresses.
b) A hardware device called the color map defines the actual RGB values
for each color.
Color Masking – A method of correcting color errors which are fundamental in any three primary color additive reproducing system, by electrically changing the R, G and B signals with a matrix or masking amplifier
which mixes (usually subtracts) the signals in a very precise predetermined
amount. The form is generally as follows. Note that a, b, c, d, e and f are
referred to as the masking or correction coefficients.
R out = R in + a (G-R) + b (R-B)
G out = G in + c (G-R) + d (B-G)
B out = B in + e (R-B) + f (B-G)
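The three masking equations above transcribe directly into code; the coefficient values below are placeholders for illustration, not from any real masking amplifier:

```python
# Color masking per the equations above; a..f are illustrative coefficients.
def mask(r, g, b, a=0.05, b_=0.02, c=0.03, d=0.01, e=0.04, f=0.02):
    r_out = r + a * (g - r) + b_ * (r - b)
    g_out = g + c * (g - r) + d * (b - g)
    b_out = b + e * (r - b) + f * (b - g)
    return r_out, g_out, b_out

# A neutral color (R = G = B) is left unchanged, since every
# difference term is zero.
print(mask(0.5, 0.5, 0.5))
```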
Color Match, Corresponding – A corresponding color is defined as the
stimulus that, under some different condition of adaptation, evokes the
same color appearance as another stimulus when it was seen under the
original state of adaptation. Color match, corresponding is a subjective
Color Match, Metameric – a) Color images are metameric matches
when their spectrally different color stimuli have identical tristimulus
values. The requirements for such a metameric match can be calculated
for a specified viewing condition (and for viewing conditions other than
those specified, the chromaticity will not be judged to correspond).
b) The corresponding color chosen for the metameric match will not
provide a spectrophotometric match. In practical applications, spectrophotometric matches are of only academic interest, and metameric matches
are sought. c) Color match, metameric, resulting from calculations based
upon colorimetry, produces a visual match as evaluated by the CIE description of human observers.
Color Model – Any of several means of specifying colors according to
their individual components. See RGB, YUV.
Color Modulator – See Color Encoder.
Color Palette – A component of a digital video system that provides a
means of establishing colors (foreground and background) using a color
lookup table to translate a limited set of pixel values into a range of
displayable colors by converting the colors to RGB format.
Color Phase – a) The phase of the chroma signal as compared to the
color burst, is one of the factors that determines a video signal’s color
balance. b) The timing relationship in a video signal that is measured in
degrees and keeps the hue of a color signal correct.
Color Picker – A tool used to plot colors in an image.
Color Plane – In planar modes, the display memory is separated into four
independent planes of memory, with each plane dedicated to controlling
one color component (red, green, blue and intensity). Each pixel of the
display occupies one bit position in each plane. In character modes and
packed-pixel modes, the data is organized differently.
Color Primaries – Red, green and blue light.
Color Processing – A way to alter a video signal to affect the colors. The
Video Equalizer is suited to this task. See Chroma Corrector.
Color Purity – Describes how close a color is to the mathematical representation of the color. For example, in the Y’UV color space, color purity is
specified as a percentage of saturation and ±θ, where θ is an angle in
degrees, and both quantities are referenced to the color of interest. The
smaller the numbers, the closer the actual color is to the color that it is
really supposed to be. For a studio-grade device, the saturation is +/-2%
and the hue is +/-2 degrees.
Color Reference Burst – The color synchronizing signal included as part
of the overall composite video signal. When compared with the color subcarrier signal, the color reference burst determines the hue of the video image.
Color Reversal Intermediate (CRI) – A duplicate color negative prepared
by reversal processing.
Color Saturation – This is the attribute of color perception determining
the degree of its difference from the achromatic color perception most
resembling it. An achromatic color perception is defined as one not
possessing a hue/color. In other words, how much “color” is in an object.
Color Space – The mathematical representation of a color. a) Regardless
of the color space used, RGB, YIQ, YUV, a color will appear the same on the
screen. What is different is how the color is represented in the color space.
In the HLS color space, colors are represented in a three-dimensional polar
coordinate system, whereas in the RGB color space, colors are represented
by a Cartesian coordinate system. b) Many ways have been devised to
organize all of a system’s possible colors. Many of these methods have two
things in common: a color is specified in terms of three numbers, and by
using the numbers as axes in a 3D space of some sort, a color solid can
be defined to represent the system. Two spaces are popular for computer
graphics: RGB and HSV.
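As a small illustration of the same color expressed in two of the spaces mentioned above, Python's standard colorsys module converts between RGB (Cartesian) and HSV (hue/saturation/value):

```python
import colorsys

# Pure red expressed in RGB, then re-expressed in HSV: same color,
# different coordinates.
r, g, b = 1.0, 0.0, 0.0
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)   # hue 0.0, full saturation, full value
```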
Color Space, Reference – Geometric representation of colors in space,
usually of three dimensions. There are three reference spaces recognized
by ISO 8613: CMYK color space; CIELuv color space; and R, G, B color space.
Color Standard – The parameters associated with transmission of color
information. For example, RGB, YCbCr or MAC component color standards
or NTSC, PAL or SECAM composite color standards.
Color Subcarrier – The signal used to modulate the color information in
the color encoder and demodulate the color information in the color
decoder. For (M) NTSC the frequency of the color subcarrier is about
3.579545 MHz and for (B, D, G, H, I) PAL it’s about 4.43 MHz.
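The (M) NTSC figure quoted above can be derived from the line rate, using the standard relationships fH = 4.5 MHz / 286 and fSC = (455/2) · fH:

```python
# Derivation of the (M) NTSC color subcarrier frequency.
line_rate = 4.5e6 / 286            # horizontal rate, ~15734.27 Hz
f_sc = (455 / 2) * line_rate       # subcarrier, ~3.579545 MHz
print(round(f_sc))                 # 3579545
```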
Color Temperature – The amount and color of light given off by an
object, based on the concept of a “black body”. A black body absorbs all
incident light rays and reflects none. If the black body is heated, it begins
to emit visible light rays; first dull red, then red, then through orange to
“white heat”. It can be likened to the heating of metal. If a metal object is
heated enough, the metal body will emit the array of colors mentioned
above until the object achieves a bluish white light. The amount of light
being emitted by the body can then be correlated to the amount of “heat”
it would take to get the body that hot and that heat can be expressed in
terms of degrees Kelvin. Objects that give off light equivalent to daylight
have a temperature of about 6,500 degrees Kelvin. Colors with a bluish
tint, have a color temperature of about 9,000 degrees Kelvin.
Color Timing – The process wherein colors are referenced and alternate
odd and even color fields are matched to ensure colors match from shot to
shot. Most commonly found in high-end equipment, such as Betacam SP.
Color Under – A degenerate form of composite color in which the subcarrier is crystal stable but not coherent with line rate. The term derives from
the recording technique used in U-Matic, Betamax, VHS and 8 mm videotape recorders, where chroma is heterodyned onto a subcarrier whose
frequency is a small fraction of that of NTSC or PAL. The heterodyning
process loses the phase relationship of color subcarrier to sync.
Color Wheel – A circular graph that maps hue values around the circumference and saturation values along the radius. Used in the color correction
tool as a control for making hue offset and secondary color correction adjustments.
Color, Additive – Over a wide range of conditions of observation, many
colors can be matched completely by additive mixtures in suitable amounts
of three fixed primary colors. The choice of three primary colors, though
very wide, is not entirely arbitrary. Any set that is such that none of the primaries can be matched by a mixture of the other two can be used. It follows that the primary color vectors so defined are linearly independent.
Therefore, transformations of a metameric match from one color space to
another can be predicted via a matrix calculation. The limitations of color
gamut apply to each space. The additive color generalization forms the
basis of most image capture, and of most self-luminous displays (i.e.,
CRTs, etc.).
Color, Primary – a) The colors of three reference lights by whose additive
mixture nearly all other colors may be produced. b) The primaries are
chosen to be narrow-band areas or monochromatic points directed toward
green, red, and blue within the Cartesian coordinates of three-dimensional
color space, such as the CIE x, y, z color space. These primary color points
together with the white point define the colorimetry of the standardized
system. c) Suitable matrix transformations provide metameric conversions,
constrained by the practical filters, sensors, phosphors, etc. employed in
order to achieve conformance to the defined primary colors of the specified
system. Similar matrix transformations compensate for the viewing conditions such as a white point of the display different from the white point
of the original scene. d) Choosing and defining primary colors requires a
balance between a wide color gamut reproducing the largest number of
observable surface colors and the signal-to-noise penalties of colorimetric
transformations requiring larger matrix coefficients as the color gamut is
extended. e) There is no technical requirement that primary colors should
be chosen identical with filter or phosphor dominant wavelengths. The
matrix coefficients, however, increase in magnitude as the available display
primaries occupy a smaller and smaller portion of the color gamut. (Thus,
spectral color primaries, desirable for improved colorimetry, become
impractical for CRT displays.) f) Although a number of primary color sets
are theoretically interesting, CCIR, with international consensus, has established the current technology and practice internationally that is based
(within measurement tolerances) upon the following: Red – x = 0.640,
y = 0.330; Green – x = 0.300, y = 0.600; Blue – x = 0.150, y = 0.060.
g) SMPTE offers guidance for further studies in improving color rendition
by extending the color gamut. With regard to color gamut, it is felt that the
system should embrace a gamut at least as large as that represented by
the following primaries: Red – x = 0.670, y = 0.330; Green – x = 0.210,
y = 0.710; Blue – x = 0.150, y = 0.060.
Color, Subjective – Subtractive colorimetry achieves metameric matching
by removing portions of the spectrum from white light. The subtractive
counterparts to the additive color primaries are those which, when removed
from white, leave red, green, and blue: accordingly cyan, magenta, and
yellow. Combinations of these subtractive colors in various admixtures
provide metameric matches to many colors. Subtractive color principles are
employed in all hard-copy color images and in light-valve systems such as
color transparencies, LCD panel display, motion-picture films, etc.
Colorimetry – a) Characteristics of color reproduction including the range
of colors that a television system can reproduce. Some ATV schemes call
for substantially different colorimetry (with a greater range) than NTSC’s.
b) The techniques for the measurement of color and for the interpretation
of the results of such computations. Note: The measurement of color is
made possible by the properties of the eye, and is based upon a set of
Colorist – The title used for someone who operates a telecine machine to
transfer film to video. Part of the process involves correcting the video
color to match the film.
Comb – Used on encoded video to select the chrominance signal and
reject the luminance signal, thereby reducing cross-chrominance artifacts
or conversely, to select the luminance signal and reject the chrominance
signal, thereby reducing cross-luminance artifacts.
Colorization – Special effect (also called paint) which colors a monochrome or color image with artificial colors. This feature is found on both
the Digital Video Mixer and Video Equalizer.
Combination Tone – A tone perceived by the ear which is equal in
frequency to the sum or difference of the frequencies of two loud tones
that differ by more than 50 Hz.
Color-Matching Functions – a) The tristimulus values of monochromatic
stimuli of equal radiant power. The three values of a set of color-matching
functions at a given wavelength are called color-coefficients. The color-matching functions may be used to calculate the tristimulus values of a
color stimulus from the color stimulus function. b) The tristimulus value per
unit wavelength interval and unit spectral radiant flux. c) A set of three
simultaneous equations used to transform a color specification from one
set of matching stimuli to another. Note: Color-matching functions adopted
by the CIE are tabulated as functions of wavelength throughout the spectrum and are given in Section 13.5 of ANSI/IES RP16-1986.
Combinational Logic – Circuit arrangement in which the output state is
determined only by the present states of two or more inputs. Also called
Combinatorial Logic.
ColorStream, ColorStream Pro, ColorStream HD – The name Toshiba
uses for the analog YPbPr video interface on their consumer equipment.
If the interface supports progressive SDTV resolutions, it is called
ColorStream Pro. If the interface supports HDTV resolutions, it is called
ColorStream HD.
Comb Filter – This is a filter that can be used to separate luminance from
chrominance in the NTSC or PAL composite video systems. The figure
below shows a signal amplitude over frequency representation of the luminance and chrominance information that makes up the composite video
signal. The peaks in gray are the chroma information at the color carrier
frequency. Note how the chroma information falls between the luminance
information that is in white. The comb filter is able to pass just energy
found in the chroma frequency areas and not the luminance energy. This
selective bandpass profile looks like the teeth of a comb and thus the
name comb filter. The comb filter has superior filtering capability when
compared to the chroma trap because the chroma trap acts more like a
notch filter.
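A minimal sketch of the idea behind a one-line-delay (1H) comb filter: because NTSC chroma inverts phase from line to line, averaging adjacent lines cancels chroma (leaving luma) and differencing cancels luma (leaving chroma). The sample data here are synthetic:

```python
# 1H comb filter sketch: separate luma and chroma from two adjacent lines.
def comb_separate(prev_line, curr_line):
    luma = [(a + b) / 2 for a, b in zip(prev_line, curr_line)]
    chroma = [(b - a) / 2 for a, b in zip(prev_line, curr_line)]
    return luma, chroma

# Constant luma of 0.5 plus a chroma component that flips sign
# between consecutive lines:
line1 = [0.5 + 0.2, 0.5 - 0.2]
line2 = [0.5 - 0.2, 0.5 + 0.2]
luma, chroma = comb_separate(line1, line2)
print(luma, chroma)   # luma recovered as 0.5; chroma as the ±0.2 component
```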
Combiner – In digital picture manipulators, a device that controls the way
in which two or more channels work together. Under software control, it
determines the priority of channels (which picture appears in front and
which in back) and the types of transitions that can take place between channels.
Combo Box – In Microsoft™ Windows, a combination of a text and a list
box. You can either type the desired value or select it from the list.
Combo Drive – A DVD-ROM drive capable of reading and writing CD-R
and CD-RW media. May also refer to a DVD-R or DVD-RW or DVD+RW
drive with the same capability.
Command Buttons – In Microsoft™ Windows, “button-shaped” symbols
that are “pressed” (“clicked on”/chosen) to perform the indicated action.
Comment Field – Field within an instruction that is reserved for comments. Ignored by the compiler or the assembler when the program is converted to machine code.
Common Carrier – Telecommunication company that provides communications transmission services to the public.
Common Data Rate (CDR) – In the search for a single worldwide standard for HDTV, one proposal is to establish a common data rate, to be
independent of line structure, frame rate, and sync/blanking.
Common Image Format (CIF) – The standardization of the structure of
the samples that represent the picture information of a single frame in
digital HDTV, independent of frame rate and sync/blank structure.
Common Interchange Format (CIF) – A 352 x 240 pixel format for
30 fps video conferencing.
Common Interface Format (CIF) – This video format was developed to
easily allow video phone calls between countries. The CIF format has a
resolution of 352 x 288 active pixels and a refresh rate of 29.97 frames
per second.
Common Intermediate Format (CIF) – Picture format. For this ITU
defined CIF frame, Y is 352 pixels x 288 lines, and Cb and Cr are 176
pixels x 144 lines each. This frame structure is independent of frame rate
and sync structure for all digital TV formats. Uncompressed bit rate is
36.45 Mbps at 29.97 frames/sec.
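The quoted bit rate follows from the frame structure above, assuming 8 bits per sample:

```python
# Uncompressed CIF bit rate from the frame structure given in the entry.
y_samples = 352 * 288              # luma samples per frame
chroma_samples = 2 * 176 * 144     # Cb + Cr samples per frame
bits_per_frame = (y_samples + chroma_samples) * 8
mbps = bits_per_frame * 29.97 / 1e6
print(round(mbps, 2))   # ~36.46, close to the 36.45 Mbps quoted
```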
Communication Protocol – A specific software based protocol or language for linking several devices together. Communication protocols are
used between computers and VCRs or edit controllers to allow bidirectional
“conversation” between the units. See RS-232/RS-422.
Compact Cassette – A small (4 x 2-1/2 x 1/2”) tape cartridge developed
by Philips, containing tape about 1/7” wide, running at 1-7/8 ips.
Recordings are bidirectional, with both stereo tracks adjacent for compatibility with monophonic cassette recorders, whose heads scan both stereo
tracks at once.
Compact Disc (CD) – A compact disc is a 12cm optical disc that stores
encoded digital information (typically audio) in the constant linear velocity
(CLV) format. For high-fidelity audio/music, it provides 74 minutes of digital
sound, 90 dB signal-to-noise ratio and no degradation from playback.
Compact Disc Interactive (CD-I) – It is meant to provide a standard
platform for mass consumer interactive multimedia applications. So it is
more akin to CD-DA, in that it is a full specification for both the data/code
and standalone playback hardware: a CD-I player has a CPU, RAM, ROM,
OS, and audio/video (MPEG) decoders built into it. Portable players add an
LCD screen and speakers/phone jacks. It has limited motion video and still
image compression capabilities. It was announced in 1986, and was in
beta test by spring 1989. This is a consumer electronics format that uses
the optical disc in combination with a computer to provide a home entertainment system that delivers music, graphics, text, animation, and video
in the living room. Unlike a CD-ROM drive, a CD-I player is a standalone
system that requires no external computer. It plugs directly into a TV and
stereo system and comes with a remote control to allow the user to interact with software programs sold on discs. It looks and feels much like a CD
player except that you get images as well as music out of it and you can
actively control what happens. In fact, it is a CD-DA player and all of your
standard music CDs will play on a CD-I player; there is just no video in that
case. For a CD-I disk, there may be as few as 1 or as many as 99 data
tracks. The sector size in the data tracks of a CD-I disk is approximately 2
kbytes. Sectors are randomly accessible, and, in the case of CD-I, sectors
can be multiplexed in up to 16 channels for audio and 32 channels for all
other data types. For audio these channels are equivalent to having 16 parallel audio data channels instantly accessible during the playing of a disk.
Compact Disc Read Only Memory – a) CD-ROM means “Compact Disc
Read Only Memory”. A CD-ROM is physically identical to a Digital Audio
Compact Disc used in a CD player, but the bits recorded on it are interpreted as computer data instead of music. You need to buy a CD-ROM Drive
and attach it to your computer in order to use CD-ROMs. A CD-ROM has
several advantages over other forms of data storage, and a few disadvantages. A CD-ROM can hold about 650 megabytes of data, the equivalent of
thousands of floppy disks. CD-ROMs are not damaged by magnetic fields
or the x-rays in airport scanners. The data on a CD-ROM can be accessed
much faster than a tape, but CD-ROMs are 10 to 20 times slower than
hard disks. b) A flat metallic disk that contains information that you can
view and copy onto your own hard disk; you cannot change or add to its contents.
Companding – See Compressing-Expanding.
Comparator – A circuit that responds to the relative amplitudes of two
inputs, A and B, by providing a binary output, Z, that indicates A>B or
A<B. The comparator has two inputs, A and B, and one output, Z. A comparator “compares” A to B. If A is larger than B, the output of the comparator
is a “1”. If A is smaller than B, then the output is a “0”. If A = B, the output
Z may be undefined: it may oscillate between “1” and “0” until that
condition is removed, it may be a “1”, or it may be a “0”, depending on
how the comparator was designed. The comparator implements the
following mathematical function.
If A – B > 0, then Z = 1
If A – B < 0, then Z = 0
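The mathematical function above written out directly; the A = B case is arbitrarily resolved to 0 here, since the entry notes it is design-dependent:

```python
# Comparator: Z = 1 if A - B > 0, else 0 (A = B resolved to 0 by choice).
def comparator(a: float, b: float) -> int:
    return 1 if a - b > 0 else 0

print(comparator(5, 3), comparator(3, 5))   # 1 0
```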
Compatibility – A complex concept regarding how well ATV schemes
work with existing television receivers, transmission channels, home video
equipment, and professional production equipment. See also Channel-Compatible, Receiver-Compatible.
A. ATV Receiver Compatibility Levels
Level 5 – ATV signal is displayed as ATV on an NTSC TV set
Level 4 – ATV signal appears as highest quality NTSC on an NTSC
TV set
Level 3 – ATV signal appears as reduced quality NTSC on an NTSC
TV set
Level 2 – ATV signal requires inexpensive adapter for an NTSC TV set
Level 1 – ATV signal requires expensive adapter for an NTSC TV set
Level 0 – ATV signal cannot be displayed on an NTSC TV set
B. Compatible ATV Transmission Schemes
• Receiver-compatible and channel-compatible single 6 MHz
• Receiver-compatible channel plus augmentation channel
• Necessarily adjacent augmentation channel
• Not necessarily adjacent augmentation channel
• Non-receiver-compatible channel plus simulcast channel
Compatible Video Consortium (CVC) – An organization established
by Cox Enterprises and Tribune Broadcasting, which together own 14
television stations, 24 CATV systems, and two production companies.
The CVC, which is open to other organizations, was created to support
ATV research and is currently supporting Del Ray’s HD-NTSC system.
Compile – To compute an image or effect using a nonlinear editing,
compositing or animation program. The result is generally saved in a file
on the computer. Also called Render.
Compiler – Translation program that converts high-level program
instructions into a set of binary instructions (machine code) for execution.
Each high-level language requires a compiler or an interpreter. A compiler
translates the complete program, which is then executed.
Complement – Process of changing each 1 to a 0 and each 0 to a 1.
Complex Surface – Consists of two or more simple surfaces attached or
connected together using specific operations.
Component – a) A matrix, block or single pel from one of the three
matrices (luminance and two chrominance) that make up a picture.
b) A television system in which chrominance and luminance are distributed
separately; one of the signals of such a television system; or one of
the signals that comprise an ATV system (e.g., the widescreen panels
Component (Elementary Stream) – One or more entities which together
make up an event, e.g., video, audio, teletext.
Component Analog – The unencoded output of a camera, videotape
recorder, etc., consisting of three primary color signals: red, green, and
blue (RGB) that together convey all necessary picture information. In some
component video formats, these three components have been translated
into a luminance signal and two color difference signals, for example, Y,
B-Y, R-Y.
Component Color – Structure of a video signal wherein the R’, G’, and B’
signals are kept separate from each other or wherein luminance and two
band-limited color-difference signals are kept separate from one another.
The separation may be achieved by separate channels, or by time-division
multiplexing, or by a combination of both.
Component Digital – A digital representation of a component analog
signal set, most often Y, B-Y, R-Y. The encoding parameters are specified
by CCIR 601 (ITU-R BT.601). The parallel interface is specified by ITU-R BT.656 and
SMPTE 125M (1991).
Component Digital Post Production – A method of post production that
records and processes video completely in the component digital domain.
Analog sources are converted only once to the component digital format
and then remain in that format throughout the post production process.
Component Gain Balance – This refers to ensuring that each of the three
signals that make up the CAV information are amplified equally. Unequal
amplification will cause picture lightness or color distortions.
Component Video – Video which exists in the form of three separate
signals, all of which are required in order to completely specify the color
picture. Most home video signals are combined (composite) signals,
composed of luminance (brightness) information, chrominance (color)
information and sync information. To get maximum video quality,
professional equipment (Betacam and MII) and some consumer
equipment (S-VHS and Hi-8) keep the video components separate.
Component video comes in several varieties: RGB (red, green, blue), YUV
(luminance and two color-difference signals) and Y/C (luminance and
chrominance), used by S-Video (S-VHS and Hi-8) systems. All Videonics
video products support the S-Video (Y/C) component format in addition to
standard composite video.
Composite – A television system in which chrominance and luminance
are combined into a single signal, as they are in NTSC; any single signal
comprised of several components.
Composite Analog – An encoded video signal, such as NTSC or PAL
video, that includes horizontal and vertical synchronizing information.
Composite Blanking – The complete television blanking signal composed
of both line rate and field rate blanking signals. See Line Blanking and
Field Blanking.
Composite Chroma Key – a) Also known as encoded chroma key. A
chroma key which is developed from a composite video source, i.e., off
of tape, rather than the components, i.e., RGB, R-Y, B-Y. b) A chroma key
wherein the keying signal is derived from a composite video signal, as
opposed to an RGB chroma key. See Chroma Key.
Composite Color – Structure of a video signal wherein the luminance and
two band-limited color-difference signals are simultaneously present in the
channel. The format may be achieved by frequency-division multiplexing,
quadrature modulation, etc. It is common to strive for integrity by suitable
separation of the frequencies, or since scanned video signals are highly
periodic, by choosing frequencies such that the chrominance information is
interleaved within spectral regions of the luminance signal wherein a minimum of luminance information resides.
Composite Color Signal – A signal consisting of combined luminance
and chrominance information using frequency domain multiplexing. For
example, NTSC and PAL video signals.
Composite Digital – A digitally encoded video signal, such as NTSC or
PAL video, that includes horizontal and vertical synchronizing information.
Composite Image – An image that contains elements selected from two
or more separately originated images.
Composite Print – A motion picture print with both picture and sound on
the same strip of film.
Composite Sync – a) Horizontal and vertical sync pulses combined.
Often referred to simply as “sync”. Sync is used by source and monitoring
equipment. b) A signal consisting of horizontal sync pulses, vertical sync
pulses and equalizing pulses only, with a no-signal reference level.
Composite Video – a) A single video signal containing all of the necessary
information to reproduce a color picture, created by adding quadrature
amplitude modulated R-Y and B-Y to the luminance signal; a video
signal that contains horizontal, vertical and color synchronizing information.
b) A complete video signal including all synchronizing pulses; it may have
all values of chroma, hue and luminance, and may also be layered from
many sources.
Composite Video Signal – A signal in which the luminance and chrominance information has been combined using one of the coding standards (NTSC, PAL, SECAM, etc.).
Composited Audiovisual Object (Composited AV Object) – The
representation of an AV object as it is optimized to undergo rendering.
Compositing – Layering multiple pictures on top of each other. A cutout
or matte holds back the background and allows the foreground picture to
appear to be in the original picture. Used primarily for special effects.
Composition – a) Framing or makeup of a video shot. b) The process
of applying scene description information in order to identify the spatiotemporal attributes of media objects.
Composition Information – See Scene Description.
Composition Layer – The MPEG-4 Systems Layer that embeds the component sub-objects of a compound AV object in a common representation
space by taking into account the spatio-temporal relationships between
them (Scene Description), before rendering the scene.
Composition Memory (CM) – A random access memory that contains
composition units.
Composition Parameters – Parameters necessary to compose a scene
(place an object in a scene). These include displacement from the upper
left corner of the presentation frame, rotation angles, zooming factors.
Composition Time Stamp (CTS) – An indication of the nominal composition time of a composition unit.
Composition Unit (CU) – An individually accessible portion of the output
that a media object decoder produces from access units.
Compress – a) The process of converting video and audio data into a
more compact form for storage or transmission. b) A digital picture manipulator effect where the picture is squeezed (made proportionally smaller).
Compressed Serial Digital Interface (CSDI) – A way of compressing
digital video for use on SDI-based equipment proposed by Panasonic.
Now incorporated into Serial Digital Transport Interface.
Compressing-Expanding – Analog compression is used at one point in
the communications path to reduce the amplitude range of the signals,
followed by an expander to produce a complementary increase in the
amplitude range.
Compression – a) The process of electronically processing a digital video
picture to make it use less storage or to allow more video to be sent down
a transmission channel. b) The process of removing picture data to
decrease the size of a video image. c) The reduction in the volume of
data from any given process so that more data can be stored in a smaller
space. There are a variety of compression schemes that can be applied
to data of which MPEG-1 and MPEG-2 are called lossy since the data
produced by compression is not totally recoverable. There are other compression schemes that are totally recoverable, but the degree of compression is much more limited.
Compression (Amplitude) – a) Data Transmission – A process in
which the effective gain applied to a signal is varied as a function of the
signal magnitude, the effective gain being greater for small rather than for
large signals. b) Video – The reduction in amplitude gain at one level of a
picture signal with respect to the gain at another level of the same signal.
Note: The gain referred to in the definition is for a signal amplitude small in
comparison with the total peak-to-peak picture signal involved. A quantitative evaluation of this effect can be obtained by a measurement of differential gain. c) Production – A transfer function (as in gamma correction) or
other nonlinear adjustment imposed upon signal amplitude values.
Compression (Bit Rate) – Used in the digital environment to describe
initial digital quantization employing transforms and algorithms encoding
data into a representation that requires fewer bits or lower data rates or
processing of an existing digital bit stream to convey the intended information in fewer bits or a lower data rate. Compression (bit rate) may be reversible (lossless) or irreversible (lossy).
Compression Artifacts – Small errors that result in the decompressed
signal when a digital signal is compressed with a high compression ratio.
These errors are known as “artifacts”, or unwanted defects. The artifacts
may resemble noise (or edge “busyness”) or may cause parts of the picture, particularly fast moving portions, to be displayed with the movement
distorted or missing.
Compression Factor – Ratio of input bit rate to output (compressed) bit
rate. Like Compression Ratio.
Compression Layer – The layer of an ISO/IEC FCD 14496 system that
translates between the coded representation of an elementary stream and
its decoded representation. It incorporates the media object decoders.
Compression Ratio – A value that indicates by what factor an image file
has been reduced after compression. If a 1 MB image file is compressed to
500 KB, the compression ratio would be a factor of 2. The higher the ratio
the greater the compression.
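The arithmetic in the Compression Ratio entry above can be sketched in a few lines of Python (the function name is illustrative, not from any standard library):

```python
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """Ratio of the input size to the output (compressed) size.

    The higher the ratio, the greater the compression."""
    return original_bytes / compressed_bytes

# The example from the definition: a 1 MB file compressed to 500 KB
# gives a ratio of 2, often written 2:1.
ratio = compression_ratio(1_000_000, 500_000)  # 2.0
```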
Compression, Lossless – Lossless compression requires that the reconstructed bit stream be an exact replica of the original bit
stream. The useful algorithms recognize redundancy and inefficiencies
in the encoding and are most effective when designed for the statistical
properties of the bit stream. Lossless compression of image signal requires
that the decoded images match the source images exactly. Because of
differences in the statistical distributions in the bit streams, different
techniques have thus been found effective for lossless compression of
either arbitrary computer data, pictures, or sound.
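The exact-replica requirement can be demonstrated with a round trip through a general-purpose lossless codec; Python's standard zlib (DEFLATE) module serves here as a stand-in for the image- and sound-specific algorithms the entry mentions:

```python
import zlib

# Stand-in for arbitrary source data (a highly redundant byte pattern).
source = bytes(range(256)) * 100

packed = zlib.compress(source, level=9)   # lossless DEFLATE encoding
restored = zlib.decompress(packed)

# Lossless: the reconstructed stream is an exact replica of the original.
assert restored == source
# Redundant input compresses well; less redundant input compresses less.
assert len(packed) < len(source)
```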
Compression, Lossy – Bit-rate reduction of an image signal by powerful
algorithms that compress beyond what is achievable in lossless compression, or quasi-lossless compression. It accepts loss of information and
introduction of artifacts which can be ignored as unimportant when viewed
in direct comparison with the original. Advantage is taken of the subtended
viewing angle for the intended display, the perceptual characteristics of
human vision, the statistics of image populations, and the objectives of the
display. The lost information cannot be regenerated from the compressed
bit stream.
Compression, Quasi-Lossless – Bit-rate reduction of an image signal,
by an algorithm recognizing the high degree of correlation ascertainable
in specific images. The reproduced image does not replicate the original
when viewed in direct comparison, but the losses are not obvious or
recognizable under the intended display conditions. The algorithm may
apply transform coding, predictive techniques, and other modeling of the
image signal, plus some form of entropy encoding. While the image
appears unaltered to normal human vision, it may show losses and artifacts
when analyzed in other systems (i.e., chroma key, computerized image
analysis, etc.). The lost information cannot be regenerated from the
compressed bit stream.
Compressionist – One who controls the compression process to produce
results better than would be normally expected from an automated system.
Compressor – An analog device that reduces the dynamic range of a
signal by either reducing the level of loud signals or increasing the level
of soft signals when the combined level of all the frequencies contained in
the input is above or below a certain threshold level.
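A digital sketch of the analog behavior described above (the threshold and ratio values below are arbitrary illustrations, and samples are assumed normalized to -1.0..+1.0):

```python
def compress_dynamics(samples, threshold=0.5, ratio=4.0):
    """Reduce dynamic range: any excursion above `threshold` is scaled
    down by `ratio`, while quieter material passes through unchanged.
    The sign of each sample is preserved."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # Only the portion above the threshold is attenuated.
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# A loud peak of 0.9 is pulled down to about 0.6; the quiet 0.3 sample
# is untouched.
print(compress_dynamics([0.9, 0.3, -0.9]))
```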
Computer – General purpose computing system incorporating a CPU,
memory, I/O facilities, and power supply.
Computer Input – Some HDTV sets have an input (typically SVGA or VGA)
that allows the TV set to be connected to a computer.
Computer Television – Name of a Time Inc. pay-TV company that
pre-dated HBO; also an unrealized concept created by Paul Klein, the
company’s founder, that would allow viewers access to a vast selection
of television programming with no temporal restrictions, in the same
way that telephone subscribers can call any number at any time. B-ISDN
might offer the key to the transmission problem of computer television;
the random-access library-storage problems remain.
Concatenation – Linking together (of systems). Although the effect on
quality resulting from a signal passing through many systems has always
been a concern, the use of a series of compressed digital video systems
is, as yet, not well known. The matter is complicated by virtually all digital
compression systems differing in some way from each other, hence the
need to be aware of concatenation. For broadcast, the current NTSC and
PAL analog compression systems will, more and more, operate alongside
digital MPEG compression systems used for transmission and, possibly, in
the studio. Even the same brand and model of encoder may encode the
same signal in a different manner. See also Mole Technology.
Concave Lens – A lens that has a negative focal length, i.e., the focus is
virtual and it makes objects appear smaller.
Condenser Mike – A microphone which converts sound pressure level
variations into variations in capacitance and then into electrical voltage.
Condition Code – Refers to a limited group of program conditions, such
as carry, borrow, overflow, etc., that are pertinent to the execution of
instructions. The codes are contained in a condition code register. Same
as Flag Register.
Conditional Access (CA) – This is a technology by which service
providers enable subscribers to decode and view content. It consists of
key decryption (using a key obtained from changing coded keys periodically sent with the content) and descrambling. The decryption may be
proprietary (such as Canal+, DigiCipher, Irdeto Access, Nagravision, NDS,
Viaccess, etc.) or standardized, such as the DVB common scrambling algorithm and OpenCable. Conditional access may be thought of as a simple
form of digital rights management. Two common DVB conditional access
(CA) techniques are SimulCrypt and MultiCrypt. With SimulCrypt, a single
transport stream can contain several CA systems. This enables receivers
with different CA systems to receive and correctly decode the same video
and audio streams. With MultiCrypt, a receiver permits the user to manually
switch between CA systems. Thus, when the viewer is presented with a CA
system which is not installed in the receiver, the viewer simply switches CA cards.
Conditional Access System – A system to control subscriber access to
services, programs and events, e.g., Videoguard, Eurocrypt.
Conditional Jump or Call – Instruction that when reached in a program
will cause the computer either to continue with the next instruction in the
original sequence or to transfer control to another instruction, depending
on a predetermined condition.
Conductive Coatings – Coatings that are specially treated to reduce the
coating resistance, and thus prevent the accumulation of static electrical
charge. Untreated, non-conductive coatings may become highly charged,
causing transport, noise and dust-attraction problems.
Conferencing – The ability to conduct real-time interactive video and/or
audio and/or data meetings via communication services over local or wide
area networks.
Confidence Test – A test to make sure a particular device (such as the
keyboard, mouse, or a drive) is set up and working properly.
Confidence Value – A measurement, expressed as a percentage, of the
probability that the pattern the system finds during a motion tracking operation is identical to the pattern for which the system is searching. During a
motion tracking operation, Avid Symphony calculates a confidence value for
each tracking data point it creates.
CONFIG.SYS – A file that provides the system with information regarding
application requirements. This information may include peripherals that are
connected and require special drivers (such as a mouse). Other information
that might be specified is the number of files that can be open simultaneously, or the number of disk drives that can be accessed.
Configuration File – A system file that you change to customize the way
your system behaves. Such files are sometimes referred to as customization files.
Conform – To prepare a complete version of your project for viewing. The
version produced might be an intermediate working version or the final cut.
Conforming – The process wherein an offline edited master is used as a
guide for performing final edits.
Conforming a Film Negative – The mathematical process that the editing system uses to ensure that the edits made on a videotape version of a
film project (30 fps) are frame accurate when they are made to the final
film version (24 fps).
Connection-Oriented Protocol – In a packet switching network, a virtual
circuit can be formed to emulate a fixed bandwidth switched circuit, for
example, ATM. This benefits transmission of media requiring constant
delays and bandwidth.
Connector – Hardware at the end of a cable that lets you fasten the
cable to an outlet, port, or another connector.
Console – A display that lists the current system information and
chronicles recently performed functions. It also contains information
about particular items being edited, such as the shots in the sequence
or clips selected from bins.
Console Window – The window that appears each time you log in. IRIX
reports all status and error messages to this window.
Consolidate – To make copies of media files or portions of media files,
and then save them on a drive. The consolidate feature operates differently
for master clips, subclips and sequences.
Constant – a) A fixed value. b) An option for the interpolation and/or
extrapolation of an animation curve that produces a square or stepped
curve.
Constant Alpha – A gray scale alpha plane that consists of a constant
non-zero value.
Constant Bit Rate (CBR) – a) An operation where the bit rate is constant
from start to finish of the compressed bit stream. b) A variety of MPEG
video compression where the amount of compression does not change.
c) Traffic that requires guaranteed levels of service and throughput in
delay-sensitive applications such as audio and video that are digitized and
represented by a continuous bit stream.
Constant Bit Rate Coded Media – A compressed media bitstream with
a constant average bit rate. For example, some MPEG video bitstreams.
Constant Bit Rate Coded Video – A compressed video bit stream with a
constant average bit rate.
Continuation Indicator (CI) – Indicates the end of an object in the
current packet (or continuation).
Continuous Monitoring – The monitoring method that provides continuous real-time monitoring of all transport streams in a network.
Constant Luminance Principle – A rule of composite color television
that any change in color not accompanied by a change in brightness
should not have any effect on the brightness of the image displayed on
a picture tube. The constant luminance principle is generally violated by
existing NTSC encoders and decoders. See also Gamma.
Constant Shading – The simplest shading type is constant. The color of
a constant shaded polygon’s interior pixels is always the same, regardless
of the polygon’s orientation with respect to the viewer and light sources.
Constant shading is useful for creating light sources, for example. With all
other shading types, a polygon changes its shade as it moves.
Constellation Diagram – A display used within digital modulation to
determine the health of the system. It consists of a plot of symbol values
onto an X-Y display, similar to a vectorscope display. The horizontal axis is
known as the In-Phase (I) and the vertical axis is known as the Quadrature
Phase (Q) axis. The position of the symbols within the constellation diagram
provides information about distortions in the QAM or QPSK modulator as
well as about distortions after the transmission of digitally coded signals.
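A sketch of how symbol values land on the I/Q plane for QPSK (the Gray-coded mapping below is one common convention, chosen for illustration; it is not mandated by the entry):

```python
# Gray-coded QPSK: each 2-bit symbol maps to one of four constellation
# points; adjacent points differ by a single bit.
QPSK_MAP = {
    (0, 0): (+1, +1),
    (0, 1): (-1, +1),
    (1, 1): (-1, -1),
    (1, 0): (+1, -1),
}

def map_symbols(bits):
    """Map an even-length bit sequence onto ideal (I, Q) coordinates.
    A receiver plots received (I, Q) samples around these ideal points;
    spreading or rotation of the clusters reveals modulator distortions."""
    return [QPSK_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
```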
Constrained Parameters – MPEG-1 video term that specifies the
values of the set of coding parameters in order to assure a baseline
level of decoder compatibility.
Constrained System Parameter Stream (CSPS) – An MPEG-1 multiplexed system stream to which the constrained parameters are applied.
Constructive Solid Geometry (CSG) – This way of modeling builds a
world by combining “primitive” solids such as cubes, spheres, and cones.
The operations that combine these primitives are typically union, intersection, and difference. These are called Boolean operations. A CSG database
is called a CSG tree. In the tree, branch points indicate the operations that
take place on the solids that flow into the branch point.
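One minimal way to realize the idea in code is point-membership classification, where each primitive solid is a predicate and the Boolean operations combine predicates into a CSG tree (a sketch, not any particular modeler's API):

```python
# Primitive solids as point-membership predicates: a solid is a function
# reporting whether a 3D point lies inside it.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

def cube(cx, cy, cz, half):
    return lambda x, y, z: (abs(x - cx) <= half and abs(y - cy) <= half
                            and abs(z - cz) <= half)

# Boolean operations: each branch point of the CSG tree combines the
# solids that flow into it.
def union(a, b):        return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersection(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b):   return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# Example tree: a cube with a spherical bite taken out of one corner.
solid = difference(cube(0, 0, 0, 1), sphere(1, 1, 1, 0.5))
```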
Content – The program content will consist of the sum total of the
essence (video, audio, data, graphics, etc.) and the metadata. Content
can include television programming, data and executable software.
Content Object – The object encapsulation of the MPEG-4 decoded
representation of audiovisual data.
Content-Based Image Coding – The analysis of an image to recognize
the objects of the scene (e.g., a house, a person, a car, a face,...). The
objects, once recognized, are coded as parameters to a general object
model (of the house, person, car, face,...) which is then synthesized (i.e.,
rendered) by the decoder using computer graphic techniques.
Continuous Tone – An image that has all the values (0 to 100%) of gray
(black and white) or color in it. A photograph is a continuous tone image.
Contour Enhancement – A general term usually intended to include both
aperture correction and edge enhancement.
Contouring – a) Video picture defect due to quantizing at too coarse a
level. The visual effect of this defect is that pictures take on a layered look
somewhat like a geographical contoured map. b) This is an image artifact
caused by not having enough bits to represent the image. The reason the
effect is called “contouring” is because the image develops vertical bands
of brightness.
Contrast – Contrast describes the difference between the white and black
levels in a video waveform. If there is a large difference between the white
and black picture levels, the image has high contrast. If there is a small
difference between the white and black portions of the picture, then the
picture has low contrast and takes on a gray appearance.
Contrast Ratio – a) Related to gamma law and is a measurement of
the maximum range of light to dark objects that a television system can
reproduce. b) The comparison of the brightest part of the screen to the
darkest part of the screen, expressed as a ratio. The maximum contrast
ratio for television production is 30:1.
Contribution – A form of signal transmission where the destination is
not the ultimate viewer and where processing (such as electronic matting)
is likely to be applied to the signal before it reaches the ultimate viewer.
Contribution demands higher signal quality than does distribution because
of the processing.
Contribution Quality – The level of quality of a television signal from the
network to its affiliates. For digital television this is approximately 45 Mbps.
Control Block – Circuits that perform the control functions of the CPU.
They are responsible for decoding instructions and then generating the
internal control signals that perform the operations requested.
Control Bus – Set of control lines in a computer system. Provides the
synchronization and control information necessary to run the system.
Control Channel – A logical channel which carries control messages.
Control Layer – The MPEG-4 Systems Layer that maintains and updates
the state of the MPEG-4 Systems Layers according to control messages or
user interaction.
Control Menu Box – Located on the upper left corner of all application
windows, document windows, and dialog boxes, it sizes (maximize, minimize, or restore) or exits the window.
Control Message – An information unit exchanged to configure or modify
the state of the MPEG-4 systems.
Control Point – A location on a Bézier curve that controls its direction.
Each control point has two direction handles that can extend from it.
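The influence of control points on a curve can be seen by evaluating the standard cubic Bézier polynomial directly (a sketch; here p1 and p2 play the role of the direction handles):

```python
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1].
    p0 and p3 are the endpoints; p1 and p2 act like direction handles
    that pull the curve toward them without (generally) lying on it."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3*u*u*t * p1[0] + 3*u*t*t * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3*u*u*t * p1[1] + 3*u*t*t * p2[1] + t**3 * p3[1]
    return (x, y)

# An arch: endpoints on the baseline, handles raised above it.
mid = bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5)
```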
Control Processor Unit/Central Processing Unit (CPU) – a) Circuits
used to generate or alter control signals. b) A card in the frame which
controls overall switcher operation.
Control Program – Sequence of instructions that guide the CPU through
the various operations it must perform. This program is stored permanently
in ROM where it can be accessed by the CPU during operation. Usually
this ROM is located within the microprocessor chip. Same as Microprogram
or Microcode.
Control Room – The enclosed room where the electronic control system
for radio and television is located and where the director and technical
director sit.
Control Signal – A signal used to cause an alteration or transition of
video signals.
Control Track – a) The magnetized portion along the length of a videotape on which sync control information is placed. The control track
contains a pulse for each video field and is used to synchronize the tape
and the video signal. b) A synchronizing signal on the edge of the tape
which provides a reference for tracking control and tape speed. Control
tracks that have heavy dropouts are improperly recorded and may cause
tracking defects or picture jumps. c) A signal recorded on videotape to
allow the tape to play back at a precise speed in any VTR. Analogous to
the sprocket holes on film. d) A linear track, consisting of 30- or 60-Hz
pulses, placed on the bottom of videotape that aids in the proper playback
of the video signal.
Control Track Editing – The linear editing of videotape with equipment
that reads the control track information to synchronize the editing between
two decks. Contrast with Timecode Editing.
Control Track Editor – Type of editing system that uses frame pulses on
the videotape control track for reference.
Control-L (LANC) – Sony’s wired edit control protocol, also called LANC
(Local Application Control), which allows two-way communication between
a camcorder or VCR and an edit controller such as the Thumbs Up.
Control-L allows the controller to control the deck (fast forward, play, etc.)
and also allows the controller to read the tape position (tape counter)
information from the deck.
Control-M – Panasonic’s wired edit control protocol. Similar to Control-L
in function but not compatible. Also called Panasonic 5-pin edit control.
See Control-L.
Control-S – Sony wired transport control protocol that duplicates a VCR’s
infra-red remote transport control (play, stop, pause, fast forward and
rewind). Unlike Control-L, Control-S does not allow the controller to read
tape counter information.
Control-T – Similar to Control-L but allows multiple units to be controlled.
Not used in current equipment.
Conventional Definition Television (CDTV) – This term is used to
signify the analog NTSC television system as defined in ITU-R
Recommendation 470. See also Standard Definition Television and ITU-R
Recommendation 1125.
Convergence – The act of adjusting, or the state of having adjusted, the
Red, Green and Blue color gun deflection such that the electron beams are
all hitting the same color triad at the same time.
Conversion Ratio – The size conversion ratio for the purpose of rate
control of shape.
Conversion, Frame-Rate – Standardized image systems now exist in the
following frame rates per second: 24, 25, 29.97, 30, and 60. In transcoding from one system to another, frame rate conversion algorithms perform
this conversion. The algorithm may be as simple as to drop or add frames
or fields, or it may process the information to generate predictive frames
employing information from the original sequence. In interlace systems, the
algorithm may be applied independently to each field.
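The simplest drop-or-add strategy from the frame-rate conversion entry above can be sketched as nearest-frame resampling (illustrative only; predictive conversion is far more involved):

```python
def convert_frame_rate(frames, src_fps, dst_fps):
    """Naive frame-rate conversion: each output instant takes the
    nearest earlier source frame, so frames are dropped when lowering
    the rate and repeated when raising it."""
    n_out = round(len(frames) * dst_fps / src_fps)
    return [frames[min(int(i * src_fps / dst_fps), len(frames) - 1)]
            for i in range(n_out)]

# 30 -> 15 fps drops every other frame; 15 -> 30 fps repeats each frame.
```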
Converter – Equipment for changing the frequency of a television signal
such as at a cable head-end or at the subscriber’s receiver.
Convex Lens – A convex lens has a positive focal length, i.e., the focus
is real. It is commonly called a magnifying glass, since it magnifies objects.
Convolutional Coding – A coding method used for satellite transmission
(DVB-S) in which the data stream is loaded bit by bit into shift registers.
The data, which is split and delayed as it is shifted through different
registers, is combined in several paths. This means that double the data
rate (two paths) is usually obtained. Puncturing follows to reduce the data
rate. The time sequence of the bits is predefined by this coding and is
represented by the trellis diagram.
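A sketch of the shift-register structure described above, using the common rate-1/2, constraint-length-3 code with generator polynomials 7 and 5 octal (this particular code is an illustrative choice; DVB-S actually uses a longer constraint length):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3, generator
    polynomials 111 and 101 (octal 7 and 5). Each input bit shifts
    through the register and yields two output bits, doubling the data
    rate as the entry describes (puncturing would then discard some)."""
    s1 = s2 = 0                      # two-stage shift-register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)      # path 1: 1 + D + D^2
        out.append(b ^ s2)           # path 2: 1 + D^2
        s1, s2 = b, s1               # shift the register
    return out
```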
Coordination System – See Reference.
CORBA (Common Object Request Broker Architecture) – A standard
defined by the Common Object Group. It is a framework that provides
interoperability between objects built in different programming languages,
running on different physical machines perhaps on different networks.
CORBA specifies an Interface Definition Language and an API (Application
Programming Interface) that allow client/server interaction with the
ORB (Object Request Broker).
Core – Small magnetic toruses of ferrite that are used to store a bit of
information. These can be strung on wires so that large memory arrays can
be formed. The main advantage of core memory is that it is nonvolatile.
Core Experiment – Core experiments verify the inclusion of a new technique or set of techniques. At the heart of the core experiment process are
multiple, independent, directly comparable experiments, performed to
determine whether or not proposed algorithmic techniques have merits.
A core experiment must be completely and uniquely defined, so that the
results are unambiguous. In addition to the specification of the algorithmic
technique(s) to be evaluated, a core experiment also specifies the parameters to be used (for example, audio sample rate or video resolution), so
that the results can be compared. A core experiment is proposed by one
or more MPEG experts, and it is approved by consensus, provided that two
or more independent experts carry out the experiment.
Core Visual Profile – Adds support for coding of arbitrary-shaped and
temporally scalable objects to the Simple Visual Profile. It is useful for
applications such as those providing relatively simple content interactivity
(Internet multimedia applications).
Coring – A system for reducing the noise content of circuits by removing
low-amplitude noise riding on the baseline of the signals. Both aperture
correction and enhancement can be cored. It involves preventing any
boosting of very low level edge transitions. The threshold point is the
coring control. The more the coring is increased, the more the extra noise
added by the enhanced (or aperture corrector) high frequency boosting is
reduced. Of course, the fine detail enhancement is also reduced or
eliminated. Too high levels of coring can cause a “plastic picture” effect.
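The threshold behavior can be sketched on a sequence of enhancement (detail) samples:

```python
def core(detail, threshold):
    """Coring: zero out low-amplitude detail samples (mostly noise)
    so they are not boosted by enhancement, while letting larger edge
    transitions through. Raising the threshold removes more noise but
    also more fine detail (the "plastic picture" risk)."""
    return [0 if abs(d) < threshold else d for d in detail]

# Small excursions (noise) are removed; the genuine edges survive.
print(core([0.01, -0.02, 0.5, -0.6], threshold=0.05))  # [0, 0, 0.5, -0.6]
```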
Correlation – A comparison of data which is used to find signals in noise
or for pattern recognition. It uses a best-match algorithm which compares
the data to the reference.
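A brute-force best-match sketch: slide the reference pattern across the data and keep the offset with the highest sum-of-products (correlation) score:

```python
def best_match(signal, pattern):
    """Slide `pattern` across `signal` and return the offset with the
    highest correlation (sum of products) -- a brute-force best-match
    as used in pattern recognition and motion tracking."""
    best_off, best_score = 0, float("-inf")
    for off in range(len(signal) - len(pattern) + 1):
        score = sum(s * p for s, p in zip(signal[off:], pattern))
        if score > best_score:
            best_off, best_score = off, score
    return best_off
```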
Co-Sited Sampling – Co-sited sampling ensures that the luminance and
the chrominance digital information is simultaneous, minimizing
chroma/luma delay. This sampling technique is applied to color difference
component video signals: Y, Cr, and Cb. The color difference signals, Cr and
Cb, are sampled at a sub-multiple of Y, the luminance frequency – 4:2:2,
for example. With co-sited sampling, the two color difference signals are
sampled at the same instant, as well as one of the luminance samples.
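A sketch of building a co-sited 4:2:2 sequence from full-resolution components (the data layout here is illustrative; real interfaces multiplex the samples as Cb, Y, Cr, Y words):

```python
def cosited_422(y, cb, cr):
    """Keep Cb and Cr only at even-numbered Y instants, so each retained
    pair of color-difference samples is simultaneous (co-sited) with a
    luminance sample. Inputs are full-resolution: one Cb/Cr per Y."""
    stream = []
    for i, ys in enumerate(y):
        if i % 2 == 0:                         # co-sited sampling instant
            stream.append((cb[i], ys, cr[i]))  # Cb, Y, Cr together
        else:
            stream.append((ys,))               # Y only
    return stream
```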
Co-Siting – Relates to SMPTE 125M component digital video, in which the
luminance component (Y) is sampled four times for every two samples of
the two chrominance components (Cb and Cr). Co-siting refers to delaying
transmission of the Cr component to occur at the same time as the second
sample of luminance data. This produces a sampling order as follows:
Y1/Cb1, Y2/Cr1, Y3/Cb3, Y4/Cr3 and so on. Co-siting reduces required bus
width from 30 bits to 20 bits.
CP_SEC (Copyright Protection System) – In DVD-Video, a 1-bit value
stored in the CPR_MAI that indicates if the corresponding sector has
implemented a copyright protection system. See Content Scrambling
System (CSS).
CPE (Common Phase Error) – Signal distortions that are common to all
carriers. This error can (partly) be suppressed by channel estimation using
the continual pilots.
CPM (Copyrighted Material) – In DVD-Video, a 1-bit value stored in
the CPR_MAI that indicates if the corresponding sector includes any
copyrighted material.
CPPM (Content Protection for Prerecorded Media) – Copy protection
for DVD-Audio.
CPR_MAI (Copyright Management Information) – In DVD-Video, an
extra 6 bytes per sector that includes the Copyright Protection System
Type (CPS_TY) and Region Management information (RMA) in the Contents
provider section of the Control data block; and Copyrighted Material flag
(CPM), Copyright Protection System flag (CP_SEC) and Copy Guard
Management System (CGMS) flags in the Data Area.
CPRM (Content Protection for Recordable Media) – Copy protection
for writable DVD formats.
CPS – Abbreviation for Characters Per Second.
CPS_TY (Copyright Protection System Type) – In DVD-Video, an 8-bit
(1 byte) value stored in the CPR_MAI that defines the type of copyright
protection system implemented on a disc.
CPSA (Content Protection System Architecture) – An overall copy
protection design for DVD.
CPTWG (Copy Protection Technical Working Group) – The industry
body responsible for developing or approving DVD copy protection systems.
CPU – See Central Processing Unit.
CPU Board – The printed circuit board within a workstation chassis that
contains the central processing unit(s). When you open the front metal
panel of the Indigo chassis, it is the board on the left.
CPV – This is a proprietary and relatively old format designed for 30 fps
video over packet based networks. It is still being used in closed video
systems where 30 fps is required, such as in security applications.
CR – Scaled version of the R-Y signal.
Crash Edit – An edit that is electronically unstable, such as one made
using the pause control on a deck, or using a non-capstan servoed deck.
Crash Recording – See Hard Recording.
Crawl – a) Titles that move slowly up the screen, mounted on a revolving
drum. b) Sideways movement of text across a screen. c) An appearance of
motion in an image where there should be none. See also Chroma Crawl
and Line Crawl.
Crawling Text – Text that moves horizontally over time. Examples include
stock and sports score tickers that appear along the bottom of a television
screen.
CRC – See Cyclic Redundancy Check.
Crease – A tape deformity which may cause horizontal or vertical lines in
the playback picture. See Wrinkle.
Credits – Listing of actors, singers, directors, etc., in title preceding or
directly following the program.
Creepy-Crawlies – Yes, this is a real video term! Creepy-crawlies refers
to a specific image artifact that is a result of the NTSC system. When the
nightly news is on, and a little box containing a picture appears over the
anchorperson’s shoulder, or when some computer-generated text shows up
on top of the video clip being shown, get up close to the TV and check it
out. Along the edges of the box, or along the edges of the text, you’ll notice
some jaggies “rolling” up (or down) the picture. That is the creepy-crawlies.
Some people refer to this as zipper because it looks like one.
Crispening – A means of increasing picture sharpness by generating and
applying a second time derivative of the original signal.
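A minimal one-dimensional sketch of this technique (illustrative only; the gain k and function name are assumptions, not from the glossary):

```python
def crispen(samples, k=0.5):
    """Sharpen a signal by subtracting a scaled discrete second derivative
    (x[i-1] - 2*x[i] + x[i+1]); the end samples are left unchanged."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        d2 = samples[i - 1] - 2 * samples[i] + samples[i + 1]
        out[i] = samples[i] - k * d2
    return out
```

Applied to a step edge such as [0, 0, 0, 10, 10, 10], this produces undershoot before and overshoot after the transition, which is what makes the edge look sharper.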
Critical Band – Frequency band of selectivity of the human ear which is a
psychoacoustic measure in the spectral domain. Units of the critical band
rate scale are expressed as Barks.
Crop – Term used for the action of moving left, right, top and bottom
boundaries of a key. See Trim.
Crop Box – A box that is superimposed over frames, either automatically
or manually, to limit color corrections, key setups, etc., to the area inside
the box.
Cropping – A digital process which removes areas of a picture (frame) by
replacing video pixels with opaque pixels of background colors. Cropping
may be used to eliminate unwanted picture areas such as edges or as
quasi-masking in preparation for keying.
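The pixel-replacement process described above can be sketched as follows (an illustrative example; the function name, inclusive box coordinates and scalar background value are assumptions):

```python
def crop(frame, left, right, top, bottom, bg=0):
    """Replace pixels outside the kept box with an opaque background
    value; the frame keeps its original dimensions, as in quasi-masking."""
    out = []
    for row_idx, row in enumerate(frame):
        if top <= row_idx <= bottom:
            out.append([p if left <= x <= right else bg
                        for x, p in enumerate(row)])
        else:
            out.append([bg] * len(row))
    return out
```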
Cross Color – Spurious signal resulting from high-frequency luminance
information being interpreted as color information in decoding a composite
signal. Typical video examples are “rainbow” on venetian blinds and striped
shirts.
Cross Luma – This occurs when the video decoder incorrectly interprets
chroma information (color) to be high-frequency luma information (brightness).
Cross Luminance – Spurious signals occurring in the Y channel as a
result of composite chroma signals being interpreted as luminance, such
as “dot crawl” or “busy edges” on colored areas.
Cross Mod – A test method for determining the optimum print requirements for a variable area sound track.
Cross Modulation – See Chrominance-to-Luminance Intermodulation.
Cross-Assembler – Assembler that runs on a processor whose assembly
language is different from the language being assembled.
Cross-Color – An artifact observed in composite systems employing
quadrature modulation and frequency interleaving. Cross-color results from
the multiplicities of line-scan harmonics in the baseband signal, which
provide families of frequencies surrounding each of the main harmonic
peaks. These families become even more complex if there is movement
in the scene luminance signals between scans. Since the interstices are,
therefore, not completely empty, some of the information on the luminance
signal is subsequently decoded as color information. A typical visible effect
is a moiré pattern.
Crossfade – The audio equivalent of the video dissolve where one sound
track is gradually faded out while a second sound track simultaneously
replaces the original one. See Mix.
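A crossfade can be sketched as a linear ramp between two tracks (an illustrative example, not from the glossary; equal-length sample lists are assumed):

```python
def crossfade(a, b):
    """Linear crossfade over the full length of two equal-length sample
    lists: track a ramps down to silence while track b ramps up."""
    n = len(a)  # assumes n >= 2 and len(b) == n
    return [a[i] * (1 - i / (n - 1)) + b[i] * (i / (n - 1)) for i in range(n)]
```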
Crosshatch – A test pattern consisting of vertical and horizontal lines
used for converging color monitors and cameras.
Cross-Luminance – An artifact observed in composite systems employing
quadrature modulation and frequency interleaving. As the analog of cross-color, cross-luminance results in some of the information carried by the
chrominance signal (on color subcarrier) being subsequently interpreted as
fine detail luminance information. A typical visible effect is chroma crawl
and visible subcarrier.
Cross-Luminance Artifacts – Introduced in the S-VHS concept for a
better luminance resolution.
Crossover Network – A device which divides a signal into two or more
frequency bands, routing each band to a separate output (for example,
low frequencies to a woofer and high frequencies to a tweeter). The level
of each output at the crossover frequency is 3 dB down from the flat
section of the crossover’s frequency response curve.
Cross-Play – By cross-play capability is meant the ability to record and
reproduce on the same or a different machine; record at one speed and
reproduce at the same or a different speed; accomplish the foregoing
singly or in any combination without readjustment for tape or transport.
Crosspoint – a) The electronic circuit used to switch video, usually on
a bus. b) An electronic switch, usually controlled by a push-button on the
panel, or remotely by computer that allows video or audio to pass when
the switch is closed.
Cross-Sectional Modeling – This type of modeling is also a boundary
representation method available in PictureMaker. The artist can define an
object’s cross-section, and then extrude in the longitudinal direction after
selecting an outline to define the cross-section’s changes in scale as it
traverses the longitudinal axis.
Crosstalk – The interference between two audio or two video signals
caused by unwanted stray signals. a) In video, crosstalk between input
channels can be classified into two basic categories: luminance/sync
crosstalk; and color (chroma) crosstalk. When video crosstalk is too high,
ghost images from one source appear over the other. b) In audio, signal
leakage, typically between left and right channels or between different
inputs, can be caused by poor grounding connections or improperly shielded cables. See Chrominance-to-Luminance Intermodulation.
Crosstalk Noise – The signal-to-crosstalk noise ratio is the ratio, in
decibels, of the nominal amplitude of the luminance signal (100 IRE units)
to the peak-to-peak amplitude of the interfering waveform.
CRT (Cathode Ray Tube) – There are three forms of display CRTs in color
television: tri-color (a color picture tube), monochrome (black and white),
and single color (red, green, or blue, used in projection television systems).
Many widescreen ATV schemes would require a different shape CRT, particularly for direct-view systems.
CRT Terminal – Computer terminal using a CRT display and a keyboard,
usually connected to the computer by a serial link.
Crushing the Blacks – The reduction of detail in the black regions of a
film or video image by compressing the lower end of the contrast range.
CS (Carrier Suppression) – This is the result of an unwanted coherent
signal added to the center carrier of the COFDM signal. It could be
produced from the DC offset voltages or crosstalk.
CSA (Common Scrambling Algorithm) – Scrambling algorithm specified
by DVB. The Common Scrambling Algorithm was designed to minimize the
likelihood of piracy attack over a long period of time. By using the Common
Scrambling Algorithm system in conjunction with the standard MPEG2
Transport Stream and selection mechanisms, it is possible to incorporate
in a transmission the means to carry multiple messages which all enable
control of the same scrambled broadcast but are generated by a number
of Conditional Access Systems.
CSC (Computer Support Collaboration) – Describes computers that
enhance productivity for people working in groups. Application examples
include video conferencing, video mail, and shared workspaces.
CSDI – See Compressed Serial Digital Interface.
CSELT (Centro Studi e Laboratori Telecomunicazioni S.p.A.) – CSELT
situated in Torino, Italy, is the research company owned by STET (Societa
Finanziaria Telefonica per Azioni), the largest telecommunications company
in Italy. CSELT has contributed to standards under ITU, ISO and ETSI and
has participated in various research programs. In order to influence the
production of standards, CSELT participates in groups such as DAVIC, the
ATM Forum, and in the Network Management Forum.
CSG (Constructive Solid Geometry) – In CSG, solid objects are
represented as Boolean combinations (union, intersection and difference)
of solids.
CS-Mount – A newer standard for lens mounting. It uses the same physical thread as the C-mount, but the back flange-to-CCD distance is reduced
to 12.5 mm in order to have the lenses made smaller, more compact and
less expensive. CS-mount lenses can only be used on CS-mount cameras.
Cursor – a) The small arrow on the screen that echoes the movements
of the mouse. It changes shape depending on its location on the screen.
b) An indicator on a screen that can be moved to highlight a particular
function or control, marking the parameter currently under adjustment
or selected.
CSPS – See Constrained System Parameter Stream.
Curvature Error – A change in track shape that results in a bowed or
S-shaped track. This becomes a problem if the playback head is not able
to follow the track closely enough to capture the information.
CSS (Content Scrambling System) – A type of digital copy protection
sanctioned by the DVD forum.
Curve – A single continuous line with continuity of tangent vector and of
curvature. It is defined by its type, degree, and rational feature.
CS-to-C-Mount Adaptor – An adaptor used to convert a CS-mount camera to C-mount to accommodate a C-mount lens. It looks like a ring 5 mm
thick, with a male thread on one side and a female on the other, with 1”
diameter and 32 threads/inch. It usually comes packaged with the newer
type (CS-mount) of cameras.
Curves Graph – An X, Y graph that plots input color values on the horizontal axis and output color values on the vertical axis. Used in the Color
Correction Tool as a control for changing the relationship between input
and output color values.
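The input-to-output mapping of such a graph can be sketched as piecewise-linear interpolation between control points (an illustrative example; the function name and clamping behavior are assumptions):

```python
def apply_curve(values, points):
    """Map input color values through a piecewise-linear curve defined by
    (input, output) control points; values outside the control-point
    range are clamped to the end outputs."""
    xs, ys = zip(*sorted(points))
    out = []
    for v in values:
        if v <= xs[0]:
            out.append(ys[0])
        elif v >= xs[-1]:
            out.append(ys[-1])
        else:
            # find the segment containing v, then interpolate linearly
            i = next(k for k in range(1, len(xs)) if v <= xs[k])
            t = (v - xs[i - 1]) / (xs[i] - xs[i - 1])
            out.append(ys[i - 1] + t * (ys[i] - ys[i - 1]))
    return out
```

The identity mapping [(0, 0), (255, 255)] leaves values unchanged; lifting the left point raises black levels, lowering the right point reduces whites.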
CSV (Comma Separated Variables) – Commonly used no-frills text
file format used for import from and export to spreadsheets and SQL
databases.
Cut – a) The immediate switching from one video source to another during
the vertical blanking interval. The visual effect is an abrupt change from
one picture to another. b) The nearly instantaneous switch from one picture
to another at the on-air output of the switcher. The switcher circuitry allows
cuts only during the vertical interval of the video signal so as to prevent
disruption of the picture. On the Vista, the Cut push-button in the Effects
Transition control group activates an effects cut. The DSK Cut Key-In
push-button cuts the downstream key on or off air. On AVCs, this is performed by a zero time auto transition.
CTA (Cordless Terminal Adapter) – Provides the interface between the
subscriber line on a hook-up site and the DBS (Direct Broadcast Satellite).
The CTA offers subscribers a range of services of equivalent or better
quality than a wired connection. The CTA also offers the option of more
advanced services, such as high-speed V.90 Internet access, and thus
provides a supplementary income source.
Cusp – Breakpoints on curves.
Cue – a) An editing term meaning to bring all source and record VTRs to
the predetermined edit point plus pre-roll time. b) An audio mixer function
that allows the user to hear an audio source (usually through headphones)
without selecting that source for broadcast/recording; the audio counterpart of a preview monitor. c) The act of rewinding and/or fast-forwarding a
video- or audiotape so that the desired section is ready for play.
Cut List – A series of output lists containing specifications used to
conform the film work print or negative. See also Dupe List.
Cue Channel – A dedicated track for sync pulses or timecode.
Cuts Only – Transition limited to on/off or instantaneous transition-type
edits; a basic editing process with limited capabilities.
Cue Control – A switch that temporarily disables a recorder’s Tape Lifters
during fast forward and rewind so the operator can judge what portion of
the recording is passing the heads.
Cue Mark – Marks used to indicate frames of interest on a clip.
Cupping – Curvature of a tape in the lateral direction. Cupping may occur
because of improper drying or curing of the coating or because of differences between the coefficients of thermal or hygroscopic expansion of
coating and base film.
Curl – A defect of a photographic film consisting of unflatness in a plane
cutting across the width of the film. Curl may result from improper drying
conditions, and the direction and amount of curl may vary with the humidity
of the air to which the film is exposed.
Cut-Off Frequency – That frequency beyond which no appreciable energy
is transmitted. It may refer to either an upper or lower limit of a frequency
band.
Cutout – See Matte.
Cutting – The selection and assembly of the various scenes or sequences
of a reel of film.
Cutting Head – A transducer used to convert electrical signals into hills
and valleys in the sides of record grooves.
CVBS (Color Video Blanking and Sync) – Another term for Composite
Video.
CVBS (Composite Video Baseband Signal)
CVBS (Composite Video, Blanking, Synchronization)
CVBS (Composite Video Bar Signal) – In broadcast television, this
refers to the video signal, including the color information and syncs.
Current – The flow of electrons.
CVC – See Compatible Video Consortium.
Current Tracer – Handheld troubleshooting tool used to detect current
flow in logic circuits.
CVCT – See Cable Virtual Channel Table.
Current Working Directory – The directory within the file system in
which you are currently located when you are working in a shell window.
CW (Continuous Wave) – Refers to a separate subcarrier sine wave used
for synchronization of the chrominance information.
CX Noise Reduction – This is a level sensitive audio noise reduction
scheme that involves compression, on the encode side, and expansion, on
the decode side. It was originally developed for CBS for noise reduction
on LP records and is a trademark of CBS, Inc. The noise reduction obtained
by CX was to be better than Dolby B3 for tape, but remain unnoticeable in
playback if decoding didn’t take place. A modified CX system was applied
to the analog audio tracks for the laserdisc to compensate for interference
between the audio and video carriers. The original CX system for LP
records was never implemented.
Cycle – An alternation of a waveform which begins at a point, passes
through the zero line and ends at a point with the same value and moving
in the same direction as the starting point.
Cycle Per Second – A measure of frequency, equivalent to Hertz.
Cycle Time – Total time required by a memory device to complete a read
or write cycle and become available again.
Cyclic Redundancy Check (CRC) – a) Used to generate check information on blocks of data. Similar to a checksum, but is harder to generate
and more reliable. b) Used in data transfer to check if the data has been
corrupted. It is a check value calculated for a data stream by feeding
it through a shifter with feedback terms “EXORed” back in. A CRC can
detect errors but not repair them, unlike an ECC, which is attached to
almost any burst of data that might possibly be corrupted. CRCs are used
on disks, ITU-R 601 data, Ethernet packets, etc. c) Error detection using
a parity check.
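The shifter-with-feedback description in b) can be sketched as a minimal bit-serial CRC-8 using the common generator polynomial 0x07 (an illustrative example, not part of the glossary):

```python
def crc8(data, poly=0x07):
    """Bit-serial CRC-8: shift each message byte through an 8-bit register,
    XORing the generator polynomial back in whenever the top bit is set."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

Any single corrupted bit changes the result, which is how the check detects (but cannot repair) errors.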
D/I (Drop and Insert) – A point in the transmission where portions of the
digital signal can be dropped out and/or inserted.
D1 – A non-compressed component digital video recording format that
uses data conforming to the ITU-R BT.601-2 standard. Records on high
end 19 mm (3/4”) magnetic tape recorders. Systems manufactured by
Sony and BTS. Most models can record 525, 625, ITU-R BT.601-2 and
SMPTE 125M. The D1 designation is often used incorrectly to indicate
component digital video.
D16 – A format to store film resolution images on D1 format tape
recorders. Records one film frame in the space normally used for 16
video frames.
D2 – A non-compressed composite digital video recording format originally developed by Ampex that uses data conforming to SMPTE 244M and
four 20 bit audio channels. Records on high end 19 mm (3/4”) magnetic
tape recorders. It uses the same tape cassette cartridge but the tape itself
is metal particle tape like Beta SP and MII. The D2 designation is often
used incorrectly to indicate composite digital video.
D2-MAC – Similar to D-MAC, the form preferred by manufacturers for
European DBS. See also MAC.
D3 – A non-compressed composite digital video recording format that
uses data conforming to SMPTE 244M and four 20 bit audio channels.
Records on high end 1/2” magnetic tape similar to M-II. The format was
developed by Matsushita and Panasonic.
D4 – A format designation never utilized due to the fact that the number
four is considered unlucky (being synonymous with death in some Asian
languages).
D5 – A non-compressed, 10 bit 270 Mbit/second, component or composite
digital video recording format developed by Matsushita and Panasonic. It
is compatible with 360 Mbit/second systems. It records on high end 1/2”
magnetic tape recorders.
D6 – A digital tape format which uses a 19 mm helical-scan cassette tape
to record uncompressed high definition television material at 1.88 GBps
(1.2 Gbps).
D7 – DVCPRO. Panasonic’s development of native DV component format.
D8 – There is no D8, nor will there be. The Television Recording and
Reproduction Technology Committee of SMPTE decided to skip D8 because
of the possibility of confusion with similarly named digital audio and data
recorders.
D9 – Digital-S. A 1/2-inch digital tape format developed by JVC which
uses a high-density metal particle tape running at 57.8 mm/s to record a
video data rate of 50 Mbps.
DA-88 – A Tascam-brand eight track digital audio tape machine using the
8 mm video format of Sony. It has become the de facto standard for audio
post production though there are numerous other formats, ranging from
swappable hard drives to analog tape formats and everything in between.
DAC (Digital-to-Analog Converter) – A device in which signals having a
few (usually two) defined levels or states (digital) are converted into signals
having a theoretically infinite number of states (analog).
DAC to DAC Skew – The difference in a full scale transition between R, G
and B DAC outputs measured at the 50% transition point. Skew is measured in tenths of nanoseconds.
DAE (Digidesign Audio Engine) – A trademark of Avid Technology, Inc.
The application that manages the AudioSuite plug-ins.
DAE (Digital Audio Extraction) – Reading digital audio data directly from
a CD audio disc.
DAI (DMIF Application Interface) – The bridge between DMIF (delivery
multimedia integration framework) and MPEG-4 systems.
Dailies – a) The first positive prints made by the laboratory from the negative photographed on the previous day. b) Film prints or video transfers of
recently shot film material, prepared quickly so that production personnel
can view and evaluate the previous day’s shooting before proceeding. Also
called Rushes, primarily in the United Kingdom.
Daisy Chain – Bus line that is interconnected with units so that the signal
passes from one unit to the next in serial fashion.
DAM (DECT Authentication Module) – a) An IC card used for cordless
telecommunications. b) A smart card that makes billing more secure and
prevents fraud. The DAM is reminiscent of the subscriber identity module
(SIM) card in the GSM standard.
Damped Oscillation – Oscillation which, because the driving force has
been removed, gradually dies out, each swing being smaller than the
preceding in smooth regular decay.
Dark Current – Leakage signal from a CCD sensor in the absence of
incident light.
Dark Noise – Noise caused by the random (quantum) nature of the dark
current.
DAT (Digital Audio Tape) – a) A consumer digital audio recording and
playback system developed by Sony, with a signal quality capability
surpassing that of the CD. b) A magnetic tape from which you can read
and to which you can copy audio and digital information.
Data – General term denoting any or all facts, numbers, letters, and
symbols or facts that refer to or describe an object, idea, condition, situation or other factors. Connotes basic elements of information that can be
processed or produced by a computer. Sometimes data is considered to
be expressible only in numerical form, but information is not so limited.
Data Acquisition – Collection of data from external sensors usually in
analog form.
Data Area – The physical area of a DVD disc between the lead in and
the lead out (or middle area) which contains the stored data content of
the disc.
DAB – See Digital Audio Broadcasting.
Data Base – Systematic organization of data files for easy access,
retrieval, and updating.
Data Bus – Set of lines carrying data. The data bus is usually bidirectional
and three-state.
Data Carousels – The data broadcast specification for data carousels
supports data broadcast services that require the periodic transmission of
data modules through DVB compliant broadcast networks. The modules
are of known sizes and may be updated, added to, or removed from the
data carousel in time. Modules can be clustered into a group of modules
if required by the service. Likewise, groups can in turn be clustered into
SuperGroups. Data broadcast according to the data carousel specification
is transmitted in a DSM-CC data carousel which is defined in MPEG-2
DSM-CC. This specification defines additional structures and descriptors
to be used in DVB compliant networks. The method is such that no explicit
references are made to PIDs and timing parameters enabling preparation
of the content off-line.
Data Circuit-Terminating Equipment (DCE) – Equipment at a node or
access point of a network that interfaces between the data terminal equipment (DTE) and the channel. For example, a modem.
Data Compression – Application of an algorithm to reduce the bit rate of
a digital signal, or the bandwidth of an analog signal while preserving as
much as possible of the information usually with the objective of meeting
the constraints in subsequent portions of the system.
Data Conferencing – Sharing of computer data by remote participants
by application sharing or shared white board technologies.
Data Domain – Analysis or display of signals in which only their digital
value is considered and not their precise voltage or timing. A logic state
analyzer displays information in the data domain.
Data Element – An item of data as represented before encoding and after
decoding.
Data Encryption Standard (DES) – A national standard used in the U.S.
for the encryption of digital information using keys. It provides privacy
protection but not security protection.
Data Essence – a) Essence that is distinguished as different from video
or audio essence. Digital data that may stand alone or may be associated
with video or audio essence or metadata. b) Refers to the bits and bytes
of new forms of content, such as interactive TV-specific content, Advanced
Television Enhancement Forum (ATVEF) content (SMPTE 363M), closed
captions, and the like.
Data Partitioning – A method for dividing a bit stream into two separate
bit streams for error resilience purposes. The two bit streams have to be
recombined before decoding.
Data Piping – The data broadcast specification profile for data pipes supports data broadcast services that require a simple, asynchronous, end-to-end delivery of data through DVB compliant broadcast networks. Data
broadcast according to the data pipe specification is carried directly in the
payloads of MPEG-2 TS packets.
Data Rate – The speed at which digital information is transmitted, typically
expressed in hertz (Hz), bits/second (b/s), or bytes/sec (B/s). The higher the
data rate of your video capture, the lower the compression and the higher
the video quality. The higher the data rate, the faster your hard drives must
be. Also called throughput.
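The relationship between data rate, duration and storage can be sketched as follows (an illustrative example; the function name is an assumption):

```python
def capture_size_bytes(data_rate_bps, seconds):
    """Storage consumed by a capture: data rate in bits per second times
    duration, divided by 8 bits per byte."""
    return data_rate_bps * seconds // 8
```

For example, one minute at 270 Mb/s occupies 270,000,000 × 60 / 8 ≈ 2.03 GB, which also sets the sustained rate the hard drives must deliver.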
Data Reduction – The process of reducing the number of recorded or
transmitted digital data samples through the exclusion of redundant or
unessential samples. Also referred to as Data Compression.
Data Search Information (DSI) – These packets are part of the 1.00
Mbit/sec overhead in video applications. These packets contain navigation
information for searching and seamless playback of the Video Object Unit
(VOBU). The most important field in this packet is the sector address. This
shows where the first reference frame of the video object begins. Advanced
angle change and presentation timing are included to assist seamless
playback. They are removed before entering the MPEG systems buffer, also
known as the System Target Decoder (STD).
Data Set – A group of two or more data essence or metadata elements
that are pre-defined in the relevant data essence standard or Dynamic
Metadata Dictionary and are grouped together under one UL Data Key.
Set members are not guaranteed to exist or be in any order.
Data Streaming – The data broadcast, specification profile for data
streaming supports data broadcast services that require a streaming-oriented, end-to-end delivery of data in either an asynchronous, synchronous
or synchronized way through DVB compliant broadcast networks. Data
broadcast according to the data streaming specification is carried in
Program Elementary Stream (PES) packets which are defined in MPEG-2
systems. See Asynchronous Data Streaming, Synchronous Data Streaming.
Data Terminal Equipment (DTE) – A device that controls data flowing to
or from a computer. The term is most often used in reference to serial
communications defined by the RS-232C standard.
Datacasting – Digital television allows for the transmission of not only
digital sound and images, but also digital data (text, graphics, maps,
services, etc.). This aspect of DTV is the least developed; but in the
near future, applications will likely include interactive program guides,
sports statistics, stock quotes, retail ordering information, and the like.
Datacasting is not two-way, although most industry experts expect
that set-top box manufacturers will create methods for interaction. By
integrating dial-up Internet connections with the technology, simple
responses will be possible using a modem and either an add-on keyboard
or the set-top’s remote control.
DATV (Digitally Assisted Television) – An ATV scheme first proposed in
DAVIC (Digital Audio Visual Council) – Facing a need to make a
multitude of audio/visual technologies and network protocols interoperate,
DAVIC was formed in 1993 by Dr. Leonardo Chiariglione, convenor of the
MPEG. The purpose of DAVIC is to provide specifications of open interfaces
and protocols to maximize interoperability in digital audio/visual applications and services. Thus, DAVIC operates as an extension of technology
development centers, such as MPEG.
dB (Decibel) – a) dB is a standard unit for expressing changes in relative
power. Variations of this formula describe power changes in terms of
voltage or current. dB = 10log10 (P1/P2). b) A logarithmic ratio of two
signals or values, usually refers to power, but also voltage and current.
When power is calculated the logarithm is multiplied by 10, while for
current and voltage by 20.
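The two formulas in this entry can be written out directly (an illustrative sketch; the function names are assumptions):

```python
import math

def db_power(p1, p2):
    """Decibels from a power ratio: 10 * log10(P1/P2)."""
    return 10 * math.log10(p1 / p2)

def db_voltage(v1, v2):
    """Decibels from a voltage (or current) ratio: 20 * log10(V1/V2),
    since power is proportional to the square of voltage."""
    return 20 * math.log10(v1 / v2)
```

Doubling power gives about +3 dB, while doubling voltage gives about +6 dB.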
DCE (Data Communication Equipment) – Devices and connections of a
communications network that comprise the network end of the user-to-network interface. The DCE provides a physical connection to the network,
forwards traffic, and provides a clocking signal used to synchronize data
transmission between DCE and DTE devices. Modems and interface cards
are examples of DCE.
dBFS (Decibel Full Scale)
DCI (Display Control Interface) – A software layer that provides direct
control of the display system to an application or client. The display vendor
provides information to the system (in addition to the display driver) that
allows DCI to offer a generic interface to a client.
dBm – dBm is a special case of dB where P2 in the dB formula is equal to
1 mW. See dB.
DBN – See Data Block Number.
DBS – See Direct Broadcast Satellite.
dBw – Refer to the definition of dB. dBw is a special case of dB where P2
in the dB formula is equal to 1 watt.
DC Coefficient – The DCT coefficient for which the frequency is zero in
both dimensions.
DC Coupled – A connection configured so that both the signal (AC component) and the constant voltage on which it is riding (DC component) are
passed through.
DC Erasure – See Erasure.
DC Noise – The noise arising when reproducing a tape which has been
non-uniformly magnetized by energizing the record head with DC, either
in the presence or absence of bias. This noise has pronounced long wavelength components which can be as much as 20 dB higher than those
obtained from a bulk erased tape. At very high values of DC, the DC noise
approaches the saturation noise.
DC Restoration – The correct blanking level for a video signal is zero
volts. When a video signal is AC-coupled between stages, it loses its DC
reference. A DC restoration circuit clamps the blanking at a fixed level.
If set properly, this level is zero volts.
DC Restore – DC restore is the process in which a video waveform has its
sync tips or backporch set to some known DC voltage level after it has
been AC coupled.
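The clamping step can be sketched numerically (an illustrative example; the function name and slice-based back-porch selection are assumptions):

```python
def dc_restore(samples, backporch, target=0.0):
    """Shift an AC-coupled waveform so the mean of its back-porch samples
    (given as a slice) sits at a known DC level, 0 V blanking by default."""
    porch = samples[backporch]
    offset = target - sum(porch) / len(porch)
    return [s + offset for s in samples]
```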
DCT – See Discrete Cosine Transform.
DCT Coefficient – The amplitude of a specific cosine basis function.
DCT Recording Format – Proprietary recording format developed by
Ampex that uses a 19 mm (3/4”) recording cassette. Records ITU-R
BT.601-2 and SMPTE 125M data with a 2:1 compression.
DCT⁻¹/IDCT (Inverse Discrete Cosine Transform) – A step in the MPEG
decoding process to convert data from the frequency domain back to the
spatial domain.
DD (Direct Draw) – A Windows 95 version of DCI. See DCI.
DD2 – Data recorders that have been developed using D2 tape offer
relatively vast storage of image or other data. Various data transfer rates
are available for different computer interfaces. Unlike other computer
storage media, editing is difficult and images are not directly viewable.
DDB (Download Data Block)
DDC (Data Download Control)
DDC2B – A serial control interface standard used to operate control
registers in picture monitors and video chips. The two-wire system is
defined by data and clock signals.
DDP (Disc Description Protocol) – A file or group of files which describe
how to master a data image file for optical disc (DVD or CD). This is an
ANSI industry standard developed by Doug Carson and Associates. The
laser beam recorders use this information in the mastering process.
DDR (Digital Disk Recorder) – See Digital Disk Recorder.
DC Restorer – A circuit used in picture monitors and waveform monitors
to clamp one point of the waveform to a fixed DC level.
DDS (Digital Data Service) – The class of service offered by telecommunications companies for transmitting digital data as opposed to voice.
DC Servo Motor – A motor, the speed of which is determined by the
DC voltage applied to it and has provision for sensing its own speed and
applying correcting voltages to keep it running at a certain speed.
Debouncing – Elimination of the bounce signals characteristic of
mechanical switches. Debouncing can be performed by either hardware
or software.
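As an illustration (not part of the original glossary definition), a minimal software debounce can require a new reading to repeat for several consecutive samples before the reported state changes:

```python
def debounce(samples, stable_count=3):
    """Report a switch state change only after the new reading has been
    seen for `stable_count` consecutive samples."""
    state = samples[0]
    run = 0
    out = [state]
    for s in samples[1:]:
        if s != state:
            run += 1
            if run >= stable_count:
                state = s
                run = 0
        else:
            run = 0
        out.append(state)
    return out

# A bouncy 0-to-1 press with two spurious flips before settling.
print(debounce([0, 1, 0, 1, 1, 1, 1]))  # → [0, 0, 0, 0, 0, 1, 1]
```

The `stable_count` threshold trades responsiveness against immunity to contact bounce.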
DC30 Editing Mode – An edit mode in Premiere – specifically for DC30
users – that allows video to be streamed out of the DC30 capture card
installed in a computer running Windows.
Debugger – A program designed to facilitate software debugging. In
general, it provides breakpoints, dump facilities, and the ability to examine
and modify registers and memory.
DCAM (Digital Camera) – Captures images (still or motion) digitally and
does not require analog-to-digital conversion before the image can be
transmitted or stored in a computer. The analog-to-digital conversion
process (which takes place in CODECs) usually causes some degradation
of the image, and a time delay in transmission. Avoiding this step theoretically provides a better, faster image at the receiving end.
Decay – a) The length of time it takes for an audio signal to fall below
the noise threshold. b) The adjustable length of time it takes for an ADO
DigiTrail effect to complete. (The trail catches up with the primary video.)
Decay Time – The time it takes for a signal to decrease to one-millionth
of its original value (60 dB down from its original level).
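The 60 dB figure and the one-millionth ratio are the same statement, since decibels express a power ratio logarithmically (an illustrative check, not from the glossary):

```python
import math

ratio = 1e-6                      # one-millionth of the original power
drop_db = 10 * math.log10(ratio)  # dB value of that power ratio
print(drop_db)                    # -60.0
```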
DCC (Digital Compact Cassette) – A consumer format from Philips using
PASC audio coding.
www.tektronix.com/video_audio 63
Video Terms and Acronyms
Decibel – One-tenth of a Bel. It is a relative measure of signal or sound
intensity or “volume”. It expresses the ratio of one intensity to another. One
dB is about the smallest change in sound volume that the human ear can
detect. (Can also express voltage and power ratios logarithmically.) Used to
define the ratio of two powers, voltages, or currents. See the definitions of
dB, dBm and dBw.
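The power and voltage forms of the decibel can be sketched in Python (an illustrative example, not part of the source glossary):

```python
import math

def db_power(p_out, p_in):
    """Decibel value of a power ratio: 10 * log10(P2/P1)."""
    return 10 * math.log10(p_out / p_in)

def db_voltage(v_out, v_in):
    """Decibel value of a voltage ratio: 20 * log10(V2/V1)."""
    return 20 * math.log10(v_out / v_in)

print(round(db_power(2.0, 1.0), 2))    # 3.01  — doubling power
print(round(db_voltage(2.0, 1.0), 2))  # 6.02  — doubling voltage
```

The factor of 20 for voltage follows because power is proportional to voltage squared.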
Decimation – Term used to describe the process by which an image file is
reduced by throwing away sampled points. If an image array consisted of
100 samples on the X axis and 100 samples on the Y axis, and every other
sample were thrown away, the image file is decimated by a factor of 2
and the size of the file is reduced to 1/4 of the original. If only one sample
out of every four is saved, the decimation factor is 4 and the file size is
1/16 of the original. Decimation is a low-cost way of compressing video
files and is found in many low-cost systems. Decimation, however,
introduces many artifacts that are unacceptable in higher cost systems.
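The file-size arithmetic above can be demonstrated with a short Python sketch (illustrative only, not from the glossary):

```python
def decimate(image, factor):
    """Keep every `factor`-th sample on each axis; the rest are thrown away."""
    return [row[::factor] for row in image[::factor]]

image = [[x for x in range(100)] for _ in range(100)]  # 100 x 100 samples
half = decimate(image, 2)
print(len(half), len(half[0]))  # 50 50 — 1/4 of the original file size
```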
Decoder Input Buffer – The first-in first-out (FIFO) buffer specified in the
video buffering verifier.
Decoder Input Rate – The data rate specified in the video buffering verifier and encoded in the coded video bit stream.
Decoding (Process) – a) The process that reads an input coded bit
stream and produces decoded pictures or audio samples. b) Converting
semantic entities related to coded representation of individual audiovisual
objects into their decoded representation. Decoding is performed by calling
the public method decode of the audiovisual object.
Decoding Buffer (DB) – A buffer at the input of a media object decoder
that contains access units.
Decoding Layer – The MPEG-4 Systems Layer that encompasses the
Syntactic Decoding Layer and the Decompression Layer and performs the
Decoding Process.
Decimation Filter – The Decimation Filter is designed to provide
decimation without the severe artifacts associated with throwing data
away although artifacts still exist. (See the definition of Decimation.)
The Decimation Filter process still throws data away but reduces image
artifacts by smoothing out the voltage steps between sampled points.
Decoding Script – The description of the decoding procedure (including
calls to specific decoding tools).
Deck Controller – A tool that allows the user to control a deck using
standard functions such as shuttle, play, fast forward, rewind, stop and
eject.
Decompose – To create new, shorter master clips based on only the
material you have edited and included in your sequence.
Deck, Tape – A tape recorder that does not include power amplifiers or
speakers.
Decode – a) To separate a composite video signal into its component
parts. b) To reconstruct information (data) by performing the inverse
(reverse) functions of the encode process.
Decoded Audiovisual Object – See Decompressed Audiovisual Objects.
Decoding Time Stamp (DTS) – A field that may be present in a PES
packet header that indicates the time that an access unit is decoded in
the system target decoder.
Decompress – The process of converting video and audio data from its
compact form back into its original form in order to play it. Compare
Compress.
Decompressed Audiovisual Object (Decompressed AV Object) –
The representation of the audiovisual object that is optimized for the needs
of the Composition Layer and the Rendering Layer as it goes out of the
Decompression Layer.
Decoded Representation – The intermediate representation of AV objects
that is output from decoding and input to compositing. It is independent of
the particular formats used for transmitting or presenting this same data.
It is suitable for processing or compositing without the need to revert to a
presentable format (such as bit map).
Decompression Layer – The MPEG-4 Systems Layer that converts
semantic entities from Syntactic Decoded Audiovisual Objects into their
decompressed representation (Decompressed Audiovisual Objects).
Decoded Stream – The decoded reconstruction of a compressed bit
stream.
Decryption – The process of unscrambling signals for reception and
playback by authorized parties. The reverse process of encryption.
Decoder – a) Device used to recover the component signals from a
composite (encoded) source. Decoders are used in displays and in various
processing hardware where component signals are required from a
composite source such as composite chroma keying or color correction
equipment. b) Device that changes NTSC signals into component signals;
sometimes devices that change digital signals to analog (see DAC). All
color TV sets must include an NTSC decoder. Because sets are so inexpensive, such decoders are often quite rudimentary. c) An embodiment of a
decoding process.
DECT (Digital Enhanced Cordless Telecommunications) – A cordless
phone standard widely used in Europe. Based on TDMA and the 1.8 and
1.9 GHz bands, it uses Dynamic Channel Selection/Dynamic Channel
Allocation (DCS/DCA) to enable multiple DECT users to coexist on the
same frequency. DECT provides data links up to 522 kbps with 2 Mbps
expected in the future. Using dual-mode handsets, DECT is expected to
coexist with GSM, which is the standard cell phone system in Europe.
Decoder Buffer (DB) – A buffer at the input of a media object decoder
that contains access units.
Decoder Configuration – The configuration of a media object decoder
for processing its elementary stream data by using information contained
in its elementary stream descriptor.
Decrement – Programming instruction that decreases the contents of a
storage location.
Dedicated – Set apart for some special use. A dedicated microprocessor
is one that has been specially programmed for a single application such as
weight measurement, traffic light control, etc. ROMs by their very nature
are dedicated memories.
Dedicated Keyboard – A keyboard assigned to a specific purpose.
Deemphasis – Also known as postemphasis and post-equalization.
Deemphasis modifies the frequency-response characteristic of the signal
in a way that is complementary to that introduced by preemphasis.
Deemphasis Network – Circuit that restores the preemphasized frequency response to its original levels.
Deesser – A compressor which reduces sibilance by triggering compression when it senses the presence of high frequency signals above the
compression threshold.
Default – The setup condition (for example, transition rate settings, color
of the matte gens, push-button status) existing when a device is first
powered-up, before you make any changes.
Default Printer – The printer to which the system directs a print request
if you do not specify a printer when you make the request. You set the
default printer using the Print Manager.
Delay Correction – When an electronic signal travels through electronic
circuitry or even through long coaxial cable runs, delay problems may
occur. This is manifested as a displaced image and special electronic
circuitry is needed to correct it.
Delay Distortion – Distortion resulting from non-uniform speed of transmission of the various frequency components of a signal; i.e., the various
frequency components of the signal have different times of travel (delay)
between the input and the output of a circuit.
Delay Distribution Amplifier – An amplifier that can introduce adjustable
delay in a video signal path.
Defaults – A set of behaviors specified on every system. You can later
change these specifications which range from how your screen looks to
what type of drive you want to use to install new software.
Delay Line – An artificial or real transmission line or equivalent device
designed to delay a wave or signal for a specific length of time.
Defect – For tape, an imperfection in the tape leading to a variation in
output or a dropout. The most common defects take the form of surface
projections, consisting of oxide agglomerates, imbedded foreign matter, or
redeposited wear products.
Delivery – Getting television signals to a viewer. Delivery might be
physical (e.g., cassette or disc) or electronic (e.g., broadcast, CATV, DBS,
optical fiber).
Definition – The aggregate of fine details available on-screen. The higher
the image definition, the greater the number of details that can be discerned. During video recording and subsequent playback, several factors
can conspire to cause a loss of definition. Among these are the limited
frequency response of magnetic tapes and signal losses associated with
electronic circuitry employed in the recording process. These losses occur
because fine details appear in the highest frequency region of a video
signal and this portion is usually the first casualty of signal degradation.
Each additional generation of a videotape results in fewer and fewer fine
details as losses are accumulated.
Degauss – To demagnetize (erase) all recorded material on a magnetic
videotape, an audiotape or the screen of a color monitor.
Degaussing – A process by which a unidirectional magnetic field is
removed from such transport parts as heads and guides. The presence of
such a field causes noise and a loss of high frequencies.
Degenerate – Being simpler mathematically than the typical case. A
degenerate edge is reduced to one point. A degenerate polygon has a
null surface.
Degree – An indication of the complexity of a curve.
Deinterlace – Separation of field 1 and field 2 in a source clip, producing
a new clip twice as long as the original.
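Field separation can be sketched in a few lines of Python (an illustrative example, not part of the source glossary):

```python
def deinterlace(frames):
    """Split each interlaced frame (a list of scan lines) into its two
    fields, yielding a clip with twice as many images as the original."""
    out = []
    for frame in frames:
        out.append(frame[0::2])  # field 1: even-numbered lines
        out.append(frame[1::2])  # field 2: odd-numbered lines
    return out

clip = [["l0", "l1", "l2", "l3"]]  # one 4-line frame
print(deinterlace(clip))           # → [['l0', 'l2'], ['l1', 'l3']]
```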
Del Ray Group – Proponent of the HD-NTSC ATV scheme.
Delay – a) The time required for a signal to pass through a device or
conductor. b) The time it takes for any circuitry or equipment to process a
signal when referenced to the input or some fixed reference (i.e., house
sync). Common usage is total delay through a switcher or encoder. c) The
amount of time between input of the first pixel of a particular picture by
the encoder and the time it exits the decoder, excluding the actual time in
the communication channel. It is the combined processing time of the
encoder and decoder. For face-to-face or interactive applications, the delay
is crucial. It usually is required to be less than 200 milliseconds for
one-way communication.
Delete – Edit term to remove.
Delivery System – The physical medium by which one or more multiplexes are transmitted, e.g., satellite system, wideband coaxial cable, fiber
optics, terrestrial channel of one emitting point.
Delta Frame – Contains only the data that has changed since the last
frame. Delta frames are an efficient means of compressing image data.
Compare Key Frame.
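A minimal Python sketch of the idea (illustrative only, not from the glossary) records only the changed pixels and reapplies them to the previous frame:

```python
def delta_frame(prev, curr):
    """Record only the pixels that changed since the previous frame,
    as (index, new_value) pairs."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus a delta."""
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

prev = [10, 10, 10, 10]
curr = [10, 99, 10, 12]
d = delta_frame(prev, curr)
print(d)                              # → [(1, 99), (3, 12)]
print(apply_delta(prev, d) == curr)   # → True
```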
Demodulation – The process of recovering the intelligence from a
modulated carrier.
Demodulator – a) A device which recovers the original signal after it has
been modulated with a high frequency carrier. In television, it may refer to
an instrument which takes video in its transmitted form (modulated picture
carrier) and converts it to baseband; the circuits which recover R-Y and
B-Y from the composite signal. b) A device that strips the video and audio
signals from the carrier frequency.
Demultiplexer (Demux) – A device used to separate two or more signals
that were previously combined by a compatible multiplexer and transmitted
over a single channel.
Demultiplexing – Separating elementary streams or individual channels
of data from a single multi-channel stream. For example, video and audio
streams must be demultiplexed before they are decoded. This is true for
multiplexed digital television transmissions.
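The separation step can be sketched in Python (an illustrative example with a hypothetical packet layout, not a real transport-stream parser):

```python
def demultiplex(stream):
    """Separate a single multiplexed stream of (channel_id, payload)
    packets back into per-channel elementary streams."""
    channels = {}
    for cid, payload in stream:
        channels.setdefault(cid, []).append(payload)
    return channels

mux = [("video", "V0"), ("audio", "A0"), ("video", "V1"), ("audio", "A1")]
print(demultiplex(mux))
# → {'video': ['V0', 'V1'], 'audio': ['A0', 'A1']}
```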
Density – a) The degree of darkness of an image. b) The percent of
screen used in an image. c) The negative logarithm to the base ten of the
transmittance (or reflectance) of the sample. A sample which transmits
1/2 of the incident light has a transmittance of 0.50 or 50% and a density
of 0.30.
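The logarithmic definition in c) can be checked directly (an illustrative example, not part of the source glossary):

```python
import math

def density(transmittance):
    """Optical density: the negative base-10 logarithm of transmittance."""
    return -math.log10(transmittance)

print(round(density(0.50), 2))  # 0.3 — a sample transmitting half the light
```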
Depth Cueing – Varies the intensity of shaded surfaces as a function of
distance from the eye.
Depth of Field – a) The range of objects in front of a camera lens which
are in focus. Smaller apertures (higher F-stop numbers) provide greater depth of field, i.e., more of
the scene, near to far, will be in focus. b) The area in front of and behind
the object in focus that appears sharp on the screen. The depth of field
increases with the decrease of the focal length, i.e., the shorter the focal
length the wider the depth of field. The depth of field is always wider
behind the objects in focus.
Depth of Modulation – This measurement indicates whether or not
video signal levels are properly represented in the RF signal. The NTSC
modulation scheme yields an RF signal that reaches its maximum
peak-to-peak amplitude at sync tip (100%). In a properly adjusted signal,
blanking level corresponds to 75%, and peak white to 12.5%. The zero
carrier reference level corresponds to 0%. Overmodulation often shows up
in the picture as a nonlinear distortion such as differential phase or
differential gain. Incidental Carrier Phase Modulation (ICPM) or white
clipping may also result. Undermodulation often results in a degraded
signal-to-noise ratio.
Description Definition Language (DDL) – A language that allows the
creation of new description schemes and, possibly, descriptors. It also
allows the extension and modification of existing description schemes.
Description Scheme (DS) – Specifies the structure and semantics of the
relationships between its components, which may be both descriptors and
description schemes.
Descriptor (D) – a) MPEG systems data structures that carry descriptive
and relational information about the program(s) and their Packetized
Elementary Streams (PES). b) A representation of a feature, a descriptor
defines the syntax and the semantics of the feature representation. c) A
data structure that is used to describe particular aspects of an elementary
stream or a coded media object.
Descriptor Value – An instantiation of a descriptor for a given data set (or
subset thereof).
Deserializer – A device that converts serial digital information to parallel.
Desk Top Video (DTV) – a) Use of a desktop computer for video production. b) Self-contained computer and display with integrated video and
optional network interface for local and remote work and information
access.
Detail – Refers to the most minute elements in a picture which are distinct
and recognizable. Similar to Definition or Resolution.
Deterministic – A process or model whose outcome does not depend
upon chance, and where a given input will always produce the same
output. Audio and video decoding processes are mostly deterministic.
Development System – Microcomputer system with all the facilities
required for hardware and software development for a given microprocessor. Generally consists of a microcomputer system, CRT display, printer,
mass storage (usually dual floppy-disk drives), PROM programmer, and
in-circuit emulator.
Device Driver – Software to enable a computer to access or control a
peripheral device, such as a printer.
Device Interface – A conversion device that separates the RGB and sync
signals to display computer graphics on a video monitor.
Depth Shadow – A shadow that extends solidly from the edges of a title
or shape to make it appear three-dimensional. See also Drop Shadow.
Dequantization – The process of rescaling the quantized discrete cosine
transform coefficients after their representation in the bit stream has been
decoded and before they are presented to the inverse DCT.
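A simplified sketch of the rescaling step (illustrative only; real MPEG dequantization adds rounding and saturation rules):

```python
def dequantize(levels, quant_matrix):
    """Rescale quantized DCT coefficients by their per-coefficient
    quantizer step sizes before the inverse DCT."""
    return [[lvl * q for lvl, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, quant_matrix)]

levels = [[4, 1], [0, -1]]     # decoded quantization levels
qm     = [[16, 11], [12, 14]]  # per-coefficient step sizes
print(dequantize(levels, qm))  # → [[64, 11], [0, -14]]
```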
Descrambler – Electronic circuit that restores a scrambled video signal
to its original form. Television signals – especially those transmitted
by satellite – are often scrambled to protect against theft and other
unauthorized use.
Description – Consists of a description scheme (structure) and a set of
descriptor values (instantiations) that describe the data.
DFD (Displaced Frame Difference) – Differential picture if there is
motion.
D-Frame – Frame coded according to an MPEG-1 mode which uses DC
coefficients only.
DHEI (DigiCable Headend Expansion Interface) – The DigiCable
Headend Expansion Interface (DHEI) is intended for the transport of
MPEG-2 system multiplexes between pieces of equipment in the headend.
It originally was a proprietary interface of General Instrument, but now
has been standardized by the SCTE (Society of Cable Telecommunications
Engineers) for use in the cable industry.
Diagnostics – A series of tests that check hardware components of a
system.
Diagonal Resolution – Amount of detail that can be perceived in a diagonal direction. Although diagonal resolution is a consequence of horizontal
and vertical resolution, it is not automatically equivalent to them. In fact,
ordinary television systems usually provide about 40 percent more diagonal
Video Terms and Acronyms
resolution than horizontal or vertical. Many ATV schemes intentionally sacrifice diagonal resolution in favor of some other characteristics (such as
improved horizontal or vertical resolution) on the theory that human vision
is less sensitive to diagonal resolution than to horizontal or vertical. There
is some evidence that diagonal resolution could be reduced to about 40
percent less than either horizontal or vertical (overall half of its NTSC value)
with no perceptible impairment. See also Resolution.
Diagonal Split – An unusual quad split feature found on Ampex switchers,
allowing diagonal or X shaped divisions between sources, as opposed to
the traditional horizontal and vertical divisions.
Dialog Normalization Value – The dialog normalization value is a Dolby
Digital parameter that describes the long-term average dialog level of the
associated program. It may also describe the long-term average level of
programs that do not contain dialog, such as music. This level is specified
on an absolute scale ranging from -1 dBFS to -31 dBFS. Dolby Digital
decoders attenuate programs based on the dialog normalization value in
order to achieve uniform playback level.
Differential Pulse Code Modulation – DPCM is a source coding
scheme that was developed for encoding sources with memory. The
reason for using the DPCM structure is that for most sources of practical
interest, the variance of the prediction error is substantially smaller than
that of the source.
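A minimal first-order DPCM sketch (illustrative, not from the glossary) transmits the difference between each sample and the previous one, which for correlated sources has a much smaller variance than the samples themselves:

```python
def dpcm_encode(samples):
    """Encode each sample as its difference from the previous sample
    (the prediction); an initial prediction of 0 is assumed."""
    pred = 0
    out = []
    for s in samples:
        out.append(s - pred)
        pred = s
    return out

def dpcm_decode(diffs):
    """Rebuild the samples by accumulating the transmitted differences."""
    pred = 0
    out = []
    for d in diffs:
        pred += d
        out.append(pred)
    return out

src = [100, 102, 101, 105]
enc = dpcm_encode(src)
print(enc)                      # → [100, 2, -1, 4]
print(dpcm_decode(enc) == src)  # → True
```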
Differentiated Step Filter – A special “diff step” filter is used to measure
luminance nonlinearity. When this filter is used with a luminance step
waveform each step on the waveform is translated into a spike that is
displayed on the waveform monitor. The height of each spike translates
into the height of the step so the amount of distortion can be determined
by comparing the height of each spike. Refer to the figure below.
DIB (Device Independent Bitmap) – A file format that represents bitmap
images in a device-independent manner. Bitmaps can be represented at 1,
4 and 8 bits-per-pixel with a palette containing colors representing 24 bits.
Bitmaps can also be represented at 24 bits-per-pixel without a palette in a
run-length encoded format.
Dielectric – An insulating (nonconductive) material.
Differential Gain – a) A nonlinear distortion often referred to as “diff
gain” or “dG”. It is present if a signal’s chrominance gain is affected by
luma levels. This amplitude distortion is a result of the system’s inability to
uniformly process the high frequency chrominance signals at all luma levels. The amount of differential gain distortion is expressed in percent. Since
both attenuation and peaking of chrominance can occur in the same signal, it is important to specify whether the maximum over all amplitude difference or the maximum deviation from the blanking level amplitude is
being quoted. In general, NTSC measurement standards define differential
gain as the largest amplitude deviation between any two levels, expressed
as a percent of the largest chrominance amplitude. When differential gain
is present, color saturation has an unwarranted dependence on luminance
level. Color saturation is often improperly reproduced at high luminance
levels. The Modulated Ramp or Modulated Stair Step signals can be used
to test for differential gain. b) The amplitude change, usually of the 3.6
MHz color subcarrier, introduced by the overall circuit, measured in dB or
percent, as the subcarrier is varied from blanking to white level.
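The percentage computation described in a) can be sketched as follows (an illustrative example with assumed measurement values, not from the glossary):

```python
def differential_gain(chroma_amplitudes):
    """Differential gain in percent: the largest amplitude deviation
    between any two levels, relative to the largest chrominance amplitude."""
    largest = max(chroma_amplitudes)
    smallest = min(chroma_amplitudes)
    return 100 * (largest - smallest) / largest

# Hypothetical subcarrier amplitudes (mV) measured at five luminance steps
print(round(differential_gain([286, 286, 283, 280, 278]), 2))
```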
Differential Phase – a) A nonlinear distortion often referred to as “diff
phase” or “dP”. It is present if a signal’s chrominance phase is affected by
the luminance level. It occurs because of the system’s inability to uniformly
process the high frequency chrominance information at all luminance
levels. Diff phase is expressed in degrees of subcarrier phase. The
subcarrier phase can be distorted such that the subcarrier phase is
advanced (lead or positive) or delayed (lag or negative) in relation to its
original position. In fact, over the period of a video line, the subcarrier
phase can be both advanced and delayed. For this reason it is important to
specify whether “peak to peak diff phase” is being specified or “maximum
deviation from 0” in one direction or another. Normally the “peak to peak
diff phase” is given. dP distortions cause changes in hue when picture
brightness changes. Colors may not be properly reproduced, particularly in
high-luminance areas of the picture. b) The phase change of the 3.6 MHz
color subcarrier introduced by the overall circuit, measured in degrees, as
the subcarrier is varied from blanking to white level.
Diffuse – a) Diffuse light is the light reflected by a matte surface; without
glare or highlight. It is based on relative orientation of surface normal and
light source positions and luminance. b) Widely spread or scattered. Used
to define lighting that reflects equally in all directions producing a matte,
or flat, reflection on an object. The reflection intensity depends on the light
source relative to the surface of the object.
DigiCipher® – DigiCipher is a compression and transmission technology
from General Instrument (now Motorola), dedicated to Digital TV distribution
via satellite. DigiCipher video coding is based on DCT like MPEG, but
does not use B-pictures. Instead, it uses a so-called adaptive prediction
mode. DigiCipher 1 was the first incarnation and is still used today
by many providers since it was the first commercially available digital
compression scheme.
DigiCipher® II – This is General Instrument’s (now Motorola) latest distribution system and is the standard for 4DTV product. DCII uses standard
MPEG-2 video encoding, but just about everything else in this “standard” is
unique to DCII. For example, DVB/MPEG-2 uses Musicam for audio
whereas DCII uses Dolby AC-3. Despite using the same video standard,
DVB/MPEG-2 and DCII signals are totally incompatible and no receiver can
currently receive both.
Digiloop – Patented circuitry within the Vista switcher, which allows the
insertion of a digital effects device within the architecture of the switcher.
This allows multi-channels of digital effects to be utilized on a single M/E,
which would otherwise require 3 M/Es.
Digimatte (Menu) – The key channel processor, providing a separate
channel specifically for black and white key signals that processes and
manipulates an external key signal in the same way as source video in
3D space.
Digit – Sign or symbol used to convey a specific quantity of information
either by itself or with other numbers of its set: 2, 3, 4, and 5 are digits.
The base or radix must be specified and each digit’s value assigned.
DigiTAG (Digital Television Action Group)
Digital – a) Having discrete states. Most digital logic is binary, with two
states (on or off). b) A discontinuous electrical signal that carries information in binary fashion. Data is represented by a specific sequence of off-on
electrical pulses. A method of representing data using binary numbers.
An analog signal is converted to digital by the use of an analog-to-digital
(A/D) converter chip by taking samples of the signal at a fixed time interval
(sampling frequency). Assigning a binary number to these samples, this
digital stream is then recorded onto magnetic tape. Upon playback, a digital-to-analog (D/A) converter chip reads the binary data and reconstructs
the original analog signal. This process virtually eliminates generation loss
as every digital-to-digital copy is theoretically an exact duplicate of the
original allowing multi-generational dubs to be made without degradation.
In actuality of course, digital systems are not perfect and specialized
hardware/software is used to correct all but the most severe data loss.
Digital signals are virtually immune to noise, distortion, crosstalk, and
other quality problems. In addition, digitally based equipment often offers
advantages in cost, features, performance and reliability when compared
to analog equipment.
Digital 8 – Digital 8 compresses video using standard DV compression,
but records it in a manner that allows it to use standard Hi-8 tape. The
result is a DV “box” that can also play standard Hi-8 and 8 mm tapes. On
playback, analog tapes are converted to a 25 Mbps compressed signal
available via the iLink digital output interface. Playback from analog
tapes has limited video quality. New recordings are digital and identical
in performance to DV; audio specs and other data also are the same.
Digital Audio – Audio that has been encoded in a digital form for
processing, storage or transmission.
Digital Audio Broadcasting (DAB) – a) NRSC (National Radio Systems
Committee) term for the next generation of digital radio equipment.
b) Modulations for sending digital rather than analog audio signals by
either terrestrial or satellite transmitter with audio response up to compact
disc quality (20 kHz). c) DAB was started as EUREKA project EU 147 in
1986. The digital audio coding process called MUSICAM was designed
within EUREKA 147 by CCETT. The MUSICAM technique was selected by
MPEG as the basis of the MPEG-1 audio coding, and it is the MPEG-1
Layer II algorithm which will be used in the DAB system. The EUREKA 147
project, in close cooperation with EBU, introduced the DAB system
approach to the ITU-R, which subsequently has been contributing actively
for the worldwide recognition and standardization of the DAB system. EBU,
ETSI and EUREKA 147 set up a joint task committee with the purpose of
defining a European Telecommunications Standard (ETS) for digital sound
broadcasting, based on the DAB specifications. ETSI published the EUREKA
147 system as standard ETS 300 401 in February 1995, and market adoption is forthcoming; the BBC, for instance, plans to have 50% transmission
coverage in 1997 when DAB receivers are being introduced to the public.
Digital Audio Clipping – Occurs when the audio sample data is 0 dBFS
for a number of consecutive samples. When this happens, an indicator will
be displayed in the level display for a period of time set by the user.
Digital Audio Recording – A system which converts audio signals into
digital words which are stored on magnetic tape for later reconversion to
audio, in such a manner that dropouts, noise, distortion and other poor
tape qualities are eliminated.
Digital Betacam – A development of the original analog Betacam VTR
which records digitally on a Betacam-style cassette. A digital video tape
format using the CCIR 601 standard to record 4:2:2 component video in
compressed form on 12.5 mm (1/2”) tape.
Digital Borderline – A GVG option and term. A digital border type with
fewer settings, hence less control than the analog type used on Ampex
switchers.
Digital Cable – A service provided by many cable providers which offers
viewers more channels, access to pay-per-view programs and online
guides. Digital cable is not the same as HDTV or DTV; rather, digital
cable simply offers cable subscribers the option of paying for additional
services.
Digital Chroma Keying – Digital chroma keying differs from its analog
equivalent in that it can key uniquely from any one of the 16 million colors
represented in the component digital domain. It is then possible to key
from relatively subdued colors, rather than relying on highly saturated colors that can cause color spill problems on the foreground. A high-quality
digital chroma keyer examines each of the three components of the picture
and generates a linear key for each. These are then combined into a composite linear key for the final keying operation. The use of three keys allows
much greater subtlety of selection than does a chrominance-only key.
Digital Cinemas – Facing the high costs of copying, handling and distribution of film, an infrastructure enabling digital transport of movies to
digital cinemas could be highly attractive. In addition, digital delivery of
films can effectively curb piracy. The MPEG-2 syntax supports the levels
of quality and features needed for this application.
Digital Component – Component signals in which the values for each
pixel are represented by a set of numbers.
Digital Component Video – Digital video using separate color components, such as YCbCr or RGB. See ITU-R BT.601-2. Sometimes incorrectly
referred to as D1.
Digital Composite Video – The digitized waveform of (M) NTSC or (B, D,
G, H, I) PAL video signals, with specific digital values assigned to the sync,
blank, and white levels. Sometimes incorrectly referred to as D2 or D3.
Digital Compression – A process that reduces storage space and/or
transmission data rate necessary to store or transmit information that is
represented in a digital format.
Digital Cut – The output of a sequence, which is usually recorded to tape.
Digital Disk Recorder (DDR) – a) A digital video recording device based
on high-speed computer disk drives. Commonly used as a means to get
video into and out from computers. b) A video recording device that uses
a hard disk or optical disk drive mechanism. Disk recorders offer quick
access to recorded material.
Digital Effects – Special effects created using a digital video effects (DVE)
unit.
Digital Moving Picture (dpx) – The SMPTE standard file format for
Digital Moving Picture Exchange, derived from the Kodak Cineon raster
file format.
Digital Parallel Distribution Amplifier – A distribution amplifier
designed to amplify and fan-out parallel digital signals.
Digital Recording – A method of recording in which the information
(usually audio or video) is first coded in a digital form. Most commonly,
a binary code is used and recording takes place in terms of two discrete
values of residual flux.
Digital Rights Management (DRM) – A generic term for a number of
capabilities that allow a content producer or distributor to determine under
what conditions their product can be acquired, stored, viewed, copied,
loaned, etc. Popular proprietary solutions include InterTrust, etc.
Digital S – A digital tape format that uses 1/2-inch high-density metal
particle tape, running at 57.8 mm/s, to record a video data rate of 50
Mbps. Video sampled at 4:2:2 is compressed at 3.3:1 using DCT-based
intra-frame compression. Two individually editable audio channels are
recorded using 16-bit, 48 kHz sampling. The tape can be shuttled and
searched up to x32 speed. Digital S includes two cue tracks and four
further audio channels in a cassette housing with the same dimensions
as VHS.
Digital Sampling Rate – This is the frequency at which an analog signal
is sampled to create a digital signal.
Digital Signal – An electronic signal in which each value of the real-life
excitation (sound, light) is represented by a distinct combination of binary
digits (a word).
Digital Simultaneous Voice and Data (DSVD) – DSVD is a method for
combining digital voice and data packets for transmission over an analog
phone line.
Digital Storage Media (DSM) – a) A means of storage (usually magnetic
tape, disk or DVD) for audio, video or other information, that is in binary
form. b) A digital storage or transmission device or system.
Digital Storage Media, Command and Control (DSM-CC) – DSM-CC
is part 6 of the ISO/IEC 13818 MPEG-2 standard. It specifies open interfaces
and protocols for delivery of multimedia broadband services and is transport-layer independent.
Digital System – A system utilizing devices that can be in only one of two
possible states.
Digital Television Communications System (DITEC) – System developed by Comsat Corp. for satellite links.
Digital Transmission Content Protection (DTCP) – An encryption
method (also known as 5C) developed by Sony, Hitachi, Intel, Matsushita
and Toshiba for IEEE 1394 interfaces.
Digital Tuner, Digital Receiver – A digital tuner serves as the decoder
required to receive and display digital broadcasts. A digital tuner can
down-convert broadcasts for an analog TV or provide a digital signal to a
digital television. It can be built into a TV set or provided via a set-top box.
Digital TV Group – This is a UK forum of technology and service providers
created in August 1995 with the objective to speed up the introduction of
digital terrestrial TV in the UK. With its focus on implementation aspects,
the efforts of the group are seen as an extension of the work done in
DVB. Membership is open to those DVB members who wish to participate
actively in the introduction of digital terrestrial TV in the UK.
Digital Versatile Disk (DVD) – The modern proposals for DVD are the
result of two former optical disc formats, supporting the MMCD (Multimedia
CD) and the SD (Super Density) formats. The two groups agreed on a third
format. The DVD, initially, addressed only movie player applications, but
today’s DVD is positioned as a high-capacity multimedia storage medium.
The DVD consortium addresses topics such as video, ROM, audio-only, and
copy protection. The movie player remains the DVD’s prime application, but
the DVD is taking an increasingly large share of the CD-ROM market. The
promoters of the format agreed in December 1995 on a core set of specifications. The system operates at an average data rate of 4.69 Mbit/s and
features 4.7 GB data capacity, which allows MPEG-2 coding of movies, or
which may be utilized for a high-resolution music disc. For the PAL and
NTSC specifications of the DVD, different audio coding has been chosen to
obey market patterns. For the NTSC version, the Dolby AC-3 coding will be
mandatory, with MPEG audio as an option, whereas the opposite is true for
PAL and SECAM markets.
Digital Vertical Interval Timecode (DVITC) – DVITC digitizes the analog
VITC waveform to generate 8-bit values. This allows the VITC to be used
with digital video systems. For 525-line video systems, it is defined by
SMPTE 266M. BT.1366 defines how to transfer VITC and LTC as ancillary
data in digital component interfaces.
Digital Video (DV) – A video signal represented by computer-readable
binary numbers that describe colors and brightness levels.
Digital Video Broadcasting (DVB) – a) A system developed in Europe
for digital television transmission, originally for standard definition only,
though high-definition modes have now been added to the specification.
DVB defines a complete system for terrestrial, satellite, and cable transmission. Like the ATSC system, DVB uses MPEG-2 compression for video, but
it uses MPEG audio compression and COFDM modulation for terrestrial
transmission. b) At the end of 1991, the European Launching Group (ELG)
was formed to spearhead the development of digital TV in Europe. During
1993, a Memorandum of Understanding was drafted and signed by the
ELG participants, which now included manufacturers, regulatory bodies
and other interest groups. At the same time, the ELG became Digital Video
Broadcasting (DVB). The TV system provided by the DVB is based on
MPEG-2 audio and video coding, and DVB has added various elements not
included in the MPEG specification, such as modulation, scrambling and
information systems. The specifications from DVB are offered to either ETSI
or CENELEC for standardization, and to the ITU.
Digital Video Cassette (DVC) – a) Tape width is 1/4”, metal particle
formula. The source and reconstructed video sample rate is similar to that
of CCIR-601, but with additional chrominance subsampling (4:1:1 in the
case of 30 Hz and 4:2:0 in the case of 25 Hz mode). For 30 frames/sec,
the active source rate is 720 pixels/line x 480 lines/frame x 30
frames/sec x 1.5 samples/pixel average x 8 bits/sample = ~124
Mbit/sec. A JPEG-like still image compression algorithm (with macroblock
adaptive quantization) applied with a 5:1 reduction ratio (target bit rate of
25 Mbit/sec) averaged over a period of roughly 100 microseconds (100
microseconds is pretty small compared to MPEG’s typical 1/4 second
time average!) b) A digital tape recording format using approximately 5:1
compression to produce near-Betacam quality on a very small cassette.
Originated as a consumer product, but being used professionally as
exemplified by Panasonic’s variation, DVC-Pro.
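The active source rate quoted above can be reproduced with a few lines of arithmetic (a sketch using only the figures given in the entry):

```python
# Active source data rate for the 525/60 (30 frames/sec) DV mode,
# using the figures quoted in the entry above.
pixels_per_line = 720
lines_per_frame = 480
frames_per_sec = 30
samples_per_pixel = 1.5   # 4:1:1 sampling: Y every pixel, Cb and Cr each every 4th
bits_per_sample = 8

rate = (pixels_per_line * lines_per_frame * frames_per_sec
        * samples_per_pixel * bits_per_sample)
print(rate / 1e6, "Mbit/s")   # ~124.4, compressed roughly 5:1 to 25 Mbit/s
```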
Digital Video Cassette Recorder (Digital VCR) – Digital VCRs are
similar to analog VCRs in that tape is still used for storage. Instead of
recording an analog audio/video signal, digital VCRs record digital signals,
usually using compressed audio/video.
Digital Video Disc – See DVD.
Digital Video Express (DIVX) – A short-lived pay-per-viewing-period
variation of DVD.
Digital Video Interactive (DVI) – A multimedia system being marketed
by Intel. DVI is not just an image-compression scheme, but includes
everything that is necessary to implement a multimedia playback station,
including chips, boards, and software. DVI technology brings television to
the microcomputer. DVI’s concept is simple: information is digitized and
stored on a random-access device such as a hard disk or a CD-ROM, and
is accessed by a computer. DVI requires extensive compression and realtime decompression of images. Until recently this capability was missing.
DVI enables new applications. For example, a DVI CD-ROM disk on twentieth-century artists might consist of 20 minutes of motion video; 1,000
high-res still images, each with a minute of audio; and 50,000 pages of
text. DVI uses the YUV system, which is also used by the European PAL
color television system. The Y channel encodes luminance and the U and V
channels encode chrominance. For DVI, we subsample 4-to-1 both vertically and horizontally in U and V, so that each of these components requires
only 1/16 the information of the Y component. This provides a compression
from the 24-bit RGB space of the original to 9-bit YUV space. The DVI
concept originated in 1983 in the inventive environment of the David
Sarnoff Research Center in Princeton, New Jersey, then also known as
RCA Laboratories. The ongoing research and development of television
since the early days of the Laboratories was extending into the digital
domain, with work on digital tuners, and digital image processing algorithms that could be reduced to cost-effective hardware for mass-market
consumer television.
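The 24-bit-to-9-bit figure above follows directly from the 4-to-1 subsampling in each direction; a quick check:

```python
# DVI's 9-bit YUV budget: U and V are subsampled 4-to-1 both
# horizontally and vertically, so each carries 1/16 the samples of Y.
y_bits_per_pixel = 8
chroma_factor = 4                              # 4-to-1 in each direction
u_bits_per_pixel = 8 / (chroma_factor ** 2)    # 0.5 bit per pixel on average
v_bits_per_pixel = 8 / (chroma_factor ** 2)

total = y_bits_per_pixel + u_bits_per_pixel + v_bits_per_pixel
print(total)   # 9.0 bits per pixel, down from 24-bit RGB
```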
Digital Word – The number of bits treated as a single entity by the system.
Digital Workstation – The computer-based system used for editing and
manipulating digital audio, and synchronizing digital audio with video for
video post-production applications (e.g., Adobe Premiere).
Digital Zoom – A feature found on some camcorders that electronically
increases the lens zoom capability by selecting the center of the image
and enlarging it digitally.
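A minimal sketch of the idea — center crop, then enlargement by pixel repetition (real camcorders interpolate rather than simply repeat pixels):

```python
def digital_zoom(frame, factor):
    """Crop the center 1/factor of each dimension, then enlarge back
    to the original size by repeating pixels."""
    h, w = len(frame), len(frame[0])
    crop_h, crop_w = h // factor, w // factor
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    crop = [row[left:left + crop_w] for row in frame[top:top + crop_h]]
    return [[crop[y // factor][x // factor] for x in range(w)]
            for y in range(h)]

frame = [[y * 4 + x for x in range(4)] for y in range(4)]
zoomed = digital_zoom(frame, 2)   # the center 2x2 region fills the frame
```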
Digitally Record – To convert analog video and audio signals to digital form.
Digitization – The process of changing an electronic signal that is an
analogy (analog) of a physical process such as vision or hearing into a
discrete numerical form. Digitization is subdivided into the processes of
sampling the analog signal at a moment in time, quantizing the sample
(assigning it a numerical level), and coding the number in binary form.
The advantages of digitization include improved transmission; the disadvantages include a higher bit rate than the analog bandwidth. Bit rate
reduction schemes work to reduce that disadvantage.
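The three steps named above — sampling, quantizing, coding — can be sketched as follows (a toy illustration; `digitize` and its parameters are not from the original text):

```python
import math

def digitize(signal_fn, sample_rate, duration, bits):
    """Sample an analog signal, quantize each sample to 2**bits levels,
    and code each level as a binary word."""
    levels = 2 ** bits
    words = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate                        # sampling: a moment in time
        x = signal_fn(t)                           # analog value in [-1.0, +1.0]
        q = round((x + 1.0) / 2.0 * (levels - 1))  # quantizing: a numerical level
        words.append(format(q, f"0{bits}b"))       # coding: the level in binary
    return words

# a 1 kHz sine sampled at 8 kHz for 1 ms, with 8-bit quantization
words = digitize(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001, 8)
```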
Digitize – a) The process of turning an analog signal into digital data.
b) To convert an image from hard copy (a photo) into digital data for
display on a computer. c) To convert an analog signal into digital form
for storage on disk arrays and processing.
Digitizer – A system that converts an analog input to a digital format,
such as analog-to-digital converters (ADC), touch tablets and mice. The
last two, for example, take a spatial measurement and present it to a
computer as a digital representation.
Digitizing – The act of taking analog audio and/or video and converting it
to digital form. In 8 bit digital video there are 256 possible steps between
maximum white and minimum black.
Digitizing Time – Time taken to record footage into a disk-based editing
system, usually from a tape-based analog system, but also from newer
digital tape formats without direct digital connections.
DigiTrail – An enhancement of ADO effects by adding trails, smearing,
sparkles, etc.
DigiVision – A company with an early line-doubling ATV scheme.
DII (Download Information Indication) – Message that signals the
modules that are part of a DSM-CC object carousel.
Dimmer Switch – A control used to gradually increase and decrease the
electricity sent to a lighting fixture, thereby affecting the amount of light
given by the fixture.
DIN (Deutsches Institut fuer Normung) – A German association that
sets standards for the manufacture and performance of electrical and
electronic equipment, as well as other devices. DIN connectors carry both
audio and video signals and are common on equipment in Europe. (Also
referred to as Deutsche Industrie Normenausschuss.)
Digital Video Noise Reduction (DVNR) – Digitally removing noise from
video by comparing frames in sequence to spot temporal aberrations.
Dip – An adjustment to an audio track in which the volume gain level
decreases or “dips” to a lower level, rather than fading completely.
Digital Video Recording – “D1” Component, “D2” Composite.
DIP (Dual In-Line Package) – Standard IC package with two parallel
rows of pins.
Dipswitch – A block of small switches formed so that they fit into an IC
socket or into a PCB on standard IC spacing.
Direct Access Restriction – The ability to limit a user’s capability to gain
access to material not intended in the product structure. This is not
parental control, but it is useful for material such as games or training
material where such access would destroy the intent of the product. This
type of control is usually accomplished with pre and post commands in the
authoring process.
Direct Addressing – Standard addressing mode, characterized by the
ability to reach any point in main storage directly. The address is specified
as part of the instruction.
Direct Broadcast Satellite (DBS) – a) A distribution scheme involving
transmission of signals directly from satellites to homes. It does not carry
the burden of terrestrial broadcasting’s restricted bandwidth and regulations and so is thought by many to be an ideal mechanism for the introduction of high base bandwidth ATV. DBS is the most effective delivery
mechanism for reaching most rural areas; it is relatively poor in urban
areas and in mountainous terrain, particularly in the north. Depending on
frequency band used, it can be affected by factors such as rain. b) Multiple
television channel programming service that is transmitted direct from high
powered satellites, directly to a home receiving dish.
Direct Color – An SVGA mode for which each pixel color value is specified
directly by the contents of a bit field.
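As an illustration (using the common 5-6-5 layout, one of several direct-color bit-field arrangements), the fields can be packed and unpacked like this:

```python
# 16-bit direct color with 5-6-5 RGB bit fields: each pixel's color
# is specified directly by the contents of its bit fields.
def pack_rgb565(r, g, b):
    return (r << 11) | (g << 5) | b

def unpack_rgb565(pixel):
    return (pixel >> 11) & 0x1F, (pixel >> 5) & 0x3F, pixel & 0x1F

p = pack_rgb565(31, 0, 31)   # full red and blue fields: magenta
print(hex(p))                # 0xf81f
```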
Directional Microphone – One whose sensitivity to sound varies with
direction. Such microphones can be aimed so their most sensitive sides
face the sound source, while their least sensitive sides face sources of
noise or other undesired sound.
Directional Source – Light that emanates from a constant direction with
a constant intensity. This is called the infinite light source.
Directory – a) A container in the file system in which you store other
directories and files. b) A logical or physical portion of a hard disk drive
where the operating system stores files.
DirectShow – The application programming interface (API) for client-side
playback, transformation, and capture of a wide variety of data formats.
DirectShow is the successor to Microsoft Video for Windows® and
Microsoft ActiveMovie, significantly improving on these older technologies.
Direct-View – A CRT watched directly, as opposed to one projecting its
image on a screen.
Dirty List (Dirty EDL) – An edit decision list (EDL) containing overlapping
or redundant edits. Contrast with Clean List (Clean EDL).
DIS (Draft International Standard) – The last step before a fast-track
document is approved as an International Standard. Note: The fast-track
process is a different process than the normal development process. DIS
documents are balloted and approved at the TC-level.
Disable – Process of inhibiting a device function.
Direct Digital Interface – The interconnection of compatible pieces of
digital audio or video equipment without conversion of the signal to an analog form.
Disc Array – Multiple hard disks formatted to work together as if they
were part of a single hard drive. Disc arrays are typically used for high
data rate video storage.
Direct Draw Overlay – This is a feature that lets you see the video full
screen and full motion on your computer screen while editing. Most new
3D graphics cards support this. If yours does not, it simply means you will
need an external monitor to view the video. Direct Draw Overlay has
absolutely nothing to do with your final video quality.
Discrete – Having an individual identity. An individual circuit component.
Direct Memory Access (DMA) – Method of gaining direct access to main
storage in order to perform data transfers without involving the CPU.
Direct Recording – A type of analog recording which records and reproduces data in the electrical form of its source.
Direct Sound – The sound which reaches a mike or listener without
hitting or bouncing off any obstacles.
Direct to Disk – A method of recording directly to the cutting head of
the audio disk cutter, eliminating the magnetic recorder in the sequence,
typified by no tape hiss.
Direction Handle – A line extending from a control point that controls the
direction of a Bézier curve. Each control point has two direction handles.
These two handles together affect how the curve passes through the control point, with one handle controlling how the curve appears before the
control point, and the other handle controlling how the curve appears after
the control point.
Directional Antenna – An antenna that directs most of its signal strength
in a specific direction rather than at equal strength in all directions.
Discrete Cosine Transform (DCT) – a) Used in JPEG and the MPEG,
H.261, and H.263 video compression algorithms, DCT techniques allow
images to be represented in the frequency rather than time domain.
Images can be represented in the frequency domain using less information
than in the time domain. b) A mathematical transform that can be perfectly
undone and which is useful in image compression. c) Many encoders perform a DCT on an eight-by-eight block of image data as the first step in
the image compression process. The DCT converts the video data from the
time domain into the frequency domain. The DCT takes each block, which
is a 64-point discrete signal, and breaks it into 64 basis signals. The
output of the operation is a set of 64 basis-signal amplitudes, called DCT
coefficients. These coefficients are unique for each input signal. The DCT
provides a basis for compression because most of the coefficients for a
block will be zero (or close to zero) and do not need to be encoded.
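A direct (unoptimized) sketch of the forward 8x8 DCT described in c), in plain Python; real encoders use fast factorizations of the same transform:

```python
import math

def dct2_8x8(block):
    """Forward 2D DCT-II of an 8x8 block; returns the 64 DCT coefficients."""
    N = 8
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for v in range(N):
        for u in range(N):
            s = 0.0
            for y in range(N):
                for x in range(N):
                    s += (block[y][x]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[v][u] = c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]   # a flat (constant) block...
coeffs = dct2_8x8(flat)
# ...concentrates all its energy in the DC coefficient; the other
# 63 coefficients are ~0 and need not be encoded.
```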
Discrete Signals – The sampling of a continuous signal for which the
sample values are equidistant in time.
Discrete Surround Sound – Audio in which each channel is stored and
transmitted separate from and independent of other channels. Multiple
independent channels directed to loudspeakers in front of and behind
the listener allow precise control of the sound field in order to generate
localized sounds and simulate moving sound sources.
Discrete Time Oscillator (DTO) – Digital implementation of the voltage
controlled oscillator.
Dish – A parabolic antenna used to receive satellite transmissions at
home. The older “C band” dishes measure 7-12 feet in diameter, while the
newer “Ku band” dishes used to receive high-powered DBS services can be
as small as 18 inches in diameter.
Disk (Menus) – Recall and Store enable effects to be stored, renamed
and recalled on 3-1/2” disks in the disk drive provided with the system.
Disk Drive – The machine used to record and retrieve digital information
on disk.
Disk Resource – Any disk (hard, CD-ROM, or floppy) that you can access
either because it is physically attached to your workstation with a cable, or
it is available over the network.
Disk Use – The percentage of space on your disk that contains information.
Disk, Disc – a) An information/digital data storage medium. b) A flat
circular plate, coated with a magnetic material, on which data may be
stored by selective magnetization of portions of the surface. May be a
flexible, floppy disk or rigid hard disk. It could also be a plastic compact
disc (CD) or digital video disc (DVD).
Dispersion – Distribution of the oxide particles within the binder. A good
dispersion can be defined as one in which equal numbers of particles
would be found in equal, vanishingly small volumes sampled from different
points within the coating.
Displacement Mapping – The adding of a 3D effect to a 2D image.
Displacement of Porches – Refers to any difference between the level of
the front porch and the level of the back porch.
Display – a) The ultimate image presented to a viewer; the process of
presenting that image. b) CRT, LCD, LED or other photo luminescent panel
upon which numbers, characters, graphics or other data is presented.
Display Order – The order in which the decoded pictures are displayed.
Normally this is the same order in which they were presented at the input
of the encoder.
Display Rate – The number of times/sec the image in a video system is
refreshed. Progressive scan systems such as film or HDTV change the
image once per frame. Interlace scan systems such as standard TV change
the image twice per frame, with two fields in each frame. Film has a frame
rate of 24 fps but each frame is shown twice by the projector for a display
rate of 48 fps. NTSC TV has a rate of 29.97 fps, PAL 25 fps.
Display Signal Processing – An efficient, widely compatible system
requires that distribution be free of detailed requirements specific to the display, and that any additional display processing unique to a display class be conducted only at the display. The variety of display systems,
already numerous, continues to increase. Each system or variant has its
own set of specifications, performance characteristics, and requirements,
including electro-optic transfer function, color gamut, scanning sequence,
etc. Display signal processing might include transformation at the display
to the appropriate luminance range and chrominance, to display primaries
and reference white, matrixing to achieve metameric color match, adaptation to surround, plus conversion to scanning progressive or scanning
interlaced, etc. Display processing may not be required for transmission
if there is unique point-to-point routing clearly identified and appropriate
processing has been provided in distribution. But it is frequently required
for emission to a diffuse population of display systems.
Dissolve – a) A process whereby one video signal is gradually faded out
while a second image simultaneously replaces the original one. b) A video
or audio transition in which an image from one source gradually becomes
less distinct as an image from a second source replaces it. An audio
dissolve is also called a segue. See also Crossfade, Fade.
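The mix itself is a weighted sum of the two sources; a minimal sketch over one line of luminance samples:

```python
def dissolve(frame_a, frame_b, t):
    """Mix two frames: t=0.0 gives frame_a, t=1.0 gives frame_b."""
    return [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

outgoing = [200, 200, 200, 200]   # source being faded out
incoming = [0, 50, 100, 150]      # source replacing it
halfway = dissolve(outgoing, incoming, 0.5)
print(halfway)   # [100, 125, 150, 175]
```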
Distance Learning – Technologies that allow interactive remote site
classes or training by use of multipoint or point-to-point connections.
Distant Miking – Placing a mike far from a sound source so that a high
proportion of reflected sound is picked up.
Distant Signal – TV signals which originate at a point too far away to be
picked up by ordinary home reception equipment; also signals defined by
the FCC as outside a broadcaster’s license area. Cable systems are limited
by FCC rules in the number of distant signals they can offer subscribers.
Distortion – In video, distortion usually refers to changes in the luminance
or chrominance portions of a signal. It may contort the picture and
produce improper contrast, faulty luminance levels, twisted images,
erroneous colors and snow. In audio, distortion refers to any undesired
changes in the waveform of a signal caused by the introduction of spurious
elements. The most common audio distortions are harmonic distortion,
intermodulation distortion, crossover distortion, transient distortion and
phase distortion.
Distribution – a) The process of getting a television signal from point to
point; also the process of getting a television signal from the point at which
it was last processed to the viewer. See also Contribution. b) The delivery
of a completed program to distribution-nodes for emission/transmission
as an electrical waveform, or transportation as physical package, to the
intended audiences. Preparation for distribution is the last step of the
production cycle. Typical distribution-nodes include: release and duplicating
laboratories, satellite systems, theatrical exchanges, television networks
and groups, cable systems, tape and film libraries, advertising and program
agencies, educational systems, government services administration, etc.
Distribution Amplifier – Device used to multiply (fan-out) a video signal.
Typically, distribution amplifiers are used in duplication studios where many
tape copies must be generated from one source or in multiple display
setups where many monitors must carry the same picture, etc. May also
include cable equalization and/or delay.
Distribution Quality – The level of quality of a television signal from the
station to its viewers. Also known as Emission Quality.
DIT (Discontinuity Information Table)
DITEC – See Digital Television Communications System.
Dither – a) Typically a random, low-level signal (oscillation) which may be
added to an analog signal prior to sampling. Often consists of white
noise of one quantizing level peak-to-peak amplitude. b) The process of
representing a color by mixing dots of closely related colors.
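A sketch of why a) helps: a value sitting between two quantizing levels always rounds the same way, but with one quantizing level peak-to-peak of noise added first, its time average survives quantization (illustrative code, not from the original):

```python
import random

def quantize(x, step):
    return step * round(x / step)

def dithered_quantize(x, step, rng):
    # add one quantizing level peak-to-peak of noise before quantizing
    return quantize(x + rng.uniform(-step / 2, step / 2), step)

rng = random.Random(1)
x = 0.3   # sits between levels 0 and 1 (step = 1)
plain = [quantize(x, 1) for _ in range(10000)]
dithered = [dithered_quantize(x, 1, rng) for _ in range(10000)]
print(sum(plain) / len(plain))         # 0.0 -- the 0.3 is lost
print(sum(dithered) / len(dithered))   # ~0.3 -- preserved on average
```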
Dither Component Encoding – A slight expansion of the analog signal
levels so that the signal comes in contact with more quantizing levels.
The results are smoother transitions. This is done by adding white noise
(which is at the amplitude of one quantizing level) to the analog signal prior
to sampling.
Dither Pattern – The matrix of color or gray-scale values used to represent colors or gray shades in a display system with a limited color palette.
Dithering – Giving the illusion of new color and shades by combining
dots in various patterns. This is a common way of gaining gray scales and
is commonly used in newspapers. The effects of dithering would not be
optimal in the video produced during a videoconference.
DIVX – A commercial and non-commercial video codec that enables high
quality video at high compression rates.
DivX – A hacked version of Microsoft’s MPEG-4 codec.
DLT (Digital Linear Tape) – a) A high capacity data tape format.
b) A high-density tape storage medium (usually 10-20 gigabytes) used to
transport and input data to master a DVD. Media is designated as “Type III”
or “Type IV” for tapes used for DVD.
Dolby Digital – Formerly AC-3, a perceptual audio coding system based
upon transform coding techniques and psycho-acoustic principles.
Frequency-domain processing takes full advantage of noise masking by
confining quantization noise to narrow spectral regions where it will be
masked by the audio signal. Designed as an emissions (delivery) system,
Dolby Digital provides flexible coding of up to 5.1 audio channels at a
variety of data rates. In addition, Dolby Digital bit streams carry informational data about the associated audio.
DMA – See Direct Memory Access.
Dolby Laboratories – Founded in 1965, Dolby Laboratories is well known
for the technologies it has developed for improving audio sound reproduction, including their noise reduction systems (e.g., Dolby A, B, and C), Dolby
Digital (AC-3), Dolby Surround, and more. For more information, visit the
Dolby Laboratories website.
D-MAC – Originally, a MAC (Multiplexed Analog Component) with audio
and data frequency multiplexed after modulation, currently a term used in
Europe to describe a family of B-MAC-like signals, one of which is the
British choice for DBS. See also MAC.
Dolby Pro Logic – The technique (or the circuit which applies the technique) of extracting surround audio channels from a matrix-encoded audio
signal. Dolby Pro Logic is a decoding technique only, but is often mistakenly used to refer to Dolby Surround audio encoding.
DMD (Digital Micro-Mirror Device) – A new video projection technology
that uses chips with a large number of miniature mirrors, whose projection
angle can be controlled with digital precision.
Dolby Surround – A passive system that matrix encodes four channels of
audio into a standard two-channel format (Lt/Rt). When the signal is decoded using a Dolby Surround Pro Logic decoder, the left, center and right signals are recovered for playback over three front speakers and the surround
signal is distributed over the rear speakers.
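A much-simplified sketch of the 4-into-2 matrix (the real encoder also band-limits the surround channel and applies a 90-degree phase shift to it, both omitted here):

```python
import math

G = 1 / math.sqrt(2)   # -3 dB attenuation for the shared channels

def encode_lt_rt(l, c, r, s):
    """Matrix four channels into a two-channel Lt/Rt pair."""
    return l + G * c + G * s, r + G * c - G * s

def passive_decode(lt, rt):
    """Recover L, C, R, S: center from the sum, surround from the difference."""
    return lt, G * (lt + rt), rt, G * (lt - rt)

lt, rt = encode_lt_rt(0.0, 1.0, 0.0, 0.0)   # center-only signal
l, c, r, s = passive_decode(lt, rt)          # center comes back at unity...
# ...but also leaks into L and R, which is why Pro Logic adds steering.
```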
DMIF (Delivery Multimedia Integration Framework) – In November 1996, a work item on DMIF
(DSM-CC Multimedia Integration Framework) was accepted as part 6 of
the MPEG-4 ISO/IEC 14496 work activity. DMIF extends the concepts in
DSM-CC to symmetric conversational applications and the addition of
Internet as a core network. These extensions are required to satisfy the
needs of MPEG-4 applications.
DMK (Downstream Mixer-Keyer) – See DSK.
DM-M (Delayed Modulation Mark) – Also called Miller Code.
D-Mode – An edit decision list (EDL) in which all effects (dissolves, wipes,
graphic overlays) are performed at the end. See also A-Mode, B-Mode,
C-Mode, E-Mode, Source Mode.
DNG (Digital News Gathering) – Electronic News Gathering (ENG) using
digital equipment and/or transmission.
Dolby Surround Pro Logic (DSPL) – An active decoding process
designed to enhance the sound localization of Dolby Surround encoded
programs through the use of high-separation techniques. Dolby Surround
Pro Logic decoders continuously monitor the encoded audio program and
evaluate the inherent sound field dominance, applying enhancement in the
same direction and in proportion to that dominance.
Dolby™ – A compression/expansion (companding) noise reduction system
developed by Ray Dolby, widely used in consumer, professional and broadcast audio applications. Signal-to-noise ratio improvement is accomplished
by processing a signal before recording and reverse-processing the signal
upon playback.
DNL – Noise reduction system produced by Philips.
Dolly – a) A set of casters attached to the legs of a tripod to allow the tripod to roll. b) A forward/backward rolling movement of the camera on top
of the tripod dolly.
DNR (Dynamic Noise Reduction) – This filter reduces changes across
frames by eliminating dynamic noise without blurring. This helps MPEG
compression without damaging image quality.
Domain – a) The smallest known permanent magnet. b) Program Chains
(PGC) are classified into four types of domains, including First Play Domain,
Video Manager Menu Domain, VTS Menu Domain and Title Domain.
Document Window – A sub-window inside an application. The size is
user adjustable but limited by the size of its application window.
Dongle – A hardware device used as a key to control the use of licensed
software. The software can be installed on any system but will run only
on the system that has a dongle installed. The dongle connects to the
Apple Desktop Bus on Macintosh systems or to the parallel (printer) port
on PC systems.
Dolby AC-2 and AC-3 – These are compression algorithms from the
Dolby Laboratories. The AC-2 coding is an adaptive transform coding that
includes a filterbank based on time domain alias cancellation (TDAC). The
AC-3 is a dedicated multichannel coding, which like AC-2 uses adaptive
transform coding with a TDAC filterbank. In addition, AC-3 employs a
bit-allocation routine that distributes bits to channels and frequencies
depending on the signals, and this improves the coding efficiency compared
to AC-2. The AC-3 algorithm is adopted for the 5.1-channel audio surround
system in the American HDTV system.
Doppler Effect – An effect in which the pitch of a tone rises as its
source approaches a listener, and falls as the source moves away from
the listener.
Downscaling – The process of decimating or interpolating data from an
incoming video signal to decrease the size of the image before placing it
into memory.
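The two approaches named in the entry, decimation and interpolation (here, simple averaging), sketched over one line of samples:

```python
def decimate(line, factor):
    """Keep every Nth sample and drop the rest."""
    return line[::factor]

def average_down(line, factor):
    """Average each group of N samples (a crude interpolating filter)."""
    return [sum(line[i:i + factor]) // factor
            for i in range(0, len(line) - factor + 1, factor)]

line = [10, 20, 30, 40, 50, 60, 70, 80]
print(decimate(line, 2))       # [10, 30, 50, 70]
print(average_down(line, 2))   # [15, 35, 55, 75]
```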
DOS (Disk Operating System) – a) A single-user operating system from
Microsoft for the PC. It was the first operating system for the PC and is the
underlying control program for Windows 3.1, 95, 98 and ME. Windows NT,
2000 and XP emulate DOS in order to support existing DOS applications.
b) A software package that makes a computer work with its hardware
devices such as hard drive, floppy drive, screen, keyboard, etc.
Downstream – A term describing the precedence of an effect or key. The
“stream” of video through a switcher allows multiple layers of effects to be
accomplished, with each successive layer appearing on top of the previous
one. The most downstream effect is that video which appears as the topmost layer.
Dot Matrix – Method of forming characters by using many small dots.
Downstream Keyer – The last keyer on the switcher. A key on the DSK
will appear in front of all other video. Ampex DSKs are actually DMKs; that
is, they also allow mixes and fades with the switcher output.
Dot Pitch – a) The density measurement of screen pixels specified in pixels/mm. The denser the pixel count, the better the screen resolution. b) The distance between phosphor dots in a tri-color, direct-view CRT. It can be the ultimate determinant of resolution.
Downstream Keyer (DSK) – A term used for a keyer that inserts the key
“downstream” (last layer of video within switcher) of the effects system
video output. This enables the key to remain on-air while the backgrounds
and effects keys are changed behind it.
Double Buffering – As the name implies, two buffers are used; for video, this means two frame buffers. While buffer 1 is being read, buffer 2 is being written to; when finished, buffer 2 is read out while buffer 1 is being written to.
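The read/write alternation can be sketched minimally as follows; `DoubleBuffer` and its methods are hypothetical names for illustration:

```python
class DoubleBuffer:
    """Minimal sketch of double buffering: one buffer is read
    (displayed) while the other is written; swap() exchanges
    the two roles each frame."""
    def __init__(self, size):
        self.front = [0] * size   # being read / displayed
        self.back = [0] * size    # being written (rendered into)

    def swap(self):
        self.front, self.back = self.back, self.front

db = DoubleBuffer(4)
db.back[0] = 255      # render into the back buffer
db.swap()             # present the finished frame
print(db.front[0])    # 255
```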
DPCM – See Differential Pulse Code Modulation.
Dot Crawl – See Chroma Crawl.
Double Precision Arithmetic – Uses two words to represent each number.
Double System – Any film system in which picture and sound are recorded on separate media. A double system requires the resyncing of picture
and sound during post-production.
Double-Click – To hold the mouse still, then press and release a mouse button twice, very rapidly. When you double-click an icon it opens into a window; when you double-click the Window menu button the window closes.
D-Pictures – Pictures for which only DC coefficients are transmitted.
D-pictures are not part of MPEG-2 but only of MPEG-1. MPEG-2 decoders
must be able to decode D-pictures.
Drag – To press and hold down a mouse button, then move the mouse.
This drags the cursor to move icons, to highlight menu items, or to perform
other functions.
DRAM (Dynamic Random Access Memory) – An integrated circuit
device that stores data bits as charges in thousands of tiny capacitors.
Since the capacitors are very small, DRAM must be constantly refreshed to
restore charges in appropriate cells. DRAM is used for short-term memory
such as frame and screen memory and memory which contains operating
programs which are loaded from ROM or disk.
Double-Strand Editing – See A/B Roll.
DRC (Dynamic Range Control) – A feature of Dolby Digital that allows
the end user to retain or modify the dynamic range of a Dolby Digital
Encoded program upon playback. The amount of control is dictated by
encoder parameter settings and decoder user options.
Doubling – To overdub the same part that has previously been recorded,
with the object of making the part appear to have been performed by
several instruments playing simultaneously.
Drift – Gradual shift or change in the output over a period of time due to
change or aging of circuit components. Change is often caused by thermal
instability of components.
Down Converter – This device accepts modulated high frequency television signals and down converts the signal to an intermediate frequency.
Drive – A hardware device that lets you access information on various forms of media, such as hard, floppy, and CD-ROM disks, and magnetic tapes.
Double-Perf Film – Film stock with perforations along both edges of the film.
Down Link – a) The frequency satellites use to transmit data to earth
stations. b) Hardware used to transmit data to earth stations.
Download – The process of having an effect moved from disk storage into
the ADO control panel.
Downloadability – Ability of a decoder to load data or necessary decoding tools via Internet or ATM.
Downmix – A process wherein multiple channels are summed to a lesser number of channels. In the audio portion of a DVD there can be as many as 8 channels of audio in any single stream, and all DVD players are required to produce a stereo version of the channels provided on the disc. This capability is provided as legacy support for older audio systems.
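The summing can be sketched as follows. The -3 dB (1/√2) center and surround coefficients are common illustrative values, not taken from this glossary (in Dolby Digital the actual coefficients are signaled by encoder metadata), and `stereo_downmix` is a hypothetical helper:

```python
import math

def stereo_downmix(l, r, c, ls, rs,
                   clev=1 / math.sqrt(2), slev=1 / math.sqrt(2)):
    """Sum a 5-channel set (L, R, C, Ls, Rs) down to stereo using
    the common -3 dB center/surround coefficients. Illustrative
    sketch; real downmix coefficients come from stream metadata."""
    lo = l + clev * c + slev * ls
    ro = r + clev * c + slev * rs
    return lo, ro

# Center-channel dialogue appears equally in both output channels:
lo, ro = stereo_downmix(1.0, 0.0, 1.0, 0.0, 0.0)
```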
Drive Address – See SCSI Address.
Drive Pulse – A term commonly used to describe a set of signals needed
by source equipment such as a camera. This signal set may be composed
of any of the following: sync, blanking, subcarrier, horizontal drive, vertical
drive, and burst flag. Also called pulse drive.
Driving Signals – Signals that time the scanning at the pickup device.
Drop Field Scrambling – This method is identical to the sync suppression technique for scrambling analog TV channels, except there is no
suppression of the horizontal blanking intervals. Sync pulse suppression
only takes place during the vertical blanking interval. The descrambling
pulses still go out for the horizontal blanking intervals (to fool unauthorized
descrambling devices). If a descrambling device is triggering on descrambling pulses only, and does not know that the scrambler is using the drop
field scrambling technique, it will try to reinsert the horizontal intervals
(which were never suppressed). This is known as double reinsertion, which
causes compression of the active video signal. An unauthorized descrambling device creates a washed-out picture and loss of neutral sync during
drop field scrambling.
Drop Frame – a) System of modifying the frame counting sequence
(dropping two frames every minute except on every tenth minute) to allow
time code to match a real-time clock. b) The timecode adjustment made to handle the 29.97 frames-per-second rate of color video by dropping certain agreed-upon frame numbers to compensate for the 0.03 fps discrepancy.
Drop-frame timecode is critical in broadcast applications. Contrast with
Non-Drop Frame.
Drop Frame Time Code – a) SMPTE time code format that skips (drops)
two frames per minute except on the tenth minute, so that the time code
stays coincident with real time. b) The television broadcast standard for
time code. c) The NTSC color coding system uses a 525/60 line/field format, but it actually runs at 59.94 fields per second, or 29.97 frames per second (a difference of 1:1000). Time code identifies 30 frames per
second, whereas drop frame time code compensates by dropping two
frames in every minute except the tenth. Note that the 625/50 PAL system
is exact and does not require drop frame.
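The counting rule above can be sketched with the widely published frame-count-to-timecode conversion; `frames_to_dropframe` is an illustrative name, not a standard API:

```python
def frames_to_dropframe(fc):
    """Convert a 29.97 fps frame count to a SMPTE drop-frame timecode
    string. Two frame NUMBERS (not actual frames) are skipped each
    minute, except every tenth minute. Standard published algorithm."""
    drop = 2
    per_min = 30 * 60 - drop         # 1798 frames in a "dropped" minute
    per_10min = per_min * 10 + drop  # 17982 frames in ten minutes
    tens, rem = divmod(fc, per_10min)
    if rem > drop:
        fc += drop * 9 * tens + drop * ((rem - drop) // per_min)
    else:
        fc += drop * 9 * tens
    f = fc % 30
    s = (fc // 30) % 60
    m = (fc // 1800) % 60
    h = fc // 108000
    return f"{h:02d}:{m:02d}:{s:02d};{f:02d}"

# The frame after 00:00:59;29 is labelled 00:01:00;02:
print(frames_to_dropframe(1800))  # 00:01:00;02
```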
Drop Outs – Small bits of missing picture information usually caused by physical imperfections in the surface of the videotape.
Drop Shadow – a) A type of key border where a key is made to look three
dimensional and as if it were illuminated by a light coming from the upper
left by creating a border to the right and bottom. b) A key border mode
which places a black, white or gray border to the right and below the title
key insert, giving a shadow effect.
Drop-Down List Box – Displays a list of possible options only when the
list box is selected.
Dropout – a) A momentary partial or complete loss of picture and/or
sound caused by such things as dust, dirt on the videotape or heads,
crumpled videotape or flaws in the oxide layer of magnetic tape.
Uncompensated dropout produces white or black streaks in the picture.
b) Drop in the playback radio frequency level, resulting from an absence of
oxide on a portion of the videotape, causing no audio or video information
to be stored there. Dropout usually appears as a quick streak in the video.
Dropout Compensator – Technology that replaces dropped video with the
video from the previous image’s scan line. High-end time base correctors
usually included a dropout compensator.
Dropout Count – The number of dropouts detected in a given length of
magnetic tape.
Dropped Frames – Missing frames lost during the process of digitizing or
capturing video. Dropped frames can be caused by a hard drive incapable
of the necessary data transfer rate.
Dry Signal – A signal without any added effects, especially without reverb.
DS (Dansk Standard) – Danish standards body.
DS0 (Digital Service Level 0) – 64 kbps.
DS1 (Digital Service Level 1) – A telephone company format for transmitting information digitally. DS1 has a capacity of 24 voice circuits at a
transmission speed of 1.544 megabits per second.
DS3 (Digital Service Level 3) – One of a hierarchy of North American
data transmission rates associated with ISDN and B-ISDN, 44.736 Mbps.
The terrestrial and satellite format for transmitting information digitally.
DS3 has a capacity of 672 voice circuits at a transmission speed of
44.736 Mbps (commonly referred to as 45 Mbps). DS3 is used for digital
television distribution using mezzanine level compression – typically
MPEG-2 in nature, decompressed at the local station to full bandwidth
signals (such as HDTV) and then re-compressed to the ATSC’s 19.39 Mbps
transmission standard.
DSI (Download Server Initiate)
DSK (Downstream Keying) – An effect available in some special effects
generators and video mixers in which one video signal is keyed on top of
another video signal. The lightest portions of the DSK signal replace the
source video leaving the dark areas showing the original video image.
Optionally, the DSK signal can be inverted so the dark portions are keyed
rather than the lightest portions allowing a solid color to be added to the
keyed portions. The DSK input is most commonly a video camera or character generator. The DSK signal must be genlocked to the other signals.
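The keying behavior described above can be sketched per pixel; `luma_key`, the 8-bit sample values, and the threshold are illustrative assumptions, not a real switcher API:

```python
def luma_key(dsk, background, threshold=128, invert=False):
    """Per-pixel luma key sketch: where the DSK source is brighter
    than `threshold` (or darker, if inverted) it replaces the
    background. Values are 8-bit luma samples."""
    out = []
    for k, bg in zip(dsk, background):
        keyed = (k < threshold) if invert else (k > threshold)
        out.append(k if keyed else bg)
    return out

# Bright DSK pixels replace the background; dark ones show through:
print(luma_key([200, 10, 255], [50, 60, 70]))  # [200, 60, 255]
```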
DSK Monitor – A video output showing program video with the DSK key
over full time.
DSM – See Digital Storage Media.
DSM-CC (Digital Storage Media-Command and Control) – A syntax
defined in the Mpeg-2 Standard, Part 6.
DSM-CC IS U-N (DSM-CC International Standard User-to-Network)
DSM-CC U-N (DSM-CC User-to-Network)
DSM-CC-U-U (DSM-CC User-to-User)
DSNG (Digital Satellite News Gathering) – The use of mobile communications equipment for the purpose of worldwide newscasting. Mobile
units are usually vans equipped with advanced, two-way audio and video
transmitters and receivers, using dish antennas that can be aimed at
geostationary satellites.
DSP (Digital Signal Processing) – a) A DSP segments the voice signal
into frames and stores them in voice packets. It usually refers to the
electronic circuit section of a device capable of processing digital signals.
b) When applied to video cameras, DSP means that the analog signal from
the CCD sensors is converted to a digital signal. It is then processed for
signal separation, bandwidth settings and signal adjustments. After processing, the video signal either remains in the digital domain for recording
by a digital VTR or is converted back into an analog signal for recording
or transmission. DSP is also being used in other parts of the video chain,
including VTRs, and switching and routing devices.
DSRC (David Sarnoff Research Center) – Formerly RCA Laboratories
(now part of SRI International), home of the ACTV research.
DSS (Direct Satellite System) – An alternative to cable and analog
satellite reception initially utilizing a fixed 18-inch dish focused on one or
more geostationary satellites. DSS units are able to receive multiple channels of multiplexed video and audio signals as well as programming
information, email, and related data. DSS typically used MPEG-2 video
and audio encoding.
DSSB (Dual Single Sideband) – A modulation technique that might be
applied to two of the components of ACTV.
DTV (Digital Television) – a) A term used for all types of digital television
including High Definition Television and Standard Definition Television.
b) Another acronym for the new digital television standards. c) The technology enabling the terrestrial transmission of television programs as data.
DTG (Digital Terrestrial Group) – Over 80 companies that are working
together for the implementation of digital television around the world, but
most importantly in the UK.
DTV Team – Originally Compaq, Microsoft and Intel, later joined by Lucent
Technology. The DTV Team promotes the computer industry’s views on digital television, namely, that DTV should not have interlace scanning formats
but progressive scanning formats only. (Intel, however, now supports all the
ATSC Table 3 formats, including those that are interlace, such as 1080i.)
DTM (Digital Transmodulation)
DTVB (Digital Television Broadcasting)
DTMF (Dual Tone Multi-Frequency) – The type of audio signals that are
generated when you press the buttons on a touch-tone telephone.
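The dual tones can be sketched as the sum of two sines. The row/column frequency pairs below are the standard DTMF assignments; `dtmf_samples` and the 8 kHz sample rate are assumptions for the example:

```python
import math

# Standard DTMF (row, column) frequency pairs in Hz for a few keys:
DTMF = {'1': (697, 1209), '5': (770, 1336), '9': (852, 1477), '0': (941, 1336)}

def dtmf_samples(key, n, rate=8000):
    """Generate n samples of the two summed sine tones for a key --
    a minimal sketch of how touch-tone signals are formed."""
    lo, hi = DTMF[key]
    return [math.sin(2 * math.pi * lo * t / rate) +
            math.sin(2 * math.pi * hi * t / rate) for t in range(n)]

tone = dtmf_samples('5', 160)  # 20 ms of the "5" tone at 8 kHz
```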
DTVC (Digital Television by Cable)
DTE – See Data Terminal Equipment.
D-to-A Converter (Digital to Analog Converter) – A device that converts digital signals to analog signals.
DTS (Decoding Time Stamp) – Part of PES header indicating when an
access unit is to be decoded.
DTS (Digital Theater Sound) – A perceptual audio-coding system developed for theaters. A competitor to Dolby Digital and an optional audio track
format for DVD-Video and DVD-Audio.
DTS (Digital Theater Systems) – It is a multi-channel surround sound
format, similar to Dolby Digital. For DVDs that use DTS audio, the DVD –
Video specification still requires that PCM or Dolby Digital audio still be
present. In this situation, only two channels of Dolby Digital audio may be
present (due to bandwidth limitations).
DTS-ES – A version of DTS decoding that is compatible with 6.1-channel Dolby Surround EX. DTS-ES Discrete is a variation of DTS encoding and decoding that carries a discrete rear center channel instead of a matrixed one.
DTT (Digital Terrestrial Television) – The term used in Europe to describe the broadcast of digital television services using terrestrial transmitters.
DTTV (Digital Terrestrial Television) – DTTV (sometimes also abbreviated DTT) is digital television (DTV) broadcast
entirely over earthbound circuits. A satellite is not used for any part of the
link between the broadcaster and the end user. DTTV signals are broadcast
over essentially the same media as the older analog terrestrial TV signals.
The most common circuits use coaxial cable at the subscriber end to
connect the network to the TV receiver. Fiber optic and/or microwave links
may be used between the studio and the broadcast station, or between the
broadcast station and local community networks. DTTV provides a clearer
picture and superior sound quality when compared to analog TV, with less
interference. DTTV offers far more channels, thus providing the viewer
with a greater variety of programs to choose from. DTTV can be viewed on
personal computers. Using a split-screen format, a computer user can surf
the Web while watching TV.
DTTV-SA (Digital Terrestrial Television – System Aspects)
Dual Capstan – Refers to a transport system in which a capstan and pinchroller are used on both sides of the recording and playback head.
Dual Channel Audio – A mode, where two audio channels are encoded
within one bit stream. They may be played simultaneously (stereo) or
independently (two languages).
Dub – a) A duplicate copy made from one recording medium to another. b) To record or mix pre-recorded audio or video from one or more sources to another source to create a single recording. See also Bump-Up.
Dubbing – a) In videotape production, the process of copying video or
audio from one tape to another. b) In film production, the process of
replacing dialog on a sound track. See also ADR, Foley.
Dubmaster – A second-generation copy of a program master used for
making additional preview or distribution copies, thereby protecting the
master from overuse.
Dubs – Copies of videotape.
Dupe – To duplicate. A section of film or video source footage that has
been repeated (duplicated) one or more times in an edited program.
Dupe List – A sublist of duplicated clips of film requiring additional prints
or copies of negative for film finishing. See also Cut List.
Dupe Reel – A reel designated for the recording and playback of dupes
(duplicate shots) during videotape editing.
Duplex – A communication system that carries information in both directions is called a duplex system. In CCTV, duplex is often used to describe
the type of multiplexer that can perform two functions simultaneously,
recording in multiplex mode and playback in multiplex mode. It can also
refer to duplex communication between a matrix switcher and a PTZ site
driver, for example.
Duplication – The reproduction of media. Generally refers to producing
discs in small quantities, as opposed to large-scale replication.
Durability – Usually expressed as the number of passes that can be made before significant degradation of output occurs, divided by the corresponding number that can be made using a reference tape.
Duration – Length of time (in hours, minutes, seconds and frames) that a
particular effect or section of audio or video material lasts.
DV (Digital Video) – This digital VCR format is a cooperation between
Hitachi, JVC, Sony, Matsushita, Mitsubishi, Philips, Sanyo, Sharp, Thomson
and Toshiba. It uses 6.35 mm (0.25-inch) wide tape in a range of products
to record 525/60 or 625/50 video for the consumer (DV) and professional
markets (Panasonic’s DVCPRO, Sony’s DVCAM and Digital-8). All models
use digital intra-field DCT-based “DV” compression (about 5:1) to record
8-bit component digital video based on 13.5 MHz luminance sampling.
dv_export – An export mode in Adobe Premiere that enables digital video
to be exported through a capture card.
DV25 – The most common form of DV compression. DV25 uses a fixed
data rate of 25 megabits per second.
DVB (Digital Video Broadcasting) – Broadcasting TV signals that comply
with a digital standard.
DVB-C (Digital Video Broadcasting – Cable) – Broadcasting TV signals
that comply with a digital standard by cable (ETS 300 429).
DVB-RCC – Interaction channel for cable TV distribution system (CATV)
(ETS 300 800).
DVB-RCCL (Return Channel for Cable and LMDS Digital Television
Platform) – An older cable standard that used to compete with DOCSIS.
DVB-RCCS – Interaction channel for satellite master antenna TV (SMATV)
distribution systems. Guidelines for versions based on satellite and coaxial
sections (TR 101 201).
DVB-RCDECT – Interaction channel through the digital enhanced cordless
telecommunications (DECT) (EN 301 193).
DVB-RCL – Interaction channel for local multi-point distribution system
(LMDS) distribution systems (EN 301 199)
DVB-RCS (Return Channel for Satellite Digital Television Platform) –
DVB-RCS is a satellite standard.
DVB-CA – Support for use of scrambling and conditional access (CA)
within digital broadcasting systems (ETR 289).
DVB-RCT (Return Channel for Terrestrial Digital Television
Platform) – Interaction channel through public switched telecommunications network (PSTN)/integrated services digital networks (ISDN)
(ETS 300 801).
DVB-CI – Common interface specification for conditional access and other
digital video broadcasting decoder applications (EN 50221).
DVB-S (Digital Video Broadcasting – Satellite) – For broadcasting TV
signals to a digital standard by satellite (ETS 300 421).
DVB-Cook – A guideline for the use of DVB specifications and standards
(TR 101 200).
DVB-SDH – Interfaces to synchronous digital hierarchy (SDH) networks
(ETS 300 814).
DVB-CS – Digital video broadcasting baseline system for SMATV distribution systems (ETS 300 473).
DVB-SFN – Mega-frame for single frequency network (SFN) synchronization (TS 101 191).
DVB-Data – Specification for Data Broadcasting (EN 301 192).
DVB-SI (Digital Video Broadcasting – Service Information) –
a) Information carried in a DVB multiplex describing the contents of
different multiplexes. Includes NIT, SDT, EIT, TDT, BAT, RST, and ST.
b) The DVB-SI adds the information that enables DVB-IRDs to automatically
tune to particular services and allows services to be grouped into
categories with relevant schedule information (ETS 300 468).
DVB-DSNG – Digital satellite news gathering (DSNG) specification
(EN 301 210).
DVB-IRD (Digital Video Broadcasting Integrated Receiver Decoder) –
A receiving decoder that can automatically configure itself using the
MPEG-2 Program Specific Information (PSI).
DVB-IRDI – Interface for DVB-IRDs (EN 50201).
DVB-M – Measurement guidelines for DVB systems (ETR 290).
DVB-MC – Digital video broadcasting baseline system for multi-point video
distribution systems below 10 GHz (EN 300 749).
DVB-MPEG – Implementation guidelines for the use of MPEG-2 systems,
video and audio in satellite, cable and terrestrial broadcasting applications
(ETR 154).
DVB-SIM – DVB SimulCrypt. Part 1: headend architecture and synchronization (TS 101 197).
DVB-SMATV – DVB satellite master antenna television (SMATV) distribution systems (EN 300 473).
DVB-SUB – DVB subtitling systems (ETS 300 743).
DVB-T (Digital Video Broadcasting – Terrestrial) – Terrestrial broadcasting of TV signals to a digital standard (ETS 300 744).
DVB-MS – Digital video broadcasting baseline system for multi-point video
distribution systems at 10 MHz and above (EN 300 748).
DVB-TXT – Specification for conveying ITU-R system B teletext in DVB
bitstreams (ETS 300 472).
DVB-NIP – Network-independent protocols for DVB interactive services
(ETS 300 802).
DVC – See Digital Video Cassette.
DVB-PDH – DVB interfaces to plesiochronous digital hierarchy (PDH)
networks (ETS 300 813).
DVB-PI – DVB-PI (EN 50083-9) describes the electrical, mechanical and
some protocol specification for the interface (cable/wiring) between two
devices. DVB-PI includes interfaces for CATV/SMATV headends and similar
professional equipment. Common interface types such as LVDS/SPI, ASI
and SSI are addressed.
DVCAM – Sony’s development of native DV which records a 15 micron (15 × 10⁻⁶ m, fifteen thousandths of a millimeter) track on a metal evaporated
(ME) tape. DVCAM uses DV compression of a 4:1:1 signal for 525/60
(NTSC) sources and 4:2:0 for 625/50 (PAL). Audio is recorded in one of
two forms – four 12-bit channels sampled at 32 kHz or two 16-bit channels sampled at 48 kHz.
DVCPRO P – This variant of DV uses a video data rate of 50 Mbps – double that of other DV systems – to produce 480 progressive frames. Sampling is 4:2:0.
DVCPRO50 – This variant of DV uses a video data rate of 50 Mbps –
double that of other DV systems – and is aimed at the higher quality end of
the market. Sampling is 4:2:2 to give enhanced chroma resolution, useful
in post-production processes (such as chroma-keying). Four 16-bit audio
tracks are provided. The format is similar to Digital-S (D9).
DVCPROHD – This variant of DV uses a video data rate of 100 Mbps –
four times that of other DV systems – and is aimed at the high definition
EFP end of the market. Eight audio channels are supported. The format is
similar to D9 HD.
DVCR – See Digital Video Cassette Recorder.
DVD (Digital Video Disc) – A new format for putting full-length movies on
a 5” CD using MPEG-2 compression for “much better than VHS” quality.
Also known as Digital Versatile Disc.
DVD Forum – An international association of hardware and media manufacturers, software firms and other users of digital versatile discs, created
for the purpose of exchanging and disseminating ideas and information
about the DVD Format.
DVD Multi – DVD Multi is a logo program that promotes compatibility with
DVD-RAM and DVD-RW. It is not a drive, but defines a testing methodology
which, when passed, ensures the drive product can in fact read RAM and RW. It puts the emphasis for compatibility on the reader, not the writer.
DVD+RW (DVD Rewritable) – Developed in cooperation by Hewlett-Packard, Mitsubishi Chemical, Philips, Ricoh, Sony and Yamaha, it is a rewritable format that provides full, non-cartridge, compatibility with existing DVD-Video players and DVD-ROM drives for both real-time video recording and random data recording across PC and entertainment platforms.
DVD-10 – A DVD format in which 9.4 gigabytes of data can be stored on two sides of a disc in one layer each.
DVD-18 – A DVD format in which 17.0 gigabytes of data are stored on two
sides of the disc in two layers each.
DVD-5 – A DVD format in which 4.7 gigabytes of data can be stored on
one side of a disc in one layer.
DVD-9 – A DVD format in which 8.5 gigabytes of data can be stored on
one side of a two-layer disc.
DVDA (DVD Association) – A non-profit industry trade association representing DVD authors, producers, and vendors throughout the world.
DVD-A (DVD Audio) – DVDs that contain linear PCM audio data in any
combination of 44.1, 48.0, 88.2, 96.0, 176.4, or 192 kHz sample rates,
16, 20, or 24 bits per sample, and 1 to 6 channels, subject to a maximum
bit rate of 9.6 Mbps. With a 176.4 or 192 kHz sample rate, only two channels are allowed. Meridian Lossless Packing (MLP) is a lossless compression method that has an approximate 2:1 compression ratio. The use of
MLP is optional, but the decoding capability is mandatory on all DVD-Audio
players. Dolby Digital compressed audio is required for any video portion of
a DVD-Audio disc.
DVD-Interactive – DVD-Interactive is intended to provide additional capability for users to do interactive operation with content on DVDs or at Web
sites on the Internet. It will probably be based on one of three technologies: MPEG-4, Java/HTML, or software from InterActual.
DVD-on-CD – A DVD image stored on a one-sided 650 megabyte CD.
DVD-R (DVD Recordable) – a) A DVD format in which 3.95 gigabytes of
data are stored on a one-sided write-once disc. b) The authoring use drive
(635nm laser) was introduced in 1998 by Pioneer, and the general use format (650nm laser) was authorized by DVD Forum in 2000. DVD-R offers a
write-once, read-many storage format akin to CD-R and is used to master
DVD-Video and DVD-ROM discs, as well as for data archival and storage.
DVD-RAM (DVD Random Access Memory) – A rewritable DVD disc
endorsed by Panasonic, Hitachi and Toshiba. It is a cartridge-based, and
more recently, bare disc technology for data recording and playback. The
first DVD-RAM drives were introduced in Spring 1998 and had a capacity
of 2.6GB (single-sided) or 5.2GB (double sided). DVD-RAM Version 2 discs
with 4.38GB arrived in late 1999, and double-sided 9.4GB discs in 2000.
DVD-RAM drives typically read DVD-Video, DVD-ROM and CD media. The
current installed base of DVD-ROM drives and DVD-Video players cannot
read DVD-RAM media.
DVD-ROM (DVD Read Only Memory) – a) DVD disks for computers.
Expected to eventually replace the conventional CD-ROM. The initial version
stores 4.7 GB on one disk. DVD-ROM drives for computers will play
DVD movie disks. b) The base format of DVD. ROM stands for read-only
memory, referring to the fact that standard DVD-ROM and DVD-Video
discs can't be recorded on. A DVD-ROM can store essentially any form
of digital data.
DVD-RW (DVD Rewritable) – A rewritable DVD format, introduced by
Pioneer, that is similar to DVD+RW. It has a read-write capacity of 4.38 GB.
DVD-V (DVD Video) – a) Information stored on a DVD-Video disc can represent one or two hours of video programming using MPEG compressed video bit streams for presentation. Also, because of navigation features,
the programming can be played randomly or by interactive selection.
b) DVDs that contain about two hours of digital audio, video, and data.
The video is compressed and stored using MPEG-2 MP@ML. A variable bit
rate is used, with an average of about 4 Mbps (video only), and a peak of
10 Mbps (audio and video). The audio is either linear PCM or Dolby Digital
compressed audio. DTS compressed audio may also be used as an option.
Linear PCM audio can be sampled at 48 or 96 kHz, 16, 20, or 24 bits per
sample, and 1 to 8 channels. The maximum bitrate is 6.144 Mbps, which
limits sample rates and bit sizes in some cases. c) A standard for storing
and reproducing audio and video on DVD-ROM discs, based on MPEG
video, Dolby Digital and MPEG audio, and other proprietary data formats.
DVE Move – Making a picture shrink, expand, tumble, or move across the screen.
DVE Wipe – A wipe effect in which the incoming clip appears in the form
of a DVE similar to those you create with the DVE tool.
DVE™ (Digital Video Effects) – a) These effects are found in special
effects generators which employ digital signal processing to create two or
three dimensional wipe effects. DVE generators are getting less expensive
and the kind of effects they create getting more popular. The Digital Video
Mixer includes such effects. b) A “black box” which digitally manipulates
the video to create special effects, for example, the ADO (Ampex Digital
Optics) system. Common DVE effects include inverting the picture, shrinking it, moving it around within the frame of another picture, spinning it, and
a great many more.
D-VHS (Digital – Video Home System) – Digital video recording based on conventional VHS recording technology. It can record broadcast (and typically compressed) digital data, making it compatible with computers and digital televisions, while remaining compatible with existing analog VHS technology.
DVI – See Digital Video Interactive.
DV-Mini (Mini Digital Video) – A new format for audio and video recording
on small camcorders, adopted by the majority of camcorder manufacturers.
Video and sound are recorded in a digital format on a small cassette (66 × 48 × 12 mm), superseding S-VHS and Hi8 quality.
DVS (Descriptive Video Services) – Descriptive narration of video for
blind or sight-impaired viewers.
DVTR (Digital Video Tape Recorder)
Dye Polymer – The chemical used in DVD-R and CD-R media that darkens
when heated by a high-power laser.
Dye Sublimation – Optical disc recording technology that uses a
high-powered laser to burn readable marks into a layer of organic dye.
Other recording formats include magneto-optical and phase-change.
Dynamic Gain Change – This distortion is present when picture or sync
pulse luminance amplitude is affected by APL changes. This is different from APL-induced transient gain distortions, which occur only at the APL change transition; rather, this distortion refers to gain changes that occur after the APL has changed. The amount of distortion is usually
expressed as a percent of the amplitude at 50% APL, although sometimes
the overall variation in IRE units is quoted. This is an out of service test.
This distortion causes picture brightness to seem incorrect or inconsistent
as the scene changes.
Dynamic Gain Distortion – One of several distortions (long-time waveform distortions is another) that may be introduced when, at the sending
end of a television facility, the average picture level (APL) of a video signal is stepped from a low value to a high value, or vice versa, and the operating point within the transfer characteristic of the system is affected, thereby introducing distortions at the receiving end.
Dynamic Memory – Memory devices whose stored data must be continually refreshed to avoid degradation. Each bit is stored as a charge on
a single MOS capacitor. Because of charge leakage in the transistors,
dynamic memory must be refreshed every 2 ms by rewriting its entire
contents. Normally, this does not slow down the system but does require
additional memory refresh logic.
Dynamic Metadata Dictionary – The standard database of approved,
registered Metadata Keys, their definitions, and their allowed formats.
Dynamic Mike – A mike in which the diaphragm moves a coil suspended
in a magnetic field to generate an output voltage proportional to the sound
pressure level.
Dynamic Range – a) A circuit’s signal range. b) An audio term which
refers to the range between the softest and loudest levels a source can
produce without distortion. c) The difference, in decibels, between the
overload level and the minimum acceptable signal level in a system or
transducer. d) The ratio of two instantaneous signal magnitudes, one being
the maximum value consistent with specified criteria or performance, the
other the maximum value of noise. e) The concept of dynamic range is
applicable to many measurements beyond characterization of the video
signal, and the ratios may also be expressed as f stops, density differences, illumination or luminance ratios, etc.
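Definition c) can be turned into a small calculation using the 20·log10 amplitude-ratio convention; `dynamic_range_db` is a hypothetical helper for illustration:

```python
import math

def dynamic_range_db(max_level, noise_level):
    """Dynamic range in decibels as the ratio of the maximum usable
    signal amplitude to the noise floor (20*log10 for amplitudes)."""
    return 20 * math.log10(max_level / noise_level)

# A 16-bit channel's ~65536:1 amplitude range gives roughly 96 dB:
print(round(dynamic_range_db(65536, 1)))  # 96
```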
Dynamic Range Compression – a) Level adjustment applied to an audio
signal in order to limit the difference, or range of the loudest to the softest
sounds. b) A technique of reducing the range between loud and soft
sounds in order to make dialogue more audible, especially when listening
at low volume levels. Used in the downmix process of multichannel Dolby
Digital sound tracks.
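A static compression curve of the kind described in (a) can be sketched as follows; the threshold and ratio values are hypothetical examples, not Dolby Digital's actual downmix parameters:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: input levels above the threshold are
    reduced by the compression ratio (4:1 here), shrinking the spread
    between the loudest and softest sounds."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # -30.0 (below threshold, unchanged)
print(compress_db(0.0))    # -15.0 (20 dB over threshold becomes 5 dB over)
```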
Dynamic Range, Display – The range of luminances actually achieved
in a display. The system’s overall transfer function is the most informative
specification of dynamic range, inasmuch as nonlinear processing has
nearly always been applied to the luminance of the reproduced scene.
Frequently, however, the dynamic range, display is estimated by observing
the reproduction of a stepped gray-scale having calibrated intervals.
Conventionally, the dynamic range is reported to include every step whose
transition can be detected, no matter how minuscule. Human vision is less
adept at judging the luminance of extended areas, but is particularly sensitive
to luminance transitions which may even have been exaggerated by edge
enhancement. “Resolved steps” may be reported, therefore, even when
the perceived luminance difference between the areas of adjacent steps
is not obvious.
Dynamic Range, Image Capture – The range of luminances actually
captured in the image is defined and limited by the transfer function which
is usually nonlinear. Capture and recording systems traditionally limit their
linear response to a central portion of their dynamic range, and may have
extended nonlinear shoulder and toe regions. For any scene, it is usually
possible to place the luminances of interest on a preferred portion of the
transfer function, with excursions into higher and lower limits rolled off or
truncated by the respective shoulder and toe of the curve.
Dynamic Resolution – The amount of spatial resolution available in
moving pictures. In most television schemes, dynamic resolution is
considerably less than static resolution. See also Motion Surprise, Spatial
Resolution, and Temporal Resolution.
Dynamic Rounding – The intelligent truncation of digital signals. Some
image processing requires that two signals are multiplied, for example in
digital mixing, producing a 16-bit result from two original 8-bit numbers.
This has to be truncated, or rounded, back to 8-bits. Simply dropping the
lower bits can result in visible contouring artifacts especially when handling
pure computer generated pictures. Dynamic rounding is a mathematical
technique for truncating the word length of pixels, usually to their normal
8-bits. This effectively removes the visible artifacts and is non-cumulative
on any number of passes. Other attempts at a solution have involved
increasing the number of bits, usually to 10, making the LSBs smaller but
only masking the problem for a few generations. Dynamic rounding is a
licensable technique, available from Quantel, and is used in a growing
number of digital products both from Quantel and other manufacturers.
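Quantel's exact method is licensed and proprietary, but the general idea of randomized (dithered) rounding can be sketched as follows; the function and constants are illustrative only:

```python
import random

_rng = random.Random(42)  # seeded for repeatability

def dynamic_round(value16):
    """Reduce a 16-bit intermediate result to 8 bits. Rather than simply
    dropping the low byte (which causes visible contouring), compare the
    low byte against a pseudo-random number and use the outcome as a
    carry into the high byte. A generic randomized-rounding sketch, not
    Quantel's licensed implementation."""
    high, low = value16 >> 8, value16 & 0xFF
    carry = 1 if low > _rng.randrange(256) else 0
    return min(high + carry, 255)

# The low byte 0x80 rounds up about half the time, so the long-run mean
# of the 8-bit output preserves the sub-LSB level:
vals = [dynamic_round(0x1280) for _ in range(10000)]
print(0x12 <= sum(vals) / len(vals) <= 0x13)  # True
```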
www.tektronix.com/video_audio 79
Video Terms and Acronyms
E Mem – Term used for a panel memory system.
E1 – European digital transmission channel with a data rate of 2.048 Mbps.
EACEM – European Association of Consumer Electronics Manufacturers
EAPROM (Electrically Alterable Programmable Read-Only Memory) –
A PROM whose contents can be changed.
Earth Station – Equipment used for transmitting or receiving satellite signals.
EAV (End of Active Video) – A term used with component digital video; the timing reference code marking the end of the active picture portion of each line.
EB (Errored Block)
EBR – See Electron Beam Recording.
EBU (European Broadcasting Union) – An organization of European
broadcasters that, among other activities, produces technical statements
and recommendations for the 625/50 line television system. Created in
1950 and headquartered in Geneva, Switzerland, the EBU is the world’s
largest professional association of national broadcasters. The EBU assists
its members in all areas of broadcasting, briefing them on developments
in the audio-visual sector, providing advice and defending their interests
via international bodies. The Union has active members in European and
Mediterranean countries and associate members in countries elsewhere
in Africa, the Americas and Asia.
EBU TECH.3267-E – a) The EBU recommendation for the serial composite
and component interface of 625/50 digital video signal including embedded digital audio. b) The EBU recommendation for the parallel interface of
625 line digital video signal. A revision of the earlier EBU Tech.3246-E,
which in turn was derived from CCIR-601 and contributed to CCIR-656.
EBU Timecode – The timecode system created by the EBU and based on
SECAM or PAL video signals.
ECC (Error Correction Code) – A type of memory that corrects errors on
the fly.
ECC Constraint Length – The number of sectors that are interleaved to
combat bursty error characteristics of discs. 16 sectors are interleaved in
DVD. Interleaving takes advantage of typical disc defects such as scratch
marks by spreading the error over a larger data area, thereby increasing
the chance that the error correction codes can conceal the error.
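The benefit of interleaving can be shown with a toy model; the block and symbol counts below are illustrative, not the actual DVD Reed-Solomon product-code dimensions:

```python
def interleave(blocks):
    """Write the blocks as rows and transmit column by column, so that
    consecutive symbols on the disc come from different blocks."""
    return [sym for col in zip(*blocks) for sym in col]

def deinterleave(stream, n_blocks):
    """Undo interleave(): every n_blocks-th symbol belongs to one block."""
    return [list(stream[i::n_blocks]) for i in range(n_blocks)]

# A toy model: 16 "sectors" of 8 symbols each (DVD interleaves 16 sectors;
# real sector and code-word sizes are much larger).
blocks = [[f"s{i}_{j}" for j in range(8)] for i in range(16)]
stream = interleave(blocks)

# A scratch wipes out 16 consecutive symbols on the disc...
for i in range(40, 56):
    stream[i] = None

# ...but after de-interleaving, each sector has lost only one symbol,
# which is well within the reach of its error-correction code.
recovered = deinterleave(stream, 16)
print(all(row.count(None) == 1 for row in recovered))  # True
```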
ECC/EDC (Error Correction Code/Error Detection Code) – Allows data
that is being read or transmitted to be checked for errors and, when necessary, corrected on the fly. It differs from parity-checking in that errors are
not only detected but also corrected. ECC is increasingly being designed
into data storage and transmission hardware as data rates (and therefore
error rates) increase.
Eccentricity – For an ellipse, the ratio of the distance between the two
foci to the length of the major axis. It is 0 for a circle and approaches 1 as the ellipse flattens.
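In terms of the semi-major axis a and semi-minor axis b (with a >= b), the standard formula is e = sqrt(1 - (b/a)^2); a quick numerical check:

```python
import math

def eccentricity(a, b):
    """Eccentricity of an ellipse with semi-major axis a and semi-minor
    axis b (a >= b): e = sqrt(1 - (b/a)**2)."""
    return math.sqrt(1.0 - (b / a) ** 2)

print(eccentricity(5.0, 5.0))            # 0.0 (a circle)
print(round(eccentricity(5.0, 3.0), 6))  # 0.8
```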
Echo (or Reflection) – a) A wave which has been reflected at one or
more points in the transmission medium, with sufficient magnitude and
time difference to be perceived in some manner as a wave distinct from
that of the main or primary transmission. Echoes may be either leading
or lagging the primary wave and appear in the picture monitor as
reflections or “ghosts”. b) Action of sending a character input from a
keyboard to the printer or display.
Echo Cancellation – Reduction of an echo in an audio system by
estimating the incoming echo signal over a communications connection
and subtracting its effects from the outgoing signal.
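A common way to estimate the incoming echo is an adaptive (LMS) filter. The following is a toy sketch against a synthetic one-tap echo path; it is illustrative only, not a production canceller (no double-talk detection, no normalization):

```python
import random

def lms_echo_canceller(far_end, mic, taps=4, mu=0.05):
    """Plain LMS sketch: adapt an FIR estimate of the echo path driven by
    the far-end (loudspeaker) signal, then subtract the estimated echo
    from the microphone signal."""
    w = [0.0] * taps
    x = [0.0] * taps                  # recent far-end samples, newest first
    residual = []
    for f, m in zip(far_end, mic):
        x = [f] + x[:-1]
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        e = m - echo_est              # what is sent back after cancellation
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        residual.append(e)
    return residual

# Synthetic test: the mic hears a delayed, attenuated copy of the far end.
rng = random.Random(0)
far = [rng.uniform(-1, 1) for _ in range(5000)]
mic = [0.6 * far[i - 1] if i > 0 else 0.0 for i in range(5000)]
res = lms_echo_canceller(far, mic)
# After adaptation the residual echo is far smaller than at the start:
print(sum(abs(e) for e in res[-100:]) < 0.01 * sum(abs(e) for e in res[:100]))
```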
Echo Plate – A metal plate used to create reverberation by inducing
waves in it by bending the metal.
E-Cinema – An HDTV film-complement format introduced by Sony in
1998. 1920 x 1080, progressive scan, 24 fps, 4:4:4 resolution. Using a
1/2-inch tape, the small cassette (camcorder) will hold 50 minutes while
the large cassette will hold 156 minutes. E-Cinema’s camcorder will use
three 2/3-inch FIT CCDs and is equivalent to a film sensitivity of ISO 500.
The format will compress the electronic signal somewhere in the range
of 7:1. The format is based on the Sony HDCAM video format.
ECL (Emitter Coupled Logic) – A variety of bipolar transistor that is
noted for its extremely fast switching speeds.
ECM – See Entitlement Control Message.
ECMA (European Computer Manufacturers Association) – An
international association founded in 1961 that is dedicated to establishing
standards in the information and communications fields.
ECMA-262 – An ECMA standard that specifies the core JavaScript
language, which is expected to be adopted shortly by the International
Organization for Standardization (ISO) as ISO 16262. ECMA-262 is roughly
equivalent to JavaScript 1.1.
ECU (Extreme Closeup)
ED-Beta (Extended Definition Betamax) – A consumer/professional
videocassette format developed by Sony offering 500-line horizontal
resolution and Y/C connections.
Edge – a) An edge is the straight line that connects two points.
b) Synonym for key border. Used by our competitors but not preferred by
Ampex. c) A boundary in an image. The apparent sharpness of edges
can be increased without increasing resolution. See also Sharpness.
Edge Busyness – Distortion concentrated at the edge of objects,
characterized by temporally varying sharpness or spatially varying noise.
Edge Curl – Usually occurs on the outside one-sixteenth inch of the
videotape. If the tape is sufficiently deformed it will not make proper
tape contact with the playback heads. An upper curl (audio edge) crease
may affect sound quality. A lower edge curl (control track) may result in
poor picture quality.
Edge Damage – Physical distortion of the top or bottom edge of the magnetic tape, usually caused by pack problems such as popped strands or
stepping. Affects audio and control track sometimes preventing playback.
Edge Effect – See Following Whites or Following Blacks.
Edge Enhancement – Creating hard, crisp, high-contrast edges beyond
the correction of the geometric problem compensated by aperture correction frequently creates the subjective impression of increased image detail.
Transversal delay lines and second-derivative types of correction increase
the gain at higher frequencies while introducing rather symmetrical “undershoot followed by overshoot” at transitions. In fact, and contrary to many
casual observations, image resolution is thereby decreased and fine detail
becomes obscured. Striking a balance between the advantages and disadvantages is a subjective evaluation and demands an artistic decision.
Edge Enhancing – See Enhancing.
Edge Filter – A filter that applies anti-aliasing to graphics created with
the title tool.
Edge Numbers – Numbers printed on the edge of 16 mm and 35 mm motion
picture film every foot, which allow frames to be easily identified in an
edit list.
Edgecode – See Edge Numbers, Key Numbers.
Edit Point – The location in a video where a production event occurs.
(e.g., dissolve or wipe from one scene to another).
Edit Rate – In compositions, a measure of the number of editable units
per second in a piece of media data (for example, 30 fps for NTSC, 25 fps
for PAL and 24 fps for film).
Edit Sequence – An assembly of clips.
Editing – A process by which one or more compressed bit streams are
manipulated to produce a new compressed bit stream. Conforming edited
bit streams are understood to meet the requirements defined in the Digital
Television Standard.
Editing Control Unit (ECU) – A microprocessor that controls two or more
video decks or VCRs and facilitates frame-accurate editing.
Editor – A control system (usually computerized) which allows you to control video tape machines, the video switcher, and other devices remotely
from a single control panel. Editors enable you to produce finished video
programs which combine video tape or effects from several different sources.
EDL (Edit Decision List) – A list of edit decisions made during an edit
session and usually saved to floppy disk. Allows an edit to be redone or
modified at a later time without having to start all over again.
EDH (Error Detection and Handling) – Defined by SMPTE RP-165 and
used for recognizing inaccuracies in the serial digital signal.
It may be incorporated into serial digital equipment and employ a simple
LED error indicator. This data conforms to the ancillary data formatting
standard (SMPTE 291M) for SD-SDI and is located on line 9 for 525 and
line 5 for 625 formats.
EDO DRAM (Extended Data Out Dynamic Random Access Memory) –
EDO DRAM allows read data to be held past the rising edge of CAS
(Column Address Strobe) improving the fast page mode cycle time critical
to graphics performance and bandwidth. EDO DRAM is less expensive
than VRAM.
Edit – a) The act of performing a function such as a cut, dissolve, wipe on
a switcher, or a cut from VTR to VTR where the end result is recorded on
another VTR. The result is an edited recording called a master. b) Any point
on a video tape where the audio or video information has been added to,
replaced, or otherwise altered from its original form.
E-E Mode (Electronic to Electronic Mode) – The mode obtained when
the VTR is set to record but the tape is not running. The VTR is processing
all the signals that it would normally use during recording and playback
but without actually recording on the tape.
Edit Control – A connection on a VCR or camcorder which allows direct
communication with external edit control devices. (e.g., LANC (Control-L)
and new (Panasonic) 5-pin). Thumbs Up works with both of these control
formats and with machines lacking direct control.
Edit Controller – An electronic device, often computer-based, that allows
an editor to precisely control, play and record to various videotape machines.
Edit Decision List (EDL) – a) A list of a video production’s edit points.
An EDL is a record of all original videotape scene location time references,
corresponding to a production’s transition events. EDLs are usually
generated by computerized editing equipment and saved for later use
and modification. b) Record of all edit decisions made for a video program
(such as in-times, out-times, and effects) in the form of printed copy,
paper tape, or floppy disk file, which is used to automatically assemble
the program at a later point.
Edit Display – Display used exclusively to present editing data and
editor’s decision lists.
Edit Master – The first generation (original) of a final edited tape.
EDTV – See Extended/Enhanced Definition Television.
EEPROM (E2, “E-squared” PROM) – An electronically-erasable, programmable
read-only memory device. Data can be stored in memory and will remain
there even after power is removed from the device. The memory can be
erased electronically so that new data can be stored.
Effect – a) One or more manipulations of the video image to produce a
desired result. b) Multi-source transition, such as a wipe, dissolve or key.
Effective Competition – Market status under which cable TV systems are
exempt from regulation of basic tier rates by local franchising authorities,
as defined in 1992 Cable Act. To claim effective competition, a cable
system must compete with at least one other multi-channel provider that
is available to at least 50% of an area’s households and is subscribed to
by more than 15% of the households.
Effects – The manipulation of an audio or video signal. Types of film or
video effects include special effects (F/X) such as morphing; simple effects
such as dissolves, fades, superimpositions, and wipes; complex effects
such as keys and DVEs; motion effects such as freeze frame and slow
motion; and title and character generation. Effects usually have to be
rendered because most systems cannot accommodate multiple video
streams in real time. See also Rendering.
Effects (Setup) – Setup on the AVC, Century or Vista includes the status
of every push-button, key setting, and transition rate. The PANEL-MEM
system can store these setups in memory registers for future use.
EIA-762 – Specifies how to convert QAM to 8-VSB, with no support for
OSD (on screen displays).
Effects Keyer (E Keyer) – The downstream keyer within an M/E, i.e., the
last layer of video.
EIA-770 – This specification consists of three parts (EIA-770.1, EIA-770.2,
and EIA-770.3). EIA-770.1 and EIA-770.2 define the analog YPbPr video
interface for 525-line interlaced and progressive SDTV systems. EIA-770.3
defines the analog YPbPr video interface for interlaced and progressive
HDTV systems. EIA-805 defines how to transfer VBI data over these YPbPr
video interfaces.
Effects System – The portion of the switcher that performs mixes, wipes
and cuts between background and/or affects key video signals. The Effects
System excludes the Downstream Keyer and Fade-to-Black circuitry. Also
referred to as Mix Effects (M/E) system.
EFM (Eight-to-Fourteen Modulation) – This low-level and very critical
channel coding technique maximizes pit sizes on the disc by reducing
frequent transitions from 0 to 1 or 1 to 0. CD represents 1’s as land-pit
transitions along the track. The 8/14 code maps 8 user data bits into 14
channel bits in order to avoid isolated 1’s and 0’s, which would otherwise
require extremely small features to be reproduced on the disc. In the
1982 compact disc standard (IEC 908 standard), 3 merge bits are added
to the 14 bit block to further eliminate 1-0 or 0-1 transitions between
adjacent 8/14 blocks.
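The run-length rule that EFM enforces (at least 2 and at most 10 zeros between consecutive 1's, which is also why merge bits are needed at block boundaries) can be checked like this; the helper and sample words are illustrative:

```python
def valid_efm_word(bits):
    """Check the EFM run-length rule within a channel-bit string: between
    any two 1's there must be at least 2 and at most 10 zeros, so that
    pits and lands on the disc are never too short or too long."""
    ones = [i for i, b in enumerate(bits) if b == "1"]
    gaps = [j - i - 1 for i, j in zip(ones, ones[1:])]
    return all(2 <= g <= 10 for g in gaps)

print(valid_efm_word("01001000100000"))  # True: runs of 2 and 3 zeros
print(valid_efm_word("01100000000000"))  # False: adjacent 1's
```

Note this checks a single word; the same constraint must also hold across word boundaries, which is what the merge bits described above guarantee.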
EFM Plus – DVD’s EFM+ method is a derivative of EFM. It folds the merge
bits into the main 8/16 table. EFM+ may be covered by U.S. Patent
EGA (Enhanced Graphics Adapter) – A display technology for the IBM
PC. It has been replaced by VGA. EGA pixel resolution is 640 x 350.
EIA (Electronics Industries Association) – A trade organization that
has created recommended standards for television systems (and other
electronic products), including industrial television systems with up to
1225 scanning lines. EIA RS-170A is the current standard for NTSC studio
equipment. The EIA is a charter member of ATSC.
EIA RS-170A – The timing specification standard for NTSC broadcast
video equipment. The Digital Video Mixer meets RS-170A.
EIA/IS-702 – NTSC Copy Generation Management System – Analog
(CGMS-A). This standard added copy protection capabilities to NTSC video
by extending the EIA-608 standard to control the Macrovision anti-copy
process. It is now included in the latest EIA-608 standard.
EIA-516 – U.S. teletext standard, also called NABTS.
EIA-608 – U.S. closed captioning and extended data services (XDS) standard. Revision B adds Copy Generation Management System – Analog
(CGMS-A), content advisory (v-chip), Internet Uniform Resource Locators
(URLs) using Text-2 (T-2) service, 16-bit Transmission Signal Identifier, and
transmission of DTV PSIP data.
EIA-708 – U.S. DTV closed captioning standard. EIA CEB-8 also provides
guidance on the use and processing of EIA-608 data streams embedded
within the ATSC MPEG-2 video elementary transport stream, and augments
EIA-744 – NTSC “v-chip” operation. This standard added content advisory
filtering capabilities to NTSC video by extending the EIA-608 standard. It
is now included in the latest EIA-608 standard, and has been withdrawn.
EIA-761 – Specifies how to convert QAM to 8-VSB, with support for OSD
(on screen displays).
EIA-766 – U.S. HDTV content advisory standard.
EIA-775 – EIA-775 defines a specification for a baseband digital interface
to a DTV using IEEE 1394 and provides a level of functionality that is similar to the analog system. It is designed to enable interoperability between a
DTV and various types of consumer digital audio/video sources, including
set top boxes and DVRs or VCRs. EIA-775.1 adds mechanisms to allow a
source of MPEG service to utilize the MPEG decoding and display capabilities in a DTV. EIA-775.2 adds information on how a digital storage device,
such as a D-VHS or hard disk digital recorder, may be used by the DTV
or by another source device such as a cable set-top box to record or
time-shift digital television signals. This standard supports the use of such
storage devices by defining Service Selection Information (SSI), methods
for managing discontinuities that occur during recording and playback,
and rules for management of partial transport streams. EIA-849 specifies
profiles for various applications of the EIA-775 standard, including digital
streams compliant with ATSC terrestrial broadcast, direct-broadcast
satellite (DBS), OpenCable™, and standard definition Digital Video (DV)
EIA-805 – This standard specifies how VBI data are carried on component
video interfaces, as described in EIA-770.1 (for 480p signals only), EIA-770.2 (for 480p signals only) and EIA-770.3. This standard does not apply
to signals which originate in 480i, as defined in EIA-770.1 and EIA-770.2.
The first VBI service defined is Copy Generation Management System
(CGMS) information, including signal format and data structure when carried by the VBI of standard definition progressive and high definition YPbPr
type component video signals. It is also intended to be usable when the
YPbPr signal is converted into other component video interfaces including
RGB and VGA.
EIA-861 – The EIA-861 standard specifies how to include data, such as
aspect ratio and format information, on DVI and HDMI.
EIAJ (Electronic Industry Association of Japan) – The Japanese
equivalent of the EIA.
EIA-J CPR-1204 – This EIA-J recommendation specifies another
widescreen signaling (WSS) standard for NTSC video signals.
E-IDE (Enhanced Integrated Drive Electronics) – Extensions to the
IDE standard providing faster data transfer and allowing access to larger
drives, including CD-ROM and tape drives, using ATAPI. E-IDE was adopted
as a standard by ANSI in 1994. ANSI calls it Advanced Technology
Attachment-2 (ATA-2) or Fast ATA.
EISA (Enhanced Industry Standard Architecture) – In 1988 a consortium of nine companies developed 32-bit EISA which was compatible with
AT architecture. The basic design of EISA is the result of a compilation of
the best designs of the whole computer industry rather than (in the case of
the ISA bus) a single company. In addition to adding 16 new data lines
to the AT bus, bus mastering, automated setup, interrupt sharing, and
advanced transfer modes were adapted making EISA a powerful and useful
expansion design. The 32-bit EISA can reach a peak transfer rate of
33 MB/s, over 50% faster than the Micro Channel architecture. The EISA
consortium is presently developing EISA-2, a 132 MB/s standard.
EISA Slot – Connection slot to a type of computer expansion bus found in
some computers. EISA is an extended version of the standard ISA slot.
EIT (Encoded Information Type)
EIT (Event Information Table) – Contains data concerning events (a
grouping of elementary broadcast data streams with a defined start and
end time belonging to a common service) and programs (a concatenation
of one or more events under the control of a broadcaster, such as event
name, start time, duration, etc.). Part of DVB-SI.
Elementary Stream Clock Reference (ESCR) – A time stamp in the PES
from which decoders of PES may derive timing.
Elementary Stream Descriptor – A structure contained in object
descriptors that describes the encoding format, initialization information,
transport channel identification, and other descriptive information about the
content carried in an elementary stream.
Elementary Stream Header (ES Header) – Information preceding the
first data byte of an elementary stream. Contains configuration information
for the access unit header and elementary stream properties.
Elementary Stream Interface (ESI) – An interface modeling the
exchange of elementary stream data and associated control information
between the Compression Layer and the Sync Layer.
Electromagnetic Interference (EMI) – Interference caused by electrical
Elementary Stream Layer (ES Layer) – A logical MPEG-4 Systems Layer
that abstracts data exchanged between a producer and a consumer into
Access units while hiding any other structure of this data.
Electron Beam Recording – A technique for converting television images
to film using direct stimulation of film emulsion by a very fine long focal
length electronic beam.
Elementary Stream User (ES User) – The MPEG-4 systems entity that
creates or receives the data in an elementary stream.
Electronic Beam Recorder (EBR) – Exposes film directly using an
electronic beam compared to recording from a CRT.
EM (Electronic Mail) – Commonly referred to as E-mail.
ELG (European Launching Group) – Now superseded by DVB.
Electronic Crossover – A crossover network which uses active filters and
is used before rather than after the signal passes through the power amp.
Embedded Audio – a) Embedded digital audio is multiplexed onto a serial digital data stream within the horizontal ancillary data region of an SDI
signal. A maximum of 16 channels of audio can be carried as standardized
with SMPTE 272M or ITU-R BT.1305 for SD and SMPTE 299M for HD.
b) Digital audio that is multiplexed and carried within an SDI connection –
so simplifying cabling and routing. The standard (ANSI/SMPTE 272M-1994)
allows up to four groups each of four mono audio channels.
Electronic Editing – The assembly of a finished video program in which
scenes are joined without physically splicing the tape. Electronic editing
requires at least two decks: one for playback and the other for recording.
Embossing – An artistic effect created on AVAs and/or switchers to make
characters look like they are (embossed) punched from the back of the
background video.
Electronic Matting – The process of electronically creating a composite
image by replacing portions of one image with another. One common, if
rudimentary, form of this process is chroma-keying, where a particular
color in the foreground scene (usually blue) is replaced by the background
scene. Electronic matting is commonly used to create composite images
where actors appear to be in places other than where they are being shot.
It generally requires more chroma resolution than vision does, causing
contribution schemes to be different than distribution schemes. While there
is a great deal of debate about the value of ATV to viewers, there does
not appear to be any dispute that HDEP can perform matting faster and
better than almost any other moving image medium.
EMC (Electromagnetic Compatibility) – Refers to the use of components in electronic systems that do not electrically interfere with each
other. See also EMI.
Electronic Cinematography – Photographing motion pictures with
television equipment. Electronic cinematography is often used as a term
indicating that the ultimate product will be seen on a motion picture
screen, rather than a television screen. See also HDEP and Mathias.
Electronic Pin Register (EPR) – Stabilizes the film transport of a
telecine. Reduces ride (vertical movement) and weave (horizontal movement).
Operates in real time.
Electrostatic Pickup – Pickup of noise generated by electrical sparks
such as those caused by fluorescent lights and electrical motors.
Elementary Stream (ES) – a) The raw output of a compressor carrying a
single video or audio signal. b) A generic term for one of the coded video,
coded audio, or other coded bit streams. One elementary stream is carried
in a sequence of PES packets with one and only one stream_id.
EMF (Equipment Management Function) – Function connected to
all the other functional blocks and providing for a local user or the
Telecommunication Management Network (TMN) a mean to perform all
the management functions of the cross-connect equipment.
EMI (Electromagnetic Interference) – An electrical disturbance in a system due to natural phenomena, low-frequency waves from electromechanical devices or high-frequency waves (RFI) from chips and other electronic
devices. Allowable limits are governed by the FCC. See also EMC.
Emission – a) The propagation of a signal via electromagnetic radiation,
frequently used as a synonym for broadcast. b) In CCIR usage: radiofrequency radiation in the case where the source is a radio transmitter
or radio waves or signals produced by a radio transmitting station.
c) Emission in electronic production is one mode of distribution for the
completed program, as an electromagnetic signal propagated to the
point of display.
EMM – See Entitlement Management Message.
E-Mode – An edit decision list (EDL) in which all effects (dissolves, wipes
and graphic overlays) are performed at the end. See also A-Mode, B-Mode,
C-Mode, D-Mode, Source Mode.
Emphasis – a) Filtering of an audio signal before storage or transmission
to improve the signal-to-noise ratio at high frequencies. b) A boost in
signal level that varies with frequency, usually used to improve SNR
in FM transmission and recording systems (wherein noise increases
with frequency) by applying a pre-emphasis before transmission and a
complementary de-emphasis to the receiver. See also Adaptive Emphasis.
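The pre-/de-emphasis pairing can be illustrated with a first-order digital filter and its exact inverse; the coefficient here is an arbitrary example, not a broadcast standard:

```python
def pre_emphasis(x, a=0.95):
    """First-order pre-emphasis: boost high frequencies before a noisy
    channel by differencing against the previous sample."""
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]

def de_emphasis(y, a=0.95):
    """Matching de-emphasis: the exact inverse (first-order IIR) filter,
    restoring the original signal and attenuating channel noise at high
    frequencies in the process."""
    out = [y[0]]
    for n in range(1, len(y)):
        out.append(y[n] + a * out[n - 1])
    return out

sig = [0.0, 1.0, 0.5, -0.3, 0.2]
restored = de_emphasis(pre_emphasis(sig))
print(all(abs(s - r) < 1e-9 for s, r in zip(sig, restored)))  # True
```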
Emulate – To test the function of a DVD disc on a computer after formatting a complete disc image.
Enable – Input signal that allows the device function to occur.
ENB (Equivalent Noise Bandwidth) – The bandwidth of an ideal rectangular filter that gives the same noise power as the actual system.
Energy Plot – The display of audio waveforms as a graph of the relative
loudness of an audio signal.
ENG (Electronic News Gathering) – Term used to describe use of videorecording instead of film in news coverage.
ENG Camera (Electronic News Gathering camera) – Refers to CCD
cameras in the broadcast industry.
Enhancement Layer – A relative reference to a layer (above the base
layer) in a scalable hierarchy. For all forms of scalability, its decoding
process can be described by reference to the lower layer decoding process
and the appropriate additional decoding process for the Enhancement
Layer itself.
Enhancing – Improving a video image by boosting the high frequency
content lost during recording. There are several types of enhancement.
The most common accentuates edges between light and dark images.
ENRZ (Enhanced Non-Return to Zero)
Encode – a) The process of combining analog or digital video signals,
e.g., red, green and blue, into one composite signal. b) To express a single
character or a message in terms of a code. To apply the rules of a code.
c) To derive a composite luminance-chrominance signal from R, G, B
signals. d) In the context of Indeo video, the process of converting the
color space of a video clip from RGB to YUV and then compressing it.
See Compress, RGB, YUV. Compare Decode.
Entitlement Control Message (ECM) – Entitlement control messages are
private conditional access information. They are program-specific and
specify control and scrambling parameters.
Encoded Chroma Key – Synonym for Composite Chroma Key.
Entropy – The average amount of information represented by a symbol
in a message. It represents a lower bound for compression.
Encoded Subcarrier – A reference system created by Grass Valley Group
to provide exact color timing information.
Encoder – a) A device used to form a single composite color signal
(NTSC, PAL or SECAM) from a set of component signals. An encoder is
used whenever a composite output is required from a source (or recording)
which is in component format. b) Sometimes devices that change analog
signals to digital (ADC). All NTSC cameras include an encoder. Because
many of these cameras are inexpensive, their encoders omit many of
the advanced techniques that can improve NTSC. CAV facilities can
use a single, advanced encoder prior to creating a final NTSC signal.
c) An embodiment of an encoding process.
Encoding (Process) – A process that reads a stream of input pictures or
audio samples and produces a valid coded bit stream as defined in the
Digital Television Standard.
Encryption – a) The process of coding data so that a specific code or
key is required to restore the original data. In broadcast, this is used to
make transmission secure from unauthorized reception as is often found
on satellite or cable systems. b) The rearrangement of the bit stream of
a previously digitally encoded signal in a systematic fashion to make the
information unrecognizable until restored on receipt of the necessary
authorization key. This technique is used for securing information transmitted over a communication channel with the intent of excluding all other
than authorized receivers from interpreting the message. Can be used for
voice, video and other communications signals.
END (Equivalent Noise Degradation)
End Point – End of the transition in a dissolve or wipe.
Entitlement Management Message (EMM) – Private Conditional Access
information which specifies the authorization levels or the services of
specific decoders. They may be addressed to individual decoder or groups
of decoders.
Entropy Coding – Variable-length lossless coding of the digital representation of a signal to reduce redundancy.
Entropy Data – That data in the signal which is new and cannot be compressed.
Entropy – In video, entropy, the average amount of information represented by a symbol in a message, is a function of the model used to produce that message and can be reduced by increasing the complexity of the model so that it better reflects the actual distribution of source symbols in the original message. Entropy is a measure of the information contained in a message; it is the lower bound for compression.
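The "average information per symbol" idea can be made concrete with a short Shannon-entropy calculation (a generic sketch; the function name is invented for illustration):

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy H = -sum(p * log2 p) over the symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A constant message carries no information; four equally likely symbols
# need 2 bits each -- the lower bound for lossless coding of that message.
h_flat = entropy_bits_per_symbol("aaaa")
h_mixed = entropy_bits_per_symbol("abcd")
```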
Entry – The point where an edit will start (this will normally be displayed
on the editor screen in time code).
Entry Point – The point in a coded bit stream after which the decoder can
be initialized and begin decoding correctly. The picture that follows the
entry point will be an I-picture or a P-picture. If the first transmitted picture
is not an I-picture, the decoder may produce one or more pictures during
acquisition. Also referred to as an Access Unit (AU).
E-NTSC – A loosely applied term for receiver-compatible EDTV, used by
CDL to describe its Prism 1 advanced encoder/decoder family.
ENTSC – Philips ATV scheme now called HDNTSC.
Envelope Delay – The term “Envelope Delay” is often used interchangeably with Group Delay in television applications. Strictly speaking, envelope delay is measured by passing an amplitude modulated signal through the system and observing the modulation envelope. Group Delay, on the other hand, is measured directly by observing phase shift in the signal itself. Since the two methods yield very nearly the same result in practice, it is safe to assume the two terms are synonymous.
Equivalent Input Noise – Noise created by the input stage of an amplifier
which appears in the output of the amplifier increased in level by the gain
of the amp.
Envelope Detection – A demodulation process in which the shape of the
RF envelope is sensed. This is the process performed by a diode detector.
Erase Adj. – A control which adjusts the coupling of the bias oscillator to
the erase head in a manner which purifies the oscillator’s waveform.
Envelope Detector – A form of device in a television set that begins the
process of converting a broadcast or CATV television signal into a video
signal that can be displayed. Envelope detectors are sensitive to some of
the modifications to television signals that have been proposed for receiver-compatible ATV systems.
Erase Field Strength – The minimum initial amplitude of a decreasing
alternating field (normally applied in the longitudinal direction) required to
reduce the output of a given recorded signal by a specified amount.
EPG (Electronic Program Guide) – a) An electronic program guide is
delivered by data transfer rather than printed paper. The EPG gives the
content of the current program. b) Display that describes all programs and
events available to the viewer. It functions like an interactive TV guide that
allows users to view a schedule of available programming and select an
event for viewing.
Erased Noise – The noise arising when reproducing a bulk erased tape
with the erase and record heads completely de-energized.
EPROM (Erasable Programmable Read Only Memory) – a) A PROM
that can be reused. Most EPROMs can be erased by exposing them to
ultraviolet light. b) Erasable and programmable read only memory.
An electronic chip used in many different security products that stores
software instructions for performing various operations.
EPS (Encapsulated PostScript) – A standard file format for high-resolution PostScript illustrations.
EPU (European Platforms Union) – EPU is a body that coordinates
national platforms in Europe for widescreen TV and the migration to HDTV.
EPU seeks to promote and to coordinate knowledge about widescreen TV,
embracing broadcasting, medicine, corporate and cinema use. EPU emphasizes digital aspects and the migration to HDTV, but not necessarily 1250
line HDTV. Through the EPU, the national platforms may exchange experience, facts and views.
EQ – See Equalization.
EQTV (Enhanced Quality Television) – See EDTV.
Equalization (EQ) – a) Process of altering the frequency response of a
video amplifier to compensate for high-frequency losses in coaxial cable.
b) The selective amplification or attenuation of certain frequencies.
c) The balancing of various frequencies to create a pleasing sound by
attenuating or boosting specific frequencies within the sound.
Equalizer – a) Equipment designed to compensate for loss and delay
frequency effects within a system. A component or circuit that allows for
the adjustment of a signal across a given band. b) The pulses which occur
before and after the broad pulses in the vertical interval. These pulses help
the horizontal oscillator to maintain synchronization. See Equalizing Pulses.
Equalizing Pulses – Pulses of one-half the width of the horizontal sync
pulses which are transmitted at twice the rate of the horizontal sync pulses
during the blanking intervals immediately preceding and following the
vertical sync pulses. The action of these pulses causes the vertical deflection to start at the same time in each interval, and also serves to keep
the horizontal sweep circuits in step during the vertical blanking intervals
immediately preceding and following the vertical sync pulse.
Equipment Noise – See Noise.
Erase Head – A device used to remove recorded signals from magnetic tape.
Erasure – A process by which a signal recorded on a tape is removed and
the tape made ready for rerecording.
Error – In digital recording, either a dropout or a noise pulse that exceeds
a certain limit is usually termed an error. In video and instrumentation
recording, an error has no commonly accepted meaning but is defined in
relation to the particular system requirements.
Error Blocks – A form of block distortion where one or more blocks in the
received image bear no resemblance to the current or previous scene and
often contrast greatly with adjacent blocks.
Error Concealment – a) A technique used when error correction fails
(see error correction). Erroneous data is replaced by data synthesized from
surrounding pixels. b) When the error correction program discovers, in the reproduced signal, an error too extensive to permit reconstruction, the redundancy in most image information makes it possible for error concealment to make the error nearly unnoticeable. Video images are frequently
nearly identical from frame to frame. Adjacent video lines frequently have
almost the same detail. It becomes possible, therefore, when a “burst
error” involving the modification or loss of many recorded bits occurs, to
determine from image segments adjacent in time or in space, a most
probable substitution. Such substitutions, when infrequent and supported
by the image redundancy, are often accepted by the viewers as “correct”.
(This is a degree of freedom in image data recording that obviously is not available to scientific and financial data recording. The additional information needed by the algorithm for decision and substitution is usually provided by a data-storage cache established during reproduction.)
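The line-substitution idea above can be sketched in a few lines of Python: when a scan line is lost, a "most probable substitution" is synthesized from the adjacent lines. The frame data is hypothetical, and real concealment logic in recorders and decoders is considerably more elaborate:

```python
def conceal_line(frame, bad_row):
    """Replace a corrupted scan line with the average of the lines above and below."""
    above = frame[bad_row - 1]
    below = frame[bad_row + 1]
    frame[bad_row] = [(a + b) // 2 for a, b in zip(above, below)]
    return frame

frame = [
    [10, 10, 10],
    [99, 0, 255],   # a burst error corrupted this line
    [20, 20, 20],
]
conceal_line(frame, 1)   # line 1 becomes [15, 15, 15]
```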
Error Correction Tool – One of the tools of the Protection Layer used
to correct corrupted information detected by error detection tools at the
same layer.
Error Detection and Correction – a) Coding schemes incorporated into
the information before it is transmitted (or stored) in such a way that errors
which may arise in transmission can be detected and corrected before
restoration or retrieval. In PCM systems, error correction effectively
improves the SNR of the system. b) Ingenious software programs make it
possible to check that the digital stream of image information has not been
corrupted by the loss of a few bits here and there. Additional information
introduced as “overhead” to the image bit stream (thereby increasing the bit rate required for recording) is chosen to conform to specific rules of construction.
Departures from this construction can be detected readily, so that many
potential errors can not only be identified, but corrected so that the information can be restored with high probability. Error correction contributes to the reliability of recording/reproducing and is a normal part of all digital data recording systems.
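As a concrete sketch of "rules of construction" whose violations can be detected and repaired, here is a textbook Hamming(7,4) code in Python. This is a classic illustration only, not the scheme any particular recorder or transmission system uses:

```python
def hamming74_encode(d):
    """Encode 4 data bits with 3 parity bits; any single flipped bit is correctable."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Recompute parity; the syndrome gives the 1-based position of a flipped bit (0 = no error)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
coded = hamming74_encode(word)
coded[5] ^= 1                   # simulate one corrupted bit in transmission
recovered = hamming74_decode(coded)
```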
Error Resilience – The ability to handle transmission errors without
corrupting the content beyond the ability of the receiver to properly display
it. MPEG-4 supports error resilience through the use of resynchronization
markers, extended header code, data partitioning, and reversible VLCs.
ETSI (European Telecommunication Standard Institute) – A European
forum for standardization with participation of major players in the
telecommunications industry. ETSI replaced the CEPT in 1988 with the
objective of making the telecommunications standards needed for the
implementation of the common market in Europe. ETSI has now become a leading body on all telecommunications standards and provides a strong input to international bodies. This being so, the ETSI focuses on
standards that involve interactions between public and private networks,
and specifies the framework of activities that form the telecommunications
infrastructure. ETSI produces standards through a number of technical
committees, and utilizes project teams composed of paid experts to produce drafts of standards. The standards produced are called European
Telecommunications Standards (ETS) or Interim European
Telecommunications Standards (I-ETS).
ES (Elementary Stream) – Data stream for video, audio or data.
Preliminary stage to PES.
ETSI EN 300 163 – This specification defines NICAM 728 digital audio
for PAL.
ESAC (Economics and Statistics Advisory Committee)
ETSI EN 300 294 – Defines the widescreen signaling (WSS) information
for PAL video signals. For (B, D, G, H, I) PAL systems, WSS may be present
on line 23.
Error Detection Tool – One of the tools of the Protection Layer used to
detect corrupted information. Further error correction can then be performed by error correction tools at the same layer.
Error Rate – The ratio of the number of bits incorrectly transmitted to the
total number of bits of information received.
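The ratio is straightforward to compute; a minimal sketch (function name and bit patterns are illustrative):

```python
def bit_error_rate(sent, received):
    """Bits received in error divided by total bits received."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(received)

# 2 errors in 8 bits gives a BER of 0.25
ber = bit_error_rate([1, 0, 1, 1, 0, 0, 1, 0],
                     [1, 1, 1, 1, 0, 1, 1, 0])
```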
ESCR (Elementary Stream Clock Reference) – A time stamp in the PES stream from which decoders may derive timing.
ESPRIT (European Strategic Program for Research and Development
in Information Technology) – A funding program to develop information
technology in the European Economic Communities.
ETSI EN 300 421 – This is the DVB-S specification.
Essence – The actual program (audio, video and/or data) without
metadata. Essence could also be graphics, telemetry, photographs or
other information.
ETSI EN 300 775 – This is the specification for the carriage of Vertical
Blanking Information (VBI) data in DVB bitstreams.
Essence Media or Essence Data – Refers to the actual bits and bytes that represent the sound and picture. It is frequently (and incorrectly) used by IT folks to describe a cassette, DVD, or streaming file containing audio, video, and graphics elements.
Ethernet (IEEE 802.3) – a) A type of high-speed network for interconnecting computing devices. Ethernet can be either 10 or 100 Mbps (Fast
Ethernet). Ethernet is a trademark of Xerox Corporation, Inc. b) A type
of local area network that enables real-time communication between
machines connected directly together through cables. A widely implemented network from which the IEEE 802.3 standard for contention networks
was developed, Ethernet uses a bus topology (configuration) and relies on
the form of access known as CSMA/CD to regulate traffic on the main
communication line. Network nodes are connected by coaxial cable (in
either of two varieties) or by twisted-pair wiring.
ETR 290 – ETSI recommendation priorities for monitoring MPEG-2/DVB
transport streams.
ETS (European Telecommunications Standards) – Standard issued by
the ETSI.
ETS (Expiration Time Stamp) – Supports the notion of object persistence. An object, after it is presented, is saved at the decoder (cache) until
a time given by ETS. Such an object can be used multiple times before
ETS runs out. A Persistent Object (PO) with an expired ETS is no longer
available to the decoder.
ETSI EN 300 429 – This is the DVB-C specification.
ETSI EN 300 744 – This is the DVB-T specification.
ETSI ETR 154 – This specification defines the basic MPEG audio and video
parameters for DVB applications.
ETSI ETS 300 231 – This specification defines information sent during the
vertical blanking interval using PAL teletext (ETSI ETS 300 706) to control
VCRs in Europe (PDC).
ETSI ETS 300 706 – This is the enhanced PAL teletext specification.
ETSI ETS 300 707 – This specification covers Electronic Program Guides (EPG) sent using PAL teletext (ETSI ETS 300 706).
ETSI ETS 300 708 – This specification defines data transmission using
PAL teletext (ETSI ETS 300 706).
ETSI ETS 300 731 – Defines the PALplus standard, allowing the transmission of 16:9 programs over normal PAL transmission systems.
ETSI ETS 300 732 – Defines the ghost cancellation reference (GCR) signal
for PAL.
ETSI ETS 300 743 – This is the DVB subtitling specification.
ETT – See Extended Text Table.
ETV (Educational Television) – A term applied to any television program
or equipment related to some form of education or instruction.
Eureka – A massive European research effort, sometimes called the
European version of Star Wars, embracing many separate R&D projects,
including semiconductors, telecommunications, and computers. The
Eureka EU-95 project is about ATV systems for 625 scanning line/50
field per second countries.
EuroDAB – This is an organization formed through the EBU with the purpose of paving the way for DAB in Europe. The group, which holds more
than 100 broadcasters, manufacturers, regulators, etc., looks into services to be offered, identifies features and applications, researches data services and receiver implementation, and monitors national regulations.
Finally, the group is analyzing satellite DAB projects.
Europe – A geographic region that led the opposition to the ATSC proposal
when it was presented to the CCIR as a proposed worldwide standard and
is developing its own ATV systems. European television currently has 625 scanning lines and 50 fields per second as opposed to NTSC’s 525/59.94.
Evaluator – Equipment that evaluates physical and magnetic quality of
tape, usually provided as an adjunct to a winder/cleaner. In contrast to a
certifier, it does not stop when it detects an error.
E-Value – The difference in inches between the radii of the outside layer
of tape in a roll and the outside edge of the reel flange.
Even Field – In a 2:1 interlaced system, the field that begins with a broad pulse halfway between two line syncs. For NTSC this is lines 262.5 to 525; for PAL it is lines 312.5 to 625.
Even Number – The number of scanning lines per frame possible in a
progressively scanned television system. An interlaced scan system must
use an odd number of lines so that sequential fields will be displaced by
one scanning line.
Event – a) An event is defined as a collection of elementary streams with
a common time base, an associated start time, and an associated end
time. b) A grouping of elementary broadcast data streams with a defined
start and end time belonging to a common service, e.g., first half of a
football match, News Flash, first part of an entertainment show.
Event Number – Number assigned by the system (or editor) to each edit
that is recorded in the EDL.
EVM (Error Vector Magnitude)
Exabyte – An 8 mm data tape format. Popular for storing graphics files due to its low cost and high capacity (commonly 8 GB, but newer models hold up to 40 GB). Exabyte is also the name of the unit of data one step above the petabyte.
Excursion – The amplitude difference between two levels.
Execute (Cycle) – Last cycle of instruction execution. During this time,
the instruction operation is performed.
Execution Time – Time required for the execution of an instruction.
Exif (Exchangeable Image File Format) – A file format used in digital still cameras.
Exit – The point at which an edit will end (normally displayed by time code).
Expander – A device which increases the dynamic range of a signal by
either reducing the level of soft signals or increasing the level of loud
signals when the input is above or below a certain threshold level.
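A downward expander of this kind can be sketched sample-by-sample. The threshold and ratio values below are arbitrary illustrations, not values from any real unit:

```python
def expand(samples, threshold=0.1, ratio=2.0):
    """Downward expander: samples below the threshold are attenuated,
    widening the gap between quiet and loud material."""
    out = []
    for s in samples:
        level = abs(s)
        if 0 < level < threshold:
            # below threshold, the output level falls away faster than the input
            gain = (level / threshold) ** (ratio - 1.0)
            out.append(s * gain)
        else:
            out.append(s)
    return out

quiet, loud = 0.05, 0.5
expanded = expand([quiet, loud])   # quiet sample is pushed down, loud one is untouched
```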
Expansion – An undesired increase in amplitude of a portion of the composite video signal relative to that of another portion. Also, a greater than
proportional change in the output of a circuit for a change in input level.
For example, expansion of the sync pulse means an increase in the percentage of sync during transmission.
Expansion Slot – Electrical connection slot mounted on a computer’s
motherboard (main circuit board). It allows several peripheral devices to be
connected inside a computer.
Explicit Scene Description – The representation of the composition
information based on a parametric description (syntax and semantic) of the
spatio-temporal relationships between audiovisual objects, as opposed to
Implicit Scene Description.
Exponent – Power of ten by which a number is multiplied, used in floating point representation. For example, the exponent in the decimal number 0.9873 x 10^7 is 7.
Export – To use NFS software to make all or part of your file system
available to other users and systems on the network.
Exposure Sheet – In a piece of animation there are hundreds of frames.
Typically, they are organized on an exposure sheet. The sheet describes,
for each piece of artwork used, on which frame the art is first used, what
happens to it (on a frame by frame basis) while it is used, and on which
frame it disappears. Also noted on the sheet, for each frame, are any
changes in the animation system (animation table, camera, lights, etc.).
Exposure sheets on the PictureMaker are created using the SEQ program,
and are organized somewhat differently than traditional sheets, in order to
best use the computer. Each level (or layer, or plane) can be one of three
types: Image (a file of pixel values), object (a 3D database and animation
path), and explicit command (a PictureMaker command mode command).
Each level specifies a beginning frame and duration (ending frame), and
the computer keeps track of all levels with respect to their overlaps in
both time and space.
Extended Studio PAL – A 625-line video standard that allows processing
of component video quality digital signals by composite PAL equipment.
The signal can be distributed and recorded in a composite digital form
using D2 or D3 VTRs.
Extended Text Table (ETT) – The optional ATSC PSIP table that carries
long descriptions of events and channels. There are two types of ETTs:
Channel ETTs, which carry channel descriptions, and Event ETTs, which
carry event descriptions.
Extended/Enhanced Definition Television (EDTV) – a) Extended (or
Enhanced) Definition Television is a proposed intermediate television
system for evolution to full HDTV that offers picture quality substantially
improved over conventional 525-line or 625-line receivers, by employing
techniques at the transmitter and at the receiver that are transparent to
(and cause no visible quality degradation to) existing 525-line or 625-line
receivers. One example of EDTV is the improved separation of luminance
and color components by pre-combing the signals prior to transmission.
Also see Improved Definition Television. b) Specifically a video format with
sampling frequencies 18 MHz (Y), 4.5 MHz (C), and resolution 960 pixels
by 576 lines (Y), 480 pixels by 288 lines (C).
Extensibility – A property of a system, format, or standard that allows
changes in performance or format within a common framework, while
retaining partial or complete compatibility among systems that belong to the common framework.
Extent – a) For the volume structure and the ISO 9660 file structure, an
extent is defined as a set of logical sectors, the logical sector numbers of
which form a continuous ascending sequence. The address, or location,
of an extent is the number of the first logical sector in the sequence.
b) For the UDF file structure an extent is defined as a set of logical blocks,
the logical block numbers of which form a continuous ascending sequence.
The address, or location, of an extent is the number of the first logical
block in the sequence.
Eye Diagram – A means to display the health of the Physical Layer of the
digital data. It is formed by overlaying segments of the sampled digital
signal in much the same way as a waveform monitor overlays lines of a
video signal to produce the familiar line display. By providing enough of the sampled digital segments, the eye display is produced and should ideally conform to the digital standards for the appropriate format.
External Device – In computer systems, any piece of hardware that is
attached to the workstation with a cable.
External Key Input – Extra key inputs that may be accessed by keyboard
that do not appear on the bus rows. Traditionally these inputs are used
only for luminance keys, such as simple character generators or titling
cameras, however, they are not limited to this on Ampex switchers. These
are sources 9 and 0 on 4100 series switchers, and 31 and 32 on AVC series switchers.
External Key Processor – See Processed External Keys.
External Synchronization – A means of ensuring that all equipment is synchronized to one common source.
Extract – To remove a selected area from an edited sequence and close
the resulting gap in the sequence.
Extrapolation – A mode that defines the shape of an animation curve
before the first and after the last control points on the curve. Extrapolation
affects the animation before the first keyframe and after the last keyframe.
Extrapolation is only apparent if there are frames before and after the keyframes.
Extrusion – The next step in creating a boundary rep solid is to “extrude”
the silhouette. Extrusion (or sweeping) is a method of dragging a polygon
through space in order to define a solid. There are typically two kinds of
extrusion: translational and rotational.
Eye Pattern – Waveform monitor pattern produced by random waves
introduced to verify the ability to test for the presence or absence of pulses
in a digital system.
Eye Tracking – The process by means of which eyes follow a person or
object across a television screen. Many ATV techniques take advantage
of the fact that human vision cannot simultaneously demand high spatial
resolution and high temporal resolution to reduce the amount of spatial
resolution transmitted for moving objects. However, when the eyes track
such an object, its image is stationary on the retina, and the visual system
can demand as much resolution as it would for a truly stationary object.
See also Dynamic Resolution.
Eyedropper – A tool for taking a color from a screen image and using
that color for text or graphics.
Fade – Fading is a method of switching from one video source to another.
Next time you watch a TV program (or a movie), pay extra attention when
the scene is about to end and go on to another. The scene fades to black,
then a fade from black to another scene occurs. Fading between scenes
without going to black is called a dissolve. One way to do a fade is to use
an alpha mixer.
Fade to Black – a) This is a video editing term that describes switching
from one video source to a black level or from black to a video signal. This
is commonly called a “fade to black” or “fade from black”. b) The picture
luminance is reduced until the screen is black.
Fader – The console control which allows an operator to perform manual
dissolves, fades and wipes.
Fader Bar – A vertical slide controller on audio and video equipment.
Fall Time – Usually measured from the 10% to the 90% amplitude points
of a negative going transition. See Rise Time.
Falling Edge – High-to-low logic or analog transition.
Fan-In – Electrical load presented by an input. Usually expressed as the
number of equivalent standard input loads.
Fan-Out – Electrical load that an output can drive. Usually expressed as
the number of inputs that can be driven.
FAP (Face Animation Parameters) – Represents a complete set of facial
actions; allows representation of most of the natural facial expressions.
FAPU (Facial Animation Parameter Units) – The amount of displacement described by a FAP is expressed in specific measurement units,
called Facial Animation Parameter Units (FAPU), which represent fractions
of key facial distances. Rotations are instead described as fractions of
a radian.
FAT (File Allocation Table) – A file system used on MS-DOS and
Windows computers.
Father – The metal master disc formed by electroplating the glass master.
The father disc is used to make mother discs, from which multiple stampers (sons) can be made.
FBA (Face and Body Animation) – A collection of nodes in a scene graph which are animated by the FBA (Face and Body Animation) object.
FC-AL (Fiber Channel-Arbitrated Loop) – Architecture used to maintain
high data transfer rates over long distances. With FC-AL storage arrays
can be separated by as much as 20 kilometers, connected by only one
non-amplified Fibre Channel fiber optic link. In the dual-loop architecture,
data transfer rates can reach 200 Mbps. Another advantage is increased
fault tolerance. In the unlikely event of a drive failure, port bypass circuits
single out each failed drive and quickly route around it, with no limitation
on the number of drives that can be bypassed.
FCC (Federal Communications Commission) – a) The government
agency responsible for (among other things) the regulation of the electromagnetic spectrum utilization in the U.S., and the body that licenses
radio and television broadcast stations. The FCC is an independent
government agency, which answers directly to Congress. b) The FCC
rules and regulations constitute mandatory standards for broadcasters,
CATV operators, transmission organizations, and others. See also ACATS.
FCC 73.699 – Federal Communications Commission (FCC) NTSC video
signal specifications standard.
FCC Composite Test Signal
Faroudja – Yves Faroudja and Faroudja Laboratories. First to market an
advanced NTSC encoder with pre-combing; proponent of the Super-NTSC
ATV system and of a 1050 scanning line (900 active line), progressive
scan, 29.97 frame per second, 1.61:1 aspect ratio HDEP system.
FAS (Frame Alignment Signal) – The distinctive signal, inserted in every frame or once in a given number of frames, that always occupies the same relative position within the frame and is used to establish and maintain frame alignment, i.e., synchronization.
Fast Forward – The provision on a tape recorder permitting tape to be run
rapidly through it in normal play direction, usually for search purposes.
Fast Forward Playback – The process of displaying a sequence, or parts
of a sequence, of pictures in display-order faster than real-time.
Fast Reverse Playback – The process of displaying the picture sequence
in the reverse of display order faster than real-time.
Fast-Page Mode – A read or write mode of DRAMs characterized by a
decrease in cycle time of about 2-3 times and a corresponding increase
in performance. The data accessed in Fast-Page Mode cycles must be
adjacent in memory. See EDO.
FCD (Final Committee Draft) – This is the final public form of the
Committee Draft of a proposed international standard, and must be
identified as such before being submitted for a four-month approval
ballot amongst the Participating Member Bodies of the Subcommittee.
F-Connector – A video connector characterized by a single metal wire.
F-connectors may be either push-on or screw-post.
FDDI (Fiber Distributed Data Interface) – Standards for a 100 Mbps
local area network, based upon fiber optic or wired media configured as
dual counter rotating token rings. This configuration provides a high level
of fault tolerance by creating multiple connection paths between nodes; connections can be established even if a ring is broken.
FDIS (Final Draft International Standard) – This is the final form of a
proposed standard before it is adopted as an International Standard. An
approved Final Committee Draft, modified as necessary to accommodate
comments submitted by National Bodies during, or after, the approval
ballot, must first be registered as a Final Draft International Standard,
and then submitted to a two-month letter ballot amongst Participating
Member Bodies of JTC1.
FDM (Frequency Division Multiplex) – A technology that transmits
multiple signals simultaneously over a single transmission path, such
as a cable or wireless system. Each signal travels within its own unique
frequency range (carrier), which is modulated by the data (text, voice,
video, etc.).
FDP (Facial Definition Parameters)
Feathering – A tool that tapers the values around the edges of a binary alpha mask for composition with the background.
Feature Connector – An expansion connector on the VGA that can accept
or drive video signals to or from the VGA. This is used in applications
involving video overlay. This is also called VESA Pass-Through Connector.
FEC (Forward Error Correction) – a) A system in which redundancy
is added to the message so that errors can be corrected dynamically at
the receiver. b) Error control bits added to useful data in the QAM/QPSK modulator.
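Definition a) can be illustrated with the simplest FEC of all, a repetition code with majority voting. Real broadcast systems use far stronger codes (e.g., Reed-Solomon or convolutional codes), so treat this only as a sketch of the redundancy-then-correct idea:

```python
def fec_encode(bits, n=3):
    """Add redundancy before transmission: repeat each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def fec_decode(coded, n=3):
    """Majority vote over each group corrects occasional flipped bits at the receiver."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

payload = [1, 0, 0, 1]
received = fec_encode(payload)
received[1] ^= 1                 # the channel flips one bit
decoded = fec_decode(received)   # majority vote recovers the payload
```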
Feed – The transmission of a video signal from point to point.
Feed Reel – Also called “stock”, “supply” or “storage” reel. The reel on a
tape recorder from which tape unwinds as the machine records or plays.
Feedback – a) Information from one or more outputs to be used as inputs
in a control loop. b) A loop caused by audio or video signal being fed back
into itself. In video the effect is caused when a camera is directed at its
receiving monitor. In audio the effect, manifested as an echo or squeal,
is caused when a microphone is aimed at a speaker. c) A loud squeal or
howl caused when the sound from a loudspeaker is picked up by a nearby
microphone and reamplified. Also caused when the output of a tape
recorder is fed back into the record circuit.
Female Connector – A connector that has indentations or holes into
which you plug a male connector. An example of a female connector is an electrical wall outlet that accepts an electrical plug.
Ferrichrome – A relatively recent term describing the technique of dual coating with both a layer of gamma ferric oxide and a layer of chromium dioxide. Also, an intermediate-level bias position used only for ferrichrome tapes.
Fetch – Reading an instruction from memory.
FF – See Full Field.
FFT (Fast Fourier Transform) – A mathematical means of converting
time domain information to frequency domain information.
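A minimal radix-2 implementation shows the time-to-frequency conversion. This is pure Python for illustration; production code would use an optimized library:

```python
import cmath
import math

def fft(x):
    """Radix-2 Cooley-Tukey FFT: turns N time-domain samples (N a power of two)
    into N complex frequency-domain bins."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

# One cosine cycle across 8 samples concentrates its energy in bins 1 and 7.
samples = [math.cos(2 * math.pi * t / 8) for t in range(8)]
spectrum = fft(samples)
```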
FGS (Fine Grain Scalability) – A tool that allows small quality steps
by adding or deleting layers of extra information. It is useful in a number
of environments, notably for streaming purposes but also for dynamic
(statistical) multiplexing of pre-encoded content in broadcast environments.
FH – Horizontal line frequency; 15,734 lines/sec (Hz) for NTSC (525 lines x 29.97 Hz).
Fiber Bundle – A group of parallel optical fibers contained within a common jacket. A bundle may contain from just a few to several hundred fibers.
Fiber Channel – See Fibre Channel.
Fiber Optics – See Optical Fiber.
Fiber-Optic Cable – “Wires” made of glass fiber used to transmit video, audio, voice or data, providing vastly wider bandwidth than standard coaxial cable.
Fibre Channel – A high speed data link planned to run up to 2 Gbps on a
fiber optic cable. A number of manufacturers are developing products to
utilize the Fiber Channel-Arbitrated Loop (FC-AL) serial storage interface
at 1 Gbps so that storage devices such as hard disks can be connected.
Supports signaling rates from 132.8 Mbps to 1,062.5 Mbps, over a mixture of physical media including optical fiber, video coax, miniature coax,
and shielded twisted pair wiring. The standard supports data transmission
and framing protocols for the most popular channel and network standards
including SCSI, HIPPI, Ethernet, Internet Protocol, and ATM.
Field – a) In interlaced scan systems, the information for one picture is
divided up into two fields. Each field contains one-half of the lines required
to produce the entire picture. Adjacent lines in the picture are in alternate
fields. b) Half of the horizontal lines (262.5 in NTSC and 312.5 in PAL)
needed to create a complete picture. c) One complete vertical scan of an
image. In a progressive scanning system, all of the scanning lines comprising a frame also comprise a field. d) An area in a window in which you
can type text. e) A television picture is produced by scanning the TV screen
with an electron beam. One complete scan of the screen is called a field.
Two fields are required to make a complete picture, which is called a
frame. The duration of a field is approximately 1/60 of a second in NTSC
and 1/50 or 1/60 of a second in PAL. f) One half of a complete interlaced
video picture (frame), containing all the odd or even scanning lines of
the picture.
Field Alias – An alias caused by interlaced scanning. See also Interlace.
Field Blanking – Refers to the part of the signal at the end of each field
that makes the vertical retrace invisible. Also called vertical blanking.
Field DCT Coding – Discrete cosine transform coding is where every
block consists of lines from one field. The chrominance blocks in the 4:2:0
format must never be coded by using field DCT coding, but it is allowed
to use field based prediction for this type of block.
Field Dominance – When a CAV laserdisc is placed in the still frame
mode, it continuously plays back two adjacent fields of information. There
are no rules in the NTSC system stating that a complete video picture has
to start on field 1 or field 2. Most of the video in this program is field 1
dominant. There are two sections of the disc that are field 2 dominant. In
the case of film translated to video, the start of a complete film picture
changes from field 1 to field 2 about 6 times a second. There is a code in
the vertical interval of the disc that tells the player on which field it can
start displaying each of the disc’s still frames.
Field Frequency – The rate at which one complete field is scanned,
normally 59.94 times a second in NTSC or 50 times a second in PAL.
Field Picture – A picture in which the two fields in a frame are coded
independently. Field pictures always come in sets of two fields, which are
called top field and bottom field, respectively. When the first field is coded
as a P- or a B-picture, the second picture must be coded in the same
manner; however, if the first field is coded as an I-picture, the second field
may be coded as either an I-picture or a P-picture (that is predicted from
the first field).
FIFO (First-In-First-Out) – a) A memory structure in which data is
entered at one end and removed from the other. A FIFO is used as a buffer
to connect two devices that operate asynchronously. b) A storage device
(parallel shift register) used to buffer asynchronous data, where the
first data stored is the first data read out.
FIFOs are used to store video and act as “rubber-band” type buffers to
keep a steady video stream where memory and system clock speeds do
not match. FIFOs have less delay than standard shift registers, as input
and output are controlled by separate clocks.
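A software FIFO can be sketched with a double-ended queue (Python; illustrative only): the producer appends at one end while the consumer removes from the other.

```python
from collections import deque

fifo = deque()

# Producer side: write samples to the tail (e.g. on the input clock).
for sample in [10, 20, 30]:
    fifo.append(sample)

# Consumer side: read from the head (e.g. on an independent output clock).
first_out = fifo.popleft()
print(first_out)  # 10 -- the first value stored is the first read out
```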
Field Rate – Number of fields per second.
FIG (Facial Animation Parameters Interpolation Graph)
Field Time Linear Distortions – Distortions involving signals in the 64
µs to 16 ms range. Field time distortions cause field-rate tilt in video
signals. The error is expressed in IRE or as a percentage of a reference
amplitude, which is generally the amplitude at the center of the line bar.
These distortions will cause top to bottom brightness inaccuracies in large
objects in the picture. They can be measured with either a window signal
or a field square wave. See Linear Distortions.
Figure-8 Microphone – A microphone (usually a ribbon type) whose
sensitivity is greatest to front and rear, and weakest to both sides.
Field Period – The reciprocal of twice the frame rate.
File Set – A collection of files and directories.
File – A container in which you store information such as text, programs,
or images.
File System – A hierarchy of directories and files. Directories contain
other directories and files; files cannot contain directories. The root (/)
directory is at the top of the hierarchy. See also Format.
Fill – The video information that replaces a “hole” (video information) cut
in the video picture by the key signal.
Fill (Insert) Video – A video signal which replaces a “hole” (video information) cut in background video by a key source.
Fill Bus – A separate bus or buses from which fill videos can be selected
independently from the key source cutting the hole.
Fill Light – Fill lights, commonly referred to as “scoops”, provide a
soft-edged field of light used to provide additional subject illumination to
reduce harsh shadows or areas not highlighted by the key light.
Filled Clip – A segment of a sequence that contains no audio or video
information. Filler can be added to the source monitor (or pop-up monitor)
and edited into a sequence. See also Filler Proxy.
Field Time Waveform Distortions – See Field Time Linear Distortions.
Filled Key – A key effect in which the key source image is different from
the foreground image. Areas not keyed (that is, not made transparent) in
the key source image are filled with the corresponding areas of the foreground image.
Field, Depth of – a) The range of distance in subject space within which
a lens (or a system) provides an image that reproduces detail with an
acceptably small circle of confusion (acceptable focus), usually small
enough for subjective evaluation as a “point”.
Tables are calculated for lenses as a function of optical aperture and the
subject distance at which they are focused. Regrettably, these calculations
are strictly geometric (ignoring the possibility of diffraction effects, of all
optical aberrations, and of possible differing contributions to focal length
from different annuli of the optical system). Thus, the tables are at times
overly optimistic. b) Depth of field for a given imaging system decreases
with increasing optical aperture of that system, and decreases as the
distance to the subject decreases. A “maximum acceptable” diameter for
the “circle of confusion” may depend upon the resolution capabilities of
the light-sensitive receptor (electronic or photographic) and of the system
within which it is functioning. Quantitative measurements for actual
imaging systems may be made on an optical bench. Practical determinations are made from subjective examination of the actual images in the
system of interest.
Filler Proxy – The result of a composition specifying media to be played
for the filler clips in each track.
Film Chain – a) Projectors, multiplexers and cameras, connected for the
purpose of transferring film to video. b) A device that transfers a film
image to a video image. It is also known as a telecine chain.
Film Loop – A piece of film, quite short, which is to be played repeatedly.
www.tektronix.com/video_audio 91
Film Recorder – A device for converting digital data into film output.
Continuous tone recorders produce color photographs as transparencies,
prints or negatives.
Film Timecode – Timecode added to the film negative during the film
shoot via a film timecode generator. Film timecode numbers are synced to
the film key numbers on the dailies during the telecine transfer process. A
special key link reader is required for viewing the film timecode.
Filter – A device used to remove or pass certain frequencies from a
signal. Low pass filters pass the low frequency content of a signal while
high pass filters pass the high frequency content. A bandpass filter passes
frequencies within a certain “band”.
Filter Artifacts – Distortions introduced by filters. The most common
visual artifacts introduced by filters are reduced resolution and ringing.
Filter, Brick Wall – A low-pass filter with a steep cut-off (such as 20
dB/octave or greater), such that a negligible amount of higher frequency
information passes. The filter typically has uniform group delay.
First-Frame Analysis – A transparency technique wherein the first frame
of the video file is a dummy frame that supplies the color or range of colors to be rendered as transparent: the color of the chroma-key background, for example. See Transparency, Transparency Frame.
Fit to Fill – An insert edit where an incoming source clip replaces an
existing segment (or gap) in the record clip. A fit to fill edit functions like a
swap shot edit except that the edit sequence does not ripple. If the source
clip has a different length than the segment it replaces, the source clip is
shortened or lengthened proportionally to fit the duration of the replaced segment.
FITS (Functional Interpolating Transformation System) – A format
that contains all data used to design and assemble extremely large files in
a small, efficient mathematical structure.
Five-Step Staircase – Test signal commonly used to check luminance
gain linearity.
Filter, Gaussian – A low-pass filter providing a gradual attenuation of
the higher frequencies. Strictly the attenuation should follow the curve
V=e^(-af^2). But the term is also applied to attenuation functions that
only qualitatively resemble the precise power function.
Filter, Optical – In addition to the familiar optical filters for modifying
spectral energy distribution, and thereby color rendition, optical filters are
also produced as low-pass filters for spatial detail in an optical image,
eliminating high-frequency information that would exceed the Nyquist limit
of the system and produce excessive aliasing. Many of these filters are
cut from optically birefringent crystals and function by providing multiple
images slightly displaced one from another so that fine detail is blurred
(i.e., low-pass filtered).
Filterbank – A set of bandpass filters covering the entire media frequency range.
Fixed Focal Length Lens – A lens with a predetermined fixed focal
length, a focusing control and a choice of iris functions.
Filtering – A process used in both analog and digital image processing
to reduce bandwidth. Filters can be designed to remove information content such as high or low frequencies, for example, or to average adjacent
pixels, creating a new value from two or more pixels.
Fixed Rate – Information flow at a constant volume over time. See CBR.
Finite Impulse Response Filter (FIR) – A digital filter that is, in general,
better than analog filters, but also more complex and expensive. Some
specialized filter functions can only be accomplished using an FIR.
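As an illustrative sketch (assuming Python with NumPy; not from the glossary), a 5-tap moving-average FIR low-pass filter is just a convolution of the input with a finite set of coefficients:

```python
import numpy as np

# Five equal taps: each output sample is the average of the current
# input sample and the four before it (a finite impulse response).
taps = np.ones(5) / 5.0

# A step input; the FIR filter smooths (low-passes) the transition.
x = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1], dtype=float)
y = np.convolve(x, taps)[: len(x)]

print(y)  # the step now ramps up over five samples
```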
Flag – a) A variable which can take one of only two values. b) Information
bit that indicates some form of demarcation has been reached, such as
overflow or carry. Also an indicator of special conditions such as interrupts.
FIP (Forward Interaction Path)
Flags – Menu functions other than the X, Y or Z parameters which turn
on/off or enable a selection of one or more system conditions.
FIR – See Finite Impulse Response Filter.
FireWire (IEEE P1394) – FireWire is a special high-speed bus standard
capable of over 100 megabits/sec sustained data rate.
Firmware – Program stored in ROM. Normally, firmware designates any
ROM-implemented program.
First Play PGC – This Program Chain (PGC) is described in the Video
Manager Information table, and has no corresponding video objects (VOB).
The First Play PGC is executed at initial access, e.g. just after disc loading.
Fixed-Point Representation – Number representation in which the
decimal point is assumed to be in a fixed position.
Flanging – Another name for phasing. Originally, the method of phasing
where phase was varied by resting your thumb on the flanges of the reel
to slow it down.
Flash – Momentary interference to the picture of a duration of approximately one field or less, and of sufficient magnitude to totally distort the
picture information. In general, this term is used alone when the impairment is of such short duration that the basic impairment cannot be recognized. Sometimes called “Hit”.
Flash Analog to Digital Converter – A high speed digitizing device
based on a bank of analog comparators. The analog value to be digitized is
the input to one side of the comparator bank. The other input of each
comparator is tied to its own tap of a resistor ladder. The input voltage
will be somewhere between the top and bottom voltages of the resistor
ladder, and each comparator outputs a high or a low based on the
comparison of the input voltage to its resistor ladder voltage. This string
of 1s and 0s is converted to the binary output number.
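The comparator-bank structure can be modeled in a short sketch (Python; illustrative only): the comparators produce a thermometer code whose count of 1s is the binary output.

```python
def flash_adc(vin, vref=1.0, bits=3):
    """Model a 3-bit flash converter built from 2**bits - 1 comparators."""
    levels = 2 ** bits
    # Resistor-ladder taps at 1/8, 2/8, ..., 7/8 of the reference voltage.
    taps = [vref * i / levels for i in range(1, levels)]
    # Each comparator outputs 1 when the input exceeds its tap,
    # giving a "thermometer code" (a run of 1s followed by 0s).
    thermometer = [1 if vin > tap else 0 for tap in taps]
    # The number of 1s is the binary output code.
    return sum(thermometer)

print(flash_adc(0.6))   # 4 -- 0.6 V of a 1 V range, binary 100
print(flash_adc(0.99))  # 7 -- near full scale
```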
Flash Frame – After a long, complex piece is edited, small bits of video
might be accidentally left in a sequence. When the Timeline is zoomed to
100 percent, these small, unwanted, pieces might not be visible. An editor
can find these bits using the Find Flash Frame command.
Flash Memory – Nonvolatile, digital storage. Flash memory has slower
access than SRAM or DRAM.
FlashPix – A multi-resolution image format in which the image is stored
as a series of independent arrays. Developed by Kodak, Hewlett-Packard,
Live Picture, Inc. and Microsoft and introduced in June 1996.
Flat Field – As used herein, the entire area viewed by a television camera
with the viewed area being uniformly white or any single specified color or
any shade of gray.
Flat Shading – A polygon rendered so that its interior pixels are all the
same color has been rendered with “flat” shading. An object represented
by polygons that is rendered with flat shading will look distinctly faceted.
No highlights or reflections are visible.
Flatten – The process of converting a Macintosh file into a self-contained,
single-forked file so that it is compatible with the Windows environment. See
Self-Contained, Single-Forked.
Flexibility Layer – The MPEG-4 Systems Layer that specifies how some
parts of the MPEG-4 terminal can be configured or downloaded. Two
modes are identified in this layer: the non-flexible mode and the flexible mode.
Flexible Mode – The configuration of an MPEG-4 terminal in which the
capability to alter parameters or algorithms for the processing of audiovisual objects is achieved by the transmission of new classes or scripts.
FlexMux Channel (FMC) – A label to differentiate between data
belonging to different constituent streams within one FlexMux stream.
FlexMux Entity – An instance of the MPEG-4 system resource that
processes FlexMux Protocol Data Units (PDUs) associated with one FlexMux stream.
FlexMux Layer (FML) – A logical MPEG-4 Systems Layer between the
Elementary Stream Layer and the TransMux Layer used to interleave one or
more elementary streams, packetized in Adaptation Layer protocol data units,
into one FlexMux stream.
FlexMux Packet – The smallest data entity managed by the FlexMux tool
consisting of a header and a payload.
FlexMux Protocol Data Unit (FlexMux-PDU) – The smallest protocol
unit of a FlexMux stream exchanged between peer FlexMux entities. It consists of FlexMux-PDU Header and FlexMux-PDU Payload. It carries data
from one or more FlexMux channel(s).
FlexMux Protocol Data Unit Header (FlexMux-PDU Header) –
Information preceding the FlexMux-PDU payload. It identifies the FlexMux
channel(s) to which the payload of this FlexMux-PDU belongs.
FlexMux Stream – A sequence of FlexMux packets associated with one or
more FlexMux channels flowing through one TransMux channel.
Flicker – a) Flicker occurs when the refresh rate of the video is too low
and the light level on the display begins to decrease before new information is written to the screen to maintain the light level. To prevent the
human eye from seeing flicker, the screen refresh rate needs to be at least
24 frames per second. b) A rapid visible change in brightness, not part of
the original scene. See also Flicker Frequency, Fusion Frequency, Judder,
Large-Area Flicker, and Twitter.
Flicker Filter – Video data from a VGA is not interlaced. This data must
be converted into interlaced format for display on a TV. If every second line
of the non-interlaced data is discarded, flicker may occur if, for example,
video information is contained in just one noninterlaced line. Flicker will
also be perceptible at the top and bottom of multi-line objects. A flicker
filter overcomes these problems by computing a weighted average of two
or three adjacent lines (noninterlaced) for each line of output (interlaced).
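A sketch of the weighted-average idea (Python with NumPy; illustrative only), using a common 1/4-1/2-1/4 kernel over three adjacent non-interlaced lines:

```python
import numpy as np

WEIGHTS = (0.25, 0.5, 0.25)  # taps for lines n-1, n, n+1

def flicker_filter_line(frame, n):
    """Return filtered line n of a (lines x pixels) luma array."""
    above = frame[max(n - 1, 0)]
    below = frame[min(n + 1, frame.shape[0] - 1)]
    return WEIGHTS[0] * above + WEIGHTS[1] * frame[n] + WEIGHTS[2] * below

# A bright feature confined to a single non-interlaced line...
frame = np.zeros((5, 4))
frame[2] = 100.0

# ...now contributes to adjacent output lines, so both fields carry it.
print(flicker_filter_line(frame, 1))  # [25. 25. 25. 25.]
print(flicker_filter_line(frame, 2))  # [50. 50. 50. 50.]
```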
Flicker Frequency – The minimum rate of change of brightness at which
flicker is no longer visible. The flicker frequency increases with brightness
and with the amount of the visual field being stimulated. In a recent study,
a still image flashed on and off for equal amounts of time was found to
have a flicker frequency of 60 flashes per second at a brightness of 40
foot lamberts (fL) and 70 at 500. Television sets generally range around
100 fL in peak brightness (though some new ones claim over 700). The
SMPTE recommends 16 fL for movie theater screens (though this is measured without film, which reduces the actual scene brightness by at least
50 percent). One reason for interlaced scanning is to raise the rate at which
television’s pictures flash to the flicker frequency, without increasing bandwidth.
Flip – Special effect in which the picture is either horizontally or vertically reversed.
Floating – Logic node that has no active outputs. Three-state bus lines,
such as data bus lines, float when no devices are enabled.
Floating-Point Representation – Technique used to represent a large
range of numbers, using a mantissa and an exponent. The precision of the
representation is limited by the number of bits allocated to the mantissa.
See Mantissa and Exponent.
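A short illustration (Python; not part of the original glossary): `math.frexp` splits a value into exactly this mantissa/exponent pair, and the fixed mantissa width limits precision.

```python
import math

# x = mantissa * 2**exponent, with the mantissa normalized to [0.5, 1.0).
x = 6.5
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)  # 0.8125 3, since 0.8125 * 2**3 == 6.5

# Precision is limited by the mantissa width: a Python float carries a
# 53-bit mantissa, so adding 1.0 to 2**53 is lost to rounding.
print(float(2**53) + 1.0 == float(2**53))  # True
```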
Floppy Disk – Mass-storage device that uses a flexible (floppy) diskette to
record information. See Disk.
Flowchart or Flow Diagram – Graphical representation of program logic.
Flowcharts enable the designer to visualize a procedure. A complete flowchart leads directly to the final code.
FLSD (Fixed Linear Spline Data) – The different modes used to animate
a value, for example, position, color, or rotation.
Fluid Head – Refers to a tripod mount that contains lubricating fluid which
decreases friction and enables smooth camera movement.
Flutter – Distortion which occurs in sound reproduction as a result of
undesired speed variations during recording or reproducing. Flutter
occurring at frequencies below approximately 6 Hz is termed “wow”.
Flux – Magnetic field generated by a record head, stored on magnetic
tape, and picked up by the playback head. Also the magnetic field that
exists between the poles of a magnet.
Flux Transition – A 180 degree change in the flux pattern of a magnetic
medium brought about by the reversal of the magnetic poles within it.
Flux Transition Density – Number of flux transitions per unit of track length.
Fly-Back – See Horizontal Retrace.
Flying Erase Head – The erase head mounted on the spinning (flying) video
head drum. Facilitates smooth, seamless edits whenever the camcorder
recording begins. Without a flying erase head, a video “glitch” may occur at
scene transitions.
Flying Head – A video head that engages when the video deck is on
“pause”, providing a clear still-frame image.
Fly-Through – A fly-through is a type of animation where a moving
observer flies through a seemingly stationary world.
FM – See Frequency Modulation.
FM Recording – The data signal is used to modulate the frequency of a
“carrier” having a frequency much higher than any spectral component of
the data signal. Permits the recording of DC or very low signal frequencies.
FM-FM – Dual carrier FM coded discrete stereo transmissions, analogue.
Can be used for bi-lingual operation under user selection, but no auto-selection is available. Audio characteristics are better than standard mono.
Font – A style of type. Many character generators offer the user a menu of
several fonts.
Foot Candles – A measure of the amount of light falling on an object (its
illumination). This is a measure only of the light energy that can be seen by
the human eye (becoming an obsolete unit; replaced by the Lux).
1 foot candle = 1 lumen per square foot
Foot Lamberts – A measurement of the brightness of an object. If 100
foot candles are illuminating a 60% white chip, then its brightness will be
60 foot lamberts, regardless of viewing distance. Again, remember that
brightness is measured over the same energy response of a human eye
(becoming obsolete unit; replaced by the Nit).
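The 60% white chip example above reduces to one multiplication (Python; illustrative only):

```python
def foot_lamberts(illumination_fc, reflectance):
    """Brightness = illumination (foot candles) x surface reflectance."""
    return illumination_fc * reflectance

print(foot_lamberts(100, 0.60))  # 60.0 foot lamberts, as in the example
```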
Footage Encoder Time Code Generator – An electronic device which
takes the input from a reader of keycode numbers, decodes this information and correlates the numbers with the SMPTE time code it generates.
These data, along with 3:2 pull-down status of the transfer, footage count,
and audio time code (if applicable) are made available for window burn-ins,
VITC-LTC recording and output to a computer.
Foot-Candela – An illumination light unit used mostly in American CCTV
terminology. One foot-candle equals roughly ten times (more precisely,
10.76 times) the illumination value in lux.
Footprint – Area on earth within which a satellite’s signal can be received.
Forbidden – The term forbidden when used in the clauses defining the
coded bit stream indicates that the value shall never be used. This is
usually to avoid emulation of start codes.
FMV – See Full Motion Video.
Forbidden Value – An excluded value in the coded bit stream. A value
that is not allowed to appear in the bit stream.
F-Number – In lenses with adjustable irises, the maximum iris opening
is expressed as a ratio (focal length of the lens)/(maximum diameter of
aperture). This maximum iris will be engraved on the front ring of the lens.
Forced Activation Button – Menu buttons that automatically perform the
specified action as soon as the button has been highlighted on the menu.
Focal Length – The distance between the secondary principal point in the
lens and the plane of the imaging device. The longer the focal length, the
narrower is the angle of view.
Focus – Adjustment made to the lens, designed to create a sharper, more
defined picture.
Focusing Control – A means of adjusting the lens to allow objects at
various distances from the camera to be sharply defined.
Foldover – Tape that has folded over resulting in the oxide surface facing
away from the heads.
Foley – Background sounds added during audio sweetening to heighten
realism, e.g. footsteps, bird calls, heavy breathing, short gasps, etc.
Forced Display – A DVD feature that forces the display of a sub-picture
regardless of whether or not the user wanted the sub-picture to be displayed. This would be used, for instance, in an English movie in which
there were non-English words spoken and it was desired that a translation
be provided even if the subtitle system was turned off.
Forced Selected Button – Menu button that is automatically selected
when the menu is displayed.
Forced Updating – a) The process by which macroblocks are intra coded
from time-to-time to ensure that mismatch errors between the inverse
DCT processes in encoders and decoders cannot build up excessively.
b) The recurrent use of I-coding to avoid build-up of errors between the
inverse DCT processes in encoders and decoders.
Following (or Trailing) Blacks – A term used to describe a picture
condition in which the edge following a white object is overshaded toward
black. The object appears to have a trailing black border. Also called
“trailing reversal”.
Foreground (FGND) – May be thought of as the front layer of video in a
picture. Also used to describe the insert video (on 4100 series) of a key.
Following (or Trailing) Whites – A term used to describe a picture
condition in which the edge following a black or dark gray object is overshaded toward white. The object appears to have a trailing white border.
Also called “trailing reversal”.
Form – A window that contains buttons that you must click and/or editable
fields that you must fill in.
Format – a) The configuration of signals used for interconnecting equipment in a specified system. Different formats may use different signal
composition, reference pulses, etc. A variety of formats are used to
record video. They vary by tape width (8 mm, 1/2”, 3/4”, 1”), signal form
(composite, Y/C, component), data storage type (analog or digital) and
signal standard (PAL, NTSC, SECAM). b) For data storage media (hard
disks, floppies, etc.), the process of initializing the media prior to use.
Formatting effectively deletes any data that was previously on the media.
See Format Disk.
Format Conversion – The process of both encoding/decoding and resampling of digital rates to change a digital signal from one format to another.
Format Converter – A device that allows the reformatting of a digital data
stream originating from one sampling structure (lines per frame, pixels
per line) into a digital data stream of another sampling structure for the
purposes of recording or passing the original data stream through distribution devices designed to accommodate the latter structure. Since the data
still represents the original sampling structure, this is not the same as
standards conversion.
Format Disk – The process of preparing a disk for data storage by determining where data is to be placed and how it is to be arranged on disk.
Formatting – The transfer and editing of material to form a complete
program, including any of the following: countdown, test patterns, bars
and tone, titles, credits, logos, space for commercial, and so forth.
Forward Compatibility – A decoder is able to decode a bit stream
coming from an encoder of a previous generation. A new coding standard
is forward compatible with an existing coding standard if new decoders
(designed to operate with the new coding standard) continue to be able to
decode bit streams of the existing coding standard.
Forward Motion Vector – Information that is used for motion compensation from a reference picture at an earlier time in display order.
Forward Prediction – Prediction from the past reference vop. See
Bidirectional Prediction.
Fourier Transformation – Mathematical transformation of time domain
functions into frequency domain.
Fractal Compression – A global compression method that exploits
highly correlated (self-similar) data in an image. It is resolution-independent.
Fractional T1 – Part of the bandwidth of a T1 system.
Fragile Watermark – A watermark designed to be destroyed by any form
of copying or encoding other than a bit-for-bit digital copy. Absence of the
watermark indicates that a copy has been made.
Fragmentation – The scattering of data over a disk caused by successive
recording and deletion operations. Generally this will eventually result in
slow data recall, a situation that is not acceptable for video recording or
replay. The slowing is caused by the increased time needed to randomly
access data. With such stores, defragmentation routines arrange the data
(by copying from one part of the disk to another) so that it is accessible
in the required order for replay. Clearly any change in replay, be it a
transmission running order or the revision of an edit, could require further
defragmentation. True random access disk stores, able to play frames in
any order at video rate, never need defragmentation.
Frame – a) A frame consists of all the information required for a complete
picture. For interlaced scan systems, there are two fields in a frame.
For progressive video, these lines contain samples starting from one time
instant and continuing through successive lines to the bottom of the frame.
b) A complete picture composed of two fields. In the NTSC system, 525
interlaced horizontal lines of picture information in 29.97 frames per
second. In the PAL system, 625 interlaced horizontal lines of picture information in 25 frames per second. c) The metal cabinet which contains
the switcher’s circuit boards. d) One complete video image, containing
two fields. There are 30 frames in one second of NTSC video.
Frame Accurate – The ability to start, stop and search for specific frames
of video so that edits fall on exactly the intended frame. Frame accurate
editing requires the use of a timecode system.
FPLL (Frequency- and Phase-Locked Loop)
Frame Buffer – a) A block of digital memory capable of buffering a frame
of video. The amount of memory required for a frame buffer is based on
the video being stored. For example to store a 640 x 480 image using the
RGB color space with eight bits per color, the amount of memory required
would be: 640 x 480 x 3 = 921,600 bytes. b) A frame buffer is a digital
frame store, containing a large chunk of memory dedicated to pixel memory, at least one complete frame’s worth. All the pixels in the buffer have the
same depth. Each bit of depth is called a bit plane. Frame buffers can use
the bit planes in a variety of ways. First, a pixel’s bits can store the RGB
values of colors. This simple method is called full-color mode. In full-color
mode, it is common to refer to the red plane, or the blue or green plane,
meaning the bits reserved for specifying the RGB components of the pixel.
Full-color systems may also have an alpha channel, which encodes the
transparency of each bit. The alpha channel is like a matte or key of the
image. Alternately, the bits can store a color number, which selects the
final color from a color map. Finally, some bit planes may be reserved for
use as overlay planes.
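The memory calculation in a) generalizes to a one-line helper (Python; illustrative only):

```python
def frame_buffer_bytes(width, height, bytes_per_pixel=3):
    """Bytes needed for one frame; 3 bytes/pixel = 8-bit-per-color RGB."""
    return width * height * bytes_per_pixel

print(frame_buffer_bytes(640, 480))     # 921600, the example above
print(frame_buffer_bytes(640, 480, 4))  # 1228800 with an 8-bit alpha plane
```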
FPS (Frames Per Second) – A measure of the film or video display rates.
Film is 24 FPS, NTSC is 30 FPS, PAL/SECAM is 25 FPS.
Frame Capture (Frame Grabber) – Taking one frame of video and
storing it on a hard drive for use in various video effects.
Fractals – Mathematically generated descriptions (images) which look like
the complex patterns found in nature (e.g., the shoreline and topographic
elevations of a land mass as seen from an aerial photograph). The key
property of fractals is self-similarity over different domain regions.
Frame DCT Coding – Frame DCT coding is where the complete frame of
the image is coded as a set of DCT blocks. In the case of interlace signals,
the fields are combined together and then coded as a single entity.
Four-Track or Quarter-Track Recording – The arrangement by which
four different channels of sound may be recorded on quarter-inch-wide
audio tape. These may be recorded as four separate and distinct tracks
(monophonic) or two stereo pairs of tracks. Tracks 1 and 3 are recorded
in the “forward” direction of a given reel, and Tracks 2 and 4 are recorded
in the “reverse” direction.
FP (Fixed Part)
FPGA (Field-Programmable Gate Array) – A programmable logic chip
(PLD) with a high density of gates. Containing up to hundreds of thousands
of gates, there are a variety of architectures. Some are very sophisticated,
including not only programmable logic blocks, but programmable interconnects and switches between the blocks. FPGAs are mostly reprogrammable
(EEPROM or flash based) or dynamic (RAM based). See also PLD.
Frame Doubler – A video processor that increases the frame rate
(display rate) in order to create a smoother-looking video display. Compare
to line doubler.
Frame Frequency – The rate at which a complete frame is scanned,
nominally 30 frames per second.
Frame Grabber – a) A device that enables the real-time capture of a
single frame of video. The frame is captured within a temporary buffer
for manipulation or conversion to specified file format. The buffers of
some frame grabbers are large enough to store several complete frames,
enabling the rapid capture of many images. A frame grabber differs from
a digitizer in that a digitizer captures complete sequential frames, so it
must use compression or acceleration or both to capture in real-time.
b) A device that “captures” and potentially stores one complete video
frame. Also known as Frame Storer.
Frame Offset – A way of indicating a particular frame within the group of
frames identified by the edge number on a piece of film. For example, a
frame offset of +12 indicates the twelfth frame from the frame marked by
the edgecode.
Frame Period – The reciprocal of the frame rate.
Frame Picture – A picture in which the two fields in a frame are merged
(interlaced) into one picture which is then coded.
Frame Pulse – A pulse superimposed on the control track signal. Frame
pulses are used to identify video track locations containing vertical sync.
Frame Rate – a) The rate at which frames of video data are scanned on
the screen. In an (M) NTSC system, the frame rate is 29.97 frames per
second. For (B, D, G, H, I) PAL, the frame rate is 25 frames per second.
b) The number of frames per second at which a video clip is displayed.
c) The rate at which frames are output from a video decoding device or
stored in memory. The NTSC frame rate is 30 frames/second while some
graphics frame rates are as high as 100 frames/second.
Frame Rate Conversion – The process of converting one frame rate to
another. Examples include converting the (M) NTSC frame rate of 29.97 frames
per second to the PAL frame rate of 25 frames per second.
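As a minimal sketch (not from this glossary), a naive drop/repeat converter can map each output frame to the nearest source frame by timestamp; real standards converters instead use motion-compensated interpolation:

```python
def convert_frame_indices(num_out_frames, src_rate=30000 / 1001, dst_rate=25.0):
    """Map each output frame to the nearest source frame by timestamp.
    A naive drop/repeat sketch; broadcast standards converters
    interpolate between frames instead."""
    return [round(i * src_rate / dst_rate) for i in range(num_out_frames)]

# One second of 25 fps output drawn from a ~29.97 fps source:
mapping = convert_frame_indices(25)
```

Because the source rate exceeds the destination rate, some source frames are simply skipped; converting in the other direction would repeat frames instead.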
Frame Relay – A network interface protocol defined by CCITT
Recommendation I.122 as a packet mode service. In effect it combines the
statistical multiplexing and port sharing of X.25 packet switching with the
high speed and low delay of time division multiplexing and circuit switching. Unlike X.25, frame relay implements no layer 3 protocols and only the
so-called core layer 2 functions. It is a high-speed switching technology
that achieves ten times the packet throughput of existing X.25 networks by
eliminating two-thirds of the X.25 protocol complexity. The basic units of
information transferred are variable length frames, using only two bytes
for header information. Delay for frame relay is lower than X.25, but it is
variable and larger than that experienced in circuit switched networks.
Frame Roll – A momentary vertical roll.
Frame Store – a) Term used for a digital full-frame temporary storage
device with memory for only one frame of video. b) An electronic device
that digitizes a TV frame (or TV field) of a video signal and stores it in
memory. Multiplexers, fast scan transmitters, quad compressors and even
some of the latest color cameras have built-in frame stores.
Frame Store Synchronizer – A time base corrector with full-frame
memory that can be used to synchronize two video sources.
Frame Switcher – Another name for a simple multiplexer, which can
record multiple cameras on a single VCR (and play back any camera in
full screen) but does not have a mosaic image display.
Frame Synchronizer – A digital buffer that, by storage, comparison of
sync information to a reference, and time release of video signals, can
continuously adjust the signal for any timing errors. A digital electronic
device which synchronizes two or more video signals. The frame synchronizer uses one of its inputs as a reference and genlocks the other video
signals to the reference’s sync and color burst signals. By delaying the
other signals so that each line and field starts at the same time, two
or more video images can be blended, wiped and otherwise processed
together. A TBC (Time Base Controller) takes this a step further by
synchronizing both signals to a stable reference, eliminating time base
errors from both sources. The Digital Video Mixer includes a frame
synchronizer and dual TBCs.
Frame Transfer (FT) – Refers to one of the three principles of charge
transfer in CCD chips. The other two are interline and frame-interline transfer.
Frame-Based 2D Animation – A two-dimensional animation technique
in which an object is moved from one position, size, and color to another.
Adobe After Effects, for example, uses keyframes to create frame-based
2D animation. One of the two main types of animation associated with
digital video. Compare Cell Animation.
Frame-Interline Transfer (FIT) – Refers to one of the three principles
of charge transfer in CCD chips. The other two are interline and frame transfer.
Framing – For multiplexed digital channels, framing is used as a control
procedure. The receiver can identify time slots of the subchannels by the
framing bits that were added in generating the bitstream.
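As an illustrative sketch (the framing pattern here is an assumption, not taken from any particular standard), a multiplexer can prepend a framing bit to each frame of interleaved subchannel bits, and the receiver can use it to locate the time slots:

```python
FRAME_SYNC = [0, 1]  # illustrative alternating framing pattern (assumption)

def mux(subchannels):
    """Interleave equal-length subchannel bitstreams into frames.
    Each frame = one framing bit + one bit from each subchannel."""
    frames = []
    for t in range(len(subchannels[0])):
        frame = [FRAME_SYNC[t % len(FRAME_SYNC)]]
        frame += [ch[t] for ch in subchannels]
        frames.append(frame)
    return frames

def demux(frames, n_channels):
    """Recover subchannels by skipping the framing bit in each frame."""
    return [[f[1 + c] for f in frames] for c in range(n_channels)]
```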
Framing Tool – One of the tools of the Protection Layer used to segment
the content of the LP-SDU in elements of a given length that can be
Franchise – An agreement between a CATV operator and the governing
cable authority. A franchise agreement is essentially a license to operate.
Franchising Authority – Governmental body (city, county, or state)
responsible for awarding and overseeing cable franchises. In New Jersey,
the Local Franchising Authority is the Board of Public Utilities (BPU).
Free-Run – Process of allowing a digital circuit (typically a microprocessor) to run without feedback (open-loop). This is done to stimulate other
devices in the circuit in a recurring and predictable manner.
Freeze Frame – Special effect in which the picture is held as a still
image. It is possible to freeze either one field or a whole frame. Freezing
one field provides a more stable image if the subject is moving; however,
the resolution of the video image is half that of a full frame freeze. Digital
freeze frame is one special effect that could be created with a special
effects generator or a TBC (Time Base Controller). The Digital Video Mixer
includes this feature.
French Proposals – Three HDEP proposals, two closely related, suggested by a number of French organizations. For countries with a field rate of
50 fields per second, there would be 1200 scanning lines, 1150 of them
active. For countries with a field rate of 59.94 fields per second, there
would be 1001 scanning lines, 970 of them active. Both systems would
have identical line rates (60,000 lines per second) and bandwidths (65
MHz luminance), and would be progressively scanned. This correspondence
would allow a great deal of common equipment, as Recommendation 601
does for digital component video. The third proposal is for a worldwide
standard based on 1050 scanning lines (970 active), 2:1 interlace, and
100 fields per second.
Frequency – The number of cycles of a signal that occur per second,
measured in hertz (the repetition rate). In electronics, almost invariably the
number of times a signal changes from positive to negative (or vice versa)
per second. Only very simple signals (sine waves) have a single constant
frequency; the concept of instantaneous frequency therefore applies to any
transition, taken to be the frequency of a sine wave making the same
transition. Images have spatial frequencies, the number of transitions from
dark to light (or vice versa) across an image, or per degree of visual field.
Frequency Allocation Table – List of which frequencies can be used
for transmission of different signals in the U.S. It may require revision
for certain ATV (Advanced TV) schemes. A similar function is performed
internationally by the International Frequency Registration Board (IFRB),
like the CCIR, part of the International Telecommunications Union.
Frequency Domain – A concept that permits continuous functions in
the space or time domain to be mapped into a representation with linear
properties in frequency coordinates. It simplifies the application of many
mathematical operations; for example, spectrum analysis can be performed
on the sampled signal.
Frequency Interleaving – The process by which color and brightness
signals are combined in NTSC.
Frequency Modulation – a) Modulation of a sine wave or “carrier” by
varying its frequency in accordance with amplitude variations of the
modulating signal. b) Also refers to the North American audio service
broadcast over 88 MHz-108 MHz.
Frequency Multiplex – See Multiplex.
Frequency Response – The range of frequencies that a piece of equipment
can process; directly related to the system’s ability to uniformly transfer
signal components of different frequencies over the entire video spectrum
without affecting their amplitudes. This parameter is also known as
gain/frequency distortion or amplitude versus frequency response. The
amplitude variation may be expressed in dB, percent or IRE. The reference
amplitude (0 dB, 100%) is typically the white bar or some low frequency
signal component. Frequency response numbers are only meaningful if
they contain three pieces of information: the measured amplitude, the
frequency at which the measurement was made and the reference frequency.
There are a number of test signals that can be used including multiburst,
multipulse, a swept signal or sin (x)/x.
Frequency Response Curve – The curve relating the variation in output
with frequency of a piece of equipment or magnetic tape when the input is
kept constant.
Frequency Response Rolloff – A distortion in a transmission system
where the higher frequency components are not conveyed at their original
full amplitude. In video systems, this causes loss of color saturation.
Frequency Synthesizer – An electronic circuit that generates a number
of frequencies from a fixed-reference frequency. Some frequency
synthesizers generate only a relatively small number of frequencies;
others generate hundreds of different frequencies.
Fringing – The pickup of extra bass frequency signals by a playback
head when reproducing a signal recorded by a head with a wider track
configuration, such as playing a full track tape with a half-track head.
From Source – VTR or other device that is generating the video/audio
signal that is being dissolved or wiped away from.
Front Porch – The portion of the video signal between the end of active
picture time and the leading edge of horizontal sync. See Horizontal Timing.
Front-to-Back Ratio – The ratio between a cardioid microphone’s
sensitivity to sounds arriving from the front and from the rear, a measure
of its directionality.
FSM (Finite States Machine) – A finite states machine is a Markovian
source, meaning that the evolution after the time t depends only on the
machine state at the time t and the future inputs. In particular, the evolution
doesn’t depend on the sequence of inputs that brought the machine to its
current state.
FSS (Fixed Satellite Services) – Provides point-to-point and
point-to-multi-point satellite communications of voice, data and video
between fixed or stabilized earth stations. Major providers of space segment
include INTELSAT, PanAmSat Corporation, EUTELSAT, Telesat Canada
and GE Americom Communications, Inc.
FST (Fast Slant Transform) – Applied on image subblocks.
FT (Fixed Termination)
FTP (File Transfer Protocol) – A client-server protocol which allows
users to transfer files over a TCP/IP network. FTP is also the name for the
client program the user executes to transfer files. Though it was once the
only way to download files on the Internet, it has now been integrated into
many web browsers.
FTTC (Fiber to the Curb) – The installation of optical fiber to within a
thousand feet of the home or office.
FTTH (Fiber to the Home) – The installation of optical fiber from the
carrier directly into the home or office.
FUCE – Full compatible EDTV. A Hitachi ATV scheme filling a Fukinuki hole
for increased luminance detail, with recent proposed additions to increase
chroma detail.
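The spectrum analysis mentioned under Frequency Domain above can be sketched with a naive discrete Fourier transform (a toy illustration only; practical systems use FFT libraries):

```python
import cmath
import math

def dft(samples):
    """Naive DFT: maps a time-domain sequence into frequency coordinates.
    O(n^2); shown only to illustrate the frequency-domain mapping."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A sampled sine completing 2 cycles per window concentrates its
# energy in frequency bin 2 (and its mirror bin n-2):
n = 16
sine = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
spectrum = [abs(x) for x in dft(sine)]
```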
Fukinuki – Takahiko Fukinuki and the Fukinuki Hole named for him.
Fukinuki is a Hitachi researcher who proposed filling an apparently unused
portion of the NTSC spatio-temporal spectrum with additional information
that might be used for ATV. The signal that fills a Fukinuki hole is sometimes referred to as a Fukinuki subcarrier. It is extremely similar to the
color subcarrier and can cause an effect like cross-luminance under certain conditions.
Full-Color Mode – In full-color mode, each pixel contains its own color
values, and a full-color render takes about three times as long as a
color-mapped render. Anti-aliasing, transparency, and texture mapping
are possible only in this mode.
Full Field – All sampled points in the digital component signal, as opposed
to active picture (AP), which is all sampled points in the digital component
signal with the exception of the points between EAV and SAV.
Full Duplex – Sending data in both directions at the same time. Usually
higher quality than half duplex, but requires more bandwidth. In video
conferencing, full duplex is much more natural and usable. Cheap
speakerphones are half duplex, whereas more expensive ones are full duplex.
Full Field Signals – Signals with video on each line of active video.
These signals can only be used for out of service testing.
Full Field Testing – See Out of Service Testing.
Full Motion Video (FMV) – Video that plays at 30 frames per second
(NTSC) or 25 frames per second (PAL).
Full Track Recording – Recording monophonically on one track whose
width is essentially the same as the tape’s.
Fusion Frequency – The minimum rate of presentation of successive
images of a motion picture that allows motion to seem smooth, rather than
jerky. The fusion frequency is almost always lower than the flicker
frequency. As it applies to the rate at which images are presented, rather
than the rate at which they were shot, material that appears to be at or
above the fusion frequency when viewed at normal speed may be below it
when viewed in slow motion. Techniques to smooth motion presented at a
rate below the fusion frequency have been developed for such purposes as
computer-assisted animation; these are sometimes called in-betweening
techniques. See also Judder.
Future Reference Picture – A future reference picture is a reference
picture that occurs at a later time than the current picture in display order.
Future Reference VOP – A future reference VOP is a reference VOP that
occurs at a later time than the current VOP in display order.
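The computer-assisted in-betweening mentioned under Fusion Frequency can be sketched, in its simplest form, as linear interpolation between two key positions (a toy illustration, not an actual animation system):

```python
def in_between(start, end, n_tweens):
    """Generate n_tweens evenly spaced intermediate values between
    two key positions -- the simplest in-betweening technique."""
    step = (end - start) / (n_tweens + 1)
    return [start + step * (i + 1) for i in range(n_tweens)]
```

Presenting the key frames plus the generated in-betweens raises the effective presentation rate toward the fusion frequency.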
G.711 – This ITU recommendation defines an 8-bit A-law (European
companding) and µ-law (American companding) PCM audio format with
8 kHz sampling used in standard telephony. G.711 audio is also used in
H.320 video conferencing. It is a 64 kbps PCM speech coder for 3 kHz
bandwidth speech.
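The µ-law companding curve behind North American G.711 can be sketched as follows; note this is the continuous formula, whereas the standard itself specifies a piecewise-segmented 8-bit approximation of it:

```python
import math

MU = 255  # companding constant for North American (mu-law) G.711

def mu_law_compress(x):
    """Map a sample in [-1, 1] through the continuous mu-law curve,
    boosting low-level signals before uniform quantization."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```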
G.722 – This is an ITU-T recommendation which embraces 7 kHz audio
coding at 64 kbit/s. G.722 uses an adaptive differential PCM (ADPCM)
algorithm in two sub-bands, and is widely used for news and sports
commentary links. The sound quality is normally considered inferior
compared to MPEG audio coding, but has the advantage of low coding
delay in comparison with MPEG. Due to the low delay, and because of
the large installed base of G.722 equipment, the algorithm will continue
to be in service.
G.723.1 – Dual-rate speech coder with 5.3/6.3 kbps compressed bitrates.
It is a linear prediction analysis-by-synthesis coder using ACELP/MP-MLQ
excitation methods.
G.726 – This ITU-T recommendation is entitled “40, 32, 24, 16 kbit/s
adaptive differential pulse code modulation (ADPCM)”. It defines the
conversion between 64 kbit/s A-law or µ-law PCM audio and a channel
of the rates stated in the title, by using ADPCM transcoding.
G.728 – This ITU-T recommendation defines coding of speech at 16 kbit/s
based on code-excited linear prediction (CELP). The delay of about 2 ms
in G.728 is lower than other typical implementations of this type of coding.
G.728 audio is used in H.320 video conferencing.
G.729/G.729A – Conjugate structure-ACELP algorithm for 3 kHz speech
bandwidth input and 8 kbps coded bitstream. Used in simultaneous voice
and data (DSVD) applications.
G.7xx – A family of ITU standards for audio compression.
GA – See Grand Alliance.
Gain – a) Any increase or decrease in strength of an electrical signal.
Gain is measured in terms of decibels or number of times of magnification.
b) The ratio of output power to the input power for a system or component.
c) The amount of amplification of a circuit. The term gain is often used
incorrectly to denote volume and loudness which are psychological factors
which are the results of “gain”.
Gain Ratio Error – In a three wire interconnect CAV system, the gain of
one signal may be higher or lower than it should be because of
gain distortion caused by one channel. This will cause the ratio of signal
amplitudes to be incorrect. This error manifests itself as color distortions.
In some cases, errors in gain ratio will generate illegal signals (see the
discussion on Illegal Signals). The distorted signal may be legal within
its current format but could become illegal if converted into a different
component format.
Gain/Frequency Distortion – Distortion which results when all of the
frequency components of a signal are not transmitted with the same gain
or loss. A departure from “flatness” in the gain/frequency characteristic
of a circuit. Refer also to the Frequency Response discussion.
Galaxy Group – The group of companies proposing the Galaxy watermarking format. (IBM/NEC, Hitachi/Pioneer/Sony.)
Gamma – Since picture monitors have a nonlinear relationship between
the input voltage and brightness, the signal must be correspondingly
predistorted. Gamma correction is always done at the source (camera) in
television systems: the R, G, and B signals are converted to R^(1/γ), G^(1/γ),
and B^(1/γ). Values of about 2.2 are typically used for gamma. Gamma is a
transfer characteristic. Display devices have gamma (or at least CRTs do).
If you measure the actual transfer characteristic of a CRT used for either
television display or computer display, you will find it obeys a power law
Light = Volts^gamma
where gamma is 2.35 plus or minus 0.1. CRTs have values between 2.25
and 2.45, 2.35 is a common value. It is a function of the CRT itself, and
has nothing to do with the pictures displayed on it. CRT projectors are
different, green tubes are typically 2.2 while red is usually around 2.1 and
blue can be as low as 1.7. But there are no direct-view CRTs which have
values lower than 2.1. Pictures which are destined for display on CRTs
are gamma-corrected, meaning that a transfer characteristic has been
applied in order to try to correct for the CRT gamma. Users of TV cameras
have to accept the characteristic supplied by the manufacturer, except
for broadcasters who have adjustable camera curves (the video engineers
adjust the controls until they like the look of the picture on the studio
monitor in their area). Even so, no TV camera uses a true gamma curve,
they all use rather flattened curves with a maximum slope near black
of between 3 and 5. The higher this slope, the better the colorimetry but
the worse the noise performance.
Gamma Correction – a) The RGB data is corrected to compensate for
the gamma of the display. b) Historically, gamma correction was a
precompensation applied to the video signal at the camera to correct for
the nonlinearities of the CRT (i.e., power function of the electron gun) and,
as such, it was the inverse of the electron gun function. It is now widely
used, however, to describe “the total of all transfer function manipulations”
(i.e., including the departures from a true power law function), whether
inherent or intentionally introduced to act upon the video signal for the
purpose of reducing the bandwidth for signal processing, making the image
on the final display conform to preconceived artistic objectives, and/or
providing noise suppression, or even bit rate reduction. c) The insertion
of a nonlinear output-input characteristic for the purpose of changing the
system transfer characteristic. As this usage has grown, the IEEE definition
correlating gamma to an analytical function becomes optimistic. d) An
adjustment factor used to correct an image’s intensity when it is displayed.
Display devices can perform gamma correction but raster images can also
be gamma corrected with software prior to display.
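The camera-side precompensation described above can be sketched numerically; the exponent is one of the typical CRT values quoted in the Gamma entry:

```python
GAMMA = 2.35  # a typical direct-view CRT exponent (see Gamma above)

def gamma_correct(v):
    """Camera-side predistortion of a normalized component v in [0, 1]."""
    return v ** (1.0 / GAMMA)

def crt_light_output(v):
    """CRT transfer characteristic: light = volts ** gamma."""
    return v ** GAMMA

# The two stages cancel, so the end-to-end system is linear in light:
linear = crt_light_output(gamma_correct(0.5))
```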
Gamma Ferric Oxide – The common magnetic constituent of magnetic
tapes in the form of a dispersion of fine acicular particles within the
Gamma Table – A table of constants which functions as a nonlinear
amplifier to correct the electron gun drive voltages so that the CRT display
appears to be linear. Because the gamma function for each color is different in a typical CRT, different values for each color are usually contained in
the gamma table. This process is called Gamma Correction.
Gamma, Electronic – a) The exponent of that power law that is used to
approximate the curve of output magnitude versus input magnitude over
the region of interest. b) Video – The power function of the electron gun in a
CRT. It has become customary in video, as in photography, to extend the
meaning and to use gamma as a synonym for the complete transfer function regardless of curve shape. Note: In the electronics system, increasing
gamma decreases image contrast. c) Imaging Processing and Display –
Nonlinear processing is useful in many television systems as a means of
bandwidth limiting, and is normally applied at the camera. Given the predominance of CRT displays, the chosen exponent is related to that of the
electron gun (typically 2.2 for systems with 525/59.94 scanning, 2.8 for
systems with 625/50 scanning, and 2.22 for SMPTE 240M).
Gamma, Photographic – a) The slope of the transfer function: density
(log of reciprocal transmission) vs. log exposure. It is thus the power function correlating transmission to exposure. b) Gamma in the photographic
sense was originally applied specifically to the straight-line portion of the
transfer function. Only if all of the photographic densities corresponding
to light intensities in the scene lie within that straight-line portion of the
transfer function is gamma proportional to contrast. It is sometimes loosely
used to indicate either an average or a point slope of the transfer function.
Note: In the photographic system, increasing gamma increases image
contrast.
Gamut – The range of voltages allowed for a video signal, or a component
of a video signal. Signal voltages outside of the range (that is exceeding
the gamut) may lead to clipping, crosstalk, or other distortions.
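A hard gamut limiter can be sketched as a simple clamp; the voltage limits here are illustrative assumptions, since the legal range depends on the signal format:

```python
def clip_to_gamut(volts, lo=0.0, hi=0.7):
    """Clamp a component signal voltage to an assumed legal range."""
    return max(lo, min(hi, volts))
```

Clipping keeps out-of-gamut voltages from reaching downstream equipment, but as noted above clipping is itself a distortion; gentler limiters compress levels toward the boundary instead.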
Gang – Any combination of multiple tracks that are grouped. An edit that
is performed on one track is also performed on tracks that are ganged
Gap – The space between the pole pieces of a tape head.
GAP (Generic Access Profile) – The Generic Access Profile (GAP) is the
basic DECT profile and applies to all DECT portable and fixed parts that
support the 3.1 kHz telephony service irrespective of the type of network
accessed. It defines a minimum mandatory set of technical requirements to
ensure interoperability between any DECT GAP fixed part and portable part.
Gap Depth – The dimension of the gap measured in the direction perpendicular to the surface of a head.
Gap Length – The dimension of the gap of a head measured from one
pole face to the other. In longitudinal recording, the gap length can be
defined as the dimension of the gap in the direction of tape travel.
Gap Loss – The loss in output attributable to the finite gap length of the
reproduce head. The loss increases as the wavelength decreases.
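A commonly quoted sinc-function approximation of gap loss (an assumption brought in here, not stated in this glossary) shows the loss growing as the recorded wavelength approaches the gap length:

```python
import math

def gap_loss_db(gap_length, wavelength):
    """Approximate reproduce-head gap loss in dB:
    loss = -20 * log10(sin(pi*g/l) / (pi*g/l))."""
    x = math.pi * gap_length / wavelength
    return -20.0 * math.log10(math.sin(x) / x)
```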
Gap Scatter – The phenomenon of the gaps in a multitrack head not
being in a straight line.
Gap Smear – This is due to head wear and is the bridging or shorting out
of the record or reproduce gap as the result of flowing of the pole face
material in the direction of tape motion.
Gap Width – The dimension of the gap measured in the direction parallel
to the head surface and pole faces. The gap width of the record head
governs the track width. The gap widths of reproduce heads are sometimes
made appreciably less than those of the record heads to minimize
tracking errors.
Gatekeeper – In the H.323 world, the gatekeeper provides several important functions. First, it controls access to the network, allowing or denying
calls and controlling the bandwidth of a call. Second, it helps with address
resolution, making possible email type names for end users, and converting
those into the appropriate network addresses. They also handle call tracking and billing, call signaling, and the management of gateways.
Gateway – a) Gateways provide a link between the H.323 world and other
video conferencing systems. A common example would be a gateway to an
H.320 (ISDN) video conferencing system. b) Gateways provide functional
bridges between networks by receiving protocol transactions on a layer-by-layer basis from one protocol (SNA) and transforming them into comparable
functions for the other protocol (OSI). In short, the gateway provides a
connection with protocol translation between networks that use different
protocols. Interestingly enough, gateways, unlike the bridge, do not require
that the networks have consistent addressing schemes and packet frame
sizes. Most proprietary gateways (such as IBM SNA gateways) provide
protocol converter functions up through layer six of the OSI, while OSI
gateways perform protocol translations up through OSI layer seven. See
OSI Model.
Gauss – The metric unit of magnetic flux density equal to one Maxwell per
square centimeter.
GBR Format – The same signals as RGB. The sequence is rearranged
to indicate the mechanical sequence of the connectors in the SMPTE
GCR – See Ghost Cancellation Reference Signal.
G-DOTS – ITU Recommendations for speech coding standards.
GE (General Electric) – A proponent of the ACTV schemes.
General Parameter (GPRM) – GPRMs are used to store the user’s
operational history and to modify a player’s behavior. DVD-Video players
have 16 unique GPRMs. Each GPRM may store a fixed-length, two-byte
numerical value.
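A sketch of the register bank described above (the class and method names are illustrative, not part of the DVD-Video specification):

```python
class GPRMBank:
    """Sixteen two-byte general parameter registers, as described above.
    Behavior beyond the 0..65535 range check is an illustrative assumption."""

    def __init__(self):
        self.regs = [0] * 16

    def set(self, index, value):
        if not 0 <= value <= 0xFFFF:
            raise ValueError("GPRM values are 16-bit (0..65535)")
        self.regs[index] = value

    def get(self, index):
        return self.regs[index]
```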
General Purpose Interface (GPI) – a) A connector on the back of the
switcher frame or editor which allows remote control of the Auto Trans,
DSK Mix, Fade to Black or Panel Memory Function or Sequence on
the switcher. This is usually a contact closure (i.e., switch) which
provides short to ground. b) A standard interface for control of
electronic equipment.
General Purpose Serial Interface (GPSI) – A form of translator which
allows the switcher to talk to other devices, i.e., ADO, and to be given
instructions by devices such as Editors serially.
Generation – The number of duplication steps between an original
recording and a given copy. A second generation duplicate is a copy of
the original master and a third generation duplicate is a copy of a copy of
the original master, etc.
Generation Loss – When an analog master videotape is duplicated, the
second-generation copy is usually inferior in some way to the master.
This degradation appears as loss of detail, improper colors, sync loss, etc.
Limited frequency response of audio/video magnetic tape and imperfections in electronic circuitry are the main causes of generation loss. Higher
performance formats (such as 1”) exhibit much less generation loss
than more basic formats. Digital formats make generation loss negligible
because each copy is essentially an exact duplicate of the original. Video
enhancing equipment can minimize generation loss. Some video processors
pre-enhance the video signal to overcome generation loss.
Genlock – a) The process of locking both the sync and burst of one signal
to the burst and sync of another signal making the two signals synchronous. This allows the receiver’s decoder to reconstruct the picture including luminance, chrominance, and timing synchronization pulses from the
transmitted signal. b) The ability to internally lock to a non-synchronous
video. AVC switchers allow genlocked fades on the DSK. c) Equipment or
device that recovers the original pixel clock and timing control signals
(sync) from a video signal; thus allowing an NTSC/PAL decoder to correctly
decode the video signal. d) A way of locking the video signal of a camera
to an external generator of synchronization pulses.
Genlock Outputs – A timed color black output synchronous with the
input reference video. The AVC series also provides the DSK genlocked
color black. On 4100 series switchers this also includes composite sync,
subcarrier, vertical and horizontal drive pulses, burst flag pulse and
composite blanking.
Geometric Distortion – Any aberration which causes the reproduced
picture to be geometrically dissimilar to the perspective plane projection
of the original scene.
Geometry – The shape of objects in a picture, as opposed to the picture
itself (aspect ratio). With good geometry, a picture of a square is square.
With poor geometry, a square might be rectangular, trapezoidal,
pillow-shaped, or otherwise distorted. Some ATV schemes propose minor
adjustments in geometry for aspect ratio accommodation.
Geostationary Orbit – A satellite orbit 22,300 miles above earth’s
equator circling the earth at the same rate earth rotates.
Ghost – A shadowy or weak image in the received picture, offset either to
the left or right of the primary image, the result of transmission conditions
which create secondary signals that are received earlier or later than the
main or primary signal. A ghost displaced to the left of the primary image
is designated as “leading” and one displaced to the right is designated as
“following” (lagging). When the tonal variations of the ghost are the same
as the primary image, it is designated as “positive” and when it is the
reverse, it is designated as “negative”. See Multipath Distortion.
Ghost Cancellation Reference (GCR) Signal – ITU-R BT.1124 standard
reference signal found on lines 19 and 282 of (M) NTSC systems and on
line 318 (B, D, G, H, I) of PAL systems. This signal allows for the removal of
ghosting from TVs by filtering the entire transmitted signal based on the
condition of the transmitted GCR signal.
Ghost Point – A supplementary point included on the tangent to the
acquired point in order to force the line to begin and end on the acquired
Ghosting – A weak, secondary, ghost-like duplicate video image in a video
signal caused by the undesired mixing of the primary signal and a delayed
version of the same signal.
GHz (Gigahertz) – One billion (10^9) cycles per second.
Gibbs Effect – The mirage-like haze at the boundaries of picture objects,
seen in DCT-based compression algorithms at high compression ratios. The
effect is most noticeable around text and high-contrast geometrical
GIF (Graphic Interchange Format) – A bit-mapped graphics file format
popular for storing lower resolution image data.
Gigabyte (GB) – 1,073,741,824 (2^30) bytes of information, commonly
rounded to one billion bytes.
Glenn – William and Karen Glenn, researchers for NYIT in Dania, Florida,
who developed the VISTA ATV scheme. They are often cited for their work
indicating that human vision cannot simultaneously perceive high spatial
detail and high temporal detail.
Glitch – a) A form of low frequency interference, appearing as a narrow
horizontal bar moving vertically through the picture. This is also observed
on an oscilloscope at field or frame rate as an extraneous voltage pip
moving along the signal at approximately reference black level. b) Slang
for visual error, i.e., dropout on tape, spikes at switcher pattern boundaries.
Patterns that jump off screen or any other aberration. c) Slang for a fault
in data transmission or other error that does not cause a total lock up.
Glitch Impulse – A term used to define the voltage/time function of a single DAC step until the output video level has settled to within +/- 1 LSB of
the final value. Glitches are apt to appear in output video as the input to
the DAC changes from:
0111 1111 to 1000 0000
Global (Menu) – A separate channel that allows additional rotations to
be superimposed on an image and, in 3D systems, “motion on motion” in
an effect.
Global Data Set – A data set with all data essence or metadata elements
defined in the relevant data essence standard or Dynamic Metadata
Dictionary.
Gloss Level – A shiny surface imparted to the magnetic coating due to
calendering.
GMC (Global Motion Compensation) – Global motion compensation
(GMC) is an important tool for a variety of video processing applications
including for instance segmentation and coding. The basic idea is that a
part of the visible 2D motion within video sequences is caused by camera
operation (translation, rotation, zoom).
GMSK (Gaussian Minimum Shift Keying) – Gaussian Minimum Shift
Keying is the modulation technique used in GSM networks. It employs a
form of FSK (Frequency Shift Keying). GMSK was chosen because it
provides good spectral efficiency.
GMT (Greenwich Mean Time) – Greenwich, England has been the home
of Greenwich Mean Time (GMT) since 1884. GMT is sometimes called
Greenwich Meridian Time because it is measured from the Greenwich
Meridian Line at the Royal Observatory in Greenwich. Remember: Clocks
Spring Forward & Fall Back (Fall = Autumn), but GMT remains the same
all year round.
GOP (Group of Pictures) – a) A GOP starts with an I-picture and ends
with the last picture before the next I-picture. b) A picture sequence which
can be coded as an entity. For instance, it is possible to cut between GOPs.
For that reason, the first picture in a GOP has to be intra-coded (I-picture).
Time codes are carried on GOP levels.
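A hypothetical sketch of such a picture sequence in display order (the function and its parameters are illustrative, not part of the MPEG specification; n is the GOP length and m the anchor-frame spacing):

```python
def gop_pattern(n: int, m: int) -> str:
    """Sketch of an MPEG GOP frame-type pattern in display order.
    n = frames per GOP, m = spacing between anchor (I/P) frames."""
    types = []
    for i in range(n):
        if i == 0:
            types.append("I")  # a GOP must start with an intra-coded picture
        elif i % m == 0:
            types.append("P")  # predicted anchor frames
        else:
            types.append("B")  # bidirectional frames between anchors
    return "".join(types)

print(gop_pattern(12, 3))  # IBBPBBPBBPBB
```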
Gouraud Shading – This type of smooth shading has no true “specular”
highlights and is faster and cheaper than Phong shading (which does).
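At its core, Gouraud shading linearly interpolates intensities computed at the vertices across each edge and scanline. A minimal sketch of the per-scanline step (the helper name is invented):

```python
def gouraud_span(i_left: float, i_right: float, width: int):
    """Linearly interpolate two vertex-derived intensities across a
    scanline span -- the core operation of Gouraud shading."""
    if width == 1:
        return [i_left]
    return [i_left + (i_right - i_left) * x / (width - 1) for x in range(width)]

print([round(v, 2) for v in gouraud_span(0.2, 1.0, 5)])  # [0.2, 0.4, 0.6, 0.8, 1.0]
```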
GOV (Group of Video Object Planes)
Graphics Board – The printed circuit board within a workstation that
contains the graphics processors.
Graphics Combination Profile – A combination profile that describes the
required capabilities of a terminal for processing graphical media objects.
Gray Card – A nonselective (color neutral) diffuse reflector intended to
be lighted by the normal illumination of the original scene, and having
a reflectance factor of 18% (compared with a perfect reflector at 100%
and prepared magnesium oxide at 98%). The gray card luminance is
used as a guide in determining scene exposure so that the image is
placed upon the most favorable portion of the transfer function curve.
Gray Market – Dealers and distributors who sell equipment without proper
authorization from the manufacturer.
Gray Point – See Gamma.
GPI Trigger – The signal sent by a GPI that instructs an external device
to execute a particular command, such as to start or stop playback of a
video effect.
Gray Scale – a) The luminance portion of the video signal. A scale of 10
steps from TV black to TV white indicating the shades of gray a camera can
see at any one time and to which a camera can be adjusted. A gray scale
adjustment of 7 is good. b) An optical pattern in discrete steps between
light and dark. Note: A gray scale with ten steps is usually included in
resolution test charts.
GPI/GPO (General Purpose Input/General Purpose Output)
Gray Scale Shape – Gray Level Alpha Plane.
GPS (Global Positioning System) – The GPS (Global Positioning System)
is a “constellation” of 24 well-spaced satellites that orbit the Earth and
make it possible for people with ground receivers to pinpoint their geographic
location. Positions can be determined to within one meter with
special military-approved equipment. GPS equipment is widely used in
science and has now become sufficiently low-cost so that almost anyone
can own a GPS receiver.
Green Book – The document developed in 1987 by Philips and Sony as
an extension to CD-ROM XA for the CD-i system.
GPSI (General Purpose Serial Interface) – Allows direct access to/from
the MAC if an external encoding/decoding scheme is desired.
Ground (GND) – A point of zero voltage potential. The point in reference
to which all voltages are measured.
Graceful Degradation – Capability of decoders to decode MPEG-4
services that are above their capacity.
Ground Loop – a) Hum caused by currents circulating through the ground
side of a piece of equipment due to grounding different components at
points of different voltage potential. b) An unwanted interference in the
copper electrical signal transmissions with shielded cable, which is a
result of ground currents when the system has more than one ground.
For example, in CCTV, when we have a different earthing resistance at
the camera, and the switcher or monitor end. The induced electrical noise
generated by the surrounding electrical equipment (including mains) does
not discharge equally through the two earthings (since they are different)
and the induced noise shows up on the monitors as interference.
GPI (General Purpose Interface) – In computerized editing systems,
GPIs allow the computer to control various remote components.
Gradient – a) In graphics, having an area smoothly blend from one color
to another, or from black to white, or vice versa. b) A blended mix of two or
three colors that you can use to draw or fill objects.
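A linear blend between two colors, as in (a), can be sketched as follows (an illustrative helper, assuming 8-bit RGB components):

```python
def gradient(start, end, steps):
    """Blend linearly from one RGB color to another over `steps` samples."""
    out = []
    for i in range(steps):
        t = i / (steps - 1)  # 0.0 at the start color, 1.0 at the end color
        out.append(tuple(round(s + (e - s) * t) for s, e in zip(start, end)))
    return out

# Five steps from black to white:
print(gradient((0, 0, 0), (255, 255, 255), 5))
```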
Grand Alliance (GA) – The U.S. grouping, formed in May 1993, to produce
“the best of the best” of the initially proposed HDTV systems. The participants
are: AT&T, General Instrument Corporation, Massachusetts Institute
of Technology, Philips Consumer Electronics, David Sarnoff Research
Center, Thomson Consumer Electronics and Zenith Electronics Corporation.
The format proposed is known as the ATSC format.
Granules – In MPEG Audio Layer II, a set of 3 consecutive sub-band
samples from all 32 sub-bands that are considered together before
quantization. They correspond to 96 PCM samples. In MPEG Audio Layer III,
a granule is 576 frequency lines that carry their own side information.
Graphic Equalizer – An equalizer which indicates its frequency response
graphically through the position of its controls. When the controls are in a
straight line at the 0 position, the response is flat.
Green Screen – See Blue Screen.
Green Tape – An abrasive tape used to clean and lap heads that are
unevenly worn, stained, scratched, etc. Should be used with caution and
should not be used on ferrite heads. This also applies to gray tape.
Grounded Electrical Outlet – An electrical wall outlet that accepts a
plug that has a grounding prong. In the USA, all properly wired three-prong
outlets provide a ground connection.
Group – A group is any arbitrary collection of polygons; a subset of the
database, usually the group represents a coherent object. A group could
contain all the polygons constituting the model of a chair, or it could
contain twenty such chairs and a table. A polygon can only be in one
group at a time, but it can move to another group.
Group 1, 2, 3 and 4 – The ITU-T Group 1 to 4 specify compression of
black and white documents and the operation of facsimile equipment.
Group 3 (also known as G3 or T.4) is presently the most important standard
in the world of fax and document storage applications. G3 compression
features modified Huffman encoding. The ITU-T Group 4 (also known as G4
or T.6) is an improvement of ITU-T G3, dedicated to digital telephone lines,
in particular ISDN.
Group Delay – a) A distortion present when signal components of different frequencies experience different delays as they pass through a system.
Distortions are expressed in units of time. The largest difference in delay
between a reference low frequency and the other frequencies tested is
typically quoted as the group delay distortion. Group delay problems can
cause a lack of vertical line sharpness due to luminance pulse ringing,
overshoot or undershoot. The multipulse or sin (x)/x signals can be used
to check for group delay in the same way as these signals are used to
check for chrominance to luminance delays. b) A signal defect caused by
different frequencies having differing propagation delays.
GSM (Global System for Mobile Communication) – Also known as
Groupe Speciale Mobile. A European radio standard for mobile telephones
(based on TDMA-8) in the 900 MHz band.
GSTN (General Switched Telephone Network) – The GSTN is what the
public telephone network is called.
Guard Interval – Additional safety margin between two transmitting
symbols in the COFDM standard. The guard interval ensures that reflections
occurring in the single-frequency network die away before the received
symbol is processed.
Guest – A modeling object visualized in the presence of another database
which will serve as a visualization support but cannot be modified.
GUI (Graphical User Interface) – A computer interface that allows the
user to perform tasks by pointing to icons or graphic objects on the screen.
Windows is a graphical user interface. Most multimedia programs require a GUI.
H Drive – See Horizontal Drive.
H Phase (Horizontal Phase) – The horizontal blanking interval used to
synchronize the timing of two or more video signals.
H Rate – The time for scanning one complete horizontal line, including
trace and retrace. In NTSC (color), one line takes 1/15,734 second, or 63.56 µs.
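As a rough numeric check (this sketch assumes the standard NTSC relation that the color horizontal rate is 4.5 MHz / 286):

```python
# NTSC color horizontal line rate derived from the 4.5 MHz sound-carrier relation.
line_rate_hz = 4_500_000 / 286     # about 15,734.27 Hz
line_time_us = 1e6 / line_rate_hz  # time for one complete line, in microseconds
print(round(line_time_us, 2))  # 63.56
```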
H.222 – This ITU-T recommendation is identical to the systems specification
of MPEG-2.
H.261 – a) Recognizing the need for providing ubiquitous video services
using the Integrated Services Digital Network (ISDN), CCITT (International
Telegraph and Telephone Consultative Committee) Study Group XV established a Specialist Group on Coding for Visual Telephony in 1984 with the
objective of recommending a video coding standard for transmission at
m x 384 kbit/s (m=1,2,..., 5). Later in the study period after new discoveries in video coding techniques, it became clear that a single standard,
p x 64 kbit/s (p = 1,2,..., 30), can cover the entire ISDN channel capacity.
After more than five years of intensive deliberation, CCITT Recommendation
H.261, Video Codec for Audio Visual Services at p x 64 kbit/s, was completed and approved in December 1990. A slightly modified version of this
Recommendation was also adopted for use in North America. The intended
applications of this international standard are for videophone and video
conferencing. Therefore, the recommended video coding algorithm has to
be able to operate in real time with minimum delay. For p = 1 or 2, due to
severely limited available bit rate, only desktop face-to-face visual communication (often referred to as videophone) is appropriate. For p>=6, due to
the additional available bit rate, more complex pictures can be transmitted
with better quality. This is, therefore, more suitable for video conferencing.
The IVS (INRIA Video conferencing System) is a software implementation of
H.261 codec which also features PCM and ADPCM audio codecs and
includes an error control scheme to handle packet losses in the Internet.
b) The ITU-T H.261 recommendation embraces video codecs for audio
visual services at p x 64 kbit/s data rate, where p is between 1 and 30.
Thus, the standard is informally called “p x 64”. It is aimed at low bit rate
media, and is used in the H.320 video conferencing recommendation.
H.261 provides a resolution of 352 x 288 pixels (CIF) or 176 x 144 pixels
(QCIF), independent of bit rate. The H.261 recommendation defines both
encoding and decoding. However, it defines, more strictly, how to decode
than to encode the bit stream, and has room for options in the encoder.
The coding is based on the DCT with variable word-length encoding. H.261
defines both independently coded frames (key frames) and frames that are
coded by using block-based motion compensation (non-key frames). H.261
also defines error-correction codes, and it allows rate control by varying
quantization and by dropping frames and jumping blocks.
H.262 – The H.262 recommendation is identical to the video specification
of MPEG-2.
H.263 – This is an ITU-T recommendation concerning “video coding for
low bit rate communication”. The H.263 is dedicated to video conferencing
via H.324 terminals using V.34 modems at 28.8 kbit/s, and to H.323
LAN-based video conferencing. The coding algorithm in H.263 is based on
H.261, but has better performance than the H.261, and it may eventually
displace H.261.
H.26L – A next-generation video codec, H.26L has been a university
research project until recently. It is now being worked on by MPEG, with
the intention of making it part 10 of the MPEG-4 standard.
H.310/H.321 – Broadband audiovisual communications systems and
terminals over B-ISDN using ATM protocols. H.310 includes H.262 and
H.261 video, H.222.1 systems and H.245 control. H.321 is a subset of
H.310 which enables H.320 with broadband signaling (Q.2931).
H.320 – This is an ITU-T recommendation for low bit rate visual communication. The H.320 is entitled “narrow-band visual telephone systems and
terminal equipment” and is widely accepted for ISDN video conferencing.
The H.320 is not a compression algorithm, but is rather a suite of standards
for video conferencing. H.320 specifies H.261 as the video compression,
and defines the use of one of three audio formats: either G.711,
G.722 or G.728.
H.322 – Visual telephone systems for guaranteed QoS LANs. Suite includes
H.261 and H.263 video, H.225.0 and H.245 supervision and control and
numerous G-DOT speech modes.
H.323 – ITU standard for video conferencing over networks that do not
guarantee bandwidth, such as the Internet. H.323 is the standard that is
recommended for most users in the education community.
H.324 – ITU recommendation H.324 describes terminals for low bit rate
multimedia applications, utilizing V.34 modems operating over the general
telephone system. H.324 terminals may carry real-time voice, data, and
video or any combination, including video telephony. H.324 makes use of
the logical channel procedures of recommendation H.245, in which the
content of each logical channel is described when the channel is opened.
H.324 terminals may be used in multipoint configurations through MCUs,
and may interwork with H.320 terminals on ISDN, as with terminals on
wireless networks.
H.324M – Mobile multimedia terminal adapted from H.324 but with
improved error resilience.
HAD – See Half Amplitude Duration.
Half Amplitude Duration (HAD) – Commonly used as a measurement
on sine-squared pulses of a test signal. It is the 50 percent point on a test
waveform and the pulses are often expressed in terms of time interval T.
The T, 2T and 12.5T pulses are common examples. T is the Nyquist interval,
or 1/(2fc), where fc is the cutoff frequency of the system to be measured.
For NTSC, fc is taken to be 4 MHz and T is therefore 125 nanoseconds.
For PAL, fc is taken to be 5 MHz and T is therefore 100 nanoseconds.
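The relation T = 1/(2 fc) reproduces both figures (an illustrative check; the function name is invented):

```python
def nyquist_interval_ns(cutoff_hz: float) -> float:
    """Nyquist interval T = 1 / (2 * fc), returned in nanoseconds."""
    return 1.0 / (2.0 * cutoff_hz) * 1e9

print(nyquist_interval_ns(4e6))  # NTSC, fc = 4 MHz -> 125.0 ns
print(nyquist_interval_ns(5e6))  # PAL,  fc = 5 MHz -> 100.0 ns
```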
Half D1 – An MPEG-2 video encoding mode in which half the horizontal
resolution is sampled (352x480 for NTSC, 352x576 for PAL).
Half Splitting – Troubleshooting technique used for fault isolation. It
involves the examination of circuit nodes approximately midway through
a circuit. Once the operational state of these nodes has been determined,
the source of the fault can be isolated to the circuits either before or after
this point. This process can then be continued.
Hard Disk – A magnetic data recording disk that is permanently mounted
within a disk drive.
Half T1 – North American transmission rate of 768 kbps.
Hard Key – A key effect in which areas of the keyed image are either
completely transparent or completely opaque, creating a hard edge
between the keyed image and background image.
Half-Duplex – An operational mode in which transmission of data occurs
in only one direction at a time in a communications link.
Half-Duplex Transmission – Data transmitted in either direction, one
direction at a time. Cheaper speakerphones are a good example of this,
where only one person can talk at a time.
Halo – a) Most commonly, a dark area surrounding an unusually bright
object, caused by overloading of the camera tube. Reflection of studio
lights from a piece of jewelry, for example, might cause this effect. With
certain camera tube operating adjustments, a white area may surround
dark objects. b) Type of pattern border with soft edges and a mix from a
vid to border matte gen then to “B” vid.
Halt – Command to stop the computer.
Handles – Material outside the IN and OUT points of a clip in a sequence.
The Avid system creates handles when you decompose or consolidate
material. The decompose and consolidate features can create new master
clips that are shorter versions of the original master clip. The handles are
used for dissolves and trims with the new, shorter master clips.
Handshake – a) The protocol that controls the flow of information
between two devices. b) Control signals at an interface in which the
sending device generates a signal indicating the new information is
available, and the receiving device then responds with another signal
indicating that the data has been received.
Handshaking – Process of exchanging communication parameters
between two terminals.
Hanging Dots – A form of cross-luminance created by simple comb
filters. It appears as a row of dots hanging below the edge of a highly
saturated color. See also Cross-Luminance.
Hangover – Audio data transmitted after the silence detector indicates
that no audio data is present. Hangover ensures that the ends of words,
important for comprehension, are transmitted even though they are often
of low energy.
Hann Window – A time function applied sample-by-sample to a block of
audio samples before Fourier transformation.
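One common form of the Hann window is w[k] = 0.5 · (1 − cos(2πk/(N−1))); a minimal sketch:

```python
import math

def hann_window(n: int):
    """Symmetric Hann window: w[k] = 0.5 * (1 - cos(2*pi*k / (n - 1)))."""
    return [0.5 * (1.0 - math.cos(2.0 * math.pi * k / (n - 1))) for k in range(n)]

# The window tapers smoothly from 0 up to 1 and back to 0:
print([round(x, 2) for x in hann_window(5)])  # [0.0, 0.5, 1.0, 0.5, 0.0]
```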
Hard Banding – A variation in thickness or elasticity across the width
of the tape, it may be a coating defect, or it may be caused by stretch
damage either during manufacture or in use. It results in a variation of
the recovered RF due to the effect on head-to-tape contact and may result
in color saturation banding and velocity errors.
Hard Border – A hard border usually applies to patterns and is characterized by an abrupt change from background video to the border video and
by an abrupt change from the border video to the foreground video. Also
sometimes used to describe key borders with a high gain.
Hard Commit – Removing the soft edit properties of an edit sequence.
Hard commits are different from soft commits in that hard commits cannot
be restored, the commit is permanent. Hard commits force a render on the
selected elements.
Hard Recording – The immediate recording of all audio, video, timecode
and control tracks on a magnetic recorder. Because hard recording creates
breaks in any existing timecode or control track on the tape, the procedure
is often performed on black tape when an edit is not required or in emergency circumstances. See also Crash Recording.
Hardware – a) Term used generically for equipment, i.e., VTRs, switchers,
etc. b) Individual components of a circuit, both passive and active, have
long been characterized as hardware in the jargon of the engineer. Today,
any piece of data processing equipment is informally called hardware.
Hardware Inventory – An IRIX command (HINV) used to list the hardware,
memory and peripheral equipment in, or connected to, a workstation.
Hard-Wired Logic – See Random Logic.
Harmonic Distortion – If a sine wave of a single frequency is put into a
system, and harmonic content at multiples of that frequency appears at
the output, there is harmonic distortion present in the system. Harmonic
distortion is caused by nonlinearities in the system.
Harmonics – a) Whole number multiples of a frequency. Fx1 is called
the fundamental or first harmonic; Fx2 is the second harmonic; Fx3 is the
third harmonic; etc. b) Integral multiples of a fundamental frequency are
harmonics of that frequency. A pure sine wave is free of harmonics. Adding
harmonics to a fundamental frequency will change its wave shape. A
square wave contains a fundamental frequency plus all the odd harmonics
of that frequency.
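The square-wave statement can be checked numerically: summing the fundamental and its odd harmonics with 1/k amplitudes (the Fourier series of a square wave) drives the sum toward ±1. An illustrative sketch:

```python
import math

def square_wave_approx(t: float, fundamental: float, n_harmonics: int) -> float:
    """Partial Fourier series of a square wave: fundamental plus odd
    harmonics, each harmonic k weighted by 1/k."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * fundamental * t) / k
    return 4.0 / math.pi * total

# A quarter period into a 1 Hz wave, 50 odd harmonics land close to +1:
print(round(square_wave_approx(0.25, 1.0, 50), 2))
```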
HARP (High-Gain Avalanche Rushing Amorphous Photoconductor) –
A very new type of image sensor (target) for a camera tube. HARP target
tubes are about 10 times more sensitive to light than conventional
tube types and have been demonstrated to offer hope of overcoming
the sensitivity drawbacks of HDTV cameras.
HBF (Half Band Filter) – Half band filter are used in subband coding of
digital video and audio signals.
HBI – See Horizontal Blanking Interval.
HBO (Home Box Office) – Time Inc.’s pay-cable and entertainment
production company, a co-proposer with ATC of C-HDTV and supporter
of ACTV.
HCR (Huffman Codeword Reordering) – Extends the Huffman coding
of spectral data in an MPEG-4 AAC bitstream. By placing some of the
Huffman codewords at known positions, error propagation into these
so-called “priority codewords” (PCW) can be avoided.
HD (High Definition) – A frequently used abbreviation for HDEP and
sometimes HDTV. The term High Definition, applied to television, is almost
as old as television itself. In its earliest stage, NTSC was considered high
definition (previous television systems offered from 20 to 405 scanning
lines per frame).
HD D5 – A compressed recording system developed by Panasonic which
uses compression at about 4:1 to record HD material on standard D5
cassettes.
HD-0 – A set of formats based on the ATSC Table 3, suggested by the DTV
Team as the initial stage of the digital television rollout.
Formats for DTV Transmission (i = interlaced, p = progressive)

Vertical Size (active)   Horizontal Size (active)   Aspect Ratio              Frame Rate and Scan
(HD) 1,080               1,920                      16:9 (square pixel)       24p, 30p, 30i
(HD) 720                 1,280                      16:9 (square pixel)       24p, 30p, 60p
(SD) 480                 704                        4:3 (non-square pixel)    24p, 30p, 30i, 60p
(SD) 480                 704                        16:9 (non-square pixel)   24p, 30p, 30i, 60p
(SD) 480                 640                        4:3 (square pixel)        24p, 30p, 30i, 60p
HD-1 – A set of formats based on the ATSC Table 3, suggested by the DTV
Team as the second stage of the digital television rollout, expected to be
formalized in the year 2000.
HD-2 – A set of formats based on the ATSC Table 3, suggested by the DTV
Team as the third stage of the digital television rollout contingent on some
extreme advances in video compression over the next five years. The added
format is not part of the ATSC Table 3.
HDCAM – Sometimes called HD Betacam, is a means of recording
compressed high-definition video on a tape format (1/2-inch) which uses
the same cassette shell as Digital Betacam, although with a different tape
formulation.
HDDR – See High Density Digital Recording.
HDDTV (High Definition Digital Television) – The upcoming standard
of broadcast television with extremely high resolution and aspect ratio of
16:9. It is an advancement from the analog high definition, already used
experimentally in Japan and Europe. The picture resolution is nearly
2000 × 1000 pixels, and it uses the MPEG-2 standard.
HDMAC-60 – The baseband and satellite transmission form of HDS-NA.
See also MAC.
HDMI (High Definition Multimedia Interface) – This is a proposed
digital audio/video interface for consumer equipment. It is designed to
replace DVI in a backwards compatible fashion and supports EIA-861 and
HDCP. Digital RGB or YCbCr data at rates up to 5 Gbps are supported
(HDTV requires 2.2 Gbps). Up to 8 channels of 32-192 kHz digital audio
are also supported, along with AV.link (remote control) capability and a
smaller connector.
HD-NTSC – The Del Rey Group’s ATV scheme, comprised primarily of a
quincunx scanning scheme referred to as Tri-Scan, which would sub-sample each NTSC pixel three times, in a triangular fashion, for increased
vertical and horizontal static resolution, at an effective 10 frame-per-second rate. Blanking adjustment is used for aspect ratio accommodation.
HDNTSC – The terrestrial transmission form of HDS-NA, comprised of a
receiver-compatible, channel-compatible signal and an augmentation channel, which may be half-sized and low-power. The augmentation channel
carries increased resolution, improved sound, widescreen panels, and pan
and scan information to let an ATV set know where to apply the panels.
H-DOTS – Suites of ITU recommendations for multimedia terminals and
systems that define mandatory and/or optional recommendations for video,
speech (or audio), multiplex and control.
HD-PRO – A universal, worldwide HDEP proposal from the Del Rey Group,
said to accommodate all ATV systems. Details are not available pending
patent protection.
HDS-NA (High Definition System for North America) – The Philips
Laboratories (Briarcliff, NY) ATV scheme, comprised of two separate
systems, HDMAC-60, a single, satellite-deliverable channel designed to
get the signal to broadcast stations and CATV head-ends, and HDNTSC,
a two-channel (receiver-compatible plus augmentation) system to deliver
it to home TVs.
HDTV – See High Definition Television.
HDTV 1125/60 Group – An organization of manufacturers supporting the
SMPTE HDEP standard.
HDEP (High Definition Electronic Production) – A term bearing little or
no implications for transmission and display systems. The SMPTE and the
ATSC have approved one standard for HDEP, sometimes referred to as
SMPTE 240M. This standard has 1125 scanning lines per frame, 60 field
per second, 2:1 interlace, an aspect ratio of 16:9, extended colorimetry,
and a 30 MHz base bandwidth for each of its three color components. It is
based on work at NHK, but includes considerable American modifications.
Clearly, the combined 90 MHz base bandwidth of this HDEP standard
cannot be practically broadcast (not counting sound or modulation characteristics, it takes up as much bandwidth as 15 current broadcast
channels). That is why there are so many ATV transmission schemes.
HDVS (High Definition Video System) – A Sony trade name for its HDEP
equipment and ancillary products, such as HD videodisc players.
HDLC (High Level Data Link Control) – An ISO communications protocol
used in X.25 packet switching networks. It provides error correction at the
Data Link Layer. SDLC, LAP and LAPB are subsets of HDLC.
Head Alignment – Mechanical adjustment of the spatial relationships
between the head gaps and the tape.
HD-MAC (High Definition MAC) – A variety of systems, all European
except for HDMAC-60.
Head – In a magnetic recorder, the generally ring-shaped electromagnet
across which the tape is drawn. Depending on its function, it either erases
a previous recoding, converts an electrical signal to a corresponding
magnetic pattern and records it on the tape, or picks up a magnetic pattern already on the tape and converts it to an electrical playback signal.
2 Head: The system used on most cassette recorders, requiring that
playback occur after the recording has been made. 3 Head: Refers to
the recording/playback head configuration within the recorder. A 3-head
system allows simultaneous playback of recorded material.
Head Block – An assembly holding an erase, record and playback head
in a certain physical alignment.
Head Clogging – The accumulation of debris on one or more heads
usually causing poor picture clarity during playback. Clogging of the
playback head with debris causes dropouts.
Head Demagnetizer or Degausser – A device used to neutralize possible
residual or induced magnetism in heads or tape guides.
Head Frame – The first frame in a clip of film or a segment of video.
Headend – Facility in cable system from which all signals originate. Local
and distant television stations, and satellite programming, are picked up
and amplified for retransmission through the system.
Head-End – The part of a CATV system from which signals emanate.
Header – A block of data in the coded bit stream containing the coded
representation of a number of data elements pertaining to the coded data
that follow the header in the bit stream.
Header/Descriptor – See Image File Header/Descriptor.
Headroom – a) The number of dB increases possible above the operation
level (0 VU) before unacceptable distortion occurs. b) In composition, the
space between a subject’s head and the upper boundary of the frame.
c) The difference between the nominal level (average) and the maximum
operating level (just prior to “unacceptable” distortion) in any system
or device. Because it is a pure ratio, there is no unit or reference-level
qualifier associated with headroom – simply “dB”; headroom expressed
in dB accurately refers to both voltage and power.
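Because headroom is a pure ratio, it can be computed directly from the two levels; a sketch using the voltage form, 20·log10, with invented example values (0.775 V nominal and clipping at 7.75 V):

```python
import math

def headroom_db(nominal: float, maximum: float) -> float:
    """Headroom in dB: the ratio of maximum operating level to nominal
    level, expressed with the voltage form 20*log10."""
    return 20.0 * math.log10(maximum / nominal)

# A 10:1 voltage ratio corresponds to 20 dB of headroom:
print(round(headroom_db(0.775, 7.75), 1))  # 20.0
```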
Heads Out – A way of winding tape so that the beginning of a selection is
on the outside of the reel.
Head-to-Tape Contact – The degree to which the surface of the magnetic coating approaches the surface of the record or replay heads during
normal operation of a recorder. Good head-to-tape contact minimizes
separation loss and is essential in obtaining high resolution.
Height – The vertical positioning of a head with respect to a piece of tape.
The size of the picture in a vertical direction.
Helical Recording – A video recording method in which the information is
recorded in diagonal tracks. Also known as Slant-Track Recording.
Helical Scan – A method of recording video information diagonally on
a tape, used in home and professional VCRs. High speed rotating video
heads scan these diagonal video tracks, giving an effective tape speed
much higher than the actual tape speed allowing more information to be
recorded on a given length of magnetic tape.
Hermite – An option for the interpolation of an animation curve that produces a smooth curve by assigning a slope to each control point on the
curve. Each control point has a tangent handle that you can use to adjust
the slope for the point.
Herringbone – Patterning caused by driving a color-modulated composite
video signal (PAL or NTSC) into a monochrome monitor.
Hertz (Hz) – a) The unit of frequency. Equivalent to cycles per second.
b) A unit that measures the number of oscillations per second.
HEX (Hexadecimal) – Base 16 number system. Since there are 16 hexadecimal digits (0 through 15) and only ten numerical digits (0 through 9),
six additional digits are needed to represent 10 through 15. The first six
letters of the alphabet are used for this purpose. Hence, the hexadecimal
digits read: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. The decimal number
16 becomes the hexadecimal number 10. The decimal number 26
becomes the hexadecimal number 1A.
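The conversions in the definition can be reproduced with a short sketch (the helper is illustrative; Python's built-in hex() performs the same conversion in lowercase):

```python
def to_hex(n: int) -> str:
    """Convert a non-negative decimal integer to a hexadecimal string
    using the digits 0-9 and A-F."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        out = digits[n % 16] + out  # take the lowest base-16 digit
        n //= 16
    return out

# The examples from the definition:
print(to_hex(16))  # 10
print(to_hex(26))  # 1A
```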
HFC – See Hybrid Fiber Coaxial.
HHR (Half Horizontal Resolution) – Part of the MPEG-2/DVB standard
where half of the normal 720 pixel horizontal resolution is transmitted
while maintaining normal vertical resolution of 480 pixels. Since it is a
4:2:0 format, the color information is encoded at 240 pixels vertically and
176 pixels horizontally. Virtually all the DBS providers use HHR format since
it dramatically reduces the bandwidth needed for channels, though at the
expense of picture quality. Special logic in the video decoder chip in the set
top box re-expands the picture to normal horizontal size by interpolation
before display. 4:2:2 video at Standard Definition looks as good as the
NBC analog feeds on GE-1 Ku. High bandwidth 4:2:0 video such as the
NBC digital feeds on GE-1 Ku come very close to studio quality and the
low bandwidth video encoded in HHR format looks like DBS.
Hi Con – A black and white hi contrast signal used as a key source. See
also Matte Reel.
Hi Impedance Mike – A mike designed to be fed into an amplifier with
input impedance greater than 20 to 50 kilohms.
Hi-8 – 8 mm videotape format which provides better quality than VHS.
An improved version of the 8 mm tape format capable of recording better
picture resolution (definition). A higher-density tape is required which
provides a wider luminance bandwidth, resulting in sharper picture quality
(over 400 horizontal lines vs. 240 for standard 8 mm) and improved
signal-to-noise ratio. Camcorders using this format are very small, light
and provide a picture quality similar to S-VHS.
Hidden Line Removal – A wireframed object can be confusing to look at
because edges that would be hidden are still displayed. Hidden line
removal is the process of computing where edges are hidden and not
drawing them.
Hierarchy – A structure of levels that organizes component elements. For
example, the IRIX operating system uses a tree-like hierarchy to organize
directories on a hard disk drive.
Hi-Fi (High Fidelity) – Most commonly used to refer to the high quality
audio tracks recorded by many VCRs. These tracks provide audio quality
approaching that of a CD. However, because they are combined with
the video signal before recording, audio dubs using them are impossible
without re-recording the video.
High Definition Films – British organization that began using the term
High Definition for its electronic cinematography system before even color
TV was broadcast in the U.S.
High Definition Television (HDTV) – a) General term for proposed standards pertaining to consumer high-resolution TV. b) An ATV term sometimes confused with HDEP. HDTV is usually used to describe advanced
production and delivery mechanisms that will get ATV to the home. As
HDEP cannot practically be broadcast, all broadcast HDTV schemes must
make compromises in quality. The line between broadcast HDTV and EDTV,
therefore, is difficult to define. See Minimum Performance. c) A TV format
capable of displaying on a wider screen (16 x 9) as opposed to the conventional 4 x 3) and at higher resolution. Rather than a single HDTV standard the FCC has approved several different standards, allowing broadcasters to choose which to use. This means new TV sets will have to support
all of them. All of the systems will be broadcast as component digital.
d) By HDTV, we normally understand transmission, rendering and display
systems that feature about double the number of scanning lines, improved
color quality, and less artifacts than that of today’s composite systems. The
video may be analog, like the Japanese MUSE or the European HD-MAC,
or digital, like the ATSC system in the USA. The European, MPEG-2 based
Digital Video Broadcasting (DVB) specifications embrace HDTV in addition
to 625 line TV. In the USA, the Grand Alliance has succeeded in combining
various digital HDTV systems into the ATSC system – a multiple format
system based on MPEG-2 video coding – that allows HDTV transmissions
to use the same frequency bands now used by regular NTSC television.
The Japanese, who have had regular analog HDTV transmission for some
time, are also planning to implement digital HDTV.
The New HDTV/SDTV Standards
(i = interlaced, p = progressive scan, * = SDTV)
1920 x 1080   30i, 30p, 24p
1280 x 720    60p, 30p, 24p
720 x 483*    60p, 30p, 24p
640 x 480*    (frame rates not given)
High Density Digital Recording (HDDR) – Recording of digital data on
a magnetic medium, having a flux transition density in excess of 15,000
transitions per inch per track.
High Energy Oxide – Any magnetic oxide particle exhibiting a BsHc
product higher than that of gamma ferric oxide. Chromium dioxide and
cobalt-modified oxides are the two most common examples at the present time.
High Energy Tape – A tape made with a high energy oxide.
High Frequency Subcarrier – An information channel added to a television signal where the finest brightness detail is normally transmitted. As
the human visual system is least sensitive to the finest detail, it is unlikely
to be bothered by interference from such a subcarrier. This technique was
first applied to the NTSC color subcarrier; most recently it has been
proposed in Toshiba’s ATV system.
High Level – A range of allowed picture parameters defined by the
MPEG-2 video coding specification which corresponds to high-definition video.
High Line Rate – More than 525 scanning lines per frame.
High Resolution (Hi-Res) – An adjective describing improvement in
image quality as a result of increasing the number of pixels per square inch.
High Resolution Sciences (HRS) – Proponent of the CCF ATV scheme.
HRS plans to offer other ATV schemes, including one using synchronized
electron beam spatial modulation (turning each scanning line into a series
of hills and valleys) in both camera and receiver to achieve increased vertical resolution.
High Sierra Format – A standard format for placing files and directories
on CD-ROM, revised and adopted by the International Standards
Organization as ISO9660.
High-Frequency Distortion – Undesirable variations that occur above the
15.75 kHz line rate.
High-Frequency Interference – Interference effects which occur at high
frequency. Generally considered as any frequency above the 15.75 kHz line rate.
High-Level Language – Problem-oriented programming language, as distinguished from a machine-oriented programming language. A high-level
language is closer to the needs of the problem to be handled than to the
language of the machine on which it is to be implemented.
Highlight – a) In lighting, to add a light which will cause an area to have
more light. b) In switchers, to allow one portion of the video to have a
greater luminance level. c) In screens, monitors, displays, etc., to cause a
word on the display to be brighter, commonly by inverting and surrounding
the word with a box of white video.
Highlight Information (HLI) – This is used to specify button highlights
for menus. HLI contains information on the button number, highlight timing,
palette for sub-picture highlights, coordinates of the button, etc.
Highlighting – In the menu system for DVDs it is necessary to be able to
indicate a menu selection since there is no “computer mouse” available.
This highlighting is accomplished through a wide variety of graphic arts
and post-production techniques coupled with the capabilities provided by
the DVD itself.
Highlights – a) Shiny areas that suggest intense reflections of light
sources. Highlights move when light sources move relative to a surface, but
are independent of all other lighting types. b) Highlights may be applied to
a smooth surface by both Gouraud and Phong shading, but only the latter
computes specular reflections based on the angle between reflected light
from a light source and the eye’s line of sight.
High-Lights – The maximum brightness of the picture, which occurs in
regions of highest illumination.
High-Order – Most significant bits of a word. Typically, bits 8 through 15
of a 16-bit word.
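As an illustration, the high- and low-order bytes of a 16-bit word can be separated with a shift and a mask (a Python sketch; the function names are arbitrary):

```python
def high_order_byte(word: int) -> int:
    """Bits 8 through 15 (the most significant byte) of a 16-bit word."""
    return (word >> 8) & 0xFF

def low_order_byte(word: int) -> int:
    """Bits 0 through 7 (the least significant byte) of a 16-bit word."""
    return word & 0xFF

word = 0x1A2B
print(hex(high_order_byte(word)))  # 0x1a
print(hex(low_order_byte(word)))   # 0x2b
```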
Highpass Filter (HPF) – a) Filter that passes only high frequencies.
b) A circuit that passes frequencies above a specific frequency (the cutoff
frequency). Frequencies below the cutoff frequency are reduced in amplitude to eliminate them.
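A first-order RC-style highpass filter can be sketched digitally as follows (a Python illustration, not part of the glossary; the cutoff and sample rate chosen here are arbitrary):

```python
import math

def highpass(x, fc, fs):
    """First-order highpass: attenuates components below the cutoff fc (Hz)."""
    rc = 1.0 / (2 * math.pi * fc)   # analog RC time constant for this cutoff
    dt = 1.0 / fs
    a = rc / (rc + dt)
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(a * (y[-1] + x[n] - x[n - 1]))
    return y

fs = 1000
t = [n / fs for n in range(1000)]
low = [math.sin(2 * math.pi * 5 * v) for v in t]     # 5 Hz, below the cutoff
high = [math.sin(2 * math.pi * 200 * v) for v in t]  # 200 Hz, above the cutoff
print(max(abs(v) for v in highpass(low, 50, fs)))    # heavily attenuated
print(max(abs(v) for v in highpass(high, 50, fs)))   # passed nearly intact
```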
High-Speed Shutter – A feature on video cameras and camcorders that
allows detail enhancement of fast-moving objects by electronically dividing
the CCD into imaging sections.
HIIP (Host Image Independence Protocol) – A registered trademark of
Avid Technology, Inc. HIIP allows the Avid system to import and export files
in various standard formats. Also called Image Independence.
HIIP Folder – The folder containing files that support the host image
independence protocol.
HILN (Harmonic Individual Line and Noise) – A parametric coding
scheme for coding of general audio signals for low bit-rates provided by
the MPEG-4 standard.
HIPPI (High Performance Parallel Interface) – A parallel data channel
used in mainframe computers that supports data transfer rates of 100
Mbytes per second.
Hiss – The most common audible noise component in audio recording,
stemming from a combination of circuit and tape noise. Several noise
reduction systems are available, such as Dolby™, DBX, DNR (Dynamic
Noise Reduction), DNL (Dynamic Noise Limiter), to help alleviate such noise.
Horizontal Displacements – Describes a picture condition in which the
scanning lines start at relatively different points during the horizontal scan.
See Serrations and Jitter.
Histogram – A bar graph used in the keyer to adjust the values of the red,
green, blue and luminance channels of an image when you create a matte.
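A minimal sketch of such a tally for 8-bit luminance values (Python, for illustration only; the bin count is arbitrary):

```python
def luma_histogram(pixels, bins=8):
    """Tally 8-bit luminance values into equal-width bins (a bar-graph count)."""
    counts = [0] * bins
    width = 256 // bins
    for p in pixels:
        counts[min(p // width, bins - 1)] += 1
    return counts

# A mostly dark set of sample pixels: counts pile up in the low bins.
print(luma_histogram([10, 20, 30, 40, 200, 220, 15, 25]))  # [5, 1, 0, 0, 0, 0, 2, 0]
```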
Hit – See Flash.
Hitachi – Proponent of the FUCE ATV scheme and enhanced versions of NTSC.
Hi-Vision – Japanese term for HDTV.
HLO-PAL (Half-Line Offset PAL) – An early NHK proposal for an ATV
transmission scheme.
HLS (Hue, Luminance and Saturation) – A color model based on human
perception of colors.
Hold Time – The time data must be stable following the completion of a
write signal.
Holdback Tension – Tension applied by the supply turntable to hold the
tape firmly against the heads.
Hole – a) In modeling a 3D world, it is often necessary to create polygons
and solids which literally have holes in them. PictureMaker can make 2D
holes in individual surfaces and drill 3D holes through convex portions of
closed solids. b) A volume in the three-dimensional NTSC spectrum into
which an auxiliary sub-channel can be placed with minimal impairment.
Holes are found where horizontal, vertical, and temporal detail are simultaneously high. The most famous hole is the Fukinuki hole, but the most
common hole is the one carrying the NTSC color subcarrier.
Home Directory – The directory into which IRIX places you each time
you log in. It is specified in your login account; you own this directory and,
typically, all its contents.
Horizontal (Hum) Bars – Relatively broad horizontal bars, alternately
black and white, which extend over the entire picture. They may be stationary, or may move up or down. Sometimes referred to as a “Venetian blind”
effect. Caused by an interfering frequency of approximately 60 cycles, or one of its
harmonic frequencies.
Horizontal Blanking – a) Includes the entire time between the end of
the active picture time of one line and the beginning of the active picture
time of the next line. It extends from the start of front porch to the end of
back porch. b) The video synchronizing signal before and after each active
television line that defines the border or black area at the left and right
side of the display. In a CRT it hides (blanks out) the electron beam’s
retrace path as it returns from the right to the left of the display to begin
scanning a new line.
Horizontal Blanking Interval (HBI) – That portion of the scanning line
not carrying a picture. In NTSC, the HBI carries a synchronizing pulse and
a color reference signal. Some scrambling and other systems add sound
and/or data signals to the HBI. Some ATV schemes fill it with widescreen
panel or detail enhancement signals. See also Blanking and Blanking Stuffing.
Horizontal Drive – A pulse at the horizontal sweep rate used in TV cameras. Its leading edge is coincident with the leading edge of the horizontal
sync pulse and the trailing edge is coincident with the leading edge of the
burst flag pulse.
Horizontal Interval – The time period between lines of active video. Also
called Horizontal Blanking Interval.
Horizontal Lock – A subsystem in a video receiver/decoder which detects
horizontal synchronizing pulses, compares them with the on-board video
clock in the video system and uses the resultant data to stabilize the
incoming video by re-synching to the system clock. In the case of severe
horizontal instability, a large FIFO memory may be required to buffer the
rapid line changes before they are compared and re-synchronized.
Horizontal Resolution – a) Rating of the fine detail (definition) of a TV
picture, measured in scan lines. The more lines, the higher the resolution
and the better the picture. A standard VHS format VCR produces 240 lines
of horizontal resolution, while over 400 lines are possible with S-VHS,
S-VHS-C, and Hi-8 camcorders. b) Detail across the screen, usually specified as the maximum number of alternating white and black vertical lines
(line of resolution) that can be individually perceived across the width of a
picture, divided by the aspect ratio. This number is usually expressed as
TV lines per picture height. The reason for dividing by the aspect ratio and
expressing the result per picture height is to be able to easily compare
horizontal and vertical resolution. Horizontal chroma resolution is measured
between complementary colors (rather than black and white) but can
vary in some systems (such as NTSC), depending on the colors chosen.
Horizontal resolution in luminance and/or chrominance can vary in some
systems between stationary pictures (static resolution) and moving pictures (dynamic resolution). It is usually directly related to bandwidth.
Horizontal Retrace – The return of the electron beam from the right to
the left side of the raster after the scanning of one line.
Horizontal Scan Frequency – The frequency at which horizontal sync
pulses start the horizontal retrace for each line. A high frequency is
needed for a non-interlaced scan. The horizontal sync frequency for NTSC
is 15.75 kHz.
Horizontal Scan Rate – The rate at which the screen’s scanning beam is
swept from side to side. For (M) NTSC systems, the line period is 63.556 µs,
corresponding to a scan rate of 15.734 kHz.
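The 15.734 kHz figure follows directly from NTSC frame timing, as this small calculation shows (Python, for illustration only):

```python
# NTSC-M: 525 lines per frame at a 30000/1001 (approximately 29.97) Hz frame rate.
line_rate_hz = 525 * 30000 / 1001     # lines scanned per second
line_period_us = 1e6 / line_rate_hz   # duration of one line in microseconds
print(round(line_rate_hz, 2))    # 15734.27
print(round(line_period_us, 3))  # 63.556
```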
Horizontal Sync – The -40 IRE (NTSC) or the –300 mV (PAL) pulse
occurring at the beginning of each line. This pulse signals the picture
monitor to go back to the left side of the screen and trace another
horizontal line of picture information. The portion of the video signal that
occurs between the end of one line of signal and the beginning of the
next. A negative going pulse from the blanking signal used to genlock
(synchronize) equipment. It begins at the end of front porch and ends at
the beginning of back porch.
Horizontal Sync Pulse – See Horizontal Sync.
Horizontal Timing – The timing relationships among sync, blanking and burst within each scan line. The printed glossary illustrates these with the RS-170A and FCC horizontal sync pulse width requirement diagrams; the numeric annotations of those diagrams are not reproduced here.
Host – a) Any system connected to the network. b) A device where module(s) can be connected, for example: an IRD, a VCR, a PC.
Host Bus – Computer system bus to which a card is connected by insertion in the appropriate slot. This will be either a PCI, an EISA or an ISA bus.
Hostname – The name that uniquely identifies each host (system) on the network.
Hot Signal – When a video signal exceeds the limitations of a display, color bleeding and over-saturation can occur. This is referred to as a hot signal. Computer graphics are able to display a wider range of color than video. It is important to keep this in mind when performing image processing functions destined for video. It is often necessary to perform a dynamic range function, or similar, to limit the color range.
House Sync – a) The black burst signal used to synchronize all the devices in the studio or station. b) Sync generated within the studio and used as a reference for generating and/or timing other signals (i.e., sync gens).
HPF – See Highpass Filter.
HQTV (High Quality TV) – Another term for HDTV.
HRS – See High Resolution Sciences.
HSB – See Hue, Saturation and Brightness.
HSI – See Hue, Saturation and Intensity.
HSL – See Hue, Saturation and Lightness.
HSM (Hierarchical Storage Management) – HSM systems transparently migrate files from disk to optical disk and/or magnetic tape that is usually robotically accessible. When files are accessed by a user, HSM systems transparently move the files back to disk.
HSV – See Hue, Saturation and Value.
HSV Space – The three numbers are hue, saturation and value. The solid is a cone. Also called HSI.
HSYNC – See Horizontal Synchronization or Sync.
HTTP (HyperText Transfer Protocol) – The protocol used by Web browsers and Web servers to transfer files, such as text and graphics.
Hue – a) A color wheel of basic pigments. All the hues of the rainbow
encircle the cone’s perimeter. b) The wavelength of the color which allows
color to be distinguished such as red, blue and green. Often used synonymously with the term tint. It is the dominant wavelength which distinguishes a color such as red, yellow, etc. Most commonly, video hue is influenced
by a camera’s white balance or scene lighting. Video color processors,
such as the Video Equalizer, are the main tools used to adjust and correct
hue problems. c) One of the three characteristics of television color. Hue is
the actual color that appears on the screen. See Chroma and Luminance.
d) Attribute of a visual sensation according to which an area appears to be
similar to one of the perceived colors, red, yellow, green, and blue, or to a
combination of two of them.
Hue, Saturation and Brightness (HSB) – With the HSB model, all colors
can be defined by expressing their levels of hue (the pigment), saturation
(the amount of pigment) and brightness (the amount of white included),
in percentages.
Hue, Saturation and Intensity (HSI) – Color space system based on the
values of Hue, Saturation and Intensity. Intensity, analogous to luma, is the
vertical axis of the polar system. The hue is the angle and the saturation is
the distance out from the axis.
Hue, Saturation and Lightness (HSL) – Nearly identical to HSI except
Intensity is called Lightness. Both serve the same function.
Hue, Saturation and Value (HSV) – Nearly identical to HSI and HSL
except Intensity and Lightness are called Value. All three serve the same function.
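Python’s standard `colorsys` module implements two of these closely related models and illustrates the difference between them (pure red shown; all values are on a 0 to 1 scale):

```python
import colorsys

r, g, b = 1.0, 0.0, 0.0                  # pure red
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)                           # 0.0 1.0 1.0 (hue 0, full saturation and value)
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)
print(h2, l, s2)                         # 0.0 0.5 1.0 (a pure hue sits at lightness 0.5)
```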
Huffman Coding – Method of data compression that is independent of the
data type. The data could represent an image, audio or spreadsheet. This
compression scheme is used in JPEG and MPEG-2. Huffman Coding works
by looking at the data stream that makes up the file to be compressed.
Those data bytes that occur most often are assigned a small code to represent them. Data bytes that occur the next most often have a slightly larger code to represent them. By assigning short codes to frequently occurring
characters and longer codes to infrequently occurring characters, Huffman
minimizes the average number of bytes required to represent the characters in a text. Static Huffman encoding uses a fixed set of codes, based
on a representative sample of data with a single pass through the data.
Dynamic Huffman encoding, on the other hand, reads each text twice; once
to determine the frequency distribution of the characters in the text and
once to encode the data. The codes used for compression are computed
on the basis of the statistics gathered during the first pass with compressed texts being prefixed by a copy of the Huffman encoding table for
use with the decoding process. By using a single-pass technique, where
each character is encoded on the basis of the preceding characters in a
text, Gallager’s adaptive Huffman encoding avoids many of the problems
associated with either the static or dynamic method.
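The static scheme described above can be sketched with a frequency-sorted merge (a toy Python illustration using a heap; this is not the actual JPEG/MPEG table construction):

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Static Huffman: frequent characters receive shorter bit codes."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreak, {char: code-so-far}).
    heap = [(n, i, {ch: ""}) for i, (ch, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # merge the two least frequent subtrees
        n2, i, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, i, merged))
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(sorted(codes.items()))  # 'a', the most frequent character, gets the shortest code
```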
Hum – Undesirable coupling of 50 Hz (PAL) or 60 Hz (NTSC) power sine
wave into other electrical signals.
Hum Bug – Another name for a ground loop corrector.
Human Factors Guidelines – A set of standards and suggestions for
making the working environment more comfortable and healthy.
HUT (Households Using Television) – An estimate of the number
of households within a specified coverage area which are viewing any
television programming during a specified time.
HVS (Human Visual System) – Eyes and brain.
HVT (Horizontal, Vertical and Temporal) – The three axes of the spatiotemporal spectrum.
HVXC (Harmonic Vector Excitation Coding) – Harmonic Vector
Excitation Coding (HVXC) enables the representation of speech signals at
very low bit rates. The standard defines two HVXC bit rates: 2 kbps and
4 kbps. Unlike the code excited linear prediction (CELP) speech coder,
HVXC is a parametric coding system, which means that certain aspects of
the coded representation can be manipulated independently. For example,
the playback speed of a HVXC-encoded bitstream can be altered without
affecting the pitch of the voice. Similarly, the pitch of the voice can be
modified without altering playback speed. HVXC is useful for a variety of
synthetic speech applications in bandwidth-constrained environments.
Hybrid CD-ROM – A single disc containing files for both a Windows PC
and a Macintosh. See CD-ROM.
Hybrid Coder – In the archetypal hybrid coder, an estimate of the next
frame to be processed is formed from the current frame and the difference
is then encoded by some purely intraframe mechanism. In recent years, the
most attention has been paid to the motion compensated DCT coder where
the estimate is formed by a two-dimensional warp of the previous frame
and the difference is encoded using a block transform (the Discrete Cosine
Transform). This system is the basis for international standards for video
telephony, is used for some HDTV demonstrations, and is the prototype
from which MPEG was designed. Its utility has been demonstrated for video
sequences: motion compensation removes much of the temporal redundancy, and the DCT concentrates the remaining energy into a small
number of transform coefficients that can be quantized and compactly represented. The key feature of this coder is the presence of a complete
decoder within it. The difference between the current frame as represented
at the receiver and the incoming frame is processed. In the basic design,
therefore, the receiver must track the transmitter precisely: the decoder at
the receiver and the decoder at the transmitter must match. The system is
sensitive to channel errors and does not permit random access. However, it
is on the order of three to four times as efficient as one that uses no prediction. In practice, this coder is modified to suit the specific application.
The standard telephony model uses a forced update of the decoded frame
so that channel errors do not propagate. When a participant enters the
conversation late or alternates between image sources, residual errors die
out and a clear image is obtained after a few frames. Similar techniques
are used in versions of this coder being developed for direct satellite television broadcasting.
Hybrid Coding – The basic coding process used by current international
standards for video telephony and MPEG. This predictive coding reduces
decoder processing and storage and also gives reasonable compression
and adaptability. A key feature is that a decoder is embedded in the
encoder architecture.
Hybrid Editing – Combining nonlinear edited video files with linear (deck-to-deck) segments of footage.
Hybrid Fiber Coaxial – a) Hybrid fiber coaxial network is a combination
of fiber optic cable and coaxial cable with bandwidth for video distribution
and communications. b) Cable TV technology that provides two-way,
high-speed data access to the home using a combination of fiber optics
and traditional coaxial cable.
Hybrid Filterbank – A serial combination of Sub-band filterbank and
MDCT in MPEG audio.
Hybrid Scalability – The combination of two or more types of scalability.
Hybrid Wavelet Transform – A combination of wavelet and transform
algorithms within the same compression technology.
Hydrolysis – The chemical process in which scission of a chemical bond
occurs via reaction with water. The polyester chemical bonds in tape binder
polymers are subject to hydrolysis, producing alcohol and acid end groups.
Hydrolysis is a reversible reaction, meaning that the alcohol and acid
groups can react with each other to produce a polyester bond and water
as a by-product. In practice, however, a severely degraded tape binder
layer will never fully reconstruct back to its original integrity when placed
in a very low-humidity environment.
Hypercardioid – A directional pickup pattern where maximum discrimination occurs at more than 90 and less than 180 degrees off axis.
Hyper-HAD – An improved version of the CCD HAD technology, utilizing
on-chip micro-lens technology to provide increased sensitivity without
increasing the pixel size.
I – Improved or Increased; also the in-phase component of the NTSC
color subcarrier, authorized to have more than twice as much horizontal
resolution as the Q, or quadrature component. Few TV sets have ever
taken advantage of this increased chroma resolution, though there is
renewed interest.
I, W, Q, B – An NTSC test signal used to check television broadcast equipment. It consists of an I signal followed by a white bar, then a Q signal and
a black level on each line.
I/O – See Input/Output.
I/O Device – Input/output equipment used to send information or data
signals to and from an editing computer.
I/O Mapped I/O – I/O devices that are accessed by using instructions and
control signals that differ from those of the memory devices in a system.
Assigns I/O devices to a separate address space.
I/Q – In Phase/Quadrature Phase.
I2C (Inter-Integrated Circuit) – Bidirectional, two-wire bus for interconnecting integrated circuits, capable of transmitting 100 kbits/sec in normal
mode or 400 kbits/sec in fast mode. In conjunction with a processor it can
be used to control TV reception, TV decoders/encoders, AD or DA conversion. In audio it can be used to control tone, volume, AD or DA conversion,
amplification, etc.
I720 – Name of the programmable video processor family from Intel.
IB (In-Band)
IBA – Britain’s Independent Broadcasting Authority, home of a great deal of ATV research.
IBE (Institution of Broadcast Engineers)
IBM – Member of the AEA ATV Task Force; also one of the first organizations to suggest sub-sampling as a technique for compatibly increasing resolution.
IBO (Input Back-Off) – The ratio of the signal power measured at the input to a high power amplifier to the input signal power that produces the maximum signal power at the amplifier’s output. The input back-off is expressed in decibels as either a positive or negative quantity. It can be applied to a single carrier at the input to the HPA (carrier IBO), or to the ensemble of input signals (total IBO).
IC (Integrated Circuit) – A small device incorporating the equivalent of hundreds or thousands of transistors, capacitors, resistors and other components within a small, solid block.
IC (Interaction Channel)
ICC (International Color Consortium) – Established in 1993 by eight industry vendors for the purpose of creating, promoting and encouraging the standardization and evolution of an open, vendor-neutral, cross-platform color management system architecture and components.
ICCE (International Conference on Consumer Electronics) – Sponsored by the Consumer Electronics Society of the IEEE and held annually in the Chicago area immediately following CES. ATV has become an increasingly important topic at ICCE.
Icon – A small picture that represents a stowed or closed file, directory, application, or IRIX process.
Iconoscope – A camera tube in which a high velocity electron beam scans a photo-emissive mosaic which has electrical storage capability.
ICPM (Incidental Carrier Phase Modulation) – A transmission defect most noticeable as a cause of sync buzz.
ID (Identification Data) – 32-bit field identifying the sector number within the disc volume.
IDE (Integrated Development Environment) – An integrated development environment (IDE) is a programming environment that has been packaged as an application program, typically consisting of a code editor, a compiler, a debugger, and a graphical user interface (GUI) builder. The IDE may be a standalone application or may be included as part of one or more existing and compatible applications. The BASIC programming language, for example, can be used within Microsoft Office applications, which makes it possible to write a WordBasic program within the Microsoft Word application. IDEs provide a user-friendly framework for many modern programming languages, such as Visual Basic, Java, and PowerBuilder.
IDE (Interface Device Electronics) – Software and hardware communication standard for interconnecting peripheral devices to a computer.
IDTV – See Improved Definition Television.
IEC (International Electrotechnical Commission) – The IEC and its affiliated International Organization for Standardization (ISO) are the two major global standards-making groups. They are concerned with establishing standards that promote interchange of products, agreement upon methods of evaluation, and resolution of nonfunctional differences among national standards. They are structured as an international federation of the more than 50 national standards organizations. The USA is represented by the American National Standards Institute (ANSI).
IEC 60461 – Defines the longitudinal (LTC) and vertical interval timecode (VITC) for NTSC and PAL video systems. LTC requires an entire field time to transfer timecode information, using a separate track. VITC uses one scan line each field during the vertical blanking interval.
IEC 60958 – Defines a serial digital audio interface for consumer (SPDIF) and professional applications.
IEC 61834 – Defines the DV standard.
IEC 61880 – Defines the widescreen signaling (WSS) information for NTSC video signals. WSS may be present on lines 20 and 283.
IEC 61883 – Defines the methods for transferring data, audio, DV and MPEG-2 data per IEEE 1394.
IEC 62107 – Defines the Super VideoCD standard.
IEEE – See International Electrical and Electronic Engineers.
IEEE 1394 – A high-speed “daisy-chained” serial interface. Digital audio,
video and data can be transferred with either a guaranteed bandwidth or
a guaranteed latency. It is hot-pluggable, and uses a small 6-pin or 4-pin
connector, with the 6-pin connector providing power.
IEEE P1394 (FireWire) – A low-cost digital interface organized by Apple
Computer as a desktop LAN and developed by the IEEE P1394 Working
Group. This interface can transport data at 100, 200 or 400 Mbps. Serial
bus management provides overall configuration control of the serial bus in
the form of optimizing arbitration timing, guarantee of adequate electrical
power for all devices on the bus, assignment of which IEEE P1394 device
is the cycle master, assignment of isochronous channel ID and notification
of errors. There are two types of IEEE P1394 data transfer: asynchronous
and isochronous. Asynchronous transport is the traditional computer memory-mapped, load and store interface. Data requests are sent to a specific
address and an acknowledgment is returned. In addition to an architecture
that scales with silicon technology, IEEE P1394 features a unique isochronous data channel interface. Isochronous data channels provide guaranteed
data transport at a predetermined rate. This is especially important for
time-critical multimedia data where just-in-time delivery eliminates the
need for costly buffering.
IEEE Standard 511-1979 Video Signal Transmission Measurement of
Linear Waveform Distortions – This IEEE standard gives a comprehensive technical discussion of linear waveform distortions.
IETF (Internet Engineering Task Force) – One of the task forces of the
Internet Activities Board (IAB). The IETF is responsible for solving the short-term engineering needs of the Internet. It has over 40 working groups.
I-ETS (Interim European Telecommunications Standards) – An interim standard issued by the ETSI.
IIR (Infinite Impulse Response) – A recursive digital filter that typically requires fewer coefficients
than a FIR filter but on the other hand it can become unstable since part of
the output is fed back to the input. A common way to express the IIR is:
y(n) = x(n) + y(n-1)
i.e., present output = present input + previous output, where n = time
interval; x = input; y = output.
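The example recursion y(n) = x(n) + y(n-1) is easy to demonstrate; note how the fed-back output never decays, which is exactly the stability concern the definition mentions (a Python sketch):

```python
def iir_accumulate(x):
    """Implements y(n) = x(n) + y(n-1): each output feeds back into the next."""
    y = []
    prev = 0
    for sample in x:
        prev = sample + prev  # present input + previous output
        y.append(prev)
    return y

print(iir_accumulate([1, 2, 3, 4]))  # [1, 3, 6, 10] – a single impulse is never forgotten
```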
IIT (Illinois Institute of Technology) – Home of most of the research
into the SLSC ATV scheme.
Ikegami – Broadcast equipment manufacturer involved in a number of ATV
schemes, including production of HDEP equipment to the SMPTE standard
and schemes involving the use of a line doubler either before or after transmission.
iLink – Sony’s name for their IEEE 1394 interface.
Illegal Video – a) A video signal that falls outside the appropriate gamut
for that format. For instance, the gamut limits for an R’, G’, B’ signal are 0
mV to 700 mV and Y’ is 0 mV to 700 mV and P’b/P’r are +/-350 mV. If the
signal falls outside of these limits it is an illegal value. b) Some colors that
exist in the RGB color space can’t be represented in the NTSC and PAL
video domain. For example, 100% saturated red in the RGB space (which
is the red color on full strength and the blue and green colors turned off)
can’t exist in the NTSC video signal, due to color bandwidth limitations.
The NTSC encoder must be able to determine that an illegal color is being
generated and stop that from occurring, since it may cause over-saturation
and blooming.
Illuminance – Quotient of the luminous flux dΦv incident on an element of
the surface containing the point by the area dA of the element. The term also is commonly used in a qualitative or general sense to designate
the act of illuminating or the state of being illuminated. Units of illuminance
are the lux and the footcandle.
IFFT (Inverse FFT) – Analytical or digital signal processing step that
converts frequency domain information into a time domain sequence.
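As a minimal illustration (using numpy's FFT routines; the signal and its length are made up), a frequency-domain spectrum can be inverse-transformed back into the time-domain sequence it came from:

```python
import numpy as np

# Build a simple time-domain signal, transform it to the frequency
# domain with the FFT, then recover it with the inverse FFT (IFFT).
t = np.arange(64) / 64.0                  # 64 samples, one period
signal = np.cos(2 * np.pi * 4 * t)        # a 4-cycle cosine
spectrum = np.fft.fft(signal)             # time -> frequency domain
recovered = np.fft.ifft(spectrum).real    # frequency -> time domain (IFFT)

round_trip_error = np.max(np.abs(recovered - signal))
```

The round-trip error is at floating-point noise level, showing that the FFT/IFFT pair is (numerically) lossless.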
IM4DTTV (Integrated Modem for Digital Terrestrial TV) – The
IM4DTTV project (2001-2004) aims at demonstrating the feasibility of an
integrated DVB-RCT end-to-end solution (base station and user terminal),
able to meet the technical and cost requirements of the forthcoming
terrestrial interactive TV services.
I-Frame (Intra Frame) – One of the three types of frames that are used
in MPEG-2 coded signals. The frame in an MPEG sequence, or GOP (Group
of Pictures), that contains all the data to recreate a complete image. The
original information is compressed using DCT.
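As an illustrative sketch of the DCT step (an orthonormal 8x8 DCT-II built by hand with numpy; the pixel values are made up, and real MPEG-2 coders add quantization and entropy coding on top):

```python
import numpy as np

# Each 8x8 block of pixels is transformed into 64 frequency coefficients.
N = 8
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                 # DC basis row (orthonormal)

block = np.full((N, N), 128.0)             # a flat (constant) pixel block
coeffs = C @ block @ C.T                   # 2D DCT: rows, then columns

# A flat block compresses perfectly: all energy lands in the DC term.
dc = coeffs[0, 0]
ac_energy = np.sum(np.abs(coeffs)) - abs(dc)
```

For this constant block the DC coefficient carries everything and all AC coefficients are (numerically) zero, which is exactly why the DCT concentrates energy so well on smooth image areas.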
IMA (Interactive Multimedia Association) – IMA has been active in the
definition of the DVD through its DVD Special Interest Group (IMA DVD SIG).
The IMA DVD SIG is a committee of DVD manufacturers working for interactive DVDs by establishing requirements and influencing specifications.
IGMP (Internet Group Management Protocol) – This protocol is used by IP hosts to report their multicast group memberships to neighboring multicast routers.
IMA ADPCM – The IMA has selected the 4:1 ADPCM audio compression
scheme from Intel’s DVI as the preferred compressed audio data type for
interactive media platforms. Intel had offered the algorithm as an open
standard to the IMA. The algorithm compresses 16-bit audio data at up to
44.1 kHz sampling into 4-bit ADPCM words.
IF (Intermediate Frequency) – The first stage in converting a broadcast
television signal into baseband video and audio.
IIM (Interactive Interface Module)
IIOP (Internet Inter-ORB Protocol) – The CORBA message protocol
used on a TCP/IP network (Internet, intranet, etc.). CORBA is the industry
standard for distributed objects, which allows programs (objects) to be
run remotely in a network. IIOP links TCP/IP to CORBA’s General Inter-ORB
protocol (GIOP), which specifies how CORBA’s Object Request Brokers
(ORBs) communicate with each other.
IIR (Infinite Impulse Response) – A type of digital filter which has an
infinite output response, as opposed to a FIR filter with a finite output
response. It usually needs fewer coefficients than a FIR filter to define signal performance, but on the other hand it can become unstable since part of
the output is fed back to the input. A common way to express the IIR is:
y(n) = x(n) + y(n-1)
i.e., present output = present input + previous output, where n = time
interval; x = input; y = output.
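A first-order IIR filter can be sketched in a few lines. This hypothetical example adds a feedback gain a (with a = 1 it is exactly the accumulator y(n) = x(n) + y(n-1); with |a| >= 1 the filter is unstable):

```python
# First-order IIR (recursive) filter: present output depends on the
# present input plus the fed-back previous output.
def iir_first_order(x, a=0.5):
    y, prev = [], 0.0
    for sample in x:
        prev = sample + a * prev        # y(n) = x(n) + a * y(n-1)
        y.append(prev)
    return y

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
response = iir_first_order(impulse)     # decays forever, never exactly zero
```

The impulse response halves at every step but never reaches zero, which is the "infinite" in Infinite Impulse Response.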
Image – A (usually) two-dimensional picture. The picture may be represented in digital form or mathematically as a set of planes
in two dimensions. The two dimensions are the resolution in X and Y
(columns, lines). The origin (0, 0) of the image is sometimes its lower left
corner. There are four basic types of images: black & white or color, mask
or no mask, Z plane or no Z plane, IPR information or no IPR information.
Image Buffer – See Frame Buffer.
www.tektronix.com/video_audio 113
Image Capture – The transducing of the information in a real image into
the photographic or electronic medium. Normally in motion-reproducing
systems, synchronous audio information is simultaneously transduced.
Image Compression – a) Process used to reduce the amount of
memory required to store an image. See JPEG, MPEG and Decimation.
b) Application of an appropriate transfer function to the image signal
so as to limit dynamic range. c) Application of bandwidth limiting or bit
rate reduction to an image signal in order to bring it within the limitations
of a lower capacity channel.
Image Enhancement – a) Techniques for increasing apparent sharpness
without increasing actual resolution. This usually takes the form of increasing the brightness change at edges. Since image enhancement has
advanced continuously for nearly 50 years, ordinary NTSC pictures sometimes look better than the NTSC pictures derived from an HDEP source,
particularly when these derived pictures are designed to be augmented by
other signals in an ATV receiver. It is very difficult to enhance pictures for
NTSC receivers and then unenhance them for receivers with augmentation.
b) Once the camera response has been made flat to 400 lines (by aperture
correction), an additional correction is applied to increase the depth of
modulation in the range of 250 to 300 lines (in an NTSC system), both
vertically and horizontally. This additional correction, known as image
enhancement, produces a correction signal with symmetrical overshoots
around transitions in the picture. Image enhancement must be used very
sparingly, if natural appearance is to be maintained.
Image Enhancer – A device used to sharpen transition lines in a video image.
Image File – A format for storing digital images. To save disk space,
images are compressed in a binary file. The image format is contained in
a file header which is read by all the programs. The header contains: the
image name, the resolution, the type of image.
Image File Architecture – The Digital Information Exchange Task Force
(SMPTE, IEEE, ATSC) on digital image architecture has as its goal the multidisciplinary agreement upon and the definition of fully flexible, interoperable, scalable, and extensible systems. The objective is agreement on the
structure of digital image files that will facilitate the exchange of such files
across the technology interfaces. The scope includes both the rapid, unambiguous but concise identification of the file and its utilization, as well as
the organization of the image data itself.
Image File Descriptor – The descriptor is a block of data that enhances
the utility of the main data for the user. It may contain, in standardized format, data concerning production, ownership, access, previous processing,
etc., relevant to the basic interpretation of the data.
Image File Header – The header is a very compact label that can be
decoded by a universally accepted algorithm. Specific objectives are:
identify encoding standard, specify length of the file, indicate whether a
readable descriptor is included, permit random interception of data stream,
and offer optional error protection.
Image File Header/Descriptor – A standard introductory identification
directing access to a digital image file. The header provides a brief image
file identification, universally decodable, indicating the format and length of
the data block. The (optional) descriptor conveys additional information
improving the usefulness of the data block to the user, such as cryptographic, priority, or additional error-protection information as well as
source, time, authorship, ownership, restrictions on use, processing
performed, etc.
Image File Motion-Picture Format – SMPTE Working Group H19.16
has proposed SMPTE Standard H19.161 defining the file format for the
exchange of digital motion-picture information on a variety of media
between computer-based systems. This flexible file format describes pixel-based (raster) images with attributes defined in the binary file descriptor,
which identifies: generic file information, image information, data format,
image orientation information, motion-picture and television industry-specific information, and user-defined information. The draft assumes non-real-time application, with formats for real-time to be considered as the
developing technology permits.
Image File Video Index – Proposed descriptor developed by SMPTE
Working Group P18.41. This proposed SMPTE recommended practice is
intended to provide a method of coding video index information in which
various picture and program related source data can be carried in conjunction with the video signal. There are three classes of video index data
based on type and use of the data. Class 1: Contains information that is
required to know how to use the signal. Class 2: Contains heritage information for better usage of the signal. Class 3: Contains other information
not required to know how to use the signal.
Image Generation – The creation of an image in the photographic or
electronic medium from an image-concept (painted or generated by
computer graphics, for example).
Image Independence – See HIIP.
Image Innovator – An optional package which adds additional flags and
menus to ADO 100, including Mosaics, Posterization, Solarization and Mask
submenu, Target Defocus flag and Background menu, Border flags and
Sides submenu.
Image Pac – A multi-resolution image file format developed by Kodak as
part of the Photo CD System.
Image Processing, Digital – Digital images are represented by a stream,
currently of 8-bit or 10-bit values representing the luminance and chrominance information, or a stream of 8-bit or 10-bit values representing the
R’, G’, and B’ information. Image processing sometimes involves multiplication of each digital word by: its proportional contribution to the
processed image, a vector to relocate the pixel, an algorithm to change
overall image size. To control these processes, additional information may
be carried in the alpha channel synchronized to the image. As an example
of the process, if an 8-bit sample is multiplied by an 8-bit factor, the
product becomes a 16-bit word. At some point, this may have to be rounded or truncated back to 8 bits for the next operation. This introduces slight
discrepancies in the result which may be visible as ragged edges, color
bleeding, etc. If successive truncations are performed during a sequence
of image processing steps, the artifacts frequently become increasingly
visible. Good practice calls for maintaining some or all of the “extra bits”
throughout as much of the image processing as the facilities permit.
Experience has shown that digital image processing provides the fewest
distracting artifacts when the R’, G’, B’ signals are first converted to the
linear R, G, B. For complex image processing, and for critical results,
the 8-bit encoding may be replaced by 10 bits (or more if that can be
supported).
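The effect of truncating intermediate products can be seen numerically. This made-up sketch multiplies an 8-bit sample by three 8-bit gain factors, truncating to 8 bits after each step versus keeping the full-width product:

```python
# Multiplying an 8-bit sample by an 8-bit factor yields a 16-bit product;
# truncating back to 8 bits after every step discards precision.
sample = 200                                   # 8-bit pixel value
gains = [180, 230, 250]                        # 8-bit factors (x/256 scaling)

truncated = sample
for g in gains:
    truncated = (truncated * g) >> 8           # drop low bits each step

wide = sample
for g in gains:
    wide = wide * g                            # keep full-width products
wide >>= 8 * len(gains)                        # single truncation at the end

error = wide - truncated                       # accumulated truncation loss
```

Even in this tiny chain the repeated truncation loses a code value relative to keeping the extra bits, which is the practice the entry recommends.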
Image Quality Evaluation, Interval-Scaled – For comparisons of perceived image quality among significantly different systems, a requirement
frequently encountered in electronic production, the technique of interval-scaling is recommended by most students of psycho-physics. Interval
scaling gives some indication of the magnitude of preference for one
system over another. Observers are asked to place a numerical value upon
the perceived differences (either in total or with regard to a specified
characteristic such as noise, resolution, color rendition, etc.).
Image Quality Evaluation, Ordinal-Scaled – For comparisons of perceived image quality resulting from a controlled variant within a single
system, a requirement encountered when fine-tuning a system, the technique of ordinal-scaling is frequently employed. The ordinal-scale indicates
that one image is preferred over another. Observers are asked to evaluate
perceived image quality on an established scale, usually of five levels,
from excellent to unacceptable. Correlations among isolated tests are
sometimes uncertain.
Image Quality Evaluation, Ratio-Scaled – When images that differ
significantly in creation, display, and content are being compared and interval-scaling becomes necessary, interpretation of the results becomes more
and more complex as the number of observers is increased. Ratio-scaling
provides a means for correlating multiple observations and multiple data
sources. Observers are asked to assign a numerical value to perceived
image quality (either in total or with regard to a specified characteristic
such as noise, resolution, color rendition, etc.). They are also asked to
identify numerical values for the best possible image, and the completely
unacceptable image. Each is allowed to choose a numerical scale with
which the observer feels most comfortable. The relationship between the
value for the test image and the two extremes provides a useful ratio.
Analyses involving comparisons among observers, comparisons with other
systems, correlation of results obtained over periods of time, etc., are
made by normalizing each observer’s scale (for example, best possible =
100, completely unacceptable = 0).
Image Quality, Objective – The evaluation obtained as a result of
objective measurement of the quantitative image parameters (including
tone scale, contrast, linearity, colorimetry, resolution, flicker, aliasing,
motion artifacts, etc.)
Image Quality, Perceived – The evaluation obtained as a result of
subjective judgment of a displayed image by a human observer.
Image Resolution – The fineness or coarseness of an image as it was
digitized, measured in Dots Per Inch (DPI), typically from 200 to 400 DPI.
Image Scaling – The full-screen video image must be reduced to fit into
a graphics window (usually a fraction of the total computer display area),
while at the same time maintaining a clear and complete image. To do this,
it is important to remove or avoid visual artifacts and other “noise” such
as degradation caused by pixel and line dropping, and interlacing problems
from the scaling process. The challenges increase when dealing with
moving images and the compression/decompression of large amounts of
video data.
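The pixel/line-dropping versus filtering trade-off can be sketched with a made-up 4x4 image, halved both ways:

```python
import numpy as np

# Two ways to halve an image: dropping every other pixel/line (cheap but
# aliasing-prone) versus averaging 2x2 neighborhoods (a crude filter).
image = np.array([[10, 20, 30, 40],
                  [50, 60, 70, 80],
                  [90, 100, 110, 120],
                  [130, 140, 150, 160]], dtype=float)

dropped = image[::2, ::2]                 # pixel/line dropping
averaged = (image[0::2, 0::2] + image[0::2, 1::2] +
            image[1::2, 0::2] + image[1::2, 1::2]) / 4.0
```

Dropping simply discards three quarters of the samples, while averaging uses every input pixel, which is why filtered scaling shows fewer artifacts on detailed or moving material.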
Image Stabilization – A camcorder feature which takes out minor picture
shakiness, either optically or electronically.
Image Transform – First U.S. organization to modify television scanning
for electronic cinematography, utilizing 655 scanning lines per frame at 24
frames per second. Also created ImageVision.
ImageVision – An early HDEP scheme utilizing 655 scanning lines per
frame and 24 frames per second, with wide bandwidth video recording and
a color subcarrier shifted to a higher frequency. Created and used by
Image Transform for electronic cinematography.
Imaging Device – a) The part of the video camera or camcorder that
converts light into electrical signals. b) A vacuum tube or solid-state device
in which the vacuum-tube light-sensitive faceplate or solid-state
light-sensitive array provides an electronic signal from which an image
can be created.
Immediate Addressing – In this mode of addressing, the operand contains the value to be operated on, and no address reference is required.
Impact Strength – A measure of the work done in breaking a test sample
of tape or base film by subjecting it to a sudden stress.
Impairments – Defects introduced by an ATV scheme.
Impedance (Z) – a) The opposition of a device to current flow. A combination of resistance, inductive reactance and capacitive reactance. When
no capacitance or inductance is present, impedance is the same as resistance. b) A resistance to signal flow. Microphones and audio mixers are
rated for impedance. c) A property of all metallic and electrical conductors
that describes the total opposition to current flow in an electrical circuit.
Resistance, inductance, capacitance and conductance have various
influences on the impedance, depending on frequency, dielectric material
around conductors, the physical relationship between conductors, and the external environment.
Impedance Matching – A video signal occupies a wide spectrum of
frequencies, from nearly DC (0 Hz) to 6 MHz. If the output impedance
of the video source, the characteristic impedance of the cable, or the input impedance of the receiving
equipment is not properly matched, a series of problems may arise.
Loss of high frequency detail and color information as well as image
instability, oscillations, snow, ghost images and component heat-up may
result. Proper connections and cable types provide correct impedances.
See Load Resistance.
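The severity of a mismatch is often expressed as a reflection coefficient. A minimal sketch for a nominal 75-ohm video interconnect (the load values here are illustrative):

```python
# Part of the signal reflects at an impedance mismatch; reflections show
# up in video as ghosts, ringing, and smeared edges.
def reflection_coefficient(z_load, z0=75.0):
    # Fraction of the incident voltage reflected back toward the source.
    return (z_load - z0) / (z_load + z0)

matched = reflection_coefficient(75.0)      # 0.0: no reflection
mismatched = reflection_coefficient(50.0)   # -0.2: 20% reflected, inverted
```

A perfectly matched 75-ohm termination reflects nothing; terminating the same cable in 50 ohms reflects a fifth of the signal with inverted polarity.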
Implicit Scene Description – The representation of the composition
information based on the transmission of classes that contains the
spatio-temporal relationships between audiovisual objects, as opposed
to Explicit Scene Description.
Improved Definition Television (IDTV) – IDTV is different from HDTV in
that it uses the standard transmitted (M) NTSC or (B, D, G, H, I) PAL signal.
IDTV improves the display of these signals by doing further processing of
the signal before displaying it. IDTV offers picture quality substantially
improved over conventional receivers, for signals originated in standard
525-line or 625-line format, by processing that involves the use of field
store and/or frame store (memory) techniques at the receiver. One example
is the use of field or frame memory to implement de-interlacing at the
receiver in order to reduce interline twitter compared to that of an interlaced display. IDTV techniques are implemented entirely at the receiver
and involve no change to picture origination equipment and no change to
emission standards.
Impulsive Noise – Short, high-level, unwanted signals that tend to cause
a sparkling effect in the picture and/or a percussive effect in the sound.
The signal-to-impulsive noise ratio is the ratio, in decibels, of the nominal
amplitude of the luminance signal (100 IRE units) to the peak-to-peak
amplitude of the noise. Impulsive noise is often caused by motorized
appliances and tools.
IMTC (International Multimedia Teleconferencing Consortium) – An
international membership organization founded in 1993 as Consortium for
Audiographics Teleconferencing Standards (CATS). IMTC contributes to the
development of and implements the standards recommendations of the ITU
for data and videoconferencing.
IN (Interactive Network)
IN Point – The starting point of an edit. Also called a Mark IN. See also
Mark IN/OUT, OUT Point.
In the Can – Describes a scene or program which has been completed.
Also, “That’s a Wrap”.
INA (Interactive Network Adapter) – Central point or hub in broadband
networks that receives signals on one set frequency band and retransmits
them to another. Every transmission in a broadband network has to go
through the INA or head-end. In CATV technology, the head-end is the
control center for a cable system where video, audio, and data signals
are processed and distributed along the coaxial cable network.
Inband Signaling – Signaling is carried in the same communications
channel as the data.
Incident Light – Light arriving at the surface of an object.
Incidental Carrier Phase Modulation (ICPM) – This is a distortion of
the picture carrier phase caused by changes in either the chrominance or
luminance video signal levels. This distortion is described in degrees using
the following definition:
ICPM = arctan (quadrature amplitude/video amplitude)
The picture effects of ICPM will depend on the type of demodulation being
used to recover the baseband signal from the transmitted signal. ICPM
shows up in synchronously demodulated signals as differential phase and
many other types of distortions, but the baseband signal is generally not as
seriously affected when envelope detection is used. The effects of ICPM
are therefore rarely seen in the picture in home receivers, which typically
use envelope detection. However ICPM may manifest itself as an audio
buzz at the home receiver. In the intercarrier sound system, the picture
carrier is mixed with the FM sound carrier to form the 4.5 MHz sound IF.
Audio-rate phase modulation in the picture carrier can therefore be transferred into the audio system and heard as a buzzing noise. An unmodulated
5- or 10-step staircase signal or an unmodulated ramp can be used to test for this distortion.
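The ICPM formula above translates directly into code. This sketch uses made-up quadrature and video amplitudes:

```python
import math

# ICPM in degrees from measured quadrature and in-phase (video)
# amplitudes, per ICPM = arctan(quadrature amplitude / video amplitude).
def icpm_degrees(quadrature_amplitude, video_amplitude):
    return math.degrees(math.atan(quadrature_amplitude / video_amplitude))

distortion = icpm_degrees(5.0, 100.0)   # about 2.86 degrees
```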
In-Circuit Emulator (ICE) – Debugging aid that connects to the system
under test by plugging into the microprocessor’s socket. This allows the
ICE to gain full control over the system. Typical features include the ability
to set breakpoints, single-step a program, examine and modify registers
and memory, and divide memory and I/O between the system under test
and the ICE system.
Increment – Adding the value one to the contents of a register or memory location.
Indeo – a) Intel’s series of compressor and decompressor technologies
for digital video, capable of producing software-only video playback.
b) The Indeo is a video compression/playback technique from Intel. Just
like CinePak, playback of Indeo compressed video does not require any
special hardware. The Indeo algorithm, which used techniques like vector
quantization and run-length coding, is used by various other companies.
A video file compressed with Indeo may be played on systems that support
either Video for Windows® or QuickTime. The new Indeo Video Interactive
(IVI) software incorporates additional features to support interactive applications, and used a hybrid wavelet-based algorithm with bidirectional
prediction. IVI may be played on systems that support Video for Windows®,
later also QuickTime, without dedicated hardware. Video encoded by IVI
may be played at up to 640 x 480 pixels resolution and at up to 30 fps,
depending on hardware configuration.
Indeo Video Interactive – Intel’s latest compressor and decompressor
for digital video, incorporating such special features as transparency,
scalability, and local decode. See Indeo Video, Local Decode, Scalability.
Indeo-C – The Indeo-C was a compression algorithm in the Personal
Conferencing Specification (PCS) from the Personal Conferencing Work
Group (PCWG), which was an industry group led by Intel. Due to lacking
support by the industry, the PCWG dropped the PCS and has now consolidated with the International Multimedia Teleconferencing Consortium (IMTC),
which supports ITU-T Rec. H.320 video conferencing. The Indeo-C algorithm did not use vector quantizing, as in Indeo, or a hybrid wavelet-based
algorithm, as in Indeo Video Interactive, but used a transform coding called
Fast Slant Transform (FST). An FST calculates frequency coefficients of picture blocks, like the DCT used in MPEG, but requires less computational
power. Both intra-frame and inter-frame coding with motion estimation was
applied in Indeo-C and finally, run-length and Huffman coding.
Independent Television – Television stations that are not affiliated with
networks and that do not use the networks as a primary source of their programming.
Index Register – Contains address information used for indexed addressing.
Indexed Addressing – Mode in which the actual address is obtained by
adding a displacement to a base address.
Indexing – Creation of a data index to speed up search and retrieval.
Indication Signals – Signals that communicate the operating status of
a system.
Indirect Addressing – Addressing a memory location that contains the
address of data rather than the data itself.
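The three addressing modes defined in this section (immediate, indexed, and indirect) can be sketched with a toy memory model; all addresses and values here are made up:

```python
# Toy memory: address -> contents.
memory = {0x10: 42, 0x20: 0x10, 0x15: 99}
base, displacement = 0x10, 0x05

immediate = 7                              # operand IS the value itself
indexed = memory[base + displacement]      # address = base + displacement
indirect = memory[memory[0x20]]            # 0x20 holds the address of data
```

Immediate addressing needs no memory reference at all, indexed addressing computes the effective address from a base plus displacement, and indirect addressing follows one extra level of pointer.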
Industrial/Professional – The grade of audio and video equipment
that falls between consumer (low end) and broadcast quality. Industrial/
professional equipment is characterized by its durability, serviceability,
and more-professional end-result.
Inertia Idler – A rotating guide attached to a heavy flywheel to reduce the
effect of varying supply reel friction on tape speed.
Insert Editing – The process of television post-production that combines
audio and video signals on an existing control track.
Information Services – Broad term used to describe full range of audio,
video and data transmission services that can be transmitted over the air
or by cable.
Inserter – A device for providing additional information, normally superimposed on the picture being displayed; this can range from one or two
characters to full-screen alphanumeric text. Usually, such generators use
the incoming video signal sync pulses as a reference point for the text
insertion position, which means that if the video signal is of poor quality, the
text stability will also be poor. Also known as an Alphanumeric Video Generator.
Infrared Light – Light at wavelengths just longer than those of the visible part
of the spectrum, i.e., at frequencies below visible light.
Initial Object Description – A special object descriptor that allows the
receiving terminal to gain access to portions of content encoded according
to this specification.
Initial Property Identification (IPI) – A unique identification of one or
more elementary streams corresponding to parts of one or more media objects.
Insertion Gain – In a CAV system, this refers to the overall amplitude of
all three signals that make up the CAV signal and is measured as the
peak-to-peak voltages of the three video signals (usually including sync on
luminance levels).
Initialization – Setting a system to a known state.
Insertion Gain Measurement – Measurement of the peak-to-peak amplitude of the video signal.
Initialize – a) An auto sequence that causes a machine upon power up
to arrive at a default condition. b) Record some data on a disk to allow its
segments to be recognized by a controller.
Insertion Loss – The decrease in level which occurs when a piece of
equipment is inserted into a circuit so that the signal must flow through it.
Initializing – The setting of the computer edit program to proper operating conditions at the start of the editing session.
Ink Numbers – The frame identification numbers used to conform a film
work print. Film composer cut lists and change lists reference ink numbers.
In-Point – a) Beginning of an edit. b) The first frame that is recorded.
c) In-points (and out-points) are used in editing to determine where and
how edits are inserted on the record clip, and to determine what part of a
source clip is used in an insert or overwrite.
Input – The terminals, jack or receptacle provided for the introduction of
an electrical signal or electric power into a device or system.
In-Service (VITS or ITS Mode Testing) – [Diagram: a test signal generator inserts test signals into the vertical blanking interval; after passing through the TV system, a measurement set with a line select feature recovers them for analysis.]
Input Converter – See Down Converter.
Instance – A clone of an object. If you modify the original, all the instance
objects are likewise modified.
Input Port – Circuit that connects signals from external devices as inputs
to the microprocessor system.
Instantaneous Value – The amplitude of a waveform at any one instant
of time.
Input/Output (I/O) – a) Typically refers to sending information or data
signals to and from devices. b) Lines or devices used to transfer information outside the system.
Institute of Electrical and Electronics Engineers – The Institute of
Electrical and Electronics Engineers (IEEE) is the world’s largest technical
professional society. Founded in 1884 by a handful of practitioners of the
new electrical engineering discipline, today’s Institute includes 46,000
students within a total membership of nearly 320,000 members who
conduct and participate in its activities in 150 countries. The men and
women of the IEEE are the technical and scientific professionals making
the revolutionary engineering advances which are reshaping our world
today. And today’s students are the future of the profession. The technical
objectives of the IEEE focus on advancing the theory and practice of electrical, electronics and computer engineering and computer science. To
realize these objectives, the IEEE sponsors nearly 800 Student Branches
worldwide, as well as scholarships and awareness programs, technical
conferences, symposia and local meetings; publishes nearly 25% of the
world’s technical papers in electrical, electronics and computer engineering; and provides educational programs to keep its members’ knowledge
and expertise state-of-the-art. The main IEEE information system is in
Piscataway, New Jersey, USA.
INRS – French acronym for the National Scientific Research Institute of the
University of Quebec. INRS-Telecommunications shares facilities with Bell
Northern Research, sort of Canada’s Bell Labs, and has simulated both
advanced encoders and ATV schemes on its computer simulation system.
Insert – a) The video that fills a key. Also used to describe the key itself.
Insert for most keys is “self”, that is, a key that is filled with the same
video that cuts the hole. Ampex switchers also allow “matte” fill with an
internally generated color and “bus fill” where any bus source may be
selected to fill the key. b) An edit mode meaning to record a new video
over a certain section of an existing video where the entry and exit are
both defined and no new time code or control track is recorded.
Insert Edit – An electronic edit in which the existing control track is not
replaced during the editing process. The new segment is inserted into
program material already recorded on the video tape. Recording new
video and/or audio material onto a prerecorded (or striped) tape. Insert
edits can be made in any order, unlike assemble edits, which must be
made sequentially.
Instruction – Single command within a program. Instructions may be
arithmetic or logical, may operate on registers, memory, or I/O devices, or
may specify control operations. A sequence of instructions is a program.
Instruction Cycle – All of the machine states necessary to fully execute
an instruction.
Instruction Decoder – Unit that interprets the program instructions into
control signals for the rest of the system.
Instruction Register – Register inside the microprocessor that contains
the opcode for the instruction being executed.
Instruction Set – Total group of instructions that can be executed by a
given microprocessor. Must be supplied to the user to provide the basic
information necessary to assemble a program.
Integrated Services Digital Networks (ISDN) – ISDN is a CCITT term
for a relatively new telecommunications service package. ISDN is basically
the telephone network turned all-digital end to end, using existing switches
and wiring (for the most part) upgraded so that the basic call is a 64 kbps
end-to-end channel, with bit manipulation as needed. Packet and maybe
frame modes are thrown in for good measure, too, in some places. It’s
offered by local telephone companies, but most readily in Australia, France,
Japan, and Singapore, with the UK and Germany somewhat behind, and
USA availability rather spotty. A Basic Rate Interface (BRI) is two 64K bearer (B) channels and a single delta (D) channel. The B channels are used
for voice or data, and the D channel is used for signaling and/or X.25
packet networking. This is the variety most likely to be found in residential
service. Another flavor of ISDN is Primary Rate Interface (PRI). Inside the
US, this consists of 24 channels, usually divided into 23 B channels and
1 D channel, and runs over the same physical interface as T1. Outside of
the US, PRI has 31 user channels, usually divided into 30 B channels
and 1 D channel. It is typically used for connections such as one between
a PBX and a CO or IXC.
Intensity – Synonymous with luminance.
Intensity Stereo Coding – Stereo redundancy in stereo audio is exploited
by retaining the energy envelope of the right and left channels at high
frequencies only.
Inter – A mode for coding parameters that uses previously coded parameters to construct a prediction.
Inter Shape Coding – Shape coding that uses temporal prediction.
Interactive – Allowing random access to information.
Interactive Television (ITV) – TV programming that features interactive
content and enhancements, blending traditional TV viewing with the interactivity of a personal computer.
Interactive Video – The fusion of video and computer technology. A video
program and a computer program running in tandem under the control
of the user. In interactive video, the user’s actions, choices, and decisions
affect the way in which the program unfolds.
Interactive Videodisc – Interactive videodisc is another video related
technology, using an analog approach. It has been available since the early
1980s, and is supplied in the U.S. primarily by Pioneer, Sony, and IBM.
Intercarrier Sound – A method used to recover audio information in the
NTSC system. Sound is separated from video by beating the sound carrier
against the video carrier, producing a 4.5 MHz IF which contains the sound information.
Intercast – a) An Intel developed process which allows Web pages to be
sent in the vertical blanking interval of a (M) NTSC video signal. The
process is based on NABTS. b) Intercast technology allows television
broadcasters to create new interactive content (text, graphics, video, or
data) around their existing programming and deliver this programming
simultaneously with their TV signal to PCs equipped with Intercast technology. Intercast content is created with HTML, which means that the interactive content broadcast with the TV signal appears to the user as Web
pages, exactly as if they were using the actual World Wide Web. These
broadcast Web pages can also contain embedded hyperlinks to related
information on the actual Internet.
Interchange – Transfer of information between two processes.
Interchannel Timing Error – This error occurs in component analog
video three-wire or two-wire interconnect systems when a timing difference
develops between signals being transmitted through the wires. The error
manifests itself as distortions around vertical lines, edges, and in color transitions.
Inter-Coding – Compression that uses redundancy between successive
pictures; also known as Temporal Coding.
Interconnect Format – See the Format definition.
Interconnect Standard – See the Standard definition.
Interface – Indicates a boundary between adjacent components, circuits,
or systems that enables the devices to exchange information. Also used
to describe the circuit that enables the microprocessor to communicate
with a peripheral device.
Interference – a) In a signal transmission path, extraneous energy which
tends to interfere with the reception of the desired signals. b) Defect of
signal reproduction caused by a combination of two or more signals that
must be separated, whether all are desired or not.
Inter-Frame Coding – a) Coding techniques which involve separating the
signal into segments which have changed significantly from the previous
frame and segments which have not changed. b) Data reduction based on
coding the differences between a prediction of the data and the actual
data. Motion compensated prediction is typically used, based on reference
frames in the past and the future.
Interframe Compression – A form of compression in which the codec
compresses the data within one frame relative to others. These relative
frames are called delta frames. See Delta Frame, Key Frame. Compare
Intraframe Compression.
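A minimal sketch of the delta-frame idea, not taken from any particular codec (names and pixel values are illustrative):

```python
# A delta frame stores only the differences from a reference (key) frame;
# the decoder adds the differences back to reconstruct the frame.

def make_delta(key_frame, frame):
    """Encode a frame as per-pixel differences from the key frame."""
    return [cur - ref for ref, cur in zip(key_frame, frame)]

def apply_delta(key_frame, delta):
    """Reconstruct the original frame from the key frame and its delta."""
    return [ref + d for ref, d in zip(key_frame, delta)]

key = [10, 10, 10, 10]   # intra-coded key frame, as pixel values
nxt = [10, 12, 10, 9]    # next frame: mostly unchanged
delta = make_delta(key, nxt)
print(delta)             # mostly zeros, which compress well
assert apply_delta(key, delta) == nxt
```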
Interframe Compression Algorithms – MPEG is one of many interframe
algorithms that use certain key frames in a motion-prediction, interpolation scheme.
Interlace – a) Technique for increasing picture repetition rate without
increasing base bandwidth by dividing a frame into sequential fields. When
first introduced, it also had the characteristic of making the scanning
structure much less visible. NTSC uses 2:1 interlace (two fields per frame).
b) A process in which the picture is split into two fields by sending all the
odd numbered lines to field one and all the even numbered lines to field
two. This was necessary when there was not enough bandwidth to send a
complete frame fast enough to create a non-flickering image.
c) NTSC video (standard TV) uses interlaced video: a display system
where the even scan lines are refreshed in one vertical cycle (field), and
the odd scan lines are refreshed in another vertical cycle. The advantage is
that the bandwidth is roughly half that required for a non-interlaced system
of the same resolution, which results in less costly hardware. It also may
make it possible to display a resolution that would otherwise be impossible
on given hardware. The disadvantage of an interlaced system is flicker,
especially when displaying objects that are only a single scan line high.
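The field split described in b) can be sketched as follows (an illustration, with hypothetical names):

```python
# 2:1 interlace: a frame is split into an odd field (lines 1, 3, 5, ...)
# and an even field (lines 2, 4, 6, ...), transmitted one after the other.

def split_fields(frame):
    """frame: list of scan lines; returns (odd_field, even_field)."""
    odd = frame[0::2]   # lines 1, 3, 5, ... (1-based numbering)
    even = frame[1::2]  # lines 2, 4, 6, ...
    return odd, even

def weave(odd, even):
    """Interleave the two fields back into a full frame."""
    frame = []
    for o, e in zip(odd, even):
        frame.extend([o, e])
    return frame

frame = [f"line{n}" for n in range(1, 7)]
odd, even = split_fields(frame)
assert weave(odd, even) == frame
```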
Interlace Artifacts – Picture defects caused by interlace. These include
twitter, line crawl, loss of resolution, and motion artifacts. In addition to
causing artifacts, interlaced scanning reduces the self-sharpening effect
of visible scanning lines and makes vertical image enhancement more
difficult to perform.
Interlacing – The process of drawing a frame by alternately drawing the
rows of each field, creating the illusion that the image is being redrawn
twice as often as it actually is. See Field.
Interlace Coefficient – A number describing the loss of vertical resolution
due to interlace, in addition to any other loss. It is sometimes confused
with the Kell factor.
Interleaver – The RS-protected transport packets are reshuffled byte by
byte by the 12-channel interleaver. Due to this reshuffle, what were neighboring bytes are now separated by at least one protected transport packet.
That is, they are at least 204 bytes apart from each other. The purpose of
this is the burst error control for defective data blocks.
Interlace Ratio – Alternate raster lines are scanned producing an odd
field (odd numbered lines) and an even field (even numbered lines). An
interlace of 1:1 implies vertically adjacent lines comprise the field.
Interleaving – A technique used with error correction that breaks up burst
errors into many smaller errors.
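A simple row/column block interleaver illustrates the principle; this is an assumption for illustration, not the 12-channel convolutional interleaver described in the Interleaver entry:

```python
# Block interleaver: write symbols row by row into a matrix, read them
# out column by column. A burst error in the channel is spread out after
# de-interleaving, so the error corrector sees several small errors.

def interleave(data, rows, cols):
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

stream = interleave(list(range(6)), 2, 3)  # [0, 3, 1, 4, 2, 5]
stream[2] = stream[3] = -1                 # a 2-symbol burst error
print(deinterleave(stream, 2, 3))          # errors land far apart
```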
Interlaced – Display system in which two interleaved fields are used to
create one frame. The number of field lines is one-half of the number of
frame lines. NTSC (M) systems have 262.5 lines per field. PAL (B, D, G, H,
I) scan systems have 312.5 lines per field. Each field is drawn on the screen
consecutively: first one field, then the other. The field scanned first is called
the odd field, the field scanned second is called the even field. The interlaced scanning system is used to prevent screen flicker. If frames were
scanned on the screen without interlacing fields, the light level created by
the first frame would decrease noticeably before the next frame could be
scanned. Interlacing the fields allows the light level of the screen to be
held more constant and thus prevents flicker.
Interline Transfer – This refers to one of the three principles of charge
transferring in CCD chips. The other two are frame transfer and frame-interline transfer.
Interlaced Carrier – A television subcarrier at a frequency that is an odd
multiple of one half the line rate (for example, the NTSC color subcarrier).
Such subcarriers fall onto a line in the spatio-temporal spectrum that is
simultaneously high in vertical detail and in temporal detail, and is therefore not likely to be objectionably visible under normal viewing conditions.
Interlaced Scanning – a) A scanning process in which each adjacent line
belongs to the alternate field. b) A technique of combining two television
fields in order to produce a full frame. The two fields are composed of only
odd and only even lines, which are displayed one after the other but with
the physical position of all the lines interleaving each other, hence interlace. This type of television picture creation was proposed in the early
days of television to have a minimum amount of information yet achieve
flickerless motion. See Interlaced.
Interlaced Sequence – Sequence of pictures that can be either field
pictures or frame pictures.
Interlaced Video Mode – A mode in which the video raster is scanned
over the face of the CRT by the electron gun tracing alternate scan lines
in successive refresh cycles. The quality of interlaced video is lower than
sequentially scanned (non-interlaced) video because only half of the lines
are refreshed at a time and interlaced video scans at a lower rate than
non-interlaced video, allowing for the manufacture of less expensive video monitors.
Interline Flicker – See Twitter.
Intermediates – General term for color masters and dupes.
Intermodulation Distortion – Signal nonlinearity characterized by the
appearance of frequencies in the output equal to the sums and differences
of integral multiples of the component frequencies present in the input
signal. Harmonics are usually not included as part of the intermodulation distortion.
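An illustrative calculation (not from the glossary) of where intermodulation products fall: sums and differences of integer multiples of two input tones, up to a given order, with pure harmonics excluded as the entry notes:

```python
# Intermodulation product frequencies for two tones f1, f2 (in Hz).
# Only mixed products (m >= 1 and n >= 1) are counted; harmonics
# (m*f1 or n*f2 alone) are excluded.

def imd_products(f1, f2, max_order=3):
    products = set()
    for m in range(1, max_order):
        for n in range(1, max_order):
            if m + n <= max_order:
                products.add(abs(m * f1 - n * f2))
                products.add(m * f1 + n * f2)
    return sorted(products)

print(imd_products(1000, 1100))  # [100, 900, 1200, 2100, 3100, 3200]
```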
Internal Drive – A drive that fits inside the workstation and connects to
an internal port; it is never connected with a cable to a visible external
port. An internal drive is occasionally referred to as a front-loading drive.
Internal Sync – The internal generation of sync pulses in a camera using
a crystal controlled oscillator. This is needed on non-mains power cameras.
International Organization for Standardization (ISO) – This is a
Geneva based organization for many of the national standardization bodies.
Together with the International Electrotechnical Commission, IEC, ISO
concentrates its efforts on harmonizing national standards all over the
world. The results of these activities are published as ISO standards.
Among them are, for instance, the metric system of units, international
stationery sizes, all kinds of bolts and nuts, rules for technical drawings, electrical connectors, security regulations, computer protocols, file formats,
bicycle components, ID cards, programming languages, International
Standard Book Numbers (ISBN). Over 10,000 ISO standards have been
published so far, and every day you come into contact with many things
that conform to ISO standards you have never heard of. By the way, ISO is
not an acronym for the organization in any language. It’s a wordplay based
on the English/French initials and the Greek-derived prefix iso- meaning
same. Within ISO, ISO/IEC Joint Technical Committee 1 (JTC1) deals with
information technology.
www.tektronix.com/video_audio 119
International Thomson – Name used by France’s Thomson group for
some recently acquired holdings outside of France. International Thomson
is a strong proponent of progressive-scan ATV and has proposed two such
schemes for NTSC countries, both of which would offer a 16:9 aspect ratio
and 60 frames per second. One would have 900 scanning lines (864
active), matching the number of scanning lines in International Thomson’s
proposal for non-NTSC countries. The other would have 750 scanning lines
(728 active), matching the digitization rates in the non-NTSC proposal.
Interrupt Vectoring – Providing a device ID number or an actual branching address in response to the interrupt acknowledge signal. Allows each
interrupt to automatically be serviced by a different routine.
Interoperability – The capability of providing useful and cost-effective
interchange of electronic image, audio, and associated data among
different signal formats, among different transmission media, among
different applications, among different industries, among different performance levels.
Intra Shape Coding – Shape coding that does not use any temporal prediction.
Interpolation – In digital video, the creation of new pixels in the image by
some method of mathematically manipulating the values of neighboring
pixels. This is necessary when an image is digitally altered, such as when
the image is expanded or compressed.
Interpolation (Line) – In television standards conversion, the technique
for adjusting the number of lines in a 625-line television system to a 525-line system (and vice versa) without impairing the picture quality.
Interpolation (Movement) – A technique used in standards conversion
to compensate for the degrading effects of different field frequencies on
pictures which contain movement. Different approximate proportions of
successive input fields are used in each output field.
Interpolation (Spatial) – When a digital image is repositioned or resized,
different pixels are usually required from those in the original image.
Simply replicating or removing pixels causes unwanted artifacts. With
interpolation, the new pixels are calculated by making suitably weighted
averages of adjacent pixels, giving more transparent results. The quality
depends on the techniques used and the area of original picture, expressed
as a number of pixels or points. Compare with Interpolation (Temporal).
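The weighted-average idea can be sketched with bilinear interpolation over the four nearest source pixels (a minimal illustration; production scalers use larger filter kernels):

```python
# Bilinear spatial interpolation: the new pixel at a fractional source
# position (x, y) is a weighted average of its four neighbors.

def bilinear(img, x, y):
    """img: 2D list of pixel values; (x, y): fractional source position."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 10], [20, 30]]
print(bilinear(img, 0.5, 0.5))  # 15.0, the average of all four neighbors
```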
Interpolation (Temporal) – Interpolation between the same point in
space on successive frames. It can be used to provide motion smoothing
and is extensively used in standard converters to reduce the defects
caused by the 50/60 Hz field rate difference. This technique can also
be adapted to create frame averaging for special effects.
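A hedged sketch of the "approximate proportions of successive input fields" idea: a simple linear blend between two fields (real standards converters add motion compensation, which this omits):

```python
# Temporal interpolation: an output field between two input fields is a
# weighted mix of them, with the weight set by its temporal position.

def blend_fields(prev_field, next_field, phase):
    """phase in [0, 1]: temporal position between the two input fields."""
    return [p * (1 - phase) + n * phase
            for p, n in zip(prev_field, next_field)]

print(blend_fields([100, 100], [140, 160], 0.25))  # [110.0, 115.0]
```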
Inter-Positive – A color master positive print.
Interrupt – Involves suspension of the normal program that the microprocessor is executing in order to handle a sudden request for service
(interrupt). The processor then jumps from the program it was executing
to the interrupt service routine. When the interrupt service routine is
completed, control returns to the interrupted program.
Interrupt Mask – Register that has one bit to control each interrupt.
Used to selectively disable specific interrupts.
Interrupt Service Routine – Program that is executed when an interrupt occurs.
Interval Timer – Programmable device used to perform timing, counting,
or delay functions. Usually treated as a peripheral.
Intra – A mode for coding parameters that does not make reference to
previously coded parameters to perform the encoding.
Intra-Coded Pictures (I-Pictures or I-Frames) – Pictures that are
coded by using information present only in the picture itself and without
depending on information from other pictures. I-pictures provide a
mechanism for random access into the compressed video data. I-pictures
employ transform coding of the pixel blocks and provide only moderate compression.
Intra-Coding – a) Coding of a macroblock or picture that uses information
only from that macroblock or picture. b) Compression that works entirely
within one picture: also known as Spatial Coding.
Intra-Frame Coding – Video coding within a frame of a video signal.
Intraframe Compression – A form of compression in which the codec
compresses the data within one frame relative only to itself. Key frames
are compressed with intraframe compression because they must reconstruct an entire image without reference to other frames. See Delta Frame,
Key Frame. Compare Interframe Compression.
Intraframe Compression Algorithm – A still image or photo video
compression standard. JPEG compression ratios vary from 20:1 to 40:1
with a lossless ratio of 5:1. JPEG is a symmetrical standard inasmuch as
it takes the same amount of time to decompress as it does to compress
video. JPEG works best with smooth transitions and little motion.
Intrinsic Coercive Force – The magnetizing field strength needed to
reduce flux density from saturation to zero.
Intrinsic Coercivity – The maximum value of the intrinsic coercive force.
The intrinsic coercivity is a basic magnetic parameter for the material and
requires complete saturation of the sample for its measurement as does the
saturation flux density.
Intrinsic Flux – In a uniformly magnetized sample of magnetic material,
the product of the intrinsic flux density and the cross-sectional area.
Intrinsic Flux Density – In a sample of magnetic material for a given
value of the magnetizing field strength, the excess of the normal flux
density over the flux density in vacuum.
Intrinsic Hysteresis Loop – Graph of magnetic flux (B) plotted against
the magnetizing force (H) producing it. The value of B when H has dropped
to zero is the residual magnetism, and the reverse force needed to
reduce B to zero is known as the coercivity. Units used are: Magnetizing
Force (H) in oersteds and Flux Density (B) in gauss. Coercivity is measured
in oersteds.
INTSC (Improved NTSC) – A term rarely used to describe ATV schemes
incorporating any combination of techniques.
Techniques to Improve NTSC Compatibility
A. Monochrome and Color
1. Sampling, Aperture, and Interlace Problems
• Progressive
• High Line Rate Display
• Progressive Camera and Prefiltering
• High Line Rate Camera and Prefiltering
• Image Enhancement at the Camera
• Image Enhancement at the Receiver
2. Transmission Problems
• Ghost Elimination
• Noise Reduction
• Improved Filter Design and Adjustment
3. Changing Equipment Problems
• Gamma Correction
• Adaptive Emphasis
• Rigid Adherence to Standards
B. Color Problems
1. Improved Decoder Filtering
2. Prefiltering
3. Full Detail Decoders
4. Luminance Detail Derived from Pre-Encoded Chroma
Invar – This is an expensive, brittle metal used to make the shadow mask
in a direct view color picture tube. Incorporating it allows higher picture
contrast levels from the tube without incurring long-term damage to the
shadow mask itself. It allows the set manufacturer to offer higher contrast
levels. Since the phosphors in the tube reach the point of blooming well
before the need for the Invar mask, anyone properly setting the contrast
level for no blooming in the picture won’t ever need the features of the
Invar mask. The high contrast levels permitted by the Invar mask will
eventually burn the phosphors.
Inverse Multiplexing – Operation that combines (bonds together) multiple
channels to increase the net available bandwidth into a single larger bandwidth channel.
Inverse Non-Additive Mix – A mixing process that compares the color
values of the corresponding pixels in the two source clips, and assigns the
higher value to the corresponding pixel in the output clip.
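A minimal per-pixel sketch of the comparison described above (names are illustrative):

```python
# Non-additive mix: for each pixel, the output takes the higher of the
# two corresponding source values.

def nam_mix(clip_a, clip_b):
    return [max(a, b) for a, b in zip(clip_a, clip_b)]

print(nam_mix([10, 200, 90], [50, 100, 90]))  # [50, 200, 90]
```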
Inverse Nyquist Filter – A filter that is a complement of the filter used
to reduce interference in the IF section of a television set.
Inverse Quantization (Q-1) – Rescaling the quantized values in order
to recover the original quantized values.
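A minimal sketch of quantization and its inverse (uniform step size, an assumption; real codecs use per-coefficient quantizer matrices):

```python
# The encoder divides each value by the step size and rounds; the
# decoder rescales by multiplying the quantized level by the step size.
# The result is close to, but not exactly, the original value.

def quantize(values, step):
    return [round(v / step) for v in values]

def inverse_quantize(levels, step):
    return [q * step for q in levels]

coeffs = [37, -12, 3, 0]
levels = quantize(coeffs, 8)
print(inverse_quantize(levels, 8))  # approximate reconstruction
```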
Inverse Telecine – The reverse of 3:2 pulldown, where the frames which
were duplicated to create 60-fields/second video from 24-frames/second
film source are removed. MPEG-2 video encoders usually apply an inverse
telecine process to convert 60-fields/second video into 24-frames/second
encoded video. The encoder adds information enabling the decoder to
recreate the 60-fields/second display rate.
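The 3:2 pulldown and its reversal can be sketched as follows (an illustration with hypothetical names; real inverse telecine must also detect the cadence and pair fields):

```python
# 3:2 pulldown turns 4 film frames into 10 video fields (2, 3, 2, 3);
# inverse telecine skips the duplicated fields to recover the original
# 24-frame/second sequence.

def pulldown_3_2(frames):
    fields = []
    for i, f in enumerate(frames):
        repeat = 3 if i % 2 else 2   # alternate 2 fields, 3 fields
        fields.extend([f] * repeat)
    return fields

def inverse_telecine(fields):
    frames, i, n = [], 0, 0
    while i < len(fields):
        frames.append(fields[i])
        i += 3 if n % 2 else 2       # step past the duplicated field
        n += 1
    return frames

film = ["A", "B", "C", "D"]
fields = pulldown_3_2(film)          # 10 fields from 4 frames
assert inverse_telecine(fields) == film
```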
Inverted Key – We think of a normal key as, for example, letters superimposed over a background. When this key is inverted, the background
appears inside the key; it appears we are looking through the cut-out key
and seeing the background. The key insert video appears outside the key.
IO (Image Orthicon) – The picture forming tube in a TV camera.
Ion – A charged atom, usually an atom of residual gas in an electron tube.
Ion Spot – A spot on the fluorescent surface of a cathode ray tube, which
is somewhat darker than the surrounding area because of bombardment by
negative ions which reduce the phosphor sensitivity.
Ion Trap – An arrangement of magnetic fields and apertures which
will allow an electron beam to pass through but will obstruct the passage
of ions.
IOR (Interoperable Object Reference)
IP (Internet Protocol) – a) IP is the basic language of the Internet. It
was developed by the government for use in internetworking multiple
computer networks together. b) The Network Layer protocol for the Internet
protocol suite.
IP Address – The number that uniquely identifies each host (system) on
the network.
IP Datagram – Basic unit of information that passes across a connectionless TCP/IP Internet. It contains source and destination addresses along
with the data.
IP Multicast – A system for sending IP transmissions out only one time,
but allowing for multiple users to receive it. This would reduce the bandwidth required for audio and video broadcasting over the Internet, but it is
not widely used yet.
IP (Index of Protection) – A numbering system that describes the quality
of protection of an enclosure from outside influences, such as moisture,
dust and impact.
IPCP (Internet Protocol Control Protocol) – Protocol that establishes
and configures IP over PPP.
IPI (Intellectual Property Identification) – The IPI descriptor is a
vehicle to convey standardized identifiers for content like international
standard book number, international standard music number, or digital
object identifier if so desired by the content author. If multiple media
objects within one MPEG-4 session are identified by the same IPI information, the IPI descriptor may consist just of a pointer to another elementary
stream, using its ES ID, that carries the IPI information.
I-Picture (Intra-Coded Picture) – One of three types of digital pictures
in an MPEG data stream. An I-picture is not predictive and is essentially a
snapshot picture. This type of picture generally has the most data of any of
the picture types. A picture coded using information only from itself. For
that reason, an I-picture can be decoded separately.
IPMP (Intellectual Property Management and Protection) – The
Intellectual Property Management and Protection (IPMP) identifies carriers
of creative works. The tool was developed as a complement of MPEG-4,
the ISO compression standard for digital audio-visual material. Involved
experts, notably those representing authors’ societies, felt that MPEG-4
needed extra rules designed to protect intellectual property. To this end,
IPMP was constructed as a supplementary layer on the standard.
IPR (Intellectual Property Rights) – The conditions under which the
information created by one party may be appreciated by another party.
IPS (Inches Per Second) – The measurement of the speed of tape
passing by a read/write head or paper passing through a pen plotter.
IQ (In-Phase/Quadrature Components) – Color difference signals used
in NTSC systems.
IRE Units – a) A linear scale for measuring the relative amplitudes of the
various components of a television signal. Reference white is assigned a
value of 100, blanking a value of 0. b) The values for NTSC composite
and for SMPTE 240M are shown in the following table. One IRE unit corresponds to 7-1/7 mV in CCIR System M/NTSC and to 7.0 mV in all other
systems. Measurement procedure developed by the Institute of Radio
Engineers, the predecessor to the IEEE.
For the IQ components defined above:
U = 0.492 (B-Y)
V = 0.877 (R-Y)
I = V cos 33° - U sin 33°
Q = V sin 33° + U cos 33°
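These relations (the standard NTSC formulation, with Q = V sin 33° + U cos 33°) can be checked numerically; the function name is illustrative:

```python
import math

# Compute NTSC I and Q from the B-Y and R-Y color difference signals
# via the scaled U and V components and a 33-degree axis rotation.

def iq_from_color_difference(b_y, r_y):
    u, v = 0.492 * b_y, 0.877 * r_y
    a = math.radians(33)
    i = v * math.cos(a) - u * math.sin(a)
    q = v * math.sin(a) + u * math.cos(a)
    return i, q

print(iq_from_color_difference(0, 0))  # grey: no color difference, so (0.0, 0.0)
```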
IQTV (Improved Quality Television) – A rarely used term for IDTV and
IR (Infrared) – An invisible band of radiation at the upper end of the electromagnetic spectrum. It starts at the middle of the microwave spectrum
and goes up to the beginning of visible light. Infrared transmission requires
an unobstructed line of sight between transmitter and receiver. It is used
for wireless transmission between computer devices as well as most
remote controls for TVs and stereo equipment.
IR Light – Infrared light, invisible to the human eye. It usually refers to
wavelengths longer than 700 nm. Monochrome (B/W) cameras have
extremely high sensitivity in the infrared region of the light spectrum.
IRD (Integrated Receiver Decoder) – a) A combined RF receiver and
MPEG decoder that is used to adapt a TV set to digital transmissions.
b) An IRD with digital interface has the ability to decode partial transport
streams (TS) received from a digital interface connected to digital bitstream
storage device such as a digital VCR, in addition to providing the functionality of a baseline IRD.
IrDA (Infrared Data Association) – A membership organization founded
in 1993 and dedicated to developing standards for wireless, infrared transmission systems between computers.
IRE (Institute of Radio Engineers) – a) The composite analog television
signal’s amplitude can be described in volts or IRE units with 140 IRE representing a full amplitude composite analog signal. The 0 IRE point is at
blanking level, with sync tip at -40 IRE and white extending to +100 IRE.
In the studio, the composite analog video signal is typically 1 volt in amplitude. Thus, 1 IRE is equal to 1/140 of a volt, or 7.14 mV. IRE
stands for Institute of Radio Engineers, the organization which defined the
unit. b) Unit of video measurement. 140 IRE measures the peak-to-peak
amplitude of the video signal (including sync) and is typically 1 volt.
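A sketch of the IRE/voltage relation described above: in a 1 V composite signal, 140 IRE span one volt, so 1 IRE is 1000/140, or about 7.14 mV (function name illustrative):

```python
# Convert IRE units to millivolts for a 1 V peak-to-peak composite
# signal, where the full 140 IRE span corresponds to 1000 mV.

def ire_to_mv(ire):
    return ire * 1000.0 / 140.0

print(round(ire_to_mv(100)))  # reference white: about 714 mV above blanking
print(round(ire_to_mv(-40)))  # sync tip: about -286 mV
```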
IRE Roll-Off – The IRE standard oscilloscope frequency response characteristic for measurement of level. This characteristic is such that at 2 MHz
the response is approximately 3.5 dB below that in the flat (low frequency)
portion of the spectrum, and cuts off slowly.
IRE Scale – An oscilloscope or waveform monitor scale conforming to IRE
Standard 50, IRE 23.S1 and the recommendations of the Joint Committee
of TV Broadcasters and Manufacturers for Coordination of Video Levels.
[Table: IRE scale levels for the video baseband, from Zero Carrier and
White Clip (3) through Reference White (4), Reference Black (6), and
Sync Peaks (Max Carrier) at -286 mV (5).]
(1) From Benson: Television Engineering Handbook.
(2) Video waveform specified in ANSI/EIA/TIA 250-C-1989. It becomes an
operational requirement to map the scene luminance within the video
waveform specifications so that subjectively acceptable image recreation
can be obtained on display.
(3) Typical (arbitrary) values to limit overload of analog signals, or to define
maximum digital equivalent.
(4) Under scene illumination, the light from a nonselective diffuse reflector
(white card) whose reflectance factor is 90% compared to a “perfect
reflector” (prepared magnesium oxide = 98%).
(5) Frequently indicated as +700 and –300, respectively.
(6) Specified for NTSC in ANSI/EIA/TIA 250-C-1989. Many other systems place
reference black at blanking level.
Iredale, Richard – Creator of the HD-NTSC ATV scheme and the HD-PRO
HDEP scheme.
IRIG (Inter-Range Instrumentation Group) – Has recently been
renamed “Range Control Council”.
Iris – a) The video camera’s lens opening which regulates the amount
of light entering a camera. b) A means of controlling the size of a lens
aperture and therefore the amount of light passing through the lens.
IRIS – Any graphics workstation manufactured by Silicon Graphics, Inc.
IRIX – Silicon Graphics, Inc.’s version of the UNIX operating system. See
also System Software.
Iron Oxide/Gamma Ferric Oxide – The most popular oxide particle used
as a magnetic recording medium produced from an oxide of pure iron.
IRT (Institut für Rundfunktechnik) – IRT is the research and development branch of the public broadcasters in Germany (the ARD and ZDF),
Austria (the ORF) and in Switzerland (the SRG). Situated in Munich,
Germany, the IRT participates in both national and international research
projects, and is highly involved in broadcasting system development.
Specifically, IRT has participated in the development of digital audio bit rate
reduction, and is one of the three licensors of MPEG Layer II of which the
IRT conducts conformance tests.
IS (International Standard) – The series of standards from ISO and its
IS&T (Society for Imaging Science and Technology) – An international
non-profit organization whose goal is to keep members aware of the latest
scientific and technological developments in the field of imaging through
conferences, journals and other publications. We focus on imaging in all
its aspects, with particular emphasis on silver halide, digital printing, electronic imaging, photo finishing, image preservation, image assessment,
pre-press technologies and hybrid imaging systems.
ISA (Industry Standard Architecture) – Originally designed around the
16-bit 286 microprocessor and called the AT bus, the ISA bus has 24
address and 16 data lines, sufficient to handle 16 megabyte memory I/O
addresses. The ISA bus is limited to a slow 8 MHz clock speed and for this
reason, faster peripherals and memory left the ISA bus behind soon after
its development. Unlike the earlier 8-bit PC/XT bus, the ISA bus includes
two connectors. In addition to the single, 62-pin, 8-bit PC/XT bus connector, the ISA bus includes a second connector with four additional address
and eight additional data lines, interrupt, and DMA control lines. Although
IBM documented every pin on the ISA bus, they never published strict
timing specifications to signals on the bus. As a result, ISA bus system
developers designing products for many platforms had to guess at timing.
Problems developed as a result of holding the ISA bus to 8 MHz for backward compatibility. Some anxious manufacturers pushed the system speed
causing products with marginal operating characteristics, especially when
extra memory was added to high-speed PCs. Since the IEEE ISA standard
of 1987, the bus signals have remained unchanged. In 1993, Intel and
Microsoft announced a joint development, Plug and Play ISA, a method for
making expansion boards work with the ISA bus, eliminating the need for
DIP switch settings, jumpers, interrupts, DMA channels, ports, and ROM
ranges. The Plug and Play card tells the host computer what resources it
requires. This requires a large software-based isolation protocol which
keeps an expansion board switched off until it can be addressed, allowing
one card to be polled at a time because slot-specific-address enable
signals for expansion cards are not part of the ISA specification. In 1987,
the ISA bus made way for the IBM PS/2 “clone-killer” computer “Micro
Channel” bus; however, the clone makers initially ignored the PS/2 and
Micro Channel.
Since OFDM uses a large number of carriers that are digitally modulated,
it provides sufficient transmission quality under multipath interference. The
basic approach of BST-OFDM is that a transmitting signal consists of the
required number of narrow band OFDM blocks called BST-segments, each
with a bandwidth of 100 kHz.
ISDB (Integrated Services Digital Broadcasting) – An NHK-suggested
broadcast equivalent to ISDN.
ISDN – See Integrated Services Digital Network.
ISI (Inter Symbol Interference) – Inter Symbol Interference is the
interference between adjacent pulses of a transmitted code.
ISMA (Internet Streaming Media Alliance) – ISMA is a group of industry leaders in content management, distribution infrastructure and media
streaming working together to promote open standards for developing
end-to-end media streaming solutions. The ISMA specification defines the
exact features of the MPEG-4 standard that have to be implemented on
the server, client and intermediate components to ensure interoperability
between the entire streaming workflow. Similarly, it also defines the exact
features and the selected formats of the RTP, RTSP, and SDP standards
that have to be implemented. The ISMA v1.0 specification defines two
hierarchical profiles. Profile 0 is aimed to stream audio/video content on
wireless and narrowband networks to low-complexity devices, such as cell
phones or PDAs, that have limited viewing and audio capabilities. Profile 1
is aimed to stream content over broadband-quality networks to provide the
end user with a richer viewing experience. Profile 1 is targeted to more
powerful devices, such as set-top boxes and personal computers.
ISO – See International Organization for Standardization.
ISO 2202 – Information Processing: ISO 7-bit and 8-bit coded character
sets – Code extension techniques
ISO 3166 – Codes for the representation of names of countries.
ISO 3901 – Documentation: International Standard Recording Code (ISRC).
ISO 639 – Codes for the representation of names of languages.
ISO 8859-1 – Information Processing: 8-bit single-byte coded graphic
character sets.
ISO 9660 – The international standard for the file system used by CD-ROM. Allows file names of only 8 characters plus a 3-character extension.
ISA Slot – Connection slot to a type of computer expansion bus formerly
found in most computers. It is larger in size than the PCI slots found on
most Pentium based computers and provides connections to the slower
ISA bus.
ISO Reel – Multiple reels of tape of the same subject recorded simultaneously from different cameras on different VTRs.
ISA Transfer – One of the advantages of an ISA transfer is that it allows
the user to process images as they go through the processor. However,
its utility is limited by its low bandwidth. Even under ideal conditions, the
ISA transfer requires three to five BCLK cycles at 8 MHz to transfer a
single pixel. This represents a severe system throughput penalty; a large
percentage of the available (and already limited) bandwidth is consumed
by the transfer.
ISO/IEC 13818 – Information Technology: Generic coding of moving
pictures and associated audio. (MPEG-2)
ISDB (Integrated Services Digital Broadcasting) – Japan's transmission specification for digital broadcasting. ISDB uses a new transmission
scheme called BST-OFDM that ensures the flexible use of transmission
capacity and service expandability in addition to the benefits of OFDM.
ISO/IEC 11172 – Information Technology: Coding of moving pictures
and associated audio for digital storage media up to about 1.5 Mbit/s.
ISO/IEC DIS 13818-3 – Information technology: Generic coding of moving
pictures and associated audio.
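The throughput penalty described in the ISA Transfer entry can be sanity-checked with a quick calculation. This is only a sketch using the entry's own figures (8 MHz BCLK, three to five cycles per pixel); the 640×480 frame size is an assumed example, not from the entry.

```python
# Rough ISA-transfer throughput estimate, using the glossary's figures:
# 8 MHz BCLK, 3-5 clock cycles consumed per pixel transferred.
BCLK_HZ = 8_000_000

def isa_pixels_per_second(cycles_per_pixel: int) -> float:
    """Pixels per second when each pixel costs a fixed number of BCLK cycles."""
    return BCLK_HZ / cycles_per_pixel

best = isa_pixels_per_second(3)   # best case, ~2.67 Mpixel/s
worst = isa_pixels_per_second(5)  # worst case, 1.6 Mpixel/s

# Even the best case needs over a tenth of a second to move an assumed
# 640x480 frame, which illustrates the "severe throughput penalty" claim.
frame_pixels = 640 * 480
seconds_per_frame = frame_pixels / best
```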
Isochronous – For digital transmission, events occur with known constant
periods. “Equal-time”. The synchronization signal is derived from the signal
bearing the data.
Isokey – See External Key.
www.tektronix.com/video_audio 123
Video Terms and Acronyms
Isolated Key – A key where the “hole cutting” or key video is different
from the “key filling” or insert video. This is most commonly used with
character generators that provide these two outputs, and allows the
character generator to create a key border that is wider and cleaner than
internally bordered keys. Such signals may also come from a color camera
that provides its own keying output or even a monochrome camera looking
at an art card. An isolated key is always a luminance key, although composite chroma keys may be done with an isolated key source, ignoring the
isolated input. AVC series switchers can defeat isolated inputs to standard
type keys by turning key borders on. Also referred to as a Processed
External Key.
Isoparameters – The curves along a surface resulting from setting u or v
to a constant value.
ISP (Internet Service Provider) – An organization that provides access
to the Internet.
ISV (Independent Software Vendor) – Company which develops and
sells application tools and/or software titles.
ISVR Pro – See Smart Video Recorder Pro.
ISVYUV9 – Recording format for decompressed Indeo video technology
using VidCap under Microsoft’s Video for Windows®.
IT (Information Technology) – Processing information by computer. The
latest title for the information processing industry.
Iterative – Procedure or process that repeatedly executes a series of
operations until some condition is satisfied. Usually implemented by a loop
in a program.
ITFS (Instructional Television Fixed Service) – A method of broadcasting TV programs throughout school systems using low-power high-frequency transmitters.
ITS (Insertion Test Signal) – A test signal that is inserted in one line of
the vertical interval to facilitate in-service testing.
ITSTC (Information Technology Steering Committee) – Established in
July 2002 to provide advice and recommendations to the Vice Chancellor
on the overall priorities and funding level for information technology and
communications for the University of Pittsburgh.
ITTF (Information Technology Task Force) – The United World Colleges
(UWC) International Board of Directors created the UWC IT Task Force
(ITTF) to coordinate IT development projects for the UWC movement as a
whole.
iTTi – A project started in 1998 as part of ACTS (Advanced Communication
Technologies and Services). The project goal was the specification
and practical demonstration of a wireless return channel for
terrestrial digital television.
ITU (International Telecommunication Union) – This is the United
Nations specialized agency dealing with telecommunications. At present
there are 164 member countries. One of its bodies is the International
Telegraph and Telephone Consultative Committee, CCITT. A Plenary
Assembly of the CCITT, which takes place every few years, draws up a list
of ‘Questions’ about possible improvements in international electronic
communication. In Study Groups, experts from different countries develop
‘Recommendations’ which are published after they have been adopted.
Especially relevant to computing are the V series of recommendations on
modems (e.g. V.32, V.42), the X series on data networks and OSI (e.g.,
X.25, X.400), the I and Q series that define ISDN, the Z series that defines
specification and programming languages (SDL, CHILL), the T series on
text communication (teletext, fax, videotext, ODA) and the H series on
digital sound and video encoding.
ITU-R (International Telecommunication Union,
Radiocommunication Sector) – Replaces the CCIR.
ITU-R BT.601-2 – a) Standard developed by the International Radio
Consultative Committee for the digitization of color video signals. ITU-R
BT.601 deals with the conversion from component RGB to YCbCr, the digital filters used for limiting the bandwidth, the sample rate (defined as 13.5
MHz), and the horizontal resolution (720 active samples). b) International
standard for component digital television from which was derived SMPTE
125M (was RP-125) and EBU 3246E standards. CCIR defines the sampling
systems, matrix values, and filter characteristics for both Y, B-Y, R-Y and
RGB component digital television.
ITU-R BT.653 – Standard that defines teletext systems used around the world.
ITU-R BT.656 – The physical parallel and serial interconnect scheme for
ITU-R BT.601-2. ITU-R BT.656 defines the parallel connector pinouts as
well as the blanking, sync, and multiplexing schemes used in both parallel
and serial interfaces.
ITU-R BT.709-3 – Part II of the recommendation describes the unique
HD-CIF standard of 1080 lines by 1920 samples/line, interlaced and progressively scanned, with an aspect ratio of 16:9 at both 50 Hz and 60 Hz
field and frame rates for high definition program production and exchange.
ITU-R.601 – See ITU-R BT.601-2.
ITU-R.624 – ITU standard that defines PAL, NTSC and SECAM.
ITU-T (International Telecommunication Union, Telecommunication
Standardization Sector) – International body that develops worldwide
standards for telecommunications technologies. The ITU-T carries out the
functions of the former CCITT.
ITVA (International Television Association) – An association for media,
film, video, and television professionals.
I-vop (Intra-coded VOP) – A vop coded using information only from itself.
IVUE – A file format associated with FITS technology that enables images
to be opened and displayed in seconds by showing only as much data on
the screen as is implied by the screen size and zoom factor.
IWU (Inter-Working Unit) – The network “modem” where all the digital
to analogue (and vice versa) conversions take place within the digital GSM
network.
J.41 – This is a recommendation from the ITU-T covering high-quality
coding of audio material at 384 kbit/s. In the same family we find the J.42,
the J.43 and the J.44 recommendations that define the coding of analog
“medium quality” sound at 384 kbit/s, “high quality” sound at 320 kbit/s,
and “medium quality” sound at 320 kbit/s, respectively.
J.81 – This ITU-T recommendation is identical to the ETSI standard ETS
300 174 for video broadcast transmission at 34 Mbit/s.
Jack – Receptacle for a plug connector leading to the input or output
circuit of a tape recorder or other piece of equipment. A jack matches a
specific plug.
Jaggies – a) Slang for the stair-step aliasing that appears on diagonal
lines. Caused by insufficient filtering, violation of the Nyquist Theory, and/or
poor interpolation. b) A term for the jagged visual appearance of lines and
shapes in raster pictures that results from producing graphics on a grid
format. This effect can be reduced by increasing the sample rate in scan
conversion.
Jam Sync – a) Process of locking a time-code generator to existing
recorded time code on a tape in order to recreate or extend the time code.
This may be necessary because, beyond a given point on tape, time code
may be non-existent or of poor quality. b) Process of synchronizing a
secondary time code generator with a selected master time code, i.e.,
synchronizing the smart slate and the audio time code to the same clock.
Jam Syncing – The process of synchronizing a secondary timecode
generator with a selected master timecode.
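The jam-sync operation described above amounts to continuing a timecode count from the last good address read from tape. The sketch below assumes non-drop-frame timecode at 30 fps; the specific address used is a made-up example.

```python
# Minimal sketch of extending (jam-syncing) non-drop-frame timecode past
# the last good address on tape. 30 fps is an assumption for illustration.
FPS = 30

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF to a total frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n: int) -> str:
    """Convert a total frame count back to HH:MM:SS:FF."""
    f = n % FPS; n //= FPS
    s = n % 60; n //= 60
    m = n % 60; h = n // 60
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Last valid code read from tape; the generator continues from there.
last_good = "01:23:45:29"
next_code = frames_to_tc(tc_to_frames(last_good) + 1)  # "01:23:46:00"
```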
Japan Broadcasting Corporation – See NHK.
Java – A highly portable, object-oriented programming language
developed by Sun Microsystems. Not to be confused with JavaScript.
JavaScript – A programming language originally created by Netscape
with specific features designed for use with the Internet and HTML, and
syntax resembling that of Java and C++. Now standardized as ECMA-262.
JBIG – See Joint Bi-Level Image Experts Group.
JBOD (Just a Bunch of Disks) – A collection of optical/magnetic disks
used for storing data.
JCIC (Joint Committee for Inter-Society Coordination) – A group
comprised of the EIA, the IEEE, the NAB, the NCTA, and the SMPTE. The
JCIC created the ATSC in 1982 to handle all of the new advances in TV,
including HDTV. The ATSC has since grown to 52 member and observer
organizations.
JCTEA (Japan Cable Television Engineering Association)
JEC – Joint Engineering Committee of EIA and NCTA.
Jewel Box – The plastic clamshell case that holds a CD or DVD.
Jitter – a) The variation of a digital signal’s significant instants (such as
transition points) from their ideal positions in time. Jitter can cause the
recovered clock and the data to become momentarily misaligned in time.
In some cases the data can be misinterpreted if this misalignment
becomes too great. b) An undesirable random signal variation with respect
to time. A tendency toward lack of synchronization of the picture. It
may refer to individual lines in the picture or to the entire field of view.
c) A rapid, small shift in image position characteristic of film projection.
Projection jitter can reduce the apparent resolution of film. d) A flickering
on a display screen. Besides a monitor or connector malfunction, jitter can
be caused by a slow refresh rate.
Jitter Amplitude – The variation in phase of the bit rate clock expressed
as a percent of the bit period.
Jitter Rate – The rate of change of the jitter amplitude expressed as a
frequency in Hertz.
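The Jitter Amplitude entry expresses timing variation as a percentage of the bit period. A minimal numerical sketch of that idea, comparing measured transition times against an ideal clock grid (the 270 Mbit/s unit interval and the edge values are assumed examples, not from the entry):

```python
# Sketch: peak-to-peak jitter from measured transition times, expressed
# as a percentage of the bit (unit) interval.
def jitter_percent(edge_times, bit_period):
    """Deviation of each edge from its nearest ideal grid position,
    peak-to-peak, as a percent of the bit period."""
    deviations = [t - round(t / bit_period) * bit_period for t in edge_times]
    return (max(deviations) - min(deviations)) / bit_period * 100.0

# Assumed example: a 270 Mbit/s serial stream, unit interval ~3.7 ns.
ui = 1 / 270e6
edges = [0.0, ui * 1.001, ui * 2.01, ui * 2.995]
pp = jitter_percent(edges, ui)  # peak-to-peak jitter, ~1.5% of one UI
```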
JND (Just Noticeable Difference) – A measure of the minimum perceptible change in quality. A one JND change is accurately detected 75 percent of the time; a three JND change is accurately detected 99 percent of
the time. There is a large number of JNDs of difference between NTSC as
it is now received in U.S. homes and high definition electronic production
(HDEP). This difference decreases in ATV systems in a hierarchical order.
Some feel that a large number of JNDs will be necessary for consumers
to purchase new TV sets.
Jog/Shuttle Wheel – A dial on many video decks, VCRs and editing
control units that controls jog and shuttle functions.
Jogging – Single-frame forward or backward movement of video tape.
See Stepping.
Joint Bi-Level Image Experts Group (JBIG) – This is a lossless bi-level
(black and white) image compression technique. JBIG is intended to
replace G3 fax algorithms. The JBIG technique can be used on either gray-scale or color images. Some of the applied techniques have a strong
resemblance to the JPEG standard. Commercially available implementations of JBIG have been scarce, but some find use in remote printing applications.
Joint Photographic Experts Group (JPEG) – Compression technique
for still images, such as photographs or a single video frame. JPEG can
be used to compress motion video; however, it is not as efficient as MPEG,
which has been optimized for motion video compression.
Joint Stereo Coding – Exploitation of interchannel stereophonic
redundancies in audio coding resulting in the left and right stereo pair
being coded in a single bitstream.
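One common form of the interchannel exploitation described above is mid/side coding: the left/right pair is rewritten as sum and difference signals, and the difference signal is cheap to code when the channels are highly correlated. This is a generic sketch, not any specific codec's exact math.

```python
# Mid/side coding, a common form of joint stereo coding. For highly
# correlated L/R channels the side signal is near zero, so it codes cheaply.
def to_mid_side(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def from_mid_side(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

L = [0.5, 0.4, -0.2]
R = [0.5, 0.38, -0.21]      # nearly identical to L
mid, side = to_mid_side(L, R)    # side values stay close to zero
L2, R2 = from_mid_side(mid, side)  # reconstructs L and R (lossless here)
```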
Jot – The text editor that comes as a standard utility on every IRIS.
Joystick – Affecting control over X, Y and Z parameters. Typical uses are
switcher pattern positioner, ADO positioner/controller, ACE switcher preview
controller. See Positioner.
JPEG – See Joint Photographic Experts Group.
JPEG-1 – ISO/IEC DIS 10918-1 begins with a digital image in the format Y,
Cb, Cr (such as defined in CCIR 601-2) and provides several levels of
compression. Predictive coding and transforms are employed, with the
higher compression ratios selectively recognizing the decrease in human
visual acuity with increasing spatial frequencies. It is optimized for about
15:1 compression. As increased data storage and increased processing
capabilities are becoming available, there is exploration of adapting JPEG-1
for application to successive frames in real time; i.e., full-motion JPEG.
JPEG-2 – ISO/IEC 10918-2 describes procedures for compliance testing
in applications of JPEG-1.
JPG – Filename extension for graphic image files stored using JPEG
compression.
JScript – A proprietary Microsoft variant of JavaScript.
JTC1 (Joint Technical Committee) – JTC1 is a joint committee of ISO
and IEC. The scope of JTC1 is information technology standardization.
Judder – a) Jerkiness of motion associated with presentation rates below
the fusion frequency. b) A temporal artifact associated with moving images
when the image is sampled at one frame rate and converted to a different
frame rate for display. As a result, motion vectors in the display may
appear to represent discontinuously varying velocities. The subjective effect
of the artifact becomes more obvious when the frame-rate conversions
are made by simple deletions or repetitions of selected frames (or fields).
It may become less obvious when interpolated frames (or fields) are generated by employing predictive algorithms.
Jump – Instruction that results in a change of sequence.
Jump Cut – A mismatched edit that creates a visual disturbance when
replayed. Usually occurs when cutting between two images which share an
identical subject but place the subject at different positions in the frame.
K – Symbol for 1000 (10³). When referring to bits or words, K = 1024 (2¹⁰).
K Factor – A specification rating method that gives a higher factor to
video disturbances that cause the most observable picture degradation.
K Factor Ratings – K Factor ratings are a system that maps linear
distortions of 2T pulses and line time bars onto subjectively determined
scales of picture quality. The various distortions are weighted in terms
of impairment to the picture.
The usual K Factor measurements are Kpulse/bar, K2T or Kpulse (2T pulse
response), Kbar and sometimes K60Hz. The overall K Factor rating is the
largest value obtained from all of these measurements. Special graticules
can be used to obtain the K Factor number or it can be calculated from
the appropriate formula. All types of linear distortions affect the K Factor
rating. Picture effects may include any of the short time, line time, field
time and long time picture distortions. Any signal containing the 2T pulse
and an 18 µsec bar can be used to measure Kpulse/bar, K2T (Kpulse), or Kbar.
A field rate square wave must be used to measure K60Hz. The FCC
composite test signal contains these signal components. See the
discussion on Pulse to Bar Ratios.
KB – See Kilobyte.
Kbar – A line bar (18 µsec) is used to measure Kbar. Locate the center
of the bar time, normalize that point to 100% and measure the maximum
amplitude deviation for each half. Ignore the first and last 2.5% (0.45
µsec) of the bar. The larger of the two deviations is the Kbar rating.
Keeper – Term used to indicate that an effect or edit was good enough to
keep but could possibly be improved upon; the effect or edit should be
stored as-is in case it cannot be improved.
Karaoke – A special DVD format that allows for certain special features.
The audio portion of this format is distinctive in that it is intended for “sing
along” use and may include audio tracks for “guide vocals”, “guide
melody”, “chorus” and the main Karaoke left and right channels. The audio
track for Karaoke in DVD-Video is defined for a multi-channel setup with
5 channels maximum. When the vocal parts are recorded mainly in tracks
4 and 5, apart from the main 2 channels, users can enjoy many different
playback modes on Karaoke-type DVD players equipped with various
audio on/off switches.
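The Kbar procedure in the Kbar entry above translates directly into code: normalize the bar center to 100%, ignore the first and last 2.5% of the bar, and take the largest amplitude deviation in each half. This is a numerical sketch on synthetic sample data, not a waveform-monitor implementation.

```python
# Numerical sketch of the Kbar measurement: normalize the bar center to
# 100%, skip 2.5% at each end, take the larger half's maximum deviation.
def kbar(samples):
    n = len(samples)
    center = samples[n // 2]
    norm = [100.0 * s / center for s in samples]  # center becomes 100%
    skip = max(1, round(n * 0.025))               # ignore 2.5% at each end
    first_half = norm[skip:n // 2]
    second_half = norm[n // 2:n - skip]
    dev1 = max(abs(v - 100.0) for v in first_half)
    dev2 = max(abs(v - 100.0) for v in second_half)
    return max(dev1, dev2)

# Synthetic bar tilting from roughly 99% to 101% of its center value
# rates approximately Kbar = 1%.
bar = [1.0 + 0.02 * (i / 99 - 0.5) for i in range(100)]
rating = kbar(bar)
```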
K2T or K-2T – K2T is a weighted function of the amplitude and time of the
distortions occurring before and after the 2T pulse. In practice, a graticule
is almost always used to quantify this distortion. Different countries and
standards use slightly different amplitude weighting factors. On a typical
waveform monitor K Factor graticule display, the outer dotted lines at the
bottom of the graticule indicate 5% K2T limits.
See the discussion on Pulse to Bar Ratios.
K60Hz – A field-rate square wave is used to measure this parameter. Locate
the center of the field bar time, normalize the point to 100% and measure
the maximum amplitude deviation for each half. Ignore the first and last
2.5% (about 200 µsec). The larger of the two tilt measurements divided
by two is the K60Hz rating.
Kell Effect – Vertical resolution of a scanned image subjectively evaluated
is consistently shown to be less than the geometrically-predicted resolution. Observations are usually stated in terms of the ratio of perceived
television lines to active lines present in the display. From the time that R.
Kell published his studies (conducted on a progressively scanned image),
there have been numerous numerical values and substantiating theories
proposed for this effect. The range of results suggests that many details
of the experiments influence the result and make defining a single “Kell
Factor” impossible. Reported experimental results range at least between
0.5 and 0.9. In an otherwise comparable display, the “ratio” is lower for
interlaced scanning than for progressive scanning.
Kell Factor – A number describing the loss of vertical resolution from that
expected for the number of active scanning lines, named for Ray Kell, a
researcher at RCA Laboratories. Many researchers have come up with
different Kell factors for progressively scanned television systems. These
differences are based on such factors as aperture shape, image content,
and measurement technique. A generally accepted figure for the Kell factor
is around 0.68, which, multiplied by the 484 active NTSC scanning lines,
yields a vertical resolution of 330 lines, matched by NTSC’s 330 lines of
horizontal resolution per picture height (see Square Pixels). It is important
to note that most studies of the Kell factor measure resolution reduction
in a progressive scanning system. Interlaced scanning systems suffer from
both a Kell factor and an interlace coefficient.
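The arithmetic in the Kell Factor entry can be reproduced directly; the numbers below are the entry's own (0.68 factor, 484 active NTSC lines), and the result matches its quoted ~330 lines.

```python
# The Kell Factor entry's arithmetic: perceived vertical resolution is the
# Kell factor times the number of active scanning lines.
KELL = 0.68
NTSC_ACTIVE_LINES = 484

perceived_tv_lines = KELL * NTSC_ACTIVE_LINES  # ~329, the entry's "330 lines"
```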
Kelvin – The scale used for measuring absolute temperature. Absolute
zero is 0 K (-273°C). The “color” of white light is
expressed in terms of degrees Kelvin, the color of light emitted when
an ideal object is heated to a particular temperature.
KEM Roll – The roll of film used on a KEM flatbed editing system. A
KEM roll combines multiple takes onto a single roll (a work print, not a
negative). The maximum length of a KEM roll is 1000 feet.
Kerberos – Kerberos is a network authentication protocol developed
by MIT. It is designed to provide strong authentication for client/server
applications by using secret-key cryptography.
Kernel – Minimum circuitry required to allow the microprocessor to
function. Usually consists of the microprocessor, clock circuit, interrupt
and DMA control lines, and power supply.
Kerning – The spacing between individual text characters in print media.
Key – a) A signal that can electronically “cut a hole” in the video picture
to allow for insertion of other elements such as text or a smaller video
picture. b) A video that has been overlaid on top of another video. Keys
may be either determined by the luminance or brightness of the key video,
or determined by the chroma or hue of the key video. c) A push-button.
d) To combine a selected image from one source with an image from
another source. See also Chroma Key.
Key Channel – See Alpha Channel.
Key Color – The solid color used to key.
Key Fill – In key effects, the video signal which is said to “fill the hole”
cut in background video by the key source.
Key Frame – A frame containing all the data representing an image,
rather than just the data that has changed since the last frame. The first
frame of every video file is a key frame; in addition, they occur throughout
the file to refresh image quality and permit certain operations, such as
random user access. Compare Delta Frame.
Key Gain – An adjustment for keys that determines the sharpness of the
key edges. As key gain is reduced, keys become softer at the edges and
may be adjusted to be more transparent.
Key Insert – The video that fills a key.
Key Invert – a) A luminance key mode which inverts the polarity of the
key source to allow dark areas of the source video to cut holes in background instead of bright areas. b) A chroma key mode which inverts the
foreground and background positions.
Key Light – The term used to describe a subject’s main source of
illumination. When shooting outdoors, the key light is usually the sun.
Key Mask – A key mode which allows use of independent key mask
generators to create a pattern to prevent some undesirable portions of
the key source from cutting a hole in the background. This is also
possible using externally generated masks on the Vista.
Key Matrix – The electronic crosspoints which switch and route key
signals and key insert signals to appropriate key processing electronics.
On Ampex switchers, these matrices are controlled by keypads and keyer
insert selector push-button controls and form the Phantom matrix portion
of the switcher.
Key Memory – An AVC series feature that allows a key to be fully adjusted
as soon as it is selected. This is accomplished by a “store” button on the
key adjust panel that may be pressed when an operator is satisfied with
the adjustment of a key. From that point on, whenever that key is selected,
regardless of which keyer it is on, all adjustments and features of that key
are automatically recalled.
Key Numbers – The original frame identification numbers applied by the
film manufacturers to the film stock. Key numbers are used by the negative
cutter to conform the film negative. Film composer cut lists and change
lists reference key numbers.
Key Region – See Slice.
Key Signal – A hole cutting signal.
Key Source – a) A hole cutter. The signal which is said to “cut a hole”
in the background scene for a key effect. In actuality, this signal controls
a video mixer which switches between the background scene and the
fill video; thus, the key source determines the shape of the key effect.
b) The image that contains the colors or luminance values on which you
key to create a chroma or luminance key effect.
Key Type – There are three key types on Ampex switchers; luminance
keys, RGB chroma keys and composite chroma keys.
Keyboard – a) Group of push-buttons used for inputting information to a
system. b) The human interface portion of a computer, typewriter with
alpha numeric keys or push-buttons.
Keyer – a) The electronics and panel controls that create keys. There are
many types of keyers, some limited to titles only, and some capable of
any type of key. All Ampex keyers are full capability. b) A tool that you use
to create a composite from a clip from a background and foreground clip
by using an input key-in clip to determine how the clips are combined.
You use the input key-in clip to create a black and white matte that
defines which areas of the foreground and background clips are used in
the result clip.
Keyframe – a) Keyframes are important frames that are guides in creating
frames that occur between the keyframes. b) A specific manipulation or
positioning of the image. An effect is composed of one or more keyframes.
Keyframe Duration – The length of a keyframe; the time from one keyframe
to the start of the next.
Keyframing – The process of creating an animated clip in which, given a
selected beginning image and ending image, the software automatically
generates the frames in between. See also Tweening.
Keying – The process of replacing part of one television image with video
from another image; that is, chroma keying and insert keying.
Keykode – A trademark of Eastman Kodak Company. A barcode on the
edge of motion picture film which allows the film edge numbers to be
electronically read and inserted into an edit list. Very useful for generating
a negative cut list from a video off-line EDL.
Keykode Numbers Reader – Device attached to a telecine, or part of a
bench logger, which reads Keykode number bar codes from motion picture
film and provides electronic output to a decoder.
Key-Length-Value (KLV) – The grouping of information concerning a
single metadata element that combines three pieces of information: its
UL Data Key; the Length of its instantiation Value in the next field; its
instantiated Value in the allowed format.
Keypad – The numbered push-buttons used to enter numerical data,
i.e., pattern numbers, transition rates, key source numbers, etc.
KF Flags (Menu) – Miscellaneous keyframe flags, currently used to turn
Globals off and on.
kHz (Kilohertz) – One thousand cycles per second.
Kilobaud – A unit of measurement of data transmission speed equaling
1000 baud.
Kilobyte (KB) – Nominally one thousand bytes; in practice 1024 bytes,
because computer addressing is based on powers of two.
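The decimal/binary ambiguity behind the Kilobyte entry is easy to show in code. This is a generic illustration; the KiB naming aside is general knowledge, not from the glossary.

```python
# Decimal vs binary "kilo": the SI prefix means 1000, while the convention
# this glossary describes uses 1024 (which the later KiB notation makes
# explicit).
KB_DECIMAL = 10 ** 3   # 1000 bytes (SI kilo)
KB_BINARY = 2 ** 10    # 1024 bytes (the glossary's convention)

def to_binary_kb(num_bytes: int) -> float:
    """Size in 1024-byte kilobytes."""
    return num_bytes / KB_BINARY

size_kb = to_binary_kb(65536)  # 65536 bytes is exactly 64 KB (binary)
```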
Kinescope – a) Frequently used to mean picture tubes in general.
However, this name has been copyrighted. b) A film recording of a video
image displayed on a specially designed television monitor. Only means
of recording TV programs before video recorders and tape were invented.
Kinescope Recording – Motion pictures taken of a program photographed directly from images on the face of a kinescope tube. A
television transcription.
KLV (Key, Length, and Value) – A data-encoding protocol (SMPTE 336M)
that complies with International Standards Organization rules for Object
Identifier data and SMPTE Universal Label (SMPTE 298M). This is the
“header” information in a metadata stream that will identify the data and
which metadata dictionary of definitions should be used for the metadata
that follows. KLV and UMIDs (Unique Material Identifiers) are the basic
engineering building blocks that have been designed to make metadata
easier to exchange between different media (such as tapes or files) and
metadata standards.
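The Key-Length-Value grouping described in the entries above can be sketched as a simple packer: a 16-byte SMPTE Universal Label key, a BER-style length, then the value. The key bytes below are a dummy placeholder, not a real SMPTE UL, and this is an illustrative sketch rather than a full SMPTE 336M implementation.

```python
# Sketch of KLV packing: 16-byte key + BER length + value bytes.
def ber_length(n: int) -> bytes:
    """BER-encoded length: short form under 128, long form otherwise."""
    if n < 128:
        return bytes([n])                        # short form: one byte
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body      # long form

def klv_pack(key: bytes, value: bytes) -> bytes:
    assert len(key) == 16, "SMPTE Universal Labels are 16 bytes"
    return key + ber_length(len(value)) + value

dummy_key = bytes(16)            # placeholder UL, not a registered label
packet = klv_pack(dummy_key, b"hello")
# packet = 16 key bytes, one length byte (5), then the 5 value bytes
```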
Knee – By convention, the circuitry introducing white compression into the
opto-electric transfer function and thereby modifying the curve for a more
gradual approach to white clip.
Kpulse/bar or K-PB – Calculation of this parameter requires the
measurement of the pulse and bar amplitudes. Kpulse/bar is equal to:
Kpulse/bar = (1/4) × |(bar − pulse) / pulse| × 100%
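The Kpulse/bar formula above is a one-liner in code; the 96/100 IRE amplitudes below are made-up example values, not from the entry.

```python
# The Kpulse/bar formula: 1/4 of the absolute pulse-to-bar amplitude
# difference, relative to the pulse, expressed as a percentage.
def k_pulse_bar(pulse: float, bar: float) -> float:
    """Amplitudes must be in the same units (e.g. IRE)."""
    return 0.25 * abs((bar - pulse) / pulse) * 100.0

rating = k_pulse_bar(96.0, 100.0)  # a 96 IRE pulse vs a 100 IRE bar, ~1.04%
```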
Ku-band – Ku-band satellites use the band of satellite downlink frequencies from 11.7 to 12.2 GHz. Also the group of microwave frequencies from
12 to 18 GHz.
Label – Name assigned to a memory location. When an assembly
language program is written, a label is assigned to an instruction or
memory location that must be referred to by another instruction. Then
when the program is converted to machine code, an actual address is
assigned to the label.
LAeq – An Leq measurement using A weighting. Refer to Leq.
Latent Image – The invisible image formed in a camera or printer by the
action of light on a photographic emulsion.
Lambertian Source/Surface – A surface is called a Lambert radiator
or reflector (depending on whether the surface is a primary or a secondary
source of light) if it is a perfectly diffusing surface.
Lateral Direction – Across the width of the tape.
LAN (Local Area Network) – A communications network that serves
users within a confined geographical area. It is made up of servers,
workstations, a network operating system and a communications link.
LANC – See Control-L.
Land – The raised area of an optical disc.
LAP (Link Access Procedure) – An ITU family of error correction
protocols originally derived from the HDLC standard.
LAP-B (Balanced) – Used in X.25 networks.
LAP-D (D Channel) – Used in the ISDN data channel.
LAP-M (Modem) – Defined in ITU V.42, which uses some LAP-D methods
and adds additional ones.
LAP-X (Half-Duplex) – Used in ship-to-shore transmission.
Latitude – In a photographic process, the range of exposure over which
substantially correct reproduction is obtained. When the process is represented by an H and D curve, the latitude is the projection on the exposure
axis of that part of the curve which approximates a straight line within the
tolerance permitted for the purpose at hand.
LATM (Low-Overhead MPEG-4 Audio Transport Multiplex) – MPEG-4
audio is an audio standard that integrates many different types of audio
coding tools. Low-overhead MPEG-4 Audio Transport Multiplex (LATM)
manages the sequences of audio data with relatively small overhead. In
audio-only applications, then, it is desirable for LATM-based MPEG-4 audio
bitstreams to be directly mapped onto RTP packets without using
MPEG-4 systems.
Launch – To start up an application, often by double-clicking an icon.
Layback – Transferring the finished audio track back to the master
video tape.
Lap Dissolve – A slow dissolve in which both pictures are actually
overlapped for a very brief period of time. Same as Dissolve.
LAR (Logarithmic Area Ratio) – Describes the spectral envelope in
speech coding.
Large Scale Integration (LSI) – Technology by which thousands of
semiconductor devices are fabricated on a single chip.
Large-Area Flicker – Flicker of the overall image or large parts of it.
See also Flicker Frequency and Twitter.
Laser Beam Recording – A technique for recording video on film.
Laser Disc – A 12-inch (or 8-inch) optical disc that holds analog video
(using an FM signal) and both analog and digital (PCM) audio. A precursor
to DVD.
Laser – Light amplification by stimulated emission of radiation. A laser
produces a very strong and coherent light of a single frequency.
LAT (Link Available Time)
Latch – a) Hardware device that captures information and holds it (e.g., a
group of flip-flops). b) An electronic circuit that holds a signal on once it
has been selected. To latch a signal means to hold it on or off.
Latency – a) The length of time it takes a packet to move from source to
destination. b) A factor of data access time due to disk rotation. The faster
a disk spins the quicker it will be at the position where the required data
can start to be read. As disk diameters have decreased, rotational speeds
have tended to increase, but there is still much variation. Modern
3-1/2-inch drives typically have spindle speeds of between 3,600 and
7,200 revolutions per minute, so one revolution is completed in 16 or 8
milliseconds (ms) respectively. This is represented in the disk specification
as an average latency of 8 or 4 ms.
Lavaliere – A microphone designed to hang from the performer’s neck.
Layer – a) A term used to describe which video is on top of which:
background versus foreground and subsequent superimposed keys. b) One of
the levels in the data hierarchy of the video and system specification.
c) In a scalable hierarchy, denotes one out of the ordered set of bitstreams
and (the result of) its associated decoding process. d) The plane of a DVD
disc on which information is recorded in a pattern of microscopic pits.
Each substrate of a disc can contain one or two layers.
Layer 0 – In a dual-layer disc, this is the layer closest to the optical pickup beam and surface of the disc, and the first to be read when scanning
from the beginning of the disc’s data. Dual-layer discs are 10% less dense
than single layer discs due to crosstalk between the layers.
Layer 1 – In a dual-layer disc, this is the deeper of the two layers, and the
second one to be read when scanning from the beginning of the disc's
data.
Layered Bitstream – A single bitstream associated to a specific layer
(always used in conjunction with layer qualifiers).
Layered Tracks – The elements of an effect created by combining two or
more tracks in a specified way, such as nesting one track as a layer within another.
Layer-to-Layer Adhesion – The tendency for adjacent layers of tape in a
roll to adhere to each other.
Layer-to-Layer Signal Transfer – The magnetization of a layer of tape in
a roll by the field from a nearby recorded layer, sometimes referred to as print-through.
LBR (Laser Beam Recorder) – It creates the DVD master file.
LC (Low Complexity) – The most used profile (MPEG-2) or object type
(MPEG-4) in AAC (advanced audio coding) encoders and decoders today, because of its low system requirements for CPU and memory.
LCD (Liquid Crystal Display) – A screen for displaying text/graphics
based on a technology called liquid crystal, where minute currents change
the reflectiveness or transparency of selected parts of the screen. The
advantages of LCD screens are: very small power consumption (can be
easily battery driven) and low price of mass produced units. Its disadvantages presently include narrow viewing angle, somewhat slower response
time, invisibility in the dark unless the display is back-lit, difficulties displaying true colors and resolution limitations.
LCP (Link Control Protocol) – See PPP.
L-Cut – See Overlap Edit.
Lead In – On a compact disc, the lead-in contains a table of contents for
the track layout.
Lead Out – On a compact disc, the lead-out indicates the end of data.
Leader – a) Special non-magnetic tape that can be spliced to either end
of a magnetic tape to prevent damage and possible loss of recorded material and to indicate visually where the recorded portion of the tape begins
and ends. b) Any film or strip of material used for threading a motion
picture machine. Leader may consist of short lengths of blank film attached
to the ends of a print to protect the print from damage during the threading
of a projector, or it may be a long length of any kind of film which is used
to establish the film path in a processing machine before the use of the
machine for processing film.
Leading Blacks – A term used to describe a picture condition in which
the edge preceding a white object is overshaded toward black. The object
appears to have a preceding or leading black border.
Leading Whites – A term used to describe a picture condition in which
the edge preceding a black object is overshaded toward white. The object
appears to have a preceding or leading white border.
Leakage – A term describing the signal picked up by a mike which is
intended to pick up other signals only.
Learn – The act of storing switcher control panel data into memory in a
real-time mode (learning as they happen).
Learning Curve – An algebraic metaphor for the amount of time a learner
needs to learn a new task (such as operating an item of television production equipment).
Leased Access – Commercial channels made available by a cable
operator to third parties for a fee, as required by the Cable Acts of 1984
and 1992.
Least Significant Bit (LSB) – The bit that has the least value in a binary
number or data byte. In written form, this would be the bit on the right.
For example,
Binary 1101 = Decimal 13
In this example the rightmost binary digit, 1, is the least significant bit,
here representing 1. If the LSB in this example were corrupt, the decimal
would not be 13 but 12.
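The example above can be verified with a few lines of Python (bit masking and flipping are standard operations, not specific to this glossary):

```python
# Demonstrate the least significant bit of binary 1101 (decimal 13).
value = 0b1101          # decimal 13
lsb = value & 1         # mask off the least significant bit
print(lsb)              # 1: the LSB is set
corrupted = value ^ 1   # flip the LSB, as if it were corrupted
print(corrupted)        # 12: the value changes by only 1
```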
Lechner Distance – Named for Bernard Lechner, researcher at RCA
Laboratories. The Lechner distance is nine feet, the typical distance
Americans sit from television sets, regardless of screen size. The Jackson
distance, three meters, named for Richard Jackson, a researcher at
Philips in Britain, is similar. There is reason to believe that the Lechner
and Jackson distances are why HDTV research was undertaken sooner
in Japan (where viewing distances are shorter) than elsewhere. See also
Viewing Distance.
LED (Light Emitting Diode) – A light on a piece of hardware that
indicates status or error conditions.
Legacy – A term used to describe a hybrid disc that can be played in
both a DVD player and a CD player.
Legal Signal – A signal in which each component remains within the
limits specified for the video signal format; that is, it does not exceed
the specified gamut for the current format. For instance, the gamut limits
for an R’, G’, B’ signal are 0 mV to 700 mV and Y’ is 0 mV to 700 mV
and P’b/P’r are +/-350 mV. If the signal remains within these limits the
value is legal.
Lempel-Ziv Welch (LZW) Compression – Algorithm used by the UNIX compress command to reduce the size of files, e.g., for archival or transmission. The algorithm relies on repetition of byte sequences (strings) in its input. It maintains a table mapping input strings to their associated output codes. The table initially contains mappings for all possible strings of length one. Input is taken one byte at a time to find the longest initial string present in the table. The code for that string is output, and the string is then extended with one more input byte, b. A new entry is added to the table mapping the extended string to the next unused code (obtained by incrementing a counter). The process repeats, starting from byte b. The number of bits in an output code, and hence the maximum number of entries in the table, is usually fixed, and once this limit is reached, no more entries are added.
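A minimal compression-side sketch of the algorithm described above (the fixed code-width limit and variable-width output codes of real implementations are omitted for brevity):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Sketch of the LZW scheme described above: the table starts with
    all single-byte strings; find the longest match, emit its code,
    then add the extended string to the table."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    string = b""
    out = []
    for b in data:
        extended = string + bytes([b])
        if extended in table:
            string = extended          # keep extending the match
        else:
            out.append(table[string])  # emit code for longest match
            table[extended] = next_code
            next_code += 1
            string = bytes([b])        # restart from byte b
    if string:
        out.append(table[string])
    return out

print(lzw_compress(b"ababab"))  # [97, 98, 256, 256]
```

Six input bytes compress to four codes because the repeated string "ab" is replaced by table code 256 after its first occurrence.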
Length – a) The physical length of the tape wound on a reel or on a hub,
varying from 213 feet in a C45 cassette to 9200 feet in a roll of instrumentation tape. b) The number of bytes represented by the items whose
Length is being described.
Lens – The curved glass on a video camera or camcorder that collects
light and focuses it.
Leq – Leq represents the continuous noise level, equivalent in loudness
and energy, to the fluctuating sound signal under consideration.
Letterbox – a) A television system that limits the recording or transmission of useful picture information to about three-quarters of the available vertical picture height of the distribution format (e.g., 525-line) in order to offer program material that has a wide picture aspect ratio. b) Term generally used for the form of aspect ratio accommodation involving increasing vertical blanking. See Blanking.
Letterbox Filter – Circuitry in a DVD player that reduces the vertical size
of anamorphic widescreen video (combining every 4 lines into 3) and adds
black mattes at the top and bottom.
Letterboxing – A technique that maintains the original wide aspect ratio
of film when displayed as video. The top and bottom of the video screen
are blackened and the total scene content is maintained.
Level – a) A defined set of constraints on the values which may be taken
by some parameters within a particular profile. A profile may contain one or
more levels. b) In MPEG-2, a range of picture parameters. c) Defines the
bounds of the coding parameters, such as resolution, bit rate, etc. within
each profile. The variation of performance is inherently wide in a profile.
Thus, levels have been defined in order to set reasonable constraints.
d) When relating to a video signal it refers to the video level in volts. In
CCTV optics, it refers to the auto iris level setting of the electronics that
processes the video signal in order to open or close the iris.
LFE (Low Frequency Effects) – The optional LFE channel (also referred
to as the “boom” channel) carries a separate, limited, frequency bandwidth
signal that complements the main channels. It delivers bass energy specifically created for subwoofer effects or low-frequency information derived
from the other channels. The LFE channel is the “.1” in 5.1-channel audio.
Library – As in a book library, it is somewhere one might keep effects,
i.e., on a disk or collection of disks; hence, a library of canned effects.
LIFO (Last-In-First-Out) – A buffer. Same as Push-Down Stack.
See Stack.
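As an illustrative sketch (not from the source), a Python list used with append and pop behaves exactly as the push-down stack described above:

```python
# A Python list used as a push-down (LIFO) stack.
stack = []
stack.append("first")   # push
stack.append("second")
stack.append("third")
print(stack.pop())      # "third": the last item in is the first out
print(stack.pop())      # "second"
print(stack.pop())      # "first"
```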
Lift – To remove selected frames from a sequence and leave black or
silence in the place of the frames.
Light Valve Technology – A light valve projector uses a bulb as the
source of light. The valve technology changes the color and intensity of the
source to form the picture. Film or slide projectors are examples of light
valve technology. The Digital Micro-Mirror Device (DMD); also known as the
Digital Light Processor (DLP), the Image Light Amplifier (ILA), and LCD are
all examples of electronic light valve technology. Obtaining black in a picture produced by a light valve projector requires an ability to shut the light
off in particular areas of the picture. Shutting light off in a small area is
actually rather difficult. Consequently, the real picture contrast ratio of a
number of these projectors is rather poor.
Lightness – The brightness of an area (subjectively) judged relative to the
brightness of a similarly illuminated area that appears to be white or highly transmitting.
Lightning Measurement Method – A measurement method that allows for the evaluation of the luma signal gain and for making chroma/luma gain comparisons. It can also provide a simple indication of inter-channel timing errors, indicated by a bowing in the trace between the green-magenta transitions. Tektronix developed this two-dimensional Lightning display, named for the zigzag trace pattern it produces. The display is created by plotting luminance versus B-Y in the upper half of the display and inverted luminance versus R-Y in the lower half. The bright dot in the center of the screen is the luminance blanking level. The points above and below this show the plots of the different color components based on their signal amplitude. This test requires that a color bar test signal be used.
Limiter – a) A compressor with a ratio greater than or equal to 10:1.
b) A device that prevents the voltage of an audio or video signal from
exceeding a specified level, to prevent distortion or overloading of the
recording device.
Limiting – Special circuitry is sometimes included in equipment to limit
bandwidth or amplitude, i.e., white amplitude in cameras is generally
limited. Saturation of matte generators in switchers is generally limited
to stop illegal colors.
Line – Same as a horizontal scan line or horizontal line.
Line Blanking – The blanking signal at the end of each horizontal
scanning line. Used to make the horizontal retrace invisible. Also called
horizontal blanking.
Line Compensation – Use of a video line amplifier to pre-compensate
for high frequency video signal transmission losses resulting from long
distance cable runs (several hundred meters) by boosting those signal
frequencies most affected. Without such compensation, deterioration is
manifested as loss of fine details and color distortion.
Line Count – The total number of horizontal lines in the picture.
Line Crawl – Tendency of the eyes to follow the sequentially flashing
scanning lines of interlaced scanning up or down the screen in the same
way that the eyes follow the sequentially flashing light bulbs on a movie
theater marquee. Line crawl tends to reduce vertical resolution.
Line Doubler – A video processor that doubles the number of lines in the
scanning system in order to create a display with scan lines that are less
visible. Some line doublers convert from interlaced to progressive scan.
Line Doubling – Any number of schemes to convert interlaced scanning
to progressive scanning at the display, the simplest of which simply
doubles each scanning line. More elaborate schemes use line interpolation
and motion compensation or median filtering.
Line Feed – A recording or live feed of a program that switches between
multiple cameras and image sources. Also known in sitcom production as
the Director’s Cut.
Line Frequency – The number of horizontal scans per second, normally
15,734.26 times per second for NTSC color systems and 15,625 in PAL.
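The figures above follow from simple arithmetic (NTSC's color frame rate is 30 × 1000/1001 frames per second), sketched here for illustration:

```python
# NTSC: 525 lines/frame × (30 × 1000/1001) frames/s
ntsc_line_freq = 525 * 30 * 1000 / 1001
print(ntsc_line_freq)        # ≈ 15734.27 Hz

# PAL: 625 lines/frame × 25 frames/s
pal_line_freq = 625 * 25
print(pal_line_freq)         # 15625 Hz
print(1 / pal_line_freq * 1e6)  # ≈ 64 µs per PAL scan line
```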
Line Interpolation – An advanced mechanism used in some line doublers
that calculates the value of scanning lines to be inserted between existing lines.
Line Locked – a) The sync pulses of cameras are locked to the AC mains
frequency. b) In CCTV, this usually refers to multiple cameras being powered by a common alternating current (AC) source (either 24 VAC, 110 VAC
or 240 VAC) and consequently have field frequencies locked to the same
AC source frequency (50 Hz in CCIR systems and 60 Hz in EIA systems).
Line Mode – A Dolby Digital decoder operational mode. The dialnorm
reference playback level is -31 dBFS and dynamic range words are used
in dynamic range compression. Refer to Dynamic Range Compression.
Line Pair – A measure of resolution often used in film and print media.
In television, lines are used instead, creating confusion when comparing
film and video.
Line Structure Visibility – The ability to see scanning lines. Seeing them
makes it harder to see the image (like looking out a window through
Venetian blinds or not being able to see the forest for the trees). Some ATV
schemes propose blurring the boundary between scanning lines for this reason.
Line Sync – The sync signal pulse transition that defines the start of a
scan line. Line sync may be the start of a normal sync or the start of an
equalization or broad pulse.
Line Sync Frequency – See Line Frequency.
Line Time – The time interval between successive 0H datums, or the time taken for a
complete scan line. Example: In a PAL system the line time is 64 µs.
Line Time Linear Distortions – Causes tilt in line-rate signal components such as white bars. The amount of distortion is expressed as a percentage of the amplitude at the center of the line bar. These are distortions involving signals in the 1 µsec to 64 µsec range. Line time distortions can also be quantified in Kbar units. In large picture details, this distortion produces brightness variations between the left and right
sides of the screen. Horizontal streaking and smearing may also be
apparent. Any test signal containing an 18 µsec, 100 IRE bar such as the
FCC Composite or the NTC-7 Composite can be used for this measurement. See the discussion on Linear Distortions and Kbar units.
Line Pair, Optical – In optical measurements and specifications, resolution is specified in terms of line-pairs per unit distance or unit angle, a
line pair consisting of one “black” plus one “white” line. Thus one line pair
corresponds to two television lines.
Line Pairing – A reduction in vertical resolution caused when a display (or
camera) does not correctly space fields, resulting in an overlap of odd and
even numbered scanning lines. See also Random Interlace.
Line Powered – A camera in which the power is supplied along the same
coaxial cable that carries the video signal.
Line Rate – The rate at which scanning lines appear per second (the
number of scanning lines per frame times the frame rate); sometimes
used (non-quantitatively) as an indication of the number of scanning lines
per frame (e.g., a high line rate camera).
Line Rate Conversion – Standardized video systems currently exist
employing the following number of total lines per frame: 525, 625, 1125.
Furthermore, each of these operates in a 2:1 interlace mode, with 262.5,
312.5, 562.5 lines per field (with concurrent temporal differences at field
rates of 50.00, 59.94, or 60.00 fields per second). Additional systems are
being proposed. While simple transcoding by deletion or repetition can be
applied, it is more commonly done by applying an algorithm to stored information in order to generate predictive line structures in the target system.
Line Store – A memory buffer which stores a single digital video line. One
application for line stores is use with video filtering algorithms or video
compression applications.
Line Time Waveform Distortion – See Line Time Linear Distortions.
Linear (Assembly) Editing – Editing using media like tape, in which
material must be accessed in order (e.g., to access scene 5 from the
beginning of the tape, one must proceed from scene 1 through scene 4).
See Nonlinear Editing.
Linear Addressing – This is a modern method of addressing the display
memory. The display memory (in the IBM PC world) was originally located
in a 128-Kbyte area from A000:0 through BFFF:F, too small for today’s
display systems with multi-megabyte memories. Linear addressing allows
the display memory to be addressed in upper memory, where a large
contiguous area is set aside for it.
Linear Distortion – Distortion that is independent of signal amplitude.
These distortions occur as a result of the system’s inability to uniformly
transfer amplitude and phase characteristics at all frequencies. When fast
signal components such as transitions and high frequency chrominance
are affected differently than slower line-rate or field-rate information, linear
distortions are probably present. These distortions are more commonly
caused by imperfect transfer characteristics in the signal path. However
linear distortions can also be externally introduced. Signals such as power
line hum can couple into the video signal and manifest themselves as distortions.
Line-Out Monitor – A monitor connected to a recording device that
displays the finished product. A line-out monitor may be a video monitor
(video product), an audio speaker (audio product), or a television (both
audio and video).
Linear Editing – A type of tape editing in which you assemble the program from beginning to end. If you require changes, you must re-record
everything downstream of the change. The physical nature of the medium
(for example, analog videotape) dictates how you place material on the
medium. See Nonlinear Editing.
Lines – Scanning lines or lines of resolution. The latter are hypothetical
lines alternating between white and black (or, in the case of chroma resolution, between complementary colors). The combined maximum number
of black and white lines that might be perceived in a particular direction
is the number of lines of resolution. Vertical resolution is measured with
horizontal lines; horizontal resolution is measured with vertical lines;
diagonal resolution is measured with diagonal lines (no current television
system or proposal favors one diagonal direction over the other, so the
direction of the diagonal lines does not really matter). See also PPH.
Linear Key – a) A term given to a key which contains soft edges and
information at many different luminance levels. This is the ability of the
keyer to key many levels linearly and means the keyer has a gain close
to one. b) A process for the selective overlay of one video image upon
another, as through chroma key. Control of the ratio of foreground to background is determined by the specifications derived from luminance information, and provided in the linear key data. Ratios to be applied are carried
for each picture element in the alpha channel. The process permits realistic
rendering of semi-transparent objects.
Linear PCM – One of the allowed types of audio formats for DVD. It may
have up to 8 channels and provide very high sample rates and sample
depths. Unfortunately, these very high data rates consume so much DVD
capacity that meaningful quantities of both audio and video become problematic.
Linear Predictive Coding (LPC) – LPC is a speech coding technique.
It models the human vocal tract by producing a time varying filter that
predicts the current speech sample from past speech samples.
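As a hedged illustration (not from the source), such a predictor can be sketched in a few lines of Python; the coefficients and ramp signal below are invented purely for demonstration:

```python
def predict(past, coeffs):
    """Predict the current sample as a weighted sum of past samples
    (most recent first) — the core operation of an LPC filter."""
    return sum(c * s for c, s in zip(coeffs, past))

# A linear ramp 0, 1, 2, 3, ... is perfectly predicted by
# s[n] = 2*s[n-1] - 1*s[n-2].
coeffs = [2.0, -1.0]
signal = [0, 1, 2, 3, 4]
pred = predict([signal[3], signal[2]], coeffs)  # past = [3, 2]
print(pred)  # 4.0 — matches signal[4]
```

Real LPC codecs derive the coefficients from the speech itself (e.g., by autocorrelation) and transmit the prediction residual rather than the samples.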
Linear Pulse Distribution Amplifier (Linear Pulse DA) – A linear pulse
distribution amplifier will handle 4 Vp-p signals (pulses) but is limited to
amplifying and fanning out the signal. Also see Regenerative Pulse DA.
Linear Select Decoding – Address decoding technique that uses the
most significant address bits to directly enable devices in the system.
Linear Time Code (LTC) – Time code recorded on a linear analog track
on a videotape. This type of time code can be read only while the tape is moving.
and digital to analog conversion are performed. To test for linearity, a
mathematically perfect diagonal line is converted and then compared to a
copy of itself. The difference between the two lines is calculated to show
linearity of the system and is given as a percentage or range of Least
Significant Bits. b) The uniformity of scanning speed which primarily
affects the accuracy of geometry along a horizontal or vertical line through
the picture center. c) The measurement of how accurately a piece of
electronic equipment processes a signal, (a measure of its transparency).
Line-Locked Clock – A design that ensures that there is always a
constant number of samples per scan line, even if the timing of the line changes.
Liners/Friction Plates – Friction controlling plastic sheets used inside a
Philips cassette to control winding uniformity and torque level.
Lines, Active Horizontal – In the scanning of a video image, the line
number associated with the format is the total number of lines assigned to
one frame. It is in fact a timing specification defining, in conjunction with the field frequency, the time interval allocated to each horizontal line (commonly measured in number of samples at the specified sampling rate or in
microseconds). Some of these lines and intervals carry image information,
some from the total assigned are dedicated to operational and control
functions, including returning the scanning beam back to the upper left
corner to begin the next field. Those allotted time intervals (lines) actually
carrying image information or image-associated information such as captioning, image test signals, etc., are the active lines. In further reduction of
time allocated to image information, some of each active line is dedicated
to the horizontal interval to get the scanning beam to return to the left-edge starting point for the next line and to reaffirm color subcarrier, etc.
In the U.S. 525/59.94/2:1/NTSC system, about 7.6% of the total field or
frame time is assigned to the vertical interval, and about 16% to the
horizontal interval. Thus, the 525 television lines per frame provide about
480 active lines. Correspondingly, each active line displays image data
about 84% of its time interval. Image information is thus conveyed for only
about 76.4% of the total time. In digital encoding, it may be possible to
reduce the number of bits assigned to the vertical and horizontal intervals
and achieve significant bit rate reduction.
Lines, Active Vertical – In a scanning standard, the number of raster
lines per frame that are not required to contain blanking. The active vertical
lines may include signals containing non-image information.
Lines, Television – Television images are scanned in a sequence of
horizontal lines, beginning at the upper left corner, and reaching the
bottom right corner at the end of the field. Thereupon the scan is returned
to the upper left corner to begin the next field. As a consequence of the
line structure, all television images are sampled vertically. Within a line,
the signal may remain analog or be sampled digitally. A television line is
also a measure of time, representing the interval allocated to one line.
(In the U.S. system 525/59.94/2:1, the line duration is 63.5 µs). Television
lines also function as a geometric measure, with resolution (both vertical
and horizontal), for example, specified in terms of “lines per picture
height”. Since both “black” and “white” lines of a resolution chart are
counted, two television lines equal one cycle of the electrical waveform.
Link – A Physical Layer communication path.
Liquid Gate – A printing system in which the original is immersed in a
suitable liquid at the moment of exposure in order to reduce the effect of
surface scratches and abrasions.
Load Resistance – The impedance or resistance (load) that a cable
places on a signal being transmitted through it. In the case of a high
frequency signal, signal-to-cable matching is essential to prevent signal
deterioration. The cable should be terminated by a specific load resistance,
usually 50 or 75 ohms. Improper cable loading results in signal distortion,
ghost images, color loss and other adverse phenomena. Most video inputs
have the proper termination built in.
List Box – Used to make a selection from a list of options. If the list is too
long to fit inside the given area, a vertical scroll bar moves the list up and down.
LOAS (Low Overhead Audio Stream) – This is an audio-only transport
format for applications where an MPEG-4 audio object needs to be transmitted and additional transport overhead is an issue.
Listener – Device that inputs data from a data bus.
Local Bus Transfer – The host/local bus transfer consumes a smaller
percentage of available bandwidth during video/graphics transfers than
earlier bus standards but the still-noticeable performance penalty may be
objectionable for some users, especially when compared to systems that
circumvent it.
Lip Synchronization – The absence of noticeable lag or lead between the
video and the audio playback.
Little Endian – A process which starts with the low-order byte and ends
with the high-order byte. Intel processors use the little endian format.
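A short sketch using Python's struct module illustrates the byte ordering described above:

```python
import struct

value = 13  # 0x0000000D as a 32-bit unsigned integer
little = struct.pack("<I", value)  # little endian: low-order byte first
big = struct.pack(">I", value)     # big endian: high-order byte first
print(little.hex())  # 0d000000
print(big.hex())     # 0000000d
```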
Live – Actually presented in the studio, with cameras feeding out to the
lines as the performance is done.
LLC (Logical Link Control) – In the Open Systems Interconnection (OSI)
model of communication, the Logical Link Control Layer is one of two sublayers of the Data-Link Layer and is concerned with managing traffic (flow
and error control) over the physical medium. The Logical Link Control Layer
identifies a line protocol, such as SDLC, NetBIOS, or NetWare, and may
also assign sequence numbers to frames and track acknowledgements.
LLME (Lower Layer Management Entity) – Contains the management
functions and functions that concern more than one layer.
LMDS (Local Multi-Point Distribution System) – A digital wireless
transmission system that works in the 28 GHz range in the U.S. and 24-40
GHz overseas. It requires line of sight between transmitter and receiving
antenna, which can be from one to four miles apart depending on weather
conditions. LMDS provides bandwidth in the OC-1 to OC-12 range, which
is considerably greater than other broadband wireless services. LMDS can
be deployed in asymmetric and symmetric configurations. It is designed to
provide the “last mile” from a carrier of data services to a large building
or complex that is not wired for high-bandwidth communications. In areas
without gas or steam pipes or other underground conduits, it is less costly
to set up LMDS transceivers on rooftops than to dig up the ground to
install optical fiber. See MMDS.
L-Member (Liaison Member) – A term used within ISO/IEC JTC1
committees. A Liaison Organization does not vote.
LNB (Low-Noise Block Converter) – A device hooked to a satellite
dish’s feedhorn that receives the signal at ~4 or 12 GHz and converts it to
a lower frequency for input into a receiver.
LO (Local Origination Channel) – A channel on a cable system (exclusive of broadcast signals) which is programmed by the cable operator and
subject to his exclusive control.
Lo/Ro (Left Only, Right Only) – A type of two-channel downmix for
multichannel audio programs. Lo/Ro downmixes are intended for applications where surround playback is neither desired nor required.
Load – a) A roll of film stock ready to be placed in the camera for
photography. A 1000-foot load is a common standard. b) A group of
multicamera reels shot at the same time, sharing the same timecode,
and numbered accordingly.
Local Decode – A feature of Indeo video interactive allowing the playback
application to tell the codec to decode only a rectangular subregion of the
source video image: the viewport. See Viewport.
Local Tally – A tally of which bus on an M/E is active regardless of
whether or not it is on air.
Local Workstation, Drive, Disk, File System, or Printer – The physical
workstation whose keyboard and mouse you are using, all hardware that
is connected to that workstation, and all software that resides on that
hardware or its removable media.
Locate (Menu) – The 3D function used to move or relocate an image.
Locate moves the image as if it were in three-dimensional space, even
though the image is seen on a two-dimensional video screen.
Location – Shooting locale.
Locator – A mark added to a selected frame to qualify a particular location within a sequence. User-defined comments can be added to locators.
Locked – a) A video system is considered to be locked when the
receiver is producing horizontal syncs that are in time with the transmitter.
b) When a PLL is accurately producing timing that is precisely lined up
with the timing of the incoming video source, the PLL is said to be
“locked”. When a PLL is locked, the PLL is stable and there is minimum
jitter in the generated sample clock.
Locking Range – The time range measured in micro- or nano-seconds
over which a video decoder can “lock” or stabilize a signal. Digital out of
range signals may require “rubber-band” buffering using a parallel shift
register (FIFO) to reduce the locking range.
Lock-Up Time – The time between when a machine is activated and when it is ready for use.
LOD (Level of Detail) – An important mechanism for achieving a high
level of performance in a 3D virtual world. It balances the quantity (extent)
of an object with its quality (detail). As some measure of the distance
between the viewer and the object change, a related change is made in
the quantity and quality of the rendering of an object.
LOF (Loss of Frame) – LOF is a generic term with various meanings
depending on the signal standards domain in which it is being used. A
SONET port status indicator that activates when an LOF defect occurs and
does not clear for an interval of time equal to the alarm integration period,
which is typically 2.5 seconds.
Lofting – The ability to stretch a “skin” over shapes that are in fact cross-sectional ribs.
Long Time Distortion – The low frequency transient resulting from a
change in APL. This distortion usually appears as a very low frequency
damped oscillation. The peak overshoot, in IRE, is generally quoted as the
amount of distortion. Settling time is also sometimes measured.
Log – To enter information about your media into bins at the beginning of
the editing process. Logging can be done automatically or manually. See
Shot Log.
Logarithm – A logarithm is the power to which a base (usually 10) must
be raised in order to arrive at the desired value.
Logarithmic Scale – A mathematical function which spreads out low
values and squeezes together higher values.
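The two entries above can be illustrated with Python's math module (the sample values are chosen purely for demonstration):

```python
import math

# 10 must be raised to the 3rd power to arrive at 1000.
print(math.log10(1000))  # ≈ 3.0

# A logarithmic scale spreads out low values and squeezes high ones:
# each tenfold jump in x adds only 1 to log10(x).
for x in (1, 10, 100, 1000):
    print(x, math.log10(x))
```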
Logic Analyzer – Test system capable of displaying 0s and 1s, as well as
performing complex test functions. Logic analyzers typically have 16 to 32
input lines and can store sequences of sixteen or more bits on each of the
input lines.
Logic Comparator – Test product that compares pin-for-pin operation of
an IC operating in-circuit with a known good reference IC.
Logic Probe – Handheld troubleshooting tool that detects logic state and
activity on digital circuit nodes.
Logic Pulser – Handheld troubleshooting tool that injects controlled digital
signals into logic nodes.
Logical – An artificial structure or organization of information created for
convenience of access or reference, usually different from the physical
structure or organization. For example, the application specifications of
DVD (the way information is organized and stored) are logical formats.
Logical Channel – A virtual connection between peer Multiplex Layer
(FlexMux or TransMux) entities. It has associated parameters relating to its
priority or error resilience tools applied to the Adaption Layer packets to be
transported in this logical channel.
Logical Unit – A physical or virtual peripheral device, such as a DVD-ROM drive.
Logical Value – A description of the memory blocks disks used for the
frame store.
Login – To log in to a workstation is to establish a connection to the
workstation and to identify yourself as an authorized user.
Login Account – A database of information about each user that, at the
minimum, consists of login name, user ID, and a home directory.
Login Name – A name that uniquely identifies a user to the system.
Login Screen – The window that you see after powering on the system,
before you can access files and directories.
Logout – To log out from a workstation is to finish a connection to the workstation.
Long Shot – Camera view of a subject or scene, usually from a distance,
showing a broad perspective.
Long Time Linear Distortions – Distortions involving signals in the
greater-than-16 msec range. Long time distortions affect slowly varying
aspects of the signal such as changes in APL which occur at intervals of
a few seconds. The affected signal components range in duration from
16 msecs to tens of seconds. The peak overshoot, in IRE, which occurs as
a result of an APL change is generally quoted as the amount of distortion.
Settling time is also sometimes measured. Long time distortions are slow
enough that they are often perceived as flicker in the picture. See the
discussion on Linear Distortions.
Longitudinal Curvature – Any deviation from straightness of a length of tape.
Longitudinal Direction – Along the length of the tape.
Longitudinal Time Code (LTC) – Audio rate time code information that
is stored on its own audio track. This audio rate signal allows the editing
system to track the position of the tape even at high shuttle speeds where
VITC data could not be used.
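As an illustration of how time code addresses individual frames, a minimal sketch (assuming 30 frames per second, non-drop-frame; `frames_to_timecode` is a hypothetical helper, not part of any standard API) converting an absolute frame count to HH:MM:SS:FF:

```python
def frames_to_timecode(frame_count: int, fps: int = 30) -> str:
    """Convert an absolute frame count to HH:MM:SS:FF (non-drop-frame)."""
    ff = frame_count % fps
    total_seconds = frame_count // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# One hour at 30 fps is 108,000 frames
assert frames_to_timecode(108000) == "01:00:00:00"
assert frames_to_timecode(61) == "00:00:02:01"
```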
Look Ahead Preview – See Preview.
Lookup Table (LUT) – Files used to convert color information in an image.
Loop – Piece of tape spliced beginning (head) to end (tail) for continuous
playback or recording. To fold around. A loop/slack section of film with the
necessary “play” to allow film which had been previously and continuously
moving from a reel to be intermittently moved through a grate/projection
head/optical lens arrangement. Proper loop size is important in threading a
film projector, i.e., in telecine for film to videotape transfer.
Loop Filter – Used in a PLL design to smooth out tiny inaccuracies in the
output of the phase comparator that might drive the loop out of lock. The
loop filter helps to determine how well the loop locks, how long it takes to
lock and how easily the loop can be knocked out of lock.
Loop Frame Store – The principle is that a series of video frames is compressed and stored in a continuous loop. This records a certain number of
frames and then records over them again until an alarm signal is received.
When this happens it carries on recording for a dozen frames or so and
then stops. This means that frames before and after the incident are
recorded. This eliminates tedious searching through hours of videotape
and concentrates attention on the period of activity.
Loop Through – A video signal entering a piece of equipment is returned
to the outside world for further use. Loop through circuitry requires careful
design to prevent signal degradation.
Looping – a) A term used to describe the chaining of a video signal
through several video devices (distribution amplifiers, VCRs, monitors, etc.).
A VCR may be hooked up to a distribution amplifier which is supplied with
a video input connector and a loop output connector. When a signal is fed
to the distribution amplifier, it is also fed unprocessed to the loop output
connector (parallel connection) on the distribution amplifier. In turn, the
same signal is fed to another device which is attached to the first one and
so on. Thus a very large number of VCRs or other video devices can be
looped together for multiple processing. b) An input that includes two connectors. One connector accepts the input signal, and the other connector
is used as an output for connecting the input signal to another piece of
equipment or to a monitor.
Loss – Reduction in signal strength or level.
Lossless (Compression) – a) Reducing the bandwidth required for
transmission of a given data rate without loss of any data. b) Image
compression where the recovered image is identical to the original.
c) The reconstructed data is not degraded relative to the source material;
only redundant information is removed from the media while
compressing. See Lossy (Compression).
Lossy (Compression) – a) Image compression where the recovered
image is different from the original. b) Compression after which some
portion of the original data cannot be recovered with decompression. Such
compression is still useful because the human eye is more sensitive to
some kinds of information than others, and therefore does not necessarily
notice the difference between the original and the decompressed image.
c) Reducing the total data rate by discarding data that is not critical. Both
the video and audio for DTV transmission will use lossy compression.
See Lossless (Compression).
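The distinction can be shown with a toy sketch: run-length encoding round-trips exactly (lossless), while coarse quantization does not (lossy). This is an illustrative example only, not any particular video codec:

```python
def rle_encode(data):
    """Lossless: collapse runs of repeated values into [value, count] pairs."""
    out = []
    for v in data:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

def quantize(data, step=16):
    """Lossy: discard low-order detail by rounding to multiples of 'step'."""
    return [round(v / step) * step for v in data]

samples = [10, 10, 10, 200, 201, 202]
assert rle_decode(rle_encode(samples)) == samples   # lossless round trip
assert quantize(samples) != samples                 # information discarded
```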
LowFER – One who experiments with radio communication at unusually
low frequencies (typically 1750 meters, which is 160-190 kHz and can be
used under FCC Part 15).
Low-Frequency Amplitude Distortion – A variation in amplitude level
that occurs as a function of frequencies below 1 MHz.
Low-Frequency Distortion – Distortion effects which occur at low
frequency. Generally considered as any frequency below the 15.75 kHz line rate.
Low-Order – Pertaining to the weight or significance assigned to the
digits of a number. In the number 123456, the lower order digit is six.
The three low-order bits of the binary word 11100101 are 101.
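The entry's own examples can be checked directly; a small Python sketch extracting the three low-order bits with a mask:

```python
word = 0b11100101

# Mask off everything but the three low-order (least significant) bits
low_three = word & 0b111
assert low_three == 0b101

# The low-order decimal digit is the remainder after division by 10
assert 123456 % 10 == 6
```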
Lowpass Filter – a) Filter that passes frequencies below a specific
frequency. b) A filter specifically designed to remove frequencies above the
cutoff frequency, and allow those below to pass unprocessed is called a
lowpass filter. The effect of a lowpass filter is to reduce the amplitude of
high frequencies. Common examples include the “treble” controls on many
lower end radios and stereos, the passive “tone” controls often found on
electric guitars and basses, hi-cut filters on consoles, and of course, this
type of filter is found on many synthesizers.
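As a sketch of the idea, a first-order (single-pole) digital lowpass filter: each output is a blend of the new sample and the previous output, so rapid changes are smoothed while slow ones pass. The coefficient value here is arbitrary, chosen only for illustration:

```python
def lowpass(samples, alpha=0.1):
    """First-order IIR lowpass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y = 0.0
    out = []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A constant (DC, lowest-frequency) input passes through almost unchanged...
settled = lowpass([1.0] * 200)[-1]
assert abs(settled - 1.0) < 1e-3

# ...while a rapidly alternating (high-frequency) input is strongly attenuated
wiggle = lowpass([1.0, -1.0] * 100)
assert max(abs(v) for v in wiggle[50:]) < 0.2
```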
LPC (Linear Predictive Coding) – An encoding technique used to aid in
the prediction of the next sample. This technique can be found in many
analogue to digital conversion processes.
LPCM (Linear Pulse Code Modulation) – A pulse code modulation
system in which the signal is converted directly to a PCM word without
companding, or other processing. Refer to PCM.
LPTV (Low Power TV) – LPTV stations provide their communities with
local programming, covering events and issues in a smaller area than most
TV stations. As of July 1, 1999, 2,190 LPTV stations were licensed in the
United States. As LPTV signals are comparatively weak, LPTV stations
don’t generally interfere with larger TV stations using the same frequency.
LS/RS (Left Surround, Right Surround) – The actual channels or
speakers delivering discrete surround program material.
Low Band Color – The old, original professional videotape color recording format.
LSB – See Least Significant Bit.
Low Delay – A video sequence does not include B-pictures when the low
delay flag is set; consequently, the pictures follow in chronological order,
and low delay is obtained. Normally, when B-pictures are included, the
pictures used for prediction of a B-picture are sent in advance so they are
available when the B-picture arrives, but this increases the delay.
LSP (Line Spectral Pairs) – An alternative representation of linear
predictor coefficients. LSPs have very good quantization properties for
use in speech coding systems.
Low End – The lowest frequency of a signal. See High End.
Low Impedance Mike – A mike designed to be fed into an amplifier or
transformer with input impedance of 150 to 250 ohms.
Low Key – A scene is reproduced in a low key if the tone range of the
reproduction is largely in the high density portion of the H and D scale of
the process.
Lower Layer – A relative reference to the layer immediately below a given
Enhancement Layer (implicitly including decoding of all layers below this
Enhancement Layer).
LSI – See Large Scale Integration.
LSTTL (Low Power Schottky TTL) – Digital integrated circuits that
employ Schottky diodes for improved speed/power performance over
standard TTL.
Lt/Rt (Left Total, Right Total) – Two-channel delivery format for Dolby
Surround. Four channels of audio, Left, Center, Right and Surround (LCRS)
are matrix encoded for two-channel delivery (Lt/Rt). Lt/Rt encoded programs are decoded using Dolby Surround and Dolby Surround Pro Logic
decoders. Refer to Dolby Surround and Dolby Surround Pro Logic.
LTC – See Linear Time Code or Longitudinal Time Code.
LTP (Long Term Prediction) – A method to detect the innovation in the
voice signal. Since the voice signal contains many redundant voice segments, we can detect these redundancies and only send information about
the changes in the signal from one segment to the next. This is accomplished by comparing the speech samples of the current segment on a
sample by sample basis to the reconstructed speech samples from the
previous segments to obtain the innovation information and an indicator of
the error in the prediction.
LTS (Lifetime Time Stamp) – Gives the duration (in milliseconds) an
object should be displayed in a scene. LTS is implicit in some cases such
as a video sequence where a frame is displayed for 1/frame-rate or until
the next frame is available, whichever is larger. An explicit LTS is necessary
when displaying graphics and text. An audiovisual object should be decoded only once for use during its lifetime.
Luma – See the definition for Luminance.
Luma (Component) – A matrix, block or single pel representing a monochrome representation of the signal and related to the primary colors in the
manner defined in the bit stream. The symbol used for luma is Y.
Luma Bandpass – A filter used to pass luma information only. It is used
for the same purpose as a chroma bandpass filter. See Chroma Bandpass.
Luma Delay – Luma delay is used in PAL/NTSC encoding and color
decoding in TV systems and processing of luminance in VTRs. The Y signal
occupies a greater bandwidth than the low definition, narrowband chroma.
This also means that the signal is delayed less as the bandwidth of a circuit increases. Without a delay, the chroma would be printed slightly later
than the corresponding luminance signal.
Lumakey – When keying one image onto another, if the composition is
based on the luminance (brightness) values of the key source, it constitutes
a lumakey.
Lumen (lm) – The luminous flux produced by a source with a luminous
intensity of one candela into a solid angle of one steradian.
Luminance (Y) – Video originates with linear-light (tristimulus) RGB
primary components, conventionally contained in the range 0 (black) to
+1 (white). From the RGB triple, three gamma-corrected primary signals
are computed; each is essentially the 0.45-power of the corresponding
tristimulus value, similar to a square-root function. In a practical system
such as a television camera, however, in order to minimize noise in the
dark regions of the picture it is necessary to limit the slope (gain) of the
curve near black. It is now standard to limit gain to 4.5 below a tristimulus
value of +0.018, and to stretch the remainder of the curve to place the
Y-intercept at -0.099 in order to maintain function and tangent continuity
at the breakpoint:
Rgamma = (1.099 * pow(R, 0.45)) – 0.099
Ggamma = (1.099 * pow(G, 0.45)) – 0.099
Bgamma = (1.099 * pow(B, 0.45)) – 0.099
Luma is then computed as a weighted sum of the gamma-corrected components:
Y = 0.299 * Rgamma + 0.587 * Ggamma + 0.114 * Bgamma
The three coefficients in this equation correspond to the sensitivity of
human vision to each of the RGB primaries standardized for video. For
example, the low value of the blue coefficient is a consequence of
saturated blue colors being perceived as having low brightness. The
luma coefficients are also a function of the white point (or chromaticity of
reference white). Computer users commonly have a white point with a color
temperature in the range of 9300 K, which contains twice as much blue
as the daylight reference CIE D65 used in television. This is reflected in
pictures and monitors that look too blue. Although television primaries have
changed over the years since the adoption of the NTSC standard in 1953,
the coefficients of the luma equation for 525 and 625 line video have
remained unchanged. For HDTV, the primaries are different and the luma
coefficients have been standardized with somewhat different values. The
signal which represents brightness, or the amount of light in the picture.
This is the only signal required for black and white pictures, and for color
systems it is obtained as the weighted sum (Y = 0.3R + 0.59G + 0.11B)
of the R, G and B signals.
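The transfer function and weighted sum described above can be written out directly; a sketch following the entry's own constants (gain 4.5 below a tristimulus value of 0.018, the 0.45-power segment above it, and the 0.299/0.587/0.114 luma weights):

```python
def gamma_correct(v: float) -> float:
    """Opto-electronic transfer: linear segment near black, power law above."""
    if v < 0.018:
        return 4.5 * v                      # limited slope to control noise near black
    return 1.099 * (v ** 0.45) - 0.099      # 0.45-power law with -0.099 intercept

def luma(r: float, g: float, b: float) -> float:
    """Weighted sum of gamma-corrected primaries (525/625-line coefficients)."""
    return (0.299 * gamma_correct(r)
            + 0.587 * gamma_correct(g)
            + 0.114 * gamma_correct(b))

assert abs(luma(1.0, 1.0, 1.0) - 1.0) < 1e-6   # reference white -> Y = 1
assert luma(0.0, 0.0, 0.0) == 0.0              # black -> Y = 0
# the two segments meet (nearly) continuously at the breakpoint
assert abs(4.5 * 0.018 - (1.099 * 0.018 ** 0.45 - 0.099)) < 1e-3
```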
Luminance Factor b – At a surface element of a non self-radiating medium, in a given direction, under specified conditions of illumination, ratio
of the luminance of the surface element in the given direction to that of a
perfect reflecting or transmitting diffuser identically illuminated. No “perfect
reflectors” exist, but properly prepared magnesium oxide has a luminance
factor equal to 98% and this is usually employed to define the scale.
Luminance Key – A key wherein the keying signal is derived from the
instantaneous luminance of a video signal after chroma has been filtered
out. That is, for a particular clip level, all parts of a scene that are brighter
than that level will appear keyed in, leaving background video everywhere else.
Luminance Noise – Noise which manifests itself in a video picture as
white snow, typically caused by one of the following situations: low signal
level due to poor lighting conditions, poor video signal processing, low
quality videotapes, excessively long video cables used without pre-compensation, dirt on the video recorder heads which interferes with reading and
writing, over-enhancement of the video signal.
Luminance Nonlinearity – Present if luminance gain is affected by
luminance levels. This amplitude distortion is a result of the system’s
inability to uniformly process luminance information over the entire
amplitude range. This distortion is also called differential luminance.
The amount of luminance nonlinearity distortion is expressed as a percentage. Measurements are made by comparing the amplitudes of the individual
steps in a staircase test signal. The result is the difference between the
largest and smallest steps, expressed as a percentage of the largest step.
Measurements should be made at both high and low APL and the worst
error should be quoted. In black and white pictures, luminance nonlinearity
will cause loss of detail in shadows and highlights through the crushing or
clipping of the white or black portions of the signal. In color pictures,
luminance nonlinearity will cause colors in the high luminance portions of
the picture to be distorted.
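A sketch of the measurement (the difference between the largest and smallest staircase-step amplitudes, expressed as a percentage of the largest), using hypothetical step values:

```python
def luminance_nonlinearity(step_amplitudes):
    """Percent nonlinearity from the measured amplitudes of staircase steps."""
    largest = max(step_amplitudes)
    smallest = min(step_amplitudes)
    return 100.0 * (largest - smallest) / largest

# Hypothetical measured step amplitudes (IRE) from a 5-step staircase signal
steps = [20.0, 19.5, 19.0, 18.0, 20.0]
assert luminance_nonlinearity(steps) == 10.0   # (20 - 18) / 20 = 10%
```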
Luminance Range, Scene – The luminance range of original scenes
varies from outdoor scenes in sunlight with a range possibly exceeding
10000:1, to indoor scenes with controlled lighting, where the range may
be reduced to 10:1 or even less. Adjustment of or accommodation to the
luminance range, scene is one of the conditions to be evaluated in determining how the scene is to be recorded. It is a test of artistic judgment
to place the relative luminances for the objects of interest on a suitable
portion of the opto-electronic or opto-photographic transfer function in
order to produce the desired subjective quality.
Luminance Signal – The black and white signal (the brightness signal) in
color TV. The luminance signal is formed by combining proportions of 30%
red, 59% green and 11% blue from the color signal. This combined output
becomes the luminance (brightness/monochrome) signal.
Luminance Range – The range in measured luminance between the
lightest and the darkest element of a luminous scene or its display.
Luminance Range, Display CRT – The luminance range that can be
displayed on a CRT is the ratio of maximum to minimum luminance on the
tube face. The maximum practical output is determined by beam current,
phosphor efficiency, shadow-mask distortion, etc. The minimum is the
luminance of that portion of the tube face being scanned with beam current set to cut-off. The contributions from room illumination, external and
internal reflections, etc., must be recognized.
Luminance Range, Display Theater – The luminance range that can be
displayed on a theater projection screen is the ratio of maximum to minimum luminance achievable during projection of film. The maximum achievable highlight is determined by light-source output capacity, projection
optical efficiency, the transmission of minimum film densities, screen gain,
etc. The minimum is the luminance contribution from house illumination
and other stray light, plus optical flare raising black levels, and the transmission of maximum film densities. Measured values in typical first-run
theaters show luminance ranges of 500:1 to 300:1 (usually limited by
house illumination).
Luminance Range, Recorded – The luminance range, recorded may
be reduced from the luminance range, scene intentionally and/or by the
limitations of the recording system. Most systems have a maximum
effective signal level limiting the high end, and noise limiting the low
end. All of the scene that is of interest must be placed within these two
limits by the choice of an appropriate transfer function. Some analog
functions permit gradual transitions to overload and/or noise. Digital
functions have inflexible limits imposed by the number of levels permitted
by the bit assignments.
Luminance, Constant (Video) – In an image coding system that derives
a luminance signal and two bandwidth-limited color-difference signals,
constant luminance prevails if all of the luminance information is encoded
into one signal that is supplemented by but totally independent of two color
signals carrying only chrominance information, e.g., hue and saturation.
Constant luminance is only achieved when the luminance and chrominance
vectors are derived from linear signals. The introduction of nonlinear
transform characteristics (usually for better signal-to-noise and control of
dynamic range prior to bandwidth reduction) before creating the luminance
and chrominance vectors destroys constant luminance. Current video systems do not reconstitute the luminance and chrominance signals in their
linear form before further processing and, therefore, depart from constant
luminance. Note: When R, G, B information is required to be recovered
from the set of luminance and color-difference signals, the values correlated to the original signals are obtained only if the luminance and chrominance signals have been derived from the linear functions of R, G, B or
have been transformed back to linear. Constant luminance not only provides a minimum of subjective noise in the display (since the luminance
channel does not respond to chrominance noise), but also preserves this
noise minimum through chrominance transformations.
Luminance, Physics (Generic Usage) – a) Luminance has technical
as well as colloquial definitions: generically, the flux from a light-emitting or
light-reflecting surface. The subjective response to luminance is brightness.
The quotient of the luminous flux at an element of the surface surrounding
the point and propagated in directions defined by an elementary cone
containing the given direction, by the product of the solid angle of the cone
and the area of the orthogonal projection of the element of the surface
on a plane perpendicular to the given direction. b) The luminous flux may
be leaving, passing through, or arriving at the surface. The luminance for each element of a surface within the field of view is defined as
the ratio of luminous flux per solid angle to the unit projected area of the
surface. Units are candelas per square meter, foot lamberts, nits.
Luminance, Relative, Scene – A convenient linear scale for measuring
in arbitrary units the relative luminance amplitudes within the scene, to be
recorded in a video or photographic image, as shown below. The relative
luminance scale is one factor affecting the choice of suitably artistic scene
reproduction. It may establish the optimum rendition of reference white and
optimum employment of the nonlinear transfer function in image recording.
Note: This relative luminance scale (linear in luminance) resembles IRE
units (linear in voltage) in positioning both black level reference and reference white at 0 and 100, respectively, but that it differs in recognizing the
extended luminance range of many commonly encountered scenes.
Luminescence – The absorption of energy by matter and its subsequent
emission as light. If the emission follows quickly after absorption of the
energy, the term fluorescence is used. If the process is of a longer and
more persistent length, the term phosphorescence is applied.
Luminosity Curve – A function that expresses the apparent brightness of
the spectral colors. It is used in video systems to calculate the luminance signal.
[Table: Correlation of Relative Scene Luminance – lists, in arbitrary relative scene luminance units (1), a Typical Limit of Interest, Reference White (2), Gray Card (3), and Scene Black.]
(1) IEEE Dictionary of Electrical and Electronics Terms defines luminance
factor as the ratio to a perfect reflector rather than as the ratio to reference
white. In practical electronic production, relative scene luminance is a
more useful measure.
(2) Under scene illumination, the light from a nonselective diffuse reflector
(white card) whose reflectance is 90% compared to a perfect reflector
(prepared magnesium oxide = 98%).
(3) Under scene illumination, the light from a nonselective diffuse reflector
(gray card) whose reflectance is 18% compared with that of a perfect reflector.
Luminance, Television – a) When television was monochrome and
sensors were in approximate conformance to CIE Photopic Spectral
Luminous Efficiency Function, it became common to think of the video
signal as the luminance signal. With the introduction of color, a matrix
was designed to develop a luminance function by weighting the R, G, B
signals in accordance with the CIE Photopic Spectral Luminance Efficiency
Function, producing a video signal compatible with monochrome receivers.
b) A signal that has major control of the image luminance. It is a linear
combination of gamma-corrected primary color signals. c) The specific
ratio of color primaries that provides a match to the white point in a specified color space. d) The definition of luminance, television is identical for
NTSC, PAL, and SECAM (CCIR Report 624-4), as follows: E’Y = (0.299) E’R
+ (0.587) E’G + (0.114) E’B. The weighting function is named luminance
signal in all of the television standards. For convenience and bandwidth
conservation, however, it is always formed from the gamma correction
signals (i.e., R’, G’, B’) and not from the initial linear signals, and thus it
is not an exact representation of luminance, physics.
Luminous Flux – a) The time rate of flow of light. b) The time rate of flow
of radiant energy evaluated in terms of a standardized visual response.
Unless otherwise indicated, the luminous flux is defined for photopic vision.
The unit of flux is the lumen: the luminous flux emitted within unit solid
angle by a point source having an isotropic luminous intensity of 1 candela.
LUT (Look-Up Table) – A cross-reference table in the computer memory
that transforms raw information from the scanner or computer and corrects
values to compensate for weakness in equipment or for differences in
emulsion types.
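As an illustration of the idea (not any particular scanner's correction tables), a one-dimensional 256-entry LUT applying a gamma correction to 8-bit pixel values:

```python
# Build a 256-entry lookup table once, then correct pixels by indexing —
# far cheaper than recomputing the power function for every pixel.
GAMMA = 1 / 2.2   # illustrative correction exponent, chosen for this example
lut = [round(255 * (i / 255) ** GAMMA) for i in range(256)]

def apply_lut(pixels, table):
    return [table[p] for p in pixels]

assert lut[0] == 0 and lut[255] == 255        # endpoints preserved
corrected = apply_lut([0, 64, 128, 255], lut)
assert corrected[1] > 64                      # mid-tones brightened by this curve
```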
Lux (lx) – a) The metric unit for illumination is 1 lumen per square meter.
1 foot candle = 10.76 Lux. b) A measurement of light. Lux is used in
television production to determine the minimum amount of light (lux rating)
needed for camera operation. Hence, a “2 lux” camcorder requires less
light than a “4 lux” camcorder.
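A sketch of the unit relationships given above (1 lux = 1 lumen per square meter; 1 foot candle = 10.76 lux):

```python
LUX_PER_FOOTCANDLE = 10.76   # 1 lumen/ft^2 expressed in lumen/m^2

def footcandles_to_lux(fc: float) -> float:
    return fc * LUX_PER_FOOTCANDLE

def lux_to_footcandles(lux: float) -> float:
    return lux / LUX_PER_FOOTCANDLE

assert footcandles_to_lux(1.0) == 10.76
assert abs(lux_to_footcandles(10.76) - 1.0) < 1e-12
```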
LV (LaserVision) – Technology used in optical video disk.
LVDS (Low Voltage Differential Signal) – A transmission method
defined by DVB for sending digital information in parallel mode. The
specification within EN50083-9 describes a 25-pin type D connector
using differential lines. The lines consist of a clock, eight data lines,
packet sync, and a data-valid line. LVDS has been widely used in laptops
to send signals from the motherboard to the flat panel display, because
it uses fewer wires. The technology is also used between the image
scaler and the panel in some stand-alone flat panel displays such as
SGI’s popular 1600SW flat panel.
M – The CCIR designation for 525 scanning-line/30 frame-per-second
television. U.S. color television is internationally designated NTSC-M. The
M standard is the world’s second oldest (the oldest was a 405-line/25
frame British standard, no longer broadcast).
M and E Tracks – a) Stands for music and effects audio tracks. b) The
common designation for a single sound track containing music and sound
effects but not dialog.
M Load – The cassette tape loading mechanism used in VHS videotape
recorder/playback technology.
M/E – See Mix Effects.
M/E Reentries – Those buttons on a bus that allow selection of previous
M/Es for further processing to be overlaid.
M/E to M/E Copy – A panel memory enhancement allowing the operator
with three keystrokes to copy all parameters from one M/E to another.
M/E to M/E Swap – A panel memory enhancement allowing the operator
with three keystrokes to swap all parameters between two M/Es. All
parameters include key clip levels, pattern position, all hues and modifiers
used as long as the M/Es are similarly equipped.
M2 – See Miller Squared Code.
M4IF (MPEG-4 Industry Forum) – The MPEG-4 Industry Forum starts
where the MPEG ends, i.e., dealing with all issues related to practical
implementations of the theoretical standards set by the MPEG in commercial applications.
MAA (MPEG ATM Adaptation)
MAC (Multiplexed Analog Components) – a) A system in which the
components are time multiplexed into one channel using time domain
techniques; that is the components are kept separate by being sent at
different times through the same channel. There are many different MAC
formats and standards. b) A means of time multiplexing component analog
video down a single transmission channel such as coax, fiber or a satellite
channel. Usually involves digital processes to achieve the time compression. c) A large family of television signal formats sharing the following
two characteristics: color remains in a component rather than composite
form, and luminance and chrominance components are time compressed
so that active line time remains constant, with chrominance following
luminance. Most of the MACs also include digital audio/data channels.
Since they are non-composite, MACs do not suffer from any cross-luminance or cross-color effects. Since they are time compressed, they tend
to have a greater base bandwidth than composite signals. See also ACLE,
MAC-60 – An early version of the HDMAC-60.
Machine Code – See Machine Language.
Machine Cycle – Basic period of time required to manipulate data in a processor.
Machine Error – A machine hardware malfunction.
Machine Language – Binary language (often represented in hexadecimal)
that is directly understood by the processor. All other programming
languages must be translated into binary code before they can be entered
into the processor.
Machine Operator – A person trained in the operation of a specific machine.
Macro Lens – A lens used for videography when the camera-to-object
distance is less than two feet. The macro lens is usually installed within
the zoom lens of the video camera or camcorder.
Macroblock – a) The four 8 by 8 blocks of luminance data and the two
(for 4:2:0 chroma format), four (for 4:2:2 chroma format) or eight (for
4:4:4 chroma format) corresponding 8 by 8 blocks of chrominance data
coming from a 16 by 16 section of the luminance component of the
picture. Macroblock is sometimes used to refer to the pel data and sometimes to the coded representation of the pel values and other data elements defined in the macroblock header. The usage should be clear from
the context. b) The screen area represented by several luminance and
color-difference DCT blocks that are all steered by one motion vector.
c) The entity used for motion estimation, consisting of four blocks of
luminance components and a number of corresponding chrominance
components depending on the video format.
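The block counts per macroblock follow directly from the chroma format; a small sketch tallying the 8 by 8 blocks in a 16 by 16 macroblock:

```python
# Chrominance 8x8 blocks accompanying the four luminance blocks of a macroblock
CHROMA_BLOCKS = {"4:2:0": 2, "4:2:2": 4, "4:4:4": 8}

def blocks_per_macroblock(chroma_format: str) -> int:
    """Total 8x8 blocks: 4 luminance plus format-dependent chrominance."""
    return 4 + CHROMA_BLOCKS[chroma_format]

assert blocks_per_macroblock("4:2:0") == 6
assert blocks_per_macroblock("4:2:2") == 8
assert blocks_per_macroblock("4:4:4") == 12
```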
Macrovision – An analog protection scheme developed by Macrovision
for the prevention of analog copying. It is widely used in VHS and has
now been applied to DVD.
Mag Track – This term usually refers to the sound track. It is usually
used only in reference to the separate sound tape used in double system
recording and editing. Videotape is a magnetic medium too, but the term
mag track is only used in reference to sound tape and not to sound on a
videotape.
Magnetic Density – The amount of magnetic flux within a specific area.
Magnetic Field – An area under the influence of magnetism.
Magnetic Film – Sprocketed base with a magnetic coating for audio
recording and playback.
Magnetic Force – The amount of magnetic influence/force within a
specific area/field.
Magnetic Head – That part of a videotape recorder which converts
electric variations into magnetic variations and vice versa.
Magnetic Induction – To magnetize by being put within the magnetic
influence of a magnetic field.
Magnetic Instability – The property of a magnetic material that causes
variations in the residual flux density of a tape to occur with temperature,
time and/or mechanical flexing. Magnetic instability is a function of particle
size, magnetization and anisotropy.
Magnetic Recording – The technology and process of recording
audio/video information using magnetism as the medium for storage of
information. The term is often used to mean the process/capability of both
recording and reproduction/playback.
Magnetic Tape – With a few exceptions, magnetic tape consists of a base
film coated with magnetic particles held in a binder. The magnetic particles
are usually acicular in shape and approach single domain size. See
Gamma Ferric Oxide, Chromium Dioxide and Cobalt Doped Oxide.
MPEG Levels and Profiles – High and High 1440 Levels

Level       Main Profile             Spatial Profile          High Profile
High        1920 x 1152, 80 Mbit/s   –                        1920 x 1152, 100 Mbit/s
            I, B, P                                           I, B, P; 4:2:0 or 4:2:2
High 1440   1440 x 1152, 60 Mbit/s   1440 x 1152, 60 Mbit/s   1440 x 1152, 80 Mbit/s
            I, B, P                  I, B, P                  I, B, P; 4:2:0 or 4:2:2
Magnetic Track – A sound-track recorded on magnetic film or tape.
Magnetism – The property of certain physical materials to exert a force
on other physical materials, and to cause voltage to be induced in conducting bodies moving relative to the magnetized body.
Magnetizing Field Strength, H – The instantaneous strength of the
magnetic field applied to a sample of magnetic material.
Magneto-Optical – Recordable disc technology using a laser to heat
spots that are altered by a magnetic field. Other formats include dye-sublimation and phase-change.
Main Channel – The basic transmission channel of an ATV channel
utilizing an augmentation channel.
Main data – User data portion of each sector. 2048 bytes.
Main Level – A range of allowed picture parameters defined by the
MPEG-2 video coding specification, with maximum resolution equivalent to
ITU-R Recommendation 601. The MPEG-2 standard has four levels, which
define the resolution of the picture (ranging from SIF to HDTV), and five
profiles, which determine the set of compression tools used. The four
levels can be described as:
[Main and Low Level rows of the MPEG-2 Levels and Profiles table:
Main Level: 720 x 576 at 15 Mbit/s in the Simple, Main and SNR Profiles, or 20 Mbit/s in the High Profile (4:2:0 or 4:2:2); I, B, P pictures.
Low Level: 360 x 288 at 4 Mbit/s in the Main and SNR Profiles; I, B, P pictures.]
(1) Simple Profile: Defined in order to simplify the encoder and the decoder at
the expense of a higher bit rate.
(2) Main Profile: The best compromise, with current technology, between bit
rate and quality.
(3) SNR Profile: A quality tradeoff is made against SNR performance. A low bit
rate decoder will have full resolution but a lower signal-to-noise ratio
than a high bit rate one.
(4) Spatial Profile: A tradeoff against spatial resolution. The low bit rate
receiver produces a picture with less resolution than the full bit rate one.
(5) High Profile: Intended for HDTV broadcast applications in 4:2:0 or 4:2:2.
1. Low Level: SIF resolution used in MPEG-1 (up to 360 x 288 pixels)
2. Main Level: Using 4:2:0 standard (720 x 576 pixels)
3. High 1440 Level: Aimed at HDTV (up to 1440 x 1152 pixels)
4. High Level: Wide screen HDTV (up to 1920 x 1152 pixels)
Main Profile – A subset of the syntax of the MPEG-2 video coding specification that is expected to be supported over a large range of applications.
MPEG-2 standard uses four levels which define picture resolution and five
profiles which define the compression tools used.
Main Visual Profile – Adds support for coding of interlaced, semitransparent, and sprite objects to the Core Visual Profile. It is useful for interactive and entertainment-quality broadcast and DVD applications.
Male Connector – A connector that has raised edges, pins, or other
protruding parts that you plug into a female connector. An example of a
male connector is an electrical plug that you plug into a wall outlet.
MAN (Metropolitan Area Network) – Network that spans a metropolitan
area. Generally, a MAN spans a larger geographic area than a LAN, but a
smaller geographic area than a WAN.
Man Page – An on-line document that describes how to use a particular
IRIX or UNIX command.
Mantissa – Fractional value used as part of a floating point number.
For example, the mantissa in the number 0.9873 x 10^7 is 0.9873.
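As an illustration (not part of the original definition), Python's standard library exposes a mantissa/exponent split directly; note that math.frexp works in base 2 rather than the base-10 form shown above:

```python
import math

# math.frexp splits a float into a base-2 mantissa and exponent,
# so that x == m * 2**e with 0.5 <= abs(m) < 1.
m, e = math.frexp(9_873_000.0)   # the value 0.9873 x 10^7
assert math.isclose(m * 2**e, 9_873_000.0)
```

Here e comes out as 24, since 2^23 <= 9,873,000 < 2^24.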
Manual Iris – A manual method of varying the size of a lens’s aperture.
Mapping – a) A technique for taking a 2D image and applying (mapping)
it as a surface onto a 3D object. b) Conversion of bytes (8 bits) to 2n-bit
wide symbols, where n is the bit width of the I and Q quantization; e.g., at
64 QAM the symbol width is 2n = 6 bits, so n = 3, i.e., I and Q are subdivided
into 2^3 = 8 amplitude values each. c) Refers to the definition of memory
for storing data used by a particular display mode. The range of addresses
reserved for graphics information in IBM-compatible systems is from
A000:0 to BFFF:F.
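A minimal Python sketch of sense b), splitting a 6-bit 64-QAM symbol into its two 3-bit I and Q indices. Which bits feed I versus Q is an assumption here; the exact bit-to-constellation mapping is defined by the transmission standard in use:

```python
def qam64_split(symbol: int) -> tuple[int, int]:
    """Split a 6-bit 64-QAM symbol into 3-bit I and Q indices (8 levels each)."""
    assert 0 <= symbol < 64
    i_bits = (symbol >> 3) & 0b111   # upper 3 bits -> I amplitude index
    q_bits = symbol & 0b111          # lower 3 bits -> Q amplitude index
    return i_bits, q_bits

# 2n = 6 bits per symbol, n = 3, so I and Q each take 2**3 = 8 values
assert qam64_split(0b101011) == (0b101, 0b011)
```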
Mark – Term used to describe the function of indicating to the editor
where the entry or exit of the edit will be done on the fly.
Mark IN – To select the first frame of a clip.
Mark IN/OUT – a) The process of entering the start and end time codes
for a clip to be edited into a sequence. b) The process of marking or
logging timecode numbers to define clips during a logging, recording or
digitizing session. See also IN Point, OUT Point.
Mark OUT – To select the last frame of a clip.
Mask – a) A mask image is a black and white image, which defines how
opaque each pixel is. A mask blocks out certain components of an image
but lets other parts show through. b) Pattern used to selectively set certain
bits of a word to 1 or 0. Usually ANDed or ORed with the data.
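Sense b) can be sketched in Python; the register value and mask constants below are made up purely for illustration:

```python
# Illustrative status register and masks (values chosen for the example).
status = 0b0110_1010

SET_MASK   = 0b0000_0101   # ORing with this forces bits 0 and 2 to 1
CLEAR_MASK = 0b1111_0111   # ANDing with this forces bit 3 to 0, keeps the rest

status |= SET_MASK    # set the masked bits
status &= CLEAR_MASK  # clear the masked-out bit
print(f"{status:08b}")  # 01100111
```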
Mask Key – A key that is selectively limited in what portions of the key
source will be allowed to cut the hole. Masks are usually square, however,
on Ampex switchers mask keys are done by utilizing the pattern system
with any pattern shape on the switcher. See Preset Pattern.
Mask Programmed – An IC that is programmed by generating a unique
photomask used in the fabrication of the IC.
Masking – Masking is one way of partially compensating for photo-receptor
dot sensitivity, non-optimum color filters, non-ideal display phosphors, or
unwanted dye absorption. Audio: The phenomenon by which loud sounds
prevent the ear from hearing softer sounds of similar frequency; a
psychoacoustic phenomenon whereby certain sounds cannot be heard in the
presence of others. Video: The process of blocking out portions of a picture
area/signal; also, a process to alter color rendition in which the appropriate
color signals are used to modify each other. Note: The process is usually
accomplished by suitable cross coupling between primary color-signal
channels. Photography: Comparable control of color rendition is accomplished
by the simultaneous optimization of image dyes, masking dyes,
and spectral sensitivities.
Masking Threshold – A measure of a function below which an audio
signal cannot be perceived by the human auditory system.
Mass Storage – Secondary, slower memory for large files. Usually floppy
disk or magnetic tape.
Master – The final edited tape recording from a session from which copies
will be made called sub masters. These may be used for some subsequent
editing to create other effects.
Master/Slave – a) Software option which allows user to maintain synchronization between two or more transports using one machine as control
reference (master). b) A video-editing process in which one or more decks
(the slaves) are set to imitate the actions of another deck (the master).
Mastering – The process of making a master pressing disc with a laser
beam recorder and a metal plating process. This master is then used
in the replication process to make thousands of copies. The process is
conceptually similar to processes used to create vinyl LPs.
Mastering Lathe – A turntable and cutting head used to cut the disk from
which the plates used to press records are made.
Match – Matching individual frames in assembled clips to the corresponding frames in the source clip.
Match Frame – An edit in which the source and record tape pick up
exactly where they left off. Often used to extend a previous edit. Also called
a Tracking Edit.
Match Frame Edit – An edit in which the last frame of the outgoing clip
is in sync with the first frame of the incoming clip, such that the incoming
clip is an extension of the outgoing clip.
Matchback – The process allowing you to generate a film cut list from a
30-fps video project that uses film as the source material.
Matchback Conversion – The conversion from film to video frame rates.
Matched Dissolve – A dissolve where the main object is matched in each
of the two scenes.
Matched Resolution – A term sometimes used to describe matching the
resolution of a television system to the picture size and viewing distance
(visual acuity); more often a term used to describe the matching of
horizontal and vertical (and sometimes diagonal) resolutions. There is some
evidence that the lowest resolution in a system (e.g., vertical resolution)
can restrict the perception of higher resolutions in other directions. See
also Square Pixels.
Master Clip – In the bin, the media object that refers to the media files
recorded or digitized from tape or other sources. See also Clip, Subclip.
Master Guide Table (MGT) – The ATSC PSIP table that identifies the
size, type, PID value, and version number for all other PSIP tables in the
transport stream.
Master Reference Synchronizing Generator – A synchronizing pulse
generator that is the precision reference for an entire teleproduction
facility.
Master Shot – The shot that serves as the basic scene, and into which
all cutaways and close-ups will be inserted during editing. A master shot
is often a wide shot showing all characters and action in the scene.
Match-Frame Edit – Edit in which a scene already recorded on the
master is continued with no apparent interruption. A match-frame edit is
done by setting the record and source in-points equal to their respective
out-points for the scene that is to be extended.
Material Editing – Each material has a number of attributes such as
transparency, ambient, diffusion, refraction, reflection, and so on.
Mathematically Lossless Compression – A method of compressing
video without losing image quality. The video is identical to uncompressed
video, but requires less disk space.
Mathias, Harry – Cinematographer, designer, teacher, consultant, and
author who came up with the six priorities of electronic cinematography.
Harry Mathias’ Priorities for Electronic Cinematography
(in order of importance):
Practicality, Flexibility, Ruggedness
Aspect Ratio
Gamma or Transfer Characteristic
Standards Acceptance (or Standards Conversion)
Matrix – a) Device that converts the RGB components from the camera
into color difference signals and the reverse. b) A set of crosspoints in a
particular functional area of a switcher corresponding to a bus (the controls
for that matrix). See Audio Matrix and Primary Matrix.
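A minimal Python sketch of sense a), forming (Y, R-Y, B-Y) from RGB using the Rec. 601 luma coefficients. A real matrix device also scales the difference signals for transmission, which is omitted here:

```python
def rgb_to_color_difference(r: float, g: float, b: float):
    """Convert gamma-corrected R'G'B' (0..1) to (Y, R-Y, B-Y) using
    the Rec. 601 luma coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y

# For white, Y is 1 and both color difference signals are (near) zero.
y, ry, by = rgb_to_color_difference(1.0, 1.0, 1.0)
```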
Matte Generator – The circuitry which generates the matte.
Matte In – To add.
Matrix Encoding – The technique of combining additional surround-sound
channels into a conventional stereo signal. Also see Dolby Surround.
Matrix Switcher – A device which uses an array of electronic switches to
route a number of audio/video signals to one or more outputs in almost any
combination. Production quality matrix switchers perform vertical interval
switching for interference free switching. Matrix switchers may be operated
with RS-232 or RS-422 controls, enhancing flexibility.
Matrix Wipe – a) A wipe wherein the screen is divided into square areas,
each of which can contain the video from either bus. Initially, each square
contains the first bus video, and as the wipe develops, one or more
squares switch to the opposite bus video until, at the completion of the
wipe, all squares contain the second bus video. b) A type of wipe comprised
of multiple boxes (a matrix of boxes) which turn on various parts
of the “B” video during the course of a transition from the “A” video, until
all the boxes have turned on and the scene is all “B” video. This operates
in either direction.
Matrixing – To perform a color coordinate transformation by computation
or by electrical, optical, or other means.
Matsushita – Parent of Panasonic and Quasar, majority owner of JVC, first
company to demonstrate an HD camera and display in the U.S., has continued demonstrations, and developed the QUME and QAM ATV schemes,
which popularized the idea of quadrature modulation of the picture carrier.
Matte – An operational image or signal carrying only transparency information and intended to overlay and/or control a conventional image or
image signal. a) Without shine or gloss. Relatively unreflective of light.
Removal of a portion of a TV picture and replacement of it with another
picture. b) A solid color, adjustable in hue, luminance, and saturation.
Matte is used to fill areas of keys and borders. Ampex switchers generate
many internal matte signal keys. c) A film term used to describe the film
effect analogous to a key. Sometimes this definition is carried over into
video and used to describe a video key. d) A black and white high contrast
image that suppresses or cuts a hole in the background picture to allow
the picture the matte was made from to seamlessly fit in the hole.
Matte Edge – An undesirable, unwanted outline around a matted image.
This is also called Matte Ring, Matte Ride, but more generally called a
“bad matte”.
Matte Fill – A key filled with a solid color instead of “self”, which is the
video cutting the key. This color is internally generated and adjustable in
hue, luminance and saturation.
Matte Channel – See Alpha Channel.
Matte Key – A key effect in which the inserted video is created by a matte
generator. It is composed of three components: the background video,
the foreground video, and the matte or alpha channel (black and white or
grayscale silhouette) that allows one portion of the image to be superimposed on the other.
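The superimposition described above is the classic matte/alpha blend; a per-pixel, single-channel Python sketch with values in 0..1 (all names illustrative):

```python
def matte_key(fg, bg, matte):
    """Composite foreground over background through a matte
    (0 = background shows, 1 = foreground shows), per pixel."""
    return [f * m + b * (1.0 - m) for f, b, m in zip(fg, bg, matte)]

# Three pixels: matte fully off, half on, fully on.
print(matte_key([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.5, 1.0]))
# [0.0, 0.5, 1.0]
```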
Matte Out – To remove, eliminate.
Matte Reel – A black and white (hi con) recording on tape used as a key
source for special effects.
MATV (Master Antenna TV) – A mini cable system relaying the broadcast
channels usually to a block of flats or a small housing estate.
Maximum Intrinsic Flux – In a uniformly magnetized sample of magnetic
material, the product of the maximum intrinsic flux density and the
cross-sectional area.
Maximum Intrinsic Flux Density – The maximum value, positive or negative, of the intrinsic flux density in a sample of magnetic material which is
in a symmetrically, cyclically magnetized condition.
Maxwell – A unit of magnetic flux.
MB (Megabyte) – A standard unit for measuring the information storage
capacity of disks and memory (RAM and ROM); 1000 kilobytes make one
megabyte.
Mbit – 1,000,000 bits.
MBONE (Multicast Backbone) – a) The MBONE is a system of transmitting
audio and video over a multicast network. Mostly available at universities
and government facilities, the MBONE can be thought of as a testbed
for technologies that will eventually be promulgated across the larger
Internet. The MBONE has been replaced on the vBNS and Abilene by native
multicast support. b) A collection of Internet routers that support IP
multicasting. The MBONE is used as a multicast channel that sends various
public and private audio and video programs.
Mbps or Mb/s (Megabits Per Second) – A data transmission rate in
millions of binary digits per second.
MBps or MB/s (Megabytes Per Second) – Data rate in millions of bytes
per second.
MCA (Media Control Architecture) – System-level specification developed by Apple Computer for addressing various media devices
(videodisc/videotape players, CD players, etc.) to its Macintosh computers.
MCI (Media Control Interface) – a) Microsoft’s interface for controlling
multimedia devices such as a CD-ROM player or a video playback application. b) A high-level control interface to multimedia devices and resource
files that provides software applications with device-independent control of
audio and video peripherals. MCI provides a standard command for playing
and recording multimedia devices and resource files. MCI is a platform-independent
layer between multimedia applications and system lower-level
software. The MCI command set is extensible inasmuch as it can be
incorporated in new systems via drivers and can support special features
of multimedia systems or file formats. MCI includes commands like open,
play, and close.
MCPC (Multiple Channels Per Carrier) – An average satellite transponder
has a bandwidth of 27 MHz. Typically, the highest symbol rate that
can be used is 26 MS/s, and multiple video or audio channels can be
transmitted simultaneously. MCPC uses a technique called Time Division
Multiplex to transmit multiple programs, which works by sending data for
one channel at a certain time and then data for another channel at another
time. Many encoder manufacturers are currently experimenting with
statistical multiplexing of MPEG-2 data. Using this technique, channels that
need high data rate bursts in order to prevent pixelization of the picture,
such as live sports events, will obtain the bandwidth as they need it by
reducing the data rate for other services that do not need it. Statistical
multiplexing should improve perceived picture quality, especially on video
that changes rapidly. It also has the advantage of requiring no changes in
the receiver equipment.
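The statistical multiplexing idea can be sketched as sharing a fixed total rate in proportion to each channel's momentary complexity. This is a toy model with made-up channel names; real encoders also enforce per-channel minimum rates and buffer constraints:

```python
def stat_mux(total_rate: float, complexity: dict) -> dict:
    """Share a fixed transponder rate among channels in proportion to how
    much each one currently needs (a toy statistical multiplexer)."""
    total_need = sum(complexity.values())
    return {ch: total_rate * need / total_need
            for ch, need in complexity.items()}

# A busy sports channel gets extra rate at the expense of quieter services.
rates = stat_mux(38.0, {"sports": 4.0, "news": 1.0, "movie": 3.0})
```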
MCU – See Multipoint Control Unit.
MDCT (Modified DCT) – Used in Layer 3 audio coding.
MDS (Multipoint Distribution Service) – A one-way domestic public
radio service rendered on microwave frequencies from a fixed station
transmitting (usually in an omnidirectional pattern) to multiple receiving
facilities located at fixed points.
MedFER – One who experiments with radio communications at low
frequencies such as those on the edges of the AM broadcast band
(under FCC Part 15).
Media – The video, audio, graphics, and rendered effects that can be
combined to form a sequence or presentation.
Media 100 – A nonlinear editing system that uses its own proprietary
software. Often used with Adobe After Effects.
Media Clip – A video segment usually interleaved with an audio segment.
Media Data – Data from a media source. Media data can be: Analog Data:
Film frames, Nagra tape audio, or videotape video and audio. Digital Data:
Either data that was recorded or digitized such as video frame data and
audio samples, or data created in digital form such as title graphics, DAT
recordings, or animation frames.
Media Files – Files containing the compressed digital audio and video
data needed to play Avid clips and sequences.
Media Conversion – The process of converting data from one type of
media to another for premastering and mastering. Premastering software
typically requires input data on hard disk.
Media Object – A representation of a natural or synthetic object that can
be manifested aurally and/or visually. Each object is associated with zero
or more elementary streams using one or more object descriptors.
Media Object Decoder – An entity that translates between the coded
representation of an elementary stream and its decoded representation.
Media Sample Data – See Safe Color Limiting.
Median Filter – An averaging technique used by PCEC in its IDTV line
interpolation scheme to take an average of lines in the current and previous fields to optimize resolution and avoid motion artifacts without using
motion compensation.
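A per-pixel median-of-three sketch in Python, in the spirit of such line-interpolation schemes; the particular choice of the three input lines is illustrative:

```python
def median3(line_above, line_below, prev_field_line):
    """Interpolate a missing line as the per-pixel median of the lines above
    and below it in the current field and the co-sited line from the previous
    field: still areas keep full detail, moving areas fall back gracefully."""
    return [sorted(t)[1] for t in zip(line_above, line_below, prev_field_line)]

print(median3([10, 10], [20, 20], [12, 90]))  # [12, 20]
```

In the example, the first pixel is static (the previous-field value 12 wins), while the second has moved (90 is rejected as an outlier).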
Medium – The substance through which a wave is transmitted.
Medium Scale Integration (MSI) – Technology by which a dozen or more
gate functions are included on one chip.
Medium Shot – Camera perspective between long shot and closeup,
whereby subjects are viewed from medium distance.
Mega – One million, i.e., megacycle is one million cycles.
Megabyte (Mbyte) – One million bytes (actually 1,048,576);
one thousand kilobytes.
Megaframe Initialization Packet (MIP) – A transport stream packet
used by DVB-T to synchronize the transmitters in a single-frequency
network (SFN).
Megahertz (MHz) – One million hertz (unit of frequency). A normal U.S.
television transmission channel is 6 MHz. The base bandwidth of the video
signal in that channel is 4.2 MHz. The SMPTE HDEP system calls for 30
MHz each for red, green, and blue channels.
Memory – Part of a computer system into which information can be
inserted and held for future use. Storage and memory are interchangeable
terms. Digital memories accept and hold binary numbers only. Common
memory types are core, disk, tape, and semiconductors (which includes
ROM and RAM).
Memory Counter (or Rewind) – A system which allows the tape to be
rewound automatically to any predetermined point on the tape.
Memory Effect – Loss of power storing capability in NiCad (video camera)
batteries which occurs when batteries are habitually discharged only partially before recharging. To avoid the memory effect, always fully discharge
NiCad batteries before recharging.
Memory Map – Shows the address assignments for each device in the
system.
Memory-Mapped I/O – I/O devices that are accessed by using the same
group of instruction and control signals used for the memory devices in a
system. The memory and I/O devices share the same address space.
Menu – a) A list of operations or commands that the IRIS can carry out
on various objects on the screen. b) A group of parameters and flags that
enable manipulation of the video image. Menus are Target, Rotate,
Border, Source (with Sides submenu), Digimatte, Timeline and KF Flags.
c) A graphic image, either still or moving, with or without audio provided
to offer the user a variety of choices within the confines of the authoring
and product material provided. It is the traditional meaning of a menu like
you might find in a restaurant.
MER (Modulation Error Ratio) – The MER is defined as the ratio of I/Q
signal power to I/Q noise power; the result is indicated in dB.
Meridian Lossless Packing (MLP) – A lossless compression technique
(used by DVD-Audio) that removes redundancy from PCM audio signals to
achieve a compression ratio of about 2:1 while allowing the signal to be
perfectly recreated by the MLP decoder.
MHz – See Megahertz.
MESECAM – Middle East SECAM or (B, G, D, K) SECAM. A technique of
recording SECAM video. Instead of dividing the FM color subcarrier by
four and then multiplying back up on playback, MESECAM uses the same
heterodyne conversion as PAL.
MIC (MPEG-2 Interface Card)
Mesh – a) A graphical construct consisting of connected surface elements
to describe the geometry/shape of a visual object. b) A grid that is placed
over an image during morphing or warping.
Meshbeat – See Moiré.
Metadata – a) The descriptive and supporting data that is connected to
the program or the program elements. It is intended to both aid the direct
use of program content and support the retrieval of content as needed
during the post-production process. b) Generally referred to as “data about
data” or “data describing other data”. More specifically, information that is
considered ancillary to or otherwise directly complementary to the essence.
Any information that a content provider considers useful or of value when
associated with the essence being provided.
Metadata Dictionary – The standard database of approved, registered
data element tags, their definitions and their allowed formats.
Metal Particle – One of the most recent developments of a magnetizable
particle for magnetic tape, produced from pure iron and having very high
coercivity in the range of 850 to 1250 oersteds.
Metamorphosis – Given two databases with the same number of
vertices, a metamorphosis causes the first to become the second. This
is an animation tool.
Method – Methods, in the object-oriented terminology, are executable
procedures associated with an object that operates on information in the
object’s data structure.
Mezzanine Compression – Contribution level quality encoded high
definition television signals. Typically split into two levels: high level at
approximately 140 Mbps and low level at approximately 39 Mbps (for high
definition within the studio, 270 Mbps is being considered). These levels of
compression are necessary for signal routing and are easily re-encoded
without additional compression artifacts (concatenation) to allow for picture
manipulation after decoding. DS-3 at 44.736 Mbps will be used in both
terrestrial and satellite program distribution.
MFN (Multifrequency Network)
MFP (Mega Frame Packet)
MGT – See Master Guide Table.
MHEG – See Multimedia Hypermedia Expert Group.
MHP (Multimedia Home Platform) – A set of common application
programming interfaces (API) designed to create an operating system
independent, level playing field for broadcasters and consumer-electronics
manufacturers. The goal is to provide all DVB-based terminals (set-tops,
TVs, and multimedia PCs) full access to programs and services built on
the DVB Java (DVB-J) platform.
MIB (Management Information Base) – The Management Information
Base is a collection of managed objects defined by their attributes and
visible to the network management system.
Micro – One millionth.
Micro Channel – Personal computer bus architecture introduced by IBM
in some of its PS/2 series microcomputers. Incompatible with original
PC/AT (ISA) architecture.
Micro-Cassette – A miniature cassette system originated by Olympus,
allowing 30 minutes of recording per side on a capstan-driven tape, 1/7”
wide, running at 15/16 ips.
Microcode – See Microprogram.
Microcomputer – Complete system, including CPU, memory and I/O
Microdropouts – Low level, short duration dropouts. They correspond to
RF envelope dropouts of 10 dB or greater with a duration of 0.5 to 0.8
Microphone – A transducer which converts sound pressure waves into
electrical signals.
Microphone Impedance – In order to obtain the highest quality output
signal from a microphone, a preamplifier input should provide a load
(impedance) which exactly matches a microphone’s output impedance.
Microphone output impedances vary from 150 ohms to several megohms.
Microphone Preamplifier – A microphone is a transducer which converts
sound waves to electrical impulses. Microphones typically generate very
low signal levels requiring low noise, high fidelity, pre-amplification to
boost the output signal to a level compatible with audio amplifier circuitry.
Good microphone preamplifiers provide precise matching of microphone
impedance and low noise electronic components.
Microphonics – In video transmission, refers to the mechanical vibration
of the elements of an electron tube resulting in a spurious modulation of
the normal signal. This usually results in erratically spaced horizontal bars
in the picture.
Microprocessor – Central processing unit fabricated on one or two chips.
The processor consists of the arithmetic and logic unit, control block, and
registers.
Microprogram – Program that defines the instruction set. The microprogram (also called microcode) tells the CPU what to do to execute each
machine language instruction. It is even more detailed than machine
language and is not generally accessible to the user.
Microsecond – One millionth of a second: 1 x 10^-6 or 0.000001 second.
A term used to mean very fast/instantaneous.
Microwave – One definition refers to the portion of the electromagnetic
spectrum that ranges between 300 MHz and 3000 GHz. The other definition is when referring to the transmission media where microwave links are
used. Frequencies in microwave transmission are usually between 1 GHz
and 12 GHz.
Microwave Dish – A parabolic shaped antenna used for high frequency
RF signals.
Microwave Transmission – Communication systems using high
frequency RF to carry the signal information.
Microwaves – Radio frequencies with very short wavelengths (UHF).
Middle Area – Unused physical area that marks the transition from layer
0 to layer 1. Middle Area only exists in dual layer discs where the tracks of
each layer are in opposite directions.
MIDI (Musical Instrument Digital Interface) – A standard for connecting electronic musical instruments and computers. MIDI files can be
thought of as digital sheet music, where the computer acts as the musician
playing back the file. MIDI files are much smaller than digital audio files,
but the quality of playback will vary from computer to computer.
MIDI Timecode – A system for timed device control through MIDI
protocols. The importance of MIDI timecode in video post-production
has increased due to the increased use of personal computers for video
editing.
Midtones – Mid-level grays in an image.
MII – Portable, professional video component camera/recorder format,
utilizing 1/2” metal particle videotape.
MII (M2) – Second generation camera/recorder system developed by
Panasonic. Also used for just the recorder or the interconnect format.
MII uses a version of the (Y, R-Y, B-Y) component set.
MII Format – A component videotape format created by Panasonic in
an effort to compete with Sony Betacam. MII is an extension of the VHS
consumer format as Sony Betacam is an extension of the Betamax home
video technology.
Minimum Performance – The line between EDTV and HDTV. Naturally,
each ATV proponent defines minimum performance so as to favor its
system to the detriment of others.
MIP – See Megaframe Initialization Packet.
MIPS (Millions of Instructions Per Second) – Refers to a computer
processor’s performance.
Miro Instant Video – An edit mode in Adobe Premiere for Windows,
specifically for DC30 users, that allows video to be streamed out of a
DC30 capture card.
Mistracking – The phenomenon that occurs when the path followed by
the read head of the recorder does not correspond to the location of the
recorded track on the magnetic tape. Mistracking can occur in both longitudinal and helical scan recording systems. The read head must capture a
given percentage of the track in order to produce a playback signal. If the
head is too far off the track, recorded information will not be played back.
MIT (Massachusetts Institute of Technology) – Home of the Media
Lab and its Advanced Television Research Program (ATRP), its Audience
Research Facility, its Movies of the Future program, and other advanced
imaging and entertainment technology research. In addition to conducting
and publishing a great deal of ATV research, MIT has come up with two
ATV proposals of its own, one called the Bandwidth Efficient Proposal and
one the Receiver Compatible Proposal.
MITG (Media Integration of Text and Graphics)
Mix – a) A transition between two video signals in which one signal is
faded down as the other is faded up. Also called a dissolve or cross fade.
b) This term is most often used as a synonym for additive mix but may
also refer to a non-additive mix.
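The "one signal faded down as the other is faded up" behavior is a linear cross-fade; a short Python sketch over lists of samples (or pixel values):

```python
def mix(a, b, fade: float):
    """Cross-fade two signals: at fade=0 the output is all A, at fade=1
    all B, and in between A is faded down exactly as B is faded up."""
    return [(1.0 - fade) * x + fade * y for x, y in zip(a, b)]

half = mix([1.0, 1.0], [0.0, 2.0], 0.5)   # the midpoint of the dissolve
```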
Mike Boom – A rigid extension to which a microphone may be attached.
Mix Effects (M/E) – One of the console modules (or its associated signal
processing boards) which allows an operator to perform wipes, mixes,
keys, etc.
Mike Pad – An attenuator placed between the output of a mike and the
input of a mike preamp to prevent overdriving the preamp.
Mixdown Audio – The process that allows the user to combine several
tracks of audio onto a single track.
Mil – 0.001 of an inch.
Mixed Mode – A type of CD containing both Red Book audio and Yellow
Book computer data tracks.
Mike – Microphone.
Millennium Group – The group of companies (Macrovision, Philips,
Digimarc) proposing the Galaxy watermarking format.
Miller Squared Coding (M2) – A DC-free channel coding scheme used in
D2 VTRs.
Millimeter – One thousandth of a meter.
Millimicron – One billionth of a meter.
Millisecond – One thousandth of a second.
MIME (Multi-Purpose Internet Mail Extensions) – Standard for transmitting non-text data (or data that cannot be represented in plain ASCII
code) in Internet mail, such as binary, foreign language text (such as
Russian or Chinese), audio, or video data. MIME is defined in RFC2045.
Mini-Cassette – A miniature cassette system originated by Philips,
allowing 15 minutes of recording per side on a narrow tape.
Minimize – To reduce a window to an icon for later use.
Mixer – The audio or video control equipment used for mixing sound
and/or video. In video, a device for combining several video input signals.
Mixing – To combine various pictures and/or audio elements together.
Mixing Console – A device which can combine several signals into one or
more composite signals, in any desired proportion.
Mixing, Digital – A step in post-production during which two or more
digital representations are combined to create an edited composition. In a
transmission, recording, or reproducing system, combining two or more
inputs into a common output, which operates to combine linearly the separate input signals in a desired proportion in an output signal. Production:
Generally the editing of digital image data, resulting in composites ranging
from simple transitions to multilayered collages combining selected
information from many interim images. The combining of digital images
is accomplished by suitable arithmetic calculations on related pairs of
digital words. Data Processing: A process of intermingling of data traffic
flowing between concentration and expansion stages.
MJD (Modified Julian Date) – A day numbering system derived from
the Julian date. It was introduced to set the beginning of days at 0 hours
instead of 12 hours, and to reduce the number of digits in day numbering.
The modified Julian date is obtained by subtracting 2,400,000.5 from the
Julian date. As a consequence, the origin of this date (day zero) begins at
0 hours on November 17, 1858. For example, 0 hours on January 1, 1996
began modified Julian day 50,083.
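The day count above can be checked with Python's standard datetime module; day zero (November 17, 1858) comes straight from the definition:

```python
from datetime import date

# MJD day zero is 0 hours on November 17, 1858; counting whole days
# from that epoch gives the modified Julian day number of any date.
MJD_EPOCH = date(1858, 11, 17)

def modified_julian_day(d: date) -> int:
    """Return the modified Julian day number for a calendar date."""
    return (d - MJD_EPOCH).days

print(modified_julian_day(date(1858, 11, 17)))  # 0 (day zero)
print(modified_julian_day(date(1996, 1, 1)))    # 50083, as in the example above
```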
MJPEG – See Motion JPEG.
MMCD (Multimedia CD) – A development proposal from Sony and
Philips, now integrated in the DVD.
MMDS (Multi-Point Microwave Distribution System) – This is a
terrestrial broadcasting technology which utilizes low-power microwave
transmitters, and is mainly used for extending the range of cable TV
systems and for TV distribution in sparsely populated areas or in areas
with rough terrain. MMDS is not specifically analog or digital. In digital
MMDS, the use of MPEG is highly attractive to boost the number of
channels that may be distributed.
MMI (Man Machine Interface) – Refers to the interface presented by
a machine to a human operator. Another name for User Interface.
MMT (Modulation Mode Table)
Mnemonic Code – Codes designed to assist the human memory. The
microprocessor language consists of binary words, which are a series of
0s and 1s, making it difficult for the programmer to remember the instructions corresponding to a given operation. To assist the human memory,
the binary numbered codes are assigned groups of letters (or mnemonic
symbols) that suggest the definition of the instruction. For example, the
8085 code 0011 1010 binary means load the accumulator and is represented by
the mnemonic LDA.
Mobile Unit – Equipment designed to be movable as a unit. A truck/van
with all the necessary equipment to do photography/production on location.
Sometimes mobile units have cameras and VTRs within them and sometimes they are added for specific jobs.
Mod – Abbreviation for Modulator on the 4100 series and Modifier on the
AVC series.
MOD (Minimum Object Distance) – Feature of a fixed or a zoom lens
that indicates the closest distance an object can be from the lens’s image
plane, expressed in meters. Zoom lenses have an MOD of around 1 m, while
fixed lenses usually have much less, depending on the focal length.
Model-Based Coder – Communicating a higher-level model of the image
than pixels is an active area of research. The idea is to have the transmitter and receiver agree on the basic model for the image; the transmitter
then sends parameters to manipulate this model in lieu of picture elements
themselves. Model-based decoders are similar to computer graphics rendering programs. The model-based coder trades generality for extreme
efficiency in its restricted domain. Better rendering and extending of the
domain are research themes.
Modeling – a) The process of creating a 3D world. There are several
kinds of 3D modeling, including: boundary representation, parametric (or
analytic), and constructive solid geometry. After the geometry of a model
is determined, its surface properties can be defined. b) This process
involves describing the geometry of objects using a 3D design program.
Modem (Modulator/Demodulator) – An electronic device for converting
between serial data (typically RS-232) from a computer and an audio
signal suitable for transmission over telephone lines. The audio signal is
usually composed of silence (no data) or one of two frequencies representing 0 and 1. Modems are distinguished primarily by the baud rates they
support which can range from 75 baud up to 56000 and beyond. Various
data compression and error algorithms are required to support the highest
speeds. Other optional features are auto-dial (auto-call) and auto-answer,
which allow the computer to initiate and accept calls without human intervention.
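The two-tone scheme described above is frequency-shift keying; this sketch generates tone bursts for a bit sequence. The 1070/1270 Hz pair matches the Bell 103 originate channel, and the sample and baud rates are illustrative choices, not part of the definition:

```python
import math

# Sketch of frequency-shift keying: each bit is sent as a burst of one
# of two audio tones. Frequencies follow the Bell 103 originate pair
# (1070 Hz = 0, 1270 Hz = 1); sample rate and baud rate are arbitrary.
SAMPLE_RATE = 8000      # samples per second
BAUD = 300              # bits per second
FREQ = {0: 1070.0, 1: 1270.0}

def fsk_samples(bits):
    """Generate audio samples (floats in [-1, 1]) for a bit sequence."""
    samples_per_bit = SAMPLE_RATE // BAUD
    out = []
    for i, bit in enumerate(bits):
        f = FREQ[bit]
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / SAMPLE_RATE
            out.append(math.sin(2 * math.pi * f * t))
    return out

wave = fsk_samples([1, 0, 1])
print(len(wave))  # 3 bits * (8000 // 300) samples per bit = 78
```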
Modifier – Pattern system electronics capable of modulator effects,
continuous rotation effects, pattern border hue modulation, pattern border
rainbows, and position modulation.
Modulate – To impress information on an AC or RF signal by varying the
signal’s amplitude, frequency or phase.
Modulated – When referring to television test signals, this term implies
that chrominance, luminance, sync, color burst and perhaps audio information is present.
Modulated Carrier Recording – Signal information recorded in the form
of a modulated carrier.
Modulated Five-Step Test Signal – A test signal with five steps of
luminance change, each step having a constant frequency and phase
chrominance signal. This signal is used to test for differential phase
distortions. There is also a 10-step version of this signal.
Modulated Pedestal – A test signal which consists of three chrominance
packets with the same phase, on the same luminance level (50 IRE), with
different amplitudes (20, 40 and 80 IRE). This signal is used to test for
chrominance nonlinear phase distortion and chrominance to luminance
intermodulation distortion.
Modulator – a) A section within a VTR that changes the frequency of
the video signal information coming in from an external source (i.e.,
an electronic camera) to signal information that is compatible with the
requirements of the VTR heads, while keeping the picture information
basically unchanged. b) Pattern system electronics capable of distorting
the edge of a pattern by impressing a sine or other waveform on the
vertical or horizontal shape of the pattern. c) The device that places
information on an RF carrier signal.
Modulator Lock – A feature that synchronizes the modulator or modifier
effect to the frame rate, thus preventing the effect from drifting or appearing incoherent.
Module – A small device, not working by itself, designed to perform specialized tasks in association with a host, for example: a conditional access
subsystem, an electronic program guide application module, or to provide
resources required by an application but not provided directly by the host.
Module Board – Printed circuit board and mounted components that is
attached to the base board using screws and spacers.
Modulated Ramp Test Signal – A test signal with a linear rise in luminance and constant chrominance as shown in the figure to the right. This
signal is used to test for differential phase distortions.
Moiré – a) An image artifact that occurs when a pattern is created on the
screen where there should not be one. The moiré pattern is generated
when different frequencies that are part of the video signal, create a new
unwanted frequency. b) A wavy pattern, usually caused by interference.
When that interference is cross-color, the pattern is colored, even if the
picture is not. c) The spurious pattern in the reproduced television picture
resulting from interference beats between two sets of periodic structures
in the image. It usually appears as a curving of the lines in the horizontal
wedges of the test pattern and is most pronounced near the center where
the lines forming the wedges converge. A Moiré pattern is a natural optical
effect when converging lines in the picture are nearly parallel to the scanning lines.
MOL (Maximum Output Level) – In audio tape, that record level which
produces a 3rd harmonic distortion component at 3.0%.
Modulation – a) The imposing of a signal on some type of transmission
or storage medium, such as a radio carrier or magnetic tape. b) The
process (or result) of changing information (audio, video, data, etc.) into
information-carrying signals suitable for transmission and/or recording. In
NTSC-M television transmission, video is modulated onto a picture carrier
using amplitude modulation with a vestigial sideband, and audio is modulated onto
a sound carrier using frequency modulation.
Modulation Noise – a) Noise which results from the agitation of the
oxide molecules through the recording process. The modulation noise level
increases as record level increases and disappears when no signal is
present. b) The noise arising when reproducing a tape which has been
recorded with a given signal, and which is a function of the instantaneous
amplitude of the signal. This is related to DC noise and arises from the
same causes.
Mole Technology – A seamless MPEG-2 concatenation technology
developed by the ATLANTIC project in which an MPEG-2 bitstream enters
a Mole-equipped decoder, and the decoder not only decodes the video,
but the information on how that video was first encoded (motion vectors
and coding mode decisions). This “side information” or “metadata” in an
information bus is synchronized to the video and sent to the Mole-equipped
encoder. The encoder looks at the metadata and knows exactly how to
encode the video. The video is encoded in exactly the same way (so theoretically it has only been encoded once) and maintains quality. If an opaque
bug is inserted in the picture, the encoder only has to decide how the bug
should be encoded (and then both the bug and the video have been theoretically encoded only once). Problems arise with transparent or translucent
bugs, because the video underneath the bug must be encoded, and therefore that video will have to be encoded twice, while the surrounding video
and the bug itself have only been encoded once theoretically. What Mole
cannot do is make the encoding any better. Therefore, the highest quality
of initial encoding is suggested.
Moment of Inertia – A measure of the rotational force required to accelerate or decelerate a reel of tape or other rotating object.
Monitor – a) A TV set, or a TV set specifically designed for closed circuit
viewing (i.e., from a VTR) without the electronic capability to receive
broadcast signals. b) A hardware device that displays the images, windows, and text with which you interact to use the system. It is also called a
video display terminal (VDT). c) Program that controls the operation of a
microcomputer system and allows user to run programs, examine and
modify memory, etc.
Monitor Head – A separate playback head on some tape recorders that
makes it possible to listen to the material on the tape an instant after it
has been recorded and while the recording is still in progress.
Monitor Outputs – A set of outputs from a switcher or video recorder
for the specific purpose of feeding video monitors (although not limited to
that purpose). These include preview, individual M/Es, DSK, and bus rows.
The AVC also provides monitor outputs for RGB signals, aux bus selections,
and switcher status information.
Monitor Power Cable – The cable that connects the monitor to the
workstation to provide power to the monitor. It has a male connector on
one end and a female connector on the other.
Monitor Standardization – Although it is customary to make all subjective judgments of image quality from the reference monitor display, the
infinite possibilities for monitor adjustments have hampered reviewers in
exercising effective program control, and have introduced many disparities
and great confusion. The SMPTE Working Group on Studio Monitors,
S17.28, is completing work on three specifications intended to make the
monitor display follow a known electro-optic transfer function and permit
a reliable evaluation of the program image quality.
Monitor Video Cable – The cable that connects the monitor to the workstation to transmit video signals. It has a large connector on both ends.
Monitor, Control – A control monitor is one employed primarily for decisions on subject matter, composition, and sequences to be selected in
real-time. It is frequently one of several monitors mounted together in
close proximity as in a studio – for example, to display multiple sources
that are to be compared, selected, and combined in editing for immediate,
direct routing to display. The physical arrangements may make it very
difficult to control the surroundings for each monitor, as specified by
SMPTE Working Group on Studio Monitors in Document S17.280 for the
reference monitor. It is nevertheless essential when sequences on several
monitors are being compared and intercut that the monitors match in
luminance and colorimetry.
Monitor, Reference – A reference monitor is one employed for decisions
on image quality. Achieving controlled reproducibility for this application
is the primary objective of the specifications for monitor standardization.
SMPTE Working Group on Studio Monitors, S17.28, has recognized the
great disparity now existing among studio monitors and control monitors,
and has noted the confusing variability among decisions based upon visual
judgments of program quality as evaluated on different monitors. They are
working to identify and recommend specifications for the variables affecting subjective judgments, coming not only from the monitor capabilities,
but also from the adjustment of its controls and the bias introduced by
monitor surround and room illumination.
Mono, Monophonic – Single-channel sound.
Monochrome – Literally single color, usually used to indicate black and
white. There have been monochrome high line rate cameras and displays
for many years. The EIA has standardized rates of up to 1225 scanning
lines per frame. NHK developed a monochrome HDTV system with 2125
scanning lines per frame. Even higher numbers of scanning lines are used
in conjunction with lower frame rates in cathode ray tube scanners used
in printing and in film. These extremely high rates are possible because
monochrome picture tubes have no triads.
Monochrome Signal – A “single color” video signal – usually a black
and white signal but sometimes the luminance portion of a composite or
component color signal.
Monochrome Transmission (Black and White) – The transmission of a
signal wave which represents the brightness values in the picture but not
the color (chrominance) values in the picture.
Monophonic – One sound channel/source/signal. Sometimes called monaural.
Monotonic – A term used in D/A conversion and is used to indicate that
the magnitude of the DAC output voltage increases every time the input
code increases.
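A sketch of the property in code, using made-up measured output levels (a non-decreasing check is shown; a stricter reading of the definition would require every step to increase):

```python
def is_monotonic(levels):
    """True if DAC output never decreases as the input code steps up."""
    return all(b >= a for a, b in zip(levels, levels[1:]))

good = [0.00, 0.12, 0.25, 0.37, 0.50]   # illustrative measured volts
bad  = [0.00, 0.12, 0.10, 0.37, 0.50]   # output dips at the third code
print(is_monotonic(good))  # True
print(is_monotonic(bad))   # False
```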
MooV – The file format used in the QuickTime and QuickTime for Windows
environments for displaying videos. See QuickTime, QuickTime for Windows.
MOPS (Millions of Operations Per Second) – In the case of DVI technology, more MOPS translate to better video quality. Intel’s video processor
can perform multiple video operations per instruction, thus the MOPS
rating is usually greater than the MIPS rating.
Morphing – A technique for making an object change into the shape of another object.
MOS (Metal Oxide Semiconductor) – Integrated circuits made of field
effect transistors. All MOS devices originally used metal gate technology,
but the term is used to describe silicon gate circuits as well.
Mosaic – a) Term used for an ADO effect which is to segmentize a video
signal into rectangles of variable block sizes and aspect ratio. b) An effect
that “blurs” an image by copying pixels into adjacent pixels both horizontally and vertically. This gives the image a blocky appearance, often used to
hide people’s identities on television.
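Definition b) can be sketched directly in Python; the block size and sample image are arbitrary choices:

```python
def mosaic(image, block=2):
    """Pixelate a 2D image (list of rows) by copying the top-left
    pixel of each block x block tile into every pixel of the tile."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(0, h, block):
        for x in range(0, w, block):
            v = image[y][x]                      # representative pixel
            for yy in range(y, min(y + block, h)):
                for xx in range(x, min(x + block, w)):
                    out[yy][xx] = v              # copy into neighbors
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
print(mosaic(img))  # [[1, 1, 3, 3], [1, 1, 3, 3]]
```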
Mosquito Noise – Caused by quantizing errors between adjacent pixels,
as a result of compression. As the scene content varies, quantizing step
sizes change, and the quantizing errors produced manifest themselves as
shimmering black dots, which look like “mosquitoes” and show at random
around objects within a scene.
Most Significant Bit (MSB) – The bit that has the most value in a binary
number or data byte. In written form, this would be the bit on the left. For example:
Binary 1110 = Decimal 14
In this example, the leftmost binary digit, 1, is the most significant bit, here
representing 8. If the MSB in this example were corrupt, the decimal would
not be 14 but 6.
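The arithmetic can be verified in a couple of lines (the 4-bit width matches the example above):

```python
def flip_msb(value, bits=4):
    """Toggle the most significant bit of a bits-wide value."""
    return value ^ (1 << (bits - 1))

n = 0b1110          # binary 1110 = decimal 14
print(n)            # 14
print(flip_msb(n))  # 6: corrupting the MSB (worth 8) turns 14 into 6
```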
Mother – The metal disc produced from mirror images of the Father disc
in the replication process. Mothers are used to make stampers, often
called Sons.
Motherboard – See Backplane.
Motion Adaptive – An ATV scheme that senses motion and changes the
way it functions to avoid or reduce motion artifacts.
Motion Artifacts – a) Picture defects that appear only when there is
motion in the scene. Interlaced scanning has motion artifacts in both the
vertical and horizontal directions. There is a halving of vertical resolution
at certain rates of vertical motion (when the detail in one field appears in
the position of the next field one sixtieth of a second later), and horizontally
moving vertical edges become segmented (reduced in resolution) by the
sequential fields. This is most apparent when a frame of a motion
sequence is frozen and the two fields flash different information. All subsampling ATV schemes have some form of motion artifact, from twinkling
detail to dramatic differences between static and dynamic resolutions. Line
doubling schemes and advanced encoders and decoders can have motion
artifacts, depending on how they are implemented. Techniques for avoiding
motion artifacts include median filtering and motion adaptation or compensation. b) In all temporally-sampled systems (i.e., both photographic and
electronic), realistic motion reproduction is achieved only with sampling
above the Nyquist limit. The subjective response to motion artifacts is complex, influenced by the various degrees of smoothing and strobing affecting
temporal and spatial resolution; integration and lag in the sensing, recording, and display elements; sampling geometry and scanning patterns; shutter transmission ratio; perceptual tolerances, etc. (Motion appears “normal”
only when significant frame-to-frame displacement occurs at less than
half the frame rate; i.e., “significant motion” distributed over at least two
frames.) Motion artifacts most frequently observed have their origins in the
following: image components with velocity functions extending beyond the
Nyquist limit (such as rotating, spoked wheels), motion samples with such
short exposures there is noticeable frame-to-frame separation of sharply
defined images (such as synchronized flash illumination), asynchronous
sampling of intermittent motion (such as frame-rate conversions). A considerable number of motion artifacts appear so frequently as to be accepted
by most viewers.
Motion Compensation (MC) – In MPEG, the use of motion vectors to
improve the efficiency of the prediction of pel values. The prediction uses
motion vectors to provide offsets into the past and/or future reference
pictures containing previously decoded pel values that are used to form
the prediction error signal. The book Motion Analysis for Image Sequence
Coding by G. Tziritas and C. Labit documents the technical advances made
through the years in dealing with motion in image sequences.
Motion Effect – An effect that speeds up or slows down the presentation
of media in a track.
Motion Estimation (ME) – The process of determining changes in video
object positions from one video frame to the next. Object position determination is used extensively in high compression applications. For instance
if the background of a scene does not change but the position of an object
in the foreground does, it is advantageous to just transmit the new position
of the object rather than the background or foreground. This technology is
used in MPEG, H.261, and H.263 compression.
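A common concrete form of motion estimation is exhaustive block matching with a sum-of-absolute-differences (SAD) cost. This small sketch is illustrative only, not any particular codec's search:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size 2D blocks."""
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def block(frame, y, x, size):
    """Extract a size x size block with top-left corner at (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(prev, cur, y, x, size=2, search=1):
    """Find (dy, dx) so the block at (y, x) in cur best matches prev."""
    target = block(cur, y, x, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if 0 <= py <= len(prev) - size and 0 <= px <= len(prev[0]) - size:
                cost = sad(target, block(prev, py, px, size))
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]

prev = [[0, 0, 0, 0],
        [0, 9, 8, 0],
        [0, 7, 6, 0],
        [0, 0, 0, 0]]
cur  = [[9, 8, 0, 0],   # the 2x2 object moved up and left by one pixel
        [7, 6, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(best_motion_vector(prev, cur, 0, 0))  # (1, 1): block came from prev[1][1]
```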
Motion Jitters – Jerky movements in a clip, often caused by gate slip
when film is converted into video.
Motion JPEG – Applications where JPEG compression or decompression
is speeded up to be able to process 25 or 30 frames per second and is
applied real-time to video. Even though a video signal is being processed,
each field is still individually processed.
Motion Path – The movement between keyframes, changed with the Path
soft key. There are five types of paths. BRK (Break) modifies Smooth motion
by decelerating speed to zero at each keyframe (a break), then starting
again. IGN (Ignore) allows selected parameter values to be ignored when
calculating motion path. SMTH (Smooth) provides a curved path between
keyframes. The effect speeds up gradually as it leaves the first keyframe,
and slows down gradually until it reaches the last keyframe. LIN (Linear)
provides a constant rate of change between keyframes, with an abrupt
change at each keyframe. Linear uses the shortest distance between two
points to travel from one keyframe to another. HOLD stops all motion between keyframes. The result of the motion shows when the next keyframe
appears. HOLD looks like a video “cut”, from one keyframe to the next.
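The LIN and HOLD paths can be sketched as interpolation functions between two keyframe values; the function names and the [0, 1] timing parameter are illustrative, not part of any switcher's actual interface:

```python
def lin(a, b, t):
    """LIN path: constant rate of change from keyframe a to b, t in [0, 1]."""
    return a + (b - a) * t

def hold(a, b, t):
    """HOLD path: stay on keyframe a until the next keyframe appears."""
    return a if t < 1.0 else b

print(lin(0.0, 10.0, 0.5))   # 5.0: halfway along a linear path
print(hold(0.0, 10.0, 0.5))  # 0.0: motion held, like a cut at t = 1
```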
Motion Path Velocity – A successful motion path has two components:
geometry and timing. The geometry is created by choosing keyframes. The
timing of the path is more complex, and can be affected by the geometry.
Intuitively, the timing of a path is simply the speed of motion of the object
as it moves along the path. Since PictureMaker starts with keyframes
and creates in-between positions, PictureMaker determines the velocity by
deciding how many in-betweens to place between each keyframe (and
where to place them). Several methods can be used to determine velocity
along the path. a) Place frames evenly between all keyframes. Closely
placed keyframes will correspond with slow-moving parts of the path.
b) Specify a relative velocity at selected keyframes, and specify correspondences between any keyframe and a frame in the final animation.
Motion Prediction – The process that reduces redundancy in a video
signal by measuring an object’s motion at the encoder and sending a
motion vector to the decoder in place of the encoded object.
Motion Resolution – See Dynamic Resolution.
Motion Stabilization – A feature used to eliminate the wobble in the
video taken with a hand-held camera. The After Effects Production Bundle
includes a motion stabilizer.
Motion Surprise – A major shift in the quality of a television picture in the
presence of motion that is so jarring to the viewer that the system might
actually appear better if it had continuously lower quality, rather than jumping from high-quality static image to a lower quality dynamic one.
Motion Tracking – The process of generating position information that
describes motion in a clip, for example, the changing position of a moving
vehicle. You use motion tracking data to control the movement of effects.
See also Stabilization.
Motion Vector (MV) – a) A two-dimensional vector used for motion
compensation that provides an offset from the coordinate position in the
current picture to the coordinates in a reference picture. b) A pair of
numbers which represent the vertical and horizontal displacement of a
region of a reference picture for prediction.
Motion Vector for Shape – A motion vector used for motion compensation of shape.
Motion Video – Video that displays real motion by displaying a sequence
of images (frames) rapidly enough that the eyes see the image as a
continuously moving picture.
Moto DV Playback – An edit mode in Premiere, specifically for Moto DV
studio users, that allows video to be streamed out of a Moto DV captured file.
Mount – To make a file system that is stored on a local or remote disk
resource accessible from a specific directory on a workstation.
Mount Point – The directory on a workstation from which you access
information that is stored on a local or remote disk resource.
Mouse – A hardware device that you use to communicate with windows
and icons. You move the mouse to move the cursor on the screen, and you
press its buttons to initiate operations.
Mouse Pad – For an optical mouse, this is the rectangular, metallic
surface that reads the movements of the mouse. For a mechanical mouse,
this is a clean, soft rectangular surface that makes the mouse’s track ball
roll efficiently.
MOV – The file extension used by MooV format files on Windows.
See MooV.
Movie-2 Bus (or Movie-2 Bus Connector) – Over the top connector
used for high-speed data transfer. These two terms refer to the assembled
component, which consists of a printed circuit board (backplane) with
attached connectors.
Moving Dots – See Chroma Crawl.
Moving Picture Experts Group (MPEG) – An international group of
industry experts set up to standardize compressed moving pictures and
audio. The first release of the MPEG standard was called MPEG-1 (ISO/IEC 11172).
Moving Picture Experts Group 1 (MPEG-1) – ISO/IEC CD 11172 is the
first of the standards designed for handling highly compressed moving
images in real-time. It accepts periodically chosen frames to be compressed as in JPEG-1, predicts the content of intervening frames, and
encodes only the difference between the actual and the prediction. Audio
is compressed synchronously. The encoder includes a decoder section in
order to generate and verify the predictions. At the display, a much simpler
decoder becomes possible. MPEG-1 is optimized for a data rate of up
to 1.5 Mbps. MPEG expects to develop a series of compression codes,
optimized for higher bit rates.
Moving Picture Experts Group 2 (MPEG-2) – MPEG-2 expands the
MPEG-1 standard to cover a wider range of applications.
Moving Picture Experts Group 3 (MPEG-3) – MPEG 3 was originally
intended for HDTV applications but has since been incorporated into MPEG-2.
Moving Picture Experts Group 4 (MPEG-4) – The goal of MPEG-4 is to
establish a universal and efficient coding for different forms of audio-visual
data, called audio-visual objects. Coding tools for audio-visual objects are
being developed to support various functionality’s, such as object-based
interactivity and scalability. The syntax of the audio-visual objects is being
developed to allow for description of coded objects and to describe how
they were coded. This information can then be downloaded into a decoder.
Moving-Coil – A microphone whose generating element is a coil which
moves within a magnetic gap in response to sound pressure on the
diaphragm attached to it, rather like a small loudspeaker in reverse. The
most common type of Dynamic Microphone.
MP (Multi-Link Point-to-Point Protocol)
MP@HL (Main Profile at High Level) – Widely used shorthand notation
for a specific quality and resolution of MPEG: Main Profile (4:2:0 quality),
High Level (HD resolution).
MP@ML (Main Profile at Main Level) – MPEG-2 specifies different
degrees of compression vs. quality. Of these, Main Profile at Main Level is
the most commonly used.
MP3 – A commonly used term for the MPEG-1 Layer 3 (ISO/IEC 11172-3)
or MPEG-2 Layer 3 (ISO/IEC 13818-3) audio compression formats. MPEG-1
Layer 3 is up to two channels of audio and MPEG-2 Layer 3 is up to 5.1
channels of audio. MP3 is not the same as MPEG-3.
MPC (Multimedia PC) – A specification developed by the Multimedia
Council. It defines the minimum platform capable of running multimedia
software. PCs carrying the MPC logo will be able to run any software that
also displays the MPC logo.
MPCD (Minimum Perceptible Color Difference) – This is a unit of
measure, developed by the CIE, to define the change in light and color
required to be just noticeable to the human eye. The observer in this
MPCD unit is defined as “a trained observer” because there are differences
in the way each of us perceives light.
MPE – See Multiprotocol Encapsulation.
MPEG – A standard for compressing moving pictures. MPEG uses the similarity between frames to create a sequence of I, B and P frames. Only the I
frame contains all the picture data. The B and P frames only contain information relating to changes since the last I frame. MPEG-1 uses a data rate
of 1.2 Mbps, the speed of CD-ROM. MPEG-2 supports much higher quality
with a data rate (also called bit rate) of from 1.2 to 15 Mbps. MPEG-2 is
the format most favored for video on demand, DVD, and is the format for
transmitting digital television.
MPEG Audio – Audio compressed according to the MPEG perceptual
encoding system. MPEG-1 audio provides two channels, which can be in
Dolby Surround format. MPEG-2 audio adds data to provide discrete multichannel audio. Stereo MPEG audio is the mandatory audio compression
system for 625/50 (PAL/SECAM) DVD-Video.
MPEG Splicing – The ability to cut into an MPEG bitstream for switching
and editing, regardless of frame types (I, B, P).
MPEG TS (MPEG Transport Stream) – The MPEG transport stream is an
extremely complex structure using interlinked tables and coded identifiers
to separate the programs and the elementary streams within the programs.
Within each elementary stream, there is a complex structure, allowing a
decoder to distinguish between, for example, vectors, coefficients and
quantization tables.
MPEG Video – Video compressed according to the MPEG encoding
system. MPEG-1 is typically used for low data rate video such as on a
Video CD. MPEG-2 is used for higher-quality video, especially interlaced
video, such as on DVD or HDTV.
MPEG-1 – See Moving Picture Experts Group 1.
MPEG-2 – See Moving Picture Experts Group 2.
MPEG-3 – See Moving Picture Experts Group 3.
MPEG-4 – See Moving Picture Experts Group 4.
MPEG-4 Class – MPEG-4 standardizes a number of pre-defined classes.
This set of classes is called the MPEG-4 Standard Class Library. The root
of MPEG-4 classes is called MPEG-4 Object. In Flexible Mode, an MPEG-4
Terminal, based on this library, will be able to produce or use new encoder-defined classes and instantiate objects according to these class definitions.
Graphical methods to represent this hierarchy are commonly used; the
OMT notation has been chosen within the context of MPEG-4 Systems.
MPEG-4 Object – The root of MPEG-4 classes.
MPEG-4 Systems – The “Systems” part of the MPEG-4 standard in
charge of the Multiplex Layer, the Composition Layer and the Flexibility Layer.
MPEG-4 Systems Description Language (MSDL) – The language(s)
defined by MPEG-4 Systems for the purpose of the Flexibility Layer.
MPEG-4 Terminal – An MPEG-4 Terminal is a system that allows
presentation of an interactive audiovisual scene from coded audiovisual
information. It can be either a standalone application, or part of a multimedia terminal that needs to deal with MPEG-4 coded audiovisual information,
among others.
MPEG-7 – MPEG-7 is a multimedia content (images, graphics, 3D models,
audio, speech, video) representation standard for information searching.
Final specification is expected in the year 2000.
MPEG-J – A set of Java application program interfaces. It also sets the
rules for delivering Java into a bitstream and it specifies what happens at
the receiving end.
MS Stereo – Exploitation of stereo redundancy in audio programs based
on coding the sum and difference signal instead of the left and right channels.
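The sum/difference coding described here is mid/side matrixing; a sketch (normalizing the sum and difference by 2 is one common convention):

```python
def encode_ms(left, right):
    """Matrix L/R samples into sum (mid) and difference (side) signals."""
    mid  = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def decode_ms(mid, side):
    """Recover left and right from the sum and difference signals."""
    left  = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

L, R = [1.0, 0.5], [1.0, -0.5]
mid, side = encode_ms(L, R)
print(mid, side)             # [1.0, 0.0] [0.0, 0.5]: correlated content
                             # concentrates in mid, so side is cheap to code
print(decode_ms(mid, side))  # ([1.0, 0.5], [1.0, -0.5]) -- lossless round trip
```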
MSB – See Most Significant Bit.
MSDL (MPEG-4 Syntactic or Systems Description Language) –
An extensible description language defined in MPEG-4 that allows for
selection, description and downloading of tools, algorithms and profiles.
MSI (Medium Scale Integration) – Between 100 and 3,000 transistors
on a chip.
MSO (Multiple System Operator) – A major cable TV organization that
has franchises in multiple locations.
MTBF (Mean Time Between Failure) – The average time a component
works without failure. It is the number of failures divided by the hours
under observation.
MTS (Multichannel Television Sound) – A generic name for various
stereo audio implementations, such as BTSC and Zweiton. Used in conjunction with NTSC/525. Consists of two independent carriers each carrying a discrete channel. One channel provides stereo sound by providing
left/right channel difference signals relative to the transmitted mono audio
track. The second carrier carries the Secondary Audio Program (SAP), which
is used for a second language or a descriptive commentary for the blind.
Uses a technique based on the dBx noise reduction to improve the frequency response of the audio channel.
MTTR (Mean Time to Repair) – The average time it takes to repair a
failed component.
MTU (Multi-Port Transceiver Unit)
Mu-Law – The PCM coding and companding standard for digital voice communications that is used in North America and Japan for analog-to-digital conversion.
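A minimal sketch of the mu-law companding curve (mu = 255 is the value used in North America and Japan); the function names are illustrative, not from any standard library:

```python
import math

MU = 255  # companding constant used in North America and Japan

def mu_law_compress(x, mu=MU):
    """Map a normalized sample in [-1, 1] through the mu-law curve."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=MU):
    """Inverse of mu_law_compress: recover the linear sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)
```

Compressing boosts low-level samples before quantization, so quiet passages survive the coarse steps of an 8-bit channel; expansion at the receiver restores the original scale.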
Multiangle – A DVD-video program containing multiple angles allowing
different views of a scene to be selected during playback.
Multiburst – Useful for quick approximations of the system’s frequency
response and can be used as an in-service VIT signal. The multiburst
waveform is shown in the figure below.
(Figure: multiburst waveform with frequency packets at 2.0, 3.0, 3.58 and 4.1 MHz.)
MPI (MPEG Physical Interface)
MPP (Mix to Preset Pattern) – See Preset Pattern.
MPTS (Multi-Port Presentation Time Stamps)
MPEG 4:2:2 – Also referred to as Studio MPEG, Professional MPEG and
422P@ML. Sony’s Betacam SX is based on MPEG 4:2:2.
MPU (Microprocessing Unit) – See Microprocessor.
www.tektronix.com/video_audio 153
Video Terms and Acronyms
Multicamera – A production or scene that is shot and recorded from more
than one camera simultaneously.
Multichannel – Multiple channels of audio, usually containing different
signals for different speakers in order to create a surround-sound effect.
MultiCrypt – Describes the simultaneous operation of several
conditional access systems.
Multifrequency Monitor – A monitor that accommodates a variety of
horizontal and vertical synchronization frequencies. This monitor type
accepts inputs from many different display adapters, and is typically
capable of either analog or digital input.
Multi-Language Support – A DVD has the ability to store 8 audio
streams. This is different from the number of channels each stream might
have. Thus, each of the streams might contain a multi-channel audio
program in a separate language.
Multi-Layer Effects – A generic term for a mix/effects system that allows multiple video images to be combined into a composite image.
Multilingual – A presentation of dialog in more than one language.
Multimedia – A somewhat ambiguous term that describes the ability
to combine audio, video and other information with graphics, control,
storage and other features of computer-based systems. Applications
include presentation, editing, interactive learning, games and conferencing.
Current multimedia systems also use mass storage computer devices
such as CD-ROM.
Multimedia Computing – Refers to the delivery of multimedia information via computers.
Multimedia Hypermedia Expert Group (MHEG) – MHEG is another working group under the same ISO/IEC subcommittee that features MPEG: Working Group 12 (WG 12) of Subcommittee 29 (SC 29) of the joint ISO and IEC Technical Committee 1 (JTC 1). The
ISO/IEC standards produced have number 13522. MHEG targets coding of
multimedia and hypermedia information, and defines an interchange format
for composite multimedia contents. The defined MHEG format encapsulates
a multimedia document, so to speak, as communication takes place in a
specific data structure. Despite all the talk about multimedia, not very much is said and written about MHEG, which seems odd given the realm of MHEG. The present market significance of MHEG is very low, probably due
to the high number of proprietary standards for audio visual representation
in multimedia PC environments.
Multipath Distortion – A form of interference caused by signal reflections. Signals that are reflected more take a longer path to reach the
receiver than those that are reflected less. The receiver will synchronize to
the strongest signal, with the weaker signals traveling via different paths
causing ghostly images superimposed on the main image. Since many ATV
schemes offer increased horizontal resolution, ghosts can have a more
deleterious effect on them than on ordinary NTSC signals. There have been
many demonstrations of ghost canceling/eliminating systems and robust
transmission systems over the years. It is probable that these will have to
be used for HDTV.
Multipass Encoding – True multipass encoding is currently available only
for WM8 and MPEG-2. An encoder supporting multipass will, in a first
pass, analyze the video stream to be encoded and write down a log about
everything it encounters. Let’s assume there is a short clip that starts out
in a dialog scene where there are few cuts and the camera remains static.
Then it leads over to a karate fight with lots of fast cuts and a lot of action
(people flying through the air, kicking, punching, etc.). In regular CBR encoding, every second gets more or less the same bitrate (it is hard to stay 100% CBR), whereas in multipass VBR mode the encoder will use bitrate according to its knowledge of the video stream, i.e., the dialog part gets less of the available bitrate and the fighting scene gets allotted more. The more passes, the more refined the bitrate distribution will be. In single pass VBR, the encoder has to base its decisions only on what it has previously encoded.
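The second-pass allocation idea can be sketched as follows; the per-scene complexity scores stand in for the first-pass log, and the names are illustrative, not any real encoder's API:

```python
def allocate_bitrate(scene_complexities, total_bits):
    """Second pass of a two-pass VBR encode: split a fixed bit budget
    across scenes in proportion to the complexity measured in pass one."""
    total = sum(scene_complexities)
    return [total_bits * c / total for c in scene_complexities]

# Quiet dialog scene (complexity 1.0) vs. fast karate fight (complexity 3.0):
dialog_bits, fight_bits = allocate_bitrate([1.0, 3.0], total_bits=8_000_000)
```

The fight scene receives three times the bits of the dialog scene while the overall file size stays fixed, which is exactly what distinguishes two-pass VBR from CBR.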
Multiplane Animation – Multiplane animation refers to a type of cel
animation where individual cels are superimposed using the painters
algorithm, and their motion relative to each other is controlled. Here, the
word “plane” and cel are interchangeable.
Multiple Blanking Lines – Evidenced by a thickening of the blanking line
trace or by several distinct blanking lines as viewed on an oscilloscope.
May be caused by hum.
Multiple B-Roll – A duplicate of the original source tape, created so that
overlays can be merged onto one source tape.
Multiple System Operator (MSO) – A cable TV service provider that
operates more than one cable television system.
Multiple-FIFO Architecture – A display controller architecture characterized by having multiple FIFOs or write buffers. There is typically one FIFO or write buffer at the CPU interface, and one or more FIFOs in the display pipeline.
Multiplex – a) To take, or be capable of taking, several different signals
and send them through one source. b) To combine multiple signals, usually
in such a way that they can be separated again later. There are three major
multiplexing techniques. Frequency division multiplexing (FDM) assigns each signal a different frequency. This is how radio and television stations in the same metropolitan area can all transmit through the same air space and be individually tuned in. Time division multiplexing (TDM) assigns different signals different time slots. Different programs can be broadcast over the same channel using this technique. More technically, the MACs use TDM for luminance and chrominance. Space or path division multiplexing allows different television stations in different cities to use the same channel at the
same time or different people to talk on different telephones in the same
building at the same time. c) A stream of all the digital data carrying one
or more services within a single physical channel. d) To transmit two or
more signals at the same time or on the same carrier frequency. e) To
combine two or more electrical signals into a single, composite signal.
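A toy illustration of the time-division idea in sense (b), interleaving samples from several signals into one stream and separating them again (names are ours, for illustration only):

```python
def tdm_mux(signals):
    """Time-division multiplex: interleave equal-length signals
    sample by sample into a single stream."""
    return [sig[i] for i in range(len(signals[0])) for sig in signals]

def tdm_demux(stream, n_signals):
    """Recover the original signals by taking every n-th sample,
    starting at each slot offset."""
    return [stream[i::n_signals] for i in range(n_signals)]
```

Each input owns a fixed time slot in the combined stream, so the receiver can separate the signals again purely by position.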
Multiplex Code Field (MC Field) – A field in the TransMux/FlexMux-PDU
header which specifies, by reference to a Multiplex Table Entry, the logical
channel where each byte in the information field belongs.
Multiplex Layer (MUX Layer) – In its broad sense, the combination
of the Adaptation Layer, the FlexMux Layer, the Protection Layer and the
TransMux Layer. In a more strict interpretation, the FlexMux Layer or the TransMux Layer alone.
Multiplex Layer Protocol Data Unit (MUX-PDU) – An information unit
exchanged between peer Multiplex Layer entities.
Multipulse – A variation of the sine-squared pulses. Multipulse allows for
the simultaneous evaluation of group-delay errors and amplitude errors at
the various frequencies. Distortions show up in multipulse as distortions of
the baseline. Refer to the figure and to the Sine-Squared pulse discussion.
(Figure: multipulse waveform with 12.5T pulses at frequencies up to 4.2 MHz.)
Multiplex Layer Service Data Unit (MUX-SDU) – A logical information
unit whose integrity is preserved in transfer from one Multiplex Layer User
to the peer Multiplex Layer User.
Multiplex Layer User (MUX-User) – An entity which makes use of the
services of the MUX Layer.
Multiplex Table – A table which specifies the multiplexing pattern for the
information field of a MUX-PDU.
Multiplexed Analog Component – See MAC.
Multiplexer (MUX) – Device for combining two or more electrical signals
into a single, composite signal.
Multiplexing – Process of transmitting more than one signal via a single
link. The most common technique used in microprocessor systems is
time division multiplexing, in which one signal line is used for different
information at different times.
Multiplier – A control circuit in which a non-video control signal is faded
down as the other is faded up.
Multipoint Conferencing Server (MCS) – A hardware or software H.323
device that allows multiple video conferencing (or audio or data) users to
connect together. Without an MCS typically only point-to-point conferences
can take place. Commonly supports voice activated switching, where
whoever is talking is broadcast to all users, but new systems support
“Hollywood Squares”, where multiple windows show each participant. ITU-T
standard H.231 describes the standard way of doing this. Many current
systems only support H.320 (ISDN) but many vendors are working to
upgrade their products to support H.323 (LAN, Internet) as well. In the
H.320 space, this functionality is referred to as a multipoint control unit
(MCU). Sometimes these terms are used interchangeably, although they
refer to somewhat different implementations.
Multipoint Control Unit (MCU) – A switching device commonly used to
switch and control a video conferencing network allowing multiple sites to
conference simultaneously.
Multipoint Controller (MC) – Used for conference control of three or
more terminals. It allocates bandwidth.
Multiprotocol Encapsulation (MPE) – The data broadcast specification
profile for multiprotocol encapsulation supports data broadcast services
that require the transmission of datagrams of communication protocols via
DVB compliant broadcast networks. The transmission of datagrams according to the multiprotocol encapsulation specification is done by encapsulating the datagrams in DSM-CC sections, which are compliant with the MPEG-2 private section format.
MultiRead – A standard developed by the Yokohama group, a consortium
of companies attempting to ensure that new CD and DVD hardware can
read all CD formats.
Multi-Scan Monitor – A monitor (also referred to as multi-sync or
multi-frequency) which can synchronize to different video signal sync
frequencies, allowing its use with various computer video outputs. See
Analog Monitor.
Multisession – A technique in write-once recording technology that allows
additional data to be appended after data written in an earlier session.
Multi-Standard – TV sets, VTRs, etc., that are designed to work using
more than one technical standard; i.e., a VTR which can record both NTSC
and PAL signals/recordings is a multi-standard machine.
Multitrack – A magnetic tape or film recorder capable of recording more
than one track at a time.
Multitrack Tape – A piece of magnetic tape which can be used to store
two or more discrete signals.
Munsell Chroma – a) Illuminating Engineering: An index of perceived chroma defined in terms of luminance (Y) and chromaticity coordinates (x,y) for CIE Standard Illuminant C and the
CIE Standard Observer. b) Television: The dimension of the Munsell system
of color that corresponds most closely to saturation. Note: Chroma is frequently used, particularly in English works, as the equivalent of saturation.
Munsell Color System – A system of surface-color specifications based
on perceptually uniform color scales for the three variables. Munsell hue,
Munsell value, and Munsell chroma. For an observer of normal color vision,
adapted to daylight and viewing the specimen when illuminated by daylight
and surrounded with a middle gray to white background, the Munsell hue,
value, and chroma of the color correlate well with the hue, lightness, and
perceived chroma.
MUSE (Multiple Sub-Nyquist Sampling Encoding) – a) 16:9 aspect
ratio, high definition, widescreen television being proposed in Japan.
b) A term originally used for a transmission scheme developed by NHK
specifically for DBS transmission of HDTV. MUSE has since been extended
to a family of ATV transmission schemes. MUSE, as it was originally developed, is a form of MAC. Recent versions of MUSE (MUSE-6 and MUSE-9)
are said to be receiver-compatible and, as such, cannot employ MAC
techniques. The sub-Nyquist part of the name indicates that MUSE is a
sub-sampling system and, as such, is subject to motion artifacts. While
it is one of the oldest ATV transmission schemes still considered viable,
MUSE is only four years old.
Music and Effects Track(s) – Music and effects audio without video.
Can be on one track, on different tracks on one piece of film or tape, or on
different tapes, which are combined during an audio “track mix” session.
Sometimes abbreviated M&E.
MUSE-6 – A family of three versions of an ATV transmission scheme said
to be both receiver-compatible and channel-compatible. Since the original
MUSE schemes are neither, there is little similarity between them, other
than the use of sub-sampling. The differences between the three versions
relate to how the wide aspect ratio is handled and what techniques are
used for augmentation in an ATV set. Two versions of MUSE-6 use the
letterbox technique for aspect ratio accommodation and both of these use
blanking stuffing in the expanded VBI area for vertical resolution enhancement. The differences between the two versions relate to the duration of
the sub-sampling sequence (one frame or two). The third uses the truncation technique for aspect ratio accommodation, sending the side panels
stuffed into the existing VBI and HBI. Additional horizontal detail is transmitted via two-frame sub-sampling.
MUX – See Multiplexer.
MUSE-9 – A family of three versions of an ATV transmission scheme said
to be receiver-compatible and utilizing a 3 MHz augmentation channel.
The three versions are very similar to the three versions of MUSE-6, except
that the version using the truncation method sends the wide-screen panels
on the augmentation channel rather than stuffing them into the HBI and
the VBI. There are two classes of the three versions of MUSE-9, one with
a contiguous augmentation channel and one without. The one without is
said to be somewhat inferior in quality to the one with.
MUSE-E – MUSE optimized for emission (i.e., broadcasting) rather than
transmission (i.e., satellite distribution). It is a non-receiver-compatible,
non-channel-compatible scheme occupying 8.1 MHz of base bandwidth
and requiring four fields to build up a full-resolution picture. Thus, it
requires motion compensation (and retains some motion artifacts). It offers
four channels of high-quality digital audio. It has been tested in the
Washington, DC area.
MUSE-T – MUSE optimized for transmission (via satellite) rather than
emission (via terrestrial broadcasting). It occupies twice the bandwidth of
MUSE-E (16.2 MHz), but is otherwise quite similar.
MUSICAM (Masking Pattern Adapted Universal Sub-Band Integrated
Coding and Multiplexing) – Compression method for audio coding.
Must Carry – Legal requirement that cable operators carry local broadcast
signals. Cable systems with 12 or fewer channels must carry at least three
broadcast signals; systems with 12 or more channels must carry up to
one-third of their capacity; systems with 300 or fewer subscribers are
exempt. The 1992 Cable Act requires a broadcast station to waive must-carry rights if it chooses to negotiate retransmission compensation (see Retransmission Consent).
Mux Rate – Defined by MPEG-2 as the combined rate of all video and
audio elementary stream packets common to one program or multi-program stream. The rate of a stream is set based upon a user selection, by
the quality of the program (i.e., constant quality variable rate), or by the
symbol rate required from an RF transponder. This rate also includes the
VBI and sub-picture private stream data, which MPEG treats as a private
stream type. Mux rate is always specified as 10.08 Mbps because this is
the rate at which user data arrives into the track buffer.
MVDS (Multi-Point Video Distribution System)
MXF (Material Exchange Format) – An object-based subset of AAF that is on the verge of becoming an SMPTE standard. MXF was designed for less
complex (less vertically rich) metadata applications, such as news editing
and video streaming from servers. Because of its flatter metadata structure, it is better suited to be used as a metadata wrapper within a video
signal or a TCP/IP stream. It offers performance benefits over the more
complex AAF file structure because of its streamable nature.
MXF DMS-1 – The MXF development community has been working on
a specific dialect for Descriptive Metadata, called MXF DMS-1, which is
being designed to describe people, places, times, and production billing.
Mylar – A registered trademark of E.I. duPont de Nemours & Co.,
designating their polyester film.
NAB (National Association of Broadcasters) – An association which
has standardized the equalization used in recording and reproducing. This
is a station owner and/or operator’s trade association. NAB is also a participant in ATV testing and standardization work, and a charter member of
ATSC. Though not a proponent of any particular ATV system, NAB lobbies
for the interests of broadcasting as a delivery mechanism and has published some of the least biased information on the subject.
NAB Curves, NAB Equalization – Standard playback equalization curves for various tape speeds, developed by the National Association of Broadcasters.
NAB Reel, NAB Hub – Reels and hubs used in professional recording,
having a large center hole and usually an outer diameter of 10-1/2”.
Native BIFS Node – A Binary Format for Scenes (BIFS) node which is
introduced and specified within the Final Committee Draft of International
Standard as opposed to non-native BIFS node, which is a node referenced
from ISO/IEC 14772-1.
NABET (National Association of Broadcast Employees and
Technicians) – NABET is a union of technicians that supplies members
for many videotape, live and film productions.
Native Resolution – The resolution at which the video file was captured.
NABTS – See North American Broadcast Teletext Specification.
Navigation Data – In DVD-Video there are five types of navigation data:
Video Manager Information (VMGI), Video Title Set Information (VTSI),
Program Chain Information (PGCI), Presentation Control Information (PCI)
and Data Search Information (DSI).
Nagra – A brand of audio tape recorder using 1/4” wide audio tape
extensively used for studio and location separate audio recording.
NAM – See Non-Additive Mix.
NANBA (North American National Broadcasters Association)
Nanosecond – One billionth of a second: 1 x 10^-9 or 0.000000001 second.
NAP (North American Philips) – Philips Laboratories developed the
HDS-NA ATV scheme and was among the first to suggest advanced
pre-combing. See also PCEC.
Narrow MUSE – An NHK-proposed ATV scheme very similar to MUSE
(and potentially able to use the same decoder) but fitting within a single,
6 MHz transmission channel. Unlike MUSE-6 and MUSE-9, narrow MUSE
is not receiver-compatible.
Narrowband – Relatively restricted in bandwidth.
Narrowband ISDN (N-ISDN) – Telecommunications at 1.5 Mbps on
copper wire.
Narrowcasting – Broadcasting to a small audience.
National Television System Committee (NTSC) – a) The organization
that formulated the “NTSC” system. Usually taken to mean the NTSC color
television system itself, or its interconnect standards. NTSC is the television
standard currently in use in the U.S., Canada and Japan. NTSC image format is 4:3 aspect ratio, 525 lines, 60 Hz and 4 MHz video bandwidth with
a total 6 MHz of video channel width. NTSC uses YIQ. NTSC-1 was set in
1948. It increased the number of scanning lines from 441 to 525, and
replaced AM sound with FM. b) The name of two standardization groups,
the first of which established the 525 scanning-line-per-frame/30 frame-per-second standard and the second of which established the color television system currently used in the U.S.; also the common name of the
NTSC-established color system. NTSC is used throughout North America
and Central America, except for the French islands of St. Pierre and
Miquelon. It is also used in most of the Caribbean and in parts of South America, Asia, and the Pacific. It is also broadcast at U.S. military installations throughout the world and at some oil facilities in the Middle East. Barbados was the only country in the world to transmit NTSC color on a non-525-line system; they have since switched to 525 lines. Brazil remains the only 525-line country to transmit color TV that is not NTSC; their system is called PAL-M. M is the CCIR designation for 525-line/30 frame television. See also M.
NAVA (National Audio-Visual Association) – A trade association for
audio-visual dealers, manufacturers and producers.
Navigation Timer – In DVD-Video, a system timer used during navigation.
NBC – Television network that was an original proponent of the ACTV ATV
schemes. NBC was also the first network to announce its intention to shift
from NTSC entirely to CAV recording equipment.
NB (National Body) – Responsible for developing national positions for
international voting.
NBC (Non-Backwards Compatible)
NCTA (National Cable Television Association) – This is the primary
cable TV owner and/or operator’s trade association. NCTA is performing
similar roles to NAB in ATV research and lobbying, with an emphasis
on CATV, rather than broadcasting, of course, and is a charter member
of ATSC.
NDA (Non-Disclosure Agreement) – An agreement signed between
two parties that have to disclose confidential information to each other in
order to do business. In general, the NDA states why the information is
being divulged and stipulates that it cannot be used for any other purpose.
NDAs are signed for a myriad of reasons including when source code is
handed to another party for modification or when a new product under
development is being reviewed by the press, a prospective customer or
other party.
NE (Network Element) – In general, an NE is a combination hardware
and software system that is designed primarily to perform a telecommunications service function. For example, an NE is the part of the network
equipment where a transport entity (such as a line, a path, or a section)
is terminated and monitored. As defined by wavelength routing, an NE is
the originating, transient, or terminating node of a wavelength path.
Near Instantaneous Companded Audio Multiplex (NICAM) –
a) A digital audio coding system originally developed by the BBC for point
to point links. A later development, NICAM 728 is used in several European
countries to provide stereo digital audio to home television receivers.
b) A digital two-channel audio transmissions with sub-code selection of
bi-lingual operation. Stereo digital signals with specifications approaching
those of compact disc are possible. NICAM uses a 14 bit sample at a 32
kHz sampling rate which produces a data stream of 728 kbits/sec.
NexTView – An electronic program guide (EPG) based on ETSI ETS 300 707.
Negative – a) A film element in which the light and dark areas are
reversed compared to the original scene; the opposite of a positive.
b) A film stock designed to capture an image in the form of a negative.
Nibble – Four bits or half a byte. A group of four contiguous bits. A nibble
can take any of 16 (2^4) values.
Negative Effect – Special effect in which either blacks and whites are
reversed or colors are inverted. For example, red becomes a blue-green,
green becomes purple, etc. The Video Equalizer and Digital Video Mixer
includes a negative effect which can be used to generate electronic color
slides from color negatives. An electronic color filter can be used for fine
adjustment of the hues.
Negative Image – Refers to a picture signal having a polarity which is
opposite to normal polarity and which results in a picture in which the
white areas appear as black and vice versa.
Negative Logic – The logic false state is represented by the more positive
voltage in the system, and the logic true state is represented by the more
negative voltage in the system. For TTL, 0 becomes +2.4 volts or greater,
and 1 becomes +0.4 volts or less.
Nested – Subroutine that is called by another subroutine or a loop within
a larger loop is said to be nested.
NET (National Educational Television) – A public TV network, predecessor of PBS.
Network – a) A group of stations connected together for common broadcast or common business purposes; multiple circuits. b) A group of
computers and other devices (such as printers) that can all communicate
with each other electronically to transfer and share information. c) A collection of MPEG-2 Transport Stream (TS) multiplexes transmitted on a
single delivery system, e.g., all digital channels on a specific cable system.
Network Administrator – The individual responsible for setting up,
maintaining, and troubleshooting the network, and for supplying setup
information to system administrators of each system.
NFS™ (Network File System) – A distributed file system developed by
Sun that enables a set of computers to cooperatively access each other’s
files transparently.
NG – An often-used term meaning “no good”.
NHK – See Nippon Hoso Kyokai.
NiCad (Nickel Cadmium) – A common rechargeable video camera battery type.
NICAM – See Near Instantaneous Companded Audio Multiplex.
NICAM 728 – A technique of implementing digital stereo audio for PAL
video using another audio subcarrier. The bit rate is 728 kbps. It is
discussed in BS.707 and ETSI EN 300 163. NICAM 728 is also used to
transmit non-audio digital data in China.
Nighttime Mode – Name for the Dolby Digital dynamic range compression feature that allows low-volume nighttime listening without losing legibility of dialog.
Nippon Hoso Kyokai (NHK) – The Japan Broadcasting Corporation,
principal researchers of HDTV through the 1970s, developers of the
1125 scanning-line system for HDEP and of all the MUSE systems for ATV transmission.
Nippon Television – See NTV.
NIST (National Institute of Standards and Technology) – This is the
North American regional forum at which OSI implementation agreements
are decided. It is equivalent to EWOS in Europe and AOW in the Pacific.
NIT (Network Information Table) – The NIT conveys information relating
to the physical organization of the multiplex, transport streams carried via
a given network, and the characteristics of the network itself. Transport
streams are identified by the combination of an original network ID and a
transport stream ID in the NIT.
Nits – The metric unit of luminance (1 nit = 1 candela per square meter). 1 foot lambert = 3.425 nits.
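Using the conversion factor quoted in this entry (a trivial sketch; the function name is ours):

```python
def foot_lamberts_to_nits(fl):
    """Convert luminance from foot-lamberts to nits (cd/m^2),
    using the glossary's factor of 3.425 nits per foot-lambert."""
    return fl * 3.425
```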
Neutral Colors – The range of gray levels, from black to white, but without color. For neutral areas in the image, the RGB signals will all be equal;
in color difference formats, the color difference signals will be zero.
NIU (Network Interface Unit) – A device that serves as a common interface for various other devices within a local area network (LAN), or as an
interface to allow networked computers to connect to an outside network.
The NIU enables communication between devices that use different protocols by supplying a common transmission protocol, which may be used
instead of the device’s own protocols, or may be used to convert the specific device protocol to the common one. To enable an interface between
a LAN and another network, the NIU converts protocols and associated
code and acts as a buffer between the connected hardware. A network
interface card (NIC) is a type of NIU.
New York Institute of Technology – Private engineering school headquartered in Old Westbury, NY, noted for its advanced computer graphics.
Its Science and Technology Research Center, in Dania, FL, has been
researching ATV for years. NYIT is a proponent of the VISTA ATV scheme.
NLM (Network Loadable Module) – Software that runs in a NetWare
server. Although NetWare servers store DOS and Windows applications,
they do not execute them. All programs that run in a NetWare server must
be compiled into the NLM format.
Network Interface Card (NIC) – A device that connects a terminal to a network.
Neutral – Normal; without power; not in working position; without much color or brightness.
NMI (Non-Maskable Interrupt) – A hardware interrupt request to the
CPU which cannot be masked internally in the processor by a bit, but must
be serviced immediately.
Noisy – A description of a picture with abnormal or spurious pixel values.
The picture’s noise is a random variation in signal interfering with the
information content.
NNI (Nederlands Normalisatie-Instituut) – Standards body in the Netherlands.
Noisy Video – Noisy video (e.g., video from low quality VTRs) is more
difficult to code than the cleaner version of the same sequence. The reason
is that the video encoder spends many bits trying to represent the noise as
if it were part of the image. Because noise lacks the spatial coherence of
the image, it is not coded efficiently.
Node – a) A list of calculations that you can apply to materials as part of
the rendering tree language. The node can in turn serve as input to other
nodes. b) Any signal line connected to two or more circuit elements. All
logic inputs and outputs electrically connected together are part of the
same node.
Nodules – Clusters of materials, i.e., a large nodule of iron oxide on
magnetic tape would be a tape defect.
Nomograph – This is a table that allows for the determination of
Chrominance to Luminance Gain and Delay errors. Refer to the discussion
on Chrominance to Luminance Gain and Delay.
Non-Additive Mix (NAM) – The process of combining two video signals
such that the resultant video signal is instant-by-instant the same as the
brighter of the two weighted input signals. For example, at 50% fader,
the brighter of the two videos predominates. The net effect of this type
of mix is a superimposed appearance, with the picture balance controlled
by the fader.
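The instant-by-instant "brighter of the two weighted inputs" rule can be sketched as follows (a minimal illustration with sample values normalized to 0..1; the names are ours):

```python
def non_additive_mix(video_a, video_b, fader=0.5):
    """Non-additive mix: each output instant is the brighter (max)
    of the two fader-weighted input signals."""
    return [max(fader * a, (1.0 - fader) * b) for a, b in zip(video_a, video_b)]
```

At a 50% fader setting, whichever weighted input is brighter at each instant wins, which produces the superimposed look the entry describes.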
Noise – Any unwanted electrical disturbances, other than crosstalk or distortion components, that occur at the output of the reproduce amplifier.
System Noise: The total noise produced by the whole recording system,
including the tape. Equipment Noise: The noise produced by all the components of the system, with the exception of the tape. Tape Noise: The noise
that can be specifically ascribed to the tape. There are several sources of
tape noise. See DC Noise, Erase Noise, Modulation Noise, Saturation Noise,
and Zero Modulation Noise.
Noncomposite Video – A video signal which does not contain synchronizing pulses.
Noise Bars – White streaks in a picture, usually caused when video heads
trace parts of the tape that have no recorded signal.
Nondirectional – A pickup pattern which is equally sensitive to sounds
from all directions.
Noise Floor – The level of background noise in a signal or the level of
noise introduced by equipment or storage media below which the signal
can't be isolated from the noise.
Non-Drop Frame – System of time code that retains all frame numbers in
chronological order, resulting in a slight deviation from real clock time.
Noise Gate – A device used to modify a signal’s noise characteristics. In
video, noise gates provide optimal automatic suppression of snow (signal
noise level). In audio, a noise gate provides a settable signal level threshold
below which all sound is removed.
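The audio behavior described here — a settable threshold below which all sound is removed — can be sketched as (an illustrative toy, not production DSP):

```python
def noise_gate(samples, threshold):
    """Zero out every sample whose magnitude falls below the gate threshold;
    samples at or above the threshold pass through unchanged."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```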
Noise Pulse – A spurious signal of short duration that occurs during
reproduction of a tape and is of magnitude considerably in excess of the
average peak value of the ordinary system noise.
Noise Reduction – The amount in dB that the noise added to a signal by
transmission or storage chain, especially a tape recorder, is reduced from
the level at which it would be if no noise reduction devices were used.
Noise Reduction Systems – Refers to electronic circuits designed to
minimize hiss level in magnetic recording.
Noise Weighting – An adjustment used in the electrical measurement of
television signal noise values to take into account the difference between
the observable effect of noise in a television picture and the actual electrical value of noise.
Noise/A-Weighted – Unwanted electrical signals produced by electronic
equipment or by magnetic tape. Mostly confined to the extremes of the
audible frequency spectrum where it occurs as hum and/or hiss. A-weighted noise is noise measured within the audio frequency band using a
measuring instrument that has a frequency selective characteristic. The
frequency sensitivity of the measuring instrument is adjusted to correspond
to that of the average human hearing response.
Non-Compatible – Incapable of working together.
Non-Drop Frame Time Code – SMPTE time code format that continuously counts a full 30 frames per second. Because NTSC video does not
operate at exactly 30 frames per second, non-drop frame time code will
count 108 more frames in one hour than actually occur in the NTSC video
in one hour. The result is incorrect synchronization of time code with clock
time. Drop frame time code solves this problem by skipping or dropping 2
frame numbers per minute, except at the tens of the minute count.
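The arithmetic behind the 108-frame figure can be checked directly (a sketch; the variable names are illustrative):

```python
# Non-drop-frame time code labels 30 frames every second, but NTSC video
# actually runs at 30000/1001 (~29.97) frames per second.  This checks
# the figures quoted above: the label count gains roughly 108 frames per
# hour, and drop-frame's rule (skip two frame numbers each minute,
# except every tenth minute) removes exactly 108 labels per hour.

NTSC_FPS = 30000 / 1001

labels_per_hour = 30 * 3600                    # 108000 labels counted
actual_frames_per_hour = NTSC_FPS * 3600       # ~107892.1 frames shown
drift = labels_per_hour - actual_frames_per_hour

dropped_per_hour = sum(2 for minute in range(60) if minute % 10 != 0)
```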
Non-Ferrous – Without iron or iron oxide.
Noninterlaced – Method of scanning video in which the entire frame is
scanned at once rather than interleaved. The rate of scan must be fast
enough that the average light level of the scene does not decrease
between scans and cause flicker. Another term for a noninterlaced system
is progressive scan.
Non-Intra Coding – Coding of a macroblock or picture that uses information both from itself and from macroblocks and pictures occurring at other times.
Nonlinear – A term used for editing and the storage of audio, video and
data. Information (footage) is available anywhere on the media (computer
disk or laser disc) almost immediately without having to locate the desired
information in a time linear format.
Nonlinear Distortion – Amplitude-dependent waveform distortion. This
includes APL and instantaneous signal level changes. Analog amplifiers
are linear over a limited portion of their operating range. Signals which fall
outside of the linear range of operation are distorted. Nonlinear distortions
www.tektronix.com/video_audio 159
Video Terms and Acronyms
include crosstalk and intermodulation effects between the luminance and
chrominance portions of the signal.
Nonlinear Editing (NLE) – a) The process of editing using rapid retrieval
(random access) computer controlled media such as hard disks, CD-ROMs
and laser discs. Its main advantages are: allows you to reorganize clips or
make changes to sections without having to redo the entire production and
very fast random access to any point on the hard disk (typically 20-40 ms).
b) “Nonlinear” distinguishes this editing operation from the “linear” methods
used with tape. Nonlinear refers to not having to edit material in the
sequence of the final program and does not involve copying to make edits.
It allows any part of the edit to be accessed and modified without having to
re-edit or re-copy the material that is already edited and follows that point.
Nonlinear editing is also non-destructive, the video is not changed but the
list of how the video is played back is modified during editing.
Nonlinear Editor – An editing system based on storage of video and
audio on computer disk, where the order or lengths of scenes can be
changed without the necessity of reassembling or copying the program.
Nonlinear Encoding – Relatively more levels of quantization are assigned
to small amplitude signals, relatively fewer to the large signal peaks.
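µ-law companding, used in telephony, is a classic instance of this kind of nonlinear encoding; a short sketch (not drawn from this glossary's source material) shows how it spends resolution on small amplitudes:

```python
import math

MU = 255.0  # the mu-law constant used in North American telephony

def mu_law_compress(x):
    """Map x in [-1, 1] through the mu-law curve; a uniform quantizer
    applied afterwards then spends more of its levels on small signals."""
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

# The same input difference (0.001) occupies far more of the compressed
# scale near zero than near full scale, i.e., finer effective resolution
# for small-amplitude signals.
step_near_zero = mu_law_compress(0.011) - mu_law_compress(0.010)
step_near_full = mu_law_compress(0.901) - mu_law_compress(0.900)
```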
Nonlinearity – The amount by which a measured video signal output differs from a standard video signal output. The greater this deviation, the
greater the video signal distortion and possibility of luminance and chrominance problems. Having gain vary as a function of signal amplitude.
Non-Return-to-Zero (NRZ) – A coding scheme that is polarity sensitive.
0 = logic low; 1 = logic high.
Non-Synchronous – Separate things not operating together properly, i.e.,
audio and video or the inability to properly operate together with another
specific piece of equipment or signal. See Synchronous.
Non-Synchronous Source – A video signal whose timing information
differs from the reference video by more than 800 ns.
Non-Uniform B-Splines (NURBS) – A superset of both Bézier and
Uniform B-Splines. NURBS introduces the feature of non-uniformity. Thus
it is possible to subdivide a spline, for example, to locally increase the
number of control points without changing the shape of the spline. This
is a powerful feature which enables you to insert more control points on
a spline without altering its shape; cut anywhere on a spline to generate
two parts; and create cusps in splines.
Non-Useful DC Component – Produced by the transmission equipment
and not related to picture content. The non-useful DC component present
across the interface point, with or without the load impedance connected,
shall be zero +/-50 µV.
Normal – a) Relating to the orientation of a surface or a solid, a normal
specifies the direction in which the outside of the surface or the solid
faces. b) The normal to a plane is the direction perpendicular to the plane.
Normal Key – On the 4100 series, an RGB chroma key or a luminance
key, as distinct from a composite (encoded) chroma key.
Normal/Reverse – The specification of the direction a pattern moves as
the fader is pulled. A normal pattern starts small at the center and grows
to the outside while a reverse pattern starts from the edge of the screen
and shrinks. Normal/Reverse specifies that the pattern will grow as the
fader is pulled down, and shrink as it is pushed up. This definition loses
some meaning for wipes that do not have a size per se, such as a vertical
bar, however, this feature still will select the direction of pattern movement.
North American Broadcast Teletext Specification – Provisions for
525-line system C teletext as described in EIA-516 and ITU-R BT.653.
Non-Return-to-Zero Inverse (NRZI) – A video data scrambling scheme
that is polarity insensitive. 0 = no change in logic; 1 = a transition from
one logic level to the other.
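The difference between NRZ and NRZI, and NRZI's polarity insensitivity, can be illustrated with a short sketch (function names are illustrative):

```python
def nrz_encode(bits):
    """NRZ: the line level *is* the bit (0 -> low, 1 -> high)."""
    return list(bits)

def nrzi_encode(bits, start_level=0):
    """NRZI: a 1 toggles the line level, a 0 leaves it unchanged."""
    level, out = start_level, []
    for b in bits:
        if b:
            level ^= 1
        out.append(level)
    return out

def nrzi_decode(levels, start_level=0):
    """Decoding compares successive levels only, so swapped polarity
    (every level inverted) still recovers the original bits."""
    prev, out = start_level, []
    for lv in levels:
        out.append(1 if lv != prev else 0)
        prev = lv
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
line = nrzi_encode(bits)
inverted = [lv ^ 1 for lv in line]  # polarity swapped everywhere
```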
NOS (Network Operating System) – Generic term used to refer to what
are really distributed file systems. Examples of NOSs include LAN Manager,
NetWare, NFS, and VINES.
Notch Filter – A device which attenuates a particular frequency greatly,
but has little effect on frequencies above or below the notch frequency.
Notifier – A form that appears when the system requires you to confirm
an operation that you just requested, or when an error occurs.
NRZ – See Non-Return-to-Zero.
NRZI – See Non-Return-to-Zero Inverse.
NSAP (Network Service Access Point) – Network addresses, as
specified by ISO. An NSAP is the point at which OSI network service is
made available to a Transport Layer (Layer 4) entity.
NSF (Norges Standardiseringsforbund) – Standards body of Norway.
NST (Network Status Table) – The network status table shows the
network name, the protocol, the interface over which the network runs
(eth:1 for LAN, atm:1 or hdlc:1 for WAN), how the network was created
(static for LAN, dynamic for WAN) and the network address assigned to
the connection.
NTC-7 Composite Test Signal
NTFS (New Technology File System) – A file system used on Windows.
NTSC – See National Television System Committee.
NTSC 4.43 – An NTSC video signal that uses the PAL color subcarrier frequency (about 4.43 MHz). It was developed by Sony in the 1970s to more easily adapt European receivers to accept NTSC signals.
NTSC Artifacts – Defects associated with NTSC.
What’s Wrong with NTSC
A. Monochrome and Color Defects
1. Due to Sampling
• Temporal Alias
• Vertical Alias
• Vertical Resolution Loss (Kell Factor)
2. Due to Aperture
• Visible Scanning Lines
• Soft Vertical Edges
3. Due to Interlace
• Twitter
• Line Crawl
• Vertical Resolution Loss (Interlace Coefficient)
• Motion Artifacts, Vertical and Horizontal
4. Due to Transmission
• Ghosts
• Group Delay
• Impulsive Noise
• Periodic Noise
• Random Noise
• Interference
• Filter Artifacts
5. Due to Changing Equipment
• Non-Linear System Gamma
B. Color Defects
1. Visible in Monochrome
• Cross Luminance
• Visible Subcarrier
• Chroma Crawl
• Gamma Problems
• Detail Loss Due to Filters
• Ringing Due to Filters
2. Visible in Color
• Cross Color
• Detail Loss Due to Filters
• Ringing Due to Filters
C. Characteristics of the System (Not Necessarily Defects)
1. 4:3 Aspect Ratio
2. 330 x 330 Resolution
3. NTSC Colorimetry
4. 15 kHz Sound
NTSC Color – The color signal TV standard set by the National Television Standards Committee of the USA.
NTSC Color Bars – The pattern comprising eight equal-width color bars generated by an NTSC generator. The color bars are used for calibration and as a reference to check transmission paths, signal phase, recording and playback quality, and monitor alignment.
NTSC Composite – The video signal standard proposed by the NTSC and adopted by the FCC for broadcast television in the U.S. The signal is an interlaced composite video signal of 525 lines and 60 fields per second (30 frames per second), with a bandwidth limited to 4 MHz to fit into a 6 MHz broadcast television channel without interfering with adjacent channels.
NTSC Composite Video Receiver System
NTSC Composite Video Transmitter System
NTSC Decoder – An electronic circuit that breaks down the composite NTSC video signal into its components.
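The NTSC rates quoted in these entries are locked together by exact ratios; a quick check (using the standard relationships between the carriers, not anything specific to this glossary):

```python
from fractions import Fraction

# Line rate is the 4.5 MHz sound carrier divided by 286; the color
# subcarrier is 455/2 times the line rate; a frame is 525 lines.
f_sound = Fraction(4_500_000)             # Hz
f_line = f_sound / 286                    # ~15734.27 Hz
f_subcarrier = f_line * Fraction(455, 2)  # ~3579545.45 Hz ("3.579545 MHz")
frame_rate = f_line / 525                 # exactly 30000/1001 (~29.97 Hz)
```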
NTSC Format – A color television format having 525 scan lines (rows)
of resolution at 30 frames per second (30 Hz). See NTSC. Compare PAL Format.
NTSC MUSE – Term sometimes used for MUSE-6 and MUSE-9.
NTSC RGB – Interlaced red, green, and blue video signals timed to NTSC
standards. Refers to the three monochrome signals that represent the
primary colors of an image. Contrast with Component Video.
NTSC Standard – Documentation of the characteristics of NTSC. NTSC
is defined primarily in FCC Part 73 technical specifications. Many of its
characteristics are defined in EIA-170A. NTSC is also defined by the CCIR.
NTSC is a living standard; as problems with it are discovered, they are
corrected. For example, a former EIA standard, RS-170, omitted any phase
relationship between luminance and chrominance timing, resulting in
blanking problems. EIA-170A defines that relationship (called SC/H for
subcarrier to horizontal phase relationship). See also True NTSC.
NTSC-M – The U.S. standard of color television transmissions. See also
NTSC and M.
NTU (Network Termination Unit) – A Network Termination Unit is a device located at the final interconnect point between the PSTN (Public Switched Telephone Network) and the customer’s own equipment.
NTV (Nippon Television Network) – A Japanese broadcaster that is a
proponent of ATV schemes similar to Faroudja’s SuperNTSC. NTV’s first
generation EDTV system would use high line-rate and/or progressive scan
cameras with prefiltering, adaptive emphasis, gamma correction, ghost
cancellation, a progressive scan display, and advanced decoding at the
receiver. The second generation would add more resolution, a widescreen
aspect ratio, and better sound. The first generation is scheduled to be
broadcast beginning in 1988.
Null Packets – Packets of “stuffing” that carry no data but are necessary
to maintain a constant bit rate with a variable payload. Null packets always
have a PID of 8191.
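The reserved stuffing PID can be checked against a hand-built transport stream header (a sketch; real MPEG-2 TS packets are 188 bytes, but only the 4-byte header matters here, and the helper names are illustrative):

```python
def packet_pid(packet):
    """Extract the 13-bit PID from an MPEG-2 transport stream packet.

    The PID spans the low 5 bits of byte 1 and all of byte 2.
    """
    return ((packet[1] & 0x1F) << 8) | packet[2]

def is_null_packet(packet):
    # 8191 == 0x1FFF, the all-ones 13-bit PID reserved for null packets
    return packet_pid(packet) == 0x1FFF

# Minimal 4-byte TS header: sync byte 0x47, then PID 0x1FFF, then flags
null_header = bytes([0x47, 0x1F, 0xFF, 0x10])
```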
Number Crunching – Action of performing complex numerical operations.
Numerical Aperture – A number that defines the light gathering ability
of a specific fiber. The numerical aperture is equal to the sine of the
maximum acceptance angle.
NVOD (Near Video On Demand) – This service allows for a single TV
program to be rebroadcast consecutively with a few minutes of difference
in starting time. For example, a movie could be transmitted at 9:00, 9:15
and 9:30.
NWK – See Network.
NYIT – See New York Institute of Technology.
Nyquist – Nyquist Filter, Nyquist Limit, Nyquist Rule, and Harry Nyquist,
for whom they are named.
Nyquist Filter – Commonly used in the IF stage of a television receiver
to separate the desired television channel from potential interference.
Nyquist Frequency – The lowest sampling frequency that can be used for
analog-to-digital conversion of a signal without resulting in significant
aliasing. Normally, this frequency is twice the rate of the highest frequency
contained in the signal being sampled.
Nyquist Interval – The maximum separation in time which can be given
to regularly spaced instantaneous samples of a wave of bandwidth W for
complete determination of the waveform of the signal. Numerically, it is
equal to 1/2 W seconds.
Nyquist Limit – When time-varying information is sampled at a rate R,
the highest frequency that can be recovered without alias is limited to R/2.
Aliasing may be generated by under sampling temporally in frame rate, or
vertically in lines allocated to image height, or horizontally in analog bandwidth or in pixel allocation. Intermodulations prior to band limiting may
“preserve” some distracting effects of aliasing in the final display. Note:
Sampling at a rate below the Nyquist limit permits mathematical confirmation of the frequencies present (as for example in a Fourier analysis of
recorded motion). If the sampling window is very small (as in synchronized
flash exposure), however, it may become a subjective judgment whether
strobing is perceived in the image for motion approaching the limiting
velocity (frequency).
Nyquist Rate Limit – Maximum rate of transmitting pulse signals through
a channel of given bandwidth. If B is the effective bandwidth in Hertz, then
2B is the maximum number of code elements per second that can be
received with certainty. The definition is often inverted, in effect, to read
“the theoretical minimum rate at which an analog signal can be sampled
for transmitting digitally”.
Nyquist Rule – States that in order to be able to reconstruct a sampled
signal without aliases, the sampling must occur at a rate of more than
twice the highest desired frequency. The Nyquist Rule is usually observed
in digital systems. For example, CDs have a sampling frequency of 44.1
kHz to allow signals up to 20 kHz to be recorded. It is, however, frequently
violated in the vertical and temporal sampling of television, resulting in
aliases. See also Alias.
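The folding behavior the rule describes can be sketched with a small helper (illustrative, not a standard API), using the CD example from the entry:

```python
def alias_frequency(f, fs):
    """Apparent frequency after sampling a tone of frequency f at rate fs.

    Frequencies fold around multiples of the sampling rate; anything
    above fs/2 shows up as an alias somewhere in [0, fs/2].
    """
    f = f % fs
    return fs - f if f > fs / 2 else f

# 44.1 kHz sampling keeps a 20 kHz tone intact...
ok = alias_frequency(20_000, 44_100)
# ...but a 30 kHz tone would fold down to a 14.1 kHz alias
folded = alias_frequency(30_000, 44_100)
```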
Nyquist Sampling – Sampling at or above twice the maximum bandwidth
of a signal. This allows the original signal to be recovered without distortion.
Nyquist Sampling Theorem – Intervals between successive samples
must be equal to or less than one-half the period of highest frequency.
OAM (Operation, Administration and Maintenance) – ATM Forum
specification for cells used to monitor virtual circuits. OAM cells provide
a virtual circuit level loopback in which a router responds to the cells,
demonstrating that the circuit is up and the router is operational.
Object Based Coding (OBC) – A technique that codes arbitrarily shaped objects within a scene. Transmitted parameters are shape, color and motion.
OBO (Output Back-Off) – The ratio of the signal power measured at
the output of a high power amplifier to the maximum output signal power.
The output back-off is expressed in decibels as either a positive or
negative quantity. It can be applied to a single carrier at the output to
the HPA (carrier OBO), or to the ensemble of output signals (total OBO).
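The dB expression can be sketched as (illustrative helper and values):

```python
import math

def output_back_off_db(p_out_watts, p_max_watts):
    """Output back-off expressed in dB relative to maximum output power."""
    return 10 * math.log10(p_out_watts / p_max_watts)

# A single carrier run at 25 W through an amplifier capable of 100 W
# is backed off by about -6 dB.
obo = output_back_off_db(25, 100)
```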
OC1 (Optical Carrier Level 1) – A signal with a bitrate of 51.8 Mbps.
Fundamental transmission rate for SONET.
Object Carousels – The object carousel specification has been added
in order to support data broadcast services that require the periodic
broadcasting of DSM-CC user-user (U-U) objects through DVB compliant
broadcast networks, specifically as defined by DVB systems for interactive
services (SIS). Data broadcast according to the DVB object carousel
specification is transmitted according to the DSM-CC object carousel
and DSM-CC data carousel specifications, which are defined in MPEG-2 DSM-CC.
OC12 (Optical Carrier Level 12) – A signal with a bitrate of 622 Mbps.
Object Clock Reference (OCR) – A clock reference that is used by a
media object hierarchy. This notation has been chosen within the context
of the MPEG-4 Systems.
OCT (Octal Notation) – Any mathematical notation that uses 8 different
characters (usually the digits 0 to 7).
Object Content Information (OCI) – Additional information about content
conveyed through one or more elementary streams. It is either attached to
individual elementary stream descriptors or conveyed itself as an elementary stream.
Object Descriptor (OD) – A descriptor that associates one or more
elementary streams by means of their elementary stream descriptors and
defines their logical dependencies.
Object Descriptor Message – A message that identifies the action to be
taken on a list of object descriptors or object descriptor Ids, for example,
update or remove.
Object Descriptor Stream – An elementary stream that conveys object
descriptors encapsulated in object descriptor messages.
Object Modeling Technique (OMT) – A graphical method to represent
the class hierarchy. This notation has been chosen within the context of the
MPEG-4 Systems.
Object Program – End result of the source language program (assembly
or high-level) after it has been translated into machine language.
OC3 (Optical Carrier Level 3) – A 155 Mbps ATM SONET signal stream
that can carry three DS-3 signals. Equivalent to SDH STM-1.
OC48 (Optical Carrier Level 48) – A signal with a bitrate of 2.4 Gbps.
Occlusion – The process whereby an area of the video raster is blocked
or made non-transparent by controlling selected bits. Occlusion is used
when more than one picture is displayed or windowed simultaneously.
Octal – Base 8 number system. Often used to represent binary numbers,
since each octal digit corresponds directly to three binary digits.
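The digit-for-three-bits correspondence is easy to verify:

```python
# Each octal digit encodes exactly three binary digits, so an octal
# number can be read directly off a bit pattern in groups of three.
n = 0o725
assert n == 0b111_010_101        # 7 -> 111, 2 -> 010, 5 -> 101
assert format(n, 'o') == '725'   # and back again
```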
Octave – A two-to-one frequency ratio.
Ocular – The very last optical element at the back of a lens (the one
closer to the CCD chip).
Odd Number – The number of scanning lines per frame necessary in an
interlaced scanning system. One line is split between fields to ensure
proper spacing between scanning lines from different fields. A progressively scanned system may use an even number of scanning lines.
OEM (Original Equipment Manufacturer) – A company which develops,
produces and sells computer and consumer hardware to other companies.
Oersted – A unit of magnetic field strength.
OFDM (Orthogonal Frequency Division Multiplex) – First promoted in
the early 1990s as a wireless LAN technology. OFDM’s spread spectrum
technique distributes the data over a large number of carriers that are
spaced apart at precise frequencies. This spacing provides the “orthogonality” in this technique which prevents the demodulators from seeing other
frequencies than their own. Coded OFDM (COFDM) adds forward error
correction to the OFDM method.
Object Time Base (OTB) – a) The OTB defines the notation of time of a
given encoder. All time stamps that the encoder inserts in a coded audiovisual object data stream refer to this time base. b) A time base valid for a
given object, and hence for its media object decoder. The OTB is conveyed
to the media object decoder via object clock references. All time stamps
relating to this object’s decoding process refer to this time base.
Off-Line, Offline – Preliminary editing done on relatively low-cost editing
systems, usually to provide an EDL for final on-line editing and assembly of
the finished show.
Objects – Objects, in the object-oriented terminology, are entities that
combine a data structure (defining the object’s state), with a set of methods (defining the object’s behavior).
Off-Line Editing – Editing that is done to produce an edit decision list,
which is used later for assembling that program. A video tape (sometimes
called a work print) may be produced as a by-product of off-line editing.
Off-Line Edit – Rough cut editing used to produce an Edit Decision List.
Objective – The very first optical element at the front of a lens.
Off-Line Editor – A low resolution, usually computer and disk based edit
system in which the creative editing decisions can be made at lower
cost and often with greater flexibility than in an expensive fully equipped
on-line bay.
Offline Encoder – The Indeo video codec’s normal mode of operation,
in which it takes as long as necessary to encode a video file so that it
displays the best image quality and the lowest and most consistent data
rate. Compare Quick Compressor.
One Wire Interconnect – Interconnect consists of a single wire transporting an encoded, composite analog video signal.
One_Random_PGC Title – In DVD-Video, a Title within a Video Title Set
(VTS) that contains a single Program Chain (PGC), but does not meet the
requirements of a One_Sequential_PGC Title. Contrast with One_Sequential_PGC Title and Multi_PGC Title.
Offset – a) The horizontal and vertical displacement of a clip.
b) Reference numbers that indicate the change, in terms of frames,
that take place when you trim.
One_Sequential_PGC Title – In DVD-Video, a Title within a Video Title
Set (VTS) that contains a single Program Chain (PGC) with the following
attributes: 1) PG Playback mode is Sequential, 2) no Next PGC, Previous
PGC or Go Up PGCs are defined, and 3) the Navigation Timer is neither set,
nor referred to. Contrast with One_Random_PGC Title and Multi_PGC Title.
Ohm – The unit of resistance. The electrical resistance between two points
of a conductor where a constant difference of potential of 1 V applied
between these points produces in the conductor a current of 1 A, the
conductor not being the source of any electromotive force.
One’s Complement – Number representation system used for signed binary integers in which the negative of a number is obtained by complementing it. The leftmost bit becomes the sign bit, with 0 for plus, 1 for minus.
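The representation can be sketched in code (the helper function is illustrative):

```python
def ones_complement(value, bits=8):
    """One's-complement bit pattern of a signed integer in a fixed word size."""
    if value >= 0:
        return value
    mask = (1 << bits) - 1
    return (~(-value)) & mask   # complement every bit of the magnitude

pos = ones_complement(5)    # 0b00000101, sign bit 0
neg = ones_complement(-5)   # 0b11111010, sign bit 1
```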
OIRT (Organisation Internationale de Radiodiffusion-Television) –
The OIRT was dissolved in 1992 and integrated into the Union of the
European Broadcast Organizations (UER).
On-Line Editing – a) Editing that is done to produce a finished program
master. b) Final editing session, the stage of post-production in which
the edited master tape is assembled from the original production footage,
usually under the direction of an edit decision list (EDL).
OLE (Object Linking and Embedding) – A standard for combining data
from different applications that updates automatically.
O-Member (Observing Member) – A term used within ISO/IEC JTC1
committees. A National Body that does not vote.
OMF, OMFI, OMF Interchange (Open Media Framework Interchange)
– A media and metadata exchange solution developed by Avid Technology.
A standard format for the interchange of digital media data among
heterogeneous platforms. The format is designed to encapsulate all the
information required to interchange a variety of digital media, such as
audio, video, graphics, and still images as well as the rules for combining
and presenting the media. The format includes rules for identifying the
original sources of the digital media, and it can encapsulate both compressed and uncompressed digital media data.
On-Line Editor – An editing system where the actual video master is
created. An on-line bay usually consists of an editing computer, video
switcher, audio mixer, one or more channels of DVE, character generator,
and several video tape machines.
On-Line, Online – Final editing or assembly using master tapes to produce a finished program ready for distribution. Often preceded by off-line
editing, but in some cases programs go directly to the on-line editing suite.
Usually associated with high-quality computer editing and digital effects.
On-Screen Display – A function on many VCRs and televisions in which
operational functions (tint, brightness, VCR function, programming, etc.)
are displayed graphically on the television screen.
ONU (Optical Node Unit)
Omnidirectional – A microphone type that picks up sound relatively
evenly from all directions.
OOB (Out-of-Band) – Out-of-band is any frequency outside the band
used for voice frequencies.
OMWF (Open MPEG Windows Forum) – OMWF is a Japanese industry consortium aiming at compatibility in MPEG-based multimedia applications. The group, which includes various hardware and software vendors and content providers in Japan, has its origins in the popularity in Japan of CD movies and Karaoke. Through cooperation with the Open MPEG Consortium in the USA, the OMWF cleared up details in the MCI standard that impeded compatibility. The new specification, called the Video CD specification, allows Windows machines to play MPEG-1 video CDs and allows Windows data and applications to be stored on the same CD along with the video contents.
Opaque Macroblock – A macroblock with shape mask of all 255’s.
On the Fly – a) Depressing a button causing some change while a switcher is transitioning. b) Selecting a tape edit point while VTR is moving.
On-Air Output – Ready to use for transmission or videotaping, this is the
PGM output.
One Light – A telecine transfer or film print produced with a single
setting of color correction values. One light is the simplest, fastest, and
least costly type of transfer.
Opcode – See Operation Code.
OPCR (Original Program Clock Reference)
Open – To double-click an icon, or to select an icon then choose “Open”
from a menu in order to display a window that contains the information
that the icon represents.
Open Architecture – A concept for television receivers that acknowledges
an absence of ATV transmission/distribution standards and allows a
receiver to deal with a multiplicity of standards and delivery mechanisms.
Open MPEG Consortium – The goal of the Open MPEG Consortium is
to “create a single API for the playback of MPEG-1 titles under Windows
and DOS”. The consortium has developed the MPEG Multimedia Control
Interface (MCI) which defines how MPEG boards operate under Windows.
Due to some undefined topics, the MCI specification has not been able to
curb incompatibility, but the consortium has later cooperated with the
Japanese OMWF group on an enhanced specification.
Open Subtitles – See Subtitles.
Open-Ended Edit – a) Assemble mode. b) Edit that has a start time but
no designated stop time.
Open-Loop – Circuit or other system operating without feedback.
Operating Level – A certain level of flux recorded on magnetic tape.
Operating Program – Computer software program which controls all
functions of related computers and hardware devices.
Operating System – The primary software in a computer, containing general instructions for managing applications, communications, input/output,
memory and other low-level tasks. DOS, Windows, Mac OS, and UNIX are
examples of operating systems.
Operation Code (Opcode) – Segment of the machine-language instruction that specifies the operation to be performed. The other segments
specify the data, address, or port. For the 8085, the first byte of each
instruction is the opcode.
Opposite Track Path (OTP) – Dual-layer disc where Layer 0 and Layer 1
have opposite track directions. Layer 0 reads from the inside to the outside
of the disc, whereas Layer 1 reads from the outside to the inside. The
disc always spins clockwise, regardless of track structure or layers. This
mode facilitates movie playback by allowing seamless (or near-seamless)
transition from one layer to another. In computer applications (DVD-ROM),
it usually makes more sense to use the Parallel Track Path (PTP) format
where random access time is more important.
Optical Effects – Trick shots prepared by the use of an optical printer in
the laboratory, especially fades and dissolves.
Optical Fiber – A glass strand designed to carry light in a fashion similar
to the manner in which wires carry electrical signals. Since light is electromagnetic radiation of tremendously high frequency, optical fibers can carry
much more information than can wires, though multiple paths through the
fiber place an upper limit on transmission over long distances due to a
characteristic called pulse dispersion. Many feel that the wide bandwidth
of an optical fiber eliminates the transmission problems associated with the
high base bandwidth of HDEP schemes. CATV and telephone companies
propose connecting optical fibers directly to homes.
Opticals – The effects created in a film lab through a process called
A-roll and B-roll printing. This process involves a specified manipulation
of the film negative to create a new negative containing an effect. The
most common opticals used in film editing are fades, dissolves, and wipes.
Option Button – Used to select from a list of related items. The selected
option box has a black dot. (One item in the group must be selected.)
Option Drive – Any internal drive other than the system disk. Option
drives include floppy disk drives, secondary hard disk drives, or DAT drives.
Orange Book – The document begun in 1990 which specifies the format
of recordable CD. Three parts define magneto-optical erasable (MO) and
write-once (WO), dye-sublimation write-once (CD-R), and phase-change
rewritable (CD-RW) discs. Orange Book added multisession capabilities to
the CD-ROM XA format.
Orbit – The rotation of the camera eye around the point of interest.
Orientation – a) For animation, many 3D systems fix the viewer’s location
at a specified distance from the viewing screen. Currently, PictureMaker
is one of these. In such systems, the database is moved relative to the
viewer. The set of motions that accomplish any particular view of the world
is called its “orientation”. Using the three coordinate axes as references,
we can translate (shuffle on a plane) and rotate objects to create new
views. During animation, we change the amounts of these motions. A set of
numbers describes orientation: x-trans, y-trans, z-trans, x-rot, y-rot, z-rot.
b) A direction of presentation affecting resolution requirements. Horizontal
lines become vertical lines when their orientation is rotated by 90 degrees;
a pattern of dots appearing to be in horizontal and vertical rows may not
appear to be diagonally aligned when its orientation is rotated 45 degrees
due to characteristics of the human visual system.
Orientation Animation – We can also use splines to calculate orientations for objects in between their orientations at keyframe positions. This
allows the motions of an object to be smooth rather than robot-like. In
traditional animation, orientation animation required an artist to redraw
the object when it rotated out of the plane of the platen (on the animation
stand) and path animation was limited to repositioning the cells in X and Y
(although the whole scene could be zoomed). In computer graphics, it is
easy to rotate and reposition objects anywhere in three dimensions. That
is why you see so much of it!
Orientation Direction – The arrangement of magnetic particles on
recording tape. In tapes designed for quadraplex recording applications,
the orientation direction is transverse. For helical and longitudinal
recording, it is longitudinal.
Orientation Ratio – In a material composed of oriented particles, the
orientation ratio is the ratio of the residual flux density in the orientation
direction to the residual flux density perpendicular to the orientation
direction. The orientation ratio of conventional tapes is typically about 1.7.
Origin – A reference point for measuring sections of recorded or digitized
sample data. A file mob value for the start position in the media is
expressed in relation to the origin. Although the same sample data can
be re-recorded or re-digitized, and more sample data might be added,
the origin remains the same so that composition source clips referencing
it remain valid.
Original Negative – The actual film stock used in the camera to
photograph a scene.
original_network_id – A unique identifier of a network.
Origination – The production cycle begins with the introduction of
images in photographic, electronic imaging, or computational media.
Image capture in real-time is usually essential for recording live subjects
and maintaining the impact of realism. Image generation, normally
achieved in non real-time, provides additional subject matter that can
be edited into and combined with recorded live subjects to achieve
programs that are more artistic, or more instructional, or both.
Orthicon (Conventional) – A camera tube in which a low-velocity
electron beam scans a photoemissive mosaic on which the image is
focused optically and which has electrical storage capability.
Orthicon (Image) – A camera tube in which the optical image falls on a
photo-emissive cathode which emits electrons that are focused on a target
at high velocity. The target is scanned from the rear by a low-velocity electron beam. Return beam modulation is amplified by an electron multiplier
to form an overall light-sensitive device.
Orthicon Effect – One or more of several image orthicon impairments
that have been referred to as “Orthicon Effect” as follows: edge effect,
meshbeat or Moiré, ghost, halo, burned in image. It is obviously necessary
to indicate specifically the effects experienced and, therefore, it is
recommended that use of this term be discontinued.
Orthogonal Projection – With orthogonal projection, parallel receding
lines do not converge. The process of projecting from 3D to 2D is
particularly simple: discard the Z-value of each coordinate.
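The projection is literally a one-liner; the helper name here is illustrative.

```python
def orthogonal_project(points):
    """Project 3D points to 2D by discarding Z; parallel lines stay parallel."""
    return [(x, y) for (x, y, z) in points]
```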
Orthogonal Sampling – a) Sampling of a line of repetitive video signal in
such a way that samples in each line are in the same horizontal position.
b) Picture sampling arranged in horizontal rows and vertical columns.
Osborne, Joseph – An ATV proponent who was issued a patent for a data
compression transmission scheme for HD signals. The Osborne
compression system is said to allow channel-compatible but not
receiver-compatible HDTV.
Oscilloscope – An electronic device that can measure the signal changes
versus time. A must for any CCTV technician.
OSI (Open Systems Interconnection) – The OSI Reference Model was
formally initiated by the International Organization for Standardization (ISO)
in March, 1977, in response to the international need for an open set of
communications standards. OSI’s objectives are: to provide an architectural
reference point for developing standardized procedures; to allow inter-networking between networks of the same type; to serve as a common framework for the development of services and protocols consistent with the
OSI model; to expedite the offering of interoperable, multi-vendor products
and services.
OSI Model – The model is similar in structure to that of SNA. It consists
of seven architectural layers: the Physical Layer; the Data Link Layer; the
Network Layer; the Transport Layer; the Session Layer; the Presentation
Layer; and the Application Layer.
OSI Model
Physical and Data Link Layers – Provide the same functions as their SNA
counterparts (the physical control and data link control layers).
Network Layer – Selects routing services, segments blocks and messages,
and provides error detection, recovery, and notification.
Transport Layer – Controls point-to-point information interchange, data
packet size determination and transfer, and the connection/disconnection
of session entities.
Session Layer – Serves to organize and synchronize the application
process dialog between presentation entities, manage the exchange of
data (normal and expedited) during the session, and monitor the
establishment/release of transport connections as requested by session
entities.
Presentation Layer – Responsible for the meaningful display of information
to application entities. More specifically, the presentation layer identifies
and negotiates the choice of communications transfer syntax and the
subsequent data conversion or transformation as required.
Application Layer – Affords the interfacing of application processes to
system interconnection facilities to assist with information exchange. The
application layer is also responsible for the management of application
processes including initialization, maintenance and termination of
communications, allocation of costs and resources, prevention of
deadlocks, and transmission security.
OTP – See Opposite Track Path.
OUI (Organizational Unique Identifier) – The part of the MAC address
that identifies the vendor of the network adapter. The OUI is the first three
bytes of the six-byte field and is administered by the IEEE.
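A sketch of extracting the OUI from a textual MAC address; the helper name and accepted separators are illustrative assumptions.

```python
def oui(mac):
    """Return the vendor OUI: the first three bytes of a six-byte MAC address."""
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("expected a six-byte MAC address")
    # The IEEE-administered vendor identifier is the first three octets
    return ":".join(o.upper() for o in octets[:3])
```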
OUT Point – The end point of an edit, or a mark on a clip indicating a
transition point. Also called a Mark OUT. See also IN Point, Mark IN/OUT.
Outer Diameter – The diameter of the disc. This is 12 cm for “normal” CDs
and DVDs, and 8 cm for small CDs and DVDs.
Outlets – Openings in the hardware to which you attach connectors to
make an electrical connection.
Outline – A type of key border effect. An outline key with a character
generator appears as if the letters have been traced; the background video
is visible all around the letter as well as inside it.
Out-of-Band Signaling – Signaling carried on a channel that is separate from the data channel.
Out-of-Service (Full Field Testing) – [Original diagram not reproduced:
test signal → TV system (capable of full field test signals) → test signal.]
Overlay – Keyed insertion of one image into another. Overlay is used for
example, to superimpose computer generated text on a video image, for
titling purposes. In video, the overlay procedure requires synchronized
sources for proper operation.
Overlap Edit – An edit in which the audio and video signals are given
separate IN points or OUT points, so the edit takes place with one
signal preceding the other. This does not affect the audio and video
synchronization. See also L-Cut, Delay Edit, or Split Edit.
Oversampled VBI Data – See Raw VBI Data.
Output – The magnitude of the reproduced signal voltage, usually measured at the output of the reproduce amplifier. The output of an audio
or instrumentation tape is normally specified in terms of the maximum
output that can be obtained for a given amount of harmonic distortion,
and is expressed in dB relative to the output that can be obtained from
a reference tape under the same conditions.
Output Format – The form in which video is presented by a video chip to
monitoring or recording systems is called the output format.
Output Impedance – The impedance a device presents to its load. The
impedance measured at the output terminals of a transducer with the load
disconnected and all impressed driving forces taken as zero.
Output Port – Circuit that allows the microprocessor system to output
signals to other devices.
Out-Take – A take of a scene which is not used for printing or final
assembly in editing.
Ovenized Crystal Oscillator – A crystal oscillator that is surrounded by
a temperature regulated heater (oven) to maintain a stable frequency in
spite of external temperature variations.
Overcoat – A thin layer of clear or dyed gelatin sometimes applied on top
of the emulsion surface of a film to act as a filter layer or to protect the
emulsion from abrasion during exposure and processing.
Overflow – Results when an arithmetic operation generates a quantity
beyond the capacity of the register. An overflow status bit in the flag
register is set if an operation causes an overflow.
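A sketch of how the overflow flag is set for a signed two's-complement addition; this is an illustrative model, not any particular CPU's instruction set.

```python
def add_with_overflow(a, b, bits=8):
    """Add two signed two's-complement values and report the overflow flag,
    as a CPU's flag register would after the operation."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    total = a + b
    overflow = total < lo or total > hi   # result outside register capacity
    # Wrap the result into the register's range, as the hardware would
    wrapped = ((total - lo) % (1 << bits)) + lo
    return wrapped, overflow
```

For example, 100 + 100 exceeds the +127 capacity of an 8-bit register, so the result wraps to a negative value and the overflow bit is set.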
Oversampling – Sampling data at a higher rate than normal to obtain
more accurate results or to make it easier to sample.
Overscan – a) Increases scanning amplitudes approximately 20%. Used
for tube/yoke set-up and sometimes as a precaution against an edge of
picture “raster burn”. b) A video monitor condition in which the raster
extends slightly beyond the physical edges of the CRT screen, cutting off
the outer edges of the picture.
Overshoot – An excessive response to a unidirectional signal change.
Sharp overshoots are sometimes referred to as “spikes”.
Overwrite – An edit in which existing video, audio or both is replaced by
new material. See also Splice.
Overwrite Edit – The addition of a source clip into a record clip, where
the record clip edit sequence does not ripple (the duration does not
change). The source clip overwrites an equal number of frames on the
edit sequence.
Oxide (Magnetic Oxide) – The magnetizable particle used in the
manufacture of magnetic tape.
Oxide Buildup – The accumulation of oxide or, more generally, wear
products in the form of deposits on the surface of heads and guides.
Oxide Coating – The magnetic material coated on base film.
Oxide Loading – A measure of the density with which oxide is packed
into a coating. It is usually specified in terms of the weight of oxide per
unit volume of the coating.
Oxide Shed – The loosening of particles of oxide from the tape coating
during use.
Overhead Bits – Bits added to the binary message for the purpose of
facilitating the transmission and recovery of the message (e.g., frame
synchronization words, check bits, etc.).
Pack – A layer in the MPEG system coding syntax for MPEG systems
program streams. A pack consists of a pack header followed by zero or
more packets.
Pack Slip – A lateral slip of select tape windings causing high or low
spots (when viewed with the tape reel lying flat on one side) in an otherwise
smooth tape pack. Pack slip can cause subsequent edge damage when
the tape is played, as it will unwind unevenly and may make contact with
the tape reel flange.
Packed 24-Bit – A compression method where a graphics accelerator
transfers more than one bit on each clock cycle, then reassembles the
fragmented pixels. For example, some chips can transfer eight 24-bit
pixels in three clocks instead of the four normally required, saving bandwidth.
Packed Pixel – Color information for a pixel packed into one word of
memory data. For a system with few colors, this packed pixel may require
only a part of one word of memory; for very elaborate systems, a packed
pixel might be several words long. See Planar.
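A sketch of one common packing: 8-bit R, G, and B components in a single 32-bit word. The 0x00RRGGBB layout shown is one assumed convention among several.

```python
def pack_rgb(r, g, b):
    """Pack 8-bit R, G, B components into one 32-bit word (0x00RRGGBB)."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(word):
    """Recover the components from a packed pixel."""
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF
```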
Packet – a) A unit of information sent across a (packet-switched) network.
A packet generally contains the destination address as well as the data
to be sent. b) A packet consists of a header followed by a number of
contiguous bytes from an elementary data stream. It is a layer in the
system coding syntax.
Packet Data – Contiguous bytes of data from an elementary data stream
present in the packet.
Packet Identifier (PID) – a) MPEG-2 transmits transport stream data
in packets of 188 bytes. At the start of each packet is a packet identifier
(PID). Since the MPEG-2 data stream might be in multi-program mode,
the receiver has to decide which packets are part of the current channel
being watched and pass them onto the video decoder for further processing. Packets that aren’t part of the current channel are discarded. Four
types of PIDs are typically used by receivers. The VPID is for the video
stream and the APID is for the audio stream. Usually reference-timing data
is embedded into the video stream, though occasionally a PCR (program
clock reference) PID is used to synchronize the video and audio packets.
The fourth PID is used for data such as the program guide and information
about other frequencies that make up the total package. b) A unique
integer value used to associate elementary streams of a program in a
single- or multi-program transport stream.
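A sketch of pulling the 13-bit PID out of a 188-byte transport stream packet: the PID occupies the low five bits of the second header byte plus all of the third. The helper name is illustrative.

```python
def transport_pid(packet):
    """Extract the 13-bit packet identifier from a 188-byte MPEG-2
    transport stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a transport stream packet")
    # PID = low 5 bits of byte 1, followed by all 8 bits of byte 2
    return ((packet[1] & 0x1F) << 8) | packet[2]
```

A receiver applies a check like this to every packet, keeping those whose PID belongs to the current program and discarding the rest.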
Packet Switched Network – Network that transmits data in units called
packets. The packets can be routed individually over the best available
network connection and reassembled to form a complete message at the destination.
Packet Switching – The method of dividing data into individual packets
with identification and address, and sending these packets through a
switched network.
Packet Video – The integration of video coding and channel coding to
communicate video over a packetized communication channel. Usually
these techniques are designed to work in the presence of high packet jitter
and packet loss.
Packets – A term used in two contexts: in program streams, a packet is
a unit that contains one or more presentation units; in transport streams,
a packet is a small, fixed size data quantum.
Packing Density – The amount of digital information recorded along the
length of a tape, measured in bits per inch (bpi).
Padding – A method to adjust the average length of an audio frame in
time to the duration of the corresponding PCM samples, by continuously
adding a slot to the audio frame.
Page – Usually a block of 256 addresses. The lower eight bits of an
address therefore specify the location within the page, while the upper
eight bits specify the page.
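The page/offset split for a 16-bit address can be sketched as two bit operations (illustrative helper):

```python
def page_and_offset(address):
    """Split a 16-bit address into its page (upper 8 bits) and the
    location within that 256-byte page (lower 8 bits)."""
    return address >> 8, address & 0xFF
```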
Painter’s Algorithm – In traditional painting, paint is applied in layers,
and the last paint applied is what is visible. Digitally, the last value placed
in a pixel determines its color.
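A sketch of the digital form, where the last value written to a pixel wins; the names and the layer representation are illustrative.

```python
def paint(layers, width, height, background=0):
    """Composite layers back to front: the last value written to a pixel wins."""
    frame = [[background] * width for _ in range(height)]
    for color, pixels in layers:          # layers listed back to front
        for x, y in pixels:
            frame[y][x] = color           # later layers overwrite earlier ones
    return frame
```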
Pairing – A partial or complete failure of interlace in which the scanning
lines of alternate fields do not fall exactly between one another but tend to
fall (in pairs) one on top of the other.
PAL – See Phase Alternate Line.
PAL 60 – This is an NTSC video signal that uses the PAL color subcarrier
frequency (about 4.43 MHz) and PAL-type color modulation. It is a further
adaptation of NTSC 4.43, modifying the color modulation in addition to
changing the color subcarrier frequency. It was developed by JVC in the
1980s for use with their video disc players, hence the early name of
“Disk-PAL”. There is a little-used variation, also called PAL 60, which is
a PAL video signal that uses the NTSC color subcarrier frequency (about
3.58 MHz), and PAL-type color modulation.
PAL Format – A color television format having 625 scan lines (rows) of
resolution at 25 frames per second (25 Hz). See PAL. Compare NTSC.
PALE – See Phase Alternating Line Encoding.
Palette – a) The limited set of colors that a computer can simultaneously
display. A typical palette contains 256 unique colors, chosen from over
16 million possible colors. An “optimized palette” refers to a palette whose
colors are chosen to best represent the original colors in a particular
graphic or series of graphics. b) A central location for user-selectable
buttons, which you can map to various functions for ease of use. The
command palette houses all the user-selectable buttons that allow you
to perform a wide range of commands with a single click of the mouse.
Palette Flash – A phenomenon caused by simultaneously displaying more
than one bitmap or video that do not share the same palette.
PALplus, PAL+ – PALplus (ITU-R BT.1197) is a 16:9 aspect ratio version
of PAL, and is compatible with standard (B, D, G, H, I) PAL. Normal (B, D,
G, H, I) PAL video signals have 576 active scan lines. If a film is broadcast,
usually 432 or fewer active scan lines are used. PALplus uses these
unused “black” scan lines for additional picture information. The PALplus
decoder mixes it with the visible picture, resulting in a 16:9 picture with
the full resolution of 576 active scan lines. Widescreen televisions without
the PALplus decoder, and standard (B, D, G, H, I) PAL TVs, show a standard
picture with about 432 active scan lines. PALplus is compatible with
standard studio equipment. The number of pixels of a PALplus picture
is the same as in (B, D, G, H, I) PAL, only the aspect ratio is different.
Parallel Device – Any hardware device that requires a parallel cable
connection to communicate with a workstation.
Pan – Term used for a type of camera movement, to swing from left to
right across a scene or vice versa.
Parallel HDDR – The recording of multiple PCM data streams which are
synchronous to a common clock onto multitrack recorder/reproducers.
Pan and Scan – A method of transferring movies with an aspect ratio
of 16:9 to film, tape or disc to be shown on a conventional TV with a 4:3
aspect ratio. Only part of the full image is selected for each scene. Pan
and Scan is the opposite of “letterbox” or “widescreen”.
Parallel Interface – A PC port which receives or transmits data in byte or
word form rather than bit form.
Pan and Tilt Head (P/T Head) – A motorized unit permitting vertical and
horizontal positioning of a camera and lens combination. Usually 24 V AC
motors are used in such P/T heads, but 110 VAC or 240 VAC units can
also be ordered.
Parallel Track Path (PTP) – A variation of DVD dual-layer disc layout
where readout begins at the center of the disc for both layers. Designed
for separate programs (such as a widescreen and a pan & scan version on
the same disc side) or programs with a variation on the second layer. Also
most efficient for DVD-ROM random-access application. Contrast with OTP.
Pan Pot – An electrical device which distributes a single signal between
two or more channels or speakers.
Pan Tilt Zoom (PTZ) – A device that can be remotely controlled to provide
both vertical and horizontal movement for a camera, with zoom.
Pan Unit – A motorized unit permitting horizontal positioning of a camera.
Pan Vector – Horizontal offset in video frame center position.
Panel Memory – See STAR system.
PAP (Password Authentication Protocol) – The most basic access
control protocol for logging onto a network. A table of usernames and
passwords is stored on a server. When users log on, their usernames
and passwords are sent to the server for verification.
Paper Edit – Rough edit decision list made by screening original
materials, but without actually performing edits.
Parade – This is a waveform monitor display mode in which the Y and two
chrominance components of an analog component video signal are shown
side by side on the waveform screen.
Parallel Cable – A multi-conductor cable carrying simultaneous transmission of data bits. Analogous to the rows of a marching band passing a
review point.
Parallel Component Digital – This is the component signal sampling
format specified by ITU-R BT.601-2 and the interface specified by
ITU-R BT.656.
Parallel Composite Digital – This is the composite signal sampling
format specified in SMPTE 244M for NTSC. The EBU is working on the
PAL standard. The composite signals are sampled at the rate of 4FSC,
which is 14.3 MHz for NTSC and 17.7 MHz for PAL.
Parallel Data – Transmission of data bits in groups along a collection of
wires (called a bus). Analogous to the rows of a marching band passing
a review point. A typical parallel bus may accommodate transmission of
one 8-, 16-, or 32-bit byte or word at a time.
Parallel Digital – A digital video interface which uses twisted pair wiring
and 25-pin D connectors to convey the bits of a digital video signal in
parallel. There are various component and composite parallel digital video
formats.
Parallel Port – An outlet on a workstation to which you connect external
parallel devices.
Parameter – a) A variable which may take one of a large range of values.
A variable which can take one of only two values is a flag and not a
parameter. b) The values shown in X, Y and Z in each menu, so called
because they represent the numerical values assigned to each feature
of a video picture, size, aspect ratio, etc. Changing these values, shown
in the “X, Y and Z” columns, produces ADO’s visual effects. c) A setting,
level, condition or position, i.e., clip level, pattern position, system condition. d) Value passed from one routine to another, either in a register or
a memory location.
Parametric Audio Decoder – A set of tools for representing and
decoding audio (speech) signals coded at bit rates between 2 kbps
and 6 kbps.
Parametric Modeling – This method uses algebraic equations (usually
polynomials) to define shapes and surfaces. The user can build and modify
complex objects by combining and modifying simple algebraic primitives.
Parental Level – A mechanism that allows control over what viewers may
see depending on the settings in the DVD player, the parental code on a
DVD and the structure of the material on the DVD. This is especially useful
for youthful viewers whose parents wish to exercise a degree of control
over what their children can watch.
Parental Management – An optional feature of DVD-Video that prohibits
programs from being viewed or substitutes different scenes within a program depending on the parental level set in the player. Parental control
requires that parental levels and additional material (if necessary) be
encoded on the disc.
Parity – a) An extra bit appended to a character as an accuracy check.
For example, if parity is even, the sum of all 1s in the character should
be even. b) Number of 1s in a word, which may be even or odd. When
parity is used, an extra bit is used to force the number of 1s in the word
(including the parity bit) to be even (even parity) or odd (odd parity).
Parity is one of the simplest error detection techniques and will detect
a single-bit failure.
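A sketch of even-parity generation and checking; the helper names are illustrative.

```python
def even_parity_bit(word):
    """Parity bit that forces the total number of 1s (word plus bit) to be even."""
    return bin(word).count("1") % 2

def parity_ok(word, bit):
    """True if the received word and its parity bit together have even parity.
    A single-bit failure flips the parity and is detected."""
    return (bin(word).count("1") + bit) % 2 == 0
```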
Past Reference Picture – A past reference picture is a reference picture
that occurs at an earlier time than the current picture in display order.
Parity Clock – A self-checking code employing binary digits in which
the total number of 1s (or 0s) in each code expression is always even or
always odd. A check may be made for even or odd parity as a means of
detecting errors in the system.
Patch – a) To connect jack A to jack B on a patch bay with a patch cord.
b) A section of curved, non-planar surface; it can be likened to a rectangular rubber sheet which can be pulled in all directions. c) Section of coding
inserted into a routine to correct a mistake or alter the routine. It is usually
not inserted into the actual sequence of the routine being corrected, but
placed somewhere else. A jump to the patch and a return to the routine
are then provided.
Parsed Audiovisual Objects – See Syntactic Decoded Audiovisual Objects.
Parsing – Identifying and extracting syntactic entities related to coded
representations from the bit stream and mapping them into semantic entities.
Parsing Layer – See Syntactic Decoding Layer.
Parsing Script – The description of the parsing procedure.
Part of Title (PTT) – In DVD-Video, a division of a Title representing a
scene. Also called a chapter. Parts of titles are numbered 1 to 99 in a
One_Sequential_PGC Title and 1 to 999 in a Multi_PGC Title.
Partial Transport Stream (TS) – Bitstream derived from an MPEG-2
TS by removing those TS packets that are not relevant to one particular
selected program, or a number of selected programs.
Particle Orientation – The process by which acicular particles are rotated
so that their longest dimensions tend to lie parallel to one another.
Orientation takes place in magnetic tape by a combination of the shear
force applied during the coating process and the application of a magnetic
field to the coating while it is still fluid. Particle orientation increases the
residual flux density and hence the output of a tape and improves performance in several other ways.
Particle Shape – The particles of gamma ferric oxide used in conventional
magnetic tape are acicular, with a dimensional ratio of about 6:1.
Particle Size – The physical dimensions of magnetic particles used in a
magnetic tape.
Particles – Refer to such vague objects as clouds, fire, water, sand, or
snow that can be rendered using a special program.
Partition – A subdivision of the total capacity of a storage disk that
creates two or more virtual disks from a single physical disk. In the case
of disk arrays, a partition is a virtual array within the whole array.
PASC (Precision Adaptive Sub-Band Coding) – The PASC is very
close to the Layer 1 subset in the MPEG audio specification. The algorithm,
which is used in the DCC system from Philips, provides a 384 kbit/s
data stream.
Password – A combination of letters and/or numbers that only the user
knows. If you specify a password for your account or if you are assigned
a password by the system administrator, you must type it after you type
your login name before the system lets you access files and directories.
PAT (Program Association Table) – Data appearing in packets having
PID code of zero that the MPEG decoder uses to determine which
programs exist in a Transport Stream. PAT points to PMT (program map
table), which, in turn, points to the video, audio, and data content of
each program.
Patch Panel (or Bay, Board, Rack) – A manual method of routing signals
using a panel of receptacles for sources and destinations and wire
jumpers to interconnect them.
Patching – The routing of audio or video from one channel or track in the
sequence to another.
Path Length – The amount of time it takes for a signal to travel through
a piece of equipment or a length of cable. Also called propagation delay.
Pathname – The list of directories that leads you from the root (/)
directory to a specific file or directory in the file system.
Pathological Signal – Used as a stress test for the SDI domain and
contains two parts. The first is an equalizer test producing a sequence of
1 bit high, 19 bits low; the second is a PLL test producing a sequence of
20 bits high, 20 bits low. These sequences are not present throughout the whole
active region of the signal but only occur once per field as the scrambler
attains the required starting condition. This sequence will be maintained
for the full line until it terminates with the EAV sequence.
Pattern (PTN) – In general switcher terms, a pattern is any geometric
shape which grows, rotates or pivots and in so doing removes the foreground video while simultaneously revealing the background video. Strictly
speaking, a pattern is a fully enclosed shape on the screen. This definition
is our internal view, but not consistent with the industry. Typical patterns
are rectangles, diamonds and circles.
Pattern Border – A variable-width border that occurs at the edges of a
wipe pattern. The border is filled with matte video from the border matte generator.
Pattern Extender – The hardware (and software in AVC) package which
expands the standard pattern system to include rotary wipes, and rotating
patterns (and matrix wipes in AVC).
Pattern Limit – See Preset Pattern.
Pattern Modification – The process of altering one or more pattern
parameters. See Modifier.
Pattern Modifier – An electronic circuit which modifies basic patterns by
rotating, moving positionally, adding specular effects to the borders, etc.;
thereby increasing the creative possibilities.
Pattern System – The electronic circuitry which generates the various
patterns (wipes).
Pause Control – A feature of some tape recorders that makes it possible
to stop the movement of tape temporarily without switching the machine
from “play” or “record”.
Pay TV – A system of television in which scrambled signals are distributed
and are unscrambled at the homeowner’s set with a decoder that responds
upon payment of a fee for each program. Pay TV can also refer to a system
where subscribers pay an extra fee for access to a special channel which
might offer sports programs, first-run movies or professional training.
Payload – Refers to the bytes which follow the header byte in a packet.
For example, the payload of a transport stream packet includes the
PES_packet_header and its PES_packet_data_bytes or pointer_field and
PSI sections, or private data. A PES_packet_payload, however, consists
only of PES_packet_data_bytes. The transport stream packet header and
adaptation fields are not payload.
Pay-Per-View (PPV) – A usage-based fee service charged to the
subscriber for viewing a requested single television program.
PC (Printed Circuit or Program Counter)
PC2 (Pattern Compatible Code)
PCB (Printed Circuit Board) – A flat board that holds chips and other
electronic components. The board is made of layers (typically 2 to 10)
that interconnects components via copper pathways. The main printed
circuit board in a system is called a “system board” or “motherboard”,
while smaller ones that plug into the slots in the main board are called
“boards” or “cards”.
PCI (Peripheral Component Interconnect) – In 1992, Intel introduced the
Peripheral Component Interconnect bus specification. PCI, a high-speed interconnection system that runs at processor speed, became compatible with
the VL bus by its second release in 1993. PCI includes a 64-bit data bus
and accommodates 32-bit and 64-bit expansion implementations. PCI is
designed to be processor-independent and is used in most high-speed
multimedia systems. PCI is designed so that all processors, co-processors,
and support chips can be linked together without using glue logic and can
operate up to 100 MHz, and beyond. PCI specifies connector pinout as well
as expansion board architecture.
PCI Bus Mastering – This is the key technology that has allowed under
$1000 video capture cards to achieve such high quality levels. With PCI
bus mastering you get perfect audio sync and sustained throughput levels
over 3 megabits per second.
PCI Slot – Connection slot for a type of expansion bus found in most
newer personal computers. Most video capture cards require this type
of slot.
PCM (Pulse Code Modulation) – Pulsed modulation in which the analog
signal is sampled periodically and each sample is quantized and transmitted as a digital binary code.
PCM Disk – A method of recording digital signals on a disk like a standard
vinyl record.
PCMCIA (Personal Computer Memory Card International
Association) – A standard format for credit-card size expansion
cards used to add storage capacity or peripherals such as modems to
a computer.
PCR (Program Clock Reference) – a) The sample of the encoder clock
count that is sent in the program header to synchronize the decoder clock.
b) The “clock on the wall” time when the video is multiplexed. c) Reference
for the 27 MHz clock regeneration. Transmitted at least every 0.1 sec for
MPEG-2 and ATSC, and at least every 0.04 sec. for DVB.
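In MPEG-2, the transmitted PCR consists of a 33-bit base counting at 90 kHz plus a 9-bit extension counting at 27 MHz (so one base tick equals 300 extension ticks); a sketch of recovering decoder time from the two fields (illustrative helper):

```python
def pcr_seconds(base, extension):
    """Convert a PCR (33-bit base at 90 kHz plus 9-bit extension at 27 MHz)
    into seconds of the decoder's 27 MHz clock."""
    ticks_27mhz = base * 300 + extension   # one 90 kHz tick = 300 x 27 MHz ticks
    return ticks_27mhz / 27_000_000
```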
PCRI (Interpolated Program Clock Reference) – A PCR estimated from
a previous PCR and used to measure jitter.
PCS (Personal Conferencing Specification) – A videoconferencing
technology that uses Intel’s Indeo compression method. It is endorsed by
the Intel-backed Personal Conferencing Working Group (PCWG). Initially
competing against H.320, Intel subsequently announced its videoconferencing products will also be H.320 compliant.
PCWG (Personal Conferencing Work Group) – The PCWG is a work
group formed by PC and telecom manufacturers to enable interoperable
conferencing products. The PCWG released version one of its Personal
Conferencing Specification in December 1994. The specification defines
a common, interoperable architecture for PC-based conferencing and
communications using PC applications and a variety of media types. Since
then they have announced support for H.320 and T.120 standards.
PCX (PC Exchange Format) – A file format common to most bitmap file
format conversions which can be handled by most graphic applications.
PDA (Personal Digital Assistant) – A term for any small mobile handheld device that provides computing and information storage and retrieval
capabilities for personal or business use, often for keeping schedule
calendars and address book information handy.
PDH (Plesiochronous Digital Hierarchy)
PDP (Plasma Display Panel) – Also called “gas discharge display”, a
flat-screen technology that contains an inert ionized gas sandwiched
between x- and y-axis panels. A pixel is selected by charging one x- and
one y-wire, causing the gas in that vicinity to glow. Plasma displays were
initially monochrome, typically orange, but color displays have become
increasingly popular with models 40 inches diagonal and greater being
used for computer displays, high-end home theater and digital TV.
PDU – See Protocol Data Unit.
PE – See Phase Error.
Peak Boost – A boost which is greater at the center frequency than either
above or below it.
Peak Indicator – An indicator that responds to short transient signals,
often used to supplement Recording Level Meters which usually indicate
average signal levels.
Peak Magnetizing Field Strength – The positive or negative limiting
value of the magnetizing field strength.
Peak Value – The maximum positive or negative instantaneous value of
a waveform.
www.tektronix.com/video_audio 171
Video Terms and Acronyms
Peak White – The highest point in the video waveform that the video level
can reach and still stay within specification.
Peaking Equalization – Equalization which is greater at the center
frequency than at either side of center.
Percentage Sync – The ratio, expressed as a percentage, of the
amplitude of the synchronizing signal to the peak-to-peak amplitude of
the picture signal between blanking and reference white level.
Peak-to-Peak (pp) – The amplitude (voltage) difference between the most
positive and the most negative excursions (peaks) of an electrical signal.
Perception, Visual – The interpretation of impressions transmitted from
the retina to the brain in terms of information about a physical world
displayed before the eye. Note: Visual perception involves any one or more
of the following: recognition of the presence of something; identifying it;
locating it in space; noting its relation to other things; identifying its
movement, color, brightness, or form.
Pedding – Raising or lowering the camera while the camera remains level.
Vertical equivalent of dollying.
Perceptual Audio Coding – Audio compression technique that removes
frequencies and harmonics that are outside the range of human hearing.
Pedestal – The offset used to separate the active video from the blanking
level. When a video system uses a pedestal, the black level is above the
blanking level by a small amount. When a video system doesn’t use a
pedestal, the black and blanking levels are the same. (M) NTSC uses a
pedestal set at +7.5 IRE, (B, D, G, H, I) PAL does not.
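The pedestal offset can be put in voltage terms with a small calculation. This is a rough sketch under a stated assumption: the 7.143 mV-per-IRE scale comes from the common convention of a 1 V peak-to-peak video signal spanning 140 IRE units.

```python
# Illustrative conversion of the black-level pedestal to millivolts above
# blanking. Assumes the common 1 V p-p / 140 IRE convention; values are
# for illustration, not a normative specification.

MV_PER_IRE = 1000.0 / 140.0  # ~7.143 mV per IRE unit

def black_level_mv(pedestal_ire):
    """Black level above blanking: 7.5 IRE for (M) NTSC, 0 for PAL."""
    return pedestal_ire * MV_PER_IRE

ntsc_black = black_level_mv(7.5)  # ~53.6 mV above blanking
pal_black = black_level_mv(0.0)   # black sits at the blanking level
```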
Perceptual Coding – Lossy compression techniques based on the study
of human perception. Perceptual coding systems identify and remove information that is least likely to be missed by the average human observer.
Peak-Reading Meter – A type of Recording Level Meter that responds to
short transient signals.
Pedestal Level – This term is obsolete; “blanking level” is preferred.
PEG – Public, educational, and governmental access channels.
Penetration – The number of homes actually served by cable in a given
area, expressed as a percentage of homes passed.
Premium Services – Individual channels, such as HBO and Showtime,
which are available to cable customers for a monthly subscription fee.
Pel (Picture Element) – See Pixel.
Pel Aspect Ratio – The ratio of the nominal vertical height of a pel on
the display to its nominal horizontal width.
Perceived Resolution – The apparent resolution of a display from the
observer's point of view, based on viewing distance, viewing conditions,
and physical resolution of the display.
Percent SD – Short time distortion amplitudes are not generally quoted
directly as a percent of the transition amplitude but rather are expressed
in terms of an amplitude weighting system which yields “percent-SD”. This
weighting is necessary because the amount of distortion depends not only
on the distortion amplitude but also on the time the distortion occurs with
respect to the transition. The equation for NTSC systems is SD = a·t^0.67,
where “a” is the lobe amplitude and “t” is the time between transitions
and distortions. In practice, screen graticules eliminate the need for
calculations. Refer to the figure below. Also see the discussion on Short
Time Distortions.
[Graticule figure: the outer and inner graticule markings correspond to SD = 5% and SD = 25% limits on the maximum graticule.]
Perforations – Regularly spaced and accurately shaped holes which
are punched throughout the length of a motion picture film. These holes
engage the teeth of various sprockets and pins by which the film is
advanced and positioned as it travels through cameras, processing
machines and projectors.
Periodic Noise – The signal-to-periodic noise ratio is the ratio in decibels,
of the nominal amplitude of the luminance signal (100 IRE units) to the
peak-to-peak amplitude of the noise. Different performance objectives are
sometimes specified for periodic noise (single frequency) between 1 kHz
and the upper limit of the video frequency band and the power supply hum,
including low order harmonics.
Peripheral – Any interface (hardware) device connected to a computer
that adds more functionality, such as a tape drive. Also, a mass storage
or communications device connected to a computer. See also External
Devices and Internal Drives.
Perm’ed – Magnetized to a level which cannot be removed with a handheld degausser.
Permanent Elongation – The percentage elongation remaining in a tape
or length of base film after a given load, applied for a given time, has been
removed and the specimen allowed to hang free, or lightly loaded, for a
further period.
Permanent Virtual Circuit (PVC) – A PVC in a network does not have a
fixed physical path but is defined in a static manner with static parameters.
Perpendicular Direction – Perpendicular to the plane of the tape.
Perceptual Weighting – The technique (and to some extent, art) of taking
advantage of the properties of the human auditory or visual system.
Persistence Indicator (PI) – Indicates if an object is persistent.
Persistence Objects (PO) – Objects that should be saved at the decoder
for use at a later time. The life of a PO is given by an expiration time
stamp (ETS). A PO is not available to the decoder after ETS runs out. ETS
is given in milliseconds. When a PO is to be used at a later time in a
scene, only the corresponding composition information needs to be sent to
the AV terminal.
Perspective – The artistic method of achieving a three-dimensional look
on a two-dimensional plane. The technique or process of representing, on
a plane or curved surface, the spatial relation of objects as they might
appear to the eye, giving a distinct impression of distance.
Perspective (Menu) – The 3D function that enables changing the skew
and perspective of an image. Skew X: Uses the X axis to slant the
image right or left to change the image geometry into a parallelogram.
Perspective: Uses the Z axis to change the point of view (perspective) of
an image, to give it a three-dimensional appearance.
Perspective Projection – When perspective is used, a vanishing point is
used. With perspective, parallel lines receding into the screen appear to
converge. To make this happen the process of converting a 3D coordinate
(x, y, z) into its 2D perspective on the screen requires dividing the original
x and y coordinates by an amount proportional to the original z value. Thus,
the larger z is (the farther away the point lies), the closer together points
on the parallel lines appear on the screen.
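The divide-by-z step described above can be sketched as a small function. This is a minimal illustration, not any particular graphics API; the viewer-to-screen distance parameter d is an assumed convention of the example.

```python
# Minimal sketch of the perspective divide: screen coordinates are the
# original x and y divided by a term proportional to z, so distant points
# crowd toward the vanishing point. The screen-distance parameter d is an
# illustrative convention.

def project(x, y, z, d=1.0):
    """Project a 3D point onto the z = d screen plane through the origin."""
    if z <= 0:
        raise ValueError("point must be in front of the viewer (z > 0)")
    return (d * x / z, d * y / z)

# Two points on a line parallel to the z axis converge as z grows:
near = project(2.0, 1.0, 1.0)    # stays at (2.0, 1.0)
far = project(2.0, 1.0, 10.0)    # pulled in toward the center
```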
Perturbation – A method to add noise so as to enhance the details of a
PES (Packetized Elementary Stream) – Video and audio data packets
and ancillary data of undefined length.
PES Header – Ancillary data for an elementary stream.
PES Packet – The data structure used to carry elementary stream data.
It consists of a packet header followed by PES packet payload.
PES Packet Header – The leading fields in a PES packet up to but not
including the PES_packet_data_byte fields where the stream is not a
padding stream. In the case of a padding stream, the PES packet header
is defined as the leading fields in a PES packet up to but not including the
padding_byte fields.
PES Stream – A PES stream consists of PES packets, all of whose
payloads consist of data from a single elementary stream, and all of
which have the same stream_id.
Petabyte – 1000 terabytes, or 1 million gigabytes.
P-Frame (Predicted Frame) – One of the three types of frames used in
the coded MPEG-2 signal. The frame in an MPEG sequence created by
predicting the difference between the current frame and the previous one.
P-frames contain much less data than I-frames and so help toward
the low data rates that can be achieved with the MPEG signal. To see the
original picture corresponding to a P-frame, all frames back to the
preceding I-frame of the MPEG-2 GOP have to be decoded.
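The prediction idea behind P-frames can be sketched in miniature. This is a heavily simplified illustration that ignores motion compensation, transforms, and quantization; real MPEG-2 coding is far more involved, and the frame representation here (flat lists of sample values) is an assumption of the example.

```python
# Toy sketch of P-frame prediction: the encoder sends only the difference
# from the previous decoded frame, and the decoder adds it back. Motion
# compensation, DCT, and quantization are omitted for clarity.

def encode_p_frame(current, previous):
    """Residual = current frame minus the prediction (previous frame)."""
    return [c - p for c, p in zip(current, previous)]

def decode_p_frame(residual, previous):
    """Reconstruct by adding the residual to the previously decoded frame."""
    return [r + p for r, p in zip(residual, previous)]

i_frame = [10, 20, 30, 40]                 # decoded reference (an I-frame)
source = [12, 20, 29, 40]                  # next source frame
residual = encode_p_frame(source, i_frame) # mostly zeros -> fewer bits
restored = decode_p_frame(residual, i_frame)
```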
PGM – See Program.
Phantom Matrix – That portion of the switcher electronic crosspoints
which are not controlled by a row of push buttons on the console. See Bus.
Phantom Points – See Ghost Point.
Phantom Power – Electricity provided by some broadcast and
industrial/professional quality audio mixers for use by condenser microphones connected to the audio mixer. Some microphones require phantom
power, and must be connected to audio mixers that provide it.
Phase – a) A measure of the time delay between points of the same
relative amplitude (e.g., zero crossings) on two separate waveforms.
b) A stage in a cycle. c) The relationship between two periodic signals
or processes. d) The number of cycles by which one wave precedes or follows the
cycles of another wave of the same frequency. e) A fraction of a wave
cycle measured from a fixed point on the wave.
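Definition (a) above, phase as a time delay between like points on two waveforms, can be demonstrated numerically. This is a hedged sketch assuming the signal frequency is known and an integer number of cycles is sampled; it uses a simple quadrature (I/Q) correlation, one of several possible measurement methods.

```python
# Sketch of measuring the phase of a sampled sine of known frequency by
# correlating against quadrature references; subtracting two estimates
# gives the relative phase between two signals. Assumes an integer number
# of cycles in the sample window.

import math

def estimate_phase(samples, freq, sample_rate):
    """Return the phase (radians) of a signal sin(2*pi*freq*t + phase)."""
    i = q = 0.0
    for n, s in enumerate(samples):
        t = n / sample_rate
        i += s * math.sin(2 * math.pi * freq * t)  # in-phase correlation
        q += s * math.cos(2 * math.pi * freq * t)  # quadrature correlation
    return math.atan2(q, i)

fs, f = 1000.0, 50.0
n = int(fs / f)  # exactly one cycle of samples
a = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]
b = [math.sin(2 * math.pi * f * k / fs + math.pi / 4) for k in range(n)]
shift = estimate_phase(b, f, fs) - estimate_phase(a, f, fs)  # ~pi/4 rad
```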
Phase Adjust – The method of adjusting the color in a (M) NTSC video
signal. The phase of the chroma information is adjusted relative to the
color burst and affects the hue of the picture.
Phase Alternate Line (PAL) – a) European video standard with image
format 4:3 aspect ratio, 625 lines, 50 Hz and 4 MHz video bandwidth with
a total 8 MHz of video channel width. PAL uses YUV. The Y component
represents Luminance. The U component represents B-Y. The V component
represents R-Y. The V component of burst is inverted in phase from one
line to the next in order to minimize hue errors that may occur in color
transmission. b) The color television transmission standard used in Europe
and other parts of the world. This standard uses a subcarrier which is
alternated 90 degrees in phase from one line to the next to minimize hue
errors in color transmission. PAL-I uses a 4.43361875 MHz subcarrier. A single
frame (picture) in this standard consists of 625 scanning lines. One frame
is produced every 1/25 of a second. PAL-M uses a 3.57561149 MHz
subcarrier and 525 scanning lines. One frame is produced every 1/30 of
a second. c) The television and video standard in use in most of Europe.
Consists of 625 horizontal lines at a field rate of 50 fields per second.
(Two fields equals one complete frame.) Only 576 of these lines are used
for picture. The rest are used for sync or extra information such as VITC
and Closed Captioning.
Phase Alternating Line Encoding (PALE) – A method of encoding the
PCM NTSC signal by reversing the encoding phase on alternate lines to
align the code words vertically.
Phase Change – A technology for rewritable optical discs using a physical
effect in which a laser beam heats a recording material to reversibly
change an area from an amorphous state to a crystalline state, or vice
versa. Continuous heat just below the melting point creates the crystalline
state (an erasure), while high heat followed by rapid cooling creates the
amorphous state (a mark).
Phase Comparator – Circuit used in a phase locked loop to tell how
closely the phase locked loop reference signal and the PLL output are
in phase with each other. If the two signals are not in phase, the Phase
Comparator generates an error signal that adjusts the PLL frequency
output so that it is in phase with the reference signal.
Phase Distortion – A picture defect caused by unequal delay (phase
shifting) of different frequency components within the signal as they pass
through different impedance elements – filters, amplifiers, ionosphere
variations, etc. The defect in the picture is “fringing”, like diffraction rings
at edges where the contrast changes abruptly.
Phase Error – a) A picture defect caused by the incorrect relative timing
of a signal in relation to another signal. b) A change in the color subcarrier
signal which moves its timing out of phase, i.e., it occurs at a different
instant from the original signal. Since color information is encoded in a
video signal as a relation between the color subcarrier and the color burst
phase, a deviation in the color subcarrier phase results in a change in the
image’s hue.
Phase Shift – The movement of one signal’s phase in relation to another signal.
Phase-Locked Loop – The phase locked loop (PLL) is central to the
operation of frequency and phase stable circuitry. The function of the PLL
is to provide a frequency/phase stable signal that is based on an input
reference signal.
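The loop described above (phase comparator, loop filter, controlled oscillator) can be sketched as a toy software PLL. This is an illustrative model only: the gains, sample rate, and starting frequency are invented for the example, and a real PLL would be designed around its specific reference and noise conditions.

```python
# Toy second-order software PLL: a phase detector compares the reference
# phase with a numerically controlled oscillator (NCO), and the filtered
# error steers the NCO until it tracks the reference. Gains and rates are
# illustrative, not from any particular design.

import math

def run_pll(ref_freq_hz, guess_hz=90.0, sample_rate=10_000.0,
            steps=2000, kp=0.2, ki=0.01):
    """Lock an NCO to the reference; return the tracked frequency in Hz."""
    w0 = 2 * math.pi * guess_hz / sample_rate    # NCO step, rad/sample
    wr = 2 * math.pi * ref_freq_hz / sample_rate
    theta = ref = integ = 0.0
    for _ in range(steps):
        ref += wr
        # phase detector: phase difference wrapped into (-pi, pi]
        err = math.atan2(math.sin(ref - theta), math.cos(ref - theta))
        integ += ki * err                         # loop-filter integrator
        theta += w0 + kp * err + integ            # steer the NCO phase
    return (w0 + integ) * sample_rate / (2 * math.pi)

locked_hz = run_pll(100.0)  # converges from the 90 Hz guess to ~100 Hz
```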
Physical Sector Number – Serial number assigned to physical sectors on
a DVD disc. Sequentially incremented numbers are assigned from the start
of the Lead In Area to the end of the Lead Out Area, with the first sector
of the Data Area numbered 30000h.
Phasing – Adjusting the delay of a video signal to a reference video signal
to ensure they are synchronous. This includes horizontal and subcarrier
timing. Also called timing.
PIC – A standard file format for animation files.
PHL – Abbreviation for Physical Layer.
Pick-Up Pattern – The description of the directionality of a microphone.
The two prominent microphone pick-up patterns are omnidirectional and
unidirectional.
Phon – A unit of equal loudness for all audio frequencies. Phons are
related to dB SPL (re 0.0002 microbar) by the Fletcher-Munson curves.
For example, a loudness level of 40 phons would require 40 dB SPL at
1 kHz and 52 dB at 10 kHz.
Pick-Off Jitter – Jitter is a random aberration in the time period due to
noise or time base instability. Pick-off means sample point.
Phong – A type of rendering (shadows, environmental reflections, basic
transparency, and textures).
Pickup Tube – An electron-beam tube used in a television camera where
an electron current or a charge-density image is formed from an optical
image and scanned in a predetermined sequence to provide an electrical signal.
Phong Shading – A more realistic and time-consuming type of shading,
Phong shading actually calculates specular r