This chapter introduces you to the subject of TV technology. It embodies almost
all the principles and circuits covered elsewhere in this book. Studying TV is an
excellent review of communication fundamentals, including modulation and multiplexing, transmitters and receivers, antennas and transmission lines, and even
digital techniques.
This chapter covers the basic NTSC system and TV receiver. The NTSC is the
National Television Standards Committee, the group that developed the standards
for analog TV and color TV in the 1950s. It also covers cable TV, satellite TV, and
the new digital and high-definition TV systems as defined by the Advanced
Television Standards Committee (ATSC).
After completing this chapter, you will be able to:
Describe and give specifications for a complete TV signal including all its
individual components.
Explain the process used by a TV camera to convert a visual scene to a
video signal.
Draw a simplified block diagram of a TV transmitter and a TV receiver, showing the main components and the signal flow.
Draw a block diagram of a cable TV system.
Name all the elements of and explain the operation of a cable TV system.
Explain the operation of a DBS TV receiver.
Define digital TV (DTV) and high-definition TV (HDTV), and state the basic
specifications of HDTV receivers.
23-1 TV Signal
A considerable amount of intelligence is contained in a complete TV signal. As a result,
the signal occupies a significant amount of spectrum space. As indicated earlier, the TV
signal consists of two main parts: the sound and the picture. But it is far more complex
than that. The sound today is usually stereo, and the picture carries color information as
well as the synchronizing signals that keep the receiver in step with the transmitter.
Signal Bandwidth
The complete signal bandwidth of a TV signal is shown in Fig. 23-1. The entire TV
signal occupies a channel in the spectrum with a bandwidth of 6 MHz. There are two
carriers, one each for the picture and the sound.
Audio Signal. The sound carrier is at the upper end of the spectrum. Frequency modulation is used to impress the sound signal on the carrier. The audio bandwidth of the
signal is 50 Hz to 15 kHz. The maximum permitted frequency deviation is 25 kHz, considerably less than the deviation permitted by conventional FM broadcasting. As a result,
a TV sound signal occupies somewhat less bandwidth in the spectrum than a standard
FM broadcast station. Stereo sound is also available in TV, and the multiplexing method
used to transmit two channels of sound information is virtually identical to that used in
stereo transmission for FM broadcasting.
Video Signal. The picture information is transmitted on a separate carrier located
4.5 MHz lower in frequency than the sound carrier (refer again to Fig. 23-1). The video
signal derived from a camera is used to amplitude-modulate the picture carrier. Different methods of modulation are used for the sound and picture information so that there
is less interference between the picture and sound signals. Further, amplitude modulation of the carrier takes up less bandwidth in the spectrum, which is important when
a modulating signal with high-frequency content such as video is to be transmitted.
Note in Fig. 23-1 that vestigial sideband AM is used. The full upper sidebands of the
picture information are transmitted, but a major portion of the lower sidebands is suppressed to conserve spectrum space. Only a vestige of the lower sideband is transmitted.
The color information in a picture is transmitted by way of frequency-division multiplexing techniques. Two color signals derived from the camera are used to modulate a
3.58-MHz subcarrier which, in turn, modulates the picture carrier along with the main
video information. The color subcarriers use double-sideband suppressed carrier AM.
The video signal can contain frequency components up to about 4.2 MHz. Therefore, if both sidebands were transmitted simultaneously, the picture signal would occupy
Figure 23-1  Spectrum of a broadcast TV signal.
8.4 MHz. The vestigial sideband transmission reduces this excessive bandwidth. The total
bandwidth allocated to a TV signal is 6 MHz.
TV Spectrum Allocation. Because a TV signal occupies so much bandwidth, it must
be transmitted in a very high-frequency portion of the spectrum. TV signals are assigned to
frequencies in the VHF and UHF range. In the United States, TV stations use the frequency
range between 54 and 806 MHz. This portion of the spectrum is divided into sixty-eight
6-MHz channels which are assigned frequencies (Fig. 23-2). Channels 2 through 6 occupy
the frequency range from 54 to 88 MHz. The standard FM radio broadcast band occupies
the 88- to 108-MHz range. Aircraft, amateur radio, and marine and mobile radio communication services occupy the frequency spectrum from approximately 118 to 173 MHz.
Additional TV channels occupy the space between 470 and 806 MHz. Figure 23-2 shows
the frequency range of each TV channel.
Figure 23-2  VHF and UHF TV channel frequency assignments.
In a TV signal, the sound is frequency-modulated, and the picture is amplitude-modulated. Different methods of modulation are used to minimize interference between the picture and sound signals.
To find the exact frequencies of the picture and sound carriers, use Fig. 23-2 and
the spectrum outline in Fig. 23-1. To compute the picture carrier, add 1.25 MHz to the
lower frequency of the range given in Fig. 23-2. For example, for channel 6, the lower frequency is 82 MHz. The picture carrier is 82 + 1.25, or 83.25, MHz. The sound carrier
is 4.5 MHz higher, or 83.25 + 4.5, that is, 87.75, MHz.
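This two-step rule can be captured in a small helper. The function name is mine; the 1.25-MHz and 4.5-MHz offsets are the NTSC values described above:

```python
def ntsc_carriers(channel_lower_mhz):
    """Return (picture, sound) carrier frequencies in MHz for a 6-MHz
    NTSC channel whose lower edge is channel_lower_mhz."""
    picture = channel_lower_mhz + 1.25   # picture carrier: 1.25 MHz above lower edge
    sound = picture + 4.5                # sound carrier: 4.5 MHz above picture carrier
    return picture, sound

print(ntsc_carriers(82))    # channel 6 (82-88 MHz): (83.25, 87.75)
```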
It is important to point out that although TV is still transmitted by radio waves, most
viewers get their TV signals via a cable. More than 80 percent of U.S. homes have cable
TV that carries the “over-the-air” TV channels as well as premium and specialized channels of programming. In the near future, the FCC will auction off the upper UHF channels (53–69) to other services such as cell phones and wireless LANs and MANs. Some
cell phone TVs will use these channels.
Generating the Video Signal
A given scene is divided into segments that can be transmitted serially over a period of time, because any scene contains so much light information that it would be impossible for an electronic device to perform a simultaneous conversion of all of it.
The video signal is most often generated by a TV camera, a very sophisticated electronic
device that incorporates lenses and light-sensitive transducers to convert the scene or
object to be viewed to an electric signal that can be used to modulate a carrier. All visible scenes and objects are simply light that has been reflected and absorbed and then
transmitted to our eyes. It is the purpose of the camera to take the light intensity and
color details in a scene and convert them to an electric signal.
To do this, the scene to be transmitted is collected and focused by a lens upon a
light-sensitive imaging device. Both vacuum tube and semiconductor devices are used
for converting the light information in the scene to an electric signal. Some examples
are the vidicon tube and the charge-coupled device (CCD) so widely used in camcorders
and all modern TV cameras.
The scene is divided into smaller segments that can be transmitted serially over a
period of time. Again, it is the job of the camera to subdivide the scene in an orderly
manner so that an acceptable signal is developed. This process is known as scanning.
Example 23-1
Compute the picture and sound carrier frequencies for UHF TV channel 39.
1. From Fig. 23-2, channel 39 extends from 620 to 626 MHz.
2. The picture carrier is 1.25 MHz above the lower band limit, or
1.25 + 620 = 621.25 MHz
3. The sound carrier is 4.5 MHz above the picture carrier:
4.5 + 621.25 = 625.75 MHz
Principles of Scanning. Scanning is a technique that divides a rectangular scene
into individual lines. The standard TV scene dimensions have an aspect ratio of 4:3; that
is, the scene width is 4 units for every 3 units of height. To create a picture, the scene is
subdivided into many fine horizontal lines called scan lines. Each line represents a very
narrow portion of light variations in the scene. The greater the number of scan lines, the
higher the resolution and the greater the detail that can be observed. U.S. TV standards
call for the scene to be divided into a maximum of 525 horizontal lines.
Figure 23-3 is a simplified drawing of the scanning process. In this example, the scene
is a large black letter F on a white background. The task of the TV camera is to convert this
Figure 23-3  Simplified explanation of scanning.
scene to an electric signal. The camera accomplishes this by transmitting a voltage of 1 V
for black and 0 V for white. The scene is divided into 15 scan lines numbered 0 through
14. The scene is focused on the light-sensitive area of a vidicon tube or imaging CCD
which scans the scene one line at a time, transmitting the light variations along that line as
voltage levels. Figure 23-3 shows the light variations along several of the lines. Where the
white background is being scanned, a 0-V signal occurs. When a black picture element is
encountered, a 1-V level is transmitted. The electric signals derived from each scan line are
referred to as the video signal. They are transmitted serially one after the other until the
entire scene has been sent (see Fig. 23-4). This is exactly how a standard TV picture is
developed and transmitted.
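The line-by-line conversion can be sketched in a few lines of Python. This is a toy model: a made-up 5 × 5 grid rather than the 15 scan lines of Fig. 23-3, with 1 representing the 1-V black level and 0 the white level:

```python
# Hypothetical 5 x 5 scene: a black letter F on a white background.
scene = [
    [0, 1, 1, 1, 0],   # top bar of the F
    [0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],   # middle bar
    [0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],   # bottom of the vertical stroke
]

def scan(scene):
    """Read the scene one horizontal line at a time, left to right,
    and emit the voltage levels serially -- the video signal."""
    video = []
    for line in scene:
        video.extend(line)
    return video

video = scan(scene)
print(video[0:5])   # first scan line: [0, 1, 1, 1, 0]
```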
Since the scene contains colors, there are different levels of light along each scan
line. This information is transmitted as different shades of gray between black and white.
Shades of gray are represented by some voltage level between the 0- and 1-V extremes
represented by white and black. The resulting signal is known as the brightness, or luminance, signal and is usually designated by the letter Y.
A more detailed illustration of the scanning process is given in Fig. 23-5. The scene is
scanned twice. One complete scanning of the scene is called a field and contains 262½
Figure 23-4  The scan line voltages are transmitted serially. These correspond to the scanned letter F in Fig. 23-3.
Figure 23-5  Interlaced scanning is used to minimize flicker.
lines. The entire field is scanned in 1/60 s for a 60-Hz field rate. In color TV the field
rate is 59.94 Hz. Then the scene is scanned a second time, again using 262½
lines. This second field is scanned in such a way that its scan lines fall between those
of the first field. This produces what is known as interlaced scanning, with a total of
2 × 262½ = 525 lines. In practice, only about 480 lines show on the picture tube
screen. Two interlaced fields produce a complete frame of video. With a field period of
1/60 s, two fields produce a frame period of 1/30 s, for a frame rate of 30 Hz. The frame rate in color TV
is one-half the field rate, or 29.97 Hz. Interlaced scanning is used to reduce flicker, which
is annoying to the eye. This rate is also fast enough that the human eye cannot detect
individual scan lines and therefore sees a stable picture.
The rate of occurrence of the horizontal scan lines is 15,750 Hz for monochrome,
or black and white, TV and 15,734 Hz for color TV. This means that it takes about
1/15,734 s, or 63.6 µs, to trace out one horizontal scan line.
At the TV receiver, the picture tube is scanned in step with the transmitter to accurately reproduce the picture. To ensure that the receiver stays exactly in synchronization
with the transmitter, special horizontal and vertical sync pulses are added to and transmitted with the video signal (see Fig. 23-6). After one line has been scanned, a horizontal
blanking pulse comes along. At the receiver, the blanking pulse is used to cut off the electron beam in the picture tube during the time the beam must retrace from right to left to
get ready for the next left-to-right scan line. The horizontal sync pulse is used at the receiver
Figure 23-6  Sync pulses are used to keep the receiver in step with the transmitter.
to keep the sweep circuits that drive the picture tube in step with the transmitted signal.
The width of the horizontal blanking pulse is about 10 µs. Since the total horizontal period
is 63.6 µs, only about 53.5 µs is devoted to the video signal.
At the end of each field, the scanning must retrace from bottom to top of the scene
so that the next field can be scanned. This is initiated by the vertical blanking and sync
pulses. The entire vertical pulse blacks out the picture tube during the vertical retrace.
The pulses on top of the vertical blanking pulse are the horizontal sync pulses that must
continue to keep the horizontal sweep in sync during the vertical retrace. The equalizing pulses (not shown in Fig. 23-6) that occur during the vertical retrace period help
synchronize the half scan lines in each field. Approximately 30 to 40 scan lines are used
up during the vertical blanking interval. Therefore, only 480 to 495 lines of actual video
are shown on the screen.
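The timing budget above can be checked with a quick calculation; the numbers come directly from the text, and this is just arithmetic, not a standard:

```python
# Horizontal timing: one color scan line takes 1/15,734 s.
line_period_us = 1e6 / 15_734          # about 63.6 microseconds
active_video_us = line_period_us - 10  # minus the ~10-us blanking pulse
print(round(line_period_us, 1))        # 63.6
print(round(active_video_us, 1))       # 53.6

# Vertical timing: 30 to 40 of the 525 lines are consumed by vertical
# blanking, leaving roughly 480 to 495 lines of actual video on screen.
print(525 - 30)                        # 495
```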
Relationship between Resolution and Bandwidth. Scanning a scene or picture is a kind of sampling process. Consider the scene to be a continuous variation of
light intensities and colors. To capture this scene and transmit it electronically,
the light intensity and color variations must be converted to electric signals. This conversion is accomplished through a process called scanning, whereby the picture is divided
into many fine horizontal lines next to one another.
The resolution of the picture refers to the amount of detail that can be shown.
Pictures with high resolution have excellent definition, or distinction of detail, and the
pictures appear to be clearly focused. A picture lacking detail looks softer, or somewhat
out of focus. The bandwidth of a video system determines the resolution. The greater
the bandwidth, the greater the definition and detail.
Resolution in a video system is measured in terms of the number of lines defined
within the bounds of the picture. For example, the horizontal resolution RH is given
as the maximum number of alternating black-and-white vertical lines that can be distinguished. Assume closely spaced vertical black-and-white lines of the same width.
When such lines are scanned, they will be converted to a square wave (50 percent duty
cycle). One cycle, or period, t of this square wave is the time for one black line and
one white line. If the lines are very thin, the resulting period will be short and the frequency will be high. If the lines are wide, the period will be longer and the resulting
frequency lower.
The National Television Standards Committee (NTSC) system restricts the bandwidth in the United States to 4.2 MHz. This translates to a period of 0.238 µs, or 238 ns.
The width of a line is one-half this value, or 0.238/2 µs, or 0.119 µs. Remember that
the horizontal sweep interval is about 63.6 µs. About 10 µs of this interval is taken up
by the horizontal blanking interval, leaving 53.5 µs for the displayed video. With 0.119 µs per line, one horizontal scan line can resolve, or
contain, up to 53.5/0.119, or 449.5, vertical lines. Therefore, the approximate horizontal
resolution RH is about 450 lines.
Figure 23-7  Creating other colors with red, green, and blue light.
The vertical resolution RV is the number of horizontal lines that can be distinguished.
Only about 480 to 495 horizontal lines are shown on the screen. The vertical resolution
is about 0.7 times the number of actual lines NL:
RV = 0.7NL
If 485 lines are shown, the vertical resolution is 0.7 × 485, or 340, lines. In practice,
the bandwidth varies, as will the horizontal resolution. The lowest is about 340, but it
could be as high as 640. A typical figure is 427.
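As a sanity check, the RH and RV figures above can be reproduced numerically, using the 53.5-µs active line time and 485 displayed lines from the text:

```python
BW_HZ = 4.2e6                        # NTSC video bandwidth
ACTIVE_LINE_US = 53.5                # displayed portion of one scan line

period_us = 1e6 / BW_HZ              # one black + one white line: ~0.238 us
line_width_us = period_us / 2        # ~0.119 us per resolvable vertical line
RH = ACTIVE_LINE_US / line_width_us  # horizontal resolution, ~450 lines
RV = 0.7 * 485                       # vertical resolution, ~340 lines
print(round(RH), round(RV))
```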
Color Signal Generation. The video signal as described so far contains the video
or luminance information, which is a black-and-white (B&W) version of the scene. This
is combined with the sync pulses. Now the color detail in the scene must somehow be
represented by an electric signal. This is done by dividing the light in each scan line into
three separate signals, each representing one of the three basic colors, red, green, or blue.
It is a principle of physics that any color can be made by mixing some combination of
the three primary light colors (see Fig. 23-7).
In the same way, the light in any scene can be divided into its three basic color
components by passing the light through red, green, and blue filters. This is done in a
color TV camera, which is really three cameras in one (see Fig. 23-8). The lens focuses
the scene on three separate light-sensitive devices such as a vidicon tube or an imaging CCD by way of a series of mirrors and beam splitters. The red light in the scene
passes through the red filter, the green through the green filter, and the blue through
the blue filter. The result is the generation of three simultaneous signals (R, G, and B)
during the scanning process by the light-sensitive imaging devices.
Figure 23-8  How the camera generates the color signals.
Example 23-2
The European PAL TV system uses 625 interlaced scan lines occurring at a rate of
25 frames per second. The horizontal scanning rate is 15,625 Hz. About 80 percent of
one complete horizontal scan is devoted to the displayed video, and 20 percent to the
horizontal blanking. Assume that the horizontal resolution RH is about 512 lines. Only
about 580 horizontal scan lines are displayed on the screen. Calculate (1) the bandwidth of the system and (2) the vertical resolution.
1. The time for one horizontal scan is
1/15,625 = 64 µs
About 80 percent of this is devoted to the video, or
0.8 × 64 = 51.2 µs
If the horizontal resolution is 512 lines, then the time for one line is
51.2/512 = 0.1 µs
Two lines equals 1 period, or
2 × 0.1 = 0.2 µs
Converting this to frequency gives the approximate bandwidth BW:
BW = 1/(0.2 × 10⁻⁶) = 5 MHz
2. RV = 0.8NL = 0.8 × 580 lines = 464 lines
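Example 23-2 can be verified in a few lines of Python, using the same numbers as the example:

```python
h_rate_hz = 15_625                    # PAL horizontal scanning rate
line_us = 1e6 / h_rate_hz             # 64 us per horizontal scan
active_us = 0.8 * line_us             # 80% is displayed video: 51.2 us
line_width_us = active_us / 512       # 512-line horizontal resolution: 0.1 us
period_s = 2 * line_width_us * 1e-6   # two lines = one period: 0.2 us
bw_mhz = 1 / period_s / 1e6           # approximate bandwidth
rv = 0.8 * 580                        # vertical resolution
print(round(bw_mhz, 1), round(rv))    # 5.0 464
```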
The R, G, and B signals also contain the basic brightness or luminance information.
If the color signals are mixed in the correct proportion, the result is the standard B&W
video or luminance Y signal. The Y signal is generated by scaling each color signal with
a tapped voltage divider and adding the signals, as shown in Fig. 23-9(a). Note that the
Y signal is made up of 30 percent red, 59 percent green, and 11 percent blue. The resulting Y signal is what a B&W TV set will see.
The color signals must also be transmitted along with the luminance information in
the same bandwidth allotted to the TV signal. This is done by a frequency-division
multiplexing technique shown in Fig. 23-9(a). Instead of all three color signals being
transmitted, they are combined into I and Q color signals. These signals are made up of
different proportions of the R, G, and B signals according to the following specifications:
I = 60 percent red, −28 percent green, −32 percent blue
Q = 21 percent red, −52 percent green, 31 percent blue
The minus signs in the above expressions mean that the color signal has been phase-inverted before the mixing process.
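The mixing can be written directly as a small matrix of coefficients. These are the standard NTSC proportions; the negative coefficients correspond to the phase inversions just described:

```python
def rgb_to_yiq(r, g, b):
    """Mix R, G, B into the luminance (Y) and chrominance (I, Q)
    signals using the standard NTSC proportions."""
    y = 0.30 * r + 0.59 * g + 0.11 * b   # what a B&W set displays
    i = 0.60 * r - 0.28 * g - 0.32 * b   # minus = phase-inverted before mixing
    q = 0.21 * r - 0.52 * g + 0.31 * b
    return y, i, q

# Pure white (equal R, G, B) yields full luminance and no chrominance,
# which is why a B&W receiver can ignore I and Q entirely.
y, i, q = rgb_to_yiq(1.0, 1.0, 1.0)
```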
The I and Q signals are referred to as the chrominance signals. To transmit them,
they are phase-encoded; i.e., they are used to modulate a subcarrier which is in
turn mixed with the luminance signal to form a complete, or composite, video signal.
These I and Q signals are fed to balanced modulators along with 3.58-MHz (actually
3.579545-MHz) subcarrier signals that are 90° out of phase [again refer to Fig. 23-9(a)].
This type of modulation is referred to as a quadrature modulation, where quadrature
means a 90° phase shift. The output of each balanced modulator is a double-sideband
Figure 23-9  (a) How the NTSC composite video signal is generated. (b) The chrominance signals are phase-encoded.
suppressed carrier AM signal. The resulting two signals are added to the Y signal to create the composite video signal. The combined signal modulates the picture carrier. The
resulting signal is called the NTSC composite video signal. This signal and its sidebands
are within the 6-MHz TV signal bandwidth.
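A sampled sketch of the quadrature modulation follows, under the simplifying assumptions of ideal multipliers and no sideband filtering; the signal values in the usage line are made up:

```python
import math

F_SC = 3.579545e6   # NTSC color subcarrier frequency, Hz

def composite(y, i, q, t):
    """One sample of the composite video signal: luminance plus the
    outputs of two balanced modulators fed with subcarriers 90 degrees
    apart (quadrature modulation, both DSB suppressed carrier)."""
    w = 2 * math.pi * F_SC * t
    return y + i * math.cos(w) + q * math.sin(w)

# At t = 0 the cosine carrier is at its peak and the sine carrier is
# zero, so the chrominance contribution is just I:
print(composite(0.5, 0.2, 0.1, 0.0))   # 0.7
```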
The I and Q color signals are also called the R − Y and the B − Y signals because
the combination of the three color signals produces the effect of subtracting Y from the
R or B signals. The phase of these signals with respect to the original 3.58-MHz subcarrier signal determines the color to be seen. The color tint can be varied at the receiver
so that the viewer sees the correct colors. In many TV sets, an extra phase shift of 57°
is inserted to ensure that maximum color detail is seen. The resulting I and Q signals
are shown as phasors in Fig. 23-9(b). There is still 90° between the I and Q signals, but
their position is moved 57°. The reason for this extra phase shift is that the eye is more
sensitive to the color orange. If the I signal is adjusted to the orange phase position,
Figure 23-10  The transmitted video and color signal spectrum. (Key values from the figure: 6-MHz total signal bandwidth; 3.58-MHz suppressed color subcarrier; I-signal bandwidth 1.5 MHz; Q-signal bandwidth 0.5 MHz.)
better details will be seen. The I signal is transmitted with more bandwidth than the
Q signal, as can be seen by the response of the low-pass filters at the outputs of the I and
Q mixers in Fig. 23-9(a).
The complete spectrum of the transmitted color signal is shown in Fig. 23-10. Note
the color portion of the signal. Because of the frequency of the subcarrier, the sidebands
produced during amplitude modulation occur in clusters that are interleaved between the
other sidebands produced by the video modulation.
Remember that the 3.58-MHz subcarrier is suppressed by the balanced modulators
and therefore is not transmitted. Only the filtered upper and lower sidebands of the color
signals are transmitted. To demodulate these double-sideband (DSB) AM signals, the carrier
must be reinserted at the receiver. A 3.58-MHz oscillator in the receiver generates the
subcarrier for the balanced modulator-demodulator circuits.
For the color signals to be accurately recovered, the subcarrier at the receiver must
have a phase related to the subcarrier at the transmitter. To ensure the proper conditions
at the receiver, a sample of the 3.58-MHz subcarrier signal developed at the transmitter
is added to the composite video signal. This is done by gating 8 to 12 cycles of the
3.58-MHz subcarrier and adding it to the horizontal sync and blanking pulse as shown
in Fig. 23-11. This is called the color burst, and it rides on what is called the back porch
of the horizontal sync pulse. The receiver uses this signal to phase-synchronize the internally generated subcarrier before it is used in the demodulation process.
A block diagram of a TV transmitter is shown in Fig. 23-12. Note the sweep and
sync circuits that create the scanning signals for the vidicons or CCDs as well as generate the sync pulses that are transmitted along with the video and color signals. The
sync signals, luminance Y, and color signals are added to form the final video signal that
is used to modulate the carrier. Low-level AM is used. The final AM signal is amplified
by very high-power linear amplifiers and sent to the antenna via a diplexer, which is a
Figure 23-11  The 3.58-MHz color subcarrier burst used to synchronize color demodulation at the receiver.
Figure 23-12  Complete TV transmitter.
set of sharp bandpass filters that pass the transmitter signal to the antenna but prevent
signals from getting back into the sound transmitter.
At the same time, the voice or sound signals frequency-modulate a carrier that is
amplified by class C amplifiers and fed to the same antenna by way of the diplexer. The
resulting VHF or UHF TV signal travels by line-of-sight propagation to the receiving antennas.
23-2 TV Receiver
The process involved in receiving a TV signal and recovering it to present the picture
and sound outputs in a high-quality manner is complex. Over the course of the years
since its invention, the TV set has evolved from a large vacuum tube unit into a smaller
and more reliable solid-state unit made mostly with ICs.
A block diagram of a TV receiver is shown in Fig. 23-13. Although it is basically
a superheterodyne receiver, it is one of the most sophisticated and complex electronic
devices ever developed. Today, most of the circuitry is incorporated in large-scale ICs.
Yet the typical TV receiver still uses many discrete component circuits.
The signal from the antenna or the cable is connected to the tuner, which consists of
an RF amplifier, mixer, and local oscillator. The tuner is used to select the TV channel
Figure 23-13  Block diagram of TV receiver.
to be viewed and to convert the picture and sound carriers plus their modulation to an
intermediate frequency (IF). As in most superheterodyne receivers, the local-oscillator
frequency is set higher than the incoming signal by the IF value.
Most TV set tuners are prepackaged in sealed and shielded enclosures. They are two
tuners in one, one for the VHF signals and another for the UHF signals. The VHF tuner
usually uses low-noise FETs for the RF amplifier and the mixer. UHF tuners use a diode
mixer with no RF amplifier or a GaAs FET RF amplifier and mixer. Today most modern
tuners are a single integrated circuit.
Tuning Synthesizer. The local oscillators are phase-locked loop (PLL) frequency
synthesizers set to frequencies that will convert the TV signals to the IF. Tuning of
the local oscillator is typically done digitally. The PLL synthesizer is tuned by setting
the feedback frequency-division ratio. In a TV set this is changed by a microprocessor that is part of the master control system. The interstage LC-resonant circuits in
the tuner are controlled by varactor diodes. By varying the dc bias on the varactors,
their capacitance is changed, thereby changing the resonant frequency of the tuned
circuits. The bias control signals also come from the control microprocessor. Most
TV sets are tuned by IR remote control. In single-chip tuners, the synthesizer is part
of the circuit.
Video Intermediate Frequency and Demodulation
To determine the sound carrier frequency when the channel and video carrier frequency are known, add 4.5 MHz to the video carrier frequency.
The standard TV receiver IFs are 41.25 MHz for the sound and 45.75 MHz for the
picture. For example, if a receiver is tuned to channel 4, the picture carrier is 67.25 MHz,
and the sound carrier is 71.75 MHz (the difference is 4.5 MHz). The synthesizer local
oscillator is set to 113 MHz. The tuner produces an output that is the difference between
the incoming signal and local-oscillator frequencies, or 113 − 67.25 MHz, or 45.75 MHz,
for the picture and 113 − 71.75 MHz, or 41.25 MHz, for the sound. Because the local-oscillator frequency is above the frequency of incoming signals, the relationship of the
picture and sound carriers is reversed at the intermediate frequencies, the picture IF being
4.5 MHz above the sound IF.
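The down-conversion arithmetic generalizes to any channel. A small helper (the function name is mine) run with the channel 4 numbers:

```python
def tv_ifs(channel_lower_mhz, lo_mhz):
    """Return (picture IF, sound IF) in MHz for a channel whose lower
    edge is channel_lower_mhz, using a high-side local oscillator."""
    picture_carrier = channel_lower_mhz + 1.25
    sound_carrier = picture_carrier + 4.5
    # High-side injection: IF = LO - carrier, so the picture/sound
    # relationship is reversed at IF (the picture IF is the higher one).
    return lo_mhz - picture_carrier, lo_mhz - sound_carrier

print(tv_ifs(66, 113))   # channel 4 (66-72 MHz): (45.75, 41.25)
```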
The IF signals are then sent to the video IF amplifiers. Selectivity is usually obtained
with a surface acoustic wave (SAW) filter. This fixed tuned filter is designed to provide
the exact selectivity required to pass both of the IF signals with the correct response to
match the vestigial sideband signal transmitted. Figure 23-14(a) is a block diagram of
the filter. It is made on a piezoelectric ceramic substrate such as lithium niobate. A pattern of interdigital fingers on the surface converts the IF signals to acoustic waves that
travel across the filter surface. By controlling the shapes, sizes, and spacings of the interdigital fingers, the response can be tailored to any application. Interdigital fingers at the
output convert the acoustic waves to electric signals at the IF.
The response of the SAW IF filter is shown in Fig. 23-14(b). Note that the filter
greatly attenuates the sound IF to prevent it from getting into the video circuits. The
maximum response occurs in the 43- to 44-MHz range. The picture carrier IF is down
50 percent on the curve.
Continue to refer to Fig. 23-13. The IF signals are next amplified by IC amplifiers.
The video (luminance, or Y ) signal is then recovered by an AM demodulator. In older
sets, a balanced modulator type of synchronous demodulator is used. It is part of the
IF amplifier IC.
The output of the video detector is the Y signal and the composite color signals,
which are amplified by the video amplifiers. The Y signal is used to create an AGC
voltage for controlling the gain of the IF amplifiers and the tuner amplifiers.
The composite color signal is taken from the video amplifier output by a filter and
fed to color-balanced demodulator circuits. The color burst signal is also picked up by a
gating circuit and sent to a phase detector (Φ DET) whose output is used to synchronize
Figure 23-14  (a) Surface acoustic wave (SAW) filter. (b) Typical IF response curve.
an oscillator that produces a 3.58-MHz subcarrier signal of the correct frequency and
phase. The output of this oscillator is fed to two balanced demodulators that recover the
I and Q signals. The carriers fed to the two balanced modulators are 90° out of phase.
Note the 57° phase shifter used to correctly position the color phase for maximum recovery of color detail. The Q and I signals are combined in a matrix with the Y signal, and
out come the three R, G, and B color signals. These are amplified and sent to the picture tube, which reproduces the picture.
Sound Intermediate Frequency and Demodulation
To recover the sound part of the TV signal, a separate sound IF and detector section are
used. Continuing to refer to Fig. 23-13, note that the 41.25- and 45.75-MHz sound and
picture IF signals are fed to a sound detector circuit. This is a nonlinear circuit that heterodynes the two IFs and generates the sum and difference frequencies. The result is a
4.5-MHz difference signal that contains both the AM picture and the FM sound modulation. This is the sound IF signal. It is passed to the sound IF amplifiers, which also
perform a clipping-limiting function that removes the AM, leaving only the FM sound.
The audio is recovered with a quadrature detector or differential peak detector, as
described in Chap. 6. The audio is amplified by one or more audio stages and sent to
the speaker. If stereo is used, the appropriate demultiplexing is done by an IC, and the
left and right channel audio signals are amplified.
Synchronizing Circuits
A major part of the TV receiver is dedicated to the sweep and synchronizing functions
that are unique to TV receivers. In other words, the receiver’s job does not end with
demodulation and recovery of the picture and sound. To display the picture on a picture tube, special sweep circuits are needed to generate the voltages and currents to
operate the picture tube, and sync circuits are needed to keep the sweep in step with
the transmitted signal.
The sweep and sync operations begin in the video amplifier. The demodulated video
includes the vertical and horizontal blanking and sync pulses. The sync pulses are
stripped off the video signal with a sync separator circuit and fed to the sweep circuits
(refer to the lower part of Fig. 23-13). The horizontal sync pulses are used to synchronize a horizontal oscillator to 15,734 Hz. This oscillator drives a horizontal output stage
that develops a sawtooth of current that drives magnetic deflection coils in the picture
tube yoke that sweep the electron beams in the picture tube.
The horizontal output stage, which is a high-power transistor switch, is also part of
a switching power supply. The horizontal output transistor drives a step up–step down
transformer called the flyback. The 15.734-kHz pulses developed are stepped up, rectified, and filtered to develop the 30- to 35-kV high dc voltage required to operate the
picture tube. Step-down windings on the flyback produce lower-voltage pulses that are
rectified and filtered into low voltages that are used as power supplies for most of the
circuits in the receiver.
The sync pulses are also fed to an IC that takes the horizontal sync pulses during
the vertical blanking interval and integrates them into a 60-Hz sync pulse which is used
to synchronize a vertical sweep oscillator. The output from this oscillator is a sawtooth
sweep voltage at the field rate of 60 Hz (actually 59.94 Hz). This output is amplified
and converted to a linear sweep current that drives the magnetic coils in the picture tube
yoke. These coils produce vertical deflection of the electron beams in the picture tube.
In most modern TV sets, the horizontal and vertical oscillators are replaced by digital sync circuits (see Fig. 23-15). The horizontal sync pulses from the sync separator
are normally used to phase-lock a 31.468-kHz voltage-controlled oscillator (VCO) that
runs at 2 times the normal horizontal rate of 15.734 kHz. Dividing this by 2 in a flip-flop gives the horizontal pulses that are amplified and shaped in the horizontal output
stage to drive the deflection coils on the picture tube. A digital frequency divider divides the 31.468-kHz signal by 525 to get a 59.94-Hz signal for vertical sync. This signal is shaped into a current sawtooth and amplified by the vertical output stage, which drives the deflection coils on the picture tube.

Figure 23-15 Digital generation of horizontal and vertical sync pulses.
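The divider-chain arithmetic can be summarized numerically (values taken from the text):

```python
# Digital sync generation: one VCO and two dividers produce both sweep rates.
VCO_HZ = 31_468                 # VCO runs at twice the horizontal rate

horizontal_hz = VCO_HZ / 2      # flip-flop divide-by-2 -> 15,734 Hz
vertical_hz = VCO_HZ / 525      # divide-by-525 counter -> ~59.94-Hz field rate
```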
Picture Tube
A picture tube is a vacuum tube called a cathode-ray tube (CRT). Both monochrome (B&W) and color picture tubes are available. The CRT used in computer video monitors works in the same way as the TV picture tube described here.
Monochrome CRT. The basic operation of a CRT is illustrated with a monochrome
tube, as shown in Fig. 23-16(a). The tube is housed in a bell-shaped glass enclosure. A
filament heats a cathode which emits electrons. The negatively charged electrons are
attracted and accelerated by positive-bias voltages on the elements in an electron gun
assembly. The electron gun also focuses the electrons into a very narrow beam. A control grid that is made negative with respect to the cathode controls the intensity of the
electron beam and the brightness of the spot it makes.
The beam is accelerated forward by a very high voltage applied to an internal
metallic coating called aquadag. The face, or front, of the picture tube is coated internally with a phosphor that glows and produces white light when it is struck by the
electron beam.
Around the neck of the picture tube is a structure of magnetic coils called the
deflection yoke. The horizontal and vertical linear sawtooth current waves generated by
the sweep and synchronizing circuits are applied to the yoke coils, which produce magnetic fields inside the tube that influence the position of the electron beam. When
electrons flow, a magnetic field is produced around the conductor through which the
current flows. The magnetic field that occurs around the electron beam is moved or
deflected by the magnetic field produced by the deflection coils in the yoke. Thus the
electron beam is swept across the face of the picture tube in the interlaced manner
described earlier.
As the beam is being swept across the face of the tube to trace out the scene, the
intensity of the electron beam is varied by the luminance, or Y, signal, which is applied
to the cathode or in some cases to the control grid. The control grid is an element in the
electron gun that is negatively biased with respect to the cathode. By varying the grid
voltage, the beam can be made stronger or weaker, thereby varying the intensity of the
light spot produced by the beam when it strikes the phosphor. Any shade of gray, from
white to black, can be reproduced in this way.
Color CRT. The operation of a color picture tube is similar to that just described.
To produce color, the inside of the picture tube is coated with many tiny red, green,
and blue phosphor dots arranged in groups of three, called triads. Some tubes use a
pattern of red, green, and blue stripes. These dots or stripes are energized by three
separate cathodes and electron guns driven by the red, green, and blue color signals.
Figure 23-16(b) shows how the three electron guns are focused so that they strike only
the red, green, and blue dots as they are swept across the screen. A metallic plate with
holes for each dot triad called a shadow mask is located between the guns and the
phosphor dots to ensure that the correct beam strikes the correct color dot. By varying the intensity of the color beams, the dot triads can be made to produce any color.
The dots are small enough that the eye cannot see them individually at a distance.
What the eye sees is a color picture swept out on the face of the tube.
Figure 23-17 shows how all the signals come together at the picture tube to produce
the color picture. The R, G, and B signals are mixed with the Y signal to control the
cathodes of the CRT. Thus the beams are properly modulated to reproduce the color
picture. Note the various controls associated with the picture tube. The R-G-B screen,
brightness, focus, and centering controls vary the dc voltages that set the levels as desired.
The convergence controls and assembly are used to control the positioning of the three electron beams so that they are centered on the holes in the shadow mask and strike the color dots dead center. The deflection yoke over the neck of the tube deflects all three electron beams simultaneously.

Figure 23-16 (a) Basic construction and operation of a black-and-white (monochrome) cathode-ray tube. (b) Details of color picture tube.
Figure 23-17 Color picture tube circuits.

Other Screen Displays

While most TV sets still use a CRT for a display, during the past 5 years not only have many other display technologies matured, but also new display methods have been perfected and brought to market. These include liquid-crystal displays (LCDs), plasma, projection, Digital Light Processing (DLP), and a few others. These new displays are
more expensive than CRTs, but they have brought two major benefits to TV displays.
First, the displays are flat or thin. CRTs require depth to function properly and so
take up a great deal of room on a table or desk. The typical depth of a CRT is 18 to 24 in.
LCD and plasma displays are very thin and rarely more than 5 in thick.
Second, these alternative displays can be made in much larger sizes. The maximum
CRT size made today is 36 in. Other displays can be made in sizes from about 37- to
60-in diagonal measurement. Many of these displays are capable of being wall-mounted.
As costs continue to decline and as digital and high-definition television programming
becomes available, more TV screens will use these modern display techniques.
The operational details of these displays are way beyond the scope of this book, but
here is a brief summary of the most common types.
Plasma. A plasma screen is made up of many tiny cells filled with a special gas.
When the gas is excited by an electric signal, the gas ionizes and becomes a
plasma that glows brightly in shades of red, blue, and green. The cells are
organized to form triads or groups of the three colors that are then mixed and
blended by your eye to form the picture. Scanning signals turn on the cells
horizontally as in a CRT.
LCD. Liquid-crystal displays use special chemicals sandwiched between pieces of
glass. These chemicals are designed to be electrically activated so that they
block light or pass light. A bright white light is placed behind the screen.
Then the red, blue, and green sections of the screen are enabled to pass the
desired amount of light. The screen is also made in the form of groups of
three color dots or segments to produce any desired color. Electric signals
scan across the color dots horizontally, as in other TV sets, to reproduce
the picture. LCD screens are very common in computer video monitors
but are now practical for TV sets. As prices decline, more TV sets will use them.
Projection screens. A popular large screen option is an LCD projection TV. A
very bright light is passed through a smaller LCD screen and then through a
lens, creating a picture from 40 to 60 in diagonally. Another projection
screen uses Texas Instruments’ Digital Light Processing (DLP) chips. These
chips are made with microelectromechanical systems (MEMS). They consist of thousands of tiny mirror segments, each with a controllable tilt angle.
These mirrors reflect light through color lenses to create a very large
back-projected image.
23-3 Cable TV
Cable TV, sometimes called CATV, is a system of delivering the TV signal to home receivers
by way of a coaxial cable rather than over the air by radio wave propagation. A cable TV
company collects all the available signals and programs and frequency-multiplexes them
on a single coaxial cable that is fed to the homes of subscribers. A special cable decoder
box is used to receive the cable signals, select the desired channel, and feed a signal to the
TV set. Today, most TV reception is by way of a cable connection instead of an antenna.
CATV Background
Many companies were established to offer TV signals by cable. They put up very tall
high-gain TV antennas. The resulting signals were amplified and fed to the subscribers
by cable. Similar systems were developed for apartments and condominiums. A single
master antenna system was installed at a building, and the signals were amplified and
distributed to each apartment or unit by cable.
Modern Cable TV Systems
Today, cable TV companies, generally referred to as multiple-system operators
(MSOs), collect signals and programs from many sources, multiplex them, and distribute
them to subscribers (see Fig. 23-18). The main building or facility is called the headend.
The antennas receive local TV stations and other nearby stations plus the special cable
channel signals distributed by satellite. The cable companies use parabolic dishes to pick
up the so-called premium cable channels. A cable TV company uses many TV antennas
and receivers to pick up the stations whose programming it will redistribute. These signals
are then processed and combined or frequency-multiplexed onto a single cable.
The main output cable is called the trunk cable. In older systems it was a large, low-loss coaxial cable. Newer systems use a fiber-optic cable. The trunk cable is usually
buried and extended to surrounding areas. A junction box containing amplifiers takes the
signal and redistributes it to smaller cables, called feeders, which go to specific areas
and neighborhoods. From there the signals are again rejuvenated with amplifiers and sent
to individual homes by coaxial cables called drops. The overall system is referred to as
a hybrid fiber cable (HFC) system.
The coaxial cable (usually 75-Ω RG-6/U) comes into a home and is connected to a
cable decoder box, which is essentially a special TV tuner that picks up the cable channels
and provides a frequency synthesizer and mixer to select the desired channel. The mixer output is heterodyned to TV channel 3 or 4, a channel that any TV set can receive, and then fed to the TV set antenna terminals.

Figure 23-18 The modern cable TV system.
Cable TV is a popular and widely used service in the United States. More than 80 percent of U.S. homes have cable TV service. This service eliminates the need for antennas.
And because of the direct connection of amplified signals, there is no such thing as a poor, weak, noisy, or snowy signal. In addition, many TV programs are available only via
cable, e.g., the specialized content and premium movie channels. The only downside to
cable TV is that it is more expensive than connecting a TV to a standard antenna.
Signal Processing
The TV signals to be redistributed by the cable company usually undergo some kind of
processing before they are put on the cable to the TV set. Amplification and impedance
matching are the main processes involved in sending the signal to remote locations over
what is sometimes many miles of coaxial cable. However, at the headend, other types of
processes are involved.
Straight-Through Processors. In early cable systems, the TV signals from local
stations were picked up with antennas, and the signal was amplified before being multiplexed onto the main cable. This is called straight-through processing. Amplifiers called strip amplifiers, tuned to the received channels, pass the desired TV signal to the combiner. Most of these amplifiers include some kind of gain control or attenuators that can
reduce the signal level to prevent distortion of strong local signals. This process can still
be used with local VHF TV stations, but today heterodyne processing is used instead.
Heterodyne Processors. Heterodyne processing translates the incoming TV signal
to a different frequency. This is necessary when satellite signals are involved. Microwave
carriers cannot be put on the cable, so they are down-converted to some available 6-MHz
TV channel. In addition, heterodyne processing gives the cable companies the flexibility of putting the signals on any channel they want to use.
The cable TV industry has created a special set of nonbroadcast TV channels, as
shown in Fig. 23-19. Some of the frequency assignments correspond to standard TV
channels, but others do not. Since all these frequencies are confined to a cable, there can
be duplication of any frequency that might be used in radio or TV broadcasting. Note
that the spacing between the channels is 6 MHz.
Figure 23-19 Special cable TV channels. Note that the video or picture carrier frequency is given.

Figure 23-20 A heterodyne processor.

The cable company uses modules called heterodyne processors to translate the received signals to the desired channel (see Fig. 23-20). The processor is a small TV receiver. It has a tuner set to pick up the desired over-the-air channel. The output of the mixer is the normal TV IFs of 45.75 and 41.25 MHz. These picture and sound IF signals
are usually separated by filters; these stages incorporate AGC and provide individual gain controls for fine-tuning adjustments. These signals are then sent to a mixer, where they are combined with a local-oscillator signal to up-convert them to the final output frequency. A switch is usually provided to connect the input local oscillator to the output mixer; this puts the received signal back on its original frequency, which is done in some cases. Setting the switch to the other position selects a different local-oscillator frequency that up-converts the signal to another desired channel.
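As a rough sketch of the up-conversion step (the high-side-injection choice and the helper function here are illustrative assumptions, not details given in the text):

```python
PICTURE_IF_MHZ = 45.75   # standard picture IF

def upconvert_lo(output_picture_carrier_mhz):
    """Local-oscillator frequency that moves the 45.75-MHz picture IF to the
    desired output channel, assuming the difference product (LO - IF) is kept."""
    return output_picture_carrier_mhz + PICTURE_IF_MHZ

# e.g., to place the signal on channel 2 (55.25-MHz picture carrier):
lo = upconvert_lo(55.25)   # 101.0 MHz
```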
Some heterodyne processors completely demodulate the received signal into its individual audio and video components. This gives the cable company full control over signal quality, since the recovered signals can be adjusted. It also allows the cable company to employ
scrambling methods if desired. The signals are then sent to a modulator unit that puts
the signals on carrier frequencies. The resulting signal is up-converted to the desired output channel frequency.
All the signals on their final channel assignments are sent to a combiner, which is
a large special-purpose linear mixer. Normally, directional couplers are used for the
combining operation. Figure 23-20 shows how multiple directional couplers are connected
to form the combiner or multiplexer. The result is that all the signals are frequency-multiplexed into a composite signal that is put on the trunk cable.
Cable TV Converter
The receiving end of the cable TV system at the customer’s home is a box of electronics that selects the desired channel signal from those on the cable and translates it to
channel 3 or 4, where it is connected to the host TV receiver through the antenna input
terminals. The cable TV box is thus a tuner that can select the special cable TV channels and convert them to a frequency that any TV set can pick up.
Figure 23-21 Cable TV converter.

Figure 23-21 shows a basic block diagram of a CATV converter. The 75-Ω RG-59/U cable connects to a tuner made up of a mixer and a frequency synthesizer local oscillator capable of selecting any of the desired channels. The synthesizer is phase-locked and
microprocessor-controlled. Most control processors provide for remote operation with a digital infrared remote similar to that used on virtually every modern TV set.
The output of the mixer is sent to a modulator that puts the signal on channel 3 or
4. The output of the modulator connects to the TV set antenna input. The TV is then set to channel 3 or 4 and left there. All channel changing is done with the cable converter remote control.
Today, cable converters have many advanced features, among them automatic
identification and remote control by the cable company. Each converter contains a unique
ID code that the cable company uses to identify the customer. This digital code is transmitted
back to the cable company over a special reverse channel. There are several 6-MHz channels below channel 2 that can be used to transmit special signals to or from the cable converter. The digital ID modulates one of these special reverse channels. These low channels
can also be used by the cable company to turn on or disable a cable converter box remotely.
A digital signal is modulated onto a special channel and sent to the cable converter. It is
picked off by a special tuner or with a bandpass filter as shown in Fig. 23-21. The signal
is demodulated, and the recovered signal is sent to the microprocessor for control purposes.
It can be used to lock out access to any special channels to which the customer has not
subscribed. The reverse channels can also be used for simple troubleshooting.
Digital Cable
The newest cable TV systems use digital techniques. The audio and video are transmitted in digital form in one or more of the regular 6-MHz-bandwidth analog channels
to the cable box. A video compression technique is used to make the signal fit the
available channel bandwidth. Digital modulation methods are used, mainly multilevel
QAM (16-QAM, 32-QAM, or 64-QAM). The cable box at the receiving end contains
digital demodulator and decompression circuits and D/A converters to put the signals
into analog form for presentation on the still-analog TV set. The primary benefits of
digital cable are that more channels can be carried and the picture quality is somewhat
better. However, cable TV systems with digital cable also continue to support the older
analog TV system since it is less expensive.
Cable TV systems use a set of standards established by CableLabs, a nonprofit organization devoted to research into cable TV methods as well as programs of testing and certification that ensure that any cable TV set or converter box is compatible with any TV set or cable TV system. CableLabs has developed a system that also permits cable
TV channels to carry high-speed Internet service as well as Voice over Internet Protocol
(VoIP) phone service. This standard is called Data over Cable Service Interface Specification (DOCSIS). The latest version 3.0 defines wider 6.4-MHz channels, 64-QAM modulation, and a data rate up to 30 Mbps. With such flexibility, DOCSIS permits cable TV
companies to offer virtually any digital service to the consumer. Digital TV, Internet, and
phone service combined is called the triple play.
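The quoted 30-Mbps figure is consistent with simple QAM arithmetic. A sketch follows; the 5.12-Mbaud symbol rate for a 6.4-MHz channel is an illustrative assumption, not a figure from the text:

```python
import math

def qam_raw_rate_mbps(symbol_rate_mbaud, qam_order):
    """Raw bit rate of a single QAM channel: symbols/s times bits/symbol."""
    bits_per_symbol = math.log2(qam_order)   # 64-QAM -> 6 bits/symbol
    return symbol_rate_mbaud * bits_per_symbol

# Assuming a 6.4-MHz channel supports about a 5.12-Mbaud symbol rate,
# 64-QAM gives a raw rate of roughly 30 Mbps before overhead.
rate = qam_raw_rate_mbps(5.12, 64)
```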
23-4 Satellite TV
One of the most common methods of TV signal distribution is via communication satellite. A communication satellite orbits the equator about 22,300 mi out in space. It rotates
in synchronism with the earth and therefore appears to be stationary. The satellite is used
as a radio relay station (refer to Fig. 23-22). The TV signal to be distributed is used to
modulate a microwave carrier, and then it is transmitted to the satellite. The path from
earth to the satellite is called the uplink. The satellite translates the signal to another frequency and then retransmits it back to earth. This is called the downlink. A receive site on earth picks up the signal. The receive site may be a cable TV company or an individual consumer. Satellites are widely used by the TV networks, the premium channel companies, and the cable TV industry for distributing signals nationally.

Figure 23-22 Satellite TV distribution.
A newer form of consumer satellite TV is direct broadcast satellite (DBS) TV. The
DBS systems are designed specifically for consumer reception directly from the satellite. The new DBS systems feature digitally encoded video and audio signals, which
make transmission and reception more reliable and provide outstanding picture and sound
quality. By using higher-frequency microwaves, higher-power satellite transponders, and
very low-noise GaAs FETs in the receiver, the customer’s satellite dish can be made very
small. These systems typically use an 18-in dish as opposed to the 5- to 12-ft-diameter
dishes still used in older satellite TV systems.
Direct Broadcast Satellite Systems
The direct broadcast satellite (DBS) system was designed specifically to be an all-digital
system. Data compression techniques are used to reduce the data rate required to produce high-quality picture and sound.
The DBS system features entirely digital uplink ground stations and satellites. Since
the satellites are designed to transmit directly to the home, extra high-power transponders are used to ensure a satisfactory signal level.
To receive the digital video from the satellite, a consumer must purchase a satellite
TV receiver and antenna. These satellite receivers operate in the Ku band. By using higher
frequencies as well as higher-power satellite transponders, the necessary dish antenna can
be extremely small. The new satellite DBS system antennas have only an 18-in diameter. See Fig. 23-23. Several special digital broadcast satellites are in orbit, and two of
the direct satellite TV sources are DirecTV and DISH Network. They provide full coverage of the major cable networks and the premium channels usually distributed to homes by cable TV, so these can be received directly. In addition to purchasing the receiver and antenna, the consumer must subscribe to one of the services supplying the desired programming.
Satellite Transmission. The video to be transmitted must first be placed into digital form. To digitize an analog signal, it must be sampled a minimum of 2 times per cycle for sufficient digital data to be developed for reconstruction of the signal. Assuming that video frequencies of up to 4.2 MHz are used, the minimum sampling rate is twice this, or 8.4 megasamples per second. For each sample, a binary number proportional to the light amplitude is developed. This is done by an A/D converter, usually with an 8-bit output. The resulting video signal, therefore, has a data rate of 8 bits × 8.4 megasamples per second, or 67.2 Mbps. This is an extremely high data rate. However, for a color TV signal to be transmitted in this way, there must be a separate signal for each of the red, green, and blue components making up the video. This translates to a total data rate of 3 × 67.2, or about 202, Mbps. Even with today's technology, this is an extremely high data rate that is hard to achieve.
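The data-rate arithmetic can be laid out step by step (a sketch of the numbers in the text):

```python
# Uncompressed digital video data rate.
video_bw_mhz = 4.2                        # highest video frequency
sample_rate_msps = 2 * video_bw_mhz       # Nyquist minimum: 8.4 Msamples/s
bits_per_sample = 8                       # 8-bit A/D converter

per_color_mbps = sample_rate_msps * bits_per_sample   # 67.2 Mbps per color
rgb_total_mbps = 3 * per_color_mbps                   # ~201.6 Mbps for R, G, B
```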
To lower the data rate and improve the reliability of transmission, the new DBS system uses compressed digital video. Once the video signals have been put into digital
form, they are processed by digital signal processing (DSP) circuits to minimize the full
amount of data to be transmitted. Digital compression greatly reduces the actual transmitting speed to somewhere in the 20- to 30-Mbps range. The compressed serial digital
signal is then used to modulate the uplinked carrier using BPSK.
The DBS satellite uses the Ku band with a frequency range of 11 to 14 GHz. Uplink
signals are usually in the 14- to 14.5-GHz range, and the downlink usually covers the
range of 10.95 to 12.75 GHz.
The primary advantage of using the Ku band is that the receiving antennas may be
made much smaller for a given amount of gain. However, these higher frequencies are
more affected by atmospheric conditions than are the lower microwave frequencies. The biggest problem is the increased attenuation of the downlink signal caused by rain. Any type of weather involving rain or water vapor, such as fog, can seriously reduce the received signal. This is so because the wavelength of Ku band signals is near that of water vapor. Therefore, the water vapor absorbs the signal. Although the power of the satellite transponder and the gain of the receiving antenna are typically sufficient to provide solid reception, there can be fadeout under heavy downpour conditions.

Figure 23-23 This father and son can easily install their RCA direct digital TV satellite dish antenna.
Finally, the digital signal is transmitted from the satellite to the receiver by using
circular polarization. The DBS satellites have right-hand and left-hand circularly polarized (RHCP and LHCP) helical antennas. By transmitting both polarities of signal, frequency reuse can be incorporated to double the channel capacity.
DBS Receiver. A block diagram of a typical DBS digital receiver is shown in
Fig. 23-24. The receiver subsystem begins with the antenna and its low-noise block converter. The horn antenna picks up the Ku band signal and translates the entire 500-MHz
band used by the signal down to the 950- to 1450-MHz range, as explained earlier. Control signals from the receiver to the antenna select between RHCP and LHCP. The RF
signal from the antenna is sent by coaxial cable to the receiver.
Figure 23-24 Digital DBS TV receiver.
A typical DBS downlink signal occurs in the 12.2- to 12.7-GHz portion of the Ku
band. Each transponder has a bandwidth of approximately 24 MHz. The digital signal
usually occurs at a rate of approximately 27 Mbps.
Figure 23-25 shows how the digital signal is transmitted. The digital audio
and video signals are organized into data packets. Each packet consists of a total of
147 bytes. The first 2 bytes (16 bits) contain the service channel identification (SCID)
number. This is a 12-bit number that identifies the video program being carried by the
packet. The 4 additional bits are used to indicate whether the packet is encrypted and,
if so, which decoding key to use. One additional byte contains the packet type and a
continuity counter.
The data block consists of 127 bytes, either 8-bit video signals or 16-bit audio signals. It may also contain digital data used for control purposes in the receiver. Finally,
the last 17 bytes are the error detection check codes. These 17 bytes are developed by
an error-checking circuit at the transmitter. The appended bytes are checked at the
receiver to detect any errors and correct them.
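The 147-byte packet layout described above can be sketched as follows (the field widths come from the text; the names and helper function are illustrative, not from any DBS specification):

```python
# Field widths of one DBS data packet, in bytes.
HEADER_SCID = 2      # 12-bit SCID plus 4 encryption-flag bits
HEADER_TYPE = 1      # packet type and continuity counter
DATA_BLOCK  = 127    # video, audio, or control payload
CHECK_CODE  = 17     # error detection/correction bytes

def split_packet(packet: bytes):
    """Split one 147-byte packet into its (scid, type, data, check) fields."""
    assert len(packet) == HEADER_SCID + HEADER_TYPE + DATA_BLOCK + CHECK_CODE
    return packet[:2], packet[2:3], packet[3:130], packet[130:]
```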
Figure 23-25 Digital data packet format used in DBS TV.
The received signal is passed through another mixer with a variable-frequency local
oscillator to provide channel selection. The digital signal at the second IF is then demodulated to recover the originally transmitted digital signal, which is passed through a forward
error correction (FEC) circuit. This circuit is designed to detect bit errors in the transmission and to correct them on the fly. Any bits lost or obscured by noise during the transmission process are usually caught and corrected to ensure a near-perfect digital signal.
The resulting error-corrected signals are then sent to the audio and video decompression circuits. Then they are stored in random access memory (RAM), after which
the signal is decoded to separate it into both the video and the audio portions. The DBS
TV system uses digital compression-decompression standards referred to as MPEG2
(MPEG means Moving Picture Experts Group, which is a standards organization that
establishes technical standards for movies and video). MPEG2 is a compression method
for video that achieves a compression of about 50 to 1 in data rate. Finally, the signals are sent to D/A converters whose analog outputs feed an RF modulator, which sends the signals to the TV set antenna terminals.
Although the new DBS digital systems will not replace cable TV, they provide the
consumer with the capability of receiving a wide range of TV channels. The use of digital techniques provides an unusually high-quality signal.
23-5 Digital TV
Digital TV (DTV), also known as high-definition TV (HDTV), was designed to replace the
National Television Standards Committee (NTSC) system, which was invented in the 1940s
and 1950s. The goal of HDTV is to greatly improve the picture and sound quality.
After more than a decade of evaluating alternative HDTV systems, the FCC has
finalized the standards and decreed that HDTV will become the U.S. TV standard by April 2009. The first HDTV stations began transmission in the 10 largest U.S.
cities on September 1, 1998. HDTV sets can now be purchased by the consumer, but
they are still expensive. As more HDTV stations come online and as more HDTV programming becomes available, more consumers will buy HDTV receivers and the cost
will drop dramatically.
The HDTV system is an extremely complex collection of digital, communication,
and computer techniques. A full discussion is beyond the scope of this book. However,
this section is a brief introduction to the basic concepts and techniques used in HDTV.
HDTV Standards
HDTV for the United States was developed by the Advanced Television Systems Committee (ATSC) in the 1980s and 1990s. HDTV uses the scanning concept to paint a picture
on the CRT, so you can continue to think of the HDTV screen in terms of scan lines, as
you would think of the standard NTSC analog screen. However, you should also view the
HDTV screen as being made up of thousands of tiny dots of light, called pixels. Each pixel
can be any of 256 colors. These pixels can be used to create any image. The greater the
number of pixels on the screen, the greater the resolution and the finer the detail that can
be represented. Each horizontal scan line is divided into hundreds of pixels. The format of
a HDTV screen is described in terms of the numbers of pixels per horizontal line by the
number of vertical pixels (which is the same as the number of horizontal scan lines).
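To get a feel for the resolution differences, the total pixel counts of the common screen formats can be computed directly. This is a quick Python check; the dimensions are those of the standard VGA, 720p, and 1080i formats discussed below.

```python
# Total pixel count for common DTV screen formats:
# pixels per horizontal line x number of scan lines.
formats = {
    "480p (VGA)": (640, 480),
    "720p": (1280, 720),
    "1080i": (1920, 1080),
}

for name, (h_pixels, v_lines) in formats.items():
    total = h_pixels * v_lines
    print(f"{name}: {h_pixels} x {v_lines} = {total:,} pixels")
```

The 1080i format has almost seven times as many pixels as the VGA-class format, which is why it gives the finest detail.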
One major difference between conventional NTSC analog TV and HDTV is that
HDTV can use progressive line scanning rather than interlaced scanning. In progressive
scanning each line is scanned one at a time from top to bottom. Since this format is
compatible with computer video monitors, it is possible to display HDTV on computer
screens. Interlaced scanning can be used on one of the HDTV formats. Interlaced
scanning minimizes flicker but complicates the video compression process. Progressive
scanning is preferred and at a 60-Hz frame rate, flicker is not a problem.
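The two scan orders can be sketched in a few lines of Python. This is a toy illustration of line ordering only, not of broadcast timing.

```python
def progressive_order(lines):
    """Progressive scan: every line in order, top to bottom, each frame."""
    return list(range(lines))

def interlaced_order(lines):
    """Interlaced scan: one field of alternate lines, then the other field."""
    field1 = list(range(0, lines, 2))  # lines 0, 2, 4, ...
    field2 = list(range(1, lines, 2))  # lines 1, 3, 5, ...
    return field1 + field2

print(progressive_order(6))  # [0, 1, 2, 3, 4, 5]
print(interlaced_order(6))   # [0, 2, 4, 1, 3, 5]
```

The interleaved ordering is what makes frame-to-frame compression harder: adjacent lines of the image arrive a half frame apart in time.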
Table 23-1

  Format*           Aspect Ratio     Scanning                     Frame Rate, Hz
  640 × 480†        4:3              Progressive                  24, 30, 60
  704 × 480         4:3 or 16:9      Progressive or interlaced    24, 30, 60
  1280 × 720        16:9             Progressive                  24, 30, 60
  1920 × 1080       16:9             Interlaced                   24 or 30

  *Pixels per horizontal line × number of scan lines.
  †Standard PC VGA format.
The FCC has defined a total of 18 different formats for HDTV. Most are variations
of the basic formats given in Table 23-1. Most plasma, LCD, and larger screens display
only these formats.
The 480p (the p stands for “progressive”) standard offers performance comparable
to that of the NTSC system. It uses a 4:3 aspect ratio for the screen. The scanning is
progressive. The vertical scan rate is selectable to fit the type of video being transmitted. This format is fully compatible with modern VGA computer monitors. The
704 × 480 format can use either progressive or interlaced scanning with either aspect
ratio at the three vertical scan rates shown in Table 23-1.
The 720p format uses a larger aspect ratio of 16:9 (a 4:3 format is optional at this resolution also). This format is better for showing movies. Figure 23-26 shows the difference
between the current and new HDTV aspect ratios. The 1080i format uses the 16:9 aspect
ratio but with more scan lines and more pixels per line. This format obviously gives the best
resolution. The HDTV set should be able to detect and receive any available format. The
720p at 60 Hz and 1080i formats are those designated HDTV.
HDTV Transmission Concepts
In HDTV both the video and the audio signals must be digitized by A /D converters
and transmitted serially to the receiver. Because of the very high frequency of video
signals, special techniques must be used to transmit the video signal over a standard
Figure 23-26  TV picture standards. (a) Current standard: aspect ratio 4:3, 525 lines, interlaced scanning. (b) HDTV standard: aspect ratio 16:9, 1080 lines (interlaced scanning) or 720 lines (progressive scanning).
Chapter 23
Figure 23-27  HDTV transmitter: serial video data is compressed into MPEG-2 data, audio sources produce AC-3 data, error detection bits are added, and the combined signal goes to an up converter.
6-MHz-bandwidth TV channel. And because both video and audio must be transmitted
over the same channel, multiplexing techniques must be used. The FCC’s requirement is
that all this information be transmitted reliably over the standard 6-MHz TV channels
now defined for NTSC TV.
Assume that the video to be transmitted contains frequencies up to 4.2 MHz. For
this signal to be digitized, it must be sampled at least 2 times per cycle, i.e., at a minimum
sampling rate of 8.4 MHz. If each sample is translated to an 8-bit word (byte) and the
bytes are transmitted serially, the data stream has a rate of 8 × 8.4 MHz, or 67.2 Mbps.
Multiply this by 3, one stream for each of the R, G, and B signals, to get 67.2 × 3 = 201.6 Mbps. Add to this the audio channels and other overhead, and
the total required bandwidth is almost 300 MHz. To permit this quantity of data to be
transmitted over the 6-MHz channel, special encoding and modulation techniques are used.
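The arithmetic of this raw-rate estimate can be checked with a short calculation:

```python
# Raw (uncompressed) HDTV video data-rate estimate.
video_bw = 4.2e6              # highest video frequency, Hz
sample_rate = 2 * video_bw    # Nyquist minimum: 8.4 MHz
bits_per_sample = 8

one_signal = sample_rate * bits_per_sample  # one digitized color signal
three_signals = 3 * one_signal              # R, G, and B streams

print(f"One digitized video signal: {one_signal / 1e6:.1f} Mbps")
print(f"Three color signals:        {three_signals / 1e6:.1f} Mbps")
```

The result, 201.6 Mbps before audio and overhead, is what makes compression unavoidable in a 6-MHz channel.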
HDTV Transmitter. Figure 23-27 shows a block diagram of an HDTV transmitter.
The video from the camera consists of the R, G, and B signals that are converted to
the luminance and chrominance signals. These are digitized by A/D converters. The
luminance sampling rate is 14.3 MHz, and the chroma sampling rate is 7.15 MHz.
The resulting signals are serialized and sent to a data compressor. The purpose of this
device is to reduce the number of bits needed to represent the video data and therefore permit higher transmission rates in a limited-bandwidth channel. MPEG-2 is the
data compression method used in HDTV. The MPEG-2 data compressor processes
the data according to an algorithm that effectively reduces any redundancy in the video
signal. For example, if the picture is one-half light blue sky, the pixel values will be
the same for many lines. All this data can be reduced to one pixel value plus a count
of the number of times it repeats. The algorithm also uses fewer bits to encode the color
than to encode the brightness because the human eye is much more sensitive to brightness than to color. The MPEG-2 encoder also captures successive frames of
video and compares them to detect redundancy, so that only the differences between
successive frames are transmitted.
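MPEG-2 itself is far more elaborate, but the redundancy idea can be illustrated with simple run-length encoding. This is a toy sketch only; MPEG-2 uses transforms and motion compensation, not this scheme.

```python
def run_length_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs."""
    if not pixels:
        return []
    runs = [[pixels[0], 1]]
    for p in pixels[1:]:
        if p == runs[-1][0]:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [tuple(r) for r in runs]

# A scan line that is mostly "light blue sky" collapses dramatically:
line = ["sky"] * 8 + ["cloud"] * 2 + ["sky"] * 2
print(run_length_encode(line))  # [('sky', 8), ('cloud', 2), ('sky', 2)]
```

Twelve pixel values shrink to three value-count pairs; the more uniform the picture, the greater the saving.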
The signal is next sent to a data randomizer. The randomizer scrambles or randomizes the signal. This is done to ensure that random data is transmitted even when no
video is present or when the video is a constant value for many scan lines. This permits
clock recovery at the receiver.
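A randomizer can be sketched as XORing the data with a repeatable pseudorandom bit sequence from a linear-feedback shift register. The polynomial taps and seed below are illustrative choices, not the actual ATSC values; the key property is that applying the identical operation at the receiver restores the original data.

```python
def prbs(seed, taps, n):
    """Generate n pseudorandom bits from a simple 16-bit Fibonacci LFSR."""
    state = seed
    bits = []
    for _ in range(n):
        bits.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 15)  # shift right, feed back into bit 15
    return bits

def randomize(data_bits, seed=0xF180, taps=(0, 2, 3, 5)):
    """XOR data with the PRBS; applying it twice restores the data."""
    return [d ^ p for d, p in zip(data_bits, prbs(seed, taps, len(data_bits)))]

data = [0] * 16                       # constant data, e.g. blank video
scrambled = randomize(data)           # now pseudorandom bit activity
assert randomize(scrambled) == data   # descrambling is the same operation
```

Because even an all-zeros payload emerges as a changing bit pattern, the receiver always sees transitions to lock its clock onto.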
Next the random serial signal is passed through a Reed-Solomon (RS) error
detection and correction circuit. This circuit adds extra bits to the data stream so that
transmission errors can be detected at the receiver and corrected. This ensures high reliability in signal transmission even under severe noise conditions. In HDTV, the RS encoder
adds 20 parity bytes per block of data that can provide correction for up to 10 byte errors
per block.
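The correction capability follows directly from the parity count: a Reed-Solomon code corrects up to half as many byte errors per block as it has parity bytes. A quick check of the numbers in the text:

```python
data_bytes = 187     # payload bytes per RS block (ATSC uses RS(207, 187))
parity_bytes = 20    # parity bytes appended by the encoder
block_bytes = data_bytes + parity_bytes

correctable = parity_bytes // 2      # t = (n - k) / 2 byte errors per block
overhead = parity_bytes / block_bytes

print(f"Block: {block_bytes} bytes, corrects up to {correctable} byte errors")
print(f"Parity overhead: {overhead:.1%}")
```

About 10 percent extra bytes buys correction of up to 10 corrupted bytes in every block, which is what keeps the picture intact under noisy reception.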
The signal is next fed to a trellis encoder. This circuit further modifies the data to
permit error correction at the receiver. Trellis encoding is widely used in modems. Trellis coding is not used in the cable TV version of HDTV.
The audio portion of the HDTV signal is also digital. It provides for compact disk
(CD) quality audio. The audio system can accommodate up to six audio channels, permitting monophonic sound, stereo, and multichannel surround sound. The channel
arrangement is flexible to permit different systems. For example, one channel could be
used for a second language transmission or closed captioning.
Each audio channel is sampled at a 48-kHz rate, ensuring that audio signals up to
about 24 kHz are accurately captured and transmitted. Each audio sample is converted
to an 18-bit digital word. The audio information is time-multiplexed and transmitted as
a serial bit stream at a rate of 48 kHz × 6 channels × 18 bits = 5.184 Mbps. A
data compression technique designated AC-3 is used to reduce the rate needed for audio transmission.
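The uncompressed audio rate follows directly from the sampling parameters:

```python
sample_rate = 48_000    # samples per second per channel
bits_per_sample = 18    # resolution of each audio sample
channels = 6            # up to six audio channels

serial_rate = sample_rate * bits_per_sample * channels
print(f"Uncompressed audio rate: {serial_rate / 1e6:.3f} Mbps")
```

At 5.184 Mbps raw, the audio alone would consume a large share of the channel, which is why AC-3 compression is applied before multiplexing.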
Without any kind of data compression and other bandwidth-limiting techniques, a full 1080i
HDTV signal would occupy about 300 MHz of spectrum space. However, with compression the
bandwidth required is very small and actually less than the 6 MHz allotted. In fact, a 1080i HDTV
broadcast only takes about 3 MHz of bandwidth, meaning that two of these broadcasts can fit
into the 6-MHz band. And the bandwidth for lower-definition versions is even smaller. A 720p
broadcast also occupies about 3 MHz. A 480i standard definition digital broadcast can fit into
1 MHz. This allows terrestrial TV stations to offer as many as six subchannels of TV in their
allotted spectrum, each with different programming. Cable TV stations will also be able to put
more programming into their 6-MHz allotted channels.
Next the video and audio data streams are packetized; i.e., they are converted to
short blocks of data bytes that segment the video and audio signals. These packets are
multiplexed along with some synchronizing signals to form the final signal to be transmitted. The result is a 188-byte packet containing video and audio data plus 4 bytes
of synchronization and header. See Fig. 23-28. The header identifies the number
of the packet and its sequence as well as the video format. Next the packets are assembled into frames of data representing one frame of video. The complete frame consists
of 626 packets transmitted sequentially. The final signal is sent to the modulator.
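The fixed-size packet layout can be sketched as follows. This is a minimal sketch: the 4-byte header here carries just a sync byte and a packet number, whereas a real MPEG-2 transport header carries additional fields, though 0x47 is the actual sync byte.

```python
PACKET_SIZE = 188   # total bytes per packet
HEADER_SIZE = 4     # sync byte plus header/identification bytes
PAYLOAD_SIZE = PACKET_SIZE - HEADER_SIZE  # 184 bytes of video/audio data

def make_packet(packet_number, payload):
    """Build one fixed-size packet: 4-byte header + zero-padded payload."""
    assert len(payload) <= PAYLOAD_SIZE
    header = bytes([0x47]) + packet_number.to_bytes(3, "big")
    return header + payload.ljust(PAYLOAD_SIZE, b"\x00")

pkt = make_packet(1, b"video and audio data")
assert len(pkt) == PACKET_SIZE
```

The fixed length is what lets the receiver regain packet alignment quickly: it simply looks for the sync byte every 188 bytes.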
The modulation scheme used in HDTV is 8-VSB, or eight-level vestigial sideband,
amplitude modulation. The carrier is suppressed, and only the upper sideband is transmitted. The serial digital data is sent to a D/A converter where each sequential 3-bit
group is converted to a discrete voltage level. This system encodes 3 bits per symbol,
thereby greatly increasing the data rate within the channel. An example is shown in
Figure 23-28  Packet format for HDTV: 4 bytes of sync/header followed by video/audio data, 188 bytes total.
Figure 23-29  Eight-level VSB signal: 2³ = 8 levels; each 3-bit group produces one of 8 levels per symbol duration.
Fig. 23-29. Each 3-bit group is converted to a relative level of −7, −5, −3, −1,
+1, +3, +5, or +7. This is the signal that amplitude-modulates the carrier. The resulting symbol rate is about 10.8 million symbols per second. This translates to a data rate of
3 × 10.8 M = 32.4 Mbps. Eliminating the extra RS and trellis bits gives an actual
video/audio rate of about 19.3 Mbps.
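The 3-bit-to-level mapping and the resulting gross data rate can be sketched directly. The level-assignment order below is illustrative; the real system also applies trellis coding before mapping.

```python
# Map each 3-bit group (value 0-7) to one of eight amplitude levels.
LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]

def to_symbols(bits):
    """Pack a bit list into 3-bit groups and map each group to a level."""
    assert len(bits) % 3 == 0
    symbols = []
    for i in range(0, len(bits), 3):
        value = bits[i] * 4 + bits[i + 1] * 2 + bits[i + 2]
        symbols.append(LEVELS[value])
    return symbols

print(to_symbols([0, 0, 0, 1, 1, 1, 1, 0, 1]))  # [-7, 7, 3]

# Gross channel data rate: 3 bits per symbol at ~10.8 Msymbols/s.
symbol_rate = 10.8e6
print(f"Gross data rate: {3 * symbol_rate / 1e6:.1f} Mbps")  # 32.4
```

Packing 3 bits into every transmitted symbol is what lets a 32.4-Mbps stream fit the symbol-rate limits of a 6-MHz channel.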
A modified version of this format is used when the HDTV signal is to be transmitted over a cable system. Trellis coding is eliminated and 16-VSB modulation is used to
encode 4 bits per symbol. This gives double the data rate of terrestrial HDTV transmission (38.6 Mbps).
The VSB signal can be created with a balanced modulator to eliminate the carrier
and to generate the sidebands. One sideband is removed by a filter or by using the phasing system described earlier in Chap. 3. The modulated signal is up-converted by a mixer
to the final transmission frequency, which is one of the standard TV channels in the VHF
or UHF range. A linear power amplifier is used to boost the signal level prior to transmission by the antenna.
Europe’s version of HDTV is called Digital Video Broadcast—Terrestrial (DVB-T). It is similar in many
ways to the U.S. ATSC system. However, its greatest deviation is its use of coded orthogonal
frequency-division multiplexing (COFDM) with 16-QAM or 64-QAM rather than the 8-VSB of the U.S.
system. The basic claim is that COFDM is a better technology for over-the-air TV because it is more
resistant to fading and multipath interference, so common in TV. The United States debated a
change to COFDM but decided to stay with 8-VSB whose performance has proved satisfactory.
HDTV Receiver. An HDTV receiver picks up the composite signal and then demodulates and decodes the signal into the original video and audio information. A simplified
receiver block diagram is shown in Fig. 23-30. The tuner and IF systems are similar to
those in a standard TV receiver. From there the 8-VSB signal is demodulated (using a
synchronous detector) into the original bit stream. A balanced modulator is used along
with a carrier signal that is phase-locked to the pilot carrier to ensure accurate demodulation. A clock recovery circuit regenerates the clock signal that times all the remaining digital operations.

Figure 23-30  HDTV receiver.
The signal then passes through an NTSC filter that is designed to filter out any co-channel or adjacent-channel interference from standard TV stations. The signal is also
passed through an equalizer circuit that adjusts the signal to correct for amplitude and
phase variations encountered during transmission.
The signals are demultiplexed into the video and audio bit streams. Next, the trellis
decoder and RS decoder ensure that any received errors caused by noise are corrected.
The signal is descrambled and decompressed. The video signal is then converted back
to the R, G, and B digital signals that drive the D/A converters, which, in turn, drive the red, green,
and blue electron guns in the CRT. The audio signal is also demultiplexed and fed to
AC-3 decoders. The resulting digital signals are fed to D/A converters that create the
analog audio for each of the six audio channels.
The FCC has mandated that all
new TV sets with screens larger
than 32 in have HDTV tuners by
2006. Further, by 2009, all new
TV sets must have a digital HDTV
tuner for over-the-air reception.
The State of Digital TV
Most over-the-air television is still the original analog NTSC programming. However,
progress is being made in digital TV. Satellite TV is all digital. And cable TV companies
are offering a growing amount of digital TV. Over-the-air HDTV is also available in most
of the major U.S. cities, but it has not been popular. However, growth is evident. The
declining prices of large-screen plasma, LCD, and projection sets with HDTV capability
have had the greatest impact. These prices are continuing to decline. A small but growing number of high-definition programming options is also making HDTV more attractive.
But despite the slow growth, the U.S. government is anxious to initiate a complete
switch to digital in the coming years. The previous declared deadline was the end of 2006,
but Congress has moved that date out to February 17, 2009. At the heart of this initiative
is the government’s desire to reclaim a large portion of the UHF TV spectrum (roughly
500 to 800 MHz) to auction off to cell phone companies for expanded growth. Spectrum
is also needed for new land mobile communications services and equipment that are
expected to help resolve the incompatibility of radio services among the various city police,
fire, and public services. The goal is to create fully interoperable radios for all government,
military, and other agencies to allow them to communicate reliably during disasters.
When 2009 arrives, all current NTSC analog transmission will cease and everyone
will have to switch to HDTV. A new TV set is the best option, but for those who cannot afford one, the government will subsidize special converter boxes that will receive
the HDTV signals and convert them to standard analog output for older TV sets.
In the meantime, another form of digital TV is emerging. Known as IPTV or Internet
Protocol TV, this form of TV will be transmitted using standard TCP/IP over high-speed
Internet connections such as those furnished by cable TV companies and DSL carriers.
Standard phone companies (AT&T, Verizon, etc.) will compete with cable TV companies to distribute TV to consumers. The adoption of more advanced video compression
techniques such as the ITU-T’s H.264 standard, also known as MPEG-4 AVC, is
expected to further improve picture quality while minimizing bandwidth.
A special form of TV now being developed is that created for use on cell phones and
other small-screen devices. Known as DVB-H, it is a form of digital TV derived from the
European Digital Video Broadcast (DVB) standard. The video is not high-definition since
the small screens have such a small pixel count. Services to deliver small-screen TV are
being developed in the 1670- to 1675-MHz band. An alternate service called MediaFLO
uses the 716- to 722-MHz band, formerly the channel 55 UHF spectrum.
In general, the United States is well behind the rest of the world in converting to
digital television. The technology is here now and has been for a number of years. The
conversion is inevitable but happening on a timetable controlled by government, politics,
economics, and public acceptance.
Summary
Television is radio communication with both pictures and
sound. In addition to standard audio transmission, TV systems
use a camera to convert a visual scene to a voltage known as the
video signal. This signal represents the picture information and
modulates a transmitter. The picture and the sound signals are
transmitted to the receiver. The receiver demodulates the signals and presents the information to the user. The TV receiver
is a special superheterodyne that recovers the sound and picture
information. The picture is displayed on a picture tube.
As common as a TV set is, we usually take for granted
the extremely complex process involved in receiving a TV
signal and recovering it to present the picture and sound outputs in a high-quality manner. During the several decades
since its invention, the TV set has evolved from a large
vacuum tube unit into a smaller, more reliable solid-state
unit made mostly with ICs.
Although TV signals are still transmitted by radio,
today most people get their TV signals via a hybrid fiberoptic coaxial cable system. A converter box at the TV set
converts the cable signals to a format compatible with the
TV receiver.
One of the most common means of TV signal distribution is via communication satellite. A communication satellite orbits the equator about 22,300 mi out in space. In this
position it rotates in synchronism with the earth and therefore appears to be stationary. The satellite is used as a radio
relay station. Satellites are widely used by the TV networks,
the premium channel companies, and the cable TV industry
for distributing their signals nationally. A newer form of consumer satellite is direct broadcast satellite (DBS) TV. The
DBS systems are designed for consumer reception directly
from the satellite.
The newest form of TV uses digital methods. Known as
digital TV (DTV) or high-definition TV (HDTV), these new
systems offer improved picture resolution and quality as well
as stereo sound. HDTV will be delivered over the air and by
cable TV companies.
Questions
1. What is the bandwidth of a standard TV signal?
2. State the kind of modulation used on the video carrier
and the sound carrier in a TV signal.
3. What is the frequency spacing between the sound and
picture carriers?
4. What do you call the brightness signal produced by a
monochrome video camera?
5. Name two widely used electronic imaging devices in TV
cameras that convert light variations to a video signal.
6. What is the name of the process that breaks up a picture or scene into serially transmitted signals?
7. What are the three basic colors that can be used to produce any other color light?
8. What is the maximum number of scan lines used in an
NTSC TV picture or frame of video?
9. How many scan lines make up one field of a TV picture?
10. What are the field and frame rates in NTSC color TV?
11. What is the rate of scanning one horizontal line in a
color TV set?
12. What is the name of the circuit that lets the picture
and sound transmitters use the same antenna?
13. The color camera signals are combined in a resistive
matrix to produce the two composite color signals.
What are they called?
14. How are the two color signals multiplexed and modulated onto the main video carrier?
15. What is the frequency of the subcarrier that the color
signals modulate?
16. What characteristic of the composite color signal
tells the receiver what the transmitted color is?
17. What type of modulation is used in the generation of
the chrominance signals?
18. What portion of a modern TV set uses digitally coded
infrared signals to control channel selection and volume level?
19. What is the name of the special filter that provides
most of the selectivity for the TV receiver?
20. The picture and sound IFs are heterodyned together to
form the sound IF. What is its frequency?
21. Name two common sound demodulators in TV receivers.
22. What is meant by a quadrature 3.58-MHz subcarrier
as used in demodulation?
23. What circuit strips the horizontal sync pulses from the
video detector output?
24. The horizontal sync pulses synchronize an internal sweep
oscillator to what frequency in a color TV receiver?
25. What is the shape of the horizontal and vertical sweep
signals which are currents applied to the horizontal
and vertical deflection coils?
26. What is the name of the assembly around the neck of
the picture tube to which the sweep signals are applied?
27. What element in the picture tube generates the electrons? What element in the picture tube focuses the
electrons into a narrow beam?
28. By what process is the electron beam deflected and
swept across the face of the picture tube?
29. How many electron guns are used in a color CRT to
excite the color dot triads on the face of the tube?
30. What is the name of the circuits that ensure that the
electron beams strike the correct color dots?
31. What stage in the TV receiver is used as a switching
power supply to develop the high voltage required to
operate the picture tube?
32. What is the name of the transformer used to step up and
step down the horizontal sync pulses to produce horizontal sweep as well as most dc power supply voltages
in a TV set?
33. What is the name given to the cable TV station that collects and distributes the cable signals?
34. What are the names of the main cables used to distribute the TV signals to subscribers?
35. What two types of cables are used for the main distribution of signals in a cable TV system?
36. Name the coaxial cable that feeds individual houses in
a cable TV system. What are the type designation and
impedance of this cable?
37. What is the name of the equipment at the cable station
used to change the TV signal frequency to another frequency? Why is this done?
38. What local-oscillator frequency would you use to
translate the TV signal at its normal IF values to cable
channel J?
39. What is the name of the circuit used to assemble or mix
all the different cable channels to form a single signal
that is distributed by the cable?
40. Name the two main sections of a cable converter box
used by the subscriber.
41. What is a reverse channel on a cable TV converter box?
42. Describe the nature of the output signal developed by
the cable converter box and where it is connected.
43. Name the cable TV digital standard and its source.
44. What is the approximate distance of a TV satellite
from the earth?
45. What kind of amplifiers are used in satellite TV receiver front ends?
46. What feature of the DBS system makes digital transmission and reception possible?
47. What natural occurrence can sometimes prohibit the reception of DBS signals?
48. Name two ways in which a satellite receiver, either conventional or DBS, is connected to a conventional TV set.
49. State the main reason for converting to a digital TV system.
50. What technique is used to reduce the data rate needed
to transmit the digital TV signal in a limited bandwidth? What is the name of the video standard used?
The audio standard used?
51. What is the allotted bandwidth of a digital TV channel?
Actual bandwidth?
52. What is the name of the primary error correction system used in HDTV?
53. By what method is the number of bits per symbol increased to speed up data transmission?
54. Describe the type of modulation used in HDTV.
55. What is the data rate through the TV channel?
56. How many audio channels are used? What is their sampling rate?
57. Calculate the total number of screen pixels in the 720p format.
58. What is the basic packet size of the HDTV signal?
59. Define progressive scanning. Why is it used?
60. Define HDTV. What standards are used in HDTV?
61. What is the name of the video compression standard
used in HDTV, and what is the approximate data rate?
Problems
1. Compute the exact video and sound carriers for a channel 12 TV station. ◆
2. What is the approximate upper frequency response of
the video signal transmitted?
3. Using Carson’s rule, calculate the approximate
bandwidth of the sound spectrum of a TV signal. ◆
4. How is the bandwidth of the video portion of a TV signal restricted to minimize spectrum bandwidth?
5. Describe the process by which the picture at the receiver is kept in step with the transmitted signal. What
component of the TV signal performs this function? ◆
6. Describe the process by which the color in a scene is
converted to video signals.
7. What signal is formed by adding the color signals in the
proportion 0.11B + 0.59G + 0.3R? ◆
8. How is channel selection in the tuner accomplished?
What type of local oscillator is used, and how does it operate?
9. A channel 33 UHF TV station has a picture carrier
frequency of 585.25 MHz. What is the sound carrier
frequency? ◆
10. What are the TV receiver sound IF and the picture IF frequencies?
11. What would be the local-oscillator frequency to receive a channel 10 signal?
12. How is the spectrum space conserved in transmitting
the video in a TV signal?
13. How is the composite chrominance signal demodulated? What types of demodulator circuits are used?
14. How is the 3.58-MHz subcarrier oscillator in the receiver phase- and frequency-synchronized to the transmitted signal?
15. How is the vertical sweep oscillator synchronized to a
sync pulse derived from the horizontal sync pulses occurring during the vertical blanking interval? What is
the vertical sync frequency in a color TV set?
16. How is cable attenuation occurring during distribution overcome?
17. Explain briefly how a cable company can remotely
control a subscriber’s cable converter.
18. Describe the antennas used in satellite transmission
and reception.
19. Why are the amplifiers and mixers in a satellite receiver usually located at the antenna?
20. What are the operational frequency range and the band
designation of a DBS receiver?
21. What is the rationale for HDTV?
22. Describe the audio format used in HDTV.
23. What will be the total number of pixels used to make up
the visible portion of the HDTV system? Calculate this
number for both the 1050- and the 787.5-line system.
24. Describe briefly how the audio and video data is transmitted in both the DBS and the HDTV systems. What
is the packaging, or format, of the data?
25. Briefly define DVB-T and DVB-H.
◆ Answers to Selected Problems follow Chap. 22.
Critical Thinking
1. Name three reasons why digital TV transmission is
preferred over analog TV on a satellite.
2. What limits the transmission of video over the telephone lines? Explain how it might be possible
to transmit video over the conventional telephone network.