ANSI/SCTE 96 2013
ENGINEERING COMMITTEE
Interface Practices Subcommittee
AMERICAN NATIONAL STANDARD
ANSI/SCTE 96 2013
Cable Telecommunications Testing Guidelines
NOTICE
The Society of Cable Telecommunications Engineers (SCTE) Standards are intended to serve the
public interest by providing specifications, test methods and procedures that promote uniformity
of product, interchangeability and ultimately the long term reliability of broadband
communications facilities. These documents shall not in any way preclude any member or non-member of SCTE from manufacturing or selling products not conforming to such documents, nor
shall the existence of such standards preclude their voluntary use by those other than SCTE
members, whether used domestically or internationally.
SCTE assumes no obligations or liability whatsoever to any party who may adopt the Standards.
Such adopting party assumes all risks associated with adoption of these Standards, and accepts
full responsibility for any damage and/or claims arising from the adoption of such Standards.
Attention is called to the possibility that implementation of this standard may require the use of
subject matter covered by patent rights. By publication of this standard, no position is taken with
respect to the existence or validity of any patent rights in connection therewith. SCTE shall not
be responsible for identifying patents for which a license may be required or for conducting
inquiries into the legal validity or scope of those patents that are brought to its attention.
Patent holders who believe that they hold patents which are essential to the implementation of
this standard have been requested to provide information about those patents and any related
licensing terms and conditions. Any such declarations made before or after publication of this
document are available on the SCTE web site at http://www.scte.org.
All Rights Reserved
© Society of Cable Telecommunications Engineers, Inc. 2013
140 Philips Road
Exton, PA 19341
TABLE OF CONTENTS
1.0 SCOPE
2.0 DEFINITIONS AND ACRONYMS
3.0 TEST PLAN CONSIDERATIONS
4.0 TEST EQUIPMENT
5.0 THE TOTAL TEST SYSTEM
6.0 MEASUREMENT ACCURACY
7.0 MEASUREMENT RECORDS
8.0 REFERENCE INFORMATION
LIST OF FIGURES
FIGURE 1 – RF SPECTRUM ANALYZER BLOCK DIAGRAM
FIGURE 2 – RF RECEIVER BLOCK DIAGRAM
FIGURE 3 – ENVELOPE DETECTOR
FIGURE 4 – SAMPLE DETECTION
FIGURE 5 – VIDEO FILTERING
FIGURE 6 – REFERENCE BROADBAND COMMUNICATIONS TEST SYSTEM
FIGURE 7 – REFERENCE BROADBAND COMMUNICATIONS TEST SYSTEM
FIGURE 8 – POWER (10*LOG) ADDITION
FIGURE 9 – VOLTAGE OR CURRENT (20*LOG) ADDITION
FIGURE 10 – GRAPH OF NNN/BNN CORRECTION
LIST OF TABLES
TABLE 1 – FIXED CHANNEL FILTER REQUIREMENTS
TABLE 2 – CTB MEASUREMENT ACCURACY ANALYSIS
TABLE 3 – POWER (10*LOG) ADDITION
TABLE 4 – VOLTAGE OR CURRENT (20*LOG) ADDITION
1.0 SCOPE
The test procedures that reference this document are intended to allow a competent
technician or engineer to perform the tasks of determining, to a reasonable degree of
certainty, the level of performance for the various parameters detailed. The procedures are
general in nature and with sufficient forethought and preparation, can be adapted to
individual devices, cascades or complete systems. The primary focus for these procedures
is for bench or laboratory testing, but the principles discussed are equally applicable to
field testing. When the suggestions made in this document conflict with the detailed steps
of a specific procedure, the specific test procedure will take precedence.
In order to maintain the simplicity and reduce the overall size of the individual procedures,
most theoretical and practical discussions regarding test equipment, methodology and
variations in techniques, as well as information which is generic or repetitive in nature is
discussed in this document. This will also allow alterations and/or updates to be handled
more easily by reducing the total number of documents (or sections) which will be
affected. Specific information or data required for a single test, or a limited number of
tests, will be found in those procedures as needed.
Measurements can normally be separated into two types: absolute and relative. Absolute
measurements are used for determining such items as signal levels, modulation deviation,
etc. Relative measurements are made with respect to a reference level or parameter and
some examples are distortion, frequency flatness, depth of modulation, etc. Absolute
measurements are typically more difficult to make within the same tolerance limits as
relative measurements since more measurement tolerances within the test equipment and
test configuration must be considered. Relative measurements are often quite accurate
since many of these tolerances are cancelled in the final calculations, especially when
measurement conditions are carefully maintained. Relative measurements are often used
as the basis for comparison between similar products and are valid when the measurement
conditions are identical.
2.0 DEFINITIONS AND ACRONYMS
Carrier Level or Carrier Power
Often used as a synonym for “signal level” or “channel power”. When the carrier level is
being modulated with information, the term “channel power” is more appropriate.
Channel Power or Channel Level
See definitions below for various types of signals. The level is usually presented in
decibels with respect to one millivolt RMS in a 75 Ω system (dBmV).
Continuous Wave (CW) Carrier Level
The RMS voltage of the sinusoidal signal.
Analog Video Channel Level
The RMS voltage of the sinusoidal signal during the video sync pulse.
Digital Channel Level
The RMS voltage of a sinusoidal signal that would produce the same heating in a 75 Ω
resistor as does the actual signal.
Intermittent Digital Channel Level
For a signal that occupies one assigned time slot in a time division multiple access
(TDMA) sequence of time slots, the level reported shall be the equivalent level as if the
signal being measured (any one of the multiple signals included in the total sequence)
was on continuously.
dB
Decibels. Logarithmic expression of the ratio between two values.
For two powers: x dB = 10 ∗ log( P2 / P1 )   (1)
For two voltages: x dB = 20 ∗ log( V2 / V1 )   (2)
dBc
Decibels relative to carrier power. Signals greater than the carrier will have a positive
result; signals less than the carrier will have a negative result.
x dBc = Disturbance Power (dB) - Carrier Power (dB) = 10 ∗ log( Pdisturbance / Pcarrier )   (3)
-dBc (negative dBc)
To avoid using negative numbers, the ratio between a disturbance (smaller than the carrier)
and the carrier is often specified as a positive number with the units -dBc.
dBm
Decibels relative to one milliwatt. 0 dBm equals 1 mW.
y dBm = 10 ∗ log( x mW / 1 mW )   (4)
where
x = the power in mW
y = the power in dBm
dBmV
Decibels relative to one millivolt RMS. 0 dBmV equals 1 mV.
y dBmV = 20 ∗ log( x mV / 1 mV )   (5)
where
x = the voltage in mV
y = the voltage in dBmV
dBµV
Decibels relative to one microvolt RMS. 0 dBµV equals 1 µV.
y dBµV = 20 ∗ log( x µV / 1 µV )   (6)
where
x = the voltage in µV
y = the voltage in dBµV
Unit Conversions
dBµV to dBmV
To convert from dBµV to dBmV, subtract 60 from the value:
y dBmV = x dBµV – 60   (7)
dBmV to dBµV
To convert from dBmV to dBµV, add 60 to the value:
y dBµV = x dBmV + 60   (8)
dBm to dBmV
To convert from dBm to dBmV, the following equation can be used:
y dBmV = 20 ∗ log( 1000 ∗ sqrt( (10^(x/10) / 1000) ∗ Z ) )   (9)
where
x = the power in dBm
y = the voltage in dBmV
Z = the impedance in ohms
dBmV to dBm
To convert from dBmV to dBm, the following equation can be used:
y dBm = 10 ∗ log( 1000 ∗ (10^(x/20) / 1000)^2 ∗ (1/Z) )   (10)
where
x = the voltage in dBmV
y = the power in dBm
Z = the impedance in ohms
In a 75 Ω system:
0 dBm = 48.75 dBmV
In a 50 Ω system:
0 dBm = 46.99 dBmV
dBmV measurements of 75 Ω systems with 50 Ω equipment
The difference when changing between 50 and 75 Ω systems is 48.75 - 46.99 = 1.76 dB. If
measurements of a 75 Ω system are made in dBmV on 50 Ω test equipment, the results will
be 1.76 dB too low. To obtain the correct dBmV value for the 75 Ω system, the loss of the
impedance matching system (generally a transformer or Minimum Loss Pad (MLP)) and
the 1.76 dB correction factor must be added to the result. When a 5.7 dB MLP is used, this
total correction factor is 7.46 dB.
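The conversions in equations (7) through (10) lend themselves to direct implementation. The following Python sketch is illustrative only (it is not part of this standard, and the function names are arbitrary); it reproduces the 75 Ω and 50 Ω reference values given above:

    import math

    def dbuv_to_dbmv(x_dbuv):
        """Equation (7): dBµV to dBmV."""
        return x_dbuv - 60

    def dbmv_to_dbuv(x_dbmv):
        """Equation (8): dBmV to dBµV."""
        return x_dbmv + 60

    def dbm_to_dbmv(x_dbm, z_ohms=75.0):
        """Equation (9): power in dBm to voltage in dBmV across z_ohms."""
        v_volts = math.sqrt((10 ** (x_dbm / 10) / 1000) * z_ohms)
        return 20 * math.log10(1000 * v_volts)

    def dbmv_to_dbm(x_dbmv, z_ohms=75.0):
        """Equation (10): voltage in dBmV to power in dBm across z_ohms."""
        v_volts = 10 ** (x_dbmv / 20) / 1000
        return 10 * math.log10(1000 * v_volts ** 2 / z_ohms)

    print(round(dbm_to_dbmv(0, 75), 2))   # 48.75
    print(round(dbm_to_dbmv(0, 50), 2))   # 46.99

The 1.76 dB difference between the two printed values is the 50/75 Ω correction factor discussed above.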
Flatness
The maximum peak to valley excursion of the transmission response over the specified
bandwidth.
Match
See Return Loss
Peak, Peak Level, Peak Carrier Level, Peak of the Carrier, or Peak Signal Level
“Peak” has two common uses:
1. The peak level of a signal or carrier is defined as the maximum voltage of that
signal or carrier. Generally, a specific measurement period is defined, during
which the maximum voltage is recorded. In some cases, only voltages that last for
at least a pre-defined duration are recorded, with large voltage excursions of a
shorter duration being ignored or limited by a narrow measurement bandwidth.
2. The more common usage of “peak” refers to the highest amplitude of a displayed
signal on a spectrum analyzer. It is often necessary to position the spectrum
analyzer marker on such a maximum value (generally with the “peak search”
feature) to get a proper reading. In this case, the quantity to be measured is the
maximum value of the RMS voltage and is not the peak voltage or peak power of
a signal.
Unless specifically stated, “peak” will refer to the second definition: the maximum value of the RMS voltage, not the peak voltage or peak power of a signal. For more information on spectrum analyzer detectors used to make this type of measurement, see section 4.4.1.1.
Return Loss
Ratio between the level of a signal impinging on a port and the level of the signal reflected
back from that same port.
Signal Level
Signal Level has two common uses:
1. Signal Level can be used to define the amplitude of a baseband signal. When
used in this way, the term “signal” is reserved for references to baseband
information, while carrier level or channel power is used for references to
modulated RF carriers.
2. Signal Level can be used to define the level of modulated RF carriers.
To avoid confusion between these two definitions, the term “Signal Level” should be
avoided. “Baseband Signal Level” should be used for baseband signals and “Channel
Power” should be used for modulated RF carriers.
Slope
A measure of the monotonic frequency response of the network from low to high
frequency. Slope is positive, or upward going, if the gain increases as the response is
swept from low frequency to high frequency.
Tilt
The variation in level across the operating range of the network. Positive tilt is defined to
occur if the signals at lower frequencies are lower in amplitude than those at higher
frequencies.
3.0 TEST PLAN CONSIDERATIONS
Many factors must be considered in order to assure confidence that performance tests will
provide valid and usable results. Care must be taken to ensure that test results are not
biased by pre-conceived notions of what is expected. The following are several factors to
consider when establishing a test plan. These factors should be established for both the
forward and return paths and both spectral components should be present simultaneously if
actual operating conditions are to be analyzed.
3.1 Test Signal Source
The signal loading required for device or system evaluation must be determined as
part of the test plan. Additionally, the same loading (or one that is at least
substantially identical) must be used in all tests which are to be used for comparative
purposes. Tests should be performed with a mixture of analog and digital signals that
represent the anticipated operating conditions of the device. Though it is certainly
possible, and in some cases useful, to test with a full loading of analog channels, there
are meaningful arguments to be made for having a mixture of analog and digital
channels so that one may optimize the set-up of a device for its intended use.
Some of the decisions to be made regarding the composition of the test spectrum (for
both the forward and return path tests) are:
1. The number and frequency of the analog signals (channels)
a. Will the traditional FM band be included (i.e., EIA channels 95-97, etc.)?
b. Will out-of-band data signals be included?
c. The worst-case conditions for some parameters will differ with different signal
loading.
2. The number, frequency and type of digital signals (channels)
a. What assigned bandwidth, what data rates, what modulation protocol, what
level relative to analog signal loading, etc.?
b. How much guardband will be provided between channels (if any)?
One example of a test spectrum plan is presented below as an illustration of the
process and necessary items.
Forward Signal Path; 54 to 1002 MHz
1. Full 498 MHz CW carrier loading in the forward path, with channels 95-97 (82
total channels per EIA 542 channelization plan)
2. 200 MHz of digital signal loading using 6 MHz wide 64 QAM or 256 QAM
channels. As an alternative, reduced amplitude CW carriers at the frequencies of
data channels may be used, but this is not the preferred method.
3. The digital signals will be tested at -6 dBc relative to the analog portion of the
spectrum.
Return Signal Path; 5 to 42 MHz
1. 35 MHz of digital signal loading using 1.5 MHz wide QPSK channels
2. Actual comparative testing, for noise and distortion performance, may be
conducted using 6 CW carriers within the pass band of 5 to 42 MHz.
Many different combinations of CW carriers, AM modulated signals, noise and actual
digital signals are possible. As an alternative to discrete digitally modulated carriers,
highly filtered broadband noise may be used. In this case, the spacing between
channels must be defined in order to provide notches for measuring distortion
products. When a digital signal is specified, the modulation type and occupied
bandwidth must be included.
Other signal loading/spectrum schemes may be devised to ensure comprehensive
testing of equipment capabilities. Full analog channel loading is useful to
demonstrate the worst-case performance for composite distortions, but the addition of
digital signals has added other impairments such as Carrier-to-Intermodulation Noise
(CIN).
3.1.1 Test Signal Frequencies
Signal carrier frequencies used in broadband telecommunications systems in the
US generally follow EIA Standard 542. This standard is based upon the
traditional broadcast television frequency plan, which now incorporates the
prescribed aeronautical offsets of ±12.5 kHz (or multiples thereof) for analog
channels within specific portions of the transmitted spectrum. Although this
frequency allocation plan represents “real life” conditions, it is not generally
used for purposes of product qualification testing. The major reason for not
using aeronautical offsets is economics and convenience. It would be quite
expensive and troublesome to retune all of the existing fixed frequency signal
generators in use today.
In addition to this, the original frequency plan with visual carriers placed 1.25
MHz above the lower channel band edge actually yields a more severe “worst
case” distortion contribution than the newer requirements. With the aeronautical
frequency offsets, the beats are bound within a larger spectral distribution
pattern, given equal frequency stability requirements. Therefore, aeronautical
offsets are not recommended for product qualification testing since this could
alter the absolute levels of some distortion components. If aeronautical offsets
are used, the distortion measurement procedure must be modified to ensure that
the entire distortion beat spectrum is included within the passband of the
measurement device.
Another consideration when determining frequency assignments for testing is
where to operate high frequency data channels. It might be desirable to set these
carriers with a +1.25 MHz offset relative to the standard analog channel visual
carrier frequency assignments in order to cause CTB distortions to accumulate
between the data channels rather than within them. If such a plan is anticipated
the test signal generator should be configured to test accordingly.
Distortion products caused by analog channel loading are fairly predictable and
it should be possible to determine which channels will be most affected by a
particular distortion parameter. Different distortions are likely to have their
“worst case” effect on different channels. The “worst case” channel may change
depending on the analog signal loading, the signal distribution (how many
channels are used, channel bandwidth and frequency allocation) and the degree
and type of tilt implemented.
Distortions caused by digital carriers should also be predictable based upon their
modulation characteristics and spectral positions. Some low bandwidth data
signals may actually act more like analog carriers. Other data signals may
manifest a third order distortion which closely resembles noise (carrier-to-intermodulation noise, CIN) and is additive to thermal noise. Some of the
distortion products caused by digital carriers will be transient in nature since the
signals themselves are transient.
3.1.2 Operating Test Source Levels
After the composition of the test spectrum is determined, the operational
conditions under which the device or system is to be loaded must be established.
The fundamental levels and tilts of the signals, which are encountered in “real
life”, vary greatly. A house amp, for instance, encounters a wide range of input
signal levels and tilt conditions based upon where it is located along the
distribution line. The exact test conditions used must be determined with the
understanding that every variable in test conditions requires an additional round
of testing in order to acquire a complete picture of a device’s performance.
As implied in the preceding paragraph, it is essential to determine the carrier
levels used for testing. These levels must be established for the device’s input
and output to ensure that it is operating within its specified range. In some
cases, as in line amplifiers and headend products, intermediate levels must also
be established and maintained. Different sections of broadband communications
plant will require different levels and ratios between forward and return signal
levels. The “in-home” environment generally demands that return path signals
be much higher in level than forward path signals. Equipment used in the in-home environment will normally not encounter as many simultaneous signals in
the return path as the coaxial distribution plant. In addition to establishing the
levels, the operational tilt at which a unit under test will operate must be
determined. The tilt is often different for input, output and intermediate signal
locations and the character of the tilt may be the inverse of the cable loss or
linear.
In a complex spectrum, which contains both analog and digital signals, the
digital signals must be set to a proper level relative to the adjacent analog
carriers. This task is complicated when the digital signals vary in modulation
scheme and bandwidth. To determine the “worst case” performance for a
particular parameter, an alternative spectrum loading configuration may be
required. If it is not apparent which spectrum load will provide a “worst case”
scenario then a complete range of anticipated level and tilt conditions must be
used to determine the potential for performance variations as a device is actually
applied in the field.
3.2 Device Under Test (DUT) Configuration
Tests can be conducted for an individual device, a cascade of similar devices or a
combination of different devices to represent much or all sections of a complete
system from headend to in-home terminal.
If anything more than an individual unit is to be tested then the amount and nature of
losses to be incorporated between the various components must be considered.
Different plant sections also have different characteristics. The trunk portions of the
plant generally have more cable loss and less passive loss than the distribution and the
characteristics of those losses will not react the same to temperature variations.
Sufficient test points must be included to allow for absolute and relative
measurements at every potential input and output, for both forward and return signal
paths.
3.3 Environmental Requirements
3.3.1 Temperature Testing
1. It is recommended that all equipment should be tested over the full specified
temperature range. For equipment used in outdoor plant, this should include
soaking the instrument at an appropriate cold temperature (manufacturer’s
minimum operating temperature) with the power off and verifying that the
DUT will start and operate normally.
2. Measurements should be taken at both extremes and at ambient as a
minimum. Other temperatures may prove to be useful for analytical tasks
(e.g., product development duties may suggest that measurements be made at 20 °C intervals).
3. A "Temperature vs. Time" chart is recommended to track temperature
variations as well as their timing and duration. Sufficient temperature
probes should be used to track ambient temperature as well as the “core”
temperature of any cable which is used. Placement of the cable's temperature
probe(s) is especially critical for accurate tracking.
4. The stability of the cable's temperature, rather than ambient, should provide
the trigger for each round of testing.
3.3.2 Additional Environmental Tests
The following is a list of additional environmental tests, which may be performed if specified by the manufacturer. Detail on how to perform these environmental tests can be found in several reference specifications, including MIL-PRF-28800.
1. Humidity – This test is typically done by verifying the performance of the
DUT after soaking the DUT at an elevated temperature and humidity for an
extended period of time.
2. Salt Spray Corrosion - Tests for performance in concentrations of salt spray
or other corrosive elements are difficult and the user should refer to MIL
specifications.
3. Water Resistance - Resistance to water may be measured using several
different methods from drip to blowing rain to total submersion. Most test
equipment intended for outdoor use is designed to withstand a blowing rain
and should be tested using a controlled volume of water directed at the
instrument from vertical ± 90°.
4. EMI and Susceptibility - These tests measure the level of radiated emissions
from the instrument and the instrument’s susceptibility to measurement
errors caused by environments with high RF signal levels. The user should
refer to IEC 801 specifications for the appropriate test procedures to be used.
3.3.3 Test Sample Considerations
1. Typically, testing one DUT will verify whether the device is within the manufacturer’s published specifications. If the measurement results are very close to the published specifications, a larger sample may be required.
2. Devices tested should be from a random sampling of production released
product, not early prototypes or lab samples.
4.0 TEST EQUIPMENT
4.1 RF Power Meters
The power meter is the fundamental test instrument for measuring signal levels. It
provides the highest degree of measurement accuracy and is most suitable as a
standard against which other signal measurement devices can be calibrated. A power
meter provides no frequency selectivity and therefore the user must isolate the carrier
to be measured. Analog television signals, with their visual, aural and color carriers,
cannot be measured properly because the power meter will measure the total power of
all three carriers.
The power meter may be used as a standard reference for comparison with other
signal level measurement devices. The difference in level measurement between the
power meter and the frequency selective device may be used as a correction factor for
future measurements. By making comparison measurements between the two devices
across the entire frequency band of interest, a table of correction factors may be
created. The two most common power sensing devices in use today are thermocouple
sensors and diode detectors. Each method of measuring average power has some
advantages and the preferred method is dependent upon the particular measurement
situation.
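As a sketch of the comparison process just described (illustrative only; the function and variable names are arbitrary), a correction table can be built by subtracting the frequency-selective instrument's reading from the power meter's reading at each comparison frequency, and the stored correction then added to later readings:

    def build_corrections(freqs_mhz, meter_dbmv, analyzer_dbmv):
        """Correction factor per frequency: meter reading minus analyzer reading."""
        return {f: m - a for f, m, a in zip(freqs_mhz, meter_dbmv, analyzer_dbmv)}

    def corrected_reading(corrections, freq_mhz, analyzer_dbmv):
        """Apply the stored correction to a later analyzer measurement."""
        return analyzer_dbmv + corrections[freq_mhz]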
1. Thermocouple Sensors
Thermocouple sensors typically have better return loss and higher maximum
input level. They are normally limited on the low-end to about +25 dBmV, but
can easily measure to > +60 dBmV. Thermocouple sensors also measure true
RMS power, even when measuring complex signals.
2. Diode Detector Sensors
Diode detector sensors have the advantage of being able to measure lower level
signals, typically to < 0 dBmV. The maximum input level to a diode sensor is
typically about +25 dBmV, although higher-level sensors have recently become
available. The response time of diode detector sensors is much quicker, so
automated tests will operate faster. Older diode sensors may measure incorrectly
with complex signals. It is also important to operate in the square law region of
the sensor for best accuracy.
With the continuous improvement in power meter technology, the differences
between measurement technologies are becoming smaller. For a thorough discussion
of power meter fundamentals refer to Reference [6].
4.2 Signal Level Meters
A standard broadband communications signal level meter (SLM) is actually a
frequency selective voltmeter. In its simplest form it combines an envelope amplitude
detector circuit with a tuning circuit, a switchable attenuator and a display device.
Modern high quality SLMs with digital readouts are actually very good devices for
setting amplitude modulated (AM) signals. Though not as accurate as the power
meter, they have historically been more accurate than the more complex spectrum
analyzers, since they are designed for a very specific task.
4.3 Signal Analyzers
4.3.1 Spectrum Analyzers
A third type of signal amplitude measurement device is the spectrum analyzer.
The spectrum analyzer is the most versatile device for measuring relative signal
differentials, especially for those with a frequency offset from the reference
carrier or signal. It provides a powerful visual display which allows for
simultaneous viewing of a multitude of signals, both intentional and
unintentional. Spectrum analyzers have been developed to a very high degree of
usefulness, incorporating some very powerful features.
The spectrum analyzer is more flexible than the SLM or the power meter but
historically has not been as accurate for measuring levels. However, the latest
generation of digital spectrum analyzers has amplitude specifications similar to
the best signal level meters.
Among the more useful features one should look for in a spectrum analyzer are:
1. Normalization or “zeroing” of certain functions in order to allow for
accurate differential measurements.
2. Markers for absolute amplitude and frequency measurements and
measurements relative to a user selectable reference.
3. Comprehensive self-test and self-calibration check.
4. Complete “on-screen display” of significant data points including equipment
settings, levels, frequencies, etc. Additionally, the displayed information
should be included in any printout or data storage schemes.
5. Standard data communications port for file transfer to printer, plotter or data
storage device.
Some of the features available for spectrum analyzers may not be ideally suited for lab or bench test, although they are certainly appropriate for field use. A preamplifier is often required for better sensitivity. If an internal
preamplifier is used, it is important to characterize it adequately in order to
understand its impact on the measurement being made. If this characterization is
not possible, an external amplifier is preferred.
Another area of caution when using an analyzer (or signal level meter) is the
accuracy of the internal attenuator. Historically, attenuators were generally only capable of 10 dB steps, and it is difficult to determine the true attenuation of each
step. This becomes critical when attempting to make relative measurements.
Newer analyzers do an excellent job of compensating accurately for the internal
attenuator. A typical spectrum analyzer block diagram is shown in Figure 1.
Figure 1 – RF Spectrum Analyzer Block Diagram
4.3.2 RF Receivers
An alternative solution to the RF spectrum analyzer is the RF Receiver. The RF
Receiver typically provides more flexibility in the measurement configuration and allows higher dynamic range measurements than typical Spectrum
Analyzers. The RF receiver provides the frequency conversion to IF and
demodulation of the signal along with additional measurement functionality. A
typical RF Receiver block diagram is shown in Figure 2.
Figure 2 – RF Receiver Block Diagram
The following section discusses differences between the RF Spectrum Analyzer
and RF Receiver, including advantages and disadvantages.
RF Input Performance - Both block diagrams (Figures 1 and 2) indicate RF pre-selection before the 1st mixer, which limits the power level present at the 1st mixer and minimizes distortion products generated within the receiver’s front
end. Pre-selection filtering is standard in most RF receivers, but has
traditionally not been available in any but the most expensive spectrum
analyzers. Many newer spectrum analyzers offer internal pre-selectors, but when
using older analyzers, external pre-selection filters are required for best dynamic
range.
RF Receivers also typically include a low noise preamplifier between the pre-selector and 1st mixer which establishes a lower noise figure for the RF front
end and provides the best sensitivity to low level signals. If using an external
preamplifier, it is important to place it after the external pre-selection filter and
verify that the dynamic range of the preamplifier is sufficient to handle the signal
levels being measured without contributing additional distortion.
Since RF receivers are specifically designed for receiving and demodulating
narrowband communications signals, they are typically designed with an
emphasis on minimizing the phase noise of the 1st local oscillator which is used
to up convert the incoming signal to the first IF frequency. Spectrum Analyzers
have traditionally been designed with an emphasis on sweep speed and may not
have the same low phase noise performance as an RF Receiver. This is also changing
over time, and spectrum analyzers with excellent phase noise performance are
becoming much more affordable.
Detectors - The spectrum analyzer block diagram in Figure 1 shows an envelope
detector for demodulation of the signal’s AM component followed by a video
peak or sampling detector. This demodulation / detector scheme is not ideal for
noise-like signals, although correction factors can be used to compensate.
RF receivers typically provide a wider selection of detector choices for different
measurement requirements. This selection of detectors will normally include a
Square Law or RMS detector. The Square Law or RMS detector is also now
available on the more expensive spectrum analyzers. The Square Law or RMS
detector provides the actual “root-mean-square” value of the signal as well as for
the noise. More detail is provided concerning detectors in Section 4.4.
4.3.3 Baseband Analyzers
The Baseband Analyzer’s main use is to resolve signal components, largely composed of frequencies from the low Hz range to tens of MHz, into their respective amplitudes. The spectral amplitudes are then plotted on the
instrument’s front panel display, in much the same format as the RF Spectrum
Analyzer display (i.e. as a ‘spectral density’) using units of voltage, instead of
units of power. Much of the RF Spectrum Analyzer display functionality is also
built into the display of the Baseband Analyzer. However, the internal hardware
of the Baseband Analyzer is closer to an oscilloscope than an RF Spectrum
Analyzer.
The Baseband Analyzer consists mainly of three important functional blocks: the
input ‘instrumentation amplifier’, the A/D (analog-to-digital) converter and the
Signal Processor. The input instrumentation amplifier can be thought of as a
high performance ‘Operational Amplifier’ design. It provides a stable input
impedance, wide bandwidth, low noise, fast rise time and low distortion, for
signals which can contain large voltage/amplitude swings. Most importantly, it
maintains this key performance to DC voltages. These signals are then
converted to discrete samples of the waveform by the A/D Converter, which are
then converted to a spectral density plot, using the algorithm of a DFT (Discrete
Fourier Transform). Besides providing the DFT, the Signal Processor also
allows other computations to be carried out on (multiple) waveforms stored in
memory.
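The A/D-plus-DFT processing chain described above can be sketched as follows. This is an illustration, not a description of any particular instrument; the Hann window and the amplitude scaling are assumptions made for the example:

    import numpy as np

    def voltage_spectrum_dbuv(samples_volts, sample_rate_hz):
        """Window the digitized waveform, apply a DFT, and scale each bin to dBµV."""
        n = len(samples_volts)
        window = np.hanning(n)                              # reduce spectral leakage
        spectrum = np.fft.rfft(samples_volts * window)
        amplitude_v = 2 * np.abs(spectrum) / window.sum()   # per-bin sine amplitude
        freqs_hz = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)
        return freqs_hz, 20 * np.log10(np.maximum(amplitude_v / 1e-6, 1e-12))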
High performance Baseband Analyzers contain enough resolution in the A/D
Converter and performance of the amplifiers to reach dynamic range levels of 90
dBc, or better, from waveforms that have amplitudes in the volts range.
Therefore, the Baseband Analyzer instrument is often used to measure the
detected distortion in an RF signal, after demodulation.
4.3.4 Measuring Digital Signals
4.3.4.1 Digital Power Measurements
Measuring the power of a digitally modulated carrier is not as straightforward as the procedure for measuring analog video signals. By
definition, the power of a digitally modulated carrier is the power as
measured by a power meter which uses a thermocouple as a transducer.
This is the true RMS value of the sinusoidal signal which would produce
the same heating in a 75 Ω resistor as does the actual signal.
The correct way to measure the power in a noise-like digitally modulated
carrier using a spectrum analyzer or signal level meter is to:
1. sample the average power values at equally spaced frequency points
across the bandwidth of the carrier
2. integrate the linear value of these samples
3. correct the result for envelope detection error and noise equivalent
bandwidth of the measurement instrument
4. convert the result to a logarithmic value for display
Because this measurement is made across a specified bandwidth, the
bandwidth of the channel measured becomes an integral part of the
measurement. The measurement result is typically specified in one of two
ways. It may be expressed as:
xx dBmV (Occupied Channel BW)
Example: +23 dBmV (2 MHz)
or
xx dBmV (1 Hz)
Example: -40 dBmV (1 Hz)
Both of these results represent the same amount of power in a 2 MHz
channel. Quite often in the first example, the “2 MHz” is dropped and the
result is just +23 dBmV. In this case, the operator needs to know the
bandwidth of the channel measured. In the second example, the total
power of the channel is not known without knowing the channel
bandwidth.
Some measurement devices assume a flat noise characteristic across the
bandwidth of the channel and make a single measurement in the channel
and calculate the total power of the channel using the known channel
bandwidth. This approach is only accurate when the channel is indeed flat
or has the same shape characteristic as was used to calibrate the
instrument’s correction factor.
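The four-step procedure above can be expressed in code. The sketch below is illustrative only: the noise equivalent bandwidth factor and the detector correction are example values that must be obtained from the instrument's documentation (the 2.51 dB log-display noise correction is discussed in section 4.4.1.1):

    import math

    def channel_power_db(samples_db, spacing_hz, rbw_hz,
                         nebw_factor=1.06, detector_corr_db=2.51):
        """Integrate equally spaced average-power samples across a channel.
        samples_db: per-point average power readings (step 1)
        nebw_factor: RBW-to-noise-equivalent-bandwidth ratio (instrument-specific)
        detector_corr_db: detection-error correction (instrument-specific)
        """
        nebw_hz = rbw_hz * nebw_factor
        linear = [10 ** (s / 10) for s in samples_db]           # step 2: linear values
        integrated = sum(linear) * spacing_hz / nebw_hz         # integrate over the channel
        return 10 * math.log10(integrated) + detector_corr_db   # steps 3 and 4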
4.3.4.2 Digital Impairments
Bit Error Rate (BER) is defined as the ratio of the bits in error in a data
stream to the transmitted bits in a given time period. BER is accepted as
the ultimate measure of success for the transport and distribution of digital
signals.
BER measurements are dependent on the modulation type and type of
Forward Error Correction (FEC) used. The performance of a digital
communication channel in an RF network with FEC experiences a cliff
effect which causes the BER to behave well in the presence of lower levels
of impairment and degrade rapidly as the impairment is increased.
Therefore, reliance on post FEC BER as the only indication of a
communication channel’s performance provides very little information
concerning operating margin. In normal operation, a post FEC BER
display will generally indicate a zero BER. Indications of a non-zero BER
following FEC will normally be accompanied by a low Modulation Error
Ratio (MER) reading and should be cause for concern.
BER provides only a quantitative indication of a communications
channel’s performance. It does not provide any information regarding the
cause of the errors. Therefore, other parameters such as Modulation Error
Ratio (MER), Error Vector Magnitude (EVM), adaptive equalizer tap
analysis and constellation analysis should be used in conjunction with
BER measurements. For those needing BER measurements, either at the
transmission level or after forward error-correction, the techniques
required are well documented in ANSI/SCTE 121 2011.
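As defined above, BER is simply the count of errored bits divided by the total bits observed. A minimal sketch follows (illustrative only; actual measurements are made with test equipment per ANSI/SCTE 121 2011):

    def bit_error_rate(tx_bits, rx_bits):
        """BER = errored bits / total bits over a given observation period."""
        if len(tx_bits) != len(rx_bits) or not tx_bits:
            raise ValueError("bit streams must be the same non-zero length")
        errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
        return errors / len(tx_bits)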
4.4 Detectors
4.4.1 IF Detectors
4.4.1.1 Envelope or Linear Detector
Spectrum analyzers and signal level meters typically convert the IF signal
to video with an envelope or linear detector. In its simplest form, an
envelope detector is a diode followed by a parallel RC combination (see
Figure 3). The detector is nonlinear as far as the RF carrier is concerned
but linear as far as the modulation is concerned. The detector is often
analyzed as a mixer with the carrier as the local oscillator, but may also be
analyzed as a half wave or full wave rectifier. The purpose of the low pass
filter is to separate the modulation from the RF or IF carrier.
The output of the IF chain is applied to the detector and the detector time
constants are such that the voltage across the capacitor equals the peak
value of the IF signal at all times. That is, the detector can follow the
fastest possible changes in the envelope of the IF signal but not the
instantaneous value of the IF sine wave itself (typically 10.7 MHz or 21.4
MHz).
Figure 3 - Envelope Detector
For most measurements, a narrow enough IF resolution bandwidth is
chosen to resolve the individual spectral components of the input signal.
When measuring analog video signals, the IF resolution bandwidth needs
to be sufficiently wide to pass enough of the horizontal sync spectral
components for detection of the peak sync tip. The envelope detector
follows the changing amplitude values of the peaks of the signal from the
IF chain but not the instantaneous values and gives the analyzer or signal
level meter its voltmeter characteristics.
When used with random noise, an envelope detector creates a reading
which is lower than the true RMS value of the average noise. This
difference is 1.05 dB. Thus, if we measure noise with a spectrum analyzer
using voltage-envelope detection (the “linear” scale) and averaging, an
additional 1.05 dB needs to be added to the result to compensate for
averaging voltage instead of averaging voltage squared.
If the logarithmic display mode is being used, the log shaping used in
spectrum analyzers amplifies noise peaks less than the rest of the noise
signal. Because of this, the reported signal level is smaller than its true
RMS value. The total correction for the log display mode combined with
the detector characteristics is 2.51 dB, and should be used any time
random noise is being measured in the log display mode. A thorough
mathematical analysis of these correction factors is contained in Reference
[14].
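These fixed corrections can be applied directly to displayed noise readings, as in the following minimal sketch using the two values given above:

    def corrected_noise_level(displayed_db, log_display=True):
        """Add the averaging/detection correction for random noise readings.
        2.51 dB applies to the log display mode; 1.05 dB to the linear
        (voltage-envelope) mode with averaging."""
        return displayed_db + (2.51 if log_display else 1.05)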
4.4.1.2 RMS Detector
RMS detectors display the root of the mean of the square of the signal and are the only commonly used detectors that can measure true power or the
power in a non-sinusoidal signal. All of the previous detectors display
power by assuming a sine wave input and calibrating the display. This is
satisfactory until more than one random signal appears at the detector.
RMS detectors read the true power by measuring the RMS voltage of the
signal.
4.4.1.3 Square Law Detector
Square Law detectors display the mean of the square of the signal and also
measure the true power of the signal. The output of the square law
detector is a linear function of the input power, a fact that is sometimes
useful. Very early detectors were square law and linear detectors are
square law at low signal levels. Wide range square law detectors became
practical with the availability of analog squaring circuits. The Square Law
detector does not reproduce the modulation waveform without distortion.
The basic difference between the square law detector and the linear
detector can be expressed mathematically as follows.
Linear Detector: VOUT = k ∗ VRF   (11)
Square Law Detector: VOUT = k ∗ PRF = k ∗ (VRF)^2   (12)
where
VOUT = low frequency output voltage
k = detector constant
VRF = RF voltage
PRF = RF power
4.4.2 Video Detectors
4.4.2.1 Positive Peak Detector and Negative Peak Detector
Positive and negative peak detectors take the output of the envelope or
linear detector and display either the positive peak or the negative peak of
that signal. This detector is used for measuring the maximum or minimum
level of the signal over a period of time.
4.4.2.2 Sample Detector
Sample detectors are used in analyzers that have internal digital
processing. In order to limit the size of the data being processed, the input
from the linear detector is sampled and processed. Usually this is invisible
because there are hundreds of sampled points across the screen.
4.4.2.3 Rectifying Detector
The rectifying detector is the same circuit analyzed as a rectifier. The linear detector is usually analyzed in terms of fast peak charge and slow discharge; the rectifying detector and the linear detector are equivalent.
4.4.2.4 Rectifying Averaging Detector
The use of the term "Rectifying Averaging Detector" became necessary
when measuring noise because it is necessary to accurately know the
bandwidth of the output of the linear detector. For all practical purposes,
"Rectifying Averaging Detectors" are equivalent to linear detectors.
4.4.3 Video Filtering
Spectrum analyzers display signals plus their own internal noise. To reduce the
effect of noise on the displayed signal amplitude, the video can be smoothed or
averaged, as shown in Figure 5. Most analyzers include a variable video filter
for this purpose. The video filter is a low-pass filter that follows the detector
and determines the bandwidth of the video circuits that drive the A/D converter.
As the cutoff frequency of the video filter is reduced to the point at which it
becomes equal to or less than the bandwidth of the selected IF resolution
bandwidth filter, the video system can no longer follow the more rapid variations
of the envelope of the signal passing through the IF chain. The result is a
smoothing of the displayed signal.
Figure 4 – Sample Detection (spectrum display: level in dBmV versus frequency in MHz)
Figure 5 - Video Filtering (spectrum display: level in dBmV versus frequency in MHz)
The effect is most noticeable when measuring noise, particularly when a wide
resolution bandwidth is used. As the video bandwidth is reduced, the peak-to-peak variations of the noise are reduced. The degree of noise reduction is a
function of the ratio of video to resolution bandwidth. At ratios of 0.01 or less,
the smoothing is very good.
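The smoothing action of the video filter can be modeled as a single-pole low-pass filter applied along the sweep. This is a simplified illustration, not any particular analyzer's implementation; the alpha parameter stands in for the video-to-resolution bandwidth ratio:

    def video_filter(trace_db, alpha):
        """Single-pole low-pass smoothing along the sweep.
        alpha ~ VBW/RBW ratio; values of 0.01 or less smooth heavily."""
        smoothed, y = [], trace_db[0]
        for point in trace_db:
            y += alpha * (point - y)
            smoothed.append(y)
        return smoothed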
4.4.4 Video Averaging
Today’s digital analyzers provide video averaging as an alternative approach for
display smoothing. Video averaging is done over two or more sweeps on a
point-by-point basis. At each new display point, the new data value is averaged
with the previously averaged data. Video averaging retains the point-to-point
accuracy of the analyzer at the same time it smoothes the display. It
accomplishes this at the expense of update rate since it takes several sweeps to
gradually converge to an average.
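The point-by-point averaging described above amounts to a running average across sweeps, as in this illustrative sketch:

    def average_sweep(avg_trace, new_trace, sweep_count):
        """Update the per-point average with the latest sweep (sweep_count >= 1)."""
        if avg_trace is None:
            return list(new_trace)          # first sweep: nothing to average yet
        return [a + (x - a) / sweep_count for a, x in zip(avg_trace, new_trace)]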
If the signal being measured is noise or a very low level signal near the noise, the
effects of video filtering and video averaging are very similar. The significant
difference between the two smoothing approaches is that video filtering is a real
time measurement and its effect is seen on each point as the sweep progresses.
Video averaging requires multiple sweeps and the averaging at each point takes
place over the time period required to complete the multiple sweeps.
Using video filtering, a signal with a spectrum that changes with time will yield
a different average on each sweep. But if video averaging is used, the final
result will be much closer to the true average.
4.5 Signal Sources
Signal generation devices for both forward and reverse signaling paths, whether
analog, digital or both, form the foundation upon which all comparative measurement
data is taken.
It is recommended that a “flat” spectrum be established at the various sources and that
any desired spectrum tilt be imposed through an external device such as a tilt
equalizer or equivalent (“True Tilt Networks,” for instance). This will simplify the
alignment of the source and provide a more repeatable measurement. The tilt used may vary as to the absolute amount, direction (positive or negative) and type (“linear” or “cable”). Some devices are
aligned for linear tilt or no tilt at their output, while others are aligned for flat input
prior to testing. Regardless of the alignment used, the input and output amplitudes
must be clearly specified and documented for all tests.
Signals now come in two basic types — analog and digital (actually, all signals are
analog in the RF domain). For testing purposes the actual difference between the two
types relates more to the presence or absence of modulation. Continuous Wave (CW)
signals are normally used to represent standard NTSC modulated signals, for all tests
except cross modulation. It has been demonstrated that this type of signal will allow a
close approximation of standard television signals for analysis of actual system
performance under “worst case” conditions. Digital signals, on the other hand, are
much more varied in nature, ranging from low-bandwidth (100 kHz) FSK control
signals to relatively high-bandwidth signals, such as the 6 MHz 64 and 256 QAM
channels currently being used within broadband communications plants. Digitally
modulated signals do not have a stationary “carrier” which can be measured
accurately with the customary “peak detection” instruments currently in use.
4.5.1 Analog Signal Generation
The total test system requires a multi-carrier analog signal generator to simulate
the channel loading found in real cable telecommunications systems. Typically,
continuous wave (CW) carriers are used for laboratory testing, in order to
produce repeatable results. For cross modulation measurements (XMOD), these
CW carriers must be modulated by a well-defined signal.
To meet these requirements, the multi-carrier analog signal generator must have
the following basic features:
• The generator must be capable of creating CW signals at all the required frequencies, according to the desired frequency plan.
• The output power of each of the CW signals must be individually adjustable, at a level sufficient to produce the desired testing conditions.
• The CW signals must be non-coherent, i.e. each must have its own free running reference signal, and the stability of the reference must be sufficient to keep the beats within the measurement resolution bandwidth.
• The CW signal at each measurement frequency must have the ability to be turned off. The power of a signal when it is in the 'OFF' state must be at least 10 dB below the power of the impairment being measured.
• Every CW signal must have the ability to be modulated with a 50% duty cycle, downward only, 100% square wave modulation.
• The modulation signal must be coherent, i.e. the same modulation signal must be applied to all channels.
• The frequency of the modulation signal must be 15.750 kHz, or equal to the desired horizontal synchronization pulse frequency.
• The modulation at each measurement frequency must have the ability to be turned 'OFF'.
4.5.1.1 Spectral Purity
One of the most important performance characteristics of a multi-carrier
analog signal generator is spectral purity. The spectral purity of a signal
describes how closely the actual signal matches the desired signal.
Because no real signal generator is perfect, there will always be some
difference between the actual signal and the desired signal. Consider the
55.25 MHz carrier in the output of a multi-carrier signal generator.
Mathematically, the only spectral component in the frequency range from
52.25 to 58.25 MHz is a pure cosine at exactly 55.25 MHz. In reality, this
frequency range will include a large signal at a frequency very close to
55.25 MHz, some low level interference signals at specific frequencies,
and noise. The interference signals may be unwanted distortion, spurious
signals, or modulation occurring within the signal generator.
There are several possible sources of noise (e.g. quantization noise,
thermal noise, impulse noise), but the noise of most multi-carrier analog
signal generators may be treated as having two contributions, phase noise
and broadband noise. Every signal that is generated includes some amount
of phase noise. For an introductory discussion of phase noise, refer to
Reference [10]. Every real device also has broadband noise associated
with it. The amount of broadband noise depends primarily on the
temperature and the noise figure of the device. For an introductory
discussion of broadband noise, refer to Reference [9]. In order to measure
typical devices, which have low distortion and noise levels, the spectral
purity of the multi-carrier analog signal generator must be very good. For
most applications, the distortion and noise levels of the multi-carrier
analog signal generator must be at least 10 dB below the levels to be
measured. If the internal CSO, CTB, or XMOD of the signal source(s) is
produced in a way similar to the CSO, CTB, or XMOD of the Device
Under Test (DUT), the internal CSO, CTB, or XMOD products must be at
least 20 dB below the levels to be measured.
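The 10 dB and 20 dB margins bound the error introduced when the generator's own contribution combines with that of the DUT. Assuming the two contributions add on a power basis, the worst-case reading error can be computed directly (a worked illustration, not part of the requirement; coherent products that add on a voltage basis produce larger errors, which is why the larger 20 dB margin is required in that case):

    import math

    def addition_error_db(margin_db):
        """Worst-case reading error when an unwanted contribution lies
        margin_db below the product being measured and adds in power."""
        return 10 * math.log10(1 + 10 ** (-margin_db / 10))

    print(round(addition_error_db(10), 2))  # ~0.41 dB
    print(round(addition_error_db(20), 2))  # ~0.04 dB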
4.5.1.2 Stability
Another critical characteristic of the multi-carrier analog signal generator
is stability. To obtain repeatable measurements, the output power and
frequency of each CW signal must be stable.
The output power stability of each signal directly affects the measurement
repeatability. The performance of the device under test often varies
depending on the signal power, so each 0.1 dB change in input power can
produce a 0.1 or 0.2 dB change in the measurement. Results are often expressed relative to the carrier power, so if the carrier power
changes, the measured result will change by an equal amount. Because of
these dependencies, it is imperative that the power of each signal remains
constant, not only over time, but also when the surrounding signals are
turned 'On' and 'Off', and when the signal itself is turned 'Off' and then
back 'On'.
The frequency stability of the signals also affects the accuracy and
repeatability of the measurements, because the frequency distribution of
distortion products is dependent on the frequencies of the carrier signals.
If the actual carrier frequencies drift, the frequencies of the individual
distortion products will also drift. Some of the individual distortion
frequencies may drift out of the measurement bandwidth, producing
inaccurate measurements. The individual distortion frequencies may also
drift such that they beat together, producing low frequency variations in
the measurements.
4.5.2 Digital Signal Generation
The total test system may require the presence of digital signals in addition to the analog signals. The digital signals may be generated by a bank of 16-QAM, 64-QAM, 256-QAM or QPSK modulators. The data stream used for these modulators is typically a pseudo-random sequence.
For some tests, it may be sufficient to use a highly filtered broadband noise
source instead of QAM or QPSK signals. If filtered noise is used, care must be
taken to assure that the peaks of the noise are not compressed. In general, no
amplifiers should be used after the filters. The peak factor (sometimes called
crest factor) of the filtered noise should be at least 13 dB. For more information
about QAM and QPSK signals, refer to Reference [4].
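One practical way to verify that a filtered noise source is not being peak-compressed is to compute the crest factor from a captured waveform. A minimal sketch, assuming the waveform is available as digitized voltage samples:

    import numpy as np

    def crest_factor_db(samples_volts):
        """Peak-to-RMS ratio in dB; should be at least 13 dB for filtered noise."""
        peak = np.max(np.abs(samples_volts))
        rms = np.sqrt(np.mean(np.square(samples_volts)))
        return 20 * np.log10(peak / rms)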
4.6 RF Post Amplifiers
An RF amplifier may be required to raise signal levels at the receiver to a more
useable amplitude. One (or more) of these devices will typically be included in the
test system. These amplifiers are normally used after a bandpass filter which limits
the bandwidth of signals and total power which the amplifier must handle. It is
recommended that NO amplifiers be placed between the signal sources and the DUT. Recommended characteristics for post amplifiers include:
1. Gain ≅ 18 to 25 dB. Too much gain can be just as much a problem as too little.
2. NF ≤ 10 dB. The noise figure required will be determined by the nature of the
measurement being made.
3. Input / Output Return Loss > 14 dB
4. Frequency flatness within the 6 MHz measurement bandwidth < 0.2 dB peak to
valley
5. 1 dB gain compression ≥ 10 dB above the highest signal measured
4.7 Filters
Filters are an essential part of the test system. These devices come in various
configurations:
1. Bandpass filters allow a selected portion of the spectrum to pass through while
reflecting all other portions.
2. Bandstop filters are the opposite of bandpass filters. They stop a selected portion
of the spectrum by reflecting the energy, and allow all the rest to go through.
3. Highpass filters allow everything above a specified frequency to pass through
while reflecting everything below.
4. Lowpass filters are the opposite of the highpass type, allowing everything below a
specified frequency to pass while reflecting everything above it.
Various combinations of the types listed above may also be used.
No filter actually acts as a “brick wall”. Each will have some amount of crossover
spectrum where its frequency response changes on a monotonic slope. Some
bandpass filters may allow harmonics of the passband frequencies to pass through as
well, relatively unaffected. This can cause problems in certain measurements and
should be avoided whenever possible. In general, filter performance will degrade as the operating frequency increases, although this may be compensated for by using more complex filters.
Additionally, filters can be either fixed or tunable. Fixed filters are generally
preferred for testing, as their characteristics are more stable. Tunable filters provide
flexibility, since they allow testing across a range of frequencies. In either case the
filter should be carefully centered around the carrier under test. Table 1 provides the
minimum frequency response and bandwidth characteristics for an acceptable filter:
Parameter    Loss            Bandwidth
Flatness     ≤ ± 0.15 dB     Carrier Frequency ± 1.5 MHz
Flatness     ≤ ± 0.5 dB      Carrier Frequency ± 2.5 MHz
Rejection    ≥ 25 dB         Carrier Frequency ± 6.0 MHz
Rejection    ≥ 50 dB         ≥ Carrier Frequency ± 12.0 MHz
Table 1 - Fixed Channel Filter Requirements
If the bandwidth of a filter is too narrow it may attenuate a beat product which has a
significant frequency offset relative to the carrier, such as CSO. This problem is
especially prevalent in the measurement of lower frequency channels as the “Q” of the
filters for these frequencies is often very good and the bandwidth is less than desired.
Misalignment of tunable filters is also an issue and can cause problems with any
measurement, including that of noise.
A bandpass filter acts as an open circuit to all frequencies that are outside of its
passband. This means that it will cause out of band reflections that may affect the
amplitude of other signals in the test system through phase addition and cancellation
and can affect the operation of the device under test. This effect can be minimized
through the use of a fixed attenuator between the filter and the system. The loss of
the fixed attenuator must, of course, be accounted for to ensure that signal levels are
adequate for the measurement task.
4.8 Coaxial Test Devices
4.8.1 RF Attenuators
Attenuators are most useful for improving impedance matching between various
devices in the test system and for adjusting signal levels to more desirable
amplitudes. These devices come in two basic styles, fixed and switchable. As
with everything else within the test environment, caution must be exercised
when using attenuators. Some fixed attenuators have such a wide specification
tolerance that they are almost useless. The performance of switchable
attenuators needs to be verified regularly depending on time and frequency of
use. Like other passive devices used within the test system these should be
carefully analyzed for accuracy and frequency response characteristics. Records
should be made of their initial performance and regular verification tests
performed to identify any degradation.
General recommendations for attenuators include the following:
1. 75Ω switchable attenuator(s):
a. 10 dB steps, 0 - 70 dB range, ± 0.1 dB tolerance.
b. 1 dB steps, 0 - 10 dB range, ± 0.05 dB tolerance.
c. Return loss > 20 dB across the frequency range of interest
2. 75Ω fixed attenuator(s)
a. 3, 6, 10 and 20 dB values, ± 0.05 dB tolerance.
b. Return loss > 20 dB across the frequency range of interest
4.8.2
Cables, Connectors and Adapters
Cables, connectors and adapters are used to interconnect the various components
of test apparatus and can introduce undesirable measurement errors and
uncertainties if not recognized, compensated for, properly selected, or installed.
All of these components will introduce additional reflections and losses in a test
system that can significantly alter the frequency response. Even small
reflections can lead to significant power holes or “suck outs” if they happen to
occur at multiples of ½ wavelength at any in-band frequency.
4.8.2.1
Cables
The proper cable type depends on the application. For fixed applications a
“semi-rigid” or “hardline” cable, or a cable with a solid outer conductor, is
usually preferred for reasons of cost, performance, and shielding. If a
flexible cable is required, then a braided outer conductor is usually chosen.
For flexible cables, the user needs to be aware that the cable’s performance
can change with bending and the user may need to be concerned with
factors such as amplitude stability, phase stability, and shielding
effectiveness during flexing.
The cable loss must be accounted for in the test system design, or
calibrated out. If long runs of cable are required, the user may need to be
concerned about the cable's “structural return loss”, which is the result of
minor periodicities within the cable all adding in phase at some frequency
to produce a power hole or “suck out”.
4.8.2.2
Connectors and Adapters
High quality test connectors should be used in any test system and all
connectors must be properly installed and torqued. The most common
high quality test connector for use in 75 Ω systems is the “precision” 75 Ω
Type N connector. Caution must be used when using Type N connectors.
The 50 Ω and 75 Ω types are not intermateable. Inserting the larger-diameter pin of a 50 Ω male connector into a 75 Ω female will result in
severe damage to the female contact. Conversely, the smaller pin of a 75
Ω male connector will have intermittent or no contact if used with a 50 Ω
female.
The basic figure of merit for a connector is its “return loss” and the return
loss of all connectors, cables, and adapters in a system must be better than
that which is to be measured, preferably by at least 10 dB. A loose or
damaged connector can seriously degrade the performance of any test
system. Test connectors are very delicate and must be inspected and
cleaned regularly. The most common failure modes are bent or broken
fingers on the female contact, worn or damaged mating interface surfaces,
loose coupling nuts, or dirt or contamination in the interface.
4.8.3
Impedance Matching Devices
It may be necessary to employ one or more impedance matching devices within a
test system. The broadband communications industry has standardized on a 75Ω
impedance. Much of the test equipment is geared toward the more common
50Ω world of radio communications. In order to use 50Ω test equipment one
must use an adequate impedance matching device to prevent high levels of
signal reflections, which would invalidate any and all measurements. Devices
currently available for impedance matching include:
1. Minimum Loss Pad
A minimum loss pad (MLP) is a broadband resistive matching device
which provides the correct impedance on both ports over a specified
frequency range. The advantage of the MLP is that it is non-reactive and
has the broadest bandwidth of available matching devices. The
disadvantage of the MLP is that it has the highest insertion loss. MLPs
from 50Ω to 75Ω are available with good return loss to 3 GHz. The
insertion loss of an MLP when matching 50Ω to 75Ω is calculated to be
5.71 dB.
2. Matching Transformer
A matching transformer is a broadband matching device which provides
the correct impedance on both ports over a specified frequency range. The
advantage of the matching transformer is that it has the lowest insertion
loss of available matching devices. The disadvantage of the matching
transformer is that it is reactive and typically has a lower useable
bandwidth. Matching transformers from 50Ω to 75Ω are available with
good return loss to > 1 GHz. The insertion loss of a matching transformer
is typically less than 1 dB.
3. Series Resistor
The series resistor is the simplest matching device used to increase the
input impedance of a 50Ω measurement device to 75Ω. A series resistor
is a unidirectional device providing a well matched impedance on the 75Ω
side, but poor match to the 50Ω side. The advantage of the series resistor
is that it is non-reactive and has a broad bandwidth. The disadvantage is
the lack of match on the 50Ω side. The insertion loss of a 25Ω series
resistor used to match from 50Ω to 75Ω is calculated to be 1.76 dB.
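For reference, the two quoted insertion losses can be derived from the standard matching relationships; this is a worked sketch, not part of the test procedures. For the MLP, with $Z_H = 75\ \Omega$ and $Z_L = 50\ \Omega$:

$$L_{MLP} = 10\log\left(2\frac{Z_H}{Z_L} - 1 + 2\sqrt{\frac{Z_H}{Z_L}\left(\frac{Z_H}{Z_L} - 1\right)}\right) = 10\log(3.732) \approx 5.71\ \text{dB}$$

For the 25 Ω series resistor, the loss is the power reaching the 50 Ω instrument input relative to the power entering the 75 Ω port:

$$L_{series} = -10\log\frac{50}{50 + 25} = 10\log(1.5) \approx 1.76\ \text{dB}$$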
4.9
General Purpose Test Equipment
4.9.1
Digital Multimeters (DMM)
Digital Multimeters, also referred to as DVMs, DMMs, or voltmeters, are used
throughout these procedures to make measurements on both AC and DC supply
voltages. Most modern DMMs have the resolution required for the
measurements being made, with 4½ digits being the low end, but the user should
verify that the accuracy is acceptable for their specific requirements. Typical
accuracy for a modern DMM is < ±0.1% for VDC and < ±1.0% for VAC. When
using the DMM for measuring the amplitude of the ferro-resonant quasi-square
wave AC power signal, it is critical that a true-RMS reading DMM be used rather
than a lower cost peak-reading type.
4.9.2
Oscilloscopes
Two general types of oscilloscopes are currently available. Analog oscilloscopes
display continuously variable voltages and have traditionally been used for video
measurements due to better screen resolution. Analog scopes are capable of
displaying signals within the frequency range of the CRT display. Very high
frequency signals become too dim to see and very low frequency signals become
a slow moving dot across the screen. The CRT in analog scopes has a
characteristic called intensity grading that makes the trace brighter where the
signal features occur most often. This feature makes it easy to distinguish signal
details by observing intensity levels. The fastest analog scopes can display
frequencies up to about 1 GHz.
Digital storage oscilloscopes (DSOs) use an A/D converter to convert the analog
voltage to bits and store the signal information as a series of samples. Digital
scopes are capable of displaying any frequency within the sample range of the
A/D converter with stability and brightness. For repetitive signals, the upper
frequency limit of the digital scope is a function of the front end analog
components of the scope. For single shot events, the bandwidth may be limited
by the sample rate of the A/D converter. Digital scopes continue to evolve and
improve significantly in capability and are becoming much more affordable.
Some key oscilloscope characteristics are discussed below.
Bandwidth - An oscilloscope’s bandwidth specifies the frequency at which a
sinusoidal signal is measured at 70.7% of its true amplitude, or the -3 dB point.
Without adequate bandwidth, the high frequency details of the signal will be
lost, and the displayed signal will be distorted. To guarantee adequate
bandwidth, the oscilloscope’s bandwidth should be 5 times greater than the
highest frequency to be measured.
Sample Rate - Sample rate specifies how often the digital oscilloscope’s A/D
converter samples the analog signal. In order to accurately reconstruct a signal
and avoid aliasing, Nyquist theory says that the signal must be sampled at least
twice as fast as its highest frequency component. Accurate reconstruction of the
signal is dependent upon sample rate and the interpolation method used to fill
between samples. Some oscilloscopes allow both sin (x)/x interpolation for
sinusoidal signals and linear interpolation for square wave and pulse signals.
Using sin(x)/x interpolation, the sample rate should be at least 2.5 times greater
than the highest frequency being measured. Using linear interpolation, the
sample rate should be at least 10 times greater than the highest frequency signal
component.
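The rules of thumb above translate directly into minimum instrument requirements. The following sketch simply applies those multipliers; it is illustrative only and not part of these procedures:

```python
def scope_requirements(f_max_hz: float) -> dict:
    """Minimum oscilloscope specs for a highest signal frequency of f_max_hz,
    using the multipliers quoted in the text above."""
    return {
        "bandwidth_hz": 5.0 * f_max_hz,             # bandwidth: 5x highest frequency
        "sample_rate_sinx_x_sa_s": 2.5 * f_max_hz,  # sin(x)/x interpolation: 2.5x
        "sample_rate_linear_sa_s": 10.0 * f_max_hz, # linear interpolation: 10x
    }

# Example: measuring signal content up to 10 MHz
print(scope_requirements(10e6))
```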
Vertical Sensitivity – Vertical sensitivity specifies the ability of an oscilloscope
to amplify a weak signal and is usually measured in mV per division. Modern
oscilloscopes are typically capable of 1 mV/div.
Acquisition Modes - Trace averaging can be used to provide better measurement
accuracy on low level signals and is a requirement for many of these
measurements. When using a digital sampling oscilloscope to measure noise
and transient type signals, some form of Peak Detect sampling is required.
4.9.3
AC / DC Power Supplies
Three types of power supplies are used throughout these procedures.
DC power supplies are used for powering preamps or peripheral equipment
which is part of the test set-up. The critical requirements for these supplies are:
• sufficient current capacity to prevent the supply from going out of regulation
during the test, typically 2 times greater than the current required by the test
system
• low enough noise performance to prevent the noise of the power supply from
affecting the measurement results
Line voltage AC power supplies are used for powering the test equipment and
peripheral equipment which is used as part of the test set-up. Grounding of
these supplies should be done carefully to prevent ground loops, especially when
measuring low level hum signals and low level distortion products.
Ferro-resonant quasi-square wave (trapezoidal signal with < 100 V/ms slew rate)
AC power supplies are used for powering distribution equipment under test.
The rated current for these supplies should be at least 30% greater than the
desired test current.
4.10 Test Equipment Calibration
No test can be valid if taken with measurement devices that have not been properly
calibrated. The user should follow the manufacturer’s recommendations for
appropriate factory calibration requirements, warm-up time and field calibration
before performing any tests. There are at least two levels of calibration required:
1. Calibration traceable to the original factory specifications
2. Verification which ensures that a piece of equipment is operating properly at the
time the measurements are made.
The factory calibration is normally done at regular intervals, as specified by the
equipment manufacturer. The operational calibration is done at least daily or even
several times a day, depending upon the stability of the test equipment. This
frequent verification will confirm that the test unit is performing properly when the
testing is initiated. Many modern test instruments have a “self-calibration” feature,
which will perform operational checks during boot-up or upon command by the
operator. Older equipment, lacking this feature, should be checked against acceptable
frequency and amplitude standards. Note that reference standards also require
verification and calibration.
5.0
THE TOTAL TEST SYSTEM
A simplified block diagram of a total test system is shown in Figure 6. This diagram will
serve to demonstrate the configuration of the various components and interconnection of a
basic test system. This represents only one way by which valid testing may be achieved.
Certainly there are alternative ways of accomplishing the same end. This is presented here
simply as a means of detailing some of the major issues involved in establishing a proper
test system.
[Figure 6 is a block diagram: the forward-path RF signal sources (analog CW and digital modulated) feed through an optional highpass filter (HPF) and fixed attenuators into the DUT; the reverse-path RF signal source (CW, digital or hybrid) feeds through an optional lowpass filter (LPF) and a fixed attenuator; the DUT output passes through a 6 dB attenuator, a bandpass filter (BPF), an optional switchable attenuator and an optional post amplifier, then a second 6 dB attenuator, into the spectrum analyzer. Unused ports are terminated.]
Figure 6 - Reference Broadband Communications Test System
5.1
A Reference Broadband Communications Test System
Signal sources are shown for both the forward and the return signal paths. The
forward path signal sources are combined together at the proper levels and then fed
into the input of the test system through a highpass filter (HPF) and a fixed attenuator.
The return signal source is treated in a similar fashion; from the source to a lowpass
filter (LPF) to a fixed attenuator. The highpass and lowpass filters are optional and
are intended to prevent the introduction of out-of-band spurious signals into the test
system. The fixed attenuators are required to provide a broadband return impedance
match to the directional couplers used at the Device Under Test (DUT) ports. A
minimum value of 6 dB is recommended.
After filtering and attenuating, the signal sources are directed through directional
couplers (DCs), which are used to provide forward and return signal injection ports
and complementary test points. The values for the DCs should be selected to ensure
proper signal level control while close attention is given to their bandwidth, frequency
response and isolation characteristics.
The DUT may consist of a discrete component, a single device or a complete system.
The output of the DUT is connected through a DC, which as mentioned above,
provides an output test point for the forward path as well as an input port for the
return signal spectrum. The output of the return signal source is connected to the tap
port of this DC. If the device to be tested is intended for two-way applications it is
critical that both signal paths be present simultaneously during all testing in order to
ensure that all possible distortion products are present, including those derived from
loop isolation and common path deficiencies.
After the output DC, a fixed attenuator is used to ensure adequate match between the
DUT and the rest of the test equipment. Again, a minimum of 6 dB is recommended.
The output of the attenuator is passed through a bandpass filter, switchable attenuator
and post amplifier. The bandpass filter minimizes the possibility of overdriving the
input of the amplifier by restricting the number of channels which will be present at
its input. The bandpass filter actually represents a set of filters, whether fixed or
tunable, which will be used to test the designated channels. The switchable attenuator
serves two purposes:
1. Ensure that the input level to the amplifier is maintained well above the
amplifier’s noise floor
2. Maintain a stable input level into the measurement/display device, in this case a
spectrum analyzer, in order to minimize the need for constant resetting of the
measurement device’s controls.
The output of the amplifier is connected to another fixed attenuator and then to the
signal display device. Again, a minimum of 6 dB attenuation is recommended. This
test equipment configuration may be moved from the forward test point to the return
test point as required.
It should be noted that quite a bit of RF loss is incorporated within the test chain.
This is an unfortunate, but necessary, requirement. It is important to remember that
the filters present a well-matched impedance only to those frequencies in the
passband. All other frequencies will be reflected back into the source with very little
return loss. The fixed attenuators are required to minimize these reflections.
Directional couplers may also present mismatched conditions, although to a lesser
degree. A final note: the unused port of the return test point DC is shown to be
terminated. Good engineering test practices dictate that all unused ports should be
terminated.
5.2
Test System Characterization
The components used in the test system must be completely characterized for their
effects on the test signals. Factors such as gain, loss, frequency response, etc. must be
known, and often factored out, in order to ensure accurate measurements of the DUT.
Amplifiers used will contribute some amount of self-generated distortion and noise
components that will interfere with the measurement of the parameters actually
generated within the DUT. Filters will have some degree of loss and group delay and
could have a major impact upon frequency response as well. Other measurement
errors may be caused by a filter’s bandwidth limitation.
Cables will add frequency dependent losses to any signal and the magnitude of the
losses will vary over temperature. Fortunately the variations in losses are reasonably
predictable and can be calculated for the different frequencies. Unfortunately the
losses and the effects of changes in temperature differ by manufacturer, size and, in
some cases, age of the cable. Another problem may arise if the cable is damaged to
any significant degree. A damaged cable can add reflections which result in standing
waves. This will be evident as a consistent, repetitive ripple in the frequency
response of the cable. Because of these potential problems every piece of cable,
whether mainline or drop, should be subjected to a thorough sweep analysis over the
maximum possible frequency range and over the entire range of anticipated
temperatures.
The performance of every passive device, including cables, connectors, test fixtures,
fixed and switchable attenuators, matching devices (pads and transformers), etc.
should be tested in order to demonstrate its suitability for the intended purposes.
These tests should include frequency response, absolute loss over the entire frequency
range, and performance over the temperature range to which the device will be
subject. Switchable devices such as A/B switches and switchable attenuators should
be verified in every possible position to ensure against potential errors. A complete
record of this verification effort, perhaps in an engineering log format, should be
compiled for future reference. Additionally, a record should be kept of the swept
responses to allow for later comparison if a significant change is suspected.
5.3
Characterization Data
[Figure 7 repeats the block diagram of Figure 6: forward analog CW and digital modulated RF signal sources through an optional HPF and fixed attenuators into the DUT; a reverse RF signal source (CW, digital or hybrid) through an optional LPF and fixed attenuator; the DUT output through a 6 dB attenuator, bandpass filter, optional switchable attenuator and optional post amplifier, then a second 6 dB attenuator, into the spectrum analyzer.]
Figure 7 - Reference Broadband Communications Test System
Every component within the Reference Broadband Communications Test System
should be properly and adequately “characterized” in order to determine what, if any,
affect it might have on the ultimate test results. Figure 7 will be used to outline those
areas which should be analyzed for their potential influence upon the end-of-line
measurements.
Knowledge of such parameters as loss (or gain) and frequency response within the
test system is especially critical to ensuring proper and accurate data. Return loss
(input and output) and distortion characteristics of the different components are also
important. Additionally, other factors must be considered within the individual
scopes of specific tests (such as hum or group delay performance) and should be well
known in order to demonstrate overall device or system capabilities.
Losses within the test system, from cables, connectors, interfaces, etc., will affect
various signal levels and could, possibly, cause the equipment to be operated outside
of its specified or desired range. On the other hand, the gain of any active device
must be determined as accurately as possible in order to ensure proper application
within device or system tests. Noise and distortion generation characteristics of these
devices must be established to permit eventual discrimination between these effects
and those which are actually introduced by the device or system under test.
Frequency response measurements of the entire test system, and its discrete sections
or components, will allow the operator to account for any differences in parametric
performance within the entire passband of the desired spectrum. Return loss
characterization will ensure that proper impedance matching is maintained within the
test system.
The same test procedures used for evaluating products and systems should be used to
qualify the test system itself, before any other evaluations are attempted.
5.3.1
Using Characterization Data
The characterization data established for test system components should be used
in the calculation of actual measurement results when possible. Signal losses
through the test system should be accounted for in all absolute and relative level
measurements.
The characterization data will be useful in three primary ways:
1. Permit the operator to set all levels with a greater degree of accuracy
2. Allow the operator to “normalize” the measurement results by subtracting
the test equipment’s contributions
3. Provide a history to determine if any of the components have changed or
degraded
The second point above will benefit from a small amount of expansion. It is
perfectly legitimate to deduct the parametric performance of the test system
(noise, distortions, delay, etc.) from that of the actual product and/or system, but
only if those aspects of the test system are accurately determined and
documented. Such parameters should be measured for the test system and then
deducted from those measurements performed on the DUT.
5.3.2
Signal Sources
One of the most common failures in system or device testing is inaccurate
setting of the signal source. Fortunately, this is one of the easiest oversights to
correct. The first step in testing is to ensure that all source signals are correct,
and then take multiple spectrum plots of the various signals by zooming in on
particular portions of the spectrum. Special notice should be given to those
areas where analog-to-digital signal level offsets occur. A “full spectrum” plot,
for both forward and reverse signal paths, should be recorded.
5.3.3
Connectors, Cables and Adapters
Some components may degrade or “wear-out” over time — connectors, adapters
and cables are especially prone to this problem. Extreme care must be exercised
with these devices as such damage is not commonly apparent except under the
closest of visual inspection. Also, degradation of these components tends to be
catastrophic in nature; they often fail at the most inopportune time.
Such problems will not normally be discoverable through testing so it is
advisable to follow manufacturers’ recommendations regarding handling and use
of these components, especially if a typical (or maximum) number of
applications is specified.
One sign that a problem may be developing with these components is the sudden
appearance of signal interference (ingress) within the test spectrum. Shielding
effectiveness testing, perhaps using a near-field probe, may also be used to guard
against signal ingress and/or egress which should be evident as the various
interfaces involved with these devices begin to degrade.
5.3.4
Filters and In-Line Fixed Attenuators
Filters come in two broad categories: bandpass and bandstop. Bandpass filters
may cover a specific span of frequencies or be categorized as highpass or
lowpass filters. Bandstop filters may also be called traps. All of these filters
should be qualified for both loss and actual spectrum coverage. An additional
parameter to be concerned about with filters is group delay. Generally, both loss
and group delay should be measured versus frequency. Special attention should
be given to these parameters at a filter’s band edge(s). Tunable bandpass filters
have the added concern that they will have different losses and passband
characteristics dependent upon the frequency to which they are tuned. As these
are frequency selective devices, of one sort or another, the out-of-band return
loss will be quite low, perhaps even 0 dB in some instances.
The use of in-line fixed attenuators is recommended for all frequency selective
devices in order to ensure adequate impedance matching characteristics,
especially for those portions of the spectrum which fall outside of the tuning
range of the filters (or tuners, as on the spectrum analyzer). In-line fixed
attenuators are also recommended to ensure good match at the input and output
DCs.
5.3.5
Switchable Attenuators
These devices are quite useful for setting levels and in helping to perform
relative signal level measurements. There are two areas of concern with these
devices, however, where problems can arise in product/system evaluations —
attenuation setting accuracy and long-term wear and tear, which will degrade
performance over time. As with other devices within the test system these items
should be checked with a network analyzer to ascertain and document the
precision of the various switch positions. A record of the initial settings and
performance will also provide a point to which current operation can be
compared, as time passes.
5.3.6
RF Post Amplifiers
As with the previous components mentioned, a record of the swept frequency
response which documents gain, operational bandwidth and frequency response,
will provide a good measure of protection against misuse of RF post amplifiers.
The noise performance and distortion characteristics of RF post amplifiers also
require special attention.
The fundamental measure of an amplifier's noise performance is “noise figure.”
Unfortunately, this metric is usually not uniform across an amplifier’s
operational bandwidth, especially when multi-octave, broadband devices are
under consideration. It is essential to have an accurate reading of the noise
figure of all amplifiers that will be used within the test system.
Another area of concern with an amplifier’s performance is its distortion
characteristics. This relates to the amount of composite triple beat (CTB),
composite second order (CSO) or cross modulation (XMOD) the amplifier
generates. The generation of these distortions is dependent upon both the
spectrum loading and the output signal level (and tilt) at which the amplifier is
required to operate — the higher the output signal level the greater the
distortion. It is generally assumed that active devices will have a “linear” range
in which the change in distortion products will track with a change in output
level. Unfortunately, this assumption of linearity may or may not be correct.
The only way to know what the range of linearity is (and its limits) is to measure
the distortion performance of the amplifier over the anticipated operational
range.
If the amplifier is carrying digital signals, carrier-to-intermodulation noise
distortion (CIN) will also be generated. Since this distortion is noise-like, it has
to be characterized by measuring the carrier-to-composite noise (CCN) with
digital signals present as well as carrier-to-thermal noise (CTN) without digital
signals, and subtracting on a 10 log basis to obtain CIN. For additional
information, refer to ANSI/SCTE 17 2007.
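The 10 log subtraction described above is the same power subtraction used throughout this document. The following sketch shows only the arithmetic; the input values are hypothetical (see ANSI/SCTE 17 2007 for the actual procedure):

```python
import math

def cin_db(ccn_db: float, ctn_db: float) -> float:
    """Carrier-to-intermodulation-noise (CIN) from CCN (measured with digital
    signals present) and CTN (measured without), both as positive dB ratios.
    Noise powers, relative to the carrier, subtract on a 10*log basis."""
    n_total = 10 ** (-ccn_db / 10)    # composite noise power (thermal + intermod)
    n_thermal = 10 ** (-ctn_db / 10)  # thermal-only noise power
    return -10 * math.log10(n_total - n_thermal)

# Hypothetical readings: CCN = 45 dB with digital load, CTN = 50 dB without
print(round(cin_db(45.0, 50.0), 1))  # -> 46.7 dB
```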
In the test system shown in Figure 7 the spectrum loading will be limited by the
bandpass filter, which precedes the amplifier. The bandpass filter will also limit,
to a large degree, the tilt that the amplifier sees. The switchable attenuator on
the input of the amplifier should be used to limit the signal level range to a
reasonable spread but it should be noted that one must be careful not to degrade
the system’s noise performance by putting in too much attenuation.
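The noise penalty of input attenuation can be estimated with the standard cascade noise figure (Friis) formula; this sketch, with hypothetical stage values, illustrates why excessive attenuation ahead of the post amplifier degrades noise performance roughly dB for dB:

```python
import math

def cascade_nf_db(stages) -> float:
    """Total noise figure (dB) of cascaded stages, each given as a
    (noise_figure_db, gain_db) pair. For a passive attenuator the noise
    figure equals its loss and its gain is the negative of that loss."""
    f_total = 0.0
    gain_linear = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)
        f_total += f if i == 0 else (f - 1) / gain_linear
        gain_linear *= 10 ** (gain_db / 10)
    return 10 * math.log10(f_total)

# Hypothetical 8 dB NF, 20 dB gain post amplifier behind a 3 dB vs. a 10 dB pad:
print(round(cascade_nf_db([(3.0, -3.0), (8.0, 20.0)]), 1))    # ~11.0 dB
print(round(cascade_nf_db([(10.0, -10.0), (8.0, 20.0)]), 1))  # ~18.0 dB
```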
It is recommended that the distortion and noise performance measurements be
performed upon the entire cascade of devices (in Figure 7) between the two 6 dB
fixed attenuators. Testing should be conducted at several specific frequencies as
well as at a number of different output signal levels (and at different
temperatures, if applicable) so that meaningful comparisons can be made with
the measurements made upon the DUT.
5.3.7
General Concerns
Return Loss — Every component within the test system should be qualified for
input and output return loss (IRL/ORL, also called input and output “match”).
Better match (reflected in higher RL numbers) will decrease the potential for
degenerative interaction between devices, which are coupled together. This
concern is the major reason for using fixed attenuators at most (or all) filter
interfaces.
Bandwidth and Frequency Response — It is necessary to ensure that all
components within the test system are capable of handling the full range of
frequencies to be encountered within the total scope of the testing. The best way
to do this is through sweep testing of each part, using a network analyzer or
similar device. The swept response will also provide detailed information as to
changes in response relative to frequency. Special care should be taken in
documenting areas of transition, where changes are likely to occur at a more
pronounced rate.
Group Delay — This parameter is growing in importance, especially relative to
the carriage of broadband, digitally modulated signals. Each component should
be checked across the entire operational band(s) and special attention should be
given to band edges as the frequency sensitive components within the test
system will exhibit greater amounts of delay versus frequency within the
transition areas of the spectra involved.
Temperature — Most (if not all) of the parameters under consideration within
this document are sensitive to changes in temperature. If the product/system
evaluation is designed to cover a specific temperature range (e.g., -40° to +60°
C) then those components that are subject to this environmental specification
must be characterized across the entire range. This will probably entail
measurements at the desired ambient temperature as well as the limits of the
specified temperature range. It may also be helpful to have characterization data
for a limited set of intermediate temperatures.
5.3.8
Records
1. Swept frequency response plots are particularly useful for documentation of
signal transfer characteristics and provide a permanent record of the
following:
a. Loss vs. frequency for all passive components, including filters,
cables/connector assemblies, splitters and directional couplers
b. Gain vs. frequency for active devices
c. Input and output return loss (IRL, ORL) of each component within the
test system
2. Tables containing the results from noise figure and distortion measurements,
at the specific frequencies to be tested, are required. Plots taken directly off
of the spectrum analyzer’s screen (or other measurement device) are also
helpful in recording this performance.
3. A set of nomographs or tables detailing the DUT’s contribution to the total
end-of-line noise may prove useful. These should be based upon the
addition of two powers (or voltages), just like the noise-near-noise table
found in Section 8.2. Since the noise of the test system is not coherent with
the noise of the DUT, the (C/N) of the test system can be subtracted from
the combined performance of the test system and the DUT to reveal the actual
performance of the DUT itself. This is not possible with the distortion
performance since there is no way of knowing the relative coherence or
phase of the distortion products.
4. Current calibration records for all applicable devices (meters, spectrum
analyzers, etc.) are essential to demonstrating continued diligence to
performing and maintaining accurate measurements.
6.0
MEASUREMENT ACCURACY
The goal of these procedures is to ensure that the measurements taken are accurate and
representative of “real life.” Every test has the potential for errors and the source of these
errors must be known and minimized, if possible. Measurement errors may be sorted into
two separate classes, repeatable and random.
6.1
Repeatable Errors
Repeatable errors include errors in the test equipment being used, related to the
specifications of the signal sources, signal conditioning and signal measurement
devices, as well as errors introduced by the signal conditioning components of the test system.
For a typical system, which includes a spectrum analyzer, amplifier, attenuator and/or
filter, these sources of error include:
1. Amplitude measurement accuracy of the spectrum analyzer, including absolute
and relative accuracy, and resolution
2. Frequency response (spectrum analyzer, amplifier, filter)
3. Attenuator accuracy (spectrum analyzer, fixed and switchable attenuators)
4. Log amplifier accuracy (spectrum analyzer)
5. Noise and distortion contribution (signal generator, spectrum analyzer, amplifier)
6. Signal level inaccuracies which contribute to noise and/or distortion components
7. Bandwidth limits
8. Impedance matching deficiencies
Many of the repeatable errors related to the test equipment can be minimized and/or
eliminated by “normalization” and “characterization”. In addition, the consistent use
of certain test techniques can further decrease the impact of these factors.
6.2
Random Errors
Random errors include the following:
1. The accuracy with which the operating levels of the DUT are set.
2. Errors in reading the signal level meter (or spectrum analyzer) when setting the
carrier levels. Many errors of this type are greatly minimized by digital readouts
on the SLM or the spectrum analyzer’s marker.
3. Errors which occur in reading both the carrier level and the composite distortion
level due to the visual resolution of the spectrum analyzer display. Again, many
errors of this type are greatly minimized by digital readouts on the SLM or the
spectrum analyzer’s marker.
4. Errors produced by variations in the alignment of the non-coherent carrier
frequencies.
5. Signal source level drift between the time levels are adjusted and measurements
are completed.
6.3
Measurement Accuracy Analysis
A mathematical analysis of the combined potential errors, representing a calculation
for the measurement of CTB, using a signal level meter (SLM) to set operational
levels, is detailed below. Note that it is assumed for these calculations that the CTB
level is no more than 70 dB below the carrier level (-70 dBc), and that the CTB level is at
least 10 dB above the spectrum analyzer’s observed noise level.
1. The root-sum-of-the-squares (RSS) of errors (in a linear format) is the square root
of the sum of the squares of the individual random errors (also in a linear format).
The ±dB error is converted to linear format by the following formula:
$$\text{linear error}\ (\%) = \left(10^{\frac{\text{error [dB]}}{10}} - 1\right) \times 100\% \tag{13}$$
For example, ±0.4 dB is equivalent to either:
 +100.4

10
 ∗ 100% = (1.0965-1) = +9.65%
−
1




(14)
or
 −100.4

10
 ∗ 100% = (+0.912 - 1) = -8.80%
−
1




(15)
NOTE: The dB format always refers to a ratio of two numbers and the linear error
quantities (+9.65% and -8.80%) are “geometrically” centered but not
“arithmetically” centered.
2. For n = 1 . . . N linear error terms the following formulas apply:

For positive errors:

$$\text{RSS linear sum}\ (\%) = \sqrt{\sum_{n=1}^{N}(\text{linear error}_n)^2} \times 100\% \tag{16}$$

For negative errors:

$$\text{RSS linear sum}\ (\%) = -\sqrt{\sum_{n=1}^{N}(\text{linear error}_n)^2} \times 100\% \tag{17}$$
Example: For a CTB measurement with discrete potential errors of ±2.0 dB, ±1.0
dB, ±0.4 dB and three (3) potential errors of ±0.2 dB each, the worst-case analysis
reveals the following linear quantities:
FACTOR                                               ACCURACY        LINEAR ERROR
                                                     SPECIFICATION   (±%)
Operating level setting (±1 dB) effect on CTB        ±2.0 dB         +58.49 / -36.90
Analyzer Incremental Log Accuracy
  (0 to -70 dB from reference level)                 ±1.0 dB         +25.89 / -20.57
Effect of random carrier frequency distribution      ±0.4 dB         +9.65 / -8.80
Effect of 16 dB worst case output match              ±0.2 dB         +4.71 / -4.50
Display Visual Resolution, reading carrier level     ±0.2 dB         +4.71 / -4.50
Display Visual Resolution, reading distortion
  product level                                      ±0.2 dB         +4.71 / -4.50
Total linear RSS (%) error                                           +65.20% / -43.85%
Total RSS (±dB) error                                                +2.18 dB / -2.51 dB
Table 2 – CTB Measurement Accuracy Analysis
3. The total linear RSS error (%) is determined by the following formulas:

For positive errors:

$$\sqrt{(0.5849)^2 + (0.2589)^2 + (0.0965)^2 + (0.0471)^2 + (0.0471)^2 + (0.0471)^2} \times 100\% = 0.6520 \times 100\% = 65.20\% \tag{18}$$

For negative errors:

$$-\sqrt{(0.3690)^2 + (0.2057)^2 + (0.0880)^2 + (0.0450)^2 + (0.0450)^2 + (0.0450)^2} \times 100\% = -0.4385 \times 100\% = -43.85\% \tag{19}$$
4. Use the following formula to convert this linear term back into a logarithmic
format (dB):
$$\text{RSS (dB)} = 10\log\left(1 + \frac{\text{RSS linear sum}\ (\%)}{100\%}\right) \tag{20}$$

5. From the above example this formula provides a total RSS (dB) of:
$$\text{RSS (dB)} = 10\log\left(1 + \frac{65.20\%}{100\%}\right) = +2.18\ \text{dB} \tag{21}$$

$$\text{RSS (dB)} = 10\log\left(1 + \frac{-43.85\%}{100\%}\right) = -2.51\ \text{dB} \tag{22}$$
6. All of the above is incorporated into the following formulas:

For positive errors:

$$\text{RSS accuracy (dB)} = 10\log\left(1 + \sqrt{\sum_{n=1}^{N}\left(10^{\frac{+\text{accuracy}_n(\text{dB})}{10}} - 1\right)^2}\right) \tag{23}$$

For negative errors:

$$\text{RSS accuracy (dB)} = 10\log\left(1 - \sqrt{\sum_{n=1}^{N}\left(10^{\frac{-\text{accuracy}_n(\text{dB})}{10}} - 1\right)^2}\right) \tag{24}$$
where n = 1 . . . N individual terms and final probable accuracy is expressed in dB
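Equations (23) and (24) are straightforward to evaluate numerically. The following sketch, which is illustrative only, reproduces the Table 2 totals from the individual ±dB accuracy terms:

```python
import math

def rss_accuracy_db(accuracies_db):
    """Combine individual +/- dB accuracy terms per equations (23) and (24),
    returning the (positive, negative) total RSS accuracy in dB."""
    pos = math.sqrt(sum((10 ** (+a / 10) - 1) ** 2 for a in accuracies_db))
    neg = math.sqrt(sum((10 ** (-a / 10) - 1) ** 2 for a in accuracies_db))
    return 10 * math.log10(1 + pos), 10 * math.log10(1 - neg)

# Table 2 example: +/-2.0, +/-1.0, +/-0.4 and three +/-0.2 dB terms
print([round(x, 2) for x in rss_accuracy_db([2.0, 1.0, 0.4, 0.2, 0.2, 0.2])])
# -> [2.18, -2.51]
```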
These maximum errors assume that the equipment has been thoroughly warmed up
and stabilized at the measurement temperature and that all recommended calibration
procedures have been followed.
If the spectrum analyzer input attenuator must be changed, then the accuracy shown
above could be further degraded by the incremental accuracy of the attenuator. If the
final composite distortion levels are close to the visible noise floor, then the accuracy
will be further degraded by the difficulty in discerning a distortion level near the noise
floor of the instrument.
If a spectrum analyzer is used to set the carrier levels the accuracy could be limited by
its absolute amplitude accuracy. Whatever unit is used to set the device’s/system’s
levels, whether a spectrum analyzer or a signal level meter, it should have been
carefully correlated with a power meter so that this error can be minimized.
NOTE: Each discrete set of equipment used in these measurements must be analyzed
for potential errors. Information regarding each instrument’s guaranteed accuracy,
and the conditions for that guarantee, should be available from the test equipment
manufacturers.
7.0
MEASUREMENT RECORDS
7.1
Test System Documentation Requirements
The following documentation should be developed and/or collected prior to the
commencement of actual testing.
1. Complete test system diagram which details all components including gain and
loss, for both forward and return paths.
2. Functional block diagram for the device or system to be tested.
3. Detailed test procedures, test equipment lists and setup diagrams for each unique
test system configuration (cascade, bench, etc.). The lists and diagrams should
include the following basic information:
a. Test equipment make, model and calibration history
b. Test equipment and system setup, including settings for each test performed
c. Operational levels and tilts
d. Graphical and table characterization data for all devices used in the test
system, including amplifiers, filters, splitters, directional couplers, switches,
attenuators, etc. This data should be over the frequency range and temperature
range to be used.
4. Other documentation which may be needed, including independent lab verification
of compliance with various governmental requirements such as FCC Part 15, UL,
etc. It is often desirable to have the manufacturer state such quantitative
reliability measures as MTBF or MTTF.
7.2
Measurement Records
Measurement records typically include the actual levels measured and all information
pertinent to the conditions under which the data was collected.
1. Date, time and temperature
2. Unit tested; manufacturer, model number, serial number.
3. Signal loading; bandwidth, operating levels, tilt, etc., for input and output.
4. Actual channels or frequencies where the parameter was measured
5. Test equipment identification (make, model number, serial number), calibration
date, etc.
6. Measured performance, including:
a. Per channel under test, with actual carrier level and actual, relative
distortion/noise level
b. Correction factors for analyzer settings, beat near noise/noise near noise
correction, etc.
c. Corrected distortion/noise level
7. Accuracy analysis to demonstrate potential margin for measurement error
8. Test personnel data
8.0
REFERENCE INFORMATION
8.1
Power and Voltage Addition
8.1.1
Power Addition
Addition or subtraction of two powers that are represented as decibels requires
an equation or look up table. The equation for adding two signals on a power
basis is:
$$\text{Sum (dB)} = 10\log\left(10^{\frac{P_1}{10}} + 10^{\frac{P_2}{10}}\right) \tag{25}$$
To add two power levels, the following table can be used. Find the box
corresponding to the difference between the two signal levels and add the
amount in the table to the larger of the two individual levels.
(Rows give the whole-dB difference Δ between the two powers; columns give the additional tenths of a dB.)

Δ (dB)  0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
 0.0    3.01  2.96  2.91  2.86  2.81  2.77  2.72  2.67  2.63  2.58
 1.0    2.54  2.50  2.45  2.41  2.37  2.32  2.28  2.24  2.20  2.16
 2.0    2.12  2.09  2.05  2.01  1.97  1.94  1.90  1.87  1.83  1.80
 3.0    1.76  1.73  1.70  1.67  1.63  1.60  1.57  1.54  1.51  1.48
 4.0    1.46  1.43  1.40  1.37  1.35  1.32  1.29  1.27  1.24  1.22
 5.0    1.19  1.17  1.15  1.12  1.10  1.08  1.06  1.04  1.01  0.99
 6.0    0.97  0.95  0.93  0.91  0.90  0.88  0.86  0.84  0.82  0.81
 7.0    0.79  0.77  0.76  0.74  0.73  0.71  0.70  0.68  0.67  0.65
 8.0    0.64  0.63  0.61  0.60  0.59  0.57  0.56  0.55  0.54  0.53
 9.0    0.51  0.50  0.49  0.48  0.47  0.46  0.45  0.44  0.43  0.42
10.0    0.41  0.40  0.40  0.39  0.38  0.37  0.36  0.35  0.35  0.34
11.0    0.33  0.32  0.32  0.31  0.30  0.30  0.29  0.28  0.28  0.27
12.0    0.27  0.26  0.25  0.25  0.24  0.24  0.23  0.23  0.22  0.22
13.0    0.21  0.21  0.20  0.20  0.19  0.19  0.19  0.18  0.18  0.17
14.0    0.17  0.17  0.16  0.16  0.15  0.15  0.15  0.14  0.14  0.14
15.0    0.14  0.13  0.13  0.13  0.12  0.12  0.12  0.12  0.11  0.11
16.0    0.11  0.11  0.10  0.10  0.10  0.10  0.09  0.09  0.09  0.09
17.0    0.09  0.08  0.08  0.08  0.08  0.08  0.07  0.07  0.07  0.07
18.0    0.07  0.07  0.07  0.06  0.06  0.06  0.06  0.06  0.06  0.06
19.0    0.05  0.05  0.05  0.05  0.05  0.05  0.05  0.05  0.05  0.04
20.0    0.04  0.04  0.04  0.04  0.04  0.04  0.04  0.04  0.04  0.04
Table 3 - Power (10*Log) Addition
[Figure 8: graph of the value to add to the larger power (dB) versus the difference between the two powers (dB), plotting the data of Table 3.]
Figure 8 - Power (10*Log) Addition
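As a quick cross-check of the table and graph above, equation (25) is easily evaluated directly. This is an illustrative sketch, not part of the standard; the same routine with 20 in place of 10 implements the voltage/current addition of equation (26) in Section 8.1.2:

```python
import math

def power_sum_db(p1_db: float, p2_db: float) -> float:
    """Combine two power levels expressed in dB, per equation (25)."""
    return 10 * math.log10(10 ** (p1_db / 10) + 10 ** (p2_db / 10))

# Two equal powers combine 3.01 dB above either one (Table 3, delta = 0.0):
print(round(power_sum_db(0.0, 0.0), 2))          # 3.01
# A power 10 dB below another adds 0.41 dB to the larger (Table 3, delta = 10.0):
print(round(power_sum_db(10.0, 0.0) - 10.0, 2))  # 0.41
```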
8.1.2
Voltage Addition
Addition or subtraction of two voltages that are represented as decibels requires
an equation or look up table. The equation for adding two signals on a voltage
basis is:
$$\text{Sum (dB)} = 20\log\left(10^{\frac{V_1}{20}} + 10^{\frac{V_2}{20}}\right) \tag{26}$$
To add two voltage levels, the following table can be used. Find the box
corresponding to the difference between the two signal levels and add the
amount in the table to the larger of the two individual levels.
(Rows give the whole-dB difference Δ between the two voltages; columns give the additional tenths of a dB.)

Δ (dB)  0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
 0.0    6.02  5.97  5.92  5.87  5.82  5.77  5.73  5.68  5.63  5.58
 1.0    5.53  5.49  5.44  5.39  5.35  5.30  5.26  5.21  5.17  5.12
 2.0    5.08  5.03  4.99  4.95  4.90  4.86  4.82  4.78  4.73  4.69
 3.0    4.65  4.61  4.57  4.53  4.49  4.45  4.41  4.37  4.33  4.29
 4.0    4.25  4.21  4.17  4.13  4.10  4.06  4.02  3.98  3.95  3.91
 5.0    3.88  3.84  3.80  3.77  3.73  3.70  3.66  3.63  3.60  3.56
 6.0    3.53  3.50  3.46  3.43  3.40  3.36  3.33  3.30  3.27  3.24
 7.0    3.21  3.18  3.15  3.12  3.09  3.06  3.03  3.00  2.97  2.94
 8.0    2.91  2.88  2.85  2.83  2.80  2.77  2.74  2.72  2.69  2.66
 9.0    2.64  2.61  2.59  2.56  2.53  2.51  2.48  2.46  2.44  2.41
10.0    2.39  2.36  2.34  2.32  2.29  2.27  2.25  2.22  2.20  2.18
11.0    2.16  2.13  2.11  2.09  2.07  2.05  2.03  2.01  1.99  1.97
12.0    1.95  1.93  1.91  1.89  1.87  1.85  1.83  1.81  1.79  1.77
13.0    1.75  1.74  1.72  1.70  1.68  1.67  1.65  1.63  1.61  1.60
14.0    1.58  1.56  1.55  1.53  1.51  1.50  1.48  1.47  1.45  1.44
15.0    1.42  1.41  1.39  1.38  1.36  1.35  1.33  1.32  1.31  1.29
16.0    1.28  1.26  1.25  1.24  1.22  1.21  1.20  1.19  1.17  1.16
17.0    1.15  1.14  1.12  1.11  1.10  1.09  1.08  1.06  1.05  1.04
18.0    1.03  1.02  1.01  1.00  0.99  0.98  0.96  0.95  0.94  0.93
19.0    0.92  0.91  0.90  0.89  0.88  0.87  0.86  0.86  0.85  0.84
20.0    0.83  0.82  0.81  0.80  0.79  0.78  0.77  0.77  0.76  0.75
Table 4 - Voltage or Current (20*Log) Addition
[Figure 9: graph of the value to add to the larger voltage (dB) versus the difference between the two voltages (dB), plotting the data of Table 4.]
Figure 9 - Voltage or Current (20*Log) Addition
8.2
Noise-near-Noise/Beat-near-Noise (NnN/BnN) Correction
8.2.1
Calculating NnN/BnN Correction
$$\text{Correction (dB)} = 10\log\left(1 - 10^{\frac{-\text{Noise Drop}}{10}}\right)\ \text{dB} \tag{27}$$
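Equation (27) is easily evaluated for any observed drop; this sketch is illustrative only. Note that the correction is negative (the true distortion/noise level is lower than the observed one); the table in Section 8.2.2 lists its magnitude:

```python
import math

def nnn_correction_db(noise_floor_delta_db: float) -> float:
    """Correction (dB) per equation (27) for a distortion/noise reading that is
    noise_floor_delta_db above the instrument noise floor."""
    return 10 * math.log10(1 - 10 ** (-noise_floor_delta_db / 10))

# A reading 3.0 dB above the noise floor is corrected by about -3.0 dB:
print(round(nnn_correction_db(3.0), 1))  # -3.0
```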
8.2.2
Table of NnN/BnN Correction
Noise Floor Delta (dB)   Correction (dB)
 0.5                     9.6
 1.0                     6.9
 1.5                     5.3
 2.0                     4.3
 2.5                     3.6
 3.0                     3.0
 3.5                     2.6
 4.0                     2.2
 4.5                     1.9
 5.0                     1.7
 5.5                     1.4
 6.0                     1.3
 6.5                     1.1
 7.0                     1.0
 7.5                     0.9
 8.0                     0.7
 8.5                     0.7
 9.0                     0.6
 9.5                     0.5
10.0                     0.5
10.5                     0.4
11.0                     0.4
11.5                     0.3
12.0                     0.3
12.5                     0.3
13.0                     0.2
13.5                     0.2
14.0                     0.2
14.5                     0.2
15.0                     0.1
Note: Correction factors are shown down to a Noise Floor Delta of 0.5 dB and, of
course, the formula will allow one to derive a value for any observed Drop;
however, any Drop < 2.0 dB is subject to significant potential error. Therefore,
for Noise Floor Deltas < 2.0 dB, it is recommended that 4.3 dB be added to the
observed distortion/noise level and the total value be expressed as "greater than"
(>) x dB.
[Figure 10: graph of the NnN/BnN correction (dB) versus the Noise Floor Delta (dB), plotting the data of the table above.]
Figure 10 - Graph of NnN/BnN Correction
8.3
Informative References
The following documents may provide valuable information to the reader but are not
required when complying with this standard.
1. ANSI/SCTE 17 2007: Test Procedure for Carrier to Noise (C/N, CCN, CIN,
CTN).
2. Cable Television Proof of Performance; Jeffrey L. Thomas; Hewlett-Packard
Professional Books; 1995.
3. Cable Television System Measurements Handbook; Hewlett-Packard Company;
July, 1993.
4. Digital Communications, J.G. Proakis, McGraw-Hill, latest edition.
5. EIA/CEA 542 (April 2002): Cable Television Channel Identification Plan
6. Fundamentals of RF and Microwave Power Measurements, Application Note 64-1B,
Agilent Technologies, April 2000.
7. Introduction to Communications Engineering; Gagliardi, Robert M., John Wiley
and Sons; Chapter 4.
8. MIL-PRF-28800 (6/24/1996): Test Equipment for Use with Electrical and
Electronic Equipment
9. NCTA Recommended Practices For Measurement On Cable Television Systems,
Third Edition; National Cable Television Association.
10. Phase Noise Theory and Measurements: A Short Review; Goldberg, Bar-Giora;
Microwave Journal, Vol. 43, No. 1, January, 2000.
11. ANSI/SCTE 121 2011: Test Procedure for Forward Path (Downstream) Bit Error
Rate
12. Spectrum Analysis Basics, Application Note 150, Agilent Technologies,
November 1989. (First in a very useful series of application notes on spectrum
analysis.)
13. Spectrum Analyzer Fundamentals, Tektronix App. Note 26W-7037-1, 1989.
14. Spectrum Analyzer Measurements and Noise, Application Note 1303, Agilent
Technologies, April 1998.