Linköping Studies in Science and Technology. Licentiate Thesis.
No. 1605
Estimation of Inverse Models
Applied to Power Amplifier
Predistortion
Ylva Jung
Division of Automatic Control
Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden
http://www.control.isy.liu.se
ylvju@isy.liu.se
Linköping 2013
This is a Swedish Licentiate’s Thesis.
Swedish postgraduate education leads to a Doctor’s degree and/or a Licentiate’s degree.
A Doctor’s Degree comprises 240 ECTS credits (4 years of full-time studies).
A Licentiate’s degree comprises 120 ECTS credits,
of which at least 60 ECTS credits constitute a Licentiate’s thesis.
ISBN 978-91-7519-571-1
ISSN 0280-7971
LIU-TEK-LIC-2013:39
Copyright © 2013 Ylva Jung
Printed by LiU-Tryck, Linköping, Sweden 2013
To Daniel
Abstract
Mathematical models are commonly used in technical applications to describe
the behavior of a system. These models can be estimated from data, which is
known as system identification. Usually the models are used to calculate the
output for a given input, but in this thesis, the estimation of inverse models is
investigated. That is, we want to find a model that can be used to calculate the
input for a given output. In this setup, the goal is to minimize the difference between the input and the output from the cascaded systems (system and inverse).
A good model would be one that reconstructs the original input when used in
series with the original system.
Different methods for estimating a system inverse exist. The inverse model
can be based on a forward model, or it can be estimated directly by reversing the
use of input and output in the identification procedure. The models obtained
using the different approaches capture different aspects of the system, and the
choice of method can have a large impact. Here, it is shown in a small linear
example that a direct estimation of the inverse can be advantageous, when the
inverse is supposed to be used in cascade with the system to reconstruct the input.
Inverse systems turn up in many different applications, such as sensor calibration and power amplifier (pa) predistortion. pas used in communication devices can be nonlinear, which causes interference in adjacent transmitting channels that appears as noise to anyone transmitting there. Therefore, linearization of the amplifier is needed, and a prefilter, called a predistorter, is used. In this thesis, the predistortion problem has been investigated for a type of pa called an outphasing power amplifier, where the input signal is decomposed into two branches that are amplified separately by highly efficient nonlinear amplifiers and then recombined. If the decomposition and summation of the two parts are not perfect, nonlinear terms will be introduced in the output, and predistortion is needed.
Here, a predistorter has been constructed based on a model of the pa. In a first method, the structure of the outphasing amplifier has been used to model the distortion, and from this model a predistorter can be estimated. However, this involves solving two nonconvex optimization problems, with the risk of obtaining a suboptimal solution. By exploiting the structure of the pa, the problem can be reformulated so that the pa modeling essentially amounts to solving two least-squares (ls) problems, which are convex. In a second step, an analytical description of an ideal predistorter can be used to obtain a predistorter estimate. Another approach is to compute the predistorter without a pa model by estimating the inverse directly. The methods have been evaluated in simulations and in measurements, and it is shown that the predistortion improves the linearity of the overall power amplifier system.
Popular Science Summary (Populärvetenskaplig sammanfattning)

Mathematical descriptions, here called models, are used in many technical applications. One example is the development of cars, where simulations make it possible to evaluate different design choices in a cost-effective way. Another is aircraft applications, where real tests on the airplane could endanger the pilot.

These models can be estimated from data measured from the system, a procedure called system identification. A system is the delimited part of the world that we are interested in; in the examples above, the car and the airplane. In system identification, the goal is to find a model that describes the output signal as well as possible, based on previously measured input and output signals. This thesis investigates how inverse models can be estimated. Here, the inverse is to be used in combination with the original system, with the goal that the output of the cascaded systems (the original system and its inverse) should be the same as the input.

Inverse systems can be estimated in several ways. The inverse can be based on a model of the system that is then inverted, or it can be estimated directly by letting the input and output signals switch places in the system identification problem. How the inverse is estimated affects which properties of the system the model captures, and this can therefore have a large impact on the end result. In a small simplified example, it is shown that it can pay off to estimate the inverse directly when it is to be used in series with the system to reconstruct the input.

Linearization of power amplifiers is one example where inverse systems are used. Power amplifiers appear in many applications, among them mobile telephony, and their task is to amplify a signal, which is one step in the transmission of information. In the mobile phone example, the signal could be a person's voice, which is to be transferred from the phone through the air to the receiver. Power amplifiers can be nonlinear, which makes them spread power into neighboring frequency bands. Anyone transmitting in these frequency bands perceives this as noise, and there are limits on how much spreading is allowed. To meet these requirements, linearization is therefore often needed. By modeling the amplifier's nonlinearities and inverting them, one can obtain a system that does not spread power outside its frequency band. In this context, one says that a precompensation, called predistortion, is used.

In this thesis, predistortion is applied to a type of power amplifier called an outphasing amplifier. This is a nonlinear power amplifier structure that splits the signal into two parts, amplifies the parts with power-efficient amplifiers, and then adds them back together. If this decomposition and summation are not perfect, nonlinearities arise and predistortion is required. The advantage of nonlinear power amplifiers is that they can be made more power efficient, which is directly reflected in, for example, the battery life of a mobile phone.

Several methods for obtaining a predistorter are presented here. The methods have been evaluated on physical amplifiers in measurements, which show that an improvement can be achieved when predistortion is used.
Acknowledgments
I want to start by expressing my deepest gratitude to my supervisor Dr. Martin Enqvist. Your never-ending knowledge, patience and encouragement are rather remarkable. The (almost) infinite number of comments and questions on every detail is very much appreciated. Thank you so much for the help and the time.
I also want to thank Prof. Lennart Ljung for letting me join this great group.
The way you and your successor Prof. Svante Gunnarsson lead the group at large and also handle smaller matters is impressive. All administrative help from
Ninna Stensgård and her predecessor Åsa Karmelind is also appreciated.
Without Dr. Jonas Fritzin and Prof. Atila Alvandpour, I would not have gotten into this field of research. I appreciate the nice cooperation, and I especially thank Jonas for answering all my questions about the hardware.
Many thanks to Lic. Patrik Axelsson, Lic. Daniel Eriksson, M.Sc. Manon Kok,
Lic. Roger Larsson, Dr. Christian Lyzell and M.Sc. Maryam Sadeghi Reineh for
proofreading parts of this thesis; your comments have been very valuable in clarifying and improving it. I am also grateful to the LaTeX gurus Dr. Gustaf Hendeby and Dr. Henrik Tidefelt for making the great template so I don't have to do much more than write. And whenever I do need help, Lic. (soon to be Dr.) Daniel Petersson always comes to the rescue. Thank you so much!
I am very happy to have happened to end up in the Automatic Control group. The fika breaks are always nice, whether the discussions concern work matters or something completely different. I hope there will be more great beer drunk and barbecues had, and that the girl-power group will continue and make us even more fit and powery. Special thanks to my former roommate Patrik Axelsson for taking care of me when I was new and lost and helping me with whatever, and to my present roommate Maryam Sadeghi Reineh for dragging me along to fika ;-)
Thanks to life (though not appreciated at the time) for making me see what is
important and that this is not necessarily what I had in mind.
This work has been supported by the Excellence Center at Linköping-Lund in
Information Technology (ELLIIT), the Center for Industrial Information Technology at Linköping University (CENIIT) and the Swedish Research Council (VR)
Linnaeus Center CADICS, which is gratefully acknowledged.
I am also very grateful that I have family and friends outside of work. Friends
in Linköping, those who left, those from back home and from over the world, I’m
so glad that you are still in my life. I just wish there were more time for us and
shorter distances!
And most of all, to Daniel, the one who puts up with me the most and knows
how to get me up in the morning. I’m so glad you’re mine! Thank you for your
patience, love and encouragement :)
Linköping, August 2013
Contents

Notation

1 Introduction
  1.1 Research Motivation
  1.2 Outline
  1.3 Contributions

I System Inversion

2 Introduction to Model Estimation
  2.1 System Identification
  2.2 Transfer Function Models
  2.3 Prediction Error Method
  2.4 Linear Regression
  2.5 Least-squares Method
  2.6 The System Identification Procedure

3 Introduction to System Inversion
  3.1 Inversion by Feedback
    3.1.1 Feedback and Feedforward Control
    3.1.2 Iterative Learning Control
    3.1.3 Exact Linearization
  3.2 Analytic Inversion
    3.2.1 Problems Occurring with System Inversion
    3.2.2 Postinverse and Preinverse
    3.2.3 Volterra Series
  3.3 Inversion by System Simulation
    3.3.1 Separation of a Nonlinear System
    3.3.2 Hirschorn's Method

4 Estimation of Inverse Models
  4.1 System Inverse Estimation
  4.2 Inverse Identification of LTI Systems
  4.3 An Illustrative Linear Dynamic Example
  4.4 Inverse Identification of Nonlinear Systems

II Power Amplifier Predistortion

5 Power Amplifiers
  5.1 Power Amplifier Fundamentals
    5.1.1 Basic Transmitter Functionality
  5.2 Power Amplifier Characterization
    5.2.1 Gain
    5.2.2 Efficiency
    5.2.3 Linearity
  5.3 Classification of Power Amplifiers
    5.3.1 Transistors
    5.3.2 Linear Amplifiers
    5.3.3 Switched Amplifiers
    5.3.4 Other Classes
  5.4 Outphasing Concept
  5.5 Linearization of Power Amplifiers
    5.5.1 Volterra Series
    5.5.2 Block-oriented Models
    5.5.3 Outphasing Power Amplifiers

6 Modeling Outphasing Power Amplifiers
  6.1 An Alternative Outphasing Decomposition
  6.2 Nonconvex PA Model Estimator
  6.3 Least-squares PA Model Estimator
  6.4 PA Model Validation
  6.5 Convex vs Nonconvex Formulations
  6.6 Noise Influence
  6.7 Memory Effects and Dynamics

7 Predistortion
  7.1 A DPD Description
  7.2 The Ideal DPD
  7.3 Nonconvex DPD Estimator
  7.4 Analytical DPD Estimator
  7.5 Inverse Least-Squares DPD Estimator
  7.6 Simulated Evaluation of Analytical and LS Predistorter
  7.7 Recursive Least-Squares and Least Mean Squares

8 Predistortion Measurement Results
  8.1 Signals Used for Evaluation
  8.2 Measurement Setup
  8.3 Evaluation of Nonconvex Method
    8.3.1 Measured Performance of EDGE Signal
    8.3.2 Measured Performance of WCDMA Signal
    8.3.3 Summary
  8.4 Evaluation of Least Squares PA and Analytical Inversion Method
    8.4.1 Measured Performance of WCDMA Signal
    8.4.2 Measured Performance of LTE Signal
    8.4.3 Evaluation of Polynomial Degree
    8.4.4 Summary

9 Concluding Remarks
  9.1 Conclusions
  9.2 Open Questions

A Power Amplifier Implementation
  A.1 +10.3 dBm Class-D Outphasing RF Amplifier in 90 nm CMOS
  A.2 +30 dBm Class-D Outphasing RF Amplifier in 65 nm CMOS

Bibliography
Notation

Outphasing Amplifiers

Δψ(s1, s2) — arg(s1) − arg(s2), angle difference between the outphasing signals, defined on page 65
Δψ — same as Δψ(s1, s2)
Δψ(s1,P, s2,P) — angle difference between predistorted outphasing input signals
Δψ(y1,P, y2,P) — angle difference between predistorted outphasing output signals
ξk — angle difference between s̃k and sk, defined in (6.6)-(6.7), page 67, and Figure 6.1, page 66
fk — phase distortion in the amplifier branch k, defined in (6.9)
g1, g2 — gain factors of each branch in the pa, ideally g1 = g2 = g0
hk — phase predistorter functions in the amplifier branch k, defined in (7.1)
sk — outphasing input signals, decomposed in the standard way (5.11)
sk,P — predistorted outphasing input signal in branch k, decomposed with identical gain factors using (5.11)
s̃k — outphasing input signal in branch k, decomposed with nonidentical gain factors using (6.3)
yk — outphasing output signal in branch k, decomposed with nonidentical gain factors using (6.3)
yk,P — predistorted outphasing output signal in branch k, decomposed with nonidentical gain factors using (6.3)
x̂ — an estimate of the value of x
Power Amplifier Glossary

aclr, acpr — adjacent channel leakage (power) ratio, a linearity measure that describes the amount of power spread to neighboring channels, page 52.
am-am, am-pm — amplitude modulation to amplitude modulation or phase modulation, respectively; a plot mapping the output amplitude (or phase distortion) to the input amplitude to determine the distortion induced by the circuit, for example a power amplifier, page 52.
combiner — the circuit that handles the addition of signals in, for example, Figure 5.13, page 59.
dBc — decibel to carrier, the power ratio of a signal to a carrier signal, expressed in decibels.
dBm — power level expressed in dB referenced to one milliwatt, so that zero dBm equals one mW and one dBm is one decibel greater (about 1.259 mW).
de, pae — drain efficiency and power added efficiency, efficiency measures for power amplifiers, page 51.
dla, ila — direct and indirect learning architectures, two approaches to estimate a power amplifier predistorter; see Method B and Method C on page 34.
dpd — digital predistortion, a linearization technique for power amplifiers that modifies the input to counteract power amplifier distortion from nonlinearities and dynamics, page 61.
dr — dynamic range, the ratio of the maximum and minimum output amplitudes an amplifier can achieve, page 60.
iq — a signal separation into an imaginary part (quadrature, q) vs real part (in-phase, i), page 48.
lo — local oscillator, a circuit that produces a continuous sine wave; usually drives a mixer in a transmitter/receiver, page 48.
mixer — translates the signal up or down to another frequency, page 48 and Figure 5.2.
outphasing, linc — an outphasing amplifier, also called linear amplification with nonlinear components, is a nonlinear amplifier structure.
pa — power amplifier, used to increase the power of a signal, so that the output is a magnified replica of the input.
rf — radio frequency, ranging between 3 kHz and 300 GHz.
scs — signal component separator, (here) decomposes the signal into outphasing signals according to (5.11).
Abbreviations A-O

ac — Alternating current
aclr — Adjacent channel leakage ratio
acpr — Adjacent channel power ratio
am — Amplitude modulation
am-am — Amplitude modulation to amplitude modulation
am-pm — Amplitude modulation to phase modulation
bjt — Bipolar junction transistor
cmos — Complementary metal-oxide-semiconductor
dac — Digital-to-analog converter
db — Digital baseband
dc — Direct current
de — Drain efficiency
dla — Direct learning architecture
dpd — Digital predistortion or predistorter
dr — Dynamic range
edge — Enhanced data rates for gsm evolution
evm — Error vector magnitude
fpga — Field programmable gate array
fet — Field-effect transistor
fir — Finite impulse response
fm — Frequency modulation
gsm — Global system for mobile communications
gprs — General packet radio service
iir — Infinite impulse response
ila — Indirect learning architecture
ilc — Iterative learning control
iq — In-phase component (i, real part) vs quadrature component (q, imaginary part)
linc — Linear amplification with nonlinear components
lms — Least mean squares
lo — Local oscillator
ls — Least squares
lte — Long term evolution
lti — Linear time invariant
lut — Look-up table
mimo — Multiple-input multiple-output
mosfet — Metal-oxide-semiconductor field-effect transistor
nmos — N-channel metal-oxide-semiconductor

Abbreviations P-Z

pa — Power amplifier
pae — Power added efficiency
papr — Peak-to-average power ratio
pd — Predistortion or predistorter
pem — Prediction-error (identification) method
pm — Phase modulation
pmos — P-channel metal-oxide-semiconductor
pvt — Process, voltage and temperature
pwm — Pulse-width modulated
rbw — Resolution bandwidth
rf — Radio frequency
rls — Recursive least squares
rms — Root mean square
rx — Receiver
scs — Signal component separator
siso — Single-input single-output
sls — Separable least-squares
tx — Transmitter
wcdma — Wideband code-division multiple access
1 Introduction
Inverse systems and models thereof show up in numerous applications. This entails a need for estimation of models of inverse systems. The concept of building
models based on measured data is called system identification, and many theoretical results exist concerning the properties of the estimated models. However,
less is known when the goal is to estimate the inverse. Should it be based on a
forward model, or should the inverse be estimated directly? Inverse models produced in different ways will capture different properties of the system and more
insights are needed.
In this chapter, a short research motivation will be given, followed by an outline of the thesis. Then follows an overview of the contributions of the thesis, and
some clarifications of the author’s role in the work.
1.1 Research Motivation
Power amplifiers (pas) are used in many applications, such as communication devices (mobile phones) and loudspeakers. In a hand-held device such as a mobile
phone, the power efficiency is an important property as it will reflect directly on
the battery time. In order to match the increasing demand for lower power consumption, nonlinear power amplifiers have been developed. These nonlinear pas
can be made more power efficient than linear ones, but introduce other problems.
A nonlinear device will not only transmit power in the frequency band where the
input signal is, but also risks spreading power to neighboring transmitting channels. For anyone transmitting in these frequency bands, this will be perceived
as noise. Therefore, there are standards describing the amount of power that is
allowed to be spread to adjacent frequencies. So, for the power amplifier to be
useful, linearization is needed, limiting the interference in the neighboring channels. Since the distorted output of the power amplifier is an amplified version of the input, it is preferable to work with the input. Thus, the goal is to find a prefilter that inverts the nonlinearities, called a predistorter.

Figure 1.1: An inverse S⁻¹ of the system S is used to undo the effects of the system S such that yu = u. In (a), a preinverse is used, where the inverse S⁻¹ is applied before the system S, and in (b), a postinverse is applied, where the order of the system and the inverse is reversed.
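To make the idea concrete, here is a toy numerical sketch of predistortion of a memoryless nonlinearity (the cubic PA model, the gain values, and the function names are illustrative assumptions, not a model or method from this thesis):

```python
import numpy as np

# Made-up memoryless PA model: y = g*u - c*u**3 (compressive, monotonic on [0, 1])
g, c = 10.0, 0.3

def pa(u):
    return g * u - c * u**3

# Predistorter: invert the PA map numerically on a grid, so that
# pa(predistort(x)) is close to g*x, i.e. the cascade is a pure gain.
u_grid = np.linspace(0.0, 1.0, 1000)
y_grid = pa(u_grid)                       # monotonically increasing on this range

def predistort(x):
    # find u such that pa(u) = g*x by interpolating the inverse map
    return np.interp(g * x, y_grid, u_grid)

x = np.linspace(0.0, 0.9, 50)
print(np.max(np.abs(pa(predistort(x)) - g * x)))   # residual distortion: tiny
print(np.max(np.abs(pa(x) - g * x)))               # without predistortion: much larger
```

Here the predistorter is simply the numerically inverted input-output map; the real predistortion problem is harder because the PA behavior must first be estimated from data and may contain dynamics.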
Small loudspeakers, in mobile phones for example, can also show a nonlinear
behavior due to limitations in the movement of the cone. This will distort the
sound and make listening to music less agreeable. The idea behind a compensation of the nonlinearities is similar to that of the power amplifier predistortion,
since only the input is available for modification. Once the signal has been converted to sound, it cannot be altered.
In the power amplifier and loudspeaker applications, the goal is to find a prefilter that inverts the nonlinearities introduced by the power amplifier or loudspeaker, but the same type of inversion problems can be found also in other areas.
In sensor applications it is rather a postdistortion that is needed. If the sensor itself has dynamics or a nonlinear behavior, the sensor output is not the true signal
but will also contain some sensor contamination. This has to be handled at the
sensor output since this is where the user can get access to the signal.
The need for calibration is also relevant in other applications, such as analog-to-digital converters (adcs). In an adc, an analog (continuous) input signal is converted to a digital output, which is limited to a number of discrete values. A
small error in the analog input risks causing a larger error in the output, since
the discrete signal is limited to certain values.
Inversion of systems also appears in other areas, not directly connected to pre- or postinversion. One application where models of both the system S and its inverse S⁻¹ are used is robotics. The forward kinematics, which describes how to compute the robot tool pose as a function of the joint variables, is used for control, as is the inverse kinematics, which computes the joint configuration from a given tool pose.
In all of the above applications, the question is how to find an inverse S −1 to
the system S. The application will determine if it is a preinverse or a postinverse
that is desired. In Figure 1.1, the two different approaches are illustrated.
If an inverse cannot be found analytically, it can be estimated. This opens up questions regarding the inverse estimation, and different methods can be applied: either the estimate can be based on an inverted model of the system itself, or the method can try to estimate the inverse directly. That the choice of estimation method matters is motivated by Example 1.1.
Figure 1.2: The input u (black solid line), and the reconstructed input yu using an inverted estimated forward model (black dashed line) and the inverse model estimated directly (gray solid line). The estimation of the inverse (gray) cannot perfectly reconstruct the input (black solid), but is clearly better than the inverted forward model (dashed).
Example 1.1: Introductory example
Consider a linear time-invariant (lti) system. The goal is to reconstruct the input
by modifying the measured output. When the structure of the inverse is set, in
this case to a finite impulse response (fir) system, what is the best way to estimate
it? Should the inverse be estimated directly or should an inverted model of the
system itself be used? These two approaches have been applied to noise-free data,
and the results are presented in Figure 1.2. We see here that the two models, both
descriptions of the system inverse, capture very different aspects of the system,
and that the method chosen can have a large impact. This example is described
in more detail in Section 4.3.
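The two estimation routes of this example can be sketched numerically as follows (a minimal illustration with a made-up second-order FIR system and noise-free data, not the actual setup of Section 4.3):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 10                     # data length, FIR inverse order

# Made-up true system S: y[t] = u[t] + 1.5 u[t-1] + 0.7 u[t-2] (minimum phase)
b = np.array([1.0, 1.5, 0.7])
u = rng.standard_normal(N)
y = np.convolve(u, b)[:N]           # noise-free output, zero initial conditions

def fir_regressor(x, order):
    # row t contains [x[t], x[t-1], ..., x[t-order]] (zeros before t = 0)
    Phi = np.zeros((len(x), order + 1))
    for k in range(order + 1):
        Phi[k:, k] = x[:len(x) - k]
    return Phi

# (a) Direct inverse estimation: regress the input u on current and past outputs
h_direct = np.linalg.lstsq(fir_regressor(y, M), u, rcond=None)[0]

# (b) Forward-model route: the forward FIR model is exact here, so approximate
# its inverse 1/B(q) by truncating the impulse response after M + 1 taps
h_fwd = np.zeros(M + 1)
for t in range(M + 1):
    h_fwd[t] = ((1.0 if t == 0 else 0.0)
                - b[1] * (h_fwd[t - 1] if t >= 1 else 0.0)
                - b[2] * (h_fwd[t - 2] if t >= 2 else 0.0)) / b[0]

for name, h in [("direct estimate", h_direct), ("inverted forward model", h_fwd)]:
    err = np.sqrt(np.mean((fir_regressor(y, M) @ h - u) ** 2))
    print(f"{name}: RMS reconstruction error {err:.3f}")
```

For a fixed FIR inverse order, the direct estimate is by construction the least-squares optimal input reconstructor on the estimation data, whereas the truncated inverse of the (here exact) forward model carries a truncation error.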
1.2 Outline
The thesis is divided into two parts. The first introduces system inversion and the
estimation of inverse models. The second part concerns using estimated inverse
models for power amplifier predistortion.
Part I – System inversion gives a background to the problem of estimating
inverse models. A short introduction to model estimation is provided in Chapter 2 and a background to system inversion in Chapter 3. In Chapter 4, some
ideas concerning the estimation of inverse models are presented, and three basic approaches are explained. In particular, some conclusions concerning linear,
time-invariant systems are presented.
In Part II – Power amplifier predistortion, the estimation of inverse models
is applied to outphasing power amplifiers. Here, the goal is to find an inverse
such that the output of the power amplifier is an amplified replica of the input,
counteracting the distortion caused by the amplifier. An introduction to power
amplifier functionality and characterization is given in Chapter 5 as well as an
overview of earlier predistortion methods. This chapter also contains a description of the outphasing power amplifier, which is a nonlinear amplifier structure
that needs predistortion, and for which the predistorter methods in this thesis
were produced. Modeling approaches for the power amplifier are presented in
Chapter 6 and methods for finding a predistorter in Chapter 7. The predistortion
methods are evaluated on real power amplifiers in Chapter 8.
The thesis is concluded by Chapter 9 where some conclusions and a discussion
on ideas for future research are presented. Some additional information about
the power amplifiers used is given in the appendix.
1.3 Contributions
The contributions in this thesis are in two areas, power amplifier predistortion
and the more general field of estimating models of inverse systems.
The power amplifier predistortion was first presented in
Jonas Fritzin, Ylva Jung, Per N. Landin, Peter Händel, Martin Enqvist,
and Atila Alvandpour. Phase predistortion of a Class-D outphasing
RF amplifier in 90nm CMOS. IEEE Transactions on Circuits and
Systems-II: Express Briefs, 58(10):642–646, October 2011a.
where a novel model structure for the outphasing power amplifiers was used. A
predistorter that changes only the phases of the outphasing signals was shown
to successfully reduce the distortion introduced by the power amplifier. The proposed model and predistorter structures were produced in close collaboration
between the paper’s first three authors. The theoretical motivation of the predistorter model has been developed by the author of this thesis.
The nonconvex predistortion method presented in the above publication was
then developed into a method that makes use of the structure of the outphasing
power amplifier. It basically consists of solving least-squares problems, which
are convex, and performing an analytical inversion, and it is suitable for online
implementation. This is presented in
Ylva Jung, Jonas Fritzin, Martin Enqvist, and Atila Alvandpour. Least-squares phase predistortion of a +30 dBm Class-D outphasing RF PA in 65 nm CMOS. IEEE Transactions on Circuits and Systems-I: Regular Papers, 60(7):1915–1928, July 2013.
The derivation of this least-squares predistortion method has mainly been done by the author of this thesis, whereas the paper's second author has been responsible for the power amplifier and hardware issues. In addition to the reformulation of the nonconvex problem, the paper provides a theoretical description of an ideal outphasing predistorter, that is, one for which neither the amplitude nor the phase of the output is distorted. This involves a mathematical description of the branch decomposition and the impact of unbalanced amplification in the two branches.
Inverse models can either be estimated directly, or based on a model of the
(forward) system. Some insights into different approaches to estimate models of
inverse systems are discussed in Chapter 4. These results concerning the estimation of inverse models have been accepted for presentation at the 52nd IEEE
Conference on Decision and Control (CDC):
Ylva Jung and Martin Enqvist. Estimating models of inverse systems.
In 52nd IEEE Conference on Decision and Control (CDC), Florence,
Italy, To appear, December 2013.
The paper also contains the postinverse application of Hirschorn’s method presented in Section 3.3.2.
The contents of Appendix A are included here for the sake of completeness
and are not part of the contributions of this thesis. The power amplifiers and the
characterization thereof were done at the Division of Electronic Devices, Department of Electrical Engineering at Linköping University, Linköping, Sweden, by
Jonas Fritzin, Christer Svensson and Atila Alvandpour.
Part I
System Inversion
2 Introduction to Model Estimation
In many cases it is costly, tedious or dangerous to perform real experiments, but
we still want to extract information somehow. The limited part of the world that
we are interested in is called a system. This system can be pretty much anything.
It can for example be interesting for a car manufacturer to know how the car will
react to a change in the accelerator. Or in a paper mill, how the moisture content of the wood will affect the quality of the paper. For a diabetic it is essential to
know how the blood sugar level depends on food intake and exercise. A pilot
needs to know how an airplane reacts to the control of different rudders, and in
economics it is necessary to know how a change in the interest rate will influence
the customers’ willingness to borrow or save money. What we see as a system
depends on the application. In the car analogy, the system can be only the engine,
or the whole car. In the blood sugar example, we can either be interested only in how food intake affects the glucose levels, or also in how exercise contributes.
In many of these applications one does not want to perform experiments directly, but instead start the evaluation using simulations. This leads to a need
for models of the systems. One way is to use physical modeling where the models are based on what we know of the system by using the knowledge of, for
example, the forces, moments, flows, etc. In the engine example, it is possible to
calculate the output and the connection between the accelerator and the engine
torque. This method is sometimes called white box modeling. Another modeling
approach is to gather data from the system and construct a model based on this
information. This approach is called system identification and will be presented
in this chapter.
Figure 2.1: A system S with input u, output y, and disturbance v. For the blood glucose example, the system S is the patient, or rather a part of the body's metabolism, the input u could represent food intake, the output y is the measured blood glucose level, and the disturbance v is for example an infection that affects the body's insulin sensitivity.
2.1
System Identification
System identification deals with the problem of identifying properties of a system. More specifically, it treats the problem of using measured data to extract
a mathematical model of a system we are interested in. The introduction and
notation presented here is based on Ljung [1999], but other standard references
include Pintelon and Schoukens [2012] and Söderström and Stoica [1989]. Since
we are dealing with sampled data, t will be used to denote the time index. Also, for notational convenience, the sample time Ts will be assumed to be one, so that y(tTs) ≜ y(t) and y((t + 1)Ts) ≜ y(t + 1) is the measurement after y(t), but this can of course easily be adapted to other choices of Ts.
The observable signals that we are interested in are called outputs, denoted
y(t), and in the examples above this can be the car speed/engine velocity, or the
glucose level in the blood. The system can also be affected by different sources
that we are in control of – the accelerator or the food intake – called inputs, u(t).
Other external sources of stimuli that we cannot control or manipulate are called
disturbances, v(t) – such as a steep uphill affecting the car, or a fever or infection which affects the insulin sensitivity. Some disturbances are measurable and for others the effects can be noted, but the signal itself cannot be measured. The
different concepts are presented in Figure 2.1.
A system has a number of properties connected to it. A system is linear if its
output response to a linear combination of inputs is the same linear combination
of the output responses of the individual inputs. That is
f (αx + βy) = f (αx) + f (βy) = αf (x) + βf (y),
with x and y independent variables and α and β real-valued scalars. The first
equality makes use of the additivity (also called the superposition property), and
the second the homogeneity property. A system that is not linear is called nonlinear. Since this class includes "everything else", it is hard to classify such systems and come to general conclusions. Most results in system identification are therefore
developed for linear systems, or some limited subset of nonlinear systems. The
system is time invariant if its response to a certain input signal does not depend
on absolute time. A system is said to be dynamical if it has some memory or history, i.e., the output does not only depend on the current input but also previous
inputs and outputs. If it depends only on the current input, it is static.
In system identification, the goal is to use the known input data u and the
measured output data y to construct a model of the system S. Here, only single-input single-output (siso) systems are considered, but the ideas can most of the
time be adapted to multiple-input multiple-output (mimo) systems. It is usually
neither possible nor desirable to find a model that describes the whole system
and all its properties, but rather one wants to construct a model which captures
and can describe some interesting subset thereof, needed for the application. It
is up to the user to define such criteria as to what needs to be captured by the
model.
2.2
Transfer Function Models
One way to present a linear time invariant (lti) system is via the transfer function
model
y(t) = G(q, θ)u(t) + H(q, θ)e(t)
(2.1)
where q is the shift operator, such that qu(t) = u(t + 1) and q −1 u(t) = u(t − 1),
and e(t) is a white noise sequence. G(q, θ) and H(q, θ) are rational functions of
q and the coefficients in θ, where θ consists of the unknown parameters that describe the system. Depending on the choice of polynomials in G(q, θ) and H(q, θ),
different structures can be obtained. The most general structure is
A(q)y(t) = (B(q)/F(q)) u(t) + (C(q)/D(q)) e(t)    (2.2)
where the polynomials are described by
X(q) = 1 + x1 q −1 + · · · + xnx q −nx
for X = A, C, D, F,
where nx is the order of the polynomial. The polynomial B(q) may include a delay nk,
B(q) = bnk q −nk + · · · + bnk+nb−1 q −(nk+nb−1) ,
such that there can be a delay between input and output. This structure is often
too general, and one or several of the polynomials will be set to unity. Depending
on the polynomials used, different commonly used structures will be obtained.
When the noise is assumed to enter directly at the output, such as white measurement noise, or when we are not interested in modeling the noise, the structure is
called an output error (oe) model, which can be written
y(t) = (B(q)/F(q)) u(t) + e(t),
i.e., the polynomials A(q), C(q) and D(q) have all been set to unity. Many such
structures exist (see Ljung [1999] for more examples) and are called black-box
models, since the model structure reflects no physical insight but acts like a black
box on the input, and delivers an output. One strength of these structures is that
they are flexible and, depending on the choice of G(q, θ) and H(q, θ), they can
cover many different cases.
A model which does not belong to the black-box model structure, and is not
completely obtained from physical knowledge of the system is called a gray-box
model. This can for example be a physical structure with unknown parameters,
such as an unknown resistance in an otherwise known circuit. It can also be that some properties of the data are exploited in the choice of model structure. The
latter is done in the power amplifier modeling in Chapter 6.
2.3
Prediction Error Method
In order to say something about the system, we need a model that can predict
what will happen next. At the present time instant t, we have collected data from
previous time instants t − 1, t − 2, . . . , and this can be used to predict the output.
The one-step-ahead predictor of (2.2) is
ŷ(t) = (D(q)B(q))/(C(q)F(q)) u(t) + [1 − (D(q)A(q))/C(q)] y(t),    (2.3)
and depends only on previous output data. The unknown parameters in the
polynomials A(q), B(q), C(q), D(q) and F(q) are gathered in the parameter vector
θ,
θ = [a1 . . . ana bnk . . . bnk+nb−1 c1 . . . cnc d1 . . . dnd f1 . . . fnf]^T.
The predictor ŷ(t) is often written ŷ(t|θ) to point out the dependence on the parameters in θ.
By defining the prediction error
ε(t) = y(t) − ŷ(t|θ),
(2.4)
a straightforward modeling approach is to try to find the parameter vector θ̂, that
minimizes this difference,
θ̂ = arg min_θ V(θ),    (2.5a)
V(θ) = (1/N) Σ_{t=1}^{N} l(ε(t)),    (2.5b)
where l( · ) is a scalar valued, usually positive, function. Finding the parameters
by this minimization is called a prediction-error (identification) method (pem).
This idea is illustrated in Figure 2.2.
Except for special choices of the model structures G(q, θ) and H(q, θ) and
the function l(ε) in (2.5b), there is no analytical way of finding the minimum of
the minimization problem (2.5a). Numerical solutions have to be relied upon,
which means that a local optimum might be found instead of the global one if
the cost function is nonconvex, with more than one minimum. For results on
the convergence of the parameters and other properties of the estimate, such as
consistency and variance, see Ljung [1999].
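To make the idea concrete, the following sketch applies a quadratic prediction-error criterion to data simulated from a first-order output-error model; the coefficients, noise level, and grid-based minimization are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Simulate a hypothetical first-order output-error system:
# y(t) = B(q)/F(q) u(t) + e(t), B(q) = b0*q^-1, F(q) = 1 + f0*q^-1.
rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)
b0, f0 = 0.8, -0.5
ym = np.zeros(N)
for t in range(1, N):
    ym[t] = b0 * u[t - 1] - f0 * ym[t - 1]
y = ym + 0.05 * rng.standard_normal(N)

def V(theta):
    """Quadratic PEM criterion (2.5b) with l(eps) = eps^2/2."""
    b, f = theta
    yhat = np.zeros(N)
    for t in range(1, N):
        # For an OE model the predictor is the simulated model output.
        yhat[t] = b * u[t - 1] - f * yhat[t - 1]
    eps = y - yhat
    return 0.5 * np.mean(eps**2)

# The criterion is nonconvex in f, so a crude grid search is used here
# rather than a gradient method that could get stuck in a local optimum.
grid = np.linspace(-0.95, 0.95, 39)
best = min((V((b, f)), b, f) for b in grid for f in grid)
print("estimated (b, f):", best[1], best[2])
```

The grid minimizer lands close to the parameters used in the simulation, and the criterion value at that point is near the noise variance, which is what a residual analysis would check.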
Figure 2.2: An illustration of the idea behind system identification. The prediction error ε(t) is the difference between the measured output y(t) of the system, affected by the disturbance v(t), and the model prediction ŷ(t|θ), both driven by the input u(t).
2.4
Linear Regression
Another common way to describe the relationship between input and output of
an lti system is through a linear difference equation where the present output,
y(t), depends on previous inputs, u(t − nk ), . . . , u(t − nk − nb + 1), and outputs,
y(t − 1), . . . , y(t − na ) , as well as the noise and disturbance contributions. This
can for example be done for (2.2) when C(q), D(q) and F(q) are set to unity, so
that G(q, θ) and H(q, θ) in (2.1) correspond to
G(q, θ) = B(q)/A(q),    H(q, θ) = 1/A(q)
with
A(q) = 1 + a1 q−1 + · · · + ana q −na
B(q) = bnk q −nk + · · · + bnk +nb −1 q −(nk +nb −1) .
The linear difference equation is then
y(t)+ a1 y(t −1)+· · ·+ ana y(t − na ) = bnk u(t − nk )+· · ·+ bnk +nb −1 u(t − nk − nb +1)+ e(t),
and we can write
A(q)y(t) = B(q)u(t) + e(t).
(2.6)
This particular structure is called auto-regressive with external input (arx). Another special case is when the output only depends on past inputs, such that
na = 0 in (2.6). This is called a finite impulse response (fir) structure.
The predictor for an arx model is
ŷ(t|θ) = −a1 y(t − 1) − · · · − ana y(t − na) + bnk u(t − nk) + · · · + bnk+nb−1 u(t − nk − nb + 1).    (2.7)
By gathering all the known elements into one vector, the regression vector,
φ(t) = [−y(t − 1), . . . , −y(t − na), u(t − nk), . . . , u(t − nk − nb + 1)]^T,
and the unknown elements into the parameter vector,
θ = [a1 . . . ana bnk . . . bnk+nb−1]^T,
the predictor (2.7) can be written as a linear regression
ŷ(t|θ) = φ T (t)θ,
(2.8)
that is, the unknown parameters in θ enter the predictor linearly.
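As a sketch of the regression-vector construction, consider a hypothetical arx system with na = 2, nb = 2, nk = 1 and made-up parameter values:

```python
import numpy as np

# Hypothetical ARX(2, 2) system with delay nk = 1 and illustrative parameters.
theta0 = np.array([-0.7, 0.1, 1.0, 0.5])  # [a1, a2, b1, b2]
rng = np.random.default_rng(1)
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):  # noise-free simulation of the difference equation
    y[t] = (-theta0[0] * y[t-1] - theta0[1] * y[t-2]
            + theta0[2] * u[t-1] + theta0[3] * u[t-2])

def phi(t):
    """Regression vector phi(t) = [-y(t-1), -y(t-2), u(t-1), u(t-2)]^T."""
    return np.array([-y[t-1], -y[t-2], u[t-1], u[t-2]])

# The predictor is linear in the parameters: yhat(t|theta) = phi(t)^T theta.
t = 10
print(phi(t) @ theta0, y[t])  # identical here, since no noise was added
```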
2.5
Least-squares Method
With the function l( · ) in (2.5b) chosen as a quadratic function,
l(ε) = (1/2) ε²,
and the predictor described by a linear regression, as in (2.8), we get
V(θ) = (1/(2N)) Σ_{t=1}^{N} (y(t) − φ^T(t)θ)²,    (2.9)
called the least-squares (ls) criterion. A good thing about this criterion is that it
is quadratic in θ, which means that the problem is convex and the minimum can
be calculated analytically. The minimum is obtained for
θ̂_LS = [ (1/N) Σ_{t=1}^{N} φ(t)φ^T(t) ]^{−1} (1/N) Σ_{t=1}^{N} φ(t)y(t),    (2.10)
called the least-squares estimator. See Draper and Smith [1998] for a more thorough description of the ls method and its properties.
Apart from the guaranteed convergence to the global optimum, a benefit with
ls solutions is that there exist many efficient numerical methods to solve them.
The recursive least-squares (rls) method can be used to solve the numerical optimization recursively [Björck, 1996]. Another option is the least mean square
(lms) method, which can make use of the linear regression structure of the optimization problem, developed in (2.8).
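A minimal numerical sketch of the least-squares estimator (2.10), here on simulated first-order arx data with assumed parameter values:

```python
import numpy as np

# Simulated ARX(1, 1) data with illustrative parameters a1 = -0.6, b1 = 1.2.
rng = np.random.default_rng(2)
N = 1000
u = rng.standard_normal(N)
a1, b1 = -0.6, 1.2
y = np.zeros(N)
e = 0.1 * rng.standard_normal(N)
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b1 * u[t - 1] + e[t]

# Stack the regression vectors phi(t)^T = [-y(t-1), u(t-1)] as rows.
Phi = np.column_stack([-y[:-1], u[:-1]])
Y = y[1:]

# theta_LS = [ (1/N) sum phi(t)phi(t)^T ]^{-1} (1/N) sum phi(t)y(t)  -- (2.10)
R = Phi.T @ Phi / len(Y)
f = Phi.T @ Y / len(Y)
theta_ls = np.linalg.solve(R, f)
print("theta_LS:", theta_ls)  # close to [a1, b1] = [-0.6, 1.2]
```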
Separable Least-squares
For some model structures, the parameter vector can be divided into two parts,
θ = [ρ T η T ]T , so that one part enters the predictor linearly and the other nonlinearly, i.e.,
ŷ(t|θ) = ŷ(t|ρ, η) = φ T (t, η)ρ.
Hence, for a fixed η, the predictor is a linear function of the parameters in ρ. The
identification criterion is then
V(θ) = V(ρ, η) = (1/(2N)) Σ_{t=1}^{N} (y(t) − φ^T(t, η)ρ)²
and this is an ls criterion for any given η. Often, the minimization is done first
for the linear ρ and then the nonlinear η is solved for. The nonlinear minimization problem now has a reduced dimension, where the reduction depends on the
dimensions of the linear and nonlinear parameters. This method is called separable least-squares (sls) as the ls part has been separated out, leaving a nonlinear
problem of a lower dimension, see Ljung [1999, p. 335-336].
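A small sketch of the separable least-squares idea, using an illustrative exponential model (not from the thesis) where φ(t, η) = e^{−ηt} depends nonlinearly on η while the predictor is linear in ρ:

```python
import numpy as np

# Illustrative model yhat(t|rho, eta) = rho * exp(-eta*t): nonlinear in eta,
# linear in rho. The data-generating values below are assumptions.
rng = np.random.default_rng(3)
t = np.arange(50, dtype=float)
rho0, eta0 = 2.0, 0.1
y = rho0 * np.exp(-eta0 * t) + 0.01 * rng.standard_normal(t.size)

def inner_ls(eta):
    """For a fixed eta the criterion is quadratic in rho: solve in closed form."""
    phi = np.exp(-eta * t)
    rho = (phi @ y) / (phi @ phi)
    V = 0.5 * np.mean((y - rho * phi) ** 2)
    return V, rho

# The remaining nonlinear search is only one-dimensional, over eta.
etas = np.linspace(0.01, 0.5, 200)
best_V, best_rho, best_eta = min(inner_ls(e) + (e,) for e in etas)
print("rho, eta:", best_rho, best_eta)  # close to (2.0, 0.1)
```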
2.6
The System Identification Procedure
The process of constructing a model from data consists of a number of steps,
which often have to be performed a number of times before a suitable model can
be obtained.
1. A data set is needed, usually containing input and output data. The data
should be “good enough”, so that it excites the desired properties of the
system. This is called persistency of excitation.
2. Different model structures should be examined, to evaluate which structure
best captures the properties of the data. These structures should fulfill
certain demands, such that two sets of parameters do not lead to the same
model. This property is called identifiability.
3. A measurement of “goodness”, such as the criterion (2.5), has to be selected
to decide which models best describe the data.
4. The model estimation step is where the parameters in θ are determined.
In the ls method, this would consist of inserting the data into (2.10), and
in the pem case, the minimization of (2.5) for a certain choice of predictor
structure ŷ(t|θ) in (2.3).
5. Model validation. In this step, different models should be evaluated to determine if the models obtained are good enough. The evaluation should
be done on a new set of data, validation data, to ensure that the model is
useful not only for the data for which it was estimated. Two important components of the model validation are the comparisons between measured
data and model output as well as the residual analysis, where the statistics
of the unmodeled properties of the data are evaluated.
Some of these steps contain a large user influence, whereas others might be set
or rather straightforward. The choice of model structure and model order, such
as na and nb in (2.7), is often hard and needs to be repeated a number of times
before a suitable model can be found.
3
Introduction to System Inversion
Inverse systems are used in many applications, more or less visibly. One application example of this is power amplifiers in communication devices, which are
often nonlinear, causing interference in adjacent transmitting channels [Fritzin
et al., 2011a]. This interference will be noise to anyone that transmits in these
channels, and there are measures describing the amount of power that is allowed
to be spread to adjacent frequencies. So to be useful, linearization of the amplifier is needed, limiting the interference in the neighboring channels. However,
one does not want to work with the amplified signal, but rather with the input
signal to the system, that is, before the signal is amplified. A prefilter that inverts
the nonlinearities, called a predistorter, is thus preferable.
In sensor applications it is rather a postdistortion that is needed. If the sensor
itself has dynamics or a nonlinear behavior, the sensor output is not the true
signal but will also contain some sensor contamination. This would have to be
handled at the sensor output, since this is where the user can get access to the
signal.
In the area of robotics, there is a need for control such that the robot achieves
the demands on precision. Smaller and lighter robots reduce the need for large
motors, and also the cost and wear of the robot. However, this also introduces
new problems such as larger oscillations and increases the demands on the control performance. In robotic control applications, a common strategy is to use
feedback to control the joint positions. The last part of the robot, however, connecting the tool to the robot, is often controlled using open-loop control. Models
of both the forward and the inverse kinematics are used for control.
In the above applications, finding the inverse of the system is a crucial point;
how should the input to or the output from the system be modified to obtain
the desired dynamics from input to output? Each application entails its own restrictions and special conditions to attend to, and in this chapter, some aspects of
system inversion are discussed. For a nonlinear system, the inversion is nontrivial, and different approaches can be used. A selection of methods is presented
here.
In this thesis, it is assumed that an inverse exists, that is, there is a one-to-one
relation between input and output. This property is called bijectivity. Furthermore, we assume that the system and the inverse can both be written analytically,
see Example 3.1 for a case when this is not valid. Both the system and the inverse are assumed to be stable and causal (see for example Rugh [1996]). In this
chapter, the main focus is on inversion, and a model of the system is supposed
to be known, either by physical modeling or by system identification. Different
approaches to estimate inverse models will be presented in Chapter 4.
Example 3.1: Nonexisting Analytical Inverse
Consider the system
y(t) = e^{x(t)} + sin(x(t))
for |x(t)| < π/2. The function e^x + sin(x) is monotonic on (−π/2, π/2), and thus also invertible. However, no analytic expression for the inverse exists, and a numerical inverse has to be used.
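Such a numerical inverse can be computed, for example, by bisection, exploiting the monotonicity of the map on the interval:

```python
import numpy as np

def forward(x):
    # y = exp(x) + sin(x) is monotonically increasing on (-pi/2, pi/2),
    # since its derivative exp(x) + cos(x) is positive there.
    return np.exp(x) + np.sin(x)

def inverse(y, lo=-np.pi / 2, hi=np.pi / 2, tol=1e-12):
    """Invert forward() numerically by bisection on the monotonic interval."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if forward(mid) < y:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

x0 = 0.3
print(inverse(forward(x0)))  # recovers x0 = 0.3 to high accuracy
```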
Here, the methods are described in either continuous or discrete time. Different frameworks are usually most easily described in one domain or the other,
hence the mixed use in this chapter. Also, the systems are often continuous
whereas the controllers are implemented in discrete time. The explicit dependency on time will sometimes be left out for notational convenience.
3.1
Inversion by Feedback
The behavior of a system can be modified in a multitude of ways, often with the
goal of making the output follow a desired trajectory, called reference signal, r.
In the automatic control community the main choices are feedback and feedforward
control. For the linear case many different control strategies exist, the perhaps
most common of which is the pid, consisting of a proportional (P), an integral
(I) and a derivative (D) part. The P, I and D parameters of the controller can
be trimmed to obtain a desired behavior of the controlled system [Åström and
Hägglund, 2005].
In this section, a few feedback strategies will be introduced. An iterative control approach that can be used for linear and nonlinear systems is the iterative
learning control (ilc). ilc works on systems with a repetitive input signal, such
as a robot that performs the same task over and over again. It makes use of the
output from the last repetition and tries to improve this so that the output better follows the reference signal. Another feedback solution for nonlinear systems
is the exact input-output linearization, that makes use of a known model of the
system to obtain overall linear dynamics, determined by the user.
Though the classical view of feedback control is not that of system inversion,
this is indeed one interpretation; the feedback system produces the input that
Figure 3.1: A feedback controller F applied to the system G, with reference r, control error e, input u, and output y fed back with negative sign.
leads to a desired output. This is also the goal of an inverse system, to produce
an input by use of an output. We will start this chapter by covering a few control
strategies.
3.1.1
Feedback and Feedforward Control
Feedback control refers to a measured output of a system that is used to determine the input to said system. A standard solution is to look at the difference between the reference r(t) and the output y(t), called control error e(t) = r(t) − y(t).
This signal can be used for control of the system. For example, if the control
error is negative, a conclusion can be that the input u is too small, and should
be increased, and vice versa. Many control strategies based on this idea have
been constructed and are commonly used in industry. The idea is presented in
Figure 3.1 where a feedback controller F is applied to the system G.
On the other hand, if we know something about how the system will transform the input, we might want to use this to counteract later effects. This is the
concept of feedforward control, where the reference signal is altered and sent to
the system, or fed forward. Often, feedforward and feedback control are used
together to get the advantages of both approaches. Figure 3.2 shows a block diagram where the feedback loop in Figure 3.1 has been expanded to include a
feedforward loop with the feedforward controller Ff . A common requirement is
that the output should have a softer behavior than the reference, and this can be
achieved via the filter Gm , which denotes the desired dynamics. The ideal choice
of the feedforward controller is Ff = Gm /G. If feedforward control is used alone,
with no feedback loop, it is often called open-loop control.
Feedback control can handle phenomena like disturbances and model uncertainties, since it is based on the true output and not only the input and a model of
the system. It can also handle unstable systems, which is not possible for a pure
feedforward (open-loop) control, but a bad feedback loop may cause instability.
Feedforward control has the advantage of not needing any measurements but
the drawback is that ideal feedforward control (using Ff = Gm /G) requires perfect knowledge of the system, and that both G and Gm /G are stable. Also, there is
no possibility to compensate for disturbances. However, if the disturbances are
perfectly known or measurable, feedforward control from the disturbances can
be applied and the disturbance compensated for. These are of course limiting
assumptions. A benefit with feedforward control is that two cascaded stable systems will always be stable, and a bad controller can therefore not destabilize the
system.
Figure 3.2: Feedforward controller Ff and feedback controller F applied to a system G. Gm is used here to describe the desired dynamics between reference and output.
3.1.2
Iterative Learning Control
As discussed in the introduction, iterative learning control (ilc) can be seen as
an iterative inversion method [Markusson, 2001]; the goal is to find the input
that leads to the desired output. In this section, the basic concepts of ilc will be
described, but for a more thorough analysis see for example Wallén [2011], Moore
[1993] and the references therein. The ilc concept comes from the industrial
robot application, where the same task or motion is performed repeatedly. The
idea is to use the knowledge of how the controller performed in the last repetition
and improve the performance in each iteration.
The system S in this setting is described by the input u, the output y, and the
reference r over a finite time interval. The task is assumed to be repeated, so that
the reference r and the starting point are the same for each iteration. The time
index is t, where t ∈ [0, N − 1] for each repetition, and each repetition is of length
N . A basic first order ilc algorithm is described by
u_{k+1}(t) = Q(q) (u_k(t) + L(q) e_k(t))    (3.1)
where
e_k(t) = r(t) − y_k(t)
and k is the iteration index, and indicates how many times the task has been repeated. Here, q is the shift operator such that q−1 u(t) = u(t − 1) and Q(q) and L(q)
denote linear or nonlinear operators, chosen by the user. It is important that this
choice leads to convergence and an input where the output achieves the desired
performance. Also, the learning should be fast enough. There are structured
ways to determine Q(q) and L(q), which can be based on a model of the system.
The concepts of stability and convergence of ilc systems are treated in, for example, Wallén [2011]. It can be shown that ilc is robust to model errors, such that
for a linear system, a relative model error of 100% can be tolerated [Markusson,
2001]. Even a rather simple model can therefore perform well.
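A minimal ILC sketch on a hypothetical first-order plant, with Q(q) = 1 and L(q) chosen as a scaled model inverse (one structured, model-based choice); the plant and gain below are illustrative assumptions:

```python
import numpy as np

# Hypothetical plant y(t) = 0.5*y(t-1) + u(t-1): stable, one-sample delay.
N = 50
t = np.arange(N)
r = np.sin(2 * np.pi * t / N)  # repeated reference with r(0) = 0

def plant(u):
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = 0.5 * y[k - 1] + u[k - 1]
    return y

# First-order ILC (3.1) with Q(q) = 1 and L(q) = gamma*(q - 0.5), i.e. a
# scaled inverse of the plant model; gamma is an assumed design choice.
gamma = 0.8
u = np.zeros(N)
errs = []
for _ in range(20):
    e = r - plant(u)
    errs.append(np.max(np.abs(e)))
    # u_{k+1}(t) = u_k(t) + gamma*(e_k(t+1) - 0.5*e_k(t))
    u[:-1] += gamma * (e[1:] - 0.5 * e[:-1])

print("max|e| first/last iteration:", errs[0], errs[-1])
```

With this model-inverse choice of L(q), the error contracts by the factor 1 − γ = 0.2 in every iteration, so a handful of repetitions already drives the tracking error close to zero.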
Iterative methods are used in many applications, also outside the control community. The common factor is that the information found in the output y is used
to improve the input, but the algorithm is not necessarily similar to (3.1). One
application where iterative solutions are often used is analog-to-digital converter
(adc) correction, such as in Soudan and Vogel [2012].
3.1.3
Exact Linearization
In exact linearization (also called input-output linearization) [Sastry, 1999], the
output from a nonlinear system S,
ẋ = f (x) + g(x)u
y = h(x),
(3.2)
which is affine in u, is differentiated enough times to obtain a relation between
the differentiated output y (n) and the input, u. Differentiating y with respect to
time, we obtain
ẏ = (∂h/∂x) f(x) + (∂h/∂x) g(x)u = Lf h(x) + Lg h(x)u,
where Lf h(x) and Lg h(x) are the Lie derivatives of h with respect to f and g, respectively. If Lg h(x) ≠ 0, a relation between the differentiated output ẏ and the input u has been obtained and an input
u = (1/Lg h(x)) (−Lf h(x) + r)
can be calculated that leads to a linear relation between output and reference,
ẏ = r. If Lg h(x) = 0, a second differentiation can be done,
ÿ = (∂Lf h/∂x) f(x) + (∂Lf h/∂x) g(x)u = Lf^2 h(x) + Lg Lf h(x)u,
from which a control law can be calculated if Lg Lf h(x) ≠ 0. In this manner, one can continue until there is a direct relation between y^(γ) and r through the control law
u = (1/(Lg Lf^{γ−1} h(x))) (−Lf^γ h(x) + r) = α(x) + β(x)r.    (3.3)
Here, γ is the smallest integer for which Lg Lf^i h(x) ≡ 0 for i = 0, 1, . . . , γ − 2 and Lg Lf^{γ−1} h(x) ≠ 0, and it is called the relative degree of the system.
The system (3.2) with control input (3.3) now describes a system with linear
dynamics. Thus, linear theory can be used to obtain the desired dynamics, Gm ,
chosen by the user, and the linear feedback loop can be combined with the nonlinear one. The overall system from r to y (the nonlinear system with the nonlinear
and linear feedback) will thus be linear, and the dynamics will be described by
the transfer function Gm .
Exact linearization requires knowledge of all the states, and is therefore often
used in combination with a nonlinear observer. This can lead to a complicated
feedback loop. Here, it is assumed that any zero dynamics present are stable. The
above system and the derivation of the feedback loop is described in continuous
time. A discrete-time description can also be done, as presented in Califano et al.
[1998].
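As a sketch, consider the illustrative scalar system ẋ = −x³ + u with y = x (an assumed example, not taken from the thesis). Here h(x) = x, so Lf h(x) = −x³ and Lg h(x) = 1 ≠ 0: the relative degree is one and (3.3) gives u = x³ + r, yielding the linear dynamics ẏ = r:

```python
import numpy as np

# Forward-Euler simulation of the linearizing control law u = x^3 + r.
dt, T = 1e-3, 2.0
n = int(T / dt)
x = 0.0
ys = np.zeros(n)
for k in range(n):
    r = 1.0                # constant reference for ydot
    u = x**3 + r           # control law (3.3): cancels -x^3 exactly
    x += dt * (-x**3 + u)  # closed-loop dynamics: xdot = r
    ys[k] = x

# With ydot = r = 1 and y(0) = 0, the output reaches y(T) = T = 2.
print(ys[-1])
```

Note that the control law uses the state x directly, which is why exact linearization in general needs full state information or an observer.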
3.2
Analytic Inversion
In the above feedback loops, only the system itself, or a model thereof, is used
to produce an inverse. No explicit inversion is done. Another approach is to perform an analytic inversion of the system, which can be applied at the input to, or
the output from, the system, see Figure 1.1. The output from this cascaded system should have the desired dynamics. If the goal is to make the output exactly
the same as the reference, a “true” inverse has to be found. But even for other
cases, the inversion can be seen as a case where the unwanted nonlinear and linear dynamics have been inverted. For example, in the exact linearization case,
the nonlinear and dynamical behavior of the system are inverted, and in the end
a system with some user-defined linear dynamics is obtained. This approach has
already been used in the feedforward controller Ff = Gm /G, where the system G
is inverted.
Finding a system inverse can be done in multiple ways. One method for finding an inverse to dynamic systems uses Volterra series, which is a nonlinear extension of the impulse response concept from the linear case. This leads to an
analytical inverse. Other systems that might be analytically invertible are block-oriented systems, which consist of a static nonlinearity and a linear dynamic system. A brief overview of Volterra series will be presented here together with a
short discussion on the use of preinverse and postinverse and problems that occur with inversion.
3.2.1
Problems Occurring with System Inversion
For a stable and minimum-phase lti system G, it is rather straightforward to find
an inverse G −1 . However, if these conditions are not fulfilled, we quickly run into
problems, even for linear systems. Any nonminimum-phase zeros of the original
system will become unstable poles of the inverse system. However, if the system
is nonminimum phase, the inverse can be used if noncausal filtering is allowed.
If a delay can be allowed, time-reversed input and output sequences can be used
together with a matching, stable inverse [Markusson, 2001].
Another trouble with inverse systems concerns whether the system is proper
or not. A proper transfer function is one where the order of the denominator is
greater than or equal to that of the numerator. A strictly proper transfer function
is one where the order of the denominator is greater than that of the numerator.
The amplification of a proper system always approaches a value as the frequency
goes to infinity. If the transfer function is strictly proper, the amplification will
approach zero at high frequencies. For a transfer function that is not proper,
however, the amplification will approach infinity when the frequency approaches
infinity. That is, high frequency contents will be amplified. This means that the
inverse of a strictly proper system will be improper.
Here, the goal is not to cover all problems with the inversion of systems, but
to give some insights to the problems that can occur.
3.2.2
Postinverse and Preinverse
As is commonly known, the ordering of a linear system does not matter, i.e., the
output from A ∗ B equals the output from B ∗ A when A and B are linear dynamical
systems. This property is called commutativity. However, this does not apply to
nonlinear systems, as shown in Example 3.2.
Example 3.2: Noncommutativity of Nonlinear Systems
Consider the two functions
f 1 (x) = 2x
and
f 2 (x) = x2 .
If the order of the systems is f1, f2, the output is y12 = 4u², and with the reversed order, the output is y21 = 2u² ≠ y12.
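The example can be verified directly:

```python
# f1 and f2 from Example 3.2
f1 = lambda x: 2 * x   # linear gain
f2 = lambda x: x ** 2  # static nonlinearity

u = 3.0
y12 = f2(f1(u))  # f1 first, then f2: (2u)^2 = 4u^2 = 36
y21 = f1(f2(u))  # f2 first, then f1: 2u^2 = 18
print(y12, y21)  # 36.0 18.0
```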
Thus, for nonlinear systems, the output depends on the order of the systems.
For some nonlinear systems, this is not true and the systems can change order
without changing the output. One example where two nonlinear systems commute, is where one of the systems is the inverse of the other as in Example 3.3
for a Hammerstein-Wiener system. When an exact inverse exists, the preinverse
and the postinverse are the same. However, it is often not possible to determine
the exact inverse, and an approximate inverse has to be used. This approximate
function does not necessarily commute with the system.
Another example of nonlinear systems that commute are the Volterra series
and the p-th order Volterra inverse that will be described in the next section.
But, in general, the commutative property does not apply to nonlinear systems.
See Mämmelä [2006] for an extended discussion on commutativity in linear and
nonlinear systems.
Example 3.3: Analytical Inversion
Consider the Hammerstein system with the static nonlinearity
f H (x) = x3
which is invertible for all x, followed by the minimum-phase linear dynamic system
GH(s) = (s + 1)/(s + 2),
Figure 3.3: (a) A Hammerstein system with invertible static nonlinearity followed by a linear, stable minimum-phase dynamical system. For such a system, an analytical inverse exists, as shown in (b).
as shown in Figure 3.3a. For this system, an analytical inverse exists, namely the
Wiener system
GW(s) = (s + 2)/(s + 1),   fW(x) = ∛x,
see Figure 3.3b. This inverse is also an example of where a nonlinear system and
its inverse are commutative, that is, the two systems can be placed in whichever
order. This Wiener system can thus be used as a preinverse or a postinverse.
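A discrete-time analogue of this example can be simulated to check both orderings. The filter below is an assumed minimum-phase stand-in for GH (not the continuous-time system above), and the cube root inverts the cubic:

```python
import numpy as np
from scipy.signal import lfilter

# Discrete-time analogue of Example 3.3: a static cubic followed by a
# stable, minimum-phase filter, inverted by a Wiener system (inverse
# dynamics followed by a cube root). The filter coefficients are an
# illustrative assumption.
b, a = [1.0, 0.5], [1.0, 0.2]   # minimum-phase, stable linear dynamics

def hammerstein(x):             # S: static cubic, then linear dynamics
    return lfilter(b, a, x ** 3)

def wiener_inverse(x):          # inverse dynamics, then cube root
    return np.cbrt(lfilter(a, b, x))

rng = np.random.default_rng(0)
u = rng.standard_normal(500)

y_post = wiener_inverse(hammerstein(u))  # Wiener system used as postinverse
y_pre = hammerstein(wiener_inverse(u))   # the same system used as preinverse
```

Both orderings reconstruct the input (up to rounding), illustrating that an exact inverse commutes with the system.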
Different approximate modeling approaches, which will be further considered in Chapter 4, lead to either a preinverse or a postinverse, and these are not necessarily equal. Which one is required depends on the application: for power amplifier linearization a preinverse is desired, and for sensor calibration a postinverse. However, for power amplifier predistortion, the commutativity property is often considered approximately valid, and the pre- and postinverses are used interchangeably without further consideration [Abd-Elrady et al., 2008, Paaso and Mämmelä, 2008].
3.2.3 Volterra Series
In the linear systems theory, a common way to describe the output, y(t), of the system affected by the input u(t), is by the impulse response g( · ),
y(t) = ∫_{−∞}^{∞} g(τ)u(t − τ) dτ,     (3.4)
usually with the added constraints that the system is causal and the input zero for t < 0, so that the integral is limited to [0, t]. It can also be described by the corresponding Laplace relation
Y(s) = G(s)U(s)     (3.5)
where Y(s) and U(s) are the Laplace transformed versions of y(t) and u(t), respectively, and G(s) is the transfer function. This is not possible for nonlinear systems.
However, if the nonlinear system is time invariant with certain restrictions, an
input-output relation can be determined. These conditions include convergence
of the infinite sums and integrals that occur [Sastry, 1999], but will not be further
considered here. The input-output relation can be described by
y(t) = ∫_{−∞}^{∞} h1(τ1)u(t − τ1) dτ1 + ∫_{−∞}^{∞}∫_{−∞}^{∞} h2(τ1, τ2)u(t − τ1)u(t − τ2) dτ1 dτ2 + · · · + ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} hn(τ1, . . . , τn)u(t − τ1) · · · u(t − τn) dτ1 · · · dτn + · · ·     (3.6)
where
hn(τ1, . . . , τn) = 0 for any τj < 0,  j = 1, 2, . . . , n.
The relation (3.6) is called a Volterra series (sometimes Volterra-Wiener series)
and the functions hn (τ1 , . . . , τn ) are called the Volterra kernels of the system. The
expression (3.6) can also be written as
y(t) = H1[u(t)] + H2[u(t)] + · · · + Hn[u(t)] + · · ·     (3.7)
where
Hn[u(t)] = ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} hn(τ1, . . . , τn)u(t − τ1)u(t − τ2) · · · u(t − τn) dτ1 · · · dτn     (3.8)
is called an n-th order Volterra operator.
When considering an lti single input-single output (siso) system, the Volterra series reduces to the standard form, and the kernel h1 ( · ) in (3.6) corresponds to g( · ) in (3.4). See for example Schetzen [1980] for a more thorough
description of Volterra series. The counterpart of the transfer function is based
on the multivariable Fourier transform,
Hp(jω1, . . . , jωp) = ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} hp(τ1, . . . , τp) e^{−j(ω1τ1 + · · · + ωpτp)} dτ1 · · · dτp     (3.9)
called the p-th order kernel transform. The inverse relation is
hp(τ1, . . . , τp) = 1/(2π)^p ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} Hp(jω1, . . . , jωp) e^{j(ω1τ1 + · · · + ωpτp)} dω1 · · · dωp.     (3.10)
In analogy to the linear case, these functions are sometimes referred to as higher
order transfer functions. The discrete counterpart of the Volterra operators (3.8)
is [Tummla et al., 1997]
Hn[u(t)] = Σ_{i1=−∞}^{∞} · · · Σ_{in=−∞}^{∞} h^{(n)}_{i1,i2,...,in} u(n − i1) · · · u(n − in).     (3.11)
This version is often used in data-based modeling, where the models are based
on sampled data.
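A truncated, causal, finite-memory version of the discrete operator (3.11) is straightforward to evaluate directly. The kernels below are arbitrary illustrative choices, not from any identified system:

```python
import numpy as np

# Second-order truncation of a discrete Volterra series with memory M,
# cf. (3.11). Kernels are illustrative only.
M = 3
h1 = np.array([1.0, 0.5, 0.25])          # first-order kernel
v = np.array([1.0, 0.3, 0.1])
h2 = 0.1 * np.outer(v, v)                # second-order kernel

def volterra2(u):
    """y(t) = sum_i h1[i] u(t-i) + sum_{i,j} h2[i,j] u(t-i) u(t-j)."""
    y = np.zeros(len(u))
    for t in range(len(u)):
        # past input vector u(t), u(t-1), ..., u(t-M+1); zero before start
        up = np.array([u[t - i] if t >= i else 0.0 for i in range(M)])
        y[t] = h1 @ up + up @ h2 @ up
    return y

u = np.sin(0.3 * np.arange(50))
y = volterra2(u)
```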
p-th Order Volterra Inverse
A p-th order inverse, H⁻¹₍ₚ₎, is defined as a system that, when connected in series with the nonlinear system H, results in a system, Q, in which the first-order Volterra kernel is a unit pulse and the other Volterra kernels are zero, qk = 0, k = 2, . . . , p. The Volterra kernels for k > p might however be nonzero, but are generally considered to be negligible [Zhu et al., 2008]. The inverse H⁻¹₍ₚ₎ can be determined by using the Volterra series (assumed known) of the system, and the desired output. This is done in a sequential way by first finding the first-order Volterra operator, H⁻¹₍₁₎, and then solving for the higher order Volterra operators H⁻¹₍ₙ₎, n = 2, . . . , p, which then only depend on the system H and lower order operators of the inverse, see Schetzen [1980, Chapter 7] for a thorough discussion.
The ordering of the system H and the inverse H⁻¹₍ₚ₎ will affect the output, but it can be shown [Schetzen, 1980] that the first p Volterra operators of the connected systems are the same. The order of the system H and the inverse H⁻¹₍ₚ₎ can thus be interchanged, and the postinverse H⁻¹₍ₚ₎ can also be used as a preinverse, if only nonlinearities up to order p are of interest.
3.3 Inversion by System Simulation
Some approaches to avoid the explicit inversion of a system are based on a simulation of the true system, without including any feedback from the actual system.
The exact linearization described in Section 3.1.3 can be modified such that it
uses a simulated output in the feedback loop. Another approach is to decompose
the original system to avoid the explicit inversion of the nonlinear system. The
idea with these inversion methods is that the inverse can be used as a preinverse
or a postinverse, thus avoiding a feedback loop.
3.3.1 Separation of a Nonlinear System
A way to avoid the explicit inversion of a nonlinear system is presented in Markusson [2001, p. 51]. There, the nonlinear system S is separated into a linear part,
L, and a nonlinear part, N, where operator notation is used. The inverse of S = L + (S − L) = L + N = L(I + L⁻¹N) can then be written S⁻¹ = (I + L⁻¹N)⁻¹L⁻¹. We have thus obtained a postinverse S⁻¹ such that
S⁻¹S = (I + L⁻¹N)⁻¹L⁻¹(L + N) = (I + L⁻¹N)⁻¹(I + L⁻¹N) = I.
This can also be used as a preinverse, since
SS⁻¹ = (L + N)(I + L⁻¹N)⁻¹L⁻¹ = L(I + L⁻¹N)(I + L⁻¹N)⁻¹L⁻¹ = I.
Figure 3.4: An inversion method that only uses the inverse of the linear part L of a nonlinear system S = L + N.
The inverse (I + L⁻¹N)⁻¹L⁻¹ can be obtained in a feedback loop with the nonlinear part N in the feedback and the linear inverse L⁻¹ in the forward path (compare to the sensitivity function for lti systems), see Figure 3.4. It follows that the
nonlinear part N does not have to be explicitly inverted, and that only the linear
part L is to be inverted. The output from the inverted system is denoted w(t)
to separate it from the true input u(t), since different initial conditions of the
true system and the model will produce an output that is not exactly equal to
the input. Unknown initial states are discussed in Markusson [2001, p. 45], in a
maximum likelihood (ml) setting.
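The feedback structure in Figure 3.4 can be realized as a fixed-point iteration w ← L⁻¹(y − N(w)), whose fixed point satisfies (L + N)(w) = y. The FIR linear part and the small memoryless cubic below are illustrative assumptions, chosen so that the iteration is a contraction:

```python
import numpy as np
from scipy.signal import lfilter

# Inversion of S = L + N without inverting N: iterate the feedback loop
# w <- L^{-1}(y - N(w)). The system here is an illustrative assumption.
b_L = [1.0, 0.5]                       # linear part L (invertible leading tap)
N_op = lambda w: 0.1 * w ** 3          # "small" memoryless nonlinear part N

def S(u):                              # the full system S = L + N
    return lfilter(b_L, [1.0], u) + N_op(u)

def S_inverse(y, n_iter=50):
    w = lfilter([1.0], b_L, y)         # start from the pure linear inverse
    for _ in range(n_iter):            # iterate the feedback loop
        w = lfilter([1.0], b_L, y - N_op(w))
    return w

u = 0.5 * np.sin(0.2 * np.arange(300))
w = S_inverse(S(u))                    # should reconstruct u
```

Only the linear part is ever inverted; the nonlinearity enters solely through the feedback term, as in the figure.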
3.3.2 Hirschorn's Method
Another approach to invert nonlinear systems is Hirschorn’s method, where exact linearization is used in order to construct a linear system [Hirschorn, 1979].
Given that the model is good enough, it should be possible to use the model not
only in the feedback, as it is used in the construction of u in (3.3), but also as a
simulation model.
Preinversion
If instead of the measured output from the system, the output from the simulated model is fed back to the controller, see Figure 3.5, the overall system (from
reference r to output ys ) will by construction be linear with the dynamics Gm .
Also, the input calculated for this (simulated) system leads to the desired dynamics, and the same input signal can be used also for the true system. The system
from r to u will be denoted S † . A pure open loop controller is thus obtained, as
in Hirschorn [1979], see Figure 3.6, and this is called Hirschorn’s method. The
simulated feedback can also be interpreted as an observer with no measurement
inputs.
Postinversion
Let the nonlinear system be denoted S and the precompensation be denoted S † ,
since it is not really an inverse of S, but rather creates a system that, in series with
S will be linear. The dynamics of the overall linear system is Gm .
The method described above can be seen as an inversion of the nonlinearities
of the system – the output from the overall system will be linear with dynamics
Figure 3.5: A block diagram of Hirschorn's method, where the system S is replaced by a model Ŝ in the exact linearization feedback loop. The input signal calculated in this way is then also applied to the real system S. The simulation system and feedback loop that leads to an overall linear behavior between r and ys is denoted S†. The input to S† is the reference r and the output is the control signal u.
Figure 3.6: The predistortion block S† obtained using Hirschorn's method in series with the real system leads to an overall linear behavior between r and ylin.
Figure 3.7: The (possibly fictitious) reference signal r̃ can be seen as input to the block S†, creating the input u to the nonlinear system S.
Figure 3.8: Hirschorn's method applied as postdistortion, when the output can be assumed to be created according to Figure 3.7. The block S† cannot simply be applied at the output y, but has to be manipulated to obtain a linear behavior between u and ylin.
Gm chosen by the user, assuming of course that the model is accurate enough. This is a setup where the preinverse and the postinverse are not interchangeable; Hirschorn's method tells us only how to determine the input to the nonlinear system such that the reference-to-output behavior has the linear dynamics Gm, not how to manipulate the output to make it a linear response to the input. If this postinverse is what is wanted, a different setup is needed.
It is known that S† in cascade with S leads to a linear system Gm, so that
y = Gm r     (3.12)
with r the reference, cf. Figure 3.6. The goal is to obtain a linear response to u by using a postinverse on the output y. Assume that u was actually created by a prefilter, S†, with u as output and the fictitious signal r̃ as input, as in Figure 3.7. An estimate of this signal can then be obtained by
r̄ = (1/Gm) y,     (3.13)
where, if no transients or noise are present, r̄ = r̃. An estimate of the input u, called ū, can be obtained by filtering r̄ by S†. Now, to obtain the desired dynamics, ū must be filtered by the linear function Gm, see Figure 3.8. The cascade of these three blocks (1/Gm, S† and Gm) thus makes up a postdistorter that leads to a linear response between u (not available for manipulation) and ylin in Figure 3.9.
This method is illustrated in Example 3.4.
Figure 3.9: Hirschorn's method used as postdistortion. The postinverse consists of the three blocks 1/Gm, S† and Gm. Used in this way, the overall behavior between u and ylin will be linear.
Figure 3.10: The output from the nonlinear system (3.14) in gray and the desired dynamics from Gm (3.15) in black.
Example 3.4: Hirschorn’s Postinverse
Consider the nonlinear system
ẋ1 = −x1³ + x2 + w1
ẋ2 = −x2 + u + w2     (3.14)
y = x1
with process noise wi ∼ N(0, 0.05) and a multisine input. The nonlinear feedback
u = −3x1⁵ + 3x1²x2 + x2 + ũ
leads to a linear system ÿ = ũ. Now, linear theory can be applied and pole placement has been used to get an overall system response from reference r to output y corresponding to the one from
Gm(s) = 1/(s² + 5s + 6).     (3.15)
The output from the nonlinear system (3.14) is plotted in Figure 3.10 together
with the output from the desired dynamics Gm .
A preinverse S † has been constructed as in Figure 3.5. S † has been used as
a preinverse, as well as a postinverse for evaluation purposes. The results are
shown in Figure 3.11. Here, it is clear that the desired preinverse and postinverse
are not the same, and that S † cannot straight away be used as a postinverse. If
instead, the output y is filtered by the cascaded systems 1/Gm , S † and Gm , as in
Figure 3.8, the result improves considerably, as shown in Figure 3.11. The remaining errors are primarily caused by the noise. For noise-free data, the preinverse
performs perfectly whereas the postinverse has some minor errors.
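The example can be reproduced in outline (noise-free, with feedback from the plant states, i.e., a perfect model is assumed) using a simple forward-Euler simulation; the pole-placement law ũ = r − 5ẏ − 6y assigns the dynamics of Gm in (3.15):

```python
import numpy as np

# Noise-free sketch of Example 3.4: exact linearization gives the double
# integrator ÿ = ũ, and ũ = r - 5*ẏ - 6*y places the poles of
# Gm(s) = 1/(s^2 + 5s + 6). The reference signal is an assumption.
dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
r = np.sin(t)

x1 = x2 = 0.0                                   # plant states, cf. (3.14)
z1 = z2 = 0.0                                   # reference model: y and its derivative
y_nl = np.zeros_like(t)
y_lin = np.zeros_like(t)

for k, rk in enumerate(r):
    ydot = -x1 ** 3 + x2                        # derivative of y = x1
    u_tilde = rk - 5.0 * ydot - 6.0 * x1        # pole placement on the linearized system
    u = -3 * x1 ** 5 + 3 * x1 ** 2 * x2 + x2 + u_tilde   # exact-linearization feedback
    x1, x2 = x1 + dt * ydot, x2 + dt * (-x2 + u)
    z1, z2 = z1 + dt * z2, z2 + dt * (rk - 5.0 * z2 - 6.0 * z1)
    y_nl[k], y_lin[k] = x1, z1
```

The nonlinear closed loop tracks the output of Gm up to discretization error, which is the linearizing behavior the figure illustrates.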
Figure 3.11: A Hirschorn postinverse applied to the system (3.14). The output from Gm (3.15) is plotted in solid black and the output when S† was used as a preinverse in dashed black. The output from the system in series with the inverse S† is plotted in dashed gray when S† is used as a postinverse with no extra filtering. When the postdistortion is constructed according to Figure 3.8, the result improves considerably. Here, the postinverse consists of three blocks, 1/Gm, S† and Gm, and the postdistorted output is plotted in solid gray. Note the scale difference from Figure 3.10.
4 Estimation of Inverse Models
An inverse model is here estimated with the purpose of using it in cascade with
the system itself, as an inverter, and a good inverse model in this setting would be
one that, when used in series with the original system, reconstructs the original
input, see Figure 4.1.
When estimating a model, one should usually do so under the conditions in which it will be used, concerning for example the choice of input and the experimental conditions [Ljung, 1999, Gevers and Ljung, 1986, Pintelon and Schoukens, 2012]. It is important to choose the input signal so that it captures the significant characteristics of the system. Usually, it ought to resemble the conditions in which the model is intended to be used, but what does that correspond to in this case? Will it be the input spectrum that decides the weighting, or is it rather the output spectrum that should be weighted to reflect the relative importance of the model fit, since the output is in this case seen as the input to the inverse system to be estimated? Another important topic in system identification is the choice of loss function, V in (2.5b).
Figure 4.1: The intended use of the estimated inverses. Figure (a) shows predistortion, where the inverse S⁻¹ is applied before the system S, and (b) shows postdistortion, where the order is reversed.
Table 4.1: Inputs and outputs to the identification procedure, using the different methods.

            Input   Output   Model required
Method A    u       y        forward (Ŝ)
Method B    u       u        inverse
Method C    y       u        inverse
It should reflect the goal of the identification, and, depending on how it is chosen,
different properties of the estimated model will be emphasized. In this setup, the
goal is to make use of these degrees of freedom and the flexibility of the model
to obtain an accurate input estimate. In this chapter, some aspects of system
inverse estimation are discussed. The contents are also presented in Jung and
Enqvist [2013].
4.1 System Inverse Estimation
In system identification, the goal is to achieve as good a model as possible to
explain the behavior of y by a prediction or simulation ŷ(t|θ), which depends on
the estimated model parameters θ and the input u. This is done using measured
data, usually input data u(t) and output data y(t), (2.3), see Chapter 2. Here, a
model describing the system itself will be referred to as a forward model and a
model describing the inverse will be called an inverse model.
The inverse model is estimated with the purpose of using it in series with the
system itself, as an inverter, see Figure 4.1. In this setup, the goal is to minimize
the difference between the input u and the output from the cascaded systems,
yu . A good model in this setting would be one that, when used in series with the
original system, regains the original input, so that yu = u.
There are three main approaches to the estimation of an inverse of a system S,
described in more detail below.
Method A In a first step, the forward model Ŝ is estimated in the standard
way, with input data u and output data y. Step two is to invert the resulting
model to obtain an approximate inverse Ŝ −1 .
Method B In a pre-step, the forward model Ŝ is estimated in the standard way,
with input data u and output data y. This model is used in series with an
inverse model, Ŝ −1 , and the inverse model parameters are estimated in this
setting, by trying to minimize the difference between the input u and the
simulated, distorted output yu .
Method C The identification is done in one step, by identifying the inverse
directly, using input data y and output data u.
The inputs and outputs to the different approaches are summarized in Table 4.1.
The identification in the first approach, Method A, is the standard one, as
described in, for example, Ljung [1999] and Pintelon and Schoukens [2012], and
the inversion is discussed in Åström and Hägglund [2005] in the feedforward
control application. The use of feedforward control based on an inverse model of
the system in the presence of plant uncertainty is discussed in Devasia [2002]. A
good thing with Method A is that the identification uses standard methods, but
on the other hand, an inversion is required, and the weighting of the model fit is
not necessarily optimal for the use intended here.
The second approach, Method B, is often used in power amplifier predistortion [Fritzin et al., 2011a, Abd-Elrady et al., 2008, Paaso and Mämmelä, 2008].
In this application, it is also called direct learning architecture (dla). The quality of the inverse and the forward models are closely coupled, and two choices
are available. Since it is often preferable to obtain a rather simple inverse model
(for example in the predistorter case), this restriction can also be applied to the
forward model, so that the same model structure is used for the forward and the
inverse models. Another approach is to use a more complex forward model, making sure that as much as possible of the system behavior is captured, and then
let the inverse model be less complex. The choice in the end comes down to the implementation: if the forward model has to be implemented, this model also needs to have a limited complexity. A good thing with this approach is that the
estimation of the inverse is done with no noise present, but it also requires two,
possibly nonconvex, minimizations with the risk of obtaining local minima. The
quality of the inverse also clearly depends on the quality of the forward model.
The third approach, Method C, is also called indirect learning architecture
(ila) in power amplifier predistortion applications. It has been evaluated in pa
predistortion applications in Abd-Elrady et al. [2008] and Paaso and Mämmelä
[2008]. For this approach to be applicable in predistortion, it is assumed that
the predistorter and the postdistorter are interchangeable (commutativity), see
also Section 3.2.2. An advantage with this method is that the inverse is estimated
in the setting in which it is going to be used, and that the weighting is possibly
better than for Method A. A drawback is that the measured output is used as
input, which risks causing a biased estimate [Amin et al., 2012]. It can be an
easier approach, since the estimation is done in one step. Furthermore, there is
no need to construct a model for the forward system that will later be discarded.
In power amplifier predistortion applications, Method C (ila) is more
commonly used than Method B (dla), as investigated in Paaso and Mämmelä
[2008]. In Paaso and Mämmelä [2008], comparisons performed indicate that the
dla performs better in the simulation setup used, whereas in Abd-Elrady et al.
[2008] the ila seems to perform slightly better.
4.2 Inverse Identification of LTI Systems
To simplify the discussion, we will start by looking at lti dynamical systems. The model estimation is done in open loop, assuming the output was created according to
y(t) = G0(q)u(t) + H0(q)e0(t)     (4.1)
where G0 is the true system, H0 is the true noise dynamics and e0 is a white noise
sequence.
In system identification, the goal is often to find the minimizing argument of a function of the prediction error ε(t, θ),
θ̂ = arg min_θ (1/N) Σ_{t=1}^{N} ε(t, θ)² = arg min_θ (1/N) Σ_{t=1}^{N} [y(t) − ŷ(t|θ)]²,     (4.2)
where y(t) is the measured output and ŷ(t|θ) is the predicted output given the model parameters θ. Here, we use a fixed noise model H∗ ≡ 1 such that the prediction is described by ŷ(t|θ) = G(q, θ)u(t). Looking at the identification from a frequency domain point of view, the minimization criterion in (4.2) can asymptotically be written as [Ljung, 1999, (8.71) p. 266]
θ̂ = arg min_θ ∫_{−π}^{π} |G0(e^{iω}) − G(e^{iω}, θ)|² Φu(ω) dω     (4.3)
where G(e iω , θ) is the model and Φu (ω) is the spectrum of the input signal. The
estimation will thus be done in a way to emphasize the model fit in frequency
bands where the transfer function and the input spectrum are large enough to
have a significant impact on the total criterion. The minimization is done with
respect to the product of model fit (|G0 − G|²) and input spectrum. If the input is
white noise (flat spectrum), it is thus more important to obtain a good model fit
at frequencies with a large transfer function magnitude.
If instead the goal is to estimate the inverse model to be used as described in Section 4.1, the minimization criterion in the time domain can be written
θ̂ = arg min_θ (1/N) Σ_{t=1}^{N} [u(t) − (1/G(q, θ)) y(t)]²     (4.4)
and the frequency domain equivalent to (4.4), when y is noise-free, is
θ̂ = arg min_θ Vinv(θ).     (4.5)
The loss function is
Vinv(θ) = ∫_{−π}^{π} |1/G0(e^{iω}) − 1/G(e^{iω}, θ)|² Φy(ω) dω     (4.6)
        = ∫_{−π}^{π} |1/G0(e^{iω}) − 1/G(e^{iω}, θ)|² |G0(e^{iω})|² Φu(ω) dω
        = ∫_{−π}^{π} |1 − G0(e^{iω})/G(e^{iω}, θ)|² Φu(ω) dω
        = ∫_{−π}^{π} |G(e^{iω}, θ) − G0(e^{iω})|² Φu(ω)/|G(e^{iω}, θ)|² dω     (4.7)
using Φy = |G0(e^{iω})|² Φu if no noise is present. The loss function in (4.7) is similar to the weighting for the input error case where H = G, so that y(t) = Gu + Ge = G(u + e), that is, the error enters the system at the same place as the input [Åström and Eykhoff, 1971].
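The algebraic steps from (4.6) to (4.7) can be verified numerically on a frequency grid; the two FIR transfer functions below are arbitrary illustrative choices with no zeros on the unit circle:

```python
import numpy as np

# Check that the integrand of (4.6), using Phi_y = |G0|^2 Phi_u, equals
# the integrand of (4.7), pointwise on a frequency grid.
w = np.linspace(-np.pi, np.pi, 1001)

def freq_resp(b, w):
    # frequency response sum_k b_k e^{-i w k} of an FIR filter
    return sum(bk * np.exp(-1j * k * w) for k, bk in enumerate(b))

G0 = freq_resp([1.0, 0.4, 0.2], w)      # "true" system (illustrative)
G = freq_resp([0.9, 0.5], w)            # model (illustrative)
Phi_u = np.ones_like(w)                 # flat input spectrum
Phi_y = np.abs(G0) ** 2 * Phi_u         # noise-free output spectrum

integrand_46 = np.abs(1 / G0 - 1 / G) ** 2 * Phi_y            # from (4.6)
integrand_47 = np.abs(G - G0) ** 2 * Phi_u / np.abs(G) ** 2   # from (4.7)
```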
Comparing the minimization criterion for the forward estimation in (4.3) to the one for the inverse estimation in (4.6), the weighting is clearly different. In the forward case, a relative model error at a frequency where the system amplification is small will affect the criterion much less than a model error at a frequency where the system amplification is large. In the inverse estimation case, a relative model error will have the same effect on the criterion for two frequencies with the same input spectral density, regardless of the system amplification at those frequencies. The weighting, and thus the model fit, between the different frequencies will be shifted to better reflect the importance of a good fit also at frequencies where the transfer function magnitude is small.
The time domain criterion (4.4) thus leads to the frequency domain description (4.6), and the weighting is automatically matched to the intended use of the inverse model estimate. Here, only the case when the system and its inverse are both stable and causal will be investigated. See Section 3.2.1 for a brief discussion on the problems involved in system inversion.
4.3 An Illustrative Linear Dynamic Example
Let us look at a small example. The goal is to obtain a system inverse to be used in series with the original system in order to retrieve the input, see Figure 4.1a. The input u and the noise-free output y are measured. The system has two resonance frequencies, at ω = 1 rad/s and ω = 10 rad/s. The magnitudes of the two resonance peaks are very different, with the first one a hundred times larger than the second one. The true system, G0, is described by
G0(s) = 10/(s⁴ + 1.1s³ + 101.1s² + 11s + 100)     (4.8)
Figure 4.2: The Bode magnitude plot of G0 in (4.8) in the solid line. The stars mark the amplitude of the multisine input (u in (4.9)) components at each frequency.
and the Bode magnitude diagram is shown in Figure 4.2. The input consists of three sinusoids around each of the two resonance peaks such that the input power is concentrated in two bands, centered around the resonance frequencies, i.e.,
u = Σ_{k=1}^{6} ak sin(ωk t + φk)     (4.9)
with ak = 1 for k = 1, 2, 3, ak = 10 for k = 4, 5, 6, ωk = 0.9, 1, 1.1, 9, 10, 11 rad/s and φk ∼ U[−π, π]. The input amplitude and the frequency points are illustrated by the stars in Figure 4.2. The sampling time is Ts = 0.02 s and N = 10 000 simulated measurements have been collected.
With the goal of using an fir model as a prefilter to recover the input u, two
models have been estimated, using Method A and Method C in Section 4.1.
An fir model depends only on previous input signals, as described on page 13.
As the system is linear, the ordering of the two systems does not matter, and the
preinverse and postinverse are interchangeable.
First, a forward model has been estimated as an output error (oe) model using the System Identification Toolbox in Matlab [Ljung, 2003], with [nb nf nk] = [1 3 0]. This model has then been inverted, resulting in an fir model with 4 terms, according to Method A. The approximate inverse using Method C is an fir model with 4 terms, i.e. [nb nf nk] = [4 0 0], and will have a very different weighting. Hence, the two inverses will capture different behaviors of the system. The system G0, (4.8), is a fourth order system whereas the model is third order. Thus,
the model cannot perfectly model the system but should be able to capture one
resonance peak and the overall behavior of the system.
As can be seen in the Bode magnitude plot in Figure 4.3, the Method A
model has a much better fit around ω = 1 rad/s and almost perfectly models
the resonance peak, but completely misses the second resonance peak at ω =
10 rad/s. The inverse estimate, the Method C model, on the other hand, does not
manage to catch either of the resonance peaks in a satisfactory way but catches
both of the resonance frequencies. That is, the amplification at ω = 1 and 10 rad/s
is well captured, but not the resonance peaks around them. Estimating the forward model in the standard way will clearly focus on the frequencies where the product of model fit (|G0 − G|², connected to the transfer function amplification) and input spectrum is large. When this system approximation is then inverted, according to Method A, the errors around ω = 10 rad/s become prominent.
The results in the time and frequency domains are presented in Figures 4.4
and 4.5. In the time domain plot in Figure 4.4, it is clear that the Method C
model better reconstructs the input than the Method A model. In Figure 4.5,
the periodograms of the reconstructed inputs are shown, zoomed in around the
input frequencies. At the lower frequency around ω = 1 rad/s, the Method A
model captures the input almost perfectly, but around ω = 10 rad/s, the reverse
is true and the Method C model performs better.
As shown in this small example, there are clearly occasions when it is advantageous to estimate an approximate inverse directly as opposed to estimating the
forward model and then inverting it.
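The example can be reproduced in outline. As a stand-in for the oe estimate of the thesis, a least-squares arx fit is used here (so the numbers will differ), but both inverses are 4-tap fir filters as in the text, and the direct estimate (Method C) is by construction least-squares optimal among them:

```python
import numpy as np
from scipy.signal import cont2discrete, lfilter

# Method C fits a 4-tap FIR inverse directly (input y, output u). The
# Method A surrogate fits an ARX model y(t) = -a1 y(t-1) - ... + b0 u(t)
# and inverts it analytically, which again yields a 4-tap FIR.
rng = np.random.default_rng(2)
Ts, N = 0.02, 10_000
t = np.arange(N) * Ts

# True system (4.8), discretized with zero-order hold
num_d, den_d, _ = cont2discrete(([10.0], [1.0, 1.1, 101.1, 11.0, 100.0]), Ts)
# Multisine input (4.9)
amps = [1, 1, 1, 10, 10, 10]
freqs = [0.9, 1.0, 1.1, 9.0, 10.0, 11.0]            # rad/s
phases = rng.uniform(-np.pi, np.pi, 6)
u = sum(a0 * np.sin(w0 * t + p0) for a0, w0, p0 in zip(amps, freqs, phases))
y = lfilter(num_d.flatten(), den_d, u)

def lags(x, n):   # regression matrix with columns x(t), x(t-1), ..., x(t-n+1)
    return np.column_stack(
        [np.concatenate([np.zeros(i), x[:N - i]]) for i in range(n)])

# Method C: direct least-squares fit of the FIR inverse
Y4 = lags(y, 4)
hC, *_ = np.linalg.lstsq(Y4, u, rcond=None)
errC = np.linalg.norm(u - Y4 @ hC)

# Method A surrogate: ARX forward model, then analytic inversion to an FIR
theta, *_ = np.linalg.lstsq(np.column_stack([-Y4[:, 1:], u]), y, rcond=None)
a1, a2, a3, b0 = theta
hA = np.array([1.0, a1, a2, a3]) / b0
errA = np.linalg.norm(u - Y4 @ hA)
```

Since hC minimizes the reconstruction error over all 4-tap fir filters, the direct inverse can never do worse than the inverted forward model on this criterion, which mirrors the conclusion of the example.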
4.4 Inverse Identification of Nonlinear Systems
It is hard to say anything general about the estimation of inverse systems for nonlinear systems. Depending on the type of nonlinearity and how it enters, the effects will be different, so the choice of how to estimate the inverse is closely connected to the system itself.
For a linear system, a binary input signal is enough to extract all information
in an identification experiment. One example of where this can be used is in
identification of Hammerstein systems, which are block-oriented systems where
a static nonlinearity is followed by a linear dynamical system. The linear part
of the Hammerstein system can be modeled perfectly by using a binary input signal. In a second experiment, where the input is no longer binary, the model of the
linear system can be used to simplify the estimation of the nonlinearity. The estimates of the nonlinearity and the linear dynamics can be inverted and a Wiener
system is obtained, according to Method A, assuming a stable, minimum-phase
dynamical system and an invertible nonlinearity. This is similar to Example 3.3
on page 23 where an exact inverse could be found (but there the system was assumed known).
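The binary-input trick can be sketched for a noise-free Hammerstein system with an odd nonlinearity: for u ∈ {−1, 1} we have u³ = u, so the intermediate signal equals the input, and a standard least-squares fir fit recovers the linear part exactly (the system below is an illustrative assumption):

```python
import numpy as np
from scipy.signal import lfilter

# First experiment for Hammerstein identification: with a binary input,
# the odd static nonlinearity is transparent, so a linear fit of the
# input-output data recovers the linear dynamics. Illustrative system.
rng = np.random.default_rng(3)
g = np.array([1.0, 0.8, 0.3, 0.1])        # impulse response of the linear part
f = lambda x: x ** 3                      # odd static nonlinearity

Nb = 2000
u_bin = rng.choice([-1.0, 1.0], size=Nb)
y = lfilter(g, [1.0], f(u_bin))           # f(u_bin) == u_bin here

Phi = np.column_stack(
    [np.concatenate([np.zeros(i), u_bin[:Nb - i]]) for i in range(len(g))])
g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

A second experiment with a richer input would then isolate the nonlinearity, given the recovered linear part.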
One way of finding an inverse to a more general nonlinear system is by using
Hirschorn’s method, described in Section 3.3.2. A question is how to estimate
this system inverse. In the case where the structure of the nonlinear system is
Figure 4.3: The Bode magnitude response of G0⁻¹ (black solid line), the inverted forward model, Method A, (black dashed line) and the inverse model estimate using Method C (gray solid line). The inverted forward model perfectly catches the resonance peak at ω = 1 rad/s, whereas the direct estimation of the inverse does not model either of the resonance peaks in a satisfactory way. The Method C model instead accurately models both peak frequencies, that is, it manages to accurately model the amplification at ω = 1 and 10 rad/s, but not the resonance peaks.
Figure 4.4: The input u (black solid line), and the reconstructed input yu using the inverted forward model (black dashed line) and the inverse model estimate using Method C (gray solid line). The estimation of the inverse cannot perfectly reconstruct the input, but is clearly better than the inverted forward model.
Figure 4.5: Periodogram of the input u (black solid line), and the reconstructed input yu using the inverted forward model (black dashed line) and the inverse model estimate using Method C (gray solid line) around ω = 1 rad/s (top) and ω = 10 rad/s (bottom). It is clear also in the frequency domain that the forward model better captures the behavior around ω = 1 rad/s than the inverse estimation, but the reverse is true around ω = 10 rad/s.
Figure 4.6: Estimation of the Hirschorn inverse model. By filtering the output y through the inverse of the desired dynamics Gm, an estimate of the reference signal, r̄, can be obtained.
known, but where there are unknown parameters that need to be estimated, the
identification can be done in several ways, just as described in Section 4.1.
Method A would correspond to measuring the input u and the output y, and
identifying the unknown parameter values in the standard (forward) way. This
estimated model could then be used to provide the inverse, since if a model of
the forward system is available, a model of the inverse system is as well.
Since the exact linearization framework provides us with an inverse once the
forward model is known, Method B does not really have an equivalence in this
case – once the forward model is known, the exact inverse to match it is also
known. In the general case, this forward model could be used to estimate an
approximate inverse.
Method C would correspond to estimating the inverse S † directly. The order
of the inverse and the output are reversed in Method C, so that y is used as
input and u as output. In Hirschorn’s method, the inverse takes the reference r
as input and the output is the control signal u. So, in order to identify the inverse
S † , we would need u as output and the reference r as input. But, as the data was
collected in open loop with no pre- or postdistorter, the signal r is not available,
only u and y. Now, as in Section 3.3.2, assume that the system was actually
preceded by a system S † , fed by a fictitious reference signal r̃, and that the overall
behavior from r̃ to y is in fact linear with dynamics described by Gm . If this is true,
then the signal r̄ would be obtained by filtering y with 1/Gm , and the system S †
can be identified using r̄ as input and u as output, see Figure 4.6. So, this equals
finding the inverse by using (a filtered version of) the output y as the input and u
as output as in Method C. A benefit with Hirschorn’s method is that it provides
a parameterized inverse, so that the structure of this inverse system is already
known.
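The filtering step in Figure 4.6 can be sketched numerically. The snippet below is our own illustration, not code from the thesis: it assumes a hypothetical first-order low-pass Gm and shows that filtering the output y through 1/Gm recovers the (here fictitious) reference, which could then be paired with u to identify S † as in Method C.

```python
import numpy as np

# Hypothetical desired dynamics Gm: y[k] = a*y[k-1] + (1-a)*r[k].
# Its inverse then recovers r from y as r[k] = (y[k] - a*y[k-1]) / (1 - a).
a = 0.7  # assumed pole of Gm (illustrative value)

def gm_filter(r):
    """Filter a reference r through the desired dynamics Gm."""
    y = np.zeros_like(r)
    for k in range(len(r)):
        y[k] = a * (y[k - 1] if k > 0 else 0.0) + (1 - a) * r[k]
    return y

def gm_inverse(y):
    """Filter the measured output y through 1/Gm to estimate the reference."""
    y_prev = np.concatenate(([0.0], y[:-1]))
    return (y - a * y_prev) / (1 - a)

rng = np.random.default_rng(0)
r = rng.standard_normal(200)   # fictitious reference signal
y = gm_filter(r)               # output of the (here ideal) linearized loop
r_bar = gm_inverse(y)          # reconstructed reference r-bar
```

With ideal data r̄ equals r exactly; with a real system, (r̄, u) would form the estimation data for S †.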
Part II
Power Amplifier Predistortion

5 Power Amplifiers
An electronic amplifier, or power amplifier (pa), is used to increase the power of
a signal, so that the output is a magnified replica of the input. There are many
different constructions of amplifiers, and they can be characterized by different
measures such as gain, efficiency and linearity. Amplifiers are commonly used in
many applications, such as audio applications and telecommunications, both in
base stations and hand-held devices.
This chapter provides a basis to understand the amplifier related problems
described in later chapters. It is by no means a complete description of pas, but
should be enough to understand this thesis. It also introduces the concepts of
predistortion and linearization as well as the outphasing pa.
5.1 Power Amplifier Fundamentals
Today, wireless communication is used everywhere to transfer information. An
important part of the technology is the possibility to transmit and receive the
information, and the devices used are called transmitter (tx) and receiver (rx).
The transmitter converts the information to an electrical signal suitable for the
transmission in the given medium (in this case air, but in standard communication this can be a wire, fiber-optics, etc.). At the other end of the transmitting
medium, a device is needed to receive the message and convert it into the original form – the receiver. This process of sending and propagating an information
signal over a medium is called a transmission.
It is often desired that the equipment should be able to both send and receive
information (a phone for example, where one can speak and listen), that is, a
device that contains both a transmitter and a receiver. Such a circuit is called a
transceiver. The physical circuit is connected to a chip.
By combining the receiver and transmitter into a transceiver, the circuits can
Figure 5.1: Block diagram of a direct-conversion transmitter. The baseband
signal (xBB ) is upconverted to radio frequencies by the modulator and passes
through a pa before being sent to the antenna.
be used for multiple purposes, reducing the number of components (and thus the
cost) as well as the size of the chip, leading to more functionality per area. Such
shareable components are antennas, oscillators, amplifiers, tuned networks and
filters, frequency synthesizers and power supplies [Frenzel, 2003].
5.1.1 Basic Transmitter Functionality
A standard transmitter includes a digital baseband (db), digital-to-analog converters (dacs), mixers (further explained in Example 5.1), two local oscillators (los) that are 90◦ out of phase, a combiner, a power amplifier and a matching
network before the antenna. The signal of interest, xBB , is split into an in-phase
channel, I , and a quadrature channel, Q,
xBB (t) = I (t) + j Q(t)    (5.1)
by the db, corresponding to the real (I ) and imaginary (Q) parts of the signal,
to generate two independent signals. Complex signals are commonly used in
different modulation techniques in communications applications, see for example Frenzel [2003]. The I and Q signals are upconverted to the radio frequency
(rf, ranging between 3 kHz and 300 GHz) carrier frequency, ωc , and recombined,
see Figure 5.1. The upconversion is done by a quadrature modulator, usually implemented by two mixers and two lo signals with a phase difference of 90◦ . The
power of the recombined output signal,

x(t) = r(t) cos(ωc t + α(t)),    (5.2)

where

r(t) = √(I²(t) + Q²(t))    (5.3)

and

α(t) = arctan(Q(t)/I (t)),    (5.4)
is often too low for transmission, and it has to pass through a power amplifier
before being sent to the antenna.
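Equations (5.2)-(5.4) can be checked numerically. The sketch below uses illustrative signal and frequency choices of our own (real rf carriers are far higher), and uses arctan2 instead of arctan so the phase keeps the correct quadrant. It also verifies that the polar form (5.2) equals the two-mixer implementation of Figure 5.1.

```python
import numpy as np

fs = 1e6          # sample rate [Hz] (assumed)
fc = 100e3        # carrier frequency [Hz] (assumed, far below real rf)
t = np.arange(0, 1e-3, 1 / fs)

I = np.cos(2 * np.pi * 1e3 * t)          # in-phase baseband component
Q = 0.5 * np.sin(2 * np.pi * 1e3 * t)    # quadrature baseband component

r = np.sqrt(I**2 + Q**2)                 # envelope, eq. (5.3)
alpha = np.arctan2(Q, I)                 # phase, eq. (5.4), quadrant-aware
x = r * np.cos(2 * np.pi * fc * t + alpha)   # upconverted signal, eq. (5.2)

# The same x is obtained from the mixer implementation in Figure 5.1:
x_mixer = I * np.cos(2 * np.pi * fc * t) - Q * np.sin(2 * np.pi * fc * t)
```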
Figure 5.2: Amplitude modulation. The information in the modulation signal is upconverted in the mixer to the carrier frequency (frequency of the
carrier signal) and the shape (envelope) of the modulated signal contains the
original information in the modulation signal.
Example 5.1: Amplitude modulation
Modulation is the process of varying the properties of a high-frequency signal,
the carrier signal (usually a sine wave) with a modulation signal that contains the
information to be transmitted. The modulation can be performed using a mixer,
a component that multiplies the two (possibly shifted) inputs. When amplitude
modulation (am) is used, the information can be found in the amplitude of the
modulation signal. The imaginary line that connects the peaks of the modulated
signal is the information signal, and is called the envelope. Other common analog
modulation techniques include phase modulation (pm) and frequency modulation (fm). Here, the envelope of the signal is kept constant but the phase shift or
the frequency, respectively, of the carrier signal is varied. These modulation
techniques can also be combined into more complex modulation techniques.
For the example in Figure 5.2, the modulation (information) signal is a sine
wave. The carrier is a sine wave of much higher frequency, and the modulated
output is a high frequency signal where the shape of the envelope contains the
information in the modulation signal.
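The example can be reproduced in a few lines. The sketch below uses made-up frequencies and a modulation index of 0.5 (both our assumptions); the mixer is the elementwise multiplication.

```python
import numpy as np

fs = 100_000                       # sample rate [Hz] (assumed)
t = np.arange(0, 0.1, 1 / fs)

f_mod, f_car = 50, 5_000           # illustrative modulation/carrier frequencies
m = np.sin(2 * np.pi * f_mod * t)        # modulation (information) signal
carrier = np.cos(2 * np.pi * f_car * t)  # carrier signal
modulated = (1 + 0.5 * m) * carrier      # am with modulation index 0.5

# The envelope connects the peaks of the modulated signal and carries
# the original information.
envelope = 1 + 0.5 * m
```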
The amplitude modulation in Example 5.1 is an analog modulation scheme
that can be used for continuous signals. If the baseband signal is digital, a digital
modulation is needed, which will be introduced in Example 5.2.
Example 5.2: Digital modulation
One digital modulation scheme is phase-shift keying (psk), which modulates the phase of the carrier signal. A digital modulation uses a finite number
of distinct signals to represent digital data. In psk, the phase is unique for each
signal section, or symbol, that is transmitted. The demodulator, at the receiver
end, should interpret the signal and map it back to the original symbol. This
Figure 5.3: (a) Constellation diagram for quadrature phase-shift keying, a
digital modulation scheme. The four symbols represent the bits 00, 01, 11
and 10. (b) shows an example where the symbol 10 is to be transmitted. The
i part is 1 and the q part is 0. The bits are modulated by a carrier signal, a
sinusoidal with a 90◦ phase shift between the i and q parts, and the signals
are added. Typically, the zero is coded as −1. The phase of the output is
unique and can be mapped back to the i and q parts, as seen in Figure 5.4.
requires the receiver to be able to compare the phase of the received signal to a
reference signal. Such a system is termed coherent.
One type of digital psk modulation is quadrature phase-shift keying (qpsk)
which uses four phases, and can encode two data bits per symbol. In a constellation diagram, the qpsk scheme has four points spread out around a circle, as
seen in Figure 5.3a.
We will here look at an example where the symbol to be transmitted is 10.
The iq decomposition is done such that the odd-numbered bit (1) is the i component and the even-numbered bit (0) is the q component, as seen in Figure 5.3b.
The bits are modulated by the carrier signal, a sinusoidal with a 90◦ phase shift
between the i and q branches, and the signals are added. The resulting signal is
unique, as seen in the bottom row of Figure 5.4, and can be mapped back to the i
and q components.
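The mapping in this example can be sketched as follows. The helper functions are our own, written to the conventions of Figure 5.4: odd-numbered bits go to the i branch, even-numbered bits to the q branch, and a zero bit is coded as −1.

```python
import numpy as np

def qpsk_modulate(bits):
    """Map a bit sequence (in pairs) to complex qpsk symbols i + j*q."""
    levels = 2 * np.asarray(bits, dtype=float) - 1   # 0 -> -1, 1 -> +1
    i, q = levels[0::2], levels[1::2]                # odd/even-numbered bits
    return i + 1j * q

def qpsk_demodulate(symbols):
    """Map received symbols back to bits by the sign of each component."""
    bits = np.empty(2 * len(symbols), dtype=int)
    bits[0::2] = (symbols.real > 0).astype(int)
    bits[1::2] = (symbols.imag > 0).astype(int)
    return bits

data = [1, 1, 0, 0, 0, 1, 1, 0]       # the bit stream from Figure 5.4
symbols = qpsk_modulate(data)
recovered = qpsk_demodulate(symbols)
```

Each symbol has the same magnitude, so only the phase carries the two bits, as in the constellation diagram of Figure 5.3a.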
5.2 Power Amplifier Characterization
The choice of pa is a trade-off between different properties such as output power,
efficiency and linearity, and will depend on the application. If power efficiency
is an important property, such as in handheld devices where it directly affects
the battery time, a lower linearity might be accepted, whereas an audio
amplifier, always connected to the mains, might focus more on the linearity
and gain than on the efficiency. Any number of pas can be cascaded in order to
combine the benefits of each step.
Figure 5.4: The modulated signals in the iq modulation, where the two carrier waves are sinusoidal with a 90◦ phase shift. The odd-numbered bits
encode the in-phase (i) component and the even-numbered bits encode the
quadrature (q) component. The total signal is shown at the bottom, together with the mapping. The digital data transmitted by this signal is
1 1 0 0 0 1 1 0. Tsym is the symbol duration.
5.2.1 Gain
An amplifier is of course supposed to amplify the input signal, and this property
is described by the gain. The gain of an amplifier expresses the relationship between the input and the output [Frenzel, 2003], and is usually described by the
voltage gain, AV ,
AV = Vout /Vin ,    (5.5)

where Vin and Vout are the input and output voltages, respectively. It can also be
expressed by the power gain, AP ,

AP = Pout /Pin ,

where Pin and Pout are the input and output powers, respectively, see Figure 5.5.
The gain is usually expressed in decibels (dB), so that the power gain is

AP = 10 log10 (Pout /Pin ).    (5.6)
5.2.2 Efficiency
Another important property of a pa is the efficiency, which describes the amount
of power needed to perform the amplification. A part of the input power will be
dissipated in the circuit and can be counted as losses. The efficiency of a pa will
Figure 5.5: Amplifier with input and output. The power gain is AP = Pout /Pin .
directly affect the battery time for a cell phone for example, and a high efficiency
is desired.
The output efficiency, η, of a pa is defined as the ratio between the output
power at the fundamental frequency, Pout , and the dc supply power of the last
amplifier stage, PDC , [Cripps, 2006]
η = Pout /PDC ,    (5.7)
and is often denoted drain efficiency (de). Another efficiency measure is the
power added efficiency (pae),
pae = (Pout − Pin )/PDC ,    (5.8)
where PDC now represents the total power consumption of all amplifier stages
constituting the whole pa [Razavi, 1998].
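A small worked example for (5.6)-(5.8), with assumed power levels (not measured values from the thesis):

```python
import math

p_in = 0.01    # input power [W] (assumed)
p_out = 1.0    # output power at the fundamental [W] (assumed)
p_dc = 1.6     # dc supply power [W] (assumed)

gain_db = 10 * math.log10(p_out / p_in)   # power gain, eq. (5.6)
drain_eff = p_out / p_dc                  # drain efficiency, eq. (5.7)
pae = (p_out - p_in) / p_dc               # power added efficiency, eq. (5.8)

print(gain_db, drain_eff, pae)   # 20 dB gain, 62.5% de, 61.875% pae
```

Note that the pae is always somewhat lower than the drain efficiency, since the input power is subtracted in the numerator.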
5.2.3 Linearity
By assigning transmissions different frequency bands, many transmissions can
be done at the same time. For this setup to work, each of these transmissions
must send only in the allotted slot, or channel. A radio transmission is allocated
a frequency band with a certain bandwidth, ωb , around a center frequency, f c ,
where power may be transmitted. Any power falling outside the boundaries will
cause disturbances in the neighboring channels. Broadening of the spectrum can
be caused by, for example, nonlinearities in the pa. So to be practically useful in
radio communications, pas need to be linear. This means that the signal should
be amplified in such a way that the output is an exact replica of the input but
with a larger amplitude, and not be transferred to other frequencies. This is not
possible in practice, and the level of linearity, or rather nonlinearity, is quantified
by measures such as spectral mask, adjacent channel power ratio (acpr) and
error vector magnitude (evm).
Spectral mask
A spectral mask is a nonlinearity measure describing the amount
of power that is allowed to be spread to adjacent frequencies. It is usually specified in decibel to carrier (dBc, the power ratio of a signal to a carrier signal,
expressed in decibels) or in power levels given in dBm (power expressed in dB
with one milliwatt as reference) in a specified bandwidth at defined frequency
Table 5.1: Spectral mask limitations for an edge signal
Offset [kHz]   100   200   250   400   600   1000
Limit [dBc]      0   -30   -33   -54   -60    -60
Figure 5.6: Spectrum at 1.95 GHz for (a) measured output without dpd, (b)
measured output with predistortion (linearization) and (c) the input signal
for a wcdma signal. The measured aclr are printed in gray for the original output signal (without predistortion) and in black for the predistorted
output. The gray shadows represent the passband in which the integration
takes place.
offsets [Fritzin, 2011]. See Table 5.1 for an example of the spectral mask limits
for an edge signal.
Adjacent Channel Power Ratio
The acpr is a measure that, like the spectral
mask, describes the amount of power spread to neighboring channels. It is defined as the power in a passband away from the main signal divided by the power
in a passband within the main signal [Anritsu, 2013]. The power at frequencies
that are not in the main signal is the power transmitted in neighboring channels,
i.e., the distortion caused by nonlinearities. Another measure is the alternate
channel power ratio, which is defined as the ratio between the power in a passband two channels away from the main signal, over the power within the main
signal.
The bandwidths and limits are connected to the standard used (for example
wcdma and lte). For a wcdma signal, the acpr can be calculated by integrating
the spectrum over a bandwidth of ωb = 3.84 MHz at ±5 MHz distance from the
Figure 5.7: Error vector magnitude (evm) and related quantities.
center frequency, as
acpr = ∫_{fc +l·5−1.92}^{fc +l·5+1.92} wcdmaspectrum df / ∫_{fc −1.92}^{fc +1.92} wcdmaspectrum df .    (5.9)
Here, f c is the center frequency in the main signal and l = ±1 for the adjacent and
l = ±2 for the alternate channel power ratio. acpr is also named adjacent channel
leakage ratio (aclr). An example of the aclr can be seen in Figure 5.6.
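The integration in (5.9) can be sketched numerically on a synthetic spectrum. All shapes and levels below are made up for illustration; a real measurement would integrate the estimated spectrum of the transmitted wcdma signal.

```python
import numpy as np

# Frequency axis as offset from the center frequency fc [Hz].
f = np.linspace(-10e6, 10e6, 20001)
df = f[1] - f[0]

# Synthetic spectrum: unit power density in the 3.84 MHz main channel,
# and a small made-up leakage level in the adjacent channels at +/-5 MHz.
spectrum = np.where(np.abs(f) <= 1.92e6, 1.0, 0.0)
spectrum += np.where(np.abs(np.abs(f) - 5e6) <= 1.92e6, 1e-4, 0.0)

def band_power(center):
    """Integrate the spectrum over a 3.84 MHz band around `center`."""
    band = np.abs(f - center) <= 1.92e6
    return np.sum(spectrum[band]) * df

# acpr for l = +1 (adjacent channel), expressed in dB.
acpr_db = 10 * np.log10(band_power(5e6) / band_power(0.0))
```

With the leakage level chosen as 1e-4 of the in-band density, the result is about −40 dB.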
Error Vector Magnitude
The error vector magnitude (evm) is a description of
the quality of a signal with both magnitude and phase, such as the iq signals as
described in Section 5.1. The error vector is defined as the difference between the
ideal signal and the measured signal [Agilent, 2013], see Figure 5.7.
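A common rms formulation of the evm can be sketched as below. The thesis only defines the error vector itself; the normalization by the rms magnitude of the ideal constellation is one standard choice, and the mismatch values are ours.

```python
import numpy as np

ideal = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])    # qpsk reference
measured = ideal * 1.05 * np.exp(1j * 0.02)             # assumed gain + phase error

error_vector = measured - ideal                          # as in Figure 5.7
evm_percent = 100 * np.sqrt(np.mean(np.abs(error_vector) ** 2)
                            / np.mean(np.abs(ideal) ** 2))
```

A 5% gain error combined with a small phase error gives an evm of roughly 5%.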
Gain Compression, am-am and am-pm
At some point, a change in input amplitude does not result in a corresponding change in output amplitude, as illustrated
in Figure 5.8. This phenomenon is called gain compression. This leads to nonlinearities in the output, since different amplitudes of the input will be amplified in
different ways.
Other nonlinearity measures describing the amplitude and phase distortion
are the amplitude modulation to amplitude modulation (am-am) and the amplitude modulation to phase modulation (am-pm). The am-am maps the input
amplitude to the output amplitude (similar to the gain compression graph in Figure 5.8) and deviations from the straight line will result in output distortion. The
am-pm maps the input amplitude to the output phase, where an increasing input
amplitude results in an additional output phase shift [Cripps, 2006].
Figure 5.8: Gain compression due to saturation in an amplifier transistor.
The dashed line represents the ideal operation of the amplifier, while the
solid line is the true output of the pa and a consequence of gain compression.
5.3 Classification of Power Amplifiers
There are many different types of amplifiers, but they can be divided into two basic types: linear and switched amplifiers; see for example Frenzel [2003] and Jaeger and Blalock [2008] for a more thorough description of the different pa classes
and the circuitry to implement them. Classical pas usually assume both the input
and the output to be sinusoidal, which limits the efficiency. If this assumption is
disregarded, higher efficiency can be achieved [Razavi, 1998]. Here, the different
classes are described.
5.3.1 Transistors
An important part of power amplifier implementation is the transistor, and we
will start with a short overview of transistor functionality. A transistor is a device that uses a small signal to control a much larger signal. The two basic types
of transistors are bipolar junction transistors (bjt s) and field-effect transistors
(fets). The structure of the commonly used fets using semiconducting material
has led to the name metal-oxide-semiconductor field-effect transistor (mosfet).
Depending on how the silicon is doped, the fets can be either of p-type (pmos)
or n-type (nmos), and thus have different conduction capabilities with respect
to the applied voltages at the transistor terminals. Doping is the process of intentionally introducing impurities into an extremely pure semiconductor for the
purpose of modulating its electrical properties. Complementary metal-oxidesemiconductor (cmos) is a technology that typically uses complementary and
symmetrical pairs of p-type and n-type mosfets for logic functions.
The fets have three terminals, labeled gate (G), source (S) and drain (D), and a
voltage at the gate controls the current between source and drain, see Figure 5.9.
See for example Jaeger and Blalock [2008] for more insights into the workings
and construction of transistors. For an nmos transistor, a high voltage at the gate
leads to a large current between source and drain, and for a small gate voltage,
there is no current. For a pmos transistor the relations are reversed, and a small
gate voltage leads to a large current between source and drain, and a large gate
Figure 5.9: The symbols of nmos (left) and pmos (right) and the associated
ports. The ports are labeled gate (G), source (S) and drain (D).
A
VDD
Input
LRFC
CDC block
B
Vout , iout
Vin
RL
C
Figure 5.10: Generic Class A/B/C power amplifier. The biasing of the transistor determines the conduction angle of the pa, as illustrated in the amplification of a sinewave input (left). The conduction angles are (from top to
bottom) 360◦ for the Class A, 180◦ for the Class B and 90◦ for the Class C
shown here.
voltage opens the circuit and no current flows. Common uses for transistors are
as amplifiers and switches, depending on the circuitry surrounding them.
5.3.2 Linear Amplifiers
Linear amplifiers provide an amplified replica of the input. The drawback is that
linear amplifiers often require a high power level and provide a rather low efficiency, as they operate far from their maximum output power where the linearity
is limited.
Class A Amplifiers
A Class A amplifier operates linearly over the whole input and output range. It
is said to conduct for 360◦ of an input sine wave, that is, it will amplify for the
whole of the input cycle, see Figure 5.10. Since the device is always conducting,
a lot of power will be dissipated and the maximum achievable output efficiency
is low, only 50%.
Class B Amplifiers
In a Class B amplifier, the device is biased so that it only conducts for half of
the input cycle, i.e., it has a conduction angle of 180◦ , see Figure 5.10. In this
region the amplifier is linear, for the rest of the input cycle it is turned off, and the
efficiency reaches η = π/4 ≈ 78.5%, with η defined in (5.7).
Class B amplifiers are often connected in a push-pull circuit, so that two amplifiers are connected, each of them conducting for half of the cycle, and together
they conduct for the whole 360◦ . The efficiency is still the same, and in theory
this will be a completely linear amplifier. In practice, however, if the biasing of
the two amplifiers is not perfect, this will cause cross-over distortion at the time
of switching between the two amplifiers [Jaeger and Blalock, 2008].
Class AB Amplifiers
The Class AB amplifier uses the same idea as the Class B configuration with two
amplifiers, but the amplifiers are slightly overlapping such that the cross-over
distortion is minimized. Each amplifier thus has a larger conduction angle than
the 180◦ of a Class B amplifier, but less than the full 360◦ of a Class A amplifier.
This reduction of cross-over distortion is at the expense of efficiency.
Class C Amplifiers
Class C amplifiers have a conduction angle smaller than 180◦ , typically between
90◦ and 150◦ , see Figure 5.10. This causes a very distorted output consisting of
short pulses, and the amplifier usually has some form of resonant circuit connected to recover the original sine wave.
5.3.3 Switched Amplifiers
The low efficiency of linear amplifiers is caused by the high power dissipation due
to constant conduction. Switched amplifiers consist of transistors that are either
on (conducting) or off (nonconducting). In the off state (cutoff state), no current
flows so there is (almost) no dissipation. When the transistor is conducting, the
resistance across it is very low, and so is the power dissipation.
The output of a switched amplifier is a square wave, which is passed through
a filter to obtain a sinusoidal signal.
Class D Amplifiers
A Class D amplifier consists of two transistors that alternately are on and off. The
output is a pulse-width modulated (pwm) signal, which can be filtered to obtain
the fundamental sine wave, see Figure 5.11. With ideal switches and ideal series
resonant network (C1 and L1 ) stopping all frequencies but the fundamental tone,
the theoretical maximum efficiency is 100%.
Figure 5.11: Class D power amplifier.
Class E Amplifiers
In a Class E amplifier, only one transistor is used (compared to the two for Class
D). By choosing a suitable load matching network, the drain current and voltage
can be shaped to not overlap each other, making the theoretical efficiency 100%.
5.3.4 Other Classes
There exist many other classes including Class F (a variation of the Class E amplifier) and Class S (a variation of switching amplifier using pulse-width modulation), see for example Frenzel [2003].
5.4 Outphasing Concept
An outphasing amplifier is based on the idea that a nonconstant envelope signal,
with amplitude and phase information, can be decomposed into two constant
envelope signals with phase information only. The two signals can then be amplified separately by two nonlinear and highly efficient amplifiers and recombined,
as presented in Cox [1974] and Chireix [1935]. The output signal will be amplitude and phase modulated, just like the input signal. Another name for the
outphasing concept is linear amplification with nonlinear components (linc).
The outphasing concept is illustrated in Figure 5.12. Here, a nonconstant
envelope-modulated signal
s(t) = r(t)e jα(t) = rmax cos(ϕ(t))e jα(t) ,    0 ≤ r(t) ≤ rmax    (5.10)
where rmax is a real-valued constant, and α and ϕ are angles, is used to create two
constant-envelope signals, s1 (t) and s2 (t). This is done in the signal component
Figure 5.12: Outphasing concept and signal decomposition.
Figure 5.13: Illustration of ideal power combining (the plus sign) of the two
constant-envelope signals. The signals are amplified separately by two nonlinear amplifiers, A1 and A2 , and recombined to an amplified replica of the
input s(t).
separator (scs) in Figure 5.13 as
s1 (t) = s(t) + e(t) = rmax e jα(t) e jϕ(t) ,
s2 (t) = s(t) − e(t) = rmax e jα(t) e −jϕ(t) ,
e(t) = j s(t) √(rmax² /r²(t) − 1).    (5.11)
The outphasing signals s1 (t) and s2 (t) contain the original signal, s(t), and a
quadrature signal, e(t), and are suitable for amplification by switched amplifiers
like Class D/E. By separately amplifying the two constant-envelope signals and
combining the outputs of the two individual amplifiers as in Figure 5.13, the
output signal is an amplified replica of the input signal.
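The decomposition (5.11) is easy to verify numerically. The sample value below is arbitrary; the check confirms that both branch signals have constant envelope rmax and that they recombine to 2s(t), as indicated in Figure 5.12.

```python
import numpy as np

r_max = 1.0
s = 0.4 * np.exp(1j * 0.3)      # one sample with r(t) = 0.4, alpha(t) = 0.3

e = 1j * s * np.sqrt(r_max**2 / np.abs(s)**2 - 1)   # quadrature signal, (5.11)
s1 = s + e
s2 = s - e

# Both outphasing signals lie on the circle of radius r_max, and the
# quadrature parts cancel in the (ideal) combiner: s1 + s2 = 2*s.
print(abs(s1), abs(s2), s1 + s2)
```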
In theory, the two quadrature signals will cancel each other perfectly in the
combiner, but in practice, implementation imperfections and asymmetries will
cause distortion. Letting g1 and g2 denote two positive real-valued gain factors,
in each branch s1 (t) and s2 (t), and δ denote a phase mismatch in the path for s1 (t),
Figure 5.14: The bandwidth of the quadrature signal e(t), and thus the outphasing signals s1 (t) = s(t) + e(t) and s2 (t) = s(t) − e(t), is much larger than
that of the original signal s(t). Any remainders of the quadrature signal
caused by pa imperfections will thus lead to degraded aclr and reduced
margins to the spectral mask. From Fritzin [2011].
it is clear from
y(t) = g1 e jδ s1 (t) + g2 s2 (t) = [g1 e jδ + g2 ]s(t) + [g1 e jδ − g2 ]e(t),    (5.12)
that besides the amplified signal, a part of the quadrature signal remains. As
the bandwidth of the quadrature signal, e(t), is larger than the original signal,
s(t), see Figure 5.14, this would lead to a degraded aclr and reduced margins to
the spectral mask [Birafane and Kouki, 2005, Birafane et al., 2010, Romanò et al.,
2006].
The phase and gain mismatches between s1 (t) and s2 (t) must be minimized
in order not to allow a residual quadrature component to distort the spectrum or
limit the dynamic range (dr),
cDR = 20 log10 (max(|y(t)|)/min(|y(t)|)) = 20 log10 (|g1 + g2 |/|g1 − g2 |),    (5.13)
of the pa [Birafane and Kouki, 2005]. The dr defines the ratio of the maximum
and minimum output amplitudes the pa can achieve. However, all phases and
amplitudes within the dr can be reached by changing the phases of the outphasing signals s1 (t) and s2 (t).
Since an outphasing amplifier only uses two states (on or off), it will not experience problems of conventional pas such as gain compression (see Section 5.2.3), where the peak amplitudes are clipped. Instead, the smallest amplitudes will not be properly amplified in outphasing pas, since any mismatch
of the amplifier gains will make it impossible for s1 (t) and s2 (t) to cancel each
other, compare Figures 5.12 and 5.15. Thus, the dr in an outphasing pa limits
the spectral performance when amplifying modulated signals.
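Equations (5.12) and (5.13) can be illustrated with assumed mismatch values. A 2% gain mismatch and a 0.5 degree phase mismatch (both made up) leave a residual of the wideband quadrature signal and bound the dynamic range to about 40 dB.

```python
import numpy as np

g1, g2 = 1.02, 1.00          # branch gains (assumed mismatch)
delta = np.deg2rad(0.5)      # phase mismatch in the s1 path [rad] (assumed)

# Coefficients of s(t) and e(t) in the combined output (5.12):
signal_term = g1 * np.exp(1j * delta) + g2
residual_term = g1 * np.exp(1j * delta) - g2   # nonzero -> e(t) leaks through

# Dynamic range (5.13), evaluated for the gain mismatch alone:
dr_db = 20 * np.log10(abs(g1 + g2) / abs(g1 - g2))
```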
Figure 5.15: The outphasing concept when the gain factors g1 and g2 are not
identical. In the left figure, the outphasing signals are parallel and the resulting output is the maximal one. In the right figure, the signals with nonidentical gain
factors cannot cancel each other, and a residual is left. The dynamic
range (the ratio between the maximal and minimal amplitudes, see (5.13))
of the power amplifier will determine the limit of small amplitude clipping.
As the output of a Class D stage can be considered as an ideal voltage source
whose output voltage is independent of the load [Yao and Long, 2006], i.e., the
output is connected to either VDD or GND, the constant gain approximations g1
and g2 are appropriate and make Class D amplifiers suitable for nonisolating
combiners like transformers [Xu et al., 2010]. The implementation of the combiner (the plus sign in Figure 5.13) can be done in a multitude of ways, see for
example Fritzin [2011] and the references therein.
5.5 Linearization of Power Amplifiers
The increased use of nonlinear amplifiers in an attempt to improve efficiency also
requires new linearization methods. As described in Chapter 3, there are different approaches to linearization. Since it is desirable to work with the original
signal, and not with the amplified output of the pa, a prefilter is desired, also
called a predistorter [Kenington, 2000]. Originally, these predistorters consisted
of small analog circuits, but now they are often implemented in a look-up table (lut) or a digital signal processor (dsp). Such an implementation is called
a digital predistorter (dpd). The idea behind predistortion is presented in Figure 5.16. The predistortion can be divided into two parts, the construction of the
predistorter functions and the implementation of the obtained dpd.
The implementation of predistortion methods entails further considerations,
and as concluded in Guan and Zhu [2010], “different methodologies or implementation structures will lead to very different results in terms of complexity and
cost from the viewpoint of hardware implementation”. An implementation using
a look-up table will grow quickly with the resolution of the dpd, and thus needs
a large chip area, but avoids the necessity of calculations needed in a polynomial
implementation (leading to a larger power consumption). The implementation
issues have not been considered in this thesis.
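As an illustration of the look-up-table approach, the sketch below inverts an assumed memoryless am-am characteristic (a tanh soft-clipping model of our own choosing, not a pa from the thesis) on a grid, and uses interpolation as the table look-up.

```python
import numpy as np

def pa(x):
    """Assumed memoryless pa with a compressive am-am characteristic."""
    return np.tanh(x)

# Build the lut by tabulating the am-am curve: for each output amplitude,
# store the input amplitude that produced it.
in_grid = np.linspace(0.0, 2.0, 1001)      # candidate input amplitudes
out_grid = pa(in_grid)                     # corresponding output amplitudes

def predistort(x):
    """Look up the input amplitude that yields the desired output x."""
    return np.interp(x, out_grid, in_grid)

x = np.linspace(0.0, 0.9, 50)      # desired (linear) output amplitudes
y = pa(predistort(x))              # predistorter + pa in cascade
```

The cascade reproduces the desired amplitudes almost exactly, but note that the table can only reach outputs below the pa's saturation level, and that a finer grid trades chip area for accuracy, as discussed above.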
Figure 5.16: The main idea behind predistortion is to compensate in advance for the
nonlinearities and dynamics of the system, so that the overall system is linear.
5.5.1 Volterra series
The theory of p-th order Volterra inverses, introduced in Section 3.2.3, allows
for the simpler postinverse (see Section 4.1) to be calculated and then used as
the desired preinverse. This is used in the predistortion, or linearization, of for
example rf power amplifiers. See also Section 3.2.2 for a discussion on preinverse
versus postinverse.
Since Volterra series consist of an infinite sum of integrals, the use of general
Volterra theory is rather limited. To reduce the complexity, a pruned, or truncated,
version of the Volterra series is often used, where the memory length and/or
the order of nonlinearity is limited. This heavily reduces the complexity of the
sum, but the number of terms still grows polynomially in the memory
length and exponentially in the order of nonlinearity, limiting the practical use of Volterra series.
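The growth referred to above can be illustrated by counting coefficients: a symmetric discrete-time kernel of order p over the taps u(k), ..., u(k − M) has C(M + p, p) distinct coefficients (a standard multiset count; the function below is our own illustration).

```python
from math import comb

def volterra_terms(memory, order):
    """Total number of kernel coefficients up to the given nonlinearity
    order, for a symmetric truncated Volterra series with taps 0..memory."""
    return sum(comb(memory + p, p) for p in range(1, order + 1))

# Memory length 1 and order 7 (as in the simulated example cited below)
# stays small, but the count explodes as both grow.
print(volterra_terms(1, 7))     # 35
print(volterra_terms(10, 7))    # tens of thousands of coefficients
```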
Using pruned Volterra series as a means for modeling and predistortion of
high-power amplifiers is presented in Tummla et al. [1997] and is shown to work
for simulated data with memory length of 1 and nonlinearity order of 7. In Zhu
et al. [2008], pruning techniques have been applied to drastically reduce the number of terms in the (discrete time) Volterra series and the method was applied to
experimental data. Here, a memory length of 2 and an order of nonlinearity of
11 was used. Volterra based predistorters have also been implemented in field
programmable gate array (fpga), shown in Guan and Zhu [2010]. An fpga is
a circuit that can be configured by the user and is used to implement complex
digital computations.
5.5.2 Block-oriented Models
Since general nonlinear systems are very difficult to model, a common assumption is that the dynamics are linear, and that the nonlinearity is static, which
gives a block-oriented model. This will be the case when there is, for example, a
nonlinear actuator (due to saturation) in a control application.
A Hammerstein system consists of a static nonlinear system followed by a
linear dynamic system and in a Wiener system, the static nonlinearity is at the
output of the linear dynamics, see also Example 3.3. One way to broaden the use
of the Hammerstein system is to use a more general parallel Hammerstein system, where multiple Hammerstein systems are branched. This structure is often
5.5
Linearization of Power Amplifiers
63
used in modeling of power amplifiers, where a basic assumption is that the main
part of the signal is amplified in a nonlinear way through the pa, and distortions
are added to the output. The number of branches in the parallel Hammerstein
structure determines the complexity of the model.
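A parallel Hammerstein model of the kind described above is linear in its parameters, so it can be estimated by least squares. The sketch below is our own illustration (branch powers, memory length and the simulated "true" system are all assumptions): each branch is a static power nonlinearity followed by a short fir filter, and the output is regressed on the delayed branch signals.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(500) * 0.5        # excitation signal (assumed)

def regressors(u, orders=(1, 3), memory=2):
    """Stack one column per (branch power, fir tap) pair."""
    cols = []
    for p in orders:
        branch = u * np.abs(u) ** (p - 1)          # static nonlinearity
        for m in range(memory + 1):                # fir taps (memory)
            cols.append(np.concatenate((np.zeros(m), branch[:len(u) - m])))
    return np.column_stack(cols)

Phi = regressors(u)
theta_true = np.array([1.0, 0.2, 0.05, -0.3, 0.1, 0.0])   # assumed system
y = Phi @ theta_true                                      # simulated pa output

theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # least squares
```

With noise-free data the estimate recovers the assumed parameters exactly; the number of branches and taps directly sets the number of columns, i.e., the complexity of the model.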
In Gilabert et al. [2006], a Wiener model of the pa has been used in combination with a Hammerstein structure predistorter, with memoryless nonlinearities
followed by linear blocks using finite impulse response (fir) and infinite impulse
response (iir) filters. The implementation of a Hammerstein predistorter in fpga
technique is discussed in Xu et al. [2009] using a wcdma input signal.
5.5.3 Outphasing Power Amplifiers
In outphasing pas, there is no linearity between the individual outphasing signals, and any gain or phase mismatch between the two signal paths will cause
spectral distortion, see for example Birafane and Kouki [2005] and Romanò et al. [2006]. Typical requirements are approximately 0.1-0.5 dB in gain matching and 0.2°-0.4° in phase matching, which is very hard to achieve [Zhang et al., 2001].
The gain mismatch could be eliminated by adjusting the voltage supplies in
the output stage [Moloudi et al., 2008], but this would require an extra, adjustable
voltage source on the chip, which is undesirable. For the outphasing amplifier, all
amplitudes (within the dynamic range) and phases can be achieved by tuning the
outphasing signals s1 (t) and s2 (t), see Figures 5.12 and 5.13. This can be used in
the predistortion, so that the two signals are adjusted in a way to compensate for
gain errors and possibly other unwanted effects in the pa.
Earlier predistortion methods for outphasing pas compensate for the gain and
phase mismatches in the signal branches. In Myoung et al. [2008], a mismatch
detection algorithm has been evaluated using four test signals. These two-tone
signals are used to calculate the amplitude and phase mismatches of the amplifier
using a closed-form expression, later used for predistortion. Chen et al. [2011]
presents a signal component separator (scs) implementation with a built-in compensation for branch mismatches in phase and amplitude. The scs performs the
decomposition of the original signal s(t) into the outphasing signals s1 (t) and
s2 (t), (5.11). By taking gain and phase mismatches into account, the scs has a
built-in predistorter.
Helaoui et al. [2008] discuss the impact of the combiner on the outphasing pa
performance. The choice of combiner is a trade-off between linearity and power
consumption. Nonlinearities can be introduced by a nonisolated combiner such
that the output distortion depends on the input power. These nonlinearities were
successfully reduced by the use of a predistorter.
The solutions in Myoung et al. [2008] and Chen et al. [2011] consider the
gain mismatch between the two branches and compute the ideal phase compensation when the outputs are approximated as two signals with constant amplitudes. This is possible when there is no interaction between the amplifier stages.
In this thesis, the outputs are still considered as two constant amplitude signals
generating amplitude and phase distortion. Furthermore, an amplitude dependent phase distortion, occurring due to the interaction and signal combining of
64
5
Power Amplifiers
the amplifiers’ outputs, is also considered.
Parts of the results in Chapters 6-8 can also be found in Fritzin et al. [2011a]
and Jung et al. [2013]. The nonconvex algorithm, presented in Fritzin et al.
[2011a], has in Landin et al. [2012] been developed to include a method for finding good initial values to the nonlinear optimization. However, the basic problem of nonconvexity has not been solved there and local minima still risk posing
problems in the optimization. In Jung et al. [2013], the nonconvex formulation
has been reformulated into a convex method. In this method, the pa model is
estimated in a least-squares setting and an analytical calculation of the predistorter is used. Furthermore, a theoretical characterization of an outphasing pa is
presented and forms the basis for an ideal dpd. This characterization has also been
used to obtain an estimate thereof.
6 Modeling Outphasing Power Amplifiers
In this chapter, one way of modeling the outphasing power amplifier using
knowledge of the physical structure of outphasing amplifiers is presented. It
consists of a new decomposition of the outphasing signals making use of the
knowledge of the uneven amplification in the two branches, as well as a way to
incorporate the possible nonlinearities in the branches.
Despite the fact that the pa is analog and the baseband model is in discrete
time, the notation t is used to indicate the dependency on time. Based on the
context, t may thus be a continuous or discrete quantity and denote the time or
the time indexation. For notational convenience, the explicit dependency on time
will be omitted in parts of this chapter and the following one.
6.1 An Alternative Outphasing Decomposition
As mentioned in Chapter 5, the pa output signal y(t) is a distorted version of
the input signal. The nonlinearities are due to (i) the nonidentical gain factors
g1 and g2 , and (ii) nonlinear distortion in the amplifier branches. First, a novel
decomposition will be described, accounting for the nonidentical gain factors g1
and g2 , followed by a description of how these can be used in the modeling of
the outphasing power amplifier. Since it is desired that the predistorter should
invert all effects of the pa except for the gain, the signals can be assumed to be
normalized such that
max_t |s(t)| = max_t |y(t)| = 1.   (6.1)
As described in Figure 5.12, the amplitude information of the original input
signal s(t) can be found in the angle between s1 (t) and s2 (t). Let
Δψ(s1, s2) = arg(s1) − arg(s2)   (6.2)
Figure 6.1: (a) Decomposition of the input signal s(t) into s1 (t) and s2 (t)
when g1 = g2 = g0 = 0.5 and into s˜1 (t) and s˜2 (t) when decomposed as
in (6.3) with nonidentical gain factors g1 and g2 . (b) Trigonometric view
of the decomposition of s(t) using nonidentical gain factors. Note that
|s˜k | = gk , k = 1, 2.
denote the phase difference of the outphasing signals s1 (t) and s2 (t). Since the
amplitude of the nondecomposed signal in the outphasing system is determined
by Δψ (s1 , s2 ), this difference can be used instead of the actual amplitude in many
cases. For notational convenience, Δψ will be used instead of Δψ (s1 , s2 ), unless
specified otherwise. Here, all phases are assumed unwrapped.
To describe the distortions caused by the imperfect gain factors, consider
again the decomposition of s(t) into s1 (t) and s2 (t) in (5.11). This is only valid
when g1 = g2 but we can use an alternative decomposition of s(t) into s˜1 (t) and
s˜2 (t) such that
s̃1(t) + s̃2(t) = s(t),   (6.3a)
|s̃k| = gk,  k = 1, 2,   (6.3b)
and
arg(s̃1) ≥ arg(s̃2).   (6.3c)
Assuming knowledge of g1 and g2 = 1 − g1 and given s(t), the signals s˜1 (t) and
s˜2 (t) can be computed from (6.3). Let
b1 = arg(s̃1) − arg(s)  and  b2 = arg(s) − arg(s̃2)
denote the angles between the decomposed signals and s(t) as shown in Figure 6.1a.
Figure 6.1b shows that the decomposition can be viewed as a trigonometric
problem and application of the law of cosines gives
g2^2 = g1^2 + |s|^2 − 2 g1 |s| cos(b1)   (6.4)
and
g1^2 = g2^2 + |s|^2 − 2 g2 |s| cos(b2).   (6.5)
The angles b1 and b2 that define s˜1 (t) and s˜2 (t) can be computed from these expressions and can be viewed as functions of Δψ since |s| = rmax cos(Δψ /2). This
means that the angles

ξ1(Δψ) = arg(s̃1) − arg(s1) = b1 − (1/2)Δψ   (6.6)

and

ξ2(Δψ) = arg(s̃2) − arg(s2) = (1/2)Δψ − b2   (6.7)

can also be viewed as functions of Δψ.
When the goal is to model the phase distortions in the two branches, this alternative way of defining the decomposition reflects the physical behavior better
than the standard outphasing decomposition in (5.11). The output y(t) can be
decomposed in the same way to y1 (t) and y2 (t), taking the gain factors g1 and g2
into account.
6.2 Nonconvex PA Model Estimator
A first step on the way to model the outphasing pa is to observe that although
the two branches are identical in theory, once implemented in hardware this will
not be the case. Since the signals s1 (t) and s2 (t) are amplified by two different
amplifiers, there might be a small amplification difference resulting in a gain
offset between these signals, as well as a time delay stemming from the fact that
s1 (t) and s2 (t) take different paths to the power combiner. With this insight, a
first model structure with a gain mismatch between g1 and g2 and a phase shift δ
in one branch is proposed. This leads to a model structure described by
y(t) = g1 e^{jδ} s1(t) + g2 s2(t),   (6.8)
where g1 , g2 and δ are real-valued constants.
When adding more complex behavior to the model structure, the structure of
the physical pa must still be kept in mind. The separation of the two branches
is still valid, but each branch can be affected by other factors than the gain difference and possible phase shift. As the amplitudes of the outphasing signals are
fixed, a phase dependent distortion in each branch is proposed.
To model an amplitude dependent phase shift while keeping in mind the
constant amplitude of the signals s1 (t) and s2 (t), a model structure with an exponential function can be used. An amplitude-dependent phase distortion in
yk (t), k = 1, 2 (the two amplifier branches) can be written as
yk(t) = gk e^{j fk(Δψ)} sk(t),  k = 1, 2,   (6.9a)
y(t) = y1(t) + y2(t),   (6.9b)
as in Figure 6.2. Here, f 1 and f 2 are two real-valued functions describing the
phase distortion
arg(yk) − arg(sk) = fk(Δψ),  k = 1, 2,   (6.10)
Figure 6.2: A schematic picture of the amplifier branches setup. Note that
the functions f k , k = 1, 2, are not functions of the input to the block only but
are used to show the general functionality of the pa with the separation of
the two branches.
in each signal path. Furthermore, g1 and g2 are the gain factors in each amplifier
branch. Hence, an ideal pa would have f 1 = f 2 = 0 and g1 = g2 = g0 and any
deviations from these values will cause nonlinearities in the output signal and
spectral distortion as previously concluded.
The functions f1 and f2 describing the phase distortion in the separate branches can be parameterized by arbitrary basis functions. Here, polynomials
f̂k = p(ηk, Δψ) = Σ_{i=0}^{n} ηk,i Δψ^i,  k = 1, 2,   (6.11)

where

ηk = [ηk,0  ηk,1  . . .  ηk,n]^T,
have been used as parameterized versions of the functions f k , motivated by the
Stone-Weierstrass theorem, see Rudin [1976, Theorem 7.26].
The model parameters in the given model structure are estimated by minimizing a quadratic cost function [Ljung, 1999] as in
θ̂ = arg min_θ V(θ),   (6.12)

V(θ) = Σ_{t=1}^{N} |y(t) − ŷ(t, θ)|²,   (6.13)

with

ŷ(t, θ) = g1 e^{j p(η1, Δψ(s1,s2))} s1(t) + g2 e^{j p(η2, Δψ(s1,s2))} s2(t),   (6.14)

where θ = [g1  g2  η1^T  η2^T]^T ∈ R^{2n+4}, y(t) is the measured output data and ŷ(t, θ) is the modeled output. The model (6.14) can be compared to the structure (6.9), where y(t) = g1 e^{j f1(Δψ)} s1(t) + g2 e^{j f2(Δψ)} s2(t). This structure leads to a
nonlinear and nonconvex optimization problem, so the minimization algorithm might find a local optimum instead of a global one. In order to obtain a good minimum in a nonconvex optimization problem, it is essential to have good initial
values, and one way to obtain these is presented in Landin et al. [2012]. For
further discussions on convexity and nonconvexity, see Section 6.5.
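To make the structure of the optimization concrete, the cost (6.12)-(6.14) can be sketched as below. This is an illustrative reimplementation, not the thesis code; the helper names and the use of numpy are assumptions, and a local solver (e.g. scipy.optimize.minimize applied to `cost`) would be one way to attempt the minimization, with the caveats on local minima discussed above.

```python
import numpy as np

def delta_psi(s1, s2):
    """Unwrapped phase difference of the outphasing signals, (6.2)."""
    return np.unwrap(np.angle(s1)) - np.unwrap(np.angle(s2))

def model_output(theta, s1, s2, n):
    """Modeled PA output (6.14); theta = [g1, g2, eta1 (n+1), eta2 (n+1)]."""
    g1, g2 = theta[0], theta[1]
    eta1, eta2 = theta[2:n + 3], theta[n + 3:]
    d = delta_psi(s1, s2)
    f1 = np.polyval(eta1[::-1], d)   # polynomial p(eta1, dpsi), (6.11)
    f2 = np.polyval(eta2[::-1], d)
    return g1 * np.exp(1j * f1) * s1 + g2 * np.exp(1j * f2) * s2

def cost(theta, s1, s2, y, n):
    """Quadratic cost V(theta) in (6.13); nonconvex in theta."""
    return np.sum(np.abs(y - model_output(theta, s1, s2, n)) ** 2)
```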
Here, a model of the pa was estimated by minimizing a quadratic cost function measuring the difference between the measured and modeled output signal. This estimation problem involves solving a nonconvex optimization problem. However, using the knowledge of the structure of the outphasing amplifier,
there is an alternative way that essentially only involves solving standard least-squares problems, presented in the next section.
6.3 Least-squares PA Model Estimator
The output distortions originate both from imperfect gain factors and nonlinearities in the amplifiers. Once the gain factor impact has been accounted for, the
amplifier nonlinearities can be modeled. This means that the modeling optimization problem can also be rewritten as a separable least squares (sls) problem,
also presented in Jung et al. [2013]. A separable least-squares problem is one where one set of parameters enters the model linearly and another set nonlinearly. Given
the nonlinear parameters, the linear part can be solved for efficiently, leaving a
nonlinear problem of a lower dimension [Ljung, 1999]. See also Section 2.5 for a
short introduction to sls problems.
Often, the minimization is first carried out for the linear part, after which the nonlinear parameters are solved for in a problem of reduced dimension. Here, the idea is to use knowledge of the gain factors to
make a nonlinear transformation of the data using the decomposition (6.3). Once
this decomposition is done, the minimization can be rewritten as a least-squares
(ls) problem in the phase distortion in the two branches. This is not the usual sls
method since it involves a nonlinear transformation of the data, but the basic idea
of separating out the nonlinear parameters to obtain a ls problem still applies.
We will here explore two ways of estimating the gain factors g1 and g2 . One
is based on the dynamic range of the pa and the other is based on a parameter
gridding of possible values of g1 and g2 .
Assuming the gain factors to be known, we know what the phases of the outputs from the two outphasing branches must be in order for the two signals to
sum up to the measured output y(t). It is now possible to decompose the output y(t) into y1 (t) and y2 (t), using the decomposition in Section 6.1. What is left
to determine is the phase distortion in the branches, described by the functions
f k . Since the gain factor influence is handled by the alternative decomposition of
y(t), the phase distortion is now described by the difference between the phase
of the input sk (t) and the output yk (t), k = 1, 2 and this can be formulated as a
least-squares problem.
Consider first the two gain factors g1 and g2 = 1 − g1 , where the relation
between them comes from the normalization (6.1). Let
g1 = g0 ± Δg,   g2 = g0 ∓ Δg,   (6.15)
where Δg ≥ 0 represents the gain imbalance between the amplifier stages and
g0 = 0.5. Inserting (6.15) into (5.13) gives
cDR = 20 log10 (g0 / Δg).   (6.16)
Hence, the imbalance term Δg can be computed as
Δg = g0 · 10^{−cDR/20},   (6.17)
making it possible to find approximations of g1 and g2 from the dynamic range of
the output signal. The value of cDR can be estimated from measurements as the
ratio between the maximum and minimum output amplitudes. The estimate is
noise sensitive, but this can be handled by averaging multiple realizations. These
approximations are valid for input signals with large peak to minimum power
ratios, like wcdma and lte, where the pa generates an output signal including
its peak and minimum output amplitudes, i.e., its full dynamic range. If this
is not fulfilled or the noise influence is too large, an alternative approach is to
evaluate a range of values of g1 and g2 = 1 − g1 and then solve the pa modeling
problem for each pair of gain factors, as in the usual sls approach.
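A sketch of the dynamic-range based gain estimate, (6.16)-(6.17), assuming measurements normalized as in (6.1); the function name is hypothetical, and in practice the output should first be averaged over several realizations, since the estimate only uses the extreme amplitudes.

```python
import numpy as np

def gains_from_dr(y, g0=0.5):
    """Estimate the gain factors from the output dynamic range.
    c_DR is taken as the ratio between the maximum and minimum output
    amplitudes; Delta_g then follows from (6.17). Illustrative sketch."""
    r = np.abs(y)
    c_dr = 20.0 * np.log10(r.max() / r.min())   # dynamic range, cf. (5.13)
    dg = g0 * 10.0 ** (-c_dr / 20.0)            # gain imbalance, (6.17)
    # (6.15) leaves the sign assignment open, so both pairings are
    # returned and each gives one LS problem to solve.
    return (g0 + dg, g0 - dg), (g0 - dg, g0 + dg)
```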
Once the gain factors have been determined, s(t) can be decomposed into s˜1 (t)
and s˜2 (t), and y(t) into y1 (t) and y2 (t) using (6.3) to (6.5). Furthermore, the standard outphasing decomposition of s(t) into s1 (t) and s2 (t) as in (5.11) will be used
in the sequel.
Since the gain factor mismatch has been accounted for, it is now possible to
determine the impact of the nonlinearities on the two branches. The phase distortion in each signal path caused by the amplifiers can thus be modeled from
measurements of s(t) and y(t). Here, polynomials
p(ηk, Δψ) = Σ_{i=0}^{n} ηk,i Δψ^i,  k = 1, 2,
have been used as parameterized versions of the functions f k , as in (6.11). Estimates η̂k,i of the model parameters ηk,i have been computed by minimizing a
quadratic cost function, i.e.,
η̂k = arg min_{ηk} Vk(ηk),  k = 1, 2,   (6.18)

where

Vk(ηk) = Σ_{t=1}^{N} ( arg(yk(t)) − arg(s̃k(t)) − p(ηk, Δψ(s1(t), s2(t))) )²,   (6.19)

and

ηk = [ηk,0  ηk,1  . . .  ηk,n]^T.
The cost function (6.19) can be motivated by the fact that the true functions f k
satisfy (6.10) when the amplifier is described by (6.9). The minimizations of V1 and V2
are standard least-squares problems, which guarantees that the global minimum
will be found [Ljung, 1999].
Once the ls problem is solved for each setup of g1 and g2 , the problem of
finding the best setup is now reduced to a one dimensional (possibly nonconvex)
optimization problem over g1 (g2 = 1 − g1 ), which is much easier to solve than
the original, multidimensional problem. A problem this small can be solved at a
small computational cost.
The parameter estimates η̂k define function estimates
f̂k(z) = p(η̂k, z),  k = 1, 2,   (6.20)
that, together with the gain factor estimates ĝ1 and ĝ2, describe the power amplifier behavior. The different steps are also described in Part A – Estimation of pa
model in Algorithm 1, page 91.
The alternative decomposition described in Section 6.1 depends on the gain
factors g1 and g2 via a nonlinear relation, but with these given, the problem is
reduced to a ls-problem in the phase as in (6.19). If the gain factor estimation
is done using the dr as in (6.15) and (6.17),
this will result in two ls problems to solve, and gridding of g1 will result in (gmax − gmin)/pM + 1 ls problems. The values gmin and gmax bound the values of g1 and g2 that one wants to evaluate, and pM is the precision, so that g1 ∈ [gmin, gmin + pM, . . . , gmax] and g2 = 1 − g1. Compare
to Algorithm 1, page 91, for notation. This is not the standard sls method, since
a nonlinear transformation of the data is done before solving the ls problem, but
the separation of the linear and nonlinear parameters applies. This separation reduces the optimization to a number of ls problems and a nonlinear optimization
in only one dimension, g1 (g2 = 1 − g1 due to the normalization (6.1)). This is
clearly a reduction from the nonlinear optimization in 2n + 4 dimensions of the
original problem.
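Once y(t) has been decomposed into y1(t) and y2(t), each branch polynomial fit (6.18)-(6.19) is a linear least-squares problem. A minimal numpy sketch (illustrative only; the helper name and regressor construction are assumptions):

```python
import numpy as np

def fit_branch_phase(yk, sk_tilde, d, n):
    """Solve (6.18)-(6.19) for one branch: fit eta_k so that
    arg(yk) - arg(sk_tilde) ~ p(eta_k, dpsi) in the least-squares sense.
    d is the (unwrapped) phase difference dpsi for each sample."""
    target = np.unwrap(np.angle(yk)) - np.unwrap(np.angle(sk_tilde))
    Phi = np.vander(d, n + 1, increasing=True)   # columns 1, d, ..., d^n
    eta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return eta
```

Solving this for each candidate pair (g1, g2) and keeping the pair with the smallest residual reproduces the one-dimensional search over g1 described above.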
6.4 PA Model Validation
As an evaluation of the different approaches presented above, the models have
been compared. Figures 6.3-6.6 present the amplitude and phase of the measured output and the model output. The amplitude error |y − ŷ| and the phase
error arg(y) − arg(ŷ) are also included. The first simple model in (6.8), using only
the gain factors g1 and g2 and a phase shift δ, is presented in Figure 6.3. The more
complex model structure (6.14) is presented in Figures 6.4, 6.5 and 6.6, using the
different modeling methods. The model obtained by the nonconvex approach
as in (6.12)-(6.14) is presented in Figure 6.4. The ls method using (6.18)-(6.19)
and the dynamic range to obtain the gain factors is presented in Figure 6.5. In
Figure 6.6, the ls method using gridding of g1 over a range of values and then
determining the best fit is presented.
The more complex models perform very well, and rather similarly. This is
easier to see in Figure 6.7, where the errors for the different modeling methods
are plotted together. Though the models all perform well, there are still errors.
These errors are largest where the input amplitude is small, such as around time
Table 6.1: pa Model Validation

Method                               g1       g2       |y − ŷ|₂², (6.13)
Delay only, model structure (6.8)    0.4911   0.5089   62.99
Nonconvex                            0.4986   0.5014   0.9985
ls, grid                             0.50     0.50     1.119
ls, dr                               0.4994   0.5006   0.9781
152 μs and 156.5 μs. The result of the dr ls model is also presented in an iq plot
in Figure 6.8 where the signals are plotted in the complex plane. Also in this
plot, the model shows a very good behavior, with a slightly worse performance
for small amplitudes.
The gain factor estimates are presented in Table 6.1 together with the cost
function (6.13) for the different methods. As seen in the rightmost column, where |y − ŷ|₂², (6.13), is presented, the added model complexity with nonlinearities gives a large improvement in the model fit. The ls method using dr and the nonconvex method achieve rather similar results, with the gridding ls method slightly behind. The results of the nonconvex method depend on the number of iterations used in the optimization.
Except for the first simple model, the other methods perform very similarly
with a very good fit to validation data. This clearly shows that the nonlinear extension to the model has a significant impact on the model properties. This also
means that the choice of method comes down to other considerations than the
fit. The lack of guarantees of convergence to a global minimum of nonconvex
optimization methods is a reason to avoid the method described in Section 6.2.
If the ls method is chosen, this also entails the choice of gridding or using the
dynamic range. Gridding is more robust against noise, since the dr estimation is
done using only two measurements (the one with minimal and the one with maximal amplitude), so noise at either of these data points will have a large impact.
A drawback with gridding is the risk of missing the “true” value, if the precision
p M (difference in g1 and g2 ) is chosen too large. A decreasing p M , on the other
hand, will increase the number of ls problems that need to be solved. Benefits
and drawbacks for the dynamic range method are the opposite.
Figure 6.3: Model validation of the model produced using the first structure (6.8), with gain factors g1 and g2 and a phase shift δ only. The upper
plot shows the amplitude of the measured signal (solid pink), the model output (dashed blue) and the error (black). The lower plot shows the phase.
Figure 6.4: Model validation of the model produced using the original, nonconvex, optimization in (6.12)-(6.14). The upper plot shows the amplitude
of the measured signal (solid pink), the model output (dashed blue) and the
error (black). The lower plot shows the phase.
Figure 6.5: Model validation of the model produced using the convex method in (6.18)-(6.19), where the dynamic range has been used to determine the gain factors as in (6.17) and (6.15). The upper plot shows the amplitude
of the measured signal (solid pink), the model output (dashed blue) and the
error (black line). The lower plot shows the phase.
Figure 6.6: Model validation of the model produced using the convex method in (6.18)-(6.19), where g1 has been gridded in [gmin, gmax] = [0.4, 0.6]
with precision pM = 0.005. The upper plot shows the amplitude of the
measured signal (solid pink), the model output (dashed blue) and the error
(black line). The lower plot shows the phase.
Figure 6.7: A summary of the model errors of the different models. The
upper plot shows the amplitude error |y − ŷ| and the lower plot shows the
phase errors arg(y) − arg(ŷ). The simple model (6.8) is plotted in black, the
ls methods using dr in solid pink and gridding in dashed blue. The model
obtained by the nonconvex method is plotted in a green dashed line. The
three models describing a nonlinear behavior perform very well and in a
very similar way, as seen in the figure where the lines are almost on top of
each other.
Figure 6.8: iq plot (imaginary part, Q, vs real part, I) of the measured signal
(solid pink) and the model output (dashed blue) and the error y − ŷ (black).
The model was estimated by the ls method using dr to estimate g1 and g2 .
The zoom-in in the upper right corner is a ten times amplification of the
error signal.
The estimated phase distortion functions, fˆ1 and fˆ2 , from the models can be
plotted as functions of Δψ and the results for a wcdma signal for the different
methods are rather similar. The function f˜ˆ describes the phase change between
the two outphasing signals at the output, and thus the amplitude change of the
output. The phase distortion functions f˜ˆ are presented in Figure 6.9 as deviations from the ideal phase distortion, which should be as close to zero as possible.
The ideal phase distortion includes the compensation for nonequal gain factors.
By this, it is clear that at amplitudes close to zero (Δψ close to π), a zero distortion will not be possible for nonequal gain factors. In Figure 6.10, the different
functions f k , k = 1, 2, are shown for the different methods. The methods achieve
rather similar results, but at the expense of the number of computations in the
nonconvex approach, where 25 000 function evaluations have been performed to
achieve the optimum.
Even though the methods result in similar validation results, the largest differences are found close to the edges of the interval. In the wcdma signal, 99.1%
of the measured data points have 0.8 ≤ Δψ ≤ 3.0, so the focus of the fit is where
the most data points are. Compared to Figure 5.12 and (6.2), it is clear that the
data points with a very large Δψ (close to π) have a very small amplitude, so errors in the phase distortion modeling might not affect the result as much as for the data points with a small Δψ (large amplitude). It can thus be concluded that it could be
more important to obtain a good model for small values of Δψ than for large
values (something that could be achieved by weighting functions). It can also
be noted that, if the amplitude of the input had been used instead of the angle
Δψ = arg(s1 ) − arg(s2 ), more weight would have been put at the largest amplitudes. This is not done now since a large input amplitude equals a small Δψ and
vice versa.
In polynomial fitting, the agreement with the function f is often bad at the
outer parts of the interval to be approximated. If one can choose the points at
which the polynomial is to be fitted, Chebyshev points should be chosen, with
more points at the outer parts of the interval [Dahlquist and Björck, 2008, pp. 377-379]. Here, we are fitting a polynomial using the method of least squares, but
the same reasoning holds. To obtain a smaller error at the peak power, more data
points could have been collected there. Instead, the least-squares fitting focuses
on fitting the overall performance, and hence more effort is made to obtain a
small error in the parts where there is a larger point density. For the signals
used in this thesis, this area of larger point density is in the center of the interval,
where an improvement will be clearly seen in for example Figure 8.9. We will
return to this subject in Chapter 8 when evaluating the predistortion results.
Figure 6.9: Simulated output phase distortion of the models from the nonconvex method (dotted green) and the ls methods using dr (dashed blue)
and gridding (pink) (the two model outputs are almost completely on top
of each other). The lines describe the modeled phase difference as a function of the input signal amplitudes, that is, taking the different gain factors
into account. The three methods evaluated estimate the phase shift almost
equally for the middle range where most of the data points are (99.1% have
0.8 ≤ ∆ψ ≤ 3.0), but the differences are visible at the edges.
Figure 6.10: Simulated outphasing output phase distortion of the models
from the nonconvex method (green) and the ls methods using dr (blue) and
gridding (pink). The lines describe the modeled phase in each branch as a
function of the input signal amplitudes. Branch one is plotted in solid lines
and branch two in dashed lines.
6.5 Convex vs Nonconvex Formulations
The minimization of the cost function (6.12)-(6.14) is a nonconvex optimization
problem in 2n + 4 dimensions with possible presence of local minima. Nonconvex optimization problems can either be solved by a local optimization method
or a global one. A local optimization method minimizes the cost function over
points close to the current point, and guarantees convergence to a local minimum
only. Global methods find the global minimum, at the expense of efficiency [Boyd
and Vandenberghe, 2004]. Hence, even under ideal conditions (noise-free data,
true pa described exactly by one model with the proposed structure), there is no
guarantee that the nonconvex approach will produce an optimal model of the
pa in finite time. The least-squares approach in (6.18)-(6.19), on the other hand, does exactly this and results in a closed-form expression for the parameter estimate. This is a major advantage since it removes the need for error-prone sub-optimality tests and
possible time-consuming restarts of the search algorithm. Additionally, the computation time for the iterative, nonconvex, and potentially sub-optimal solution
is significantly longer compared to the least-squares method.
A two dimensional projection of the cost functions to be minimized, (6.13)
in the nonlinear formulation and (6.19) in the ls reformulation, can be seen in
Figure 6.11. All parameters but two have been fixed at the optimum, and the
linear term in each amplifier branch (ηk,1 in (6.11)) has been varied. Clearly, there
is a risk of finding a local minimum in the nonconvex formulation illustrated
in (a) whereas there is only one (global) optimum in the least-squares formulation
in (b).
The local minima in themselves might not be a problem if they are good
enough to produce a well performing dpd, but there are no guarantees that this is
the case. Typically, a number of different initial points need to be tested in order
to get a reasonable performance.
6.6 Noise Influence
Noise is always present in measurements, and the noise will affect the models. The algorithms presented in this chapter are especially sensitive to noise in two steps: the normalization g1 + g2 = 1 in (6.1) and the calculation of cDR in (5.13).
Both these calculations are based on very few measurements, one for the normalization (the largest amplitude) and two for the dr calculation (the smallest and
the largest amplitudes), so noise at these instances might have a large influence
on the estimation, and thus the performance of the predistorter.
The measurements used for the modeling and model validation in this chapter were recorded using the same measurement setup and power amplifier that
will be used in Section 8.4. To reduce the influence of measurement noise, the
same input was applied a number of times, K, and the output was measured,
whereupon the average over the different realizations was calculated. In measurements used for the pa model estimation described here, K = 10. No automatic
synchronization between input and measured output is done, so a manual synchronization has to be performed. This also means that the sample times of the
output differ between different measurement sets and that the synchronization
between input and output is not the same for different data sets. When looking
at the different data sets, the most dominant noise effect seems to stem from this
time mismatch, which is evenly distributed around the mean value. The noise
levels in general are very low.

Figure 6.11: Two-dimensional projections of the cost functions of (a) the
original nonconvex optimization problem (6.12)-(6.14) and (b) the least-squares
reformulation (6.18)-(6.19), using the dynamic range for the estimation of g1
and g2. All but two parameters in each amplifier branch have been fixed at the
optimal value, and the linear terms (ηk,1 in (6.11)) in the two amplifier paths are
varied. In (a), the visible local minima are marked with ▽ and the minimum
obtained clearly depends on the initial point of the local optimization. In the
least-squares formulation illustrated in (b), there is only one minimum (the
global one) and convergence is guaranteed. The + marks the global minimum.
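The averaging over the K realizations can be sketched as follows; the test signal and noise level are arbitrary illustrations, and the point is only that averaging reduces the noise standard deviation by roughly a factor √K.

```python
import numpy as np

rng = np.random.default_rng(6)
K, N = 10, 5000                                        # K repeated measurements
clean = np.exp(1j * 2 * np.pi * 0.01 * np.arange(N))   # arbitrary test signal
noise = 0.05 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
meas = clean + noise

avg = meas.mean(axis=0)                  # average over the K realizations
err_single = np.std(meas[0] - clean)     # noise level of one measurement
err_avg = np.std(avg - clean)            # noise level after averaging
print(err_single / err_avg)              # roughly sqrt(K)
```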
6.7 Memory Effects and Dynamics
A more complex model structure has also been investigated by adding memory,
that is to say that the output depends not only on the current input but also on
the previous inputs, as in the model structure
$$p_{\mathrm{mem}}(\alpha, \bar{\beta}_{n_m}(s)) = \sum_{m=0}^{n_m} \sum_{j=0}^{n} \alpha_{mj}\, \beta(s(t-m))^j, \qquad (6.21)$$

with a memory depth $n_m$, where

$$\bar{\beta}_{n_m}(s) = \left[\beta(s(t-m))\right]_{m=0}^{n_m}. \qquad (6.22)$$
This approach did not lead to a better fit in the model validation, nor did it give
any significant improvement in predistortion.
If dynamics are present in the pa, it is not unreasonable to assume that they
would appear in the combiner, since the amplifier components in each branch
can be assumed to contribute with little dynamics. This would mean that we
have a parallel Hammerstein system with two parallel nonlinear, static branches
(the amplifiers) followed by a dynamic system (the combiner). To investigate how
such dynamics would effect the method described above, a dynamical system has
been simulated at the output of a static model. The model was estimated using
the ls method with dr. The dynamical system was a first order system with different values of the time constant in the range [0.2Ts 5Ts ], where Ts is the sample
time. The same identification method was then applied to this data. In this case,
the decomposition of the output using an estimate of g1 and g2 (obtained by dynamic range or gridding), is no longer a good approximation of the system, and
the method will not perform in a satisfactory way. Thus, further investigation of
how to include dynamics is needed.
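As a rough illustration, this experiment can be repeated in simulation. The sketch below uses hypothetical gain factors and an arbitrary signal, not the thesis model; it only shows the mechanism that a first-order lag placed after a static outphasing model perturbs the output more the larger its time constant is.

```python
import numpy as np

def first_order_filter(x, tau, Ts=1.0):
    """Discrete first-order lag: y[t] = a*y[t-1] + (1-a)*x[t], a = exp(-Ts/tau)."""
    a = np.exp(-Ts / tau)
    y = np.zeros(len(x), dtype=complex)
    for t in range(len(x)):
        y[t] = a * y[t - 1] + (1 - a) * x[t] if t > 0 else (1 - a) * x[t]
    return y

# Static outphasing model output (hypothetical gains, no phase distortion)
rng = np.random.default_rng(0)
s = rng.standard_normal(500) + 1j * rng.standard_normal(500)
s = 0.9 * s / np.max(np.abs(s))
phi = np.arccos(np.abs(s))            # |s| = cos(phi), phi = dpsi/2
s1 = np.exp(1j * (np.angle(s) + phi))
s2 = np.exp(1j * (np.angle(s) - phi))
y_static = 0.4986 * s1 + 0.5014 * s2

# Deviation from the static model grows with the combiner time constant
errs = []
for tau in [0.2, 1.0, 5.0]:           # time constants in units of Ts
    y_dyn = first_order_filter(y_static, tau)
    errs.append(np.linalg.norm(y_dyn - y_static) / np.linalg.norm(y_static))
print(errs)
```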
7 Predistortion
Power amplifiers in communication devices are often nonlinear and/or dynamic,
which causes interference in adjacent transmitting channels. To reduce this interference, linearization is needed. This is preferably done at the input, so that
a prefilter inverts the nonlinearities/dynamics. This prefilter is called a predistorter (pd). Originally, these predistorters consisted of small analog circuits, but
now they are often implemented in a look-up table (lut) or a digital signal processor (dsp). Such an implementation is called a digital predistorter (dpd).
For the outphasing amplifiers evaluated in this thesis, the gain mismatch
could be eliminated by adjusting the voltage supplies in the output stage, but
this would require an extra adjustable voltage source on the chip, which is undesirable. Instead, the goal is to find a predistorter that uses only the phases of
the two outphasing signals. By adjusting the outphasing signals, it is possible to
achieve all amplitudes (within the dynamic range) and phases, and this idea will
be explored in the construction of a predistorter.
In this chapter, a description of an ideal dpd will be presented and different
methods to obtain it will be described. As a first step, the predistorters will be
evaluated using only simulated data from a model of the pa (described in
Chapter 6). In Chapter 8, the predistorters will be evaluated on real measurement
data.
7.1 A DPD Description
With the description of the power amplifier in (6.9)-(6.10), it is clear that an ideal
pa would have f 1 = f 2 = 0 and g1 = g2 = g0 = 0.5 and any deviations from
these values will cause nonlinearities in the output signal and spectral distortion.
In order to compensate for these effects, a dpd can be used to modify the input
outphasing signals to the two amplifier branches, i.e., s1 (t) and s2 (t).
Figure 7.1: A schematic picture of the amplifiers with predistorters. [Block
diagram: s(t) → scs → s1(t), s2(t); dpd blocks h1, h2 → s1,P(t), s2,P(t);
amplifier branches (g1, f1) and (g2, f2) → y1(t), y2(t); summed to y(t).] Note
that the functions fk and hk, k = 1, 2, are not functions of the input to the
block only, but are used to show the general functionality of the pa and the
dpd with the separation of the two branches.
Since the outputs of the Class D stages (the amplifiers in each branch) have
constant envelopes, the dpd may only change the phase characteristics of the two
input outphasing signals. With this in mind, a dpd that produces the predistorted signals
$$s_{k,P}(t) = e^{j\,h_k(\Delta_\psi)}\, s_k(t), \qquad k = 1, 2, \qquad (7.1)$$
to the two amplifier branches is proposed. Here, h1 and h2 are two real-valued
functions that depend on the phase difference between the two signal paths. By
modifying the signals in each branch using the dpd in (7.1), shown in Fig. 7.1,
the predistorted pa output yP (t) can be written
$$y_P = \underbrace{g_1 e^{j f_1(\Delta_\psi(s_{1,P},\,s_{2,P}))}\, s_{1,P}}_{\triangleq\, y_{1,P}} + \underbrace{g_2 e^{j f_2(\Delta_\psi(s_{1,P},\,s_{2,P}))}\, s_{2,P}}_{\triangleq\, y_{2,P}}. \qquad (7.2)$$
The output is thus a sum of the two predistorted branches. In each branch k =
1, 2, the phase of the input is changed to counteract the effects of the nonequal
gain factors and the pa nonlinearities. Each branch is predistorted separately and
sent to the outphasing pa.
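A minimal sketch of the predistorter structure (7.1)-(7.2) might look as follows; the function names and the example nonlinearities are illustrative assumptions, not part of the thesis implementation.

```python
import numpy as np

def predistort(s1, s2, h1, h2):
    """Phase-only DPD (7.1): each constant-envelope branch signal keeps its
    amplitude; only its phase is shifted by hk evaluated at the phase difference."""
    dpsi = np.angle(s1) - np.angle(s2)       # Delta_psi(s1, s2), cf. (6.2)
    return np.exp(1j * h1(dpsi)) * s1, np.exp(1j * h2(dpsi)) * s2

def pa_output(s1p, s2p, g1, g2, f1, f2):
    """PA model output (7.2) for the predistorted branch inputs."""
    dpsi_p = np.angle(s1p) - np.angle(s2p)
    return (g1 * np.exp(1j * f1(dpsi_p)) * s1p
            + g2 * np.exp(1j * f2(dpsi_p)) * s2p)

# The envelope of each branch is untouched, as required by the Class D stages
theta = np.linspace(0, 2, 100)
s1, s2 = np.exp(1j * (theta + 0.4)), np.exp(1j * (theta - 0.4))
s1p, s2p = predistort(s1, s2, lambda d: 0.1 * d, lambda d: -0.1 * d)
print(np.max(np.abs(np.abs(s1p) - 1.0)))     # envelope preserved
```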
We will start by describing the effects of the predistorter on the output. The
phase difference between the two paths after the predistorters is described by
$$\begin{aligned}
\Delta_\psi(s_{1,P}, s_{2,P}) &= \arg(s_{1,P}) - \arg(s_{2,P}) \\
&= [\arg(s_1) + h_1(\Delta_\psi)] - [\arg(s_2) + h_2(\Delta_\psi)] \\
&= \Delta_\psi + h_1(\Delta_\psi) - h_2(\Delta_\psi) \triangleq \tilde{h}(\Delta_\psi), \qquad (7.3)
\end{aligned}$$
and the phase difference between the two paths at the (predistorted) outputs by
$$\begin{aligned}
\Delta_\psi(y_{1,P}, y_{2,P}) &= \arg(y_{1,P}) - \arg(y_{2,P}) \\
&= \big[\arg(s_{1,P}) + f_1(\Delta_\psi(s_{1,P}, s_{2,P}))\big] - \big[\arg(s_{2,P}) + f_2(\Delta_\psi(s_{1,P}, s_{2,P}))\big] \\
&= \big[\arg(s_1) + h_1(\Delta_\psi) + f_1(\tilde{h}(\Delta_\psi))\big] - \big[\arg(s_2) + h_2(\Delta_\psi) + f_2(\tilde{h}(\Delta_\psi))\big] \\
&= \Delta_\psi + h_1(\Delta_\psi) - h_2(\Delta_\psi) + f_1(\tilde{h}(\Delta_\psi)) - f_2(\tilde{h}(\Delta_\psi)) \\
&= \tilde{h}(\Delta_\psi) + f_1(\tilde{h}(\Delta_\psi)) - f_2(\tilde{h}(\Delta_\psi)) \\
&\triangleq \tilde{f}(\tilde{h}(\Delta_\psi)). \qquad (7.4)
\end{aligned}$$
These phase differences correspond to the amplitude of the signal, since it is
known that |s| = cos(Δψ /2), cf. Figure 5.12. The absolute phase change in each
branch is given by
$$\arg(y_{k,P}) = \arg(s_k) + h_k(\Delta_\psi) + f_k(\Delta_\psi(s_{1,P}, s_{2,P})) \qquad (7.5)$$
for k = 1, 2. We now have a model structure describing how the phases of each
outphasing signal, and thus the amplitude and phase of the output, depend on
the characteristics g1 , g2 , f 1 and f 2 of the pa and the predistorter functions h1 and
h2 .
7.2 The Ideal DPD
As mentioned above, the pa output signal y(t) is a distorted version of the input signal. An ideal dpd should compensate for this distortion and result in a
normalized output signal yP (t) = y1,P (t) + y2,P (t) that is equal to the input signal
s(t) = 0.5s1 (t) + 0.5s2 (t). In the ideal case when g1 = g2 = g0 = 0.5, this is obtained
when y1 (t) = 0.5s1 (t) and y2 (t) = 0.5s2 (t). However, this is not possible to achieve
when gk ≠ 0.5, k = 1, 2. In this case, the ideal values for y1,P(t) and y2,P(t) are
instead s˜1 (t) and s˜2 (t), as described in (6.3). These signals define an alternative
decomposition of s(t) such that the gain mismatch is accounted for.
Assume now that an ideal dpd (7.1) is used together with the pa (6.9). In this
case, the equalities
$$y_{1,P}(t) = \tilde{s}_1(t) \qquad (7.6)$$

and

$$y_{2,P}(t) = \tilde{s}_2(t) \qquad (7.7)$$

hold, which results in

$$y_P(t) = y_{1,P}(t) + y_{2,P}(t) = \tilde{s}_1(t) + \tilde{s}_2(t) = s(t).$$
That is, when the ideal dpd is applied to the pa, the original input will be retrieved. This assumes that the model perfectly describes the pa. Some more
conclusions can be drawn about the ideal dpd by looking at the amplitudes and
the phases of the input and the output. In order not to distort the amplitude at
the output, the phase difference between y1,P (t) and y2,P (t) must be equal to the
one between s˜1 (t) and s˜2 (t), i.e.,
$$\begin{aligned}
\Delta_\psi(y_{1,P}, y_{2,P}) = \Delta_\psi(s_{1,P}, s_{2,P}) &= \arg(\tilde{s}_1) - \arg(\tilde{s}_2) \\
&= \big[\arg(s_1) + \xi_1(\Delta_\psi)\big] - \big[\arg(s_2) + \xi_2(\Delta_\psi)\big] \\
&= \Delta_\psi + \xi_1(\Delta_\psi) - \xi_2(\Delta_\psi) \triangleq \tilde{\xi}(\Delta_\psi). \qquad (7.8)
\end{aligned}$$
Hence, inserting (7.8) into (7.4) gives
$$\tilde{f}(\tilde{h}(\Delta_\psi)) = \tilde{\xi}(\Delta_\psi) \quad \Leftrightarrow \quad \tilde{h}(\Delta_\psi) = \tilde{f}^{-1}(\tilde{\xi}(\Delta_\psi)), \qquad (7.9)$$
assuming that f˜ is invertible. Furthermore, for (7.6) and (7.7) to hold, that is,
y1,P = s˜1 and y2,P = s˜2 , we require that the phases of the two signals are equal,
$$\arg(y_{k,P}) = \arg(\tilde{s}_k), \qquad k = 1, 2. \qquad (7.10)$$
Now, we have a description of how the predistorter will affect the output as well
as of how the gain factors g1 and g2 change the desired outphasing output signals.
The phase condition (7.10) combined with (7.3), (7.5) as well as (6.6) or (6.7),
respectively (for each branch), gives

$$\arg(s_k) + h_k(\Delta_\psi) + f_k(\tilde{h}(\Delta_\psi)) = \arg(s_k) + \xi_k(\Delta_\psi), \qquad k = 1, 2.$$

That is, the predistorter function hk is the only unknown in each branch and can
be solved for. This results in
$$\begin{aligned}
h_k(\Delta_\psi) &= -f_k(\tilde{h}(\Delta_\psi)) + \xi_k(\Delta_\psi) \\
&= -f_k\big(\tilde{f}^{-1}(\tilde{\xi}(\Delta_\psi))\big) + \xi_k(\Delta_\psi) \qquad (7.11)
\end{aligned}$$
for k = 1, 2. Here, (7.9) has been used in the last equality.
Hence, using the predistorters (7.11) in (7.1), the output y(t) will be an amplified replica of the input signal s(t), despite the gain mismatch and nonlinear
behavior of the amplifiers.
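The construction above can be checked numerically under the simplifying assumption g1 = g2 = 0.5 (so that ξk = 0 and ξ̃(Δψ) = Δψ). The branch nonlinearities f1 and f2 below are hypothetical stand-ins, not the estimated thesis model; the sketch grids f̃, inverts it by interpolation as in (7.9), forms hk as in (7.11), and verifies that the predistorted output retrieves the input.

```python
import numpy as np

# Hypothetical branch phase nonlinearities (illustrative only)
f1 = lambda z: 0.05 * z + 0.01 * z**2
f2 = lambda z: -0.03 * z + 0.02 * z**2
f_tilde = lambda z: z + f1(z) - f2(z)       # (7.16); monotone on [0, pi] here

# Numerical inversion of f_tilde by gridding, as in (7.9)
zgrid = np.linspace(0.0, np.pi, 4001)
f_tilde_inv = lambda x: np.interp(x, f_tilde(zgrid), zgrid)

# Ideal DPD (7.11) with g1 = g2 = 0.5, so xi_k = 0 and xi_tilde(d) = d
h1 = lambda d: -f1(f_tilde_inv(d))
h2 = lambda d: -f2(f_tilde_inv(d))

# Toy input: s = 0.5*s1 + 0.5*s2 with |s| = cos(dpsi/2)
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
dpsi = rng.uniform(0.1, 3.0, 300)
s1 = np.exp(1j * (theta + dpsi / 2))
s2 = np.exp(1j * (theta - dpsi / 2))
s = 0.5 * s1 + 0.5 * s2

s1p = np.exp(1j * h1(dpsi)) * s1            # (7.1)
s2p = np.exp(1j * h2(dpsi)) * s2
dpsi_p = dpsi + h1(dpsi) - h2(dpsi)         # (7.3)
yP = 0.5 * np.exp(1j * f1(dpsi_p)) * s1p + 0.5 * np.exp(1j * f2(dpsi_p)) * s2p
print(np.max(np.abs(yP - s)))               # close to zero: input retrieved
```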
7.3 Nonconvex DPD Estimator
A first approach to identify the predistorter is to notice that the goal is to minimize the difference between the normalized input and the normalized predistorted output. This can be written down in a straightforward way as solving the
minimization criterion
$$\hat{\theta}_{DPD} = \underset{\theta_{DPD}}{\operatorname{arg\,min}} \sum_{t=1}^{N} \big| s(t) - \hat{y}_P(t, \theta_{DPD}) \big|^2, \qquad (7.12)$$

$$\hat{y}_P(t, \theta_{DPD}) = \hat{g}_1 e^{j\, p(\hat{\eta}_1, \Delta_\psi(s_{1,P}, s_{2,P}))}\, s_{1,P}(t) + \hat{g}_2 e^{j\, p(\hat{\eta}_2, \Delta_\psi(s_{1,P}, s_{2,P}))}\, s_{2,P}(t), \qquad (7.13)$$

where

$$s_{k,P}(t) = e^{j\, p(\eta_{k,DPD}, \Delta_\psi(s_1, s_2))}\, s_k(t), \qquad k = 1, 2, \qquad (7.14)$$
and $\theta_{DPD} = [\eta_{1,DPD}^T \ \eta_{2,DPD}^T]^T \in \mathbb{R}^{2n+2}$. The signal ŷP(t) is the output from a
pa model, using a predistorted input, as in Figure 7.1, where the amplifiers are
replaced by the obtained models thereof. The dpd is thus identified based on a
model of the forward system, according to Method B in Section 4.1. The forward
model was approximated by polynomials, f̂k(Δψ) = p(ηk, Δψ), according to (6.11),
and this is used in (7.12)-(7.13) to explicitly point out the dependence on the
model parameters. When identifying the dpd model, the model structure was
assumed to be the same as for the pa model, see (6.11), motivated by the Stone-Weierstrass theorem (Theorem 7.26 in Rudin [1976]), so that
$$\hat{h}_k(\Delta_\psi) = p(\eta_{k,DPD}, \Delta_\psi) = \sum_{i=0}^{n_h} \eta_{k,i,DPD}\, \Delta_\psi^i, \qquad k = 1, 2, \qquad (7.15)$$

where

$$\eta_{k,DPD} = \begin{bmatrix} \eta_{k,0,DPD} & \eta_{k,1,DPD} & \cdots & \eta_{k,n,DPD} \end{bmatrix}^T.$$
The resulting estimated parameter vector θ̂DPD contains the dpd model parameters.
This formulation leads to a nonconvex optimization problem and is thus at
risk of producing a suboptimal solution if the optimization algorithm finds a
local minimum. Restarting the algorithm at different initial points is a possible
way to reduce the risk of getting stuck in a local minimum instead of the global
minimum, but this solution would not be useful in an online implementation; see
also Section 6.5 for a discussion on convex and nonconvex optimization.
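A sketch of this estimator, assuming SciPy is available and using a toy stand-in for the forward pa model with first-degree dpd polynomials, could look as follows. The derivative-free Nelder-Mead search only illustrates the setup and the need for restarts, not the actual thesis implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in forward PA model (illustrative values, not the estimated model)
g1, g2 = 0.4986, 0.5014
f1 = lambda z: 0.05 * z + 0.01 * z**2
f2 = lambda z: -0.03 * z + 0.02 * z**2

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
dpsi = rng.uniform(0.1, 3.0, 200)
s1, s2 = np.exp(1j * (theta + dpsi / 2)), np.exp(1j * (theta - dpsi / 2))
s = 0.5 * s1 + 0.5 * s2

def cost(th):
    """(7.12)-(7.14) with first-degree DPD polynomials h_k(d) = th0 + th1*d."""
    h1v = th[0] + th[1] * dpsi
    h2v = th[2] + th[3] * dpsi
    s1p, s2p = np.exp(1j * h1v) * s1, np.exp(1j * h2v) * s2
    dp = dpsi + h1v - h2v                  # phase difference after DPD, (7.3)
    yP = g1 * np.exp(1j * f1(dp)) * s1p + g2 * np.exp(1j * f2(dp)) * s2p
    return np.sum(np.abs(s - yP) ** 2)

# One local search; in practice several restarts are needed (nonconvex cost)
res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead")
print(cost(np.zeros(4)), "->", res.fun)
```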
7.4 Analytical DPD Estimator
The ideal dpd outlined in Section 7.2 requires knowledge of the pa model, and
once the pa characteristics g1 , g2 , f 1 and f 2 are known (or estimated), the predistorter functions can be determined. The first step to construct a dpd is thus
to obtain a model of the pa, as described in Chapter 6. This method follows
Method A in Section 4.1, where a model of the system itself is used to analytically produce an inverse.
The parameter estimates η̂k define function estimates (6.20)

$$\hat{f}_k(z) = p(\hat{\eta}_k, z), \qquad k = 1, 2,$$

from which an estimate

$$\hat{\tilde{f}}(z) = z + \hat{f}_1(z) - \hat{f}_2(z) \qquad (7.16)$$

of the function $\tilde{f}$ from (7.4) can be computed. Provided that this function can be
inverted numerically, estimates ĥk of the ideal phase correction functions can be
computed as in (7.11), i.e.,

$$\hat{h}_k(\Delta_\psi) = -\hat{f}_k\big(\hat{\tilde{f}}^{-1}(\tilde{\xi}(\Delta_\psi))\big) + \xi_k(\Delta_\psi) \qquad (7.17)$$
for k = 1, 2, where Δψ is given by (6.2) and ξ̃, ξ1 and ξ2 by (7.8), (6.6) and (6.7),
respectively.
Hence, the complete dpd estimator consists of the selection of the gain factors
g1 and g2 (see Sections 6.1 and 6.3), the two least-squares estimators given
by (6.18), a numerical function inversion to obtain f̃̂⁻¹, and the expressions
for the phase correction functions in (7.17). The dpd estimation can either be
done at each point in time, or (as has been done here) by evaluating the function
for the range of possible Δψ and saving this nonparametric, piecewise constant
function.
The dpd estimator will result in two functions ĥ1 and ĥ2 which take Δψ as
argument, and by using these as in (7.1), the predistorted input signals s1,P (t) and
s2,P (t) can be calculated for arbitrary data. Measurement results for a validation
data set, not used during the modeling, will be presented in Chapter 8.
The algorithm thus consists of two main parts, A – Estimation of pa model
and B – Calculation of dpd functions. Part A consists of three subparts, where
the first, A.I, produces candidates for the gain factors g1 and g2, either by using
the dr or by gridding possible values. A.II produces ls estimates of the nonlinear
functions f̂1 and f̂2 for each pair of g1 and g2, and in A.III, the best performing
model is chosen among all the candidates. In Part B, the dpd functions ĥ1 and ĥ2
are calculated. The different steps are described in more detail in Algorithm 1.
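The tabulation in Part B can be sketched as below; `build_lut` and `lut_lookup` are hypothetical helper names, and the tabulated function is a placeholder for the ĥk that would come from (7.17).

```python
import numpy as np

def build_lut(h, p_I=0.001):
    """Tabulate a phase-correction function over [0, pi] with step p_I
    (cf. Algorithm 1, part B)."""
    grid = np.arange(0.0, np.pi + p_I, p_I)
    return grid, h(grid)

def lut_lookup(grid, table, dpsi):
    """Piecewise-constant lookup: round dpsi to the nearest grid point."""
    step = grid[1] - grid[0]
    idx = np.clip(np.round(dpsi / step).astype(int), 0, len(grid) - 1)
    return table[idx]

# Placeholder h_1; the real table would be filled from (7.17)
grid, table = build_lut(lambda d: 0.1 * np.sin(d))
print(lut_lookup(grid, table, np.array([0.5, 2.0])))
```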
7.5 Inverse Least-Squares DPD Estimator
In the derivation described above, the ideal dpd was deduced using analytical
relationships between the input and the desired output, following the basic
Method A described in Section 4.1, page 34. By instead choosing Method C, we
want to estimate the inverse directly. This means that the system input s(t) (or
rather s1(t) and s2(t)) will be considered as output in the identification, and y(t)
(or y1(t) and y2(t)) as input.

Since g1 and g2 can be found rather easily (through the dynamic range or
gridding), these can still be assumed to be known, so the decomposition of y(t)
into y1(t) and y2(t) can be performed using (6.3). In each branch k = 1, 2 we thus
have

$$\arg(s_k) = \arg(y_k) - h_k\big(\Delta_\psi(y_1, y_2)\big). \qquad (7.18)$$

The left hand side is the input, which is known. The first term on the right hand
side represents what we have measured, using the decomposition (6.3). The second term represents how the outphasing outputs should be modified to match
the input, i.e., a postdistorter. The only unknowns are thus the predistorter
functions h1 and h2 in the two branches. By approximating these as polynomials,
$$\hat{h}_k \approx p(\zeta_k, \Delta_\psi(y_1, y_2)) = \sum_{i=0}^{n_h} \zeta_{k,i}\, \Delta_\psi^i(y_1, y_2), \qquad k = 1, 2, \qquad (7.19)$$

where

$$\zeta_k = \begin{bmatrix} \zeta_{k,0} & \zeta_{k,1} & \cdots & \zeta_{k,n} \end{bmatrix}^T.$$
Algorithm 1 ls modeling and analytical dpd method
Require: model order n, method for choice of g1 and g2, precision of pa model
(pM) and inverse (pI), estimation data.
{A – Estimation of pa model}
 1: Normalize the output y(t) = y(t)/max(|y(t)|).
 2: Calculate Δψ ∀t according to (6.2).
{A.I – Estimation of gain factor candidates g1 and g2}
 3: if Use Dynamic Range to determine g1 and g2 then
 4:   Calculate cDR using (5.13), and Δg using (6.17).
 5:   Calculate possible choices of g1, g2 according to (6.15).
 6: else {g1 and g2 over a range of values}
 7:   Grid g1 ∈ [gmin, gmax] with precision pM and let g2 = 1 − g1.
 8: end if
{A.II – Estimation of nonlinearity function candidates f̂1 and f̂2}
 9: for all pairs of g1, g2 do
10:   Create s̃k = gk e^{j arg(s̃k)} and yk = gk e^{j arg(yk)}, k = 1, 2, using (6.4) to (6.7).
11:   Find ηk using (6.18) and calculate f̂k, k = 1, 2, using (6.20).
12:   Simulate the output ŷg1,g2(t) = g1 e^{j f̂1(Δψ)} s1(t) + g2 e^{j f̂2(Δψ)} s2(t).
13:   Calculate the error Vg(g1, g2) = Σt |y(t) − ŷg1,g2(t)|².
14: end for
{A.III – Choose best forward model, ĝ1, ĝ2, f̂1 and f̂2}
15: Select ĝ1 = arg min_{g1} Vg(g1, 1 − g1), ĝ2 = 1 − ĝ1 and the corresponding f̂1 and f̂2.
{B – Calculation of dpd functions ĥ1 and ĥ2} {Create a look-up table (lut)
for different values of Δψ by creating an intermediate signal s}
16: Grid Δψ ∈ [0, π] with precision pI.
17: for each value of Δψ do
18:   Create s = cos(Δψ/2) according to (5.10) assuming α = 0 and rmax = 1 (ϕ = Δψ/2).
19:   Create s1 and s2 according to (5.11) and s̃1 and s̃2 using (6.3) to (6.5).
20:   Find ξ̃ using (7.8), (6.6) and (6.7).
21:   Calculate f̃̂(ξ̃) using (7.16).
22: end for
23: Invert f̃̂(ξ̃) numerically to get f̃̂⁻¹. This can e.g. be done by calculating f̃̂(ξ̃)
    for a number of values of ξ̃ ∈ [0, π], gridding f̃̂(ξ̃) and matching with the ξ̃
    that gives the closest value.
24: for each value of Δψ in line 16 do
25:   Find the estimate ĥk(Δψ) according to (7.17).
26: end for
As was done for the pa model, the parameters corresponding to the hk-functions
can then be found.
The estimates ζ̂k,i of the model parameters have been computed by minimizing a quadratic cost function, i.e.,
$$\hat{\zeta}_k = \underset{\zeta_k}{\operatorname{arg\,min}}\ V_k^h(\zeta_k), \qquad k = 1, 2, \qquad (7.20)$$

where

$$V_k^h(\zeta_k) = \sum_{t=1}^{N} \Big( \arg(y_k(t)) - \arg(s_k(t)) - p\big(\zeta_k, \Delta_\psi(y_1(t), y_2(t))\big) \Big)^2. \qquad (7.21)$$
The parameter estimates $\hat{\zeta}_k$ define inverse function estimates

$$\hat{h}_k(z) = p(\hat{\zeta}_k, z), \qquad k = 1, 2, \qquad (7.22)$$
that can be used as a dpd. As discussed in Chapter 3, this method assumes commutativity of the two systems (system and inverse), so that the inverse which was
estimated at the output of the power amplifier, a postdistorter, can also be used
at the input as a predistorter. The method is summarized in Algorithm 2.
Algorithm 2 Inverse ls dpd method
Require: model order nh, method for choice of g1 and g2, precision of gain
factors (pM), estimation data.
 1: Normalize the output y(t) = y(t)/max(|y(t)|).
{I – Estimation of gain factor candidates g1 and g2}
 2: if Use Dynamic Range to determine g1 and g2 then
 3:   Calculate cDR using (5.13), and Δg using (6.17).
 4:   Calculate possible choices of g1, g2 according to (6.15).
 5: else {g1 and g2 over a range of values}
 6:   Grid g1 ∈ [gmin, gmax] with precision pM and let g2 = 1 − g1.
 7: end if
{II – Estimation of nonlinearity function candidates ĥ1 and ĥ2}
 8: for all pairs of g1, g2 do
 9:   Create yk = gk e^{j arg(yk)} using (6.4) to (6.7) and sk using (5.11).
10:   Calculate Δψ(y1, y2) ∀t according to (6.2).
11:   Find ζ̂k using (7.20) and calculate ĥk, k = 1, 2, using (7.22).
12:   Simulate the input ŝg1,g2(t) = e^{j ĥ1(Δψ(y1,y2))} y1(t) + e^{j ĥ2(Δψ(y1,y2))} y2(t).
13:   Calculate the error Vg(g1, g2) = Σt |s(t) − ŝg1,g2(t)|².
14: end for
{III – Choose best inverse model, ĥ1 and ĥ2}
15: Select ĝ1 = arg min_{g1} Vg(g1, 1 − g1), ĝ2 = 1 − ĝ1 and the corresponding ĥ1
and ĥ2.
Table 7.1: dpd Model Validation

    Method               ‖s − ŷP‖₂²
    Analytical           0.0532
    ls                   1.008
    Gain factors only    65.5

7.6 Simulated Evaluation of Analytical and LS Predistorter
The goal here is to evaluate the performance of the predistorter methods in simulations, and determine how well the different methods achieve an inversion. One
way is to look at the am-am modulation to assess how much the amplitude of the
predistorted output is distorted. For an outphasing pa, this is connected to the
phase difference Δψ (y1,P , y2,P ) of the outphasing outputs y1,P and y2,P .
The predistorter methods in Sections 7.4 and 7.5 are evaluated using a model
of the amplifier as “the truth”. The model is presented in Chapter 6, where the
gain factors were estimated using the dr and the nonlinearities using the ls approach, see Section 6.3 and the model validation in Section 6.4 and Figure 6.5.
The same validation data have been used in order to evaluate the different predistorter methods. Evaluation on a real pa will be presented in Chapter 8.
Test 1 – Inversion Evaluation
We will start by looking at the am-am modulation to determine how much the amplitude of the predistorted output is changed. The deviation from the ideal phase
difference at the output (i.e., the output amplitude) with and without predistortion is presented in Figure 7.2. Both the analytical method and the ls method
clearly reduce the phase shift introduced by the pa. Figure 7.3 shows the estimated deviation from the ideal phase for each signal branch with and without
predistortion, with rather similar performance for the two dpd methods.
The values of the cost function (7.12) are presented in Table 7.1 for the two
methods. The result using only the estimation of the gain factors and the alternative decomposition (using the knowledge of the nonequal gain factors) is also presented. It is clear that incorporating the nonlinearities improves the performance.
For cases when the gain factors differ more from the ideal g1 = g2 = 0.5 than
here (g1 = 0.4986 and g2 = 0.5014), the alternative decomposition (6.3) will
improve the modeling more than it does in this case, where the difference
is small.
For the ls method, the fit is almost perfect in the middle range, which is to be
expected since a polynomial is used (see discussion in Section 6.4, page 79). Also
the number of measurements is unevenly spread out over Δψ with most data in
the middle, only 0.9% of the estimation data have Δψ < 0.8 or Δψ > 3. For the
95
Simulated Evaluation of Analytical and LS Predistorter
Deviation from ideal phase difference [rad]
7.6
· 10−2
2
0
−2
−4
−6
−8
0
0.5
1
1.5
∆ψ (s1 , s2 )
2
2.5
3
Figure 7.2: Simulated predistorter evaluation for a model with polynomial
degree n = 5 using the wcdma input signal (see Chapter 8). The signals are
generated using the dpd functions and the pa model. For an ideal pa, there
is no amplitude distortion, that is, the phase difference of the outphasing
signals is the same at the output and the input. The deviation from this ideal
phase difference for the (modeled, not predistorted) output signal ŷ is shown
in dotted green and the predistorted output signals ŷP in pink and blue.
The pink line shows the result using the analytical inversion as described
in Sections 7.2 and 7.4 and the dashed blue line shows the result of the ls
approach in Section 7.5, with predistorter degree nh = 5 in (7.19). The two
methods both perform very well in a large interval.
analytical solution, one can see an inversion error close to ∆ψ = π. This is a
consequence of the nonequal gain factors; Δψ = π should represent a complete
opposition of the two outphasing signals such that the output amplitude is zero.
If g1 ≠ g2, however, this is not possible and no phase combination of the two
outphasing signals will lead to a zero-amplitude output. A power amplifier with
a large dynamic range (dr, difference between the gain factors g1 and g2) will
have a very small distortion close to Δψ = π, whereas a pa with a small dr will
show this distortion in a larger region. The errors of the two methods when
compared to validation data are shown in Figure 7.4. Also in this plot, it can
be seen that both methods reduce the power amplifier distortion, and that the
analytical inversion performs slightly better.
Figure 7.3: Simulated predistorter evaluation for a model with polynomial
degree n = 5 using the wcdma input signal. The signals are generated using
the dpd functions and the pa model. The deviation from the ideal phase
for the (modeled, not predistorted) output outphasing signals ŷ1 and ŷ2 are
shown in green and the predistorted output signals ŷ1,P and ŷ2,P in pink
and blue. The pink lines show the results using the analytical inversion as
described in Sections 7.2 and 7.4 and the blue lines show the result of the ls
approach in Section 7.5. Branch one is plotted in solid lines and branch two
in dashed lines.
Figure 7.4: The upper plot shows the amplitude error, |s − ŷp |, and the lower
plot shows the phase error, arg(s) − arg(ŷp ), for the two dpd methods. The
analytical method is in pink and the ls in blue. As a comparison, the errors
for the original, unpredistorted signal y(t) are also plotted in green.
Figure 7.5: Simulated aclr at 5 MHz and 10 MHz offset with dpd (solid
line) and without (dashed line) for the wcdma signal.
Test 2 – Impact of ACLR on Predistorter Performance
As previously explained, the result of a limited dynamic range is that all amplitude and phase errors occurring outside the dr cannot be corrected. The signal
clipping in an outphasing pa occurs at small amplitudes, while in a conventional
linear pa, the peak amplitudes are clipped. Thus, the dr in an outphasing pa limits the spectral performance when amplifying modulated signals. To investigate
the performance limits of the predistorter, simulations have been done using two
amplifiers with a given dr (no phase distortion), with and without dpd. In Figure 7.5, the aclr over dr at 5 MHz and 10 MHz for the wcdma signal is plotted
with and without dpd. Here, the phase error between the outphasing signals is
assumed to be zero. For a pa with a dr of 25 dB, the differences in aclr between
the nonpredistorted and predistorted outputs are 8-13 dB. When the dr is 25 dB,
the optimal theoretical aclr is achieved after dpd. For a pa with 45 dB of dr,
the difference between when a dpd is used or not is negligible.
Summary
In this simulated evaluation, both dpd methods achieve an improvement compared to the original power amplifier output. The analytical inversion leads to
slightly better results at the cost of a higher computational complexity. The look-up table for the analytical dpd has 2 · 3142 elements (with precision pI = 0.001 in
Algorithm 1), and the polynomials contain 2 · 6 coefficients (nh = 5 in (7.19)). Using a higher polynomial degree could lead to improved results for the ls method,
and a smaller lut might lead to a small degradation of the analytical method.
As implementation issues are out of scope for this thesis, the methods are not
optimized for implementation and therefore these considerations have not been
pursued further.
7.7 Recursive Least-Squares and Least Mean Squares
Here, a few aspects of a possible future implementation of the dpd methods are
presented. In addition to the guaranteed convergence, least-squares formulations
also have the advantage that there are many efficient numerical methods for solving this type of problem. They can be solved recursively by, for example, the
recursive least-squares (rls) method [Björck, 1996], making them suitable for an
online implementation. An even less complex parameter estimation algorithm is
the least mean squares (lms) method, which can make use of the linear regression
structure of the optimization problem, developed here in (6.11) and (6.19). lms
has been used for rf pa linearization in Montoro et al. [2007] and implemented
in field programmable gate array (fpga) technology, as shown in Gilabert et al.
[2009].
With a recursive implementation of the algorithm, it is even more important
that the algorithm can be proved to converge to good values, as no monitoring of
the performance should be necessary in order for the method to be useful in practice. This also means that a nonconvex solution as in (6.12)-(6.13) is not suitable
for online implementation since it cannot guarantee convergence to good enough
minima. In an offline application, the possibility to restart the optimization could
be added but, together with the lack of a bound on the number of iterations, this
does not seem like a good solution for an online version. Using well-explored
methods like rls or lms would result in a low-complexity implementation, and
though it is hard to judge the exact complexity of the iterative implementation
that would be needed for the online version of the nonconvex solution, it is clear
that it would be very hard to find a simpler one than the low-complexity lms
version of the convex method.
Since circuitry will behave differently depending on the settings under which
it operates, it is important to be robust to such conditions. This is covered in
the concept of process, voltage and temperature variations (pvt variations). One
way to handle the pvt variations and changes in the setting, such as aging, would
be to use a method with a forgetting factor, reducing the influence of older measurements [Ljung, 1999]. The rls and lms solutions assume the changes in the
operating conditions to be slow.
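A minimal rls sketch with a forgetting factor, applied to the linear regression structure of (7.19), might look as follows; the regressors, true parameters and noise level are illustrative assumptions, not thesis values.

```python
import numpy as np

def rls(Phi, y, lam=0.999, delta=1e3):
    """Recursive least squares with forgetting factor lam; older samples are
    down-weighted geometrically, which handles slow operating-point changes."""
    n = Phi.shape[1]
    P = delta * np.eye(n)                      # inverse covariance initialization
    theta = np.zeros(n)
    for t in range(len(y)):
        phi = Phi[t]
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

rng = np.random.default_rng(4)
d = rng.uniform(0, 3, 2000)
Phi = np.vander(d, 3, increasing=True)         # regressors 1, d, d^2 as in (7.19)
theta_true = np.array([0.5, -0.3, 0.1])
y = Phi @ theta_true + 0.001 * rng.standard_normal(2000)
print(rls(Phi, y))                             # close to theta_true
```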
8 Predistortion Measurement Results
The models presented in Chapter 6 and the predistorters in Chapter 7 are based
on measured data from a power amplifier. In Chapter 7, the methods’ ability
to invert the nonlinearities was investigated, using a forward model as a “true”
system. In this chapter, the methods will be evaluated on real measurements.
The predistorters are applied to a new data set, validation data, that is not the
same as the signal used for estimation. To start off, a short introduction to the
signal types used and the measurement setup will be presented.
8.1 Signals Used for Evaluation
The predistortion methods have been evaluated for the different signal types
edge, wcdma and lte. Mobile communication technologies are often divided
into generations, and the new devices of today belong to the fourth generation, 4G.
The first generation, 1G, comprised the first analog mobile radio systems of the
1980s. 2G brought the first digital mobile systems, and 3G the first mobile systems
handling broadband data.
Enhanced data rates for gsm evolution (edge) is a mobile phone technology
with higher bit rates than general packet radio service (gprs) [Ahlin et al., 2006],
and has been called 2.75G since it did not quite reach the 3G standards. The
carrier frequency used is 2 GHz, and the bandwidth is 200 kHz. Wideband code
division multiple access (wcdma) is a third generation (3G) mobile phone technology, and is one of the 3G mobile communications standards [Frenzel, 2003].
The carrier frequency used is 2 GHz, and the bandwidth is 5 MHz. The bandwidth of the long term evolution (lte) signal is variable, and can be adjusted
between 1 and 20 MHz. It is sometimes called 4G or 3.9G since it does not completely satisfy the 4G requirements [Dahlman et al., 2011].
Figure 8.1: iq plots (imaginary part, q, versus real part, i) of signal realizations
of the edge, wcdma and lte standards in the complex plane. The sampling
frequency in the modeling data sets is four times higher than that shown
here.

Figure 8.2: Histograms of the distribution of the input amplitude of signal
realizations of the edge, wcdma and lte standards. This difference in input
distribution affects the peak-to-average power ratio, and it also implicitly
determines the weighting of the fit of the polynomials, see also the discussion
on polynomial fitting on page 79.

The wcdma and lte signals have a large peak-to-minimum power ratio, i.e., the pa
output signals include the minimum and maximum amplitudes (the full dynamic
range). For these signals, the dr of the pa will affect the output signal by clipping
the smallest amplitudes. For edge, the signal amplitude is never close enough to
zero to be affected by the pa dr, and no clipping will occur. Realizations of each
signal type (edge, wcdma and lte) are shown in Figure 8.1 as iq plots. Histograms of the distribution of the input amplitude are shown in Figure 8.2. The
distribution also implicitly determines the weighting of the fit of the polynomials, see also the discussion on polynomial fitting on page 79. One characteristic
of a signal is the peak-to-average power ratio (papr). A signal with a high papr
sets high demands on the linearity of the pa, since a large range of input signal
amplitudes has to be amplified.
The signals used are created as random signals with predefined characteristics.
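The papr figures quoted in this chapter can be computed directly from a complex baseband realization. A minimal numpy sketch (the test signals here are generic stand-ins, not the thesis's edge/wcdma/lte realizations):

```python
import numpy as np

def papr_db(s):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(s) ** 2                      # instantaneous power
    return 10 * np.log10(p.max() / p.mean())

# A constant-envelope tone has ~0 dB papr; a noise-like modulated signal
# has several dB, which is what stresses the pa linearity.
rng = np.random.default_rng(0)
tone = np.exp(1j * 2 * np.pi * 0.01 * np.arange(1000))
noise_like = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
print(papr_db(tone))        # ~0 dB
print(papr_db(noise_like))  # several dB
```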
Figure 8.3: Measurement setup for iq-data with two Master-Slave-configured SMBV signal generators [Rohde & Schwarz].
8.2 Measurement Setup
The measurements that will be discussed in Section 8.3 have been performed
using an SMU200A signal generator with two phase-coherent rf outputs and an
arbitrary waveform generator where the input signals (s1 (t) and s2 (t)) and the
predistorted input signals (s˜1 (t) and s˜2 (t)) were stored. For the measurements
that will be discussed in Section 8.4, two R&S SMBV100A signal generators with
phase-coherent rf outputs and arbitrary waveform generators with maximum
iq sample rate of 150 MHz have been used. Figure 8.3 shows the measurement
setup.
The outphasing power amplifiers used in the measurements have been developed by Jonas Fritzin et al. and are briefly described in Appendix A and in more
detail in Fritzin [2011].
Sampling
In the measurements in Section 8.4, the sampling rate was 92.16 MHz, six times
the original sampling frequency of the signal. The impact of baseband filtering and limited bandwidth is investigated in Gerhard and Knöchel [2005a,b],
where it was concluded that to obtain an optimal signal/distortion ratio over
the entire bandwidth, a compromise between the sampling frequency and the
filter characteristics has to be made. Here, we have evaluated the required bandwidth/sampling rate based on measurements with two signal generators and one
combiner; no pa was used. Increasing the sampling frequency from the original
15.36 MHz to 30.72 MHz and 61.44 MHz, the aclr is improved, see Table 8.1.
Thus, for the specific tests performed here, the aclr at 5 and 10 MHz can be
improved by 6-9 dB and 4-8 dB, respectively, when increasing the sampling rate
up to four times the original sampling rate of 15.36 MHz. Further increasing the
sampling frequency, up to 92.16 MHz, shows no significant change.
Table 8.1: Measured Spectral Performance at 1.95 GHz for wcdma and lte Uplink Signals for Different Sampling Frequencies.

          Measured Parameter     15.36 MHz   30.72 MHz   61.44 MHz
  wcdma   aclr @ 5 MHz [dBc]     -44         -50         -52
          aclr @ 10 MHz [dBc]    -48         -52         -56
  lte     aclr @ 5 MHz [dBc]     -34         -43         -46

8.3 Evaluation of Nonconvex Method
In this section, the nonconvex approach presented in Sections 6.2 and 7.3 has
been evaluated. The pa model has been obtained by minimizing the nonconvex
cost function in (6.13) and the corresponding dpd by minimizing (7.12). The
method involves solving two nonconvex optimization problems, and corresponds
to Method B in Section 4.1. This method has been evaluated on the pa described
in Appendix A.1 and Fritzin [2011].
The predistortion methods were evaluated on a physical chip. The measurement setup was optimized and the branch amplifiers were tuned to achieve the
best performance possible. The phase offset between s1 (t) and s2 (t) in the baseband was adjusted to minimize phase mismatch (ideally 180 ◦ between the two
rf inputs for nonmodulated s1 (t) and −s2 (t) in Figure A.2, i.e. maximum output power for a continuous signal). Since this is not a reasonable assumption in
a real-life application, an additional phase error of 3 ◦ was added in one of the
branches.
Measurements of input s(t) and output y(t) of length Nid were collected K
times, and an average was taken to avoid the influence of measurement noise.
This data was used to model the power amplifier. Based on this pa model, a predistorter model was produced. Polynomials of order n have been used as parameterized versions of the pa nonlinearities, and polynomials of order nh for the predistorter
functions. The predistorted input signals, s1,P and s2,P , were then computed (in
Matlab) for a validation input signal of length Nval . The predistorted outphasing input signals were sent to the pa, resulting in a predistorted output. The
additional phase error was still applied during the predistorter validation.
For the computation of the model parameters, a large number of algorithms
are available for solving a nonlinear optimization problem. Here, the Matlab
routine fminsearch, based on the Nelder-Mead simplex method, was used. The
estimation and validation data sets contain Nid and Nval samples, respectively.
The input and output sampling frequencies are denoted f s and f s,out , respectively.
To minimize the influence of measurement noise, the signals were measured K
times, and a mean was calculated. The data collection parameters are shown in
Table 8.2. Since wcdma is a more wideband signal than edge, the numbers of samples Nid and Nval were chosen larger.
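The same optimizer setup can be reproduced with SciPy, whose Nelder-Mead implementation corresponds to Matlab's fminsearch. The cost below is a made-up stand-in (fitting a simple polynomial nonlinearity), not the actual nonconvex cost (6.13):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in cost: fit y = a*u + b*u**3 by simplex search. The thesis's
# cost (6.13) involves the outphasing decomposition and is not reproduced.
u = np.linspace(0.0, 1.0, 200)
y = 1.0 * u - 0.2 * u ** 3          # "measured" output, noise-free here

def cost(theta):
    a, b = theta
    return np.sum((y - (a * u + b * u ** 3)) ** 2)

res = minimize(cost, x0=[0.5, 0.0], method="Nelder-Mead")
print(res.x)  # close to [1.0, -0.2]
```

Like fminsearch, Nelder-Mead needs no gradients, which is convenient for costs of this kind, but it gives no global-optimality guarantee, so the initial point matters.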
Table 8.2: Data Collection, Nonconvex Method

            edge        wcdma
  Nid       40 001      153 600
  Nval      80 001      153 600
  fs        8.67 MHz    61.44 MHz
  fs,out    34.68 MHz   61.44 MHz
  K         150         200
Table 8.3: Measured Spectral Performance of the edge Signal
(a) With no phase error and no dpd.
(b) For a 3° phase error and no dpd.
(c) When dpd is applied to (b).

  Freq.   Freq. offset   Spec.    Meas. (a)   Meas. (b)   Meas. (c)
  2 GHz   400 kHz        -54 dB   -54.4 dB    -53.5 dB    -65.9 dB
          600 kHz        -60 dB   -60.3 dB    -59.9 dB    -68.2 dB

8.3.1 Measured Performance of EDGE Signal
edge is a rather narrow-band signal with a peak-to-average power ratio (papr)
of 3.0 dB. The spectrum of the estimation input data set is shown in Figure 8.4(d).
The output of a perfectly matched pa in Figure 8.4(a) fulfills the requirements,
but without any margin to the spectral mask. The spectral mask is a linearity requirement that specifies how much power is allowed to spread into the neighboring channels. The requirements for an edge signal are summarized
in Table 5.1 and illustrated in Figure 8.4. As the phase error cannot be assumed
to be 0 ◦ in a transceiver, a phase error of 3 ◦ was added and led to a violated
spectral mask as in Figure 8.4(b).
When predistortion was applied to a validation data set, not used for estimation, the linearity improved, as seen in Figure 8.4(c). The pa model was of order
n = 5 and the predistorter of order nh = 5. The measured power at 400 and
600 kHz offsets were -65.9 and -68.2 dB, with margins of 11.9 and 8.2 dB, respectively. The average power at 2 GHz was +7 dBm with 22 % pae and root mean
square (rms) evm of 2 %. The measured performance of the amplifier for an
edge signal is summarized in Table 8.3.
8.3.2 Measured Performance of WCDMA Signal
The papr of the wcdma signal was 3.2 dB and the spectrum of the estimation
data set is shown in Figure 8.5(d). Figure 8.5(a) shows the measured wcdma spectrum at 2 GHz, with minimized phase mismatch and no predistortion. When the
same phase error of 3 ◦ as for the edge signal was added to simulate reasonable
phase settings, a distorted spectrum as in Figure 8.5(b) was measured. The aclr
is an integrated measure that describes the power spread to adjacent channels.
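The integrated-psd idea behind the aclr can be sketched in a few lines of numpy. Measurement instruments apply channel filtering and averaging, so this is only the principle, and the test signal is a crude band-limited stand-in for a wcdma channel:

```python
import numpy as np

def aclr_dbc(x, fs, bw, offset):
    """Adjacent-channel power (band of width bw centered at `offset`)
    relative to main-channel power, from a plain periodogram."""
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    p = np.abs(np.fft.fft(x)) ** 2
    def band(center):
        return p[np.abs(f - center) <= bw / 2].sum()
    return 10 * np.log10(band(offset) / band(0.0))

# White noise confined to +-1.92 MHz plus a -60 dB wideband noise floor.
rng = np.random.default_rng(1)
fs, n = 61.44e6, 1 << 16
f = np.fft.fftfreq(n, d=1.0 / fs)
w1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = np.fft.ifft(w1 * (np.abs(f) <= 1.92e6) + 1e-3 * w2)
acl = aclr_dbc(x, fs, bw=3.84e6, offset=5e6)
print(acl)  # about -60 dBc, set by the injected noise floor
```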
Figure 8.4: Measured edge spectrum at 2 GHz (relative spectral density for rbw = 30 kHz [dB] vs. offset from carrier frequency [MHz]). The spectral mask is marked in the figure.
(a) Output spectrum without phase error between s1(t) and s2(t).
(b) Output spectrum with 3° phase error between s1(t) and s2(t).
(c) Output spectrum when dpd is applied to (b).
(d) Output spectrum of the estimation signal. The spectrum of the validation signal was similar.
Table 8.4: Measured Spectral Performance of the wcdma Signal
(a) With no phase error and no dpd.
(b) For a 3° phase error and no dpd.
(c) When dpd is applied to (b).

  Freq.   aclr     Spec.     Meas. (a)   Meas. (b)   Meas. (c)
  1 GHz   5 MHz    -33 dBc   -40.6 dBc   -39.4 dBc   -53.6 dBc
          10 MHz   -43 dBc   -59.8 dBc   -56.2 dBc   -60.3 dBc
  2 GHz   5 MHz    -33 dBc   -43.4 dBc   -38.0 dBc   -50.2 dBc
          10 MHz   -43 dBc   -53.9 dBc   -50.9 dBc   -52.2 dBc
At 1 GHz and 2 GHz, the power amplifier fulfills the requirements, also with the
additional phase error, as seen in Table 8.4.
The phase predistortion method, with n = 5 and nh = 4, improves the measured aclr for a validation signal. A spectrum is shown in Figure 8.5(c). The
channel power at 2 GHz was +6.3 dBm with pae of 22 % and rms composite evm
of 1.4 % (0.6 % after dpd). The measured performance of the amplifier for a
wcdma signal is summarized in Table 8.4.
8.3.3 Summary
The nonconvex predistortion method clearly improves the pa performance for
both edge and wcdma signals, even when an extra phase error is added. The
measured spectral performance at 400 kHz offset and the aclr at 5 MHz are comparable to state-of-the-art edge [Mehta et al., 2010] and wcdma [Huang et al.,
2010] transmitters.
Figure 8.5: Measured wcdma spectrum at 2 GHz (relative spectral density for rbw = 30 kHz [dB] vs. offset from carrier frequency [MHz]).
(a) Output spectrum without phase error between s1(t) and s2(t).
(b) Output spectrum with 3° phase error between s1(t) and s2(t).
(c) Output spectrum when dpd is applied to (b).
(d) Output spectrum of the estimation signal. The spectrum of the validation signal was similar.
Table 8.5: Data Collection, Least-Squares and Analytical Method

           Nid       Nval      fs         fs,out     K
  wcdma    100 000   100 000   92.1 MHz   92.1 MHz   10
  lte      100 000   100 000   92.1 MHz   92.1 MHz   10

8.4 Evaluation of Least Squares PA and Analytical Inversion Method
In this section, the least-squares modeling of the pa, using the dr to estimate
g1 and g2 , has been applied. An analytical inversion has been used to construct
the predistorter functions, as in Method A in Section 4.1. The pa modeling is
described in Section 6.3, the dpd in Section 7.4 and the method is summarized
in Algorithm 1, page 91. This method has been evaluated on the pa described in
Appendix A.2 and Fritzin et al. [2011c].
The measurement setup was optimized and the branch amplifiers were tuned
to achieve the best performance possible. For the measurements without predistortion, the phase offset between s1 (t) and s2 (t) in the baseband was adjusted
to minimize phase mismatch (ideally 0 ◦ between nonmodulated s1 (t) and s2 (t),
that is, maximum output power for a continuous signal). Moreover, the iq-delay
between the signal generators was adjusted for optimal performance [Rohde &
Schwarz].
Measurements of input s(t) and output y(t) were collected K times, and an
average was taken to avoid the influence of measurement noise. This averaged
data set was used to model the pa, and based on the pa model, a predistorter
model was produced. Polynomials with order n have been used as parameterized
versions of the pa nonlinearities and based on this model, an approximation of
the ideal predistorter has been constructed. The predistorted input signals, s1,P
and s2,P , were then computed (in Matlab) for a validation input signal. The predistorted outphasing input signals were sent to the pa, resulting in a predistorted
output.
The estimation and validation data sets contain Nid and Nval samples, respectively. The input and output sampling frequencies are denoted f s and f s,out , respectively. The data collection parameters are shown in Table 8.5. In all following
experiments, the dpd estimates ĥk , k = 1, 2, have been calculated for 3142 uniformly distributed points (p I = 0.001 in Algorithm 1). This lut has been used
in the construction of the predistorted outphasing input signals. For each input
phase difference Δψ , the outphasing input signals s1 (t) and s2 (t) were adjusted
according to the nearest neighbor principle.
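The lut lookup can be sketched as follows. Reading the 3142 points as a uniform grid over [0, π] with step 0.001 rad is an assumption that matches the quoted count, and the stored correction values here are placeholders, not the estimated ĥk:

```python
import numpy as np

step = 0.001
grid = np.arange(0.0, np.pi, step)      # 3142 phase-difference points
lut = 0.05 * np.sin(grid)               # placeholder corrections, not h-hat

def lut_lookup(dpsi):
    """Nearest-neighbor lookup: round the phase difference to the grid."""
    idx = np.clip(np.round(dpsi / step).astype(int), 0, len(grid) - 1)
    return lut[idx]

print(len(grid))                               # 3142
print(lut_lookup(np.array([0.0004, 1.57])))    # entries 0 and 1570
```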
Figure 8.6: Measured wcdma spectrum at 1.95 GHz.
(a) Measured wcdma spectrum without dpd. The measured aclr is printed
in gray.
(b) When dpd is applied to (a). The measured aclr is printed in black.
(c) Spectrum of estimation signal. Spectrum of validation signal was similar.
8.4.1 Measured Performance of WCDMA Signal
The papr of the wcdma uplink signal was 3.5 dB. The spectrum of the estimation data is shown in Figure 8.6(c). For the wcdma signal at 1.95 GHz without
predistortion, the measured aclr at 5 MHz and 10 MHz offsets were -35.5 dBc
and -48.1 dBc, respectively. The spectrum is shown in Figure 8.6(a). The estimation output data y(t) were used in the predistortion method to extract the model
parameters, with n = 5. The aclr is a measure of the leakage into adjacent channels, and the wcdma requirements are -33 dBc and -43 dBc at 5 MHz and 10 MHz offsets, respectively.
The predistorted input signals, s1,P (t) and s2,P (t), were computed for the validation input signal, resulting in an output spectrum as shown in Figure 8.6(b).
The power spectral density of the predistorted input is similar to that of the
nonpredistorted input signal, and therefore not included (similarly for the lte
signal). With predistortion, the measured aclr at 5 MHz and 10 MHz offsets
were -46.3 dBc and -55.6 dBc, respectively. Thus, the measured aclr at 5 MHz
and at 10 MHz offsets were improved by 10.8 dB and 7.5 dB, respectively. The
average power at 1.95 GHz was +26.0 dBm with 16.5 % pae. It is clear that the
predistortion reduces the spectral leakage.
Figure 8.7 shows the measured am-am (output amplitude vs. input amplitude) and am-pm (phase change vs. input amplitude) characteristics with and
without dpd for the wcdma signal. The upper figure shows the amplitude modulation, and should ideally be a straight line from the lower left corner (0,0) to the
upper right (1,1), such that the output amplitude equals the input amplitude for
the whole range of the signal. If this is not the case, there will be amplitude distortions. Here, the improvement can be seen in normalized amplitudes smaller
than 0.4. The lower plot shows the phase distortion, and the ideal is zero. It can
be seen that the dpd reduces the phase distortion for normalized amplitudes in
the range 0.05 ≤ |s| ≤ 0.95. For amplitudes close to one, the distortion is slightly
worse with a predistorter than without. This is due to the polynomial fit of the pa
model, which has a best fit in the middle region where the density of data points
is largest.
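These curves are obtained directly from aligned complex baseband input/output data. A sketch with a made-up memoryless pa (mild compression plus amplitude-dependent phase), not the measured amplifier:

```python
import numpy as np

def am_am_pm(s_in, s_out):
    """AM-AM and AM-PM data from aligned complex baseband input/output:
    (input amplitude, output amplitude, phase shift in degrees)."""
    a_in = np.abs(s_in)
    a_out = np.abs(s_out)
    phase = np.degrees(np.angle(s_out * np.conj(s_in)))
    return a_in, a_out, phase

# Toy memoryless pa: cubic compression and quadratic am-pm.
rng = np.random.default_rng(2)
s = rng.uniform(0, 1, 2000) * np.exp(1j * rng.uniform(0, 2 * np.pi, 2000))
y = (np.abs(s) - 0.1 * np.abs(s) ** 3) * np.exp(1j * (np.angle(s) + 0.2 * np.abs(s) ** 2))
a_in, a_out, phase = am_am_pm(s, y)
# Sorting by a_in and plotting a_out and phase gives the two curves.
```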
8.4.2 Measured Performance of LTE Signal
The papr of the lte uplink signal was 6.2 dB and the spectrum of the estimation
data set is shown in Figure 8.8(c). For the lte signal at 1.95 GHz without predistortion, the measured aclr at 5 MHz offset was -34.1 dBc. The spectrum is shown
in Figure 8.8(a). The estimation output data y(t) were used in the predistortion
method to extract the model parameters with n = 5. The predistorted input signals, s1,P (t) and s2,P (t), were computed for the validation input signal, resulting
in an output spectrum as shown in Figure 8.8(b). In the predistorted spectrum, a small asymmetry can be observed, which was expected
due to the asymmetrical frequency spectrum of the reference signal. With predistortion, the measured aclr at 5 MHz offset was -43.5 dBc. Thus, the measured
aclr at 5 MHz offset was improved by 9.4 dB. The average power at 1.95 GHz
was +23.3 dBm with 8.0 % pae.
Figure 8.9 shows the measured am-am and am-pm characteristics with and
without dpd for the lte signal. The amplitude mapping in the upper figure
should ideally be a straight line from the lower left corner to the upper right
one, and the bottom figure should ideally be zero everywhere. The figure shows that the amplitude and phase errors are significantly reduced for small amplitudes, with a normalized amplitude |s| ≤ 0.4.
Figure 8.7: (a) Measured am-am characteristics (output amplitude vs. input
amplitude) with dpd (black) and without dpd (gray) for wcdma signal. (b)
Measured am-pm characteristics (phase change vs. input amplitude) with
dpd (black) and without dpd (gray) for wcdma signal.
Figure 8.8: Measured lte spectrum at 1.95 GHz.
(a) Measured lte spectrum without dpd. The measured aclr is printed in
gray.
(b) When dpd is applied to (a). The measured aclr is printed in black.
(c) Spectrum of estimation signal. Spectrum of validation signal was similar.
Figure 8.9: (a) Measured am-am characteristics (output amplitude vs. input
amplitude) with dpd (black) and without dpd (gray) for lte signal. (b) Measured am-pm characteristics (phase change vs. input amplitude) with dpd
(black) and without dpd (gray) for lte signal.
Figure 8.10: Measured aclr depending on the polynomial degree n of the
pa model. Degree n = 0 represents the performance without predistortion.
The nonlinear modeling and predistortion clearly improve the performance by reducing the aclr.
8.4.3 Evaluation of Polynomial Degree
A small evaluation of the impact of polynomial degree in the pa model has been
performed, and the result is presented in Figure 8.10. It is clear that the added
nonlinear terms improve the aclr and reduce the spectral leakage. Polynomials with orders above n = 5 did not further improve the results significantly. A
discussion on the impact of the choice of data points used in the ls problem can
be found in Section 6.4 on page 79.
8.4.4 Summary
The measured performance of the pa for modulated signals is summarized in
Table 8.6. The table shows measured aclr with dpd, without dpd, and the
required (Req) aclr for the wcdma [3GP] and the lte [3GPP] standards. In
measurements at 1.95 GHz, the dpd proved to be successful and improved the
wcdma aclr at 5 MHz and 10 MHz offsets by 10.8 dB and 7.5 dB, respectively.
The lte aclr at 5 MHz offset was improved by 9.4 dB. Thus, the predistortion
method improves the measured aclr to have at least 12.6 dB of margin to the
requirements [3GP, 3GPP]. The measured aclr at 5 MHz is comparable to state-of-the-art wcdma transceivers [Huang et al., 2010].
To compare the dpd performance to the achievable aclr, a small simulation
study has been performed. Assuming a pa with 35 dB of dynamic range (neglecting phase distortions), i.e. assuming g1 = 0.509 and g2 = 0.491, and a polynomial
degree of n = 5, the computed achievable aclr at 5 MHz and 10 MHz is ∼3 dB
better compared to the measurements with the wcdma signal. Similarly, the computed achievable aclr at 5 MHz is ∼2 dB better compared to the measurements
with the lte signal.
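The quoted 35 dB figure is consistent with reading the pa's maximum output as proportional to g1 + g2 (branches in phase) and its minimum as proportional to g1 − g2 (imperfect cancellation in anti-phase); that reading is an assumption here, not a relation stated on this page:

```python
import numpy as np

# Assumed relation: dr = 20*log10((g1 + g2) / (g1 - g2)), from max output
# |g1 + g2| (branches in phase) and min output |g1 - g2| (anti-phase).
g1, g2 = 0.509, 0.491
dr_db = 20 * np.log10((g1 + g2) / (g1 - g2))
print(round(dr_db, 1))  # 34.9, i.e. the quoted ~35 dB of dynamic range
```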
Table 8.6: Measured Spectral Performance at 1.95 GHz for wcdma and lte Uplink Signals with Predistortion (using n = 5) and without.

          Measured Parameter     Req   Without dpd   With dpd
  wcdma   aclr @ 5 MHz [dBc]     -33   -35.5         -46.3
          aclr @ 10 MHz [dBc]    -43   -48.1         -55.6
  lte     aclr @ 5 MHz [dBc]     -30   -34.1         -43.5
As discussed in Section 6.4 on page 79, the polynomial fit is best in the middle, and in the intervals where there are the most data points. For the signals in this thesis, that is in the center of the interval; see Figure 8.2 for the distribution of the different signal types used. As seen in Figures 8.7 and 8.9, this is where the predistorter improves the performance. The predistorter is based on inversion of the
pa models estimated using least squares. Since the inversion is almost perfect,
see Figure 7.2 for the analytical inversion, the misfit at the smallest and largest
input amplitudes can be assumed to be correlated with the polynomial fit of the
pa model. The nonlinearity functions can be compared for different signal types,
and though the overall appearance is very similar, a small shift can be seen, such
that the fit has been adapted to the signal type. That is, for an lte signal, the
functions fˆk differ a bit from the ones estimated for a wcdma signal. This can be
seen for lower amplitudes in particular, where the lte signal has a higher signal
density than the wcdma.
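This density weighting is easy to demonstrate: fitting the same saturating function by least squares, with samples drawn from two different amplitude densities, gives different polynomial coefficients. The nonlinearity below is a stand-in, not the estimated fk:

```python
import numpy as np

def f(a):
    return np.tanh(2 * a)               # stand-in saturating nonlinearity

rng = np.random.default_rng(3)
a_low = rng.beta(2, 5, 5000)            # density concentrated at low amplitudes
a_high = rng.beta(5, 2, 5000)           # density concentrated at high amplitudes

# Ordinary least squares weights the fit by where the samples lie, so the
# two coefficient vectors differ even though the target function is the same.
p_low = np.polyfit(a_low, f(a_low), 5)
p_high = np.polyfit(a_high, f(a_high), 5)
print(np.max(np.abs(p_low - p_high)))   # nonzero: the fits have shifted
```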
9 Concluding Remarks
In this chapter, conclusions and some discussions on possible research ideas for
the future are provided.
9.1 Conclusions
In this thesis, some different aspects concerning the estimation of inverse models
have been discussed.
In system identification, the model should typically be estimated in the setting in which it will be used. This idea has been further investigated for the
inverse model estimation, where different approaches are used in applications.
Here, an inverse model has been estimated with the purpose of using it in cascade with the system itself, as an inverter. A good inverse model in this setting
would be one that, when used in series with the original system, reconstructs
the original input. This problem has been treated in applications, such as power
amplifier predistortion, but theoretical insights as to how it should be done have
been lacking. Different methods can lead to good results, but it can be shown
that the characteristics captured by the methods differ. It is important to know
how the choice of method affects the inverse model. For a noise-free linear time-invariant system, it is shown here that the weighting of the identification will be
adjusted to better reflect this intended use when the inverse is estimated directly
instead of based on a forward model. This has also been illustrated by a small
example.
Inverse systems are used in many applications, and here, outphasing power
amplifier predistortion has been investigated. The goal is to obtain a predistorter
that counteracts the nonlinearities introduced by the amplifier. In outphasing
pas, the signal is decomposed into two branches, so that highly efficient, nonlinear amplifiers can be used. This structure makes it hard to use conventional
predistortion methods, but enables a theoretical description of the outphasing
pa and the matching ideal predistorter. Here, a first method based on two nonconvex optimization problems has been further developed using the structure of
the outphasing amplifier. The improved method basically consists of two least-squares problems and an analytic inversion, and can be adapted to online implementation. It has been shown that the methods reduce the nonlinearities and the
leakage into adjacent channels.
9.2 Open Questions
For amplifier predistortion, an interesting extension to the methods presented
here is to include dynamics. In this thesis, two different approaches have been
mentioned that did not improve the modeling results. Since nonlinear systems are “everything that is not linear”, there are many ways to include nonlinear dynamics, and the fact that one of them does not work (improve the modeling) does not mean that some other way will not. Even though the measurements did not indicate
a large dynamic influence, extending the method to include possible dynamics
would extend the field of application.
Since the measurements were performed in a rather ideal setup and then averaged over multiple realizations, the noise influence was minor, but the influence
under less ideal conditions could be evaluated. Currently, the noise has a large impact
in the normalization and the estimation of gain factors in the two branches, since
these depend on only one and two measurements, respectively. This could be
made more robust by looking at multiple measurements.
The construction of a least-squares method is one step towards a possible online implementation, but further adaptation could also be done. This includes the
choice of whether the dpd should be implemented as a polynomial or in a lut
solution. Furthermore, if the method can be allowed to use some calibration time
to adapt the parameters to the device at hand, this calls for another solution than
if the method has to find the parameters during operation. However, these are mainly hardware implementation questions.
Concerning the research on estimation of inverse systems, only a small first
investigation has been performed, and many open questions remain. These include a more thorough analysis of the properties of the estimators, as well as the
noise influence in the different approaches. An extension to the nonlinear case would also be an interesting but challenging topic.
A Power Amplifier Implementation
The outphasing power amplifiers used for the measurements presented in Chapter 8 and the power amplifier modeling in Chapter 6 have been constructed by
Jonas Fritzin, Christer Svensson and Atila Alvandpour at the Division of Electronic Devices, Linköping University, Linköping, Sweden. The results and pictures in this chapter are all measured by them and are reproduced with the authors’ permission for the sake of completeness.
As described in Section 5.2, a power amplifier can be characterized by different measures, such as the efficiency and the gain. For the pa beginner, a quick
review of these concepts and the others in Section 5.2 could be useful. See also
the Glossary in the preamble (page xvi).
The power amplifiers are of outphasing-type. The amplifier in each branch is
a Class D amplifier, based on inverters, that switches between VDD and GND.
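The branch signals follow from the standard outphasing (linc) decomposition; a sketch of that textbook relation (the thesis's own formulation, with outphasing angle ϕ = 2Δψ, is given in Chapter 5 and may differ in details):

```python
import numpy as np

def outphasing_decompose(s, a_max=1.0):
    """Split s into two constant-envelope components with s = s1 + s2
    (standard linc decomposition; requires |s| <= a_max)."""
    theta = np.angle(s)
    # cos(dpsi) = |s| / a_max: the branches open up as the amplitude shrinks.
    dpsi = np.arccos(np.clip(np.abs(s) / a_max, 0.0, 1.0))
    s1 = (a_max / 2) * np.exp(1j * (theta + dpsi))
    s2 = (a_max / 2) * np.exp(1j * (theta - dpsi))
    return s1, s2

rng = np.random.default_rng(4)
s = rng.uniform(0, 1, 1000) * np.exp(1j * rng.uniform(0, 2 * np.pi, 1000))
s1, s2 = outphasing_decompose(s)
print(np.allclose(np.abs(s1), 0.5), np.allclose(s1 + s2, s))  # True True
```

Each branch has constant envelope a_max/2, so it can be amplified by a switched Class D stage; linearity is recovered only when the branches are combined.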
A.1 +10.3 dBm Class-D Outphasing RF Amplifier in 90 nm CMOS
The chip used for validation of the nonconvex method in Section 8.3 can be seen
in the chip photo in Figure A.1 and the sketch in Figure A.2. The pa is a Class D
outphasing amplifier with an inverter-based output stage and an on-chip transformer as power combiner. More specifics can be found in Fritzin [2011] and
Fritzin et al. [2011a].
Figure A.3a shows the measured maximum output power (Pout ), the drain
efficiency (de) and the power-added efficiency (pae) over frequency for the power
amplifier. VDD and Vbias were 1.3 V and 0.65 V, respectively. The 3 dB bandwidth
was 2 GHz (1-3 GHz). The output power at 2 GHz was +10.3 dBm with de and
pae of 39 % and 33 %, respectively, with a gain of 23 dB from the buffers to the
Figure A.1: Photo of the chip with size 1x1 mm2.
Figure A.2: Implemented outphasing amplifier with inverters in the output
stage.
Figure A.3: (a) Measured output power (Pout ), de and pae over frequency.
(b) Measured maximum output power, Pout,max , minimum output power,
Pout,min , and dynamic range, dr, over frequency.
output. The minimum and maximum output power and dr of the pa are plotted
in Figure A.3b, where Pout,max = Pout in Figure A.3a.
A.2 +30 dBm Class-D Outphasing RF Amplifier in 65 nm CMOS
The pa used for validation in Section 8.4 is described in more detail in Fritzin et al.
[2011c], but some basic characteristics can be found here. The chip photo can be
seen in Figure A.4. Figure A.5 shows the outphasing pa, based on a Class D amplifier stage utilizing a cascode configuration illustrated in Figure A.6a. This configuration improves the life-time of the transistors by achieving a low on-resistance
in the on-state and distributing the voltage stress in the off-state, which ensures that the root mean square (rms) electric field across the gate oxide is kept low.
The output stage is driven by an ac-coupled low-voltage driver operating at 1.3 V,
VDD1 , to allow a 5.5 V, VDD2 , supply without excessive device voltage stress as discussed in Fritzin et al. [2011b] and Fritzin et al. [2011c]. The chip was attached
Figure A.4: Photo of the chip with size 2.5x1.0 mm2. The photo has the same
orientation as the simplified PA schematic in Figure A.5.
Figure A.5: The implemented Class-D outphasing RF PA using two transformers to combine the outputs of four amplifier stages.
to an FR4 PCB and connected with bond-wires.
The measured output power, drain efficiency and power-added efficiency over
frequency and outphasing angle, ϕ in (5.11) (where ϕ = 2Δψ ), for VDD1 = 1.3 V
and VDD2 = 5.5 V are shown in Figure A.7. The output power at 1.95 GHz was
+29.7 dBm with a pae of 26.6 % (including all drivers). The pa had a peak to
minimum power ratio of ∼35 dB and the gain was 26 dB from the drivers to the
output. The dc power consumption of the smallest drivers was considered as
input power.
Figure A.6: (a) The Class-D stage used in the outphasing PA [Fritzin et al., 2011c]. C1-C4 are MIM capacitors. (b) Off-chip biasing resistors, R and Ri.
Figure A.7: Measured Pout , de and pae for VDD1 = 1.3 V and VDD2 =
5.5 V [Fritzin et al., 2011c]:
(a) over carrier frequency.
(b) over outphasing angle, ϕ, at 1.95 GHz.
(c) Measured Pout , de and pae over VDD2 for VDD1 = 1.3 V at 1.95 GHz.
Bibliography
3GP. TS 25.101 v10.2.0 (2011-06). 3rd Generation Partnership Project; Technical
specification group radio access network; user equipment (UE) radio transmission and reception (FDD), Release 10. Cited on page 115.
3GPP. TS 36.101 v10.3.0 (2011-06). 3rd Generation Partnership Project; Technical specification group radio access network; evolved universal terrestrial
radio access (E-UTRA); user equipment (UE) radio transmission and reception,
Release 10. Cited on page 115.
Emad Abd-Elrady, Li Gan, and Gernot Kubin. Direct and indirect learning methods for adaptive predistortion of IIR Hammerstein systems. Elektrotechnik &
Informationstechnik, 125(4):126–131, April 2008. Cited on pages 24 and 35.
Agilent. Agilent PN 89400-14, using error vector magnitude measurements to analyze and troubleshoot vector-modulated signals - Product Note, 2000. http://cp.literature.agilent.com/litweb/pdf/5965-2898e.pdf. Accessed January, 2013. Cited on page 54.
Lars Ahlin, Jens Zander, and Ben Slimane. Principles of Wireless Communications. Studentlitteratur, 2006. ISBN 91-44-03080-0. Cited on page 101.
Shoaib Amin, Efrain Zenteno, Per N. Landin, Daniel Rönnow, Magnus Isaksson,
and Peter Händel. Noise impact on the identification of digital predistorter
parameters in the indirect learning architecture. In Swedish Communication
Technologies Workshop (SWE-CTW), pages 36–39, Lund, Sweden, October
2012. Cited on page 35.
Anritsu. Adjacent channel power ratio (ACPR) - Application Note, Rev. A. February 2001. http://www.us.anritsu.com/downloads/files/11410-00264.pdf, accessed January, 2013. Cited on page 53.
Karl J. Åström and Pieter Eykhoff. System identification - a survey. Automatica,
7:123–162, 1971. Cited on page 37.
Karl J. Åström and Tore Hägglund. Advanced PID Control. ISA - Instrumentation,
Systems, and Automation Society, Second edition, 2005. ISBN 1-55617-942-1.
Cited on pages 18 and 35.
Ahmed Birafane and Ammar B. Kouki. Phase-only predistortion for LINC amplifiers with Chireix-outphasing combiners. IEEE Transactions on Microwave
Theory and Techniques, 53(6):2240–2250, June 2005. Cited on pages 60
and 63.
Ahmed Birafane, Mohamed El-Asmar, Ammar B. Kouki, Mohamed Helaoui, and
Fadhel M. Ghannouchi. Analyzing LINC systems. IEEE Microwave Magazine,
11(5):59–71, August 2010. Cited on page 60.
Åke Björck. Numerical Methods for Least Squares Problems. SIAM, 1996. Cited
on pages 14 and 99.
Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004. Cited on page 82.
Claudia Califano, Salvatore Monaco, and Dorothé Normand-Cyrot. On the
discrete-time normal form. IEEE Transactions on Automatic Control, 43(11):
1654–1658, November 1998. Cited on page 22.
Tsan-Wen Chen, Ping-Yuan Tsai, Jui-Yuan Yu, and Chen-Yi Lee. A sub-mW all-digital signal component separator with branch mismatch compensation for
OFDM LINC transmitters. IEEE Journal of Solid-State Circuits, 46(11):2514–
2523, November 2011. Cited on page 63.
Henri Chireix. High power outphasing modulation. Proceedings of the IRE, 23(11):1370–1392, November 1935. Cited on page 58.
Donald C. Cox. Linear amplification with nonlinear components. IEEE Transactions on Communications, COM-22(12):1942–1945, December 1974. Cited on page
58.
Steve C. Cripps. RF Power Amplifiers for Wireless Communications. Artech
House, Second edition, 2006. ISBN 1-59693-018-7. Cited on pages 52 and 54.
Erik Dahlman, Stefan Parkvall, and Johan Sköld. 4G LTE/LTE-Advanced for Mobile Broadband. Elsevier, 2011. ISBN 978-0-12-385489-6. Cited on page 101.
Germund Dahlquist and Åke Björck. Numerical Methods in Scientific Computing, Vol I. Siam, 2008. ISBN 978-0-898716-44. Cited on page 79.
Santosh Devasia. Technical notes and correspondence - Should model-based inverse inputs be used as feedforward under plant uncertainty? IEEE Transactions on Automatic Control, 47(11):1865–1871, November 2002. Cited on page
35.
Norman R. Draper and Harry Smith. Applied Regression Analysis. John Wiley &
Sons, Third edition, 1998. ISBN 0-471-17082-8. Cited on page 14.
Louis E. Frenzel. Principles of Electronic Communication Systems. McGraw-Hill, Second edition, 2003. ISBN 0-07-828131-8. Cited on pages 48, 51, 55, 58,
and 101.
Jonas Fritzin. CMOS RF Power Amplifiers for Wireless Communications.
Linköping Studies in Science and Technology. Dissertations. No 1399,
Linköping University, Linköping, Sweden, SE-581 83 Linköping, Sweden,
November 2011. Cited on pages 53, 60, 61, 103, 104, and 119.
Jonas Fritzin, Ylva Jung, Per N. Landin, Peter Händel, Martin Enqvist, and Atila
Alvandpour. Phase predistortion of a Class-D outphasing RF amplifier in 90nm
CMOS. IEEE Transactions on Circuits and Systems-II: Express Briefs, 58(10):
642–646, October 2011a. Cited on pages 17, 35, 64, and 119.
Jonas Fritzin, Christer Svensson, and Atila Alvandpour. A +32dBm 1.85GHz
Class-D outphasing RF PA in 130nm CMOS for WCDMA/LTE. In IEEE European Solid-State Circuits Conference (ESSCIRC), pages 127–130, Helsinki,
Finland, September 2011b. Cited on page 121.
Jonas Fritzin, Christer Svensson, and Atila Alvandpour. A wideband fully integrated +30dBm Class-D outphasing RF PA in 65nm CMOS. In IEEE International Symposium on Integrated Circuits (ISIC), pages 25–28, Singapore, Singapore, December 2011c. Cited on pages 109, 121, 123, and 124.
Walter Gerhard and Reinhard Knöchel. Prediction of bandwidth requirements
for a digitally based WCDMA phase modulated outphasing transmitter. In
The European Conference on Wireless Technology, pages 97–100, Paris, France,
October 2005a. Cited on page 103.
Walter Gerhard and Reinhard Knöchel. LINC digital component separator for
single and multicarrier W-CDMA signals. IEEE Transactions on Microwave
Theory and Techniques, 53(1):274–282, January 2005b. Cited on page 103.
Michel Gevers and Lennart Ljung. Optimal experimental designs, with respect to
the intended model application. Automatica, 22(5):543–554, September 1986.
Cited on page 33.
Pere L. Gilabert, Daniel D. Silveira, Gabriel Montoro, and Gottfried Magerl. RF power amplifier modeling and predistortion based on a modular approach. In
European Microwave Integrated Circuits Conference, pages 265–268, Manchester, UK, September 2006. Cited on page 63.
Pere L. Gilabert, Eduard Bertran, Gabriel Montoro, and Jordi Berenguer. FPGA
implementation of an LMS-based real-time adaptive predistorter for power
amplifiers. In Joint IEEE North-East Workshop on Circuits and Systems and TAISA Conference (NEWCAS-TAISA '09), Toulouse, France, June-July
2009. Cited on page 99.
Lei Guan and Anding Zhu. Low-cost FPGA implementation of Volterra series-based digital predistorter for RF power amplifiers. IEEE Transactions on Microwave Theory and Techniques, 58(4):866–872, April 2010. Cited on pages 61
and 62.
Mohamed Helaoui, Slim Boumaiza, and Fadhel M. Ghannouchi. On the outphasing power amplifier nonlinearity analysis and correction using digital predistortion technique. In IEEE Radio and Wireless Symposium (RWS), pages 751–
754, Orlando, FL, USA, January 2008. Cited on page 63.
Ronald M. Hirschorn. Invertibility of multivariable nonlinear control systems.
IEEE Transactions on Automatic Control, 24(6):855–865, December 1979.
Cited on page 27.
Qiuting Huang, Jürgen Rogin, Xinhua Chen, David Tschopp, Thomas Burger,
Thomas Christen, Dimitris Papadopoulos, Ilian Kouchev, Chiara Martelli,
and Thomas Dellsperger. A tri-band SAW-less WCDMA/HSPA RF CMOS
transceiver, with on-chip DC-DC converter connectable to battery. In International Solid-State Circuits Conference Digest of Technical Papers (ISSCC),
pages 60–61, San Francisco, CA, USA, February 2010. Cited on pages 107
and 115.
Richard C. Jaeger and Travis N. Blalock. Microelectronic Circuit Design. McGraw-Hill, Third edition, 2008. ISBN 978-0-07-110203-2. Cited on pages 55 and 57.
Ylva Jung and Martin Enqvist. Estimating models of inverse systems. In 52nd
IEEE Conference on Decision and Control (CDC), Florence, Italy, to appear,
December 2013. Cited on page 34.
Ylva Jung, Jonas Fritzin, Martin Enqvist, and Atila Alvandpour. Least-squares
phase predistortion of a +30dBm Class-D outphasing RF PA in 65nm CMOS.
IEEE Transactions on Circuits and Systems-I: Regular papers, 60(7):1915–1928,
July 2013. Cited on pages 64 and 69.
Peter B. Kenington. High-Linearity RF Amplifier Design. Artech House, 2000.
ISBN 1-58053-143-1. Cited on page 61.
Per N. Landin, Jonas Fritzin, Wendy Van Moer, Magnus Isaksson, and Atila Alvandpour. Modeling and digital predistortion of Class-D outphasing RF power
amplifiers. IEEE Transactions on Microwave Theory and Techniques, 60(6):
1907–1915, June 2012. Cited on pages 64 and 69.
Lennart Ljung. System Identification, Theory for the User. Prentice Hall PTR,
Second edition, 1999. ISBN 0-13-656695-2. Cited on pages 10, 11, 12, 15, 33,
34, 36, 68, 69, 71, and 99.
Lennart Ljung. System Identification Toolbox, User's Guide. MathWorks, Sixth
edition, 2003. Cited on page 38.
Aarne Mämmelä. Commutation in linear and nonlinear systems. Frequenz, 60
(5-6):92–94, June 2006. Cited on page 23.
Ola Markusson. Model and System Inversion with Applications in Nonlinear
System Identification and Control. TRITA-S3-REG-0201, Royal Institute of
Technology, Stockholm, Sweden, SE-100 44 Stockholm, Sweden, 2001. Cited
on pages 20, 22, 26, and 27.
Jaimin Mehta, Vasile Zoicas, Oren Eliezer, R. Bogdan Staszewski, Sameh Rezeq,
Mitch Entezari, and Poras Balsara. An efficient linearization scheme for a digital polar EDGE transmitter. IEEE Transactions on Circuits and Systems-II:
Express Briefs, 57(3):193–197, March 2010. Cited on page 107.
Shervin Moloudi, Koji Takanami, Michael Youssef, Mohyee Mikhemar, and Asad
Abidi. An outphasing power amplifier for software-defined radio transmitter. In International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pages 568–569, San Francisco, CA, USA, February 2008. Cited
on page 63.
Gabriel Montoro, Pere L. Gilabert, Eduard Bertran, Albert Cesari, and José A.
Garcia. An LMS-based adaptive predistorter for cancelling nonlinear memory
effects in RF power amplifiers. In Microwave Conference, 2007. APMC 2007.
Asia-Pacific, Bangkok, Thailand, December 2007. Cited on page 99.
Kevin L. Moore. Iterative Learning Control for Deterministic Systems. Springer
Verlag, 1993. ISBN 978-1-4471-1914-2. Cited on page 20.
Seong-Sik Myoung, Il-Kyoo Lee, Jong-Gwan Yook, Kyutae Lim, and Joy Laskar.
Mismatch detection and compensation method for the LINC system using a
closed-form expression. IEEE Transactions on Microwave Theory and Techniques, 56(12):3050–3057, December 2008. Cited on page 63.
Henna Paaso and Aarne Mämmelä. Comparison of direct learning and indirect learning predistortion architectures. In IEEE International Symposium on
Wireless Communication Systems (ISWCS), pages 309–313, Reykjavik, Iceland,
October 2008. Cited on pages 24 and 35.
Rik Pintelon and Johan Schoukens. System Identification - A Frequency Domain
Approach. IEEE Press and John Wiley & Sons, Second edition, 2012. ISBN
978-0-470-64037-1. Cited on pages 10, 33, and 34.
Behzad Razavi. RF Microelectronics. Prentice Hall, 1998. ISBN 0-13-887571-5.
Cited on pages 52 and 55.
Rohde & Schwarz. Application note, 1GP67: Phase adjustment of two MIMO
signal sources with option B90. Cited on pages 103 and 109.
Luca Romanò, Luigi Panseri, Carlo Samori, and Andrea L. Lacaita. Matching
requirements in LINC transmitters for OFDM signals. IEEE Transactions on
Circuits and Systems-I: Regular Papers, 53(7):1572–1578, July 2006. Cited on
pages 60 and 63.
Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill Book Co.,
Third edition, 1976. ISBN 0-07-085613-3. Cited on pages 68 and 89.
Wilson J. Rugh. Linear System Theory. Prentice-Hall, Second edition, 1996. ISBN
0-13-441205-2. Cited on page 18.
Shankar Sastry. Nonlinear Systems – Analysis, Stability and Control. Springer
Verlag, New York, 1999. ISBN 0-387-98513-1. Cited on pages 21 and 25.
Martin Schetzen. The Volterra and Wiener Theories of Nonlinear Systems. John
Wiley & Sons, New York, 1980. ISBN 0-471-04455-5. Cited on pages 25 and 26.
Torsten Söderström and Petre Stoica. System Identification. Prentice Hall, 1989.
ISBN 0-13-881236-5. Cited on page 10.
Michael Soudan and Christian Vogel. Correction structures for linear weakly
time-varying systems. IEEE Transactions on Circuits and Systems-I: Regular
papers, 59(9):2075–2084, September 2012. Cited on page 21.
Murali Tummala, Michael T. Donovan, Bruce E. Watkins, and Robert North.
Volterra series based modeling and compensation of nonlinearities in high
power amplifiers. In International Conference on Acoustics, Speech and Signal
Processing (ICASSP), pages 2417–2420, vol. 3, Munich, Germany, April 1997.
Cited on pages 25 and 62.
Johanna Wallén. Estimation-Based Iterative Learning Control. Linköping Studies in Science and Technology. Dissertations. No 1358, Linköping University,
Linköping, Sweden, SE-581 83 Linköping, Sweden, February 2011. Cited on
page 20.
Gaoming Xu, Taijun Liu, Yan Ye, and Tiefeng Xu. FPGA implementation of augmented Hammerstein predistorters for RF power amplifier linearization. In
Symposium on Microwave, Antenna, Propagation and EMC Technologies for
Wireless Communications, pages 481–484, Beijing, China, October 2009. Cited
on page 63.
Hongtao Xu, Yorgos Palaskas, Ashoke Ravi, and Krishnamurthy Soumyanath. A
highly linear 25dBm outphasing power amplifier in 32nm CMOS for WLAN application. In IEEE European Solid-State Circuits Conference (ESSCIRC), pages
306–309, Seville, Spain, September 2010. Cited on page 61.
Jingshi Yao and Stephen I. Long. Power amplifier selection for LINC application.
IEEE Transactions on Circuits and Systems-II: Express Briefs, 53(8):763–766,
August 2006. Cited on page 61.
Xuejun Zhang, Lawrence E. Larson, Peter M. Asbeck, and Peter Nanawa.
Gain/phase imbalance-minimization techniques for LINC transmitters. IEEE
Transactions on Microwave Theory and Techniques, 49(12):2507–2516, December
2001. Cited on page 63.
Anding Zhu, Paul J. Draxler, Jonmei J. Yan, Thomas J. Brazil, Donald F. Kimball,
and Peter M. Asbeck. Open-loop digital predistorter for RF power amplifiers
using dynamic deviation reduction-based Volterra series. IEEE Transactions
on Microwave Theory and Techniques, 56(7):1524–1534, July 2008. Cited on
pages 26 and 62.
Licentiate Theses
Division of Automatic Control
Linköping University
P. Andersson: Adaptive Forgetting through Multiple Models and Adaptive Control of Car
Dynamics. Thesis No. 15, 1983.
B. Wahlberg: On Model Simplification in System Identification. Thesis No. 47, 1985.
A. Isaksson: Identification of Time Varying Systems and Applications of System Identification to Signal Processing. Thesis No. 75, 1986.
G. Malmberg: A Study of Adaptive Control Missiles. Thesis No. 76, 1986.
S. Gunnarsson: On the Mean Square Error of Transfer Function Estimates with Applications to Control. Thesis No. 90, 1986.
M. Viberg: On the Adaptive Array Problem. Thesis No. 117, 1987.
K. Ståhl: On the Frequency Domain Analysis of Nonlinear Systems. Thesis No. 137, 1988.
A. Skeppstedt: Construction of Composite Models from Large Data-Sets. Thesis No. 149,
1988.
P. A. J. Nagy: MaMiS: A Programming Environment for Numeric/Symbolic Data Processing. Thesis No. 153, 1988.
K. Forsman: Applications of Constructive Algebra to Control Problems. Thesis No. 231,
1990.
I. Klein: Planning for a Class of Sequential Control Problems. Thesis No. 234, 1990.
F. Gustafsson: Optimal Segmentation of Linear Regression Parameters. Thesis No. 246,
1990.
H. Hjalmarsson: On Estimation of Model Quality in System Identification. Thesis No. 251,
1990.
S. Andersson: Sensor Array Processing; Application to Mobile Communication Systems
and Dimension Reduction. Thesis No. 255, 1990.
K. Wang Chen: Observability and Invertibility of Nonlinear Systems: A Differential Algebraic Approach. Thesis No. 282, 1991.
J. Sjöberg: Regularization Issues in Neural Network Models of Dynamical Systems. Thesis
No. 366, 1993.
P. Pucar: Segmentation of Laser Range Radar Images Using Hidden Markov Field Models.
Thesis No. 403, 1993.
H. Fortell: Volterra and Algebraic Approaches to the Zero Dynamics. Thesis No. 438,
1994.
T. McKelvey: On State-Space Models in System Identification. Thesis No. 447, 1994.
T. Andersson: Concepts and Algorithms for Non-Linear System Identifiability. Thesis
No. 448, 1994.
P. Lindskog: Algorithms and Tools for System Identification Using Prior Knowledge. Thesis No. 456, 1994.
J. Plantin: Algebraic Methods for Verification and Control of Discrete Event Dynamic
Systems. Thesis No. 501, 1995.
J. Gunnarsson: On Modeling of Discrete Event Dynamic Systems, Using Symbolic Algebraic Methods. Thesis No. 502, 1995.
A. Ericsson: Fast Power Control to Counteract Rayleigh Fading in Cellular Radio Systems.
Thesis No. 527, 1995.
M. Jirstrand: Algebraic Methods for Modeling and Design in Control. Thesis No. 540,
1996.
K. Edström: Simulation of Mode Switching Systems Using Switched Bond Graphs. Thesis
No. 586, 1996.
J. Palmqvist: On Integrity Monitoring of Integrated Navigation Systems. Thesis No. 600,
1997.
A. Stenman: Just-in-Time Models with Applications to Dynamical Systems. Thesis
No. 601, 1997.
M. Andersson: Experimental Design and Updating of Finite Element Models. Thesis
No. 611, 1997.
U. Forssell: Properties and Usage of Closed-Loop Identification Methods. Thesis No. 641,
1997.
M. Larsson: On Modeling and Diagnosis of Discrete Event Dynamic systems. Thesis
No. 648, 1997.
N. Bergman: Bayesian Inference in Terrain Navigation. Thesis No. 649, 1997.
V. Einarsson: On Verification of Switched Systems Using Abstractions. Thesis No. 705,
1998.
J. Blom, F. Gunnarsson: Power Control in Cellular Radio Systems. Thesis No. 706, 1998.
P. Spångéus: Hybrid Control using LP and LMI methods – Some Applications. Thesis
No. 724, 1998.
M. Norrlöf: On Analysis and Implementation of Iterative Learning Control. Thesis
No. 727, 1998.
A. Hagenblad: Aspects of the Identification of Wiener Models. Thesis No. 793, 1999.
F. Tjärnström: Quality Estimation of Approximate Models. Thesis No. 810, 2000.
C. Carlsson: Vehicle Size and Orientation Estimation Using Geometric Fitting. Thesis
No. 840, 2000.
J. Löfberg: Linear Model Predictive Control: Stability and Robustness. Thesis No. 866,
2001.
O. Härkegård: Flight Control Design Using Backstepping. Thesis No. 875, 2001.
J. Elbornsson: Equalization of Distortion in A/D Converters. Thesis No. 883, 2001.
J. Roll: Robust Verification and Identification of Piecewise Affine Systems. Thesis No. 899,
2001.
I. Lind: Regressor Selection in System Identification using ANOVA. Thesis No. 921, 2001.
R. Karlsson: Simulation Based Methods for Target Tracking. Thesis No. 930, 2002.
P.-J. Nordlund: Sequential Monte Carlo Filters and Integrated Navigation. Thesis No. 945,
2002.
M. Östring: Identification, Diagnosis, and Control of a Flexible Robot Arm. Thesis
No. 948, 2002.
C. Olsson: Active Engine Vibration Isolation using Feedback Control. Thesis No. 968,
2002.
J. Jansson: Tracking and Decision Making for Automotive Collision Avoidance. Thesis
No. 965, 2002.
N. Persson: Event Based Sampling with Application to Spectral Estimation. Thesis
No. 981, 2002.
D. Lindgren: Subspace Selection Techniques for Classification Problems. Thesis No. 995,
2002.
E. Geijer Lundin: Uplink Load in CDMA Cellular Systems. Thesis No. 1045, 2003.
M. Enqvist: Some Results on Linear Models of Nonlinear Systems. Thesis No. 1046, 2003.
T. Schön: On Computational Methods for Nonlinear Estimation. Thesis No. 1047, 2003.
F. Gunnarsson: On Modeling and Control of Network Queue Dynamics. Thesis No. 1048,
2003.
S. Björklund: A Survey and Comparison of Time-Delay Estimation Methods in Linear
Systems. Thesis No. 1061, 2003.
M. Gerdin: Parameter Estimation in Linear Descriptor Systems. Thesis No. 1085, 2004.
A. Eidehall: An Automotive Lane Guidance System. Thesis No. 1122, 2004.
E. Wernholt: On Multivariable and Nonlinear Identification of Industrial Robots. Thesis
No. 1131, 2004.
J. Gillberg: Methods for Frequency Domain Estimation of Continuous-Time Models. Thesis No. 1133, 2004.
G. Hendeby: Fundamental Estimation and Detection Limits in Linear Non-Gaussian Systems. Thesis No. 1199, 2005.
D. Axehill: Applications of Integer Quadratic Programming in Control and Communication. Thesis No. 1218, 2005.
J. Sjöberg: Some Results On Optimal Control for Nonlinear Descriptor Systems. Thesis
No. 1227, 2006.
D. Törnqvist: Statistical Fault Detection with Applications to IMU Disturbances. Thesis
No. 1258, 2006.
H. Tidefelt: Structural algorithms and perturbations in differential-algebraic equations.
Thesis No. 1318, 2007.
S. Moberg: On Modeling and Control of Flexible Manipulators. Thesis No. 1336, 2007.
J. Wallén: On Kinematic Modelling and Iterative Learning Control of Industrial Robots.
Thesis No. 1343, 2008.
J. Harju Johansson: A Structure Utilizing Inexact Primal-Dual Interior-Point Method for
Analysis of Linear Differential Inclusions. Thesis No. 1367, 2008.
J. D. Hol: Pose Estimation and Calibration Algorithms for Vision and Inertial Sensors.
Thesis No. 1370, 2008.
H. Ohlsson: Regression on Manifolds with Implications for System Identification. Thesis
No. 1382, 2008.
D. Ankelhed: On low order controller synthesis using rational constraints. Thesis
No. 1398, 2009.
P. Skoglar: Planning Methods for Aerial Exploration and Ground Target Tracking. Thesis
No. 1420, 2009.
C. Lundquist: Automotive Sensor Fusion for Situation Awareness. Thesis No. 1422, 2009.
C. Lyzell: Initialization Methods for System Identification. Thesis No. 1426, 2009.
R. Falkeborn: Structure exploitation in semidefinite programming for control. Thesis
No. 1430, 2010.
D. Petersson: Nonlinear Optimization Approaches to H2-Norm Based LPV Modelling and
Control. Thesis No. 1453, 2010.
Z. Sjanic: Navigation and SAR Auto-focusing in a Sensor Fusion Framework. Thesis
No. 1464, 2011.
K. Granström: Loop detection and extended target tracking using laser data. Thesis
No. 1465, 2011.
J. Callmer: Topics in Localization and Mapping. Thesis No. 1489, 2011.
F. Lindsten: Rao-Blackwellised particle methods for inference and identification. Thesis
No. 1480, 2011.
M. Skoglund: Visual Inertial Navigation and Calibration. Thesis No. 1500, 2011.
S. Khoshfetrat Pakazad: Topics in Robustness Analysis. Thesis No. 1512, 2011.
P. Axelsson: On Sensor Fusion Applied to Industrial Manipulators. Thesis No. 1511, 2011.
A. Carvalho Bittencourt: On Modeling and Diagnosis of Friction and Wear in Industrial
Robots. Thesis No. 1516, 2012.
P. Rosander: Averaging level control in the presence of frequent inlet flow upsets. Thesis
No. 1527, 2012.
N. Wahlström: Localization using Magnetometers and Light Sensors. Thesis No. 1581,
2013.
R. Larsson: System Identification of Flight Mechanical Characteristics. Thesis No. 1599,
2013.