Feedback for physicists: A tutorial essay on control

John Bechhoefer*
Department of Physics, Simon Fraser University, Burnaby, British Columbia V5A 1S6,
(Published 31 August 2005)
Feedback and control theory are important ideas that should form part of the education of a physicist
but rarely do. This tutorial essay aims to give enough of the formal elements of control theory to
satisfy the experimentalist designing or running a typical physics experiment and enough to satisfy the
theorist wishing to understand its broader intellectual context. The level is generally simple, although
more advanced methods are also introduced. Several types of applications are discussed, as the
practical uses of feedback extend far beyond the simple regulation problems where it is most often
employed. Sketches are then provided of some of the broader implications and applications of control
theory, especially in biology, which are topics of active research.
I. Introduction
II. Brief Review of Dynamical Systems
III. Feedback: An Elementary Introduction
A. Basic ideas: Frequency domain
B. Basic ideas: Feedforward
C. Basic ideas: Time domain
D. Two case studies
1. Michelson interferometer
2. Operational amplifier
E. Integral control
IV. Feedback and Stability
A. Stability in dynamical systems
B. Stability ideas in feedback systems
C. Delays: Their generic origins and effect on stability
D. Nonminimum-phase systems
E. MIMO vs SISO systems
V. Implementation and Some Advanced Topics
A. Experimental determination of the transfer function
1. Measurement of the transfer function
2. Model building
3. Model reduction
4. Revisiting the system
B. Choosing the controller
1. PID controllers
2. Loop shaping
3. Optimal control
C. Digital control loops
1. Case study: Vibration isolation of an atom
2. Commercial tools
D. Measurement noise and the Kalman filter
E. Robust control
1. The internal model control parametrization
2. Quantifying model uncertainty
3. Robust stability
4. Robust performance
5. Robust control methods
VI. Notes on Nonlinearity
A. Saturation effects
B. Chaos: The ally of control?
VII. Applications to Biological Systems
A. Physiological example: The pupil light reflex
B. Fundamental mechanisms
1. Negative feedback example
2. Positive feedback example
C. Network example
VIII. Other Applications, Other Approaches
IX. Feedback and Information Theory
X. Conclusions
List of Abbreviations
I. Introduction

Feedback and its big brother, control theory, are such
important concepts that it is odd that they usually find
no formal place in the education of physicists. On the
practical side, experimentalists often need to use feedback. Almost any experiment is subject to the vagaries
of environmental perturbations. Usually, one wants to
vary a parameter of interest while holding all others constant. How to do this properly is the subject of control
theory. More fundamentally, feedback is one of the great
ideas developed (mostly) in the last century,1 with particularly deep consequences for biological systems, and all physicists should have some understanding of such a basic concept. Indeed, further progress in areas of current interest such as systems biology is likely to rely on concepts from control theory.

1Feedback mechanisms regulating liquid level were described over 2000 years ago, while steam-engine “governors” date back to the 18th century. [An influential theoretical study of governors was given by Maxwell (1868).] However, realization of the broader implications of feedback concepts, as well as their deeper analysis and widespread application, dates to the 20th century. Chapter 1 of Franklin et al. (2002) gives a brief historical review of the development of control theory. In a more detailed account, Mayr (1970) describes a number of early feedback devices, from classical examples (Ktesibios and Hero, both of Alexandria) to a scattering of medieval Arab accounts. Curiously, the modern rediscovery of feedback

*Electronic address: [email protected]
©2005 The American Physical Society
This article is a tutorial essay on feedback and control
theory. It is a tutorial in that I give enough detail about
basic methods to meet most of the needs of experimentalists wishing to use feedback in their experiments. It is
an essay in that, at the same time, I hope to convince the
reader that control theory is useful not only for the engineering aspects of experiments but also for the conceptual development of physics. Indeed, we shall see
that feedback and control theory have recently found
applications in areas as diverse as chaos and nonlinear
dynamics, statistical mechanics, optics, quantum computing, and biological physics. This essay supplies the
background in control theory necessary to appreciate
many of these developments.
The article is written for physics graduate students
and interested professional physicists, although most of
it should also be understandable to undergraduate students. Beyond the overall importance of the topic, this
article is motivated by a lack of adequate alternatives.
The obvious places to learn about control theory—
introductory engineering textbooks (Dutton et al., 1997; Franklin et al., 1998, 2002; Goodwin et al., 2001)—are
not very satisfactory places for a physicist to start. They
are long—800 pages is typical—with the relevant information often scattered in different sections. Their examples are understandably geared more to the engineer
than to the physicist. They often cloak concepts familiar
to the physicist in unfamiliar language and notation.
And they do not make connections to similar concepts
that physicists will have likely encountered in standard
courses. The main alternative, more mathematical texts
[e.g., in order of increasing sophistication, the books by Özbay (2000), Morris (2001), Doyle et al. (1992), and Sontag (1998)], are terse but assume the reader already
has an intuitive understanding of the subject.
At the other end of the intellectual spectrum, the first real exposure of many experimentalists to control theory comes when they are faced with having to use or modify a PID (proportional-integral-derivative) control loop that regulates some experimental quantity, such as temperature or pressure. Lacking the time to delve more deeply into the subject, they turn to the semiqualitative discussions found in the appendixes of manuals for commercial regulators and the like. While these can be enough to get by, they rarely give optimal performance and certainly do not give a full idea of the range of possible approaches to a particular experimental problem. Naturally, they also do not give an appreciation for the broader uses of control theory.

(Footnote 1, continued) control took place entirely in England, at the beginning of the industrial revolution of the 18th century. Mayr speculates that the concept of feedback arose there and not elsewhere because it fit in more naturally with the prevailing empiricist philosophy of England and Scotland (e.g., Hume and Locke). On the Continent, the rationalist philosophy of Descartes and Leibniz postulated preconceived goals that were to be determined and achieved via a priori planning and not by comparison with experience. While Mayr’s thesis might at first glance seem far-fetched, the philosophical split between empiricism and rationalism was in fact reflected in various social institutions, such as government (absolute vs limited monarchy), law (Napoleonic code vs case law), and economics (mercantilism vs liberalism). Engineering (feedback control vs top-down design) is perhaps another such case. Mayr devoted much of his subsequent career to developing this thesis (Bennett, 2002). Elsewhere, Bennett (1996) gives a more detailed history of control theory and practice in the 20th century.
Thirty years ago, E. M. Forgan wrote an excellent introduction to control theory for experimental physicists,
“On the use of temperature controllers in cryogenics” (Forgan, 1974), which, despite its seemingly narrow title,
addresses many of the needs of the experimentalist discussed above. However, it predates the widespread use
of digital control and time-domain methods, as well as
important advances in control theory that have taken
place over the past three decades. In one sense, this essay is an updated, slightly more accessible version of
Forgan’s article, but the strategies for control and the
range of applications that are relevant to the physicist
are much broader than what is implied in that earlier
work. I have tried to give some feeling for this breadth,
presenting simple ideas and simple cases in some detail
while sketching generalizations and advanced topics
more briefly.
The plan of this essay, then, is as follows: In Sec. II, we
review some elementary features of dynamical systems
at the level of an intermediate undergraduate mechanics
course. In Sec. III, we introduce the simplest tools of
control theory, feedback and feedforward. As case studies, we discuss how adding a control loop can increase
the usefulness of a Michelson interferometer as a displacement sensor and how feedback plays an essential
role in modern analog electronic circuits. In Sec. IV, we
discuss the relationship between feedback and stability,
focusing on how time delays generically arise and can
limit the amount of feedback gain that may be applied.
In Sec. V, we discuss various practical issues of implementation: the identification of system dynamics, the
choice of control algorithm and the tuning of any parameters, the translation of continuous-time designs to discrete difference equations suitable for programming on
a computer, the use of commercial software packages to
simulate the effects of control on given systems, and the
problems posed by noisy sensors and other types of uncertainty. We include a case study of an active vibrationisolation system. This section includes introductions to a
number of advanced topics, including model reduction,
optimal control, Kalman filtering, and robust methods.
We deliberately intersperse these discussions in a section on practical implementation, for the need to tune many parameters, to deal with noise, and to handle uncertainty in the form of the system model itself all call for more advanced methods. Indeed, historically it has been the failure of simpler ideas that has motivated more complex methods. Theorists should not skip this section!
In Sec. VI, we note the limitations of standard control theory to (nearly) linear systems and discuss its extension to nonlinear systems. Here, the engineering and
physics literature have trod their separate ways, with no
intellectual synthesis comparable to that for linear systems. We point out some basic issues. In Sec. VII, we
discuss biological applications of feedback, which lead to
a much broader view of the subject and make clear that
an understanding of “modular” biology presupposes a
knowledge of the control concepts discussed here. In
Sec. VIII, we mention very briefly a few other major
applications of feedback and control, mainly as a pointer
to other literature. We also briefly discuss adaptive
control—a vast topic that ranges from simple parameter
estimation to various kinds of artificial-intelligence approaches that try to mimic human judgment. In Sec. IX,
we discuss some of the relations between feedback and
information theory, highlighting a recent attempt to explore the connections.
Finally, we note that while the level of this tutorial is
relatively elementary, the material is quite concentrated
and takes careful reading to absorb fully. We encourage
the reader to “browse” ahead to lighter sections. The
case studies in Secs. III.D, V.C.1, and VII.A are good
places for this.
II. Brief Review of Dynamical Systems

In one way or another, feedback implies the modification of a dynamical system. The modification may be done for a number of reasons: to regulate a physical variable, such as temperature or pressure; to linearize the response of a sensor; to speed up a sluggish system or slow down a jumpy one; to stabilize an otherwise unstable dynamics. Whatever the reason, one always starts from a given dynamical system and creates a “better” one.
We begin by reviewing a few ideas concerning dynamical systems. A good reference here and for other
issues discussed below 共bifurcations, stability, chaos, etc.兲
is the book by Strogatz 共1994兲. The general dynamical
system can be written
$$\dot{\vec x} = \vec f(\vec x,\vec u), \tag{2.1a}$$
$$\vec y = \vec g(\vec x,\vec u), \tag{2.1b}$$
where the vector x represents n independent “states” of a system, u represents m independent inputs (driving terms), and y represents p independent outputs. The vector-valued function f represents the (nonlinear) dynamics and g translates the state x and “feeds through” the input u directly to the output y. The role of Eq. (2.1b) is to translate the perhaps-unobservable state variables x into output variables y. The number of internal state variables (n) is usually greater than the number of accessible output variables (p). Equations (2.1) differ from the dynamical systems that physicists often study in that the inputs u and outputs y are explicitly identified.
FIG. 1. Low-pass electrical filter.
Mostly, we will deal with simpler systems that do not
need the heavy notation of Eqs. (2.1). For example, a linear system can be written
$$\dot{\vec x} = \tilde A\vec x + \tilde B\vec u, \tag{2.2a}$$
$$\vec y = \tilde C\vec x + \tilde D\vec u, \tag{2.2b}$$
where the dynamics Ã are represented as an n × n matrix, the input coupling B̃ as an n × m matrix, the output coupling C̃ as a p × n matrix, and D̃ is a p × m matrix. Often, the “direct feed” matrix D̃ will not be present.
As a concrete example, many sensors act as a low-pass
filter2 and are equivalent to the electrical circuit shown
in Fig. 1, where one finds
$$\dot V_{\rm out}(t) = -\frac{1}{RC}V_{\rm out}(t) + \frac{1}{RC}V_{\rm in}(t). \tag{2.3}$$
Here, n = m = p = 1, x = y = Vout, u = Vin, A = −1/RC, B = 1/RC, C = 1, and D = 0.
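This bookkeeping is easy to check numerically. Below is a minimal sketch (the component values and step input are assumptions for illustration, not from the text) that integrates x′ = Ax + Bu with a simple Euler scheme and verifies that a step input settles as a first-order system should:

```python
# Sketch: the RC low-pass filter of Eq. (2.3) in state-space form,
# x' = A x + B u, y = C x + D u, integrated with an Euler scheme.
R, C_cap = 1.0e3, 1.0e-6          # assumed: 1 kOhm, 1 uF -> RC = 1 ms
A, B = -1.0 / (R * C_cap), 1.0 / (R * C_cap)
C_out, D = 1.0, 0.0

dt, t_end = 1e-6, 5e-3            # integrate out to 5 time constants
x = 0.0                           # V_out(0) = 0
u = 1.0                           # step input: V_in = 1 V
for _ in range(int(t_end / dt)):
    x += dt * (A * x + B * u)     # Euler step of x' = A x + B u
y = C_out * x + D * u

# After 5 time constants a first-order step response reaches 1 - e^-5.
print(round(y, 3))
```

The same few lines generalize to the matrix form of Eqs. (2.2) by making A, B, C, D arrays.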
A slightly more complicated example is a driven,
damped harmonic oscillator, which models the typical
behavior of many systems when slightly perturbed from
equilibrium and is depicted in Fig. 2, where
$$m\ddot q + 2\gamma\dot q + kq = kq_0(t). \tag{2.4}$$
We can simplify Eq. (2.4) by scaling time by ω0 = √(k/m), the undamped resonant frequency, and by defining ζ = (γ/m)√(m/k) = γ/√(mk) as a dimensionless damping parameter, with 0 < ζ < 1 for an underdamped oscillator and ζ > 1 for an overdamped system. [In the physics literature, one usually defines the quality factor Q = 1/ζ.] Then, we have
One finds second-order behavior in other types of sensors,
too, including thermal and hydraulic systems. For thermal systems (Forgan, 1974), mass plays the role of electrical capacitance and thermal resistivity (inverse of thermal conductivity) the role of electrical resistance. For hydraulic systems, the corresponding quantities are mass and flow resistance (proportional to fluid viscosity). At a deeper level, such analogies arise if two conditions are met: (1) the system is near enough thermodynamic equilibrium that the standard “flux ∝ gradient” rule of irreversible thermodynamics applies; (2) any time dependence must be at frequencies low enough that each physical element behaves as a single object (“lumped-parameter” approximation). In Sec. IV.C, we consider a situation where
the second condition is violated.
FIG. 2. Driven simple harmonic oscillator.
$$\ddot q + 2\zeta\dot q + q = q_0(t). \tag{2.5}$$
To put this in the form of Eqs. (2.2), we let x1 = q, x2 = q̇. Then n = 2 (second-order system) and m = p = 1 (one input, u = q0, and one output, q), and we have
$$\frac{d}{dt}\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \begin{pmatrix}0 & 1\\ -1 & -2\zeta\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix} + \begin{pmatrix}0\\ 1\end{pmatrix}u(t), \tag{2.6a}$$
$$y = \begin{pmatrix}1 & 0\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix} + 0\cdot u(t), \tag{2.6b}$$
3Of course, it is possible to measure both position and velocity, using, for example, an inductive pickup coil for the latter; however, it is rare in practice to measure all the state variables x.
where the matrices Ã, B̃, C̃, and D̃ are all written explicitly. In such a simple example, there is little reason to
distinguish between x1 and y, except to emphasize that
one observes only the position, not the velocity.3 Often,
one observes linear combinations of the state variables.
(For example, in a higher-order system, a position variable may reflect the influence of several modes. Imagine a cantilever anchored at one end and free to vibrate at the other end. Assume that one measures only the displacement of the free end. If several modes are excited, this single output variable y will be a linear combination of the individual modes with displacements xi.) Note
that we have written a second-order system as two
coupled first-order equations. In general, an nth-order
linear system can be converted to n first-order equations.
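As a sketch of this reduction, the scaled oscillator q̈ + 2ζq̇ + q = u can be written with the state x = (q, q̇) and integrated directly; the damping, initial condition, and step size below are arbitrary choices for illustration:

```python
import numpy as np

# Sketch (scaled units, omega_0 = 1): the damped oscillator as two
# coupled first-order equations, x' = A x + B u, y = C x.
zeta = 0.1
A = np.array([[0.0, 1.0],
              [-1.0, -2.0 * zeta]])   # dynamics matrix
B = np.array([0.0, 1.0])              # input couples to the velocity eq.
C = np.array([1.0, 0.0])              # we observe only the position q

dt = 1e-3
x = np.array([1.0, 0.0])              # release from q = 1 at rest
for _ in range(int(20.0 / dt)):       # integrate 20 scaled time units
    x = x + dt * (A @ x)              # free decay: u(t) = 0
q = C @ x                             # output y = q

# Underdamped motion decays inside the envelope e^(-zeta t).
print(abs(q) < 0.2)
```

The nth-order case works the same way: an n × n dynamics matrix and n-component state vector.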
The above discussion has been carried out in the time
domain. Often, it is convenient to analyze linear
equations—the focus of much of practical control
theory—in the frequency domain. Physicists usually do
this via the Fourier transform; however, control-theory
books almost invariably use the Laplace transform. The
latter has some minor advantages. Physical systems, for
example, usually start at a given time. Laplace-transform
methods can handle such initial-value problems, while
Fourier transforms are better suited for steady-state
situations where any information from initial conditions
has decayed away. In practice, one usually sets the initial
conditions to zero 共and assumes, implicitly, that they
have decayed away兲, which effectively eliminates any
distinction. Inverse Laplace transforms, on the other
hand, lack the symmetry that inverse Fourier transforms
have with respect to the forward transform. But in practice, one almost never needs to carry out an explicit inversion. Finally, one often needs to transform functions
that do not decay to 0 at infinity, such as the step function θ(x) (=0, x < 0 and =1, x > 0). The Laplace transform
is straightforward, but the Fourier transform must be
defined by multiplying by a decaying function and taking
the limit of infinitely slow decay after transforming. Because of the decaying exponential in its integral, one can
define the Laplace transform of a system output, even
when the system is unstable. Whichever transform one
chooses, one needs to consider complex values of the
transform variable. This also gives a slight edge to the
Laplace transform, since one does not have to remember which sign of complex frequency corresponds to a
decay and which to growth. In the end, we follow the engineers (except that we use i = √−1!), defining the Laplace transform of y(t) to be
$$\mathcal{L}[y(t)] \equiv y(s) = \int_0^\infty y(t)e^{-st}\,dt.$$
Then, for zero initial conditions, L[dⁿy/dtⁿ] = sⁿy(s) and L[∫y(t)dt] = (1/s)y(s). Note that we use the same symbol
y for the time and transform domains, which are quite
different functions of the arguments t and s. The abuse
of notation makes it easier to keep track of variables.
An nth-order linear differential equation then transforms to an nth-order algebraic equation in s. For example, the first-order system
$$\dot y(t) = -\omega_0 y(t) + \omega_0 u(t)$$
becomes
$$sy(s) = \omega_0[-y(s) + u(s)],$$
leading to
$$G(s) \equiv \frac{y(s)}{u(s)} = \frac{1}{1+s/\omega_0}, \tag{2.11}$$
where the transfer function G(s) is the ratio of output to input, in the transform space. Here, ω0 = 2πf0 is the characteristic angular frequency of the low-pass filter. The frequency dependence implicit in Eq. (2.11) is made explicit by simply evaluating G(s) at s = iω:
$$G(i\omega) = \frac{1}{1+i\omega/\omega_0}, \tag{2.12}$$
with magnitude and phase
$$|G(i\omega)| = \frac{1}{\sqrt{1+\omega^2/\omega_0^2}}, \tag{2.13}$$
$$\arg G(i\omega) = -\tan^{-1}\!\left(\frac{\omega}{\omega_0}\right). \tag{2.14}$$
John Bechhoefer: Feedback for physicists: A tutorial essay on control
FIG. 3. (Color in online edition) Transfer function for a first-order, low-pass filter. (a) Bode magnitude plot [|G(iω)|]; (b) Bode phase plot [arg G(iω)].
Note that many physics texts use the notation τ = 1/ω0, so that the Fourier transform of Eq. (2.12) takes the form 1/(1 + iωτ).
In control-theory books, log-log graphs of |G(iω)| and linear-log graphs of arg G(iω) are known as Bode plots, and we shall see that they are very useful for understanding qualitative system dynamics. [In the physics literature, χ(ω) ≡ G(iω) is known as the dynamical linear response function (Chaikin and Lubensky, 1995).] In Fig. 3, we show the Bode plots corresponding to Eqs. (2.13) and (2.14). Note that the asymptotes in Fig. 3(a) intercept at the cutoff frequency ω0 and that in Fig. 3(b), the phase lag is −90° asymptotically, crossing −45° at ω0. Note, too, that we break partly from engineering notation by using amplitude ratios in Fig. 3(a) rather than decibels (1 dB = 20 log10|G|).
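These relations are easy to verify numerically; the sketch below (the cutoff frequency is an assumed value) evaluates G(iω) of Eq. (2.12) on a log-spaced grid, the same data one would draw in the Bode plots of Fig. 3:

```python
import numpy as np

# Sketch: Bode data for the first-order filter G(i w) = 1/(1 + i w/w0).
w0 = 2 * np.pi * 100.0                  # assumed cutoff, f0 = 100 Hz
w = np.logspace(0, 6, 601)              # log-spaced angular frequencies
G = 1.0 / (1.0 + 1j * w / w0)

mag = np.abs(G)                         # Bode magnitude, Eq. (2.13)
phase = np.angle(G, deg=True)           # Bode phase, Eq. (2.14)

# At w = w0 the magnitude is 1/sqrt(2) and the phase lag is 45 degrees.
G_at_w0 = 1.0 / (1.0 + 1j)
print(round(abs(G_at_w0), 4), round(np.angle(G_at_w0, deg=True), 1))
```

Plotting `mag` on log-log and `phase` on linear-log axes reproduces the qualitative features described above: asymptotes intersecting at ω0 and a phase crossing −45° there.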
The transfer function G(s) goes to infinity when s = −ω0. Such a point in the complex s plane is called a pole. Here, we see that poles on the negative real s axis correspond to exponential decay of impulses, with the decay rate fixed by the pole position. The closer the pole is to the imaginary s axis, the slower the decay. Poles in the right-hand half of the s plane correspond to exponentially increasing amplitudes.
Similarly, using the Laplace (Fourier) transform, the transfer function for the second-order system is
$$G(s) = \frac{1}{1+2\zeta s+s^2} \quad\text{or}\quad G(i\omega) = \frac{1}{1+2i\zeta\omega-\omega^2},$$
and Bode plots for various damping ratios ζ are shown in Fig. 4. (Recall that we have scaled ω0 = 1.) Here, there are two poles. In the underdamped case (ζ < 1), they form a complex-conjugate pair s = −ζ ± i√(1 − ζ²). In the overdamped case (ζ > 1), the two poles are both on the real s axis.

FIG. 4. (Color in online edition) Transfer function for a second-order system. ζ = 0.1 gives underdamped and ζ = 1 critically damped dynamics. (a) Bode magnitude plot; (b) Bode phase plot.

Both the low-pass filter and the second-order system have transfer functions G(s) that are rational functions; i.e., they can be written as M(s)/N(s), where M(s) and N(s) are polynomials. Not every system can be written in this form. For example, consider a sensor that faithfully records a signal, but with a time delay Δt, i.e., v(t) = y(t − Δt). From the shift theorem for Laplace transforms, the transfer function for such a sensor is G(s) = e^{−sΔt}, which is equivalent to an infinite-order system. Note that the magnitude of G is always 1 and that the phase increases linearly with frequency. In contrast, in the earlier two examples, the phase tended to an asymptotic value.

The convolution theorem allows important manipulations of transfer functions. If we define (G ∗ H)(t) ≡ ∫₀^∞ G(τ)H(t − τ)dτ, then L[G ∗ H] = G(s)H(s). Convolution is just the tool one needs to describe compound systems where the output of one element is fed into the input of the next element. Consider, for example, a first-order sensor element that reports the position of a second-order mechanical system. We would have
$$\ddot y + 2\zeta\dot y + y = u(t), \qquad \dot v + v = y(t),$$
where u(t) drives the oscillator position y(t), which then drives the measured output v(t). Laplace transforming,
$$y(s) = G(s)u(s) = \frac{1}{1+2\zeta s+s^2}\,u(s), \qquad v(s) = H(s)y(s) = \frac{1}{1+s}\,y(s),$$
and thus v(s) = H(s)G(s)u(s), implying an overall loop transfer function F(s) = H(s)G(s). Having two elements in series leads to a transfer function that is the product of the transfer functions of the individual elements. In the time domain, the series output would be the convolution of the two elements.

FIG. 5. Block diagram illustrating signal flow from the input u through the system dynamics G to the output y through the sensor H.
The above example motivates the introduction of
block diagrams to represent the flow of signals for linear
systems. We depict the above as shown in Fig. 5.
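The series rule can be checked numerically; in this sketch (the damping ratio is an assumed value), the combined response F(iω) = H(iω)G(iω) is built from the two elements above and its magnitude compared against the product of the individual magnitudes:

```python
import numpy as np

# Sketch: the second-order system G followed by a first-order sensor H
# (scaled units) has the series response F(i w) = H(i w) G(i w).
zeta = 0.2                                 # assumed damping ratio
w = np.logspace(-2, 2, 401)                # scaled angular frequencies
s = 1j * w
G = 1.0 / (1.0 + 2.0 * zeta * s + s**2)    # mechanical system
H = 1.0 / (1.0 + s)                        # first-order sensor
F = H * G                                  # two elements in series

# Magnitudes multiply (log magnitudes add) for elements in series.
assert np.allclose(np.abs(F), np.abs(G) * np.abs(H))
print(round(abs(F[0]), 3))                 # dc response of the chain
```

On a Bode plot, the series combination is simply the sum of the two log-magnitude curves and the sum of the two phase curves.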
III. Feedback: An Elementary Introduction

Having reviewed some basic ideas from dynamical
systems and having established some notation, we can
understand simple feedback ideas. Here, and elsewhere
below, it is understood that the reader desiring more
details should consult one of the standard control-theory
references cited above (Dutton et al., 1997; Franklin et al., 1998, 2002; Goodwin et al., 2001) or any of the many
other similar works that exist. A good book on “control
lore” is Leigh (2004), which gives a detailed, qualitative
overview of the field, with a nice annotated bibliography.
A. Basic ideas: Frequency domain

Consider a system whose dynamics are described by G(s). The goal is to have the system’s output y(t) follow a control signal r(t) as faithfully as possible. The general strategy consists of two parts: First, we measure the actual output y(t) and determine the difference between it and the desired control signal r(t), i.e., we define e(t) = r(t) − y(t), which is known as the error signal. Then we apply some “control law” K to the error signal to try to minimize its magnitude (or square magnitude).

In terms of block diagrams, we make the connections shown in Fig. 6, where a control law K(s) has been added. Manipulating the block diagrams, we have
$$y(s) = K(s)G(s)e(s), \qquad y(s) = \frac{K(s)G(s)}{1+K(s)G(s)}\,r(s) = \frac{L(s)}{1+L(s)}\,r(s),$$
where the loop gain L(s) ≡ K(s)G(s). Starting from the system dynamics G(s), we have modified the “open-loop” dynamics to be L(s) = K(s)G(s), and then we have transformed the dynamics a second time by “closing the loop,” which leads to closed-loop dynamics given by T ≡ KG/(1 + KG) = L/(1 + L). One hopes, of course, that one can choose K(s) so that T(s) has “better” dynamics than G(s). (For reasons to be discussed below, T is known as the complementary sensitivity function.)

FIG. 6. Block diagram illustrating closed-loop control of a system G(s). Controller dynamics are given by K(s).

Note the negative sign in the feedback node in Fig. 6, which implies that the signal e(t) ≡ r(t) − y(t) is fed back into the controller. Such “negative feedback” gives a signal that is positive when the output y is below the setpoint r and negative when above. A direct proportional control law, u = Kp e with Kp > 0, then tends to counteract a disturbance. Absent any delays, this is a stabilizing effect that can also reduce a system’s sensitivity to perturbations. “Positive feedback” reverses the sign in the feedback node, so that r + y is fed into the controller.4 It can be used to make a switch between two states, and hence a digital memory (see Sec. VII.B.2). It can also be used to make an oscillator.5 More generally, though, there is no reason that a controller cannot be an arbitrary function u(t) = f[r(t), y(t)]. Such a controller has two degrees of freedom and, obviously, can have more complicated behavior than is possible with only negative or positive feedback. (As an example, one could have negative feedback in some frequency range and positive feedback at other frequencies.)

As a simple example, consider the first-order, low-pass filter described above, with
$$G(s) = \frac{G_0}{1+s/\omega_0}.$$
Here, we have added a dc gain G0. We apply the simplest of control laws, K(s) = Kp, a constant. This is known as proportional feedback, since the feedback signal u(t) is proportional to the error signal, u(t) = Kp e(t). Then
$$T(s) = \frac{K_pG_0}{K_pG_0 + 1 + s/\omega_0} = \frac{K_pG_0}{K_pG_0+1}\;\frac{1}{1+s/[\omega_0(1+K_pG_0)]}.$$
This is again just a low-pass filter, with modified dc gain and cutoff frequency. The new dc gain is deduced by taking the s → 0 (ω → 0) limit in T(s) and gives KpG0/(KpG0 + 1). The new cutoff frequency is ω′ = ω0(1 + KpG0). This can also be seen in the time domain,
$$\dot y = -\omega_0 y + \omega_0K_pG_0(r - y),$$
4Physicists often informally use “positive feedback” to denote the situation where −e = y − r is fed back into the controller. If the control law is u = Kp(−e), then the feedback will tend to drive the system away from an equilibrium fixed point. Using the terminology defined above, however, one would better describe this as a situation with negative feedback and negative gain.
5Negative feedback can also lead to oscillation, although typically more gain—and hence more control energy—is required than if positive feedback is used.
or, equivalently,
$$\dot y = -\omega_0(1 + K_pG_0)y(t) + \omega_0K_pG_0 r(t).$$

FIG. 7. Block diagram illustrating closed-loop control of a system G(s) subject to disturbances d(s) and measurement noise ξ(s). The control signal r(s) is assumed to be noise free. If present, the block F(s) adds feedforward dynamics.
In effect, the driving signal is u(t) = ω0KpG0(r − y). If the control signal r(t) = r∞ = const, then the output settles to y∞ = [KpG0/(KpG0 + 1)]r∞. If we had instead the open-loop dynamics,
$$\dot y(t) = -\omega_0 y(t) + \omega_0K_pG_0 r(t),$$
we would have the “bare” cutoff frequency ω0 and a final value y∞ ≡ y(t → ∞) = KpG0 r∞.
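A quick numerical sketch (the gain values are assumptions for illustration) confirms the closed-loop steady state: integrating ẏ = −ω0y + ω0KpG0(r − y) for a step in r settles to y∞ = KpG0/(KpG0 + 1) times r∞:

```python
# Sketch: proportional feedback on the first-order system,
# y' = -w0 y + w0 Kp G0 (r - y).  Assumed values: G0 = 2, Kp = 10, w0 = 1.
w0, G0, Kp, r = 1.0, 2.0, 10.0, 1.0
dt, t_end = 1e-4, 2.0
y = 0.0
for _ in range(int(t_end / dt)):
    y += dt * (-w0 * y + w0 * Kp * G0 * (r - y))

# The closed loop settles to y_inf = Kp G0 / (Kp G0 + 1) times r,
# with the faster cutoff w' = w0 (1 + Kp G0) = 21 w0 for these values.
y_inf = Kp * G0 / (Kp * G0 + 1.0)
print(round(y, 4), round(y_inf, 4))
```

With these values the closed-loop time constant is 1/21 of the open-loop one, so two time units is ample for the transient to die away.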
It is interesting to compare the open- and closed-loop systems. The closed-loop system is faster for Kp > 1. Thus, if the system is a sensor, sluggish dynamics can be speeded up, which might be an advantage for a measuring instrument. Furthermore, the steady-state solution,
$$y_\infty = \frac{K_pG_0}{K_pG_0+1}\,r_\infty,$$
tracks r∞ closely if Kp ≫ 1/G0. One might counter that in the open-loop system, one could set Kp = 1/G0 and have y∞ = r∞ exactly. However, if G0 varies over time—for example, amplifier gains often drift greatly with temperature—this tracking will not be good. By contrast, for Kp ≫ 1/G0, the closed-loop system tracks r∞ without much sensitivity to the dc gain G0.
We can sharpen our appreciation of this second advantage by introducing the notion of the parametric sensitivity S of a quantity Q with respect to variations of a parameter P. One defines S ≡ (P/Q)(dQ/dP). We can then compare the open- and closed-loop sensitivities of the transfer function with respect to the dc gain, G0:
$$S_{\rm open} = \frac{G_0}{K_pG_0}\,\frac{d}{dG_0}(K_pG_0) = 1,$$
$$S_{\rm closed} = \frac{1}{1+K_pG_0} \ll 1.$$
Thus, if KpG0 ≫ 1, the closed-loop dynamics are much less sensitive to variations in the static gain G0 than are the open-loop dynamics.
We next consider the effects of an output disturbance d(t) and sensor noise ξ(t). From the block diagram in Fig. 7 [ignoring the F(s) block for the moment],
$$y = KGe + d, \qquad e = r - \xi - y,$$
which implies
$$y(s) = \frac{KG}{1+KG}[r(s)-\xi(s)] + \frac{1}{1+KG}\,d(s). \tag{3.13}$$
In the previous example, disturbances d(t) would have been rejected up to the cutoff frequency ω′ = ω0(1 + KpG0). This is usually desirable. On the other hand, the control signal effectively becomes r − ξ: the system has no way to distinguish the control signal r(t) from measurement noise ξ(t). Thus the higher the gain is, the noisier the output. (Putting a “prefilter” between the reference and the feedback node is a partial solution. The controller then has two degrees of freedom, as described above.)
The frequency ω′ is known as the feedback bandwidth. We can thus state that a high feedback bandwidth is good, in that it allows a system to track rapidly varying control, and bad, in that it also allows noise to feed through and contaminate the output. This tradeoff may also be expressed by rewriting Eq. (3.13) as
$$e_0(s) \equiv r(s) - y(s) = S(s)[r(s) - d(s)] + T(s)\xi(s), \tag{3.14}$$
where e0 is the tracking error, i.e., the difference between the desired tracking signal r and the actual output y. This is distinguished from e = r − y − ξ, the difference between the tracking signal and the measured output. In Eq. (3.14), S(s) = 1/(1 + KG) is the sensitivity function and T(s) = KG/(1 + KG) = 1 − S is the complementary sensitivity function. The general performance goal is to have e0 be small given various “inputs” to the system. Here, we regard the tracking signal r, the disturbance d, and the sensor noise ξ as inputs of differing kinds. A fundamental obstacle is that S + T = 1 at all frequencies; thus if S is small and disturbances are rejected, then T is large and sensor noise feeds through, and vice versa. We discuss possible ways around this tradeoff below.
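The constraint is easy to see numerically. In the sketch below (proportional control of the low-pass filter, with assumed values for Kp, G0, and ω0), S and T sum to one at every frequency, so S is small only where T is near unity:

```python
import numpy as np

# Sketch of the S + T = 1 constraint for K = Kp, G = G0/(1 + s/w0):
# S = 1/(1 + KG) rejects disturbances, T = KG/(1 + KG) passes noise.
w0, G0, Kp = 1.0, 2.0, 10.0            # assumed values
w = np.logspace(-2, 3, 501)
L = Kp * G0 / (1.0 + 1j * w / w0)      # loop gain K(s)G(s) at s = i w
S = 1.0 / (1.0 + L)
T = L / (1.0 + L)

assert np.allclose(S + T, 1.0)         # the algebraic identity
print(abs(S[0]), abs(T[0]))            # low frequency: S small, T near 1
```

At low frequency the loop gain is large, so disturbances are rejected (|S| ≈ 1/21 here) while sensor noise feeds straight through (|T| ≈ 1); at high frequency the roles reverse.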
B. Basic ideas: Feedforward
Another basic idea of control theory is the notion of
feedforward, which is a useful complement to feedback.
Say that one wants a step change in the reference function. For example, in a loop controlling the temperature of a room, one can suddenly change the desired setpoint from 20 to 25 °C. The feedback loop may work satisfactorily in response to room perturbations, but if one knows ahead of time that one is making a sudden change, one can do better than to just let the feedback loop respond. The usual way is to apply a prefilter F(s) to the control signal r(s). In the absence of any feedback, the system response is just y(s) = G(s)F(s)r(s). If we can choose F = G⁻¹, then y will follow r exactly. Because the actuator has finite power and bandwidth, one usually cannot simply invert G. Still, if one can apply an approximate inverse to r, the dynamical response will often be significantly improved. Often, this amounts
to inverting the system G(s) at low frequencies and leaving it unchanged above a cutoff frequency set by actuator limits or by uncertainty in the system's high-frequency dynamics. See Devasia (2002) and Sec. V.E.
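As an illustrative sketch (with a hypothetical first-order plant and an arbitrarily chosen rolloff time τf, not any particular experimental system), one can simulate such an approximate inversion directly:

```python
def simulate(tau=1.0, tau_f=0.05, dt=1e-4, t_end=0.5, prefilter=True):
    """Step response of a plant G(s) = 1/(1 + tau s) driven through an
    approximate inverse F(s) = (1 + tau s)/(1 + tau_f s), tau_f << tau.
    F inverts G at low frequencies and rolls off above 1/tau_f."""
    r = 1.0          # step reference
    y = 0.0          # plant state
    xf = 0.0         # prefilter state
    t = 0.0
    while t < t_end:
        if prefilter:
            u = xf + (tau / tau_f) * (r - xf)   # u = F(s) r; note the large
            xf += dt * (r - xf) / tau_f         # initial kick -- a real
        else:                                   # actuator would saturate
            u = r
        y += dt * (u - y) / tau                 # plant dynamics (Euler step)
        t += dt
    return y

print(round(simulate(prefilter=False), 3))  # ~0.39: bare plant is slow
print(round(simulate(prefilter=True), 3))   # ~1.0: near-perfect tracking
```

The large transient in u is exactly the finite-actuator limitation mentioned above: the better the inversion, the harder the actuator must work.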
Similarly, if one has an independent way of measuring disturbances, one can apply a compensator element that feeds forward the filtered disturbance to the actuator. (For example, a thermometer outside a building allows one to anticipate disturbances to an interior room.) Again, the idea is to create a signal that will counteract the effect of the disturbance.

Another general motivation for using feedforward is to control the movement of flexible structures. For example, translation stages have mechanical resonances. Imagine that one wants a response that approximates a step displacement as closely as possible. If one uses a step displacement as the control signal, the high frequencies of the step function will excite the system's mechanical resonances. Adding a feedforward element allows one to "shape" the input so that resonances are not excited; the actual response can then be closer to the desired step than it would be were the "naive" input used instead (Singhose, 1997; Croft and Devasia, 1999; Schitter and Stemmer, 2004; Zou et al., 2005).
Because it is usually impossible to implement perfect feedforward and because disturbances are usually unknown, one generally combines feedforward with feedback. In Fig. 7, one includes the prefilter F(s) after the time-varying setpoint r(s), before the loop node. The output y(s) [Eq. (3.13)] then becomes

y(s) = [KG/(1 + KG)][F(s)r(s) − ξ(s)] + [1/(1 + KG)]d(s). (3.15)

Choosing F as close as possible to 1 + (KG)⁻¹ reduces the "load" on the feedback while simultaneously rejecting disturbances via the feedback loop.
A more subtle application of these ideas to scanning probe microscopy illustrates the strengths and weaknesses of using feedforward. Scanning probe microscopes (SPMs)—including, most notably, the scanning tunneling microscope (STM) and atomic force microscope (AFM)—have revolutionized surface science by achieving atomic- or near-atomic-resolution images of surfaces in a variety of circumstances. Practical microscopes usually contain a feedback loop that keeps constant the distance between the sample and the SPM probe, as that probe is rastered over the sample surface (or vice versa). In a recent paper, Schitter et al. (2004) use the fact that in many SPM images one scan line resembles its neighbor. They record the topography estimate of the previous line and calculate the actuator signal needed to reproduce that topography. This is then added to the actuator signal of the next scan line. The feedback loop then has to deal with only the differences between the expected topography and the actual topography. The limitations of this are (i) that the actuator may not be able to produce a strong or rapid enough signal to deal with a sharp topography change (e.g., a step) and (ii) that the new scan line may in fact be rather
different from the previous line, in which case the feedforward will actually worsen the performance. In practice, one puts a "gain" on the feedforward term that reflects the correlation one expects between corresponding vertical pixels. Unity gain—unity correlation—implies that one expects the new line to be exactly the same as the old line, justifying the full use of feedforward. If there is no correlation—i.e., if, statistically, knowing the height of a pixel in a given column of the previous scan tells you nothing about the expected height in the next scan—then one should not use feedforward at all. The actual gain should reflect the expected correlation from line to line. Note the relationship here between information and feedback/feedforward, in that the appropriate amount of feedback and feedforward is determined by the amount of information one scan line gives regarding the next; see Sec. IX, below.
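A toy version of this correlation-weighted feedforward can be sketched as follows. The Gaussian line-to-line model and the parameter names are illustrative assumptions, not the actual scheme of Schitter et al.:

```python
import random
random.seed(0)

def scan_error(rho_true, gain, n=200):
    """RMS burden left for the feedback loop when a fraction `gain` of
    the previous line's signal is fed forward.  Lines obey the toy model
    h_new = rho_true*h_old + sqrt(1 - rho_true^2)*noise, so `rho_true`
    is the true line-to-line correlation."""
    h_old = [random.gauss(0, 1) for _ in range(n)]
    h_new = [rho_true * h + (1 - rho_true**2) ** 0.5 * random.gauss(0, 1)
             for h in h_old]
    # Feedback must supply whatever the feedforward term missed:
    residual = [hn - gain * ho for hn, ho in zip(h_new, h_old)]
    return (sum(r * r for r in residual) / n) ** 0.5

# Matching the feedforward gain to the actual correlation minimizes the
# burden on the feedback loop; unity gain on uncorrelated lines hurts.
for g in (0.0, 0.5, 0.9, 1.0):
    print(g, round(scan_error(rho_true=0.9, gain=g), 2))
```

Analytically, the residual variance in this model is 1 − 2·gain·ρ + gain², minimized at gain = ρ, which is the statement in the text that the gain should equal the expected correlation.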
Finally, we can now better appreciate the distinction between "control" and "feedback." The former refers to the general problem of how to make a system behave as desired. The latter is one technique for doing so. What we have seen so far is that feedback, or closed-loop control, is useful for dealing with uncertainty, in particular by reducing the effects of unknown disturbances. On the other hand, feedforward, or open-loop control, is useful for making desired (i.e., known) changes. As Eq. (3.15) shows, one usually will want to combine both forms of control.
C. Basic ideas: Time domain
Until now, we have focused our discussion of feedback on the frequency domain, but it is sometimes preferable to use the time domain, working directly with Eq. (2.2). State-space approaches are particularly useful for cases where one has multiple inputs and multiple outputs (MIMO), although much of our discussion will be for the single-input–single-output (SISO) case. To simplify further, we will consider only the special problem of controlling about the state x⃗ = 0. It is easy to generalize this (Franklin et al., 2002).
In the SISO case, we have only one output y(t) and one input u(t). We then have

x⃗̇ = Ãx⃗ + b⃗u,
u = −k⃗ᵀx⃗,
y = c⃗ᵀx⃗. (3.16)

In Eq. (3.16), row vectors are represented, for example, as c⃗ᵀ = (c1 c2). The problem is then one of choosing the feedback vector k⃗ᵀ = (k1 k2) so that the eigenvalues of the new dynamical system, x⃗̇ = Ã′x⃗ with Ã′ = Ã − b⃗k⃗ᵀ, have the desired properties.
One can easily go from the state-space representation of a dynamical system to its transfer function. Laplace transforming Eqs. (3.16), one finds

G(s) = c⃗ᵀ(sĨ − Ã)⁻¹b⃗, (3.17)

where Ĩ is the identity matrix. One can also show that if one changes coordinates for the state vector (i.e., if one defines x⃗′ = T̃x⃗), the transfer function deduced in Eq. (3.17) will not change. The state vector x⃗ is an internal representation of the dynamics, while G(s) represents the physical input-output relation.
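A minimal numerical check of Eq. (3.17), and of this invariance, can be made with the undamped oscillator of Eq. (2.6) and an arbitrarily chosen change of coordinates T̃ (an illustration added here, not part of the original text):

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def transfer(A, b, c, s):
    """G(s) = c^T (sI - A)^(-1) b for a two-state SISO system."""
    sIA = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    x = mat_vec(inv2(sIA), b)
    return c[0] * x[0] + c[1] * x[1]

# Undamped oscillator [Eq. (2.6), zeta = 0]: x1dot = x2, x2dot = -x1 + u
A = [[0.0, 1.0], [-1.0, 0.0]]; b = [0.0, 1.0]; c = [1.0, 0.0]
s = 2j
G = transfer(A, b, c, s)                     # expect 1/(s^2 + 1)
assert abs(G - 1 / (s**2 + 1)) < 1e-12

# A change of coordinates x' = T x leaves G(s) unchanged:
T = [[2.0, 1.0], [0.0, 1.0]]
Tinv = inv2(T)
A2 = mat_mul(mat_mul(T, A), Tinv)            # A' = T A T^-1
b2 = mat_vec(T, b)                           # b' = T b
c2 = mat_vec([[Tinv[0][0], Tinv[1][0]],     # c' = (T^-1)^T c
              [Tinv[0][1], Tinv[1][1]]], c)
assert abs(transfer(A2, b2, c2, s) - G) < 1e-12
```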
Because the internal state x⃗ usually has more elements than either the number of inputs or outputs (here, just one each), a few subtleties arise. To understand these, let us rewrite Eqs. (3.16) for the special case where Ã is a 2 × 2 diagonal matrix, with diagonal entries λ1 and λ2. Then

ẋ1 = λ1x1 + b1u,
ẋ2 = λ2x2 + b2u,
y = c1x1 + c2x2.
If b1 = 0, then clearly there is no way that u(t) can influence the state x1(t). More generally, if any element of b⃗ is zero, the system will not be "controllable." Likewise, if any element of c⃗ is zero, then y(t) will not be influenced at all by the corresponding element of x⃗(t), and we say that that state is not "observable." More formally, a system is controllable if it is always possible to choose a control sequence u(t) that takes the system from an arbitrary initial state to an arbitrary final state in finite time. One can show that a formal test of controllability is given by examining a matrix made out of the vectors Ãⁱb⃗, for i = 0, ..., n − 1. If the controllability matrix Ũb is invertible, the system is controllable. The n × n matrix Ũb is explicitly

Ũb ≡ (b⃗  Ãb⃗  ⋯  Ãⁿ⁻¹b⃗).
In the more general MIMO case, b⃗ will be a matrix B̃, and the controllability condition is that ŨB have full rank. One can also show that a similar test exists for observability and, indeed, that controllability and observability are dual properties (corresponding to how one gets information into and out of the dynamical system), so that any property pertaining to one has a counterpart (Lewis, 1992). Note that we use "observable" here in its classical sense. Section VIII contains a brief discussion of quantum control.
One can show that if the technical conditions of controllability and observability are met, then one can choose a k⃗ that will place the eigenvalues anywhere.⁶ The catch is that the farther one moves an eigenvalue, the larger the elements of k⃗ must be, which will quickly lead to unattainable values for the input u. Thus one should move only the eigenvalues that are "bad" for system response and move them as little as possible (Franklin et al., 2002).

⁶This is true even if the system is unobservable, but then the internal state x⃗ cannot be deduced from the observed output y.
As a quick example (Franklin et al., 2002), consider an undamped pendulum of unit frequency [Eq. (2.6), with ζ = 0]. We desire to move the eigenvalues from ±i to −2 and −2. In other words, we want to double the natural frequency and change the damping ζ from 0 to 1 (critical damping). Let u = −(k1x1 + k2x2) in Eq. (2.6), which leads to a new dynamical matrix,

Ã′ = ( 0        1
      −1 − k1   −k2 ).

Computing the characteristic equation for Ã′ and matching coefficients with the desired equation [(λ + 2)² = 0], one easily finds k⃗ᵀ = (3 4). More systematically, there are general algorithms for choosing k⃗ so that Ã′ has desired eigenvalues, such as "Ackermann's method" for pole (eigenvalue) placement (Dutton et al., 1997; Sec.
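The result is easily verified numerically; the short script below (an illustration added here) recomputes the eigenvalues from the characteristic polynomial:

```python
import cmath

def eig2(M):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Undamped pendulum, Eq. (2.6) with zeta = 0: open-loop eigenvalues +/- i.
e1, e2 = eig2([[0.0, 1.0], [-1.0, 0.0]])
assert abs(e1 - 1j) < 1e-9 and abs(e2 + 1j) < 1e-9

# Feedback u = -(k1 x1 + k2 x2) with k = (3, 4) gives
# A' = [[0, 1], [-1 - k1, -k2]], whose eigenvalues both sit at -2.
k1, k2 = 3.0, 4.0
e1, e2 = eig2([[0.0, 1.0], [-1.0 - k1, -k2]])
assert abs(e1 + 2) < 1e-9 and abs(e2 + 2) < 1e-9
```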
In the above discussion of a SISO system, the control law used the full state vector x⃗(t), but there was only a single observable y(t). This is a typical situation. One thus needs some way to go from the single dynamical variable y to the full state x⃗. But since y is a linear function of all the x's and since one knows the system dynamics, one can, in fact, do such an inversion. In the control-theory literature, the basic strategy is to make an observer that takes the partial information as an input and tends to the true state.⁷ Since the observed system is being modeled by computer, one has access to its internal state, which one can use to estimate the true dynamical state. In the absence of measurement noise, observers are straightforward to implement (Dutton et al., 1997; Özbay, 2000). The basic idea is to update an existing estimate x̂(t) of the state using the dynamical equations [Eqs. (2.2)] with the observed output y(t). The augmented dynamics are

x̂̇(t) = Ax̂(t) + Bu(t) + F[y(t) − Cx̂(t)].

For simplicity, we have dropped tildes (A, B, etc.) and vector symbols (x, y, etc.). The new matrix F is chosen so that the error in the estimate of x(t) converges to zero:

ė(t) = ẋ(t) − x̂̇(t) = (A − FC)e(t).

To converge quickly, the real parts of the eigenvalues of A − FC should be large and negative; on the other hand, large eigenvalues will lead to noisy estimates. A reasonable compromise is to choose F so that the estimator converges several times faster than the fastest pole in the system dynamics. The estimator described here is a
⁷There are actually two distinct control strategies. In the "state-vector-feedback" (SVF) approach, one uses a feedback law of the form u⃗ = −K̃x⃗ along with an observer to estimate x⃗ from the measurements y⃗. In the "output-feedback" approach, one uses directly u⃗ = −K̃′y⃗. The SVF approach is conceptually cleaner and more systematic, and it allows one to prove general results on feedback performance. The output-feedback approach is more direct—when it works (Lewis, 1992).
FIG. 8. (Color in online edition) Phase-space plot of harmonic oscillator and its observer. The harmonic oscillator trajectory starts from the dot and traces a circle. The observer starts from the square with incorrect initial velocity but converges to track the oscillator. For simplicity, the control signal u = 0.
"full-order" estimator, in that it estimates all the states x whether they are observed or not. Intuitively, it is clear that if one defines each observable y to be one of the elements of x⃗, then it should be possible to define a "partial-order" estimator, which uses the observations where they exist and estimates only the remaining unknown elements of x⃗. This is more efficient to calculate but more complicated to set up (Dutton et al., 1997). Finally, if there is significant measurement noise, one generally uses a rather different strategy, the Kalman filter, to make the observer; see Sec. V.D.
To continue the example of the harmonic oscillator above, choosing F⃗ᵀ = (4 3) gives Ã′ repeated eigenvalues at −2. The phase-space plot (x1 vs x0) in Fig. 8 shows how the observer tracks the physical system. The oscillator, starting from x0 = 0, x1 = −1, traces a circle of unit radius in phase space. The observer dynamical system starts from different initial conditions but eventually converges to track the physical system. Here, the observer time scale is only half the natural period; in a real application, it should be faster. After the observer has converged, one simply uses its values (x⃗̂), fed by the observations y, in the feedback law. Obviously, in a real application, the computer simulation of the observer must run faster than the dynamics of the physical system.
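The convergence shown in Fig. 8 can be reproduced in a few lines. The sketch below uses simple Euler integration and the observer gains quoted above (illustrative initial conditions; not the exact parameters of the figure):

```python
def simulate_observer(dt=1e-4, t_end=6.0):
    """Harmonic oscillator (u = 0) plus a full-order observer,
    xhat' = A xhat + F (y - C xhat), with F^T = (4, 3) placing both
    observer eigenvalues at -2.  Returns the final estimation error."""
    x = [0.0, -1.0]       # true state (starts on the unit circle)
    xh = [1.0, 1.0]       # observer state, deliberately wrong
    f = [4.0, 3.0]
    t = 0.0
    while t < t_end:
        y = x[0]                           # only x1 is measured: C = (1 0)
        dx = [x[1], -x[0]]                 # A = [[0, 1], [-1, 0]]
        dxh = [xh[1] + f[0] * (y - xh[0]),
               -xh[0] + f[1] * (y - xh[0])]
        x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
        xh = [xh[0] + dt * dxh[0], xh[1] + dt * dxh[1]]
        t += dt
    return ((x[0] - xh[0])**2 + (x[1] - xh[1])**2) ** 0.5

# The estimation error contracts roughly as exp(-2t), so by t = 6 the
# observer has converged onto the oscillator's circular orbit.
err = simulate_observer()
assert err < 1e-3
print(err)
```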
The above methods are satisfactory for linear equations. In the physics literature, one often uses the "method of time delays" (Strogatz, 1994). Here, an n-element state vector is made by taking the vector y⃗τ ≡ {y(t), y(t − τ), ..., y[t − (n − 1)τ]}, where the delay τ is chosen to be the "dominant" time constant (the choice often requires a bit of playing around to optimize). Measuring y⃗τ is roughly equivalent to measuring the first n − 1 time derivatives of y(t). For example, from y(t) and y(t − τ), one has information about ẏ(t). One can show that y⃗τ obeys dynamics like that of the true system (more precisely, its phase space is topologically similar). Note that one usually must choose the embedding dimension n. If n is too small, trajectories will cross in the n-dimensional phase space, while if n is too large, one will merely be embedding the system in a space that has a larger dimension than that of the actual state vectors. Measurement-noise effects are magnified when n is chosen too large. The method of time delays works even if one does not know the correct system. If the system is linear, then there is little excuse for not figuring it out (see Sec. V.A), but if the system is nonlinear, then it may not be easy to derive a good model and the method of time delays may be the best one can do.
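A minimal sketch of the delay embedding, using a harmonic-oscillator record y(t) = cos t and an arbitrarily chosen delay (illustrative parameters, not from any particular experiment):

```python
import math

def delay_embed(y, tau_steps, n):
    """Method of time delays: build state vectors
    (y(t), y(t - tau), ..., y(t - (n-1) tau)) from a scalar record."""
    start = (n - 1) * tau_steps
    return [[y[i - k * tau_steps] for k in range(n)]
            for i in range(start, len(y))]

# Scalar record from a harmonic oscillator: y(t) = cos(t).
dt = 0.01
y = [math.cos(dt * i) for i in range(2000)]
tau_steps = 100                       # delay tau = 1, about 1/6 period
states = delay_embed(y, tau_steps, n=2)

# y(t) - y(t - tau) carries the velocity information; the two delayed
# coordinates are only partially correlated, so the embedded points
# trace a loop (topologically like the true phase-space circle),
# not a line.
v1 = [s[0] for s in states]
v2 = [s[1] for s in states]
m1 = sum(v1) / len(v1); m2 = sum(v2) / len(v2)
corr = sum((a - m1) * (b - m2) for a, b in zip(v1, v2))
norm = (sum((a - m1)**2 for a in v1) * sum((b - m2)**2 for b in v2)) ** 0.5
print(round(corr / norm, 2))          # well below 1: not redundant
```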
The direct, "state-space" formulation of feedback, with its "pole-placement" methods, is easier to simulate on a computer and has other mathematical advantages, while the frequency domain is often more intuitive. One should know both approaches.

To summarize (and anticipate, slightly), the possible advantages of adding a feedback loop include the following:

(i) altered dynamics (e.g., faster or slower time constants);
(ii) the ability to linearize a nonlinear sensor by holding the sensor value constant (discussed below);
(iii) reduced sensitivity to environmental disturbances.

Possible disadvantages include the following:

(i) sensor noise may contaminate the output;
(ii) a stable system may be driven unstable.

We shall deal with the potential disadvantages below. We now give an example that highlights the use of feedback to linearize a nonlinear sensor signal.
D. Two case studies
1. Michelson interferometer
In Sec. III, we looked at controlling a device that was merely a low-pass filter. Such an example might seem academic, in that one rarely encounters so simple a system in the real world. Yet, as the following case study shows, simplicity can often be forced on a system, purchased at the price of possible performance.

We consider the Michelson interferometer designed by Gray et al. for use in the next-generation Laser Interferometer Gravitational Wave Observatory (LIGO) gravity-wave project (Gray et al., 1999). The gravity-wave detector (itself an interferometer) must have all its elements (which are several kilometers long!) isolated from Earth-induced vibrations, so that any gravity-wave-induced distortions may be detected. In order to isolate the large masses, one can measure their position relative to the earth—hence the need for accurate displacement measurement. Of course, the interferometer may be used in many other applications, too. We refer the reader to the article of Gray et al. for details about the project and interferometer. Here, we consider a simplified version that highlights their use of feedback.
FIG. 10. Op-amp based noninverting amplifier.
FIG. 9. Michelson interferometer with feedback element K(s) added to linearize output for large displacements.
Figure 9 shows a schematic diagram of the interferometer. Without the control element K(s) and the piezoactuator shown at bottom, it would depict just an ordinary Michelson interferometer. As such, its output is a sinusoidal function of the displacement of the target mirror. In open-loop operation, the interferometer could be used as a linear sensor only over a small fraction of a wavelength. By adding the actuator, Gray et al. force the servo-mirror to track the moving target mirror. The actuator signal to the servo-mirror effectively becomes the sensor signal.

One immediate advantage of tracking a desired "setpoint" on the fringe is that if the actuator is linear, one will have effectively linearized the original, highly nonlinear sensor signal. (In fact, the actuator used piezoelectric ceramic stacks for displacement, which have their own nonlinearities. But these nonlinearities are much smaller than those of the original output.) Another widely used application of feedback to linearize a signal, mentioned briefly above in our discussion of feedforward techniques, is the scanning tunneling microscope (STM), where the exponential dependence of the tunneling current on the distance between conductors is linearized by feedback (Oliva et al., 1995).
In their published design, Gray et al. chose the feedback law to be a band-limited proportional gain:

K(s) = K0/(1 + s/ω0).

Their controller K(s) looks just like our simple system K(s)G(s) in Eq. (3.2) above. They assume that their system has no dynamics [G(s) = 1] up to the feedback bandwidth ω′ = ω0(1 + K0). Of course, their system does have dynamics. For example, the photodiode signal rolls off at about 30 kHz, and the piezoactuator has a mechanical resonance at roughly the same frequency. But they chose ω0 ≈ 2π × 25 Hz and K0 ≈ 120, so that the feedback bandwidth of 3 kHz was much less than the natural frequencies of their dynamical system.
The design has much to recommend it. The large dc gain means that static displacements are measured accurately. One can also track displacements up to ω′, which, if slower than the system dynamics, is much faster than their application requires. More sophisticated feedback design could achieve similar bandwidth even if the system dynamics were slower, but the added design costs would almost certainly outweigh any component-cost savings. And the performance is impressive: Gray et al. report a position noise of 2 × 10⁻¹⁴ m/√Hz, only about ten times more than the shot-noise limit imposed by the laser intensity used (≈10 mW at λ = 880 nm). The lesson is that it often pays to spend a bit more on high-performance components in order to simplify the feedback design. Here, one is "killing the problem with bandwidth"; i.e., one starts with far more bandwidth than is ultimately needed, in order to simplify the design. Of course, one does not always have that luxury, which motivates the study of more sophisticated feedback algorithms.
2. Operational amplifier
We briefly discuss another application of proportional feedback for first-order systems that most experimentalists will not be able to avoid: the operational amplifier ("op amp") (Mancini, 2002). The op amp is perhaps the most widely used analog device and is the basis of most modern analog electronic circuits. For example, it is used to make amplifiers, filters, differentiators, integrators, and multipliers, to interface with sensors, and so forth. Almost any analog circuit will contain several of them.

An op amp is essentially a very high gain differential amplifier that uses negative feedback to trade off high gain for reduced sensitivity to drift. A typical circuit ("noninverting amplifier") is shown in Fig. 10. The op amp is the triangular element, which is a differential amplifier of gain A:

Vout = A(Vin − V−).

The + and − inputs serve as an error signal for the feedback loop. The two resistors in the return loop form a voltage divider, with
V− = Vout R1/(R1 + R2) ≡ Vout β,

which leads to

GCL ≡ Vout/Vin = A/(1 + Aβ) ≈ 1/β,

where GCL is the closed-loop gain and the approximation holds when A ≫ 1. Thus the circuit is an amplifier of gain GCL ≈ 1 + R2/R1. The sensitivity to drift in A is

(dGCL/GCL)/(dA/A) = 1/(1 + Aβ) ≪ 1,
in the op amp. This, in fact, was the original technical
motivation for introducing negative feedback, which occurred in the telephone industry in the 1920s and 1930s.8
Open-loop amplifiers were prone to drift over time,
leading to telephone connections that were too soft or
too loud. By increasing the gain of the amplifiers and
using negative feedback, one achieves much more stable
performance. In effect, one is replacing the large temperature dependence of the semiconductors in transistors with the feebler temperature dependence of resistors 共Mancini, 2002兲. Note that the A → ⬁ limit is one of
the approximations made when introducing the “ideal”
op amp that is usually first taught in electronics courses.
It is interesting to look at the frequency dependence of the op-amp circuit (Fig. 10). The amplifier is usually designed to act like a simple low-pass filter, with gain A(ω) = A0/(1 + iω/ω0). Following the same logic as in Sec. III.A, we find that the closed-loop equations correspond to a modified low-pass filter with cutoff frequency ωCL = βA0ω0. One concludes that for ωCL ≫ ω0,

GCL ωCL ≈ A0 ω0. (3.28)

Thus the closed-loop gain times the closed-loop bandwidth is a constant determined by the parameters of the amplifier. The product A0ω0 is usually known as the unity-gain bandwidth, because it is the frequency at which the open-loop gain A of an op amp is 1. Modern op amps have unity-gain bandwidths that are typically between 10⁶ and 10⁹ Hz. Equation (3.28) states that in an op-amp-based voltage amplifier there is a tradeoff between the gain of the circuit and its bandwidth. Mancini (2002) contains many examples of such "rough-and-ready" engineering calculations concerning feedback.
⁸See Mancini (2002). Historically, the introduction of negative feedback into electronic circuits was due to H. S. Black in 1927 (Black, 1934). While not the first use of negative feedback (steam-engine governors using negative feedback had been in use for over a century), Black's ideas spurred colleagues at the Bell Telephone Laboratories (Bell Labs) to analyze more deeply why feedback was effective. This led to influential studies by Nyquist (1932) and Bode (1945) that resulted in the "classical control," frequency-domain analysis discussed here. Around 1960, Pontryagin, Kalman, and others developed "modern" state-space methods based on time-domain analysis.
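These rough-and-ready relations can be checked directly. The sketch below assumes illustrative component values (A0 = 10⁵, ω0/2π = 10 Hz, R2/R1 = 9), not those of any particular op amp:

```python
import math

def closed_loop_gain(w, A0=1e5, w0=2 * math.pi * 10.0, R1=1e3, R2=9e3):
    """Noninverting amplifier: A(w) = A0/(1 + i w/w0), divider
    beta = R1/(R1 + R2), and G_CL = A/(1 + A beta)."""
    beta = R1 / (R1 + R2)
    A = A0 / (1 + 1j * w / w0)
    return A / (1 + A * beta)

# dc gain ~ 1/beta = 1 + R2/R1 = 10, nearly independent of A0:
g1 = abs(closed_loop_gain(0.0, A0=1e5))
g2 = abs(closed_loop_gain(0.0, A0=2e5))   # op-amp gain drifts by 2x...
assert abs(g1 - 10) < 0.01 and abs(g2 - g1) < 0.001  # ...G_CL barely moves

# Gain-bandwidth tradeoff: G_CL * w_CL ~ A0 * w0 (unity-gain bandwidth).
# The closed-loop cutoff w_CL = beta*A0*w0 is where |G_CL| drops by sqrt(2):
w0 = 2 * math.pi * 10.0
w_cl = 0.1 * 1e5 * w0
ratio = abs(closed_loop_gain(w_cl)) / abs(closed_loop_gain(0.0))
assert abs(ratio - 2 ** -0.5) < 0.01
```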
E. Integral control

All of the examples of feedback discussed above suffer from "proportional droop": i.e., the long-term response to a steady-state input differs from the desired setpoint. Thus if the (static) control input to a low-pass filter is r∞, the system settles to a solution y∞ = [Kp/(1 + Kp)]r∞. Only for an infinite gain will y∞ = r∞, but in practice the gain cannot be infinite. The difference between the desired signal r∞ and the actual signal equals [1/(1 + Kp)]r∞.
With integral control, one applies a control Ki ∫_{−∞}^{t} e(t′)dt′ rather than (or in addition to) the proportional control term Kp e(t). The integral will build up as long as e(t) ≠ 0. In other words, the integral term eliminates the steady-state error. We can see this easily in the time domain, where

ẏ(t) = −(1/τ)y(t) + (Ki/τ²) ∫_{−∞}^{t} [r∞ − y(t′)]dt′, (3.29)

where Ki is rescaled to be dimensionless. Differentiating,

ÿ(t) = −(1/τ)ẏ(t) + (Ki/τ²)[r∞ − y(t)], (3.30)

which clearly has a steady-state solution y∞ = r∞—no proportional droop!

It is equally interesting to examine the situation in frequency space, where the control law is K(s) = Ki/τs. One can interpret this K as a frequency-dependent gain, which is infinite at zero frequency and falls off as 1/ω at finite frequencies. Because the dc gain is infinite, there is no steady-state error.
If the system transfer function is G(s) = 1/(1 + τs), then the closed-loop transfer function becomes

T(s) = KG/(1 + KG) = 1/(1 + τs/Ki + τ²s²/Ki). (3.31)

Note that both Eqs. (3.30) and (3.31) imply that the integral control has turned a first-order system into effectively a second-order system, with ω0² = Ki/τ² and ζ = 1/(2√Ki). This observation implies a tradeoff: a large Ki gives good feedback bandwidth (large ω0) but reduced ζ. Eventually (when ζ < 1), even an overdamped system will become underdamped. In the latter case, perturbations relax with oscillations that overshoot the setpoint, which may be undesirable.
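Both conclusions—no proportional droop, but ringing once ζ = 1/(2√Ki) drops below 1—can be checked by direct simulation. The script below is an illustration (Euler integration, τ = 1 chosen arbitrarily):

```python
def integral_step(Ki, tau=1.0, dt=1e-3, t_end=60.0):
    """Step response of a first-order system under pure integral
    control: ydot = -y/tau + (Ki/tau^2) * integral of (r - y).
    Returns the final value and the maximum excursion above r = 1."""
    y, I, r = 0.0, 0.0, 1.0
    ymax = 0.0
    t = 0.0
    while t < t_end:
        I += dt * (r - y)                       # accumulate the error
        y += dt * (-y / tau + Ki / tau**2 * I)  # plant + integral drive
        ymax = max(ymax, y)
        t += dt
    return y, ymax

# Whatever Ki, the output settles at the setpoint: no proportional droop.
for Ki in (0.1, 1.0, 4.0):
    y_final, ymax = integral_step(Ki)
    assert abs(y_final - 1.0) < 0.01

# But zeta = 1/(2 sqrt(Ki)): for large Ki the response is underdamped
# and overshoots the setpoint on its way there.
assert integral_step(0.1)[1] < 1.001   # zeta ~ 1.6: no overshoot
assert integral_step(4.0)[1] > 1.3     # zeta = 0.25: rings
```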
Integral control can be improved by combining it with proportional control, K(s) = Ki/τs + Kp, which gives faster response while still eliminating steady-state errors. To see this, note that the closed-loop transfer function (3.31) becomes

T(s) = KG/(1 + KG) = [1 + (Kp/Ki)τs]/[1 + (1 + Kp)τs/Ki + τ²s²/Ki], (3.32)

which is 1 for ω → 0 and is asymptotically first order, with time constant τ/Kp, for ω → ∞.
We have seen that a system with integral control always tends to the setpoint, whatever the value of Ki. This sounds like a trivial statement, but it is our first example of how a feedback loop can lead to robust behavior that does not depend on details of the original system itself. In other words, it is only the loop itself and the fact that integration is employed that leads to the tracking properties of feedback. We return to this point in Sec. VII, which discusses biological applications of feedback.

IV. Feedback and Stability

Feedback can also be used to change the stability of a dynamical system. Usually, this is undesired. For example, as we shall see below, many systems will go into spontaneous oscillation when the proportional gain Kp is too high. Occasionally, feedback is used to stabilize an initially unstable system. A topical example is the Stealth Fighter.⁹ The aircraft's body is made of flat surfaces assembled into an overall polygonal hull. The flat surfaces deflect radar beams away from their source but make the aircraft unstable. Using active control, one can stabilize the flight motion. A more prosaic example is the problem of balancing a broom upside down in the palm of your hand.

Before discussing stability in feedback systems, we briefly review some notions of stability in dynamical systems in general in Sec. IV.A. In Sec. IV.B, we apply those ideas to feedback systems.
A. Stability in dynamical systems
Here, we review a few key ideas from stability theory (Strogatz, 1994). We return to the time domain and write a simple nth-order equation in matrix form as

y⃗̇ = Ãy⃗.

Assuming Ã to be diagonalizable, one can write the solution as

y⃗(t) = e^{Ãt}y⃗(0) = R̃e^{D̃t}R̃⁻¹y⃗(0),

where Ã = R̃D̃R̃⁻¹ and
⁹The instability of planes such as the F-117 fighter ("Nighthawk") is fairly obvious just from looking at pictures of them. I have seen anecdotal mention of this but no serious discussion of how active control stabilizes its flight. See, for example, http://
D̃ = diag(λ1, ..., λn).
The solution y⃗ = 0 is stable against infinitesimally small perturbations if Re λi < 0 ∀ i. Generally, the eigenvalues of the stability matrix Ã change when a control parameter—such as a feedback gain—is changed. If one of the eigenvalues has positive real part, the associated eigenmode will grow exponentially in time. (This is linear stability; a solution may be stable to infinitesimal perturbations but unstable to finite perturbations of large enough amplitude. Such issues of nonlinear stability are relevant to the hysteretic systems discussed below.) The qualitative change in behavior that occurs as
the eigenvalue crosses zero is known as a bifurcation. Once the system is unstable, the growth of the unstable mode means that nonlinear terms will quickly become important. The generic behavior of the bifurcation is then determined by the lowest-order nonlinear terms. (The general nonlinear term is assumed to have a Taylor expansion about the solution y⃗ = 0.) For example, the unstable mode (indexed by i) may behave either as

ẏi = λiyi − ayi²,
ẏi = λiyi − byi³. (4.4)
In Eq. (4.4), λi is a "control parameter," i.e., one that may be controlled by the experimenter. The parameters a and b are assumed to remain constant. The first relation in Eq. (4.4) describes a "transcritical bifurcation" and the second a "pitchfork bifurcation" (Strogatz, 1994). (If there is a symmetry y → −y, one has a pitchfork bifurcation; otherwise, the transcritical bifurcation is the generic description of two solutions that exchange stabilities.) In both cases, the linearly unstable mode saturates with a finite amplitude, in a way determined by the lowest-order nonlinear term.

If the eigenvalue λi is complex and Ã real, then there will be a second eigenvalue λj = λi*. The eigenvalues become unstable in pairs, when Re λi = 0. The system then begins to oscillate with angular frequency Im λi. The instability is known as a Hopf bifurcation.
One situation that is seldom discussed in introductory feedback texts but is familiar to physicists studying nonlinear dynamics is the distinction between subcritical and supercritical bifurcations (analogous to first- and second-order phase transitions). Supercritical bifurcations refer to the case described above, where the system is stable until λ = 0, where the mode spontaneously begins to grow. But if the lowest-order nonlinear term is positive, it will reinforce the instability rather than saturating it. Then the lowest-order negative nonlinear term will saturate the instability. For example, a subcritical pitchfork bifurcation would be described by

ẏi = λiyi + byi³ − cyi⁵,

and a subcritical Hopf bifurcation by
a second-order system lags 180°, etc. Thus a second-order system will lag less than 180° at finite frequencies, implying that at least a third-order system is needed for an instability.

As an example, we consider the integral control of a (degenerate) second-order system, with G(s) = 1/(1 + τs)² and K(s) = Ki/s. Instability occurs when K(iω)G(iω) = −1. This leads to a phase condition, −90° − 2 arctan(ωτ) = −180°, which is satisfied at ω* = 1/τ, and a magnitude condition, |K(iω*)G(iω*)| = 1, which fixes the critical gain Ki* = 2/τ.
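The instability condition K(iω)G(iω) = −1 can be checked numerically; the sketch below (an illustration with τ = 1) verifies that the loop phase crosses −180° at ω = 1/τ and that the gain there reaches unity when Ki = 2/τ:

```python
import math

def loop(w, Ki, tau=1.0):
    """K(iw)G(iw) for K = Ki/s and G = 1/(1 + tau s)^2."""
    s = 1j * w
    return (Ki / s) / (1 + tau * s) ** 2

tau = 1.0
# Phase: -90 deg (integrator) - 2*arctan(w tau) hits -180 deg at w = 1/tau:
w_star = 1.0 / tau
val = loop(w_star, 1.0)
phase = math.degrees(math.atan2(val.imag, val.real))
assert abs(abs(phase) - 180.0) < 1e-9   # sits on the negative real axis

# Magnitude: |KG| = 1 at w* fixes the critical gain Ki = 2/tau:
Ki_crit = 2.0 / tau
assert abs(abs(loop(w_star, Ki_crit)) - 1.0) < 1e-12
```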
FIG. 11. (Color in online edition) Two scenarios for a pitchfork bifurcation, showing the steady-state solution y as a function of the control parameter λ: (a) supercritical; (b) subcritical, where the arrows show the maximum observable hysteresis.
ẏi = λiyi + b|yi|²yi − c|yi|⁴yi.

In both these cases, the yi = 0 solution will be metastable when Re λi is slightly negative. At some finite value of Re λi, the system will jump to a finite-amplitude solution. On reducing Re λi, the eigenvalue will be slightly negative before the system spontaneously drops down to the yi = 0 solution. For a Hopf bifurcation, the discussion is the same, except that y is now the amplitude of the spontaneous oscillation. Thus subcritical instabilities are associated with hysteresis with respect to control-parameter variations. In the context of control theory, a system with a subcritical bifurcation might suddenly go unstable when the gain was increased beyond a critical value. One would then have to lower the gain a finite amount below this value to restabilize the system. Supercritical and subcritical pitchfork bifurcations are illustrated in Fig. 11.
B. Stability ideas in feedback systems
As mentioned before, in a closed-loop system, varying
parameters such as Kp or Ki in the feedback law will
continuously vary the system’s eigenvalues, raising the
possibility that the system will become unstable. As
usual, one can also look at the situation in frequency
space. If one has a closed-loop transfer function T
= KG / 共1 + KG兲, one can see that if KG ever equals −1,
the response will be infinite. This situation occurs at the
bifurcation points discussed above. 共Exponential growth
implies an infinite steady-state amplitude for the linear
system.兲 In other words, in a feedback system, instability
will occur when 兩KG兩 = 1 with a simultaneous phase lag
of 180°.
The need for a phase lag of 180° implies that the
open-loop dynamical system 共combination of the original system and feedback compensation兲 must be at least
third order to have an instability. Since the transfer function of an nth-order system has terms in the denominator of order sn, the frequency response will asymptotically be 共i␻兲−n, which implies a phase lag of n␲ / 2. 共This
is just from the i−n term.兲 In other words, as Figs. 3 and
4 show, a first-order system lags 90° at high frequencies,
Rev. Mod. Phys., Vol. 77, No. 3, July 2005
(iω/Ki)(1 − τ²ω²) − 2τω²/Ki + 1 = 0.
Both real and imaginary parts of this complex equation must vanish. The imaginary part implies ω* = 1/τ, while the real part implies Ki* = 2/τ; i.e., when Ki is increased to the critical value 2/τ, the system goes unstable and begins oscillating at ω* = 1/τ.
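This threshold can be confirmed numerically. The sketch below (Python; τ = 1 and the frequency grid are hypothetical choices) evaluates the loop gain KG at s = iω, finds the frequency where the lag reaches 180°, and reads off the critical integral gain:

```python
import numpy as np

# Numerical check of the threshold: loop gain K*G with K = Ki/s and
# G = 1/(1 + tau*s)^2, evaluated at s = i*omega for tau = 1 and Ki = 1.
tau = 1.0
omega = np.linspace(0.01, 10.0, 100_000)
s = 1j * omega
KG = (1.0 / s) / (1.0 + tau * s) ** 2

phase = np.unwrap(np.angle(KG))
i180 = np.argmin(np.abs(phase + np.pi))   # where the lag reaches 180 degrees
omega_star = omega[i180]                  # expect 1/tau
Ki_star = 1.0 / np.abs(KG[i180])          # gain making |K*G| = 1; expect 2/tau
print(omega_star, Ki_star)
```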
Bode plots are useful tools in seeing whether a system will go unstable. Figure 12 shows the Bode magnitude and phase plots for the second-order system described in the paragraph above, with τ = 1. The magnitude response is plotted for two different values of the integral gain Ki. Note how changing a multiplicative gain simply shifts the response curve up on the log-log plot. In this simple case, the phase response is independent of Ki, which would not in general be true. For Ki = 1, we also show explicitly the gain margin, defined as the factor by which the gain must be increased to have instability, at the frequency where the phase lag is 180°. (We assume an open-loop-stable system.) Similarly, the phase margin is defined as the difference between 180° and the phase lag at the frequency where the magnitude response is unity. In Fig. 12, the gain margin is about a factor of 2, and the phase margin is about 20°. A good rule of thumb is that one usually wants to limit the gain so that the gain margin is at least a factor of 2 and the phase margin at least 45°. For small gain margins, perturbations decay slowly. Similarly, a 90° phase margin would correspond to a critically damped response to perturbations, and smaller phase margins give underdamped dynamics. The curve drawn for Ki = 2 shows that the system is at the threshold of instability (transfer-function response = 1, phase lag = 180°).
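Both margins can be computed directly from the analytic loop gain. A minimal sketch (Python; τ = Ki = 1 as in the figure):

```python
import numpy as np

# Gain and phase margins for L(s) = Ki / [s (1 + tau*s)^2], with tau = Ki = 1.
tau, Ki = 1.0, 1.0

# Phase lag = 90 deg (integrator) + 2*arctan(tau*w) (double pole); it reaches
# 180 deg when arctan(tau*w) = 45 deg, i.e., at w180 = 1/tau.
w180 = 1.0 / tau
gain_margin = w180 * (1.0 + (tau * w180) ** 2) / Ki   # = 1/|L(i*w180)|

# |L| = 1 where Ki = w (1 + tau^2 w^2): take the (near-)real positive root.
roots = np.roots([tau ** 2, 0.0, 1.0, -Ki])
w_unity = roots[np.argmin(np.abs(roots.imag))].real
phase_margin = 180.0 - (90.0 + 2.0 * np.degrees(np.arctan(tau * w_unity)))
print(gain_margin, phase_margin)   # about 2 and about 21 degrees
```

The numbers reproduce the margins read off Fig. 12: a gain margin of 2 and a phase margin of roughly 20°.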
Here, we are implicitly assuming the (common) scenario where instability arises due to time delay (two poles with conjugate imaginary parts cross the imaginary s axis together). Another case arises when a real pole crosses the imaginary s axis from the left. Still other cases arise if the system is intrinsically unstable: then, increasing a feedback gain can actually stabilize the system. For example, the unstable inverted pendulum discussed below in Sec. IV.D can be stabilized by increasing the gains of proportional and derivative feedback terms (Sontag, 1998). These cases can all be diagnosed and classified using the Nyquist criterion, which involves examining a polar plot of K(s)G(s)|s=iω for 0 < ω < ∞ in the complex plane (Dutton et al., 1997); see Sec. V.E for examples of such a polar plot. In any case, it is usually clear which case is relevant in a given problem.
FIG. 12. (Color in online edition) Transfer function for a degenerate second-order system with integral control, showing gain and phase margins for Ki = 1. (a) Bode magnitude plot; (b) Bode phase plot.

FIG. 13. Zero-order-hold digitization of an analog signal. Points are the measured values. Note that their average implies a sampling delay of half the sampling interval, Ts.
C. Delays: Their generic origins and effect on stability

We see, then, that integral control, by raising the order of the effective dynamical system by one, will always tend to be destabilizing. In practice, the situation is even worse, in that one almost always finds that increasing the proportional gain also leads to instability. It is worth dwelling on this point to understand why.

As we have seen, instability occurs when K(s)G(s) = −1. Because we evaluate KG at s = iω, we consider separately the conditions |KG| = 1 and phase lag = 180°. Since a proportional-gain law has K = Kp, increasing the gain will almost certainly eventually lead to the first condition's being satisfied.

The question is whether G(s) ever develops a phase lag > 180°. Unfortunately, the answer in practice is that most physical systems do show lags that become important at higher frequencies.

To see one example, consider the effect of reading an experimental signal into a computer using an analog-to-digital (A/D) converter. Most converters use a "zero-order hold" (ZOH). This means that they freeze the voltage at the beginning of a sampling time period, measure it over the sample period Ts, and then proceed to freeze the next value at the start of the next sampling interval. Figure 13 illustrates that this leads to a delay in the digitized signal of Δt = Ts/2.

Even in a first-order system, the lag will make proportional control unstable if the gain is too large. The block diagram now has the form shown in Fig. 14, where we have added a sensor with dynamics H(s) = e^−sΔt, corresponding to the digitization lag. From the block diagram, one finds

y(s) = [KG/(1 + KGH)] r(s).

An instability occurs when KGH = −1, i.e., when

[Kp/(1 + sτ)] e^−sΔt = −1.
Since |H| = 1, |KGH| has the same form as that of the undelayed system, Kp/√(1 + ω²τ²). To find the frequency where the phase lag hits 180°, we first assume that Δt ≪ τ, so that on increasing ω we first pick up the 90° phase shift from the low-pass filter. Then, when e^−sΔt = e^−iωΔt = −i (i.e., ωΔt = π/2), the system is unstable. This occurs for ω* = π/2Δt and
Kp* = √[1 + (ω*τ)²] ≈ (π/2)(τ/Δt),
i.e., the maximum allowed gain will be of order τ/Ts, implying the need to sample much more frequently than the time scale τ in order for proportional feedback to be effective. In other words, the analog-to-digital-induced lag—and any other lags present—will limit the maximum amount of gain one can apply.
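This gain limit is easy to see in a direct simulation. In the sketch below (Python; τ = 1 and Ts = 0.05 are hypothetical values), a first-order plant is driven by a proportional control signal that is sampled and held every Ts. An exact discrete-time analysis of this loop puts the critical gain at 2τ/Ts, consistent with the order-of-magnitude estimate τ/Ts above:

```python
import numpy as np

# First-order plant tau*dy/dt = -y + u under proportional control u = Kp*(r - y),
# with u sampled and held every Ts (zero-order hold). Hypothetical parameters.
def simulate(Kp, tau=1.0, Ts=0.05, t_end=20.0, h=0.001, r=1.0):
    y, u = 0.0, 0.0
    n_hold = int(Ts / h)
    ys = []
    for i in range(int(t_end / h)):
        if i % n_hold == 0:
            u = Kp * (r - y)          # controller acts on the held sample only
        y += h * (-y + u) / tau       # Euler step of the plant
        ys.append(abs(y))
    return np.array(ys)

# The discrete loop pole is exp(-Ts/tau) - Kp*[1 - exp(-Ts/tau)]; it leaves the
# unit circle near Kp = 2*tau/Ts = 40. Below that the loop settles; above, it rings up.
print(simulate(30.0)[-500:].max())    # bounded
print(simulate(50.0)[-500:].max())    # grows without bound
```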
FIG. 14. Block diagram of a feedback system with nontrivial sensor dynamics, given by H(s).

Although one might argue that delays due to analog-to-digital conversion can be avoided in an analog feedback loop, there are other generic sources of delay. The most obvious one is that one often wants to control higher-order systems, where the delay is already built into the dynamics one cares about. For example, almost any mechanical system is at least second order, and if
one wants integral control, the effective system is third order. A more subtle point is that the ODE models of dynamical systems that we have been discussing are almost always reductions from continuum equations. Such a system will be described by partial differential equations that have an infinity of normal modes. The larger the system, the more closely spaced in frequency are the modes. Thus, if one measures G(iω) at higher frequencies, we expect to see a series of first- and second-order frequency responses, each one of which adds its 90° or 180° phase lag to G.
In the limit of large systems and/or high frequencies, the modes will be approximately continuous, and we will be guaranteed that the system will lag by 180° for high enough frequencies. We can consider an example [simplified from a discussion in Forgan (1974)] of controlling the temperature of a thermometer embedded in a long (semi-infinite) bar of metal, with a heater at one end; see Fig. 15. The temperature field in the bar obeys, in a one-dimensional approximation,
D ∂²T/∂x² = ∂T/∂t,
where D = λ/ρCp is the thermal diffusivity of the bar, with λ the thermal conductivity, ρ the density, and Cp the heat capacity at constant pressure. The boundary conditions are
T(x → ∞) = T∞,

−λ (∂T/∂x)|x=0 = J0 e^−iωt,
with J0 the magnitude of the power input.11 In order to construct the transfer function, we assume a sinusoidal power input at frequency ω. The solution to Eqs. (4.11)–(4.13) is given by

[T(x,t) − T∞]/(J0 e^−iωt) = [1/(√2 λk)] e^i(kx+π/4) e^−kx,
where k = √(ω/2D) and where we have written the solution in the form of a transfer function, evaluated at s = iω. If, as shown in Fig. 15, the thermometer is placed at a distance L from the heater, there will be a phase lag given by kL + π/4. This will equal 180° at a critical frequency
ω* = (9π²/8)(D/L²).
In other words, there will always be a mode whose frequency ω* gives the critical phase lag. Here, L²/D may be thought of as the characteristic lag, or the time it
One usually imposes an ac voltage or current across a resistive heating element. Because the power is the square of either of these, the heater will inject a dc plus 2ω signal into the experiment. One can compensate for this nonlinearity by taking, in software, the square root of the control signal before sending it to the heater.
takes heat to propagate a distance L in the bar. In controlling the temperature of such a system, the controller gain will thus have to be low enough that at ω*, the overall transfer function of the system [Eq. (4.14) multiplied by the controller gain] is less than 1. The calculation is similar to the one describing digitization lag.

The overall lesson is that response lags are generic to almost any physical system. They end up limiting the feedback gain that can be applied before spontaneous oscillation sets in. In many cases, a bit of care in the experimental design can minimize the pernicious effects of such lags. For example, in the temperature-control example discussed above, an immediate lesson is that putting the thermometer close to the heater will allow larger gains and tighter control (Forgan, 1974). Of course, the thermometer should also be near the part of the sample that matters, leading to a possible design tradeoff.
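The payoff of moving the thermometer closer is quantified by the L² scaling of the critical frequency. A minimal sketch (Python; the copper-like diffusivity and the two probe distances are hypothetical illustration values):

```python
import numpy as np

# Critical frequency omega* = (9*pi^2/8) * D / L^2 for the semi-infinite bar,
# with a check that the lag k*L + pi/4 is exactly 180 degrees there.
D = 1.1e-4                                  # thermal diffusivity, m^2/s
for L in (0.01, 0.002):                     # probe at 10 mm, then at 2 mm
    omega_star = (9 * np.pi ** 2 / 8) * D / L ** 2
    k = np.sqrt(omega_star / (2 * D))
    lag_deg = np.degrees(k * L + np.pi / 4)
    print(L, omega_star, lag_deg)           # closer probe -> higher omega*
```

Moving the probe from 10 mm to 2 mm raises ω* by a factor of 25, allowing correspondingly more gain before oscillation.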
D. Nonminimum-phase systems

Delays, then, are one generic source of instability. Another arises when the system transfer function G(s) has a zero in the right-hand plane. To understand what happens, consider the transfer functions G1(s) = 1 and G2(s) = (1 − s)/(1 + s). Both have unit amplitude response for all frequencies [they are "all-pass" transfer functions (Doyle et al., 1992)], but G1 has no phase lag, while G2 has a phase lag that tends to 180° at high frequencies. Thus all of our statements about leads and lags and about gain and phase margins must be revised when there are zeros in the right-hand side of the s plane. Such a transfer function describes a nonminimum-phase system, in control-theory jargon.12 In addition to arising from delays, such systems can arise, for example, in controlling the position of floppy structures—e.g., the tip of a fishing rod or, to use a time-honored control example, the bob of a pendulum that is attached to a movable support. (Think of balancing a vertical stick in your hand.)
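The all-pass behavior of G2 is quickly verified numerically. A minimal sketch (Python; the sample frequencies are arbitrary):

```python
import numpy as np

# The all-pass filter G2(s) = (1 - s)/(1 + s): unit gain at every frequency,
# but a lag of 2*arctan(omega) that sweeps from 0 to 180 degrees.
omega = np.array([0.01, 1.0, 100.0])
G2 = (1 - 1j * omega) / (1 + 1j * omega)
mag = np.abs(G2)                        # = 1, 1, 1
lag = -np.degrees(np.angle(G2))         # ~ 1, 90, 179 degrees
print(mag, lag)
```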
The term "nonminimum phase" refers to Bode's gain-phase relationship, which states that for any transfer function L(s) with no zeros or poles in the right-hand plane, if L(0) > 0, the phase lag φ is given by

φ(ω0) = (1/π) ∫−∞^+∞ [d ln|L(iω)|/dν] ln coth(|ν|/2) dν,

with ν = ln(ω/ω0) the normalized frequency. [See Özbay (2000), but note the misprint.] The phase at ω0 thus depends on |L| over all frequencies. However, if |L| ∼ ω⁻ⁿ over at least a decade centered on ω0, then Eq. (4.16) is well approximated by φ(ω0) ≈ −nπ/2. Transfer functions that have more than this minimum lag are nonminimum-phase systems. See Franklin et al. (1998), pp. 254–256. One can show that a general transfer function G(s) can always be written as the product G1(s)G2(s), where G1(s) is minimum phase and G2(s) is an all-pass filter with unit amplitude response; see Doyle et al. (1992), Sec.
FIG. 15. Control of the temperature of a probe located a distance L inside a metal bar, with heater at the bar's end.

If a nonminimum-phase system has an odd number of zeros in the right-hand plane, the time response to a step will initially be in the direction opposite the control signal, so that is one indication that such behavior is present (Vidyasagar, 1986). The controller must be "smart enough" to handle the opposite response of the system. Another way to see that nonminimum-phase systems can be difficult to control is to note the effects of increasing the overall controller gain. We write the system transfer function G(s) = NG(s)/DG(s) and the controller as K(s) = KpNK(s)/DK(s), where Kp is an overall gain. The loop gain L(s) = K(s)G(s) ≡ KpN(s)/D(s), and the sensitivity function S(s) (Sec. III.A) is given by

S(s) = 1/(1 + L) = D(s)/[D(s) + KpN(s)].

Instabilities arise when the denominator of S, χ(s) ≡ D(s) + KpN(s), equals zero. Clearly, as Kp increases, the zeros of χ(s) move from the roots of D(s) to the roots of N(s). In other words, the poles of the closed-loop system move towards the zeros of the open-loop system.

As an example, consider

G(s) = (s − z)/[(s − p)(s − p*)]

and K(s) = Kp. Then χ(s) = s² + [−(p + p*) + Kp]s + pp* − Kpz. Figure 16 graphs the closed-loop poles in the complex s plane as a function of Kp. Such "root-locus" plots can be useful in getting an intuition about a system's dynamics.13 Note that while one often plots the pole movement as a function of overall gain, one can do the same for any parameter.

Since the point of a root-locus plot is to build intuition, one might wonder whether the traditional way of presenting such plots, as exemplified by Fig. 16, is the best one can do. Indeed, the more modern way to make a root-locus plot is to do so interactively, using a mouse to drag a slider controlling some parameter while updating the pole positions in real time. Such features are implemented, for example, in the MATLAB Control Toolbox (see Sec. V.C.2 on commercial tools) or are easy to create in more general graphics applications such as IGOR PRO (Wavemetrics, Inc.) or SYSQUAKE (Calerga Sarl).

To get a feel for how and when nonminimum-phase systems arise, we consider the example of balancing a rod in one's hand (Doyle et al., 1992). See Fig. 17 for an illustration, where the hand is idealized as a mass M constrained to move horizontally, and the rod is taken to be a simple pendulum with massless support of length L and a bob of mass m. The deflection of the rod about the unstable vertical equilibrium is given by θ. The position of the hand (and bottom of the rod) is [x1(t), 0] and that of the rod end (bob) is [x2(t), y2(t)]. The Lagrangian L = T − V is

L = ½Mẋ1² + ½m(ẋ2² + ẏ2²) − mgL cos θ,
with x2 = x1 + L sin θ and y2 = L cos θ. If we neglect friction, this leads to nonlinear equations of motion for x1 and θ:

(M + m)ẍ1 + mL(θ̈ cos θ − θ̇² sin θ) = u,

m(ẍ1 cos θ + Lθ̈ − g sin θ) = d,

where u(t) is the force exerted by the hand and d(t) is any disturbing torque on the rod itself. Linearizing about the unstable vertical equilibrium, we have

(M + m)ẍ1 + mLθ̈ = u,

ẍ1 + Lθ̈ − gθ = d.
The system transfer function from u to x1 (rod bottom) is then easily found to be G1(s) = (Ls² − g)/D(s), with D(s) = s²[MLs² − (M + m)g]. One can see that G1 is unstable, with right-hand-plane (RHP) poles at 0, 0, and √[(M + m)g/ML], and nonminimum phase, with a right-hand-plane zero at √(g/L). On the other hand, the transfer function between u and x2 (rod top) is G2(s) = −g/D(s). This has the same poles as G1 but lacks the right-hand-plane zero. Thus G1 is nonminimum phase, while G2 is not. The conclusion is that it is much easier to balance a rod looking at the top (i.e., measuring x2) than it is looking at the bottom (i.e., measuring x1). The reader can easily check this; a meter stick works well as a rod.
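The pole and zero locations are easy to check numerically. In this sketch (Python), the masses and length are hypothetical illustration values:

```python
import numpy as np

# Poles and zeros for the balancing rod.  G1 = (L s^2 - g)/D(s) (hand, x1) has
# a right-hand-plane zero; G2 = -g/D(s) (bob, x2) shares D(s) but has no zeros.
M, m, L, g = 1.0, 0.1, 1.0, 9.8    # hypothetical values

poles = np.roots([M * L, 0.0, -(M + m) * g, 0.0, 0.0])  # D(s) = s^2 [M L s^2 - (M+m) g]
zeros_G1 = np.roots([L, 0.0, -g])                       # +/- sqrt(g/L)

print(np.sort(poles.real))      # includes +sqrt((M+m) g / (M L)): unstable
print(np.sort(zeros_G1.real))   # includes +sqrt(g/L): nonminimum phase
```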
An example of a nonminimum-phase system that arises in physics instrumentation is found in a recent study of the dynamics of a piezoelectric stage (Salapaka et al., 2002). The article has a good discussion of how the odd impulse response of such nonminimum-phase systems makes them more difficult to control, and of what to do about it.
E. MIMO vs SISO systems

In Sec. III.C, we distinguished between multiple-input–multiple-output (MIMO) and single-input–single-output (SISO) systems. Here, we discuss briefly why MIMO systems are usually harder to control. Although MIMO systems are usually defined in the state-space formulation [e.g., Eq. (2.2)], they can also be defined in the frequency domain. One uses a matrix of transfer functions G̃(s), where the ij element, gij(s), is the ratio xi(s)/uj(s), that is, the ratio of the ith output to the jth input. Let us consider a simple example that illustrates
the extra difficulties that multiple inputs and outputs can bring. Recall that for the SISO case, a first-order system [1/(1 + τs)] is stable under proportional feedback for any positive gain k [Eqs. (3.3) and (3.4)]. Consider now a 2×2 generalization,

G̃(s) = ( 1/(1 + τ1s)   1/(1 + τ2s)
         1/(1 + τ2s)   1/(1 + τ1s) ).

If one prefers a physical picture, think about a shower where pipes carrying hot and cold water are mixed to form a single stream. The shower output is characterized by an overall flow and temperature. Assuming there are separate hot and cold shower knobs, one has two control inputs and two outputs. If the τ's in Eq. (4.22) were all different, they would reflect the individual time constants to change flow and temperature. The rather unrealistic choice of τ's in Eq. (4.22) simplifies the algebra.

Let us now try to regulate the shower's temperature and flow by adjusting the hot and cold flows with proportional gains k that are assumed identical for each. The controller matrix K̃(s) is then k1̃, with 1̃ the 2×2 identity matrix. The loop stability is determined by the matrix analog of Eq. (3.2), S̃ = (1̃ + G̃K̃)⁻¹, where the closed-loop system is unstable when one of the eigenvalues of S̃ has positive real part for any value of s = iω, generalizing the ideas of Sec. IV.B. (Recall that the order of matrix multiplication matters.) After some manipulation, one can easily show that the eigenvalues of S̃ are given by

λ±(s) = 1/{1 + k[1/(1 + τ1s) ± 1/(1 + τ2s)]}.

The negative root λ− has poles at

τs ≈ −(1 + kΔτ/2τ) ± √[(1 + kΔτ/2τ)² − 1],

where τ1 ≈ τ2 ≡ τ and Δτ = τ2 − τ1. This implies an instability for gains larger than k* ≈ 2τ/(τ1 − τ2). Thus, for τ1 > τ2, the system will be unstable using only proportional gain, in contrast to its SISO analog. Comfortable showers are not easy to achieve, as anyone who has been in the shower when someone else flushes the toilet can attest!

The reason that MIMO systems are so "touchy" is roughly that the sensitivity matrix S̃ has different gains in different directions.14 If the largest and smallest eigenvalues of S̃ are λmax and λmin, respectively, then the maximum gain one can apply before the system is unstable is determined by λmax, while the closed-loop bandwidth is set by λmin. Thus systems with widely ranging eigenvalues will have compromised performance. A measure of the severity of this compromise is given by the ratio λmax/λmin, known as the condition number of the matrix S̃. (More generally, this discussion should be phrased in terms of singular values, rather than eigenvalues.15 Also, since singular values—like eigenvalues—depend on frequency, the statements above should be interpreted as having to apply at each frequency.)

FIG. 16. (Color in online edition) Root-locus plot of the poles of Eq. (4.18) as a function of the gain, with z = 1 and p = −1 + i. The crosses denote the poles when Kp = 0, while the circle denotes the zero. The two poles approach each other, meeting and becoming real when Kp = −4 + 2√5 ≈ 0.47. One root then approaches 1 as Kp → ∞, crossing zero at Kp = 2, while the other root goes to −∞. The system becomes unstable at Kp = 2.

FIG. 17. Model of a hand balancing a vertical rod.

Another reason that the specific G̃(s) in Eq. (4.22) is difficult to control is that it has rank 1 at zero frequency. (The determinant vanishes at dc.) The system is thus nearly uncontrollable—in the technical and intuitive senses of the word—at low frequencies.

The eigenvalue is not the right quantity to characterize the gain, in general, for two reasons: First, whenever the number of inputs differs from the number of outputs, the matrix S̃ is not square. Second, eigenvalues give only the gain along certain directions and can miss subtle types of "cross gains." For example, consider the matrix

S̃ = ( 1    0
      100  1 ),

which implies that the unit input (1 0)ᵀ gives rise to an output (1 100)ᵀ, which has a gain of roughly 100, even though the two eigenvalues of S̃ are unity. The appropriate generalization of the notion of eigenvalue is that of a singular value. The singular-value decomposition of an m×n matrix S̃ is given by S̃ = Ũ Σ̃ Ṽᵀ, where Ũ is an m×m unitary matrix, Ṽᵀ is the transpose of an n×n unitary matrix, and where Σ̃ is an m×n matrix that contains k = min(m, n) non-negative singular values σi (Skogestad and Postlethwaite, 1996). For example, one can show that

σmax(S̃) = max over w ≠ 0 of ‖S̃w‖2/‖w‖2,

where ‖·‖2 denotes the Euclidean norm. Equation (4.26) states that σmax is the largest gain over all inputs w. This quantity is then the largest singular value of S̃. One can also show that the σ's are the square roots of the eigenvalues of S̃ᵀS̃. In the example of Eq. (4.25), one finds that the singular values σ are ≈100 and ≈0.01, showing that the singular values capture better the "size" of S̃ than do the eigenvalues.

We do not have space here to explore the various strategies for dealing with a system G̃ that is ill conditioned (i.e., has a large condition number), except to point out the obvious strategy that if one can implement a controller K̃(s) = G̃(s)⁻¹, then one will have effectively skirted the problem. Indeed, any controller leading to a diagonal "loop matrix" L̃(s) = G̃K̃ will reduce the problem to one of independent control loops that may be dealt with separately, thus avoiding the limitations discussed above. Note that it may be sufficient to diagonalize L̃ in a limited frequency range of interest. Unfortunately, this "ideal" strategy usually cannot be implemented, for reasons of finite actuator range and other issues we have discussed in connection with the SISO case. One can try various compromises, such as transforming L̃ to a block-diagonal matrix and reducing the condition number (Skogestad and Postlethwaite, 1996).

V. Implementation and Some Advanced Topics

The ideas in the above sections are enough to make a good start in applying feedback to improve the dynamics of an experimental system. There are, however, some subtle points about implementing feedback in practice, which lead to more advanced methods. First, we have previously assumed that one knows the system dynamics G(s), but this often needs to be determined experimentally. We accordingly give a brief introduction to the subject of experimental measurement of the transfer function and "model building." If the resulting model is too complicated, one should find a simpler, approximate model, using "model reduction." Second, one must choose a controller, which implies choosing both a functional form and values for any free parameters. These choices all involve tradeoffs, and there is often an optimum way to choose. Third, modern control systems are usually implemented digitally, which introduces issues associated with the finite sampling time interval Ts. Fourth, measurements are almost always contaminated by a significant amount of noise, which, as we have already seen in Eq. (3.13), can feed through to the actuator output of a controller. Fifth, control is always done in the face of uncertainty—in the model's form, in the parameters, in the nature of disturbances, etc.—and it is important that feedback systems work robustly under such conditions. We consider each of these in turn.

A. Experimental determination of the transfer function
Although often overlooked by hurried experimentalists, the proper place to begin the design of a control loop is with the measurement of the system transfer function G(s), the frequency-dependent ratio of system output to input. [Even if you think you know what G(s) should be, it is a good idea to check what it actually is.] This topic is deceptively simple, because it actually implies four separate steps: First, one must measure the transfer function experimentally; second, one must fit a model transfer function to it; third, because a full description of the experimental transfer function usually leads to very high-order systems, one needs a way to approximate a high-order system accurately by a lower-order system; and fourth, one should always ask whether the system can be "improved" to make control easier and more effective.
1. Measurement of the transfer function
The transfer function G(s) can be inferred from Bode amplitude and phase plots. The simplest way to make such measurements requires only a function generator and an oscilloscope (preferably a digital one that measures the amplitudes and phases between signals). One inputs a sine wave from the function generator into the experimental input u(t) and records the output y(t). (We assume a SISO system, but the MIMO generalization is straightforward.) By plotting the input and output directly on two channels of an oscilloscope, one can read off the relative amplitude and phase shifts as a function of the driving frequency. A better technique is to use a lock-in amplifier, which gives the amplitude and phase shifts directly, with much higher precision than an oscilloscope. Lock-ins can often be programmed to sweep through frequencies automatically. ("Dynamic signal analyzers" and "network analyzers" automate this task.) Bear in mind, though, that the transfer function measured may not be fixed for all time. For example, the frequency of a mechanical resonator varies with its external load. For this reason, a good control design should not depend crucially on an extraordinarily accurate determination of the transfer function. We will pursue this point in Sec. V.E, on robust control.
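The lock-in measurement amounts to demodulating the response against the drive and its quadrature. A minimal software sketch (Python; a simulated first-order system with τ = 1 stands in for real measured data):

```python
import numpy as np

# Software lock-in sketch: recover the amplitude ratio and phase shift of a
# response at the drive frequency from its in-phase and quadrature components.
fs, f, tau = 1000.0, 5.0, 1.0
t = np.arange(0, 10, 1 / fs)                # 50 full drive periods
w = 2 * np.pi * f
gain_true = 1 / np.sqrt(1 + (w * tau) ** 2)
phase_true = -np.arctan(w * tau)
y = gain_true * np.sin(w * t + phase_true)  # steady-state response to sin(w t)

X = 2 * np.mean(y * np.sin(w * t))          # in-phase component
Y = 2 * np.mean(y * np.cos(w * t))          # quadrature component
amp, phase = np.hypot(X, Y), np.arctan2(Y, X)
print(amp, np.degrees(phase))               # one point on the Bode plot
```

Sweeping f and repeating gives the full Bode plot, point by point.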
The above discussion shows how to measure the transfer function directly in the frequency domain. Whenever such a measurement is possible, the results are very intuitive, and I would recommend this approach. However, for slow systems, a very long time may be required to measure the transfer function, and one may prefer time-domain methods—indeed, they are standard in the control field. The basic idea is to excite the system with a known input u(t) and measure the response y(t). One then computes the correlation function between the input and output, Ruy(τ) = ⟨u(t)y(t + τ)⟩, and also Ruu(τ) = ⟨u(t)u(t + τ)⟩, the autocorrelation function of the input. The transfer function G(s) is then found by taking the Fourier transforms of Ruy and Ruu and computing G = Ruy/Ruu.16 Time-domain methods are more efficient than frequency methods because u(t) can contain all frequencies—using the system's linearity, one measures all frequencies in a Bode plot simultaneously.17
In theory, one could choose u(t) to be an impulse, which means that Ruy(τ) would be the impulse response function [the inverse Fourier transform of G(s = iω)]. In practice, it is often hard to inject enough energy to make an accurate measurement. A step-function input, which has a power spectrum ∼1/ω², is very easy to implement and injects enough energy at low frequencies. At higher frequencies, though, the injected energy is low and noise dominates. Another good choice for u(t) is a pseudorandom binary sequence (PRBS), which is a kind of randomly switching square wave that alternates stochastically between two values. This shares with the delta-function input the property of having equal energy at all relevant frequencies. For an introduction, see Dutton et al. (1997); for full details, see Ljung (1999).
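A minimal numerical illustration of the time-domain method (Python; the first-order plant, sample time, and frequency band are hypothetical stand-ins for a real experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Identify a first-order system from a random binary ("PRBS-like") input:
# estimate G at each frequency from cross and input spectra, G = S_uy/S_uu,
# equivalent to Fourier-transforming R_uy and R_uu.
dt, tau, N = 0.01, 1.0, 100_000
u = rng.choice([-1.0, 1.0], size=N)
y = np.zeros(N)
for n in range(N - 1):                        # Euler step of tau*dy/dt = -y + u
    y[n + 1] = y[n] + dt * (-y[n] + u[n]) / tau

U, Y = np.fft.rfft(u), np.fft.rfft(y)
G_est = (np.conj(U) * Y) / (np.abs(U) ** 2)   # = S_uy / S_uu
w = 2 * np.pi * np.fft.rfftfreq(N, d=dt)

band = (w > 0.9 / tau) & (w < 1.1 / tau)      # average bins near w = 1/tau
print(np.mean(np.abs(G_est[band])))           # ~ 1/sqrt(2), as for 1/(1 + i w tau)
```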
At this stage, it is also good to check the linearity of
the system. At any frequency, one can vary the input
amplitude and record the output amplitude. Note that
nonlinear effects can be hysteretic—the response upon
increasing the input may be different from that obtained
upon decreasing the input. Almost all of the techniques
described here are designed to work with linear systems.
They may be adequate for small nonlinearities but will
fail otherwise (cf. Sec. VI). Remember, though, that one
can use linear techniques to stabilize a system about a
setpoint, even if there are strong nonlinearities as the
setpoint is varied. This was the case in the interferometer example discussed in Sec. III.D.1. If the system is
locally linear, the structure of the measured transfer
function is usually constant as one varies the set point,
but the positions of poles and zeros vary smoothly. One
can then design a controller whose parameters vary
smoothly with set point, as well.
Here is a quick proof, following Doyle et al. (1992). In the time domain, y(t) = ∫−∞^∞ G(t′)u(t − t′)dt′. Multiplying both sides by u(t) and shifting by τ gives u(t)y(t + τ) = ∫−∞^∞ G(t′)u(t)u(t + τ − t′)dt′. Averaging gives Ruy(τ) = G ∗ Ruu(τ). Then Fourier transform and use the convolution theorem.
Time-domain methods may also be generalized to nonlinear systems. One looks at higher-order correlations between input and output, e.g., Ryuu(τ1, τ2) = ⟨y(t)u(t − τ1)u(t − τ2)⟩ (Wiener, 1958). These techniques are widely used to study neuron-firing dynamics (Rieke et al., 1997).
2. Model building
Given measurements of the transfer function at a set of frequencies si, G(si), one should work out a reasonable analytic approximation. Of course, any physics of the system should be included in the model, but often systems are too complicated to model easily. In such cases, one tries to make a model from standard elements—poles and zeros, which individually correspond to low- and high-pass filters (first order) and to resonances and antiresonances (second order)—as well as delays, which technically are "infinite order," since they cannot be written as a ratio of finite polynomials. It usually is possible to do this, as most often the system can be decomposed into modes with reasonably well-separated frequencies. If there are many nearly degenerate modes, or if the system is best described as a spatially extended system, then these techniques can break down, and one should refer to the various control-theory texts for more sophisticated approaches. Usually, one can do quite well by looking for 90° and 180° phase shifts and identifying them with first- and second-order terms. Remember that lags correspond to terms in the denominator and leads (forward phase shifts) to terms in the numerator. Any linear phase shift with frequency on the Bode plot corresponds to a lag in the time domain. (If the signal is digitized and fed to a computer, there will be such a lag, as discussed in Sec. IV.C; cf. Sec. V.C, below.) Once there is a reasonable functional form, one can determine the various coefficients (time constants, damping rates, etc.) by a least-squares fit to the measured transfer function. Alternatively, there are a number of methods that avoid the transfer function completely: from a given input u(t) and measured response y(t), they directly fit the coefficients of a time-domain model or directly give pole and zero positions (Ljung).
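As a concrete sketch of the least-squares step (the model form, parameter values, and starting guesses below are illustrative assumptions, not taken from the text), one can fit a second-order resonance plus delay to sampled values of G(si) by stacking the real and imaginary parts of the residual:

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(0, 3, 60)           # measurement frequencies, rad/s

def model(p, w):
    """Second-order resonance with a pure delay (an assumed form)."""
    w1, zeta, tau = p
    s = 1j * w
    return np.exp(-s * tau) / (1.0 + 2.0 * zeta * s / w1 + (s / w1) ** 2)

# Stand-in for the measured G(s_i); in practice this comes from the experiment
G_meas = model([200.0, 0.1, 1e-3], w)

def resid(p):
    d = model(p, w) - G_meas
    return np.concatenate([d.real, d.imag])   # fit Re and Im simultaneously

fit = least_squares(resid, x0=[150.0, 0.2, 5e-4])
print(fit.x)   # should recover w1 = 200, zeta = 0.1, tau = 1e-3
```

Fitting real and imaginary parts together (rather than magnitude alone) is what pins down the delay, since the delay appears only in the phase.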
3. Model reduction
If the result of the model-building exercise is a high-order system, then it is often useful to seek a lower-order approximation. First, such an approximation will be simpler and faster to compute. This may be important in implementing a controller, as we have already seen that state-space controllers require observers, which require one to model the system dynamics on a computer much more rapidly than the time scales of the real dynamics. (The same remark applies to the choice of a controller—lower-order ones are easier to implement.) The second reason for preferring low-order approximations is that the smaller, higher-order parts of the dynamics are not very robust, in the sense that their parameters tend to vary more than those of the "core" parts of the dynamical system. We will return to this in our discussion of robust systems, below (Sec. V.E).
Given Bode magnitude and phase plots of experimental data for a given transfer function, how should one
simplify the functional form? The obvious strategy is
truncation: one simply keeps enough terms to fit the
transfer function accurately up to some maximum frequency ωmax. At a minimum, one should have ωmax > ω0, the desired feedback bandwidth. Ideally, one should keep enough terms that the gain (relative to the zero-frequency, or dc, gain) at ωmax is ≪1. Note that the usual structure of transfer functions means that higher-order terms are multiplied, not added, onto lower-order terms, i.e., that we truncate a system

G(s) = [(s − z1)(s − z2)⋯] / [(s − p1)(s − p2)⋯]

to a form

Gt(s) = [(s − z1′)⋯(s − zN′)] / [(s − p1′)⋯(s − pN′)],

where the last (Nth) pole or zero occurs at a frequency ≈ωmax. Note that the zeros and poles of the reduced system—found by refitting the data to the truncated form for Gt(s)—will in general differ from the "exact values" of the lower-order zeros/poles of the full system.18
While the above method seems straightforward, it is not always the best thing to do. For example, if there happens to be a high-frequency mode (or modes) with a large amplitude, neglecting it may not be wise. A more subtle situation is that there may be so many high-frequency modes that even though the amplitude of any one is small, they may have a collective effect. To deal with such cases, one strategy is to try to order the system's modes not by frequency but by some measure of their "importance." One definition of importance is in terms of the Hankel singular values of the system transfer function G(s). The detailed discussion is beyond the scope of this tutorial, but the rough idea is to appeal to the notions of controllability and observability of the
system modes discussed above.

18 Note that my definition of truncation differs from that used in standard control texts, such as Skogestad and Postlethwaite (1996), Chap. 11. There, the authors first define a truncation in which the poles and zeros are held fixed. If one then looks at the difference between the exact transfer function and its truncated approximation, one sees that they differ most at low frequencies and that the difference goes to zero only at infinite frequency. But the whole point of doing a truncation is almost always to capture the low-frequency dynamics while disregarding high frequencies! Thus this kind of truncation seems unlikely to be useful in practice. Skogestad and Postlethwaite (1996) then go on to introduce "residualization" (physicists would use the terms "slaving" or "adiabatic elimination" of fast variables), where, in the time domain, the fast-mode derivatives are set to zero and the steady states of the fast modes are solved for. These instantaneous or adiabatic steady states are then substituted wherever fast variables appear in the equations for the remaining slow modes. The equations for the slow modes then form a smaller, closed set. If the modes are well behaved—i.e., well separated in frequency from each other—then the procedure described in the text, fitting to a form up to some maximum frequency, will give essentially equivalent results. Both methods share the property of agreeing well at low frequencies up to some limit, above which the approximation begins to fail.

The singular values of
the matrix describing controllability rank the effect of an input on a particular system mode, while the singular values of the matrix describing observability rank the effect of the dynamics on a particular output mode. The product of the two relevant matrices ("gramians") then gives the strength of the "input-output" relationship of each mode of the system. One can show that it is possible to use the freedom in defining a state-space representation of a system to make these gramian matrices equal—the "balanced representation." Ordering these balanced modes by their singular values and retaining only the largest ones gives a systematic approach to model reduction (Skogestad and Postlethwaite, 1996).
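The gramian machinery can be seen in action with a minimal numerical sketch (the two-mode state-space system is invented for illustration): the controllability and observability gramians solve Lyapunov equations, and the Hankel singular values, the square roots of the eigenvalues of their product, rank the modes for balanced truncation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two-mode system (illustrative): a slow mode that couples strongly to the
# input and output, and a fast mode that couples weakly
A = np.array([[-1.0, 0.0],
              [0.0, -100.0]])
B = np.array([[1.0], [0.1]])
C = np.array([[1.0, 0.1]])

# Gramians: A P + P A' = -B B' (controllability); A' Q + Q A = -C' C (observability)
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of P Q
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]
print(hsv)   # the weakly coupled fast mode has a far smaller singular value
```

A balanced truncation would keep only the modes with large Hankel singular values, here the slow mode, regardless of where the modes sit in frequency.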
The kind of systematic model reduction discussed above has recently attracted the attention of statistical physicists, who are faced with a similar task when "coarse graining" dynamics in a thermodynamic system. For example, if one considers a small number of objects in contact with many other elements of a "heat bath," then a common strategy is to integrate out the degrees of freedom corresponding to the bath. One thus derives a reduced dynamics for the remaining degrees of freedom. A similar task arises when one uses the renormalization group (Goldenfeld, 1992). Coarse graining is thus a kind of model reduction. The interest in control strategies is that they give a systematic way of handling a general system, while the usual coarse-graining strategy is more ad hoc and fails, for example, when naively applied to spatially inhomogeneous systems. A recent paper by Reynolds (2003) explores these issues.
4. Revisiting the system
We close by noting that determination of the transfer function has two meanings: Whatever the system dynamics are, the experimentalist should measure them; but the experimentalist also has, in most cases, the power to influence greatly, or determine, the system itself. If a system is hard to control, think about ways of changing it to make control easier. (This is a basic difference in philosophy from most engineering texts, where the system—the "plant" in their jargon—is usually a given.) Often, the experimentalist simultaneously designs both the physical (mechanical, electrical) aspects and the control aspects of an experiment. We have already seen two examples: in temperature control, where minimizing the physical separation between heater and thermometer makes a system much easier to control and allows higher performance; and in balancing the rod, where one should look at the top of the rod, not the bottom. More generally, one should minimize the delay and/or separation between the actuator and its sensor. Another area in which good design plays a role is in the separation of modes. As we have discussed above, closely spaced modes are difficult to control, and it usually pays to make different modes as widely spaced in frequency as possible. For example, if one is trying to isolate vibrations, the apparatus should be as small and cube shaped as possible, to maximize the frequency separation between the soft isolating spring and lumped experimental mass, on the one hand, and the internal modes of the experimental apparatus, on the other. In the next section, we shall see that systems that are up to second order over the frequency range of interest may be perfectly controlled by the elementary "PID" law commonly found in commercial controllers, but higher-order systems in general cannot be adequately controlled by such laws.
Another example of how the design of an experimental system determines the quality of control comes from my own laboratory, where we routinely regulate temperature to 50 μK rms near room temperature, i.e., to fractional variations of 2 × 10⁻⁷ (Metzger, 2002; Yethiraj et al., 2002). In order to obtain such performance, the simple proportional-integral (PI) law described above sufficed. [A more sophisticated control algorithm would have further improved the control, but we did not need better performance. As an example, Barone et al. (1995) use loop-shaping techniques—see Sec. V.B.2 below—to achieve 20-μK control of the temperature of a laser diode.] The crucial steps all involved the physical and electrical design of the system itself. One important idea is to use a bridge circuit so that the error signal is centered on zero rather than about a finite level and may thereafter be amplified. We minimized sensor noise by using a watch battery to power the bridge.19 The overall performance turned out to be set by the temperature variations of the nonsensor resistors in the bridge. The best place to have put these would have been next to the sensor resistor itself, as that is where the temperature is most stable. In our case, that was inconvenient, and we put them inside a thick styrofoam box. Slow variations in the bridge temperature then show up as setpoint drifts that cannot be corrected by the simple feedback loop; however, the temperature was very stable on the 1-min time scale of our particular experiment. To stabilize over longer times, we could have added a second feedback loop to regulate the temperature of the external components of the bridge. The time scales of the two controllers should be well separated to decouple their dynamics. The idea of using nested stages of dynamically decoupled feedback loops can lead to outstanding performance. Perhaps the ultimate example is a four-stage controller developed by the group of Lipa, which achieved stabilities on the order of 10⁻¹⁰ K at low temperatures (2 K) (Day et al., 1997).
B. Choosing the controller
Having identified the system (and, perhaps, having already made the system easier to control), we need to choose the control algorithm.

19 One would be tempted to use a lock-in amplifier to supply an ac voltage to the bridge. The lock-in technique, by centering the bandwidth about a finite carrier frequency, can lower sensor noise by moving the passband to a high enough frequency that 1/f noise is unimportant. In our own case, this was not the limiting factor for performance.

In the frequency-domain approach, this means choosing the dynamics K(s). This task is commonly broken down into two parts: choosing a general form for K(s) and choosing, or "tuning," the free parameters.
1. PID controllers
Probably the most common form for K(s) is the PID controller, which is a combination of proportional, integral, and derivative control:

K(s) = Kp + Ki/s + Kd s,

where Kp, Ki, and Kd are parameters that would be tuned for a particular application. We have already discussed the general motivations for proportional and integral control. The intuitive justification for derivative control is that if one sees the system moving at high "velocity," one knows that the system state will be changing rapidly. One can thus speed the feedback response greatly by anticipating this state excursion and taking counteraction immediately. For example, in controlling a temperature, if the temperature starts falling, one can increase the heat even before the object has cooled, in order to counteract the presumably large perturbation that has occurred. The word "presumably" highlights a difficulty of derivative control. One infers a rapid temperature change by measuring the derivative of the system state. If the sensor is noisy, random fluctuations can lead to large spurious rates of change and to inappropriate controller response. Thus many experimentalists try derivative control, find out that it makes the system noisier, and then give up. Since there are many benefits to derivative control and since spurious response to noise can be avoided, this is a shame, and hopefully this article will motivate experimentalists to work around the potential problems.
In order to better understand derivative control, we can look at it in frequency space (Kd s). The linear s dependence means that the response increases with frequency, explaining why high-frequency sensor noise can have such a great effect. One obvious response is to limit the action of the derivative term by adding one or more low-pass elements to the control law, which becomes

K(s) = (Kp + Ki/s + Kd s) / (1 + s/ω0)ⁿ,

where we have added n low-pass filters with cutoff frequencies all at ω0. Indeed, since no actuator can respond with large amplitude at arbitrarily high frequencies, some kind of low-pass filter will be present at high frequencies, whether added deliberately or not. As long as ω0 is higher than the feedback bandwidth, it will have little effect on the system's dynamics, while limiting the effects of sensor noise. A more sophisticated way to minimize sensor-noise feedthrough, the Kalman filter, will be discussed in Sec. V.D below.
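To make the filtered-derivative idea concrete, here is a minimal discrete-time sketch (the gains, cutoff ω0, time step, and first-order plant are all illustrative choices, not from the text); the derivative estimate passes through one low-pass stage, playing the role of a single factor of (1 + s/ω0) in the denominator:

```python
class PID:
    """Discrete PID with a low-pass-filtered derivative (a sketch;
    gains, cutoff w0, and time step are illustrative choices)."""

    def __init__(self, kp, ki, kd, w0, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.alpha = dt * w0 / (1.0 + dt * w0)  # one first-order filter stage
        self.integral = 0.0
        self.d_filt = 0.0
        self.e_prev = 0.0

    def update(self, e):
        self.integral += e * self.dt
        d_raw = (e - self.e_prev) / self.dt
        self.e_prev = e
        # low-pass the derivative so high-frequency sensor noise is not amplified
        self.d_filt += self.alpha * (d_raw - self.d_filt)
        return self.kp * e + self.ki * self.integral + self.kd * self.d_filt


# Regulate a first-order plant dx/dt = -x + u toward the setpoint r = 1
pid = PID(kp=2.0, ki=1.0, kd=0.1, w0=50.0, dt=0.01)
x = 0.0
for _ in range(5000):                # 50 s of simulated time
    u = pid.update(1.0 - x)
    x += 0.01 * (-x + u)
print(round(x, 3))                   # settles at 1.0 (integral term removes droop)
```

With sensor noise added to the measurement of x, the filtered derivative responds far less violently than the raw difference quotient would.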
2. Loop shaping
Many experimentalists limit their attention to PID controllers (and often just PI controllers if they have no luck with the derivative term). While PID controllers can give good results and have the advantage that each term has an intuitive justification, they are by no means the only possibility. Indeed, the frequency-domain approach we have developed suggests that one can think of choosing K(s) to "sculpt" or "shape" the closed-loop response T(s). For example, given a system G(s), one can invert Eq. (3.2) and write

K(s) = (1/G(s)) · T(s)/[1 − T(s)].

If one wanted T(s) = 1/(1 + s/ω0), for example, one could choose K(s) = (ω0/s) G⁻¹(s). If G(s) = 1/(1 + 2ζs/ω1 + s²/ω1²), then the resulting K(s) has the form 2ζ/ω1 + 1/s + s/ω1², which is the PID form. We thus have another justification for using the PID form—second-order systems are common—and an understanding of why the PID form is not always satisfactory: many systems are of higher than second order over the required frequency range.
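Writing out the inversion for this second-order example makes the gains explicit (the overall factor of ω0, suppressed in the compact form above, sets their scale):

```latex
K(s) = \frac{\omega_0}{s}\,G^{-1}(s)
     = \frac{\omega_0}{s}\left(1 + \frac{2\zeta s}{\omega_1}
        + \frac{s^2}{\omega_1^2}\right)
     = \underbrace{\frac{2\zeta\omega_0}{\omega_1}}_{K_p}
       + \underbrace{\omega_0}_{K_i}\,\frac{1}{s}
       + \underbrace{\frac{\omega_0}{\omega_1^2}}_{K_d}\,s .
```

All three gains scale linearly with the desired closed-loop bandwidth ω0, which is one way to see why aggressive bandwidths demand large control efforts.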
The above technique of "inverse-based controller design" sounds too good to be true, and often it is. One catch is that it often is not possible for the actuator u to give the desired response [u(s) = K(s)e(s), or the equivalent time-domain requirement for u(t)]. All actuators have finite ranges, but nothing in the inversion design limits the requirements on u. One systematic failure is that the higher the order of the characteristic polynomial in the denominator of G (nth order for an nth-order system), the higher the derivative required in K and hence the larger the u required at high frequencies. Even one derivative cannot be implemented at arbitrarily high frequencies and must be cut off by some kind of low-pass filter, as described above. The problem is more severe with higher-order terms in the numerator of K. Thus, in general, one has to augment G⁻¹ at least with low-pass elements. One then has to worry whether the additional elements degrade the controller and whether alternative structures might in fact be better.
More broadly, one can think of choosing a fairly arbitrary form for K(s) in order that the loop L(s) = K(s)G(s) have desirable properties. The general principle is to make the gain high (|L| = |KG| ≫ 1) over the frequency range of interest, to track the reference and reject disturbances, while making sure that the phase lag of the loop is not too close to 180° near |L| = 1, to give an adequate phase margin. Recall that nearly all physical systems and physical controllers have limited response at high frequency, implying that |L| → 0 as ω → ∞. In addition, recall the tradeoff between accurate tracking of a control signal r and rejection of sensor noise n: one wants |L| ≪ 1 at frequencies where noise dominates over disturbances and control signals [see Eq. (3.13)]. Fortunately, control signals and disturbances are often mostly low frequency, while sensor noise is broadband and extends to high frequencies. Thus one easy approach is to use large gains over the frequency range one wishes to track the control signal and small gains at higher frequencies.

FIG. 18. (Color in online edition) (a) Schematic of a desired loop shape L(s). (b) Corresponding phase plot.
One subtlety concerns frequencies near the crossover where |L| = 1. Because transfer functions are meromorphic (analytic everywhere but at their poles), the real and imaginary parts of transfer functions are interrelated, giving rise to a number of analytical results [along the lines of the various Cauchy theorems of complex analysis and the Kramers-Kronig relations; Doyle et al. (1992), Chap. 6]. One of these is Bode's gain-phase relationship (Sec. IV.D, footnote 12), which shows that for a minimum-phase system the phase lag is determined by a frequency integral over the transfer-function magnitude. The practical upshot, which we saw first in Sec. IV.B, is that if L ~ (iω)⁻ⁿ over a frequency range near a reference frequency ω0 [here, taken to be the crossover frequency, where |L(iω0)| = 1], then the phase lag is ≈ −nπ/2. Because a phase of −180° implies instability, the transfer function L(s) should show approximately single-pole, low-pass behavior near ω0. In particular, the gain-phase theorem implies that an "ideal" low-pass filter that cuts off like a step function would not be a good way to limit the controller bandwidth.
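The single-pole-at-crossover prescription is easy to check numerically. In this sketch (the loop transfer function is an illustrative choice, not one from the text), an integrator provides high low-frequency gain and one far pole provides high-frequency rolloff, giving a crossover that is nearly single pole and hence a phase margin close to 90°:

```python
import numpy as np
from scipy import signal

# Illustrative loop shape: L(s) = 1000 / (s^2 + 100 s)
# (integrator for dc gain, pole at 100 rad/s for rolloff)
L = signal.TransferFunction([1000.0], [1.0, 100.0, 0.0])
w = np.logspace(-1, 3, 2000)
w, mag_db, phase = signal.bode(L, w)

ic = np.argmin(np.abs(mag_db))       # crossover: |L| = 1, i.e., 0 dB
pm = 180.0 + phase[ic]               # phase margin, degrees
print(w[ic], pm)
```

Moving the rolloff pole down toward the crossover frequency visibly eats into the phase margin, which is the quantitative content of the gain-phase relationship here.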
Putting together these various constraints—high gain at low frequencies, low gain at high frequencies, and single-pole behavior near the crossover frequency—one arrives at a "loop shape" for L that looks qualitatively like the sketch in Fig. 18(a) (Özbay, 2000). The left black arrow depicts schematically the desired tracking properties at low frequencies, where L should be larger than some bound, while the right black arrow depicts the desired noise suppression at high frequencies, where L should be smaller than some bound. The parallel black lines illustrate the ω⁻¹ scaling desired near |L| = 1. Figure 18(b) is the corresponding Bode phase plot, which shows that the loop has a gain margin ≈10 and a phase margin ≈70°. One also sees that the way to satisfy these constraints is to use an L with left-hand plane (LHP) zeros (generalizations of derivative control), which can add to the phase. These various tricks constitute the engineering "art" of control design, which, though surely not optimal or systematic, often works well. Remember, finally, that the above discussion pertains to the open-loop transfer function L(s). The actual controller is given by K(s) = L(s)/G(s). In practice, one simply plots |L| while "tweaking" the form and parameters of K.

Two common elements that can be used to shape the frequency response in a limited frequency range are lag and lead compensators. A lag compensator has the form

Klag(iω) = a(1 + iω/ω0) / (1 + a·iω/ω0),   (5.6)

where a is a constant (typically of order 10) and ω0 is the scale frequency. Figure 19 shows the Bode response of Eq. (5.6). The gain goes from a at low frequencies (ω ≪ ω0) to 1 at high frequencies (ω ≫ ω0). The phase response has a transient lag in the crossover-frequency region but goes to zero at both low and high frequencies. One can think of a lag compensator as an approximation to integral control (a boost of a at zero frequency rather than infinite gain) but without the often troublesome phase lag at higher frequencies. Lag compensators can also act as an approximation to low-pass filters at high frequencies (one would typically remove the overall factor of a, so that the gain goes from 1 to 1/a). Again, the advantage is that the phase lag does not accumulate asymptotically.

FIG. 19. (Color in online edition) Bode plots of a lag compensator [Eq. (5.6)] with a = 10.

Conversely, a lead compensator is of the form

Klead(iω) = (1 + a·iω/ω0) / (1 + iω/ω0),

which gives a phase lead in the crossover frequency range. The amplitude goes from 1 to a and thus does not increase indefinitely the way a pure derivative term does.

As a general conclusion, one should be open to adding enough elements—whether low-pass filters, lead or lag compensators, or other forms—to shape the response L(s) as desired. For an effectively second-order system, a three-term controller can work well, but a higher-order system will require compensating dynamics K(s) that is of higher order and will depend on more free parameters, which must be chosen, or "tuned." How to do this is the subject of optimal control.

3. Optimal control

The algorithm that many experimentalists use in tuning a PID control loop is (1) start by turning up the proportional gain until the system oscillates, and then turn down the gain somewhat; (2) add integral gain to eliminate setpoint droop, and turn down the proportional gain a bit more to offset the destabilizing lag of the integral term; (3) add a derivative term, become frustrated as sensor noise feeds into the system, give up, and settle for the PI parameters of step (2). We have already commented on the usefulness of the derivative term and on how other elements such as lead and lag compensators can shape the loop gain as a function of frequency to minimize the problem of noise feedthrough. But now, with three or more terms to tune, one might wonder whether there is a systematic way of choosing parameter values. One approach is to formalize the tuning process by defining a scalar "performance index," which is a quantity that is minimized for the "best" choice of parameters. This is "optimal control."

The standard performance indices are of the form

J = ∫_0^∞ V[x(t), u(t)] dt = ∫_0^∞ [e²(t)Q + u²(t)R] dt,   (5.8)
where the general function V(x, u) commonly has a quadratic form and where Q and R are positive constants that balance the relative "costs" of errors e(t) and control efforts u(t). Large Q and small R penalize control errors with little regard for the control effort, while small Q and large R penalize control effort with little regard for control error. Equation (5.8) has an obvious vector-matrix generalization to higher-order systems. Implicit in Eq. (5.8) is the choice of a control signal r(t) and a disturbance d(t). For example, one often assumes a step-function input r(t) = θ(t), with d = 0. One can, alternatively, keep r constant and add a step disturbance for d.
As a simple example, we consider the one-dimensional system ẋ = −αx(t) + u(t) with proportional control u(t) = −Kp e(t) and reference signal r(t) = 0 (Skogestad and Postlethwaite, 1996). The proportional control gives a motion x(t) = x0 e^−(α+Kp)t, which, when inserted into the cost function, Eq. (5.8), gives (with Q = 1)

J(Kp) = (1 + R Kp²) x0² / [2(α + Kp)].   (5.9)

Minimizing J with respect to Kp gives an "optimal"

Kp* = −α + √(α² + 1/R).   (5.10)

The decay rate of the optimal system is α′ = √(α² + 1/R).
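The closed-form optimum Kp* can be checked by direct numerical minimization of the cost; in this sketch the values of α, R, and x0 are arbitrary illustrations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha, R, x0 = 1.0, 0.25, 1.0        # illustrative values, with Q = 1

def J(Kp):
    """Cost integral evaluated on x(t) = x0 exp[-(alpha + Kp) t]."""
    return (1.0 + R * Kp**2) * x0**2 / (2.0 * (alpha + Kp))

res = minimize_scalar(J, bounds=(0.0, 50.0), method="bounded")
Kp_star = -alpha + np.sqrt(alpha**2 + 1.0 / R)   # closed-form optimum
print(res.x, Kp_star)                # the two agree
```

Sweeping R in this script shows the cheap-control/expensive-control tradeoff directly: small R drives the optimal gain up, large R drives it down.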
From this simple example, one can see the good and bad
points of optimal control. First of all, optimal control
does not eliminate the problem of tuning a parameter.
Rather, in this case, it has replaced the problem of
choosing the proportional gain Kp with that of choosing
the weight R. What one gains, however, is a way of making any tradeoffs in the design process more transparent.
Here, for example, one is balancing the desire to have
small tracking error e(t) with that of wanting minimal control effort u(t). The coefficient R expresses that tradeoff explicitly. For example, in the expression for the optimal Kp in Eq. (5.10), a small R ("cheap control") leads to a large Kp, while a large R ("expensive control") leads to a small Kp. In such a trivial example, the result could have been easily foreseen, and there would be little point in going through this exercise. When the system is more complex, one often has more intuition into how to set the matrices Q̃ and R̃ of the generalization of Eq. (5.8) than into how to tune the parameters directly. At any rate, do not confuse "optimal" with "good," for a poor choice of weights will lead to a poor controller.
The above discussion gives short shrift to a rich and beautiful branch of applied mathematics. One can be more ambitious and ask for more than the optimal set of parameters, given a previously chosen control law. Indeed, why not look for the best of all possible control laws? Formulated this way, there is a close connection between optimal control and variational methods. In the simplest cases, these variational methods are just those familiar from classical mechanics. [For an overview of the history of optimal control and its relations with classical problems, see Sussmann and Willems (1997).] For example, we can view the minimization of the performance index J in Eq. (5.8) as a problem belonging to the calculus of variations, where one needs to find the optimal control u(t) that minimizes the functional J subject to the constraint that the equations of motion ẋ = f(x, u) are obeyed. [In the example above, there is only one equation of motion, and f(x, u) = −αx + u.] One can solve this problem by the method of Lagrange multipliers, minimizing the functional 𝓛 = ∫_0^∞ L dt, where L(x, ẋ, u, λ) = λ0 V(x, u) + λ(f − ẋ), by free variation of x, u, and the Lagrange multiplier λ.20 For nth-order systems, there
are n Lagrange-multiplier functions λi(t). Setting the variation of J with respect to x, u, and λ equal to zero leads to three sets of Euler-Lagrange equations:

(d/dt)(∂L/∂ẋ) − ∂L/∂x = 0,   (5.11a)

∂L/∂u = 0,   (5.11b)

∂L/∂λ = 0.   (5.11c)

Equations (5.11a) and (5.11b) give the equations obeyed by the performance index V; Eq. (5.11c) gives the equations of motion.
For reasons to be discussed below, the Hamiltonian formulation is usually preferred. There, one defines a Hamiltonian H = L + λẋ = V + λf. The Euler-Lagrange equations can then be transformed into the control-theory version of Hamilton's equations:

ẋ = ∂H/∂λ,   (5.12a)

λ̇ = −∂H/∂x,   (5.12b)

∂H/∂u = 0.   (5.12c)

Equations (5.12a) and (5.12b) describe the evolution of the state x(t) and of the "co-states" λ(t), which are the equivalent of the conjugate momenta in the classical-mechanics formulation. Note that, in the general case, the state and co-state vectors have n components while the control vector u has r components. In the simple example discussed above, one has H = Qx² + Ru² + λ(−αx + u).
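For this simple example, Hamilton's equations can be closed with the linear ansatz λ(t) = 2P x(t), a standard step that the discussion above compresses; it recovers the optimal gain of Eq. (5.10):

```latex
% Ansatz: \lambda(t) = 2P\,x(t)
\frac{\partial H}{\partial u} = 2Ru + \lambda = 0
   \quad\Rightarrow\quad u = -\frac{\lambda}{2R} = -\frac{P}{R}\,x ,
\qquad
\dot\lambda = -\frac{\partial H}{\partial x} = -2Qx + \alpha\lambda .
% Requiring \dot\lambda = 2P\dot x = 2P(-\alpha - P/R)\,x to agree with the
% co-state equation gives a quadratic (Riccati) equation for P:
\frac{P^2}{R} + 2\alpha P - Q = 0
   \quad\Rightarrow\quad
K_p \equiv \frac{P}{R} = -\alpha + \sqrt{\alpha^2 + Q/R} .
```

With Q = 1, this is exactly the Kp* of Eq. (5.10), and u = −(P/R)x is the proportional law assumed there.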
Optimal-control problems often lead to two types of generalizations that are less likely to be familiar to physicists. In classical mechanics, one typically assumes that the starting and ending states and times of the dynamical system are fixed, and only variations satisfying these boundary conditions are considered. Variational problems in control theory are less restrictive. For example, imagine that one wants to move a dynamical system from an initial state x0 to a final state x1 as fast as possible [given the dynamical equations ẋ = f(x, u)]. One can formulate this as a problem whose goal is to minimize the unit performance index V = 1 over times from t0 (which can be set to 0 in an autonomous system) to t1, where t1 is to be determined.

20 Usually, λ0 = 1; however, an "abnormal" case with λ0 = 0 can arise when both the constraint and its derivative vanish simultaneously (Sontag, 1998).

A recent book by Naidu (2003) catalogs the different types of variational problems commonly encountered in control theory. Note,
too, that optimal-control problems include both open-loop, feedforward designs, where one asks what input u(t) will best move the system from state 1 to state 2, and closed-loop, feedback designs, where one asks what feedback law u = −Ke will best respond to a given type of disturbance.
A second important type of generalization that is commonly encountered concerns constraints placed either on the control variables u or on the region of phase space that the state vector x is permitted to enter. These types of problems led Pontryagin and collaborators to generalize the treatment of optimization, as expressed in the famous "minimum principle" (Pontryagin et al., 1964).21 The main result is that if the control variables u(t) are required to lie within some closed and bounded set in the function space of all possible control laws U(t), then the optimal choice is that control element u that minimizes H(x, λ, u). In other words, the condition for a local stationary point of H, Eq. (5.12c), is replaced by the requirement of a global minimum. The minimum principle allows one to solve problems that would have no solution within the traditional analytical framework of calculus or the calculus of variations. For example, the derivative of the function f(x) = x is never zero; however, if x is constrained to lie between 0 and 1, f has a well-defined minimum (0). The minimum principle allows one to consider such situations and provides a necessary condition for how the control variable u must be selected on the (in general nontrivial) boundary set of its allowed domain. Note that the minimum principle is a necessary, but not sufficient, condition for optimal control.
To appreciate the significance of the minimum principle, let us consider the simple example of a free particle, obeying Newton’s laws and acted on by a controllable force, in one dimension. The equations of motion
are ẋ1 = x2, ẋ2 = u, where x1 is the particle’s position, x2 its
velocity, and u = F / m is the external force divided by the
particle’s mass. We neglect friction. The problem is to
bring the particle from an arbitrary initial state (x1(0), x2(0)) to the state (0, 0) as fast as possible. Assume that the applied force has limits that imply |u(t)| ≤ 1.

21 Pontryagin et al. actually formulated their results in terms of −H and thus wrote about the "maximum principle." Modern usage has changed the sign. As an aside, Pontryagin was blinded by an accident when he was 14 and was thereafter tutored by his mother, who had no education in mathematics and read and described the mathematical symbols as they appeared to her (Naidu, 2003). Nonetheless, Pontryagin became one of the leading mathematicians of the 20th century, making major contributions to the theory of topological groups. Later in life, he turned his attention to engineering problems and was asked to solve a problem that arose in the context of the trajectory control of a military aircraft. The basic insights required to solve such problems with constraints on the control variables came after three consecutive, sleepless nights (Gamkrelidze, 1999). The use of the symbol u for the control variable(s) also seems to date from Pontryagin's work, as the word "control" is "upravlenie" in Russian (Gamkrelidze, 1999).

FIG. 20. (Color in online edition) Phase-space plot of dynamical trajectories for a free, frictionless particle acted on by a force u = ±1. The thrust switches as trajectories reach the curve AOB. The heavy curve shows an example trajectory (Pontryagin et al., 1964).

The
Hamiltonian is then given by H = λ1x2 + λ2u. Hamilton's equations give, in addition to the equations of motion, λ̇1 = 0 and λ̇2 = −λ1. If we naively were to try to use ∂H/∂u = 0 to determine u, we would conclude that all problem variables were identically equal to zero for all time, i.e., that there was no nontrivial solution to our problem. Replacing the local condition by the global requirement that u minimize H, we find that u = −sgn(λ2). Integrating the equations for λ1 and λ2, we find λ2(t) = −c1t + c2 and conclude that the applied force will always be pegged to its maximum value and that there can be at most one "reversal of thrust" during the problem. [This last conclusion follows because λ2 makes at most one sign change. Pontryagin et al. showed that if an nth-order, linear system is controllable and if all eigenvalues of its system matrix A are real, then the optimal control will have at most n − 1 jump discontinuities (Pontryagin et al., 1964).] By analyzing separately the cases u = ±1, one finds that for u = 1, the system's dynamics lie on one of a family of parabolas described by x1 = ½x2² + c+, with c+ determined by initial conditions. Similarly, for u = −1, one finds that x1 = −½x2² + c−.
The full solution is illustrated in the phase-space plot shown in Fig. 20. The curve AOB passing through the end target point (the origin) has special significance. The left-hand segment AO is defined by the motion with u = −1 that ends up at the target point (the origin); the right-hand segment OB is defined similarly, with u = 1. One thus clearly sees that the optimal motion is determined by evaluating the location of the system in phase space. If the state is below the curve AOB, choose u = 1, wait until the system state hits the curve AO, and then impose u = −1. If the state is above AOB, follow the reverse recipe. The curve AOB is thus known as the "switching curve" (Pontryagin et al., 1964). In Fig. 20, the heavy curve denotes the optimal solution for a particle initially at rest. Not surprisingly, the best strategy is to accelerate as fast as possible until halfway to the goal and then to decelerate as fast as possible the rest of the way. Note that formulating the problem in phase space leads to a local rule for the control (essential for implementing a practical optimal feedback controller) based on the geometry of motion in phase space, as determined by the switching curve. This law is nonlinear even though the original problem is linear. Such induced nonlinearity is typical of constrained optimization problems and is one reason that they are much harder to solve than unconstrained ones. We return to this and other examples of nonlinearity below, in Sec. VI.
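The switching-curve law lends itself to a short simulation. The following sketch is not from the original text; the time step, convergence tolerance, and initial condition are arbitrary choices:

```python
# Time-optimal ("bang-bang") control of the double integrator
# x1' = x2, x2' = u with |u| <= 1, driving the state to the origin.
def switching_law(x1, x2):
    """Pick u = +/-1 from the state's position relative to the
    switching curve x1 = -(1/2) x2 |x2| (the curve AOB of Fig. 20)."""
    s = x1 + 0.5 * x2 * abs(x2)      # s = 0 on the switching curve
    if s > 0:
        return -1.0                  # above the curve: full deceleration
    if s < 0:
        return 1.0                   # below the curve: full acceleration
    return -1.0 if x2 > 0 else 1.0   # on the curve: ride it to the origin

def simulate(x1, x2, dt=1e-3, t_max=20.0):
    """Integrate the closed loop until the state is within 0.01 of the
    origin; return the elapsed time and final state."""
    t = 0.0
    while x1 * x1 + x2 * x2 > 1e-4 and t < t_max:
        u = switching_law(x1, x2)
        x1 += x2 * dt
        x2 += u * dt
        t += dt
    return t, x1, x2

t, x1, x2 = simulate(-1.0, 0.0)   # at rest, one unit from the goal
```

Starting at rest, the loop applies full thrust to the halfway point, switches once on reaching AO, and arrives in a time close to the analytic minimum of 2 time units.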
The above discussion merely gives some of the flavor of the types of analysis used. Another important method is dynamic programming, introduced by R. Bellman in the 1950s (Naidu, 2003). It is especially well adapted to discrete problems. Its continuous form is analogous to the Hamilton-Jacobi method of classical mechanics.
Finally, in the above discussions of optimal control, we used quadratic functions V(x, u) in the performance index [e.g., that given in Eq. (5.8)]. Such functions are sensitive to the average deviations of the state and control variables from the desired values. The choice of a quadratic form is motivated largely by the fact that one can then solve for the optimal controller of a linear system analytically. The more recent control literature tends to advocate an alternative that keeps track of the worst-case deviations. This more conservative measure of deviations leads to a more robust design, as discussed in Sec. V.E, below.
C. Digital control loops
The original control loops were implemented by analog mechanical, electronic, or hydraulic circuits. At present, they are almost always implemented digitally, for reasons of flexibility: very complicated control algorithms are easily programmed (and reprogrammed!), and performance is easy to assess. It is worth noting, however, that proportional or integral control can be implemented with operational amplifiers costing just a few cents each, with bandwidths easily in the 100-kHz range. Sometimes, such circuits may be the simplest solution.
One subtlety of using digital control loops is that the input signal must be low-pass filtered to avoid aliasing. To understand aliasing, consider a continuous signal f(t) sampled periodically at times nTs. We can write the sampled signal as

fsamp(t) = Σn f(t) δ(t − nTs),    (5.13)

where δ is the Dirac δ function. Being periodic, the δ function has the Fourier-series representation

Σn δ(t − nTs) = (1/Ts) Σ_{k=−∞}^{∞} e^{ikωst},

with ωs = 2π/Ts. Inserting this into Eq. (5.13) and Fourier transforming, one finds

fs(ω) = (1/Ts) Σ_{n=−∞}^{∞} f(ω − nωs).
In other words, the power spectrum of the continuous signal |f(ω)| is replicated at frequencies nωs in the power spectrum of the sampled signal |fs(ω)|. If the highest frequency ωmax in f is less than ωs/2 ≡ ωN (the Nyquist frequency), we have the situation in Fig. 21(a); if not, we have the situation in Fig. 21(b). In the former case, the power spectrum of the sampled signal is a faithful replication of that of the continuous signal. In the latter case, high-frequency components of f mix down into the spectrum of fs (Lewis, 1992).
One can picture this alternatively in the time domain, where the situation leading to aliasing is shown in Fig. 22. The Nyquist sampling theorem derived above states that frequencies higher than 1/(2Ts) will be erroneously read as lower frequencies by the digitizer ("aliasing") and that, in general, one must sample at least twice per period in order to reconstruct the signal present at that period (Franklin et al., 1998). Thus, if one samples with sampling time Ts, one must add an analog low-pass filter with cutoff frequency no higher than 1/(2Ts).22 Since the noise requirements are stringent, one must use either a higher-order filter (having nth-order dynamics and made from active components) or a simpler, passive RC filter with a somewhat lower cutoff frequency than is required by the sampling theorem. Either way, the dynamics added by the filter become part of the feedback loop. Because filters add a phase lag, their effect is destabilizing, as seen in the previous section. Finally, another subtlety is that it sometimes pays to add deliberately a small amount of noise to combat the quantization effects introduced by the analog-to-digital converter. See the note in Sec. VI.A for details.
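A short numerical check of aliasing (the sampling rate and frequencies below are arbitrary choices): a 900-Hz sine sampled at 1 kHz produces exactly the samples of a 100-Hz sine of opposite sign, since sin(2π·900 nTs) = −sin(2π·100 nTs):

```python
import math

# Aliasing: with sampling rate fs = 1 kHz, the Nyquist frequency is
# 500 Hz; a 900-Hz signal is indistinguishable, sample by sample,
# from a (sign-flipped) 100-Hz signal.
fs = 1000.0
Ts = 1.0 / fs
f_high = 900.0            # above the Nyquist frequency
f_alias = fs - f_high     # apparent (aliased) frequency, 100 Hz

n_samples = 50
high = [math.sin(2 * math.pi * f_high * n * Ts) for n in range(n_samples)]
alias = [-math.sin(2 * math.pi * f_alias * n * Ts) for n in range(n_samples)]
max_diff = max(abs(a - b) for a, b in zip(high, alias))
```

The two sample sequences agree to floating-point precision, which is exactly the situation sketched in Fig. 22.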
A second issue in digital control is that one must transform an analysis developed for continuous-time dynamical systems into discrete-time dynamical systems. This corresponds to a passage from ordinary differential equations to discrete maps that transform the system state vector x(tn) into x(tn+1).

[22] An alternative is "one-bit delta-sigma analog-digital conversion" (Gershenfeld, 2000), where the signal is converted to a rapidly alternating sequence of 1's and 0's (representing the maximum and minimum voltage limits). If the voltage input is, say, 0.5, then the output will have an equal number of 1's and 0's. If the voltage input is 0.75, there will be 3 times as many 1's as 0's, etc. The cycle time for the alternation is faster (by a factor of 64, or even more) than the desired ultimate sampling time. One then digitally low-pass filters this one-bit signal to create the slower, higher-resolution final output value. Because of the oversampling, a simple low-pass filter suffices to prevent aliasing. The disadvantage of delta-sigma conversion is a relatively long "latency" time: the lag between the signal and the digital output can be 10-100 times the sampling interval. Still, if the lag is small compared to the system time scales, this kind of conversion may be the simplest option. Note that delta-sigma and other analog-to-digital conversion schemes are actually helped by adding a small amount of noise; see footnote 28, below.

FIG. 22. (Color in online edition) Aliasing of a high-frequency signal (solid curve, with period T) produced by sampling with a period Ts too long to satisfy the Nyquist criterion. The apparent measured points (large dots) are interpreted as coming from a much lower-frequency signal (dashed curve).

FIG. 21. (Color in online edition) Illustration of the Nyquist sampling theorem in frequency space. (a) The maximum frequency of the power spectrum of the continuous signal is less than the Nyquist frequency (positive and negative frequencies indicated by arrows). The spectrum of the sampled signal accurately represents the continuous signal. (b) The maximum frequency exceeds the Nyquist frequency ωN, and the aliased spectra overlap, distorting the estimate of the continuous spectrum from sampled data. The individual copies of the spectrum of the continuous signal are shown in the dashed lines. The overlap is apparent, particularly near multiples of ωN.

The whole topic can get
rather technical (see the various control-theory texts), but the basic ideas are simple. Here, we assume that we know a reasonable analytic approximation to the transfer function G(s) of the system (see Sec. V.A, above). The next step is to write the model in the time domain. Formally, one would take the inverse Laplace transform. Usually, if the analytic model uses standard components (first- and second-order elements, lags), one can do this by inspection. For example, if we had

G(s) = (1 + s/ω0) e^{−τs} / (1 + γs/ω1 + s²/ω1²),

then we would infer

(1/ω1²) ÿ + (γ/ω1) ẏ + y = u(t − τ) + (1/ω0) u̇(t − τ),

which can be written in the standard state-space form ẋ = Ax + Bu(t − τ), y = x1, with system matrix

A = ( 0 1 ; −ω1² −γω1 )

and the input vector B fixed by the coefficients of u(t − τ) and u̇(t − τ).
The final step is then to discretize the continuous-time system. The obvious algorithm is just to replace derivatives by first-order approximations, with

ẋ ≈ (x_{n+1} − x_n)/Ts    (5.19)

for each component of x (or use higher-order approximations for the higher derivatives of y). Usually, such a simple algorithm is sufficient, as long as the sampling time is much shorter (by a factor of at least 10, but preferably 20-30) than the fastest dynamics that need to be modeled in the system. For example, the PID law, when discretized using Eq. (5.19), becomes

u_n = A1 u_{n−1} + B0 e_n + B1 e_{n−1} + B2 e_{n−2},    (5.20)

with A1 = 1, B0 = Kp + KiTs/2 + Kd/Ts, B1 = −Kp + KiTs/2 − 2Kd/Ts, and B2 = Kd/Ts. Here, Kp, Ki, and Kd are the proportional, integral, and derivative gains, Ts is the sampling time, and u_n and e_n are the actuator and error signals at time nTs. Equation (5.20) has the advantage that no integral explicitly appears; it is derived by taking the discrete difference of the straightforward expression for u_n in terms of an integral over the previous error signals. Equation (5.20) also has the advantage that it expresses the next value of the actuator signal (u_n) in terms of its previous value (u_{n−1}). This is useful if one wants to go back and forth from a "manual mode" to closed-loop control without large transients in the actuator signal. Note that e_n does appear in Eq. (5.20): we assume that the current error signal is used in calculating the current actuator response, implying that the calculation time is much less than the sampling time. If not, the delay should be taken into account.
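Equation (5.20) can be coded directly; the gains and samples in this sketch are arbitrary, not from the text:

```python
# Direct coding of the discretized PID law, Eq. (5.20):
#   u_n = u_{n-1} + B0 e_n + B1 e_{n-1} + B2 e_{n-2}.
class DiscretePID:
    def __init__(self, Kp, Ki, Kd, Ts):
        self.B0 = Kp + Ki * Ts / 2 + Kd / Ts
        self.B1 = -Kp + Ki * Ts / 2 - 2 * Kd / Ts
        self.B2 = Kd / Ts
        self.u = 0.0                 # previous actuator value, u_{n-1}
        self.e1 = self.e2 = 0.0      # previous errors, e_{n-1} and e_{n-2}

    def step(self, e):
        self.u += self.B0 * e + self.B1 * self.e1 + self.B2 * self.e2
        self.e2, self.e1 = self.e1, e
        return self.u

# sanity check: with Ki = Kd = 0 the recursion telescopes to u_n = Kp*e_n
pid = DiscretePID(Kp=2.0, Ki=0.0, Kd=0.0, Ts=0.01)
outs = [pid.step(e) for e in (1.0, 0.5, 0.25)]
```

With Ki = Kd = 0 the output is pure proportional action, a quick check that the B coefficients were entered correctly.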
In deriving expressions such as Eq. (5.20), it is often convenient to go directly from the transfer function to the discrete dynamics, without writing down the continuous-time equations explicitly. One can do this by looking at the Laplace transform of sampled signals. Assume that a continuous signal f(t) is sampled at times nTs, for n = 0, 1, .... As above, the sampled signal is then

fs(t) = Σn f(t) δ(t − nTs).    (5.21)

The Laplace transform of fs(t) is

L[fs(t)] = Σn f(nTs) e^{−snTs}.    (5.22)

If we introduce z ≡ e^{sTs} and define f_n ≡ f(nTs), then Eq. (5.22) leads to the "z transform," defined by

Z[f] = Σn f_n z^{−n}.    (5.23)

For example, by direct substitution into Eq. (5.23), one can see that the z transform of the step function θ(t) (0 for t < 0, 1 for t ≥ 0) is z/(z − 1). Similarly, Z[f(t + Ts)] = zZ[f(t)] − zf0. Thus, with a zero initial condition for f, the z transform shows that shifting by Ts means multiplying the transform by z. This is analogous to the Laplace transform of a derivative (multiply by s) and implies that taking the z transform of a difference equation allows it to be solved by algebraic manipulation, in exact analogy to the way a Laplace transform can be used to solve a differential equation.
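The closed form z/(z − 1) for the step function can be checked numerically; the test point z = 1.5 below is an arbitrary choice outside the unit circle, where the series converges:

```python
# Z[theta] = sum_n z^{-n} for the unit step. For |z| > 1 this geometric
# series sums to z/(z - 1); compare a long partial sum with the closed form.
z = 1.5
partial = sum(z ** (-n) for n in range(200))
closed = z / (z - 1)
```

The 200-term partial sum agrees with the closed form to machine precision, since the truncation error is of order (1/z)^200.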
The transformation z = e^{sTs} relates the complex s plane to the complex z plane. Note that left-hand-plane poles [i.e., ones with Re(s) < 0] are mapped to the interior of the unit circle. The frequency response in the z plane is obtained by substituting z = e^{iωTs}. Frequencies higher than the Nyquist frequency of ωs/2 = π/Ts are then mapped on top of lower frequencies, in accordance with the aliasing phenomenon depicted in Figs. 21 and 22.
We seek a way of discretizing the transfer function K(s) of a continuous controller. In principle, this could be done using the relation z = e^{sTs}, but it would lead to infinite-order difference equations. One is thus led to try low-order approximate relations between s and z. The first-order expansion z^{−1} = e^{−sTs} ≈ 1 − sTs leads to

s ≈ (1 − z^{−1})/Ts.    (5.24)

Since z^{−1} means "delay by Ts," we see that this is equivalent to the first-order difference approximation of Eq. (5.19).
If the sampling time cannot be made fast enough for simple discretization to be accurate, then one has to consider more sophisticated algorithms. One could imagine expanding the exponential to second order, but it turns out that using a first-order Padé approximation is better. Thus one sets z^{−1} ≈ (1 − sTs/2)/(1 + sTs/2), which is accurate to second order. This gives "Tustin's transformation" (Dutton et al., 1997),

s ≈ (2/Ts) (1 − z^{−1})/(1 + z^{−1}).    (5.25)

Equation (5.25) is equivalent to using the trapezoidal rule for integrating the system dynamics forward, while Eq. (5.24) corresponds to using the rectangular rule (Lewis, 1992, Chap. 5). One advantage of Tustin's transformation is that it, too, maps Re(s) < 0 to the interior of the unit circle, implying that if the Laplace transform of a continuous function is stable [Re(s) < 0], then its z transform will also be stable. (Iterating a linear map with magnitude greater than one leads to instability.) The direct, first-order substitution, Eq. (5.24), does not have this property. Analogously, in the numerical integration of differential equations, too coarse a time step can lead to numerical instability if one uses an explicit representation of time derivatives [Eq. (5.24)]. If, on the other hand, one uses an implicit representation of the derivative [Eq. (5.25)], then stability is guaranteed, although accuracy will suffer if Ts is too big.
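The explicit-vs-implicit contrast is easy to exhibit for the test equation ẏ = −λy; the step size λTs = 3 below is a deliberately coarse, arbitrary choice:

```python
# Stability of two discretizations of dy/dt = -lam*y with step Ts.
# Each discretization gives a map y_{n+1} = p * y_n, stable iff |p| < 1.
def euler_pole(lam, Ts):
    # explicit (rectangular-rule) step: y_{n+1} = (1 - lam*Ts) * y_n
    return 1.0 - lam * Ts

def tustin_pole(lam, Ts):
    # trapezoidal (Tustin) step: average the derivative over the step
    return (1.0 - lam * Ts / 2) / (1.0 + lam * Ts / 2)

coarse = (1.0, 3.0)   # lam*Ts = 3: too coarse for the explicit rule
fine = (1.0, 0.1)     # lam*Ts = 0.1: well-resolved step
```

At the coarse step the explicit pole has magnitude 2 (unstable), while the Tustin pole has magnitude 0.2 (stable, if inaccurate); at the fine step both are stable.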
Finally, one can account for the effect of using a zero-order hold (ZOH) to produce the digitization (as in Fig. 13) by considering its Laplace transform. Let

fZOH(t) = f(0) for 0 < t < Ts,
fZOH(t) = f(Ts) for Ts < t < 2Ts,

and so on. Then

L[fZOH] = f(0) ∫_0^{Ts} e^{−st} dt + f(Ts) ∫_{Ts}^{2Ts} e^{−st} dt + ⋯ = [(1 − e^{−sTs})/s] Σn f(nTs) e^{−snTs}.

The effect of the zero-order hold is thus to introduce an extra factor in front of the ordinary z transform, and a common "recipe" for translating a continuous controller K(s) to a discrete equivalent D(z) is

D(z) = (1 − z^{−1}) Z[K(s)/s],    (5.28)

with Eq. (5.25) used to transform s to z. Because the discretization leads to a large amount of algebra, "canned" routines in programs such as MATLAB are useful (see Sec. V.C.2, below).
Once the approximate digital controller D(z) has been worked out, one can generate the appropriate difference equation by writing (Dutton et al., 1997)

D(z) = u(z)/e(z) = (B0 + B1 z^{−1} + B2 z^{−2} + ⋯) / (1 − A1 z^{−1} − A2 z^{−2} − ⋯).    (5.29)

In Eq. (5.29), the transfer function is between the error-signal input e(z) and the control-variable (actuator) output u(z). [Of course, the transfer function G(z) of the system itself would be between the control-variable input u and the system output y.] Recalling that z^{−1} has the interpretation of delay by Ts, we may rewrite Eq. (5.29) as a discrete difference relation (known as an "infinite impulse response," or IIR, filter in the signal-processing literature) (Oppenheim et al., 1992),

u_n = A1 u_{n−1} + A2 u_{n−2} + ⋯ + B0 e_n + B1 e_{n−1} + B2 e_{n−2} + ⋯,    (5.30)

which generalizes the result of Eq. (5.20).
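Equation (5.30) is only a few lines of code. In this generic sketch (the example coefficients are arbitrary), the history lists keep the most recent values first:

```python
# Difference relation Eq. (5.30):
#   u_n = A1 u_{n-1} + A2 u_{n-2} + ... + B0 e_n + B1 e_{n-1} + ...
def run_iir(A, B, errors):
    """A = [A1, A2, ...], B = [B0, B1, ...]; histories are kept most
    recent first, so u_hist[0] = u_{n-1} and e_hist[0] = e_n."""
    u_hist = [0.0] * len(A)
    e_hist = [0.0] * len(B)
    out = []
    for e in errors:
        e_hist = [e] + e_hist[:-1]          # shift in the newest error
        u = sum(a * uh for a, uh in zip(A, u_hist))
        u += sum(b * eh for b, eh in zip(B, e_hist))
        u_hist = [u] + u_hist[:-1]          # shift in the newest output
        out.append(u)
    return out

# A = [1], B = [1] is a discrete integrator: the output is a running sum
sums = run_iir([1.0], [1.0], [1.0, 2.0, 3.0])
```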
One minor pitfall to avoid is that discrete systems can introduce "ringing poles." For example, consider the simple difference equation

y_{n+1} = (1 − λ)y_n.

This, obviously, is the discrete analog of a first-order system ẏ(t) = −λy(t) and gives similar stable behavior for 0 < λ < 1. Now consider the range 1 < λ < 2, where the system continues to be stable but oscillates violently with each time step, albeit with a decaying envelope (Fig. 23).

FIG. 23. (Color in online edition) Time series for a discrete dynamical system showing a "ringing pole." Generated from y_{n+1} = (1 − λ)y_n, with λ = 1.9.

Such a ringing pole occurs because of an effective aliasing (beating) of the decay rate against the sampling rate. In any case, one usually wants to avoid such a response in a discrete controller: although nominally stable, the stability is achieved by wild cycling of the actuator between large positive and negative values.
We note that the unpleasant situation illustrated in Fig. 23 can also appear in the system dynamics. Even if all the measured outputs y_n and all the control inputs u_n seem reasonable, the system may be ill behaved between the sampling time points. One can guard against this possibility by sampling the system dynamics more rapidly in tests. If that is not possible, at the very least one should model the closed-loop system dynamics on a computer, using a shorter time interval for the internal simulation.
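The ringing response of Fig. 23 can be reproduced in a few lines:

```python
# Ringing pole: iterate y_{n+1} = (1 - lam) * y_n for lam = 1.9.
# The discrete pole sits at z = 1 - lam = -0.9: inside the unit circle
# (stable), but negative, so the response flips sign on every step
# while its envelope decays as 0.9^n.
lam = 1.9
y = [1.0]
for _ in range(20):
    y.append((1 - lam) * y[-1])

signs_alternate = all(a * b < 0 for a, b in zip(y, y[1:]))
decaying = abs(y[-1]) < abs(y[0])
```

Every consecutive pair of samples has opposite sign, yet the envelope shrinks: exactly the nominally stable but violently oscillating behavior described above.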
We can summarize the steps needed to create a digital control loop as follows:
(1) Determine (empirically) the system transfer function G(s).
(2) Construct a feedback law K(s) that gives the desired closed-loop dynamics for T(s) = KG/(1 + KG), possibly taking into account sensor dynamics H(s) as well.
(3) Use Eq. (5.28) and either the direct method, Tustin's transformation, or any of the alternative methods discussed in control-theory books to transform s → z, thereby converting the continuous controller K(s) to a discrete controller D(z).
(4) Deduce the difference equation corresponding to D(z) and program it on an appropriate digital device, for example a computer, microcontroller, digital signal processor (DSP), or programmable logic device (PLD).
The reader is cautioned, however, to consult a more detailed reference such as Lewis (1992) before actually implementing a digital control loop in an experiment.
There are a number of practical issues that we have had to skip over. To cite just one, with a finite word size (integer arithmetic), a high-order difference relation such as Eq. (5.30) is prone to numerical inaccuracies. Its accuracy can be improved by rewriting the relation using a partial-fraction expansion.
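Steps (3) and (4) can be sketched for a PI controller, K(s) = Kp + Ki/s. Applying Tustin's transformation, Eq. (5.25), and clearing denominators reproduces Eq. (5.20) with Kd = 0; the gains and sampling time below are arbitrary choices:

```python
# Tustin discretization of the PI controller K(s) = Kp + Ki/s.
# Substituting s -> (2/Ts)(1 - z^{-1})/(1 + z^{-1}) and clearing
# denominators gives
#   u_n = u_{n-1} + (Kp + Ki*Ts/2) e_n + (-Kp + Ki*Ts/2) e_{n-1},
# i.e., Eq. (5.20) with Kd = 0.
def tustin_pi_coeffs(Kp, Ki, Ts):
    return Kp + Ki * Ts / 2, -Kp + Ki * Ts / 2

def pi_response(Kp, Ki, Ts, errors):
    """Run the resulting difference equation over a list of errors."""
    B0, B1 = tustin_pi_coeffs(Kp, Ki, Ts)
    u, e_prev, out = 0.0, 0.0, []
    for e in errors:
        u += B0 * e + B1 * e_prev
        e_prev = e
        out.append(u)
    return out

# constant error: the integral term ramps the output by Ki*Ts per step
ramp = pi_response(1.0, 2.0, 0.1, [1.0, 1.0, 1.0])
```

For a constant error, the output jumps by the proportional term and then ramps at the rate set by the integral gain, as expected for PI action.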
At present, digital loops may be implemented on any of the platforms mentioned in step (4). Probably the most popular way is to use a computer and a commercial data-acquisition board. Although such boards often run at 100 kHz or even faster, they cannot be used for control loops of anywhere near that bandwidth. Modern operating systems are multitasking, which means that they cannot be counted on to provide reliable real-time response. In practice, timing variability limits control loops based on such computers to kHz rates at best. Microcontrollers are a cheap alternative. One downloads a program (usually in C or assembler, occasionally in a simpler, more user-friendly language) that runs in a stand-alone mode on the microcontroller. Because the microcontroller does nothing else, its timing is reliable. More important, it will work even if (when!) the host computer crashes. Digital signal processors offer similar advantages, with much higher performance (and price). Microcontrollers can execute control loops at rates up to 10 kHz, while digital signal processors can work up to 1 MHz. (These rates are increasing steadily as more sophisticated technologies are introduced.) Digital signal processors also offer more sophisticated possibilities for asynchronous transfer of data to and from the host computer. In general, one would consider microcontrollers for simpler, slower loops and digital signal processors for more high-performance needs.
There are many options for microcontrollers and digital signal processors. Often, there is a tradeoff between inexpensive, high-performance hardware that is difficult (and thus expensive) to program (and maintain) and expensive digital-signal-processor boxes that are somewhat lower performance but have an easy-to-use programming environment. In my laboratory, we have had some success with the latter approach.23 We can implement simple control problems such as PID loops at up to 500 kHz and complicated control problems such as a scanning-probe microscope controller at 20 kHz. The use of a high-level language is important in that it allows a wide range of students, from undergraduates on up, to easily modify source code as needed without making a large investment in understanding complicated software.
Finally, programmable logic devices (PLDs) are another option, albeit one little known in the physics community. They grow out of specific "hard-wired" solutions for particular problems; think, for example, of the circuitry in a dishwasher, dryer, or other common appliance. Programmable logic devices are a programmable version of these: essentially, they are arrays of logic gates, flip-flops, etc., whose interconnections may be programmed. They replace custom hardware solutions to implement an essentially arbitrary digital circuit. They excel in situations where many operations can be performed in parallel, resulting in a tremendous speed advantage (of order 1000 in many cases). Programmable logic devices come in a bewildering variety of families, with varying logic-gate densities and varying ease of programmability.

[23] We have used the digital signal processor boxes of Jäger Computergesteurt Messtechnik, GmbH, Rheinstraße 2-4, 64653 Lorsch, Germany (www.adwin.de). We made no attempt to survey the market systematically, and there may well be better alternatives.

One popular family is the field-programmable gate array (FPGA). In physics, these have been used to make trigger units in high-energy experiments that must deal with a tremendous amount of data in very short times. For example, in the D0 detector at Fermilab, a set of boards containing 100 field-programmable gate arrays filters ≈10⁷ events/sec, identifying about 1000 events/sec as potentially "interesting." Subsequent trigger circuits then reduce the event rate to about 50 Hz (Borcherding et al., 1999). Until recently, the "learning curve" in programming such devices was steep enough that they made sense only in projects large enough to justify the services of a specialist programmer. (The language and logic of field-programmable gate arrays were different enough from ordinary programs to require extensive training.) Recently, field-programmable gate arrays that are programmable in ordinary, high-level languages (e.g., LABVIEW) have become available.24 Field-programmable gate arrays are so powerful that they can implement almost any conventional digital or (low-power) analog electronic circuit, and many commercial devices are now based on such circuits. Despite the D0 example above, the full impact of having "hardware you can program" has not yet been appreciated by the physics community, even as commercial applications seem to multiply exponentially.
In looking over the above "recipes" for making a digital controller, one might be tempted to think it simpler to bypass the first two steps and work in the discrete domain from the beginning. For example, any discrete, linear control law will have the form of Eq. (5.20), with different values for the A and B coefficients (with the coefficient in front of u_n not necessarily 1, and with perhaps additional u and e terms as well). Indeed, our approach to digital feedback is known as "design by emulation," and it works well as long as the sampling rate is high enough relative to the system dynamics. Emulation has the virtue that it uses the intuitive loop-shaping ideas discussed above. Still, direct digital design has fewer steps and potentially higher performance. We do not have space to discuss direct methods, except to say that virtually every continuous-time technique has a discrete-time analog. For example, the discrete analog of the linear system given by Eq. (2.2) is

x_{n+1} = Ã′x_n + B̃′u_n,
y_{n+1} = C̃x_{n+1},

with Ã′ = e^{ÃTs} and B̃′ = Ã^{−1}(Ã′ − Ĩ)B̃. In the continuous-time case, the eigenvalues of à needed to have negative real part for stability; here, the analogous condition is that the eigenvalues of Ã′ have magnitude less than one. Other aspects carry over as well (e.g., Laplace transform to z transform). But when faced with a choice between reading up on fancy direct-digital-design techniques and buying a better digitizer, buy the board. The time you save will almost certainly be worth more than the price of the more expensive hardware.
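The mapping of stability criteria is easy to check for a single diagonalizable mode: a continuous eigenvalue s becomes the discrete eigenvalue e^{sTs}, so Re(s) < 0 becomes |e^{sTs}| < 1. The eigenvalues below are arbitrary examples:

```python
import cmath

# Continuous-to-discrete eigenvalue map for x_{n+1} = A'x_n with
# A' = exp(A*Ts): each eigenvalue s of A maps to exp(s*Ts), and the
# stability condition Re(s) < 0 becomes |exp(s*Ts)| < 1.
def discrete_eig(s, Ts):
    return cmath.exp(s * Ts)

Ts = 0.01                            # sampling time (arbitrary choice)
stable_mode = complex(-2.0, 50.0)    # decaying, oscillating mode
unstable_mode = complex(0.5, 50.0)   # slowly growing mode
```

The decaying mode lands just inside the unit circle (|e^{−0.02}| ≈ 0.98) and the growing mode just outside, independent of the oscillation frequency.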
1. Case study: Vibration isolation of an atom interferometer
We mention briefly a nice example of a feedback-loop implementation that illustrates most of the topics discussed in this section. The application is the control system of a vibration-isolation stage for an atom interferometer (Hensley et al., 1999). The goal is to decouple the instrument from random low-frequency vibrations due to machinery, micro-seismic disturbances, etc. The conventional passive strategy is to mount the instrument on a mechanical spring system (often an air spring, but here a conventional wire spring). The idea is that if the driving vibrations have frequencies much higher than the resonant frequency of the mass-spring system, their amplitude will be damped. The lower the frequency of the external vibrations, the lower the required resonant frequency of the mass-spring system. But practical mass-spring systems have resonant frequencies of roughly 1 Hz for vertical vibrations, and lowering the resonant frequency requires softer springs, which stretch large distances under the influence of gravity. In order to lower the resonant frequency still further, one can use feedback to alter the dynamics. (Note that in the first case study, we decreased characteristic times; here we will increase them.)
The system has the equation of motion

ÿ + 2ζ0ω0(ẏ − ẏg) + ω0²(y − yg) = u(t),    (5.33)

where ω0² = k/m is the natural resonance frequency of the undamped system, yg(t) and ẏg(t) are the position and velocity of the ground, u(t) is the actuator signal, and ζ0 is the dimensionless damping, as in Eq. (2.5). Indeed, Eqs. (5.33) and (2.5) differ only in that the damping is now proportional to the difference between the mass and ground velocities, and similarly for the restoring force.
The control-loop design uses an accelerometer to sense unwanted vibrations and applies a PI control law [u(t) = −Kpÿ(t) − 2ω0Kiẏ(t)] to lower the effective resonant frequency of the mass-spring design. Notice that measuring the acceleration is better than measuring the position for noise rejection. (You integrate once, as opposed to differentiating twice.) The closed-loop transfer function is

T(s) = (2sζ1ω1 + ω1²) / [s² + 2s(ζ1 + Ki′)ω1 + ω1²],

with ω1 = ω0/√(Kp + 1), ζ1 = ζ0/√(Kp + 1), and Ki′ = Ki/√(Kp + 1). The closed-loop resonance frequency ω1 is set nearly 50 times lower than ω0, and the damping, ≈Ki′, is set near 1, the critical-damping value.25
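As a consistency check (not in the original; the parameter values are arbitrary), one can close the loop u = −Kpÿ − 2ω0Kiẏ around Eq. (5.33) algebraically and compare the result with the reduced form of T(s) quoted above:

```python
import math

# Closing the loop u = -Kp*y'' - 2*w0*Ki*y' around Eq. (5.33) gives
#   (1+Kp) s^2 y + 2 w0 (z0+Ki) s y + w0^2 y = (2 z0 w0 s + w0^2) y_g,
# which should equal the quoted T(s) with w1 = w0/sqrt(Kp+1),
# z1 = z0/sqrt(Kp+1), and Ki' = Ki/sqrt(Kp+1).
w0, z0, Kp, Ki = 2 * math.pi, 0.05, 100.0, 0.5   # arbitrary parameters

def T_direct(s):
    num = 2 * z0 * w0 * s + w0 ** 2
    den = (1 + Kp) * s ** 2 + 2 * w0 * (z0 + Ki) * s + w0 ** 2
    return num / den

root = math.sqrt(Kp + 1)
w1, z1, Kip = w0 / root, z0 / root, Ki / root

def T_reduced(s):
    num = 2 * s * z1 * w1 + w1 ** 2
    den = s ** 2 + 2 * s * (z1 + Kip) * w1 + w1 ** 2
    return num / den

max_diff = max(abs(T_direct(1j * w) - T_reduced(1j * w))
               for w in (0.01, 0.1, 1.0, 10.0, 100.0))
```

The two forms agree to machine precision at every test frequency, confirming the rescalings of ω1, ζ1, and Ki′ by √(Kp + 1).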
What makes this example particularly interesting from our point of view is that the actual feedback law used was more complicated than the simple PI law discussed above. Because the accelerometer has upper and lower limits to its bandwidth, the feedback gain needs to roll off near these limits in order to reduce noise and maintain stability. The limits are implemented by a series of lag compensators [see Eq. (5.6)] and low- and high-pass filters, which collectively shape the frequency response to counteract problems induced by the finite sensor bandwidth. The entire law is implemented digitally using Tustin's method [Eq. (5.25)]; see Hensley et al. (1999) for details. (If you have understood the discussion up to this point, the paper will be straightforward.) Finally, while vibration isolation is enhanced by decreasing the "Q" of an oscillator, other applications use feedback to increase Q. An example is "active Q control," where reduced oscillator damping increases the sensitivity for detecting small forces (Tamayo et al., 2000).
2. Commercial tools
The design and analysis of feedback systems is made much easier by computer simulation. The engineering community leans heavily on commercial software products, especially MATLAB.26 All of the steps and laws discussed here, and many more, are available on the MATLAB platform, although, unfortunately, one must buy several modules ("toolboxes") to implement them fully. There are also some freely available packages of control routines (e.g., SCILAB, OCTAVE, and SLICOT27), which, however, do not seem to be as widely adopted. My own feeling is that physicists can usually get by without such specialized tools: the basic approaches we describe here are simple and easy to implement without investing in learning complicated software packages. On the other hand, for more complex problems or for obtaining the absolute highest performance from given hardware, these software tools can be essential. Physicists should also note that most engineering departments will already have such software. One useful feature of commercial products is that they can often generate low-level code that can be downloaded to one's hardware device to run the control loop. Usually, a special add-on module is required for each type and model of platform (digital signal processor, field-programmable gate array, etc.).

[25] Equation (3) of Hensley et al. (1999) has a misprint.
[26] MATLAB is a registered trademark of The Mathworks, Inc., 3 Apple Hill Dr., Natick, MA 01760-2098 (USA).
[27] SCILAB is an open-source program that is a fairly close imitation of MATLAB. OCTAVE is a high-level programming language that implements many of the MATLAB commands, with graphics implemented in GNU Plot. SLICOT is a set of FORTRAN 77 routines, callable from a wide variety of programming environments.
FIG. 24. (Color in online edition) Control of a first-order system in the presence of noise. (a) Open-loop fluctuations of the controlled variable in response to environmental noise; no sensor noise is present; (b) closed-loop proportional control; (c) sensor noise added; (d) Kalman filter added; the actual system state and the Kalman estimate, which has smaller fluctuations, are both shown. In (a)-(d), σ is the standard deviation of the measured signal.
D. Measurement noise and the Kalman filter
At the end of Sec. III, we briefly discussed the effect
of environmental disturbances and sensor noise on control loops. In Fig. 7 and Eq. 共3.13兲, we see the effects of
noise on the variable y共s兲 that is being controlled. To
understand better the effects of such noise, we again
consider the simple case of proportional feedback control of a first-order system, with no significant sensor
dynamics. In other words, we set K共s兲 = Kp, G共s兲 = 1 / 共1
+ s / ␻0兲, and H共s兲 = 1. We also consider the case of a regulator, where r共s兲 = 0. 关We are trying to keep y共s兲 = 0.兴
Then, for Kp ≫ 1,

y(s) ≈ − [1/(1 + s/ω′)] ξ(s) + [1/(1 + Kp)][(1 + s/ω0)/(1 + s/ω′)] d(s),

with ω′ = ω0(1 + Kp) ≫ ω0. Thus, low-frequency (ω ≪ ω′) sensor noise and high-frequency environmental disturbances cause undesired fluctuations in y.
These effects are illustrated in Figs. 24(a)–24(c), which show the variable x(t) in a discrete-time simulation of a low-pass filter. The explicit discrete equations are

xn+1 = φxn + un + dn,
yn+1 = xn+1 + ξn+1,    (5.36)

with φ = 1 − ω0Ts and un = Kpω0Ts(rn − yn), where un is the control signal and rn the reference (setpoint). Here, we have reintroduced the distinction
between the environmental variable being controlled (for example, a temperature) x and the sensor reading of that variable (for example, a thermistor impedance) y. For simplicity, we have taken y = x, but in general the units are different (°C vs ohms), and the sensor may have its own dynamics. The discrete dynamics are evaluated at times nTs. The proportional feedback gain is Kp (dimensionless). The environmental noise is given by the stochastic variable dn and the sensor noise by ξn. Both of these are Gaussian random variables, with zero mean and variances ⟨dn²⟩ ≡ d² and ⟨ξn²⟩ ≡ ξ², respectively.28
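These discrete dynamics are simple to simulate. The following sketch (Python; the numerical values follow the text, while the seed and run length are arbitrary) generates traces analogous to Figs. 24(a)–24(c):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(Kp, d_var, xi_var, n_steps=2000, omega0=1.0, Ts=0.1):
    """Discrete proportional control of a first-order system [Eq. (5.36)]:
    xn+1 = phi*xn + un + dn, yn+1 = xn+1 + xin+1, with rn = 0 (regulator).
    Per-step noise standard deviations scale as sqrt(Ts) (footnote 28)."""
    phi = 1.0 - omega0 * Ts
    d = rng.normal(0.0, np.sqrt(d_var * Ts), n_steps)
    xi = rng.normal(0.0, np.sqrt(xi_var * Ts), n_steps)
    x = np.zeros(n_steps)
    y = np.zeros(n_steps)
    for n in range(n_steps - 1):
        u = Kp * omega0 * Ts * (0.0 - y[n])   # proportional feedback, rn = 0
        x[n + 1] = phi * x[n] + u + d[n]
        y[n + 1] = x[n + 1] + xi[n + 1]
    return x, y

x_open, _ = simulate(Kp=0.0, d_var=0.01, xi_var=0.0)    # cf. Fig. 24(a)
x_prop, _ = simulate(Kp=10.0, d_var=0.01, xi_var=0.0)   # cf. Fig. 24(b)
x_noisy, _ = simulate(Kp=10.0, d_var=0.01, xi_var=0.1)  # cf. Fig. 24(c)
print(x_open.std(), x_prop.std(), x_noisy.std())
```

The standard deviations reproduce the qualitative story of Fig. 24: feedback shrinks the open-loop excursions, while sensor noise fed back through the loop inflates them again.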
In Fig. 24(a), we plot a time series xn for the open-loop system without feedback (Kp = rn = 0). The time constant of the system low-pass filters the white noise due to the environment (d² = 0.01). In Fig. 24(b), we add feedback (Kp = 10). The plot of xn shows how the proportional feedback increases the cutoff frequency of the low-pass filtering (ω′ ≫ ω0). The reduced standard deviation illustrates how feedback controls the excursions of x from the setpoint.

In Fig. 24(c), we add sensor noise (ξ² = 0.1). Because the control loop cannot distinguish between the desired control signal rn and the undesired sensor noise ξn, the regulation is noticeably worse. This degradation in performance would be aggravated were one to use derivative control, which amplifies high-frequency noise.
Sensor noise can thus strongly degrade the performance of a control loop. One might try to limit the sensor noise by reducing the sensor’s bandwidth—for example, by adding a low-pass filter to the sensor circuit.
This works as long as the sensor bandwidth remains
much greater than that of the feedback. If not, the extra
low-pass filter adds a phase lag that degrades the stability margins, forcing one to lower the feedback gain. If
the sensor has significant noise within the feedback
bandwidth, straightforward frequency filtering will not
be sufficient.
About 40 years ago, Kalman (1960) suggested a clever
strategy that is a variant of the observer discussed in Sec.
III.C.29 There, one used the dynamics to evolve an estimate of the internal state forward in time, corrected by a
term proportional to the difference between the actual
observed variable and the prediction. Here, the strategy
is similar, but because both the dynamics and the observations are subject to noise (disturbances and sensor noise), one wants to blend the two in a way that minimizes the overall uncertainty. Like the observer, though,
the key idea is to use the system dynamics to supplement what one observes about the system, in order to
estimate the complete internal state of the dynamics.
28 Caution: the variance of the noise terms dn and ξn increases linearly in time. In other words, the associated standard deviation, which would be used by a random-number generator in a numerical simulation of Eq. (5.36), is proportional to √Ts.
29 Norbert Wiener introduced the idea that control systems should be analyzed stochastically (Wiener, 1961).
We begin by noting that there are three related quantities that describe the internal state of a dynamical system at time n + 1:

true state:  xn+1 = φxn + un + dn,
predicted state:  x̂n+1 = φx̂n|n + un,
best estimate:  x̂n+1|n+1 = (1 − K)x̂n+1 + Kyn+1,    (5.37)

and two quantities that describe the measurements:

actual measurement:  yn+1 = xn+1 + ξn+1,
predicted measurement:  ŷn+1 = x̂n+1.
Keeping these straight is half the battle. In the last state-vector item, the "Kalman gain" K is chosen to minimize the overall uncertainty in x̂n+1 by blending, with proper weight, the two pieces of information we have available: the best estimate based on a knowledge of previous measurements and of the system dynamics, and the actual measurement. The notation x̂n+1|n+1 means that the estimate of xn+1 uses observations up to the time n + 1. By contrast, the prediction x̂n+1 uses observations only up to time n. Note that the true state x is forever unknown and that usually there would be fewer measurements y than state-vector components for x.
Our goal is to derive the optimal value for the Kalman gain K. To proceed, we write x̂n+1|n+1 as

x̂n+1|n+1 = φx̂n|n + un + K(yn+1 − ŷn+1),

in the standard, observer form [Eq. (3.21)] using Eq. (5.36). Defining the estimation error en = xn − x̂n|n, we can easily show using Eqs. (5.36) and (5.37) that

en+1 = (1 − K)[φen + dn] − Kξn+1.    (5.38)

Note how Eq. (5.38), in the absence of the noise terms dn and ξn+1, essentially reduces to our previous equation for estimator error, Eq. (3.22).
The crucial step is then to choose the Kalman gain K in the "best" way. We do this by minimizing the expected value of the error variance at time n + 1. This error variance is just

⟨en+1²⟩ = (1 − K)²[φ²⟨en²⟩ + d²] + K²ξ².    (5.39)

In Eq. (5.39), the cross terms are zero because the different noise and error signals are uncorrelated with each other: ⟨dξ⟩ = ⟨ed⟩ = ⟨eξ⟩ = 0. Differentiating Eq. (5.39) with respect to K, we minimize the error at time n + 1 by taking

Kn+1 = (φ²⟨en²⟩ + d²)/(φ²⟨en²⟩ + d² + ξ²).    (5.40)

We put the index n + 1 on K because, in general, the minimization must be done at each time step, and the dynamics, as well as the noise statistics, may have explicit time dependence. Equation (5.40) implies that if d² is big and ξ² small, Kn+1 ≈ 1: one should trust the measurement. Alternatively, if sensor noise dominates, one
should weight the dynamical predictions more heavily by taking Kn+1 ≈ 0. Equation (5.40) gives the optimal balance between the two terms. If the coefficients of the dynamical equations do not depend on time (n), there will be a time-independent optimum mix K*. (Even with stationary dynamics, poorly known initial conditions will tend to make the Kalman filter initially put more weight on the observations. Subsequently, the optimal K's will decrease to K*.)
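The recursion of Eqs. (5.39) and (5.40) is easily implemented. Below is a minimal scalar Kalman filter; φ and the noise variances are illustrative choices, not values from the text, but they are picked so that the sensor noise dominates the disturbances, as in the discussion of Fig. 24:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar Kalman filter for xn+1 = phi*xn + dn, yn+1 = xn+1 + xin+1 (un = 0).
# Illustrative parameters; sensor noise dominates the disturbances.
phi, n_steps = 0.9, 500
d_var, xi_var = 0.01, 0.1        # <d^2> and <xi^2>

x = np.zeros(n_steps)            # true state (unknown to the filter)
y = np.zeros(n_steps)            # measurements
x_hat = np.zeros(n_steps)        # best estimates x_hat_{n|n}
P = 1.0                          # error variance <e_n^2>; pessimistic start
gains = []

for n in range(n_steps - 1):
    x[n + 1] = phi * x[n] + rng.normal(0.0, np.sqrt(d_var))
    y[n + 1] = x[n + 1] + rng.normal(0.0, np.sqrt(xi_var))
    P_pred = phi ** 2 * P + d_var                # variance of the prediction
    K = P_pred / (P_pred + xi_var)               # optimal gain, Eq. (5.40)
    x_hat[n + 1] = (1 - K) * phi * x_hat[n] + K * y[n + 1]
    P = (1 - K) ** 2 * P_pred + K ** 2 * xi_var  # updated variance, Eq. (5.39)
    gains.append(K)

print(gains[0], gains[-1])       # gain relaxes from ~0.9 toward K* ~ 0.2
```

With these parameters, the gain starts near 1 (poorly known initial condition) and settles to a steady K* ≈ 0.2, and the estimate x̂ fluctuates less about the true state than the raw measurements do, mirroring Fig. 24(d).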
In Fig. 24(d), the optimal K* is found to be ≈0.2 after a transient lasting 10–20 time steps, implying that afterwards relatively little weight is placed on new measurements. This is not surprising, given that the standard deviation of the sensor noise is more than three times that of the disturbances. Still, no matter how noisy the sensor, it always pays to have K* > 0 in that one needs at least some contact with the measured system state. Note in Fig. 24(d) the marked improvement in regulation compared to Fig. 24(c), showing that many of the problems created by sensor noise have been mitigated. The actual measurement signal in Fig. 24(d) is similar to the trace in Fig. 24(c), showing that we have "filtered" out the measurement noise in constructing the estimate x̂. Note, too, how the estimate x̂ has fewer fluctuations than the actual state x. In sensor applications, such as the Michelson interferometer discussed above, where the output is the feedback actuator signal, this is a significant advantage.
One important point is how to estimate d² and ξ². If the sensor can be separated from the system being measured, one can easily establish its noise properties, which are usually close to Gaussian. (A typical procedure is to short the sensor input and measure the power spectrum of the sensor output.) Disturbances can be more problematic. In the most straightforward case, they are due to noise from the output stages of the amplifier that powers the system's actuator. But more commonly, the most important disturbances are due to environmental perturbations. For example, in a scanning tunneling microscope (STM), they would be due to the roughness of the surface, and one would need some knowledge about the typical surface to fix the roughness scale. It matters less that the height profile of the surface show Gaussian fluctuations than that there is a typical scale. If so, then the Kalman algorithm, which basically assumes that the only relevant information is the second moment, will usually be reasonable. If the fluctuations deviate in a serious way—e.g., if they show a power-law distribution—then the algorithm may be inaccurate. But note that true power-law behavior is probably fairly rare. In the case of an STM scanning a surface, for example, scans are over a finite area, and for fixed scanning area, there would be a typical length scale for surface fluctuations. (This length could diverge with the overall scan size, but so weakly in practice that one could ignore the effect.)
The discussion so far has been for a scalar case with
only one degree of freedom. The general case is handled
similarly, with exact vector-matrix analogs for each
equation discussed above. The algebra becomes more
complicated, but all the basic reasoning remains the
same (Dutton et al., 1997). [The generalization of Eq. (5.39) leads to a "matrix Riccati equation," which requires some care in its solution.]
The Kalman filter is similar to the Wiener filter [compare Eq. (5.40) with Eq. (13.3.6) of Press et al. (1993)], which solves the problem of extracting a signal that is convoluted with an instrumental response and corrupted by sensor noise. One difference, which is one of the most attractive features of the Kalman filter, is its recursive structure: data at time n + 1 are calculated in terms of data at time n. This means that one only has to keep track of the most recent data, rather than an entire measurement set.
To understand this advantage better, consider the computation of the average of n + 1 measurements,

⟨x⟩n+1 ≡ [1/(n + 1)] Σi xi.    (5.41)

In computing the average in Eq. (5.41), one normally sums the n + 1 terms all at once. Alternatively, Eq. (5.41) can be rewritten recursively as

⟨x⟩n+1 = [n/(n + 1)]⟨x⟩n + [1/(n + 1)]xn+1,    (5.42)

which is the strategy used in formulating the Kalman filter. The Kalman filter takes advantage of the fact that
data are collected periodically and that the driving noise
has no correlation from one step to another. Its main
limitations are that the calculation assumes one knows
the underlying system and that system is linear. If the
model is slightly off, the feedback nature of the filter
usually does an adequate job in compensating, but too
much inaccuracy in the underlying dynamical model will
cause the filter to produce unreliable estimates of the
state x. Perhaps the most important idea, though, is that
the Kalman filter shows explicitly the advantage of
tracking the system’s dynamics using an internal model.
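As an aside, the recursive-average trick of Eqs. (5.41) and (5.42) is easy to verify directly; only the current estimate is kept in memory, not the full measurement record:

```python
import numpy as np

# Recursive running mean [Eq. (5.42)] on arbitrary synthetic data.
rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 1000)

mean = 0.0
for n, x_new in enumerate(data):
    # After step n (0-based), mean is the average of the first n+1 samples.
    mean = n / (n + 1) * mean + x_new / (n + 1)

print(abs(mean - data.mean()))   # agrees with the batch average
```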
Since the Kalman filter generalizes the observer discussed earlier in Sec. III.C, one recovers a simple observer if the sensor noise is zero. Whether generated by an observer or a Kalman filter, the estimate of the state vector is used to generate an error signal in the feedback loop. This loop can then be designed using the optimal control theory described in Sec. V.B.3, which in this context is called LQG control: linear model, integral quadratic performance index, Gaussian noise process (Skogestad and Postlethwaite, 1996). The "separation theorem" proves that the dynamical performance of the feedback loop based on the optimal estimates provided by the Kalman filter has the same properties (poles, etc.) as it would if the internal states x(t) of the model were directly observable. Thus one can separate the observer problem from the control problem.
Like most of the techniques discussed here, the Kalman filter is designed to deal with linear systems. Linearity actually enters in two places: in the dynamical equation for the internal state x(t) and in the output relation between x and y. If either of these relations is nonlinear, a simple strategy (the "extended Kalman filter") is to linearize about the current operating point. Thus if the state xn+1 = f(xn) for some nonlinear function f(x), one would update x using φn = df/dx, with the derivative evaluated at the current state xn. The extended Kalman filter works well in slightly nonlinear situations, but it implicitly assumes that Gaussian statistics are preserved under the nonlinear dynamics. For
strong nonlinearities, the analysis is much more difficult,
and the search for approximate solution methods is an
active area of research (Evensen, 2003; Eyink et al.).
Although our discussion here has been rather elementary, we hope to have motivated the reader to explore a
strategy that, despite occasional attempts at popularization over the years (Cooper, 1986; Gershenfeld, 1999),
remains little used by physicists. A natural area to apply
such ideas is in instrumentation for biophysics studies
involving the tracking of micron-sized beads (e.g., Gosse and Croquette, 2002; Cohen, 2005), where disturbances
are mostly due to Brownian motion.
E. Robust control
Control theory underwent something of an identity
crisis during the 1970s. Although the sophisticated methods described in the above sections can give feedback schemes that work very well on paper, the results, in practice, were often disappointing. This led many practical engineers (and physicists) to conclude that it was not worth learning fancy techniques and that the tried-and-true PID controller was just about all one needed to know. Indeed, the consensus is that the academic research on control theory from 1960 to about 1980 had "negligible effect" on industrial control practice [see the introduction to Morari and Zafiriou (1989), as well as Leigh (2004)].
The root of the problem was that the schemes presented so far implicitly assume that the system itself and
the types of inputs and disturbances it is subject to are
well known. But in practice, models of a system have
uncertainties—parameters differ from setup to setup,
high-frequency modes may be neglected, components
age, and so on. Controllers that are optimized (e.g., in the sense of Sec. V.B.3) for one system may fail miserably on the slightly different systems encountered in practice.

The challenge of finding effective control laws in the
face of such uncertainties led to two approaches, beginning in earnest in the late 1970s and early 1980s. One of
these, adaptive control, tries to reduce uncertainty by
learning more about the system while it is under control.
We discuss this briefly in Sec. VIII, below. The other
approach is that of “robust control,” where one tries to
come up with control laws that take into account this
uncertainty. Here we give some of the basic ideas and
flavor of this approach, following mainly Doyle et al. (1992), Skogestad and Postlethwaite (1996), and Özbay (2000). In Sec. V.E.1, we show that in any control system
one implicitly assumes a model of the system being controlled. In Sec. V.E.2, we give a useful way of quantifying one's uncertainty about a system.

FIG. 25. Block diagram of an IMC controller. Shaded area is implemented either in a computer program or in control electronics.

In Sec. V.E.3, we show
how to test whether, given a nominal system with uncertainty, the system is stable. This leads to the notion of "robust stability." In Sec. V.E.4, we give an analogous method for ensuring "robust performance" in the face of uncertainty about control inputs and disturbances. Finally, in Sec. V.E.5, we discuss how to find control laws that balance the competing objectives of robust stability and performance.
1. The internal model control parametrization
As a useful starting point, we introduce an alternative way of parametrizing control systems, known as "internal model control" (IMC) (Morari and Zafiriou, 1989; Goodwin et al., 2001). The basic idea is to explicitly include a model G̃(s) of the system's actual dynamics, G(s). This leads to the block diagram of Fig. 25. (For simplicity, we omit external disturbances, sensor noise, sensor dynamics, etc.) Solving the block dynamics in Fig. 25, one finds

y(s) = [GQ/(1 + (G − G̃)Q)] r(s).    (5.43)
Notice that the feedback signal is v = (G − G̃)u. This shows explicitly that if we were to have a perfect model (and no disturbances), there would be no need for feedback. Feedback is required only because of our always imperfect knowledge of the model system and its disturbances. As we discuss in Sec. IX below, there is a deep connection between feedback and information.

There are a number of other advantages to the internal model control formulation. It is easy to relate the internal model control controller Q(s) to the "classic" controller K(s) discussed previously:
K(s) = Q/(1 − G̃Q).    (5.44)

Because the denominator in Eq. (5.43) is 1 + (G − G̃)Q, the feedback system will become unstable only if Q = −1/(G − G̃), which will be large when G ≈ G̃. In other words, as long as the model G̃ is a reasonable approximation to the real system G(s), a stable controller Q(s) will lead to a stable closed-loop system. [Here we assume that G(s) itself is stable.] This is in contrast to the
classic controller K(s), where stable controllers can nonetheless destabilize the system. A more positive way of saying this is to note that for the system G(s), the set of all stable Q(s)'s generates all stable controllers K(s).

Another advantage of the IMC structure is that the above remarks about stability carry forward to cases where the system G and model G̃ are nonlinear. If G ≈ G̃, then their difference may be well approximated by a linear function, even when G and G̃ are themselves strongly nonlinear.
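These algebraic relations are easy to check numerically. The sketch below verifies that the classic controller K = Q/(1 − G̃Q) reproduces the IMC closed loop at any frequency; the plant G, the (deliberately imperfect) internal model Gm, and the controller Q are arbitrary illustrative choices:

```python
# Check that the IMC loop y/r = GQ/(1 + (G - Gm)Q) coincides with the classic
# loop GK/(1 + GK) when K = Q/(1 - Gm*Q). All transfer functions are
# illustrative, evaluated pointwise at s = i*omega.

def G(s):  return 1.0 / (1.0 + 1.0 * s)                    # actual system
def Gm(s): return 1.0 / (1.0 + 0.8 * s)                    # internal model
def Q(s):  return 2.0 * (1.0 + 0.8 * s) / (1.0 + 0.1 * s)  # stable Q(s)

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    K = Q(s) / (1.0 - Gm(s) * Q(s))
    classic = G(s) * K / (1.0 + G(s) * K)
    imc = G(s) * Q(s) / (1.0 + (G(s) - Gm(s)) * Q(s))
    assert abs(classic - imc) < 1e-12
print("classic and IMC parametrizations agree")
```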
Finally, in Eq. (5.43), we see that if G = G̃ and we set Q = 1/G̃, then we will have perfect control [y(s) = r(s), exactly]. This means that even when the system is known exactly, it is necessary to invert the model transfer function G̃(s) in order to have perfect control. If G̃(s) is nonminimum phase and has a zero in the right-hand plane, say at s0, then the controller Q(s) will have a pole at s0 as well and will be impossible to implement at the corresponding frequency. (Intuitively, one needs infinite energy to move a system that has zero response.) This means that the bandwidth of the closed-loop system—whatever the feedback law chosen—will always be limited by the lowest-frequency right-hand-plane zero of the system G(s). Thus we see another reason to watch out for nonminimum-phase dynamics and to eliminate them by redesigning the system whenever possible.
2. Quantifying model uncertainty
The internal model control structure highlights the
role that an internal model of the system plays. Since
models are never perfect, one must learn to deal with
the consequences of uncertainty. The first step is to
quantify the uncertainty of the system model. The most
obvious way is to allow for uncertainties in any model
parameters. For example, in a second-order system such as Eq. (2.5), one could estimate uncertainties in the damping ζ and natural frequency ω0. There are two limitations to this approach: First, it assumes that the form of the model is correct. But one may not know the physics of the model well, and, even if one does, one may choose to neglect certain parts of the dynamics (such as higher-order modes), which translate into errors in the model. Second, one would, in principle, have to come up with a special way of dealing with the effects of different kinds of parameters. For example, an uncertainty in natural frequency just amounts to a rescaling of the time axis, whereas an uncertainty in damping could mean that sometimes a system is underdamped and sometimes overdamped, which could have very different consequences for the control law.
In the control literature, the uncertainty that enters via imperfectly known parameters is called "structured uncertainty," since a certain model ("structure") is assumed. To get around the problems discussed above, one can consider instead "unstructured uncertainty." In this approach, we start with a nominal system G0(s). The actual system, which is unknown, is a member of a set of systems G(s), defined by an arbitrary multiplicative uncertainty:

G(s) ≡ [1 + Δ(s)W(s)]G0(s),    (5.45)

where Δ(s) is an arbitrary transfer function that satisfies ‖Δ‖∞ ≤ 1 and where W(s) is a transfer function that gives the uncertainty limits. The ∞ subscript on the norm refers to the "H∞" norm, which is defined to be

‖Δ‖∞ ≡ maxω |Δ(iω)|.    (5.46)

In words, the H∞ norm is computed by taking the maximum of the magnitude of the function, in contrast to the more common Euclidean, or H2, norm, defined in frequency space by

‖Δ‖2 ≡ √[(1/2π)∫−∞^∞ |Δ(iω)|² dω].    (5.47)
The reasons for using the H∞ norm will become clear shortly. The best way to picture multiplicative uncertainty is in the complex s plane, Fig. 27, where we plot G0(iω), the nominal system, and the band formed by superposition of the frequency-dependent multiplicative perturbations. At each frequency, the system is located within a circle of radius |W(iω)G0(iω)| centered on G0(iω), as illustrated. Superposing all the circles gives the two bands shown. The multiplicative bound means that the uncertainties are expressed relative to G0(iω).
There is no deep reason for using multiplicative uncertainty. For example, one could use additive uncertainty, defined by G(s) ≡ G0(s) + Δ(s)W(s). But an additive uncertainty is easily redefined in terms of a multiplicative one; in addition, multiplicative uncertainties tend to arise naturally in control problems. For example, if the actuator (considered as part of the system) has a temperature-dependent gain and the equipment is to be used in rooms of differing temperatures, then the set of systems ranges from (K0 − ΔK)G0(s) to (K0 + ΔK)G0(s), meaning that the fractional uncertainty is W = ΔK/K0. In general, both K0 and ΔK could be functions of frequency.
A more interesting example is a system that includes an unknown time delay τ, ranging from 0 to τmax. If the nominal system is G0(s), then, at each frequency ω, the uncertainty bound |W(iω)| must be greater than

|G(iω)/G0(iω) − 1| = |e^{−iωτ} − 1|  (0 ≤ τ ≤ τmax).    (5.48)

In Fig. 26, we show that a suitable bound is

W(s) = 2.1τmaxs/(1 + τmaxs),    (5.49)

which can be derived by noting the asymptotic behavior for ω → 0 and ω → ∞ of Eq. (5.48) and increasing the amplitude slightly to avoid "cutting the corner" in Fig. 26.
In practice, one way to construct a bounding function W(s) is to measure a system transfer function several times, under a variety of conditions.

FIG. 26. (Color in online edition) Multiplicative bound for a variable delay. Bottom curves: magnitude of Eq. (5.48); top curve: magnitude of Eq. (5.49).

Let the kth transfer-function measurement be done over a set of frequencies ωj, giving a series of magnitude-phase pairs (Mjk, φjk). For each point j, estimate a "typical value" M*j e^{iφ*j}. Then find an analytic function W(s) that satisfies, for every j and k,

|Mjk e^{iφjk} − M*j e^{iφ*j}| / |M*j e^{iφ*j}| ≤ |W(iωj)|.    (5.50)
It is important to realize that M*j and φ*j need not be the averages of the measurements. The main usefulness of the robust approach is in dealing with the effects of systematic variations. If used for random variations following known statistics, it will likely be overconservative. Typically, systematic variations arise because only part of a complex system is being modeled.
As discussed above, the effects of higher-order modes may be neglected or projected away, and there may not be enough separation from the lower-order term to model the effect of those neglected modes by a white-noise term in a Langevin equation, which is the usual physics approach (Chaikin and Lubensky, 1995). Another example is nonlinear dynamics, which will be discussed below in Sec. VI. Because the methods we have been discussing are based heavily on linear techniques, one must assume that the system dynamics are linear about a given operating point. Although this is often true, the parameters and even the form of the linear system can vary with the operating point chosen. A final example is that one's system is almost always embedded within a larger system, whose dynamics are not modeled. Thus an experiment in a room may show different behavior as a function of temperature. A biochemical network (see Sec. VII.C below) may function in a cell whose environment changes significantly in different conditions, and so on. In all of these situations, the right thing to do is to choose typical conditions, which may or may not involve an average, and then to identify the expected maximum deviation at each frequency ωj. |W(s)| is then an analytic function bounding all of these deviations.
3. Robust stability
In the previous section, we saw one way to quantify the uncertainty in a model of system dynamics. Given that one's system belongs to a family of systems G(s), one would like to choose a controller K(s) so that, at a minimum, the feedback loop is stable for all possible realizations G(s) of systems taken from the family G(s). This property of being stable over all possible realizations is known as robust stability.

Unlike the system, we can assume that we know the controller K(s) exactly.30 Then, the loop gain L = KG will be a member of a set L = KG. Since we have seen that a system goes unstable when the denominator of the transfer functions T and S equals zero, we must have 1 + L(iω) ≠ 0 for all ω, for all values of any parameters used in K, and for all systems G. The last requirement can be restated succinctly as 1 + L ≠ 0.31
To analyze things further, define G as the set of systems G(s) = G0(s)[1 + Δ(s)W(s)], with W the uncertainty bound and Δ an arbitrary transfer function with magnitude ≤1. Similarly, write the set of loop transfer functions L as L(s) = L0(s)[1 + Δ(s)W(s)], with L0(s) = K(s)G0(s). Then the condition for robust stability can be written

|1 + L(iω)| = |1 + L0(iω) + Δ(iω)W(iω)L0(iω)| > 0  ∀ ω.    (5.51)
This can be illustrated by a diagram analogous to Fig. 27, where instead of plotting the family of systems G one plots the family of loop transfer functions L. Equation (5.51) then states that the light shaded area cannot touch the point −1. Because, at each ω, Δ is any complex number with magnitude ≤1, we can always choose the worst case, i.e., the Δ that minimizes the left-hand side of Eq. (5.51). This implies

|1 + L0(iω)| − |W(iω)L0(iω)| > 0  ∀ ω,    (5.52)

or, equivalently,

|W(iω)L0(iω)| / |1 + L0(iω)| < 1  ∀ ω.    (5.53)

Since the complementary sensitivity function of the nominal system is T = L0/(1 + L0), we can write this as

‖WT‖∞ < 1,    (5.54)
where we have used the H∞ norm as shorthand for |W(iω)T(iω)| < 1 ∀ ω. The use of the H∞ norm arises from the desire to be conservative, to be stable for the worst

30 If the controller is implemented digitally, this will be strictly true. Almost all practical robust controllers are implemented digitally, since the algorithms generate fairly complex controllers, which would be complicated to implement with analog electronics.
31 As discussed in Sec. IV.B above, we are simplifying somewhat. For a given controller K(s), one would invoke the Nyquist criterion to see whether, for any system in the family, the Nyquist plot of L(iω) circles −1 an appropriate number of times to be unstable. In practice, one is almost always worried about computing the limit where a system (or set of systems) crosses over from being stable to unstable, in which case the relevant criterion is 1 + L = 0.
FIG. 27. (Color in online edition) Illustration of a multiplicative uncertainty bound and of robust stability. The thick, central line is the nominal system G0, with Im[G0(iω)] plotted vs Re[G0(iω)] over 0 < ω < ∞ ("Nyquist plot"). The thin lines that shadow it are the uncertainty limits, meaning that the actual system follows a path somewhere within the shaded area. The shaded circle gives the multiplicative uncertainty bound at one frequency. If we now interpret the thick central line as a loop transfer function and if the system is closed in a feedback loop, then robust stability implies that the set of loop transfer functions defined by the light shaded region cannot touch the point −1, which is indicated by the large dot.
possible realization of one's system. Equation (5.54) also implies that the uncertainty bound W must be less than 1/T at all frequencies for robust stability to hold.
To see how robust stability works, let us consider again a first-order system with variable time lag. The nominal system is G0(s) = 1/(1 + τ0s). If the maximum expected lag is τmax, we saw above [Eq. (5.49)] that we can take W(s) = 2.1τmaxs/(1 + τmaxs). Now, for a given controller, what is the maximum gain we can apply while still maintaining robust stability? Taking, for simplicity, K(s) = K, we see that Eq. (5.54) implies

f(K,ω) ≡ 2.1τmaxωK / [√(1 + τmax²ω²) √((1 + K)² + τ0²ω²)] < 1.    (5.55)

We then seek K = Kmax such that f(Kmax, ω*) = 1 and (∂f/∂ω)|Kmax,ω* = 0. Numerically, for τ0 = 1 and τmax = 0.1, one finds Kmax ≈ 10. By contrast, the stability limit found in Sec. IV.C for a first-order system with time delay was Kmax ≈ 15 [Eq. (4.10), with a factor of 2 for gain margin].
The robust-stability criterion leads to a smaller maximum gain than does a calculation based on a precise model of the system (with delay). Uncertainty thus leads to conservatism in the controller design.
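The quoted value Kmax ≈ 10 is easy to reproduce numerically. The sketch below bisects on K, using the fact that the worst-case value of f(K, ω) over ω increases monotonically with K:

```python
import numpy as np

# Reproduce Kmax ~ 10 for robust stability of G0 = 1/(1 + tau0*s) with a
# variable delay bounded by W(s) = 2.1*tau_max*s/(1 + tau_max*s), K(s) = K.
tau0, tau_max = 1.0, 0.1
omega = np.logspace(-2, 4, 20000)

def f_max(K):
    # Worst case over omega of f(K, omega) = |W(i w) T(i w)|.
    f = (2.1 * tau_max * omega * K
         / (np.sqrt(1.0 + (tau_max * omega) ** 2)
            * np.sqrt((1.0 + K) ** 2 + (tau0 * omega) ** 2)))
    return f.max()

lo, hi = 1.0, 100.0                 # bracket; f_max is monotonic in K
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f_max(mid) < 1.0:
        lo = mid
    else:
        hi = mid
Kmax = 0.5 * (lo + hi)
print(Kmax)                         # ~ 10, as quoted in the text
```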
4. Robust performance
The previous section was mostly concerned with stability, but the point of feedback is usually to improve the
performance of an open-loop system. In Sec. V.B.3, we
defined a scalar “performance index” that measures how
well a system rejects errors and at what control costs
[Eq. (5.8)]. One difficulty is that such optimal control assumes a particular disturbance d(t) and a particular reference signal r(t), whereas in practice these signals vary. In other words, we would like to assure good performance for a set of disturbances and reference signals.
We begin by recalling that, in the absence of sensor noise, the tracking error e0(s) = S(s)[r(s) − d(s)], where the sensitivity function S = 1/(1 + L). Thus, to have good performance, we would like to have |S(iω)| ≪ 1. However, the loop gain L = KG will go to zero at high frequencies, so that S → 1 as ω → ∞, meaning that it is impossible to track a reference or compensate for disturbances to arbitrarily high frequencies. Actually, the situation is even more difficult, since the analytic structure of L constrains the form of S. Depending on the pole-zero structure of L (especially, the number of poles and zeros in the right-hand plane), one has a number of different analytical constraints on S. To give the simplest one, originally derived by Bode, assume that L has neither poles nor zeros in the right-hand plane (i.e., assume it is stable and minimum phase), and assume also that the relative degree of L is at least 2 (i.e., L ~ ω⁻ⁿ as ω → ∞, with n ≥ 2). Then one can show (Doyle et al., 1992) that

∫0^∞ ln|S(iω)| dω = 0.    (5.56)
This means that on a log-linear plot of S(iω) the area below 0 (|S| < 1) must be balanced by an equal area above 0 (|S| > 1). This is illustrated in Fig. 28. When |S| < 1, the control loop is decreasing the sensitivity to disturbances and the system is made more "robust." When |S| > 1, the control loop actually amplifies disturbances, and the system becomes more "fragile." Equation (5.56) implies a kind of "conservation of fragility," with frequency ranges where the system is robust to disturbances being "paid for" with frequency ranges where the system is fragile (Csete and Doyle, 2002). This is also known as the "waterbed effect": push |S| down at some frequency, and it will pop up at another! As a consequence, increasing the gain will increase the frequency range over which disturbances will be rejected but will increase the fragility of the system at higher frequencies.
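Equation (5.56) can be verified numerically. The following sketch (assuming Python with NumPy and SciPy; the loop L(s) = K/(1+s)², proportional gain acting on a second-order lag, is an illustrative choice in the spirit of Fig. 28, not taken from the text) integrates ln|S(iω)| over all frequencies:

```python
# Numerical check of the Bode sensitivity integral, Eq. (5.56):
#   integral_0^inf ln|S(i w)| dw = 0
# for a stable, minimum-phase loop of relative degree 2.
import numpy as np
from scipy.integrate import quad

K = 4.0  # proportional gain (illustrative)

def log_abs_S(w):
    L = K / (1.0 + 1j * w) ** 2          # loop gain L(i w)
    return np.log(abs(1.0 / (1.0 + L)))  # ln|S(i w)|

val, err = quad(log_abs_S, 0.0, np.inf, limit=200)
print(val)  # ~ 0: the negative ("robust") area balances the positive ("fragile") area
```

Raising K deepens the robust region at low frequencies but, by Eq. (5.56), correspondingly enlarges the fragile region near the loop resonance.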
Given these constraints on S and given that r and d are usually unknown, one can proceed by defining a set of expected or desired control inputs or disturbances and asking that the control error be small for any member of these sets. For simplicity, we consider only disturbances. Let the set of disturbances be characterized by d(s) = Δ(s)W1(s), where, as before, W1(s) is a bounding function (the "performance weight") and Δ(s) is an arbitrary transfer function with |Δ| ≤ 1. Typically, W1(s) is large at low frequencies and cuts off at high frequencies. The functional form is often taken to be a lag compensator [Eq. (5.6)]. Again, W1 represents the largest disturbances one expects or, at least, that one desires to suppress. The largest error that one expects is
|e0(iω)| = |dS| ≤ max_ω |W1(iω)S(iω)| = ‖W1S‖∞.    (5.57)

Then we can reasonably ask that the worst possible error resulting from the most dangerous expected disturbance be bounded, i.e., that ‖W1S‖∞ < 1, where W1 is implicitly rescaled to make the bound 1.

Equation (5.57) represents the desired nominal performance, given an accurate model of the system (Morari and Zafiriou, 1989). One should also ask for robust performance, so that Eq. (5.57) holds for all systems G allowed by the uncertainty. We replace S by SΔ, the sensitivity function of a particular realization of one of the possible systems GΔ = G0(1 + ΔW2), where we now use W2 for the multiplicative bound on the system uncertainty. Then

|W1SΔ| = |W1| / |1 + L0 + ΔW2L0| = |W1S| / |1 + ΔW2T| < 1.    (5.58)

Multiplying through, we have |W1S| < |1 + ΔW2T|. Since the worst case over allowed uncertainties (|Δ| ≤ 1) gives |1 + ΔW2T| ≥ 1 − |W2T|, we require

|W1S| < 1 − |W2T|, ∀ω,    (5.59)

which is equivalent to

‖|W1S| + |W2T|‖∞ < 1.    (5.60)

This is the robust-performance problem. Again, the H∞ norm means that the relation holds for all frequencies. In Eq. (5.60), all quantities are evaluated at s = iω, and S and T refer to the nominal system G0.

5. Robust control methods

FIG. 28. (Color in online edition) Constraints on the sensitivity function S = 1/(1 + L). (a) Logarithm of the sensitivity function ln|S(iω)| for a second-order system with proportional gain. The negative area, where control is "robust," is just balanced by the positive area, where control is "fragile." Note that the slow decay (here, ln|S| ~ ω^−2 at large frequencies) implies that a significant part of the positive area is off the plot. (b) Nyquist plot of the same system. Note that when the plot enters the shaded circle of radius 1, centered on −1, we have |S| > 1.
From our point of view, the formulation of the robust-performance problem is more important than its solution. Equation (5.60) may be thought of as another type of optimal-control problem. Instead of asking that the left-hand side be less than 1 (which may not be possible, given that the performance criterion reflects one's desires, not what is possible), we can ask that it be less than some bound γ. Then the problem is to find a controller K that minimizes ‖|W1S| + |W2T|‖∞, given performance and stability weights W1 and W2 and given a nominal system G0. Finding even an approximate solution to this problem requires a sophisticated treatment (Doyle et al., 1992; Skogestad and Postlethwaite, 1996; Özbay, 2000) that is beyond the scope of this tutorial. Alternatively, one can seek a numerical solution by postulating some form for K(s) and running a numerical optimization code to find the best values of any free parameters. On the one hand, such optimization is in principle straightforward, since one usually does not want a controller with more than a dozen or so free parameters. On the other hand, the landscape of the optimized function is usually rugged, and a robust optimization code is needed. Recent work has shown that genetic algorithms can be quite effective (Jamshidi et al., 2003). For the problems that most readers are likely to encounter in practice, there is probably little difference between the two approaches, in that both lead to a numerical solution for which software is available (e.g., MATLAB or other numerical packages).
Here we limit our discussion to a reconsideration of loop shaping. If we write out Eq. (5.60) in terms of L0 and recall that L0 is large at low frequencies and tends to zero at high frequencies, then we easily derive that

|L0(iω)| > |W1(iω)| / (1 − |W2(iω)|), ω → 0,

|L0(iω)| < (1 − |W1(iω)|) / |W2(iω)|, ω → ∞.    (5.61)

Thus Eq. (5.61) provides explicit criteria to use in shaping the low- and high-frequency forms of the controller K(s) = L0(s)/G0(s).
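To make the criteria concrete, the sketch below evaluates Eq. (5.60) and the two limits of Eq. (5.61) on a frequency grid. The plant G0 = 1/(1+s)², the proportional controller K = 1, and the weights W1 and W2 are all illustrative choices, not taken from the text:

```python
# Checking robust performance, Eq. (5.60), and the loop-shaping
# bounds, Eq. (5.61), on a frequency grid for an illustrative system.
import numpy as np

w = np.logspace(-3, 3, 2000)
s = 1j * w
G0 = 1.0 / (1.0 + s) ** 2        # nominal plant
K = 1.0                          # proportional controller
L0 = K * G0                      # nominal loop gain
S = 1.0 / (1.0 + L0)             # sensitivity
T = L0 / (1.0 + L0)              # complementary sensitivity
W1 = 0.5 / (1.0 + 10.0 * s)      # performance weight: large at low frequency
W2 = 0.2 * s / (1.0 + 0.01 * s)  # uncertainty weight: large at high frequency

# Eq. (5.60): H-infinity norm of |W1 S| + |W2 T| (max over the grid)
rp = np.max(np.abs(W1 * S) + np.abs(W2 * T))
print("robust performance satisfied:", rp < 1)

# Eq. (5.61): loop-gain bounds at the frequency extremes
print(abs(L0[0]) > abs(W1[0]) / (1 - abs(W2[0])))      # low-frequency bound
print(abs(L0[-1]) < (1 - abs(W1[-1])) / abs(W2[-1]))   # high-frequency bound
```

Tightening either weight (larger |W1| at low frequency or larger |W2| at high frequency) squeezes the allowed region for |L0(iω)| between the two bounds.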
Finally, the above discussion of robust control methods neglects sensor noise. In Sec. V.D on Kalman filtering, we saw how to estimate the system state in the presence of noise. The optimal state estimated by the
Kalman filter, however, assumed that one had accurate
knowledge of the system’s dynamics. Combining robust
methods with optimal state estimation remains a topic of
current research (Petersen and Savkin, 1999).
In the physics literature, robust controller design has seldom been used, but two recent examples both concern the control of a positioner for an atomic force microscope head (Schitter et al., 2001; Salapaka et al., 2002). In Salapaka et al., the system used a piezoelectric stack as an actuator, with a control loop to counteract nonlinearities and hysteresis in the actuator, as well as mechanical resonances in the stage. A first attempt at using a PI controller gave a bandwidth of 3 Hz. Use of the H∞ techniques led to a bandwidth of over 100 Hz. It should be noted, however, that the increase in performance came from replacing a two-term controller (P and I) with a more complicated form for K that had 12 terms to tune, thus giving much more freedom to shape K(s). It did not per se come from the choice of the H∞ (robust) metric. What robust and optimal control methods offer is a rational way of using performance objectives (e.g., high bandwidth of the closed-loop system) to choose the many free parameters in K(s). Without some systematic guide, tuning a 12-parameter controller would be difficult, if not impossible. The use of robust measures helps to insure that a solution that works "on paper" will perform satisfactorily in the real world, taking account of errors in the modeling, drift in parameter values, and so on.

VI. Notes on Nonlinear Systems
Most of our discussion so far has focused on the control of linear systems. Many types of dynamical systems
indeed are linear, or are close enough to an equilibrium
that they behave approximately linearly about some
equilibrium point. A straightforward approach is "gain scheduling," where one measures the transfer function locally over a range of set points, with the positions (and even types) of poles and zeros evolving with set point. One then varies the control-algorithm parameters (or even structure) as a function of the local transfer function.
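A minimal sketch of the idea (the plant, whose local gain varies with operating point, and all numbers are illustrative choices, not from the text): the proportional gain is interpolated from a small table measured at a few set points, so that the loop gain stays roughly constant across operating points:

```python
# Gain scheduling: interpolate the controller gain from a table so
# that Kp * (local plant gain) is roughly constant at every set point.
import numpy as np

def plant_gain(x):
    # "true" local gain of the plant, which varies with operating point
    return 1.0 + 2.0 * x

# gain table: at each scheduled set point, Kp is chosen so Kp * b = 5
sched_points = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
sched_gains = 5.0 / plant_gain(sched_points)

def run(setpoint, dt=0.001, T=5.0):
    Kp = np.interp(setpoint, sched_points, sched_gains)  # schedule the gain
    x = 0.0
    for _ in range(int(T / dt)):
        u = Kp * (setpoint - x)                # proportional control
        x += dt * (-x + plant_gain(x) * u)     # Euler step of the plant
    return x

print(run(0.2), run(1.8))  # both operating points settle with a similar fractional offset
```

Because the scheduled loop gain is (roughly) the same everywhere, the closed loop behaves similarly at very different operating points, despite the plant's nonlinearity; a single fixed Kp would give a loop gain varying by a factor of 5 over this range.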
While gain scheduling works well for weak nonlinearities, it does not work well for stronger ones. The past few decades
have brought an increasing awareness of the
importance—and ubiquity—of strongly nonlinear systems, and there have been increasing efforts to find ways
of controlling such dynamical systems. The difficulty is
that most of the methods we have discussed above, including frequency response and the manipulation of
pole positions, make sense in general only for linear or
nearly linear systems and often cannot even be adapted
to study nonlinear systems. Indeed, nonlinear systems
show phenomena such as chaos, hysteresis, and harmonic generation that do not exist in linear systems, implying that the failure of the linear methods in general is
a fundamental rather than a superficial one. Until recently, the physics literature and the control-theory literature on nonlinear dynamics were fairly separate.
There is little point in trying to review the vast variety of
different approaches that exist, a variety that no doubt
reflects our more limited and less systematic understanding of nonlinear systems. Instead, we briefly revisit the
notion of stability and then give a couple of illustrations.

FIG. 29. (Color in online edition) Metastability: the ball is stable to small perturbations but unstable to larger ones.
Over a century ago, Lyapunov generalized the notions
of stability discussed in Sec. IV.A. Such a generalization
is necessary because the linear stability analysis discussed there is not always reliable. For example, a system that is stable to infinitesimal perturbations may be unstable to finite perturbations ("metastability"), as illustrated in Fig. 29. See Strogatz (1994) for a physicist's discussion and Dutton et al. (1997) for a control engineer's discussion of stability in nonlinear systems and how Lyapunov functions can be used to prove the stability of a solution to perturbations lying within a given region of phase space. Most of the methods from the control literature (describing functions, Popov's method, Zames's circle criterion, etc.; see Dutton et al., 1997) deal only with rather specific types of nonlinearity and have correspondingly limited ranges of applicability. Instead of cataloging all these cases, we give one example of a (basically unavoidable) nonlinearity that is a common topic in control-theory texts and a second, more recent example from the physics literature, chosen to illustrate how different the approaches can be.
A. Saturation effects
In our discussion of control laws, such as proportional
control, we always assumed that however large the error
might be, it is always possible to generate the appropriate control signal. But every actuator has its limits. For example, if temperature is being controlled, then the actuator will often be a Peltier element, which pumps heat into a sample (i.e., heats it) when positive current is used and pumps heat out of a sample (i.e., cools it) when negative current is used. The Peltier element uses a bipolar current source with a maximum possible current. The actual control signal will then resemble the thick trace in Fig. 30 rather than the thin dashed one. The control law is therefore nonlinear for large-enough error signals. Such a saturation always occurs, although one may or may not encounter in a given situation the large errors needed to enter the nonlinear regime.
Saturation need not have a dramatic effect on proportional control. When the error signal is too large, the
system applies its largest correction signal, which is
smaller than it should be. Intuitively, the quality of control will smoothly degrade as the errors become larger and larger, relative to the saturation value (often called the "proportional band" in the control literature).
The same kind of saturation effect, however, also occurs with integral control, and there its effects are more
subtle. Consider a pure integral control law. If the system receives a large, long-lasting perturbation, the error
integral will begin to build up, and hence the control
signal. At some point, the control signal saturates, leaving the error to continue to build up (assuming that the error signal still has the same sign). The control signal will have reached its maximum value, where it will stay until the integral term has been reduced. For this to happen, it is not sufficient that the system return to a state of zero error, for a large constant part of the integral will have built up. The effect will be to create a large error of the opposite sign (and perhaps to be unstable). The easy fix, universally used in practical integral control laws (including PID ones), is to freeze the value of the integral error whenever the control signal saturates. When the signal reenters the proportional band, one updates the integral term as usual, and the performance will then be as calculated for the linear system. This runaway of the integral term due to the saturation of the actuator is known as "integral windup" (Dutton et al., 1997).
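The effect is easy to reproduce in simulation. In the sketch below (an illustrative first-order plant under pure integral control; all numbers are my own choices, not from the text), a large, long-lasting disturbance saturates the actuator; clamping the integrator so that the integral term never demands more than the actuator can deliver implements the anti-windup fix described above:

```python
# Integral windup and its fix: clamp the integrator whenever the
# control signal saturates, so the integral term cannot run away.
import numpy as np

def run(clamp_integrator):
    dt, T = 0.001, 20.0
    Ki, umax = 2.0, 1.0
    n = int(T / dt)
    x, I = 0.0, 0.0
    xs = np.empty(n)
    for k in range(n):
        t = k * dt
        d = 2.0 if t < 5.0 else 0.0       # large, long-lasting disturbance
        e = 0.0 - x                        # setpoint r = 0
        I += e * dt
        if clamp_integrator:
            # anti-windup: freeze/clamp I so Ki*I never exceeds the limit
            I = np.clip(I, -umax / Ki, umax / Ki)
        u = np.clip(Ki * I, -umax, umax)   # actuator saturation
        x += dt * (-x + u + d)             # first-order plant, Euler step
        xs[k] = x
    return xs

after = slice(int(5.0 / 0.001), None)      # samples after the disturbance ends
windup = run(clamp_integrator=False)[after].min()
no_windup = run(clamp_integrator=True)[after].min()
print(windup, no_windup)  # the clamped loop recovers with far less undershoot
```

Without the clamp, the integrator winds far past the actuator limit during the disturbance, and the wound-up term pins the output at a large error of the opposite sign for several seconds afterwards.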
A more general approach to handling such actuator
constraints is known as "Model Predictive Control" (Goodwin et al., 2001). Briefly, the basic idea is to integrate (or step) a dynamical model of the system forward in time by N units. One then optimizes a performance index over this future time period, assuming that no unknown disturbances enter and taking into account all actuator constraints. The first step in this optimal solution is actually taken, and then a new optimization problem is solved, again going forward by N units. One can then treat almost any kind of constraint, including sensor saturation, slew-rate limitations, etc.
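A minimal version of this receding-horizon idea, for a scalar linear plant with an input constraint (the plant, horizon, and weights below are illustrative choices, not from the text; SciPy's generic bounded optimizer stands in for the specialized solvers used in practice):

```python
# Model predictive control, minimal sketch: at each step, optimize the
# inputs over the next N steps subject to the actuator constraint,
# apply only the first input, and repeat.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.5          # plant x_{k+1} = a x_k + b u_k
r, umax = 1.0, 0.3       # setpoint and actuator limit
N, rho = 10, 0.01        # horizon length and control penalty

def cost(u, x0):
    x, J = x0, 0.0
    for uk in u:                        # step the model forward N units
        x = a * x + b * uk
        J += (x - r) ** 2 + rho * uk ** 2
    return J

x = 0.0
for _ in range(50):
    res = minimize(cost, np.zeros(N), args=(x,),
                   bounds=[(-umax, umax)] * N, method="L-BFGS-B")
    u0 = res.x[0]                       # apply only the first optimal input
    x = a * x + b * u0                  # real plant step (no disturbances here)
print(x)   # ~ 1.0: tracks the setpoint despite the input constraint
```

Because the constraint is part of the optimization itself, the controller plans around saturation instead of winding up against it.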
Saturation effects are unavoidable in control situations because they are part of the control mechanism
itself. Similar nonlinearities may also be dealt with in
various ad hoc ways. For example, if a digital controller
is used, there will be quantization nonlinearity due to
the finite word size of the analog-to-digital and digital-to-analog converters.32 (These effects can usually be cured most easily by making good use of the full dynamic range and, when more desperate, by investing in more expensive analog-to-digital and digital-to-analog chips that have more bits.) But the more interesting and important nonlinearities are to be found in the system dynamics themselves.

32In analog-to-digital conversion, the effects of quantization error can often be reduced by deliberately adding noise to the signal. If the noise amplitude is roughly one least-significant bit (LSB), then successive digitizations will reflect the true amplitude of the signal. For example, imagine measuring a signal level of 0.4 with a digitizer that reports 0 or 1. With no noise, one always measures 0. If the noise level (i.e., its standard deviation) is on the order of 0.5, one will measure a "0" 60% of the time and a "1" 40% of the time. Measuring the signal several times will suffice to get a reasonable estimate of the average. The method, similar in spirit to the technique of delta-sigma conversion discussed above in Sec. V.C, is known as dithering (Etchenique and Aliaga, 2004). It gives an example of how, in a nonlinear system, noise can sometimes improve a measurement.

FIG. 30. (Color in online edition) Saturation of proportional gain. Thin dashed line is the proportional gain with no saturation. Thick lines show saturation. There is, in general, no reason for saturation to be symmetric about zero, as depicted here.
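The footnote's dithering example is easy to simulate (uniform dither of one LSB is used below for simplicity, rather than Gaussian noise; the numbers follow the footnote):

```python
# Dithering: a signal level of 0.4 measured by a 1-bit digitizer that
# reports 0 or 1.  With no noise every measurement is 0; adding ~1 LSB
# of noise makes the average of many digitizations recover the level.
import numpy as np

rng = np.random.default_rng(0)
signal = 0.4
n = 100_000

def quantize(v):
    return (v >= 0.5).astype(float)    # 1-bit digitizer, threshold at 0.5

no_noise = quantize(np.full(n, signal)).mean()
dithered = quantize(signal + rng.uniform(-0.5, 0.5, n)).mean()
print(no_noise, dithered)   # 0.0 and ~0.4
```

With uniform dither of exactly one LSB, the probability of reading "1" equals the true sub-LSB level, so the sample mean is an unbiased estimate of the signal.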
As the above examples show, nonlinearities associated
with discontinuities in a function or one of its derivatives
are typical in engineering problems, while the systems
that physicists encounter more often have analytic nonlinearities. The following example shows how an analytic nonlinearity leads to complex, chaotic behavior that
is nonetheless controllable.
B. Chaos: The ally of control?
A chaotic dynamical system is one that shows sensitive dependence on initial conditions (Strogatz, 1994). Two nearby initial conditions will diverge exponentially. Most often, one deals with dissipative chaotic systems, where the dissipation implies that the system will be confined to a finite volume of phase space (state space, in our language here). Although the system is always locally unstable, the state vector stays in some general region. One can ask whether it is possible, through small variations of one or more of the system's control parameters, to stabilize the dynamics. (Stabilization through large variations is less interesting, because one can usually find a stable region for some range of control parameters.)
The first algorithm to stabilize a system with chaotic dynamics was given by Ott, Grebogi, and Yorke in 1990 (Ott et al., 1990). Their algorithm drew on several key features of chaotic motion: First, once initial transients have passed, the system's motion is on an attractor, a set of points in phase space. Second, the motion is ergodic on that attractor. This means that the system revisits arbitrarily close to a given point arbitrarily many times. Third, embedded in each attractor is a dense set of unstable periodic orbits. Without getting into the types of orbits, we will think about the simplest kind, a fixed point. In that case, each attractor will contain an unstable fixed point.

The idea of the Ott-Grebogi-Yorke algorithm is to stabilize motion about an unstable orbit (or fixed point) x* by waiting until the system brings the state vector x near to x* (Grebogi and Lai, 1999). By the ergodicity properties of chaotic motion, this will always happen if one
waits long enough. (The closer one wants x to approach x*, the longer one has to wait.) Once this happens, the control algorithm is activated. If the dynamics are given by x_{n+1} = f(λ, x_n) (assume discrete dynamics for simplicity), then one may use the proximity of x_n to x* to linearize about the fixed point. The linearized dynamics are

x_{n+1} − x* ≈ (∂f/∂x)(x_n − x*) + (∂f/∂λ)Δλ_n.    (6.1)

One then changes λ in Eq. (6.1) so that x_{n+1} = x*, i.e., one sets

Δλ_n = − (∂f/∂x)(x_n − x*) / (∂f/∂λ).    (6.2)

Of course, choosing Δλ_n in accordance with Eq. (6.2) will not make x_{n+1} precisely equal to x*, since that choice is based upon the approximation in Eq. (6.1). But if the original distance x_n − x* is small enough, the system will quickly converge to x*.

We can illustrate the Ott-Grebogi-Yorke algorithm on a simple chaotic dynamical system, the logistic map

x_{n+1} = λx_n(1 − x_n),    (6.3)

which is a standard example of a simple dynamical map that shows complex behavior as the control parameter λ is adjusted (Strogatz, 1994). Here, at λ = 3.8, the system is normally chaotic. The goal will be to stabilize the system's motion about the normally unstable fixed point, given by x* = 1 − 1/λ ≈ 0.74. The Ott-Grebogi-Yorke control algorithm is turned on at t = 0; see Fig. 31(a). One waits until the natural motion of the system brings it within a predefined tolerance x* ± ε (here, ε = 0.02). In the particular run shown in Fig. 31(a), this happens at time step 24. The Ott-Grebogi-Yorke algorithm is then activated, as shown in Fig. 31(b). Note that the system state x_24 is just slightly above the set point. The idea, as discussed above, is to change λ to λ_n = λ + Δλ_n so as to position the fixed point of the modified dynamical system (the logistic map with control parameter λ_n) above the point x_24, so that the repulsive dynamics of the modified system pushes the point towards x*. Equation (6.2) shows that we set

Δλ_n = − [λ(1 − 2x*) / x*(1 − x*)] (x_n − x*) ≈ 9.3(x_n − x*).    (6.4)

FIG. 31. (Color in online edition) Illustration of the OGY chaos-control algorithm. (a) Time series of x_n. Control is initiated at t = 0 about the unstable fixed point x* (horizontal solid line). Dashed lines show the tolerance range ±ε within which the OGY algorithm is active. Here, the algorithm activates at t = 24. (b) Control parameter λ_n, as adjusted by the OGY algorithm to stabilize the fixed point.
The values of λ_n are shown in Fig. 31(b). Note how λ_24 shoots up in response to x_24, which represents the first time the system has entered the tolerance zone x* ± ε. The system quickly settles down and stabilizes about the desired set point x*. In most applications, there would be noise present in addition to the deterministic dynamics, in which case the algorithm would lead to small fluctuations about the fixed point for small noise. The occasional large kick may cause the system to lose control. One would then have to wait for the system to re-enter the tolerance zone to reactivate control.
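The procedure above can be condensed into a few lines. The sketch below (plain Python; λ = 3.8 and ε = 0.02 follow the text's example, while the initial condition and iteration count are my own choices) waits for the chaotic orbit to enter the tolerance zone and then applies Eq. (6.2) at every subsequent step:

```python
# Ott-Grebogi-Yorke control of the logistic map about its unstable
# fixed point x* = 1 - 1/lambda, using the perturbation of Eq. (6.2).
lam, eps = 3.8, 0.02
xstar = 1.0 - 1.0 / lam            # unstable fixed point, ~0.74

x = 0.1                            # arbitrary initial condition
active_at = None
for n in range(2000):
    dlam = 0.0
    if abs(x - xstar) < eps:       # orbit has entered the tolerance zone
        if active_at is None:
            active_at = n          # record when control first engages
        # Eq. (6.2) for the logistic map: df/dx = lam (1 - 2 x*),
        # df/dlam = x*(1 - x*); the prefactor evaluates to ~9.3
        dlam = -lam * (1.0 - 2.0 * xstar) / (xstar * (1.0 - xstar)) * (x - xstar)
    x = (lam + dlam) * x * (1.0 - x)
print(active_at, x)   # control engages after a short wait; x converges to x*
```

The perturbation vanishes as x approaches x*, so the stabilized orbit requires only infinitesimal control effort once captured; this is the "nudging" character of the algorithm discussed below.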
Can one do better than merely waiting for the system
to wander near the set point? In fact, small “targeted”
corrections to the motion can drastically reduce the
waiting time. The idea is to extrapolate backwards from
the setpoint to find a point near the current state of the
system. Then one works out the perturbation needed to
approach the target. Small control-parameter perturbations at the right time can have a large effect. In a real-life application, NASA scientists have used such ideas to send spacecraft on remote missions throughout the solar system, using small thrusts to perturb the motion in the planets' and sun's gravitational fields. These and other aspects of targeting are discussed in Shinbrot (1999).
The Ott-Grebogi-Yorke algorithm uses the system’s
dynamics, notably the ergodic property of chaos, where
the dynamical system generically passes arbitrarily close
to every point in phase space ("state space"). In contrast,
most control algorithms “fight the system,” in that they
change the dynamical system by overpowering it. In the
Ott-Grebogi-Yorke algorithm, only small perturbations
to a control parameter are required. One nudges the
system in just the right way at just the right point at just
the right moment, to persuade it to behave as desired.
One limitation of the Ott-Grebogi-Yorke algorithm is
that one cannot choose the set point arbitrarily. One can
stabilize the system about an unstable fixed point, but
that unstable fixed point must first exist in the ordinary
dynamical system. This limitation is often not too important practically, because there are lots of unstable fixed
points and periodic orbits to use. Also, with targeting,
the time to find the required region of phase space is
often short. But conceptually, these limitations do not
exist in classical algorithms.
Closely related to chaos control is the notion of synchronization, by weak coupling, of two nonlinear systems (generally oscillators, perhaps chaotic) (Pikovsky et al., 2001). The classic example, discovered by Huygens in the 17th century, is the tendency of two pendulum clocks hanging on the same wall to synchronize (in antiphase). One can think of this situation as the nonlinear generalization of the observers discussed above in Sec. III.C. There, the goal was to construct a model of an observed (linear) dynamical system, where the observations were used to couple (synchronize) the model to the natural system. One then used the deduced internal state vector as the basis for feedback. This is clearly the same situation discussed by Pikovsky et al. (2001), where one dynamical system exists in nature, the other as a computer model, and the observation provides the weak coupling. Again, we have a case where there are extensive, nonoverlapping discussions of essentially the same phenomena in the physics and engineering literatures.
The seeming dichotomy between chaotic control and classical control algorithms is not as sharp as I have made it seem in the last paragraph. The Ott-Grebogi-Yorke algorithm was the first in a long series of algorithms for chaotic control, and many of them blend elements of classical algorithms (Schuster, 1999). On the other side, some of the classical control algorithms have at least some of the flavor of the Ott-Grebogi-Yorke algorithm. For example, in the example of the unstable Stealth Fighter plane referred to above, instability is actually a desirable feature, as it allows the plane to respond much faster to control signals than a stable system would. Again, relatively small control signals (wing-flap movements, etc.) can produce large effects in the plane's motion, by working about unstable equilibria.
VII. Applications to Biological Systems

With these limitations in mind, we can trace two additional approaches to understanding the role of feedback in biology. The first course is to find simple enough settings where standard feedback ideas may safely be employed. These are largely found in the biochemistry of enzyme-catalyzed reactions and, more recently, in gene-expression models. The second course is to begin to tackle more complex situations. Here, the main innovation is that instead of a simple feedback loop, one has a complex network of interactions, and one has to consider both the geometry and topology of that network. In the following sections, we will first discuss a phenomenological example, then some simple biochemical feedback loops, and finally give a brief overview of current efforts to understand networks of interacting genes and proteins.
The reader will notice that the biological examples to be discussed lack any man-made element. Our previous discussions all started from a natural physical system and added a deliberate coupling between variables, creating a feedback loop. In the natural system, a variable Y depends on X; then one creates an additional dependence of X on Y. (For example, in the vibration-isolation device described in Sec. V.C.1, the accelerometer signal depends on the forces exerted on it by the Earth. The feedback apparatus then creates a way for the accelerometer signal to influence those forces, via the piezoelement actuator.) In the biological systems below, the couplings between Y and X and then X and Y are usually both "natural." But insofar as the two couplings may be separately identified, feedback will be a useful way of looking at the dynamics, whether the couplings are created by man or by nature.
A. Physiological example: The pupil light reflex
From the beginnings of control theory, there have
been attempts to make connections to biological systems
(Wiener, 1961). Indeed, one does not have to look far to
find numerous examples of regulation, or “homeostasis.” Body temperature is regulated to the point where
variations of one degree imply sickness and of ten degrees, death. When we are hot, we sweat and cool by
evaporation. When we are cold, we shiver and increase
the circulation of warm blood to cool areas. Similarly,
osmotic pressure, pH, the size of the eye’s pupils, the
vibration of hair cells in the inner ear—all are tightly
controlled. Over the years, there have been numerous
attempts to model such processes at the “system” or
physiological level (Keener and Sneyd, 1998). In many
cases, it has been possible to come up with models that
mimic, at least partly, the observed behavior. Because
the models describe high-level phenomena, they are
largely phenomenological. That is, they involve somewhat arbitrarily chosen elements that have the right behavior but may be only loosely tied to the underlying
physiological elements. In addition, one commonly finds
that the more closely one studies a phenomenon, the
more baroquely complex become the models needed.
The eye is an amazing organ. It can respond with reasonable sensitivity over a wide range of light levels, from single photons to bright sunlight. The eye uses a mix of feedback methods, the most important of which is adaptation, as summarized by the empirical response law of Weber: the eye's sensitivity is inversely proportional to the background light level (Keener and Sneyd, 1998).
Another mechanism that is particularly important at
the highest light levels is the pupil light reflex. When
light levels are high, the pupil contracts, reducing the
light flux onto the retina. The size of the pupil is controlled by circularly arranged constricting muscles,
which are activated and inhibited 共left to relax兲 by control signals from the brain. More light causes activation
of the constricting muscles, which shrinks the pupil area
and limits the light flux at the retina.
FIG. 32. Light beams and pupil sizes. Light beam is shaded light; pupil is shaded dark. (a) The ordinary case: the beam is larger than the pupil. (b) Breaking the feedback loop: the beam is smaller than the pupil. (c) Inducing pupil oscillations: a small beam falls on the edge of the pupil.

The pupil light reflex is a particularly attractive example of physiological feedback. First, the state variable, the retinal light flux, is an intuitive quantity, and the actuator mechanism, pupil size, is easy to understand. Second, using a trick, one can "break open" the feedback loop. Ordinarily, the incident light flux covers a larger area than the pupil, so that adjustments to the
pupil size adjust the retinal light flux [Fig. 32(a)]. If, however, one uses a beam of light that is narrower than the minimum pupil size, adjusting the area will not change the retinal light flux [Fig. 32(b)]. The feedback loop is broken, and one can then study how variations in the light intensity change the pupil size. For example, one could impose sinusoidal intensity variations in order to measure the transfer function of the pupil response. While we do not have space to describe that transfer function fully, its most important feature is that the pupil response to a changing light stimulus is delayed by about 300 ms. This leads to an amusing consequence. If a narrow beam of light shines on the edge of the pupil [Fig. 32(c)], the pupil will begin to contract. Because of the delay, it will continue to contract after it is small enough that the beam no longer enters the pupil. After the delay, the eye realizes there is no light and starts to enlarge the pupil. This continues, because of the delay, somewhat past the moment when the pupil is large enough to admit light to the retina. The system then begins to contract, and thus continues on, in steady oscillation. This really happens!
The ability to open the feedback loop by using a narrow stimulus light beam also allows one to substitute an
electronic feedback for the natural one. One measures
the pupil-size variations and adjusts the light intensity
electronically, according to an algorithm chosen by the
experimentalist. In effect, one creates a primitive “cyborg,” melding man and machine. If one implements ordinary proportional feedback, one regulates artificially
the light intensity to a constant value. The system goes unstable and the pupil oscillates when the gain is too high.

Here, we shall account qualitatively for these observations using a model describing the pupil light reflex that is due originally to Longtin and Milton (1989a, 1989b). Our discussion is a simplified version of that of Keener and Sneyd (1998).
We start by relating the retinal light flux φ to the light intensity I and pupil area A:

φ = IA.    (7.1)

We next relate the muscular activity x to the rate of arriving action potentials (the signals that stimulate the muscles):

τ_x dx/dt + x = E(t).    (7.2)
Here, we have taken simple first-order dynamics with a time constant τ_x. The driving E(t) is the rate of arriving action potentials:

E(t) = γF[ln(φ(t − Δt)/φ̄)],    (7.3)

where γ is a phenomenological rate constant and the response delay is modeled by evaluating the retinal light flux a time Δt in the past [i.e., by φ(t − Δt)]. The function F(x) = x for x ≥ 0 and 0 for x < 0, so that φ̄ acts as a threshold retinal light level. In other words, muscles are activated only when there is sufficient light. The logarithm incorporates Weber's law, mentioned above. Equation (7.2) illustrates the "phenomenological" character of such large-scale models, where the form of the equation, in particular of E(t), is chosen to be in qualitative agreement with empirical observations.
In order to close the model, we need to relate the
pupil area A to the muscular activity x. High activity
should give a small pupil area. Again, one uses an empirical, phenomenological form:
A(x) = Amin + (Amax − Amin) θ^n/(x^n + θ^n),    (7.4)

which smoothly interpolates between Amax at zero activity (x = 0) and Amin at infinite activity. The parameters θ and n must be fit to experiment. Putting Eqs. (7.1)–(7.4) together, we have

τ_x dx/dt + x = γF[ln(I(t − Δt)A[x(t − Δt)]/φ̄)]    (7.5)

≡ g(x(t − Δt), I(t − Δt)).    (7.6)
Because A(x) is a decreasing function of x, one can always find a steady-state solution to Eq. (7.6) when the light intensity I is constant. Let us call this solution x*, which satisfies x* = g(x*, I). We linearize the equations
about x*, defining x共t兲 = x* + X共t兲. This gives
    τₓ Ẋ + X = −Kₚ X(t − Δt),    (7.7)
where Kₚ = −gₓ(x*, I) can be viewed as a proportional feedback gain. Equation (7.7) is nothing more than the first-order system with sensor lag and proportional feedback that we considered previously in Sec. IV.C [see Eqs. (4.8) and (4.9)]. The stability may be analyzed analogously; one again finds that for high-enough gain Kₚ, the system begins to oscillate spontaneously. This accords with the cyborg experiments using artificial gain, and also with the use of light falling on the pupil edge to excite oscillations. In the latter case, the feedback becomes nonlinear, since the coupling changes discontinuously when the area reaches a critical value A*, which divides the region where changes in A do or do not affect the retinal light flux. Crudely, this functions as a locally infinite gain, which is unstable.
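The onset of spontaneous oscillation in Eq. (7.7) can be checked numerically. The sketch below uses illustrative, not physiological, parameter values (τₓ = Δt = 1) and integrates the delayed linear equation with a simple Euler scheme: below the instability threshold the perturbation decays back to the fixed point, while at high gain it oscillates with growing amplitude.

```python
import numpy as np

def simulate(Kp, tau=1.0, delay=1.0, dt=1e-3, T=60.0):
    """Euler integration of  tau * X'(t) + X(t) = -Kp * X(t - delay),
    the linearized pupil equation (7.7), with a constant history X = 0.1."""
    n, d = int(T / dt), int(delay / dt)
    X = np.zeros(n)
    X[:d + 1] = 0.1                      # initial perturbation away from x*
    for i in range(d, n - 1):
        X[i + 1] = X[i] + dt * (-X[i] - Kp * X[i - d]) / tau
    return X

# For tau = delay = 1, the Hopf threshold is at Kp ~ 2.26:
print(np.abs(simulate(Kp=0.5)[-1000:]).max())   # decays: tiny residual
print(np.abs(simulate(Kp=5.0)[-1000:]).max())   # growing oscillation
```

The threshold quoted in the comment follows from the characteristic equation s + 1 + Kₚe⁻ˢ = 0: at the Hopf point, Kₚ = √(1 + ω²) with tan ω = −ω, giving Kₚ ≈ 2.26.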
Notice that our modeling was in two steps. In the first,
we formulated nonlinear equations. We related the
muscle activity to the light flux, the light flux to the pupil
area, and the pupil area to the muscle activity. This
“loop” of relationships is the nonlinear generalization to
our previous discussions of feedback loops, which applied to linear equations. Because the system is entirely
a “natural” one, it may not be immediately obvious how
to identify the “system,” “sensor,” and “controller.”
Here, the system sets the pupil size, the sensor is the
retina, and the controller is presumably circuitry in the
brain (or an explicit computer algorithm in the cyborg setup described above). In general, one is confronted
with a system of coupled variables. Whether they neatly
decouple into traditional feedback elements or not will
depend on the particular situation.
In the second step, we linearized about a fixed point,
coming up with equations that could be analyzed using
block-flow diagrams. This strategy is not infallible. For
example, there may not be a steady state to linearize
about. Still, it is one of the handier approaches for dealing with nonlinear systems. And in biology, strong nonlinearities are the rule.
In the example of pupil-light-flux control, the time delay in reacting to light changes plays a key role. Time
delays are present in most physiological responses, and
people compensate for such time delays by using feedforward algorithms. In a very interesting recent experiment on reaction responses, Ishida and Sawada (2004) show that people in fact slightly overcompensate; they suggest that being "proactive" minimizes transient errors while tracking erratic motion.
B. Fundamental mechanisms
While feedback is present in many macroscopic,
physiological processes, it also plays a role in more fundamental, microscopic settings. The traditional viewpoint focuses on enzyme-catalyzed biochemical reactions, where the rate of production of some desired
molecule is greatly accelerated by a catalyst (often a protein, or enzyme). By inhibiting or enhancing the production of the enzyme, one gains great control over the
quantity of product. As we shall discuss below, in a cell,
there are thousands of such reactions, all coupled together in a "genetic network." The theoretical analysis of simple networks has a long history (Wolf and Eeckman, 1998). What is exciting is that recently it has become possible to create, by genetic-engineering techniques, simple artificial genetic networks that illustrate
basic feedback mechanisms. We shall give two examples.
The first, due to Becskei and Serrano (2000) [cf. Gardner and Collins (2000)], illustrates how negative feedback can limit variability in gene expression, providing robustness against changing external conditions. The second shows how positive feedback can be used to switch a gene on and off (Gardner et al., 2000; Kaern et al., 2003).
1. Negative feedback example
We begin with an example of negative feedback.
Becskei and Serrano constructed a simple genetic circuit in which a gene expresses a protein that actively represses its own production. In their experiment, the protein "tetracycline repressor" (TetR) was fused to a fluorescent protein, so that its expression could be monitored optically. (A cell's fluorescence is proportional to the number of TetR molecules present.) In order for the gene to be expressed, RNA polymerase (RNAP) must bind just "upstream" of the DNA coding for the gene. However, the TetR protein also binds to
the DNA in competition with the RNAP. If the TetR is
bound, then RNAP cannot bind, and TetR is not produced. The concentration of TetR, R, is governed by an
equation of the form

    dR/dt = α/(1 + KR) − λR,    (7.8)
where α is the rate of production of TetR in the absence of the feedback mechanism. (The rate α depends on the concentration of RNAP, which produces mRNA, which leads to the production of the actual protein TetR. All of these dynamics are assumed to be fast compared to the production rates of TetR.) In Eq. (7.8), λ is the rate of degradation of TetR in the cell, and K is related to the binding affinity of TetR to the upstream DNA site. If K > 0, the feedback is negative, since increasing concentrations of TetR suppress the production rate of the enzyme. (K = 0 implies no feedback.) See Becskei and Serrano (2000) for the full equations, and Wolf and Eeckman (1998) for the thermodynamic background.
[Essentially, the KR term comes from considering the relative Gibbs free energies of the DNA when TetR is bound or not. Such kinetic laws are variants of the Michaelis-Menten law for enzyme kinetics (Keener and Sneyd, 1998).]
The main result of Becskei and Serrano is experimental evidence that negative feedback reduces the variability of gene expression. (They disable the feedback both by modifying the TetR protein so that it does not bind to the DNA site and by introducing additional molecules that bind to the site but do not interfere with the RNAP.) From our point of view, it is easy to see where
such a behavior comes from. Intuitively, adding negative
feedback speeds the system’s response to perturbations.
The system will spend more time in the unperturbed
state than without feedback, reducing variation.
To see this in more detail, we solve first for the steady-state production level of TetR: setting the left-hand side of Eq. (7.8) to zero, we find

    R* = −1/(2K) + √[(1/(2K))² + α/(λK)],    (7.9)
which decreases from α/λ → 0 when K goes from 0 → ∞. Small perturbations to the cell then lead to small variations in R(t) = R* + r(t), which obeys the linear equation
    ṙ = −λ* r,   λ* = λ + αK/(1 + KR*)².    (7.10)
The decay rate λ* goes from λ → 2λ for K going from 0 → ∞. To model fluctuation effects, one adds a stochastic term ξ(t) to Eq. (7.10), with ⟨ξ⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = Λ²δ(t − t′) (white noise). Then the fluctuations of r obey what is essentially the equipartition theorem:

    σ_r ≡ √⟨r²⟩ = Λ/√(2λ*).    (7.11)
In Eq. (7.11), we see explicitly that the negative feedback loop, which increases the value of λ*, reduces the fluctuation of the system in response to a fixed level of noise. This is what Becskei and Serrano observed in their experiments. In the same system, Rosenfeld et al. (2002) later showed directly that the negative feedback indeed leads to faster response times.
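These results are easy to verify numerically. The sketch below (with arbitrary illustrative rates α, λ, K) checks the fixed point (7.9) and decay rate (7.10), then integrates the noisy linearized dynamics with an Euler–Maruyama scheme and compares the measured spread of r with Eq. (7.11).

```python
import numpy as np

alpha, lam, K = 10.0, 1.0, 2.0        # illustrative rates, arbitrary units

# Fixed point R* of Eq. (7.8), via Eq. (7.9), and the linearized decay rate:
R_star = -1/(2*K) + np.sqrt((1/(2*K))**2 + alpha/(lam*K))
assert abs(alpha/(1 + K*R_star) - lam*R_star) < 1e-12  # really a fixed point
lam_star = lam + alpha*K/(1 + K*R_star)**2             # between lam and 2*lam

# Euler-Maruyama for  r' = -lam* r + xi(t),  <xi(t) xi(t')> = Lam^2 delta(t-t'):
rng = np.random.default_rng(0)
Lam, dt, nsteps = 0.2, 1e-2, 200_000
kicks = Lam * np.sqrt(dt) * rng.standard_normal(nsteps)
r, acc, count = 0.0, 0.0, 0
for i in range(nsteps):
    r += -lam_star * r * dt + kicks[i]
    if i > nsteps // 10:              # discard the initial transient
        acc += r * r
        count += 1

print(np.sqrt(acc / count), Lam / np.sqrt(2 * lam_star))  # agree to a few %
```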
2. Positive feedback example
In biology, and elsewhere, one use of positive feedback is in constructing a switch that can go between two
separate states, either one of which is locally stable to
small perturbations. A large-enough perturbation, however, will make the system switch from one state to the
other. In electronics, such ideas are the basis of the flip-flop circuit, which toggles between two states (conventionally known as 0 and 1) and forms the basis of digital memories. For essentially topological reasons, in phase space, between any two stable states must lie an unstable intermediate state [see Fig. 33(b)]. Near the intermediate state, the system shows a positive feedback that drives the system into either of the adjoining stable states. The statement that "positive feedback leads to a switch or to oscillations" comes from the common situation where, in the absence of feedback, there is only one stable state. Adding positive feedback then converts the stable to an unstable state either via a pitchfork (or transcritical) bifurcation (with two stable states as the outcome) or via a Hopf bifurcation (limit-cycle oscillator).
In biology, positive feedback can lead to a situation
where the expression of one gene inhibits the expression
of a second gene, and vice versa, an idea that goes back at least to Monod and Jacob (1961). A simple model for the concentration dynamics of the two proteins u and v is
    u̇ = α₁/(1 + vⁿ) − u,
    v̇ = α₂/(1 + uⁿ) − v,    (7.12)
where α₁ and α₂ are the rates of production of u and v in the absence of a repressor and where the "cooperativity exponent" n = 2. Equation (7.12) resembles Eq. (7.8),
with some important differences. First, as mentioned
above, increased production of u inhibits the production
FIG. 33. (Color in online edition) Dynamics of a genetic toggle switch. (a) Dynamics in the u-v concentration plane (arrows), with gray scale representing the magnitude of the vector field. (b) Equivalent dynamics on the center manifold. The three equilibrium positions are represented by the two minima and the unstable intermediate state.
of v, and vice versa. Second, the inhibitor binding, in
both cases, is assumed to show cooperativity. The exponents n = 2 mean that two molecules of u or v bind in rapid succession to the DNA. [The specific value n = 2 is not important, but at least one of the exponents needs to be larger than one to have a switch of the kind discussed below (Cherry and Adler, 2000). Greater cooperativity (larger n) enlarges the region of α₁-α₂ parameter space where bistability exists.]
By setting u̇ = v̇ = 0, one can easily see that, depending on the values of α₁ and α₂, there can be either one or three stationary solutions. For example, if α₁ = α₂ = α ≫ 1, then the three solutions are (1) u ≈ α, v ≈ 1/α; (2) v ≈ α, u ≈ 1/α; and (3) u ≈ v ≈ α^(1/3). Solutions (1) and (2) are stable, while solution (3) is a saddle, with one stable and one unstable eigenvector. These three solutions are illustrated in Fig. 33(a), where the two stable solutions are denoted by circles and the unstable solution by a cross. The vectors illustrate the local u-v dynamics. An
arbitrary initial condition will relax quickly onto a "center manifold"—here, the one-dimensional dashed line connecting the three solutions. The dynamics will then occur along this curved line in phase space. One can derive equations of motion for the dynamics along this center manifold: the system behaves like a particle in a double-well potential; cf. Fig. 33(b).
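The three stationary solutions of Eq. (7.12) and their stability are easy to find numerically. A minimal sketch (for the illustrative choice α₁ = α₂ = 10, n = 2) applies Newton's method from three starting guesses and classifies each fixed point by the eigenvalues of the Jacobian:

```python
import numpy as np

a1 = a2 = 10.0                         # production rates; cooperativity n = 2
n = 2

def f(w):                              # right-hand side of Eq. (7.12)
    u, v = w
    return np.array([a1/(1 + v**n) - u, a2/(1 + u**n) - v])

def jac(w):                            # Jacobian of f
    u, v = w
    return np.array([[-1.0, -a1*n*v**(n-1)/(1 + v**n)**2],
                     [-a2*n*u**(n-1)/(1 + u**n)**2, -1.0]])

def newton(w, steps=50):
    w = np.array(w, dtype=float)
    for _ in range(steps):
        w = w - np.linalg.solve(jac(w), f(w))
    return w

for guess in [(10.0, 0.1), (0.1, 10.0), (2.0, 2.0)]:
    w = newton(guess)
    eigs = np.linalg.eigvals(jac(w))
    print(w, "stable" if eigs.real.max() < 0 else "saddle")
```

For these parameters the symmetric solution happens to be exactly u = v = 2 (since 2³ + 2 = 10), a saddle; the two asymmetric solutions near (9.9, 0.10) and (0.10, 9.9) are stable, as the text describes.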
In the work by Gardner et al. (2000), the authors constructed a simple artificial genetic circuit (within the bacterium E. coli) that had the mutual-repression mechanism described above and were able to observe
bistability. Examples from real biological systems are
more complicated. For example, the lac operon is the
prototype of a genetic switch, having been studied for
some 50 years. (The details are not relevant here: briefly, E. coli can use glucose or lactose as a food source. Normally, it does not produce the enzyme to digest lactose, but the presence of a small amount of lactose switches on a gene that produces the necessary enzyme.) Even
this prototypical example can be complicated. For example, Keener and Sneyd (1998) cite studies that begin with simplified dynamics of six coupled equations and then argue that these equations can be approximated by three others. Vilar et al. (2003) have argued that such simplified models need to incorporate stochastic fluctuations (because of the small number of molecules of the relevant species in each cell) to agree with observations.
Very recently, Ozbudak et al. (2004) have explored the phase diagram of the lac operon in E. coli (modified by fusing fluorescent "reporter" proteins to the genome). They make quantitative contact with the kinds of genetic-switch models discussed here.
Rather than entering into the details of how best to
model a particular feedback mechanism, we want to emphasize merely that many, if not all, basic cell functions
depend on interconnected positive and negative feedback loops. Indeed, it seems likely that such feedbacks
are necessary in living organisms. For catalogs of such mechanisms, see, e.g., Freeman (2000) and Keener and Sneyd (1998); for an analysis of elementary biochemical mechanisms (amplifications, etc.) that adopts an explicit engineering perspective, see Detwiler et al. (2000).
C. Network example
In the previous section, we saw that both positive and
negative feedback loops are present in basic biochemical
systems. In the cell, however, vast numbers of chemical
species are present, constantly being synthesized, degraded, and otherwise interacting with each other. Thus
instead of a simple, isolated system modified by a single
feedback loop, one has many interacting systems, connected to each other in a network. In such a network,
the notion of a loop can be generalized to be the set of interconnections between nodes (the individual proteins), with positive and negative values assigned to each interconnection, depending on whether the presence of protein 1 increases or decreases the concentration of protein 2. (Here, "1" and "2" represent arbitrary proteins in the network.) The structure of such networks is a
topic of intense current interest (Albert and Barabási, 2002; Newman, 2003). Much attention has been focused on the statistical properties of large networks, for example, on the distribution of the number of connections k a node has with its neighbors. Random networks have a distribution P(k) peaked about an average number of interconnections ⟨k⟩ (Poisson distribution), while for scale-free networks, there is a power-law distribution P(k) ~ k^(−γ), with γ typically having a value of 2–3. Scale-free networks have interesting properties and are found in many places, including communications settings (e.g.,
webpages and their links to each other), social interactions (e.g., collaborations among individuals, whether as actors in films, sexual partners, or scientific co-authors of papers), and in biological networks, as we will discuss more below (Albert and Barabási, 2002). Because of the power-law distribution of connections, a few important nodes ("hubs") will have many more than the average number of interconnections and play a central role in the network. They serve, first of all, to create the "small-world" phenomenon, where the number of steps needed to go between any two nodes increases only logarithmically with the number of nodes. They also give robustness to the structure of the networks: removing nodes other than one of the rare hubs will not substantially affect the connectivity of the network. But if scale-free networks are robust to the destruction of random nodes, they are fragile and sensitive to the destruction of hubs.
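The contrast between the two degree distributions is easy to see numerically. The sketch below builds a small random graph of the Erdős–Rényi type (illustrative sizes, no graph library needed): each possible edge is present independently with probability p, so a node's degree k is binomial with mean ⟨k⟩ = (N − 1)p, essentially a Poisson distribution for small p. A Poisson distribution has variance equal to its mean, unlike the broad power law P(k) ~ k^(−γ) of a scale-free network.

```python
import numpy as np

# Degrees in an Erdos-Renyi random graph G(N, p):
rng = np.random.default_rng(1)
N, p = 2000, 0.005
upper = np.triu(rng.random((N, N)) < p, k=1)  # each pair linked with prob. p
adj = upper | upper.T                         # symmetric, no self-loops
k = adj.sum(axis=1)                           # degree of each node

print(k.mean())            # close to (N - 1) p = 10
print(k.var() / k.mean())  # ~1, the Poisson signature; scale-free graphs are far broader
```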
Much work in biology has been devoted to the identification of different types of networks, most notably
metabolic and protein interaction networks (Jeong et al., 2000). (Both have proteins as nodes. In the former, the interactions are chemical; in the latter, physical.) An equally important type of network is that of gene regulatory networks, which govern the production of the proteins involved in the metabolic and interaction networks (Maslov and Sneppen, 2002). This is in addition to
more physical networks such as the networks of neurons
in the brain and nervous system and the network of
blood vessels. [For a review of all of these, see Newman (2003).]
The statistical point of view is not the only way to
understand network structure. A number of authors
have focused on the “modular” aspects of complex biological structures, with the goal of identifying the structures and interactions between relatively independent
elements (Hartwell et al., 1999). This has led to the search for "network motifs" (Shen-Orr et al., 2002; Mangan and Alon, 2003), which are relatively simple clusters of nodes that behave as individual elements in a larger network.
If complex networks are generically present in biological systems, one might suspect that they confer some
overall benefit to the host organisms, and one such hypothesized benefit that has lately received much attention is the notion of robustness. In influential work,
Leibler and collaborators have looked at the relatively
simple network involved in chemotaxis in the bacterium
E. coli and shown, both in a model (Barkai and Leibler, 1997) and in experiment (Alon et al., 1999), that certain
properties show a remarkable robustness in the face of
large concentration variations of elements within the
cell. Chemotaxis in E. coli arises by controlling the time
the bacterium spends in two states, “smooth runs” and
“tumbling.” During smooth runs, the cell swims in a
relatively straight line. It then stops and begins tumbling, a motion that leads to a random reorientation.
Then the cell swims for another period of time. The cell
carries receptors for various chemical attractants. If the
level of an attractant is rising, the cell will tend to swim
longer before tumbling. In other words, by reducing the
John Bechhoefer: Feedback for physicists: A tutorial essay on control
tumbling frequency (the rate of tumbling events), the cell will tend to swim up the spatial gradient of attractant.
Thus, chemotaxis occurs via a modulation of tumbling
frequency. The kind of robustness explored by Leibler
and collaborators looks at the adaptation of tumbling
frequency to various changes. For example, the cell responds to gradients of chemical attractants in a way that
is nearly independent of their absolute concentration.
More precisely, the tumbling frequency should, after a
transient, return to its original value after a sudden increase in the overall concentration of an attractant.
As Barkai and Leibler (1997) have emphasized, one
can imagine two ways in which perfect adaptation can be
obtained. One way involves a model that has fine-tuned
parameter values that happen to lead to a canceling out
of the effects of a concentration increase. The problem
with such a model is that it implies that adaptation
would be a fragile phenomenon, easily disrupted by
changes in any of the parameters of the system, which
does not seem to be the case experimentally (Alon et al., 1999). Alternatively, the robustness could be a property
of the network itself. We have already seen examples where the level of an output returns to its original value in the face of a step-change disturbance. These all trace back to the use of integral feedback, and, indeed, Yi et al. (2000) have shown not
only that integral feedback is present implicitly in the
model of Barkai and Leibler but also that such feedback
must be present in the dynamics. Rather than enter into the details of the chemotaxis model (even the "simplified" version of Yi et al. has 14 coupled equations), we sketch the proof that integral feedback must be present in order to have robust adaptation. We follow the appendix of Yi et al. (2000).
Consider first a linear SISO model

    x⃗̇ = Ã x⃗ + b⃗ u,   y = c⃗ᵀ x⃗ + du,    (7.13)

where x⃗ is an n-element internal state vector, y is the single output, and u is the single input. At steady state, x⃗̇ = 0, so that y = (d − cA⁻¹b)u. (We drop tildes and vector symbols for simplicity.) Then for constant input u, the output y = 0 if and only if either c = d = 0 (trivial case) or
    det [ A  b
          c  d ] = 0,    (7.14)

where the matrix in Eq. (7.14) has (n + 1) × (n + 1) elements.
If the determinant is zero, then there is a vector k such that kᵀ[A b] = [c d]. Defining z = kᵀx, we have ż = kᵀẋ = kᵀ(Ax + bu) = cx + du = y. Thus, if y = 0 for all parameter variations (here, these would be variations in the elements of A, b, c, or d), then ż = y, a condition that is equivalent to having integral feedback be part of the structure of the system itself.
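A quick numerical check of the determinant condition (7.14), for a randomly chosen stable system (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # a stable-ish system matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)
d = c @ np.linalg.solve(A, b)     # choose d so that d - c A^{-1} b = 0

# The bordered (n+1) x (n+1) matrix of Eq. (7.14) is then singular...
M = np.block([[A, b[:, None]], [c[None, :], np.array([[d]])]])
print(np.linalg.det(M))           # ~ 0

# ...and the steady-state output vanishes for any constant input u:
u = 2.0
x_ss = -np.linalg.solve(A, b * u)  # from A x + b u = 0
y_ss = c @ x_ss + d * u
print(y_ss)                        # ~ 0, independent of u
```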
As Yi et al. 共2000兲 comment and we have implicitly
seen in this paper, the requirement of having integral
feedback in order to reject step disturbances of parameters is a special case of the “internal model principle”
(IMP), which states that the controller must contain a model of the external signal in order to track robustly.33 This is another motivation for the internal model control discussed in Sec. V.E. Finally, we note that the restriction to linear systems is not necessary (the biochemical models are all nonlinear). For example, consider the
nonlinear dynamical system ẋ = f(x, u), with x again an n-dimensional state vector and f a nonlinear function of x and the input u. If we want the output y = x (for simplicity) to track a constant state r, we can set u = ∫ᵗ(r − x)dt′ (integral feedback). Then differentiating ẋ = f shows that x = r is a steady-state solution to the modified dynamics. One caveat about integral feedback is that the modified dynamical system must be stable. This must be verified case by case.
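This robustness is easy to demonstrate in simulation. The sketch below (an illustrative scalar example, not the chemotaxis model itself) applies integral feedback u = ∫(r − x)dt′ to ẋ = −ax + u + d: the steady state x = r is reached regardless of the plant parameter a or the constant disturbance d, exactly because the integrator can only come to rest when x = r.

```python
def run(a, d, r=1.0, dt=1e-3, T=40.0):
    """Euler simulation of x' = -a x + u + d with integral feedback u' = r - x."""
    x, u = 0.0, 0.0
    for _ in range(int(T / dt)):
        x += dt * (-a * x + u + d)
        u += dt * (r - x)
    return x

for a, d in [(1.0, 0.0), (2.0, 0.5), (0.5, -1.0)]:
    print(run(a, d))   # -> 1.0 (approximately) in every case
```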
The chemotaxis example discussed above is just one
of many instances where a complicated network of feedback interactions plays an important biological role. Another case considers neural networks, where the firings
of one neuron stimulate or inhibit other connected neurons. Doiron et al. 共2003兲 have recently shown that negative feedback plays an essential role in the neural response of electric fish to communication and prey
signals. While the organism communicates with others, neurons in the sensory system studied switch to a globally synchronized oscillatory state that is maintained by negative feedback. At other times, the firings are not synchronized.
At this point, it is perhaps proper to note some of the
many areas that, for reasons of space but not for lack of
interest, we have decided not to explore. As our discussion of biological networks suggests, complex systems
can often be modeled as an interacting network, and
complex networks often have a modular structure whose
function can be understood by appeal to elementary
feedback motifs. Thus, one finds feedback to be a relevant concept in understanding the weather, climate change, economics, etc. For example, many of the arguments in the field of global warming amount to debates about the magnitudes and sizes of feedback effects. While the link between industrial and other human activity and the increased amount of CO2 and other
greenhouse gasses is clear, the climatic effects are much harder to predict, because of the complexity of feedback effects.

33 A more precise statement is that if all disturbances d(t) satisfy a known differential equation of the form Σⁿᵢ₌₀ pᵢ d⁽ⁱ⁾ = 0, then one can design a control system that perfectly tracks the disturbances. Here, d⁽ⁱ⁾ is the ith time derivative of d(t), and the pᵢ are known, constant coefficients. For example, all step disturbances satisfy d⁽¹⁾ = 0, while oscillatory disturbances satisfy d⁽²⁾ + ω_d² d⁽⁰⁾ = 0. The internal model principle states that disturbances or reference signals are canceled in steady state if their transfer function is contained in the denominator of the controller K(s); see Lewis (1992), Chap. 4. The nonlinear generalization of the IMP is discussed by Isidori and Byrnes.

There is a tremendous flux of carbon between
the atmosphere and various “sinks,” such as the oceans
and forests, and the dependence of these fluxes on
greenhouse gasses must be evaluated accurately to know
the cumulative effect on climate (Sarmiento and Gruber, 2002). Even more disturbing, it has become clear that small changes have in the past led to rapid climate shifts—on the time scale of decades, or even less. Models of thermohaline circulation in the oceans show bistable behavior analogous to the genetic switch discussed above, where positive feedback effects can toggle the climate between warm and icy states (Weart, 2003).
One rather distinct area left out is the application of control principles to quantum phenomena (Rice and Zhao, 2000; Shapiro and Brumer, 2003; Walmsley and Rabitz, 2003). The basic insight is that adding a coherent perturbation to a system can enhance or suppress the amplitude of desired "product" states. The famous two-slit experiment of Young, where two beams of light interfere to produce light and dark fringes, is an elementary example. Much of the practical work has used
coherent light beams interacting with matter to enhance
or suppress phenomena such as dissociation and chemical reaction. Feedback often enters here in only a weak
way, where one conducts repeated trials, using the results to adaptively tune experimental control parameters
such as the amplitude and phase of different frequency
components in a shaped pulse; however, some recent
work has emphasized real-time corrections, where the
ideas discussed here, including state estimation (Kalman filtering) and robustness, are beginning to be explored (Wiseman et al., 2002). Many of these approaches are
based on adaptive optics, a technique that has many
other applications—for example, compensating for turbulence in astronomy, where one rapidly changes the shape of a mirror to remove phase disturbances added by atmospheric fluctuations (Roggemann et al., 1999).
Finally, most of the work to date on quantum systems
has been “semiclassical,” in that sensors perform measurements on a system, classical computers process the
results, and semiclassical fields are used to influence the
future evolution of the system. Lloyd has emphasized
that one can imagine a fully quantum feedback scheme wherein one adds an element to the quantum system that interacts (without making a measurement) with it in such a way that the dynamics are altered in a desired way (Lloyd, 2000). Because information is not destroyed (as it is in the measurement stage of the semiclassical scheme), higher performance is in principle possible. But this field is still young.
There are also many valuable topics in control theory
that we do not have space to discuss. Foremost among
these is adaptive control, which can be considered as a
complement to the approach of robust control, discussed
in Sec. V.E. In both cases, the system to be controlled is
at least partially unknown. In robust control, one first
tries to describe the set of possible transfer functions of
close linear systems and then tries to find a controller
that is stable in all situations and still performs adequately, even under a worst-case scenario. In essence,
this is a conservative approach, which works best for
smaller sets of transfer functions. (If the variations are too large, trying to satisfy all the constraints all the time will lead to very weak control.) Alternatively, one can
try to “learn” which of the possible set of transfer functions best describes the system at the present time and
to design a control law for the current best estimate of
the dynamical system. This is the overall approach of
adaptive control, which in its simplest forms treats topics
such as autotuning of parameters for simple control
loops (e.g., PID loops) (Dutton et al., 1997; Franklin et al., 1998). The basic algorithms are akin to the Kalman
filter, in that model parameters are estimated using a
recursive version of least-squares fitting that updates the
parameter estimates at each time step. In more sophisticated analyses, the controller’s structure can be varied
as well (Isermann et al., 1992). Adaptation and "learning" are also used in approaches from computer science that include the genetic algorithms mentioned above (Jamshidi et al., 2003), neural networks (Norgaard et al., 2000), fuzzy logic (Verbruggen et al., 1999), and "intelligent" control systems (Hangos et al., 2001). Broadly
speaking, all of these are attempts to mimic the judgment of an experienced human operator manually controlling a system. Of course, they have applications far
beyond problems of control. For most physicists, the
more straightforward control techniques described in
this review will be more than adequate.
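As a flavor of the recursive least-squares idea mentioned above, here is a minimal sketch for a hypothetical first-order plant x_{t+1} = a·x_t + b·u_t with made-up values a = 0.9, b = 0.5: the parameter estimates are refreshed at each time step from the newest input-output pair, much as a Kalman filter refreshes a state estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([0.9, 0.5])   # unknown plant parameters (a, b)

theta = np.zeros(2)                 # running parameter estimate
P = 1e3 * np.eye(2)                 # its covariance, initially very uncertain
x = 0.0
for _ in range(500):
    u = rng.standard_normal()       # probing input
    phi = np.array([x, u])          # regressor: x_{t+1} = phi . theta + noise
    x_next = theta_true @ phi + 0.01 * rng.standard_normal()
    # recursive least-squares update:
    g = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + g * (x_next - phi @ theta)
    P = P - np.outer(g, phi @ P)
    x = x_next

print(theta)   # converges toward [0.9, 0.5]
```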
In various places in the discussion above, we have
noted the informal connection between information and
control. For example, in Sec. III.A, we saw how the
strategy of feedforward relied on prior information
about the nature of a disturbance. In Sec. V.E.1, we saw
that feedback is required only when our knowledge of
the system and its disturbances is incomplete. Also, in
Sec. V.D, we saw that the Kalman filter gives the optimal way to blend the two pieces of information one has about the current state of a linear dynamical system subject to white-noise perturbations and to white noise in the measurements: it best blends the actual measurement with the prediction based on the system dynamics.
In all these discussions, the use of the term "information" has been informal, but it is natural to wonder whether there are links to the technical subject of "information theory." Indeed, the influential book Cybernetics by Wiener (first published in 1948, the year of Shannon's fundamental papers on information theory) explored some connections (Wiener, 1961). Still, there
has been very little development of these ideas. Recent
papers by Touchette and Lloyd 共2000, 2004兲 begin to
explore more formally these links and derive a fundamental relationship between the amount of control achievable ("decrease of entropy" in their formulation) and the "mutual information" (Cover and Thomas, 1991) between the dynamical system and the controller
created by an initial interaction. They show that if one
measures the accuracy of control by the statistical reduction of uncertainty about the state of the controlled object, then that reduction is limited by the information
that measurements can extract about the system, in conjunction with any improvements that open-loop control
can offer. In their formulation, information becomes the
common language for both measurement and control.
The full implications of their results and how they connect to the "optimal estimation theory" of Kalman discussed here [and to similar "minimum-variance" concepts related to Wiener filters (Harris et al., 1999)] have yet to be worked out.34
As an example of the kinds of issues to consider, in the temperature-control example of Day et al. (1997) already discussed, the temperature control is actually better than the sensor-noise limit, because the sample is surrounded by a regulated shield and is not directly regulated. The last stage then acts like a passive filter.
Since all external disturbances pass through the shield,
the interior has a lower level of fluctuations. This setup
allowed the authors to measure the internal thermal
fluctuations between a thermometer and a reservoir and
between two thermometers.
It is interesting that while the link between feedback
and information theory has been developing almost
since the beginning of the subject itself, the connection
has been slow to sink in and difficult to grasp. As Bennett (1993) has noted, pre-20th-century controllers were almost invariably "direct" acting, with the force required
to effect a change on the system developed by the measuring device itself. For example, the buoyant force on a
ball directly controls the water valve in a toilet. It took a
long time for engineers to recognize that the sensor and
the actuator were logically distinct devices and that the
function of the controller was just to process the information gained through the sensor, converting it to a response by the actuator.
One of the main goals of this review has been to give
a pragmatic introduction to control theory and the basic
techniques of feedback and feedforward. We have discussed a broad range of applications, including details of
practical implementation of feedback loops. We have
emphasized basic understanding and outlined the starting points for more advanced methods. We have tried in
many places to show how a bit of thought about the
design of the physical system can reduce the demands on
the controller. (Remember to limit time lags by putting sensors close to actuators!) We have also emphasized that it often pays to spend a bit more on equipment and less on fancy control-loop design. We have argued that a mix of feedforward and feedback can work much better than either technique alone. (To anticipate is better than to react, but cautious anticipation is better still.) Taken together, the techniques outlined here are probably more sophisticated, and more systematic, than what is commonly practiced among physicists. I certainly hope that this article “raises the consciousness” and perhaps even the level of practice of physicists as regards feedback loops. And if higher performance is needed in an application, one can, and should, consult the professional control-theory books, which I hope will now be more accessible.

34 One link between the Kalman filter and information theory is via the use of Bayes’ theorem in constructing the optimal estimate (Maybeck, 1979); however, the work of Touchette and Lloyd (2000) implies deeper connections. Links to areas such as the robust methods of Sec. V.E are even less clear; see Kåhre (2002), Chap. 12, for other ideas on how information theory can be related to control issues.

Rev. Mod. Phys., Vol. 77, No. 3, July 2005
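The Bayesian construction of the optimal estimate mentioned above reduces, for Gaussian beliefs, to a precision-weighted average of prior and measurement — the scalar Kalman measurement update. (A generic textbook example, not code from any cited work.)

```python
def bayes_update(prior_mean, prior_var, z, meas_var):
    """Combine a Gaussian prior with a Gaussian measurement z.

    Bayes' theorem for Gaussians gives a posterior whose mean is a
    precision-weighted average -- the scalar Kalman measurement update.
    """
    k = prior_var / (prior_var + meas_var)   # Kalman gain
    post_mean = prior_mean + k * (z - prior_mean)
    post_var = (1 - k) * prior_var           # always less than prior_var
    return post_mean, post_var

# A noisy measurement narrows the belief about the state
print(bayes_update(0.0, 1.0, 1.0, 1.0))   # -> (0.5, 0.5)
```

Iterating this update as measurements arrive is exactly the sense in which the controller "processes the information gained through the sensor": each measurement strictly reduces the variance of the state estimate.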
At the same time, we have explored some of the
deeper implications of control theory. We have seen how
feedback loops and other complicated networks play a
fundamental role in complex systems encountered in nature, particularly in biology. We have seen how information is the “common coin” of measurements and that
feedback can be thought of in terms of information
flows, from system to controller and back again in the
simplest case, or from node to node for more complicated networks. What is interesting is how the reality of
the physical system itself begins to fade from the picture
once control loops are implemented. In simple cases,
this occurs when we use feedback to speed up (Sec. III.A) or slow down dynamics (Sec. V.C.1), implying that
one can use feedback to compensate for the physical
limitations of the particular elements one may have at
hand. Certainly, some of the simpler applications of
feedback depend on such abilities.
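The speed-up can be seen in a minimal simulation of proportional feedback around a first-order plant (a hypothetical example with arbitrary gains): the closed-loop time constant is tau/(1 + Kp), so raising the gain makes a slow element respond fast.

```python
def step_response_time(Kp, tau=1.0, dt=0.001, target=0.63):
    """Time for a first-order plant tau*dy/dt = -y + u, under
    proportional feedback u = Kp*(r - y), to reach `target` of its
    final value after a unit step in the setpoint r."""
    r, y, t = 1.0, 0.0, 0.0
    y_final = Kp / (1 + Kp)          # closed-loop dc gain
    while y < target * y_final:
        y += (-y + Kp * (r - y)) * dt / tau
        t += dt
    return t

print(f"Kp = 1 : 63% rise time = {step_response_time(1.0):.3f} s")
print(f"Kp = 20: 63% rise time = {step_response_time(20.0):.3f} s")
```

With Kp = 20 the response is roughly ten times faster than with Kp = 1, illustrating how feedback trades gain for bandwidth to compensate for a sluggish physical element.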
Other, deeper applications of control depend on the
robustness that feedback can lead to. We saw this in
integral control, where tracking is achieved over a wide
range of control parameters, and it is the loop structure
that ensures the desired result. Such feedback is, of
course, at the heart of the PID loops beloved in industrial applications. But we saw that it also exists in mathematical and physical models of biological networks
where feedback leads to a robustness that begins to approximate the behavior of real biological systems.
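The robustness of integral tracking can likewise be sketched in a few lines (a hypothetical first-order plant; the gains are arbitrary): the output settles exactly on the setpoint for any stable integral gain, because the loop structure, not the parameter values, enforces zero steady-state error.

```python
def track(Ki, r=1.0, dt=0.001, T=200.0):
    """Pure integral control of a first-order plant dy/dt = -y + u."""
    y, integ = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - y            # tracking error
        integ += e * dt      # accumulated (integrated) error
        u = Ki * integ       # integral control law
        y += (-y + u) * dt   # Euler step of the plant
    return y

# The final output equals the setpoint over a wide range of gains
for Ki in (0.05, 0.2, 0.5):
    print(f"Ki = {Ki:4.2f}: final y = {track(Ki):.4f}")
```

Changing Ki by an order of magnitude changes the transient, but not the final value — the defining property of integral feedback.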
What is perhaps most interesting to a physicist is the
way new kinds of behavior arise from the structure of
control loops. The tracking property of integral feedback comes from the structure of the feedback loop, not
the nature of the individual elements. In this sense, it is
a kind of “emergent phenomenon,” but one that differs
from the examples familiar in physics, such as the soft
modes and phase rigidity that accompany symmetry-breaking phase transitions. Thus, engineering provides an alternate set of archetypes for emergent phenomena, which—as Carlson and Doyle (2002), Csete and Doyle (2002), and Kitano (2002) have argued—will perhaps be
more fruitful for understanding biological mechanisms
than physics-inspired archetypes. My own view is that
one should not be too dogmatic about which discipline
provides the better model, for in the past physicists have
often been successful precisely because they were willing
to try to solve the problem at hand “by any means necessary.”
Still, the easy integration of certain engineering concepts into common knowledge is striking. We physicists
are justly proud of having developed concepts such as
“work,” “energy,” “momentum,” “resonance,” and even
“entropy” that have entered the popular language as
metaphors and are used by most people with confidence
even when the technical meaning is obscure. More recently, engineering terms such as “bits,” “information,” and
“feedback,” which also treat topics that physicists deal
with, have become equally familiar. If we take up the
last of these terms, the word “feedback” is less than a
century old and was coined to describe an advance in
radio technology (Simpson and Weiner, 1989). In popular language, it quickly went from a novelty to a fad to a
word so often used that one can scarcely imagine its
recent origin. Feedback is a useful technical tool that
underlies much of modern technology, but it is also an
essential ingredient of biological systems and even of life
itself. Is it then merely by chance that “feedback,” despite its narrow technical origins, has found such ready
acceptance and wide use in popular culture?
ACKNOWLEDGMENTS

I thank Vincent Fourmond, Russ Greenall, Christoph Hebeisen, Bloen Metzger, Laurent Talon, and Anand Yethiraj for collaboration on some of the experimental work cited here. I am grateful for comments by Ping Ao, Adrienne Drobnies, Ted Forgan, Bruce Francis, Jean-Christophe Géminard, Andrew Lewis, Albert Libchaber, and Juan Restrepo, and for support by NSERC.
LIST OF ABBREVIATIONS

A/D     analog to digital
AFM     atomic force microscope
D/A     digital to analog
DNA     deoxyribonucleic acid
DSP     digital signal processor
EKF     extended Kalman filter
FPGA    field-programmable gate array
IIR     infinite impulse response
IMC     internal model control
IMP     internal model principle
LHP     left-hand plane
LIGO    Laser Interferometer Gravitational Wave Observatory
LQG     linear quadratic Gaussian
LSB     least-significant bit
MIMO    multiple input, multiple output
mRNA    messenger RNA
NMP     nonminimum phase
ODE     ordinary differential equation
PID     proportional-integral-derivative control law
PLD     programmable logic device
PRBS    pseudorandom binary sequence
RHP     right-hand plane
rms     root-mean square
RNA     ribonucleic acid
RNAp    RNA polymerase
SISO    single input, single output
SPM     scanning probe microscope
STM     scanning tunneling microscope
SVD     singular-value decomposition
SVF     state-vector feedback
TetR    tetracycline repressor
ZOH     zero-order hold
REFERENCES

Albert, R., and A.-L. Barabási, 2002, “Statistical mechanics of complex networks,” Rev. Mod. Phys. 74, 47–97.
Alon, U., M. G. Surette, N. Barkai, and S. Leibler, 1999, “Robustness in bacterial chemotaxis,” Nature (London) 397, 168–
Barkai, N., and S. Leibler, 1997, “Robustness in simple biochemical networks,” Nature (London) 387, 913–917.
Barone, F., E. Calloni, A. Grado, R. D. Rosa, L. D. Fiore, L. Milano, and G. Russo, 1995, “High accuracy digital temperature control for a laser diode,” Rev. Sci. Instrum. 66, 4051–
Becskei, A., and L. Serrano, 2000, “Engineering stability in gene networks by autoregulation,” Nature (London) 405,
Bennett, S., 1993, “Development of the PID controller,” IEEE Control Syst. Mag. 13 (6), 58–65.
Bennett, S., 1996, “A brief history of automatic control,” IEEE Control Syst. Mag. 16 (3), 17–25.
Bennett, S., 2002, “Otto Mayr: Contributions to the history of feedback control,” IEEE Control Syst. Mag. 22 (2), 29–33.
Black, H. S., 1934, “Stabilized feedback amplifiers,” Bell Syst. Tech. J. 13, 1–18.
Bode, H. W., 1945, Network Analysis and Feedback Amplifier Design (van Nostrand, New York).
Borcherding, F., S. Grünendahl, M. Johnson, M. Martin, J. Olsen, and K. Yip, 1999, “A first-level tracking trigger for the upgraded D0 detector,” IEEE Trans. Nucl. Sci. 46, 359–364.
Carlson, J. M., and J. Doyle, 2002, “Complexity and robustness,” Proc. Natl. Acad. Sci. U.S.A. 99, 2538–2545.
Chaikin, P. M., and T. C. Lubensky, 1995, Principles of Condensed Matter Physics (Cambridge University Press, Cambridge, UK).
Cherry, J. L., and F. R. Adler, 2000, “How to make a biological switch,” J. Theor. Biol. 203, 117–133.
Cohen, A. E., 2005, “Control of nanoparticles with arbitrary two-dimensional force fields,” Phys. Rev. Lett. 94, 118102.
Cooper, W. S., 1986, “Use of optimal estimation theory, in particular the Kalman filter, in data processing and signal analysis,” Rev. Sci. Instrum. 57, 2862–2869.
Cover, T., and J. Thomas, 1991, Elements of Information Theory (Wiley, New York).
Croft, D., and S. Devasia, 1999, “Vibration compensation for high speed scanning tunneling microscopy,” Rev. Sci. Instrum. 70, 4600–4605.
Csete, M. E., and J. C. Doyle, 2002, “Reverse engineering of biological complexity,” Science 295, 1664–1669.
Day, P., I. Hahn, T. C. P. Chui, A. W. Harter, D. Rowe, and J. A. Lipa, 1997, “The fluctuation-imposed limit for temperature measurement,” J. Low Temp. Phys. 107, 359–370.
Detwiler, P. B., S. Ramanathan, A. Sengupta, and B. I. Shraiman, 2000, “Engineering aspects of enzymatic signal transduction: Photoreceptors in the retina,” Biophys. J. 79,
Devasia, S., 2002, “Should model-based inverse inputs be used as feedforward under plant uncertainty?,” IEEE Trans. Autom. Control 47, 1865–1871.
Doiron, B., M. J. Chacron, L. Maler, A. Longtin, and J. Bastian, 2003, “Inhibitory feedback required for network oscillatory responses to communication but not prey stimuli,” Nature (London) 421, 539–543.
Doyle, J. C., B. A. Francis, and A. R. Tannenbaum, 1992, Feedback Control Theory (Macmillan, New York), URL
Dutton, K., S. Thompson, and B. Barraclough, 1997, The Art of Control Engineering (Addison-Wesley, Harlow, England).
Etchenique, R., and J. Aliaga, 2004, “Resolution enhancement by dithering,” Am. J. Phys. 72, 159–163.
Evensen, G., 2003, “The ensemble Kalman filter: Theoretical formulation and practical implementation,” Ocean Dynamics 53, 343–367.
Eyink, G. L., J. M. Restrepo, and F. J. Alexander, 2004, “A mean field approximation in data assimilation for nonlinear dynamics,” Physica D 195, 347–368.
Forgan, E. M., 1974, “On the use of temperature controllers in cryogenics,” Cryogenics 14, 207–214.
Franklin, G. F., J. D. Powell, and A. Emami-Naeini, 2002, Feedback Control of Dynamical Systems, 4th ed. (Prentice-Hall, Upper Saddle River, NJ).
Franklin, G. F., J. D. Powell, and M. Workman, 1998, Digital Control of Dynamical Systems (Addison-Wesley, Reading, MA).
Freeman, M., 2000, “Feedback control of intercellular signalling in development,” Nature (London) 408, 313–319.
Gamkrelidze, R. V., 1999, “Discovery of the maximum principle,” J. Dyn. Control Syst. 5, 437–451.
Gardner, T. S., C. R. Cantor, and J. J. Collins, 2000, “Construction of a genetic toggle switch in Escherichia coli,” Nature (London) 403, 339–342.
Gardner, T. S., and J. J. Collins, 2000, “Neutralizing noise in gene networks,” Nature (London) 405, 520–521.
Gershenfeld, N., 1999, The Nature of Mathematical Modeling (Cambridge University Press, Cambridge, UK).
Gershenfeld, N., 2000, The Physics of Information Technology (Cambridge University Press, Cambridge, UK).
Goldenfeld, N., 1992, Lectures on Phase Transitions and the Renormalization Group (Perseus, Reading, MA).
Goodwin, G. C., S. F. Graebe, and M. E. Salgado, 2001, Control System Design (Prentice Hall, Upper Saddle River, NJ).
Gosse, C., and V. Croquette, 2002, “Magnetic tweezers: micromanipulation and force measurement at the molecular level,” Biophys. J. 82, 3314–3329.
Gray, M., D. McClelland, M. Barton, and S. Kawamura, 1999, “A simple high-sensitivity interferometric position sensor for test mass control on an advanced LIGO interferometer,” Opt. Quantum Electron. 31, 571–582.
Grebogi, C., and Y.-C. Lai, 1999, in Handbook of Chaos Control, edited by H. G. Schuster (Wiley-VCH, New York), pp.
Hangos, K. M., R. Lakner, and M. Gerzson, 2001, Intelligent Control Systems: An Introduction with Examples (Kluwer Academic, Dordrecht).
Harris, T. J., C. T. Seppala, and L. D. Desborough, 1999, “A review of performance monitoring and assessment techniques for univariate and multivariate control systems,” J. Process Control 9, 1–17.
Hartwell, L. H., J. J. Hopfield, S. Leibler, and A. W. Murray, 1999, “From molecular to modular biology,” Nature (London) 402, C47–C52.
Hensley, J. M., A. Peters, and S. Chu, 1999, “Active low frequency vertical vibration isolation,” Rev. Sci. Instrum. 70,
Isermann, R., K.-H. Lachmann, and D. Matko, 1992, Adaptive Control Systems (Prentice-Hall, New York).
Ishida, F., and Y. E. Sawada, 2004, “Human hand moves proactively to the external stimulus: An evolutional strategy for minimizing transient error,” Phys. Rev. Lett. 93, 168105.
Isidori, A., and C. I. Byrnes, 1990, “Output regulation of nonlinear systems,” IEEE Trans. Autom. Control 35, 131–140.
Jamshidi, M., L. dos Santos Coelho, R. A. Krohling, and P. J. Fleming, 2003, Robust Control Systems with Genetic Algorithms (CRC, Boca Raton, FL).
Jeong, H., B. Tombor, R. Albert, Z. N. Oltvai, and A.-L. Barabási, 2000, “The large-scale organization of metabolic networks,” Nature (London) 407, 651–654.
Kaern, M., W. J. Blake, and J. J. Collins, 2003, “The engineering of gene regulatory networks,” Annu. Rev. Biomed. Eng. 5, 179–206.
Kåhre, J., 2002, The Mathematical Theory of Information (Kluwer Academic, Boston).
Kalman, R. E., 1960, “A new approach to linear filtering and prediction problems,” J. Basic Eng. 82, 35–45.
Keener, J., and J. Sneyd, 1998, Mathematical Physiology (Springer-Verlag, New York).
Kitano, H., 2002, “Looking beyond the details: A rise in systems-oriented approaches in genetics and molecular biology,” Curr. Genet. 41, 1–10.
Leigh, J. R., 2004, Control Theory, 2nd ed. (The Institution of Electrical Engineers, London).
Lewis, F. L., 1992, Applied Optimal Control and Estimation (Prentice-Hall, Englewood Cliffs, NJ).
Ljung, L., 1999, System Identification: Theory for the User, 2nd ed. (Prentice-Hall, Upper Saddle River, NJ).
Lloyd, S., 2000, “Coherent quantum feedback,” Phys. Rev. A 62, 022108.
Longtin, A., and J. Milton, 1989a, “Insight into the transfer function, gain, and oscillation onset for the pupil light reflex using nonlinear delay-differential equations,” Biol. Cybern. 61, 51–58.
Longtin, A., and J. Milton, 1989b, “Modelling autonomous oscillations in the human pupil light reflex using non-linear delay-differential equations,” Bull. Math. Biol. 51, 605–624.
Mancini, R., 2002, Op Amps for Everyone: Texas Instruments Guide, Technical Report, Texas Instruments, URL http://
Mangan, S., and U. Alon, 2003, “Structure and function of the feed-forward loop network motif,” Proc. Natl. Acad. Sci. U.S.A. 100, 11980–11985.
Maslov, S., and K. Sneppen, 2002, “Specificity and stability in topology of protein networks,” Science 296, 910–913.
Maxwell, J. C., 1868, “On governors,” Proc. R. Soc. London 16, 270–283.
Maybeck, P. S., 1979, Stochastic Models, Estimation, and Control (Academic, New York), Vol. 1.
Mayr, O., 1970, “The origins of feedback control,” Sci. Am. 223 (4), 110–118.
Metzger, B., 2002, Intensity fluctuation microscopy applied to the nematic–smectic-A phase transition, Magistère thesis, Ecole Normale Supérieure de Lyon.
Monod, J., and F. Jacob, 1961, “General conclusions: Teleonomic mechanisms in cellular metabolism, growth, and differentiation,” Cold Spring Harbor Symp. Quant. Biol. 26, 389–
Morari, M., and E. Zafiriou, 1989, Robust Process Control (Prentice-Hall, Upper Saddle River, NJ).
Morris, K., 2001, Introduction to Feedback Control Theory (Academic, San Diego).
Naidu, D. S., 2003, Optimal Control Systems (CRC, Boca Raton, FL).
Newman, M. E. J., 2003, “The structure and function of complex networks,” SIAM Rev. 45, 167–256.
Norgaard, M., O. Ravn, N. K. Poulsen, and L. K. Hansen, 2000, Neural Networks for Modelling and Control of Dynamic Systems: A Practitioner’s Handbook (Springer-Verlag, Berlin).
Nyquist, H., 1932, “Regeneration theory,” Bell Syst. Tech. J. 11, 126–147.
Oliva, A. I., E. Angulano, N. Denisenko, and M. Aguilar, 1995, “Analysis of scanning tunneling microscopy feedback system,” Rev. Sci. Instrum. 66, 3195–3203.
Oppenheim, A. V., R. W. Schafer, and J. R. Buck, 1992, Discrete-Time Signal Processing, 2nd ed. (Prentice-Hall, Upper Saddle River, NJ).
Ott, E., C. Grebogi, and J. A. Yorke, 1990, “Controlling chaos,” Phys. Rev. Lett. 64, 1196–1199.
Özbay, H., 2000, Introduction to Feedback Control Theory (CRC, Boca Raton, FL).
Ozbudak, E. M., M. Thattai, H. N. Lim, B. I. Shraiman, and A. van Oudenaarden, 2004, “Multistability in the lactose utilization network of Escherichia coli,” Nature (London) 427, 737–
Petersen, I. R., and A. V. Savkin, 1999, Robust Kalman Filtering for Signals and Systems with Large Uncertainties (Birkhäuser, Boston).
Pikovsky, A., M. Rosenblum, and J. Kurths, 2001, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge, UK).
Pontryagin, L. S., V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, 1964, The Mathematical Theory of Optimal Processes (Macmillan, New York).
Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, 1993, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. (Cambridge University Press, Cambridge, UK).
Reynolds, D. E., 2003, “Coarse graining and control theory model reduction,” e-print cond-mat/0309116.
Rice, S. A., and M. Zhao, 2000, Optical Control of Molecular Dynamics (Wiley-Interscience, New York).
Rieke, F., D. Warland, R. de Ruyter van Steveninck, and W. Bialek, 1997, Spikes: Exploring the Neural Code (MIT Press, Cambridge, MA).
Roggemann, M. C., B. M. Welsh, and R. Q. Fugate, 1999, “Improving the resolution of ground-based telescopes,” Rev. Mod. Phys. 69, 437–505.
Rosenfeld, N., M. Elowitz, and U. Alon, 2002, “Negative autoregulation speeds the response times of transcription networks,” J. Mol. Biol. 323, 785–793.
Salapaka, S., A. Sebastian, J. P. Cleveland, and M. V. Salapaka, 2002, “High bandwidth nano-positioner: A robust control approach,” Rev. Sci. Instrum. 73, 3232–3241.
Sarmiento, J. L., and N. Gruber, 2002, “Sinks for anthropogenic carbon,” Phys. Today 55 (8), 30–36.
Schitter, G., F. Allgöwer, and A. Stemmer, 2004, “A new control strategy for high-speed atomic force microscopy,” Nanotechnology 15, 108–114.
Schitter, G., P. Menold, H. F. Knapp, F. Allgöwer, and A. Stemmer, 2001, “High performance feedback for fast scanning atomic force microscopes,” Rev. Sci. Instrum. 72, 3320–
Schitter, G., and A. Stemmer, 2004, “Identification and open-loop tracking control of a piezoelectric tube scanner for high-speed scanning-probe microscopy,” IEEE Trans. Control Syst. Technol. 12, 449–454.
Schuster, H. G., 1999, Ed., Handbook of Chaos Control (Wiley-VCH, New York).
Shapiro, M., and P. Brumer, 2003, Principles of the Quantum Control of Molecular Processes (Wiley-Interscience, New York).
Shen-Orr, S. S., R. Milo, S. Mangan, and U. Alon, 2002, “Network motifs in the transcriptional regulation network of Escherichia coli,” Nat. Genet. 31, 64–68.
Shinbrot, T., 1999, in Handbook of Chaos Control, edited by H. G. Schuster (Wiley-VCH, New York), pp. 157–180.
Simpson, J., and E. Weiner, 1989, Eds., Oxford English Dictionary, 2nd ed. (Clarendon, Oxford).
Singhose, W. E., 1997, Command Generation for Flexible Systems, Ph.D. thesis (MIT).
Skogestad, S., and I. Postlethwaite, 1996, Multivariable Feedback Control (Wiley, Chichester, UK).
Sontag, E. D., 1998, Mathematical Control Theory, 2nd ed. (Springer-Verlag, New York).
Strogatz, S. H., 1994, Nonlinear Dynamics and Chaos (Addison-Wesley, Reading, MA).
Sussmann, H. J., and J. C. Willems, 1997, “300 years of optimal control,” IEEE Control Syst. Mag. 17, 32–44.
Tamayo, J., A. D. L. Humphris, and M. J. Miles, 2000, “Piconewton regime dynamic force microscopy in liquid,” Appl. Phys. Lett. 77, 582–584.
Touchette, H., and S. Lloyd, 2000, “Information-theoretic limits of control,” Phys. Rev. Lett. 84, 1156–1159.
Touchette, H., and S. Lloyd, 2004, “Information-theoretic approach to the study of control systems,” Physica A 331, 140–
Verbruggen, H. B., H.-J. Zimmermann, and R. Babuska, 1999, Eds., Fuzzy Algorithms for Control (Kluwer Academic, Boston).
Vidyasagar, M., 1986, “On undershoot and nonminimum phase zeros,” IEEE Trans. Autom. Control 31, 440.
Vilar, J. M. G., C. C. Guet, and S. Leibler, 2003, “Modeling network dynamics: the lac operon, a case study,” J. Cell Biol. 161, 471–476.
Walmsley, I., and H. Rabitz, 2003, “Quantum physics under control,” Phys. Today 56 (8), 43–49.
Weart, S., 2003, “The discovery of rapid climate change,” Phys. Today 56 (8), 30–36.
Wiener, N., 1958, Nonlinear Problems in Random Theory (MIT, Cambridge, MA).
Wiener, N., 1961, Cybernetics: Control and Communication in the Animal and the Machine, 2nd ed. (MIT Press, Cambridge, MA).
Wiseman, H. M., S. Mancini, and J. Wang, 2002, “Bayesian feedback versus Markovian feedback in a two-level atom,” Phys. Rev. A 66, 013807.
Wolf, D. M., and F. H. Eeckman, 1998, “On the relationship between genomic regulatory element organization and gene regulatory dynamics,” J. Theor. Biol. 195, 167–186.
Yethiraj, A., R. Mukhopadhyay, and J. Bechhoefer, 2002, “Two experimental tests of a fluctuation-induced first-order phase transition: Intensity fluctuation microscopy at the nematic–smectic-A transition,” Phys. Rev. E 65, 021702.
Yi, T.-M., Y. Huang, M. I. Simon, and J. Doyle, 2000, “Robust perfect adaptation in bacterial chemotaxis through integral feedback control,” Proc. Natl. Acad. Sci. U.S.A. 97, 4649–
Zou, Q., C. V. Giessen, J. Garbini, and S. Devasia, 2005, “Precision tracking of driving wave forms for inertial reaction devices,” Rev. Sci. Instrum. 76, 1–9.