Numerical Computing with Simulink, Volume I
Creating Simulations

Richard J. Gran
Mathematical Analysis Company
Norfolk, Massachusetts

Society for Industrial and Applied Mathematics • Philadelphia
Copyright © 2007 by the Society for Industrial and Applied Mathematics.
10 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book may be
reproduced, stored, or transmitted in any manner without the written permission of the
publisher. For information, write to the Society for Industrial and Applied Mathematics,
3600 Market Street, 6th floor, Philadelphia, PA 19104-2688 USA.
Trademarked names may be used in this book without the inclusion of a trademark symbol.
These names are used in an editorial context only; no infringement of trademark is intended.
Maple is a trademark of Maplesoft, Waterloo, Ontario, Canada.
MATLAB, Simulink, Real Time Workshop, SimHydraulics, Stateflow, and Handle Graphics are
registered trademarks of The MathWorks, Inc. For MATLAB product information, please
contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-7000, Fax: 508-647-7101, [email protected], www.mathworks.com.
Figure 1.12 is used with permission of Marcus Orvando, president of Erwin Sattler Clocks
of America.
Figure 1.16 was taken by Michael Reeve on January 30, 2004, and its source is the
Wikipedia article “Foucault Pendulum.” Per Wikipedia, permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation
License, Version 1.2 or any later version published by the Free Software Foundation; with no
Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. Subject to disclaimers.
Figure 2.7 is taken from the Wikipedia article “Thermostat.” Per Wikipedia, permission is
granted to copy, distribute and/or modify this document under the terms of the GNU Free
Documentation License, Version 1.2 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
Subject to disclaimers.
Library of Congress Cataloging-in-Publication Data
Gran, Richard J., 1940–
Numerical computing with Simulink / Richard J. Gran.
v. cm.
Includes bibliographical references and index.
Contents: v. 1. Creating simulations
ISBN 978-0-89871-637-5 (alk. paper)
1. Numerical analysis--Data processing. 2. SIMULINK. I. Title.
QA297.G676 2007
518.0285--dc22
2007061803
SIAM is a registered trademark.
To Dr. Richard A. Scheuing: A Friend and Mentor
Contents

List of Figures  xi
List of Tables  xvii
Preface  xix

1  Introduction to Simulink  1
   1.1  Using a Picture to Write a Program  1
   1.2  Example 1: Galileo Drops Two Objects from the Leaning Tower of Pisa  5
   1.3  Example 2: Modeling a Pendulum and the Escapement of a Clock  17
        1.3.1  History of Pendulum Clocks  18
        1.3.2  A Simulation Model for the Clock  20
   1.4  Example 3: Complex Rotations—The Foucault Pendulum  24
        1.4.1  Forces from Rotations  25
        1.4.2  Foucault Pendulum Dynamics  26
   1.5  Further Reading  31
   Exercises  32

2  Linear Differential Equations, Matrix Algebra, and Control Systems  33
   2.1  Linear Differential Equations: Linear Algebra  33
        2.1.1  Solving a Differential Equation at Discrete Time Steps  36
        2.1.2  Linear Differential Equations in Simulink  38
   2.2  Laplace Transforms for Linear Differential Equations  40
   2.3  Linear Feedback Control  43
        2.3.1  What Is a Control System?  44
        2.3.2  Control Systems and Linear Differential Equations  48
   2.4  Linearization and the Control of Linear Systems  49
        2.4.1  Linearization  49
        2.4.2  Eigenvalues and the Response of a Linear System  51
   2.5  Poles and the Roots of the Characteristic Polynomial  55
        2.5.1  Feedback of the Position of the Mass in the Spring-Mass Model  56
        2.5.2  Feedback of the Velocity of the Mass in the Spring-Mass Model  57
        2.5.3  Comparing Position and Rate Feedback  61
        2.5.4  The Structure of a Control System: Transfer Functions  62
   2.6  Transfer Functions: Bode Plots  64
        2.6.1  The Bode Plot for Continuous Time Systems  65
        2.6.2  Calculating the Bode Plot for Continuous Time Systems  65
   2.7  PD Control, PID Control, and Full State Feedback  69
        2.7.1  PD Control  69
        2.7.2  PID Control  71
        2.7.3  Full State Feedback  72
        2.7.4  Getting Derivatives for PID Control or Full State Feedback  74
   2.8  Further Reading  78
   Exercises  78

3  Nonlinear Differential Equations  81
   3.1  The Lorenz Attractor  81
        3.1.1  Linear Operating Points: Why the Lorenz Attractor Is Chaotic  83
   3.2  Differential Equation Solvers in MATLAB and Simulink  86
   3.3  Tables, Interpolation, and Curve Fitting in Simulink  87
        3.3.1  The Simple Lookup Table  88
        3.3.2  Interpolation: Fitting a Polynomial to the Data and Using the Result in Simulink  91
        3.3.3  Using Real Data in the Model: From Workspace and File  92
   3.4  Rotations in Three Dimensions: Euler Rotations, Axis-Angle Representations, Direction Cosines, and the Quaternion  94
        3.4.1  Euler angles  95
        3.4.2  Direction Cosines  98
        3.4.3  Axis-Angle Rotations  99
        3.4.4  The Quaternion Representation  99
   3.5  Modeling the Motion of a Satellite in Orbit  105
        3.5.1  Creating an Attitude Error When Using Direction Cosines  107
        3.5.2  Creating an Attitude Error Using Quaternion Representations  109
        3.5.3  The Complete Spacecraft Model  109
   3.6  Further Reading  112
   Exercises  112

4  Digital Signal Processing in Simulink  115
   4.1  Difference Equations, Fibonacci Numbers, and z-Transforms  116
        4.1.1  The z-Transform  119
        4.1.2  Fibonacci (Again) Using z-Transforms  120
   4.2  Digital Sequences, Digital Filters, and Signal Processing  121
        4.2.1  Digital Filters, Using z-Transforms, and Discrete Transfer Functions  121
        4.2.2  Simulink Experiments: Filtering a Sinusoidal Signal and Aliasing  123
        4.2.3  The Simulink Digital Library  128
   4.3  Matrix Algebra and Discrete Systems  130
   4.4  The Bode Plot for Discrete Time Systems  135
   4.5  Digital Filter Design: Sampling Analog Signals, the Sampling Theorem, and Filters  136
        4.5.1  Sampling and Reconstructing Analog Signals  137
        4.5.2  Analog Prototypes of Digital Filters: The Butterworth Filter  142
   4.6  The Signal Processing Blockset  145
        4.6.1  Fundamentals of the Signal Processing Blockset: Analog Filters  146
        4.6.2  Creating Digital Filters from Analog Filters  148
        4.6.3  Digital Signal Processing  149
        4.6.4  Implementing Digital Filters: Structures and Limited Precision  153
        4.6.5  Batch Filtering Operations, Buffers, and Frames  160
   4.7  The Phase-Locked Loop  165
   4.8  Further Reading  170
   Exercises  171

5  Random Numbers, White Noise, and Stochastic Processes  173
   5.1  Modeling with Random Variables in Simulink: Monte Carlo Simulations  173
        5.1.1  Monte Carlo Analysis and the Central Limit Theorem  174
        5.1.2  Simulating a Rayleigh Distributed Random Variable  176
   5.2  Stochastic Processes and White Noise  177
        5.2.1  The Random Walk Process  178
        5.2.2  Brownian Motion and White Noise  180
   5.3  Simulating a System with White Noise Inputs Using the Wiener Process  184
        5.3.1  White Noise and a Spring-Mass-Damper System  184
        5.3.2  Noisy Continuous and Discrete Time Systems: The Covariance Matrix  186
        5.3.3  Discrete Time Equivalent of a Continuous Stochastic Process  189
        5.3.4  Modeling a Specified Power Spectral Density: 1/f Noise  194
   5.4  Further Reading  199
   Exercises  199

6  Modeling a Partial Differential Equation in Simulink  201
   6.1  The Heat Equation: Partial Differential Equations in Simulink  202
        6.1.1  Finite Dimensional Models  202
        6.1.2  An Electrical Analogy of the Heat Equation  203
   6.2  Converting the Finite Model into Equations for Simulation with Simulink  205
        6.2.1  Using Kirchhoff’s Law to Get the Equations  206
        6.2.2  The State-Space Model  208
   6.3  Partial Differential Equations for Vibration  212
   6.4  Further Reading  212
   Exercises  213

7  Stateflow: A Tool for Creating and Coding State Diagrams, Complex Logic, Event Driven Actions, and Finite State Machines  215
   7.1  Properties of Stateflow: Building a Simple Model  216
        7.1.1  Stateflow Semantics  218
        7.1.2  Making the Simple Stateflow Chart Do Something  221
        7.1.3  Following Stateflow’s Semantics Using the Debugger  223
   7.2  Using Stateflow: A Controller for Home Heating  225
        7.2.1  Creating a Model of the System and an Executable Specification  225
        7.2.2  Stateflow’s Action Language Types  229
        7.2.3  The Heating Controller Layout  230
        7.2.4  Adding the User Actions, the Digital Clock, and the Stateflow Chart to the Simulink Model of the Home Heating System  231
        7.2.5  Some Comments on Creating the GUI  239
   7.3  Further Reading  240
   Exercise  240

8  Physical Modeling: SimPowerSystems and SimMechanics  241
   8.1  SimPowerSystems  242
        8.1.1  How the SimPowerSystems Blockset Works: Modeling a Nonlinear Resistor  246
        8.1.2  Using the Nonlinear Resistor Block  248
   8.2  Modeling an Electric Train Moving on a Rail  251
   8.3  SimMechanics: A Tool for Modeling Mechanical Linkages and Mechanical Systems  256
        8.3.1  Modeling a Pendulum with SimMechanics  257
        8.3.2  Modeling the Clock: Simulink and SimMechanics Together  260
   8.4  More Complex Models in SimMechanics and SimPowerSystems  262
   8.5  Further Reading  265
   Exercises  267

9  Putting Everything Together: Using Simulink in a System Design Process  269
   9.1  Specifications Development and Capture  270
        9.1.1  Modeling and Analysis: Converting the Specifications into an “Executable Specification”  271
   9.2  Modeling the System to Incorporate the Specifications: Lunar Module Rotation Using Time Optimal Control  272
        9.2.1  From Specification to Control Algorithm  273
   9.3  Design of System Components to Meet Specifications: Modify the Design to Accommodate Computer Limitations  276
        9.3.1  Final Lunar Module Control System Executable Specification  279
        9.3.2  The Control System Logic: Using Stateflow  283
   9.4  Verification and Validation of the Design  285
   9.5  The Final Step: Creating Embedded Code  286
   9.6  Further Reading  287

10  Conclusion: Thoughts about Broad-Based Knowledge  289

Bibliography  291

Index  295
List of Figures
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
1.10
1.11
1.12
1.13
1.14
1.15
1.16
1.17
1.18
1.19
1.20
2.1
2.2
2.3
Simulink library browser. . . . . . . . . . . . . . . . . . . . . . . . . .
Leaning Tower of Pisa. . . . . . . . . . . . . . . . . . . . . . . . . . . .
Integrator block dialog. . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulink model for the Leaning Tower of Pisa experiment. . . . . . . . .
Scope block graph for the Leaning Tower simulation. . . . . . . . . . . .
Leaning Tower Simulink model with air drag added. . . . . . . . . . . .
Scope dialog showing how to change the number of axes in the Scope plot.
Leaning Tower simulation results when air drag is included. . . . . . . .
Adding the second object to the Leaning Tower simulation by vectorizing
the air drag. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulating results. The Leaning Tower Simulink model with a heavy
and light object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Right clicking anywhere in the diagram opens a pull-down menu that
allows changes to the colors of the blocks, bold vector line styles, and
other model annotations. . . . . . . . . . . . . . . . . . . . . . . . . . .
Components of a pendulum clock. . . . . . . . . . . . . . . . . . . . . .
Simulink model of the clock. . . . . . . . . . . . . . . . . . . . . . . . .
Blocks in the subsystem “Escapement Model.” . . . . . . . . . . . . . .
Results of the clock simulation for times around 416 sec. Upper figure
is the pendulum angle, and the lower figure is the acceleration applied to
the pendulum by the escapement. . . . . . . . . . . . . . . . . . . . . .
Foucault pendulum at the Panthéon in Paris. . . . . . . . . . . . . . . . .
Axes used to model the Foucault pendulum. . . . . . . . . . . . . . . . .
The Simulink model of the Foucault pendulum. . . . . . . . . . . . . . .
Simulation results from the Foucault pendulum Simulink model using
the default solver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulation results for the Foucault pendulum using a tighter tolerance
for the solver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2
5
8
9
10
13
13
14
15
16
17
19
21
22
24
25
27
29
29
30
Using Simulink to compare discrete and continuous time state-space versions of the pendulum. . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Outputs as seen in the Scope blocks when simulating the model in Figure 2.1. 40
Linear pendulum model using Transfer Function and Zero-Pole-Gain
blocks from the Simulink Continuous Library. . . . . . . . . . . . . . . 43
xi
xii
List of Figures
2.4
2.5
2.6
2.7
2.8
2.9
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
Home heating system Simulink model (Thermo_NCS). This model describes the house temperature using a single first order differential equation and assumes that a bimetal thermostat controls the temperature. . . .
Subsystem “House” contains the single differential equation that models
the house. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Results of simulating the home heating system model. The top figure
shows the outdoor and indoor temperatures, and the bottom figure shows
the cost of the heat over the 24-hour simulation. . . . . . . . . . . . . . .
A typical home-heating thermostat [50]. . . . . . . . . . . . . . . . . . .
Hysteresis curve for the thermostat in the Simulink model. . . . . . . . .
Model of a spring-mass-damper system using Simulink primitives. . . .
Model of a spring-mass-damper system using the state-space model and
Simulink’s automatic vectorization. . . . . . . . . . . . . . . . . . . . .
Changing the value of the damping ratio from 0.1 to 0.5 in the model of
Figure 2.10. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Generic components of a control system. . . . . . . . . . . . . . . . . .
A simple control that feeds back the position of the mass in a springmass-damper system. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Feed back the velocity of the mass instead of the position to damp the
oscillations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Time response of the mass (position at the top and velocity at the bottom)
for the velocity feedback controller. . . . . . . . . . . . . . . . . . . . .
Using the state-space model for the spring-mass-damper control system. .
Root locus for the spring-mass-damper system. Velocity gain varying
from 0 to 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Root locus for the position feedback control. Gain changes only the
frequency of oscillation. . . . . . . . . . . . . . . . . . . . . . . . . . .
One method of determining the velocity of the mass is to differentiate the
position using a linear system that approximates the derivative. . . . . .
Simulation result from the Simulink model. . . . . . . . . . . . . . . . .
Control System Toolbox interface to Simulink. GUIs allow you to select
inputs and outputs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
LTI Viewer results from the LTI Viewer. . . . . . . . . . . . . . . . . . .
Using proportional-plus derivative (PD) control for the spring-massdamper. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Response of the spring-mass-damper with the PD controller. There are
no oscillations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Proportional, integral, and derivative (PID) control in the spring-massdamper system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
PID control of the spring-mass system. Response to a unit step has zero
error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Getting speed from measurements of the mass position and force applied
to the mass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulink model to compare three methods for deriving the velocity of
the mass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Methods for deriving rate are selected using the “multiport switch.” . . .
45
46
47
47
48
52
52
53
55
56
57
58
59
60
61
64
64
67
68
70
71
72
73
76
76
77
List of Figures
3.1
3.2
3.3
3.4
3.5
3.6
3.7
3.8
3.9
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.17
4.1
4.2
4.3
4.4
4.5
4.6
4.7
4.8
4.9
4.10
4.11
4.12
4.13
Simulink model and results for the Lorenz attractor. . . . . . . . . . .
Exploring the Lorenz attractor using the linearization tool from the control
toolbox. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Eigenvalues of the linearized Lorenz attractor as a function of time. . .
Pseudotabulated data for the external temperature in the home heating
model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Home heating system Simulink model with tabulated outside temperature
added. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Dialog to add the tabulated input. . . . . . . . . . . . . . . . . . . . .
Real data must be in a MATLAB array for use in the “From Workspace”
block. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulink model of the home heating system using measured outdoor
temperatures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
From workspace dialog block that uses the measured outdoor temperature
array. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Using real data from measurements in the Simulink model. . . . . . . .
Single axis rotation in three dimensions. . . . . . . . . . . . . . . . . .
Using quaternions to compute the attitude from three simultaneous body
axis rotations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulink block to convert quaternions into a direction cosine. . . . . .
Combining the quaternion block with the body rotational acceleration. .
Electrical circuit of a motor with the mechanical equations. . . . . . . .
Controlling the rotation of a spacecraft using reaction wheels. . . . . .
Matrix concatenation blocks create the matrix Q used to achieve a desired
value for q. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulating the Fibonacci sequence in Simulink. . . . . . . . . . . . . .
Fibonacci sequence graph generated by the Simulink model. . . . . . .
Simulink model for computing the golden ratio from the Fibonacci
sequence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Generating the data needed to compute a digital filter transfer function.
Adding callbacks to a Simulink model uses Simulink’s model properties
dialog, an option under the Simulink window’s file menu. . . . . . . .
Sampling a sine wave at two different rates, illustrating the effect of
aliasing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Changing the numerator and denominator in the Digital Filter block
changes the filter icon. . . . . . . . . . . . . . . . . . . . . . . . . . .
Using the Digital Filter block to simulate the Fibonacci sequence. . . .
State-space models for discrete time simulations in Simulink. . . . . .
A numerical experiment in Simulink: Does fk+1 fk−1 − fk2 = ±1? . . .
13 iterations of the Fibonacci sequence show that fk+1 fk−1 − fk2 = ±1
(so far). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Bode plot for the discrete filter model developed using the Control
System Toolbox interface with Simulink. . . . . . . . . . . . . . . . .
Illustrating the steps in the proof of the sampling theorem. . . . . . . .
xiii
. 83
. 84
. 85
. 89
. 89
. 90
. 92
. 93
. 93
. 94
. 95
.
.
.
.
.
100
103
104
106
110
. 111
. 117
. 118
. 118
. 123
. 125
. 127
.
.
.
.
129
130
131
134
. 134
. 137
. 139
xiv
List of Figures
4.14
4.15
4.16
4.17
4.18
4.19
4.20
4.21
4.22
4.23
4.24
4.25
4.26
4.27
4.28
4.29
4.30
4.31
4.32
5.1
5.2
5.3
5.4
5.5
5.6
5.7
5.8
5.9
5.10
Simulation illustrating the sampling theorem. The input signal is 15
sinusoidal signals with frequencies less than 500 Hz. . . . . . . . . . .
Specification of a unity gain low pass filter requires four pieces of information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Butterworth filter for D/A conversion using the results of the M-file “butterworthncs.” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Using the Signal Processing Blockset to design an analog filter. . . . .
Moving average FIR filter simulation. . . . . . . . . . . . . . . . . . .
Pole-zero plot and filter properties for the moving average filter. . . . .
The Simulink model for the band-pass filter with no computational limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The band-pass filter design created by the Signal Processing Blockset
digital filter block. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Band-pass filter simulation results (with and without signal in the pass
band). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Fixed-point implementation of a band-pass filter using the Signal Processing Blockset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Running the Simulink model to determine the minima, maxima, and
scaling for all of the fixed-point calculations in the Band-pass filter. . .
Illustrating the use of buffers. Reconstruction of a sampled signal using
the FFT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Using the FFT and the sampling theorem to interpolate a faster sampled
version of a sampled analog signal. . . . . . . . . . . . . . . . . . . .
The analog signal (sampled at 1 kHz, top) and the reconstructed signal
(sampled at 8 kHz, bottom). . . . . . . . . . . . . . . . . . . . . . . .
Results of interpolating a sampled signal using FFTs and the sampling
theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Phase-locked loop (PLL). A nonlinear feedback control for tracking frequency and phase of a sinusoidal signal. . . . . . . . . . . . . . . . . .
Voltage controlled oscillator subsystem in the PLL model. . . . . . . .
Details of the integrator modulo 1 subsystem in Figure 4.30. . . . . . .
Phase-locked loop simulation results. . . . . . . . . . . . . . . . . . .
Monte Carlo simulation demonstrating the central limit theorem. . . . .
Result of the Monte Carlo simulation of the central limit theorem. . . .
A Simulink model that generates Rayleigh random variables. . . . . . .
Simulink model that generates nine samples of a random walk. . . . . .
Nine samples of a random walk process. . . . . . . . . . . . . . . . . .
Simulink model for the spring-mass-damper system. . . . . . . . . . .
Motions of 10 masses with white noise force excitations. . . . . . . . .
White noise block in the Simulink library (masked subsystem). . . . .
Continuous linear system covariance matrix calculation in Simulink. The
result is the covariance matrix that can be used for simulating with a
covariance equivalent discrete system. . . . . . . . . . . . . . . . . . .
Noise response continuous time simulation and equivalent discrete systems at three different sample times. . . . . . . . . . . . . . . . . . . .
. 140
. 142
.
.
.
.
145
147
150
152
. 154
. 155
. 156
. 159
. 161
. 162
. 163
. 165
. 166
.
.
.
.
167
168
168
169
.
.
.
.
.
.
.
.
175
175
177
179
179
185
185
186
. 191
. 192
List of Figures
5.11
5.12
5.13
6.1
6.2
6.3
6.4
6.5
7.1
7.2
7.3
7.4
7.5
7.6
7.7
7.8
7.9
7.10
7.11
7.12
7.13
7.14
8.1
8.2
8.3
8.4
8.5
8.6
xv
Simulation results from the four simulations with white noise inputs. . . 193
Simulation of the fractal noise process that has a spectrum proportional
to 1/f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Power spectral density estimators in the signal processing blockset used
to compute the sampled PSD of the 1/f noise process. . . . . . . . . . . 198
An electrical model of the thermodynamics of a house and its heating
system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulink model for the two-room house dynamics. . . . . . . . . . . .
Simulation results when the heat is off. . . . . . . . . . . . . . . . . .
Simulation results when the heat is on continuously. . . . . . . . . . .
Using PID control to maintain constant room temperatures (70 deg F). .
First Stateflow diagram (in the chart) uses a manual switch to create the
event. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A Stateflow diagram that contains most of the stateflow semantics. . . .
The Simulink model for the simple timer example. . . . . . . . . . . .
Modified Stateflow chart provides the timer outputs “Start,” “On,” and
“Trip.” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Outputs resulting from running the simple timer example. . . . . . . .
PID controller for the home-heating example developed in Chapter 6. .
Response of the home heating system with the external temperature from
Chapter 3 as input. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A tentative design for the new home heating controller we want to
develop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The complete simulation of the home heating controller. . . . . . . . .
Graphical user interface (GUI) that implements the prototype design for
the controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Interactions between the GUI and Stateflow use changes in the gain values
in three gain blocks in the subsystem called “User_Selections.” . . . .
The first of the two parallel states in the state chart for the heating
controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The second of the two parallel states in the home heating controller state
chart. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The final version of the PID control for the home heating controller. . .
.
.
.
.
.
204
210
211
211
211
. 218
. 219
. 222
. 223
. 224
. 227
. 228
. 228
. 231
. 232
. 233
. 235
. 236
. 239
A simple electrical circuit model using Simulink and SimPowerSystems.
The electrical circuit time response. Current in the resistor and voltage
across the capacitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The SimPowerSystems library in the Simulink browser and the elements
sublibrary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A Simulink and SimPowerSystems subsystem model for a nonlinear or
time varying resistor. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Masking the subsystem with an icon that portrays what the variable resistor does makes it easy to identify in an electrical circuit. . . . . . . . .
Circuit diagram of a simulation with the nonlinear resistor block. . . . .
242
243
244
247
248
249
xvi
List of Figures
8.7 Using the signal builder block in Simulink to define the values for the time varying resistance . . . 250
8.8 Using SimPowerSystems: Results from the simple RL network simulation . . . 251
8.9 Simulink and SimPowerSystems model of a train and its dynamics . . . 252
8.10 Complete model of the train moving along a track . . . 254
8.11 The Simulink blocks that compute the rail resistance as a function of the train location . . . 255
8.12 Simulation results. Rail resistances, currents in the rails, and current to the train . . . 255
8.13 Modeling a pendulum using SimMechanics . . . 258
8.14 Coordinate system for the pendulum . . . 258
8.15 Simulating the pendulum motion using SimMechanics and its viewer . . . 260
8.16 Adding Simulink blocks to a SimMechanics model to simulate the clock as in Chapter 1 . . . 261
8.17 Time history of the pendulum motion from the clock simulation . . . 261
8.18 SimMechanics model Sim_Mechanics_Vibration . . . 263
8.19 String model (the subsystem in the model above). It consists of 20 identical "spring mass" revolute and prismatic joint elements . . . 263
8.20 String simulation. Plucked 15 cm from the left, at the center, and 15 cm from the right . . . 263
8.21 Using SimPowerSystems to build a model for a four-room house . . . 264
9.1 The start of the specification capture process. Gather existing designs and simulations and create empty subsystems for pieces that need to be developed . . . 271
9.2 The physics of the phase plane logic and the actions that need to be developed . . . 276
9.3 Complete simulation of the lunar module digital autopilot . . . 279
9.4 Simulating a counter that ticks at 625 microsec for the lunar module Simulink model . . . 281
9.5 The blocks in the jet-on-time counter subsystem in Figure 9.4 . . . 281
9.6 Lunar module graph of the switch curves and the phase plane motion for the yaw axis. (The graph uses a MATLAB function block in the simulation.) . . . 282
9.7 Stateflow logic that determines the location of the state in the phase plane . . . 284
9.8 Simulink blocks that check different model attributes for model verification . . . 286
List of Tables

3.1 Computation time for the Lorenz model with different solvers and tolerances . . . 87
4.1 Using low pass filters. Simulating various filters with various sample times . . . 141
4.2 Comparison of the error in converting a digital signal to an analog signal using different low pass filters . . . 148
4.3 Specification for the band-pass filter and the actual values achieved in the design . . . 154
5.1 Simulink computation times for the four white noise simulations . . . 193
Preface
Simulation has gradually become the preferred method for designing complex systems. The
aerospace industry provided the impetus for this design approach because most aerospace
systems are at best difficult or at worst impossible to test. I started my career working on the
lunar module flight control system. Testing this system in space was extremely expensive
and very difficult. In the entire development cycle, there was, in fact, only one unmanned test
in earth orbit. All of the designs, by necessity, required simulations. Over time, this approach
has become pervasive in many industries, particularly the automotive industry. Even in
product developments as mundane (seemingly) as disk drives for computers, simulation has
become the dominant method of design.
A digital simulation of the environment that a system “lives” in is crucial in this
modern simulation based design process. The accurate representation of this environment
allows the designer to see how well the system being designed performs. It allows the
designer to verify that the design meets all of the performance specifications. Moreover,
as the simulation develops, the designer always can see how the performance requirements
pass through to the various subsystems in the design. This refinement of the specification
allows the model to, in essence, become the specification. The simulation environment can
also allow the designer of embedded digital signal processing and control algorithms to test
the code directly as it is developed to ensure that it satisfies the specification. The last,
and newest, step in this process is the ability to use the simulation model to automatically
generate the code for the computers in the system. This can eliminate most of the
“hand coding” that is required by the current practice. (In most companies this process
consists of handing a stack of papers with the written specification to a team of computer
programmers for coding.) This approach eliminates two potential sources of error: the
errors introduced by the conversion of the design into a written specification and the errors
introduced by the manual coding.
Simulink® is a remarkable product that fills the niche for a robust, accurate, and easily used
simulation tool. It provides a visual way of constructing a simulation of a complex system
that evolves in time (the designer can optimize the system by optimizing the mathematical
model’s response). This “model-based design” approach provides accurate simulation as
various numerical constants in the model are changed. It allows the user to tune these
constants so that the simulation accurately portrays the real world. It then allows sharing
of the model among many different users so they can begin to design various components
or parts of the system. At the same time, it provides a means for understanding design
problems whose solutions will make the system operate in a desired way. Finally, the
resulting design’s embedded software is rapidly convertible into embedded code that is
directly usable for testing in a stand-alone computer. All of this can be accomplished using
one easily understood graphical environment. A major thesis of this book is that Simulink
can form the basis for an integrated design environment. As your use of the tool evolves,
the way this can happen should become more apparent.
Because simulation is such an important adjunct to numerical analysis, the book can
be a supplement for a course on numerical computing. It fits very well with an earlier
SIAM book, Numerical Computing with MATLAB by Cleve Moler [29] (which we refer to
so frequently that its title is abbreviated NCM in the text), and it has been written with the
same eclectic mix of practice and mathematics that makes Cleve’s book so much fun to read.
I hope that this work favorably compares with—and comes close to matching the quality
of—Cleve’s work. As is the case with Numerical Computing with MATLAB, this book is
based on a collection of 88 model files that are in the library Numerical Computing with
Simulink (which we abbreviate as NCS), which can be downloaded from the Mathematical
Analysis Co. web page (www.math-analysis.com). This library of Simulink models and
MATLAB® M-files forms an essential part of the book and, through extensions that the
reader is encouraged to peruse, an essential part of the exercises.
The book will take you on a tour of the Simulink environment, showing you how to
develop a system model and then how to execute the design steps to make the model into
a functioning design laboratory. Along the way, you will be introduced to the mathematics
of systems, including difference equations and z-transforms, ordinary differential equations
(both linear and nonlinear), Laplace transforms, numerical methods for solving differential
equations, and methods for simulating complex systems from several different disciplines.
The mathematics of simulation is not complete without a discussion of random variables and random processes for doing Monte Carlo simulations. Toward this end, we
introduce and develop the techniques for modeling random processes with predetermined
statistical properties. The mathematics for this type of simulation begins with “white noise,”
which is a very difficult entity to pin down. We introduce the concept of Brownian motion
and show the connection between this process and the fictitious, but useful, white noise
process. The simulation of the Brownian process in Simulink is developed; it is then used
to form more (statistically) complex processes. We review and show how to simulate and
analyze random processes, and we formulate and use the power spectral density of a process.
We introduce other tools, in addition to Simulink, from The MathWorks. Each of
the tools is an expansion into a different “domain of knowledge.” The first tool is the
Signal Processing Blockset that extends Simulink into the domain of signal processing
(both analog and digital). The second tool, Stateflow® , expands Simulink to include state
charts and signal flow for modeling event driven systems (i.e., systems where the actions
start at times that are asynchronous with the computer’s clock). This tool naturally extends
Simulink into the traditional realm of finite state machines.
The third tool, SimPowerSystems, extends Simulink into the realm of physical modeling and in particular into the realm of electrical circuits including power systems, motor
drives, power generation equipment including power electronics, and three-phase power
transmission lines.
The last tool, SimMechanics, is also a physical modeling tool; it develops models of
mechanical systems such as robots, linkages, etc. These tools can all work together in the
common Simulink environment, but they all have their own method for displaying a picture
of the underlying system.
Both MATLAB and Simulink are available in student versions. All of the examples
in this book use the tools in the student version supplemented with the Control System
Toolbox, the Signal Processing blockset, SimMechanics, and SimPowerSystems, which
can be purchased separately. If you are not a student but wish to learn how to use Simulink
using this book, the MathWorks can provide a demonstration copy of MATLAB, Simulink,
and the add-ons; contact them at http://www.mathworks.com.
This book not only introduces Simulink and discusses many useful tricks to help
develop models but it also shows how it is possible for Simulink to be the basis of a
complete design process. I hope the material is not too theoretical and that you find the
models we will work with are fun, instructive, and useful in the future.
As is always the case, innumerable people have contributed to the genesis and development of this book. I thank Cleve Moler for sharing an early manuscript of his wonderful
book and The MathWorks for many years of interesting and stimulating work. In particular, I wish to thank the MathWorks staff with whom I have had many long and interesting
discussions: Rob Aberg, Paul Barnard, Jason Ghidella, Ned Gully, Loren Shure, and Mark
Ullman, and The MathWorks’s CEO, Jack Little. These interesting and productive discussions helped me create the various examples and models described in this book.
Words are never enough to thank one's family. My wife and soulmate, Jean Frova
Gran, provides support, joy, affection, and love. My daughter Kathleen keeps my thoughts
focused on the future. Her hard work and dedication to the achievement of her goals, despite
many small-business frustrations, are laudable.
My extended family includes two stepgrandchildren, Ian and Ila, who provide me
with insight into the joy brought by the process of learning about this unbelievable world
we live in. My stepdaughters Elizabeth and Juliet have a sense of humor and a joie de vivre
that are inspirational.
Finally, I need to say “thank you” to my sister Cora and her wonderful husband,
Arthur, who have always been loving and supportive in more ways than can be stated; Cora
is, as the self-proclaimed “matriarch” of the Grans, much more to me than a mere sister.
Dr. Richard J. Gran
Mathematical Analysis Co.
www.math-analysis.com
Norfolk, Massachusetts, August 2006
Chapter 1
Introduction to Simulink
The best way to follow the details of this book is to have a copy of Simulink and work
the examples as you read the text. The examples described in the book are all in the
Numerical Computing with Simulink (NCS) library that is available as a download from
The MathWorks (www.mathworks.com). The files are located in an ftp directory on the site.
As you work your way through this text, the Simulink and MATLAB files you will
run are in boldface Courier type. When you need to run a model or M-file, the text
shows the exact MATLAB command you need to type.
After you download the NCS library, place it on the MATLAB path using the “Set
Path” command under the MATLAB file menu. Alternatively, you can save the NCS files
and use the MATLAB directory browser (at the top of the MATLAB window) or the cd
command to change to the directory that contains the NCS library.
1.1 Using a Picture to Write a Program
In order to understand the Simulink models that we use to illustrate the mathematical principles, this chapter provides the reader with a brief introduction to Simulink and its modeling
syntax. In my experience, the more familiarity a user has with the modern click-and-drag
technique for interfacing with a computer, the easier it is to build (and understand) a Simulink
model.
In essence, a Simulink model provides a visual depiction
of the signal flow in a system. Conceptually, signal flow diagrams indicate that a “block” in the diagram (as represented
by the figure at left) causes an operation on the input, and the
result of that operation is the output (which is then possibly the
input to the next block). The operation can be as simple as a “gain block,” where the output
is simply the product of the input and a constant (called the gain).
Signal flow in a diagram can split, so that several different blocks receive the same
input. Signals can then be coalesced into a single signal through addition (subtraction) and
multiplication (division). The diagram illustrates this; the "output" y is y = K1 u − K2 u
(or y = (K1 − K2)u).

Figure 1.1. Simulink library browser.

The power of this type of diagram is that K1 or K2 may represent an
operator which might be a matrix, a difference equation, a differential equation, integration,
or any of a large number of nonlinear operations.
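The diagram's arithmetic can be mirrored in a few lines of code. This Python sketch (the gain values and the names `gain` and `diagram` are mine, purely for illustration) shows the signal splitting into two gain blocks and recombining by subtraction:

```python
K1, K2 = 3.0, 0.5   # illustrative gain values

def gain(K):
    # a "gain block": the output is the input scaled by a constant
    return lambda u: K * u

def diagram(u):
    # the signal u splits into two gain blocks; their outputs are subtracted
    return gain(K1)(u) - gain(K2)(u)

# equivalent to a single gain block with gain (K1 - K2)
result = diagram(2.0)   # 5.0
```

The point of the diagram, of course, is that each block could just as well be an operator far richer than a scalar multiply.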
The signals in a block diagram (y, u, and any intermediate terms not explicitly named)
are usually functions of time. However, any independent variable may be used as the
underlying simulation variable. (For example, it may be the spatial dimension x, or it might
simply be interpreted as an integer representing the subscript for a sequence of values.)
The Simulink environment consists of a library of elementary building blocks that are
available to build a model and a window where the user builds the model. Figure 1.1 shows
the Simulink library browser. Open this browser by typing
simulink
at the MATLAB command line (or alternatively by clicking on the Simulink icon
that
appears below the menu at the top of the MATLAB window). In either case, the result is
that you will open the Simulink browser and a window for building the model.
The browser consists of a catalog of 16 types of blocks. The catalog is a grouping of the
blocks into categories that make it easier for the user to find the appropriate mathematical
element, function, operation, or grouping of elements. A quick look at the list shows
that the first element in the catalog is the “Commonly Used Blocks.” This, as the name
implies, is redundant in that it contains blocks from other elements in the catalog. The
second set of elements in the library is denoted “Continuous.” This element contains the
operator blocks needed to build continuous time ordinary differential equations and perform
numerical integration. Similarly, the library element “Discrete” contains the operators
needed to develop a discrete time (sampled data) system. All math operations are in the
“Math Operations” catalog element.
To understand how to build a Simulink model, let us build a simple system.
In many applications, simple trigonometric transformations are required to resolve
forces into different coordinates. For example, if an object is moving in two dimensions
with a vector velocity v at an angle θ relative to the x-axis of a Cartesian coordinate system
(see figure), then the velocity along the x-axis is v cos(θ ) and along the y-axis is v sin(θ ).
A Simulink model that will compute this is in the figure
at the right. If you do not want to build this model from
scratch, then type
Simplemodel
at the MATLAB command line. However, if this is the first
time you will be using Simulink, you should go through the
exercise of building it. Remember that the model building
uses a click-and-drag approach. As we discussed above, the
trigonometric functions in the blocks are operators in the sense
that they operate on the input signal v to create the output.
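The computation this model encodes is tiny; as a point of reference, here it is in Python (a sketch only, with names of my choosing, not part of the NCS library):

```python
import math

def resolve(v, theta):
    # velocity components along the x- and y-axes,
    # the same quantities the two Trigonometric Function blocks compute
    return v * math.cos(theta), v * math.sin(theta)

vx, vy = resolve(10.0, math.pi / 6)   # a 10 ft/s velocity at 30 degrees
```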
To build the model, open Simulink and use the empty model window that opens or, if
the empty window is not open, click on the blank paper icon at the upper left of the Simulink
library browser (the circled icon in the Simulink library browser, Figure 1.1). The empty
model window that is open will appear with the name “Untitled.” All of the mathematical
functions needed for building this model are in the Math Operations section of the browser.
This section of the browser is opened by clicking on the words “Math Operations” in the
browser or by clicking on the + symbol at the “Math Operations” icon in the right side of
the browser window. (The browser is set up to work the same as the browser in Windows.)
Math Operations contains an icon for all of the math operations available for use in Simulink
models (in alphabetic order). We will need the Trigonometric Function block. To insert
this block in the model, click on it and drag it into the “untitled” model window. We will
need two of these blocks, so you can drag it twice. Alternatively, after the first block is in
the model window, you can right click on the block in the diagram (which will create a copy
of the block) and then (still holding down the right click button on the mouse) drag the copy
to another location in the diagram.
We will assume that the velocity v is a MATLAB variable that is in the MATLAB
workspace. To bring this variable into the Simulink model, a block from the Sources library
is required. To open this library, locate the Sources Library in the Simulink Browser.
This will open a large number of Simulink blocks that provide sources (or inputs) for the
model. The “Constant” block allows you to specify a value in MATLAB (created using the
MATLAB command window), so click on it and drag it into the untitled model window.
To make this input the velocity v, double click on the Constant block to open the “Block
Parameters” window. In general, almost all Simulink blocks have some form of dialog that
sits (virtually) behind the block; open these dialogs by double clicking the block. As we
go through the steps in building a model, we will investigate some of the dialogs. It is a
good idea to spend some time familiarizing yourself with the contents of these dialogs by
browsing through the library one block at a time and opening each of their dialogs to see
what is possible. For now, in the “Block Parameters” window change the Constant value to
v, and click OK to close the “Block Parameters” window.
We are now ready to create the signal flow in the diagram. Look carefully at the icons
in the model. You will see arrowheads at either the input or output (or both) for each of
the blocks. You can connect blocks manually or automatically. To connect them manually,
click the mouse on the arrowhead at the output of the constant block, drag the line to the
input of the Trigonometric Function block (either one), and release the mouse button when
the icon changes from a plus sign to a double plus sign. This should leave a line connecting
the input to the trig block. To connect the second trig block, start at the input arrow of
this block and drag the line to the point on the previous connection that is closest to the
block. This will take a little getting used to, so try it a couple of times. The last part of
the modeling task is to change the sin function (the default for the trigonometric function
from the library) to a cos. This block has a dialog behind it that allows changes, so double
click on the block that will become the cosine and use the pull down menu in the dialog
called “Function” to select cos. Two additional boxes in the dialog allow you to specify the
output type (auto, real, or complex) and the sample time for the signal. We can ignore these
advanced functions for now. Try to move the blocks around so they look like the blocks in
the Simplemodel figure above.
As you were connecting the first two blocks, a message from Simulink should have
appeared, telling you that there is an automatic way of connecting the blocks. Do this by
clicking (and thereby highlighting) the block that is the input and, while holding down the
control (Ctrl) key, clicking on the block that this input will go to. Simulink will then create
(and neatly route) a line to connect the two blocks.
This simple model introduces you to click-and-drag, icon-based mathematical modeling.
It is, of course, far too simple to justify the effort of building it; the same calculation
would be far easier to do directly in MATLAB.
Simulink provides a visual representation of the calculations involved in a process. It
is easier to follow complex calculations visually, particularly if they involve flow in time.
Toward this end, it is very possible to build a diagram that obscures the flow rather than
enhances it. We call such a diagram a “spaghetti Simulink model” (like spaghetti code).
To avoid this, it is important to make the connections clear and the model visually easy to
navigate. Simulink allows you to do this by dragging the icons around the diagram to make
it look better. It is a good idea to get used to this right from the start. Therefore, spend a
few minutes manipulating the diagram with the mouse. In addition, blocks move (nudged
up or down and back and forth) with the arrow keys. Simply highlight the block and press
the arrow key to move it one grid point.
To flip a block, highlight it and use Ctrl + i, and to rotate a block, highlight it and use
Ctrl + r. Once again, try these steps. If you want to save the example you created, click the
floppy disk icon at the top of the model window and save the model in any directory you
want. (Be careful not to overwrite the model “Simplemodel” in the NCS directory.)
1.2 Example 1: Galileo Drops Two Objects from the Leaning Tower of Pisa
To illustrate a Simulink model that solves a differential equation, let us build a model that
simulates the “experiment” that is attributed to Galileo. In the experiment, he went to the
upper level of the Leaning Tower of Pisa (Figure 1.2¹) and dropped a heavy object and a light
Figure 1.2. Leaning Tower of Pisa.
1 This photograph is from my personal collection. It was taken during a visit to Pisa in May of 2006. If Galileo
performed the experiment, he most likely would have used the upper parapet at the left of the picture. (Because
of the offset at the top, it would have been difficult to use the very top of the tower.)
object. The experiment demonstrated that the objects both hit the ground at the same time.
Is it true that this would happen? Let us see (this model is in the NCS library, where it is
called Leaningtower—you can open it from the library, but if this is your first experience
with Simulink, use the following instructions to create the model instead).
Thanks to Newton, we know how to model the acceleration due to gravity. Denote
the acceleration at the surface of the earth by g (32.2 ft per sec per sec). Since the force on
the object is mg and Newton's second law says that

F = ma,

we have

m d²x/dt² = −mg.

From this, it follows immediately that the mass of the object does not enter into the calculation of the position (and velocity), and the underlying equation of motion is

d²x/dt² = −g.
We assume that Galileo dropped the balls from the initial location of 150 ft, so the initial
velocity is dx(0)/dt = 0 and the initial position is x(0) = 150.
Even though this formulation of the equation of motion explicitly shows that Galileo’s
experiment has to work, we continue because we are interested in seeing if the experimental
results would in fact have shown that the objects would touch the ground at the same
time. Initially we model just the motion as it would be in a vacuum. A little later in this
section, we will add air drag to the model. The model of any physical device follows
this evolutionary path. Understanding the physical environment allows the developer to
understand the ramifications of different assumptions and then add refinements (such as air
drag) to the model.
The equation above is probably the first differential equation anyone solves. It has a
solution (in the foot-pound-second [fps] system) given by

dx(t)/dt = −32.2t,
x(t) = −16.1t² + 150.

Since the experiment stops when the objects hit the ground, the final time for this experiment
is when x(t) = 0. From the analytic solution, we find that this time is

t = √(150/16.1).
Go into MATLAB and compute this value. MATLAB’s full precision for the square root
gives
>> format long
>> sqrt(150/16.1)
ans =
3.05233847833680
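The same sanity check can be done outside MATLAB. A short Python version (illustrative only) confirms that the closed-form position is zero at this time:

```python
import math

t_hit = math.sqrt(150 / 16.1)   # analytic ground-hit time

def x(t):
    # closed-form position from the solution above
    return -16.1 * t**2 + 150

# x(t_hit) is zero to within floating-point roundoff, and t_hit agrees
# with MATLAB's format long value 3.05233847833680
```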
We will return to this calculation after we use Simulink to model and solve this
equation because it will highlight a feature of Simulink that is numerically very important.
Now, to build the Simulink model of the motion we need to create the left-hand side
of the differential equation and then integrate the result twice (once to get the velocity and
then again to get the position).
Five types of Simulink blocks are required to complete this model:
• Three input blocks are used to input the constant g into the equation and to provide
the initial position and to provide a value to test the position against to determine
whether the object has reached the ground.
• Two integrator blocks are needed to compute the velocity and position from the
acceleration.
• One block will perform the logic test to see when the object hits the ground.
• One block will receive the signal from the logic block to stop the simulation.
• One block will create a graph of the position and velocity as a function of time.
These blocks are in sections of the Simulink library browser as follows:
• The inputs are in the Sources library.
• The integrator is in the Continuous library.
• The block to test to see if the object has hit the ground is a Relational Operation, and
it is in the Logic and Bit Operations library.
• The block to stop the simulation is in the Sinks library.
• The graph is a Scope block and is in the Sinks library.
So let us start building the model. As we did above, open up the Simulink browser
and a new untitled model window. Accumulate all of the blocks in the window before we
start to make connections. Thus, open the Sources, Continuous, Logic and Bit Operations,
and Sinks libraries one by one, and grab the icons for a Constant, an Integrator, a Relational
Operator, and the Stop and drag them into the untitled model window.
The integrator block in the Continuous library uses the Laplace operator 1/s to denote
integration. This is a bit of an abuse of notation, since strictly speaking this operator
multiplies the input only if it is the Laplace transform of the time signal. However, Simulink
had its origins with control system designers, and consequently the integration symbol
remains in the form that would appear in a control system block diagram.
In fact, the integrator symbol invokes C-coded numerical integration algorithms that
are the same as those in the MATLAB ODE suite to perform the integration. The C-coded
solvers are compiled into linked libraries that are invoked automatically by Simulink when
the simulation is executed. (Simulink handles all of the inputs, outputs, initial conditions,
etc. that are needed.) We will talk a little more about the ordinary differential equation
solvers later, but if the reader is interested in more details about the numerical integration
solvers, consult Chapter 7 of NCM [29].
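To get a feel for what numerical integration does at its simplest, here is a forward Euler sketch in Python. This is only the crudest caricature; the solvers Simulink actually invokes are adaptive, higher-order methods:

```python
import math

def euler(f, y0, t0, t1, n):
    # forward Euler: repeatedly advance the state by h times the derivative
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# integrating y' = cos(t) from 0 to pi/2 should approach sin(pi/2) = 1
approx = euler(lambda t, y: math.cos(t), 0.0, 0.0, math.pi / 2, 100_000)
```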
Figure 1.3. Integrator block dialog.
Make copies of the integrator and the constant block (since we need two integrators
and three constants); as before, either copy it by right clicking or drag a second block into
the model from the browser.
Make the connections in the model. Next, change the Constant block values by double
clicking to open its dialog box. Change the first of these to make the value –g, change the
second to make the value 0 (which will be used to test when the position of the object is at
the “ground”), and then change the third so its value is 150 (the initial position of the object
at the top of the Leaning Tower of Pisa).
Modify the integrator block to bring the initial conditions into the model. When
you double click the integrator block, the “Block Parameters” menu, shown in Figure 1.3,
appears.
In this menu, you can change the way the numerical integration handles the integral.
The options are as follows:
• Force an integration reset (to the initial condition) when an external reset event occurs.
This reset can be increasing as a function of time (i.e., whenever the reset signal is
increasing, the integrator is reset), it can be decreasing, or it can be either (i.e., the
integration will take place only if the reset signal is unchanging), and finally, the reset
can be forced when the reset signal crosses a preset threshold.
• The initial conditions can be internal or external. If it is internal, there is a box called
“Initial condition” (as shown above) for the user to specify the value of the initial
1.2. Example 1: Galileo Drops Two Objects from the Leaning Tower of Pisa
9
Figure 1.4. Simulink model for the Leaning Tower of Pisa experiment.
condition. When it is set to external, an additional input to the integrator appears on
the integrator icon. We will use this external initial condition to input the height of
the leaning tower.
• The output of the integrator can be limited (i.e., it can be forced to stay constant if
the integration exceeds some minimum or maximum value).
Other options are available that will be discussed as we encounter the need for them.
Set one of the integrators to have an external initial condition, and leave the other at
its default value. (The default is internal with an initial condition of 0.) Then connect the
blocks, as we did before, so the model looks like Figure 1.4.
The default time for the simulation is 10 sec, which is ample time for the object to hit
the ground. Before we start the simulation, we need to specify a value for g (we used the
value −g in the dialog for the Constant, and it must be created). To do this, in MATLAB
type the command
g = 32.2;
Now start the simulation by clicking the right pointing “play” arrow (highlighted with
a circle in Figure 1.4). The simulation will run. Double click on the Scope icon and then
click the binoculars (at the top of the Scope plot window) to scale the plot properly. The
result should look like Figure 1.5.
Figure 1.5. Scope block graph for the Leaning Tower simulation.
Notice that the simulation stopped at a time just past 3 sec. We can see how accurately Simulink determined that time by zooming in on the plot. If you put the mouse pointer over the zero crossing point and click, the plot scale changes by about a factor of three. After several clicks you will see that the crossing time is about 3.0523. The plot will not resolve any finer than that, so we cannot determine the zero crossing time more precisely this way.
The default for Simulink is to use the variable step differential equation solver ode45.
This variable step solver forces the integration to take as long a step as it can consistent
with an accuracy of one part in 1000 (the default relative tolerance). Simulink also sends
(by default) the time points at which the integration occurred back to MATLAB. The time
values are in a vector in MATLAB called tout. To see what the zero crossing time was,
get the last entry in tout by typing
tout(end)
To get the full precision of the calculation, type
format long
The answer should be 3.05233847833684. Remember that the stop time computed above with MATLAB (using the solution of the equation) was 3.05233847833680, a difference of 4 in the last decimal place (i.e., a difference of 4 × 10⁻¹⁴, which is a little more than the computation tolerance variable in MATLAB called eps). How can the Simulink numerical integration compute this so accurately?
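Before looking at the answer, note that the analytic value itself is easy to reproduce outside Simulink. The sketch below assumes a tower height of 150 ft (a value inferred here from the position plot, not stated in the dialogs, so treat it as an assumption):

```python
import math

g = 32.2    # ft/sec^2, the value placed in the MATLAB workspace
h = 150.0   # ft; tower height inferred from the position plot (an assumption)

# With no drag, x(t) = h - g*t^2/2, so the object reaches the ground at
# t = sqrt(2*h/g).
t_impact = math.sqrt(2.0 * h / g)
print(t_impact)   # about 3.0523 sec
```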
This remarkable accuracy in the numeric calculation of the stopping time is because
of a unique feature of Simulink. The variable step integration algorithms that are used to
solve the differential equations in a Simulink model will compute the solution at time steps
that are as big as possible consistent with the relative tolerance specified for the algorithm.
If, in the process of computing the solution, a discontinuity occurs between any two times,
the solver forces an interpolation to determine the time at which the discontinuity occurred.
Only certain Simulink blocks cause the solver to interpolate. These are as follows:
• Abs—the absolute value block.
• Backlash—a nonlinear block that simulates a mechanical system’s backlash.
• Dead zone—a nonlinear block whose output does not change until the absolute value of the input exceeds a fixed value (the variable called deadzone).
• Hit crossing—a block that looks at any signal in the Simulink diagram to see if it
matches a particular value (the hit).
• Integrator—one of the options in the integrator is to reset the integration. This reset value triggers the zero crossing algorithm. The integrator also triggers the zero
crossing algorithm if the integration is limited.
• MinMax—this block computes the minimum or maximum of a signal. When an
input to this block exceeds the maximum or is below the minimum, the interpolation
algorithm calculates the new maximum or minimum accurately.
• Relational operator—in this problem, this block detects when the object crosses the ground (a zero crossing).
• Saturation—a signal entering this block is limited to be between an upper and lower
bound (the saturation values). The solver interpolates the time at which the signal
crosses the saturation values.
• Sign—this block is +1 or −1, depending on whether the signal is positive or negative.
The solver interpolates the time at which the signal changes sign.
• Step—this is an input function that jumps from one value (usually 0) to another value
(usually 1) at a specified time.
• Subsystem—a subsystem is a block that contains other blocks. Subsystems make diagrams easier to read, and they allow parts of a simulation to turn on and off. Blocks inside a subsystem do not appear at the top level of the model, but you can view them by double clicking the subsystem block. A subsystem is turned on (enabled) by a signal in the Simulink model that connects to an icon on the subsystem block; if you place an Enable block inside a subsystem, this icon automatically appears on the subsystem block. Whenever the subsystem is enabled or disabled, the solver forces a time step.
• Switch—there is a manual switch that the user can “throw” at any time. The solver
computes the time the switch state changes.
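The idea behind this interpolation can be illustrated with a toy bisection search: integrate until the monitored signal changes sign between two successive steps, then bisect inside that step. This is only a sketch of the concept; Simulink's actual zero-crossing algorithm refines the time using the solver's own interpolant.

```python
def locate_zero_crossing(f, t0, t1, tol=1e-12):
    """Bisect on [t0, t1], where f(t0) and f(t1) are assumed to have
    opposite signs, until the crossing time is known to within tol."""
    a, b, fa = t0, t1, f(t0)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m            # crossing lies in [a, m]
        else:
            a, fa = m, f(m)  # crossing lies in [m, b]
    return 0.5 * (a + b)

# The falling object's height h - g*t^2/2 changes sign between t = 3 and t = 3.1.
g, h = 32.2, 150.0
t_hit = locate_zero_crossing(lambda t: h - 0.5 * g * t * t, 3.0, 3.1)
```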
Now that we have simulated the basic motion of Galileo’s experiment, let us add the
effects of air resistance to the model and see if, in fact, the objects hit the ground at the same
time.
Air drag creates a force that is proportional to the square of the speed of the object. To
first order, an object with cross sectional area A moving at a velocity v in air with density
ρ will experience a drag force given by
$$ f_{\mathrm{Drag}} = \frac{1}{2}\,\rho v^2 A. $$
(In the model so far we have assumed that the force from gravity is downward and is
therefore negative, but the air drag force is up so it is positive.) In the fps unit system we
have been using, the value of ρ is 0.0765 lbf/ft³. Let us assume that the objects are spherical, with diameters of 0.1 and 0.05 ft. (If they are made of the same material, the large object will then be 8 times heavier than the small object.)
The acceleration of the objects is now due to the force from the air drag combined
with the force from gravity, so the differential equation we need to model is now
$$ \frac{d^2x}{dt^2} = \frac{1}{2}\,\rho A \left(\frac{dx}{dt}\right)^2 - g. $$
To get the air drag force for each object we need to know their cross-sectional areas. Since the cross-sectional area is πr², the acceleration from the air drag for the large object is ½ρπ(0.1)² = 0.005πρ and the acceleration for the small object is ½ρπ(0.05)² = 0.00125πρ (they differ by a factor of 4).
To create the new model, we need to add the air drag terms. Thus, we need a way to add the force from the air resistance to the force from gravity. This uses a Sum block from the Math library. This block is a small circle (or a rectangle; my personal preference is the circle, since it makes it easy for the eye to distinguish summations from the other icons, which are mostly rectangular) with two inputs, each of which has a small plus sign on it. Click and drag this block into the leaning tower model. To get the square of the velocity, we need to take the output of the first integration (the integration that creates the velocity from the acceleration) and multiply it by itself. The block that does this is the Product block in the Math library. Drag a copy of this block into the model also.
We will start by creating the model for the heavy object alone. Connect the output of
the integrator to the Product block as we did before (using either of the two inputs, it does
not matter which one), and then grab the second input to the Product block and drag it until
it touches the line that you just connected (the line from the integrator). This will cause
the two inputs to the Product block to be the same thing (the velocity), so the output is the
square of the velocity. Notice how easy it is to create a nonlinear differential equation in
Simulink. The next step is to multiply the velocity squared by 0.005πρ or 0.003825. Use
the Gain block from the Math library to do this. Click and drag a copy of this block into the model, then double click the Gain block and change the value in its dialog to 0.003825.
Figure 1.6 shows the resulting Simulink model, which is in the NCS library. Open it
with the MATLAB command:
Leaningtower2
Figure 1.6. Leaning Tower Simulink model with air drag added.
Figure 1.7. Scope dialog showing how to change the number of axes in the Scope plot.
Note that we changed the Scope block to view both the position and the velocity of
the objects. This was done by double clicking the Scope icon and then selecting the “Scope
Parameters” dialog by double clicking the second menu item at the top of the Scope window.
(This is the icon to the right of the printer icon; it is the icon for a tabbed dialog.) The dialog
that opens looks like Figure 1.7.
Note that the number of axes in the Scope is the result of changing the value in the
“Number of Axes” box. We have made it two, which causes the Scope block to have two
inputs. We then connected the first input to the Velocity and the second to the Position lines
in the model.
Figure 1.8. Leaning Tower simulation results when air drag is included.
If you run this model, the results in the Scope block will resemble the graphs in
Figure 1.8.
Notice that the air drag causes the object to hit the ground later than we determined
when there was no drag (3.35085405691268 seconds instead of 3.05233847833684, about
0.3 seconds later).
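This stop time can be cross-checked outside Simulink. The sketch below is a Python stand-in for the model (not the Simulink computation itself): it integrates x″ = k(x′)² − g with a classical fourth-order Runge–Kutta step and locates the ground crossing by linear interpolation, using the drag gain k = 0.003825 from the dialog and an assumed starting height of 150 ft.

```python
def fall_time(k, g=32.2, x0=150.0, dt=1e-4):
    """Integrate x'' = k*(x')^2 - g (drag up, gravity down) from rest at
    height x0 until x crosses zero; classical RK4 with a fixed step."""
    def accel(v):
        return k * v * v - g
    t, x, v = 0.0, x0, 0.0
    while t < 10.0:
        # One classical fourth-order Runge-Kutta step for the pair (x, v).
        k1x, k1v = v, accel(v)
        k2x, k2v = v + 0.5 * dt * k1v, accel(v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, accel(v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, accel(v + dt * k3v)
        xn = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        vn = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        if xn <= 0.0:
            return t + dt * x / (x - xn)  # linear interpolation to x = 0
        t, x, v = t + dt, xn, vn
    raise RuntimeError("object never reached the ground")
```

With k = 0.003825 this returns a time close to the 3.3509 sec that Simulink found, and with k = 0 it recovers the no-drag value near 3.0523 sec.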
Now let us compare the light and heavy object. We could do this by going back to the
model and changing the cross-sectional area to that of the smaller object. However, there
is a feature of Simulink that makes this type of calculation extremely easy. The feature is
the automatic “vectorization” of the model.
To exploit this feature, open the Gain block in the model (by double clicking) and
enter both the values for the large and the small object. Do this using MATLAB’s vector
notation. Thus the Gain block dialog will have the values [0.003825 0.000945]. (Include
the square brackets, which make the value in the dialog equal to the MATLAB vector, as
shown in Figure 1.9.)
After running this model with the 2-vector for the drag coefficients, the Scope shows
plots for the position and velocity of both of the objects (see Figure 1.10).
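The same two-at-once computation is easy to mimic with NumPy arrays (a sketch, not the Simulink model: semi-implicit Euler with a small fixed step, gains as entered in the Gain block dialog, and the 150 ft starting height assumed as before):

```python
import numpy as np

k = np.array([0.003825, 0.000945])  # drag gains: heavy (large) then light (small)
g, dt = 32.2, 1e-4

x = np.array([150.0, 150.0])        # heights (ft); 150 ft is an assumption
v = np.zeros(2)                     # velocities (ft/sec)
t_hit = np.full(2, np.nan)          # impact times (sec)
t = 0.0
while np.isnan(t_hit).any():
    v += (k * v * v - g) * dt       # both objects advance in one vector operation
    x += v * dt
    t += dt
    t_hit[np.isnan(t_hit) & (x <= 0.0)] = t
```

The two impact times come out near 3.35 sec and a little over 3.12 sec, so in this model the larger object lands noticeably later.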
The heavier object hits the ground well after the light object. The velocity curve in the
top graph shows why. The speed of the heavier object comes to a maximum value of about 70 ft/sec, whereas the lighter object ends up at about 90 ft/sec. This difference comes from the increased air drag force on the heavier object, which has a larger cross-sectional area.
Figure 1.9. Adding the second object to the Leaning Tower simulation by vectorizing the air drag.
This difference is too large for Galileo not to have noticed, so it does lead us to suspect that he never really did the purported experiment. (It is not clear from Galileo's writings that he understood the effect of drag, but if he did, he could have avoided this by making the objects the same size.) In fact, it is very probable that Galileo relied on the thought experiment he describes in his 1638 book, Two New Sciences [4]. In this thought experiment he argued as follows: suppose the two objects are tied together with a short rope. If the heavier object fell faster, at some point the rope would become taut and the lighter object would retard the heavier one, so the combined object would hit the ground a little later than the heavy object alone. However, the combined object is heavier than either one, so by the original premise it should hit the ground sooner than the heavy object alone. This contradiction was the reason Galileo believed that the Aristotelian statement that the heavy object falls faster was wrong, and it prompted him to perform many experiments with inclined planes to try to understand the dynamics of falling bodies [14].
Since this is the first Simulink model with any substance that we have built, let us use
it to illustrate some other features of Simulink.
Figure 1.10. Simulation results: the Leaning Tower Simulink model with a heavy and a light object.
First, let us clean up the model and annotate it. It is a good idea to do this as you create the model so that when you return to the model later (or give the model to someone else to use), it is clear what is simulated. The first thing to do is to assign a name to the
variables in the diagram, and the second is to name all of the blocks so their functions are
clear. It is also helpful to add color to the diagram to indicate the generic function of the
blocks.
To add a variable name onto a line in the Simulink model, simply double click on the
line. This causes an insertion point to appear on the line where you can type any text. The
annotation is arbitrary.
To annotate the model you created, enter the names in the model, or if you have not
created a model, open the model Leaningtower3 by typing Leaningtower3 at the MATLAB
command line. This will open the model with the annotation shown in Figure 1.11.
Simulink automatically changed the signals from scalars to vectors when we made the gain a MATLAB vector. To see that the signals in the model are vectors corresponding to the two different drag accelerations, right click anywhere in the model window and select Format from the menu; this opens the submenu shown as part of Figure 1.11.
In this menu, we selected “Wide Nonscalar Lines” and “Signal Dimensions” (note
that they are checked). After you do this, all of the vector signals in the diagram have a
wider line, and the signal dimensions (vectors of size 2 here) are on the signal lines in the
model (as shown in Figure 1.11).
Figure 1.11. Right clicking anywhere in the diagram opens a pull-down menu that allows changes to the colors of the blocks, bold vector line styles, and other model annotations.
To change the color of a block, highlight the block and right click. Select "Background Color" from the resulting pop-up menu and then move the mouse pointer to the right to select the desired color.
There are other features that Simulink supports which you can discover on your own
by playing with the model. As we work our way through the exercises and examples in this
book, you will learn a lot more.
In the next section, we continue creating some simple models by simulating a pendulum clock. For the interested reader, [30] has a very nice discussion of how Galileo came to realize that objects with different masses have the same motion. An actual Leaning Tower style experiment showing that the masses really do move identically had to wait for the invention of the vacuum pump at the end of the 17th century.
1.3 Example 2: Modeling a Pendulum and the Escapement of a Clock
The next step in our introduction to Simulink is to investigate a pendulum clock and the
mechanism that allows clocks to work: the escapement. This example introduces control
systems and how they are simulated. It also continues to discuss both linear and nonlinear
differential equations.
The escapement mechanism is one of the earliest examples of a feedback control
system. An escapement provides the sustaining force for the motion of the clock while
ensuring that the pendulum and the clock movement stay in synchronization. Many famous
mathematicians and physicists analyzed clocks and early forms of escapements during the
17th and 18th centuries. We start by describing the escapement mechanism for a mechanical
pendulum clock; we then show how to build a Simulink model of the clock.
1.3.1 History of Pendulum Clocks
Humans have been building timekeepers that use the sun or water since at least 1400 BC. The Egyptian clepsydra (from the Greek kleptein, to steal, and hydor, water) was a water-filled bucket with a hole; the water level was a measure of the elapsed time. Sundials provided daylight time keeping.
daylight time keeping. The first mechanical clocks appeared in the 13th century, with the
earliest known device dating to about 1290. Who invented the first clock is lost to history,
but the main development that made these devices possible is the escapement, a gear with
steep sloping teeth. A gear tooth “escapes” from the escapement, causing the gear to rotate
until the next tooth engages the opposite side of the escapement. The rotation of the gear
registers a count by the clock.
At the same time the tooth escapes, the driver imparts energy (through the escaping
tooth) into the pendulum, which overcomes the friction losses at the pivot. The escapement
thus serves two purposes: it provides the energy to keep the clock in motion, and it provides
the means for counting the number of oscillations.
Christiaan Huygens invented an escapement similar to that shown in Figure 1.12 in
1656. This type of mechanism was responsible for the emergence of accurate mechanical
clocks in the eighteenth century. The mechanism relies on the escapement (the parts labeled
1 and 2 in Figure 1.12).2 With this mechanism, he was able to build a clock that was accurate
to within 1 sec per day. Previous clock mechanisms did not have the regulation capability
of the Huygens escapement, so they would slow down as the springs unwound or weights
that drove the clock dropped down. Earlier clocks required daily resetting to ensure that
they were accurate to only the nearest hour.
The pendulum mechanism provides the timing for the clock and the escapement
provides the forces and connections that
• provide energy to the pendulum so the clock does not stop because of friction;
• regulate the speed of the gear motions to the natural period of the pendulum so that
the timing of the clock is accurate, depending only on the pendulum motion;
• decouple the pendulum and the drive;
• provide the feedback to ensure that the force from the driver (springs or weights) and
wear in the various mechanical parts do not adversely affect the time displayed by
the clock.
2 This copyrighted figure is from a document titled "How a Clock Works" available from Erwin Sattler Clocks of
America. You can download the document from the web page http://www.sattlerclocks.com/ck.php. I would like
to thank Marcus Orvando, president of Sattler Clocks of America, for permission to use this diagram.
Figure 1.12. Components of a pendulum clock.
Before we go into the details of the design of the escapement, let us look at how it
works. The escapement is on a shaft attached to the clock case through a bushing called a
back cock. The pendulum mount does not connect to this shaft. The pendulum suspension
allows the pendulum to swing with minimal friction unencumbered by the escapement. As
the pendulum swings, it alternately engages the left and right side of the escapement anchor
(1). In the process a force from the escapement wheel (2) goes to the pendulum through the
crutch (4) and the crutch pin (5). The crutch assembly also forces the escapement to rotate
in synchronism with the pendulum. The pendulum itself (3) moves back and forth with a
period that is very closely determined by its length. (Once again we return to Galileo, who
demonstrated this in 1582.) The pendulum includes a nut at its bottom (not shown), whose
adjustment changes the effective length and thereby adjusts the period of the pendulum.
The escapement wheel (2) has a link to all of the gears in the gear train (they are not all
shown in Figure 1.12) that turns the hands (8) of the clock.
For the remainder of this description, we focus on the escapement anchor (1) and its
two sides. On the two sides of the anchor are triangular teeth; as the anchor rotates, one
of the two sides of each tooth alternately enters and exits the escapement gear, hence the
names entrance and exit pallets. If you look carefully at the anchor in Figure 1.12, you
will see them: they are triangular pins that pass through each end of the anchor. As the
pendulum moves to the left, it forces the anchor to rotate and ultimately disengage the right
exit pallet face from the escapement wheel (2). When it does, the entrance pallet on the opposite side of the anchor engages the next tooth of the escapement wheel (9), and the escapement
gear moves one tooth. If the pendulum is set to swing at a period of 1 sec (as is the case for
the mechanism in Figure 1.12), the result of the motion of the escapement gear is to move
the second hand on the clock by one second (which implies that there are 60 teeth on the
escapement gear).
When the right face of the anchor engages the gear, the anchor and the escapement
wheel interact. The escapement wheel, which is torqued by the drive weight (7), applies a
force back through the anchor (1) and then, through the crutch and crutch pin (4 and 5), to the
pendulum. The alternate release and reengagement of the anchor and escape wheel provides
the energy to the pendulum that compensates for the energy lost to friction. As a by-product
of this interplay of the pendulum and the escapement wheel, we hear the distinctive and
comforting “tick-tock” sound that a clock makes.
1.3.2 A Simulation Model for the Clock
This Simulink model of the clock is in the NCS library, and you can open it using the
command Clocksim. The clock model has two parts: the model of the pendulum, and the
model of the escapement. The pendulum model comes from the application of Newton’s
laws by balancing the force due to the acceleration with the forces created by gravity. Let
us assume the following:
• The pendulum angle is θ .
• The mass of the pendulum is m.
• The length of the pendulum is l.
• The inertia of the pendulum is J . (If we assume that the mass of the pendulum is
concentrated at the end of the shaft, the inertia is given by J = ml 2 .)
• The damping torque coefficient is c (N·m per radian/sec).
Then the equation of motion of the pendulum is
$$ J\,\frac{d^2\theta(t)}{dt^2} = -c\,\frac{d\theta(t)}{dt} - mgl\,\sin(\theta(t)). $$
Substituting ml² for J gives
$$ \frac{d^2\theta(t)}{dt^2} = -\frac{c}{ml^2}\,\frac{d\theta(t)}{dt} - \frac{g}{l}\,\sin(\theta(t)). $$
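As a quick sanity check on this equation (a Python sketch, not part of the book's model): with the length annotated in the clock model, plength = 0.2485 m, the small-angle period 2π√(l/g) is almost exactly 1 sec, which is why that length is used. The damping gain of 0.01, standing in for c/(ml²), is taken from the model annotation and should be treated as an assumption here.

```python
import math

g, l, damp = 9.8, 0.2485, 0.01   # values annotated in the clock model

def step(theta, omega, dt):
    """One semi-implicit Euler step of theta'' = -damp*theta' - (g/l)*sin(theta)."""
    omega += (-damp * omega - (g / l) * math.sin(theta)) * dt
    theta += omega * dt
    return theta, omega

# Time two successive downward zero crossings to estimate the period.
theta, omega, t, dt, crossings = 0.05, 0.0, 0.0, 1e-5, []
while len(crossings) < 2:
    prev = theta
    theta, omega = step(theta, omega, dt)
    t += dt
    if prev > 0.0 >= theta:
        crossings.append(t)

period = crossings[1] - crossings[0]   # close to 2*pi*sqrt(l/g), about 1.0005 sec
```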
Figure 1.13. Simulink model of the clock. (Parameter values annotated in the model: dampcoef = 0.01, g = 9.8, plength = 0.2485, with acceleration in units of m/s².)
These dynamics are in the pendulum clock Simulink model shown in Figure 1.13. This model uses Simulink primitive blocks for integrators, summing junctions, gains, and
the sine function (from the Continuous and the Math Operations libraries). We also used a
new concept, called a subsystem, to create a block called “Escapement Model.” Inside this
block are Simulink primitives that model the escapement. Open the subsystem by double
clicking, or view it using the model browser.
Let us follow the signal flow in the model. We start at the output of the summing junction, where the angular acceleration of the pendulum, Theta2dot, is the result of subtracting the three terms:
• the damping acceleration (c/(ml²)) θ̇ from the equation above (dampcoef*Thetadot),
• the gravity acceleration (g/l) sin(θ) from the equation above (g/plength*sin(Theta)),
• the acceleration created by the escapement, which we develop next.
The Simulink model creates the variables representing the angular position and velocity, Theta and Thetadot, by integrating the angular acceleration Theta2dot. A pendulum
clock only sustains its oscillation if the pendulum starts at an angle that causes the escapement to operate. Consequently, an initial condition is used in the last integrator (we use 0.2
radians, about 11.5 degrees) to start the clock.
A feature of Simulink is the ability to create a subsystem to simplify the model.
Typically, a subsystem is a Simulink model that groups the equations for a particular common
function into a single block. The details of the block are in the subsystem and are not visible
until the subsystem block is opened (by double clicking or browsing), but its functionality
is captured so that the user can see the interactions. The clock’s single subsystem is the
model for the escapement. This “Escapement Model” block has the equations shown in
Figure 1.14. These nonlinear escapement mechanism equations consist of the logic needed
to model the force created by the escapement.
Figure 1.14. Blocks in the subsystem “Escapement Model.”
The escapement works in the same way as you would push a swing. A small accurately
timed push occurs when the exit pallet leaves the escapement. The push is always in
the direction of the pendulum’s motion and it occurs just after the pendulum reaches its
maximum or minimum swing (depending on the sign of the angle).
The attributes of the escapement model are as follows:
• Since the pendulum motion is symmetric, the calculations use positive angles (with
the absolute value function block). The force applied has the correct sign because
we multiply it by +1 or −1, depending on the sign of the angle. (The block called
“Restore Sign” does this.)
• The pendulum angle at which the escapement engages (applies an acceleration) is
denoted by Penlow. (As can be seen, it is given a value of 0.1 radians.)
• The angle of the pendulum at which the escapement disengages (jumps from one gear
tooth to the next) is denoted by Penhigh (with a value of 0.15 radians).
• The escapement applies the force only when the pendulum is moving toward the center of its swing (i.e., when the absolute value of the pendulum angle is decreasing). We check whether the derivative of the absolute pendulum angle is negative, and multiply the force by 1 when it is and by 0 when it is not in the "Relational Operator 3" block.
1.3. Example 2: Modeling a Pendulum and the Escapement of a Clock
23
The plot at right shows the force
applied by the escapement model (the
Y axis) vs. the pendulum angle (X
axis). Notice that the angle at which
the force engages is 0.1 and –0.1 radians, and the angle at which the force
stops is 0.15 and –0.15 radians as required. We used the Math Operations and Logic libraries to develop
this model, so let us go through the
logic systematically.
The absolute value of the pendulum angle is the first operation. This
block is the first one in the Math Operations library. At the conclusion of the
logic, the direction of the force comes from the sign of the pendulum angle. In the model
this is done by multiplying the force created (which will always be positive) by the sign of
the pendulum angle. When the absolute value of the pendulum angle is less than or equal to
0.15, the “Relational Operator1” block is one; otherwise it is zero. When the angle is greater
than or equal to 0.1 the “Relational Operator2” block is one; otherwise it is zero. The last
part of the logic insures that the only time the force is applied is when the pendulum angle is
getting smaller (in the absolute sense). This is accomplished by checking that the derivative
of the pendulum angle is negative. (The derivative block is in the Continuous library.) The
product of all of these terms causes the output to be one if and only if the pendulum angle
is between 0.1 and 0.15 and the pendulum angle is decreasing, or the pendulum angle is
between −0.1 and −0.15 and the pendulum angle is increasing.
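The Boolean logic just described can be written as a small pure function (a sketch of the subsystem's logic, with the derivative of the absolute angle taken as sign(θ)·θ̇; the thresholds are the Penlow and Penhigh values above):

```python
def escapement_output(theta, thetadot, penlow=0.1, penhigh=0.15):
    """Output of the escapement logic before the external force gain:
    sign(theta) while penlow <= |theta| <= penhigh and |theta| is
    decreasing, and 0 otherwise."""
    sign = 1.0 if theta >= 0.0 else -1.0
    abs_decreasing = sign * thetadot < 0.0        # d|theta|/dt < 0
    engaged = penlow <= abs(theta) <= penhigh and abs_decreasing
    return sign if engaged else 0.0
```

For example, escapement_output(0.12, -0.5) is 1 (engaged on the way back toward center), while escapement_output(0.12, 0.5) and escapement_output(0.05, -0.5) are both 0.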
The simulation is set up to run for 1000 sec, and the output in the Scope block shows
the pendulum angle and the escapement acceleration. After you run the model, you can
reveal the details of the plot by using the mouse to encircle an area on the plot to zoom
in on. If you do this (it was done here between 412 and 421 sec), the plot resembles
Figure 1.15.
The early part of the simulation shows that the pendulum angle grows from its initial
value (of 0.2 radians) to 0.2415 radians. This stable value for the maximum pendulum angle
is a consequence of the escapement.
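With the gain values read from the model (damping 0.01 and escapement 0.1148; the assignment of these two numbers to the two gains is inferred from the diagram, so treat it as an assumption), a simple energy balance predicts this steady amplitude: per half swing the escapement does work 0.1148 × (0.15 − 0.1) ≈ 0.0057 per unit inertia, damping dissipates roughly 0.01·A²ω²T/4 ≈ 0.0986·A², and the two balance at A ≈ 0.24 radians. A Python sketch of the whole loop (again a stand-in, not the Simulink model) shows the same settling:

```python
import math

g, l, damp, kesc = 9.8, 0.2485, 0.01, 0.1148  # inferred clock-model gains

def esc(theta, omega):
    """Escapement logic: push toward center while 0.1 <= |theta| <= 0.15
    and |theta| is decreasing."""
    s = 1.0 if theta >= 0.0 else -1.0
    return s if (0.1 <= abs(theta) <= 0.15 and s * omega < 0.0) else 0.0

theta, omega, dt, peaks = 0.2, 0.0, 1e-3, []
for _ in range(int(600.0 / dt)):              # simulate 600 seconds
    acc = -damp * omega - (g / l) * math.sin(theta) - kesc * esc(theta, omega)
    new_omega = omega + acc * dt
    theta += new_omega * dt
    if omega * new_omega < 0.0:               # turning point: record the amplitude
        peaks.append(abs(theta))
    omega = new_omega
```

The recorded swing amplitude grows from the initial 0.2 and levels off near 0.24 radians, in line with the 0.2415 reported above.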
At this point, numerical experiments with the clock can investigate different questions. For example, the fact that the escapement force is determined by some device that
converts potential energy into kinetic energy (weights, a spring, a pneumatic device that uses
variations in air pressure, etc.) means that over its operation the force that the escapement
applies to the pendulum will decrease. To understand the effect of this decreasing force
on the time keeping of the clock, simulate the escapement with different values of Gain in
the Gain block labeled “Force from escapement over the pendulum mass.” Evaluate the
change, if any, in the period of swing of the pendulum, and the amplitude of the swing when
you change this value (see Example 1.2).
Figure 1.15. Results of the clock simulation for times around 416 sec. Upper figure
is the pendulum angle, and the lower figure is the acceleration applied to the pendulum by
the escapement.
1.4 Example 3: Complex Rotations—The Foucault Pendulum
Léon Foucault, in 1848, noticed that a pendulum mounted in the chuck of a rotating lathe
always moved back and forth in the same plane even if the chuck rotated. This gave him the
idea that he could demonstrate that the earth rotated by setting in motion a long pendulum
and observing the path of motion over the period of a day. In a series of experiments
with gradually longer pendulums, he became convinced that his concept was valid. In
February of 1851, he unveiled a public demonstration for the Paris Exposition, where, at the
Eglise Ste.-Geneviève in Paris, he set up a pendulum that was 67 meters long. The original
pendulum, shown in Figure 1.163 , has been back at the Panthéon since 1995.
The Foucault pendulum both illustrates the mechanics of a body moving in a rotating
frame and is the simplest example of the complex motions that result from gyroscopic forces.
We will investigate these forces in detail in Chapter 3. For this introduction to Simulink,
we have looked at Galileo’s experiment, the pendulum for use in a clock, and now the
Foucault pendulum. The dynamics are not too complex, but they do require that the full
three dimensions be included. (However, one of these motions, the vertical direction, may
be ignored.) The model we create can also be used as a simple example of the Coriolis
effect that causes weather patterns to move from west to east (in the northern hemisphere).
3 The original church is now a public building called the Panthéon on the Place du Panthéon in Paris. This photograph is from the Wikipedia web site [51].
Figure 1.16. Foucault pendulum at the Panthéon in Paris.
1.4.1 Forces from Rotations
A particle moving with constant speed along a
circular path is always changing its direction of
travel; it is accelerating all of the time. The forces
associated with this change in direction are the
centrifugal and centripetal forces. The figure at
right illustrates the motion and the vectors associated with the location of the particle as it moves
over an interval Δt. The vectors in the diagram are the radial location at the two times t and t + Δt; the unit vectors for the radial motion, uᵣ(t); and the direction perpendicular to the radial motion, uθ(t). To restrict the particle to motions with a constant radius, we assume that r(t) = r uᵣ(t), where r is the radius (a scalar constant).
Differentiating this to get the linear velocity of the particle gives
$$ \mathbf{v}(t) = \frac{d\mathbf{r}(t)}{dt} = r\,\frac{d\mathbf{u}_r(t)}{dt}. $$
From the figure above, the derivative of the unit vector (du_r(t)/dt) is given by calculating the difference in this unit vector over the time Δt. Since Δt is small, the direction of the unit vector u_θ over this interval can be assumed constant. In addition, the magnitude of the perturbation |u_r(t + Δt) − u_r(t)| is the small arc length Δθ.
Putting these together gives the derivative du_r(t)/dt as
$$ \frac{du_r(t)}{dt} = \lim_{\Delta t \to 0} \frac{u_r(t+\Delta t) - u_r(t)}{\Delta t} = \lim_{\Delta t \to 0} \frac{u_r(t) + u_\theta\,\Delta\theta - u_r(t)}{\Delta t} = \lim_{\Delta t \to 0} u_\theta \frac{\Delta\theta}{\Delta t} = u_\theta\,\omega. $$
Here ω is the angular velocity of the particle’s motion. Therefore we have v(t) = u_θ ωr or, equivalently, that the magnitude of the velocity (the particle’s speed) is ωr.
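As a quick numerical sanity check of this derivation (my own sketch in Python/NumPy, not from the book, whose models are built in Simulink), we can differentiate r·u_r(t) by finite differences and confirm that the result points along u_θ(t) with magnitude ωr:

```python
import numpy as np

# Circular motion at constant angular velocity omega and radius r.
# Differentiate r*u_r(t) numerically and check that the result points
# along u_theta(t) with magnitude omega*r, as derived above.
omega, r = 2.0, 3.0
t, dt = 0.7, 1e-6

def u_r(t):
    return np.array([np.cos(omega * t), np.sin(omega * t)])

def u_theta(t):
    return np.array([-np.sin(omega * t), np.cos(omega * t)])

# Central-difference approximation of d(r*u_r)/dt
v = r * (u_r(t + dt) - u_r(t - dt)) / (2 * dt)

speed = np.linalg.norm(v)      # should be omega*r
direction = v / speed          # should equal u_theta(t)
print(speed)
print(direction)
```

The same check, run with other values of ω, r, and t, behaves identically, which is the point of the derivation: the speed depends only on ω and r, not on where the particle is on the circle.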
Chapter 1. Introduction to Simulink
To get the relationship between the angular acceleration and the particle’s speed we differentiate the vector velocity using the fact that the radius is constant and the angular velocity and unit vector perpendicular to the radial motion are changing with time. Thus the acceleration of the particle is
$$ a(t) = \frac{dv(t)}{dt} = u_\theta\,\frac{d\omega}{dt}\,r + \frac{du_\theta}{dt}\,\omega r. $$
Following steps similar to those we used above for differentiating u_r, the derivative of u_θ is −u_r ω, where the minus sign comes from the fact that the difference in u_θ at the times t and t + Δt points inward (along the radial direction this is negative). Using these values, we get the acceleration of the particle as (in this expression, α = dω/dt)
$$ a(t) = u_\theta\,\alpha r - u_r\,\omega^2 r. $$
This acceleration has two components: tangential and radial. The tangential component is the instantaneous linear acceleration, and its magnitude is αr. The radial component is the centripetal acceleration, and its magnitude is ω²r = v²/r. These two results should be familiar from introductory physics, but we show them here to remind you that motions under rotations need to account for both the magnitudes and the directions of the vectors.
1.4.2 Foucault Pendulum Dynamics
When there are no forces on an object, its trajectory in inertial space is a straight line. If we
view this motion in a rotating coordinate system, the motion will appear curved. Because the
curve is not the true motion of the object but is a consequence of the observer’s motion, to get
the equations of motion for the observer we use a fictitious force, called the Coriolis force.
We have indirectly seen this kind of fictitious force in the centrifugal acceleration
above. Following the derivation in Section 1.4.1, if a three-dimensional vector b is rotating,
then its derivative in an inertial (stationary) frame is
$$ \left.\frac{db}{dt}\right|_{\text{Inertial}} = \left.\frac{db}{dt}\right|_{\text{Rotating}} + \,\omega \times b. $$
Using this to get the velocity of a rotating vector gives v_Inertial = v_Rotating + ω × r. Applying this result again to differentiate the inertial velocity gives
$$ a_{\text{Inertial}} = \frac{d}{dt}\bigl(v_{\text{Rotating}} + \omega \times r\bigr)\Big|_{\text{Rotating}} + \omega \times v_{\text{Rotating}} + \omega \times (\omega \times r). $$
The derivative of the first term on the right must account for the fact that the angular velocity and the radius vector are both time varying. Thus, we have
$$ a_{\text{Inertial}} = a_{\text{Rotating}} + \frac{d\omega}{dt} \times r + 2\,\omega \times v_{\text{Rotating}} + \omega \times (\omega \times r). $$
This equation gives the acceleration in the rotating coordinate system as
$$ a_{\text{Rotating}} = a_{\text{Inertial}} - 2\,\omega \times v_{\text{Rotating}} - \omega \times (\omega \times r) - \frac{d\omega}{dt} \times r. $$
Since the force on the body in the rotating frame is the mass times the acceleration in the rotating frame and the force on the body in the inertial frame is the mass times the inertial
Figure 1.17. Axes used to model the Foucault pendulum.
acceleration, the “Coriolis force” that accounts for the perceived motion in the rotating frame is
$$ F_{\text{Coriolis}} = -2m\,\omega \times v_{\text{Rotating}} - m\,\omega \times (\omega \times r) - m\,\frac{d\omega}{dt} \times r. $$
We can now model the Foucault pendulum. Figure 1.17 shows the coordinate system we use. The y coordinate comes out of the paper. The origin of these coordinates is the rest position of the pendulum bob, and the z axis goes through the point of suspension of the pendulum. We assume that the pendulum length is L.
In the figure, the Earth is rotating at the angular velocity Ω, the pendulum is at the latitude λ, and we denote the coordinates of the pendulum with the vector (x, y, z). Since the pendulum is rotating at the Earth’s rate, the Coriolis forces on the pendulum are
$$ C = -2m\,(\Omega \times v) = -2m\,\bigl(-\Omega\cos\lambda,\; 0,\; \Omega\sin\lambda\bigr) \times \Bigl(\frac{dx}{dt},\; \frac{dy}{dt},\; \frac{dz}{dt}\Bigr). $$
In the vertical direction, the tension in the wire holding the pendulum is approximately constant (equal to the weight of the pendulum bob, mg), so the velocity along the vertical is essentially zero. Therefore, the Coriolis forces (from the cross product) are
$$ 2m\left( \frac{dy}{dt}\,\Omega\sin\lambda,\;\; -\frac{dx}{dt}\,\Omega\sin\lambda,\;\; \frac{dy}{dt}\,\Omega\cos\lambda \right). $$
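These components are easy to verify with a numerical cross product. The sketch below is mine (Python/NumPy, not from the book); it evaluates −2m Ω × v for an arbitrary horizontal velocity and compares the result with the component form above:

```python
import numpy as np

# Check of the Coriolis term: with the vertical velocity taken as zero,
# -2*m*(Omega x v) should reduce to the component form in the text.
m = 1.0
Omega_E, lam = 7.2722e-5, np.radians(45)
Omega = np.array([-Omega_E * np.cos(lam), 0.0, Omega_E * np.sin(lam)])

xdot, ydot = 0.3, -0.8                 # arbitrary horizontal velocities
v = np.array([xdot, ydot, 0.0])        # vertical velocity ~ 0

C = -2 * m * np.cross(Omega, v)
expected = 2 * m * np.array([ydot * Omega_E * np.sin(lam),
                             -xdot * Omega_E * np.sin(lam),
                             ydot * Omega_E * np.cos(lam)])
print(C)
print(expected)
```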
Now, when the pendulum moves, there is a restoring force from gravity. This force is proportional to mg sin(θ), where θ is the pendulum angle, as we saw for the pendulum above. For the pendulum, the angle is x/L along the x axis and y/L along the y axis. We assume that the pendulum motion is small, so the restoring force is approximately mgθ. Thus the accelerations on the
Foucault pendulum are
$$ \frac{d^2x}{dt^2} = 2\,\Omega\sin(\lambda)\,\frac{dy}{dt} - (2\pi f_P)^2\, x, $$
$$ \frac{d^2y}{dt^2} = -2\,\Omega\sin(\lambda)\,\frac{dx}{dt} - (2\pi f_P)^2\, y. $$
We ignore the accelerations in the vertical direction since to first order they are zero.
The term (2πf_P)² = g/L is the square of the pendulum’s natural frequency, exactly as was used in the clock example above. An interesting aspect of this problem is the magnitudes of the coefficients in the differential equations. The rate of rotation of the earth is essentially once every 24 hours (it is actually a little less), so
$$ \Omega \cong \frac{2\pi}{24 \cdot 60 \cdot 60} = \frac{2\pi}{86{,}400} = 0.00007272205. $$
The coefficient in the model depends on the latitude. Let us assume we are at 45 degrees north latitude, so the coefficient is
$$ 2\,\Omega \sin(\lambda) = 0.00010284. $$
If we assume that the pendulum length is such that the period of the pendulum swing is 1 minute, we obtain that the other coefficient in the differential equation is
$$ (2\pi f_P)^2 = \left(\frac{2\pi}{60}\right)^2 = 0.01096622. $$
These coefficients differ by about two orders of magnitude, so the model mixes widely separated time scales, which will make these equations difficult to solve numerically. As we will see, Simulink allows the user to accommodate this so-called stiffness without difficulty.
We can now build a Simulink model for the Foucault pendulum using these differential equations. First, though, let us compute how long the simulation should be set up to run. The period of the total motion of the Foucault pendulum depends upon our latitude. This period is
$$ \frac{2\pi}{\Omega \sin(\lambda)} = \frac{T_{\text{Earth}}}{\sin(\lambda)}, $$
where T_Earth is the time for one rotation of the earth. (We assume the earth rotates in 24 hours so, for a latitude of 45 degrees, the period is 1.2219e+005 sec.) We will make this time the simulation stop time.
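Before building the Simulink model, it may help to see the same equations integrated directly. The following is my own Python/SciPy sketch (solve_ivp stands in for the Simulink solvers); it checks the coefficient values quoted above and verifies that the rotating-frame energy of the pendulum is conserved over a couple of hours of simulated time:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Foucault pendulum equations from the text, at 45 degrees north latitude,
# with a 1-minute swing period.
OmegaE = 2 * np.pi / 86400.0              # earth rate
lam = np.radians(45.0)
c = 2 * OmegaE * np.sin(lam)              # Coriolis coefficient
wp2 = (2 * np.pi / 60.0) ** 2             # (2*pi*fP)^2

def rhs(t, s):
    x, xd, y, yd = s
    return [xd, c * yd - wp2 * x, yd, -c * xd - wp2 * y]

# Start 1 ft north (x), at rest; integrate two hours of simulated time.
sol = solve_ivp(rhs, (0.0, 7200.0), [1.0, 0.0, 0.0, 0.0],
                rtol=1e-8, atol=1e-10)

def energy(s):
    x, xd, y, yd = s
    return 0.5 * (xd**2 + yd**2) + 0.5 * wp2 * (x**2 + y**2)

drift = abs(energy(sol.y[:, -1]) - energy(sol.y[:, 0])) / energy(sol.y[:, 0])
print(c, wp2, drift)
```

The Coriolis coupling slowly rotates the swing plane, so the y motion, initially zero, grows as the run proceeds; the energy drift stays tiny at this tolerance.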
Figure 1.18 shows the model for the Foucault pendulum. Once again, to gain familiarity with Simulink and to ensure you know how to build a model like this, you should
create this model yourself. We call this model Foucault_Pendulum in the NCS library.
By now, you should be finding it reasonably easy to picture what the Simulink diagram portrays. Remember that the picture represents the mathematics and, because the signal flow represents the various parts of the mathematics, it is important that the model reinforce this flow. In this picture, you should be able to clearly distinguish the x and y components of the pendulum motion. We could emphasize this by coloring the two second-order differential equation block sets with different colors. (The models that are in the NCS library use this annotation.)
When you run this model, the Scope block creates the oscillation over one period, as shown in Figure 1.19. As can be seen from the plots, the motions in the east-west and north-south directions are out of phase by 90 degrees and essentially create a circular motion. The initial condition in the model sets the pendulum at 1 ft to the north and at exactly zero along
Figure 1.18. The Simulink model of the Foucault pendulum.
Figure 1.19. Simulation results from the Foucault pendulum Simulink model using the default solver. (Two Scope traces: the motion of the pendulum in y, E-W, and in x, N-S, in feet, versus time.)
Figure 1.20. Simulation results for the Foucault pendulum using a tighter tolerance for the solver.
the east-west line. If for some reason the pendulum starts with a motion that is out of the plane, the pendulum will precess during its motion, sweeping out a more elliptical path.
The parameters in the model come from code used by the pre-load function (in the
“Model Properties” dialog under “Edit” in the pull-down menu at the top of the Simulink
model). You can experiment with different latitudes for the pendulum by changing the value
of Lat (by simply typing the new value of Lat at the MATLAB command line).
Let us use this result to explore the numerical integration tools in Simulink. Chapter 7 of Cleve Moler’s Numerical Computing with MATLAB contains a discussion of the
differential equation solvers in MATLAB. The solvers in Simulink are identical, except that
they are linked libraries that Simulink automatically uses when the model runs (because
you clicked the “run” button). The results in Figure 1.19 use the default solver in Simulink
(the default is ode45 with a relative tolerance of 1e-3). If you look carefully at this figure,
you will notice that the amplitude of the pendulum motion is decreasing over time. There
is, however, no physical reason for this. We have not included any pendulum damping in
the model, and the terms that couple the x and y motions should not introduce damping.
To see if this effect is a consequence of the numerical solver, let us go into the model and
change the solver to make it more accurate. Open the “Configuration Parameters” dialog
under the Simulation pull-down menu (or use Ctrl + E), and in the dialog change the relative
tolerance to 1e-5. If you run the simulation, the result appears as shown in Figure 1.20.
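You can reproduce this tolerance effect outside Simulink as well. The sketch below is mine (Python/SciPy with RK45, which is the same Runge-Kutta family as ode45): it integrates the Foucault equations at a loose and a tight relative tolerance and compares the spurious loss of energy, the same artificial “damping” visible in Figure 1.19:

```python
import numpy as np
from scipy.integrate import solve_ivp

OmegaE = 2 * np.pi / 86400.0
lam = np.radians(45.0)
c = 2 * OmegaE * np.sin(lam)
wp2 = (2 * np.pi / 60.0) ** 2

def rhs(t, s):
    x, xd, y, yd = s
    return [xd, c * yd - wp2 * x, yd, -c * xd - wp2 * y]

def energy_drift(rtol):
    # Relative change in the (conserved) energy after 20000 sec
    sol = solve_ivp(rhs, (0.0, 20000.0), [1.0, 0.0, 0.0, 0.0],
                    rtol=rtol, atol=1e-12)
    e = lambda s: 0.5 * (s[1]**2 + s[3]**2) + 0.5 * wp2 * (s[0]**2 + s[2]**2)
    return abs(e(sol.y[:, -1]) - e(sol.y[:, 0])) / e(sol.y[:, 0])

loose, tight = energy_drift(1e-3), energy_drift(1e-6)
print(loose, tight)
```

Since the true motion conserves this energy exactly, any drift is purely numerical; the loose tolerance loses far more than the tight one, just as in the two figures.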
Notice that this result is what we expect; there is no discernible damping in the motion, and the period is exactly what we predicted. The need for the tighter tolerance in the calculation is because of the roughly two-order-of-magnitude difference in the coefficients in the model, as we noted above. In general, experimentation with the differential equation solver settings is required in order to ensure that the solution is correct. The best way to do this is to try different solvers and different tolerances to see if they create discernible differences in the solutions. We return to the solvers as we look at other simulations. In particular, we
will look at some very stiff systems and the use of stiff solvers in Chapter 3.
In Chapter 3, we will also explore the more complex motions of objects undergoing
gyroscopic forces when the motion takes place in three dimensions. First, however, we will
look at how Simulink creates simulation models for linear systems, how Simulink handles
vectors and matrices, and, last, when to use Simulink as part of the process of designing
control systems.
1.5 Further Reading
Galileo showed that a falling body’s acceleration is independent of its mass by making
detailed measurements with an inclined plane, as was described in his 1632 Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican [14]. Recent and old
versions of his devices are in the Institute and Museum of the History of Science in Florence, Italy (http://www.imss.firenze.it/), and you can view a digital version of the original
manuscript of Galileo’s “Notes on Motion” (Folios 33 to 196) from this site. The location
for the manuscript is http://www.imss.fi.it/ms72/index.htm. An excellent translation of the
Galileo dialogue describing the inclined plane measurements that he made is in The World
of Mathematics, Volume 2, edited by James R. Newman [30].
There is a wonderful essay by John H. Lienhard from the University of Houston (at
http://www.uh.edu/engines/epi1307.htm) that describes the clock as the first device ever
engineered. The essay ties the clock back to Galileo and traces its history from the early
writings of Francis Bacon to Huygens and Hooke.
The document that you can download from Sattler Clocks (referred to in footnote 2)
has much more detail on the mechanisms involved in the escapement (and in particular the
escapement in Figure 1.12 that was invented by George Graham in 1720) and the other
very interesting features of a pendulum clock. They also have extensive pictures of some
elegantly designed and beautiful clocks.
I used the clock example to develop an analysis of the clock escapement in the IEEE
Control Systems Society Magazine [38]. Shortly after the publication of this article, a set
of articles also appeared in the same magazine on several interesting early control systems,
including the clock [3], [19].
In a recent Science article [37] the authors demonstrated that moths maintain control
during their flight because their antennae vibrate. This vibration, like the motion of the
Foucault pendulum, precesses as the moth rotates, giving the moth a signal that tells it
what changes it needs to make to maintain its orientation. The authors demonstrated this
by cutting the antennae from moths and observing that their flight was unstable. They
subsequently reattached the antennae, and the moths were again able to fly stably.
You should carefully review The MathWorks’s User’s Manual for Simulink [42].
Simulink models can get very large and take a long time to run. The data generated by
the model can also overwhelm the computer’s memory. Managing the use of memory can
improve the performance of your models. Reference [41] has a good discussion of the
methods available for managing the memory in a Simulink model.
Exercises
1.1 Use the Leaning Tower of Pisa model as the starting point to add a horizontal component to the velocity at the instant the objects start. Include the effect of air drag.
Also, add an input to the model to simulate different wind speeds. Experiment with
this model to see how these do or do not affect the experiment.
1.2 In the clock simulation, increase the friction force to see how large it must be before
the escapement ceases working. Perform some numerical experiments with the model
to see if there is any effect on the accuracy because of the increased friction. When
you evaluated the change in the period and amplitude of swing of the pendulum as
you changed the Gain, you should not have seen a significant change. What change in
the model will allow you to see the effect of lower forces applied by the escapement?
Modify the model to look at this, and experiment with different escapement forces.
What is the effect? Why is the effect so small? How does the escapement ameliorate
these forces? Look at Figure 1.12 carefully. The Sattler Company has a spring
mounted on the pendulum that transmits the force from the crutch pin to the pendulum.
Does the Simulink model have this mechanism? How would you go about adding
this mechanism to the model? Is there a good reason for the Sattler clock to use this
spring?
1.3 Experiment with the different Simulink solvers (one by one) to see what effect they
have on the simulation of the Foucault pendulum. Which ones work well? Try
changing the tolerances used for each of the solvers to see if they stop working.
1.4 Create a Simulink model of a mass suspended on a spring under the influence of
gravity. Add a force that is proportional to the velocity of the mass to model the
damping. Pick some parameter values for the mass, the spring constant, the damping,
and, of course, gravity, and simulate the system. Explore what happens when you
increase the value of the damping from zero. Can you explain the behavior you are
seeing in terms of the solution of the underlying differential equation? (If you cannot,
we will see why in the next chapter.)
Chapter 2
Linear Differential Equations, Matrix Algebra, and Control Systems
We have seen in Chapter 1 how Simulink allows vectorization of the parameters in a model
(using MATLAB notation for the vectors) with the resulting model simultaneously simulating multiple situations. In a similar manner, Simulink allows matrix algebra using blocks.
In this chapter, we will exploit this capability to build easy-to-understand models using vectors, matrices, and related computational tools. In fact, it is this capability that, when used properly, contributes to the readability and ease of use of many models. To make it
easy to understand these capabilities, we will spend some time talking about the solution to
linear differential equations using matrix techniques.
2.1 Linear Differential Equations: Linear Algebra
The general form of an nth order linear differential equation is
$$ a_{n+1}\frac{d^n y(t)}{dt^n} + a_n\frac{d^{n-1} y(t)}{dt^{n-1}} + a_{n-1}\frac{d^{n-2} y(t)}{dt^{n-2}} + \cdots + a_2\frac{dy(t)}{dt} + a_1\, y(t) = u(t). $$
In addition to the equation, we must specify the values (at t = 0) of the dependent variable
y(t) and its derivatives up to the order n − 1. Specifying the initial conditions creates a
class of differential equation called an initial value problem.
The coefficients in the equation can be functions of t, but for now, we will assume
that they are constants. One way to solve this equation is to use a test solution of the form
y(t) = ept , where p is an unknown. Substituting this for y(t) into the differential equation
gives a polynomial equation of order n for p. The solution of this polynomial gives n values
of p that, with the initial conditions and the input u(t), provide the solution y(t). This may
be familiar to you from a course on differential equations. However, there is a better way
of understanding the solution to this equation, and it relies on matrix algebra.
In the differential equation above, let us build an n-vector x (the vector will be n × 1) that consists of y and its derivatives as follows:
$$ x(t) = \begin{bmatrix} y(t) & \dfrac{dy(t)}{dt} & \dfrac{d^2 y(t)}{dt^2} & \cdots & \dfrac{d^{n-1} y(t)}{dt^{n-1}} \end{bmatrix}^T. $$
Using this vector, the differential equation becomes a first order vector-matrix differential equation. The equation comes about quite naturally because all of the derivatives of the elements in the vector x(t) are linear combinations of elements in x(t). Thus, any linear differential equation becomes
$$ \frac{dx(t)}{dt} = A\,x(t) + b\,u(t), $$
where A and b are the matrices
$$ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\dfrac{a_1}{a_{n+1}} & -\dfrac{a_2}{a_{n+1}} & -\dfrac{a_3}{a_{n+1}} & \cdots & -\dfrac{a_n}{a_{n+1}} \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \dfrac{1}{a_{n+1}} \end{bmatrix}. $$
If this is the first time you have encountered this equation, you should verify that the matrix
equation and the original differential equation are the same (see Problem 2.1).
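A small Python/NumPy sketch (mine, not the book’s) of this construction may help: it builds A and b from the coefficients a_1, …, a_{n+1}, and confirms that the eigenvalues of A are exactly the roots p obtained from the test solution y(t) = e^{pt} discussed above:

```python
import numpy as np

# Build the companion matrices A and b from the coefficients
# a = [a1, a2, ..., a_{n+1}], where a_{n+1} multiplies the highest derivative.
def companion(a):
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # superdiagonal of ones
    A[-1, :] = -np.array(a[:-1]) / a[-1]  # last row: -a_i / a_{n+1}
    b = np.zeros((n, 1))
    b[-1, 0] = 1.0 / a[-1]
    return A, b

# Example: y''' + 6 y'' + 11 y' + 6 y = u  has characteristic roots -1, -2, -3
A, b = companion([6.0, 11.0, 6.0, 1.0])
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)
```

The eigenvalues of the companion matrix are the roots of the polynomial a_{n+1} p^n + … + a_1 = 0, which is exactly the polynomial the e^{pt} substitution produces.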
We will solve this equation in two steps. First, we assume that u(t) ≡ 0 and get the solution; then we will let u(t) be nonzero and derive the solution. We will do this using matrix algebra. When u(t) ≡ 0 we have
$$ \frac{dx(t)}{dt} = A\,x(t) \qquad\text{with } x(0) = x_0. $$
If A in this equation were a scalar (say a), then the solution would be x(t) = e^{at} x(0). This comes immediately by substituting this value for x into the differential equation and using the fact that d/dt (e^{at} x_0) = a e^{at} x_0.
Can we expand the idea of an exponential to a matrix? To answer this we need to
think about what the exponential function is. When we say that a function eat exists, what
we really mean is that the infinite series
$$ e^{at} = 1 + at + \frac{a^2 t^2}{2!} + \frac{a^3 t^3}{3!} + \cdots + \frac{a^n t^n}{n!} + \cdots $$
is convergent and well behaved. Therefore, we can differentiate this by taking the derivative
of each term in the series. Can we do the same thing with a matrix?
Let’s define the matrix exponential (analogous to the scalar exponential) as
$$ e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \cdots + \frac{A^n t^n}{n!} + \cdots. $$
This series converges because we can always find a matrix, T, which will convert the matrix A to a diagonal or Jordan form. For now, let us assume that the A matrix is diagonalizable. That is, there exists a matrix T such that TAT⁻¹ = Λ, a diagonal matrix. (See Chapter 10 of Numerical Computing with MATLAB [29] for a discussion of eigenvalues and the diagonalization of a matrix in MATLAB.)
Multiplying both sides of the definition of the matrix exponential by T on the left and T⁻¹ on the right gives
$$ T e^{At} T^{-1} = I + TAT^{-1} t + \frac{TA^2T^{-1} t^2}{2!} + \frac{TA^3T^{-1} t^3}{3!} + \cdots + \frac{TA^nT^{-1} t^n}{n!} + \cdots. $$
If we insert the identity matrix in the form of T⁻¹T between each of the powers of A in this series, we get
$$ T e^{At} T^{-1} = I + TAT^{-1} t + \frac{TA(T^{-1}T)AT^{-1}\, t^2}{2!} + \frac{TA(T^{-1}T)A(T^{-1}T)AT^{-1}\, t^3}{3!} + \cdots. $$
Now, by grouping slightly differently we get
$$ T e^{At} T^{-1} = I + TAT^{-1} t + \frac{(TAT^{-1})(TAT^{-1})\, t^2}{2!} + \frac{(TAT^{-1})(TAT^{-1})(TAT^{-1})\, t^3}{3!} + \cdots. $$
Therefore, the infinite series is now the sum of an infinite number of diagonal matrices whose n diagonal elements are the infinite series that sums to the exponential e^{λ_i t} for each eigenvalue λ_i of the matrix A. The steps for this are as follows:
$$ T e^{At} T^{-1} = I + \Lambda t + \frac{\Lambda^2 t^2}{2!} + \frac{\Lambda^3 t^3}{3!} + \cdots + \frac{\Lambda^n t^n}{n!} + \cdots = e^{\Lambda t} = \begin{bmatrix} e^{\lambda_1 t} & 0 & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & 0 & \cdots & 0 \\ 0 & 0 & e^{\lambda_3 t} & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}. $$
Therefore, the matrix exponential exists and is absolutely convergent. This constructive
proof of the existence of the matrix exponential also shows how to compute it. To construct
it, first compute the diagonal matrix above from each of the eigenvalues of the matrix and
then multiply the result on the right by T and on the left by T⁻¹.
MATLAB allows construction of a simple M-file that will generate the solution matrix this way. The code is expm_ncs. There is a built-in matrix exponential function in MATLAB (expm) that performs the calculation, but since its code is not visible, the M-file expm_ncs is in the NCS library. Open this file and follow the steps it uses to form the matrix exponential.
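In Python the same construction takes only a few lines. This is a sketch of the idea behind expm_ncs, not the book’s M-file; it assumes A is diagonalizable and uses NumPy’s eig, then checks the result against a truncated power series:

```python
import numpy as np

# Matrix exponential via diagonalization: exponentiate the eigenvalues
# and transform back (assumes A is diagonalizable).
def expm_diag(A, t):
    lam, V = np.linalg.eig(A)            # A = V diag(lam) V^-1
    return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real

A = np.array([[0.0, 1.0], [-39.4366, -0.01]])   # pendulum A from Sec. 2.1.2
t = 0.01

# Compare with a truncated power series I + At + A^2 t^2/2! + ...
series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 30):
    series += term
    term = term @ (A * t) / k
print(np.max(np.abs(expm_diag(A, t) - series)))
```

Note the diagonalization convention here is A = V Λ V⁻¹ (NumPy’s), the inverse of the book’s TAT⁻¹ = Λ; the two differ only in which matrix is called T.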
We are only halfway to solving the linear differential equation we set out to solve.
So now, let us remove the restriction that u(t) ≡ 0. Before we show what happens when
we do this, there are a couple of facts about the matrix exponential that you need to know.
(Spend a few seconds verifying these facts, using the hint below each of the statements; see
Exercise 2.2.)
Fact 1: The derivative of eAt is AeAt .
Show this by differentiating the infinite series term by term.
Fact 2: The inverse of eAt is e−At .
36
Chapter 2. Linear Differential Equations, Matrix Algebra, and Control Systems
Show this by taking the derivative of the product eAt e−At (using the first fact) and
show that the result is zero. Then show that this means that the product is a constant equal
to the identity matrix.
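Both facts are easy to spot-check numerically. This is my own quick check in Python (using SciPy’s built-in expm, the analogue of MATLAB’s expm), not something from the text:

```python
import numpy as np
from scipy.linalg import expm

# Spot-check the two facts about the matrix exponential.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, h = 0.7, 1e-6

# Fact 1: d/dt e^{At} = A e^{At} (central difference in t)
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
fact1 = np.allclose(deriv, A @ expm(A * t), atol=1e-6)

# Fact 2: the inverse of e^{At} is e^{-At}
fact2 = np.allclose(expm(A * t) @ expm(-A * t), np.eye(2), atol=1e-12)
print(fact1, fact2)
```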
Now, return to the differential equation dx(t)/dt = Ax(t) + bu(t) and let u(t) be nonzero.
Conceptually, the input u(t) is like a change to the initial conditions of the differential
Conceptually, the input u(t) is like a change to the initial conditions of the differential
equation at every time step. Thus, it seems reasonable that the solution to the equation
should have the form eAt c(t), where c(t) is a “pseudoinitial condition” that is a function of
time that is to be determined.
Substituting this assumed solution into the differential equation gives
$$ \frac{d}{dt}\left(e^{At} c(t)\right) = A e^{At} c(t) + b\,u(t). $$
However,
$$ \frac{d}{dt}\left(e^{At} c(t)\right) = A e^{At} c(t) + e^{At}\frac{d}{dt} c(t). $$
When this result is used, we get
$$ A e^{At} c(t) + e^{At}\frac{d}{dt}c(t) = A e^{At} c(t) + b\,u(t) \quad\text{or}\quad e^{At}\frac{d}{dt}c(t) = b\,u(t). $$
Multiplying by e^{−At} gives the following for c(t):
$$ c(t) = \int_0^t e^{-A\tau}\, b\, u(\tau)\, d\tau + x_0. $$
The fact that the constant of integration is x_0 comes from letting t = 0 in the solution. Therefore the solution of the differential equation is
$$ x(t) = e^{At} c(t) = e^{At} x_0 + \int_0^t e^{A(t-\tau)}\, b\, u(\tau)\, d\tau. $$
We call the integral in this expression the convolution integral.
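As a sanity check on this formula (my own Python/SciPy sketch; the matrices and the input u(t) = sin t are arbitrary choices, not from the book), we can evaluate the convolution integral with a simple trapezoidal rule and compare it against a direct numerical integration of the state equation:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
b = np.array([0.0, 1.0])
x0 = np.array([1.0, 0.0])
u = np.sin
t_end = 3.0

# x(t) = e^{At} x0 + integral_0^t e^{A(t - tau)} b u(tau) dtau,
# with the integral evaluated by the trapezoidal rule on a fine grid.
taus = np.linspace(0.0, t_end, 3001)
h = taus[1] - taus[0]
vals = np.array([expm(A * (t_end - tau)) @ b * u(tau) for tau in taus])
integral = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
x_conv = expm(A * t_end) @ x0 + integral

# Direct integration of dx/dt = A x + b u(t) for comparison
sol = solve_ivp(lambda t, x: A @ x + b * u(t), (0.0, t_end), x0,
                rtol=1e-10, atol=1e-12)
print(np.max(np.abs(x_conv - sol.y[:, -1])))
```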
2.1.1 Solving a Differential Equation at Discrete Time Steps
In our discussion of digital filters in Chapter 4, we will need to understand how to convert a
continuous time linear system into an equivalent discrete time (digital) representation. One
method uses the solution above.
Assume that the analog-to-digital (A/D) converter at the input samples u(t) to make it piecewise constant (i.e., the analog input is sampled at times kΔt and held constant over the interval Δt). We call this a “zero order hold” sampler. Under this assumption, the solution above becomes a difference equation by following these steps:
• Assume that the initial time, t₀, is the sample kΔt and the current time, t, is (k + 1)Δt.
• Assume that u(t) is sampled so that its value is constant over the sample interval, i.e., u(t) = u(kΔt), kΔt ≤ t < (k + 1)Δt, which we denote by u_k.
• Denote the value of the vector x(kΔt) by x_k.
With these assumptions, the solution becomes the difference equation
$$ x_{k+1} = e^{A\Delta t}\, x_k + \left( \int_{k\Delta t}^{(k+1)\Delta t} e^{A\left((k+1)\Delta t - \tau\right)}\, d\tau \right) b\, u_k = \Phi(\Delta t)\, x_k + \Gamma(\Delta t)\, u_k. $$
The matrix Φ(Δt) is e^{AΔt}, and the matrix Γ(Δt) = ∫₀^{Δt} e^{Aτ′} dτ′ b comes from the integral by changing the integration variable to τ′ = (k + 1)Δt − τ (which makes dτ = −dτ′):
$$ \Gamma(\Delta t) = \int_{k\Delta t}^{(k+1)\Delta t} e^{A\left((k+1)\Delta t - \tau\right)}\, d\tau\; b = \int_0^{\Delta t} e^{A\tau'}\, d\tau'\; b. $$
Notice that this difference equation is also a vector-matrix equation. It corresponds to the solution of the differential equation at the times kΔt, and this discrete time solution is the same as the continuous time solution when the input u(t) is constant over each sample interval.
We can now solve any linear differential equation we encounter using linear algebra. In fact, we have a whole bunch of ways of solving the equation. So let us explore them. Before we do, there are some names that you should be aware of that are associated with the solution method that we just went through. We call the vector x(t) in the differential equation the “state vector” and the differential equation the “state-space” model. In the solution, the matrix Φ = e^{At} is the transition matrix. If you check the Simulink Continuous library, you will find the state-space icon that allows you to put a linear differential equation in state-space form in a model. Similarly, in the list of digital filters in the Discrete library, the eighth block is the “Discrete State-Space.” It allows you to enter the difference equation above. In order to match the continuous and discrete models, we need to be able to compute for any A and b the values of Φ(Δt) and Γ(Δt). MATLAB can do this.
The NCS library contains a code called c2d_ncs that will create these matrices. The inputs to c2d_ncs are A, b, and the sample time Δt; the outputs are phi and gamma. The M-file, shown below, uses the matrix exponential expm in MATLAB. This code is similar to the code called c2d that is part of the control system toolbox in MATLAB.
function [Phi, Gamma] = c2d_ncs(a, b, deltat)
% C2D_NCS Converts the continuous time state space model to a discrete
% time state space model that is equivalent at the sample times under
% the assumption that the input is constant over the sample time.
%
%    [Phi, Gamma] = C2D_NCS(A,B,deltat) converts:
%       .
%       x = Ax + Bu
%    into the discrete-time state-space system:
%       x[k+1] = Phi * x[k] + Gamma * u[k]

[ma,na] = size(a);
[mb,nb] = size(b);
if ma~=na
    error('The matrix a must be square')
end
if mb~=ma
    error('The matrix b must have the same number of rows as the matrix a')
end
augmented_matrix = [[a b]; zeros(nb,na+nb)];
exp_aug          = expm(augmented_matrix*deltat);
Phi              = exp_aug( 1:na, 1:na );
Gamma            = exp_aug( 1:na, na+1:na+nb );
The cute trick in this code is to build the augmented matrix
$$ \begin{bmatrix} A & b \\ 0 & 0 \end{bmatrix}, $$
where the zeros at the bottom have as many rows as b has columns and as many columns as the concatenation of A and b. Computing the matrix exponential of this matrix automatically generates the integral ∫₀^{Δt} e^{Aτ} b dτ. (See Exercise 2.3, where you will verify this yourself. Hint: Use the fact that the bottom rows of the matrix exponential are constant.)
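Here is the same trick transcribed to Python/SciPy (my sketch, not the book’s code), applied to the linearized clock pendulum that reappears in Section 2.1.2:

```python
import numpy as np
from scipy.linalg import expm

# The augmented-matrix trick from c2d_ncs: expm of [[A, b], [0, 0]]*dt
# contains Phi = e^{A dt} in the upper-left block and
# Gamma = integral_0^dt e^{A tau} b dtau in the upper-right block.
def c2d(A, b, dt):
    na, nb = A.shape[0], b.shape[1]
    aug = np.zeros((na + nb, na + nb))
    aug[:na, :na] = A
    aug[:na, na:] = b
    M = expm(aug * dt)
    return M[:na, :na], M[:na, na:]

A = np.array([[0.0, 1.0], [-39.4366, -0.01]])   # linearized clock pendulum
b = np.array([[0.0], [0.1148]])
Phi, Gamma = c2d(A, b, 0.01)
print(Phi)
print(Gamma)
```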
2.1.2 Linear Differential Equations in Simulink
We can now go to Simulink and solve some linear differential equations. In the clock example, the equation for the pendulum was
$$ \frac{d^2\theta}{dt^2} = -\frac{c}{ml^2}\frac{d\theta}{dt} - \frac{g}{l}\sin(\theta) + 0.1148\, u(t). $$
To make this linear, assume that the pendulum angle is small so that sin(θ) ≈ θ. (This comes from the fact that lim_{θ→0} sin(θ)/θ = 1.) The parameter values in the model were such that the linear differential equation becomes
$$ \ddot{\theta} + 0.01\,\dot{\theta} + 39.4366\,\theta = 0.1148\, u(t). $$
Using the state-space form we developed above gives
$$ \frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -39.4366 & -0.01 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0.1148 \end{bmatrix} u(t). $$
If we convert this, using the c2d_ncs M-file, into a discrete system with a sample time of 0.01 seconds, the matrices Φ and Γ are

phi =
   0.99802888363385   0.00999292887448
  -0.39408713885130   0.99792895434511

gamma =
   0.00000573792261
   0.00114718823479
Figure 2.1. Using Simulink to compare discrete and continuous time state-space versions of the pendulum. (A Constant block with value 0 feeds both the continuous State-Space block, x′ = Ax + Bu, y = Cx + Du, and the Discrete State-Space block, x(n+1) = Ax(n) + Bu(n), y(n) = Cx(n) + Du(n); each output goes to a Scope.)
We can now compare the solutions in Simulink for the continuous version and the discrete version using the state-space blocks for each. The NCS library contains the Simulink model State_Space, which you can open (or build yourself using Figure 2.1 as the guide). The state-space block requires values for two additional matrices (called C and D). They define the output of the block. Specifically, the output of the block, denoted by y, is y = Cx + Du. Because we defined the state x with the output y as the first entry (and the derivatives of y as the subsequent entries), the output and its derivatives are always available as a simple matrix product. If we are interested only in the value of y, then the matrix C = [1 0], and since the input u does not appear in the output, D = 0 (a scalar). These values, along with the values of A and b, are set when the model opens using the model preload function callback in the model properties. We also have set the initial displacement of the pendulum to be 1 and the initial velocity of the pendulum to be 0. This is done in the block dialogs by setting the initial condition vector to be [1 0].
As an exercise, let us see how the two solutions compare. We set up the model so that a sum block creates the difference between the two solutions and sends the difference to a Scope. We also set the model up to run for 1000 sec, and to compute the solution to the continuous time model with the ode45 algorithm with the relative tolerance set to 10⁻¹⁰.
The Simulink model that compares the state-space models in discrete and continuous form is in Figure 2.1; Figure 2.2 contains plots that show the solution for the continuous model and the difference between the continuous solution and the discrete solution. Notice that the largest difference is about 3 × 10⁻⁸, which is almost entirely due to the conversion from continuous to discrete form. This is because in the Configuration Parameters dialog (under the Simulation menu) we set the relative tolerance for the ode45 solver to 10⁻¹⁰.
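With zero input, the discrete recursion x_{k+1} = Φ x_k reproduces e^{At} x_0 at the sample times up to rounding error, so whatever residual the Scope shows comes from the continuous solver’s tolerance. A quick Python/SciPy check of this point (mine, not from the book):

```python
import numpy as np
from scipy.linalg import expm

# With zero input, iterating Phi = e^{A dt} must agree with the exact
# continuous solution e^{At} x0 at every sample time.
A = np.array([[0.0, 1.0], [-39.4366, -0.01]])
dt = 0.01
Phi = expm(A * dt)

x = np.array([1.0, 0.0])        # initial displacement 1, velocity 0
for k in range(1000):           # 10 seconds of simulated time
    x = Phi @ x

x_exact = expm(A * 10.0) @ np.array([1.0, 0.0])
print(np.max(np.abs(x - x_exact)))
```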
The ability to simulate a continuous time linear differential equation using the iterative
discrete version is obviously very useful when it comes to designing a digital filter. We will
return to this later when we show details about the design of digital filters.
For now, we need to spend some time on a detour that shows how to get the frequency
response of a continuous time system. To do this we need to introduce and investigate the
properties of the Laplace transform.
Chapter 2. Linear Differential Equations, Matrix Algebra, and Control Systems

[Figure: two Scope traces. Scope 2 (Continuous State-Space Model) oscillates between about -0.2 and 0.2 over the 1000-sec run; the Scope trace (Difference Between Continuous and Discrete Model) stays within about ±4 × 10^{-8}.]
Figure 2.2. Outputs as seen in the Scope blocks when simulating the model in Figure 2.1.
2.2 Laplace Transforms for Linear Differential Equations
Early in his career, Pierre-Simon Laplace (1749–1827) was interested in the stability of
the orbits of planets. He developed his eponymous transform to solve the linear differential equations that defined the perturbations away from the nominal orbit of a planet, and
consequently there is no single paper devoted only to the transform. In fact, he treated the
transform as simply a tool and most times used it without explanation.
The transform converts a differential equation into an equivalent algebraic equation
whose solution is more manageable. Today, this use is not as important because we have
computers, and tools like Simulink, to do this for us. However, one aspect of the Laplace
transform is still extensively used: the ability to understand the effect of a differential
equation on a sinusoidal input in terms of the frequency of the sinusoid. We will make
extensive use of this in our discussions of electrical systems, mechanical systems, and both
analog and digital filtering.
The Laplace transform of a function f(t) is

F(s) = \int_0^\infty e^{-st} f(t)\,dt.
With this definition, it is easy to build a table of Laplace transforms for different types of
functions. The simplest is the function f (t) = 1 when t ≥ 0 and f (t) = 0 when t < 0.
This is the step function because its value steps from 0 to 1 at the time t = 0. Putting this
function in the definition, we get
F(s) = \int_0^\infty e^{-st}\,dt = \left[-\frac{1}{s}e^{-st}\right]_0^\infty = \frac{1}{s}.
When the value of the integral is evaluated at infinity, the assumption is made that the real
part of s is > 0. In a similar way, the transform of e^{-at} is \frac{1}{s+a}. The Laplace transforms
for the sine and cosine can use this transform and the facts that \sin(bt) = \frac{e^{ibt} - e^{-ibt}}{2i} and
\cos(bt) = \frac{e^{ibt} + e^{-ibt}}{2} (see Exercise 2.4).
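These table entries are easy to confirm numerically. Here is a Python sketch (an illustration, not part of the book's toolchain) that approximates the defining integral with the trapezoid rule; s, a, and b are arbitrary test values, and the integral is truncated at T = 40, where e^{-st} is negligible:

```python
import numpy as np

s, a, b = 2.0, 0.5, 3.0
t, dt = np.linspace(0.0, 40.0, 400001, retstep=True)

def laplace(f):
    """Trapezoid-rule approximation of the Laplace integral of f at s."""
    y = np.exp(-s * t) * f(t)
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

step = laplace(lambda t: np.ones_like(t))    # step function: 1/s = 0.5
decay = laplace(lambda t: np.exp(-a * t))    # exponential: 1/(s+a) = 0.4
sine = laplace(lambda t: np.sin(b * t))      # sine: b/(s^2 + b^2) = 3/13
print(step, decay, sine)
```

Each printed value matches its table entry (1/s, 1/(s+a), and the standard sine transform b/(s^2 + b^2)) to several digits.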
We can also apply the Laplace transform to the state-space model we developed in the
previous section. In order to do this we need to know the Laplace transform of the derivative
of a function. Thus, if the Laplace transform of a function f(t) is F(s), the derivative \frac{df(t)}{dt}
has the Laplace transform sF(s) − f(0). This follows from the definition and integration
by parts as follows:

\int_0^\infty \frac{df(t)}{dt}\,e^{-st}\,dt = s\int_0^\infty f(t)\,e^{-st}\,dt + \left[f(t)\,e^{-st}\right]_0^\infty = sF(s) - f(0).
Since the Laplace transform of the derivative is s times the Laplace transform of the
function differentiated, it follows that the operation of integration corresponds to multiplication
by 1/s. From this comes the Simulink notation 1/s for the integrator block in the Continuous library.
Let us apply this definition to the state-space model \frac{dx(t)}{dt} = Ax(t) + bu(t). The
Laplace transform of the vector x(t), X(s), is the Laplace transform of each element, so

sX(s) - x(0) = AX(s) + bU(s).

This gives X(s) as

X(s) = [sI - A]^{-1} x(0) + [sI - A]^{-1} b\,U(s).
If we compare this to the solution x(t) = e^{A(t-t_0)} x_0 + \int_{t_0}^{t} e^{A(t-\tau)} b u(\tau)\,d\tau we developed
earlier, it is obvious that the Laplace transforms of the two terms in the solution are (using
the symbol L to denote the Laplace transform)

L(\Phi(t)) = L(e^{At}) = [sI - A]^{-1}

and

L\left(\int_{t_0}^{t} e^{A(t-\tau)} b u(\tau)\,d\tau\right) = [sI - A]^{-1} b\,U(s).
The last transform is the convolution theorem; it says that the Laplace transform of the
convolution integral is the product of the transforms of the two functions in the integral.
Also note that the Laplace transform of the matrix exponential has exactly the same form as
the transform of the scalar exponential (except the inverse is used and the Laplace variable
is multiplied by the identity matrix).
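The transform of the matrix exponential can also be checked by direct quadrature. This Python sketch (an illustration; the clock's A matrix is reused, and s = 3 is an arbitrary point with positive real part) approximates the Laplace integral of e^{At} and compares it with [sI − A]^{-1}:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-39.4366, -0.01]])
s, dt, n = 3.0, 1.0e-4, 100000        # truncate the integral at T = n*dt = 10

# One-step matrix exponential e^{A dt} via eigendecomposition.
lam, V = np.linalg.eig(A)
Ed = (V @ np.diag(np.exp(lam * dt)) @ np.linalg.inv(V)).real

P = np.eye(2)                          # running value of e^{A t}
F = 0.5 * np.eye(2)                    # trapezoid rule: half weight at t = 0
for k in range(1, n):
    P = P @ Ed
    F += np.exp(-s * k * dt) * P
F *= dt

err = np.abs(F - np.linalg.inv(s * np.eye(2) - A)).max()
print(err)                             # small: quadrature matches the resolvent
```

The truncation at T = 10 is harmless here because both e^{-sT} and the damped oscillation e^{At} have decayed to negligible size.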
As we noted when we introduced the state-space model, the output (for a scalar input
u) is
y = Cx + du.
One of the most important uses of the Laplace transform is the transfer function of a linear
system. The transfer function is \frac{Y(s)}{U(s)} when the initial conditions are zero. In the state-space
model, if y(t) is of dimension m (i.e., there are m outputs from the model), then m transfer
functions are created.
Using the Laplace transforms above gives the transfer function for the state-space
model as

\frac{Y(s)}{U(s)} = C[sI - A]^{-1} b + d.
The transfer function of the linear system can specify the properties of the differential
equation in a Simulink simulation. For example, the transfer functions that result from
taking the Laplace transform of the state-space model for the clock dynamics are (once
again using the symbol L to denote the operation of taking the Laplace transform)

L\left(\frac{dx(t)}{dt}\right) = L\left(\begin{bmatrix} 0 & 1 \\ -39.4366 & -0.01 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0.1148 \end{bmatrix} u(t)\right), \qquad y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t).
Moreover, using the derivation above and the fact that the initial condition is zero for
the transfer function, we get

\frac{Y(s)}{U(s)} = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s & -1 \\ 39.4366 & s + 0.01 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0.1148 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{s^2 + 0.01s + 39.4366} \begin{bmatrix} s + 0.01 & 1 \\ -39.4366 & s \end{bmatrix} \begin{bmatrix} 0 \\ 0.1148 \end{bmatrix}
= \frac{0.1148}{s^2 + 0.01s + 39.4366}.
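The algebra above can be spot-checked numerically. This Python fragment (an illustrative check; s = 1 + 2i is an arbitrary test point) evaluates both forms of the clock transfer function:

```python
import numpy as np

# Clock state-space data from the text.
A = np.array([[0.0, 1.0], [-39.4366, -0.01]])
b = np.array([0.0, 0.1148])
C = np.array([1.0, 0.0])

s = 1.0 + 2.0j
tf_ss = C @ np.linalg.inv(s * np.eye(2) - A) @ b       # C [sI - A]^{-1} b
tf_poly = 0.1148 / (s**2 + 0.01 * s + 39.4366)         # factored result
print(abs(tf_ss - tf_poly))                            # essentially zero
```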
The Simulink model in Figure 2.3 uses the two transfer-function blocks in Simulink
to define the clock example that we developed in Section 1.3. We opened this from the
NCS library with the command Clock_transfer_functions. Notice that the blocks are
Transfer Fcn and Zero-Pole. The first uses the transfer function in exactly the form that
we just developed, whereas the second uses the form with the numerator and denominator
polynomials factored. The numerator polynomial's roots are the zeros of the transfer function
since the roots of the numerator, when substituted for s, result in a value of zero for the
[Figure: Simulink diagram: a Constant (0) through an IC block (initial value 10000/9) feeds both a Transfer Fcn block 0.1148/(s^2 + 0.01s + 39.4366) and a Zero-Pole block 0.1148/((s + 0.005 + 6.27985469577123i)(s + 0.005 - 6.27985469577123i)), each followed by a Scope.]
Figure 2.3. Linear pendulum model using Transfer Function and Zero-Pole-Gain
blocks from the Simulink Continuous Library.
transfer function. Similarly the roots of the denominator are called the poles of the transfer
function since setting s to these values causes the transfer function to become infinite. (As
poles go, these are truly long poles.)
The simulation would result in a value of exactly zero if there were no input since the
transfer function assumes that the initial condition is zero. No initial condition combined
with an input of 0 would mean that everything in the simulation is exactly zero for all time.
For this reason, the model uses the Initial Condition (IC) block from the Signal Attributes
library in Simulink to force an initial value for the input to the transfer functions. This block
causes only the input u(t) to have an initial condition, not the output. To get the output to
have the right value at the start of the simulation is rather delicate. It involves issues about
how long the input stays at the value specified by the IC block, the amplitude of the output
caused by this initial input, and the desired output amplitude. To ensure that the simulation
produces consistent results, the solver for this model was a fixed step solver with a step size
of 0.01 sec. The initial condition for the input was set to 10000/9, the value needed to force
the output to be 0.2 at t = 0. The simulation results then are the same as the state-space and
the original clock simulation results shown above. The difficulties associated with getting
the initial condition right make transfer functions difficult to use for initial value problems.
For this reason, either the state-space model or the model created using integrators (as in
the clock simulation above) is the better choice.
2.3 Linear Feedback Control
One of the more powerful uses for Simulink is the evaluation of the performance of potential control strategies for complex systems. If you are designing an aircraft flight-control
system; a spacecraft’s guidance, navigation, or attitude control system; or an automobile
engine controller, you often have neither the luxury nor the ability to build the device with
the intention of trying the control system and then tinkering with it until it works. The
preferred method for these complex systems is to build a simulation of the device and use
the simulation to tinker. This section will introduce the methods for control system design
and its mathematics and then will show how to use Simulink to tinker with the design.
2.3.1 What Is a Control System?
Early in the industrial revolution, it became clear that some attributes of a mechanical system
required regulation. One of the first examples was the control of water flowing through a
sluice gate to maintain a constant speed for the water wheel. Although many control ideas
were tried, they often operated in an unstable way or hunted excessively, causing the water
wheel to alternately speed up and slow down. James Clerk Maxwell analyzed such a device
and showed that its stability and performance depended on the roots of an equation that came
from the differential equation (the poles that we discussed in Section 2.2). Unfortunately,
the mathematics at the time allowed factoring only of third order and some fourth order
polynomials, so it was very difficult to design a system that involved dynamics that had
more than three differential equations.
Mathematicians found an approach that handled higher order systems, but today the
digital computer and programs such as MATLAB and Simulink have made those methods
interesting only as historical footnotes.
So what exactly is a control system? The easiest way to understand a control system is
to show an example. In the process, we will build a Simulink model that we can experiment
with to understand the various design issues and the consequences of incorrect choices.
One of the simplest and most commonly encountered control systems is that used to
keep the temperature constant for a home heating system. The dynamics for this system
can be grossly described by assuming that the entire house is a single thermal mass which
has both heat loss (via conduction through the walls, windows, floors, and ceilings) and
heat gain from the heating plant (via the radiators, convectors, or air vents, depending on
whether the system uses steam, hot water, or hot air). Simple thermodynamics states that
the temperature, T, of the house satisfies

C\frac{dT}{dt} = -\frac{kA}{l}(T - T_{outside}) + Q_{in}\,u(t),
where C is the thermal capacity of the house,
T is the temperature of the house,
Toutside is the air temperature outside the house,
k is the average coefficient of thermal conductivity of all of the house surfaces,
A is the area of all of the house surfaces,
l is the average thickness of the insulation in the house,
Qin is the heat from the heat exchanger,
u(t), the control signal applied to the furnace, is either zero or one.
Most heating appliances in the United States use the English system of units, where
heat is in British thermal units (BTUs), temperature is in degrees Fahrenheit, dimensions are
in feet, and thermal capacity is defined as the thermal mass of the house times the specific
heat capacity of the materials in the house. (The units are BTUs per degree F.)
Clearly, a house does not consist of a single material, or a single compartment with a
temperature T , or a single thermal mass. For this reason, a more detailed model would have
many terms corresponding to different rooms in the house, with different losses corresponding
to the walls, doors, windows, floors, ceilings, etc. We will develop a more complex
model like this in Chapter 6.

Figure 2.4. Home heating system Simulink model (Thermo_NCS). This model
describes the house temperature using a single first order differential equation and assumes
that a bimetal thermostat controls the temperature.
We will use a model that is one of The MathWorks Simulink demonstrations. (This
model, sldemo_househeat, comes with Simulink, but we have a version of the model in
the NCS library.) Open the model by typing Thermo_NCS at the MATLAB prompt. This
opens the model shown in Figure 2.4.
The model uses meter-kilogram-second units, so there are conversions from degrees
Fahrenheit to Celsius and back in the model. (These are the blocks called F2C and C2F,
respectively). One new feature of Simulink that we have not seen before is in the model.
It is the multiplexer block (or the “mux” block) that is found in the Signal Routing library.
This block converts multiple signals into a single vector signal. In the model, it forces the
Scope block to plot two signals together in a single plot. We configured the Scope block
in this model to show two separate plot axes; we did this by opening the plot “parameters”
dialog by double clicking the icon at the top of the plot that is next to the printer icon and
then changing the “Number of axes” to two.
We have seen the use of a subsystem in the clock example. Here we use subsystems to
group two of the complex models into a visually simplified model by grouping the equations
for the house and the thermostat inside subsystem blocks. The house model contains the
differential equation above, and the thermostat model uses the “hysteresis” block from the
“Discontinuous” library in Simulink.
The house model is a subsystem in the Simulink model that implements the differential
equation above. The subsystem is in Figure 2.5.
The thermal capacity of the house is computed from the total air mass, M, in the
house and the specific heat, c, of air. The thermal conductivity comes from the equivalent
thermal resistance to the flow of heat through the walls, ceilings, glass, etc.

[Figure: subsystem diagram with gain blocks 1/(M*c) and 1/Req, an integrator 1/s, inputs Heater QDot and Outdoor Temp (Tout), and output Indoor Temp (Tin).]
Figure 2.5. Subsystem "House" contains the single differential equation that
models the house.

All of these data and the data for the heating system (which is electric) are loaded from the MATLAB
M-file Thermdat_NCS.m in the NCS Library. To see the various data and their functional
relationship to the dimensions of the house and the insulation properties, etc., edit and
review the data in this file.
The model uses a sinusoidal input for the outside temperature that gives a variation
of 15 deg F around an average temperature of 50 deg F. The plots of the indoor and outdoor
temperatures and the total heating cost for a 24-hour (86,400 sec) period are in Figure 2.6.
We can now see the features of this home heating controller (and any other control
system for that matter). It consists of a system that needs to be controlled (in this case, the
temperature of the house needs to be controlled, and the thermodynamics of the house is the
system). The control system needs to measure the variables we are trying to control (in this
case the thermostat measures the house temperature), and it needs a device to calculate the
difference between the desired and actual variable. It also needs a device (a controller) that
converts the desired state into an action that causes the system to move toward the desired
condition. In this case, the control is the heater; the controller is the thermostat. (The
thermostat plays the dual role of both the temperature sensor, working on the difference
between the actual and desired temperature values, and the control in this application.) In
order to model the thermostat, we need to look at how this device works.
The photograph in Figure 2.7 shows a typical thermostat. The temperature difference
is detected using a metal coil (2) made from two dissimilar materials, each with different
heat expansion rates (so-called bimetals). The bimetal’s expansion difference translates into
a torque that causes a contact (4 and 6) to open and close. A lever (3) sets the tension of
the coil and therefore the room temperature set point. Moving the lever makes the contact
coil rotate closer or further away from the contact (6). When the temperature drops below
the set point, the bimetal rotates and the contacts close, making the heat source come on.
When the temperature rises above the set point, the contacts open and the heat goes off.
The contact 4 has a magnet that holds it closed (just to the left of the (6) in Figure 2.7).
This forces the contacts to stay closed until the temperature of the room rises well past the
(This figure is from the article on thermostats in Wikipedia, The Free Encyclopedia web site [50].)
[Figure: top plot shows the indoor temperature (with a 70 deg F set point) cycling around the set point and the outdoor temperature varying between about 35 and 65 deg F; bottom plot shows the heating cost rising to about $40 over the 86,400-sec day.]
Figure 2.6. Results of simulating the home heating system model. The top figure
shows the outdoor and indoor temperatures, and the bottom figure shows the cost of the
heat over the 24-hour simulation.
Figure 2.7. A typical home-heating thermostat [50].
[Figure: input-output map of the hysteresis block: the output is 1 below Ton and 0 above Toff, with Tset between them; between Ton and Toff the output depends on the direction of approach.]
Figure 2.8. Hysteresis curve for the thermostat in the Simulink model.
point at which the contacts closed. (The force from the magnet is greatest when the contacts
are closed because of the close proximity of the magnet and the contact.) This effect, called
hysteresis, results in the room temperature’s rising above the desired value and then falling
below the desired value by some amount. Look closely at the room temperature in the plot
above; the room temperature rises to about 5 deg above and falls about 5 deg below the
desired value. In the next section, we will develop a simple controller that will eliminate
this problem.
The thermostat’s mechanical motion can be modeled using the built-in hysteresis block
in the Simulink discontinuous block library. This block captures the essential features of
the thermostat, as can be seen from the input-output map of the block. Figure 2.8 shows the
input-output graph.
An input-output map of a device shows how the output changes as the input varies.
In the thermostat, the temperature is the input, and the output is either 0 (off) or 1 (on).
When the temperature is below the set value (Tset ), the thermostat is on and the output of the
block is 1; as the temperature increases because of the heat being provided, the temperature
inevitably becomes greater than the off temperature (Toff ) and the thermostat turns off (the
output becomes 0). As the house cools down, the temperature must fall below the value
Toff to the value Ton before the output changes to 1 again. The difference between Tset and
Toff and Tset and Ton is set to 5 deg in the model by changing the values in the dialog that
opens when you double click on the block.
Once you have built the model, try different values for the hysteresis, change the
temperature profile for the outside, and see if you can find an outside temperature below
which the house will not stay at the desired set point temperature.
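If you want to experiment with the same ideas outside Simulink, here is a small Python sketch of the loop: first order house dynamics driven by a bang-bang thermostat with a 5-degree hysteresis band. The numerical values are illustrative assumptions, not the data loaded from Thermdat_NCS.m:

```python
import numpy as np

# Illustrative parameters (NOT the values from Thermdat_NCS.m):
Mc = 1.0e4        # thermal capacity M*c
Req = 1.0e-2      # equivalent thermal resistance of the house surfaces
Qin = 1.0e4       # heater output when on
Tset, hyst = 70.0, 5.0
Tout = 40.0       # constant outside temperature for this sketch
dt = 0.1

T, heater_on = 65.0, True
history = []
for _ in range(100000):                   # 10,000 sec of simulation
    # Thermostat with hysteresis: off above Tset + 5, back on below Tset - 5.
    if T > Tset + hyst:
        heater_on = False
    elif T < Tset - hyst:
        heater_on = True
    dT = (-(T - Tout) / Req + Qin * heater_on) / Mc
    T += dT * dt
    history.append(T)

tail = np.array(history[50000:])          # steady-state limit cycle
print(tail.min(), tail.max())             # cycles in roughly the 65-75 band
```

Shrinking the hysteresis band shrinks the limit cycle, which previews the proportional controller discussed in the next section.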
2.3.2 Control Systems and Linear Differential Equations
The basic dynamics for the heating system is the first order linear differential equation
for the heat flow into and out of the house. These dynamics are linear, but the system is
nonlinear because of the thermostat. The controller nonlinearity causes the heat to cycle to
a temperature that is 5 deg too hot and then fall to a temperature that is 5 deg too cold. This
10-deg swing in the room can cause discomfort for the occupants since the room is either too
hot or too cold. What could we do to fix this? The simple answer is to make the controller
linear, because if you did so, the temperature would stay almost exactly at the set point.
Therefore, we would like to control the heater so that the heat output is proportional to
the difference between the desired and actual temperatures; it should then be possible to keep
the temperature at the desired value without the 10-deg swing produced by the thermostat.
the heater using a device that makes the heat output proportional to the input. Electronic
devices exist that allow this form of control. We will actually build such a controller in
Chapter 6, refine it further in Chapter 7, and look at a more complete version in Chapter 8.
Another important aspect of a control system is the fact that a properly operating
controller causes the system to deviate away from the set point by only a small amount.
This means that even a very nonlinear system is linearizable for purposes of control. Let
us formalize these ideas and use the results to justify the use of linear systems analysis for
control systems.
2.4 Linearization and the Control of Linear Systems
In the most general situation, the underlying mathematical model of a system is nonlinear.
Nonlinearities occur naturally because of the three dimensional world we live in. For
example, a rotation about an axis is described through a coordinate transformation which
involves the sine and cosine of the rotational angle. (We will discuss this in Chapter 3.)
Another example is the air drag that is a function of the square of the speed; we modeled this
when we were exploring the Leaning Tower of Pisa in Chapter 1. Other nonlinear effects
are a consequence of the properties of the materials that we encounter. All control system
designs use linear differential equations to model the physical systems under control. This
is true because a control system is usually trying to keep the system at a particular value
or is trying to track an input. In either case, the perturbations away from the desired input
are small. We will show an example of this in Section 2.4.2, but first let us see how we go
about finding a model for small perturbations. (It is the linear model.)
2.4.1 Linearization
As we stated above, the goal of a control system is to make some system attribute constant or
to follow a slowly changing command. (Think of the autopilot on an airplane that is trying
to keep the altitude of the plane constant.) In other words, the control is trying to keep
perturbations small. To make this concept more precise, let us consider a general nonlinear
differential equation that models a physical system. This differential equation might come
from a Simulink model that you have built. The form of the equation then will be a set of
nonlinear differential equations that couple through nonlinear functions as follows:

\frac{dx_i(t)}{dt} = f_i(x_1, x_2, \ldots, x_n, t) + g_i(u_1, u_2, \ldots, u_m, t), \qquad i = 1, 2, \ldots, n.
There are n first order differential equations in this description. The n values of xi (t) are
lumped into an n-vector x(t), the m values of ui (t) are lumped into an m-vector u(t), and
the n functions f_i(x_1, x_2, \ldots, x_n, t) are lumped into an n-vector f. Doing this changes the
way we write the differential equation to the more compact

\frac{dx(t)}{dt} = f(x, t) + g(u, t).
To investigate small perturbations δx(t) away from some nominal solution xnom (t) of this
equation, let x(t) = xnom (t) + δx(t) and u(t) = unom (t) + δu(t), and substitute them into
the differential equation. The result is

\frac{d(x_{nom}(t) + \delta x(t))}{dt} = f(x_{nom}(t) + \delta x(t), t) + g(u_{nom}(t) + \delta u(t), t).
If the perturbations are small enough, then a Taylor series gives a first order perturbation
away from the nominal as

\frac{dx_{nom}(t)}{dt} + \frac{d(\delta x(t))}{dt} = f(x_{nom}(t), t) + \frac{\partial f(x_{nom}(t), t)}{\partial x}\,\delta x(t) + \cdots
+ g(u_{nom}(t), t) + \frac{\partial g(u_{nom}(t), t)}{\partial u}\,\delta u(t) + \cdots.
In these equations, the partial derivatives of any vector with respect to another vector are
matrices defined as follows:

\frac{\partial f}{\partial x} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}.
The Taylor series terms beyond the first derivatives are small as long as the perturbations
are small, so we can neglect them.
Using the fact that the nominal values satisfy the differential equation, i.e., that

\frac{dx_{nom}(t)}{dt} = f(x_{nom}(t), t) + g(u_{nom}(t), t),
the differential equation for the perturbations is

\frac{d(\delta x(t))}{dt} = \frac{\partial f(x_{nom}(t), t)}{\partial x}\,\delta x(t) + \frac{\partial g(u_{nom}(t), t)}{\partial u}\,\delta u(t).
Since the matrices in this equation are evaluated along the nominal trajectory, they are in
general time varying, and the equation is linear and in the state-variable form \frac{dx(t)}{dt} = Ax(t) + Bu(t).
In this state equation, the matrix A is \frac{\partial f(x_{nom}(t), t)}{\partial x}, the matrix B is \frac{\partial g(u_{nom}(t), t)}{\partial u}, the
states are the perturbations \delta x(t), and the controls are the inputs \delta u(t) that create the
perturbations.
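The Jacobian matrices above can also be formed numerically when analytic derivatives are inconvenient. As a Python sketch (using the nonlinear pendulum of Chapter 1 with this chapter's clock coefficients as an assumed example), a central-difference Jacobian at the hanging equilibrium x_nom = 0 recovers the linear A matrix:

```python
import numpy as np

# Nonlinear pendulum dynamics (illustrative; coefficients borrowed from
# the clock model used earlier in this chapter).
def f(x):
    return np.array([x[1], -39.4366 * np.sin(x[0]) - 0.01 * x[1]])

def jacobian(f, x, eps=1.0e-6):
    """Central-difference approximation of df/dx at the point x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x_nom = np.zeros(2)                       # linearize about the equilibrium
A = jacobian(f, x_nom)
A_exact = np.array([[0.0, 1.0], [-39.4366, -0.01]])
print(np.abs(A - A_exact).max())          # small: the linearization recovers A
```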
To recap this result, the use of the linear equation is justified by the fact that when
the control system is working properly, the perturbations are small and the Taylor series is
an excellent approximation to the actual dynamics. For this reason, discussions on control
systems design generally consider only linear models. In a later chapter, we will show
how to use the above linearization to create the linear model directly from Simulink. Before
we do this, let us create some Simulink models in various forms for a mass attached to
a spring.
2.4.2 Eigenvalues and the Response of a Linear System
Since linear differential equations form the basis for control system design, it is important
to understand the relationships between the parameters in the state-variable model and the
solution of the differential equations. Let us create some Simulink models for a spring-mass
system that is typical of many mechanical systems. A spring stores mechanical energy
whenever it is compressed or extended, and it returns that energy as it relaxes back to its
natural length. The force that the spring applies is typically nonlinear. However, consistent
with the discussion above, we can linearize the force to give f = Kx, where x is the displacement.
(K is the proportionality constant for the spring; the spring force enters the equation of motion
with a negative sign so that it opposes the displacement.) The units of K are force per unit
displacement (Newtons per meter or pounds per inch, for example).
This linear equation by itself describes an ideal spring; there is no friction. When there is
friction, it is usual to add another linear force, a so-called damping force, that is proportional
to the speed of the object. The model for this is f = D\,\frac{dx}{dt}. A spring with a mass attached
therefore has a model that is very much like the pendulum in Chapter 1. Let us develop an
equation of motion for the spring-mass system in the figure at the right.

[Figure: a mass M hangs from a spring; the displacement x(t) is positive down, the weight Mg acts downward, and the combined spring and damping force Kx + D dx/dt acts upward.]

In the figure, the motion x(t) is positive down, so the forces from the spring and the
damping are negative (upward). The figure contains a diagram that shows the force balance
used to get the equation of motion.
Using the force balance in the figure along with Newton's second law gives

M\frac{d^2x}{dt^2} = -Kx - D\frac{dx}{dt} + Mg.
We can now easily build a Simulink model for this equation. The model Spring_Mass1
is in the NCS library, but as usual, you should create the model yourself before you open it
from the library. The model is in Figure 2.9. Remember that all of the data for the model
load from a callback function.
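As a cross-check on the equation of motion, a few lines of Python (a sketch with illustrative values of M, K, and D; the actual model data come from the callback) time-step M d²x/dt² = −Kx − D dx/dt + Mg and settle at the static deflection Mg/K:

```python
# Illustrative parameters (assumed, not the NCS model data):
M, K, D, g = 1.0, 1.0, 0.2, 9.81
x, v, dt = 0.0, 0.0, 1.0e-3

for _ in range(100000):                    # 100 sec of semi-implicit Euler
    a = (-K * x - D * v + M * g) / M       # Newton's second law from the text
    v += a * dt
    x += v * dt                            # update position with the new speed

print(x)                                   # settles near the static deflection M*g/K
```

After the oscillations die out, the displacement sits at Mg/K = 9.81, the point where the spring force balances the weight.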
We can also use the state-space methods we developed above to convert this equation
into a state-space model. Letting

x = \begin{bmatrix} x(t) \\ \frac{dx(t)}{dt} \end{bmatrix},

the model is

\frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -\frac{K}{M} & -\frac{D}{M} \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} u(t).

[Figure: diagram built from gain blocks M*g, 1/M, D, and K, two integrators (the first with initial condition xo), and a Scope.]
Figure 2.9. Model of a spring-mass-damper system using Simulink primitives.

[Figure: vectorized diagram: a Step input through gain Bu and a Matrix Multiply by A = [0 1; -omegan^2 -2*zeta*omegan] feed an Add block forming the state derivative Ax + Bu, which a vector integrator 1/s turns into the position and speed of the mass.]
Figure 2.10. Model of a spring-mass-damper system using the state-space model
and Simulink's automatic vectorization.
A Simulink model for this is easy to build. The first, and easiest, way to build the
model is to use the State-Space block in the Continuous library (you should try this), but
we will use an alternate method that exploits the automatic use of vectors in Simulink. The
model (shown in Figure 2.10) is Spring_Mass2 in the NCS library.
This model uses a characterization of the parameters of the second order differential
equation that makes the calculation of the solution easier to understand. The parameters are
the undamped natural frequency (denoted by \omega_n) and the damping ratio (denoted by \zeta).
The values of these parameters are

\omega_n = \sqrt{\frac{K}{M}} \qquad \text{and} \qquad \zeta = \frac{D/M}{2\omega_n}.
[Figure: overplotted responses of the model for ζ = 0.1, 0.2, 0.3, 0.4, and 0.5 over 30 sec; the smaller the damping ratio, the larger and longer-lasting the oscillations.]
Figure 2.11. Changing the value of the damping ratio from 0.1 to 0.5 in the model
of Figure 2.10.
When we insert these values in the state-space differential equation, we get

\frac{dx(t)}{dt} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} u(t).
In the model we let u(t) = g, corresponding to the force of gravity applied at t = 0.
The simulation shows the motion when ωn = 1 and ζ = 0.1, which is identical to the
Spring_Mass1 simulation. When this model opens, MATLAB sets the value for omegan,
zeta, and M to 1, 0.1, and 1, respectively.
The motion of the mass is a sine wave whose amplitude gradually decreases. We can
see what happens if we vary the parameters in this model. First, keep ωn = 1 and let ζ
vary between 0 and 1. The derivative is Ax + bu and the input is a function called the step
function, which is found in the Sources library.
Thus, as an exercise, vary zeta in the model from the value 0.1 to 1 by changing the
value of zeta in MATLAB and rerunning the model. For each value of zeta entered, you
will see a plot that looks like Figure 2.11. (This figure is an overplot for values of zeta from
0.1 to 0.5 in steps of 0.1.)
It should be evident that the bigger we make zeta, the more rapidly the oscillations
disappear. (We say the oscillations are damped.) It is important to understand this since it
is critical in the design of a control system.
The second exercise is to increase the value of ωn . As you do so, you will see that
the frequency of the oscillations increases. There is a subtle connection between the actual
frequency of the oscillation and the damping ratio.
Remember that we showed that the solution of the differential equation in state-space
form depends on the transition matrix given by e^{At} = T^{-1} e^{\Lambda t} T, where \Lambda is the diagonal
matrix that contains the eigenvalues of the matrix A.
Using the undamped natural frequency and damping ratio parameters in the A matrix,
let us see what its eigenvalues are. The eigenvalues come from setting the determinant
\det(\lambda I - A) to 0, so

\det(\lambda I - A) = \det\begin{bmatrix} \lambda & -1 \\ \omega_n^2 & \lambda + 2\zeta\omega_n \end{bmatrix} = \lambda^2 + 2\zeta\omega_n\lambda + \omega_n^2 = 0.
There are two eigenvalues, the roots of this equation, given (assuming that \zeta \le 1) by

\lambda_1 = -\zeta\omega_n + i\sqrt{1-\zeta^2}\,\omega_n, \qquad \lambda_2 = -\zeta\omega_n - i\sqrt{1-\zeta^2}\,\omega_n.
Therefore, the diagonal matrix e^{\Lambda t} is

e^{\Lambda t} = \begin{bmatrix} e^{(-\zeta\omega_n + i\sqrt{1-\zeta^2}\,\omega_n)t} & 0 \\ 0 & e^{(-\zeta\omega_n - i\sqrt{1-\zeta^2}\,\omega_n)t} \end{bmatrix}.
Since the solution is T^{−1} e^{Λt} T, the frequency of the oscillations is √(1 − ζ²) ωn and the
damping comes from the term e^{−ζωn t}. Thus, the closer ζ gets to 1, the less oscillatory
the response is and the faster the initial conditions disappear. The complete solution of
the differential equation comes from e^{At} = T^{−1} e^{Λt} T. We ask you to complete this in
Exercise 2.5. (You need to compute T and T^{−1} and finish the multiplication.)
Several additional facts about the solution to this spring-mass equation are important.
First, when ζ > 1, the two eigenvalues are real and the solution becomes the sum of two
exponentials (i.e., it is no longer oscillatory). Second, if the mass or the damping term were
negative for any reason (such as a badly designed control system), the solution would grow
without bound. (The term ζωn in the solution would be negative, so the solution terms
e^{(−ζωn + i√(1−ζ²) ωn) t} would grow without bound.)
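These formulas are easy to check numerically. The sketch below is in Python for illustration (the book's own tools are MATLAB and Simulink); it finds the roots of λ² + 2ζωn λ + ωn² = 0 with the quadratic formula and compares them with −ζωn ± i√(1 − ζ²) ωn for the Spring_Mass1 values ζ = 0.1 and ωn = 1.

```python
import cmath
import math

def spring_mass_eigenvalues(zeta, omega_n):
    """Roots of lambda^2 + 2*zeta*omega_n*lambda + omega_n^2 = 0."""
    b = 2 * zeta * omega_n
    c = omega_n ** 2
    disc = cmath.sqrt(b * b - 4 * c)   # complex sqrt handles zeta < 1
    return (-b + disc) / 2, (-b - disc) / 2

zeta, omega_n = 0.1, 1.0
lam1, lam2 = spring_mass_eigenvalues(zeta, omega_n)

# Closed-form eigenvalues from the text.
expected = complex(-zeta * omega_n, math.sqrt(1 - zeta ** 2) * omega_n)
print(lam1, expected)
```

For ζ > 1 the same code returns two real roots, matching the nonoscillatory case described above.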
The solution to the state-space differential equation using Laplace transforms was
X(s) = [sI − A]^{−1} x(0) + [sI − A]^{−1} b U(s).

The inverse Laplace transform involves exponentials of the form e^{−p_i t}, where p_i is one of
the roots (poles) of the denominator polynomial det(sI − A). This determinant is the same
one we get when we compute the eigenvalues of the matrix A, so poles and eigenvalues
are the same creature in different clothes. The major difference is in the ease and accuracy
of calculation. Poles are the roots of the polynomial; factoring it is inherently prone to
Figure 2.12. Generic components of a control system. (The diagram shows a White Noise source driving the Dynamics of the Disturbance; the Input Signal passing through the Control Gain and the Dynamics of the Controller; the Dynamics of the System Being Controlled producing the Output; and the Dynamics of the Measurement Device closing the loop.)
numeric issues, whereas the eigenvalues of a matrix are computed using numerically robust
algorithms (see NCM, Chapter 10, in particular Section 10.5).
These are all important aspects for the understanding of the control of a linear system,
which we explore next.
2.5 Poles and the Roots of the Characteristic Polynomial
Now that we know that poles and eigenvalues are the same, let us use this knowledge.
By now, you should be familiar with the notation used in Simulink for modeling
linear systems. The idea is to use the transfer function (which is the Laplace transform of
the output divided by the Laplace transform of the input) as a multiplicative element inside
the block. As a result, you can write the equations in the pictorial form of a block diagram,
where the multiplications use the blocks (with the transfer function inside) and parallel paths
are represented by lines that join at summing junctions (forming sums or differences). This idea
predates Simulink and has been used to represent control systems for over 50 years.
Figure 2.12 shows the general form of a linear control system when the device to be
controlled is a linear differential equation. Let us bring each of the elements in this figure
into the control system framework we established above; from left to right in the diagram
these are as follows:
• The Input Signal is the set point (in the case of the thermostat) which models the input
that the control system is trying to maintain (or track).
• The Control Gain is one or more gains in the system. They ensure the control system
performs properly.
• The Dynamics of the Controller models the control device. (This might not be dynamic, as was the case for the thermostat.)
• The Dynamics of the Disturbance is a model for the way that “the world” is acting
to cause the system to deviate from the desired performance. Note that the model
assumes that the disturbance adds to the controller.
• The Dynamics of the System Being Controlled —often called the plant dynamics—is
the differential equation (or transfer function) of the system that is to be controlled.
• Notice that the Output is measured, and this measurement is compared with the input
signal by forming a difference. The controller uses the difference signal to force the
system to react as desired.
Each of these functions exists in any control system. They sometimes use a common
device (as was the case for the thermostat, where the difference, the set point, and the
measurement are in this single device).
2.5.1 Feedback of the Position of the Mass in the Spring-Mass Model
Let us build a simple control system that will attempt to control the spring-mass damper
system that we developed in Section 2.4.2. We assume that you are holding the mass at
some location below its rest position, and you then let it go. We also assume that the control
system will attempt to stop the mass motion as it approaches the rest position, but with no
oscillation and as little residual motion as possible. The Simulink simulation in Figure 2.13
is the model. By now, this model should be easy for you to build. Try to build it without
looking too closely at the figure. (Use the model Spring_Mass1 from the NCS library
as the starting point, or open the complete model from NCS library using the command
SpringMass_Control.) As you review the diagram, identify each of the generic control
system features that we discussed.
This model has the values K = 20, D = 2, M = 10, and initially the gain = 0. The
initial condition for y has been set up so the mass is initially at rest (at t = 0 this requires
that d²y/dt² = 0 and dy/dt = 0 in the differential equation), so the initial value of y needs to be
Figure 2.13. A simple control that feeds back the position of the mass in a spring-mass-damper system.
computed. With the gain at 0, the initial position is −g/(K/M). (As an exercise, show why this is
so.) This initial condition for y is an external input to the integrator using the dialog for the
integrator y. With this initial condition, the mass is at rest when the simulation starts.
The first thing you need to try in this model is to change the value of gain and see what
happens. If you have done what was asked, i.e., varied the value of gain, you should see that
you cannot achieve the desired result (i.e., having the motion stop without an oscillation).
All that seems to happen as the value of gain is changed is that the oscillation frequency
increases. Why is this and what can be done to achieve the desired performance?
The why part of this question is easy: it can be answered by inspection from the
block diagram. Look carefully at what the feedback is doing. The gain multiplies the
position (after the summing junction), which is exactly what the spring force is doing.
Thus, increasing the gain is like changing the restoring force of the spring. In the solution
for the spring-mass system we developed in the previous section, the result of changing K
is to change the parameter ωn = √(K/M). Therefore, changing gain will only change the
frequency of oscillation of the mass, and it will never cause the mass to stop without an
oscillation.
2.5.2 Feedback of the Velocity of the Mass in the Spring-Mass Model
From this discussion and the solution we developed in the previous section, it should be clear
that the only way to cause the oscillation to stop quickly is to alter the value of ζ = (D/M)/(2ωn). The
most direct way to do this is to change the effective D. Unfortunately, as it is constructed,
the control system will not do this.
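For the model's numbers this damping ratio is tiny, which is why varying the gain alone cannot stop the oscillation. A quick check (Python, for illustration; the values are the ones quoted for the model):

```python
import math

K, D, M = 20.0, 2.0, 10.0           # values from the SpringMass_Control model
omega_n = math.sqrt(K / M)           # undamped natural frequency, rad/sec
zeta = (D / M) / (2 * omega_n)       # damping ratio zeta = (D/M)/(2*omega_n)

print(omega_n, zeta)                 # omega_n ~ 1.414, zeta ~ 0.0707
```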
If, however, we were to measure the velocity of the mass and then multiply this by
a gain, we will create a force that is proportional to the velocity. This is equivalent to the
mass damping term D dy/dt that appears in the differential equation. Thus, we will modify
the Simulink diagram to allow a feedback from the velocity. The resulting block diagram,
shown in Figure 2.14, is the model SpringMass_Vel_Control in the NCS library.
Figure 2.14. Feed back the velocity of the mass instead of the position to damp the oscillations.
Figure 2.15. Time response of the mass (position at the top and velocity at the bottom) for the velocity feedback controller.
This model uses a measurement of the velocity of the mass as the feedback variable.
If you open this model and play with the gain variable, you will see that the response of the
mass to the change in the position of the mass is less oscillatory as the gain is increased. In
fact when the gain is set to 1, there is only one cycle of oscillation before the mass settles,
and when the gain is 2, there are no longer any oscillations. (See the plot of the response
with these different gain values in Figure 2.15.)
For now, let us bypass how one would measure the speed and accept the fact that the
control objective (that of ensuring that the mass stops with little or no oscillatory motion) is
achieved with this control. Instead, let us try to put in place a way of determining why the
second of the two control systems achieved the desired result while the first did not. To do
this we will use the state-space model (with the built-in state-space block) for the various
pieces in the control system.
The Simulink model then looks like Figure 2.16.
Figure 2.16. Using the state-space model for the spring-mass-damper control system.
The state-space model of the spring is

dx/dt = [ 0  1 ; −K/M  −D/M ] x + [ 0 ; 1/M ] u(t),

[ y ; dy/dt ]_measured = [ 1 0 ; 0 1 ] x.
Moreover, from the model, we see that u(t) = Gain(Stepinput − dy/dt) − 32.2. If we substitute
this into the state-space model, we get the state-space model of the closed loop system as

dx/dt = [ 0  1 ; −K/M  −D/M ] x + [ 0 ; 1/M ] ( Gain (Stepinput − dy/dt) − 32.2 ),

dy/dt = [ 0 1 ] x,
which reduces to the model (using the fact that [ 0 ; 1 ][ 0 1 ] x = [ 0 0 ; 0 1 ] x)

dx/dt = [ 0  1 ; −K/M  −(Gain + D/M) ] x + [ 0 ; 1/M ] ( (Gain Stepinput) − 32.2 ).

(The Gain*M block in the model multiplies the gain by M, which is why the gain enters
the damping term as Gain rather than Gain/M.)
This illustrates clearly that the gain term is equivalent to a change in the damping
since the gain adds to the damping term D/M.
The eigenvalues of the state matrix in this equation are the roots of the polynomial

det(λI − A) = det [ λ  −1 ; K/M  λ + Gain + D/M ] = λ² + (Gain + D/M)λ + K/M,
and they clearly change as the gain changes. Let us draw a plot that will allow us to see
how the roots change.
Figure 2.17. Root locus for the spring-mass-damper system. Velocity gain varying from 0 to 3.
If we let the gain vary from 0 to 3 in steps of 0.1, and we use the parameter values
in the model above (i.e., M = 10; D = 2; K = 20), the eigenvalues change as shown in
Figure 2.17. Remember that the response of the system is a sum of exponentials, where
each of the exponents is an eigenvalue. Thus, as long as the eigenvalues are complex, the
solution will be oscillatory. When the eigenvalues are all real and negative, on the other
hand, the solution will asymptotically decay with no oscillations. The plot we created allows
us to select a gain that will ensure that this happens (and meet the requirements that the
control system had to satisfy). This plot, called a “root locus” by control engineers, is the
most useful way of understanding the effect on a control system’s response when the gain is
changed. Before computers, the rules for creating a root locus by hand were quite elaborate.
Fortunately, today we can do this easily using MATLAB (and Simulink). For example, the
code that created the plot is

M = 10; D = 2; K = 20;        % parameter values from the model
A = [0 1; -K/M -D/M];         % open loop state matrix
ev = [];
for Gain = 0:0.1:3
    ev = [ev eig(A - [0 0; 0 Gain])];  % eigenvalues with velocity feedback
end
plot(real(ev), imag(ev), 'x')
axis('square')
grid
Figure 2.18. Root locus for the position feedback control. Gain changes only the
frequency of oscillation.
2.5.3 Comparing Position and Rate Feedback
In contrast to this root locus, let us look at the locus for the first control system we developed.
In this system, the gain multiplied the position of the mass, and the control system model
in state-space form is

dx/dt = [ 0  1 ; −K/M  −D/M ] x + [ 0 ; 1/M ] ( Gain (Stepinput − y) − 32.2 ),

y = [ 1 0 ] x.
As we did above, multiplying the matrices gives

dx/dt = [ 0  1 ; −(K/M + Gain)  −D/M ] x + [ 0 ; 1/M ] ( (Gain Stepinput) − 32.2 ),

y = [ 1 0 ] x.
Therefore, the eigenvalues of the matrix

[ 0  1 ; −(K/M + Gain)  −D/M ]

will be the root locus. Figure 2.18 shows the plot.
As we observed before, this control does nothing but change the frequency because
the only change is in the imaginary part of the eigenvalues.
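The difference between the two feedback choices shows up immediately in the closed loop eigenvalues. The Python sketch below is illustrative (the book's own computations use MATLAB); it assumes the gain enters as in the root-locus code above, adding directly to the damping or spring term. With rate feedback the roots move left and eventually become real, while with position feedback the real part stays fixed at −D/(2M) and only the frequency changes.

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

K, D, M = 20.0, 2.0, 10.0
for gain in [0.0, 1.0, 2.0, 3.0]:
    vel = eig2(0, 1, -K / M, -(D / M + gain))    # velocity (rate) feedback
    pos = eig2(0, 1, -(K / M + gain), -D / M)    # position feedback
    print(gain, vel, pos)
```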
2.5.4 The Structure of a Control System: Transfer Functions
A generic control system has several essential components. These are the plant, which we
will denote by G(s); the measurement, which we will denote by H (s); and the control
dynamics (which might be only a gain), which we denote by K(s). The structure of the
control system is a feedback loop: the input U(s) and the measurement H(s)Y(s) meet at a
summing junction whose output is the error E(s); E(s) passes through K(s) to give the control
C(s), which drives the plant G(s) to produce the output Y(s); and Y(s) is fed back through H(s).
It is a simple matter to write the equation for the transfer function of the closed loop
control system. If we denote the difference between the input on the left and the measurement
as E(s), then
E(s) = U (s) − H (s)Y (s),
Y (s) = K(s)G(s)E(s),
which we solve for the transfer function T(s) = Y(s)/U(s) to give

Y(s)/U(s) = K(s)G(s) / ( 1 + K(s)G(s)H(s) ).
Let us take this equation one step further by replacing each of the transfer functions with
their respective numerators and denominators. Thus, if the transfer functions are

G(s) = n1(s)/d1(s),   K(s) = K n2(s)/d2(s),   H(s) = n3(s)/d3(s),

then

T(s) = K n1(s)n2(s)d3(s) / ( d1(s)d2(s)d3(s) + K n1(s)n2(s)n3(s) ).
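The closed loop denominator d1(s)d2(s)d3(s) + K n1(s)n2(s)n3(s) can be assembled directly from the polynomials of the individual blocks. Here is an illustrative Python sketch (the book itself works in MATLAB); polynomials are stored as coefficient lists, highest power first, and multiplied by convolution. The example blocks are a hypothetical unity-feedback loop around the spring-mass plant 1/(s² + 0.2s + 1).

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (highest power first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polyadd(p, q):
    """Add two polynomials, padding the shorter one with leading zeros."""
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + p
    q = [0.0] * (n - len(q)) + q
    return [a + b for a, b in zip(p, q)]

# G(s) = n1/d1, K(s) = K*n2/d2, H(s) = n3/d3
n1, d1 = [1.0], [1.0, 0.2, 1.0]   # spring-mass plant 1/(s^2 + 0.2 s + 1)
n2, d2 = [1.0], [1.0]             # pure gain controller
n3, d3 = [1.0], [1.0]             # ideal (unity) measurement
K = 2.0

closed_loop_den = polyadd(polymul(polymul(d1, d2), d3),
                          [K * c for c in polymul(polymul(n1, n2), n3)])
print(closed_loop_den)            # [1.0, 0.2, 3.0]
```

The closed loop poles are then the roots of this polynomial, although, as the text notes, computing them as eigenvalues of a state matrix is numerically preferable.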
The inverse Laplace transform of this transfer function comes from the partial fraction
expansion into a sum of terms involving the roots of the denominator polynomial. This
expansion is

T(s) = a0 + Σ_{j=1}^{n} a_j / (s + p_j).

The denominator term s + p_j is the jth factor of the denominator polynomial (i.e., −p_j
is the jth root of the polynomial d1(s)d2(s)d3(s) + K n1(s)n2(s)n3(s)). These “poles” of
the closed loop transfer function obviously determine the response of the system. Notice
that the denominator polynomial is determined from the numerator and denominator of
K(s)G(s)H (s); i.e., the poles are determined by the numerator, denominator, and gain
terms of the individual blocks in the system. (These result from the open loop transfer
functions of the system, the transfer function that would result if there were no feedback.)
We could compute the root locus as the gain K varies using this polynomial equation,
but this is not numerically robust. As was pointed out above, factoring polynomials is not
a good idea. However, the use of Laplace transforms in block diagrams does allow some
control system features to be readily developed. For example, in the control system model
we developed above, how would we create the measurement of the velocity of the mass?
One way would be to measure the position and differentiate it. A block that would do that
would contain a single s (since the Laplace transform of the derivative df/dt is sF(s), where
F(s) is the Laplace transform of f(t)). If one were to try to build an analog device that
would do this (using an electronic or mechanical device, for example), it becomes obvious
rather quickly that this cannot be easily done. A simple idea would be to use a coil of wire
and a magnet attached to the mass to develop a signal that is proportional to the speed of the
mass. This comes about naturally because the voltage across the coil is the rate of change of
the magnetic flux, which comes from the coil inductance and the coil motion. The electrical
model for this is (where P is a constant that depends on the geometry of the coil and magnet)

dφ/dt = d(Li − Py)/dt = L di/dt − P dy/dt = −iR.
Let the measurement be the voltage iR; then the Laplace transform of this equation
gives the measurement-block transfer function as

H(s) = RI(s)/X(s) = P (R/L) s / (s + R/L),

which, when added to the simulation diagram of the mass-damper, gives a loop with the
gain K and the plant G(s) in the forward path and this H(s) in the feedback path. (G(s) is
the Laplace transform of the mass-damper dynamics.)

This exercise shows that some measurements come with additional dynamics. In this
case, the measurement came with the dynamics associated with the inductor and resistor
in the electrical circuit. The analysis of the control system must account for these to ensure
that the system is stable despite the added dynamics.
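At frequencies well below R/L, H(iω) = P(R/L)iω/(iω + R/L) ≈ P iω, so the sensor behaves like a differentiator scaled by P. The Python check below is illustrative (the book's analysis is done in MATLAB/Simulink); it uses the values P(R/L) = 1 and R/L = 100 quoted for the modified model and evaluates H along the imaginary axis.

```python
R_over_L = 100.0            # sensor pole, rad/sec
P = 1.0 / R_over_L          # chosen so that P*(R/L) = 1

def H(s):
    """Coil measurement transfer function H(s) = P*(R/L)*s/(s + R/L)."""
    return P * R_over_L * s / (s + R_over_L)

for w in [0.1, 1.0, 10.0, 100.0]:
    h = H(1j * w)
    # Compare the exact sensor gain with the ideal-derivative approximation P*w.
    print(w, abs(h), P * w)
```

The two columns agree closely up to about ω = 10; near the sensor pole at ω = 100 the coil's gain rolls off, which is exactly the extra dynamics the stability analysis must account for.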
Once again, Simulink can rapidly analyze the effect of adding the dynamics. With a
few mouse strokes, the model changes to include the dynamics of the measurement (they can
be either a transfer function or a differential equation). The resulting Simulink model with the
response of the system is in Figure 2.19. (Instead of the model SpringMass_Vel_Control,
use SpringMass_Control_Sensor_Dynamics.) In the modified model, the transfer
function block was used, and the values of the parameters in the model were set to make
P(R/L) = 1 and R/L = 100. Note that the response of the control system with the value
of the Gain = 100 (Figure 2.20) is almost identical in the two models, indicating that the
sensor design does not change the result. Once again, Simulink allowed us to rapidly answer
the question of the effect of the additional sensor dynamics on the stability and response of
the system.
Figure 2.19. One method of determining the velocity of the mass is to differentiate
the position using a linear system that approximates the derivative.
Figure 2.20. Simulation result from the Simulink model.
2.6 Transfer Functions: Bode Plots
It is possible to build a simulation in Simulink that uses a sinusoidal input. You could
use any of the simulations we have created so far. If the simulations are linear, all of the
outputs will be sinusoids at the same frequency as the input. In steady state, the outputs
will have different amplitudes and will cross through zero at times that are different from
the input. Therefore, the amplitude and phase of the sinusoid at the various frequencies
describe the sinusoidal response of the linear system. In 1938, Bell Labs scientist Hendrik
Wade Bode (pronounced “Boh-dee”) demonstrated that a plot of the frequency response of
a linear system contains all of the information needed to understand what feedback around
the system would do. His Bode plots are still a mainstay of control system design. Let us
see how to compute a Bode plot and how Simulink (and the Control System Toolbox in
MATLAB) can do the work.
2.6.1 The Bode Plot for Continuous Time Systems
Remember that the Laplace transform for an exponential gives the Laplace transform of a
sine wave (see Problem 2.4). Using the approach in this problem, the Laplace transform of
sin(ωt) is

L{ (e^{iωt} − e^{−iωt}) / 2i } = (1/2i) [ 1/(s − iω) − 1/(s + iω) ] = ω / (s² + ω²).
Now, if we want to know the response of a system with transfer function H (s) to a sinusoidal
input, we use the fact that the transfer function is the ratio of the Laplace transform of the
output divided by the Laplace transform of the input. Thus, if the input U (s) is the Laplace
transform above, the output Y(s) is

Y(s) = H(s) ω/(s² + ω²) = H(s) (1/2i) [ 1/(s − iω) − 1/(s + iω) ].
The partial fraction expansion of the right-hand side above gives

Y(s) = Σ_{k=1}^{n_p} H(−p_k)/(s + p_k) + (1/2i) [ H(iω)/(s − iω) − H(−iω)/(s + iω) ],

where the expansion is around the n_p poles (p_k) of the transfer function H(s) and the two
imaginary poles from the sinusoidal input. The inverse Laplace transform of this has two
parts: the transient response from the first term above and the sinusoidal steady state response
from the second term above. The inverse transform gives the response of the system as
y(t) = Σ_{k=1}^{n_p} α_k H(−p_k) e^{−p_k t} + |H(iω)| sin(ωt + φ(ω)).

The α_k are constants that depend on the initial conditions, and the proof of this assertion is
Exercise 2.6.
The Bode plot is a plot of 20 log10 |H (iω)| and φ(ω) versus ω on semilog axes.
These show what the sinusoidal steady state response of the system is because the output is
a sinusoid with amplitude |H (iω)| and phase φ(ω).
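This steady state behavior can be verified by brute force. The sketch below (Python, for illustration; the book would do this with a Simulink simulation) integrates the first-order system H(s) = 1/(s + 1) driven by sin(t) using Euler's method; the theory predicts a steady state amplitude of |H(i)| = 1/√2 ≈ 0.707.

```python
import math

dt, t, x = 1e-4, 0.0, 0.0
amplitude = 0.0
while t < 60.0:
    u = math.sin(t)
    x += dt * (-x + u)      # dx/dt = -x + u, i.e., H(s) = 1/(s + 1)
    t += dt
    if t > 50.0:            # measure only after the transient has died out
        amplitude = max(amplitude, abs(x))

print(amplitude)            # ~ 0.707 = |H(i*1)|
```

The measured peak matches |H(iω)| even though the output is shifted in time by the phase φ(ω), because the maximum over a full period is independent of the phase.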
Rapid and accurate calculation of the magnitude and the phase of H(s) uses the state-space representation of the system. The calculations are easy to do in MATLAB, and there is
a strong connection between Simulink and these calculations. The next section shows this.
2.6.2 Calculating the Bode Plot for Continuous Time Systems
In Section 2.2, we showed that the transfer function for a linear system in state-space form is
Y(s)/U(s) = C[sI − A]^{−1} b. We now know that the Bode plot comes from this transfer function
by letting s = iω and then computing the magnitude and phase of the resulting complex
variable. Thus, the Bode plot contains the two terms

Bode amplitude = 20 log10 | C (iωI − A)^{−1} b |,

Bode phase = tan^{−1} [ Im( C (iωI − A)^{−1} b ) / Re( C (iωI − A)^{−1} b ) ].
It should be obvious that the calculation of these terms in MATLAB is trivial. The code
would look something like this:
function [mag, phase] = bode(a,b,c,omega)
% This function computes the magnitude and phase of a
% transfer function when the system dynamics are in
% state-space form. The inputs are:
%   a     - the A matrix of the system
%   b     - the B matrix of the system
%   c     - the C matrix of the system
%   omega - vector of values for the frequency (rad/sec)
mag = []; phase = [];
n = size(a,1);
for iw = sqrt(-1)*omega
    h     = c*((iw*eye(n) - a)\b);
    mag   = [mag abs(h)];
    phase = [phase angle(h)];
end
This code segment is not terribly efficient, and it is prone to errors because it manipulates the matrix a in an unscaled form. In the Control Systems Toolbox, there is a built-in
function that does the calculation in a far more efficient, and therefore faster, form. The
function is called bode, and it returns both the magnitude and the phase. The MATLAB
command logspace generates a vector of logarithmically spaced values for omega so that a semilog
plot of the magnitude versus the log base 10 of the frequency is a smooth curve. (To run
the following examples you will need the Control Systems Toolbox.)
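If the toolbox is not available, the two formulas above can also be evaluated directly. The Python sketch below (for illustration only; the book's workflow is MATLAB) computes one point of the Bode plot for the spring-mass state-space model A = [0 1; −1 −0.2], b = [0; 1], C = [1 0], the system whose transfer function is 1/(s² + 0.2s + 1), inverting the 2×2 matrix iωI − A by Cramer's rule.

```python
import cmath
import math

def bode_point(w):
    """Magnitude (dB) and phase (deg) of C(iwI - A)^-1 b for the spring-mass model."""
    s = 1j * w
    # M = sI - A with A = [[0, 1], [-1, -0.2]]
    m11, m12 = s, -1.0
    m21, m22 = 1.0, s + 0.2
    det = m11 * m22 - m12 * m21
    # First component of M^-1 b with b = [0, 1]^T, by Cramer's rule.
    x1 = (0.0 * m22 - m12 * 1.0) / det
    h = x1                      # C = [1, 0] picks the first state
    return 20 * math.log10(abs(h)), math.degrees(cmath.phase(h))

mag_db, phase_deg = bode_point(1.0)   # evaluate at omega = 1
print(mag_db, phase_deg)
```

At ω = 1 the transfer function is 1/(0.2i), so the magnitude is 20 log10 5 ≈ 14 dB and the phase is −90 degrees, the expected resonance of this lightly damped system.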
The Control Systems Toolbox uses a MATLAB object called an “lti object” that allows
the calculations of the various linear system attributes such as the poles, zeros, Bode plot,
and other plots in a simple and seamless way. The user may enter the lti object data using
state-space models or transfer functions (factored into the poles and zeros or as polynomials).
It also allows the system to be discrete (i.e., in the form of a z-transform) or continuous (i.e.,
in the form of a Laplace transform). The rules for manipulating the lti object are part of the
object’s definition and the overloading of the various operations.
“Overloading” an operation is a way of redefining the meaning of the math operators “+
or −”, “* or /”, so they are meaningful operations for the objects. For the lti object, transfer
functions, pole-zero representations, and state-space representations are interchangeable.
The “+ or −” adds or subtracts the transfer functions (with the automatic changes in the
state-space model that the additions or subtractions imply), and the “* or /” operators are
multiplication and division of the transfer functions. (For these operations the order of
the transfer functions increases, so the resulting polynomials change their order and the
state-space model changes dimensions.)
Figure 2.21. Control System Toolbox interface to Simulink. GUIs allow you to
select inputs and outputs.
It is beyond the scope of this book to discuss how to build a MATLAB object, but if
the reader is interested, the documentation that comes with MATLAB shows how to create
and use objects. The example shows how to create a polynomial object and how to then
overload all of MATLAB's operations so that the appropriate rules for the polynomial arithmetic are invoked whenever the user types the symbols +, −, *, /, ^2, etc. at the MATLAB
command line.
Let us use the model Spring_Mass2 that we created in Section 2.5 to illustrate
the use of Simulink to get the lti model and the Bode plots. Open the model by typing
Spring_Mass2 at the command line after changing to the NCS directory. We now want to
invoke the connections between Simulink and MATLAB that the Control Systems Toolbox
allows. Under the Tools menu in the model, select “Control Design” and the submenu
“Linear Analysis.” This will invoke the linearization tool and the GUI shown at the left
in Figure 2.21 will appear. This GUI allows you to select where you want the inputs and
outputs to be. Any signal in the model can be selected for either of these; just go to the
model, right click on the line that has the desired signal, and then select from the resulting
dialog “Linearization Points,” and then from the submenu whether the point is an input, an
output, or possibly a combination of the two. You need to select one input and one output,
and when you are done, the GUI will resemble the GUI at the right in Figure 2.21.
Now, click the Linearize button on the GUI and watch. A new LTI window will open
and the message “LTI Viewer is being launched” will appear. The step response of the output
will appear in the LTI Viewer window (see Figure 2.22). To see other types of plots, right
click anywhere in the window and select the submenu “Plot Types” and then “Bode.” The
result will be the Bode plots for the two outputs from the model shown below. (There are
two outputs because the Simulink model has a 2-vector output that has both the position
and velocity of the mass.)
From the LTI Viewer window, you can export the lti object to MATLAB for further
analysis. To do this, select from the File menu in the viewer the “Export” command, and
then, in the LTI Viewer Export GUI that comes up, select the model to export. The GUI
Figure 2.22. LTI Viewer results from the LTI Viewer.
gives a tentative name of sys to the lti model, so if you select this it will create the object in
MATLAB. To export now, click the “Export to workspace” button. Now go to the MATLAB
command line and type sys. You will see that MATLAB now contains the lti object sys,
and it is a state-space model. If you want to convert this to a transfer function, type tf(sys).
All of the MATLAB commands and the resulting answers are in the MATLAB code
segment below.
>> sys

a =
                  Spring_Mass/  Spring_Mass/
   Spring_Mass/        0             1
   Spring_Mass/       -1          -0.2

b =
                  Spring_Mass/
   Spring_Mass/        0
   Spring_Mass/        1

c =
                  Spring_Mass/  Spring_Mass/
   Spring_Mass/        1             0
   Spring_Mass/        0             1

d =
                  Spring_Mass/
   Spring_Mass/        0
   Spring_Mass/        0

Continuous-time model.

>> tf(sys)

Transfer function from input "Spring_Mass/Step (1)" to output...

                                                1
  Spring_Mass/Integrator (pout 1, ch 1):  ---------------
                                          s^2 + 0.2 s + 1

                                                s
  Spring_Mass/Integrator (pout 1, ch 2):  ---------------
                                          s^2 + 0.2 s + 1
Other plot types and results are available with the LTI Viewer. Spend some time
exploring them and work with some of the problems at the end of this chapter.
2.7 PD Control, PID Control, and Full State Feedback

2.7.1 PD Control
The simple control systems that we have investigated so far were mostly second order
systems (i.e., systems described by second order differential equations). There are good
reasons for thoroughly exploring this class of system, because any mechanical system that
has a moving mass creates a second order differential equation via Newton's law of motion
(f = ma implies d²x/dt² = f/m). We saw in the spring-mass example that if the position
and derivative (speed) are the feedback variables, any desired response is possible by
appropriate choices of gains. Control engineers call this PD control (P for proportional and D
for derivative). The simplest way to see that this control will allow the system response to be set
arbitrarily is to use the state-space formulation of the system. In state-space form, the second
order differential equation is
dx/dt = [ 0  1 ; −a  −b ] x + [ 0 ; a ] u,

y = [ 1 0 ; 0 1 ] x.
The first component of the vector measurement y is the position, and the second component
Figure 2.23. Using proportional-plus-derivative (PD) control for the spring-mass-damper.
is the derivative (because of the definition of the state x). Therefore, the PD control is

u = uin − [ P D ] y,

where the input to the closed loop system is uin. Substituting this back into the state-space
model above gives

dx/dt = [ 0  1 ; −a  −b ] x + [ 0 ; a ] ( uin − [ P D ] x )
      = [ 0  1 ; −a(1 + P)  −(b + aD) ] x + [ 0 ; a ] uin.
It is clear from the matrix that by selecting P and D the entire closed loop system can respond
in any desired way. This makes PD control the simplest and easiest to use in a mechanical
system, and as such, it should be the first choice for a simple design.
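To make the pole-placement claim concrete, assume the negative-feedback PD loop u = uin − [P D]x around dx/dt = [0 1; −a −b]x + [0; a]u, so that the closed loop characteristic polynomial is λ² + (b + aD)λ + a(1 + P). Given any desired pair of closed loop poles, P and D can be solved for directly. The sketch below is an illustrative Python version of this calculation (the gain names and the target poles are chosen for the example, not taken from the book):

```python
import cmath

def pd_gains(a, b, lam1, lam2):
    """Choose P and D so that the closed loop matrix
    [[0, 1], [-a*(1+P), -(b + a*D)]] has eigenvalues lam1 and lam2."""
    c1 = -(lam1 + lam2).real     # desired lambda^2 + c1*lambda + c0
    c0 = (lam1 * lam2).real
    P = c0 / a - 1.0             # a*(1 + P) = c0
    D = (c1 - b) / a             # b + a*D = c1
    return P, D

a, b = 100.0, 2.0
lam1, lam2 = -10.0 + 10.0j, -10.0 - 10.0j   # desired closed loop poles
P, D = pd_gains(a, b, lam1, lam2)

# Verify: recompute the closed loop eigenvalues from P and D.
tr = -(b + a * D)
det = a * (1.0 + P)
disc = cmath.sqrt(tr * tr - 4 * det)
print(P, D, (tr + disc) / 2, (tr - disc) / 2)
```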
There is, however, one problem with a simple PD controller. Most systems require
that the response in steady state should have some predictable value. (When the controller
is trying to track the input, the desired value is the steady state value of uin .) In the above
system, we denote the steady state value of x by S. S comes from setting the derivative of x
to zero. (The name steady state comes from the fact that nothing in the system is changing;
hence the derivative of all of the states must be zero.) Therefore,
S = −[ 0  1 ; −a(1 + P)  −(b + aD) ]^{−1} [ 0 ; a ] uin.
Figure 2.23 is the Simulink model PD_Control from the NCS library. Simulating this
model with a, b, P , and D equal to 100, 2, 1, and 0.2, respectively, shows that the steady
state value of the output is 0.5 when the input steady state value is 1 (see Figure 2.24).
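The 0.5 steady state value can be checked by hand. The Python sketch below is illustrative; it assumes the negative-feedback closed loop dx/dt = [0 1; −a(1 + P) −(b + aD)]x + [0; a]uin and solves 0 = A S + B uin for S with the parameter values quoted above.

```python
a, b, P, D = 100.0, 2.0, 1.0, 0.2
u_in = 1.0

# Closed loop matrix entries: A = [[0, 1], [a21, a22]], B = [0, a]^T.
a21 = -a * (1.0 + P)
a22 = -(b + a * D)

# Steady state 0 = A*S + B*u_in.  Row 1 gives S2 = 0;
# row 2 gives a21*S1 + a*u_in = 0.
S1 = -a * u_in / a21
S2 = 0.0
print(S1, S2)        # position settles at 0.5, velocity at 0
```

With P = 1 the steady state position is uin/(1 + P) = 0.5, matching the simulation, and it clearly depends on the gains, which is the problem the next section fixes.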
Figure 2.24. Response of the spring-mass-damper with the PD controller. There are no oscillations.
It is obvious that the steady state value of neither component of x is equal to uin .
We now address the question, How does one force the steady state value of one of the
components to match uin ?
2.7.2 PID Control
We will use the block diagram of the PD control system we developed above to see how it
is possible to force one of the state components to be equal to the input in steady state. If
the control system were only trying to improve the response with no attempt to track the
input (the step), the response we got above would be fine.
Assume that we want the response to track the input precisely (for example, this
control might be for the steering system in an automobile, so that the angle of the steering
wheel is always proportional to the angle of the front tires). In this case, the steady
state response is not adequate. One possibility would be to add a gain in front of the step
input that doubles the input. Then the value of the output would be 1 when the step is in
steady state, but it is easy to see that any change in the values of a, b, P, or D would change
this relationship. So what can we do?
72
Chapter 2. Linear Differential Equations, Matrix Algebra, and Control Systems
[Figure: block diagram in which the Step at 1 second feeds a difference block; the Position Gain P, the Integral Gain I through Integrator1 (1/s), and the Velocity (Derivative) Gain D sum into the State-Space Model of the Spring (x' = Ax+Bu, y = Cx+Du), whose Position and Velocity outputs close the loop.]
Figure 2.25. Proportional, integral, and derivative (PID) control in the spring-mass-damper system.
Think about the error between the input and the output as measured by the difference
between the value of uin and the position measured in the system. (This is the output from
the difference block in the diagram above.) If we were to insert an integrator into the model
at this point, then, if the steady state error at this point were S, the output of the integrator
would be St. The input to the second order spring-mass system is now St, so it begins to
move away from the steady state value of 0.5. As it moves, the feedback causes the output
to get closer to the steady state value of the input. In fact, when the output matches the input
in steady state, the difference will be exactly zero and the integrator stops. The result will
be that the input and the output match exactly and the states will not change values because
the integrator output will be an unchanging constant.
Let us do this in the model. Add an integrator from the continuous library in Simulink
so it integrates the output of the difference block (the difference between the input and the
measured position of the mass). Multiply the integral by a gain I and then add the result to
the difference (see Figure 2.25). Simulate the system and verify that the output and input
match. (This model is called PID_Control in the NCS library.)
While you are doing the simulation of this model, look carefully at the difference
between the input and the output (add another Scope block displaying the difference if you
want to make the difference clear), and verify that we have achieved zero steady state error,
as we set out to do. Also, experiment with different values for the integral gain I (the model
uses I = 7) to see what effect this has on the time it takes to reach steady state.
The result of simulating the PID controller with the nominal values is in Figure 2.26.
The figure clearly shows that the steady state value of the mass position is exactly one, and
the error between the desired and actual position is zero.
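The same behavior can be sketched numerically. The Python loop below is an illustrative translation of the PID structure (not the NCS library model itself); the state q integrates the tracking error, playing the role of the added integrator, and the gain placement follows the assumption used earlier in this section:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative PID loop for the spring-mass-damper (a = 100, b = 2)
# with the text's gains P = 1, D = 0.2, I = 7.
a, b, P, D, I = 100.0, 2.0, 1.0, 0.2, 7.0
uin = 1.0

def rhs(t, s):
    x, v, q = s                        # position, velocity, error integral
    u = P * (uin - x) + I * q - D * v  # PID control signal
    return [v, -a * x - b * v + a * u, uin - x]

sol = solve_ivp(rhs, [0.0, 40.0], [0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
x_final = sol.y[0, -1]
print(x_final)  # close to 1: the integral term removes the offset
```

Whatever the exact gain placement, the integrator forces the error to zero in steady state: the loop cannot settle until uin − x vanishes.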
2.7.3 Full State Feedback
Most models used in textbooks to illustrate control concepts are second order. The reasons
for this are that
• second order systems are easy to visualize and manipulate;
• second order systems are the basic building blocks of mechanical systems through
Newton’s equations of motion.
[Figure: Position (top) and Velocity (bottom) versus Time, 0 to 10 seconds.]
Figure 2.26. PID control of the spring-mass system. Response to a unit step has
zero error.
Unfortunately, the real world is not restricted to second order systems. What do we
do if we want the response of a higher order system to match some desired response?
The PD controller we developed used feedback of both the position and the derivative
of the output, which turned out to be the full state of the model when we used the output y
and its derivative, dy/dt, as the state. In other words, the full state of the system was assumed to
be available (i.e., measurements of the entire state were assumed to exist). This generalizes
very easily. We have seen that any linear system given by a differential equation of the form
d^n y(t)/dt^n + a_(n−1) d^(n−1) y(t)/dt^(n−1) + a_(n−2) d^(n−2) y(t)/dt^(n−2) + · · · + a_0 y(t) = b u(t)
has the vector-matrix (state-space) form

           [  0    1    0   · · ·   0    ]        [ 0 ]
           [  0    0    1   · · ·   0    ]        [ 0 ]
dx(t)/dt = [  :    :    :           :    ] x(t) + [ : ] u(t).
           [  0    0    0   · · ·   1    ]        [ 0 ]
           [ −a0  −a1  −a2  · · · −an−1  ]        [ b ]
If the entire state is available for control (meaning that y and all of its derivatives up to the
(n − 1)st are available), then the controller can be
u(t) = K1 x1 + K2 x2 + · · · + Kn xn = [ K1  K2  · · ·  Kn ] x = Kx.

When used in the state-space equation, the result is

           [  0    1    0   · · ·   0    ]        [ 0 ]
           [  0    0    1   · · ·   0    ]        [ 0 ]
dx(t)/dt = [  :    :    :           :    ] x(t) + [ : ] Kx.
           [  0    0    0   · · ·   1    ]        [ 0 ]
           [ −a0  −a1  −a2  · · · −an−1  ]        [ b ]
Since K is a row vector, the product of the b vector and K is an n × n matrix that, when
added to the state matrix, results in

           [    0         1         0      · · ·     0       ]
           [    0         0         1      · · ·     0       ]
dx(t)/dt = [    :         :         :                :       ] x(t).
           [    0         0         0      · · ·     1       ]
           [ bK1 − a0  bK2 − a1  bK3 − a2  · · ·  bKn − an−1 ]
Now every one of the original coefficients in the differential equation has an additive term
from the gain matrix, and as a result, the gains change the entire last row of the state matrix.
This means that any desired response is possible for the entire system. This is “full state
feedback.”
There are some subtleties associated with full state feedback having to do with state-space models that are not in the form we used here (i.e., where the state is not the output and all of its derivatives). The result we showed is still true.
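A numerical sketch of the idea, with made-up coefficients (not from the text) and SciPy's pole-placement routine standing in for hand computation of the gains K:

```python
import numpy as np
from scipy.signal import place_poles

# Full state feedback on a third order companion-form model
# (illustrative coefficients a0, a1, a2 and input gain b).
a0, a1, a2, b = 2.0, 3.0, 4.0, 1.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])
B = np.array([[0.0], [0.0], [b]])

# Choose K so that A + B K has the desired eigenvalues; in companion
# form, b*K_i - a_{i-1} replaces each entry of the last row.
desired = np.array([-1.0, -2.0, -3.0])
K = -place_poles(A, B, desired).gain_matrix  # sign: control is u = K x

poles = np.linalg.eigvals(A + B @ K)
print(np.sort(poles.real))  # approximately [-3, -2, -1]
```

Because the gains rewrite the entire last row of the companion matrix, any set of closed-loop eigenvalues is achievable, which is exactly the claim made above.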
2.7.4 Getting Derivatives for PID Control or Full State Feedback
The assumption in all of the discussions above was that some device measures all of the
derivatives of the output. What happens when this is not so?
Three methods are available to create measurements for a full state feedback implementation. The first approach to creating the derivative of a variable is to use a linear system
that approximates the derivative. One possibility is to use the transfer function
Hd(s) = αs/(s + α) = α − α²/(s + α).
This is not exactly the derivative, but it is close. (It behaves as the derivative for all motions slower than approximately tmax = 1/α, that is, for frequency content below α.)
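A quick Python check of this approximate differentiator, with an illustrative signal and an illustrative value of α:

```python
import numpy as np
from scipy.signal import lti, lsim

# Approximate differentiator H_d(s) = alpha*s/(s + alpha) applied to a
# slow sinusoid; its frequency (about 3.1 rad/s) is well below alpha.
alpha = 100.0
t = np.linspace(0.0, 10.0, 5001)
x = np.sin(2.0 * np.pi * 0.5 * t)                       # 0.5 Hz input
true_deriv = 2.0 * np.pi * 0.5 * np.cos(2.0 * np.pi * 0.5 * t)

H = lti([alpha, 0.0], [1.0, alpha])                     # alpha*s/(s+alpha)
_, y, _ = lsim(H, U=x, T=t)

err = np.max(np.abs(y[1000:] - true_deriv[1000:]))      # skip the transient
print(err)  # small compared with the peak derivative of about 3.14
```

The residual error is on the order of ω/α of the peak derivative, so raising α improves the approximation (at the cost of amplifying measurement noise).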
The second approach is to use a digital approximation to the derivative. Remember
that the derivative is approximately
dx/dt ≈ [x(kΔt + Δt) − x(kΔt)]/Δt = (1/Δt)(xk+1 − xk).
This digital approximation is reasonably good if the sample time is fast relative to the rate
of change in the position.
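The effect of the sample time can be sketched numerically (illustrative signal):

```python
import numpy as np

# First-difference estimate (x_{k+1} - x_k)/dt versus the true
# derivative of sin(t), at a coarse and a fine sample time.
errs = {}
for dt in (0.1, 0.001):
    t = np.arange(0.0, 10.0, dt)
    x = np.sin(t)
    est = np.diff(x) / dt                 # (x_{k+1} - x_k) / dt
    errs[dt] = np.max(np.abs(est - np.cos(t[:-1])))
print(errs)  # the error shrinks roughly in proportion to dt
```

The first difference is a first order approximation, so halving the sample time roughly halves the error; noise on the samples, however, is amplified by 1/Δt.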
The third approach uses the fact that the derivative of the output (or even linear combinations of the state of the system) may be available through related measurements. One example
of this would be a measurement of the force at the support point of a spring-mass system.
The force measured would be the sum of the gravity force (mg), the spring force (Kx), and the damping force (D dx/dt). Thus, this measurement, combined with a measurement of the position, can compute the velocity. If the force measurement is fMeas, the acceleration of
the mass is the result of subtracting the three accelerations applied to the mass. They are
• the acceleration due to the spring (obtained using the position measurement multiplied
by the spring constant),
• gravity acceleration,
• the acceleration from the damping (obtained as feedback by integrating this acceleration estimate).
Thus a pseudomeasurement of velocity is
(d²x/dt²)|Meas = fMeas/m − g − (K/m) xMeas − (D/m) (dx/dt)|Meas ,

where

(dx/dt)|Meas = ∫ (d²x/dt²)|Meas dt.

In this implementation, we have an implicit feedback loop because we integrate the pseudomeasurement of acceleration (the left-hand side of this equation) for use on the right (as (dx/dt)|Meas).
This approach uses a priori values for the spring constant and the force of gravity
(which means that in the actual control system they need to remain constant over time),
but despite this, control systems are often built using this type of computation to provide
a pseudomeasurement of the velocity (or other states). Using this approach, you can recompute the gains so that the feedback explicitly uses the measured values of force and
displacement, thereby eliminating the explicit computation of the derivative. Once again,
this requires that the parameters in the model be constant or, if they are not, that the gains
change as the parameters vary.
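The derived-rate loop can be sketched in discrete time. The parameters below are illustrative and the integration is simple Euler, so this is a sketch of the mechanism, not the book's Simulink model:

```python
import numpy as np

# Derived-rate loop: integrate the pseudomeasured acceleration, feeding
# the velocity estimate back through the damping term (the implicit
# feedback loop described in the text).
m, K, D, g = 1.0, 100.0, 2.0, 9.81
dt, T = 1e-4, 5.0

x, v = 0.1, 0.5          # true position and velocity
v_est = 0.0              # derived rate starts with an initial error
for k in range(int(T / dt)):
    t = k * dt
    f = m * g + np.sin(2.0 * t)                  # measured applied force
    # pseudomeasured acceleration, using the *estimate* on the right side
    acc_est = f / m - g - (K / m) * x - (D / m) * v_est
    v_est += acc_est * dt
    # true dynamics: m x'' = f - m g - K x - D x'
    acc = (f - m * g - K * x - D * v) / m
    x, v = x + v * dt, v + acc * dt

err = abs(v_est - v)
print(err)  # the initial estimate error decays with time constant m/D
```

Subtracting the true and estimated velocity equations shows the estimate error obeys de/dt = −(D/m)e, which is why the initial error dies away, as claimed for the Simulink version below.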
These equations can be assembled into the Simulink diagram in Figure 2.27; the linear
system that results derives the rate using no derivatives (actually, it uses only integration).
The initial condition for the derived rate in this model is zero. That means that when the
system starts, there could be an error between the estimate of the rate and the actual rate. It is
easy to show that this error goes to zero as time evolves, so that the derived rate approaches
the velocity of the mass. (Try to demonstrate that this is true.) To verify these three strategies
for reconstructing the derivative, let us build a Simulink model that implements all of these
measurements. The model will contain the three different methods for deriving rate that we
described above in a masked subsystem. The model is PID_Control_Vel_Estimates in
the NCS library. It is in Figure 2.28.
[Figure: block diagram with inputs Externally Applied Force and Position of Mass; the force is divided by the mass (1/m), the spring (Kspring) and damping (damp) accelerations are subtracted, and an Integrator (1/s) produces the Derived Rate, the Estimate of Velocity, which feeds back through the damping gain.]
Figure 2.27. Getting speed from measurements of the mass position and force
applied to the mass.
[Figure: the PID controller of Figure 2.25 rebuilt from individual integrators (External Force, 1/mass, Velocity, and Position integrators with spring and damping feedback), plus a masked subsystem that derives the rate for use in the PID controller; GoTo/From blocks labeled [Estimate] send the rate estimate to the Scope. The mask dialog selects the method for deriving the rate and also allows testing of parameter variations.]
Figure 2.28. Simulink model to compare three methods for deriving the velocity
of the mass.
The derivation of the rate is in the subsystem at the bottom center of the model. Right
click on this masked subsystem and select “Look Under Mask.” The three rate estimation
approaches are in this subsystem, which is shown by Figure 2.29.
We are using two new blocks in this model. The first is the Multiport Switch. This
switch uses the variable “Select” to determine which of the three inputs comes out of the
block.
[Figure: under the mask, three velocity estimates feed a Multiport Switch driven by the Select Constant: Estimate1 from the Transfer Fcn alpha.s/(s+alpha), Estimate2 from a Zero-Order Hold and the discrete Difference (z-1)/z scaled by 1/Samp_Time, and Estimate3 from the force and position rate reconstruction (1/m, Kspring, damp, and an Integrator).]
Figure 2.29. Methods for deriving rate are selected using the “multiport switch.”
If you double click on the subsystem block, a mask dialog opens. The pull-down menu at the top of the dialog determines the value of Select. In the mask editor (obtained by right clicking the block and selecting Edit Mask), you will see that the pull-down menu selection makes the value of Select equal to 1, 2, or 3. The way the switch is set up, this numbering scheme needs to be reversed, so in the Initialization part of the editor we have the code Select = 4-Select. All of the data used in the estimation go under the mask using
the mask dialog, which we will discuss in more detail in the next chapter.
The last blocks that we use here, but have not used before in our models, are the
GoTo and the From blocks. They are very convenient blocks to use to help keep your
model looking uncluttered. As their names imply, they allow a signal to propagate from one
location in the model to another without the use of a line. We use these to send the signals
to the Scope blocks for plotting. (In general, just as with GoTo statements in code, avoid
using these blocks.)
Open this model, and run it. Use the mask dialog to change the subsystem values
and to experiment with the different rate derivation approaches. Verify that the various
derivations of the rate of the mass are reasonably accurate. Try changing the values of the
various parameters (alpha, Samp_Time, and the mass, spring constant, and damping used
in the derivation of the rate). You change these by typing the new values directly into the
dialog box of the subsystem mask.
Control systems engineers call this method for deriving the rate an “observer.” If you
wish to explore this in more detail, and to explore observers for more complex systems,
see [1].
In this chapter, we have explored linear systems and the basics of feedback controllers
that will achieve a desired result. In the next chapter, we look at some real-world complexities that make systems nonlinear. The first example is chaotic in the mathematical sense,
and subsequent examples extend the pendulums that we explored in Chapter 1 to rotations
that are more complex. Along the way, we explore how one can incorporate nonlinear
devices in our simulation using Simulink primitives (blocks in the libraries). When the
desired behavior is not available as a block, we look at how to build the requisite nonlinear
behavior out of the primitives.
2.8 Further Reading
Dynamic systems and the state-space formulation are discussed in a nice book by Scheinerman
[33]. State-space models also are the backbone of modern control design, as in Bryson [6]
and Anderson and Moore [1].
A pseudomeasurement of a system state can come from a mathematical description
of the function that relates the state to the measurement, or from the differential equations
describing the state and the measurements. Applications of both approaches are available.
For example, Stateflow (described in Chapter 7) comes with a demo called “fuelsys” that
illustrates the first approach. The Simulink blocks in this demo illustrate how the computer
provides control of the fuel injectors on a car. It also shows how pseudomeasurements
provide the missing data from a failed sensor, using redundant information contained in
related sensors. This demo is fun to run, since it allows you to throw switches that fail the
sensors and see the result. The Help file that accompanies the demo has more details.
Exercises
2.1 In the text, the differential equation in vector-matrix form or the state-space model
used the output and its derivatives up to the (n − 1)st order to create the state vector.
Verify that the state-space model in the text and the original differential equation are
the same. How would you modify the model to add more inputs? (Hint: Make the
input a vector.) How would you add more outputs?
2.2 In the text we asked that you verify that the derivative of e^(At) is Ae^(At) and that the
inverse of e^(At) is e^(−At). The latter result comes easily from showing that the derivative
of the product e^(At) e^(−At) is zero (so the product is a constant). If you do this, you need
to show that Ae^(At) = e^(At) A, i.e., that these matrices commute.
2.3 The calculation of the transition matrix phi and the input influence matrix gamma in the code c2d_ncs uses the matrix [A b; 0 0]. Verify that this matrix creates the phi and gamma matrices defined in the text. (Use the hint in the text.)
2.4 Create the Laplace transform of the cosine and sine using their exponential definitions
(as suggested in the text).
2.5 The motion of the spring, mass, and damper is the solution of the differential equation
in state-space form
d x(t)/dt = [ 0, 1; −ωn², −2ς ωn ] x(t) + [ 0; 1/M ] u(t).

Solve this equation when the input is zero (with the damping ratio less than one, the initial position x0, and the initial velocity v0) to show that the motion of the mass is

x(t) = e^(−ς ωn t) [ x0 cos(√(1 − ς²) ωn t) + ((v0/ωn + ς x0)/√(1 − ς²)) sin(√(1 − ς²) ωn t) ].
Create a Simulink model that uses this solution and a simulation of the motion to
compare the numerical accuracy of the Simulink solution for different solvers.
2.6 In the text, we used the Laplace transform to develop the response of a system to a sinusoidal input as y(t) = Σ_{k=1}^{np} αk H(−pk) e^(−pk t) + |H(iω)| sin(ωt + ϕ(ω)). Verify this result. Can you think of a way that you can use Simulink to calculate the transfer function H(iω) without the use of the Control System Toolbox?
2.7 Experiment with the three different control system models (PD, PI, and PID). Try
changing the gains and observing the results. Try adding nonlinearities to the models
(in particular, limit the control to a maximum and minimum value) and redo the
experiments. What can you say about the resulting responses?
Chapter 3
Nonlinear Differential Equations
We have already seen some nonlinear differential equations (for example, the clock and the
Foucault pendulum dynamics in Chapter 1). In this chapter, we will delve into some of the
more interesting aspects of nonlinear equations, including a simple example of chaos, the
dynamics of a rotating body, and the modeling of a satellite in orbit. We will also explore
an interesting way of modeling motions in three dimensions using a four-dimensional vector called a quaternion. This chapter illustrates some advanced Simulink modeling and a
major feature of Simulink, namely the ability to create reusable models. For the beginning
Simulink user, these models might not be too useful, but users who need to model complex
mechanical systems will find a great many uses for many of the examples presented here
(and therefore could cut and paste the models in the NCS library directly into the new models
that they might want to build).
In addition, we illustrate the following:
• how to make subsystem models that can be stored in a library for later use;
• how to annotate a model so the simulated equations are written out in the annotation;
• how to create a subsystem and then layer a mask on top of it, the mask providing
the user with a dialog that allows parameters in the model to be transferred into the
model (just as a subroutine’s parameters are transferred into the subroutine during
execution).
3.1 The Lorenz Attractor
One of the more interesting uses of simulation is the investigation of chaos. Chapter 7
of Cleve Moler’s Numerical Computing with MATLAB [29] describes the Lorenz chaotic
attractor problem developed in 1963 by Edward Lorenz at MIT. The differential equation is
a simple nonlinear model that describes convection in the earth’s atmosphere.
Cleve analyzes the differential equation’s “strange attractors”—the values at which
the differential equation tries to stop but cannot because the solution, although bounded, is
neither convergent nor periodic. The model tracks three variables: a term that tracks the
81
82
Chapter 3. Nonlinear Differential Equations
convection of the atmospheric flow (we call it y1 ), the second (y2 ) related to the horizontal
temperature of the atmosphere, and a last (y3 ) related to the vertical temperature of the
atmosphere.
The Lorenz differential equation has three parameters, σ, ρ, and β; the most popular
values for these parameters are 10, 28, and 8/3, respectively. The differential equation is
three coupled first order differential equations given by
ẏ1 = −βy1 + y2 y3 ,
ẏ2 = −σy2 + σy3 ,
ẏ3 = −y2 y1 + ρy2 − y3 .
Alternatively, in state-space form (even though the equations are nonlinear),

           [ −β    0    y2 ]
dx(t)/dt = [  0   −σ    σ  ] x(t).
           [ −y2   ρ   −1  ]
We will not duplicate the analysis from Moler’s book; suffice it to say that the equation has
two “fixed points” where, if the equation has an initial condition at t = 0 equal to one of
these solutions, the derivatives would all be zero and the solution would stay at these points
(hence the name fixed points). However, both of these points are unstable in the sense that
any small perturbation away from them will cause the solution to rapidly move away and
never return. (If the solution gets close again to one of these points, it will rapidly move
away again.) The two points are determined from


ρ−1
 √

 √

y =  β(ρ − 1)  and y =  − β(ρ − 1)  .
√
√
β(ρ − 1)
− β(ρ − 1)
ρ−1


We created the nonlinear equation in Simulink by building each of the differential
equations around the integrator block in the Continuous library, and at the same time, we
use a new block to plot the resulting solution on the two axes (y1 versus y2 and y1 versus
y3 ). The model is in the NCS library, and it opens using the command Lorenz_1. The
model is in Figure 3.1 along with a typical set of two-dimensional plots.
This model uses a few new tricks that you should explore carefully. First, the two
dimensional plots are from the Simulink Sinks library. The scales on these plots are set
by double clicking on the icon. (Review the settings when the model is opened.) In
order to have a new initial condition every time the model runs, we have added a Gaussian
random perturbation to each of the initial conditions in the integrator blocks. The MATLAB
command randn is used in the “Constant” block, and each integrator has its initial condition
set from the “external” source (so they actually appear in the diagram; this is a good practice
when the models are simple because the initial conditions are visible, but when the model
is complicated, this approach can make the diagram too busy). To make the initial conditions
external, you need to open the integrator dialog by double clicking the integrator block, and
then select external as the source of the initial conditions.
Figure 3.1. Simulink model and results for the Lorenz attractor.
The two x-y plots show the two fixed points for the Lorenz system. To change the
values of Rho, Sigma, and Beta, simply type in new values at the MATLAB command line
(see [29] for some suggestions).
3.1.1 Linear Operating Points: Why the Lorenz Attractor Is Chaotic
Since Release 7 of Simulink, the Control System Toolbox link has a very robust linearization
algorithm. Two blocks appearing in the Simulink Model-Wide Utilities library can be used
to get a linear model of a nonlinear system. The first of these blocks is the time-based
linearization, and the second is trigger-based linearization. The first will create a complete
state-space model at the times specified in the dialog that opens when you double click the
block. Unlike the Simulink Scope block, this block does not require a connection to the
model. The block causes the simulation to stop at either the times or the triggers, and then
a program from the Control System Toolbox, called linmod (for a continuous time model)
or dlinmod (for a discrete time model), creates the linear state-space matrices A, B, C, and
D. We changed the model Lorenz_1 to create the linearizations using the trigger block.
The triggers come from the Pulse Generator in the Sources library. In this new model,
called Lorenz_2 in the NCS library (Figure 3.2), the diagram uses the “GoTo” and “From”
blocks in the Simulink Signal Routing library to make it easier to follow. These blocks
allow connections without a line, thereby simplifying the diagram. The model was also
redrawn a little to emphasize the state-space form of the Lorenz differential equations. The
state-space form of the equations is in the text block in the model (created using the “Enable
Tex Commands” option in the Format menu). When this model runs (slowly because
the linearization is taking place every 0.01 sec of simulation time), the linear models are
created and stored in a structure in MATLAB. We use the structure created by linmod—
called Lorenz_2_Trigger_Based_Linearization_—to investigate the stability of the
[Figure: the Lorenz_2 model. Three integrator chains (1/s blocks with external initial conditions Rho-1+10*randn and sqrt(Beta*(Rho-1))+20*randn) build the elements of the matrix product Ay, with product blocks for the y2*y3 and y1*y2 nonlinearities. A text annotation gives dy/dt = Ay with the A matrix of the state-space form above. A Pulse Generator drives the Trigger-Based Linearization block, which stores a linear model in the MATLAB workspace at each trigger, and a plot subsystem takes its data from GoTo blocks.]
Figure 3.2. Exploring the Lorenz attractor using the linearization tool from the
control toolbox.
operating points and at the same time understand why the Lorenz attractor response is
chaotic. A small MATLAB program, called Lorenz_eigs.m in the NCS library, plots the
first 40 eigenvalues (corresponding to the first 0.4 sec of the time history). The code in this
M-file is
% Plot the eigenvalues of the first 40 linearized models
axis([-25 15 -25 25]);
axis('square')
grid
hold on
nlins = tstop/0.01;       % number of linearizations in the run
for i = 1:nlins/50        % the first 40 models (0.4 sec of the history)
    Lorenz(i).eig = eig(Lorenz_2_Trigger_Based_Linearization_(i).a);
    plot(real(Lorenz(i).eig), imag(Lorenz(i).eig), '.')
    drawnow
end
hold off
Figure 3.3 shows the eigenvalue plot. Let us compare this plot with the solutions
generated when the simulation ran. The plot below the eigenvalue figure is the simulation
results for the state variable y1 plotted against the state variable y3 . The initial position in
this plane was about (10, 20).
[Figure: scatter plot of the eigenvalues in the complex plane (real part from −20 to 10, imaginary part from −30 to 30), titled “Eigenvalues of the Lorenz System (Every 0.01 sec. of the 20 sec. simulation)”; the three eigenvalues at t = 0 are labeled.]
Figure 3.3. Eigenvalues of the linearized Lorenz attractor as a function of time.
From the eigenvalues plot above we can see that the initial condition was unstable; when the simulation starts, there is a complex pair of eigenvalues with a real part of 2.91. The eigenvalues are almost in the left half plane, and during the next four times they move toward the left half complex plane. From the sixth eigenvalue on, they have moved into the left half complex plane, indicating that the solution now is
stable. The complex pair indicates that the solutions are oscillatory, but since they are in the
left half plane, it also indicates that the amplitude is decreasing, so the solution is spiraling
in toward the attractor at the bottom of the time-history plot. However, as the solution
approaches this attractor, the complex pair of eigenvalues again moves into the right half
plane (the eigenvalue plot ends before this happens), indicating that the solution is again
locally unstable, and it begins to diverge away from the attractor. This behavior is consistent
with the fact that the attractors are both unstable. (As an exercise, set the value of y2 in the A matrix to the values ±√(β(ρ − 1)), and compute the eigenvalues of the matrix.)
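A sketch of that exercise in Python, following it literally (note that freezing y2 gives the frozen-coefficient A matrix of the state-space form, not the full Jacobian of the Lorenz equations):

```python
import numpy as np

# Eigenvalues of the text's A matrix with y2 frozen at +/- sqrt(beta*(rho-1)).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
results = []
for y2 in (np.sqrt(beta * (rho - 1.0)), -np.sqrt(beta * (rho - 1.0))):
    A = np.array([[-beta, 0.0, y2],
                  [0.0, -sigma, sigma],
                  [-y2, rho, -1.0]])
    e = np.linalg.eigvals(A)
    results.append(e)
    print(y2, np.sort(e.real))
# Both signs give one negative, one (numerically) zero, and one positive
# real eigenvalue; the positive eigenvalue signals instability.
```

The positive eigenvalue at both values of y2 is consistent with the claim that the two attractor points repel nearby solutions.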
It is quite instructive to play with this model and compare the eigenvalues with the
solutions. Some of the questions you might want to ask: Do the three eigenvalues always
appear as one complex pair and one real? What is the importance of the magnitude of the
real part of the complex pair? What is the importance of the imaginary part of the complex
pair?
This technique of analyzing a system by linearization about the current operating point
is very useful as systems get complex. This approach is particularly useful when systems
are so large that creating linear models analytically is virtually impossible.
3.2 Differential Equation Solvers in MATLAB and Simulink
Simulink uses the same solvers that are in the MATLAB ode suite (although they are C-coded in an independent way and automatically linked by the Simulink engine to solve the
differential equations modeled in the diagram). Chapter 7 of NCM describes these and how
they work, so we will not go into the details here.
When you create a new Simulink model, the default solver is the variable step ode45, and the default simulation time is 10 sec. In addition, the “Relative tolerance” is set to 1e-3 (or approximately 0.1% accuracy), and the “Absolute tolerance” is set to “auto.” The solver also allows the user to force a Max, Min, and Initial step size if desired, but these are all set initially to “auto.”

The solver options allow the “Zero crossing control” to be set so that zero crossing detection occurs for all blocks that allow it (or, optionally, none of the blocks). The default option for zero crossing is “Use local settings,” where the solver checks to see if a zero crossing has occurred for those blocks for which this option is set to “on.” For example, the absolute value block has a dialog with an “Enable zero crossing detection” check box. If that check box is not checked, then in the “local settings” mode, this block’s zero crossing is disabled. Note that if the zero crossing control is set to “Enable all,” this check box selection does not appear.
Some simple numerical experiments were performed (in the spirit of Section 7.14 in
NCM) using the Lorenz model above. To do these experiments, we execute the Simulink
Table 3.1. Computation time for the Lorenz model with different solvers and tolerances.
Solver type    Tolerance = 0.001    Tolerance = 0.000001
ode45          0.0370               0.3670
ode23          0.1445               0.4680
ode113         0.1802               0.3749

Times are in seconds.
model from the MATLAB command line. Type in the following code at the MATLAB
command line and then use the arrow keys to rerun the same code segment:
tic;sim(’Lorenz_2’);te = toc;
te
In this set of MATLAB instructions, the sim command causes MATLAB to run the
Simulink model (the argument of the command tells MATLAB what model to run—it is a
string). The value of the elapsed time for the simulation is te. In between each execution
of the simulation using the sim command, open the Configuration Parameters dialog in
Simulink; use the Simulation pull-down menu or click on the model window and type
Ctrl + e. Then change, in turn, the solver or the relative tolerances, and—this is very
important—save the model and re-execute the commands above. Table 3.1 shows the result
of my experiment; I used a Dell dual Pentium 3.192 GHz computer with 1 gigabyte of RAM.
With the understanding that you have obtained from what we have done so far in this
book, and the discussion in NCM, it should be clear that there are no hard and fast rules
that apply to selecting a solver, a tolerance, or any of the other parameters in the Simulink
Configuration Parameters pull-down dialog. The best way to understand the simulation
accuracy for any model you will produce is to try different solvers with different tolerances.
If you think that the solvers are jumping through time steps that are too big, you can set the
maximum step size, and if you think that the solver is taking too long, you can try increasing
the minimum step size. In addition, if you are interested in the solution of a system that has
some dynamics that change very fast, but you do not really care about the details of these
fast states, then the “stiff” solvers are best (see Section 7.9 of NCM). We will see some
examples in later chapters that require stiff solvers, and when we do, we will be able to
explain why they are used and how to use them.
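A similar experiment can be sketched in Python, with SciPy's solvers standing in for the Simulink ones (RK45, RK23, and LSODA are analogues of ode45, ode23, and a stiff solver; absolute times depend on the machine and are not comparable to Table 3.1):

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

# Time the Lorenz model with different solvers and tolerances,
# in the spirit of the Table 3.1 experiment.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, y):
    return [-beta * y[0] + y[1] * y[2],
            -sigma * y[1] + sigma * y[2],
            -y[0] * y[1] + rho * y[1] - y[2]]

y0 = [rho - 1.0, 5.0, 5.0]
times = {}
for method in ("RK45", "RK23", "LSODA"):
    for rtol in (1e-3, 1e-6):
        t0 = time.perf_counter()
        solve_ivp(lorenz, [0.0, 20.0], y0, method=method, rtol=rtol)
        times[method, rtol] = time.perf_counter() - t0
for key, te in sorted(times.items()):
    print(key, round(te, 4))
```

As with the Simulink experiment, the point is the relative cost of tightening the tolerance for each solver, not the absolute numbers.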
3.3 Tables, Interpolation, and Curve Fitting in Simulink
There are times when a simulation requires tabulated data for a function. Examples of this
are systems that are operating in flowing air or water (aircraft, rockets, submarines, balloons,
parachutes, etc.). In many of these applications, the tabular data represents the gradient of
a nonlinear function of one or more variables. The gradient measurement comes from wind
tunnels, water tanks, or actual measurements of the system in its environment (a full-scale
model of an airplane or automobile). How do you use such data in Simulink?
Simulink implements most of these curve-fitting tools in a set of blocks that are in
the Lookup Tables library. This library has a variety of tables. The simplest is the “Lookup
Table"; the more complex versions are the 2-D and n-D tables (for functions of two or
more input variables). A block that does interpolation (n-D), using what is called a PreLookup of
Chapter 3. Nonlinear Differential Equations
the input data, simplifies the tables. The “PreLookup Index Search” block that goes along
with this block is also in the library, but since this block works in conjunction with the
PreLookup block, it is not a stand-alone block. The last block in the library is the “Lookup
Table Dynamic” block. We will investigate each of these block types and illustrate when to
use them in the following examples.
3.3.1 The Simple Lookup Table
To illustrate the simple lookup table, let us return to the home heating system model we
introduced in Section 2.4.1. The original model used an outdoor temperature variation of 50
deg F with a 24-hour sinusoidal variation of 15 deg. This might be a typical diurnal variation
in the outdoor temperature, but to compute the fuel usage for a specific winter’s day we
want to use the actual outdoor temperature. Assume that you have made measurements of
the temperature during a particular cold snap when the temperature plummets in a 48-hour
period from an average of 40 deg to about 10 deg. Figure 3.4 shows the temperature over
the 48-hour period (sampled every 15 minutes).
The data were actually generated from MATLAB using the following code:
Time = (0:.25:48)*3600;
Temp = 50 - 0.75*Time/3600 - 5*sin(2*pi*Time/(24*3600)+pi/4) + randn(size(Time));
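For readers following along outside MATLAB, the same pseudodata can be generated with a short Python sketch (the seed call is an addition for reproducibility; the MATLAB randn call above is unseeded):

```python
import math, random

random.seed(0)  # added for reproducibility; not present in the MATLAB version
# one sample every 15 minutes for 48 hours, converted to seconds ((0:.25:48)*3600)
Time = [0.25 * k * 3600 for k in range(193)]
# 50 deg baseline, a -0.75 deg/hour trend, a 24-hour sinusoid, unit-variance noise
Temp = [50 - 0.75 * t / 3600
        - 5 * math.sin(2 * math.pi * t / (24 * 3600) + math.pi / 4)
        + random.gauss(0, 1)
        for t in Time]
print(len(Time), Time[-1] / 3600)  # 193 samples spanning 48 hours
```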
How do you insert this temperature variation into the model? Open the model by
typing Thermo_NCS at the MATLAB command line. Find the blocks with the constant 50
and the sinusoid that models the external temperature variation along with the summation
block that adds the two, and remove them from the model. Open the Simulink library
browser and find the Lookup Tables library. From this library, select the block called
“Lookup Table” and insert it into the model. The input to this block will be the simulation
time that is the “Clock” block in the Simulink Sources library. Get one of these blocks
and insert it into the model so that it is the input to the lookup table. Then, double click
on the Lookup Table block and enter the values of the data using the MATLAB commands
above. The model that results is in Figure 3.5, and the Lookup Table dialog should look
like Figure 3.6.
The result will now show the home heating system operating with the outdoor temperature given by the tabular values in the lookup table. The figure also shows the dialog that
opens when you double click the Lookup Table block. It allows editing of the data (using
the “Edit” button). You can also enter the data as a MATLAB vector using MATLAB’s
vector notation or pasting a vector of numbers into the dialog box. If, for example, the data
is in an Excel spread sheet, it can be pasted into the vector of output values window and
then surrounded by the square brackets “[” and “]” to make the data into a MATLAB vector.
Notice also that the dialog box allows you to select the interpolation method you want
to use. The default is interpolation and extrapolation. This means that the data interpolation
is linear between time values, and the last two data points in the table project the data beyond
the final time. The other options are interpolation with end values used past the last data
point in the table, the value nearest to the input value, or the values below or above the
input value. The block will also provide a value at fixed times by setting the sample time.
In the simulation the value of this is set to −1, which causes the value to be computed at
Figure 3.4. Pseudotabulated data for the external temperature in the home heating
model.
Figure 3.5. Home heating system Simulink model with tabulated outside temperature added.
Figure 3.6. Dialog to add the tabulated input.
every simulation step when the input changes. (Since the input to the block is the clock,
this is every integration step used by the solver.) The Help button on the dialog opens the
MATLAB help and provides a very good description of how this block works.
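The interpolation options described above are easy to mimic. The following Python sketch is a hypothetical stand-in for the Lookup Table block (not the block's actual implementation); it shows linear interpolation between breakpoints together with the default "extrapolate" behavior and the "use end values" (clip) alternative:

```python
from bisect import bisect_right

def lookup_1d(xs, ys, x, method="extrapolate"):
    # hypothetical stand-in for the Lookup Table block; xs must be increasing
    if x <= xs[0]:
        if method == "clip":          # "Interpolation-Use End Values"
            return ys[0]
        i = 0                         # extrapolate through the first two points
    elif x >= xs[-1]:
        if method == "clip":
            return ys[-1]
        i = len(xs) - 2               # extrapolate through the last two points
    else:
        i = bisect_right(xs, x) - 1   # bracketing interval for interpolation
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

xs = [0.0, 1.0, 2.0, 3.0]
ys = [10.0, 20.0, 15.0, 5.0]
print(lookup_1d(xs, ys, 1.5))          # 17.5: linear between (1,20) and (2,15)
print(lookup_1d(xs, ys, 5.0))          # -15.0: the last segment's line continued
print(lookup_1d(xs, ys, 5.0, "clip"))  # 5.0: the end value held constant
```

Notice how quickly linear extrapolation leaves the range of the tabulated values, which is exactly the pitfall discussed below for the heating model.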
If you have not created the model yourself during the above discussion, it is available
in the NCS library using the command Thermo_table.
The 2-D Lookup Table block computes an approximation to z = f (x, y) given some
x, y, and z data points. The dialog block for this lookup table has an edit box that contains
the “Row index input values” (a 1 × m vector corresponding to the x data points) and an
edit box with the “Column index input values” (a 1 × n vector of y data points). The values
for z are in the edit box labeled “Matrix of output values” (an m × n matrix).
Both the row and column vectors must be monotonically increasing. These vectors
must be strictly monotonically increasing in some specific cases. (See the Help dialog for
more details on what these cases are.)
By default, the output is determined from the input values using "Interpolation-Extrapolation," that is, linear interpolation between the data points and linear extrapolation past the ends.
The alternatives available are “Interpolation-Use End Values,” “Use Input Nearest,” “Input
Below,” or “Input Above,” with the same meaning as for the 1-D Lookup Table.
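For the 2-D table, the default interpolation is bilinear: interpolate along one index, then blend along the other. A minimal sketch, assuming the inputs stay inside the table so no extrapolation logic is needed (the helper names are mine, not Simulink's):

```python
from bisect import bisect_right

def bracket(v, x):
    # index i with v[i] <= x <= v[i+1], clamped to the table
    return max(0, min(len(v) - 2, bisect_right(v, x) - 1))

def lookup_2d(row_x, col_y, Z, x, y):
    # row_x: 1 x m row index values; col_y: 1 x n column index values;
    # Z: m x n matrix of output values; inputs assumed inside the table
    i, j = bracket(row_x, x), bracket(col_y, y)
    tx = (x - row_x[i]) / (row_x[i + 1] - row_x[i])
    ty = (y - col_y[j]) / (col_y[j + 1] - col_y[j])
    z0 = Z[i][j] * (1 - ty) + Z[i][j + 1] * ty          # along y at row i
    z1 = Z[i + 1][j] * (1 - ty) + Z[i + 1][j + 1] * ty  # along y at row i+1
    return z0 * (1 - tx) + z1 * tx                      # blend along x

row_x = [0.0, 1.0]
col_y = [0.0, 1.0]
Z = [[0.0, 10.0],
     [20.0, 30.0]]
print(lookup_2d(row_x, col_y, Z, 0.5, 0.5))  # 15.0, the mean of the four corners
```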
It is important to use the proper end extrapolation method when the simulation could
cause the input to the lookup table to be outside the range of the tabulated values. For
example, the heating simulation is set up to run for 2 days, which is also the extent of the
time data loaded into the lookup table dialog. The selected extrapolation was "Interpolation-Extrapolation," so running the simulation linearly extrapolates past 48 hours, using the last
two data points in the table. This means that the temperature will continue to drop, going
down at the rate of about 10 deg per hour (240 deg per day), which is clearly not correct.
Even selecting the option of keeping the end-point values constant (“Interpolation-Use End
Values”) is not a good approximation to what actually happens, but it is the better choice. In
this case the user has clear control (via the simulation time) over whether or not the lookup
table data will be exceeded; in many applications, however, the independent variable in the
table is a simulation variable whose value cannot be easily determined in advance. A plot
of the simulation results and the independent variable will then allow you to verify that the
simulation handles the end points properly.
The 2-D and n-D Lookup Tables are set up in a very similar way to the 1-D table we
just reviewed. To ensure that you understand the way these are used, use the Simulink help
(from the 2-D or n-D block dialog) to review the way these tables are used.
3.3.2 Interpolation: Fitting a Polynomial to the Data and Using the Result in Simulink
There is another option in Simulink that in some applications may be more accurate and easier
to use. This option is the curve fitting block, “Polynomial,” in the Math Operations Simulink
library. The use of this block requires you to do some preliminary work in MATLAB to
compute the coefficients of the polynomial that will interpolate and extrapolate your data.
The process of creating a curve fit to any data uses the “Basic Fitting” tool in MATLAB.
To open this tool, create a plot of the data, using the MATLAB commands that we used
to create the 48-hour outdoor temperature profile above or running the MATLAB M-file
simulated_temp_data in the NCS library.
When the plot appears, select “Basic Fitting” under the “Tools” menu to open the Basic
Fitting dialog shown above. In this dialog, we selected the sixth order polynomial option.
The tool will overplot the curve fit on the graph of the data. Try different polynomial degree
options and navigate around the dialog. To view the polynomial coefficients as shown in
the figure, click the right arrow at the bottom of the dialog. To get these values into the
Figure 3.7. Real data must be in a MATLAB array for use in the "From Workspace"
block. (The plot shows the dry and wet bulb temperature readings, in deg F, for Norfolk
MA on Jan. 1, 2006, versus time in hours from midnight.)
Simulink polynomial block, click the button in the Basic Fitting tool that says “Save to
workspace.” MATLAB will save the curve fit in a MATLAB structure called “fit.” The
coefficients are in the field called coeff (i.e., the coefficients are the MATLAB variable
fit.coeff). In the Thermo_table model in the NCS library, replace the Lookup Table block
with the Polynomial block, and then open this block's dialog and type "fit.coeff" in the area
designated “Polynomial coefficients.” Run the simulation and verify that the simulation is
using the curve fit you created instead of the data. (A version of this model is in the NCS
library, but you still need to create the polynomial fit as described above before you use it;
the model is called Thermo_polynomial.)
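The Polynomial block simply evaluates the fitted coefficients at its input, with the coefficients ordered highest power first, the same convention Basic Fitting uses when it stores fit.coeff. A sketch of that evaluation using Horner's rule (a hypothetical helper, shown here in Python):

```python
def polyval(coeff, u):
    # Horner's rule; coeff is ordered highest power first, the convention
    # used by MATLAB's Basic Fitting (fit.coeff) and the Polynomial block
    y = 0.0
    for c in coeff:
        y = y * u + c
    return y

coeff = [2.0, -3.0, 1.0]    # represents 2u^2 - 3u + 1
print(polyval(coeff, 2.0))  # 3.0
```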
The reason you might want to use this approach is to smooth the data and to improve
the simulation run time. In general, it is faster to use a low order fit than it is to use the table
lookup with interpolation. (It is not always the case that curve fit calculations are faster, so
if improving the simulation time is your goal this assumption should be verified.)
3.3.3 Using Real Data in the Model: From Workspace and File
A web site gives access to weather data from volunteers in your local area. We used
data for Norfolk, Massachusetts on January 1, 2006, to create an M-file that can then
import the actual temperature data into the home heating model. This data is in the M-file
weather_data.m, and Figure 3.7 is its plot. The web site that the data comes from is
http://www.wunderground.com/weatherstation.
Figure 3.8. Simulink model of the home heating system using measured outdoor
temperatures.
Figure 3.9. From Workspace block dialog that uses the measured outdoor temperature array.
Running the M-file creates the plot in Figure 3.7 and the data in the MATLAB
workspace. The data is stored in a 288 × 3 array called Tdata, where the first column
is the time (in hours), the second is the measured dry bulb temperature, and the last column
is the wet bulb temperature (both in degrees F).
To use data in MATLAB as an input to a Simulink model, the From Workspace block
is used. (An equivalent block called From File gets the data from matrices that were saved
using a mat file; since the two blocks work in the same way, we will illustrate only the
Workspace block.) We have modified the Thermo_table model to allow input to the
model to come from a data file in MATLAB. This is Thermo_real_data in the NCS
library (Figure 3.8).
The data we want is in the array called Tdata, where the first column is the time and the
second is the outside temperature. The From Workspace block dialog in Figure 3.9 opens
when you double click on the block. We have entered the values for the data into this dialog
Figure 3.10. Using real data from measurements in the Simulink model. (The
upper panel plots the indoor vs. outdoor temperature, the lower panel the measured
temperature, both against time in seconds.)
(calling the data "Tdata"). Since the array has three columns (the last column is the wet
bulb temperature, which we do not need), the Data field in the dialog box has Tdata(:,1:2).
This truncates the tabulated data down to two columns. The sample time is every 5 minutes, or
every 300 sec. For completeness, we show the result of simulating the data in Figure 3.10.
3.4 Rotations in Three Dimensions: Euler Rotations, Axis-Angle Representations, Direction Cosines, and the Quaternion
We saw in Chapter 1 that the Foucault pendulum has forces that are a consequence of the
rotation of the coordinate system where the pendulum is mounted. When we extend these
ideas to a rigid object rotating in three-dimensional space, it is not easy to keep track of
the new orientation. This is because even when doing the rotations about a single axis at a
time, the order in which they are done determines the result. The simplest way to illustrate this
is to think about what happens if you were standing, facing north, and you then turned 90
degrees to the left followed by a 90-degree rotation to lie down on your back. You would be
lying face up with your head to the east. If you did this sequence in the reverse order—i.e.,
you first lay down on your back and then you turned left—you would be lying on your left
side with your head to the north, clearly not in the same orientation as the first sequence.
Mathematically, rotations do not commute.
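The standing-up thought experiment can be checked with two 90-degree rotation matrices. This is a sketch: the matrices are written in the passive (coordinate transformation) convention used below, and their entries are exact integers, so the two products can be compared exactly:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# 90-degree coordinate rotations about the z- and y-axes
Rz = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
Ry = [[0, 0, -1], [0, 1, 0], [1, 0, 0]]

print(matmul(Rz, Ry))  # [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(matmul(Ry, Rz))  # [[0, 0, -1], [-1, 0, 0], [0, 1, 0]]: a different matrix
```

The two products differ, which is the algebraic statement that rotations do not commute.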
The way we keep track of the rotation of a body is with the angles of rotation about
each of the three orthogonal axes. Mathematically, rotations are transformation matrices
that describe what the rotation does to the x, y, and z coordinates. These matrices have
their own representations and rules of manipulation.
Figure 3.11. Single axis rotation in three dimensions.
3.4.1 Euler Angles
Euler proved that any orientation requires at most three rotations about arbitrary axes on a
rigid body. (The rigidity of the body ensures that the complete body moves as a single mass
with a single, fixed inertia; we discuss what happens when the body is not rigid in Chapter 6.) The angles
that define the rotation are Euler angles, and they are not unique. In most applications that
use Euler angles, it is conventional for the rotations first to be about the x-axis, followed
by a rotation about the new y-axis and then another rotation about the new z-axis. (This
is abbreviated “123 rotation”.) This is not unique since there are 12 possible orderings
of the rotation axes (namely 123, 132, 213, 231, 312, 321, 121, 131, 212, 232, 313, and
323). To compute the orientation of the new axes after such a sequence of rotations, we
need to investigate what happens with each of them alone. Only coordinates in the plane
perpendicular to the axis of rotation change when a single axis rotation is used. We can
think of the three Euler angle rotations as a sequence of three independent planar rotations,
each plane redefined by the previous rotation. Figure 3.11 shows a rotation about the z-axis
that causes the entire x-y plane to rotate.
For the single axis rotation of angle θ about the z-axis shown above, the coordinates
(x, y) of a point in the plane have a new set of coordinates $(x_{New}, y_{New})$ given by
$$x_{New} = x\cos(\theta) + y\sin(\theta), \qquad y_{New} = -x\sin(\theta) + y\cos(\theta).$$
This transformation, using the vector-matrix form, is
$$\begin{pmatrix} x_{New} \\ y_{New} \\ z_{New} \end{pmatrix} =
\begin{pmatrix} \cos(\theta) & \sin(\theta) & 0 \\ -\sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix} = T(\theta) \begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$
For a given value of the angle θ , the matrix T is orthogonal. (As an exercise, prove this by
showing that $TT^T$ is the identity matrix.)
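A quick numerical spot check of the orthogonality claim (not a substitute for the proof the exercise asks for):

```python
import math

def T(theta):
    # the single axis z-rotation matrix defined above
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

M = matmul(T(0.7), transpose(T(0.7)))  # should be the identity to rounding error
print(all(abs(M[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(3) for j in range(3)))  # True
```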
As an example, if we use this result to compute the transformation of the coordinates
after a standard Euler angle rotation using the axes 121 (i.e., the three rotation angles are
about the original x-axis, then the new y-axis, and, last, about the new x-axis), the new
coordinates are given by
$$\begin{pmatrix} x_{New} \\ y_{New} \\ z_{New} \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\psi) & \sin(\psi) \\ 0 & -\sin(\psi) & \cos(\psi) \end{pmatrix}
\begin{pmatrix} \cos(\theta) & 0 & -\sin(\theta) \\ 0 & 1 & 0 \\ \sin(\theta) & 0 & \cos(\theta) \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\phi) & \sin(\phi) \\ 0 & -\sin(\phi) & \cos(\phi) \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$
Notice that each single axis transformation is effectively a 2 × 2 matrix, but when the axis of rotation changes,
the 2 × 2 matrix appears in different rows and columns of the full 3 × 3 rotation matrix.
The three transformation matrices can be multiplied together to give a single direction
cosine matrix for the entire rotation, but in most applications this is not done. Note that
since each of the transformations by themselves is orthogonal, the combined direction cosine
matrix product is also orthogonal. (The proof of this is an exercise.)
When an object is rotating, each axis of the body has its own rotational rate. It is usual
to denote these rotations with the letters p, q, and r for the x-, y-, and z-axes attached to
the body, respectively. When this is the case, the Euler angles are functions of time, so we
need to know what happens to them as the object rotates.
We saw in Section 1.4.1 of Chapter 1 that a scalar rotation around a single axis gives
rise to rotations in the two coordinates that are orthogonal to the rotation axis. Generalizing
this to three dimensions gives the operator equation (where a is an arbitrary vector in the
“Body” frame that is rotating at the vector rate ω)
$$\left.\frac{d\mathbf{a}}{dt}\right|_{Inertial} = \left.\frac{d\mathbf{a}}{dt}\right|_{Body} + \boldsymbol{\omega} \times \mathbf{a}.$$
A rotating body has accelerations induced whenever the angular momentum vector changes.
If we let the rotational rates around the body axes be
$$\boldsymbol{\omega} = \begin{pmatrix} p \\ q \\ r \end{pmatrix},$$
the angular momentum of the body around its center of mass is the result of calculating the
angular momentum of every mass particle in the rigid body and then summing over all of
the particles. If the rigid body is homogeneous, the summation is an integral. In either case,
the result is that the angular momentum is Jω, where the inertia matrix is
$$J = \begin{pmatrix} J_{xx} & -J_{xy} & -J_{xz} \\ -J_{yx} & J_{yy} & -J_{yz} \\ -J_{zx} & -J_{zy} & J_{zz} \end{pmatrix}.$$
(Since this result might be unfamiliar, see [17] for more details.)
From Newton’s laws applied to rotations, the torques applied to the body and the rate
of change of the angular momentum balance, so (using the derivative of a vector in inertial
coordinates above)
$$\frac{d(J\boldsymbol{\omega})}{dt} = J\frac{d\boldsymbol{\omega}}{dt} + \frac{dJ}{dt}\boldsymbol{\omega} + \boldsymbol{\omega} \times J\boldsymbol{\omega} = \mathbf{T}.$$
When the inertia matrix is constant over time, J factors to give
$$\frac{d\boldsymbol{\omega}}{dt} = J^{-1}\left(\mathbf{T} - \boldsymbol{\omega} \times (J\boldsymbol{\omega})\right).$$
This equation is the starting point for any simulation that involves the rotation of a rigid
body. (We will build a simulation model in Simulink shortly.) If this equation is integrated,
the result is the angular velocity at any time t. Integrating the angular velocity gives the
angular position. The angular position that results from integrating the angular rate is not a
record of the Euler angle history. This is because the angular rate ω is always with respect
to the body, and the Euler angles are always with respect to the original orientation of the
body.
Using the transformation from the body axes to the Euler axes (and the Euler rotations
about the x-, y-, and new z-axes) developed above, the instantaneous angular rate in the
body coordinates (with respect to the Euler angle rates) is
$$\begin{pmatrix} p \\ q \\ r \end{pmatrix} =
\begin{pmatrix} \dfrac{d\phi}{dt} \\ 0 \\ 0 \end{pmatrix} +
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} 0 \\ \dfrac{d\theta}{dt} \\ 0 \end{pmatrix} +
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ \dfrac{d\psi}{dt} \end{pmatrix}.$$
Multiplying the matrices in this expression gives
$$\begin{pmatrix} p \\ q \\ r \end{pmatrix} =
\begin{pmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi\cos\theta \\ 0 & -\sin\phi & \cos\phi\cos\theta \end{pmatrix}
\frac{d}{dt}\begin{pmatrix} \phi \\ \theta \\ \psi \end{pmatrix}.$$
The Euler rates in terms of the body rates come from inverting the matrix in this expression
as follows (noting that this matrix is not orthogonal; see Exercise 3.1):
$$\frac{d}{dt}\begin{pmatrix} \phi \\ \theta \\ \psi \end{pmatrix} =
\begin{pmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi\cos\theta \\ 0 & -\sin\phi & \cos\phi\cos\theta \end{pmatrix}^{-1}
\begin{pmatrix} p \\ q \\ r \end{pmatrix} =
\begin{pmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \dfrac{\sin\phi}{\cos\theta} & \dfrac{\cos\phi}{\cos\theta} \end{pmatrix}
\begin{pmatrix} p \\ q \\ r \end{pmatrix}.$$
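The inverted matrix can be coded directly. The sketch below (a hypothetical `euler_rates` helper) also makes the numerical difficulty discussed next visible: the 1/cos θ terms blow up as θ approaches 90 degrees:

```python
import math

def euler_rates(phi, theta, p, q, r):
    # Euler angle rates from the body rates (p, q, r), per the matrix above
    tt, ct = math.tan(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    dphi = p + sp * tt * q + cp * tt * r
    dtheta = cp * q - sp * r
    dpsi = (sp / ct) * q + (cp / ct) * r
    return dphi, dtheta, dpsi

print(euler_rates(0.0, 0.0, 0.1, 0.2, 0.3))   # (0.1, 0.2, 0.3): axes aligned
print(euler_rates(0.0, 1.57, 0.0, 0.0, 0.1))  # dpsi is huge near theta = 90 deg
```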
Now there are some interesting and potentially difficult numerical issues associated with this
equation. Whenever the Euler angle θ becomes an odd multiple of π/2, this matrix becomes singular
(its inverse becomes infinite), so keeping track of the orientation using this representation can
have numeric problems whenever the Euler angles are near 90 degrees or 270 degrees. Even
if there were a nice way of overcoming the numerical issues, the fact that the transformation
matrix is not orthogonal would create numerical stability issues. What can we do about this?
3.4.2 Direction Cosines
Assume that three rotations that lead to a general rotation are sequential about the axes z,
y, and x (axes 321 as defined above). The rotation matrices above give the final orientation
as the product of three matrices as follows (starting with a rotation of angle ψ about z,
followed by a rotation of θ about y, and, last, a rotation of φ around x):
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{pmatrix}
\begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}
\begin{pmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
$$= \begin{pmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\
\sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \sin\phi\cos\theta \\
\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & \cos\phi\cos\theta \end{pmatrix}.$$
As we pointed out when we were talking about the Euler angles, this matrix is the “direction
cosine matrix,” and it defines the cosines of the angles between the new axis (after the
rotation) and the old (before the rotation). Thus the 1,1 element of the matrix is the cosine
of the angle between the x-axis after rotation and the x-axis before rotation, the 1,2 element
is the cosine of the angle between the x-axis after and the y-axis before, etc. If we denote
this matrix by C, then when the object is rotating about its body axes with rates p, q, and
r, the direction cosine matrix satisfies the differential equation
$$\frac{dC}{dt} = \begin{pmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{pmatrix} C
= -\begin{pmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{pmatrix} C = -\Omega C.$$
This equation comes from the fact that the rate of angular rotation is the cross product of
the vector rate with the body axis vector. (In Exercise 3.2 you will show that the matrix
$$\Omega = \begin{pmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{pmatrix},$$
multiplying, on the left, any column $c_i$ of C, gives a result that is the same as $\omega \times c_i$.)
This equation can be the calculation used to find the orientation of an object as it rotates,
but because there are nine elements in C and there are only six independent variables, we
would be calculating nine values when only six are required. For this reason, direction
cosine formulations are not preferred for calculating rotations. The next section describes
the preferred method. It is instructive to build a Simulink model that uses this equation to
keep track of the orientation of an object and to compare the computation time required
with that required using the quaternion formulation (see Exercise 3.3).
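A numerical spot check of the Exercise 3.2 claim that Ω times a column equals ω crossed with that column (the helper names are mine):

```python
def skew(w):
    # the matrix called Omega in the text, built from omega = (p, q, r)
    p, q, r = w
    return [[0.0, -r, q], [r, 0.0, -p], [-q, p, 0.0]]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

w = [1.0, 2.0, 3.0]
c = [0.5, -1.0, 2.0]
print(matvec(skew(w), c))  # [7.0, -0.5, -2.0]
print(cross(w, c))         # the same vector
```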
3.4.3 Axis-Angle Rotations
Euler also proved that any three-dimensional rotation can be represented using a single
rotation about some axis (Euler’s theorem). The “axis-angle” form of a rotation uses this
observation. For now let us not worry about how we determine the axis; assume that the
vector n specifies it. The desired rotation is defined to be (using the right-hand rule) a
rotation about this axis by some angle. The vector n is a unit vector (i.e., nT n = 1).
Most of the blockset tools in Simulink (the Aerospace Block Set and SimMechanics)
do not make direct use of the axis-angle form, but this representation is the starting point
for deriving other forms. The axis-angle approach forms the basis of the quaternion representation that is finding more applications in many diverse fields (for example, mechanical
applications, computer-aided design, and robotics).
We need four pieces of information to describe the axis-angle transformation, so we
denote it by the four-dimensional vector $[\mathbf{n}\;\;\theta]^T = [n_x\;\;n_y\;\;n_z\;\;\theta]^T$. However,
because $\mathbf{n}^T\mathbf{n} = 1$ (n specifies the direction of the axis that we are rotating about, not its
length), only three of these four numbers are independent. When there is a rotation about
the body axis, the rotational rate around the axis n is
$$\frac{d\mathbf{n}}{dt} = \mathbf{n} \times \boldsymbol{\omega} = -\boldsymbol{\omega} \times \mathbf{n}.$$
3.4.4 The Quaternion Representation
There is a nice computationally compact method for computing rotations using the axis-angle
form. The method uses the “quaternion representation,” invented by William Hamilton in
1843. A quaternion is a modification of the angle-axis 4-vector that represents the rotation,
defined as
$$q = \begin{pmatrix} \mathbf{q}_1 \\ s \end{pmatrix} =
\begin{pmatrix} n_x \sin(\theta/2) \\ n_y \sin(\theta/2) \\ n_z \sin(\theta/2) \\ \cos(\theta/2) \end{pmatrix}.$$
Because of the sin(θ/2) multiplying the vector n in this definition, the rotational rate
of $\mathbf{q}_1$ (the first three components of the quaternion) is
$$\frac{d\mathbf{q}_1}{dt} = \frac{1}{2}\,\mathbf{q}_1 \times \boldsymbol{\omega}.$$
By using sin and cos of the rotation angle instead of the angle itself (as was done in
the axis-angle representation), the quaternion is numerically easier to manipulate. In fact,
the norm of q is
$$q^T q = \begin{pmatrix} \mathbf{q}_1^T & s \end{pmatrix} \begin{pmatrix} \mathbf{q}_1 \\ s \end{pmatrix} = \mathbf{q}_1^T\mathbf{q}_1 + s^2 = 1.$$
(As an exercise, show that this is true.) Since q must be a unit vector, we can continuously
normalize it as we compute it (which ensures an accurate representation of the axis of rotation).
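The unit-norm property follows directly from the definition; a quick check (the helper name is hypothetical):

```python
import math

def quat_from_axis_angle(n, theta):
    # q = [n sin(theta/2), cos(theta/2)]; n must be a unit vector
    s2 = math.sin(theta / 2)
    return [n[0] * s2, n[1] * s2, n[2] * s2, math.cos(theta / 2)]

n = [1 / math.sqrt(3.0)] * 3                 # unit axis along (1, 1, 1)
q = quat_from_axis_angle(n, 1.0)
norm = math.sqrt(sum(x * x for x in q))
print(abs(norm - 1.0) < 1e-12)               # True: unit norm by construction
```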
The other attribute of the quaternion representation that makes it more robust numerically is the way the body rates determine the quaternion rates. The derivative of the
Figure 3.12. Using quaternions to compute the attitude from three simultaneous
body axis rotations. (The annotation in the model reads: the quaternion q, a 4-vector, is
partitioned into a 3-vector and a scalar as q = [q1 s]^T; the differential equations for the
partitioned components are dq1/dt = 1/2 (q1 × ω + sω) and ds/dt = −1/2 ω^T q1. To ensure
that the quaternion vector has a unit norm, ||q|| is calculated at each step and the derivatives
are divided by this norm before they are integrated, so that the vector has unit norm at the
time t, just prior to the next integration step.)
quaternion, using the partition above, is
$$\frac{dq}{dt} = \frac{d}{dt}\begin{pmatrix} \mathbf{q}_1 \\ s \end{pmatrix}
= \frac{d}{dt}\begin{pmatrix} \mathbf{n}\sin(\theta/2) \\ \cos(\theta/2) \end{pmatrix}
= \begin{pmatrix} \dfrac{d\mathbf{n}}{dt}\sin(\theta/2) + \dfrac{1}{2}\dfrac{d\theta}{dt}\cos(\theta/2)\,\mathbf{n} \\ -\dfrac{1}{2}\dfrac{d\theta}{dt}\sin(\theta/2) \end{pmatrix}$$
$$= \begin{pmatrix} \mathbf{n}\times\boldsymbol{\omega}\,\sin(\theta/2) + \dfrac{1}{2}\cos(\theta/2)\,\boldsymbol{\omega} \\ -\dfrac{1}{2}\dfrac{d\theta}{dt}\sin(\theta/2) \end{pmatrix}
= \frac{1}{2}\begin{pmatrix} s\boldsymbol{\omega} + \mathbf{q}_1\times\boldsymbol{\omega} \\ -\boldsymbol{\omega}^T\mathbf{q}_1 \end{pmatrix}.$$
Thus, calculating the quaternion rate is simply a matter of some algebra involving the cross
product of the quaternion with the body rate (for $\mathbf{q}_1$) and an inner product with the body
rate (for s).
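The quaternion rate equations can be integrated in a few lines. The sketch below (hypothetical `qdot` helper, simple Euler steps) uses the continuous renormalization described above and checks the result against the exact quaternion for a constant spin about the body z-axis:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def qdot(q, w):
    # dq1/dt = (1/2)(s w + q1 x w),  ds/dt = -(1/2) w . q1
    q1, s = q[:3], q[3]
    v = cross(q1, w)
    return ([0.5 * (s * w[i] + v[i]) for i in range(3)]
            + [-0.5 * sum(w[i] * q1[i] for i in range(3))])

q = [0.0, 0.0, 0.0, 1.0]      # identity attitude
w = [0.0, 0.0, 1.0]           # constant 1 rad/s spin about the body z-axis
dt = 1e-4
for _ in range(10000):        # integrate to t = 1 s, renormalizing each step
    q = [qi + dt * di for qi, di in zip(q, qdot(q, w))]
    m = math.sqrt(sum(x * x for x in q))
    q = [x / m for x in q]
# the exact answer after 1 s is [0, 0, sin(0.5), cos(0.5)]
print(abs(q[2] - math.sin(0.5)) < 1e-4, abs(q[3] - math.cos(0.5)) < 1e-4)
```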
We have created a subsystem model for these equations (see Figure 3.12). It uses some
Simulink blocks that we have not encountered before. The first is the block that extracts a
component from a vector (the “Selector” block). We use this block twice: first to extract
q1 and second to extract s. The other blocks that are used are the dot product and cross
product blocks in the Math Operations library. We need these because of the cross product
in $dq/dt$ and the calculation of $\omega^T\mathbf{q}_1$.
The cross product block in Simulink implements the definition of the cross product
using a masked subsystem. Since we have not thoroughly explored the masking of
subsystems yet, let us take a little detour here to do so. The idea behind a mask is to
create a new Simulink block (i.e., a user-defined block where the data needed to execute
the block appear in a dialog that the user selects by double clicking on the block, as we have
seen many times for the built-in blocks in the Simulink browser). The mask hides the
internal mathematics from the user so it has the look and feel of a Simulink built-in block.
To build such a block, you first create a subsystem in the usual way and then invoke the
mask. For the cross product, the subsystem uses two multiplication blocks and four Selector
blocks (to select the appropriate components of the vector for the cross product). The
subsystem is in the figure above.
If you were creating this block as a masked subsystem, the next step in masking the
subsystem would be to right click the subsystem block and select “Edit Mask” from the
menu. This opens the “Edit mask” dialog that allows you to specify the mask properties.
For the cross product block, no input parameters are required, so the only thing you might
want to do is provide documentation for the block using the “Documentation” tab in the
dialog. We will see other masked subsystems as we start to build models that are more
complex. Therefore, we will defer further discussion on building a mask until we need
to use them in these models. Using the equations above and the cross product block, the
Simulink model that will compute the quaternions is shown below. (This model is called
Quaternion_block in the NCS library.) The input to this block is the angular velocity in
body coordinates of the rigid body, and the output is the quaternion rate. To complete the
quaternion calculations we must integrate the quaternion rate. We will do this outside the
quaternion subsystem after we show how to extract the Euler angles and direction cosine
matrix from q.
We use the definition of q to find the Euler angles and direction cosine matrix. Refer
back to the definition of the direction cosine matrix: the Euler angles come from the
appropriate elements of the matrix and the inverse trigonometric functions. For example, in
the 321 representation we developed above, the 2,3 element of the direction cosine matrix
is sin φ cos θ, and the 3,3 element is cos φ cos θ, so if we divide the 2,3 element by the 3,3
element we get tan φ; the Euler angle φ is therefore given by $\phi = \tan^{-1}(c_{2,3}/c_{3,3})$. In a similar fashion,
the other angles are
$$\theta = \sin^{-1}(-c_{1,3}) \quad \text{and} \quad \psi = \tan^{-1}\left(\frac{c_{1,2}}{c_{1,1}}\right).$$
From the definition of the quaternion, the Euler angles are determined from the following
(where the components of the vector $\mathbf{q}_1$ are denoted $q_1$, $q_2$, and $q_3$):
$$\phi = \tan^{-1}\left(\frac{2(q_2 q_3 + s q_1)}{s^2 - q_1^2 - q_2^2 + q_3^2}\right),$$
$$\theta = \sin^{-1}\left(-2(q_1 q_3 - s q_2)\right),$$
$$\psi = \tan^{-1}\left(\frac{2(q_1 q_2 + s q_3)}{s^2 + q_1^2 - q_2^2 - q_3^2}\right).$$
Again, using the same reasoning, the direction cosine matrix is
$$\begin{pmatrix} s^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - s q_3) & 2(q_1 q_3 + s q_2) \\
2(q_1 q_2 + s q_3) & s^2 + q_2^2 - q_1^2 - q_3^2 & 2(q_2 q_3 - s q_1) \\
2(q_1 q_3 - s q_2) & 2(q_2 q_3 + s q_1) & s^2 + q_3^2 - q_2^2 - q_1^2 \end{pmatrix}.$$
A Simulink block to do these transformations is in Figure 3.13 (this is the model
Quaternion2DCM in the NCS library). Notice that this model implements the direction
cosine matrix from the following definition:
$$DCM = \left(s^2 - |\mathbf{q}_1|^2\right) I_{3\times 3} + 2\,\mathbf{q}_1\mathbf{q}_1^T - 2s
\begin{pmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{pmatrix}.$$
(In Exercise 3.5, you are to verify that this calculation gives the same direction cosine matrix
as that shown above.)
When this discussion started, we noted that the rotational acceleration comes from
the Euler equation
$$\frac{d\boldsymbol{\omega}}{dt} = J^{-1}\left(\mathbf{T} - \boldsymbol{\omega} \times (J\boldsymbol{\omega})\right).$$
We have assumed that the inertia matrix is constant; otherwise, its derivative needs to
be included since the left-hand side of this equation is really the derivative of the total
momentum given by Jω.
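For a constant, diagonal (principal axis) inertia matrix the Euler equation is only a few lines of code, since the inverse of J is elementwise. A sketch (hypothetical `omega_dot` helper), checked against the fact that a torque-free spin about a principal axis is steady:

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def omega_dot(J_diag, w, T):
    # d(omega)/dt = J^-1 (T - omega x (J omega)), with J diagonal so
    # that the matrix inverse reduces to an elementwise division
    Jw = [J_diag[i] * w[i] for i in range(3)]
    coupling = cross(w, Jw)
    return [(T[i] - coupling[i]) / J_diag[i] for i in range(3)]

J = [1.0, 2.0, 3.0]                      # principal inertias
w = [0.0, 0.0, 0.5]                      # spin about a principal axis
print(omega_dot(J, w, [0.0, 0.0, 0.0]))  # [0.0, 0.0, 0.0]: the spin is steady
```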
We combine the body-axis angular acceleration term above with the quaternion calculation shown in Figure 3.12 to give the Simulink model shown in Figure 3.14. (This model,
called Quaternion_acceleration, is in the NCS library.) Also, note that the block implements a time-varying inertia matrix using input port 2 for the derivative of
the inertia; you can either remove the port if the inertia is constant or put the 3 × 3 zero
matrix as the input.
Once again, some new Simulink blocks are in this model. The multiplication block in the Math Operations library computes the inverse of the matrix J. To do this, drag the multiplication block into the model and then double click to open its block dialog. In the dialog, change the multiplication type to Matrix (*) in the pull-down menu, and then put the symbols “/*” in the “Number of inputs” box. The icon will change to the matrix multiply, and “Inv” will denote the top input (because of the division symbol you entered in the dialog).
[Figure 3.13 builds the three terms of the direction cosine matrix with Reshape, Transpose, Matrix Multiply, Gain, identity, concatenation, and selector blocks. Its annotation reads: “This block implements the conversion from quaternions to the direction cosine matrix. The equation implemented is [A(q)] = (s² − |q|²) I + 2qqᵀ − 2sQ, where

      |  0   −q3   q2 |
Q =   |  q3    0  −q1 | .”]
      | −q2   q1    0 |
Figure 3.13. Simulink block to convert quaternions into a direction cosine.
We have already encountered the text block with TeX when we explored the Lorenz attractor in Section 3.1. This model uses the feature as well, and it is a good habit to do so. To place a text block in a Simulink model, double click at the desired location and simply type the text. This approach should always be used to annotate the model with the equations that the model implements: if you open the model later, the combination of the block diagram and the equations makes the model very easy to understand and reuse. You have control over the font, the font size, the font style, the alignment, and finally the ability to turn on TeX commands in the block. These are selectable (after the text is typed) using the “Format” menu in the Simulink window. For the TeX commands, the options are the subset of TeX that MATLAB recognizes. Thus, for example, the annotation of the equation at the bottom of the quaternion model in Figure 3.12 above uses the following text (the text appears centered in the block because the alignment is set to “center”):
[Figure 3.14 combines the body-axis acceleration calculation (inputs J, dJ/dt, and the external momentum M; Matrix Multiply and Cross Product blocks forming the angular momentum Jω and the term ω × H) with the quaternion propagation and normalization blocks. Its two annotations read: “This section of the block implements the equation dω/dt = J⁻¹[−ω × Jω − (dJ/dt)ω + m_total]” and “This section implements the quaternion equations dq1/dt = ½[q1 × ω + sω] and ds/dt = −½ωᵀq1. To ensure that the quaternion vector has a unit norm, ||q|| is calculated at each step and the derivatives are divided by this norm before they are integrated (ensuring that the vector has unit norm at the time t, just prior to the next integration step).”]
Figure 3.14. Combining the quaternion block with the body rotational acceleration.
{\bf{\it This block implements the quaternion equations.}}
Let the quaternion {\bfq} (a 4 vector) be partitioned into a 3 vector and a scalar as
follows:
{\bfq} = {[\it {\bfq_1} } {s ]^T}.
The differential equations for the partitioned components of the quaternion vector are
{{^{d{\bf\itq_1}}}}/_{\itdt} = 1/2 [ {\bf{\itq_1} X {\omega}} + s {\bf {\omega}} ]
and
^{ds}/_{dt} = -1/2 {\bf{\omega}^T} {\it{\bfq_1}}.
To ensure that the quaternion vector has a unit norm, {\bf||q||} is calculated at each
step and the derivatives are divided by this norm before they are integrated (ensuring
that the vector has unit norm at the time t, just prior to the next integration step).
The command \bf{ } causes the text inside the braces to appear in boldface. The equation that defines the quaternion vector is {\bfq} = {[\it {\bfq_1} } {s ]^T}, where the \it invokes italics, and the ^ means superscript. Since the purpose of Simulink is to model the equations in a signal flow form, it is always good practice to include annotation for the actual equation. In fact, this practice will ensure that the next user of the block understands both the block flow and the equations that the block is implementing. To familiarize yourself with the TeX notation, open the quaternion model and review the two text blocks.
3.5 Modeling the Motion of a Satellite in Orbit
With the blocks we created in Section 3.4, we can now build a simulation for the rotational dynamics of a satellite in orbit. This really is the first Simulink model we will build that is close to a real problem. As such, you can use it as a starting point for models that you might build in the future.

We start by listing the specifications for the model and for what we hope to achieve. We assume we are trying to build a control system for a satellite that will orbit the earth. We also assume that the satellite’s control system will consist of a set of reaction wheels—devices that are essentially large wheels attached to a motor. The motor accelerates the reaction wheel and in the process creates a torque on the vehicle through Newton’s third law. (The spacecraft reacts to the accelerating inertia because the torque produced is equal and opposite to the torque on the wheel.) We assume that there are three wheels mounted on the spacecraft along the three orthogonal body axes. (For the exercise we will assume that the wheels are aligned with the body axes and that the body is symmetric, so the inertia matrix is diagonal.)
The first part of the model uses the quaternion and acceleration model from Section 3.4. For the second part of the model, we will need to create models for the electric motors that drive each of the wheels and the torque that they create. Once we have the model, we can create a feedback controller for the reaction wheels that will cause them to move the satellite from one orientation to another. In the process of creating these models, we will end up with a simulation for the rotational motion of a satellite in orbit. We will also confront a problem that always exists with reaction wheels, namely that they can provide a torque only as long as the wheels can be accelerated. Because a motor cannot accelerate once it has reached its maximum speed, a method is required to allow the motor to decelerate back to zero speed. This uses reaction jets to “unload” the wheel (the jets, by firing in the opposite direction from the torque created by the deceleration, allow the reduction of the wheel’s speed without affecting the spacecraft). In what follows we will model only the reaction wheel, not the unloading of the wheel; the unloading is left as an exercise.
First, we build a model for the reaction wheels. A DC motor converts an electric current flowing through an iron core coil into a magnetic field that interacts with a stationary magnetic field to create a torque that causes the coil to spin. Current flowing through a coil of wire creates a north pole on one side of the coil and a south pole on the opposite side. The interaction of this magnet with the stationary magnetic field will cause the coil to spin as long as the stationary poles are opposite to the spinning coil (i.e., as long as a north pole of the spinning coil is near a north pole of the stationary magnet). As soon as the spin of the coil causes the north pole of the coil to align with the south pole of the stationary magnet,
[Figure 3.15 shows the armature circuit: the motor voltage source, the armature resistance, the armature inductance, the current iR, and a controlled back-emf source with Back emf = K_b ω, where K_b is the back emf gain. The mechanical side of the motor is J dω/dt = K_i i, where K_i is the torque constant.]
Figure 3.15. Electrical circuit of a motor with the mechanical equations.
the coil will see a torque that stops it from rotating, and it will stop. To overcome this,
the current in the coil must change direction at least every half revolution. The device that
switches the direction of the current at the half revolution point, if it is mechanical, is a
commutator.
Most modern motors achieve this switching using electronics where, as the rotor
turns, a device on the shaft lets the driving electronics know where the rotor is relative to
the fixed magnet (the stator). The torque applied to the motor is proportional to the current
flowing in the rotating coil, and as long as the gap between the rotating coil and the fixed
magnet is small, the torque is nearly linear. For the first part of the model, then, the torque is directly proportional to the magnetic field induced in the rotor by the rotor current; since the magnetic field is also nearly linear in the current, the torque is proportional to the current flowing in the rotor: Tm = Ki iR. The rotor electrical circuit consists of the rotor
resistance RR , the rotor inductance LR , the voltage applied to the rotor Vin , and the voltage
induced across the rotor because of its motion in the magnetic field of the stator, Vb . (This
voltage is the back electromotive force [emf ].)
A simple circuit diagram (see Figure 3.15) can represent the electromechanical equations for the motor. In this diagram, the motor torque Tm is the controlled source Ki i, and
the back emf, Vb , is the controlled voltage source whose voltage is Kb ωM .
It is a simple matter to use the fact that the sum of the voltages around the loop must be zero to get the equation for the current in the rotor (iR):

LR diR/dt = −RR iR + Vin − Vb.
The back emf is a nonlinear function of how fast the motor is turning. For well-designed motors, however, the voltage Vb is proportional to the motor angular velocity (ωM). Thus, the motor electrical circuit is the differential equation

diR/dt = −(RR/LR) iR − (Kb/LR) ωM + (1/LR) Vin.
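To see this first-order equation in action, here is a small Python sketch that integrates it with forward Euler using illustrative parameter values (our own choices, not the book's). The current settles to the steady state (Vin − Kb ωM)/RR, where the back emf subtracts from the applied voltage:

```python
# Illustrative motor parameters (assumed for this sketch, not from the text)
RR, LR, Kb = 2.0, 0.05, 0.1      # ohms, henries, volts per rad/s
Vin, wM = 12.0, 30.0             # applied voltage and motor angular velocity

iR, dt = 0.0, 1e-4
for _ in range(20000):           # 2 seconds, roughly 80 electrical time constants
    diR_dt = -(RR / LR) * iR - (Kb / LR) * wM + Vin / LR
    iR += diR_dt * dt

i_ss = (Vin - Kb * wM) / RR      # analytic steady state: set diR/dt = 0
```

With these numbers i_ss = (12 − 3)/2 = 4.5 A, and the integrated current matches it to well under a microamp.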
Because electrical energy, when converted into mechanical energy, is conserved, the constants Ki and Kb are dependent. The electrical power (the product of the back emf eb and the rotor current) is PElectrical = eb iR / 746 (where in the English unit system the 746 converts watts into horsepower [hp]). Using the linear back emf voltage, this term becomes Kb ωM iR / 746. In a similar way, the mechanical power is the torque produced times the motor angular velocity: PMechanical = Tm ωM / 550 = Ki iR ωM / 550 (where the 550 in the English unit system converts foot-pounds per second into hp). Equating these gives Kb = (746/550) Ki = 1.3564 Ki (in the English unit system).
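The dependence is easy to restate numerically; the snippet below (Python, with an arbitrary illustrative Ki) is just the power balance solved for Kb:

```python
# Electrical hp: Kb*wM*iR/746; mechanical hp: Ki*iR*wM/550.
# Equating them for all iR and wM forces Kb = (746/550) Ki.
Ki = 0.5                      # illustrative torque constant (ft-lb/A), not from the book
Kb = (746.0 / 550.0) * Ki     # back-emf gain implied by energy conservation
ratio = Kb / Ki               # 746/550, about 1.3564
```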
3.5.1 Creating an Attitude Error When Using Direction Cosines
We need to describe how one creates an “attitude error” for the 3-axis rotation of a spacecraft. None of the descriptions (Euler, direction cosine, or quaternion) lends itself to forming the difference between a desired and an actual value. We need a way to create an error that goes to zero at the desired attitude. The direction cosine matrix difference C − Cdesired, or some similar error for either the quaternion or the Euler angle representation, is not usable. (The property that drives the rotation to the desired command is that the error is zero exactly when the command is achieved.) This is because the necessary rotational commands about the three body axes are a nonlinear function of the angles (in any of the representations).
Let us perform some algebra with Maple to see what we can do to get a representation for the commands in terms of the direction cosine matrix and the quaternion approach. We will work with the direction cosine matrix first. The direction cosine matrix we worked with earlier is

        | cos θ cos ψ                        cos θ sin ψ                        −sin θ      |
CBI =   | sin φ sin θ cos ψ − cos φ sin ψ    sin φ sin θ sin ψ + cos φ cos ψ    sin φ cos θ | ,
        | cos φ sin θ cos ψ + sin φ sin ψ    cos φ sin θ sin ψ − sin φ cos ψ    cos φ cos θ |

where the rotations are about the z-, y-, and x-axes in sequence (through the angles ψ, θ, and φ, respectively), and the subscript BI indicates explicitly that the matrix rotates from some inertial (i.e., fixed) axes to the principal axes of the rotating body.
When the body is rotating with angular velocity

    | p |
ω = | q |
    | r |

about these principal axes, the matrix C satisfies the differential equation

        |  0    r   −q |
dC/dt = | −r    0    p | C = −Ω [ c1  c2  c3 ].
        |  q   −p    0 |

The value of Ω is explicitly defined by this equation.
We can also write this matrix differential equation in another form by creating a 9 × 1 vector that consists of the columns of C concatenated one on top of the other. That is, we let the vector c be

c = [ c11  c21  c31  c12  c22  c32  c13  c23  c33 ]ᵀ.
We also define the cross product matrix for C as (where the subscript i denotes the ith column of the matrix C)

       |   0    −c3i    c2i |
Cxi =  |  c3i     0    −c1i | .
       | −c2i    c1i     0  |
In terms of this cross product matrix, the differential equation for the columns of C is

dci/dt = Cxi ω.
Applying this equation to the vector c, we get a vector matrix differential equation for the 9-vector c in terms of the 3-vector ω (where CCross is the 9 × 3 matrix defined explicitly by this equation):

         | Cx1 |
dc/dt =  | Cx2 | ω = CCross ω.
         | Cx3 |
You can deduce some interesting attributes of CCross using Maple. Develop a Maple program (run from MATLAB) to show, remarkably, the two products below:

CCrossᵀ CCross = 2I3×3   and   CCrossᵀ c = 09×1.

Maple facilitates the calculations (all of the algebra and trigonometric identities involved in manipulating these matrices are quite extensive, and they are done automatically) and saves an immense amount of work.
If you have not built the Maple program yourself, you can open the MATLAB program Maple_CCt_identity in the NCS library to do the computations. This program sets up the direction cosine matrix in Maple and computes CCrossᵀ CCross and CCrossᵀ c. (It also computes the vector q and the matrix Q used in the following discussion.)
From these identities, we can create a nice error signal for a rotation. Multiply the equation dc/dt = CCross ω by CCrossᵀ on both sides to give

CCrossᵀ dc/dt = CCrossᵀ CCross ω = 2ω.

So ω = ½ CCrossᵀ dc/dt; that is, ω is the rotational rate needed to produce the direction cosine matrix C. Therefore, if we want a particular direction cosine matrix whose elements are the 9-vector c, all we need to do is calculate the value of ω from this equation and use it to drive
the three axes of the spacecraft. From the second identity we proved in the Maple program,
when the direction cosine matrix is at the desired value, the command becomes zero, as we
require. Notice how neatly this error signal works.
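If you would rather see the identities numerically than symbolically, this Python sketch (with illustrative Euler angles of our own choosing) builds CCross from a direction cosine matrix and checks both products:

```python
import numpy as np

# Direction cosine matrix from the Euler angle form above (illustrative angles)
phi, theta, psi = 0.3, -0.7, 1.1
cf, sf = np.cos(phi), np.sin(phi)
ct, st = np.cos(theta), np.sin(theta)
cp, sp = np.cos(psi), np.sin(psi)
C = np.array([[ct * cp,                ct * sp,                -st],
              [sf * st * cp - cf * sp, sf * st * sp + cf * cp, sf * ct],
              [cf * st * cp + sf * sp, cf * st * sp - sf * cp, cf * ct]])

def crossmat(v):
    # C_xi, so that crossmat(c_i) @ w equals c_i x w
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

CCross = np.vstack([crossmat(C[:, i]) for i in range(3)])  # the 9 x 3 matrix
c = C.T.reshape(9)              # columns of C stacked into the 9-vector c

assert np.allclose(CCross.T @ CCross, 2.0 * np.eye(3))     # first identity
assert np.allclose(CCross.T @ c, np.zeros(3))              # second identity
```

Both identities hold for any rotation matrix, so any angle choice works here.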
3.5.2 Creating an Attitude Error Using Quaternion Representations
There is a complete analogy to this result for the quaternion representation. We will summarize it here and ask you to reproduce the results in Exercise 3.7 at the end of this chapter. The first part of the quaternion result requires the formation of the 4 × 3 matrix Q as follows:

      |   s   −q3    q2 |
Q =   |  q3     s   −q1 | .
      | −q2    q1     s |
      | −q1   −q2   −q3 |
Then, following similar steps used in the direction cosine matrix derivation, the following two identities are true:

QᵀQ = I3×3   and   Qᵀ [ q1  q2  q3  s ]ᵀ = 03×1.
Finally, the differential equation for the quaternion vector that we developed above gives (in terms of the matrix Q)

ω = 2Qᵀ (d/dt) [ q1  q2  q3  s ]ᵀ.
This last equation becomes the defining equation for the quaternion command (again following the steps we used for the direction cosine); the commanded rate to achieve a desired quaternion value is

ηC = 2Qᵀ qDesired.
We use this in the spacecraft attitude-control model. All of the pieces we need are now in
place, so we can create the Simulink model for the simulation. You might want to try to do
this yourself before you open the model Spacecraft_Attitude_Control in the NCS
library.
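Before opening the model, you can convince yourself that the command ηC = 2Qᵀ qDesired really does pull the quaternion toward qDesired with a few lines of Python. This is only a forward-Euler sketch of the closed loop (the step size and iteration count are our own choices), using the quaternion kinematics and the normalization step described above:

```python
import numpy as np

def Qmat(q):
    # The 4 x 3 matrix Q built from q = [q1, q2, q3, s]
    q1, q2, q3, s = q
    return np.array([[s, -q3, q2],
                     [q3, s, -q1],
                     [-q2, q1, s],
                     [-q1, -q2, -q3]])

def qdot(q, w):
    # dq1/dt = 0.5 (q1 x w + s w),  ds/dt = -0.5 w . q1
    v, s = q[:3], q[3]
    return np.concatenate([0.5 * (np.cross(v, w) + s * w),
                           [-0.5 * np.dot(w, v)]])

q = np.array([0.4771, 0.4771, 0.4771, 0.5509])   # initial attitude from the text
qd = np.array([1.0, 2.0, 3.0, 4.0])
qd /= np.linalg.norm(qd)                         # q_desired/norm(q_desired)

dt = 0.01
for _ in range(2000):
    w = 2.0 * Qmat(q).T @ qd      # commanded body rate, eta_C = 2 Q^T q_desired
    q = q + qdot(q, w) * dt
    q /= np.linalg.norm(q)        # renormalize, as the Simulink model does
```

After the run, q has converged to qd, and the command 2Qᵀqd has gone to zero, exactly the behavior the error signal is designed to produce.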
3.5.3 The Complete Spacecraft Model
The model of the complete rotational dynamics of the spacecraft is in Figure 3.16. It has
four subsystems to compute the following:
• the spacecraft rotational motion—this is a variation of the quaternion block we created
in Section 3.4.4;
• the Reaction Wheel dynamics that we created above;
Figure 3.16. Controlling the rotation of a spacecraft using reaction wheels. [The figure connects the four subsystems described in this section: the Quaternion and Spacecraft Rotational Dynamics block (inputs: the inertia J and the external momentum M; outputs: omega, the body angular acceleration, and the quaternions), the Reaction Wheels block (input V_in; outputs: the wheel speeds and wheel momentum), the PID Controller driven by the attitude and rate errors (converted from radians to volts), and the block that forms the quaternion propagation matrix E′. The desired attitude is q_desired/norm(q_desired), the command is 2E′q_desired, and a final block calculates and displays E′*E, which should be I_3.]
• a block that has a PID controller in it;
• a block that forms the matrix Q from the quaternion vector.
In addition, there are blocks that compute the command (ηC = 2Qᵀ qDesired) and a block that checks the identity QᵀQ = I3×3. (This is used to verify the accuracy of the computations and that the model is accurate.)
It is a good idea to familiarize yourself with each of the subsystems and the overall simulation before you run it. The data is all contained in the model; the inertia of the wheels is 50 slug-ft² and the vehicle inertia matrix is diagonal, with the diagonal values of 1000, 700, and 500. The reaction-wheel-dynamics model uses the same parameters as we used in the model above. The model uses as initial conditions for q the value [0.4771 0.4771 0.4771 0.5509], and the desired final value is set to q_desired/norm(q_desired) in the block that is labeled Desired Attitude. The value for q_desired is [1 2 3 4]. You can change this value at the MATLAB command line to any value you would like and see the result.
The PID controller is set up with only PD. (The Integral gain is set to zero; look inside
the PID block to see the values.) The Proportional gain is set to 10, and the Derivative
gains (on the three angular velocities p, q, and r) are 70. We have not attempted to make
these values optimum in any way; they simply make the response reasonably fast with a
minimum overshoot in the quaternion values at the end of the command.
When you run the model using these parameter values, you should see the quaternion
values plotted in the figure at the right below. Note that these quaternion values change
smoothly over the time interval, and they do indeed go to the desired final values from the
initial value we used.
Figure 3.17. Matrix concatenation blocks create the matrix Q used to achieve a desired value for q. [Three Horizontal Concatenation blocks form the 1 × 4 rows [s q3 −q2 −q1], [−q3 s q1 −q2], and [q2 −q1 s −q3] from the quaternion components (with Minus blocks supplying −q1, −q2, and −q3), and a Vertical Concatenation block stacks the rows into the 3 × 4 matrix Q′.]
One aspect of this model is new, namely the way we create the matrix Qᵀ. We use two steps to form the elements of the matrix (which has 3 rows and 4 columns). First, we compute the 3 rows (each 1 × 4) one at a time, and then we concatenate them vertically to create the matrix.

The block that creates the rows and columns is the Matrix Concatenation block in the Math Operations library in Simulink. This block takes scalars or one-dimensional vectors and places them into the columns (called horizontal concatenation) or the rows (called vertical concatenation) of a matrix. The initial construction of the rows places the various scalar values from the quaternion vector q into the appropriate locations to build the three rows of the matrix (using the horizontal version of the block), and then the rows are stored into a matrix using the vertical version of the block. Figure 3.17
shows how we used the block “Horizontal Concatenation” in the subsystem called “Form
quaternion propagation matrix.”
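The same two-step construction is easy to mimic outside Simulink. The Python sketch below forms the rows with hstack (horizontal concatenation) and stacks them with vstack (vertical concatenation), then checks the identity Q′Q = I_3 that the model displays; the function name is ours, not from the model:

```python
import numpy as np

def form_Qt(q):
    # Rows of Q' (the transpose of the 4 x 3 matrix Q), built one at a time
    q1, q2, q3, s = q
    row1 = np.hstack([s, q3, -q2, -q1])
    row2 = np.hstack([-q3, s, q1, -q2])
    row3 = np.hstack([q2, -q1, s, -q3])
    return np.vstack([row1, row2, row3])   # stack the rows into the 3 x 4 matrix

q = np.array([1.0, 2.0, 3.0, 4.0])
q /= np.linalg.norm(q)                     # unit quaternion [q1, q2, q3, s]
Qt = form_Qt(q)

assert np.allclose(Qt @ Qt.T, np.eye(3))   # Q'Q = I_3, as the model verifies
assert np.allclose(Qt @ q, np.zeros(3))    # Q'q = 0: the command vanishes at the target
```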
The quaternion block is applicable to more complex dynamics than a spacecraft.
Examples are the rotational dynamics of an automobile or any other vehicle in motion, a
robot, a linkage, or an inertial platform. Since the quaternion formulation can model these,
it is a good idea to keep this model as part of a user library with the other models in the
Simulink browser.
3.6 Further Reading
Reference [40] (Chapter 4 in particular) has a good description of the Euler and quaternion
representations of the rotational motion of a spacecraft.
In his book Unknown Quantity: A Real and Imaginary History of Algebra [7], John
Derbyshire describes the development of the quaternion representation as the logical leap
into the fourth dimension after the two-dimensional representation of complex numbers.
He notes that in 1827 Hamilton began investigating complex numbers in a purely algebraic
way. He found it extremely difficult to come up with a scheme that would make the algebra
distributive and would maintain the property of complex numbers that the modulus of the
product of two numbers is the product of their respective moduli. Hamilton’s insight into the
fact that you could not satisfy this modulus rule with triplets but could do so with quadruplets
led to the invention of quaternion algebra. Hamilton was so pleased with this result that he
inscribed it with a knife on a stone of Brougham Bridge near Dublin. Derbyshire’s book
is about the algebraic properties of the quaternion, which are not a major consideration for
portraying rotations. Despite this, his book is fun to read for its mathematical discussions
and its look at the personalities behind the math.
There is an excellent discussion of the computational efficiency of the quaternion
representation in [11], and you should consult this after you complete Exercise 3.3.
Exercises
3.1 Show that the matrix product

| p |   | dφ/dt |   | 1     0      0    | |   0   |   | 1     0      0    | | cos θ   0  −sin θ | |   0   |
| q | = |   0   | + | 0   cos φ  sin φ  | | dθ/dt | + | 0   cos φ  sin φ  | |   0     1     0   | |   0   |
| r |   |   0   |   | 0  −sin φ  cos φ  | |   0   |   | 0  −sin φ  cos φ  | | sin θ   0   cos θ | | dψ/dt |

gives

| p |   | 1     0        −sin θ     |        | φ |
| q | = | 0   cos φ   sin φ cos θ   |  d/dt  | θ | .
| r |   | 0  −sin φ   cos φ cos θ   |        | ψ |
3.2 Show that the direction cosine matrix satisfies the differential equation

        |  0    r   −q |
dC/dt = | −r    0    p | C = −Ω [ c1  c2  c3 ].
        |  q   −p    0 |
3.3 Start with the Simulink model for the quaternion representation of rotations. Next,
create a model that does the same thing using direction cosine matrices and, last, a
model that creates a rotation with Euler angles. Compare the computational efficiency
of the three models. In particular, see what happens when the Euler angles approach
the points where the transformations become infinite. Reference [24] has a good
discussion of why the quaternion approach is better.
3.4 Show that the norm of the quaternion vector is one.
3.5 Verify that the direction cosine matrix from the quaternion vector is

                                            |  0   −q3   q2 |
    DCM = (s² − |q1|²) I3×3 + 2q1q1ᵀ − 2s   |  q3    0  −q1 | .
                                            | −q2   q1    0 |
3.6 Try adding a simple controller to the reaction wheel model that will cause the reaction
wheel speed to decrease whenever it gets to 1000 rpm. Use an external torque applied
by a reaction jet. (The jet will create a force, so you need to decide on where it is
relative to the center of gravity to determine the torque.) If you apply a torque using
only a single jet with the force not through the center of gravity, then rotations will
occur about multiple axes. How do you prevent this? How many jets will you need
to unload the reaction wheels in all three axes?
3.7 Verify that the quaternion will move toward the desired quaternion qDesired if the commanded body rate is

ηC = 2Qᵀ qDesired.

In this equation, the matrix Q is

      |   s   −q3    q2 |
Q =   |  q3     s   −q1 | .
      | −q2    q1     s |
      | −q1   −q2   −q3 |
Also, verify that the matrix concatenation blocks in Figure 3.17 create this matrix.
Chapter 4

Digital Signal Processing in Simulink
We saw in the previous chapters how to build models of continuous time systems in Simulink.
This chapter provides insight into how to use Simulink to create, analyze, simulate, and code
digital systems and digital filters for various applications.
We begin with a simple example of a discrete system, one discussed in Cleve Moler’s Numerical Computing with MATLAB [29]. This example, the Fibonacci sequence, is not a
digital filter but is an example of a difference equation. In creating the Simulink model for
this sequence, we illustrate the fact that the independent variable in Simulink does not have
to be time. In this case, the independent variable is the index in the sequence. We set the
“sample time” in the digital block to one, and then we interpret the time steps as the number
index for the element in the sequence.
Digital filters use an analysis technique related to the Laplace transform, called the
z-transform. We introduce the mathematics of this transform along with several methods
for calculating digital filter transfer functions.
A digital signal typically consists of samples from an analog signal at fixed times. Since it is usually necessary to convert these digital signals back into an analog form so that they can be used (to hear the audio from a CD or a cell phone, for example), it is natural to start by asking how to go about doing this. The answer is the “sampling theorem,” which shows that an analog signal with a bounded Fourier transform (i.e., F(ω) = 0 for |ω| > ωM) can be sampled every π/ωM seconds or faster, and these samples can then be used to reconstruct the analog signal. The method for doing this reconstruction is via a filter that allows only the frequencies below ωM to pass through (and for this reason, we call the filter a low pass filter). Therefore, the second section of this chapter shows how to develop low pass filters
and ways to adapt their properties to make them do other useful signal processing functions
such as high pass and band pass. In all cases, we will use Simulink to simulate the filters
and explore their properties.
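The reconstruction the sampling theorem promises can be sketched numerically. The Python fragment below samples a 3 Hz tone at the sampling interval π/ωM (for a band limit of 5 Hz), then rebuilds the signal value between sample instants with the ideal low pass (sinc) interpolator; all of the specific numbers are our own illustrative choices:

```python
import numpy as np

wM = 2.0 * np.pi * 5.0            # assumed band limit: F(w) = 0 for |w| > 2*pi*5
T = np.pi / wM                    # sampling interval from the theorem (0.1 s)
n = np.arange(-2000, 2001)        # a long (necessarily truncated) run of samples

f0 = 3.0                          # a 3 Hz tone, safely inside the band
samples = np.sin(2.0 * np.pi * f0 * n * T)

t = 0.037                         # an arbitrary time between sample instants
x_rec = np.sum(samples * np.sinc((t - n * T) / T))   # ideal low pass interpolation
x_true = np.sin(2.0 * np.pi * f0 * t)
```

The reconstructed value agrees with the true signal to within the small error caused by truncating the infinite sum.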
The last part of this chapter will deal with implementation issues. We will look at how
Simulink allows us to evaluate different implementations of the digital filter. We will also
begin to look at the effect of limited precision arithmetic on the digital filter’s performance.
We will then look at a unique combined analog and digital device called a phase-locked loop. Simulink allows us to build a simulation of this device and do numerical experiments
that demonstrate its properties. In fact, Simulink is unique in its ability to do some very
detailed analyses of a phase-locked loop (see [11]). This is particularly true when it comes
to analyzing the effect of noise on the loop, a topic that we will take up in Chapter 5.
4.1 Difference Equations, Fibonacci Numbers, and z-Transforms
One of the more interesting difference equations is the Fibonacci sequence. Fibonacci, the
sequence he developed, and its rather remarkable properties and history are all described
in detail, along with several MATLAB programs developed to illustrate the sequence, in
Chapter 1 of Cleve Moler’s Numerical Computing with MATLAB [29]; also see [26].
Let us revisit this sequence using Simulink. The Fibonacci sequence is

f(n+2) = f(n+1) + f(n),   with f(1) = 1 and f(2) = 2.
As a reminder, the sequence describes the growth in a population of animals that are
constrained to give birth once per generation, where the index is the current generation.
One possible Simulink model to generate this sequence uses the “Discrete” library
from the Simulink browser. To understand how Simulink works, we need to describe how
one would go about writing a program that generates the Fibonacci numbers. In the NCM
library (the programs that accompany the book Numerical Computing with MATLAB), there
is a MATLAB program that computes and saves the entire Fibonacci sequence from 1 to
n. If, instead, we want only to compute the values as the program runs without saving the
entire set of values, the MATLAB code (called Fibonacci and located in the NCS library)
would look like
function f = fibonacci(n)
% FIBONACCI Fibonacci sequence
% f = FIBONACCI(n) sequentially finds the first n
% Fibonacci numbers and displays them on the command line.
f2 = 1;    % two terms back: f(1) = 1
f1 = 2;    % the most recent term: f(2) = 2
i = 3;     % the next index to compute
while i <= n
    f = f1 + f2
    f2 = f1;
    f1 = f;
    i = i + 1;
end
Even though we do not need to save the entire sequence in this example, we still need
to save the current and the previous value in order to calculate the next value of the sequence.
This fact means that this sequence requires two “states.” In the theory of systems, a state
is the minimum information needed to calculate the values of the difference equation. In
the snippet of MATLAB code above, we save the values in f1 and f2. When we build a
Simulink model to simulate a difference equation, the state needs a place to be stored for
use in the solution. Simulink uses the name 1/z to denote this block. To understand where
Figure 4.1. Simulating the Fibonacci sequence in Simulink.
this comes from, we need to show the method for solving difference equations using the
discrete version of the Laplace transform. We will do that in a moment, but first let us build
the Simulink model for the Fibonacci sequence.
To create the model, from MATLAB open the Simulink Library Browser as we have
done previously and open a new untitled model window.
Select the “Discrete” library, and then select the 1/z icon (the “Unit Delay” block) and
drag one into the open model window. Right click and drag on the Unit Delay block in
the model window to make a copy of the Unit Delay. Connect the output of the first delay
block to the input of the second. This will send the output of the first delay to the second
delay block. Now we need to create the left-hand side of the Fibonacci equation. To do
this we need to add the outputs of the two Unit Delay blocks. Therefore, open the “Math
Operations” library and drag the summation block into the model window. Then connect
the outputs of each of the Unit Delay blocks to sum the inputs one at a time. The model
should look like Figure 4.1.
In order to start the process, we need the correct initial conditions. To set them, double click on each Unit Delay block and set the initial conditions to 2 and 1 (from left to right in the diagram). This will set the initial value of f1 to 1 and the initial value of f2 to 2, as required. Notice that the 1/z block has a default “sample time” of 1 sec, which is exactly what we want
for simulating the sequence, as we discussed in the introduction above. To view the output,
drag a Scope block (in the Sinks library) into the model and connect it to the last Unit Delay
block. Click the start button on the model to start the simulation of the sequence.
Double click on the Scope block and click on the binoculars icon to see the result.
The simulation makes ten steps and plots the values that the Fibonacci sequence generated
as it runs. The graph should look like Figure 4.2. We created this figure using a built-in
MATLAB routine called “simplot.” This M-file uses a MATLAB structure generated by the
output of the Scope block. The Scope block generates this MATLAB data structure during
the simulation; the plot comes from the MATLAB command simplot(ScopeData). The
plot is Fibonacci Sequence.fig, and it is available in the NCS library from the Figures
directory.
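The recursion that the two Unit Delay blocks implement can also be checked outside Simulink. Here is a minimal sketch (in Python rather than MATLAB, purely for illustration) of the same iteration, started from the model's initial values:

```python
# f(n) = f(n-1) + f(n-2), started with 1 and 2 as in the Simulink model.
def fibonacci(n_steps, f1=1, f2=2):
    sequence = [f1, f2]
    for _ in range(n_steps - 2):
        sequence.append(sequence[-1] + sequence[-2])
    return sequence

print(fibonacci(10))  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

These are the values the Scope plots as the simulation runs.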
If you want to compute the golden ratio “phi” as was done in NCM [29], the calculation
requires that you divide the value of f2 by f1. This uses the Math Operations library Product
block. In this library, drag the product block into the model window and double click on
it. The dialog that opens allows you to change the operations. In the dialog box that asks
“Number of Inputs” (which is set to 2 by default), type the symbols * and /. This will cause
one of the inputs to be the numerator and the other the denominator in a division (denoted
by × and ÷ in the block). Connect the × sign on the Product block icon to the line after
the first delay block (the “Unit Delay” block in Figure 4.1) and connect the ÷ to the “Unit
Delay1” block. Connect the output of the Product block to a Display block that you can
get from the “Sinks” library. This block displays the numeric value of a signal in Simulink.
Figure 4.3 shows the Display with the result of the division after 10 iterations.

Figure 4.2. Fibonacci sequence graph generated by the Simulink model. (Two panels plot the sequence against the index: “Values for the first 20 Numbers” and “Detail from 0 to 10.”)

Figure 4.3. Simulink model for computing the golden ratio from the Fibonacci sequence. (Two Unit Delay (1/z) blocks produce f(n-1) and f(n-2); a Product block computes f(n)/f(n-1), and the Display block reads 1.6180339901756.)
To make the simulation run longer, open the Configuration Parameters menu under
the “Simulation” menu at the top of the Fibonacci model window. The dialog that opens
when you do this allows you to change the Stop time. Change it to some large number
(from the default of 10), and run the simulation. You should see the display go to 1.618. To
see the full precision of this number, double click on the Display block and in the Format
pull down, select “long.” This corresponds to the MATLAB long format. The result should
be 1.6180339901756. The limit of this ratio is the “golden ratio,” phi, which has the value
phi = (1 + √5)/2. (MATLAB returns 1.61803398874989 when calculating this, and as can be
seen after 20 iterations, Simulink has come very close.) The discussions in [29] and [26]
describe phi and its history in detail.
This discrete sequence is only one of many sequences whose solution you might want
to find in Simulink. In a more practical vein, we often want to process a digital signal
(a process called digital signal processing). Toward this end, digital filters are part of the
Simulink discrete library. They appear in the Digital library as z-transforms. What is this
all about?
4.1.1 The z-Transform
The z-transform, F(z), of an infinite sequence {f_k}, k = 0, 1, ..., n, ..., is

F(z) = Σ_{k=0}^∞ f_k z^k .
There are many technical details that need to be invoked to ensure that this sum converges
to a finite value, but suffice it to say that because the variable z is complex, there is a region
of the complex plane in which the sum is finite (even for sequences that diverge). For example,
let us compute the z-transform for the sequence that is 1 for all values of k. (This is called
the discrete step function.) Thus, we need to compute the sum

F(z) = Σ_{k=0}^∞ z^k = 1 + z + z^2 + z^3 + ··· .

If we multiply the value of F(z) by z, the sum on the right side becomes

zF(z) = z + z^2 + z^3 + ··· .

Now, by subtracting the second series from the first, all of the powers of z cancel (all the
way to infinity), and the only term that remains on the right is the 1, so

F(z) - zF(z) = 1    or    F(z) = 1/(1 - z).
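The telescoping argument can be checked numerically at any point with |z| < 1; this short sketch (Python, for illustration) compares a long partial sum against the closed form:

```python
# Geometric series versus the closed form 1/(1 - z) at z = 0.3.
z = 0.3
partial = sum(z ** k for k in range(200))
closed = 1 / (1 - z)
print(abs(partial - closed) < 1e-12)  # True
```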
As a second example, consider the sequence α^k, k = 0, 1, 2, .... This sequence is the
discrete version of the exponential function e^{at}, since the values of this function at the times
kΔt generate the sequence (e^{aΔt})^k = α^k (where α = e^{aΔt}). Following the same steps as we
used above, the z-transform of this sequence is

F(z) = Σ_{k=0}^∞ α^k z^k = 1/(1 - αz).
It is a simple matter to work with the definition to create a table of z-transforms. This
table will allow you to solve any linear difference equation. For example, the discrete sine
can be generated using

sin(ωkΔt) = (e^{iωkΔt} - e^{-iωkΔt})/(2i)

and the above transform of α^k.
You can use the MATLAB connection to Maple to get some z-transforms. Try some
of these:

syms k n w z
simplify(ztrans(2^n))

This gives z/(z-2) as the result.

ztrans(sym('f(n+1)'))

This gives z*ztrans(f(n),n,z)-f(0)*z as the result.

ztrans(sin(k*n))

This gives z*sin(k)/(z^2-2*z*cos(k)+1) as the result.
Solutions of linear difference equations using z-transforms are very similar to the
techniques for solving differential equations using Laplace transforms. Just as the derivative
has a Laplace transform that converts the differential equation into an algebraic equation, the
z-transform of f_{k+1}, k = 0, 1, ..., n, ..., converts the difference equation into an algebraic
equation. To see that this is so, assume that the z-transform of f_k, k = 0, 1, ..., n, ..., is
F(z). Then the z-transform of f_{k+1}, k = 0, 1, ..., n, ..., is

Σ_{k=0}^∞ f_{k+1} z^k = f_1 + f_2 z + f_3 z^2 + ··· = z^{-1}F(z) - z^{-1}f_0 .
Notice that this is the same answer as we got when using Maple above. From this, we can
see why Simulink uses 1/z as the notation for the “Unit Delay.”
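The shift property can be verified numerically using the exponential sequence from above; this sketch (Python, truncating the infinite sums far out) checks that the transform of f_{k+1} equals z^{-1}F(z) - z^{-1}f_0:

```python
# Shift property check with f(k) = alpha^k, so f0 = 1.
z, alpha, N = 0.2, 0.7, 400
F = sum(alpha ** k * z ** k for k in range(N))               # transform of f(k)
shifted = sum(alpha ** (k + 1) * z ** k for k in range(N))   # transform of f(k+1)
print(abs(shifted - (F - 1) / z) < 1e-12)  # True: equals (F(z) - f0)/z
```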
4.1.2 Fibonacci (Again) Using z-Transforms

Let us use the z-transform to solve the Fibonacci difference equation. We use the unit delay
z-transform above twice: the first application gives the z-transform of f_{k+2}, and the second gives
the transform of f_{k+1}. The z-transform of the Fibonacci equation is therefore

z^{-2}F(z) - z^{-2}f_0 - z^{-1}f_1 = z^{-1}F(z) - z^{-1}f_0 + F(z).

Solving for F(z) in this expression gives

(z^{-2} - z^{-1} - 1)F(z) = z^{-1}f_1 + z^{-2}f_0 - z^{-1}f_0 ;

therefore, with the initial values f_0 = 1 and f_1 = 2, we have the final algebraic equation for F(z):

F(z) = (2z^{-1} + z^{-2} - z^{-1}) / (z^{-2} - z^{-1} - 1) = z^{-1}(z^{-1} + 1) / (z^{-2} - z^{-1} - 1).
Now we can factor the denominator of the function F(z) and write the right-hand side of
the above as a partial fraction expansion. The roots of the denominator polynomial (viewed
as a quadratic in z^{-1}) are φ_1 = (1 + √5)/2 and φ_2 = (1 - √5)/2 (note that φ_2 = 1 - φ_1),
so the partial fraction expansion is

F(z) = A/(z^{-1} - φ_1) + B/(z^{-1} - φ_2).

A and B are determined by using Heaviside's method, wherein the value of A is obtained
by multiplying the left and right sides of the above expression by z^{-1} - φ_1 and then setting
the value of z^{-1} = φ_1, and similarly for B. The inverse z-transform for each of these
terms comes from the transforms above. There is a lot of algebra involved in this, so let us
just look at the answer (see Example 4.1):

f_n = (1/(2φ_1 - 1)) (φ_1^{n+1} - (1 - φ_1)^{n+1}).
This is the same result demonstrated in NCM [29].
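The closed form is easy to test against the recursion itself; a small Python check (for illustration, with φ_1 computed in floating point and the result rounded to the nearest integer):

```python
# f(n) = (phi1^(n+1) - (1 - phi1)^(n+1)) / (2*phi1 - 1), phi1 = (1 + sqrt(5))/2.
phi1 = (1 + 5 ** 0.5) / 2

def fib_closed(n):
    return round((phi1 ** (n + 1) - (1 - phi1) ** (n + 1)) / (2 * phi1 - 1))

print([fib_closed(n) for n in range(1, 11)])  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```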
There is a connection between z-transforms and Laplace transforms that we will
develop later, but first let us look at practical applications of difference equations. Modern
technology such as cell phones, digital audio, digital TV, high-definition TV, and so on,
depends on taking an analog signal, processing it to make it digital, and then doing something
to the signal to make it easier to send and receive. Digital filtering permeates all of this
technology.
4.2 Digital Sequences, Digital Filters, and Signal Processing
The advent of digital technology for both telephone and audio applications has made digital
signal processing one of the most pervasive mathematical techniques in use today. The methods used are particularly easy to simulate with the digital blocks in Simulink’s “Discrete”
library. To understand what these blocks do to a digital sequence, we need to understand
the mathematics that underlies discrete time signal processing and digital control.
4.2.1 Digital Filters, Using z-Transforms, and Discrete Transfer Functions
To start, we will work with the exponential digital sequence we created above (see
Section 4.1.1). Assume that we are going to process a digital sequence f_k using the sequence
α^k. The digital filter is then

y_{k+1} = αy_k + (1 - α)f_k .

In this difference equation, the sequence f_k is the signal to be processed (where f_k is the
value of the signal f(t) at times kΔt as k, an integer, increases from 0), and the sequence y_k
is the processed result. The simplest way to solve this equation is to use induction. Starting
at k = 0, with the initial condition y_0, we get the value of y_1 as

y_1 = αy_0 + (1 - α)f_0 .
Now, with y_1 in hand, we can compute y_2 by setting k = 1 in the difference equation for
the filter. The result is as follows:

y_2 = αy_1 + (1 - α)f_1 = α(αy_0 + (1 - α)f_0) + (1 - α)f_1 .

Thus,

y_2 = α^2 y_0 + α(1 - α)f_0 + (1 - α)f_1 .

If we continue iterating the equation like this, a pattern rapidly emerges and can be used to
write the solution to the equation for any k. (Verify this assertion by continuing to do the
iteration.) This solution is

y_k = α^k y_0 + (1 - α) Σ_{j=0}^{k-1} α^{k-1-j} f_j .
Notice that α^k, the sequence we wanted, multiplies the initial condition, and its powers
weight each term of the summation.
We now use induction to show that this is indeed the solution. Remember that a proof
by induction follows these steps:
• Verify that the assertion is true for k = 0.
• Assume that the assertion is true for k, and show that it is then true for k + 1.
Because of the way that we generated the solution, it is clear that it is true for k = 0.
So next, assume that the solution above (for the index k) is true, and let us show that it is
true for k + 1.
From the difference equation y_{k+1} = αy_k + (1 - α)f_k, we substitute the postulated
solution for y_k to get

y_{k+1} = αy_k + (1 - α)f_k
        = α(α^k y_0 + (1 - α) Σ_{j=0}^{k-1} α^{k-1-j} f_j) + (1 - α)f_k
        = α^{k+1} y_0 + (1 - α) Σ_{j=0}^{k} α^{k-j} f_j .
This is exactly the solution that we postulated with the index at k + 1. Thus, by induction,
this is the solution to the difference equation.
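The induction result can also be spot-checked numerically; this sketch (Python, with arbitrary random input data, purely illustrative) iterates the difference equation and compares the result with the closed-form solution:

```python
import random

# Compare direct iteration of y(k+1) = a*y(k) + (1-a)*f(k)
# with the closed-form solution at k = K.
random.seed(1)
alpha, y0, K = 0.9, 0.5, 20
f = [random.random() for _ in range(K)]

y = y0
for k in range(K):
    y = alpha * y + (1 - alpha) * f[k]

y_closed = alpha ** K * y0 + (1 - alpha) * sum(
    alpha ** (K - 1 - j) * f[j] for j in range(K))
print(abs(y - y_closed) < 1e-12)  # True
```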
Notice that one way of thinking about discrete equations in Simulink is that the simulation
implements the induction algorithm: it uses the definition of the discrete process, starts at
k = 0, and iterates until it reaches the nth sample.
If we take the z-transform of the difference equation y_{k+1} = αy_k + (1 - α)f_k, we get

z^{-1}Y(z) - z^{-1}y_0 = αY(z) + (1 - α)F(z).

Solving this for Y(z) gives

Y(z) = (z^{-1}/(z^{-1} - α)) y_0 + ((1 - α)/(z^{-1} - α)) F(z)
     = (1/(1 - αz)) y_0 + ((1 - α)z/(1 - αz)) F(z).
Figure 4.4. Generating the data needed to compute a digital filter transfer function. (A Sine Wave source feeds a Gain of 1-alpha into a Unit Delay (1/z); a Gain of alpha closes the feedback loop, and a To Workspace block sends the signal to MATLAB as simout.)

By comparing the z-transform above with the solution, we can conclude that the inverse
z-transform of (z/(1 - αz)) F(z) is the convolution sum Σ_{j=0}^{k-1} α^{k-1-j} f_j. Note that this says that when
a sequence whose z-transform is F(z) is used as an input to a digital filter whose z-transform
is H(z) (called the discrete transfer function; for this filter its value is H(z) = (1 - α)z/(1 - αz)),
the product H(z)F(z) has an inverse transform that is the convolution sum. This result,
called the convolution theorem, provides the rationale for the Simulink notation of using
the z-transform of the filter inside a block. The notation implies that the output of the block
is the transfer function times the input to the block, even though the output came from the
difference equation and the result is a convolution sum (when the system is linear). This
slight abuse of notation allows for clarity in following the flow of “signals” in the Simulink
model because it maintains the operator notation even when the block contains a transfer
function.
One of the major uses of digital filters is to alter the tonal content of a sound. Since
music consists of many tones mixed together in a harmonic way, it is useful to see what a
digital filter does to a single tone. Therefore, we use Simulink to build a model that will
generate the output sequence y_k when the input is a single sinusoid at frequency ω. Thus, we
assume that f_k = A sin(ωkΔt), where Δt is the sample time. The process of converting
an analog signal to a digital number is “sampling.” All digital signal processing uses some
form of sampling device that does this analog to digital conversion.
The discrete elements in the Simulink library handle the sampling process automatically.
Try building a Simulink model that filters an analog sine wave signal with the first
order digital filter y_{k+1} = αy_k + (1 - α)f_k before you open the model in the NCS library.
To run the model in the NCS library, type Digital_Filter at the MATLAB command line. Figure 4.4 shows the model.
In this model, the sampling of the sine wave occurs at the input to the Unit Delay block.
The sample time is set in a “Block Parameters” dialog box (opened by double clicking on
the Unit Delay block). In this dialog, the sample time was set to delta_t (an input from
MATLAB that is set when the model opens). This illustrates an important attribute that
Simulink uses. After sampling the signal, all further operations connected to the block that
does the sampling treat the signal as sampled (discrete). Thus, the Gain block operates on
the sampled output from the Unit Delay, and the addition occurs only at the sample times.
The dialog also allows setting the initial condition for the output. (We assume that the initial
condition is zero, the default value in the dialog box.)
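What the model measures can be previewed with a direct iteration of the filter; the sketch below (Python, using the same parameters as the model, α = 0.9 and Δt = 1 ms, but not the NCS model itself) estimates the steady-state output amplitude at two frequencies:

```python
import math

alpha, dt = 0.9, 1e-3   # filter constant and 1 ms sample time, as in the model

def output_amplitude(omega, n=20000):
    """Iterate y(k+1) = a*y(k) + (1-a)*sin(omega*k*dt); return the peak
    output over the second half of the run (after the transient dies out)."""
    y, peak = 0.0, 0.0
    for k in range(n):
        y = alpha * y + (1 - alpha) * math.sin(omega * k * dt)
        if k > n // 2:
            peak = max(peak, abs(y))
    return peak

print(output_amplitude(1.0))     # close to 1: a low tone passes unchanged
print(output_amplitude(3000.0))  # about 0.05: roughly 95% attenuation
```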
4.2.2 Simulink Experiments: Filtering a Sinusoidal Signal and Aliasing
The digital filter model in Section 4.2.1 is set up to run 50 sinusoidal signals (each at a
different frequency) simultaneously, and as it runs, it sends the results of all 50 simulations
into MATLAB in the MATLAB structure “simout.”
The values for the various parameters in the model are in the MATLAB workspace
and have the following values:
>> delta_t
delta_t =
   1.0000e-003
%(Sample time of 1 ms or sample frequency of 1 kHz)
>> alpha
alpha =
   9.0000e-001
>> omega
omega =
  Columns 1 through 25
   1.0000e-001   1.2355e-001   1.5264e-001   1.8859e-001   2.3300e-001
   2.8786e-001   3.5565e-001   4.3940e-001   5.4287e-001   6.7070e-001
   8.2864e-001   1.0238e+000   1.2649e+000   1.5627e+000   1.9307e+000
   2.3853e+000   2.9471e+000   3.6410e+000   4.4984e+000   5.5577e+000
   6.8665e+000   8.4834e+000   1.0481e+001   1.2949e+001   1.5999e+001
  Columns 26 through 50
   1.9766e+001   2.4421e+001   3.0171e+001   3.7276e+001   4.6054e+001
   5.6899e+001   7.0297e+001   8.6851e+001   1.0730e+002   1.3257e+002
   1.6379e+002   2.0236e+002   2.5001e+002   3.0888e+002   3.8162e+002
   4.7149e+002   5.8251e+002   7.1969e+002   8.8916e+002   1.0985e+003
   1.3572e+003   1.6768e+003   2.0717e+003   2.5595e+003   3.1623e+003
The values for the 50 frequencies, the sample time for the filter, and the value of alpha are
stored as part of the model through a callback set by the Model Properties dialog.
The ability to cause calculations in MATLAB to run when the model opens or when
other Simulink actions occur is a feature of Simulink that you should understand.
After you have opened the model, go to the File menu and select “Model Properties.”
A Model Properties window will open, allowing you to enter and/or view information about
the model. It also allows you to select actions that occur at various events during the model
execution.
There are four tabs across the top of this window, denoted “Main,” “Callbacks,”
“History,” and “Description.” Figure 4.5 shows two of the tabs in the dialog. The window
opens, showing the contents of the Main tab. The Main tab is the top level of the window. It
shows the model creation date and the date we last saved it. It also shows a version number
(every time the model is changed or updated in any way, this number changes) and whether
or not the model has been modified. The second tab in the window is the Callbacks tab. In
this section, the user can specify MATLAB commands to execute whenever the indicated
action occurs. The possible actions are as follows.
Figure 4.5. Adding callbacks to a Simulink model uses Simulink’s model properties
dialog, an option under the Simulink window’s file menu.
• Model preload function: These commands run immediately before the model opens
for the first time. Note that it is here that the values of omega, the sample time
delta_t, and alpha are set. The values for omega are provided by the MATLAB
function “logspace,” which creates a set of equally spaced values of the log (base 10)
of the output (in this case, omega), where the two arguments are the lowest value
(here it is 10−1 ) and the highest value (3 × 103 here). The value set for the sample
time is 0.001 sec (corresponding to 1 kHz).
• Model postload function: These commands run immediately after the model loads
the first time.
• Model initialization function: These commands run when the model creates its initial
conditions before starting.
• Simulation start function: These commands run prior to the actual start (i.e., immediately after the start arrow is clicked).
• Simulation stop function: These commands run when the simulation stops. In this
case, we have two MATLAB commands to first calculate the maximum value of all 50
signals. (Note that the maximum values are over the structure in MATLAB generated
by the “To Workspace” block.)
• Model presave function: These commands run prior to saving the model.
• Model close function: These commands run prior to closing the model.
The History and Description tabs allow the user to save information about the number
of times the model opens and is changed (the History tab) and for the user to describe the
model.
The StopFcn callback, executed when the simulation stops, is
simmax = max(simout.signals.values);
semilogx(omega,simmax);
xlabel('Frequency "omega" in rad/sec')
ylabel(’Amplitude of Output’);
grid
The plot at the right comes from this code. [The plot: a semilog graph of “Amplitude of
Filtered Output,” from 0 to 1, versus Frequency "omega" in rad/sec, from 10^-1 to 10^4.]
It is a semilog plot that shows what the digital filter does to the amplitudes of a sinusoid
for different frequencies. Notice that all of the low frequencies, tones up to about
6 radians/sec (about 1 Hz), are unaffected by the filter, whereas the frequencies above that
are attenuated to the point where a tone at about 3000 radians/sec (about 500 Hz) is reduced
in amplitude by 95%. For this reason, this filter is a “low pass” filter. This means that low
frequencies pass through the filter unchanged, and it attenuates higher frequencies. Let us
see what happens if we input frequencies beyond 500 Hz. To see this, change the values
in the omega array by typing omega = logspace(2,3.8); at the MATLAB command line.
The second plot shown at the right is the result of rerunning the model with these 50 values
of omega. [The second plot: the same amplitude curve over frequencies from 10^2 to 10^4
rad/sec.] Instead of continuing to reduce the amplitude of the input, the filter’s output
amplitude starts to climb back up until, at the frequency 1/Δt (1 kHz), there is no reduction
in the amplitude of the sinusoid.

Figure 4.6. Sampling a sine wave at two different rates, illustrating the effect of aliasing.
(The figure, titled “Sampled Sine Wave -- Illustrating Aliasing,” plots Signal Amplitude
versus Time from 0 to 0.02 sec for the continuous signal, for samples taken every 0.002 sec,
and for samples taken only at 0.0025 and 0.0125 sec. When the samples are every 0.002
seconds, the signal looks like a sinusoid; sampled only at the times 0.0025 and 0.0125 sec,
the sine seems to be a constant for all times.)
Why does the amplitude not continue to decrease? It is because this is a sampled-data
signal. A close examination of what happens when we sample an analog signal to convert
it into a sequence of numeric values (at equally spaced sample times) reveals why.
Look carefully at the effect of sampling a 100 Hz sinusoid as illustrated in Figure 4.6.
When the sample time is 0.002 seconds (the * in the figure), the values are tracking the
sinusoid as it oscillates up and down in amplitude. However, when the sample time exactly
matches the frequency of the sinusoid (the black squares), the oscillatory behavior of the
sinusoid is lost. For the precisely sampled values shown, the amplitude after sampling is
always exactly 1. Thus, as far as the digital filter is concerned, the frequency of the input
is 0 (it is not oscillating at all). This is what causes the plot of the amplitude of the filtered
output to turn around starting at half of the sample frequency.
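The effect in Figure 4.6 is easy to reproduce; this sketch (Python, illustrative) samples the same 100 Hz sine at the two rates discussed:

```python
import math

# A 100 Hz sine sampled fast (every 0.002 sec) and exactly at its own period.
f_sig = 100.0
fast = [math.sin(2 * math.pi * f_sig * k * 0.002) for k in range(10)]
slow = [math.sin(2 * math.pi * f_sig * t) for t in (0.0025, 0.0125)]

print(max(fast) - min(fast) > 1.5)  # True: the fast samples still oscillate
print(slow)                         # both samples are 1.0: the tone looks constant
```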
We call this effect “aliasing.” It is exactly because of this that the digital standard
for audio CDs is to sample at 44.1 kHz (a sample time of 22.676 microseconds). With
this sample frequency, the audio frequencies up to 22.05 kHz are unaffected by the digital
conversion. Since the human ear does not really hear sounds that are above this frequency,
the aliasing effect is not perceived. There is a caveat, though: since aliasing takes the high
frequencies above 22.05 kHz and reduces them, these frequencies need to be removed (with
an analog filter) before the sampling takes place. The filter that does this is an antialiasing
filter. In Section 4.5.2, we will investigate how one must sample an analog signal to
ensure that aliasing is not a problem, but first we explore some of Simulink’s digital system
tools.
4.2.3 The Simulink Digital Library
Before we leave the subject of digital filters, let us look at some other methods that Simulink
has for digital filtering. You may have noticed as we built the digital models above that
there were other blocks in the Discrete library for digital filters. They are
• Discrete Filter,
• Discrete Transfer Function,
• Discrete Zero-Pole,
• Weighted Moving Average,
• Transfer Function First Order,
• Transfer Function Lead or Lag,
• Transfer Function Real Zero,
• Discrete State Space.
Each of these uses a different, but related, method for simulating the digital filter. The
preponderance of these models uses the z-transform of the filter to create the simulation of
the filter. For example, the Discrete Filter block uses the z-transform in terms of powers
of 1/z. We use this form of the digital filter because in some definitions of the z-transform
the infinite series is in terms of 1/z, not z. To change the numerator and denominator of
the filter transfer function, use the block parameters dialog box that opens when you double
click on the Discrete Filter block. The filter default is 1/(1 + 0.5z^{-1}). The numerator is 1, and
the denominator is entered using the MATLAB notation [1 0.5], which, as the help at the
top of the dialog box shows, is for ascending powers of 1/z (see Figure 4.7). Notice in the
figure that when you change the numerator and denominator, the icon in the Simulink model
changes to show the new transfer function.
To try this block, let us simulate the Fibonacci sequence with it. Set up a new model
and drag the Discrete Filter block into it. Open the dialog by double clicking and enter the
vector [1 -1 -1] as the Denominator. The vector lists the coefficients of the powers of z^{-1}
from the lowest to the highest (the convention this block shares with MATLAB filter
polynomials). Notice that the icon for the Discrete Filter changes to display the
denominator polynomial. Leave the numerator at 1.
Transfer functions do not have initial conditions, so we need to find a way to specify
that the Fibonacci sequence start with initial values of 1 and 2. To do this we use a block
that causes a signal to have a value at the start of the simulation. The block we need is
the IC block, which is in the library called Signal Attributes. Grab an IC block and drag
it into the model. The IC block has an input, but we do not need to use it. You can leave
Figure 4.7. Changing the numerator and denominator in the Digital Filter block
changes the filter icon.
the input unconnected, but every time you run the model, you will receive the annoying
message

Warning: Input port 1 of 'untitled/IC' is not connected.

To eliminate this message, there is a block in the Sources library called “Ground.”
All it does is provide a dummy connection for the block, thereby eliminating the message.
The last thing is to connect a Scope to the output. The IC block has a default value of 1,
which is acceptable because starting the Fibonacci sequence with initial values of 0 and 1
will still generate the sequence.
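To see why this block generates the sequence, note that the transfer function 1/(1 - z^{-1} - z^{-2}) corresponds to the difference equation y_k = y_{k-1} + y_{k-2} + u_k. A Python sketch (illustrative; here a unit pulse at k = 0 plays the role of the IC block's starting value of 1):

```python
# Impulse response of 1 / (1 - z^-1 - z^-2): the Fibonacci sequence.
def discrete_filter(u):
    y = []
    for k, uk in enumerate(u):
        y_prev1 = y[k - 1] if k >= 1 else 0.0
        y_prev2 = y[k - 2] if k >= 2 else 0.0
        y.append(y_prev1 + y_prev2 + uk)
    return y

print(discrete_filter([1.0] + [0.0] * 9))  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
```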
Figure 4.8 shows the model (called Fibonacci2 in the NCS library) and the simulation
results from the Scope block.
Comparing this with the result generated in the earlier version of the Fibonacci model
shows that the results are the same (except for the initial conditions).
The first seven versions of the discrete filter in the list above are all variations on
this block. To understand the subtleties of the differences, spend some time modeling the
Fibonacci sequence using each of these.
Figure 4.8. Using the Digital Filter block to simulate the Fibonacci sequence.
4.3 Matrix Algebra and Discrete Systems
We looked at state-space models for continuous time linear systems in Chapter 2. There is
an equivalent model for discrete systems.
Let us return to the Fibonacci sequence and use Simulink in a different way to show
some attributes of the sequence. Before we do, though, let us look at the state-space version
of the Fibonacci sequence. When we talked about the digital filters in Simulink, we had
a list of eight different ways, the last of which was a state-space model. We did not go
into the details at the time because we had not developed the state-space model. We saw
how to convert a continuous time state-space model into an equivalent discrete model in
Section 2.1.1. We can convert the Fibonacci sequence difference equation into a discrete
state-space model directly. The steps are as follows.
Use the values on the right-hand side of the Fibonacci sequence (f_k and f_{k+1}) as the
components of a vector as follows:

x_k = [f_k ; f_{k+1}].

From the definition of the sequence f_{n+2} = f_{n+1} + f_n, with f_1 = 1 and f_2 = 2, we get the
state-space vector-matrix form from the fact that the first component of x_{k+1} is the second
component of x_k and the second component of x_{k+1} is the left-hand side of the difference
equation. Thus,

x_{k+1} = [f_{k+1} ; f_{k+2}] = [0 1 ; 1 1] x_k ,    x_0 = [1 ; 1] ,    f_k = [1 0] x_k .
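The vector-matrix iteration can be sketched in a few lines (Python with plain lists, for illustration); it reproduces the 144 that the Display blocks in Figure 4.9 show after 11 iterations:

```python
# x(k+1) = A x(k), f(k) = [1 0] x(k), with A = [0 1; 1 1] and x0 = [1; 1].
A = [[0, 1], [1, 1]]
x = [1, 1]
outputs = []
for _ in range(12):
    outputs.append(x[0])                    # f(k) = first component of x(k)
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]   # x(k+1) = A x(k)

print(outputs)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```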
Figure 4.9. State-space models for discrete time simulations in Simulink. (Panel a),
“Simulating the Fibonacci Sequence with the Discrete Library State-Space Block”: a Ground
input drives the Discrete State-Space block x(n+1)=Ax(n)+Bu(n), y(n)=Cx(n)+Du(n), whose
Display reads 144. Panel b), “Simulating the Fibonacci Sequence Using Simulink’s Automatic
Vectorization”: a Constant matrix [0 1; 1 1] feeds a Matrix Multiply block and a Unit Delay
(1/z) in a loop, with a Gain block [1 0]*u driving a Display that also reads 144.)
Notice that the initial conditions are not the values we used previously, but since we start
the iteration in the state-space model at k = 0, we have made the initial value of f0 = 1,
which is consistent with the initial values we used previously (since f2 = f1 + f0 = 2).
We use two methods to create this model. The first model for this state-space description uses the Discrete State Space block in the Simulink Discrete library. The model
is shown in Figure 4.9(a). (It is called Fib_State_Space1 in the NCS library.)
When this model runs, the Display block shows the values for the sequence. The
model has been set up to do 11 iterations. To see other values, highlight the number 11 in
the window at the right of the start arrow in the model window, and change its value to any
number (but be careful: the sequence grows without bound).
To build a Simulink model for this equation that uses Simulink’s vector-matrix capabilities, we use the blocks for addition, multiplication, and gains from the Math Operations
library (as we did when we created the state-space model for continuous time systems).
Figure 4.9(b) shows this model (it is Fib_State_Space2 in the NCS library).
Note that the Constant block from the Sources library provides, as an input, the matrix
on the right side of the state-space equation above. The Multiply and the Gain blocks from
the Math Operations library are used to do the matrix multiply and the calculation of the
output. The dialogs from the two Math Operations blocks are shown in the following figures.
The dialog on the left is for the Matrix Constant block, and as can be seen, the value
for it has been set to the MATLAB matrix [0 1; 1 1]. The dialog on the right is for the
Multiply block. It specifies that there are two inputs through the ** (two products), which
may be increased to as many values as desired (including the use of “/” to denote matrix
inverse). The multiplication type comes from the pull-down menu next to the Multiplication:
annotation. The menu has only two options: Matrix(*) and Element-wise (.*), where the
notation in parentheses indicates the operation as if it were MATLAB notation.
The initial conditions for the iteration are set using the dialog that opens when you
double click the Unit Delay block; it allows the desired initial values to be set. As above,
the initial values are [0 ; 1].

The iteration we are doing creates the Fibonacci sequence in a vector form that will
allow us to show some interesting facts about the sequence. So let us do some exploring.
The matrix [0 1 ; 1 1] can be thought of as [f_0 f_1 ; f_1 f_2], since we assume that the initial
value of f_0 is 0 and the values of f_1 and f_2 are both 1. Therefore, after the first iteration of
the difference equation we have

x_2 = [0 1 ; 1 1] x_1 = [0 1 ; 1 1][f_0 f_1 ; f_1 f_2] x_0
    = [f_1 f_2 ; f_0+f_1 f_1+f_2] x_0 = [f_1 f_2 ; f_2 f_3] x_0 .
Notice that after this iteration the matrix multiplying x_0 is in exactly the same form as
when we started, except that the subscripts are all one greater than in the initial matrix. If the
iteration continues, the matrix form is the same (i.e., the value of x_k after k iterations is
[f_{k-1} f_k ; f_k f_{k+1}] x_0; in Exercise 4.2 we ask you to use induction to prove this).
If we take the determinant of this matrix, we see that it is

det [f_{k-1} f_k ; f_k f_{k+1}] = f_{k+1} f_{k-1} - f_k^2 .

Let us use our model to show that this determinant is

f_{k+1} f_{k-1} - f_k^2 = ±1.
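In exact integer arithmetic the claim is easy to confirm; a Python sketch (illustrative, using f_0 = 0 and f_1 = 1 as in the text):

```python
# Check f(k+1)*f(k-1) - f(k)^2 for the first twenty values of k.
f = [0, 1]
for _ in range(20):
    f.append(f[-1] + f[-2])

dets = [f[k + 1] * f[k - 1] - f[k] ** 2 for k in range(1, 20)]
print(dets)  # alternates: -1, 1, -1, 1, ...
```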
The Simulink library does not contain an explicit block to compute the determinant of a
matrix, so we will use some of the “User Defined Functions” blocks. There are five different
ways that the user may define a function in Simulink. In this library are blocks that
• create embedded code directly from MATLAB instructions,
• call any MATLAB function (this block does not compile the MATLAB code),
• create a C-code function.
There are also variations on these blocks where a function uses a rather arcane but
useful form and a version of the C-code function that uses MATLAB syntax for those who
refuse to learn C. We will use only two of these blocks: the MATLAB function and the
Embedded MATLAB blocks.
The model that was created is shown below. (It is called Fib_determinant in the
NCS library.) In this model, the MATLAB Function block uses the single function “det”
directly from MATLAB to compute the determinant (which is set using the dialog that opens
when you double click on the block), and the “Embedded MATLAB Function” block has
the following simple code to compute the determinant of the 2 × 2 matrix input u. Since
the results of using the MATLAB function and the Embedded MATLAB function are the
same, you might legitimately wonder why there are two blocks. The reason has to do with
calling MATLAB from Simulink. Stand-alone code does not have access to MATLAB,
so the MATLAB Function block will not work. The Embedded MATLAB block, on the
other hand, creates exportable C code, so when the code compiles it works as a stand-alone
application.
function d = det2(u)
% An embeddable subset of the MATLAB language is supported.
% This function computes the determinant of the 2x2 matrix u.
d=det(u);
The model is in Figure 4.10. The first time this model runs, the embedded code
compiles into a dll file that executes each time the model runs.
When the model runs, 13 iterations result, and the determinant plot appears in the
Scope block (see Figure 4.11). We will use this model to illustrate the computational
aspects of finite precision arithmetic. If the number of iterations is set to 39, the determinant
from the MATLAB function shows a value of 2, and the determinant from the embedded
MATLAB function shows a value of 0. If we continue past 39 iterations, say 70, we still
get zero from the embedded function, but we get 2.18e+013 for the determinant from the
MATLAB function. What is happening here?
The problem is that we are at the limit of the precision of the computations. The
values for the Fibonacci sequence are at about 8×10^14, so the products of the values in the
equation fk+1 fk−1 − fk² are on the order of 10^29 and the difference is therefore less than
the least significant bit in the calculation. The built-in MATLAB function starts to fail as
soon as the terms in this function get to about 10^18, whereas the embedded MATLAB block
protects the calculation from underflow by making the value 0. This still works only for so
long; eventually even this strategy fails. (To see this, try 80 iterations; at this number of
iterations none of the determinant values work.)
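The same numerical experiment is easy to reproduce outside Simulink. Here is a Python sketch (our own illustration, not the book's model; the function name is hypothetical) that iterates the Fibonacci difference equation and tracks fk+1·fk−1 − fk² with exact integers and with double precision floats:

```python
def fib_determinants(n, dtype=int):
    # Iterate the Fibonacci difference equation f[k+2] = f[k+1] + f[k]
    # and record the determinant f[k+1]*f[k-1] - f[k]**2 at each step.
    f = [dtype(0), dtype(1), dtype(1)]  # f0, f1, f2
    dets = []
    for k in range(1, n):
        f.append(f[k] + f[k + 1])       # f[k+2]
        dets.append(f[k + 1] * f[k - 1] - f[k] * f[k])
    return dets
```

With dtype=int the determinants alternate between −1 and +1 indefinitely; with dtype=float they stop being ±1 once the products exceed the 53-bit significand of double precision, at roughly 40 iterations, which mirrors the failures the Scope shows past 39 iterations.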
134
Chapter 4. Digital Signal Processing in Simulink
Figure 4.10. A numerical experiment in Simulink: Does fk+1 fk−1 − fk² = ±1?
Figure 4.11. 13 iterations of the Fibonacci sequence show that fk+1 fk−1 − fk² = ±1 (so far).
This exercise is an example of how you could use Simulink to check that some
mathematical result is true. If you wanted to show that fk+1 fk−1 − fk² = ±1, and you did
not have any idea if it were true or not, you could build the model and try it. It would be
immediately obvious that the values are ±1 for the number of iterations you used.
You then could try to prove the result. The ability to go quickly from the pictorial
representation of the premise to numerical results can tell you almost immediately whether
the premise is true.
Now that we know fk+1 fk−1 − fk² = ±1 is true, Exercise 4.3 asks that you use
induction to prove it. (Use the fact that det [0 1; 1 1] is −1.)
4.4 The Bode Plot for Discrete Time Systems
In Section 4.2.2, we created a frequency response plot, but it used a numerical experiment
on the Simulink model, which is ponderous at best. There is a simpler way, based on the
z-transforms that we explore now.
In order to compute the Bode plot for a discrete system we need to understand the
mapping from the continuous Laplace variable to the discrete z-transform variable. To see
this we need to investigate the Laplace transform of a sampled signal f(t) (i.e., a signal that
exists only at the sample times kΔt) when we shift it in time by Δt. If we assume that the
Laplace transform of f(t) is F(s), then
    ∫_0^∞ e^(−st) f(t + Δt) dt = ∫_Δt^∞ e^(−s(τ−Δt)) f(τ) dτ = e^(sΔt) F(s) − e^(sΔt) f(0).
The last step used the fact that the Laplace transform is from t = 0 to infinity, and in the
first line of the equation the integral starts at Δt. Since the function f(t) is discrete, f(0) is
the only value not in the integral when it starts at Δt.
Comparing this to the z-transform derived in Section 4.1.1, we see that z^(−1) = e^(−sΔt),
or z = e^(sΔt).
With this information, we would like to find the discrete (z-transform) transfer function
for the discrete time state-space model. We have seen two ways for developing the discrete
state-space model. When we developed the solution for the continuous time state-space
model in Section 2.1.1, we showed that the result of making the system discrete in time was
the model
    xk+1 = Φ(Δt)xk + Γ(Δt)uk,
yk = Cxk + Duk .
In developing the discrete state-space model of the Fibonacci sequence above, we went
directly from the difference equation to the discrete state-space model. (In this case the
matrices were not determined from the continuous system, and they are not functions of
Δt.) In either case, the form of the equations is the same. Taking the z-transform of this
gives
    zX(z) − zx0 = Φ(Δt)X(z) + Γ(Δt)U(z).
Thus the discrete transfer function of the system, H(z), is the z-transform with the initial
conditions set to zero, so we have

    H(z) = Z{yk}/Z{uk} = C(zI − Φ(Δt))^(−1) Γ(Δt) + D.
The Bode plot of the discrete system from the state-space form of the model comes from
setting z = e^(iωΔt) in the above derivation. That is, we need to compute

    H(e^(iωΔt)) = C(e^(iωΔt) I − Φ(Δt))^(−1) Γ(Δt) + D,

which has exactly the same form as the continuous transfer function except that iω is replaced
by e^(iωΔt). (Remember that Δt is the sample time of the discrete process and is therefore
a constant.) The manipulations of the state-space model for both continuous and discrete
systems in MATLAB are the same, so the connection from Simulink to MATLAB for the
Bode plot calculation is identical. We can go back now to the digital filter example in
Section 4.2.2 above and use the Control System Toolbox to calculate its Bode plot.
Open the digital filter model by typing “Digital_Filter1” at the MATLAB command
line; the model that opens is the same as the model we created in Section 4.2.2, but the
sinusoid input has been deleted since this is not needed to create the Bode plot. As we did
above, under the “Tools” menu select “Control Design” and the submenu “Linear Analysis.”
The “Control and Estimation Tools Manager” will open. In the model, right click on the line
coming from the Input block and select the “Input Point” sub menu under the “Linearization
Points” menu item. Similarly, select “Output Point” (under the same menu) for the line
going to the Output block. Selecting these input and output points causes a small I/O icon
to appear on the input and output lines in the model. In the “Control and Estimation Tools
Manager” GUI, select the Bode response plot for the plot linearization results and then click
the Linearize Model button. The LTI Viewer starts; right click in it to select the Bode plot
under “Plot Types.” The plot of the amplitude and phase of the discrete filter as created
by the Viewer (Figure 4.12) stops at the frequency 3.1416 radians/sec because this is the
frequency at which the Bode plot for the discrete system starts to turn around and repeat.
(This is called the “half sample frequency,” and it is equal to π/Δt.)
Compare this plot with the amplitude plot created by the Simulink model Digital_Filter
in Section 4.2.2. You will see that it is the same. The important difference is that the
computation of the Bode plot using this method is far more accurate (and faster) than using
a large number of sinusoidal inputs as we did there.
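The H(e^(iωΔt)) formula is all the tool is doing numerically. As a scalar illustration (ours, not the book's code; the coefficient form is an assumption loosely modeled on the first order filter of Section 4.2.2), take a digital filter yk+1 = a·yk + (1 − a)·uk, whose transfer function is H(z) = (1 − a)/(z − a), and evaluate it on the unit circle:

```python
import cmath

def freq_response(a, dt, omegas):
    # Evaluate H(z) = (1 - a)/(z - a) at z = exp(i*w*dt); in state-space
    # terms this is Phi = a, Gamma = 1 - a, C = 1, D = 0.
    return [(1 - a) / (cmath.exp(1j * w * dt) - a) for w in omegas]
```

The gain at ω = 0 is exactly 1, and beyond the half sample frequency π/Δt the response repeats, which is why the plot stops there.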
4.5 Digital Filter Design: Sampling Analog Signals, the Sampling Theorem, and Filters
In 1949, C. E. Shannon published a paper in the Proceedings of the Institute of Radio
Engineers (the IRE was an organization that became part of the Institute of Electrical and
Electronics Engineers, or IEEE) called “Communication in the Presence of Noise.” This
landmark paper introduced a wide range of technical ideas that form the backbone of modern
communications. The most interesting of these concepts is the sampling theorem. It gives
conditions under which an analog signal can be 100% accurately reconstructed from its
Figure 4.12. The Bode plot for the discrete filter model developed using the Control
System Toolbox interface with Simulink.
discrete samples. Because it is such an important idea, and because it is so fundamental, it
is very instructive to work through the theorem to understand why, and how, it works.
4.5.1 Sampling and Reconstructing Analog Signals
When we introduced discrete signals above, they were simply a sequence of numbers. In
addition, their z-transform was the sum of the sequence values multiplied by powers of z.
In Section 4.4, we saw that the frequency response results when e^(iωΔt) is substituted
for z (thereby giving H(e^(iωΔt))). When the digital signal is the result of sampling an analog
signal, we need a way of representing this fact. The method must maintain the connection
with the analog process.
Equating a sampled analog signal to a sequence of its values at the sample times loses
the analog nature of the process (and besides is really only one of the mathematical ways
of representing the process). An alternative is to represent the sampled signal as an analog
signal that is a sequence of impulses (analog functions) multiplying the sample values.
Doing this gives an alternate representation of the sampled sequence {sk} as the analog
signal s*(t):

    s*(t) = Σ_{k=0}^∞ sk δ(t − kΔt).
With this definition, we can take the Laplace transform of s*(t) as

    S*(s) = ∫_0^∞ e^(−st) s*(t) dt = ∫_0^∞ e^(−st) Σ_{k=0}^∞ sk δ(t − kΔt) dt.
The integral and the sum commute in the last term above, so

    S*(s) = Σ_{k=0}^∞ ∫_0^∞ e^(−st) sk δ(t − kΔt) dt = Σ_{k=0}^∞ sk e^(−skΔt).
Now to understand the sampling theorem, assume that we have been sampling the signal
for a very long time so that the signal is in steady state. In that case, the Fourier transform
describes the frequency content of the signal. The difference between the Laplace and
Fourier transform is in the assumption that for the Fourier transform the time signal began
at −∞. The steps that created the sampled Laplace transform above are the same, except
that, since the time signal exists for all time, the sum and integral are double sided:
    S*(ω) = ∫_−∞^∞ e^(−iωt) Σ_{k=−∞}^∞ sk δ(t − kΔt) dt = Σ_{k=−∞}^∞ sk e^(−iωkΔt).
Now, the sampling theorem is as follows:
If a continuous time signal s(t) has the property that its Fourier transform is zero for
all frequencies above ωm or below −ωm (the Fourier transform of s(t), S(ω) = 0 for
|ω| ≥ ωm), then we can perfectly reconstruct the signal from its sample values at the
sample times Δt = π/ωm (or if the signal is sampled faster) using the infinite sum

    s(t) = Σ_{k=−∞}^∞ sk · sin(ωm t − kπ) / (ωm t − kπ).
This theorem is critical to all applications that use digital processing since it assures
that after the signal processing is complete the signal may be perfectly reconstructed as long
as the sampling was originally done at a rate at least twice as fast as the highest frequency
in the signal. As an aside, Claude Shannon proved this result, and it was published in the
Proceedings of the I.R.E. in 1949 [39].
The proof of this assertion is straightforward. Refer to Figure 4.13 as we proceed
through the steps in the proof.
The first step is to take the function S(ω) and expand it into an infinite series (using
the Fourier series to do so). The result is

    S_expanded(ω) = Σ_{k=−∞}^∞ Sk e^(−ik2πω/(2ωm)).

The value of Sk comes from the Fourier expansion

    Sk = (1/(2ωm)) ∫_−ωm^ωm S(ω) e^(ikπω/ωm) dω.
[Figure panels: a) Signal and its Sampled Values (s0, s1, s2, …); b) Fourier Transform of the Signal (zero outside −ωm to ωm); c) Result of replicating S(ω) an infinite number of times.]
Figure 4.13. Illustrating the steps in the proof of the sampling theorem.
Since the inverse Fourier transform of S(ω) is the signal s(t) = (1/2π) ∫_−∞^∞ S(ω) e^(iωt) dω, we
get that the value of Sk is (π/ωm) sk. Therefore, S*(ω) is the same as S_expanded(ω), as shown in
Figure 4.13. That is,

    S*(ω) = Σ_{k=−∞}^∞ (π/ωm) sk e^(−ikπω/ωm).
The last step in the proof is now simple. In order to recover the signal we simply need to
multiply the Fourier transform of the sampled signal by the function p(ω) defined by

    p(ω) = 1 for −ωm < ω < ωm,  and 0 elsewhere.
Figure 4.13 shows the rectangular function drawn on top of the transform of the sampled
signal. Clearly, the product is the transform of the original analog signal.
The inverse Fourier transform of p(ω) is the “sinc” function given by

    (ωm/π) · sin(ωm t − kπ) / (ωm t − kπ).
The proof of the result follows immediately because the inverse Fourier transform of the
product of S*(ω) and p(ω) is the convolution of this function with the inverse transform of
S*(ω), which is just the sum of impulses defining s*(t) that we started with.
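The reconstruction sum is easy to try numerically. The following Python sketch (ours; a finite record of samples stands in for the infinite sum) rebuilds a band-limited signal from its samples:

```python
import math

def sinc_reconstruct(samples, dt, t):
    # s(t) = sum_k s_k * sin(w_m*t - k*pi) / (w_m*t - k*pi),  w_m = pi/dt
    wm = math.pi / dt
    total = 0.0
    for k, sk in enumerate(samples):
        x = wm * t - k * math.pi
        total += sk if abs(x) < 1e-12 else sk * math.sin(x) / x
    return total

# Sample a 5 Hz sine at 100 Hz, well above the 10 Hz minimum rate.
dt = 0.01
samples = [math.sin(2 * math.pi * 5 * k * dt) for k in range(2001)]
```

At the sample instants the sum reproduces the samples exactly, and away from the ends of the record (where truncating the infinite sum hurts) it agrees with the original sine to a few parts in a thousand.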
[Model blocks: a Sum of Sines Input generating 15 sine waves at frequencies given by freqs (set by a callback when the model loads), a View Signals scope, an A/D Converter (sampled at 0.1 msec) producing the Digital Signal, a D/A Filter with transfer function 2*pi*5000/(s + 2*pi*5000), the Output, and a D/A Error scope.]
Figure 4.14. Simulation illustrating the sampling theorem. The input signal is 15
sinusoidal signals with frequencies less than 500 Hz.
The function p(ω) eliminates all frequencies in the sampled signal above the frequency
ωm , and we have already seen that such a filter is a low pass filter. This is the main reason
we need to create good low pass analog filters.
For various technical reasons, it is impossible to build a filter that has a frequency
response that is exactly p(ω). This means that we are always searching for a good approximation. The next section shows that there are numerous ways to come up with an approximation and introduces the design of analog filters. These filters are prototypes of digital
filters, so this section is important for both the implementation of the sampling theorem and
for digital filter design, but first let us do some simple simulations to illustrate these concepts.
Because of the sharp discontinuity in the filter function, the ideal low pass filter
represented by the function p(ω) above is not the result of using a finite dimensional system
(i.e., a system represented by a finite order differential equation, having a transfer function
whose magnitude is |H (iω)|). Therefore, it is necessary to figure out how best to create
a good approximation. As you should guess, the approximation must not have the sharp
corner, so the transition from the region where the gain of the filter is 1 to the region where
the gain is 0 must be smooth.
In Figure 4.14, we show a Simulink model that uses the sine block (from the Sources
library) to create 15 continuous time sinusoidal signals. The signals are then summed
together and sampled at 0.1 msec, using a sample and hold operation (the “zero order hold”
block in the Discrete library). Once again, you should try to create this model from scratch
using the Simulink library rather than loading it from the NCS library using the MATLAB
command Sampling_Theorem.
The approximation we use for the low pass filter is the simple first order digital filter
from Section 4.2.2. The frequency for the filter has been set at 5 kHz.
The sinusoidal frequencies in the simulation come from the MATLAB vector freqs,
whose values are: 65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125, 180.2, 202.1, 223.3, 230.3,
310.3, and 405.2 Hz. Because the sample frequency of 10 kHz is 20 times the highest
frequency in the signal, the digital to analog (D/A) reconstruction of the signal using the
simple first order filter is not too bad. (The error is about 8%, as can be seen from the D/A
Error Scope.)
Table 4.1. Using low pass filters. Simulating various filters with various sample times.

    Sample Time   Filter Type    Filter Freq.   Standard Dev. of Error Observed
    0.001         First Order    1000 Hz        2.093
    0.001         Second Order   1000 Hz        1.902
    0.0001        First Order    1000 Hz        0.872
    0.0001        Second Order   1000 Hz        0.632
This simulation can investigate changes in the sample frequency relative to the frequencies
in the signal. As a first numerical experiment, try changing the sampling frequency
to 500 Hz. (Make the sample time in the A/D Converter block 0.002.) Make note of the
magnitude of the error. Next, change the filter to 1 kHz by changing the parameter values
in the Transfer Function block to 2*pi*1000 in both the numerator and denominator. Note
the error for this simulation. Remember that this filter is not a good approximation to the
ideal low pass filter. (If you go back to Section 4.2.2 and look at the frequency response
plot for this filter, you can see that for this filter the amplitude is reduced 50% at 1000 Hz
and is reduced only 90% at 10 kHz.)
A better filter is one that has a more rapid reduction in amplitude. Higher order filters
will do this. For example, double click the filter block and change the filter parameters
to the values in the figure above. This makes the transfer function for the filter

    H(s) = (2π·1000)² / (s² + 1.414·(2π·1000)·s + (2π·1000)²).

As we will see next, this is an example of a Butterworth filter.
Run the simulation using 1 kHz sampling and record this error. You should see very
little difference between the two filters when the sample time and the filter functions are
set to what the sampling theorem says are the appropriate values. Again, this is because
we are not filtering the signal with the ideal filter required by the theorem. To see that the
filters work well when the sampling is done at a higher rate than the minimum value of the
sampling theorem, change the sample time back to 0.1 msec, as we started with, and rerun
the simulation. Table 4.1 summarizes our results from these simulations.
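These experiments can be mimicked without Simulink. The update below is the exact zero order hold discretization of the first order filter y′ = ωc(u − y); this Python sketch is our own, with hypothetical names:

```python
import math

def zoh_first_order(samples, dt, cutoff_hz):
    # Between samples the held input is constant, so the filter state
    # moves exactly as y[n+1] = u[n] + (y[n] - u[n]) * exp(-wc*dt).
    wc = 2 * math.pi * cutoff_hz
    decay = math.exp(-wc * dt)
    y, out = 0.0, []
    for u in samples:
        y = u + (y - u) * decay
        out.append(y)
    return out
```

Feeding it the sampled sum of sines and comparing against the (suitably delayed) input gives error trends like those in Table 4.1: the error shrinks as the sample rate rises.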
Figure 4.15. Specification of a unity gain low pass filter requires four pieces of
information.
4.5.2 Analog Prototypes of Digital Filters: The Butterworth Filter
We did some simple analog low pass filter design in the previous section to illustrate the
conversion of a digital to an analog signal. We also need to have analog filters as follows.
• Since the Fourier transform of a signal that we want to sample must have no frequencies above half the sample frequency, a low pass antialiasing filter used prior to
sampling ensures that this is true.
• We can design a digital filter using the analog filter as the starting point and then
convert the result using some type of transformation.
So, with these reasons as motivation, let us explore some analog filters.
Remember that the transfer function of a filter is H(s)|_(s=iω) = |H(iω)| e^(i∠H(iω)). Three
“bands” on the amplitude plot specify this filter, as illustrated in Figure 4.15. The first of
these bands is the “pass band,” which represents the region where the signal is not attenuated
(the region whose frequencies are below ωm ). The second region is the “transition band,”
where the amplitude gradually reduces to an acceptable minimum. (Because the frequency
response of a finite dimensional system is the ratio of polynomials, it is impossible for the
amplitude to be exactly zero except at an infinite frequency.) The last region is the “stop
band,” where the filter is below the acceptable value. In the figure, there are three additional
parts to the specification: the acceptable gain change over the pass band, the acceptable
amplitude of the signal in the stop band, and the frequency at the transition band limit.
These parameters are ε1, ε2, and ωm_acceptable, respectively, and they specify bounds on the
transfer function as follows:

In the pass band, 1 − ε1 ≤ |H(iω)| ≤ 1 for |ω| ≤ ωm.

In the stop band, |H(iω)| ≤ ε2 for |ω| ≥ ωm_acceptable.
The analog low pass filters that are most frequently used are the Butterworth, Chebyshev, and elliptic filters. The first of these, the Butterworth filter, comes from making the
filter transfer function as smooth as possible in each of the regions. The filter has as many
of the derivatives of its transfer function as possible equal to 0 at frequencies 0 and infinity.
Since the magnitude of the transfer function is |H(iω)| = √(H(iω)H(−iω)), it is usual
to use the square of the magnitude in specifying the filter transfer function (eliminating the
square root). Therefore, the Butterworth filter is
    |H_Butter(iω)|² = 1 / (1 + (ω/ωm)^(2n)).
This function has the property that its derivatives (up to the (2n − 1)st) are zero at ω = 0
and at ω = ∞. Exercise 4.3 asks that you show that this assertion is true. The Laplace
transform for the filter can be determined by using the fact that
    |H(iω)|² = H(s)H(−s)|_(s=iω).
Therefore, the filter transfer function is the result of factoring

    H(s)H(−s) = 1 / (1 + (s/(iωm))^(2n)).
The poles of the transfer function are the roots of the denominator, given by the equation
(s/(iωm))^(2n) = −1; equivalently, the 2n roots of this polynomial are the 2n complex roots of
−1 given by

    sk = (−1)^(1/(2n)) (iωm).
These poles are equally spaced around a circle with radius ωm . The values with negative real
parts define H (s), and those with positive real parts become H (−s), so the transfer function
for the desired Butterworth filter is stable. It is easy to create this filter in MATLAB. We
have included in the NCS library an M-file called butterworthncs that uses the above to
determine the Butterworth filter poles and gain for any filter order. The code performs the
calculations shown below:
function [denpoles, gain] = butterworthncs(n, freq)
% The Butterworth filter transfer function.
% Use the zero-pole-gain block in Simulink with the kth pole given by
% the formula:
%    p_k = i*(-1)^(1/(2n)) * 2*pi*freq,  keeping poles with real part < 0
% where:
%    freq = the Butterworth filter design frequency in Hz
%    n    = order of the desired filter
%    i    = sqrt(-1)
denroots = (i*roots([1 zeros(1,2*n-1) 1]))*2*pi*freq; % BW poles = roots
denpoles = denroots(find(real(denroots)<=0));         % of -1 in left plane
%
% Zero out the imaginary part of the real pole (there is only one real
% pole, and then only when n is odd. Because of the precision
% in computing roots, the imaginary part will not be zero):
index           = find(abs(imag(denpoles))<1e-6);     % imag. part is 0
denpoles(index) = real(denpoles(index));              % when < eps.
%
% To ensure the steady state gain is 1, the Butterworth filter gain
% must be the radius of the Butterworth circle raised to the nth power:
gain = (2*pi*freq)^n;
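For readers working outside MATLAB, the same pole placement can be sketched in Python (our own translation of the formula above, not the book's file):

```python
import cmath
import math

def butterworth_poles(n, freq):
    # The 2n-th roots of -1 are exp(i*pi*(2k+1)/(2n)); rotate by i and
    # scale by the circle radius wm = 2*pi*freq, then keep the
    # left-half-plane roots so the resulting filter is stable.
    wm = 2 * math.pi * freq
    roots = [cmath.exp(1j * math.pi * (2 * k + 1) / (2 * n))
             for k in range(2 * n)]
    stable = [1j * wm * r for r in roots if (1j * wm * r).real < 0]
    gain = wm ** n  # makes the steady state gain of the filter 1
    return stable, gain
```

For example, butterworth_poles(2, 1.0) gives the familiar second order pair at wm·e^(±i3π/4) on the circle of radius wm.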
To exercise this code, let us design an eighth order Butterworth filter for the problem
we investigated above. The poles and gain result from typing the following command in
MATLAB:
[BWpoles, BWGain] = butterworthncs(8, 10000)
Now let us filter the same signal we filtered above, but this time we build a Simulink
model using these poles and gain. A new model similar to the Sampling_Theorem model
above is in Figure 4.16. (This model is Butterworth in the NCS library.)
The only difference in this model is that instead of the transfer function block, the filter
comes from the zero-pole-gain block in the Simulink continuous library. The dialog values
for this filter are set using the output of the M-file butterworthncs. (The output of this
M-file is the gain, BWgain, and the poles, BWpoles.) We designed the filter for a sample
frequency of 10000 Hz. Since the maximum frequency contained in the signal is only
about 500 Hz, this filter should do a good job of reconstructing the sampled signal. It does,
because the root mean square (rms) error in the reconstruction is only 0.0761 (significantly
smaller than the rms value for the simple second order filter we used above). Notice that
the model above contains a delay block (called transport delay in the model) that delays
the input signal by 1.1 msec to account for the Butterworth filter’s phase shift that delays
[Model blocks: the Sum of Sines source (15 sine waves at frequencies given by freqs, set by a callback when the model loads), an A/D Converter (sampled at 0.1 msec), the Butterworth filter of order n as a zero-pole-gain block with gain BWgain and poles BWpoles(s) (computed using the M-file butterworthncs), a Transport Delay on the Analog Input, a Filtered Signal scope, and an Error scope. The callback sets the source data: frequencies freqs = [65 72.8 74.7 82 89.3 91 99.5 103.2 125 180.2 202.1 223.3 230.3 310.3 405.2] Hz, phases Φ = [0 0.0011 −0.0014 −0.0013 0.0006 0.0002 0.0037 −0.0006 0.0044 0.0030 −0.0039 −0.0034 −0.0011 −0.0004 0], unit amplitudes, and sample time 10^−5 sec.]
Figure 4.16. Butterworth filter for D/A conversion using the results of the M-file
“butterworthncs.”
the output signal. By delaying the input before the error is computed, this filter lag is
accommodated. (Remember that in signal processing, time lags in the reconstruction of the
data are acceptable because there is usually a large spatial difference between the source
and the site of the reconstruction. Think of transmission via the internet or via a radio link.)
We do not have to go through these machinations every time we want to design a filter.
Simulink makes it easy to add new tools. These tools are blocksets, and the first of
them that we will look into is the Signal Processing blockset, which contains built-in blocks
that allow any analog (or digital) filters to be created. In the next section, we experiment with
analog and digital filter blocks using the Signal Processing Blockset to design Butterworth
and other filters.
4.6 The Signal Processing Blockset
Analog filters, digital filters and many other signal-processing components are available in
the Simulink add-on tool called the Signal Processing Blockset. This tool has many unique
features that allow analog and digital filters to be designed, modeled, and then coded. (Signals can be analog, digital, or mixed analog-digital; digital signals can have multiple sample
times.) Among the features of the tool is the ability to capture a segment of a time series
(in a buffer) for subsequent “batch” processing. The tool also permits processing temporal
data using “frames,” where the computations wait until a frame of data is collected, and then
the processor operates on the entire frame in a parallel operation. The tool also allows digital
signal processing models and components to use limited-precision, fixed-point computations.
This last feature couples with a code generation method that produces C code and HDL
output for special purpose signal processing applications on a chip. (The code comes directly
from the Simulink model.) We will not
describe how we do this in this chapter, but we will touch on some of the features in Chapter 9.
4.6.1 Fundamentals of the Signal Processing Blockset: Analog Filters
To understand the capabilities and become
familiar with the Signal Processing Blockset, open the library. The figure at the right,
a snapshot of the Library browser, shows
ten different categories of model elements.
They are Estimation, Filtering, Math Functions, Quantizers, Signal Management, Signal Operations, Sinks, Sources, Statistics,
and Transforms. Explaining the details of
many of these blocks would take us beyond
the scope of this book, so we will leave
out the Estimation library and some of the
blocks in the Filtering and Math Functions
libraries.
We start our discussion by navigating
the Filtering library. Let us use the model
we created above, but this time we add filtering blocks from the Signal Processing library.
Open the model butterworth_sp
(shown in Figure 4.17) that now includes
a block from the Filtering library that automatically designs an analog filter of any
type. The list of possible filters goes way
beyond the Butterworth that we have been
exploring so far. When you open the model,
it will contain the Butterworth filter we
designed using the butterworthncs M-file
along with the Filter block from the Signal
Processing Blockset. This block contains
the design specifications for the same Butterworth filter.
This model illustrates another powerful feature of Simulink. In the process of making
a change to the model (perhaps a change that does nothing but simplify it, as we are doing
here), we need to worry that the change might introduce an error. The simple expedient
of comparing the two calculations using the summation block (to compute their difference)
creates an immediate and unequivocal test for the accuracy of the calculations. The difference, displayed on a Scope, should be on the order of the numerical precision. The result, displayed
in a Scope block, contains both the Signal Processing Blockset results and the difference
between the NCS library design and the Signal Processing Blockset design
of the Butterworth filter. The Scope we call “Compare SP block with NCS block” gives
the plot shown in Figure 4.17(b). As can be seen in this figure, the difference between the
[Figure 4.17(a): Simulink Model with the Butterworth Filter from the Signal Processing Blockset and the Filter Designed using butterworthncs. An Analog Filter Design block (“butter”) from the SP Blockset runs alongside the NCS zero-pole-gain Butterworth filter (gain BWgain, poles BWpoles(s), computed using the NCS M-file butterworthncs) on the same sum-of-sines input, A/D Converter (sampled at 0.1 msec), and Transport Delay as before; the “Compare SP block with NCS block” Scope displays the SP Blockset result and the difference between the NCS and SP Library Butterworth filters. A callback when the model opens sets the same source data as in Figure 4.16.]
[Figure 4.17(b): Reconstruction of an Analog Signal using a Filter Designed with the Signal Processing Blockset: the SP Blockset result (top) and the difference between the NCS and SP Library Butterworth filters (bottom, on the order of 10^−14).]
Figure 4.17. Using the Signal Processing Blockset to design an analog filter.
two implementations of the filter is mostly less than 2 × 10^−14, which is well within the
numerical tolerances of the calculations.
We can now experiment with the entire range of filters that the Signal Processing
Blockset offers. Note that as the filter changes, the icon for the filter shows a plot of the
Table 4.2. Comparison of the error in converting a digital signal to an analog signal using different low pass filters.

    10th Order Filter Type   Time Delay (milliseconds)   A/D Error
    Butterworth              1.10                        0.0761
    Chebyshev I              1.47                        0.4576
    Chebyshev II             0.58                        0.1859
    Elliptic                 0.49                        0.3393 to 0.5108
    Bessel                   0.132                       0.274
frequency response of the transfer function, along with the name of the filter type. This
provides a direct visual cue to the type of filter so that when we review it in the future, we
know precisely the original intent (and if for any reason the picture and/or the type do not
match the original intention or the specifications, it is readily apparent).
Five different filters are available from the analog filter design block. Try each of
these filters in turn and record the error in the reconstruction of the original analog signal.
We did this experiment, with the results tabulated in Table 4.2. (Each filter has a different
time lag, so modify the delay time in the Transport Lag block, as shown, to account for
this.) The elliptic filter's stop band ripple ranges over 0.1–2 dB, hence the range of errors in the table.
4.6.2 Creating Digital Filters from Analog Filters
We have seen how one can use the state-space model to create a digital system that has the
same response as the analog system when the input to the system is a step. In Section 2.1.1,
the discrete time solution of a continuous state variable model was determined to be
    x_{k+1} = Φ(Δt) x_k + Γ(Δt) u_k,

where

    Φ(Δt) = e^{AΔt},    Γ(Δt) = ∫_0^{Δt} e^{Aτ} B dτ,
and the MATLAB file c2d_ncs computes these matrices. We can use these methods of
forming a discrete time system with a response that is equivalent to the analog system to
make a digital filter from the analog filter. In this instance the responses are equivalent
in the sense that both the analog and digital filters will have the same step response. (In
Exercise 4.5 you will show that this is true.) Digital filters are usually equivalent (using
this approach) to the analog Butterworth, Chebyshev, Bessel, or elliptic filters. The other
equivalence that one can have is “impulse equivalence,” where the impulse responses of
the analog and digital filter are the same (but not the step responses).
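For a scalar system the step-equivalent construction can be written out directly. The sketch below (illustrative only; the cutoff and sample time are arbitrary, and the book's c2d_ncs handles the general matrix case) discretizes a first-order analog low-pass and checks that the digital filter reproduces the analog step response exactly at the sample instants:

```python
import math

# Step-invariant discretization of the first-order analog low-pass
#     x' = -a*x + a*u        (unity DC gain, cutoff a rad/s),
# following the text's x_{k+1} = Phi(dt)*x_k + Gamma(dt)*u_k.  For this
# scalar system the matrices reduce to
#     Phi(dt)   = exp(-a*dt)
#     Gamma(dt) = integral_0^dt exp(-a*tau)*a dtau = 1 - exp(-a*dt).

def discretize_first_order(a, dt):
    phi = math.exp(-a * dt)
    return phi, 1.0 - phi

def digital_step_response(a, dt, n):
    phi, gamma = discretize_first_order(a, dt)
    x, out = 0.0, []
    for _ in range(n):
        out.append(x)
        x = phi * x + gamma * 1.0     # unit step input u_k = 1
    return out

# Step equivalence: the digital filter matches the analog step response
# 1 - exp(-a*t) exactly at the sample instants t = k*dt.
a, dt, n = 2.0, 0.05, 40
y = digital_step_response(a, dt, n)
exact = [1.0 - math.exp(-a * k * dt) for k in range(n)]
err = max(abs(u - v) for u, v in zip(y, exact))
```

This is the sense of "step equivalence" used in the text: agreement is exact at the samples, not between them.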
A third approach for the creation of an equivalent digital filter uses an approximation of the mapping between the Laplace and z-transform variables. The approximation is the bilinear transformation given by s = (2/Δt)(1 − z^{-1})/(1 + z^{-1}). This mapping in the complex plane is one-to-one, so the inverse mapping is unique and is (noting that this form is also bilinear)

    z = (1 + (Δt/2)s) / (1 − (Δt/2)s).
The denominator in this mapping is of the form 1/(1 − x) = 1 + x + x^2 + ⋯, so the value of z is

    z = (1 + (Δt/2)s)(1 + (Δt/2)s + ((Δt/2)s)^2 + ⋯) = 1 + Δt s + (Δt^2/2)s^2 + ⋯.
This approximation is very close to the Taylor series for e^{sΔt}, the actual mapping for the
Laplace variable to the z variable. However, because the bilinear mapping is one-to-one,
there is no ambiguity when this transformation is used. The powerful feature of this transformation is that one frequency exists where the phases of the transfer functions for both
the analog and digital filters are the same. You can select this frequency using a technique
called prewarping. Any of these approaches are selectable in the filter design block from
the Signal Processing Blockset. In the next section, we look at one of these.
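The bilinear substitution can be carried out by hand for a simple case. The sketch below (an illustrative first-order low-pass H(s) = a/(s + a) with arbitrary values; the Blockset dialog performs the same substitution, optionally with prewarping) shows that the transformation preserves the DC gain:

```python
import math

# Bilinear-transform discretization of H(s) = a/(s + a), substituting
# s = (2/dt)*(1 - z^-1)/(1 + z^-1) as in the text.

def bilinear_first_order(a, dt):
    """Return ([b0, b1], [1, a1]) with H(z) = (b0 + b1*z^-1)/(1 + a1*z^-1)."""
    k = 2.0 / dt
    b0 = a / (k + a)                 # numerator picks up the (1 + z^-1) factor
    a1 = (a - k) / (k + a)
    return [b0, b0], [1.0, a1]

def gain_at(b, a_coef, w, dt):
    """|H(e^{j*w*dt})| for the coefficient lists returned above."""
    zinv = complex(math.cos(-w * dt), math.sin(-w * dt))  # z^-1 on the unit circle
    return abs((b[0] + b[1] * zinv) / (a_coef[0] + a_coef[1] * zinv))

num, den = bilinear_first_order(a=100.0, dt=1e-3)
dc_gain = gain_at(num, den, 0.0, 1e-3)   # the mapping preserves the DC gain of 1
```

Note the zero that the (1 + z^{-1}) factor places at the Nyquist frequency (z = −1), a characteristic artifact of the bilinear method.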
4.6.3 Digital Signal Processing
One of the consequences of using a linear system described by a differential equation to
filter data is the time lag that we encountered in the low pass filters we used in the previous
section. In signal processing, these are “infinite impulse response,” or iir, filters. If it
is necessary to eliminate this time lag, then the filter transfer function needs to be real at
all frequencies. (If it is complex, then the imaginary part of the filter transfer function
corresponds to a phase shift that results in the time lag.) For an iir filter, the only way that
the transfer function can be real is if the poles are symmetric with respect to the imaginary
axis (i.e., for any pole with negative real part, there is an equivalent pole with positive real
part). Since poles in the right half plane (with positive real parts) are unstable, it is clearly
impossible to build an iir filter that processes the signal sequentially and has no phase shift.
The next best attribute we can ask for an iir filter is that its phase be linear. This is possible,
and many filter designs impose this criterion in the development of the filter specification.
There is an alternative to an iir filter: the "finite impulse response," or fir, filter. The fir filter does not require the solution of a differential equation but instead relies on the convolution of the input with the filter response to create the output. The most important attribute of a fir filter is that it can always be designed to have exactly linear phase (by making its coefficients symmetric). Let us explore this attribute with
some Simulink models. The first model we create uses one of the simplest and, for many
applications, most useful of all fir filters. This filter is the moving average filter that computes
the mean of some number of past samples of a signal. The filter is

    y_k = (1/n) Σ_{i=0}^{n-1} u_{k-i}.
[Figure: a) the FIR filter Simulink model — a state-space implementation with A matrix diag(ones(1,n-2),1), b vector [zeros(n-2,1);1], c vector ones(1,n-1)/n, and d gain 1/n, driven by a Band-Limited White Noise block into a Scope; b) simulation results — 200-point moving average, Simulink simulation using the state-space model, plotted over 0–500 time units.]
Figure 4.18. Moving average FIR filter simulation.
The z-transform of this filter is

    Y(z) = (1/n)(1 + z^{-1} + ⋯ + z^{-(n-1)}) U(z) = (1/(n z^{n-1}))(z^{n-1} + z^{n-2} + ⋯ + z + 1) U(z).
Thus, this filter has n − 1 poles at the origin. This means only that you have to wait for n
samples before there is an output.
The process for computing the output y_k is to accumulate (in the sense of adding to the previous values) 1/n times the current and past values of the input. The Simulink model
that generates the fir n-point moving average filter (Figure 4.18) is in the NCS library. It is
Moving_Avg_FIR. Exercise 4.6 asks that you verify that this model is indeed the state-space
model of the filter.
The simulation computes the average of the input created by the Band Limited White
Noise block, which we investigate in more detail in the next chapter. The important information about this block is that the output is a sequence of Gaussian random variables
with zero mean and unit variance (so the moving average should be zero). When the model
opens, the callback sets n to 200 so the average is over the previous 200 samples. The result
is in Figure 4.18(b).
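The state-space moving average that the model implements can be mirrored in a few lines. This is an illustrative pure-Python sketch of what the Simulink blocks compute (the test input here is an arbitrary deterministic sequence rather than the Band Limited White Noise block):

```python
# The n-point moving-average FIR filter as the state-space model used by
# the NCS library model Moving_Avg_FIR:
#     A = diag(ones(1,n-2),1),  b = [zeros(n-2,1);1],
#     c = ones(1,n-1)/n,        d = 1/n.
# The state simply stores the n-1 most recent past inputs.

def moving_average_ss(u, n):
    """y_k = (1/n) * sum_{i=0}^{n-1} u_{k-i}, computed recursively."""
    x = [0.0] * (n - 1)                 # state: past n-1 inputs, oldest first
    y = []
    for uk in u:
        y.append(sum(x) / n + uk / n)   # y_k = c*x_k + d*u_k
        x = x[1:] + [uk]                # x_{k+1} = A*x_k + b*u_k (shift register)
    return y

u = [float((3 * k) % 7) for k in range(60)]
y = moving_average_ss(u, 5)
# After the start-up transient, y[k] is the mean of u[k-4..k]:
direct = sum(u[50 - i] for i in range(5)) / 5   # should equal y[50]
```

The shift-register structure also makes the inefficiency discussed next concrete: the model performs a full (n−1)×(n−1) matrix multiply per sample, even though only a shift is needed.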
You can change the number of points in the moving average by changing the variable n
in MATLAB. Beware, however, that this implementation of the filter is extremely inefficient,
so as n gets larger, the time it takes to compute the moving average increases dramatically.
This last point illustrates a very important aspect of filter design: the design approach makes a dramatic impact on the time it takes to perform the computations and on the accuracy of the result. Let us explore this with the Digital Filter design block in the Signal Processing Blockset. The model Moving_Avg_FIR_sp in the NCS library uses the signal processing Digital Filter block to create an fir filter that is identical to the state-space model above. Double click on the "Digital Filter" block in the model, and its dialog will appear. This dialog allows you to select the filter type (in the "Main" tab); in this case, we selected FIR (all zeros). Next, we select the filter structure, the feature of the block that allows a more robust implementation. This is not critical in the simulation of the filter (although using a bad implementation, as we did above, can cause a long wait for the simulation to be completed), but it is absolutely critical in implementing a filter in a real-time application. Using the smallest number of multiplies, adds, and storage elements can greatly simplify the filter and make its execution much more rapid. In this implementation, we have used the "Direct Form" of the filter with the numerator coefficients equal to 1/n*ones(1,n), where in the simulation, n = 200.
One last feature of the Filter block is the ability to view the filter transfer function as a frequency plot (and in other views). Clicking the "View Filter Response" button creates a plot that opens showing the magnitude of the filter transfer function plotted versus the normalized frequency. The viewer also allows you to look at
• the phase of the filter,
• the amplitude and phase plotted together,
[Figure: a) pole-zero plot — the zeros lie on the unit circle and the poles (multiplicity 199) sit at the origin; b) filter properties.]
Figure 4.19. Pole-zero plot and filter properties for the moving average filter.
• the phase delay,
• the impulse or the step response,
• the poles and zeros of the filter,
• the coefficients of the filter (both numerator and denominator polynomials),
• the data about the filter’s form,
• implementation issues, by viewing the filter magnitude response under limited-precision implementations.
We select these using the buttons along the top of the figure (circled in the figure).
Two of the more interesting outputs from this tool are the pole-zero plot and the filter
properties. Figure 4.19 shows these for the moving average filter. Notice that the filter
properties gives a calculation of the number of multiplies, adds, states, and the number of
multiplies and adds per input sample. (In this case the numbers are the same since the fir
filter requires only multiplications and adds for each of the zeros.)
We will explore some of the filter structures and their computation counts in Exercise 4.6. For now, we can really appreciate the difference the implementation makes by
using the sim command in MATLAB to simulate both of the moving average Simulink
models. The MATLAB code for doing this is
tic; sim('Moving_Avg_FIR');    t1 = toc;
tic; sim('Moving_Avg_FIR_sp'); t2 = toc;
tcomp = [t1 t2]
The results from this code (on my computer) are
tcomp =
   18.6036    0.5747
The difference is so dramatic because the first implementation (the state-space model)
has a state matrix a that is 199 × 199. At every iteration, this matrix multiplies the previous
state. Then there is a vector addition, followed by a vector multiply and an addition (for
the feed-through of the input). Use this as a guide for determining the calculation count in
Exercise 4.6.
In general, all fir filters have the form of a sum of present and past values of the input
multiplied by coefficients that are the desired (finite) impulse response. Thus, the general
fir filter and its z-transform are
    y_k = Σ_{i=0}^{n-1} h_i u_{k-i}    and    H(z) = Y(z)/U(z) = Σ_{i=0}^{n-1} h_i z^{-i} = ∏_{i=0}^{n-1} (z^{-1} + zero_i).
Notice that this filter has n zeroes (zero_i) that are determined by factoring the polynomial H(z). Because the highest power in this transfer function is z^{-n}, the filter has n poles at the origin (which corresponds to the fact that the filter output is only complete after n samples; there is an n-sample lag before the correct output appears). This appears in the response of the moving average filter from the Simulink model above.
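The general fir form is just a direct convolution, which can be sketched in a few lines (illustrative only; the input is taken to be zero before k = 0, and the moving average appears as a special case):

```python
# The general fir filter of the text: y_k = sum_{i=0}^{n-1} h_i * u_{k-i},
# a convolution of the input with the finite impulse response h.

def fir_filter(h, u):
    return [sum(h[i] * u[k - i] for i in range(len(h)) if k - i >= 0)
            for k in range(len(u))]

# The n-point moving average is the special case h = [1/n]*n:
h = [0.25] * 4
step = [1.0] * 12
y = fir_filter(h, step)    # ramps 0.25, 0.5, 0.75, then holds at 1.0
```

The ramp over the first n samples is the n-sample start-up lag described above.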
The discussion above was for a filter that computes the mean of n samples. Because this type of filter was studied in statistics, all-zero filters are called "moving average" (or MA) filters. Statisticians also developed iir filters that were all poles, and they dubbed these autoregressive (or AR) filters. Last, if a filter has both poles and zeros, it is an ARMA filter.
The filter design block allows you to create all three types of filters. The dialog also allows
the user to specify where the filter coefficients are set in the model. The option we have
used is to set them through the filter mask dialog. (They are computed from the poles and
zeros set in the dialog.) You also can select an option where the filter coefficients are an
input (they can then be computed in some other part of the Simulink model for use in this
block, thereby changing the filter coefficients “on the fly”), or the coefficients can be created
in MATLAB using an object called DFILT. Finally, the dialog allows the user to force the
filter to use fixed-point arithmetic. This leads us to the discussion in the next section.
4.6.4 Implementing Digital Filters: Structures and Limited Precision
Very few analytic methods allow one to visualize the effects of limited precision arithmetic
on a filter. The use of simulation in this case is mandatory. Therefore, let us explore some
of the consequences of designing a filter for use in a small, inexpensive computer that, say,
has only 16 bits available.
[Figure: the input signal (a sum of sinusoids) plotted over 0–2 seconds.]
Table 4.3. Specification for the band-pass filter and the actual values achieved in the design.

    Specification        Spec. Values    Actual Values (−3 dB point)
    Sample Freq.         10 kHz          10 kHz
    Pass Band #1 Edge    100 Hz          100 Hz
    Stop Band #1 Edge    110 Hz          109.7705 Hz
    Pass Band #2 Edge    120 Hz          120.2508 Hz
    Stop Band #2 Edge    130 Hz          130 Hz
    Stop Band 1 Gain     −60 dB          −60 dB
    Transition Width     10 Hz           10 Hz
    Pass Band Ripple     1 dB            1 dB

[Figure: Simulink model — a source that generates 15 sine waves at frequencies given by freqs (set by a callback when the model loads) feeds the Digital Bandpass Filter Designer block and the Floating Pt. Bandpass Filter, whose outputs go to a View Signals scope; all signals are type double.]
Figure 4.20. The Simulink model for the band-pass filter with no computational
limitations.
The first question we need to ask is, what's the problem? We have created a very simple example of a digital signal-processing task that will allow us to explore some of the limited precision arithmetic features built into the Signal Processing Blockset. The model, called Precision_testsp1 or 2 in the NCS library, is a simulation of a device that one might design to try to find a single tone in a time series that consists of a multitude of tones. We might use it, for example, in a frequency analyzer or in a device to tune a wind instrument during its manufacture. It consists of a band-pass filter with a narrow frequency range (10 Hz in this case) and a sample rate of 10 kHz. The input to the filter is the same sum of sinusoids that we have used previously, namely sine waves at the frequencies 65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125, 180.2, 202.1, 223.3, 230.3, 310.3, and 405.2 Hz, as shown in the figure above.
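For reference, this input can be reproduced outside Simulink. The sketch below assumes unit amplitudes for all 15 sinusoids (the model sets the frequencies via the freqs callback; the amplitudes are an assumption here):

```python
import math

# The test input used throughout this section: a sum of 15 sinusoids at
# the frequencies listed above, sampled at 10 kHz.

FREQS_HZ = [65, 72.8, 74.7, 82, 89.3, 91, 99.5, 103.2, 125,
            180.2, 202.1, 223.3, 230.3, 310.3, 405.2]
FS = 10000.0                        # sample rate, Hz

def input_signal(n_samples):
    dt = 1.0 / FS
    return [sum(math.sin(2 * math.pi * f * k * dt) for f in FREQS_HZ)
            for k in range(n_samples)]

u = input_signal(2000)              # 0.2 seconds of data
peak = max(abs(v) for v in u)       # well below the worst case of 15
```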
The band-pass filter specification is in Table 4.3, and the Simulink model
Precision_testsp1 that we use to test the design is in Figure 4.20. If you open this
model, you will see that it contains the Bandpass Filter block from the “Filter Design
Toolbox” Simulink library. This block converted the specification in the table into filter
coefficients used by the Digital Filter block. The Digital Filter block allows you to take the floating-point design and convert it to an equivalent fixed-point design.
Figure 4.21 shows the amplitude-versus-frequency plot for the filter designed by the
“Band Pass Filter Design” block. (Open the model in the NCS library and double click
on this block to see the specification and create this plot.) The solid lines in the figure are
the filter amplitude, as designed, and the dotted lines are the specification from the table
[Figure: magnitude response (dB) versus frequency (kHz) — the response falls from 0 dB in the pass band to below −60 dB in the stop bands, over the range 0–0.18 kHz.]
Figure 4.21. The band-pass filter design created by the Signal Processing Blockset
digital filter block.
above. When this filter is used, the digital filter response is almost indistinguishable from the response of an analog filter. We have deliberately designed this filter for a frequency band that is not represented in the 15 sine waves that make up the input. (The pass band is from 110 to 120 Hz; none of the sinusoids fall in this frequency range.)
Thus, the output of the filter should be very small. This is in fact the case as can be
seen in the response plot (Figure 4.22(a)). The amplitude of the output (after the initial
transient dies down) is about 0.01 (about 1/1000 the amplitude of the input signal), which
is very good for detecting that the tone is not present.
The next part of the design is to place the filter pass band in the area where we know
there is a tone. Thus, let us try to pick out the tone at 103.2 Hz by specifying that the pass
band corners will be 100 and 110 Hz (so the values of Fstop1, Fpass1, Fstop2, and Fpass2
are 90, 100, 110, and 120, respectively). This is so easy to do in the design that it amounts to
a trivial change (in contrast to designing this filter by hand). You should try this, if you have
not already. If you have not, the Simulink model Precision_testsp2 in the NCS library
has the changes in it. Open this model and double click the Bandpass Filter block to view
the changes that were made. (The dialog that opens has the new design for the pass band.)
When you run this model, the response should look like the plot shown in Figure 4.22(b).
The maximum output is now about 1.5 (compared to the 0.15 above), showing that there is
a tone in the 10 Hz frequency range of 100 to 110 Hz.
The filter design method that we use for both of the band-pass filters is the “Second
Order Section" (or Direct Form Type II). When a digital filter is implemented using floating-point operations, the structure of the filter does not usually matter, but when the filter is for a
fixed-point computer, as we intend to do next, the way that the filter is structured can make
a huge difference. For this reason, the Signal Processing Blockset includes design methods
for a wide variety of filter structures.
[Figure: a) output of the band-pass filter when no signal is in the pass band — amplitude within about ±0.2 over 0–2 s; b) band-pass filter output when one of the sinusoidal signals is in the pass band — amplitude about ±1.5.]
Figure 4.22. Band-pass filter simulation results (with and without signal in the
pass band).
Why is the implementation method an issue for fixed-point calculations? In Section 4.4, we showed that the transfer function for a discrete linear system comes from the state-space model as

    H(z) = Z{y_k}/Z{u_k} = C(zI − Φ)^{-1} Γ + D.

The poles of the discrete system are the eigenvalues of the system, given by det(zI − Φ(Δt)) = 0, so using the denominator polynomial to create a digital filter will result in coefficients that range from the sum of the eigenvalues (or poles) to their product. All of the eigenvalues (poles) of the z-transform transfer function are less than one in magnitude because the matrix Φ(Δt) is
    Φ(Δt) = e^{AΔt} = T^{-1} diag(e^{λ_1 Δt}, e^{λ_2 Δt}, …, e^{λ_n Δt}) T,

where T is the matrix that diagonalizes the matrix A (and Φ(Δt)). The denominator of the z-transform is the determinant of zI − Φ(Δt), which is the polynomial ∏_{j=1}^{n} (z − e^{λ_j Δt}). (Exercise 4.7 asks that you show that this is true.)
If this polynomial defines the filter denominator, the precision needed to store the coefficients would range over many orders of magnitude. Consider, for example, the fourth order filter with poles at 0.1, 0.2, 0.3, and 0.4; its denominator polynomial is z^4 − z^3 + 0.35z^2 − 0.05z + 0.0024, so the coefficients span almost three orders of magnitude. Each coefficient would therefore require at least 10 bits just to get one bit of accuracy. The simplest way to avoid this is to break the filter up into cascaded second order systems. Second order sections also eliminate complex arithmetic. (The denominator polynomial of a second order section with the complex conjugate pole pair e^{λ_i Δt}, e^{λ_i* Δt} is z^2 − (e^{λ_i Δt} + e^{λ_i* Δt})z + e^{(λ_i + λ_i*)Δt}, which is always real.)
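The coefficient-spread problem is easy to demonstrate numerically. The sketch below (illustrative only, using the pole values from the example above; it groups the real poles pairwise, whereas a real design pairs complex conjugates) expands the full fourth-order denominator and compares its coefficient range against that of two cascaded quadratic sections:

```python
# Expanding all four poles into one denominator polynomial spreads the
# coefficients over almost three orders of magnitude, while each
# quadratic factor keeps them O(1).

def poly_from_roots(roots):
    """Coefficients of prod (z - r), highest power of z first."""
    coeffs = [1.0]
    for r in roots:
        coeffs = coeffs + [0.0]
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]
    return coeffs

poles = [0.1, 0.2, 0.3, 0.4]
full = poly_from_roots(poles)     # z^4 - z^3 + 0.35 z^2 - 0.05 z + 0.0024
sections = [poly_from_roots(poles[:2]), poly_from_roots(poles[2:])]

spread_full = max(abs(c) for c in full) / min(abs(c) for c in full)
spread_sections = max(max(abs(c) for c in s) / min(abs(c) for c in s)
                      for s in sections)
```

With a fixed binary point, the largest spread dictates how many bits are wasted on the small coefficients, which is exactly why the cascaded form is preferred.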
The cascaded second order sections that were designed by the filter designer for this model
can be seen by looking under the mask (by right clicking on the Bandpass Filter block,
and selecting "look under mask" from the menu that opens). The block that appears is the
“Generated Filter block,” and we open it by double clicking. When this block opens, you
will see the cascaded second order sections (unconnected) that make up the filter (as shown
in the figure below).
[Figure: the generated filter — four cascaded second-order sections built from gain blocks (section scale factors s(1)–s(4) and coefficients a(i)(j), b(i)(j)) and unit delays z^{-1}, preceded by a Check Signal Attributes block on the input.]
The coefficients are in gain blocks, and we can inspect them by double clicking on
the block. It is very useful to become familiar with navigating around the Signal Processing
Blockset generated models this way.
Now that a filter design exists, we impose the requirement that the calculations must
use a precision of 16 bits. Designing a filter with limited precision arithmetic is simple
if the filter design block creates the design. Once it is complete, MATLAB uses the filter
design block to design the implementable digital filter. Before we do this, we need to think
a little about what we mean by limited precision arithmetic and digital computing without
floating-point calculations.
Whenever a calculation uses floating-point arithmetic, the computer automatically
normalizes the result (using the scaling of the floating-point number) to give the maximum
precision. (Among the many references on the use of floating point, [31] describes how
to use a block floating-point realization of digital filters.) The normalized version of any
variable in the computer is
    x = ±(1 + f) · 2^e,

where f is the fraction (or mantissa) and e is the exponent.
The fraction f is always positive and less than one and is a binary number with at most
52 bits. The exponent (in 64-bit IEEE format) is always in the interval −1022 ≤ e ≤ 1023.
Because of the limited size of f , there is a limit to the precision of the number that can
be represented. (In MATLAB, this is captured by the variable eps = 2−52 , the value for
any computer using the IEEE floating-point standard for 64-bit words.) The exponent has a
similar effect, except that its maximum and minimum values determine the smallest and the
largest numbers we can represent. It would pay to read Section 1.7 of NCM [29] to make
sure that you understand the concept of limited precision.
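A quick check of the same facts in Python, whose floats use the same 64-bit IEEE format as MATLAB doubles (illustrative only; Section 1.7 of NCM covers the details):

```python
import sys

# MATLAB's eps = 2^-52 is the machine precision of IEEE 754 doubles.
eps = 2.0 ** -52
same_as_python = (eps == sys.float_info.epsilon)

# 1.0 + eps is the next representable double above 1.0, while adding
# something sufficiently smaller than eps rounds back down to 1.0:
bumps = (1.0 + eps > 1.0)
rounds_away = (1.0 + eps / 4 == 1.0)
```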
Inexpensive computers do not have floating-point capability. (They have “fixed-point”
architectures, where the user places the binary point at a fixed location in the digital word.)
Programming these computers requires that the programmer ensure that the calculation
uses enough bits. When we say enough bits, we are immediately in the realm of speculation
because the number of bits needed for any calculation depends on the range of values that
the data will have. For example, if we are interested in building a digital filter that will act
as the tone control for an audio amplifier, the amplifier electronics immediately before the
analog-to-digital conversion determines the maximum value that the signal will ever have.
The amplifier might limit the signal to a maximum of 1 volt. The result of the conversion
of the analog signal will be a number that ranges from −1 to 1. It is then easy to scale the
fixed-point number so its magnitude is always less than one by simply putting the binary
point after the first bit. (The first bit is then the sign bit, and the remainder of the bits are
available to store the value.) This scaling can be kept for all of the intermediate calculations,
but depending on the calculation (add, subtract, multiply, or divide), this may not be best,
so scaling needs to be reconsidered at every step in the computation.
When the amplitude of the signal being filtered is not known—as, for example, when
filtering a signal that is being received over a radio link—the designer needs to figure out
what the extremes for the signal will be and ensure that the digital word accommodates
them. When the extremes are exceeded, the conversion to a digital word reaches a limit (called saturation, in which the most significant bit of the conversion tries to alter the sign bit). When the digital word is signed, the computer recognizes this attempt and raises an error. When using unsigned integers, the bit overflows the register. (The computer sees a carry bit that has no place to go, resulting in an overflow error.)
To handle the limited precision, the designer needs to figure out what the effect of the
limited precision will be both in terms of the accuracy that is required and in terms of the
artifacts introduced by the quantization. As we will see, quantization is a nonlinear effect
that can introduce many different noises into a signal that may, at a minimum, be distracting
and, in the worst case, cause the filter to do strange things (like oscillate).
Based on the discussion above, it should be clear that fixed-point numbers accommodate a much smaller dynamic range than floating-point numbers. The goal in the scaling
is to ensure that this range is as large as possible. The scaling usually used is to have a
scale factor that goes along with the digital representation and an additive constant (or bias)
that determines what value the digital representation has when all of the bits are zero. The
scaling acts like a slope and the additive constant acts like the intercept in the equation of a
straight line, i.e.,
    V_representation = S · w + b.
As was the case for the floating-point numbers, the constant S = (1 + f) · 2^e, where the magnitude of f is less than one, and b is the bias. The difference between the floating-point and fixed-point representations is that the value of e is always the same in the fixed-point calculations. The programmer must keep track of the slope and the bias; the computer uses only w during the calculations. Simulink fixed-point tools have many different rules for altering the scaling of each calculation to ensure the best possible precision (within constraints). Simulations ensure that the input signals truly represent the full range that is expected. The simulation also must ensure that the calculations at every step use values that have ranges consistent with the data.
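The slope/bias representation is easy to sketch directly. The example below is illustrative only (Simulink's fixed-point tools choose and track this scaling automatically); it quantizes a value into a signed 16-bit word with the binary point just after the sign bit, saturating at the limits as described above:

```python
# A minimal slope/bias fixed-point representation, V = S*w + b,
# with a signed 16-bit stored integer w.

def to_fixed(v, slope, bias, bits=16):
    """Quantize v to the nearest stored integer, saturating at the limits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    w = round((v - bias) / slope)
    return max(lo, min(hi, w))        # saturate instead of overflowing

def from_fixed(w, slope, bias):
    return slope * w + bias

slope, bias = 2.0 ** -15, 0.0         # binary point just after the sign bit
v = 0.3141
w = to_fixed(v, slope, bias)
err = abs(from_fixed(w, slope, bias) - v)   # quantization error <= slope/2
```

The quantization error bound of half a slope per operation is the nonlinear effect that the rest of this section is designed to keep under control.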
Figure 4.23. Fixed-point implementation of a band-pass filter using the Signal
Processing Blockset.
With this brief discussion, we can begin to investigate digital signal processing on a fixed-point machine. Open the model Precision_testsp2 (see Figure 4.23(a)) and double click the Digital Filter block. The dialog in Figure 4.23(b) will appear. The information in the Main dialog consists of the data created by the filter design in the "Digital Band-pass Filter Designer" block in the model. The Transfer function type is iir, and the filter structure is the biquadratic direct form that we specified.
The dialog has a tab that forces the filter design to use a fixed-point architecture.
Click this tab and look at the options. Since the filter consists of four cascaded second order
sections, the first issue we need to address is how to scale each of the outputs. As a starting
guess we specify that each second order filter will use the full precision of the 16-bit word,
so we put the binary point to the right of the sign bit (i.e., between bit 15 and 16, so that the
fraction length—the length of w in the scaling equation—is 15 bits). The coefficients are
also critical, so they need as much precision as possible. We have specified overall accuracy
of 32 bits, with 15 bits for both the numerator and denominator coefficients in each of the
second order sections. Each multiplication in our computer has 32 bits of precision, so we
specify that the word length is 32 bits, and we again allow the most precision of 31 bits for
the “Fraction Length.” The output will use a 32-bit D/A conversion, so the output is scaled
the same as the accumulator.
These guesses for the scaling are now tested. The first step in the process is to run the
model (with the guesses) to see what the maximum and minimum values of the various signals are. To do this, just click the start button. The simulation will run, and the maximum and
minimum values appear in MATLAB in a 45-element cell array called FixPtSimRanges.
The last five values in this cell array show the simulation results for the fixed-point calculations. For example, the 45th entry, obtained by typing FixPtSimRanges{45} at the
MATLAB command line, contains
    Path: [1x54 char]
    SignalName: 'Section output'
    DataType: 'FixPt'
    MantBits: 16
    FixExp: -15
    MinValue: -1.0000
    MaxValue: 0.9998
The next step is to allow the “Autoscale” tool to adjust the scaling to give the maximum
amount of precision to the implementation. To do this, select “Fixed Point Settings” under
the Tools menu. This selection opens the dialog in Figure 4.24.
To automatically scale the calculations (and in the process compute all of the scale
factors S associated with each calculation), simply click the “Autoscale Blocks” button at
the bottom of the dialog. The results appear by opening the Digital Filter block and then
looking under the Fixed-Point Tab. The Autoscale will have scaled all of the calculations,
but you should find that the values we selected are good.
Now that the filter is complete, you can experiment with different word sizes and
precisions, and see what the effect is on the output of our filter. However, if you look at
the “View Signals” Scope block, you will notice that even with the design optimized for
precision, the fixed-point and floating-point computations are not the same.
4.6.5 Batch Filtering Operations, Buffers, and Frames
The Signal Processing Blockset can also process signals in a “batch” mode where the signals
are captured and then buffered in a register before the entire sequence is used to calculate a
desired functional. The most frequent use of this technique is in the calculation of the Fourier
transform of a signal. Remember that the Fourier transform is an integral over all time. Since
Figure 4.24. Running the Simulink model to determine the minima, maxima, and
scaling for all of the fixed-point calculations in the Band-pass filter.
we can capture only a finite time sample, any calculation for a signal processing application
will at best approximate the transform. In addition, the data will be digital, so the transform
will be done using an approximating summation. The most frequent approximation is the
fast Fourier transform (FFT). It is the subject of Chapter 8 of NCM, so we will assume that
the reader is familiar with its method of calculation.
In order to illustrate the concept of buffers and the FFT in the Signal Processing
Blockset, consider the problem of recreating an analog signal from its samples (the sampling
theorem problem we investigated above). In this case let’s not try to find an analog filter
that will do this, but let’s try to use the Fourier transform (in the form of the FFT).
The model FFT_reconstruction in the NCS library (Figure 4.25) starts with the
same sum of 15 sinusoids that we have been using in the previous models. The reconstruction, however, uses the transform of the signal. Let us look at the theory behind the
calculations, and then we will explore the model’s details.
Remember that the sampling theorem told us that to reconstruct the signal from its
samples we need to multiply the Fourier transform of the sampled signal by the function that
is 1 up to the sample frequency and zero elsewhere. When we take the Fourier transform
using the FFT, we get frequencies that are up to half the sample frequency. Thus, if the
[Figure: the FFT_reconstruction Simulink model — 15 sine waves (frequencies given by freqs, set by a callback when the model loads) are sampled, buffered (nbuffer = 512 values), transformed by an FFT, padded with zeros to 4096 points, inverse transformed (IFFT, inheriting the reference complexity), made real (eliminating the residual imaginary part), unbuffered (ndesired values), and scaled by nd/nbuf; the input is delayed by samp_time*nbuffer to account for the buffer lag, and a parallel path computes |FFT| for a short-time spectrum display. The model is titled "Frequency Domain Sampled Signal Reconstruction: Ideal Low Pass Filter Created by Padding the FFT."]
Figure 4.25. Illustrating the use of buffers. Reconstruction of a sampled signal
using the FFT.
conditions for the sampling theorem (that the signal is band limited) are valid, the FFT is
just S(ω), −ωM ≤ ω ≤ ωM, where ωM is the maximum frequency contained in the signal
and the values of ω are integer multiples of 2ωM/n, where n is the number of samples of the signal that were
transformed. In creating the model in Figure 4.25, the first step was to save n samples of
the input signal for subsequent processing by the FFT. The block in the Signal Processing
Blockset that does this is the Buffer block. It simply stores some number of values of the
input in a vector before passing it on to the next block. When the model opens, three data
values (called nbuffer, npad, and ndesired) appear in the MATLAB workspace. nbuffer is the size of
the buffer, and it is set to 512. (It must always be a power of two for the FFT block.)
The FFT and the inverse FFT use the blocks from the Transforms library in the Signal
Processing Blockset. Thus, outwardly, all we are doing in the model above is taking the
transform of the input and then taking the inverse transform to create an output. The output,
though, cannot be the result of the inverse transform, since this is going to be a vector of
time samples that, presumably, matches the output of the Buffer block. The way to pass the
vector of samples back into Simulink as a set of time samples at the appropriate simulation
times is to use the “Unbuffer” block. (Both the Buffer and Unbuffer are in the Signal
Management library under Buffers.)
To apply the sampling theorem, you need to multiply the transform of the sampled
signal by the pulse function and then take the inverse Fourier transform (which is continuous,
so this results in a continuous time signal). We are using the IFFT block, which takes the
inverse fast Fourier transform and therefore outputs a signal only at discrete values of
time. Therefore, in order for the inverse to have more time values (thereby filling in or
interpolating the missing samples), we need to increase the size of the FFT before we take
the inverse. We do this by padding the transform with zeros to the left of −ωM and to the
right of ωM . The subsystem called “Pad Transform with Zeros” in Figure 4.26(a) does this.
To see how, double click on the block to open it.
There is a block called “Zero Pad” in the “Signal Operations” library, and we use it
to do the padding. The block adds zeros at either the front or the rear of a vector. However,
we need to be careful about this. Remember that when the FFT is computed, the highest
frequencies are in the center of the transform vector, and the lowest (zero frequencies) are
at the left and right of the vector (i.e., the transform is stored from 0 up to ωM and then from
−ωM back up to 0, with a discontinuity at the center of the array).

Figure 4.26. Using the FFT and the sampling theorem to interpolate a faster sampled version of a sampled analog signal. (a) The Simulink subsystem that pads an FFT with zeros to increase the number of sample points in the transform: a MATLAB Function block shifts the 512-point transform so zero frequency is at the center, Zero Pad blocks add zeros at the lowest negative and highest positive frequencies to reach 4096 points, and a second MATLAB Function block un-shifts the transform so zero frequency is back at the ends; a Matrix Viewer stores |FFT| in the rows of an [8x4096] matrix so the padded transform can be viewed. (b) Padding an FFT with zeros must account for the fact that the zero frequency is not at the center of the FFT.

If you run the following MATLAB code, it will generate the plot in Figure 4.26(b) to illustrate this:
t = 0:.01:511*.01;
y = sin(t) + sin(10*t) + sin(100*t);
z = fft(y);
plot(abs(z))
The plot shows that the transform has its zero frequency at the two ends of the
512 points computed. Because of this FFT quirk, padding the FFT with zeros during the
computation in the Simulink model will not add zeros below −ωM and above ωM . (Exercise:
What does it do?)
There is a built-in command in MATLAB that we will use to rotate the FFT. The
function is fftshift, and it converts the FFT so its zero frequency is at the center of the
plot (as it appears using the Fourier transform). To see the effect of using this command,
replace the last plot command with
plot(fftshift(abs(z)))
There is no function in the Signal Processing library that does the equivalent of
fftshift. Therefore, we use the MATLAB Function block from the Simulink library to call
the MATLAB fftshift. The function needs to be invoked twice: once before we do the zero
padding and once after to put the FFT back into the correct form for the inverse (IFFT block).
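The complete shift-pad-unshift-invert chain is easy to prototype outside Simulink. The following NumPy sketch is our own illustration (not the book's model); it uses the model's sizes nbuffer = 512 and ndesired = 4096, which give npad = 1792 zeros on each side, to interpolate the three-sine test signal above by a factor of 8:

```python
import numpy as np

nbuffer, ndesired = 512, 4096            # sizes used in the Simulink model

t = np.arange(nbuffer) * 0.01
y = np.sin(t) + np.sin(10 * t) + np.sin(100 * t)

z = np.fft.fft(y)                        # zero frequency at the ends
zc = np.fft.fftshift(z)                  # shift so zero frequency is centered
pad = (ndesired - nbuffer) // 2          # 1792 zeros on each side
zp = np.concatenate([np.zeros(pad), zc, np.zeros(pad)])
zu = np.fft.ifftshift(zp)                # un-shift for the inverse transform
yi = np.real(np.fft.ifft(zu)) * (ndesired / nbuffer)   # scale by nd/nbuf

# every 8th interpolated point lands on an original sample
print(np.max(np.abs(yi[::8] - y)))       # tiny: limited only by roundoff
```

Note the scale factor nd/nbuf, which compensates for the 1/N normalization of the longer inverse transform, exactly as the gain block in the model does.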
Two additional blocks used in the model come from the Signal Processing Blockset
sinks library. They are the Short Time Spectrum block and the Matrix Viewer block. The
first block is used to view the FFT as it is computed, and the second allows us to view the
padded FFT as it is computed (in a three-dimensional plot). In this plot, time and frequency
are the two axes, and the color is the amplitude of the FFT. (In the book it is in a gray
scale, but in the model that you are running from the NCS library, it is in color.) Another
attribute used in the model is to display the length of the vectors and the sample times
on the various lines using different colors for the lines. These attributes come from the
Port/Signal Displays under the Format menu in the model. Now, with this understanding of
the mathematics involved, it should be clear that if we pad the transform with enough
zero elements to make the final padded vector have 2^n points, the inverse FFT (IFFT)
will result in a time series with 2^n values. Thus, making 2^n > nbuffer, the output will
have more time samples than the input. From the sampling theorem, this output is the result
of multiplying the FFT of the input by the pulse function, and the result is a signal that
perfectly—to within numeric precision, of course—reconstructs the original signal at the
new sample times.
The result of running this model is shown in Figure 4.27 for the reconstruction of
the sampled signal at 8 times the input sample rate (ndesired = 4096, nbuffer = 512, and
npad = 1792).
The results show very good reconstruction of the sampled signal, as we would expect.
Figure 4.28(a) shows the output from the Short-Time Spectrum block in the model, and
Figure 4.28(b) shows the plot created by the Matrix Viewer block. Almost every one of
the 15 frequencies in the input sine wave can be seen in the peaks of the spectrum (which
is really the FFT), and the padding of the FFT to create the new output can be seen in the
Matrix Viewer output.
We have explored about 20% of the capabilities of the Signal Processing Blockset
in this chapter. For example, dramatic improvements in many signal-processing applications result from the processing of a large sample of data transferred to the computer as
a contiguous block. To some extent, we have seen this in the example above, where we
buffer the signal data before sending it to the FFT. The same approach applies to filters and
many other signal-processing operations. When we do this in Simulink, the operation uses
a vector called a “frame.” Frames make most of the computationally intensive blocks in the
Signal Processing Blockset run faster. There are many examples in the demos that are part
of the signal-processing toolbox that illustrate this, and now that you understand how the
buffer block works, you should be able to work through these examples without trouble.
Furthermore, it is a good idea to look at all of the demos, to open all of the blocks in
the blockset to see how each of them works, and, along with the help, to determine what each
block needs in terms of data inputs and special considerations for its use.
Figure 4.27. The analog signal (sampled at 1 kHz, top) and the reconstructed
signal (sampled at 8 kHz, bottom).
4.7 The Phase-Locked Loop
An interesting signal-processing device incorporates the basics of signal processing and
feedback control. The device, a phase-locked loop (PLL), is an inherently nonlinear control
system. It is extremely simple to understand, and Simulink provides a perfect way for
simulating the device. In the process of creating the simulation, we will encounter some
new Simulink blocks, we will use some familiar blocks in a new way, and we will encounter
some numerical issues. We begin with how the PLL operates.
Imagine that we want to track a sinusoidal signal. The classic example of this is
the tuner in a radio, television, cell phone, or any device that must lock onto a particular
frequency to operate properly. In early radios, a demodulator followed an oscillator tuned
to the desired frequency. The oscillator operated in an open loop fashion, so if its frequency
drifted (or the signal’s frequency shifted slightly), the radio needed to be manually retuned.
The operation of the demodulator used the fact that the product of the incoming frequency
and the local oscillator created sinusoids at frequencies that were the sum and difference of
the input and oscillator frequencies. Mathematically this comes from
sin(ω1 t + ϕ1) cos(ω2 t + ϕ2) = (1/2) [sin((ω1 + ω2)t + ϕ1 + ϕ2) + sin((ω1 − ω2)t + ϕ1 − ϕ2)].
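This product-to-sum identity is easy to check numerically. The following NumPy fragment is our own illustration, with arbitrarily chosen frequencies and phases:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
# arbitrary test values (not from the book)
w1, w2 = 2 * np.pi * 101.56, 2 * np.pi * 100.0
p1, p2 = 0.3, 0.7

lhs = np.sin(w1 * t + p1) * np.cos(w2 * t + p2)
rhs = 0.5 * (np.sin((w1 + w2) * t + p1 + p2)
             + np.sin((w1 - w2) * t + p1 - p2))
print(np.allclose(lhs, rhs))   # True: the identity holds to machine precision
```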
Figure 4.28. Results of interpolating a sampled signal using FFTs and the sampling theorem.

[Figure 4.29 block diagram: the input signal at 101.56 Hz feeds a modulator (product block); the error signal passes through a third-order Butterworth low pass filter at 5 Hz (num(s)/den(s)) to the voltage controlled oscillator (ports VCOin and VCOout), whose output closes the loop; a scope labeled Tracking shows the input and the VCO output. A note on the diagram reads: "This Phase Locked Loop Simulation will Track a sinusoidal input that is in the frequency range of 95 to 105 Hz."]

Figure 4.29. Phase-locked loop (PLL). A nonlinear feedback control for tracking frequency and phase of a sinusoidal signal.

The output of the demodulator was the result of extracting only the difference frequency
using a circuit tuned to this "intermediate" frequency. In most applications, tuned radio-frequency
amplifiers increased the intermediate signal's amplitude. The PLL uses the same
concept, except that the difference sinusoid is fed back to change the frequency of the
oscillator. The feedback uses a device called a voltage controlled oscillator
(abbreviated VCO). The operation of the loop needs three parts:
• the VCO,
• the device that creates the product of the input and the VCO output,
• a filter to remove sin ((ω1 + ω2 )t + ϕ1 + ϕ2 ) before the result of the product passes
to the VCO.
The term “locked” describes when the input frequency and the VCO frequency are
the same. Thus, at lock, since the product has both the sum and difference frequencies, the
difference frequency is zero. Therefore, the filter that we need to remove the sum frequency
is our old friend the low pass filter. We will build the loop simulation in Simulink using a
third order Butterworth filter.
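Before building the Simulink model, it may help to see the whole loop in a few lines of code. The following Python sketch is our own simplified stand-in, not the book's model: a one-pole low pass filter replaces the Butterworth, and the gain values are invented for illustration.

```python
import numpy as np

def pll(x, fs, f0=100.0, kvco=200.0, fc=5.0):
    """Track the frequency of x with a multiplier phase detector,
    a one-pole low pass filter at fc Hz, and a cosine VCO."""
    dt = 1.0 / fs
    a = dt / (1.0 / (2 * np.pi * fc) + dt)    # one-pole filter coefficient
    phase, y = 0.0, 0.0
    freq = np.empty(len(x))
    for k, s in enumerate(x):
        err = s * np.cos(2 * np.pi * phase)   # modulator (product) output
        y += a * (err - y)                    # low pass filter removes the sum term
        freq[k] = f0 + kvco * y               # VCO frequency command
        phase = (phase + freq[k] * dt) % 1.0  # integrate the phase modulo 1
    return freq

fs = 20000
t = np.arange(0.0, 2.0, 1.0 / fs)
f = pll(np.sin(2 * np.pi * 101.56 * t), fs)
print(f[-fs // 2:].mean())                    # settles near the 101.56 Hz input
```

Because the VCO base frequency (100 Hz here) differs from the input frequency, the filtered error must hold a nonzero steady-state value, which is exactly the behavior discussed for the Simulink model below.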
The Simulink model of the phase-locked loop is Phase_Lock_Loop in the NCS library
(Figure 4.29).
A MATLAB callback (as usual) creates the parameters for the simulation. The Butterworth filter is third order, and its coefficients are in the Transfer function block. The
modulator uses the product block. The VCO is a subsystem that looks like Figure 4.30.
This implementation of the VCO ensures that the generated sinusoid always has the
correct frequency despite the limited numeric precision of the simulation. The big worry is
the effect of roundoff. NCM has a discussion of the effect of calculating a sinusoid using
increments in t that look like t_next = t + Δt. If the value of Δt cannot be represented precisely
in binary, then eventually the iteration implied by the equation will give an incorrect result.
The integrator modulo 1 in the diagram fixes this problem.
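The representation problem is easy to demonstrate in any language with IEEE double-precision arithmetic; in Python, adding Δt = 0.1 ten times does not give exactly 1:

```python
t = 0.0
for _ in range(10):
    t += 0.1            # 0.1 is not exactly representable in binary
print(t == 1.0)         # False
print(abs(t - 1.0))     # a few units in the last place, about 1.1e-16
```

Over millions of simulation steps these tiny errors accumulate, which is why the VCO integrates its phase modulo 1 instead of accumulating t directly.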
Let us look at this subsystem (Figure 4.31).
The first thing to note is that the integration is creating the ω2 t part of the argument of
cos(ω2 t + ϕ2) that is the VCO output. The second thing to note is that the integrator resets
whenever the value of its output reaches one.
[Figure 4.30 block diagram, with blocks labeled: In (VCOin), gain Kpll, Integrator Modulo 1, Integral modulo 2pi, Oscillator Base Frequency f0, Oscillator Base Phase, cosine (cos(u[1])), Output Amplitude gain Amp, a Scope, and the output VCOout. A note reads: the integrator output is approximately 0 ≤ (f0 + Input(t)·Kvco)·t < 2π.]
Figure 4.30. Voltage controlled oscillator subsystem in the PLL model.
[Figure 4.31 block diagram, with blocks labeled: input gain, Integrator (1/s) with state port, gain 1+eps, Relational Operator (>=), Math Function (rem) with Modulus 1, external initial condition xo (the initial integral is set to 0 at t = 0), and a 2*pi gain that sets the output "Integral modulo 2pi" to lie between 0 and 2π.

This integrator creates an output that is between 0 and 2π. The reset on the integration uses the state port at the top (so there is no algebraic loop). Any errors in the integration at the reset are calculated from the remainder: the integration restarts at its value before the reset minus 1.]
Figure 4.31. Details of the integrator modulo 1 subsystem in Figure 4.30.
This reset uses two options in the integrator block that we have not used before. The
first is the state port that comes from the top of the integrator. A check box in the integrator
dialog causes the display of this port. You use it when you need the result of the integration
to modify the input to the integrator. If you fed the output back to the input, an algebraic
loop would result that is difficult for Simulink to resolve. The port eliminates this loop.
The second new input to the integrator is the “reset” port. This port is created when
you select rising from the “External reset” pull-down menu in the integrator dialog. Thus,
in the model, the integrator is reset to the initial condition whenever the relational operator
shows that the output is greater than 1 + eps. (eps is the spacing between 1 and the next
larger number in IEEE floating-point arithmetic; see NCM for a discussion of this MATLAB variable.)

[Figure 4.32 panels: (a) the phase-locked loop simulation input signal and the output of the VCO, plotted from t = 1.9 to 2 s; (b) the output of the modulo integrator over the same interval; (c) the loop tracking response (the input to the VCO) from t = 0 to 2 s.]

Figure 4.32. Phase-locked loop simulation results.

The integrator steps are not necessarily going to occur at exactly the point where the output is
exactly one. The external initial condition in the integrator is used at the reset to set the
value of the integrator to the remainder (the amount the output exceeds 1) for the next cycle.
The output of the integral is therefore a sawtooth wave that goes from zero to one. We
multiply the output by 2π before using it to calculate the cosine.
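The reset-to-remainder idea can be sketched in plain Python (our own illustration of the logic, not the Simulink implementation):

```python
import math

def sawtooth_phase(freq, dt, nsteps):
    """Accumulate freq*dt, resetting to the remainder whenever the sum reaches 1."""
    phase, out = 0.0, []
    for _ in range(nsteps):
        phase += freq * dt
        if phase >= 1.0:
            phase -= 1.0           # keep the overshoot: reset to the remainder
        out.append(phase)
    return out

ph = sawtooth_phase(101.56, 1e-4, 2000)                 # example frequency and step
vco = [math.cos(2 * math.pi * p) for p in ph]           # multiply by 2*pi, then cosine
print(0.0 <= min(ph) and max(ph) < 1.0)                 # True: a sawtooth in [0, 1)
```

Resetting to the remainder, rather than to zero, is what keeps the phase exact even though the reset never lands precisely on 1.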
Run the PLL Simulink model and observe the outputs in the Scopes. There are three
Scope blocks, two at the top level of the diagram. The first shows the two sinusoids (the
input and the VCO output), and the second shows the feedback signal that drives the VCO.
Look at these and at the Scope block in the VCO subsystem. (Figure 4.32(a) shows the
plots of the input and output of the simulation.) Figure 4.32(b) shows the modulo-arithmetic
integrator output that we described above, and Figure 4.32(c) is the tracking error in the PLL.
Because the base frequency of the VCO is not exactly 101.56 Hz, the VCO input (VCOin)
must have a nonzero steady state value. This is clearly the case. Furthermore, the PLL got to
this steady state value with a rapid response and minimal overshoot. The design of the PLL
feedback uses linear control techniques that start with a linear model of the PLL dynamics.
Many more interesting and complex mathematical processing applications are possible using the blockset, including noise removal, more complex filter designs, and the ability
to create a processing algorithm for an audio application and actually hear how it sounds
on the PC. There are tools that allow the export of any signal processing application to
Hardware Description Language form so that the processing ends up on a chip. Tools also
will export designs to Xilinx and Texas Instruments architectures.
We are now ready to talk about stochastic processes and the mathematics of simulation
when the simulated variables are processes generated by a random quantity called “noise.”
This is the subject of the next chapter.
4.8 Further Reading
The Fibonacci sequence and the golden ratio are the subject of a book by Livio [26]. It is
a very readable and interesting review of the many real (and apocryphal) attributes of the
sequence.
In graduate school (at MIT), Claude Shannon worked on an early form of analog
computer called a "differential analyzer." His work on the sampling theorem links to the
subject of simulation in a very fundamental way. In fact, while he was working on the
differential analyzer he created a way of analyzing relay switching circuits that were part
of these early devices. He published a paper on these results in the Transactions of the
American Institute of Electrical Engineers (AIEE) that won the Alfred Noble AIEE award.
During World War II, Shannon worked at Bell Labs on fire control systems. As the
war ended, Bell Labs published a compilation of the work done on these systems. Shannon,
along with Richard Blackman and Hendrik Bode, wrote an article on data smoothing that
cast the control problem as one of signal processing. This work used the idea of uncertainty
as a way to model information, and it was a precursor to the discovery of the sampling
theorem.
The Shannon sampling theorem is so fundamental to discrete systems that his
papers frequently reappear in print. Two recent examples of this are in the Proceedings of
the IEEE [23], [39]. A paper in the IEEE Communications Society Magazine [27] followed
shortly afterward. The best source for the proof of the theorem is in Papoulis [32]. Papoulis
had a knack for finding elegant proofs, and his proof of the sampling theorem is no exception.
There are hundreds of texts on digital filter design. One that is reasonable is by Leland
Jackson [22].
Reference [11] describes the operation of phase-locked loops. It also shows how they
are analyzed using linear feedback control techniques, and how easy it is to create a loop in
a digital form.
To learn more about the tools, blocks, and techniques available with the Signal Processing Blockset and Toolbox, see The MathWorks Users Manuals and Introduction [43], [44].
Exercises
4.1 Show that the Fibonacci sequence has the solution (completing the step outlined in
Section 4.1.2)

    fn = (φ1^(n+1) − (1 − φ1)^(n+1)) / (2φ1 − 1).
4.2 Verify using induction the result from Section 4.3 that

    xk = [ fk−1  fk ; fk  fk+1 ] x0.

Is it a fact that because you use the iterative equation in the model and the result is what you
expect that this must imply that the result is true by induction? If you use a simulation
of a difference equation, iteratively, to produce a consistent result, will this imply that
the result is always true by induction?

4.3 Show, using induction, that the Fibonacci sequence has the property that fk+1 fk−1 −
fk^2 = ±1. (Follow the hint in Section 4.3.)
4.4 Verify that the Butterworth filter function

    |HButter|^2 = 1 / (1 + (ω/ωm)^(2n))

is maximally flat. (All of its derivatives from the first to the (n − 1)st are zero at zero
and infinity.)
4.5 Show that the analog system dx(t)/dt = Ax(t) + Bu(t) and the discrete system xk+1 =
Φ(Δt)xk + Γ(Δt)uk have the same step responses when

    Φ(Δt) = e^(AΔt),    Γ(Δt) = ∫0^Δt e^(Aτ) B dτ.

Create a Simulink model with the state-space model of a second order system in
continuous time and in discrete time using the above.

Create a discrete time system using the Laplace transform of the continuous system
with the bilinear mapping s = (2/Δt)(1 − z^(−1))/(1 + z^(−1)) to generate the discrete system.

Compare the simulations using inputs that are zero (use the same initial conditions
for each of the versions), a step, and a sinusoid at different frequencies.
4.6 Show that the FIR filter for an n-sample moving average has the state-space model

    xk+1 = [ 0 1 0 … 0 ; 0 0 1 … 0 ; … ; 0 0 0 … 1 ; 0 0 0 … 0 ] xk + [ 0 ; 0 ; … ; 0 ; 1 ] uk,

    yk = (1/n) [ 1 1 … 1 ] xk + (1/n) uk.
In this state-space model the A matrix is (n − 1) × (n − 1) and the vectors are of
appropriate dimensions. Verify that the Simulink model in the text does indeed use
this equation. Calculate the number of adds and multiplies this version of the moving
average filter requires.
Investigate other non–state-space versions of this digital filter. Use the Simulink
digital filter library. Work out the computation count needed to implement the filters
you select, and compare them with the number of computations needed for the state-space
model. Is this model an efficient way to do this filter? What would be the most
efficient implementation?
4.7 Show that det(zI − Φ(Δt)) = (z − e^(λ1 Δt))(z − e^(λ2 Δt)) ⋯ (z − e^(λn Δt)), where
Φ(Δt) = e^(AΔt) and the λj are the eigenvalues of the matrix A.
Chapter 5
Random Numbers, White Noise, and Stochastic Processes
Simulation of dynamic systems where we assume we know the exact value for all of the
model parameters does not adequately represent the real world. Most of the time the designer
of a system wants to know what happens when one or more components are subject to
uncertainty. The modeling of uncertainty covers two related subjects: random variables
and stochastic processes.
Random variables can model the uncertainty in experiments where a single event
occurs, resulting in a numeric value for some observable. An example of this might be
the value of a parameter (or multiple parameters) specified in the design. In the process of
building the system, these can change randomly because of manufacturing or other errors.
Simulink provides blocks that can be used to model random variables using the two standard
probability distributions: uniform and Gaussian or normal.
Stochastic processes, in contrast to random variables, are functions of time or some
other independent variable (or variables). The mathematics of these processes is complicated
because of the interrelation between the randomness and the independent variable(s).
In this chapter, we explore the use of Simulink to model both types of uncertainty. We
introduce the two types of random variables available in Simulink and show how to create
other probability distributions for use in modeling phenomena that are more complex. We
extend ideas of random variables to the simplest of stochastic processes where at each of
the sample times in a discrete system we select a new random variable. Finally, we show
discrete time processes that converge to continuous time processes in the limit where the
time steps are smaller and smaller.
5.1 Modeling with Random Variables in Simulink: Monte Carlo Simulations
The simplest uncertainties one might need to model are the values of parameters in a simulation. These parameter uncertainties can be errors due to manufacturing tolerances, uncertainties in physical values because of measurement errors, and errors in model parameters
because of system wear. Monte Carlo simulations use parameters selected from known
probability distributions. The parameters are then random variables, so each simulation has
Figure 5.1. Monte Carlo simulation demonstrating the central limit theorem. (The model sums n + 1 uniformly distributed random variables with zero mean, with values from −1 to +1, supplied from MATLAB through a From Workspace block, and n/2 binomial ±1 random variables, with Pr(x = 1) = Pr(x = −1) = 1/2, created in Simulink; each sum is divided by its sigma, 1/sqrt(n/6) for the uniforms and 1/sqrt(n/2) for the binomials, to give unit variance, and the running variance Displays read 1.033 and 0.9905.)
Figure 5.2. Result of the Monte Carlo simulation of the central limit theorem. (The histogram shows the number of samples at each value of the random variable, from −6 to 6.)
structure.) The first column is the time points (in this case the integers from 0 to n, which
are the counters for the number of random samples we are creating and not time), and the
second column is the random numbers generated in MATLAB using the rand function.
The result of running this model with n = 10,000 is shown in Figure 5.2. Note that
the distribution is almost exactly Gaussian, with zero mean and unit variance as the central
limit theorem demands. Thus, we have done a numerical experiment where we have made
multiple runs with random number generators, and we have created the probability density
function for the result—and have seen that the result is Gaussian as theory predicts. This is
exactly what Monte Carlo analyses are supposed to do. In the situations where it is used,
the mathematics required to develop the probability distribution are so complex that this
method is the only way to achieve the required understanding.
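The same experiment is easy to reproduce outside Simulink. This NumPy sketch is our own version with smaller sizes than the 100 million samples used above; it sums uniform random variables and normalizes each sum to unit variance:

```python
import numpy as np

rng = np.random.default_rng(12345)
n, m = 500, 10000           # summands per trial and number of trials (our choices)

u = rng.uniform(-1.0, 1.0, size=(m, n))
# each uniform on [-1, 1] has mean 0 and variance 1/3, so dividing the
# sum of n of them by sqrt(n/3) leaves mean 0 and unit variance
z = u.sum(axis=1) / np.sqrt(n / 3.0)

print(z.mean(), z.var())    # both close to the Gaussian limit values 0 and 1
```

A histogram of z (hist(z,100) in MATLAB) is nearly Gaussian, exactly as the central limit theorem demands.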
In the simulation above, we generate 100 million random variables to create the
histogram (50 million in MATLAB before we start the model, and 50 million over all of the
time steps in the model). Using tic and toc, the total time that this took on a 3.192 GHz
dual processor Pentium computer was 24.2 seconds.
When the simulation ends, there is a callback (using the StopFcn tab in the File menu
Model Properties submenu) that executes the following code to create the plot in Figure 5.2:
y = simout.signals.values;
hist(y,100)
It is important to see how the parameters were set up in the dialog box for the binomial
distribution. We used the "Band-Limited White Noise" block, which generates a Gaussian random
variable at each time step, and then we extracted from this a random variable that is +1
when the sign is positive and −1 when it is negative. This uses the sign (sgn) block from
the Math Library. We add the elements in the resulting vector to create a random variable
that lies between −n/2 and n/2; thus it has a binomial distribution. The dialog box for the
Band-Limited White Noise block has the value of the random seed set to
12345:100:12345+100*n/2-1.
The seed needs to be a vector of different values so that each of the n/2 generated
random variables is independent.
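The sign-of-a-Gaussian construction can be mimicked in a few lines of NumPy (an illustration of the idea, not the Simulink block settings):

```python
import numpy as np

rng = np.random.default_rng(12345)
n2 = 5000                                   # n/2 independent variables (our choice)
flips = np.sign(rng.standard_normal(n2))    # +1 or -1, each with probability 1/2
x = flips.sum()                             # binomial-style sum, between -n/2 and n/2

print(flips.mean(), flips.var())            # mean near 0, variance near 1
```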
From this discussion, you should be able to see how to create a simulation that contains
random variables either using the random number generators in Simulink or using MATLAB
commands to generate random numbers. However, you might think from the example that
the only options for the random variables are uniform distributions or Gaussian.
Well, while you were not looking, we sneaked a binomial distribution into the model
above. We actually generated this distribution using Simulink blocks. Let us explore another
example, the Rayleigh distribution, which will give some insight into how to generate more
complex (and consequently more likely to occur in practice) random variables.
5.1.2 Simulating a Rayleigh Distributed Random Variable
Radio waves that reflect off two different surfaces have amplitudes that are the square root
of the sum of the squares of the amplitudes from each direction. Thus, if the amplitude of
each wave is Gaussian, the resulting signal is Rayleigh. To make this precise, let x(t) and
y(t) be two Gaussian random variables; then the random variable z(t) = sqrt(x(t)² + y(t)²) is
Rayleigh distributed. The probability density function for z(t) is
fZ(z) = (z/σ²) e^(−z²/(2σ²)) for z ≥ 0,  and  fZ(z) = 0 for z < 0.
Figure 5.3. A Simulink model that generates Rayleigh random variables. (Two Random Number blocks are squared with Product blocks, summed, and passed through a sqrt Math Function; the result goes to a Scope and to a running variance block whose Display reads 0.4292.)
How do we generate a Rayleigh distributed random variable for use in Simulink? The
answer is to go to the definition. The model in Figure 5.3 creates a Monte Carlo simulation
that generates a random variable with a Rayleigh distribution. This model is Rayleigh_Sim
in the NCS library.
When this model runs, the generated noise sample appears in the Scope, and when the
simulation stops, a MATLAB callback creates the histogram for the generated process (as we did in
Section 5.1.1 above). The figure below is the histogram that results.
As an aside, the variance of the Rayleigh random variable is 2 − π/2 = 0.4292. The
simulation uses the "variance" block from the Signal Processing library to compute the variance as the
simulation progresses, and the value is exactly the 0.4292 for the 10 sec simulated in the example above.
It should now be clear how one goes about
creating random variables with different distributions. The first step is to find a mathematical
relationship between the desired random variable and the Gaussian or other random variable,
and then build a set of blocks that form that function. In Example 5.1, we ask that you modify
the Simulink model above to add the calculation of the random variable that is the phase
angle of the reflected sine waves.
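Going to the definition works just as well in plain code. This NumPy sketch (our own, with unit-variance Gaussians) generates Rayleigh samples and checks the variance quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)     # two independent unit-variance Gaussians
y = rng.standard_normal(1_000_000)
z = np.sqrt(x**2 + y**2)               # Rayleigh distributed (sigma = 1)

print(z.var())                         # approaches 2 - pi/2 = 0.4292
```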
[Histogram for the Monte Carlo experiment of the Rayleigh distribution: the number of samples for each value of the random variable, plotted for values from 0 to 5.]

5.2
Stochastic Processes and White Noise
A random variable is a number that results from a random experiment. The experimenter
may assign the number, or it might arise naturally because of the experiment. For example,
with a coin flip, the experiment is the operation of flipping the coin and observing the result.
The outcome is a member of the set that contains the two possible outcomes {“Head,”
“Tail”}. A random variable for this experiment is the assignment of a number to each of the
outcomes (the number 1 for heads and the number 0 for tails, for example). A stochastic
or random process is similar to a random variable except that the experimental result is the
assignment of a function (usually, but not always, a function of time) to the event. Thus, the
following are stochastic processes (in order of complexity from trivial to overwhelming).
• A random process resulting from an event with only two outcomes:
– Flip a coin, and assign the function sin(αt) if the result of the flip is a head and
assign the function e^(−αt) if it is a tail.
• The "random walk" process:
– Start with x(0) = 0. At every discrete time step kΔt, flip a coin and move left
if the result is a head and move right if it is a tail.
– Thus, x(kΔt) = x((k − 1)Δt) − 1 if a head appears and x(kΔt) = x((k − 1)Δt) + 1
if a tail appears.
• The Brownian motion (Wiener) process:
– Let the coin flip above happen infinitely often in the sense that over any interval
Δt, no matter how small, the coin is still flipped an infinite number of times.
Each of these processes is progressively more complex than the previous one. The first
is simple because there are only two outcomes, so the probability of each is simply 1/2. The
second process has a growing number of possible outcomes, so at the nth time step there
are n possible values for the process. The last process is inconceivably complex; it is the
prototype for Brownian motion—the motion of a particle in a fluid. This process plays a
fundamental role in the analysis of stochastic processes. (The process is also called the
Wiener process.)
Most of us have heard of white noise, perhaps only as a noise source that masks
background noise to allow us to sleep well. However, this ubiquitous process is an essential
part of all analyses of systems with noise, and it is, therefore, a very convenient mathematical
fiction. As a process, white noise does not exist (it is easy to demonstrate that this process
has infinite energy), but it is possible to give some insight into its properties and show a way
that it is related to the Brownian motion process, and as a consequence it can be subjected
to a rigorous mathematical treatment. As the first step in understanding systems excited by
noise, let us explore the Brownian motion process in more detail. We will see how to use it
as a prototype of white noise.
5.2.1 The Random Walk Process
The starting point for our discussion of Brownian motion is the random walk process described above. Let us create a Simulink model that generates this process (Figure 5.4). We
need to use the 1/z block from the Discrete library to store the previous value of the random
walk, and then we need to add this to the result of "flipping the coin," which is simulated
using the sign of the Uniform Random Number generator (a sign of +1 or −1).
The simulation has a seed for the random number generator that is a vector of length
nine, so each run creates nine separate instantiations of the random walk. Figure 5.5 shows
x_{k+1} = x_k + sign(y_k),  x_0 = 0

[Figure: a Uniform Random Number block feeds a Sign block (+1 or −1), which is summed with the previous value x_k held in a Unit Delay (1/z) block; the sum x_{k+1} goes to a Scope.]
Figure 5.4. Simulink model that generates nine samples of a random walk.
[Figure: Nine Simulations of the Random Walk for 10000 Coin "Flips". x-axis: Index "k" (0 to 10000); y-axis: Random Walk Values (−200 to 200).]
Figure 5.5. Nine samples of a random walk process.
the nine samples of the random walk that result with 10,000 flips. (After n iterations, the
value of the random walk must be between −n and n, but it can be shown that a random
walk grows like √n, and this is indeed the case for these nine samples.)
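The random walk is easy to reproduce outside Simulink. The sketch below is a plain-Python stand-in for the model above (the seed and sample counts are arbitrary choices, not values from the book); it generates nine sample paths and checks that the typical final value after n flips is on the order of √n, far below the worst-case bound of n:

```python
import random

def random_walk(n_steps, rng):
    """Final value of one sample path of x_{k+1} = x_k + sign(y_k), x_0 = 0."""
    x = 0
    for _ in range(n_steps):
        x += 1 if rng.random() < 0.5 else -1   # the "coin flip"
    return x

rng = random.Random(42)     # fixed seed, like a fixed Simulink seed vector
n = 10000
finals = [random_walk(n, rng) for _ in range(9)]

# The walk is bounded by n, but its typical size is only about sqrt(n) = 100.
rms = (sum(v * v for v in finals) / len(finals)) ** 0.5
print(finals)
print(rms)
```

The root-mean-square of the nine final values lands near 100, two orders of magnitude below the bound of 10,000, which is the √n growth visible in Figure 5.5.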
Let us compute some of the statistics for this process. The mean of the process is as
follows:

E{x_k} = Σ_{i=0}^{k} E{sign(y_i)} = 0.

The fact that the mean of the sign of y is zero follows from the fact that the probability that
the sign is +1 (from the uniform distribution) is 1/2 and the probability that the sign is −1 is
also 1/2. The mean is therefore (1/2)(+1) + (1/2)(−1) = 0.
Similarly, the variance of the process is

σ²_RW(k) = E{ (Σ_{i=0}^{k} sign(y_i))² } = Σ_{i=0}^{k} E{sign(y_i)²} = k.

(The cross terms in the square vanish because the flips are independent and zero mean.)
Our goal is to convert this discrete time random walk into a continuous process by letting
the number of coin flips in any time interval go to infinity. That means that

lim_{k→∞, Δt→0} x_k(k, Δt) = B(t).
We need to verify that the result B(t) is a mathematical function that exists and has the
desired property that it represents the random motion of a particle immersed in a fluid.
We have scaled the time axis in such a way that it can become continuous, but we do
not know how to scale the outcome of the coin flip (in the random walk it is +1 or −1 for
each flip), so we need to answer the following very simple question:

If we scale the number of coin flips in the random walk using a scale factor of Δt (we are
scaling the horizontal axis that will converge to the time axis), then how must we scale the
position (vertical) axis so the process converges to a valid continuous time process as Δt
approaches zero?
This question arose at the end of the 19th century because physicists were trying to
model Brownian motion as a verification of the atomic theory of matter in which the motion
was due to collisions with atoms (see [5]).
5.2.2 Brownian Motion and White Noise
Brownian motion is a process named for the Scottish botanist Robert Brown, who stated in
an 1828 paper that the motion of a particle immersed in a fluid was not due to something
alive in the particle but was probably due to the fluid. This led to attempts to find what it
was in the fluid that caused the motion.
The Polish physicist Marian von Smoluchowski was trying to model the Brownian
motion process using a limiting argument on the random walk. He realized that this approach
would provide a way of verifying the atomic theory of matter. Physicists who believed in the
atomic theory thought that the observed Brownian motion was due to the very large number
of collisions between the particle and the atoms in the fluid. While Smoluchowski spent a lot
of effort on solving this problem using the approach described here, Albert Einstein worked
with the underlying probability density instead, deriving this density from first principles.
Einstein’s first 1905 paper (the year in which he also published the photoelectric
effect and the special theory of relativity papers) reported his work on Brownian motion.
Smoluchowski published his results about a year later. A few years later, the French physicist
Perrin won the Nobel Prize by using Einstein’s result with experimental measurements of
Brownian motion to come up with an estimate for Avogadro’s number that was in excellent
agreement with (and actually more accurate than) other measurements.
The velocity of a particle undergoing Brownian motion is not computable, and Perrin
showed as much when he attempted to measure it. In fact, Brownian motion is an example
of a mathematical function that is nowhere differentiable. We will show this result and use
it to show why white noise does not exist, by itself, in any mathematical sense. (However,
a form of the process exists whenever it appears, as it always does in practice, in an
integral.)
The discussion that follows is about how one goes about modeling a random process
in Simulink. To do this properly we need to understand the nature of the continuous time
process called white noise and its relation to Brownian motion. The way we will do this is
to let the random walk above become a continuous process by letting the time between coin
flips go to zero (in a very precise way). Brownian motion is the result of the convergence
in the limit, as the time between flips goes to zero, to a continuous time process. Before we
explore this convergence, we need to describe what we mean by convergence.
If we look at the Simulink results for the random walk simulation above, it is quite
clear that the random walk does not converge to a deterministic function. In fact every
time we redo the experiment, the results are different, so what is converging? Remember
that the stochastic process is the result of a random experiment, so it is a function of two
variables: one is time, and the other is the experimental outcome (the observed coin flips
and the process generated by the particular sequence of flips).
The process can converge in the following ways:
• in terms of the statistics (i.e., in the mean, mean square, or variance sense, for example);
• in terms of the probability distribution (i.e., in the sense that the probability distribution
for the random walk converges to a probability distribution for the continuous time
process);
• in the sense that for every experimental outcome, the function of time that results
converges to a well-defined function with certainty (in a sense that can be made precise
but is beyond the scope of this discussion).
We will use the convergence in the probability distribution sense in the following, but
it is true that the random walk converges in every one of these senses to Brownian motion.
Let us assume that we scale the graphs above so that along the flips axis, the variable
k is replaced by kΔt. The question is, how do we scale the vertical axis? Since we do not
know exactly what to use, let us just scale it with an unknown value, say s. In that case, the
mean of the process (before we take the limit) is still 0 and the variance is ks². (Exercise 5.2
asks that you verify this using the calculations for the variance of the random walk above.)
It is now a simple matter to use the central limit theorem to develop the probability
density function for the Brownian motion process. Let the number of flips k go to infinity
and let Δt go to 0 in such a way that the product becomes the time variable t in the Brownian
motion process. That means that

lim_{k→∞, Δt→0} (kΔt) = t,

or equivalently, the number of flips k is constrained to be t/Δt for any time t.
We use the central limit theorem. The variance of the scaled random walk (scaled by
s) is ks². Thus, in the limit, when we force k to be t/Δt, the variance of the random walk
becomes ts²/Δt. It is now obvious that the scale factor s must be proportional to √Δt. If
it is not, the limiting variance will be either zero or infinite, neither of which gives the
limiting process the desired properties. An infinite variance would allow infinite jumps in
any short period, and a zero variance would mean the process is not random. Brownian
motion has neither of these properties.
Therefore, scaling the random walk by σ√Δt will ensure convergence to the Brownian motion process. (The parameter σ is the proportionality constant and is the standard
deviation of the resulting Gaussian process.) Thus, we have the following result.
Create a random process B(t) from the random walk as follows:

B(t) = lim_{Δt→0} [1/(σ√t)] Σ_{i=1}^{k=t/Δt} σ√Δt sign(y_i).

Then B(t) is a process that has a probability density function that is normal with mean 0
and variance 1.
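The limit statement can be checked by Monte Carlo. The sketch below (plain Python, not one of the book's models; σ, t, Δt, the seed, and the trial count are arbitrary illustrative choices) forms the scaled sum σ√Δt Σ sign(y_i) for many independent trials and confirms that its sample variance is close to σ²t:

```python
import random

def scaled_walk(sigma, t, dt, rng):
    """sigma*sqrt(dt) times the sum of sign(y_i) over k = t/dt coin flips."""
    k = int(round(t / dt))
    step = sigma * dt ** 0.5
    return sum(step if rng.random() < 0.5 else -step for _ in range(k))

rng = random.Random(1)
sigma, t, dt = 2.0, 1.5, 0.01
trials = [scaled_walk(sigma, t, dt, rng) for _ in range(4000)]

mean = sum(trials) / len(trials)
var = sum((v - mean) ** 2 for v in trials) / (len(trials) - 1)
print(mean, var)   # mean near 0, variance near sigma^2 * t = 6.0
```

Because the variance of each scaled sum is exactly kσ²Δt = σ²t for any Δt, the variance check passes at any step size; only the Gaussian shape of the distribution relies on Δt going to zero.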
The usual way of stating the limit is to remove the scale factor 1/(σ√t) on the process so
that the variance of the result is σ²t. Both Einstein and Smoluchowski showed this result, but
using the approach we are using, Smoluchowski was able to prove that the process does not
have a derivative anywhere. We can see this if we try to differentiate the random walk and
then take the limit as we did above. When we do, we get the following:
The Brownian motion process B(t) has an infinite derivative everywhere.
The derivative is dB(t)/dt = lim_{Δt→0} [B(t + Δt) − B(t)]/Δt. Based on the scaling we did above,
the difference in the numerator is proportional to √Δt. Let the proportionality constant be
α; then the derivative becomes

lim_{Δt→0} [B(t + Δt) − B(t)]/Δt = lim_{Δt→0} α√Δt/Δt = lim_{Δt→0} α/√Δt → ∞

for all times t.
The Brownian motion process has the following properties:
• The process is Gaussian at every time t, with mean zero and variance σ²t.
• The process is 0 at t = 0.
• The process has increments that are independent random variables in the sense that
B(t_n) − B(t_{n−1}) and B(t_{n−2}) − B(t_{n−3}) are independent random variables for any
nonoverlapping intervals t_{n−3} ≤ t_{n−2} < t_{n−1} ≤ t_n.
• The process does not have a derivative anywhere.
• The integral of any function of time multiplying the derivative of B(t) is evaluated
using the integral ∫ f(t)dB(t), which is well defined even though the derivative of
B(t) is not.
The fifth property is the mathematical rationale for the existence of white noise. Even
though w(t) = dB(t)/dt does not exist, it does make sense to talk about integrals that contain
products of w(t) (white noise) with other functions, because these integrals are

∫ f(t)w(t)dt = ∫ f(t)dB(t).
The first four of these properties follow from the discussion so far. Norbert Wiener
demonstrated the last result in the 1950s, but it will take us too far afield to show the result
here, so we will accept it without proof. However, because of his seminal work in showing
these properties, we call the process B(t) the Wiener process, and we will use this name
from now on.
The name white noise comes from an analogy with white light that contains every
color. White noise as a stochastic process is supposed to contain every frequency. Let us
see what that means.
A stochastic process is called stationary if for any two times t1 and t2 the correlation
function, defined by E{y(t1)y(t2)}, depends on only the difference between the two times
t1 and t2. That is,

R(τ) = E{y(t1)y(t2)} = R(|t2 − t1|),

where τ = |t2 − t1|.
If we go back to the attempt to define the derivative of the Wiener process, the correlation function for white noise is

E{w(t)w(t + τ)} = { 0 for τ > 0; ∞ for τ = 0 } = σ²δ(τ).
The first part of this assertion comes from the fact that the Wiener process has independent
increments, so the expected value of the product over nonoverlapping intervals is the product
of the means (and since the mean is 0, the result is 0). The last part of this assertion comes
from the fact that the derivative of B(t) is infinite. The σ² in this definition is often called
the noise power. It is not the variance of the process (which is infinite); however, the integral
of this process has the variance σ²t, since the integral is the Wiener process.
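A discrete-time surrogate makes the "noise power" interpretation concrete. The sketch below (plain Python; S, Δt, the seed, and the sample count are arbitrary illustrative choices) draws samples w_k = sqrt(S/Δt)·g_k, with g_k standard Gaussian, as a discrete stand-in for white noise of power S, and checks that the sample autocorrelation is about S/Δt at lag 0 and negligible at lag 1:

```python
import random

rng = random.Random(7)
S, dt, n = 4.0, 0.01, 20000
scale = (S / dt) ** 0.5             # sqrt(noise power / sample time)
w = [scale * rng.gauss(0.0, 1.0) for _ in range(n)]

def autocorr(x, lag):
    """Sample autocorrelation of a zero-mean sequence at the given lag."""
    m = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(m)) / m

r0 = autocorr(w, 0)   # near S/dt = 400: the variance blows up as dt shrinks
r1 = autocorr(w, 1)   # near 0: the samples are uncorrelated
print(r0, r1)
```

The lag-0 value S/Δt grows without bound as Δt → 0, which is the discrete shadow of the σ²δ(τ) correlation above.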
We can compute the correlation function of a stationary process directly from the
process, or from the way we generated the process. If, for example, the process were the
result of using white noise as an input to a linear system, the output would be the convolution
of the white noise with the system's impulse response. With this approach, the
white noise process only appears in the solution through the convolution integral, and from
property 5 of the Wiener process, this integral is always computable. Remember, however,
that the convolution integral can be computed from the product of the transforms of the
two functions. (In this case, we must use the Fourier transform because a process can be
stationary only if it has been in existence for an infinite time, as we will see when we attempt
to develop simulations of these processes.)
The Fourier transform of the correlation function is the "spectral density function."
For the white noise process the spectral density is

S(ω) = ∫_{−∞}^{∞} σ²δ(τ)e^{−iωτ} dτ = σ².
Since the spectral density is constant for all frequencies, we can now see why we use the
name “white noise” to describe it.
If white noise does not exist, how can we simulate a system where the white noise
stochastic process excites a linear system? The answer is to use the Wiener process.
5.3 Simulating a System with White Noise Inputs Using the Wiener Process
From the discussion in the previous section, it is clear that to simulate a system with white
noise we must use care. When simulating a continuous time system with a white noise input,
the method used is always to select a Gaussian random variable at each of the numerical
solver's time steps and then to scale the variable so it has the same effect on the solution as
if the random variable were continuous. From the results of Section 5.2, this means that the
scaling has to be proportional to the square root of the time step used to generate the "white
noise" sample. We will now look into how Simulink achieves this with the built-in Band-Limited
White Noise block.
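The effect of this square-root scaling can be seen with a pure integrator, whose output should be the Wiener process with variance (noise power)·t. The sketch below (plain Python with Euler integration, not the Simulink block itself; the seed, step sizes, and trial counts are arbitrary illustrative choices) repeats the experiment at two step sizes and gets about the same variance for x(T) either way:

```python
import random

def integrate_noise(power, dt, T, rng):
    """Euler-integrate dx/dt = w, with w ~ N(0, power/dt) held over each step."""
    x = 0.0
    scale = (power / dt) ** 0.5    # the sqrt(Cov)/sqrt(Ts) scaling
    for _ in range(int(round(T / dt))):
        x += scale * rng.gauss(0.0, 1.0) * dt
    return x

rng = random.Random(3)
power, T = 2.0, 1.0
results = {}
for dt in (0.01, 0.002):
    finals = [integrate_noise(power, dt, T, rng) for _ in range(3000)]
    results[dt] = sum(v * v for v in finals) / len(finals)
print(results)   # both values near power * T = 2.0
```

Each step contributes variance (power/dt)·dt² = power·dt, so the total is power·T regardless of the step size; without the 1/√dt scaling the result would shrink as dt does.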
5.3.1 White Noise and a Spring-Mass-Damper System
Open the model White_Noise in the NCS library. We will use this model to explore Monte
Carlo simulations with noise sources and to explore methods that speed up the simulations.
The first part of the model uses the built-in white noise block with 10 different random
number seeds to create 10 separate simulations of "white noise" exciting a linear system.
The model is the state-space version of the spring-mass-damper with the damping set to
make the damping ratio 0.707. This Simulink model uses the integrator and gain blocks as
we did in Chapter 2. The model is in Figure 5.6, and Figure 5.7 shows the 10 responses.
The model uses the fixed step solver with a nominal solver step of 0.005 sec. The undamped
natural frequency of the spring is 10 rad/sec (1.6 Hz).
The variance and mean calculations are from the Signal Processing Blockset library.
The data for the model come from a Callback when the model opens. The integrator step size
and the various sample times in the model are all the same (called samptime in MATLAB),
so when this is changed, the simulation’s solver and all of the calculations for the white
noise and the statistics are updated too. Open this model from the NCS library and run the
simulation. You should see roughly the same variances as in Figure 5.6. (The variances of
the position and the velocity are difficult to read in this figure, so make sure you run the
model; the values are all around 3.53 and 353.6, respectively.)
The first thing we need to look at in this model is the Band Limited White Noise block.
This block is an example of a masked subsystem. To see what the model is like under the
mask, right click on the block and select “Look Under Mask” from the menu that opens. A
new window will open that contains the blocks in Figure 5.8. (The block will open with the
[Figure: the White_Noise Simulink model. A Band-Limited White Noise source drives the state-space loop built from two integrators with feedback gains 2*zeta*omega and omega^2 and an input gain omega^2; Running Var and Mean blocks compute statistics of the sampled data, and the results go to Displays and a Scope. The 10 displayed velocity variances range from about 322 to 412, the position variances from about 2.5 to 5.1, and the means are near zero.]
Figure 5.6. Simulink model for the spring-mass-damper system.
[Figure: 10 time histories; x-axis: Time (0 to 10 sec); y-axis: position (−8 to 8).]
Figure 5.7. Motions of 10 masses with white noise force excitations.
[Figure: under the mask, a Gain block computing [sqrt(Cov)]/[sqrt(Ts)] feeds outport 1, labeled White Noise.]
Figure 5.8. White noise block in the Simulink library (masked subsystem).
Gain block small; we enlarged it here to make its contents visible.) Along with the mask
is a mask dialog that allows the user to select the values for the parameters used under the
mask. To see this, again right click on the block, but this time select “Edit Mask.” A dialog
window with four tabs will appear. The first tab is the instructions for drawing the little
wiggly line (representing noise) on the block. The second tab is the dialog that allows the
user to change the parameters he wants to use in the subsystem. Options for these are an
edit box, a pop-up menu, or a check box. The names of the parameters and a user prompt
can also be set up. The third tab, called “Initialization,” allows the user to set up MATLAB
code to set up the actual values used in the blocks under the mask. Note that the mask is
a form of subroutine, so these parameter values only appear in the masked subsystem. For
the Band-Limited White Noise block, the important calculations are the test done in the
initialization to ensure that the terms in the variable Cov are all positive, and the calculation
done in the Gain block in Figure 5.8. This calculation should look familiar: it is the scaling
that we determined above (with σ² denoted by Cov and Δt denoted by Ts).
Let us perform some experiments where we change the solver step size. The goal is to
verify that this scaling keeps the results statistically consistent as we make the changes. The
step size is the MATLAB variable samptime. (It is changed at the MATLAB command
line.) The model opens with this value set to 0.005. Select some smaller and some larger
time steps (say from 0.001 to 0.05). You should observe that the mean and variance of the
simulated processes are about the same. (Remember that in a Monte Carlo simulation the results
will not be the same from simulation to simulation.) One caveat: since the seed used for
the random number generator is the same in every simulation, if you keep everything in the
simulation the same, the results will be identical from run to run.
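The same consistency experiment can be sketched outside Simulink. The plain-Python Euler simulation below is an illustrative stand-in for the model (ω = 10 and 2ζω = 14.14 match the text; the seed, run length, and run count are arbitrary, and the tolerances are loose because of Monte Carlo scatter). It scales the noise samples by sqrt(power/Δt), passes them through the ω² input gain, and recovers position and velocity variances near the 3.53 and 353.6 quoted above:

```python
import random

def simulate(dt, T, rng, omega=10.0, two_zeta_omega=14.14, power=1.0):
    """Euler simulation of x'' = -omega^2*x - 2*zeta*omega*x' + omega^2*w."""
    x = v = 0.0
    scale = (power / dt) ** 0.5        # band-limited white noise sample scaling
    xs, vs = [], []
    for _ in range(int(round(T / dt))):
        w = scale * rng.gauss(0.0, 1.0)
        a = -omega * omega * x - two_zeta_omega * v + omega * omega * w
        x, v = x + v * dt, v + a * dt
        xs.append(x)
        vs.append(v)
    skip = int(round(1.0 / dt))        # discard the first second as a transient

    def mean_sq(s):
        return sum(u * u for u in s[skip:]) / (len(s) - skip)

    return mean_sq(xs), mean_sq(vs)

rng = random.Random(11)
runs = [simulate(0.005, 10.0, rng) for _ in range(20)]
pos_var = sum(r[0] for r in runs) / len(runs)
vel_var = sum(r[1] for r in runs) / len(runs)
print(pos_var, vel_var)   # around 3.5 and 354, with Monte Carlo scatter
```

Rerunning with a different dt (say 0.002) gives statistically similar variances, which is the whole point of the sqrt(power/Δt) scaling.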
One final caution: when you simulate the response of the system, you need to be
careful to select a solver step size and a value for the sample time in the white noise block
that are smaller by about a factor of 10 than the response time of the system (in this case faster
than about 0.05 sec). See what happens if you make samptime equal to 0.1 sec. (You
should even see some effect at 0.05 sec.) This effect comes from the sampling
theorem.
The value of the noise power in the Band Limited White Noise block is set to 1. It
would seem that this should set the value of the variance of the position of the mass at 1,
but it does not. Let us try to find out why.
5.3.2 Noisy Continuous and Discrete Time Systems: The Covariance Matrix
To describe a stochastic process we need to specify its joint probability density function
for the values at any arbitrary set of times. White noise is characterized completely by the
Wiener process because the independence of all of the increments makes the joint density
just the product of the densities for each of the increments. When the stochastic process
results from white noise excitation of a linear system, the joint density is determined from
a matrix whose dimension is the order of the highest derivative in the underlying differential
equation. This matrix is the "covariance matrix." We need to compute this matrix for a
linear system.
Consider the linear state-space system excited by a white noise process w(t) given by

dx(t)/dt = Ax(t) + Bw(t).

The covariance matrix, P(t), of this vector stochastic process is defined as P(t) =
E{x(t)x(t)^T}. It is relatively easy to determine a differential equation for this matrix. We
simply differentiate P(t), using the fact that the derivative and the expectation commute.
Thus,
dP(t)/dt = E{ d/dt [x(t)x(t)^T] }
         = E{ [dx(t)/dt] x(t)^T + x(t) [dx(t)/dt]^T }
         = E{ (Ax(t) + Bw(t)) x(t)^T } + E{ x(t) (Ax(t) + Bw(t))^T }
         = AP(t) + P(t)A^T + E{ (Bw(t)) x(t)^T + x(t) (Bw(t))^T }
         = AP(t) + P(t)A^T + BSB^T.
The last step comes from the fact that the process w(t) is the vector white noise process.
The covariance matrix E{(Bw(t)) x(t)^T} in the next to last step is therefore determined
using the solution of the linear state variable equation that we developed in Chapter 2. We
can now use this solution, along with the fact that E{w(t)w^T(τ)} is the impulse function
scaled by the noise power parameter (for each of the white noise processes in the vector), to
develop the solution. Remember that when an impulse appears inside an integral, it selects
the values of the integrand at the point the impulse occurs (in this case when (t − τ) = 0).
Therefore, we have
E{ (Bw(t)) x(t)^T } = E{ (Bw(t)) [Φ(t)x_0 + ∫_0^t Φ(t − τ)Bw(τ)dτ]^T }
                    = B ∫_0^t E{w(t)w^T(τ)} B^T Φ(t − τ)^T dτ
                    = B ∫_0^t S δ(t − τ) B^T Φ(t − τ)^T dτ
                    = (1/2) BSB^T,

where Φ(t) is the state transition matrix from Chapter 2.
The factor of 1/2 in this sequence comes from the fact that the impulse is a two-sided function
and the integration limit t is right in the center of these two sides, so only 1/2 of the value
comes from the impulse. The other (1/2)BSB^T comes from the second term in the expectation
(i.e., from the E{x(t) (Bw(t))^T} term).
The linear matrix equation dP(t)/dt = AP(t) + P(t)A^T + BSB^T has a solution given by

P(t) = Φ(t − t0) P(t0) Φ(t − t0)^T + ∫_{t0}^{t} Φ(t − τ) BSB^T Φ(t − τ)^T dτ.
This is easy to verify by substituting back into the differential equation. Exercise 5.3 asks
that you make this substitution, but remember when you do that the derivative of an integral
whose upper limit depends on the independent variable comes from the identity
d/dt ∫_a^{b(t)} c(t, τ) dτ = c(t, b(t)) db(t)/dt + ∫_a^{b(t)} ∂c(t, τ)/∂t dτ.
Before we investigate the use of this procedure to model a continuous time stochastic
process as a discrete time equivalent, we can use the development above to compute the
steady state noise variances for the spring-mass-damper Simulink model in Section 5.3.1.
This system has a state variable model given by
dx(t)/dt = [0 1; −100 −14.14] x(t) + [0; 100] w(t).
The state x(t) in this model has the usual components. (The first is the position of the mass,
and the second is the velocity.) Therefore, the covariance matrix for the state x(t) is

[σ²_pos, ρσ_pos σ_vel; ρσ_pos σ_vel, σ²_vel].
The diagonal terms are the variances of the position and velocity and the off diagonal terms
show the correlations between the two. The variable ρ has a value whose magnitude is
less than 1, and it measures the correlation between the position and the velocity. We can
calculate the covariance matrix to verify that the Simulink model of the mass’s position
gave the correct variance. We do this by solving for the steady state covariance in the
differential equation above. Steady state means that the covariance matrix does not change,
so its derivative is 0. Thus,
AP(t) + P(t)A^T + BSB^T = 0.
Substituting the values for A, B, and S from the state model above gives

[0 1; −100 −14.14] P(t) + P(t) [0 1; −100 −14.14]^T + [0 0; 0 10000] = 0.
This equation, called a Lyapunov equation, is solved with an M-file that is built into the
Control System Toolbox in MATLAB. The command is lyap(A,BSBT), where A is the matrix
A and BSBT is the matrix BSB^T. (Remember that B = [0; 100] and S = 1 for this system.)
The MATLAB commands and the result for the steady state value of P (Pss) are as follows
(a version of lyap, called lyap_ncs, has been included in the NCS library so you can try
doing this calculation if you do not have lyap available):
>> A = [0 1;-100 -14.14];
>> BSBT = [0 0;0 10000];
>> Pss = lyap(A,BSBT)

Pss =

    3.5355   -0.0000
   -0.0000  353.5534
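If neither lyap nor lyap_ncs is at hand, the steady-state equation can be solved directly: for a symmetric 2×2 P, AP + PA^T + Q = 0 is just three linear equations in p11, p12, p22. The sketch below (plain Python, an illustrative stand-in for lyap, not the toolbox code) sets up and solves that system:

```python
# Solve A P + P A^T + Q = 0 for symmetric 2x2 P = [[p11, p12], [p12, p22]].
A = [[0.0, 1.0], [-100.0, -14.14]]
Q = [[0.0, 0.0], [0.0, 10000.0]]

a11, a12 = A[0]
a21, a22 = A[1]

# Writing out the (1,1), (1,2), and (2,2) entries of A P + P A^T gives a
# 3x3 linear system M * [p11, p12, p22]^T = -[q11, q12, q22]^T.
M = [
    [2 * a11, 2 * a12, 0.0],    # (1,1) equation
    [a21, a11 + a22, a12],      # (1,2) equation
    [0.0, 2 * a21, 2 * a22],    # (2,2) equation
]
b = [-Q[0][0], -Q[0][1], -Q[1][1]]

# Gaussian elimination with partial pivoting, then back substitution.
n = 3
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = M[r][col] / M[col][col]
        for c in range(col, n):
            M[r][c] -= f * M[col][c]
        b[r] -= f * b[col]
x = [0.0] * n
for r in range(n - 1, -1, -1):
    x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]

p11, p12, p22 = x
print(p11, p12, p22)   # about 3.536, 0.0, 353.6 -- matching Pss above
```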
As can be seen, the variances from this calculation match the simulation results in the
Simulink model we generated and evaluated in the previous section (at least statistically,
since the simulation generates an estimate from the 10 runs). This answers the question we
asked at the end of Section 5.3.1. The analysis also shows that the noise power in the Band-Limited
White Noise block does not directly determine the variance of the output from the
simulation; only the calculations above provide the correct variances for the various state
variables in the model.
5.3.3 Discrete Time Equivalent of a Continuous Stochastic Process
We are now in a position to develop a discrete time model for a continuous time stochastic
linear system. In general, if we want to use a discrete process to create a solution that is in
some sense equivalent to the original continuous time process, we need to come up with a
definition for what we mean by equivalence. The simplest and most direct is covariance
equivalence, defined as follows:
A continuous time system and a discrete time system (for the sample times Δt) are
"covariance equivalent" if
• they have the same mean at the sample times;
• they have the same covariance matrix at the sample times.
Remember from Section 2.1.1 that the discrete time solution of a continuous state
variable model is

x_{k+1} = Φ(Δt) x_k + Γ(Δt) u_k.
We assume that the input in this model is a vector of Gaussian random variables (each one
like the random walk) whose values are independent from step to step. We also assume
that the random variables have the same covariance matrix at each time step (i.e.,
E{u_k u_k^T} = S_discrete for all values of k). The covariance matrix for the discrete process is the
solution of the difference equation, which is determined as follows:

P_{k+1} = E{x_{k+1} x_{k+1}^T}
        = E{ (Φx_k + Γu_k)(Φx_k + Γu_k)^T }
        = Φ P_k Φ^T + Γ S_discrete Γ^T.

The state x_k is a vector random variable, and it is independent of u_k (because x_k only
depends on the past values of u, and all of the values of u_k are independent, zero mean
random variables). This means that E{x_k u_k^T} = E{x_k} E{u_k}^T = 0 and
E{u_k x_k^T} = E{u_k} E{x_k}^T = 0.
The covariance matrix for the continuous process, P(t), will be exactly the same as the
covariance matrix of the discrete process, P_k, at the times t = kΔt if the solution to the continuous
time covariance matrix equation above at those times is the same as the covariance
matrix for the discrete time system. That is, P_k = P(kΔt), where P_k comes from iterating
the equation above and P(kΔt) is the solution of the continuous time covariance equation
at the times kΔt. We need to find a value for S_discrete that ensures this.
Comparing the solution for the continuous covariance matrix at time t = Δt, when
the initial covariance (at t = 0) is 0, with one iteration of the difference equation for the
discrete covariance matrix from the initial value 0 gives an equation for S_discrete as follows:

Γ S_discrete Γ^T = ∫_0^{Δt} Φ(τ) BSB^T Φ(τ)^T dτ.
There are many ways of factoring the left-hand side of this equation to give an explicit
value for S_discrete, and we will explore this further as we return to the spring-mass-damper
example. Exercise 5.4 asks you to verify that the S_discrete that results from this single iteration
ensures that the covariance matrices are the same for all k.
Let us use a sample time for the discrete model of 0.01 sec. The value of Φ(Δt)
is determined, as in Section 2.1.1, using the c2d_ncs program in the NCS library. Now we
need to determine Γ S_discrete Γ^T using the solution of the covariance matrix at the time Δt.
When the initial covariance matrix is 0, the solution for the covariance matrix at time
Δt is

P(Δt) = ∫_0^{Δt} Φ(Δt − τ) BSB^T Φ(Δt − τ)^T dτ.
Many numerical approaches will compute this integral, but let us try doing it in Simulink.
We will use the ability of Simulink to solve matrix differential equations by setting up the
differential equation dP(t)/dt = AP(t) + P(t)A^T + BSB^T. Figure 5.9 shows the model (called
Covariance_Matrix_c2d in the NCS library). Running this model produces the solution
for the matrix P(Δt) in the MATLAB workspace.
To send P to MATLAB at the end of the simulation, the model uses the Triggered
Subsystem block from the Ports and Subsystems library. In a triggered subsystem, the
trigger causes any blocks inside the subsystem to run when the trigger signal increases
in value (the default). The icon at the top of the subsystem, where the trigger signal is
connected, indicates the trigger type; if the icon looks like a step discontinuity that is
increasing, the trigger is "rising" (the options are increasing, decreasing, or both). The
contents of the subsystem appear when the block is opened.
From the annotation for this model, which shows the differential equation simulated,
you can work through the diagram and verify the Simulink model. You should do this by
carefully going through the model or, better still, by creating the model yourself. Before
running the simulation, set up the values of A and BSBT in MATLAB. (From the Simulink
model you can see that the variables are called A and BSBT respectively, and if you used
the lyap_ncs code above, these should already exist in MATLAB.) After the simulation is
complete, you can see what P is in MATLAB by typing P at the command line. The result
should be the matrix
P =

    0.0030    0.4333
    0.4333   86.8229
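The Simulink integration can be cross-checked numerically. The sketch below (plain Python with classical RK4 on 2×2 matrices stored as nested lists; the internal step size is an arbitrary choice) integrates dP/dt = AP + PA^T + Q from P(0) = 0 to Δt = 0.01 sec:

```python
A = [[0.0, 1.0], [-100.0, -14.14]]
Q = [[0.0, 0.0], [0.0, 10000.0]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def scale(X, s):
    return [[s * X[i][j] for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def deriv(P):
    # dP/dt = A P + P A^T + Q
    return madd(mmul(A, P), mmul(P, transpose(A)), Q)

def rk4_step(P, h):
    k1 = deriv(P)
    k2 = deriv(madd(P, scale(k1, h / 2)))
    k3 = deriv(madd(P, scale(k2, h / 2)))
    k4 = deriv(madd(P, scale(k3, h)))
    return madd(P, scale(madd(k1, scale(k2, 2), scale(k3, 2), k4), h / 6))

P = [[0.0, 0.0], [0.0, 0.0]]
h, steps = 1e-4, 100          # integrate out to delta_t = 0.01 sec
for _ in range(steps):
    P = rk4_step(P, h)
print(P)   # about [[0.0030, 0.4333], [0.4333, 86.82]]
```

The result agrees with the workspace matrix above to the displayed precision, which is a useful sanity check on the Simulink diagram.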
[Figure: the Covariance_Matrix_c2d model. A Clock and a Compare To Constant block (>= deltat) trigger a subsystem that sends P(deltat) to the MATLAB workspace; two matrix Product blocks form A*P and P*A', which are summed with the BSBT Noise Covariance constant and fed to a 2x2 matrix Integrator whose initial value is the zero matrix zeros(n). Annotation: "Simulink Model for Computing P(Δt) from the Differential Equation dP/dt = AP + PA' + Q, for values of A and Q in the MATLAB Workspace."]
Figure 5.9. Continuous linear system covariance matrix calculation in Simulink.
The result is the covariance matrix that can be used for simulating with a covariance
equivalent discrete system.
Armed with this value for the covariance matrix we can create the equivalent digital
model for the original spring-mass-damper continuous system. Figure 5.10 shows the
Simulink model. We use the discrete state-space model from the discrete library in the
model, and we have set it up for three different sample times (0.01 sec as derived above,
0.05 sec, and 1 sec). The model also contains the original continuous time model as a
subsystem.
The resulting time histories for the four simulations are in Figure 5.11. The figures
show both the position and the velocity of the mass; the lighter line is the velocity, and
the smaller and darker line is the position. Notice that the simulations for the discrete
time models at 0.01 and 0.05 sec match the continuous time system very nicely. (Again,
remember that the match is only in the statistical sense.) For the last sample time, namely
1 sec, the match is not good. Why is this?
The reason is our old friend the sampling theorem. When we create a discrete model
at 1 sec, the conversion to the discrete model has errors. These arise because the system has
a natural frequency of 10 radians/sec (1.592 Hz, corresponding to a period of 0.628 sec).
The 1-sec sample time is longer than this natural period, so the mass is sampled less than
once per oscillation, well below the two samples per cycle that the sampling theorem requires
for accurate reconstruction. (The discrete time system aliases the oscillation.)
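The aliasing is easy to demonstrate with a small numeric experiment (a Python sketch, independent of the Simulink model): sampling a 1.592 Hz sinusoid once per second folds its apparent frequency down to |1.592 − 2| ≈ 0.41 Hz.

```python
import numpy as np

f_true = 1.592        # Hz, the oscillation frequency of the mass
fs = 1.0              # samples/sec: the 1-sec sample time
N = 2048

t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_true * t)

# Find the dominant frequency in the sampled data.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / fs)
f_apparent = freqs[np.argmax(spectrum)]
print(f_apparent)     # the 1.592 Hz oscillation is folded to about 0.41 Hz
```

The sampled record is indistinguishable from a much slower oscillation, which is exactly the mismatch visible in the 1-sec panel of Figure 5.11.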
As an exercise, you should isolate each of these discrete models and see how long
Simulink needs to create the results. Remember that to do this you need to run the simulation
Figure 5.10. Noise response continuous time simulation and equivalent discrete
systems at three different sample times.
from MATLAB using tic and toc before and after the simulation command (in MATLAB
type tic; sim('modelname'); toc;).
If you don’t want to do this exercise yourself, we have simulated each of the models
10 times (with each of the simulations using a stop time of 1000 sec) using the M-code
shown below to record the simulation times and compute their averages (see the MATLAB
file Test_Noise_Models.m in the NCS Library).
for j = 1:10
    for i = 0:3
        str = ['tic; sim(''test' num2str(i) '''); t' num2str(i) '(' ...
               num2str(j) ')=toc;'];
        eval(str)
    end
end
% Averages:
Avgsim0 = mean(t0)
Avgsim1 = mean(t1)
Avgsim2 = mean(t2)
Avgsim3 = mean(t3)
[Figure 5.11 shows four panels of position (darker line) and velocity (lighter line) versus time, 0 to 50 sec: "Continuous Time at 0.01 sec.", "Sample Time = 0.01 sec.", "Sample Time = 0.05 sec.", and "Sample Time = 1.0 sec."]
Figure 5.11. Simulation results from the four simulations with white noise inputs.
Table 5.1. Simulink computation times for the four white noise simulations.

    Model                    Avg. Simulation Time
    Continuous               1.0490 sec.
    Discrete at 0.01 sec.    0.2571 sec.
    Discrete at 0.05 sec.    0.0572 sec.
    Discrete at 1.00 sec.    0.0105 sec.
The results we obtained from this code are in Table 5.1.
Comparing the fast (0.01 sec) sample time with the continuous solution using the ode45
solver (with a max step size of 0.01 sec), the simulation time improves by about a factor
of 4, so when it comes to large and complex Monte Carlo simulations, this approach can
offer a considerable computational advantage.
Based on the work we have done in this chapter, you should be able to simulate any
system with arbitrary noise sources. If you are given the properties of a noise in the form
of a power spectral density function (which implies that the noise is stationary), it is always
possible to find a linear system that can simulate this noise source.
In the next section, we look at one particular and important process that has a power
spectral density function that is proportional to 1/f (one over the frequency). The approach
we discuss is a simple way to create a noise that has a desired power spectral density function.
5.3.4 Modeling a Specified Power Spectral Density: 1/f Noise
In many electronic devices (such as sensors that are used to create an image from radiation
in visible, infrared, or ultraviolet light) quantum mechanical effects manifest themselves
as a noise that has a power spectral density that is proportional to the reciprocal of the
frequency. These noise sources are not from systems that have rational transfer functions,
so a method to create a typical noise sample for a Monte Carlo simulation needs to use some
approximation. Let us see why this is true.
The power spectral density (PSD) for a linear system excited with white noise comes
from the correlation function of the process because, by definition, the power spectral density
function is
S(ω) = ∫₋∞^∞ R(τ) e^{−iωτ} dτ,
where R(τ ) is the correlation function
R(τ ) = E {x(t)x(t + τ )} .
If the process x(t) comes from a linear differential equation, then it is the result of convolving the input with the impulse response of the system. Since the correlation function
involves the product of x(t) at two different times, the correlation function R(τ ) requires
two convolutions to compute. In fact, if a linear system with impulse response h(t) has
the (real) stochastic process u(t) as its input, then the correlation function of the output is
R(τ ) = h(τ ) ∗ Ru (τ ) ∗ h(−τ ), where Ru (τ ) is the correlation function of the input, and the
* operator denotes the convolution integral.
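In discrete time this relation is easy to verify with a sketch (Python as a stand-in for the book's MATLAB setting; the filter coefficients are arbitrary). For unit-variance white input, Ru is a unit impulse, so the output correlation reduces to the filter's deterministic autocorrelation:

```python
import numpy as np

# A short FIR impulse response (arbitrary illustrative coefficients).
h = np.array([1.0, 0.5, 0.25, 0.125])

# R_out(tau) = h(tau) * R_u(tau) * h(-tau); with white input R_u = delta,
# this collapses to h convolved with its time reversal.
R_conv = np.convolve(h, h[::-1])

# Direct computation of the autocorrelation sum_m h[m] h[m+k] over all lags.
lags = range(-(len(h) - 1), len(h))
R_direct = np.array([
    sum(h[m] * h[m + k] for m in range(len(h)) if 0 <= m + k < len(h))
    for k in lags
])

# The two agree (note the autocorrelation is symmetric in the lag k).
print(np.allclose(R_conv, R_direct))
```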
Thus, we have the following very important result. It is the reason that stochastic
process models use linear systems excited by white noise:
If a real, stationary, stochastic process excites a linear system with impulse response h(t),
the output has the correlation function

R(τ) = h(τ) ∗ Ru(τ) ∗ h(−τ).
Therefore, the power spectral density function of the output is
Sout (ω) = |H (iω)|2 Sin (ω).
The proof of this is very straightforward and is an exercise (Exercise 5.5 at the end of
this chapter).
Since the input is white noise, Sin(ω) = 1, and it is easy to see that the PSD of the output
is simply the magnitude squared of the transfer function of the system. Since the transfer
function of an nth order linear system is the ratio of polynomials of order (at most) n (in
s), the PSD is always the ratio of real polynomials with only even powers of ω, and it is of
order 2n. (These properties are because of the square of the magnitude.)
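The relation Sout(ω) = |H(iω)|²Sin(ω) can be checked numerically. The following is a sketch (Python/NumPy as a stand-in for the book's MATLAB tools; the first order filter and record lengths are arbitrary choices): white noise is passed through the filter and an averaged periodogram is compared against |H|² times the input PSD.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 1 << 18, 4096
u = rng.standard_normal(N)            # unit-variance white noise, fs = 1

# First order recursion y[n] = 0.9 y[n-1] + u[n], i.e. H(z) = 1/(1 - 0.9/z).
y = np.empty(N)
acc = 0.0
for n in range(N):
    acc = 0.9 * acc + u[n]
    y[n] = acc

# Averaged periodogram (one-sided, fs = 1): a consistent PSD estimate.
segs = y.reshape(-1, L)
S_est = 2.0 * (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0) / L

# Theory: S_out(f) = |H|^2 * S_in, with one-sided white-noise PSD S_in = 2.
f = np.fft.rfftfreq(L, d=1.0)
H2 = 1.0 / np.abs(1.0 - 0.9 * np.exp(-2j * np.pi * f)) ** 2
S_theory = 2.0 * H2

err = np.median(np.abs(S_est[1:-1] / S_theory[1:-1] - 1.0))
print(err)            # small, and it shrinks as more segments are averaged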
From this, we can see the problem with generating a spectrum that has a PSD proportional
to 1/f. The linear system that generates such a spectrum has a transfer function given by
H(s) = √(2π/s), because the magnitude squared of this transfer function is 2π/ω = 1/f.
The impulse response of the linear system with this transfer function comes from the inverse
Laplace transform of √(2π/s), which is √(2/t). So how do we come up with an approximation
for this spectrum? The method is to approximate the transfer function on a semilog plot
(i.e., on the Bode plot). Since the spectrum is real, we do not have to worry about the phase.
We can approximate a system using products of first order rational transfer functions:

H(s) = ∏_{i=1}^{n} (s + zi)/(s + pi).
On a Bode plot, this transfer function can be used to approximate any straight line up to
the frequency of the last pole. The trick is to place the zeros and poles symmetrically with
respect to the desired straight line.
For our approximation, we use the fact that the transfer function at the pole is
(pi + zi)/(2pi) and at the zero it is 2zi/(zi + pi), a difference of

2zi/(zi + pi) − (pi + zi)/(2pi) = −(pi − zi)²/(2pi(zi + pi)).
We allocate this error around the desired frequency response as follows:
• First, the number of poles used to make the approximation is arbitrary, but the more
that we use, the better the approximation will be.
• The simplest way to set the number of poles is to assume that each pole is a factor
of 2 in frequency away from the previous pole (i.e., they are an octave apart). In
general this separation will ensure that the approximation is within 1 dB of the desired
spectrum (1 dB equates to a maximum error of about 12% in the magnitude at the
frequencies of the pole and zero; the error is zero at the midpoint between them).
• The value of the zeros is set to √2 times the poles (this places the amplitude symmetrically around the poles and zeros).
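This recipe can be checked numerically. The sketch below (Python stand-in; it assumes 10 octaves starting at ω_low = 1 rad/sec, as in the model's defaults) builds the octave-spaced ladder and measures how flat |H(jω)|²·ω is across the band, which is the test of whether |H|² really falls like 1/f:

```python
import numpy as np

# Octave-spaced ladder (assumed: 10 octaves starting at omega_low = 1 rad/sec).
n_oct, omega_low = 10, 1.0
p = omega_low * 2.0 ** np.arange(n_oct)   # poles an octave apart
z = np.sqrt(2.0) * p                      # zeros sqrt(2) above the poles

# |H(jw)|^2 for H(s) = prod (s + z_i)/(s + p_i), inside the valid band.
w = np.logspace(np.log10(4 * omega_low), np.log10(p[-1] / 4), 400)
H2 = np.ones_like(w)
for zi, pi in zip(z, p):
    H2 *= (w ** 2 + zi ** 2) / (w ** 2 + pi ** 2)

# For a 1/f spectrum, |H|^2 should fall like 1/w, so |H|^2 * w should be flat.
flatness_db = 10 * np.log10((H2 * w).max() / (H2 * w).min())
print(flatness_db)    # small ripple across the band
```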
There is no way that any finite representation of the 1/f noise can be valid over
all frequencies (doing so requires an infinite number of terms). Thus, we assume that the
approximation starts at some frequency ωLow , and it stops some number n of octaves above
[Figure 5.12 block diagram: a Band-Limited White Noise block drives the state-space realization (Gain blocks A*u, B*u, C*u, a direct-feedthrough gain d, and an Integrator 1/s) whose output is the 1/f noise; the input signal and the 1/f noise feed a Scope and a "Plot Spectral Estimates" subsystem. Annotation: "Simulation of a noise sample that has a Power Spectral Density Function that is approximately 1/f. The approximation is valid from ω_low to 2ⁿω_low. In MATLAB the variable names are: omegalow = ω_low; numoctaves = n."]
Figure 5.12. Simulation of the fractal noise process that has a spectrum proportional to 1/f .
this.5 The transfer function is then

H(s) = ∏_{i=1}^{n} (s + √2·2^{i−1} ωLow)/(s + 2^{i−1} ωLow).
A Simulink model that generates a noise sample using this approximation is in Figure 5.12
(called Oneonf in the NCS library).
This model generates a plot of the sample and calculations of the resulting PSD. Note
that this simulation, once again, uses the vector capabilities of Simulink to simulate the state
space model of the differential equation represented by the transfer function above.
The differential equation that generates this time sample has an order that is determined
by the number of octaves used. In addition, the state-space matrices A, B, C, and D are
set up in a callback from the model. The number of first order systems is set to the default
value of 10 when the model opens. Also, the starting pole (ωLow ) is set to 1 radian/sec.
These values are MATLAB variables using the names that are in the figure. Changing their
value in MATLAB causes subsequent simulations to use the new values (again, because of
an “Init” callback from the model that executes at every simulation run).
5 In Chapter 6, we examine methods for solving partial differential equations using a finite number of ordinary
differential equations. This discussion parallels the approach that we use there. It is a “lumped parameter” or
“finite element” method.
The values for the A and B matrices that come from the callbacks are

    A = [  -p1       0        0      ...    0
         z2 - p2   -p2        0      ...    0
         z3 - p3  z3 - p3   -p3      ...    0
           ...      ...      ...     ...   ...
         zn - pn  zn - pn  zn - pn   ...  -pn ] ,

    B = [ z1 - p1
          z2 - p2
          z3 - p3
            ...
          zn - pn ] .
The code that sets up these matrices (shown below) is in the Callbacks tab of the Model
Properties under the File menu of the model:
n = 0:numoctaves-1;
n = 2.^n;
p = omegalow*n;
z = sqrt(2)*p;
A = diag(-p);
for i = 1:numoctaves-1
    A = A + diag(z(i+1:end)-p(i+1:end),-i);  % subdiagonal i holds z_r - p_r
end
B = (z-p)';
C = ones(size(B))';
d = 1;
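The realization above can be checked numerically. A sketch (Python stand-in, with assumed small values of numoctaves and omegalow) that compares C(sI − A)⁻¹B + D against the product of first order factors:

```python
import numpy as np

# Assumed small values for the check.
numoctaves, omegalow = 4, 1.0
p = omegalow * 2.0 ** np.arange(numoctaves)
z = np.sqrt(2.0) * p

A = np.diag(-p)
for r in range(1, numoctaves):
    A[r, :r] = z[r] - p[r]        # row r of the subdiagonal part is z_r - p_r
B = (z - p).reshape(-1, 1)
C = np.ones((1, numoctaves))
D = 1.0

# Compare C (sI - A)^-1 B + D with the product of first order factors.
errs = []
for s in [0.5j, 2.0j, 7.0j, 1.0 + 3.0j]:
    H_ss = (C @ np.linalg.solve(s * np.eye(numoctaves) - A, B)).item() + D
    H_tf = np.prod((s + z) / (s + p))
    errs.append(abs(H_ss - H_tf))
print(max(errs))                  # the two agree at every test point
```

This is exactly the claim of Exercise 5.6 at the end of the chapter, verified at a handful of complex frequencies.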
[Figure: simulated values of 1/f noise, amplitude versus time from 1 to 2 sec.]

The Simulink simulation results in the 1/f noise sample shown in the figure at the right.
In addition, the model contains a subsystem that has four different methods for computing
an estimate of the spectrum. When the model is open, if you double click on the "Plot
Spectral Estimates" subsystem you will see that these estimators are all from the Signal
Processing Blockset. (Figure 5.13(a) shows the Simulink model.) The details of how
each of these estimates operates are beyond the scope of this text, but in the broad sense,
all of them, except the magnitude of the FFT block, use a method that estimates a linear
model that matches the statistical properties of the data and then computes the estimate
of the spectrum using the model. This approach always produces an estimate of the PSD
that gets better the longer the time over which the estimate is performed. (Mathematically
these estimates are consistent, meaning that the variance of the estimate goes to zero as
the estimation time goes to infinity.)
The magnitude of the FFT is a very poor estimate of the PSD because it does not have
this property. (In fact, the variance of this estimate never gets smaller.) This can be seen in
Figure 5.13. Power spectral density estimators in the signal processing blockset
used to compute the sampled PSD of the 1/f noise process.
the “Vector Scope” plot of all of the PSD estimates in Figure 5.13(b). (The magnitude of
the FFT is the top estimate with the + signs as line markers.) The three consistent estimators
all have the same overall estimate, and they would indeed be identical except that each of
them is given an arbitrary order for the estimate (starting at 2 for the Yule-Walker estimator
and going to 6 for the Burg estimator; as an exercise, change them all to order 2 and verify
this result).
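The contrast between a consistent estimator and the raw FFT magnitude can be sketched numerically (Python/NumPy stand-in for the blockset estimators; the record and segment lengths are arbitrary choices). The raw periodogram's relative scatter stays near 100% no matter how much data is used, while an averaged estimate tightens as the record grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def rel_spread(N, L):
    """Relative spread of a white-noise PSD estimate built from N samples,
    averaging length-L periodograms (L = N gives the raw FFT magnitude)."""
    x = rng.standard_normal(N)
    segs = x[: (N // L) * L].reshape(-1, L)
    S = (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0) / L
    S = S[1:-1]                   # drop the DC and Nyquist bins
    return S.std() / S.mean()

# Raw periodogram: scatter stays ~100% no matter how much data is used.
raw_small, raw_big = rel_spread(1 << 12, 1 << 12), rel_spread(1 << 16, 1 << 16)
# Averaged estimate: scatter shrinks as the number of segments grows.
avg_small, avg_big = rel_spread(1 << 12, 256), rel_spread(1 << 16, 256)
print(raw_small, raw_big, avg_small, avg_big)
```

The first two numbers stay near 1 while the last two fall roughly as one over the square root of the number of averaged segments, which is the consistency property the text describes.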
In this chapter, we have explored the mathematics of stochastic processes and the
methods for simulating them in Simulink, using some of the tools in the Signal Processing
Blockset. It should be evident from what we have done so far how Simulink’s functionality can go beyond simulating differential and difference equations. We have shown how
to interchange discrete time and continuous time systems and work in the frequency domain. We consider next how Simulink can model systems described by partial differential
equations.
5.4 Further Reading
There are many books on probability, statistics, and random processes. Among the best is
by Papoulis [33]. A recent book by Childers [7] has many MATLAB examples.
Norbert Wiener's book Cybernetics [49] has a wonderful description of the importance
of being versed in many disciplines. (For more on this point, see Chapter 10.)
He wrote: “Since Leibniz there has perhaps been no man who has had a full command
of all the intellectual activity of his day. Since that time, science has been increasingly the
task of specialists, in fields that show a tendency to grow progressively narrower. A century
ago, there may have been no Leibniz, but there was a Gauss, a Faraday, and a Darwin.
Today few scholars can call themselves mathematicians, physicists, or biologists without
restriction.
“A man may be a topologist, an acoustician, or a coleopterist. He will be filled with
the jargon of his field, and will know all its literature and all its ramifications, but, more
frequently than not, he will regard the next subject as something belonging to his colleague
three doors down the corridor, and will consider any interest in it on his own part as an
unwarrantable breach of privacy.”
For more on Wiener, visit Wikipedia at http://en.wikipedia.org/wiki/Cybernetics.
Both the Wiener process and the 1/f noise process are examples of a fractal. The
fern in NCM is another example. There is an iterative method for generating fractals and a
discussion of fractal dimension in Chapter 5 of Scheinerman [37]. A MATLAB code that
will create Sierpinski’s triangles is in the appendix of [33]; this example is fun to run, and
you might want to try creating a Simulink model to do the same thing.
Exercises
5.1 Modify the Rayleigh distribution Simulink model in Section 5.1.2 to generate the
random variable that is the phase angle of the reflected sine waves. The book by
Papoulis [33] has a derivation of the probability density function for this random
variable. Check to see if the histogram that you generate has the required form.
5.2 Verify the property that the random walk has zero mean and a variance of ks² (the
properties shown in Section 5.2.2).
5.3 Show that the covariance matrix differential equation

dP(t)/dt = AP(t) + P(t)Aᵀ + BSBᵀ

has the solution P(t) = Φ(t − t0)P(t0)Φ(t − t0)ᵀ + ∫_{t0}^{t} Φ(t − τ)BSBᵀΦ(t − τ)ᵀ dτ.
Use the hint given in Section 5.3.2 when you differentiate this solution.
5.4 We iterated the discrete covariance matrix equation one time and showed that the
covariance matrix of a discrete system xk+1 = Φ(Δt)xk + Γ(Δt)uk will be the same
as the covariance matrix of the continuous system if the discrete system noise input
uk has the covariance matrix E{uk ukᵀ} = Sdiscrete for all values of k, where Sdiscrete is
given implicitly by Γ(Δt) Sdiscrete Γ(Δt)ᵀ = ∫₀^Δt Φ(τ)BSBᵀΦ(τ)ᵀ dτ. Show that this
is true for all possible iterations.
5.5 If a real, stationary, stochastic process excites a linear system with impulse response
h(t), the output has the correlation function
R(τ ) = h(τ ) ∗ Ru (τ ) ∗ h(−τ ).
Use this fact to show that the power spectral density function of the output (the Fourier
transform of R(τ )) is
Sout (ω) = |H (iω)|2 Sin (ω).
5.6 Show that the state-space model in Section 5.3.4 is the representation of the transfer
function H(s) = ∏_{i=1}^{n} (s + √2·2^{i−1} ωLow)/(s + 2^{i−1} ωLow).
Chapter 6

Modeling a Partial Differential Equation in Simulink
As engineers design systems with more stringent requirements, it has become far more
common to find that the underlying dynamics of the system are partial differential equations.
Examples of this permeate the engineering design literature. For example, designers of
computer disk drives are always striving to store more bits. They do this in two major
ways: by making the number of bits per unit area on the surface as large as they can and by
increasing the rotational speed of the disk. The smaller the area on the disk that contains the
ones and zeros, the more accurate the positioning of the read/write head needs to be. At the
same time, the speed of rotation of the disk makes it possible to access the stored ones and
zeros faster, meaning that the read/write head needs to move faster, making the assembly
more prone to vibrations induced by the rapid accelerations. Thus, disk drive designers
have to design the read/write head positioning system to account for the following:
• flexible motions of the read/write heads, their mounts, and the disks themselves;
• aerodynamic induced vibrations as the heads move across the spinning disk;
• vibrations induced by thermal effects caused by uneven heating of the drive.
The models for these dynamics are specific partial differential equations, and when
they all must be included, they interact with each other.
In a similar vein, automatic flight control systems for advanced aircraft (particularly
aircraft that deliberately have unstable rotational dynamics) impose requirements that require the designer to include the effects of flexible motions and their interactions with the
aerodynamics. Designers of spacecraft control systems, particularly those with precision
pointing requirements, must model the vibrations of appendages that are induced by moving
parts on the vehicle and, in some cases the combined effects of these vibrations interacting
with (and creating) fluid sloshing in propellant and oxidizer tanks—once again, dynamics
that are described by specific partial differential equations.
In Numerical Computing with MATLAB, Cleve Moler showed methods for solving
PDEs using finite difference methods. Since Simulink has the ability to model and solve
ordinary differential equations, it is natural to ask whether or not a partial differential
equation may be solved using a method that converts the partial differential equation into a
set of coupled ordinary differential equations. The answer is a resounding yes.
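As a preview of how that conversion works, here is a sketch (in Python rather than Simulink; the grid and step sizes are arbitrary choices) that turns the one-dimensional heat equation u_t = u_xx into a set of coupled ordinary differential equations by the method of lines and checks the result against the exact single-mode solution:

```python
import numpy as np

# Method of lines for u_t = u_xx on [0, 1] with u = 0 at both ends:
# discretize x into interior points and integrate the coupled ODEs
# du_i/dt = (u_{i-1} - 2 u_i + u_{i+1}) / dx^2 with forward Euler.
M = 50                                  # number of interior grid points
dx = 1.0 / (M + 1)
x = np.linspace(dx, 1.0 - dx, M)
u = np.sin(np.pi * x)                   # initial condition: one sine mode

t_end, dt = 0.1, 0.2 * dx ** 2          # stable Euler step for the heat equation
steps = int(round(t_end / dt))
for _ in range(steps):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
    lap[0] = u[1] - 2 * u[0]            # boundary value u_0 = 0
    lap[-1] = u[-2] - 2 * u[-1]         # boundary value u_{M+1} = 0
    u = u + dt * lap / dx ** 2

# Exact solution of the PDE for this mode: sin(pi x) exp(-pi^2 t).
exact = np.sin(np.pi * x) * np.exp(-np.pi ** 2 * steps * dt)
print(np.max(np.abs(u - exact)))        # small discretization error
```

Each grid point becomes one ordinary differential equation, which is precisely the kind of coupled system Simulink integrates naturally.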
6.1 The Heat Equation: Partial Differential Equations in Simulink
In the next chapter, we will begin the design of a home heating controller that replaces a
thermostat, with the following benefits:
• better control of the temperature of the home,
• less energy use for the same or better comfort level,
• a significantly better feel for the residents.
The way one goes about achieving these objectives is with a thorough understanding
of the dynamics of heat flow from a home, so we need to model the home in detail. With this
in mind, let us build a two-room home heating model in Simulink and run it to understand
the issues.
Let the temperature of the house be T (x, y, z, t), where the three orthogonal spatial
directions x, y, z are relative to some coordinate system and the temperature is explicitly
a function of time because of the heat supplied by the heating system. The underlying
equation for computing the temperature is the heat equation
ρcₚ ∂T(x, y, z, t)/∂t = ∇ · ( k(x, y, z, t) ∇T(x, y, z, t) ) + Q(x, y, z, t).
The source of heat in the rooms is the convectors (radiators) that are located either along
the walls or in the floor, depending on the geometry. These heat sources are the heat input
Q(x, y, z, t).
6.1.1 Finite Dimensional Models
There are many simplifying assumptions that we can use to model the home heating system
using this equation. First, we assume that the walls, ceilings, floors, and glass areas have
constant heat conductivities (i.e., k will be a constant for each of the different types of
surface). Second, we can assume that each room in our house is a separate volumetric
“lumped” entity so that the heat stored in each room (the ρcₚ ∂T(x(t), t)/∂t term
in the heat equation)
is independent for each of the rooms. The last assumption we can make is that the heat
flow in and out of the rooms (through the walls, ceilings, etc.) will result in a linear thermal
gradient in the material.
We can also assume that the heat exchangers have some thermal capacity (i.e., they
can store the heat for some time). Each heat exchanger will transfer its heat to the room
with some thermal loss. This heat exchange occurs through boundaries of air that, while
not conducting (the heat exchange is actually through convection), are modeled assuming
a linear gradient in the air around the exchanger. We make the reasonable assumption
that the heat flowing out of the house does not change the outside air temperature (at
least over the time scale over which we are heating the house). We also assume that
the combustion process is a constant source of heat, with appropriate efficiencies for the
combustion; this means that each gallon of oil or 100,000 BTUs (“therms”) of gas burned
produces a constant amount of heat. Under these assumptions, the heat equation becomes a
set of interconnected “lumps” with the “lumps” consisting of the individual rooms and the
individual heat exchangers.
These assumptions—constant outside air temperatures, linear gradients in the walls
and around the heat exchangers, and constant heat flow from the combustion process into the
medium used for heat exchange—mean that the heat flow in steady state from the warmest
parts of the home to the cold outside is constant. This follows from the fact that with these
assumptions the term k(x, y, z, t) ∇T (x, y, z, t) is constant for each of the rooms.
We modeled a house in Chapter 2 (the model called Thermo_NCS), where we simply
provided the model as a first order differential equation for the entire home. Because the
house was a single entity, the temperature of each room in the house was the same. This is
clearly not the case in a typical home. Each room’s thermodynamics are interconnected but
also independent in the sense that the heat loss from infiltration, conduction, and radiation
depend only on the materials in the walls, the number of windows (architects use the term
fenestration) and doors, and the spaces that surround the room (basements, attics, other
rooms, sunspaces, etc.).
In the lumped parameter approximation, the individual room temperatures are the
same throughout the room’s volume. These assumptions also mean that the net heat flowing
into or out of a compartment (a volume of space that has constant thermal capacity and
uniform temperature) sums to zero. From the heat equation when the divergence of the
gradient is zero, there is no net increase in the temperature of the compartment from this
flow—and a linear gradient ensures that this is true. We will see that this assumption is
equivalent to the fact that, in an electrical circuit, currents flowing into and out of any node
must sum to zero.
Thus, for our two-room house, the heat flow equation breaks up into four separate ordinary differential equations: two for the room temperatures and two for the heat exchanger temperatures.
6.1.2 An Electrical Analogy of the Heat Equation
Once we have “lumped” the rooms and heaters into single entities and made the assumptions
on how the heat is flowing, the heat equation becomes analogous to an electrical circuit with
resistors, capacitors, current sources and voltage sources. The analogs of the different
heat-equation element values are as follows:
• Heat flow is analogous to current (and a current source is a heat source).
• The temperature of an element is analogous to the voltage in the circuit (and therefore
a voltage source is a constant temperature regardless of the heat flowing in or out).
• Thermal conductivity of the material is analogous to a resistor (the analogy is really
to the conductance, i.e., 1/R is the thermal conductance, which will be important
when we need to combine heat flow paths).
[Figure 6.1 circuit: the flame heat source "Qf" drives the node Tp through loss resistor RL1; RL2 and RL3 carry heat from Tp to the heat exchanger capacitors CHE1 (temperature TH1) and CHE2 (TH2); RHE1 and RHE2 connect the heat exchangers to the room capacitors CR1 (TR1) and CR2 (TR2); and each room connects to the ambient outdoor temperature source "Ta" through the parallel loss resistors Rwall, Rfl-ceil, Rconvect, and Rwindow.]
Figure 6.1. An electrical model of the thermodynamics of a house and its heating system.
• Thermal capacitance times the volume of the element, ρcp V , is analogous to the
electrical capacitance.
Using this analogy, we have the following:
• The individual rooms in our home can each be modeled as a single capacitor.
• The heat flow between rooms and the outside are modeled as current flowing through
resistors.
• The room’s heat exchangers are also modeled as capacitors.
• The heat losses from the heat source (the flame) to the heat exchangers can also be
modeled as resistors.
• The heat source (oil or gas being burned) is a current source.
• The outside temperature (constant) is a voltage source.
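To make the analogy concrete, the element values can be computed directly from R = L/(kA) for a conduction path and C = ρcₚV for a room. The sketch below uses illustrative assumed dimensions and material properties, not the book's model values:

```python
# Thermal "circuit element" values for one room (all numbers are
# illustrative assumptions, not taken from the book's model).

# Conduction resistance of a wall: R = L / (k * A), in K/W.
k_wall = 0.05      # W/(m K), insulated wall (assumed)
L_wall = 0.15      # m, wall thickness (assumed)
A_wall = 30.0      # m^2, wall area (assumed)
R_wall = L_wall / (k_wall * A_wall)

# Thermal capacitance of the room air: C = rho * c_p * V, in J/K.
rho_air = 1.2      # kg/m^3
cp_air = 1005.0    # J/(kg K)
V_room = 40.0      # m^3 (assumed)
C_room = rho_air * cp_air * V_room

# The product R*C is the time constant of this room "RC circuit".
tau_minutes = R_wall * C_room / 60.0
print(R_wall, C_room, tau_minutes)
```

With these assumed numbers the room behaves like an RC circuit with a time constant on the order of an hour, which is the right scale for home heating dynamics.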
Putting these together, we can model our two-room house as the electrical circuit in
Figure 6.1.
This model is easy to understand because it presents a (lumped) picture of the underlying dynamics. It should be obvious that the picture in this case is not the equations (in the
sense that a Simulink model explicitly represents the differential equations), but we can go
through each piece in this circuit (picture) to understand exactly what it implies in terms of
the thermodynamics.
Therefore, following Figure 6.1, we have the following:
• Each room and each heat exchanger is represented by a capacitor (this is the heat
capacity of the room or heat exchanger given by ρcₚV). For each room this is the
product of the air density, the specific heat of the air, and the volume of the room, and
for each heat exchanger, it is the product of the density of the heat exchange medium
(water or air), the specific heat of this material, and the volume of the heat exchanger.
• The heat flowing into or out of the room depends on the thermal conductivity, k, which
we convert into a thermal resistance using 1/k. Thus, the four resistors in parallel, at
the top of each of the room compartments, are the paths for the heat flow due to the
thermal losses from the walls, the ceilings, the convection of air from the outside to
the inside, and the windows. These conduct the heat (current is heat) from the room
to the fixed outside temperature which is the voltage source titled “Ambient Outdoor
Temp.”
• The heat flow from the heat exchangers to the rooms involves some reduction in the
temperature of the medium so the room does not get as hot as the heat exchangers
(the resistors RHE1,2).
• The heat from the heat source is not completely captured. (The process of transferring
the heat from the boiler to the heat exchanger involves losses that are modeled as
resistors RL1,2,3 .)
• The power of this electrical analog for the heat equation is that it is easy to modify
the model to make it more complex and include more effects. (We will do this in
Chapter 8.)
6.2 Converting the Finite Model into Equations for Simulation with Simulink
The circuit picture models the heat flow and the room temperatures, but it is not in the form of
differential equations. To convert the picture into a set of coupled differential equations we
need to use Kirchhoff’s current law that states that the sum of all of the currents entering any
node of the circuit must be zero (where the signs of the currents are determined by assigning
arbitrary inequalities to the various capacitor voltages). For this model, the temperatures for
each of the rooms and heat exchangers (the capacitor voltages) are the “nodes.” The external
sources and sinks of heat are the voltage and current sources that represent the “constant”
ambient temperatures and heat flow. (These are not constant over a long time interval, but
over the time interval that the heating system is operating they are very reasonably constant.)
The result of these manipulations on the heat equation is the equations used in the Simulink
model.
We have captured the salient features of the home for design of the heating system without explicitly solving the heat partial differential equation. The method we used
here is only one of the ways of converting the partial differential equation into a coupled set
of differential equations. They are all called “finite element models.” In this version, the
elements are large masses that have more or less constant temperatures (each room and each
heat exchanger).
For many reasons it would be nice to have a tool that would convert a picture like
this into the appropriate equations for use by Simulink. It also would be nice to maintain
the picture above in the model because it is a lot easier to modify the picture to change
the attributes of the house. Adding more rooms, heat exchangers, and intermediate losses
like that from the heat exchangers, etc., using the picture is a lot easier than working with
gradually more complex equations. This is true because the picture clearly shows what
interactions are taking place among the various components, and it is easy to cut and paste
to create additional components with a computer. If such a tool existed, it would allow the
pictorial representation of the circuit to be visually available in Simulink, just as the signal
flow is available. Such a tool exists. The SimPowerSystems Blockset from The MathWorks
would convert this picture into the differential equations that Simulink needs. We describe
this tool later, but for now let us work through the model above and get the differential
equations. (In Chapter 8, as we investigate how to use the SimPowerSystems tool, we will
create a multiroom model.)
6.2.1 Using Kirchhoff's Law to Get the Equations
Kirchhoff’s current law at each of the capacitance nodes in the model gives us the differential
equations for the simulation. Thus we have the following.
For Heat Exchanger 1:
\[ C_{HE1}\,\frac{dT_{HE1}}{dt} = -\Bigl(\frac{1}{R_{HE1}}+\frac{1}{R_{L2}}\Bigr)T_{HE1} + \frac{1}{R_{HE1}}\,T_{R1} + \frac{1}{R_{L2}}\,T_p . \]
For Room 1:
\[ C_{R1}\,\frac{dT_{R1}}{dt} = -\frac{1}{R_{eq1}}\,T_{R1} + \frac{1}{R_{HE1}}\,T_{HE1} + \frac{1}{R_{eq2}}\,T_a , \]
where
\[ \frac{1}{R_{eq1}} = \frac{1}{R_{wall1}} + \frac{1}{R_{fl\text{-}ceil1}} + \frac{1}{R_{convect1}} + \frac{1}{R_{window1}} + \frac{1}{R_{HE1}} , \]
\[ \frac{1}{R_{eq2}} = \frac{1}{R_{wall1}} + \frac{1}{R_{fl\text{-}ceil1}} + \frac{1}{R_{convect1}} + \frac{1}{R_{window1}} . \]
For Heat Exchanger 2:
\[ C_{HE2}\,\frac{dT_{HE2}}{dt} = -\Bigl(\frac{1}{R_{HE2}}+\frac{1}{R_{L3}}\Bigr)T_{HE2} + \frac{1}{R_{HE2}}\,T_{R2} + \frac{1}{R_{L3}}\,T_p . \]
6.2. Converting the Finite Model into Equations for Simulation with Simulink
For Room 2:
\[ C_{R2}\,\frac{dT_{R2}}{dt} = -\frac{1}{R_{eq3}}\,T_{R2} + \frac{1}{R_{HE2}}\,T_{HE2} + \frac{1}{R_{eq4}}\,T_a , \]
where
\[ \frac{1}{R_{eq3}} = \frac{1}{R_{wall2}} + \frac{1}{R_{fl\text{-}ceil2}} + \frac{1}{R_{convect2}} + \frac{1}{R_{window2}} + \frac{1}{R_{HE2}} , \]
\[ \frac{1}{R_{eq4}} = \frac{1}{R_{wall2}} + \frac{1}{R_{fl\text{-}ceil2}} + \frac{1}{R_{convect2}} + \frac{1}{R_{window2}} . \]
There is also an algebraic (nondifferential) equation for the temperature of the heat exchange medium, Tp, obtained by summing the heat flow into this node at the heat source.
For the heat flow to the heat exchangers (algebraic equation):
\[ T_p = \frac{Q_f + \dfrac{T_{HE1}}{R_{L2}} + \dfrac{T_{HE2}}{R_{L3}}}{\dfrac{1}{R_{L1}} + \dfrac{1}{R_{L2}} + \dfrac{1}{R_{L3}}} = R_{eq5}\Bigl(Q_f + \frac{T_{HE1}}{R_{L2}} + \frac{T_{HE2}}{R_{L3}}\Bigr) , \]
where
\[ \frac{1}{R_{eq5}} = \frac{1}{R_{L1}} + \frac{1}{R_{L2}} + \frac{1}{R_{L3}} . \]
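The elimination of Tp that follows can be checked numerically for arbitrary parameter values. This Python sketch uses made-up resistances and temperatures (not the book's values) and confirms that the Heat Exchanger 1 right-hand side with Tp explicit equals the substituted form:

```python
# Check of the Tp elimination with arbitrary made-up values.
RL1, RL2, RL3, RHE1 = 0.05, 0.06, 0.07, 0.2
THE1, THE2, TR1, Qf = 130.0, 125.0, 68.0, 22.0

Req5 = 1.0 / (1/RL1 + 1/RL2 + 1/RL3)
Tp = Req5 * (Qf + THE1/RL2 + THE2/RL3)

# Original form of the Heat Exchanger 1 right-hand side (Tp explicit)
rhs_original = -(1/RHE1 + 1/RL2)*THE1 + TR1/RHE1 + Tp/RL2

# Substituted form (Tp eliminated, coefficients collected)
rhs_substituted = (-(1/RHE1 + (1/RL2)*(1 - Req5/RL2))*THE1
                   + TR1/RHE1
                   + (Req5/(RL2*RL3))*THE2
                   + (Req5/RL2)*Qf)

assert abs(rhs_original - rhs_substituted) < 1e-9
```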
Substituting this algebraic result into the first and third equations above gives the final form of these two equations as
\[ C_{HE1}\,\frac{dT_{HE1}}{dt} = -\Bigl[\frac{1}{R_{HE1}} + \frac{1}{R_{L2}}\Bigl(1-\frac{R_{eq5}}{R_{L2}}\Bigr)\Bigr]T_{HE1} + \frac{1}{R_{HE1}}\,T_{R1} + \frac{R_{eq5}}{R_{L2}R_{L3}}\,T_{HE2} + \frac{R_{eq5}}{R_{L2}}\,Q_f \]
and
\[ C_{HE2}\,\frac{dT_{HE2}}{dt} = -\Bigl[\frac{1}{R_{HE2}} + \frac{1}{R_{L3}}\Bigl(1-\frac{R_{eq5}}{R_{L3}}\Bigr)\Bigr]T_{HE2} + \frac{1}{R_{HE2}}\,T_{R2} + \frac{R_{eq5}}{R_{L2}R_{L3}}\,T_{HE1} + \frac{R_{eq5}}{R_{L3}}\,Q_f . \]
These differential equations were relatively easy to write out because there are only four
differential equations (and one algebraic equation); however, if we were to try to model a
house with 15 rooms, the task would be quite difficult.
In the development of these equations, we have defined some equivalent resistors.
These equivalents are meaningful in terms of the heat losses for the rooms and the heat
exchangers. The combined loss for each room is the equivalent resistor that comes from
placing the resistors for the losses in parallel, so the net loss is the combined effect of the
heat flowing through the walls, floors, ceilings, doors, and windows and the conductive path
of the heat exchangers. In practice, these resistances account for the convective losses that
come from air infiltration into the rooms. Since the air infiltration depends on the external
winds, these added terms include the wind speed. We have ignored this effect in our model,
but it is important enough that we will revisit the model and add this effect in Chapter 8.
6.2.2 The State-Space Model
Now that we have the equations to model the house heating system, we will create a state-space model for use in Simulink. Assume that the voltage (temperature) of each capacitor is a state in a four-state model (one differential equation, or state, for each temperature). The equations above then become
\[ \mathbf{C}\,\frac{d}{dt}\begin{bmatrix} T_{R1} \\ T_{R2} \\ T_{HE1} \\ T_{HE2} \end{bmatrix} = \mathbf{G}\begin{bmatrix} T_{R1} \\ T_{R2} \\ T_{HE1} \\ T_{HE2} \end{bmatrix} + \mathbf{B}_1\,T_a + \mathbf{B}_2\,Q_f . \]
Let the state vector be x(t), and we can invert the matrix C to give the state-space model,
\[ \frac{d}{dt}\,x(t) = \mathbf{C}^{-1}\mathbf{G}\,x(t) + \mathbf{C}^{-1}\mathbf{B}_1\,T_a + \mathbf{C}^{-1}\mathbf{B}_2\,Q_f . \]
The matrix C is diagonal so its inverse is simply the reciprocal of its elements, and therefore
the final form of the state equations has the following matrices:
\[ \mathbf{C}^{-1}\mathbf{G} = \begin{bmatrix}
-\dfrac{1}{C_{R1}R_{eq1}} & 0 & \dfrac{1}{C_{R1}R_{HE1}} & 0 \\[2mm]
0 & -\dfrac{1}{C_{R2}R_{eq3}} & 0 & \dfrac{1}{C_{R2}R_{HE2}} \\[2mm]
\dfrac{1}{C_{HE1}R_{HE1}} & 0 & -\dfrac{1}{C_{HE1}}\Bigl[\dfrac{1}{R_{HE1}} + \dfrac{1}{R_{L2}}\Bigl(1-\dfrac{R_{eq5}}{R_{L2}}\Bigr)\Bigr] & \dfrac{R_{eq5}}{C_{HE1}R_{L2}R_{L3}} \\[2mm]
0 & \dfrac{1}{C_{HE2}R_{HE2}} & \dfrac{R_{eq5}}{C_{HE2}R_{L2}R_{L3}} & -\dfrac{1}{C_{HE2}}\Bigl[\dfrac{1}{R_{HE2}} + \dfrac{1}{R_{L3}}\Bigl(1-\dfrac{R_{eq5}}{R_{L3}}\Bigr)\Bigr]
\end{bmatrix} , \]
\[ \mathbf{C}^{-1}\mathbf{B}_1 = \begin{bmatrix} \dfrac{1}{C_{R1}R_{eq2}} \\[2mm] \dfrac{1}{C_{R2}R_{eq4}} \\[2mm] 0 \\ 0 \end{bmatrix} , \qquad \mathbf{C}^{-1}\mathbf{B}_2 = \begin{bmatrix} 0 \\ 0 \\[1mm] \dfrac{R_{eq5}}{C_{HE1}R_{L2}} \\[2mm] \dfrac{R_{eq5}}{C_{HE2}R_{L3}} \end{bmatrix} . \]
This set of equations is the basis for the Simulink model of the heating system. We can run
the model to learn what the temperatures do when the heating plant starts, and when the
heating plant is off. To complete the model, we need numeric values for the parameters.
To determine the parameters we need to define the house. We assume that room 1
is a rectangular solid with dimensions of 20 × 30 × 8 ft; thus the areas of the floor and
ceiling are 600 square feet. Room 1 has four windows and one door, with a total area of
72 square ft. Room 2 is also a rectangular solid with a size of 20 × 40 × 8 ft. We assume
that the two rooms are adjacent along the 20-foot dimension and that the heat flow through
this wall is negligible (and we will ignore it). Room 2 has eight windows and two doors,
with a combined area of 180 square ft. The area of the exposed walls in Room 1 is therefore
the total surface area minus the glass and door areas, which is 568 square feet. Similarly,
Room 2 has an exposed surface (minus the glass and doors) of 620 square ft.
The floors and the ceilings of both rooms have 12 inches of fiberglass insulation. (This insulation has an equivalent R-value, including the infiltration loss, of 30.) The
thermal conductance is the reciprocal of the resistance, so an R of 30 translates into a thermal conductance of 1/30 BTU/hour per square foot of floor and ceiling surface area per degree F
temperature difference between the room and the outside. The walls of each room also have
been insulated with fiberglass with the equivalent R-value of 15. (Again this represents a
heat conductance of 1/15 BTU/hour per square foot of insulated wall surface area and per
degree F.) The last equivalent thermal conductance value we need is that of the windows
and doors. With double panes of glass, most modern windows (and doors) have equivalent
R-values of 3, which gives a thermal conductance of 1/3 (same units as above). In all of
these numbers we assume that the convection has been included in the equivalent R so the
explicit term in the room equivalent resistances for the convection is not used (however, we
will investigate the effect of added convection losses using this term). For the calculation
of all of the thermal resistance values above, the values are per hour, but the simulation will
be in seconds, so the data needs to be converted.
The heating system uses water as the heat exchange medium. The thermal conductance (one over the resistance) that couples the heat exchangers into the rooms is 0.01 BTU per
hour per linear foot of convector per degree F temperature difference between the rooms
and the heat exchangers. We assume that Room 1 has 25 ft of heat exchangers, and Room 2
has 30 ft. We also assume that the losses in the heat exchange process are 0.05.
The specific heat of air is 0.17 BTU per pound per degree F, and the density of air is
0.0763 pounds per cubic foot.
Combining these quantities together, we get the following values for the components
in the state-space model:
The thermal capacitance of Room 1 is the mass of the air in the room times the
specific heat of air. The room volume is 4800 cubic ft, so the mass of air is 4800 × 0.0763 =
366.31 pounds, and the capacitance is 0.17 ∗ 366.31 = 62.27 BTU per deg F. Similarly,
the capacitance of Room 2 is 83.03 BTU per deg F. For the heat exchangers, we assume
that the volume of water in them is 1 cubic ft. Since the specific heat of water is 1, the
thermal capacity of each of the heat exchangers is therefore 62.5 BTU per deg F. Thus the
capacitances used in the model (in BTU/deg F) are
\[ C_{R1} = 62.27, \qquad C_{R2} = 83.03, \qquad C_{HE1} = C_{HE2} = 62.5 . \]
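These capacitances follow directly from the dimensions and material constants quoted above; a quick check in Python (using the text's density and specific heat for air, and 62.5 pounds for one cubic foot of water) reproduces them:

```python
# Thermal capacitances from the room dimensions and air properties in the text.
rho_air = 0.0763   # density of air, lb per cubic foot (from the text)
cp_air = 0.17      # specific heat of air, BTU per lb per deg F (from the text)

vol_R1 = 20 * 30 * 8           # Room 1 volume, cubic feet
vol_R2 = 20 * 40 * 8           # Room 2 volume, cubic feet

CR1 = cp_air * rho_air * vol_R1    # mass of air times specific heat
CR2 = cp_air * rho_air * vol_R2
CHE = 1.0 * 62.5                   # 1 cubic ft of water (specific heat 1)

assert abs(CR1 - 62.27) < 0.1
assert abs(CR2 - 83.03) < 0.1
```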
The thermal resistances for the insulation in the various parts of the house are not the
thermal resistance numbers in the differential equations (the dimensions of the various
quantities show this). To get these numbers we need to multiply the conductance numbers
for the various surfaces by their areas. Thus, we have (before the conversion from hours to
seconds)
\[ R_{HE1} = R_{HE2} = 0.01 \cdot 20 = 0.2, \qquad R_{L1} = R_{L2} = R_{L3} = 0.05 , \]
\[ R_{wall1} = \frac{15}{568} = 0.0264, \qquad R_{fl\text{-}ceil1} = 2\cdot\frac{30}{600} = 0.1, \qquad R_{window1} = \frac{3}{72} = 0.0417 , \]
\[ R_{wall2} = \frac{15}{620} = 0.0242, \qquad R_{fl\text{-}ceil2} = 2\cdot\frac{30}{800} = 0.075, \qquad R_{window2} = \frac{3}{180} = 0.0167 . \]
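Although the book's TwoRoomHouse.m M-file is not reproduced here, the state-space matrices can be assembled from these numbers. This Python sketch (our variable names, convection terms omitted as in the text, before the hours-to-seconds conversion) builds A = C⁻¹G and the two input vectors and checks that the resulting passive RC network is stable:

```python
import numpy as np

# Assemble the state-space matrices from the numbers above. Variable names
# are ours, not necessarily those in the book's M-file.
CR1, CR2, CHE1, CHE2 = 62.27, 83.03, 62.5, 62.5
RHE1 = RHE2 = 0.2
RL1 = RL2 = RL3 = 0.05
Rwall1, Rwall2 = 0.0264, 0.0242
Rfc1, Rfc2 = 0.1, 0.075
Rwin1, Rwin2 = 0.0417, 0.0167

# Equivalent resistances (convection terms omitted, as in the text)
Req2 = 1.0 / (1/Rwall1 + 1/Rfc1 + 1/Rwin1)
Req1 = 1.0 / (1/Req2 + 1/RHE1)
Req4 = 1.0 / (1/Rwall2 + 1/Rfc2 + 1/Rwin2)
Req3 = 1.0 / (1/Req4 + 1/RHE2)
Req5 = 1.0 / (1/RL1 + 1/RL2 + 1/RL3)

A = np.array([
    [-1/(CR1*Req1), 0.0, 1/(CR1*RHE1), 0.0],
    [0.0, -1/(CR2*Req3), 0.0, 1/(CR2*RHE2)],
    [1/(CHE1*RHE1), 0.0,
     -(1/RHE1 + (1/RL2)*(1 - Req5/RL2))/CHE1, Req5/(CHE1*RL2*RL3)],
    [0.0, 1/(CHE2*RHE2),
     Req5/(CHE2*RL2*RL3), -(1/RHE2 + (1/RL3)*(1 - Req5/RL3))/CHE2],
])
B1 = np.array([1/(CR1*Req2), 1/(CR2*Req4), 0.0, 0.0])
B2 = np.array([0.0, 0.0, Req5/(CHE1*RL2), Req5/(CHE2*RL3)])

# A passive RC network must be stable: all eigenvalues in the left half-plane
assert np.real(np.linalg.eigvals(A)).max() < 0
```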
The Simulink model we created once again uses the state-space model explicitly, and it uses the vectorizing capability of Simulink. The model resulting from this exercise is extremely simple; it is in the figure below.

[Figure 6.2. Simulink model for the two-room house dynamics. The diagram contains the A matrix from the M-file, a Matrix Multiply block, an Integrator with initial temperatures [50 50 120 120]', the zone heat-input vectors B1 and B2, manual switches labeled "Furnace On or Off" and "Heat Controller On or Off," a Saturation block (heat command between 0 and 1), and a scope that plots the four temperatures (R1, R2, HE1, HE2).]

This model will work no matter how many rooms and heat exchangers we model; all you need to do is change the matrix values. The data are all in
an M-file called TwoRoomHouse.m that is in the NCS library. This M-file runs when the
model opens because of the callback from the “Model Properties” dialog.
At this point, you might try creating this model yourself. If you look back at the
state-space model that was developed in Section 2.1.1, this Simulink model will be similar.
(The state here is of dimension 4 and in the earlier model it was 2, but this does not matter
since Simulink will use the matrices provided to get the dimensions.) Remember to put the
initial conditions into the integrator as a 4-vector.
Our version of the model is shown in Figure 6.2. (It is in the NCS library and is called
TwoRoomHeatingSystem.)
This model has been set up to run the M-file TwoRoomHouse when it starts. This
populates all of the matrices in the model with the correct data. Two Simulink “manual
switches” are in the model. Remember that these switches change state whenever you
double click on the icon. We use them to investigate the open loop (no control) attributes of
the house temperatures. The first switch (at the top left of the diagram) is set up when the
model opens to run the simulation with the heater off. The temperatures that result from this
analysis are in Figure 6.3. (The values of the four states appear on the same graph.) The second figure, Figure 6.4, shows what happens when the heater is turned on (by double clicking the switch at the top left) and left on (the curves are in the same order as in the first figure). The
simulations are for 3 hours (10,800 sec). The outside temperature is 10 deg F, and the heat
from the furnace, Qf , is 22.22 BTU/sec (or 80,000 BTU per hour, which is the heat from a
gallon of fuel oil when burned in one hour with about 80% efficiency). These numbers are
very typical for a reasonably well insulated house. (Remember that this simulation uses an
outside temperature of 10 deg F.)
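The qualitative shape of these open loop responses can be seen in a one-room reduction of the model: a single lumped capacitance C with one equivalent resistance Req obeys C dT/dt = -(T - Ta)/Req + Qf, whose solution decays exponentially toward Tinf = Ta + Req*Qf. The numbers in this Python sketch are illustrative round values, not the book's parameters:

```python
import math

# One-room lumped reduction: C dT/dt = -(T - Ta)/Req + Qf has the closed-form
# solution T(t) = Tinf + (T0 - Tinf) * exp(-t / (C * Req)), Tinf = Ta + Req*Qf.
# All numbers below are illustrative, not the book's parameters.
C, Req = 60.0, 1.0          # thermal capacitance and equivalent resistance
Ta, T0 = 10.0, 70.0         # outside temperature and initial room temperature

def temp(t, Qf):
    Tinf = Ta + Req * Qf    # steady-state temperature for a constant heat input
    return Tinf + (T0 - Tinf) * math.exp(-t / (C * Req))

cooled = temp(5 * C * Req, 0.0)     # furnace off: decays toward Ta
heated = temp(5 * C * Req, 100.0)   # furnace on: rises toward Ta + Req*Qf = 110
```

After five time constants the furnace-off case has essentially reached the outside temperature and the furnace-on case its elevated steady state, which is the behavior Figures 6.3 and 6.4 show for the full four-state model.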
Double click on the second switch in the diagram, and rerun the simulation. The
resulting temperatures are shown in Figure 6.5. We will reserve comment on these results
[Plot: the four temperatures (Room 1, Room 2, Heat Exchanger 1, Heat Exchanger 2) versus time in seconds with the heat off.]
Figure 6.3. Simulation results when the heat is off.
[Plot: the same four temperatures versus time in seconds with the furnace running continuously.]
Figure 6.4. Simulation results when the heat is on continuously.
[Plot: the four temperatures versus time in seconds under PID control.]
Figure 6.5. Using PID control to maintain constant room temperatures (70 deg F).
until the next chapter. For now let us assume that a thermodynamics design team has
just performed the analysis of the house heat losses and developed the heating controller.
The next step in the process of designing a controller will be to put this heating system
model together with a model that has the controller logic. This logic will use a tool that
allows complex logic with a signal flow graph that is similar to Simulink (i.e., with a visual
programming environment that is “tuned” to the needs of logic). The tool is Stateflow,
which is the subject of the next chapter. We will revisit this model and create a new type of
home heating controller, one that will provide heating that is more comfortable. The system
will use the PID control system that we developed here.
6.3 Partial Differential Equations for Vibration
The techniques used in developing the finite dimensional model for the heat equation apply to
other types of equations (such as the partial differential equation that describes the vibration
of the read/write head in a disk drive). The idea is the same; the difference lies in the order
of the time derivatives. In the case of the heat equation (which, because of the electric
analog we showed, also defines electric fields), the differential equations that result are
always first order. This class of equations (called parabolic equations) also includes any
type of diffusion. When the time derivative is zero (i.e., the solution in steady state is
desired), they are called elliptic equations. The equations that describe vibration are more
complex in that the underlying ordinary differential equations that result are second order
(these equations are hyperbolic). The method for creating the finite element model follows
the above method; we assume a particular “shape” for the spatial part of the vibration, and
these shapes break the total motion into a large number of finite elements. In the process
of developing these elements, we create masses and inertias that interact through springs
(and damping elements) so the resulting equations are second order in time. The state-space
models that result are then a set of coupled second order differential equations that describe
the oscillations. This topic can be the subject of an entire book by itself, and if you are
interested in learning more, see [6], [21], and [28]. We will also look at an example of a
vibrating string in Chapter 8.
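As a minimal illustration of how vibration problems lead to second-order systems, this Python sketch (with illustrative values for the wave speed and element count) builds the finite-difference stiffness matrix for a fixed-fixed string and recovers its natural frequencies; the lowest one should approximate the continuum fundamental pi*c for a unit-length string:

```python
import math
import numpy as np

# Method-of-lines sketch for a vibrating string: w_tt = c^2 * w_xx with
# fixed ends becomes the second-order system w'' = -K w, where K is the
# scaled second-difference matrix. The values of c and N are illustrative.
c, N = 1.0, 10
dx = 1.0 / (N + 1)
K = (c / dx)**2 * (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

# Natural frequencies are the square roots of K's eigenvalues; the lowest
# should approximate pi * c, the fundamental of a unit-length string.
freqs = np.sqrt(np.sort(np.linalg.eigvalsh(K)))
```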
6.4 Further Reading
The modeling of thermal systems using electrical analogues is pervasive. A recent special
issue of the Proceedings of the IEEE describes this modeling approach for the calculation of
the temperature of a chip. ([34] is one particular paper that discusses the thermal modeling
of a VLSI chip.) An online PDF version of a book on heat transfer also is available [25].
The use of equivalent thermal resistance to model both the heat conduction and convection is pervasive but prone to error. The equivalent thermal resistance is good for a
single convection rate. The convective losses depend on the pressure difference between
the interior spaces and the exterior. These are a function of wind speed, so the greater the
wind speed, the larger the heat loss due to convection.
It is typical for oil companies to calculate oil consumption using a number called
degree days. This number is the accumulated sum of the difference between 65 deg F
and the average outdoor temperature over some period. (Generally, oil companies use the
number to determine when to deliver oil, so the period is approximately a month). You
can compute this number yourself, and check your oil or gas consumption per degree day.
You will soon see that your consumption per degree day is not constant. However, if you
use the average wind-chill–corrected temperature (which captures the convective loss), you
should see a very good match. In the model we have created, you can change the thermal
resistances to account for the effect of the wind by lowering the resistance by some amount.
We ask you to work this out in Exercise 6.2.
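The degree-day calculation described above is simple to sketch; the daily average temperatures in this Python fragment are made up for illustration:

```python
# Degree-day sketch: accumulate max(0, 65 - daily average temperature)
# over the period. The daily temperatures below are made up.
daily_avg_temps = [30, 42, 55, 68, 20, 65]

degree_days = sum(max(0.0, 65.0 - t) for t in daily_avg_temps)
# 35 + 23 + 10 + 0 + 45 + 0 = 113
```

Dividing your fuel consumption for the period by this number gives the consumption per degree day discussed in the text.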
Exercises
6.1 The two-room house model with the 24-hour external temperature variation can be used to calculate the homeowner's heating bill. Add a set of blocks to the model that accumulates the total heat used over the 24-hour period. Then modify the model to use the thermostat that we introduced in Chapter 2. Compare the heating costs for the two methods of heating. Is there any difference? If there is, why is this so?
6.2 Develop a model for the heating system that has a thermal loss for each room that depends on the external wind speed. Modify the resistances Req1, Req2, Req3, and Req4 (remember that they each contain an equivalent resistance for convective losses) so they are of the form
\[ R_{eq1}(t) = R_{eq1}(0) + \Delta R_{eq1}\,w(t) , \]
where w(t) is the wind speed. Make the added term \( \Delta R_{eq1}\,w(t) \) equal to 10% of the nominal resistance when the wind speed is 10 mph. That means that the modified thermal resistances are
\[ R_{eq1}(t) = R_{eq1}(0)\Bigl(1 + 0.1\,\frac{w(t)}{10}\Bigr) . \]
Now investigate the effect of the wind on the heat loss. What can you conclude about
the importance of reducing the convective losses?
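A sketch of the modified resistance in this exercise, taking the exercise's formula at face value (the function name is ours):

```python
# Wind-modified equivalent resistance, following the exercise's form
# Req(t) = Req(0) * (1 + 0.1 * w(t) / 10). Function name is hypothetical.
def Req_wind(Req0, w):
    """Equivalent resistance for nominal value Req0 and wind speed w (mph)."""
    return Req0 * (1.0 + 0.1 * w / 10.0)
```

At 10 mph the wind term contributes exactly 10% of the nominal resistance, as the exercise specifies.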
Chapter 7

Stateflow: A Tool for Creating and Coding State Diagrams, Complex Logic, Event Driven Actions, and Finite State Machines
Simulation of systems that have logical operations or that contain actions triggered by
external asynchronous actions is difficult. For example, if we are simulating the action of
a person throwing a switch, or if we are simulating the action that causes a computer to
process an interrupt, the simulation needs to process the event at the time it occurs. When we
create such a model using Simulink, the blocks and their connections do not always provide clear indications of the data flow, making the diagrams difficult to read and understand. The
first attempt at creating a visual representation of complex logic flows was David Harel’s
introduction, in the 1980s, of a visual tool to represent all of the possible states that a
complex reactive system might have. He called the idea a “statechart” [18]. The major
innovation that a statechart provided was “And” states, where parts of the diagram could be
acting in parallel with other parts. Until the introduction of this idea, the modeling of finite
state machines relied on the ideas of Mealy and Moore from the 1950s. (Thus, in the Harel
approach, states could be active at the same time—one and two are both active—whereas
in the Mealy and Moore approach, all states were “Or” states.)
In 1996, The MathWorks introduced a new tool called Stateflow, which combines many modeling and programming ideas into a single, easily understood tool. The
name hints that the tool allows modeling automata using the statechart approach, while
also allowing the picture to represent signal flow. Designing a system with complex logic
and state transitions that depend upon events (either external or internal) using the Stateflow
paradigm is remarkably easy. It also has the major advantage that it integrates with Simulink.
This last feature makes it possible not only to design the logic for an automaton but also to model its interaction with the environment. The result is insight into how well the automaton design works; that is, it allows you to verify that the design meets the specification
as it operates in a simulation of the real world. In the early stages of any design, the
ability to make decisions about the structure of the automata can help clarify and tighten
the requirements as the specification for the design also evolves. The result becomes an
“executable specification,” which we will have more to say about in Chapter 9.
7.1 Properties of Stateflow: Building a Simple Model
We have used the concept of "state" extensively in this book. Stateflow expands the concept of a dynamic state to systems that do not require differential or difference equations.
The basic building blocks for creating a Stateflow machine are the eight
icons at the left. Starting at the top, we have the following:
• The first icon is used to create a state in the Stateflow chart.
• The second is the icon that denotes a state that should remember its history.
• The third icon is used to denote the initialization of the Stateflow chart.
• The fourth icon is used to make signal flow connections. (Typically the
connections are like resting points prior to single or multiple decisions.)
We will explore each of these in turn.
The last four icons are truth tables, functions, embedded MATLAB code,
and the “box.” We will consider only the truth table object and the embedded
MATLAB object in this chapter. The box icon allows the user to change the
shape of the State symbol to a square (box) and treat the result as a subsystem.
The easiest way to get familiar with Stateflow is to build a simple model.
Therefore, we will create a model that uses Stateflow to track the position of a
simple on-off switch. Clearly such a switch has only two (exclusive or) states.
Therefore, the diagram will be quite simple. To create the Stateflow model we
need to open Simulink and create a new Simulink model using the blank paper
icon in the menu. Then, in the Simulink library browser, open Stateflow by clicking on
its icon, and after it opens, drag a new chart into the Simulink model. These steps should
create a Simulink model that looks like the figure below.
Double click the Chart icon, and the Stateflow window will open. This window contains the icons above on the left hand side. This window is where we construct the actual Stateflow chart. We have already established that we need two states in this chart, so let us start by dragging two copies of the state icon (at the top of the icon chain) into the diagram. After you place each state icon, you will see a flashing insertion point at the top left side of the icon. If you immediately start typing, the name you assign to the state appears in the state icon. Let us call the first state "Off" and the second "On." If you do not type these names immediately, the icon displays with a question mark where the name should go. To add the name, simply click the question mark and type the name.
With the states in place, we need to specify the transitions. If you place the mouse
cursor over any of the lines that specify the boundary of the state (any of the straight lines
that bound the state will do), you will see the cursor change from an arrow to a plus sign.
When this happens, left click and drag the mouse to create a connection line. This line is
actually a Bezier curve that will leave the state perpendicular to the boundary, and when
you make the connection to the terminating state, it will be perpendicular to the bounding
lines of that state. You need to drag two lines, one starting at the On state and terminating
at the Off state and the second going the opposite direction. When these lines are in your
diagram, click in turn on each of them. You will see a question mark on the line that will
allow you to specify what causes the transition. We will set up an event that we will call
“Switch” that will cause the transition next, so for now all you need to do is click on each
of the transitions and type the word Switch in place of the question marks.
As a final step, we need to specify where in the diagram we want the state chart to start. Select the third icon in the menu to do this. When you drag the icon into the chart, a transition arrow will appear. You want to attach this to the state we have called Off. This will cause the Stateflow chart to start in the Off state. When you have completed all of these steps, the diagram will look like the picture at the right.
We cannot execute the diagram yet because we have not specified what the event "Switch" means, nor have we created anything in Simulink that will simulate the throwing of a switch. So let us specify what the event is. At the top of the chart is a menu with the traditional File, Edit, View, Help, and Simulation options that are in Simulink. In addition, there are two options called Tools and Add. Click the word "Add" in the menu and a list will appear; from this list select "Event," and in the submenu of this list we need to select "Input from Simulink." When you do this, the dialog box shown at the left will open. Enter the name of the event ("Switch") in the box titled "Name" (replacing the default name "event1"), and where it says "Trigger," select "Either" from the pull-down menu. Finally, click Apply. After you complete this, look at the Simulink model that contains the Chart; you will see that the Chart icon has a Trigger input at
Figure 7.1. First Stateflow diagram (in the chart) uses a manual switch to create
the event.
the top center of the chart and the icon inside the chart indicates that both rising and falling
events will cause the event to trigger. The trigger icon is the same as that used for triggering
a Simulink subsystem. This process has created not only the variable “Switch” in Stateflow
but also a data type (in the sense of a strongly typed programming language). All variables
in Stateflow have to be “typed” using this process (i.e., by using the Data or related dialog).
We now need to build a switch in Simulink that causes the trigger event “Switch.”
In the Signal Routing library, you will find a Manual Switch icon that we can use.
Drag it into the model and connect the switch arm to the trigger input of the chart. Last,
from the Sources library drag a Constant block into the model and duplicate it. Set one of
the constants to 1 (leave the other at the 0 it came in with), and connect them as shown in
Figure 7.1. The model is now complete.
The model has only discrete states, so you can go into the “Configuration Parameters”
menu (under Simulation or you can use Ctrl+e) and change the Solver to discrete (no
continuous states), and while you are at it, change the stop time to “inf” (the MATLAB
name for infinity). If you now run the model and double click the switch icon while looking
at the Chart, you will see the transitions from state to state take place each time the switch is
thrown. (An active state will be highlighted in blue, and each time a transition takes place,
it also will be highlighted in blue.)
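The behavior of this two-state chart can be mimicked in a few lines of ordinary code. This Python sketch (our own rendering, not Stateflow-generated code) starts in Off via the default transition and toggles on every Switch event:

```python
# A minimal sketch of the two-state chart: an Off/On machine that toggles
# on every "Switch" event, starting in "Off" (the default transition).
class SwitchChart:
    def __init__(self):
        self.state = "Off"              # default transition target

    def event(self, name):
        if name == "Switch":            # either edge triggers the transition
            self.state = "On" if self.state == "Off" else "Off"

chart = SwitchChart()
chart.event("Switch")   # Off -> On
chart.event("Switch")   # On -> Off
```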
7.1.1 Stateflow Semantics
The Stateflow chart that we have created so far is extremely simple. It should have given
you an idea of how the chart works. (In the language of automata theory, this knowledge is
called semantics.) The chart we created in the introduction does not accomplish anything.
In order to do something, we need to export an action from the chart. However, before we
do that, let us describe some of the semantics of Stateflow.
Look at the diagram in Figure 7.2. It depicts most of the semantics of Stateflow.
Semantics are associated with each of the Stateflow objects as follows.
• States:
– States can be hierarchical. A state that contains other states is called a super
state (examples: SuperState_1 and SuperState_2 in the figure).
Figure 7.2. A Stateflow diagram that contains most of the stateflow semantics.
– States can cause an action when they are entered or exited, during their execution,
or when an event occurs while the state is active. (Examples: State SS1.1 has
entry action SS1.1_A1, during action SS1.1_A2, and exit action SS1.1_A3.)
Note that the names of the actions can be anything you choose, and the semantics
allow you to abbreviate the names to en, du, and ex, respectively.
– States are, by default, exclusive “Or” states. However, you can change them
to parallel (“And”) states by highlighting the states that you want to be parallel
and then right clicking on the super state that contains them and selecting the
Parallel (AND) option under Decomposition in the menu that opens. Parallel
states appear in the diagram with dotted boundary lines (SS2.1 and SS2.2, for
example).
– States may be grouped together (as a graphical object in the diagram) using the
“Group state” icon in the menu at the top of the chart. (It is the tenth icon.)
You can also double click on the super state that contains them. (Example:
SuperState_2 in the diagram is “grouped,” as can be seen from the wide border
and the shading.)
– Super states or combinations of states may be made into a subsystem (or a subchart in the semantics) by selecting this option from the “Make Contents” menu
after you right click on the combination. (Example: The SuperState_3 in the
diagram is a subchart. It has internal structure that appears when you double
click on it, just as a subsystem opens in a new window in Simulink.)
• Transitions, Conditions, and History:
– A transition is a directed arrow in the diagram that implies the action of leaving
one state (the source) and moving to another (the destination).
– A default transition is the method Stateflow uses to determine which of the exclusive (“Or”) states are to be active when there is ambiguity (for example, when
the chart is started). These transitions may also occur with conditions attached
or because of events. The initialization of SuperState_1 and the state SS1.1 in
this super state are both defaults that occur on the first entry into the chart.
– The History (the encircled H in the diagram) overrides the default. In this example, the history forces Stateflow to remember the substate that was active when
the exit action from SuperState_1 occurs. (In the example, the transition from
SuperState_1 is to SuperState_2, and it occurs regardless of which substate in
SuperState_1 is active.) The subsequent return to SuperState_1 will go back
to the last active substate because of the history. Otherwise, Stateflow uses the
default transition.
– A transition may occur because of an external event (as we saw in the previous
section) or because of an event that is internal to the chart. Events are nongraphical objects; they do not have a pictorial representation, but they are shown in a dialog that is opened using the “Explore” option under the Tools menu in the chart.
– A transition can also use a condition. Conditions can be one or more Simulink
data variables. (Data variables are also nongraphical, and you can view them using the Explorer option.) The semantics for conditional operations on Simulink
data is to enclose the operation in square brackets. (Example: in the chart above,
the conditions are Condition_1, Condition_2, and Condition_3.) Thus, the condition [t > 10] causes a transition when “t” (data from Simulink) is greater than
10; note that this means that when the Stateflow model is C-coded, “t” will be
input data.
– We can also have actions associated with a transition. Actions can occur in two different ways. The first is during a transition, when the condition evaluated is true (before the transition takes place). The second is when the transition actually takes place. The first action is denoted by putting the action inside of curly brackets { and }. The second is denoted using a diagonal slash /. (Example: [Condition_3] {CA_3} / TA3 in the diagram above denotes that Condition_3 is being tested for the transition; if it is true, the conditional action CA_3 will be executed, the transition will then occur, and the transition action TA3 will be performed.)
• Connective Junctions:
– A connective junction is a place that allows a transition to terminate while a
decision is made. (It is denoted by a circle; transitions can be made to or from
7.1. Properties of Stateflow: Building a Simple Model
this circle.) This device is the “flow” part of Stateflow. It allows pictorial representations of all kinds of programming decisions and as such greatly contributes
to the visual programming approach that Stateflow embodies. As an example, in
the chart above, there is an if, elseif, and else implied by the connective junction
with the three transitions labeled 1, 2, and 3. The code that would be created
from this is as follows: If Condition_1 then, if Condition_2 cause action CA_2,
elseif Condition_3 cause action CA_3 and make the transition to state SS1.2
and cause transition action TA_3, else transition back to state SS1.1. Notice
that in this example, the diagram has three lines with six clearly visible actions,
whereas the verbal description contains 30 words, each of which must appear in
precisely the stated order to produce the proper actions. (The code that it implies also
is verbose.) As we develop some more complex Stateflow examples, the power
of this visual programming environment will become even more evident.
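The if/elseif/else implied by the connective junction, together with the conditional-action and transition-action ordering described earlier, can be sketched in plain Python. (This is illustrative pseudocode for the chart fragment, not code generated by Stateflow; the action strings are stand-ins for the actions in the diagram.)

```python
def junction(cond_1, cond_2, cond_3):
    """Sketch of the connective-junction logic in the example chart:
    if Condition_1, then if Condition_2 perform CA_2; elif Condition_3
    perform CA_3, transition to SS1.2, and perform transition action
    TA_3; else transition back to SS1.1."""
    trace = []
    if cond_1:
        if cond_2:
            trace.append("CA_2")                 # conditional action
        elif cond_3:
            trace.append("CA_3")                 # conditional action runs first
            trace.append("transition to SS1.2")  # then the transition is taken
            trace.append("TA_3")                 # then the transition action
        else:
            trace.append("transition to SS1.1")
    return trace

print(junction(True, False, True))
# ['CA_3', 'transition to SS1.2', 'TA_3']
```

The trace makes the ordering explicit: the conditional action fires before the transition, and the transition action fires as the transition is taken.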
This is not a complete description of Stateflow’s semantics. We will see how Stateflow
programs a chart when we build a model that does something.
7.1.2 Making the Simple Stateflow Chart Do Something
Before we make our chart do something (have outputs), let us look at how Stateflow differs
from the classical Mealy and Moore state machines. In the Moore paradigm, outputs from
the finite state machine depend only on the machine’s states. In the Mealy paradigm, an
output can occur when there is a change in the input, so the output depends on both the states
and the input. The newest version of Stateflow allows you to restrict the Stateflow chart to
be one or the other of these two paradigms. If you wish to do this, you need to open the
“Model Explorer” window in the chart by selecting the “Explore” option under the “Tools”
menu and then selecting the Mealy or Moore option in the “State Machine Type” pull-down
menu.
In addition to the method for defining outputs, Stateflow also allows parallel (“And”)
states, while both the Mealy and Moore schemes are purely exclusive “Or” machines.
Finally, the ability to do signal flow using transitions and connection icons makes it possible
to implement (and visualize) complex branching, including do loops and all types of if
statements.
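To make the Moore/Mealy distinction concrete, here is a minimal sketch (a hypothetical two-state machine, not tied to any particular Stateflow chart):

```python
# Moore machine: the output is a function of the state alone.
def moore_output(state):
    return {"Off": 0, "On": 1}[state]

# Mealy machine: the output is a function of the state AND the input,
# so it can change as soon as the input changes.
def mealy_output(state, inp):
    return 1 if (state == "On" and inp) else 0

print(moore_output("On"), mealy_output("On", 0))   # 1 0
```

Note that the Mealy output drops to 0 the moment the input does, even though the state has not changed; the Moore output cannot do this.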
Now we can return to the example we started and create some outputs. We will assume
that we have a simple specification for the automata that we want to create, and we will
then build the Stateflow model to satisfy the specification. We will then look at the way
Stateflow processes the chart by looking at the transitions and the actions both during the
transitions and in the states themselves.
We assume we are creating a device that will cause a neon sign to flash with a period
of 0.5 sec with a duty cycle of 50%. The first requirement that we have is that the drive
electronics for the neon light must be “warmed up” for at least 0.25 sec before the flashing
commences. (This ensures that the voltages are at the correct values.) After the warmup
is complete, we commence flashing using a timer to trigger the state chart. The final
requirement is to turn on a fault detection circuit that will monitor the voltage and current
going to the lamp, but this can be done only 1 sec after the start command is issued, to allow
Chapter 7. Stateflow

Figure 7.3. The Simulink model for the simple timer example. (A Pulse Generator and a Digital Clock drive the Chart, whose outputs start, on, and trip go to a Scope.)
the start transient to die down. (This time is greater than 3 time constants, so we select a time of 1 sec.)
We assume that the earlier model was the first step in the development of this timer,
so we will modify this model to incorporate the complete specification. Here, then, is the
specification:
• A timer that sends out a pulse every 0.125 sec will replace the switch we created
in the early version (and because this triggers the chart, it will cause the states and
transitions to occur every 0.125 sec).
• We will send the clock's time to the chart so we can create outputs that are timed.
• We will have three outputs. The first will be a variable called “start” that starts the
light’s electronics. The second will be a variable called “on” that causes the neon
light to switch on and off at the 0.25-sec rate. The final variable, “trip,” occurs after
a delay of 1 sec.
Figure 7.3 shows the Simulink model with the state chart that achieves this specification (called SecondStateflowModel in the NCS library).
Run this model several times, carefully watching the transitions and the actions that
occur on entry and exit from the states. (This is easy to do because the chart in Figure 7.4
is animated.) Along with the animation, also look at the three outputs in the Scope block.
The model has been set up so all of the transitions in the Chart occur every 0.6 sec. (This is
done using the Stateflow Debugger under the Tools menu; we will look at this tool in more
detail later.)
The chart has the same two states that we used in the initial example. We have added
entry and exit actions for the “On” state and an exit action for the “Off” state. In addition,
we have put the connective icon at the transition from the “Off” state to the “On” state. This
connection is an if-then-else statement. It executes the following logic:
• If the time t is greater than 1 sec, then trip is set to 1; otherwise (the else), go
unconditionally to the “On” state. (Remember that t is data from Simulink; when the
device is implemented, this is the clock time beginning when the timer is turned on.)
• When we enter the “On” state, Stateflow sets the variable “on” to 1, and when the
chart exits the “On” state, the variable “on” is set to 0 (causing the flashing).
Figure 7.4. Modified Stateflow chart provides the timer outputs “Start,” “On,”
and “Trip.”
• All of the transitions take place because the event Switch is triggered by the clock
pulses. (Remember that either rising or falling edges of the pulse trigger the event.)
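The toggling logic in the bullets above can be sketched as follows. (This is a simplified Python rendering of the described behavior, not the actual chart; in particular, the real chart also produces the start output during warmup.)

```python
def step(state, t, outputs):
    """One Switch event for the flashing-light chart (a sketch of the
    logic described in the text; details of the real chart differ).
    state: "Off" or "On"; t: the Simulink clock time."""
    if state == "Off":
        # Connective junction on the Off -> On transition:
        if t > 1.0:
            outputs["trip"] = 1   # arm the fault detector after 1 sec
        outputs["on"] = 1         # entry action of the On state
        return "On"
    outputs["on"] = 0             # exit action of the On state
    return "Off"

# The chart is triggered every 0.25 sec by the pulse edges:
outputs = {"on": 0, "trip": 0}
state = "Off"
for k in range(1, 9):             # events at t = 0.25, 0.50, ..., 2.0
    state = step(state, 0.25 * k, outputs)
print(outputs)   # {'on': 0, 'trip': 1}
```

Running the loop, the light toggles every 0.25 sec, and trip latches to 1 at the first Off-to-On event after t = 1 sec, matching the specification.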
To create the input “t” for the Statechart we used the “Add” menu in the Stateflow
window. When you select Add, the options that you can use are “Event,” “Data,” or “Target.”
We want “t” to be Data that is an input from Simulink. Therefore, select Data, and then
from the submenu select "Input from Simulink." In a similar way, we need to add outputs
called "on," "start," and "trip" (as in the chart diagram above).
The Simulink model contains the chart and models of the timer. (We use the Pulse Generator from the Simulink Sources library with the dialog values set to "Pulse type = Sample based," "Period = 4," "Pulse Width = 2," and "Sample Time = 0.125." The digital clock is also from the Sources library, and its sample time is 0.125 sec.)
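These dialog values determine how often the chart is triggered. A short sketch (plain Python, just checking the arithmetic) shows that a sample-based pulse with a period of 4 samples and a width of 2 samples produces an edge, rising or falling, every 2 samples, i.e., every 0.25 sec:

```python
# Sample-based pulse: Period = 4 samples, Pulse Width = 2 samples,
# Sample Time = 0.125 sec. The pulse is high for 2 samples and low
# for 2, so an edge occurs every 2 samples = 0.25 sec, and a chart
# triggered on either edge wakes every 0.25 sec.
period, width, ts = 4, 2, 0.125

def pulse(k):                 # pulse value at sample k
    return 1 if (k % period) < width else 0

edges = [k * ts for k in range(1, 17) if pulse(k) != pulse(k - 1)]
print(edges)   # [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
```

This is why, later in the chapter, the Switch events are said to occur every 0.25 sec even though the block's sample time is 0.125 sec.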
We have now looked at the semantics for Stateflow, and a simple example with inputs
and outputs. We can now create a model that actually simulates a device that has both
complex dynamics and a Stateflow chart that provides the code for some complex logic.
7.1.3 Following Stateflow's Semantics Using the Debugger
When you ran the model and viewed the animation, you should have noted that there is a
logical pattern to the execution of the chart. This pattern is part of the Stateflow semantics
that you should understand. Let us look at how Stateflow translates the chart into actions.
Figure 7.5. Outputs resulting from running the simple timer example. (The three Scope traces show start, on, and trip versus time over 0 to 4 sec, with individual events annotated on the plots.)
First, the chart does nothing until something that the chart recognizes occurs. This
recognition might be an event, as we have used in the model above, or it might be a change
in some data from Simulink. Stateflow can recognize a Simulink variable change as a
“condition” for a transition. In the (anthropomorphic and cute) semantics of Stateflow, the
chart is “asleep” when it starts. When something outside the chart causes an event or the
fulfillment of a condition, the chart wakes up. When the chart wakes up for the first time, the
default transition takes place and the chart initializes. Note that this takes 0.25 sec because
the first event occurs then. Thus at the end of 0.25 sec the first state is active, but the chart
is asleep again. The next event is at 0.5 sec. (The clock ticks occur every 0.25 sec because
the Pulse Generator has a period of 4 samples, with a pulse width of 2 samples.) At the
second Switch event, the chart transitions from the “Off” state to the “On” state, and the
entry action start=1 occurs. (Figure 7.5 shows the Scope that displays all of the outputs
from the chart.)
If you watched the transitions carefully in the Stateflow window, you will have seen that the
first action that occurs is the transition. Then the connective decision occurs, with the flow
stepping up to the border of the “On” state. Before the chart enters this state, the “Off” state
exit action occurs. Then and only then does the chart enter the “On” state.
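The ordering just described can be written out as a small trace. (This is a Python sketch of the semantics described in the text, not generated code; recall that in this chart the "On" state has entry and exit actions, while "Off" has only an exit action.)

```python
def switch_event(state):
    """One Switch event for the Off/On pair, recording the order in
    which Stateflow performs the pieces of the move."""
    trace = []
    if state == "Off":
        trace.append("transition from Off fires")
        trace.append("connective junction decision")
        trace.append("exit action of Off runs")
        trace.append("entry action of On runs")
        return "On", trace
    trace.append("exit action of On runs")
    trace.append("Off entered (no entry action)")
    return "Off", trace

state, trace = switch_event("Off")
print(trace[-2:])   # ['exit action of Off runs', 'entry action of On runs']
```

The key point is the next-to-last step: the source state's exit action runs after the transition and junction decision but before the destination state is entered.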
You can understand the details of the actions and their timing if you start the Stateflow
debugger. This is accessible in the Tools menu at the top of the chart window or by typing
Ctrl+g when the chart window is active. The debugger opens with the check box “Disable
all” checked. To have the debugger stop at the various actions in the chart, deselect this
check box. Run the model from the debugger by selecting Start. The debugger will stop
the chart at every action, and you can use the “Step” button to move to the next change in
the chart. As you step through the chart, note carefully the actions that Stateflow takes at
each step. This is the best way to understand the Stateflow semantics.
In the next section, we will use Stateflow to create a new concept in home heating control. This control will be an alternative to a thermostat: we will create the controller using Stateflow to handle all of the logic, and we will create the control system in Simulink, all the while using Simulink to model the furnace that heats the home. This example builds on the simple home heating example (using a thermostat) that we investigated in Chapter 2, and the Two Room House model that we developed in Chapter 6.
7.2 Using Stateflow: A Controller for Home Heating
The simple home heating example we investigated in Chapter 2 assumed that the heat source
was an electric heater. The thermostat that did the control was a bimetal device that worked
because of the unequal coefficients of expansion of the two metal pieces. This type of
heating system is still in common use in most homes, but it is not very efficient. First,
the hysteresis of the thermostat causes the temperature of the house to overshoot the value
that the thermostat is set to. (This overshoot can be as much as 4 or 5 deg F for a typical
thermostat.) Second, the typical house heat source is oil or gas (not electricity), and the
heater itself typically heats either air or water that circulates around the house. (In the case of
water, or hydronic, systems, the water may be circulated as superheated water or as steam.)
In either case, the heat exchanger that transfers the heat generated by the flames into the
water or the air has significant heat loss. (A good boiler has an efficiency of about 75%, and
a good air heat exchange plenum has an efficiency of about the same.) One way to avoid
these losses is to circulate water or air at a temperature that varies (and is typically only
slightly above the ambient temperature of the house). The ideal method for doing this is to
create a controller that varies the flow rate of the heat exchange medium. (A variable flow
rate requires a variable-speed fan in the case of an air heat exchange or a variable-speed
pump in the case of a hydronic system.) Let us develop an electronic controller in Stateflow
that will provide all of the functionality that a variable flow rate system needs.
The first step in the design is to create the Simulink model for the physical system
in which the controller will operate. This part of the design process is critical because it
provides the basis for capturing the specifications by allowing the developer to see the effect
of a change in the requirements or the specifications on the way the system operates. It is
the first step in the creation of an “executable specification” for the design. This step is so
crucial that we will do it in some detail.
7.2.1 Creating a Model of the System and an Executable Specification
We will assume that we are creating a device that will control the hydronic heating system
that we investigated in Chapter 6. We will divide the system into four parts:
• the combustion process,
• the heat exchanger that converts the heated gas or oil into hot water,
• the pumps that deliver the heated water to the heat exchangers in the rooms,
• finally, the dynamics of the heat exchange in the rooms (the flow of heat into the
room from the heat exchangers and the loss of heat through conduction, infiltration
of external air, and radiation).
We modeled a home that has two “rooms” in Chapter 6. This model had all of the
attributes of a multiroom home but was easier to build. In the next chapter, we will revisit
the model and, using another tool, build a model that has multiple rooms. This approach is
typical of the way a design progresses. Initial modeling tends to be simple but expandable.
Its main thrust is to capture the salient details that drive the design of the product and
allow the specification to be refined. When systems engineers talk about this phase, they
often talk about “flowing” the requirements down. This phrase captures one of the most
important attributes of the development of a specification. We start with a requirement for
the system as a complete unit. We specify the very top part of the system; in this case we
want an energy efficient heating system that provides control over each individual room in
a multiroom home with heat flowing into the room on a continuous basis, not turned on
and off with a thermostat. The next step is modeling the dynamics to investigate how the
various parts of the system interact. This is where the simple model plays a significant role.
So-called trade-offs can now take place, where a more complex subsystem in one area can make the design of other subsystems less difficult. Trade-offs of tighter specifications in one area for less onerous specifications in another require a very robust model. In fact, this is the only method that the designer can use to develop a complex specification. However, in most engineering the early work to create these models is lost, because a written specification is all that the engineers see in the subsequent design phases. The transmission of the specification through a written document that omits the simulations used to develop it results in a loss of intellectual content and significantly slows down the design process.
In order to understand the process we will create the new home heating controller
using the model of the house that we developed in Chapter 6. Presumably, the model was
the starting point for the design of the new heat controller, and the model showed the limits
on what is possible using the new controller. When we described the model in Chapter 6,
we noted that it contained a simple control system that started each time the manual switch
changed. So let us open the model and look at what the specification developers want. Open
the model TwoRoomHeatingSystem in the NCS library. Look at the responses when you
run the system without the controller. (The model should open with this option—the switch
on the right should be in the “up” position—and then throw the switch on the left by double
clicking and see what happens when the furnace is turned on and off.) The responses you see
were in Figure 6.5 of Chapter 6. Now, turn on the controller by throwing (double clicking)
the switch on the right. You will see that the design team has devised a good controller. It
seems to keep the house temperature right at the 70 deg F set point. (This value comes from
the constant input at the lower left side of the Simulink diagram.)
So let us look at how they did this. The Simulink subsystem that contains the controller
is in Figure 7.6. The salient features of this subsystem are as follows:
• The design uses a PID controller, which makes sense because the heating system
should provide the homeowner with a constant temperature under all circumstances.
(The integral control forces the temperature to match the set point.)
Figure 7.6. PID controller for the home-heating example developed in Chapter 6. (The controller acts on the set and measured temperatures, sampled at 1 sec, through three paths: a discrete integrator K·Ts/(z−1) with integral gain Ki, a proportional path with gain Control_Gain, and a derivative path K(z−1)/(Ts·z); their sum is the Heater Command.)
• The design uses derivative control, which allows the controller to anticipate changes
in the outside temperatures and accommodate them.
• The proportional control makes the heating system follow variations in the indoor
temperature caused by disturbances such as the opening and closing of doors.
• The control system is digital. (This is implicit in the specification since we are using
a digital device for the control; the sample time of the controller is 1 sec, which is
more than adequate but is long enough that it should not cause any computational
issues.)
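The discrete PID structure in Figure 7.6 can be sketched as a simple update loop. (This is a Python illustration of the controller's form, with a forward-Euler integrator Ki·Ts/(z−1) and a difference-based derivative Kd·(z−1)/(Ts·z); the gain values below are illustrative placeholders, not the values used in the book's model.)

```python
def make_pid(kp, ki, kd, ts):
    """Discrete PID with the structure shown in Figure 7.6."""
    acc = 0.0    # integrator state: Ts times the sum of past errors
    prev = 0.0   # previous error, for the derivative difference
    def step(err):
        nonlocal acc, prev
        out = kp * err + ki * acc + kd * (err - prev) / ts
        acc += ts * err   # accumulate after the output (matches Ts/(z-1))
        prev = err
        return out
    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, ts=1.0)   # 1-sec sample time
u = [pid(e) for e in (1.0, 1.0, 0.5)]            # heater commands for a
                                                 # short error sequence
```

A constant error makes the integral term grow until the output (and hence the temperature) matches the set point, which is exactly the role the text assigns to the integral control.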
The simulation results from the executable specification are good, but we probably
should see what happens when the outside temperature moves around. Back in Chapter 2, we
developed a temperature profile (from real data) for a typical day in Norfolk, Massachusetts.
We can now excite the model with this temperature variation.
Open the two-room house model, delete the fixed outside temperature block, and replace it with a From Workspace block. For the data, type Tdata(:,1:2) in the "From Workspace" block's dialog. This causes the outdoor temperature in the model to follow the outside temperature profile in the figure that opened when you started the two-room heating system model (the temperature profile is the curve in red). Also, change the stop time for the simulation from Tend to Tend1. (This makes the simulation 24 hours long.) If you do
not want to do this, you can open the model TwoRoomHeatingSystem24 in the NCS
library. When you run the simulation, the four temperatures in the model are as shown
in Figure 7.7. Notice that the individual room temperatures go from 50 deg F to the set
point of 70 deg F with no overshoot in about 20 minutes. Furthermore, the heating system
controller maintains a constant temperature of 70 deg F for both rooms despite the diurnal
variations in the temperature. In addition, the circulating water temperature goes up and
down as the outside temperature changes; this helps maintain a constant wall temperature
and significantly improves the comfort level in the rooms.
This simulation allows you to verify that this controller works well. At this point, you
are ready to build the Stateflow block that will implement this controller. However, there
are features that the controller should have that you know but the control designer did not.
Figure 7.7. Response of the home heating system with the external temperature from Chapter 3 as input. (The plot shows the temperatures of Room 1, Room 2, Heat Exchanger 1, and Heat Exchanger 2 over the 24-hour run; the vertical axis spans 50 to 130 deg F.)
Figure 7.8. A tentative design for the new home heating controller we want to develop. (The faceplate shows the measured and set temperatures for the two zones, the time of day, a Run mode indicator, and a keypad with the digits 1 through 9 and 0 and an Enter key.)
You know that you want to raise and lower the set temperature from the device. You know
that you need to display the set temperatures for each zone. You know that you need to
display the status of the system for the user, including the current time and the current indoor
temperature for each room (zone).
Assume that a human factors engineer and a commercial artist working together have
created the tentative interface in Figure 7.8 for the heating controller.
From this, we can begin to build the requirements for the Stateflow logic. First,
however, we need to talk about some of Stateflow’s Action Language constructs that we
have not used so far.
7.2.2 Stateflow's Action Language Types
Stateflow has various types of “Action Language” constructs, some of which we have already
seen, that will make the programming of the controller easy. For example, actions can be
associated with both states and transitions. These actions have certain keywords associated
with them (we have already encountered most of them), but there are some implicit actions
that are very useful. These are as follows.
• Binary and Fixed-Point Operations: The rules for fixed-point and binary operations
are quite extensive and beyond the scope of this introduction. You should carefully
read the Stateflow manual before using any of these operations.
– In their order of precedence, the binary operations are a*b, a/b, a+b, a-b, a > b,
a < b, a >= b, a <= b, a==b, a ∼= b, a != b, a <> b, a&b, a|b, a && b, a||b.
– The unary operations are -a (unary minus), !a or ~a (logical not), a++ (increment a by 1), and a-- (decrement a by 1).
– Assignments: There are certain assumptions that Stateflow makes about the
calculation results when you are using fixed point. Some of the operators can
override these assumptions. The Stateflow manual describes these assumptions,
so if you are using the assignments below, you need to use care.
– With this in mind, the assignments are a = expression, a := expression, a += expression, a -= expression, a *= expression, a /= expression,
a |= expression, a &= expression. Most of these are the same as they would be
for C code; however, in some cases they are bit operations and not operations
on an entire fixed-point word.
• Typecast operations: These operations are data conversions and are of the form
int8(v), int16(v), int32(v), single(v), and double(v).
• C function calls: You can use a subset of the C Math Library functions. These are
abs, acos, asin, atan, atan2, ceil, cos, cosh, exp, fabs, floor, fmod, labs, ldexp, log,
log10, pow, rand, sin, sinh, sqrt, tan, and tanh. Stateflow also uses macros to enable
max and min.
The user may call his or her own C code from Stateflow. The procedure for doing so
is in the manuals.
• MATLAB calls from Stateflow use the operator ml. For example, a = ml.sin(ml.x)
will use MATLAB to compute sin(x), where x is in the MATLAB workspace.
• Event broadcasting: Any action in Stateflow can create and broadcast (to other states)
a new event. For example, a state can have the action "on e1: e2," which is read
as “On the occurrence of the event e1, broadcast the event e2.” An event broadcast
may also occur during a transition using the code “e1/e2,” which is read “Make the
transition when e1 occurs and broadcast the event e2.” The events can also be directed
to specific states in the chart using “send(event, state),” where event is the name of
the event and state is the name of the state it will be used by. (No other state will be
able to use the event.)
• Temporal logic: This logic uses constructs like "Before," "After," "Every," and "At."
It also allows
– conditionals, and it supports some special symbols like "t" for the absolute time
in the simulation target;
– "$" for target code entries that will appear as written between the $ signs in the
code but will be ignored by the Stateflow parser;
– an ellipsis (...) to denote continuation of a code segment to the next line;
– the MATLAB, C, and C++ comment delimiters (%, /* ... */, and //);
– the MATLAB display-suppression symbol ";".
It also supports the single-precision floating-point suffix "F" and hexadecimal
notation.
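Of the constructs above, event broadcasting is the one we will lean on most, so a small sketch is worth a moment. (This is a simplified Python rendering, not Stateflow semantics in full: here a broadcast event is processed immediately and recursively, which approximates how a broadcast interrupts the current processing; the event names are hypothetical.)

```python
def dispatch(event, broadcasts, log):
    """Handle an event; handling it may broadcast further events,
    which are processed immediately (recursively)."""
    log.append("saw " + event)
    for follow_on in broadcasts.get(event, []):
        dispatch(follow_on, broadcasts, log)

# "on e1: e2" -- on the occurrence of e1, broadcast e2:
log = []
dispatch("e1", {"e1": ["e2"]}, log)
print(log)   # ['saw e1', 'saw e2']
```

The directed form send(event, state) simply restricts which state may react; the recursive "handle the broadcast before returning to sleep" behavior is the same.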
7.2.3 The Heating Controller Layout
We are ready to begin building the home heating controller. We will start with the simulation
model for the heating controller TwoRoomHeatingSystem24, and modify it by adding the
Stateflow chart that will handle all of the operations that the user will perform. The first
step is to write the specification for the Stateflow chart that outlines all of the actions that
the user can perform.
The actions are as follows:
• The user may select any of the three switch settings: Run, Set Time, and Temp.
• In the “Run” position, the controller operates to maintain the temperature set for each
of the zones. The controller also displays the time of day.
• In the "Set Time" position, the user can set the time with the pushbutton keypad (the digits 1 through 9 and 0) and the Enter pushbutton. Pressing the Enter key saves the time in the display. The display cycles through the hour and minute display values every time a new number is pressed. If the Enter key is pressed before the complete time is entered, the Time entry command recycles back to the beginning, as if no entry has been made.
• In the "Temp" position, the user can set the temperature for each zone. The display begins with zone 1. The desired temperature is set using the number keys. After entering each digit, the user can change it (pressing a new number causes the display to cycle through the two digits). Pressing the Enter key accepts the number in the display. Temperatures are set to two digits, and pressing Enter saves them.
• Zones are selected using the Enter key immediately after the "Set Temp" option is selected.
In all cases, the system continues to run properly even if the switch is not in the "Run" position. If the user fails to return the switch to "Run," the display will continue to accept new inputs. With this specification for the operation, we can begin building the Stateflow chart.
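The time-entry rule, in particular the recycling on an incomplete Enter, is easy to get wrong, so here is a sketch of it. (This is a hypothetical Python helper illustrating the specification, not the Stateflow chart we will build; the exact display-cycling behavior is an assumption.)

```python
def time_entry(keys):
    """Sketch of the Set Time logic from the specification: digit keys
    fill an HHMM entry in order; Enter saves the time only when all
    four digits are present, otherwise the entry recycles to the
    beginning as if nothing had been typed."""
    digits, saved = [], None
    for key in keys:
        if key == "Enter":
            if len(digits) == 4:
                saved = "%d%d:%d%d" % tuple(digits)
            digits = []                # incomplete entry: recycle
        else:
            digits.append(key)
            if len(digits) > 4:
                digits = digits[-4:]   # keep cycling through the display
    return saved

print(time_entry([1, 0, 2, 0, "Enter"]))   # 10:20
```

Pressing Enter after only two digits, for example, returns no saved time at all, which is the "as if no entry has been made" behavior in the specification.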
Figure 7.9. The complete simulation of the home heating controller. (The Two Zone Home Heating System and Controller model contains the Stateflow chart ProcessCommands; the User_Selections subsystem with its Setting, PushButton, and Enter signals; the digital clock; the 2-Zone PID Heating Controller with its Heat Command Gain (0 < Heat cmd. < Qf); an A/D Conversion block; a state-space model of the four temperature states (Room1, Room2, HE1, HE2) built from the A and B1 matrices loaded from an m-file; the Outdoor Temperature from Workspace input Tdata(:,1:2); and the MATLAB Function blocks that update the time, set-temperature, and room-temperature displays in the GUI.)
7.2.4 Adding the User Actions, the Digital Clock, and the Stateflow Chart to the Simulink Model of the Home Heating System
The Simulink model needs a simulation of different user actions so we can test the various
modes of the Stateflow chart. (When we test the chart, we want to make sure that we cover
every possible action and that no states in the chart are unused.) The GUI will contain all
of the buttons and displays that the design team specified (as in the sketch provided above).
We also need a simulation of the digital clock that will trigger the controller. (The digital
clock also drives the clock display.)
The complete simulation is the model Stateflow_Heating_Controller in the
NCS library (Figure 7.9). When you open this model, a Heating_Control_GUI display
opens also (Figure 7.10). We made this GUI with GUIDE (GUI Development Environment)
in MATLAB. The GUI simulates all of the user interactions, including
• the display of the time,
• the display of the actual room temperatures,
• the display of the desired (set) room temperatures,
• the operating modes (through a pull-down menu): "Run," "Set Temp," and "Set Time,"
• a functional keyboard that allows the user to set the time and the two-room set temperatures.
When the user changes the settings, they immediately are available in the simulation
and the GUI displays the result.
Figure 7.10. Graphical user interface (GUI) that implements the prototype design
for the controller.
We will go through all of the details in the creation of this GUI in the next section,
but first let us see how it works. The Simulink model with the controller and all of the
code to drive the GUI are in the figures and code descriptions that follow. A Stateflow chart
handles the logic for the feedback control (the command to the heating controller comes
from the bottom right port in the chart) and all of the pushbutton and selection options.
These include the options for running the system and changing the time and the desired
temperature settings.
The simulation contains a digital clock that drives the display and triggers the Stateflow
chart. This is above the chart in Figure 7.9 and provides the trigger inputs to the chart. Five
MATLAB blocks handle the changes to the GUI. These are as follows.
• Update Time Display: As its name implies, this code updates the GUI-displayed time
every minute.
• Reset Time in GUI: When the user changes the time by selecting “Set Time” in the
pull-down menu, this code changes the time display in the GUI to the time entered
by the user.
• Reset PushButtons in “User_Selections”: The communication between the GUI and
the Simulink model uses a Gain block whose value is changed by code in the GUI
M-file. (We will spend some time describing this feature since we have not used it
before.)
Figure 7.11. Interactions between the GUI and Stateflow use changes in the gain values in three gain blocks (Setting_SW_Gain, 11_PB_Gain, and Enter_Gain) in the subsystem called "User_Selections."
• Update Set Temps in GUI: The user inputs from the keyboard change the set temperatures, and this code displays them in the GUI.
• Update Room Temps in GUI: The measured room temperatures are the output of a
Simulink block that simulates the A/D conversion of the measured temperatures into
3 digits (in the form XX.X), which are displayed in the GUI with code in this block.
The digital controller and the simulation of the two rooms are directly from the
TwoRoomHouse model from Chapter 6. We added an empty Stateflow chart to the model,
and we used the Add menu selection (in Stateflow) to add the data inputs. The inputs are
• the settings (an integer from 1 to 3 from the GUI pull-down menu);
• the 11 PushButton values that come from the interaction of the GUI and the
User_Selection block (these values are −1, 0, 1, 2, . . . , 9);
• the result of pressing the Enter pushbutton (when the buttons are reset the value is −1,
and it changes to +1 when the Enter button is pressed; these result from interactions
of the GUI with the User_Selections block).
The inputs, from the block called User_Selections at the left of the model, result from
changes in the gains in Gain blocks. These Gain blocks are in the User_Selections block
shown in Figure 7.11.
To see how the interactions with the GUI occur, open the model and the User_Selections
subsystem. You do not have to start the model to see the interactions. Start by selecting
one of the three options in the pull-down menu in the GUI. When you do, you should see
the value in the “Setting_SW_Gain” (the top block) change. This change uses a feature of
the interaction of Simulink with MATLAB that we have not used before. The command, in
MATLAB, is set_param, and it allows you to change the values of the parameter settings
in a block. For the Gain block, there are only two parameters, the Gain itself and the sample
time. In order to use set_param, you need to get the handle to the graphics object (the block)
that you want to change from the GUI. We do this in two steps. First, when the model
opens, a callback (under Model Properties in the edit menu) is executed and opens the GUI
Chapter 7. Stateflow
using the command hGUI = Heating_Control_GUI. This saves the graphics handle for
the GUI in the variable named hGUI in MATLAB. We can then use this handle to access
the GUI and to make the appropriate changes. The second step is to set the gain values in
each of the blocks above. The handle for these blocks uses the names of the model, the
subsystem that contains the gain blocks, and the names of the gain blocks. Thus, to set the
gain in the first block, the following code is used:
set_gain = '1';
block_handle = ...
    'Stateflow_Heating_Controller/User_Selections/Setting_SW_Gain';
set_param(block_handle,'Gain',set_gain);
The fully qualified name for the block is stored in the string variable block_handle. This is
used in the set_param command to set the value of the Gain to the string variable set_gain
(which is the string for the character 1).
This set of commands will change the gain of the top block to 1. (In Figure 7.11 the
value is 2.) Each time you change the value of the gain, the Stateflow Chart sees the result
of multiplying the gain (from the set_param value) by the input 1 (i.e., it sees the values 1,
2, or 3).
The “PostLoadFcn” callback also sets the value of the 11_PB_Gain to an initial value
of −1 (so we can tell when no pushbuttons are selected); this changes to 0, 1, 2, . . . , or
9, depending on which button is pushed. The last gain, the “Enter_Gain” is changed to 1
whenever the Enter pushbutton is pressed; otherwise it is −1.
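Putting these pieces together, the "PostLoadFcn" callback might look something like the following sketch (the exact code in the model may differ; the block paths follow the pattern shown above):

```matlab
% Sketch of the model's PostLoadFcn callback: open the GUI, save its
% handle, and reset the pushbutton gains to their "nothing pressed" values.
hGUI = Heating_Control_GUI;     % opens the GUI; handle saved in the workspace
root = 'Stateflow_Heating_Controller/User_Selections/';
set_param([root '11_PB_Gain'], 'Gain', '-1');   % -1 = no pushbutton selected
set_param([root 'Enter_Gain'], 'Gain', '-1');   % -1 = Enter not pressed
```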
In the Stateflow chart, the action “Reset” occurs after the pushbutton data is used.
This action is an output from the chart. The other outputs are the Set Clock action and
the time to which the clock is to be set, and the SetTemps that is a 2-vector of the desired
temperatures created by the user’s pushbutton actions.
The Stateflow chart has two parallel states called Interactions (Figure 7.12) and Process_Buttons (Figure 7.13). The Interactions state computes the two commands for the
controller (each command is the difference between the actual and set temperatures for
each zone) and processes the changes in the state of the switch. The second parallel state
processes the pushbuttons.
This chart is rather easy to follow. The annotation at the top of the state has enough
detail for you to see how this part of the chart processes the commands. Run
the model and watch the chart. Then as you select different options with the pull-down
menu, watch the chart as the paths change. In addition, notice how the events broadcast
from this state to the parallel Process_Buttons state and note what happens in this part of
the chart (shown in Figure 7.13).
When you are sure that you understand the workings of the Interactions state, focus your
attention on the Process_Buttons chart. The entry into this chart activates the Start state. The
broadcast of one of the three events “run,” “time,” or “temperature” causes the chart to stay in
this state (run), go to the state "SetTime" (time), or go to the state "Temperature" (temperature). In each
of the SetTime states, we wait for a pushbutton selection. When the selection occurs, the
value of the variable “Pushbutton” is 0, 1, . . . , 9, depending on which button was pushed,
and we exit the state and go to the state Hours2, where the first digit of the desired hour
setting is set. We then reset the pushbuttons, and when the event “next” occurs, we exit the
state and go to a Wait state. If the user does not continue after 25 of the next clock pulses (25
At every clock tic after the Start, test the Switch condition and act as follows:
    Switch Setting = 1 -- send event "run" to Process_Buttons
    Switch Setting = 2 -- send event "time" to Process_Buttons
    Switch Setting = 3 -- send event "temperature" to Process_Buttons
Then calculate the heater command "Command" and reset all of the pushbuttons.

Figure 7.12. The first of the two parallel states in the state chart for the heating controller.
sec), this state is abandoned and we start all over again. If the user continues, we process the
next digit in the same manner. When all of the digits are set, we are in the Min1 state where
we wait for the user to press the Enter button. This sends the chart to the Output_Time state
where we calculate the Time from the four digits and the variable SetClock is set to 1. This
updates the display with the new time. First, we need to decide whether the user is setting
the clock to am or pm. This happens in the state AMPM. The MATLAB code that resets
Figure 7.13. The second of the two parallel states in the home heating controller state chart.
the clock looks at the time, and if it is greater than 100 it knows that the time is pm, and if
not, it is am. Thus, in this state, every time the user hits the Enter button we alternately add
or subtract 100. This sequentially sets “am” or “pm” in the time display. Try this!
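The arithmetic used by Output_Time and AMPM is easy to check at the command line. The four digits are packed as Time = 10*h2 + h1 + (10*m2 + m1)/100, and adding 100 flags pm:

```matlab
% The time encoding used by the chart: hours in the integer part, minutes
% in the two digits after the decimal point, +100 to indicate pm.
h2 = 0; h1 = 3; m2 = 4; m1 = 5;         % user keyed in 0, 3, 4, 5
Time = 10*h2 + h1 + (10*m2 + m1)/100    % 3.45, i.e., 3:45 am
Time = Time + 100                       % 103.45, i.e., 3:45 pm
```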
The path that sets the temperatures begins with the state Temperature. At the start
of this, we wait to see if the user presses the “Enter” button. Every time the user presses
“Enter,” the zone changes; it cycles through 1 and 2. The rest of this chart should be easy
to follow, particularly if you run the chart while you select the temperatures.
The only remaining parts of the model are the MATLAB code blocks that change the
display. We will describe one of these in detail, and allow you to look at the remaining
code blocks to see what each of them does. There are five different MATLAB functions in the
model. One of these is the code that resets the pushbuttons, and we have seen this already.
The other four all cause changes to be made to the GUI. Two of these make the changes
that the user commands using the pushbuttons, while the other two change the time and the
temperature as the simulation progresses. We look at the block that updates the time in the
GUI. The code is
function updatetime(Clock)
persistent AMPM Clocksec Clockmin Clockhr changehr changemin changeampm
if Clock == 0; return; end
hGUI    = evalin('base','hGUI');
h       = guidata(hGUI);
Hrfield = get(h.text15,'String');
if strcmp(Hrfield,'00')
    AMPM       = 'am';
    Clocksec   = 0;
    Clockmin   = 0;
    Clockhr    = 12;
    changemin  = 1;
    changehr   = 1;
    changeampm = 1;
end
Clocksec = Clocksec + 1;
if Clocksec == 60
    Cm       = get(h.text13,'String');
    Clockmin = str2double(Cm);
    Clocksec = 0;
    Clockmin = Clockmin + 1;
    changemin = 1;
    if Clockmin == 60
        Ch      = get(h.text15,'String');
        Clockhr = str2double(Ch);
        Clockhr = Clockhr + 1;
        Clockmin = 0;
        changehr = 1;
        if Clockhr == 13; Clockhr = 1; end
        if Clockhr == 12 && Clockmin == 0
            changeampm = 1;
            if strcmp(AMPM,'am')
                AMPM = 'pm';
            else
                AMPM = 'am';
            end
        end
    end
end
if changemin == 1
    Clockmins = num2str(Clockmin);
    if Clockmin < 10; Clockmins = ['0' Clockmins]; end
    set(h.text13,'String',Clockmins)
    changemin = 0;
end
if changehr == 1
    Clockhrs = num2str(Clockhr);
    set(h.text15,'String',Clockhrs)
    changehr = 0;
end
if changeampm == 1
    set(h.text11,'String',AMPM)
    changeampm = 0;
end
This code should be easy to follow. The only part that might be new to you (particularly
if you have never used MATLAB to handle graphics before) is the code that finds the graphic
object in the GUI. The GUI was created using GUIDE. GUIDE automatically creates a name
(a handle) for each of the objects in the GUI. In this case, we want to change the attributes
of the text in the GUI. At any time we want to use them, we can retrieve these names using
the MATLAB command guidata. To do this we need to have the handle for the GUI.
Remember that when we opened the GUI from the callback, MATLAB created a handle (a number)
named hGUI. This handle exists in the base MATLAB workspace, so to retrieve it for use in this
function you need to bring it into the function's workspace. The evalin command does this.
If this is the first time you have seen this command, go to MATLAB and type help evalin
at the command line. The text in the GUI is always a MATLAB string, and the property that
holds it is always named "String." Try typing some of the "set" and "get" commands that appear in this
code at the MATLAB command line. (The model does not have to be running to execute
these commands in the GUI.)
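For example, with the model open (so that the callback has created hGUI), the following lines retrieve the GUIDE handles and change the minutes display; text13 is the minutes text object used in the code above:

```matlab
% Retrieve the GUI handle from the base workspace and modify a text field.
hGUI = evalin('base','hGUI');    % handle created when the model opened
h    = guidata(hGUI);            % structure of object handles from GUIDE
get(h.text13,'String')           % read the current minutes display
set(h.text13,'String','30')      % write a new value into the display
```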
The controller (with the PID control) and the Stateflow chart (along with the A/D
converters) are the only parts of this model that will be used in the actual controller. (The
Simulink pieces can be converted to C code using Real Time Workshop, and the Stateflow chart can
be coded with the Stateflow coder.) Figure 7.14 shows the final form of the Controller.
The measured temperature goes to the display through a MATLAB block that computes the
temperatures to three-digit accuracy (tens, units, and tenths) in the blocks to the left of the
A/D converter block in Figure 7.9.
When the design is complete, Real Time Workshop creates C-code for the Stateflow
chart and the digital controller. Thus, the design will be complete when the Simulink
diagram works as required. One of the neat aspects of this design approach is that the
Simulink model can be used to verify that the controller works as a customer would
expect, and to demonstrate this to the customer directly.
If you run this model, you will see it controlling the room temperatures both through
the GUI and in the plots. The plots are very similar to those that we created in the simple
models, which is further verification of the accuracy of the design (the design and the
specifications match).
Figure 7.14. The final version of the PID control for the home heating controller.
7.2.5 Some Comments on Creating the GUI
The GUI was created using GUIDE (the GUI Development Environment in MATLAB).
The creation of the GUI is quite easy. The tool opens when you type guide at the command
line in MATLAB. You insert each of the objects for the GUI by clicking and dragging the
appropriate icon into the blank window that opens. When all of the objects (pushbuttons,
pull-down menus, text, etc.) are in the window, simply click the green arrow at the top to
create the GUI. The result will be an active window (which will do nothing) and an M-file
that you can edit to create the actions for each of the objects.
The M-file contains a subfunction that is executed for every action in the GUI. The
code below, for example, is the code that runs when you press pushbutton 1.
% --- Executes on button press of pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% This is the Keyboard Entry for the number 1.
block_handle = ...
    'Stateflow_Heating_Controller/User_Selections/11_PB_Gain';
set_gain = '1';
settimeortemp(hObject, eventdata, handles, set_gain, block_handle)
The first five lines come from GUIDE. The comment and the last three lines are the
added code that processes the button. Note that this code changes the value of the gain in
the User_Selections subsystem in the Simulink model (we have seen this already). The gain
is changed to 1 here. (Other buttons change the gain to the values 0, 1, 2, . . . , 9, as we have
seen.) When the gain change occurs, the appropriate action in the Stateflow Chart occurs
because the chart input sees the value of the output of the gain block. With this knowledge,
you should be able to debug the GUI. Edit the GUI (called Heating_Control_GUI.m in
the NCS library). Select some of the lines in the M-file that correspond to actions, and click
the dash to the left of the lines. This will put a large red dot where the dash was, and the
next time the M-file runs, MATLAB will pause at this point. As is always the case, once
you understand how the GUI works, you can learn more by reading the help files.
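The same breakpoints can also be set from the command line with dbstop (the line number below is purely illustrative):

```matlab
% Command-line equivalents of clicking the dash in the editor margin.
dbstop in Heating_Control_GUI at 42   % pause at an (illustrative) line 42
dbstatus Heating_Control_GUI          % list the active breakpoints
dbclear all                           % remove all breakpoints when done
```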
7.3 Further Reading
The statechart formalism in the paper by Harel [18] forms only a part of Stateflow. The most significant difference is the
ability to do both the Mealy and Moore approaches (mixed syntactically into the same chart
if you desire). In addition, Stateflow’s flow chart feature allows coding of if statements and
do loops. This is a powerful and significant addition to the Harel formulation. However,
the true genius of Stateflow is its ability to connect to Simulink. This allows integration of
the software design and the hardware right from the start. We will return to this subject in
Chapter 9.
Stateflow semantics are subtle. The Stateflow User’s Guide should be consulted as
you work with the tool to ensure that you use the most effective techniques when you build
a model.
Exercise
7.1 Use the modifications you made to the two-room house model in Exercise 6.2 to
investigate the heating controller design. In particular, see what happens when the
wind speed increases and decreases. Try creating a wind gust model and use it to
drive the two-room house model. What do you think you could do to the controller
to better accommodate the heating system response to wind gusts?
Chapter 8. Physical Modeling: SimPowerSystems and SimMechanics
Almost every engineering discipline has its own unique method for developing a mathematical representation for a particular system. In electrical networks, the pictorial representation
is the “schematic diagram” that shows how the components (resistors, capacitors, inductors,
solid-state devices, switches, etc.) are connected. In mechanical systems, depending on
their complexity, diagrams can be used to develop the equations of motion (for example,
free body diagrams), and in more complex systems, the Lagrange and Hamiltonian variational
methods are used. In hydraulic systems, models consist of devices such as hydraulic pumps
and motors, storage reservoirs, accumulators, straight and angled connecting lines, valves,
and hydraulic cylinders.
The models of the devices and components used in various disciplines have unique
nonlinear differential equation representations. Their inputs and outputs come from both
the internal equations and the equations of the devices with which they interact. Before an
engineer can use Simulink’s signal flow paradigm, he must develop the differential equations he needs to model. However, because almost every discipline has a unique pictorial
representation that helps in the derivation of these equations, an intermediate process leads
to the ultimate Simulink model. Unique pictorial tools can capture the domain knowledge
for different disciplines, and this domain knowledge has evolved so practitioners can easily
develop the underlying differential equations for complex models.
The domain models pictorially represent interconnections of components. Once the
picture is complete, analysts use various methods for reducing it to the required equations. These methods become quite complex when the system has many parts, and even for
simple problems, developing the equations from the picture can be a bookkeeping nightmare. Producing Simulink models for these disciplines seems to demand a method that will
allow the user to draw the relevant pictures and then invoke a set of algorithms that will
convert the picture into the Simulink model.
The MathWorks has developed the concept of physical modeling tools that provide
a way of drawing the domain knowledge pictures in Simulink. We will describe two tools
that have been created for doing this. The first, used for electrical circuits and machinery,
is SimPowerSystems. It allows the user to draw a picture of the circuit, and when the
simulation starts, the tool converts the picture into the differential (and algebraic) equations
Figure 8.1. A simple electrical circuit model using Simulink and SimPowerSystems.
that the picture describes. The second tool is SimMechanics, which allows the user to draw
pictures of complex mechanical linkages, and simulate them in Simulink.
The approach used to create the Simulink equations for each of these modeling tools
is very clever because the final simulation takes place in Simulink. Consequently, inputs
and outputs to and from the electrical or mechanical components can be Simulink blocks (or
blocks from other domains besides the electrical and mechanical systems). At the current
time, tools exist for hydraulic systems (Physical Networks), aerospace vehicles (Aerospace
Blockset), and automotive vehicle power plant and drive systems (SimDriveline). The final
simulations are hybrids of the specific domains, and the combined tool for each of the
engineering disciplines becomes the actual simulation in Simulink. We start our tour with
SimPowerSystems.
8.1 SimPowerSystems
An electrical network diagram shows how electrical components interact. Therefore, a
circuit diagram is not the signal flow diagram used in Simulink. One of the easiest examples
is the circuit diagram in Figure 8.1. The SimPowerSystems tool was used to draw this circuit.
In order to run this and the other examples in this chapter, you will need a copy of
this tool. (If you do not already own it, a demo copy is available from The
MathWorks.) The model Simple_Circuit is in the NCS library. The circuit consists of a
series connection of a 100 ohm resistor and a 0.001 farad capacitor with a 100 volt DC source
that is switched on at t = 1 sec. Because the circuit uses the standard symbols for the DC
source, the resistor, and the capacitor, the pictorial representation should be easy to follow.
The results of running this simulation are shown in Figure 8.2. As can be seen, the switch
closes at one second, and because this circuit has only a single state (i.e., it is a first order
differential equation), the current and voltage have exponential decay and growth exactly
as we would expect.
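Because there is only one state, the simulation can also be checked against the analytic solution. Measuring time from the instant the switch closes, the capacitor voltage is v = V(1 - exp(-t/RC)) and the loop current is i = (V/R) exp(-t/RC). A few lines of plain MATLAB (no SimPowerSystems required) reproduce the curves in Figure 8.2:

```matlab
% Analytic response of the series RC circuit after the switch closes.
R = 100; C = 0.001; V = 100;     % values used in the Simple_Circuit model
tau = R*C;                       % time constant = 0.1 sec
tp = linspace(0, 1, 500);        % time measured from switch closure
v = V*(1 - exp(-tp/tau));        % capacitor voltage (volts)
i = (V/R)*exp(-tp/tau);          % loop current (amps)
plot(tp, v); figure; plot(tp, i)
```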
This is the first example of a physical model that we have encountered, and since
a physical model is different from a standard Simulink model, let us spend some time learning how these
Figure 8.2. The electrical circuit time response. Current in the resistor and voltage
across the capacitor.
differences manifest themselves. First, a physical component, such as a resistor, capacitor,
or inductor (or a linkage or joint in the SimMechanics tool), has two types of connections.
The first type of connection creates the network (i.e., it connects components together). The
icon for this is a small circle that does not have an arrow (since the current flowing in an
element can flow in or out of the node, or the forces at a joint act on all of the links connected
to the joint). The second connection type is the small arrow that is the standard Simulink
connection. In early releases of the tools, it was possible to connect the two types, but in
the current release, the connection is not possible. If you use one of the earlier versions of
the tools, be careful not to make incorrect connections.
Two blocks in the diagram are unfamiliar and the way in which the switch changes
is different, but even these blocks should be clear. To understand the way this tool is used,
let us go through the steps that are required to build this model in Simulink. The blocks we
need are in SimPowerSystems under the Simulink browser. Open this from the Simulink
Library browser. When you double click on the SimPowerSystems icon in the browser, the
Library browser should resemble Figure 8.3(a). The library for the circuit models consists
of the following:
• + Application Libraries,
• Electrical Sources,
• Elements,
• + Extras Library,
• Machines,
• Measurements,
• Power Electronics.
Figure 8.3. The SimPowerSystems library in the Simulink browser and the elements sublibrary.
The categories with the + sign in front have further subdirectories that we will discuss
later. The blocks that are required to build this model are in the Electrical Sources, Elements,
and Measurements libraries. To start, open a new Simulink window (as usual); then open
the Electrical Sources browser and find the “DC Voltage Source.” Now drag a copy into
the Simulink window.
Next, open the Elements library (Figure 8.3(b)). This library contains all of the basic
elements needed to create a circuit.
One of the clever attributes of the Elements library is the way that the three basic
elements (resistor, capacitor, and inductor) are a single icon. The options are to select a
series combination or a parallel combination of these elements. This minimizes the number
of basic elements in the library. The parallel combinations are the ninth and tenth icons in the
library, and the series combinations are the fourteenth and fifteenth icons. The model we are
building uses single elements for the resistor and capacitor, obtained from either the series
or parallel combinations. The single elements in the diagram are the result of placing the
series or parallel combinations into the diagram and then double clicking them to open the
dialog box. The dialog that opens has a pull-down menu that allows the selection of a single
R, L, or C; a combination of RL, RC, and LC; or the complete R, L, and C circuit element.
The dialog also allows the initial voltage (for a capacitor) and the initial current (for an
inductor) to be specified. Last, the dialog allows the user to select what data in the element
are measured. (The options are the current in the branch, the voltage across the branch, or
both.) If you need to plot one of the signals in the circuit, use the Multimeter block in the
Measurements library. As can be seen in Figure 8.1, this block stands alone in the diagram.
The outputs from this block are a subset (or all) of the measurements that are specified in
the dialog. To select what is an output, double click on the Multimeter block, highlight the
desired measurement in the left pane, and then click the double right arrow icon to place
the selected items in the right pane. Buttons allow you to reorder the measurements or to
remove a highlighted measurement from the list. If you have not created the circuit elements
yet, drag them into your diagram and set the parameter values to 100 ohms and 0.001 farads.
In the capacitor icon, specify that you want to measure the current and voltage of the branch.
The last part of the diagram is the switch. Two switch elements are available for use
in the model. The first is the circuit breaker that is in the Elements library, and the second
is the ideal switch that is in the Power Electronics library. For this model, either switch can
work in the circuit. We have selected to use the circuit breaker. Drag a copy of the breaker
into your diagram and double click the icon. The dialog that opens allows the selection of
• the on resistance for the breaker (note that this cannot be zero);
• the initial state of the breaker (0 is off and 1 is on);
• the specification of what is called a "Snubber" (a resistor and capacitor in series placed
across the switch; this either represents actual components in the circuit or models
the "stray" capacitance and the open-circuit resistance of the breaker when it is in the
off state).
We will see why this snubber is important later.
We now have all of the elements in the model, and we are ready to connect them.
If you look carefully at the various icons, you should see that the electrical elements have
an icon that is different from the Simulink icons that we have seen so far. First, they do
not have an arrow to indicate a preferred flow direction (since there is none in a circuit),
and second, the connection points are small circles rather than the familiar little arrow in
Simulink. The exception is the circuit breaker, where the input on the top left is the arrow
(the arrow goes to the letter “c” in the icon). This small distinction specifies when icons in
the diagram expect a Simulink (signal flow) input/output or are part of the topology of the
electrical circuit. Usually these inputs and outputs are ports that give access to the internal
workings of the element. (For example, the state of the switch is determined by whether or
not the input “c” is greater than 0, and for the electrical machine elements, outputs provide
Simulink values for the internal states of the machine, such as speed of the motor.)
Connect the circuit together to match Figure 8.1, and finally drag two Scope blocks
and a demux block from Simulink into the diagram to show the current and voltage across
the capacitor. When you run the simulation, the results should look like Figure 8.2. Once
again, draw the diagram neatly so that it is easier to follow. Toward this end, if you highlight
an icon, you can use the command Ctrl+R to rotate it.
8.1.1 How the SimPowerSystems Blockset Works: Modeling a Nonlinear Resistor
Modeling a circuit as a schematic diagram represents its topology, and not the underlying
differential equations. Kirchhoff’s laws applied to the circuit (we encountered them in
Chapter 6) give the differential equations. These laws state that in a connection of elements,
the voltages around any closed loop sum to zero, and the currents entering and leaving any
node sum to zero. In creating these sums, you must be careful to maintain the signs of the
voltages and the direction of flow of the currents. (For example, assume current flow out of
the node is negative and that a capacitor connected to a node has a current that flows out of
the node.) With this convention, the currents flowing into the node from other elements are
positive, and similarly for an inductor and resistor. One of the easiest numerical methods
for reducing any circuit to a set of differential equations is to create a state-space model
where the voltages across the capacitors and the currents in the inductors are the states. The
topological representation of resistors connected in series or parallel leads to algebraic equations. SimPowerSystems creates the state-space model automatically using this approach,
and it eliminates algebraic equations so the model is of the lowest possible order.
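For the circuit of Figure 8.1, this procedure produces a one-state model: Kirchhoff's voltage law around the loop gives V = Ri + v, and substituting i = C dv/dt yields dv/dt = (V - v)/(RC). As a sketch (assuming the Control System Toolbox is available for ss and step), the resulting state-space form is:

```matlab
% One-state model SimPowerSystems derives for the series RC circuit:
% dv/dt = -(1/RC) v + (1/RC) V, with the capacitor voltage v as the state.
R = 100; C = 0.001;
A = -1/(R*C);  B = 1/(R*C);
Cout = 1;      D = 0;            % output the capacitor voltage
sys = ss(A, B, Cout, D);
step(100*sys)                    % response to the 100 volt source
```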
When there are nonlinear elements in the circuit, SimPowerSystems isolates the nonlinear part of the model from the linear part and creates a small feedback loop in Simulink
that models the linear pieces (using state-space models) with the nonlinear parts as an equation (or equations) that manipulate the currents and/or voltages. The currents and voltages
are the values that come from the linear circuit. We will build a simple model to give some
insight into how SimPowerSystems does this. Therefore, let us build a model for a nonlinear
resistor. We will use this model in the next section, where we will create a simulation of an
electric train on a track.
The Electrical Sources library of the SimPowerSystems blockset contains two extremely useful blocks: the Controlled Voltage and Current Sources. Each of these blocks
generates an output (a voltage or a current) that is equal to the value of the Simulink signal
at its input. They are the way in which Simulink couples into SimPowerSystems and are the
dual of the Voltage and Current Measurement blocks in the Measurements library. To create
a device that is nonlinear, we need to use a measurement and a controlled source block.
The equations that relate the voltage and current for resistors (Ohm’s law), inductors,
and capacitors are
e = Ri,        e = L di/dt,        i = C dv/dt.
If the elements are functions of the current or voltage in the network (or if they are functions
of some other variable that is indirectly a function of the voltages and currents), then the
Figure 8.4. A Simulink and SimPowerSystems subsystem model for a nonlinear
or time varying resistor.
circuit is nonlinear. To build a circuit element where the component values (R, L, or C) are
functions of some other variable, we need to use the definitions above to implement them.
So let us build a model of a nonlinear (or time-varying) resistor.
The nonlinear resistor model is shown in Figure 8.4. (It is in the NCS library and is
called Nonlinear_Resistor.) Try to create the model yourself before you look at the
model as we created it. To create it you need the current measurement block (in the Measurements library of SimPowerSystems) and the Controlled Voltage Source in the Electrical
Sources library. You also need two electrical connection ports (in the Elements library).
The Current Measurement block has three ports: a pair for the electrical circuit labeled +
and – and a port that is a Simulink signal output that is the measurement of the current
passing through the block. This Simulink signal, the measured current, is multiplied by the
desired resistance. To create the product you need a Product block from the Simulink
Math Operations library.
The signals create a voltage output that satisfies Ohm’s law, as indicated in the diagram.
It should be clear how one could use this approach to create an electrical circuit element in
Simulink that has any desired nonlinear characteristics.
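For instance (a hypothetical characteristic, not one used in the model), the "Desired R" input could be driven by a resistance that grows with current, as a filament does when it heats up:

```matlab
% Hypothetical nonlinear characteristic for the variable resistor:
% resistance increases with the magnitude of the current, as in a
% filament that heats up under load. R0 and k are illustrative values.
R0 = 100; k = 25;
Rfun = @(i) R0 + k*abs(i);       % resistance as a function of current
Vout = @(i) Rfun(i).*i;          % voltage fed to the controlled source
```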
The final step in the creation of this block is to add a mask (icon) to the block that
makes the element look like a variable resistor. We do this by selecting first the entire model
(use Ctrl+A to do this) and then the Create Subsystem option from the Edit menu. After
creating the subsystem, right click on the Subsystem block and select the Mask Subsystem
option from the menu. This will bring up the Mask dialog that allows the selection of the
attributes of the mask. We do not need to specify any parameters for this block, so only the
Icon, Initialization, and Documentation tabs are used.
The masked icon we created looks like Figure 8.5.
To draw this picture, we needed to define the plot points. This is under the Initialization
tab, where the following MATLAB code is entered:
yvals = [0.5 0.5 0.45 0.55 0.45 0.55 0.45 0.55 0.45 0.55 0.5 0.5];
xvals = [0 0.3 0.325 0.375 0.425 0.475 0.525 0.575 0.625 0.675 0.7 1];
Wyvals = [0.75 0.75 0.5 0.6 0.6 0.5];
Wxvals = [0.2 0.5 0.5 0.47 0.53 0.5];
Chapter 8. Physical Modeling: SimPowerSystems and SimMechanics

Figure 8.5. Masking the subsystem with an icon that portrays what the variable resistor does makes it easy to identify in an electrical circuit.
The first set of x and y plot coordinates draw the resistor, and the second set draws
the wiper (the arrow). The actual drawing is under the Icon tab in the dialog, and it uses the
code
plot(xvals,yvals)
plot(Wxvals,Wyvals).
The points that are plotted use the convention that the lower left corner of the icon is
the point (0,0) and the upper right corner is the point (1,1). This will plot correctly if the
Units (the last entry in the Icon options of the Icon dialog) are “Normalized.” We also have
specified in this dialog that the Rotation of the Icon is fixed (i.e., it will not rotate if the user
clicks on the icon and types Ctrl+R).
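Because the icon uses "Normalized" units, a quick standalone check (written here in Python rather than MATLAB, purely as an illustration) confirms that every plot point from the Initialization code lies inside the unit square:

```python
# Plot points from the mask's Initialization code (normalized units:
# (0,0) is the lower-left corner of the icon, (1,1) the upper-right).
yvals = [0.5, 0.5, 0.45, 0.55, 0.45, 0.55, 0.45, 0.55, 0.45, 0.55, 0.5, 0.5]
xvals = [0, 0.3, 0.325, 0.375, 0.425, 0.475, 0.525, 0.575, 0.625, 0.675, 0.7, 1]
Wyvals = [0.75, 0.75, 0.5, 0.6, 0.6, 0.5]   # wiper (the arrow)
Wxvals = [0.2, 0.5, 0.5, 0.47, 0.53, 0.5]

# Every coordinate must stay in [0, 1] for "Normalized" icon units.
for seq in (xvals, yvals, Wxvals, Wyvals):
    assert all(0.0 <= v <= 1.0 for v in seq)
print("all icon points are inside the unit square")
```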
The last part of the mask dialog is the documentation. This tab in the dialog allows
the user to name the block and create a description of the block that appears after double
clicking on the block; it also provides the user with help on the block. To navigate through
all of these, open the NLResistor model and select the block. Double click on it to see the
mask, look under the Mask to see the model, and finally edit the Mask to see the various
dialogs. If you choose, you can add this block to the SimPowerSystems Elements library so
it will be available in the future. This is essentially how the Elements library was developed.
SimPowerSystems is extendable to include any type of electrical device, and the
library contains many such models. Take a few minutes to browse through the complete
library. Note that the Application libraries contain a set of wind turbine induction generator models, various induction and synchronous motor models, a set of DC drives that use one- or three-phase AC in a bridge rectifier, a set of mechanical elements that model a mechanical shaft (with compliance and damping), and a speed reducer. Many of these models contain
blocks from the Power Electronics devices library, where models for nonlinear electronic
elements such as SCRs, IGBTs, diodes, and MOSFETs are located. These models use Simulink S-function code in the background to do the calculations that are required to implement the voltage and current characteristics of the devices. The outputs from, and inputs to, these S-functions use the same approach that we used to create the nonlinear resistor.
8.1.2 Using the Nonlinear Resistor Block
Before we create a circuit with the nonlinear resistor, you should be aware of a few subtleties.
Figure 8.6. Circuit diagram of a simulation with the nonlinear resistor block.
The nonlinear resistor block introduces an algebraic loop in Simulink. This loop
automatically generates an error message that will appear in the MATLAB window every
time the simulation starts. To get rid of this message, select the Configuration Parameters
option in the Simulation pull-down menu after you build the model. The Diagnostics tab in
this dialog allows you to change the algebraic loop diagnostic from warning to none. Every
time you build a new model with this block, you will need to do this. If you are going to build
this model (and not use the version in the NCS library), then make this change. In addition,
the SimPowerSystems tool prompts you to use the Ode23t solver for the simulation. If you
do not do this, again an error message appears every time you run the model. The reason
this solver is suggested is that, for technical reasons having to do with the order of the
resulting differential equations, the electronic devices that have discontinuities (switches, diodes, thyristors, and related power electronics) all have a snubber circuit, which is a series RC circuit connected across the switched part of the device. The dynamics associated with the snubber make the system stiff. Ode23t is a stiff solver that minimizes the number of calculations at every time step, making it run fast and accurately. Thus, if you are creating
this model yourself, make sure that you select this solver.
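To see why the snubber makes the system stiff, compare the time constants involved. The RL branch values below are those of the circuit we are about to build; the snubber values are purely illustrative placeholders (the actual snubber parameters come from the device dialogs). A standalone Python sketch:

```python
# Stiffness comes from widely separated time constants.  Branch values
# are from the Figure 8.6 circuit; the snubber values are illustrative.
L, R = 0.1, 1.0           # RL branch: 0.1 H and 1 ohm
Rs, Cs = 100.0, 1e-6      # hypothetical snubber: 100 ohms, 1 microfarad

tau_branch = L / R        # slow mode: 0.1 s
tau_snubber = Rs * Cs     # fast mode: 0.0001 s
print(tau_branch / tau_snubber)   # roughly a 1000x separation -> stiff
```

A nonstiff solver would be forced to take steps on the snubber's fast time scale for the entire run; a stiff solver such as ode23t can step at the slow scale once the fast transient dies out.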
We can now try to create a circuit that uses the nonlinear resistor and demonstrate
how it works. The circuit, called NL_Res_Circuit in the NCS library, is in Figure 8.6.
Before we explore the model, we need to describe a new block in this diagram,
the Signal Builder block. This block allows the user to create an input that consists of a
piecewise continuous sequence of steps and ramps. The tool allows the user to specify when the discontinuities occur, the amplitude of the steps, and the slopes of the ramps.

Figure 8.7. Using the Signal Builder block in Simulink to define the values for the time-varying resistance.
Figure 8.7 shows how the user interface looks. In this figure the rightmost discontinuity of
the waveform has been selected (highlighted with a little circle), and consequently the time
(0.5) and amplitude (3) of this edge are displayed at the bottom of the graph. By clicking
and grabbing, any line can be moved (up and down if the line is horizontal, or left and right
if it is vertical). If a point is selected it may be moved up and down. The duration of the
waveform comes from the Axes menu (where the first choice is “Change Time Range”).
This waveform drives the nonlinear resistor block so the resistance will vary from 1 ohm
to 4 ohms over the first 0.1 sec. The resistor then changes from 4 to 5 ohms over the next
0.2 sec and then drops from 5 to 3 ohms over the next 0.2 sec, where the value will be constant
until the end (at 1 sec).
The circuit we created in Figure 8.6 consists of the nonlinear resistor in series with
the RL branch (the branch resistance is 1 ohm and the inductance is 0.1 Henry) with a
100 volt DC source that is applied at time t = 0.05 seconds. Ignoring the transient due to the inductance, the current in the RL branch would be 28.5 amps at the time the switch changes state; it would be 20 amps at 0.1 sec, and it would be 25 amps at the end (100 V across the final 3 ohms plus the 1 ohm branch resistance). The
simulation results are in Figure 8.8(a) and 8.8(b).
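These steady-state values can be checked directly: ignoring the inductor, the current is just 100 V divided by the sum of the nonlinear resistance and the 1 ohm branch resistance. A minimal standalone sketch in Python (not MATLAB code, and not part of the Simulink model), using the Signal Builder profile described above:

```python
def r_nl(t):
    """Piecewise-linear resistance from the Signal Builder:
    1 -> 4 ohms over [0, 0.1], 4 -> 5 over [0.1, 0.3],
    5 -> 3 over [0.3, 0.5], then constant at 3 until t = 1."""
    knots = [(0.0, 1.0), (0.1, 4.0), (0.3, 5.0), (0.5, 3.0), (1.0, 3.0)]
    for (t0, r0), (t1, r1) in zip(knots, knots[1:]):
        if t0 <= t <= t1:
            return r0 + (r1 - r0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside [0, 1]")

def i_steady(t, v=100.0, r_branch=1.0):
    """Steady-state current, inductor transient ignored."""
    return v / (r_nl(t) + r_branch)

print(round(i_steady(0.05), 1))  # 28.6 A when the switch closes (R = 2.5)
print(round(i_steady(0.1), 1))   # 20.0 A at t = 0.1 sec (R = 4)
print(round(i_steady(1.0), 1))   # 25.0 A at the end (R = 3)
```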
Figure 8.8. Using SimPowerSystems: Results from the simple RL network simulation. (a) Voltage across the RL branch; (b) current in the Nonlinear Resistor block.

The transient due to the inductance shows up quite clearly in the first graph of the voltage across the RL branch, and the nonlinear resistance is quite clear from the current flowing in the Nonlinear Resistor block. Note that this model is time varying and linear
because the resistance does not depend on any of the internal states in the model. (It follows
the time variation specified by the Signal Builder block.) However, it should be obvious to
you that the input to the nonlinear resistor could be any variable in the Simulink model.
Before we leave this discussion, note that the SimPowerSystems blockset manual has a description of a nonlinear resistor wherein Ohm's law is implemented using a controlled current source and a voltage measurement (instead of the voltage source and current measurement we used). It pays to look at that description in the user's manual (see [47]).
8.2 Modeling an Electric Train Moving on a Rail
As a practical application of the SimPowerSystems tools, we model an electric train moving
along a rail. We create the electrical circuits using SimPowerSystems, and we use Simulink
to model the dynamics of the train’s motion. The example illustrates how the combination
of these two different modeling tools into one diagram makes the simulation far easier to
understand.
A train moves along a set of rails called traction rails since they provide the friction
force that the wheels need to propel the vehicle. The train gets its power either from a third
rail (the power rail, usually found at the side of the traction rails) or from an overhead wire.
The power transfers from the rail or the overhead wire to the train using a connector that
slides along the rail or wire. The power circuit completes through the traction rails (i.e.,
the return path for the current uses the traction rails as the conductor). As the train moves
along the traction rails, the resistance between the power supplies and the train for both the
traction and power rails changes. For example if a train is traveling at 60 mph (88 ft per
sec) on a set of rails with a resistance of 10−6 ohms per ft, then the resistance of the rail
from the supply to the train is 0.001 ohms when the train is 1000 ft away from the source. If the train is moving away from the source, the rail resistance will increase by 0.000088 ohms every second. The voltage and current flowing in the motor determine the train's speed, and the rail resistance in turn determines them. This means that the interaction of the train propulsion with the track geometry is nonlinear. Thus, modeling these interactions requires extensive use of the nonlinear resistor block we created.

Figure 8.9. Simulink and SimPowerSystems model of a train and its dynamics.
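The rail-resistance arithmetic is easy to verify in a couple of lines (standalone Python, using only the numbers quoted in the text):

```python
# 1e-6 ohms per foot of rail; 60 mph is 88 ft/sec.
r_per_ft = 1e-6
speed_ft_per_sec = 88.0

resistance_at_1000_ft = r_per_ft * 1000.0        # rail 1000 ft long
rate_ohms_per_sec = r_per_ft * speed_ft_per_sec  # growth while moving away
print(f"{resistance_at_1000_ft:.3f} ohms, {rate_ohms_per_sec:.6f} ohms/sec")
# prints: 0.001 ohms, 0.000088 ohms/sec
```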
To start the modeling, we first need to create a DC or AC traction motor. We assume
that the system uses DC, and we further assume that a control system keeps the supply
voltage constant. We also assume that the traction motors are controlled and that we are
modeling an urban transit system where each of the cars in the train has identical traction
motors. We will lump all of these traction motors together into one equivalent motor. The
model of the traction motor uses a masked subsystem where all of the parameters needed
to describe the entire train drive motors are in the mask dialog.
Open the model in Figure 8.10 from the NCS library, using the command
Train Simulation. The train subsystem model is the green block in this diagram (labeled “Train”), and to view it you need to select “Look Under Mask” from the menu that
opens when you right click on the block. (The Train subsystem is in Figure 8.9.) This train
model consists of a combination of Simulink and SimPowerSystems blocks. These blocks
are in color, but here they are in black and white. The color makes it easier to understand the
diagram. The color keys each part of the diagram to its function. The SimPowerSystems
blocks are in green in the model, the control system that controls the acceleration of the
train is in light blue, and the dynamics of the motion are in cyan. We describe the blocks,
and their interconnections, beginning with the power electronics blocks (at the lower left of
the figure). We will work our way clockwise around the diagram.
We have already seen in Section 3.5 how to model a motor. The electrical and mechanical sides of a motor are

L_R (di_R/dt) = -R_R i_R + V_in - V_b

and

dω_M/dt = (1/J_w)(K_m i_R - B_w ω_M).
These equations are not required in this model since SimPowerSystems will create them
for us. The circuit for the motor consists of the input ports (from the Elements library) and
the inductance and resistance of the train lighting, heating, ventilation, and air conditioning
systems connected across the ports. This load is in parallel with the series circuit consisting
of the armature inductance, armature resistance, and the back emf from the motor. In this
model, the back emf is a voltage source whose voltage is the product, using the gain block,
of “Vconst” and the train speed. In building this motor model, we deduced the rotational rate
of the motor from the train’s speed and the gearing that connects the motor to the traction
wheels. The diagram was cleaned up using “GoTo” and “From” blocks, so the train speed
is sent into the gain using the “GoTo” at the top right of the diagram.
Another controlled voltage source is also in series with the armature circuit. This
source models the DC voltage converter that controls the acceleration of the train. This
controller is a simple PI control of the type that we have discussed before, so we will not go
into details. The only important aspect of the control is that it essentially implements the
nonlinear resistor block (although the block is not explicitly used). This approach emulates
the way the actual converter on the train operates. We use the measurement of the current
in the armature for both the train controller and the force that propels the train. The right
side of the diagram implements this. The force is the result of multiplying (through the gain
block) the motor parameter “ForceConst” and the current in the armature.
From the armature current measurement on, the diagram consists of only Simulink
blocks. These implement the acceleration from the sum of the forces on the train. These
forces are
• the external force "F," which comes from the force of gravity acting when the train is moving on a grade;
• the friction force (proportional to the speed);
• the aerodynamic drag (proportional to the square of the speed);
• the force from the motor (traction motor force).
The summing junction before the gain block (with the gain 1/TotalMass) creates the total force on the train, and the gain creates the acceleration. As usual, we integrate the
acceleration and speed to get the entire state for the train. The state vector uses the “Mux”
block to send the state out of the subsystem. The other outputs from the subsystem are the
motor current and, of course, the connections to the motor.
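The force balance these blocks implement can be sketched as a plain function. The parameter names and values here are illustrative placeholders, not the names used in the model's mask dialog; only the structure (the signs of the force terms and the 1/TotalMass scaling) follows the text:

```python
# Acceleration of the train from the sum of forces listed above.
# Friction and aerodynamic drag oppose the motion; grade_force is the
# external force "F"; all numeric values are placeholders.
def train_accel(v, motor_force, grade_force,
                friction_coeff=50.0, drag_coeff=2.0, total_mass=4.0e5):
    total_force = (grade_force
                   - friction_coeff * v        # friction ~ speed
                   - drag_coeff * v * abs(v)   # drag ~ speed squared
                   + motor_force)              # traction motor force
    return total_force / total_mass

# At rest on level track the acceleration is just motor force over mass:
print(train_accel(0.0, 4.0e5, 0.0))   # prints 1.0
```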
This entire motor model is a masked subsystem. The train model connects to the simulation of the track in the diagram using the SimPowerSystems connection icons. The final model is in Figure 8.10.

Figure 8.10. Complete model of the train moving along a track. (The annotations in the figure place substations 1 through 4 at 42,543, 47,543, 55,543, and 60,543 ft along the track.)

The model contains four DC voltage sources and their series
source resistances. These four sources represent power substations along the track. We
model the train moving between substations 2 and 3 (i.e., in the middle of the various
substations). Four copies of the nonlinear resistor model the variable resistance between
the train and the adjacent substations. We assume the train is moving to the right, so the
nonlinear resistors to the left of the train (denoted “West” in the annotation) start at 0 ohms
and rise to their maximum value when the train is at substation 3 (on the right). This
model will not be correct when the train position exceeds the location of this substation (the
topology of the model will change). Therefore, we calculate where the train is relative to
this substation and stop the simulation when the train position reaches it. The block that
does this also calculates the resistances for the nonlinear resistor blocks.
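What the subsystem computes can be sketched as a plain function. The substation positions are the figure annotations (substation 2 at 47,543 ft, substation 3 at 55,543 ft); the per-foot rail resistance reuses the 10^-6 ohms-per-foot value quoted earlier and is otherwise illustrative. Standalone Python, not the actual Simulink blocks:

```python
# Rail resistances seen by the train as a function of its position,
# plus the stop flag raised when it reaches substation 3.
POS_SS2, POS_SS3 = 47543.0, 55543.0   # feet, from the Figure 8.10 annotations
R_PER_FT = 1e-6                        # ohms per foot (value used earlier)

def rail_resistances(train_pos):
    r_west = R_PER_FT * (train_pos - POS_SS2)  # segment behind the train
    r_east = R_PER_FT * (POS_SS3 - train_pos)  # segment ahead of the train
    stop = train_pos >= POS_SS3                # model topology breaks here
    return r_west, r_east, stop

r_west, r_east, stop = rail_resistances(47543.0)
print(f"{r_west:.3f} {r_east:.3f} {stop}")   # prints: 0.000 0.008 False
```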
All of the blocks in this model are SimPowerSystems blocks except for the GoTo and From blocks, the Scope blocks, and the Signal Generator blocks that create the slope over which the train operates and the acceleration commands that are sent to the train. Running the simulation creates plots of the four track resistances, the state (position, speed, and acceleration), the currents in the four nonlinear resistors, and the current that the train is drawing from the track.
The Simulink subsystem that computes the resistances and generates the stop command is in the top right of the diagram. It has the Simulink blocks shown in Figure 8.11.
Figure 8.11. The Simulink blocks that compute the rail resistance as a function of the train location.
Figure 8.12. Simulation results: (a) rail resistances as a function of time; (b) currents in the train and in the traction and power rails.
When the simulation opens, a MATLAB M-file called Train_Data runs (using the
standard Model Properties callback). Edit this file and look at how it is constructed. You
can increase the drag on the train by changing the variable Tunnels in MATLAB from one
to two. You can also change any of the parameters.
The simulation creates the graphs in Figure 8.12(a) and (b). The leftmost is the
nonlinear resistor values as a function of time (and since the train is moving, as a function of
the location of the train). The rightmost picture is the currents in the nonlinear resistors and in the train. The five graphs, from the top to the bottom, are the current in the traction and power rails to the left (west) of the train, the train current, and the currents in the traction and power rails to the right (east) of the train.
To get familiar with this model, open it and try changing some of the parameters. In
addition, you can use the command generators to change the slope that the train is on and
the acceleration command.
We have only touched on a few of the many possible models that SimPowerSystems
can create. The user’s guide and the demos provided with the tool should be the next step
if you want to become an expert with it.
8.3 SimMechanics: A Tool for Modeling Mechanical Linkages and Mechanical Systems
Just as the SimPowerSystems tool allows a picture of an electrical network that interacts with
Simulink, the SimMechanics tool allows you to draw a picture of a complex linkage with
multiple connected coordinate systems. The model you create will then run in Simulink. It
also can use Simulink inputs, and the motions of any of its components are available for use
in Simulink. It is also possible to combine SimPowerSystems and SimMechanics models.
SimMechanics has a look and feel that are similar to SimPowerSystems. However,
there are significant differences. When mechanical elements are connected to form a “machine,” there are physical attributes that need to be specified. First is the coordinate system
that the entire model will use. The second is the coordinate system that applies to each of
the bodies in the device. SimMechanics assigns a coordinate system to each body (usually
at the center of gravity) and to every point on the body that can be connected to another
part of the machine. (There is a large variety of possible connections: joints that allow
rotations in one, two, or three directions and joints that are rigid, called "welds.") Entering these coordinates means that there are a lot more data associated with the elements in SimMechanics than was the case for SimPowerSystems. One rule is still inviolate: no connections are permitted between a SimMechanics port (a connection with a small circle on it, just like SimPowerSystems) and a Simulink signal. These connections must be made with measurement and controlled source blocks (as was the case for SimPowerSystems) that are in the Sensors and Actuators library in SimMechanics.
The entire SimMechanics library looks like
the figure at the left. The Bodies sublibrary contains four blocks. The first specifies a body that is
the basic element of the mechanical system. The
second block in the library is the Ground block. It
specifies the coordinates of the base of the device.
It is possible for the Ground to represent the ground
(for example, when you want to model the trajectory of a projectile). Two entries are needed in this block: the coordinates of the origin and
a check box that will create an output from the Ground block that you can use to connect
the machine. Every SimMechanics model must have a Ground block.
The third library block (Machine Environment) specifies the environment for the
device (i.e., the value of the gravity vector, the values used for the differential equation
solver, etc.). Solvers for SimMechanics exist for solving the differential equations forward,
as in Simulink, or backward. The backward solver uses the motions of each element to find
what forces would create them. Kinematical solvers give the steady state solution without
going through the differential equations. Last, a trimming solver does the calculations
needed for the simulation to start with all of the elements at rest (so no motion will occur
in the device until an external force or torque is applied). Every SimMechanics model must
have an Environment block.
When you double click on the Environment block, you will see that the dialog opens
with the tab called “Parameters” selected. All of the above settings (and a few more) are
accessible with this tab. You also will see that this is one of four tabs across the top. The second tab, "Constraints," allows you to specify the solver tolerances used when a hard constraint on the motion of a body or joint occurs. Constraints are usually limits that the motions must not exceed. The
third tab, “Linearization” allows you to select the size of the perturbation used to determine
the linear model. (Chapter 2 contains a discussion of linearization and a description of
perturbation.) The last tab, “Visualization,” allows you to invoke the SimMechanics tool
that draws and animates a picture of the device as the simulation progresses. Make sure
that you check this box before you close the block; this will show the motion of the device
during the simulation.
The Constraints and Drivers library has a large variety of constraints that the bodies
and joints in the simulation model can have. The idea behind a driver is to force a body or
joint to have predetermined displacements or angles as a function of time; they provide the
inputs when we compute the forces that cause a particular motion.
The Force elements in the library are blocks that add a spring or a damper to a body to
cause the linear or rotational motion to have stored energy or to lose energy due to friction.
Without the blocks, the model would require a separate sensor and actuator connected
through Simulink to provide the spring and damping forces.
The meat of SimMechanics is the Joints library (which is comparable to the Elements
library in SimPowerSystems). There are 22 different joints in this library. We will discuss
a few of them, but if you want to use this tool in models that are more complex, you should
be aware of and understand each of these blocks.
The Sensors and Actuators blocks are the means used to allow Simulink to talk and listen to SimMechanics models; these blocks provide the connections to and from Simulink.
To understand how SimMechanics works we replicate the development of the clock
model in Chapter 1 using SimMechanics. We start by modeling the pendulum.
8.3.1 Modeling a Pendulum with SimMechanics
One of the easiest models to create is a single joint and body configured to be a swinging
pendulum. Remember that in Chapter 1 we created this model using Simulink alone, so this
exercise is an interesting contrast of the differential equation development and the pictorial representation provided by SimMechanics.

Figure 8.13. Modeling a pendulum using SimMechanics.

Figure 8.14. Coordinate system for the pendulum.

The pendulum model we will develop is in
Figure 8.13. You can open the SimMechanics model (called SimMechanics_Pendulum in
the NCS library), or you can launch SimMechanics and try building the model from scratch
yourself. Since we will take you through all of the steps to create this model, it is probably
better to build the model yourself.
Because every SimMechanics model needs an Environment and Ground block, open
an empty Simulink window and drag one of each of these blocks from the SimMechanics
library into it. Then connect the Environment and Ground blocks together. Double click on
the Environment and Ground blocks to view their dialogs. The Environment block opens with the default gravity vector set to −9.81 m/sec² in the negative y direction. The default value for the Ground block coordinates (defining the coordinate system for the entire mechanical device) is [0 0 0]. These defaults mean that the coordinates for the environment are as shown in Figure 8.14. We add the pivot for the pendulum and the pendulum body using this World coordinate system, as shown in the figure.
The pivot for the pendulum is a revolute block from the Joints library. The revolute
block allows rotations around a single axis (a single degree of freedom in the Base). The
axis of revolution can be referenced to the World, to the body that the joint is connected to
(called the Base), or to the body that rotates relative to the base (called the Follower). All
joints connect to two bodies. (In this case, the Base or B port connects to the Ground; the pendulum, the second body, connects to the Follower or F port.) The coordinate system
for the Pivot is the World coordinate system. (Open the Revolute dialog for the Pivot block
and look at the Axes tab, where the rotation axis is set to the z-axis and the coordinates,
called “Reference CS” in the dialog, are the World coordinates.)
A joint can have sensors that measure the rotational angle, velocity, etc. and can
have forces or torques applied. To allow this, the Joint blocks have a dialog that allows
selection of a different number of sensor/actuator ports. In our model, we need a port for
sensing the motion and a port to apply the damping force, so the number of ports is set to
two. To make the measurements we need a Body Sensor from the Sensors and Actuators
library, and to create the damping force, we need a Joint and Spring block from the Force
Elements library. The joint sensor dialog for the model has the check boxes selected that
provide measurements of the angle and angular velocity of the joint, using degrees as the
angle units. The Joint Spring and Damper block is configured to only provide damping.
(The spring constant is set to 0, and the damping coefficient is set to 0.00005.) This block
creates equal and opposite torques for each element connected through the joint.
The last block needed to describe the physical model is the Body block that models the
pendulum. This block connects to the joint follower port (the F port). So grab a Body block
and drag it into the model. Any number of bodies may be in the model, and SimMechanics
automatically generates the equations of motion. The tool even takes care of computing
the rotations of the bodies relative to each other and the base, using one of the methods we
developed in Chapter 3 (Euler, angle-axis, or quaternion). For now we need only to enter
the body mass properties (mass and inertia) and the location of the center of gravity and
the attachment point to the pivot. Open the dialog on the body and set the mass to 1 kg
and the inertia matrix to the diagonal matrix with 0.001 kg*m^2 along the diagonal. The
center-of-gravity (CG) coordinates in the Ground Coordinates block are set to the origin,
and the CG coordinates in the Pendulum block are set to [0.05 −0.2485 0]. This corresponds to a slight offset in x (5 cm) for the pendulum. (This will cause the simulation to start with the pendulum moving because of the gravity force.) The value of the CG displacement of 0.2485 m along the negative y axis will make the pendulum move with a period of 1 sec. (Remember that the angular frequency of the pendulum is √(g/l), so the 1 sec period can be solved for the pendulum's length to give 0.2485 m.)
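That length is easy to verify: with the period T = 2π√(l/g), a 1 sec period gives l = g/(4π²). A short standalone check in Python:

```python
import math

# Pendulum length for a 1 sec period: T = 2*pi*sqrt(l/g)  =>  l = g/(4*pi^2)
g = 9.81
l = g / (4 * math.pi ** 2)
print(round(l, 4))   # 0.2485 m, the CG offset used in the Body dialog
```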
Connect all of the blocks as shown in the diagram above, making sure that you
understand why these connections are all required. Also, make sure that all of the dialogs
that we specified along the way are in place and that you have checked the animation box
in the Visualization Tab of the Environment block. Finally, connect a scope block (with
two inputs) to the joint sensor. We have set the simulation time to 100 sec, and using the
Configuration Parameters dialog, we have set the maximum step size for the solver to 0.1
sec. You can start the model now.
When the simulation starts, the animation window will appear. You should be able to watch the simulation evolve both through the animation
and the graphs displayed in the Scope. This ability will be there no matter how complex the mechanical system you have modeled.

Chapter 8. Physical Modeling: SimPowerSystems and SimMechanics

Figure 8.15. Simulating the pendulum motion using SimMechanics and its viewer. [Scope traces: pendulum Position and Velocity versus Time, 0 to 100 sec.]

(A snapshot of the animation and the Scope plots is
in Figure 8.15.) Remember that this simulation, unlike the one we developed in Chapter 1,
did not require us to develop the equations of motion. Furthermore, it was created quite
rapidly. (Of course, if this model was your first time using SimMechanics, there was a learning curve to go through, but you can easily appreciate how much time the tool can save when you are modeling complex linkages, machines, or dynamics.)
8.3.2 Modeling the Clock: Simulink and SimMechanics Together
The simple pendulum model can be extended to redo the simulation of the clock from Chapter 1. The approach used in the SimPowerSystems tool, where a Simulink signal is converted into a physical value (a voltage or a current), is also the physical modeling approach here. For SimMechanics, the physical measurements can be quite numerous, as we have already seen. The SimMechanics actuators can apply a torque, a force, or a motion based on any combination of measurements and Simulink calculations that we want. The Damping force block for the pendulum is an example of this; it is really a masked block with both an actuator and a sensor built in. The calculations that provide the desired damping and spring force are in a Simulink block.
To create the clock model, we will actually build the damping, so we will not need the
Damping block. We also will use the Escapement force block that was used in Chapter 1,
and finally we will build a Signal using the “signal builder” to give the pendulum an initial
displacement. The initial displacement will allow us to place the center of gravity of the
pendulum at the pivot location in both x and z. (Since we will be “kicking” the clock
pendulum to start it, we will not need the offset of 0.05 specified for the pendulum CG to
get the simulation started.)
It should be easy for you to add the blocks needed to create the clock simulation, so try
to do so. The model in Figure 8.16 is SimMechanicsClock in the NCS library. The joint
sensor measures both the angle and the angular velocity of the pendulum; these are inputs to
the escapement block. The angle and angular velocity determine the forces applied by the
escapement and the angular velocity determines the damping force. The signal generator is
a pulse of amplitude 1 with duration 0.1 sec applied at the start of the simulation.
[Block diagram: Ground Coordinates and a Pivot joint, the Pendulum body, a Joint Actuator, and a Joint Sensor measuring angular position and angular velocity. The sensed angle and rate feed the Escapement Model (through a gain of 1.6) and a Damping Coefficient gain of 0.0005, which drive the Joint Actuator; a Signal Builder block provides the initial pendulum "kick".]
Figure 8.16. Adding Simulink blocks to a SimMechanics model to simulate the clock as in Chapter 1.
[Plot: pendulum angle (degrees) versus time, 0 to 30 sec.]
Figure 8.17. Time history of the pendulum motion from the clock simulation.
Most of the parameters in the model remain the same, except that in Chapter 1 we
used radians as the units, and here we use degrees. Thus, the escapement values “Penlow”
and “Penhigh” are 5.6 and 8.6, respectively, and the gain for the escapement force increases
to 1.6 (in the Gain block). The signs of the damping and the escapement forces applied to
the joint are negative since they reference the follower. The result of the simulation from
this model is in Figure 8.17.
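If you want to compare these settings with a radian-based model, the conversion is a one-liner (the radian values below are computed here, not quoted from Chapter 1):

```python
import math

# The SimMechanics clock works in degrees; converting the escapement
# thresholds to radians shows what a radian-based model would use.
penlow_deg, penhigh_deg = 5.6, 8.6
penlow_rad = math.radians(penlow_deg)
penhigh_rad = math.radians(penhigh_deg)

print(f"Penlow:  {penlow_deg} deg = {penlow_rad:.4f} rad")
print(f"Penhigh: {penhigh_deg} deg = {penhigh_rad:.4f} rad")
```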
8.4 More Complex Models in SimMechanics and SimPowerSystems
There are many examples of complex motions in the demos that ship with SimMechanics, and it is very useful to look at them. Before we leave these tools, though, there are two examples worth looking at. In Chapter 7, we talked about modeling partial differential equations, and we said that one could model vibration using the same approach we used to model the heat flow in the house. We also said that we could build a more complex model of a home heating system. Let us illustrate both of these now.
The first model is a modification of the vibrating string model demo that is included
with SimMechanics ([2] was the source for the model and the data). A modified version of
this model is the model SimMechanics_Vibration in the NCS library. Open this model
and run the simulation. The model uses 20 finite linkages to model a steel string (like a guitar
string) that is 1 m long. Each element is a two-degree-of-freedom representation of the string's tension and rotation: a small mass that can rotate and translate (while transferring the resulting forces from element to element). Browse
through the model and locate each of the elements. Look at the dialogs that each element
has to see what the data are, and then look at their values using MATLAB. The data are stored in a MATLAB structure called "fBeamElement" that has the components

      material: 'Mild Steel'
       density: 7800
       youngsM: 210000000
      diameter: 5.0000e-004
        length: 0.0500
        cgLeng: 0.0250
       beamDir: [1 0 0]
       Inertia: [3x3 double]
    matDamping: 5.0000e-004
The inertia matrix for each of the elements is

    1.0e-007 *
       0.0005        0        0
            0   0.2500        0
            0        0   0.2500
The model (Figures 8.18 and 8.19) looks complicated, but it is simply the same block
replicated 20 times. The 20 inputs allow you to pluck the string at any of the 20 lumped
locations (every 5 cm) along the string. It is interesting to do this while viewing the vibration
to see how many of the various harmonics are excited as you pluck the string near the edges
versus near the center. Several examples are in the output of SimMechanics shown in
Figure 8.20.
It is easy to modify this model to make it have more elements. We explore more
attributes of this model in Exercise 8.3.
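The lumped-element idea behind the model can also be sketched numerically. The fragment below (Python rather than SimMechanics, with the tension and element mass chosen purely for illustration, not taken from fBeamElement) builds the transverse stiffness matrix for 20 point masses under tension and extracts the modal frequencies; their near-integer ratios are the harmonics excited by plucking:

```python
import numpy as np

# Lumped sketch of a 1 m string as 20 point masses under tension.
# All numbers here are illustrative assumptions.
n = 20             # number of lumped masses
L = 1.0            # string length, m
tension = 100.0    # N (assumed)
mass = 7.7e-5      # kg per lumped element (assumed)
dx = L / n         # spacing, m

# Transverse motion: m*y_i'' = (tension/dx)*(y_{i-1} - 2*y_i + y_{i+1}),
# with both ends fixed, gives a tridiagonal stiffness matrix.
K = (tension / dx) * (2.0 * np.eye(n)
                      - np.eye(n, k=1)
                      - np.eye(n, k=-1))

# Modal (natural) frequencies of the lumped string.
w2 = np.linalg.eigvalsh(K / mass)
freqs_hz = np.sqrt(np.sort(w2)) / (2.0 * np.pi)
print("lowest three harmonics (Hz):", np.round(freqs_hz[:3], 1))
```

A pluck near an end projects onto many of these modes at once, while a pluck at the center favors the odd harmonics, which is what you see in the viewer.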
The last model we will build in this chapter uses a more complex electrical circuit. We have modeled the heat flow in a four-room house (FourRoomHouse in the NCS library, shown in
[Block diagram: the Machine Environment block, a Signal Builder ("String Pluck") driving a Y-force StringPlucker, ground and joint connections at the left and right ends, and the String Model subsystem with Pluck ports 1 through 20. The model's annotations read: "This is a model of a 1 meter long string made of 'mild steel' that is plucked and then vibrates in a uniform gravitational field. There are 20 elements all together, and each element is 5 cm long. The elements are identical (they consist of a spring constant and damping in each of the 2 degrees of freedom: 1 rotation that transfers torques from element to element and 1 translation to transmit tension from element to element). Each of the 5 cm segments can be plucked by changing the connection point for the 'External Pluck'; the Pluck ports on the model are numbered from left to right. The mass properties for the elements are separately defined in the 'Mask' for each of the 20 elements. Connect to any of the 20 Pluck ports on the String Model to see what happens to the string motion. See Klaus-Jurgen Bathe, Finite Element Procedures, Prentice Hall, 1996, for details on this model. This model is a modification of the Vibrating String model that ships with SimMechanics."]
Figure 8.18. SimMechanics model SimMechanics_Vibration.
[Block diagram: Left End and Right End connection ports and 20 identical Wire Element subsystems (Wire Element 1 through Wire Element 20) chained Base-to-Tip, each with its own Pluck input port (Pluck1 through Pluck20).]
Figure 8.19. String model (the subsystem in the model above). It consists of 20 identical "spring mass" revolute and prismatic joint elements.
Figure 8.20. String simulation. Plucked 15 cm from the left, at the center, and 15 cm from the right.
[Block diagram: a Heat Source feeding, through a Piping/Duct Thermal Loss, four identical room circuits. Each room has a Heat Exchanger with its own thermal loss, a wall thermal loss, room and wall thermal capacitances (Thermal C and Wall C), and a Wall to Outside element; an Outside Temperature source closes the circuit.]
Figure 8.21. Using SimPowerSystems to build a model for a four-room house.
Figure 8.21). In addition to having four rooms instead of the two we created in Chapter 6, this model also includes the thermal capacity of the walls. By modeling the wall capacity, we can use a measurement of the wall temperature to account for the outside temperature. The model is not complete, and in the exercises at the end of the chapter, we
ask that you finish the model (Exercise 8.4). The model uses the SimPowerSystems tool.
The rooms are all modeled using an identical circuit. This allows you to draw one room
and then copy the circuit elements to make the next room, and so on. Look carefully at the
way Simulink creates the circuit element names. Because Simulink does not allow identical
names, copying an element causes Simulink to change the number at the end of the name
to the next sequential number. For this reason, copying the circuit elements automatically
changes the name to the next room. (Try making a fifth room to see how this works.)
We pointed out when we built the two-room model in Chapter 6 that the equivalent R-values for the various losses (in particular for the ceilings, floors, walls, and windows) include an approximation to the heat loss from convection (i.e., from the flow of air in and out of the various spaces). Unfortunately, this approximation does not account for the effect of increased convection when the outside winds increase in speed. We can include this effect using the nonlinear resistor model we created in Chapter 6. It is simply a matter of changing the resistor at the right of each room to one of the nonlinear resistor blocks. Note that in this model the resistance will be time varying (a function of the wind speed rather than of the circuit variables), and therefore the block remains linear. We leave this as part of Exercise 8.4.
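To see why a wind-dependent resistance still leaves the block linear in the circuit variables, here is a minimal single-room sketch (Python, with all numbers invented for illustration rather than taken from FourRoomHouse): the resistance depends on an external wind signal, never on the room temperature itself.

```python
import math

# One-room RC sketch with a wind-dependent wall resistance.
# All values are illustrative assumptions.
C = 2.0e5         # room thermal capacitance, J/degC
R0 = 0.01         # still-air wall resistance, degC/W
T_out = 0.0       # outside temperature, degC
Q = 3000.0        # heater input, W

def wall_resistance(wind_mps):
    # Convection increases with wind, so the effective resistance falls
    # (an assumed relationship for the sketch).
    return R0 / (1.0 + 0.2 * wind_mps)

T_room, dt = 20.0, 1.0
for step in range(3600):                          # one simulated hour
    wind = 2.5 * (1.0 + math.sin(2.0 * math.pi * step / 1800.0))
    # Linear in T_room; only the coefficient varies in time.
    dT = (Q - (T_room - T_out) / wall_resistance(wind)) / C
    T_room += dT * dt
print(f"room temperature after one hour: {T_room:.1f} degC")
```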
The models created in this chapter show how Simulink incorporates many physical modeling concepts. We have not shown all of the uses of these tools, nor have we shown how the SimMechanics tool can use a CAD tool to build the SimMechanics model automatically from the CAD drawings. If you need to pursue this further, see [46].
Newer physical modeling tools are useful in the automotive industry and for modeling hydraulic systems. In addition, The MathWorks is developing new tools for other physical systems. Collectively, these tools will allow engineers to rapidly create simulation models.
We are almost at the end of our tour of mathematical modeling with Simulink. The
next chapter tries to show, with a relatively simple example, how to use Simulink in the
overall design and development process. The key to doing so is the development of a process
that makes it possible for all of the people working on the design of a system to interact
using the same environment. By wrapping a process around Simulink and its various tools,
a company can easily create new designs. One of the features is that the staff can maintain a clear path to the corporate legacy (intellectual property), so previous designs can easily and seamlessly be incorporated into newer designs. The models also provide a simple way to document designs and, last but not least, a way of generating code that will satisfy the design requirements.
The next chapter takes us into the realm of product development and all of the issues
associated with ensuring that the process creates a reliable and accurate device at the lowest
possible cost.
8.5 Further Reading
The nonlinear resistor model and the model of the train are from [15]. I developed these
to investigate a method for adding high temperature superconducting wire for power at the
center of the Bay Area Rapid Transit (BART) Trans-Bay Tube between Oakland and San
Francisco. This report used a SimPowerSystems model for multiple trains operating over
various segments of BART. It allowed a preliminary design of the power distribution system
and allowed experiments that investigated alternative approaches.
Once again, we are becoming aware of the importance of energy conservation in the world. The model of the four-room home is useful for answering personal questions about the cost effectiveness of adding insulation to your home (particularly when there are tax advantages for adding insulation). It is fun to create a model of your own home and play the "What if?" game with insulation and other energy efficiency improvements.
In [8], The MathWorks’s technical staff describes a method for creating a SimMechanics model that computes the motion of a flexible beam. The paper describing the method
and the models is available from The MathWorks’s web site. In the paper that accompanies
the model, the authors show an alternative method that uses Simulink and the LTI modeling
approach from the Control Systems Toolbox (see Chapter 2). The model they use is the
state-space model.
The steps that one follows to use this model provide insight into the power of
Simulink’s automatic vectorization, so it is a useful exercise.
The last part of the reference develops the equations for a multi-degree-of-freedom
model in some generalized coordinate system q. When a finite element modeling approach
is used, the coordinates that result are usually a mix of translations and rotations for each
element—anywhere from all three translations and three rotations at an element to perhaps
one or two at each element. The masses (or inertias) of the elements, the damping forces,
and the spring force between elements are represented by generalized mass matrices M,
damping matrices C, and stiffness matrices K, so the dynamics of all of the elements are
encapsulated by the differential equations

    M d²q/dt² + C dq/dt + K q = F.
Any finite element modeling tool will create this general form for a vibrating system. The finite element codes also will reduce this model to the form

    d²ζ/dt² + 2ZΩ dζ/dt + Ω²ζ = T⁻¹F,

where Ω and Z are diagonal matrices of modal frequencies and damping ratios.
This is done using the orthogonal transformation T, which simultaneously diagonalizes the symmetric matrices M, C, and K. The first step is the substitution

    q = Tζ.

Then

    M d²(Tζ)/dt² + C d(Tζ)/dt + K Tζ = F.
Multiplying this equation by the inverse of T (T⁻¹) gives

    T⁻¹MT d²ζ/dt² + T⁻¹CT dζ/dt + T⁻¹KT ζ = T⁻¹F,

or

    d²ζ/dt² + 2ZΩ dζ/dt + Ω²ζ = T⁻¹F.

This last step comes from the fact that the matrix T diagonalizes all of the matrices in the first equation. The algorithm that does this is the qz algorithm; it is a built-in routine in MATLAB (see the MATLAB help).
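As an illustrative cross-check of the diagonalization step (Python with NumPy instead of MATLAB's qz, using made-up 2x2 matrices and treating only M and K; the damping matrix also diagonalizes only under proportional damping):

```python
import numpy as np

# Toy illustration of simultaneous diagonalization of a mass and
# stiffness matrix. These 2x2 matrices are invented for the example.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])

# Reduce the generalized problem K*v = w2*M*v to a standard symmetric
# eigenproblem using the Cholesky factor of M.
Lc = np.linalg.cholesky(M)
Linv = np.linalg.inv(Lc)
w2, Q = np.linalg.eigh(Linv @ K @ Linv.T)   # w2 = squared modal frequencies
T = Linv.T @ Q                              # the transformation q = T*zeta

# T is mass-normalized: T'*M*T = I and T'*K*T = diag(w2).
print(np.round(T.T @ M @ T, 10))
print(np.round(T.T @ K @ T, 10))
```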
Both the SimPowerSystems and the SimMechanics user’s guides [46], [47] are excellent references for creating models and for understanding the demonstrations that are part
of the products. Consider them an extension of the material in this chapter.
Exercises
8.1 Modify the model in Section 8.1.2 to make it nonlinear. For example, measure the voltage across the RL branch and add a nonlinear resistor that has the following value:

    R(t) = R0 + ΔR VRL,

where R0 = 1 ohm, ΔR = 0.1 ohm, and VRL is the voltage across the RL branch.
8.2 Build a simple Simulink model of an electric car using the DC motor model in the
train simulation. Set the mass of the car to 2000 pounds. Investigate the amount of
power and energy used to accelerate the car from 0 to 60 mph in 10 sec. (Remember
that power is the product of the current and voltage used by the motor and that the
energy used is the integral of the power.) Use the air drag numbers in the train model.
Try different rates of acceleration in the model. What can you conclude about the
energy used as a function of the rate of acceleration?
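Before building the model, a back-of-the-envelope bound is useful (a Python sketch; it ignores air drag and motor losses, which the Simulink model supplies):

```python
# Back-of-the-envelope numbers for the 0-to-60-mph requirement.
LB_TO_KG = 0.4536
MPH_TO_MPS = 0.44704

mass = 2000.0 * LB_TO_KG        # car mass, kg (about 907 kg)
v_final = 60.0 * MPH_TO_MPS     # final speed, m/s (about 26.8 m/s)
t_accel = 10.0                  # time to reach 60 mph, sec

kinetic_energy = 0.5 * mass * v_final**2     # J
average_power = kinetic_energy / t_accel     # W

print(f"kinetic energy at 60 mph: {kinetic_energy / 1000:.0f} kJ")
print(f"average power over 10 sec: {average_power / 1000:.1f} kW")
```

The simulated motor must deliver at least this average power; drag and losses raise the requirement.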
8.3 Add more elements to the string model. You need to allocate the mass element data
in the fBeamElement structure.
8.4 Add a fifth room to the model FourRoomHouse. Create data for all of the components
using the values from Chapter 6. Add the time varying resistor (from Section 8.1.2)
to model the convective losses from the wind. Make runs to answer some of the
hypothetical questions in the Further Reading section.
Chapter 9
Putting Everything Together: Using Simulink in a System Design Process
We have reached the point in our travels through the world of Simulink where we can
talk about how Simulink can help in the design of complex systems. There are many
ways to partition a design process. Furthermore, every person that works in research and
development has a favorite approach to design. The one common aspect of all of these
approaches is the use of a methodical, documented, and traceable procedure.
A design process is methodical when it has a clear specification, the design has connections to the specification, the design has clear documentation, and the design validation
is performed via a thorough test program. The obvious aspects of this statement are that the
design meets the objectives and performs as required. It is efficient, reliable, and buildable
within some budgeted cost. However, there are some nonobvious benefits: the design has
legacy (it is reusable), the design exploits legacy from previous intellectual property, and
the ultimate audience for the system can exercise the simulation model so that their inputs
have a beneficial influence on the design. (In fact, many times this feature can sell a design.)
The entire design process needs careful documentation. Doing otherwise makes it
impossible to revisit decisions and convince engineers, designers, and technicians working
with the designs that the decisions made during the creation are valid (at least not without
a large and often futile replication of the earlier work).
A process that provides traceability means that the parameters in the design trace
back to one or more of the specifications for the design. Traceability also provides an
understanding of how a top-level requirement flows into specifications for various subsystems. Subsystem specifications often come from a specification imposed by top-level system design requirements.
Once the design is finished, it requires a method for its validation. This is particularly
true when the design requires embedding computer code in the system. The design process
also should not require the creation of a new specification just for this embedded code. In
fact, the process should be seamless so the code generated as the specification evolves is
always tested quality code.
Simulink allows you to create a design process that will perform all of these functions
in a seamless way. In fact, the Simulink model can play the role of the specification. This
is an “Executable Specification.” To make Simulink the tool for system design, the R&D
parts of a company, the manufacturing arm of the company, and the computer engineers
need to use the same Simulink model throughout the process. Obviously, every company
approaches system design with a bias that comes from their previous experience. For this
reason, a process using Simulink must come from the “Systems Engineering” part of the
company.
Whatever process is used, the broad outlines of the steps are
• specification development and capture,
• modeling the system to incorporate the specifications,
• design of the system components that meet the specifications,
• verification and validation of the design, and
• creation of embedded code.
In the 1960s I was part of the team that developed the original digital autopilot for
the lunar module. The design of this system was difficult because there was no way to test the control system on Earth. This meant that simulation was the only way
to create, optimize, and verify the design. In the process of creating the control system,
the first step was to understand the requirements and translate these into a mathematical
model for the design. The modeling started with a digital simulation. We used it to develop
candidate designs that in turn were optimized using simulation. In the 1960s this work used
a mainframe computer that allowed, at best, two computer runs per day. Simulink and the
PC change every aspect of the design approach we used then. Therefore, the exposition that
follows should be entitled “How we would do it today.” I wrote an article that appeared in
the summer 1999 issue of The MathWorks’s magazine News and Notes that describes the
design process and the lunar module reaction-jet attitude-control system design [16]. We
will use the lunar module digital autopilot design as a case study to illustrate how Simulink
provides a natural tool for the design process steps above.
9.1 Specifications Development and Capture
In the actual lunar module design, there were modes for the following.
• Coasting flight: the initial mode for preparation of the lunar descent, where the lunar
module was powered up and checked, the inertial measurement unit was calibrated
(using an optical telescope to track stars), the computer was verified, etc.
• Descent: using the descent engine gimbals for control during the landing.
• Ascent: using reaction jets but with the control actively combating the rotation caused
by the engine misalignment.
• Combined lunar module and command-service module control: where the lunar module controlled both vehicles; this was the mode used for returning the astronauts after the Apollo 13 explosion.
Figure 9.1. The start of the specification capture process. Gather existing designs
and simulations and create empty subsystems for pieces that need to be developed.
The requirements for each of these modes were traceable to the overall mission requirements. (For example, the amount of fuel needed to be carried on the lunar module
was a function of how long the descent would take, which is relatively easy to calculate.)
Another example is how much time a search for an alternate landing site could take; this selection was required if there was a problem with the preselected site. A gimbal on the
descent engine might allow the thrust to go through the center of gravity, or we could have
fixed the engine at a particular orientation (with a fuel penalty for the thrust needed to
counteract the torque from the engine). The requirements optimization needed to exercise
alternative designs. Calculations provided some of the answers but were time consuming.
This severely limited the number of alternatives that we investigated. An easily modified
fast simulation of multiple systems would clearly have made it possible to do many more
trade-offs of options and theoretically would have resulted in a better design.
To keep the model of the lunar module understandable, we will limit the discussion of
the digital autopilot to the rotational motion using the reaction jets only (i.e., we will work
only with the first phase of the mission, coasting flight).
9.1.1 Modeling and Analysis: Converting the Specifications into an "Executable Specification"
We already have most of the modeling pieces that we need for the lunar module simulation
from the discussion on the rotation dynamics of a spacecraft in Chapter 3. The ability to
use a previous Simulink block is one of the major features of Simulink, allowing reuse of
previous work (and the possible exploitation of corporate intellectual property).
Working with the results of Chapter 3, the building of the simulation part of the model
is straightforward. The model is in Figure 9.1. We placed an empty subsystem block in
this model to denote the control system. This allows the development of the simulation
model to proceed independently of the development of the control system design. When
you are working with several teams each tasked to create a different aspect of the system,
this approach (sometimes called “top-down design”) enables teams to work independently
but in concert. Also, note that all of the blocks in this model come from the discussion in
Chapter 3, so no new dynamics are required. This is a good example of the independent
design approach that we have been discussing. In this design, a team separate from the
modeling team might be creating the control system. The modeling team can still proceed
with the creation of the simulation using the empty subsystem. (Open the model called
LM_Control_System in the NCS library and look at how this subsystem was created.)
The lunar module had an inertial platform that provided Euler angle measurements for
the control system. The Euler angles were a direct measurement of the angular orientation
of the vehicle without the transformations needed to give the exact orientation in inertial
space. Astronaut inputs were Euler angle commands that incrementally caused the vehicle
to be oriented properly. The model shown here uses the quaternion representation from
Chapter 3, and the measured angles for the autopilot are the direction cosines (as given
by the quaternion to direction-cosine-matrix block we developed in Section 3.4.4). The
computations needed to convert the Euler angles into a direction cosine matrix were well
beyond the capability of the lunar module guidance computer at the time, so we approximated
the angles using Euler angles as if they were the actual rotational angles. This is not a bad
approximation since the control system was operating to maintain the orientation at the
desired value every 0.1 sec.
Before we leave the discussion on modeling, it might be of interest to discuss how
we built the first simulation model in the 1960s. An engineer, working with FORTRAN,
developed this design model (for Grumman, the lunar module designer). He spent over six
months developing a model that had only a single axis of rotation. The reasons the model was
so simple had to do with the computer resources available (we used an IBM 7090 computer
that allowed each user less than 100 Kbytes) and the available time (the simulation was
created using punch cards—one card for each FORTRAN instruction—and each attempt at
compiling the model took one day). Simulink and the PC have dramatically changed the
way we approach such designs today.
So let us now change hats and make believe we are the design team that is developing
the control system for the lunar module and embark on the second of the system design
tasks, modeling the system to incorporate the specifications.
9.2 Modeling the System to Incorporate the Specifications: Lunar Module Rotation Using Time Optimal Control
Control of a satellite using reaction jets is possible with two different strategies for the firing
of the jets. The first is to use a modulator to emulate a linear control system where the average
thrust is proportional to some error signal. A modulator pulsing the jets on and off over time
creates this average torque so it can be made proportional to the error signal. For several
reasons this approach is not good: first, it causes the mechanical device that is pulsing the
jets to wear faster than need be, and second, the typical reaction jet has an efficiency curve
that causes less fuel to be used if the jets are turned on for a longer time. In the early design
of the lunar module, jets were turned on and off with a modulator. This analog device was
a pulse ratio modulator (a combination of pulse-width and pulse-frequency modulation).
For several reasons this system did not have a backup, which worried the system designers
(see [16]). NASA proposed that the digital computer used for guidance and navigation
might provide a backup control system. In a competition described in more detail in [16],
MIT Instrumentation Labs proposed a control system that used a time optimal controller
instead of a modulator. This approach greatly simplified the needed computations to reorient
the vehicle, a simplification that made it practical to implement the autopilot in the existing
computer. This is an excellent example of how independent design teams working on the
same system model can optimize the design.
9.2.1 From Specification to Control Algorithm
The lunar module digital autopilot design was based on a very simple conceptual approach:
look at the error between the actual (measured) angular orientation and the desired orientation, and then fire the jets with either a positive or a negative acceleration (as appropriate)
in such a way that the error is reduced to zero as fast as possible. Working out the details of
how to do so is not difficult. Once the control strategy was worked out, the “control law”
(i.e., the exact strategy for what to do in every situation) was modified to limit the amount
of fuel that was used.
To get an idea of what a minimum time control is like, consider the following simplified
version of the lunar module control:
You are in a drag road race where you must start when a light goes green and then
travel exactly 1500 ft, at which point you must be stopped. You will be penalized if you either
fall short or exceed the 1500-ft distance.
Before you read the following, you might ask, How do you drive the car to win this
race?
After a little thought you should be convinced that the way to win is to accelerate
for as long as you can (to the exact point where if you did not brake you would skid past
the stop line) and then stomp on the brakes and decelerate until the car stops exactly at the
finish line. The car that accelerates the fastest and the driver that best finds the exact point
to make the switch between accelerating and decelerating the car will determine the winner
of the race.
The form of this problem makes the answer easy to calculate. To perform the calculations, we will assume the following:
• the car instantaneously achieves its maximum acceleration;
• the driver applies the brakes at exactly the correct time;
• there is no change in the deceleration because of tire slippage, etc.
Let us put some numbers on the table. Assume that the car can accelerate at 10 feet
per second per second (10 ft/sec²) and the braking force allows a constant deceleration of
5 ft/sec². If the driver accelerates for t1 sec, the speed will then be 10t1 ft/sec. During this
274
Chapter 9. Putting Everything Together
acceleration, the car will travel 10t1²/2 ft. The deceleration will now commence and the car
must slow down from 10t1 ft/sec to zero. This will require a time of 2t1 sec (since the car
decelerates half as fast as it accelerates). The distance traveled during the deceleration is
therefore 5(2t1)²/2 = 10t1². The total distance traveled is 15t1², and this must equal 1500 ft.
Thus, we have the equation that provides the critical time:

15t1² = 1500 → t1² = 100 → t1 = 10.
Clearly, by working through this process for any acceleration/deceleration rates, the strategy
for any car can be determined.
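The same bookkeeping works for any acceleration and braking rates. A quick sketch (in Python rather than the book's MATLAB, purely for illustration):

```python
import math

def switch_time(accel, decel, distance):
    """Time at which to stop accelerating so the car stops exactly at
    `distance`: accelerate at `accel` for t1, then brake at `decel` for
    (accel/decel)*t1, covering accel*t1**2/2 * (1 + accel/decel) in all."""
    return math.sqrt(2.0 * distance / (accel * (1.0 + accel / decel)))

t1 = switch_time(10.0, 5.0, 1500.0)   # the book's numbers: 10 sec
```

With the numbers above, `switch_time` returns 10 sec, matching the hand calculation.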
To control the lunar module, the same rules apply, except that both the acceleration
and deceleration require firing the reaction jets (which produce the same force) so the
acceleration and deceleration are the same. In addition, the location of the stop is different
every time we apply the control, so we need a generic control that will work for any starting
and stopping conditions (including the possibility that the lunar module is rotating at the
start).
We will develop this control system (which was only one of several on board the lunar
module) and insert it into the TO BE DESIGNED block in the rotational dynamics model
developed previously by the system design team.
If the jet forces are a couple around the center of gravity, the linear model for a single
axis rotation is simply
d²θe /dt² = (F l/I) u = αu,
where
θe is the control system error (i.e., θmeasured − θdesired ),
F is the force from the jets,
l is the distance from the jets to the center of gravity,
I is the inertia of the vehicle around the rotational axis.
We will let the acceleration term F l/I be α, and u is either ±1 (firing jets for positive
or negative rotational acceleration) or 0 (jets off).
The minimum time control sets the control u to either +1 or −1 until we reach the
critical point where (as was the case for the car above) the sign of the control changes. This
critical point is along the unique trajectory that causes the angular position error θe and the
angular rate error θ̇e to be exactly zero at the same time. The theory of optimal control states
that the goal (getting to zero θe and θ̇e ) can be achieved with at most one switch in the sign
of the acceleration. The plausibility of this should be clear from the discussion of the auto
race above. (As an exercise, create a logical argument why this must be so.)
To start the development of the control system, let us assume we are at the critical
point described above (the point where we can force the position and rate to be zero at
exactly the same time using a single jet firing). We use a graphical approach to visualize the
“trajectory” that will be followed. The graph is a plot of θe vs. θ̇e when u is ±1. To develop
this graph, let us solve the simple second order differential equation for the rotation above.
The solution is simply
θ̇e (t) = αut + θ̇e (t0 ),
θe = αu t²/2 + θ̇e (t0 )t + θe (t0 ),
9.2. Modeling the System to Incorporate the Specifications
275
where t0 is the initial time and u is constant (either +1 or −1) over the duration of the
solution.
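A quick numerical check of this closed-form solution (a Python sketch with made-up numbers, not part of the book's model) integrates the double integrator directly and compares:

```python
alpha, u = 0.1, -1.0           # hypothetical acceleration magnitude and jet sign
th0, thd0 = 0.5, 0.2           # hypothetical initial attitude and rate errors
dt, t_end = 1e-4, 2.0

th, thd = th0, thd0
for _ in range(int(round(t_end / dt))):
    thd += alpha * u * dt      # d(theta_dot_e)/dt = alpha*u
    th  += thd * dt            # semi-implicit Euler step

th_exact  = alpha * u * t_end**2 / 2 + thd0 * t_end + th0   # closed form: 0.7
thd_exact = alpha * u * t_end + thd0                        # closed form: 0.0
```

The integrated values agree with the closed-form expressions to within the step-size error.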
To create a plot of θe vs. θ̇e we need to eliminate time (t) from these equations.
Therefore, solving the first of the equations gives t = (θ̇e (t) − θ̇e (t0 ))/(αu). Substituting
this time into the second equation gives the equation for a parabola:

θe (t) − θe (t0 ) = (θ̇e (t)² − θ̇e (t0 )²)/(2αu).

The unique parabolas that go through the origin from an arbitrary initial condition are
obtained by setting θe (t) = 0 and θ̇e (t) = 0
in this equation (i.e., t is the time at which the trajectory goes through 0). A plot of these
parabolas in the θe , θ̇e plane (called the phase plane or, in higher dimensions, phase space)
is in the figure above. For each value of u, one and only one parabola goes through the
origin (i.e., [ θe θ̇e ] = [ 0 0] ). These parabolas are the locus of all of the critical
points at which the sign of the jet firings must be reversed. (These curves are therefore
called the switch curves.)
If the initial conditions are located anywhere in the area above and to the right of
the curves shown in the figure the jets are fired using −1 as the control (i.e., negative
acceleration) to force the motion to approach the switch curve in the second quadrant.
When the values of [θe θ̇e ] reach the switch curve, the sign is reversed (to +1) and the
motion will track the switch curve to the origin at which point the control is set to 0. In the
absence of any disturbance, the system would stay at the origin forever.
Similarly, if the initial conditions are to the left of the switch curves, the jets are fired
with a positive acceleration (u = +1) until the switch curve in the fourth quadrant is intersected and the acceleration is reversed to allow the motion to approach the origin from above.
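The switching rule just described is easy to prototype outside Simulink. A minimal Python sketch (illustrative only; the acceleration, step size, and tolerances are invented) uses the switch function s = θe + θ̇e·|θ̇e|/(2α), which is zero exactly on the switch curves:

```python
def bang_bang(e, edot, alpha=0.1, dt=0.001, tol=1e-3, t_max=60.0):
    """Drive the double integrator d(edot)/dt = alpha*u to the origin with
    u = -1 above/right of the switch curves and u = +1 below/left.
    Returns the arrival time, or None if t_max is exceeded."""
    t = 0.0
    while t < t_max:
        if abs(e) < tol and abs(edot) < tol:
            return t                             # at the origin: jets off
        s = e + edot * abs(edot) / (2 * alpha)   # zero on the switch curves
        u = -1.0 if s > 0 else 1.0
        edot += alpha * u * dt
        e    += edot * dt
        t    += dt
    return None
```

From (0.5, 0) with α = 0.1 the minimum time is 2·sqrt(0.5/0.1) ≈ 4.5 sec; the discrete-time sketch arrives slightly later because it chatters along the switch curve rather than sliding exactly on it.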
The measurement of the values of [θe θ̇e ] is always off slightly, and the exact vehicle
inertia and the jet forces are known only to within some error, causing the acceleration α to
be in error when the switch curve is calculated. These errors will cause the control to miss
the origin slightly, so instead of having the control objective be to hit the origin exactly, a
box around the origin is the control objective. This box is denoted using the name “dead
zone,” indicating that any time the vehicle has errors inside this zone, the control is unused
(i.e., dead). The resulting switch curves have the form shown in Figure 9.2.
Now that we have calculated the switch curves, let us try to justify the assertion that
the minimum time control is the control that gets the trajectory in the phase plane to the
origin with one switch. Assume we have made one switch and are on the unique trajectory in
phase space that takes the solution to the origin. Were we now to switch again, the solution
[Figure: the phase plane, angular rate (rad/sec) vs. angular position (radians), showing the
coast region (jets off), the target region, the "turn jets off at this curve" switch curve, and
the "turn jets on at this curve" switch curve.]

Figure 9.2. The physics of the phase plane logic and the actions that need to be
developed.
will immediately move off the only solution path that goes to the origin, and we would then
have to undo that move, all of which would increase the amount of time it would take to
achieve the goal of reaching the origin. Thus, the minimum time path must be that which
results from using the logic described in the figure above.
To implement this control using feedback in continuous time is simply a matter of
tracking the location in the phase plane and performing the switch when the trajectory crosses
the appropriate switch curve. However, because we are implementing the control system in
a computer, there are computer usage issues we must consider. This leads to the third component of the design process: designing the system components to meet the specifications.
9.3 Design of System Components to Meet Specifications: Modify the Design to Accommodate Computer Limitations
When using a real-time computer implementation of a control algorithm, the time it takes
to perform the control calculations can interfere with other computations that need to be
done. (Typically a computer is too expensive to devote solely to the control system, so other
functions are performed on a time sharing basis.) This is the first time we must deal with
creating embedded software (and simulating its effects), so let us spend some time on this.
The computer is totally occupied every time it looks at the location in the phase plane.
The more frequently we look to see where we are, the more computer time is used and the
more overloaded the computer will be. The discussion above on the control system design
shows we cannot wait to make a switch. Every time the control system looks at the position
and rate to make a decision, other critical tasks are left waiting. (An interesting historical
footnote is that when we first attempted to design a digital control system for the lunar
module, we concluded that it was impossible for exactly this reason.) Thanks to George
Cherry, an MIT Instrumentation Laboratory engineer at the time, the lunar module control
system had a unique implementation. His idea was to make the control into a timed task.
Any computer has the ability to do a timed task. The timed task interrupts the computer
(i.e., temporarily suspending an existing task in favor of this one), and then a program runs that
starts an external timer. When the computer is finished with the task of starting the timer,
it can return to performing the original interrupted task (while the timer ticks away in the
background). When the timer counts down, indicating that the task is complete, the computer
is once again used (usually again via an interrupt) to complete the timed task. This was
George Cherry’s idea for the lunar module digital control system. When the control logic
required jets to be on, the control algorithm computed the time needed to reach the desired
location in the phase plane (either the switch curve or the small rectangle at the origin). The
computer turned on the jets and sent the desired on time to an external timer. The control
task ended, and the computer was free to do other tasks. When the timer counted down, the
computer was interrupted again and the jets were turned off. This operation was very quick
since it consisted of simply resetting bits in one of two output channels (each bit connected
directly to a jet firing solenoid).
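George Cherry's timed-task scheme can be sketched as a tiny discrete-event loop (Python, illustrative only; the class and names are invented): the control task turns the jets on, posts a timer event, and returns immediately, and the "interrupt" fires later to turn the jets off.

```python
import heapq

class JetTimer:
    """Toy event-driven sketch of the timed-task idea (not flight code)."""
    def __init__(self):
        self.events = []        # (time, action) heap plays the external timer
        self.jets_on = False

    def control_task(self, now, t_on):
        self.jets_on = True                                    # jets on,
        heapq.heappush(self.events, (now + t_on, "jets_off"))  # start timer,
        return                                                 # free the computer

    def run_until(self, t_end):
        # The "interrupt": pop any timer events that have counted down.
        while self.events and self.events[0][0] <= t_end:
            _, action = heapq.heappop(self.events)
            if action == "jets_off":
                self.jets_on = False

tm = JetTimer()
tm.control_task(now=0.0, t_on=0.04)   # jets commanded on for 40 ms
tm.run_until(0.02)                    # computer busy elsewhere; jets still on
tm.run_until(0.10)                    # timer has counted down; jets are off
```

Between the two `run_until` calls the processor is free for other work, which is the whole point of the scheme.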
We ran a large simulation of the design twice a day, so verifying that this logic worked
was a time-consuming process. With Simulink, we can have the same simulation running on
a desktop computer. Furthermore, the simulation can use the same logic used in the actual
lunar module code (including a simulation of the timers).
To implement the design, we need to compute the amount of time needed to get from
any initial location in the phase plane to some desired position. Let us assume that we are in
the region where negative accelerations are required (i.e., to the right and above the switch
curves). At the switch curve we want to intersect, the control will change from −1 to +1
(positive acceleration). From the analysis above the switch curve equation is:
θe (t) = θ̇e (t)²/(2α).
Note that when we are on the switch curve, the jet on time is simply
Ton = θ̇e (t)/α.
Until we reach the switch curve, the jets are on, providing negative acceleration, and the
trajectory in the phase plane is
θe (t) − θe (t0 ) = −(θ̇e (t)² − θ̇e (t0 )²)/(2α).
We need to compute the intersection of this curve with the switch curve. Once we know the
intersection values, we can compute the time that the jets need to be on using the equation
for the time t we developed above (in this case with the value of u = −1):
Ton = (θ̇e (t0 ) − θ̇e (t))/α.
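These relations are easy to verify numerically. A Python sketch (invented numbers, for illustration) equates the firing parabola with the switch curve, which gives θ̇e² = α·θe(t0) + θ̇e(t0)²/2 at the intersection, and checks the resulting on time against a direct integration:

```python
import math

alpha = 0.1
th0, thd0 = 0.4, 0.05          # hypothetical initial position and rate errors

# Equating theta_e = thdot_e^2/(2*alpha) (switch curve) with the firing
# parabola gives thdot_e^2 = alpha*th0 + thd0^2/2; take the negative root
# since the intersection is in the second quadrant.
thd_switch = -math.sqrt(alpha * th0 + thd0**2 / 2)
t_on = (thd0 - thd_switch) / alpha       # the Ton equation above

# Cross-check: integrate with the jets firing (u = -1) until the curve.
dt, th, thd, t = 1e-5, th0, thd0, 0.0
while th > thd**2 / (2 * alpha):
    thd -= alpha * dt
    th  += thd * dt
    t   += dt
```

The integrated crossing time agrees with the closed-form on time to within the step size.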
One of the important ways to do an analysis like this now is to use symbolic manipulation.
MATLAB has an interface to Maple that easily allows this. The MATLAB M-file with the
implicit Maple calls to solve for the on time is in the code segment below. Note that the
Maple variables in the code are MATLAB objects created with the “syms” command.
syms Ton t alph th thdot th0 thd0
thdot = solve('th = thdot^2/(2*alph)',thdot)
thdot = thdot(2) % Select negative square root
intsect = subs('th = th0 +(-thdot^2+thd0^2)/(2*alph)',thdot)
thdot = solve(intsect,'thdot')
Ton = (thdot(1)-thd0)/alph
pretty(Ton)
The solution proceeds by first setting up the variables that are symbolic in MATLAB
(Ton, t, alph, th, thdot, th0, and thd0). Next, the switch curve is defined, and the solution
along the curve as a function of θ̇e is developed. (Realizing that the solution involves a square
root, the negative value is selected since the intersection when the acceleration is positive
will occur in the second quadrant where θ̇e is negative.) Next, the curve followed during
negative acceleration is developed. Then, we find the intersection of the two parabolas. The
value of θ̇e (t) at the intersection determines the time Ton . The MATLAB script that invokes
Maple to do the algebra is in the figure above. The result from running this script (after the
pretty instruction in MATLAB) is
Ton = ((2*2^(1/2)*(th*alph)^(1/2)*alph + 2*th0*alph + thd0^2)^(1/2) - thd0)/alph.
This equation defines the time that the jet needs to be on in order to reach the switch
curve. At every sample time (0.1 sec), the lunar module digital control system computes
the total time that the jets need to fire. This equation applies for both negative and positive
accelerations by using absolute values in some of the terms. The coasting region of the
control system (when the jets are off and the lunar module drifts at its last angular rate) uses
the approach of looking every 0.1 sec to see if the jets should be on (based on the location in
the phase plane). If there is no need to fire the jets, the coast continues for another
0.1 sec. When the motion is just about to leave the dead zone region, this strategy adds an
extra 0.1 sec of drift. We deemed this error acceptable.
[Figure: top level of the Simulink model, titled "The Lunar Module Digital Autopilot
Design": the astronaut attitude command feeds Reaction Jet Control blocks for yaw and for
pitch/roll; their jet torques drive the quaternion and spacecraft rotational dynamics; A/D
conversion blocks (sample time delt) feed measured attitudes and derived rates back to the
controllers; further blocks normalize the magnitude of the quaternion q, convert quaternions
to Euler angles (yaw, roll, pitch), initialize the Data Stores, and switch between two-jet or
four-jet yaw couples and single-jet (ascent) or two-jet couples for pitch and roll.]

Figure 9.3. Complete simulation of the lunar module digital autopilot.
9.3.1 Final Lunar Module Control System Executable Specification
We are now ready to look at the Simulink model for the control system. Open the model
LMdap3dof from the NCS library in MATLAB, and carefully look at the model (Figure 9.3).
This model is the complete simulation of the control system in the three degree of
freedom simulation we developed in Section 9.1.1. The control system has a number of
features that are important for the overall development process that we are trying to describe.
These are as follows.
• Data used in the model are saved and retrieved using Data Store blocks.
• The control system design began with a single axis, and the result was stored in a
library. Each of the three axes then used the library block.
• All of the logic for determining where we are in the phase plane uses Stateflow.
• To ensure that the blocks we developed are available for reuse, the library also has the
block for the quaternion calculations, the block used to derive the digital rates for the
control system from the attitudes, and the block that converts the quaternion values
into Euler angles.
When you open the model, it will appear with the model browser turned on. This
feature allows you to navigate rapidly around the model using the same navigation technique
used in Windows.
At the top level, zero order hold blocks before the Reaction Jet Control subsystem
convert the continuous time attitudes into measured values (at a time interval of every
0.1 sec). The zero order hold measures the value of the analog signal and then does the
analog to digital conversion. Because the astronauts communicated with the control system
both by using a data entry keyboard on the computer and by turning switches on and
off, we need to keep track of the switches that they used. All of the calculations above
(since the acceleration from the jets determines the parabolas) use information about the
available acceleration. Therefore, the control system must know the state of the switches
that the astronaut could use to disable reaction jets. Use of one of these switches caused a
computer update that saved the state of the switch and changed the values of the affected
jet accelerations. In the Simulink model, we use blocks called Data Store Write, Data Store
Read, and Data Store Memory to record and interrogate the switch states. These blocks
are at the bottom of the Simulink model. They update the number of jets in the model and
modify the acceleration used by the control system. Another neat feature is that the user can
double click on the switch icon to change the number of jets used in the simulation. The
switches change either the yaw axis jets (where the number of jets used can be set to either 2
or 4) or the pitch and roll axis jets (where control using only upward firing jets is an option).
As long as the jets fired downward during ascent from the surface of the moon, all of the
control forces contributed to the lifting thrust from the ascent engine. This idea reduced the
amount of fuel used during the ascent. The Data Store Read, Write, and Memory blocks
work by creating internal memory storage locations for the values of the variable. (The
stored values are created by the Data Store Write block.) The initial values for the data are
created using the Data Store Memory block (which are in the diagram but are not connected
to any other blocks; they are at the bottom right side of the model). Data Store Read blocks
are the source in the model of all of the volatile data required by the control system. In this
model, the Read blocks are all inside the Reaction Jet Control subsystem.
There is also a block that provides information about the revision number, date of
revision, and so on; it appears at the bottom right of the model. Every time you save the
model, Simulink updates this block. The block for the Quaternion calculations is the same
block that we used in Chapter 3, but it is now stored in the library. The last new block at this
level is the quaternion to Euler angle block. The blocks contain annotation of the equations
used inside the block, so we will not go through them.
With this introduction, you can now begin to explore each of the blocks in the final
model (in Figure 9.3). A good place to start is the simulation of the counter that times
the jets. This model is in Figure 9.4. (There are three of these blocks, one in each of the
"Reaction Jet Control" subsystems. Browse to one of them to see the model.)
The timer clock was 625 microsec, so the simulation uses two digital clocks: one
at this rate and one at the sample time (0.1 sec). The jet on time counts off the
625-microsec clock in the subsystem labeled "Jet On Time Counter." This subsystem looks like
Figure 9.5.
Since the Sample Time clock changes at the sample time, this value is constant over
the duration of the countdown of the jets. (Remember that we compute the on time anew at
every sample time.) Thus at some time before the next sample the difference between the
time of the last sample and the time from the counter clock will be equal to the on time. The
relational operator block looks for the difference to be greater than zero, so the output goes
to zero when the countdown is complete. Returning to the blocks that use the counter, we
see that the logic uses this output, along with the jet on command and the counter enable
that comes from a Stateflow chart. The logic implemented is the logical NOR of the enable
and the counter output. Next, we compare the output, using a logical OR, with the counter
output. This finally is multiplied by the jet on command. (This command is the number
[Figure: the jet timing block, with inputs On Command, Enable Counter, and Jet On Time
(ton); a clock at the counter tic (0.000625 sec) and a clock at the sample time delt; the Jet
On Time Counter subsystem producing the Counter Output (Stop jets); and logic gates
implementing ~(enable | stopjets) | stopjets, multiplied by the on command to produce the
Jet Command output.]

Figure 9.4. Simulating a counter that ticks at 625 microsec for the lunar module
Simulink model.
[Figure: inside the Jet On Time Counter subsystem: the on time ton and the clock at counter
tics (in clockt multiples) are compared, through a relational operator (>) against the clock at
the sample time (in delt multiples) and a zero constant, to produce the Stop jets output.]

Figure 9.5. The blocks in the jet-on-time counter subsystem in Figure 9.4.
of jets with a sign indicating the direction, so a logical AND cannot be used to provide the
final jet command output.)
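The gating just described can be written out explicitly (a Python sketch of the block logic, using the signal names from the diagram; my reading of the diagram, not flight code):

```python
def counter_output(t_sample, t_clock, ton):
    """1 while the countdown runs, 0 when it is complete: the relational
    operator checks whether (last sample time + on time - clock) > 0."""
    return 1 if (t_sample + ton - t_clock) > 0 else 0

def jet_command(on_command, enable, stopjets):
    """Implements ~(enable | stopjets) | stopjets, then multiplies by the
    on command (the signed number of jets, so a logical AND would not work)."""
    gate = (not (enable or stopjets)) or bool(stopjets)
    return on_command * int(gate)

# Near the switch curve (enable = 1) the jets follow the counter:
assert jet_command(2, enable=1, stopjets=counter_output(0.0, 0.03, 0.05)) == 2
assert jet_command(2, enable=1, stopjets=counter_output(0.0, 0.06, 0.05)) == 0
# Away from it (enable = 0) the signed command passes straight through:
assert jet_command(-4, enable=0, stopjets=0) == -4
```

With `enable = 0` the gate is always 1 and the jets simply follow the command; with `enable = 1` they follow the command only while the counter is still running.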
If you start this model in the usual way, a phase plane plot for the system response
appears (Figure 9.6) as the simulation runs. The plot is the control response for motion
around the x (yaw) axis, which is the axis with the smallest inertia. The model also contains
plots (Scopes) that show the attitude and attitude rate errors for all three axes.
If you navigate through the reaction jet control subsystem, you can see the logic and
the computations for the switch curves. In Section 9.1.1, we created the model and used
an empty subsystem for the controller. (We called the subsystem “Control System TO BE
DESIGNED”.) We have now completed the design. It is easy to see that a different group
could have created the control system design. The group that created the simulation model
with the quaternion block might even have been working at the same time.
[Figure: "Phase Plane Plot for LM Yaw Axis Digital Control": yaw attitude rate error vs.
yaw attitude error, with the switch curves for turning jets off and the switch curves for
turning jets on labeled.]

Figure 9.6. Lunar module graph of the switch curves and the phase plane motion
for the yaw axis. (The graph uses a MATLAB function block in the simulation.)
Continuing our travels through the model, we have a block that computes the three
axis rotational rates from the Euler angle measurements. This block is in the library called
LMdapLibrary_NCS.mdl in the NCS library. We have not encountered library blocks
before, so we need to describe how to create a library and how they work.
When you select “New” in the File menu, there are two options: a Simulink model
or a library. To create a library you simply select the latter. A library is different from a
model in that none of the Simulink menu items (like the start button) appears in the window.
Furthermore, if you use any of the blocks in the library to develop a model, future changes
to any of the library blocks will propagate to the Simulink models that use them. This
feature makes it easy to keep track of design modifications and far easier to reuse a block in
the future. When you convert a library block into embedded computer code, and the code
is used multiple times (as is the case here where the same control system block applies to
the yaw, pitch, and roll controllers), the code is created as a “reentrant” subroutine. The
conversion of the diagram into embedded code occurs once. The Real Time Workshop
conversion makes only one copy of the control system code (not three versions). Each
instantiation uses the same code, minimizing the amount of storage needed for the code.
While we are on the subject, libraries work best if a tool for maintaining configuration
control is also in place. All of the models in Simulink (and in MATLAB) allow the user to use
a configuration management tool with the model. Source control systems must comply with
the Microsoft Common Source Control standard. If you have a compliant source control
system on your computer, the Source Control options in the MATLAB Preferences dialog
will show it. Revision Control Systems (RCS) and Concurrent Versions System (CVS) are
two tools that use the Microsoft Source Code Control. When a tool such as this is used,
a designated configuration manager must make all changes to models and libraries. This
ensures that all models that are in the current design are verified and validated. Individuals
can work with their own versions of the models, but then the configuration manager must
test them before allowing the changes to migrate to the actual design.
9.3.2 The Control System Logic: Using Stateflow
Now that all of the basic analyses for the control system design are complete, we can focus
on implementing the control logic in the simulation.
function [Firefct1, Coastfct1, Firefct2, Coastfct2, tcalc1, tcalc] = ...
    fcn(e,Njets,DB,alph,alphs)
% Time at full accel for e_dot = 0 (cross the e axis) is tcalc1.
%   (This time is positive in the RHP and negative in the LHP, and
%   this is accounted for in the calculation of the total on time
%   in the Stateflow chart).
% The intersection of the switch curve and the "on" trajectory
%   determines the value of xdot at the switch curve. This value
%   divided by the acceleration is the time needed to go from
%   the e axis crossing to the switch curve (tcalc). The two times
%   tcalc and tcalc1 are added together in the State Chart.
%   Note that the square root plus or minus values are accounted for
%   in the State Chart (when in the RHP we add tcalc and tcalc1,
%   and in the LHP we subtract tcalc1 (which is <0) from
%   tcalc (>0 always)).
%  Constants used in the calculations:
ac      = 2*alphs/(alph+alphs);
%  Changeable constants used in the calculation:
accel   = Njets*alph;
accels  = Njets*alphs;
%  Evaluate the location in the phase plane w.r.t the 4 parabolas:
Firefct1  = e(1) -DB +e(2)^2/(2*accel);
Coastfct1 = e(1) -DB -e(2)^2/(2*accels);
Firefct2  = e(1) +DB -e(2)^2/(2*accel);
Coastfct2 = e(1) +DB +e(2)^2/(2*accels);
%  Compute the on time for a jet firing based on phase plane locations:
x1      = e(1)/accel;   % Scale position for jet on time calc.
x2      = e(2)/accel;   % Time at full accel for e_dot = 0.
x3      = DB/accel;     % Scale Dead Band for on time calc.
tcalc1  = x2;           % Time to reach the e axis from edot.
u       = (abs(x1) + x2^2/2 - x3)*ac;
tcalc   = sqrt(u*(u>=0));
[Figure: the Stateflow chart. The states are Start, Wait_for_stable_rate, Fire_region_1
(en: jets = -Nofjets; ton = tjcalc + tjcalc1), Fire_region_2 (en: jets = Nofjets;
ton = tjcalc - tjcalc1), Coast_region_1 and Coast_region_2 (en: jets = 0; ton = 0;
enable = 0), and Skip_a_Sample_1 and Skip_a_Sample_2 (en: count--). The transitions test
the sign of e[1] and the signs of Firefct1/2 and Coastfct1/2, decrement ton by delt at each
sample, set enable = 1 when ton < 2*delt, and reset count = 2 when ton > tmin.]

Figure 9.7. Stateflow logic that determines the location of the state in the phase plane.
The Control Law blocks from the library use an Embedded MATLAB Function block
to compute the data needed to determine where the error is in the phase plane. We do
this by substituting the appropriate yaw, pitch, or roll attitude and attitude rates into the
equations for the four different switch curve parabolas. The functions that we generate
are Firefct and Coastfct (with 1 or 2 appended to denote the quadrant). In addition, this
code block calculates the jet on time using the equations from the Maple derivation above.
This Embedded MATLAB code is in the window above. Note that the code is very readable because it uses standard MATLAB. As an exercise, follow the code and verify that it
uses the equations above for the jet on times. Also, note that this code segment is in the
LMdapLibrary_NCS.mdl as part of the Control System block, so it also is reentrant.
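As a quick check on those equations, here is a port of the four fire/coast tests to plain Python (a hypothetical port for illustration; the real block is Embedded MATLAB), probing points on and around one switch parabola:

```python
def phase_plane_fcts(e1, e2, Njets=2, DB=0.02, alph=0.1, alphs=0.05):
    """Port of the four parabola tests; e1 is the attitude error and e2
    the rate error (e(1) and e(2) in the MATLAB block). Constants are
    invented sample values, not the lunar module's."""
    accel  = Njets * alph
    accels = Njets * alphs
    Firefct1  = e1 - DB + e2**2 / (2 * accel)
    Coastfct1 = e1 - DB - e2**2 / (2 * accels)
    Firefct2  = e1 + DB - e2**2 / (2 * accel)
    Coastfct2 = e1 + DB + e2**2 / (2 * accels)
    return Firefct1, Coastfct1, Firefct2, Coastfct2

# On the Firefct1 parabola (e1 = DB - e2^2/(2*accel)) the test is zero;
# moving right of the curve makes Firefct1 positive, left makes it negative.
accel = 2 * 0.1
e2 = -0.1
e1_on = 0.02 - e2**2 / (2 * accel)
f_on, *_  = phase_plane_fcts(e1_on, e2)
f_pos, *_ = phase_plane_fcts(e1_on + 0.01, e2)
f_neg, *_ = phase_plane_fcts(e1_on - 0.01, e2)
```

The sign changes across the curve are exactly what the Stateflow transitions test for.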
The Embedded MATLAB block does not do the logic for determining which jets to
fire and counting down the on time. This decision making uses Stateflow for the logic.
(The Stateflow block decides whether to fire a jet with positive or negative acceleration
and whether or not to coast at any decision point.) The logic in the Stateflow block is easy
to follow (see Figure 9.7 for the chart). The original control design for the lunar module
did not use a process like this at all; all of the logic had to be programmed in assembly
language.
The Stateflow chart is identical for all three axes. One of the techniques used to
develop this chart was to lay it out visually so it follows the four quadrants in the phase
space. Skipping the top of the chart for a moment, there are four states that are at the vertices
of a rectangle:
• quadrant 4 & 1 coast region is at the top right;
• quadrant 1 & 2 jet firing (negative acceleration) is at the bottom right;
• quadrant 2 & 3 coast region is at the bottom left;
• quadrant 3 & 4 jet firing (positive acceleration) is at the top left.
When the state-space locations are near the switch curves, we start the counter to
count down the jet on times. This is not necessary unless we are near the switch curves.
(When we are not close, the on times are greater than the sample time so we turn the jets on,
leave them, and reevaluate the on time at the next sample.) We called the Stateflow states
that handle this logic "Skip_a_Sample." There are two such states: one for the left half of
the phase plane and one for the right. The Stateflow logic should be easy to follow, since
every decision uses the sign of the attitude rate and the coast and fire functions evaluated
in the Embedded MATLAB block. When you run the model, open the Stateflow block for
the yaw control and watch the logic and the phase plane plot. The correlation between the
Stateflow switching and the phase plane should make it easy to follow the logic.
We are now ready for the fourth step in the design process: verification and validation
of the design.
9.4 Verification and Validation of the Design
If you browse through the Simulink library, you will see a group of blocks called Model
Verification. Figure 9.8 shows this library. You can use these blocks to evaluate whether or
not the signals in the Simulink model are doing what you expect.
The block descriptions and their icons help you to understand what they do. For
example, the Check Discrete Gradient block (the second block in the library) evaluates the
difference between the signals at two different times and flags you if this exceeds some
number that you specify in the block. This ensures that the (digital) derivative of the signal
is within bounds. You can check to see if signals are within certain bounds, that they do not
exceed bounds, that they lie within a dynamic range that is possibly changing with time,
and that the signal is nonzero. With these blocks and a built-in tool that gives an analysis of
the coverage of the signal flow (for both Simulink and Stateflow), you can assure yourself
that the simulation is valid (and therefore, when the time comes to create the code, that
the code will be valid). Since these verification and validation tools matter mainly when
the goal of the design process is a system that will contain embedded code, we will not
spend more time on them here. However, if the overall system
design is your goal, this tool is extremely valuable.
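The spirit of these blocks is easy to mimic in ordinary code. A sketch of the Check Discrete Gradient idea (Python, illustrative only; the real block is configured through a dialog):

```python
def check_discrete_gradient(samples, max_gradient):
    """Flag any sample-to-sample jump larger than max_gradient, the way
    the Check Discrete Gradient block flags an out-of-bounds difference."""
    for k in range(1, len(samples)):
        if abs(samples[k] - samples[k - 1]) > max_gradient:
            raise ValueError(f"gradient bound exceeded at sample {k}")

check_discrete_gradient([0.0, 0.05, 0.08, 0.10], max_gradient=0.1)  # passes
```

A signal that jumps by more than the bound between samples raises immediately, which is the runtime-assertion style of checking these blocks provide.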
We are now at the last step in the design process, the creation of embedded code.
Figure 9.8. Simulink blocks that check different model attributes for model verification.
9.5 The Final Step: Creating Embedded Code
We now come to the last step in the design process: the creation of the embedded code.
Simulink offers the user a multitude of paths for achieving this goal. The simplest code
generation process is to generate code that runs under Windows; the code generation tool
Real Time Workshop® (RTW) typically defaults to this option. When you invoke RTW,
it generates a stand-alone application that runs from the MATLAB Command Window. Taking
you through this process in detail is well beyond the scope of this book, but it is easy
enough to do, and you can generate some very interesting applications. Reference [48]
discusses the need for structured development of real-time system applications; it should
be clear that you could not find a more structured way than using RTW, since the process
is automatic.
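For the curious, the build can also be started programmatically from the MATLAB command line. The model name below is hypothetical, and the generic real-time target is only one of several system target files RTW offers:

```matlab
% Sketch of starting code generation programmatically.
% 'my_controller' is a hypothetical model name.
load_system('my_controller');                              % load the model without opening a window
set_param('my_controller', 'SystemTargetFile', 'grt.tlc'); % select the generic real-time target
rtwbuild('my_controller');                                 % generate and compile the stand-alone code
```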
The Signal Processing Blockset comes with blocks that allow the capture of signals
from the audio devices built into your computer. You can design a complex signal-processing
task and try it out on your computer. The blockset comes with several interesting examples
that you should try. Once you understand what the applications do, try modifying them to
do something different. It is a lot of fun.
9.6 Further Reading
The process of creating new systems is difficult to manage. Several books have been devoted
to this subject (see [10], for example), and their main conclusion is that a tight integration
of the research, development, and manufacturing groups is the only way to ensure that
new technology rapidly migrates into products. The example in this chapter illustrates a
powerful feature of Simulink. You can open a model and actually play with it. There is no
better way for a research team to communicate an innovative idea to the development and
manufacturing sides of a company (and, for that matter, to the company’s managers—indeed,
why not the CEO and the board of directors?). Imagine the R&D team briefing the
top management of a company using a simulation of the proposed new device. Imagine
that the simulation has all of the bells and whistles, shows all of the manufacturability
components, clearly delineates computer software required, and has a clear path to the
embedded computer code. When the design team then quotes a cost and schedule, it will
have far more meaning than a simple recitation of the same information.
Controlling the costs of the software in a system has been extremely difficult. There
are innumerable instances of software development cost overruns that are multiples of the
initial estimates. Reference [35] describes many such horror stories, which generally end
with the definition of a process intended to make the software development work better.
In my experience, these process improvements always lack one major component:
the software designers never have a way of verifying that the software will do what is
required. Reference [19], for example, decries the fact that software requirements appear
well before the allocation of the system requirements to the software is complete. The
usual justification is that the software development takes so long that it must start early. Another
example that I remember well was the first time I tried to put a process together. It was
before Simulink but after the introduction of MATLAB. The CEO of the company I worked
for asked me to fix a severe schedule slippage in the development of a complex system. The
first thing I did was to force the development team to use MATLAB (instead of FORTRAN
or C) to build the system components. The existing process required that we send a monthly
specification document to the software group. Each of these transmittals was hundreds of
pages. About two months after we adopted MATLAB for the design team, I noticed that
the software group would ask for the MATLAB code. In a conversation with the software
team, I found out that they were using the MATLAB code as the specification and checking
the written specifications only when they found conflicts or processes that were not in the
MATLAB code. This was the first example of an executable specification that I had ever
encountered. I have had the same experience many times; these occasions are what have
convinced me that spending the time and effort to build a process around Simulink (and the
other tools we have discussed) is cost effective and possible and will lead to faster, cheaper,
and more efficient designs.
Chapter 10
Conclusion: Thoughts about Broad-Based Knowledge
I hope that you have reached this point in the book with a much better understanding of the
power of a visual programming environment. I also hope that you can see how an integrated
system-engineering environment can improve your designs.
In the process of working through this text, you should have become familiar with a
large number of different disciplines—perhaps some that you had not previously learned.
This was partly my goal. One of the major attributes of a good engineer (or mathematician,
or practitioner of any other discipline you would like to insert here) is both breadth and
depth of knowledge.
As I was completing this book, I read a marvelous editorial in the New York Times by
Thomas Friedman, entitled “Learning to Keep Learning” [12]. In the editorial, Friedman
makes the case for better and broader education. He quotes Marc Tucker, who heads the
National Center on Education and the Economy: “It is hard to see how, over time, we are
going to be able to maintain our standard of living.”
Friedman then adds: “In a globally integrated economy, our workers will get paid a
premium only if they or their firms offer a uniquely innovative product or service, which
demands a skilled and creative labor force to conceive, design, market, and manufacture—
and a labor force that is constantly able to keep learning. We can’t go on lagging other major
economies in every math/science/reading test and every ranking of Internet penetration and
think we’re going to field a work force able to command premium wages.”
A little later in the editorial, Friedman again quotes Tucker as saying, “One thing we
know about creativity is that it typically occurs when people who have mastered two or
more quite different fields use the framework in one to think afresh about the other.” The
goal of Tucker is to make “that kind of thinking integral to every level of education.”
Tucker thinks that this requires a revamping of our educational system, designed in
the 1900s for people to do routine work, into something different. The new system must
teach how to “imagine things that have never been available before, and to create ingenious
marketing and sales campaigns, write books, build furniture, make movies, and design
software, ‘that will capture people’s imaginations and become indispensable for millions.”’
In the last part of his editorial, Friedman tempers the nationalistic sound of the statements by noting that innovation is not a zero-sum game. It can be win-win. He says, “We,
China, India and Europe can all flourish. But the ones who flourish the most will be those
who develop the best broad-based education system, to have the most people doing and
designing the most things we can’t even imagine today.”
It is my hope that this book and the products and processes we described here go some
small way to making Friedman’s thoughts real for you. It is clearly possible to learn multiple
disciplines, to bring them to bear on complex design problems, and to create devices using
more automation. It is also possible, working with the visual programming paradigm that
Simulink exemplifies, to work faster, smarter, and more accurately. I truly hope that this
book helps to further that process.
As a final footnote, one of the reviewers of this manuscript made the prescient observation that using Simulink or any modeling software without care can lead to disasters.
This is very true.
However, I am old enough to remember engineers who, after they graduated, used
only handbooks for the models in their designs. The result for these engineers was often a
disaster, too. Because what they selected from the handbook was not valid in the context of
the design they were developing, their conclusions were wrong. Simulink does not solve
this problem. In fact, no tool can.
How many times have you seen someone using a wrench as a hammer? Workers
always have the opportunity to abuse their tools. The main purpose of this book was to
show you how to use Simulink the tool, not simply by teaching you how to build models but
by showing you how the numerical methods working in the background behave. Along the way, we have
emphasized the design process and the fact that modern design is a team effort. If you are
new to this world, or if you are a student looking forward to working as a design engineer,
remember that your colleagues want to help. Use their expertise. Share the Simulink
models with them—early and often. Get their feedback and use it. If you are an engineer
with experience in the design of systems, work to incorporate a visual programming tool
into your design process. If you do, do not forget to train the engineers that will be new to
this process. Also, ensure that your design teams have an ample number of old-timers who
know when something looks wrong.
Finally, remember the various tricks and comments that I have made in the text:
• Annotate your models.
• Use subsystems to keep the diagram simple.
• Vectorize the model where you can (and let the user know that you have done so).
• Make neat models; avoid spaghetti (very messy) Simulink models.
• Run the models with the verification and validation tools to see if the results of the
simulation make sense.
• Check the data you are using against the real world by seeing if the simulation produces
results that match experiments.
• If you replace a block in a model with a new block, run the model with both the old
and new blocks, subtracting their outputs to verify that the difference between
them is essentially zero.
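The last check in the list can be sketched numerically: log the output of the model before and after the block replacement and subtract. The logged signals here are synthetic stand-ins for the two simulation runs:

```matlab
% Sketch of the old-block/new-block comparison. The vectors y_old and
% y_new stand in for outputs logged from two runs of the same model.
t     = (0:0.01:1)';
y_old = exp(-t) .* cos(5*t);                 % run with the original block
y_new = y_old + 1e-12 * randn(size(t));      % run with the replacement block
err   = max(abs(y_old - y_new));             % should sit at the numerical noise floor
assert(err < 1e-9, 'Old and new blocks disagree.');
disp('Replacement verified: outputs match to numerical precision.');
```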
In other words, be a good engineer.
Bibliography
[1] Anderson, Brian D. O. and Moore, John B., Linear Optimal Control, Prentice-Hall,
Englewood Cliffs, 1971.
[2] Bathe, Klaus-Jurgen, Finite Element Procedures, Prentice Hall, Englewood Cliffs,
1996.
[3] Bernstein, Dennis S., Feedback Control: An Invisible Thread in the History of Technology, IEEE Control System Magazine, Vol. 22, No. 2, pp 53–68, April 2002
[4] Bolles, Edmund Blair, Galileo’s Commandment, pp 415–419, W. H. Freeman, New
York, 1999. This book contains the fragment from Galileo’s 1638 book Two New
Sciences, as translated by Henry Crew and Alfonso de Salvio.
[5] Brush, Stephen G., A History of Random Processes, I, Brownian Movement from Brown
to Perrin, Archive for History of Exact Sciences, Vol. 5, pp 1–36, 1968. Reprinted in
Studies in the History of Statistics and Probability, II, edited by M. Kendall & R. L.
Plackett, Macmillan, New York, pp 347–382, 1977.
[6] Bryson Jr., Arthur E., Control of Spacecraft and Aircraft, Princeton University Press,
Princeton, N.J., 1994.
[7] Childers, Donald G., Probability and Random Processes—Using MATLAB with Applications to Continuous and Discrete Time Systems, Irwin, a McGraw-Hill Company,
Chicago, 1997.
[8] Chudnovsky, Victor, Kennedy, Dallas, Mukherjee, Arnav, and Wendlandt, Jeff,
Modeling Flexible Bodies in SimMechanics and Simulink, MATLAB Digest, Available on The MathWorks web site http://www.mathworks.com/company/newsletters/
digest/2006/may/simmechanics.html, May 2006.
[9] Derbyshire, John, Unknown Quantity—A Real and Imaginary History of Algebra,
Joseph Henry Press, Washington, D.C., 2006.
[10] Edosomwan, Johnson A., Integrating Innovation and Technology Management, John
Wiley and Sons, New York, 1989.
[11] Egan, William F., Phase-Lock Basics, John Wiley and Sons, New York, 1998.
[12] Friedman, Thomas L., Learning to Keep Learning, New York Times, p A33, December
13, 2006.
[13] Funda, J., Taylor, R.H., and Paul, R.P., On Homogeneous Transforms, Quaternions,
and Computational Efficiency, IEEE Transactions on Robotics and Automation, Vol.
6, No. 3, pp 382–388, June 1990.
[14] Galilei, Galileo, Dialogue Concerning the Two Chief World Systems—Ptolemaic and
Copernican, translated by Stillman Drake, Dover Press, New York, 1995.
[15] Gran, Richard, Fly Me to the Moon, The MathWorks News and Notes, Summer 1999,
available at the MathWorks web site: http://www.mathworks.com/company/newletters/
news_notes/sum99/gran.html
[16] Gran, Richard, et al., High Temperature Superconductor Evaluation Study, Final report
for Department of Transportation, available from the DOT Library NASSIF Branch,
2003.
[17] Halliday, David, and Resnick, Robert, Physics, John Wiley and Sons, New York, 1978.
[18] Harel, David, Executable Object Modeling with Statecharts, IEEE Computer Society
Magazine, Vol. 30, No. 7, pp 31–42, July 1997.
[19] Headrick, Mark V., Origin and Evolution of the Anchor Clock Escapement, IEEE
Control Systems Magazine, Vol. 22, No. 2, pp 41–52, April 2002.
[20] Humphrey, Watts S., Managing the Software Process, Software Engineering Institute,
Addison-Wesley, Reading, MA, 1990.
[21] Inman, Daniel J., Engineering Vibration, Prentice Hall, Upper Saddle River, NJ, 2001.
[22] Jackson, Leland B., Digital Filters and Signal Processing, Kluwer Academic Publishers, Norwell, MA, 1986.
[23] Jerri, A.J., The Shannon sampling theorem—Its various extensions and applications:
A tutorial review, Proceedings of the IEEE, Vol. 65, No. 11, pp 1565–1596, Nov. 1977.
[24] Kaplan, Marshall H., Modern Spacecraft Dynamics and Control, John Wiley and Sons,
New York, 1976.
[25] Lienhard IV, John H., and Lienhard V, John H., A Heat Transfer Textbook, Phlogiston
Press, Cambridge, MA, 2002.
[26] Livio, Mario, The Golden Ratio, The Story of Phi, the World’s Most Astonishing
Number, Broadway Books, New York, 2002.
[27] Luke, H.D., The Origins of the Sampling Theorem, IEEE Communications Society
Magazine, Vol. 37, No. 4, pp 106–108, April 1999.
[28] Meirovitch, Leonard, Dynamics and Control of Structures, John Wiley and Sons, New
York, 1990.
[29] Moler, Cleve B., Numerical Computing with MATLAB, SIAM, Philadelphia, 2004.
[30] Newman, James R., editor, Volume 2 of The World of Mathematics, Mathematics of
Motion, by Galileo Galilei, pp. 734–774, Simon and Schuster, Inc., 1956.
[31] Oppenheim, Alan, Realization of Digital Filters Using Block-floating-point Arithmetic,
IEEE Transactions on Audio and Electroacoustics, Vol. 18, No. 2, pp. 130–139, June
1970.
[32] Papoulis, Athanasios, The Fourier Integral and its Applications, McGraw-Hill, NY,
1962.
[33] Papoulis, Athanasios, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, NY, 2002.
[34] Pedram, Massoud and Nazarian, Shahin, Thermal Models, Analysis and Management
in VLSI Circuits: Principles and Methods, Proceedings of the IEEE, Special Issue –
On-Chip Thermal Engineering, Vol. 94, No. 8, pp 1473–1486, August, 2006.
[35] Royce, Walker, Software Project Management—A Unified Framework, Addison-Wesley, Boston, 1998.
[36] Sane, Sanjay P., Dieudonné, Alexandre, Willis, Mark A., and Daniel, Thomas L.,
Antennal Mechanosensors Mediate Flight Control in Moths, Science, Vol. 315, No.
5813, pp 863–866, February 2007.
[37] Scheinerman, Edward R., Invitation to Dynamical Systems, Prentice Hall, Upper
Saddle River, NJ, 1996.
[38] Schwartz, Carla and Gran, Richard, Describing function analysis using MATLAB and
Simulink, IEEE Control Systems Magazine, Vol. 21, No. 4, pp 19–26, Aug. 2001.
[39] Shannon, Claude, Communication in the Presence of Noise, originally published in the
Proceedings of the I.R.E., Vol. 37, No. 1, Jan. 1949. Republished as a Classic Paper
in the Proceedings of the IEEE, Vol. 86, No. 2, pp 447–457, February 1998.
[40] Sidi, Marcel J., Spacecraft Dynamics and Control—A Practical Engineering Approach, Cambridge University Press, Cambridge, MA, 1997.
[41] The MathWorks Inc., Simulink Performance and Memory Management Guide,
http://www.mathworks.com/support/tech-notes/1800/1806.html.
[42] The MathWorks Inc., Users Manual: Simulink ® , Simulation and Model Based Design;
Using Simulink, Version 6, Ninth Printing, Revision for Simulink 6.4 (Release 2006a),
March 2006.
Note: References [42] through [47] are the current printed versions (2007) of the appropriate users manuals and guides. They are available from The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098. They are also in the Help files shipped with the various products, and as downloads in PDF format from the MathWorks web site (http://www.mathworks.com).
[43] The MathWorks Inc., Getting Started: Signal Processing Blockset, For Use with
Simulink ® , Version 5.1, Reprint, (Release 2006a), March 2006.
[44] The MathWorks Inc., Users Guide: Signal Processing Toolbox, For Use with MATLAB® , Version 6, Sixth Printing, Revision for Version 6.5 (Release 2006a), March
2006.
[45] The MathWorks Inc., Users Guide: Stateflow® and Stateflow® Coder, For Complex
Logic and State Diagram Modeling, Version 5, Fifth Printing, Revision for Version 5
(Release 13), July 2002.
[46] The MathWorks Inc., Users Guide: SimMechanics, For Use with Simulink ® , Reprint
for Version 2.2, December 2005.
[47] The MathWorks Inc., Users Guide: SimPowerSystems, For Use with Simulink ® ,
Reprint for Version 4, March 2006.
[48] Ward, P., and Mellor, S., Structured Development for Real Time Systems, Prentice-Hall,
Englewood Cliffs, NJ, 1985.
[49] Wiener, Norbert. Cybernetics or Control and Communication in the Animal and the
Machine, Hermann et Cie., Paris, and MIT Press, Cambridge, MA, 1948.
[50] Wikipedia, The Free Encyclopedia, Thermostats, http://en.wikipedia.org/wiki/Image:
WPThermostat.jpg, retrieved December 10, 2006.
[51] Wikipedia, The Free Encyclopedia, Foucault Pendulum, http://en.wikipedia.org/wiki/
Foucault%27s_pendulum, retrieved December 10, 2006.
Index
1/f noise
Need for, 194
Simulating, 194
Using cascaded linear systems, 195
Band Limited White Noise block
In moving average simulation, 150
Numerical experiments with, 184
Setting noise power of, 189
To generate binomial distribution, 176
Using the Wiener process, 184
Bessel filter
In Signal Processing Blockset, 146
Bode plots
Calculating, 65
Creating, 64
Using Control Systems Toolbox, 67
Brown, Robert, 180
Brownian motion, see also Random
processes; Wiener process
And white noise, 178
Random walk and, 178
Connection with white noise, 180
Buffers, 160
Using with the FFT, 161
Butterworth filter
Analog, 142
Compared to other filters, 147
Definition, 143
In phase-locked loop, 167
MATLAB code for, 144
Using Signal Processing Blockset,
148
Absolute value block
In clock model, 22
In Math Operations library, 23
Zero-crossing, turning on and off,
86
Zero-crossing detection, 11
Air drag, 12
Force from, 12
In Leaning Tower model, 12
In train model, 253
Leaning Tower Simulation results,
13
Analog filters
As prototypes for digital filters, 142
Signal Processing Blockset, 145
Animation
SimMechanics blocks, 259
Annotation
Creating wide lines for vectors, 15
Of models, 16
Using TeX, 103
Attitude errors
3-axis direction cosine matrix, 107
3-axis quaternion, 109
Axis angle
Rotation, 94
Axis-angle representation
Euler’s theorem, 99
Callback
Executing MATLAB code from,
196
Finding graphics handles after, 233
Band Limited White Noise, 184
Loading Data from, 254
Options in Model Properties, 124
PostLoadFcn, 234
PreLoad function, 39, 51, 123, 150,
167
StopFcn, 126, 176, 177
Central limit theorem, 174
Monte Carlo Simulink model, 174
Chebyshev filter
In Signal Processing Blockset, 146
Clock
Pendulum, 17
Simulation, 23
Using SimMechanics, 260
Clock model, see NCS library
Computing Mean and Variance
With Signal Processing Blockset
blocks, 184
Constant block, see also Simulink blocks
MATLAB input for, 4
Control System Toolbox
Lyapunov equation solver, 188
Control Systems
Bode plots, 64
Comparing position and velocity
feedback, 60
Early development, 44
Example, thermostat, 46
Full state feedback, 72
Getting derivatives, 74
Linear differential equations and,
48
Observers, 74
PD control, 69
PID control, 71
Position feedback, 56
Simulink model, constructing
derivatives, 77
Velocity feedback, 57
Control Systems Toolbox, 64
Convergence
Limits for random processes, 181
Correlation function
Definition, 194
Of white noise, 183
Spectral density function from, 183
Covariance equivalence
Definition, 189
White noise, 189
Covariance matrix, 186
Covariance equivalence, 189
Definition, 187
Differential equation for, 187
Creating an Executable Specification
In the system design process, 271
Cross product, 98, 100
Cross product block
In Simulink, masked subsystem,
101
Curve fitting
with MATLAB for the polynomial
block, 91
Damping ratio, 51
Data store, data read, data write
Storing and retrieving data in
embedded code, 279
DC motor, 105
Demux block, see also Simulink blocks
Using, 246
Deriving rates
For derivative feedback, 75
Difference block, see also Simulink
blocks
Validating model changes with, 146
Digital filters, 121
Bode plot for, 135
Discrete library, 129
In Discrete library, 121
Limited precision arithmetic and,
153
State-space models in Simulink,
129
Digital signal processing, 115
Bandpass filter design, 154
Definition, 118
Digital filter from analog using
bilinear transformation, 148
fir filter from state-space model,
151
Impulse sample representation, 137
Sampling and A/D conversion, 123
Uses of, 121
Using Signal Processing blockset,
145
Digital Transfer Function block, see
Simulink blocks
Direct Form Type II, see second order
sections
Direction cosines, 94
From quaternion, 101
Discrete time
Comparing continuous with
discrete, 39
Converting continuous system to,
38
Discrete transfer functions
Calculation, 135
Dynamics
Connection with stiff solvers, 87
Control system, 55
Control system sensors, 63
DC motor, 105
Forces from rotation, 24
Foucault pendulum, for, 26
House, heat flow in, 44, 48
Pendulum, 42
Reaction wheel, 105
Rotation, see also Axis angle:
Direction Cosines; Quaternion
Rotations, 94
Satellite rotation, 105
Einstein, Albert
Brownian motion, 180
Electric train
Rail resistance with nonlinear
resistor block, 251
Traction motor and train dynamics,
252
Electric Train Model
Model, 251
Elliptic Filter
In Signal Processing Blockset, 146
Embedded code
In the system design process, 286
Using data read, data write, data
store blocks, 279
Equations of Motion
For Euler angle rotations, 97
For quaternion representation of
rotations, 99
Escapement, 17, 31
Description of, 19
History of, 17
Simulink subsystem for, 21
Euler angles, 95
From quaternion, 101
In lunar module model, 272
Euler’s theorem, 99
Executable Specification, 215, 225, 269
Analysis to create, 276
Definition and example, 225
For lunar module digital control
system, 279
Stateflow and Embedded MATLAB
in, 283
Fast Fourier Transform (FFT)
block, 161
Fibonacci sequence, 116
Using the z-transform on, 120
Filter design, 141
fir filter, 149
White noise in, 150
Fixed-point filtering in, 159
Foucault pendulum, see also NCS
library, 24
Dynamics, derivation of, 26
Model parameters, 30
Model, creating, 28
Solvers, experimenting with, 30
Vibration of moth’s antennae, 31
Friedman, Thomas, 289
From Workspace block, 92
Gain block, 1, 12, 14, 23, 123, 184, 186
Creating a product with, 253
GUI interface to Simulink with, 232
In Signal Processing Blockset, 157
In SimMechanics, 261
Galileo
Comparing objects of different
mass, 14
Experiments by, 17
Inclined plane experiments, 31
Internet, references to, 31
Leaning Tower of Pisa, 5
Pendulum period and, 19
GoTo block, 77, 83, 253, 254
Graphical User Interface (GUI), 231
MATLAB Graphical User Interface
Development Environment,
232
GUI, see Graphical User Interface
GUIDE, see Graphical User Interface
Hamilton
Quaternion, 99
History, 112
Heat equation
Electrical analog of, 203
Four-room house model, 262
Partial differential equations, 202,
213
Two-room house model, 262
Heating control
With thermostat, 44
Home heating control
Two-room house model, 233
Hooke, 18
Huygens, Christiaan, 18, 31
Hysteresis, 48, 225
Hysteresis block, 45
iir filter, 149
Integration
Observer, 75
Integrator, see also Simulink blocks
Block dialog for, 8
Block dialog options, 8
Denoted by 1/s, 7
Dialog, 8
Discrete Reset in phase-locked
loop, 167
External initial conditions, 57
In integral compensation, 72
In state-space model, 210
Initial condition, external, 9
Simulink Block, 7
Simulink C-coded dll, 7
Integrator modulo 1 block
In the PLL model, 167
Interpolation
In n dimensions, 87
In zero crossing, 11
Polynomial block with MATLAB
curve fit, 91
Laplace Transforms, 39
Of state-space model, 41
Transfer function of state model, 42
Leaning Tower
Simulation, two bodies, 14
Library block
Quaternion, making, 279
Limited precision arithmetic, 115, 145,
152, 153
Fixed-point, 158
Floating-point, 157
Linear control systems, see control
systems
Linear Differential Equations
Complete solution of, 35
Computing in discrete time, 37
Eigenvalues and response, 51
General form of, 33
In Simulink, state-variable form, 38
Poles and zeros, 55
State-space form of, 34
Undamped natural frequency and
damping ratio, 51
Linear feedback control, see Control
systems
Linear models
Analysis of Lorenz attractor with,
83
Creating from differential
equations, 49
From Control System Toolbox, 67
Linearization
Of a Nonlinear System, 49
Lookup Table, 90
2-D, 90
Entering tabular data in, 88
n-D, 91
Simple, 88
Lookup Table block, see also Simulink
blocks
Lookup Table library, 87
Lorenz attractor
Chaotic motion model, 82
Linear models as function of time,
83
Parameters and fixed points, 82
Time varying root locus for, 83
Lorenz attractor simulation, 81
Lorenz, Edward, 81
LTI object
Creating and using, 67
Definition, 66
LTI Viewer
For digital filters, 136
For string vibration model, 266
Lunar Module
Euler angles in, 272
Quaternion block in, 272
Lunar module digital flight control
System design process example,
272
Lyapunov equation solver
Control System Toolbox M-file,
188
In Control System Toolbox, 188
Maple-MATLAB Interface
Determining direction cosine
attitude error with, 107
Using for lunar module control law,
277
Using Maple for z-transform, 119
Masked subsystem
Deriving rates model, 75
Example, cross product, 101
Masked block parameters, 76
Masked subsystem, creating
SimPowerSystems nonlinear
resistor, 247
Math Operations, see Simulink blocks
MATLAB
Calculation of stop time, 6, 10
Commands, in Courier type, 1
Creating a discrete time model, 35
Creating a discrete time model,
c2d_ncs, 37
Default Simulink data in, 10
Input for Constant block, 4
ODE Suite, compared to Simulink,
7
Opening Foucault_Pendulum, 28
Opening Leaningtower, 6
Opening Leaningtower2, 12
Opening Leaningtower3, 16
Opening Simulink, 2
Vector notation in blocks, 14
MATLAB code
Updating GUI with, 236
MATLAB connection to Simulink
Graphical User Interface (GUI),
232
MATLAB GUI
Code for updating, 236
Creating, 238
Interface for Stateflow and
Simulink, 230
Model Properties Tab, see Callback
Modeling and Analysis
In the system design process, 271
Moler, Cleve, xx, 30, 81, 115, 201
Monte Carlo simulation, 173
Mux block, see also Simulink blocks
Using, 45
NCS library
1/f noise, model, 194
Batch processing, model, 160
Butterworth filter model, 141
Butterworth filter, Signal
Processing Blockset, 145
Clock_transfer_functions, 42
Control systems, model, 44
Digital filter aliasing, model, 123
Digital filter transfer function,
model, 121
Digital filter, Bode plot, 135
Fibonacci sequence
Digital filter block model, 127
Model, 116
State-space model, 129
Final lunar module specification,
model, 278
fir filter model, 149
Foucault pendulum, model, 24
Heat equation, model, 207
Heating controller
Executable specification model,
225
MATLAB connection, 230
Model, 230
How to download, 1
Including computer limitations,
lunar module model, 276
Leaning Tower, model, 9
Limited precision filter design,
model, 157
Linear differential equations,
model, 38
Lorenz attractor, model, 81
lunar module Stateflow logic,
model, 282
Monte Carlo, central limit theorem
model, 174
Naming conventions, 1
NCS definition, xx
Nonlinear resistor, model, 246
Observer, model, 74
Partial differential equations, 207
PD control, model, 69
Phase-locked loop, model, 164
PID control, model, 71
Random walk, model, 178
Rayleigh noise, model, 176
Reaction wheels model, 105
Rotation
Axis-angles, 98
Direction cosines, 98
Euler angles, 94
Quaternions, 99
Sampling theorem using FFT,
model, 160
Sampling theorem, model, 140
Saving new models, 4, 5
Set MATLAB path, 1
SimMechanics clock, model, 260
SimMechanics pendulum, model,
257
SimMechanics vibrating string,
model, 262
SimPowerSystems
Four-room house model, 263
Simple model, 242
SimPowerSystems train, model,
251
Spacecraft with reaction wheel,
model, 109
Specification capture, lunar module
model, 271
Specification to design, lunar
module control law model,
273
Spring-mass
Model, 55
State-space model, 51
State-space, continuous and
discrete, 39
Stateflow
Chart, 215
Debugger, 223
Model input-output, 221
Simple model, 215
Systems excited by white noise,
model, 189
Thermo_NCS, 46
Using data in tables, models, 87
Using filter design block, model,
153
Using the nonlinear resistor, model,
248
White noise, model, 184
New York Times
Thomas Friedman editorial, 289
Nonlinear controller
Home heating thermostat, 44
Nonlinear differential equations, 79
Numerical Computing with MATLAB, 30
Fibonacci sequence in, 115
Lorenz attractor in, 81
Partial differential equations in, 201
Numerical integration, see Solvers
Observers, 77
Ode113, 86
Ode23t, 249
Ode45, 193
Opening Simulink
Clicking on icon, 2
Command for, 2
Partial differential equations
Creating a Simulink model for, 205
Finite-dimensional models for, 202
Heat equation, 202
Electrical analog, 207
Model using SimPowerSystems,
262
State-space model, 207
Modeling in Simulink, 200
Vibrating string, 261, 264
Vibration, models for, 211, 261,
265
Pendulum, see also Clock; Foucault
pendulum
Clock, 17
Using SimMechanics, 257
Perrin, Jean
And Brownian motion, 180
Phase-locked loop (PLL), 164
How it works, 165
Model of, 167
Simulation of, 169
Voltage controlled oscillator
(VCO), 167
Physical modeling, 241
SimMechanics, 242
Poles and zeros
Bode Plot calculation with, 65
Definition, 42
Digital filter implementation and,
155
For 1/f noise approximation, 195
From state-space model, 54
In fir filters, 150
In Signal Processing Blockset, 152
LTI object and Control Systems
Toolbox, 66
Maxwell and, 44
Of Butterworth filter, 143
Phase shift in filters, 149
Using Control System Transfer
function, 55, 62
Polynomial block
Curve fitting in MATLAB for, 91
PostLoadFcn, see Callback
Power spectral density function
1/f noise, 194
Creating with white noise, 194
Definition, 194
Preload function, see Callback
Simulink model properties, 30
Quaternion, 94
Converting to direction cosines,
101
Converting to Euler angles, 101
Definition, 99
Derivative of, 99
Hamilton, discovery of, 112
In lunar module model, 272
Library block, 280
Making, 279
Norm of, 99
Subsystem block for, 101
Random processes, 173
Convergence of, 181
Random walk, see also Random
processes
Prototype for white noise, 178
Reaction wheels
Model of, 107
Operation of, 105
Relational Operator block, see Simulink
blocks
Reset integrator
For phase-locked loop, 167
Root locus plot
Comparing position and velocity
feedback, 61
Definition, 60
For mass velocity feedback, 60
Transfer functions, numerical
issues with, 63
Rotating bodies
Forces on, 25
Rotations
Axis-angle representation, 94
Direction cosine matrix
representation, 94
Quaternion representation, 94
Sampling theorem, 115, 136
Implementing with low pass filters, 140
Numerical experiments, 140
Proof of, 138
Simulink model for, 140
Using FFT to implement, 161
Satellite in orbit
Dynamics, rotational, 105
Second order sections, 157
In Filter Design, 156
Second order systems
Parameters in, 51, see also Damping
ratio; Undamped natural
frequency
Simulink model with transfer
functions, 42
Shannon, Claude, 136, 138, 170
Signal builder
For input to nonlinear resistor, 249
Signal processing blocks
FFT, 161
Signal Processing Blockset, 145
Analog filters in, 146
Bessel filter in, 148
Blocks in the library, 146
Butterworth filter in, 148
Chebyshev filter in, 148
Comparing blockset and Simulink
models, 146
Filter design with second order
sections, 157
Implementing a digital filter in
fixed point, 159
Limited Precision Bandpass filter
design with, 154
Using buffers for batch processing,
160
SimMechanics, 240
Environment, 258
Ground, 258
Library, 256
Physical modeling, 242
Vibrating string, 262, 264
SimMechanics blocks
Animation, 259
Body, 259
Body sensor, 259
Environment, 257
Ground, 256
Revolute joint, 259
SimPowerSystems, 240
Algebraic loops in, 248
Circuit with nonlinear resistor, 250
Connection icon, 242
Connections icon, 245
Connections to and from Simulink,
252
Library of blocks, 243, 244
Mask for nonlinear resistor block,
247
Mask for train model, 253
Modeling an electric train, 251
Nonlinear devices in, 248
Nonlinear elements in, 246
Nonlinear resistor model, 246
Simple example—RC circuit, 242
Simulink inputs, 245
SimPowerSystems blocks
DC Voltage Source, 243
Resistors, inductors, capacitors,
244
Switch and circuit breaker, 245
Simulink
Adding a block, 3
Automatic connections, 4
Automatic vectorization, 33
Block diagram basics, 1
Block dialogs, 3
Click and drag, 3, 7
Continuous, library, 7
Creating a new model, 3
Drawing neat diagrams, 4
Library browser, 2
Logic and bit operations, library, 7
Masked subsystem, creating, 247
MATLAB, setting constants in, 9
Modeling partial differential
equations, 200
Preload function in model
properties, 30
Signal flow connections, 4
Simple model, 3
Sinks, library, 7
Solver
Default, 10
For SimMechanics, 257
For SimPowerSystems, see also
Ode23t
Making changes, 30
Selecting, 86, 87
With noise, 184, 186
Sources, library, 7
Starting a simulation, 9
Viewing results, Scope block, 9
Zero crossing
Blocks that detect, 11
Detection, 10
Interpolation, 10
Simulink blocks
Absolute value, 22, 23
Band Limited White Noise, 184
Clock, from Sources library, 88
Comparing, using differences, 39
Constant, 4, 7, 88
Control systems, time-based
linearization, 83
Control systems, trigger-based
linearization, 83
Cross product, 101
Data store, read, write and memory,
280
From File, 92
From Workspace, 92
Gain, 12, 23, 72
Vectorizing, 14
GoTo, 77
Hysteresis, 45
Integrator, 7, 82
Integrator notation, from Laplace
operator, 41
Leaning Tower simulation, blocks
for, 7
Lookup Interpolation, 87
Lookup Table, 87, 88
Dynamics, 87
Library, 87
Math Operations, 3
Matrix Concatenation, horizontal
and vertical, 111
Multiplication, for matrix
operations, 102
Multiport switch, 76
Mux, 45, 253
Polynomial, curve fitting, 91
Prelookup, 87
Product, 12
Relational Operator block, 7
Scope, 7, 13
Selector, 100
Sign, 23
Signal Builder, 249
Sine Input, 21, 46
State-space
Continuous, 38, 52, 58
Discrete, 39
Stop block, 7
Subsystem, 21, 190
Sum, 12, 20, 88, 253
Sum, used as difference, 72
Transfer function, 42, 63
Trigonometric functions, 4
Unit Delay, 116
Zero-Pole-Gain, 42
Simulink data
In MATLAB by default, 10
Simulink Models, see also NCS library
Annotating, 15
Wide nonscalar lines, 16
Smoluchowski, Marian von
And Brownian motion, 180
Solvers, see also Ode113; Ode23t;
Ode45; Simulink
For numerical integration, see also
Foucault pendulum
In SimMechanics, 257
ODE suite in Simulink, 86
Setting step size, 259
Simulink implements MATLAB
solvers as standalone DLLs, 30
Simulink’s use of, 7
Stiff, 105
Using different solvers in a simulation, 31
Using Ode23t in
SimPowerSystems, 249
Spacecraft rotation
Model for, 109
Spaghetti Simulink models, 290
Specification development and capture
In the system design process, 270
Spectral density function
Definition, 194
Using correlation function, 183
State-Space
And switch curves for lunar
module, 285
State-Space model, see also Simulink
blocks
Calculating transfer function from,
42
For 1/f noise approximation, 196
For discrete time systems, 131
For pendulum, 38
For spring-mass-damper system, 58
Full state feedback and, 74
Getting Bode plot from, 65
Getting linear model for Lorenz
attractor, 83
In SimPowerSystems, 246
Of two-room house, 209
Transfer function for, 42
Discrete systems using, 135
Using lti object in MATLAB, 67
Stateflow, 282
Action language, 228
Adding events, 221
Adding inputs and outputs, 221
Heating control specification, 230
Home heating controller using, 225
Semantics, 218
Simple chart, 215
Using a GUI for inputs, 230, 233
Using the debugger, 223
Stateflow Executable Specification
In the system design process, 283
Stochastic processes, see Random
processes
Subsystems
Annotating, 81
Completing top down design, 281
Converting Fahrenheit to Celsius
and back, 45
Counter, for RCS jet timing (lunar
module), 280
Creating, 21
Creating a library for, 81
Creating a matrix in, 111
Deriving rate, for, 75
Empty, for top-down design, 271
Heating controller, executable
specifications, 226
House dynamics, heating system
model, 45
Interacting with a GUI, 233
Lunar module reaction jet control,
279
Mask dialog for parameter inputs,
78
Masked, 76, 81
documentation, 247
drawing an icon on, 247
for white noise, 184
Noise simulation, discrete and
continuous time, 191
Nonlinear resistor, in
SimPowerSystems, 247
Padding a buffer, signal processing,
162
Quaternion, library block, for, 100
Reaction wheels, for, 107
Reset Integrator, in phase-locked
loop, 167
Spacecraft rotation, for, 109
Specification capture and, 226
Stateflow “Box” command, 216
Stateflow, subcharts, 219
Tracing model to specification, for,
269
Train model
For track resistances, 254
SimPowerSystems and
Simulink, 252
With track simulation, 253
Triggered, 218
VCO, in phase-locked loop, 167
Vibrating string, in SimMechanics,
264
Zero crossing detection for, 11
System design process, 269
Component level design that meets
specification, 276
Creating an executable
specification, 271
Creating embedded code, 286
Example, lunar module digital
flight control, 272
Modeling and analysis, 271
Specification development and
capture, 270
Steps in, 270
The final lunar module executable
specification, 279
Using Stateflow in the executable
specification, 283
Verification and validation, 285
TeX
Annotating with, 103
Thermostat
Modeling, 48
Operation of, 46
Time-based linearization, 83
Train Simulation
Calculating rail resistances from
train positions, 254
Using SimPowerSystems, 254
Transfer Function
Irrational for 1/f noise, 195
Of a discrete system, 156
Of Butterworth filter, 143
Of ideal low pass filter, 140
Specifying for an analog filter, 142
Viewing in the Signal Processing
Blockset, 151
Transfer Function Block, see also
Simulink Blocks
Change of filter changing icon for,
147
Transfer Functions
For digital filters, 121
From a simulation, 123
In the Simulink digital library, 128
Trigger-based linearization, 83
Trigonometric (Trig) functions, 3
In direction-cosine matrix
calculation, 101
Tucker, Marc
In New York Times editorial, 289
Unbuffer, see Buffers
Undamped natural frequency, 51
Unit Delay, see Simulink blocks
Vectorizing a model, 33
Verification and validation
In the system design process, 285
Simulink blocks for, 285
Vibrating string, 262
Voltage Controlled Oscillator (VCO)
In phase-locked loop, 167
Wiener process, 178
Band Limited White Noise block,
184
In integrals, 183
Simulations with, 184
White noise and, 183
White noise, in Simulink, 184