Master Thesis MEE03:32
Identification and Analysis of Nonlinear Systems
Henrik Åkesson
Benny Sällberg
Degree of Master of Science in Electrical Engineering
Examiner: Prof. Ingvar Claesson
Supervisor: Prof. Kjell Ahlin
Department of Telecommunication and Signal Processing
Blekinge Institute of Technology
December, 2003
Abstract
In classical mechanical engineering, the predominant group of system analysis and identification tools relies on linear systems, a field in which research has been carried out for over half a century. Linear systems are the most widely used, often because of their simple mathematics and formulation for many engineering problems. Although linearization is a means of simplifying a problem, it introduces more or less severe modelling errors.
In some cases the errors due to linearization are too large to be practically acceptable, and therefore nonlinear structures and models are sometimes introduced.
This thesis aims at implementing and evaluating some popular methods and algorithms for nonlinear structure analysis and identification, with emphasis on systems having nonlinear terms. Preferably, the algorithms should be optimized with respect to their computational load.
The result is a set of algorithms for nonlinear analysis and identification. The ones giving the best results were the frequency based methods Reverse Path and the Frequency Domain Structure Selection Algorithm (FDSSA). The time domain based method, Nonlinear Autoregressive Moving Average with eXogenous input (NARMAX), in which much hope had been placed, performed very well in giving good system descriptions, but due to its nonphysical representation it was not suitable for use in this thesis.
The algorithms and methods were finally applied to two cases: a four-system blackbox case and an experimental testrig case. The methods performed well for three of the four systems in the first case, but not for the second case, due to problems in applying correct levels of excitation force at the testrig's resonance frequencies.
Contents

1 Introduction  1

2 Theoretic Background  2
2.1 Linear Systems  2
2.1.1 Degree of freedom  2
2.1.2 Basic mechanical system SDOF  3
2.1.3 Larger mechanical systems MDOF  5
2.2 Nonlinear Systems  7
2.2.1 Brief Introduction  7
2.2.2 Theoretical Representation of Nonlinearities  9
2.2.3 Presentation of Nonlinear Systems  9

3 System Identification and Analysis  10
3.1 Introduction  10
3.2 Linear System Identification and Analysis  10
3.2.1 Non-Parametrical Spectrum Identification  10
3.2.2 SISO Systems  11
3.2.3 Time Series Models  12
3.2.4 Special Types of Random Processes  12
3.2.5 Modal Analysis  14
3.3 Nonlinear System Identification  18
3.3.1 NARMAX Modelling  18
3.3.2 Reverse Path Method  18
3.3.3 Frequency Domain Structure Selection Algorithm  20
3.3.4 Finding the Nonlinear Nodes  21
3.4 Nonlinear System Analysis  22
3.4.1 Harmonic Balance Method  22
3.4.2 Hilbert Transformation Techniques  23
3.4.3 Statistical Analysis  27

4 System Synthesis  30
4.1 Introduction  30
4.2 Linear System Synthesis  30
4.2.1 Time Response Synthesis using Laplace Transformation  30
4.2.2 Synthesis by Digital Filters  31
4.3 Nonlinear Systems Synthesis  35
4.3.1 Analytical Time Response Synthesis  35
4.3.2 Synthesis by Ordinary Differential Equation Solvers  36
4.3.3 Synthesis by Extended Digital Filter Structures  38
4.4 Synthesis Quality Assessments  39

5 Experimental Evaluation of Blackbox Systems  40
5.1 Background  40
5.2 The Experiment  42
5.2.1 The First System  42
5.2.2 The Second System  46
5.2.3 The Third System  48
5.2.4 The Fourth System  52
5.2.5 The Results  56

6 Experimental Evaluation of Test-Rig  57
6.1 Background  57
6.2 Introduction  57
6.3 System Model  57
6.3.1 Linear Part of System Model  57
6.3.2 Nonlinear Part of System Model  57
6.3.3 Nonlinear Property Verification  58
6.4 Experimental Setup  59
6.4.1 Measurement Equipment  59
6.4.2 Equipment List  59
6.4.3 Work Materials  59
6.5 Choice of Performance and Excitation Signals  61
6.6 Measurement Settings  61
6.6.1 Measurement Settings, Collected from Signal Calc when using built-in functions  61
6.6.2 Measurement Settings, Collected from Signal Calc when using an excitation signal produced in Matlab  61
6.6.3 Channel Settings for measurements 1-3  61
6.6.4 Channel Settings for measurements 4-11  62
6.7 Results  62
6.8 Conclusion  80

7 Summary and Conclusions  81
7.1 Summary  81
7.2 Conclusions and Further Research  81

A Derivations of Impulse, Step, and Ramp Invariance  82
A.1 Derivation of Impulse Invariance  82
A.2 Derivation of Step Invariance  83
A.3 Derivation of Ramp Invariance  84

B Modified Bootstrap Structure Detection  86
Nomenclature
Matrix Notation
{..}  Column vector
{..}ᵀ  Row vector
[..]  Matrix
[..]ᵀ  Transpose of a matrix
[..]⁻¹  Inverse of a matrix
[..]ᴴ  Complex conjugate transpose, or Hermitian transpose, of a matrix

Operator Notation
x*  Complex conjugate
ẋ  First derivative with respect to time
ẍ  Second derivative with respect to time
x̂  Estimated value of x
x̄  Mean value of x
F  Fourier transform
F⁻¹  Inverse Fourier transform
L  Laplace transform
L⁻¹  Inverse Laplace transform
Z  Z transform
Z⁻¹  Inverse Z transform
H  Hilbert transform
H⁻¹  Inverse Hilbert transform
ℜ  Real part of complex number
ℑ  Imaginary part of complex number
∂/∂t  Partial derivative with respect to independent variable t
Mechanical Notation
a  Acceleration  m/s²
v  Velocity  m/s
x  Displacement  m
m  Mass  kg
c  Viscous damping  kg/s
k  Stiffness  N/m
f  Force  N
λ  Eigenvalue, pole of transfer function  rad/s
ζ  Relative damping  DL
R  Residue  ∗
r  mode number (subscript)  DL
p  response point (subscript)  DL
q  reference point (subscript)  DL
Signal Analysis Notation
t  Time  s
T  Time period  s
f  Frequency  Hz
F  Normalized frequency  DL
F_s  Sampling frequency  Hz
Ω  Angle frequency  rad/s
ω  Normalized angle frequency  DL
Ω  Instantaneous frequency  rad/s
θ  Instantaneous phase
A  Instantaneous amplitude  ∗
τ  Time delay, independent variable  s
s  Laplace domain variable
z  Z domain variable
x, x(t)  Input signal  ∗
y, y(t)  Output signal  ∗
x(n), y(n)  Time discrete signals  ∗
X(f)  Spectrum of x(t)  ∗
Y(f)  Spectrum of y(t)  ∗
P_xx  Auto spectrum of x(t)  ∗
P_yy  Auto spectrum of y(t)  ∗
P_yx  Cross spectrum of x(t) with y(t)  ∗
R_xx  Autocorrelation of x(t)  ∗
H(f)  Frequency response function  ∗
h(t)  Impulse response function  ∗
w(n)  Window function  DL
δ(t)  Impulse function
δ(n)  Discrete impulse function
n  Sample number  DL
N  Integer number, order  DL
M  Integer number, order  DL
Statistical Notation
M_i  Central Moment
M_2  Second Moment
s_x  Skewness  DL
k_x  Kurtosis  DL
σ_x  Standard deviation of x  ∗
σ_x²  Variance of x  ∗
n(t)  White Gaussian noise  ∗
General Notation
[I]  Identity matrix
J  The Jacobian
f{..}  Nonlinear function
G[..]  Nonlinear system
k_n  Nonlinear coefficient constant  ∗
Ω  Harmonic excitation
e(n)  Model error  ∗
e(t)  Error  ∗
j  √−1  DL
L  Independent variable

Notes:
∗  The unit varies with the variable.
DL  Dimensionless variable.
Abbreviations
DOF  Degree of Freedom
SDOF  Single Degree of Freedom
MDOF  Multi Degree of Freedom
FEM  Finite Element Method
FRF  Frequency Response Function
GFRF  General Frequency Response Function
SISO  Single Input Single Output
SIMO  Single Input Multiple Output
MISO  Multiple Input Single Output
CSD  Cross Spectrum Density
PSD  Power Spectrum Density
AR  Auto Regressive
MA  Moving Average
ARMA  Auto Regressive Moving Average
ARX  Auto Regressive with External Input
ARMAX  Auto Regressive Moving Average with External Input
NARMA  Nonlinear Auto Regressive Moving Average
NARMAX  Nonlinear Auto Regressive Moving Average with External Input
HBM  Harmonic Balance Method
ODE  Ordinary Differential Equation
CEA  Complex Exponential Algorithm
LSCE  Least Squares Complex Exponential
PTD  Polyreference Time Domain
ITD  Ibrahim Time Domain
MRITD  Multiple Reference ITD
ERA  Eigensystem Realization Algorithm
PFD  Polyreference Frequency Domain
SFD  Simultaneous Frequency Domain
MRFD  Multi-Reference Frequency Domain
RFP  Rational Fraction Polynomial
OP  Orthogonal Polynomial
CMIF  Complex Mode Indicator Function
MBSP  Modified Bootstrap Structure Detection
DBG  Data Block Generator
LR  Linear Regressor
SD  Structure Discriminator
Chapter 1
Introduction
This master thesis is divided into seven chapters. The initial chapters act as a brief introduction to the vast field of nonlinear system analysis, identification and synthesis. The later chapters present operations and results on four blackbox systems and on a physical testrig. The chapters are as follows:
Chapter 2 - Theoretic Background: Gives a theoretical background to the field of mechanics and modal analysis. Single and multi degree of freedom dynamical systems, purely linear systems and systems having nonlinear elements are explained and presented.
Chapter 3 - System Identification and Analysis: Presents linear system identification and analysis methods briefly, and nonlinear system identification and analysis methods more thoroughly.
Chapter 4 - System Synthesis: Presents tools for system synthesis, both for linear systems and for nonlinear systems.
Chapter 5 - Experimental Evaluation of Blackbox Systems: The analysis and identification tools are used for experimental evaluation of four blackbox systems.
Chapter 6 - Experimental Evaluation of Test-Rig: The analysis and identification tools are used for experimental evaluation of a nonlinear mechanical system.
Chapter 7 - Summary and Conclusions: Summarizes and concludes the work and results of this thesis.
Chapter 2
Theoretic Background
2.1 Linear Systems
2.1.1 Degree of freedom
An understanding of the degree of freedom is a prerequisite for understanding the concept of modal analysis.
The number of degrees of freedom is the number of independent coordinates and independent motions in each coordinate. In one single point there exist six degrees of freedom: the translational motions in each direction (x, y, z), and the rotational motions (θ_x, θ_y, θ_z) about each axis. A mechanical system has an infinite number of degrees of freedom, because the system is continuous and must be described by an infinite number of coordinates. The observed number of degrees of freedom is in reality of course finite, limited by different physical causes. The following parameters reduce the number of degrees of freedom: the frequency range of interest, the dynamics of the measurement system, and the degrees of freedom that are actually possible to reach and measure. All these parameters make it possible to connect theoretical analysis with real measurements.
Further definitions
A coordinate (x, y, z) of the mechanical system is hereafter referred to as a node; together with the motions, the node forms a mode in each of these directions. With the fourth dimension, time, mode shapes appear, and all mode shapes together describe the motion behavior of the whole system.
Assumptions
To be able to calculate and analyze the system, the following assumptions are made about the systems in this chapter.
• The system is linear:
a₁H{x₁(t)} + a₂H{x₂(t)} = H{a₁x₁(t) + a₂x₂(t)}  (2.1)
Figure 2.1: Condition for a system to be linear.
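The superposition condition (2.1) is easy to check numerically. The following sketch is not from the thesis; it uses Python/NumPy rather than Matlab, and the two example operators (a moving-average filter and a cubic nonlinearity) are arbitrary illustrations:

```python
import numpy as np

# Numerical check of the linearity condition (2.1) for two discrete operators.
def H_lin(x):
    """A moving-average filter: a linear operator."""
    return np.convolve(x, [0.5, 0.5])[: len(x)]

def H_cubic(x):
    """An operator with a cubic term: nonlinear."""
    return x + 1e-2 * x ** 3

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
a1, a2 = 2.0, -3.0

lhs = a1 * H_lin(x1) + a2 * H_lin(x2)
rhs = H_lin(a1 * x1 + a2 * x2)
print(np.allclose(lhs, rhs))        # True: superposition holds

lhs_n = a1 * H_cubic(x1) + a2 * H_cubic(x2)
rhs_n = H_cubic(a1 * x1 + a2 * x2)
print(np.allclose(lhs_n, rhs_n))    # False: the cubic term breaks it
```

For the linear filter the two sides agree to machine precision for any inputs and scalars; the cubic operator fails the test because (a₁x₁ + a₂x₂)³ ≠ a₁x₁³ + a₂x₂³ in general.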
• The system is causal:
If the output of the system depends only on the current and previous input values, the system is denoted causal. This means that the system lacks terms z^k with k > 0, which is the same as the condition in equation (2.2), where h(n) is the discrete impulse response.
h(n) = 0, n < 0  (2.2)
• The system is time invariant:
This means that the system acts independently of time.
y(t) = H{x(t)} ⇔ y(t − k) = H{x(t − k)}  (2.3)
Figure 2.2: Condition for a system to be time-invariant.
• Reciprocity:
If the system fulfils the reciprocity condition, then the transfer function is the same even if the input is applied into node q and the output is measured in node p, or the opposite.
H_pq(s) = H_qp(s)  (2.4)
Figure 2.3: Condition for the system to fulfil reciprocity. f(t) is the force into node q of the system, x(t) is the displacement response from the system in node p.
2.1.2 Basic mechanical system SDOF
Simple systems can be modelled as a mass-damper-spring system in a single point and direction, denoted a single degree of freedom (SDOF) system. It is described by Newton's equation, equation (2.5), where m is the mass, c the damping coefficient and k the stiffness coefficient.
m ẍ(t) + c ẋ(t) + k x(t) = f(t)  (2.5)
Figure 2.4: An example single-degree-of-freedom system, described using mass m, spring k and damper c. The system excitation force f(t) and response x(t) are displayed as well.
This equation has two solutions: a transient solution and a steady state solution. The transient solution is the homogenous solution and is the one of interest when investigating the properties of the system.
To formulate the problem, a good start is to set up an equation based on Newton's second law, equation (2.5). This equation of motion describes the behavior of the system when a force is present. By using the Laplace transform, the problem becomes more convenient to solve.
L{m ẍ(t) + c ẋ(t) + k x(t)} = L{f(t)}  (2.6)
m(s²X(s) − s x(0) − ẋ(0)) + c(sX(s) − x(0)) + kX(s) = F(s)  (2.7)
Assuming zero initial conditions:
ms²X(s) + csX(s) + kX(s) = F(s)  ⇒  (ms² + cs + k)X(s) = F(s)  (2.8)
H(s) = X(s)/F(s) = 1/(ms² + cs + k)  (2.9)
The homogenous solution is obtained with f(t) = 0.
ms²X(s) + csX(s) + kX(s) = 0  (2.10)
Solving for the nontrivial solution, L{x(t)} ≠ 0, gives the characteristic equation of the system. This is a simple second degree equation, solved by the general formula (2.11)
x_{1,2} = −p/2 ± √((p/2)² − q)  (2.11)
for the expression x² + px + q = 0. Using the parameters from above in equation (2.11) gives the natural frequencies:
λ_{1,2} = −c/(2m) ± √((c/(2m))² − k/m)  (2.12)
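As a small numerical illustration (not part of the original derivation), equation (2.12) can be evaluated directly; the parameter values below are arbitrary example values:

```python
import cmath

def sdof_poles(m, c, k):
    """Poles of the SDOF system from equation (2.12):
    lambda_{1,2} = -c/(2m) +/- sqrt((c/(2m))^2 - k/m)."""
    a = c / (2.0 * m)
    disc = cmath.sqrt(a * a - k / m)
    return (-a + disc, -a - disc)

# Example values; (c/(2m))^2 < k/m here, so the system is underdamped
lam1, lam2 = sdof_poles(m=1.0, c=10.0, k=1e4)
print(lam1, lam2)
```

For an underdamped system the two poles come out as a complex conjugate pair, matching the discussion around equation (2.14) below.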
Another common representation of the system is the partial fraction expansion of the system equation (2.9):
H(s) = R₁/(s − λ₁) + R₂/(s − λ₂)  (2.13)
where R₁ and R₂ are the residues belonging to the roots λ₁ and λ₂. For underdamped systems ((c/(2m))² < k/m), the roots of equation (2.13) will be complex conjugates (λ₂ = λ₁*). By definition the residues will then also be complex conjugates, and equation (2.13) is rewritten into equation (2.14):
H(s) = R/(s − λ) + R*/(s − λ*)  (2.14)
2.1.3 Larger mechanical systems MDOF
For more than one degree of freedom, the system becomes a multi degree of freedom (MDOF) system. It is more convenient to first rewrite the equation system into matrix form and then solve the homogenous part.
[M]{ẍ(t)} + [C]{ẋ(t)} + [K]{x(t)} = {f(t)}  (2.15)
[M] = [ m_11 m_12 ··· m_1m ; m_21 m_22 ··· m_2m ; ⋮ ⋱ ⋮ ; m_m1 ··· m_mm ]  (2.16)
[C] = [ c_11 c_12 ··· c_1m ; c_21 c_22 ··· c_2m ; ⋮ ⋱ ⋮ ; c_m1 ··· c_mm ]  (2.17)
[K] = [ k_11 k_12 ··· k_1m ; k_21 k_22 ··· k_2m ; ⋮ ⋱ ⋮ ; k_m1 ··· k_mm ]  (2.18)
{ẍ(t)} = { ẍ_1(t), ẍ_2(t), …, ẍ_m(t) }ᵀ,  {x(t)} = { x_1(t), x_2(t), …, x_m(t) }ᵀ,  {f(t)} = { f_1(t), f_2(t), …, f_m(t) }ᵀ  (2.19)
The matrix notation in equations (2.16)-(2.18) is the general expression, but more common is the expression from the FEM (Finite Element Method), where the mass matrix is a diagonal matrix and the damping and stiffness matrices are symmetric. Equation (2.20) is an example of a lumped system calculated from a FEM model, with the special case where the damping matrix equals zero. This special case is called a non-damped system.
[M] = diag(m_1, m_2, …, m_m),  [K] = [ k_1+k_2  −k_2  ··· ; −k_2  k_2+k_3  ··· ; ⋮ ⋱ ]  (2.20)
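A minimal sketch of how such a lumped mass-spring chain could be assembled numerically (illustrative only, not from the thesis; the assumed layout is ground, spring k_1, mass m_1, spring k_2, mass m_2, and so on, with the last mass free):

```python
import numpy as np

def lumped_matrices(masses, springs):
    """Build the diagonal mass matrix and the symmetric tridiagonal stiffness
    matrix of a lumped mass-spring chain, as in equation (2.20).
    springs[0] connects mass 0 to ground; springs[i] connects mass i-1 to mass i."""
    m = len(masses)
    M = np.diag(masses).astype(float)
    K = np.zeros((m, m))
    for i in range(m):
        # Diagonal: the two springs attached to mass i (last mass has one).
        K[i, i] = springs[i] + (springs[i + 1] if i + 1 < len(springs) else 0.0)
        if i + 1 < m:
            # Off-diagonal coupling through the shared spring.
            K[i, i + 1] = K[i + 1, i] = -springs[i + 1]
    return M, K

# Two masses of 1 kg, two springs of 1e4 N/m (arbitrary example values)
M, K = lumped_matrices([1.0, 1.0], [1e4, 1e4])
```

The resulting K has k_1 + k_2 on the first diagonal entry and −k_2 on the off-diagonals, exactly the pattern of equation (2.20).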
When the equation system is rewritten, the problem can be seen as an eigenvalue problem. Three different systems will be mentioned: non-damped systems, proportionally damped systems and non-proportionally damped systems. The first system, with no damping, [C] = [0], formulates the easiest problem to solve.
([M]s² + [K]){x̄} = {0}  (2.21)
(s²[I] + [M]⁻¹[K]){x̄} = {0}  (2.22)
([M]⁻¹[K] + α[I]){x̄} = {0},  α = s²  (2.23)
The system poles are the square roots of the solutions α to the eigenvalue problem in equation (2.23), λ = ±√α. Putting each eigenvalue into the same equation and calculating the corresponding eigenvector results in a mode shape, which belongs to the natural frequency through the relationship λ_r = σ ± jω.
Proportionally damped systems
It can be shown that proportionally damped systems can be diagonalized in the same way as non-damped systems. Proportionally damped systems can thereby be represented as a set of uncoupled single-degree-of-freedom systems. The definition of a proportionally damped system is
([M]⁻¹[C])ˢ ([M]⁻¹[K])ʳ = ([M]⁻¹[K])ʳ ([M]⁻¹[C])ˢ,  s, r ∈ Z  (2.24)
A simpler definition, which works in many cases, is the one in equation (2.25):
[C] = α[M] + β[K]  (2.25)
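The commutation condition (2.24) can be verified numerically, here checked only for the s = r = 1 case: a damping matrix built as in (2.25) passes the test, while a local damper generally does not. All matrix values below are arbitrary illustrations:

```python
import numpy as np

def commutes(M, C, K, tol=1e-8):
    """Check the s = r = 1 case of equation (2.24):
    (M^-1 C)(M^-1 K) == (M^-1 K)(M^-1 C)."""
    Mi = np.linalg.inv(M)
    A, B = Mi @ C, Mi @ K
    return np.allclose(A @ B, B @ A, atol=tol)

M = np.diag([1.0, 2.0])
K = np.array([[2e4, -1e4], [-1e4, 1e4]])
C_prop = 0.5 * M + 1e-3 * K                      # proportional damping, eq (2.25)
C_nonprop = np.array([[5.0, 0.0], [0.0, 0.0]])   # a single local damper

print(commutes(M, C_prop, K))      # True: the matrices commute
print(commutes(M, C_nonprop, K))   # False: non-proportional damping
```

The proportional case commutes trivially, since M⁻¹C is then a linear combination of the identity and M⁻¹K itself.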
Non-proportionally damped systems
When it comes to the non-proportionally damped system, which does not fulfil the above conditions, one gets a more complicated formulation to solve. But it is a common problem, which has been solved by reformulating the problem using a state space formulation. This is done by adding equation (2.27) and then reformulating the matrix definitions as in equations (2.28)-(2.29).
[M]{ẍ(t)} + [C]{ẋ(t)} + [K]{x(t)} = {0}  (2.26)
[M]{ẋ(t)} − [M]{ẋ(t)} = {0}  (2.27)
[A] = [ [0] [M] ; [M] [C] ]  (2.28)
[B] = [ [−M] [0] ; [0] [K] ]  (2.29)
{ẏ} = { {ẍ} ; {ẋ} },  {y} = { {ẋ} ; {x} }  (2.30)
Equation (2.31) gives the final expression and can be solved as the eigenvalue problem previously shown for the non-damped and proportionally damped systems.
[A]{ẏ} + [B]{y} = {0}  (2.31)
The solution differs in some aspects from the previous ones, as a result of the state space formulation. There will be twice as many eigenvalues and eigenvectors as before, and the natural frequencies are not the square roots of the eigenvalues but are given directly by the eigenvalues. This can be noticed from the order of the differential equation (2.31).
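The state space eigenvalue problem (2.31) can be sketched numerically as follows (illustrative Python/NumPy, arbitrary parameter values); as a sanity check, a one-DOF input should reproduce the poles of equation (2.12):

```python
import numpy as np

def state_space_poles(M, C, K):
    """Poles of a generally damped MDOF system via the state space
    formulation of equations (2.28)-(2.31): A y' + B y = 0, so the poles
    are the eigenvalues of -A^-1 B (twice as many as the number of DOFs)."""
    Z = np.zeros_like(M)
    A = np.block([[Z, M], [M, C]])
    B = np.block([[-M, Z], [Z, K]])
    lam = np.linalg.eigvals(-np.linalg.solve(A, B))
    return np.sort_complex(lam)

# SDOF sanity check with m = 1 kg, c = 10 kg/s, k = 1e4 N/m:
# expect lambda = -5 +/- j*sqrt(9975), as from equation (2.12)
poles = state_space_poles(np.array([[1.0]]), np.array([[10.0]]), np.array([[1e4]]))
```

Note that no proportionality assumption is needed here; the same routine handles non-proportional damping matrices.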
2.2 Nonlinear Systems
2.2.1 Brief Introduction
In many areas of science, nonlinear systems are often linearized. Linearization is applied to make calculations more manageable and less tedious. But in some cases, neglecting the nonlinearities introduces errors too large to be acceptable. An example of errors due to linearization is presented in figure (2.5). Therefore, nonlinear models are developed to represent reality a bit more accurately, thereby minimizing
Figure 2.5: Example of system errors that could occur due to linearization. Plot a) shows the estimated frequency response functions of three systems: H₁(F), the underlying linear SDOF system; H₂(F), a nonlinear system having a cubic term with the constant 1·10⁹ N/m³; and H₃(F), a nonlinear system having a cubic term with the constant 1·10¹⁰ N/m³. Plot b) shows the coherence functions of the systems. The underlying linear SDOF system is represented with the modal parameters M = 1 kg, C = 10 kg/s, K = 1·10⁴ N/m, using the sampling frequency F_s = 400 Hz.
such modelling errors. Even though many models have been developed, this thesis will only deal with a few selected models and methods. Special concern is given to the frequency spectrum based method Reverse Path and to a time series method, the Nonlinear Autoregressive Moving Average model with eXogenous inputs (NARMAX).
Following in this section are some important properties, concepts and ideas concerning nonlinear systems, such as: nonlinear structures, stability considerations, frequency modulation, bifurcations and, briefly, nonlinear resonance.
Nonlinear Structures
Nonlinear structures are often divided into three main types: zero memory, finite memory and infinite memory systems. The zero memory types of systems are perhaps the simplest of the three, as the nonlinear operator is applied only at the system input, whereas the infinite memory types of systems apply it also to the system response, coupled back to the system input. This thesis will emphasize nonlinear systems having infinite memory, i.e. the nonlinearity of such a system is recursively coupled back to the system input. A typical infinite memory type of system is the MDOF system in equation (2.32); this system is often referred to as the Duffing Oscillator System in literature.
[M]{ẍ(t)} + [C]{ẋ(t)} + [K]{x(t)} + [K_n]{x³(t)} = {f(t)}  (2.32)
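A scalar version of equation (2.32) can be integrated numerically, for instance with a fourth-order Runge-Kutta scheme. The sketch below is illustrative only and not from the thesis; all parameter values are arbitrary:

```python
import math

def duffing_step(state, t, dt, m=1.0, c=10.0, k=1e4, kn=1e9, A=100.0, w=60.0):
    """One fourth-order Runge-Kutta step of the scalar Duffing oscillator
    m*x'' + c*x' + k*x + kn*x**3 = A*sin(w*t), a 1-DOF case of eq (2.32).
    All parameter values are arbitrary illustration values."""
    def deriv(s, tt):
        x, v = s
        a = (A * math.sin(w * tt) - c * v - k * x - kn * x ** 3) / m
        return (v, a)
    k1 = deriv(state, t)
    k2 = deriv((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]), t + 0.5 * dt)
    k3 = deriv((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]), t + 0.5 * dt)
    k4 = deriv((state[0] + dt * k3[0], state[1] + dt * k3[1]), t + dt)
    x = state[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v = state[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return (x, v)

# Integrate two seconds of response from rest at 4 kHz
state, dt = (0.0, 0.0), 1.0 / 4000.0
xs = []
for i in range(8000):
    state = duffing_step(state, i * dt, dt)
    xs.append(state[0])
```

The hardening cubic term keeps the response bounded; such a time-domain synthesis loop is the same idea developed more carefully in chapter 4.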
Stability Considerations
Stable stationary solutions to dynamical systems are called equilibrium points. Stable but dynamic (time-variant) solutions are called trajectories or orbits. A nonlinear system can have more than one equilibrium solution, as opposed to linear systems, which always have only one such equilibrium point. Later in this thesis the Harmonic Balancing method will be described, which is a method for experimentally finding such equilibrium solutions.
Frequency Modulation
The concept of frequency modulation is that in nonlinear systems, poles are no longer fixed entities. The poles of a nonlinear system can be modified, for example by different excitation force amplitudes, due to internal force feedback of nonlinear responses, thus changing the modal mass, stiffness or damping properties of the nonlinear system. One of the modifications a pole can undergo is a change of its resonance frequency. This property is called frequency modulation in nonlinear systems. See section (3.4.2) on Hilbert transformation techniques for a more thorough discussion of nonlinear frequency modulation.
Bifurcations
Linear systems have only a single equilibrium point, but nonlinear systems exhibit one or more equilibrium points. The number of equilibrium points can change, as well as the stability of the points, as the system or excitation signal properties change. These sorts of events are called bifurcations. There are many types of bifurcations, where some add and some remove equilibrium points. A common feature is that they alter the stability of the equilibrium points. Of special interest in the Duffing oscillator configuration is the pitchfork bifurcation, where equilibrium points are created or destroyed symmetrically, with a change of stability of these points. To display the nature of bifurcations, bifurcation diagrams are used.
Another special type of bifurcation is the Hopf bifurcation. The Hopf bifurcation occurs when a stable equilibrium point is intersected by a stable trajectory, i.e. when a stable solution is intersected by a dynamically stable solution. The Hopf bifurcation makes the system pass from a stable solution into a dynamically stable solution. In literature, the bifurcation diagram of the Hopf bifurcation is identical to that of the pitchfork bifurcation.
Nonlinear Resonance
The term nonlinear resonance means that a nonlinear structure can be excited by frequencies lower than the underlying linear system's resonance frequencies. This happens as the response is fed back to the system input through a nonlinearity. Following is an intuitive example of the nonlinear resonance phenomenon, however not containing any proofs whatsoever. Assume a simple SDOF Duffing oscillator system whose underlying linear system has a resonance angular frequency equal to Ω_r, and let the excitation force be a sinusoid as in equation (2.33). Initially, when the feedback response is dormant, the response is of sinusoidal order as in equation (2.34), where h(t) is the underlying linear system impulse response function. At time t + µ, immediately after the nonlinear feedback has been initiated, the excitation force can be approximated as in equation (2.35), thus giving the response as in equation (2.36), which contains a component at a frequency three times the excitation force frequency. Now, let the excitation force frequency be Ω = Ω_r/3. This means that the response has a periodic component at Ω_r, and thus the Duffing oscillator system will be excited at its resonance frequency. By this simple yet intuitive example, the phenomenon of nonlinear resonance is displayed, but not proved by any means.
f(t) = A sin(Ωt)  (2.33)
x(t) = h(t) ∗ f(t) = B sin(Ωt + θ)  (2.34)
f̂(t + µ) ≈ f(t + µ) − k_n x³(t)  (2.35)
x(t + µ) ≈ h(t + µ) ∗ f̂(t + µ) = h(t + µ) ∗ ( f(t + µ) − k_n B³( −(1/4) sin(3Ωt + θ₁) + (3/4) sin(Ωt + θ₂) ) )  (2.36)
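The trigonometric identity behind equation (2.36), sin³(u) = (3/4)sin(u) − (1/4)sin(3u), can be verified numerically by cubing a sampled sinusoid and inspecting its spectrum (illustrative sketch, not from the thesis):

```python
import numpy as np

# Cubing a pure sinusoid produces exactly the 3/4*sin(w*t) - 1/4*sin(3*w*t)
# components used in equation (2.36).
N, cycles = 1024, 8                      # 8 full periods in the window
t = np.arange(N) / N
x = np.sin(2 * np.pi * cycles * t)
spec = np.abs(np.fft.rfft(x ** 3)) / (N / 2)

# Energy appears only in bins 'cycles' and '3*cycles'
print(spec[cycles], spec[3 * cycles])    # approx 0.75 and 0.25
```

With an integer number of periods in the window there is no leakage, so the two spectral lines carry amplitudes 3/4 and 1/4 exactly, which is the mechanism that lets excitation at Ω_r/3 feed energy into the resonance at Ω_r.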
2.2.2 Theoretical Representation of Nonlinearities
Volterra Series
Purely linear systems can be fully represented by a single convolution integral for the calculation of the system's time response. But in the case of systems having nonlinearities, the time response is a combination of overtones, thus requiring a higher order convolution integral. The higher order expansion of the convolution integral is called a Volterra series expansion, as in equation (2.37). The functions h_k(τ₁, …, τ_k) are called Volterra series kernels.
x(t) = ∫_{−∞}^{+∞} h₁(τ₁) f(t − τ₁) dτ₁ + ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h₂(τ₁, τ₂) f(t − τ₁) f(t − τ₂) dτ₁ dτ₂ + …  (2.37)
The description of a linear system can be seen as a special case of the Volterra series expansion, having only h₁(τ). This in turn shows that a nonlinear system always has an underlying linear system. See [11] for a further introduction to the theoretical background of nonlinear systems.
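A sampled, truncated version of equation (2.37) up to second order can be sketched as follows (illustrative only; the kernel values and the input are arbitrary):

```python
import numpy as np

def volterra_response(h1, h2, f):
    """Discrete-time response of a second-order Volterra series, a truncated
    sampled version of equation (2.37). h1 is a first-order kernel (vector),
    h2 a second-order kernel (matrix); both are illustration values only."""
    N, M1, M2 = len(f), len(h1), h2.shape[0]
    x = np.zeros(N)
    for n in range(N):
        # First-order (linear convolution) term
        for i in range(min(M1, n + 1)):
            x[n] += h1[i] * f[n - i]
        # Second-order term: double sum over the kernel matrix
        for i in range(min(M2, n + 1)):
            for j in range(min(M2, n + 1)):
                x[n] += h2[i, j] * f[n - i] * f[n - j]
    return x

f = np.array([1.0, 0.0, 0.0])            # a discrete impulse
h1 = np.array([0.5, 0.25])
h2 = 0.1 * np.eye(2)
x = volterra_response(h1, h2, f)
```

Setting h2 to zero reduces the routine to an ordinary linear convolution, mirroring the observation that the linear system is the h₁-only special case of the series.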
2.2.3 Presentation of Nonlinear Systems
In the case of linear dynamical systems, FRFs are often enough to show the characteristics of such systems. But when dealing with nonlinear systems, one has to use different presentation techniques to fully show the characteristics of the nonlinear dynamical system.
Generalized Frequency Response Function
The generalised FRF (GFRF) is defined as the multidimensional Fourier transformation of the Volterra series kernel of order n, as in equation (2.38).
H_n(ω₁, ω₂, …, ω_n) = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} h_n(τ₁, τ₂, …, τ_n) e^{−j(ω₁τ₁ + … + ω_nτ_n)} dτ₁ … dτ_n  (2.38)
The GFRFs of a linear system are all zero for n > 1 and equal to the linear system description, or FRF, for n = 1.
Backbone of Nonlinear Systems
The backbone of a nonlinear system is a plot of the resonance frequencies as functions of the excitation force amplitude. Since nonlinear terms affect the structure itself, making the structure more or less stiff or damped, this property changes in most cases as the excitation force amplitude is altered.
Chapter 3
System Identification and Analysis
3.1 Introduction
System identification and analysis is a vast topic in the area of signal processing; this thesis will only briefly touch selected areas of interest and present some suitable methods and algorithms for nonlinear system identification and analysis. The emphasis is on so-called blackbox system identification methods, i.e. methods with few and often non-strict system assumptions. Blackbox methods operate directly on measured data and should preferably be robust to measurement noise.
When identifying or analyzing a nonlinear system, some basic questions need to be answered:
Is nonlinear analysis necessary? Could the system be approximated by a linear system description, or does a linearization render too large errors?
Where is a nonlinearity situated? Between which of the nodes in an approximated system is the nonlinear functional mounted, ground node included?
Partly a linear system: Estimate the underlying linear system.
The nonlinear functional: What is the type of nonlinearity? Approximate the nonlinear functional.
3.2
Linear System Identification and Analysis
3.2.1
NonParametrical Spectrum Identification
Cross Spectral Density
The cross spectral density (CSD) describes how the cross power of two signals is distributed over frequency. Following are some popular intuitive estimators of the CSD. X(e^{-jω}) and Y(e^{-jω}) are the discrete Fourier transforms of x(n) and y(n) respectively, where n ∈ [0, N − 1] and ω ∈ [−π, π]. Furthermore, F_S is the sampling frequency in [Hz] and w(n) is the window function.
Periodogram The periodogram calculates a nonweighted CSD estimate for the signals x(n) and y(n). An estimate of the periodogram could be expressed as:
$$
P_{YX}(\omega) = \frac{1}{F_S N}\, Y\!\left(e^{-j\omega}\right) X^{*}\!\left(e^{-j\omega}\right) \quad (3.1)
$$
Modified Periodogram The Modified Periodogram is basically the Periodogram where a window is applied to the time signals in advance, thus minimizing the leakage that would appear due to abrupt endings in the data vectors. One could see the Periodogram as a special case of the Modified Periodogram where the rectangular window has been applied.
$$
P_{YX}(\omega) = \frac{1}{F_S N U} \left( Y\!\left(e^{-j\omega}\right) * W\!\left(e^{-j\omega}\right) \right) \left( X^{*}\!\left(e^{-j\omega}\right) * W\!\left(e^{-j\omega}\right) \right) \quad (3.2)
$$

$$
U = \frac{1}{N} \sum_{n=0}^{N-1} w(n)^2 \quad (3.3)
$$
Bartlett Bartlett's method for estimating the CSD is an extension of the Periodogram. The signals are divided into blocks, and for each block the periodogram is estimated. All sub-periodograms are averaged to form the final averaged CSD estimate.
$$
P_{YX}(\omega) = \frac{1}{F_S K N} \sum_{k=0}^{K-1} Y_k\!\left(e^{-j\omega}\right) X_k^{*}\!\left(e^{-j\omega}\right) \quad (3.4)
$$
Welch Welch's method uses averaging like Bartlett's method, except that it also uses an overlap technique. By overlapping, the number of effective blocks is increased considerably, thereby lowering the variance of the estimate even further. Depending on which window is used, different overlap values give optimal performance.
$$
P_{YX}(\omega) = \frac{1}{F_S K N U} \sum_{k=0}^{K-1} \left( Y_k\!\left(e^{-j\omega}\right) * W\!\left(e^{-j\omega}\right) \right) \left( X_k^{*}\!\left(e^{-j\omega}\right) * W\!\left(e^{-j\omega}\right) \right) \quad (3.5)
$$

$$
U = \frac{1}{N} \sum_{n=0}^{N-1} w(n)^2 \quad (3.6)
$$
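As a cross-check of equations (3.5) and (3.6), Welch's CSD estimator is available in SciPy. The sketch below (the FIR system and all signal values are hypothetical) estimates the CSD between the input and output of a known system with a Hann window and 50 % overlap, and recovers the FRF as the ratio of cross and auto spectra.

```python
import numpy as np
from scipy import signal

fs = 1024.0                                   # sampling frequency F_S [Hz]
rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)              # white-noise excitation
h = np.array([0.5, 0.3, 0.2])                 # small FIR stand-in system
y = signal.lfilter(h, 1.0, x)                 # system response

# Welch CSD with Hann window and 50% overlap; the window power U is
# handled internally by SciPy
f, Pxy = signal.csd(x, y, fs=fs, window="hann", nperseg=1024, noverlap=512)
_, Pxx = signal.welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)

H_est = Pxy / Pxx                             # FRF estimate from the spectra
```

For this noise-free example the ratio of the two Welch estimates reproduces the FIR system's frequency response closely across the band.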
Auto Power Spectral Density
(Auto) Power Spectral Density (PSD) could be seen as a special case of the CSD, as the PSD is calculated for only one signal. For example, the Periodogram for the signal x(n) would become as in equation (3.7).
$$
P_{XX}(\omega) = \frac{1}{F_S N} \left| X\!\left(e^{-j\omega}\right) \right|^2 \quad (3.7)
$$
The unit of P_{XX}(ω) depends on the signal being measured; if x(n) is a force signal with the unit [N], then the PSD of the signal will have the unit [N²/Hz].
Integrating a PSD over all frequencies ω ∈ [−π, π] gives the total signal power. This means that the PSD is very useful for estimating the power of signals having wideband components.
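The power property can be verified numerically. The sketch below (all values hypothetical) integrates a one-sided Welch PSD over frequency and compares the result with the time domain mean square power of the same record.

```python
import numpy as np
from scipy import signal

fs = 500.0
rng = np.random.default_rng(1)
x = 2.0 * rng.standard_normal(100_000)        # wideband signal, power ~4

# One-sided Welch PSD in [units^2/Hz]
f, Pxx = signal.welch(x, fs=fs, nperseg=1024)

# Integrating the PSD over frequency recovers the total signal power
power_from_psd = np.sum(Pxx) * (f[1] - f[0])
power_in_time = np.mean(x ** 2)               # time domain mean square power
```

The two power figures agree to within the statistical uncertainty of the estimates.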
3.2.2
SISO Systems
There are different ways to calculate a system when the input and output of the system are known. But in real measurements these signals are always more or less contaminated with noise, so the system cannot be calculated exactly but must instead be estimated. Different methods model different situations with respect to the noise.
The first estimator, H₁ in equation (3.8), assumes noise only at the output; the second estimator, H₂ in equation (3.9), assumes noise only at the input. The third estimator, H₃ in equation (3.10), is the mean of the two previous estimators, since the H₁ estimator underestimates the true H(f) while the H₂ estimator overestimates it.
$$
H_1(f) = \frac{P_{yx}(f)}{P_{xx}(f)} \quad (3.8)
$$

$$
H_2(f) = \frac{P_{yy}(f)}{P_{xy}(f)} \quad (3.9)
$$

$$
H_3(f) = \frac{\hat H_1(f) + \hat H_2(f)}{2} \quad (3.10)
$$
Where P_xx(f) and P_yy(f) are the autospectra of the input and output signals respectively, and P_xy(f) and P_yx(f) are the cross-spectra between the input and output signals. These spectra could be calculated with one of the following methods: the Periodogram, the Modified Periodogram, Bartlett's or Welch's method. See the previous section for the definitions and descriptions of the different methods.
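A minimal sketch of the H₁ and H₂ estimators, assuming a hypothetical digital stand-in system with output noise only (so H₁ should be nearly unbiased while H₂ reads slightly high):

```python
import numpy as np
from scipy import signal

fs = 1000.0
rng = np.random.default_rng(2)
x = rng.standard_normal(200_000)                   # measured input
b, a = signal.butter(2, 0.2)                       # hypothetical system H(f)
y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(x.size)  # noisy output

f, Pxx = signal.welch(x, fs=fs, nperseg=2048)
_, Pyy = signal.welch(y, fs=fs, nperseg=2048)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=2048)     # E{X*(f) Y(f)}

H1 = Pxy / Pxx                  # equation (3.8): P_yx / P_xx
H2 = Pyy / np.conj(Pxy)         # equation (3.9): P_yy / P_xy
H3 = 0.5 * (H1 + H2)            # equation (3.10)
```

At low frequencies, where the stand-in system has unit gain, both estimators read close to one, with H₂ biased slightly above H₁ by the output noise.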
The last estimator, H_c, is developed for shaker excitation where the force and, e.g., the acceleration are measured. It is reasonable to assume noise at both the input signal (force) and the output signal (acceleration). The estimator is based on the possibility of measuring the signal v(t) to the shaker and the assumption that this signal is free from contamination. If that is the case, then the system H_av(f), with the input v(t) and the contaminated output a(t), could be estimated with the H₁ estimator. Likewise, the system H_fv(f), with the input v(t) and the contaminated output f(t), could be estimated using the H₁ estimator.
Figure 3.1: A model for the H_c estimator. The signal v(t) drives the first block H_fv(f), which represents the shaker and outputs the force f₀(t), measured as f(t) with noise n_f(t). The second block H_af(f) represents the system which is examined; its response is measured as a(t) with noise n_a(t).
By using equation (3.11), the system of interest H_af is derived from the estimated systems H_av and H_fv.

$$
H_{af}(f) = \frac{H_{av}(f)}{H_{fv}(f)} \quad (3.11)
$$
3.2.3
Time Series Models
3.2.4
Special Types of Random Processes
This section gives a brief description of different model assumptions, where the input signal to a system having p poles and q zeros is white Gaussian noise for identification purposes.
ARMA - Auto Regressive Moving Average

The ARMA model is the general model with both poles and zeros. The coefficient a_0 could be set to one in the equations that follow.
Time domain

$$
a_0 y(n) = b_0 x(n) + b_1 x(n-1) + \ldots + b_q x(n-q) - a_1 y(n-1) - \ldots - a_p y(n-p) = \sum_{k=0}^{q} b_k x(n-k) - \sum_{l=1}^{p} a_l y(n-l) \quad (3.12)
$$

Z domain

$$
H(z) = \frac{B(z)}{A(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{a_0 + \sum_{l=1}^{p} a_l z^{-l}} \quad (3.13)
$$
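Equation (3.12) is exactly the difference equation implemented by `scipy.signal.lfilter`. The sketch below (the coefficients are made up) checks a direct loop implementation of (3.12) against it.

```python
import numpy as np
from scipy import signal

b = [0.5, 0.25, 0.1]         # b_0 .. b_q  (q = 2)
a = [1.0, -0.6, 0.08]        # a_0 .. a_p  (p = 2, a_0 = 1)

rng = np.random.default_rng(3)
x = rng.standard_normal(32)

# Direct evaluation of equation (3.12):
#   y(n) = sum_k b_k x(n-k) - sum_l a_l y(n-l)
y_loop = np.zeros_like(x)
for n in range(x.size):
    acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
    acc -= sum(a[l] * y_loop[n - l] for l in range(1, len(a)) if n - l >= 0)
    y_loop[n] = acc

y_filt = signal.lfilter(b, a, x)   # the same ARMA difference equation
```

Both evaluations produce identical output samples.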
12
CHAPTER 3. SYSTEM IDENTIFICATION AND ANALYSIS
AR - Auto Regressive

The AR model is a special case of the ARMA model where the order of the b-polynomial equals zero.
Time domain

$$
a_0 y(n) = b_0 x(n) - a_1 y(n-1) - \ldots - a_p y(n-p) = x(n) - \sum_{k=1}^{p} a_k y(n-k) \quad (3.14)
$$

Z domain

$$
H(z) = \frac{b_0}{A(z)} = \frac{b_0}{\sum_{k=0}^{p} a_k z^{-k}} \quad (3.15)
$$

where p is the order of the AR model.
MA - Moving Average

The MA model is also a special case of the ARMA model, but where the order of the a-polynomial equals zero instead. In this section a_0 is not written into the equations.
Time domain

$$
y(n) = b_0 x(n) + b_1 x(n-1) + \ldots + b_q x(n-q) = \sum_{k=0}^{q} b_k x(n-k) \quad (3.16)
$$

Z domain

$$
H(z) = B(z) = \sum_{k=0}^{q} b_k z^{-k} \quad (3.17)
$$

where q is the order of the MA model.
ARMAX - Auto Regressive Moving Average with Exogenous input

The ARMAX model is an extended version of the previously mentioned ARMA model, but with an extra, measurable signal x(t). n(t) is white Gaussian noise which has been filtered through an ARMA model. The moving average part of the model uses the unknown sampled noise sequence to reduce the bias error, and the exogenous input consists of past samples of the input time record. A general presentation of the ARMAX model is given in figure 3.2.
Figure 3.2: General description of the Auto Regressive Moving Average model with exogenous input: x(t) passes through B(f)/A(f) and n(t) through C(f)/A(f), and their outputs are summed. x(t) is seen as the exogenous input, n(t) is the modelled noise.
$$
a_0 y(n) = b_0 x(n) + b_1 x(n-1) + \ldots + b_q x(n-q) - a_1 y(n-1) - a_2 y(n-2) - \ldots - a_p y(n-p) + c_0 n(n) + c_1 n(n-1) + \ldots + c_r n(n-r) = \sum_{k=0}^{q} b_k x(n-k) - \sum_{l=1}^{p} a_l y(n-l) + \sum_{m=0}^{r} c_m n(n-m) \quad (3.18)
$$

where r is the order of the c-polynomial.
ARX - Auto Regressive with Exogenous input

The ARX model is a special case of the ARMAX model where the moving average part is not modelled. The truncated model could result in a bias error.
Figure 3.3: General description of the Auto Regressive model with exogenous input: x(t) passes through B(f)/A(f) and n(t) through 1/A(f), and their outputs are summed. x(t) is seen as the exogenous input, n(t) is the modelled noise.
$$
a_0 y(n) = b_0 x(n) + b_1 x(n-1) + \ldots + b_q x(n-q) - a_1 y(n-1) - \ldots - a_p y(n-p) + n(n) = \sum_{k=0}^{q} b_k x(n-k) - \sum_{l=1}^{p} a_l y(n-l) + n(n) \quad (3.19)
$$
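Because equation (3.19) is linear in the unknown coefficients, an ARX model can be identified by ordinary least squares on the measured data. A sketch with a made-up second order system:

```python
import numpy as np
from scipy import signal

b_true = [0.8, 0.4]            # b_0, b_1
a_true = [1.0, -0.5, 0.25]     # a_0, a_1, a_2

rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
y = signal.lfilter(b_true, a_true, x) + 0.01 * rng.standard_normal(5000)

# Regression form of (3.19):
#   y(n) = b0 x(n) + b1 x(n-1) - a1 y(n-1) - a2 y(n-2) + n(n)
Phi = np.column_stack([x[2:], x[1:-1], -y[1:-1], -y[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
b_hat, a_hat = theta[:2], theta[2:]    # estimates of [b0, b1] and [a1, a2]
```

With a low noise level the least squares solution recovers the true coefficients closely.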
3.2.5
Modal Analysis
Modal analysis is defined as the process of characterizing the dynamics of a structure in terms of its modes of vibration [10]. As described in chapter 2.1.1 the eigenvalues and eigenvectors of the mathematical model are also the parameters which defines the resonant frequency and the mode shape of the modes of vibration.
Modal analysis could be performed by analytical finite element modelling, as previously shown. Modal analysis could also be performed by modal testing, also referred to as experimental modal analysis. Experimental modal analysis is performed either to confirm an analytical finite element model or to characterize an unknown structure, e.g. for troubleshooting. The process of characterizing a structure or system is called modal parameter estimation, also referred to as curve fitting. There are many methods for modal analysis, with different advantages and disadvantages; see table 3.1 for the most common algorithms [12].
The two most popular curve fitting approaches [10] either fit to measured Frequency Response Function data using a parametric model of the FRF, which can be written as in equation (3.20),
$$
H(j\omega) = \sum_{r=1}^{N} \left( \frac{R_r}{j\omega - \lambda_r} + \frac{R_r^{*}}{j\omega - \lambda_r^{*}} \right) \quad (3.20)
$$

or fit to measured Impulse Response Function data using a parametric model of the IRF, which can be written as in equation (3.21).
$$
h(t) = \sum_{r=1}^{N} \left( R_r e^{\lambda_r t} + R_r^{*} e^{\lambda_r^{*} t} \right) \quad (3.21)
$$
Most algorithms are variants of these methods, but other methods also exist, e.g. curve fitting based on state-space models.
Algorithm  Full name                           Domain     Polynomial order  Coefficients
CEA        Complex Exponential Algorithm       Time       High              Scalar
LSCE       Least Squares Complex Exponential   Time       High              Scalar
PTD        Polyreference Time Domain           Time       High              Matrix
ITD        Ibrahim Time Domain                 Time       Low               Matrix
MRITD      Multiple Reference ITD              Time       Low               Matrix
ERA        Eigensystem Realization Algorithm   Time       Low               Matrix
PFD        Polyreference Frequency Domain      Frequency  Low               Matrix
SFD        Simultaneous Frequency Domain       Frequency  Low               Matrix
MRFD       MultiReference Frequency Domain     Frequency  Low               Matrix
RFP        Rational Fraction Polynomial        Frequency  High              Both
OP         Orthogonal Polynomial               Frequency  High              Both
CMIF       Complex Mode Indicator Function     Frequency  Zero              Matrix

Table 3.1: A collection of the most commonly used algorithms.
Proposed Procedure for Curve Fitting
The first step is to estimate the poles from the measured data; the poles describe the resonance frequencies and damping values, as in equation (3.22).
$$
\lambda_r = -\zeta_r \omega_r \pm j\omega_r \sqrt{1 - \zeta_r^2} \quad (3.22)
$$
The estimation of the poles can be done with the Least Squares Complex Exponential Algorithm, which is an extension of the Complex Exponential Algorithm. The CEA, also referred to as the Prony algorithm, was discovered in 1795 [10] by R. Prony and is a pole-zero model that minimizes the error e₀(n) = x(n) − h(n), where h(n) is the unit sample response of the system. Prony's method is a very simple method which only requires solving a set of linear equations; the extension LSCEA is known as Shank's method, which differs in the approach of finding the zeros of the system function. The approach is to formulate a moving average parameter estimation problem as a pair of auto regressive parameter estimations using Durbin's method [7]. While Prony's method bases the moving average coefficients on a model error equal to zero for n = 0, 1, 2, . . . , q, see equation (3.23), where q is the order of the numerator and p is the order of the denominator, Shank suggests a least squares minimization of the model error e₀(n) = x(n) − x̂(n) according to figure 3.5.
$$
e(n) = a_p(n) * x(n) - b_q(n) \quad (3.23)
$$
Figure 3.4: Signal model for Prony's method. x(n) = ĥ(n) is the unit sample response of the system, b_q(n) are the system's numerator coefficients and A_p(z) is the denominator. e(n) is the error which is to be minimized.
Figure 3.5: Signal model for Shank's method. x(n) = ĥ(n) is the unit sample response of the system, δ(n) is the impulse function, B_q(z) is the system's numerator and A_p(z) is the denominator. e₀(n) is the error which is to be minimized.

A least squares minimization of Prony's error is given in equation (3.28); the roots of the estimated denominator are the poles.
$$
X_q = \begin{bmatrix}
x(q) & x(q-1) & \cdots & x(q-p+1) \\
x(q+1) & x(q) & \cdots & x(q-p+2) \\
x(q+2) & x(q+1) & \cdots & x(q-p+3) \\
\vdots & \vdots & \ddots & \vdots
\end{bmatrix} \quad (3.24)
$$

$$
\mathbf{x}_{q+1} = \left[ x(q+1),\, x(q+2),\, x(q+3),\, \cdots \right]^T \quad (3.25)
$$

$$
\mathbf{r}_x = X_q^H \mathbf{x}_{q+1} \quad (3.26)
$$

$$
R_{xx} = X_q^H X_q \quad (3.27)
$$

$$
\mathbf{a}_p = -R_{xx}^{-1} \mathbf{r}_x \quad (3.28)
$$
During this procedure, the number of poles has to be determined in order to get the best estimate possible. The number of poles chosen is often larger than the exact number of poles in the examined frequency span. The extra poles are called computational poles and are a result of measurement errors in the FRF. As a tool for choosing the number of poles, a stability diagram is used; such a diagram is shown in figure 3.6. The next step is to calculate the residues. This is often done in the frequency domain using a least squares method, but they could also be calculated in the time domain from the estimated numerator. The estimation of the numerator using Shank's method is presented in equation (3.31).
$$
r_{xg}(k) = \sum_{n=0}^{\infty} x(n)\, g^{*}(n-k) \quad (3.29)
$$

$$
R_{gg}(k,l) = \sum_{n=0}^{\infty} g(n-l)\, g^{*}(n-k) \quad (3.30)
$$

$$
\mathbf{b}_q = -R_{gg}^{-1} \mathbf{r}_{xg} \quad (3.31)
$$
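A sketch of the pole-estimation step, equations (3.24) to (3.28), applied to a synthetic unit sample response with one known conjugate pole pair (all numbers are hypothetical):

```python
import numpy as np

fs = 100.0
lam = -0.5 + 2j * np.pi * 5.0                 # continuous-time pole [rad/s]
z_true = np.exp(lam / fs)                     # corresponding discrete-time pole
n = np.arange(300)
h = (np.exp(lam / fs * n)).real               # h(n) = Re{e^{lam n / fs}}

p, q = 2, 2                                   # two poles (one conjugate pair)
M = 200                                       # number of linear prediction rows

# Rows of X_q (eq. 3.24) predict x(m+1) from the p previous samples
Xq = np.array([[h[m + 1 - k] for k in range(1, p + 1)] for m in range(q, q + M)])
xq1 = np.array([h[m + 1] for m in range(q, q + M)])

# Equations (3.26)-(3.28) are the normal equations of this least squares problem
a_p, *_ = np.linalg.lstsq(Xq, -xq1, rcond=None)
poles = np.roots(np.concatenate(([1.0], a_p)))
```

The continuous-time pole is then recovered as fs·log(pole), from which the resonance frequency and damping follow via equation (3.22).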
Figure 3.6: Stability diagram of an MDOF system where the number of poles has been estimated from 4 to 16 and plotted as red crosses in increasing order; the order of each estimation is presented on the left together with an asterisk.
3.3
Nonlinear System Identification
3.3.1
NARMAX Modelling
NARMAX is an abbreviation for Nonlinear Auto Regressive Moving Average Model with eXogenous inputs.
Exogenous means that the model is excited not only by noise, as in the ARMA, AR and MA cases, but also by a measurable input signal. The nonlinear elements of the NARMAX model consist of polynomial terms of the outputs, inputs and noise, as well as cross terms. The general formulation of a NARMAX model is given in equation (3.32).
$$
a_0 y(n) + \ldots + a_{N_y} y(n - N_y) = f\big(y(n-1), \ldots, y(n-N_y);\; x(n), \ldots, x(n-N_x);\; e(n), \ldots, e(n-N_e)\big) + b_0 x(n) + \ldots + b_{N_x} x(n - N_x) + c_0 e(n) + \ldots + c_{N_e} e(n - N_e) \quad (3.32)
$$
f {. . .} is a nonlinear function of the input, output and error signals respectively.
On Structure Selection
The NARMAX model, as well as other time-based models, has one very important drawback: the structure of the model must either be known in advance or be estimated for a given set of data. The goal of any discrete time model is to use as small a model as possible while still having the capability to describe the system with minimal modelling errors. In this thesis, two methods were used to truncate models in order to obtain optimally small models with minimized modelling errors. The first method was to guess the orders of the model, for example by investigating the frequency response functions; the second method was to use the Bootstrap structure detection algorithm (see appendix B). The more reliable of the two methods was the first one, guessing the model orders. The reason seems to be that the choice of correct initial parameters is crucial for the Bootstrap method to give good performance.
Summary of the NARMAX Model
The NARMAX structure itself has a limitation in describing many physical quantities and scenarios. This originates from the fact that many systems have nonlinearities that operate directly on the system output, which is then fed back to the system input. The NARMAX model has no such correspondence in its structure. This is a problem in the context of this thesis, which aims at physically interpretable parameters, but it does not have to constitute a problem in other applications.
To illustrate this problem, a single degree of freedom system was used, as in equation (3.33); the system is often denoted the Duffing oscillator in the literature.
$$
m\ddot{x}(t) + c\dot{x}(t) + kx(t) + k_n x^3(t) = f(t) \quad (3.33)
$$
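For reference, the Duffing system can be simulated directly. A sketch using the parameter values from figure 3.11; the force amplitude and excitation frequency are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k, kn = 1.0, 10.0, 1e4, 1e10       # parameters as in figure 3.11
F_amp, w = 10.0, 2 * np.pi * 15.0        # hypothetical single-tone excitation

def rhs(t, s):
    # state s = [x, x'] for m x'' + c x' + k x + kn x^3 = f(t)
    x, v = s
    return [v, (F_amp * np.sin(w * t) - c * v - k * x - kn * x ** 3) / m]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0],
                max_step=1e-3, rtol=1e-8, atol=1e-10)
x = sol.y[0]
```

With these values the cubic term is strongly active, so the steady state amplitude is noticeably lower than the linear prediction would suggest.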
The Modified Bootstrap Structure Detection Algorithm proposed a NARMAX structure which did not only consist of the cubic term, but also of lagged input and output terms. By definition, a NARMAX difference equation cannot contain the expression y(n) more than once, and thus the infinite memory system cannot be modelled exactly using the NARMAX model without modelling errors.
One should not dismiss the NARMAX model based on the results of this thesis; there are certainly many other applications where the NARMAX model is more suitable. It is also important to understand that for the NARMAX method to perform well, a suitable choice of parameters is needed. The classical question always exists: do we fit our model to the currently available data, or do we fit our model to the real, physical model? The latter is of course the desired case.
3.3.2
Reverse Path Method
The nonlinear systems discussed in this thesis have mostly been systems with feedback, which, as previously mentioned, gives a system with infinite memory, as shown in figure 3.7.

Figure 3.7: General description of a system with nonlinear feedback, i.e. having a nonlinearity with infinite memory: the output of H(f) is fed back through the nonlinearity G[y(t)] to the input summation with x(t).

The method proposed by Bendat, the so-called direct MISO (Multiple Input Single Output) technique, is based on systems with zero or finite memory. To be able to use Bendat's method on systems with feedback, a reverse operation of the signals has to be applied, as in figure 3.8. This operation makes it possible to regard the system as a zero memory or finite memory system, depending on the feedback.
Figure 3.8: By reversing a system with an infinite memory nonlinearity (here y(t) passes through H⁻¹(f) and the nonlinearity G[y(t)]), a system with a zero memory or finite memory nonlinearity is obtained.
Direct MISO
Systems described as in figure 3.9, i.e. two parallel SISO systems, could be solved with the H₁ estimator. Note that when the system A(f) equals a constant, the nonlinear system is said to be a zero memory system; otherwise it is a finite memory system.
Figure 3.9: General description of a single-input-single-output system with an arbitrary finite or zero memory system given by A(f). The input x(t) passes through H(f) and, via the nonlinearity G[x(t)] producing x₂(t), through A(f); n(t) is additive noise at the output.
To be able to use the estimator for both systems H(f) and A(f), the system model is simplified into the model in figure 3.10. To do this, x₂(t) has to be calculated, which means that G[x(t)] has to be known. To be able to draw any conclusions when the systems A₁(f) and A₂(f) are estimated, some measure of how good these results are is needed. This measure is the ordinary coherence function of each system compared to the multiple coherence function, which shows how much linear dependency each system has to the output signal.
Figure 3.10: Multiple-input-single-output system with correlated inputs x₁(t) and x₂(t); the outputs of A₁(f) and A₂(f) are uncorrelated. n(t) is additive noise at the summed output.
Conclusion
The reverse path method together with the direct MISO technique is a good way to estimate both the linear system and the nonlinear system when the nonlinearity is known, or perhaps assumed to be known. When using the direct MISO technique from the reverse path method, the linear system is of course the inverse of the estimated system. Using the direct MISO method for identifying the nonlinearity can only give a measure of how good different guesses were; by testing different common nonlinearities there is a chance to draw good conclusions about the system model. It is also well suited to identify where nonlinearities are located, by testing all available permutations of the inputs and outputs and comparing the results to each other.
3.3.3
Frequency Domain Structure Selection Algorithm
The Frequency Domain Structure Selection Algorithm (FDSSA) is based on the modified SIMO infinite memory nonlinear system as in equation (3.34).
$$
x_i(t) = \int_{\tau=-\infty}^{+\infty} h_{i,j}(\tau - t) \left\{ f_j(t) - g\big(x_1(t), \ldots, x_I(t)\big) \right\} d\tau \quad (3.34)
$$
Where i is a response index, i ∈ 1, . . . , I, and j is the index of the excitation force node. Finally, g(·) is the nonlinear function of one response or a combination of responses. Note that g(·) is a fixed quantity for a specific system, independent of the indices i and j. Let, for example, the nonlinear function g(·) be a polynomial of order P + 1, as in equation (3.35).
$$
g\big(x_1(t), \ldots, x_I(t)\big) = \sum_{p=1}^{P} \lambda_p g_p(t) \quad (3.35)
$$

$$
g_p(t) = \big(c_{p,1} x_1(t) + \ldots + c_{p,I} x_I(t)\big)^{p+1} \quad (3.36)
$$
Where c_{p,i} are fixed on-off weighting terms for each response signal i and polynomial order p, c_{p,i} ∈ {−1, 0, 1}. The λ_p are unknown weighting terms for the nonlinear polynomial combinations; they are of particular interest and are to be found by the FDSSA method.
The Fourier transform is calculated for the terms in the equation, and a corresponding frequency domain expression is evaluated for frequencies ω ∈ [−π, π], as in equation (3.37).
$$
X_i(\omega) = H_{i,j}(\omega) \left( F_j(\omega) - \sum_{p=1}^{P} \lambda_p G_p(\omega) \right) \quad (3.37)
$$

$$
G_p(\omega) = \mathcal{F}\{g_p(t)\}_{\omega} \quad (3.38)
$$
By multiplying each side by X_i*(ω) and taking the expected value, equation (3.39) is found.
$$
E\{X_i(\omega) X_i^{*}(\omega)\} = H_{i,j}(\omega) \left( E\{F_j(\omega) X_i^{*}(\omega)\} - \sum_{p=1}^{P} \lambda_p E\{G_p(\omega) X_i^{*}(\omega)\} \right) \quad (3.39)
$$
The left side expectation term in equation (3.39) is the definition of the Power Spectral Density (PSD) of the signal x_i(n); the right side expectation terms are the definitions of the cross spectral densities (CSD) between the signals f_j(n), x_i(n) and g_p(n) respectively. Robust estimates of the PSD and CSD functions are applied, and the final expression is evaluated as in equation (3.40).
$$
\hat{P}_{X_i X_i}(\omega_k) = H_{i,j}(\omega_k) \left( \hat{P}_{F_j X_i}(\omega_k) - \sum_{p=1}^{P} \lambda_p \hat{P}_{G_p X_i}(\omega_k) \right) \quad (3.40)
$$
The final expression in equation (3.40) can now easily be arranged with respect to the polynomial terms λ_p to suit an optimization algorithm.
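The rearrangement can be sketched as follows: moving the λ_p terms to one side of equation (3.40) gives, per frequency line, an equation that is linear in the λ_p; stacking all lines yields a least squares problem. Synthetic, self-consistent spectra stand in for the Welch estimates here, so all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
K, P = 256, 2                                   # frequency lines, polynomial terms
lam_true = np.array([2.0, -0.5])                # weights to be recovered

H = 1.0 / (1.0 + 1j * np.linspace(0.0, 10.0, K))            # stand-in FRF
P_GX = rng.standard_normal((K, P)) + 1j * rng.standard_normal((K, P))
P_FX = rng.standard_normal(K) + 1j * rng.standard_normal(K)
P_XX = H * (P_FX - P_GX @ lam_true)             # consistent with equation (3.40)

# Per line: sum_p lam_p H P_GpX = H P_FX - P_XX;
# stack real and imaginary parts so the solved lam_p stay real
A = H[:, None] * P_GX
rhs = H * P_FX - P_XX
A_ri = np.concatenate([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
lam_hat, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
```

With noise-free spectra the least squares solution reproduces the weights exactly; with estimated spectra it becomes the best fit in the least squares sense.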
3.3.4
Finding the Nonlinear Nodes
This section will present a brute force methodology for finding the node or the nodes where the nonlinear functional is mounted.
The Brute Force Methodology
Assume we have a system of N nodes. The brute force methodology uses data from each of the system's response nodes to find the node that houses the nonlinearity. The following list shows the brute force methodology.
1. Make an assumption of where the nonlinearity is mounted, e.g. assume that the nonlinearity operates on the difference x₁(t) − x₂(t), such as g(x₁(t), x₂(t)).
2. Estimate the nonlinearity of the system under the assumption that g(x₁(t), x₂(t)) is valid. Calculate a goodness-of-fit value, for example the ordinary coherence function.
3. Repeat steps 1 and 2 for all possible combinations of nodes x_i(t).
4. For all combinations, find the combination that has the best goodnessoffit value. This combination could then be seen as the node where the nonlinearity is mounted.
The drawback of the brute force methodology is that it does not perform well when the true combination of input signals is not included. Also, questions remain, such as whether another set of input combinations could render better goodness-of-fit values than the true combination; this gives rise to the need for other, more robust methods for finding the nonlinear nodes.
Figure 3.11: A harmonic balance plot for an example SDOF Duffing oscillator system having the modal parameters m = 1 kg, c = 10 kg/s, k = 1·10⁴ N/m and the cubic constant 1·10¹⁰ N/m³. The plot shows the underlying linear system together with the first, second and third sets of HBM solutions. Note the harmonic balance method's ability to find the unstable solutions (green dots). The sampling frequency was set to 400 Hz.
3.4
Nonlinear System Analysis
The analysis methods suggested in this thesis do not always reveal the true type of nonlinearity, but they will give the reader a sense of how the nonlinearities operate in the linear system environment.
3.4.1
Harmonic Balance Method
Introduction
The Harmonic Balance Method, further denoted HBM, is a method used for finding the steady state equilibrium solutions of a dynamical system. The solutions are calculated per frequency and presented as an FRF; see figure 3.11 for an example of how the harmonic balance solutions could be presented.
As mentioned earlier, a nonlinear system can have one or more equilibrium points per frequency; this is easily seen with the HBM. Notably, chaotic behavior such as bifurcations is not considered in the HBM, but it could render errors in the FRFs.
The method, like many others, originates from the definition of linear systems, but can easily be extended to nonlinear systems, as seen in the upcoming sections. The implementation in this thesis works for any infinite memory nonlinearity, as long as it can be formulated as in equation (3.41).
Derivation of the Method
To derive the Harmonic Balance Method, one should expand the Newtonian dynamical system description as in equation (3.41). In the linear case, g{x(t)} is set to 0.

$$
m\ddot{x}(t) + c\dot{x}(t) + kx(t) + g\{x(t)\} = f(t) \quad (3.41)
$$
Given that the responses and excitation forces all have bounded energy, the Fourier transform can be applied to the expression in (3.41), giving equation (3.42).
$$
m\left[ (j\Omega)^2 X(j\Omega) - j\Omega x(0) - \dot{x}(0) \right] + c\left[ (j\Omega) X(j\Omega) - x(0) \right] + kX(j\Omega) + G(j\Omega) = F(j\Omega) \quad (3.42)
$$
Where G(jΩ) = ∫₋∞⁺∞ g{x(t)} e^{−jΩt} dt, i.e. the Fourier transform of the nonlinear functional g{x(t)}, and F(jΩ) is the Fourier transform of the excitation force f(t). x(0) and ẋ(0) are initial conditions of the response signal. Simplifying and rearranging the expression gives equation (3.43).
$$
\left[ -m\Omega^2 + j\Omega c + k \right] X(j\Omega) = F(j\Omega) - G(j\Omega) + (j\Omega m + c)x(0) + m\dot{x}(0) \quad (3.43)
$$
Letting 1/H(jΩ) = −mΩ² + jΩc + k and simplifying gives the final expression in equation (3.44).
$$
X(j\Omega) = H(j\Omega)F(j\Omega) - H(j\Omega)G(j\Omega) + H(j\Omega)\left[ (j\Omega m + c)x(0) + m\dot{x}(0) \right] \quad (3.44)
$$
As mentioned in the introduction, the HBM calculates the steady state equilibrium solutions of a dynamical system at a given frequency. But for nonlinear systems, due to for example nonlinear resonance or frequency modulation, there can be several over- or undertones in a single frequency response. Therefore different versions of the HBM exist: versions which investigate only the ground tone, and versions which investigate the ground tone as well as the over-/undertones. The implementation described in this thesis investigates only the ground tone of the responses, for simplicity, but could be extended for over-/undertone analysis as well. Therefore a simplification has to be made: the time responses of the excitation force and the corresponding responses at a given frequency are assumed to be as in equations (3.45) and (3.46) respectively, having Fourier transforms as in (3.47) and (3.48). Note that for multi-tone analysis, equations (3.46) and (3.48) are expanded to include energy contributions for the over- and/or undertones as well.
$$
f(t) = F_{amp} \sin(\Omega_k t) \quad (3.45)
$$

$$
x_l(t) = X_{amp,l} \sin(\Omega_k t) \quad (3.46)
$$

$$
F(j\Omega) = \frac{\pi F_{amp}}{j} \left\{ \delta(\Omega - \Omega_k) - \delta(\Omega + \Omega_k) \right\} \quad (3.47)
$$

$$
X_l(j\Omega) = \frac{\pi X_{amp,l}}{j} \left\{ \delta(\Omega - \Omega_k) - \delta(\Omega + \Omega_k) \right\} \quad (3.48)
$$

$$
H_l(j\Omega_k) = \frac{X_{amp,l}}{F_{amp}}, \qquad l \in [1, L] \quad (3.49)
$$
Where F_amp ∈ C and X_l ∈ C. Note that X_l is one of L roots (l ∈ [1, L]) such that equation (3.44) is in equilibrium, i.e. in harmonic balance. The harmonic balance basis function should preferably be of sinusoidal form, such as sin(Ω_k t). The solutions for the FRF of the dynamical system to which the HBM was applied can then be expressed, per frequency, as in equation (3.49). L is of order 1 in the linear case and of order L ≥ 1 in the nonlinear case.
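For the Duffing system of figure 3.11, inserting the ground tone ansatz x(t) = A sin(Ω_k t) into the equation of motion and using sin³θ = (3 sin θ − sin 3θ)/4 reduces the harmonic balance condition to a cubic equation in u = A²; its positive real roots are the steady state amplitudes (up to three in the multi-solution region). A sketch, with hypothetical excitation values:

```python
import numpy as np

m, c, k, kn = 1.0, 10.0, 1e4, 1e10       # parameters from figure 3.11

def hbm_amplitudes(F_amp, Omega):
    """Ground-tone HBM amplitudes A solving
    ((k - m Omega^2 + 0.75 kn A^2)^2 + (c Omega)^2) A^2 = F_amp^2."""
    d = k - m * Omega ** 2
    coeffs = [
        (0.75 * kn) ** 2,              # u^3
        2.0 * 0.75 * kn * d,           # u^2
        d ** 2 + (c * Omega) ** 2,     # u
        -F_amp ** 2,                   # constant
    ]
    u = np.roots(coeffs)
    u = u[np.abs(u.imag) < 1e-6 * np.max(np.abs(u))].real
    return np.sqrt(u[u > 0.0])

amps = hbm_amplitudes(10.0, 2 * np.pi * 15.0)
```

For very small force amplitudes the cubic collapses to the linear relation A = F_amp |H(jΩ)|, which provides a convenient sanity check on the implementation.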
3.4.2
Hilbert Transformation Techniques
The Hilbert transform and its properties are commonly used in different applications, and their use has also been proposed in nonlinear vibration analysis [4], [5]. Common areas are envelope calculation and modulation analysis. The Hilbert transform's ability to analyze modulation is one of the features used when diagnosing and characterizing frequency modulation in nonlinear vibrating systems. The goal of the Hilbert transform time domain technique is to detect and classify the change in the natural frequency, classifying with respect to the type of oscillation from the nonlinear system, which can be noticed as a modulation. The Hilbert transform of a signal is defined according to equation (3.50), where H denotes the Hilbert transform and ∗ denotes convolution.

$$
\mathcal{H}[x(t)] = \int_{-\infty}^{\infty} \frac{x(u)}{\pi(t - u)}\, du = x(t) * \frac{1}{\pi t} \quad (3.50)
$$
As can be seen from the definition of the Hilbert transform, the transformation stays in the same domain, that is, the time domain. In other words, the transform is a phase shifted version of the signal, which is easy to see when looking at the Fourier transform F of 1/(πt) in equation (3.50).
$$
\mathcal{F}\left[ \frac{1}{\pi t} \right] = -j\,\mathrm{sgn}(f) = \begin{cases} -j, & f > 0 \\ 0, & f = 0 \\ j, & f < 0 \end{cases} \quad (3.51)
$$
The Hilbert transform weights the signal around t with the kernel in equation (3.52), which makes the transform suitable for analyzing local behavior, in contrast to the Fourier transform, whose basis is an exponential function e^{−jωt} and which has to be applied to the whole time record.

$$
\frac{1}{\pi t} \quad (3.52)
$$
The main properties in the algorithm are the instantaneous amplitude A(t), phase θ(t) and frequency Ω(t). These are all derived from the analytic function z(t), defined by equation (3.53), where x̃(t) = H[x(t)].

$$
z(t) = x(t) + j\tilde{x}(t) \quad (3.53)
$$

Figure 3.12: The analytic function z(t) used in the Hilbert transform time domain algorithm, with x(t) on the real axis, x̃(t) on the imaginary axis and phase θ(t).
Instantaneous phase:

$$
\theta(t) = \mathrm{angle}\{x(t), \tilde{x}(t)\} \quad (3.54)
$$

Instantaneous frequency:

$$
\Omega(t) = \frac{d\theta(t)}{dt} \quad (3.55)
$$

Instantaneous amplitude:

$$
A(t) = \sqrt{x^2(t) + \tilde{x}^2(t)} \quad (3.56)
$$
These tools, which analyze the signal instantaneously, can now be used in the Hilbert transform time domain algorithm. To see the differences between instantaneous and non-instantaneous analysis tools, the instantaneous frequency will be compared with the Fourier transform. The signal should be of a nonstationary type that shows the advantage of the Hilbert transform and the weakness of the Fourier transform. Such a signal is the sinus sweep, which is shown in figure 3.13 and defined by equation (3.59).

$$
x(t) = \sin(2\pi f(t)\, t), \qquad f(t) = 1.25t \quad (3.59)
$$
Figure 3.13: A sinus sweep x(n) as defined in equation (3.59), starting at 0 Hz and sweeping up to 25 Hz over 10 s.
As previously mentioned, the goal is to find oscillations caused by the nonlinear structure near the resonance frequency. It is thereby important to examine only a narrow frequency band around the resonance frequency of interest. The following list shows the steps of the algorithm:
• Filtering
Extracting a narrow frequency band where no other modes are interfering with the mode of interest.
• Hilbert transform
Calculate the analytic function z(t).
• Instantaneous Quantities
Calculate the different instantaneous signals.
• Analyzing
Compare the results with known discrimination diagrams for classifying the nonlinearity.
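The instantaneous-quantity steps above can be sketched with `scipy.signal.hilbert`, which returns the analytic signal z(t) directly. Note that for x(t) = sin(2πf(t)t) with f(t) = 1.25t, the instantaneous frequency is the derivative of the total phase 2π·1.25t², i.e. 2.5t, sweeping from 0 to 25 Hz over 10 s.

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1.25 * t * t)          # sinus sweep, f(t) = 1.25 t

z = hilbert(x)                                # analytic signal z(t) = x(t) + j x~(t)
A = np.abs(z)                                 # instantaneous amplitude, eq. (3.56)
theta = np.unwrap(np.angle(z))                # instantaneous phase, eq. (3.54)
f_inst = np.gradient(theta, t) / (2 * np.pi)  # instantaneous frequency, eq. (3.55)
```

Away from the record edges the instantaneous amplitude stays near one and the instantaneous frequency follows the 2.5t ramp, in contrast to the smeared Fourier spectrum of the same record.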
Figure 3.14: The simulated instantaneous frequency Ω(t)/(2π) of the sinus sweep signal x(t) defined in equation (3.59), plotted together with the ideal frequency vector used when manufacturing the signal.
Figure 3.15: The Fourier transform of the sinus sweep signal x(t) defined in equation (3.59), based on the whole time record. It is apparent that the Fourier transform is not an appropriate analysis tool for nonstationary signals.
3.4.3 Statistical Analysis
Introduction
Statistical analysis is mainly used for indicating whether systems contain nonlinearities. The statistical analysis originates from random theory [8], [1]. One of many important properties of linear systems is the linearity property, which states that a signal passed through a linear system retains its statistical properties, possibly with a change in amplitude, phase or mean; the symmetry properties of the signal must remain unchanged. Thus, by estimating and comparing parameters that measure the symmetry of the probability density functions before and after the system is applied, one can get a hint of whether a system is linear or not.
It is important to see that an infinite memory nonlinear system often has less noticeable nonlinear effects than, for example, a zero memory nonlinear system. This comes from the fact that the infinite memory system's nonlinearity is filtered through the system itself, where the system may damp the feedback nonlinearity signal. The zero memory nonlinear system has no such filtering of the nonlinear function, and the nonlinear signal can therefore be transferred directly to the system's response.
It is recommended that the histograms of excitation signals and responses are investigated at an early stage when starting a new measurement session of a system, to get an overview of the system’s level of linearity.
Many other abnormalities concerning signal quality can be discovered by the histogram investigation as well, such as signal clipping, DC offset problems, undesired signal glitches due to, for example, improper transducer mounting, exterior sinusoidal disturbances, et cetera.
Central Moments
The central moment M_i of a signal is defined as in equation (3.60).

M_i = E[(x − x̄)^i]   (3.60)

The second central moment is also referred to as the sample variance, as M₂ = σ_x².
Skewness
The skewness can be seen as a measure of the extent to which a signal is symmetric around its mean, where symmetric refers to the signal's probability density function. If a signal is symmetric the skewness should be as close as possible to zero. The gaussian distribution is an example of a distribution having skewness equal to zero.
The skewness is defined as the normalized third order central moment, as in equation (3.61).

S_x = M₃/σ_x³   (3.61)
The skewness is a dimensionless parameter.
Kurtosis
Kurtosis, on the other hand, can be seen as a comparative measure of the probability density function's tails with respect to the gaussian distribution's tails. A gaussian distribution has a kurtosis value equal to three.
The kurtosis is defined as the normalized fourth order central moment, as in equation (3.62).

K_x = M₄/σ_x⁴   (3.62)
The kurtosis is a dimensionless parameter.
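The three statistics above follow directly from their definitions; a minimal NumPy sketch (the function names are ours, and the biased 1/N moment estimates are used, matching equation (3.60)):

```python
import numpy as np

def central_moment(x, i):
    """M_i = E[(x - mean(x))^i], equation (3.60)."""
    return np.mean((x - np.mean(x))**i)

def skewness(x):
    """S_x = M_3 / sigma_x^3, equation (3.61); zero for a symmetric pdf."""
    return central_moment(x, 3) / central_moment(x, 2)**1.5

def kurtosis(x):
    """K_x = M_4 / sigma_x^4, equation (3.62); three for a gaussian pdf."""
    return central_moment(x, 4) / central_moment(x, 2)**2
```

Applied to a long gaussian excitation record, `skewness` should return a value near zero and `kurtosis` a value near three, as in table 3.2.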
Example Statistical Analysis
Assume an infinite memory nonlinear system defined by the modal mass m = 1 kg, the damping c = 10 kg/s, the stiffness k = 1·10⁴ N/m and the nonlinear polynomial 1·10⁹x³(t) + 1·10⁶x²(t); let this be the first system.
Now, assume a zero memory nonlinear system having the same modal parameters and nonlinear polynomial as the first system; let this be the second system. Let the excitation force be gaussian distributed noise with zero mean and standard deviation 10 N. Let the sampling frequency be 400 Hz.
The histograms of the excitation force, the first system's nonlinear response, the second system's nonlinear response and their linear responses are presented in figure 3.16. The statistical parameters skewness and kurtosis for the signals are presented in table 3.2.
Figure 3.16: Histograms for a sample system given by the parameters m = 1 kg, c = 10 kg/s, k = 1·10⁴ N/m and the nonlinear polynomial 1·10⁹x³(t) + 1·10⁶x²(t). Plot a) is the histogram of the excitation force, b) is the histogram of the linear subsystem response, c) is the histogram of the first system's nonlinear response, and d) is the histogram of the second system's nonlinear response.
Statistical Parameter   Excitation Force   Nonlinear System 1   Nonlinear System 2   Linear Subsystem
Mean                    0.0285 N           −4.7·10⁻⁶ m          1.7·10⁻⁶ m           2.9·10⁻⁶ m
Skewness                −0.014             −0.038               −0.039               −0.016
Kurtosis                3.03               2.67                 3.83                 3.04
Table 3.2: Statistical analysis of an excitation force, the responses from two nonlinear systems, and the linear response of the systems' linear subsystem. The first nonlinear system and its linear subsystem are given by the parameters m = 1 kg, c = 10 kg/s, k = 1·10⁴ N/m and the nonlinear polynomial 1·10⁹x³(t) + 1·10⁶x²(t). The second nonlinear system has the same linear subsystem as the first, but instead has the nonlinear polynomial 4·10⁻²F³(t) + 4·10⁻⁶F²(t). Note the difference between the two nonlinear responses' parameters and the linear response's parameters.
Chapter 4
System Synthesis
4.1 Introduction
One of the goals of this master thesis was to develop powerful tools for analysis and identification of mechanical systems, as well as tools for dynamical system synthesis. Synthesis means reproducing or predicting the system response given a specific system excitation force, under the assumption that the system is correctly identified and analyzed. Synthesis is thereby very useful for simulating algorithms for system control. It saves a lot of money and time, but the synthesis itself is only useful if it introduces manageable and limited errors in the predicted output; too large errors render the data useless to the end user. Therefore the synthesis tools presented in the following sections aim to minimize such errors. Each of the synthesis tools is explained such that its area of usage is clearly stated.
This chapter includes the following sections:
Linear System Synthesis Some popular methods for linear system synthesis, such as analytical time response synthesis using the Laplace transformation, and synthesis by digital filters.
Nonlinear System Synthesis Some methods for nonlinear system synthesis, such as analytical time response synthesis, synthesis by ordinary differential equation solvers, and synthesis by extended digital filter structures.
Synthesis Quality Assessments How to assess the errors of the synthesis methods, which references one can use, and which quality measures are available.
4.2 Linear System Synthesis

4.2.1 Time Response Synthesis using Laplace Transformation
The most common method for time response synthesis, when analytical excitation forces are available, is the Laplace transformation. This method is not suitable for experimental data. The standard definition of the Laplace transform is given in equation (4.1). It is notable that the Laplace transform is only defined for t ≥ 0.

X(s) = L{x(t)} = ∫₀^∞ x(t)e^{−st} dt   (4.1)
Note that there is also a double sided Laplace transform, which is not regarded in this thesis. Some important basic properties of the Laplace transformation, used when determining the analytical time response of linear dynamical systems, are presented in equations (4.2) to (4.4).
L{x^(n)(t)} = sⁿX(s) − ∑_{k=1}^{n} s^{n−k} x^{(k−1)}(0)   (4.2)

L{ẋ(t)} = sX(s) − x(0)   (4.3)

L{ẍ(t)} = s²X(s) − sx(0) − ẋ(0)   (4.4)
Where x^(i)(0), i ∈ {0, 1, . . .}, are the initial conditions of the response signal.
An intuitive example
Assume a SDOF linear dynamical system, described by the Newtonian equation (4.5).

m ẍ(t) + c ẋ(t) + kx(t) = f(t)   (4.5)

Laplace transformation of the SDOF system description gives equation (4.6).

m(s²X(s) − sx(0) − ẋ(0)) + c(sX(s) − x(0)) + kX(s) = F(s)   (4.6)

Solving for X(s) in equation (4.6) gives equation (4.7).

X(s) = 1/(ms² + cs + k) · (F(s) + mx(0)s + mẋ(0) + cx(0))   (4.7)

Partial fraction expansion gives equation (4.8).

X(s) = ( A/(s − λ) + A*/(s − λ*) ) (F(s) + mx(0)s + mẋ(0) + cx(0))   (4.8)

Inverse Laplace transformation gives equation (4.9).

x(t) = ( Ae^{λt} + A*e^{λ*t} ) ∗ ( f(t) + mx(0)δ(t) + mẋ(0) + cx(0) )   (4.9)
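As a numerical cross-check of the partial fraction route, the sketch below builds the impulse response Ae^{λt} + A*e^{λ*t} (equation (4.9) with zero initial conditions and f(t) = δ(t)) for the sample SDOF system used elsewhere in the thesis, and compares it with the classical damped sine form. This is an illustrative Python sketch, not part of the original text.

```python
import numpy as np

# Sample SDOF parameters (same values as the statistical analysis example)
m, c, k = 1.0, 10.0, 1e4

# Pole lambda of m s^2 + c s + k, and residue A of 1/(m s^2 + c s + k)
lam = (-c + np.sqrt(complex(c*c - 4*m*k))) / (2*m)
A = 1.0 / (m * (lam - np.conj(lam)))

t = np.linspace(0.0, 1.0, 4001)
h = 2.0*np.real(A*np.exp(lam*t))          # A e^{lam t} + A* e^{lam* t}

# Reference: classical damped sine impulse response of the same system
wd = np.sqrt(k/m - (c/(2*m))**2)          # damped natural frequency
h_ref = np.exp(-c/(2*m)*t)*np.sin(wd*t)/(m*wd)
```

The two expressions agree to machine precision, confirming that the residue/pole pair of equation (4.8) carries the full modal information.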
4.2.2 Synthesis by Digital Filters
Introduction
The main topic of synthesis using digital filters is how to transform the continuous time representation of a dynamical system into a discrete time representation. In frequency notation: transformation from the continuous time system description in equation (4.10) to the corresponding sampled time system description in equation (4.11).
H(s) = ∑_{r=1}^{N} ( R_r/(s − λ_r) + R_r*/(s − λ_r*) )   (4.10)

H(z) = ∑_{r=1}^{N} (b_{r,0} + b_{r,1}z⁻¹ + b_{r,2}z⁻²)/(1 + a_{r,1}z⁻¹ + a_{r,2}z⁻²)   (4.11)
Where H(s) is a general Laplace domain dynamical system description, represented as a sum of pairs of modal poles and residues, and H(z) is the corresponding Z-domain representation as a sum of second order filter sections, where b_{k,i} and a_{k,j} are the filter coefficients of each section k.
There are many different methods for performing such a transformation; the bilinear transform and the impulse invariance method are popular examples, often recurring in the literature.
The advantage of those methods is that they are often quite easy to comprehend and use. The main disadvantage of the impulse invariance method is that its gain at low frequencies is not constant for different system resonance frequencies, due, for example, to aliasing effects. The main disadvantage of the bilinear transformation method is its nonlinear frequency translation, which introduces errors in the transformation. Since the Nyquist sampling theorem states that the maximal signal bandwidth has to be less than half the sampling frequency, problems occur with both example methods.
The impulse invariance method uses the unit impulse function for transforming from continuous time to discrete time, i.e. for sampling, thus rendering synthesis errors. This thesis presents two extensions to the impulse invariance method: the step invariance and the ramp invariance method. Both have very good DC gain characteristics, but not as good resonance frequency gain. However, since system information often lies at frequencies lower than half the sampling frequency, the resonance frequency gain does not cause as much of a problem as the DC gain would. Example SDOF systems having different resonance frequencies are presented in figure 4.1. The DC gain problem of the impulse invariance method, and the improvements with the step and ramp invariance methods, are presented in figure 4.2. The resonance frequency gain for the different methods is presented in figure 4.3.

Figure 4.1: Example SDOF systems having different resonance frequencies, ranging from 2% up to 38% of the sampling frequency. The SDOF systems all have a Q-value equal to 10. The sampling frequency was set to 400 Hz.
Figure 4.2: DC gain error characteristics of the impulse invariance, step invariance and ramp invariance methods for the example SDOF systems having different resonance frequencies. Ideal DC gain for the sample SDOF systems is 1. The sampling frequency was set to 400 Hz.
Figure 4.3: Resonance frequency gain error characteristics for the methods impulse invariance, step invariance and ramp invariance for the example SDOF systems having different resonance frequencies. Ideal resonance frequency gain for the sample SDOF systems is 10. The sampling frequency was set to 400 Hz.
Suggested Methodology for Deriving Transformation Methods
The suggested methodology for deriving a transformation method uses a catalyzer function in the transformation steps. The derivation can be generalized into five steps, assuming a general analogue system description H(s) as in equation (4.10).
1. Multiply a Laplace domain catalyzer function C(s) with the analogue system description.

H_C(s) = C(s)H(s)   (4.12)

2. Calculate the continuous time response function by inverse Laplace transforming the catalyzed Laplace domain system.

h_C(t) = L⁻¹{H_C(s)}   (4.13)

3. Apply equidistant sampling to the continuous time response function, assuming the sample interval T = 1/F_S.

ĥ_C(nT) = h_C(t)|_{t=nT}   (4.14)

4. Z-transform the sampled time response function.

H_C(z) = Z{ĥ_C(n)}   (4.15)

5. Compensate for the catalyzer function C(s) by dividing by its corresponding z-domain representation.

H(z) = H_C(z)/C(z)   (4.16)
The methods of impulse invariance, step invariance and ramp invariance transformation are derived in appendix A.
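For the trivial catalyzer C(s) = 1, the five steps reduce to classical impulse invariance: sample h(t) = Re^{λt} + R*e^{λ*t} and Z-transform, giving one real second order section per pole pair. The sketch below carries a single pole/residue pair through that recipe and verifies the result; it is an illustration only, and the SDOF parameter values are assumptions.

```python
import numpy as np

# Sample SDOF system (assumed values) and its pole/residue pair
m, c, k = 1.0, 10.0, 1e4
Fs = 400.0
T = 1.0 / Fs

lam = (-c + np.sqrt(complex(c*c - 4*m*k))) / (2*m)   # pole lambda
R = 1.0 / (m * (lam - np.conj(lam)))                 # residue of 1/(m s^2 + c s + k)

# Steps 2-4 with C(s) = 1: sample e^{lam t}, Z-transform term by term, then
# combine R/(1 - p z^-1) + R*/(1 - p* z^-1) into one real section
p = np.exp(lam * T)
b = np.array([2*R.real, -2*(R*np.conj(p)).real])     # numerator b0, b1
a = np.array([1.0, -2*p.real, abs(p)**2])            # denominator 1, a1, a2

# Impulse response of the section (direct form II recursion)
n = np.arange(100)
h_digital = np.zeros(100)
w1 = w2 = 0.0
for i in n:
    x = 1.0 if i == 0 else 0.0
    w0 = x - a[1]*w1 - a[2]*w2
    h_digital[i] = b[0]*w0 + b[1]*w1
    w1, w2 = w0, w1

h_sampled = 2*np.real(R*np.exp(lam*n*T))             # sampled analogue h(t)
```

The digital section reproduces the sampled analogue impulse response exactly, which is the defining property of the impulse invariance transformation.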
4.3 Nonlinear Systems Synthesis

4.3.1 Analytical Time Response Synthesis
The time response y(t) from a linear system is simply expressed as the convolution between the input x(t) and the system impulse response h(t), as in equation (4.17).

y(t) = ∫_{τ=−∞}^{∞} h(τ)x(t − τ)dτ = h(t) ∗ x(t)   (4.17)
For nonlinear systems a first order convolution is not enough to describe the output signal. But, as shown in [2], nonlinear systems can also be modelled via higher order convolutions. This follows from a sort of superposition stemming from the Volterra series representation of the nonlinear system, according to equation (4.18).
y(t) = y₁(t) + y₂(t) + . . . + y_n(t)   (4.18)
Where y(t) is the response from the system. Depending on the kind of system (zero memory, finite memory or infinite memory) and, in the first two cases, the polynomial order of the nonlinear system, a different number of terms from the Volterra series is needed in order to represent the system. For the general case with infinitely many terms the response looks like equation (4.19).
y(t) = ∫_{τ₁=−∞}^{∞} h₁(τ₁)x₁(t − τ₁)dτ₁
    + ∫_{τ₁=−∞}^{∞} ∫_{τ₂=−∞}^{∞} h₂(τ₁, τ₂)x₁(t − τ₁)x₂(t − τ₂)dτ₁dτ₂
    + ∫_{τ₁=−∞}^{∞} ∫_{τ₂=−∞}^{∞} ∫_{τ₃=−∞}^{∞} h₃(τ₁, τ₂, τ₃)x₁(t − τ₁)x₂(t − τ₂)x₃(t − τ₃)dτ₁dτ₂dτ₃ + . . .   (4.19)
The most basic nonlinear Volterra series are the bilinear system, equation (4.20), and the trilinear system, equation (4.21); see [2]. These systems are of second and third order respectively; examples of such systems are the squarer and the cuber.
∫_{τ₁=−∞}^{∞} ∫_{τ₂=−∞}^{∞} h(τ₁, τ₂)x₁(t − τ₁)x₂(t − τ₂)dτ₁dτ₂   (4.20)

∫_{τ₁=−∞}^{∞} ∫_{τ₂=−∞}^{∞} ∫_{τ₃=−∞}^{∞} h(τ₁, τ₂, τ₃)x₁(t − τ₁)x₂(t − τ₂)x₃(t − τ₃)dτ₁dτ₂dτ₃   (4.21)

Whether a system is of zero memory or finite memory kind is determined by the transfer function h(t).
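In discrete time, a truncated first plus second order Volterra series can be evaluated directly from the definition. The brute force sketch below is an illustration only; the kernels passed in are arbitrary examples, and the response x is reused for all input copies (x₁ = x₂ = x).

```python
import numpy as np

def volterra2(x, h1, h2):
    """Discrete time first plus second order Volterra series:
    y(n) = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2],
    with kernels truncated to M = len(h1) lags (h2 is M x M)."""
    N, M = len(x), len(h1)
    y = np.zeros(N)
    for n in range(N):
        for k1 in range(min(M, n + 1)):
            y[n] += h1[k1]*x[n - k1]
            for k2 in range(min(M, n + 1)):
                y[n] += h2[k1, k2]*x[n - k1]*x[n - k2]
    return y
```

With h1 = [0] and h2 = [[1]] the series collapses to a zero memory squarer, y(n) = x²(n); longer kernels give finite memory bilinear behavior.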
4.3.2 Synthesis by Ordinary Differential Equation Solvers
Ordinary differential equation (ODE) solvers are a group of system solvers that use recursive algorithms, such as Runge-Kutta, for solving differential equations, linear or nonlinear. This section emphasizes the ODE solver versions available in the standard MATLAB¹ 5 software package. For further reading on MATLAB ODE solvers, please refer to [9].
ODE solvers in general are not intended for real time applications, due to their tedious and computationally expensive nature. ODE solvers are thereby not really interesting with regard to the goals of this thesis, but are included for general knowledge in system synthesis.
The MATLAB 5 software package includes ODE solvers with applications as presented in table 4.1.
ODE solver   Application            Method                                        Order
ode23        For nonstiff systems   Runge-Kutta                                   2 and 3
ode45        For nonstiff systems   Runge-Kutta of Dormand-Prince                 4 and 5
ode113       For nonstiff systems   Adams-Bashforth-Moulton predictor-corrector   1 to 13
ode23s       For stiff systems      Runge-Kutta                                   2 and 3
ode15s       For stiff systems      Numerical differentiation formulas            1 to 5

Table 4.1: MATLAB 5 ordinary differential equation solvers with applications, solving method and specific orders.
ODE Solver Methods
The following is a brief overview of the solver methods used by the MATLAB implementations of the ODE solvers.
Runge-Kutta Runge-Kutta methods are referred to as one-step multistage methods. The update algorithm of a fourth order Runge-Kutta method is presented in equations (4.22) to (4.26).
x_{n+1} = x_n + (h/6)(k₁ + 2k₂ + 2k₃ + k₄)   (4.22)

k₁ = f(t_n, y_n)   (4.23)

k₂ = f(t_n + h/2, y_n + (h/2)k₁)   (4.24)

k₃ = f(t_n + h/2, y_n + (h/2)k₂)   (4.25)

k₄ = f(t_n + h, y_n + hk₃)   (4.26)
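The update of equations (4.22) to (4.26) can be sketched in a few lines of Python; here it is applied to the test problem y′ = y, y(0) = 1, whose exact solution at t = 1 is e (the scalar problem and fixed step size are our assumptions for illustration):

```python
def rk4_step(f, t, y, h):
    """One fourth order Runge-Kutta update, equations (4.22)-(4.26)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2)
    k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Integrate y' = y from t = 0 to t = 1 with 100 fixed steps
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```

With the global error of order h⁴, the result agrees with e to roughly eight decimal places at this step size.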
The Runge-Kutta method of Dormand-Prince is an extended version of the Runge-Kutta method, using numerical interpolation for increased performance and minimal error.
Adams-Bashforth-Moulton The Adams-Bashforth-Moulton method is a linear multistep method, defined in standard form as in equation (4.27).

∑_{j=0}^{k} α_j y_{n+j−k+1} = h ∑_{j=0}^{k} β_j f_{n+j−k+1}   (4.27)
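A simple second order instance of this family pairs an explicit Adams-Bashforth predictor with an implicit Adams-Moulton (trapezoidal) corrector. The sketch below illustrates the scheme on y′ = y; it is our illustration of the predictor-corrector idea, not MATLAB's ode113, and the first step is bootstrapped with the exact value for brevity.

```python
import math

def abm2_step(f, t, fprev, y, h):
    """One predictor-corrector step: AB2 predictor, AM (trapezoidal) corrector,
    a simple instance of the linear multistep form in equation (4.27)."""
    fn = f(t, y)
    y_pred = y + h/2*(3*fn - fprev)           # explicit predictor
    y_next = y + h/2*(fn + f(t + h, y_pred))  # corrector evaluated at the prediction
    return y_next, fn

# y' = y, y(0) = 1; step from t = h to t = 1 (y_1 bootstrapped exactly)
h = 0.01
y, fprev, t = math.exp(h), 1.0, h
for _ in range(99):
    y, fprev = abm2_step(lambda t, y: y, t, fprev, y, h)
    t += h
```

Being a second order scheme, it reaches e at t = 1 with an error on the order of h², a few parts in 10⁵ at this step size.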
Numerical differentiation formulas The numerical differentiation formulas are extensions of the backward differentiation formulas and are expressed as in equation (4.28).

∑_{m=1}^{k} (1/m)∇^m y_{n+1} = h f_{n+1} + κγ_k ( y_{n+1} − ∑_{m=0}^{k} ∇^m y_n )   (4.28)
¹ MATLAB is a trademark of The MathWorks, Inc.
ODE Solver Applications
The definition of whether a system is said to be stiff or nonstiff is rather involved, but two of the requirements for stiff systems are that the real parts of the eigenvalues of the jacobian of the system must be negative, and that the stiffness ratio is large. Let us apply the two requirements to a SDOF duffing oscillator as an example system for testing for stiffness or nonstiffness. The SDOF is stated as in equation (4.29), but rewritten into the first order differential equation system functional {u(x, y,t)} as in equation (4.30).
m ẍ(t) + c ẋ(t) + kx(t) + αx³(t) = f(t)   (4.29)

{u(x, y,t)} = { u₁(x, y,t) ; u₂(x, y,t) } = { (1/m)( f(t) − cy(t) − kx(t) − αx³(t) ) ; y(t) }   (4.30)
The jacobian matrix of the first order ODE system is defined as in equation (4.31).

[J] = [ ∂u₁/∂x(t)  ∂u₁/∂y(t) ; ∂u₂/∂x(t)  ∂u₂/∂y(t) ] = [ −(k + 3αx²(t))/m  −c/m ; 0  1 ]   (4.31)
The eigenvalues of the jacobian of the example system are given by equations (4.32) and (4.33).

det( [J] − λ[I] ) = 0   (4.32)

λ₁ = 1,  λ₂ = −(k + 3αx²(t))/m   (4.33)
Under the assumption that any response signal x(t) is real-valued and that the stiffness k, cubic stiffness α and mass m are positive constants, the second eigenvalue is always negative, but the first eigenvalue is positive. The example system is therefore said to be nonstiff.
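The sign pattern of the eigenvalues can be checked numerically from the jacobian of equation (4.31). The parameter values below (borrowed from the cubic stiffness system of chapter 5) and the evaluation point are assumptions for illustration:

```python
import numpy as np

# Duffing SDOF parameters (assumed illustration values)
m, c, k, alpha = 1.0, 10.0, 1e4, 0.8e9

def jacobian(x):
    """Jacobian [J] of equation (4.31), evaluated at response amplitude x."""
    return np.array([[-(k + 3*alpha*x**2)/m, -c/m],
                     [0.0, 1.0]])

eigs = np.linalg.eigvals(jacobian(1e-3))
```

One eigenvalue equals 1 and the other is negative, as in equation (4.33), so the first stiffness requirement fails and the system is classified as nonstiff.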
Figure 4.4: Standard SIMO linear system description.
Figure 4.5: Expansion of the standard SIMO linear system description into SIMO nonlinear system description, having infinite memory.
4.3.3 Synthesis by Extended Digital Filter Structures
When synthesizing the output of a Single Input Multiple Output (SIMO) nonlinear dynamical system using digital filters, this thesis proposes a method that is an expansion of the standard linear SIMO system description. The standard linear SIMO system description in figure 4.4 can be formulated as in equation (4.34).
x_j(n) = −∑_{k=1}^{N} a_k x_j(n − k) + ∑_{m=0}^{M} b_m F_i(n − m)   (4.34)

H(e^{−jω}) = B(e^{−jω})/A(e^{−jω}) = (b₀ + b₁e^{−jω} + . . . + b_M e^{−jωM})/(1 + a₁e^{−jω} + . . . + a_N e^{−jωN})   (4.35)
For an infinite memory nonlinear system having a nonlinear function of the response x_j(n), as presented in figure 4.5, this description is reformulated as in equations (4.36) and (4.37).
x_j(n) = −∑_{k=1}^{N} a_k x_j(n − k) + ∑_{m=0}^{M} b_m ( F_i(n − m) − g(x_j(n − m)) )   (4.36)

b₀g(x_j(n)) + x_j(n) = −∑_{k=1}^{N} a_k x_j(n − k) + b₀F_i(n) + ∑_{m=1}^{M} b_m ( F_i(n − m) − g(x_j(n − m)) )   (4.37)
Solving for x_j(n) at each point in time, n, gives the expected output of the nonlinear dynamical system. The current implementation uses Newton-Raphson iteration for solving for x_j(n); other solution schemes could be used as well.
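A sketch of this synthesis loop, with a Newton-Raphson solve of equation (4.37) at each sample, could look as follows. The coefficient ordering (a[0] = 1), the iteration cap and the convergence tolerance are our assumptions; g is the nonlinear feedback function of figure 4.5 and dg its derivative.

```python
import numpy as np

def nl_filter(F, b, a, g, dg, tol=1e-12):
    """Synthesize the infinite memory nonlinear branch of equation (4.37):
    each sample solves x + b0*g(x) = r(n) by Newton-Raphson, where r(n)
    collects the known past outputs and (force - g(x)) history."""
    N = len(F)
    x = np.zeros(N)
    for n in range(N):
        r = b[0]*F[n]
        for k in range(1, len(a)):          # -sum a_k x_j(n-k)
            if n - k >= 0:
                r -= a[k]*x[n - k]
        for m_ in range(1, len(b)):         # +sum b_m (F_i(n-m) - g(x_j(n-m)))
            if n - m_ >= 0:
                r += b[m_]*(F[n - m_] - g(x[n - m_]))
        xn = r                              # initial guess: ignore g entirely
        for _ in range(50):                 # Newton-Raphson on phi(x) = x + b0*g(x) - r
            phi = xn + b[0]*g(xn) - r
            xn -= phi/(1.0 + b[0]*dg(xn))
            if abs(phi) < tol:
                break
        x[n] = xn
    return x
```

As a sanity check, with b = [1], a = [1] and g(x) = x, the per-sample equation becomes 2x = F(n), so a unit force gives x = 0.5 at every sample.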
4.4 Synthesis Quality Assessments
This thesis has so far only presented different synthesis algorithms; no effort has been put into quality assessment. The following is an introduction to how the quality of system synthesis can be assessed.
The Method
The following is a suggested four-step procedure for calculating synthesis goodness estimators. The suggested methodology assumes that the system is an infinite memory nonlinear system, as presented earlier in figure 4.5; for variable names and declarations, please refer to figure 4.5.
1. Apply the synthesis to the system, i.e. calculate the response x_j(n), given an excitation force F_i(n), using a synthesis algorithm.

2. Calculate the fictive excitation force as it is seen from the linear subsystem, as in equation (4.38).

3. Recalculate a new estimate of the response x_j(n) by filtering the fictive excitation force through the linear subsystem, as in equation (4.39).

4. Finally, express the quality function as the standard deviation of the difference between the synthesized response signal and the estimated response signal, divided by the standard deviation of the synthesized response signal, as presented in equations (4.40) to (4.42).
z_{i,j}(n) = F_i(n) − g(x_j(n))   (4.38)

x̂_j(n) = H{z_{i,j}(n)}   (4.39)

Q_{i,j} = D{x_j(n) − x̂_j(n)} / D{x_j(n)}   (4.40)

D{x(n)} = √( (1/N) ∑_{n=0}^{N−1} (x(n) − x̄)² )   (4.41)

x̄ = (1/N) ∑_{k=0}^{N−1} x(k)   (4.42)
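The quality measure of equations (4.40) to (4.42) is a one-liner once the synthesized and re-estimated responses are available; a minimal sketch (the function name is ours, and NumPy's population standard deviation matches the 1/N form of equation (4.41)):

```python
import numpy as np

def synthesis_quality(x_syn, x_est):
    """Q_{i,j} of equation (4.40): standard deviation of the synthesis error
    divided by the standard deviation of the synthesized response.
    0 means a perfect match; 1 means the error is as large as the signal."""
    return np.std(x_syn - x_est) / np.std(x_syn)
```

Identical signals give Q = 0, while comparing against an all-zero estimate gives Q = 1.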
Chapter 5
Experimental Evaluation of Blackbox Systems
5.1 Background
Four systems of arbitrarily high degrees of freedom, containing unknown nonlinearities at unknown mounting nodes, were presented as a challenge to the master thesis students. The aim was for the students to use the blackbox system identification methods developed in this thesis to identify the unknown systems. The only prior system knowledge was as follows:
• There is only one nonlinearity present in each system.
• The nonlinearity of a system (if any) must be mounted from one of the system nodes to ground.
• A nonlinearity should not have internal memory.
The four systems are presented in table 5.1. Their respective true underlying linear systems are presented in figure 5.1.
It should be emphasized that the third system diverged from the experiment prerequisites in that its nonlinearity did have internal memory. The third system housed a nonlinearity called stick-slip, which is closely related to the hysteresis nonlinearity, i.e. a nonlinearity with internal memory.
System   DOF's   Nonlinearity           Description       Mounted
1        8       0.8·10⁹x₃³(t)          Cubic stiffness   node three to ground
2        8       0                      Linear            −
3        8       unknown                Stick-slip        node five to ground
4        8       5 arctan(4000x₄(t))    Arcus tangens     node four to ground

Table 5.1: The contents of the four challenged blackbox systems.
Figure 5.1: True frequency response functions (force node 1, response node 1) of the underlying linear systems of the four challenged blackbox systems. a) FRF of system 1, b) FRF of system 2, c) FRF of system 3 and d) FRF of system 4.
5.2 The Experiment
The four blackbox systems are to be identified and thoroughly analyzed during this experimental evaluation.
The following questions will be answered for each of the four systems:
1. Is the system linear or nonlinear? To answer this question one can, for example, excite the system with different excitation force amplitudes within the operating range. If the frequency response functions for the different excitation levels do not change, apart from eventual measurement noise, the system can be regarded as linear. In particular, when the system is nonlinear, the antiresonances of the system tend to change.

2. Where is the nonlinearity mounted? Preferably, one can use the brute force method for identifying the system's nonlinear node.

3. Estimate the nonlinearity and the linear subsystem. Estimation uses the suggested methods: linear system identification, the Reverse Path method and the Frequency Domain Structure Selection Algorithm.
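The linearity check of question 1 can be sketched as follows: estimate the FRF with an H1 estimator at two excitation levels and verify that the curves overlay. The simulated sample system and all numeric values below are our assumptions; for a real measurement the two excitation forces would be independent random realizations rather than scaled copies of one record.

```python
import numpy as np
from scipy.signal import csd, lfilter, welch

def frf(F, x, fs):
    """H1 frequency response estimate: cross spectrum over input auto spectrum."""
    f, Pff = welch(F, fs, nperseg=1024)
    _, Pfx = csd(F, x, fs, nperseg=1024)
    return f, Pfx/Pff

rng = np.random.default_rng(0)
fs = 400.0
b, a = [0.1], [1.0, -1.6, 0.81]          # assumed linear sample system

base = rng.standard_normal(2**16)        # one excitation realization
Hs = []
for level in (1.0, 10.0):                # two excitation force levels
    F = level*base
    x = lfilter(b, a, F)                 # linear system response
    f, H = frf(F, x, fs)
    Hs.append(H)
# Linear system: the two FRF estimates overlay; level dependent FRFs
# (moving resonances or antiresonances) would indicate a nonlinearity.
```

For a truly linear system the level dependence cancels exactly in the H1 ratio, which is the property the test in question 1 exploits.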
5.2.1 The First System
Is the system linear or nonlinear?
For testing whether the system is linear or nonlinear, five different force excitation levels were used. The resulting frequency response functions are presented in figure 5.2 and clearly indicate that the system is not linear.
Figure 5.2: Estimated frequency response functions between force node one and response node one for System 1, for five excitation force levels (F_RMS = 0.1, 0.5, 1, 5 and 10 N). The system's antiresonances as well as its resonances change as the excitation force level changes; thus the system is assumed to be nonlinear.
Where is the nonlinearity mounted?
The brute force method applied to the first system results in figure 5.3, which indicates that the nonlinearity is mounted at node three.
Figure 5.3: Estimated nodes where the nonlinearity could be mounted in the first system. Polynomial orders from 1 up to 10 were individually tested. As is evident in the figure, node three holds the nonlinearity.
Estimate the nonlinearity and the linear subsystem
The FDSSA method was used to evaluate which linear subsystem and nonlinearity the first system contains. The resulting nonlinear function is plotted in figure 5.4, in comparison with the true nonlinearity. The resulting linear subsystem estimate is plotted in comparison with the true linear subsystem in figure 5.5.
Figure 5.4: Polynomial functions of the first system's nonlinearity. (Blue) the true nonlinear polynomial, and (Red) the estimated nonlinear polynomial (Q_err = 3.2136%). The estimation error Q_err is defined as the standard deviation of the polynomial fit error divided by the standard deviation of the true polynomial.
Figure 5.5: The linear subsystem estimate of the first system. (Blue) the true linear subsystem, and (Red) the estimated linear subsystem.
5.2.2 The Second System
Is the system linear or nonlinear?
For testing whether the system is linear or nonlinear, five different force excitation levels were used. The resulting frequency response functions are presented in figure 5.6 and indicate that no nonlinearities are detectable in the system for the applied excitation force levels.
Figure 5.6: Estimated frequency response functions between force node one and response node one for System 2, for five excitation force levels (F_RMS = 0.1, 0.5, 1, 5 and 10 N). All FRFs overlay each other, and the system is therefore said to be linear for this specific range of operation.
Where is the nonlinearity mounted?
The first test reveals that there is no nonlinearity in the second system, so the brute force test may seem unnecessary. For clarity, the brute force method is nevertheless evaluated for the second system as well. The results of the brute force method are presented in figure 5.7.
Figure 5.7: Estimated nodes where the nonlinearity could be mounted in the second system. Polynomial orders from 1 up to 10 were individually tested. As is evident in the figure, no node holds any of the tested nonlinearities.
5.2.3 The Third System
Is the system linear or nonlinear?
For testing whether the system is linear or nonlinear, five different force excitation levels were used. The resulting frequency response functions are presented in figure 5.8 and clearly indicate that the system is not linear.
Figure 5.8: Estimated frequency response functions between force node one and response node one for System 3, for five excitation force levels (F_RMS = 0.1, 0.5, 1, 5 and 10 N). The system's antiresonances as well as its resonances change as the excitation force level changes; thus the system is assumed to be nonlinear.
Where is the nonlinearity mounted?
The brute force method applied to the third system results in figure 5.9, which indicates that the nonlinearity is mounted at node five.
Figure 5.9: Estimated nodes where the nonlinearity could be mounted in the third system. Polynomial orders from 1 up to 10 were individually tested. As is evident in the figure, node five holds the nonlinearity.
Estimate the nonlinearity and the linear subsystem
The FDSSA method was used to evaluate which linear subsystem and nonlinearity the third system contains. The resulting nonlinear function is plotted in figure 5.10, in comparison with the true nonlinearity. The resulting linear subsystem estimate is plotted in comparison with the true linear subsystem in figure 5.11.
[Figure: polynomial functions of the nonlinearity in the third system versus x'(t) over -5 to 5; legend: FDSSA estimate.]
Figure 5.10: Polynomial functions of the third system's nonlinearity. (Blue) the true nonlinear polynomial, and (Red) the estimated nonlinear polynomial. The estimation error Q_err is defined as the standard deviation of the polynomial fit error divided by the standard deviation of the true polynomial.
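The error measure Q_err defined in the caption can be computed directly from the two polynomials; a small sketch, where the coefficient vectors are hypothetical examples:

```python
import numpy as np

def q_err(true_poly, est_poly, x):
    """Q_err: standard deviation of the polynomial fit error divided by the
    standard deviation of the true polynomial, expressed in percent."""
    p_true = np.polyval(true_poly, x)   # true nonlinear polynomial evaluated on x
    p_est = np.polyval(est_poly, x)     # estimated polynomial evaluated on x
    return 100.0 * np.std(p_true - p_est) / np.std(p_true)
```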
[Figure: FRF magnitude of the underlying linear system for system three, 0-40 Hz; legend: FRF of true linear subsystem, Estimated FRF of linear subsystem.]
Figure 5.11: The linear subsystem estimate of the third system. (Blue) the true linear subsystem, and (Red) the estimated linear subsystem.
5.2.4
The Fourth System
Is the system linear or nonlinear?
For testing whether the system is linear or nonlinear, five different force excitation levels were used. The resulting frequency response functions are presented in figure 5.12 and clearly indicate that the system is not linear.
[Figure: overlaid FRF magnitude curves of System 4 for five excitation levels, F1,RMS = 0.1 N, F2,RMS = 0.5 N, F3,RMS = 1 N, F4,RMS = 5 N and F5,RMS = 10 N; x-axis: Frequency [Hz], 0-40.]
Figure 5.12: Estimated frequency response functions between force node one and response node one for System 4, using five different excitation force levels. The system's antiresonances as well as its resonances change as the excitation force level changes; thus the system is assumed to be nonlinear.
Where is the nonlinearity mounted?
The brute force method applied to the fourth system results in figure 5.13. It indicates that the nonlinearity is mounted in node four.
[Figure: Nonlinear-Linear Coherence for System 4; bars over nodes 1-8 and polynomial orders k = 1-10 of the nonlinear function x_k.]
Figure 5.13: Estimated nodes where the nonlinearity could be mounted in the fourth system. Polynomial orders from 1 up to 10 were individually tested. As is evident in the figure, node four holds the nonlinearity.
Estimate the nonlinearity and the linear subsystem
The FDSSA method was used for evaluating which linear subsystem and nonlinearity the fourth system contains. The resulting nonlinear function is plotted in figure 5.14 in comparison to the true nonlinearity. The resulting linear subsystem estimate is plotted in comparison to the true linear subsystem in figure 5.15.
[Figure: polynomial functions of the nonlinearity in the fourth system versus x (x10^-3); legend: True polynomial, FDSSA estimate with Q_err = 6.5009 %.]
Figure 5.14: Polynomial functions of the fourth system's nonlinearity. (Blue) the true nonlinear polynomial, and (Red) the estimated nonlinear polynomial. The estimation error Q_err is defined as the standard deviation of the polynomial fit error divided by the standard deviation of the true polynomial.
[Figure: FRF magnitude of the underlying linear system for system four, 0-40 Hz; legend: FRF of true linear subsystem, Estimated FRF of linear subsystem.]
Figure 5.15: The linear subsystem estimate of the fourth system. (Blue) the true linear subsystem, and (Red) the estimated linear subsystem.
5.2.5
The Results
The results of the experimental evaluation of blackbox systems give good system descriptions in three out of four cases. The first, second and fourth systems were identified with acceptable error levels, but the third system could not be estimated correctly at all, since it diverged from the experiment prerequisites.
It is very important to emphasize that the experimental evaluation was conducted without any noise present.
Chapter 6
Experimental Evaluation of TestRig
6.1
Background
After developing different methods to identify nonlinearities, and after analytical evaluation of those methods, some practical tests are in place. The methods have been applied to different known simulated systems, with and without nonlinearities, and with and without noise, with varying results. There has only been one nonlinearity at a time, coupled either from a node towards ground or from node to node. Unknown systems have also been identified, with variable outcomes for the different methods, but the total impression gives quite a good view of the systems' behavior.
6.2
Introduction
To be able to test the different methods on measured data, a known system with a nonlinear property is needed. A common system with a simple nonlinearity, such as a cubic stiffness, could be built according to a certain mechanical model. Besides this, another property of the system is wanted: a first resonance frequency (further referred to as the first mode) that is easily identifiable, meaning sufficiently separated from the next mode. Most important is the nonlinear effect on the first mode.
This kind of system can be found in a report [6] about identification of nonlinear systems, which also includes results from both the analytical model and measurements done on the system. With this report as a foundation for the measurements done on an almost identical model, the aim was to get similar results.
6.3
System Model
6.3.1
Linear Part of System Model
The linear system, with an appropriate first mode around 50 Hz, was constructed from a firmly clamped beam, clamped at one end only, with the dimensions according to figure 6.1.
6.3.2
Nonlinear Part of System Model
To cause a cubic stiffness coupled to the first mode, two thinner beams are mounted under and over the first beam representing the linear system, at the free end of the beam and at right angles to it. The two beams are firmly clamped at both ends and have dimensions which result in resonance frequencies much higher than the first mode, according to figure 6.2. The cubic stiffness arises from the increased tension at large amplitudes and should affect the first mode. Figure 6.3 is a closeup of the cubic stiffness model. The previous linear system's properties have been changed somewhat, but the total system now has the
Figure 6.1: The underlying linear system of the experimental testrig.
Figure 6.2: By adding two parallel thin beams, perpendicular to the linear system beam, a nonlinear function is added to the structure.
wanted properties.
[Figure annotations: Ø 5 mm, 12 mm.]
Figure 6.3: A closeup of the mechanical system that models the nonlinear property of the system used in this experiment setup.
6.3.3
Nonlinear Property Verification
Verification of the system's nonlinear property was made under the assumption of a cubic stiffness with parameters α and β in the model αx + βx³, using a static load test. The test is done by applying different amounts of force, i.e. applying different weights, and measuring the displacement. The results of these measurements can be seen in figure 6.4 as blue dots. The test also gives a hint of what level of force is needed to get enough influence from the nonlinearity. The result from the static load test indicates a nonlinear property, which
[Figure: Static Load Test; force [N] from -100 to 100 versus displacement x [m] from -3x10^-3 to 3x10^-3.]
Figure 6.4: A curve fit from the static load test confirms the presence of at least a cubic stiffness. The equation αx + βx³ is only fitted to an interval where the system acts according to the model. The estimated parameters are α = 12.242 · 10^3 N/m and β = 24.962 · 10^8 N/m^3. The blue dots are the measured results and the red curve is the fitted result.
could be a cubic stiffness. If the cubic stiffness is a correct assumption, the estimated parameters α and β will be 12.242 · 10^3 N/m and 24.962 · 10^8 N/m^3 respectively. The resulting function can be seen as the red curve in figure 6.4. According to the estimated parameters, a force of 50-70 N should be enough to get measured data affected by the nonlinearity, which can be used later on to identify the nonlinearity.
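The static load fit described above is an ordinary least-squares problem in the two unknowns α and β; a sketch, using the estimated values from the text as synthetic ground truth (function name and data are hypothetical):

```python
import numpy as np

def fit_cubic_stiffness(x, f):
    """Least-squares fit of the static load model f = alpha*x + beta*x**3."""
    A = np.column_stack([x, x ** 3])     # regressors: x and x^3, no constant term
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    return coeffs[0], coeffs[1]
```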
6.4
Experimental Setup
According to the report [6] previously mentioned in the introduction, an appropriate node to measure at is the intersection of the first beam and the two thinner beams. This node will be the driving point of the measurements done on the system.
6.4.1
Measurement Equipment
The equipment used to perform this experiment is an analyzer, an amplifier, a shaker, an accelerometer, a force sensor and finally a computer connected to the analyzer with the task of managing the analyzer.
6.4.2
Equipment List
• Analyzer, "SignalCalc Mobilyzer" a portable multichannel Dynamic Signal Analyzer
• Amplifier
• Shaker
• Force Transducer, ICP Sensor, Model PCB 208C01, SN 17157, 116.51 mV/N
• Accelerometer, ICP Sensor, Model PCB 353B03, SN 45862, 9.51 mV/(m/s²)
6.4.3
Work Materials
• An enormous block of solid steel, which the rig is attached to, denoted "ground".
• Three smaller underlying blocks of solid steel, the frame of the rig, attached to the ground.
• Three smaller overlying blocks of solid steel, which together with the underlying blocks give firmly clamped properties to each end of the beams.
• One beam of type A; this beam constitutes the linear model with the mode of interest.
• Two thinner beams of type B, contributing with the nonlinearity property to the system.
• Two smaller blocks, making the two thinner beams firmly clamped.
• 6 screws of type M12x100, to attach the blocks towards ground.
• 6 screws of type M10x70, to attach the blocks towards each other.
• One special threaded screw of type UNF 10-32 with a length of 18 mm, to attach the sensors with.
• Two holders for the sting wire, one attached to the shaker and the other attached to the force transducer.
• Two power cables with laboratory connectors, for the amplifier to drive the shaker.
• A coaxial cable BNC to BNC, from the analyzer to the amplifier.
• Two coaxial cables, 10-32 UNF-2A to BNC, from the sensors to the analyzer.
• One crossed TP cable, for network communication between the analyzer and computer.
• One big solid steel module, for holding the shaker.
[Diagram: Computer - Mobilyzer - Amplifier - Shaker - Force Transducer - Accelerometer.]
Figure 6.5: The measurement setup of the testrig system.
The force transducer is mounted on top of the top beam and the accelerometer is mounted under the bottom beam. The sensors clamp the three beams together with a special threaded screw. The force transducer is physically connected to the shaker through a wire of steel. The shaker is connected to the amplifier with power cables, enabling the amplifier to drive the shaker with the excitation signal and with the correct amount of force. This excitation signal comes from the analyzer output channel through a coaxial cable. The sensors are connected to the first two input channels on the analyzer. The connection between the analyzer and the computer is a network connection with a capacity of 100 Mbit/s; the cable has to be a crossed TP network cable. For managing the analyzer, a program called "Signal Calc Mobilyzer" is installed on the host. This program supplies many tools for controlling excitation signals and for analyzing the measured data. For an overview of the experiment setup, see figure 6.5.
6.5
Choice of Performance and Excitation Signals
The first thing to do when the experimental rig setup is complete is to check that the signals seem to be correct, using random noise at a low excitation level and adjusting the amplifier to a suitable level.
When all settings are checked and adjusted to the correct values, the real measurements can commence. To begin with, noise signals with different excitation levels and different frequency spans [625, 320, 160] Hz are a simple way to get an overview of the behavior of the system. After those measurements, one can decide which span is of interest. Information about this was already available from the report [6], but an unwritten rule is to decide this from experimental measurements. The goal of these noise measurements is to show some influence from the nonlinear property when overlaying the FRFs in the same plot.
Further measurements of interest are the sine sweeps, up and down with different excitation levels, to point out any presence of the cubic stiffness.
After a quick evaluation of the results from the measurements, which lacked indication of the cubic stiffness, the conclusion was to increase the frequency resolution. This resulted in an excitation signal produced in MATLAB, since the program Signal Calc did not supply a high enough resolution.
New random noise signals with different excitation levels were synthesized, and after that even longer signals, now with a resolution that should be more than enough.
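Synthesizing a noise excitation with a prescribed frequency resolution amounts to choosing the record length N = Fs/ΔF. A sketch of how such a signal could be generated (the function name is made up, and the band-limited FFT-synthesis approach is an assumption, not the thesis' exact MATLAB script):

```python
import numpy as np

def noise_excitation(delta_f, fs, f_max, rms, seed=0):
    """Band-limited Gaussian noise whose record length N = fs/delta_f
    yields the requested frequency resolution delta_f."""
    n = int(round(fs / delta_f))                 # record length sets the resolution
    rng = np.random.default_rng(seed)
    spec = np.zeros(n // 2 + 1, dtype=complex)
    k_max = int(f_max / delta_f)                 # highest excited frequency bin
    spec[1:k_max + 1] = rng.standard_normal(k_max) + 1j * rng.standard_normal(k_max)
    sig = np.fft.irfft(spec, n)
    return sig * (rms / np.sqrt(np.mean(sig ** 2)))   # scale to the desired RMS level
```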
6.6
Measurement Settings
6.6.1
Measurement Settings, Collection from Signal Calc when using built-in functions
Measurements 1-5 have been done with 7 different excitation levels, but because of a non-uniform force distribution over time, a specification of the force levels is not presented here.
Meas.  Output      Avgs.  Avg. type  Overlap  Trigger   Freq. Span [Hz]  ΔF [Hz]  Fs [Hz]
1      noise       50     Stable     50 %     Free Run  625              0.635    2604
2      noise       50     Stable     50 %     Free Run  320              0.317    1302
3      noise       50     Stable     50 %     Free Run  160              0.158    651
4      sweep up    10     Stable     0 %      Free Run  160              0.158    651
5      sweep down  10     Stable     0 %      Free Run  160              0.158    651
6.6.2
Measurement Settings, Collection from Signal Calc when using excitation signals produced in MATLAB
Measurement 6 is done with 3 different excitation levels and measurement 9 has been done with 4 different excitation levels. The same reason as for measurements 1-5 also applies here.
Measurement  Excitation  Freq. Span [Hz]  Fs [Hz]  Time span [Sec]
6            noise       160              651      154
7            sweep up    50-70            651      154
8            sweep down  70-50            651      154
9            noise       160              651      300
10           sweep up    50-70            651      900
11           sweep down  70-50            651      900
6.6.3
Channels Settings for measurement 13
Channel Window Coupling EU
1 Hanning ICP 4mA
m/s
2
2 Hanning ICP 4mA N
Other reference
6.6.4
Channel Settings for Measurements 4-11

Channel  Window       Coupling  EU     Other
1        Rectangular  ICP 4 mA  m/s²
2        Rectangular  ICP 4 mA  N      reference
6.7
Results
[Figure: overlaid FRF magnitude curves, 0-600 Hz; excitation levels RMS = 0.0561 N, 0.147 N, 0.298 N, 0.459 N and 0.657 N.]
Figure 6.6: FRFs from the first measurement with the frequency range 0-625 Hz, to get an overview of the system. Five different excitation levels were measured in order to see any abnormalities. The different levels are presented as the RMS of the force signal.
[Figure: overlaid coherence functions, 0-600 Hz; excitation levels RMS = 0.0561 N, 0.147 N, 0.298 N, 0.459 N and 0.657 N.]
Figure 6.7: Coherence function of the first measurement shows the need of a higher excitation level in order to get a valid FRF.
[Figure: overlaid FRF magnitude curves, 0-300 Hz; excitation levels RMS = 0.0718 N, 0.16 N, 0.332 N, 0.531 N and 0.75 N.]
Figure 6.8: FRFs from measurement 2, a smaller frequency span than the previous measurement.
[Figure: overlaid coherence functions, 0-300 Hz; excitation levels RMS = 0.0718 N, 0.16 N, 0.332 N, 0.531 N and 0.75 N.]
Figure 6.9: Coherence function from measurement 2, almost the same result as measurement 1 but with a higher resolution.
[Figure: overlaid FRF magnitude curves, 0-150 Hz; excitation levels RMS = 0.00394 N, 0.241 N, 0.731 N, 1.19 N and 1.66 N.]
Figure 6.10: FRFs from measurement 3, the highest frequency resolution with the program Signal Calc.
[Figure: overlaid coherence functions, 0-150 Hz; excitation levels RMS = 0.00394 N, 0.241 N, 0.731 N, 1.19 N and 1.66 N.]
Figure 6.11: Coherence function from measurement 3; some abnormalities can be seen at approximately 54 Hz, which could be an indication of some nonlinearity or of the shaker's inability to drive the structure.
[Figure: overlaid FRF magnitude curves, 0-150 Hz; excitation levels RMS = 0.702 N, 1.18 N and 1.66 N.]
Figure 6.12: FRFs from frequency sweeps up and down, measurements 4 and 5, with almost the same result as measurement 3. There are three sweeps up and three sweeps down, using three different excitation levels.
[Figure: overlaid coherence functions, 0-150 Hz; excitation levels RMS = 0.702 N, 1.18 N and 1.66 N.]
Figure 6.13: Coherence function from measurement 4 and 5.
[Figure: overlaid FRF magnitude curves, 0-160 Hz; excitation levels RMS = 0.418 N, 0.862 N and 1.37 N.]
Figure 6.14: Three FRFs with white Gaussian noise synthesized in MATLAB as excitation signals.
[Figure: overlaid coherence functions, 0-160 Hz; excitation levels RMS = 0.418 N, 0.862 N and 1.37 N.]
Figure 6.15: Coherence function from measurement 6.
[Figure: two FRF magnitude curves, 50-70 Hz; legend: Sweep up, Sweep down.]
Figure 6.16: Two FRFs from measurement 7 and 8 where the blue plot is the sweep up and the magenta plot is the sweep down. The excitation level has been changed due to properties of the system combined with the shaker.
[Figure: FRF magnitude, 0-160 Hz.]
Figure 6.17: FRF with the highest excitation level with Gaussian noise, which was performed in measurement 9.
[Figure: coherence function, 0-160 Hz.]
Figure 6.18: Coherence function from measurement 9.
[Figure: two FRF magnitude curves, 50-70 Hz; legend: Sweep up, Sweep down.]
Figure 6.19: The highest excitation level with frequency sweep was performed in measurements 10 and 11. The level of the force over time can be seen in the waterfall diagram in figure 6.21.
Figure 6.20: Acceleration signal from measurement 10; the acceleration signal shows a constant amplitude in the first harmonic component during the sweep from 50 to 70 Hz.
Figure 6.21: Force signal from measurement 10; the force excites more than the intended frequency while sweeping from 50 to 70 Hz. The force was supposed to have a constant amplitude, but the shaker was unable to fulfill the desired input.
Figure 6.22: Acceleration signal from measurement 11 shows a constant amplitude in the first harmonic component during the sweep from 70 to 50 Hz.
Figure 6.23: Force signal from measurement 11; the force excites more than the intended frequency while sweeping from 70 to 50 Hz. During the frequency sweep the force amplitude decreases over the first mode.
6.8
Conclusion
The results from the measurements presented in the previous section indicate an undesired force signal rather than the desired nonlinear property. The first problem in the measurements was to keep a constant excitation force level during the frequency sweeps, shown in figures 6.21 and 6.23. The second problem was the excitation of more than one frequency during the sweeps, shown in figures 6.20-6.23. These problems are a result of the weak structure combined with the shaker's inability to drive the desired force signal into the structure. Instead of a voltage amplifier, the shaker should have been driven by a current amplifier, controlled by a feedback system whose task is to ensure that the applied excitation force is identical to the desired force. It is extremely important that the applied signal is the same as the desired one for the system to excite its nonlinearities.
Chapter 7
Summary and Conclusions
7.1
Summary
In this thesis, an overview of the theory of linear as well as nonlinear systems has been presented. The theory has given brief information on both analysis and synthesis of linear and nonlinear systems. This theoretical background has constituted a knowledge base for understanding, implementing and evaluating several parametric and nonparametric methods and algorithms.
In the later part of the thesis, the methods and algorithms have been applied to two experimental cases. Case one consisted of four unknown blackbox systems and the second case consisted of a testrig. The task for the given cases was to identify whether the systems hosted nonlinearities, where the nonlinearities were mounted, and finally to estimate the nonlinearities as well as the linear subsystems, thus giving a total description of the systems. The idea of using the testrig comes from [6], where the testrig is said to contain nonlinear elements. But due to problems in applying the correct level of excitation force at resonance frequencies, the system did not show its nonlinear behavior as expected.
7.2
Conclusions and Further Research
The results from the first experimental case indicate that the frequency-domain methods perform better than time-domain methods, such as the NARMAX method, for the given thesis prerequisites. The prerequisites state that the methods and algorithms in the thesis should be optimized for time efficiency and should be suitable for physically applicable scenarios. The successful methods both perform well in system identification and are computationally efficient.
To succeed in the second case, the setup should be armed with a force control system, such that the desired applied force really is the applied force, even at the system's resonance frequencies.
A summary of the conclusions that can be drawn from this thesis is that it clearly shows that nonlinear system identification and analysis is a huge, yet important, area of research within the field of signal processing. There are many pitfalls and hindrances within the subject of nonlinear systems, and a researcher in the subject must be extremely critical of the material and information that is presented.
Appendix A
Derivations of Impulse, Step, and Ramp Invariance
A.1
Derivation of Impulse Invariance
The method of impulse invariance uses the Dirac delta function (A.1) as catalyzer function.
c(t) = \begin{cases} +\infty, & t = 0 \\ 0, & t \neq 0 \end{cases}, \qquad \int_{0^-}^{+\infty} c(t)\, dt = 1    (A.1)

C(s) = 1    (A.2)

c(nT) = \begin{cases} T, & nT = 0 \\ 0, & nT \neq 0 \end{cases}    (A.3)

C(z) = T    (A.4)
By using the suggested methodology the method of impulse invariance could be formulated as:
H_C(s) = C(s) H(s) = 1 \cdot H(s) = \sum_{r=1}^{N} \left( \frac{R_r}{s - \lambda_r} + \frac{R_r^*}{s - \lambda_r^*} \right)    (A.5)

h_C(t) = \mathcal{L}^{-1}\{H_C(s)\} = \sum_{r=1}^{N} \left( R_r e^{\lambda_r t} + R_r^* e^{\lambda_r^* t} \right)    (A.6)

h_C(nT) = T \, h_C(t)\big|_{t=nT} = T \sum_{r=1}^{N} \left( R_r e^{\lambda_r nT} + R_r^* e^{\lambda_r^* nT} \right)    (A.7)

H_C(z) = \mathcal{Z}\{h_C(nT)\} = T \sum_{r=1}^{N} \left( \frac{R_r z}{z - e^{\lambda_r T}} + \frac{R_r^* z}{z - e^{\lambda_r^* T}} \right)    (A.8)

H(z) = \frac{H_C(z)}{C(z)} = \sum_{r=1}^{N} \left( \frac{R_r}{1 - e^{\lambda_r T} z^{-1}} + \frac{R_r^*}{1 - e^{\lambda_r^* T} z^{-1}} \right) = \sum_{r=1}^{N} \frac{b_{r,0} + b_{r,1} z^{-1} + b_{r,2} z^{-2}}{1 + a_{r,1} z^{-1} + a_{r,2} z^{-2}}    (A.9)
The filter coefficients b_{r,i} and a_{r,j} per second-order section have the following expressions:

b_{r,0} = R_r + R_r^*    (A.10)

b_{r,1} = -\left( R_r e^{\lambda_r^* T} + R_r^* e^{\lambda_r T} \right)    (A.11)

b_{r,2} = 0    (A.12)

a_{r,1} = -\left( e^{\lambda_r T} + e^{\lambda_r^* T} \right)    (A.13)

a_{r,2} = e^{(\lambda_r + \lambda_r^*) T}    (A.14)
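The impulse-invariance coefficients above can be checked numerically: the impulse response of the resulting second-order section should equal R_r e^{λ_r nT} + R_r^* e^{λ_r^* nT}. A sketch with an arbitrary, made-up residue/pole pair:

```python
import cmath

def impulse_invariance_sos(R, lam, T):
    """Second-order-section coefficients for one conjugate residue/pole pair
    (R, lam), following the impulse-invariance expressions above."""
    Rc, lc = R.conjugate(), lam.conjugate()
    e1, e2 = cmath.exp(lam * T), cmath.exp(lc * T)
    b = [(R + Rc).real, -(R * e2 + Rc * e1).real, 0.0]
    a = [1.0, -(e1 + e2).real, (e1 * e2).real]   # e1*e2 = exp((lam + lam*)T), real
    return b, a

def sos_response(b, a, x):
    """Direct-form difference equation y[n] = sum_i b_i x[n-i] - sum_i a_i y[n-i]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, 3) if n - i >= 0)
        y.append(acc)
    return y
```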
A.2
Derivation of Step Invariance
The method of step invariance uses the unit step function as catalyzer.
c(t) = \begin{cases} 1, & t \geq 0 \\ 0, & t < 0 \end{cases}    (A.15)

C(s) = \frac{1}{s}    (A.16)

c(nT) = \begin{cases} T, & nT \geq 0 \\ 0, & nT < 0 \end{cases}    (A.17)

C(z) = T \frac{z}{z - 1}    (A.18)
By using the suggested methodology the method of step invariance could be formulated as
H_C(s) = C(s) H(s) = \frac{H(s)}{s} = \sum_{r=1}^{N} \left( \frac{R_r}{s(s - \lambda_r)} + \frac{R_r^*}{s(s - \lambda_r^*)} \right)    (A.19)
A general inverse Laplace transform of a function H(s)/s is stated by equation (A.20):

\mathcal{L}^{-1}\left\{ \frac{H(s)}{s} \right\} = \int_{0^-}^{t} h(\tau)\, d\tau    (A.20)

h(t) = \mathcal{L}^{-1}\{H(s)\}    (A.21)
Applying the inverse Laplace transform to equation (A.19):
h_C(t) = \sum_{r=1}^{N} \int_{0^-}^{t} \left( R_r e^{\lambda_r \tau} + R_r^* e^{\lambda_r^* \tau} \right) d\tau = \sum_{r=1}^{N} \left[ \frac{R_r}{\lambda_r} e^{\lambda_r \tau} + \frac{R_r^*}{\lambda_r^*} e^{\lambda_r^* \tau} \right]_{\tau=0^-}^{\tau=t} = \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r} \left( e^{\lambda_r t} - 1 \right) + \frac{R_r^*}{\lambda_r^*} \left( e^{\lambda_r^* t} - 1 \right) \right)    (A.22)
Sampling of the continuous time response function.
h_C(nT) = T \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r} \left( e^{\lambda_r nT} - 1 \right) + \frac{R_r^*}{\lambda_r^*} \left( e^{\lambda_r^* nT} - 1 \right) \right)    (A.23)
Ztransformation of the sampled time response function.
H_C(z) = \mathcal{Z}\{h_C(nT)\} = T \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r} \left( \frac{z}{z - e^{\lambda_r T}} - \frac{z}{z - 1} \right) + \frac{R_r^*}{\lambda_r^*} \left( \frac{z}{z - e^{\lambda_r^* T}} - \frac{z}{z - 1} \right) \right)
= T \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r} \left( \frac{1}{1 - e^{\lambda_r T} z^{-1}} - \frac{1}{1 - z^{-1}} \right) + \frac{R_r^*}{\lambda_r^*} \left( \frac{1}{1 - e^{\lambda_r^* T} z^{-1}} - \frac{1}{1 - z^{-1}} \right) \right)    (A.24)
Compensation for the catalyzer function and simplifying the frequency response function.
H(z) = \frac{H_C(z)}{C(z)} = \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r} \left( \frac{1 - z^{-1}}{1 - e^{\lambda_r T} z^{-1}} - 1 \right) + \frac{R_r^*}{\lambda_r^*} \left( \frac{1 - z^{-1}}{1 - e^{\lambda_r^* T} z^{-1}} - 1 \right) \right) = \sum_{r=1}^{N} \frac{b_{r,0} + b_{r,1} z^{-1} + b_{r,2} z^{-2}}{1 + a_{r,1} z^{-1} + a_{r,2} z^{-2}}    (A.25)

The filter coefficients for the second-order sections can be expressed as in the following equations:

b_{r,0} = 0    (A.26)

b_{r,1} = \frac{R_r}{\lambda_r} \left( e^{\lambda_r T} - 1 \right) + \frac{R_r^*}{\lambda_r^*} \left( e^{\lambda_r^* T} - 1 \right)    (A.27)

b_{r,2} = -\left( \frac{R_r}{\lambda_r} \left( e^{\lambda_r T} - 1 \right) e^{\lambda_r^* T} + \frac{R_r^*}{\lambda_r^*} \left( e^{\lambda_r^* T} - 1 \right) e^{\lambda_r T} \right)    (A.28)

a_{r,1} = -\left( e^{\lambda_r T} + e^{\lambda_r^* T} \right)    (A.29)

a_{r,2} = e^{(\lambda_r + \lambda_r^*) T}    (A.30)
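Analogously, the step-invariance coefficients can be verified numerically: driving the second-order section with a unit step should reproduce the sampled continuous step response R_r/λ_r (e^{λ_r nT} − 1) + c.c. A sketch with a made-up residue/pole pair:

```python
import cmath

def step_invariance_sos(R, lam, T):
    """Second-order-section coefficients for one conjugate residue/pole pair
    (R, lam), following the step-invariance expressions above."""
    Rc, lc = R.conjugate(), lam.conjugate()
    e1, e2 = cmath.exp(lam * T), cmath.exp(lc * T)
    g1, g2 = R / lam * (e1 - 1), Rc / lc * (e2 - 1)
    b = [0.0, (g1 + g2).real, -(g1 * e2 + g2 * e1).real]
    a = [1.0, -(e1 + e2).real, (e1 * e2).real]
    return b, a

def sos_response(b, a, x):
    """Direct-form difference equation y[n] = sum_i b_i x[n-i] - sum_i a_i y[n-i]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, 3) if n - i >= 0)
        y.append(acc)
    return y
```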
A.3
Derivation of Ramp Invariance
The method of ramp invariance uses the ramp function as catalyzer, as defined by equation (A.31).
c(t) = \begin{cases} t, & t \geq 0 \\ 0, & t < 0 \end{cases}    (A.31)

C(s) = \frac{1}{s^2}    (A.32)

c(nT) = \begin{cases} nT^2, & nT \geq 0 \\ 0, & nT < 0 \end{cases}    (A.33)

C(z) = T^2 \frac{z}{(z - 1)^2}    (A.34)
By using the suggested methodology the method of ramp invariance could be formulated as:
H_C(s) = C(s) H(s) = \frac{H(s)}{s^2} = \sum_{r=1}^{N} \left( \frac{R_r}{s^2 (s - \lambda_r)} + \frac{R_r^*}{s^2 (s - \lambda_r^*)} \right)    (A.35)
A general inverse Laplace transform of a function H(s)/s^2 is stated by equation (A.36), i.e. convolving the time response function h(t) with a ramp function t:

\mathcal{L}^{-1}\left\{ \frac{H(s)}{s^2} \right\} = \int_{0^-}^{t} (t - \tau) h(\tau)\, d\tau    (A.36)

h(t) = \mathcal{L}^{-1}\{H(s)\}    (A.37)
Applying the inverse Laplace transform to equation (A.35):
h_C(t) = \sum_{r=1}^{N} \int_{0^-}^{t} \left( (t - \tau) R_r e^{\lambda_r \tau} + (t - \tau) R_r^* e^{\lambda_r^* \tau} \right) d\tau
= \sum_{r=1}^{N} \left[ \frac{R_r}{\lambda_r} t e^{\lambda_r \tau} - \frac{R_r}{\lambda_r} \tau e^{\lambda_r \tau} + \frac{R_r}{\lambda_r^2} e^{\lambda_r \tau} + \frac{R_r^*}{\lambda_r^*} t e^{\lambda_r^* \tau} - \frac{R_r^*}{\lambda_r^*} \tau e^{\lambda_r^* \tau} + \frac{R_r^*}{\lambda_r^{*2}} e^{\lambda_r^* \tau} \right]_{\tau=0^-}^{\tau=t}
= \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r^2} \left( e^{\lambda_r t} - \lambda_r t - 1 \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( e^{\lambda_r^* t} - \lambda_r^* t - 1 \right) \right)    (A.38)
Sampling of the analogue time response function.
h_C(nT) = T \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r^2} \left( e^{\lambda_r nT} - \lambda_r nT - 1 \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( e^{\lambda_r^* nT} - \lambda_r^* nT - 1 \right) \right)    (A.39)
Ztransformation of the discrete time response function.
H_C(z) = \mathcal{Z}\{h_C(nT)\} = T \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r^2} \left( \frac{z}{z - e^{\lambda_r T}} - \frac{\lambda_r T z}{(z - 1)^2} - \frac{z}{z - 1} \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( \frac{z}{z - e^{\lambda_r^* T}} - \frac{\lambda_r^* T z}{(z - 1)^2} - \frac{z}{z - 1} \right) \right)
= T \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r^2} \left( \frac{1}{1 - e^{\lambda_r T} z^{-1}} - \frac{\lambda_r T z^{-1}}{(1 - z^{-1})^2} - \frac{1}{1 - z^{-1}} \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( \frac{1}{1 - e^{\lambda_r^* T} z^{-1}} - \frac{\lambda_r^* T z^{-1}}{(1 - z^{-1})^2} - \frac{1}{1 - z^{-1}} \right) \right)    (A.40)
Compensation for the catalyzer function and simplifying the frequency response function.
H(z) = \frac{H_C(z)}{C(z)} = \frac{(1 - z^{-1})^2}{T z^{-1}} \sum_{r=1}^{N} \left( \frac{R_r}{\lambda_r^2} \left( \frac{1}{1 - e^{\lambda_r T} z^{-1}} - \frac{\lambda_r T z^{-1}}{(1 - z^{-1})^2} - \frac{1}{1 - z^{-1}} \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( \frac{1}{1 - e^{\lambda_r^* T} z^{-1}} - \frac{\lambda_r^* T z^{-1}}{(1 - z^{-1})^2} - \frac{1}{1 - z^{-1}} \right) \right)    (A.41)
which can be simplified into a sum of second-order sections:
H(z) = \sum_{r=1}^{N} \frac{b_{r,0} + b_{r,1} z^{-1} + b_{r,2} z^{-2}}{1 + a_{r,1} z^{-1} + a_{r,2} z^{-2}}    (A.42)
where the coefficients b_{r,0}, b_{r,1}, b_{r,2}, a_{r,1} and a_{r,2} for each second-order section can be expressed as:

b_{r,0} = \frac{1}{T} \left( \frac{R_r}{\lambda_r^2} \left( e^{\lambda_r T} - \lambda_r T - 1 \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( e^{\lambda_r^* T} - \lambda_r^* T - 1 \right) \right)    (A.43)

b_{r,1} = \frac{1}{T} \left( \frac{R_r}{\lambda_r^2} \left( 1 + (\lambda_r T - 1) e^{\lambda_r T} - \left( e^{\lambda_r T} - \lambda_r T - 1 \right) e^{\lambda_r^* T} \right) + \frac{R_r^*}{\lambda_r^{*2}} \left( 1 + (\lambda_r^* T - 1) e^{\lambda_r^* T} - \left( e^{\lambda_r^* T} - \lambda_r^* T - 1 \right) e^{\lambda_r T} \right) \right)    (A.44)

b_{r,2} = -\frac{1}{T} \left( \frac{R_r}{\lambda_r^2} \left( 1 + (\lambda_r T - 1) e^{\lambda_r T} \right) e^{\lambda_r^* T} + \frac{R_r^*}{\lambda_r^{*2}} \left( 1 + (\lambda_r^* T - 1) e^{\lambda_r^* T} \right) e^{\lambda_r T} \right)    (A.45)

a_{r,1} = -\left( e^{\lambda_r T} + e^{\lambda_r^* T} \right)    (A.46)

a_{r,2} = e^{(\lambda_r + \lambda_r^*) T}    (A.47)
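The ramp-invariance coefficients can be verified in the same spirit: driving the section with the sampled ramp nT should reproduce the sampled continuous ramp response R_r/λ_r² (e^{λ_r nT} − λ_r nT − 1) + c.c. A sketch with a made-up residue/pole pair, assuming the T-scaling convention used in the derivation above:

```python
import cmath

def ramp_invariance_sos(R, lam, T):
    """Second-order-section coefficients for one conjugate residue/pole pair
    (R, lam), following the ramp-invariance expressions above."""
    Rc, lc = R.conjugate(), lam.conjugate()
    e1, e2 = cmath.exp(lam * T), cmath.exp(lc * T)
    A1, A2 = e1 - lam * T - 1, e2 - lc * T - 1
    B1, B2 = 1 + (lam * T - 1) * e1, 1 + (lc * T - 1) * e2
    g1, g2 = R / (lam * lam * T), Rc / (lc * lc * T)
    b = [(g1 * A1 + g2 * A2).real,
         (g1 * (B1 - A1 * e2) + g2 * (B2 - A2 * e1)).real,
         -(g1 * B1 * e2 + g2 * B2 * e1).real]
    a = [1.0, -(e1 + e2).real, (e1 * e2).real]
    return b, a

def sos_response(b, a, x):
    """Direct-form difference equation y[n] = sum_i b_i x[n-i] - sum_i a_i y[n-i]."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, 3) if n - i >= 0)
        y.append(acc)
    return y
```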
Appendix B
Modified Bootstrap Structure Detection
The MBSD is a recursive algorithm that tries to identify the structure and parameters of a time series model, such as the NARMAX, ARMAX and ARMA structures, for example. The core element of the MBSD is a linear-in-the-parameters regressor, see [3]. The bootstrap method is used in parallel with the linear regressor for identifying false and true parameters in the regression model. False parameters are removed from the linear regressor and the algorithm repeats until convergence, when the true parameters are found.
An overview schematic of the MBSD is presented in figure B.1. The individual blocks of the overview schematic are more thoroughly specified in the following listing.
1.) DBG  Data Block Generator The data block generator takes or reads input signals and divides them into blocks of N data samples.
2.) LR  Linear in the parameters regressor The linear regressor uses the least squares method for fitting optimal regressor coefficients. Derivation of the least squares linear regression model;
\{\hat{x}\} = [\Psi] \{c_{opt}\}    (B.1)

\{e\} = \{x\} - [\Psi] \{c\}    (B.2)

\nabla_{\{c\}} \left( \{e\}^T \{e\} \right) = 2 [\Psi]^T [\Psi] \{c\} - 2 [\Psi]^T \{x\} = 0    (B.3)

\{c_{opt}\} = \left( [\Psi]^T [\Psi] \right)^{-1} [\Psi]^T \{x\}    (B.4)
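Equations (B.1)-(B.4) are the ordinary normal-equations least squares; a minimal sketch (variable names are hypothetical):

```python
import numpy as np

def fit_regressor(Psi, x):
    """Least-squares regressor fit per equations (B.1)-(B.4)."""
    c_opt = np.linalg.solve(Psi.T @ Psi, Psi.T @ x)   # (B.4): normal equations
    x_hat = Psi @ c_opt                               # (B.1): regressed response
    e = x - x_hat                                     # (B.2): error residuals
    return c_opt, x_hat, e
```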
The Ψ matrix is modified in the current implementation with respect to the original implementation of the bootstrap method, as proposed in [3], and only contains the following terms for an assumed
[Figure: block diagram — {F}, {x} → 1.) DBG → 2.) LR → 3.) BS → 4.) SD → {I^(r+1)}, with the indicator vector {I^(r)} fed back to the linear regressor.]
Figure B.1: Overview schematics of the Modified Bootstrap Structure Detection method. The abbreviations of the blocks are: 1.) DBG - Data Block Generator, 2.) LR - Linear Regressor, 3.) BS - Bootstrapper, 4.) SD - Structure Discriminator. The signal wires are: {F} - the excitation force, {x} - the response, N - data block length in samples, {I^(r)} - valid regressor term indicator vector for recursion r, {x̂} - linear-in-the-parameters regressed response, B - number of bootstrap replicas, [C^(r)] - bootstrap replicas of the regressor terms for recursion r, r - the recursion index, r ∈ (0, R). The recursion stops as the method converges: r = R, {I^(R−1)} = {I^(R)}.
polynomial order P;
[\Psi] = \left[ [\Psi_x], [\Psi_F] \right]    (B.5)

[\Psi_x] = \left[ \Delta\{x\}, \ldots, \Delta^{N_x}\{x\}, \cdots, \Delta\{x^P\}, \ldots, \Delta^{N_x}\{x^P\} \right]    (B.6)

[\Psi_F] = \left[ \{F\}, \Delta\{F\}, \ldots, \Delta^{N_F}\{F\}, \cdots, \{F^P\}, \Delta\{F^P\}, \ldots, \Delta^{N_F}\{F^P\} \right]    (B.7)
In the matrices above, raising a vector to the power of p denotes an element-wise operation; for example, if {x}^T = {1, 2, 3} then {x^3}^T = {1, 8, 27}. The Δ is the delay operator, such that Δ^3 x(n) = x(n − 3).
3.) BS - Bootstrapper: The bootstrapper relies on assumptions about the linear regressor error residuals, specifically that they are independent and identically distributed with zero mean; these assumptions hold for the least squares method. For each bootstrap replication, the error residuals of an initial linear regression are resampled with replacement. The resampled error residuals are added to the initially regressed data, and a new set of regressor parameters is calculated. This is repeated B times, rendering B bootstrap replicas of the regressor parameters.
{e^(b)} = RESAMPLE ({e})    (B.8)
{x^(b)} = {x̂} + {e^(b)}    (B.9)
[Ψ^(b)] = [ [Ψ_x^(b)], [Ψ_F^(b)] ]    (B.10)
{c^(b)} = ( [Ψ^(b)]^T [Ψ^(b)] )^{−1} [Ψ^(b)]^T {x^(b)}    (B.11)
[C]_{b,·} = {c^(b)}^T    (B.12)
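The resampling loop can be sketched as follows. This is a simplified version in which the regressor matrix [Ψ] is held fixed across replicas rather than rebuilt from each {x^(b)}; the function name and signature are illustrative assumptions:

```python
import numpy as np

def bootstrap_replicas(Psi, x, B, seed=None):
    """Residual bootstrap of the regressor coefficients.

    Resamples the residuals with replacement, adds them back to
    the fitted response, and refits; one row of C per replica.
    Simplification: Psi is held fixed instead of being rebuilt."""
    rng = np.random.default_rng(seed)
    c0, *_ = np.linalg.lstsq(Psi, x, rcond=None)
    xhat = Psi @ c0                 # initially regressed data
    e = x - xhat                    # residuals: assumed i.i.d., zero mean
    N = len(x)
    C = np.empty((B, Psi.shape[1]))
    for b in range(B):
        e_b = e[rng.integers(0, N, size=N)]   # resample with replacement
        x_b = xhat + e_b                      # perturbed response
        c_b, *_ = np.linalg.lstsq(Psi, x_b, rcond=None)
        C[b, :] = c_b                         # store replica b
    return C

rng = np.random.default_rng(1)
Psi = rng.standard_normal((200, 3))
x = Psi @ np.array([2.0, 0.0, -1.0]) + 0.1 * rng.standard_normal(200)
C = bootstrap_replicas(Psi, x, B=500, seed=2)
print(C.shape)  # (500, 3)
```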
4.) SD - Structure Discriminator: The structure discriminator takes a matrix of B bootstrap replicas of the regressor parameters. It estimates the probability density function of each regressor parameter and establishes a confidence interval for each one. If the bounds of the confidence interval for a regressor parameter surround zero, that parameter is regarded as false and is removed from the regression model by updating the valid regressor term indicator vector. If zero is not within the bounds of the confidence interval, the parameter is regarded as true and remains in the regression model.
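A minimal sketch of the discrimination step, using percentile confidence intervals over the bootstrap replicas. The 95 % level and the function name are assumptions for illustration, not specified by the thesis text:

```python
import numpy as np

def discriminate(C, alpha=0.05):
    """Keep a regressor term only if the (1 - alpha) percentile
    confidence interval of its bootstrap replicas excludes zero.
    C has one row per replica, one column per regressor term.
    Returns a boolean indicator vector {I}."""
    lo = np.percentile(C, 100 * alpha / 2, axis=0)
    hi = np.percentile(C, 100 * (1 - alpha / 2), axis=0)
    # A term is "true" when zero lies outside [lo, hi].
    return (lo > 0) | (hi < 0)

# Two clearly nonzero terms and one centred on zero:
rng = np.random.default_rng(0)
C = np.column_stack([
    2.0 + 0.1 * rng.standard_normal(500),
    0.0 + 0.1 * rng.standard_normal(500),
    -1.0 + 0.1 * rng.standard_normal(500),
])
print(discriminate(C))  # [ True False  True]
```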
Convergence Rate
The convergence rate of the MBSD algorithm depends to a high degree on the number of bootstrap replicas, the data block size, the input/output signal memory, and the degree of the polynomial of the nonlinearity.
Bibliography
[1] Saven EduTech AB. Noise and Vibration Analysis III. Täby, Sweden, 2002.
[2] Julius S. Bendat. Nonlinear System Analysis and Identification from Random Data. John Wiley and Sons, Inc., 1990.
[3] Esfandiar Shafai, Mikael Bianchi, and Hans Peter Geering. DetectNARMAX: A graphical user interface for structure detection of NARMAX models using the bootstrap method. SYSID 2003, 13th IFAC Symposium on System Identification (13), August 2003.
[4] Michael Feldman. Nonlinear system vibration analysis using Hilbert transform - II. Forced vibration analysis method 'FORCEVIB'. Mechanical Systems and Signal Processing, 8(3):309–318, 3 May 1994.
[5] Michael Feldman. Nonlinear free vibration identification via the Hilbert transform. Journal of Sound and Vibration, 208(3):475–489, 11 July 1997.
[6] Janito Vaquerio Ferreira. Dynamic Response Analysis of Structures with Nonlinear Components. PhD thesis, University of London, May.
[7] Monson H. Hayes. Statistical Digital Signal Processing and Modeling. Wiley, 1996.
[8] Peyton Z. Peebles, Jr. Probability, Random Variables, and Random Signal Principles. McGraw-Hill Higher Education, 4th edition, 2001.
[9] R. Ashino, M. Nagase, and R. Vaillancourt. Behind and beyond the MATLAB ODE suite. Computers and Mathematics with Applications, 40(4–5):491–512, 2000.
[10] Mark Richardson and Brian Schwarz. Modal parameter estimation from operating data. Sound and Vibration, January 2003.
[11] Structural Dynamics Research Lab, University of Cincinnati. 20263781: Nonlinear vibrations. Course literature, 27 March 2000.
[12] Structural Dynamics Research Laboratory, University of Cincinnati. Modal parameter estimation. Course literature, 5 February 2002.