Automotive Sensor Fusion for Situation Awareness
main: 2009-10-21 11:26 — i(1)
Linköping studies in science and technology. Thesis.
No. 1422
Automotive Sensor Fusion for
Situation Awareness
Christian Lundquist
Division of Automatic Control
Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden
http://www.control.isy.liu.se
[email protected]
Linköping 2009
This is a Swedish Licentiate’s Thesis.
Swedish postgraduate education leads to a Doctor’s degree and/or a Licentiate’s degree.
A Doctor’s Degree comprises 240 ECTS credits (4 years of full-time studies).
A Licentiate’s degree comprises 120 ECTS credits,
of which at least 60 ECTS credits constitute a Licentiate’s thesis.
[email protected]
www.control.isy.liu.se
Department of Electrical Engineering
Linköping University
SE-581 83 Linköping
Sweden
ISBN 978-91-7393-492-3
ISSN 0280-7971
LiU-TEK-LIC-2009:30
© 2009 Christian Lundquist

Printed by LiU-Tryck, Linköping, Sweden 2009
To my family
Abstract
The use of radar and camera for situation awareness is gaining popularity in automotive
safety applications. In this thesis, situation awareness consists of accurate estimates of the
ego vehicle’s motion, the positions of the other vehicles and the road geometry. By fusing
information from different types of sensors, such as radar, camera and inertial sensor, the
accuracy and robustness of those estimates can be increased.
Sensor fusion is the process of using information from several different sensors to
compute an estimate of the state of a dynamic system that is in some sense better than
it would be if the sensors were used individually. Furthermore, the resulting estimate is
in some cases only obtainable through the use of data from different types of sensors. A
systematic approach to handle sensor fusion problems is provided by model based state
estimation theory. The systems discussed in this thesis are primarily dynamic and they are
modeled using state space models. A measurement model is used to describe the relation
between the state variables and the measurements from the different sensors. Within the
state estimation framework a process model is used to describe how the state variables
propagate in time. These two models are of major importance for the resulting state
estimate and are therefore given much attention in this thesis. One example of a process
model is the single track vehicle model, which is used to model the ego vehicle’s motion.
In this thesis it is shown how the estimate of the road geometry obtained directly from the
camera information can be improved by fusing it with the estimates of the other vehicles’
positions on the road and the estimate of the radius of the ego vehicle’s currently driven
path.
The positions of stationary objects, such as guardrails, lampposts and delineators, are
measured by the radar. These measurements can be used to estimate the border of the
road. Three conceptually different methods to represent and derive the road borders are
presented in this thesis. Occupancy grid mapping discretizes the map surrounding the
ego vehicle, and the probability of occupancy is estimated for each grid cell. The second
method applies a constrained quadratic program in order to estimate the road borders,
which are represented by two polynomials. The third method associates the radar measurements with extended stationary objects and tracks them as extended targets.
The approaches presented in this thesis have all been evaluated on real data from both
freeways and rural roads in Sweden.
Populärvetenskaplig sammanfattning (Popular Science Summary)

The use of radar and camera to create good situation awareness is growing in popularity
in automotive safety applications. In this thesis, situation awareness comprises accurate
estimates of the ego vehicle’s motion, the positions of the other vehicles and the geometry
of the road. By fusing information from several types of sensors, such as radar, camera
and inertial sensors, the accuracy and robustness of these estimates can be increased.

Sensor fusion is a process in which the information from several different sensors is
used to compute an estimate of a system’s state, which can in some sense be considered
better than if the sensors were used individually. Moreover, the resulting state estimate
can in some cases only be obtained by using data from different sensors. A systematic
way to treat the sensor fusion problem is provided by model based state estimation
methods. The systems discussed in this thesis are mainly dynamic and are modeled with
state space models. A measurement model is used to describe the relation between the
state variables and the measurements from the different sensors. Within the state
estimation framework, a process model is used to describe how a state variable propagates
in time. These two models are of great importance for the resulting state estimate and are
therefore given much attention in this thesis. One example of a process model is the so
called single track vehicle model, which is used to estimate the ego vehicle’s motion. This
thesis shows how the estimate of the road geometry, obtained from the camera, can be
improved by fusing that information with the estimates of the other vehicles’ positions on
the road and the estimate of the ego vehicle’s currently driven radius.

Stationary objects, such as guardrails and lampposts, are measured with the radar.
These measurements can be used to estimate the edges of the road. Three conceptually
different methods to represent and compute the road edges are presented in this thesis.
Occupancy grid mapping discretizes the map surrounding the ego vehicle, and the
probability that a map cell is occupied is estimated. The second method applies a
constrained quadratic program to estimate the road edges, which are represented in the
form of two polynomials. The third method associates the radar measurements with
extended stationary objects and tracks them as extended targets.

The approaches presented in this thesis are all evaluated on measurement data from
Swedish freeways and rural roads.
Acknowledgments
First of all I would like to thank my supervisor Professor Fredrik Gustafsson for guidance and inspiring discussions during my research projects and the writing of this thesis.
Especially, I want to acknowledge all the good and thrilling ideas popping up during our
discussions. I would also like to thank my co-supervisor Dr. Thomas Schön for introducing me to the world of academic research and teaching me all those important details, for
example how to write a good, exciting and understandable paper.
I am very grateful to Professor Lennart Ljung for giving me the opportunity to join the
Automatic Control group and for creating an inspiring, friendly and professional atmosphere. This atmosphere is maintained by all great colleagues, and I would like to thank
you all for being good friends.
This work was supported by the SEnsor Fusion for Safety (SEFS) project within the
Intelligent Vehicle Safety Systems (IVSS) program. I would like to thank Lars Danielsson at Volvo Car Corporation and Fredrik Sandblom at Volvo 3P for the recent useful and
interesting discussions at Chalmers. I hope that we will have the possibility to cooperate
even after the end of the project. Dr. Andreas Eidehall at Volvo Car Corporation helped
me a lot with the measurements and fusion framework at the beginning of my research,
which I thankfully acknowledge. I would also like to thank Andreas Andersson at Nira
Dynamics for fruitful discussions on the German Autobahn and for providing measurement data.
A special thanks to Dr. Umut Orguner who helped me with the target tracking theory
and took the time to explain all the things I didn’t understand. This thesis has been proofread
by Karl Granström and Umut Orguner. Your help has improved the quality of this thesis
substantially. I acknowledge Ulla Salaneck’s help when it comes to practical and administrative stuff. Gustaf Hendeby and Henrik Tidefelt helped me with my LaTeX issues. Thank
you all!
From 2004 to 2007 I worked at ZF Lenksysteme GmbH on the development of Active
Front Steering. I appreciate the encouragement I got from my colleague Dr. Wolfgang
Reinelt during this time. With him I wrote my first papers, and he also helped me to
establish contact with Professor Lennart Ljung. My former boss Gerd Reimann
introduced me to the beautiful world of vehicle dynamics and taught me the importance
of performing good experiments and collecting real data.
Finally, I would like to thank my parents and my sister for their never-ending support
in all that I have undertaken in life so far.
Linköping, October 2009
Christian Lundquist
Contents

1 Introduction 1
  1.1 Sensor Fusion 1
  1.2 Automotive Sensor Fusion 2
  1.3 Sensor Fusion for Safety 4
  1.4 Components of the Sensor Fusion Framework 5
  1.5 Contributions 8
  1.6 Outline 8
      1.6.1 Outline of Part I 8
      1.6.2 Outline of Part II 8
      1.6.3 Related Publications 10

I Background Theory and Applications 13

2 Models of Dynamic Systems 15
  2.1 Discretizing Continuous-Time Models 16
  2.2 Special Cases of the State Space Model 17
      2.2.1 Linear State Space Model 18
      2.2.2 State Space Model with Additive Noise 19
  2.3 Ego Vehicle Model 20
      2.3.1 Notation 20
      2.3.2 Tire Model 22
      2.3.3 Single Track Model 23
      2.3.4 Single Track Model with Road Interaction 26
  2.4 Road Model 28
  2.5 Target Model 32

3 Estimation Theory 35
  3.1 Static Estimation Theory 36
      3.1.1 Least Squares Estimator 37
      3.1.2 Recursive Least Squares 39
      3.1.3 Probabilistic Point Estimates 40
  3.2 Filter Theory 40
      3.2.1 The Linear Kalman Filter 41
      3.2.2 The Extended Kalman Filter 42
      3.2.3 The Unscented Kalman Filter 43

4 The Sensor Fusion Framework 49
  4.1 Experimental Setup 49
  4.2 Target Tracking 51
      4.2.1 Data Association 52
      4.2.2 Extended Object Tracking 53
  4.3 Estimating the Free Space using Radar 56
      4.3.1 Occupancy Grid Map 56
      4.3.2 Comparison of Free Space Estimation Approaches 59

5 Concluding Remarks 63
  5.1 Conclusion 63
  5.2 Future Research 64

Bibliography 67

II Publications 77

A Joint Ego-Motion and Road Geometry Estimation 79
  1 Introduction 81
  2 Sensor Fusion 83
  3 Dynamic Models 85
      3.1 Geometry and Notation 85
      3.2 Ego Vehicle 86
      3.3 Road Geometry 88
      3.4 Leading Vehicles 92
      3.5 Summarizing the Dynamic Model 93
  4 Measurement Model 94
  5 Experiments and Results 96
      5.1 Parameter Estimation and Filter Tuning 96
      5.2 Validation Using Ego Vehicle Signals 97
      5.3 Road Curvature Estimation 98
  6 Conclusions 102
  References 102

B Recursive Identification of Cornering Stiffness Parameters for an Enhanced
  Single Track Model 107
  1 Introduction 109
  2 Longitudinal and Pitch Dynamics 110
      2.1 Modeling 111
      2.2 Identification 113
  3 Lateral and Yaw Dynamics 115
  4 Recursive Identification 117
      4.1 Regression Model 117
      4.2 Constrained Recursive Least Squares 119
  5 Experiments and Results 119
  6 Conclusion 120
  References 122

C Estimation of the Free Space in Front of a Moving Vehicle 125
  1 Introduction 127
  2 Related Work 129
  3 Problem Formulation 131
  4 Road Border Model 133
      4.1 Predictor 133
      4.2 Constraining the Predictor 137
      4.3 Outlier Rejection 138
      4.4 Computational Time 138
  5 Calculating the Free Space 141
      5.1 Border Line Validity 141
  6 Conclusions and Future Work 142
  7 Acknowledgement 142
  References 144

D Tracking Stationary Extended Objects for Road Mapping using Radar
  Measurements 147
  1 Introduction 149
  2 Geometry and Notation 151
  3 Extended Object Model 152
      3.1 Process Model of the Stationary Objects 152
      3.2 Measurement Model 153
  4 Data Association and Gating 154
  5 Handling Tracks 156
      5.1 Initiating Lines 156
      5.2 Remove Lines or Points 157
  6 Experiments and Results 157
  7 Conclusion 160
  References 160
1 Introduction
This thesis is concerned with the problem of estimating the motion of a vehicle and the
characteristics of its surroundings, i.e. with improving the situation awareness. More
specifically, the description of the ego vehicle’s surroundings consists of the other vehicles and
stationary objects as well as the geometry of the road. The signals from several different
sensors, including camera, radar and inertial sensors, must be combined and analyzed to
compute estimates of various quantities and to detect and classify many objects simultaneously. Sensor fusion allows the system to obtain information that is better than what would be
obtained from the individual sensors alone.
Situation awareness is the perception of environmental features, the comprehension
of their meaning and the prediction of their status in the near future. It involves being
aware of what is happening in and around the vehicle to understand how the subsystems
impact on each other.
Sensor fusion is introduced in Section 1.1 and its application within the automotive
community is briefly discussed in Section 1.2. The study presented in this thesis was
accomplished in a Swedish research project, briefly described in Section 1.3. The sensor
fusion framework and its components, such as infrastructure, estimation algorithms and
various mathematical models, are all introduced in Section 1.4. Finally, the chapter is
concluded with a statement of the contributions in Section 1.5, and the outline of this
thesis in Section 1.6.
1.1 Sensor Fusion
Sensor fusion is the process of using information from several different sensors to compute an estimate of the state of a dynamic system. The resulting estimate is in some sense
better than it would be if the sensors were used individually. The term better can in this
case mean more accurate, more reliable, more available and of higher safety integrity.
Furthermore, the resulting estimate may in some cases only be possible to obtain by using
data from different types of sensors.

Figure 1.1: The main components of the sensor fusion framework are shown in the
middle box. The framework receives measurements from several sensors, fuses them
and produces one state estimate, which can be used by several applications.

Figure 1.1 illustrates the basic concept of the sensor
fusion framework. Many systems have traditionally been stand-alone systems, with one
or several sensors transmitting information to a single application. Using a sensor fusion
approach, it might be possible to remove one sensor and still perform the same tasks, or
to add new applications without the need to add new sensors.

Sensor fusion is thus motivated by the need to reduce cost, system complexity and the
number of components involved, and to increase the accuracy and confidence of sensing.
1.2 Automotive Sensor Fusion
Within the automotive industry there is currently a huge interest in active safety systems.
External sensors are increasingly important; typical examples used in this work are
radar sensors and camera systems. Today, a sensor is usually connected to a single function. However, all active safety functions need information about the state of the ego
vehicle and its surroundings, such as the lane geometry and the positions of other vehicles.
The use of signal processing and sensor fusion to replace redundant and costly sensors
with software has recently attracted attention in IEEE Signal Processing Magazine (Gustafsson, 2009).
The sensors in a modern passenger car can be divided into a number of subgroups:
there are internal sensors measuring the motion of the vehicle, external sensors measuring
the objects surrounding the vehicle, and sensors communicating with other vehicles and with the infrastructure. The communication between sensors, fusion framework,
actuators and controllers is made possible by the controller area network (CAN). It is a
serial bus communication protocol developed by Bosch in the early 1980s and presented
by Kiencke et al. (1986) at the SAE international congress in Detroit. An overview of
the CAN bus, which has become the de facto standard for automotive communication, is
given in Johansson et al. (2005).
Internal sensors are often referred to as proprioceptive sensors in the literature. Typical
examples are gyrometers, primarily measuring the yaw rate about the vehicle’s vertical
axis, and accelerometers, measuring the longitudinal and lateral acceleration of the vehicle.
The velocity of the vehicle is measured using inductive wheel speed sensors, and the
steering wheel position is measured using an angle sensor. External sensors are referred
to as exteroceptive sensors in the literature; typical examples are radar (RAdio Detection
And Ranging), lidar (LIght Detection And Ranging) and cameras.

An example of how a radar and a camera may be mounted in a passenger car is
illustrated in Figure 1.2. These two sensors complement each other very well, since the
advantage of the radar is the disadvantage of the camera and vice versa. A summary of
the two sensors’ properties is presented in Table 1.1 and in, e.g., Jansson (2005).

Figure 1.2: Figure (a) shows the camera in the vehicle, and Figure (b) the front
looking radar. Note that this is not serial production mounting. Courtesy of Volvo
Car Corporation.
As already mentioned, the topic of this thesis is how to estimate the state variables
describing the ego vehicle’s motion and the characteristics of its surroundings. The ego
vehicle is one subsystem, labeled E in this work. The use of data from the vehicle’s
actuators, e.g. the transmission and steering wheel, to estimate a change in position over
time is referred to as odometry.

Table 1.1: Properties of radar and camera for object detection

                        Camera                         Radar
  Detects               other vehicles, lane           other vehicles,
                        markings, pedestrians          stationary objects
  Classifies objects    yes                            no
  Azimuth angle         high accuracy                  medium accuracy
  Range                 low accuracy                   very high accuracy
  Range rate            not measured                   very high accuracy
  Field of view         wide                           narrow
  Weather conditions    sensitive to bad visibility    less sensitive

The ego vehicle’s surroundings consist of other vehicles,
referred to as targets T , and stationary objects as well as the shape and the geometry of
the road R. Mapping is the problem of integrating the information obtained by the sensors into a given representation, see Adams et al. (2007) for a recent overview and Thrun
(2002) for a survey. The main focus of this thesis is the ego vehicle E (odometry) and
the road geometry R, which includes stationary objects along the road (mapping). Simultaneous localization and mapping (SLAM) is an approach used by autonomous vehicles
to build a map while at the same time keeping track of their current locations, see e.g.
Durrant-Whyte and Bailey (2006), Bailey and Durrant-Whyte (2006). This approach is
not treated in this thesis.
1.3 Sensor Fusion for Safety
The work in this thesis has been performed within the research project Sensor Fusion
for Safety (SEFS), which is funded by the Swedish Intelligent Vehicle Safety Systems
(IVSS) program. The project is a collaboration between Volvo Technology, Volvo Cars,
Volvo Trucks, Mecel, Chalmers University of Technology and Linköping University.
The overall objective of this project is to obtain sensor fusion competence for automotive safety applications in Sweden by doing research within relevant areas. This goal
is achieved by developing a sensor fusion platform, algorithms, modeling tools and a simulation platform. More specifically, the aim is to develop general methods and algorithms
for sensor fusion systems utilizing information from all available sensors in a modern
passenger car. The sensor fusion will provide a refined description of the vehicle’s environment that can be used by a number of different safety functions. The integration of the
data flow requires new specifications with respect to sensor signals, hardware, processing,
architectures and reliability.

The SEFS work scope is divided into a number of work packages. At the top level,
these include the fusion structure, key scenarios and the development of requirement methods.
The next level consists of work packages such as pre-processing and modeling, the
implementation of a fusion platform and research on fusion algorithms, into which
this thesis can be classified. The use-case work package consists of the implementation of
software and the design of prototypes and demonstrators. Finally, there is an evaluation and
validation work package.
During the runtime of the SEFS project, i.e. from 2005 until today, two PhD theses
(Schön, 2006, Gunnarsson, 2007) and two licentiate theses (Bengtsson, 2008, Danielsson,
2008) have been produced. An overview of the main results in the project is given in
Ahrholdt et al. (2009) and the sensor fusion framework is well described in Bengtsson
and Danielsson (2008). Furthermore it is worth mentioning some of the publications
produced by the project partners. Motion models for tracked vehicles are covered in
Svensson and Gunnarsson (2006), Gunnarsson et al. (2006). A better sensor model of
the tracked vehicle is presented in Gunnarsson et al. (2007). Detection of lane departures
and lane changes of leading vehicles are studied in Schön et al. (2006), with the goal
to increase the accuracy of the road geometry estimate. Computational complexity for
systems obtaining data from sensors with different sampling rates and different noise
distributions is studied in Schön et al. (2007).
1.4 Components of the Sensor Fusion Framework
A systematic approach to handle sensor fusion problems is provided by nonlinear state estimation theory. Estimation problems are handled using discrete-time model based methods. The systems discussed in this thesis are primarily dynamic and they are modeled
using stochastic difference equations. More specifically, the systems are modeled using
the discrete-time nonlinear state space model
    x_{t+1} = f_t(x_t, u_t, w_t, θ),    (1.1a)
    y_t = h_t(x_t, u_t, e_t, θ),        (1.1b)
where (1.1a) describes the evolution of the state variable x over time and (1.1b) explains
how the state variable x relates to the measurement y. The state vector at time t is denoted by x_t ∈ R^{n_x}, with elements x_1, . . . , x_{n_x} being real numbers. Sensor observations
collected at time t are denoted by y_t ∈ R^{n_y}, with elements y_1, . . . , y_{n_y} being real numbers. The model f_t in (1.1a) is referred to as the process model, the system model, the
dynamic model or the motion model, and it describes how the state propagates in time.
The model ht in (1.1b) is referred to as the measurement model or sensor model and it
describes how the state is propagated into the measurement space. The random vector wt
describes the process noise, which models the fact that the actual state dynamics is usually
unknown. The random vector et describes the sensor noise. Furthermore, ut denotes the
deterministic input signals and θ denotes the possibly unknown parameter vector of the
model.
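To make the structure of (1.1) concrete, the following sketch instantiates a process model f and a measurement model h and propagates one time step. The concrete models are hypothetical stand-ins chosen for this illustration only (a constant-velocity process and a range/azimuth sensor); the models actually used in the thesis are derived in Chapter 2.

```python
import numpy as np

# Sketch of the discrete-time nonlinear state space model (1.1):
#   x_{t+1} = f_t(x_t, u_t, w_t, theta)   (process model, 1.1a)
#   y_t     = h_t(x_t, u_t, e_t, theta)   (measurement model, 1.1b)

def f(x, u, w, theta):
    """Process model: constant-velocity motion, x = [px, py, vx, vy]."""
    T = theta["T"]  # sample time
    F = np.array([[1.0, 0.0, T,   0.0],
                  [0.0, 1.0, 0.0, T],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ x + w

def h(x, u, e, theta):
    """Measurement model: range and azimuth angle to the position in x."""
    rng = np.hypot(x[0], x[1])
    az = np.arctan2(x[1], x[0])
    return np.array([rng, az]) + e

gen = np.random.default_rng(0)
theta = {"T": 0.1}
x = np.array([10.0, 5.0, 1.0, 0.0])      # current state x_t
w = gen.normal(0.0, 0.01, size=4)        # process noise w_t
e = gen.normal(0.0, 0.01, size=2)        # measurement noise e_t
x_next = f(x, None, w, theta)            # propagate the state in time
y = h(x_next, None, e, theta)            # map the state into measurement space
```

The deterministic input u_t happens to be unused by these particular models and is therefore passed as None.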
The ego vehicle constitutes an important dynamic system in this thesis. The yaw and
lateral dynamics are modeled using the so called single track model. This model will be
used as an example throughout the thesis. Some of the variables and parameters in the
model are introduced in Example 1.1.
Example 1.1: Single Track Ego Vehicle Model
A so called bicycle model is obtained if the wheels at the front and the rear axle of a
passenger car are modeled as single wheels. This type of model is also referred to as
single track model and a schematic drawing is given in Figure 1.3. Some examples of
typical variables and parameters are:
State variables x: the yaw rate ψ̇_E and the body side slip angle β, i.e.

    x = [ψ̇_E  β]^T.    (1.2)

Measurements y: the yaw rate ψ̇_E and the lateral acceleration a_y, i.e.

    y = [ψ̇_E  a_y]^T,    (1.3)

which both are measured by an inertial measurement unit (IMU).

Input signals u: the steering wheel angle δ_s, which is measured with an angular sensor
at the steering column, the longitudinal acceleration v̇_x, which is measured by the
IMU, and the vehicle velocity v_x, which is measured at the wheels, i.e.

    u = [δ_s  v̇_x  v_x]^T.    (1.4)
Figure 1.3: Illustration of the geometry for the single track model, describing the
motion of the ego vehicle. The ego vehicle velocity vector vx is defined from the
center of gravity (CoG) and its angle to the longitudinal axis of the vehicle is denoted
by β, referred to as the body side slip angle. Furthermore, the slip angles are referred
to as αf and αr . The front wheel angle is denoted by δf and the current driven radius
is denoted by ρ.
Parameters θ: the vehicle mass m, which is weighed before the tests, the steering ratio
i_s between the steering wheel angle and the front wheel angle, which has to be
estimated in advance, and the tire parameter C_α, which is estimated on-line, since the
parameter value changes due to different road and weather conditions.

The nonlinear models f and h are derived in Section 2.3.
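To give a feel for what a single track model looks like in practice, here is a sketch of the classical linear single track (bicycle) equations with state x = [ψ̇_E, β] and the front wheel angle as input, simulated at constant velocity. All parameter values below are hypothetical round numbers, and this linearized form is a textbook simplification; the thesis derives the nonlinear model in Section 2.3 and estimates the tire stiffness on-line.

```python
import numpy as np

# Classical linear single track model, state x = [yaw_rate, beta],
# input delta_f (front wheel angle), at constant longitudinal velocity v.
m = 1500.0            # vehicle mass [kg] (hypothetical)
Iz = 2500.0           # yaw moment of inertia [kg m^2] (hypothetical)
lf, lr = 1.2, 1.5     # CoG to front/rear axle [m] (hypothetical)
Cf, Cr = 80e3, 80e3   # front/rear cornering stiffness [N/rad] (hypothetical)

def single_track_step(x, delta_f, v, T=0.01):
    """One forward-Euler step of the linear single track dynamics."""
    yaw_rate, beta = x
    # body side slip dynamics
    beta_dot = (-(Cf + Cr) / (m * v) * beta
                + ((Cr * lr - Cf * lf) / (m * v**2) - 1.0) * yaw_rate
                + Cf / (m * v) * delta_f)
    # yaw dynamics
    yaw_acc = ((Cr * lr - Cf * lf) / Iz * beta
               - (Cf * lf**2 + Cr * lr**2) / (Iz * v) * yaw_rate
               + Cf * lf / Iz * delta_f)
    return np.array([yaw_rate + T * yaw_acc, beta + T * beta_dot])

# Constant steering at 20 m/s: the yaw rate settles to a steady-state value.
x = np.zeros(2)
for _ in range(1000):
    x = single_track_step(x, delta_f=0.02, v=20.0)
```

With constant steering the simulated yaw rate approaches the steady-state value v·δ_f/(L + K_us·v²) known from the linear bicycle model, which gives a quick sanity check of the implementation.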
The model (1.1) must describe the essential properties of the system, but it must also
be simple enough to be used efficiently within a state estimation algorithm. The model
parameters θ are estimated using techniques from the system identification community. The
main topic of Chapter 2 is the derivation of the model equations through physical relations
and general assumptions. Chapter 3 describes algorithms that are used to compute
estimates of the state x_t and the parameters θ in (1.1).

Before describing the individual steps of the sensor fusion framework, another important example is presented in Example 1.2.
Example 1.2: Object Tracking
Other objects, such as vehicles or stationary objects on and along the road, are tracked
using measurements from a radar mounted in the ego vehicle. A simple model for one
such tracked object is given by using the following variables:
State variables x: the Cartesian position of tracked target i = 1, . . . , N_x in a world fixed
coordinate frame W, i.e. x_i = [x^W  y^W]^T.
Measurements y: Range and azimuth angle to objects m = 1, . . . , Ny measured by the radar in the ego vehicle fixed coordinate frame E, i.e. ym = [d^E  δ]^T.

At every time step t, Ny observations are obtained by the radar. Hence, the radar delivers Ny range and azimuth measurements in a multi-sensor set Y = {y1, . . . , yNy} to the
sensor fusion framework. The sensor fusion framework currently also tracks Nx targets.
The multi-target state is given by the set X = {x1 , . . . , xNx } where x1 , . . . , xNx are the
individual states.
Obviously, the total number of state variables in the present example is 2Nx and the
total number of measurements is 2Ny . This may be compared to Example 1.1,
where the size of the y-vector corresponds to the total number of measurements at time t.
Typically, the radar also observes false detections, referred to as clutter, or receives several
measurements from the same target, i.e. Ny is seldom equal to Nx for radar sensors.
The different steps of a typical sensor fusion algorithm, as the central part of the larger
framework, are shown in Figure 1.4. The algorithm is initiated using a prior guess of the
state x0 or, if it is not the first iteration, the state estimate x̂t−1|t−1 from the previous time
step t − 1 is used. New measurements Yt are collected from the sensors and preprocessed
at time t. Model (1.1) is used to predict the state estimate x̂t|t−1 and the measurement
ŷt|t−1 . For Example 1.2 it is necessary to associate the radar observations Yt with the
predicted measurements Ŷt|t−1 of the existing state estimates and to manage the tracks,
i.e. initiate new states and remove old, invalid states. The data association and track
management are further discussed in Section 4.2. Returning to Example 1.1, data association and track management are obviously not needed, since there the data association is assumed fixed. Finally, the new measurement yt is used to improve the
state estimate x̂t|t at time t in the so called measurement update step. The prediction
and measurement update are described in Section 3.2. This algorithm is iterated, x̂t|t is
used to predict x̂t+1|t , new measurements Yt+1 are collected at time t + 1 and so on.
The state estimation theory, as part of the sensor fusion framework, is discussed further
in Chapter 3.
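The predict–update cycle described above can be sketched with a linear Kalman filter for a single tracked state (preprocessing, data association and track management are omitted here). The constant-velocity model, noise covariances and measurement values below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Illustrative linear models: x_{t+1} = F x_t + w_t, y_t = H x_t + e_t
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # assumed constant-velocity process model
H = np.array([[1.0, 0.0]])          # position-only measurement model
Q = 0.01 * np.eye(2)                # process noise covariance
R = np.array([[0.25]])              # measurement noise covariance

x = np.zeros(2)                     # prior state estimate x_{0|0}
P = np.eye(2)                       # prior covariance P_{0|0}

def predict(x, P):
    """Time update: compute x_{t|t-1} and P_{t|t-1} from the process model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, y):
    """Measurement update: fuse the new measurement y_t into x_{t|t}."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (y - H @ x)                 # corrected state estimate
    P = (np.eye(2) - K @ H) @ P             # corrected covariance
    return x, P

# Iterate: predict with the model, then update with each new measurement
for y in (np.array([0.1]), np.array([0.22]), np.array([0.31])):
    x, P = predict(x, P)      # x_{t|t-1}
    x, P = update(x, P, y)    # x_{t|t}
```

Inserting a data association step between the prediction and the update, for each of several tracked states, gives the structure of the full algorithm.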
Figure 1.4: The new measurements Yt contain new information and are associated to the predicted states X̂t|t−1 and thereafter used to update them to obtain the improved state estimates X̂t|t.
1.5 Contributions
The main contributions of this thesis are briefly summarized and presented below:
• A method to improve the road curvature estimate, using information from the image
processing, the motion of the ego vehicle and the position of the other vehicles on
the road is presented in Paper A. Furthermore, a new process model for the road is
presented.
• An approach to estimate the tire road interaction is presented in Paper B. The load
transfer between the front and rear axles is considered when recursively estimating
the stiffness parameters of the tires.
• Two different methods to estimate the road edges and stationary objects along the
road are presented in Papers C and D. The methods are compared to the standard
occupancy grid mapping technique, which is presented in Section 4.3.1.
1.6 Outline
There are two parts in this thesis. The objective of the first part is to give a unified
overview of the research reported in this thesis. This is accomplished by explaining how
the different publications in Part II relate to each other and to the existing theory.
1.6.1 Outline of Part I
The main components of a sensor fusion framework are depicted in Figure 1.1. Part I aims at giving a general description of the individual components of this framework. Chapter 2 is concerned with the inner part of the model based estimation process, i.e. the process model and the measurement model, illustrated by the two white rectangles in Figure 1.1. The estimation process, illustrated by the gray rectangle, is outlined in Chapter 3. In Chapter 4 some examples including the sensors to the left in Figure 1.1 and the tracking or fusion management, illustrated by the black rectangle, are described. Chapters 2 and 3 emphasize the theory and the background of the mathematical relations used in Part II. Finally, the work is summarized and the next steps for future work are given in Chapter 5.
1.6.2 Outline of Part II
Part II consists of a collection of edited papers, introduced below. Besides a short summary of each paper, a paragraph briefly explaining the background and the contribution is
provided. The background is concerned with how the research came about, whereas the
contribution part states the contribution of the present author.
Paper A: Joint Ego-Motion and Road Geometry Estimation
Lundquist, C. and Schön, T. B. (2008a). Joint ego-motion and road geometry
estimation. Submitted to Information Fusion.
Summary: We provide a sensor fusion framework for solving the problem of joint ego-motion and road geometry estimation. More specifically, we employ a sensor fusion
framework to make systematic use of the measurements from a forward looking radar and
camera, steering wheel angle sensor, wheel speed sensors and inertial sensors to compute
good estimates of the road geometry and the motion of the ego vehicle on this road. In
order to solve this problem we derive dynamical models for the ego vehicle, the road and
the leading vehicles. The main difference to existing approaches is that we make use of
a new dynamic model for the road. An extended Kalman filter is used to fuse data and to
filter measurements from the camera in order to improve the road geometry estimate. The
proposed solution has been tested and compared to existing algorithms for this problem,
using measurements from authentic traffic environments on public roads in Sweden. The
results clearly indicate that the proposed method provides better estimates.
Background and contribution: The topic had already been studied in the automatic
control group in Linköping by Dr. Thomas B. Schön and Dr. Andreas Eidehall, see e.g.,
Eidehall et al. (2007), Schön et al. (2006), where a simplified vehicle model was used. The
aim of this work was to study if the results could be improved by using a more complex
vehicle model, i.e. the single track model, which in addition includes the side slip of the
vehicle. The author of this thesis contributed with the idea that the single track model
could be used to describe the current driven curvature instead of using a road model based
on road construction standards.
Paper B: Recursive Identification of Cornering Stiffness
Parameters for an Enhanced Single Track Model
Lundquist, C. and Schön, T. B. (2009b). Recursive identification of cornering stiffness parameters for an enhanced single track model. In Proceedings
of the 15th IFAC Symposium on System Identification, pages 1726–1731,
Saint-Malo, France.
Summary: The current development of safety systems within the automotive industry
heavily relies on the ability to perceive the environment. This is accomplished by using measurements from several different sensors within a sensor fusion framework. One
important part of any system of this kind is an accurate model describing the motion of
the vehicle. The most commonly used model for the lateral dynamics is the single track
model, which includes the so called cornering stiffness parameters. These parameters describe the tire-road contact and are unknown and even time-varying. Hence, in order to
fully make use of the single track model, these parameters have to be identified. The aim
of this work is to provide a method for recursive identification of the cornering stiffness
parameters to be used on-line while driving.
Background and contribution: The tire parameters are included in the single track
model, which is used to describe the ego vehicle’s motion in all papers in this thesis.
This work started as a project in a graduate course in system identification held by Professor Lennart Ljung. The idea to use RLS to estimate the parameters was formulated during
discussion between the two authors of this paper. Andreas Andersson at Nira Dynamics
and the author of this thesis collected the measurement data during a trip to Germany.
Paper C: Estimation of the Free Space in Front of a Moving
Vehicle
Lundquist, C. and Schön, T. B. (2009a). Estimation of the free space in front
of a moving vehicle. In Proceedings of the SAE World Congress, SAE paper
2009-01-1288, Detroit, MI, USA.
Summary: There are more and more systems emerging that make use of measurements from a forward looking radar and a forward looking camera. It is by now well known how to exploit this data in order to compute estimates of the road geometry, track leading vehicles, etc. However, there is valuable information present in the radar concerning
stationary objects, that is typically not used. The present work shows how radar measurements of stationary objects can be used to obtain a reliable estimate of the free space in
front of a moving vehicle. The approach has been evaluated on real data from highways
and rural roads in Sweden.
Background and contribution: This work started as a project in a graduate course on
convex optimization held by Professor Anders Hansson, who also proposed the idea of
using the arctan-function in the predictor. Dr. Thomas Schön established the contact with
Dr. Adrian Wills at the University of Newcastle, Australia, whose toolbox was used to
efficiently solve the least squares problem.
Paper D: Tracking Stationary Extended Objects for Road Mapping
using Radar Measurements
Lundquist, C., Orguner, U., and Schön, T. B. (2009). Tracking stationary
extended objects for road mapping using radar measurements. In Proceedings
of the IEEE Intelligent Vehicles Symposium, pages 405–410, Xi’an, China.
Summary: It is getting more common that premium cars are equipped with a forward
looking radar and a forward looking camera. The data is often used to estimate the road geometry, track leading vehicles, etc. However, there is valuable information present
in the radar concerning stationary objects, that is typically not used. The present work
shows how stationary objects, such as guardrails, can be modeled and tracked as extended
objects using radar measurements. The problem is cast within a standard sensor fusion
framework utilizing the Kalman filter. The approach has been evaluated on real data from
highways and rural roads in Sweden.
Background and contribution: The author of this thesis came up with the ideas presented in this paper as he was writing Paper C. Dr. Umut Orguner contributed with his
knowledge in the area of target tracking to the realization of the ideas.
1.6.3 Related Publications
Publications of related interest, but not included in this thesis:
Ahrholdt, M., Bengtsson, F., Danielsson, L., and Lundquist, C. (2009). SEFS
– results on sensor data fusion system development. In 16th World Congress
of ITS, Stockholm, Sweden
Reinelt, W. and Lundquist, C. (2006a). Controllability of active steering system hazards: From standards to driving tests. In Pimintel, J. R., editor, Safety
Critical Automotive Systems, ISBN 13: 978-0-7680-1243-9, pages 173–178.
SAE International, 400 Commonwealth Drive, Warrendale, PA, USA,
Malinen, S., Lundquist, C., and Reinelt, W. (2006). Fault detection of a steering wheel sensor signal in an active front steering system. In Preprints of the
IFAC Symposium on SAFEPROCESS, pages 547–552, Beijing, China,
Reinelt, W. and Lundquist, C. (2006b). Mechatronische Lenksysteme: Modellbildung und Funktionalität des Active Front Steering. In Isermann, R., editor, Fahrdynamik Regelung - Modellbildung, Fahrassistenzsysteme, Mechatronik, ISBN 3-8348-0109-7, pages 213–236. Vieweg Verlag,
Lundquist, C. and Reinelt, W. (2006a). Back driving assistant for passenger
cars with trailer. In Proceedings of the SAE World Congress, SAE paper
2006-01-0940, Detroit, MI, USA,
Lundquist, C. and Reinelt, W. (2006b). Rückwärtsfahrassistent für PKW mit
Aktive Front Steering. In Proceedings of the AUTOREG (Steuerung und
Regelung von Fahrzeugen und Motoren, VDI Bericht 1931, pages 45–54,
Wiesloch, Germany,
Reinelt, W. and Lundquist, C. (2005). Observer based sensor monitoring
in an active front steering system using explicit sensor failure modeling. In
Proceedings of the 16th IFAC World Congress, Prague, Czech Republic,
Reinelt, W., Lundquist, C., and Johansson, H. (2005). On-line sensor monitoring in an active front steering system using extended Kalman filtering. In
Proceedings of the SAE World Congress, SAE paper 2005-01-1271, Detroit,
MI, USA,
Reinelt, W., Klier, W., Reimann, G., Lundquist, C., Schuster, W., and Großheim, R. (2004). Active front steering for passenger cars: System modelling
and functions. In Proceedings of the first IFAC Symposium on Advances in
Automotive Control, Salerno, Italy.
Patents of related interest, but not included in this thesis:
Lundquist, C. and Großheim, R. (2009). Method and device for determining
steering angle information. International Patent WO 2009047020, 2009.04.16
and German Patent DE 102007000958, 2009.05.14,
Lundquist, C. (2008). Method for stabilizing a vehicle combination. U.S.
Patent US 2008196964, 2008.08.21 and German Patent DE 102007008342,
2008.08.21,
Reimann, G. and Lundquist, C. (2008). Verfahren zum Betrieb eines elektronisch geregelten Servolenksystems. German Patent DE 102006053029,
2008.05.15,
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008c). Verfahren zum Betrieb eines Servolenksystems. German Patent DE 102006052092,
2008.05.08,
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008b). Verfahren zum Betrieb eines elektronischen Servolenksystems. German Patent
DE 102006043069, 2008.03.27,
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008d). Verfahren zum Betrieb eines Servolenksystems. German Patent DE 102006041237,
2008.03.06,
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008e). Verfahren zum Betrieb eines Servolenksystems. German Patent DE 102006041236,
2008.03.06,
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008a). Verfahren zum Betrieb eines elektronisch geregelten Servolenksystems. German
Patent DE 102006040443, 2008.03.06,
Reinelt, W. and Lundquist, C. (2007). Method for assisting the driver of a motor vehicle with a trailer when reversing. German Patent DE 102006002294,
2007.07.19, European Patent EP 1810913, 2007.07.25 and Japanese Patent
JP 2007191143, 2007.08.02,
Reinelt, W., Lundquist, C., and Malinen, S. (2007). Automatic generation of
a computer program for monitoring a main program to provide operational
safety. German Patent DE 102005049657, 2007.04.19,
Lundquist, C. and Reinelt, W. (2006c). Verfahren zur Überwachung der Rotorlage eines Elektromotors. German Patent DE 102005016514, 2006.10.12,
Part I
Background Theory and
Applications
2 Models of Dynamic Systems
Given measurements from several sensors the objective is to estimate one or several state variables, either by means of improving a measured signal or by means of estimating a signal which is not, or cannot, be directly measured. In either case the relationship between the measured signals and the state variables must be described, and the equations describing this relationship are referred to as the measurement model. When dealing with dynamic or moving systems, as is commonly the case in automotive applications, the objective might be to predict the value of the state variable at the next time step. The prediction equation is referred to as the process model. This chapter deals with these two types of models.
As mentioned in the introduction in Section 1.4, a general model of dynamic systems
is provided by the nonlinear state space model
xt+1 = ft(xt, ut, wt, θ),   (2.1a)
yt = ht(xt, ut, et, θ).   (2.1b)
The single track model, introduced in Example 1.1, is used as an example throughout
the first sections of this chapter. For this purpose the process and measurement models
are given in Example 2.1, while the derivations are provided later in Section 2.3. Most
mechanical and physical laws are provided in continuous-time, but computer implementations are made in discrete-time, i.e. the process and measurement models are derived in
continuous-time according to
ẋ(t) = a(x(t), u(t), w(t), θ, t),   (2.2a)
y(t) = c(x(t), u(t), e(t), θ, t),   (2.2b)
and are then discretized. Discretization is the topic of Section 2.1. Special cases of the
general state space model (2.1), such as the state space model with additive noise and the
linear state space model, are discussed in Section 2.2.
Several models for various applications are given in the papers in Part II, however, the
derivations are not always thoroughly described, and the last sections of this chapter are
aimed at closing this gap. More specifically, the single track state space model of the ego
vehicle given in Example 2.1 is derived in Section 2.3 and compared to other commonly
used models. There exist different road models, of which some are treated in Section 2.4.
Finally, target tracking models are discussed briefly in Section 2.5.
Example 2.1: Single Track Model
The state variables xE , the input signals uE and the measurement signals yIMU of the ego
vehicle model were defined in Example 1.1, and are repeated here for convenience
xE = [ψ̇E  β]^T,   (2.3a)
uE = [δf  v̇x  vx]^T,   (2.3b)
yIMU = [ψ̇E^m  a_y^m]^T.   (2.3c)
Note that the front wheel angle δf is used directly as an input signal to simplify the
example. The continuous-time single track process and measurement models are given
by


ẋE = [aE1; aE2],   (2.4a)

with

aE1 = −(Cαf lf² cos δf + Cαr lr²)/(Izz vx) ψ̇E + (−Cαf lf cos δf + Cαr lr)/Izz β + Cαf lf tan δf /Izz,
aE2 = −(1 + (Cαf lf cos δf − Cαr lr)/(vx² m)) ψ̇E − (Cαf cos δf + Cαr + v̇x m)/(m vx) β + Cαf sin δf /(m vx),

and

yIMU = [cE1; cE2],   (2.4b)

with

cE1 = ψ̇E,
cE2 = (−Cαf lf cos δf + Cαr lr)/(m vx) ψ̇E − (Cαf cos δf + Cαr + m v̇x)/m β + Cαf sin δf /m,
with parameter vector

θ = [lf  lr  Izz  m  Cαf  Cαr],   (2.5)
where lf and lr denote the distances between the center of gravity of the vehicle and the front and rear axles, respectively. Furthermore, m denotes the mass of the vehicle and Izz denotes the moment of inertia of the vehicle about its vertical axis in the center of gravity. The parameters Cαf and Cαr are called cornering stiffness and describe the road tire interaction. Typical values for the parameters are given in Table 2.1. The model is derived in Section 2.3.
2.1 Discretizing Continuous-Time Models
The measurements dealt with in this work are sampled and handled as discrete-time variables in computers and electronic control units (ECU). All sensor signals are transferred
in sampled form from different sensors to the log-computer on a so called CAN-Bus (Controller Area Network). Hence, the systems discussed in this thesis must also be described
Table 2.1: Typical ranges for the vehicle parameters used in the single track model.

  m [kg]        Izz [kgm²]    Cα [N/rad]        lf + lr [m]
  1000 − 2500   850 − 5000    45000 − 75000     2.5 − 3.0
using discrete-time models according to the state space model in (2.1). Nevertheless, since
physical relations commonly are given in continuous-time, the various systems presented
in this thesis, such as the single track model in Example 2.1, are derived and represented
using continuous-time state space models in the form (2.2). Thus, all continuous-time
models in this thesis have to be discretized in order to describe the measurements. Only a
few of the motion models can be discretized exactly by solving the sampling formula
xt+1 = xt + ∫_t^(t+T) a(x(τ), u(t), w(t), θ) dτ,   (2.6)
analytically, where T denotes the sampling time. A simpler way is to make use of the
standard forward Euler method, which approximates (2.2a) according to
xt+1 ≈ xt + T a(xt, ut, wt, θ) ≜ ft(xt, ut, wt, θ).   (2.7)
This is a very rough approximation with many disadvantages, but it is frequently used because of its simplicity. This method is used in Example 2.2 to discretize the continuous-time vehicle model given in (2.4).
Example 2.2: Discrete-Time Single Track Model
The single track model given in Example 2.1 may be discretized using (2.7) according to

xE,t+1 = [fE1; fE2] = [ψ̇E,t + T aE1; βt + T aE2],   (2.8a)
yIMU,t = [hE1; hE2] = [cE1; cE2],   (2.8b)

where T is the sampling time.
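The forward Euler approximation (2.7) can be sketched for a generic continuous-time model ẋ = a(x, u); the first-order lag dynamics below are a hypothetical example chosen only to show the mechanics, not the single track model:

```python
def euler_step(a, x, u, T):
    """One forward Euler step: x_{t+1} ≈ x_t + T * a(x_t, u_t)."""
    return x + T * a(x, u)

# Hypothetical continuous-time dynamics: first-order lag toward the input u
def a(x, u):
    return -2.0 * x + u

T = 0.01              # sampling time [s]
x = 0.0
for _ in range(100):  # simulate 1 s with the step input u = 1
    x = euler_step(a, x, u=1.0, T=T)
# x approximates the exact solution x(1) = 0.5 * (1 - e^{-2}) ≈ 0.432
```

Since the error of the forward Euler method grows with the sampling time T, a shorter step or a higher-order scheme is preferable for fast dynamics.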
Sampling of linear systems is thoroughly described by Rugh (1996). Moreover, different options to sample and linearize non-linear continuous-time systems are described by Gustafsson (2000). The linearization problem is treated in Chapter 3, in a discussion of approximate model based filters such as the extended Kalman filter.
2.2 Special cases of the State Space Model

Special cases of the general state space model (2.1) are treated in this section. These include the linear state space model in Section 2.2.1 and the state space model with additive noise in Section 2.2.2.
2.2.1 Linear State Space Model
An important special case of the general state space model (2.1) is the linear Gaussian
state space model, where f and h are linear functions and the noise is Gaussian,
xt+1 = Ft(θ) xt + G^u_t(θ) ut + G^w_t wt,   (2.9a)
yt = Ht(θ) xt + H^u_t(θ) ut + et,   (2.9b)
where wt ∼ N (0, Qt ) and et ∼ N (0, Rt ). Note that the single track model (2.4) is linear
in the state variables, as shown in Example 2.3.
Example 2.3: Linearized Single Track Model
The front wheel angle is usually quite small at higher velocities and the assumptions cos δf ≈ 1, tan δf ≈ sin δf ≈ δf therefore apply. The discrete-time single track model (2.8) may be written on the linear form (2.9) according to

xE,t+1 = [ 1 − T (Cαf lf² + Cαr lr²)/(Izz vx)     T (−Cαf lf + Cαr lr)/Izz
           −T − T (Cαf lf − Cαr lr)/(vx² m)       1 − T (Cαf + Cαr + v̇x m)/(m vx) ] xE,t
         + [ T Cαf lf /Izz
             T Cαf /(m vx) ] δf + wt,   (2.10a)

yIMU,t = [ 1                              0
           (−Cαf lf + Cαr lr)/(m vx)      −(Cαf + Cαr + m v̇x)/m ] xE,t
         + [ 0
             Cαf /m ] δf + et.   (2.10b)

The model is linear in the input δf. However, the inputs v̇x and vx are implicitly modeled in the matrices Ft(v̇x, vx, θ), G^u_t(vx, θ) and Ht(v̇x, vx, θ).
Several of the radar measurements in Example 1.2 can be associated to the same
tracked state. This situation leads to a problem where a batch of measurements yi , . . . , yj
is associated to the same state xk . The update of the state with the batch of new measurements may be executed iteratively, as if the measurements were collected at different
time steps. Another method, which is used in Paper C, is accomplished by stacking all
available measurements in the set yi:j and sensor models Hi:j on top of each other in
order to form
 


Yi:j = [yi; . . . ; yj]  and  Hi:j(θ) = [Hi(θ); . . . ; Hj(θ)],   (2.11)

respectively. The measurement equation (2.9b) may now be rewritten according to

Yi:j,t = Hi:j,t(θ) xk,t + et.   (2.12)
Linear state space models and linear system theory in general are thoroughly described
by Rugh (1996) and Kailath (1980).
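The stacking in (2.11)–(2.12) can be sketched with NumPy; the sensor models Hi and measurement values below are hypothetical, and a plain least-squares solve stands in for the full measurement update:

```python
import numpy as np

# Hypothetical batch: three linear measurements associated to one 2D state x_k
H_list = [np.array([[1.0, 0.0]]),
          np.array([[0.0, 1.0]]),
          np.array([[1.0, 1.0]])]
y_list = [np.array([1.1]), np.array([2.0]), np.array([2.9])]

# Stack sensor models and measurements on top of each other, cf. (2.11)
H_stack = np.vstack(H_list)          # shape (3, 2)
Y_stack = np.concatenate(y_list)     # shape (3,)

# One solve then uses the whole batch at once, cf. (2.12) with e_t neglected
x_hat, *_ = np.linalg.lstsq(H_stack, Y_stack, rcond=None)
```

In a filter, the stacked pair (Y_stack, H_stack) would enter a single measurement update instead of j − i + 1 iterated updates.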
2.2.2 State Space Model with Additive Noise
A special case of the general state space model (2.1) is given by assuming that the noise
enters additively and the input signals are subsumed in the time-varying dynamics, which
leads to the form
xt+1 = ft(xt, θ) + wt,   (2.13a)
yt = ht(xt, θ) + et.   (2.13b)
In Example 1.1 an ego vehicle model was introduced, where the steering wheel angle,
the longitudinal acceleration and the vehicle velocity were modeled as deterministic input
signals. This consideration can be motivated by claiming that the driver controls the
vehicle’s lateral movement with the steering wheel and the longitudinal movement with
the throttle and brake pedals. Furthermore, the steering wheel angle and the velocity are
measured with less noise than the other measurement signals, and they are often preprocessed to improve the accuracy and remove bias. With these arguments the resulting
model, given in Example 2.1, may be employed. The model is in some sense simpler
than if these two signals would be assumed to be stochastic measurements, as shown in
Example 2.4.
Example 2.4: Single Track Model without Deterministic Input Signals
In classical signal processing it is uncommon to allow deterministic input signals, at least
not if these are measured by sensors. The input signals in Example 1.1 should instead be
modeled as stochastic measurements. Hence, the measurement vector and the state vector
are augmented and the system is remodeled. One example is given by the state space
model


 
xE,t+1 = [ψ̇t+1; βt+1; δf,t+1; vx,t+1; v̇x,t+1]
       = [ fE1(ψ̇t, βt, δf,t, vx,t, wψ̇,t, θ);
           fE2(ψ̇t, βt, δf,t, v̇x,t, vx,t, wβ,t, θ);
           fE3(δf,t, wδf,t, θ);
           vx,t + T v̇x,t;
           v̇x,t + wv̇x,t ],   (2.14a)

yt = [ψ̇t^m; ay,t^m; δs,t^m; vx,t^m; v̇x,t^m]
   = [ hE1(ψ̇t, βt, δf,t, vx,t, θ) + eψ̇,t;
       hE2(ψ̇t, βt, δf,t, v̇x,t, vx,t, θ) + eβ,t;
       hE3(ψ̇t, βt, δf,t, θ) + eδs,t;
       vx,t + evx,t;
       v̇x,t + ev̇x,t ],   (2.14b)
where T is the sample time and the measured signals are labeled with superscript m to distinguish them from the states. The first two rows of the process and measurement models, i.e. fE1, fE2, hE1 and hE2, were given in (2.8). The third measurement signal
is the steering wheel angle δs , but the third state is the front wheel angle δf . A possible
measurement model hE3 will be discussed in Example 3.1. Random walk is assumed for
the longitudinal acceleration v̇x in the process model.
Another way to represent the state space model is given by considering the probability
density function (pdf) of different signals or state variables of a system. The transition
density p(xt+1 |xt ) models the dynamics of the system and if the process noise is assumed
additive, the transition model is given by
p(xt+1 | xt) = pw(xt+1 − f(xt, ut, θ)),   (2.15)
where pw denotes the density of the process noise w. A fundamental property of the
process model is the Markov property,
p(xt+1 | x1, . . . , xt) = p(xt+1 | xt).   (2.16)
This means that the state of the system at time t contains all necessary information about
the past, which is needed to predict the future behavior of the system.
Furthermore, if the measurement noise is assumed additive then the likelihood function, which describes the measurement model, is given by
p(yt | xt) = pe(yt − h(xt, ut, θ)),   (2.17)
where pe denotes the density of the sensor noise e. The two density functions in (2.15)
and (2.17) are often referred to as a hidden Markov model (HMM) according to
xt+1 ∼ p(xt+1 | xt),   (2.18a)
yt ∼ p(yt | xt),   (2.18b)
since xt is not directly visible in yt . It is a statistical model where one Markov process,
that represents the system, is observed through another stochastic process, the measurement model.
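The hidden Markov model structure (2.18) can be simulated by drawing from the transition density and the likelihood in turn; the scalar system, its coefficients and its noise levels below are made-up stand-ins for illustration:

```python
import random

random.seed(0)

# Hypothetical scalar HMM: x_{t+1} = 0.9 x_t + w_t, y_t = x_t + e_t
def sample_transition(x):
    """Draw x_{t+1} ~ p(x_{t+1} | x_t), additive Gaussian process noise."""
    return 0.9 * x + random.gauss(0.0, 0.1)

def sample_likelihood(x):
    """Draw y_t ~ p(y_t | x_t), additive Gaussian measurement noise."""
    return x + random.gauss(0.0, 0.5)

# The state sequence xs stays hidden; only the measurements ys are observed
xs, ys = [1.0], []
for _ in range(50):
    ys.append(sample_likelihood(xs[-1]))
    xs.append(sample_transition(xs[-1]))
```

The Markov property (2.16) shows up in the code as sample_transition depending only on the latest state xs[-1].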
2.3 Ego Vehicle Model
The ego vehicle model was introduced in Example 1.1 and the single track model was
given in Example 2.1. Before the model equations are derived in Section 2.3.3, the tire
road interaction, which is an important part of the model, is discussed in Section 2.3.2.
Two other vehicle models, which are commonly used for lane keeping systems, are given in Section 2.3.4. However, to derive these models accurately some notation is required, which is the topic of Section 2.3.1.
2.3.1 Notation
The coordinate frames describing the ego vehicle and one leading vehicle are defined
in Figure 2.1. The extension to several leading vehicles is straightforward. The inertial
world reference frame is denoted by W and its origin is OW . The ego vehicle’s coordinate
frame E is located in the center of gravity (CoG) and Es is at the vision and radar sensor
of the ego vehicle. Furthermore, the coordinate frame Ti is associated with the tracked
Figure 2.1: Coordinate frames describing the ego vehicle, with center of gravity
in OE and the radar and camera sensors mounted in Es . One leading vehicle is
positioned in OTi .
leading vehicle i, and its origin OTi is located at the leading vehicle. In this work the
planar coordinate rotation matrix
R^{WE} = [ cos ψE   −sin ψE
           sin ψE    cos ψE ]   (2.19)
is used to transform a vector dE , represented in E, into a vector dW , represented in W ,
according to
dW = R^{WE} dE + d^W_{EW},   (2.20)
where the yaw angle of the ego vehicle ψE is the angle of rotation from W to E. The
geometric displacement vector d^W_{EW} is the direct straight line from OW to OE represented
with respect to the frame W . Velocities are defined as the movement of a frame E relative
to the inertial reference frame W , but typically resolved in the frame E, for example v_x^E is the velocity of the E frame in its x-direction. The same convention holds for the acceleration a_x^E. In order to simplify the notation, E is left out when referring to the ego
vehicle’s velocity and acceleration.
This notation will be used when referring to the various coordinate frames. However,
certain frequently used quantities will be renamed, in the interest of readability. The
measurements are denoted using superscript m. Furthermore, the notation used for the
rigid body dynamics is in accordance with Hahn (2002).
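The rotation (2.19) and the displacement in (2.20) can be sketched directly in code; the yaw angle and the numerical values below are arbitrary examples:

```python
import math

def rotation_WE(psi_E):
    """Planar rotation matrix R^{WE} for the ego yaw angle psi_E, cf. (2.19)."""
    c, s = math.cos(psi_E), math.sin(psi_E)
    return [[c, -s],
            [s,  c]]

def to_world(d_E, psi_E, d_EW_W):
    """Transform a vector d^E in frame E into d^W in frame W, cf. (2.20)."""
    R = rotation_WE(psi_E)
    return [R[0][0] * d_E[0] + R[0][1] * d_E[1] + d_EW_W[0],
            R[1][0] * d_E[0] + R[1][1] * d_E[1] + d_EW_W[1]]

# A point 10 m straight ahead of a vehicle with yaw 90 degrees, located at (5, 3)
d_W = to_world(d_E=[10.0, 0.0], psi_E=math.pi / 2, d_EW_W=[5.0, 3.0])
# d_W is approximately [5.0, 13.0]
```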
2.3.2 Tire Model
The slip angle αi is defined as the angle between the central axis of the wheel and the
path along which the wheel moves. The phenomenon of side slip is mainly due to the
lateral elasticity of the tire. For reasonably small slip angles, at maximum 3◦ or up to a
centripetal force of approximately 0.4 g, it is a good approximation to assume that the
lateral friction force of the tire Fi is proportional to the slip angle,
Fi = Cαi αi.   (2.21)
The parameter Cαi is referred to as the cornering stiffness of tire i and describes the
cornering behavior of the tire. The load transfer to the front axle when braking or to the
outer wheels when driving through a curve can be considered by modeling the cornering
stiffness as
Cαi = Cαi0 + ζαi ΔFzi,   (2.22)
where Cαi0 is the equilibrium of the stiffness for tire i and ζαi relates the load transfer
∆Fzi to the total stiffness. This tire model is treated in Paper B. General information
about slip angles and cornering stiffness can be found in the books by e.g. Pacejka (2006),
Mitschke and Wallentowitz (2004), Wong (2001).
Most of the ego vehicle’s parameters θ, such as the dimensions, the mass and the
moment of inertia are assumed time invariant and are given by the vehicle manufacturer.
Since the cornering stiffness is a parameter that describes the properties between road and tire, it has to be estimated on-line, as described in Paper B, or estimated for a given set, i.e. a batch, of measurements.
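On-line estimation of a cornering-stiffness-like parameter can be sketched with a standard recursive least squares (RLS) update with a forgetting factor. This is a textbook scalar RLS, not the specific formulation of Paper B; the regression model F = Cα · α and the simulated slip angles and forces are purely illustrative:

```python
import random

random.seed(1)

C_true = 60000.0        # hypothetical "true" cornering stiffness [N/rad]

# Scalar RLS with forgetting factor lam: estimate C in F = C * alpha + noise
C_hat = 40000.0         # initial parameter guess
P = 1e6                 # initial parameter covariance
lam = 0.99              # forgetting factor, discounts old data

for _ in range(200):
    alpha = random.uniform(-0.05, 0.05)           # simulated slip angle [rad]
    F = C_true * alpha + random.gauss(0.0, 50.0)  # noisy lateral force [N]
    K = P * alpha / (lam + alpha * P * alpha)     # gain
    C_hat = C_hat + K * (F - alpha * C_hat)       # parameter update
    P = (P - K * alpha * P) / lam                 # covariance update
```

The forgetting factor lets the estimate track slow changes in the road and weather conditions mentioned above, at the price of a larger steady-state variance.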
To determine how the front and rear cornering stiffness parameters relate to each other
and in which range they typically are, a 3 min measurement sequence, acquired on rural
roads, was used. The data used to identify the cornering stiffness parameters was split into
two parts, one estimation part and one validation part. This facilitates cross-validation,
where the parameters are estimated using the estimation data and the quality of the estimates can then be assessed using the validation data (Ljung, 1999). From Pacejka (2006),
Mitschke and Wallentowitz (2004), Wong (2001) it is known that the cornering stiffness
values should be somewhere in the range between 20,000 and 100,000 N/rad. The single track model (2.4) was used and the parameter space was gridded and an exhaustive
search was performed. To gauge how good a specific parameter pair is, the simulated yaw rate and lateral acceleration were compared with the measured values according to

fit_1 = 100 ( 1 − |y − ŷ| / |y − ȳ| ),   (2.23)

where y is the measured value, ŷ is the estimate and ȳ is the mean of the measurement, see Ljung (2009). Since there are two signals, two fit-values are obtained, which are
combined into a joint fit-value using a weighted sum. In Figure 2.2 a diagonal ridge
of the best fit value is clearly visible. For different estimation data sets, different local
maxima were found on the ridge. Further, it was assumed that the two parameters should
have approximately the same value.

Figure 2.2: A grid map showing the total fit value of the two outputs and the constraint defined in (2.24). [Surface plot of the fit over the gridded (C_{αf}, C_{αr}) parameter space.]

This constraint (which forms a cross diagonal, or orthogonal, ridge) is expressed as

fit_2 = 100 ( 1 − |C_{αf} − C_{αr}| / ((C_{αf} + C_{αr})/2) ),   (2.24)
and added as a third fit-value to the weighted sum, obtaining the total fit for the estimation
data set as

total fit = w_{ψE} fit_{ψE} + w_{ay} fit_{ay} + w_2 fit_2,   (2.25)

where the weights should sum to one, i.e. w_{ψE} + w_{ay} + w_2 = 1, w ≥ 0. The exhaustive search resulted in the values C_{αf} = 41,000 N/rad and C_{αr} = 43,000 N/rad. The
resulting state-space model was validated using the validation data and the result is given
in Figure 5 in Paper A.
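The gridded exhaustive search described above can be sketched in code. This is an illustrative reconstruction, not the thesis implementation: `simulate` is a stand-in for simulating the single track model (2.4) with a candidate parameter pair, and the weight values are assumptions.

```python
import numpy as np

def fit_value(y, y_hat):
    # Model fit in percent, as in (2.23): 100 * (1 - |y - y_hat| / |y - y_bar|).
    y = np.asarray(y, float)
    y_hat = np.asarray(y_hat, float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - y.mean()))

def total_fit(fit_yaw, fit_ay, C_f, C_r, w=(0.4, 0.4, 0.2)):
    # Weighted sum (2.25), including the symmetry constraint fit2 from (2.24).
    fit2 = 100.0 * (1.0 - abs(C_f - C_r) / ((C_f + C_r) / 2.0))
    return w[0] * fit_yaw + w[1] * fit_ay + w[2] * fit2

def grid_search(simulate, yaw_meas, ay_meas, grid):
    # Exhaustive search over the gridded (C_f, C_r) parameter space.
    best = (-np.inf, None)
    for C_f in grid:
        for C_r in grid:
            yaw_sim, ay_sim = simulate(C_f, C_r)
            tf = total_fit(fit_value(yaw_meas, yaw_sim),
                           fit_value(ay_meas, ay_sim), C_f, C_r)
            if tf > best[0]:
                best = (tf, (C_f, C_r))
    return best
```

With a 1,000 N/rad grid spacing over the plausible range 20,000 to 100,000 N/rad, this evaluates a few thousand candidate pairs, which is cheap for a 3 min data batch.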
2.3.3 Single Track Model
In this work the ego vehicle motion is only considered during normal driving situations
and not at the adhesion limit. This implies that the single track model, described in e.g.,
Mitschke and Wallentowitz (2004) is sufficient for the present purposes. This model is
also referred to as the bicycle model. The geometry of the single track model with slip
angles is shown in Figure 1.3. It is worth mentioning that the velocity vector of the ego
vehicle is typically not in the same direction as the longitudinal axis of the ego vehicle.
Instead the vehicle will move along a path at an angle β with the longitudinal direction of
the vehicle. Hence, the angle β is defined as

tan β = v_y / v_x,   (2.26)
where vx and vy are the ego vehicle’s longitudinal and lateral velocity components, respectively. This angle β is referred to as the float angle in Robert Bosch GmbH (2004)
and the vehicle body side slip angle in Kiencke and Nielsen (2005). Lateral slip is an
effect of cornering. To turn, a vehicle needs to be affected by lateral forces. These are
provided by the friction when the wheels slip.
The Slip Angles
From Figure 2.1 the following geometric constraints, describing the relations between the front axle, rear axle and the origin of the world coordinate frame, are obtained

x^W_{E_f W} = l_b cos ψ_E + x^W_{E_r W},   (2.27a)
y^W_{E_f W} = l_b sin ψ_E + y^W_{E_r W},   (2.27b)
where Ef and Er are coordinate frames fixed to the front and rear wheel, respectively.
The ego vehicle's velocity at the rear axle is given by

[v_x^{E_r}  v_y^{E_r}]^T = R^{E_r W} ẋ^W_{E_r W},   (2.28)

which is rewritten to obtain

ẋ^W_{E_r W} cos ψ_E + ẏ^W_{E_r W} sin ψ_E = v_x^{E_r},   (2.29a)
−ẋ^W_{E_r W} sin ψ_E + ẏ^W_{E_r W} cos ψ_E = v_y^{E_r}.   (2.29b)
Furthermore, the directions of the tire velocity vectors are given by the constraint equations

−sin(ψ_E − α_r) ẋ^W_{E_r W} + cos(ψ_E − α_r) ẏ^W_{E_r W} = 0,   (2.30a)
−sin(ψ_E + δ_f − α_f) ẋ^W_{E_f W} + cos(ψ_E + δ_f − α_f) ẏ^W_{E_f W} = 0.   (2.30b)
The equations (2.27), (2.29) and (2.30) are used to obtain

ψ̇_E = (v_x^{E_r} / l_b) tan(δ_f − α_f) − v_y^{E_r} / l_b,   (2.31a)
v_y^{E_r} = −v_x^{E_r} tan α_r.   (2.31b)

The velocities v_x^{E_r} and v_y^{E_r} have their origin in the ego vehicle's rear axle, and the velocities in the vehicle's center of gravity are given by v_x ≜ v_x^E ≈ v_x^{E_r} and v_y ≜ v_y^E = v_y^{E_r} + ψ̇_E l_r. The ego vehicle's body side slip angle β is defined in (2.26), and by inserting
this relation into (2.31) the following equations are obtained

tan α_r = ψ̇_E l_r / v_x − tan β,   (2.32a)
tan(δ_f − α_f) = ψ̇_E l_f / v_x + tan β.   (2.32b)
Small α and β angles (tan α ≈ α and tan β ≈ β) can be assumed during normal driving conditions, i.e.,

α_r = ψ̇_E l_r / v_x − β,   (2.33a)
α_f = −ψ̇_E l_f / v_x − β + tan δ_f.   (2.33b)
Process Model
Newton's second law of motion, F = ma, is applied to the center of gravity. Only the lateral axis y has to be considered, since the longitudinal movement is a measured input,

Σ_i F_i = m a_y,   (2.34)

where

a_y = v̇_y + ψ̇_E v_x,   (2.35)

and

v̇_y ≈ d(β v_x)/dt = v_x β̇ + v̇_x β,   (2.36)

for small angles. By inserting the tire forces F_i, which were defined by the tire model (2.21), into (2.34) the following force equation is obtained

C_{αf} α_f cos δ_f + C_{αr} α_r = m (v_x ψ̇_E + v_x β̇ + v̇_x β),   (2.37)
where m denotes the mass of the ego vehicle. The moment equation

Σ_i M_i = I_zz ψ̈_E   (2.38)

is used in the same manner to obtain the relations for the angular accelerations

l_f C_{αf} α_f cos δ_f − l_r C_{αr} α_r = I_zz ψ̈_E,   (2.39)

where I_zz denotes the moment of inertia of the vehicle about its vertical axis in the center of gravity. Inserting the relations for the wheel side slip angles (2.33) into (2.37) and
(2.39) results in

m (v_x ψ̇_E + v_x β̇ + v̇_x β) = C_{αf} ( tan δ_f − β − ψ̇_E l_f / v_x ) cos δ_f + C_{αr} ( ψ̇_E l_r / v_x − β ),   (2.40a)

I_zz ψ̈_E = l_f C_{αf} ( tan δ_f − β − ψ̇_E l_f / v_x ) cos δ_f − l_r C_{αr} ( ψ̇_E l_r / v_x − β ).   (2.40b)
These relations are rewritten according to

ψ̈_E = β (−l_f C_{αf} cos δ_f + l_r C_{αr}) / I_zz − ψ̇_E (C_{αf} l_f² cos δ_f + C_{αr} l_r²) / (I_zz v_x) + l_f C_{αf} sin δ_f / I_zz,   (2.41a)

β̇ = −β (C_{αf} cos δ_f + C_{αr} + v̇_x m) / (m v_x) − ψ̇_E ( 1 + (C_{αf} l_f cos δ_f − C_{αr} l_r) / (v_x² m) ) + C_{αf} sin δ_f / (m v_x),   (2.41b)

to obtain the process model (2.4a).
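The right-hand side of (2.41) can be evaluated numerically, e.g. for simulation or for the prediction step of a filter. The sketch below assumes SI units; `ax` stands in for the measured longitudinal acceleration v̇_x, and the parameter names are chosen here for readability.

```python
import numpy as np

def single_track_derivatives(yaw_rate, beta, delta_f, vx, ax,
                             Caf, Car, lf, lr, m, Izz):
    # Yaw acceleration (2.41a) and side slip angle derivative (2.41b)
    # of the single track model.
    cd = np.cos(delta_f)
    yaw_acc = (beta * (-lf * Caf * cd + lr * Car) / Izz
               - yaw_rate * (Caf * lf**2 * cd + Car * lr**2) / (Izz * vx)
               + lf * Caf * np.sin(delta_f) / Izz)
    beta_dot = (-beta * (Caf * cd + Car + ax * m) / (m * vx)
                - yaw_rate * (1.0 + (Caf * lf * cd - Car * lr) / (vx**2 * m))
                + Caf * np.sin(delta_f) / (m * vx))
    return yaw_acc, beta_dot
```

Driving straight ahead with zero steering angle gives zero derivatives, while a positive front wheel angle produces positive yaw acceleration, as expected.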
Measurement Model
The ego vehicle's lateral acceleration in the CoG is given by

a_y = v_x (ψ̇_E + β̇) + v̇_x β.   (2.42)

By replacing β̇ with the expression given in (2.41b) and at the same time assuming that v̇_x β is small and can be neglected, the following relation is obtained

a_y = v_x (ψ̇_E + β̇)
    = −β (C_{αf} cos δ_f + C_{αr} + m v̇_x) / m + ψ̇_E (−C_{αf} l_f cos δ_f + C_{αr} l_r) / (m v_x) + (C_{αf} / m) sin δ_f,   (2.43)

which is the measurement equation in (2.4b).
2.3.4 Single Track Model with Road Interaction
There are several different ways to model the ego vehicle. The single track model (2.4)
is used in all papers in Part II, but in Paper A a comparison is made with two other
approaches. These are based on different vehicle models, which are discussed in this
section.
The first model is commonly used for autonomous driving and lane keeping. This
model is well described by e.g. Dickmanns (2007) and Behringer (1997). Note that the
ego vehicle’s motion is modeled with respect to a road fixed coordinate frame, unlike the
single track model in Section 2.3.3, which is modeled in a Cartesian world coordinate
frame.
The relative angle between the vehicle’s longitudinal axis and the tangent of the road
is denoted ψRE . Ackermann’s steering geometry is used to obtain the relation
ψ̇_RE = (v_x / l_b) δ_f − v_x · c_0,   (2.44)
where the current curvature of the road c0 is the inverse of the road’s radius. The lateral
displacement of the vehicle in the lane is given by
l̇_E = v_x (ψ_RE + β).   (2.45)
A process model for the body side slip angle was given in (2.41b), but since the yaw rate ψ̇_E is not part of the model in this section, equation (2.41b) has to be rewritten according to

β̇ = −(C_{αf} cos δ_f + C_{αr} + v̇_x m) / (m v_x) β − ( 1 + (C_{αf} l_f cos δ_f − C_{αr} l_r) / (v_x² m) ) (v_x / l_b) tan δ_f + C_{αf} sin δ_f / (m v_x),   (2.46)

which is further simplified by assuming small angles, to obtain a linear model according to

β̇ = −(C_{αf} + C_{αr}) / (m v_x) β + ( C_{αf} / (m v_x) − v_x / l_b ) δ_f.   (2.47)
Recall Example 2.4, where no deterministic input signals were used. Especially the steering wheel angle might have a bias, for example if the sensor is not calibrated, which leads to an accumulation of the side slip angle β in (2.47). Other reasons for a steering wheel angle bias are track torsion or strong side wind, which the driver compensates for with the steering wheel. The problem is solved by introducing an offset to the front wheel angle as a state variable according to

δ_f^m = δ_f + δ_f^offs.   (2.48)
To summarize, the state variable vector is defined as

x_E3 = [ψ_RE  l_E  β  δ_f  δ_f^offs]^T,   (2.49)

where ψ_RE is the relative angle between vehicle and road, l_E the lateral displacement of the vehicle in the lane, β the vehicle body side slip angle, δ_f the front wheel angle and δ_f^offs the front wheel angle offset, and the process model is given by

ψ̇_RE = (v_x / l_b) δ_f − v_x · c_0,
l̇_E = v_x (ψ_RE + β),
β̇ = −(C_{αf} + C_{αr}) / (m v_x) β + ( C_{αf} / (m v_x) − v_x / l_b ) δ_f,
δ̇_f = w_δf,
δ̇_f^offs = 0.   (2.50)
Note that the curvature c0 is included in (2.44) and in the process model above. The road
geometry is the topic of the next section. The curvature c0 can either be modeled as a
deterministic input signal or as a state variable as shown in Example 2.5. This model is
used in the approach called “fusion 3” in Paper A, and the state vector is denoted xE3 .
Another and simpler vehicle model is obtained if the side slip angle is omitted and the
yaw rate ψ̇E is used instead of the steering wheel angle. The model is described together
with results in Eidehall (2007), Eidehall et al. (2007), Eidehall and Gustafsson (2006),
Gern et al. (2000, 2001), Zomotor and Franke (1997). The state variable vector is then
defined as

x_E2 = [ψ_RE  l_E]^T,   (2.51)
and the process model is simply given by

[ψ̇_RE  l̇_E]^T = [v_x c_0 + ψ̇_E,  v_x ψ_RE]^T,   (2.52)
where the yaw rate ψ̇E is modeled as an input signal and the curvature c0 is modeled
either as an input signal or as a state variable in combination with a road model. This
model, in combination with the road model (2.56) described in the next section, is used in
the approach called “fusion 2” in Paper A, and the state vector is xE2 .
More advanced vehicle models with more degrees of freedom, including the two track
model, are described by Schofield (2008).
2.4 Road Model
The road, as a construction created by humans, possesses no dynamics; it is a static time
invariant object in the world coordinate frame. The building of roads is subject to road
construction standards such as VGU (2004a,b), hence, the modeling of roads is geared
to these specifications. However, if the road is described in the ego vehicle’s coordinate
frame and the vehicle is moving along the road it is possible and indeed useful to describe
the characteristics of the road using time varying state variables.
A road consists of straight and curved segments with constant radius and of varying
length. The sections are connected through transition curves, so that the driver can use
smooth and constant steering wheel movements instead of stepwise changes when passing
through road segments. More specifically, this means that a transition curve is formed as
a clothoid, whose curvature c changes linearly with its curve length xc according to
c(x_c) = c_0 + c_1 · x_c.   (2.53)
Note that the curvature c is the inverse of the radius. Now, suppose x_c is fixed to the ego vehicle, i.e. x_c = 0 at the position of the ego vehicle. When driving along the road and passing through different road segments, c_0 and c_1 will not be constant, but rather time varying state variables

x_R1 = [c_0  c_1]^T = [curvature at the ego vehicle,  curvature derivative]^T.   (2.54)
Using (2.53), a change in curvature at the position of the vehicle is given by

ċ_0 = dc(x_c)/dt |_{x_c = 0} = (dc_0/dx_c) · (dx_c/dt) = c_1 · v_x,   (2.55)
where v_x is the ego vehicle's longitudinal velocity. This relation was introduced by Dickmanns and Zapp (1986), who proposed the following process model

ċ_0 = v_x c_1,
ċ_1 = w_c1.   (2.56)
This model is sometimes also referred to as the simple clothoid model. Note that the
road is modeled in a road aligned coordinate frame, with the components (xc , yc ). There
are several advantages of using road aligned coordinate frames, especially when it comes to
the process models of the other vehicles on the same road, which can be greatly simplified. However, the flexibility of the process model is reduced and basic dynamic relations
such as Newton’s and Euler’s laws cannot be directly applied. The road model (2.53) is
transformed into Cartesian coordinates (x^R, y^R) using

x^R(x_c) = ∫_0^{x_c} cos(χ(x)) dx ≈ x_c,   (2.57a)
y^R(x_c) = ∫_0^{x_c} sin(χ(x)) dx ≈ (c_0/2) x_c² + (c_1/6) x_c³,   (2.57b)

where the heading angle χ is defined as

χ(x) = ∫_0^x c(λ) dλ = c_0 x + (c_1/2) x².   (2.57c)
The origin of the two frames is fixed to the ego vehicle, hence the integration constants (x_0^R, y_0^R) are omitted. Example 2.5 shows how the simple clothoid model can be combined
with the ego vehicle model described in Section 2.3.4 into one state space model.
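The small angle approximations in (2.57) can be checked numerically against direct integration of the clothoid. The sketch below is illustrative; the exact variant uses a simple trapezoidal rule.

```python
import numpy as np

def clothoid_xy(c0, c1, xc):
    # Small angle approximation (2.57): x^R ≈ xc, y^R ≈ c0/2 xc^2 + c1/6 xc^3.
    xc = np.asarray(xc, float)
    return xc, c0 / 2.0 * xc**2 + c1 / 6.0 * xc**3

def clothoid_xy_exact(c0, c1, xc, n=2000):
    # Trapezoidal integration of (2.57a)-(2.57b), with the heading
    # angle chi from (2.57c).
    s = np.linspace(0.0, float(xc), n)
    chi = c0 * s + c1 / 2.0 * s**2
    ds = s[1] - s[0]
    x = np.sum(np.cos(chi[:-1]) + np.cos(chi[1:])) * 0.5 * ds
    y = np.sum(np.sin(chi[:-1]) + np.sin(chi[1:])) * 0.5 * ds
    return x, y
```

For typical highway curvatures and look ahead distances of a few tens of meters the heading angle stays small, and the polynomial approximation is close to the integrated shape.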
Example 2.5: Single Track Model with Road Interaction
An alternative single track model was proposed in Section 2.3.4. The vehicle is modeled
in a road aligned coordinate frame and the process model (2.50) includes the curvature c0 ,
which was considered as a state variable in this section. Hence, the vehicle model (2.50)
can be augmented with a road model e.g., the simple clothoid model (2.56), to describe
the vehicle’s motion, the shape of the road and their interaction according to the linear
state space model

ψ̇_RE = (v_x / l_b) δ_f − v_x c_0,
l̇_E = v_x ψ_RE + v_x β,
β̇ = −(C_{αf} + C_{αr}) / (m v_x) β + ( C_{αf} / (m v_x) − v_x / l_b ) δ_f,
δ̇_f = w_δf,
δ̇_f^offs = 0,
ċ_0 = v_x c_1,
ċ_1 = w_c1,   (2.58a)

with the measurement model

ψ_RE^m = ψ_RE + e_ψRE,
l_E^m = l_E + e_lE,
δ_f^m = δ_f + δ_f^offs + e_δf,
c_0^m = c_0 + e_c0.   (2.58b)
The velocity v_x is modeled as a deterministic input signal and the measurements

y_camera = [ψ_RE^m  l_E^m  c_0^m]^T   (2.59)

are obtained using a camera and a computer vision algorithm. The front wheel angle δ_f^m is derived from the steering wheel angle, which is measured by the steering wheel angle sensor. This model is similar to the model denoted "fusion 3" in Paper A.
A problem appears when two or more clothoid segments, with different parameters c0
and c1 , are observed in the same camera view. The parameter c0 will change continuously
during driving, whereas c1 will be constant in each segment and change stepwise at the
segment transition. This leads to a Dirac impulse in ċ1 at the transition. The problem can
be solved by assuming a high process noise wc1 , but this leads to less precise estimation
of the state variables when no segment transitions occur in the camera view. To solve
this problem Dickmanns (1988) proposed an averaging curvature model, which is best
described with an example. Assume that the ego vehicle is driving on a straight road
(i.e., c0 = c1 = 0) and that the look ahead distance of the camera is x̄c . A new segment
begins at the position x0c < x̄c , which means that there is a step in c1 and c0 is ramped
up, see Figure 2.3. The penetration into the next segment is lc = x̄c − x0c . The idea of
this model, referred to as averaging or spread-out dynamic curvature model, with the new
state variables c_0m and c_1m, is that it generates the true lateral offset y^R(x̄_c) at the look
ahead distance x̄_c, i.e.

y_real^R(x̄_c) = y_model^R(x̄_c),   (2.60)

but it is continuously spread out in the range (0, x̄_c). The lateral offset of the real road as a function of the penetration l_c, for 0 ≤ l_c ≤ x̄_c, is

y_real^R(l_c) = (c_1/6) l_c³,   (2.61)
since the first segment is straight. The lateral offset of the averaging model as a function of the penetration l_c is

y_model^R(l_c) = (c_0m(l_c)/2) x̄_c² + (c_1m(l_c)/6) x̄_c³,   (2.62)

at the look ahead distance x̄_c. The equation

(c_1 / x̄_c²) l_c³ = 3 c_0m(l_c) + c_1m(l_c) x̄_c   (2.63)
is obtained by inserting (2.61) and (2.62) into (2.60). By differentiating (2.63) with respect to l_c, and using the relations dc_1/dl_c = 0, dc_0m(l_c)/dl_c = c_1m(l_c) and d(·)/dl_c = d(·)/dt · dt/dl_c, the following equation is obtained

ċ_1m = 3 (v_x / x̄_c) ( c_1 (l_c/x̄_c)² − c_1m ),   (2.64)

for l_c < x̄_c. Since (l_c/x̄_c)² is unknown it is usually set to 1 (Dickmanns, 2007), which finally yields

ċ_1m = 3 (v_x / x̄_c) (c_1 − c_1m).   (2.65)
Figure 2.3: A straight and a curved road segment are modeled with the averaging road model. The two upper plots show the parameters c_1 and c_0 of the real road; the bottom plot shows the real and the modeled roads in a Cartesian coordinate frame.
The state variable vector of the averaging model is defined as

x_R2 = [c_0m  c_1m  c_1]^T,   (2.66)

where c_0m is the curvature at the ego vehicle, c_1m the averaged curvature derivative and c_1 the curvature derivative of the foremost segment,
and the process model is given by augmenting the simple clothoid model (2.56) with (2.65) according to

ċ_0m = v_x c_1m,
ċ_1m = −3 (v_x / x̄_c) c_1m + 3 (v_x / x̄_c) c_1,
ċ_1 = w_c1.   (2.67)
The model is driven by the process noise wc1 , which also influences the other states. The
averaging model is well described in the recent book by Dickmanns (2007) and some
early results using the model are presented by e.g. Dickmanns and Mysliwetz (1992).
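Simulating (2.67) shows the intended behavior: after a step in c_1, the averaged state c_1m relaxes exponentially toward c_1 with time constant x̄_c/(3 v_x). The Euler step below is an illustrative sketch with assumed parameter values.

```python
import numpy as np

def averaging_curvature_step(x, vx, xbar_c, Ts, w_c1=0.0):
    # Forward-Euler step of the averaging curvature model (2.67).
    # State x = [c0m, c1m, c1].
    c0m, c1m, c1 = x
    c0m_dot = vx * c1m
    c1m_dot = 3.0 * vx / xbar_c * (c1 - c1m)   # spread-out dynamics (2.65)
    c1_dot = w_c1
    return x + Ts * np.array([c0m_dot, c1m_dot, c1_dot])
```

Starting on a straight road with a step in c_1 (a new clothoid segment entering the look ahead range), c_1m approaches c_1 smoothly instead of stepwise, which is exactly the purpose of the spread-out model.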
A completely different approach is proposed in Paper A, where the process model
describes the driven path of the ego vehicle instead of using road construction standards.
The shape of the road is given under the assumption that the ego vehicle is driving on the
road and the angle between the road and the ego vehicle is measured by the camera and
included as a state variable. The advantage of this approach is that the ego vehicle’s path
can be modeled more accurately than an unknown road, since there are a lot of sensors
available in the vehicle and most vehicle dimensions are known. This model, denoted
“fusion 1”, is compared with two other approaches in Section 5.3 in Paper A, including a
model, denoted “fusion 3”, which is similar to the one presented in Example 2.5.
2.5 Target Model
In this work, only measurements from the ego vehicle's sensors are available; that is, the target's motion is measured using the ego vehicle's radar and camera. This is why the target model is simpler than the ego vehicle model. The targets play an important role in the sensor fusion framework presented in this work, but little effort has been spent modeling their motion. Instead, standard models from the target tracking literature are used. Surveys of different process models and measurement models are given by Rong Li and Jilkov (2003) and Rong Li and Jilkov (2001), respectively. The subject is also covered in the books by Blackman and Popoli (1999) and Bar-Shalom et al. (2001). One typical target model is given in Example 2.6.
Example 2.6: Coordinated Turn Model
The coordinated turn model is commonly used to model moving targets. The ego vehicle's radar and camera measure the range d^m_{Ti Es}, the range rate ḋ^m_{Ti Es} and the azimuth angle δ^m_{Ti Es} to target number i, as described in the introduction in Example 1.2 and shown in Figure 2.1. The states of the coordinated turn model in polar velocity are given by

x_T = [x^W_{Ti W}  y^W_{Ti W}  ψ_{Ti}  v_x^{Ti}  ψ̇_{Ti}  a_x^{Ti}]^T,   (2.68)

i.e. the x- and y-position in the W-frame, the heading angle, the longitudinal velocity, the yaw rate and the longitudinal acceleration.
The process and measurement models are given by

[ẋ^W_{Ti W}  ẏ^W_{Ti W}  ψ̇_{Ti}  v̇_x^{Ti}  ψ̈_{Ti}  ȧ_x^{Ti}]^T
  = [v_x^{Ti} cos ψ_{Ti}  v_x^{Ti} sin ψ_{Ti}  ψ̇_{Ti}  a_x^{Ti}  0  0]^T + [0  0  0  0  w_{ψ̈Ti}  w_{ȧxTi}]^T,   (2.69a)

[d^m_{Ti Es}  ḋ^m_{Ti Es}  δ^m_{Ti Es}]^T
  = [ √( (x^W_{Ti W} − x^W_{EW} − x^E_{Es E})² + (y^W_{Ti W} − y^W_{EW} − y^E_{Es E})² )
      v_x^{Ti} cos(−(ψ_{Ti} − ψ_E) + δ_{Ti Es}) − v_x cos δ_{Ti Es}
      arctan( y^W_{Ti W} / x^W_{Ti W} ) − ψ_E − ψ^E_{Es E} ]^T + e_T,   (2.69b)
where (x^E_{Es E}, y^E_{Es E}, ψ^E_{Es E}) represents the sensor mounting position and orientation in the ego vehicle coordinate frame E. The single track ego vehicle state variable vector and state space model (2.4) have to be augmented with the ego vehicle's position in the world frame (x^W_{EW}, y^W_{EW}), since it is included in the measurement model of the target (2.69b).
3 Estimation Theory
This thesis is concerned with estimation problems, i.e. given measurements y the aim
is to estimate the parameter θ or the state x in (1.1). Both problems rely on the same
theoretical basis and the same algorithms can be used. The parameter estimation problem
is a part of the system identification process, which also includes the derivation of the
model structure, discussed in the previous chapter. The state estimation problem utilizes
the model and its parameters to solve for the states. When estimating x it is assumed that θ
is known and vice versa. The parameter is estimated in advance if θ is time invariant or in
parallel with the state estimation problem if θ is assumed to be time varying. Example 3.1
illustrates how the states and parameters may be estimated.
Example 3.1: Parameter and State Estimation
Consider the single track model introduced in Example 1.1 and its equations derived in
Section 2.3. The front wheel angle δf is considered to be a state variable in Example 2.4
and the steering wheel angle δs is treated as a measurement. The measurement equation
is in its simplest form a constant ratio given by
δs = h(δf , θ) = is · δf .
(3.1)
The parameter θ = is is assumed to be time invariant. The state δf must be known in order
to identify the parameter θ. Usually the parameter is estimated off-line in advance using
a test rig where the front wheel angle is measured with highly accurate external sensors.
The parameter is then used within the model in order to estimate the states on-line while
driving.
The tire parameter Cα is assumed to change with weather and road conditions, hence it
is a time varying parameter. It has to be identified on-line at time t using the state estimates
from the previous time step t − 1, which in turn were estimated using the parameter
estimate from time step t − 1.
For various reasons some systems are only modeled by a likelihood function. Often
these systems are static and there exists no Markov transition density. However, most
systems in this thesis are modeled by both a prediction and a likelihood function. In
system identification, the model parameter is estimated without physically describing the
parameter’s time dependency, hence static estimation theory is used. The state can be
estimated in more or less the same way. However, the process model (1.1a) is often given
and its time transition information is exploited to further improve the state estimate.
The origins of the estimation research field can be traced back to the work by Gauss
in 1795 on least squares (Abdulle and Wanner, 2002) and Bayes (1763) on conditional
probabilities. Bayes introduced an important theorem which has come to be referred to as
Bayes' theorem,

p(x, θ | y) = p(y | x, θ) p(x, θ) / p(y),   (3.2)
with which it is possible to calculate the inverse probability p(x, θ|y) given a prior probability p(x, θ) and the likelihood function p(y|x, θ). Note that both the measurement and
the state or parameter are treated as random variables. Another view of the estimation
problem was introduced by Fisher (1922), who claimed that the probability of an estimate
should be seen as a relative frequency of the state or parameter, given data from long-run
experiments. Fisher also treats the measurement as a random variable. The main difference to Bayes’ approach is that in Fisher’s approach there is a true state or parameter
which is treated as deterministic, but unknown. To accentuate the different views, the
likelihood is often written using ℓ(x, θ) to emphasize that the likelihood is regarded as a
function of the state x and the parameter θ.
After this brief historical background, the remainder of this chapter is outlined as follows. In Section 3.1, static estimation methods, based on both Fisher's and Bayes' theories, are discussed. These methods can be used for both state and parameter estimation. In Section 3.2, dynamic estimation methods are discussed. These methods are, within the scope of this thesis, only used for state estimation and are based solely on Bayes' theories.
3.1 Static Estimation Theory
The general estimation problem consists of finding the estimates x̂ and θ̂ that minimize a given loss function V(x, θ; y). This problem is separated into a parameter estimation problem and a state estimation problem according to

θ̂ = arg min_θ V(θ; x, y),   (3.3a)
x̂ = arg min_x V(x; θ, y).   (3.3b)
How to separate a typical estimation problem into these two parts is shown in Example 3.2.
General estimation techniques are covered by most textbooks on this topic, e.g. Kay
(1993), Kailath et al. (2000), Ljung (1999). There are many estimation methods available,
however, in this section the focus is on the methods used in Part II of this thesis.
Example 3.2: Parameter and State Estimation
Consider the linear single track model in Example 2.3. Suppose that the state variables
are measured with external and highly accurate sensors. The yaw rate is measured with an extra IMU and the body side slip angle β is measured with a so-called Correvit® sensor, which uses optical correlation technology. This sensor incorporates a high intensity light source that illuminates the road surface, which is optically detected by the sensor via a two-phase optical grating system. Now, the parameter θ can be estimated, according to (3.3a).
Conversely, if θ is known and y is measured, the state variables x can be estimated
using (3.3b).
This section covers estimation problems without any process model f ( · ), where a
set of measurements is related to a parameter only via the measurement model h( · ).
Furthermore, only an important and special case where the measurement model is linear
in x is considered. The linear measurement model was given in (2.9b) and is repeated
here for convenience
y_t = H_t(θ) x_t + e_t.   (3.4)
In the system identification community the nomenclature deviates slightly and (3.4)
is there referred to as a regression model
y_t = φ_t^T θ_t + e_t,   (3.5)
with the regressor ϕ. The nomenclature in (3.5) is used in the Papers B and C. Nevertheless, the nomenclature presented in (3.4) is used in this section in order to conform to
the rest of this chapter. That means that in the algorithms in this section h and x can be
substituted by ϕ and θ, respectively.
3.1.1 Least Squares Estimator
The least squares (LS) estimate is defined as the solution to the optimization problem where the squared errors between the predicted measurements and the actual measurements are minimized according to

x̂_t^LS = arg min_x Σ_{k=1}^t ||y_k − h_k(x)||²₂.   (3.6)
The solution for the linear case is given in Algorithm 3.1.
If the measurement covariance R = Cov(e) is known, or in practice at least assumed to be known, then the weighted least squares (WLS) estimate is given by the optimization problem

x̂_t^WLS = arg min_x Σ_{k=1}^t (y_k − h_k(x))^T R_k^{−1} (y_k − h_k(x)).   (3.7)
The solution for the linear case is given in Algorithm 3.2, and Example 3.3 illustrates how
the single track vehicle model can be reformulated to estimate the parameters using the
WLS.
Algorithm 3.1: Least Squares
The least squares estimate and its covariance are given by

x̂_t^LS = ( Σ_{k=1}^t H_k^T H_k )^{−1} Σ_{k=1}^t H_k^T y_k = (H^T H)^{−1} H^T Y,   (3.8a)
Cov(x̂^LS) = (H^T H)^{−1} (H^T R H) (H^T H)^{−1} ≜ P^LS.   (3.8b)

The last equality in (3.8a) is the batch solution, where H and Y were defined in (2.11). Furthermore, the measurement noise covariances R_k = Cov(e_k) form the main diagonal of R according to R = diag(R_1, . . . , R_t).
Algorithm 3.2: Weighted Least Squares
The weighted least squares estimator and its covariance matrix are given by

x̂_t^WLS = ( Σ_{k=1}^t H_k^T R_k^{−1} H_k )^{−1} Σ_{k=1}^t H_k^T R_k^{−1} y_k = (H^T R^{−1} H)^{−1} H^T R^{−1} Y,   (3.9a)
Cov(x̂^WLS) = (H^T R^{−1} H)^{−1} ≜ P^WLS,   (3.9b)

where the weighting matrix is the noise covariance R.
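Algorithms 3.1 and 3.2 reduce to a few lines of linear algebra. The batch WLS below is a direct transcription of (3.9), with plain LS (3.8) as the special case R = I; the explicit matrix inverse is used for clarity, although a solver would be preferable numerically.

```python
import numpy as np

def wls(H, Y, R):
    # Batch weighted least squares (3.9):
    #   x = (H^T R^-1 H)^-1 H^T R^-1 Y, with covariance P = (H^T R^-1 H)^-1.
    # Plain LS (3.8) is recovered with R = I.
    Ri = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ Ri @ H)
    x = P @ H.T @ Ri @ Y
    return x, P
```

In the regression form (3.5), the same call applies with H built from the regressors φ_k and x playing the role of the parameter θ.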
Example 3.3: Parameter and State Estimation
Consider the linear single track model in Example 2.3 and the separation of the parameter and the state estimation problems in Example 3.2. Suppose that the vehicle's mass m and the dimensions l_f and l_r are known. Furthermore, suppose that the state variable x may be measured as described in Example 3.2. Consider the measurement equation (2.10b); the parameter estimation problem can now be formulated in the form (3.4) or (3.5) according to

y = H(x, u, l_f, l_r, m) [C_αf  C_αr]^T + e,   (3.10)

and the parameters C_αf, C_αr can be solved for using e.g. the WLS in (3.7). Furthermore, the inverse of the moment of inertia 1/I_zz may be estimated off-line by writing the process model (2.10a) in the form (3.5) according to

x_{t+1} = H(x_t, u, l_r, l_f, m, C_αf, C_αr) · (1/I_zz) + w.   (3.11)
Another example, where the WLS estimator is applied, is given in Paper C. The
left and right borders of a road are modeled by polynomials and the coefficients are the
parameters which are estimated given a batch of measurements from a radar.
3.1.2 Recursive Least Squares
Consider the LS estimator in Section 3.1.1. If the state x varies with time it is a good idea to weigh recent measurements higher than older ones. Introduce a forgetting factor 0 < λ ≤ 1 in the loss function (3.3) according to

V(x, y) = Σ_{k=1}^t λ^{t−k} ||y_k − h_k(x)||²₂.   (3.12)
In the linear case the solution is given by the recursion in Algorithm 3.3. For a detailed account of the RLS algorithm and recursive identification in general, see e.g. Ljung (1999),
Ljung and Söderström (1983).
In many practical applications the parameter estimate lies within a certain region.
Some possibilities to constrain the parameter, under the assumption that the constrained
region is a closed convex region in the parameter space, denoted DM , are described
by Goodwin and Sin (1984) and Ljung (1999). The simplest approach is to project the
new estimate x̂_t back into D_M by taking the old value x̂_{t−1} according to

x̂_t = { x̂_t       if x̂_t ∈ D_M,
        x̂_{t−1}   if x̂_t ∉ D_M,   (3.13)

or by projecting x̂_t orthogonally onto the surface of D_M, before continuing.
Another approach is the constrained least-squares algorithm described by Goodwin and Sin (1984). If x̂_t ∉ D_M, then the coordinate basis for the parameter space is transformed by defining

ρ = P_{t−1}^{−1/2} x,   (3.14)

where

P_{t−1}^{−1} = P_{t−1}^{−T/2} P_{t−1}^{−1/2}.   (3.15)

The image of D_M under the linear transformation (3.14) is denoted D̄_M. The image ρ̂_t of x̂_t, under P_{t−1}^{−1/2}, is orthogonally projected onto the boundary of D̄_M to yield ρ̂'_t. Finally, the parameter x̂_t is obtained by projecting ρ̂'_t back under P_{t−1}^{1/2} according to

x̂_t = x̂'_t ≜ P_{t−1}^{1/2} ρ̂'_t,   (3.16)

and the recursion continues.
An example of how the RLS estimator can be used for on-line estimation of the stiffness parameters of the tires in a passenger car is given in Paper B. The parameters in this
example tend to drift when the system is not excited enough, for example when driving at
a constant velocity on a straight road. The parameters are therefore constrained using the
simple idea given in (3.13).
Algorithm 3.3: Recursive Least Squares
The recursive least squares solution is given by the recursion

\hat{x}_t = \hat{x}_{t-1} + K_t (y_t - H_t^T \hat{x}_{t-1}),   (3.17a)
K_t = P_{t-1} H_t (\lambda_t \Lambda_t + H_t^T P_{t-1} H_t)^{-1},   (3.17b)
P_t = \frac{1}{\lambda_t} \left( P_{t-1} - P_{t-1} H_t (\lambda_t \Lambda_t + H_t^T P_{t-1} H_t)^{-1} H_t^T P_{t-1} \right),   (3.17c)

where P_t = Cov(x̂_t) and Λ_t denotes a weighting matrix, which can be used to acknowledge the relative importance of the different measurements.
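The recursion (3.17), combined with the simple projection (3.13), can be sketched as follows. This is a minimal illustration, not code from the thesis; the scalar measurement model, the forgetting factor and the box-shaped constraint region are assumptions made here for the example.

```python
import numpy as np

def rls_update(x_hat, P, H, y, lam=0.98, Lam=None, box=None):
    """One step of the recursive least squares recursion (3.17),
    for a scalar measurement y = H^T x + e.

    lam : forgetting factor, 0 < lam <= 1
    Lam : measurement weight Lambda_t; identity weighting if None
    box : optional (lower, upper) bounds implementing the simple
          projection (3.13) onto a box-shaped convex region D_M
    """
    H = H.reshape(-1, 1)
    if Lam is None:
        Lam = np.eye(1)
    S = lam * Lam + H.T @ P @ H                      # denominator term
    K = P @ H @ np.linalg.inv(S)                     # gain (3.17b)
    x_new = x_hat + (K * (y - H.T @ x_hat)).ravel()  # estimate (3.17a)
    P_new = (P - P @ H @ np.linalg.inv(S) @ H.T @ P) / lam  # (3.17c)
    if box is not None:
        lo, hi = box
        if np.any(x_new < lo) or np.any(x_new > hi):
            x_new = x_hat                            # keep old value, (3.13)
    return x_new, P_new
```

With persistent excitation the estimate converges to the true parameters; without it, the optional `box` argument prevents the drift discussed above.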
3.1.3  Probabilistic Point Estimates
The maximum likelihood (ML) estimate, first introduced by Fisher (1912, 1922), is defined by

\hat{x}_t^{ML} = \arg\max_{x_t} p(y_{1:t} | x_t).   (3.18)

Put into words, the estimate is chosen to be the parameter value most likely to produce the
obtained measurements.

The posterior p(x_t | y_{1:t}) contains all known information about the state of the target
at time t. The maximum a posteriori (MAP) estimator is defined by

\hat{x}_t^{MAP} = \arg\max_{x_t} p(x_t | y_{1:t}) = \arg\max_{x_t} p(y_{1:t} | x_t) p(x_t),   (3.19)

or, put into words, it finds the most likely value of the parameter given the measurements
y_{1:t}. Bayes' theorem (3.2) and the fact that the maximization is performed over x_t are used
in the second equality of (3.19). The ML and MAP estimates are not considered in this
work, but are mentioned here to complete the picture.
3.2  Filter Theory
The topic of this section is recursive state estimation based on dynamic models. The iteration process of the state space estimation was briefly described in words in Section 1.4.
The state estimation theory is influenced by the Bayesian view, which implies that the
solution to the estimation problem is provided by the filtering probability density function (pdf) p(x_t | y_{1:t}). The introduction to this section will be rather general, using the
model defined in (2.18). Bayes' theorem was introduced in (3.2) and is used to derive the
recursive Bayes filter equations

p(x_{t+1} | y_{1:t}) = \int p(x_{t+1} | x_t) \, p(x_t | y_{1:t}) \, dx_t,   (3.20a)

p(x_t | y_{1:t}) = \frac{p(y_t | x_t) \, p(x_t | y_{1:t-1})}{p(y_t | y_{1:t-1})},   (3.20b)
with the denominator

p(y_t | y_{1:t-1}) = \int p(y_t | x_t) \, p(x_t | y_{1:t-1}) \, dx_t.   (3.20c)

These equations describe the time evolution

\cdots \to x_{t|t} \to x_{t+1|t} \to x_{t+1|t+1} \to \cdots   (3.21)

of the random state vector x. The Bayes posterior density function p(x_t | y_{1:t}), conditioned
on the sequence y_{1:t} = \{y_1, \dots, y_t\} of measurements accumulated at time t, is the
probability density function of x_{t|t}. The probability density function p(x_{t+1} | y_{1:t}) is the
time prediction of the posterior p(x_t | y_{1:t}) to the time step of the next measurement y_{t+1}.
Note that the Bayes normalization factor given by (3.20c) is independent of x. In practice
the numerator of (3.20b) is calculated and then simply normalized, since the posterior
density function must integrate to one.
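For a low-dimensional state, the recursion (3.20) can be implemented directly on a grid (a point-mass filter). The sketch below is only an illustration of the recursion; the scalar random-walk model, the Gaussian noise levels and the grid are assumptions made for this example, not models from the thesis.

```python
import numpy as np

# Point-mass (grid) implementation of the Bayes recursion (3.20)
# for a scalar random-walk state observed in Gaussian noise.
grid = np.linspace(-5, 5, 401)
dx = grid[1] - grid[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def predict(posterior, q=0.5):
    """Time update (3.20a): integrate against the transition density."""
    trans = gauss(grid[:, None], grid[None, :], q)   # p(x_{t+1} | x_t)
    return trans @ posterior * dx

def update(prior, y, r=0.7):
    """Measurement update (3.20b): multiply by the likelihood, normalize."""
    unnorm = gauss(y, grid, r) * prior
    return unnorm / (unnorm.sum() * dx)              # (3.20c) as a sum

posterior = gauss(grid, 0.0, 2.0)                    # prior p(x_0)
for y in [1.2, 0.9, 1.1]:
    posterior = update(predict(posterior), y)
x_map = grid[np.argmax(posterior)]                   # MAP point estimate
```

The normalization step corresponds exactly to the remark above: the numerator of (3.20b) is computed on the grid and then scaled so that the posterior integrates to one.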
If p(y_t|x_t), p(x_{t+1}|x_t) and p(x_t) are linear and Gaussian then (3.20a) and (3.20b)
reduce to the Kalman filter prediction and measurement update, respectively. The
Kalman filter is treated in Section 3.2.1. In contrast, if p(y_t|x_t), p(x_{t+1}|x_t) and p(x_t) are
nonlinear, but still assumed Gaussian, several approximations of (3.20a) and (3.20b) exist.
The two most common filters are the extended Kalman filter and the unscented Kalman
filter, which are outlined in Sections 3.2.2 and 3.2.3, respectively. Other methods,
including methods that approximate density functions other than the Gaussian, are neatly
covered by Hendeby (2008) and Schön (2006). The most popular approaches are the
particle filter and the marginalized particle filter, see e.g. Ristic et al. (2004), Arulampalam
et al. (2002), Cappé et al. (2007), Djurić et al. (2003), Karlsson (2005), Schön et al. (2005).
3.2.1  The Linear Kalman Filter
The linear state space representation subject to Gaussian noise, which was given in (2.9),
is the simplest special case when it comes to state estimation. The model is repeated here
for convenience;

x_{t+1} = F_t(\theta) x_t + G_t^u(\theta) u_t + G_t^w w_t,   w ∼ N(0, Q),   (3.22a)
y_t = H_t(\theta) x_t + H_t^u(\theta) u_t + e_t,   e ∼ N(0, R).   (3.22b)
The linear model (3.22) has two important properties: all density functions involved in
the model and state estimation are Gaussian, and a Gaussian density function is completely
parametrized by its mean and covariance, i.e. the first and second order moments.
Hence, the Bayesian recursion (3.20) simplifies to propagating only the mean and covariance of the involved probability density functions. The best known estimation
algorithm is the Kalman filter (KF), derived by Kalman (1960) and Kalman and Bucy
(1961), and shown in Algorithm 3.4. Example 3.4 shows how the single track vehicle
model, introduced in Example 1.1, may be rewritten to be used with the Kalman filter,
which in turn is used to estimate the states.
Algorithm 3.4: Kalman Filter
Consider the linear state space model (3.22). The Kalman filter is given by the two
following steps.

Prediction

\hat{x}_{t|t-1} = F_{t-1} \hat{x}_{t-1|t-1} + G_{t-1}^u u_{t-1}   (3.23a)
P_{t|t-1} = F_{t-1} P_{t-1|t-1} F_{t-1}^T + G_{t-1}^w Q_{t-1} G_{t-1}^{wT}   (3.23b)

Measurement Update

K_t = P_{t|t-1} H_t^T (H_t P_{t|t-1} H_t^T + R_t)^{-1}   (3.24a)
\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t (y_t - H_t \hat{x}_{t|t-1} - H_t^u u_t)   (3.24b)
P_{t|t} = (I - K_t H_t) P_{t|t-1}   (3.24c)
Example 3.4: Linearized Single Track Model
The single track vehicle model was introduced in Example 1.1 and the model equations
were derived in Section 2.3. The process model (2.4a) and the measurement model (2.4b)
are linear in the state variables and can be written in the form

\begin{bmatrix} \dot{\psi}_{t+1} \\ \beta_{t+1} \end{bmatrix} = F_t(\dot{v}_x, v_x, \theta) \begin{bmatrix} \dot{\psi}_t \\ \beta_t \end{bmatrix} + G_t^u(v_x, \theta) \, \delta_f + w_t,   w ∼ N(0, Q),   (3.25a)

\begin{bmatrix} \dot{\psi}_t^m \\ a_{y,t} \end{bmatrix} = H_t(\dot{v}_x, v_x, \theta) \begin{bmatrix} \dot{\psi}_t \\ \beta_t \end{bmatrix} + H_t^u(\theta) \, \delta_f + e_t,   e ∼ N(0, R),   (3.25b)

as shown in Example 2.3. Since the inputs \dot{v}_x and v_x are present in F_t, G_t^u and H_t, these
matrices must be recalculated at each time step before being used in the Kalman filter
(Algorithm 3.4) to estimate the states.
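Algorithm 3.4 can be sketched compactly. The single-track matrices of Example 3.4 are not reproduced in this chapter, so the usage below instead employs a generic constant-velocity model; that model, and the noise levels, are illustrative assumptions only.

```python
import numpy as np

def kf_step(x, P, y, F, H, Q, R, Gu=None, u=None, Hu=None):
    """One iteration of Algorithm 3.4 for the linear model (3.22).
    Q is the effective process noise covariance G^w Q G^{wT};
    the input terms G^u u and H^u u are optional."""
    # Prediction (3.23)
    x_pred = F @ x + (Gu @ u if Gu is not None else 0)
    P_pred = F @ P @ F.T + Q
    # Measurement update (3.24)
    y_pred = H @ x_pred + (Hu @ u if Hu is not None else 0)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - y_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

When the model matrices depend on inputs, as in Example 3.4, F, H (and G^u, H^u) are simply recomputed before each call.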
3.2.2  The Extended Kalman Filter
In general, most complex automotive systems tend to be nonlinear. When it comes to
solving state estimation problems in sensor fusion frameworks, nonlinear models are commonly applied. This holds also for the work presented in this thesis, but the problems are
restricted by the assumption that the process and measurement noise is Gaussian. The
most common representation of nonlinear systems is the state space model given in (1.1),
repeated here for convenience;
x_{t+1} = f_t(x_t, u_t, w_t, \theta),   w ∼ N(0, Q),   (3.26a)
y_t = h_t(x_t, u_t, e_t, \theta),   e ∼ N(0, R).   (3.26b)
The basic idea behind the extended Kalman filter (EKF) is to approximate the nonlinear
model (3.26) by a linear model and apply the Kalman filter locally. The local approximation is obtained by computing a first order Taylor expansion around the current estimate.
The result is the extended Kalman filter, which is given in Algorithm 3.5. Early practical
applications and examples of the EKF are described in the works by Smith et al. (1962)
and Schmidt (1966). An early reference where the EKF is treated is Jazwinski (1970);
other standard references are Anderson and Moore (1979) and Kailath et al. (2000).

The linearization used in the EKF assumes that all second and higher order terms in
the Taylor expansion are negligible. This is certainly true for many systems, but for some
systems this assumption can significantly degrade the estimation performance. Higher
order EKFs are discussed by Bar-Shalom and Fortmann (1988) and Gustafsson (2000).
This problem will be revisited in the next section.
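A minimal sketch of the EKF for additive Gaussian noise follows. The finite-difference Jacobian is a convenience choice made here (the thesis linearizes analytically), and the range-bearing test model is an illustrative assumption.

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Numerical Jacobian of func at x via forward differences."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, P, y, f, h, Q, R):
    """One EKF iteration (Algorithm 3.5) with additive Gaussian noise."""
    # Prediction (3.27): linearize f around the current estimate
    F = jacobian(f, x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Measurement update (3.28): linearize h around the prediction
    H = jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The structure is identical to the linear Kalman filter; only the model evaluations and the Jacobians F_t and H_t differ.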
3.2.3  The Unscented Kalman Filter

The EKF is sufficient for many applications. However, to use an EKF the gradients of
f_t(·) and h_t(·) must be calculated, which in some cases is either hard to do analytically or computationally expensive to do numerically. An alternative approach, called the
unscented Kalman filter (UKF), was proposed by Julier et al. (1995), Julier and Uhlmann
(1997) and further refined by e.g. Julier and Uhlmann (2002, 2004), Julier (2002). Instead
of linearizing f_t(·) and h_t(·), the unscented transform (UT) is used to approximate the
moments of the prediction p(x_{t+1}|x_t) and the likelihood p(y_t|x_t). Thereby the UKF to
some extent also considers the second order terms of the models, which is not done by
the EKF.

Algorithm 3.5: Extended Kalman Filter
Consider the state space model (3.26). The extended Kalman filter is given by the two
following steps.

Prediction

\hat{x}_{t|t-1} = f_{t-1}(\hat{x}_{t-1|t-1}, u_{t-1}, 0, \theta)   (3.27a)
P_{t|t-1} = F_{t-1} P_{t-1|t-1} F_{t-1}^T + G_{t-1} Q_{t-1} G_{t-1}^T   (3.27b)

where

F_t = \frac{\partial f_t(x_t, u_t, 0, \theta)}{\partial x_t} \bigg|_{x_t = \hat{x}_{t|t}}, \qquad G_t = \frac{\partial f_t(\hat{x}_{t|t}, u_t, w_t, \theta)}{\partial w_t} \bigg|_{w_t = 0}   (3.27c)

Measurement Update

K_t = P_{t|t-1} H_t^T (H_t P_{t|t-1} H_t^T + R_t)^{-1}   (3.28a)
\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t \left( y_t - h_t(\hat{x}_{t|t-1}, u_t, 0, \theta) \right)   (3.28b)
P_{t|t} = (I - K_t H_t) P_{t|t-1}   (3.28c)

where

H_t = \frac{\partial h_t(x_t, u_t, 0, \theta)}{\partial x_t} \bigg|_{x_t = \hat{x}_{t|t-1}}   (3.28d)
The principle of the unscented transform is to carefully and deterministically select
a set of points, called sigma points, of the initial stochastic variable x, such that their
mean and covariance equal those of x. Then the sigma points are passed through the nonlinear function and based on the output the resulting mean and covariance are derived. In
case the process noise and measurement noise are not additive, sigma points are selected
from an augmented state space, which includes the state x, the process noise w and the
measurement noise e in one augmented state vector

\hat{x}_{t|t}^a = \begin{bmatrix} \hat{x}_{t|t} \\ E(w_t) \\ E(e_{t+1}) \end{bmatrix},   (3.29)

with dimension n_a = n_x + n_w + n_e and the corresponding covariance matrix

P_{t|t}^a = \begin{bmatrix} P_{t|t} & 0 & 0 \\ 0 & Q_t & 0 \\ 0 & 0 & R_{t+1} \end{bmatrix}.   (3.30)

If the noise is additive, then the noise covariances can be added directly to the estimated
covariances of the non-augmented sigma points.
There exist many possibilities to choose the sigma points; a thorough discussion of the
different alternatives is presented by Julier and Uhlmann (2004). In the present work only
the standard form is reproduced. The basic principle is to choose one sigma point in
the mean of x^a and 2n_a points symmetrically on a given contour, described by the state
covariance P^a. The sigma points \chi^{(i)} and the associated weights w^{(i)} are chosen as

\chi^{(0)} = \hat{x}^a, \qquad w^{(0)} \text{ (design parameter)},   (3.31a)

\chi^{(i)} = \chi^{(0)} + \left( \sqrt{\frac{n_a}{1 - w^{(0)}} P^a} \right)_i, \qquad w^{(i)} = \frac{1 - w^{(0)}}{2 n_a},   (3.31b)

\chi^{(i+n_a)} = \chi^{(0)} - \left( \sqrt{\frac{n_a}{1 - w^{(0)}} P^a} \right)_i, \qquad w^{(i+n_a)} = \frac{1 - w^{(0)}}{2 n_a},   (3.31c)

for i = 1, \dots, n_a, where (\sqrt{A})_i is the ith column of any matrix B such that A = BB^T.
The augmented state vector makes it possible to propagate and estimate nonlinear
influences that the process noise and the measurement noise have on the state vector and
the measurement vector, respectively. The weight on the mean, w^{(0)}, is used for tuning,
and according to Julier and Uhlmann (2004) preferable properties for Gaussian density
functions are obtained by choosing w^{(0)} = 1 - n_a/3. After the sigma points have been
acquired, the augmented state vector can be partitioned according to

\chi_{t|t}^a = \begin{bmatrix} \chi_{t|t}^x \\ \chi_t^w \\ \chi_{t+1}^e \end{bmatrix}.   (3.31d)
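The sigma point construction (3.31) and the moment approximation it induces can be sketched as follows. This is a generic illustration (non-augmented state, Cholesky factor as the matrix square root), not code from the thesis.

```python
import numpy as np

def sigma_points(x, P, w0=0.0):
    """Standard symmetric sigma points (3.31) for mean x, covariance P.
    w0 is the weight on the mean; Julier and Uhlmann suggest
    w0 = 1 - n/3 for Gaussian densities."""
    n = len(x)
    S = np.linalg.cholesky(n / (1.0 - w0) * P)   # B with A = B B^T
    chi = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.array([w0] + [(1.0 - w0) / (2 * n)] * (2 * n))
    return np.array(chi), w

def unscented_transform(f, x, P, w0=0.0):
    """Approximate the mean and covariance of f(x) for x ~ N(x, P)."""
    chi, w = sigma_points(x, P, w0)
    y = np.array([f(c) for c in chi])
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, y))
    return y_mean, y_cov
```

For a linear function the transform is exact, which is a convenient sanity check; the interesting behavior appears for nonlinear f, as in Example 3.5 below.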
The rest of the UKF is summarized in Algorithm 3.6.

Algorithm 3.6: Unscented Kalman Filter
Consider the state space model (3.26). The unscented Kalman filter is given by the following steps, which are iterated in the filter.

Choose sigma points according to (3.31).

Prediction

\hat{x}_{t|t-1} = \sum_{i=0}^{2n_a} w^{(i)} \chi_{t|t-1}^{x,(i)}   (3.32a)
P_{t|t-1} = \sum_{i=0}^{2n_a} w^{(i)} \left( \chi_{t|t-1}^{x,(i)} - \hat{x}_{t|t-1} \right) \left( \chi_{t|t-1}^{x,(i)} - \hat{x}_{t|t-1} \right)^T   (3.32b)

where

\chi_{t|t-1}^{x,(i)} = f_{t-1}\left( \chi_{t-1|t-1}^{x,(i)}, u_{t-1}, \chi_{t-1|t-1}^{w,(i)}, \theta \right)   (3.32c)

Measurement Update

\hat{x}_{t|t} = \hat{x}_{t|t-1} + P_{xy} P_{yy}^{-1} (y_t - \hat{y}_{t|t-1})   (3.33a)
P_{t|t} = P_{t|t-1} - P_{xy} P_{yy}^{-1} P_{xy}^T   (3.33b)

where

y_{t|t-1}^{(i)} = h_t\left( \chi_{t|t-1}^{x,(i)}, u_t, \chi_{t|t-1}^{e,(i)}, \theta \right)   (3.33c)
\hat{y}_{t|t-1} = \sum_{i=0}^{2n_a} w^{(i)} y_{t|t-1}^{(i)}   (3.33d)
P_{yy} = \sum_{i=0}^{2n_a} w^{(i)} \left( y_{t|t-1}^{(i)} - \hat{y}_{t|t-1} \right) \left( y_{t|t-1}^{(i)} - \hat{y}_{t|t-1} \right)^T   (3.33e)
P_{xy} = \sum_{i=0}^{2n_a} w^{(i)} \left( \chi_{t|t-1}^{x,(i)} - \hat{x}_{t|t-1} \right) \left( y_{t|t-1}^{(i)} - \hat{y}_{t|t-1} \right)^T   (3.33f)
An advantage of the UKF, compared to the EKF, is that the second order bias correction term is implicitly incorporated in the mean estimate. Example 3.5 shows an important
problem where the second order term should not be neglected.
Example 3.5: Tracked Radar Object
The radar target tracking problem was introduced in Example 1.2 and the model was
defined in Section 2.5. The sensor model converts the Cartesian state variables to polar
measurements. This is one of the most important and commonly used transformations for
sensors measuring range and azimuth angle. Usually the azimuth angle error of this type
of sensor is significantly larger than the range error. This also holds for the sensors used
in this thesis.

Let the sensor be located at the origin and the target at (x, y) = (0, 1) in this simple,
though commonly used, example (Julier and Uhlmann, 2004). Measurements may be
simulated by adding Gaussian noise to the actual polar value (r, ψ) = (1, π/2) of the
target localization. A plot of several hundred state estimates, produced in a Monte Carlo
simulation, forms a banana shaped arc around the true value (x, y) = (0, 1), as shown in
Figure 3.1. The azimuth error causes this band of Cartesian points to be stretched around
the circumference of a circle, with the result that the mean of these points lies somewhat
closer to the origin than the point (0, 1). The figure clearly shows that the UT
estimate (×) lies close to the mean of the measurements (◦). Furthermore, it shows that
the linearized state estimate (+) produced by the EKF is biased and that the variance in the y
component is underestimated.

As a result of the linearization in the EKF, the second order terms are neglected, which
produces a bias error in the mean as shown in Example 3.5. In Julier and Uhlmann (2004)
it is shown how the UT calculates the projected mean and covariance correctly to the
second order terms.
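The effect can be reproduced numerically. The sketch below repeats the Monte Carlo experiment of Example 3.5 and compares three estimates of the mean of the converted measurement; the noise standard deviations are illustrative assumptions, chosen so that the azimuth error dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
r0, psi0 = 1.0, np.pi / 2            # true target at (x, y) = (0, 1)
sig_r, sig_psi = 0.02, 0.3           # azimuth error much larger than range error

to_cart = lambda z: np.array([z[0] * np.cos(z[1]), z[0] * np.sin(z[1])])

# Monte Carlo: convert many noisy polar measurements to Cartesian
z = np.column_stack([rng.normal(r0, sig_r, 5000),
                     rng.normal(psi0, sig_psi, 5000)])
mc_mean = np.array([z[:, 0] * np.cos(z[:, 1]),
                    z[:, 0] * np.sin(z[:, 1])]).mean(axis=1)

# Linearized (EKF-style) estimate: transform only the mean
lin_mean = to_cart([r0, psi0])

# Unscented transform with symmetric sigma points for the polar state
n, w0 = 2, 1 - 2 / 3
P = np.diag([sig_r**2, sig_psi**2])
S = np.linalg.cholesky(n / (1 - w0) * P)
chi = [np.array([r0, psi0])]
chi += [chi[0] + S[:, i] for i in range(n)] + [chi[0] - S[:, i] for i in range(n)]
w = np.array([w0] + [(1 - w0) / (2 * n)] * (2 * n))
ut_mean = w @ np.array([to_cart(c) for c in chi])
```

The Monte Carlo mean lies noticeably below (0, 1), the linearized mean sits exactly at (0, 1), and the UT mean captures most of the bias, in line with Figure 3.1.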
Figure 3.1: A Monte Carlo simulation of the problem in Example 3.5 is shown in
Figure (a). The sensor, for example a radar, is in the position (0, 0) and the true
position of the target is in the position (0, 1). The mean of the measurements is at
◦ and the uncertainty ellipse is solid. The linearized mean is at + and its ellipse is
dashed. The UT mean is at × and its uncertainty ellipse is dotted. Figure (b) is a
zoom. Note that the scaling in the x and the y axes is different.
4  The Sensor Fusion Framework
The components of the sensor fusion framework were illustrated in Figure 1.1 in the
introduction. The inner boxes, i.e. the process and measurement models, have been discussed in Chapter 2, where several examples were given. Furthermore, these models were
used in the estimation algorithms, covered in Chapter 3, to estimate parameters and state
variables. The present chapter deals with the outer box, that is the “surrounding infrastructure”.
Instead of considering the individual components, the sensor fusion framework can
also be represented as an iterative process according to Figure 1.4. In view of this interpretation, the present chapter deals with the sensor data processing, the data association
and the track management.
Practical design principles and implementation strategies, e.g. how to manage asynchronous
sensor data and out-of-sequence measurements, are not considered in this work. However,
these topics, with application to automotive systems, are treated in the recent paper by
Bengtsson and Danielsson (2008).
The chapter begins with a brief presentation of the experimental setup in Section 4.1.
Multi-target multi-sensor tracking, including data association and track management, is
treated in Section 4.2. The chapter is concluded with Section 4.3, treating road border
and free space estimation. There are many alternatives when it comes to estimating and
representing the free road area in front of the ego vehicle. Two methods are presented
in Papers C and D in Part II, and a third method is described in Section 4.3.1. The
three approaches are compared and their advantages and disadvantages are discussed in
Section 4.3.2.
4.1  Experimental Setup
During the time of this work measurements from three different vehicles were used. The
vehicles and some of their sensors are shown in Figure 4.1.
Figure 4.1: The Volvo S80 in Figure (a) is equipped with 5 radars and one camera,
as illustrated in Figure (b). The field of view is illustrated as striped zones for the
radar and a gray zone for the camera. Figure (c) shows the Volvo XC90, which is
equipped only with one long range radar and one camera, compare with Figure (d).
Finally, the Audi S3 in Figure (e) is not equipped with any exteroceptive sensors, but
with axle height sensors as illustrated in Figure (f). Note that the drawings are not
true to scale. Courtesy of Volvo Car Corporation.
All three vehicles were equipped with a standard, serial production IMU, steering wheel
angle sensor and wheel speed sensors. The Volvo XC90 was equipped with a forward
looking 77 GHz mechanically scanning frequency modulated continuous-wave (FMCW)
radar and a forward looking vision sensor (camera), measuring range and bearing angle to
the targets. Computer vision is included in the image sensor and comprises object and
lane detection, providing for example the lane curvature. In addition, the Volvo S80 was
equipped with four wide field of view 24 GHz radars at the corners of the vehicle. The
range of the forward looking radar is approximately 200 m, whereas it is approximately
50 m for the four other radars.

The Audi S3 was equipped with neither radar nor camera. In this vehicle the vertical
position of the front and the rear suspension is measured with axle height sensors, which
can be used to derive the pitch angle. A summary of the sensor equipment of the prototypes
is given in Table 4.1.

The results in this thesis are based on tests performed on public roads. Hence, no
specific test procedures were realized and no reference values are provided.
4.2  Target Tracking
Radar measurements originate from objects, referred to as targets, or from false detections, referred to as clutter. The target tracking algorithm collects the measurement data, including
one or more observations of targets, and partitions the data into sets of observations, referred to as tracks. Measurements associated with one track are assumed to be produced by
the same source.

The track management handles the tracks and ensures that only tracks with sufficient
quality are kept within the sensor fusion framework. If measurements are likely to originate from a new target, then the track management starts a new track and chooses a suitable prior p(x_0|y_0) to initiate the tracking filter. If a track is not observed for a number
of time steps it is removed.

When the tracks have been observed over a number of time steps, quantities such as position and
velocity can be estimated. Furthermore, new measurements are first considered for the
update of existing tracks, and a data association algorithm is used to determine which
measurement corresponds to which track. This is the topic of Section 4.2.1. If multiple
measurements are received from the same target, i.e. when the size of the target is large
compared to the sensor resolution, it can be modeled and tracked as a so-called extended
target. Different approaches to take care of the measurements and to appropriately model
the target are discussed in Section 4.2.2.

Table 4.1: Overview of the sensor equipment in the prototypes.

                                                    S80   XC90   S3
  proprioceptive   IMU                               X     X     X
  sensors          steering wheel angle sensor       X     X     X
                   wheel speed sensor                X     X     X
                   axle height sensors                           X
  exteroceptive    forward looking radar             X     X
  sensors          forward looking camera            X     X
                   rear radar                        X
                   side radar                        X
4.2.1  Data Association
This section would not be needed if only the state variables of the ego vehicle, introduced
in Example 1.1, were estimated, because in that case it is obvious how the measurements
are associated with the state variables. In the object tracking problem, introduced in Example 1.2, it is no longer obvious which measurement should update which track. There
are many methods available for finding likely measurement-to-track associations, i.e. for
solving the data association problem, see e.g., Bar-Shalom and Fortmann (1988), Blackman and Popoli (1999). However, the task is seldom easy, due to noisy measurements,
multiple reflections on each target and erroneous detections caused by spurious reflections.

The first step in the data association process is called gating. Gates are constructed
around the predicted measurement ŷ_{i,t|t-1} of each track i to eliminate unlikely pairings
and thereby limit the number of measurement-to-track associations. This reduces the
number of measurements that are examined by the data association algorithm. The residual between a measurement y_{j,t} and a predicted measurement ŷ_{i,t|t-1} is

\tilde{y}_{i,j,t|t-1} = y_{j,t} - \hat{y}_{i,t|t-1},   (4.1)

and it is assumed Gaussian distributed according to

\tilde{y}_{i,j,t|t-1} \sim N(0, S_{i,t}),   (4.2)

where S_{i,t} is the innovation covariance. The gate G_i is defined as the region

G_i \triangleq \left\{ y \mid (y - \hat{y}_{i,t|t-1})^T S_{i,t}^{-1} (y - \hat{y}_{i,t|t-1}) \le U_G \right\},   (4.3)

where U_G is the gating threshold. The measurements y_{j,t} ∈ G_i are considered as candidates for updating the track x_{i,t} in the data association algorithm.
Now different conflicts occur: several measurements may fall within the same gate, and a
measurement may fall within more than one gate. There exist many techniques to resolve
these conflicts, which are considered to be the main part of the data association process.
The simplest association algorithm is called nearest neighbor (NN). This approach
searches for a unique pairing, i.e. one track x_{i,t} is only updated by one observation y_{j,t}.
There are several possibilities to decide which measurement actually is the nearest.
Common approaches are to choose the measurement with the smallest residual
\tilde{y}_{i,j,t|t-1} or the smallest statistical distance

d^2(\tilde{y}_{i,j,t|t-1}) = \tilde{y}_{i,j,t|t-1}^T S_{i,t}^{-1} \tilde{y}_{i,j,t|t-1},   (4.4)

which is also known as the Mahalanobis distance, see e.g., Bar-Shalom et al. (2001).
Another method is to choose the measurement with the largest likelihood according to

\ell(y_{j,t}, \hat{y}_{i,t|t-1}) = N(y_{j,t}; \hat{y}_{i,t|t-1}, S_{i,t}).   (4.5)
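Gating (4.3) combined with nearest neighbor association based on the Mahalanobis distance (4.4) can be sketched as follows; the greedy resolution of conflicts and the chi-square gate value are illustrative choices made here, not prescriptions from the thesis.

```python
import numpy as np

def associate_nn(y_preds, S_list, measurements, gate=9.21):
    """Greedy nearest neighbor association with ellipsoidal gating.

    y_preds      : list of predicted measurements, one per track
    S_list       : corresponding innovation covariances S_i
    measurements : array of received measurements y_j
    gate         : threshold U_G in (4.3); 9.21 is the 99% chi-square
                   quantile for 2 degrees of freedom
    Returns {track index: measurement index} for the accepted pairs.
    """
    pairs = []
    for i, (y_hat, S) in enumerate(zip(y_preds, S_list)):
        S_inv = np.linalg.inv(S)
        for j, y in enumerate(measurements):
            r = y - y_hat                  # residual (4.1)
            d2 = r @ S_inv @ r             # Mahalanobis distance (4.4)
            if d2 <= gate:                 # gating test (4.3)
                pairs.append((d2, i, j))
    assoc, used_tracks, used_meas = {}, set(), set()
    for d2, i, j in sorted(pairs):         # smallest distance first
        if i not in used_tracks and j not in used_meas:
            assoc[i] = j                   # unique pairing per track
            used_tracks.add(i)
            used_meas.add(j)
    return assoc
```

Measurements outside every gate, such as clutter far from all tracks, are simply never paired.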
Besides the two books mentioned above a nice overview concerning data association
and track handling is given in the recent work by Svensson (2008).
4.2.2  Extended Object Tracking
In classical target tracking problems the objects are modeled as point sources and it is
assumed that only one measurement is received from each target at each time step. In
automotive applications, the targets are at a close distance and of such a large size that
individual features can be resolved by the sensor. A target is denoted extended whenever
the target extent is larger than the sensor resolution, i.e. when it is large enough to occupy
multiple resolution cells of the sensor. Put in other words, whether a target should be classified
as extended does not depend on its physical size alone, but rather on its physical size
relative to the sensor resolution.
The methods used to track extended objects are very similar to the ones used for
tracking a group of targets moving in formation. Extended object tracking and group
tracking are thoroughly described in e.g., Ristic et al. (2004). The bibliography Waxman
and Drummond (2004) provides a comprehensive overview of existing literature in the
area of group and cluster tracking. There exist several different approaches to represent, i.e.
to model, the extended target, of which four are described in this section.
Point Features
The first and most traditional method is to model the target as a set of point features in
a target reference frame, each of which may contribute at most one sensor measurement.
The exact location of a feature in the target reference frame is often assumed uncertain.
However, if the appearance of the target is known and especially if typical radar reflection
points are known, then the location of the features in the target reference frame can be
assumed known. The motion of an extended target is modeled through the process model
in terms of the translation and rotation of the target reference frame relative to a world
coordinate frame, see e.g., Dezert (1998).
For an application in two dimensions the point features are defined as

\mathcal{P}^T = \left\{ p_i^T \right\}_{i=1}^{N_p} \quad \text{with} \quad p_i^T = \begin{bmatrix} x_{p_i}^T & y_{p_i}^T \end{bmatrix}^T   (4.6)

in the target's coordinate frame T. The position d_{TW}^W = [x_{TW}^W \; y_{TW}^W]^T of the target's
origin and the orientation ψ_T of the target's frame are tracked relative to the world coordinate frame. The state vector is defined as

x = \begin{bmatrix} d_{TW}^W & \psi_T & \dot{d}_{TW}^W & \dot{\psi}_T & \mathcal{P}^W \end{bmatrix},   (4.7)

where the point features \mathcal{P}^W = \left\{ p_i^W \right\}_{i=1}^{N_p} are expressed with respect to the world coordinate frame W. The point features in the target's coordinate frame can be mapped into
points in the world frame, as they are defined in the state vector, through the transform

p_i^W = R^{WT} p_i^T + d_{TW}^W,   (4.8)

where the rotation matrix R^{WT} was defined previously in (2.19).
The process model for the frame can for example be a constant velocity model, where
the velocities are modeled as a first order Gaussian random walk. The uncertainty about
the exact position of the point feature is modeled according to

p(\mathcal{P}_t^W | d_{TW}^W, \psi_T) = \prod_{i=1}^{N_p} N\left( p_{i,t}^W \,\middle|\, R^{WT}(\psi_T) p_i^T + d_{TW}^W, \; w_p I_2 \right),   (4.9)

which means that the uncertainty is assumed isotropic around the mean location of the
point and with known variance w_p.
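The frame transform (4.8) and the isotropic feature likelihood (4.9) can be sketched as follows; the vehicle dimensions, pose and variance value are illustrative assumptions.

```python
import numpy as np

def rot(psi):
    """Planar rotation matrix R^{WT}(psi) from the target to the world frame."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s], [s, c]])

def to_world(p_T, d_TW, psi_T):
    """Map point features from the target frame into the world frame, (4.8)."""
    return (rot(psi_T) @ p_T.T).T + d_TW

def log_feature_likelihood(p_W, p_T, d_TW, psi_T, w_p=0.04):
    """Log of the isotropic Gaussian feature likelihood (4.9)."""
    mean = to_world(p_T, d_TW, psi_T)
    diff = p_W - mean
    n = p_T.shape[0]
    return -0.5 * np.sum(diff**2) / w_p - n * np.log(2 * np.pi * w_p)
```

Evaluating the likelihood over candidate orientations ψ_T illustrates how such a model carries information about the target's heading.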
At each time step a set of N_y measurements Y = \{y_i\}_{i=1}^{N_y} is received and has to be
associated to the states. Not all measurements arise from a point feature; some are due
to false detections (clutter). The association hypotheses are derived through some data
association algorithm. In Vermaak et al. (2005) a method is proposed where the association hypotheses are included in the state vector and the output of the tracking filter
is a joint posterior density function of the state vector and the association hypotheses.
Furthermore, a multi-hypothesis likelihood is obtained by marginalizing over all the association hypotheses. An alternative solution is also proposed using a particle filter, where
the unknown hypotheses are sampled from a well designed proposal density function.

An automotive radar sensor model developed for simulation purposes is proposed in
Bühren and Yang (2006), where it is assumed that radar sensors often receive measurements from specific reflection centers on a vehicle. These reflection centers can be tracked
in a filter and valuable information regarding the vehicle's orientation can be extracted, as
shown in Gunnarsson et al. (2007). A difficulty in solving the data association problem
is the large number of association hypotheses available. To reduce the complexity, Gunnarsson et al. (2007) propose an approach where detections are associated with reflector
groups. The spatial Poisson distribution, discussed in the subsequent section, is considered to be inappropriate, since the number of vehicle detections is assumed essentially
known and not adequately modeled by a Poisson process.
Spatial Distribution
Instead of modeling the target as a number of point features, which are assumed to be explicit measurement sources, the target is represented by a spatial probability distribution.
It is more likely that a measurement comes from a region of high spatial density than from
a sparse region. In Gilholm and Salmond (2005), Gilholm et al. (2005) it is assumed that
the numbers of received target and clutter measurements are Poisson distributed, hence
several measurements may originate from the same target. Each target related measurement is an independent sample from the spatial distribution. The spatial model could be a
bounded distribution, such as a uniform pdf, or an unbounded distribution, such as a Gaussian. The Poisson assumption allows the problem, or more specifically the evaluation of
the likelihood, to be solved without association hypotheses. The spatial distribution is
preferable where the point source models are poor representations of reality, that is, in
cases where the measurement generation is diffuse.

In Gilholm and Salmond (2005) two simple examples are given. In one, the principal
axis of the extended target is aligned with the velocity vector, i.e. the target is represented
by a one dimensional uniform stick model. In the other, a Gaussian mixture
model is assumed for the target. A Kalman filter implementation with explicit constructions of assignment hypotheses is derived from the likelihood in Gilholm and Salmond
(2005), whereas in Gilholm et al. (2005) a particle filter is applied directly, given the likelihood, which is represented by the Poisson spatial model of the stick. Hence, the need to
construct explicit measurement-target assignment hypotheses is avoided in Gilholm et al.
(2005).

Boers et al. (2006) present a similar approach, but since raw data is considered, no
data association hypotheses are needed. The method of using raw data, i.e. considering the
measurements without applying a threshold, is referred to as track before detect. A one
dimensional stick target is assumed also by Boers et al. (2006), but unlike Gilholm and
Salmond (2005), the target extent is assumed unknown. The state vector is given by the
stick's center position and velocity as well as the stick's extension according to

x = \begin{bmatrix} x & y & \dot{x} & \dot{y} & L \end{bmatrix}^T.   (4.10)

The process model is a simple constant velocity model and the length L is modeled as a
random walk. The likelihood function is given by the probability distribution

p(y|x) = \int p(y|\tilde{x}) \, p(\tilde{x}|x) \, d\tilde{x},   (4.11)

where the spatial extension is modeled by the pdf p(x̃|x) and x̃ is assumed to be a point
source from an extended target with center given by the state vector x. Hence, a measurement is received from a source x̃ with likelihood p(y|x̃).
Elliptical Shaped Target
In many papers dealing with the shape of a target it is assumed that the sensor, e.g. radar,
is also able to measure one or more dimensions of the target’s extent. A high-resolution
radar sensor may provide measurements of a targets down-range extent, i.e. the extension
of the objects along the line-of-sight. The information of the target’s extent is incorporated
in the tracking filter and aids the tracking process to maintain track on the target when it
is close to other objects.
An elliptical target model, to represent an extended target or a group of targets, is
proposed in Drummond et al. (1990). The idea was improved by Salmond and Parr (2003),
where the sensor not only provides measurements of point observations, but rather range,
bearing and down-range extent. The prime motivation of the study is to aid track retention
for closely spaced moving targets. Furthermore, the state vector includes the position,
velocity and the size of the ellipse. An EKF is used in Salmond and Parr (2003), but
it is concluded that the filter may diverge under certain conditions, since the relation
between the down-range extent measurement of the target and the position and velocity
coordinates in the state vector is highly nonlinear. The same problem is studied in Ristic
and Salmond (2004), where a UKF is implemented and tested. Even though the UKF
shows better performance it is concluded that neither the EKF nor the UKF are suitable
for this problem. The problem is further studied by Angelova and Mihaylova (2008),
where other filter techniques, based on Monte Carlo algorithms, are proposed. In this
paper the size of the ellipse takes values from a set of standard values, i.e. the algorithm
estimates the type of object from a list, under the assumption that typical target sizes are
known.
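The idea of picking the target size from a set of standard values can be sketched as follows. The Gaussian noise model on the extent measurement, the noise level sigma and the particular size values are illustrative assumptions, not the Monte Carlo scheme of Angelova and Mihaylova (2008):

```python
import math

def classify_size(measured_extent, size_set, sigma=0.5):
    """Select the most likely target size from a discrete set of
    standard values, given a noisy down-range extent measurement.
    Gaussian measurement noise with standard deviation sigma is an
    illustrative assumption."""
    def likelihood(L):
        return math.exp(-(measured_extent - L) ** 2 / (2 * sigma ** 2))
    return max(size_set, key=likelihood)

# Illustrative standard lengths (m): motorcycle, car, truck.
sizes = [2.0, 4.5, 12.0]
best = classify_size(5.1, sizes)
```

A measured extent of 5.1 m is closest to the 4.5 m entry, so that standard size is selected.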
A group of objects moving collectively may also be modeled as an extended target.
The ellipse model is used to model a formation of aircraft in Koch (2008).
Line Shaped Target
In Paper D the road borders are modeled as extended objects in the form of lines. A line is expressed as a third-order polynomial in its coordinate frame. Since the road borders are assumed to be stationary, the frames are not included in the state vector. Furthermore, stationary points such as delineators and lamp posts are also modeled in Paper D. The nearest
neighbor algorithm is used to associate measurements from stationary observations Sm
to the targets. Here it is assumed that an extended line target Lj can give rise to several
measurements, but a point target Pi can only contribute to one measurement. Since the
Since the likelihood of a line ℓ_{Sm,Lj} is a one-dimensional spatial density function, whereas the likelihood of a point ℓ_{Sm,Pi} is given by a two-dimensional density function, a likelihood ratio test is applied to resolve the measurement-to-track association. The likelihood ratio for a measurement y_{Sm} is given by
Λ(y_{Sm}) ≜ ℓ_{Sm,Pi} / ℓ_{Sm,Lj}.    (4.12)
The corresponding likelihood ratio test is
Λ(y_{Sm}) ≷_{H_1}^{H_0} η,    (4.13)
where H0 and H1 correspond to the hypotheses that the measurement y_{Sm} is associated with the point Pi and with the line Lj, respectively. The threshold is selected as η < 1, since the density function of a point is two-dimensional whereas the density function of a line is one-dimensional. More theory on likelihood ratio tests is given by, e.g., van Trees (1968).
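As an illustration, the association test (4.12)–(4.13) can be sketched as follows, assuming for simplicity that the point likelihood is an isotropic 2-D Gaussian and the line likelihood a 1-D Gaussian in the lateral distance to the line. These densities and the threshold value are illustrative assumptions:

```python
import math

def gauss1(d, sigma):
    """1-D Gaussian density of the lateral distance d to the line."""
    return math.exp(-d ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def gauss2(dx, dy, sigma):
    """Isotropic 2-D Gaussian density of the offset to the point."""
    return math.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

def associate(ell_point, ell_line, eta=0.5):
    """Likelihood ratio test (4.12)-(4.13): decide H0 (point) if
    Lambda >= eta, else H1 (line). Choosing eta < 1 compensates for
    comparing a 2-D density against a 1-D density."""
    return "point" if ell_point / ell_line >= eta else "line"

# A measurement 0.1 m from a point but 2 m from the nearest line is
# associated with the point; one 0.3 m from the point and only 0.5 m
# from the line goes to the line.
a = associate(gauss2(0.1, 0.0, 1.0), gauss1(2.0, 1.0))
b = associate(gauss2(0.3, 0.0, 1.0), gauss1(0.5, 1.0))
```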
4.3 Estimating the Free Space using Radar
In this section three conceptually different methods to estimate stationary objects along
the road, or more specifically to estimate the road borders, are introduced and compared.
The first method, considered in Section 4.3.1, is occupancy grid mapping, which discretizes the map surrounding the ego vehicle and estimates the probability of occupancy for each grid cell. The second method applies a constrained quadratic program in order to
estimate the road borders and is described in detail in Paper C. The problem is stated as
a constrained curve fitting problem. The third method, described in Paper D and briefly
introduced in Section 4.2.2, associates the radar measurements to extended stationary
objects and tracks them as extended targets. This section is concluded in Section 4.3.2 by
comparing the three approaches.
4.3.1 Occupancy Grid Map
The objective is to compute a map of the environment surrounding the ego vehicle using as few variables as possible. A map is defined over a continuous space, but it can be discretized with, e.g., a grid approximation. The size of the map can be reduced to a
certain area surrounding the ego vehicle. In order to keep a constant map size while the
vehicle is moving, some parts of the map are thrown away and new parts are initiated.
Occupancy grid mapping (OGM) is one method for tackling the problem of generating
consistent maps from noisy and uncertain data under the assumption that the ego vehicle
pose, i.e. position and heading, is known. These maps are very popular in the robotics
community, especially for all sorts of autonomous vehicles equipped with laser scanners.
Indeed several of the DARPA urban challenge vehicles (Buehler et al., 2008a,b,c) used
OGMs. This is because they are easy to acquire, and they capture important information
for navigation. The OGM was introduced by Elfes (1987) and an early introduction is
given by Moravec (1988). To the best of the author’s knowledge, Borenstein and Koren
(1991) were the first to utilize OGM for collision avoidance. Examples of OGM in automotive applications are given in Vu et al. (2007) and Weiss et al. (2007). A solid treatment
can be found in the recent textbook by Thrun et al. (2005).
This section begins with a brief introduction to occupancy grid maps, according to
Thrun et al. (2005). Using this theory with a high-resolution sensor usually gives a good-looking bird’s-eye view map. However, since a standard automotive radar is used, producing only a few range and bearing measurements at every time sample, some modifications are introduced, as described in the following sections.
Background
The planar map m is defined in the world coordinate frame W and is represented by a
matrix. The goal of any occupancy grid mapping algorithm is to calculate the filtering
probability density function of the map
p(m|y1:t , xE,1:t ),
(4.14)
where m denotes the map, y1:t , {y1 , . . . , yt } denotes the set of all measurements up
to time t, and xE,1:t denotes the path of the ego vehicle defined through the discrete-time
sequence of all previous positions. An occupancy grid map is partitioned into finitely
many grid cells
m = {m_i}_{i=1}^{N_m}.    (4.15)
The probability of a cell being occupied, p(mi), is specified by a number ranging from 0 for free to 1 for occupied. The notation p(mi) will be used to refer to the probability that a grid cell is occupied. A disadvantage of this design is that it cannot represent dependencies between neighboring cells.
The occupancy grid map was originally developed primarily for use with measurements from a laser scanner. A laser is often mounted on a rotating shaft and generates a range measurement for every angular step of the mechanical shaft, i.e. for every bearing angle. This means that the continuously rotating shaft produces many range and bearing
measurements during every cycle. The OGM algorithms transform the polar coordinates
of the measurements into Cartesian coordinates in a fixed world or map frame. After
completing one mechanical measurement cycle the sensor provides the measurements for
use.
The algorithm loops through all cells and increases the occupancy probability p(mi )
if the cell was occupied according to the measurement yt . Otherwise the occupancy value
either remains unchanged or is decreased, depending on whether the range to the cell is greater or less than the measured range. The latter implies that the laser beam passed this cell without observing any obstacles. If the measured range is large or the cell size is small, it might be necessary to consider the angular spread of the laser beam and increase or
it might be necessary to consider the angular spread of the laser beam and increase or
decrease the occupancy probability of several cells with respect to the beam width.
The map is assumed not to change during sensing. Problems of this kind, where a state does not change over time, are solved with the binary Bayes filter, of which the OGM is one example. In this case the state can either be free, mi = 0, or occupied, mi = 1. A
standard technique to avoid numerical instabilities for probabilities close to 0 and to avoid
truncation problems close to 0 and 1 is to use the log odds representation of occupancy
ℓ_{i,t} = log [ p(m_i | y_{1:t}, x_{E,1:t}) / (1 − p(m_i | y_{1:t}, x_{E,1:t})) ],    (4.16)
or put in words, the odds of a state is defined as the ratio of the probability of this event
p(mi |y1:t , xE,1:t ) divided by the probability of its complement 1 − p(mi |y1:t , xE,1:t ).
The probabilities are easily recovered using
p(m_i | y_{1:t}, x_{E,1:t}) = 1 − 1 / (1 + exp ℓ_{i,t}).    (4.17)
Note that the filter uses the inverse measurement model p(m|y, x). Using Bayes’ rule it
can be shown that the binary Bayes filter in log odds form is
ℓ_{i,t} = ℓ_{i,t−1} + log [ p(m_i | y_t, x_{E,t}) / (1 − p(m_i | y_t, x_{E,t})) ] − log [ p(m_i) / (1 − p(m_i)) ],    (4.18)
where p(mi ) represents the prior probability. The log odds ratio of the prior before processing any measurements is defined as
ℓ_{i,0} = log [ p(m_i) / (1 − p(m_i)) ].    (4.19)
Typically p(mi) = 0.5 is assumed, since before any measurements have been received nothing is known about the surrounding environment. This value yields ℓ_{i,0} = 0.
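The log odds recursion (4.16)–(4.19) can be sketched directly; the inverse sensor model value 0.7 used below is an illustrative assumption:

```python
import math

def logit(p):
    """Log odds of a probability, cf. (4.16)."""
    return math.log(p / (1.0 - p))

def update_cell(ell, p_occ_given_y, p_prior=0.5):
    """One binary Bayes filter step in log odds form, cf. (4.18):
    add the log odds of the inverse sensor model p(mi | yt, xE,t)
    and subtract the log odds of the prior p(mi)."""
    return ell + logit(p_occ_given_y) - logit(p_prior)

def probability(ell):
    """Recover the occupancy probability from the log odds, cf. (4.17)."""
    return 1.0 - 1.0 / (1.0 + math.exp(ell))

# Prior (4.19): p(mi) = 0.5 gives ell_0 = 0, i.e. "unknown".
ell = logit(0.5)
ell = update_cell(ell, 0.7)  # cell observed occupied once
ell = update_cell(ell, 0.7)  # and once more
```

Two consecutive "occupied" observations with p(mi | yt, xE,t) = 0.7 square the odds from 7/3 to 49/9, so the occupancy probability rises to 49/58 ≈ 0.845; working in log odds keeps this numerically stable near 0 and 1.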
OGM with Radar Measurements
The radar system provides range and bearing measurements for observed targets at every
measurement cycle. The main difference to a laser is that there is not one range measurement for every angular position of the moving sensor. The number of observations
depends on the environment. In general there are far fewer observations compared to a
laser sensor. There is also a limit on the number of objects transmitted by the radar equipment on the CAN-bus. Moving objects, which are distinguished by measurements of the
Doppler shift, are prioritized and more likely to be transmitted than stationary objects.
Furthermore, it is assumed that the opening angle of the radar beam is small compared
to the grid cell size. With these assumptions, the OGM algorithm was modified to loop over the
measurements instead of the cells, in order to decrease the computational load. A radar’s
angular uncertainty is usually larger than its range uncertainty. When transforming the polar coordinates of the radar measurements into the Cartesian coordinates of the map, the
uncertainties can either be transformed in the same manner or it can simply be assumed
that the uncertainty increases with the range.
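A measurement-centric update of this kind could be sketched as follows; the sparse dictionary grid, the log odds increments and the zero-heading simplification are illustrative assumptions, not the implementation used in the thesis:

```python
import math

def radar_ogm_update(logodds, detections, ego_xy, cell=1.0,
                     l_occ=0.85, l_free=-0.4):
    """Measurement-centric OGM update: loop over the few radar
    detections instead of over all grid cells. `logodds` is a sparse
    grid, a dict mapping (ix, iy) cell indices to log odds values;
    `detections` are (range, bearing) pairs in the ego frame (ego
    heading assumed zero for simplicity)."""
    x0, y0 = ego_xy
    for r, phi in detections:
        # Cells along the beam, short of the detection, are observed free.
        for k in range(1, int(r / cell)):
            xk = x0 + k * cell * math.cos(phi)
            yk = y0 + k * cell * math.sin(phi)
            idx = (int(round(xk / cell)), int(round(yk / cell)))
            logodds[idx] = logodds.get(idx, 0.0) + l_free
        # The cell containing the detection is observed occupied.
        xd = x0 + r * math.cos(phi)
        yd = y0 + r * math.sin(phi)
        idx = (int(round(xd / cell)), int(round(yd / cell)))
        logodds[idx] = logodds.get(idx, 0.0) + l_occ
    return logodds

# One detection 3 m straight ahead: cells (1,0) and (2,0) become more
# likely free, cell (3,0) more likely occupied.
m = radar_ogm_update({}, [(3.0, 0.0)], (0.0, 0.0))
```

The dictionary representation also hints at the sparsity argument made below: only cells actually touched by a measurement are stored, rather than a full matrix dominated by the prior value.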
Experiments and Results
Figure 4.2a shows an OGM example of a highway situation. The ego vehicle’s camera
view is shown in Figure 4.2c. The size of the OGM is 401 × 401 m, with the ego vehicle
in the middle cell. Each cell represents a 1×1 m square. The gray-level in the occupancy
map indicates the probability of occupancy p(m|y1:t , xE,1:t ), the darker the grid cell, the
more likely it is to be occupied. The map shows all major structural elements as they are
visible at the height of the radar. This is a problem if the road is undulating, and especially
if the radar observes obstacles over and behind the guardrail. In this case the occupancy
probability of a cell might be decreased even though it was previously believed to be
occupied, since the cell is between the ego vehicle and the new observation. The impact
of this problem can be reduced by tuning the filter well.
It is clearly visible in Figure 4.2a that the left border is sharper than the right. The only
obstacle on the left side is the guardrail, which gives rise to the sharp edge, whereas on the
right side there are several obstacles behind the guardrail, which also cause reflections,
e.g. noise barrier and vegetation. A closer look in Figure 4.2b reveals that there is no black
line of occupied cells representing the guardrail as expected. Instead there is a region with
mixed occupancy probability, and after about 5 m a gray region of cells still at their initial value tells us that nothing is known about them.
In summary, the OGM generates a good-looking overview of the traffic situation, but not much information for a collision avoidance system. Given the sparse radar measurements, it is inefficient to represent the occupancy information as a very large square matrix with most of its elements equal to 0.5, indicating that nothing is known about these cells.
4.3.2 Comparison of Free Space Estimation Approaches
The presented methods, i.e. the OGM in the previous section, the constrained curve fitting problem in Paper C and the extended stationary object tracks in Paper D, do not depend on the fact that only one radar sensor is used. In fact, it is straightforward to add information from additional sensors. In other words, the approach introduced here fits
well within a future sensor fusion framework where additional sensors, such as cameras
and additional radars, are incorporated.
The properties of the three approaches are compared and summarized below.
The results of the presented methods are better than expected, given the fact that only measurements delivered by standard automotive sensors are used. The main drawback of the presented methods is that the result can be unstable or erroneous if there are too few measurement points or if the measurements stem from objects other than the guardrail. However, the problem of having too few measurements, or having measurements from the wrong objects, is very hard to solve with any algorithm.
The representation form of the OGM is a square matrix with the log odds of each grid
cell. Since most of the environment is unknown many of the matrix elements are
equal to the initial log odds. In this example, a 401 × 401 matrix is used, implying
that the environment is described by 160801 parameters. The number of parameters
Figure 4.2: The filled circle at position (201, 201) in the occupancy grid map in
Figure (a) is the ego vehicle, the + are the radar observations obtained at this time
sample, the black squares are the two leading vehicles that are currently tracked.
Figure (b) shows a zoom of the OGM in front of the ego vehicle. The gray-level in
the figure indicates the probability of occupancy, the darker the grid cell, the more
likely it is to be occupied. The shape of the road is given as solid and dashed lines,
calculated as described in Lundquist and Schön (2008b). The camera view from the
ego vehicle is shown in Figure (c), the concrete walls, the guardrail and the pillar of
the bridge are interesting landmarks. Furthermore, the two tracked leading vehicles
are clearly visible in the right lane.
used for the constrained curve fitting is 8 and 12 for the linear and nonlinear model,
respectively. The start and endpoint of valid segments can be limited by the user,
even though no vector with more than 100 elements was observed during the tests.
A line modeling the extended objects is represented by 5 parameters plus one coordinate frame, which is defined by its position and heading, i.e. 3 parameters. The author observed at most 20 lines, adding up to 160 parameters. However, it is suggested that the user limit the number of lines to 10, adding up to 80 parameters.
The computational time does of course depend on the hardware on which the algorithm is implemented, but it is still worth comparing the proposed algorithms. The average computational times over a sequence of 1796 samples are given for the presented methods in Table 4.2. Note that the times given in this table cover the complete algorithms, including initialization and coordinate frame transformations, whereas the times given in Table 1 in Paper C only compare the optimization algorithms. All of the algorithms can be made more efficient by fine-tuning the code. However, the potential of the extended object tracking is assumed to be highest, since its computational time implicitly depends on the number of tracked objects, which can be reduced by merging tracks and associating measurements to fewer tracks.
Table 4.2: Average computational time for one sample.

    Method                                     Time [ms]
    Occupancy Grid Mapping, Section 4.3.1          14.9
    Linear Predictor, Paper C                     109.5
    Nonlinear Predictor, Paper C                  137.2
    Extended Object Tracking, Paper D              28.6
The flexibility of the OGM and of the extended object tracking must be said to be higher. The OGM is not tied to any particular shape of the road border or the stationary objects, and the extended objects can be modeled with various types of shapes. The constrained curve fitting problem is the least flexible in that it only models the left and right border lines.
5 Concluding Remarks
In the first part an overview of the basics behind the research reported in this thesis has
been presented. This part also aims at explaining how the papers in Part II relate to each
other and to the existing theory. A conclusion of the results is given in Section 5.1 and
ideas for future work are discussed in Section 5.2.
5.1 Conclusion
The work presented in this thesis has dealt with the problem of estimating the motion of
a vehicle and representing and estimating its surroundings, i.e. improving the situation
awareness. The surroundings consist of other vehicles and stationary objects, as well
as the shape and the geometry of the road. Here, a major part of the work is not only
the estimation problem itself, but also the way in which to represent the environment,
i.e. the mapping problem. Paper A is concerned with estimating the lane geometry, i.e.
the lane markings are described by a polynomial and the coefficients are the states to
estimate. This problem can be solved with a camera and computer vision, but by fusing
the data obtained from the image processing with information about the ego vehicle’s
motion and the other vehicles’ movement on the road, the road geometry estimate can be
improved. The other vehicles are tracked primarily by using measurements from a radar.
The motion of the ego vehicle is estimated by combining measurements from the vehicle’s
IMU, steering wheel angle sensor and wheel velocity sensors in a model based filter. The
model is in this case the so-called single-track model, or bicycle model, in which the tire-road interaction plays a major role. This interaction can be considered as a constant parameter, which is estimated off-line in advance, or the parameter can be considered
time varying and be estimated on-line while driving. This is the topic of Paper B.
The surroundings of a vehicle are more complicated than the shape of the lane markings. In this thesis three conceptually different methods to estimate the road borders and the stationary objects along the road are studied and compared. The first method, considered in Section 4.3.1, is occupancy grid mapping, which discretizes the surroundings into
a number of grid cells. The probability of occupancy is estimated for each grid cell using
radar data regarding the position of the stationary objects. The second method, described
in detail in Paper C, consists in a constrained quadratic program in order to estimate the
road borders. The problem is formulated as a constrained curve fitting problem, and the
road borders are represented as two polynomials. The third method, described in Paper D, associates the radar measurements to extended stationary objects in the form of
curved lines and tracks these lines as extended targets.
The approaches have been evaluated on real data from both freeways and rural roads
in Sweden. The results are encouraging and surprisingly good at times, not perfect but
much more informative than the raw measurements. Problems typically occur when there are too few measurements or when the measurements stem from objects other than the roadside objects.
5.2 Future Research
The radar and camera data used in this thesis is generally preprocessed. Nevertheless,
the preprocessing is not covered in this thesis. Specifically, more effort can be spent on
the image processing to increase the information content. For example within the area
of odometry the estimate could be more accurate if the camera information is used in
addition to the measurements in Example 1.1. This is called visual odometry and it would
probably improve the estimate of the body side-slip angles, especially during extreme maneuvers where the tire-road interaction is strongly nonlinear. Since only one camera is
used, the inverse depth parametrization introduced by Civera et al. (2008) is an interesting
approach, see e.g., Schön and Roll (2009) for an automotive example on visual odometry.
To verify the state estimates more accurate reference values are needed as well.
The stationary objects along the road are treated as extended targets in this thesis. This
approach requires comprehensive data association. The probability hypothesis density
(PHD) filter, based on a finite random set description of the targets, is a recently developed approach to propagate the intensity of these sets of states in time, see e.g., Mahler (2003), Vo and Ma (2006), Erdinc et al. (2006). It is an elegant method that avoids the combinatorial problem that arises from data association in a multi-sensor multi-target framework.
A first example of an intensity map describing the density of stationary targets along the
road is shown in Figure 5.1. In this thesis only radar data has been used to estimate the position of stationary objects. However, the camera captures information about the objects
along the road and this source of information should be better used.
Currently there is a lot of activity within the computer vision community to enable
non-planar road models, making use of parametric models similar to the ones used in this thesis. A very interesting avenue for future work is to combine the ideas presented in
this thesis with information from a camera about the height differences on the road side
within a sensor fusion framework. This would probably improve the estimates, especially
in situations when there are too few radar measurements available.
Parameter and model uncertainty are in general not treated in this thesis. One important aspect is how to model the process noise, i.e. how it is best included in the
process model. In all applications discussed in this thesis the process noise is assumed
Figure 5.1: Illustration of stationary target estimation. The intensity map of the
PHD filter is illustrated using a gray scale, the darker the area, the higher the density
of stationary targets. Here, only measurements from the radar are used. The photo
shows the driver’s view.
additive. Can the state estimate computed in a filter be improved by modeling the process noise differently? Another aspect is how to treat bias and variance of the parameters θ. Bias and variance of θ propagate to a bias and a variance increase of the states x. How can this impact on x be reduced? These two aspects are interesting, and many results would improve if they were thoroughly considered.
Bibliography
Abdulle, A. and Wanner, G. (2002). 200 years of least squares method. Elemente der
Mathematik, 57:45–60.
Adams, M., Wijesoma, W., and Shacklock, A. (2007). Autonomous navigation: Achievements in complex environments. IEEE Instrumentation & Measurement Magazine,
10(3):15–21.
Ahrholdt, M., Bengtsson, F., Danielsson, L., and Lundquist, C. (2009). SEFS – results on
sensor data fusion system development. In 16th World Congress of ITS, Stockholm,
Sweden.
Anderson, B. D. O. and Moore, J. B. (1979). Optimal Filtering. Information and system
science series. Prentice Hall, Englewood Cliffs, NJ, USA.
Angelova, D. and Mihaylova, L. (2008). Extended object tracking using Monte Carlo
methods. IEEE Transactions on Signal Processing, 56(2):825–832.
Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T. (2002). A tutorial on particle
filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on
Signal Processing, 50(2):174–188.
Bailey, T. and Durrant-Whyte, H. (2006). Simultaneous localization and mapping
(SLAM): Part II. IEEE Robotics & Automation Magazine, 13(3):108–117.
Bar-Shalom, Y. and Fortmann, T. E. (1988). Tracking and Data Association. Mathematics
in science and engineering. Academic Press, Orlando, FL, USA.
Bar-Shalom, Y., Rong Li, X., and Kirubarajan, T. (2001). Estimation with Applications
to Tracking and Navigation. John Wiley & Sons, New York.
Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. The
Philosophical Transactions, 53:370–418.
Behringer, R. (1997). Visuelle Erkennung und Interpretation des Fahrspurverlaufes durch
Rechnersehen für ein autonomes Straßenfahrzeug, volume 310 of Fortschrittsberichte
VDI, Reihe 12. VDI Verlag, Düsseldorf, Germany. Also as: PhD Thesis, Universität
der Bundeswehr, 1996.
Bengtsson, F. (2008). Models for Tracking in automotive safety systems. Licentiate
Thesis No R012/2008, Department of Signals and Systems, Chalmers University of
Technology.
Bengtsson, F. and Danielsson, L. (2008). Designing a real time sensor data fusion system
with application to automotive safety. In 15th World Congress of ITS, New York, USA.
Bühren, M. and Yang, B. (2006). Simulation of automotive radar target lists using a novel
approach of object representation. In Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 314–319.
Blackman, S. S. and Popoli, R. (1999). Design and Analysis of Modern Tracking Systems.
Artech House, Inc., Norwood, MA, USA.
Boers, Y., Driessen, H., Torstensson, J., Trieb, M., Karlsson, R., and Gustafsson, F. (2006). Track-before-detect algorithm for tracking extended targets. In IEE Proceedings - Radar, Sonar and Navigation, volume 153, pages 345–351.
Borenstein, J. and Koren, Y. (1991). The vector field histogram-fast obstacle avoidance
for mobile robots. IEEE Transactions on Robotics and Automation, 7(3):278–288.
Buehler, M., Iagnemma, K., and Singh, S., editors (2008a). Special Issue on the 2007
DARPA Urban Challenge, Part I, volume 25 (8). Journal of Field Robotics.
Buehler, M., Iagnemma, K., and Singh, S., editors (2008b). Special Issue on the 2007
DARPA Urban Challenge, Part II, volume 25 (9). Journal of Field Robotics.
Buehler, M., Iagnemma, K., and Singh, S., editors (2008c). Special Issue on the 2007
DARPA Urban Challenge, Part III, volume 25 (10). Journal of Field Robotics.
Cappe, O., Godsill, S., and Moulines, E. (2007). An overview of existing methods and
recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5):899–924.
Civera, J., Davison, A., and Montiel, J. (2008). Inverse depth parametrization for monocular SLAM. IEEE Transactions on Robotics, 24(5):932–945.
Danielsson, L. (2008). Tracking Theory for Preventive Safety Systems. Licentiate Thesis
No R004/2008, Department of Signals and Systems, Chalmers University of Technology.
Dezert, J. C. (1998). Tracking maneuvering and bending extended target in cluttered
environment. In Proceedings of Signal and Data Processing of Small Targets, volume
3373, pages 283–294. SPIE.
Dickmanns, E. (1988). Dynamic computer vision for mobile robot control. In Proceedings
of the 19th International Symposium on Industrial Robots, Sydney, Australia.
Dickmanns, E. D. (2007). Dynamic Vision for Perception and Control of Motion. Springer, London, UK.
Dickmanns, E. D. and Mysliwetz, B. D. (1992). Recursive 3-D road and relative ego-state recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):199–213.
Dickmanns, E. D. and Zapp, A. (1986). A curvature-based scheme for improving road vehicle guidance by computer vision. In Proceedings of the SPIE Conference on Mobile
Robots, volume 727, pages 161–198, Cambridge, MA, USA.
Djuric, P. M., Kotecha, J. H., Zhang, J., Huang, Y., Ghirmai, T., Bugallo, M. F., and
Miguez, J. (2003). Particle filtering. Signal Processing Magazine, IEEE, 20(5):19–38.
Drummond, O. E., Blackman, S. S., and Pretrisor, G. C. (1990). Tracking clusters and
extended objects with multiple sensors. In Drummond, O. E., editor, Proceedings of
Signal and Data Processing of Small Targets, volume 1305, pages 362–375. SPIE.
Durrant-Whyte, H. and Bailey, T. (2006). Simultaneous localization and mapping
(SLAM): Part I. IEEE Robotics & Automation Magazine, 13(2):99–110.
Eidehall, A. (2007). Tracking and threat assessment for automotive collision avoidance.
PhD thesis No 1066, Linköping Studies in Science and Technology, Linköping, Sweden.
Eidehall, A. and Gustafsson, F. (2006). Obtaining reference road geometry parameters
from recorded sensor data. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 256–260, Tokyo, Japan.
Eidehall, A., Pohl, J., and Gustafsson, F. (2007). Joint road geometry estimation and
vehicle tracking. Control Engineering Practice, 15(12):1484–1494.
Elfes, A. (1987). Sonar-based real-world mapping and navigation. IEEE Journal of
Robotics and Automation, 3(3):249–265.
Erdinc, O., Willett, P., and Bar-Shalom, Y. (2006). A physical-space approach for the
probability hypothesis density and cardinalized probability hypothesis density filters. In
Drummond, O. E., editor, Proceedings of Signal and Data Processing of Small Targets,
volume 6236, page 623619. SPIE.
Fisher, R. A. (1912). On an absolute criterion for fitting frequency curves. Messenger of
Mathematics, 41:155–160.
Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society Series A, 222:309–368.
Gern, A., Franke, U., and Levi, P. (2000). Advanced lane recognition - fusing vision
and radar. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 45–51,
Dearborn, MI, USA.
Gern, A., Franke, U., and Levi, P. (2001). Robust vehicle tracking fusing radar and vision.
In Proceedings of the international conference of multisensor fusion and integration for
intelligent systems, pages 323–328, Baden-Baden, Germany.
Gilholm, K., Godsill, S., Maskell, S., and Salmond, D. (2005). Poisson models for extended target and group tracking. In Drummond, O. E., editor, Proceedings of Signal
and Data Processing of Small Targets, volume 5913, page 59130R. SPIE.
Gilholm, K. and Salmond, D. (2005). Spatial distribution model for tracking extended objects. In IEE Proceedings - Radar, Sonar and Navigation, volume 152, pages 364–371.
Goodwin, G. C. and Sin, K. S. (1984). Adaptive Filtering Prediction and Control. Prentice-Hall, Englewood Cliffs, NJ, USA.
Gunnarsson, J. (2007). Models and Algorithms - with applications to vehicle tracking
and frequency estimation. PhD thesis No 2628, Department of Signals and Systems,
Chalmers University of Technology.
Gunnarsson, J., Svensson, L., Bengtsson, E., and Danielsson, L. (2006). Joint driver intention classification and tracking of vehicles. In Nonlinear Statistical Signal Processing
Workshop, 2006 IEEE, pages 95–98.
Gunnarsson, J., Svensson, L., Danielsson, L., and Bengtsson, F. (2007). Tracking vehicles
using radar detections. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 296–302, Istanbul, Turkey.
Gustafsson, F. (2000). Adaptive Filtering and Change Detection. John Wiley & Sons,
New York, USA.
Gustafsson, F. (2009). Automotive safety systems. Signal Processing Magazine, IEEE,
26(4):32–47.
Hahn, H. (2002). Rigid body dynamics of mechanisms. 1, Theoretical basis, volume 1.
Springer, Berlin, Germany.
Hendeby, G. (2008). Performance and Implementation Aspects of Nonlinear Filtering.
PhD thesis No 1161, Linköping Studies in Science and Technology, Linköping, Sweden.
Jansson, J. (2005). Collision Avoidance Theory with Applications to Automotive Collision Mitigation. PhD thesis No 950, Linköping Studies in Science and Technology,
Linköping, Sweden.
Jazwinski, A. H. (1970). Stochastic processes and filtering theory. Mathematics in science
and engineering. Academic Press, New York, USA.
Johansson, K. H., Törngren, M., and Nielsen, L. (2005). Vehicle applications of controller area network. In Hristu-Varsakelis, D. and Levine, W. S., editors, Handbook of
Networked and Embedded Control Systems, pages 741–765. Birkhäuser.
Julier, S. (2002). The scaled unscented transformation. In Proceedings of the American
Control Conference, volume 6, pages 4555–4559.
Julier, S. and Uhlmann, J. (2002). Reduced sigma point filters for the propagation of
means and covariances through nonlinear transformations. In Proceedings of the American Control Conference, volume 2, pages 887–892.
Julier, S., Uhlmann, J., and Durrant-Whyte, H. (1995). A new approach for filtering nonlinear systems. In Proceedings of the American Control Conference, volume 3, pages 1628–1632.
Julier, S. J. and Uhlmann, J. K. (1997). New extension of the Kalman filter to nonlinear
systems. In Signal Processing, Sensor Fusion, and Target Recognition VI, volume
3068, pages 182–193. SPIE.
Julier, S. J. and Uhlmann, J. K. (2004). Unscented filtering and nonlinear estimation.
Proceedings of the IEEE, 92(3):401–422.
Kailath, T. (1980). Linear systems. Prentice Hall, Englewood Cliffs, NJ, USA.
Kailath, T., Sayed, A. H., and Hassibi, B. (2000). Linear Estimation. Information and
System Sciences Series. Prentice Hall, Upper Saddle River, NJ, USA.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82:35–45.
Kalman, R. E. and Bucy, R. S. (1961). New results in linear filtering and prediction theory. Transactions of the ASME, Journal of Basic Engineering, Series 83D, pages 95–108.
Karlsson, R. (2005). Particle Filtering for Positioning and Tracking Applications. PhD
thesis No 924, Linköping Studies in Science and Technology, Linköping, Sweden.
Kay, S. M. (1993). Fundamentals of Statistical Signal Processing, Volume I: Estimation
Theory. Prentice Hall Signal Processing. Prentice Hall, Upper Saddle River, NJ, USA.
Kiencke, U., Dais, S., and Litschel, M. (1986). Automotive serial controller area network.
Technical Report 860391, SAE International Congress.
Kiencke, U. and Nielsen, L. (2005). Automotive Control Systems. Springer, Berlin,
Heidelberg, Germany, second edition.
Koch, J. W. (2008). Bayesian approach to extended object and cluster tracking using random matrices. IEEE Transactions on Aerospace and Electronic Systems, 44(3):1042–
1059.
Ljung, L. (1999). System identification, Theory for the user. System sciences series.
Prentice Hall, Upper Saddle River, NJ, USA, second edition.
Ljung, L. (2009). System identification toolbox 7 – user’s guide. MathWorks, Natick,
Mass.
Ljung, L. and Söderström, T. (1983). Theory and Practice of Recursive Identification.
The MIT Press series in Signal Processing, Optimization, and Control. The MIT Press,
Cambridge, Massachusetts.
Lundquist, C. (2008). Method for stabilizing a vehicle combination. U.S. Patent
US 2008196964, 2008.08.21 and German Patent DE 102007008342, 2008.08.21.
Lundquist, C. and Großheim, R. (2009). Method and device for determining steering angle information. International Patent WO 2009047020, 2009.04.16 and German Patent
DE 102007000958, 2009.05.14.
Lundquist, C., Orguner, U., and Schön, T. B. (2009). Tracking stationary extended objects
for road mapping using radar measurements. In Proceedings of the IEEE Intelligent
Vehicles Symposium, pages 405–410, Xi’an, China.
Lundquist, C. and Reinelt, W. (2006a). Back driving assistant for passenger cars with
trailer. In Proceedings of the SAE World Congress, SAE paper 2006-01-0940, Detroit,
MI, USA.
Lundquist, C. and Reinelt, W. (2006b). Rückwärtsfahrassistent für PKW mit Aktive Front Steering. In Proceedings of the AUTOREG (Steuerung und Regelung von Fahrzeugen und Motoren), VDI Bericht 1931, pages 45–54, Wiesloch, Germany.
Lundquist, C. and Reinelt, W. (2006c). Verfahren zur Überwachung der Rotorlage eines
Elektromotors. German Patent DE 102005016514, 2006.10.12.
Lundquist, C. and Schön, T. B. (2008a). Joint ego-motion and road geometry estimation.
Submitted to Information Fusion.
Lundquist, C. and Schön, T. B. (2008b). Road geometry estimation and vehicle tracking
using a single track model. In Proceedings of the IEEE Intelligent Vehicles Symposium,
pages 144–149, Eindhoven, The Netherlands.
Lundquist, C. and Schön, T. B. (2009a). Estimation of the free space in front of a moving
vehicle. In Proceedings of the SAE World Congress, SAE paper 2009-01-1288, Detroit,
MI, USA.
Lundquist, C. and Schön, T. B. (2009b). Recursive identification of cornering stiffness
parameters for an enhanced single track model. In Proceedings of the 15th IFAC Symposium on System Identification, pages 1726–1731, Saint-Malo, France.
Mahler, R. (2003). Multitarget Bayes filtering via first-order multitarget moments. IEEE
Transactions on Aerospace and Electronic Systems, 39(4):1152–1178.
Malinen, S., Lundquist, C., and Reinelt, W. (2006). Fault detection of a steering wheel
sensor signal in an active front steering system. In Preprints of the IFAC Symposium
on SAFEPROCESS, pages 547–552, Beijing, China.
Mitschke, M. and Wallentowitz, H. (2004). Dynamik der Kraftfahrzeuge. Springer,
Berlin, Heidelberg, 4th edition.
Moravec, H. (1988). Sensor fusion in certainty grids for mobile robots. AI Magazine,
9(2):61–74.
Pacejka, H. B. (2006). Tyre and Vehicle Dynamics. Elsevier, Amsterdam, second edition.
Reimann, G. and Lundquist, C. (2008). Verfahren zum Betrieb eines elektronisch geregelten Servolenksystems. German Patent DE 102006053029, 2008.05.15.
Reinelt, W., Klier, W., Reimann, G., Lundquist, C., Schuster, W., and Großheim, R.
(2004). Active front steering for passenger cars: System modelling and functions.
In Proceedings of the first IFAC Symposium on Advances in Automotive Control,
Salerno, Italy.
Reinelt, W. and Lundquist, C. (2005). Observer based sensor monitoring in an active
front steering system using explicit sensor failure modeling. In Proceedings of the 16th
IFAC World Congress, Prague, Czech Republic.
Reinelt, W. and Lundquist, C. (2006a). Controllability of active steering system hazards:
From standards to driving tests. In Pimintel, J. R., editor, Safety Critical Automotive Systems, ISBN 13: 978-0-7680-1243-9, pages 173–178. SAE International, 400
Commonwealth Drive, Warrendale, PA, USA.
Reinelt, W. and Lundquist, C. (2006b). Mechatronische Lenksysteme: Modellbildung
und Funktionalität des Active Front Steering. In Isermann, R., editor, Fahrdynamik
Regelung - Modellbildung, Fahrassistenzsysteme, Mechatronik, ISBN 3-8348-0109-7,
pages 213–236. Vieweg Verlag.
Reinelt, W. and Lundquist, C. (2007). Method for assisting the driver of a motor vehicle
with a trailer when reversing. German Patent DE 102006002294, 2007.07.19, European
Patent EP 1810913, 2007.07.25 and Japanese Patent JP 2007191143, 2007.08.02.
Reinelt, W., Lundquist, C., and Johansson, H. (2005). On-line sensor monitoring in an
active front steering system using extended Kalman filtering. In Proceedings of the
SAE World Congress, SAE paper 2005-01-1271, Detroit, MI, USA.
Reinelt, W., Lundquist, C., and Malinen, S. (2007). Automatic generation of a computer
program for monitoring a main program to provide operational safety. German Patent
DE 102005049657, 2007.04.19.
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008a). Verfahren zum Betrieb eines elektronisch geregelten Servolenksystems. German Patent
DE 102006040443, 2008.03.06.
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008b). Verfahren zum
Betrieb eines elektronischen Servolenksystems. German Patent DE 102006043069,
2008.03.27.
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008c). Verfahren zum
Betrieb eines Servolenksystems. German Patent DE 102006052092, 2008.05.08.
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008d). Verfahren zum
Betrieb eines Servolenksystems. German Patent DE 102006041237, 2008.03.06.
Reinelt, W., Schuster, W., Großheim, R., and Lundquist, C. (2008e). Verfahren zum
Betrieb eines Servolenksystems. German Patent DE 102006041236, 2008.03.06.
Ristic, B., Arulampalam, S., and Gordon, N. (2004). Beyond the Kalman Filter: Particle
filters for tracking applications. Artech House, London, UK.
Ristic, B. and Salmond, D. J. (2004). A study of a nonlinear filtering problem for tracking
an extended target. In Proceedings of the 7th International Conference on Information
Fusion, Stockholm, Sweden.
Robert Bosch GmbH, editor (2004). Automotive Handbook. SAE Society of Automotive
Engineers, 6th edition.
Rong Li, X. and Jilkov, V. (2001). Survey of maneuvering target tracking: Part III. Measurement models. In Proceedings of Signal and Data Processing of Small Targets,
volume 4473, pages 423–446. SPIE.
Rong Li, X. and Jilkov, V. (2003). Survey of maneuvering target tracking: Part I. Dynamic
models. IEEE Transactions on Aerospace and Electronic Systems, 39(4):1333–1364.
Rugh, W. J. (1996). Linear System Theory. Information and system sciences series.
Prentice Hall, Upper Saddle River, NJ, USA, second edition.
Salmond, D. and Parr, M. (2003). Track maintenance using measurements of target extent. In IEE Proceedings of Radar, Sonar and Navigation, volume 150, pages 389–395.
Schmidt, S. F. (1966). Application of state-space methods to navigation problems. Advances in Control Systems, 3:293–340.
Schofield, B. (2008). Model-Based Vehicle Dynamics Control for Active Safety. PhD thesis, Department of Automatic Control, Lund University, Sweden.
Schön, T. B. (2006). Estimation of Nonlinear Dynamic Systems – Theory and Applications. PhD thesis No 998, Linköping Studies in Science and Technology, Department of Electrical Engineering, Linköping University, Sweden.
Schön, T. B. and Roll, J. (2009). Ego-motion and indirect road geometry estimation using night vision. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 30–35.
Schön, T. B., Eidehall, A., and Gustafsson, F. (2006). Lane departure detection for improved road geometry estimation. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 546–551, Tokyo, Japan.
Schön, T. B., Gustafsson, F., and Nordlund, P.-J. (2005). Marginalized particle filters for
mixed linear/nonlinear state-space models. IEEE Transactions on Signal Processing,
53(7):2279–2289.
Schön, T. B., Törnqvist, D., and Gustafsson, F. (2007). Fast particle filters for multi-rate
sensors. In Proceedings of the 15th European Signal Processing Conference, Poznań,
Poland.
Smith, G. L., Schmidt, S. F., and McGee, L. A. (1962). Application of statistical filter theory to the optimal estimation of position and velocity on board a circumlunar vehicle.
Technical Report TR R-135, NASA.
Svensson, D. (2008). Multiple Model Filtering and Data Association with Application to
Ground Target Tracking. Licentiate Thesis No R017/2008, Department of Signals and
Systems, Chalmers University of Technology.
Svensson, L. and Gunnarsson, J. (2006). A new motion model for tracking of vehicles. In
Proceedings of the 14th IFAC Symposium on System Identification, Newcastle, Australia.
Thrun, S. (2002). Robotic mapping: A survey. In Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann.
Thrun, S., Burgard, W., and Fox, D. (2005). Probabilistic Robotics. Intelligent Robotics
and Autonomous Agents. The MIT Press, Cambridge, MA, USA.
van Trees, H. L. (1968). Detection, Estimation, and Modulation Theory. John Wiley &
Sons, New York, USA.
Vermaak, J., Ikoma, N., and Godsill, S. J. (2005). Sequential Monte Carlo framework for extended object tracking. IEE Proceedings of Radar, Sonar and Navigation,
152(5):353–363.
VGU (2004a). Vägar och gators utformning – Grundvärden. Vägverket, Swedish Road
Administration, Borlänge, Sweden. 2004:80.
VGU (2004b). Vägar och gators utformning – Landsbygd - Vägrum. Vägverket, Swedish
Road Administration, Borlänge, Sweden. 2004:80.
Vo, B.-N. and Ma, W.-K. (2006). The Gaussian mixture probability hypothesis density
filter. IEEE Transactions on Signal Processing, 54(11):4091–4104.
Vu, T. D., Aycard, O., and Appenrodt, N. (2007). Online localization and mapping with
moving object tracking in dynamic outdoor environments. In Proceedings of the IEEE
Intelligent Vehicles Symposium, pages 190–195.
Waxman, M. J. and Drummond, O. E. (2004). A bibliography of cluster (group) tracking.
In Proceedings of Signal and Data Processing of Small Targets, volume 5428, pages
551–560. SPIE.
Weiss, T., Schiele, B., and Dietmayer, K. (2007). Robust driving path detection in urban
and highway scenarios using a laser scanner and online occupancy grids. In Proceedings of the IEEE Intelligent Vehicles Symposium, pages 184–189.
Wong, J. (2001). Theory Of Ground Vehicles. John Wiley & Sons, New York, USA, third
edition.
Zomotor, Z. and Franke, U. (1997). Sensor fusion for improved vision based lane recognition and object tracking with range-finders. In Proceedings of IEEE Conference on
Intelligent Transportation System, pages 595–600, Boston, MA, USA.
Licentiate Theses
Division of Automatic Control
Linköping University
P. Andersson: Adaptive Forgetting through Multiple Models and Adaptive Control of Car Dynamics. Thesis No. 15, 1983.
B. Wahlberg: On Model Simplification in System Identification. Thesis No. 47, 1985.
A. Isaksson: Identification of Time Varying Systems and Applications of System Identification to
Signal Processing. Thesis No. 75, 1986.
G. Malmberg: A Study of Adaptive Control Missiles. Thesis No. 76, 1986.
S. Gunnarsson: On the Mean Square Error of Transfer Function Estimates with Applications to
Control. Thesis No. 90, 1986.
M. Viberg: On the Adaptive Array Problem. Thesis No. 117, 1987.
K. Ståhl: On the Frequency Domain Analysis of Nonlinear Systems. Thesis No. 137, 1988.
A. Skeppstedt: Construction of Composite Models from Large Data-Sets. Thesis No. 149, 1988.
P. A. J. Nagy: MaMiS: A Programming Environment for Numeric/Symbolic Data Processing.
Thesis No. 153, 1988.
K. Forsman: Applications of Constructive Algebra to Control Problems. Thesis No. 231, 1990.
I. Klein: Planning for a Class of Sequential Control Problems. Thesis No. 234, 1990.
F. Gustafsson: Optimal Segmentation of Linear Regression Parameters. Thesis No. 246, 1990.
H. Hjalmarsson: On Estimation of Model Quality in System Identification. Thesis No. 251, 1990.
S. Andersson: Sensor Array Processing; Application to Mobile Communication Systems and Dimension Reduction. Thesis No. 255, 1990.
K. Wang Chen: Observability and Invertibility of Nonlinear Systems: A Differential Algebraic
Approach. Thesis No. 282, 1991.
J. Sjöberg: Regularization Issues in Neural Network Models of Dynamical Systems. Thesis
No. 366, 1993.
P. Pucar: Segmentation of Laser Range Radar Images Using Hidden Markov Field Models. Thesis
No. 403, 1993.
H. Fortell: Volterra and Algebraic Approaches to the Zero Dynamics. Thesis No. 438, 1994.
T. McKelvey: On State-Space Models in System Identification. Thesis No. 447, 1994.
T. Andersson: Concepts and Algorithms for Non-Linear System Identifiability. Thesis No. 448,
1994.
P. Lindskog: Algorithms and Tools for System Identification Using Prior Knowledge. Thesis
No. 456, 1994.
J. Plantin: Algebraic Methods for Verification and Control of Discrete Event Dynamic Systems.
Thesis No. 501, 1995.
J. Gunnarsson: On Modeling of Discrete Event Dynamic Systems, Using Symbolic Algebraic
Methods. Thesis No. 502, 1995.
A. Ericsson: Fast Power Control to Counteract Rayleigh Fading in Cellular Radio Systems. Thesis
No. 527, 1995.
M. Jirstrand: Algebraic Methods for Modeling and Design in Control. Thesis No. 540, 1996.
K. Edström: Simulation of Mode Switching Systems Using Switched Bond Graphs. Thesis
No. 586, 1996.
J. Palmqvist: On Integrity Monitoring of Integrated Navigation Systems. Thesis No. 600, 1997.
A. Stenman: Just-in-Time Models with Applications to Dynamical Systems. Thesis No. 601, 1997.
M. Andersson: Experimental Design and Updating of Finite Element Models. Thesis No. 611,
1997.
U. Forssell: Properties and Usage of Closed-Loop Identification Methods. Thesis No. 641, 1997.
M. Larsson: On Modeling and Diagnosis of Discrete Event Dynamic Systems. Thesis No. 648, 1997.
N. Bergman: Bayesian Inference in Terrain Navigation. Thesis No. 649, 1997.
V. Einarsson: On Verification of Switched Systems Using Abstractions. Thesis No. 705, 1998.
J. Blom, F. Gunnarsson: Power Control in Cellular Radio Systems. Thesis No. 706, 1998.
P. Spångéus: Hybrid Control using LP and LMI methods – Some Applications. Thesis No. 724,
1998.
M. Norrlöf: On Analysis and Implementation of Iterative Learning Control. Thesis No. 727, 1998.
A. Hagenblad: Aspects of the Identification of Wiener Models. Thesis No. 793, 1999.
F. Tjärnström: Quality Estimation of Approximate Models. Thesis No. 810, 2000.
C. Carlsson: Vehicle Size and Orientation Estimation Using Geometric Fitting. Thesis No. 840,
2000.
J. Löfberg: Linear Model Predictive Control: Stability and Robustness. Thesis No. 866, 2001.
O. Härkegård: Flight Control Design Using Backstepping. Thesis No. 875, 2001.
J. Elbornsson: Equalization of Distortion in A/D Converters. Thesis No. 883, 2001.
J. Roll: Robust Verification and Identification of Piecewise Affine Systems. Thesis No. 899, 2001.
I. Lind: Regressor Selection in System Identification using ANOVA. Thesis No. 921, 2001.
R. Karlsson: Simulation Based Methods for Target Tracking. Thesis No. 930, 2002.
P.-J. Nordlund: Sequential Monte Carlo Filters and Integrated Navigation. Thesis No. 945, 2002.
M. Östring: Identification, Diagnosis, and Control of a Flexible Robot Arm. Thesis No. 948, 2002.
J. Jansson: Tracking and Decision Making for Automotive Collision Avoidance. Thesis No. 965, 2002.
C. Olsson: Active Engine Vibration Isolation using Feedback Control. Thesis No. 968, 2002.
N. Persson: Event Based Sampling with Application to Spectral Estimation. Thesis No. 981, 2002.
D. Lindgren: Subspace Selection Techniques for Classification Problems. Thesis No. 995, 2002.
E. Geijer Lundin: Uplink Load in CDMA Cellular Systems. Thesis No. 1045, 2003.
M. Enqvist: Some Results on Linear Models of Nonlinear Systems. Thesis No. 1046, 2003.
T. Schön: On Computational Methods for Nonlinear Estimation. Thesis No. 1047, 2003.
F. Gunnarsson: On Modeling and Control of Network Queue Dynamics. Thesis No. 1048, 2003.
S. Björklund: A Survey and Comparison of Time-Delay Estimation Methods in Linear Systems.
Thesis No. 1061, 2003.
M. Gerdin: Parameter Estimation in Linear Descriptor Systems. Thesis No. 1085, 2004.
A. Eidehall: An Automotive Lane Guidance System. Thesis No. 1122, 2004.
E. Wernholt: On Multivariable and Nonlinear Identification of Industrial Robots. Thesis No. 1131,
2004.
J. Gillberg: Methods for Frequency Domain Estimation of Continuous-Time Models. Thesis
No. 1133, 2004.
G. Hendeby: Fundamental Estimation and Detection Limits in Linear Non-Gaussian Systems.
Thesis No. 1199, 2005.
D. Axehill: Applications of Integer Quadratic Programming in Control and Communication. Thesis
No. 1218, 2005.
J. Sjöberg: Some Results On Optimal Control for Nonlinear Descriptor Systems. Thesis No. 1227,
2006.
D. Törnqvist: Statistical Fault Detection with Applications to IMU Disturbances. Thesis No. 1258,
2006.
H. Tidefelt: Structural algorithms and perturbations in differential-algebraic equations. Thesis
No. 1318, 2007.
S. Moberg: On Modeling and Control of Flexible Manipulators. Thesis No. 1336, 2007.
J. Wallén: On Kinematic Modelling and Iterative Learning Control of Industrial Robots. Thesis
No. 1343, 2008.
J. Harju Johansson: A Structure Utilizing Inexact Primal-Dual Interior-Point Method for Analysis
of Linear Differential Inclusions. Thesis No. 1367, 2008.
J. D. Hol: Pose Estimation and Calibration Algorithms for Vision and Inertial Sensors. Thesis
No. 1370, 2008.
H. Ohlsson: Regression on Manifolds with Implications for System Identification. Thesis
No. 1382, 2008.
D. Ankelhed: On low order controller synthesis using rational constraints. Thesis No. 1398, 2009.
P. Skoglar: Planning Methods for Aerial Exploration and Ground Target Tracking. Thesis
No. 1420, 2009.