Autonomous Pedestrian Collision Avoidance
Using a Fuzzy Steering Controller
David Fernández Llorca, Member, IEEE, Vicente Milanés, Ignacio Parra Alonso, Miguel Gavilán,
Iván García Daza, Joshué Pérez, and Miguel Ángel Sotelo, Member, IEEE
Abstract—Collision avoidance is one of the most difficult and
challenging automatic driving operations in the domain of intelligent vehicles. In emergency situations, human drivers are more
likely to brake than to steer, although the optimal maneuver
would, more frequently, be steering alone. This statement suggests
the use of automatic steering as a promising solution to avoid
accidents in the future. The objective of this paper is to provide
a collision avoidance system (CAS) for autonomous vehicles, focusing on pedestrian collision avoidance. The detection component involves a stereo-vision-based pedestrian detection system
that provides suitable measurements of the time to collision. The
collision avoidance maneuver is performed using fuzzy controllers
for the actuators that mimic human behavior and reactions, along
with a high-precision Global Positioning System (GPS), which provides the information needed for the autonomous navigation. The
proposed system is evaluated in two steps. First, drivers’ behavior
and sensor accuracy are studied in experiments carried out by
manual driving. This study will be used to define the parameters of
the second step, in which automatic pedestrian collision avoidance
is carried out at speeds of up to 30 km/h. The performed field
tests provided encouraging results and proved the viability of the
proposed approach.
Index Terms—Collision avoidance, fuzzy control, pedestrian
protection, steering control, stereo vision.
I. INTRODUCTION
Collision avoidance is one of the most difficult and
challenging automatic driving operations for autonomous
vehicles. This maneuver is used as a last resort in a critical situation by braking and/or steering, as long as the accident is still
avoidable. Collision avoidance can be applied to either vehicles
(e.g., cars, trucks, motorbikes, and bicycles) or pedestrians. In
both cases, this maneuver is considered a hazardous operation
not only from the point of view of an autonomous system but
from the perspective of a human driver as well.
Manuscript received October 1, 2009; revised May 19, 2010 and
September 2, 2010; accepted October 26, 2010. Date of publication
February 17, 2011; date of current version June 6, 2011. This work was
supported in part by the Spanish Ministry of Science and Innovation under
Research Grant TRANSITO TRA2008-06602-C03 and by the Spanish Ministry
of Development under Research Grant GUIADE P9/08. The Associate Editor
for this paper was L. Vlacic.
D. Fernández Llorca, I. P. Alonso, M. Gavilán, I. G. Daza, and M. Á. Sotelo
are with the Departamento de Automática, Escuela Politécnica Superior,
Universidad de Alcalá, 28871 Alcalá de Henares, Spain (e-mail: [email protected]
uah.es; [email protected]; [email protected]; [email protected];
[email protected]).
V. Milanés and J. Pérez are with the Centro de Automática y Robótica
(CAR), Universidad Politécnica de Madrid-Consejo Superior de Investigaciones Científicas (UPM–CSIC), 28500 Madrid, Spain (e-mail: vicente.
[email protected]; [email protected]).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TITS.2010.2091272
A collision avoidance system (CAS) involves, at least, the
following three main parts: 1) object detection; 2) decision-making; and 3) actuation. Object detection relates to perception
tasks that analyze the environment information obtained by one
or more sensors. Although object detection has been carried out
through ranging sensors, e.g., lidar or radar, the computer vision
community has shown an extensive amount of interest in solving this stage over the last few years. Many techniques
have been proposed in terms of features, models, and general
architectures to estimate the position and velocity of objects,
e.g., vehicles, bicyclists, or pedestrians. A decision-making
system interprets these estimates and makes a decision on when
and how collisions can be avoided [1]. The complexity of this
stage depends on the specific traffic situation. The huge set of
possible maneuvers, their physical restrictions, and their nonlinear properties make decision making a difficult multidimensional problem. Finally, an actuation system adapts the target commands generated by the previous stage and transforms them into the low-level control signals needed by the respective actuators: 1) throttle; 2) brake; and 3) steering. The generated signals have to produce the corresponding actions to avoid the collision, i.e., acceleration, braking, or steering.
Autonomous collision avoidance comes down to overruling
the driver. This case entails a considerable number of important
concerns. The legislation regarding active systems is not yet
fully developed. It is essential to identify new potential problems with regard to regulatory and liability law, particularly in
situations where technical systems are designed to take over
certain driving tasks in whole or in part from the driver [2].
The driver acceptance of active systems has to be investigated
in detail. False alarms are extremely critical, because they may
lead to unpredictable situations with serious consequences. In
addition, the introduction of autonomous and active systems,
which can be seen as autopilot systems, may cause misuse
problems if drivers reduce their attention because they completely rely on the system [3]. This condition can be avoided by designing systems that are not fully automated, so that the driver remains attentive at all times, e.g., systems in which only a certain
percentage of the required steering torque is applied. Another
possible solution can be obtained by combining these systems
with driver-monitoring methods [4].
The analysis of human driver behavior is one of the main motivations for proposing the avoidance maneuver
by steering. Adams [5] suggests that, in emergency situations,
drivers are more likely to brake than to steer, whereas the
optimal maneuver would be more frequently steering alone. It
is unclear why drivers tend not to use the optimal strategy in
emergency situations. It is possible that drivers’ reluctance to
steer is due to a tendency to maintain their own lanes of travel
at all costs, their lack of knowledge about alternative maneuvers
and the handling capability of their vehicles, or their preference
to lessen the severity of the accident by applying the brakes
rather than risking a different collision by executing a lateral
maneuver [5]. Systems that take active control of the vehicle
and automatically perform the optimal maneuver may be the
solution to avoiding accidents in the future.
In this paper, we are concerned with autonomous pedestrian
collision avoidance by steering. Pedestrian detection is carried out using a stereo-vision-based approach [6], [7], which
provides the pedestrians' position and relative motion. Decision
making is based on the computation of the time to collision
(TTC) between the host vehicle and the pedestrian ahead. On
the one hand, if the system considers the collision unavoidable,
pedestrian protection systems (PPSs), e.g., active braking,
active hood, or pedestrian protection airbags, will be triggered
[7]. On the other hand, if the collision is still avoidable, the
collision avoidance maneuver is performed using fuzzy steering controllers for path tracking and lane change, which are
defined as a function of the vehicle speed. Speed control is also
autonomously managed, keeping the vehicle at the target speed
(cruise control). Extra sensorial information is obtained from a
high-precision Global Positioning System (GPS) and a wireless
communication system.
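To make this decision layer concrete, the following minimal Python sketch (our illustration, not the authors' implementation) computes the TTC from the estimated car-to-pedestrian distance and closing speed and selects between triggering the PPS and starting the steering avoidance. The 2.5-s activation threshold anticipates the value obtained experimentally in Section VI, and the 1-s unavoidability value follows [37]; the function and constant names are assumptions.

```python
# Hypothetical sketch of the TTC-based decision layer; threshold values
# are illustrative assumptions, not a complete threat assessment.

TTC_AVOIDABLE = 2.5    # s, steering avoidance activation (Section VI)
TTC_UNAVOIDABLE = 1.0  # s, below this the collision is assumed unavoidable [37]

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """TTC between host vehicle and pedestrian ahead (infinite if not closing)."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return distance_m / closing_speed_mps

def decide(distance_m: float, closing_speed_mps: float) -> str:
    ttc = time_to_collision(distance_m, closing_speed_mps)
    if ttc <= TTC_UNAVOIDABLE:
        return "trigger_pps"         # active braking, active hood, airbags
    if ttc <= TTC_AVOIDABLE:
        return "steering_avoidance"  # lane change + path tracking + return
    return "keep_lane"

if __name__ == "__main__":
    # e.g., pedestrian 15 m ahead, vehicle closing at 30 km/h (8.33 m/s)
    print(decide(15.0, 30.0 / 3.6))  # -> steering_avoidance
```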
From the experimental side, some conditions are out of the
scope of this paper. For instance, complex traffic situations,
which require a more complete sensor fusion scheme and
complex decision making, e.g., situation and threat assessment
[1], are not analyzed. Instead, a simple scenario is defined.
The vehicle moves along a straight road. The pedestrian with
whom the collision may take place is located in the same lane
as the host vehicle. The left lane is free and long enough for the
collision avoidance maneuver to be completed at the current
speed. During the avoidance maneuver, the vehicle speed is
considered constant, i.e., braking is not allowed. Pedestrian
dynamics are considered negligible compared with the vehicle
dynamics. Thus, only short-term collision estimations are determined, instead of long-term impact predictions that would
require pedestrian behavior modeling [8].
The remainder of this paper is organized as follows.
Section II briefly surveys computer-vision-based pedestrian
detection and autonomous steering maneuvers. An overall description of the system is presented in Section III. The stereo-vision-based pedestrian detection system and the fuzzy steering
controller for pedestrian collision avoidance are described in
Sections IV and V, respectively. The preliminary results with
regard to the analysis of the drivers’ behavior and the accuracy
of the sensor system, as well as the real collision avoidance
experiments, are listed in Section VI. After discussing our
results, we conclude in Section VII.
II. PREVIOUS WORK
Sensor systems onboard the vehicles are required to predict
the car-to-pedestrian distance and the TTC. Cameras are the
most commonly used sensors for that purpose. Over the last
decade, a considerable number of vision-based pedestrian detection systems have been proposed. Several remarkable surveys have been presented [9]–[12], some of which have recently
been published [13], [14]. Most of the work with regard to human motion has been summarized in [9]–[13], focusing on
the pedestrian protection application in the intelligent vehicle
domain, covering both passive and active safety techniques. An
overview of the state of the art from both methodological and
experimental perspectives is presented in [14], where a novel
benchmark set has been made publicly available.
Autonomous collision avoidance was first proposed for unmanned aerial vehicles (UAVs) [15], and it has been in place
onboard domestic transport aircraft since the early 1990s [16].
Although autonomous aerial navigation considerably differs
from autonomous navigation in the intelligent vehicle domain,
several aspects can fruitfully be extended. For instance, in [17],
an overtaking control method is proposed using the conflict
probability, which has widely been used in the aviation community [18]. Other concepts, e.g., TTC, time-to-escape, and risk
assessment, which have deeply been studied for UAVs, are also
suitable for intelligent vehicles.
The next step carried out by the research community took
place for autonomous-mobile-robot applications. In robotics,
collision avoidance consists of modifying the trajectory of the
mobile robot in real time such that the robot can avoid collisions
with obstacles found on its path. This approach comprises the
following two main layers: 1) obstacle avoidance and 2) path
planning. Due to the idiosyncrasy of this field, a sizeable
number of works have been developed during the past few
decades. We refer to the review in [19] for a general background
with regard to obstacle avoidance for autonomous mobile
robots.
With regard to autonomous collision avoidance for intelligent
vehicles, in particular, collision avoidance by steering, we can
define, at least, the following four stages of development.
1) Guidance or lane keeping. Autonomous guidance or lane
keeping refers to the technology that tries to prevent lane
departure, usually by monitoring the lane markers using
a vision-based system and controlling the steering wheel.
The first work on lane guidance was built in Japan in 1977
[20]. Subsequently, works started to appear in the late
1980s [21], [22]. The Carnegie Mellon University (CMU)
Navlab gained much experience in developing steering
controllers for autonomous navigation for the Navlab
vehicle series, which are equipped with artificial vision
systems. The steering of the early versions was controlled
by the template-based rapidly adapting lateral position
handler (RALPH) [23]. Several lateral controllers and
autonomous guidance systems have also been developed
through the Partners for Advanced Transit and Highways
(PATH) Program [24]–[27]. As part of the well-known
ARGO Project [28], an automatic guidance system was
developed with an onboard computer that manages the
steering wheel of a mass-produced car. The guidance
system was based on a classical proportional controller
whose input signals were directly supplied by the lane-recognition vision system. Other real-vehicle applications
have been developed, which can perform autonomous
lane keeping and navigation by automatic steering management [29], [30].
2) Lane change. In contrast to lane keeping, lane change
concerns methods that allow the vehicle to target a different lane, estimate the lane-change trajectory, and track the
new path. There is considerable literature related to the lane-change issue. Most of the works have been proposed by
the PATH Program. In [26], a classical analytical control
system is proposed for performing a lane change to get
an autonomous vehicle to automatically join or leave a
platoon of unmanned vehicles that circulate in a different
lane. The input variables are lateral and angular errors,
which are provided by a magnetic marker sequence
placed at the center of the lanes. A fuzzy controller, which
consists of 24 rules, for managing the steering wheel
of an autonomous vehicle in lane-change maneuvers is
presented in [31]. A new longitudinal controller for lane-change tracking within platoon operations is proposed in
[32]. The longitudinal controller is combined with lateral
controllers to minimize the length of the maneuver. In
[33], an autonomous navigation system that performs
lane keeping and lane-change maneuvers following magnetic markers that are placed in each lane of the road
has been developed. The lane-change maneuver is defined
as the movement from one lane to the adjacent lane.
During this process, the navigation is performed by dead
reckoning until the vehicle locates the new lane magnetic
sensor sequence. An exhaustive analysis for calculating
the lane-change trajectory is conducted in [34].
3) Overtaking. An overtaking maneuver can be defined as
a sequence of a lane-change maneuver, a path tracking along the new lane, and a return to the original
lane. The optimal trajectory for executing this complex
maneuver was first proposed in [35]. A fuzzy-control-based automatic lane-change system that mimics human
behavior and reactions during overtaking is presented
in [36]. The navigational information that is needed for
the overtaking operation is supplied by a differential
GPS (DGPS). Recently, a conflict-probability-estimation-based overtaking control method has been proposed to
enhance safety during an overtaking maneuver [17]. The
conflict probability is used as the safety indicator.
4) Collision avoidance. Most of the collision avoidance
approaches have been defined to avoid collision between
vehicles. Lane-change maneuvers are used as a response
to an emergency situation, resulting in the so-called
emergency lane change (ELC) [37] or emergency lane
assist (ELA) [3] systems. The minimum distance or TTC
beyond which an obstacle cannot be avoided at a given
initial speed is determined as 1 s in [37]. The design of an
ELC maneuver that enables the follower vehicle to track
the lead vehicle’s trajectory for a platoon of two vehicles
is presented in [38]. The analysis of the kinematics of
the vehicles involved in a lane-changing/merging maneuver and the study of the conditions under which lane-changing/merging crashes can be avoided are provided in
[34]. In [3], a new type of lane guidance system (ELA)
is proposed. The main goal is to prevent dangerous lane
Fig. 1. Experimental vehicle and the onboard stereo sensor.
departure maneuvers by automatically applying a torque
to the steering wheel.
In this paper, we focus on autonomous pedestrian collision avoidance by steering. According to the literature, this issue is one of the less well-researched topics in the autonomous vehicle domain and has been somewhat neglected.
III. SYSTEM DESCRIPTION
A. Experimental Vehicle
The experimental vehicle used in this paper is a Citroën C3
Pluriel, which has been automated by the Spanish National
Research Council (CSIC; see Fig. 1). It is a dual-mode vehicle
that offers an automatic mode in specific situations (e.g.,
platooning) and specific locations (e.g., automated parking
lots) and a manual-assisted mode in regular situations [39]. It
has an onboard computer that houses the control system. The
GPS is connected through an RS232 serial port, the cameras
provide the images through the FireWire port, and the speed
signal is read through the controller area network (CAN) bus
interface. Finally, a wireless networking infrastructure is used
to transmit the differential correction from a GPS base station
to the vehicle [40].
B. Collision Avoidance Overview
Pedestrian collision avoidance is defined as a three-stage
process. As soon as the stereo vision sensor detects a potential
pedestrian collision that can be avoided, a lane change to the
adjacent left lane is performed. Path tracking is then applied
until the pedestrian has been passed. Finally, a second lane
change is carried out to go back to the right lane (see Fig. 2). We
do not consider oncoming traffic along the left lane, because it
would require a vehicle detection system and complex decision
making, which are out of the scope of this paper.
According to this scheme, two controllers have to be designed: one controller for the speed and another controller
for the steering wheel. Although both controllers are considered partially decoupled and can thus independently be
Fig. 4. Overview of the stereo-vision-based pedestrian detection architecture.
Fig. 2. Pedestrian collision avoidance stages. (a) First lane change to the
contiguous lane, (b) path tracking in the left lane, and (c) second lane change
to the right lane.
Fig. 3. Stages of the control architecture.
designed, they share the input information and decision-making
layers and work in a coordinated way. The speed control [39]
works as an adaptive cruise control (ACC), maintaining a target
speed for obstacle-free circulation, platooning, overtaking, and
collision avoidance. Several fuzzy steering controllers have
previously been designed to manage obstacle-free circulation
and platooning [41] and overtaking [36]. In this paper, a fuzzy
control system is specifically designed to control the steering in
pedestrian collision avoidance maneuvers.
C. Architecture Description
The control architecture is divided into three different stages,
as shown in Fig. 3. This architecture can deal with different
vehicle models, actuators, and control methods. It is open and
distributed, allowing scalability without substantial changes to
its configuration, even with the inclusion of different elements
in each car and irrespective of the vehicle model. The stages are
listed as follows.
1) Perception. In this stage, the environment information is acquired. The main sensorial inputs
include a real-time kinematic DGPS (RTK-DGPS), an
inertial measurement unit (IMU), and a stereo-vision-based system. These systems are combined [40] to obtain
a good vehicle position that is used as the input to the
next stage.
2) Planning. This phase is subdivided into three subphases (a minimal sketch of the resulting controller selection is given after this list). The first subphase is a navigator that, in this case, is defined as a set of GPS waypoints used as the reference route. The second subphase is the adviser, whose mission is to select among all the different controllers.
These controllers—all based on fuzzy logic—have been
designed to take into account any traffic condition,
i.e., straight-road tracking, bend tracking, overtaking, or
ACC. Finally, the pilot decides the best controller for each
traffic situation and generates the corresponding output
for the actuators.
3) Actuation. This last stage is in charge of the execution
of the targets that come from the planning stage. Its
function is to adapt the output value generated by the
pilot to values that can be applied to the actuators, i.e., the
throttle, the brake, and the steering wheel. The actuators
have been modified to permit autonomous driving. In
particular, an analog output card carries out the control
of the throttle, and an added electrohydraulic pump acts
on the brake. For the steering wheel, a parallel system to
the electrical assisted power steering has been installed. A
pulsewidth modulation (PWM) signal is sent to act over
the steering wheel, and a power stage manages the motor.
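A minimal sketch of the controller selection performed in the planning stage (a hypothetical illustration of the adviser/pilot logic, with assumed names and situation flags, not the actual implementation) is:

```python
# Hypothetical sketch of the planning-stage controller selection
# (navigator -> adviser -> pilot); field names and flags are assumptions.
from dataclasses import dataclass

@dataclass
class Situation:
    on_bend: bool
    overtaking: bool
    pedestrian_ttc_s: float  # TTC reported by the stereo sensor

TTC_ACTIVATION_S = 2.5  # threshold used in the automatic field tests

def select_steering_controller(s: Situation) -> str:
    """Pick the fuzzy steering controller to run during this control cycle."""
    if s.pedestrian_ttc_s <= TTC_ACTIVATION_S:
        return "pedestrian_collision_avoidance"
    if s.overtaking:
        return "lane_change"
    if s.on_bend:
        return "bend_tracking"
    return "straight_road_tracking"

print(select_steering_controller(Situation(False, False, 2.1)))
# -> pedestrian_collision_avoidance
```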
IV. STEREO-VISION-BASED PEDESTRIAN DETECTION
Pedestrian detection is carried out using the system described
in [6] and [7] (see Fig. 4). Nondense 3-D maps are computed
using a robust correlation process that reduces the number of
matching errors [42]. The camera pitch angle is dynamically
estimated using the so-called virtual disparity map, which
provides a better performance compared with other representations, e.g., the v-disparity map or the yOz plane [7]. Two main
advantages are achieved through pitch compensation. First, the
accuracy of the TTC estimation in car-to-pedestrian accidents
is increased. Second, the separation between road points and
obstacle points is improved, resulting in lower false-positive
and false-negative detection rates [7].
Fig. 5. Representation of the pedestrian collision avoidance maneuver variables.
Three-dimensional maps are filtered, assuming that the road
surface is planar (which can be acceptable in most cases),
i.e., points under the actual road profile and over the actual
road profile plus the maximum pedestrian height are removed,
because they do not correspond to obstacles (possible pedestrians). The resulting filtered 3-D maps are used to obtain the
regions of interest. A subtractive clustering method, which is
adapted to the accuracy provided by the stereo sensor, is applied
to detect generic obstacles with a 3-D shape that is similar
to the pedestrians. The 2-D candidates are then obtained by
projecting the 3-D points of each resulting cluster and computing their bounding boxes. A support-vector-machine-based
(SVM) classifier is then applied using an optimal combination
of feature extraction methods and a by-components approach
[6]. Nonetheless, the 2-D bounding box that corresponds to
a 3-D candidate might not perfectly match the actual pedestrian appearance in the image plane. Multiple candidates are
generated around each original candidate. The so-called multicandidate (MC) approach proves to increase the detection rate,
the accuracy of depth measurements, and the detection range.
The tracking stage is carried out using a Kalman filter, including
both pedestrian position and relative velocity in the state vector
and assuming a constant velocity motion model.
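A minimal sketch of such a constant-velocity Kalman tracker (our own illustrative formulation with assumed noise magnitudes, not the tuned filter of [6] and [7]) is shown below.

```python
# Minimal constant-velocity Kalman tracker for one pedestrian, assuming a
# state [x, z, vx, vz] (lateral/longitudinal position and velocity relative
# to the camera). Noise magnitudes are illustrative assumptions.
import numpy as np

class PedestrianTracker:
    def __init__(self, dt: float, meas_std_m: float = 0.5, accel_std: float = 1.0):
        self.x = np.zeros(4)                  # [x, z, vx, vz]
        self.P = np.eye(4) * 10.0             # large initial uncertainty
        self.F = np.eye(4)                    # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))             # only position is measured
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * (accel_std * dt) ** 2  # coarse process noise
        self.R = np.eye(2) * meas_std_m ** 2        # measurement noise

    def step(self, z_meas: np.ndarray) -> np.ndarray:
        # Predict with the constant-velocity model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the stereo position measurement
        y = z_meas - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x  # position and relative velocity estimate
```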
The pedestrian detection system runs in real time with 320 ×
240 images and a baseline of 30 cm. The stereo-vision-based pedestrian detection system has been tested in real collision-mitigation experiments by active hood triggering and in collision avoidance tests by braking or decelerating [7]. These two actions are now combined with collision avoidance by steering, resulting in a complete PPS.
V. PEDESTRIAN COLLISION AVOIDANCE
Some preliminary considerations must be made to define the
condition to activate the autonomous pedestrian CAS. First,
the vehicle has to be moving along a straight road and in the
right lane. Second, the pedestrian has to be located in the same
lane. Third, the left lane has to be free and long enough for the
pedestrian collision avoidance maneuver to be completed at the
current speed. Because this paper focuses on the fuzzy steering controller, we consider a constant speed that is autonomously maintained.
A. Collision Avoidance Maneuver
If the previous conditions are satisfied, the collision avoidance maneuver is carried out. To perform a good maneuver,
we must guarantee the safety of both the pedestrian and the
vehicle occupants. Accordingly, the trajectory of the vehicle
must be as soft as possible to avoid a car accident, e.g., lateral
overturning of the vehicle. Once the car-to-pedestrian TTC is
under a specific threshold, the following two main parameters
have to be considered: 1) the actual speed of the car and
2) the lateral displacement, which is needed to avoid running
over the pedestrian. The actual speed of the vehicle is used to
restrict the torque applied to the steering wheel. The lateral
displacement is used to define the new reference during the
avoidance maneuver (see Fig. 5).
The planning stage in our architecture allows a predefined route to be followed. The target reference is located in the right
lane until an unexpected traffic situation takes place. Once the
system triggers the collision avoidance maneuver, a projection
of the predefined route is computed with the reference located
in the right lane plus the lateral displacement. In effect, a virtual triangle is traced whose vertices are the vehicle position, the pedestrian position, and the future safe vehicle position alongside the pedestrian.
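The reference shift can be sketched as follows (an illustrative representation with local (x, y) waypoints; the actual planner operates on GPS waypoints): the avoidance reference is simply the predefined right-lane route displaced by the lateral offset while the maneuver is active.

```python
# Illustrative sketch: shift the predefined right-lane waypoints laterally
# (toward the adjacent lane) while the avoidance maneuver is active.
# Waypoints are assumed to be (x, y) pairs in a local frame with the
# route along +x; this is not the authors' planner implementation.
from typing import List, Tuple

Waypoint = Tuple[float, float]

def avoidance_reference(route: List[Waypoint],
                        lateral_offset_m: float,
                        active: bool) -> List[Waypoint]:
    """Return the route reference used by the steering controller."""
    if not active:
        return route
    # On a straight stretch the shifted route is the original route displaced
    # laterally; 2.5 m was used as the safe displacement in the field tests.
    return [(x, y + lateral_offset_m) for (x, y) in route]

right_lane = [(float(x), 0.0) for x in range(0, 50, 5)]
print(avoidance_reference(right_lane, 2.5, active=True)[:2])
# -> [(0.0, 2.5), (5.0, 2.5)]
```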
This movement consists of a soft action over the steering
wheel that the implemented fuzzy controllers cannot carry
out. The straight-road fuzzy controller permits only a minimal movement, to avoid unexpected turns of the steering wheel in a
straight stretch [41]. In turn, the bend fuzzy controller
permits a complete freedom in the action over the steering
wheel [41]. Other implemented fuzzy controllers act on the
longitudinal control [39]. A lane-change fuzzy controller has
been developed to perform the overtaking maneuver [36]. In
this case, we obtained a function that depends on the speed
of the vehicle to determine the distance necessary to safely
perform the lane-change maneuver.
In our case, we want to deal with an unexpected traffic
situation, i.e., when a pedestrian suddenly appears in front of
the vehicle. The goal is to avoid the car-to-pedestrian collision
without causing an accident. A new fuzzy controller is developed to achieve that goal.
B. Fuzzy Steering Controller
There are several approaches for performing the control of
the actuators in a vehicle. Conventional control methods produce good results but with a high computational and design cost due to the nonlinear characteristics of a vehicle; indeed, its mathematical representation becomes extremely costly. Another way of approximating human behavior in steering control is the use of techniques based on artificial intelligence, e.g., neural networks [43], but fuzzy logic gives a good approximation of human reasoning and is an intuitive control technique.
The developed fuzzy controller is responsible for managing the steering wheel (i.e., the lateral control), deciding when and how to modify the autonomous vehicle's steering.
This controller consists of a rule base that contains expert
Fig. 6. Membership function definition for the input variables. (a) Lateral displacement, (b) speed, and (c) output variable steering.
knowledge and a set of variables that represent the linguistic
values considered. Functionally, the fuzzy reasoning process
can be divided into the following three stages.
Fuzzification is the stage in which a crisp input value
is converted to a fuzzy value. The following two inputs
have been used in the definition of this controller: 1) the
lateral displacement, which is measured as the difference
between the actual position of the vehicle and the desired
safety position, and 2) the speed of the vehicle, which is
obtained from the CAN bus.
Inference engine simulates the human reasoning process by
making fuzzy inference on the inputs and if–then rules.
We use Mamdani’s inference method [44] (min–min–max)
to solve the implication. The application of the inference
engine yields the values of the output fuzzy variables.
Defuzzification is the reverse process of fuzzification. In this
stage, the fuzzy output values are converted to crisp values.
We defined the output fuzzy variable membership function
shapes using Sugeno’s singleton [45]. Thus, control decisions can be taken in a short period of time, with very good
precision and quality for a real-time system. A modification of the center-of-area (CoA) method is applied as
\[
y_{\mathrm{CoA}} = \frac{\sum_i \omega_i B_i}{\sum_i \omega_i} \tag{1}
\]
where ωi represents the membership degree that results
from the inference of the ith rule, and Bi is the membership
function for the different values of the output variables of
the ith rule.
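Equation (1) amounts to a weighted average of the output singletons over the fired rules; a direct illustrative implementation (the names are ours, not part of the original system) is:

```python
# Direct implementation of (1): weighted average of the output singletons,
# where each (weight, singleton) pair comes from one fired rule.
def defuzzify_coa(fired_rules):
    """fired_rules: iterable of (omega_i, B_i) with omega_i the rule degree."""
    num = sum(w * b for w, b in fired_rules)
    den = sum(w for w, _ in fired_rules)
    return num / den if den > 0.0 else 0.0

# Example: two rules fired with degrees 0.7 and 0.3 on singletons 1.0 and 0.25
print(round(defuzzify_coa([(0.7, 1.0), (0.3, 0.25)]), 3))  # -> 0.775
```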
Fig. 6(a) and (b) shows the membership functions for the
input variables. Without unexpected traffic situations, the input
variables for the control are the lateral error and the angular
error [36]. We assume that the vehicle is driven on a straight road stretch and that the main goal is to avoid the pedestrian collision. Therefore, the variation in the angular error is negligible. Taking this condition into account, the angular error can be removed from the rules, and a new variable can be included,
i.e., the actual speed of the vehicle.
The fuzzy input variable lateral displacement contains three membership functions, one for each of its three associated linguistic labels. The right linguistic label is used to determine how far the vehicle has deviated to the right of its target route. In the same way, the left linguistic label is used to calculate how far the vehicle has deviated to the left of its target route. The
center linguistic label detects when to stop the movement of the
steering wheel when the reference route and the actual position
TABLE I
RULE BASE FOR STEERING CONTROL
of the vehicle are coincident. When a variation from this point
to the left or the right occurs, the steering-wheel action is then
permitted. Note that this fuzzy membership function has been defined as asymmetric. The reason for this approach is that we previously assumed that the vehicle is driven in the right lane and that the pedestrian is located in the same lane. Therefore, the first movement of the steering wheel must be as hard as possible to ensure that the collision with the pedestrian is avoided. A lateral displacement of around 2.5 m is considered to provide maximum safety. Due to this asymmetry, we carry out a hard movement
in the avoidance maneuver and a softer movement when the
vehicle returns to the right lane.
The second fuzzy input variable used to perform the steering
for the pedestrian collision avoidance maneuver is the speed
of the vehicle. This fuzzy input also has three membership functions, one for each of its three associated linguistic labels. The low linguistic label is used when the vehicle is
driven at very low speeds, and the medium and high linguistic
labels are defined to consider the moments when the vehicle is
driven at medium and high speeds, respectively. As intuition dictates, the higher the speed of the vehicle, the smaller the movement of the steering wheel. In this case, the fuzzy
membership function has been defined as symmetric.
The output variable is steering, which determines the
steering-wheel position. As previously stated, the fuzzy output
variable membership function shape is defined using Sugeno’s
singletons, which are based on monotonic functions [see
Fig. 6(c)]. In the straight-road fuzzy controller, two singletons
are defined with values of −1 and 1 to move the steering to the
left or right, respectively. In this controller, we have defined five
different singletons, as observed in Fig. 6(c). The movement
to the left is limited to 80% of the maximum steering-wheel
movement, because this maneuver occurs while the vehicle
returns to the right lane. On the other hand, the movement
to the right is completely allowed to avoid collisions with
pedestrians. The rule base implementation is shown in Table I.
One can appreciate how the fuzzy rules have also been selected as asymmetric. This choice is motivated by the consideration that the left lane is free during the maneuver. Accordingly, the softer the action, the higher the comfort of the occupants.
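To make the structure of the controller concrete, the following sketch implements a controller of the same form: triangular memberships for the two inputs, Mamdani min for the rule antecedents, and asymmetric output singletons with one direction limited to 80%. The breakpoints, the rule consequents, and the sign convention are illustrative assumptions; the actual definitions are those of Fig. 6 and Table I.

```python
# Illustrative sketch of a fuzzy steering controller with the same structure
# as the one described above. All membership breakpoints, rule consequents,
# and signs are assumptions made for this example.

def tri(x, a, b, c):
    """Triangular membership with vertices a <= b <= c."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a) if b > a else 1.0
    return (c - x) / (c - b) if c > b else 1.0

def lateral_memberships(e):
    """e: lateral displacement in meters, clamped to [-2.5, 2.5]."""
    return {"left":   tri(e, -2.5, -2.5, 0.0),
            "center": tri(e, -0.5,  0.0, 0.5),
            "right":  tri(e,  0.0,  2.5, 2.5)}

def speed_memberships(v):
    """v: vehicle speed in km/h, clamped to [0, 30]."""
    return {"low":    tri(v,  0.0,  0.0, 15.0),
            "medium": tri(v,  5.0, 15.0, 25.0),
            "high":   tri(v, 15.0, 30.0, 30.0)}

# Assumed rule base (lateral label, speed label) -> output singleton in
# [-0.8, 1.0]; one steering direction is limited to 80%, as in the paper.
RULES = {
    ("left",   "low"):  1.0, ("left",   "medium"):  0.5, ("left",   "high"):  0.5,
    ("center", "low"):  0.0, ("center", "medium"):  0.0, ("center", "high"):  0.0,
    ("right",  "low"): -0.8, ("right",  "medium"): -0.4, ("right",  "high"): -0.4,
}

def steering_command(lateral_disp_m, speed_kmh):
    """Normalized steering output in [-0.8, 1.0] (sign convention assumed)."""
    e = max(-2.5, min(2.5, lateral_disp_m))
    v = max(0.0, min(30.0, speed_kmh))
    mu_e, mu_v = lateral_memberships(e), speed_memberships(v)
    num = den = 0.0
    for (le, lv), singleton in RULES.items():
        w = min(mu_e[le], mu_v[lv])  # Mamdani min for the rule antecedent
        num += w * singleton
        den += w
    return num / den if den > 0.0 else 0.0

print(round(steering_command(-2.5, 10.0), 2))  # hard correction at low speed -> 0.7
```

Sweeping the two inputs over their ranges with this sketch produces a smooth control surface of the kind shown in Fig. 7.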
Fig. 7. Control surface of the fuzzy steering controller for pedestrian collision
avoidance.
Fig. 7 shows the control surface for the output variable—
steering—as a function of the fuzzy input variables, i.e., lateral
displacement and actual speed according to the fuzzy rules in
Table I. The smoothness in the variation of the surface indicates that the selected rules are suitable.
VI. EXPERIMENTS
The proposed pedestrian CAS using a fuzzy steering controller is evaluated in two steps. First, several manually driven
avoidance maneuvers are performed, with the aim of studying
the sensor accuracy and determining the drivers’ behavior. This
study will be used to define the parameters of the second step, in
which automatic pedestrian collision avoidance field tests have
been carried out to demonstrate the viability of the proposed
approach.
A. Drivers’ Behavior and Sensor Accuracy
To evaluate the drivers’ behavior, we have recorded a set of
sequences in which five different drivers have been requested to
perform pedestrian collision avoidance maneuvers by steering
at different speeds: 10, 15, 20, 25, and 30 km/h. In addition to
the stereo vision sensor, two DGPSs are used. The first DGPS
is placed at the pedestrian position, and the second DGPS is
installed onboard the vehicle. The measurements supplied by
the DGPS (after linear interpolation due to its low sample frequency of approximately 5 Hz) are considered the ground truth. Thus,
we can compare the results provided by the stereo vision sensor
and determine its suitability to carry out automatic pedestrian
collision avoidance maneuvers.
Fig. 8 shows the DGPS trajectories that correspond to
driver 1, where the x-axis represents the Universal Transverse
Mercator (UTM) East coordinates, and the y-axis represents
the UTM North coordinates in meters. To compare these trajectories with the trajectories provided by the stereo sensor,
the relative car-to-pedestrian positions with respect to the left
camera have to be computed. This transformation is carried out
by applying two translations: one translation from the UTM
global reference to the DGPS onboard the vehicle and another
Fig. 8. Pedestrian collision avoidance maneuvers that correspond to driver 1
at different speeds.
translation from the DGPS to the left camera. The orientation
of both axes is computed using the longitudinal movement of
the vehicle.
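A sketch of this transformation (our own formulation; the antenna-to-camera lever-arm values are placeholders, and the heading is taken from two consecutive vehicle fixes, i.e., the longitudinal movement) is given below.

```python
# Sketch of the UTM -> camera-relative transformation described above:
# two translations (UTM -> onboard DGPS antenna -> left camera) plus a
# rotation whose heading is estimated from the longitudinal movement of
# the vehicle. The antenna-to-camera lever arm is an assumed placeholder.
import math

CAM_OFFSET_FWD = 1.5   # m, DGPS antenna to left camera, forward (assumed)
CAM_OFFSET_LEFT = 0.2  # m, DGPS antenna to left camera, lateral (assumed)

def heading_from_motion(prev_utm, curr_utm):
    """Vehicle heading (rad, from the UTM East axis) from two vehicle fixes."""
    de, dn = curr_utm[0] - prev_utm[0], curr_utm[1] - prev_utm[1]
    return math.atan2(dn, de)

def pedestrian_in_camera_frame(ped_utm, veh_utm, heading):
    """Relative pedestrian position (x lateral-left, z forward) in meters."""
    de, dn = ped_utm[0] - veh_utm[0], ped_utm[1] - veh_utm[1]
    # Rotate the UTM offset into the vehicle frame (z forward, x to the left)
    z = de * math.cos(heading) + dn * math.sin(heading)
    x = -de * math.sin(heading) + dn * math.cos(heading)
    # Second translation: DGPS antenna -> left camera
    return x - CAM_OFFSET_LEFT, z - CAM_OFFSET_FWD

veh0, veh1 = (458000.0, 4462000.0), (458010.0, 4462000.0)  # moving east
ped = (458030.0, 4462001.0)
h = heading_from_motion(veh0, veh1)
print(pedestrian_in_camera_frame(ped, veh1, h))  # ~ (0.8, 18.5)
```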
Fig. 9(a)–(c) depicts the trajectories supplied by the DGPS,
with the reference located on the moving vehicle (left camera)
and the trajectories provided by the stereo sensor, as well
as their uncertainties (which are drawn with dotted ellipses),
corresponding to driver 1 performing the avoidance maneuver
at 10, 20, and 30 km/h, respectively. Some remarkable conclusions can be deduced from these figures. The maximum range
(25–30 m) and the inverse proportion between the depth and the
stereo accuracy can easily be appreciated. The DGPS trajectories are always inside the limits of the stereo measurements plus
their corresponding uncertainties, which proves that the stereo
sensor provides information that is accurate enough, despite its
inner accuracy constraints. In addition, although stereo depth
measurements are not reliable at long distances, their accuracy
improves in proportion to the collision risk, i.e., as the car-topedestrian distance decreases. For example, at 15 m, the depth
error is about ±1.5 m; at 10 m, the depth error is about ±0.7 m;
and at 5 m, the depth error is lower than ±0.2 m. These
statements can be extended from driver 1 to driver 5.
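This behavior follows from stereo triangulation: with depth Z = f·B/d, a disparity error Δd maps to a depth error of roughly ΔZ ≈ Z²·Δd/(f·B), i.e., quadratic in Z. The following sketch reproduces the trend for the 30-cm baseline; the focal length and disparity error are assumed values, not the calibration of the actual rig.

```python
# Quadratic growth of stereo depth error with distance: dZ ~ Z^2 * dd / (f*B).
# Focal length and subpixel disparity error are assumed values; the 0.30-m
# baseline is the one reported for the onboard stereo rig.
BASELINE_M = 0.30
FOCAL_PX = 400.0   # assumed focal length for 320x240 images
DISP_ERR_PX = 0.5  # assumed disparity (matching) error

def depth_error(z_m: float) -> float:
    return z_m ** 2 * DISP_ERR_PX / (FOCAL_PX * BASELINE_M)

for z in (5.0, 10.0, 15.0):
    print(f"Z = {z:4.1f} m -> depth error ~ +/- {depth_error(z):.2f} m")
# Z = 5.0 m -> ~ +/- 0.10 m ; Z = 10.0 m -> ~ 0.42 m ; Z = 15.0 m -> ~ 0.94 m
```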
However, as suggested in [46] for braking maneuvers, we
deduce that the decision to start the avoidance maneuver and
the control of steering may well be based on TTC information
as directly available to the driver from the optic flow field. This
TTC information is an important cue for the driver in detecting
potentially dangerous situations. The problem is then to define
an adequate criterion for activating the CAS. Fig. 10 shows
the TTC computed through the DGPS and the stereo sensor,
as well as the corresponding absolute error in the experiment
performed by driver 1 at 10 km/h.1 The error is clearly unacceptable for TTC values above 8 s. However, the accuracy of
the measurements increases as long as the TTC decreases. The
shape of the plot is very similar in all tests (from drivers 1 to 5 at all speeds).
1 The target speed has been used to compute the TTC. Actual speeds are
unknown, because they have not been measured.
Fig. 9. Vehicle-to-pedestrian trajectories from DGPS (ground truth) and stereo (including covariances of the given stereo uncertainty at one time step) that
correspond to driver 1 at different speeds. (a) 10 km/h, (b) 20 km/h, and (c) 30 km/h. Note that the scales of the x- and y-axes are not equivalent.
Fig. 10. DGPS and stereo TTC and RMSE that correspond to driver 1 at
10 km/h.
TABLE II
RMSE OF THE TTC
In Table II, we show the root mean square error (RMSE) of
the TTC for all drivers at all speeds, specifying the error for
TTC lower than 8 and 4 s. On average, the error for TTC <
8 s is lower than 0.3 s, and for TTC < 4 s, it is lower than 0.1 s.
In addition, we can see that the larger the speed, the larger the
error, although this relationship is not linear.
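A sketch of how such threshold-conditioned RMSE values can be computed from paired ground-truth and stereo TTC samples (an illustrative formulation with made-up sample values, not the actual evaluation data) is:

```python
# RMSE of the stereo TTC against the DGPS ground truth, restricted to
# samples whose ground-truth TTC is below a threshold (e.g., 8 s or 4 s).
import math

def ttc_rmse(ttc_gt, ttc_stereo, max_ttc_s):
    """ttc_gt, ttc_stereo: paired TTC samples (s); keep only gt < max_ttc_s."""
    errs = [(g - s) ** 2 for g, s in zip(ttc_gt, ttc_stereo) if g < max_ttc_s]
    return math.sqrt(sum(errs) / len(errs)) if errs else float("nan")

gt     = [9.0, 7.5, 6.0, 4.5, 3.0, 1.5]    # made-up ground-truth TTC samples
stereo = [10.2, 7.9, 6.2, 4.6, 3.05, 1.52]  # made-up stereo TTC samples
print(round(ttc_rmse(gt, stereo, 8.0), 3))  # -> 0.206
print(round(ttc_rmse(gt, stereo, 4.0), 3))  # -> 0.038 (error shrinks with TTC)
```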
As previously stated, the parameters needed to perform
automatic pedestrian collision avoidance include the TTC at
the beginning of steering, which will correspond to different
distances, depending on the speed, and the minimum car-to-pedestrian lateral distance to safely avoid the collision. These
values are obtained by computing the average for all drivers.
Fig. 11 depicts the stereo TTC and the stereo car-to-pedestrian
distance at the beginning of the steering and the DGPS car-
Fig. 11. Average stereo TTC (in seconds) and stereo longitudinal distance
(in meters) before starting the avoidance maneuver and maximum DGPS car-to-pedestrian lateral distance.
to-pedestrian lateral distance when the car passes the pedestrian position for all the speeds. On the one hand, the car-to-pedestrian distance before starting the maneuver almost linearly
increases with the speed. On the other hand, the TTC at the
beginning of steering and the lateral distance at the moment of
passing the pedestrian remain almost constant. In particular, the
average TTC is 2.3 s, and the average lateral distance is 2.2 m.2
These parameters are slightly oversized to 2.5 s and 2.5 m,
respectively, to increase the safety of the automatic field tests.
B. Automatic Pedestrian Collision Avoidance
To try to mimic human behavior, the results obtained with human drivers in Section VI-A were used as a reference for the
automatic trials. The trigger used to change the straight-road
fuzzy controller to the fuzzy steering controller for pedestrian
collision avoidance is the TTC. Therefore, a value equal to or
2 Note that the TTC at the beginning of steering obtained in our experiments
almost matches the TTC suggested in [46] (2.5 s) at the onset of braking.
Fig. 13. Steering reference generated by the fuzzy controller at different
speeds.
Fig. 12. Automatic steering wheel maneuvers at different speeds for pedestrian collision avoidance.
less than 2.5 s was used as the threshold for the activation of
the automatic pedestrian CAS that has been developed. A set
of trials for performing the avoidance maneuver at different speeds—10, 15, 20, 25, and 30 km/h—was performed.
The trials were carried out at the CAR–CSIC facilities in a
real environment. A 6-m-wide road is used, where two lanes
of 3 m are defined. The vehicle was driven across the right
lane, and the left lane was used to perform the avoidance
maneuver. A 150-m straight stretch of the circuit was selected to
perform the experiments. One RTK-DGPS onboard the vehicle
is used to define the reference of the lateral controller. The
stereo-vision-based pedestrian detection system provides the
TTC measurements. In this case, a lightweight dummy made
of cardboard was used.
The automatic system behavior shown in Fig. 12 allows us
to evaluate the steering response at different speeds and to
compare it with the steering response of a human driver. The
upper part of the figure shows the avoidance maneuver at the lowest speed, i.e., 10 km/h. This case is the most difficult to control because of the physical limitations of the automatic steering
system. However, the vehicle can avoid the pedestrian with a sufficient safety margin. The other plots in the figure show the avoidance
maneuver at different speeds, i.e., 15, 20, 25, and 30 km/h. In
all the cases, the automatic system carried out the maneuver
without problems.
Fig. 13 shows the steering reference generated by the fuzzy
controller. Because the vehicle is driving along a straight
stretch, the steering output is close to zero to maintain the
lane before the automatic pedestrian avoidance controller is
triggered. In all the cases, the trigger causes a sudden steering
output change. The lower the speed, the higher the steering
reference change. The higher the speed becomes, the longer
the reference is maintained to avoid hard steering movements.
One can appreciate how the fuzzy output takes negative values
before reaching the pedestrian position. The goal is to straighten
the trajectory, keeping the vehicle on the road, so as to be in
parallel to the reference lane when the pedestrian position is
reached. Once the avoidance has been achieved, the vehicle comes back to the reference lane, taking a longer time, as dictated by the rule design.
Two main conclusions are obtained in these experiments.
First, we have developed an autonomous system that can avoid pedestrian collisions. Second, hard steering-wheel movements have been avoided when the lane change to
the left is done. The variation of the steering is softer when the
speed is higher. Note that the return to the right lane is carried
out with soft steering changes, because we assume that the left
lane is free as a preliminary consideration. One can appreciate
how the response of the autonomous system (see Fig. 12) is
similar to the human drivers’ behavior (see Fig. 8).
To compare the results with the manually driven experiments, the stereo TTC and the stereo car-to-pedestrian distance at the beginning of the steering and the
Fig. 14. Average stereo TTC (in seconds) and stereo longitudinal distance
(in meters) before starting the avoidance maneuver and maximum DGPS car-to-pedestrian lateral distance obtained in the automatic experiments.
TABLE III
SUMMARY OF THE EXPERIMENTAL RESULTS IN MANUALLY AND AUTOMATICALLY DRIVEN MODES
DGPS car-to-pedestrian lateral displacement when the vehicle
exceeds the pedestrian position for all speeds are shown in
Fig. 14. In all cases, the car-to-pedestrian distance is sufficient to guarantee the pedestrian's safety. One can observe how the
values for the lateral displacement between the vehicle and the
pedestrian when they are in parallel lanes are similar to
the values depicted in Fig. 11.
Finally, a summary of the results obtained with the autonomous system (A) and the manual one (M) at different
speeds is shown in Table III.3,4 The relative TTC and lateral
offset errors of the automatic system are 2.8% and 14.4%,
respectively, with regard to the oversized parameters. In all
cases, these values are greater than the values obtained from
the manual experiments. We can conclude that the automatic
pedestrian CAS clearly mimics the human drivers’ behavior,
which satisfies our design requirements. The question of whether
the human drivers’ behavior is the best choice or if there are
more sophisticated solutions remains open.
3 Note that the automatic parameters were oversized to increase the safety of the automatic field tests.
4 The speed that corresponds to manual experiments is the target speed, because it has not been measured.

VII. CONCLUSION

This paper has described an automatic driving system that can carry out autonomous pedestrian collision avoidance by steering. The detection component involved a stereo-vision-based system that provides suitable measurements, despite its
inner accuracy constraints. Both car-to-pedestrian trajectory
and TTC are satisfactorily supplied to cope with autonomous
pedestrian collision avoidance maneuvers at speeds of up to
30 km/h. The risks associated with performing collision avoidance maneuvers at higher speeds are not acceptable with our
experimental setup. However, some conclusions can be extrapolated from our results. The distance needed to safely perform
the avoidance maneuver would approximately increase by 3 m
per 5 km/h. For example, for a target speed of 50 km/h, the
distance would be greater than 30 m. In addition, higher speeds will entail higher errors in the estimated TTC. To increase the
accuracy of the measurements provided by the stereo system,
higher resolution images can be used. However, that approach
would increase the computational cost.
The proposed system implements a fuzzy-control-based automatic collision avoidance maneuver by steering. The lateral displacement and the actual speed of the vehicle are used as fuzzy
inputs. The output of the fuzzy steering controller is the
steering-wheel position. The navigational information that is
needed to perform the collision avoidance operation is supplied
by an RTK-DGPS that governs the navigation of the vehicle.
No specific reference trajectory needs to be defined, because
the lane-change system can perform the collision avoidance
maneuver by just specifying the right-lane reference plus the
lateral displacement. The parameters of the automatic pedestrian collision system are defined after studying the drivers’
behavior, as well as the sensor accuracy.
This paper has demonstrated that the proposed approach can
perform humanlike pedestrian collision avoidance maneuvers
by steering under certain conditions that have to be fulfilled:
The vehicle has to be moving along the right lane, the pedestrian
has to be located in the same lane, and the left lane has to be
free and long enough for the collision avoidance maneuver to
be completed. Although the proposed approach provides very
encouraging results, from a real-world application perspective,
where the traffic conditions are certainly more complex, further significant effort is necessary to solve this important problem.
Our future work includes new experiments at higher speeds
in more challenging scenarios with moving pedestrians, including emergency maneuvers with lower TTC values (e.g.,
at 1.0 s < TTC < 2 s). Free-space computation [47] will be
necessary to define the maximum lateral and frontal distance
available to safely perform the avoidance maneuver. Other sensors, including V2V and V2I communications, will be included
to better understand the specific traffic situation. Finally, more
sophisticated decision-making schemes [48] will be included to
deal with real urban traffic scenarios.
REFERENCES
[1] J. Jansson, J. Johansson, and F. Gustafsson, “Decision making for collision avoidance systems,” in Society Automotive Engineering World Congr.
Exh., Detroit, MI, Mar. 2002. Paper 2002-01-0403.
[2] T. Benz, F. Christen, G. Lerner, M. Schulze, and D. Vollmer, “Traffic
effects of driver assistance systems: The approach within invent,” in
Proc. 10th World Congress Intell. Transp. Syst.—Solutions for Today and
Tomorrow, 2003, pp. 1–11.
[3] A. Eidehall, J. Pohl, F. Gustafsson, and J. Ekmark, “Toward autonomous
collision avoidance by steering,” IEEE Trans. Intell. Transp. Syst., vol. 8,
no. 1, pp. 84–94, Mar. 2007.
[4] L. M. Bergasa, J. Nuevo, M. A. Sotelo, R. Barea, and M. E. López, “Realtime system for monitoring driver vigilance,” IEEE Trans. Intell. Transp.
Syst., vol. 7, no. 1, pp. 63–77, Mar. 2006.
[5] L. D. Adams, “A review of the literature on obstacle avoidance maneuvers:
Braking versus steering,” Univ. Michigan Transp. Res. Inst., Ann Arbor,
MI, Tech. Rep. UMTRI-94-19, 1994.
[6] I. Parra, D. F. Llorca, M. A. Sotelo, L. M. Bergasa, P. Revenga, J. Nuevo,
M. Ocaña, and M. A. García, “Combination of feature extraction methods
for SVM pedestrian detection,” IEEE Trans. Intell. Transp. Syst., vol. 8,
no. 2, pp. 292–307, Jun. 2007.
[7] D. F. Llorca, M. A. Sotelo, I. Parra, J. E. Naranjo, M. Gavilán, and
S. Álvarez, “An experimental study on pitch compensation in pedestrian-protection systems for collision avoidance and mitigation,” IEEE Trans.
Intell. Transp. Syst., vol. 10, no. 3, pp. 469–474, Sep. 2009.
[8] T. Gandhi and M. M. Trivedi, “Pedestrian collision avoidance systems:
A survey of computer vision based recent studies,” in Proc. IEEE Int.
Transp. Syst. Conf., Sep. 2006, pp. 976–981.
[9] D. M. Gavrila, “The visual analysis of human movement: A survey,”
Comput. Vis. Image Understanding, vol. 73, no. 1, pp. 82–98, Jan. 1999.
[10] T. B. Moeslund and E. Granum, “A survey of advances in vision-based
human motion capture and analysis,” Comput. Vis. Image Understanding,
vol. 103, no. 2/3, pp. 90–126, Nov./Dec. 2006.
[11] R. Poppe, “Vision-based human motion analysis: An overview,” Comput.
Vis. Image Understanding, vol. 108, no. 1/2, pp. 4–18, Oct./Nov. 2007.
[12] T. Gandhi and M. M. Trivedi, “Pedestrian protection systems: Issues,
survey, and challenges,” IEEE Trans. Intell. Transp. Syst., vol. 8, no. 3,
pp. 413–430, Sep. 2007.
[13] D. Gerónimo, A. M. López, A. D. Sappa, and T. Graf, “Survey on
pedestrian detection for advanced driver assistance systems,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 32, no. 7, pp. 1239–1258, Jul. 2010.
[14] M. Enzweiler and D. M. Gavrila, “Monocular pedestrian detection:
Survey and experiments,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31,
no. 12, pp. 2179–2195, Dec. 2009.
[15] J. S. Morrel, “The mathematics of collision avoidance in the air,” J. Inst.
Navig., vol. 11, no. 1, pp. 18–28, Jan. 1958.
[16] J. K. Kuchar and L. C. Yang, “A review of conflict detection and resolution modeling methods,” IEEE Trans. Intell. Transp. Syst., vol. 1, no. 4,
pp. 179–189, Dec. 2000.
[17] F. Wang, M. Yang, and R. Yang, “Conflict-probability-estimation-based
overtaking for intelligent vehicles,” IEEE Trans. Intell. Transp. Syst.,
vol. 10, no. 2, pp. 366–370, Jun. 2009.
[18] R. Paielli and H. Erzberger, “Conflict probability estimation for free
flight,” J. Guid. Control Dyn., vol. 20, no. 3, pp. 588–596, May/Jun. 1997.
[19] V. Kunchev, L. Jain, V. Ivancevic, and A. Finn, “Path planning and obstacle avoidance for autonomous mobile robots: A review,” in Proc. 10th
Int. Conf. Knowl.-Based Intell. Inf. Eng. Syst., vol. 4252, Lecture Notes in
Computer Science, 2006, pp. 537–544.
[20] S. Tsugawa, T. Yatabe, T. Hirose, and S. Matsumoto, “An automobile with
artificial intelligence,” in Proc. IJCAI, 1979, pp. 893–895.
[21] E. D. Dickmanns and A. Zapp, “Autonomous high speed road vehicle
guidance by computer vision,” in Proc. Sel. Papers 10th Triennial World
Congr. Int. Fed. Autom. Control., 1987, pp. 221–226.
[22] C. E. Thorpe, Vision and Navigation: The Carnegie Mellon Navlab.
Norwell, MA: Kluwer, 1990.
[23] D. Pomerleau, “RALPH: Rapidly adapting lateral position handler,” in
Proc. IEEE Intell. Veh. Symp., 1995, pp. 506–511.
[24] P. Varaiya, “Smart cars on smart roads: Problems of control,” IEEE Trans.
Autom. Control, vol. 38, no. 2, pp. 195–207, Feb. 1993.
[25] T. Hessburg and M. Tomizuka, “Fuzzy logic control for lateral vehicle guidance,” IEEE Control Syst. Mag., vol. 14, no. 4, pp. 55–63,
Aug. 1994.
[26] R. Rajamani, H. S. Tan, B. K. Law, and W. B. Zhang, “Demonstration of
integrated longitudinal and lateral control for the operation of automated
vehicles in platoon,” IEEE Trans. Control Syst. Technol., vol. 8, no. 4,
pp. 695–708, Jul. 2000.
[27] H. S. Tan, B. Bougler, and W. B. Zhang, “Automatic steering based on
roadway markers—From highway driving to precision docking,” Veh.
Syst. Dyn., vol. 37, no. 5, pp. 315–339, 2002.
[28] A. Broggi, M. Bertozzi, A. Fascioli, and G. Conte, Automatic Vehicle
Guidance: The Experience of Argo Autonomous Vehicle. Singapore:
World Scientific, 1999.
[29] U. Franke, D. Gavrila, S. Gorzig, F. Lindner, F. Paetzold, and C. Wohler,
“Autonomous driving approaches downtown,” IEEE Intell. Syst., vol. 13,
no. 6, pp. 40–48, Nov./Dec. 1998.
[30] M. A. Sotelo, F. J. Rodriguez, and L. Magdalena, “Virtuous: Vision-based
road transportation for unmanned operations on urbanlike scenarios,”
IEEE Trans. Intell. Transp. Syst., vol. 5, no. 2, pp. 69–83, Jun. 2004.
[31] T. Hessburg and M. Tomizuka, “Fuzzy logic control for lane change
maneuvers in lateral vehicle guidance,” Univ. Calif., Berkeley, CA, Calif
PATH Working Paper UCB-ITS-PWP-95-13, 1995.
[32] R. Horowitz, C. W. Tan, and X. Sun, “An efficient lane change maneuver for platoons of vehicles in an automated highway system,” Univ.
California, Berkeley, CA, Calif PATH Working Paper UCB-ITS-PRR2004-16, 2004.
[33] C. Hatipoglu, U. Ozguner, and K. A. Redmill, “Automated lane change
controller design,” IEEE Trans. Intell. Transp. Syst., vol. 4, no. 1, pp. 13–
22, Mar. 2003.
[34] H. Jula, E. B. Kosmatopoulos, and P. Ioannou, “Collision avoidance analysis for lane changing and merging,” IEEE Trans. Veh. Technol., vol. 49,
no. 6, pp. 2295–2308, Nov. 2000.
[35] T. Shamir, “How should an autonomous vehicle overtake a slower moving vehicle?” IEEE Trans. Autom. Control, vol. 49, no. 4, pp. 607–610,
Apr. 2004.
[36] J. E. Naranjo, C. González, R. García, and T. de Pedro, “Lane-change
fuzzy control in autonomous vehicles for the overtaking maneuver,” IEEE
Trans. Intell. Transp. Syst., vol. 9, no. 3, pp. 438–450, Sep. 2008.
[37] Z. Shiller and S. Sundar, “Emergency lane-change maneuvers of autonomous vehicles,” Trans. ASME, J. Dyn. Syst. Meas. Control, vol. 120,
no. 1, pp. 37–44, 1998.
[38] D. Swaroop and S. M. Yoon, “The design of a controller for a following vehicle in an emergency lane change maneuver,” Univ. Calif.,
Berkeley, CA, Calif PATH Working Paper UCB-ITS-PWP-99-3, 1999.
[39] J. E. Naranjo, C. González, R. García, and T. de Pedro, “ACC + stop&go
maneuvers with throttle and brake fuzzy control,” IEEE Trans. Intell.
Transp. Syst., vol. 7, no. 2, pp. 213–225, Jun. 2006.
[40] V. Milanés, J. E. Naranjo, C. González, J. Alonso, and T. de Pedro,
“Autonomous vehicle based in cooperative GPS and inertial systems,”
Robotica, vol. 26, no. 5, pp. 627–633, Sep. 2008.
[41] J. E. Naranjo, C. González, R. García, T. de Pedro, and R. E. Haber,
“Power-steering control architecture for automatic driving,” IEEE Trans.
Intell. Transp. Syst., vol. 6, no. 4, pp. 406–415, Dec. 2005.
[42] D. Fernández, I. Parra, M. A. Sotelo, P. Revenga, S. Álvarez, and
M. Gavilán, “3-D candidate selection method for pedestrian detection on
nonplanar roads,” in Proc. IEEE Intell. Veh. Symp., 2007, pp. 1162–1167.
[43] S. Kumarawadu and T. T. Lee, “Neuroadaptive combined lateral and longitudinal control of highway vehicles using RBF networks,” IEEE Trans.
Intell. Transp. Syst., vol. 7, no. 4, pp. 500–512, Dec. 2006.
[44] E. Mamdani, “Application of fuzzy algorithms for control of a simple
dynamic plant,” Proc. Inst. Elect. Eng., vol. 121, no. 12, pp. 1585–1588,
Dec. 1974.
[45] T. Takagi and M. Sugeno, “Fuzzy identification of systems and its applications to modeling and control,” IEEE Trans. Syst., Man, Cybern.,
vol. SMC-15, no. 1, pp. 116–132, Feb. 1985.
[46] R. van der Horst and J. Hogema, “Time-to-collision and collision avoidance systems,” in Proc. 6th ICTCT Workshop—Safety Evaluation of Traffic Systems: Traffic Conflicts and Other Measures, 1993, pp. 109–121.
[47] H. Badino, R. Mester, T. Vaudrey, and U. Franke, “Stereo-based freespace computation in complex traffic scenarios,” in Proc. IEEE Southwest
Symp. Image Anal. Interpretation, 2008, pp. 189–192.
[48] J. Hillenbrand, A. Spieker, and K. Kroschel, “Efficient decision making
for a multilevel collision mitigation system,” in Proc. IEEE Intell. Veh.
Symp., 2006, pp. 460–465.
David Fernández Llorca (M’08) received the M.S.
degree in telecommunications engineering and the
Ph.D. degree in electrical engineering from the Universidad de Alcalá (UAH), Alcalá de Henares, Spain,
in 2003 and 2008, respectively.
He is currently an Associate Professor with the
Departamento de Automática, Escuela Politécnica
Superior, UAH. He is the author or a coauthor of
more than 40 refereed publications in international
journals, book chapters, and conference proceedings.
His research interests are mainly focused on computer vision and intelligent transportation systems.
Dr. Llorca was the recipient of the Best Ph.D. Award from the UAH,
the Best Research Award in automotive and vehicle applications in Spain in
2008, the 3M Foundation Awards under the category of eSafety in 2009, the
Master’s Thesis Award in eSafety from the ADA Lectureship at the Technical
University of Madrid, Madrid, Spain, in 2004, and the Best Telecommunication
Engineering Student Award in 2004.
Vicente Milanés was born in Badajoz, Spain, in
1980. He received the B.E. and M.E. degrees in
electronics engineering from the Extremadura University, Badajoz, in 2002 and 2006, respectively, and
the Ph.D. degree in electronics engineering from
the University of Alcala (UAH), Alcalá de Henares,
Spain, in 2010.
Since 2006, he has been with the Spanish National Research Council (CSIC). He is currently with
the Center for Automation and Robotics (CAR),
Technical University of Madrid–Spanish National
Research Council (UPM–CSIC), Madrid, Spain. His research interests include
autonomous vehicles, fuzzy logic control, intelligent traffic and transport infrastructures, vehicle-infrastructure cooperation, and intelligent transportation
systems.
Ignacio Parra Alonso received the M.S. degree in
telecommunications engineering and the Ph.D. degree in electrical engineering from the Universidad
de Alcalá (UAH), Alcalá de Henares, Spain, in 2005
and 2010, respectively.
He is currently a member of the research staff with
the Departamento de Automática, Escuela Politécnica Superior, UAH. His research interests include
intelligent transportation systems, intelligent vehicles, artificial vision, and operating systems.
Dr. Parra was the recipient of the Master’s Thesis
Award in eSafety from the ADA Lectureship at the Technical University of
Madrid, Madrid, Spain, in 2006 and the 3M Foundation Awards under the
category of eSafety in 2009.
Miguel Gavilán received the M.S. degree in
telecommunications engineering from the Universidad de Alcalá (UAH), Alcalá de Henares, Spain, in
2007. He is currently working toward the Ph.D. degree with the Departamento de Automática, Escuela
Politécnica Superior, UAH.
His research interests include image processing
and intelligent transportation systems.
Mr. Gavilán was the recipient of the Master’s
Thesis Award in eSafety from the ADA Lectureship
at the Technical University of Madrid in 2007, the
Master’s Thesis Award from the National Association of Telecommunication
Engineers in 2008, and the 3M Foundation Awards under the category of
eSafety in 2009.
Iván García Daza received the M.S. degree in
telecommunications engineering from the Universidad de Alcalá (UAH), Alcalá de Henares, Spain,
in 2004. He is currently working toward the Ph.D.
degree, specializing in drowsy-driver detection systems, with the Departamento de Automática, Escuela
Politécnica Superior, UAH.
His research interests include computer vision,
pattern recognition, machine learning, stochastic
process optimization, and control theory.
Joshué Pérez was born in Coro, Venezuela, in 1984.
He received the B.E. degree in electronics engineering from the Simón Bolívar University, Caracas,
Venezuela, in 2007 and the M.E. degree in systems
engineering and automatic control from the University Complutense of Madrid, Madrid, Spain, in 2009.
He is currently working toward the Ph.D. degree
with the Center for Automation and Robotics (CAR),
Technical University of Madrid–Spanish National
Research Council (UPM–CSIC).
His research interests include fuzzy logic, modeling, control, and cooperative maneuvers among autonomous vehicles.
Miguel Ángel Sotelo (M’02) received the Dr. Ing.
degree in electrical engineering from the Technical
University of Madrid, Madrid, Spain, in 1996 and
the Ph.D. degree in electrical engineering from the
Universidad de Alcalá (UAH), Alcalá de Henares,
Spain, in 2001.
Since September 2004, he has been an Auditor
and Expert for the FITSA Foundation, working on
R&D projects on automotive applications. He is
currently a Full Professor with the Departamento
de Automática, Escuela Politécnica Superior, UAH.
He is the author or a coauthor of more than 100 refereed publications in
international journals, book chapters, and conference proceedings. His research
interests include real-time computer vision and control systems for autonomous
and assisted intelligent road vehicles.
Dr. Sotelo is a member of the IEEE Intelligent Transportation Systems (ITS)
Society and the ITS Spain Committee. He is currently an Associate Editor for
the IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS.
He was the recipient of the Best Research Award in automotive and vehicle
applications in Spain in 2002 and 2009, the 3M Foundation Awards under the
category of eSafety in 2003 and 2004, and the Best Young Researcher Award
from the UAH in 2004.