Master Thesis
Computer Science
Thesis no: 2010:MUC:01
February 2010
A Smart-Dashboard
Augmenting safe & smooth driving
Muhammad Akhlaq
School of Computing
Blekinge Institute of Technology
Box 520
SE – 372 25 Ronneby
Sweden
This thesis is submitted to the School of Computing at Blekinge Institute of Technology in
partial fulfillment of the requirements for the degree of Master of Science in Computer Science
(Ubiquitous Computing). The thesis is equivalent to 20 weeks of full-time studies.
Contact Information:
Author(s):
Muhammad Akhlaq
Address: Mohallah Kot Ahmad Shah, Mandi Bahauddin, PAKISTAN-50400
E-mail: [email protected]
University advisor(s):
Prof. Dr. Bo Helgeson
School of Computing
Blekinge Institute of Technology
Box 520
SE – 372 25 Ronneby
Sweden
Internet : www.bth.se/com
Phone : +46 457 38 50 00
Fax : +46 457 102 45
ABSTRACT
Road accidents cause more than 1.2 million deaths, 50 million injuries, and an
economic cost of US$ 518 billion globally every year [1]. About 90% of these accidents
occur due to human errors [2][3] such as poor awareness, distraction, drowsiness,
insufficient training, and fatigue. These human errors can be minimized by an advanced
driver assistance system (ADAS), which actively monitors the driving environment and
alerts the driver to forthcoming danger; examples include adaptive cruise control, blind
spot detection, parking assistance, forward collision warning, lane departure warning,
driver drowsiness detection, and traffic sign recognition. Unfortunately, such systems
are offered only in modern luxury cars because the numerous sensors they employ make
them very expensive. Camera-based ADAS are therefore seen as an alternative: a camera
has a much lower cost and higher availability, can be used for multiple applications,
and can be integrated with other systems.
Aiming at developing a camera-based ADAS, we have performed an ethnographic
study of drivers in order to find what information about the surroundings could be
helpful for drivers to avoid accidents. Our study shows that information on speed,
distance, relative position, direction, and size & type of the nearby vehicles & other
objects would be useful for drivers, and sufficient for implementing most of the ADAS
functions. After considering available technologies such as radar, sonar, lidar, GPS,
and video-based analysis, we conclude that video-based analysis is the most suitable
technology, providing all the essential support required to implement ADAS functions
at very low cost.
Finally, we have proposed a Smart-Dashboard system that puts technologies –
such as camera, digital image processor, and thin display – into a smart system to offer
all advanced driver assistance functions. A basic prototype, demonstrating only three
functions, is implemented in order to show that a full-fledged camera-based ADAS can
be built using MATLAB.
Keywords: Ubiquitous Computing, Smart Systems, Context-Awareness, Ethnography,
Advanced Driver Assistance System (ADAS), Middleware, Driver-Centered Design,
Image Sensors, Video-Based Analysis, Bird’s-Eye View.
ACKNOWLEDGEMENTS
First, I would like to thank my adviser Prof. Dr. Bo Helgeson at Blekinge Institute
of Technology for his invaluable advice during the course of this thesis.
Second, I would like to thank Dr. Hans Tap and Dr. Marcus Sanchez Svensson –
the former program managers for the Master in Ubiquitous Computing. Their continuous
administrative support made it possible for me to complete this thesis.
Third, special thanks to my father Gulzar Ahmad, my mother Zainab Bibi, and my
wife Sadia Bashir for their prayers and encouragement.
Muhammad Akhlaq,
Ronneby, 2009.
CONTENTS
1 INTRODUCTION
  1.1 BACKGROUND
  1.2 CHALLENGES
  1.3 RESEARCH QUESTIONS
  1.4 SMART SYSTEMS
    1.4.1 Context-awareness
    1.4.2 Intelligence
    1.4.3 Pro-activity
    1.4.4 Minimal User Interruption
  1.5 RELATED STUDIES / PROJECTS
    1.5.1 Advanced Driver Assistance Systems (ADAS)
    1.5.2 In-Vehicle Information Systems (IVIS)
    1.5.3 Warning Systems
    1.5.4 Navigation and Guidance Systems
    1.5.5 Mountable Devices and Displays
    1.5.6 Vision-based integration of ADAS
  1.6 ANALYSIS OF THE RELATED PROJECTS
2 BASICS OF UBIQUITOUS COMPUTING
  2.1 WHAT IS UBIQUITOUS & PERVASIVE COMPUTING?
    2.1.1 Ubiquitous vs. Pervasive Computing
    2.1.2 Related Fields
    2.1.3 Issues and Challenges in UbiComp
  2.2 DESIGNING FOR UBICOMP SYSTEMS
    2.2.1 Background
    2.2.2 Design Models
    2.2.3 Interaction Design
  2.3 ISSUES IN UBICOMP DESIGN
    2.3.1 What and When to Design?
    2.3.2 Targets of the Design
    2.3.3 Designing for Specific Settings – Driving Environment
    2.3.4 UbiComp and the Notion of Invisibility
    2.3.5 Calm Technology
    2.3.6 Embodied Interaction
    2.3.7 Limitations of Ethnography
    2.3.8 Prototyping
    2.3.9 Socio-Technical Gap
    2.3.10 Hacking
  2.4 UBICOMP AND SMART-DASHBOARD PROJECT
  2.5 CONCLUSIONS AND FUTURE DIRECTIONS
3 ETHNOGRAPHIC STUDIES
  3.1 INTRODUCTION
  3.2 OUR APPROACH
  3.3 RESULTS
    3.3.1 Results from Ethnography
    3.3.2 Video Results
    3.3.3 Results from Questionnaire
  3.4 CONCLUSIONS
4 GENERAL CONCEPT DEVELOPMENT
  4.1 NEED FOR BETTER SITUATION AWARENESS
    4.1.1 Improving Context-awareness
    4.1.2 Detecting Blind-spots
    4.1.3 Enhancing Object-Recognition
  4.2 NEED FOR AN UNOBTRUSIVE SYSTEM
  4.3 NEED FOR AN EASY USER INTERACTION
  4.4 CONCLUSIONS
5 TECHNOLOGIES
  5.1 RADAR
  5.2 SONAR
  5.3 LIDAR
  5.4 GPS
  5.5 VIDEO-BASED ANALYSIS
    5.5.1 CCD/CMOS Camera
    5.5.2 Working Principles
    5.5.3 Object Recognition (size & type)
    5.5.4 Road Sign Recognition
    5.5.5 Lane Detection and Tracking
    5.5.6 Distance Measurement
    5.5.7 Speed & Direction (Velocity) Measurement
    5.5.8 Drowsiness Detection
    5.5.9 Environment Reconstruction
    5.5.10 Pros and Cons
  5.6 CONCLUSIONS
6 THE SYSTEM DESIGN
  6.1 INTRODUCTION
  6.2 COMPONENTS OF THE SYSTEM
    6.2.1 Hardware (Physical Layer)
    6.2.2 Middleware
    6.2.3 Applications
  6.3 DESIGN CONSIDERATIONS
    6.3.1 Information Requirements
    6.3.2 Camera Positions
    6.3.3 Issuing an Alert
    6.3.4 User Interface
    6.3.5 Human-Machine Interaction
  6.4 SYSTEM DESIGN
    6.4.1 Adaptive Cruise Control (ACC)
    6.4.2 Intelligent Speed Adaptation/Advice (ISA)
    6.4.3 Forward Collision Warning (FCW) or Collision Avoidance
    6.4.4 Lane Departure Warning (LDW)
    6.4.5 Adaptive Light Control
    6.4.6 Parking Assistance
    6.4.7 Traffic Sign Recognition
    6.4.8 Blind Spot Detection
    6.4.9 Driver Drowsiness Detection
    6.4.10 Pedestrian Detection
    6.4.11 Night Vision
    6.4.12 Environment Reconstruction
  6.5 IMPLEMENTATION
  6.6 CONCLUSIONS
7 CONCLUSIONS
  7.1.1 Strengths
  7.1.2 Weaknesses
  7.1.3 Future Enhancements
APPENDIX A
  A1 – QUESTIONNAIRE
  A2 – RESPONSE SUMMARY REPORT
BIBLIOGRAPHY
LIST OF FIGURES
Figure 2.1: Publicness Spectrum and the Aspects of Pervasive Systems [90]
Figure 2.2: Classification of computing by Mobility & Embeddedness [95]
Figure 2.3: The iterative approach of designing UbiComp systems [130]
Figure 3.1: Blind spots on both sides of a vehicle
Figure 5.1: An example of in-phase & out-of-phase waves
Figure 5.2: Principle of pulse radar
Figure 5.3: A special case where radar is unable to find the correct target [194]
Figure 5.4: Principle of active sonar
Figure 5.5: Principle of Lateration in 2D
Figure 5.6: Some examples of image sensors and cameras
Figure 5.7: Image processing in CCD [192]
Figure 5.8: Image processing in CMOS [192]
Figure 5.9: Camera-lens parameters
Figure 5.10: Imaging geometry for distance calculation [202]
Figure 5.11: Distance estimation model [231]
Figure 5.12: Radar capable CMOS imager chip by Canesta
Figure 5.13: Distance estimation using smearing effect [296]
Figure 6.1: Layered architecture of context-aware systems [315]
Figure 6.2: Smart-Dashboard system with five cameras
Figure 6.3: Preferred places for a display
Figure 6.4: An integrated and adaptive interface of Smart-Dashboard
Figure 6.5: Overview of the Smart-Dashboard system
Figure 6.6: Adaptive Cruise Control system
Figure 6.7: Vehicle detection
Figure 6.8: Intelligent Speed Adaptation system
Figure 6.9: Forward Collision Warning system
Figure 6.10: Lane Departure Warning system
Figure 6.11: Adaptive Light Control system
Figure 6.12: Parking Assistance system
Figure 6.13: Traffic Sign Recognition system
Figure 6.14: Blind Spot Detection system
Figure 6.15: Driver Drowsiness Detection system
Figure 6.16: Pedestrian Detection system
Figure 6.17: Night Vision system
Figure 6.18: Environment Reconstruction system and the Display
Figure 6.19: Pedestrian Detection using built-in MATLAB model [317]
Figure 6.20: Traffic Sign Recognition using built-in MATLAB model [317]
Figure 6.21: Pedestrian Detection using built-in MATLAB model [317]
Figure 7.1: Imaging without (a) & with (b, c) wide dynamic range (WDR) [316]
LIST OF TABLES
Table 2.1: Differences b/w Ubiquitous Computing & Pervasive Computing
Table 2.2: Positivist approach Vs. Phenomenological approach
Table 5.1: Performance comparison of CCD and CMOS image sensors
Table 5.2: A timeline for camera-based automotive applications by Mobileye.com
1 INTRODUCTION
Driving is a very common activity of our daily life. It is enjoyable until we face a
nasty situation such as a flat tire, a traffic violation, congestion, a search for
parking, or an accident. Accidents are the most critical of these situations, causing
great loss of human lives and assets, and most of them occur due to human errors. A
Smart-Dashboard could help avoid such situations by providing relevant information to
drivers in their car as and when needed. This would significantly reduce the
frustration, delays, financial losses, injuries, and deaths caused by road incidents.
1.1 Background
Every year, road accidents cause about 1.2 million deaths, over 50 million injuries,
and a global economic cost of over US$ 518 billion [1]. About 90% of accidents happen
due to driver behavior [2][3], such as poor awareness of the driving environment,
insufficient training, distraction, work overload or underload, or poor physical or
physiological condition. An advanced driver assistance system (ADAS) can play a
positive role in improving driver awareness, and hence performance, by providing
relevant information as and when needed.
New features are introduced in vehicles every day to better serve the information
needs of drivers. Initially, only luxury vehicles come with these features because of
their high cost; over time, they become standard and start appearing in all types of
vehicles. Some new features are now introduced in ordinary vehicles from the very
first day. These features are based on innovative automotive sensors.
The automotive sensor market is growing rapidly. A large variety of automotive
sensors and technologies can provide data about the car (such as fuel level,
temperature, tire pressure, and speed), weather, traffic, navigation, road signs, road
surface, parking, route prediction, driver vigilance, and situation awareness. Modern
vehicles combine a variety of sensor technologies to keep an eye on their environment;
a mid-range saloon may use about 50 sensors, and a top-class vehicle may use well over
100 [69].
1.2 Challenges
With such a variety of sensor technologies available, system integration is a major
concern of current developments. Although recent developments already show
improvements, a fully integrated driver-assistance system is still a few years away;
the smart cars of the future will come with many safety features integrated into a
single system [4]. Even after full integration is achieved, system designers will have
to solve a number of further issues, such as how to alert a driver to forthcoming
danger using visual, audible or haptic warnings. The challenge is to avoid information
overload at decisive moments. Another issue is deciding on the level of automation,
i.e. when control should be transferred from the driver to the system. Additionally,
our approach to interaction with automobiles is changing with the introduction of new
technologies, information media, and the human and environmental factors involved [5].
For example, the auto-parking feature in the latest BMW cars requires only a button
press for the car to find an available slot and park itself into it.
Advanced driver assistance systems (ADAS) augment safe & smooth driving by
actively monitoring the driving environment and issuing a warning, or taking over
control, in highly dangerous situations. Most existing systems focus on a single
service, such as parking assistance, forward collision warning, lane departure
warning, adaptive cruise control, or driver drowsiness detection. Recently, many
integrated ADAS have been proposed, but they use a variety of sensors that makes them
complex and costly. An integrated ADAS [11] combines multiple services into a single
system in an efficient and cost-effective way.
Vision-based ADAS use cameras to provide multiple driver-assistance services.
They are becoming popular because of their low cost and independence from
infrastructure outside the vehicle. For example, one intelligent and integrated ADAS
[11] uses only 2 cameras and 8 sonars, while others make use of cameras only [71][72]
[73][74][75][76][84]. They present information through an in-vehicle display.
Specialized devices that can efficiently process visual data are being introduced
[77][78]. To give drivers better situation awareness, various systems have been
introduced that display the surrounding environment of the vehicle [79][80][81][82][83].
These recent developments show that the future lies in vision-based integrated
ADAS. The advantages of vision-based integrated systems include lower cost, improving
performance, support for innovative features, usability with new as well as old
vehicles that lack supporting infrastructure, and ease of development, installation
and maintenance. That is why they are getting much attention from researchers in
academia and the automotive industry. Current research mainly focuses on building
traditional driver assistance functions on cameras, and then combining these
individual functions into an integrated ADAS.
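To illustrate why a single camera can feed several assistance functions at once, here
is a minimal sketch (written in Python with OpenCV purely for illustration; the thesis
prototype itself uses MATLAB) that runs pedestrian detection and lane-line extraction
on the same frame. The video file name and all parameter values are assumptions made
for this example.

    # A minimal sketch, assuming OpenCV (cv2) and a dash-camera video file are available.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    capture = cv2.VideoCapture("dashcam.mp4")   # hypothetical input video
    ok, frame = capture.read()
    if ok:
        # Function 1: pedestrian detection on the full frame.
        pedestrians, _ = hog.detectMultiScale(frame, winStride=(8, 8))

        # Function 2: lane-line candidates from edges in the lower half of the same frame.
        lower_half = frame[frame.shape[0] // 2:, :]
        edges = cv2.Canny(cv2.cvtColor(lower_half, cv2.COLOR_BGR2GRAY), 50, 150)
        lanes = cv2.HoughLinesP(edges, 1, 3.1416 / 180, threshold=50,
                                minLineLength=40, maxLineGap=20)

        print(len(pedestrians), "pedestrian candidates,",
              0 if lanes is None else len(lanes), "lane-line candidates")
    capture.release()

The point of the sketch is only that one sensor stream can be reused by several
functions, which is what keeps the cost of camera-based integration low.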
However, despite much advancement in ADAS, the issue of information overload for
drivers has been overlooked and remains unsolved. Little attention is given to
interface and interaction design for vision-based ADAS. It is important to note that a
driver can pay only little attention to displayed information while driving [15].
Therefore, the system should provide only relevant information, in a distraction-free
way, as and when needed. There is a severe need to design and evaluate an in-vehicle
display for vision-based ADAS that is distraction-free, context-aware, usable and easy
for a driver to interact with. Such a display would augment safe & smooth driving and
help reduce the losses caused by road incidents.
1.3 Research Questions
In this thesis, we consider the following closely related research questions:
1. What information about the surroundings should be provided to drivers for better
situation awareness?
2. How should this information be presented to drivers in a distraction-free way?
3. How should drivers interact with the proposed system?
This thesis answers these research questions. As a result, we expect to come up with
an innovative & usable design of an in-dash display for drivers, called the
Smart-Dashboard.
1.4 Smart Systems
A smart system is one that is able to analyze available data to produce meaningful
and intelligent responses. Smart systems use sensors to monitor their environment and
actuators to effect changes in it. They can utilize available context information to
develop meaningful responses using Artificial Intelligence techniques. They have very
useful applications in real life, ranging from smart things to smart spaces to a smart
world; for example, a smart car continuously monitors the driver for drowsiness and
issues an alert well in time.
Smart systems are essentially context-aware, intelligent, proactive and minimally
intrusive. A brief description of these basic features is given below.
1.4.1 Context-awareness
A system is context-aware if it uses some or all of the relevant information to
provide a better service to its users, i.e. it can adapt to its changing context of
use. A context-aware system is expected to be more user-friendly, less obtrusive, and
more efficient [315]. A system that needs to be minimally distractive has to be
context-aware, because a context-aware system is sensitive & responsive to the
different settings in which it can be used [318] and hence requires very little input
from the user. It needs to capture context information, model it, generate an adaptive
response, and store the context for possible future use.
Context-aware systems also need to maintain historical context information for
finding trends and predicting future values of context [6]. For example, we can
predict the future location of an automobile if we know a few of its recent locations.
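As a minimal illustration of this idea (not part of the proposed system), the sketch
below extrapolates a vehicle's next position from its two most recent timestamped GPS
fixes, assuming roughly constant speed and heading; the fix values and function names
are hypothetical.

    from typing import Tuple

    Fix = Tuple[float, float, float]  # (timestamp_s, latitude, longitude)

    def predict_position(prev: Fix, last: Fix, horizon_s: float) -> Tuple[float, float]:
        """Linearly extrapolate the next (lat, lon) from two recent fixes,
        assuming the vehicle keeps its current speed and heading."""
        dt = last[0] - prev[0]
        if dt <= 0:
            return last[1], last[2]          # cannot infer motion, return last known fix
        lat_rate = (last[1] - prev[1]) / dt  # degrees per second
        lon_rate = (last[2] - prev[2]) / dt
        return last[1] + lat_rate * horizon_s, last[2] + lon_rate * horizon_s

    # Example: two fixes 5 s apart; predict where the car will be 10 s after the last fix.
    p1 = (0.0, 56.2100, 15.2760)
    p2 = (5.0, 56.2108, 15.2771)
    print(predict_position(p1, p2, horizon_s=10.0))

A real predictor would of course use more history and a motion model, but the sketch
shows how even a short context history supports simple prediction.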
1.4.2 Intelligence
The low-level context provided by sensors is called primary context [70]. From
primary context data we can infer related context, known as secondary context, and we
can combine several primary contexts to infer a secondary context. For example, we can
infer that the user is sleeping at home if the primary context data show that the user
is lying on a sofa or bed, the lights are off, it is nighttime, there is silence, and
there is no movement. This is, however, not a certain inference, because the user may
not be sleeping but just relaxing for a few minutes in a sleeping position.
The process of inference and extraction is complicated because there is no single
possible inference for one set of primary contexts. We need intelligent methods for
context extraction and inference in order to make context-aware applications truly
unobtrusive and smart [7]. Another major issue is the performance & time-complexity of
the reasoning process given the huge amount of context data at hand [8].
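As a rough sketch only (the thesis does not prescribe a particular inference
mechanism), the following rule combines the primary contexts from the example above
into a secondary context with a confidence score, which leaves room for the "relaxing
rather than sleeping" ambiguity; the cue names and weights are illustrative
assumptions.

    def infer_sleeping(ctx: dict) -> float:
        """Return a confidence in [0, 1] that the user is asleep,
        based on weighted primary-context cues (weights are illustrative)."""
        cues = [
            (ctx.get("posture") == "lying",      0.3),
            (ctx.get("lights") == "off",         0.2),
            (ctx.get("time_of_day") == "night",  0.2),
            (ctx.get("noise_level", 1.0) < 0.1,  0.15),
            (ctx.get("movement", 1.0) < 0.05,    0.15),
        ]
        return sum(weight for satisfied, weight in cues if satisfied)

    primary = {"posture": "lying", "lights": "off", "time_of_day": "night",
               "noise_level": 0.02, "movement": 0.0}
    confidence = infer_sleeping(primary)
    print("sleeping" if confidence >= 0.8 else "possibly just resting", confidence)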
1.4.3 Pro-activity
Context-awareness makes it possible to meet or anticipate user needs in a better
way. It is, however, very challenging to predict user behavior, because humans have
very complex motivations [9]. We need very intelligent & trustworthy prediction
techniques in order to avoid problems for the user. Context-aware systems will in the
future serve users' expectations so well as to bear out a new acronym, WYNIWYG – What
You Need Is What You Get [10].
1.4.4 Minimal User Interruption
Since human attention capability is very limited [15], smart systems must assure
minimal user interruption. A smart system minimizes annoyance by lowering the level of
input required of the user. It also learns from experience and uses that learning to
inform future decisions.
In this way, incorporating smartness in the dashboard will make it distraction-free,
context-aware, usable and easy for a driver to interact with. This would augment safe
& smooth driving and help reduce the losses caused by road incidents.
1.5 Related Studies / Projects
Road safety is an important and well-researched issue. The area is so vital that
many governmental bodies in developed countries have issued requirements for
road-safety systems. Over the last several decades, a large number of projects and
studies have been undertaken under the banner of road safety, intelligent
transportation, IVIS (In-Vehicle Information Systems) and DSS (Driver Support
Systems). There are hundreds of active projects in industry, universities, and
research centers. Most of these projects concentrate on a single aspect of the system,
such as LDW (Lane Departure Warning), while others consider only a few aspects.
In this section, we describe some representative studies and projects that are the
most important and relevant to our thesis.
1.5.1 Advanced Driver Assistance Systems (ADAS)
Driver assistance systems support drivers in driving a vehicle safely & smoothly.
They provide drivers with extra ease, decreased workload, and more focus on the road,
and hence reduce the risk of accidents [85], increasing road safety in general. They
are also known as Driver Support Systems (DSS). Examples of such systems include [11]:
• Adaptive Cruise Control (ACC)
• Forward Collision Warning (FCW)
• Lane Departure Warning (LDW)
• Adaptive Light Control (ALC)
• Vehicle-to-Vehicle communication (V2V)
• Car data acquisition/presentation (e.g. fuel-level, temperature, tire-pressure, speed)
• Automatic parking or parking assistance
• Traffic Sign Recognition (TSR)
• Blind Spot Detection (BSD)
• Driver Drowsiness Detection (DDD)
• In-vehicle navigation system
• Intelligent Speed Adaptation/Advice (ISA)
• Night vision and augmented reality
• Rear view or the side view
• Object recognition (e.g. vehicles, obstacles and pedestrians)
• Etc.
The Intelligent Car Initiative (i2010) [12][13] and the Intelligent Vehicle Initiative
(IVI) [29] are two well-known examples of large projects covering many of these
features.
1.5.1.1 Intelligent Car Initiative (i2010)
The Intelligent Car Initiative [12][13] is funded by the European Commission. Its
objective is to encourage smart, safe and green transportation systems. It also
promotes cooperative research in intelligent vehicle systems and assists in adopting
research results. Many sub-projects are funded under this initiative, such as AWAKE,
AIDE, PReVENT and eSafety.
The AWAKE project (2000-2004) [14] provides an integrated system for driver
fatigue monitoring (sleepiness, inattention, stress, etc.). It set up a multi-sensor
system which fuses information provided by a number of automotive sensors, such as an
eyelid sensor, gaze sensor and steering grip sensor, with additional information such
as wheel speed and steering wheel movements. Other similar EU-funded projects include
SENSATION [16], DETER-EU [17], the PROCHIP/PROMETHEUS program [18] and the SAVE
project [19].
AIDE (2004 to date) [20] is an acronym for adaptive integrated driver-vehicle
interface. The main objectives of the AIDE project are to maximize the efficiency of
ADAS, to minimize the level of distraction and workload imposed by IVIS, and to
facilitate mobility & comfort by using new technologies and devices. AIDE aims at
developing a special dashboard computer to display important information for drivers,
but it does not explain how the driver is expected to process all the displayed
information.
PReVENT (2004-2008) [21] is one of the major road-safety initiatives, spending
€55 million over four years. It aimed at developing and demonstrating preventive
safety systems for European roads, and at creating public awareness of
preventive/active safety.
eSafety [22] aims at reducing the number of road accidents in Europe by bringing
Intelligent Vehicle Safety Systems that use ICT (information & communication
technologies) to market. A similar recent project is the Safety In Motion (SIM) [23],
which targets motorcycle safety.
Some other relevant projects financed by EU/EC include ADASE (Advanced
Driver Assistance Systems in Europe) [24], APROSYS (Advanced Protection
Systems) [25], EASIS (Electronic Architecture Safety Systems) [26], GST (Global
System for Telematics) [27], HUMANIST (HUMAN-centred design for Information
Society Technologies) [28], and SENECa [55], which demonstrates the usability of speech-based user interfaces in vehicles.
1.5.1.2 Intelligent Vehicle Initiative (IVI)
The Intelligent Vehicle Initiative (IVI) [29] was funded by the U.S. Department of
Transportation (1997-2005). It aimed at preventing driver distraction, introducing
crash avoidance systems, and studying the effects of in-vehicle technologies on driver
performance.
1.5.2 In-Vehicle Information Systems (IVIS)
IVIS are also known as Driver Information Systems (DIS). An IVIS combines many
systems, such as communication, navigation, entertainment and climate control, into a
single integrated system. They use an LCD panel mounted on the dashboard, a controller
knob, and optionally voice recognition. IVIS can be found in almost all the latest
luxury vehicles, such as those from Audi, BMW, Hyundai, Mercedes, Peugeot, Volvo,
Toyota and Mitsubishi.
One of the earliest research efforts in this area was sponsored by the US Department
of Transportation, Federal Highway Administration, in 1997. The goal of their
In-Vehicle Information Systems (IVIS) project [30] was to develop a fully integrated
IVIS that would safely manage highway & vehicle information and provide an integrated
interface to the devices in the driving environment. Although the implementation was
done only on personal computers connected via an Ethernet LAN, it produced useful
results. Similarly, HASTE [57] is a recent EU-funded project that provides guidelines
and tests the fitness of three possible environments (lab, simulator and vehicle) for
studying the effects of IVIS on driving performance.
An IVIS can also make use of guidance & traffic information produced by systems
managed by city administrations in developed countries. Examples include the
Tallahassee Driver Information System [31] and the California Advanced Driver
Information System (CADIS) [32][33].
1.5.3 Warning Systems
Recently, a number of in-vehicle systems have been developed that either alert the
driver to forthcoming danger or try to improve driving behavior. Such systems can be
considered a subset of IVIS/ADAS because they handle only one or a few features. In
this section, we briefly survey some of the prominent warning systems.
Night Vision Systems [34] use a Head-up Display (HUD) to mark an object that is
outside the driver's field of vision. The mark on the HUD follows the object until the
point of danger has passed, so the driver can easily judge the speed, direction and
distance of the object. Next-generation systems will also be able to recognize objects
actively.
The Dynamic Speedometer [35] addresses the problem of over-speeding: it actively
considers the current speed limit and redraws a dynamic speedometer in red on the
dashboard display. Other similar projects include the Speed Monitoring Awareness and
Radar Trailer (SMART) [36], which displays the vehicle speed and the current speed
limit; Behavior-Based Safety (BBS) [37], which displays the driver's performance
regarding speed; and the Intelligent Speed Adaptation project (ISA) [38], which
displays the current speed limit.
Road Surface Monitoring systems detect and display the surface condition of the
road ahead. This is a relatively new area of research in ADAS. A recent project,
Pothole Patrol (P2) [39], uses GPS and other sources to report potholes on the route.
Other examples include CarTel [40] and TrafficSense [41] by Microsoft Research.
Safe Speed And Safe Distance (SASPENCE) [42] aims at avoiding accidents due
to speed and distance problems. This project was carried out in Sweden and Spain in
the year 2004. It suggests visual, auditory & haptic feedback, and provides alternatives
to develop a DSS for safe speed and safe distance. Similarly, Green Light for Life [54]
uses an In-Vehicle Data Recorder (IVDR) system to promote safe driving in young
drivers. It uses messages, reports and an in-vehicle display unit to provide feedback to
the young drivers.
Monitoring driver vigilance or alertness is another important factor in road
safety. A recent prototype system for Monitoring Driver Vigilance [43] uses computer
vision (an IR illuminator and software implementations) to determine the level of
vigilance. The automotive industry uses other methods for monitoring driver vigilance:
for example, Toyota uses steering wheel sensors and a pulse sensor [44], Mitsubishi
uses steering wheel sensors and measures of vehicle behavior [44], Daimler Chrysler
uses vehicle speed, steering angle, and vehicle position obtained from a camera [46],
and IBM's smart dashboard analyzes speech for signs of drowsiness [47].
In-Vehicle Signing Systems (IVSS) read road signs and display them inside the
vehicle for the driver's attention. A recent example of such a system is the one
prototyped by National Information and Communications Technology Australia (NICTA) &
the Australian National University [48]. An IVSS may use one of the following three
techniques: 1) image processing or computer vision [48][49][50], 2) digital road data
[51][52], and 3) DSRC (Dedicated Short Range Communications) [53].
The Safe Tunnel project [56] simulates tunnel driving and recommends the use of a
highly informative display to inform drivers of incidents. Such a display might
increase the threat of distraction, but it might also significantly improve safety.
Recently, some warning systems have been developed that use existing
infrastructure, such as GSM, GPS, and sensors deployed in the road, in cars or in
networks. Examples include NOTICE [58], which proposes an architecture for warning
about traffic incidents; Co-Driver Alert [59], which provides hazard information; and
the Driving Guidance System (DGS) [60], which provides information about weather,
speed, etc.
1.5.4 Navigation and Guidance Systems
Route guidance and navigation systems are perhaps the oldest and most commonly
provided features in luxury cars. They use interactive displays and speech
technologies. Hundreds of such systems and projects exist; examples available in the
US include TravTek, UMTRI, OmniTRACS, Navmate, TravelPilot, the Crew Station Research
and Development Facility, and the Army Helicopter Mission Replanning System [61].
On the other hand, parking guidance or automatic parking is a very new area of
research. An Advanced Parking Guidance System (APGS) [62] lets a vehicle steer itself
into a parking space. Such systems use an in-dash screen, button controls, a camera
and multiple sensors, but need very little input from the driver. Toyota, BMW, Audi
and Lexus already use APGS in their luxury cars, and others are expected to do so
soon.
1.5.5 Mountable Devices and Displays
Users of ordinary vehicles, no older than 1996, may use mountable devices and
displays to supplement an IVIS. These devices connect to the diagnostic port located
under the dashboard. They can collect useful data about the vehicle and display it for
the driver, such as speed, engine RPM, oxygen sensor readings, fuel economy, air-fuel
ratio, battery voltage, error codes and so on. Examples of such devices include
DashDyno SPD [63], CarChip Fleet Pro [64], DriveRight [65], ScanGaugeII [66], and
PDA-Dyno [67].
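As a rough illustration of what such a device does (this is not described in the
thesis), the sketch below polls a few standard OBD-II parameters over the diagnostic
port using the third-party python-obd library; the library choice, the automatic port
detection, and the assumption that the vehicle exposes these parameters are all
assumptions made here.

    # A minimal sketch, assuming the python-obd package and an OBD-II adapter are available.
    import obd

    connection = obd.OBD()  # auto-detects the adapter on a serial/USB/Bluetooth port

    # Standard parameters most post-1996 vehicles expose over the diagnostic port.
    for command in (obd.commands.SPEED, obd.commands.RPM, obd.commands.FUEL_LEVEL):
        response = connection.query(command)
        if not response.is_null():
            print(command.name, response.value)  # e.g. "SPEED 42 kph"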
Virtual Dashboard [68] is an important device developed by Toshiba. It is perhaps
the most promising solution for information needs and infotainment. It consists of a
real-time display controller (TX4961) and a dashboard display. Virtual Dashboard can
handle all the information according to the current context. It can change the display to
show a speedometer, tachometer, rear-view, navigation maps, speed or fuel-level etc.
1.5.6 Vision-based integration of ADAS
As mentioned previously, vision-based integrated ADAS use cameras to provide
multiple driver-assistance services. They are becoming very popular because of their
low cost and independence from infrastructure outside the vehicle. For example, one
intelligent and integrated ADAS [11] uses only 2 cameras and 8 sonars, while others
make use of cameras only [71][72][73][74][75][76][84]. They present information
through an in-vehicle display. Specialized devices that can efficiently process visual
data are being introduced [77][78]. To give drivers better situation awareness,
various systems have been introduced that display the surrounding environment of the
vehicle [79][80][81][82][83]. These recent developments show that the future lies in
vision-based integrated ADAS. Current research mainly focuses on building traditional
driver assistance functions on cameras, and then combining these individual functions
into an integrated ADAS.
1.6 Analysis of the Related Projects
After a careful analysis of the related projects described in the previous section
(section 1.5), we find that the vision-based integrated ADAS [79][80][81][82][83],
AIDE [20] and Virtual Dashboard [68] are the closest to our proposed project. However,
they still leave a large number of research questions unanswered, for example:
1. Why use a single integrated display (multipurpose & adaptive) instead of several
displays (one for each function)?
2. Where should this integrated display be placed for best performance?
3. What level of detail should be presented to the driver?
4. How is the driver expected to process all the displayed information?
5. How should the information to be shown be prioritized?
6. How should the driver be alerted to forthcoming danger using visual, auditory, and
tactile warnings?
7. How can information overload be avoided at decisive moments?
8. When should control be transferred from the driver to the system for automatic
execution of a function?
9. How can history be used to make the system truly unobtrusive?
Based on this research gap, we formulated our three research questions (see section
1.3), which comprehensively cover all of the above issues. As a result of this thesis,
we expect to come up with an innovative & usable design of the Smart-Dashboard.
In the next chapter, we present the vision of ubiquitous computing (UbiComp) and
a discussion of UbiComp systems design.
2 BASICS OF UBIQUITOUS COMPUTING
Computing began with the mainframe era, where machines were fixed in place and the
UNIX finger command was used to locate any machine. Then came the portability era,
where machines could be moved from place to place, and the idea of profiles was
introduced to serve users better. More recently we have entered the mobility era,
where machines are used while on the move. Mobile computers, such as PDAs, Ubiquitous
Communicator terminals, cell phones, electronic tags, sensors and wearable computers,
are becoming popular [86]. The trends are very clear: computing is moving off the
desktop; devices are becoming smaller in size but greater in number; and computation
is moving from personal devices to the smaller devices deployed in our environment.
Interaction with these embedded & mobile devices will become such a normal activity
that people will not even realize they are using computers. This is the era of
ubiquitous & pervasive computing, where users can demand services anywhere, at any
time, while they are on the move [87].
2.1 What is Ubiquitous & Pervasive Computing?
Back in 1991, Mark Weiser [88], the father of Ubiquitous Computing (UbiComp),
introduced the idea of invisible computers embedded in everyday objects, replacing
PCs. He emphasized the need to unify computers and humans seamlessly in an environment
rich with computing. In such an environment, computers would be everywhere, vanishing
into the background and serving people without being noticed. Traditional computers
are frustrating because of information overload; ubiquitous computing can help solve
this issue, making "using a computer as refreshing as taking a walk in the woods" [88].
UbiComp brings computing into our environment to support everyday activities.
Computers are becoming smaller and more powerful; as described by Moore in the 1960s,
the number of transistors per chip, and with it the power of microprocessors, doubles
roughly every 18 months [45]. At the same time, we have seen tremendous developments
in sensor technologies. Sensors can sense our environment in ways that correspond to
the five human senses (sound, sight, smell, taste & touch). We can embed these small
sensors into real-life objects to make them smart, and these smart objects will put
ambient intelligence into every aspect of our lives. In this way, computing will be
everywhere, augmenting our daily activities in homes, bathrooms, cars, classrooms,
offices, shops, playgrounds, and public places. The enabling technologies for
ubiquitous and pervasive applications are wireless networks and mobile devices.
The National Institute of Standards and Technology (NIST), in 2001, defined pervasive
computing as an emerging trend towards [89]:
• Numerous, casually accessible, often invisible computing devices
• Frequently mobile or embedded in the environment
• Connected to an increasingly ubiquitous network structure
However, the NIST definition attempts to give a generic explanation for two distinct
terms, i.e. pervasive computing and ubiquitous computing.
Kostakos et al. [90] describe the features of ubiquitous & pervasive systems in urban
environments based on location, technology, information and degree of publicness
(private, social, or public).
Figure 2.1: Publicness Spectrum and the Aspects of Pervasive Systems [90]
According to Figure 2.1, for example, a park is a public place where a video wall
can be used to display a train timetable; an office is a social place where a
television can be used to display business strategies; and a bedroom is a private
place where a PDA can be used to view personal information. In this thesis, we take
the car as a social place where a Smart-Dashboard will be used to display context
information for drivers.
2.1.1 Ubiquitous vs. Pervasive Computing
Ubiquitous computing and pervasive computing are two different things, but people
use the terms interchangeably nowadays. They seem similar, but they are not [91].
Table 2.1 summarizes the differences between ubiquitous computing and pervasive
computing.
Table 2.1: Differences b/w Ubiquitous Computing & Pervasive Computing

Aspect           | Ubiquitous Computing                                               | Pervasive Computing
Meaning          | Computing everywhere                                               | Computing diffused throughout every part of the environment
Devices involved | Computing devices embedded in the things we already use            | Small, easy-to-use, handheld devices
Purpose          | Computing in the background                                        | Accessing information on something
Is more like     | Embedded, invisible or transparent computing                       | Mobile computing
Main feature     | High level of mobility and embeddedness                            | Low mobility but high level of embeddedness
Initiators       | Xerox PARC (Xerox Palo Alto Research Center) [92]                  | IBM Pervasive Computing division [93]
Example(s)       | Dangling String, dashboard, weather beacon, and Datafountain [94]  | Information access, pervasive devices, smart badges etc.
We can classify computing on the basis of different features, such as mobility and
embeddedness, as shown in Figure 2.2 below.
Figure 2.2: Classification of computing by Mobility & Embeddedness [95]. (The figure
plots Embeddedness against Mobility: Traditional Computing is low on both, Mobile
Computing has high mobility but low embeddedness, Pervasive Computing has high
embeddedness but low mobility, and Ubiquitous Computing is high on both.)
It is clear from Figure 2.2 that ubiquitous computing combines pervasive computing
functionality with a high level of mobility; in this way, the two are related. Most
researchers nowadays do not differentiate between ubiquitous computing and pervasive
computing, which is why they use the two terms interchangeably. From this point
onwards, we will also use the two terms interchangeably.
2.1.2 Related Fields
Ubiquitous & Pervasive Computing is also referred to as sentient computing,
context-aware computing, invisible computing, transparent computing, everyday
computing, embedded computing, and social computing [128]. Distributed Systems and
Mobile Computing are the predecessors of Ubiquitous & Pervasive Computing, and they
share a number of features, strengths, weaknesses and problems [96]. Other closely
related fields of research are "augmented reality" [97], "tangible interfaces" [98],
"wearable computers" [99], and "cooperative buildings" [100]. What these technologies
have in common is that they move computing beyond the desktop and into the real-world
environment. The real world is complex and has a dynamic context of use that does not
follow any predefined sequence of actions. The main focal points of ubiquitous
computing are:
1. To find the mutual relationship between the physical world and the activity, and
2. To make computation sensitive & responsive to its dynamic environment.
The design and development of ubiquitous systems require a broad set of skills,
ranging from sensor technologies, wireless communications, embedded systems, software
agents and interaction design to computer science.
2.1.3 Issues and Challenges in UbiComp
When Mark Weiser [88] presented the vision of ubiquitous computing, he also
identified some of the potential challenges in making it a reality. In addition to
these, Satyanarayanan [96] and others [101] have identified a number of issues and
challenges in pervasive computing. Here is a comprehensive, but not exhaustive, list
of issues and challenges in ubiquitous and pervasive computing.
2.1.3.1 Invisibility
Invisibility requires that a system behave as the user expects, taking individual
user preferences into account, and maintain a balance between proactivity and
transparency. Ubiquitous and pervasive systems need to offer the right service at the
right time by anticipating user needs with minimal user interruption. Examples include
sending a print job to the nearest printer, or switching a mobile phone to silent mode
when the user enters a library. Applications need to adapt to the environment and
available resources according to some "adaptation strategy".
2.1.3.2 Scalability
Ubiquitous and pervasive systems need to be scalable. Scalability means enabling
large-scale deployments and increasing the number of resources and users whenever
needed.
2.1.3.3 Adaptation
Context-aware systems need sensors (hardware or software) to read changes in the
environment. They can either poll the sensors (periodically or selectively) or
subscribe to changes in context, and they may use different polling rates for
different contexts. For example, the location of a printer need not be checked as
frequently as that of a person.
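A minimal sketch of such context-dependent polling, assuming each context source
exposes a simple read() callable; the interval values are illustrative only.

    import time
    from typing import Callable, Dict

    # Per-context polling intervals in seconds (illustrative values only):
    # a person's location changes often, a printer's location almost never.
    POLL_INTERVAL = {"person_location": 5, "printer_location": 3600}

    def poll_loop(sensors: Dict[str, Callable[[], object]],
                  on_change: Callable[[str, object], None]) -> None:
        """Poll each sensor at its own context-specific rate and report changes."""
        last_value: Dict[str, object] = {}
        next_due: Dict[str, float] = {name: 0.0 for name in sensors}
        while True:
            now = time.time()
            for name, read in sensors.items():
                if now >= next_due[name]:
                    value = read()
                    if value != last_value.get(name):
                        on_change(name, value)       # e.g. update the context model
                        last_value[name] = value
                    next_due[name] = now + POLL_INTERVAL[name]
            time.sleep(1)

A subscription-based design would invert this: the sensor layer would call on_change
directly whenever a value changes, removing the need for polling altogether.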
2.1.3.4 Effective Use of Smart Spaces
Smart spaces bring the real world and computing together by embedding devices into
the environment; for example, automatic adjustment of room temperature based on a
person's profile.
2.1.3.5 Localized Scalability
Localized scalability can be attained by decreasing interactions between remote
entities. The intensity of interaction with a pervasive computing environment has to
decrease as one moves away from it; interactions between nearby entities are of more
relevance.
2.1.3.6 Heterogeneity and Masking Uneven Conditioning
In a ubiquitous computing environment, the mobile clients are usually thin, less
powerful and limited in battery capacity, while some neighboring infrastructure may
offer very powerful computing facilities. Similarly, some environments may be equipped
with better computing facilities than others. We need to bridge these differences in
the smartness of environments by utilizing, for example, the personal computing space.
This requires "cyber foraging", which means proactively detecting possible surrogates,
negotiating quality of service, and then moving some of the computation tasks to these
surrogates. Very intelligent tracking of "user intent" is needed.
2.1.3.7 Seamless Integration of Technologies
A number of technologies are available for developing ubiquitous & pervasive
systems. We may need to use several technologies in our system. For example, we
may use RFID, biometrics and computer vision in a single system. Their differing
features make one technology more appropriate for a particular kind of environment than
others. Therefore, the existence of various technologies in a pervasive
environment is inevitable, and so is their seamless integration.
2.1.3.8 Context-Awareness
There is no standard definition of ‘context’. However, any information which is
relevant and accessible at the time of interaction with a system can be called
'context' [102][103][104][105][106][107][108]. Context-aware systems use some or
all of the relevant information to provide better service to their users. A pervasive
system that needs to be minimally distracting has to be context-aware. That is, it
should be sensitive and responsive to the different social settings in which it can be used.
Context-aware systems are expected to provide the following features [109]:
• Context discovery: locating and accessing possible sources of context data.
• Context acquisition: reading context data from different sources using sensors, computer vision, object tracking, and user modeling etc.
• Context modeling: defining & storing context data in a well-organized way using any context model, such as a key-value model, logic-based model, graphical model, markup scheme, object-oriented model, or ontology-based model [109]. If different models are used in the same domain & semantics, context integration is required to combine the context.
• Context fusion or aggregation: combining interrelated context data acquired by different sensors, resolving conflicts and hence assuring consistency.
• Quality of Context (QoC) indicators: showing the Quality of Context (QoC) [110] from different sources in terms of accuracy, reliability, granularity, validity period, etc. [111].
• Context reasoning: deducing new context from the available contextual information using, for example, first-order predicates and description logics.
• Context query: sending queries to devices and other connected systems for context retrieval.
• Context adaptation: generating an adaptive response according to the context using, for example, IF-THEN rules (a small sketch follows below).
• Context storage and sharing: storing context data in a centralized or distributed place, and then distributing or sharing it with other users or systems [112].
It is important to note that the lack of standard definition of ‘context’ makes it
difficult to represent and exchange context in a universal way [113].
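
Purely as a hypothetical illustration of two of the features listed above (a simple key-value context model and IF-THEN context adaptation), consider the following Python sketch; the context values and rules are invented and are not taken from any real system.

# Key-value context model: the current context as a dictionary of attributes.
context = {
    "location": "library",    # from a hypothetical location sensor
    "noise_db": 28,           # from a hypothetical microphone reading
    "activity": "reading",
}

# IF-THEN adaptation rules: (condition over the context, adaptive response).
rules = [
    (lambda c: c["location"] == "library", "set phone to silent mode"),
    (lambda c: c["noise_db"] > 70,         "raise ringtone volume"),
]

def adapt(ctx):
    """Return the responses of all rules whose conditions hold for the given context."""
    return [action for cond, action in rules if cond(ctx)]

print(adapt(context))   # -> ['set phone to silent mode']

A real context-aware system would of course acquire these values from sensors and would use a richer context model, but the sketch shows how an adaptive response can be generated directly from IF-THEN rules over key-value context data.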
2.1.3.9 Privacy and Trust
Many users join a pervasive system on an ad-hoc basis. A pervasive system has a
very rich collection of information about user patterns. We need to share this
information with others for a better service. For example, sharing my location with
others may help them locate me quickly when needed. We need to provide reasonable
privacy and trust to the users. This may be done by using authentication, allowing
users to hide their identity, or even turning off monitoring for a reluctant user.
2.1.3.10 Ubiquitous Interaction Design
Ubiquitous & pervasive systems incorporate a variety of devices, ranging from
handheld PCs to wall-sized displays. Interfaces are transferable and are used in
changing locations by a mobile user. This has created new challenges for Human-Computer Interaction (HCI) and Interaction Design.
2.2 Designing for UbiComp Systems
Ubiquitous computing systems are used in real world environments to support day-to-day activities. These systems should have a very careful and well-informed design.
A poorly designed system will simply be rejected by the people. Applications that are
introduced after a careful study of user needs & requirements are more successful.
Different methods are available for capturing user needs, such as requirements workshops, brainstorming, use-case modeling, interviewing, questionnaires, and role-playing etc. Some innovative methods are also available especially for the design and
development of ubiquitous computing systems, such as ethnography, participatory
design, and rapid prototyping etc [131].
2.2.1 Background
Ubiquitous computing systems are essentially context-aware systems. The design
of UbiComp systems depends on how we conceive the notion of context. There exist
two contrary views of context [106][116]. One comes from positivist theory – context
can be described independently of the activity or action. Think about a discussion
happening in a classroom, for example; the discussion is an activity, while the time,
location & identity of participants are features of the context. Another view comes
from phenomenological theory – context is an emergent property of activity and
cannot be described independently of that activity. Most of the early context-aware
systems follow positivist approach, while phenomenological approach is becoming
more popular nowadays.
The phenomenological approach has a very strong position. Winograd [117] says that
something is considered as context because of the way it is used in interpretation.
Dourish [106] considers “how and why” as the key factors of context which make
activities meaningful. Zheng and Yano [115] believe that activities are not isolated;
they are linked to the profiles of its subject, object and tools used. The
phenomenologists consider practice – what people actually do and what they experience
in doing – as a dynamic process [118] [119] [120]. Users learn new things during the
performance of an activity. New aspects of environment may become relevant for the
activity being performed, which extends the scope of context. We can say that practice
combines action and meaning; and context provides a way for making actions
meaningful [106]. Ishii and Ullmer [98] put forward the idea of embodied interaction, which
relates to the ways in which the meaning of objects emerges out of their use inside
systems of practice. The invisibility of ubiquitous computing technology is not ensured
by its design, but by its use inside systems of practice [121]. That is, invisibility can be
assured by augmenting and enhancing what people already do (using pre-existing
methods of interaction) [159]. This makes applications unobtrusive, unremarkable and
hence effectively invisible. Table 2.2 summarizes the assumptions underlying the notion
of context in both approaches.
Table 2.2: Positivist approach vs. Phenomenological approach

What is context?
- Positivist approach (representational model): Context is something that describes a setting.
- Phenomenological approach (interactional model): Context is something that people actually do, and what they experience in the doing.

What do we look for?
- Positivist: Features of the environment within which any activity takes place.
- Phenomenological: A relational property that holds between objects or activities.

Main issue
- Positivist: Representation – encoding and representation of context.
- Phenomenological: Interaction – the ways in which actions become meaningful.

Relationship between context & activity
- Positivist: Separate from the activity.
- Phenomenological: Particular to each occasion of activity or action.

Activity is described by
- Positivist: Who, What, When, and Where, i.e. user ID, action, time, and location respectively.
- Phenomenological: Why and How in addition to Who, What, When, and Where [158]; Why and How represent the user's intention and action respectively.

Scope of the context
- Positivist: Remains stable during an activity or an event (independent of the actions of individuals).
- Phenomenological: Defined dynamically and not in advance.

Modeling & encoding
- Positivist: Can be encoded and modeled in advance – using tables.
- Phenomenological: Arises from activity and is actively produced, maintained and enacted during the activity – using interaction.

Example(s)
- Positivist: Dey [122], Schilit & Theimer [123], and Ryan et al. [124].
- Phenomenological: Dourish [106], Winograd [117], and Zheng & Yano [115].
We can conclude that context is not a static description of a setting; it is an emergent
property of activity. The main design opportunity is not related to using predefined
context; it is related to enabling ubiquitous computing applications to produce, define,
manage and share context continuously. This requires the following additional features:
Presentation – displays its own context, activity and surrounding resources; Adaptation –
infers user patterns and adapts accordingly [125]; Migration – moves from place to
place and reconfigures itself according to local resources [126]; and Information-centric model of interaction – allows users to interact directly with information
objects, so that information structure emerges in the course of users' interaction [127].
2.2.2 Design Models
We cannot use traditional models of software engineering for ubiquitous systems.
There are two main approaches for designing context-aware ubiquitous and pervasive
systems [106]: Representational model of context (positivist theory) which considers
context as static description of settings, independent of the activity or action; and
Interactional model of context (phenomenological theory) which considers context as
an emergent property of activity.
Most of the early systems follow representational model, while interactional model
is becoming more popular nowadays. In this section, we describe both approaches, but
our main focus will be on the second approach i.e. interactional model.
2.2.2.1 Representational Model
Dey [114] has identified a very simplified process for designing context-aware
systems that consists of the following five steps:
1. Specification: State the problem at hand and its high-level solution. This step can
be further divided into two parts:
i. Find out the context-aware actions to be implemented.
ii. Find out what context data is needed and then request it.
2. Acquisition: Install the essential hardware or sensors to acquire context data from
the environment.
3. Delivery (Optional): Make it easy to deliver acquired context to the context-aware
systems.
4. Reception (Optional): Get the required context data and use it.
5. Action: Select an appropriate context-aware action and execute it.
This model assumes that the context is a static description of settings, separate
from activity at hand, and can be modeled in advance. If we follow this model, we end
up with a rigid system that cannot fit into the dynamic environments to support real
life activities.
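
Purely as an illustration of the five steps above, the following Python sketch arranges them as a fixed pipeline; the sensor reading, delivery channel, and warning action are hypothetical stand-ins. The rigidity criticized above is visible in the code: every step is decided in advance.

def specify():
    # 1. Specification: the context-aware action and the context data it needs.
    return {"action": "warn_if_too_close", "needs": ["distance_m"]}

def acquire():
    # 2. Acquisition: read the required context from an installed sensor (stubbed here).
    return {"distance_m": 0.8}

def deliver(ctx):
    # 3. Delivery (optional): hand the acquired context to the context-aware system.
    return dict(ctx)

def receive(ctx, needs):
    # 4. Reception (optional): pick out only the context data the action requires.
    return {k: ctx[k] for k in needs}

def act(ctx):
    # 5. Action: select and execute the appropriate context-aware action.
    if ctx["distance_m"] < 1.0:
        print("Warning: obstacle very close")

spec = specify()
act(receive(deliver(acquire()), spec["needs"]))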
2.2.2.2 Interactional Model
In the interactional model (phenomenological theory), context is considered an
emergent property of activity and is described in relation to the activity at hand
[106][115]. The interactional model is used in most modern systems [106]. In this
model, the design can enable a UbiComp system to constantly produce, define, manage
and share context.
Since UbiComp systems exist in the natural environment, it is very
important to understand human activity so that we can support natural interactions of
humans in a UbiComp environment. We can use an iterative approach to designing
ubiquitous systems [129][130] to gain a better understanding of a complex problem
space. In an iterative approach, the steps are repeatedly applied until we are satisfied with
the results. These steps are briefly explained below and shown in Figure 2.3.
[Figure 2.3 depicts a cycle of three phases: domain understanding, idea formation, and prototyping.]
Figure 2.3: The iterative approach of designing UbiComp systems [130].
2.2.2.2.1 Domain Understanding (Research)
Domain understanding is a key to successful development and implementation of a
system [133]. It requires detailed study of user environment and real life setting in
which the technology or application will be used. Ethnography [132] helps us in this
first phase of the system development. It consists of observations, interviews, and
other useful tools, such as field notes, digital photographs, artifacts, and video
recordings etc [130]. We need to focus on the aspects that can be easily implemented
by system designers. This helps us avoid a gap between the ethnography and the design of
the system. Ethnography can also help us in identifying any socio-technical gap, i.e. a
gap between social practices and the technology available to support them. Identification of this
gap can help us in designing innovative technologies to fill it [137].
Ethnography involves the study of how people do their work in real world settings.
A careful ethnographic study can inform a better design and implementation of a
system. It also helps in identifying how people handle exceptions, cooperate or
compete, and accomplish their tasks, and it can inform the design of the system itself. Ethnography involves
sociologists, cognitive psychologists, and computer scientists in the design process
[134]. In this way, it creates a synergy that brings together many different aspects of the
system. An ethnographic study of the system is more useful for designers if the
ethnographer has some knowledge of designing and developing the system. However,
ethnography alone is not sufficient for successful design & development of a system
[135].
Some researchers think that ethnography is a time-consuming & costly process that
is of limited use for designers [136]. It emphasizes data collection through first-hand
participation, and organizing the data by giving meaningful explanations. These
explanations are often too lengthy to make designers understand user requirements for a
system. Therefore, they recommend using Rapid Ethnography or 'quick and dirty
ethnography' to complete it in a shorter time [183].
It is important to observe actual work practices, identify any exceptions and find
how people resolve them. To address these problems, a different style of design,
which originated in Scandinavia, is recommended. This is called Participatory Design or
Scandinavian Design [138]. It aims at involving intended users in the system design
process, and expects an effective, acceptable and useful product at the end.
The data collected during ethnographic studies must be analyzed carefully. This
will produce a clear understanding of the domain. Video recordings, if any, may
help to better capture the richness and complexity of interactions taking place in that
domain. After performing an ethnographic study of people in a natural environment, we
should be able to describe the actions they do, the information they use, the technology that
might help them complete their tasks, and the relationships between different
activities [130].
2.2.2.2.2 Idea Formation (Concept development)
This phase shows how ethnographic descriptions are able to inform the design of
UbiComp system or device. The ethnographic study enables us to form a rough sketch
of the system that can serve the user needs. It is, however, very complicated to move
from ethnographic study to the design of a new system [139].
We need to uncover the interesting aspects of the environment and then envision a
new system that may better serve the users in that environment. We need to decide
about many components of the system, such as devices to be used, sensors to be
deployed, and information to be provided etc. We may come up with a number of
different technological solutions. By observing the user activities, we can understand
how the technology is used and how it changes the activity itself.
2.2.2.2.3 Prototyping
Many ideas may emerge from the primary ethnographic study. The designers
should match a prospective solution to the target humans and their environment. After
domain understanding and idea formation, we can build some mockup prototypes,
drawings, sketches, interactive demos, and working implementations etc [129]. We
can test these prototypes and find their usability & utility. The finalized prototype may
be considered for full-scale system implementation.
We can experiment with existing technologies and design new technologies &
systems. The proposed system should be novel in its design, be able to serve the user
needs, and must be minimally intrusive. The proposed system should not only support
but also improve the user activity. The designer should keep in mind that the
environment will affect the task and therefore provide an interaction that is suitable for
the environment.
2.2.2.2.4 Evaluation and Feedback
Prototyping and role-playing [140] can help in getting user feedback and
determining the usability of new technology. The finalized prototype of the system can
be offered to the users for evaluation. A rapid prototype can help users play with the
system and provide feedback in a better way. A soft prototype can be helpful in better
design and successful implementation of the system.
If the prototype system is well designed, users may find it interesting and easy to
use. A continuous use of the system may make the users mature and they may suggest
some additional features to be included in the systems.
2.2.3 Interaction Design
UbiComp has forced us to revise the theories of Human Computer Interaction
(HCI). It extends interaction beyond the desktop containing mouse, keyboard and
monitor. New models of interaction have shifted the focus from the desktop to the
surroundings. The desktop is not how humans interact with the real world: we
speak, touch, write and gesture, and these modalities are driving the flourishing area of perceptual
interfaces. An implicit action, such as walking into an area, is sufficient to announce
our presence and should be sensed & recognized as an input. We can use radio
frequency identifications (RFIDs), accelerometers, tilt sensors, capacitive coupling and
infrared range finders to capture user inputs. We can make the computing invisible by
determining the identity, location and activity of users through their presence and usual
interaction with environment. Output is distributed among many diversified but
properly coordinated devices requiring limited user attention. We can see new trends
in display design. These displays require less attention like ambient displays (Dangling
String [149], Ambient ROOM [148], Audio Aura [147] etc). We can also overlay
electronic information on the real world to produce augmented reality [97]. Physical
world objects can also be used to manipulate the electronic objects as in graspable or
tangible user interface (TUI) [98]. All these things have made it possible to have a
seamless integration of physical and virtual world [128].
Theories of human cognition and behavior [150] have informed interaction design.
The traditional theory of the Model Human Processor (MHP) stressed internal
cognition driven by three autonomous but co-operating units of sensory, cognitive,
and motor activity. However, with the advances in computer applications, designers
now take into account the relationship between internal cognition and the external
world. Three main models of cognition are providing bases for interaction design for
UbiComp: activity theory, situated action, and distributed cognition [153].
2.2.3.1 Activity Theory
Activity theory [151] builds on the notions of goals, actions, and operations, which is
very close to the traditional theory. However, goals and actions are flexible, and an
operation can shift to an action depending on the changing environment. For example, the car-driving operation does not require much attention from an expert driver; but in rush
hours & bad weather, it needs more attention, which results in a set of careful actions.
Activity theory also highlights transformational properties of artifacts. This
property says that objects, such as cars, chairs, and other tools hold knowledge and
traditions, which determine the users’ behavior [152]. An interaction design based on
activity theory focuses on transformational properties of object, and the smooth
execution of actions and operations [128].
2.2.3.2 Situated Action
Situated action [154] highlights unplanned human behavior and says that
knowledge in the world constantly shapes the users' actions, i.e. actions depend on the
current situation. A design based on this theory would aim to impart new knowledge
to the world that would help shape the users' actions, for example, by constantly
updating the display.
Our proposed Smart-Dashboard design is also based on situated action theory. A
driver constantly changes her behavior in response to changing road conditions, traffic
information, road signs, and weather conditions.
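
As a hedged sketch of this situated-action view (not the actual Smart-Dashboard implementation), the following Python fragment simply rebuilds the display content from the current situation whenever the situation changes; the situation fields and display text are invented for illustration.

def render_dashboard(situation):
    """Build the display content purely from the current situation (no fixed plan)."""
    lines = [f"speed limit: {situation['speed_limit']} km/h",
             f"visibility: {situation['visibility']}"]
    if situation["vehicle_in_blind_spot"]:
        lines.append("! vehicle in blind spot")
    return "\n".join(lines)

# Two successive situations; the display is re-rendered each time they change.
for situation in [
    {"speed_limit": 110, "visibility": "clear", "vehicle_in_blind_spot": False},
    {"speed_limit": 50,  "visibility": "fog",   "vehicle_in_blind_spot": True},
]:
    print(render_dashboard(situation))
    print("---")

The point of the sketch is that the knowledge presented to the driver is produced from the situation itself, which is how a design based on situated action would continuously shape the driver's actions.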
2.2.3.3 Distributed Cognition
Distributed cognition [155][157] considers humans as part of a bigger system and
stresses collaboration, where many people use many objects encoded with
necessary information to achieve system goals. For example, many people
(crewmembers) use many tools to move a ship into port.
An interaction design based on distributed cognition stresses designing for
larger system goals, encoding information in objects, and translating that information
by different users [128].
2.3 Issues in UbiComp Design
Designing is the act of giving form to something, either from scratch or by
improving an existing object or process. There are many types of design, such as user
interface design, graphic design, web design, interaction design, industrial design, and
user-centered design etc.
In this section, we discuss several issues regarding designing for UbiComp
systems.
2.3.1 What and When to Design?
When we observe that there is an urgent need or want for something, we have an
opportunity to carry out a design to satisfy that need or want. In UbiComp, the design
must be informed by the user's need or want, and not that of the designer. Although
there is space for creativity, the designer should focus on the user's need or want
and the system goals.
2.3.2 Targets of the Design
The first and the prime target of UbiComp system design is the anticipated user. It
is very useful to let the anticipated users draw a sketch of the device/system they want
[141]. This sketch can be useful for designer to realize a system that is simple and
useful.
A second target of design is the user-environment that directly affects the user.
Different environments have different characteristics, such as open (where information
can flow), close, harsh, gentle etc. The designer has to know the user-environment that
may change from time to time.
The last target of design is the device. The designer should make sure that all
devices serve their purpose for the user unobtrusively. The devices which require
much of the user attention, e.g. mobile phone, are obtrusive and do not allow users to
pay attention to any other task.
2.3.3 Designing for Specific Settings – Driving Environment
A system for drivers should have a design that is easy to use and requires very little user
attention and time to complete a task [143]. A distraction of only a few seconds may
result in a fatal road accident. For example, a large text message on a screen requires
much attention from the user to read, and hence should be avoided.
Secondly, not all the drivers are well educated and computer literate. Therefore,
the system should require little or no training, troubleshooting and administration.
Thirdly, it should not have a major effect on the driving activity itself, i.e., it should let
drivers drive their cars as they have always done unless there is a major problem. The
system should fit into the driver's environment rather than imposing itself like an office
system. It should not only accommodate the wide range of drivers' activities but also
support them. The system should provide an interface to connect different devices that
may be used in cars, such as mobile phone, to make drivers’ life easy.
Fourthly, such a system should have low cost especially when it is being
introduced as an add-on; an expensive system may not be bought and used by drivers.
Finally, and most importantly, the system must follow the guidelines set by
governmental bodies that issue requirements for systems regarding
road safety [142][143].
2.3.4 UbiComp and the Notion of Invisibility
Ubiquitous means omnipresent or everywhere, whereas invisible means unremarkable or unnoticed. At first, ubiquity and invisibility look like two conflicting terms,
but actually they are not. Ubiquity entails embedding computing into everyday objects,
and invisibility entails using pre-existing & unobtrusive methods of interaction to
augment what people already do [159].
Invisibility implies offering the right service at the right time by anticipating user needs
with minimal user interruption. Designers should keep in mind that the system may
be literally visible but should be so unobtrusive that it becomes effectively invisible or
unnoticeable.
2.3.5 Calm Technology
Some technologies are so obtrusive that they do not fit into our lives, e.g. video
games, alarms etc. However, some are calm & comfortable, such as a comfy pair of
shoes or a nice pen. What makes the difference is how they engage our attention.
A calm technology [146][149] uses both the centre and the periphery of our
attention and moves back and forth between the two. The periphery is what we are attuned to
without giving it explicit attention, e.g. the noise of the engine when driving a car. A thing
in our periphery at one moment may be at the centre of our attention at the next, e.g. an
odd noise from the car engine catches our attention. Calm technology moves easily
between the periphery and the centre, making it possible to use many more things at a
time. We can take control of something by re-centering it from the periphery.
We need to design for the periphery so that we can access and use technology
without being dominated by it.
2.3.6 Embodied Interaction
We have seen a transition from electrical interfaces to symbolic interfaces to textual
interfaces to graphical user interfaces (GUI). The improved power of computers and
the increasing contexts of their use call for new ways of interacting with computers that
are better tuned to our abilities and needs.
Paul Dourish [145] introduced the idea of embodied interaction. Tangible, physical and
social approaches to computing suggest interacting directly through physical objects in
the way we experience the everyday world, rather than through a GUI and interface devices such
as a mouse. This gives the idea of embodiment, which says that things are embodied
in the world and hence interaction depends on the settings in which it occurs.
2.3.7 Limitations of Ethnography
The designers should be aware of the limitations of ethnography [136] to avoid
pitfalls. Despite its limitations, ethnography provides us better understanding of
domain, tells us about how people use a system, and what additional features they need
in a system.
2.3.8 Prototyping
Prototyping provides anticipated users with an opportunity to use a system before its
full-scale implementation. However, prototyping is expensive, time-consuming, and
confusing for designers. A better prototype can be designed through better domain
understanding, selecting real users, and testing the prototype in real settings.
2.3.9 Socio-Technical Gap
A gap between social practices and the technology available to support them is called
the socio-technical gap. This gap should be known to the designers as well as the users, so that
they realize what the available technology cannot support. Sometimes a supporting
technology may be available but is so obtrusive that it cannot be used in a
UbiComp system.
2.3.10 Hacking
Sometimes, users explore devices or systems to find innovative uses that
were not even anticipated by their designers; this is called hacking. Here, hacking
does not mean letting users breach security or take illegal control of the devices or
systems.
A good design allows hacking, i.e. it allows users to find some innovative use of
the device or system. A user feels an emotional attachment to a device or system that
he/she has hacked for some innovative use [144]. One example of a hackable system is
email, where users have found many uses other than sending a message, such as using
email space as online file storage, sending junk messages, spreading viruses,
unsolicited ads, and scams etc.
2.4 UbiComp and Smart-Dashboard Project
Ubiquitous Computing strives to bring computing into every part of human life.
From current trends in computing [86][87], we can predict that computing will be
everywhere in our lives within a few years. Automobiles have also benefited a lot from
advancements in computing and sensing technologies. A modern car comes with a
number of embedded microprocessors and sensors [69]. To make driving safer and
more enjoyable, new systems for in-car use are being introduced regularly. In our design,
we plan to use sensors to keep an eye on the driving environment and provide relevant
information to drivers in their car as and when needed. This can play a positive role in
improving driver awareness and performance.
In the next chapter, we perform ethnographic study of how people drive their cars
and the factors affecting their actions while driving.
2.5 Conclusions and Future Directions
Ubiquitous Computing suggests a natural way of human-computer interaction. It
also encourages system designers to consider new interaction models, such as
gesture, sound and touch. It brings computers into the human world that already exists
instead of pulling humans into the virtual world of computers. It is becoming a
technology that will be calm and comfortable for its users.
Mobile computers, such as PDA’s, cell phones, electronic tags, sensors and
wearable computers are becoming popular. Wireless networking technologies, such as
Bluetooth, GSM, WLAN, WiFi, and WiMax are becoming more ubiquitous.
Almost all the future applications, services and devices will be context-aware.
Currently, there are no standard context modeling and query languages available.
Resource discovery, use of historical context, learning, and security are the least
supported features of current context-aware systems. We need to develop standard
context-modeling scheme, communication protocol, system architecture, and
interaction model.
3 ETHNOGRAPHIC STUDIES
In order to design technologies for natural interactions of humans, it is very
important to understand human activities in real world settings. A number of
methodologies have been developed in social sciences, such as ethnography in
anthropology, for better understanding of activities & social settings. Ethnography
deals with the study of human activities in real world settings. It gives a detailed
description of “context and evolution” of human interaction, i.e. it describes human
activity as well as the human experience while doing it. Ethnography is a participatory
approach in which the ethnographer becomes part of a setting and observes the activity
that is taking place in the setting for an extended period of time and then reports it in
writing. An ethnographer applies a number of tools to capture rich understanding of
real life setting and activities taking place in it. These tools include observations,
interviews, field notes, questionnaires, digital photographs, artifacts, and video
recordings etc [130]. Although ethnography is time-consuming, confusing, insufficient
and costly [135][136], a careful analysis of ethnographic results can provide very
useful ‘hints’ for UbiComp system design.
Ethnography is briefly introduced in the next section in order to develop better
understanding of the origin of ethnography and its role in HCI & UbiComp system
design.
3.1 Introduction
Sociology, anthropology and ethnography are related disciplines, which hold an
overlapping relationship. Sociology is a social science that deals with the study of the
development, structure, and functioning of human society; anthropology is a social
science that deals with the study of humankind, especially the study of societies and
cultures and human origins; and ethnography is a branch of anthropology that
provides scientific description of peoples and cultures [167]. Where, anthropology
passively records what members of other cultures do, ethnography requires active
participation in everyday life to realize what members of those cultures actually
experience by their actions. Ethnography urges to use long-term and devoted fieldwork
through participatory observation instead of surveys and interviews.
Anthropology started in the 19th century during Western expansionism, when it was
used for recording the quickly shrinking cultures of Native Americans. Within the
discipline of anthropology, ethnography started in the early part of the 20th century, around
World War I, primarily with Bronislaw Malinowski's work on the Trobriand
Islands [168]. He lived the life of the Trobrianders for a few years and studied the culture
and practices of the local population. He experienced what they experienced; how they
experienced; and their reaction to such experiences. In this way, he was able to find
“the member’s point of view”, and this fieldwork provided foundations for modern
ethnography. Since then, ethnography is being used in many other fields with different
flavors and intensities. One historical example of such use is in the work of the
Chicago School sociologists (Robert Park and others) in which they conducted an
inquiry into the American urban life [160]. Other recent examples include research
into crimes [169], drugs [170], politics [171], and technology [172] etc.
Recently, ethnographic methods have been used by researchers in HCI and
UbiComp design [160][161]. Ethnography was first used in Computer-Supported
Cooperative Work (CSCW) to understand the social organization of activity, and then in
Participatory Design (PD) to find employees' views on changes in working conditions
due to computerization. Through PD and CSCW, ethnographic methods became popular
with HCI and UbiComp researchers. What makes ethnographic methods popular in
these fields is their potential to capture the complexity of real world settings and the use of
technology in that context.
Ethnographic study can offer major insight and benefit for HCI research, including
implications for design. However, an exclusive emphasis on implications for design should be
avoided because the valuable material lies elsewhere. Ethnographic studies may
lose many of their potential benefits when performed for a specific purpose such as
"implications for design". Ethnography is a multi-sited process and should be used for
multi-sited processes [173]. It is possible that some ethnographic work may not present
any "implications for design" but still presents valuable guidance for how to think
about the implications for design [174][175][176][177][178][179].
Ethnography has two types of contributions: empirical (e.g. observations) and
analytic (e.g. interpretations). The implications for design are derived from the analytic
aspects of ethnography and not from the empirical ones, i.e. a careful analysis of
ethnographic results can provide useful 'hints' for system design. In this way, the
movement from ethnography to design is a conceptual and creative move.
Paul Dourish [160][161] has identified the following four major problems with
ethnography when it is used (or intended to be used) for design:
1. The marginalization of theory: Ethnography is commonly mistaken as a field
technique for gathering and organizing qualitative data. However, ethnographies
are basically interpretive texts and give us not only observations but also the
relationships between them. Ethnography helps us understand member’s
experience through their interactions with the ethnographer. Therefore, we can say
that ethnographies are descriptive texts about the culture, the cultural view from
which it is scripted, and the target audience.
2. Power relations: The importance of ethnography has been undervalued,
politically. There is a difference of power between engineering and social
sciences, which is clearly visible in the relative size of research funding in these
two fields. Nonetheless, we should not ignore interdisciplinary role of
ethnography where it is really in service to the other disciplines.
3. Technology & practice: Ethnography is mistakenly assumed to be a point of
mediation between everyday ‘practice’ and the technical ‘design’. However,
ethnography rejects this separation and assumes the fact that practice provides a
form & meaning to technology. A good design in HCI will not only give a form &
meaning to technology but also cover appropriation (a dynamic process of
inclusion & evolution of technologies, practices & settings). Only poorly designed
technologies need adaptation & appropriation.
4. Representation & interaction: It is generally believed that ethnography can
highlight the practices of specific people. However, ethnography can do even
more i.e. it also finds the operational principles by which these practices are
produced, shared, re-produced and changed. Moreover, ethnography is often
viewed as “scenic fieldwork” which focuses on moments i.e. it describes what
happened in the past. We can extract different conclusions from these historical
tales. However, the alternative view of ethnography is that it is “a model for
understanding social settings". Therefore, what is important is not the description of what
happened, but the descriptive form which organizes & connects these past
moments.
There are some methods (mistakenly) labeled as "discount ethnographies" and
proposed as alternatives to ethnography, such as interview-based Contextual Inquiry
[180] and survey-based Cultural Probes [181][182]. At first, these two methods look
similar to ethnography, but actually they are quite different. They focus only on
implications for design, and one can move directly to the design phase after this step.
However, in reality, these methods are very limited and they fail to capture what an
ethnographic study can capture. Therefore, we should not consider them as an alternative
to ethnography.
In short, ethnography can be very useful in HCI and UbiComp design research.
Although ethnographies are descriptive texts and may not provide us a list of
implications for design, the analytic aspects of ethnography (i.e. a careful analysis of
ethnographic results) can provide us valuable guidance for how to think about the
implications for design. However, this shift from ethnographic study to design practice
requires imagination, creativity and analytical skills.
The work done by early ethnographers has guided us to include ethnography in
modern fields such as CSCW, PD, interactive system design, HCI and UbiComp. That
is why we have performed ethnographic studies to explore how drivers try to ensure
safe and smooth driving, in order to inform the design of our Smart-Dashboard. After
performing an ethnographic study of drivers, we should be able to describe the actions
they do, the information they use, the technology that might help them complete their tasks,
and the relationships between different activities. Our key research challenges
in this thesis are to find what information about the surroundings should be provided
to the drivers for better situation awareness, and how this information should be
presented unobtrusively. Our ultimate aim is to design a Smart-Dashboard system
which may augment safe and smooth driving.
3.2 Our Approach
Ethnography, like other participatory approaches, is highly revealing,
trustworthy, and direct, and produces better results than approaches such as
laboratory-based studies, questionnaires, or interviews. However, ethnography is
time-consuming, confusing, insufficient and costly [135][136].
In order to save time, money, and effort, we used minimal participation &
observation, also known as "quick & dirty ethnography" [183], in which
short ethnographic studies are conducted to provide a general understanding of the
setting for designers. In addition to this, we used questionnaires, interviews, and video
recordings as supporting tools. Such a mixed-methods approach [162][163] was used
to compensate for the weaknesses of one method with the strengths of other methods.
We performed an ethnographic study of ten young drivers, mostly our friends, and
engaged them in face-to-face interviews. We captured four of them on video as they
drove. We also handed out questionnaires to them and to many other people around the
world through the Internet. This selection of respondents was somewhat biased, as a
random selection would not work here. The results of this study are reported in the
next section.
3.3 Results
This section describes results from ethnographic study and other supporting tools
such as questionnaires, interviews and video recordings.
3.3.1 Results from Ethnography
3.3.1.1 Introduction
This section reports ethnographic study of ten drivers, mostly our friends. We
spent 2-3 hours daily with them for two weeks in order to understand their behavior on
the road. We also involved them in ad-hoc discussions or interviews. We were
specifically interested in finding what information about the surroundings could be
helpful for drivers to avoid accidents.
3.3.1.2 Background
Annually, road accidents cause about 1.2 million deaths, over 50 million injuries,
and global economic cost of over US$ 518 billion [1]. About 90% of the accidents
happen due to the driver behavior [2][3], such as bad awareness of driving
environment, low training, distraction, work over-load or under-load, or low physical
or physiological conditions etc. This ethnographic study was conducted to find how a
driver support system (DSS) can play a positive role in improving driver awareness
and hence performance by providing relevant information using a smart dashboard as
and when needed.
3.3.1.3 Patterns discovered
From our initial study, we had identified three different occasions when drivers
had different concerns: 1) in the parking area, 2) on the highway, and 3) inside a town.
Therefore, we performed a detailed study to find out the type of contextual information
drivers needed in order to avoid accidents on each of these occasions.
We started out observing drivers in the parking area when they were driving their
cars in or out of the parking lots. We observed that they were driving very slowly &
carefully in the parking areas because cars were parked very close to each other and
any mistake would result in a collision. They seemed to make the best estimate of the
distance of their car from others' using rear-view & side-view mirrors or any other
available technological support such as sonar and a rearview camera. This shows that
parking is an activity in which drivers need to know the distance of their car from others',
and technology can play an important role here. Any technological solution for
distance estimation will be beneficial for new drivers and an additional support for
experienced ones. For example, one of the respondents told us that parking was a
challenging job for him and he had hit other cars many times in the past because he
could not estimate the exact distance using side-view and rear-view mirrors. He explained
that these mirrors were useful but any additional support such as sonar or a rear-view
camera would really help a lot. On the other hand, a mature driver with a luxury car was
comfortable with parking and told us that he had never had an accident in the
parking area because his car had sonar which beeped to warn him when he was very
close to another car. However, he had to rely on side-view mirrors in order to avoid
any collision with vehicles on other sides.
To observe drivers on the highways, we accompanied our friends when they were
traveling to the nearby town. This gave us an opportunity to observe drivers closely on
the highways where vehicles move at a higher speed. We found that drivers tried to
keep a safe speed and distance; they were keeping an eye on the direction of the
movement of neighboring vehicles; and they were especially careful about heavy
vehicles such as buses & trucks, and small-sized objects such as obstacles &
motorcyclists. One of the new drivers told us that he was really annoyed by sudden
appearance of objects, and that he felt difficulty in judging the distance, speed &
direction of other objects on the road. For a safer journey, he used to take somebody
with him while going on a long trip so that the other person would keep him aware of
the crazy vehicles around him, especially in the blind spots. On the other hand, a
professional driver was worried about some other factors such as size of the
neighboring vehicles and the decreased visibility. He told us that he tried to stay away
from heavy-vehicles because they might not move or stop quickly when needed.
Another thing that bothered him was the decreased visibility due to weather
conditions such as fog, dust, heavy rain, and snow, and due to the time of day,
such as sunrise and sunset, when some drivers left their headlights off (decreased
visibility) while others kept them on full beam even when crossing (dazzling). We also
noticed that experienced drivers could recall the locations of road defects & other
obstacles and planned ahead to avoid them.
On entering a town, we observed a clear change in drivers’ attitude perhaps due to
the volume and type of traffic inside a town. They reduced their speed because of
lower speed limits, and became alert as they were expecting more crossings, bridges,
congestion, traffic signals, pedestrians, cyclists, motorcyclists, and animals on the
road. These features of urban traffic required drivers to be aware and respond quickly.
For example, one of our respondents told us that he found it tedious to drive inside a
town, and explained that even though speed was low, the sudden appearance of any object
might result in an accident, because vehicles were so close to each other that there was
not enough space available to change lanes quickly or apply the brakes to avoid
collisions.
In short, it would be useful for drivers to know the speed, distance, relative
position, direction, and size & type of the neighboring vehicles or other objects. We
verified these observations and discovered some more facts by analyzing a number of
video recordings and by using questionnaires & interviews, where our questions were
mainly based on these observations. In the next section, we describe some of the
valuable findings from the video analysis.
3.3.2 Video Results
We captured four of our respondents on video as they drove. These videos helped
us confirm our earlier findings and discover some more patterns. However,
our video recordings did not capture any accidents. For the analysis of accidents, a
number of video recordings from traffic surveillance cameras were obtained from
online resources such as YouTube.com, a very popular video sharing website.
Examples of such videos include "Road accidents (Japan)" [165] and "Karachi
Accidents" [166].
In our video recordings, a driver starts by driving his car out of the parking area,
travels on the highway for about half an hour, enters a town, and parks his car in a
parking lot. We saw in the videos that drivers mostly kept focused on the windscreen
in front of them. Occasionally, when needed, they switched their attention to other
locations for a short time. However, they turned their attention back to the windscreen as
quickly as possible. In certain situations, drivers paid more attention to places
where they expected relevant information. For example, the rearview mirror got more
attention in the parking area, whereas side mirrors got more attention when changing a
lane on the highway. From the video recordings of drivers, we calculated Visual
attention [164] which shows “for how many times (frequency) & for how long
(duration) a driver looked at certain locations”. We found that drivers kept focused on
the windscreen in front of them for about 80% of the time while driving. Occasionally
they switched their attention to the button control area (8%), speedometer (2%),
rearview mirror (3%), side view mirrors (3%) and other areas (4%). However, these
ratios may change with changing context such as traffic conditions, weather, route,
time and the driver etc. This observation has a very important application: the
designer should consider drivers' visual attention and avoid placing useful information
outside their visual reach.
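
To make the measure concrete, the following Python sketch computes visual attention (frequency and duration per location) from a hypothetical log of annotated gaze segments; the numbers are invented and do not reproduce the percentages reported above.

from collections import defaultdict

# Each entry is (location looked at, seconds spent), as annotated from a video.
gaze_log = [("windscreen", 42.0), ("rearview_mirror", 1.5), ("windscreen", 55.0),
            ("side_mirror", 2.0), ("speedometer", 1.0), ("windscreen", 38.5)]

frequency = defaultdict(int)      # how many times each location was looked at
duration = defaultdict(float)     # how long each location was looked at in total

for location, seconds in gaze_log:
    frequency[location] += 1
    duration[location] += seconds

total = sum(duration.values())
for location in duration:
    share = 100.0 * duration[location] / total
    print(f"{location}: {frequency[location]} glances, "
          f"{duration[location]:.1f} s ({share:.0f}% of total)")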
In the videos on road accidents, we found that many accidents occurred in the
blind spots – areas of the road on the right and left of the vehicle that are not
covered by the side mirrors, forward vision, or rearview mirror (see Figure 3.1).
The major reason for these accidents was that the sudden appearance of an object in the
blind spots went unnoticed and resulted in an accident. Any technology that can make
drivers aware of objects appearing in the blind spots would help reduce such
accidents.
[Figure 3.1 depicts the areas around a vehicle covered by forward vision, side-mirror vision, and rearview-mirror vision, and the uncovered blind spots on both sides.]
Figure 3.1: Blind spots on both sides of a vehicle
In these videos, we also noted that many accidents occurred because of a sudden
change in the speed or direction (i.e., acceleration) of a neighboring vehicle.
That is, a moving vehicle either suddenly stopped or took a turn, or a stationary vehicle
suddenly moved. For example, while moving at average speed on the highway,
drivers assumed that the cars in front of them would continue moving at the same
speed. However, when a car in front suddenly stopped or took a turn, an accident
occurred because of the following drivers' inability to react quickly. Again, technology
can help drivers in reducing accidents by identifying a sudden change in the speed or
direction of the neighboring vehicles.
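
As a hypothetical sketch of how such a change could be detected (this is not the thesis prototype), the following Python fragment compares successive speed and heading samples of a tracked neighboring vehicle against invented thresholds and flags abrupt braking or turning.

DT = 0.5                    # seconds between samples (assumed)
MAX_DECEL = 6.0             # m/s^2 treated as "sudden" braking (assumed)
MAX_TURN_RATE = 25.0        # degrees/s treated as a "sudden" turn (assumed)

def sudden_change(prev, curr):
    """Return a warning string if the tracked vehicle braked or turned abruptly."""
    accel = (curr["speed"] - prev["speed"]) / DT
    turn_rate = abs(curr["heading"] - prev["heading"]) / DT
    if accel < -MAX_DECEL:
        return "vehicle ahead is braking hard"
    if turn_rate > MAX_TURN_RATE:
        return "vehicle ahead is turning sharply"
    return None

samples = [{"speed": 25.0, "heading": 0.0},   # ~90 km/h, going straight
           {"speed": 24.8, "heading": 1.0},
           {"speed": 20.0, "heading": 2.0}]   # sudden braking between samples

for prev, curr in zip(samples, samples[1:]):
    alert = sudden_change(prev, curr)
    if alert:
        print("ALERT:", alert)

In a camera-based system, the speed and heading of neighboring vehicles would have to be estimated from video-based tracking, but the flagging logic itself could remain this simple.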
We noted another important factor – misjudgment & unawareness – which
accounted for a large number of road accidents and appeared in different forms. For
example, the driver was unable to recognize smaller objects such as an obstacle, human,
bicycle, or motorcycle; the driver failed to judge another's path, size, position or
speed; the driver failed to keep in the proper lane or ran off the road; or the driver failed
to recognize a road sign, such as a stop signal at a crossing. In all of these situations,
a driving support system would improve drivers' awareness and augment their
decision capabilities by identifying smaller objects, unclear road signs, lane & path,
and other nearby objects.
To obtain some statistical data on our observations, we used a questionnaire. The
results from the questionnaire are presented in the next section.
3.3.3 Results from Questionnaire
In the questionnaire, we were particularly keen to investigate three issues: 1) how
many cars had modern safety features installed; 2) what the causes of distraction
& road accidents were; and 3) how we could augment drivers for safe and smooth driving.
We designed a questionnaire (Appendix A1) consisting of 15 questions for drivers. We
launched our survey using an online tool, www.surveygizmo.com. The survey was a
great success, receiving 192 responses from around the world including Europe,
North America, the Middle East, the Far East, and Australia. The results of this survey
(Appendix A2) are briefly described in this section.
We had an initial assumption that all the most recent cars (2008 and newer
models) would have at least one of the modern safety features such as night vision,
parking assistance, active cruise control, traffic sign recognition, and blind spot
detection, but it was found that only 60% of them had some safety features. These
modern safety features are usually available in modern luxury cars, but only a small
proportion of ordinary cars come with any of these features, as they would make the cars more
expensive. Our survey results show that 85% of our respondents had a fairly new
car but only 31% had any of the modern safety features installed. One of the basic
devices in road-safety systems is the in-vehicle display, which is usually used to show
any relevant information for drivers, and is also used for other purposes such as GPS
navigation, CD/DVD display, and the speedometer. We found that these in-vehicle
displays are not very popular yet; only 31% of the respondents had any kind of display
mounted on their car's dashboard.
One of the major reasons for serious road accidents is the driver’s distraction
[186] i.e. drawing her attention away from the road. It is very important to find out
distracting things so that proper solution could be provided through a well-designed
driver support system. One-third of our respondents (i.e. 33%) think that the most
distracting things for them are the “things outside the car” such as too much traffic,
heavy traffic (trucks & busses), people and animals crossing the road, vehicles which
are too close, vehicles which are too fast, motorcycles & bicycles, and uneven, curvy
& damaged roads. Some of the respondents think that unclear road signs and the
vehicles that are moving too slowly are also distracting. Another major reason for
distraction is the "driver's personal state", such as tiredness, sleepiness, and being
depressed or upset. Some of the optional activities taking place inside the vehicle,
such as the use of mobile phones, are also very distracting. We found that
82% of our respondents use mobile phones, laptops or other hand-held computers
while driving. Although mobile devices are commonly used to make voice calls
(96.53%) and send messages (31.21%), a few people (7.51%) use them for playing games,
photography and audio/video recording, which can be highly dangerous while driving.
It is important to note that 89% of the respondents think that reading an SMS while
driving requires much of their attention i.e. it is obtrusive and causes distraction. These
results suggest that long text messages should be avoided for conveying information
about surrounding vehicles to the drivers. Instead, we can use standard icons and
symbolic representations for quick understanding.
There are three major factors which contribute to road accidents: human factors,
road defects, and vehicle defects [185]. However, in the last few decades we have seen
a significant improvement in the quality of roads and vehicles, which leaves human
factors as the dominant factor in road accidents. In our survey, 84% of the
respondents think that the most common reasons for road accidents are human factors
such as drowsiness, tiredness, fatigue, inattention, over-speeding, drinking, changing
lanes without warning, and inability to recognize a road sign or an object etc. It is
important to note that the drivers’ fatigue, tiredness or drowsiness is one of the major
reasons for “fatal accidents” [186]. Although most of our respondents were
experienced drivers, only 31% of them could drive for more than 4 hours continuously
without taking any rest or break. A continuous long drive can be tiring and boring
which can result in a fatal accident.
As human factors are the most common reason for road accidents, technology can
be used to augment drivers for safe & smooth driving by improving their awareness
of the settings. Our respondents think that the information about neighboring objects
that can help in avoiding accidents includes speed (65%), distance (52%), relative
position (39%), direction (34%), and size & type (26%); and a combination of all
would best serve the purpose. In dangerous situations, rather than actively taking over
control from drivers, it would be much better to passively present this information
to the drivers for proper action and to issue an alert. Here, an important question is to find
the best location for displaying this information inside the vehicle. This location
should be chosen while considering drivers' visual attention and should be within their
visual reach [164]. For the majority of our respondents, the speedometer (after the
windscreen) is the easiest location to see while driving. This gives us a nice hint for
location of our proposed system. In addition to displaying contextual information, any
proactive action such as issuing a warning/alert is very helpful in avoiding accidents.
Many modern vehicles include some kind of warning or alert system for this reason. It
is interesting to note that an auditory alert is preferred by a majority of our respondents
(54%), while 51% prefer automatic actions that take over control from the driver (e.g.
automatically applying the brakes to avoid a collision). However, this automatic option can
be even more dangerous in some situations. Other kinds of possible alerts include
haptic alerts (e.g. shaking the driver's seat if sleeping) and textual/visual alerts. It is important to
note that a combination of different alerts can better serve the purpose. We will
preferably use a combination of only auditory and visual alerts in our proposed system.
In short, these results suggest that we should incorporate our system into the
speedometer area to show information on the speed, distance, relative position, direction, and
size & type of the vehicles around, and to issue auditory and visual alerts when needed.
3.3.3.1
Comments from the respondents
We included one open-ended question at the end of our questionnaire to get any
comments from the respondents. Some of the interesting comments are given here:
1. Best way is that drivers keep control and stay focused and technology may be
introduced for better results simultaneously.
2. Drivers need to be taught the importance of patience.
3. Everyone believes he/she is a better driver than he/she actually is. Notice that
everyone driving slower than you is an idiot and everyone faster is a maniac.
4. One should have a fully fit vehicle, one should leave early so not to drive fast, be
in a comfortable state of mind and physique, should not use mobile phone while
driving, watch out for others making mistakes and constantly keep looking in side
and back view mirrors.
5. Knowing about your surroundings, e.g. person standing back of car while you are
driving back will be helpful. However, at the same time please note that only give
information which is needed and only when it is needed.
6. Most accidents happen when driver is assuming something and it didn't happen.
For example, car in front stopped suddenly, or didn’t start moving.
7. The questions of this survey are specific to normal drivers, and cannot be
applicable to heavy traffic driver.
8. In third-world countries, you always have to drive with the supposition that your
neighboring drivers are reckless and will suddenly make a mistake - endangering
you or others around you. Therefore, you should be able to react quickly to avoid
any damage.
9. Making drivers aware of their environment can significantly reduce chances of
accidents. Accidents mostly occur due to negligence of drivers somehow.
3.4
Conclusions
Ethnography can be applied to capture the complexity of a real-world setting and the use
of technology within it. We applied “quick & dirty ethnography” to find what information
is needed by the drivers to avoid any forthcoming collision in order to inform the
design of our Smart-Dashboard.
We have found that modern safety features such as night vision, parking
assistance, traffic sign recognition, and blind spot detection are still left out of ordinary
cars because they would make them more expensive. About 90% of road accidents happen
due to driver behavior. Our study shows that it will be very useful for drivers if we
provide them with the information on speed, distance, relative position, direction, and
size & type of the vehicles or other objects around them.
Based on our findings, we’ll propose a simple & inexpensive system that would
provide the relevant information, and produce alerts when needed.
4
GENERAL CONCEPT DEVELOPMENT
Moving from ethnography to the design phase is very complex. However, the concept
development stage makes this transition easier by serving as a mediator between
ethnography and the design phase. It also helps us form a “rough sketch” of the
required system.
The concept development stage started as soon as we received the first response to our
questionnaire and completed the first interview. We developed concepts around the
observations, survey-responses, video-analysis, and the interviews.
We know that about 90% of accidents happen due to driver behavior [2][3],
such as bad awareness of the driving environment, low training, distraction, work
overload or under-load, or poor physical or physiological condition. We also know that
modern safety features such as night vision, parking assistance, traffic sign recognition,
and blind spot detection can be useful in making drivers aware of their surroundings so
as to avoid any forthcoming accident. However, only a small proportion of new cars
(other than luxury cars) come with modern safety features, because these features would
make the cars more expensive. This calls for an inexpensive, easy-to-use, and effective
driver support system which could be used as an add-on.
In this chapter, we describe some of the interesting aspects of the driving environment
and envision a simple & inexpensive Smart-Dashboard system that would help drivers
in safe and smooth driving.
4.1
Need for better situation awareness
From the analysis of videos on road-accidents and the statistical data obtained
from our survey, we find that the major reasons for road accidents are human factors
which include, among others, bad awareness of the driving environment, inability to
recognize other objects, and misjudgments. These human errors can be
minimized and hence performance can be improved by making drivers aware of their
context. This can be done by providing them all the relevant information inside the
vehicle as and when needed.
4.1.1
Improving Context-awareness
After careful analysis of the results of ethnographic study, we find that in order to
avoid any forthcoming accident, drivers need the following five pieces of information
about vehicles or other objects around them:
1. Their distance,
2. Relative position,
3. Relative speed,
4. Direction of movement, and
5. Size & type.
A combination of all the five parameters will provide a meaningful piece of
information in a certain context. For example, consider a scenario in which a relatively
fast moving (speed) bus (size & type) suddenly appears in your left blind spot
(position), quickly overtakes you, and enters into your lane (direction) just in front of
you (distance). An accident may occur if you are unaware of the situation or if you
react slowly. However, this situation is not dangerous if there is a safe distance or if
the distance is increasing instead of decreasing. Therefore, a combination of all the
five parameters will be used to detect dangerous situations in a certain context.
4.1.2
Detecting Blind-spots
Moreover, we find that many accidents occur in the blind spots because drivers are
not well aware of the objects suddenly appearing in that area. Therefore, blind spots
should be specially taken care of.
A simple solution to the blind-spot problem can be provided by around-view
mirrors – convex mirrors which can provide mirror-view of these blind spots.
However, around-view mirrors are not very useful because:
1. They don’t work in darkness.
2. Being convex, they greatly reduce the size of objects in the mirror image.
3. It is hard to guess distance, speed & direction of objects in the mirror image.
The blind-spot problem has been addressed by many researchers and some of the
proposed technological solutions to this problem include:
1. use of a camera attached to the back bumper of car that provides view of the area
behind the car when in reverse,
2. vehicle on-board radar (VORAD) [184] that uses a radar system to detect other
objects around a heavy-vehicle,
3. lane-changing alarm which uses infrared or ultrasound sensors to detect objects
in the blind-spot while changing a lane, and
4. other systems for blind-spot detection and warning using mobile devices, GPS,
and road infrastructure etc. (see section 1.5 for more details)
However, none of these methods provides a comprehensive solution to the blind-spot
problem. Our proposed system would provide a complete picture of the
surroundings in order to make drivers aware of their context.
4.1.3
Enhancing Object-Recognition
Good judgments and reactions are, by and large, based on better recognition and
situation awareness. From the analysis of videos on road-accidents, we also find that
many accidents occur because of the drivers’ inability to recognize smaller objects
such as pedestrians and bicycles etc. These objects are relatively harder to notice while
driving; bad weather conditions make it even worse.
It is generally observed that any collision with a smaller object usually results in a
fatal accident. Our proposed system would be smart enough to identify these smaller
objects, and warn the driver of their presence in very close vicinity.
4.2
Need for an unobtrusive system
From the results of ethnographic study, we find that drivers need to keep focused
on the road in front of them while driving. Although they may switch their attention to
other places for a short time when needed, they cannot keep their attention away from
the road for more than a few seconds, as that may cause a road accident. Therefore, the
proposed system would consider the visual attention of drivers and display the useful
information within their visual reach [164]. This will ensure minimal user
interruption.
We also find that reading a text message is a highly obtrusive activity, which needs a
lot of attention from the reader. Therefore, long text messages should be avoided, and
information should be conveyed to drivers using some standard symbols (such as
standard road signs, and commonly used symbols for different objects or events etc) or
other methods.
Keeping in mind the visual attention of drivers, we propose a Smart-Dashboard
system that would use an in-vehicle display to show required contextual-information to
drivers. This would place useful information within the drivers' visual reach in an
unobtrusive way.
4.3
Need for an easy user interaction
We know that a driver can’t engage in any time-consuming or obtrusive activity
while driving. Furthermore, it is not necessary for all drivers to be educated and
computer literate. Therefore, the proposed system should be easy to use & interact
with. However, installation and configuration might require some expertise. These
limits should be kept in mind while designing any system for drivers.
In our proposed system, the users will be able to start quickly with default settings.
However, they will be able to customize the settings by inputting the level of their
driving expertise (i.e. learner, beginner, experienced, or expert), type of warnings to
be issued (i.e. none, auditory, visual, or both audio-visual), and volume of the sound
for auditory alerts (i.e. anywhere from silent to loud). These settings will be
remembered or saved for future use until changed by the user again.
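As an illustration only, these user-adjustable settings could be represented as in the following minimal Python sketch; the field names, default values, and file name are hypothetical and are not part of the thesis design.

```python
from dataclasses import dataclass
import json, os

# Hypothetical settings object for the proposed system; names and defaults
# are illustrative only, not a specification from the thesis.
@dataclass
class DriverSettings:
    expertise: str = "experienced"    # learner | beginner | experienced | expert
    alert_mode: str = "audio-visual"  # none | auditory | visual | audio-visual
    alert_volume: int = 7             # 0 (silent) .. 10 (loud)

def load_or_default(path="settings.json"):
    """Start with defaults; restore previously saved settings if they exist."""
    if os.path.exists(path):
        with open(path) as f:
            return DriverSettings(**json.load(f))
    return DriverSettings()
```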
4.4
Conclusions
Our proposed system would make drivers aware of their surroundings by detecting
blind spots, recognizing smaller objects, identifying dangerous situations and alerting
drivers well in time. It would provide a complete picture of the surroundings in order
to make drivers aware of their driving context. The proposed system is expected to be
unobtrusive and easy to interact with for drivers. This would augment safe & smooth
driving and help reduce the losses caused by road incidents.
5
TECHNOLOGIES
In this chapter, we will provide a brief survey of technologies that can support our
proposed system which will implement a number of ADAS functions such as adaptive
cruise control, lane keeping or departure warning, forward collision warning,
intelligent speed adaptation, automatic parking, and blind spot detection etc. For
implementation of these functions, we need to capture the information on speed,
distance, relative position, direction of movement, and size & type of the neighboring
vehicles or other objects on the road. For this purpose, several technologies are
available in the market having their own pros and cons. Most commonly used
technologies include RADAR (Radio Detection And Ranging), LIDAR (Light
Detection And Ranging), Sonar (Sound Navigation And Ranging), GPS (Global
Positioning System), and Video-Based Analysis.
We will provide a brief description of these technologies and explain how they can
be used in capturing the required information for ADAS. However, vision-based
technology will be explained in more detail because we will use this technology in our
proposed system.
5.1
Radar
Radar stands for “radio detection and ranging”. It uses radio waves (frequency
range about 300 MHz to 30 GHz) to find the distance, height, direction and speed of
any stationary or moving object. It is an object detection system which is used for
airplanes, vehicles, ships, ocean waves, weather monitoring, landscape, and other
physical objects. A radar system has a transmitter which transmits in-phase radio
waves (see figure 5.1(a)). These radio waves are scattered in all directions after hitting
any object on their way. Therefore, some part of the signal is reflected back to the
sender (see figure 5.2). Due to Doppler Effect, the wavelength, and hence frequency,
of this reflected signal is modified to some extent if the object is in motion.
Figure 5.1: An example of in-phase (a) and out-of-phase (b) waves
Figure 5.2: Principle of pulse radar
The received or reflected signal is usually very weak and hence needs to be
amplified before processing. This amplification makes the radar system able to find
objects even at large distances. Through computer processing, it can find distance,
speed, direction and size of any target object. A radar detection beam used in vehicles
is normally 150 meters long and 3-4 degrees wide to each side of the vehicle.
The distance of any object from the radar transceiver can be calculated by using
time of flight (ToF) method which takes into account the time it takes the reflected
signal to reach the receiver. The following formula is used for distance calculation:
Distance = (Speed of radio wave × Time) / 2, where the speed of the radio wave is almost
equal to the speed of light (i.e. 300,000 km/sec). We know that the speed is defined as
the rate of change of distance. Therefore, the speed of a target can be measured from
few successive measurements of distance. However, modern radar systems combine
other principles with the Doppler Effect to find the speed of moving objects. A radar
transmitter is either fixed or rotates by up to 360° while sending out the radio waves.
After hitting an object, the signal is reflected back to the receiver at the same location.
This clearly tells us the direction of the target object. Similarly, a larger object will
reflect more waves than a smaller one. In this way, we can also estimate the size &
type of the target objects. One example of radar-based systems is VORAD [184] –
vehicle on-board radar - that uses a radar system to detect other objects around a
heavy-vehicle. Radar is the most feasible detection & ranging technology when cost is
not an issue.
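As a minimal numeric illustration of the time-of-flight relation above (the values and function names are invented for this sketch and are not from the thesis):

```python
C = 3.0e8  # propagation speed of a radio wave, roughly the speed of light (m/s)

def radar_distance(round_trip_time_s):
    """Time-of-flight ranging: the echo travels to the target and back,
    so the one-way distance is half of (speed x time)."""
    return C * round_trip_time_s / 2.0

def speed_from_ranges(d1_m, d2_m, dt_s):
    """Coarse speed estimate from two successive range readings, as described
    in the text; production radars refine this with Doppler analysis."""
    return (d2_m - d1_m) / dt_s

# An echo received 1 microsecond after transmission corresponds to a target
# about 150 m away.
print(radar_distance(1e-6))   # 150.0
```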
The main advantages of radar technology are its reliability, its accuracy in finding
speed & direction etc. by using Doppler shift analysis, and its ability to work in
any weather conditions. The main disadvantages of radar technology are its high cost,
inability to work in the presence of radar-absorbent materials, creation of ghost objects due
to multi-path reflections, inability to differentiate vehicles from other obstacles, and
limited field-of-view (i.e. up to 16° only) & low lateral resolution, which may cause
bad positioning of the target vehicle in some cases, as shown in figure 5.3 below.
Figure 5.3: A special case where radar is unable to find the correct target [194]
5.2
Sonar
Sonar stands for “sound navigation and ranging” and is also known as acoustic
radar. It is usually used by watercrafts (submarines and vessels etc) to navigate,
communicate with, or to identify other vessels. It can also be used in air for robot
navigation, and for atmospheric research (where it is known as SODAR – sonic
detection and ranging). The working principles of sonar are similar to radar but it uses
sound waves (infrasonic to ultrasonic) instead of radio waves. The speed of a sound
wave is almost 340.29 meters per second at sea level in normal weather. An active
sonar sends out sound waves which may be reflected by some object on their way (see
figure 5.4 below), whereas a passive sonar only listens without sending out any wave.
Figure 5.4: Principle of active sonar
By measuring strength and round-trip time of these reflected sound waves, it can
measure distance, speed, direction and size of any target object. A sonar detection
beam used in vehicles is usually very short range and can detect other objects around a
vehicle in very close vicinity. One example of such systems is Dolphin SonarStep
[187] that uses a sonar system to detect other objects within 7 feet.
The main advantages of sonar technology are its ability to find speed & direction
etc by using Doppler shift analysis, and its ability to work under water and on the
surface as well. The main disadvantages of this technology are its inability to work in
the presence of sound-absorbent materials, and inaccurate distance measurements during
wind gusts, snow, and rain, because the speed of sound varies in water,
snow, and air.
5.3
Lidar
LIDAR stands for “light detection and ranging” or “laser infrared detection and
ranging”. It is also known as LADAR (laser detection and ranging). It uses either
laser or infrared light to create an image of the environment. However, the basic
working principles are the same in both cases. Lidar has many applications in scientific
research, defense, sports, production, and automotive systems. A lidar sends out hundreds
of light pulses per second, which may hit some object on their way, and a part of the light
is reflected back to the origin. It measures the characteristics of the reflected light to
calculate speed, distance and other information of the target object. A powerful lidar
based on laser light may have a range of up to 25 kilometers.
It uses time of flight (ToF) method to calculate speed and distance of the target
object. It may also use Doppler Effect technique for calculating speed and direction of
the target object. A lidar can create image of the surrounding environment so as to
make object recognition possible and to be used as night-vision support. Lidars are
being used in vehicles in order to find the distance to other vehicles in front of them.
An example of such systems is the one produced by Sick AG [193].
The main benefits of lidar are its accuracy, low cost, ability to distinguish
relevant traffic from irrelevant things such as obstacles and tin cans, and its
ability to produce an image of the environment for night-vision. The major disadvantages
of lidar are its limited resistance to the interference by light in the surroundings,
infeasibility for bad weather conditions due to dependence on lighting and limited field
of view, and performance degradation by snow reflections as laser-based lidar operates
in the optical range.
5.4
GPS
GPS [188] stands for “global positioning system” and was developed by US
Department of Defense. GPS consists of 24-32 satellites worldwide, at least four of
which are always visible from any point on the earth. These satellites find their own
location very precisely by communicating with ground stations at known places on
earth and with each other. These satellites send their location as radio signals to the
earth. GPS covers the whole earth and is the most widely used location system,
especially, in navigation and tracking applications.
Every GPS-enabled device has a receiver which uses trilateration to find its
current position from satellite data. Lateration measures the distance of an object from
some known reference points using time-of-flight, direct touch, or signal attenuation
information. Location information in 2D needs three reference points, while location
information in 3D needs four reference points. Lateration in 2D is explained in figure
5.5 where a black dot represents the object, three white dots are the known reference
points (i.e. position of satellites), and R1, R2, and R3 are distances between the object
and known reference points.
Figure 5.5: Principle of Lateration in 2D
Any GPS-enabled device receives signals from three or more satellites and
calculates its location using lateration with an accuracy of 1-5 meters in open (outdoor)
areas. A sequence of readings can be used by a mobile object to find its speed,
direction of movement, and distance from a specific point.
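The lateration principle of figure 5.5 can be made concrete with a small numeric sketch; the code below solves the 2D case by subtracting the first range equation from the other two, which yields a linear system in (x, y). This is an illustration of the geometric idea only; real GPS receivers work with pseudoranges and must also estimate the receiver clock bias, which is ignored here.

```python
import numpy as np

def laterate_2d(p1, p2, p3, r1, r2, r3):
    """Recover the unknown position from three known reference points and the
    measured distances R1, R2, R3 (2D lateration as in figure 5.5)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]], dtype=float)
    b = np.array([r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2], dtype=float)
    return np.linalg.solve(A, b)

# Example: references at (0,0), (10,0), (0,10); the true position (3,4)
# is recovered from the three distances.
print(laterate_2d((0, 0), (10, 0), (0, 10), 5.0, np.hypot(7, 4), np.hypot(3, 6)))
```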
GPS has been the prevailing location system in navigation, path-finding, and
tracking applications. It can be used to find the speed, direction, and location of our
own vehicle but cannot directly find these characteristics for other vehicles. However,
a cooperative environment can be established to share this information among all
neighbors on the road. It is important to note that this information is not provided
fast enough to be used in driver assistance systems such as forward collision
warning. However, intelligent speed adaptation can be implemented by using a GPS
which provides current location of the vehicle, and a digital map of the area to
determine speed limits for the current location.
The main advantage of GPS is its global availability. The main drawbacks of
GPS are: GPS does not work well in urban areas with tall buildings; the accuracy of
location information provided by GPS is not very good; GPS is controlled by the US
military, which can degrade the service (Selective Availability); and receivers have a
high cost and a fairly long start-up time (up to 45 seconds) [189].
5.5
Video-Based Analysis
Recently, vision-based driver assistance systems have become increasingly popular. They
are innovative, low-cost, high-performance, usable with new as well as old vehicles,
independent of infrastructure outside the vehicle, and easy to develop, install, and
maintain. They use cameras – either charge-coupled device (CCD) or complementary
metal-oxide semiconductor (CMOS) – to get a digital image of the surroundings of a
vehicle. The captured video can be processed in real time so as to calculate speed,
direction, distance, and size & type of objects appearing in any image or frame. This
information is sufficient to implement most of the functions of an advanced driver
assistance system (ADAS). Additionally, a vision-based system opens many other
possibilities such as road sign recognition, and driver’s drowsiness detection etc. A
number of ADAS have been implemented using video-based analysis, such as
Wraparound View by Fujitsu [190] and EyeQ2™ by Mobileye [191].
In this section, we briefly explain the functioning of CCD & CMOS imagers, and
how image-processing techniques can be used to implement a vision based ADAS.
5.5.1
CCD/CMOS Camera
A digital camera uses an image sensor device – either CCD or CMOS – that
changes an optical image to an electrical signal. (Some examples of available image
sensors and cameras are shown in figure 5.6 below). Both CCD & CMOS image
sensors consist of an array of photo-diodes made from silicon and can sense only the
amount of light but not its color, and then convert this light into electrons. For a colored
image, a color filter (red, green or blue) is used for each pixel. After changing an
optical image to an electrical signal in the first step, the next step which differs in CCD
and CMOS is to read the value of charge stored in each cell of the image.
Figure 5.6: Some examples of image sensors and cameras – (a) 1/4-inch CMOS image sensor by Sony; (b) a small 8-MP CMOS camera by Samsung; (c) 1/3-inch CCD image sensor by Kodak; (d) a small 2-MP CCD camera by Sharp; (e) an infrared-enabled CMOS camera by Yanlab; (f) a 7×7-pixel CMOS camera with ultra-wideband radar.
5.5.1.1
CCD
A charge-coupled device (CCD) is an analog device which stores light as tiny
charges in each photo sensor. This electric charge is shifted across the chip to one
corner and is read one pixel at a time. There, additional circuitry with an analog-to-digital
converter changes the voltage into a digital value, as shown in figure 5.7 below.
Figure 5.7: Image processing in CCD [192]
5.5.1.2
CMOS
A complementary metal-oxide semiconductor (CMOS) is an active pixel sensor in
which each photo sensor has extra circuitry to convert light energy into voltage. On the
same chip, additional circuitry with an analog-to-digital converter changes the voltage
into a digital value, as shown in figure 5.8 below. A CMOS sensor has everything it needs
to work within the chip, making it a “camera-on-a-chip”.
Figure 5.8: Image processing in CMOS [192]
5.5.1.3
Performance comparison
A number of parameters are used to compare the performance of different image
sensors. These parameters include dynamic range – the limits of luminance range it
can capture, signal-to-noise ratio (SNR or S/N) – the ratio of a signal power to the
noise power, and light sensitivity – ability to work in darker environments, etc. CCD is a
more mature technology than CMOS. The performance of CCDs was much better in the
past. However, CMOS sensors have improved to a point where they perform almost as
well as CCDs. A comparison is provided in table 5.1:
Table 5.1: Performance comparison of CCD and CMOS image sensors

Performance Parameter    | CCD                           | CMOS
Dynamic range            | High                          | Moderate
Noise                    | Low                           | Noisier (getting better)
Light sensitivity        | High                          | Lower
Uniformity               | Better                        | Worse (getting better)
Windowing (sub-regions)  | Limited support               | Fully supported
Image rate (speed)       | Lower                         | Higher
Image quality            | High                          | Lower (comparable now)
Age of technology        | Mature                        | Newer
Power consumption        | High (100 times)              | Low
Reliability              | Moderate                      | High
Pixel size               | Smaller (better)              | Larger
System size              | Larger                        | Smaller
Architecture             | External circuitry required   | All circuitry on one chip
Flexibility              | High                          | Low
Signal type              | Analog                        | Digital
Manufacturing            | Complex                       | Simple
Cost / Price             | Expensive (a little bit)      | Inexpensive
Example applications     | Digital photography, broadcast TV, industrial/scientific/medical imaging etc. | Cameras for mobile devices, computers, scanners, fax machines, bar-code readers, toys, biometrics & vehicles etc.
In short, we can say that CCD has better quality, resolution and light sensitivity,
but CMOS is also improving in these terms & is already a faster, smaller, cheaper,
simpler and more power-efficient technology. CCD cameras are usually used as rear-view
cameras because they perform better in dark environments, whereas CMOS cameras are used
for advanced driver assistance systems because of their higher image rate. The trends in
the automotive industry show that CMOS cameras will dominate the market in the future
[190][191].
5.5.2
Working Principles
As we have described earlier, we need to capture the information on speed,
distance, relative position, direction of movement, and size & type of the neighboring
vehicles or other objects on the road in order to implement ADAS functions. In the
following sections, we will show how a single camera mounted on a vehicle can be used
to measure these parameters. Before that, however, we briefly describe some of the
important principles on which these measurements are based.
5.5.2.1
Perspective Transformation
The perspective transform method is used for mapping any 3D object to a 2D
surface such as paper or monitor. In a perspective view, the parallel lines in the scene
that are not parallel to the display plane are projected into converging lines i.e. they
converge to a distant point in the background, and distant objects appear smaller than
objects closer to the viewing position. This method is used to display a 3D scene on a
2D device without third dimension i.e. depth or distance. Therefore, when we take a
picture of the real world in 3D, it is projected to a 2D device. A 2D image of the real
world does not have depth. We need to translate it into 3D in order to measure distance
of objects appearing in the picture. This calls for a reverse process which is known as
Inverse Perspective Transform (IPT). Using IPT, we can re-project 2D image onto a
3D ground plane which enables us to measure distance of each object in the picture.
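As an illustration of the IPT idea, the sketch below warps the road region of a camera frame onto a top-down plane using a planar homography (here via OpenCV, purely as an illustrative library choice). The four source points are hypothetical pixel coordinates of a rectangular road patch; in a real system they would follow from the calibrated camera parameters.

```python
import cv2
import numpy as np

def birds_eye(frame):
    """Re-project the road region of 'frame' onto a bird's-eye plane (IPT sketch)."""
    h, w = frame.shape[:2]
    src = np.float32([[w * 0.40, h * 0.65], [w * 0.60, h * 0.65],   # far corners of road patch
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])  # near corners of road patch
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])              # same patch seen from above
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))
```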
5.5.2.2
Camera Parameters
A vision-based automotive system uses cameras installed on a vehicle. The image
captured by a camera is affected by many parameters which include angle of view,
focal length, camera height, total number of pixels, motion blur, and exposure time etc.
Figure 5.9: Camera-lens parameters
When we take a picture, the area of the scene covered by the picture is determined by
the field of view (FOV), which defines the angle of view (α) of a camera lens. A wide-angle
lens can see a wider & larger area but has lower resolution; nevertheless, such lenses are
well suited for vision-based automotive applications [195]. A wider angle of view and
higher optical power are usually associated with a shorter focal length; the focal length (F)
is the distance from a lens to its focal point (the point of convergence of light). An image
produced by a camera lies in the image plane, which is perpendicular to the axis of the
lens. The total number of pixels of an image in the horizontal direction is called the image
width. The pixel-width is the breadth of a pixel on a display device. When a picture is
taken while moving, a motion-blur takes place in the dynamic region of the image
because of the relative motion between the camera and the object. Exposure time or
the shutter speed – effective length of time a camera-shutter is kept open in order to let
light reach the film/sensor – plays an important role in blurring. The camera is usually
installed at a location above the ground; the height at which a camera is installed is called
the camera height. In automotive applications, it is useful to identify the point of contact,
which is the point where two things meet, e.g. the bottom of the wheel, where the vehicle
& the road meet.
5.5.2.3
Monocular vs. Stereovision
A stereovision-based system uses two cameras (tightly coupled) for any function.
Many stereovision-based systems for driver assistance have been proposed recently
[74][196][197][198]. However, a stereovision system is costly due to the additional
camera, higher processing-power requirements, and calibration problems
[199][200][201]. On the other hand, a monocular vision system uses only one camera;
however, it lacks depth cues and the accuracy required for automotive
functions [202]. Recently, a number of monocular vision methods have been proposed
for distance-measurement with a quite high accuracy [202][231][249][253][288][289]
[290][291]. Most of the modern research is focused on monocular vision, which will
help in developing low cost and high performance automotive applications.
5.5.2.4
Image Processing
An image can reveal a lot of information when we process it with a digital
processor. Pattern matching is one of the most commonly used techniques; it stores
templates in the form of a hierarchy for efficient searching/matching. The main
problem with pattern matching is that one object may have a range of appearances due
to viewing angle, lighting conditions, and motion etc. However, we can use rotation,
scaling, translation, gray-scale conversion, noise filtration etc. in order to improve the
pattern-matching process. Hundreds of image processing techniques have been
developed which can be used for transformations, image enhancement, image
segmentation, object representation or modeling, feature extraction, object recognition,
distance & speed estimation, drowsiness detection, and scene understanding etc [203].
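A minimal sketch of the pattern-matching idea is shown below, using OpenCV's normalised cross-correlation; the file names and the 0.8 threshold are placeholders chosen for illustration and are not values from the thesis.

```python
import cv2

# Template matching: slide a small template over the scene and keep the best match.
scene = cv2.imread("road_frame.png", cv2.IMREAD_GRAYSCALE)      # placeholder file name
template = cv2.imread("car_template.png", cv2.IMREAD_GRAYSCALE) # placeholder file name

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                                  # empirical acceptance threshold
    th, tw = template.shape
    cv2.rectangle(scene, max_loc, (max_loc[0] + tw, max_loc[1] + th), 255, 2)
```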
5.5.3
Object Recognition (size & type)
A two-step method is used for recognition of objects such as obstacles,
pedestrians, vehicles, road signs, and lane markings etc. First, a hypothesis is
generated – hypothesis generation (HG) and then this hypothesis is verified –
hypothesis verification (HV). That is, a supposition is made about the location of some
object in the picture first and then the presence of that object is verified. A large
number of methods exist for HG & HV which use different kind of knowledge about
the object under consideration and are classified accordingly.
We can divide hypothesis generation (HG) methods into six classes:
1. Model-based or knowledge-based methods [204] use specific characteristics of an
object such as shadow [205][206][207], corners [208], texture [209], color [210],
light [211][212][213], symmetry [214][215], and geometrical features [84]. A
combination of these is also used for better performance. For example, Collado et
al. [216][217] use shape, symmetry, and shadow; Kate et al. [218] use shadow,
entropy, and horizontal symmetry; Liu et al. [219][220] use shadow, symmetry,
and knowledge-based learning; and Hoffman [221] uses shadow, symmetry and
3D road information.
2. Stereo-based methods use dual camera based techniques such as disparity-map
[222][223], and inverse perspective transform (IPT) [224][225]. However, stereo-based methods have high computational cost and low processing speed [221].
3. Motion-based methods use optical-flow [202][226][227][228][229][230] or some
other techniques such as Sobel edge-enhancement filter combined with the optical
flow [231]. Optical-flow is the change of object position between two pictures.
The main issue with motion-based methods is that they have difficulty detecting
objects that are either still or moving very slowly [226]. The optical-flow
can be used for pedestrian detection as well, e.g. Bota and Nedesvchi [312] use
motion cues such as walk in order to detect pedestrians.
4. Context-based methods use the relationship between the objects & the scene [232].
5. Feature fusion-based methods merge two or more of the above methods [233].
6. Adaptive frameworks adjust or choose from many features depending on the
situation at hand [11]. Adaptive frameworks prove to be a better approach. The
major drawbacks of the previous approaches are that none of them is generic
enough to handle all situations, and that a change of environment (e.g. lighting,
weather, traffic, etc.) significantly changes the detection rate and errors.
We can divide hypothesis verification (HV) methods into three classes:
1. Template-based methods make use of correlation with existing patterns such as
edges, corners, shapes and wavelet characteristics etc [84]. For example, Betke et
al. [234][235] utilized color, edge, and motion information for vehicle detection.
2. Appearance methods use feature extraction and classification techniques. Feature
extraction discovers a set of characteristics of the object class using a set of
training images. A number of techniques have been used for feature extraction,
such as principal component analysis (PCA) [232][236], local orientation coding
(LOC) [237], Gabor filter [238], scale invariant feature transform (SIFT) [239],
and Haar wavelet [240] etc. For classification, a couple of methods have been used
such as neural network [236][237], statistical model [241], support vector machine
(SVM) [238], and horizontal Sobel filter based boosting method [242] etc. The
performance of appearance methods is better than template-based methods.
3. Hybrid techniques combine two or more techniques in order to achieve better
performance. For example, Geismann and Schneider [243] use Haar features for
detection, histograms of oriented gradients (HOG) and linear support vector
machine (SVM) for classification of objects (pedestrians); Cao et al. [244] extract
features such as appearance & motion, and use statistical learning & a support
vector machine (SVM) for classifying pedestrians; they also measure speed and
direction; Blanc et al. [245] use a support vector machine (SVM) and a template-
matching algorithm for vehicle detection.
Object detection can be improved by adding some kind of object tracking
mechanism in it. Objects on all the four sides of a vehicle can be detected by using one
of the above mentioned methods. However, vehicles on the right & left sides can also
be identified by detecting wheels [246][247][248]. One example of a complete system
for object recognition & tracking is developed by Fritsch et al. [249], who use a human-like
attention approach to process only small parts of the image, known as the Region of
Interest (RoI), apply hierarchical models of invariant object recognition [250], and
classify objects on the basis of a confidence value using a threshold for rejection.
Object detection, especially in the blind-spot, is investigated by many researchers, e.g.
Mota et al. [251] detect blind spots using Reichardt correlator model, Wang et al. [252]
use optical flow for this purpose, and Wu et al. [253] compare the gray intensity with
the highway surface and use image coordinate model for distance measurements.
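To make the appearance-based verification step more concrete, the sketch below uses OpenCV's built-in HOG descriptor with its default pedestrian SVM, in the spirit of the HOG + SVM approaches cited above (it is an illustration, not a reproduction of any of those systems); the file name and the score threshold are placeholders.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_frame.png")                 # placeholder file name
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(rects, weights):
    if float(score) > 0.5:                           # simple verification threshold
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```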
5.5.4
Road Sign Recognition
The road sign recognition also goes through two steps: sign detection, and
classification. A sign detection technique identifies all the areas in a picture where
some road-sign is expected or present, and then inputs these identified areas to the
classification module in order to recognize the type of road sign. A very good
recognition rate was achieved a decade ago [254], and accuracy has now
improved to almost 100%.
A number of techniques have been designed for robust sign detection. The input
stream can be minimized for fast processing by using a priori assumptions [256] about
the picture organization so as to ignore irrelevant parts of the image; for example,
supposing that the road is almost straight. To facilitate the search for road signs in only
limited part of the picture, we can also use color-segmentation [257][258] as road
signs have a special color, and scene understanding [254] as a road sign is expected on
the sides of a road or overhead, but not in the sky or on the road itself. Most
road-sign detection algorithms use sign features such as general shape, color, size, or
position. However, detection can be enhanced by identifying a Region of Interest
(RoI) using perspective view and 3D modeling [259].
Moreover, for better classification, Fletcher et al. [255] applied super-resolution
over multiple images, while others used pattern recognition such as a regular polygon
detector [48]. Some researchers consider road signs as normal objects and use the two-step
method for object recognition – hypothesis generation (HG) & hypothesis
verification (HV) – instead of sign detection and classification; they use a Region of
Interest (ROI) for HG and pattern recognition for HV [260][261].
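The colour-segmentation step mentioned above can be sketched as follows: red regions are isolated in HSV space and each sufficiently large blob becomes a candidate region for the classifier. The threshold values are rough, hand-picked numbers for illustration only, not tuned figures from the thesis.

```python
import cv2

def red_sign_candidates(frame_bgr):
    """Return bounding boxes of strongly red regions (candidate road signs)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0/180 in OpenCV's HSV space, hence two ranges.
    lower = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
```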
5.5.5
Lane Detection and Tracking
Lane detection and tracking is another important function in automotive
applications such as lane departure warning and environment reconstruction. Lane
boundaries are indicated by painted lines, reflectors, or magnetic markers embedded
in the center of the road. However, painted lanes are more common because they are
economical to make, and are easy to detect & track using a camera because of their
higher intensity. A video camera installed in the front of a vehicle can see the road for
more than 25 meters, depending on the range of the camera. However, some old
systems used a downward-looking video camera [268], while others have used a
backward-looking camera [201][219][220].
Recently, a number of vision-based methods for lane detection & tracking have
been developed which can be divided into three categories: feature-based methods,
model-based methods, and hybrid methods.
In feature-based methods, a lane boundary is detected by its specific features such
as color, contrast, edge, texture, or a combination of these. Examples of feature-based
lane detection & tracking include edge-based detection [262][263], color-based system
[264], multi-scale Gabor wavelet filters for texture-based detection [265], edge- and
texture-based detection [266], and lane detection based on edge, texture, and vehicle-state
information [267].
On the other hand, a model-based method represents or matches a lane boundary
using some model. For example, Dickmanns & Mysliwetz [269] find the position &
curvature using a Kalman filter; Jochem [270] used a neural network to find the lane
positions; the RALPH system [271] – rapidly adapting lateral position handler – uses
template-based matching in order to discover parallel image features such as lane
boundaries; LeBlanc [272] calculated the gradient of the intensity to find the lane
boundaries; Bertozzi et al. [273] and Yong Zhou et al. [274] use inverse perspective
transform (IPT) in order to re-project the image onto a ground plane (to make 3D
model from 2D image) so as to detect lane boundaries, and GOLD system [275] –
generic obstacle and lane-detection – uses inverse perspective transform (IPT) and
intensity information. However, the GOLD system gives errors when it comes across a
zebra crossing. Therefore, it was improved by Kim et al. [11], who apply IPT only to the
candidate lane boundaries obtained through adjustable template matching (ATM)
[268] and utilize curvature information for robust tracking of the lane boundaries.
Similarly, Tsai [276] proposed a fuzzy inference system to avoid errors due to shadow;
Wang et al. [277][278] used the spline curve to propose CHEVP algorithm –
Canny/Hough Estimation of Vanishing Points – to model the lane boundary points;
Gonzalez and Ozguner [279] used the histogram method; Jung & Kelber [280] and
Lim et al. [281] used linear-parabolic model to find lane boundaries; and Wang et al.
[282][283] used peak-finding algorithm and Gaussian filter to detect lanes, and
computed angle and distance between two lanes.
There are some hybrid approaches which combine many techniques, such as using
Hough Transformation & some road model [284][285] or using gray scale statistics
(lane markings have higher gray values), a dynamic region of interest (ROI), and lane
features for detection of lane boundaries [231]. The ROI-based hybrid approaches are
efficient & popular which first find a region of interest (ROI), then find a real midpoint
of the road lane to find candidates of lane markings, and finally use a temporal
trajectory strategy to improve lane detection [286]. This method is very fast (62 frames
per second) and accurate (robust to lighting changes, shadows, & occlusions etc).
Recently, some very light-weight methods have been proposed for lane detection, e.g.
Ren et al [287] have used a Hough transform to detect lanes using an iPhone.
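A very light-weight lane-marking sketch in the spirit of the Hough-transform approaches above is shown below; the Canny and Hough parameters are illustrative values, not figures from any of the cited systems.

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Return candidate lane segments (x1, y1, x2, y2) found in the lower half of the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Restrict the search to the lower half of the image, where the road is.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else [l[0] for l in lines]
```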
Although object detection & tracking have improved a lot, road-side structures
such as buildings, tunnels, overhead bridges, and billboards still make the recognition
of vehicles, pedestrians, and road signs more difficult.
5.5.6
Distance Measurement
Traditionally, radar, lidar or sonar is used to measure the distance of an object
from the vehicle. The use of a digital camera for distance measurement is a relatively new
approach and is gaining popularity because of its lower cost and multiple applications.
There are many cues that can be used for distance estimation, such as the size & position
of objects, lane width, and the point of contact between a vehicle & the road. The last cue
is the most useful because the other cues have a very large variation, e.g. the width of a
vehicle may vary from 1.5 to 3 meters. We can also use the perspective transform and many
other techniques for measuring the approximate distance of an object using a single camera.
Stein et al. [202] proposed a single-camera based method for calculating the
distance to the vehicle as:
Z = fH / y
where H is the camera height in meters, f is the focal length, and y is the height in the
image plane at which the point of contact between the vehicle and the road (i.e. the
wheels) is projected, as shown in figure 5.10 below. This gives quite an accurate
measurement, with only a 5% error at a distance of 45 meters.
Figure 5.10: Imaging geometry for distance calculation [202]
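A small numeric illustration of this pinhole relation is given below; all values (focal length in pixels, camera height, image offset) are invented for the example.

```python
def monocular_distance(f_pixels, camera_height_m, y_pixels):
    """Stein et al.'s relation Z = f*H / y: f is the focal length (here in pixels),
    H the camera height above the road, and y the image offset of the point where
    the target's wheels touch the road."""
    return f_pixels * camera_height_m / y_pixels

# A camera with f = 800 px mounted 1.2 m above the road, seeing the contact
# point 24 px below the horizon line, gives Z = 40 m.
print(monocular_distance(800, 1.2, 24))   # 40.0
```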
Liu et al. [231] use a Sobel edge-enhancement filter combined with the optical flow
to detect the vehicle in front and find the distance by using a headway distance
estimation model. The distance between the host and the preceding vehicle can be
calculated as:
d2 = (H × fc) / (Pr × (YHV − ΔR/2)) − d1
where d2 is the distance to be measured, H is the camera height from the ground, fc is
its focal length, d1 is its distance to the front tip of the vehicle hosting the camera, Pr is
the pixel width on the monitor or display device, ΔR is the image width, and YHV is the
image coordinate, in the row direction, of the preceding vehicle's bottom end-point, as
shown in figure 5.11 below.
Figure 5.11: Distance estimation model [231]
Recently, a number of camera-based methods have been proposed for distance
measurement. Lamprecht et al. [291] propose a very simple method for measuring the
distance to stationary points from a vehicle by tracking these points three times while
considering the velocity of the vehicle during a certain period of time. In the same way,
Shibata et al. [288] use only optical flow to measure the distance and direction of an object
using a single camera. Optical flow is the change of object position between two
pictures. Fritsch et al. [249] use a human-like attention approach and consider only the
relevant parts of an image to find all objects of interest, and calculate the distance of all the
objects on the road using EKF-based fusion (Extended Kalman Filter). Dagan et al. [289]
have found a method for calculating the distance & relative velocity to the vehicle in
front, and have used it for calculating time to collision or contact (TTC) in their
collision warning system. Wu et al. [253] compare the gray intensity with the highway
surface and use an image coordinate model for distance measurements. Goto and
Fujimoto [290] use a perspective model in order to find distance by means of the square
measure of the object in the image plane.
The accuracy of distances measured by an ordinary camera is not high enough to be used
in crash avoidance systems. Fortunately, there have been some efforts to incorporate
radar capabilities into CMOS cameras so as to add 3D capabilities. For example,
Canesta's CMOS image chip (figure 5.12) automatically finds the distance to every
object in sight at once using time-of-flight calculations on each pixel [292]. The main
advantages of this technology are that it is highly accurate and works in all weather
conditions.
Figure 5.12: Radar capable CMOS imager chip by Canesta
5.5.7
Speed & Direction (Velocity) Measurement
Speed is defined as the distance traveled per unit time, whereas velocity is the
distance traveled per unit time in certain direction. The change of object position
between two video-frames is known as optical-flow which is commonly used for
measuring speed & direction of moving objects in a scene. Lucas–Kanade Algorithm
[293] is a two-frame differential method for optical flow estimation.
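An illustrative sparse optical-flow sketch between two consecutive frames, using OpenCV's pyramidal Lucas–Kanade tracker, is given below; the file names are placeholders and the parameters are not tuned values from the thesis.

```python
import cv2

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Pick good corner features in the first frame, then track them into the second.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# Pixel displacement of each successfully tracked point; combined with the frame
# interval and a distance estimate, this yields speed and direction of objects.
flow = (p1 - p0)[status.ravel() == 1]
```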
Li et al. [294] measure the speed of a vehicle by capturing two pictures
immediately one after the other using a fixed CCD camera, whereas Martinez et al.
[227] use optical flow to find the time to collision or contact (TTC) in order to detect
head-on collisions.
Tracking a moving object over a few seconds can help find its speed. The speed
of an object can be calculated by discrete differencing of distances at different
time instants, i.e. we can easily get the speed of a vehicle from successive measurements
of distance. However, this method is not accurate, because when we subtract one
inaccurate value from another inaccurate value the result is always inaccurate. A similar
approach is used by Stein et al. [202], who use a single-camera-based method for
calculating the speed as:
v = Z (w − w′) / (w′ Δt)
where Z is the distance of the target object, and w & w′ are the width or height of the
object in pixels at the start and end, respectively, of the period Δt.
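A small numeric illustration of this scale-change relation is given below; the values are invented, and the sign convention (a shrinking image meaning an increasing range) follows directly from the formula as stated.

```python
def relative_speed(Z_m, w_start_px, w_end_px, dt_s):
    """Stein et al.'s relation v = Z * (w - w') / (w' * dt): the change in the
    target's image width over dt seconds, at range Z, gives the relative speed."""
    return Z_m * (w_start_px - w_end_px) / (w_end_px * dt_s)

# A car 30 m ahead whose image shrinks from 102 px to 100 px in 0.5 s is
# moving away at roughly 1.2 m/s.
print(relative_speed(30.0, 102, 100, 0.5))   # 1.2
```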
Another innovative approach is based on motion blur, also known as smearing
effect, which occurs because of the relative motion between the camera and the
objects. Lin et al. [295][296] have proposed a method to calculate object speed using a
single motion blurred image. They have developed a mathematical model of linear
motion blurring and a method to find the direction & speed of a moving object using a
single motion-blurred picture. The relative speed is calculated using this formula:
v = z K sx / (T f cos θ)
where z is the distance from the camera to the object, K is the blur length in pixels, sx
is the width of a camera pixel, T is the shutter speed, f is the focal length of the camera,
and θ is the angle when the object is not moving parallel to the image plane, as shown in
figure 5.13.
Figure 5.13: Distance estimation using smearing effect [296]
Similarly, the smearing/blurring effect is also used by Cheng et al. [297] for
computing the speed of the camera-carrier using a kinetic model to express the
movement of the camera, target and the image.
As described in the previous section, a radar-enabled CMOS camera can also find the
speed and direction of every object in a scene at once using time-of-flight calculations
on each pixel [292].
5.5.8
Drowsiness Detection
The main signs of fatigue or drowsiness are human head position and eye closure
[298]. However, drivers’ vigilance or attention is different from fatigue; driver may be
looking off the road or involved in some other activity while being fully awake. A
single camera mounted on the dashboard, for example, can be used to track eye & head
in order to find visual attention of the driver [299][300].
Heitmann et al [301] use facial expression, eye-lid movement, gaze orientation,
and head movement for detecting fatigue. A custom-designed hardware system [302]
or a simple camera with infrared illuminators for dark environments can be used for
this purpose [303][304]. Flores et al. [305] track face and eyes in order to detect
drowsiness. In recent times, Albu et al. [306] have used an event-detection approach to
monitor the eye state using their template-matching algorithm.
Current drowsiness detection methods successfully trigger an alarm about 90% of
the time, which is not very accurate. More research in this area should produce
highly accurate methods in the near future.
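A rough sketch of camera-based eye monitoring is given below, using OpenCV's stock Haar cascades: the face is located first, then open eyes are searched for inside it; a prolonged absence of detected eyes would be a crude drowsiness cue. This is only an illustration of the general idea, not any of the cited systems, which model eye-lid closure and head pose over time.

```python
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_visible(gray_frame):
    """Return True if at least one open eye is detected inside a detected face."""
    faces = face_cc.detectMultiScale(gray_frame, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = gray_frame[y:y + h // 2, x:x + w]      # upper half of the face
        if len(eye_cc.detectMultiScale(roi, 1.1, 3)) >= 1:
            return True
    return False
```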
5.5.9
Environment Reconstruction
The video output of a camera can be directly displayed to drivers, but this may not
be really useful for them. However, we can reconstruct the environment by processing
images from all the cameras around the vehicle and provide a bird's-eye view, which is
very useful for drivers because it gives them a quick overview of the
surroundings. Many systems have been introduced which provide an overview of the
surrounding area of the vehicle [11][72][79][80][83][307][308][309][310][311].
Our proposed Smart-Dashboard system also provides the bird’s-eye view in
addition to other functions of ADAS using monocular cameras.
5.5.10 Pros and Cons
Camera-based automotive applications have many advantages including low cost,
high availability, multiple applications and ability to integrate with other systems.
However, there are some serious drawbacks of camera-based solutions, such as weather
dependency and lower accuracy. Camera-based automotive applications are still
in the development phase and will take a few more years to gain reliability. A timeline
(table 5.2) provided by one of the vision-industry leaders – Mobileye [191] – explains it
well.
Table 5.2: A timeline for camera-based automotive applications by Mobileye.com

Year       | Development
Up to now  | Lane Departure Warning, Radar-Vision Fusion, Traffic Sign Recognition
Late 2009  | 360° Multi-camera View, Lane Departure Warning, Radar-Vision, Intelligent Headlight Control, Headway Monitoring
Mid 2010   | Fully functional vision-based ADAS
2011       | Vehicle Detection, Intelligent High Beam Control, Pedestrian Detection
2012       | Vehicle Detection, Forward Collision Warning / Mitigation
2012       | Traffic Sign Recognition

5.6
Conclusions
In this chapter, we provided a brief survey of technologies that can support our
proposed Smart-Dashboard system. We have found that ADAS functions can be
implemented by capturing the information on speed, distance, relative position,
direction of movement, and size & type of the neighboring vehicles or other objects on
the road. For this purpose, many technologies are available in the market, such as
RADAR, LIDAR, Sonar, GPS, and Video-Based Analysis etc.
Our proposed system uses Video-Based Analysis, which requires ordinary CMOS
cameras. The camera-based solutions have low cost, high availability, multiple
applications and ability to integrate with other systems. We have briefly described in
section 5.5 that a large number of camera-based techniques are available for detecting
the speed, distance, relative position, direction of movement, and size & type of
objects on the road. Depending on the power of the digital image processor, more than 30
frames can be processed per second for automotive applications. We believe that all
the required technology is now available for implementing camera-based ADAS.
Camera-based systems are generally considered inaccurate & inappropriate in poor
visibility conditions such as fog, dust, rain, and particularly snow. However, many
efficient techniques are now available for bad weather conditions. Moreover, infrared-
or radar-enabled CMOS cameras are now available which can better address these issues.
They are more expensive than ordinary cameras at present, but will become cheaper
very soon.
6
THE SYSTEM DESIGN
The design of our Smart-Dashboard system is derived from the findings
presented in the previous chapters. Our design puts technologies – such as camera, digital
image processor, and thin display – into a smart system in order to offer advanced
driver assistance functions. Given that drivers may not be well versed in computing
skills, we will design an easy-to-use, smart, and adaptive system that
requires minimal input from the user but leaves maximum control in the hands of the
user. From the drivers' point of view, the system should provide them with almost all the
ADAS functions (see section 1.5.1) in an unobtrusive way. They should be able to get
assistance in maintaining a safe speed & safe distance, avoiding any collision, keeping
them alert, recognizing road signs, detecting blind-spots, keeping their lane,
identifying pedestrians, enhancing their vision at night, and warning them of the
dangerous situations.
6.1
Introduction
Driving is a very common activity of our daily life. People drive their cars for
travel or pleasure. They make use of all the available technological support in order to
avoid any accident. They make best estimate of the position & velocity of other object
on the road, and guess the distance of their car from others’ using rear-view & sideview mirrors or any other available technological support such as sonar and rearview
camera etc. However, this puts an extra burden on the driver. Our design moves all of
these tasks from drivers to the system and minimizes undue burden on humans.
Equipped with the camera technology, our proposed Smart-Dashboard system
monitors its surroundings and processes video frames in order to find distance,
velocity, position, and size & type of all the neighboring objects. This information is
then used by different ADAS modules to assist the drivers and to generate bird’s-eye
view of the surroundings. In short, drivers will be provided with all the assistance
required for safe & smooth driving.
6.2
Components of the System
The Smart-Dashboard system has three components: hardware, middleware, and
applications. We use the five-layered architecture of context-aware systems [315], as
shown in figure 6.1 below.
Figure 6.1: Layered architecture of context-aware systems [315]. From top to bottom:
Application Layer – context-aware applications & services;
Management Layer – store, share, distribute, and publish context;
Semantic & Inference Layer – preprocessing of context;
Data Layer – raw data retrieval and processing;
Physical Layer – sensors and other objects.
The middle layers together constitute the middleware.
The image sensors or video cameras are present at the Physical Layer. These image
sensors capture real-time video of the surrounding environment. The captured video
frames are instantly sent to the middleware, where they are preprocessed for inferring
useful information. The information produced by the middleware is then provided to the
application modules at the uppermost layer.
6.2.1
Hardware (Physical Layer)
There are three major hardware components of the system: five CMOS cameras, a
digital processor, and a TFT-LCD display (thin film transistor liquid crystal display).
The system is equipped with four ordinary CMOS cameras installed on all four
sides of the vehicle (see figure 6.2), whereas the fifth camera is installed inside the
vehicle.
Figure 6.2: Smart-Dashboard system with five cameras
(5th camera is inside)
One of the CMOS cameras is installed at the front of the vehicle, between the rear-view
mirror and the windscreen. This does not block the view through the windscreen as it is
attached behind the rear-view mirror. A similar camera is installed at the back of the
vehicle. A wide-angle CMOS camera is installed on each of the right & left sides of the
vehicle. These cameras are attached to the side-view mirrors or somewhere above
them. This arrangement not only provides 360° or all-around coverage of the
surrounding areas but also allows two cameras on each side to see into the blind spots
simultaneously. A wide-angle camera will enable the system to see the lane markings or
other objects that are very close to the sides of the vehicle. The fifth camera is installed on
the dashboard inside the vehicle and will watch the driver for drowsiness and attention.
The system uses a digital processor (an ordinary computer or a digital signal processor
chip) to apply image-processing techniques to the video frames and extract the
information required by the ADAS modules.
The 360° (all-around) view is then processed for environment reconstruction and is
displayed on a TFT-LCD display mounted on the dashboard behind the steering wheel,
at the DIM location, as shown in Figure 6.3 below.
Figure 6.3: Preferred places for a display
(www.volvocars.com, 2009)
There are four preferred places where a display can be mounted [319]: Head-Up
Display (HUD), Driver Information Module (DIM), Rear View Mirror (RVM), and
Infotainment Control Module (ICM). We put our display at the DIM location; it could
also be projected at the HUD location if a projection device were available. Ours is an
adaptive display that can be used for many purposes according to the context, for
example for the speedometer, odometer, temperature, time, fuel gauge, and other
vehicle data.
6.2.2 Middleware
A middleware is the software part of a context-aware system that lies between the
hardware and the applications. In general, it provides the following functionality
[313][314]:
1. Support for a variety of sensor devices, including multimedia devices,
2. Support for the distributed nature of context information,
3. Transparent interpretation of context for applications,
4. Abstraction of context data,
5. Maintenance of context storage,
6. Control of the context data flow,
7. Support for mobility in the presence of constraints such as low bandwidth, network
partitions, poor coverage, limited resources, asynchronous communication, and
dynamic execution environments,
8. Support for system adaptability, and
9. Use of the best available resources, such as bandwidth and place of computation.
The video frames captured by the five cameras at the physical layer are instantly sent
to the middleware for noise removal, enhancement, transformation, fusion, etc. These
frames are then processed to calculate the distance, speed, direction, position, and
size & type of all the objects appearing in a scene, as explained in the previous chapter
(see section 5.5). This processed information is then pushed up to the application
modules at the application layer.
6.2.3 Applications
Our Smart-Dashboard system provides a number of application modules for driver
assistance. These application modules provide almost all the ADAS functions listed in
section 1.5.1. The implementation details of these modules are given in the following
sections.
6.3 Design Considerations
This section describes a number of issues and considerations related to the design of
our proposed Smart-Dashboard system.
6.3.1 Information Requirements
Before we can provide any ADAS function, we need information on the distance,
speed, direction, position, and size & type of all the relevant objects appearing in a
scene. Based on our arguments in the previous chapters, we believe that these pieces of
information are sufficient to build a full-fledged camera-based ADAS. For example, to
implement a forward collision warning system, we only need to know the relative
speed and distance of the vehicle in front.
It is important to note that we do not need to store any information for a long time,
because the system processes and uses instantaneous information in real time.
However, for navigational support (not included in our proposed system), some
information on routes, road-sign locations, accidents, etc. could be saved.
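The five pieces of information listed above can be kept in a simple per-object record that the middleware fills for every detected object and hands to the application modules. The following is a minimal illustrative sketch in Python (the thesis prototype uses MATLAB instead); all names and units are assumptions, not part of the proposed system.

    from dataclasses import dataclass
    from enum import Enum

    class ObjectType(Enum):
        PEDESTRIAN = 1
        BICYCLE = 2
        MOTORCYCLE = 3
        CAR = 4
        BUS = 5
        TRUCK = 6

    @dataclass
    class TrackedObject:
        """One entry of the instantaneous context produced by the middleware."""
        distance_m: float      # distance from the own vehicle, in metres
        speed_mps: float       # absolute speed, metres per second
        direction_deg: float   # heading relative to the own vehicle, degrees
        position: tuple        # (x, y) position on the ground plane, metres
        size_m: tuple          # (width, length) estimate, metres
        obj_type: ObjectType   # classified size & type

    # Example: a car 25 m ahead, slightly to the right, moving at 18 m/s
    car_ahead = TrackedObject(25.0, 18.0, 2.0, (1.2, 25.0), (1.8, 4.5), ObjectType.CAR)

Because the information is used instantaneously, such records need only live for the duration of a frame (or a short tracking window), consistent with the no-long-term-storage point above.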
6.3.2 Camera Positions
Selecting a proper mounting location for each camera is an important issue in camera-based
automotive applications. The cameras should be able to see the environment
without any blockage. A camera at the front is required to capture the road curvature,
lane boundaries, road signs, vehicles, and other objects. A camera at the rear is required
to detect lane boundaries and objects in the blind spots & behind the vehicle. The
cameras on the two sides of the vehicle are required to detect objects in the blind spots
and on both sides of the vehicle. In this way, the blind spot on each side of the vehicle
is covered by two cameras with some overlap in view, i.e. one camera on the left/right
side and another camera at the rear of the vehicle.
The front and rear cameras are mounted on the windscreen and rear window inside the
vehicle for security and performance reasons. The cameras on the two sides are
embedded into the side-view mirrors or at some higher location so that they can see the
road directly below them. The fifth camera is installed on the dashboard inside the
vehicle and watches the driver for drowsiness and attention.
6.3.3 Issuing an Alert
On detecting a dangerous situation, the system should issue an alert. However, it is
important to choose among the different types of alerts or warnings. Four kinds of
alerts are issued by automotive applications: auditory, visual, haptic, and automatic
(i.e., taking over control from the driver). We consider visual and auditory alerts only,
because our system employs only image sensors.
By default, the system blinks symbols on the TFT-LCD display for vehicles and other
objects in very close vicinity. In dangerous situations, it additionally issues auditory
alerts using beeps of low or high volume, depending on the level of danger.
6.3.4 User Interface
When installing a display, a prime consideration is that the display be viewable &
within the reach of the driver. There are two main issues regarding the user interface:
placement (i.e. where to put it) and mode (i.e. whether to use buttons or a touch-screen).
As far as placement is concerned, there are four possible locations for mounting the
display (see Figure 6.3). However, in our case we can choose from only two locations:
on the windscreen (Head-Up Display – HUD), or in the dashboard behind the steering
wheel (Driver Information Module – DIM). As we do not use any kind of projection
device, DIM is the most suitable place for the display. It is within the driver's visual
reach and is reusable for displaying other information such as the speedometer,
tachometer, rear view, navigation maps, speed, or fuel level.
The most appropriate mode of interaction is a touch screen, where the user can touch
the screen in order to make selections. On starting the vehicle, the user interface screen
provides the following options to the driver:
1. Change my settings – (three sub-options)
a. Change level of expertise – learner, beginner, experienced, or expert.
b. Type of warnings to be issued – none, auditory, visual, or both (audio-visual).
c. Volume of the sound for auditory alerts – anywhere from silent to loud.
2. Remove my settings – (users are known by face recognition and have individual settings)
3. Start camera calibration – (required at the time of installation or after damage)
These options appear for a few seconds at startup and then disappear in favor of the
default settings; touching the screen brings them back. The default settings are: level of
expertise = experienced, type of warning = both audio-visual, volume of the sound =
medium.
The system manages and remembers the settings for each user by identifying the
user's face through the fifth camera installed on the dashboard. The level of expertise
of a user is automatically raised as she gains experience over time. Similarly, if the
initial value entered by a user is "expert" but she makes many mistakes on the road,
the system learns that the user is in fact not an expert and lowers her expertise level
accordingly.
6.3.5 Human-Machine Interaction
A smart system is required to be context-aware, intelligent, proactive, and minimally
intrusive. We see our proposed system as a smart system that engages in two-way
interaction with its users, and the evolving interaction must be regulated carefully in
order to keep it unobtrusive and seamless.
We have attempted to give users maximum control over the system. A user can initiate
interaction by touching the screen. At the same time, the five cameras make the system
aware of its users and context. This awareness makes it possible to adapt the system to
the situation and to minimize annoyance by lowering the level of input required of the
user. For these reasons, the system should continuously learn from its interactions and
use this learning in future decisions; for example, it should automatically update the
expertise level of the driver over time.
6.4 System Design
The Smart-Dashboard system uses a single integrated display (multipurpose &
adaptive) instead of several displays (one for each ADAS function). This display is
highly adaptive and shows the highest-priority information at any instant of time. For
example, at startup it shows the options screen; on recognizing traffic signs, it displays
them; and for most of the time it displays the reconstructed environment along with the
speedometer, etc. Different modules in the system can be assigned priorities so that the
highest-priority module uses the display whenever there is contention.
The system adjusts the size of each component of the display to fit them all on one
screen, as shown in Figure 6.4 below.
(a) Options displayed at startup
(b) Traffic signs and the speedometer etc.
(c) Reconstructed environment and the speedometer etc.
Figure 6.4: An integrated and adaptive interface of Smart-Dashboard
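To illustrate the priority-based contention for the single display described above, here is a minimal sketch in Python. The module names and priority values are assumptions used only for illustration; the thesis does not prescribe concrete values.

    # Assumed priorities: a higher value wins the display in case of contention.
    PRIORITIES = {
        "startup_options": 100,
        "collision_warning": 90,
        "traffic_sign": 50,
        "environment_view": 10,   # default bird's-eye view with speedometer
    }

    def select_display_content(pending):
        """pending: dict mapping module name -> content it wants to show.
        Returns the (module, content) pair with the highest priority."""
        if not pending:
            return ("environment_view", None)
        module = max(pending, key=lambda m: PRIORITIES.get(m, 0))
        return (module, pending[module])

    # Example: a recognised sign and the normal view compete for the screen.
    print(select_display_content({"traffic_sign": "STOP", "environment_view": "scene"}))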
The Smart-Dashboard system implements a number of ADAS functions. This
section explains how different components of the system work together in order to
achieve the design goals. Figure 6.5 provides an overview of the Smart-Dashboard
system.
[Figure 6.5 layout: image sequence input at the Physical Layer; the Middleware
performing noise removal, image enhancement, transformation, and image fusion, and
computing distance, speed, direction, position, and size & type; and the Application
Layer modules – Adaptive Cruise Control, Intelligent Speed Adaptation, Forward
Collision Warning, Lane Departure Warning, Adaptive Light Control, Traffic Sign
Recognition, Blind Spot Detection, Pedestrian Detection, Parking Assistance, Night
Vision, Driver Drowsiness Detection, and Environment Reconstruction – which issue
the respective warnings and update the display.]
Figure 6.5: Overview of the Smart-Dashboard system
The image sensors at the physical layer of the Smart-Dashboard system capture
real-time video, the middleware layer performs some pre-processing, and the
application layer provides the ADAS functions.
In section 5.5 of the previous chapter, we listed a number of camera-based methods for
object recognition (i.e. vehicle, pedestrian, and obstacle recognition), road sign
recognition, lane detection and tracking, distance measurement, speed & direction
(velocity) measurement, driver drowsiness detection, environment reconstruction, and
so on. Using these camera-based methods, this section gives a system design for the
individual ADAS functions.
6.4.1 Adaptive Cruise Control (ACC)
The Adaptive Cruise Control system automatically slows down the vehicle when it
approaches another vehicle in front and accelerates again to the preset speed when
traffic allows. Traditional ACC systems use laser or radar technologies to measure the
distance and speed of the vehicle in front. We propose a camera-based implementation
of ACC in Figure 6.6.
[Figure 6.6 flow: image sequence input → preceding vehicle detection → if a vehicle is
found, find the vehicle speed, the local speed, the headway distance, and the braking
time or Time to Contact (ToC) → if too close, reduce speed / issue a warning.]
Figure 6.6: Adaptive Cruise Control system
After finding the speed of the vehicle in front, the system finds the local speed and the
headway distance. It issues a warning and/or reduces the local speed in order to avoid a
forthcoming collision if the Time to Contact (ToC) is too small. Vehicle detection itself
is implemented using only camera-based methods, as shown in Figure 6.7 below.
[Figure 6.7 flow: image sequence input → pre-processing → lane detection, candidate
extraction, and candidate validation → vehicle (or other object) tracking → vehicle (or
other object) classification.]
Figure 6.7: Vehicle detection
The vehicle detection module extracts candidates, validates them, and then tracks them
for some time with help from the lane detection module. At the end, it classifies the
objects as, for example, bicycle, motorcycle, car, bus, or truck.
6.4.2 Intelligent Speed Adaptation/Advice (ISA)
The Intelligent Speed Adaptation system continuously observes the vehicle speed & the
local speed limit on a highway, and advises the driver or takes action when the vehicle
exceeds the speed limit. Traditionally, GPS is used to determine the local speed limit on
a road, but we propose a camera-based implementation of ISA in Figure 6.8 below.
[Figure 6.8 flow: image sequence input → detect speed-limit sign → if a sign is found,
find the local speed limit and the vehicle speed → if too fast, find the following
distance, then reduce speed to the limit & issue a warning.]
Figure 6.8: Intelligent Speed Adaptation system
The Intelligent Speed Adaptation system looks for any speed-limit sign on the road and
compares the speed limit with the speed of the vehicle. If the vehicle is too fast, it
issues a warning and reduces the vehicle speed while keeping an eye on the vehicles
behind in order to avoid a rear collision.
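A minimal sketch of the comparison step in Figure 6.8, assuming the speed-limit value has already been read from the recognised sign and the vehicle speed obtained from the middleware; the tolerance value is an assumption.

    def isa_check(vehicle_speed_kmh, speed_limit_kmh, tolerance_kmh=3.0):
        """Return the advisory/adaptation action for Intelligent Speed Adaptation."""
        if vehicle_speed_kmh > speed_limit_kmh + tolerance_kmh:
            # Before braking, the system also checks the following distance behind
            # the vehicle (see Figure 6.8) to avoid causing a rear collision.
            return "issue warning and reduce speed to the limit"
        return "speed within limit"

    print(isa_check(vehicle_speed_kmh=72.0, speed_limit_kmh=60.0))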
6.4.3 Forward Collision Warning (FCW) or Collision Avoidance
The Forward Collision Warning system detects objects on the road that would
otherwise go unnoticed and warns the driver of any possible collision with them.
Traditional systems use infrared and radar technologies to detect objects on the road,
but we propose a camera-based implementation of FCW in Figure 6.9.
Humans are not good at estimating the distance and speed of different objects on the
road. The FCW system detects any object in the same lane, calculates the distance
between the object and the vehicle, and issues a collision warning if the distance is
quickly becoming shorter than a threshold value. In this way, FCW issues a collision
warning only when it finds that the vehicle would collide with another vehicle or
object if it continued to move at the current speed.
[Figure 6.9 flow: image sequence input → lane detection (lane info) and preceding
object detection → if an object is found, identify the object type, track the object, and
estimate its distance → if too close, issue a collision warning.]
Figure 6.9: Forward Collision Warning system
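The "Distance Estimation" step of Figure 6.9 can be approximated from a single calibrated camera with the flat-road (pinhole) model: an object whose contact point with the road appears some pixels below the horizon line is roughly at distance f·h divided by that pixel offset, where f is the focal length in pixels and h the camera height. The sketch below is an illustrative approximation, not the thesis's implementation; all calibration values are assumptions.

    def distance_from_ground_contact(v_pixel, horizon_v, focal_px, cam_height_m):
        """Approximate distance (m) to an object whose bottom edge is at image
        row v_pixel, using the flat-road pinhole model."""
        dv = v_pixel - horizon_v          # pixels below the horizon line
        if dv <= 0:
            return float("inf")           # at or above the horizon: too far to tell
        return focal_px * cam_height_m / dv

    # Assumed calibration: focal length 700 px, camera 1.2 m above the road,
    # horizon at image row 240; a vehicle's bottom edge detected at row 300.
    print(distance_from_ground_contact(300, 240, 700.0, 1.2))  # about 14 m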
6.4.4 Lane Departure Warning (LDW)
The Lane Departure Warning system constantly observes the lane markings and warns
the driver when the vehicle begins to move out of its lane while the turn signal in that
direction is off. A similar system, Lane Keeping Assistance (LKA), helps the driver
keep the vehicle inside the proper lane.
Traditional LDW systems use light sensors to detect reflections from reflectors, or
magnetic sensors to detect the field produced by magnetic markers embedded in the
road. We propose a camera-based implementation of LDW in Figure 6.10.
The LDW system detects and tracks lane markings, predicts the lane geometry, finds
any deviation from the path, and issues a lane departure warning or lane keeping action
while keeping an eye on the vehicles behind to avoid a rear collision.
This system works even in the absence of lane markings; in that case, it assumes
virtual lanes of about three meters in width.
[Figure 6.10 flow: image sequence input → lane detection (if no markings are found,
assume virtual lanes of about 3 m width) → lane tracking → predict path or geometry
→ find deviations and required corrections → if corrections are needed, find the
following distance and issue a lane departure warning or lane keeping actions.]
Figure 6.10: Lane Departure Warning system
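A minimal sketch of the "Lane Detection" step of Figure 6.10, using the standard Canny edge detector and probabilistic Hough transform available in OpenCV for Python. This is an illustrative substitute for the built-in MATLAB model used in the thesis prototype; all parameter values are assumptions.

    import cv2
    import numpy as np

    def detect_lane_segments(frame_bgr):
        """Return candidate lane-marking line segments in the lower half of a frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        roi = gray[h // 2:, :]                       # the road occupies the lower half
        edges = cv2.Canny(roi, 50, 150)
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                   minLineLength=40, maxLineGap=20)
        if segments is None:
            return []                                # no markings: assume virtual lanes
        # Shift segment coordinates back to full-frame row indices.
        return [(x1, y1 + h // 2, x2, y2 + h // 2) for x1, y1, x2, y2 in segments[:, 0]]

    # Example usage on a single video frame:
    # cap = cv2.VideoCapture("road.avi"); ok, frame = cap.read()
    # print(detect_lane_segments(frame))

The detected segments would then be fitted to a lane model and tracked over frames; the deviation of the vehicle from the lane centre drives the warning decision shown in Figure 6.10.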
6.4.5 Adaptive Light Control
The Adaptive Light Control system moves or optimizes the headlight beam in response
to a number of external factors such as steering input, suspension dynamics, ambient
weather, visibility conditions, vehicle speed, and road curvature & contour. Traditional
ALC uses a large number of electronic sensors, transducers & actuators. We propose a
camera-based solution for finding the environmental factors for ALC, as shown in
Figure 6.11.
[Figure 6.11 flow: image sequence input → detect approaching vehicles (if found, dim
the lights), detect the environmental lighting, find the local speed, and perform lane
detection with path prediction → if it is dark and the vehicle is fast, switch to bright
lights; if the path is turning, bend the lights → commands are sent to the light
controller.]
Figure 6.11: Adaptive Light Control system
The ALC system determines the driving context and sends this information to the
headlight controller for adaptive action. It detects any approaching vehicle, the
environmental lighting, the local speed of the vehicle, and turnings in the path in order
to adapt the headlights accordingly.
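Once the context has been extracted from the video, the adaptation of Figure 6.11 boils down to a few rules. The sketch below uses an assumed rule set and assumed thresholds purely for illustration.

    def headlight_command(approaching_vehicle, is_dark, speed_kmh, turning):
        """Return the command sent to the headlight controller (assumed rule set)."""
        if approaching_vehicle:
            return "dim lights"
        commands = []
        if is_dark and speed_kmh > 80:   # fast driving in the dark: long-range beam
            commands.append("bright lights")
        if turning:
            commands.append("bend lights")
        return ", ".join(commands) if commands else "normal lights"

    print(headlight_command(approaching_vehicle=False, is_dark=True,
                            speed_kmh=95, turning=True))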
6.4.6 Parking Assistance
A Parking Assistance system helps drivers avoid collisions while parking their
vehicles. Some systems take over the steering and actively park the vehicle, while
others provide a live view of the surroundings and issue a warning in case of a
forthcoming collision. Such systems are relatively new to automobiles and usually use
cameras. We also propose a camera-based PA system, as shown in Figure 6.12.
[Figure 6.12 flow: image sequence input → image fusion → live view on the display →
detect objects all around → if objects are found, identify the object type, track the
objects, and estimate their distance → if too close, issue a collision warning.]
Figure 6.12: Parking Assistance system
The PA system identifies any objects in the very close proximity of the vehicle, tracks
them to find their distance, and issues a collision warning if the vehicle gets very close
to any of them.
6.4.7 Traffic Sign Recognition
The Traffic Sign Recognition system identifies road traffic signs and warns the driver
to act accordingly. Traditional TSR systems use GPS or radio technologies to
determine the traffic signs. We propose a camera-based implementation of TSR, as
shown in Figure 6.13.
The TSR system selects a region of interest (RoI), finds & tracks candidates, extracts
features, and classifies the sign. It then shows the sign on the display if it is valid.
[Figure 6.13 flow: image sequence input → select a region of interest (RoI) →
candidate detection → if a candidate is found, track it, extract features, and classify the
sign → if the sign is valid, show it on the display.]
Figure 6.13: Traffic Sign Recognition system
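A minimal sketch of the "Select Region of Interest" and "Candidate Detection" steps of Figure 6.13, finding red sign candidates by colour thresholding in HSV with OpenCV in Python. This is an illustrative substitute for the MATLAB model used in the prototype; the thresholds and minimum area are assumed values, and classification of the candidates is not shown.

    import cv2
    import numpy as np

    def red_sign_candidates(frame_bgr, min_area=300):
        """Return bounding boxes (x, y, w, h) of red regions that may be traffic signs."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so two hue ranges are combined.
        mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
        # OpenCV 4.x returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

    # Each box would then be tracked, features extracted, and the sign classified
    # (e.g. by template matching or a learned classifier) before being shown on the display.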
6.4.8 Blind Spot Detection
The Blind Spot Detection system helps avoid accidents when changing lanes in the
presence of other vehicles in the blind spot. It actively detects vehicles in the blind spot
and informs the driver before a turn is made. Traditional systems use sonar, radar, or
laser to detect vehicles in the blind spot. We propose a camera-based implementation
of BSD, as shown in Figure 6.14.
[Figure 6.14 flow: image sequence input → lane detection (if no lane is found, assume
virtual lanes of about 3 m width) → make a region of interest (RoI) → detect vehicles
→ if vehicles are found, find their distance, speed, direction, and size & type → update
the display & issue a warning.]
Figure 6.14: Blind Spot Detection system
The BSD system first finds the lane markings, detects any vehicles in the blind spots,
finds their speed, distance, direction, size, and type, and then issues a warning and
updates the display according to the reconstructed environment.
6.4.9 Driver Drowsiness Detection
The Driver Drowsiness Detection system detects a drowsy or sleeping driver and wakes
him up to avoid an accident. Traditional systems use a number of sensors, such as a
stress sensor for measuring the grip on the steering wheel, and sensors for the driver's
heartbeat, blood pressure, and temperature. We propose a simple camera-based
implementation of DDD that detects eye closure, as shown in Figure 6.15.
[Figure 6.15 flow: image sequence input → face detection → eye detection → eye state
tracking → if the eyes stay closed for n frames, issue a warning.]
Figure 6.15: Driver Drowsiness Detection system
The DDD system first detects the human face, then the eyes, and then tracks the eye
state. It issues a warning if it finds the eyes closed in more than n consecutive frames
(n is usually around 10).
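A minimal sketch of the eye-closure logic in Figure 6.15, using the Haar cascade detectors shipped with OpenCV for Python (not the thesis's MATLAB model). The cascade names are the ones distributed with OpenCV; treating "no open eye detected" as "eyes closed" and the threshold n = 10 follow the description above but are still simplifying assumptions.

    import cv2

    face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    def eyes_visible(gray_frame):
        """True if at least one open eye is detected inside a detected face."""
        for (x, y, w, h) in face_det.detectMultiScale(gray_frame, 1.3, 5):
            face = gray_frame[y:y + h, x:x + w]
            if len(eye_det.detectMultiScale(face, 1.1, 5)) > 0:
                return True
        return False

    def drowsiness_monitor(frames, n=10):
        """Yield one warning flag per frame: True once the eyes stay closed for n frames."""
        closed = 0
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            closed = 0 if eyes_visible(gray) else closed + 1
            yield closed >= n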
6.4.10 Pedestrian Detection
A Pedestrian Detection system identifies any human walking on or near the road and
alerts the driver in order to avoid a collision. Traditional systems use sonar, radar, or
laser technology. We propose a camera-based implementation of PD, as shown in
Figure 6.16.
Humans are the most valuable assets on the road, and governments are expected to
mandate pedestrian detection systems in the near future; future cars are therefore likely
to include PD as a compulsory module.
Our proposed PD system works like the TSR system. It detects humans by symmetry
or motion. In addition, it issues a warning and highlights the pedestrian symbol on the
display if someone is very close to the vehicle.
[Figure 6.16 flow: image sequence input → select a region of interest (RoI) →
candidate detection → if a candidate is found, track it and validate it as a pedestrian →
find its distance → if very close, issue a warning and show it on the display.]
Figure 6.16: Pedestrian Detection system
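A minimal pedestrian-candidate detection sketch for Figure 6.16, using OpenCV's built-in HOG + linear SVM people detector in Python. This is an illustrative substitute for the MATLAB model used in the prototype; the stride, padding, scale, and score threshold are assumed values.

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def detect_pedestrians(frame_bgr):
        """Return bounding boxes (x, y, w, h) of pedestrian candidates."""
        boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                              padding=(8, 8), scale=1.05)
        return [tuple(int(v) for v in b)
                for b, s in zip(boxes, weights) if float(s) > 0.5]

    # A detected pedestrian would then be tracked and its distance estimated
    # (e.g. with the flat-road model sketched in section 6.4.3) before a warning is issued.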
6.4.11 Night Vision
The Night Vision system helps the driver see objects on the road at night or in poor
weather. Traditional systems use infrared or radar technology to detect objects on the
road and use a projector for a head-up display (HUD). We propose a camera-based
implementation of NV, as shown in Figure 6.17, using an ordinary CMOS camera for
object detection and a TFT-LCD display for showing the detected objects.
[Figure 6.17 flow: image sequence input → lane detection and vehicle (& other object)
detection → find the distance, speed, direction, and size & type of each object →
highlight the objects on the display.]
Figure 6.17: Night Vision system
6.4.12 Environment Reconstruction
The Environment Reconstruction system identifies all the neighboring objects on the
road, including lane markings and vehicles, and finds their speed, distance, direction,
and size & type. It then reconstructs the environment and draws it on the display. The
idea of environment reconstruction is quite new and typically fuses a camera with other
types of sensors such as infrared, radar, and laser. We propose a camera-based
implementation of ER in Figure 6.18(a); Figure 6.18(b) shows a sample output. The ER
system identifies lanes and other objects around the user (the encircled vehicle) and
reconstructs the environment on the display.
[Figure 6.18(a) flow: image sequence input → lane detection and vehicle (& other
object) detection → find the distance, speed, direction, and size & type of each object
→ reconstruct the environment → show it on the display. Figure 6.18(b): the
reconstructed bird's-eye view around the encircled vehicle.]
Figure 6.18: Environment Reconstruction system and the Display
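One common way to obtain the top-down surroundings of Figure 6.18(b) is to warp each camera's view onto the ground plane (a bird's-eye projection) and then paint the tracked objects onto that plane. The homography-based warp below is a generic Python/OpenCV sketch, not the thesis's method; the four source points must come from camera calibration, and the values shown are placeholders.

    import cv2
    import numpy as np

    def birds_eye_view(frame_bgr, src_points, out_size=(400, 600)):
        """Warp a camera frame onto a top-down ground-plane view.
        src_points: four image points (pixels) of a known rectangle on the road,
        ordered top-left, top-right, bottom-right, bottom-left (from calibration)."""
        w, h = out_size
        dst_points = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H = cv2.getPerspectiveTransform(np.float32(src_points), dst_points)
        return cv2.warpPerspective(frame_bgr, H, (w, h))

    # Placeholder calibration points for a front camera (pixel coordinates):
    # top_down = birds_eye_view(frame, [(250, 300), (390, 300), (600, 470), (40, 470)])

Warped views from the four outside cameras can then be stitched into one 360° ground-plane image on which the detected vehicles and lane markings are drawn.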
6.5 Implementation
A full-fledged implementation of the proposed system is out of the scope of this thesis.
However, we show a basic prototype using ordinary CMOS cameras (8 megapixels)
and a laptop (P-4, 2 GHz, dual-core). We use the built-in models available in the
"Video and Image Processing Blockset" [317] provided by MATLAB (R2007a or
newer versions). We demonstrate only three functions: Human/Pedestrian Detection,
Traffic Sign Recognition, and Lane Departure Warning.
In the Pedestrian Detection system, the input video is processed to identify the
background, detect humans, and track them, as shown in Figure 6.19 below.
(a) Input Video
(b) Background
(c) Human Detected
(d) Human Tracked
Figure 6.19: Pedestrian Detection using built-in MATLAB model [317]
In the Traffic Sign Recognition system, the input video is processed and a "Stop" sign
is recognized. The "Stop" sign is highlighted in the video and a text message is
displayed, as shown in Figure 6.20 below.
(a) Input Video
(b) “Stop” Traffic Sign Recognized
Figure 6.20: Traffic Sign Recognition using built-in MATLAB model [317]
In the Lane Departure Warning system, a departure over the left or right lane marking
is detected and an audio-visual warning is issued. This is done by continuously
observing the distance of the vehicle from the center of the lane, which is also plotted
on a graph, as shown in Figure 6.21.
(a) Lane Departure on Left Side
(b) Lane Distance Signal
Figure 6.21: Lane Departure Warning using built-in MATLAB model [317]
The objective of this basic prototype is only to demonstrate that a full-fledged
camera-based ADAS can be implemented using MATLAB or other available
programming tools; a complete implementation is beyond the scope of this thesis.
6.6 Conclusions
In this chapter, we explained the design of our proposed Smart-Dashboard system,
which uses the layered architecture of context-aware systems. The system is not really
a calm technology; however, it serves the user silently until a warning has to be issued.
The system is also hackable in the sense that many innovative uses can be found for it,
such as capturing on video the beautiful landscape of a hill station or creating a 3D
view of the streets you have visited. We also provided the detailed design of each
ADAS module. Finally, we demonstrated a very basic prototype using built-in
MATLAB models to show how quickly & easily any camera-based ADAS function
can be implemented.
7 CONCLUSIONS
Road accidents cause a great loss of human lives and assets. Most accidents occur due
to human errors such as poor awareness, distraction, drowsiness, insufficient training,
and fatigue. An advanced driver assistance system (ADAS) can help drivers avoid
accidents by minimizing these human errors. An ADAS actively monitors the driving
environment and produces a warning or takes over control in dangerous situations. The
main features of ADAS include parking assistance, forward collision warning, lane
departure warning, adaptive cruise control, driver drowsiness detection, and traffic sign
recognition. Unfortunately, these features are provided only with modern luxury cars
because of their high cost: such systems use numerous sensors that make them complex
and costly. Therefore, researchers have now shifted their attention to camera-based
ADAS functions. Aiming at developing a camera-based ADAS, we carried out an
ethnographic study of how people drive their vehicles and of the factors affecting their
actions while driving. We observed drivers' activities while driving, engaged them in
discussions, and sent out questionnaires to selected people all over the world. We were
particularly interested in finding answers to our research questions (section 1.3); a brief
description of the answers follows:
1. Contextual information for drivers' awareness: We found that it would be very
useful for avoiding accidents if drivers were provided with information on the speed,
distance, relative position, direction, and size & type of the vehicles and other objects
around them. These five pieces of information are enough to build a full-fledged
camera-based ADAS and can be captured using different technologies. We surveyed
the supporting technologies, including radar, sonar, lidar, GPS, and video-based
analysis, and found that video-based analysis is the most suitable technology for this
purpose because it provides all the required support for implementing ADAS functions
in a simple way and at very low cost.
2. Distraction-free presentation of the information: To present the information to
drivers in a distraction-free way, we process it to reconstruct the environment and draw
a bird's-eye view on a display mounted on the dashboard, behind the steering wheel,
just in front of the driver.
3. User interface and human-machine interaction: To ensure a simple and easy user
interface, we make our system context-aware and hence adaptive. It requires minimal
input from the users but gives them maximum control over the system. The system
uses a touch-screen display and issues only audio-visual alerts to keep the interaction
simple and easy for drivers.
In this thesis, we have proposed a camera-based ADAS (the Smart-Dashboard system)
using the layered architecture of context-aware systems. This chapter identifies some
strengths, weaknesses, and possible future enhancements of the proposed system.
7.1.1 Strengths
Our proposed Smart-Dashboard system is a camera-based system, which has a number
of strong points.
First, the proposed Smart-Dashboard system provides almost all the functions of an
ADAS based entirely on five cameras installed on a vehicle. Many innovations are
being introduced in cameras every day. For example, infrared-enabled cameras can
also see at night; cameras with microphones can also listen; and radar-enabled cameras
can generate 3D pictures of the environment [292]. These innovative cameras will soon
become as cheap as ordinary cameras. In addition, we have seen many innovative
applications of cameras in the last few decades, and having cameras installed on a
vehicle opens the door to many more innovative applications in the future.
Second, the cost of a camera-based ADAS is much lower than that of other
technologies. A CMOS camera costs only a few dollars, starting from about US$ 15
for an ordinary camera and US$ 20 for an infrared-enabled camera. The popularity of
cameras in mobile phones and other handheld devices has encouraged manufacturers
to design cheaper, smaller, and more efficient cameras.
Third, camera-based ADAS had poor performance a few years ago. However, camera
technology has improved significantly, which now makes it possible to design
high-performance automotive functions.
Fourth, a camera-based ADAS does not depend on any infrastructure outside the
vehicle. For example, lane departure warning can work even if there are no visible lane
markings on the road or embedded magnetic markers. Additionally, it can be used with
new as well as old vehicles that have no infrastructure support.
Fifth, a camera-based ADAS is simple to implement, i.e. to develop, install &
maintain. This is because the field of video & image processing has been studied for
decades, and the proposed algorithms are accurate and fast. These algorithms, with
slight modifications, can be used in the development of a camera-based ADAS. Today,
a large number of camera-based techniques are available for detecting the speed,
distance, relative position, direction of movement, and size & type of objects on the
road.
Sixth, a camera-based ADAS is more intelligent than a system based on radar, sonar,
or lidar. It can distinguish relevant traffic from irrelevant things such as obstacles and
tin cans. It is also possible to incorporate learning & prediction capabilities using
techniques such as scene analysis. Moreover, a camera-based system can host multiple
applications and has the ability to integrate with other systems as well.
Seventh, a camera-based ADAS has very high availability. It uses only cameras, a
processor, and a display. CMOS cameras are readily available in the market and are
very easy to install & operate; processors and displays are likewise easily available at
low cost.
Eighth, a camera can scan a much wider area than other technologies. For example,
radar has a very limited field of view (about 16°), whereas a normal camera has a 46°
field of view and a wide-angle camera has a field of view as wide as 180°. Moreover, a
camera suffers no interference from absorbent materials, unlike radar or sonar.
However, like a lidar, it is affected by lighting conditions such as reflections; poor
lighting, bad weather, and high illumination also degrade its performance. Nowadays,
to overcome these issues, there are cameras with a wide dynamic range (WDR) [316]
of more than 120 dB that can handle both bright and dark environments through
automatic adjustment. For example, the picture in figure 7.1(a) is not very clear
because it was taken in a very bright environment using an ordinary camera, whereas
the same scene in figure 7.1(b) is much clearer when captured by a wide dynamic
range camera. Likewise, figure 7.1(c), also taken by a wide dynamic range camera,
shows a road scene at night where everything can be seen clearly.
(a) Image captured without WDR (b) Image captured with WDR
(c) A night scene captured with WDR
Figure 7.1: Imaging without (a) & with (b, c) wide dynamic range (WDR) [316].
Ninth, the capabilities of a CMOS camera can be increased by fusing some other kind
of sensing into it. For example, a radar-enabled CMOS camera can also find the speed
and direction of every object in a scene at once, using time-of-flight calculations on
each pixel [292].
Finally, a camera-based system can easily be integrated with other systems, and this
integration becomes smoother & easier if all components are camera-based. For
example, a camera-based lane departure warning system can be integrated with a
traffic sign recognition system so that they share the same camera.
7.1.2 Weaknesses
Although the Smart-Dashboard system has been designed very carefully, it has several
weaknesses. Some of these weaknesses are inherited from the technology, while others
stem from the design itself.
First, the performance of the Smart-Dashboard system is affected by bad weather
conditions such as fog, dust, rain, and particularly snow, because visibility is severely
reduced in bad weather. Infrared-enabled or radar-enabled cameras should remove this
weakness in the future.
Second, the proposed system has lower accuracy of speed & distance measurements
when compared to radar, sonar, or lidar, because a camera cannot accurately measure
the speed & distance of an object that is too slow or too close to the camera. Again,
infrared-enabled or radar-enabled cameras will improve the accuracy of these
measurements.
Third, the Smart-Dashboard system uses an LCD display to show the reconstructed
environment. It is important to note that a driver can pay only a little attention to the
displayed information while driving, and may be distracted when looking at the display
for more than a few seconds. To avoid this problem, we could use a projector for a
head-up display (HUD), but that would significantly increase the cost of the proposed
system.
Fourth, as the proposed Smart-Dashboard system uses five cameras, the issue of
privacy cannot be overlooked. The camera inside the vehicle might be invasive for the
driver, and the cameras outside the vehicle might be invasive for neighboring travelers
on the road. Furthermore, the possibility of adding new applications to the system
could make it possible to record every movement of the neighbors.
Finally, the Smart-Dashboard system requires some tuning at the time of installation,
because the camera parameters must be determined for accurate measurements of
speed, distance, etc.
7.1.3 Future Enhancements
Since limited time and resources were spent on the Smart-Dashboard project, it has a
number of deficiencies, and several future enhancements are therefore possible.
First, we have described the design of all ADAS functions for the Smart-Dashboard
system, but our prototype implements only three of them, namely Human/Pedestrian
Detection, Traffic Sign Recognition, and Lane Departure Warning. All the remaining
functions can be implemented in the future.
Second, the Smart-Dashboard system does not learn from user actions at present.
Learning could be incorporated in a future version.
Third, with a few enhancements, the Smart-Dashboard system could be used for
training new drivers. This requires some additional functions in order to advise a new
driver and to control the vehicle in case of emergency.
Finally, camera-based automotive systems are still in the development phase and will
take a few more years to yield reliable applications for automobiles. Infrared-enabled
& radar-enabled CMOS cameras with high accuracy and reliability have recently
become available; fusing radar and vision sensing will make the Smart-Dashboard
system much more accurate and reliable in the future.
APPENDIX A
A1 – Questionnaire
The following survey was put online and could be accessed through the public URL
http://www.surveygizmo.com/s/124388/aug-drive during the period it was active.
Augmenting safe and smooth driving
Introduction:
• Annually, road accidents cause about 1.2 million deaths, over 50 million injuries, and
a global economic cost of over US$ 518 billion.
• I'm conducting this survey for my research on "Augmenting safe and smooth
driving". The data collected will be used only for academic purposes and will not be
given to any third party.
• Please answer all of these questions carefully. This will take only a few minutes but
will be a valuable contribution to saving lives on the road. I'll be extremely thankful to
you for your extended cooperation ... (Muhammad Akhlaq)
1. What kind of car do you own or drive (now or in the past)?
Latest car with many safety features (2008 or above model)
Relatively new car (1996-2007 model)
Old car (1985-1995 model)
Very old car (before 1985 model)
2. Does your car have any modern safety features? (e.g. Night Vision, Parking
Assistant, Traffic Sign Recognition, or Blind Spot Detection etc)
Yes
No
3. Is there any kind of video display mounted on your car’s dashboard
(e.g. for GPS navigation or CD/DVD, or inside your car’s speedometer etc)?
Yes
No
4. For how long can you drive continuously without taking any rest or break?
Less than 1 hour
1 – 2 hours
2 – 4 hours
More than 4 hours
5. Do you use mobile phone, laptop or other hand-held computers while driving?
Never (i.e. I keep it switched off)
Sometimes
Often
More than often
6. For what purpose do you use mobile phone while driving, if needed?
(Select one or more options)
Messages ( i.e. SMS or MMS)
Phone calls
Games and playing audio/video clips
Photography and audio/video recording
Others (Please specify …)
7. Reading an SMS while driving requires how much of your attention?
No attention
Very less attention
High attention
A very high attention
8. Which of the following MOSTLY distracts you from driving (i.e. draws your
attention away from driving)?
Things outside the car (such as too much or fast traffic, police, sun, animals, accident,
construction, bad-road etc )
Things inside the car (such as adjusting radio, cassette, CD, mirrors, AC, & wipers etc )
Personal state (such as thinking, tiredness, sleepy, being depressed, happy, upset or
relationship problem etc )
Activity (such as eating, drinking, smoking, talking etc )
9. While driving, what makes you more worried / upset about the surroundings?
(Select one or more options)
Too much traffic
Uneven, curvy and damaged roads
Too many crossings, bridges and footpaths
Vehicles which are too close to me
Vehicles which are too fast
Heavy traffic such as trucks and busses around me
Motorcycles and bicycles
People and animals crossing the road
Others (Please specify …)
10. In your opinion, what is the most common reason for road accidents?
Human factors (such as inattention, over-speeding, drinking, drowsiness, tiredness,
violation of traffic laws etc…)
Road defects (such as potholes, narrow lanes, bridges, crossings, sudden turns, slippery road and too much traffic i.e. rush etc)
Vehicle Defects (such as break-failure, steering-failure, tire-burst, headlights-failure
etc)
Others (please specify…)
11. What was the reason for road accident you have recently faced or seen?
Driver was distracted from the road i.e. his attention was dispersed.
Driver felt sleepy or tired
Driver changed the lane without taking care of traffic on the road
Driver could not recognize a road sign such as speed limit or no overtaking
Another vehicle or person suddenly appeared
Something wrong with the car (such as failure of breaks, tire-burst etc)
Others (please specify…)
12. After windscreen, which of the following locations is easiest to see while driving?
Speedometer
Button/control area in the middle of dashboard
Side mirrors
Back-view mirror
Other (Please specify ... )
13. In your opinion, what information about other vehicles on the road can be helpful
for drivers to avoid accidents?
(Select one or more options)
Distance in meters
Speed
Direction
Size and Type (i.e. human/animal, bicycle/motorcycle, car/van, bus/truck etc)
Relative position
Others (please specify …)
14. In time of danger, what kind of alert should be issued? (Select one or more options)
Auditory
Textual or Visual
Haptic (e.g. shake the driver seat if sleeping)
Automatic or takeover the control from driver (e.g. automatically apply brakes to avoid
collision etc)
Others (please specify …)
15. WRITE YOUR COMMENTS HERE (If any):
(Note: Include your Email address if you are interested in results)
A2 – Response Summary Report
The results of the above survey were collected using the same online tool,
SurveyGizmo.com. A brief report of the results follows.
Report: Response Summary Report
Survey: Augmenting safe and smooth driving
Compiled: 04/26/2009
1. What kind of car do you own or drive (now or in the past)?
2. Does your car have any modern safety features?
(e.g. Night Vision, Parking Assistant, Sign Recognition, Blind Spot Detection etc)
3. Is there any kind of video display mounted on your car’s dashboard
(e.g. for GPS navigation or CD/DVD, or inside your car’s speedometer etc)?
4. For how long can you drive continuously without taking any rest or break?
5. Do you use mobile phone, laptop or other hand-held computers while driving?
6. For what purpose do you use mobile phone while driving, if needed?
(Select one or more options)
7. Reading an SMS while driving requires how much of your attention?
8. Which of the following MOSTLY distracts you from driving (i.e. draws your
attention away from driving)?
9. While driving, what makes you more worried / upset about the surroundings?
(Select one or more options)
10. In your opinion, what is the most common reason for road accidents?
11. What was the reason for road accident you have recently faced or seen?
12. After windscreen, which of the following locations is easiest to see while driving?
13. In your opinion, what information about other vehicles on the road can be helpful
for drivers to avoid accidents? (Select one or more options)
14. In time of danger, what kind of alert should be issued?
(Select one or more options)
15. WRITE YOUR COMMENTS HERE (If any):
(Note: Include your Email address if you are interested in results)
ID
Comments
27977884
Everyone believes he/she is a better driver than they actually are. Notice that everyone driving slower than you is an
idiot and everyone faster is a maniac. please share the results of the survey with me [email protected]
27978649
Nice survey. Hope you can get it published in soft or hard media and distribute it to spread awareness among
common people
27990528 I have no car but I have bicycle but the rules are same for every body, [email protected]
27995563 In addition, drivers need to be taught the importance of patience. Thanks, My email: [email protected]
27995908 Something is missing in the questionnaire. It needs customization to the local Saudi Arabia.
28002714 Good luck, excellent subject matter and hoping that you may contribute to this society. e-mail:[email protected]
28009536 please send me the results when the survey is complete [email protected]
28010774 In my opinion in one should have a fully fit vehicle, one should leave early so not to drive fast, be in a comfortable state of mind and physique, should not use mobile phone while driving, watch out for others making mistakes and constantly keep looking in side and back view mirrors. [email protected]
28021300 Some points from above can themselves be dangerous in many cases, e.g. in Q20 if automatically brakes are applied, car can slip if speed is over or car can be hit from back. … Best way is that drivers keep control and stay focused and technology may be introduced for better results simultaneously. Best of luck.
28041273
Talking about Pakistan is a very different matter. Here motorways/highways are built without any planning. In
Islamabad, a network of highways has been built but there is no underpass for the pedestrians. on some places, there
are overhead bridges but less than 5 percent of the pedestrians use it and you can see them coming on the highway
and creating problem. Further more these overhead bridges have been built ignoring the people on cycle or people
who are handicap. Some people may think that it may be cost a lot if people are told to get driving classes from a
training school would be expensive but I think it must be implemented as 90% of the people driving don’t have
traffic sense.
28070733
Knowing about your surroundings, e.g. person standing back of car while you are driving back will be helpful.
However, at the same time please note that only give information which is needed and only when it is needed.
In my personal opinion, most accident happens when driver is assuming something and it didn't happen. Like Cars in
front stopping suddenly, or Car doesn't start moving, ( e.g. on yield or stop sign car at front doesn't start moving as
expected). Volvo has already introduced features like alerts and automatic car stopping (city safety) and I think these
28066469
are good features. For long drives, on highways, I would like the car to maintain the lane automatically. Some
solution to blind spot or if I am changing lane without knowing that other car is too close. Email address:
[email protected]
28076690
this survey contains too many same sort of questions.... ask things like..... 1)seat belts 2)do u follow traffic rules
3)loud music can be a factor of negligence etc
28069362
There should be lot of safety step to avoid "Accident or Collision", it is a serious Problem & we should take it
[email protected]
28213734 This online survey is not applicable for a professional driver (e.g. taxi driver, truck driver, goods transporter driver etc). The question of this survey are specific to normal/family driver, can not be applicable to heavy traffic driver (the have different driving parameters such as load etc.) --Ishtiaq ([email protected])
28209423 As I mentioned above, an automatic control can be made to alert all drivers to apply brakes in a circle of an expected accident in order to avoid any accident by means of some wireless/radio transmission of information between the vehicles etc.
28223473
It was nice and short survey, Drivers in Pakistan never use low beams at night, and I saw many accidents because of
it. Good luck, looking forward for results. e-mail is: [email protected]
28242874 do send me the results, [email protected]
28254131
Very interesting and useful research. Please send me result after completion. My e-mail address is:
[email protected]
28256098 a very nice survey... briefly covered almost all the things in the topic
28260232 What u asked is really good but few things are missing.....
28261026
I am professor of computer sciences, I will be interested in segmentation of the people based on their different
driving behaviors. [email protected]
28264337 I would like to see the results Thanks - from Pakistan [email protected]
28264631 It’s all about the traffic controller if they enforced the people to follow the laws.
28271339
This is a good topic it may be helpful in controlling traffic and reducing accidents in our country.
[email protected]
28334937 Akhlaq sahib, wish u best of luck. hanso, mazay karo aur khush raho. my email: [email protected]
28375823 The main issue with people is lack of education and or caring for rules to avoid accidents. Ego, carelessness, ignorance etc cause most accidents. At least over here in Pakistan you always have to drive with the supposition that your neighboring drivers are reckless and will suddenly make a mistake - endangering you or others around you - therefore you are able to react quickly and avoid damage. More helpful (in my opinion) questions that should have been included are: 1. What is your age-group? 2. Do you posses a driving license? 3. Education level? 4. Does your car have seatbelts and airbags? 5. Do you always put on a seatbelt? I am interested in getting the results. My email Address is: [email protected]
28436009
Question 18 is difficult to understand. Kindly update so that it could be interpenetrated easily. My email is:
[email protected]
28527882 have fun please :) and don’t drive fast; drive slowly and nicely :)
28527927 thanks for this survey I think it will be helpful in future
28527901 Thanks 4 the survey .. it's really interesting
28528047 BE CAREFUL
28527934 well, thanks and I hope every thing be good for me and family and my friends.
28532998 Several questions need selection of multiple choices, whereas only Radio Button is used. [email protected]
28533204 I would see the results please contact me on : [email protected] thanks
28533829 the accident still with the people it is will not done maybe decreased [email protected]
28822198
Making drivers aware of their environment can significantly reduce chances of accidents. Accidents mostly occur
due to negligence of drivers in one or the other way.
BIBLIOGRAPHY
[1] Peden, M., Scurfield, R., et al, eds, “World report on road traffic injury prevention”,
World Health Organization, Geneva, 2004,
http://www.who.int/violence_injury_prevention/publications/road_traffic/world_rep
ort/en/index.html, (Retrieved: Dec 10, 2009).
[2] Treat, J. R., Tumbas, N. S., McDonald, S .T., Shinar, D., Hume, R. D., Mayer, R. E.,
Stanisfer, R. L., and Castillan, N. J., “Tri-level study of the causes of traffic
accidents”, Report No. DOT-HS-034-3-535-77, Indiana University, 1977.
[3] Sabey, B. E. and Staughton, G. C., “Interacting roles of road environment, vehicle
and road user in accidents”, Paper presented at the 5th International Conference on
the International Association for Accident and Traffic Medicine, London, 1975.
[4] Conti, J.P., "Smart cars," Communications Engineer , vol.3, no.6, pp. 25-29,
Dec./Jan. 2005/2006.
[5] Krum, D. M., Faenger, J., Lathrop, B., Sison, J., and Lien, A., “All roads lead to
CHI: interaction in the automobile”. In CHI '08 Extended Abstracts on Human
Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM,
New York, NY, 2387-2390, 2008.
[6] Rene Mayrhofer, “An Architecture for Context Prediction”, PhD dissertation,
Johannes Kepler Universität Linz, Austria, 2004.
[7] Albrecht Schmidt, Kofi Asante Aidoo, Antti Takaluoma, Urpo Tuomela, Kristof
Van Laerhoven, and Walter Van de Velde., “Advanced interaction in context”. In
Proceedings of First International Symposium on Handheld and Ubiquitous
Computing, HUC'99, pages 89-101, Karlsruhe, Germany, Springer Verlag.
September 1999.
[8] Chen, H.; Finin, T.; Anupam Joshi; Kagal, L.; Perich, F.; Dipanjan Chakraborty,
"Intelligent agents meet the semantic Web in smart spaces," Internet Computing,
IEEE , vol.8, no.6, pp. 69-79, Nov.-Dec. 2004.
[9] V. Belloti, K. Edwards, “Intelligibility and accountability: human considerations in
context aware systems”, Human Computer Interaction 16 (2–4), pp. 193–212, 2001.
[10] S Hoh, J S Tan and M Hartley, “Context-aware systems — a primer for user-centred
services”, BT Technology Journal, Vol 24 No 2, pp. 186-194, April 2006.
[11] Kim, S., Kang, J., Oh, S., Ryu, Y., Kim, K., Park, S., and Kim, J., “An Intelligent
and Integrated Driver Assistance System for Increased Safety and Convenience
Based on All-around Sensing”, J. Intell. Robotics Syst. 51, 3 (Mar. 2008), 261-287,
2008.
[12] Reding, V. (2006), “The Intelligent Car Initiative: raising awareness of ICT for
Smarter, Safer and Cleaner vehicle”, Speech delivered at the Intelligent Car
Launching Event, Brussels, 23 February 2006.
[13] Intelligent Car,
http://ec.europa.eu/information_society/activities/intelligentcar/index_en.htm,
(Retrieved: Dec 10, 2009).
[14] AWAKE (2000), “System for Effective Assessment of Driver Vigilance and
Warning According to Traffic Risk Estimation” - and Vehicle control in
Emergency”-European project IST-2000-28062 - http://www.awakeeu.org/
[15] Rene´ Marois and Jason Ivanoff, “Capacity limits of information processing in the
brain”, TRENDS in Cognitive Sciences Vol.9 No.6 June 2005.
[16] SENSATION, 2004 “Advanced Sensor Development for Attention Stress Vigilance
and Sleep/Wakefulness Monitoring” - European project IST-507231 www.sensation-eu.org
[17] Brookhuis K., “Integrated systems. Results of experimental tests, recommendations
for introduction”, Report Deter, Deliverable 18 to the European commission,
University of Groningen, 1995.
[18] Estève D., Coustre A., Garajedagui M., “L’intégration des systèmes électroniques
dans la voiture du XXI siècle”, Cépadues Editions, 1995.
[19] SAVE EEC/DG XIII (1996-1998) .Technology Initiative Transports Telematics .
System for effective Assessment of the driver state and Vehicle control in
Emergency situations (SAVE) TR1047, 1996-1998.
[20] AIDE - Adaptive Integrated Driver-vehicle Interface, http://www.aide-eu.org/,
(Retrieved: Dec 10, 2009).
[21] PReVENT, http://www.prevent-ip.org/, (Retrieved: Dec 10, 2009).
[22] eSafety, http://ec.europa.eu/information_society/activities/esafety/index_en.htm,
(Retrieved: Dec 10, 2009).
[23] SIM, http://www.sim-eu.org/glance.html, (Retrieved: Dec 10, 2009).
[24] ADASE,
http://cordis.europa.eu/data/PROJ_FP5/ACTIONeqDndSESSIONeq112422005919n
dDOCeq155ndTBLeqEN_PROJ.htm, (Retrieved: Dec 10, 2009).
[25] APROSYS, http://www.aprosys.com/, (Retrieved: Dec 10, 2009).
[26] EASIS – Electronic Architecture Safety Systems, http://www.easis-online.org/,
(Retrieved: Nov 2008).
[27] GST, http://www.gstforum.org/, (Retrieved: Dec 10, 2009).
[28] HUMANIST_Network of excellence, http://www.noehumanist.org/, (Retrieved: Dec
10, 2009).
[29] Intelligent Vehicle Initiative's (IVI), http://www.its.dot.gov/ivi/ivi.htm, (Retrieved:
Dec 10, 2009).
[30] Philip F. Spelt, Daniel R. Tufano, and Helmut E. Knee, “Development and
Evaluation of An In-Vehicle Information System”, Proceedings of the Seventh
Annual Meeting and Exposition of the Intelligent Transportation Society of America
Washington, D.C. June 2-5, 1997.
[31] Driver Information System, http://www.talgov.com/traffic/index.cfm, (Retrieved:
Dec 10, 2009).
[32] Srinivasan, Raghavan, Chun-Zin Yang, Paul P. Jovanis, Ryuichi Kitamura, G.
Owens, Mohammed Anwar, “California Advanced Driver Information System
(CADIS)”, Final Report. Institute of Transportation Studies, University of
California, Davis, Research Report UCD-ITS-RR-92-20, 1992.
[33] Srinivasan, Raghavan, Paul P. Jovanis, Francine H. Landau, M. Laur, Charles Lee,
C. M. Hein, “California Advanced Driver Information System II (CADIS II):
Extended Tests of Audio and Visual Route Guidance Displays”. Institute of
Transportation Studies, University of California, Davis, Research Report UCD-ITS-RR-95-03, 1995.
[34] Bergmeier, Heiner Bubb, “Augmented Reality in vehicle – Technical realisation of a
contact analogue Head-up Display under automotive capable aspects; usefulness
exemplified through Night Vision Systems.”, FISITA Edition: 2008-09, available
online at: http://www.fisita2008.com/programme/programme/pdf/F2008-02-043.pdf
[35] Kumar, M. and Kim, T., “Dynamic speedometer: dashboard redesign to discourage
drivers from speeding”. In CHI '05 Extended Abstracts on Human Factors in
Computing Systems (Portland, OR, USA, April 02 - 07, 2005). CHI '05. ACM, New
York, NY, 1573-1576, 2005.
[36] Perrillo, K. V., "Effectiveness of Speed Trailer on Low-Speed Urban Roadway,"
Master Thesis, Texas A&M University, College Station, TX, December 1997.
[37] Knipling, R., “Changing driver behavior with on-board safety monitoring”. ITS
Quarterly, Volume VIII, No.2, 27-35, 2000.
[38] The Intelligent Speed Adaptation project at the Institute for Transport Studies at the
University of Leeds, UK. http://www.its.leeds.ac.uk/projects/ISA/index.htm,
(Retrieved: Dec 10, 2009).
[39] Eriksson, J., Girod, L., Hull, B., Newton, R., Madden, S., and Balakrishnan, H.,
“The pothole patrol: using a mobile sensor network for road surface monitoring”. In
Proceeding of the 6th international Conference on Mobile Systems, Applications,
and Services (Breckenridge, CO, USA, June 17 - 20, 2008). MobiSys '08. ACM,
New York, NY, 29-39, 2008.
[40] B. Hull, V. Bychkovsky, Y. Zhang, K. Chen, M. Goraczko, E. Shih, H.
Balakrishnan, and S. Madden, “CarTel: A Distributed Mobile Sensor Computing
System”. In Proc. ACM SenSys, Nov. 2006.
[41] B. Greenstein, E. Kohler, and D. Estrin., “A sensor network application construction
kit (SNACK)”. In SenSys, pages 69–80, 2004.
[42] Adell, E.; Varhelyi, A.; Alonso, M.; Plaza, J., "Developing human-machine
interaction components for a driver assistance system for safe speed and safe
distance," Intelligent Transport Systems, IET , vol.2, no.1, pp.1-14, March 2008.
[43] Bergasa, L.M.; Nuevo, J.; Sotelo, M.A.; Barea, R.; Lopez, M.E., "Real-time system
for monitoring driver vigilance," Intelligent Transportation Systems, IEEE
Transactions on , vol.7, no.1, pp. 63-77, March 2006
[44] Albert Kircher, Marcus Uddman and Jesper Sandin, “Vehicle control and
drowsiness,” Swedish National Road and Transport Research Institute, Linköping,
Sweden, Tech. Rep. VTI-922A, 2002.
[45] Moore, G., “Cramming more components onto integrated circuits”, Electronics, Vol.
38, pp. 114-117, 1965.
[46] DaimlerChrysler AG. (2001, Jun.). The Electronic Drawbar. [Online]. Available:
http://www.daimlerchrysler.com, (Retrieved: Dec 10, 2009).
[47] IBM Smart dashboard watches drivers, Thursday, 19 July, 2001,
http://news.bbc.co.uk/2/low/science/nature/1445342.stm, (Retrieved: Dec 10, 2009).
[48] Janine G Walker, Nick Barnes and Kaarin Anstey, “Sign Detection and Driving
Competency for Older Drivers with Impaired Vision”, Proceedings of the 2006
Australasian Conference on Robotics & Automation, Bruce MacDonald (ed),
Auckland, New Zealand, December 6 - 8, 2006.
[49] Kohashi, Y., Ishikawa, N., Nakajima,M. “Automatic Recognition of Road signs and
Traffic signs”, Proc. 1st ITS Symposium 2002, pp.321-326, 2002.
[50] Makanae, K., Kanno, A., “Proposal of the Signing System Utilizing Image
Recognition”, Proc. 1st ITS Symposium 2002, pp.137-142, 2002.
[51] Uchimura, K., Tominaga, H., Nakamura, K., Wakisaka, S., Arita, H. “Adding a
Route Guidance Sign to Digital Road Map”, Proc. 1st ITS Symposium 2002,
pp.25-30, 2002.
[52] Yoshimichi Sato, Koji Makanae, “Development and Evaluation of In-vehicle
Signing System Utilizing RFID tags as Digital Traffic Signs”, International Journal
of ITS Research, Vol. 4, No.1, December 2006.
[53] Oki, Y., Yamada, F., Seki, Y., Mizutani, H., Makino,H. “Actual Road Verification
of AHS Support System for Prevention Of Vehicle Overshooting on Curves”, Proc.
2nd ITS Symposium 2003, pp.247-252.
[54] Tsippy Lotan, Tomer Toledo, “Evaluating the Safety Implications and Benefits of an
In-Vehicle Data Recorder to Young Drivers”, In PROCEEDINGS of the Third
International Driving Symposium on Human Factors in Driver Assessment, Training
and Vehicle Design, Maine USA, June 27-30, 2005
[55] Minker, W., Haiber, U., Heisterkamp, P., and Scheible, S., “Intelligent dialog
overcomes speech technology limitations: the SENECa example”. In Proceedings of
the 8th international Conference on intelligent User interfaces (Miami, Florida,
USA, January 12 - 15, 2003). IUI '03. ACM, New York, NY, 267-269, 2003.
[56] Geva Vashitz, David Shinar, Yuval Blum, “In-vehicle information systems to
improve traffic safety in road tunnels”, Transportation Research Part F: Traffic
Psychology and Behaviour, Volume 11, Issue 1, Pages 61-74, January 2008.
[57] Santos, J., Merat, N., Mouta, S., Brookhuis, K. and De Waard, D., “The interaction
between driving and in-vehicle information systems: Comparison of results from
laboratory, simulator and real-world studies”. Transportation Research Part F:
Traffic Psychology and Behaviour, 8 (2). pp. 135-146. ISSN 1369-8478, 2005.
[58] Abuelela, M.; Olariu, S.; Weigle, M.C., "NOTICE: An Architecture for the
Notification of Traffic Incidents," Vehicular Technology Conference, 2008. VTC
Spring 2008. IEEE , vol., no., pp.3001-3005, 11-14 May 2008.
[59] McMurran, Ross; McKinney, Francis; Hayden, Jill; Fowkes, Mark; Szczygiel,
Michael; Ross, Tracey; Frampton, Richard; Robinson, Tom; Clare, Nick, "Co-Driver
Alert project," Road Transport Information and Control - RTIC 2008 and ITS United
Kingdom Members' Conference, IET , vol., no., pp.1-6, 20-22 May 2008.
[60] Seong-eun Yoo; Poh Kit Chong; Taisoo Park; Youngsoo Kim; Daeyoung Kim;
Changsub Shin; Kyungbok Sung; Hyunhak Kim, "DGS: Driving Guidance System
Based on Wireless Sensor Network," Advanced Information Networking and
Applications - Workshops, 2008. AINAW 2008. 22nd International Conference on ,
vol., no., pp.628-633, 25-28 March 2008.
[61] Comparable Systems Analysis, http://www.fhwa.dot.gov/tfhrc/safety/pubs/95197/,
(Retrieved: Dec 10, 2009).
[62] Advanced Parking Guidance System,
http://www.lexus.com/models/LS/features/exterior/advanced_parking_guidance_sys
tem.html, (Retrieved: Dec 10, 2009).
[63] DashDyno SPD Automotive Computer,
http://www.auterraweb.com/dashdynoseries.html, (Retrieved: Dec 10, 2009).
[64] CarChip Fleet Pro Engine Performance Monitor,
http://www.geneq.com/catalog/en/carchips.htm, (Retrieved: Dec 10, 2009).
[65] Davis DriveRight 600E Data Logger,
http://www.microdaq.com/davis/automotive/driveright/8156obd.php, (Retrieved:
Dec 10, 2009).
[66] ScanGaugeII - Scan Tool + Digital Gauges + Trip Computers,
http://www.scangauge.com/features/, (Retrieved: Dec 10, 2009).
[67] PDA-Dyno and OBD2 Scan Tool, http://www.nology.com/pdadyno.html,
(Retrieved: Dec 10, 2009).
[68] Mark David, “ESC Gives Life To Smart Fabric, Smart Dashboard”, Electronic
Design, Penton Publishing, Vol. 54, No. 9, p. 19, 2006.
[69] Robert Bogue, “Safety concerns drive the automotive sensor markets”, Sensor
Review, Volume 26, Number 3, pp. 231-235, Emerald Group Publishing Limited
[ISSN 0260-2288], 2006.
[70] Dey, A.K., Abowd, G.D., “Towards a better understanding of context and
context awareness”. Proceedings of the CHI 2000 Workshop on The What, Who,
Where, and How of Context-Awareness, The Hague, The Netherlands, 2000.
[71] Fletcher, L.; Petersson, L.; Zelinsky, A., "Driver assistance systems based on vision
in and out of vehicles," Intelligent Vehicles Symposium, 2003. Proceedings. IEEE ,
vol., no., pp. 322-327, 9-11 June 2003
[72] Mohan Manubhai Trivedi; Tarak Gandhi; Joel McCall, "Looking-In and Looking-Out of a Vehicle: Computer-Vision-Based Enhanced Vehicle Safety," Intelligent
Transportation Systems, IEEE Transactions on , vol.8, no.1, pp.108-120, March
2007
[73] Honda Develops New Multi-View Camera System to Provide View of Surrounding
Areas to Support Comfortable and Safe Driving, September 18, 2008,
http://world.honda.com/news/2008/4080918Multi-View-Camera-System/,
(Retrieved: Dec 10, 2009)
[74] Nedevschi, S.; Danescu, R.; Marita, T.; Oniga, F.; Pocol, C.; Sobol, S.; Tomiuc, C.;
Vancea, C.; Meinecke, M.M.; Graf, T.; Thanh Binh To; Obojski, M.A., "A Sensor
for Urban Driving Assistance Systems Based on Dense Stereovision," Intelligent
Vehicles Symposium, 2007 IEEE , vol., no., pp.276-283, 13-15 June 2007.
[75] Luiz Cláudio G. Andrade, Mário F. Montenegro Campos, Rodrigo L. Carceroni, "A
Video-Based Support System for Nighttime Navigation in Semi-Structured
Environments," Computer Graphics and Image Processing, Brazilian Symposium on,
vol. 0, no. 0, pp. 178-185, Computer Graphics and Image Processing, XVII Brazilian
Symposium on (SIBGRAPI'04), 2004.
[76] Finnefrock, M.; Xinahua Jiang; Motai, Y., "Visual-based assistance for electric
vehicle driving," Intelligent Vehicles Symposium, 2005. Proceedings. IEEE , vol.,
no., pp. 656-661, 6-8 June 2005.
[77] EyeQ2, Vision System on Chip, www.mobileye.com/uploaded/EyeQ2.pdf,
(Retrieved: Dec 10, 2009)
[78] Shorin Kyo; Okazaki, S., "In-vehicle vision processors for driver assistance
systems," Design Automation Conference, 2008. ASPDAC 2008. Asia and South
Pacific , vol., no., pp.383-388, 21-24 March 2008
[79] Ehlgen, T.; Thorn, M.; Glaser, M., "Omnidirectional Cameras as Backing-Up Aid,"
Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on , vol.,
no., pp.1-5, 14-21 Oct. 2007
[80] Hong Cheng; Zicheng Liu; Nanning Zheng; Jie Yang, "Enhancing a Driver's
Situation Awareness using a Global View Map," Multimedia and Expo, 2007 IEEE
International Conference on , vol., no., pp.1019-1022, 2-5 July 2007.
[81] Rakotonirainy, Andry and Feller, Frank and Haworth, Narelle L., “Using in-vehicle
avatars to prevent road violence”. Copyright 2008 Centre for Accident Research and
Road Safety (CARRS-Q), 2008.
[82] Pugh, S., "The Train Operator's View," User Worked Crossings, 2008. The
Institution of Engineering and Technology Seminar on , vol., no., pp.103-112, 30-30
Jan. 2008.
[83] Liu, Y. C., K. Y. Lin, and Y. S. Chen, “Bird’s-eye view vision system for vehicle
surrounding monitoring,” in Proc. Conf. Robot Vision, Berlin, Germany, Feb. 20,
2008, pp.207-218, 2008.
[84] U. Handmann, T. Kalinke, C. Tzomakas, M. Werner, W. v. Seelen, “An image
processing system for driver assistance”, Image and Vision Computing, Volume 18,
Issue 5, April 2000, Pages 367-376, ISSN 0262-8856, 2000.
[85] Driver assistance systems,
http://www.volkswagen.com/vwcms/master_public/virtualmaster/en2/experience/inn
ovation/driver_assistance_systems/start.html, (Retrieved: Dec 10, 2009).
[86] Krikke, J., "T-Engine: Japan's ubiquitous computing architecture is ready for prime
time," Pervasive Computing, IEEE , vol.4, no.2, pp. 4-9, Jan.-March 2005.
[87] L. Feng, P.M.G. Apers, and W. Jonker., “Towards Context-Aware Data
Management for Ambient Intelligence”. In Proceedings of the 15th International
Conference on Database and Expert Systems Applications, Zaragoza, Spain, pp.
422-431, 2004.
[88] M. Weiser, "The Computer for the 21st Century", Scientific American 265, No. 3,
94-104, September 1991.
[89] Pervasive Computing 2001 Conference in National Institute of Standards and
Technology, http://www.nist.gov/ (Retrieved: Dec 10, 2009)
[90] Kostakos, V., O'Neill, E., and Penn, A. “Designing Urban Pervasive Systems”. IEEE
Computer 39, 9 (Sep. 2006), 52-59. 2006.
[91] Anne McCrory, “Ubiquitous? Pervasive? Sorry, they don't compute… JARGON
JUDGE”, COMPUTERWORLD,
http://www.computerworld.com/news/2000/story/0,11280,41901,00.html
(Retrieved: Dec 10, 2009)
[92] Weiser, M., Gold, R., and Brown, J. S., “The origins of ubiquitous computing
research at PARC in the late 1980s”, IBM Systems Journal, Volume 38, Number 4,
pp. 693-696, December 1999.
[93] IBM's Advanced PvC Technology Laboratory,
http://www.ibm.com/developerworks/wireless/library/wi-pvc/ (Retrieved: Dec 10,
2009)
[94] Wikipedia contributors, 'Ubiquitous computing', Wikipedia, The Free Encyclopedia,
http://en.wikipedia.org/w/index.php?title=Ubiquitous_computing&oldid=249778535
, (Retrieved: Dec 10, 2009)
[95] Issues and challenges in ubicomp,
http://wiki.daimi.au.dk:8000/pca/_files/ubicomp.ppt (Retrieved: Dec 10, 2009)
[96] Satyanarayanan, M., "Pervasive computing: vision and challenges," Personal
Communications, IEEE [see also IEEE Wireless Communications] , vol.8, no.4,
pp.10-17, Aug 2001.
[97] Mackay, W.E., Velay, G., Carter, K., Ma, C., & Pagani, D., “Augmenting Reality:
Adding Computational Dimensions to Paper”. Communications of the ACM, 36, 7,
96-97, 1993.
[98] Ishii, H. & Ullmer, B., “Tangible bits: Towards seamless interfaces between people,
bits, and atoms”. Proceedings of the CHI’97 Conference on Human Factors in
Computing Systems, 234-241, 1997.
[99] Bass, L., Kasabach, C., Martin, R., Siewiorek, D., Smailagic, A., & Stivoric, J., “The
Design of a Wearable Computer”. Proceedings of the CHI’97 Conference on Human
Factors in Computing Systems, 139-146, 1997.
[100] Streitz, N. A., Konomi, S., Burkhardt, H-J. (Eds.), “Cooperative Buildings:
Integrating information, organization, and architecture”. Berlin: Springer, 1998.
[101] da Costa, Cristiano Andre; Yamin, Adenauer Correa; Geyer, Claudio Fernando
Resin, "Toward a General Software Infrastructure for Ubiquitous Computing,"
Pervasive Computing, IEEE , vol.7, no.1, pp.64-73, Jan.-March 2008.
[102] Chen, G. and Kotz, D. "A Survey of Context-Aware Mobile Computing
Research", Technical Report: TR2000-381 Dartmouth College, Hanover, NH, USA,
2000.
[103] Schmidt, A., Beigl, M., Gellersen, H.W., “There is more to Context than
Location”. In: Proceedings of the International Workshop on Interactive
Applications of Mobile Computing (IMC98), Rostock, Germany, November 1998.
[104] Cynthia A. Patterson, Richard R. Muntz, and Cherri M. Pancake, “Challenges in
Location-Aware Computing”, Published by the IEEE CS and IEEE Computer
Society, pp.80-89, APRIL–JUNE 2003.
[105] Tamminen, S., Oulasvirta, A., Toiskallio, K., Kankainen, A., “Understanding
Mobile Contexts”. In Proceedings of MobileHCI 2003, pp: 17-31, Udine, Italy:
Springer, 2003.
[106] Dourish, P., “What We Talk About When We Talk About Context”. Personal and
Ubiquitous Computing, 8(1). 19-30, 2004.
[107] de Almeida, D.R.; de Souza Baptista, C.; da Silva, E.R.; Campelo, C.E.C.; de
Figueiredo, H.F.; Lacerda, Y.A., "A context-aware system based on service-oriented
architecture," Proceedings of the 20th International Conference on Advanced
Information Networking and Applications (AINA 2006). Volume 01, pp.205-210,
IEEE Computer Society Washington, DC, USA, 18-20 April 2006.
[108] Korkea-aho, M. “Context-Aware Applications Survey”,
http://www.hut.fi/~mkorkeaa/doc/context-aware.html (Retrieved: Dec 10, 2009)
[109] Anagnostopoulos, C. B., Tsounis, A., and Hadjiefthymiades, S., “Context
Awareness in Mobile Computing Environments”. Wireless Personal
Communications, 42, 3 (Aug. 2007), 445-464, 2007.
[110] T. Buchholz, A. Küpper, M. Schiffers, “Quality of context: what it is and why we
need it”, Proceedings of the 10th HP-Open View University Association Workshop,
Geneva, Switzerland, July, 2003.
[111] R. Giaffreda, “WP6 - Context Aware Networks”, 1st WWI Symposium, Brussels
Belgium, December 2004.
[112] Dey, Anind K.; Abowd, Gregory D.; Salber, Daniel., “A Conceptual Framework
and a Toolkit for Supporting the Rapid Prototyping of Context-Aware
Applications.”, Human-Computer Interaction (HCI) Journal, v16, n2-4, p97-166,
October 2001.
[113] Kavi Kumar Khedo, “Context-Aware Systems for Mobile and Ubiquitous
Networks”, Proceedings of the International Conference on Networking,
International Conference on Systems and International Conference on Mobile
Communications and Learning Technologies (ICNICONSMCL’06), IEEE Computer
Society, 2006.
[114] A. Dey, “Providing Architectural Support for Building Context-Aware
Applications”, Ph.D. Thesis Dissertation, College of Computing, Georgia Tech,
December 2000.
[115] Yanlin Zheng and Yoneo Yano, "A framework of context-awareness support for
peer recommendation in the e-learning context", British Journal of Educational
Technology, Vol 38 No 2, pp.197–210, 2007.
[116] Dourish, P., “Seeking a Foundation for Context-Aware Computing”. Human-Computer Interaction, 16(2), 229-241, 2001.
[117] T. Winograd, “Architectures for context”, Human Computer Interaction, 16 (2),
pp. 401–419, 2001.
[118] Plowman L, Rogers Y and Ramage M, “What are workplace studies for?” In:
Proceedings of the European Conference on Computer-Supported Cooperative Work
(ECSCW '95), Stockholm, Sweden, Kluwer, Dordrecht, The Netherlands, 1995.
[119] Paul Luff, Jon Hindmarsh, Christian Heath (eds), “Workplace studies: recovering
work practice and informing system design”. ISBN 0521598214, Cambridge
University Press, Cambridge, UK, 2000.
[120] Etienne Wenger, “Communities of practice: learning, meaning, and identity”.
ISBN 0521663636, Cambridge University Press, Cambridge, UK, 1999.
[121] Tolmie P, Pycock J, Diggins T, MacLean A and Karsenty A, “Unremarkable
computing”. In: Proceedings of the ACM Conference on Human Factors in
Computing Systems (CHI 2002), Minneapolis, MN, ACM Press, New York, 2002.
[122] A.K. Dey. “Understanding and using context”. Personal and Ubiquitous
Computing, Vol. 5, pp.4-7, 2001.
[123] B. Schilit and M. Theimer. “Disseminating active map information to mobile
hosts”. IEEE Network, 8(5):22–32, July 1994.
[124] N. Ryan, J. Pascoe and D. Morse. “Enhanced reality fieldwork: the context-aware
archaeological assistant”. Computer Applications and Quantitative Methods in
Archaeology. V. Gaffney, M. van Leusen and S. Exxon, Editors. British
Archaeological Reports, Oxford (UK), pp. 34-45, 1998.
[125] Mørch A, Mehandjiev N, “Tailoring as collaboration: the mediating role of
multiple representations and application units”. Comp Supp Coop Work 9(1):75–
100, 2000.
[126] Dashofy E, van der Hoek A and Taylor R, “A highly-extensible, XML-based
architecture description language”. In: Proceedings of the Working IEEE/IFIP
Conference on Software Architecture (WICSA 2001), Amsterdam, The Netherlands,
28–31, August 2001.
[127] Marshall C, Shipman F, “Searching for the missing link: discovering implicit
structure in spatial hypertext”. In: Proceedings of the ACM Hypertext 93
Conference, Seattle, WA, 14–18 November 1993.
[128] Gregory D. Abowd, Elizabeth D. Mynatt, and Tom Rodden, "The Human
Experience", IEEE PERVASIVE computing, pp. 48-57, January–March 2002.
[129] Brooke, T. and Burrell, J., "From ethnography to design in a vineyard". In
Proceedings of the 2003 Conference on Designing For User Experiences (San
Francisco, California). DUX '03. ACM, New York, NY, 1-4, June 06 - 07, 2003.
[130] P. Wyeth, D. Austin, and H. Szeto, “Designing Ambient Computing for Use in the
Mobile Health Care Domain,” Proceedings of the CHI2001 Workshop on
Distributed and Disappearing UIs in Ubiquitous Computing, Seattle, WA, 2001,
http://www.teco.edu/chi2001ws/17_wyeth.pdf
[131] Romi Satria Wahono., “Analyzing requirements engineering problems”. In Proc of
IECI Japan Workshop, ISSN 1344-7491, 55-58, 2003.
[132] Blomberg, J., Burrell, M., and Guest, G., “An ethnographic approach to design”.
In the Human-Computer interaction Handbook: Fundamentals, Evolving
Technologies and Emerging Applications, J. A. Jacko and A. Sears, Eds. Human
Factors And Ergonomics. L. Erlbaum Associates, Hillsdale, NJ, 964-986, 2003.
[133] Ray Offen., “Domain Understanding is the Key to Successful System
Development”, In Requirements Eng 7, Springer-Verlag London Limited, 172–175,
2002.
[134] Sommerville, I., T. Rodden, et al., “Sociologists can be surprisingly useful in
interactive systems design”. In Proc of the Conference on Human Factors in
Computing Systems (CHI'92), Monterey, CA, ACM , 1992.
[135] Button, G., “The Ethnographic tradition and design”. Design Studies 21 (4): 319-332, 2000.
[136] Hughes, J., King, V., Rodden, T., & Andersen, H., “Moving Out from the Control
Room: Ethnography in System Design”, In Proc. ACM Conference on Computer-Supported Cooperative Work, Chapel Hill, North Carolina, ACM Press, 429-439,
1994.
[137] Ackerman, M., “The Intellectual Challenge of CSCW: The gap between social
requirements and technical feasibility”. In Human-Computer Interaction, Vol. 15,
No. 2&3, 179-203, 2000.
[138] Greenbaum, J. and M. Kyng, Eds., “Design at Work: cooperative design of
computer systems”. Hillsdale, NJ, Lawrence Erlbaum Associates, 1991.
[139] Blomberg, J. L., “Ethnography: aligning field studies of work and system design”.
In Monk, A. F. & Gilbert, G. N. (eds), Perspectives on HCI: Diverse Approaches,
Academic Press: London, 1995.
[140] Hanna Strömberg, Valtteri Pirttilä, Veikko Ikonen., “Interactive scenarios—
building ubiquitous computing concepts in the spirit of participatory design”.
Springer-Verlag London Limited, 2004.
[141] Kaye, J. and Goulding, L., “Intimate objects”. In Proceedings of the 5th
Conference on Designing interactive Systems: Processes, Practices, Methods, and
Techniques (Cambridge, MA, USA). DIS '04. ACM, New York, NY, 341-344,
August 01 - 04, 2004.
[142] Volume I: Guidelines, "In-Vehicle Display Icons and Other Information
Elements", U.S. Department of Transportation, PUBLICATION NO. FHWA-RD03-065 SEPTEMBER 2004, available online at
http://www.tfhrc.gov/safety/pubs/03065/03065.pdf
[143] Yukinobu Nakamura - HMI Experts Group, JAMA, (Honda R&D Co., Ltd.),
"JAMA Guideline for In-Vehicle Display Systems", Document Number: 2008-21-0003, October 2008, available online at http://www.sae.org/technical/papers/2008-21-0003
[144] Dan Saffer, “Designing for Interaction: Creating Smart Applications and Clever
Devices (VOICES)”, ISBN:0321432061, Peachpit Press, Berkeley, CA, 2006.
[145] Dourish, P.,“Where the Action Is: The Foundations of Embodied Interaction”,
MIT Press, ISBN 0262541785, 9780262541787, 2004.
[146] Weiser, M. & J.S. Brown, “The Coming Age of Calm Technology”, in: D.
Metcalfe (Ed) Beyond Calculation (New York, Springer-Verlag), 1997.
[147] Mynatt, E. D., Back, M., Want, R., Baer, M. and Ellis, J. (1998). “Designing
Audio Aura,” in the Proc. of the 1998 ACM Conference on Human Factors in
Computing Systems (CHI’98)., Los Angeles, CA., 566- 573, 1998.
[148] Ishii et al. “Ambient ROOM: Integrating ambient media with architectural space”.
In Proc. Of CHI’98 Conference Companion. ACM Press, 173-174, 1998.
[149] Weiser, M. & Brown, J.S. “Designing calm technology”, PowerGrid Journal,
1(01). http://powergrid.electriciti.com/1.01 (July 1996).
[150] S. Card, T. Moran, and A. Newell, “The Psychology of Human-Computer
Interaction”, Lawrence Erlbaum, Mahwah, N.J., 1983.
[151] L. Vygotsky, “The Instrumental Method in Psychology,” The Concept of Activity
in Soviet Psychology, J. Wertsch, ed., Sharpe, Armonk, New York, 1981.
[152] B. Nardi, ed., “Context and Consciousness: Activity Theory and Human-Computer Interaction”, MIT Press, Cambridge, Mass., 1996.
[153] Nardi, B. A., “Studying context: a comparison of activity theory, situated action
models, and distributed cognition”. In Context and Consciousness: Activity theory
and Human-Computer interaction, B. A. Nardi, Ed. Massachusetts Institute of
Technology, Cambridge, MA, 69-102, 1995.
[154] Lucy A. Suchman, “Plans and situated actions: the problem of human-machine
communication”, Cambridge University Press, New York, NY, 1987.
[155] Cole, M. and Engeström, Y., 1993. “A cultural-historical approach to distributed
cognition”. In: Salomon, G., Editor,. Distributed cognitions. Psychological and
educational considerations, Cambridge University Press, Cambridge, MA, pp. 1–47,
1993.
[156] E. Hutchins, “Cognition in the Wild”, ISBN: 978-0262581462, MIT Press,
Cambridge, Mass., 1995.
[157] James Hollan, Edwin Hutchins, David Kirsh, “Distributed cognition: toward a
new foundation for human-computer interaction research”, ACM Transactions on
Computer-Human Interaction (TOCHI), v.7 n.2, p.174-196, June 2000.
[158] Seiie Jang, Eun-Jung Ko, and Woontack Woo, “Unified User-Centric Context:
Who, Where, When, What, How and Why”. UbiPCMM: Personalized Context
Modelling and Management for Ubicomp Applications, pp:26-34, 2005.
[159] Stringer, M., Fitzpatrick, G., Halloran, J., & Hornecker, E., “Moving Beyond the
Application: Design Challenges For Ubiquitous Computing”. Position paper
presented at Aarhus 2005 workshop 'Ambient Computing in a Critical, Quality of
Life Perspective', Aarhus, Denmark, 21st August, 2005.
[160] Dourish, P., “Implications for design”. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27,
2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds.
CHI '06. ACM, New York, NY, pp. 541-550, 2006.
[161] Dourish, P., “Responsibilities and implications: further thoughts on ethnography
and design”. In Proceedings of the 2007 Conference on Designing For User
Experiences (Chicago, Illinois, November 05 - 07, 2007). DUX '07. ACM, New
York, NY, pp. 2-16, 2007.
[162] Creswell, J.W., “Research design: Qualitative, quantitative and mixed methods
approaches”, 2nd Edition, Sage Publications, 2002.
[163] S. M. Easterbrook, J. Singer, M.-A. Storey, and D. Damian, “Selecting empirical
methods for software engineering research”. In F. Shull, J. Singer, and D. I. K.
Sjøberg, editors, Guide to Advanced Empirical Software Engineering, pages 285-311. Springer, 2007.
[164] Parkes, A.M.; Ashby, M.C.; Fairclough, S.H., "The effects of different in-vehicle
route information displays on driver behaviour," Vehicle Navigation and
Information Systems Conference, 1991 , vol.2, no., pp. 61-70, 20-23 Oct. 1991
[165] Road accidents (Japan), http://www.youtube.com/watch?v=boDRxgJQAcY,
(Retrieved: April 27, 2009).
[166] Karachi Accidents, http://www.youtube.com/watch?v=RF5JscrsQ4Q, (Retrieved:
April 27, 2009)
[167] Oxford English Dictionary, http://www.askoxford.com, Oxford University Press,
2009
[168] Malinowski, B., “Argonauts of the Western Pacific”. London: Routledge, 1967.
[169] J. Scott Kenney and Don Clairmont, “Using the Victim Role as Both Sword and
Shield: The Interactional Dynamics of Restorative Justice Sessions”, Journal of
Contemporary Ethnography, 38: 279-307, 2009.
[170] Mitchell B. Mackinem and Paul Higgins. “Tell Me about the Test: The
Construction of Truth and Lies in Drug Court”, Journal of Contemporary
Ethnography; 36: Page 223-251. 2007.
[171] Wing Chung Ho and Petrus Ng, “Public Amnesia and Multiple Modernities in
Shanghai: Narrating the Postsocialist Future in a Former Socialist "Model
Community" “, Journal of Contemporary Ethnography, 37: 383-416, 2008.
[172] Angela Cora Garcia, Alecea I. Standlee, Jennifer Bechkoff, and Yan Cui,
“Ethnographic Approaches to the Internet and Computer-Mediated
Communication”, Journal of Contemporary Ethnography 2009 38: 52-84, 2009.
[173] Marcus, G., “Ethnography in/of the World System: The Emergence of Multi-Sited
Ethnography”. Annual Review of Anthropology, 24, 95-117, 1995.
[174] Abu-Lughod, A., “Veiled Sentiments: Honor and Poetry in a Bedouin Society”.
Berkeley, CA: University of California, 2000.
[175] Lutz, C., “Emotion, Thought, and Estrangement: Emotion as a Cultural Category”.
Cultural Anthropology, 1(3), 287-309, 1986.
[176] Lutz, C., “Unnatural Emotions: Everyday Sentiments on a Micronesian Atoll and
their Challenge to Western Theory”. Chicago: University of Chicago Press, 1988.
[177] Malkki, L., “National Geographic: The Rootedness of Peoples and the
Territorialization of National Identity Amongst Scholars and Refugees”. Cultural
Anthropology, 7(1), 24-44, 1992.
[178] Malkki, L., “Purity and Exile: Memory and National Cosmology amongst Hutu
Refugees in Tanzania”. Chicago, 1995.
[179] Munn, N., “Excluded Spaces: The Figure in the Australian Aboriginal
Landscape”. Critical Inquiry, 22(3), 446-465, 1996.
[180] Beyer, H. and Holtzblatt, K., “Contextual Design: Defining Customer-Centered
Systems”. Morgan Kaufman, 1997.
[181] Gaver, W., Dunne, T., and Pacenti, E., “Cultural Probes”. Interactions, 6(1), 21-29, 1999.
[182] Hutchinson, H., Hansen, H., Roussel, N., Eiderbäck, B., Mackay, W., Westerlund,
B., Bederson, B., Druin, A., Plaisant, C., Beaudouin-Lafon, M., Conversy, S., and
Evans, H., “Technology Probes: Inspiring Design For and With Families”. Proc.
ACM Conf. Human Factors in Computing Systems CHI 2003 (Ft Lauderdale, FL),
17-24. New York: ACM, 2003.
[183] Millen, D. R., “Rapid ethnography: time deepening strategies for HCI field
research”. In Proceedings of the 3rd Conference on Designing interactive Systems:
Processes, Practices, Methods, and Techniques (New York City, New York, United
States, August 17 - 19, 2000). D. Boyarski and W. A. Kellogg, Eds. DIS '00. ACM,
New York, NY, 280-286, 2000.
[184] vorad - vehicle on-board radar,
http://www.minecom.com/pdf/solutions/tracking/MC_VORADFlyer_240108.pdf,
(Retrieved: June 12, 2009).
[185] Moodley, S & Allopi, D, “An analytical study of vehicle defects and their
contribution to road accidents”, Paper presented to the 27th Annual Southern
African Transport Conference, South Africa, 7 - 11 July 2008.
[186] Young, K. & Regan, M., “Driver distraction: A review of the literature”. In: I.J.
Faulks, M. Regan, M. Stevenson, J. Brown, A. Porter & J.D. Irwin (Eds.). Distracted
driving. Sydney, NSW: Australasian College of Road Safety. Pages 379-405, 2007.
[187] Dolphin Sonar Hitchscan Step, http://www.truckspecialties.com/dolphin_step.htm
(Retrieved: August 31, 2009).
[188] A. E. Rabbany, “Introduction to GPS the Global Positioning System”, Artech
House Publisher, ISBN 1-58053-183-0, 2002.
[189] Porcino, D., “Location of third generation mobile devices: a comparison between
terrestrial and satellite positioning systems”, Vehicular Technology Conference, 2001,
VTC 2001 Spring. IEEE VTS 53rd, Vol 4, pp.2970-2974, 2001.
[190] Fujitsu Laboratories Develops Video-Processing Technology Enabling World's
First Wraparound View of Vehicles in Real Time,
http://www.fujitsu.com/global/news/pr/archives/month/2008/20081117-01.html
(Retrieved: August 31, 2009).
[191] EyeQ2™ by Mobileye, http://www.mobileye.com/manufacturerproducts/processing-platforms/EyeQ2 (Retrieved: August 31, 2009).
[192] JISC Digital Media, http://www.jiscdigitalmedia.ac.uk/stillimages/advice/digitalcameras/ (Retrieved: September 04, 2009).
[193] SICK – Sensor Intelligence, http://www.sick.com/ (Retrieved: September 04,
2009).
[194] Fawzi Nashashibi, Ayoub Khammari, Claude Laurgeau, "Vehicle recognition and
tracking using a generic multisensor and multialgorithm fusion approach",
International Journal of Vehicle Autonomous Systems, Volume 6, Number 1-2, pp.
134 - 154, 2008.
[195] Hughes, C; Glavin, M; Jones, E; Denny, P. "Wide-angle camera technology for
automotive applications: a review". IET Intelligent Transportation Systems 3(1),
Page(s): 19-31, March 2009.
[196] S. Nedevschi, R. Danescu, T. Marita, F. Oniga, C. Pocol, S. Sobol, T. Graf, R.
Schmidt, “Driving Environment Perception Using Stereovision”, Proceedings of
IEEE Intelligent Vehicles Symposium, (IV2005), Las Vegas, USA, pp.331-336,
June 2005.
[197] M. Bertozzi, A. Broggi, P Medici, P.P. Porta, A. Sjogren, “Stereo Vision-Based
Start-Inhibit for Heavy Goods Vehicles”, IEEE Intelligent Vehicles Symposium
2006, Tokyo, Japan, pp. 350-355, June 13-15, 2006.
[198] J.I. Woodfill, G. Gorden, R. Buck, “Tyzx DeepSea High Speed Stereo Vision
System”, IEEE Conference on Computer Vision and Pattern Recognition,
Washington, D.C., pp. 41-45, 2004.
[199] Alberto Broggi et al., “Automatic Vehicle Guidance: The experience of the
ARGO Autonomous Vehicle”, World Scientific, Singapore, ISBN: 981-02-3720-0,
1999.
[200] T. Williamson and C. Thorpe, “Detection of small obstacles at long range using
multibaseline stereo”, In Proceedings of the Int. Conf. on Intelligent Vehicles, pages
230-235. 1998.
[201] A. Takahashi, Y. Ninomiya, M. Ohta, M. Nishida, and M. Takayama. “Rear view
lane detection by wide angle camera” in Proc. IEEE Intell. Vehicle Symp., Vol. 1,
pp. 148-153, Jun. 2002.
[202] Stein, G.P.; Mano, O.; Shashua, A., "Vision-based ACC with a single camera:
bounds on range and range rate accuracy," Intelligent Vehicles Symposium, 2003.
Proceedings. IEEE , vol., no., pp. 120-125, 9-11 June 2003.
[203] Rafael C. González, and Richard Eugene Woods, “Digital image processing”, 3rd
edition, Prentice-Hall, ISBN 978-0-13-168728-8, 2008.
[204] C. Hoffmann, T. Dang, C. Stiller, “Vehicle Detection Fusing 2D Visual Features,”
IEEE Intelligent Vehicles Symposium, pp. 280-285, 2004
[205] Dickmanns, E., et al.: The seeing passenger car ‘Vamors-P’. In: Proc. Intelligent
Vehicle Symp., pp. 24–26, 1994.
[206] C. Tzomakas, W. von Seelen, "Vehicle detection in traffic scenes using shadows",
Internal Report IRINI 98-06, Institut für Neuroinformatik, Ruhr-Universität
Bochum, D-44780 Bochum, Germany, August 1998.
[207] Krips, M., J. Valten, and A. Kummert, "AdTM tracking for blind spot collision
avoidance," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, pp.544-548,
Jun.14-17, 2004.
[208] Bertozzi, M., Broggi, A., Castelluccio, S., “A real-time oriented system for vehicle
detection”, J. Syst. Architect. 43, (1–5), 317–325, 1997.
[209] Kalinke, T., Tzomakas, C., Seelen, W.V., “A texture-based object detection and
an adaptive model-based classification”, In: Proc. IEEE Int. Conf. on Intelligent
Vehicles, pp. 143–148, 1998.
[210] Crisman, J., Thorpe, C., “Color vision for road following”, In: Proc. SPIE Conf.
on Mobile Robots, pp. 246–249, 1988.
[211] Cucchiara, R., Piccardi, M., “Vehicle detection under day and night illumination”,
In: Proc. Int. ICSC Symp. Intelligent Industrial Automation, 1999.
[212] Alt, N., Claus, C., and Stechele, W. “Hardware/software architecture of an
algorithm for vision-based real-time vehicle detection in dark environments”, In
Proceedings of the Conference on Design, Automation and Test in Europe (Munich,
Germany, March 10 - 14, 2008). DATE '08. ACM, New York, NY, 176-181, 2008.
[213] P.F. Alcantarilla, L.M. Bergasa, P. Jiménez, M.A. Sotelo, I. Parra, D. Fernández,
“Night Time Vehicle Detection for Driving Assistance LightBeam Controller”, 2008
IEEE Intelligent Vehicles Symposium, Eindhoven University of Technology,
Eindhoven, The Netherlands, June 4-6, 2008.
[214] Bertozzi, M., Broggi, A., Fascioli, A., “Vision-based intelligent vehicles: state of
the art and perspective”, Robot. Auton. Syst. 32, 1–6, 2000.
[215] A. Broggi, P. Cerri, and P. C. Antonello, "Multi-resolution vehicle detection using
artificial vision," in Intelligent Vehicles Symposium, 2004 IEEE, pp. 310-314, 2004.
[216] Armingol, J. M., A. de la Escalera, C. Hilario, J. M. Collado, J. P. Carrasco, M. J.
Flores, J. M. Pastor, and J. Rodríguez, “IVVI: Intelligent vehicle based on visual
information,” Robotics and Autonomous Systems, vol.55, issue 12, pp.904-916,
Dec. 2007.
[217] Collado, J.M., C. Hilario, A. de la Escalera, and J.M. Armingol, “Model based
vehicle detection for intelligent vehicles,” in Proc. IEEE Intelligent Vehicles Symp.,
Parma, Italy, pp.572-577, Jun.14-17, 2004.
[218] Kate, T. K., M. B. van Leewen, S.E. Moro-Ellenberger, B. J. F. Dressen, A. H. G.
Versluis, and F. C. A. Groen, “Mid-range and distant vehicle detection with a mobile
camera,” in Proc. IEEE Conf. on Intelligent Vehicles, Parma, Italy, pp.72-77, Jun.
14-17, 2004.
[219] W. Liu, X. Wen, B. Duan, H. Yuan, and N. Wang, "Rear Vehicle Detection and
Tracking for Lane Change Assist," in Intelligent Vehicles Symposium, 2007 IEEE,
pp. 252-257, 2007.
[220] W. Liu, C. Song, P. Fu, N. Wang, and H. Yuan, "A Rear Vehicle Location
Algorithm for Lane Change Assist," in IAPR Conference on Machine Vision
Applications, Tokyo, Japan, pp. 82-85, May , 2007.
[221] C. Hoffmann, "Fusing multiple 2D visual features for vehicle detection," in
Intelligent Vehicles Symposium, 2006 IEEE, pp. 406-411, 2006.
[222] Franke, U., Kutzbach, I., “Fast stereo based object detection for stop and go
traffic”, In: Proc. Intelligent Vehicles Symp., pp. 339–344, 1996.
[223] Qian Yu, Helder Araujo, and Hong Wang, “Stereo-Vision Based Real time
Obstacle Detection for Urban Environments”, Proceedings of ICAR 2003, The 11th
International Conference on Advanced Robotics, Coimbra, Portugal, June 30 - July
3, 2003.
[224] Bertozzi, M., Broggi, A., “Vision-based vehicle guidance”, Computer, 30, (7), 49–
55, 1997.
[225] M. Bertozzi, A. Broggi, and A. Fascioli, “Stereo inverse perspective mapping:
Theory and applications,” Image and Vision Computing, vol. 8, no. 16, pp. 585–590,
1998.
[226] Giachetti, A., Campani, M., Torre, V., “The use of optical flow for road
navigation”, IEEE Trans. Robotics and Automation. 14, (1), 34–48, 1998.
[227] Martinez, E.; Diaz, M.; Melenchon, J.; Montero, J.A.; Iriondo, I.; Socoro, J.C.,
"Driving assistance system based on the detection of head-on collisions," Intelligent
Vehicles Symposium, 2008 IEEE , vol., no., pp.913-918, 4-6 June 2008.
[228] Inoue, O.; Seonju Ahn; Ozawa, S., "Following vehicle detection using multiple
cameras," Vehicular Electronics and Safety, 2008. ICVES 2008. IEEE International
Conference on , vol., no., pp.79-83, 22-24 Sept. 2008.
[229] H. Y. Chang, C. M. Fu, and C. L. Huang, "Real-time vision-based preceding
vehicle tracking and recognition," in Intelligent Vehicles Symposium, 2005.
Proceedings. IEEE, pp. 514-519, 2005.
[230] Yanpeng Cao; Renfrew, Alasdair; Cook, Peter, "Vehicle motion analysis based
on a monocular vision system," Road Transport Information and Control - RTIC
2008 and ITS United Kingdom Members' Conference, IET , vol., no., pp.1-6, 20-22
May 2008.
[231] Liu, J., Su, Y., Ko, M., and Yu, P., “Development of a Vision-Based Driver
Assistance System with Lane Departure Warning and Forward Collision Warning
Functions”, In Proceedings of the 2008 Digital Image Computing: Techniques and
Applications (December 01 - 03, 2008). DICTA. IEEE Computer Society,
Washington, DC, 480-485, 2008.
[232] Torralba, A., “Contextual priming for object detection”, Int. J. of Comput. Vision.
53, (2), 169–191, 2003.
[233] Zhu, Y., Comaniciu, D., Ramesh, V., et al., “An integrated framework of vision-based vehicle detection with knowledge fusion”, In: Proc. Intelligent Vehicle Symp.,
pp. 199–204, 2005.
[234] M. Betke, E. Haritaglu and L. Davis, “Multiple Vehicle Detection and Tracking in
Hard Real Time,” IEEE Intelligent Vehicles Symposium, pp. 351–356, 1996.
[235] Betke, M., E. Haritaoglu, and L. S. Davis, “Real-time multiple vehicle detection
and tracking from a moving vehicle,” Machine Vision and Applications, vol.12,
pp.69-83, Sep. 2000.
[236] Matthews, N., An, P., Charnley, D., Harris, C., “Vehicle detection and recognition
in greyscale imagery”, Control Eng. Practices. 4, 473–479, 1996.
[237] Goerick, C., Detlev, N., Werner, M., “Artificial neural networks in real-time car
detection and tracking applications”, Pattern Recogn. Lett. 17, 335–342, 1996.
[238] Sun, Z., Bebis, G., Miller, R., “On-road vehicle detection using gabor filters and
support vector machines”, In: Proc. IEEE Int. Conf. on Digital Signal Processing,
pp. 1019–1022, 2002.
[239] Lowe, D., “Object recognition from local scale-invariant features”, In: Proc. Int.
Conf. on Computer Vision, vol. 2, pp. 1150–1157, 1999.
[240] Papageorgiou, C., Poggio, T., “Trainable system for object detection”, Int. J.
Comput. Vision. 38, (1), 15–33, 2000.
[241] Schneiderman, H., Kanade, T.: “A statistical method for 3D object detection
applied to faces and cars”. In: Proc. Int. Conf. on Computer Vision and Pattern
Recognition, pp. 746–751, 2000.
[242] Ayoub, K., Fawzi, N., Yotam, A., Claude, L., “Vehicle detection combining
gradient analysis and adaboost classification”, In: Proc. Intelligent Transportation
System Conf., 2005.
[243] Philip Geismann and Georg Schneider, “A Two-staged Approach to Vision-based
Pedestrian Recognition Using Haar and HOG Features”, 2008 IEEE Intelligent
Vehicles Symposium, Eindhoven University of Technology, Eindhoven, The
Netherlands, June 4-6, 2008
[244] Xian-Bin Cao; Hong Qiao; Keane, J., "A Low-Cost Pedestrian-Detection System
With a Single Optical Camera," Intelligent Transportation Systems, IEEE
Transactions on , vol.9, no.1, pp.58-67, March 2008.
[245] Blanc, N., B. Steux, and T. Hinz, "LaRASideCam - a fast and robust vision-based
blindspot detection system," in Proc. IEEE Intelligent Vehicles Symp., Istanbul,
Turkey, Jun.13-15, 2007, pp.480-485, 2007.
[246] Lai, C.C., Tsai, W.H., “Location estimation and trajectory prediction of moving
lateral vehicle using two wheel shapes information in 2-D lateral vehicle images by
3-D computer vision techniques”, In: Proc. Int. Conf. on Robotics and Automation,
pp. 881–886, 2003.
[247] Achler, O., Trivedi, M.M., “Vehicle wheel detector using 2D filter banks”, In:
Proc. Intelligent Vehicle Symp., pp. 25–30, 2004.
[248] Achler, O., Trivedi, M.M., “Camera based vehicle detection, tracking, and wheel
baseline estimation approach”, In: Proc. Intelligent Transportation Systems Conf.,
pp. 743–748, 2004.
[249] J. Fritsch, T. Michalke, A. Gepperth, S. Bone, F. Waibel, M. Kleinehagenbrock, J.
Gayko, C. Goerick, "Towards a Human-like Vision System for Driver Assistance",
2008 IEEE Intelligent Vehicles Symposium, Eindhoven University of Technology,
Eindhoven, The Netherlands, June 4-6, 2008.
[250] H. Wersing and E. Körner, “Learning optimized features for hierarchical models
of invariant object recognition,” Neural Computation, vol. 15, no. 2, pp. 1559–1588,
2003.
[251] Mota, S., E. Ros, E. M. Ortigosa, and F. J. Pelayo, “Bio-inspired motion detection
for blind spot overtaking monitor,” International Journal of Robotics and
Automation, vol.19, no.4, pp.190-196, 2004.
[252] Wang, Y., E. K. Teoh, and D. Shen, “Lane detection and tracking using B-Snake,”
Image and Vision Computing, vol.22, issue 4, pp.269-280, Apr. 2004.
[253] Wu, B.-F., W.-H. Chen, C.-W. Chang, C.-J. Chen, and M.-W. Chung, “A new
vehicle detection with distance estimation for lane change warning systems,” in
Proc. IEEE Intelligent Vehicles Symp., Istanbul, Turkey, pp.698-703, Jun.13-15,
2007.
[254] G Piccioli, E De Micheli, P Parodi, and M Campani, “Robust method for road sign
detection and recognition”, Image and Vision Computing, 14(3):209–223, 1996.
[255] L Fletcher, G Loy, N Barnes, and A Zelinsky, “Correlating driver gaze with the
road scene for driver assistance”, Robotics and Autonomous Systems, 52(1):71–84,
2005.
[256] Jianwei Gong; Anshuai Wang; Yong Zhai; Guangming Xiong; Peiyun Zhou;
Huiyan Chen, "High Speed Lane Recognition under Complex Road Conditions,"
Intelligent Vehicles Symposium, 2008 IEEE , vol., no., pp.566-570, 4-6 June 2008.
[257] S H Hsu and C L Huang. “Road sign detection and recognition using matching
pursuit method”. Image and Vision Computing, 19:119–129, 2001.
[258] Woong-Jae Won, Minho Lee, Joon-Woo Son, “Implementation of Road Traffic
Signs Detection Based on Saliency Map Model”, 2008 IEEE Intelligent Vehicles
Symposium, Eindhoven University of Technology, Eindhoven, The Netherlands,
June 4-6, 2008.
[259] Christian Nunn, Anton Kummert, and Stefan Müller-Schneiders, “A Novel
Region of Interest Selection Approach for Traffic Sign Recognition Based on 3D
Modelling”, 2008 IEEE Intelligent Vehicles Symposium, Eindhoven University of
Technology, Eindhoven, The Netherlands, June 4-6, 2008.
[260] Thi Thi Zin, Sung Shik Koh and Hiromitsu Hama, “Robust Signboard
Recognition for Vision-based Navigation”, The Journal of The Institute of Image
Information and Television Engineers, Issue: 61(8), pp. 1192-1200, 2007.
[261] Turin, J.; Turan, J.; Ovsenik, L.; Fifik, M., "Architecture of invariant transform
based traffic sign recognition system," Radioelektronika, 2008 18th International
Conference , vol., no., pp.1-4, 24-25 April 2008.
[262] A. Broggi and S. Berte, “Vision-based road detection in automotive systems: a
real-time expectation-driven approach,” Journal of Artificial Intelligence Research,
vol. 3, pp. 325-348, 1995.
[263] Wei Liu, Hongliang Zhang, Bobo Duan, Huai Yuan and Hong Zhao, “Vision-Based Real-Time Lane Marking Detection and Tracking”, Proceedings of the 11th
International IEEE, Conference on Intelligent Transportation Systems, Beijing,
China, October 12-15, 2008.
[264] Y. He, H. Wang, and B. Zhang, “Color-Based Road Detection in Urban Traffic
Scenes,” IEEE Transactions on ITS, vol. 5, pp. 309-318, 2004.
[265] C. Rasmussen, “Grouping Dominant Orientations for Ill-Structured Road
Following,” IEEE Conference on CVPR, vol. 1, pp. 470-447, 2004.
[266] P.-Y. Jeong and S. Nedevschi, “Efficient and robust classification method using
combined feature vector for lane detection,” IEEE Transactions on CSVT, vol. 15,
no. 4, pp. 528-537, April 2005.
[267] J. C. McCall and M. M. Trivedi, “Video-based Lane Estimation and Tracking for
Driver Assistance: Survey, System, and Evaluation,” IEEE Transactions on ITS, vol.
7, no. 1, pp.20-37, March 2006.
[268] Chen, M., Jochem, T., Pomerleau, D.: “AURORA: A vision-based roadway
departure warning system”. In: Proc. Int. Conf. on Intelligent Robots and Systems,
pp. 243–248, 1995.
[269] Dickmanns, E.D., Mysliwetz, B.D., “Recursive 3-D analysis and machine
intelligence”, IEEE Trans. Pattern Anal. Machine Intell. 14, (2), 199–213, 1992.
[270] Jochem, T.M., Pomerleau, D.A., Thorpe, C.E., “MANIAC: A next generation
neurally based autonomous road follower”, In: Proc. 3rd Int. Conf. on Intelligent
Autonomous Systems , 1993.
[271] Pomerleau, D.A., “Ralph: Rapidly adaptive lateral position handler”, In: Proc.
Intelligent Vehicle Symp., 1995.
[272] LeBlanc, D.J., et al., “CAPC: a road-departure prevention system”, IEEE Control
Syst. Mag. 16, (6), 61–71, 1996.
[273] M. Bertozzi, A. Broggi, M. Cellario, A. Fascioli, P. Lombardi, and M. Porta,
“Artificial Vision in Road Vehicles”, in Proc. IEEE, vol. 90, July 2002.
[274] Yong Zhou, Rong Xu, Xiaofeng Hu and Qingtai Ye, “A robust lane detection and
tracking method based on computer vision,” 2006 IOP Publishing Ltd, vol. 7, pp.
62-81, February 2006.
[275] Bertozzi, M., Broggi, A., “GOLD: A parallel real-time stereo vision system for
generic obstacle and lane detection”, IEEE Trans. Image Process. 7, (1), 62–81,
1998.
[276] S.-J. Tsai and T.-Y. Sun, “The Robust and Fast Approach for Vision-based
Shadowy Road Boundary Detection,” IEEE Transaction on ITS, pp.486-491, 2005.
[277] Wang, Y., Teoh, E.K., Shen, D., “Lane detection using B-snake”, In: Proc. IEEE
Int. Conf. on Intelligence, Information, and Systems, 1999.
[278] Wang, J., G. Bebis, and R. Miller, “Overtaking vehicle detection using dynamic
and quasi-static background modeling,” in Proc. IEEE Conf. On Computer Vision
and Pattern Recognition, San Diego, CA, pp.64-71, Jun.20-26, 2005.
[279] Gonzalez, J.P., Ozguner, U., “Lane detection using histogram-based segmentation
and decision trees”, In:Proc. Intelligent Transportation System Conf., 2000.
[280] Jung, C. R. and C. R. Kelber, “A lane departure warning system using lateral
offset with uncalibrated camera,” in Proc. IEEE Conf. On Intelligent Transportation
Systems, Vienna, Austria, pp.102-107, Sep.13-16, 2005.
[281] King Hann LIM, Li-Minn ANG, Kah Phooi SENG and Siew Wen CHIN, “Lane-Vehicle Detection and Tracking”, Proceedings of the International MultiConference
of Engineers and Computer Scientists 2009 Vol II IMECS 2009, Hong Kong, March
18 - 20, 2009.
[282] Wang, C.-C., “Driver Assistance System for Lane Departure Prevention and
Collision Avoidance with Night Vision”, Master thesis, Computer Science and
Information Engineering Dept., National Taiwan Univ., Taipei, Taiwan, 2004.
[283] Wang, C.-C., C.-J. Chen, Y.-M. Chan, L.-C. Fu, and P.-Y. Hsiao, "Lane detection
and vehicle recognition for driver assistance system at daytime and nighttime,"
Image and Recognition Magazine, vol.12, no.2, pp.4-17, 2006.
[284] A. Broggi, M. Bertozzi, A. Fascioli, C. Guarino Lo Bianco, and A. Piazzi, “Visual
Perception of Obstacles and Vehicles for Platooning,” IEEE Trans. on Intelligent
Transportation systems, vol. 1, pp. 164-176, Sept. 2000.
[285] Suzuki A., Yasui N., Kaneko M., “Lane Recognition System for Guiding of
Autonomous Vehicle”, Intelligent Vehicle '92, pp. 196-201, Sept. 2000.
[286] Chung-Yen Su and Gen-Hau Fan, "An Effective and Fast Lane Detection
Algorithm", Lecture Notes in Computer Science, Springer Berlin / Heidelberg, pp.
942-948, Volume 5359/2008, 2008.
[287] Ren, Feixiang; Huang, Jinsheng; Jiang, Ruyi; Klette, Reinhard, "Lane Detection
on the iPhone", Multimedia Imaging Report 30, 2008,
http://www.mi.auckland.ac.nz/tech-reports/MItech-TR-43.pdf (Retrieved: Dec 10,
2009).
[288] Masaaki Shibata, Tomohiko Makino, Masahide Ito, “Target Distance
Measurement based on Camera Moving Direction Estimated with Optical Flow”,
10th IEEE International Workshop on Advanced Motion Control, 2008. AMC '08,
Trento, pp. 62-67, 26-28 March 2008.
[289] Dagan, E., Mano, O., Stein, G.P., Shashua, A., “Forward collision warning with a
single camera”, In: Proc. Intelligent Vehicles Symp., pp. 37–42, 2004.
[290] Akira Goto and Hiroshi Fujimoto, “Proposal of 6 DOF Visual Servoing for
Moving Object Based on Real-Time Distance Identification”, SICE Annual
Conference 2008, The University Electro-Communications, Japan, August 20-22,
2008.
[291] Lamprecht, B.; Rass, S.; Fuchs, S.; Kyamakya, K., "Fusion of an uncalibrated
camera with velocity information for distance measurement from a moving camera
on highways," Positioning, Navigation and Communication, 2008. WPNC 2008. 5th
Workshop on , vol., no., pp.165-171, 27-27 March 2008.
[292] CMOS imager chips enable true machine vision,
http://www.eetasia.com/ART_8800446742_480500_NP_89a21a9f.HTM,
(Retrieved: September 12, 2009).
[293] Bruce D. Lucas, "Generalized Image Matching by the Method of Differences,"
doctoral dissertation, tech. report , Robotics Institute, Carnegie Mellon University,
July, 1984,
http://www.ri.cmu.edu/pub_files/pub4/lucas_bruce_d_1984_1/lucas_bruce_d_1984_
1.pdf, (Retrieved: September 10, 2009).
[294] Yucheng Li, Liang Yin, Yan Jia, Mushu Wang, "Vehicle Speed Measurement
Based on Video Images," icicic, pp.439, 2008 3rd International Conference on
Innovative Computing Information and Control, 2008.
[295] Lin, H., Li, K.,, “Vehicle Speed Estimation from Single Still Images Based on
Motion Blur Analysis”, MVA2005 IAPR Conference on Machine Vision
Applications, Tsukuba Science City, Japan, May 16-18, 2005.
[296] Lin, H., Li, K., and Chang, C., “Vehicle speed detection from a single motion
blurred image”. Image Vision Comput. Issue: 26,10, pp. 1327-1337, Oct. 2008.
[297] Xu Cheng, Liu Yongcai, Liu Hanzhou, “An Image Method of Velocity
Measurement Based on the Smearing Effect”, Proceedings of the 27th Chinese
Control Conference, Kunming, Yunnan, China, July 16-18, 2008.
[298] N. L. Haworth, T. J. Triggs, and E. M. Grey, “Driver fatigue: Concepts,
measurement and crash countermeasures”. Technical report, Federal Office of Road
Safety Contract Report 72 by Human Factors Group, Department of Psychology,
Monash University, 1988.
[299] Paul Smith, Mubarak Shah, and Niels da Vitoria Lobo, “Determining Driver
Visual Attention With One Camera”, IEEE TRANSACTIONS ON INTELLIGENT
TRANSPORTATION SYSTEMS, VOL. 4, NO. 4, DECEMBER 2003.
[300] Devi, M.S.; Bajaj, P.R., "Driver Fatigue Detection Based on Eye Tracking,"
Emerging Trends in Engineering and Technology, 2008. ICETET '08. First
International Conference on , vol., no., pp.649-652, 16-18 July 2008.
[301] A. Heitmann, R. Guttkuhn, A. Aguirre, U. Trutschel, and M. Moore-Ede,
“Technologies for the monitoring and prevention of Driver Fatigue”, In Proc. of Int.
Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle
Design, pp. 82-86, 2004.
[302] Q. Ji and X. Yang, “Real-time visual cues extraction for monitoring driver
vigilance”, in Proc. of Int. Symp. on Computer Vision Systems, pp. 107-124, 2001.
[303] Z. Zhu and Q. Ji, “Real-Time and non-intrusive driver fatigue monitoring”, in
Proc. of IEEE Intelligent Transportation Systems Conference, pp. 657-662, 2004.
[304] L. M. Bergasa, J. Nuevo, M. A. Sotelo, and Manuel Vazquez, “Real-time system
for monitoring driver vigilance”, IEEE. Int. Conf. On Intelligent Vehicles, pp. 78-83,
2004.
[305] Marco Javier Flores, José María Armingol and Arturo de la Escalera, “Real-Time
Drowsiness Detection System for an Intelligent Vehicle”, 2008 IEEE Intelligent
Vehicles Symposium, Eindhoven University of Technology, Eindhoven, The
Netherlands, June 4-6, 2008.
[306] Albu, A.B.; Widsten, B.; Tiange Wang; Lan, J.; Mah, J., "A computer vision-based system for real-time detection of sleep onset in fatigued drivers," Intelligent
Vehicles Symposium, 2008 IEEE , vol., no., pp.25-30, 4-6 June 2008.
[307] Kato, K., et al., “Image synthesis display method and apparatus for vehicle
camera”, United States Patent 7,139,412, 2006.
[308] Ehlgen, T., Pajdla, T., “Monitoring surrounding areas of truck-trailer
combinations”, In: Proceedings of the 5th International Conference on Computer
Vision Systems, 2007.
[309] Brauckmann, M.E., Goerick, C., Groß, J., Zielke, T., “Towards all around
automatic visual obstacles sensing for cars”, In: Proc. Intelligent Vehicle Symp., pp.
79–84, 1994.
[310] Toyota, K., Fuji, T., Kimoto, T., Tanimoto, M., “A proposal of HIR (Human-Oriented Image Restructuring) Systems for ITS”, In: Proc. Intelligent Vehicle
Symp., pp. 540–544, 2000.
[311] Ichihara, E., Ohta, Y., “NaviView: visual assistance using roadside cameras –
evaluation of virtual views”, In: Proc. Intelligent Transportation System Conf., pp.
322–327, 2000.
[312] S. Bota and S. Nedevschi, “Multi-Feature Walking Pedestrian Detection Using
Dense Stereo and Motion”, WIT 2007, 2007.
[313] Capra, L. et al., “Middleware for Mobile Computing”, In Proceedings of the 8th
Workshop on Hot Topics in Operating Systems, Elmau, Germany, 2001.
[314] O. Davidyuk, J. Riekki, V. Rautio, and J. Sun, “Context-Aware Middleware for
Mobile Multimedia Applications”, In Proceedings of the 3rd International
Conference on Mobile and Ubiquitous Multimedia, ACM Press, Vol. 83, pp. 213-220, 2004.
[315] Matthias Baldauf, Schahram Dustdar and Florian Rosenberg, "A Survey on
Context-Aware Systems", International Journal of Ad Hoc and Ubiquitous
Computing Issue: Volume 2, Number 4, pp. 263 - 277, 2007.
[316] Hertel, D.; Betts, A.; Hicks, R.; ten Brinke, M., "An adaptive multiple-reset
CMOS wide dynamic range imager for automotive vision applications," Intelligent
Vehicles Symposium, 2008 IEEE, vol., no., pp.614-619, 4-6 June 2008.
[317] Video and Image Processing Blockset 2.8,
http://www.mathworks.com/products/viprocessing/demos.html (Retrieved: Dec 10,
2009)
[318] Asim Smailagic, Daniel P. Siewiorek, Joshua Anhalt, Francine Gemperle,
Daniel Salber, Sam Weber, “Towards Context Aware Computing: Experiences and
Lessons”, IEEE Journal on Intelligent Systems, vol.16, pp. 38-46, 2001.
[319] Maria Jonefjäll, “Visual assistance HMI for use of video camera applications in
the car”, ISSN 1402-1617 / ISRN LTU-EX--09/003--SE / NR 2009:003, Master’s
thesis, Luleå University of Technology, Sweden, 2009.