
International Journal of Automotive Technology, Vol. 14, No. 1, pp. 113−122 (2013)

DOI 10.1007/s12239−013−0013−3

Copyright © 2013 KSAE 069−13 / pISSN 1229−9138 / eISSN 1976−3832

REAL-TIME VISION-BASED BLIND SPOT WARNING SYSTEM: EXPERIMENTS WITH MOTORCYCLES IN DAYTIME/NIGHTTIME CONDITIONS

C. FERNÁNDEZ, D. F. LLORCA*, M. A. SOTELO, I. G. DAZA, A. M. HELLÍN and S. ÁLVAREZ

Computer Engineering Department, University of Alcalá, Escuela Politécnica, Campus Universitario, Ctra. Madrid-Barcelona, Km 33.600, Alcalá de Henares (Madrid), Spain

(Received 1 July 2011; Revised 8 October 2012; Accepted 11 October 2012)

ABSTRACT−This paper describes a real-time vision-based blind spot warning system specially designed for motorcycle detection in both daytime and nighttime conditions. Motorcycles are fast-moving, small vehicles that frequently remain unseen by other drivers, mainly in the blind-spot area. In fact, although the overall number of fatal accidents has decreased in recent years, motorcycle accidents have increased by 20%. The risks are primarily linked to the inherent characteristics of this mode of travel: motorcycles are fast-moving, light, unstable and fragile. These features make motorcycle detection a challenging task from the computer vision point of view. In this paper we present a daytime and nighttime vision-based motorcycle and car detection system for the blind spot area using a single camera installed on the side mirror. On the one hand, daytime vehicle detection is carried out using optical flow features and Support Vector Machine (SVM) classification. On the other hand, nighttime vehicle detection is based on headlight detection. The proposed system warns the driver about the presence of vehicles in the blind area, including information about the position and the type of vehicle. Extensive experiments have been carried out on 172 minutes of sequences recorded in real traffic scenarios in both daytime and nighttime conditions, in the context of the Valencia MotoGP Grand Prix 2009.

KEY WORDS : Computer vision, Blind spot, Optical flow, Motorcycle, Vehicle, Single camera

1. INTRODUCTION

Although in recent years the number of fatal accidents has decreased in general, the number of accidents between vehicles and motorcycles has increased by 20% (DGT, 2008). Motorcycle and moped accidents represent 32% of traffic accidents each year, and two-wheeled vehicles cause around 640 deaths and 33,750 injuries in Spain alone. On average, there are 45 vehicle-to-motorcycle collisions per day in Spain, and motorcyclists are injured in 98% of them.

The main causes of these accidents are traffic infractions, the difficulty of perceiving motorcycles (a fast-moving and small type of vehicle), the instability of motorcycles, road pavement conditions, and driving experience. Of all crashes, 80% happen in urban areas, 18% on secondary roads and 2% on highways (INTRAS, 2005). In addition, car drivers' perception of motorcycles is very sensitive to spatial frequency (the width of the vehicle).

Car drivers extract low spatial frequency items from a visual scene first (including wide vehicles such as cars, trucks, etc.). Thus, they are more likely to miss narrow motorcycles, which are considered to be high spatial frequency objects (Crundall et al., 2008). It is possible that a driver looks at an approaching motorcycle, and even perceives it, yet still makes a manoeuvre that leads to a collision. This can be explained by the "size-arrival effect", i.e., the perceived approaching speed is related to the size of the vehicle. As a consequence, the narrower image of the motorcycle compared with that of a car may lead the driver to over-estimate its time of arrival (DeLucia, 1991). Accordingly, we can state that motorcycles are the most dangerous means of transport and that motorcyclists are among the most vulnerable road users.

*Corresponding author. e-mail: [email protected]

Blind spots refer to the areas of the road that cannot be seen by the driver while looking forward or through either the rear-view or side mirrors. The most common blind spot areas appear towards the rear of the vehicle on both sides. Vehicles in the adjacent lanes that fall into these blind areas are not visible to a driver using only the mirrors of the car. Motorcycles are perhaps the least visible type of vehicle due to their size and speed. Other areas usually considered as blind spots are those that are too low to be seen at the rear, the front, or the sides of a vehicle, especially in vehicles with a high seating position, such as large vans, trucks, SUVs and Longer Combination Vehicles.



Detection of vehicles in such blind spots can be aided by systems based on passive sensors, such as video cameras, or active sensors, such as radar or laser sensors. Intelligent vehicle safety systems are in general designed to improve road safety by using both types of sensor. In recent years, some automobile manufacturers have carried out innovative research to solve this problem, using two main technologies: short-range radar and computer vision. Mercedes-Benz, BMW, Audi and Ford use forward and rear radar sensors to assist parking maneuvers and also to detect vehicles in the blind spot. The position and orientation of these sensors are good for parking assistance, but they are not the best solution for detecting vehicles and motorcycles in the blind spot. Computer vision is less widespread for this purpose, and only Volvo provides this technology in its passenger cars; this system does not distinguish the direction of the vehicles, so it warns the driver unnecessarily. Apart from car manufacturers, some other companies, such as Mobileye, develop computer vision-based systems. Cameras are cheap passive sensors that do not emit any beams or waves, and they provide a rich data source in good visibility conditions. However, in contrast to radar systems, vision-based solutions are very sensitive to bad weather conditions. Accordingly, they can be considered low-cost solutions, more suitable for mass production in the automotive industry but with limited performance.

In this paper we present an improved version (with respect to Sotelo and Barriga, 2008) of a real-time vision-based blind spot warning system that uses a single camera in the visible spectrum and has been specially designed for motorcycle detection in both daytime and nighttime conditions. Motorcycles are fast-moving, small vehicles, which makes their detection a challenging task from the computer vision point of view. On the one hand, daytime vehicle detection is carried out using optical flow features and Support Vector Machine (SVM) classification. On the other hand, nighttime vehicle detection is based on headlight detection. The proposed system warns the driver about the presence of vehicles in the blind area, including information about the position and the type of vehicle.

The remainder of the paper is organized as follows: Section 2 briefly surveys passive and active blind spot detection systems. An overall description of the vision-based blind spot detection system is presented in Section 3. The results obtained over 172 minutes of sequences recorded in real traffic scenarios are described and discussed in Section 4. Finally, conclusions and future work are provided in Section 5.

2. RELATED WORK

Considering vision-based vehicle detection for monitoring the blind spot area, most approaches are based on motion information, knowledge-based methods and optical flow features. Techmer (2004) combined a lane detection stage with a tracking procedure that minimizes distances between the surrounding contours of each edge point within two consecutive images. Motion information of vehicles is obtained in (Wang and Chen, 2005) using a spatio-temporal wavelet transform. Zhu et al. (2006) detect overtaking vehicles by integrating dynamic scene modeling with hypothesis testing and robust information fusion. Tsai et al. (2005) and She et al. (2004) proposed the use of color and shape information to detect approaching vehicles in a single-frame fashion.

The use of optical flow features to detect overtaking vehicles was first proposed by Batavia et al. (1997). Optical flow vectors indicate object movement between frames. The vectors are obtained as follows: first, robust features are extracted in the current frame and the previous one; second, the extracted features are matched to compose the optical flow vectors. Optical flow features are less affected by shadows, and they perform very well under straight road conditions (Sotelo and Barriga, 2008). However, vehicle detection based on optical flow usually fails in sharp curves such as roundabouts. Wang et al. (2005) employed homogeneous sparse optical flow, making detection more robust to camera shocks and vibrations. Although optical flow methods are fast and robust for detecting approaching objects, they still have a strong limitation with vehicles running at similar speeds. In such cases, tracking algorithms play a key role.

Focusing on rear and forward vehicle detection applications, a considerable amount of research has been carried out in recent years. As demonstrated by Mori and Charkari (1993), during the day vehicles cast a shadow underneath. The shadow is darker than the road, and a Region Of Interest (ROI) can be set around this area. The difficulty with this technique is setting a threshold that robustly detects the shadow. In order to set the correct threshold, Khammari et al. (2005) and Liu et al. (2007) consider pixels with negative vertical gradient values as locally darker regions, whereas Veit et al. (2008) compute the main road pixels. The influence of the illumination, the type of road and the weather conditions makes it difficult to set a fixed threshold. In rainy conditions, the road color turns darker and the detection of the shadow is more difficult.

Vehicles have strong vertical and horizontal edges, a characteristic that is very important for detecting the vehicle properly. The key point, again, is to set an optimal threshold to separate vehicles from the background. Tzomakas and Von Seelen (1998) detect the road and find the first horizontal edge scanned from the bottom of the image. In (Matthews et al., 1995), each image column is summed to compute vertical edges and each image row is summed to compute horizontal edges; the local maximal peaks determine the positions of candidates.

Vehicles are symmetric in front and rear views, so symmetry features have been widely used to fit the bounding box of vehicles (Kuehnle, 1991; Bertozzi et al., 2000). Symmetry can be computed on many different types of images: contour images, images of vertical and horizontal edges, grayscale images, etc. In (Llorca et al., 2010a), gray level, vertical edge and horizontal edge symmetries are used to verify the selected candidates and to refine the final position of the vehicle. The main disadvantage of this method is that it does not work well when vehicles are partially occluded or when they appear oblique with respect to the camera.

Inverse perspective mapping (IPM) has been widely proposed to detect forward vehicles (Arróspide and Salgado, 2012; Lee and Kim, 2012). Among the methods used to reduce the region of interest from which vehicles are detected, we remark lane detection approaches (Álvarez et al., 2010; Choi et al., 2012), which can also be applied in Lane Departure Warning (LDW) systems.

Finally, it is important to remark that an additional camera can be used to obtain depth information, easing the detection process; a considerable amount of work can be found in the literature (Sun et al., 2006; Jung et al., 2007; Hwang and Huh, 2009; Vinagre et al., 2012; Llorca et al., 2012). However, stereo vision-based vehicle detection systems are not suitable for blind-spot applications, mainly due to integration problems, since they need a minimum distance between the cameras (baseline) in order to provide accurate depth estimates (Llorca et al., 2010b).

3. SYSTEM DESCRIPTION

In order to deal with vehicle detection in daytime and nighttime conditions, we use a single camera in the visible spectrum installed on the side mirror (see Figure 1). The camera is a FireWire color camera with a Sony 1/4'' CCD progressive-scan sensor and a bright lens with a 4.3 mm focal length, working at 30 frames per second with a 640 × 480 pixel resolution.

The global overview of the system is depicted in Figure 2. The first stage provides information about the type of scene. More specifically, it detects whether the vehicle is driving in daytime or nighttime conditions, as well as whether the host vehicle is driving through a roundabout. According to the output of this first stage, we trigger two different subsystems: daytime and nighttime vehicle detection. On the one hand, daytime vehicle detection is carried out using optical flow features and SVM classification. On the other hand, nighttime vehicle detection is based on headlight detection.

Figure 2. Global scheme of the detection method.

3.1. Daytime Vehicle Detection

The algorithm carried out in daytime conditions can be summarized as follows:

(1) Extract robust features in each frame.

(2) Match features between consecutive frames.

(3) Analyze the optical flow vectors and retain only the relevant ones.

(4) Cluster the retained vectors to extract candidates.

(5) Fit each candidate's bounding box using vertical and horizontal edges.

(6) Track the candidates.

(7) Classify the candidates with a linear SVM using Histogram of Oriented Gradients (HOG) features.

Figure 1. Side mirror prototype camera used in the experiments.

Figure 3. Optical flow results. The flow vectors appearing on the infrastructure are created by the host vehicle's ego-motion, and the flow vectors appearing on the motorbike are created by the object overtaking the host vehicle.


In (Luo and Gwon, 2009), an experimental comparison of the following robust feature extraction methods is performed: SIFT, PCA-SIFT and SURF. In our case, the method applied is SURF (Speeded-Up Robust Features) (Bay et al., 2008), because it takes less computation time than SIFT or PCA-SIFT and performs better under illumination changes. Features are then matched using the nearest neighbor algorithm (Nene and Nayar, 1997), and the resulting vectors are analyzed in order to filter out the flow created by the ego-vehicle (flow vectors in the opposite direction). In Figure 3, we can see the optical flow vectors (red) created by the car ego-motion and the optical flow vectors (blue) created by a motorcycle overtaking the host vehicle (it moves faster than the host vehicle).

In order to reduce the impact of noise, the flow vectors generated by overtaking vehicles must exceed a predefined minimum length. The filtered vectors are then validated and depicted as large dots (see Figure 4). A simple analysis is then applied to the resulting image, and the biggest object is taken as the vehicle candidate.
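To make steps (1)-(4) concrete, the following Python sketch extracts and matches features between two consecutive frames and keeps only the flow vectors whose length and direction are consistent with an overtaking vehicle. It is a minimal sketch assuming OpenCV; ORB is used as a freely available stand-in for SURF (which requires the opencv-contrib build), and the minimum length and assumed overtaking direction are illustrative, not the paper's actual parameters.

```python
import cv2
import numpy as np

MIN_FLOW_LEN = 8.0  # illustrative minimum vector length in pixels

def overtaking_flow(prev_gray, curr_gray):
    # ORB as a stand-in for SURF (SURF needs the opencv-contrib build).
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    # Nearest-neighbor matching (Hamming distance for ORB descriptors).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    vectors = []
    for m in matches:
        p1 = np.array(kp1[m.queryIdx].pt)
        p2 = np.array(kp2[m.trainIdx].pt)
        v = p2 - p1
        # Keep long-enough vectors pointing in the overtaking direction
        # (assumed here to be negative x in the mirror image); ego-motion
        # flow points the opposite way and is discarded.
        if np.linalg.norm(v) >= MIN_FLOW_LEN and v[0] < 0:
            vectors.append((p1, p2))
    return vectors
```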

Furthermore, it is necessary to fit the bounding box to the contour of the object. To achieve this goal, we compute vertical and horizontal edges as well as a set of dynamic thresholds based on edge statistics. Two histograms (vertical and horizontal) are then used to fit the final bounding box. An example of this process can be seen in Figure 5.
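A minimal sketch of this bounding box fitting, assuming OpenCV: Sobel edge magnitudes are projected onto the image axes, and the box is taken where the projections exceed a fixed fraction of their peak. The fixed fraction stands in for the paper's dynamic, statistics-based thresholds.

```python
import cv2
import numpy as np

def fit_bounding_box(roi_gray, frac=0.25):
    # Vertical edges respond to |d/dx|, horizontal edges to |d/dy|.
    gx = np.abs(cv2.Sobel(roi_gray, cv2.CV_32F, 1, 0, ksize=3))
    gy = np.abs(cv2.Sobel(roi_gray, cv2.CV_32F, 0, 1, ksize=3))
    col_hist = gx.sum(axis=0)   # projection of vertical edges on x
    row_hist = gy.sum(axis=1)   # projection of horizontal edges on y
    # Fixed-fraction threshold; the paper uses dynamic thresholds
    # derived from edge statistics instead.
    xs = np.where(col_hist > frac * col_hist.max())[0]
    ys = np.where(row_hist > frac * row_hist.max())[0]
    if len(xs) == 0 or len(ys) == 0:
        return None
    x0, x1 = int(xs[0]), int(xs[-1])
    y0, y1 = int(ys[0]), int(ys[-1])
    return x0, y0, x1 - x0, y1 - y0  # (u, v, w, h)
```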

Figure 4. Blue vector clusters. The small ones are discarded.

Each detected candidate is tracked by means of the nearest neighbor algorithm, which solves the data association problem, and a Kalman filter (Kalman, 1960) that models the following state vector:

$x_n = [u, \dot{u}, v, \dot{v}, w, h]^T$   (1)

where u and v are the respective horizontal and vertical image coordinates of the top left corner of each candidate (note that we include their velocities $\dot{u}$ and $\dot{v}$), and w and h are the respective width and height of the bounding box in the image plane.
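A minimal sketch of such a tracker, assuming OpenCV and a constant-velocity model for u and v; the noise covariances are illustrative placeholders rather than the authors' tuned values.

```python
import cv2
import numpy as np

def make_candidate_tracker(fps=30.0):
    dt = 1.0 / fps
    # State: [u, du, v, dv, w, h]; measurement: [u, v, w, h].
    kf = cv2.KalmanFilter(6, 4)
    kf.transitionMatrix = np.array([
        [1, dt, 0,  0, 0, 0],   # u  <- u + du*dt
        [0,  1, 0,  0, 0, 0],   # du
        [0,  0, 1, dt, 0, 0],   # v  <- v + dv*dt
        [0,  0, 0,  1, 0, 0],   # dv
        [0,  0, 0,  0, 1, 0],   # w
        [0,  0, 0,  0, 0, 1],   # h
    ], dtype=np.float32)
    kf.measurementMatrix = np.array([
        [1, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1],
    ], dtype=np.float32)
    # Illustrative noise levels; these would be tuned on real sequences.
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(4, dtype=np.float32)
    return kf

# Per frame: call kf.predict(), then, for the measurement associated by
# nearest-neighbor matching, kf.correct(np.array([[u],[v],[w],[h]], np.float32)).
```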

The last stage of the daytime vehicle detection is the classification step. The selected and tracked candidates are classified by means of a linear SVM classifier (Christopher, 1998) in combination with Histogram of Oriented Gradients (HOG) features (Dalal and Triggs, 2005). A similar approach for rear vehicle detection was previously presented (Álvarez et al., 2010; Llorca et al., 2010c) using only two classes: vehicles (cars and trucks) and non-vehicles. Motorcycles are now included as a separate class, so we have a multi-class problem with three classes: cars (including trucks), motorcycles and non-vehicles (see Figure 6).

The one-against-one approach is used with two classifiers: one to classify cars versus non-vehicles and the other to classify motorcycles versus non-vehicles. The first classifier was trained with 8,850 positive samples (cars and trucks) and 17,701 negative samples. The second classifier was trained with 6,143 positive samples (motorcycles) and 12,285 negative samples, using up to three cycles of bootstrapping. The classifiers were tested with 320 motorcycles, 332 cars and 336 negative samples.
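A minimal sketch of this classification stage, assuming OpenCV's HOGDescriptor and scikit-learn's LinearSVC. The two one-versus-non-vehicle classifiers mirror the setup described above, but the 64 × 64 window and all HOG parameters are illustrative assumptions, since the paper does not specify them.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins);
# a 64x64 window is an assumed choice.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patches):
    # Each patch is a grayscale candidate image, resized to the window size.
    return np.array([hog.compute(cv2.resize(p, (64, 64))).ravel()
                     for p in patches])

def train(cars, motorcycles, negatives):
    # Two linear SVMs: cars vs. non-vehicles, motorcycles vs. non-vehicles.
    neg = hog_features(negatives)
    car_clf = LinearSVC().fit(
        np.vstack([hog_features(cars), neg]),
        np.r_[np.ones(len(cars)), np.zeros(len(neg))])
    moto_clf = LinearSVC().fit(
        np.vstack([hog_features(motorcycles), neg]),
        np.r_[np.ones(len(motorcycles)), np.zeros(len(neg))])
    return car_clf, moto_clf
```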

Figure 7 depicts the Receiver Operating Characteristic (ROC) curve. As we can see, the Detection Rate (DR) obtained is 94.9% for cars and 98.7% for motorcycles, and the False Positive Rate provided by the classifier ensemble is 1.5% and 5.2% for cars and motorcycles, respectively.

Figure 5. Bounding box fitting using horizontal and vertical edges.

Figure 6. Training samples. Upper row: cars. Middle row: motorcycles. Lower row: negative samples.


Figure 7. ROC curve of the linear SVM classifier performance.

Figure 8. Pairs of lights analyzed in the nighttime car detection algorithm.

The superior performance observed when classifying motorcycles can be explained by the fact that HOG features were originally designed for human detection (Dalal and Triggs, 2005). In addition, negative samples usually include infrastructure elements with strong horizontal edges (see the lower row of Figure 6), which are prone to be detected as false positives.

3.2. Nighttime Vehicle Detection

During nighttime conditions, the detection method is based on car and motorcycle headlight detection. The technique builds on the method described in (Alcantarilla et al., 2011), with some improvements to allow motorcycle headlight detection.

Detection of car lights has some remarkable advantages compared with other techniques. Car lights have a very distinctive and stable appearance in video sequences: they have a well-known geometry and show higher intensity values than neighboring pixels. Two similar approaches that complement each other have been developed, one for cars and another for motorcycles. The method can be summarized as follows:

(1) Lights detection algorithm:

• Set a threshold to get a binary image.

• Apply a morphological operation to the image.

• Find contours.

• Remove small objects.

• These objects are labeled as LIGHTS.

(2) Car detection algorithm:

• From the LIGHTS group, find pairs of lights at the same vertical position.

• Remove lights reflected on the road.

• Remove overlapped pairs of lights.

• Analyze symmetry.

• Compute 3D size and distance.

• These objects are labeled as CARS.

(3) Motorcycle detection algorithm:

• From the LIGHTS group, remove the CARS group and lights reflected on the road.

• Remove the reflected lights of motorcycles.

• Remove closely spaced lights.

• Compute 3D size and distance.

• These objects are labeled as MOTORCYCLES.

Figure 9. Cars and their reflected lights are deleted.
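A minimal sketch of the lights detection algorithm (step (1)), assuming OpenCV; the intensity threshold and minimum blob area are illustrative assumptions:

```python
import cv2

def detect_lights(gray, thresh=230, min_area=20):
    # Threshold to isolate high-intensity pixels (headlights).
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological closing to merge fragmented light blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Find contours and drop small objects.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lights = [cv2.boundingRect(c) for c in contours
              if cv2.contourArea(c) >= min_area]
    return lights  # each entry labeled as a LIGHT: (x, y, w, h)
```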

When the LIGHTS are detected, the next step is to classify them into three categories: cars, motorcycles and artifacts. Cars are detected by looking for pairs of lights at the same vertical position. An example of the algorithm is depicted in Figure 8. As we can see, there are up to four pairs of lights: the bottom one is removed because it is a light reflected on the road, and the widest one is also deleted because it overlaps with other pairs. In addition, symmetry is analyzed, and the 3D size and distance are estimated to reject candidates that are too far away, as well as streetlights. Distance estimation is based on a flat world assumption, the camera calibration parameters and prior knowledge of object sizes.

The nighttime motorcycle detection algorithm starts by removing from the LIGHTS group all the lights labeled as CARS, as well as the lights of cars reflected on the road, as shown in Figure 9. Furthermore, lights that are very close to each other are removed, and the 3D size and distance of the remaining lights are estimated to reject false positives.
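The distance estimation in both branches follows the pinhole model with known object sizes. A minimal sketch, where the focal length, the headlight separation of a car and the lamp size of a motorcycle are illustrative assumptions:

```python
# Pinhole model: an object of real width W (meters) imaged with focal
# length f (pixels) appears w pixels wide at distance Z = f * W / w.
FOCAL_PX = 700.0        # assumed focal length in pixels (from calibration)
CAR_LIGHT_SEP_M = 1.5   # assumed separation between a car's headlights
MOTO_LAMP_DIAM_M = 0.2  # assumed diameter of a motorcycle headlamp

def distance_from_pair(sep_px):
    # Distance to a car from the pixel separation of its light pair.
    return FOCAL_PX * CAR_LIGHT_SEP_M / sep_px

def distance_from_blob(width_px):
    # Distance to a motorcycle from the width of its single headlamp.
    return FOCAL_PX * MOTO_LAMP_DIAM_M / width_px
```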

3.3. Roundabouts Detection

One of the main problems derived from the use of optical flow in daytime conditions arises when the car is driving through sharp curves such as roundabouts and 90-degree turns.


Figure 10. Regions of interest for roundabout detection.

Figure 12. Type of scene distribution.

In order to improve the performance, we developed a roundabout detection method based on optical flow analysis. When the method detects a roundabout, the car and motorcycle detection algorithms are temporarily switched off. The purpose of this method is to decrease the number of false warnings given to the driver in such conditions (detection of cars and motorcycles inside roundabouts is left for future work). The method computes the density of optical flow vectors in two pre-defined regions (see Figure 10), taking into account the number of vectors, their distribution, their modulus and their direction. The density of flow vectors in the pre-defined ROI is much higher in roundabouts than on straight roads. However, this situation also appears when the host vehicle is overtaking a vehicle located in the left lane. In order to avoid false detections, a second region of interest is used (see Figure 10), so detections are only considered if both regions contain a minimum number of flow vectors. Figure 11 depicts the results of the proposed method in a sequence where the host vehicle drove through two roundabouts and three overtakings were detected. Combining the density of flow vectors in both regions of interest, the roundabout detection rate is around 95%.
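A minimal sketch of this decision rule; the ROI layout and the per-region vector-count thresholds are illustrative assumptions:

```python
def in_roi(point, roi):
    x, y, w, h = roi
    return x <= point[0] < x + w and y <= point[1] < y + h

def roundabout_detected(flow_vectors, roi_a, roi_b,
                        min_count_a=40, min_count_b=40):
    # Count flow vectors starting inside each pre-defined region;
    # both regions must be dense for a roundabout to be declared,
    # which rejects ordinary left-lane overtakings.
    count_a = sum(1 for p1, _ in flow_vectors if in_roi(p1, roi_a))
    count_b = sum(1 for p1, _ in flow_vectors if in_roi(p1, roi_b))
    return count_a >= min_count_a and count_b >= min_count_b
```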

4. EXPERIMENTAL RESULTS

The vision-based blind spot car and motorcycle detection system was tested on a set of sequences recorded in real traffic conditions (640 × 480 pixels at 30 fps), including daytime and nighttime conditions, as well as sequences in tunnels and roundabouts on urban and highway roads.

Figure 12 shows the type of scene distribution of the recorded sequences with a total duration of 172 minutes.

Some motorcyclists collaborated in the experiments by performing several overtakings of the host vehicle. However, we obtained a considerable number of sequences including multiple overtakings by driving on the A-3 Madrid-Valencia highway during the weekend when the Valencia MotoGP Grand Prix of 2010 took place.

The sequences were manually labeled in order to obtain the ground truth. As we can see in Table 1, a total of 1,048 overtakings are available, of which 494 correspond to motorcycles and 554 to other vehicles (cars, vans, trucks, etc.).

The performance is first analyzed in terms of time-to-detect, defined here as the time (number of frames) needed to detect a ground-truth overtaking from the first instant of full vehicle visibility (see Figure 13).

Figure 11. Results of the two optical flow analysis functions.


Table 1. Number of labeled overtakings in the sequences used to test the system.

Vehicle type    Motorcycle       Car              Total
Day             449 (42.84%)     529 (50.47%)     978 (93.32%)
Night           45 (4.29%)       25 (2.38%)       70 (6.67%)
Total           494 (47.13%)     554 (52.86%)     1,048

Figure 13. Time-to-detect and overlapped time definitions.

The overlapped time is defined as the time elapsed while the ground truth and the system output match (see Figure 13).

We consider a detection hit when the detected overtaking overlaps at least 50% with the ground-truth overtaking.
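For clarity, a minimal sketch of how these metrics can be computed, assuming each overtaking is represented by its first and last frame numbers:

```python
def time_to_detect(gt_start, det_start):
    # Frames elapsed from first full vehicle visibility to first detection.
    return max(0, det_start - gt_start)

def overlapped_time(gt, det):
    # Frames during which ground truth and system output both report
    # the overtaking; gt and det are (start_frame, end_frame) tuples.
    return max(0, min(gt[1], det[1]) - max(gt[0], det[0]) + 1)

def is_hit(gt, det, min_overlap=0.5):
    # Detection hit: overlapped time covers at least 50% of the
    # ground-truth overtaking duration.
    gt_len = gt[1] - gt[0] + 1
    return overlapped_time(gt, det) >= min_overlap * gt_len
```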

Applying this requirement, the detection rate performance obtained can be seen in Table 2. On the one hand, the detection rate for cars (88%) is higher than for motorcycles (84.44%) in nighttime conditions. This stems from the fact that more information is available for car detection at night: cars are represented by two headlights, whereas motorcycles are represented by only one. On the other hand, motorcycles are better detected than cars in daytime conditions (98.0% and 96.78%, respectively); as before, the daytime results can be explained by the fact that HOG features were originally designed for human detection. These are remarkable results, since motorcycles are a very fast-moving kind of vehicle, so the number of frames available for detecting them is lower than for other vehicles such as cars, vans or trucks.

Table 2. Number of overtakings detected applying the requirement of 50% of overlapped time.

Vehicle type    Motorcycle       Car              Total
Day             440 (98.0%)      512 (96.78%)     952 (97.34%)
Night           38 (84.44%)      22 (88%)         60 (85.71%)
Total           478 (96.76%)     534 (96.38%)     1,012 (96.56%)

Table 3. Average overlapping time in frames and seconds.

Vehicle type    Motorcycle          Car                 Mean
                Frames   Seconds    Frames   Seconds    Frames   Seconds
Day             39.55    1.31       63.44    2.11       52.47    1.74
Night           80.04    2.66       93.24    3.10       84.75    2.82
Mean            43.23    1.44       64.79    2.15       54.63    1.82

Table 4. Time-to-detect results in frames and seconds.

Vehicle type    Motorcycle          Car                 Mean
                Frames   Seconds    Frames   Seconds    Frames   Seconds
Day             5.37     0.17       5.52     0.18       5.43     0.18
Night           1.17     0.03       3.80     0.12       2.11     0.07
Mean            4.99     0.16       5.45     0.18       5.23     0.17

Table 3 depicts the average number of frames that correspond to the detected overtaking process (overlapping time). On average, daytime overtakings have a shorter duration (1.74 s) than nighttime overtakings (2.82 s), since vehicles usually run at lower speeds in low visibility conditions. The worst case from the detection point of view corresponds to motorcycles in daytime conditions (39.55 frames; 1.31 s). Because of this, one of the most important requirements of the system is a short delay time (time-to-detect), so that fast overtakings can be detected. We consider that this goal has been accomplished, since the average time-to-detect is only 5.23 frames (0.17 seconds) overall, and 4.99 frames (0.16 seconds) for motorcycles, as can be seen in Table 4. It is important to remark that the system is very fast when detecting vehicles in nighttime conditions (2.11 frames on average).

The good detection rate (96.56%) and the fast response of the system (0.17 seconds) demonstrate good performance, but they come at the cost of a high False Positive Rate (FPR). A false alarm is counted when an overtaking is detected although there is no vehicle in the blind spot and the duration of the alarm is longer than 0 frames.

Table 5. False positive rate and its duration.

Minimum duration          Day                               Night
of overtaking to alert    1 alert every (s)  Duration (s)   1 alert every (s)  Duration (s)
> 0 frames                51                 1.61           84                 1.38
> 15 frames               62                 1.91           90                 1.45
> 20 frames               77                 2.22           107                1.61
> 30 frames               97                 2.58           147                1.89


Figure 14. Results. Upper row: daytime examples. Lower row: nighttime examples.

Table 6. Computation time for daytime and nighttime.

                Computation time    Real-time restriction at 30 fps
Day             94.06 ms            33 ms
Night           29.34 ms            33 ms

Under these conditions, in daytime, the system gives an unnecessary warning every 51 seconds, lasting 1.61 seconds on average. Motorcycle overtakings during the day are the fastest ones, with an average duration of 39.55 frames (1.31 seconds). In order to reduce the number of false alarms, we filter out the alerts that remain active for less than 20 frames (0.66 seconds), obtaining a smaller number of false alarms: one every 77 seconds with an average duration of 2.22 seconds (see Table 5). This filter worsens the detection rate while the false positive rate does not improve significantly, so a method to reduce the false alert rate is needed in future work.
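A minimal sketch of this minimum-duration filter, formulated here as a stand-alone debouncing step over per-frame detection flags (an assumed formulation; the paper does not detail the implementation):

```python
def filter_short_alerts(raw_flags, min_frames=20):
    # raw_flags: per-frame booleans, True when a vehicle is detected in
    # the blind spot. Alerts shorter than min_frames are suppressed.
    filtered = [False] * len(raw_flags)
    start = None
    for i, flag in enumerate(raw_flags + [False]):  # sentinel flushes last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_frames:
                for j in range(start, i):
                    filtered[j] = True
            start = None
    return filtered
```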

Figure 14 depicts some examples of the results provided by the proposed system.

The details corresponding to the average computation time are shown in Table 6, and the daytime computation time is broken down in Figure 15.

Figure 15. Detailed information about daytime CPU time.

Computation time measurements were taken on a PC with a quad-core processor at 2.83 GHz and 3 GB of RAM. In daytime conditions the CPU time is three times higher than the specified restriction, which is a consequence of the optical flow computation: it takes 83% of the CPU time. However, this can be addressed by means of parallel computation.

5. CONCLUSION

In this paper, a motorcycle and car detection system has been presented to reduce the number of accidents caused by the blind spot of the side mirrors. The system works during both daytime and nighttime. It also detects roundabouts in order to switch itself off, decreasing the number of false alarms. The system has been tested on sequences with a total duration of 2 h 51 min 40 s containing 1,048 overtaking manoeuvres. A large amount of information about the overtakings has been collected for this project and future work, including their duration, the type of vehicle, etc. We obtained a detection rate of up to 98% for motorcycles during the day, and a global detection rate of 96.56% was achieved. Motorcycle overtakings are very fast, so a short delay time is required; the average system reaction time is 0.17 seconds. The good performance in detection rate and reaction time comes with the disadvantage of a high false alarm rate. In nighttime conditions the real-time restriction is met, whereas in daytime the CPU time is three times higher than the real-time restriction. The developed system also works when the vehicle is parked, so it can warn the driver in an open-door check mode in order to avoid crashes with motorcycles or bicycles.

The experimental data collected in daytime do not contain bad weather conditions such as heavy fog, rain or snow. On the one hand, due to the reduced visibility in such conditions, drivers usually turn on their lights, so the nighttime motorcycle detection algorithm is expected to play an important role in those cases. On the other hand, it is possible to use an automatic fog, rain or snow detection system (Bronte et al., 2009) that would inform the blind spot system about the available visibility range. In severe weather conditions, these types of diagnosis systems usually suggest switching off the vision-based detection system.

Current and future work can be summarized in the following statements. A specific method for vehicle detection in roundabouts is needed. The false alarm rate could be improved by applying additional bootstrapping cycles to the SVM classifiers. Furthermore, it could be improved by reading CAN bus data from the car and computing the vehicle ego-motion to compensate for the optical flow created in turns and roundabouts. By halving the image resolution, the processing time can be divided by four. Another solution is to extract fewer features from each image in order to reduce the computation time; the disadvantage of this approach is the loss of optical flow vectors on vehicles, which degrades performance. Another line of future work is the use of GPUs and parallel computing to reduce the time needed to compute the optical flow. The proposed PC-based approach is not suitable for mass production purposes. In order to provide a more realistic solution to the automotive industry, new hardware implementations of this system should be studied to design a low-cost, low-consumption and reliable platform.

ACKNOWLEDGMENT−This work was supported by the Spanish Ministry of Economy and Competitiveness under Research Grant ONDA-FP TRA2011-27712-C02-02.

REFERENCES

Alcantarilla, P. F., Bergasa, L. M., Jiménez, P., Parra, I., Llorca, D. F., Sotelo, M. A. and Mayoral, S. S. (2011). Automatic lightbeam controller for driver assistance. Machine Vision and Applications 22, 819−835.

Álvarez, S., Sotelo, M. A., Ocaña, M., Llorca, D. F., Parra, I. and Bergasa, L. M. (2010). Perception advances in outdoor vehicle detection for automatic cruise control. Robotica 28, 5, 765−779.

Arróspide, J. and Salgado, L. (2012). On-road visual vehicle tracking using Markov chain Monte Carlo particle filtering with metropolis sampling. Int. J. Automotive Technology 13, 6, 955−961.

Batavia, P. H., Pomerleau, D. E. and Thorpe, C. E. (1997). Overtaking vehicle detection using implicit optical flow. IEEE Intelligent Transportation Systems Conf., 729−734.

Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L. (2008). SURF: Speeded up robust features. Computer Vision and Image Understanding (CVIU) 110, 3, 346−359.

Bertozzi, M., Broggi, A., Fascioli, A. and Nichele, S. (2000). Stereo vision-based vehicle detection. IEEE Intelligent Vehicles Symp., 39−44.

Bronte, S., Bergasa, L. M. and Alcantarilla, P. F. (2009). Fog detection system based on computer vision techniques. IEEE Intelligent Transportation Systems Conf., 1−6.

Choi, H.-C., Park, J.-M., Choi, W.-S. and Oh, S.-Y. (2012). Vision-based fusion of robust lane tracking and forward vehicle detection in a real driving environment. Int. J. Automotive Technology 13, 4, 653−669.

Christopher, B. C. J. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2, 2, 121−167.

Crundall, D., Clarke, D., Ward, P. and Bartle, C. (2008). Car drivers' skills and attitudes to motorcycle safety: A review. Road Safety Research Report No. 85.

Dalal, N. and Triggs, B. (2005). Histograms of oriented gradients for human detection. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 886−893.

DeLucia, P. R. (1991). Pictorial and motion-based information for depth perception. J. Experimental Psychology: Human Perception and Performance 17, 738−748.

DGT (2008). Dirección General de Tráfico, Ministerio del Interior del Gobierno de España. Caracterización de la Accidentalidad para el Plan General de Motos.

Hwang, J. and Huh, K. (2009). Vehicle detection system design based on stereo vision sensors. Int. J. Automotive Technology 10, 3, 373−379.

INTRAS (2005). Instituto de Tráfico y Seguridad Vial de la Universidad de Valencia. Colisiones entre Vehículos de dos Ruedas y Turismos 2001−2005.

Jung, H. G., Lee, Y. H., Kim, B. J., Yoon, P. J. and Kim, J. H. (2007). Stereo vision-based forward obstacle detection. Int. J. Automotive Technology 8, 4, 493−504.

Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. J. Basic Engineering Series D 82, 35−45.

Khammari, A., Nashashibi, F., Abramson, Y. and Laurgeau, C. (2005). Vehicle detection combining gradient analysis and AdaBoost classification. IEEE Intelligent Transportation Systems Conf., 66−71.

Kuehnle, A. (1991). Symmetry-based recognition of vehicle rears. Pattern Recognition Letters 12, 4, 249−258.

Lee, B. and Kim, G. (2012). Robust detection of preceding vehicles in crowded traffic conditions. Int. J. Automotive Technology 13, 4, 671−678.

Liu, W., Wen, X., Duan, B., Yuan, H. and Wang, N. (2007). Rear vehicle detection and tracking for lane change assist. IEEE Intelligent Vehicles Symp., 252−257.

Llorca, D. F., Sotelo, M. A., Hellín, A. M., Orellana, A., Gavilan, M., Daza, I. G. and Lorente, A. G. (2012). Transportation Research Part C: Emerging Technologies 25, 226−237.

Llorca, D. F., Sotelo, M. A., Sánchez, S., Ocaña, M., Rodríguez-Ascariz, J. M. and García-Garrido, M. A. (2010a). Traffic data collection for floating car data enhancement in V2I networks. EURASIP J. Advances in Signal Processing, Article ID 719294, 13.

Llorca, D. F., Sánchez, S., Ocaña, M. and Sotelo, M. A. (2010b). Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications. Sensors 10, 4, 3741−3758.

Llorca, D. F., Sánchez, S., Ocaña, M. and Sotelo, M. A. (2010c). Vision-based traffic data collection sensor for automotive applications. Sensors 10, 1, 860−875.

Luo, J. and Gwon, O. (2009). Comparison of SIFT, PCA-SIFT and SURF. Int. J. Image Processing (IJIP) 4, 143−152.

Matthews, N. D., An, P. E. and Harris, C. J. (1995). Vehicle detection and recognition in greyscale imagery. 2nd Int. Workshop on Intelligent Autonomous Vehicles, 1−6.

Mori, H. and Charkari, N. M. (1993). Shadow and rhythm as sign patterns of obstacle detection. IEEE Int. Symp. Industrial Electronics, 271−277.

Nene, S. A. and Nayar, S. K. (1997). A simple algorithm for nearest neighbor search in high dimensions. IEEE Trans. Pattern Analysis and Machine Intelligence 19, 9, 989−1003.

She, K., Bebis, G., Gu, H. and Miller, R. (2004). Vehicle tracking using on-line fusion of color and shape features. IEEE Intelligent Transportation Systems Conf., 731−736.

Sotelo, M. A. and Barriga, J. (2008). Blind spot detection using vision for automotive applications. J. Zhejiang University SCIENCE A 9, 10, 1369−1372.

Sun, Z., Bebis, G. and Miller, R. (2006). On-road vehicle detection: A review. IEEE Trans. Pattern Analysis and Machine Intelligence 28, 5, 694−711.

Techmer, A. (2004). Real-time motion analysis for monitoring the rear and lateral road. IEEE Intelligent Vehicles Symp., 704−709.

Tsai, L. W., Hsieh, J. W. and Fan, K. C. (2005). Vehicle detection using normalized color and edge map. IEEE Int. Conf. Image Processing 2, 558−601.

Tzomakas, C. and Von Seelen, W. (1998). Vehicle detection in traffic scenes using shadows. Internal Report, Institut für Neuroinformatik, 1−8.

Veit, T., Tarel, J. P., Nicolle, P. and Charbonnier, P. (2008). Evaluation of road marking feature extraction. IEEE Intelligent Transportation Systems Conf., 174−181.

Vinagre, J. J., Llorca, D. F., Rodríguez, A. B., Quintero, R., Llamazares, A. and Sotelo, M. A. (2012). Extended floating car data system: Experimental results and applications for hybrid route level of service. IEEE Trans. Intelligent Transportation Systems 13, 1, 25−35.

Wang, J., Bebis, G. and Miller, R. (2005). Overtaking vehicle detection using dynamic and quasi-static background modelling. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 64−72.

Wang, Y. K. and Chen, S. H. (2005). A robust vehicle detection approach. IEEE Int. Conf. Advanced Video and Signal-based Surveillance, 117−222.

Zhu, Y., Comaniciu, D., Pellkofer, M. and Koehler, T. (2006). Reliable detection of overtaking vehicles using robust information fusion. IEEE Trans. Intelligent Transportation Systems 7, 4, 401−414.
