Vehicle Detection at Night Based on Tail-Light Detection

Ronan O'Malley, Martin Glavin, Edward Jones
Connaught Automotive Research Group
Department of Electronic Engineering
National University of Ireland, Galway
{ronan.omalley, martin.glavin, edward.jones}
Abstract

Automated detection of vehicles in front can be used as a component of systems for forward collision avoidance and mitigation. When driving in dark conditions, vehicles in front are generally visible by their tail and brake lights. We present an algorithm that detects vehicles at night using a camera by searching for tail lights. Knowledge of the colour, size, symmetry and position of rear-facing vehicle lights, taken from relevant legislation, is exploited. We develop an image processing system that can reliably detect vehicles at different distances and in different weather and lighting conditions.

Keywords: vehicle detection, forward collision, advanced driver assist, automotive, image processing, rear light detection

1. Introduction

Statistics suggest that forward collision detection and avoidance, that is, avoiding collision with a vehicle in front, is an important area of focus for road safety and accident prevention. For example, in the USA in 2003, 29% of total light vehicle crashes were rear-end collisions [1].

Demand for systems that can avoid or mitigate rear-end collisions is expected to grow as consumers become increasingly safety conscious and insurance companies begin to recognise the impact such systems could have on the number of accidents occurring. Automotive manufacturers have begun to introduce such systems, mainly implemented with active sensors such as RADAR and LIDAR. A forward-facing camera could be a low-cost alternative or assistant to such active systems, as well as fulfilling many other functions. As there can be serious interference when multiple vehicles travelling in the same direction have the same type of active system [8], it is expected that the future of forward collision detection systems will be a combination of RADAR and a forward-facing optical camera. The approach we present could enable a collaboration or fusion relationship between such systems to continue to function in darkness. If a forward-facing camera is on a vehicle performing functions in the daytime, any useful functionality that can be achieved in dark conditions is a no-cost bonus feature.

While all vehicles differ in appearance, with different styles of rear-facing lights, they must adhere to certain guidelines governed by automotive legislation. These properties can be identified by image processing systems. When in direct view and not occluded, tail lights will be:

• amongst the brightest objects in the image;
• red in colour;
• close to each other in pairs;
• symmetrical, with the same size and shape.

This paper describes a system for detecting vehicles based on their rear lights. The system focuses on close-range detection, so that the distance in the image between the rear lights is large enough for the individual lights to be distinguishable. As the target vehicle gets further away, the rear lights tend to blur together, distorting the distinctive characteristics used for detection. Of course, the near range is also the area that is most critical in collision detection systems. We do not account for vehicles that do not meet legislative requirements, such as vehicles with broken lights or modified lights that do not meet the common legal specification of colour, brightness and position.
The algorithm can be summarised as follows. First, the image is converted into the HSV colour space. Two colour thresholds reveal white and red regions. The red regions are used to mask the white-thresholded image, yielding white regions that are adjacent to red regions. A symmetry check then attempts to group these regions into pairs. A brute-force axis-of-symmetry fit is avoided in favour of a simpler, less processing-intensive comparison of size, aspect ratio and alignment of centres. A bounding box containing each pair is constructed, and its aspect ratio is checked to ensure that similar lights from different parts of the image are not paired. Remaining bounding boxes are marked as detected vehicles.
The layout of the remainder of the paper is as follows. In
section 2 we present the legislative background to rear automotive lighting. This is followed by a review of prior research
in the area of automotive forward collision detection in Section 3, with a particular emphasis on visual cameras and
operation in dark conditions. In Section 4 our experimental data capture setup is explained. The image processing
detection algorithm is outlined in detail in Section 5. Experimental results are outlined in Section 6. We conclude
and consider directions for future work in Section 7.

2. Rear Light Legislation
Worldwide legislation states that rear automotive lights must
be red and placed symmetrically in pairs at the extremities
of the rear of the vehicle. These tail lights must be wired
so that they light up whenever the front headlights are activated, and they must be constantly lit. Legislation also
states that although tail lights and brake lights can be integrated into a single unit, there must be a minimum ratio
between the brightness of the tail lights and the brake lights,
so they can be easily distinguished.
Figure 1: Tail lights cause the camera to saturate in darkness.

There is no legislation governing the shape of rear automotive lights. Due to advances in LED technology, light manufacturers are departing from the conventional shapes of tail and brake lights. Thus it is important to have a detection method that is shape independent.

It has been compulsory for manufacturers to include a horizontal-bar brake light since 1986 in North America and since 1998 in Europe. This is a feature that could possibly be exploited in future systems, as an aid to detection and as a means to differentiate between tail lights and brake lights.

3. Prior Research

As rear lights must be red by law, several systems have utilised colour to aid vehicle detection. Chern et al [3] detect rear lights by colour filtering in RGB space to detect red and white regions. If a white region is surrounded for most of its perimeter by red pixels, it is regarded as a potential rear light. They note that tail lights within 40 metres usually appear as white regions in the image, as they are too bright for the image sensor. Their white filter was effective; however, the red filter allowed through many different colours, resulting in bright objects such as street lamps being let through the filter. The candidates were paired by considering y-values, area and spacing.

The RACCOON system [7] uses two thresholds to find tail light pixels, one for brightness and one for redness. Detected tail lights are then tracked using a simple algorithm. The bearing of the target vehicle is estimated from the horizontal position of the centroid of the tail lights.

A very different approach is taken with the entirely hardware-based solution described in [11]. No software signal processing is required. A signal is taken directly from the red channel of the RGB sensor, then filtered and thresholded in hardware. This method has zero processing overhead, but is not adaptable.

Symmetry is commonly used to filter potential candidates for vehicle detection, as the rear of a vehicle is generally symmetrical during daylight and darkness. Some approaches fit an axis of symmetry, e.g. [5]. While compute intensive, this approach is effective for daylight situations, where the scene is more complex than in darkness. Cucchiara et al [4] detect vehicles under day and night illumination in surveillance video, but approach the two environments with separate techniques. For detection at night, the size and shape of thresholded lights are analysed. An axis of symmetry is determined, and reflections are distinguished from lights by examining the angle of the axis of symmetry. However, this would not be effective from an observation point directly behind and square to the target vehicle. Vertical edges and shadows underneath the vehicle, along with symmetry and tail light blobs, have been used to detect vehicles by day and night in a particle filter framework [2]. The tail light pairing process can be simplified by making several assumptions [10]: the authors assume that the average car is around 170 cm wide and that the width/height aspect ratio of a highway vehicle is approximately 2.0.

Morphology has been used to detect vehicle lights [9]. The assumption is made that the lights will be circular or elliptical in shape. However, the shape of rear lights is not specified in legislation, and automotive designers are experimenting with different shapes as LED lights become more common. A temporal approach can also be used to improve detection rates. Blob trajectories can be grouped by their apparent motion [6], and Kalman filter tracking could be introduced to continue tracking through occlusion [12].

To aid detection, the lane ahead can be detected and a mask applied to reduce the area of the image that is searched for target vehicles [4][3].

4. Data Capture
For the approach presented here, the camera was mounted inside the vehicle, behind the rear-view mirror. The camera module has a resolution of 640x480 pixels and a frame rate of 15 Hz. It is important that the camera is mounted level; if it is not, it will interfere with the symmetry searches in the detection algorithm. Figure 1 shows a typical frame from the captured video.
Data was captured with different cameras in an effort to assess how sensor independent the system is. However, due to the different ways in which different sensors interpret colour, it was found that the colour filter parameters of the red threshold had to be slightly adjusted for optimal operation between sensors. Future work could involve introducing a calibration technique so that camera sensors could be changed and the system made less sensor dependent.

A test plan was created with a view to capturing test data in simple situations. The plan involved video sequences with various permutations of the following options:

• street-lit environment / no lighting;
• tail lights / brake lights;
• indicator lights flashing intermittently;
• different distances;
• approaching target vehicle / target vehicle departing.

Real-world automotive video data was then captured in urban and rural situations. Data was also taken in bad weather conditions, including heavy rain, as detection becomes more challenging when the road surface is wet and rear lights are reflected on it. Algorithm parameters were refined using the experimental test data.

5. Detection Algorithm

In this section we outline the structure of an image processing system to detect rear lights from frames of automotive video. Objects such as street lamps, traffic lights, indicator lamps, reversing lights and oncoming headlights need to be filtered out, while retaining the rear lights of the target vehicle. A flow chart outlining the structure of the system is shown in Figure 2.

Figure 2: Detection procedure.

Colour Filter

It was observed that during darkness tail and brake lights tend to appear as white spots in the video output. This can be attributed to most cameras' automatic exposure adjustment for dark scenes. These white spots appear with a red halo region around the perimeter of the light, where the intensity level of the light falls off.

We exploit these features to detect vehicles from the rear by applying two colour thresholds to the image, searching for white regions and red regions. It was found impractical to implement the red filter in the RGB space, as contiguous RGB values could not represent the desired colour range for the filter to pass. A more natural and practical colour space for this problem is HSV (Hue-Saturation-Value), which is more representative of the way humans observe colour. HSV can be represented as an inverted cone, with hue as the angle, saturation as the radius and value as the height. Hue is a cyclical dimension between 0 and 360 degrees, representing the tint; red is centred on a hue of zero. Saturation is equivalent to shade and takes values between 0 and 100. Value is tone, and also ranges from 0 to 100. The parameters of the HSV colour thresholds are displayed in Table 1 and Table 2.

Table 1: Red HSV filter parameters ("Greater Than" / "Less Than" bounds).

Table 2: White HSV filter parameters ("Greater Than" / "Less Than" bounds).

Tail lights in the white-filtered binary image generally appear as full circles with centres on the same level. In the red-filtered binary image, tail lights can appear as a ring or annulus. This can be observed in Figure 3.
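The colour thresholding step can be sketched as follows. The numeric bounds below are illustrative assumptions only, not the values of Tables 1 and 2, and the function names are invented for this sketch; hue is treated cyclically so that the red band wraps around zero.

```python
import colorsys

def rgb_to_hsv_deg(r, g, b):
    """Convert 8-bit RGB to (hue in degrees, saturation %, value %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

def is_red(h, s, v, hue_band=20.0, min_sat=50.0, min_val=30.0):
    """Red is centred on hue 0, so the accepted band wraps around 360."""
    near_red = h <= hue_band or h >= 360.0 - hue_band
    return near_red and s >= min_sat and v >= min_val

def is_white(h, s, v, max_sat=20.0, min_val=90.0):
    """Saturated light cores bloom to near-white: low saturation, high value."""
    return s <= max_sat and v >= min_val

# A bright red halo pixel and a saturated white core pixel.
halo = rgb_to_hsv_deg(230, 40, 40)
core = rgb_to_hsv_deg(250, 250, 250)
```

Applied per pixel, the two predicates produce the red and white binary images that the rest of the pipeline consumes.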
Image Masking
The binary images resulting from the two colour thresholds
are filtered to remove noise. A binary mask is created from
the bounding box rectangles of the red regions. This mask
is applied to the white thresholded image. White regions
containing any of the remaining pixels are transferred to the
next stage of the process. This results in an image containing only white regions that are adjacent to red regions.
This process is effective at selecting regions from the target
vehicle at different distances.
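The masking stage can be sketched with regions represented as sets of (x, y) pixels rather than binary images; this representation and the names are assumptions of the sketch, but the logic follows the text: a white region survives only if it overlaps the bounding box of some red region.

```python
def bounding_box(region):
    """Axis-aligned bounding box of a pixel set."""
    xs = [x for x, y in region]
    ys = [y for x, y in region]
    return min(xs), min(ys), max(xs), max(ys)

def mask_white_by_red(white_regions, red_regions):
    """Keep white regions containing a pixel inside some red bounding box."""
    boxes = [bounding_box(r) for r in red_regions]
    return [region for region in white_regions
            if any(x0 <= x <= x1 and y0 <= y <= y1
                   for x, y in region
                   for x0, y0, x1, y1 in boxes)]

# A white core inside a red annulus survives; an isolated lamp does not.
red_ring = {(x, y) for x in range(10, 21) for y in range(10, 21)
            if not (13 <= x <= 17 and 13 <= y <= 17)}
white_core = {(x, y) for x in range(13, 18) for y in range(13, 18)}
street_lamp = {(x, y) for x in range(50, 54) for y in range(5, 9)}
```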
Symmetry Check
As we are focusing on close-range detection, we make the assumption that the target vehicle in front will be at the same tilt (pitch, roll and yaw) as the observing vehicle. In other words, we assume that, for close-range application, the road in the short distance in front of the vehicle is tilted at the same angle as the observing vehicle and is therefore relatively level to the vehicle. The rear of the target vehicle will therefore appear square to the observer, and its tail lights will appear symmetrical. We make no assumption about shape, as there are no regulations governing the shape of tail lights, only that they must be placed symmetrically in pairs.

Figure 3: Vehicle rear lights and the result of the red and white colour thresholding operations.
We employ a simple pseudo-symmetry check to avoid processor-intensive brute-force symmetry searches. The binary image resulting from the masking is searched for pairs. The first criterion applied is centroid alignment. All connected objects in the image are labelled, and their centres calculated. The image is then traversed, and objects whose centres align in the y-dimension within a certain number of pixels are marked as potential pairs. The number of pixels by which they must align is proportional to the size of the regions, because the nearer and larger the lights, the greater the error in their alignment.
The two objects of the potential pair are then compared in terms of size. The smaller of the two must have at least seventy percent of the number of pixels of the larger object; if not, the pair is rejected. The concluding stage of the symmetry check is a comparison of the aspect ratios of the light candidates. This prevents objects of different shapes but similar size and position from being paired.
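The pairing criteria above can be sketched as follows. Candidate lights are summarised by centroid, width, height and pixel count; the seventy percent size ratio comes from the text, while the alignment factor and aspect-ratio tolerance are illustrative assumptions.

```python
from collections import namedtuple

Light = namedtuple("Light", "cx cy w h area")

def aligned(a, b, k=0.5):
    """Vertical centre alignment, with tolerance proportional to region size."""
    return abs(a.cy - b.cy) <= k * max(a.h, b.h)

def similar_size(a, b, ratio=0.70):
    """The smaller region must have at least 70% of the larger one's pixels."""
    small, large = sorted((a.area, b.area))
    return small >= ratio * large

def similar_aspect(a, b, tol=0.3):
    """Reject objects of similar size but clearly different shape."""
    return abs(a.w / a.h - b.w / b.h) <= tol

def pair_lights(lights):
    """Mark every pair of candidates passing all three checks."""
    return [(a, b)
            for i, a in enumerate(lights)
            for b in lights[i + 1:]
            if aligned(a, b) and similar_size(a, b) and similar_aspect(a, b)]

left = Light(cx=100, cy=200, w=20, h=10, area=180)
right = Light(cx=220, cy=202, w=20, h=10, area=170)
lamp = Light(cx=160, cy=80, w=12, h=12, area=130)
```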
Aspect Ratio Constraints
As a final check, the width-to-height aspect ratio of the bounding box containing the tail lights,

    bounding box width / bounding box height,

must lie between fixed lower and upper limits. This ensures that similar objects a large distance apart, such as lights from vehicles in different lanes, are filtered out of the pairing process.
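Since the recovered text does not state the numeric limits of this constraint, the bounds in the following sketch are illustrative assumptions: a genuine light pair yields a wide, shallow box within the limits, while lights from vehicles in different lanes yield an implausibly wide one.

```python
def pair_box(a, b):
    """Box covering two lights, each given as (cx, cy, w, h)."""
    xs = [a[0] - a[2] / 2, a[0] + a[2] / 2, b[0] - b[2] / 2, b[0] + b[2] / 2]
    ys = [a[1] - a[3] / 2, a[1] + a[3] / 2, b[1] - b[3] / 2, b[1] + b[3] / 2]
    return min(xs), min(ys), max(xs), max(ys)

def plausible_pair(a, b, lo=2.0, hi=12.0):
    """Accept the pair only if the box's width/height ratio lies in [lo, hi]."""
    x0, y0, x1, y1 = pair_box(a, b)
    return lo <= (x1 - x0) / (y1 - y0) <= hi

tail_lights = ((100, 200, 60, 40), (400, 200, 60, 40))   # one vehicle
far_lanes = ((100, 200, 60, 40), (1000, 200, 60, 40))    # different lanes
```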
This entire process essentially amounts to a symmetry check. If a bounding box above a certain size is detected, the driver is alerted that a vehicle is close.

6. Results

Experimental test video was used to develop the algorithm, as described in Section 4. This section presents some preliminary results drawn from video of a real road environment. In an 11.47-second sample video of 172 frames, the target vehicle was approximately 10 m ahead. The target vehicle was successfully detected in 164 of the 172 frames, a detection rate of 95.3%. Bounding boxes resulting from white regions appeared incorrectly, not identifying the target vehicle, in 5 of the 172 frames, a false positive rate of 2.9%. These results refer to only a subset of the total video data.
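The reported rates follow directly from the frame counts; a quick check of the arithmetic:

```python
frames = 172
detected = 164
false_positives = 5

detection_rate = 100.0 * detected / frames              # 164/172 -> 95.3%
false_positive_rate = 100.0 * false_positives / frames  # 5/172 -> 2.9%
```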
Figure 4: Examples of successful tail light detection
at different distances.
The algorithm has been demonstrated to work well in both well-lit urban areas and dark rural areas. It also works effectively in wet conditions, where the rear lights reflect off the road. Figure 4 shows examples of tail light detection at multiple distances.

7. Conclusions and Future Work
In this paper, we have discussed the need for a system to
avoid or mitigate forward collisions during darkness. A
background to the relevant automotive rear light legislation,
showing characteristics that can be recognised by image processing, was given. We have presented an algorithm for forward collision detection at night using a visual camera. Our
technique filters red and white colours in the HSV colour
space. White regions adjacent to red regions are searched for
symmetrical pairs, and aspect ratio constraints are applied
to resulting bounding boxes. This produces detected rear
target vehicle lights. We have shown promising preliminary
results, and intend to expand and improve the system. An
important next step is to introduce a temporal dimension
and track targeted vehicles in video sequences to improve
the detection rate, and to detect imminent collisions. It is
envisaged to expand the range of test scenarios to make the
system more robust.
Future work could involve experimenting with different exposure times and High Dynamic Range (HDR) technology to achieve different views of rear lights in darkness. The algorithm could also be used to detect brake lights during the day and at night; preliminary work has shown that it can be adapted to detect brake lights during the daytime. Future work could also include detecting the light level at which night-time processing begins and daylight processing stops. Several factors would have to be considered, including a crossover procedure, and possibly a period when both night and day systems continue to function and co-operate, fusing information. It is also envisaged to make the system more sensor independent and to develop an effective calibration technique.

Acknowledgements
The authors would like to thank the Irish Research Council
for Science, Engineering and Technology (IRCSET) Embark
Initiative for supporting this research.
References

[1] Integrated vehicle-based safety systems, first annual report. Technical report, University of Michigan Transportation Research Institute (UMTRI), 2007.
[2] Y.-M. Chan, S.-S. Huang, L.-C. Fu, and P.-Y. Hsiao. Vehicle detection under various lighting conditions by incorporating particle filter. In IEEE Intelligent Transportation Systems Conference, pages 534–539.
[3] M. Y. Chern and P. C. Hou. The lane recognition and vehicle detection at night for a camera-assisted car on highway. In Proceedings ICRA'03 IEEE International Conference on Robotics and Automation, vol. 2, 2003.
[4] R. Cucchiara and M. Piccardi. Vehicle detection under day and night illumination. In Proceedings of the 3rd International ICSC Symposia on Intelligent Industrial Automation and Soft Computing, pages 1–4, June 1999.
[5] Y. Du and N. Papanikolopoulos. Real-time vehicle following through a novel symmetry-based approach. In Proceedings IEEE International Conference on Robotics and Automation, 4:3160–3165, 1997.
[6] C. Julià, A. Sappa, F. Lumbreras, J. Serrat, and A. López. Motion segmentation from feature trajectories with missing data. In 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS, Girona, Spain, June 2007.
[7] R. Sukthankar. RACCOON: A real-time autonomous car chaser operating optimally at night. In Intelligent Vehicles '93 Symposium, pages 37–42, 1993.
[8] Z. Sun, G. Bebis, and R. Miller. On-road vehicle detection: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28:694–711, 2006.
[9] R. Taktak, M. Dufaut, and R. Husson. Vehicle detection at night using image processing and pattern recognition. In Proceedings IEEE International Conference on Image Processing (ICIP-94), vol. 2, 1994.
[10] C.-C. Wang, S.-S. Huang, and L.-C. Fu. Driver assistance system for lane detection and vehicle recognition with night vision. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3530–3535, 2005.
[11] W.-Y. Wang, M.-C. Lu, H. L. Kao, and C.-Y. Chu. Nighttime vehicle distance measuring systems. IEEE Transactions on Circuits and Systems II: Express Briefs, 54:81–85, 2007.
[12] G. Welch and G. Bishop. An introduction to the Kalman filter. Technical Report TR 95-041, University of North Carolina at Chapel Hill, Department of Computer Science, 2003.