Vehicle Tracking and Distance Estimation
Based on Multiple Image Features
Yixin Chen
Technical Center Brighton
Delphi Corporation
Brighton, MI 48116-8326
yixin.chen@delphi.com
Manohar Das
Dept. of Electrical and
Computer Engineering
Oakland University
Rochester, MI 48309-4401
das@oakland.edu
Devendra Bajpai
Dept. of Electrical and
Computer Engineering
Oakland University
Rochester, MI 48309-4401
dbajpai@oakland.edu
Abstract
In this paper, we introduce a vehicle tracking
algorithm based on multiple image features to detect
and track the front car in a collision avoidance system
(CAS) application. The algorithm uses multiple image
features, such as corners, edges, gradients, and the
vehicle symmetry property, together with an image
matching technique, to robustly detect the vehicle
bottom corners and edges and to estimate the vehicle
width. Based on the estimated vehicle width, a few
pre-selected edge templates are matched against the
image edges, which allows us to estimate the vehicle
height and the distance between the front vehicle and
the host vehicle. Experimental results based on real-world
video images are presented; they indicate that the
algorithm is capable of identifying a front vehicle,
tracking it, and estimating its distance from the host
vehicle.
1. Introduction
The past decade has seen the emergence of many
promising technologies that enhance the driving safety
of a vehicle [1]. One such technology is a collision
avoidance system (CAS), which detects surrounding
objects, estimates their distances from the host vehicle,
and predicts the time-to-collision. For example, a radar
sensor has been used to measure the distance between
the front and host vehicles in an adaptive cruise
control (ACC) system [2] to improve driving comfort
and avoid collisions. A video camera is another
typical sensor used to detect and track the front
vehicle in CAS applications.
A vehicle tracking system for rear-end CAS
applications should be able to detect the front vehicle
and measure the distance between the front and host
vehicles in real time. In addition, the time-to-collision
(TTC) can be estimated from the distance
measurements, so that a warning is given to the driver
about a potential collision when the TTC falls below
a threshold. Detecting the front vehicle in images
captured from a moving host vehicle in a rear-end CAS
system poses many challenges [3], [4]. Some well-known
techniques for motion detection, such as
background subtraction and optical flow measurement,
are not well suited to a CAS system, because the
image background changes constantly and the front
vehicles (for a rear-end CAS system) do not usually
exhibit optical flow patterns very different from those of
extraneous objects, such as roadside trees and signs.
A corner-feature-based technique to track and predict
the positions of the front vehicle is proposed in [5], but
using only corners does not allow us to identify the
vehicle structure (width, height, centroid, etc.) or to
estimate the distance between the front vehicle and the
host vehicle.
In this paper, we present a new algorithm that uses
monochrome video images to detect and track the front
vehicle from a moving host vehicle, and to estimate the
distance between the front vehicle and the host vehicle
as well. The algorithm uses multiple image features,
such as corners, edges, gradients, and the vehicle
symmetry property, together with an image matching
technique, to robustly detect the vehicle bottom corners
and edges and to estimate the vehicle width. Then,
based on the vehicle geometry and the optical
perspective principle, a formula is derived to estimate
the vehicle height and the distance between the front
and host vehicles. A detailed explanation of the
algorithm and a demonstration of its performance are
provided in the following sections.
2. Vehicle Detection and Tracking
Algorithm
Fig. 1 shows a typical image of a front car, which is
the object-of-interest in a rear-end CAS system.
Figure 1: A typical front car image
In general, a vehicle exhibits very strong
geometrical features, such as corners, edges, and
symmetry. It is also easy to see from Fig. 1 that the
bottom area of a front vehicle is less likely to be
occluded by other vehicles or roadside objects (such as
trees, traffic signs, etc.), because there should always
be some open space between the front vehicle and the
host vehicle under typical driving conditions.
Therefore, the geometrical features in the bottom area
of a front vehicle can be used to detect the vehicle.
The situation is somewhat different for the top area
of a vehicle. From Fig. 1, it is easy to see that the top
area of a vehicle is very likely to be smudged by
images of extraneous objects, such as other vehicles
moving ahead of it, roadside signs, background trees,
bridges, etc. Therefore, we must find another way to
locate the vehicle top edge so that a bounding box can
be obtained.
2.1 Corner and Edge Feature Extraction
We use the Harris corner detector [9] to calculate the
corner degree, $C(x,y)$, of an image pixel located
at $(x,y)$ as shown below:

$$C(x,y) = \frac{\langle I_x^2\rangle\langle I_y^2\rangle - \langle I_x I_y\rangle^2}{\langle I_x^2\rangle + \langle I_y^2\rangle} \qquad (1)$$

where $I_x$, $I_y$ denote the image gradients along the x and y
directions, respectively, and $\langle\cdot\rangle$ denotes an image
smoothing operation. The bigger the value of $C$, the
more likely the pixel is a true corner.
The Prewitt gradient operators are used to calculate
the image gradients $I_x$, $I_y$. Also, a Gaussian
smoothing operator is applied to the noise-sensitive
first-order derivatives to improve the robustness of the
corner detection algorithm. The (5x5) Gaussian
smoothing kernel used in our experiments is given by:

$$\frac{1}{273}\begin{bmatrix} 1 & 4 & 7 & 4 & 1 \\ 4 & 16 & 26 & 16 & 4 \\ 7 & 26 & 41 & 26 & 7 \\ 4 & 16 & 26 & 16 & 4 \\ 1 & 4 & 7 & 4 & 1 \end{bmatrix} \qquad (2)$$
The above mask is derived from the assumption that the
joint pdf of the gradient at $(x,y)$ is given by:

$$G(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}} \qquad (3)$$

where the standard deviation, $\sigma$, is assumed to be equal
to 1.
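As a concrete illustration, the mask of Eq. (2) can be applied as the smoothing operator $\langle\cdot\rangle$ of Eq. (1). The following is a minimal sketch assuming NumPy and SciPy are available; the helper name `smooth` is ours, not the paper's:

```python
import numpy as np
from scipy.ndimage import convolve

# The integer smoothing mask of Eq. (2): a common integer approximation
# of the sigma = 1 Gaussian of Eq. (3), normalized by its sum (273).
GAUSS_5x5 = np.array([[1,  4,  7,  4, 1],
                      [4, 16, 26, 16, 4],
                      [7, 26, 41, 26, 7],
                      [4, 16, 26, 16, 4],
                      [1,  4,  7,  4, 1]], dtype=float) / 273.0

def smooth(img):
    """The <.> operator of Eq. (1): Gaussian smoothing of an image."""
    return convolve(np.asarray(img, dtype=float), GAUSS_5x5, mode="nearest")
```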
The image first-order derivatives, $I_x$, $I_y$, used in the
corner detection algorithm are also used for edge
detection. Although a number of sophisticated edge
detection algorithms exist in the literature, we used a
very simple one that requires minimal computational
overhead. It can be summarized as follows:

For a pixel at $(x,y)$, if either $|I_x|$ or $|I_y|$ is
greater than a threshold, $T_e$, the pixel is declared
an edge point; otherwise, the pixel is declared a
non-edge point.

Similarly, a simple way to detect corners is to
compare $C(x,y)$ with a threshold $T_c$, i.e.,
if $C(x,y) > T_c$, the pixel $(x,y)$ is classified as a
corner; otherwise, it is classified as a non-corner.
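Putting Section 2.1 together, the following sketch computes the corner and edge maps in one pass. It assumes SciPy's `prewitt` and `convolve`; the function name and the small `eps` guard against division by zero are ours, and the thresholds `Tc`, `Te` are supplied by the caller (e.g., from the GMM method of Section 2.2):

```python
import numpy as np
from scipy.ndimage import prewitt, convolve

# Integer approximation of the sigma = 1 Gaussian mask, Eq. (2).
G = np.array([[1, 4, 7, 4, 1], [4, 16, 26, 16, 4], [7, 26, 41, 26, 7],
              [4, 16, 26, 16, 4], [1, 4, 7, 4, 1]], dtype=float) / 273.0

def corner_and_edge_maps(img, Tc, Te):
    img = np.asarray(img, dtype=float)
    Ix = prewitt(img, axis=1)          # Prewitt gradient along x (columns)
    Iy = prewitt(img, axis=0)          # Prewitt gradient along y (rows)
    Sxx = convolve(Ix * Ix, G)         # <Ix^2>
    Syy = convolve(Iy * Iy, G)         # <Iy^2>
    Sxy = convolve(Ix * Iy, G)         # <Ix Iy>
    eps = 1e-12                        # guard against division by zero
    C = (Sxx * Syy - Sxy * Sxy) / (Sxx + Syy + eps)   # corner degree, Eq. (1)
    corners = C > Tc                   # corner test of Section 2.1
    edges = (np.abs(Ix) > Te) | (np.abs(Iy) > Te)     # edge test
    return corners, edges
```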
2.2 Gaussian Mixture Model (GMM) Based
Threshold Selection For Corner and Edge
Detection
Obviously, the selection of the thresholds $T_c$ and $T_e$
affects the corner and edge detection performance.
A fixed threshold, $T_c$ or $T_e$, can result in poor
corner or edge detection when the image lighting
conditions change. Therefore, an adaptive
threshold is desired.
Here, we consider edge detection based on $|I_x|$ as
an example to illustrate how an adaptive threshold is
found using the GMM method.
Assume that the values of $|I_x|$ can be categorized
into two classes, one for the edge pixels with larger
$|I_x|$ values and the other for the non-edge pixels with
smaller $|I_x|$ values. We assume further that the
distribution of $|I_x|$ obeys a two-class Gaussian
mixture model as shown below:

$$p(x|\theta) = P(\omega_1)p(x|\omega_1) + P(\omega_2)p(x|\omega_2) = P(\omega_1)N(\mu_1,\sigma_1^2) + P(\omega_2)N(\mu_2,\sigma_2^2) \qquad (4)$$

where $x$ is used as an abbreviation of $|I_x|$, $\mu_1$ and
$\mu_2$ denote the means of the two categories $\omega_1$ and $\omega_2$,
and $P(\omega_1)$ and $P(\omega_2)$ are the prior probabilities of
categories $\omega_1$ and $\omega_2$. Assuming $\sigma_1^2 = \sigma_2^2 = \sigma^2$, it
can be shown [12] that an optimal 2-category classifier
is given by the following classification rule:
If $x > \dfrac{\mu_1+\mu_2}{2} + \dfrac{\sigma^2}{\mu_1-\mu_2}\ln\!\left[\dfrac{P(\omega_2)}{P(\omega_1)}\right]$, $x$ is classified as $\omega_1$;

If $x < \dfrac{\mu_1+\mu_2}{2} + \dfrac{\sigma^2}{\mu_1-\mu_2}\ln\!\left[\dfrac{P(\omega_2)}{P(\omega_1)}\right]$, $x$ is classified as $\omega_2$.
To use the two-category classifier above, the
distribution parameters, $\mu_1$, $P(\omega_1)$, $\mu_2$, $P(\omega_2)$, and
$\sigma^2$, should be known a priori. Unfortunately, these are
usually unknown to start with.
The well known maximum likelihood method can
be used to estimate the distribution parameters of a
GMM using the following iterative formulas
[13]:
$$\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta}) = \frac{|\hat{\boldsymbol\Sigma}_i|^{-1/2}\exp\!\left[-\tfrac{1}{2}(\mathbf{x}_k-\hat{\boldsymbol\mu}_i)^t\hat{\boldsymbol\Sigma}_i^{-1}(\mathbf{x}_k-\hat{\boldsymbol\mu}_i)\right]\hat P(\omega_i)}{\displaystyle\sum_{j=1}^{c}|\hat{\boldsymbol\Sigma}_j|^{-1/2}\exp\!\left[-\tfrac{1}{2}(\mathbf{x}_k-\hat{\boldsymbol\mu}_j)^t\hat{\boldsymbol\Sigma}_j^{-1}(\mathbf{x}_k-\hat{\boldsymbol\mu}_j)\right]\hat P(\omega_j)} \qquad (5)$$

$$\hat P(\omega_i)(j+1) = \frac{1}{n}\sum_{k=1}^{n}\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta}(j)) \qquad (6)$$

$$\hat{\boldsymbol\mu}_i(j+1) = \frac{\sum_{k=1}^{n}\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta}(j))\,\mathbf{x}_k}{\sum_{k=1}^{n}\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta}(j))} \qquad (7)$$

$$\hat{\boldsymbol\Sigma}_i(j+1) = \frac{\sum_{k=1}^{n}\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta}(j))\,(\mathbf{x}_k-\hat{\boldsymbol\mu}_i(j))(\mathbf{x}_k-\hat{\boldsymbol\mu}_i(j))^t}{\sum_{k=1}^{n}\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta}(j))} \qquad (8)$$
where $c$ denotes the number of categories, which
equals 2 in this example, and $i = 1, \dots, c$. Also, $n$
denotes the number of pixels included in the
calculations, and $k = 1, \dots, n$; $\mathbf{x}_k$ denotes the gradient
value $|I_x|$ in this example, and $\hat{\boldsymbol\Sigma} = \hat\sigma^2$.
In the above equations, $\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta})$ is first
calculated using some assumed initial values of $\mu_1$,
$P(\omega_1)$, $\mu_2$, $P(\omega_2)$, and $\sigma^2$ (for instance, select $\mu_1$,
$\mu_2$ as 50% and 90% of the maximum value of $|I_x|$,
$P(\omega_1) = 0.8$, $P(\omega_2) = 0.2$, and $\hat{\boldsymbol\Sigma}$ as the
identity matrix). Then, based on the calculated
$\hat P(\omega_i\,|\,\mathbf{x}_k,\hat{\boldsymbol\theta})$, the GMM distribution
parameters, $\mu_1$, $P(\omega_1)$, $\mu_2$, $P(\omega_2)$, and $\sigma^2$, are
updated using Equations (6)-(8). Next, the updated
values of $\mu_1$, $P(\omega_1)$, $\mu_2$, $P(\omega_2)$, and $\sigma^2$ are
substituted into Equation (5) for the next round of
calculations. The above process is stopped when the
changes in the updated distribution parameters are very
small, or when the number of iterations exceeds a pre-selected
value, such as 20.
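For the scalar case used here ($x = |I_x|$, two classes), the whole procedure reduces to a short routine. The following is a minimal sketch, not the authors' implementation: initialization follows the values suggested above, while the convergence test, the variance floor, and the pooled-variance step before applying the two-category decision rule are our own choices:

```python
import numpy as np

def gmm_edge_threshold(grad_mag, max_iter=20, tol=1e-4):
    """Fit a two-class GMM to |Ix| values by EM (Eqs. (5)-(8)) and return
    the decision threshold of the two-category classifier."""
    x = np.asarray(grad_mag, dtype=float).ravel()
    n = x.size
    mu = np.array([0.5, 0.9]) * x.max()   # initial means: 50% / 90% of max
    P = np.array([0.8, 0.2])              # initial priors P(w1), P(w2)
    var = np.array([1.0, 1.0])            # initial variances
    for _ in range(max_iter):
        # E-step, Eq. (5): posterior probability of each class per pixel
        lik = (P / np.sqrt(2 * np.pi * var))[:, None] * \
              np.exp(-0.5 * (x[None, :] - mu[:, None]) ** 2 / var[:, None])
        post = lik / (lik.sum(axis=0, keepdims=True) + 1e-300)
        # M-step, Eqs. (6)-(8)
        Nk = post.sum(axis=1) + 1e-300
        new_mu = (post @ x) / Nk
        var = np.maximum((post @ x**2) / Nk - new_mu**2, 1e-6)
        P = Nk / n
        if np.abs(new_mu - mu).max() < tol:
            mu = new_mu
            break
        mu = new_mu
    s2 = float(P @ var)                   # pooled (shared) variance
    hi, lo = (0, 1) if mu[0] > mu[1] else (1, 0)
    # Threshold of the equal-variance two-category classification rule
    return (mu[hi] + mu[lo]) / 2 + s2 / (mu[hi] - mu[lo]) * np.log(P[lo] / P[hi])
```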
Figs. 2(a) and 2(b) show the image corner and edge
detection results based on the thresholds calculated
using the GMM method.
(a) Corner Detection (b) Edge Detection
Figure 2: Examples of image corner and edge
detection
2.3 Detection of the left-bottom corner of the vehicle
As mentioned earlier, the geometrical features in the
bottom area of a front vehicle can be used to detect the
vehicle. Specifically, based on the corner and edge
points obtained in the last step, the algorithm finds the
left-bottom corner of the vehicle using the following
two steps:
• In the region-of-interest (the front area of the host
vehicle), align an "L-shape" edge template (height:
M pixels, width: N pixels) at every corner point,
and find its cross-correlation with the edge image.
The left-bottom corner is identified by
looking for the location where the above correlation
has a maximum value.
• After the correlation calculations, we validate whether
the point with the maximum correlation value is a
true left-bottom corner. The two criteria used to
select the true bottom corner are: i) it has the
maximum correlation, and ii) the number of edge points
enclosed by a sub-region below the point should be
less than a threshold, T (this is based on the fact that
there should be only road surface below the vehicle
bottom line).
The above idea can be expressed in the form of an
algorithm as follows:
1) In the region-of-interest (the front area of the host
vehicle), for every corner point $(i,j)$, calculate:

$$\mathrm{Sum}(i,j) = \sum_{x=i-M+1}^{i} \mathrm{Edge}(x,j) + \sum_{y=j}^{j+N} \mathrm{Edge}(i,y)$$

where $M$, $N$ are the height and width of the starting
edge template.

2) Find the initial matching point: $(x_0,y_0) = \arg\max_{(i,j)} \mathrm{Sum}(i,j)$.

3) If $\displaystyle\sum_{i=x_0}^{x_0+P}\sum_{j=y_0}^{y_0+Q} \mathrm{Edge}(i,j) < T$, go to step 5);
else set $\mathrm{Sum}(x_0,y_0) = 0$, where $P$, $Q$ give the size of a sub-region,
and $T$ is a threshold.

4) $(x_0,y_0) = \arg\max_{(i,j)} \mathrm{Sum}(i,j)$. Go to step 3).

5) End.
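A minimal Python rendering of these steps follows. The names and the reject-by-deletion bookkeeping are illustrative; the paper instead resets $\mathrm{Sum}(x_0,y_0)$ to zero:

```python
import numpy as np

# Edge is a 0/1 edge map, corners an iterable of (i, j) corner points;
# M, N are the L-template height and width, (P, Q) the validation
# sub-region size, and T the road-clearance threshold.
def find_left_bottom_corner(Edge, corners, M, N, P, Q, T):
    H, W = Edge.shape
    score = {}
    for i, j in corners:
        if i - M + 1 < 0 or j + N >= W:
            continue                                    # template must fit
        score[(i, j)] = (Edge[i - M + 1:i + 1, j].sum()   # vertical leg
                         + Edge[i, j:j + N + 1].sum())    # horizontal leg
    while score:
        x0, y0 = max(score, key=score.get)              # steps 2) and 4)
        if (x0 + P < H and y0 + Q < W
                and Edge[x0:x0 + P + 1, y0:y0 + Q + 1].sum() < T):
            return x0, y0          # step 3): only road below, accept
        del score[(x0, y0)]        # reject and try the next-best corner
    return None
```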
The tracking window after detecting the left-bottom
corner is shown in Fig. 3, where the left-bottom corner
is aligned with the tracking window.
Figure 3: Tracking left-bottom corner
2.4 Detection of the vehicle right edge
After aligning the tracking window with the vehicle
left-bottom corner, a column direction projection is
performed on a sub-region of the edge image
consisting of R rows from the bottom. The choice of R
should be based on the maximum distance up to which
we wish to track a front vehicle. This will allow us to
compute the minimum number of rows, Hmin, that
separate the top edge of a front vehicle from its bottom.
In order to avoid interference of the column projection
from the edge points in the upper part of the tracking
window, R can be chosen to be about ½ of Hmin.
Based on the above considerations, in this set of
experiments, we chose R to be equal to 15.
The above column projection can be described by
Eq. (9), where $R = 15$ in our experiment:

$$\mathrm{Proj}(j) = \sum_{i=x_0-R}^{x_0} \mathrm{Edge}(i,j), \qquad y_0 \le j \le y_0 + N \qquad (9)$$
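In code, Eq. (9) is a one-line reduction; a sketch with our own argument names:

```python
import numpy as np

# Per-column edge count over the bottom R rows of the tracking window
# anchored at the detected corner (x0, y0); N is the window width.
def column_projection(Edge, x0, y0, N, R=15):
    return Edge[x0 - R:x0 + 1, y0:y0 + N + 1].sum(axis=0)   # Proj(j)
```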
Fig. 4 shows the column projection plots, where the
top one is the plot inside the tracking window only.
Figure 4: The edge column projection $\mathrm{Proj}(j)$
In Fig. 4, the first peak and the second peak (around
column 190) correspond to the vehicle left and right
edges, respectively. In addition, there is always a valley
after the vehicle right edge (like the one in the range of
columns 200 ~ 215), because there should always be
some clear space between the front vehicle and the
objects to its right. The procedure to find the vehicle
right edge consists of two steps:
• Coarse search: find the vehicle right edge by
finding the valley starting point. Denote the valley
starting point as column $y_1$;
• Fine search: based on the vehicle symmetry
property, use the gradient data around the vehicle
left edge area to find the vehicle right edge using
the principle of matched filtering.
Figure 5: Vehicle right edge fine search illustration. The left-edge gradient template $f_l(x,y)$ spans rows $x_0-m$ to $x_0$ at the left edge, and the search region $f_r(x,y)$, of width $(y_1-y_0)/2$, lies to its right.
In Fig. 5, $f_l(x,y)$ is the horizontal gradient data
template around the vehicle left edge area, namely:

$$f_l(x,y) = |I_x(x_0 - m + x - 1,\; y_0 + y - 1)|, \quad 1 \le x \le m+1, \; 1 \le y \le n+1 \qquad (10)$$

and $f_r(x,y)$ is the horizontal gradient data around
the vehicle right edge area, namely:

$$f_r(x,y) = \left|I_x\!\left(x_0 - m + x - 1,\; y_0 + \frac{y_1 - y_0}{2} + y - 1\right)\right|, \quad 1 \le x \le m+1, \; 1 \le y \le \frac{y_1 - y_0}{2} + 1 \qquad (11)$$

The width of $f_r(x,y)$ is selected as half of the
tracking window width.
Based on the vehicle symmetry property, if we flip
$f_l(x,y)$ in the horizontal direction and name the
new data $f_l^-(x,y)$, then:

$$f_l^-(x,y) = |I_x(x_0 - m + x - 1,\; y_0 + n - y + 1)|, \quad 1 \le x \le m+1, \; 1 \le y \le n+1 \qquad (12)$$

The correlation between $f_l^-(x,y)$ and $f_r(x,y)$
is given by Eq. (13); the correlation is expected to reach
its maximum at the vehicle right edge:

$$C(s,t) = \sum_{x=1}^{m+1}\sum_{y=1}^{n+1} f_l^-(x,y)\, f_r(x+s,\, y+t) \qquad (13)$$

where $s = 0$ because $f_l(x,y)$ and $f_r(x,y)$ have the same
height, and $t = 0, 1, 2, \dots, \frac{y_1 - y_0}{2}$.
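A sketch of this matched-filter search, assuming `fl` and `fr` are the $|I_x|$ patches of Eqs. (10) and (11); the loop bound keeps the shifted template inside `fr`, a detail the paper leaves implicit:

```python
import numpy as np

# Flip the left-edge gradient template horizontally (Eq. (12)) and slide
# it across the right-half search region, keeping the shift t with the
# maximum correlation C(0, t) of Eq. (13).
def right_edge_offset(fl, fr):
    fl_flipped = fl[:, ::-1]                  # Eq. (12): horizontal flip
    n = fl.shape[1]                           # template width
    best_t, best_c = 0, -np.inf
    for t in range(fr.shape[1] - n + 1):      # s = 0: same height
        c = np.sum(fl_flipped * fr[:, t:t + n])   # C(0, t)
        if c > best_c:
            best_t, best_c = t, c
    return best_t
```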
The tracking window after finding the vehicle
left/right edges is shown in Fig. 6.
Figure 6: The tracking window after finding the
vehicle left/right edges
2.5 Estimation of vehicle height and distance
As mentioned earlier, the image of the top area of
the front vehicle is more likely to be smudged by
images of extraneous objects. This makes the task of
finding the top edge of the vehicle a challenging one.
Here we present a new method of finding the top edge
based on the well known principle of perspective
transformation. The detection of the top edge of the
vehicle also allows us to estimate the distance between
the front and host vehicles.
In general, a vehicle has a fixed ratio of width to
height ($R_{W/H}$). The vehicle image size changes as
the distance changes, but $R_{W/H}$ remains the same.
This fixed-ratio property can be used to find the
approximate vehicle height if its width is known
already. In addition, different types of vehicles, such as
sedans, mini-vans or SUVs, and commercial trucks,
have different values of $R_{W/H}$. For example, $R_{W/H}$ is
about 1 for a mini-van or SUV, and 1.2 ~ 1.3 for a
sedan.
Figure 7: Perspective transformation of the imaging system (object size $W$ at distance $Z$ from the focal point, image size $w$, focal length $f$)
For the optical imaging system shown in Fig. 7, the
relationship between focal length $f$, object size $W$,
image size $w$, and object distance $Z$ is given by:

$$w = \frac{fW}{Z - f} \approx f\frac{W}{Z} \qquad (14)$$

In Eq. (14), $w$ has the same length unit (e.g., millimeter
or inch) as $W$. If we want the value of $w$ in terms
of number of pixels, Eq. (14) needs to be modified
slightly as:

$$w = f\frac{W}{Z}k_w \qquad (15)$$

where $k_w$ represents the number of horizontal pixels
per unit length. Similarly, a vertical line of height $H$
is transformed to $h$ as:

$$h = f\frac{H}{Z}k_h \qquad (16)$$

where $k_h$ represents the number of vertical pixels per
unit length.
Suppose the image of a sedan car at a distance
$Z_0 = 10$ meters has height $h_0 = 47$ pixels and width $w_0 = 62$
pixels. The height and width for any other distance
$Z$ can then be calculated as follows:

$$w(Z) = f\frac{W}{Z}k_w = w_0\frac{Z_0}{Z}, \qquad h(Z) = f\frac{H}{Z}k_h = h_0\frac{Z_0}{Z} \qquad (17)$$
Let the vehicle width found in step 2.4 be denoted
$w_1$. By substituting $w_1$ into Equation (17), we have:

$$Z = \frac{w_0 Z_0}{w_1}, \qquad h = h_0\frac{Z_0}{Z} \qquad (18)$$

which gives estimates of the height and distance of the
vehicle.
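For instance, with the sedan reference values quoted above, Eq. (18) becomes the following hypothetical helper; the defaults are the example numbers from the text, not calibrated constants:

```python
# Recover distance and expected image height from the measured image
# width w1, given reference measurements (Z0, w0, h0) for one vehicle
# type (here the sedan example: Z0 = 10 m, w0 = 62 px, h0 = 47 px).
def distance_and_height(w1, Z0=10.0, w0=62.0, h0=47.0):
    Z = Z0 * w0 / w1          # distance to the front vehicle, meters
    h = h0 * Z0 / Z           # expected image height at distance Z, pixels
    return Z, h
```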
Given the same value of $Z_0$, different types of
vehicles will have different image widths $w_0$.
Therefore, Eq. (18) gives one set of estimates of $Z$
and $h$ for each type of vehicle. A template matching
technique is used to decide the type of vehicle, as
follows.
Basically, for each possible vehicle width/height
pair, we can generate a vehicle edge template. For
instance, the vehicle edge template for $h = 5$, $w = 7$ is
shown below:

$$\mathrm{VehicleTemplate}(5,7) = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}$$
First we calculate the number of overlapping pixels
between a vehicle edge template and the edge image of
the front vehicle, and repeat this for the different vehicle
templates. We then find the template that yields
the maximum number of overlapping pixels. This
allows us to find the real vehicle height as well as the
vehicle type (sedan, mini-van, or commercial truck).
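A sketch of this scoring step; the helper names are ours, and the candidate $(h, w)$ pairs would come from Eq. (18), one per vehicle type:

```python
import numpy as np

# Hollow rectangular edge templates of candidate sizes (h, w) are
# overlaid on the edge image above the detected bottom-left corner
# (x0, y0), and the overlapping edge pixels are counted.
def vehicle_template(h, w):
    t = np.zeros((h, w), dtype=int)
    t[[0, -1], :] = 1                  # top and bottom edges
    t[:, [0, -1]] = 1                  # left and right edges
    return t

def best_template(Edge, x0, y0, candidates):
    best, best_score = None, -1
    for h, w in candidates:
        if x0 - h + 1 < 0:             # template must fit in the image
            continue
        patch = Edge[x0 - h + 1:x0 + 1, y0:y0 + w]
        if patch.shape != (h, w):
            continue
        score = int((patch * vehicle_template(h, w)).sum())
        if score > best_score:
            best, best_score = (h, w), score
    return best                         # (h, w) of the best-matching type
```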
Fig. 8 shows the final tracking window after finding
the vehicle height.
Figure 8: Final Tracking Window
3. Experimental Results
The algorithm discussed above was tested on a
variety of video sequences captured from a host
vehicle, and in each case, the performance was found
to be satisfactory. MATLAB software was used to
perform these simulation studies. Fig. 9 shows the
tracking windows for different vehicles.
Figure 9: Tracking windows for different
vehicles
In Fig. 10, the image sequences were recorded
while the host vehicle was approaching a fully stopped
front car. Figs. 10(a) and 10(b) show the tracking
windows in the 1st and 50th frames, respectively.
Figure 11: Tracking windows for partially occluded vehicles
Figures 11(a) and 11(b) show the tracking windows for
partially occluded front vehicles.
The estimated front car width and height, and the
distance from the host vehicle, are shown in Fig. 12.
The estimated distance changed from around 20 meters
in the first frame to around 10 meters in the 50th frame.
After applying a Kalman filter to the estimated distance,
the time-to-collision (TTC) can be estimated from the
resulting estimate of the relative velocity between the front
and host vehicles.
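As an illustration of this last step, a finite-difference stand-in for the filtered velocity estimate (our simplification, not the paper's Kalman filter):

```python
# TTC from two successive filtered distance estimates d_prev, d_curr
# taken dt seconds apart; relative velocity is a finite difference.
def time_to_collision(d_prev, d_curr, dt):
    v_rel = (d_prev - d_curr) / dt     # closing speed, m/s (positive if closing)
    return d_curr / v_rel if v_rel > 0 else float("inf")
```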
Figure 10: The tracking windows for the 1st and 50th frames
Figure 12: The vehicle tracking and distance measurement results — estimated width and height (pixels) and distance (meters) plotted against frame number
4. Conclusion
This paper introduces a new algorithm to track and
identify a front vehicle from a host vehicle using
monochrome video images. The algorithm also
estimates the distance between the front and host
vehicles for typical rear-end CAS applications. The
proposed algorithm works well under normal driving
conditions. A future extension of the proposed method
will address using this method to track front-side
vehicles as well, so that a cut-in maneuver can be
predicted ahead of time. In addition, the robustness of
the algorithm in the presence of occlusion, road curvature,
and severe driving conditions will be addressed too.
References
[1] L. Li, J. Song, F.-Y. Wang, et al., "New Development and Research Trends for Intelligent Vehicles", IEEE Intelligent Systems, 2005, pp. 10-14.
[2] S. Rohr, R. Lind, R. Myers, W. Bauson, W. Kosiak, and H. Yen, "An Integrated Approach to Automotive Safety Systems", SAE Conference, 2000.
[3] D. Koller, K. Daniilidis, and H.-H. Nagel, "Model-Based Object Tracking in Monocular Image Sequences of Road-Traffic Scenes", International Journal of Computer Vision, 10:257-281, 1993.
[4] K. Daniilidis, Ch. Krauss, M. Hansen, and G. Sommer, "Real-Time Tracking of Moving Objects with an Active Camera", Journal of Real-Time Imaging, 4:3-20, 1998.
[5] J. M. Roberts, Attentive Visual Tracking and Trajectory Estimation for Dynamic Scene Segmentation, Doctoral Dissertation, Univ. of Southampton.
[6] T. Xiong and C. Debrunner, "Stochastic Car Tracking with Line- and Color-Based Features", IEEE Trans. Intelligent Transportation Systems, 5(4):324-328, Dec. 2004.
[7] P. R. Beaudet, "Rotational Invariant Image Operators", 4th International Conference on Pattern Recognition, Tokyo, pp. 579-583, 1978.
[8] L. Kitchen and A. Rosenfeld, "Gray-Level Corner Detection", Pattern Recognition Letters, 1:95-102, December 1982.
[9] C. G. Harris and M. J. Stephens, "A Combined Corner and Edge Detector", Proceedings of the Fourth Alvey Vision Conference, Manchester, pp. 147-151, 1988.
[10] J. A. Noble, "Finding Corners", Image and Vision Computing, 6(2):121-128, 1988.
[11] J. Canny, "A Computational Approach to Edge Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679-698, 1986.
[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd edition, Addison-Wesley Longman Publishing Co., Inc., 1992.
[13] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd edition, John Wiley & Sons, Inc., 2001.
More Related Content

What's hot

The Geometric Characteristics of the Linear Features in Close Range Photogram...
The Geometric Characteristics of the Linear Features in Close Range Photogram...The Geometric Characteristics of the Linear Features in Close Range Photogram...
The Geometric Characteristics of the Linear Features in Close Range Photogram...IJERD Editor
 
Detection of Seam Carving in Uncompressed Images using eXtreme Gradient Boosting
Detection of Seam Carving in Uncompressed Images using eXtreme Gradient BoostingDetection of Seam Carving in Uncompressed Images using eXtreme Gradient Boosting
Detection of Seam Carving in Uncompressed Images using eXtreme Gradient BoostingIJCSIS Research Publications
 
My Amazing CFD Coursework - Competitiveness of the Ferrari F2002
My Amazing CFD Coursework - Competitiveness of the Ferrari F2002My Amazing CFD Coursework - Competitiveness of the Ferrari F2002
My Amazing CFD Coursework - Competitiveness of the Ferrari F2002Nadezda Avanessova
 
LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE
 LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE
LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGENexgen Technology
 
Iris Localization - a Biometric Approach Referring Daugman's Algorithm
Iris Localization - a Biometric Approach Referring Daugman's AlgorithmIris Localization - a Biometric Approach Referring Daugman's Algorithm
Iris Localization - a Biometric Approach Referring Daugman's AlgorithmEditor IJCATR
 
Edge Detection using Hough Transform
Edge Detection using Hough TransformEdge Detection using Hough Transform
Edge Detection using Hough TransformMrunal Selokar
 
Autonomous Parallel Parking Methodology for Ackerman Configured Vehicles
Autonomous Parallel Parking Methodology for Ackerman Configured VehiclesAutonomous Parallel Parking Methodology for Ackerman Configured Vehicles
Autonomous Parallel Parking Methodology for Ackerman Configured VehiclesIDES Editor
 
Localization of free 3 d surfaces by the mean of photometric
Localization of free 3 d surfaces by the mean of photometricLocalization of free 3 d surfaces by the mean of photometric
Localization of free 3 d surfaces by the mean of photometricIAEME Publication
 
A survey on road extraction from color image using vectorization
A survey on road extraction from color image using vectorizationA survey on road extraction from color image using vectorization
A survey on road extraction from color image using vectorizationeSAT Journals
 
3 d graphics with opengl part 2
3 d graphics with opengl  part 23 d graphics with opengl  part 2
3 d graphics with opengl part 2Sardar Alam
 
Linear Feature Separation From Topographic Maps Using Energy Density and The ...
Linear Feature Separation From Topographic Maps Using Energy Density and The ...Linear Feature Separation From Topographic Maps Using Energy Density and The ...
Linear Feature Separation From Topographic Maps Using Energy Density and The ...Rojith Thomas
 
A survey on road extraction from color image using
A survey on road extraction from color image usingA survey on road extraction from color image using
A survey on road extraction from color image usingeSAT Publishing House
 
Fuzzy c-means clustering for image segmentation
Fuzzy c-means  clustering for image segmentationFuzzy c-means  clustering for image segmentation
Fuzzy c-means clustering for image segmentationDharmesh Patel
 
Hybrid autonomousnavigation p_limaye-et-al_3pgabstract
Hybrid autonomousnavigation p_limaye-et-al_3pgabstractHybrid autonomousnavigation p_limaye-et-al_3pgabstract
Hybrid autonomousnavigation p_limaye-et-al_3pgabstractPushkar Limaye
 
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013Patrick Raymond
 

What's hot (20)

The Geometric Characteristics of the Linear Features in Close Range Photogram...
The Geometric Characteristics of the Linear Features in Close Range Photogram...The Geometric Characteristics of the Linear Features in Close Range Photogram...
The Geometric Characteristics of the Linear Features in Close Range Photogram...
 
Detection of Seam Carving in Uncompressed Images using eXtreme Gradient Boosting
Detection of Seam Carving in Uncompressed Images using eXtreme Gradient BoostingDetection of Seam Carving in Uncompressed Images using eXtreme Gradient Boosting
Detection of Seam Carving in Uncompressed Images using eXtreme Gradient Boosting
 
My Amazing CFD Coursework - Competitiveness of the Ferrari F2002
My Amazing CFD Coursework - Competitiveness of the Ferrari F2002My Amazing CFD Coursework - Competitiveness of the Ferrari F2002
My Amazing CFD Coursework - Competitiveness of the Ferrari F2002
 
LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE
 LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE
LEARNING FINGERPRINT RECONSTRUCTION: FROM MINUTIAE TO IMAGE
 
Iris Localization - a Biometric Approach Referring Daugman's Algorithm
Iris Localization - a Biometric Approach Referring Daugman's AlgorithmIris Localization - a Biometric Approach Referring Daugman's Algorithm
Iris Localization - a Biometric Approach Referring Daugman's Algorithm
 
Edge Detection using Hough Transform
Edge Detection using Hough TransformEdge Detection using Hough Transform
Edge Detection using Hough Transform
 
Digital manufacture 1
Digital manufacture 1Digital manufacture 1
Digital manufacture 1
 
Autonomous Parallel Parking Methodology for Ackerman Configured Vehicles
Autonomous Parallel Parking Methodology for Ackerman Configured VehiclesAutonomous Parallel Parking Methodology for Ackerman Configured Vehicles
Autonomous Parallel Parking Methodology for Ackerman Configured Vehicles
 
Localization of free 3 d surfaces by the mean of photometric
Localization of free 3 d surfaces by the mean of photometricLocalization of free 3 d surfaces by the mean of photometric
Localization of free 3 d surfaces by the mean of photometric
 
A survey on road extraction from color image using vectorization
A survey on road extraction from color image using vectorizationA survey on road extraction from color image using vectorization
A survey on road extraction from color image using vectorization
 
CAE Assignment
CAE AssignmentCAE Assignment
CAE Assignment
 
paper
paperpaper
paper
 
3 d graphics with opengl part 2
3 d graphics with opengl  part 23 d graphics with opengl  part 2
3 d graphics with opengl part 2
 
Linear Feature Separation From Topographic Maps Using Energy Density and The ...
Linear Feature Separation From Topographic Maps Using Energy Density and The ...Linear Feature Separation From Topographic Maps Using Energy Density and The ...
Linear Feature Separation From Topographic Maps Using Energy Density and The ...
 
A survey on road extraction from color image using
A survey on road extraction from color image usingA survey on road extraction from color image using
A survey on road extraction from color image using
 
Fuzzy c-means clustering for image segmentation
Fuzzy c-means  clustering for image segmentationFuzzy c-means  clustering for image segmentation
Fuzzy c-means clustering for image segmentation
 
Hybrid autonomousnavigation p_limaye-et-al_3pgabstract
Hybrid autonomousnavigation p_limaye-et-al_3pgabstractHybrid autonomousnavigation p_limaye-et-al_3pgabstract
Hybrid autonomousnavigation p_limaye-et-al_3pgabstract
 
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013
DimEye Corp Presents Revolutionary VLS (Video Laser Scan) at SS IMMR 2013
 
proj525
proj525proj525
proj525
 
1422798749.2779lecture 5
1422798749.2779lecture 51422798749.2779lecture 5
1422798749.2779lecture 5
 

Similar to Vehicle tracking and distance estimation based on multiple image features

An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...aciijournal
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...aciijournal
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...aciijournal
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...aciijournal
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...aciijournal
 
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...aciijournal
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...aciijournal
 
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...ZaidHussein6
 
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...csandit
 
Gait Based Person Recognition Using Partial Least Squares Selection Scheme
Gait Based Person Recognition Using Partial Least Squares Selection Scheme Gait Based Person Recognition Using Partial Least Squares Selection Scheme
Gait Based Person Recognition Using Partial Least Squares Selection Scheme ijcisjournal
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...ijma
 
Vehicle Recognition Using VIBE and SVM
Vehicle Recognition Using VIBE and SVMVehicle Recognition Using VIBE and SVM
Vehicle Recognition Using VIBE and SVMCSEIJJournal
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVMVEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVMcseij
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM cseij
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVMVEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVMcseij
 
License Plate Recognition using Morphological Operation.
License Plate Recognition using Morphological Operation. License Plate Recognition using Morphological Operation.
License Plate Recognition using Morphological Operation. Amitava Choudhury
 
Performance evaluation of different automatic seed point generation technique...
Performance evaluation of different automatic seed point generation technique...Performance evaluation of different automatic seed point generation technique...
Performance evaluation of different automatic seed point generation technique...IAEME Publication
 
The Technology Research of Camera Calibration Based On LabVIEW
The Technology Research of Camera Calibration Based On LabVIEWThe Technology Research of Camera Calibration Based On LabVIEW
The Technology Research of Camera Calibration Based On LabVIEWIJRES Journal
 
Improving time to-collision estimation by IMM based Kalman filter
Improving time to-collision estimation by IMM based Kalman filterImproving time to-collision estimation by IMM based Kalman filter
Improving time to-collision estimation by IMM based Kalman filterYixin Chen
 
Intelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIntelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIRJET Journal
 

Similar to Vehicle tracking and distance estimation based on multiple image features (20)

An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
 
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & E...
 
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Em...
 
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...
 
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
 
Gait Based Person Recognition Using Partial Least Squares Selection Scheme
Gait Based Person Recognition Using Partial Least Squares Selection Scheme Gait Based Person Recognition Using Partial Least Squares Selection Scheme
Gait Based Person Recognition Using Partial Least Squares Selection Scheme
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
 
Vehicle Recognition Using VIBE and SVM
Vehicle Recognition Using VIBE and SVMVehicle Recognition Using VIBE and SVM
Vehicle Recognition Using VIBE and SVM
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVMVEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM
 
VEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVMVEHICLE RECOGNITION USING VIBE AND SVM
VEHICLE RECOGNITION USING VIBE AND SVM
 
License Plate Recognition using Morphological Operation.
License Plate Recognition using Morphological Operation. License Plate Recognition using Morphological Operation.
License Plate Recognition using Morphological Operation.
 
Performance evaluation of different automatic seed point generation technique...
Performance evaluation of different automatic seed point generation technique...Performance evaluation of different automatic seed point generation technique...
Performance evaluation of different automatic seed point generation technique...
 
The Technology Research of Camera Calibration Based On LabVIEW
The Technology Research of Camera Calibration Based On LabVIEWThe Technology Research of Camera Calibration Based On LabVIEW
The Technology Research of Camera Calibration Based On LabVIEW
 
Improving time to-collision estimation by IMM based Kalman filter
Improving time to-collision estimation by IMM based Kalman filterImproving time to-collision estimation by IMM based Kalman filter
Improving time to-collision estimation by IMM based Kalman filter
 
Intelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIntelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial Intelligence
 

Recently uploaded

Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...Call Girls in Nagpur High Profile
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 

Recently uploaded (20)

Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 

Vehicle tracking and distance estimation based on multiple image features

  • 1. Vehicle Tracking and Distance Estimation Based on Multiple Image Features Yixin Chen Technical Center Brighton Delphi Corporation Brighton, MI 48116-8326 yixin.chen@delphi.com Manohar Das Dept. of Electrical and Computer Engineering Oakland University Rochester, MI 48309-4401 das@oakland.edu Devendra Bajpai Dept. of Electrical and Computer Engineering Oakland University Rochester, MI 48309-4401 dbajpai@oakland.edu Abstract In this paper, we introduce a vehicle tracking algorithm based on multiple image features to detect and track the front car in a collision avoidance system (CAS) application. The algorithm uses multiple image features, such as corner, edge, gradient, vehicle symmetry property, and image matching technique to robustly detect the vehicle bottom corners and edges, and estimate the vehicle width. Based on the estimated vehicle width, a few pre-selected edge templates are used to match the image edges that allow us to estimate the vehicle height, and also the distance between the front vehicle and the host vehicle. Some experimental results based on real world video images are presented. These seem to indicate that the algorithm is capable of identifying a front vehicle, tracking it, and estimating its distance from the host vehicle. 1. Introduction The past decade has seen emergence of many promising technologies that enhance the driving safety of a vehicle [1]. One such technique is a collision avoidance system (CAS) that detects the surrounding objects, estimates their distances from the host vehicle, and predicts the time-to-collision. For example, a radar sensor has been used to measure the distance between the front and the host vehicles in an adaptive cruise control (ACC) system [2] to improve the drive comfort and avoid vehicle collision. A video camera is another typical sensor that is used to detect and track the front vehicle in CAS applications. A vehicle tracking system for real-end CAS application should be able to detect the front vehicle and measure the distance between the front and the host vehicles in real-time. In addition, the time-to- collision (TTC) can be estimated based on the distance measurements so that a warning will be given to driver about the potential collision when TTC is smaller than a threshold. To detect the front vehicle by images captured from a moving host vehicle in a real-end CAS system poses many challenges [3], [4]. Some well known techniques for motion detection, such as background subtraction and optical flow measurement, are not well suited for a CAS system, because the image background is changing constantly and the front vehicles (for a rear-end CAS system) do not usually exhibit very different optical flow patterns from extraneous objects, such as the roadside trees and signs. A corner feature based technique to track and predict the positions of the front vehicle is proposed in [5]. But using only corners doesn’t allow us to identify the vehicle structure (width, height, centroid, etc.), or estimate the distance between the front vehicle and the host vehicle. In this paper, we present a new algorithm that uses monochrome video images to detect and track the front vehicle from a moving host vehicle, and estimate the distance between the front vehicle and the host vehicle as well. The algorithm uses multiple image features such as corner, edge, gradient, vehicle symmetry property, and image matching technique to robustly detect the vehicle bottom corners and edges, and estimate the vehicle width. 
Then, based on the vehicle geometry and optical perspective principle, a formula is derived to estimate the vehicle height and the distance between the front and host vehicles. A detailed explanation of the algorithm and demonstration of its performance are provided in the following sections. 2. Vehicle Detection and Tracking Algorithm Fig. 1 shows a typical image of a front car, which is the object-of-interest in a rear-end CAS system. Fourth Canadian Conference on Computer and Robot Vision(CRV'07) 0-7695-2786-8/07 $20.00 © 2007
  • 2. Figure 1: A typical front car image In general, a vehicle usually exhibits very strong geometrical features, such as corners, edges, symmetry, etc. It’s also easy to see from Fig. 1 that the bottom area of a front vehicle is less likely to be occluded by other vehicles or roadside objects (such as trees, traffic signs, etc.), because there should always be some open space between the front vehicle and host vehicle in a typical driving condition. Therefore, the geometrical features in the bottom area of a front vehicle can be used to detect the vehicle. The situation is somewhat different for the top area of a vehicle. From Fig. 1, it’s easy to see that the top area of a vehicle is very likely to be smudged by images of extraneous objects, such as other vehicles moving ahead of it, roadside signs, background trees, bridges, etc. Therefore, we must find another way to locate the vehicle top edge so that a bounding box can be obtained. 2.1 Corner and Edge Feature Extraction We use Harris corner detector [9] to calculate the corner degree, C(x,y), of an image pixel located at ),( yx as shown below: ><+>< ><−>><< = 22 222 ),( yx yxyx II IIII yxC (1) where xI , yI denote image gradients along x and y directions, respectively, and >•< denotes an image smoothing operation. The bigger the value of C is, the more likely the pixel is a true corner. The Prewitt gradient operators are used to calculate the image gradients xI , yI . Also, a Gaussian smoothing operator is used to smooth the noise sensitive first-order derivatives to improve the robustness of the corner detection algorithm. The (5x5) Gaussian smoothing kernel used in our experiment is given by:                 14741 41626164 72641267 41626164 14741 273 1 (2) The above mask is derived from the assumption that joint pdf of the gradient at (x,y) is given by: 2 22 2 2 2 1 ),( σ πσ yx eyxG + − = (3) where the standard deviation, σ, is assumed to be equal to 1. The image first-order derivatives, xI , yI , used in corner detection algorithm are also used for edge detection. Although a number of sophisticated edge detection algorithms exist in the literature, we used a very simple one that requires minimal computational overhead. It can be summarized as follows: For a pixel at (x, y), if either || xI or || yI is greater than a threshold, eT , the pixel is declared as an edge point; Otherwise, the pixel is declared as a non-edge point. Similarly, a simple way to detect the corners is to compare ),( yxC with a threshold cT , i.e., If cTyxC >),( , the pixel ),( yx is classified as corner; otherwise, it is classified as a non-corner. 2.2 Gaussian Mixture Model (GMM) Based Threshold Selection For Corner and Edge Detection Obviously, the selection of the thresholds cT or eT will affect the corner and edge detection performance. Using a fixed threshold, cT or eT , can result in poor corner or edge detection while the image lighting conditions have changed. Therefore, an adaptive threshold is desired. Here, we consider edge detection based on || xI as an example to illustrate the method used to find an adaptive threshold based on GMM method. Assume that the values of || xI can be categorized into two classes, one for the edge pixels with larger Fourth Canadian Conference on Computer and Robot Vision(CRV'07) 0-7695-2786-8/07 $20.00 © 2007
2.2 Gaussian Mixture Model (GMM) Based Threshold Selection for Corner and Edge Detection

Obviously, the choice of the thresholds $T_c$ and $T_e$ affects corner and edge detection performance. A fixed threshold can give poor results when the image lighting conditions change, so an adaptive threshold is desired. Here we consider edge detection based on $|I_x|$ as an example to illustrate how an adaptive threshold can be found with a GMM. Assume that the values of $|I_x|$ fall into two classes: one for the edge pixels, with larger $|I_x|$ values, and the other for the non-edge pixels, with smaller $|I_x|$ values. We assume further that the distribution of $|I_x|$ obeys a two-class Gaussian mixture model:

$$p(x|\boldsymbol{\theta}) = P(\omega_1)\,p(x|\omega_1) + P(\omega_2)\,p(x|\omega_2) = P(\omega_1)\,N(\mu_1,\sigma_1^2) + P(\omega_2)\,N(\mu_2,\sigma_2^2) \qquad (4)$$

where x is used as an abbreviation of $|I_x|$, $\mu_1$ and $\mu_2$ denote the means of the two categories $\omega_1$ and $\omega_2$, and $P(\omega_1)$ and $P(\omega_2)$ are their prior probabilities. Assuming $\sigma_1^2 = \sigma_2^2 = \sigma^2$ and $\mu_1 > \mu_2$, it can be shown [12] that the optimal two-category classifier reduces to the rule:

If $x > \dfrac{\mu_1+\mu_2}{2} + \dfrac{\sigma^2}{\mu_1-\mu_2}\ln\dfrac{P(\omega_2)}{P(\omega_1)}$, x is classified as $\omega_1$; otherwise, x is classified as $\omega_2$.

To use this classifier, the distribution parameters $\mu_1$, $P(\omega_1)$, $\mu_2$, $P(\omega_2)$, and $\sigma^2$ must be known a priori. Unfortunately, they are usually unknown to start with. The well-known maximum likelihood (EM) method can be used to estimate the parameters of a GMM using the following iterative formulas [13]:

$$\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta}) = \frac{|\hat{\boldsymbol\Sigma}_i|^{-1/2}\exp\!\left[-\tfrac12(\mathbf{x}_k-\hat{\boldsymbol\mu}_i)^t\,\hat{\boldsymbol\Sigma}_i^{-1}(\mathbf{x}_k-\hat{\boldsymbol\mu}_i)\right]\hat P(\omega_i)}{\sum_{j=1}^{c}|\hat{\boldsymbol\Sigma}_j|^{-1/2}\exp\!\left[-\tfrac12(\mathbf{x}_k-\hat{\boldsymbol\mu}_j)^t\,\hat{\boldsymbol\Sigma}_j^{-1}(\mathbf{x}_k-\hat{\boldsymbol\mu}_j)\right]\hat P(\omega_j)} \qquad (5)$$

$$\hat P(\omega_i)(j+1) = \frac{1}{n}\sum_{k=1}^{n}\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta}(j)) \qquad (6)$$

$$\hat{\boldsymbol\mu}_i(j+1) = \frac{\sum_{k=1}^{n}\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta}(j))\,\mathbf{x}_k}{\sum_{k=1}^{n}\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta}(j))} \qquad (7)$$

$$\hat{\boldsymbol\Sigma}_i(j+1) = \frac{\sum_{k=1}^{n}\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta}(j))\,(\mathbf{x}_k-\hat{\boldsymbol\mu}_i(j))(\mathbf{x}_k-\hat{\boldsymbol\mu}_i(j))^t}{\sum_{k=1}^{n}\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta}(j))} \qquad (8)$$

where c denotes the number of categories (here c = 2), i = 1, ..., c; n denotes the number of pixels included in the calculations, k = 1, ..., n; $\mathbf{x}_k$ denotes the gradient value $|I_x|$ at the k-th pixel in this example; and, since the data here are scalar, $\hat{\boldsymbol\Sigma}$ reduces to the variance $\sigma^2$.

In the above equations, $\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta})$ is first calculated using assumed initial parameter values; for instance, taking $\omega_1$ to be the edge class with the larger mean (consistent with the classification rule above), $\mu_1$ and $\mu_2$ can be initialized to 90% and 50% of the maximum value of $|I_x|$, with $P(\omega_1) = 0.2$, $P(\omega_2) = 0.8$, and $\sigma^2 = 1$. Then, based on the calculated $\hat P(\omega_i|\mathbf{x}_k,\hat{\boldsymbol\theta})$, the distribution parameters are updated using Equations (6)-(8), and the updated values are substituted into Equation (5) for the next round of calculations. The iteration stops when the changes in the updated parameters become very small, or when the number of iterations exceeds a pre-selected value, such as 20. The adaptive threshold is then the decision boundary of the classifier above. Figs. 2(a) and 2(b) show corner and edge detection results based on thresholds calculated with the GMM method.

(a) Corner detection (b) Edge detection
Figure 2: Examples of image corner and edge detection
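The iteration above is ordinary EM for a two-component mixture. Below is a minimal one-dimensional sketch in Python (our own function name; the initial values follow those suggested in the text) that returns the adaptive edge threshold:

```python
import numpy as np

def gmm_edge_threshold(grad_abs, max_iter=20, tol=1e-4):
    """Fit a two-component, shared-variance 1-D GMM to |Ix| values by EM
    (Eqs. 5-8) and return the threshold of the two-class decision rule."""
    x = grad_abs.ravel().astype(float)
    n = x.size
    mu = np.array([0.9 * x.max(), 0.5 * x.max()])  # component 0 = edges
    prior = np.array([0.2, 0.8])
    var = 1.0
    for _ in range(max_iter):
        # E-step (Eq. 5): posterior responsibilities P(w_i | x_k);
        # the shared 1/sqrt(2*pi*var) factor cancels in the normalization
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        post = lik * prior
        post /= post.sum(axis=1, keepdims=True) + 1e-300
        # M-step (Eqs. 6-8): update priors, means, shared variance
        nk = post.sum(axis=0)
        new_mu = (post * x[:, None]).sum(axis=0) / nk
        var = max((post * (x[:, None] - new_mu) ** 2).sum() / n, 1e-6)
        done = np.abs(new_mu - mu).max() < tol
        mu, prior = new_mu, nk / n
        if done:
            break
    # Decision boundary of the equal-variance two-class classifier
    return 0.5 * (mu[0] + mu[1]) + var / (mu[0] - mu[1]) * np.log(prior[1] / prior[0])
```

Running this once per frame (or every few frames) lets $T_e$ adapt to lighting changes; the same procedure applied to the C(x,y) values yields $T_c$.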
2.3 Detection of the Left-Bottom Corner of the Vehicle

As mentioned earlier, the geometrical features in the bottom area of a front vehicle can be used to detect it. Specifically, based on the corner and edge points obtained in the last step, the algorithm finds the left-bottom corner of the vehicle in two steps:

• In the region-of-interest (the area in front of the host vehicle), align an "L-shaped" edge template (height: M pixels, width: N pixels) with every corner point and compute its cross-correlation with the edge image. The left-bottom corner candidate is the location where this correlation is maximum.

• Validate whether the point with the maximum correlation value is a true left-bottom corner. The two criteria used to select the true bottom corner are: (i) it has the maximum correlation, and (ii) the number of edge points enclosed by a sub-region below the point must be less than a threshold T (this is based on the fact that there should be only road surface below the vehicle bottom line).

The above idea can be expressed as the following algorithm (a sketch in code is given at the end of this subsection):

1) In the region-of-interest, for every corner point (i,j), calculate

$$\mathrm{Sum}(i,j) = \sum_{x=i-M+1}^{i}\mathrm{Edge}(x,j) + \sum_{y=j}^{j+N}\mathrm{Edge}(i,y)$$

where M and N are the height and width of the starting edge template.

2) Find the initial matching point: $(x_0,y_0) = \arg\max_{(i,j)} \mathrm{Sum}(i,j)$.

3) If $\sum_{x=x_0}^{x_0+P}\sum_{y=y_0}^{y_0+Q}\mathrm{Edge}(x,y) < T$, go to step 5; else set $\mathrm{Sum}(x_0,y_0) = 0$, where P and Q give the size of the validation sub-region and T is a threshold.

4) $(x_0,y_0) = \arg\max_{(i,j)} \mathrm{Sum}(i,j)$. Go to step 3.

5) End.

The tracking window after detecting the left-bottom corner is shown in Fig. 3, where the left-bottom corner is aligned with the tracking window.

Figure 3: Tracking the left-bottom corner
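A direct transcription of steps 1)-5) into Python might look as follows; the default template and sub-region sizes are placeholders, since the paper does not list its values of M, N, P, Q, and T.

```python
import numpy as np

def find_left_bottom_corner(edge, corners, M=8, N=8, P=6, Q=20, T=4):
    """Search for the vehicle left-bottom corner (steps 1-5 of Sec. 2.3).

    edge    : binary edge map; corners : binary corner map
    M, N    : height/width of the L-shaped template (pixels)
    P, Q, T : validation sub-region size and edge-count threshold
    """
    rows, cols = edge.shape
    sums = np.zeros(edge.shape, dtype=float)

    # Step 1: L-template correlation at every corner point
    for i, j in zip(*np.nonzero(corners)):
        if i - M + 1 < 0 or j + N >= cols:
            continue
        vertical = edge[i - M + 1:i + 1, j].sum()    # left leg of the "L"
        horizontal = edge[i, j:j + N + 1].sum()      # bottom leg of the "L"
        sums[i, j] = vertical + horizontal

    # Steps 2-4: take the best candidate, check that the sub-region
    # below it is (almost) edge-free; otherwise discard and retry
    while sums.max() > 0:
        x0, y0 = np.unravel_index(np.argmax(sums), sums.shape)
        below = edge[x0:min(x0 + P + 1, rows), y0:min(y0 + Q + 1, cols)]
        if below.sum() < T:               # only road surface below -> valid
            return x0, y0
        sums[x0, y0] = 0                  # invalid candidate, try the next
    return None                           # step 5: no valid corner found
```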
2.4 Detection of the Vehicle Right Edge

After aligning the tracking window with the vehicle left-bottom corner, a column-direction projection is performed on a sub-region of the edge image consisting of the R bottom rows of the window. The choice of R should be based on the maximum distance up to which we wish to track a front vehicle; this determines the minimum number of rows, Hmin, that separate the top edge of a front vehicle from its bottom. To avoid interference from edge points in the upper part of the tracking window, R can be chosen to be about half of Hmin. Based on these considerations, we chose R = 15 in this set of experiments. The column projection is given by

$$\mathrm{Proj}(j) = \sum_{i=x_0-R}^{x_0}\mathrm{Edge}(i,j), \qquad y_0 \le j \le y_0+N \qquad (9)$$

Fig. 4 shows the column projection plots, where the top plot is restricted to the tracking window.

Figure 4: The edge column projection Proj(j)

In Fig. 4, the first peak and the second peak (around column 190) correspond to the vehicle left and right edges, respectively. In addition, there is always a valley after the vehicle right edge (such as the one in the range of columns 200-215), because there should always be some clear space between the front vehicle and any objects to its right. The procedure to find the vehicle right edge consists of two steps:

• Coarse search: locate the vehicle right edge by finding the starting point of this valley. Denote the valley starting point as column $y_1$.

• Fine search: based on the vehicle symmetry property, use the gradient data around the vehicle left edge to find the vehicle right edge, using the principle of matched filtering.

Figure 5: Vehicle right edge fine search illustration (the left-edge template $f_l(x,y)$ anchored between $(x_0-m, y_0)$ and $(x_0, y_0)$, and the right-edge search region $f_r(x,y)$ of width $(y_1-y_0)/2$ ending at column $y_1$)

In Fig. 5, $f_l(x,y)$ is the horizontal-gradient template around the vehicle left edge:

$$f_l(x,y) = \left|I_x(x_0-m+x-1,\; y_0+y-1)\right|, \qquad 1\le x\le m+1,\;\; 1\le y\le n+1 \qquad (10)$$

and $f_r(x,y)$ is the horizontal-gradient data around the vehicle right edge:

$$f_r(x,y) = \left|I_x\!\left(x_0-m+x-1,\; y_0+\frac{y_1-y_0}{2}+y-1\right)\right|, \qquad 1\le x\le m+1,\;\; 1\le y\le \frac{y_1-y_0}{2}+1 \qquad (11)$$

The width of $f_r(x,y)$ is selected as half of the tracking window width. Based on the vehicle symmetry property, if we flip $f_l(x,y)$ in the horizontal direction and call the new data $f_{-l}(x,y)$, then:

$$f_{-l}(x,y) = \left|I_x(x_0-m+x-1,\; y_0+n-y+1)\right|, \qquad 1\le x\le m+1,\;\; 1\le y\le n+1 \qquad (12)$$

The correlation between $f_{-l}(x,y)$ and $f_r(x,y)$ is given by

$$C(s,t) = \sum_{x=1}^{m+1}\sum_{y=1}^{n+1} f_{-l}(x,y)\, f_r(x+s,\, y+t) \qquad (13)$$

where $s = 0$ because $f_l(x,y)$ and $f_r(x,y)$ have the same height, and $t = 0, 1, 2, \ldots, (y_1-y_0)/2$. The correlation is expected to reach its maximum at the vehicle right edge. The tracking window after finding the vehicle left and right edges is shown in Fig. 6.

Figure 6: The tracking window after finding the vehicle left/right edges
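The sketch below combines the coarse and fine searches under stated assumptions: the paper does not spell out how the valley start is detected, so the sustained near-empty run used here (and the parameter defaults) are our own choices.

```python
import numpy as np

def find_right_edge(edge, grad_x, x0, y0, n, m=4, r=15,
                    min_width=20, valley_len=8):
    """Locate the vehicle right-edge column given the left-bottom corner
    (x0, y0).  edge: binary edge map; grad_x: |Ix| map; n: template width."""
    proj = edge[x0 - r:x0 + 1, :].sum(axis=0)     # column projection, Eq. (9)

    # Coarse search: first sustained near-empty run of columns beyond a
    # plausible minimum vehicle width marks the valley start, column y1
    y1 = None
    for j in range(y0 + min_width, edge.shape[1] - valley_len):
        if proj[j:j + valley_len].sum() <= valley_len:   # ~empty columns
            y1 = j
            break
    if y1 is None:
        return None

    # Fine search (Eqs. 10-13): flip the left-edge gradient template
    # horizontally and slide it over the right half of the window; by
    # symmetry, the correlation peaks where the template meets the edge
    f_l = grad_x[x0 - m:x0 + 1, y0:y0 + n + 1]
    f_neg_l = f_l[:, ::-1]                        # Eq. (12)
    half = (y1 - y0) // 2
    scores = []
    for t in range(half + 1):
        end = y0 + half + t + n + 1
        if end > grad_x.shape[1]:
            break
        scores.append((f_neg_l *
                       grad_x[x0 - m:x0 + 1, y0 + half + t:end]).sum())
    if not scores:
        return y1                                 # fall back to coarse result
    best_t = int(np.argmax(scores))
    return y0 + half + best_t + n                 # right-edge column estimate
```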
2.5 Estimation of Vehicle Height and Distance

As mentioned earlier, the image of the top area of the front vehicle is likely to be cluttered by extraneous objects, which makes finding the top edge of the vehicle challenging. Here we present a new method of finding the top edge based on the well-known principle of perspective transformation. Detecting the top edge also allows us to estimate the distance between the front and host vehicles.

In general, a vehicle has a fixed ratio of width to height (RW/H). The vehicle's image size changes with distance, but RW/H stays the same. This fixed-ratio property can be used to find the approximate vehicle height once its width is known. In addition, different types of vehicles, such as sedans, mini-vans or SUVs, and commercial trucks, have different values of RW/H. For example, RW/H is about 1 for a mini-van or SUV, and about 1.2-1.3 for a sedan.

Figure 7: Perspective transformation of the imaging system (object of width W at distance Z from the focal point, image of width w, focal length f)

For the optical imaging system shown in Fig. 7, the relationship between focal length f, object size W, image size w, and object distance Z is:

$$w = \frac{fW}{Z-f} \approx f\,\frac{W}{Z} \qquad (14)$$

In Eq. (14), w has the same length unit (e.g., millimeters or inches) as W. If we want w expressed as a number of pixels, Eq. (14) is modified slightly as

$$w = f\,\frac{W}{Z}\,k_w \qquad (15)$$

where $k_w$ represents the number of horizontal pixels per unit length. Similarly, a vertical line of height H is transformed to h pixels as

$$h = f\,\frac{H}{Z}\,k_h \qquad (16)$$

where $k_h$ represents the number of vertical pixels per unit length.

Suppose the image of a sedan at a distance $Z_0 = 10$ meters is $h_0 = 47$ pixels high and $w_0 = 62$ pixels wide. The height and width for any other distance Z can then be calculated as:

$$\begin{cases} w(Z) = f\,\dfrac{W}{Z}\,k_w = f\,\dfrac{W}{Z_0}\,k_w\,\dfrac{Z_0}{Z} = w_0\,\dfrac{Z_0}{Z}\\[2mm] h(Z) = f\,\dfrac{H}{Z}\,k_h = f\,\dfrac{H}{Z_0}\,k_h\,\dfrac{Z_0}{Z} = h_0\,\dfrac{Z_0}{Z}\end{cases} \qquad (17)$$

Let $w_1$ denote the vehicle image width found in Section 2.4. Substituting $w_1$ into Equation (17) gives

$$\begin{cases} Z = \dfrac{w_0\,Z_0}{w_1}\\[2mm] h = h_0\,\dfrac{Z_0}{Z}\end{cases} \qquad (18)$$

which yields estimates of the height and distance of the vehicle. Given the same value of $Z_0$, different types of vehicles have different reference image widths $w_0$; therefore, Eq. (18) gives one pair of estimates (Z, h) for each vehicle type. A template matching technique is used to decide the type of vehicle as follows. For each candidate width/height pair, we generate a vehicle edge template consisting of a rectangular border of ones. For instance, the template for h = 5, w = 7 is:

$$\mathrm{VehicleTemplate}(5,7) = \begin{bmatrix} 1&1&1&1&1&1&1\\ 1&0&0&0&0&0&1\\ 1&0&0&0&0&0&1\\ 1&0&0&0&0&0&1\\ 1&1&1&1&1&1&1 \end{bmatrix}$$

First, we calculate the number of overlapping pixels between each vehicle edge template and the edge image of the front vehicle. Then we find the template that yields the maximum number of overlapping pixels. This allows us to find the actual vehicle height as well as the vehicle type (sedan, mini-van, or commercial truck). Fig. 8 shows the final tracking window after the vehicle height is found.

Figure 8: Final tracking window
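Here is a compact sketch of Eqs. (17)-(18) combined with the template-overlap type test. Only the sedan reference pair (w0 = 62, h0 = 47 pixels at Z0 = 10 m) comes from the text; the other reference pairs and class names are illustrative assumptions, and edge_roi is assumed to be the edge image cropped so that its bottom-left pixel is the detected left-bottom corner.

```python
import numpy as np

# Reference image sizes (w0, h0) in pixels at Z0 = 10 m; only the sedan
# pair is quoted in the text -- the others are illustrative placeholders.
Z0 = 10.0
REFS = {"sedan": (62, 47), "minivan_suv": (70, 70), "truck": (90, 110)}

def rim_template(h, w):
    """Rectangular edge template: ones on the border, zeros inside."""
    t = np.zeros((h, w), dtype=bool)
    t[[0, -1], :] = True
    t[:, [0, -1]] = True
    return t

def estimate_height_and_distance(edge_roi, w1):
    """Given the measured image width w1 (pixels), pick the vehicle type
    whose scaled template overlaps the most edge pixels (Eq. 18)."""
    best = None
    for vtype, (w0, h0) in REFS.items():
        z = w0 * Z0 / w1                     # Eq. (18): Z = w0 * Z0 / w1
        h = int(round(h0 * Z0 / z))          # Eq. (18): h = h0 * Z0 / Z
        tpl = rim_template(h, w1)
        roi = edge_roi[-h:, :w1]             # anchored at the bottom-left
        if roi.shape != tpl.shape:
            continue
        overlap = int((tpl & (roi > 0)).sum())
        if best is None or overlap > best[0]:
            best = (overlap, vtype, z, h)
    if best is None:
        return None
    _, vtype, z, h = best
    return vtype, z, h
```

In practice one might also normalize the overlap by the template perimeter so that larger templates are not automatically favored; that refinement is our suggestion, not a step from the paper.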
3. Experiment Results

The algorithm discussed above was tested on a variety of video sequences captured from a host vehicle, and in each case the performance was found to be satisfactory. MATLAB software was used to perform these simulation studies. Fig. 9 shows the tracking windows for different vehicles.

Figure 9: Tracking windows for different vehicles

In Fig. 10, the image sequences were recorded while the host vehicle was approaching a fully stopped front car. Figs. 10(a) and 10(b) show the tracking windows in the 1st and 50th frames, respectively.

Figure 10: The tracking windows for the 1st and 50th frames

Figures 11(a) and 11(b) show the tracking windows for partially occluded front vehicles.

Figure 11: Tracking windows for partially occluded vehicles

The estimated front-car width and height, and the distance from the host vehicle, are shown in Fig. 12. The estimated distance decreased from around 20 meters in the first frame to around 10 meters in the 50th frame. After applying a Kalman filter to the estimated distance, the time-to-collision (TTC) can be estimated from the resulting estimate of the relative velocity between the front and host vehicles.

Figure 12: The vehicle tracking and distance measurement results (estimated width and height in pixels, and distance in meters, versus frame number)
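The paper does not detail its Kalman filter, so the following is a minimal constant-velocity sketch (the noise parameters and frame rate are our assumptions) showing how the filtered distance yields a relative-velocity and hence a TTC estimate:

```python
import numpy as np

def track_ttc(distances, dt=1/30, q=1.0, r=1.0):
    """Constant-velocity Kalman filter over per-frame distance estimates.
    State: [distance, relative_velocity].  Returns per-frame TTC (s);
    TTC = d / (-v) is only finite while the gap is closing (v < 0)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # only distance is measured
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],        # process noise
                      [dt**3 / 2, dt**2]])
    R = np.array([[r]])                              # measurement noise
    x = np.array([distances[0], 0.0])
    P = np.eye(2) * 10.0
    ttc = []
    for z in distances:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new distance measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        d, v = x
        ttc.append(d / -v if v < -1e-6 else np.inf)
    return ttc
```

A warning would then be issued whenever the returned TTC drops below the chosen threshold.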
4. Conclusion

This paper introduces a new algorithm to identify and track a front vehicle from a host vehicle using monochrome video images. The algorithm also estimates the distance between the front and host vehicles for typical rear-end CAS applications, and it works well under normal driving conditions. A future extension of this work will address tracking front-side vehicles as well, so that a cut-in maneuver can be predicted ahead of time. In addition, the robustness of the algorithm in the presence of occlusion, road curvature, and severe driving conditions will be investigated.

References

[1] L. Li, J. Song, F.-Y. Wang, et al., "New Development and Research Trends for Intelligent Vehicles," IEEE Intelligent Systems, 2005, pp. 10-14.

[2] S. Rohr, R. Lind, R. Myers, W. Bauson, W. Kosiak, and H. Yen, "An Integrated Approach to Automotive Safety Systems," 2000 SAE Conference.

[3] D. Koller, K. Daniilidis, and H.-H. Nagel, "Model-Based Object Tracking in Monocular Image Sequences of Road-Traffic Scenes," International Journal of Computer Vision, 10:257-281, 1993.

[4] K. Daniilidis, C. Krauss, M. Hansen, and G. Sommer, "Real-Time Tracking of Moving Objects with an Active Camera," Real-Time Imaging, 4:3-20, 1998.

[5] J. M. Roberts, Attentive Visual Tracking and Trajectory Estimation for Dynamic Scene Segmentation, doctoral dissertation, University of Southampton.

[6] T. Xiong and C. Debrunner, "Stochastic Car Tracking with Line- and Color-Based Features," IEEE Trans. Intelligent Transportation Systems, 5(4):324-328, Dec. 2004.

[7] P. R. Beaudet, "Rotationally Invariant Image Operators," 4th International Joint Conference on Pattern Recognition, Tokyo, pp. 579-583, 1978.

[8] L. Kitchen and A. Rosenfeld, "Gray-Level Corner Detection," Pattern Recognition Letters, 1:95-102, December 1982.

[9] C. G. Harris and M. J. Stephens, "A Combined Corner and Edge Detector," Proceedings of the Fourth Alvey Vision Conference, Manchester, pp. 147-151, 1988.

[10] J. A. Noble, "Finding Corners," Image and Vision Computing, 6(2):121-128, 1988.

[11] J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, 1986.

[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd edition, Addison-Wesley, 1992.

[13] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd edition, John Wiley & Sons, 2001.