Tracking Pedestrians using Extended and Unscented Kalman Filters in
a Sensor Fusion Framework
Kaustav Mondal

This work was done as part of Udacity's Self-Driving Car Nanodegree (Term 2), 2017.
Abstract— This paper presents a sensor fusion algorithm to track pedestrians accurately using measurements from LiDAR and radar sensors. The formulation of the standard Kalman filter, extended Kalman filter and unscented Kalman filter is discussed in detail. The objective of this paper is to show the importance of using multiple sensors for estimation in autonomous vehicles. The ability of the EKF and UKF to accurately estimate the nonlinear behavior of the pedestrian is studied by comparing the accuracy of the predicted path against ground truth using performance metrics.
I. INTRODUCTION TO SENSOR FUSION
One of the main functions of the sensor fusion module of an autonomous car is to track its environment. The environment could consist of cars, pedestrians walking on the road, bikes or even landmarks. In this paper, we have used sensor fusion techniques to track a pedestrian riding a bike. Two main sensors have been used in this project, LiDAR and radar. Since sensor fusion requires fast processing, the algorithm has been programmed in C++, a high-performance language. In this project, we have used measurement data from a Velodyne LiDAR and a radar sensor mounted on a prototype autonomous car designed by Mercedes-Benz. The car has 15 sensors mounted
on it. At the front, a stereo camera is installed which gives
image data and distance information. There is a separate
camera installed which has a wide view lens. This is to
ensure that the traffic signals can be detected from far away.
Radar sensors have been installed on the front bumper of
the car. Radars are generally used for adaptive cruise control,
blind spot warning, collision avoidance and collision control.
Radars use the Doppler effect to measure the velocity of a tracked object; the Doppler effect measures the change in frequency of the radar waves based on the relative velocity of the object w.r.t. the vehicle. This is advantageous for sensor fusion because it provides velocity as an independent measurement. Radars can also be used for localization based
on radar waves bouncing off hard surfaces around the car.
It can provide measurements of the distance of the objects
which may not be present in the direct line of sight. Of
all the sensors on the car, the radar is least affected by
rain or fog. It has a wide field of view (e.g. about 150°) and a range of 200 meters or more. Compared to LiDARs
and cameras, radars have a low resolution. This can be
particularly problematic when we consider reflection from
the surface of static objects. For example, a manhole cover or a soda can lying on the road will have high reflectivity even though they are relatively small. This is called radar clutter, and it is why current automotive radars usually disregard static objects.
LiDAR stands for light detection and ranging. It uses an infrared laser beam to measure the distance between the sensor and the object. Most current LiDARs use light in the 900 nanometer wavelength range. There are some LiDARs which use a longer wavelength and therefore perform better in rain and fog. LiDARs have a much greater spatial resolution than radar because of a more focused laser beam, a larger number of scan layers in the vertical direction and a high density of LiDAR points per layer. The current LiDARs used in the automotive industry can't measure the velocity of an object independently; they have to rely on the changing position between two or more scans. LiDAR measurements are more affected by weather conditions than radar measurements, and they may be erroneous if dirt settles on the sensor, so LiDARs must be kept clean. The prototype car used Velodyne's PUCK™ (VLP-16) sensor, which is the smallest, newest and most advanced product in Velodyne's 3D LiDAR product range. The key features of this LiDAR are given in Table I [1].
Key Feature                               Nominal Value
Cost                                      $7,999
Mass                                      0.84 kg
Number of channels                        16
Range                                     100 m
Spatial resolution                        300,000 points/sec
Field of view (horizontal/azimuth)        360°
Field of view (vertical)                  30° (+15° to -15°)
Accuracy                                  ±3 cm (typical)
Angular resolution (vertical)             2°
Angular resolution (horizontal/azimuth)   0.1° - 0.4°
Rotation rate                             5-20 Hz

TABLE I
PUCK™ (VLP-16) KEY FEATURES AND CHARACTERISTICS
LiDARs and radars have strengths and weaknesses of their
own. By combining the data from both of the sensors, we
can precisely estimate the pedestrian location, heading and
speed. Every time we receive a new measurement from
one of the sensors, the estimation function is triggered.
The estimation function consists of two steps. In the prediction step, we predict the pedestrian's state (position, velocity) and covariance while taking into account the elapsed time ∆t between the current and previous observations. The measurement update step depends on the type of measurement received at every time step. If the current measurement is a LiDAR measurement, then we use the standard Kalman filter to update the pedestrian's state. However, if the current measurement is a radar measurement, then we use the extended Kalman filter / unscented Kalman filter to update the state. This is because the radar measurement model is nonlinear, and a standard Kalman filter works well only in the case of linear models. The sensor fusion flowchart is illustrated in Fig 1.
Fig. 1. Overview of sensor fusion algorithm
Initially the filter would receive the pedestrian’s initial posi-
tion relative to the car. This measurement could be from a
laser or radar. The state and covariance of the process model
will be initialized by the filter based on the first measurement.
After the filter receives the 2nd measurement at time ∆t, the
algorithm predicts the process model’s state and covariance.
For EKF, we have used a constant velocity model so the
velocity of the pedestrian riding a bicycle is assumed to be
constant during the time period ∆t. Based on the constant
velocity model, the filter predicts that the bicycle has moved
by a distance equivalent to v∆t where v is the velocity of the
pedestrian. In the update step, the filter uses the sensor measurement and the predicted state to give a better estimate of the pedestrian's location and velocity. This is followed by another cycle of prediction and update; hence the Kalman filter is a recursive algorithm. Since the LiDAR uses Cartesian
coordinates and the radar uses polar coordinates, we use
different measurement models in the update step, based on
which sensor measurement is received at that particular time
step. If the measurements from LiDAR and radar are received
simultaneously, we just run one prediction step followed by
two update steps instead of running two prediction steps for
each measurement.
II. KALMAN FILTER
In a standard Kalman filter, there are two main steps:
prediction and update. Prediction: In this step, we assume
that we know the current position and velocity of the object.
Assuming that the velocity remains constant, we can predict
the state of the object after a time period ∆t. The equation x' = Fx + Bu + ν represents the predicted state of the object. But it is very possible that the object didn't maintain constant velocity; maybe it changed direction, accelerated or decelerated. This leads to an increase in uncertainty. The equation P' = FPF^T + Q represents this increase in uncertainty. The process noise is represented by ν. The notation ν ∼ N(0, Q) represents the process noise as a Gaussian distribution with zero mean and covariance Q. Please note that Bu in the prediction step represents the control input, where B is the control input matrix and u is the control vector. It is to be noted that the autonomous car has no way of measuring the exact acceleration of the tracked object (the bicycle), since we cannot model its internal forces. Hence we set Bu = 0 and represent the acceleration as the zero-mean random noise ν. Update: In this step, we use
the sensor data to calculate a better estimate of the object’s
position and velocity. The equation y = z − Hx' calculates the measurement residual, where H is the measurement matrix, z is the true sensor measurement and ẑ = Hx' is the predicted measurement. This equation compares the measurement data to the predicted position of the tracked object. The Kalman filter gain K is given by K = P'H^T (HP'H^T + R)^{-1}. The Kalman gain combines the uncertainty of where we think the object is (P') with the uncertainty of our sensor measurement (R). If the sensor measurements are very uncertain (R is high relative to P'), then the Kalman gain gives more weight to where we think we are, i.e. x'. If the predicted state has a high uncertainty (P' is high relative to R), the Kalman gain puts more weight on the sensor measurement z. The notation ω ∼ N(0, R) represents the measurement noise as a Gaussian distribution with zero mean and covariance R. The sensor measurement uncertainty matrix R is provided by the manufacturer of the sensor. After calculating the Kalman gain, we update the state vector and the state covariance matrix. The equations x_new = x' + Ky and P_new = (I − KH)P' give the new estimated state vector and state covariance matrix respectively.
Process Model: The state of the moving pedestrian is represented by position (px, py) and velocity (vx, vy) w.r.t. the sensor. The state vector and linear motion model are defined as follows

\[
x = \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix}, \qquad
\begin{aligned}
p_x' &= p_x + v_x \Delta t + \nu_{p_x} \\
p_y' &= p_y + v_y \Delta t + \nu_{p_y} \\
v_x' &= v_x + \nu_{v_x} \\
v_y' &= v_y + \nu_{v_y}
\end{aligned}
\qquad (1)
\]
The linear motion model is expressed in matrix form as

\[
\begin{bmatrix} p_x' \\ p_y' \\ v_x' \\ v_y' \end{bmatrix} =
\begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} +
\begin{bmatrix} \nu_{p_x} \\ \nu_{p_y} \\ \nu_{v_x} \\ \nu_{v_y} \end{bmatrix} \qquad (2)
\]
The model of the bicycle assumes that the velocity is constant
between the time intervals. Hence it is called a constant
velocity model [34]. But in reality, the bicycle’s velocity
may change due to acceleration. Hence this un-modeled
acceleration introduces an uncertainty in the form of a
process noise. Considering the acceleration as uncertain, the
state transition equations can be rewritten as

\[
\begin{bmatrix} p_x' \\ p_y' \\ v_x' \\ v_y' \end{bmatrix} =
\begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} +
\begin{bmatrix} \tfrac{1}{2} a_x \Delta t^2 \\ \tfrac{1}{2} a_y \Delta t^2 \\ a_x \Delta t \\ a_y \Delta t \end{bmatrix} \qquad (3)
\]
The process noise vector can be redefined as

\[
\nu = \begin{bmatrix} \nu_{p_x} \\ \nu_{p_y} \\ \nu_{v_x} \\ \nu_{v_y} \end{bmatrix}
= \begin{bmatrix} \tfrac{1}{2} a_x \Delta t^2 \\ \tfrac{1}{2} a_y \Delta t^2 \\ a_x \Delta t \\ a_y \Delta t \end{bmatrix}
= \begin{bmatrix} \tfrac{\Delta t^2}{2} & 0 \\ 0 & \tfrac{\Delta t^2}{2} \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix}
\begin{bmatrix} a_x \\ a_y \end{bmatrix} = G a \qquad (4)
\]
The random acceleration vector is distributed as ν ∼ N(0, Q). The accelerations in the x and y directions are defined as a_x ∼ N(0, σ²_ax) and a_y ∼ N(0, σ²_ay) respectively. The cross-correlation between a_x and a_y is assumed to be zero, although in real-life situations it may not be. In the equation for ν, Q is the process covariance matrix, defined as Q = E(νν^T) = G Q_v G^T where

\[
Q_v = \begin{bmatrix} \sigma^2_{a_x} & \sigma_{a_{xy}} \\ \sigma_{a_{xy}} & \sigma^2_{a_y} \end{bmatrix}.
\]

Since the noise processes a_x and a_y are assumed to be uncorrelated, σ_axy = 0. The final expression of the process covariance matrix Q is
\[
Q = \begin{bmatrix}
\tfrac{\Delta t^4}{4}\sigma^2_{a_x} & 0 & \tfrac{\Delta t^3}{2}\sigma^2_{a_x} & 0 \\
0 & \tfrac{\Delta t^4}{4}\sigma^2_{a_y} & 0 & \tfrac{\Delta t^3}{2}\sigma^2_{a_y} \\
\tfrac{\Delta t^3}{2}\sigma^2_{a_x} & 0 & \Delta t^2\,\sigma^2_{a_x} & 0 \\
0 & \tfrac{\Delta t^3}{2}\sigma^2_{a_y} & 0 & \Delta t^2\,\sigma^2_{a_y}
\end{bmatrix} \qquad (5)
\]
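As an illustration, the state transition matrix F of Equation (2) and the process covariance Q of Equation (5) could be assembled from the elapsed time ∆t as in the following Eigen-based sketch; the helper names are our own, not the project's code.

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;

// Constant-velocity state transition matrix of Equation (2)
MatrixXd makeF(double dt) {
  MatrixXd F = MatrixXd::Identity(4, 4);
  F(0, 2) = dt;   // px' = px + vx * dt
  F(1, 3) = dt;   // py' = py + vy * dt
  return F;
}

// Process covariance of Equation (5) for variances noise_ax, noise_ay
MatrixXd makeQ(double dt, double noise_ax, double noise_ay) {
  double dt2 = dt * dt, dt3 = dt2 * dt, dt4 = dt3 * dt;
  MatrixXd Q(4, 4);
  Q << dt4 / 4 * noise_ax, 0,                   dt3 / 2 * noise_ax, 0,
       0,                  dt4 / 4 * noise_ay,  0,                  dt3 / 2 * noise_ay,
       dt3 / 2 * noise_ax, 0,                   dt2 * noise_ax,     0,
       0,                  dt3 / 2 * noise_ay,  0,                  dt2 * noise_ay;
  return Q;
}
```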
Laser Measurement: In this section, we describe the measurement model w.r.t. the LiDAR. There are 3 main components of a sensor measurement: the measurement vector z, the measurement matrix H and the measurement covariance matrix R. The LiDAR sensor output is a point cloud, but the data provided to us was already processed and we were given the 2D location of the pedestrian at every time step. A LiDAR sensor can only measure the x and y position of the tracked object relative to the sensor. Hence the measurement vector is z = [p_x, p_y]^T. Given that our state is the 4D vector x = [p_x, p_y, v_x, v_y]^T, the measurement matrix H is defined as follows

\[
\begin{bmatrix} p_x \\ p_y \end{bmatrix} = H \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} + \omega, \qquad
H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \qquad (6)
\]
The H matrix projects the belief of the tracked object’s
current 4D state into the 2D measurement space of the
sensor. The measurement covariance matrix R represents
the uncertainty present in the sensor measurements. Since
LiDAR measurements are 2D in nature, the noise vector ω
is 2D as well. The measurement covariance matrix R can be
defined as

\[
R = E(\omega\omega^T) = \begin{bmatrix} \sigma^2_{p_x} & 0 \\ 0 & \sigma^2_{p_y} \end{bmatrix} \qquad (7)
\]

The parameters of the measurement covariance matrix R are provided by the manufacturer. It is to be noted that the noise processes for the sensor are assumed to be uncorrelated.
Radar measurement: Although the LiDAR sensor mea-
sures the relative distance of the tracked object with high
accuracy, it cannot measure the relative velocity of the object
w.r.t the sensor. This is where the radar is advantageous. The
radar measures the radial velocity of the moving object w.r.t
sensor using Doppler effect. The Doppler effect measures the
change in frequency of the radar waves based on the relative
movement of the object w.r.t sensor.
Fig. 2. Overview of Radar Measurement
As shown in Fig 2, the x axis always points in vehicle’s
direction of motion and the y axis points to the left. Instead
of measuring the 2D pose of the object, the radar measures
• Range(ρ), radial distance from origin.
• Bearing(φ), angle between ρ and x.
• Radial Velocity( ˙ρ), rate of change of ρ.
The measurement function for the radar is defined as z = h(x') + ω, where z = [ρ, φ, ρ̇]^T and ω ∼ N(0, R) is
the measurement noise. Compared to the LiDAR, there is
no H matrix that directly maps the state vector x' to the polar measurement space of the radar. To convert from Cartesian to polar coordinates we use the following nonlinear function h(x'), which maps [p_x', p_y', v_x', v_y']^T to [ρ, φ, ρ̇]^T:

\[
h(x') = \begin{bmatrix}
\sqrt{p_x'^2 + p_y'^2} \\[4pt]
\tan^{-1}\!\left(p_y' / p_x'\right) \\[4pt]
\dfrac{p_x' v_x' + p_y' v_y'}{\sqrt{p_x'^2 + p_y'^2}}
\end{bmatrix} \qquad (8)
\]
Assuming that the 3 measurement components are not cross-correlated, the radar measurement covariance matrix R is represented as

\[
R = \begin{bmatrix} \sigma^2_{\rho} & 0 & 0 \\ 0 & \sigma^2_{\phi} & 0 \\ 0 & 0 & \sigma^2_{\dot{\rho}} \end{bmatrix} \qquad (9)
\]
Extended Kalman Filter: In the case of the LiDAR measurement, our measurement function is linear. Hence, assuming the state x is a Gaussian distribution, the measurement vector z is also Gaussian. But in the case of the radar, the measurement function is nonlinear, and the output of a Gaussian distribution passed through a nonlinear function isn't Gaussian anymore. Hence in the case of radar, the standard Kalman filter update equations aren't applicable, since the Kalman filter is inherently a Gaussian filter. To fix this problem, a possible solution is to linearize the nonlinear function h(x') about the mean location µ, which is the best estimate of our predicted distribution. Using a first-order Taylor expansion, we linearize the function h(x') as

\[
h(x') \approx h(\mu) + \frac{\partial h(\mu)}{\partial x}(x' - \mu) \qquad (10)
\]

Similarly, the state transition function f(x) may also be linearized in the same way if it is nonlinear. Generalizing this to higher dimensions, we have to calculate a Jacobian matrix instead of the scalar derivative ∂h(µ)/∂x. For the case of the radar measurement, the 3×4 Jacobian matrix is as follows
\[
H_j = \begin{bmatrix}
\dfrac{p_x}{\sqrt{p_x^2 + p_y^2}} & \dfrac{p_y}{\sqrt{p_x^2 + p_y^2}} & 0 & 0 \\[8pt]
-\dfrac{p_y}{p_x^2 + p_y^2} & \dfrac{p_x}{p_x^2 + p_y^2} & 0 & 0 \\[8pt]
\dfrac{p_y (v_x p_y - v_y p_x)}{(p_x^2 + p_y^2)^{3/2}} & \dfrac{p_x (v_y p_x - v_x p_y)}{(p_x^2 + p_y^2)^{3/2}} & \dfrac{p_x}{\sqrt{p_x^2 + p_y^2}} & \dfrac{p_y}{\sqrt{p_x^2 + p_y^2}}
\end{bmatrix} \qquad (11)
\]
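The Jacobian of Equation (11) could be computed as in the sketch below; the guard for p_x² + p_y² ≈ 0 is our own safeguard, and the names are illustrative.

```cpp
#include <Eigen/Dense>
#include <cmath>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// 3x4 Jacobian of the radar measurement function, evaluated at x = [px, py, vx, vy].
MatrixXd radarJacobian(const VectorXd &x) {
  double px = x(0), py = x(1), vx = x(2), vy = x(3);
  double c1 = px * px + py * py;
  MatrixXd Hj = MatrixXd::Zero(3, 4);
  if (c1 < 1e-6) return Hj;            // avoid division by zero near the origin
  double c2 = std::sqrt(c1);
  double c3 = c1 * c2;                 // (px^2 + py^2)^(3/2)
  Hj << px / c2,                       py / c2,                       0,       0,
        -py / c1,                      px / c1,                       0,       0,
        py * (vx * py - vy * px) / c3, px * (vy * px - vx * py) / c3, px / c2, py / c2;
  return Hj;
}
```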
The standard Kalman filter and the extended Kalman filter equations are similar. The main differences are
• The F matrix is replaced by the Jacobian F_j when calculating P'.
• The H matrix in the Kalman filter is replaced by the Jacobian H_j when calculating S, K and P.
Fig. 3. Comparison : Kalman Filter vs EKF
The difference between the regular and extended Kalman filter equations is explained in Fig 3. For this project, we use a linear constant-velocity model for the prediction step, so we don't need the Jacobian F_j and the standard Kalman filter prediction equations apply. The measurement update for LiDAR also uses the standard Kalman filter update equations, since the LiDAR measurement model is linear; we use the H matrix for calculating y, S, K and P. However, as explained before, the measurement update for the radar requires the extended Kalman filter equations. The processing flow of the EKF is summarized in Fig 4. Summarizing the processing flow of the extended Kalman filter, here are a few points.
• The state of the moving pedestrian is represented by a 2D position and 2D velocity. The EKF measurement noise values are initialized using the data provided by the sensor manufacturer.
• Each time we receive a new measurement, the process
measurement module is triggered. On the first iteration,
we initialize the state vector x and the process covari-
ance matrix Q. This is followed by the prediction and
measurement update.
• Before predicting the state and covariance matrix, we
compute the elapsed time between previous and current
observation. We use elapsed time ∆t to calculate the
new state transition matrix F and process covariance
matrix Q. Then we predict the state and covariance of
the process model.
• The measurement update step depends on the sensor type. If the current observation comes from a radar sensor, then we linearize h(x'). We compute the new Jacobian matrix H_j, use the measurement function to project the predicted state, and then call the measurement update.
• If the current observation comes from a LiDAR, we use the H matrix instead of linearizing the measurement function h(x'). We set up the measurement covariance matrix for the LiDAR and then call the measurement update.
Fig. 4. Process flow of Extended Kalman Filter
Root mean square error: To check how far the estimated states are from the true states, we use the RMSE metric. In tracking, the RMSE gives a measure of the deviation of the estimated state from the true state. At a given time t, we have two states, x_t^est (estimated state) and x_t^true (ground-truth state). The difference between the estimated state and the true state is called the residual, defined as (x_t^est − x_t^true). The residual is squared and averaged over the entire time period, and its square root gives the error metric. The lower the error, the higher the estimation accuracy. The RMSE is expressed as

\[
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(x_t^{est} - x_t^{true}\right)^2}
\]
EKF Results: Two measurement datasets are used for the
estimation process. Both datasets have a collection of LiDAR
and Radar measurements at each time interval. To give a
sense of what the measurements look like, Figure 5 is a
snapshot of the first 6 measurements of one of the .txt files.
Fig. 5. Measurement data
R stands for radar data and L stands for LiDAR data. In the rows representing a radar measurement, the first 3 values are ρ, φ and ρ̇ of the tracked object relative to the sensor. In the rows representing a LiDAR measurement, the first two values are the positions px and py of the tracked object relative to the sensor. The last 4 values in every row represent the ground truth values of px, py, vx and vy. The big number is the timestamp. Timestamps are used
for logging a sequence of events, so that we know exactly,
for example, when the measurements were generated. Fig 6
shows how the timestamp works.
Fig. 6. Measurement data (timestamp)
The timestamp values are used to calculate the elapsed time between two observations. The difference is divided by 10^6 to express ∆t in seconds rather than microseconds.

\[
\Delta t = \frac{\mathrm{timestamp}(k+1) - \mathrm{timestamp}(k)}{1{,}000{,}000} \qquad (12)
\]
Figure 7 and Figure 12 illustrate the estimated trajectory
(px and py) of the pedestrian, measurement data and the
ground truth data. In Figure 7, it is interesting to note that the extended Kalman filter does not estimate the position of the bicycle accurately where there is a sharp turn. This is because the extended Kalman filter uses a constant-velocity linear process model, so it is unable to track a highly nonlinear maneuver performed by the bicycle. Later we delve into the unscented Kalman filter, which helps circumvent this problem by using a nonlinear process model.
Fig. 7. Estimated trajectory using EKF : Dataset 1
Fig. 8. Estimated px using EKF : Dataset 1
Fig. 9. Estimated py using EKF : Dataset 1
Fig. 10. Estimated vx using EKF : Dataset 1
Fig. 11. Estimated vy using EKF : Dataset 1
Fig. 12. Estimated trajectory using EKF : Dataset 2
In Table II, the root mean square error (RMSE) values
are listed for two cases. In the first case, the RMSE was
calculated by using just LiDAR data to estimate the states.
Fig. 13. Estimated px using EKF : Dataset 2
Fig. 14. Estimated py using EKF : Dataset 2
Fig. 15. Estimated vx using EKF : Dataset 2
Fig. 16. Estimated vy using EKF : Dataset 2
In the second case, both LiDAR and Radar data have been
used to estimate the states using sensor fusion techniques.
We clearly see that the RMSE values are lower for the second case, which means that sensor fusion is advantageous for estimating the state of a tracked object compared to using one sensor alone. In Table II, it is particularly interesting to note that the RMSE values for vx and vy drop sharply when both sensors are used as compared to using just a single sensor (LiDAR). This is because the LiDAR can only measure the position of the object, whereas the radar can also measure the radial velocity of the tracked object. Hence a fusion of radar and LiDAR data is particularly advantageous when it comes to estimating the velocity of a tracked object.
State       LiDAR only    LiDAR & Radar
Dataset 1
px          0.115         0.065
py          0.095         0.060
vx          0.794         0.531
vy          0.694         0.544
Dataset 2
px          0.218         0.185
py          0.194         0.190
vx          0.937         0.476
vy          0.834         0.814

TABLE II
ROOT MEAN SQUARE ERROR FOR EKF : DATASET 1 AND DATASET 2
III. UNSCENTED KALMAN FILTER
We know that the EKF has two major drawbacks [26]:
• Linearization may produce highly unstable filters if the assumption of local linearity is violated.
• The derivation and calculation of Jacobian matrices may be nontrivial and computationally expensive.
Need for UKF: The central operation which the Kalman
Filter performs is the propagation of a gaussian random
variable (GRV) through the system dynamics (process model
and measurement model). In the EKF, the state distribution
is approximated by a GRV which is then propagated ana-
lytically through the first order linearization of a nonlinear
system [25, 26]. For example, while implementing the EKF
in the previous section, we linearized the nonlinear measure-
ment model for Radar (process model was already linear)
and used the first order approximations of the model to
perform the update step. These approximations can introduce
errors in the true posterior mean and covariance of the
predicted and/or updated GRV which may lead to sub-
optimal performance and sometimes divergence of the filter.
The UKF circumvents this problem by using a deterministic sampling approach [25, 26]. In the UKF, the state distribution, which is approximated by a GRV, is represented by a set of carefully chosen sample points. These points are propagated through the nonlinear system. The transformed points capture the mean and covariance accurately to 3rd order (Taylor series expansion) for any nonlinear system. In contrast, the EKF only achieves first-order accuracy. Also, the computational complexity of the UKF is of the same order as the EKF [25, 26].
CTRV Model: In the EKF section, we used a constant
velocity (CV) model for pedestrian tracking. In the literature,
we can find various implementations of motion models [34]
for tracking such as
• Constant turn rate and velocity magnitude (CTRV)
model
• Constant turn rate and acceleration model (CTRA)
model
• Constant steering angle and velocity (CSAV) model
• Constant curvature and acceleration (CCA) model
While using the constant velocity (CV) model in the EKF implementation in the previous section, we were simplifying the way vehicles actually move. This was a major drawback of that implementation, because a process model assuming constant velocity will predict moving vehicles and pedestrians incorrectly. Hence, we chose the constant turn rate and velocity magnitude (CTRV) model to implement the unscented Kalman filter, as shown in Fig 17. In the CTRV model, the state vector and the nonlinear motion model x_{k+1} = f(x_k, ν_k) are defined as
\[
x = \begin{bmatrix} p_x \\ p_y \\ v \\ \psi \\ \dot{\psi} \end{bmatrix}, \qquad
x_{k+1} = x_k + \int_{t_k}^{t_{k+1}}
\begin{bmatrix} \dot{p}_x(t) \\ \dot{p}_y(t) \\ \dot{v}(t) \\ \dot{\psi}(t) \\ \ddot{\psi}(t) \end{bmatrix} dt + \nu_k \qquad (13)
\]
where (p_x, p_y) is the position of the tracked object, v is the velocity magnitude, ψ and ψ̇ are the yaw angle and yaw rate respectively, and ν_k represents the process noise. The motion model can be rewritten as
\[
x_{k+1} = x_k +
\begin{bmatrix}
v_k \int_{t_k}^{t_{k+1}} \cos\!\big(\psi_k + \dot{\psi}_k (t - t_k)\big)\, dt \\[4pt]
v_k \int_{t_k}^{t_{k+1}} \sin\!\big(\psi_k + \dot{\psi}_k (t - t_k)\big)\, dt \\[4pt]
0 \\
\dot{\psi}_k \Delta t \\
0
\end{bmatrix} + \nu_k \qquad (14)
\]
Solving the integrals, we obtain the final CTRV motion model [34].

\[
x_{k+1} = x_k +
\begin{bmatrix}
\dfrac{v_k}{\dot{\psi}_k}\big(\sin(\psi_k + \dot{\psi}_k \Delta t) - \sin(\psi_k)\big) \\[6pt]
\dfrac{-v_k}{\dot{\psi}_k}\big(\cos(\psi_k + \dot{\psi}_k \Delta t) - \cos(\psi_k)\big) \\[6pt]
0 \\
\dot{\psi}_k \Delta t \\
0
\end{bmatrix} + \nu_k \qquad (15)
\]
There may be situations when the tracked vehicle is driving on a straight road. In this case, the yaw rate ψ̇ is zero. Since division by zero is unacceptable, we modify the equations of the CTRV model as

\[
x_{k+1} = x_k +
\begin{bmatrix}
v_k \cos(\psi_k)\,\Delta t \\
v_k \sin(\psi_k)\,\Delta t \\
0 \\
\dot{\psi}_k \Delta t \\
0
\end{bmatrix} \qquad (16)
\]
The vector ν_k in the process model is the noise vector, defined as ν_k = [ν_a,k, ν_ψ̈,k]^T. The noise process ν_a,k represents the longitudinal acceleration noise. It influences the longitudinal speed of the tracked object and randomly changes its value at every time step k. It is expressed as ν_a,k ∼ N(0, σ²_a), i.e. normally distributed white noise with zero mean and variance σ²_a. The noise process ν_ψ̈,k ∼ N(0, σ²_ψ̈) is also normally distributed white noise with zero mean and variance σ²_ψ̈.

Fig. 17. Constant turn rate and velocity magnitude (CTRV) model

In Fig 17, it is interesting to notice that the yaw acceleration noise ν_ψ̈,k changes the turning radius of the tracked vehicle, so it would have some effect on the position of the tracked object. However, we assume that the effect of the yaw acceleration noise on the position is negligible and neglect it here. In the CTRV model, we assume that the noise processes are uncorrelated. The process covariance matrix Q is defined as

\[
Q = E(\nu_k \nu_k^T) = \begin{bmatrix} \sigma^2_a & 0 \\ 0 & \sigma^2_{\ddot{\psi}} \end{bmatrix}.
\]

Considering the effect of the process noise ν_a,k and ν_ψ̈,k on the states, the process noise vector ν_k is defined as
\[
\nu_k =
\begin{bmatrix}
\tfrac{1}{2}(\Delta t)^2 \cos(\psi_k)\,\nu_{a,k} \\[2pt]
\tfrac{1}{2}(\Delta t)^2 \sin(\psi_k)\,\nu_{a,k} \\[2pt]
\Delta t\,\nu_{a,k} \\[2pt]
\tfrac{1}{2}(\Delta t)^2\,\nu_{\ddot{\psi},k} \\[2pt]
\Delta t\,\nu_{\ddot{\psi},k}
\end{bmatrix} \qquad (17)
\]
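A sketch of how one augmented state could be propagated through Equations (15)-(17) is given below; the threshold used to switch to the zero-yaw-rate form of Equation (16), and all names, are our own choices.

```cpp
#include <Eigen/Dense>
#include <cmath>
using Eigen::VectorXd;

// Propagate one augmented state [px, py, v, psi, psi_dot, nu_a, nu_psidd] over dt.
VectorXd ctrvProcess(const VectorXd &xa, double dt) {
  double px = xa(0), py = xa(1), v = xa(2), psi = xa(3), psid = xa(4);
  double nu_a = xa(5), nu_psidd = xa(6);

  VectorXd x(5);
  if (std::fabs(psid) > 1e-3) {                    // Equation (15), turning case
    x(0) = px + v / psid * (std::sin(psi + psid * dt) - std::sin(psi));
    x(1) = py + v / psid * (-std::cos(psi + psid * dt) + std::cos(psi));
  } else {                                         // Equation (16), straight-line case
    x(0) = px + v * std::cos(psi) * dt;
    x(1) = py + v * std::sin(psi) * dt;
  }
  x(2) = v;
  x(3) = psi + psid * dt;
  x(4) = psid;

  // Process-noise contribution of Equation (17)
  x(0) += 0.5 * dt * dt * std::cos(psi) * nu_a;
  x(1) += 0.5 * dt * dt * std::sin(psi) * nu_a;
  x(2) += dt * nu_a;
  x(3) += 0.5 * dt * dt * nu_psidd;
  x(4) += dt * nu_psidd;
  return x;
}
```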
Unscented Transformation: The unscented transformation (UT) is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation [25, 26]. The procedure is illustrated in Fig 18.
Fig. 18. The principle of unscented transformation
Let us consider propagating a GRV x (of dimension n_x) through a nonlinear function y = f(x). Assume that x has mean x̄ and covariance P_xx. To calculate the mean ȳ and covariance P_yy of y, we form a matrix X of 2n_x + 1 sigma vectors X_i (with corresponding weights W_i). The sigma vectors are chosen around the mean x̄ and in a certain relation to the square root of the covariance P_xx, as shown below:

\[
\begin{aligned}
\mathcal{X}_0 &= \bar{x} \\
\mathcal{X}_i &= \bar{x} + \big(\sqrt{(n_x + \lambda)P_{xx}}\big)_i, & i &= 1, \dots, n_x \\
\mathcal{X}_i &= \bar{x} - \big(\sqrt{(n_x + \lambda)P_{xx}}\big)_{i-n_x}, & i &= n_x + 1, \dots, 2n_x \\
W_0 &= \frac{\lambda}{n_x + \lambda} \\
W_i &= \frac{1}{2(n_x + \lambda)}, & i &= 1, \dots, 2n_x
\end{aligned} \qquad (18)
\]
where (√((n_x + λ)P_xx))_i is the i-th row or column of the matrix square root of (n_x + λ)P_xx, W_i is the weight associated with the i-th point, and λ is a scaling parameter which determines the spread of the sigma points around the mean x̄. For our project, we have considered λ = 3 − n_x. The unscented transformation procedure is as follows [25, 26]:
• Instantiate each sigma vector through the function to yield a set of transformed sigma vectors, Y_i = f(X_i).
• The mean of the transformed distribution is given by the weighted average of the transformed sigma vectors,

\[
\bar{y} = \sum_{i=0}^{2n_x} W_i \mathcal{Y}_i \qquad (19)
\]

• The covariance of the transformed distribution is given by the weighted outer product of the transformed sigma vectors,

\[
P_{yy} = \sum_{i=0}^{2n_x} W_i (\mathcal{Y}_i - \bar{y})(\mathcal{Y}_i - \bar{y})^T \qquad (20)
\]
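The following sketch shows a generic unscented transformation following Equations (18)-(20), with λ = 3 − n_x as used in this project; the function is our own illustrative helper, not the project's code.

```cpp
#include <Eigen/Dense>
#include <functional>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Propagate a Gaussian (x_mean, Pxx) through a nonlinear map f via the UT.
void unscentedTransform(const VectorXd &x_mean, const MatrixXd &Pxx,
                        const std::function<VectorXd(const VectorXd &)> &f,
                        VectorXd &y_mean, MatrixXd &Pyy) {
  const int nx = x_mean.size();
  const double lambda = 3.0 - nx;                        // spread parameter used in this project
  MatrixXd L = ((nx + lambda) * Pxx).llt().matrixL();    // matrix square root via Cholesky

  MatrixXd X(nx, 2 * nx + 1);                            // sigma points, Equation (18)
  X.col(0) = x_mean;
  for (int i = 0; i < nx; ++i) {
    X.col(i + 1)      = x_mean + L.col(i);
    X.col(i + 1 + nx) = x_mean - L.col(i);
  }

  VectorXd W(2 * nx + 1);                                // weights, Equation (18)
  W(0) = lambda / (nx + lambda);
  for (int i = 1; i < 2 * nx + 1; ++i) W(i) = 0.5 / (nx + lambda);

  VectorXd y0 = f(X.col(0));
  MatrixXd Y(y0.size(), 2 * nx + 1);                     // transformed sigma points
  Y.col(0) = y0;
  for (int i = 1; i < 2 * nx + 1; ++i) Y.col(i) = f(X.col(i));

  y_mean = Y * W;                                        // weighted mean, Equation (19)
  Pyy = MatrixXd::Zero(y0.size(), y0.size());            // weighted covariance, Equation (20)
  for (int i = 0; i < 2 * nx + 1; ++i) {
    VectorXd d = Y.col(i) - y_mean;
    Pyy += W(i) * d * d.transpose();
  }
}
```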
To capture the importance of the unscented transformation, a simple example for a 2D system is shown in Fig 19 [25, 26]. The left plot shows the true mean and covariance propagation using Monte-Carlo sampling (which we will discuss later in the context of particle filters). The centre plot shows the results of a linearization approach as used in the EKF. The right plot shows the application of the unscented transformation (UT). It is clear that the UKF outperforms the EKF.
Fig. 19. Comparison: EKF vs UKF
UKF roadmap: The UKF is a straightforward extension of the unscented transformation technique. The steps involved in applying the unscented Kalman filter are as follows
• Prediction
1) Generate Sigma points
2) Predict Sigma points
3) Predict Mean and Covariance
• Update
1) Predict measurement
2) Update state
At the beginning of the prediction step, we have the posterior mean x̄_{k|k} and the posterior covariance P_{k|k} from the last iteration. They represent the distribution of the current state. The number of sigma points to be generated depends on the dimension of the state. For our CTRV model, we have a 5-D state, hence n_x = 5, and we would generate n_σ = 2n_x + 1 = 11 sigma points. The sigma points are generated as follows:

\[
\mathcal{X}_{k|k} = \Big[\; \bar{x}_{k|k} \quad \bar{x}_{k|k} + \sqrt{(n_x + \lambda)P_{k|k}} \quad \bar{x}_{k|k} - \sqrt{(n_x + \lambda)P_{k|k}} \;\Big] \qquad (21)
\]
The square root of the matrix (n_x + λ)P_{k|k} is calculated using the Cholesky decomposition. To take into account the nonlinear effect of the process noise ν_k = [ν_a,k, ν_ψ̈,k]^T of the CTRV model, we include the process noise in the state vector. This is called augmentation. Also recall that for the CTRV model

\[
Q = E(\nu_k \nu_k^T) = \begin{bmatrix} \sigma^2_a & 0 \\ 0 & \sigma^2_{\ddot{\psi}} \end{bmatrix} \qquad (22)
\]
The augmented state vector x_{a,k|k} and the augmented state covariance matrix P_{a,k|k} are defined as

\[
x_{a,k|k} =
\begin{bmatrix} p_x \\ p_y \\ v \\ \psi \\ \dot{\psi} \\ \nu_a \\ \nu_{\ddot{\psi}} \end{bmatrix}, \qquad
P_{a,k|k} = \begin{bmatrix} P_{k|k} & 0 \\ 0 & Q \end{bmatrix} \qquad (23)
\]
The number of sigma points after augmentation is n_σ = 2n_a + 1 = 15, where n_a = 7 is the augmented state dimension. The 4 extra sigma points represent the uncertainty caused by the process noise. We calculate the augmented sigma points X_{a,k|k} using Equation 21, but with x_{k|k} replaced by the augmented state vector x_{a,k|k} and n_x by n_a. In the prediction step, every column of the augmented sigma point matrix X_{a,k|k} is passed through the CTRV process model. We calculate X_{k+1|k} = f(X_{a,k|k}), where X_{k+1|k} is the predicted sigma point matrix. Please note that X_{a,k|k} is a 7x15 matrix and X_{k+1|k} is a 5x15 matrix, since each predicted state is a 5-D vector. The prediction step of the UKF is illustrated in Figure 20.
Fig. 20. Generation of predicted sigma points
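A sketch of the augmentation and sigma-point generation of Equations (21)-(23), using a Cholesky factorization for the matrix square root, might look as follows; the names are ours.

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Build the 7x15 augmented sigma point matrix from the 5-D state and covariance.
MatrixXd augmentedSigmaPoints(const VectorXd &x, const MatrixXd &P,
                              double std_a, double std_psidd) {
  const int n_x = 5, n_aug = 7;
  const double lambda = 3.0 - n_aug;

  VectorXd x_aug = VectorXd::Zero(n_aug);
  x_aug.head(n_x) = x;                            // noise components have zero mean

  MatrixXd P_aug = MatrixXd::Zero(n_aug, n_aug);
  P_aug.topLeftCorner(n_x, n_x) = P;
  P_aug(5, 5) = std_a * std_a;                    // sigma_a^2
  P_aug(6, 6) = std_psidd * std_psidd;            // sigma_psi_ddot^2

  MatrixXd L = ((lambda + n_aug) * P_aug).llt().matrixL();
  MatrixXd Xsig(n_aug, 2 * n_aug + 1);
  Xsig.col(0) = x_aug;
  for (int i = 0; i < n_aug; ++i) {
    Xsig.col(i + 1)         = x_aug + L.col(i);
    Xsig.col(i + 1 + n_aug) = x_aug - L.col(i);
  }
  return Xsig;
}
```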
The predicted mean x_{k+1|k} and covariance P_{k+1|k} of the sigma points are as follows:

\[
\begin{aligned}
x_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i \mathcal{X}_{k+1|k,i} \\
P_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i (\mathcal{X}_{k+1|k,i} - x_{k+1|k})(\mathcal{X}_{k+1|k,i} - x_{k+1|k})^T \\
\omega_i &= \frac{\lambda}{\lambda + n_a}, \quad i = 1 \\
\omega_i &= \frac{1}{2(\lambda + n_a)}, \quad i = 2, \dots, n_\sigma
\end{aligned} \qquad (24)
\]
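The predicted mean and covariance of Equation (24) could be computed as below; wrapping the yaw residual into [−π, π] is a common practical addition that the text does not spell out.

```cpp
#include <Eigen/Dense>
#include <cmath>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Weighted mean and covariance of the predicted sigma points (Equation (24)).
void predictMeanAndCovariance(const MatrixXd &Xsig_pred,  // 5 x 15
                              const VectorXd &weights,    // 15
                              VectorXd &x, MatrixXd &P) {
  x = Xsig_pred * weights;                                // weighted mean
  P = MatrixXd::Zero(5, 5);
  for (int i = 0; i < Xsig_pred.cols(); ++i) {
    VectorXd d = Xsig_pred.col(i) - x;
    while (d(3) >  M_PI) d(3) -= 2.0 * M_PI;              // yaw residual wrap-around (practical detail)
    while (d(3) < -M_PI) d(3) += 2.0 * M_PI;
    P += weights(i) * d * d.transpose();
  }
}
```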
In the measurement step, we use different approaches based on the type of sensor measurement received. If we receive a LiDAR measurement, we use the same steps as in the EKF to predict the measurement. However, when we receive a radar measurement, we use the unscented transformation to predict the measurement, since the radar measurement model is nonlinear. To predict the measurement, we could use the same sequence of steps as in the prediction step, but in order to save computation time we reuse the sigma points X_{k+1|k} generated in the prediction step instead of generating new sigma points. We also note that the measurement noise doesn't have a nonlinear effect on the measurement model. Since the effect of the measurement noise is purely additive (z_{k+1} = h(x_{k+1}) + ω_{k+1}), we can skip the augmentation step as well. The transformed measurement sigma points are generated the same way as in the prediction step. This is illustrated in Fig 21.
Fig. 21. Generation of measurement sigma points
The predicted mean z_{k+1|k} and the covariance S_{k+1|k} of the measurement sigma points are as follows:

\[
\begin{aligned}
z_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i \mathcal{Z}_{k+1|k,i} \\
S_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i (\mathcal{Z}_{k+1|k,i} - z_{k+1|k})(\mathcal{Z}_{k+1|k,i} - z_{k+1|k})^T + R
\end{aligned} \qquad (25)
\]

where R = E(ωω^T) is the measurement noise covariance.
In the UKF, we need the cross-correlation T_{k+1|k} between the sigma points in state space and measurement space to calculate the Kalman gain. The equations for T_{k+1|k} (cross-correlation), K_{k+1|k} (Kalman gain), x_{k+1|k+1} (updated state) and P_{k+1|k+1} (updated covariance) are as follows:

\[
\begin{aligned}
T_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i (\mathcal{X}_{k+1|k,i} - x_{k+1|k})(\mathcal{Z}_{k+1|k,i} - z_{k+1|k})^T \\
K_{k+1|k} &= T_{k+1|k} S^{-1}_{k+1|k} \\
x_{k+1|k+1} &= x_{k+1|k} + K_{k+1|k}(z_{k+1} - z_{k+1|k}) \\
P_{k+1|k+1} &= P_{k+1|k} - K_{k+1|k} S_{k+1|k} K^T_{k+1|k}
\end{aligned} \qquad (26)
\]
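Equation (26) translates into a short update routine such as the following sketch; all names are ours.

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// UKF measurement update (Equation (26)).
void ukfUpdate(const MatrixXd &Xsig_pred,   // predicted state sigma points (5 x n_sig)
               const MatrixXd &Zsig,        // measurement sigma points     (3 x n_sig)
               const VectorXd &weights,     // sigma-point weights          (n_sig)
               const VectorXd &z_pred,      // predicted measurement mean
               const MatrixXd &S,           // predicted measurement covariance
               const VectorXd &z,           // incoming radar measurement
               VectorXd &x, MatrixXd &P) {  // state mean/covariance to update
  MatrixXd T = MatrixXd::Zero(x.size(), z.size());        // cross-correlation
  for (int i = 0; i < Xsig_pred.cols(); ++i) {
    T += weights(i) * (Xsig_pred.col(i) - x) * (Zsig.col(i) - z_pred).transpose();
  }
  MatrixXd K = T * S.inverse();                            // Kalman gain
  x = x + K * (z - z_pred);                                // updated state
  P = P - K * S * K.transpose();                           // updated covariance
}
```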
Noise Parameters: In this section, we will describe how
we chose the noise parameters. This section applies to both
EKF and UKF. The process noise and the measurement
noise vectors and matrices are illustrated in Figure 22.
Fig. 22. Overview of Noise Parameters
The measurement noise vector ω_k is an indication of how precise the radar sensor's measurements are. These values are usually given in the sensor manual. For example, the sensor manual tells us that the radar measures the distance ρ with a standard deviation of σ_ρ = 0.3 meters. This gives us the value σ²_ρ = 0.09 in the measurement noise matrix R. The process noise parameters are harder to determine, since we have to depend on approximation. Based on intuition, we decide the maximum acceleration at which the tracked object could possibly move. For example, in our project we are tracking a pedestrian riding a bicycle, and we assume that the bicycle can't accelerate at more than 1 m/s². For calculating the standard deviation σ_a of the process noise, we use a rule of thumb: the standard deviation is taken as half of the maximum acceleration, so σ²_a = 0.25 m²/s⁴. To calculate σ_ψ̈, we consider the maximum angular acceleration of the bicycle to be 1 rad/s², and using the same rule of thumb we obtain σ²_ψ̈ = 0.25 rad²/s⁴. The process noise really depends on the application. If we want smoother estimates, we choose a lower process noise. If we expect the tracked object to behave erratically, then we choose a higher process noise.
Filter Consistency: The goal of a filter consistency check is to detect whether the filter's estimate is incorrect [2]. The validity of the estimate can be determined based on whether the estimate of the filter agrees with the observed measurements. A filter is consistent if it provides a realistic estimation uncertainty. The Normalized Innovation Squared (NIS) check for Kalman filters is based on the assumption that the normalized innovations of a Kalman filter have unit variance under nominal conditions [2]. The innovation is the difference between the predicted measurement and the actual measurement. We normalize the innovation by putting it in relation to S_{k+1|k}, as shown in Equation 27. The NIS follows a chi-squared (χ²) distribution and is expressed as

\[
\epsilon = (z_{k+1} - z_{k+1|k})^T S^{-1}_{k+1|k} (z_{k+1} - z_{k+1|k}) \qquad (27)
\]

Based on the χ² distribution table and the fact that the radar measurement is 3-dimensional, we choose the 5% statistic, i.e. a threshold of 7.815. The NIS plots for dataset 1 and dataset 2 are shown in Figure 29 and Figure 36 respectively.
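The NIS statistic of Equation (27) is a one-liner; a sketch is given below, with the 7.815 threshold corresponding to the 5% tail of the χ² distribution with 3 degrees of freedom.

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// NIS of one radar update; compare against 7.815 (chi-squared, 3 d.o.f., 95%).
double computeNIS(const VectorXd &z, const VectorXd &z_pred, const MatrixXd &S) {
  VectorXd innovation = z - z_pred;
  return innovation.dot(S.inverse() * innovation);
}
```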
UKF Results: Table III and Table IV compare the performance of the EKF and UKF for datasets 1 and 2 respectively.
State      RMSE (EKF)    RMSE (UKF)
LiDAR only
px         0.115         0.131
py         0.095         0.118
vx         0.794         0.731
vy         0.694         0.669
LiDAR & Radar
px         0.065         0.077
py         0.060         0.082
vx         0.531         0.590
vy         0.544         0.574

TABLE III
COMPARISON OF EKF AND UKF : DATASET 1
Based on the RMSE values, the UKF and EKF perform similarly for Dataset 1. However, the UKF outperforms the EKF for Dataset 2. For both datasets, we notice that the RMSE values are lower when both LiDAR and radar are used as compared to using just LiDAR. Thus we confirm that sensor fusion is advantageous in tracking applications irrespective of the type of estimation filter used.
State      RMSE (EKF)    RMSE (UKF)
LiDAR only
px         0.218         0.207
py         0.194         0.193
vx         0.937         0.480
vy         0.834         0.585
LiDAR & Radar
px         0.185         0.180
py         0.190         0.187
vx         0.476         0.262
vy         0.814         0.463

TABLE IV
COMPARISON OF EKF AND UKF : DATASET 2
In Fig 23 and Fig 30, the estimated trajectory of the bicycle is plotted for both datasets. We notice that the UKF does a good job of estimating the nonlinear trajectory of the bicycle. By comparing Fig 23 and Fig 7, we can conclude that the UKF estimates the trajectory around sharp corners better than the EKF. The estimated vx, vy and φ for Dataset 1 are plotted in Fig 26, Fig 27 and Fig 28 respectively. In Fig 24 and Fig 25, we plot the estimated px and py respectively.
Fig. 23. Estimated trajectory using UKF : Dataset 1
Fig. 24. Estimated px using UKF : Dataset 1
Fig. 25. Estimated py using UKF : Dataset 1
Fig. 26. Estimated vx using UKF : Dataset 1
The estimated vx, vy and φ for Dataset 2 are plotted in
Fig 33, Fig 34 and Fig 35 respectively. Based on the NIS plot
Fig. 27. Estimated vy using UKF : Dataset 1
Fig. 28. Estimated φ using UKF : Dataset 1
Fig. 29. Normalized Innovation Squared (NIS) using UKF : Dataset 1 (5% threshold = 7.815)
Fig. 30. Estimated trajectory using UKF : Dataset 2
(Fig 29 and Fig 36), we can conclude that the uncertainties have been accurately modeled.

Fig. 31. Estimated px using UKF : Dataset 2
Fig. 32. Estimated py using UKF : Dataset 2
Fig. 33. Estimated vx using UKF : Dataset 2
Fig. 34. Estimated vy using UKF : Dataset 2
Fig. 35. Estimated φ using UKF : Dataset 2
Fig. 36. Normalized Innovation Squared (NIS) using UKF : Dataset 2 (5% threshold = 7.815)
IV. SUMMARY
In this paper, a sensor fusion algorithm was presented to
track a pedestrian riding a bike by fusing the measurements
from a Velodyne LiDAR and a radar. Two Bayesian filters, the EKF and the UKF, were implemented to estimate the state of the pedestrian. In both filter implementations, it was noticed that the estimation error was lower when using both sensors as compared to using just LiDAR measurements. Hence it was concluded that sensor fusion is advantageous. In future work,
several other sensors, such as stereo cameras, could be used to further improve the accuracy of the position and velocity estimates of tracked objects.
REFERENCES
[1] Velodyne LiDAR, PUCK™ (VLP-16) data sheet, 2015.
[2] Sarah E Houts, Shandor G Dektor, Stephen M Rock. A robust frame-
work for failure detection and recovery for terrain-relative navigation.
Unmanned Untethered Submersible Technology,2013
[3] Jesse Levinson, Michael Montemerlo, Sebastian Thrun. Map-Based Precision Vehicle Localization in Urban Environments. Robotics: Science and Systems, Volume 4, Citeseer, 2007.
[4] Neil Getz. Control of balance for a nonlinear nonholonomic non-
minimum phase model of a bicycle. American Control Conference,
Vol 1, Pages 148-151, IEEE 1994.
[5] Brian Yamauchi. All-weather perception for man-portable robots
using ultrawideband radar. 2010 IEEE International Conference on
Robotics and Automation(ICRA), pages 3610–3615, IEEE 2010.
[6] Nico Kaempchen, Kay Ch Fuerstenberg, Alexander G Skibicki, Klaus
CJ Dietmayer. Sensor fusion for multiple automotive active safety
and comfort applications. Advanced Microsystems for Automotive
Applications 2004, pages 137–163, Springer 2004.
[7] Tarak Gandhi, Mohan Manubhai Trivedi. Pedestrian protection sys-
tems: Issues, survey, and challenges IEEE Transactions on intelligent
Transportation systems, Vol 8, No.3, pages 413-430, IEEE 2007.
[8] Daniel Olmeda, Arturo De LA Escalera, Jos M Armingol. Detection
and tracking of pedestrians in infrared images 2009 3rd International
Conference on Signals, Circuits and Systems (SCS), pages 1-6, IEEE
2009.
[9] Klaus CJ Dietmayer, Jan Sparbert, Daniel Streller. Model based object
classification and object tracking in traffic scenes from range images.
Proceedings of IV IEEE Intelligent Vehicles Symposium, Vol.200,
2001.
[10] Daniel Streller, Klaus Dietmayer. Object tracking and classification
using a multiple hypothesis approach. Intelligent Vehicles Symposium,
IEEE 2004.
[11] Cristiano Premebida, Nunes Urbano. A multi-target tracking and
GMM-classifier for intelligent vehicles. Intelligent Transportation
Systems Conference (ITSE), pages 313-318, IEEE 2006.
[12] Samuel Gidel, Paul Checchin, Christophe Blanc, Thierry Chateau,
Laurent Trassoudaine. Pedestrian detection method using a multilayer
laserscanner: Application in urban environment IEEE/RSJ Interna-
tional Conference on Intelligent Robots and Systems(IROS), pages
173-178, IEEE 2008.
[13] Uwe Franke, Armin Joos. Real-time stereo vision for urban traffic
scene understanding. Proceedings of the IEEE Intelligent Vehicles
Symposium, pages 273-278, IEEE 2000.
[14] Mirko Mahlisch, Roland Schweiger, Werner Ritter, Klaus Dietmayer.
Sensor fusion using spatio-temporal aligned video and LiDAR for
improved vehicle detection 2006 IEEE Intelligent Vehicles Symposium,
pages 424-429, IEEE 2006.
[15] Raphael Labayrade, Cyril Royere, Dominique Gruyer, Didier Aubert.
Cooperative fusion for multi-obstacles detection with use of stereo-
vision and laser scanner Autonomous Robots, Vol.19, No.2, pages
117-140, Springer 2005.
[16] Jeoung Woo, Young-Joong Kim, Jeong-on Lee, Myo-Taeg Lim. Lo-
calization of mobile robot using particle filter International Joint
Conference(SICE-ICASE), pages 3031-3034, IEEE 2006.
[17] Fredrik Gustafsson, Fredrik Gunnarsson, Niclas Bergman, Urban Fors-
sell, Jonas Jansson, Rickard Karlsson, P-J Nordlund. Particle filters
for positioning, navigation, and tracking. IEEE Transactions on Signal
Processing,Vol 50, No.2, Pages 425-437, IEEE 2002.
[18] Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics.
MIT press,2005
[19] Sebastian Thrun. Particle filters in robotics. Proceedings of the
Eighteenth conference on Uncertainty in artificial intelligence, pages
511-518, Morgan Kaufmann Publishers Inc., 2002.
[20] Dieter Fox, Sebastian Thrun, Wolfram Burgard, Frank Dellaert. Par-
ticle filters for mobile robot localization. Sequential Monte Carlo
methods in practice, pages 401-428, Springer 2001.
[21] Fengbing Luo, Bianjing Du, Zhen Fan. Mobile robot localization
based on particle filter. 11th World Congress on Intelligent Control
and Automation(WCICA), pages 3089-3093, IEEE 2014.
[22] Dongsheng Wang, Jianchao Zhao, Wei Wang. Particle filter based
robust mobile robot localization. International conference on Mecha-
tronics and Automation(ICMA), pages 817-821, IEEE 2009.
[23] Mirko Meuter, Uri Iurgel, Su-Birm Park, Anton Kummert. The
unscented Kalman filter for pedestrian tracking from a moving host.
2008 IEEE Intelligent Vehicles Symposium, pages 37-42, IEEE 2008.
[24] Daniel Olmeda, Arturo De La Escalera, Jos M Armingol. Detection
and tracking of pedestrians in infrared images. 2009 3rd International
Conference on Signals, Circuits and Systems(SCS),pages 1-6, IEEE
2009.
[25] Eric A Wan, Rudolph Van Der Merwe. The unscented Kalman Filter
for Nonlinear Estimation. Adaptive Systems for Signal Processing,
Communications and Control Symposium 2000, pages 153-158, IEEE
2000
[26] Simon J Julier, Jeffrey K Uhlmann. New Extension of the Kalman
Filter to Nonlinear Systems. AeroSense’97, International Society for
Optics and Photonics. pages 182-193, 1997
[27] Ehab I Al Khatib, Mohammad A Jaradat, Mamoun AbdelHafez, Milad
Roigari. Multiple sensor fusion for mobile robot localization and
navigation using the Extended Kalman Filter 2015 10th International
Symposium on Mechatronics and its Applications(ISMA), pages 1-5,
IEEE 2015.
[28] John Shackleton, Brian VanVoorst, Joel Hesch. Tracking people with
a 360-degree lidar. 2010 7th IEEE International Conference on
Advanced Video and Signal Based Surveillance(AVSS), pages 420-426,
IEEE 2010.
[29] Osama Masoud, Nikolaos P Papanikolopoulos. A novel method for
tracking and counting pedestrians in real-time using a single camera.
IEEE Transactions on Vehicular Technology, Vol.50, No.5, pages 1267-
1278, IEEE 2001.
[30] Hyunggi Cho, Young-Woo Seo, BVK Vijaya Kumar, Ragunathan Raj
Rajkumar. A multi-sensor fusion system for moving object detection
and tracking in urban driving environments. 2014 IEEE International
Conference on Robotics and Automation(ICRA), pages 1836-1843,
IEEE 2014.
[31] Christophe Blanc, Laurent Trassoudaine, Jean Gallice. EKF and
particle filter track-to-track fusion: a quantitative comparison from
radar/lidar obstacle tracks. 2005 8th International Conference on
Information Fusion, Vol.2, pages 7–pp, IEEE 2005.
[32] Fadi Fayad, Veronique Cherfaoui. Tracking objects using a laser
scanner in driving situation based on modeling target shape. 2007
IEEE Intelligent Vehicles Symposium, pages 44-49, IEEE 2007.
[33] Leonardo Marin, Marina Valles, Angel Soriano, Angel Valera, Pedro
Albertos. Event-based localization in ackermann steering limited
resource mobile robots. IEEE/ASME Transactions on Mechatronics,
Vol.19, No.4, pages 1171-1182, IEEE 2014.
[34] Robin Schubert, Eric Richter, Gerd Wanielik. Comparison and
evaluation of advanced motion models for vehicle tracking. 2008
11th International Conference on Information Fusion,pages 1–6, IEEE
2008.
What is Coordinate Measuring Machine? CMM Types, Features, FunctionsVIEW
 
Geometric constructions Engineering Drawing.pdf
Geometric constructions Engineering Drawing.pdfGeometric constructions Engineering Drawing.pdf
Geometric constructions Engineering Drawing.pdfJNTUA
 
Autodesk Construction Cloud (Autodesk Build).pptx
Autodesk Construction Cloud (Autodesk Build).pptxAutodesk Construction Cloud (Autodesk Build).pptx
Autodesk Construction Cloud (Autodesk Build).pptxMustafa Ahmed
 
Worksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxWorksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxMustafa Ahmed
 
Artificial Intelligence in due diligence
Artificial Intelligence in due diligenceArtificial Intelligence in due diligence
Artificial Intelligence in due diligencemahaffeycheryld
 
Augmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptxAugmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptxMustafa Ahmed
 
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and ToolsMaximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Toolssoginsider
 
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfInvolute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfJNTUA
 
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...josephjonse
 
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...Christo Ananth
 

Recently uploaded (20)

NO1 Best Powerful Vashikaran Specialist Baba Vashikaran Specialist For Love V...
NO1 Best Powerful Vashikaran Specialist Baba Vashikaran Specialist For Love V...NO1 Best Powerful Vashikaran Specialist Baba Vashikaran Specialist For Love V...
NO1 Best Powerful Vashikaran Specialist Baba Vashikaran Specialist For Love V...
 
NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024
NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024
NEWLETTER FRANCE HELICES/ SDS SURFACE DRIVES - MAY 2024
 
Fuzzy logic method-based stress detector with blood pressure and body tempera...
Fuzzy logic method-based stress detector with blood pressure and body tempera...Fuzzy logic method-based stress detector with blood pressure and body tempera...
Fuzzy logic method-based stress detector with blood pressure and body tempera...
 
5G and 6G refer to generations of mobile network technology, each representin...
5G and 6G refer to generations of mobile network technology, each representin...5G and 6G refer to generations of mobile network technology, each representin...
5G and 6G refer to generations of mobile network technology, each representin...
 
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdfInstruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
Instruct Nirmaana 24-Smart and Lean Construction Through Technology.pdf
 
Developing a smart system for infant incubators using the internet of things ...
Developing a smart system for infant incubators using the internet of things ...Developing a smart system for infant incubators using the internet of things ...
Developing a smart system for infant incubators using the internet of things ...
 
Circuit Breakers for Engineering Students
Circuit Breakers for Engineering StudentsCircuit Breakers for Engineering Students
Circuit Breakers for Engineering Students
 
SLIDESHARE PPT-DECISION MAKING METHODS.pptx
SLIDESHARE PPT-DECISION MAKING METHODS.pptxSLIDESHARE PPT-DECISION MAKING METHODS.pptx
SLIDESHARE PPT-DECISION MAKING METHODS.pptx
 
Seismic Hazard Assessment Software in Python by Prof. Dr. Costas Sachpazis
Seismic Hazard Assessment Software in Python by Prof. Dr. Costas SachpazisSeismic Hazard Assessment Software in Python by Prof. Dr. Costas Sachpazis
Seismic Hazard Assessment Software in Python by Prof. Dr. Costas Sachpazis
 
Dynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptxDynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptx
 
What is Coordinate Measuring Machine? CMM Types, Features, Functions
What is Coordinate Measuring Machine? CMM Types, Features, FunctionsWhat is Coordinate Measuring Machine? CMM Types, Features, Functions
What is Coordinate Measuring Machine? CMM Types, Features, Functions
 
Geometric constructions Engineering Drawing.pdf
Geometric constructions Engineering Drawing.pdfGeometric constructions Engineering Drawing.pdf
Geometric constructions Engineering Drawing.pdf
 
Autodesk Construction Cloud (Autodesk Build).pptx
Autodesk Construction Cloud (Autodesk Build).pptxAutodesk Construction Cloud (Autodesk Build).pptx
Autodesk Construction Cloud (Autodesk Build).pptx
 
Worksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxWorksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptx
 
Artificial Intelligence in due diligence
Artificial Intelligence in due diligenceArtificial Intelligence in due diligence
Artificial Intelligence in due diligence
 
Augmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptxAugmented Reality (AR) with Augin Software.pptx
Augmented Reality (AR) with Augin Software.pptx
 
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and ToolsMaximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
 
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfInvolute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
 
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
 
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...
 

The current LiDARs used in the automotive industry cannot measure the velocity of an object independently; they have to rely on the change in position between two or more successive scans. LiDAR measurements are also more affected by weather conditions than radar measurements, and they can become erroneous if dirt settles on the sensor, so the LiDAR must be kept clean. The prototype car used Velodyne's PUCK (VLP-16) sensor, which at the time was the smallest and most advanced product in Velodyne's 3D LiDAR range. The key features of the LiDAR are given in Table I [1].

TABLE I
PUCK (VLP-16) KEY FEATURES AND CHARACTERISTICS

Key Feature                               Nominal Value
Cost                                      $7,999
Mass                                      0.84 kg
Number of channels                        16
Range                                     100 m
Spatial resolution                        300,000 points/sec
Field of view (horizontal/azimuth)        360°
Field of view (vertical)                  30° (+15° to −15°)
Accuracy                                  ±3 cm (typical)
Angular resolution (vertical)             2°
Angular resolution (horizontal/azimuth)   0.1°–0.4°
Rotation rate                             5–20 Hz

LiDARs and radars each have their own strengths and weaknesses. By combining the data from both sensors, we can estimate the pedestrian's location, heading and speed more precisely than with either sensor alone. Every time we receive a new measurement from one of the sensors, the estimation function is triggered. The estimation function consists of two steps. In the prediction step, we predict the pedestrian's state (position, velocity) and covariance, taking into account the elapsed time Δt between the current and previous observations.
The measurement update step depends on the type of measurement received at each time step. If the current measurement comes from the LiDAR, we use the standard Kalman filter to update the pedestrian's state. If it comes from the radar, we use the Extended Kalman Filter or the Unscented Kalman Filter instead, because the radar measurement model is nonlinear and a standard Kalman filter works well only for linear models. The sensor fusion flowchart is illustrated in Fig 1.

Fig. 1. Overview of the sensor fusion algorithm

Initially the filter receives the pedestrian's initial position relative to the car. This measurement could come from the laser or the radar. The filter initializes the state and covariance of the process model based on this first measurement. After the filter receives the second measurement at time Δt, the algorithm predicts the process model's state and covariance. For the EKF we use a constant velocity model, so the velocity of the pedestrian riding the bicycle is assumed to be constant over the interval Δt. Based on this model, the filter predicts that the bicycle has moved by a distance vΔt, where v is the velocity of the pedestrian. In the update step, the filter combines the sensor measurement with the predicted state to produce a better estimate of the pedestrian's location and velocity. This is followed by another cycle of prediction and update; the Kalman filter is thus a recursive algorithm. Since the LiDAR reports measurements in Cartesian coordinates and the radar in polar coordinates, we use different measurement models in the update step, depending on which sensor's measurement arrives at that time step. If measurements from the LiDAR and the radar are received simultaneously, we run one prediction step followed by two update steps, instead of running a separate prediction step for each measurement.

II. KALMAN FILTER

A standard Kalman filter has two main steps: prediction and update.

Prediction: In this step, we assume that we know the current position and velocity of the object. Assuming that the velocity remains constant, we can predict the state of the object after a time period Δt. The equation $x' = Fx + Bu + \nu$ gives the predicted state. It is quite possible, however, that the object did not maintain constant velocity: it may have changed direction, accelerated or decelerated, which increases the uncertainty. This increase is captured by $P' = FPF^T + Q$. The process noise is represented by $\nu$, and $\nu \sim N(0, Q)$ describes it as a Gaussian distribution with zero mean and covariance $Q$. Note that $Bu$ represents the control input, where $B$ is the control input matrix and $u$ is the control vector. Since the autonomous car has no way of measuring the exact acceleration of the tracked object (the bicycle) and we cannot model the object's internal forces, we set $Bu = 0$ and treat the acceleration as random process noise $\nu$.

Update: In this step, we use the sensor data to compute a better estimate of the object's position and velocity. The equation $y = z - Hx'$ gives the measurement residual, where $H$ is the measurement matrix, $z$ is the actual sensor measurement and $\hat{z} = Hx'$ is the predicted observation. This compares the measurement data to the predicted position of the tracked object.
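The prediction and update steps above map directly onto a small amount of linear algebra. The following is a minimal sketch using the Eigen library (the class layout and member names are illustrative, not the original implementation); the update uses the Kalman gain K whose expression is given in the next paragraph.

```cpp
#include "Eigen/Dense"

using Eigen::MatrixXd;
using Eigen::VectorXd;

class KalmanFilter {
 public:
  VectorXd x_;   // state vector [px, py, vx, vy]
  MatrixXd P_;   // state covariance
  MatrixXd F_;   // state transition matrix
  MatrixXd Q_;   // process noise covariance
  MatrixXd H_;   // measurement matrix (LiDAR)
  MatrixXd R_;   // measurement noise covariance

  // Prediction step: x' = F x,  P' = F P F^T + Q  (Bu is set to zero)
  void Predict() {
    x_ = F_ * x_;
    P_ = F_ * P_ * F_.transpose() + Q_;
  }

  // Standard (linear) measurement update, used for LiDAR measurements
  void Update(const VectorXd &z) {
    VectorXd y = z - H_ * x_;                        // residual
    MatrixXd S = H_ * P_ * H_.transpose() + R_;      // residual covariance
    MatrixXd K = P_ * H_.transpose() * S.inverse();  // Kalman gain
    x_ = x_ + K * y;
    long n = x_.size();
    P_ = (MatrixXd::Identity(n, n) - K * H_) * P_;
  }
};
```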
The Kalman filter gain is given by $K = P'H^T(HP'H^T + R)^{-1}$. The Kalman gain weighs the uncertainty of where we think the object is ($P'$) against the uncertainty of the sensor measurement ($R$). If the sensor measurements are very uncertain ($R$ is large relative to $P'$), the Kalman gain puts more weight on the predicted state $x'$. If the predicted state is very uncertain ($P'$ is large relative to $R$), the Kalman gain puts more weight on the sensor measurement $z$. The notation $\omega \sim N(0, R)$ represents the measurement noise as a Gaussian distribution with zero mean and covariance $R$; the measurement noise covariance $R$ is provided by the sensor manufacturer. After calculating the Kalman gain, we update the state vector and the state covariance matrix using $x_{new} = x' + Ky$ and $P_{new} = (I - KH)P'$.

Process Model: The state of the moving pedestrian is represented by its position $(p_x, p_y)$ and velocity $(v_x, v_y)$ relative to the sensor. The state vector and the linear motion model are defined as

$$x = \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix}, \qquad \begin{aligned} p_x' &= p_x + v_x \Delta t + \nu_{p_x} \\ p_y' &= p_y + v_y \Delta t + \nu_{p_y} \\ v_x' &= v_x + \nu_{v_x} \\ v_y' &= v_y + \nu_{v_y} \end{aligned} \qquad (1)$$

In matrix form, the linear motion model is

$$\begin{bmatrix} p_x' \\ p_y' \\ v_x' \\ v_y' \end{bmatrix} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} + \begin{bmatrix} \nu_{p_x} \\ \nu_{p_y} \\ \nu_{v_x} \\ \nu_{v_y} \end{bmatrix} \qquad (2)$$

This model assumes that the bicycle's velocity is constant between time steps; hence it is called a constant velocity (CV) model [34]. In reality, the bicycle's velocity may change due to acceleration, and this un-modeled acceleration introduces uncertainty in the form of process noise.
Treating the acceleration as uncertain, the state transition equations can be rewritten as

$$\begin{bmatrix} p_x' \\ p_y' \\ v_x' \\ v_y' \end{bmatrix} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} + \begin{bmatrix} \frac{1}{2} a_x \Delta t^2 \\ \frac{1}{2} a_y \Delta t^2 \\ a_x \Delta t \\ a_y \Delta t \end{bmatrix} \qquad (3)$$

The process noise vector can therefore be written as

$$\nu = \begin{bmatrix} \nu_{p_x} \\ \nu_{p_y} \\ \nu_{v_x} \\ \nu_{v_y} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} a_x \Delta t^2 \\ \frac{1}{2} a_y \Delta t^2 \\ a_x \Delta t \\ a_y \Delta t \end{bmatrix} = \begin{bmatrix} \frac{\Delta t^2}{2} & 0 \\ 0 & \frac{\Delta t^2}{2} \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix} \begin{bmatrix} a_x \\ a_y \end{bmatrix} = Ga \qquad (4)$$

The random acceleration is modeled as $\nu \sim N(0, Q)$. The accelerations in the x and y directions are modeled as $a_x \sim N(0, \sigma_{a_x}^2)$ and $a_y \sim N(0, \sigma_{a_y}^2)$ respectively. The cross-correlation between $a_x$ and $a_y$ is assumed to be zero, although in real-life situations it is not necessarily zero. Here $Q$ is the process covariance matrix, defined as $Q = E(\nu\nu^T) = GQ_vG^T$ where $Q_v = \begin{bmatrix} \sigma_{a_x}^2 & \sigma_{a_{xy}} \\ \sigma_{a_{xy}} & \sigma_{a_y}^2 \end{bmatrix}$. Since the noise processes $a_x$ and $a_y$ are assumed to be uncorrelated, $\sigma_{a_{xy}} = 0$. The final expression for the process covariance matrix is

$$Q = \begin{bmatrix} \frac{\Delta t^4}{4}\sigma_{a_x}^2 & 0 & \frac{\Delta t^3}{2}\sigma_{a_x}^2 & 0 \\ 0 & \frac{\Delta t^4}{4}\sigma_{a_y}^2 & 0 & \frac{\Delta t^3}{2}\sigma_{a_y}^2 \\ \frac{\Delta t^3}{2}\sigma_{a_x}^2 & 0 & \Delta t^2\sigma_{a_x}^2 & 0 \\ 0 & \frac{\Delta t^3}{2}\sigma_{a_y}^2 & 0 & \Delta t^2\sigma_{a_y}^2 \end{bmatrix} \qquad (5)$$

Laser Measurement: In this section, we describe the measurement model for the LiDAR. There are three main components of a sensor measurement: the measurement vector $z$, the measurement matrix $H$ and the measurement covariance matrix $R$. The LiDAR sensor output is a point cloud, but the data provided to us had already been processed, giving the 2D location of the pedestrian at every time step. A LiDAR sensor only measures the x and y position of the tracked object relative to the sensor, so the measurement vector is $z = \begin{bmatrix} p_x & p_y \end{bmatrix}^T$. Given that our state is the 4D vector $x = \begin{bmatrix} p_x & p_y & v_x & v_y \end{bmatrix}^T$, the measurement model is

$$\begin{bmatrix} p_x \\ p_y \end{bmatrix} = H\begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix} + \omega, \qquad H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \qquad (6)$$

The $H$ matrix projects the belief about the tracked object's 4D state into the 2D measurement space of the sensor. The measurement covariance matrix $R$ represents the uncertainty in the sensor measurements. Since LiDAR measurements are 2D, the noise vector $\omega$ is 2D as well, and

$$R = E(\omega\omega^T) = \begin{bmatrix} \sigma_{p_x}^2 & 0 \\ 0 & \sigma_{p_y}^2 \end{bmatrix} \qquad (7)$$

The parameters of the measurement covariance matrix $R$ are provided by the manufacturer. Note that the noise processes of the sensor are assumed to be uncorrelated.
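For reference, the Δt-dependent matrices F and Q of Eqs. (2) and (5) and the LiDAR matrices H and R of Eqs. (6) and (7) can be assembled as below. This is a sketch assuming the Eigen library; the noise variances used here (σ²ax = σ²ay = 9 m²/s⁴ and σ²px = σ²py = 0.0225 m²) are illustrative placeholders, not values taken from the paper.

```cpp
#include "Eigen/Dense"
using Eigen::MatrixXd;

// Illustrative noise parameters (not taken from the paper).
const double sigma2_ax = 9.0;   // variance of the acceleration noise in x
const double sigma2_ay = 9.0;   // variance of the acceleration noise in y

// State transition matrix of the constant velocity model, Eq. (2)
MatrixXd MakeF(double dt) {
  MatrixXd F(4, 4);
  F << 1, 0, dt, 0,
       0, 1, 0, dt,
       0, 0, 1, 0,
       0, 0, 0, 1;
  return F;
}

// Process covariance matrix Q = G Qv G^T, Eq. (5)
MatrixXd MakeQ(double dt) {
  const double dt2 = dt * dt, dt3 = dt2 * dt, dt4 = dt3 * dt;
  MatrixXd Q(4, 4);
  Q << dt4 / 4 * sigma2_ax, 0, dt3 / 2 * sigma2_ax, 0,
       0, dt4 / 4 * sigma2_ay, 0, dt3 / 2 * sigma2_ay,
       dt3 / 2 * sigma2_ax, 0, dt2 * sigma2_ax, 0,
       0, dt3 / 2 * sigma2_ay, 0, dt2 * sigma2_ay;
  return Q;
}

// LiDAR measurement matrix H (Eq. 6) and noise covariance R (Eq. 7)
MatrixXd H_laser = (MatrixXd(2, 4) << 1, 0, 0, 0,
                                      0, 1, 0, 0).finished();
MatrixXd R_laser = (MatrixXd(2, 2) << 0.0225, 0,
                                      0, 0.0225).finished();
```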
Radar Measurement: Although the LiDAR measures the relative position of the tracked object with high accuracy, it cannot measure the object's velocity relative to the sensor. This is where the radar is advantageous: it measures the radial velocity of the moving object using the Doppler effect, i.e. the change in frequency of the radar waves caused by the relative motion of the object with respect to the sensor.

Fig. 2. Overview of radar measurement

As shown in Fig 2, the x axis always points in the vehicle's direction of motion and the y axis points to the left. Instead of measuring the 2D pose of the object, the radar measures
• Range ($\rho$), the radial distance from the origin.
• Bearing ($\phi$), the angle between $\rho$ and the x axis.
• Radial velocity ($\dot{\rho}$), the rate of change of $\rho$.
The measurement function for the radar is $z = h(x') + \omega$, where $z = \begin{bmatrix} \rho & \phi & \dot{\rho} \end{bmatrix}^T$ and $\omega \sim N(0, R)$ is the measurement noise. Unlike the LiDAR case, there is no $H$ matrix that directly maps the state vector $x'$ to the polar measurement space of the radar. To convert the Cartesian coordinates to polar coordinates, we use the nonlinear function

$$h(x') = \begin{bmatrix} \sqrt{p_x^2 + p_y^2} \\ \tan^{-1}\!\left(\frac{p_y}{p_x}\right) \\ \frac{p_x v_x + p_y v_y}{\sqrt{p_x^2 + p_y^2}} \end{bmatrix} \qquad (8)$$

Assuming that the three measurement components are not cross-correlated, the radar measurement covariance matrix is

$$R = \begin{bmatrix} \sigma_{\rho}^2 & 0 & 0 \\ 0 & \sigma_{\phi}^2 & 0 \\ 0 & 0 & \sigma_{\dot{\rho}}^2 \end{bmatrix} \qquad (9)$$
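The nonlinear mapping h(x') of Eq. (8) from the Cartesian state to the radar's polar measurement space is easy to write as a small helper. A sketch (Eigen assumed; the guard against division by zero near the origin is an implementation choice, not part of the equations):

```cpp
#include <cmath>
#include "Eigen/Dense"
using Eigen::VectorXd;

// Map the state x' = [px, py, vx, vy] to the radar space [rho, phi, rho_dot], Eq. (8)
VectorXd CartesianToPolar(const VectorXd &x) {
  const double px = x(0), py = x(1), vx = x(2), vy = x(3);
  const double rho = std::sqrt(px * px + py * py);
  const double phi = std::atan2(py, px);  // bearing, returned in (-pi, pi]
  // Guard against division by zero when the object is at the origin
  const double rho_dot = (rho > 1e-6) ? (px * vx + py * vy) / rho : 0.0;

  VectorXd z_pred(3);
  z_pred << rho, phi, rho_dot;
  return z_pred;
}
```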
Extended Kalman Filter: In the case of the LiDAR measurement, the measurement function is linear, so if the state $x$ is a Gaussian distribution, the measurement vector $z$ is Gaussian as well. In the case of the radar, however, the measurement function is nonlinear, and the output of a Gaussian distribution passed through a nonlinear function is no longer Gaussian. Hence the standard Kalman filter update equations are not applicable to the radar, since the Kalman filter is inherently a Gaussian filter. A possible fix is to linearize the nonlinear function $h(x')$ about the mean $\mu$, the best estimate of the predicted distribution. Using a first-order Taylor expansion, we linearize $h(x')$ as

$$h(x') \approx h(\mu) + \frac{\partial h(\mu)}{\partial x}(x' - \mu) \qquad (10)$$

The state transition function $f(x)$ could be linearized in the same way if it were nonlinear. Generalizing to higher dimensions, the derivative $\frac{\partial h(\mu)}{\partial x}$ becomes a Jacobian matrix. For the radar measurement, the Jacobian is

$$H_j = \begin{bmatrix} \frac{p_x}{\sqrt{p_x^2+p_y^2}} & \frac{p_y}{\sqrt{p_x^2+p_y^2}} & 0 & 0 \\ -\frac{p_y}{p_x^2+p_y^2} & \frac{p_x}{p_x^2+p_y^2} & 0 & 0 \\ \frac{p_y(v_x p_y - v_y p_x)}{(p_x^2+p_y^2)^{3/2}} & \frac{p_x(v_y p_x - v_x p_y)}{(p_x^2+p_y^2)^{3/2}} & \frac{p_x}{\sqrt{p_x^2+p_y^2}} & \frac{p_y}{\sqrt{p_x^2+p_y^2}} \end{bmatrix} \qquad (11)$$

The standard Kalman filter and the Extended Kalman Filter equations are similar. The main differences are
• The $F$ matrix is replaced by the Jacobian $F_j$ when calculating $P'$.
• The $H$ matrix is replaced by the Jacobian $H_j$ when calculating $S$, $K$ and $P$.

Fig. 3. Comparison: Kalman Filter vs EKF

The difference between the regular and extended Kalman filter equations is summarized in Fig 3. In this project we use a linear constant velocity model for the prediction step, so we do not need the Jacobian $F_j$ and can use the standard Kalman filter equations for prediction. The measurement update for the LiDAR also uses the standard Kalman filter update equations, since the LiDAR measurement model is linear; we use the $H$ matrix when calculating $y$, $S$, $K$ and $P$. The measurement update for the radar, however, requires the extended Kalman filter equations. The processing flow of the EKF is summarized in Fig 4 and in the following points:
• The state of the moving pedestrian is represented by a 2D position and a 2D velocity. The measurement noise matrices are initialized using values provided by the sensor manufacturers.
• Each time we receive a new measurement, the process-measurement module is triggered. On the first iteration, we initialize the state vector $x$ and the process covariance matrix $Q$. This is followed by the prediction and measurement update.
• Before predicting the state and covariance, we compute the elapsed time between the previous and current observations. We use the elapsed time $\Delta t$ to calculate the new state transition matrix $F$ and process covariance matrix $Q$, and then predict the state and covariance of the process model.
• The measurement update step depends on the sensor type. If the current observation comes from the radar, we linearize $h(x')$: we compute the new Jacobian matrix $H_j$, use the measurement function to project the predicted state, and then call the measurement update.
• If the current observation comes from the LiDAR, we use the $H$ matrix instead of linearizing the measurement function $h(x')$. We set up the measurement covariance matrix for the LiDAR and then call the measurement update.

Fig. 4. Process flow of the Extended Kalman Filter
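The Jacobian of Eq. (11) has to be re-evaluated at the predicted state each time a radar measurement is processed. A sketch (Eigen assumed; returning a zero matrix when the position is at the origin is a pragmatic safeguard rather than something prescribed by the paper):

```cpp
#include <cmath>
#include "Eigen/Dense"
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Jacobian of the radar measurement function h(x'), Eq. (11)
MatrixXd CalculateJacobian(const VectorXd &x) {
  const double px = x(0), py = x(1), vx = x(2), vy = x(3);
  MatrixXd Hj = MatrixXd::Zero(3, 4);

  const double c1 = px * px + py * py;   // rho^2
  if (c1 < 1e-6) return Hj;              // avoid division by zero
  const double c2 = std::sqrt(c1);       // rho
  const double c3 = c1 * c2;             // rho^3

  Hj << px / c2,                        py / c2,                        0,       0,
        -py / c1,                       px / c1,                        0,       0,
        py * (vx * py - vy * px) / c3,  px * (vy * px - vx * py) / c3,  px / c2, py / c2;
  return Hj;
}
```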
Root Mean Square Error: To check how far the estimated states are from the true states, we use a metric called the root mean square error (RMSE). In tracking, the RMSE measures the deviation of the estimated state from the true state. At a given time $t$ we have two states, $x_t^{est}$ (the estimated state) and $x_t^{true}$ (the ground-truth state). The difference between them, $(x_t^{est} - x_t^{true})$, is called the residual. The residual is squared and averaged over the entire time period, and its square root gives the error metric; the lower the error, the higher the estimation accuracy. The RMSE is expressed as

$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(x_t^{est} - x_t^{true}\right)^2}$$
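The RMSE above is computed element-wise over the accumulated estimates and ground-truth states. A sketch assuming Eigen vectors of the form [px, py, vx, vy]:

```cpp
#include <vector>
#include "Eigen/Dense"
using Eigen::VectorXd;

// Element-wise RMSE over all time steps: sqrt( mean( (x_est - x_true)^2 ) )
VectorXd CalculateRMSE(const std::vector<VectorXd> &estimates,
                       const std::vector<VectorXd> &ground_truth) {
  VectorXd rmse = VectorXd::Zero(4);
  if (estimates.empty() || estimates.size() != ground_truth.size()) return rmse;

  for (std::size_t t = 0; t < estimates.size(); ++t) {
    VectorXd residual = estimates[t] - ground_truth[t];
    rmse += residual.cwiseProduct(residual);  // accumulate squared residuals
  }
  rmse /= static_cast<double>(estimates.size());
  return rmse.cwiseSqrt();  // RMSE of px, py, vx, vy
}
```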
EKF Results: Two measurement datasets were used for the estimation process. Both datasets contain a collection of LiDAR and radar measurements at each time step. To give a sense of what the measurements look like, Fig 5 shows the first six measurements of one of the .txt files.

Fig. 5. Measurement data

R stands for a radar measurement and L for a LiDAR measurement. In the rows representing a radar measurement, the first three values are $\rho$, $\phi$ and $\dot{\rho}$ of the tracked object relative to the sensor. In the rows representing a LiDAR measurement, the first two values are the positions $p_x$ and $p_y$ of the tracked object relative to the sensor. The last four values in every row are the ground-truth values of $p_x$, $p_y$, $v_x$ and $v_y$. The large number is the timestamp. Timestamps are used for logging a sequence of events, so that we know exactly when each measurement was generated; Fig 6 illustrates how the timestamp works.

Fig. 6. Measurement data (timestamp)

The timestamp values are used to calculate the elapsed time between two observations. The difference is divided by $10^6$ to obtain $\Delta t$ in seconds rather than microseconds:

$$\Delta t = \frac{\text{timestamp}(k+1) - \text{timestamp}(k)}{1000000.0} \qquad (12)$$

Figure 7 and Figure 12 show the estimated trajectory ($p_x$ and $p_y$) of the pedestrian together with the measurement data and the ground-truth data. In Figure 7, it is interesting to note that the Extended Kalman Filter does not estimate the position of the bicycle accurately where there is a sharp turn. This is because the Extended Kalman Filter uses a constant velocity linear process model, so it cannot follow a highly nonlinear maneuver performed by the bicycle. Later we introduce the Unscented Kalman Filter, which circumvents this problem by using a nonlinear process model.

Fig. 7. Estimated trajectory using EKF: Dataset 1
Fig. 8. Estimated px using EKF: Dataset 1
Fig. 9. Estimated py using EKF: Dataset 1
Fig. 10. Estimated vx using EKF: Dataset 1
Fig. 11. Estimated vy using EKF: Dataset 1
Fig. 12. Estimated trajectory using EKF: Dataset 2

In Table II, the root mean square error (RMSE) values are listed for two cases. In the first case, the RMSE was calculated using only LiDAR data to estimate the states.
Fig. 13. Estimated px using EKF: Dataset 2
Fig. 14. Estimated py using EKF: Dataset 2
Fig. 15. Estimated vx using EKF: Dataset 2
Fig. 16. Estimated vy using EKF: Dataset 2

In the second case, both LiDAR and radar data were used to estimate the states with the sensor fusion approach. The RMSE values are clearly lower in the second case, which shows that fusing both sensors is better for estimating the state of a tracked object than using one sensor alone. It is particularly interesting to note in Table II that the RMSE values for $v_x$ and $v_y$ drop sharply when both sensors are used, compared to using the LiDAR alone. This is because the LiDAR can only measure the position of the object, whereas the radar can also measure its radial velocity. Hence the fusion of radar and LiDAR data is particularly advantageous for estimating the velocity of a tracked object.

TABLE II
ROOT MEAN SQUARE ERROR FOR EKF: DATASET 1 AND DATASET 2

           State    LiDAR    LiDAR & Radar
Dataset 1  px       0.115    0.065
           py       0.095    0.060
           vx       0.794    0.531
           vy       0.694    0.544
Dataset 2  px       0.218    0.185
           py       0.194    0.190
           vx       0.937    0.476
           vy       0.834    0.814

III. UNSCENTED KALMAN FILTER

The EKF has two major drawbacks [26]:
• Linearization may produce highly unstable filters if the assumption of local linearity is violated.
• The derivation and calculation of the Jacobian matrices may be nontrivial and computationally expensive.

Need for UKF: The central operation the Kalman filter performs is the propagation of a Gaussian random variable (GRV) through the system dynamics (process model and measurement model). In the EKF, the state distribution is approximated by a GRV which is then propagated analytically through a first-order linearization of the nonlinear system [25, 26]. For example, while implementing the EKF in the previous section, we linearized the nonlinear measurement model for the radar (the process model was already linear) and used the first-order approximation of the model to perform the update step. These approximations can introduce errors in the true posterior mean and covariance of the predicted and/or updated GRV, which may lead to sub-optimal performance and sometimes to divergence of the filter. The UKF circumvents this problem by using a deterministic sampling approach [25, 26]. In the UKF, the state distribution, which is again approximated by a GRV, is represented by a set of carefully chosen sample points. These points are propagated through the nonlinear system, and the transformed points capture the mean and covariance accurately to third order (in the Taylor series expansion) for any nonlinearity. In contrast, the EKF only achieves first-order accuracy. Moreover, the computational complexity of the UKF is of the same order as that of the EKF [25, 26].

CTRV Model: In the EKF section, we used a constant velocity (CV) model for pedestrian tracking.
In the literature we can find various motion models for tracking [34], such as
• Constant turn rate and velocity magnitude (CTRV) model
• Constant turn rate and acceleration (CTRA) model
• Constant steering angle and velocity (CSAV) model
• Constant curvature and acceleration (CCA) model
By using the constant velocity (CV) model in the EKF implementation of the previous section, we were simplifying the way vehicles actually move. This is a significant drawback, because a process model that assumes constant velocity will predict the motion of turning vehicles and pedestrians incorrectly. Hence, we chose the constant turn rate and velocity magnitude (CTRV) model, shown in Fig 17, to implement the Unscented Kalman Filter. In the CTRV model, the state vector and the nonlinear motion model $x_{k+1} = f(x_k, \nu_k)$ are defined as

$$x = \begin{bmatrix} p_x \\ p_y \\ v \\ \psi \\ \dot{\psi} \end{bmatrix}, \qquad x_{k+1} = x_k + \int_{t_k}^{t_{k+1}} \begin{bmatrix} \dot{p}_x(t) \\ \dot{p}_y(t) \\ \dot{v}(t) \\ \dot{\psi}(t) \\ \ddot{\psi}(t) \end{bmatrix} dt + \nu_k \qquad (13)$$

where $(p_x, p_y)$ is the position of the tracked object, $v$ is the velocity magnitude, $\psi$ and $\dot{\psi}$ are the yaw angle and yaw rate respectively, and $\nu_k$ represents the process noise. The motion model can be rewritten as

$$x_{k+1} = x_k + \begin{bmatrix} v_k \int_{t_k}^{t_{k+1}} \cos(\psi_k + \dot{\psi}_k(t - t_k))\,dt \\ v_k \int_{t_k}^{t_{k+1}} \sin(\psi_k + \dot{\psi}_k(t - t_k))\,dt \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} + \nu_k \qquad (14)$$

Solving the integrals, we obtain the final CTRV motion model [34]:

$$x_{k+1} = x_k + \begin{bmatrix} \frac{v_k}{\dot{\psi}_k}\left(\sin(\psi_k + \dot{\psi}_k \Delta t) - \sin(\psi_k)\right) \\ -\frac{v_k}{\dot{\psi}_k}\left(\cos(\psi_k + \dot{\psi}_k \Delta t) - \cos(\psi_k)\right) \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} + \nu_k \qquad (15)$$

There may be situations when the tracked vehicle is driving on a straight road, in which case the yaw rate $\dot{\psi}$ is zero. Since division by zero is not acceptable, for this case we use the modified CTRV equations

$$x_{k+1} = x_k + \begin{bmatrix} v_k \cos(\psi_k)\Delta t \\ v_k \sin(\psi_k)\Delta t \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} \qquad (16)$$

The vector $\nu_k$ in the process model is the noise vector, defined as $\nu_k = \begin{bmatrix} \nu_{a,k} & \nu_{\ddot{\psi},k} \end{bmatrix}^T$. The noise process $\nu_{a,k}$ represents the longitudinal acceleration noise: it influences the longitudinal speed of the tracked object and randomly changes its value at every time step $k$. It is normally distributed white noise with zero mean and variance $\sigma_a^2$, i.e. $\nu_{a,k} \sim N(0, \sigma_a^2)$. The noise process $\nu_{\ddot{\psi},k} \sim N(0, \sigma_{\ddot{\psi}}^2)$ is likewise normally distributed white noise with zero mean and variance $\sigma_{\ddot{\psi}}^2$.

Fig. 17. Constant turn rate and velocity magnitude (CTRV) model

In Fig 17, it is interesting to notice that the yaw acceleration noise $\nu_{\ddot{\psi},k}$ changes the turning radius of the tracked vehicle, so it does have some effect on the position of the tracked object. However, we assume that the effect of the yaw acceleration noise on the position is negligible and neglect it here. In the CTRV model, the noise processes are assumed to be uncorrelated, and the process covariance matrix is defined as $Q = E(\nu_k\nu_k^T) = \begin{bmatrix} \sigma_a^2 & 0 \\ 0 & \sigma_{\ddot{\psi}}^2 \end{bmatrix}$. Considering the effect of the noise processes $\nu_{a,k}$ and $\nu_{\ddot{\psi},k}$ on the states, the process noise vector is

$$\nu_k = \begin{bmatrix} \frac{1}{2}(\Delta t)^2 \cos(\psi_k)\,\nu_{a,k} \\ \frac{1}{2}(\Delta t)^2 \sin(\psi_k)\,\nu_{a,k} \\ \Delta t\,\nu_{a,k} \\ \frac{1}{2}(\Delta t)^2\,\nu_{\ddot{\psi},k} \\ \Delta t\,\nu_{\ddot{\psi},k} \end{bmatrix} \qquad (17)$$
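The CTRV propagation of Eqs. (15)-(17), including the zero-yaw-rate special case of Eq. (16), can be written as a single function. A sketch (Eigen assumed; the state ordering [px, py, v, ψ, ψ̇] follows Eq. (13), and the yaw-rate threshold of 1e-3 is an illustrative choice):

```cpp
#include <cmath>
#include "Eigen/Dense"
using Eigen::VectorXd;

// Propagate one CTRV state [px, py, v, psi, psi_dot] over dt,
// with sampled noise values nu_a (long. acceleration) and nu_psidd (yaw acceleration).
VectorXd CTRVProcessModel(const VectorXd &x, double nu_a, double nu_psidd, double dt) {
  const double px = x(0), py = x(1), v = x(2), psi = x(3), psid = x(4);

  double px_p, py_p;
  if (std::fabs(psid) > 1e-3) {                        // general case, Eq. (15)
    px_p = px + v / psid * (std::sin(psi + psid * dt) - std::sin(psi));
    py_p = py - v / psid * (std::cos(psi + psid * dt) - std::cos(psi));
  } else {                                             // straight-line case, Eq. (16)
    px_p = px + v * std::cos(psi) * dt;
    py_p = py + v * std::sin(psi) * dt;
  }

  // Add the process noise contribution of Eq. (17)
  VectorXd x_pred(5);
  x_pred << px_p + 0.5 * dt * dt * std::cos(psi) * nu_a,
            py_p + 0.5 * dt * dt * std::sin(psi) * nu_a,
            v + dt * nu_a,
            psi + psid * dt + 0.5 * dt * dt * nu_psidd,
            psid + dt * nu_psidd;
  return x_pred;
}
```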
Unscented Transformation: The unscented transformation (UT) is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation [25, 26]. The procedure is illustrated in Fig 18.

Fig. 18. The principle of the unscented transformation

Consider propagating a GRV $x$ (of dimension $n_x$) through a nonlinear function $y = f(x)$, and assume that $x$ has mean $\bar{x}$ and covariance $P_{xx}$. To calculate the mean $\bar{y}$ and covariance $P_{yy}$ of $y$, we form a matrix $X$ of $2n_x + 1$ sigma vectors $X_i$ with corresponding weights $W_i$. The sigma vectors are chosen around the mean $\bar{x}$ and in a certain relation to the covariance $P_{xx}$:

$$\begin{aligned} X_0 &= \bar{x} \\ X_i &= \bar{x} + \left(\sqrt{(n_x + \lambda)P_{xx}}\right)_i, \quad i = 1, \ldots, n_x \\ X_i &= \bar{x} - \left(\sqrt{(n_x + \lambda)P_{xx}}\right)_{i-n_x}, \quad i = n_x + 1, \ldots, 2n_x \\ W_0 &= \frac{\lambda}{n_x + \lambda} \\ W_i &= \frac{1}{2(n_x + \lambda)}, \quad i = 1, \ldots, 2n_x \end{aligned} \qquad (18)$$

where $\left(\sqrt{(n_x + \lambda)P_{xx}}\right)_i$ is the $i$-th row or column of the matrix square root of $(n_x + \lambda)P_{xx}$, $W_i$ is the weight associated with the $i$-th point, and $\lambda$ is a scaling parameter which determines the spread of the sigma points around the mean $\bar{x}$. For this project we used $\lambda = 3 - n_x$. The unscented transformation then proceeds as follows [25, 26]:
• Propagate each sigma vector through the function to yield a set of transformed sigma vectors, $y_i = f(X_i)$.
• The mean of the transformed distribution is given by the weighted average of the transformed sigma vectors,
$$\bar{y} = \sum_{i=0}^{2n_x} W_i y_i \qquad (19)$$
• The covariance is given by the weighted outer product of the transformed sigma vectors,
$$P_{yy} = \sum_{i=0}^{2n_x} W_i (y_i - \bar{y})(y_i - \bar{y})^T \qquad (20)$$

To illustrate the benefit of the unscented transformation, a simple 2D example is shown in Fig 19 [25, 26]. The left plot shows the true mean and covariance propagation using Monte Carlo sampling (which we will return to when discussing particle filters). The centre plot shows the results of a linearization approach, as used in the EKF. The right plot shows the result of the unscented transformation; it is clear that the UT outperforms the linearization.

Fig. 19. Comparison: EKF vs UKF

UKF Roadmap: The UKF is a straightforward extension of the unscented transformation. The steps involved in applying the Unscented Kalman Filter are:
• Prediction
  1) Generate sigma points
  2) Predict sigma points
  3) Predict mean and covariance
• Update
  1) Predict measurement
  2) Update state

At the beginning of the prediction step, we have the posterior state $x_{k|k}$, its mean $\bar{x}_{k|k}$ and the posterior covariance $P_{k|k}$ from the last iteration; they represent the distribution of the current state. The number of sigma points depends on the dimension of the state. For our CTRV model the state is 5-dimensional, so $n_x = 5$ and we would generate $n_\sigma = 2n_x + 1 = 11$ sigma points:

$$X_{k|k} = \begin{bmatrix} \bar{x}_{k|k} & \bar{x}_{k|k} + \sqrt{(n_x + \lambda)P_{k|k}} & \bar{x}_{k|k} - \sqrt{(n_x + \lambda)P_{k|k}} \end{bmatrix} \qquad (21)$$

The matrix square root of $(n_x + \lambda)P_{k|k}$ is calculated using the Cholesky decomposition. To take into account the nonlinear effect of the CTRV process noise $\nu_k = \begin{bmatrix} \nu_{a,k} & \nu_{\ddot{\psi},k} \end{bmatrix}^T$, we include the process noise in the state vector. This is called augmentation. Recall that for the CTRV model

$$Q = E(\nu_k\nu_k^T) = \begin{bmatrix} \sigma_a^2 & 0 \\ 0 & \sigma_{\ddot{\psi}}^2 \end{bmatrix} \qquad (22)$$

The augmented state vector $x_{a,k|k}$ and the augmented state covariance matrix $P_{a,k|k}$ are defined as

$$x_{a,k|k} = \begin{bmatrix} p_x \\ p_y \\ v \\ \psi \\ \dot{\psi} \\ \nu_{a} \\ \nu_{\ddot{\psi}} \end{bmatrix}, \qquad P_{a,k|k} = \begin{bmatrix} P_{k|k} & 0 \\ 0 & Q \end{bmatrix} \qquad (23)$$

After augmentation, the number of sigma points becomes $n_\sigma = 2n_x + 1 = 15$ with $n_x = 7$; the four extra sigma points represent the uncertainty caused by the process noise. The augmented sigma points $X_{a,k|k}$ are calculated using Equation 21, with $x_{k|k}$ and $P_{k|k}$ replaced by the augmented quantities. In the prediction step, every column of the augmented sigma point matrix $X_{a,k|k}$ is passed through the CTRV process model to obtain the predicted sigma point matrix $X_{k+1|k} = f(X_{a,k|k})$. Note that $X_{a,k|k}$ is a 7×15 matrix and $X_{k+1|k}$ is a 5×15 matrix, since each predicted state is a 5D vector. The prediction step of the UKF is illustrated in Figure 20.
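A sketch of the augmented sigma point generation of Eqs. (21)-(23) and their propagation through the CTRV model (Eigen assumed; the CTRVProcessModel helper is the one sketched earlier, and the noise variances mirror the rule-of-thumb values discussed in the Noise Parameters section below):

```cpp
#include <cmath>
#include "Eigen/Dense"
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Sketched earlier: propagates one CTRV state over dt with sampled noise values.
VectorXd CTRVProcessModel(const VectorXd &x, double nu_a, double nu_psidd, double dt);

const int n_x = 5;                    // CTRV state dimension
const int n_aug = 7;                  // augmented state dimension
const double lambda = 3.0 - n_aug;    // spread parameter (Eq. 18, using the augmented dimension)
const double std_a2 = 0.25;           // sigma_a^2, illustrative
const double std_psidd2 = 0.25;       // sigma_psidd^2, illustrative

// Generate the 2*n_aug + 1 = 15 augmented sigma points of Eqs. (21) and (23).
MatrixXd GenerateAugmentedSigmaPoints(const VectorXd &x, const MatrixXd &P) {
  VectorXd x_aug = VectorXd::Zero(n_aug);
  x_aug.head(n_x) = x;                           // noise components have zero mean

  MatrixXd P_aug = MatrixXd::Zero(n_aug, n_aug);
  P_aug.topLeftCorner(n_x, n_x) = P;
  P_aug(5, 5) = std_a2;
  P_aug(6, 6) = std_psidd2;

  MatrixXd L = P_aug.llt().matrixL();            // Cholesky factor = matrix square root
  MatrixXd Xsig_aug(n_aug, 2 * n_aug + 1);
  Xsig_aug.col(0) = x_aug;
  const double scale = std::sqrt(lambda + n_aug);
  for (int i = 0; i < n_aug; ++i) {
    Xsig_aug.col(i + 1)         = x_aug + scale * L.col(i);
    Xsig_aug.col(i + 1 + n_aug) = x_aug - scale * L.col(i);
  }
  return Xsig_aug;
}

// Pass every augmented sigma point through the CTRV process model, X_{k+1|k} = f(X_{a,k|k}).
MatrixXd PredictSigmaPoints(const MatrixXd &Xsig_aug, double dt) {
  MatrixXd Xsig_pred(n_x, 2 * n_aug + 1);
  for (int i = 0; i < 2 * n_aug + 1; ++i) {
    VectorXd col = Xsig_aug.col(i);
    Xsig_pred.col(i) = CTRVProcessModel(col.head(n_x), col(5), col(6), dt);
  }
  return Xsig_pred;
}
```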
Fig. 20. Generation of predicted sigma points

The predicted mean $x_{k+1|k}$ and covariance $P_{k+1|k}$ of the sigma points are

$$\begin{aligned} x_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i X_{k+1|k,i} \\ P_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i (X_{k+1|k,i} - x_{k+1|k})(X_{k+1|k,i} - x_{k+1|k})^T \\ \omega_1 &= \frac{\lambda}{\lambda + n_a}, \qquad \omega_i = \frac{1}{2(\lambda + n_a)}, \quad i = 2, \ldots, n_\sigma \end{aligned} \qquad (24)$$

where $n_a$ is the dimension of the augmented state. In the measurement step, we use different approaches depending on the type of sensor measurement received. If we receive a LiDAR measurement, we use the same steps as in the EKF to predict the measurement. When we receive a radar measurement, however, we use the unscented transformation to predict the measurement, since the radar measurement model is nonlinear. We could repeat the same sequence of steps as in the prediction step, but to save computation time we reuse the sigma points $X_{k+1|k}$ generated in the prediction step instead of generating new ones. We also note that the measurement noise does not have a nonlinear effect on the measurement model: its effect is purely additive ($z_{k+1} = h(x_{k+1}) + \omega_{k+1}$), so we can skip the augmentation step as well. The transformed measurement sigma points are generated in the same way as in the prediction step, as illustrated in Fig 21.

Fig. 21. Generation of measurement sigma points

The predicted measurement mean $z_{k+1|k}$ and covariance $S_{k+1|k}$ of the measurement sigma points are

$$\begin{aligned} z_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i Z_{k+1|k,i} \\ S_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i (Z_{k+1|k,i} - z_{k+1|k})(Z_{k+1|k,i} - z_{k+1|k})^T + R \end{aligned} \qquad (25)$$

where $R = E(\omega\omega^T)$ is the measurement noise covariance. In the UKF, we also need the cross-correlation $T_{k+1|k}$ between the sigma points in state space and in measurement space in order to calculate the Kalman gain. The cross-correlation, Kalman gain $K_{k+1|k}$, updated state $x_{k+1|k+1}$ and updated covariance $P_{k+1|k+1}$ are

$$\begin{aligned} T_{k+1|k} &= \sum_{i=1}^{n_\sigma} \omega_i (X_{k+1|k,i} - x_{k+1|k})(Z_{k+1|k,i} - z_{k+1|k})^T \\ K_{k+1|k} &= T_{k+1|k} S_{k+1|k}^{-1} \\ x_{k+1|k+1} &= x_{k+1|k} + K_{k+1|k}(z_{k+1} - z_{k+1|k}) \\ P_{k+1|k+1} &= P_{k+1|k} - K_{k+1|k} S_{k+1|k} K_{k+1|k}^T \end{aligned} \qquad (26)$$
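Pulling Eqs. (24)-(26) together, the predicted mean and covariance and the radar update can be sketched as below (Eigen assumed; Zsig holds the predicted sigma points already mapped into radar space, and the normalization of angle residuals to [-π, π], which a real implementation needs for ψ and φ, is omitted for brevity):

```cpp
#include "Eigen/Dense"
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Weights of Eq. (24) for n_sigma = 2 * n_aug + 1 sigma points.
VectorXd MakeWeights(int n_aug, double lambda) {
  VectorXd w = VectorXd::Constant(2 * n_aug + 1, 0.5 / (lambda + n_aug));
  w(0) = lambda / (lambda + n_aug);
  return w;
}

// Predicted state mean and covariance, Eq. (24).
void PredictMeanAndCovariance(const MatrixXd &Xsig_pred, const VectorXd &weights,
                              VectorXd &x, MatrixXd &P) {
  x = Xsig_pred * weights;                           // weighted mean
  P = MatrixXd::Zero(x.size(), x.size());
  for (int i = 0; i < Xsig_pred.cols(); ++i) {
    VectorXd dx = Xsig_pred.col(i) - x;
    P += weights(i) * dx * dx.transpose();
  }
}

// Radar measurement update, Eqs. (25)-(26).
void UkfRadarUpdate(VectorXd &x, MatrixXd &P,
                    const MatrixXd &Xsig_pred, const MatrixXd &Zsig,
                    const VectorXd &weights, const MatrixXd &R, const VectorXd &z) {
  VectorXd z_pred = Zsig * weights;                  // predicted measurement mean
  MatrixXd S = R;                                    // innovation covariance, Eq. (25)
  MatrixXd T = MatrixXd::Zero(x.size(), z.size());   // cross-correlation, Eq. (26)
  for (int i = 0; i < Zsig.cols(); ++i) {
    VectorXd dz = Zsig.col(i) - z_pred;
    VectorXd dx = Xsig_pred.col(i) - x;
    S += weights(i) * dz * dz.transpose();
    T += weights(i) * dx * dz.transpose();
  }
  MatrixXd K = T * S.inverse();                      // Kalman gain
  x = x + K * (z - z_pred);                          // updated state
  P = P - K * S * K.transpose();                     // updated covariance
}
```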
Noise Parameters: In this section, we describe how the noise parameters were chosen; this applies to both the EKF and the UKF. The process noise and measurement noise vectors and matrices are summarized in Figure 22.

Fig. 22. Overview of noise parameters

The measurement noise vector $\omega_k$ indicates how precise the sensor measurements are, and its parameters are usually given in the sensor manual. For example, if the manual states that the radar measures the distance $\rho$ with a standard deviation of $\sigma_\rho = 0.3$ m, this gives the value $\sigma_\rho^2 = 0.09$ in the measurement noise matrix $R$. The process noise parameters are harder to determine, since we have to rely on approximation. Based on intuition, we decide on the maximum acceleration at which the tracked object could plausibly move. In our project we are tracking a pedestrian riding a bicycle, and we assume that the bicycle cannot accelerate at more than 1 m/s². For the standard deviation $\sigma_a$ of the process noise we use a rule of thumb: half of the maximum acceleration, so $\sigma_a^2 = 0.25\ \text{m}^2/\text{s}^4$. Similarly, taking the maximum angular acceleration of the bicycle to be 1 rad/s², the rule of thumb gives $\sigma_{\ddot{\psi}}^2 = 0.25\ \text{rad}^2/\text{s}^4$. The process noise really depends on the application: if we want smoother estimates, we choose a lower process noise; if we expect the tracked object to behave erratically, we choose a higher process noise.

Filter Consistency: The goal of a filter consistency check is to detect whether the filter's estimate is incorrect [2]. The validity of the estimate can be judged by whether the filter's estimate agrees with the observed measurements: a filter is consistent if it provides a realistic estimate of its own uncertainty. The Normalized Innovation Squared (NIS) check for Kalman filters is based on the assumption that, under nominal conditions, the normalized innovations of a Kalman filter have unit variance [2]. The innovation is the difference between the predicted measurement and the actual measurement; we normalize it with respect to $S_{k+1|k}$ as shown in Equation 27. The NIS follows a chi-squared ($\chi^2$) distribution and is expressed as

$$\epsilon = (z_{k+1} - z_{k+1|k})^T S_{k+1|k}^{-1} (z_{k+1} - z_{k+1|k}) \qquad (27)$$

Based on the $\chi^2$ distribution table and the fact that the radar measurement is 3-dimensional, we choose the 5% statistic (7.815). The NIS plots for dataset 1 and dataset 2 are shown in Figure 29 and Figure 36 respectively.

UKF Results: Table III and Table IV compare the performance of the EKF and the UKF for datasets 1 and 2 respectively.

TABLE III
COMPARISON OF EKF AND UKF: DATASET 1

               State    RMSE (EKF)    RMSE (UKF)
LiDAR          px       0.115         0.131
               py       0.095         0.118
               vx       0.794         0.731
               vy       0.694         0.669
LiDAR & Radar  px       0.065         0.077
               py       0.060         0.082
               vx       0.531         0.590
               vy       0.544         0.574

Based on the RMSE values, the UKF and the EKF perform about the same for Dataset 1, whereas the UKF outperforms the EKF for Dataset 2. For both datasets, the RMSE values are lower when both LiDAR and radar are used than when only the LiDAR is used. This confirms that sensor fusion is advantageous in tracking applications irrespective of the type of estimation filter used.

TABLE IV
COMPARISON OF EKF AND UKF: DATASET 2

               State    RMSE (EKF)    RMSE (UKF)
LiDAR          px       0.218         0.207
               py       0.194         0.193
               vx       0.937         0.480
               vy       0.834         0.585
LiDAR & Radar  px       0.185         0.180
               py       0.190         0.187
               vx       0.476         0.262
               vy       0.814         0.463

In Fig 23 and Fig 30, the estimated trajectories of the bicycle for the two datasets are plotted. The UKF does a good job of estimating the nonlinear trajectory of the bicycle; comparing Fig 23 with Fig 7, we can conclude that the UKF estimates the trajectory around sharp corners better than the EKF. The estimated $v_x$, $v_y$ and $\phi$ for Dataset 1 are plotted in Fig 26, Fig 27 and Fig 28 respectively, and the estimated $p_x$ and $p_y$ in Fig 24 and Fig 25.

Fig. 23. Estimated trajectory using UKF: Dataset 1
Fig. 24. Estimated px using UKF: Dataset 1
Fig. 25. Estimated py using UKF: Dataset 1
Fig. 26. Estimated vx using UKF: Dataset 1

The estimated $v_x$, $v_y$ and $\phi$ for Dataset 2 are plotted in Fig 33, Fig 34 and Fig 35 respectively.
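For reference, the NIS value of Eq. (27) plotted in these figures is a one-line computation once the innovation and its covariance S are available; a sketch assuming Eigen, with the 95th-percentile chi-square value for 3 degrees of freedom (7.815) used as the reference line:

```cpp
#include "Eigen/Dense"
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Normalized Innovation Squared, Eq. (27).
double CalculateNIS(const VectorXd &z, const VectorXd &z_pred, const MatrixXd &S) {
  VectorXd innovation = z - z_pred;
  return innovation.dot(S.inverse() * innovation);
}

// For a 3-D radar measurement, roughly 95% of NIS values should fall below this line.
const double kChiSquare95Radar = 7.815;
```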
Based on the NIS plots (Fig 29 and Fig 36), we can conclude that the uncertainties have been modeled accurately.

Fig. 27. Estimated vy using UKF: Dataset 1
Fig. 28. Estimated φ using UKF: Dataset 1
Fig. 29. Normalized Innovation Squared (NIS) using UKF: Dataset 1
Fig. 30. Estimated trajectory using UKF: Dataset 2
Fig. 31. Estimated px using UKF: Dataset 2
Fig. 32. Estimated py using UKF: Dataset 2
Fig. 33. Estimated vx using UKF: Dataset 2
Fig. 34. Estimated vy using UKF: Dataset 2

IV. SUMMARY

In this paper, a sensor fusion algorithm was presented to track a pedestrian riding a bike by fusing the measurements from a Velodyne LiDAR and a radar. Two Bayesian filters, the EKF and the UKF, were implemented to estimate the state of the pedestrian. In both filter implementations, the estimation error was lower when both sensors were used than when only LiDAR measurements were used; hence it was concluded that sensor fusion is advantageous. In future work, additional sensors such as stereo cameras could be used to further improve the accuracy of the position and velocity estimates of tracked objects.
Fig. 35. Estimated φ using UKF: Dataset 2
Fig. 36. Normalized Innovation Squared (NIS) using UKF: Dataset 2

REFERENCES

[1] Velodyne LiDAR. PUCK (VLP-16) datasheet, 2015.
[2] Sarah E. Houts, Shandor G. Dektor, Stephen M. Rock. A robust framework for failure detection and recovery for terrain-relative navigation. Unmanned Untethered Submersible Technology, 2013.
[3] Jesse Levinson, Michael Montemerlo, Sebastian Thrun. Map-based precision vehicle localization in urban environments. Robotics: Science and Systems, Vol. 4, page 1, Citeseer, 2007.
[4] Neil Getz. Control of balance for a nonlinear nonholonomic non-minimum phase model of a bicycle. American Control Conference, Vol. 1, pages 148-151, IEEE, 1994.
[5] Brian Yamauchi. All-weather perception for man-portable robots using ultra-wideband radar. 2010 IEEE International Conference on Robotics and Automation (ICRA), pages 3610-3615, IEEE, 2010.
[6] Nico Kaempchen, Kay Ch. Fuerstenberg, Alexander G. Skibicki, Klaus C. J. Dietmayer. Sensor fusion for multiple automotive active safety and comfort applications. Advanced Microsystems for Automotive Applications 2004, pages 137-163, Springer, 2004.
[7] Tarak Gandhi, Mohan Manubhai Trivedi. Pedestrian protection systems: Issues, survey, and challenges. IEEE Transactions on Intelligent Transportation Systems, Vol. 8, No. 3, pages 413-430, IEEE, 2007.
[8] Daniel Olmeda, Arturo De La Escalera, Jose M. Armingol. Detection and tracking of pedestrians in infrared images. 2009 3rd International Conference on Signals, Circuits and Systems (SCS), pages 1-6, IEEE, 2009.
[9] Klaus C. J. Dietmayer, Jan Sparbert, Daniel Streller. Model based object classification and object tracking in traffic scenes from range images. Proceedings of the IV IEEE Intelligent Vehicles Symposium, Vol. 200, 2001.
[10] Daniel Streller, Klaus Dietmayer. Object tracking and classification using a multiple hypothesis approach. Intelligent Vehicles Symposium, IEEE, 2004.
[11] Cristiano Premebida, Urbano Nunes. A multi-target tracking and GMM-classifier for intelligent vehicles. Intelligent Transportation Systems Conference (ITSC), pages 313-318, IEEE, 2006.
[12] Samuel Gidel, Paul Checchin, Christophe Blanc, Thierry Chateau, Laurent Trassoudaine. Pedestrian detection method using a multilayer laserscanner: Application in urban environment. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 173-178, IEEE, 2008.
[13] Uwe Franke, Armin Joos. Real-time stereo vision for urban traffic scene understanding. Proceedings of the IEEE Intelligent Vehicles Symposium, pages 273-278, IEEE, 2000.
[14] Mirko Mahlisch, Roland Schweiger, Werner Ritter, Klaus Dietmayer. Sensor fusion using spatio-temporal aligned video and LiDAR for improved vehicle detection. 2006 IEEE Intelligent Vehicles Symposium, pages 424-429, IEEE, 2006.
[15] Raphael Labayrade, Cyril Royere, Dominique Gruyer, Didier Aubert. Cooperative fusion for multi-obstacles detection with use of stereovision and laser scanner. Autonomous Robots, Vol. 19, No. 2, pages 117-140, Springer, 2005.
[16] Jeoung Woo, Young-Joong Kim, Jeong-on Lee, Myo-Taeg Lim. Localization of mobile robot using particle filter. International Joint Conference (SICE-ICASE), pages 3031-3034, IEEE, 2006.
[17] Fredrik Gustafsson, Fredrik Gunnarsson, Niclas Bergman, Urban Forssell, Jonas Jansson, Rickard Karlsson, P-J. Nordlund. Particle filters for positioning, navigation, and tracking. IEEE Transactions on Signal Processing, Vol. 50, No. 2, pages 425-437, IEEE, 2002.
[18] Sebastian Thrun, Wolfram Burgard, Dieter Fox. Probabilistic Robotics. MIT Press, 2005.
[19] Sebastian Thrun. Particle filters in robotics. Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 511-518, Morgan Kaufmann Publishers Inc., 2002.
[20] Dieter Fox, Sebastian Thrun, Wolfram Burgard, Frank Dellaert. Particle filters for mobile robot localization. Sequential Monte Carlo Methods in Practice, pages 401-428, Springer, 2001.
[21] Fengbing Luo, Bianjing Du, Zhen Fan. Mobile robot localization based on particle filter. 11th World Congress on Intelligent Control and Automation (WCICA), pages 3089-3093, IEEE, 2014.
[22] Dongsheng Wang, Jianchao Zhao, Wei Wang. Particle filter based robust mobile robot localization. International Conference on Mechatronics and Automation (ICMA), pages 817-821, IEEE, 2009.
[23] Mirko Meuter, Uri Iurgel, Su-Birm Park, Anton Kummert. The unscented Kalman filter for pedestrian tracking from a moving host. 2008 IEEE Intelligent Vehicles Symposium, pages 37-42, IEEE, 2008.
[24] Daniel Olmeda, Arturo De La Escalera, Jose M. Armingol. Detection and tracking of pedestrians in infrared images. 2009 3rd International Conference on Signals, Circuits and Systems (SCS), pages 1-6, IEEE, 2009.
[25] Eric A. Wan, Rudolph Van Der Merwe. The unscented Kalman filter for nonlinear estimation. Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000, pages 153-158, IEEE, 2000.
[26] Simon J. Julier, Jeffrey K. Uhlmann. New extension of the Kalman filter to nonlinear systems. AeroSense '97, International Society for Optics and Photonics, pages 182-193, 1997.
[27] Ehab I. Al Khatib, Mohammad A. Jaradat, Mamoun Abdel-Hafez, Milad Roigari. Multiple sensor fusion for mobile robot localization and navigation using the extended Kalman filter. 2015 10th International Symposium on Mechatronics and its Applications (ISMA), pages 1-5, IEEE, 2015.
[28] John Shackleton, Brian VanVoorst, Joel Hesch. Tracking people with a 360-degree lidar. 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 420-426, IEEE, 2010.
[29] Osama Masoud, Nikolaos P. Papanikolopoulos. A novel method for tracking and counting pedestrians in real-time using a single camera. IEEE Transactions on Vehicular Technology, Vol. 50, No. 5, pages 1267-1278, IEEE, 2001.
[30] Hyunggi Cho, Young-Woo Seo, B. V. K. Vijaya Kumar, Ragunathan Raj Rajkumar. A multi-sensor fusion system for moving object detection and tracking in urban driving environments. 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 1836-1843, IEEE, 2014.
[31] Christophe Blanc, Laurent Trassoudaine, Jean Gallice. EKF and particle filter track-to-track fusion: a quantitative comparison from radar/lidar obstacle tracks. 2005 8th International Conference on Information Fusion, Vol. 2, IEEE, 2005.
[32] Fadi Fayad, Veronique Cherfaoui. Tracking objects using a laser scanner in driving situation based on modeling target shape. 2007 IEEE Intelligent Vehicles Symposium, pages 44-49, IEEE, 2007.
[33] Leonardo Marin, Marina Valles, Angel Soriano, Angel Valera, Pedro Albertos. Event-based localization in Ackermann steering limited resource mobile robots. IEEE/ASME Transactions on Mechatronics, Vol. 19, No. 4, pages 1171-1182, IEEE, 2014.
[34] Robin Schubert, Eric Richter, Gerd Wanielik. Comparison and evaluation of advanced motion models for vehicle tracking. 2008 11th International Conference on Information Fusion, pages 1-6, IEEE, 2008.