Gates and Gating
▪ information about target states is obtained indirectly via measurements
from a sensor or a set of sensors; the batch of measurements received at
each processing time instant is called a scan
▪ each scan contains a certain number of measurements
▪ it is usually assumed (except in specific types of tracking problems) that
each target generates at most one measurement per scan
▪ the number of measurements in a scan depends on various factors:
▪ the number of targets present in the surveillance volume
▪ the detection probability
▪ the amount of clutter
Measurement Gating and Validation
▪ gating is the hard decision about which measurements are
considered valid (feasible) measurements for a target
▪ the region of the measurement space in which the feasible measurements
for a target are allowed to lie is called the target's gate
▪ minimum/maximum assumptions about the target motion give coarse,
rectangular gates
▪ more detailed gates are formed using the predicted measurement means
and innovation (measurement prediction) covariances of the track
▪ predict the measurement from the predicted track state
▪ this gives an area in which to expect the next observation
▪ make observations and obtain measurements
▪ observations may be raw sensor data or the output of a target
detector
▪ check whether a measurement lies close to the predicted measurement in
terms of the squared Mahalanobis distance
▪ if the distance is smaller than a threshold obtained from a cumulative
distribution, the measurement and the track form a pairing (match)
▪ the area around the predicted measurement in which pairings are
accepted is called the validation gate or validation region
▪ the procedure is also called validation gating or simply gating
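The predict-and-validate cycle above can be sketched as follows; the measurement matrix H, the covariances, and the numeric threshold are illustrative assumptions, not values from the slides:

```python
import numpy as np

# Constant-velocity state [x, y, vx, vy]; position-only measurements (assumed).
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

def gate(z, x_pred, P_pred, R, gamma):
    """Validate a measurement against the predicted track state."""
    z_pred = H @ x_pred                      # predicted measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    v = z - z_pred                           # innovation
    d2 = float(v @ np.linalg.solve(S, v))    # squared Mahalanobis distance
    return d2 < gamma, d2

x_pred = np.array([0.0, 0.0, 1.0, 0.0])
P_pred = np.eye(4)
R = 0.5 * np.eye(2)
gamma = 9.21                                 # chi-square 99% threshold, 2 dof

ok_near, d2_near = gate(np.array([1.0, 1.0]), x_pred, P_pred, R, gamma)
ok_far,  d2_far  = gate(np.array([5.0, 0.0]), x_pred, P_pred, R, gamma)
```

A nearby measurement is accepted into the gate while a distant one is rejected, even before any association decision is made.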
Validation/Gating
▪ challenging due to:
▪ multiple targets
▪ association ambiguity when several measurements fall in the gate
▪ false alarms
▪ detection uncertainty (occlusions, misdetections)
Validation/Gating
▪ the validation test assumes that measurements z(k+1) are distributed
according to a Gaussian density centered at the measurement
prediction ẑ(k+1) with covariance S(k+1)
▪ the squared Mahalanobis distance of a pairing is
d^2 = v^T S^{-1} v, where v = z(k+1) − ẑ(k+1) is the innovation
▪ validated measurements lie in the region (called the validation gate)
V(γ_G) = { z : d^2 < γ_G }
with a probability determined by the gate threshold γ_G
▪ the shape of the validation gate is a hyperellipsoid
▪ the measurement likelihood model is set to the Gaussian density
p(z) = (2π)^{−n/2} |Σ|^{−1/2} exp(−(1/2) (z − μ)^T Σ^{−1} (z − μ))
▪ assuming n-dimensional observations, with
μ = ẑ(k+1) and Σ = S(k+1)
▪ the n-dimensional squared Mahalanobis distance is then
d^2 = (z − μ)^T Σ^{−1} (z − μ)
▪ by changing variables as
y = C^{−1}(z − μ)
with Σ = C C^T, where C is obtained from a Cholesky decomposition, we
have y ~ N(0, I)
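The change of variables can be checked numerically; the prediction mean μ and covariance Σ below are made-up illustrative values:

```python
import numpy as np

mu = np.array([1.0, 2.0])                # predicted measurement (assumed)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])           # innovation covariance (assumed)
C = np.linalg.cholesky(Sigma)            # Sigma = C C^T, lower triangular C

z = np.array([2.0, 1.0])                 # an incoming measurement
v = z - mu                               # innovation
y = np.linalg.solve(C, v)                # whitened variable y = C^{-1} v

d2_direct = float(v @ np.linalg.solve(Sigma, v))   # v^T Sigma^{-1} v
d2_white  = float(y @ y)                           # y^T I y
```

Since Σ^{−1} = (C C^T)^{−1} = C^{−T} C^{−1}, the two distances are identical: whitening turns the correlated Gaussian into a standard one without changing the Mahalanobis distance.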
therefore
d^2 = y^T I y, which means d^2 = Σ_{i=1}^{k} y_i^2
▪ if x_1, …, x_k form a set of k i.i.d. standard normally distributed
random variables, x_i ~ N(0, 1),
▪ then the variable q = Σ_{i=1}^{k} x_i^2 follows a χ^2 distribution with
k degrees of freedom
▪ therefore d^2 is χ^2 distributed with k degrees of freedom
▪ the gate threshold γ_G is obtained from the inverse χ^2 cumulative
distribution, typically expressed as χ^2_{k,α}, at a level α and k degrees
of freedom
▪ given the level α, the validation gate is a region of acceptance such
that 100(1 − α)% of true measurements are rejected; typical values
for α are 0.95 or 0.99
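A minimal sketch of obtaining γ_G from the inverse χ² CDF (here via scipy.stats.chi2.ppf) and checking the acceptance probability by Monte Carlo; the innovation covariance S is an illustrative assumption:

```python
import numpy as np
from scipy.stats import chi2

k, alpha = 2, 0.99
gamma_G = chi2.ppf(alpha, df=k)          # inverse chi-square CDF; ~9.21 for k=2

# Monte Carlo check: a fraction ~alpha of true measurements
# (innovations drawn from N(0, S)) should fall inside the gate.
rng = np.random.default_rng(1)
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])               # assumed innovation covariance
L = np.linalg.cholesky(S)
v = rng.standard_normal((100_000, k)) @ L.T      # innovations ~ N(0, S)
d2 = np.einsum('ij,jk,ik->i', v, np.linalg.inv(S), v)
frac_inside = float(np.mean(d2 < gamma_G))
```

With α = 0.99 roughly 1% of true measurements land outside the gate, which is the price paid for keeping the gate small.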
▪ the Euclidean distance accounts for position only
▪ the Mahalanobis distance with spherical covariance matrices
accounts for position and uncertainty
▪ the Mahalanobis distance with non-spherical covariance matrices
accounts for position, uncertainty, and correlations
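The difference can be illustrated with two measurements at the same Euclidean distance from the prediction; the non-spherical covariance S below is an assumed value:

```python
import numpy as np

z_pred = np.array([0.0, 0.0])
S = np.diag([9.0, 1.0])          # elongated (non-spherical) uncertainty

za = np.array([3.0, 0.0])        # offset along the uncertain direction
zb = np.array([0.0, 3.0])        # same Euclidean offset, certain direction

def mahalanobis2(z):
    v = z - z_pred
    return float(v @ np.linalg.solve(S, v))

d_euc_a = float(np.linalg.norm(za - z_pred))   # Euclidean: 3.0
d_euc_b = float(np.linalg.norm(zb - z_pred))   # Euclidean: 3.0
d2_a = mahalanobis2(za)                        # 3^2 / 9 = 1
d2_b = mahalanobis2(zb)                        # 3^2 / 1 = 9
```

Both points are equally far in the Euclidean sense, but za lies well inside an ellipsoidal gate while zb may fall outside it, because the Mahalanobis distance scales each direction by its uncertainty.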
▪ gating should be possible after receiving the first measurement z_0 at
time k = 0
▪ at time k = 1, gating is applied to z_1; therefore ẑ_{1|0} and
S_{1|0} must be calculated
▪ with a single measurement, it is generally not possible to form
a proper state covariance P_{0|0}
▪ the measurement covariance and prior information about the target are
used to form an initial state covariance P_{0|0}
▪ two popular approaches: single-point initiation and two-point difference
initiation
Single-Point Initiation: Cartesian state, position-only measurements:
x_k = [x_k  y_k  v_k^x  v_k^y]^T,
z_k = [x_k  y_k]^T + [e_k^x  e_k^y]^T
where e_k^x ~ N(0, σ_x^2) and e_k^y ~ N(0, σ_y^2)
suppose we receive z_0; what should x̂_{0|0} and P_{0|0} be?
x̂_{0|0} = [z_0^x  z_0^y  0  0]^T
P_{0|0} = diag(σ_x^2, σ_y^2, (v_max^x/κ)^2, (v_max^y/κ)^2)
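Single-point initiation can be sketched as below; the function name and numeric values are illustrative assumptions:

```python
import numpy as np

def single_point_init(z0, sigma_x, sigma_y, vmax_x, vmax_y, kappa):
    """Initial state [x, y, vx, vy] and covariance from one position fix.

    Velocities start at zero; their standard deviation is set to
    v_max / kappa from the assumed maximum target speed.
    """
    x0 = np.array([z0[0], z0[1], 0.0, 0.0])
    P0 = np.diag([sigma_x**2, sigma_y**2,
                  (vmax_x / kappa)**2, (vmax_y / kappa)**2])
    return x0, P0

x0, P0 = single_point_init(np.array([10.0, 5.0]),
                           sigma_x=1.0, sigma_y=2.0,
                           vmax_x=30.0, vmax_y=30.0, kappa=3.0)
```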
Two-Point Difference Initiation:
▪ an alternative is to first gate with ‖z_1 − z_0‖ ≤ T v_max/κ, and
▪ when the gate is satisfied, form the state estimate and covariance as
x̂_{1|1} = [z_1^x  z_1^y  (z_1^x − z_0^x)/T  (z_1^y − z_0^y)/T]^T
P_{1|1} =
[ σ_x^2     0         σ_x^2/T       0
  0         σ_y^2     0             σ_y^2/T
  σ_x^2/T   0         2σ_x^2/T^2    0
  0         σ_y^2/T   0             2σ_y^2/T^2 ]
where T is the sampling period
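Two-point difference initiation can be sketched as below; the function name and numeric values are illustrative assumptions:

```python
import numpy as np

def two_point_init(z0, z1, T, sigma_x, sigma_y):
    """State [x, y, vx, vy] and covariance from two position fixes T apart.

    Velocity is the finite difference of the two positions; the covariance
    accounts for the measurement noise entering both the position and the
    differenced velocity (hence the cross terms sigma^2/T).
    """
    x1 = np.array([z1[0], z1[1],
                   (z1[0] - z0[0]) / T, (z1[1] - z0[1]) / T])
    sx2, sy2 = sigma_x**2, sigma_y**2
    P1 = np.array([
        [sx2,      0.0,      sx2 / T,        0.0           ],
        [0.0,      sy2,      0.0,            sy2 / T       ],
        [sx2 / T,  0.0,      2 * sx2 / T**2, 0.0           ],
        [0.0,      sy2 / T,  0.0,            2 * sy2 / T**2],
    ])
    return x1, P1

x1, P1 = two_point_init(np.array([0.0, 0.0]), np.array([2.0, 1.0]),
                        T=0.5, sigma_x=1.0, sigma_y=1.0)
```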
▪ gating reduces the number of possible association combinations
▪ some association uncertainty still remains; it is handled by association
algorithms:
▪ nearest neighbors (NN) (single, non-global hard decision)
▪ global nearest neighbors (GNN) (single unique hard decision)
▪ (joint) probabilistic data association (JPDA) (soft decisions, i.e.,
no decision or decisions with probabilities)
▪ multiple hypothesis tracking (MHT) (making hard but multiple
decisions and keeping them until sufficient evidence arrives)
▪ the choice of association algorithm depends on the SNR of the system
and the amount of computational resources
▪ the NNSF (nearest neighbor standard filter) takes hard association
decisions, sometimes correct and sometimes wrong
▪ the NNSF is simple to implement and works well in well-behaved conditions
▪ the TSF (track splitting filter) grows a tree of tracks from association
ambiguities and relies on the track likelihood as a goodness-of-fit
measure for pruning; it is rarely used in practice
▪ the PDAF makes soft decisions: it averages over all validated association
possibilities
▪ the PDAF soft decision is never totally correct but never totally wrong, a
suboptimal strategy
▪ compared to the NNSF, the PDAF can significantly improve tracking
in regions of high false-alarm density
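The per-track nearest-neighbor rule described above can be sketched as follows; the function name and numeric values are illustrative assumptions:

```python
import numpy as np

def nn_associate(z_pred, S, measurements, gamma):
    """Return (index, d2) of the validated measurement with the smallest
    squared Mahalanobis distance, or (None, None) if the gate is empty."""
    Sinv = np.linalg.inv(S)
    best_i, best_d2 = None, None
    for i, z in enumerate(measurements):
        v = np.asarray(z) - z_pred
        d2 = float(v @ Sinv @ v)
        if d2 < gamma and (best_d2 is None or d2 < best_d2):
            best_i, best_d2 = i, d2
    return best_i, best_d2

z_pred = np.array([0.0, 0.0])
S = np.eye(2)
meas = [np.array([5.0, 5.0]),   # outside the gate
        np.array([1.0, 0.0]),   # nearest validated measurement
        np.array([0.0, 2.0])]   # validated but farther
idx, d2 = nn_associate(z_pred, S, meas, gamma=9.21)
```

This hard decision is exactly the NNSF choice; with several validated measurements in the gate, a GNN, PDAF, or MHT approach would treat the ambiguity differently.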
Other MOFT Tutorials – Lists and Links
Introduction to Multi Target Tracking
Bayesian Inference and Filtering
Kalman Filtering
Sequential Monte Carlo (SMC) Methods and Particle Filtering
Single Object Filtering Single Target Tracking
Nearest Neighbor(NN) and Probabilistic Data Association Filter(PDAF)
Multi Object Filtering Multi Target Tracking
Global Nearest Neighbor and Joint Probabilistic Data Association Filter
Data Association in Multi Target Tracking
Multiple Hypothesis Tracking, MHT
Random Finite Sets, RFS
Random Finite Set Based RFS Filters
RFS Filters, Probability Hypothesis Density, PHD
RFS Filters, Cardinalized Probability Hypothesis Density, CPHD Filter
RFS Filters, Multi Bernoulli MemBer and Cardinality Balanced MeMBer, CBMemBer Filter
RFS Labeled Filters, Generalized Labeled Multi Bernoulli, GLMB and Labeled Multi Bernoulli, LMB Filters
Multiple Model Methods in Multi Target Tracking
Multi Target Tracking Implementation
Multi Target Tracking Performance and Metrics