The document describes the mean shift algorithm and its application to object tracking in computer vision. Mean shift is an iterative procedure that moves data points to the average of nearby points, converging at modes of the data's probability density function. It can be used for tracking by modeling a target object's color distribution and applying mean shift to match candidate locations in subsequent frames. The algorithm maximizes the Bhattacharyya coefficient between color distributions to find the best match for the target's new location in each frame.
Mean Shift Tracking Presentation
1. Mean Shift Tracking
CS4243 Computer Vision and Pattern Recognition
Leow Wee Kheng
Department of Computer Science
School of Computing
National University of Singapore
(CS4243) Mean Shift Tracking 1 / 1
2. Mean Shift
Mean Shift
Mean Shift [Che98, FH75, Sil86]
An algorithm that iteratively shifts a data point to the average of the data points in its neighborhood.
Similar to clustering.
Useful for clustering, mode seeking, probability density estimation, tracking, etc.
3. Mean Shift
Consider a set S of n data points x_i in d-D Euclidean space X.
Let K(x) denote a kernel function that indicates how much x contributes to the estimation of the mean.
Then, the sample mean m at x with kernel K is given by
$$ m(\mathbf{x}) = \frac{\sum_{i=1}^{n} K(\mathbf{x} - \mathbf{x}_i)\, \mathbf{x}_i}{\sum_{i=1}^{n} K(\mathbf{x} - \mathbf{x}_i)} \quad (1) $$
The difference m(x) − x is called the mean shift.
Mean shift algorithm: iteratively move each data point to its mean.
In each iteration, x ← m(x).
The algorithm stops when m(x) = x.
4. Mean Shift
The sequence x, m(x), m(m(x)), . . . is called the trajectory of x.
If sample means are computed at multiple points, then at each iteration, the update is applied simultaneously to all these points.
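The iteration above can be sketched in a few lines of Python. This is a minimal illustration with the Gaussian kernel of Eq. 5; the data, starting point, and tolerance are invented for the example:

```python
import numpy as np

def mean_shift(x, points, kernel, tol=1e-6, max_iter=500):
    """Iterate x <- m(x) (Eq. 1) until the shift is negligible."""
    for _ in range(max_iter):
        w = kernel(np.sum((x - points) ** 2, axis=1))    # K(x - x_i)
        m = (w[:, None] * points).sum(axis=0) / w.sum()  # sample mean m(x)
        if np.linalg.norm(m - x) < tol:
            break
        x = m
    return x

# Two well-separated blobs: a start near one blob converges to that blob's mode.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
gaussian = lambda r2: np.exp(-r2)        # Gaussian kernel, K(x) = exp(-||x||^2)
mode = mean_shift(np.array([0.5, 0.5]), pts, gaussian)   # lands near (0, 0)
```

Because the kernel decays with distance, points in the far blob contribute almost nothing, so the trajectory stays within the nearby cluster and stops at its mode.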
5. Mean Shift Kernel
Kernel
Typically, kernel K is a function of ‖x‖²:
$$ K(\mathbf{x}) = k(\|\mathbf{x}\|^2) \quad (2) $$
k is called the profile of K.
Properties of the profile:
1. k is nonnegative.
2. k is nonincreasing: k(x) ≥ k(y) if x < y.
3. k is piecewise continuous and
$$ \int_0^{\infty} k(x)\, dx < \infty \quad (3) $$
6. Mean Shift Kernel
Examples of kernels [Che98]:
Flat kernel:
$$ K(\mathbf{x}) = \begin{cases} 1 & \text{if } \|\mathbf{x}\| \le 1 \\ 0 & \text{otherwise} \end{cases} \quad (4) $$
Gaussian kernel:
$$ K(\mathbf{x}) = \exp\left(-\|\mathbf{x}\|^2\right) \quad (5) $$
[Figure: (a) Flat kernel, (b) Gaussian kernel]
7. Mean Shift Density Estimation
Density Estimation
Kernel density estimation (the Parzen window technique) is a popular method for estimating probability density [CRM00, CRM02, DH73].
For a set of n data points x_i in d-D space, the kernel density estimate with kernel K(x) (profile k(x)) and radius h is
$$ \tilde{f}_K(\mathbf{x}) = \frac{1}{nh^d} \sum_{i=1}^{n} K\!\left(\frac{\mathbf{x} - \mathbf{x}_i}{h}\right) = \frac{1}{nh^d} \sum_{i=1}^{n} k\!\left(\left\|\frac{\mathbf{x} - \mathbf{x}_i}{h}\right\|^2\right) \quad (6) $$
The quality of the kernel density estimator is measured by the mean square error between the actual density and the estimate.
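Eq. 6 is straightforward to implement. A minimal 1-D sketch, using a normalised Gaussian profile so the estimate can be checked against a known density (the sample size and bandwidth are arbitrary choices for the example):

```python
import numpy as np

def kde(x, points, profile, h):
    """Kernel density estimate f_K(x) of Eq. 6 with profile k and radius h."""
    n, d = points.shape
    r2 = np.sum(((x - points) / h) ** 2, axis=1)   # ||(x - x_i) / h||^2
    return profile(r2).sum() / (n * h ** d)

rng = np.random.default_rng(1)
samples = rng.normal(size=(5000, 1))                        # draws from N(0, 1)
k_gauss = lambda r2: np.exp(-r2 / 2) / np.sqrt(2 * np.pi)   # normalised Gaussian profile, d = 1
est = kde(np.zeros(1), samples, k_gauss, h=0.3)
# est should lie close to the true density 1/sqrt(2*pi) ~ 0.399 at the origin
```

The small remaining gap between the estimate and the true value is exactly the bias-plus-variance error that the mean square error criterion above measures.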
8. Mean Shift Density Estimation
The mean square error is minimized by the Epanechnikov kernel:
$$ K_E(\mathbf{x}) = \begin{cases} \dfrac{1}{2C_d}(d+2)(1 - \|\mathbf{x}\|^2) & \text{if } \|\mathbf{x}\| \le 1 \\ 0 & \text{otherwise} \end{cases} \quad (7) $$
where C_d is the volume of the unit d-D sphere, with profile
$$ k_E(x) = \begin{cases} \dfrac{1}{2C_d}(d+2)(1 - x) & \text{if } 0 \le x \le 1 \\ 0 & \text{if } x > 1 \end{cases} \quad (8) $$
A more commonly used kernel is the Gaussian
$$ K(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}} \exp\left(-\frac{1}{2}\|\mathbf{x}\|^2\right) \quad (9) $$
with profile
$$ k(x) = \frac{1}{(2\pi)^{d/2}} \exp\left(-\frac{1}{2}x\right) \quad (10) $$
9. Mean Shift Density Estimation
Define another kernel G(x) = g(‖x‖²) such that
$$ g(x) = -k'(x) = -\frac{dk(x)}{dx} \quad (11) $$
Important Result
Mean shift with kernel G moves x along the direction of the gradient of the density estimate $\tilde{f}$ with kernel K.
Define the density estimate with kernel K,
but perform mean shift with kernel G.
Then, mean shift performs gradient ascent on the density estimate.
10. Mean Shift Density Estimation
Proof:
Define $\tilde{f}$ with kernel G:
$$ \tilde{f}_G(\mathbf{x}) \equiv \frac{C}{nh^d} \sum_{i=1}^{n} g\!\left(\left\|\frac{\mathbf{x} - \mathbf{x}_i}{h}\right\|^2\right) \quad (12) $$
where C is a normalization constant.
The mean shift M with kernel G is
$$ M_G(\mathbf{x}) \equiv \frac{\displaystyle \sum_{i=1}^{n} g\!\left(\left\|\frac{\mathbf{x} - \mathbf{x}_i}{h}\right\|^2\right) \mathbf{x}_i}{\displaystyle \sum_{i=1}^{n} g\!\left(\left\|\frac{\mathbf{x} - \mathbf{x}_i}{h}\right\|^2\right)} - \mathbf{x} \quad (13) $$
11. Mean Shift Density Estimation
The estimate of the density gradient is the gradient of the density estimate:
$$ \hat{\nabla} f_K(\mathbf{x}) \equiv \nabla \tilde{f}_K(\mathbf{x}) = \frac{2}{nh^{d+2}} \sum_{i=1}^{n} (\mathbf{x} - \mathbf{x}_i)\, k'\!\left(\left\|\frac{\mathbf{x} - \mathbf{x}_i}{h}\right\|^2\right) = \frac{2}{nh^{d+2}} \sum_{i=1}^{n} (\mathbf{x}_i - \mathbf{x})\, g\!\left(\left\|\frac{\mathbf{x} - \mathbf{x}_i}{h}\right\|^2\right) = \frac{2}{Ch^2}\, \tilde{f}_G(\mathbf{x})\, M_G(\mathbf{x}) \quad (14) $$
Then,
$$ M_G(\mathbf{x}) = \frac{Ch^2}{2}\, \frac{\nabla \tilde{f}_K(\mathbf{x})}{\tilde{f}_G(\mathbf{x})} \quad (15) $$
Thus, M_G is an estimate of the normalized gradient of f_K.
So, mean shift (Eq. 13) can be used to obtain an estimate of ∇f_K.
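The identity of Eq. 15 can be checked numerically: the mean shift vector of Eq. 13 should equal (Ch²/2)·∇f̃_K/f̃_G, with the gradient taken by finite differences. A small sketch with a Gaussian profile; the data and evaluation point are arbitrary, and C is taken as 1 since it cancels between the two sides:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))          # data points x_i in 2-D
h, d, n = 0.8, 2, 200

k = lambda r: np.exp(-r / 2)           # Gaussian profile k (unnormalised)
g = lambda r: 0.5 * np.exp(-r / 2)     # g(r) = -k'(r), Eq. 11

def f_K(x):                            # density estimate, Eq. 6
    r = np.sum(((x - X) / h) ** 2, axis=1)
    return k(r).sum() / (n * h ** d)

def f_G(x):                            # density estimate with kernel G, Eq. 12 (C = 1)
    r = np.sum(((x - X) / h) ** 2, axis=1)
    return g(r).sum() / (n * h ** d)

def M_G(x):                            # mean shift vector, Eq. 13
    w = g(np.sum(((x - X) / h) ** 2, axis=1))
    return (w[:, None] * X).sum(axis=0) / w.sum() - x

x = np.array([0.5, -0.3])
eps = 1e-6                             # central-difference gradient of f_K
grad = np.array([(f_K(x + eps * e) - f_K(x - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
rhs = (h ** 2 / 2) * grad / f_G(x)     # right-hand side of Eq. 15 with C = 1
# M_G(x) and rhs agree to within finite-difference error
```

The agreement confirms that each mean shift step points along the estimated density gradient, scaled by the inverse local density, which is what makes the iteration a gradient-ascent procedure.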
12. Mean Shift Tracking
Mean Shift Tracking
Basic Ideas [CRM00]:
Model object using color probability density.
Track target object in video by matching color density.
Use mean shift to estimate color density and target location.
13. Mean Shift Tracking Object Model
Object Model
Let x_i, i = 1, . . . , n, denote the pixel locations of the model, centered at 0.
Represent the color distribution by a discrete m-bin color histogram.
Let b(x_i) denote the color bin of the color at x_i.
Assume the size of the model is normalized, so the kernel radius h = 1.
Then, the probability q of color u in the model is
$$ q_u = C \sum_{i=1}^{n} k\!\left(\|\mathbf{x}_i\|^2\right) \delta\!\left(b(\mathbf{x}_i) - u\right) \quad (16) $$
where C is the normalization constant
$$ C = \left[ \sum_{i=1}^{n} k\!\left(\|\mathbf{x}_i\|^2\right) \right]^{-1} \quad (17) $$
The kernel profile k weights each pixel's contribution by its distance to the centroid.
14. Mean Shift Tracking Object Model
δ is the Kronecker delta function
$$ \delta(a) = \begin{cases} 1 & \text{if } a = 0 \\ 0 & \text{otherwise} \end{cases} \quad (18) $$
That is, x_i contributes k(‖x_i‖²) to q_u if b(x_i) = u.
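Eqs. 16–18 amount to a kernel-weighted histogram. A minimal sketch; the pixel coordinates, bin assignments, and 3-bin layout are invented for the example, and the profile is an unnormalised Epanechnikov-style one:

```python
import numpy as np

def color_model(pixels, bins, n_bins, profile):
    """Model histogram q_u of Eqs. 16-17.
    pixels: (n, 2) locations centred at 0, scaled so the kernel radius is 1.
    bins:   (n,)   colour bin index b(x_i) of each pixel.
    """
    w = profile(np.sum(pixels ** 2, axis=1))   # k(||x_i||^2)
    q = np.zeros(n_bins)
    np.add.at(q, bins, w)                      # add k(||x_i||^2) to q_u where b(x_i) = u
    return q / w.sum()                         # C of Eq. 17 makes q sum to 1

k_epan = lambda r2: np.clip(1 - r2, 0.0, None)   # Epanechnikov-style profile
pix = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.9]])
bins = np.array([0, 1, 1])
q = color_model(pix, bins, 3, k_epan)
# pixels near the centre dominate: q[0] > q[1], and the unused bin 2 stays 0
```

Down-weighting pixels far from the centroid makes the model robust to the background pixels that inevitably leak into the edges of the model window.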
15. Mean Shift Tracking Target Candidate
Target Candidate
Let y_i, i = 1, . . . , n_h, denote the pixel locations of the target candidate, centered at y.
Then, the probability p of color u in the target is
$$ p_u(\mathbf{y}) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{\mathbf{y} - \mathbf{y}_i}{h}\right\|^2\right) \delta\!\left(b(\mathbf{y}_i) - u\right) \quad (19) $$
where C_h is the normalization constant
$$ C_h = \left[ \sum_{i=1}^{n_h} k\!\left(\left\|\frac{\mathbf{y} - \mathbf{y}_i}{h}\right\|^2\right) \right]^{-1} \quad (20) $$
16. Mean Shift Tracking Color Density Matching
Color Density Matching
Use the Bhattacharyya coefficient ρ:
$$ \rho(\mathbf{p}(\mathbf{y}), \mathbf{q}) = \sum_{u=1}^{m} \sqrt{p_u(\mathbf{y})\, q_u} \quad (21) $$
ρ is the cosine of the vectors (√p_1, . . . , √p_m)ᵀ and (√q_1, . . . , √q_m)ᵀ.
A large ρ means a good color match.
For each image frame, find the y that maximizes ρ.
This y is the location of the target.
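Eq. 21 is one line of code. A quick sketch with invented histograms: identical distributions give ρ = 1, disjoint ones give ρ = 0, and anything in between measures the quality of the colour match:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient rho of Eq. 21 between two histograms."""
    return float(np.sum(np.sqrt(p * q)))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.5, 0.3, 0.2])
rho_same = bhattacharyya(p, p)      # perfect match: cosine of identical vectors
rho_mix = bhattacharyya(p, q)       # partial match: strictly between 0 and 1
rho_disjoint = bhattacharyya(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # no overlap
```

Since both histograms sum to 1, the square-rooted vectors are unit vectors, which is why ρ is exactly the cosine interpretation given above.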
17. Mean Shift Tracking Tracking Algorithm
Tracking Algorithm
Given {q_u} of the model and location y of the target in the previous frame:
1. Initialize the location of the target in the current frame as y.
2. Compute {p_u(y)} and ρ(p(y), q).
3. Apply mean shift: compute the new location z as
$$ \mathbf{z} = \frac{\displaystyle \sum_{i=1}^{n_h} g\!\left(\left\|\frac{\mathbf{y} - \mathbf{y}_i}{h}\right\|^2\right) \mathbf{y}_i}{\displaystyle \sum_{i=1}^{n_h} g\!\left(\left\|\frac{\mathbf{y} - \mathbf{y}_i}{h}\right\|^2\right)} \quad (22) $$
4. Compute {p_u(z)} and ρ(p(z), q).
5. While ρ(p(z), q) < ρ(p(y), q), do z ← ½(y + z).
6. If ‖z − y‖ is small enough, stop. Else, set y ← z and go to (1).
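The loop above can be sketched end to end. One caveat: Eq. 22 as written weights pixels by the spatial kernel g alone; the tracker of [CRM00] additionally weights each pixel y_i by √(q_u / p_u(y)) for its colour bin u = b(y_i), which is what pulls the window toward the target's colours, and that weighting is included below. Everything else (the toy frame, bin layout, bandwidth, and tolerance) is invented for the illustration:

```python
import numpy as np

def histogram(c, xy, bins, n_bins, k, h):
    """Candidate histogram p_u(c) of Eqs. 19-20 around centre c."""
    w = k(np.sum(((c - xy) / h) ** 2, axis=1))
    p = np.zeros(n_bins)
    np.add.at(p, bins, w)
    return p / w.sum()

def track(y, xy, bins, q, n_bins, k, g, h, tol=0.1, max_iter=50):
    """One frame of the tracking algorithm (steps 1-6 above)."""
    rho = lambda c: np.sum(np.sqrt(histogram(c, xy, bins, n_bins, k, h) * q))
    for _ in range(max_iter):
        p = histogram(y, xy, bins, n_bins, k, h)
        w_col = np.sqrt(q[bins] / np.maximum(p[bins], 1e-12))  # colour weights [CRM00]
        w = w_col * g(np.sum(((y - xy) / h) ** 2, axis=1))
        z = (w[:, None] * xy).sum(axis=0) / w.sum()            # step 3, Eq. 22
        while rho(z) < rho(y) and np.linalg.norm(z - y) > tol: # step 5
            z = 0.5 * (y + z)
        if np.linalg.norm(z - y) < tol:                        # step 6
            return z
        y = z
    return y

# Toy frame: bin-0 pixels form a disc at (12, 12); the rest of the frame is bin 1.
xs, ys = np.meshgrid(np.arange(20.0), np.arange(20.0))
xy = np.stack([xs.ravel(), ys.ravel()], axis=1)
bins = np.where(np.linalg.norm(xy - [12.0, 12.0], axis=1) < 3, 0, 1)
q = np.array([1.0, 0.0])                       # model histogram: pure bin 0
k = lambda r2: np.clip(1 - r2, 0.0, None)      # Epanechnikov-style profile
g = lambda r2: (r2 <= 1.0).astype(float)       # g = -k': flat within the window
found = track(np.array([9.0, 9.0]), xy, bins, q, 2, k, g, h=6.0)
# the window started at (9, 9) climbs to the disc centre near (12, 12)
```

With the Epanechnikov profile, g is constant inside the window, so each step is just a (colour-weighted) centroid of the window's pixels, which is what makes this tracker cheap enough for real-time use.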
18. Mean Shift Tracking Tracking Algorithm
Step 3: In practice, a window of pixels y_i is considered.
The size of the window is related to h.
Step 5 is used to validate the target's new location.
Step 5 can stop once y and z round off to the same pixel.
Tests show that Step 5 is needed only 0.1% of the time.
Step 6: the algorithm can stop once y and z round off to the same pixel.
To track an object that changes size, vary the radius h (see [CRM00] for details).
19. Mean Shift Tracking Tracking Algorithm
Example 1: Track football player no. 78 [CRM00].
20. Mean Shift Tracking Tracking Algorithm
Example 2: Track a passenger in train station [CRM00].
21. Reference
Reference I
[Che98] Y. Cheng. Mean shift, mode seeking, and clustering. IEEE Trans. on Pattern Analysis and Machine Intelligence, 17(8):790–799, 1998.
[CRM00] D. Comaniciu, V. Ramesh, and P. Meer. Real-time tracking of non-rigid objects using mean shift. In IEEE Proc. on Computer Vision and Pattern Recognition, pages 673–678, 2000.
[CRM02] D. Comaniciu, V. Ramesh, and P. Meer. Mean shift: A robust approach towards feature space analysis. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
22. Reference
Reference II
[DH73] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
[FH75] K. Fukunaga and L. D. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. on Information Theory, 21:32–40, 1975.
[Sil86] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, 1986.