Neighbourhood Component Analysis

T.S. Yo
2007-11-08
References
Outline

●   Introduction
●   Learn the distance metric from data
●   The size of K
●   Procedure of NCA
●   Experiments
●   Discussions
Introduction (1/2)

●   KNN
    –   Simple and effective
    –   Nonlinear decision surface
    –   Non-parametric
    –   Quality improves with more data
    –   Only one parameter, K → easy to tune
Introduction (2/2)
●   Drawbacks of KNN
    –   Computationally expensive: searches through the
        whole training set at test time
    –   How to define the “distance” properly?


●   Learn the distance metric from data, and
    force it to be low rank.
Learn the Distance from Data (1/5)
●   What is a good distance metric?
     –   The one that minimizes (optimizes) the cost!


●   Then, what is the cost?
     –   The expected testing error
     –   Best estimated with the leave-one-out (LOO)
         cross-validation error on the training data

Kohavi, Ron (1995). "A study of cross-validation and bootstrap for accuracy estimation and model selection".
Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 2(12): 1137–1143.
Morgan Kaufmann, San Mateo.
Learn the Distance from Data (2/5)
●   Modeling the LOO error:
    –   Let pij be the probability that point xj is selected as
        point xi's neighbour.
    –   The probability that point xi is correctly classified
        is the total probability of selecting a same-class
        neighbour: pi = Σj∈Ci pij , where Ci is the set of
        points in the same class as xi.


●   Maximizing pi for all xi means minimizing the
    LOO error.
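This LOO-error model can be sketched numerically. The selection probabilities below are a made-up toy example; in NCA they would come from the softmax over distances described on the next slide:

```python
import numpy as np

# Toy neighbour-selection probabilities p[i, j]: rows sum to 1, diagonal 0.
p = np.array([
    [0.0, 0.7, 0.2, 0.1],
    [0.6, 0.0, 0.3, 0.1],
    [0.2, 0.3, 0.0, 0.5],
    [0.1, 0.1, 0.8, 0.0],
])
labels = np.array([0, 0, 1, 1])

# p_i = probability that x_i is classified correctly, i.e. the total
# probability of selecting a neighbour with the same class label.
same_class = labels[:, None] == labels[None, :]
np.fill_diagonal(same_class, False)
p_i = (p * same_class).sum(axis=1)

# Maximizing the p_i corresponds to minimizing the expected LOO error.
expected_loo_error = 1.0 - p_i.mean()
```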
Learn the Distance from Data (3/5)
●   Then, how do we define pij ?
    –   According to the softmax of the distance dij


    –   Relatively smoother than dij

[Figure: “Softmax Function” — plot of exp(−x) against x]
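A minimal sketch of this softmax construction in NumPy (the distance matrix d here is a made-up example):

```python
import numpy as np

def softmax_neighbour_probs(d):
    # p[i, j] = exp(-d[i, j]) / sum_{k != i} exp(-d[i, k]), with p[i, i] = 0
    w = np.exp(-d)
    np.fill_diagonal(w, 0.0)   # a point never selects itself as its neighbour
    return w / w.sum(axis=1, keepdims=True)

d = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
p = softmax_neighbour_probs(d)
```

Closer points get exponentially larger selection probability, but the dependence on dij is smooth, which is what makes gradient-based optimization possible.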
Learn the Distance from Data (4/5)
●   How do we define dij ?
●   Limit the distance measure to the Mahalanobis
    (quadratic) distance.


●   That is to say, we project the original feature
    vectors x into another vector space with a
    transformation matrix A.
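Concretely, the induced distance is dij = ‖Axi − Axj‖², a Mahalanobis distance with metric AᵀA. A small sketch, using a hypothetical 2×2 matrix A:

```python
import numpy as np

def nca_distance_sq(A, xi, xj):
    # d_ij = ||A xi - A xj||^2 = (xi - xj)^T (A^T A) (xi - xj)
    diff = A @ (xi - xj)
    return float(diff @ diff)

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])        # hypothetical transformation matrix
xi = np.array([1.0, 1.0])
xj = np.array([0.0, 0.0])
d = nca_distance_sq(A, xi, xj)    # plain squared Euclidean distance in the A-space
```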
Learn the Distance from Data (5/5)
●   Substitute the dij in pij :



●   Now, we have the objective function:


●   Maximize f(A) w.r.t. A → minimize overall
    LOO error
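With these definitions the objective is f(A) = Σi Σj∈Ci pij, the expected number of correctly classified points. A direct NumPy sketch (the gradient is omitted; any gradient-based optimizer could be used to maximize it, and the data below is an illustrative toy set):

```python
import numpy as np

def nca_objective(A, X, y):
    # f(A) = sum_i sum_{j in same class as i} p_ij, where p_ij is the
    # softmax over squared distances in the space transformed by A
    Z = X @ A.T                                          # project the data
    d = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    w = np.exp(-d)
    np.fill_diagonal(w, 0.0)
    p = w / w.sum(axis=1, keepdims=True)
    same = y[:, None] == y[None, :]
    np.fill_diagonal(same, False)
    return float((p * same).sum())

X = np.array([[0.0], [0.1], [5.0], [5.1]])   # two tight, well-separated classes
y = np.array([0, 0, 1, 1])
f = nca_objective(np.eye(1), X, y)           # near 4: almost every point "correct"
```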
The Size of k
●   For the probability distribution pij :



●   The perplexity can be used as an estimate of the
    number of neighbours to consider, k
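Assuming the standard definition, the perplexity of the neighbour distribution pi· is the exponential of its entropy, i.e. the effective number of neighbours over which a point spreads its probability:

```python
import numpy as np

def perplexity(p_row, eps=1e-12):
    # exp(H(p)): a uniform distribution over m neighbours has perplexity m,
    # so this gives an "effective neighbourhood size" usable as k
    p = p_row[p_row > eps]
    return float(np.exp(-(p * np.log(p)).sum()))

uniform = np.full(5, 0.2)                     # spread evenly over 5 neighbours
peaked = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # all mass on one neighbour
```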
Procedure of NCA (1/2)
●   Use the objective function and its gradient to
    learn the transformation matrix A and K from
    the training data, Dtrain (with or without dimension
    reduction).
●   Project the test data, Dtest, into the transformed
    space.
●   Perform traditional KNN (with K and ADtrain) on
    the transformed test data, ADtest.
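The three steps above can be sketched with scikit-learn, whose `NeighborhoodComponentsAnalysis` implements this algorithm (the dataset and parameter choices here are illustrative, not those used in the slides):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline

X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), test_size=0.3, random_state=0)

# Step 1: learn A from D_train (n_components < n_features forces a low rank);
# steps 2-3: project the data and run plain KNN in the transformed space.
nca_knn = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=2, random_state=0),
    KNeighborsClassifier(n_neighbors=3),
)
nca_knn.fit(X_train, y_train)
accuracy = nca_knn.score(X_test, y_test)
```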
Procedure of NCA (2/2)
●   Functions used for optimization
Experiments – Datasets (1/2)
●   4 from UCI ML Repository, 2 self-made
Experiments – Datasets (2/2)




n2d is a mixture of two bivariate normal distributions with different means and
covariance matrices. ring consists of 2-d concentric rings and 8 dimensions of
uniform random noise.
Experiments – Results (1/4)




Error rates of KNN and NCA with the same K.
Generally, NCA improves the performance of KNN.
Experiments – Results (2/4)
Experiments – Results (3/4)
●   Comparison with
    other classifiers
Experiments – Results (4/4)
●   Rank 2 dimension reduction
Discussions (1/8)
●   Rank 2 transformation for wine
Discussions (2/8)
●   Rank 1 transformation for n2d
Discussions (3/8)
●   Results of Goldberger et al.
    (40 realizations of 30%/70% splits)
Discussions (4/8)

●   Results of Goldberger et al.
    (rank 2 transformation)
Discussions (5/8)
●   The experimental results suggest that KNN
    classification can be improved with the distance
    metric learned by the NCA algorithm.

●   NCA also outperforms traditional dimension
    reduction methods for several datasets.
Discussions (6/8)
●   Compared to other classification methods (e.g.
    LDA and QDA), NCA usually does not give the
    best accuracy.

●   Some odd performance on dimension reduction
    suggests that further investigation of the
    optimization algorithm is necessary.
Discussions (7/8)
●   Optimizing over a matrix
●   Can we optimize these functions? (Michael L. Overton)
    –   Globally, no. Related problems are NP-hard
        (Blondel-Tsitsiklis, Nemirovski)
    –   Locally, yes.
         ●   But not by standard methods for nonconvex,
             smooth optimization
         ●   Steepest descent, BFGS or nonlinear conjugate
             gradient will typically jam because of nonsmoothness
Discussions (8/8)
●   Other methods that learn a distance metric from data
     –   Discriminant Common Vectors (DCV)
          ●   Similar to NCA, DCV focuses on optimizing the distance
              metric for a certain objective function

     –   Laplacianfaces (LAP)
          ●   Emphasizes dimension reduction more

J. Liu and S. Chen, Discriminant Common Vectors Versus Neighbourhood Components
Analysis and Laplacianfaces: A comparative study in small sample size problem. Image and
Vision Computing.
Question?
Thank you!
Derive the Objective Function (1/5)
●   From the assumptions, we have:
Derive the Objective Function (2/5)
Derive the Objective Function (3/5)
Derive the Objective Function (4/5)
Derive the Objective Function (5/5)
