The document discusses metric learning for clustering. It motivates metric learning by showing how must-link and cannot-link constraints can help clustering algorithms find better solutions. It explains that metric learning finds a distance metric that respects the pairwise constraints by assigning small distances to similar pairs and larger distances to dissimilar pairs. The document outlines an algorithm called MPCK-means that learns an individual metric for each cluster while allowing different weights for individual constraints.
Distance metric learning is a technique to learn a distance metric from training data to improve the performance of algorithms like classification and clustering. Large Margin Nearest Neighbor (LMNN) is an approach that learns a Mahalanobis distance metric for k-nearest neighbor classification by formulating it as a semidefinite program to minimize a cost function. It aims to bring similar examples closer while pushing dissimilar examples farther apart with a margin of at least 1 unit. Large Margin Component Analysis (LMCA) extends LMNN to high dimensional data by directly optimizing the objective with respect to a non-square dimensionality reduction matrix rather than a square distance metric matrix.
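The Mahalanobis distance at the heart of LMNN can be illustrated with a small sketch. This is not the LMNN solver itself (which solves a semidefinite program), only a hedged example of how a learned matrix M, parameterized as M = Lᵀ L to guarantee positive semidefiniteness, reshapes distances; the matrices and points here are made up for illustration.

```python
import numpy as np

def mahalanobis_sq(x, y, L):
    """Squared Mahalanobis distance under M = L.T @ L.

    Parameterizing M through L keeps M positive semidefinite,
    so the result is always a valid (squared) distance.
    """
    diff = L @ (x - y)
    return float(diff @ diff)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# With L = I this reduces to the ordinary squared Euclidean distance.
print(mahalanobis_sq(x, y, np.eye(2)))      # 2.0

# A non-identity L rescales directions: stretching the first axis
# pulls apart pairs that differ along it, which is the kind of
# reshaping a margin-based objective like LMNN's drives toward.
L = np.diag([2.0, 1.0])
print(mahalanobis_sq(x, y, L))              # 4.0 + 1.0 = 5.0
```

LMNN optimizes over the full matrix M (or, in LMCA, over a rectangular L that simultaneously reduces dimensionality), but the distance computation it plugs into is exactly this form.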
This document summarizes a 2010 tutorial on metric learning given by Brian Kulis at the University of California, Berkeley. The tutorial introduces metric learning problems and algorithms. It discusses how metric learning can learn feature weights or linear/nonlinear transformations from data to improve distance metrics for tasks like clustering and classification. Key topics covered include Mahalanobis distance metrics, linear and nonlinear metric learning methods, and applications. The tutorial aims to explain both theoretical concepts and practical considerations for metric learning.
Improving neural networks by preventing co-adaptation of feature detectors
[arXiv 2013]
G. E. Hinton, N. Srivastava, A. Krizhevsky,
I. Sutskever and R. R. Salakhutdinov
(University of Toronto)
Junya Saito
If you find any mistakes, please contact me:
junya [at] fugaga.info
Paper Introduction