A short introduction about manifold learning


- 1. Review on Manifold Learning. Phong Vo Dinh, National Institute of Informatics, Hitotsubashi, Chiyoda-ku, Tokyo, Japan. Lab Meeting, 25th Mar 2009.
- 2. Outline: 1. Motivation (Curse of Dimensionality; Do we need feature invariance?; Hypothesis about manifolds agreement). 2. Background. 3. Taxonomy (Distance preservation; Topology preservation). 4. Alignment. 5. Discussion. 6. References.
- 4. Hyper-volume of cubes and spheres. In D-dimensional space, consider a sphere of radius r and its circumscribing cube:
  V_{sphere}(r) = \frac{\pi^{D/2} r^D}{\Gamma(1 + D/2)}, \quad V_{cube}(r) = (2r)^D.
  As D increases, we obtain \lim_{D \to \infty} V_{sphere}(r) / V_{cube}(r) = 0: the volume of the sphere vanishes relative to the cube as dimensionality increases!
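The vanishing volume ratio above is easy to check numerically; a minimal Python sketch under the slide's formulas (the function name `sphere_to_cube_ratio` is ours, not from the slides):

```python
import math

def sphere_to_cube_ratio(D, r=1.0):
    """Volume of the D-ball of radius r divided by the volume of its
    circumscribing cube [-r, r]^D."""
    v_sphere = math.pi ** (D / 2) * r ** D / math.gamma(1 + D / 2)
    v_cube = (2 * r) ** D
    return v_sphere / v_cube

# The ratio collapses toward 0 as the dimension grows.
for D in (2, 5, 10, 20):
    print(f"D={D:2d}  ratio={sphere_to_cube_ratio(D):.3e}")
```

Note that r cancels out of the ratio, as it must: the effect is purely dimensional.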
- 8. Hyper-volume of a thin spherical shell. The relative hyper-volume of a thin spherical shell is
  \frac{V_{sphere}(r) - V_{sphere}(r(1 - \varepsilon))}{V_{sphere}(r)} = \frac{1^D - (1 - \varepsilon)^D}{1^D},
  where \varepsilon is the relative thickness of the shell (\varepsilon \ll 1). As D increases, the ratio tends to 1, meaning that the shell contains almost all of the volume.
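The shell formula can likewise be evaluated directly; a small sketch under the slide's definitions (`shell_fraction` is a name we introduce):

```python
def shell_fraction(D, eps):
    """Fraction of a D-ball's volume lying in its outer shell of relative
    thickness eps: (1^D - (1 - eps)^D) / 1^D."""
    return 1.0 - (1.0 - eps) ** D

# Even a 5%-thick shell holds essentially all the volume once D is large.
for D in (2, 10, 100, 500):
    print(f"D={D:3d}  shell fraction={shell_fraction(D, 0.05):.4f}")
```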
- 9. Diagonal of a hypercube. Consider a hypercube [-1, +1]^D with 2^D corners. A vector from the origin to one of the corners is v = [\pm 1, ..., \pm 1]^T. The angle between such a half-diagonal v and a coordinate axis e_d = [0, ..., 0, 1, 0, ..., 0]^T is computed as
  \cos \theta_D = \frac{v^T e_d}{\|v\| \|e_d\|} = \frac{\pm 1}{\sqrt{D}}.
  As D grows, the half-diagonals become nearly orthogonal to all coordinate axes.
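A quick numerical check of cos θ_D = ±1/√D (the helper name is ours):

```python
import math

def half_diagonal_angle(D):
    """Angle in degrees between a half-diagonal of [-1, 1]^D and a
    coordinate axis: arccos(1 / sqrt(D))."""
    return math.degrees(math.acos(1.0 / math.sqrt(D)))

# The angle drifts from 45 degrees (D=2) toward 90 degrees as D grows.
for D in (2, 10, 100, 10000):
    print(f"D={D:5d}  angle={half_diagonal_angle(D):.2f} deg")
```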
- 13. Example: hypercube in hyperspace. Figure: an intuition about a hypercube, courtesy of Mathematica.
- 15. Feature invariance or distance invariance? A possible approach to introducing invariance into a pattern recognition algorithm is to use transformation-invariant features. However, crucial information may be discarded, and it is difficult to evaluate the impact of feature extraction on the classification error. Alignment and classification can be seen as two sides of the same coin: the appropriate distance for classification is the one that maximizes alignment. Much effort has therefore concentrated on seeking invariance through the computation of appropriate distance measures in the pattern space.
- 19. Example: classification under the alignment viewpoint. Figure: a dense space of images, courtesy of [6].
- 21. A view of visual perception.
- 22. Manifolds in visual perception. The retinal image is a collection of signals from photoreceptor cells. Those photoreceptors construct an abstract image space. Different appearances of an identity are expected to lie on a low-dimensional manifold. How does the brain represent image manifolds?
- 26. Manifolds in visual perception. Neurophysiologists found that the firing rate of each neuron can be expressed as a smooth function of several variables: the angular position of the eye, the direction of the head, and so on. This implies that the neural population activity is constrained to lie on a low-dimensional manifold. What is the connection between neural manifolds and image manifolds? The question remains to be resolved!
- 30. Topology and spaces. Topology studies the properties of objects that are preserved through deformations, twistings, and stretchings. The knowledge of an object does not depend on how it is presented, or embedded, in space. Topology is used to abstract the intrinsic connectivity of objects while ignoring their detailed form. A and B are called homeomorphic (topologically isomorphic) if there exists a topological structure-preserving map between them. Example: a circle is topologically equivalent to an ellipse, and a glass is equivalent to a torus!
- 35. Example: a glass is equivalent to a torus
- 36. Manifold intuition. Intuitively, a manifold is a generalization of curves and surfaces to arbitrary dimension, or...
- 37. What is a manifold? How do we make sense of "locally similar" to a Euclidean space? Definitions:
  A map \varphi : U \to R^m defined on an open region U \subseteq R^n, n \le m, is said to be a parameterization if (i) \varphi is a smooth (i.e., infinitely differentiable), one-to-one mapping. This simply says that V = \varphi(U) is produced by bending and stretching the region U in a gentle, elastic manner, disallowing self-intersections.
  A topological space M is locally Euclidean of dimension n if every point p in M has a neighborhood U such that there is a homeomorphism \varphi from U onto an open subset of R^n [12].
  A (topological) manifold M is a topological space that is locally Euclidean.
- 39. Embedding. An embedding is a representation of a topological object in a certain space in such a way that its topological properties are preserved. Usually, a P-manifold has dimension P \ll D, where R^D is the embedding space.
- 41. Dimensionality reduction with manifolds. Re-embedding a manifold from a high-dimensional space into a lower-dimensional one. In practice, the underlying manifold is completely unknown except for a limited set of noisy data points!
- 43. Example: visualizing a face image space in a 2D space.
- 44. Example: unfolding the Swiss roll. Figure: the problem of nonlinear dimensionality reduction for three-dimensional data (B) sampled from a two-dimensional manifold (A). An unsupervised learning algorithm must discover the global internal coordinates of the manifold without signals that explicitly indicate how the data should be embedded in two dimensions. The color coding illustrates the neighborhood-preserving mapping discovered by LLE [7]; black outlines in (B) and (C) show the neighborhood of a single point.
- 45. Example: linear vs. nonlinear dimensionality reduction. Figure: Locally Linear Embedding (LLE) is an algorithm for nonlinear dimensionality reduction using manifolds. Here we present the results of PCA (left) and LLE (right), applied to images of a single face translated across a two-dimensional background of noise. Note how LLE maps the images with corner faces to the corners of its two-dimensional embedding, while PCA fails to preserve the neighborhood structure of nearby images. Courtesy of [5].
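To make the LLE comparison concrete, here is a compact, unoptimized sketch of the LLE algorithm as described by Roweis and Saul [7]; the names, the regularization constant, and the toy helix data are our own choices, not from the slides:

```python
import numpy as np

def lle(X, d=2, k=5, reg=1e-3):
    """Locally Linear Embedding: reconstruct each point from its k nearest
    neighbors, then find low-dimensional coordinates that preserve the
    same reconstruction weights."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]          # k nearest neighbors of x_i
        Z = X[nbrs] - X[i]                        # neighbors centered on x_i
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)        # regularize the local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                  # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)                # ascending eigenvalues
    return vecs[:, 1:d + 1]                       # skip the constant eigenvector

# Toy data: a 1-D helix embedded in R^3.
t = np.linspace(0.0, 3.0, 30)
X = np.c_[np.cos(2 * t), np.sin(2 * t), t]
Y = lle(X, d=2, k=5)
```

The embedding is the set of bottom eigenvectors of (I − W)ᵀ(I − W), discarding the trivial constant one.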
- 47. Example: 2-manifold and geodesic distance. Figure: a sphere can be represented by a collection of two-dimensional maps; therefore a sphere is a two-dimensional manifold.
- 48. Introduction. In the linear case, minimizing the reconstruction error, combined with a basic linear model, leads to robust methods (e.g., PCA). In the nonlinear case, more complex data models are required. The motivation behind distance preservation: any manifold can be fully described by pairwise distances. The goal: build a low-dimensional representation in such a way that the initial distances are reproduced. Variants: spatial distance, geodesic distance, and other distances (kernel PCA, semidefinite programming).
- 53. Spatial distance. Compute the distance separating two points of the space, without regard to any other information, e.g., the presence of a submanifold. Methods: multidimensional scaling [5], Sammon's nonlinear mapping [5], curvilinear component analysis [5].
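Of the methods listed, classical metric MDS is the simplest to sketch: double-center the squared distance matrix and eigendecompose it. A minimal numpy version (the function name is ours):

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical (Torgerson) MDS: recover coordinates whose Euclidean
    distances reproduce the given (n, n) distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:d]              # keep the d largest
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# Three collinear points: MDS recovers their spacing up to sign and shift.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, d=1)
```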
- 55. Graph distance. Attempts to overcome some shortcomings of spatial metrics such as the Euclidean distance by measuring the distance along the manifold and not through the embedding space. The distance along a manifold is called the geodesic distance. Geodesic distance is hard to compute: only some (noisy) points on M are available, and the input space is not continuous. Solution: discretize the arc length into paths on a graph. Methods: Isomap [11], geodesic NLM [5], curvilinear distance analysis [5].
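The discretization idea (arc length approximated by shortest paths on a neighborhood graph) can be sketched in a few lines. The function name and the half-circle toy data are ours, and a real implementation such as Isomap [11] would use a faster shortest-path algorithm than Floyd-Warshall:

```python
import numpy as np

def graph_geodesics(X, k=2):
    """Approximate geodesic distances: build a k-nearest-neighbor graph
    weighted by Euclidean edge length, then run Floyd-Warshall."""
    n = len(X)
    E = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(E[i])[1:k + 1]:    # connect i to its k nearest neighbors
            G[i, j] = G[j, i] = E[i, j]
    for m in range(n):                         # Floyd-Warshall shortest paths
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

# Points on a half-circle: the geodesic between the endpoints follows the
# arc (length near pi), while their straight-line distance is only 2.
t = np.linspace(0.0, np.pi, 20)
X = np.c_[np.cos(t), np.sin(t)]
G = graph_geodesics(X, k=2)
```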
- 57. Example: graph distance in Isomap. Figure: (A) For two arbitrary points (circled) on a nonlinear manifold, their Euclidean distance in the high-dimensional input space (length of dashed line) may not accurately reflect their intrinsic similarity, as measured by geodesic distance along the low-dimensional manifold (length of solid curve). (B) The neighborhood graph G constructed in step one of Isomap allows an approximation (red segments) to the true geodesic path to be computed efficiently in step two, as the shortest path in G. (C) The two-dimensional embedding recovered by Isomap in step three, which best preserves the shortest-path distances in the neighborhood graph (overlaid). Straight lines in the embedding (blue) now represent simpler and cleaner approximations to the true geodesic paths than do the corresponding graph paths (red).
- 58. Other distances: kernel PCA. Closely related to classical metric MDS, KPCA extends the algebraic properties of MDS to nonlinear manifolds without regard to their geometrical meaning. The idea is to linearize the underlying manifold M via \varphi : M \subset R^D \to R^Q, y \mapsto z = \varphi(y), in which Q is a very high (possibly infinite) dimension. KPCA assumes \varphi can map the data to a linear subspace of the Q-dimensional space (Q \gg D). Surprisingly, KPCA increases the data dimensionality first! It shares advantages with PCA and MDS, but choosing an appropriate kernel is difficult, and the method is not motivated by geometrical arguments.
- 59. Motivation Background Taxonomy Distance preservation Alignment Topology preservation Discussion References Other distance: Kernel PCA Closely related to classical metric MDS KPCA extends the algebraical properties of MDS to nonlinear manifolds without regards to their geometrical meaning The idea is to linearize the underlying manifold M φ : M ⊂ RD → RQ , y −→ z = φ (y) in which Q is very high (innitie) dimension. KPCA assumes φ can map data to linear subspace of the Q-dimensional space (Q D) Suprisingly, KPCA increase the data dimensionality rst! Share advantages with PCA and MDS Diculty in choosing appropriate kernel Not motivated by geometrical arguments Phong. Vo Dinh Review on Manifold learning
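The "increase the dimensionality first" idea on the Kernel PCA slide can be sketched in a few lines of NumPy: an RBF kernel matrix plays the role of inner products in the (implicit, very high-dimensional) feature space, and PCA is then done there via an eigendecomposition of the centered kernel. The RBF kernel, the two-circles toy data, and the function name are illustrative assumptions, not from the slides.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=10.0):
    """Minimal RBF-kernel PCA sketch (hypothetical helper, not a reference implementation)."""
    # Pairwise squared Euclidean distances, then the RBF kernel matrix.
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * D2)
    # Double-center K: PCA in feature space requires centered features.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenvectors of the centered kernel give the embedding.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Two concentric circles: linearly inseparable in R^2, separable after KPCA.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 0.3], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.02 * rng.standard_normal((200, 2))

Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (200, 2)
```

Note that the kernel matrix is n × n, so the method effectively works in a space whose dimension is bounded only by the number of samples, which is the "surprising" dimensionality increase the slide mentions.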
- 63. Motivation Background Taxonomy Distance preservation Alignment Topology preservation Discussion References Outline 1 Motivation Curse of Dimensionality Do we need feature invariance? Hypothesis about manifolds agreement 2 Background 3 Taxonomy Distance preservation Topology preservation 4 Alignment 5 Discussion 6 References Phong. Vo Dinh Review on Manifold learning
- 64. Motivation Background Taxonomy Distance preservation Alignment Topology preservation Discussion References Introduction Distance gives more information than is necessary Comparative information between distances, like inequalities or ranks, suffices to characterize a manifold, for any embedding Most distance functions make no distinction between the manifold and the surrounding empty space Topology considers only the inside of the manifold Difficult to characterize because only a limited number of data points is available Most methods work with a discrete mapping model (lattice) Models: Predefined lattice Data-driven lattice Phong. Vo Dinh Review on Manifold learning
- 67. Motivation Background Taxonomy Distance preservation Alignment Topology preservation Discussion References Predefined lattice The lattice is fixed in advance It cannot change after the dimensionality reduction has begun The lattice is a rectangular or hexagonal grid made of regularly spaced points Very few manifolds fit such a simple shape in practice Methods: Self-Organizing Maps[5] Generative Topographic Mapping[5] Phong. Vo Dinh Review on Manifold learning
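A minimal self-organizing map makes the predefined-lattice idea concrete: the rectangular grid below is fixed before training, and only the prototype vectors attached to the grid nodes move in data space. This is a hedged NumPy sketch under assumed hyperparameters (grid size, decay schedules, function name), not the exact SOM variant described in [5].

```python
import numpy as np

def train_som(X, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal rectangular-grid SOM sketch (illustrative, not a reference implementation)."""
    rng = np.random.default_rng(seed)
    H, W = grid
    # Lattice coordinates are fixed in advance: the 'predefined lattice'.
    coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"),
                      axis=-1).reshape(-1, 2).astype(float)
    weights = rng.standard_normal((H * W, X.shape[1])) * 0.1
    for it in range(n_iter):
        x = X[rng.integers(len(X))]
        # Best-matching unit: the prototype nearest to x in data space.
        bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))
        # Learning rate and neighborhood radius shrink over time.
        frac = it / n_iter
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood measured on the lattice, not in data space.
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights.reshape(H, W, -1)

# Fit a 10x10 map to points drawn uniformly from a 2-D square.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
W = train_som(X)
print(W.shape)  # (10, 10, 2)
```

Because neighborhoods are computed on the grid rather than in data space, the grid's rigid shape is exactly why, as the slide says, few real manifolds fit it well.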
- 69. Motivation Background Taxonomy Distance preservation Alignment Topology preservation Discussion References Data-driven lattice Makes no assumption about the shape and topology of the embedding Adapts to the data set in order to capture the manifold shape Methods: Locally linear embedding[8, 7] Laplacian eigenmaps[1, 2] Phong. Vo Dinh Review on Manifold learning
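The data-driven flavor can be illustrated with a bare-bones Locally Linear Embedding: the neighborhood graph below is built from the data itself rather than fixed in advance. This is a sketch of the two-step method of [8, 7] under assumed settings (neighbor count, regularizer, toy curve); it is not a reference implementation.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding sketch (after Roweis & Saul)."""
    n = len(X)
    # k nearest neighbors by Euclidean distance (excluding the point itself).
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :n_neighbors]
    # Step 1: weights that best reconstruct each point from its
    # neighbors, with a sum-to-one constraint.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                       # centered neighborhood
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize the Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # Step 2: embedding = bottom eigenvectors of M = (I-W)^T (I-W),
    # skipping the trivial constant eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Unroll a noisy spiral strip embedded in 3-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 300))
X = np.c_[t * np.cos(t), t * np.sin(t), rng.uniform(0, 1, 300)]
Y = lle(X, n_neighbors=12, n_components=2)
print(Y.shape)  # (300, 2)
```

The only global structure used is the data-dependent neighbor graph, which is what lets the embedding adapt to the manifold's actual shape.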
- 71. Motivation Background Taxonomy Alignment Discussion References Distance between manifolds Recognition can also be conducted with a set of query images rather than a single query image Reformulated as matching a query image set against all the gallery image sets representing a subject This problem can be converted to the problem of matching different manifolds Need a good definition of distance between manifolds, which are nonlinear spaces To date, few works are devoted to this problem: [14, 13, 4, 3] Phong. Vo Dinh Review on Manifold learning
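One concrete (linear) building block for a manifold-to-manifold distance compares the principal angles between subspaces fitted to two image sets, in the spirit of the subspace-based components used by work such as [14]. The sketch below is an illustrative assumption, not the exact formulation of any cited paper: both the subspace dimension and the angle-based distance are hypothetical choices.

```python
import numpy as np

def subspace_distance(A, B, k=3):
    """Distance between the top-k principal subspaces of two point sets
    via principal angles (illustrative sketch, not a cited formulation)."""
    def basis(X):
        # Orthonormal basis of the top-k principal directions of X.
        Xc = X - X.mean(axis=0)
        U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
        return U[:, :k]
    Qa, Qb = basis(A), basis(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cos = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), 0.0, 1.0)
    angles = np.arccos(cos)
    return float(np.sqrt(np.sum(angles ** 2)))  # zero iff subspaces coincide

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
B = rng.standard_normal((50, 10))
print(round(subspace_distance(A, A), 6))  # 0.0
```

Fitting such subspaces locally (one per cluster of each image set) and aggregating the pairwise subspace distances is one way the set-matching problem on the slide can be made computable.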
- 73. Motivation Background Taxonomy Alignment Discussion References Applicability Manifold-based nonlinear dimensionality reduction (NLDR) has been applied in: Face recognition Gesture recognition Handwriting recognition Human action recognition[10] Characteristics of current manifold-based NLDRs: Prefer medium or large scale databases Data instances should be quite similar in appearance (e.g. faces, hands, handwriting) Small image size (e.g. 200×200) Manually chosen manifold dimension Restricted in adapting to new data points (i.e. offline or batch mode) Phong. Vo Dinh Review on Manifold learning
- 75. Motivation Background Taxonomy Alignment Discussion References Discussion Challenges/Opportunities: Nobody has done it before! Event images/event videos are highly varied in appearance Poorly defined distance measures for event manifolds Schedule: First test on event images with a single actor (KTH dataset, Weizmann dataset, IXMAS dataset) Then test on event images with cluttered backgrounds (movies, ...) Test different kinds of manifold-manifold distance Propose a way to decrease the variation in event images/event videos Phong. Vo Dinh Review on Manifold learning
- 76. Motivation Background Taxonomy Alignment Discussion References Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, pages 585–591, 2001. Mikhail Belkin and Partha Niyogi. Convergence of Laplacian eigenmaps. In NIPS, pages 129–136, 2006. Andrew W. Fitzgibbon and Andrew Zisserman. Joint manifold distance: a new approach to appearance based clustering. Computer Vision and Pattern Recognition, IEEE Computer Society Conference on, 1:26, 2003. Effrosyni Kokiopoulou and Pascal Frossard. Phong. Vo Dinh Review on Manifold learning
- 77. Motivation Background Taxonomy Alignment Discussion References Minimum distance between pattern transformation manifolds: Algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 99(1), 2008. John A. Lee and Michel Verleysen. Nonlinear Dimensionality Reduction. Springer, 2007. C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman. SIFT flow: Dense correspondence across different scenes. In ECCV, pages III: 28–42, 2008. S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000. Lawrence K. Saul and Sam T. Roweis. Phong. Vo Dinh Review on Manifold learning
- 78. Motivation Background Taxonomy Alignment Discussion References Think globally, fit locally: Unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research, 4:119–155, 2003. H. Sebastian Seung and Daniel D. Lee. The manifold ways of perception. Science, 290(5500):2268–2269, 2000. Richard Souvenir and Justin Babbs. Learning the viewpoint manifold for action recognition. In CVPR, 2008. J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000. Loring W. Tu. Phong. Vo Dinh Review on Manifold learning
- 79. Motivation Background Taxonomy Alignment Discussion References An Introduction to Manifolds. Springer, 2008. Nuno Vasconcelos and Andrew Lippman. A multiresolution manifold distance for invariant image similarity. IEEE Transactions on Multimedia, 7(1):127–142, 2005. R.P. Wang, S.G. Shan, X.L. Chen, and W. Gao. Manifold-manifold distance with application to face recognition based on image set. In CVPR08, pages 1–8, 2008. Phong. Vo Dinh Review on Manifold learning
