Representative Previous Work

1. Representative Previous Work
PCA
LDA
ISOMAP: Geodesic Distance Preserving (J. Tenenbaum et al., 2000)
LLE: Local Neighborhood Relationship Preserving (S. Roweis & L. Saul, 2000)
LE/LPP: Local Similarity Preserving (M. Belkin, P. Niyogi et al., 2001, 2003)
2. Hundreds of Dimensionality Reduction Algorithms
Statistics-based: PCA/KPCA, LDA/KDA, ...
Geometry-based: ISOMAP, LLE, LE/LPP, ...
Data representations: matrix, tensor
Is there a common perspective from which to understand and explain these dimensionality reduction algorithms, or a unified formulation that they all share?
Is there a general tool to guide the development of new dimensionality reduction algorithms?
3. Our Answers
Type                     Example Algorithms
Direct Graph Embedding   Original PCA & LDA, ISOMAP, LLE, Laplacian Eigenmap
Linearization            PCA, LDA, LPP
Kernelization            KPCA, KDA
Tensorization            CSA, DATER
S. Yan, D. Xu, H. Zhang, et al., CVPR 2005; T-PAMI 2007
4. Direct Graph Embedding
Intrinsic graph: similarity matrix S (graph edges) encodes similarity in the high-dimensional space.
Penalty graph: similarity matrix S^P encodes relationships to be penalized.
L, B: Laplacian matrices derived from S and S^P.
The data are given in a high-dimensional space; the low-dimensional space is assumed to be 1D here.
5. Direct Graph Embedding -- Continued
Same notation as the previous slide: S, S^P are the similarity matrices; L, B are their Laplacian matrices.
Criterion to preserve graph similarity: samples that are strongly connected in the intrinsic graph stay close in the low-dimensional space, normalized by the penalty graph.
Special case: B is the identity matrix (scale normalization).
Problem: it cannot handle new test data (no out-of-sample mapping).
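The criterion itself appeared as an image in the original deck; a reconstruction from the cited T-PAMI 2007 paper, where d is a constant and L is the graph Laplacian of S:

```latex
\mathbf{y}^{*}
  = \arg\min_{\mathbf{y}^{\top} B \mathbf{y} = d}
      \sum_{i \neq j} \lVert y_i - y_j \rVert^{2} S_{ij}
  = \arg\min_{\mathbf{y}^{\top} B \mathbf{y} = d}
      \mathbf{y}^{\top} L \mathbf{y},
\qquad L = D - S,\quad D_{ii} = \sum_{j \neq i} S_{ij}
```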
6. Linearization
Assume a linear mapping function from the input to the embedding: y = X^T w (intrinsic and penalty graphs as before).
Problem: a linear mapping function may not be enough to preserve the real nonlinear structure.
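Substituting the linear map into the graph-preserving criterion above gives the linearized objective (again reconstructed from the cited paper, with X the matrix whose columns are the samples):

```latex
\mathbf{w}^{*}
  = \arg\min_{\mathbf{w}^{\top} X B X^{\top} \mathbf{w} = d}
      \mathbf{w}^{\top} X L X^{\top} \mathbf{w}
```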
7. Kernelization
Nonlinear mapping phi: maps the original input space to a higher-dimensional Hilbert space (intrinsic and penalty graphs as before).
The constraint and the objective are both expressed through the kernel matrix.
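Writing the projection as an expansion over the mapped samples, w = Σ_i α_i φ(x_i), yields the kernelized objective (reconstruction from the cited paper):

```latex
\boldsymbol{\alpha}^{*}
  = \arg\min_{\boldsymbol{\alpha}^{\top} K B K \boldsymbol{\alpha} = d}
      \boldsymbol{\alpha}^{\top} K L K \boldsymbol{\alpha},
\qquad K_{ij} = \phi(x_i) \cdot \phi(x_j)
```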
8. Tensorization
Each sample is kept as a tensor X_i, and the low-dimensional representation is obtained by projecting it along every mode (intrinsic and penalty graphs as before).
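A sketch of the tensorized objective from the cited paper, with ×_k denoting the mode-k product and the projections w^1, ..., w^n optimized alternately:

```latex
(\mathbf{w}^{1}, \ldots, \mathbf{w}^{n})^{*}
  = \arg\min \sum_{i \neq j}
      \Big\lVert X_i \prod_{k} \times_k \mathbf{w}^{k}
               - X_j \prod_{k} \times_k \mathbf{w}^{k} \Big\rVert^{2} S_{ij}
\quad \text{s.t.} \quad
  \sum_{i \neq j}
      \Big\lVert X_i \prod_{k} \times_k \mathbf{w}^{k}
               - X_j \prod_{k} \times_k \mathbf{w}^{k} \Big\rVert^{2} S^{P}_{ij} = d
```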
9. Common Formulation
S, S^P: similarity matrices of the intrinsic and penalty graphs.
L, B: Laplacian matrices derived from S and S^P.
Direct graph embedding, linearization, kernelization, and tensorization all minimize the same graph-preserving criterion; they differ only in the space in which the embedding function is sought.
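In the direct and linearized cases the criterion reduces to a generalized eigenvalue problem. A minimal numerical sketch (NumPy/SciPy); the helper names and the assumption that B is positive definite are mine, not the slides':

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(S):
    """Graph Laplacian L = D - S, with D_ii = sum_j S_ij."""
    return np.diag(S.sum(axis=1)) - S

def direct_embedding(S, Sp, dim):
    """y* = argmin_{y^T B y = const} y^T L y, solved as the generalized
    eigenproblem L v = lambda B v (B assumed positive definite here)."""
    L, B = laplacian(S), laplacian(Sp)
    vals, vecs = eigh(L, B)          # eigenvalues in ascending order
    return vecs[:, :dim]             # n_samples x dim embedding

def linear_embedding(X, S, Sp, dim):
    """Linearization: w* = argmin_{w^T XBX^T w = const} w^T XLX^T w,
    for data X with one sample per column (d x n_samples)."""
    L, B = laplacian(S), laplacian(Sp)
    vals, W = eigh(X @ L @ X.T, X @ B @ X.T)
    return W[:, :dim]                # projection matrix; embed via y = W^T x
```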
10. A General Framework for Dimensionality Reduction
D: Direct Graph Embedding
L: Linearization
K: Kernelization
T: Tensorization
11. New Dimensionality Reduction Algorithm: Marginal Fisher Analysis
Important information for face recognition:
1) Label information
2) Local manifold structure (neighborhood or margin)
Intrinsic graph: S_ij = 1 if x_i is among the k1-nearest neighbors of x_j in the same class; 0 otherwise.
Penalty graph: S^P_ij = 1 if the pair (i, j) is among the k2 shortest between-class pairs in the data set; 0 otherwise.
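A sketch of these two graph constructions, which then feed the linearized criterion above; the brute-force distance computation and pair-ranking details are my assumptions:

```python
import numpy as np

def mfa_graphs(X, y, k1, k2):
    """Intrinsic graph S: connect each x_j to its k1 nearest same-class
    neighbors.  Penalty graph Sp: connect the k2 shortest between-class
    pairs.  X is n_samples x d; y is an array of class labels."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    S, Sp = np.zeros((n, n)), np.zeros((n, n))
    for j in range(n):
        same = np.flatnonzero((y == y[j]) & (np.arange(n) != j))
        for i in same[np.argsort(D2[same, j])[:k1]]:       # k1 same-class NNs
            S[i, j] = S[j, i] = 1.0
    pairs = np.argwhere(y[:, None] != y[None, :])          # between-class pairs
    ranked = pairs[np.argsort(D2[pairs[:, 0], pairs[:, 1]])]
    for i, j in ranked[:2 * k2]:                           # each pair listed twice
        Sp[i, j] = Sp[j, i] = 1.0
    return S, Sp
```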
12. Marginal Fisher Analysis: Advantage
No Gaussian distribution assumption.
13. Experiments: Face Recognition
14. Summary
An optimization framework that unifies previous dimensionality reduction algorithms as special cases.
A new dimensionality reduction algorithm: Marginal Fisher Analysis.
15. Event Recognition in News Video
- Online and offline video search
- 56 events are defined in LSCOM
Example events: Airplane Flying, Riot, Exiting Car.
Challenges: geometric and photometric variance, cluttered backgrounds, complex camera and object motion. News video is far more diverse!
16. Earth Mover's Distance in the Temporal Domain (T-MM, under review)
[Figure: key frames of two video clips from the class "riot".]
EMD can efficiently utilize the information from multiple frames.
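The paper is under review, so its exact formulation is not shown in the deck; a plausible sketch using the POT optimal-transport library, where the uniform frame weights and Euclidean ground distance are assumptions:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def clip_emd(F1, F2):
    """EMD between two clips, each represented by its key-frame
    features (n_frames x d).  Frames get uniform weights; the ground
    distance is Euclidean between frame features."""
    a = np.full(len(F1), 1.0 / len(F1))
    b = np.full(len(F2), 1.0 / len(F2))
    M = ot.dist(F1, F2, metric='euclidean')  # pairwise ground distances
    return ot.emd2(a, b, M)                  # minimal transport cost
```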
17. Multi-level Pyramid Matching (CVPR 2007, under review)
- One clip = several subclips (stages of event evolution).
- There is no prior knowledge about the number of stages in an event, and videos of the same event may include only a subset of the stages.
[Figure: clips decomposed into "smoke" and "fire" stages at Level-0 and Level-1.]
Solution: multi-level pyramid matching in the temporal domain (a sketch follows below).
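One plausible reading of the temporal pyramid; the split scheme, averaging, similarity, and level weights below are my assumptions, not the paper's:

```python
import numpy as np

def temporal_pyramid(F, levels=2):
    """Represent a clip (frame features, n_frames x d) at several
    temporal resolutions: level l splits the clip into 2^l equal
    sub-clips and averages the features within each sub-clip."""
    return [np.array([c.mean(axis=0) for c in np.array_split(F, 2 ** l)])
            for l in range(levels + 1)]

def pyramid_similarity(P1, P2):
    """Match two pyramids level by level; finer levels get larger
    weights, echoing spatial pyramid matching."""
    sims = [float(np.exp(-np.linalg.norm(a - b, axis=1)).mean())
            for a, b in zip(P1, P2)]
    weights = [2.0 ** (l - len(sims)) for l in range(len(sims))]
    return sum(w * s for w, s in zip(weights, sims))
```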
18. Other Publications & Professional Activities
Other publications:
- Kernel-based learning:
  Coupled Kernel-based Subspace Analysis: CVPR 2005
  Fisher+Kernel Criterion for Discriminant Analysis: CVPR 2005
- Manifold learning:
  Nonlinear Discriminant Analysis on Embedded Manifold: T-CSVT (accepted)
- Face verification:
  Face Verification with Balanced Thresholds: T-IP (accepted)
- Multimedia:
  Insignificant Shadow Detection for Video Segmentation: T-CSVT 2005
  Anchorperson Extraction for Picture-in-Picture News Video: PRL 2005
Guest editor:
- Special issue on Video Analysis, Computer Vision and Image Understanding
- Special issue on Video-based Object and Event Analysis, Pattern Recognition Letters
Book editor:
- Semantic Mining Technologies for Multimedia Databases (Publisher: Idea Group Inc., www.idea-group.com)
19. Future Work
Fields: Machine Learning, Computer Vision, Pattern Recognition, Multimedia
Applications: Event Recognition, Biometrics, Web Search, Multimedia Content Analysis
20. Acknowledgements
Shuicheng Yan (UIUC), Steve Lin (Microsoft), Lei Zhang (Microsoft), Hong-Jiang Zhang (Microsoft), Shih-Fu Chang (Columbia), Xuelong Li (UK), Xiaoou Tang (Hong Kong), Zhengkai Liu (USTC)
21. Thank you very much!
22. What Are Gabor Features?
Gabor features can improve recognition performance in comparison to grayscale features (Chengjun Liu, T-IP, 2002).
Input: a grayscale image.
Gabor wavelet kernels: five scales x eight orientations.
Output: 40 Gabor-filtered images.
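A minimal sketch of such a 5-scale x 8-orientation filter bank with OpenCV; the specific wavelength, sigma, and gamma values are illustrative assumptions, not the cited paper's parameters:

```python
import cv2
import numpy as np

def gabor_bank(ksize=31):
    """40 Gabor kernels: 5 scales x 8 orientations, as on the slide."""
    kernels = []
    for scale in range(5):
        lambd = 4.0 * (2 ** (scale / 2.0))   # wavelength grows with scale
        for o in range(8):
            theta = o * np.pi / 8            # 8 orientations in [0, pi)
            # args: (kernel size, sigma, theta, lambda, gamma, psi)
            k = cv2.getGaborKernel((ksize, ksize), lambd / 2, theta,
                                   lambd, 0.5, 0)
            kernels.append(k)
    return kernels

def gabor_features(gray):
    """Filter a grayscale image with all 40 kernels -> 40 response images."""
    return [cv2.filter2D(gray, cv2.CV_32F, k) for k in gabor_bank()]
```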
23. How to Utilize More Correlations?
Pixel rearrangement: columns of highly correlated pixels are rearranged into sets of highly correlated pixels.
Implicit assumption in previous tensor-based subspace learning:
intra-tensor correlations, i.e., correlations among the features within certain tensor dimensions, such as rows, columns, and Gabor features.
24. Tensor Representation: Advantages
1. Enhanced learnability
2. Appreciable reductions in computational cost
3. A large number of available projection directions
4. Exploits structural information
25. Connection to Previous Work: Tensorface (M. Vasilescu and D. Terzopoulos, 2002)
From an algorithmic or mathematical point of view, CSA and Tensorface are both variants of the rank-(R1, R2, ..., Rn) decomposition.