1. Confident Kernel Sparse Coding and Dictionary Learning
Babak Hosseini
Prof. Dr. Barbara Hammer
Singapore, 20 Nov. 2018
bhosseini@techfak.uni-bielefeld.de
Cognitive Interaction Technology Centre of Excellence (CITEC)
Bielefeld University, Germany
4. Introduction
• Dictionary learning and sparse coding
• X: Input signals
• U: Dictionary matrix
• Γ: Sparse codes
• Reconstructing X ≈ UΓ, where each signal uses only a few atoms (columns) of U
9. Introduction
• Discriminative dictionary learning (DDL)
• Classification setting with a label matrix L (one label column per signal)
• Goal:
• Learn a dictionary U that reconstructs X via Γ
• Learn a mapping through Γ from the signals X to the labels L
10. Introduction
• Discriminative dictionary learning (DDL)
• Goal:
• Learn a dictionary U that reconstructs X via Γ
• Learn a mapping through Γ from the signals X to the labels L
• A discriminative term in the training objective ensures the discriminative mapping for the training data
• This term has access to the label information L
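A common way to instantiate a DDL objective of this kind adds a label-fitting term W·Γ ≈ L to the reconstruction loss. The sketch below is a generic, hedged illustration of that idea, not necessarily the paper's exact formulation; all names are illustrative:

```python
# Generic DDL objective sketch (illustrative): reconstruction loss plus a
# discriminative term that regresses the label matrix L from the codes Gamma.
import numpy as np

def ddl_objective(X, L, U, W, Gamma, lam=1.0):
    recon = np.linalg.norm(X - U @ Gamma, 'fro') ** 2  # data fidelity
    discr = np.linalg.norm(L - W @ Gamma, 'fro') ** 2  # label fidelity (uses L)
    return recon + lam * discr

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 10))                 # 10 training signals
L = np.eye(2)[:, rng.integers(0, 2, 10)]         # one-hot labels, 2 classes
U = rng.standard_normal((8, 6))                  # 6 dictionary atoms
Gamma = rng.standard_normal((6, 10))             # codes (would be sparse)
W = np.linalg.lstsq(Gamma.T, L.T, rcond=None)[0].T   # least-squares classifier
loss = ddl_objective(X, L, U, W, Gamma)
print(loss >= 0.0)                               # -> True
```

In practice U, W, and Γ are optimized jointly (e.g. by alternating minimization); only the objective evaluation is shown here.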
13. Introduction
• What is the problem?
• The training and test models are not consistent:
• the reconstruction of test data does not follow the discriminative mapping learned at training time.
16. Confident Dictionary Learning
• A new discriminant objective
• Reconstruction model: each x is reconstructed from class-related parts of the dictionary
• A class-contribution vector whose entries show the share of each class in the reconstruction of x
17. Confident Dictionary Learning
• A new discriminant objective
• Reconstruction model: the reconstruction of x decomposes into per-class contributions;
• therefore, these contributions give a mapping to the label space
18. Confident Dictionary Learning
• A new discriminant objective
• Minimizing the discriminant term ensures that each x is reconstructed mostly by its own class
• Flexible term: x can still use other classes (with a minor share)
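The class-share idea can be sketched as below, assuming the dictionary atoms are grouped by class; both the grouping array and the norm-based share measure are assumptions for illustration, and the paper's exact contribution measure may differ:

```python
# Hedged sketch: share of each class in the reconstruction of one signal,
# assuming atoms are tagged with a class via `atom_class`.
import numpy as np

def class_shares(U, gamma, atom_class):
    """Norm of each class's partial reconstruction, normalized to sum to 1."""
    classes = np.unique(atom_class)
    parts = np.array([
        np.linalg.norm(U[:, atom_class == c] @ gamma[atom_class == c])
        for c in classes
    ])
    return parts / parts.sum()

U = np.eye(4)                           # toy dictionary: 4 atoms
atom_class = np.array([0, 0, 1, 1])     # two atoms per class
gamma = np.array([3.0, 0.0, 1.0, 0.0])  # code dominated by class 0
shares = class_shares(U, gamma, atom_class)
print(shares.tolist())                  # -> [0.75, 0.25]
```

Penalizing the off-own-class shares during training, without forcing them to exactly zero, gives the flexible behavior described above.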
20. Confident Dictionary Learning
• Classification of test data:
• z: test data
• Label of z: the class j with the highest contribution in the reconstruction of z
21. Confident Dictionary Learning
• Classification of test data:
• z: test data
• Class j: highest contribution in the reconstruction
• Minimizing the confidence term forces z to be reconstructed using fewer classes
• Flexible: z can still use a small share of other classes (if required)
• Confident toward one class
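The recall rule above, assigning z to the class with the highest contribution, can be sketched as follows; the code γ is assumed to have already been computed for z, and the class grouping is an illustrative assumption as before:

```python
# Hedged sketch of the recall step: label a test signal by the class whose
# atoms contribute most to its reconstruction (gamma is z's precomputed code).
import numpy as np

def classify(U, gamma, atom_class):
    classes = np.unique(atom_class)
    energy = [np.linalg.norm(U[:, atom_class == c] @ gamma[atom_class == c])
              for c in classes]
    return int(classes[int(np.argmax(energy))])

U = np.eye(4)
atom_class = np.array([0, 0, 1, 1])
gamma = np.array([0.1, 0.0, 2.0, 0.5])   # reconstruction dominated by class 1
label = classify(U, gamma, atom_class)
print(label)                             # -> 1
```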
23. Confident Dictionary Learning
• Convexity?
• The objective is not convex as stated.
• β = −(the most negative eigenvalue of V) restores convexity
• β is computed only once, before training
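The eigenvalue recipe on this slide — shift by β equal to minus the most negative eigenvalue of V, so that V + βI is positive semidefinite and the quadratic term convex — can be sketched as follows, with a toy symmetric V:

```python
# Sketch: compute beta = -(most negative eigenvalue of V); V + beta*I is then
# positive semidefinite, so the associated quadratic term is convex.
# Done once, before training.
import numpy as np

def convexity_shift(V):
    lam_min = np.linalg.eigvalsh(V).min()   # eigvalsh: for symmetric matrices
    return max(0.0, -lam_min)               # no shift needed if already PSD

V = np.array([[1.0, 2.0],
              [2.0, 1.0]])                  # eigenvalues: 3 and -1
beta = convexity_shift(V)
print(round(beta, 6))                       # -> 1.0
print(np.linalg.eigvalsh(V + beta * np.eye(2)).min() >= -1e-9)  # -> True
```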
24. Confident Dictionary Learning
• Consistent?
• Recall (testing) now uses the label structure L too.
• The recall objective contains a term similar to the training objective's discriminant term.
• Flexible contributions + the confidence criterion
• → a more consistent training–recall framework
26. Experiments
• Datasets (multi-dimensional time series):
• Cricket Umpire [1]
• Articulatory Words [2]
• Schunk Dexterous [3], [4]
• UTKinect Actions [5]
• DynTex++ [6]
[1] M. H. Ko, G. W. West, S. Venkatesh, and M. Kumar, “Online context recognition in multisensor systems using dynamic time warping,” in ISSNIP’05. IEEE, 2005, pp. 283–288.
[2] J. Wang, A. Samal, and J. Green, “Preliminary test of a real-time, interactive silent speech interface based on electromagnetic articulograph,” in SLPAT’14, 2014, pp. 38–45.
[3] A. Drimus, G. Kootstra, A. Bilberg, and D. Kragic, “Design of a flexible tactile sensor for classification of rigid and deformable objects,” Robotics and Autonomous Systems, vol. 62, no. 1, pp. 3–15, 2014.
[4] M. Madry, L. Bo, D. Kragic, and D. Fox, “St-hmp: Unsupervised spatiotemporal feature learning for tactile data,” in ICRA’14. IEEE, 2014, pp. 2262–2269.
[5] L. Xia, C.-C. Chen, and J. Aggarwal, “View invariant human action recognition using histograms of 3d joints,” in CVPRW’12. IEEE, 2012, pp. 20–27.
[6] B. Ghanem and N. Ahuja, “Maximum margin distance learning for dynamic texture recognition,” in ECCV’10. Springer, 2010.
29. Experiments
• Interpretability of atoms (IP):
• Measured per dictionary atom i
• In the range [1/c, 1] (c = number of classes):
• 1 if atom i belongs to only one class
• 1/c if atom i is related to all classes
30. Experiments
• Interpretability of atoms (IP), in the range [1/c, 1]:
• 1 if atom i belongs to only one class
• 1/c if atom i is related to all classes
• A high IP indicates both good discrimination and good interpretation of the atoms
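One plausible instantiation of the IP score — an assumption for illustration, since the paper's exact definition is not reproduced here — takes, for each atom, the largest normalized share of its usage across the c classes:

```python
# Hedged sketch of a per-atom interpretability score IP in [1/c, 1]:
# 1 when an atom is used by a single class, 1/c when used equally by all
# c classes. The usage matrix here is illustrative.
import numpy as np

def interpretability(usage):
    """usage[j, i] >= 0: how much class j uses atom i; returns IP per atom."""
    shares = usage / usage.sum(axis=0, keepdims=True)
    return shares.max(axis=0)

usage = np.array([[4.0, 1.0],   # atom 0: only class 0 uses it  -> IP = 1
                  [0.0, 1.0],   # atom 1: all 3 classes equally -> IP = 1/3
                  [0.0, 1.0]])
ip = interpretability(usage)
print(ip.tolist())              # -> [1.0, 0.3333333333333333]
```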
31. Conclusion
• Consistency is important for DDL models.
• Proposed flexible discriminant terms for DDL.
• Proposed a more consistent training–recall framework.
• Observed increases in:
• Discriminative performance
• Interpretability of dictionary atoms