Deep Transform Learning
Towards a Machine Learning Framework
Jyoti Maggu
Advisor: Dr. Angshul Majumdar
December 20, 2019
Transform Learning (TL)
The transform $T$ ($k \times m$) maps the original data $X$ ($m \times n$) to a sparse representation $Z$ ($k \times n$):
$TX = Z$
Transform Learning (TL)
The transform learning1 problem can be expressed as
$\min_{T \in \mathbb{R}^{k \times m},\, Z \in \mathbb{R}^{k \times n}} \|TX - Z\|_F^2 + \lambda(\varepsilon\|T\|_F^2 - \log\det T) + \mu\|Z\|_0$
1. S. Ravishankar and Y. Bresler, "Learning Sparsifying Transforms," IEEE Transactions on Signal Processing, 2013.
Transform Learning (TL)
$\min_{T \in \mathbb{R}^{k \times m},\, Z \in \mathbb{R}^{k \times n}} \|TX - Z\|_F^2 + \lambda(\varepsilon\|T\|_F^2 - \log\det T) + \mu\|Z\|_0$
Alternating updates:
$T \leftarrow \min_T \|TX - Z\|_F^2 + \lambda(\varepsilon\|T\|_F^2 - \log\det T)$
$Z \leftarrow \min_Z \|TX - Z\|_F^2 + \mu\|Z\|_0$
Closed-form solutions exist for both sub-problems.
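A minimal NumPy sketch of this alternating scheme, assuming a square transform ($k = m$): the transform update uses the closed form of Ravishankar and Bresler, and the coefficient update is element-wise hard thresholding. Parameter names and defaults are illustrative, not the thesis settings.

```python
import numpy as np

def update_transform(X, Z, lam, eps):
    """Closed-form transform update (Ravishankar & Bresler, 2013):
    minimizes ||TX - Z||_F^2 + lam*(eps*||T||_F^2 - logdet T) for square T."""
    m = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * eps * np.eye(m))
    Linv = np.linalg.inv(L)
    Q, s, Rt = np.linalg.svd(Linv @ X @ Z.T)         # L^{-1} X Z^T = Q S R^T
    gains = (s + np.sqrt(s ** 2 + 2.0 * lam)) / 2.0
    return Rt.T @ np.diag(gains) @ Q.T @ Linv

def update_coefficients(T, X, mu):
    """Hard thresholding: exact minimizer of ||TX - Z||_F^2 + mu*||Z||_0."""
    Z = T @ X
    Z[Z ** 2 < mu] = 0.0
    return Z

def transform_learning(X, lam=0.1, eps=1.0, mu=0.01, iters=30, rng=None):
    """Alternating minimization for basic TL (a sketch, not the authors' code)."""
    rng = np.random.default_rng(rng)
    T = rng.standard_normal((X.shape[0], X.shape[0]))
    for _ in range(iters):
        Z = update_coefficients(T, X, mu)
        T = update_transform(X, Z, lam, eps)
    return T, Z
```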
Idea: use learned transforms to solve machine learning problems.
Outline
● Supervised TL
● Unsupervised DTL
● Supervised DTL
● Deep Transformed Subspace Clustering
● Convolutional TL
● Semi-coupled TL
● Future Work
Supervised Transform Learning
Label Consistent TL (LCTL):
$\min_{T, Z, M} \|TX - Z\|_F^2 + \lambda(\varepsilon\|T\|_F^2 - \log\det T) + \mu\|Z\|_1 + \eta\|Q - MZ\|_F^2$
The last term is the supervision term: it learns a mapping $M$ between the true labels $Q$ and the coefficients $Z$. The mapping $M$ can be linear or non-linear.
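For a linear map, the supervision sub-problem $\min_M \|Q - MZ\|_F^2$ is ordinary least squares. A sketch (the small ridge term is an assumption added for numerical stability, not part of the formulation):

```python
import numpy as np

def update_mapping(Q, Z, ridge=1e-6):
    """Least-squares update of the label-consistency map M in min_M ||Q - M Z||_F^2."""
    k = Z.shape[0]
    return Q @ Z.T @ np.linalg.inv(Z @ Z.T + ridge * np.eye(k))
```

One common use of such a map at test time is to compute $z = Tx$ for a new sample and assign the class with the largest entry of $Mz$.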
Kernel Transform Learning
Kernel TL:
● Dense transform → non-linear transformation $\varphi$ of the noisy data $X$ into a higher-dimensional feature space, $\varphi : \mathbb{R}^N \to F$.
● $\varphi(X)$ is taken as a fixed basis; the transform is expressed as a sparse combination $B$ of basis elements from $\varphi(X)$, i.e. $T = B\varphi(X)^\top$.
● Substituting into $TX = Z$ (applied in feature space) gives $B\varphi(X)^\top \varphi(X) = Z$, i.e. $BK(X, X) = Z$, where $K(X, X) = \varphi(X)^\top \varphi(X)$ is the kernel (Gram) matrix.
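Since the model touches the data only through the Gram matrix, a basic kernel-TL sketch can reuse the transform_learning routine above with $K(X,X)$ in place of $X$; that reuse is an assumption of this sketch, and the polynomial kernel of order 3 matches the experiments reported below.

```python
import numpy as np

def polynomial_kernel(X, degree=3, c=1.0):
    """Gram matrix K(X, X) = (X^T X + c)^degree; columns of X are samples."""
    return (X.T @ X + c) ** degree

# Kernel TL: learn B and Z from the Gram matrix instead of the raw data.
# B, Z = transform_learning(polynomial_kernel(X))
```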
Supervised Kernel Transform Learning
Transform Learning:
$\min_{T, Z} \|TX - Z\|_F^2 + \lambda(\varepsilon\|T\|_F^2 - \log\det T) + \mu\|Z\|_1$
Kernel TL (kernel transform term):
$\min_{B, Z} \|BK(X, X) - Z\|_F^2 + \lambda(\varepsilon\|B\|_F^2 - \log\det B) + \mu\|Z\|_0$
Kernel LCTL (kernel transform term + supervision term):
$\min_{B, Z, M} \|BK(X, X) - Z\|_F^2 + \lambda(\varepsilon\|B\|_F^2 - \log\det B) + \mu\|Z\|_1 + \eta\|Q - MZ\|_F^2$
Supervised Transform Learning: Results
● Classification results on YaleB (38 persons) and AR faces (100 persons)
● Kernel: polynomial, order 3
● Parameter values tuned on the CIFAR-10 validation dataset
● Benchmark comparisons:
  ● Discriminative Bayesian Dictionary Learning (DBDL)2
  ● Multimodal Task-Driven Dictionary Learning (MTDL)3
  ● Discriminative Analysis Dictionary Learning (DADL)4
  ● Sparse Embedded Dictionary Learning (SEDL)5
  ● Non-Linear Dictionary Learning (NDL)6
2. N. Akhtar, F. Shafait and A. Mian, "Discriminative Bayesian Dictionary Learning for Classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
3. S. Bahrampour, N. M. Nasrabadi, A. Ray and W. K. Jenkins, "Multimodal Task-Driven Dictionary Learning for Image Classification," IEEE Transactions on Image Processing, 2016.
4. J. Guo, Y. Guo, X. Kong, M. Zhang and R. He, "Discriminative Analysis Dictionary Learning," AAAI Conference on Artificial Intelligence, 2016.
5. Y. Chen and J. Su, "Sparse embedded dictionary learning on face recognition," Pattern Recognition, 2017.
6. J. Hu and Y.-P. Tan, "Nonlinear dictionary learning with application to image classification," Pattern Recognition (in press).
Classification Results
Classification accuracy (%):
Method   YaleB   AR faces
DBDL     97.2    97.4
MTDL     97.0    97.1
DADL     97.7    98.7
SEDL     96.6    94.2
NDL      91.8    92.1
LCDL     92.7    94.6
LCTL     97.8    98.8
K-LCTL   98.4    99.2
Deep Transform Learning (DTL)
Basic idea: repeat transforms to form a deeper architecture.
To learn N levels of transforms, the model is
$T_N \varphi(\cdots T_2 \varphi(T_1 X)) = Z$
All layers are learned jointly.
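The forward map, as a sketch (the activation $\varphi$ is not fixed by the model; tanh here is only an illustrative choice):

```python
import numpy as np

def deep_tl_forward(X, transforms, phi=np.tanh):
    """Compute T_N phi(... T_2 phi(T_1 X)) for a list of learned transforms."""
    Z = X
    for i, T in enumerate(transforms):
        Z = T @ Z
        if i < len(transforms) - 1:  # activation between layers only
            Z = phi(Z)
    return Z
```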
Jointly Learned Deep Transform Learning
● Formulation for a two-layer network:
$\min_{T_1, T_2, Z} \|T_2 \varphi(T_1 X) - Z\|_F^2 + \lambda \sum_{i=1}^{2} (\mu\|T_i\|_F^2 - \log\det T_i)$
● Variable splitting: $Z_1 = \varphi(T_1 X)$, so that $T_2 Z_1 = Z$:
$\min_{T_1, T_2, Z_1, Z} \|T_2 Z_1 - Z\|_F^2 + \lambda \sum_{i=1}^{2} (\mu\|T_i\|_F^2 - \log\det T_i) + \mu\|T_1 X - \varphi^{-1}(Z_1)\|_F^2$
● All coefficients and transforms are learned in one loop.
Jointly Learned Deep Transform Learning
$\min_{T_1, T_2, Z_1, Z} \|T_2 Z_1 - Z\|_F^2 + \lambda \sum_{i=1}^{2} (\mu\|T_i\|_F^2 - \log\det T_i) + \mu\|T_1 X - \varphi^{-1}(Z_1)\|_F^2$
Alternating sub-problems:
● S1: $T_2 \leftarrow \min_{T_2} \|T_2 Z_1 - Z\|_F^2 + \lambda(\|T_2\|_F^2 - \log\det T_2)$
● S2: $T_1 \leftarrow \min_{T_1} \mu\|T_1 X - \varphi^{-1}(Z_1)\|_F^2 + \lambda(\|T_1\|_F^2 - \log\det T_1)$
● S3: $Z \leftarrow \min_{Z} \|T_2 Z_1 - Z\|_F^2 \Rightarrow Z = T_2 Z_1$
● S4: $Z_1 \leftarrow \min_{Z_1} \|T_2 Z_1 - Z\|_F^2 + \mu\|\varphi(T_1 X) - Z_1\|_F^2$
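A sketch of the S1-S4 loop, reusing update_transform from the TL sketch above. It assumes $\varphi$ is the identity (so $\varphi^{-1}$ is trivial) and folds the relative weight $\mu$ of S2 into the transform update; both are simplifications, not the thesis algorithm.

```python
import numpy as np

def deep_tl_two_layer(X, lam=0.1, eps=1.0, mu=1.0, iters=30, rng=None):
    """Alternating updates S1-S4 for the two-layer joint DTL model (a sketch)."""
    rng = np.random.default_rng(rng)
    m = X.shape[0]
    T1 = rng.standard_normal((m, m))
    T2 = rng.standard_normal((m, m))
    Z1 = T1 @ X
    for _ in range(iters):
        Z = T2 @ Z1                               # S3: Z = T2 Z1
        T2 = update_transform(Z1, Z, lam, eps)    # S1
        T1 = update_transform(X, Z1, lam, eps)    # S2 (phi = identity)
        # S4: ridge-type least squares in Z1
        Z1 = np.linalg.solve(T2.T @ T2 + mu * np.eye(m),
                             T2.T @ Z + mu * (T1 @ X))
    return T1, T2, Z
```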
Classification Results: Joint DTL
Classification accuracy with SVM:
Method              YALEB   AR Faces
CSSAE7              85.21   82.22
CSDBN8              84.97   82.11
DDL9                92.66   93.35
Proposed 1-layer    95.11   94.98
Proposed 2-layers   97.41   95.87
Proposed 3-layers   97.67   96.80
Proposed 4-layers   96.36   96.24
7. A. Sankaran, M. Vatsa, R. Singh, and A. Majumdar, "Group sparse autoencoder," Image and Vision Computing, 2017.
8. A. Sankaran, G. Goswami, M. Vatsa, R. Singh, and A. Majumdar, "Class sparsity signature based restricted Boltzmann machine," Pattern Recognition, 2017.
9. V. Singal and A. Majumdar, "Majorization Minimization Technique for Optimally Solving Deep Dictionary Learning," Neural Processing Letters, doi:10.1007/s11063-017-9603-9, 2017.
Clustering Results
K-Means on YaleB (HOG and DSIFT features):
Method      HOG: NMI  ARI    F-score   DSIFT: NMI  ARI    F-score
SAE10       93.43     82.57  83.07     87.54       75.82  76.50
DSC11       96.91     90.25  89.46     90.85       83.00  83.45
DDL         96.82     88.97  89.13     90.20       81.83  83.42
Joint DTL   98.93     93.43  92.06     93.26       85.62  85.86
10. S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang, "Single sample face recognition via learning deep supervised autoencoders," IEEE Transactions on Information Forensics and Security, 2015.
11. X. Peng, J. Feng, S. Xiao, J. Lu, Z. Yi, and S. Yan, "Deep sparse subspace clustering," arXiv preprint arXiv:1709.08374, 2017.
What is an Inverse Problem?
An inverse problem is given by the equation
$y = Ax + \eta$
The operator A defines the problem:
● Denoising - identity
● Super-resolution - subsampling
● Deblurring - convolution
● Reconstruction - projection
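Two of these forward operators as illustrative sketches (the Gaussian blur kernel and the decimation grid are assumptions of the sketch; the thesis does not commit to these specific choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_deblur(x, sigma=2.0):
    """Deblurring: A is a convolution (here with a Gaussian kernel)."""
    return gaussian_filter(x, sigma)

def forward_superres(x, factor=2):
    """Super-resolution: A subsamples the high-resolution image."""
    return x[::factor, ::factor]

# y = A x + eta: simulate a blurred, noisy observation
x = np.random.rand(64, 64)
y = forward_deblur(x) + 0.01 * np.random.randn(64, 64)
```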
Sparsity based Solution
● Exploits the sparsity of the image in some domain.
● Assumes that the sparsifying basis $\varphi$ is known (DCT, wavelet, etc.):
$y = Ax + \eta = A\varphi\alpha + \eta$
where $\alpha$ is the sparse representation and $\varphi$ is the fixed basis.
● Are fixed bases the best possible option?
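With a fixed basis, the recovery problem $\min_\alpha \frac{1}{2}\|y - A\varphi\alpha\|_2^2 + \mu\|\alpha\|_1$ can be solved by iterative soft thresholding; a generic sketch (ISTA is one common solver, not necessarily the one used in the thesis):

```python
import numpy as np

def ista(y, B, mu=0.1, iters=200):
    """ISTA for min_a 0.5*||y - B a||_2^2 + mu*||a||_1, with B = A @ Phi."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2   # 1/L, L = Lipschitz constant
    a = np.zeros(B.shape[1])
    for _ in range(iters):
        g = a - step * B.T @ (B @ a - y)     # gradient step on the quadratic
        a = np.sign(g) * np.maximum(np.abs(g) - step * mu, 0.0)  # soft threshold
    return a
```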
Adaptive Learning based Solution
Transform learning learns the basis adaptively from the image patches:
$\min_{x, T, Z} \|y - Ax\|_2^2 + \lambda \sum_i \|T P_i x - z_i\|_2^2 + \mu(\|T\|_F^2 - \log\det T) + \gamma \sum_i \|z_i\|_0$
where $P_i x$ is the $i$-th patch of the image and $Z = [z_1 | z_2 | \dots | z_K]$. The first term enforces data consistency; the remaining terms are the transform-learning prior.
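Given the transform and the patch codes, the image update is a linear least-squares problem. A dense-matrix sketch of its normal equations (a practical implementation would use conjugate gradients and implicit patch-extraction operators):

```python
import numpy as np

def update_image(y, A, P_list, T, z_list, lam):
    """Solve min_x ||y - A x||_2^2 + lam * sum_i ||T P_i x - z_i||_2^2."""
    lhs = A.T @ A + lam * sum(P.T @ T.T @ T @ P for P in P_list)
    rhs = A.T @ y + lam * sum(P.T @ T.T @ z for P, z in zip(P_list, z_list))
    return np.linalg.solve(lhs, rhs)
```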
Proposed DTL Inversion
DTL12 learns multiple levels of transforms. The problem is formulated as
$\min_{x, T_1, T_2, T_3, Z} \|y - Ax\|_2^2 + \lambda \sum_i \|T_3 T_2 T_1 P_i x - z_i\|_F^2 + \mu \sum_{j=1}^{3} (\|T_j\|_F^2 - \log\det T_j) + \gamma \sum_i \|z_i\|_1$
subject to $T_1 P_i x > 0$ and $T_2 T_1 P_i x > 0$,
where $P_i x$ is the $i$-th patch of the image and $Z = [z_1 | z_2 | \dots | z_K]$. The first term enforces data consistency; the remaining terms form the deep transform-learning prior.
12. J. Maggu and A. Majumdar, "Transductive Inversion via Deep Transform Learning," Signal Processing (submitted).
Deblurring Results
Comparative deblurring performance (SSIM):
Image      Blurry  RCSR13  GBD14  DeblurGAN15  Proposed
Baby       0.78    0.76    0.85   0.86         0.89
Bird       0.76    0.74    0.83   0.84         0.85
Butterfly  0.48    0.47    0.62   0.63         0.65
Head       0.66    0.65    0.72   0.84         0.84
Woman      0.73    0.71    0.80   0.80         0.82
13. M. Tofighi, Y. Li and V. Monga, "Blind Image Deblurring Using Row-Column Sparse Representations," IEEE Signal Processing Letters, 2018.
14. Y. Bai, G. Cheung, X. Liu and W. Gao, "Graph-Based Blind Image Deblurring From a Single Photograph," IEEE Transactions on Image Processing, 2019.
15. O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin and J. Matas, "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks," IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Deblurring Results
[Figure: "Man" image, left to right: Original, Blurred, RCSR, GBD, DeblurGAN, Proposed]
Supervised Deep Transform Learning
Label-Consistent DTL:
$\min_{\{T_i\}, Z, M} \|T_N(\varphi(\cdots T_2(\varphi(T_1 X)))) - Z\|_F^2 + \lambda \sum_i (\mu\|T_i\|_F^2 - \log\det T_i) + \eta\|Q - \varphi(MZ)\|_F^2$
The first two terms are the deep TL model; the last is the supervision term.
Applications: multi-class classification and multi-label classification16.
16. V. Singhal, J. Maggu and A. Majumdar, "Simultaneous Detection of Multiple Appliances from Smart-meter Measurements via Multi-Label Consistent Deep Dictionary Learning and Deep Transform Learning," IEEE Transactions on Smart Grid, 2019.
Multi-class Classification Results
Classification accuracy on the YaleB and AR face recognition datasets:
Technique                              YaleB   AR
Stacked Denoising Autoencoder          42.81   37.60
Stacked Group Sparse Autoencoder       66.27   32.50
Stacked Label Consistent Autoencoder   86.22   85.21
Discriminative Deep Belief Network     60.34   38.20
LC-KSVD                                90.80   87.67
DDL (unsupervised)                     93.35   92.66
LC-DDL                                 94.57   96.50
DTL (unsupervised)                     97.67   96.80
LCTL 1-layer                           98.80   97.80
LCTL 2-layers                          98.87   97.91
LCTL 3-layers                          98.65   98.89
LCTL 4-layers                          97.24   96.16
NILM as a Multi-label Classification Problem
[Diagram: the aggregated load is fed to supervised deep transform learning, which predicts the states of appliances A1, A2, ..., An]
Results on Energy Datasets
Performance on the REDD and Pecan Street datasets:
Method               REDD: Micro-F1  Macro-F1  Energy error   Pecan Street: Micro-F1  Macro-F1  Energy error
MLKNN17              0.6034          0.5931    0.1067         0.6263                  0.6227    0.0989
RAKEL18              0.5749          0.5334    0.9948         0.6663                  0.6620    0.9995
Proposed (1 layer)   0.5884          0.5838    0.0983         0.6079                  0.6079    0.0236
Proposed (2 layers)  0.5905          0.5857    0.0892         0.6082                  0.6089    0.0223
Proposed (3 layers)  0.6001          0.5981    0.0766         0.6104                  0.6104    0.0115
Proposed (4 layers)  0.5914          0.5951    0.0827         0.6096                  0.6087    0.0228
17. M.-L. Zhang and Z.-H. Zhou, "A k-nearest neighbor based algorithm for multi-label classification," Granular Computing, 2005.
18. G. Tsoumakas and I. Vlahavas, "Random k-labelsets: An ensemble method for multilabel classification," ECML, 2007.
Subspace Clustering
A special case of spectral clustering, where data samples from the same cluster are assumed to lie in the same subspace.
● Each data point is expressed as a linear combination of the others: for all $i \in \{1, \dots, n\}$, $x_i = X_{i^c} c_i$, where $x_i \in \mathbb{R}^m$ is the $i$-th sample, $X_{i^c} \in \mathbb{R}^{m \times (n-1)}$ gathers all the other samples column-wise, and $c_i \in \mathbb{R}^{n-1}$ is the corresponding linear weight vector.
● An affinity matrix $A \in \mathbb{R}^{n \times n}$ is computed from the $(c_i)_{1 \le i \le n}$ to quantify the similarity (inverse distance) between the samples.
● The clusters are segmented by applying a cut technique (e.g., N-Cut).
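A compact sketch of this pipeline. The self-expression step below uses a simple ridge solve in place of a sparse or low-rank coding step (an assumption for brevity), and sklearn's spectral clustering stands in for N-Cut:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster(X, n_clusters, mu=0.1):
    """Code each sample over the others, build an affinity, cut spectrally."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        Xi = X[:, idx]                         # X_{i^c}: all other samples
        c = np.linalg.solve(Xi.T @ Xi + mu * np.eye(n - 1), Xi.T @ X[:, i])
        C[idx, i] = c
    A = np.abs(C) + np.abs(C).T                # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)
```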
Subspace Clustering
Illustration of the subspace clustering19 framework based on sparse and low-rank representation approaches for building the affinity matrix.
19. A. Sobral, "Robust Low-rank and Sparse Decomposition for Moving Object Detection: From Matrices to Tensors," doi:10.13140/RG.2.2.33578.82884.
Transformed Subspace Clustering
Illustration of the transformed subspace clustering framework based on sparse and low-rank representation approaches for building the affinity matrix, now computed on the transformed coefficient space, with a joint solution.
Deep Transformed Subspace Clustering
● Learn the linear weight vector on the transformed coefficient space:
$\min_{T_3, T_2, T_1, Z, C} \underbrace{\|T_3 T_2 T_1 X - Z\|_F^2 + \lambda \sum_{i=1}^{3} (\|T_i\|_F^2 - \log\det T_i)}_{\text{deep transform}} + \underbrace{\gamma \sum_i \|z_i - Z_{i^c} c_i\|_2^2 + R(C)}_{\text{clustering term}}$
● Transformed locally linear manifold clustering22: $R(C) = 0$
● Transformed sparse subspace clustering20,21: $R(C) = \|C\|_1$
● Transformed low-rank subspace clustering20,21: $R(C) = \|C\|_*$
20. J. Maggu, A. Majumdar and E. Chouzenoux, "Transformed Subspace Clustering," IEEE Transactions on Knowledge and Data Engineering (accepted).
21. J. Maggu, A. Majumdar, E. Chouzenoux and G. Chierchia, "Deeply Transformed Subspace Clustering," Signal Processing (major revision).
22. J. Maggu, A. Majumdar and E. Chouzenoux, "Transformed Locally Linear Manifold Clustering," EUSIPCO, 2018.
Experimental Results: EYALEB
Comparison with benchmarks on EYALEB:
Metric     DSC23  DKM24  DMF25  DTLLMC  DTSSC
Accuracy   88.00  91.00  89.00  93.13   99.26
NMI        0.90   0.92   0.90   0.92    0.95
ARI        0.83   0.90   0.83   0.91    0.97
Precision  0.79   0.91   0.80   0.94    0.99
F-Score    0.83   0.90   0.84   0.94    0.97
23. X. Peng, S. Xiao, J. Feng, W. Y. Yau and Z. Yi, "Deep Subspace Clustering with Sparsity Prior," IJCAI, 2016.
24. B. Yang, X. Fu, N. D. Sidiropoulos and M. Hong, "Towards k-means-friendly spaces: Simultaneous deep learning and clustering," ICML, 2017.
25. G. Trigeorgis, K. Bousmalis, S. Zafeiriou and B. W. Schuller, "A Deep Matrix Factorization Method for Learning Attribute Representations," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
Experimental Results: EYALEB
Effect of the number of DTSSC layers on EYALEB:
Metric     DTSSC 1-layer  DTSSC 2-layers  DTSSC 3-layers
Accuracy   99.22          99.23           99.26
NMI        0.9448         0.9451          0.9476
ARI        0.9656         0.9663          0.9666
Precision  0.9887         0.9900          0.9912
F-Score    0.9567         0.9610          0.9667
The joint formulation reduces the chances of relevant clusters being masked by noisy features and avoids the need for a preliminary feature-extraction step.
Convolutional Transform Learning
● In standard TL, a dense basis is learnt.
● Proposal: learn a set of independent filters that are convolved with images to learn representations.
● Motivation: the pivotal connection between CNNs and CTL; yet CTL is unexplored.
● Research gaps being addressed:
  ● Unlike CNNs, CTL works as unsupervised learning.
  ● The learnt filters are guaranteed to be mutually distinct.
  ● CNNs have been analysed via convolutional sparse coding.
Convolutional Transform Learning
● Input: dataset $\{x^{(m)}\}_{1 \le m \le M}$ with $M$ entries in $\mathbb{R}^N$.
● Proposed model: for all $m \in \{1, \dots, M\}$, $\chi^{(m)} T \approx Z_m$, where
  ● $T = [t_1 | \dots | t_K] \in \mathbb{R}^{K \times K}$ is the convolutive transform, gathering a set of $K$ kernels;
  ● $\chi^{(m)}$ is a Toeplitz matrix such that $\chi^{(m)} T = [t_1 * x^{(m)} | \dots | t_K * x^{(m)}] \in \mathbb{R}^{N \times K}$;
  ● $Z_m = [z_1^{(m)} | \dots | z_K^{(m)}]$ is the matrix of coefficients associated to each entry of the dataset.
● Goal: estimate the dense filters $\{t_k\}_{1 \le k \le K}$ and the sparse coefficients $\{Z_m\}_{1 \le m \le M}$ from the $\{x^{(m)}\}_{1 \le m \le M}$.
Convolutional Transform Learning
Learns convolved features in an unsupervised way26:
$\min_{T, Z} \frac{1}{2} \sum_{m=1}^{M} \|\chi^{(m)} T - Z_m\|_F^2 + \mu\|T\|_F^2 - \lambda \log|T| + \beta\|Z\|_1 + \iota_{[0,+\infty[^{NM \times K}}(Z)$
with $Z = [Z_1^\top | \dots | Z_M^\top]^\top \in \mathbb{R}^{NM \times K}$; the data-fidelity term can equivalently be written $\|T * x^{(m)} - Z_m\|_F^2$.
26. J. Maggu, E. Chouzenoux, G. Chierchia and A. Majumdar, "Convolutional Transform Learning," ICONIP, 2018.
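The feature-extraction step $\chi^{(m)} T = [t_1 * x^{(m)} | \dots | t_K * x^{(m)}]$ as a sketch for 2-D images (the "same" boundary mode is an assumption; the paper's exact boundary handling may differ):

```python
import numpy as np
from scipy.signal import convolve2d

def ctl_features(images, kernels):
    """One feature map per kernel; flattened maps form the columns of Z_m."""
    feats = []
    for x in images:
        cols = [convolve2d(x, t, mode="same").ravel() for t in kernels]
        feats.append(np.stack(cols, axis=1))   # N x K matrix chi^(m) T
    return feats
```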
ConvTL: Classification Results
Classification accuracy with SVM:
Dataset              YALEB  AR
Raw                  93.24  87.33
TL                   94.21  84.33
DTL                  97.67  96.80
ConvTL (1 layer)     97.38  88.87
DConvTL (2 layers)   97.00  92.22
DConvTL (3 layers)   98.00  97.67
DConvTL (4 layers)   94.44  82.21
CNN                  98.60  95.50
Kernels in CNN and CTL
The learnt kernels from CNN and CTL are similar; there is a close relationship between the two.
[Figure: kernels from CNN; kernels from CTL]
Semi-coupled Transform Learning
[Diagram: data X1 and X2 are projected by transforms T1 and T2 onto coefficients Z1 and Z2, which meet in a common feature space; synthesis runs in the reverse direction. Example pairings: LR image / HR image (resolution), photo / sketch, source-view action / target-view action]
Semi-coupled Transform Learning
● Enables comparison of heterogeneous samples.
● The data can come from different sources.
● E.g., a face sketch and a photo for matching.
[Diagram: X1 → T1 → Z1 and X2 → T2 → Z2, coupled by the map M]
Semi-Coupled TL
● TL network for the source X1 ($T_1 X_1 = Z_1$):
$\min_{T_1, Z_1} \|T_1 X_1 - Z_1\|_F^2 + \eta\|Z_1\|_1 + \lambda(\varepsilon\|T_1\|_F^2 - \log\det T_1)$
● TL network for the target X2 ($T_2 X_2 = Z_2$):
$\min_{T_2, Z_2} \|T_2 X_2 - Z_2\|_F^2 + \eta\|Z_2\|_1 + \lambda(\varepsilon\|T_2\|_F^2 - \log\det T_2)$
● Coupling map ($Z_2 = M Z_1$):
$\min_{M} \|Z_2 - M Z_1\|_F^2$
Problem Formulation
$\min_{T_1, T_2, Z_1, Z_2, M} \|T_1 X_1 - Z_1\|_F^2 + \|T_2 X_2 - Z_2\|_F^2 + \mu\|Z_2 - M Z_1\|_F^2 + \eta(\|Z_1\|_1 + \|Z_2\|_1) + \lambda(\varepsilon\|T_1\|_F^2 + \varepsilon\|T_2\|_F^2 - \log\det T_1 - \log\det T_2)$
27. J. Maggu and A. Majumdar, "Semi-Coupled Transform Learning," ICONIP, 2018.
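Within the alternating scheme, the coupling map has a closed-form least-squares update; a sketch (the ridge term is an added stabilizer, not part of the formulation). At test time, source coefficients can be projected into the target space as $\hat{Z}_2 = M Z_1$, e.g., to synthesize HR coefficients from an LR input.

```python
import numpy as np

def update_coupling(Z1, Z2, ridge=1e-6):
    """Least-squares coupling map: min_M ||Z2 - M Z1||_F^2."""
    k = Z1.shape[0]
    return Z2 @ Z1.T @ np.linalg.inv(Z1 @ Z1.T + ridge * np.eye(k))
```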
Image Super-resolution Results
[Figure: Original, Coupled DL, Semi-coupled TL]
Image Super-resolution Results
PSNR for super-resolution:
            Method     Lena   Barbara  Pepper  Cameraman
Color       CDL        30.79  28.21    29.76   27.86
            Proposed   33.03  30.28    31.81   30.14
Gray scale  CDL        31.27  28.98    30.46   28.70
            Proposed   34.55  31.17    32.68   30.85
Cross-lingual Document Retrieval Results
Comparable document retrieval:
Algorithm  Europarl: Accuracy  MRR     Wikipedia: Accuracy  MRR
OPCA28     97.42               0.9846  72.55                0.7734
CPLSA28    97.16               0.9782  45.79                0.5130
CDL29      98.12               0.9839  72.79                28.70
Proposed   99.54               0.9896  78.68                0.8002
28. J. C. Platt and K. Toutanova, Conference on Empirical Methods in Natural Language Processing (Association for Computational Linguistics), 2011.
29. R. Mehrotra, D. Chu, S. A. Haider and I. A. Kakadiaris, "Towards Learning Coupled Representations for Cross-Lingual Information Retrieval."
OPCA: Oriented Principal Component Analysis
CPLSA: Coupled Probabilistic Latent Semantic Analysis
CDL: Coupled dictionary learning
MRR: Mean Reciprocal Rank
Outline
● Supervised TL
● Unsupervised DTL
● Supervised DTL
● Deep Transformed Subspace Clustering
● Convolutional TL
● Semi-coupled TL
● Future Work
  ● Deeply Coupled TL
  ● Deep Transform Information Fusion Network
Deeply-coupled Transform Learning
[Diagram: as in semi-coupled TL, but data X1 and X2 are projected by deep transforms T1 and T2 onto coefficients Z1 and Z2 in a common feature space; synthesis runs in the reverse direction. Example pairings: LR/HR image, photo/sketch, source/target-view action]
Deeply Coupled Transform Learning
● Comparison of heterogeneous samples; the data can come from different sources, e.g., a face sketch and a photo for matching.
● Extends semi-coupled TL with two-layer transforms on each side: X1 → T11, T12 → Z1 and X2 → T21, T22 → Z2, coupled by the map M between source and target.
Deep Transform Information Fusion Network
● The network learns whether the two inputs (images) presented are related or not.
● E.g., a verification task.
● Three candidate architectures (Architectures 1, 2 and 3) are proposed.
Publications (Journals)
1. J. Maggu, A. Majumdar and E. Chouzenoux, “Transformed Subspace Clustering”, IEEE Transactions on Knowledge
and Data Engineering (accepted).
2. J. Maggu, H. Agarwal and A. Majumdar, “Label Consistent Transform Learning for Hyperspectral Image
Classification”, IEEE Geosciences and Remote Sensing Letters, Vol. 16 (9), pp. 1502-1506, 2019
3. V. Singhal, J. Maggu and A. Majumdar, “Simultaneous Detection of Multiple Appliances from Smart-meter
Measurements via Multi-Label Consistent Deep Dictionary Learning and Deep Transform Learning” IEEE
Transactions on Smart Grid, Vol. 10 (3), pp. 2969-2978, 2019.
4. J. Maggu, P. Singh and A. Majumdar, “Multi-echo Reconstruction from Partial K-space Scans via Adaptively Learnt
Basis”, Magnetic Resonance Imaging, Vol. 45, pp. 105-112, 2018.
5. J. Maggu and A. Majumdar, “Kernel Transform Learning”, Pattern Recognition Letters, Vol. 117, pp. 117-122, 2017.
6. J. Maggu, A. Majumdar, E. Chouzenoux and G. Chierchia, “Deeply Transformed Subspace Clustering”, Signal
Processing (major revision).
7. J. Maggu and A. Majumdar, “Dynamic MRI Reconstruction with Deep Transform Learning Prior”, Magnetic
Resonance Imaging, (major revision)
8. J. Maggu and A. Majumdar, “Transductive Inversion via Deep Transform Learning”, Signal Processing (submitted).
Publications (Conferences)
1. J. Maggu and A. Majumdar, “Supervised Kernel Transform Learning”, IEEE IJCNN 2019.
2. J. Maggu, E. Chouzenoux, G. Chierchia and A. Majumdar, “Convolutional Transform Learning”, ICONIP,
pp. 162-174, 2018.
3. J. Maggu and A. Majumdar, "Semi-Coupled Transform Learning", ICONIP, pp. 141-150, 2018.
4. J. Maggu, A. Majumdar and E. Chouzenoux, “Transformed Locally Linear Manifold Clustering”,
EUSIPCO, pp. 1057-1061, 2018.
5. J. Maggu and A. Majumdar, "Unsupervised Deep Transform Learning", IEEE ICASSP, pp. 6782-6786,
2018.
6. J. Maggu, R. Hussein, A. Majumdar and R. Ward, "Impulse Denoising via Transform Learning", IEEE
GlobalSIP, pp. 1250-1254, 2017.
7. J. Maggu and A. Majumdar, “Greedy Deep Transform Learning”, IEEE ICIP, pp. 1822-1826, 2017.
8. J. Maggu and A. Majumdar, “Robust Transform Learning”, IEEE ICASSP, pp. 1467-1471, 2017.
9. J. Maggu and A. Majumdar, "Alternate Formulation for Transform Learning", ICVGIP, pp. 501-508, 2016.
Thank You
!64

More Related Content

What's hot

Visualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional NetworksVisualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional Networks
Willy Marroquin (WillyDevNET)
 
Cognition, Information and Subjective Computation
Cognition, Information and Subjective ComputationCognition, Information and Subjective Computation
Cognition, Information and Subjective Computation
Hector Zenil
 
Multi Layer Perceptron & Back Propagation
Multi Layer Perceptron & Back PropagationMulti Layer Perceptron & Back Propagation
Multi Layer Perceptron & Back Propagation
Sung-ju Kim
 
Back propagation
Back propagationBack propagation
Back propagation
Nagarajan
 
Neural network and mlp
Neural network and mlpNeural network and mlp
Neural network and mlp
partha pratim deb
 
Neural networks
Neural networksNeural networks
Neural networksSlideshare
 
Neural network
Neural networkNeural network
Neural network
Silicon
 
Machine Learning
Machine LearningMachine Learning
Machine Learningbutest
 
(Artificial) Neural Network
(Artificial) Neural Network(Artificial) Neural Network
(Artificial) Neural NetworkPutri Wikie
 
Back propagation
Back propagation Back propagation
Back propagation
DrBaljitSinghKhehra
 
Artificial Neuron network
Artificial Neuron network Artificial Neuron network
Artificial Neuron network
Smruti Ranjan Sahoo
 
Offline Character Recognition Using Monte Carlo Method and Neural Network
Offline Character Recognition Using Monte Carlo Method and Neural NetworkOffline Character Recognition Using Monte Carlo Method and Neural Network
Offline Character Recognition Using Monte Carlo Method and Neural Network
ijaia
 
CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...
CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...
CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...
The Statistical and Applied Mathematical Sciences Institute
 
Performance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position VectorsPerformance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position Vectors
CSCJournals
 
Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14
Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14
Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14
Daniel Lewis
 
Artificial neural network for concrete mix design
Artificial neural network for concrete mix designArtificial neural network for concrete mix design
Artificial neural network for concrete mix design
Monjurul Shuvo
 
Neural networks
Neural networksNeural networks
Neural networks
Dr. C.V. Suresh Babu
 
Neural Network Fundamentals
Neural Network FundamentalsNeural Network Fundamentals
Neural Network Fundamentals
Manoj Kumar
 

What's hot (20)

Visualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional NetworksVisualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional Networks
 
Cognition, Information and Subjective Computation
Cognition, Information and Subjective ComputationCognition, Information and Subjective Computation
Cognition, Information and Subjective Computation
 
Multi Layer Perceptron & Back Propagation
Multi Layer Perceptron & Back PropagationMulti Layer Perceptron & Back Propagation
Multi Layer Perceptron & Back Propagation
 
Back propagation
Back propagationBack propagation
Back propagation
 
hopfield neural network
hopfield neural networkhopfield neural network
hopfield neural network
 
Neural network and mlp
Neural network and mlpNeural network and mlp
Neural network and mlp
 
Neural networks
Neural networksNeural networks
Neural networks
 
Neural network
Neural networkNeural network
Neural network
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
(Artificial) Neural Network
(Artificial) Neural Network(Artificial) Neural Network
(Artificial) Neural Network
 
Back propagation
Back propagation Back propagation
Back propagation
 
Artificial Neuron network
Artificial Neuron network Artificial Neuron network
Artificial Neuron network
 
Offline Character Recognition Using Monte Carlo Method and Neural Network
Offline Character Recognition Using Monte Carlo Method and Neural NetworkOffline Character Recognition Using Monte Carlo Method and Neural Network
Offline Character Recognition Using Monte Carlo Method and Neural Network
 
HOPFIELD NETWORK
HOPFIELD NETWORKHOPFIELD NETWORK
HOPFIELD NETWORK
 
CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...
CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...
CLIM Program: Remote Sensing Workshop, Multilayer Modeling and Analysis of Co...
 
Performance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position VectorsPerformance Evaluation of Object Tracking Technique Based on Position Vectors
Performance Evaluation of Object Tracking Technique Based on Position Vectors
 
Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14
Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14
Piotr Mirowski - Review Autoencoders (Deep Learning) - CIUUK14
 
Artificial neural network for concrete mix design
Artificial neural network for concrete mix designArtificial neural network for concrete mix design
Artificial neural network for concrete mix design
 
Neural networks
Neural networksNeural networks
Neural networks
 
Neural Network Fundamentals
Neural Network FundamentalsNeural Network Fundamentals
Neural Network Fundamentals
 

Similar to PhD Defense

Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4
Fabian Pedregosa
 
Continuum Modeling and Control of Large Nonuniform Networks
Continuum Modeling and Control of Large Nonuniform NetworksContinuum Modeling and Control of Large Nonuniform Networks
Continuum Modeling and Control of Large Nonuniform Networks
Yang Zhang
 
Paper Summary of Disentangling by Factorising (Factor-VAE)
Paper Summary of Disentangling by Factorising (Factor-VAE)Paper Summary of Disentangling by Factorising (Factor-VAE)
Paper Summary of Disentangling by Factorising (Factor-VAE)
준식 최
 
Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...
Frank Nielsen
 
diffusion 모델부터 DALLE2까지.pdf
diffusion 모델부터 DALLE2까지.pdfdiffusion 모델부터 DALLE2까지.pdf
diffusion 모델부터 DALLE2까지.pdf
수철 박
 
Macrocanonical models for texture synthesis
Macrocanonical models for texture synthesisMacrocanonical models for texture synthesis
Macrocanonical models for texture synthesis
Valentin De Bortoli
 
Representing Simplicial Complexes with Mangroves
Representing Simplicial Complexes with MangrovesRepresenting Simplicial Complexes with Mangroves
Representing Simplicial Complexes with Mangroves
David Canino
 
Dictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix FactorizationDictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix Factorization
Arthur Mensch
 
Comparison of the optimal design
Comparison of the optimal designComparison of the optimal design
Comparison of the optimal design
Alexander Decker
 
Yolos you only look one sequence
Yolos you only look one sequenceYolos you only look one sequence
Yolos you only look one sequence
taeseon ryu
 
Learning to Reconstruct
Learning to ReconstructLearning to Reconstruct
Learning to Reconstruct
Jonas Adler
 
Dixon Deep Learning
Dixon Deep LearningDixon Deep Learning
Dixon Deep Learning
SciCompIIT
 
Molecular autoencoder
Molecular autoencoderMolecular autoencoder
Molecular autoencoder
Dan Elton
 
A Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information RetrievalA Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information Retrieval
Bhaskar Mitra
 
Understanding variable importances in forests of randomized trees
Understanding variable importances in forests of randomized treesUnderstanding variable importances in forests of randomized trees
Understanding variable importances in forests of randomized treesGilles Louppe
 
PhD defense talk slides
PhD  defense talk slidesPhD  defense talk slides
PhD defense talk slides
Chiheb Ben Hammouda
 
Smart Multitask Bregman Clustering
Smart Multitask Bregman ClusteringSmart Multitask Bregman Clustering
Smart Multitask Bregman Clustering
Venkat Sai Sharath Mudhigonda
 
Dynamic stiffness and eigenvalues of nonlocal nano beams
Dynamic stiffness and eigenvalues of nonlocal nano beamsDynamic stiffness and eigenvalues of nonlocal nano beams
Dynamic stiffness and eigenvalues of nonlocal nano beams
University of Glasgow
 
Uncertainty in deep learning
Uncertainty in deep learningUncertainty in deep learning
Uncertainty in deep learning
Yujiro Katagiri
 

Similar to PhD Defense (20)

Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4
 
Continuum Modeling and Control of Large Nonuniform Networks
Continuum Modeling and Control of Large Nonuniform NetworksContinuum Modeling and Control of Large Nonuniform Networks
Continuum Modeling and Control of Large Nonuniform Networks
 
Paper Summary of Disentangling by Factorising (Factor-VAE)
Paper Summary of Disentangling by Factorising (Factor-VAE)Paper Summary of Disentangling by Factorising (Factor-VAE)
Paper Summary of Disentangling by Factorising (Factor-VAE)
 
Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...
 
diffusion 모델부터 DALLE2까지.pdf
diffusion 모델부터 DALLE2까지.pdfdiffusion 모델부터 DALLE2까지.pdf
diffusion 모델부터 DALLE2까지.pdf
 
main
mainmain
main
 
Macrocanonical models for texture synthesis
Macrocanonical models for texture synthesisMacrocanonical models for texture synthesis
Macrocanonical models for texture synthesis
 
Representing Simplicial Complexes with Mangroves
Representing Simplicial Complexes with MangrovesRepresenting Simplicial Complexes with Mangroves
Representing Simplicial Complexes with Mangroves
 
Dictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix FactorizationDictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix Factorization
 
Comparison of the optimal design
Comparison of the optimal designComparison of the optimal design
Comparison of the optimal design
 
Yolos you only look one sequence
Yolos you only look one sequenceYolos you only look one sequence
Yolos you only look one sequence
 
Learning to Reconstruct
Learning to ReconstructLearning to Reconstruct
Learning to Reconstruct
 
Dixon Deep Learning
Dixon Deep LearningDixon Deep Learning
Dixon Deep Learning
 
Molecular autoencoder
Molecular autoencoderMolecular autoencoder
Molecular autoencoder
 
A Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information RetrievalA Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information Retrieval
 
Understanding variable importances in forests of randomized trees
Understanding variable importances in forests of randomized treesUnderstanding variable importances in forests of randomized trees
Understanding variable importances in forests of randomized trees
 
PhD defense talk slides
PhD  defense talk slidesPhD  defense talk slides
PhD defense talk slides
 
Smart Multitask Bregman Clustering
Smart Multitask Bregman ClusteringSmart Multitask Bregman Clustering
Smart Multitask Bregman Clustering
 
Dynamic stiffness and eigenvalues of nonlocal nano beams
Dynamic stiffness and eigenvalues of nonlocal nano beamsDynamic stiffness and eigenvalues of nonlocal nano beams
Dynamic stiffness and eigenvalues of nonlocal nano beams
 
Uncertainty in deep learning
Uncertainty in deep learningUncertainty in deep learning
Uncertainty in deep learning
 

Recently uploaded

Pride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School DistrictPride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School District
David Douglas School District
 
MASS MEDIA STUDIES-835-CLASS XI Resource Material.pdf
MASS MEDIA STUDIES-835-CLASS XI Resource Material.pdfMASS MEDIA STUDIES-835-CLASS XI Resource Material.pdf
MASS MEDIA STUDIES-835-CLASS XI Resource Material.pdf
goswamiyash170123
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
Special education needs
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
EduSkills OECD
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
Digital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion DesignsDigital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion Designs
chanes7
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
Best Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDABest Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDA
deeptiverma2406
 
Digital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments UnitDigital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments Unit
chanes7
 
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat  Leveraging AI for Diversity, Equity, and InclusionExecutive Directors Chat  Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
TechSoup
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
EugeneSaldivar
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
thanhdowork
 
Lapbook sobre os Regimes Totalitários.pdf
Lapbook sobre os Regimes Totalitários.pdfLapbook sobre os Regimes Totalitários.pdf
Lapbook sobre os Regimes Totalitários.pdf
Jean Carlos Nunes Paixão
 
Advantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO PerspectiveAdvantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO Perspective
Krisztián Száraz
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 

Recently uploaded (20)

Pride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School DistrictPride Month Slides 2024 David Douglas School District
Pride Month Slides 2024 David Douglas School District
 
MASS MEDIA STUDIES-835-CLASS XI Resource Material.pdf
MASS MEDIA STUDIES-835-CLASS XI Resource Material.pdfMASS MEDIA STUDIES-835-CLASS XI Resource Material.pdf
MASS MEDIA STUDIES-835-CLASS XI Resource Material.pdf
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
Digital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion DesignsDigital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion Designs
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
Best Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDABest Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDA
 
Digital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments UnitDigital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments Unit
 
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat  Leveraging AI for Diversity, Equity, and InclusionExecutive Directors Chat  Leveraging AI for Diversity, Equity, and Inclusion
Executive Directors Chat Leveraging AI for Diversity, Equity, and Inclusion
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
 
Lapbook sobre os Regimes Totalitários.pdf
Lapbook sobre os Regimes Totalitários.pdfLapbook sobre os Regimes Totalitários.pdf
Lapbook sobre os Regimes Totalitários.pdf
 
Advantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO PerspectiveAdvantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO Perspective
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 

PhD Defense

  • 1. Deep Transform Learning Towards a Machine Learning Framework Jyoti Maggu Advisor: Dr. Angshul Majumdar December 20, 2019
  • 2. !2 Transform Learning(TL) Transform Original Data Sparse Representation k × m m × n k × n (T) (X) (Z)
  • 3. !3 Transform Learning(TL) Transform Original Data Sparse Representation k × m m × n k × n (T) (X) (Z) TX = Z
  • 4. !4 Transform Learning(TL) Transform Original Data Sparse Representation k × m m × n k × n The transform learning1 problem can be expressed as min T∈!k×m ,Z∈!k×n ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT) + µ || Z ||0 1. S. Ravishankar and Y. Bresler, “Learning Sparsifying Transforms”, IEEE Transactions on Signal Processing, (2013). (T) (X) (Z) TX = Z
  • 5. !5 Transform Learning(TL) min T∈!k×m ,Z∈!k×n ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT) + µ || Z ||0 T ← min T ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT ) Z ← min Z ||TX − Z ||F 2 +µ || Z ||0 Closed form solution exists
  • 6. !6 Transform Learning(TL) min T∈!k×m ,Z∈!k×n ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT) + µ || Z ||0 T ← min T ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT ) Z ← min Z ||TX − Z ||F 2 +µ || Z ||0 Closed form solution exists Idea : Use transforms for solving Machine Learning Problems??
  • 7. Outline ●Supervised TL ●Unsupervised DTL ●Supervised DTL ●Deep Transformed Subspace Clustering ●Convolutional TL ●Semi-coupled TL ●Future Work 7
  • 8. Outline ●Supervised TL ●Unsupervised DTL ●Supervised DTL ●Deep Transformed Subspace Clustering ●Convolutional TL ●Semi-coupled TL ●Future Work !8
  • 9. Supervised Transform Learning Label Consistent TL (LCTL): !9 ! min T ,Z,M ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT )+ µ || Z ||1 +η ||Q − MZ ||F 2 supervision term Learn mapping between true labels Q and the coefficients Z. Mapping M can be linear or non-linear. X Z Q T M
  • 10. Kernel Transform Learning Kernel TL: • Dense transform —> non-linear transformation of noisy data into higher dimensional feature space .X ϕ !10 : fixed basis : transform as sparse combination (B) of basis from Φ ΦB Φ ϕ :!N → F TX = Z K(X, X) = ϕ(X)T ϕ(X) BK(X,X) = Z Bϕ(X)T Transform !"# $# ϕ(X) Data ! = Z
  • 11. Supervised Kernel Transform Learning Transform Learning: Kernel TL: Kernel LCTL: !11 min B,Z,M || BK(X, X)− Z ||F 2 +λ(ε ||T ||F 2 −logdetT )+ µ || Z ||1 +η ||Q − MZ ||F 2 min B,Z || BK(X, X)− Z ||F 2 +λ(ε || B ||F 2 −logdet B)+ µ || Z ||0 supervision term Kernel transform term Kernel transform term min T ,Z ||TX − Z ||F 2 +λ(ε ||T ||F 2 −logdetT )+ µ || Z ||1
  • 12. Supervised Transform Learning: Results • Classification Results on YaleB(38 persons), AR faces(100 persons) • Kernel: polynomial order 3 • Parameter values obtained on CIFAR-10 validation dataset • Benchmark comparisons: • Discriminative Baysian Dictionary Learning(DBDL)2 • Multimodal Task Driven Dictionary Learning(MTDL)3 • Discriminative Analysis Dictionary Learning(DADL)4 • Sparse Embedded Dictionary Learning(SEDL)5 • Non-Linear Dictionary Learning(NDL)6 !12 2. N. Akhtar, F. Shafait and A. Mian, "Discriminative Bayesian Dictionary Learning for Classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016. 3. S. Bahrampour, N. M. Nasrabadi, A. Ray and W. K. Jenkins, "Multimodal Task-Driven Dictionary Learning for Image Classification," IEEE Transactions on Image Processing, 2016. 4. J. Guo, Y. Guo, X. Kong, M. Zhang and R. He, “Discriminative Analysis Dictionary Learning”, AAAI Conference on Artificial Intelligence, 2016. 5. Y. Chen and J. Su, “Sparse embedded dictionary learning on face recognition”, Pattern Recognition, 2017. 6. J. Hu and Y.-P. Tan, “Nonlinear dictionary learning with application to image classification”, Pattern Recognition (in Press).
  • 13. !13 Method YaleB AR faces DBDL 97.2 97.4 MTDL 97.0 97.1 DADL 97.7 98.7 SEDL 96.6 94.2 NDL 91.8 92.1 LCDL 92.7 94.6 LCTL 97.8 98.8 K-LCTL 98.4 99.2 Classification Results Classification Accuracy(%age)
  • 14. Outline ●Supervised TL ●Unsupervised DTL ● Jointly learned DTL ● DTL for inverse problems ●Supervised DTL ●Deep Transformed Subspace Clustering ●Convolutional TL ●Semi-coupled TL ●Future Work !14
  • 15. Deep Transform Learning(DTL) !15 Basic Idea : Repeat transforms to form a deeper architecture To learn N-levels of transform, the model is TN ϕ...(T2 ϕ(T1 X )) = Z All layers are learned jointly
  • 16. Jointly Learned Deep Transform Learning !16 min T1,T2 ,Z1,Z ||T2Z1 − Z ||F 2 +λ (µ ||Ti ||F 2 −logdetTi ) i=1 2 ∑ + µ ||T1X −φ−1 Z1( )||F 2 ● Formulation for two layer network ● Variable splitting ● All coefficients and transforms are learned in one loop min T1,T2 ,Z ||T2 (φ(T1X))− Z ||F 2 +λ (µ ||Ti ||F 2 −logdetTi ) i=1 2 ∑ Z1 = φ(T1X) T2Z1 = Z
  • 17. Jointly Learned Deep Transform Learning ● S1 : ● S2 : ● S3 : ● S4 : !17 min T1,T2 ,Z1,Z ||T2Z1 − Z ||F 2 +λ (µ ||Ti ||F 2 −logdetTi ) i=1 2 ∑ + µ ||T1X −φ−1 Z1( )||F 2 min T2 ||T2Z1 − Z ||F 2 +λ(||T2 ||F 2 −logdetT2 ) min T1 µ ||T1X −φ−1 (Z1)||F 2 +λ(||T1 ||F 2 −logdetT1) min Z ||T2Z1 − Z ||F 2 ⇒ T2Z1 = Z min Z1 ||T2Z1 − Z ||F 2 +µ ||φ(T1X)− Z1 ||F 2
  • 18. Classification Results: Joint DTL !18 Classification Accuracy with SVM Method YALEB AR Faces CSSAE7 85.21 82.22 CSDBN8 84.97 82.11 DDL9 92.66 93.35 Proposed 1-layer 95.11 94.98 Proposed 2-layers 97.41 95.87 Proposed 3-layers 97.67 96.80 Proposed 4-layers 96.36 96.24 7. A. Sankaran, M. Vatsa, R. Singh, and A. Majumdar, “Group sparse autoencoder,” Image and Vision Computing, 2017. 8. A. Sankaran, G. Goswami, M. Vatsa, R. Singh, and A. Majumdar, “Class sparsity signature based restricted boltzmann machine,” Pattern Recognition, 2017. 9. V. Singal and A. Majumdar, "Majorization Minimization Technique for Optimally Solving Deep Dictionary Learning", Neural Processing Letters, doi:10.1007/s11063-017-9603-9, 2017.
  • 19. Clustering Results !19 K-Means: YaleB Method HOG DSIFT NMI ARI F-score NMI ARI F-score SAE10 93.43 82.57 83.07 87.54 75.82 76.50 DSC11 96.91 90.25 89.46 90.85 83.00 83.45 DDL 96.82 88.97 89.13 90.20 81.83 83.42 Joint DTL 98.93 93.43 92.06 93.26 85.62 85.86 10. S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang, “Single sample face recognition via learning deep supervised autoencoders,” IEEE Transactions on Information Forensics and Security, 2015. 11. X. Peng, J. Feng, S. Xiao, J. Lu, Z. Yi, and S. Yan, “Deep sparse subspace clustering,” arXiv preprint arXiv:1709.08374, 2017.
  • 20. Outline ●Supervised TL ●Unsupervised DTL ● Jointly learned DTL ● DTL for inverse problems ●Supervised DTL ●Deep Transformed Subspace Clustering ●Convolutional TL ●Semi-coupled TL ●Future Work !20
  • 21. What is Inverse Problem? Inverse problem is given by equation: • Operator A defines the problem • Denoising - identity • Super-resolution - subsampling • Deblurring - convolution • Reconstruction - projection !21 y = Ax +η
  • 22. Sparsity based Solution • Exploits the sparsity of the image in some domain. • Assume that sparsifying basis is known (DCT, wavelet etc.). • where is sparse representation and is fixed basis. • Are fixed basis the best possible option? φ !22 α y = Ax +η = Aφα +η
  • 23. Adaptive Learning based Solution !23 Transform learning learns basis adaptively from the image patches Z =[z1 |z2 |...|zK ] Data consistency Transform learning Pi x : ith patch of the image min x,T ,Z || y − Ax ||2 2 +λ( ||TPi x − zi ||2 2 i ∑ + µ(||T ||F 2 −logdetT +γ || zi ||0 )
  • 24. Proposed DTL Inversion !24 DTL12 learns multiple levels of transforms. The problem is formulated as Data consistency Deep transform learning Z =[z1 |z2 |...|zK ]Pi x : ith patch of the image min x,T1,T2 ,T3,Z || y − Ax ||2 2 +λ( ||T3 i ∑ T2 T1 Pi x − z ||F 2 +µ (||Ti ||F 2 −logdetTi ) j=1 3 ∑ +γ || zi ||1 ) 12. J. Maggu and A. Majumdar ,”Transductive Inversion via Deep Transform Learning”, Signal Processing(submitted). T1Pi x > 0 T2T1Pi x > 0
  • 25. Deblurring Results !25 Images Blurry RCSR13 GBD14 DeblurGAN15 Proposed Baby 0.78 0.76 0.85 0.86 0.89 Bird 0.76 0.74 0.83 0.84 0.85 Butterfly 0.48 0.47 0.62 0.63 0.65 Head 0.66 0.65 0.72 0.84 0.84 Woman 0.73 0.71 0.80 0.80 0.82 Comparative Debluring Perfomance (SSIM) 13. M. Tofighi, Y. Li and V. Monga, "Blind Image Deblurring Using Row–Column Sparse Representations," IEEE Signal Processing Letters, 2018. 14. Y. Bai, G. Cheung, X. Liu and W. Gao, "Graph-Based Blind Image Deblurring From a Single Photograph," IEEE Transactions on Image Processing, 2019. 15. O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin and J. Matas, "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks," IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • 26. Deblurring Results !26 Man Left to Right: Original, Blurred image, RCSR, GBD, DeblurGAN and Proposed Original Blurred RCSR GBD DeblurGAN Proposed
  • 27. Outline ●Supervised TL ●Unsupervised DTL ●Supervised DTL ●Deep Transformed Subspace Clustering ●Convolutional TL ●Semi-coupled TL ●Future Work !27
  • 28. Supervised Deep Transform Learning Label-Consistent DTL : Multi-class classification Multi-label classification16 !28 min Ti ′s,Z ,M ||TN (φ...(T2 (φ(T1X))))− Z ||F 2 +λ (µ ||Ti ||F 2 −logdetTi ) i ∑ +η ||Q −φ(MZ)||F 2 supervision term Deep TL 16. V. Singhal, J. Maggu and A. Majumdar, “Simultaneous Detection of Multiple Appliances from Smart-meter Measurements via Multi-Label Consistent Deep Dictionary Learning and Deep Transform Learning,” IEEE Transactions on Smart Grid, 2019. .
  • 29. Multi-class Classification Results !29 Technique YaleB AR Stacked Denoising Autoencoder 42.81 37.60 Stacked Group Sparse Autoencoder 66.27 32.50 Stacked Label Consistent Autoencoder 86.22 85.21 Discriminative Deep Belief Network 60.34 38.20 LC-KSVD 90.80 87.67 DDL (unsupervised) 93.35 92.66 LC-DDL 94.57 96.50 DTL (unsupervised) 97.67 96.80 LCTL 1-layer 98.80 97.80 LCTL 2-layers 98.87 97.91 LCTL 3-layers 98.65 98.89 LCTL 4-layers 97.24 96.16 Classification Accuracy on AR and YaleB Face Recognition Datasets
  • 30. NILM as Multi-label Classification Problem !30 Aggregated load Supervised Deep transform learning A1 A2 An Appliance states
  • 31. Results on Energy Datasets !31 Dataset REDD dataset Pecan street dataset Micro-F1 Macro-F1 Energy error Micro-F1 Macro-F1 Energy error MLKNN 0.6034 0.5931 0.1067 0.6263 0.6227 0.0989 RAKEL 0.5749 0.5334 0.9948 0.6663 0.6620 0.9995 Proposed (1 layer) 0.5884 0.5838 0.0983 0.6079 0.6079 0.0236 Proposed (2 layers) 0.5905 0.5857 0.0892 0.6082 0.6089 0.0223 Proposed (3 layers) 0.6001 0.5981 0.0766 0.6104 0.6104 0.0115 Proposed (4 layers) 0.5914 0.5951 0.0827 0.6096 0.6087 0.0228 Performance on REDD and Pecan street Datasets 17. M.-L. Zhang and Z.-H. Zhou, “A k-nearest neighbor based algorithm for multi-label classification,” in Granular Computing, 2005. 18. G. Tsoumakas and I. Vlahavas, “Random k-labelsets: An ensemble method for multilabel classification,” Mach. Learn. ECML 2007. 17 18
  • 32. Outline ●Supervised TL ●Unsupervised DTL ●Supervised DTL ●Deep Transformed Subspace Clustering ●Convolutional TL ●Semi-coupled TL ●Future Work !32
  • 33. Subspace Clustering A special case of spectral clustering, where data samples from same cluster are assumed to lie in same subspace. • Each data point expressed as a linear combination of others: ! with ! the i-th sample, ! gathers all the other samples column-wise, and ! states for the corresponding linear weight vector. • An affinity matrix ! is computed from the ! to quantify the similarity (inverse distance) between the samples. • The clusters are segmented by applying a cut technique (eg, N-Cut). (∀i ∈{1,...,n}) xi = Xic ci xi ∈!m Xic ∈!m×n−1 ci ∈!n−1 A ∈!n×n (ci )1≤i≤n !33
  • 34. Subspace Clustering !34 Illustration of the subspace clustering16 framework based on sparse and low-rank representation approaches for building the affinity matrix 19. A. Sobral, “Robust Low-rank and Sparse Decomposition for Moving Object Detection: From Matrices to Tensors,” 10.13140/RG. 2.2.33578.82884.
  • 35. Transformed Subspace Clustering !35 Illustration of the transformed subspace clustering framework based on sparse and low-rank representation approaches for building the affinity matrix On transformed coefficient space Joint solution
  • 36. Deep Transformed Subspace Clustering ● Learn the linear weight vector on the transformed coefficient space. ● Transformed locally linear manifold clustering22: ● Transformed sparse subspace clustering20,21: ● Transformed low rank subspace clustering20,21: R(C) = 0 R(C) =|| C ||1 R(C) =|| C ||* !36 min T3,T2 ,T1,Z,C ||T3T2T1X − Z ||F 2 +λ (||Ti ||F 2 i=1 3 ∑ − logdetTi )+γ || zi − Zic ci ||2 2 +R(C) i ∑ 20. J. Maggu, A. Majumdar and E. Chozenoux, “Transformed Subspace Clustering,” IEEE Transactions on Knowledge and Data Engineering (accepted). 21. J. Maggu, A. Majumdar, E. Chozenoux and G. Chierchia , “Deeply Transformed Subspace Clustering,” Signal Processing (major revision). 22. J. Maggu, A. Majumdar, and E. Chozenoux , “Transformed Locally Linear Manifold Clustering,” EUSIPCO 2018. clustering termdeep transform
Experimental Results: EYALEB !37
Comparison with benchmarks on EYALEB

Metric      DSC23   DKM24   DMF25   DTLLMC   DTSSC
Accuracy    88.00   91.00   89.00   93.13    99.26
NMI         0.90    0.92    0.90    0.92     0.95
ARI         0.83    0.90    0.83    0.91     0.97
Precision   0.79    0.91    0.80    0.94     0.99
F-Score     0.83    0.90    0.84    0.94     0.97

23. X. Peng, S. Xiao, J. Feng, W. Y. Yau and Z. Yi, "Deep Subspace Clustering with Sparsity Prior," IJCAI, 2016.
24. B. Yang, X. Fu, N. D. Sidiropoulos and M. Hong, "Towards k-means-friendly spaces: Simultaneous deep learning and clustering," ICML, 2017.
25. G. Trigeorgis, K. Bousmalis, S. Zafeiriou and B. W. Schuller, "A Deep Matrix Factorization Method for Learning Attribute Representations," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
Experimental Results: EYALEB !38
Effect of depth: DTSSC on EYALEB

Metric      1 layer   2 layers   3 layers
Accuracy    99.22     99.23      99.26
NMI         0.9448    0.9451     0.9476
ARI         0.9656    0.9663     0.9666
Precision   0.9887    0.9900     0.9912
F-Score     0.9567    0.9610     0.9667

The joint formulation reduces the chance that noisy features mask relevant clusters, and removes the need for a preliminary feature-extraction step.
Outline !39
●Supervised TL
●Unsupervised DTL
●Supervised DTL
●Deep Transformed Subspace Clustering
●Convolutional TL
●Semi-coupled TL
●Conclusion and Future Work
Convolutional Transform Learning !40
● In standard TL, a dense basis is learnt.
● Proposal: learn a set of independent filters that are convolved with images to produce representations.
● Motivation: the pivotal connection between CNNs and CTL, a connection that remains unexplored.
● Research gaps being addressed:
 ● Unlike CNNs, CTL is unsupervised.
 ● The learnt filters are guaranteed to be mutually distinct.
 ● CNNs have so far been analysed via convolutional sparse coding.
Convolutional Transform Learning !41
● Input: a dataset {x^(m)}_{1≤m≤M} with M entries in ℝ^N.
● Proposed model:
 ● T = [t_1 | … | t_K] ∈ ℝ^{K×K}: convolutive transform gathering a set of K kernels;
 ● for every m ∈ {1,…,M}, a Toeplitz matrix χ^(m) ∈ ℝ^{N×K} built from x^(m) is such that χ^(m) T = [t_1 ∗ x^(m) | … | t_K ∗ x^(m)] ≈ Z_m;
 ● Z_m = [z_1^(m) | … | z_K^(m)]: matrix of coefficients associated to each entry of the dataset.
● Goal: estimate the dense filters (t_k)_{1≤k≤K} and the sparse coefficients (Z_m)_{1≤m≤M} from the data {x^(m)}_{1≤m≤M}.
Convolutional Transform Learning !42
Learns convolved features in an unsupervised way26:

min_{T,Z} (1/2) Σ_{m=1}^{M} ||χ^(m) T − Z_m||_F^2 + μ||T||_F^2 − λ log|det T| + β||Z||_1 + ι_{[0,+∞[^{NM×K}}(Z)

with Z = [Z_1^T | … | Z_M^T]^T ∈ ℝ^{NM×K}.

26. J. Maggu, E. Chouzenoux, G. Chierchia and A. Majumdar, "Convolutional Transform Learning," ICONIP, pp. 162-174, 2018.
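For a fixed transform, the Z-subproblem above has an exact closed form: soft-threshold the convolved features at β and keep the nonnegative part. A minimal sketch for 1-D signals (the method operates on images; shapes and names here are illustrative assumptions):

import numpy as np

def ctl_update_Z(X, T, beta):
    # X: M x N signals (one per row); T: filters of length K, one per column.
    M, N = X.shape
    K = T.shape[1]
    Z = np.empty((M, N, K))
    for m in range(M):
        for k in range(K):
            feat = np.convolve(X[m], T[:, k], mode='same')  # t_k * x^(m)
            Z[m, :, k] = np.maximum(feat - beta, 0)  # prox of beta||.||_1
                                                     # plus nonnegativity
    return Z

The T-update is the harder step because of the log|det T| term; it does not reduce to a simple thresholding.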
ConvTL: Classification Results !43
Classification accuracy (%) with SVM

Method               YALEB   AR
Raw                  93.24   87.33
TL                   94.21   84.33
DTL                  97.67   96.80
ConvTL (1 layer)     97.38   88.87
DConvTL (2 layers)   97.00   92.22
DConvTL (3 layers)   98.00   97.67
DConvTL (4 layers)   94.44   82.21
CNN                  98.60   95.50
Kernels in CNN and CTL !44
The kernels learnt by CNN and by CTL are visually similar, suggesting a close relationship between the two.
[Figure: kernels from CNN (left) and kernels from CTL (right).]
Outline !45
●Supervised TL
●Unsupervised DTL
●Supervised DTL
●Deep Transformed Subspace Clustering
●Convolutional TL
●Semi-coupled TL
●Future Work
Semi-Coupled Transform Learning !46
[Figure: data X1 and X2 from two domains (e.g., LR and HR image for super-resolution, photo and sketch, source- and target-view action) are projected by transforms T1 and T2 onto coefficients Z1 and Z2 in a common feature space, from which synthesis in the other domain is possible.]
Semi-Coupled Transform Learning !47
• Comparison of heterogeneous samples
• Data can come from different sources
• E.g., face sketch and photo for matching
[Figure: X1 passes through T1 to give Z1; X2 passes through T2 to give Z2; a coupling map M links Z1 and Z2.]
Semi-Coupled TL !48
● TL network for source X1: T1 X1 = Z1
● TL network for target X2: T2 X2 = Z2
● Coupling map: Z2 = M Z1
Source-side update:
min_{T1,Z1} ||T1 X1 − Z1||_F^2 + η||Z1||_1 + λ(ε||T1||_F^2 − log det T1)
Semi-Coupled TL !49
● TL network for source X1: T1 X1 = Z1
● TL network for target X2: T2 X2 = Z2
● Coupling map: Z2 = M Z1
Target-side update:
min_{T2,Z2} ||T2 X2 − Z2||_F^2 + η||Z2||_1 + λ(ε||T2||_F^2 − log det T2)
Semi-Coupled TL !50
● TL network for source X1: T1 X1 = Z1
● TL network for target X2: T2 X2 = Z2
● Coupling map: Z2 = M Z1
Coupling update:
min_M ||Z2 − M Z1||_F^2
Problem Formulation !51

min_{T1,T2,Z1,Z2,M} ||T1 X1 − Z1||_F^2 + ||T2 X2 − Z2||_F^2 + μ||Z2 − M Z1||_F^2 + η(||Z1||_1 + ||Z2||_1) + λ(ε||T1||_F^2 + ε||T2||_F^2 − log det T1 − log det T2)

27. J. Maggu and A. Majumdar, "Semi-Coupled Transform Learning," ICONIP, pp. 141-150, 2018.
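A minimal sketch of the alternating scheme suggested by the three subproblems above, for square transforms: the transform step uses the standard closed-form transform update, the coefficient steps are soft-thresholdings, and the coupling map is a least-squares fit (the coupling weight μ enters only through the M step here). All parameter values and the iteration count are illustrative assumptions.

import numpy as np

def update_transform(X, Z, lam, eps):
    # Closed-form minimiser of ||TX - Z||_F^2 + lam*(eps*||T||_F^2 - log det T)
    m = X.shape[0]
    L = np.linalg.cholesky(X @ X.T + lam * eps * np.eye(m))
    Linv = np.linalg.inv(L)
    U, s, Vt = np.linalg.svd(Linv @ X @ Z.T)
    return Vt.T @ np.diag(0.5 * (s + np.sqrt(s**2 + 2 * lam))) @ U.T @ Linv

def soft(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0)

def semi_coupled_tl(X1, X2, lam=0.1, eps=1.0, eta=0.1, iters=20):
    Z1, Z2 = X1.copy(), X2.copy()      # crude initialisation
    for _ in range(iters):
        T1 = update_transform(X1, Z1, lam, eps)
        T2 = update_transform(X2, Z2, lam, eps)
        Z1 = soft(T1 @ X1, eta / 2)    # argmin ||T1 X1 - Z1||^2 + eta||Z1||_1
        Z2 = soft(T2 @ X2, eta / 2)
        M = Z2 @ np.linalg.pinv(Z1)    # argmin ||Z2 - M Z1||_F^2
    return T1, T2, Z1, Z2, M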
Image Super-Resolution Results !52
[Figure: visual comparison of the original image with the coupled DL and semi-coupled TL reconstructions.]
Image Super-Resolution Results !53
PSNR (dB) for super-resolution

              Lena    Barbara   Pepper   Cameraman
Color
  CDL         30.79   28.21     29.76    27.86
  Proposed    33.03   30.28     31.81    30.14
Gray scale
  CDL         31.27   28.98     30.46    28.70
  Proposed    34.55   31.17     32.68    30.85
Cross-Lingual Document Retrieval Results !54
Comparable document retrieval

              Europarl              Wikipedia
Algorithm     Accuracy   MRR        Accuracy   MRR
OPCA28        97.42      0.9846     72.55      0.7734
CPLSA28       97.16      0.9782     45.79      0.5130
CDL29         98.12      0.9839     72.79      28.70
Proposed      99.54      0.9896     78.68      0.8002

OPCA: Oriented Principal Component Analysis; CPLSA: Coupled Probabilistic Latent Semantic Analysis; CDL: Coupled Dictionary Learning; MRR: Mean Reciprocal Rank.

28. J. C. Platt and K. Toutanova, Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2011.
29. R. Mehrotra, D. Chu, S. A. Haider and I. A. Kakadiaris, "Towards Learning Coupled Representations for Cross-Lingual Information Retrieval."
Outline !55
●Supervised TL
●Unsupervised DTL
●Supervised DTL
●Deep Transformed Subspace Clustering
●Convolutional TL
●Semi-coupled TL
●Future Work
 ● Deeply Coupled TL
 ● Deep Transform Information Fusion Network
Deeply-Coupled Transform Learning !56
[Figure: same setting as semi-coupled TL, but data X1 and X2 are projected onto the common feature space through deep transforms T1 and T2.]
Deeply Coupled Transform Learning !57
• Comparison of heterogeneous samples
• Data can come from different sources
• E.g., face sketch and photo for matching
[Figure: X1 passes through T11 and T12 to give Z1; X2 passes through T21 and T22 to give Z2; a coupling map M links Z1 and Z2.]
Deep Transform Information Fusion Network !59
● The network learns whether the two inputs (images) presented are related or not.
● E.g., a verification task.
[Figure: Architecture 1]
Deep Transform Information Fusion Network !60
● The network learns whether the two inputs (images) presented are related or not.
● E.g., a verification task.
[Figure: Architecture 2]
Deep Transform Information Fusion Network !61
● The network learns whether the two inputs (images) presented are related or not.
● E.g., a verification task.
[Figure: Architecture 3]
Publications (Journals) !62
1. J. Maggu, A. Majumdar and E. Chouzenoux, "Transformed Subspace Clustering", IEEE Transactions on Knowledge and Data Engineering (accepted).
2. J. Maggu, H. Agarwal and A. Majumdar, "Label Consistent Transform Learning for Hyperspectral Image Classification", IEEE Geoscience and Remote Sensing Letters, Vol. 16 (9), pp. 1502-1506, 2019.
3. V. Singhal, J. Maggu and A. Majumdar, "Simultaneous Detection of Multiple Appliances from Smart-meter Measurements via Multi-Label Consistent Deep Dictionary Learning and Deep Transform Learning", IEEE Transactions on Smart Grid, Vol. 10 (3), pp. 2969-2978, 2019.
4. J. Maggu, P. Singh and A. Majumdar, "Multi-echo Reconstruction from Partial K-space Scans via Adaptively Learnt Basis", Magnetic Resonance Imaging, Vol. 45, pp. 105-112, 2018.
5. J. Maggu and A. Majumdar, "Kernel Transform Learning", Pattern Recognition Letters, Vol. 117, pp. 117-122, 2017.
6. J. Maggu, A. Majumdar, E. Chouzenoux and G. Chierchia, "Deeply Transformed Subspace Clustering", Signal Processing (major revision).
7. J. Maggu and A. Majumdar, "Dynamic MRI Reconstruction with Deep Transform Learning Prior", Magnetic Resonance Imaging (major revision).
8. J. Maggu and A. Majumdar, "Transductive Inversion via Deep Transform Learning", Signal Processing (submitted).
Publications (Conferences) !63
1. J. Maggu and A. Majumdar, "Supervised Kernel Transform Learning", IEEE IJCNN, 2019.
2. J. Maggu, E. Chouzenoux, G. Chierchia and A. Majumdar, "Convolutional Transform Learning", ICONIP, pp. 162-174, 2018.
3. J. Maggu and A. Majumdar, "Semi-Coupled Transform Learning", ICONIP, pp. 141-150, 2018.
4. J. Maggu, A. Majumdar and E. Chouzenoux, "Transformed Locally Linear Manifold Clustering", EUSIPCO, pp. 1057-1061, 2018.
5. J. Maggu and A. Majumdar, "Unsupervised Deep Transform Learning", IEEE ICASSP, pp. 6782-6786, 2018.
6. J. Maggu, R. Hussein, A. Majumdar and R. Ward, "Impulse Denoising via Transform Learning", IEEE GlobalSIP, pp. 1250-1254, 2017.
7. J. Maggu and A. Majumdar, "Greedy Deep Transform Learning", IEEE ICIP, pp. 1822-1826, 2017.
8. J. Maggu and A. Majumdar, "Robust Transform Learning", IEEE ICASSP, pp. 1467-1471, 2017.
9. J. Maggu and A. Majumdar, "Alternate Formulation for Transform Learning", ICVGIP, pp. 501-508, 2016.