This document summarizes a dissertation on developing new priors and algorithms for signal recovery problems solved via convex optimization. Chapter 4 proposes a blockwise low-rank prior called the Block Nuclear Norm (BNN) to better model texture patterns in images. BNN represents textures as locally low-rank blocks under different shears. Chapter 5 introduces the Local Color Nuclear Norm (LCNN) prior to promote the color-line property and reduce color artifacts in restored images. Chapter 6 develops a hierarchical convex optimization algorithm using primal-dual splitting to solve problems with non-unique solutions and non-strictly convex objectives.
Ph.D. Thesis Presentation: A Study of Priors and Algorithms for Signal Recovery by Convex Optimization Techniques
1. A Study of Priors and Algorithms for Signal Recovery by Convex Optimization Techniques
Shunsuke Ono
Yamada Lab., Dept. of Communications and Integrated Systems
Tokyo Institute of Technology
2014/06/12
3. Signal Recovery Problem
Signal recovery is a fundamental problem in signal processing:
• signal reconstruction
• image restoration
• compressed sensing
• tensor completion
• ...
Signal recovery problems are inverse problems of the form y = Φx + n, where y is the observation, x is the unknown signal, Φ models a linear degradation, and n is noise contamination.
Goal: estimate x from y and Φ.
How do we resolve this ill-posed/ill-conditioned problem?
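As a concrete toy instance of this observation model (the sizes and the random degradation matrix below are illustrative assumptions, not taken from the thesis), a sketch in NumPy:

```python
import numpy as np

# Toy instance of the observation model: observation = (linear degradation of
# the unknown signal) + noise. All sizes and the random degradation matrix
# are illustrative assumptions.
rng = np.random.default_rng(0)
n_obs, n_sig = 50, 100                      # fewer observations than unknowns
Phi = rng.standard_normal((n_obs, n_sig))   # linear degradation operator
x_true = np.zeros(n_sig)
x_true[rng.choice(n_sig, 5, replace=False)] = 1.0   # a sparse unknown signal
noise = 0.01 * rng.standard_normal(n_obs)   # noise contamination
y = Phi @ x_true + noise                    # observation

# With n_obs < n_sig the system is underdetermined: infinitely many signals
# are consistent with y, which is exactly why a prior is needed.
print(Phi.shape, y.shape)
```

The underdetermined shape (50 equations, 100 unknowns) is what makes the problem ill-posed in the sense of the slide.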
4. Prior and Convex Optimization
Some a priori information on the signal of interest, e.g.,
• sparsity
• smoothness
• low-rankness
should be taken into consideration.
The a priori information is encoded as a convex function =: prior, and the desired signal is estimated by minimizing a convex function over a convex set.
• l1-norm [Donoho+ '03; Candes+ '06]
• total variation (TV) [Rudin+ '92; Chambolle '04]
• nuclear norm [Fazel '02; Recht+ '11]
A powerful approach: convex optimization [see, e.g., 1-3].
Advantages: 1. local optimality = global optimality; 2. flexible framework.
1. D. P. Palomar and Y. C. Eldar, Eds., Convex Optimization in Signal Processing and Communications, Cambridge University Press, 2009.
2. J.-L. Starck et al., Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge University Press, 2010.
3. H. H. Bauschke et al., Eds., Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer-Verlag, 2011.
5. Optimization Algorithms for Signal Recovery
Optimization algorithms for signal recovery must deal with
• useful priors = nonsmooth convex functions
• problem scale = often more than 10^4 variables
Proximal splitting methods [e.g., Gabay+ '76; Lions+ '79; Combettes+ '05; Condat '13] meet these demands:
• first-order (no Hessian required)
• handle nonsmooth functions
• handle multiple constraints
So we have useful priors and efficient algorithms. Why do we need more?
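A minimal sketch of one proximal splitting method, the forward-backward iteration (ISTA), applied to an l1-regularized least-squares toy problem; the problem sizes and the regularization weight are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def soft_threshold(v, t):
    # proximity operator of t*||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam, n_iter=3000):
    # Forward-backward splitting for min_x 0.5*||Phi x - y||^2 + lam*||x||_1:
    # a gradient (forward) step on the smooth term, then a prox (backward)
    # step on the nonsmooth l1 prior -- first-order, no Hessian needed.
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 60))               # underdetermined system
x_true = np.zeros(60)
x_true[:3] = [2.0, -1.5, 1.0]                     # sparse ground truth
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.05)
print(np.linalg.norm(x_hat - x_true))             # small recovery error
```

The iteration uses only matrix-vector products and an elementwise prox, which is what makes this family of methods viable at the problem scales mentioned above.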
6. Motivation & Goal
# Priors: signal-specific properties are NOT fully exploited,
=> leading to undesired results, e.g., texture degradation, color artifacts, ...
# Algorithms: existing methods CANNOT deal with sophisticated constraints,
=> handling only the intersection of projectable convex sets.
Goal: design priors and algorithms that resolve these limitations.
7. Structure of The Dissertation
Chap. 1 General Introduction
Chap. 2 Preliminaries
Chap. 3 Image restoration with component-wise use of priors
Chap. 4 Blockwise low-rank prior for cartoon-texture image decomposition and restoration
Chap. 5 Priors for color artifact reduction in image restoration
Chap. 6 A hierarchical convex optimization algorithm with primal-dual splitting
Chap. 7 An efficient algorithm for signal recovery with sophisticated data-fidelity constraints
Chap. 8 General conclusion
(Main chapters: Chaps. 3-5 on priors, Chaps. 6-7 on algorithms.)
9. Chap. 4 Blockwise low-rank prior for cartoon-texture image decomposition and restoration
11. Cartoon-Texture Decomposition Model
Assumption: an image is the sum of two components, cartoon + texture.
Optimization problem [Meyer '01; Vese+ '03; Aujol+ '05; ...; Schaeffer+ '13]: minimize a prior for each component subject to data-fidelity to the observation.
Advantages: 1. a prior suited to each component; 2. extraction of texture.
18. How To Model Texture?
Textures are globally dissimilar but locally well-patterned:
any block is approximately low-rank after a suitable shear.
19. Proposed Prior: Block Nuclear Norm (1/2)
Definition: pre-Block-nuclear-norm (pre-BNN): the sum over blocks of each block's nuclear norm multiplied by a positive weight.
Important property of pre-BNN: the pre-BNN is the tightest convex relaxation of the weighted blockwise rank.
* Generalization of [Fazel '02].
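This relaxation property echoes the classical fact, due to [Fazel '02], that the nuclear norm is the convex envelope of the rank function on the spectral-norm unit ball; a standard statement of that fact (notation assumed) is:

```latex
\|X\|_* \;=\; \sum_i \sigma_i(X)
\quad\text{is the convex envelope of}\quad
\operatorname{rank}(X)
\quad\text{on}\quad \{\,X : \|X\|_2 \le 1\,\}.
```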
20. Proposed Prior: Block Nuclear Norm (2/2)
Definition: Block Nuclear Norm (BNN): the pre-BNN composed with a periodic expansion operator (for overlapping blocks) and shear operators.
Since any block is approximately low-rank after a suitable shear, the BNN of a texture becomes small, i.e., the BNN serves as a good texture prior.
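As a hedged numerical sketch of the blockwise low-rank idea (non-overlapping 8x8 blocks, unit weights, and the absence of the shear/expansion operators are simplifications, not the thesis definition):

```python
import numpy as np

# Pre-BNN-style quantity: a weighted sum of nuclear norms of image blocks.
# Non-overlapping 8x8 blocks and unit weights are illustrative assumptions.
def block_nuclear_norm(u, block=8, weight=1.0):
    h, w = u.shape
    total = 0.0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s = np.linalg.svd(u[i:i + block, j:j + block], compute_uv=False)
            total += weight * s.sum()   # nuclear norm = sum of singular values
    return total

rng = np.random.default_rng(2)
stripes = np.tile(rng.standard_normal(16), (16, 1))   # locally rank-1 "texture"
noise = rng.standard_normal((16, 16))                 # unstructured image
# The well-patterned (locally low-rank) input typically yields a much
# smaller value than the unstructured one:
print(block_nuclear_norm(stripes), block_nuclear_norm(noise))
```

Each block of `stripes` is exactly rank 1 (identical rows), so only one singular value per block is nonzero, which is the mechanism the prior exploits.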
21. Cartoon-Texture Decomposition Using BNN
Proposed cartoon-texture decomposition model: decompose the image into cartoon and texture with the BNN as the texture prior.
• Patterns running in different directions are separately extracted: with K = 3 shear angles, the texture splits into sub-textures 1-3, one per shear angle.
• Proximal splitting methods can solve the problem after reformulation.
22. Experimental Results
CASE 1: pure decomposition, compared with a state-of-the-art decomposition method [Schaeffer & Osher 2013].
(Figure: input image and the cartoon/texture components obtained by [Schaeffer & Osher 2013] and by the proposed method.)
[Schaeffer & Osher 2013] "A low patch-rank interpretation of texture," SIAM J. Imag. Sci.
23. Experimental Results
CASE 2: blur + 20% missing pixels, compared also with [Schaeffer & Osher 2013].
[Schaeffer & Osher 2013]: PSNR 23.20, SSIM 0.6613
Proposed: PSNR 23.75, SSIM 0.6978
(Figure: observation and the restorations by [Schaeffer & Osher 2013] and by the proposed method.)
31. Mathematical Modeling of Color-Line
For each local region (e.g., block) of a color image, vectorize the R, G, and B channels of the region and arrange them into a matrix for that region.
Define such matrices for every local region of the color image.
32. Proposed Prior: Local Color Nuclear Norm
Key principle: the color-line property corresponds to low-rankness of the local-region matrices.
Exact case: the colors of a region lie exactly on a line, so its matrix satisfies rank = 1.
Local Color Nuclear Norm (LCNN): the sum of the nuclear norms of the local-region matrices over all local regions.
33. Proposed Prior: Local Color Nuclear Norm
Key principle: the color-line property implies small singular values of the local-region matrices.
Practical case: the rank is not exactly 1, but the singular values beyond the first are small.
Suppressing the LCNN promotes the color-line property.
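A hedged sketch of computing an LCNN-style value (the 8x8 region size and the one-channel-per-column matrix layout are illustrative assumptions):

```python
import numpy as np

# LCNN-style quantity: for each local region, stack the vectorized R, G, B
# channels as the columns of a (pixels x 3) matrix and sum the nuclear norms
# of these matrices over all regions. Region shape is an assumption.
def local_color_nuclear_norm(img, block=8):
    h, w, _ = img.shape
    total = 0.0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i + block, j:j + block, :].reshape(-1, 3)
            total += np.linalg.svd(patch, compute_uv=False).sum()
    return total

rng = np.random.default_rng(3)
base = rng.random((16, 16, 1))
color_line = base * np.array([0.9, 0.5, 0.2])   # every pixel on one RGB line
random_rgb = rng.random((16, 16, 3))            # arbitrary colors
# Colors on a single line give rank-1 region matrices, hence a smaller value:
print(local_color_nuclear_norm(color_line), local_color_nuclear_norm(random_rgb))
```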
34. Application to Denoising
Given a color image contaminated by impulsive noise, the proposed optimization problem combines:
• color-line prior (LCNN)
• smoothness prior (vectorial total variation [VTV])
• dynamic-range constraint
• data-fidelity term robust to impulsive noise
Proximal splitting methods are applicable after reformulation.
[VTV] Bresson et al., "Fast dual minimization of the vectorial total variation norm and applications to color image processing," Inverse Probl. Imag., 2008.
38. Solutions of Convex Optimization Problems
Solution set of a convex optimization problem:
• strictly convex objective => unique solution
• NOT strictly convex objective => solution NOT unique
Due to the non-strict convexity of the objective, the solution set may contain infinitely many solutions, and these solutions could be considerably different under another criterion.
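A tiny numerical illustration of this non-uniqueness, using the non-strictly convex objective |x1| + |x2| on the constraint x1 + x2 = 1 (a toy problem chosen purely for illustration):

```python
# Minimizing the (convex but NOT strictly convex) objective |x1| + |x2|
# subject to x1 + x2 = 1: every point (t, 1 - t) with 0 <= t <= 1 is feasible
# and attains the same optimal value 1, so the solution set is a whole segment.
def objective(x1, x2):
    return abs(x1) + abs(x2)

candidates = [(t / 10.0, 1.0 - t / 10.0) for t in range(11)]
values = [objective(*c) for c in candidates]
spread = max(values) - min(values)
print(values, spread)   # all values equal 1 up to float rounding
```

All sampled points achieve the same objective value, yet they differ greatly under another criterion (e.g., distance to the origin), which is exactly the gap a hierarchical formulation addresses.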
39. Hierarchical Convex Optimization
Ideal strategy: hierarchical convex optimization, i.e., minimize a selector (a smooth convex function) over the solution set of the original problem.
This solution set is highly involved (≠ the intersection of projectable convex sets), so proximal splitting methods cannot solve the problem directly.
Approach: via a fixed point set characterization [e.g., Yamada '01; Ogura & Yamada '03; Yamada, Yukawa, Yamagishi '11], i.e., express the solution set as the fixed point set of a computable nonexpansive mapping on a certain Hilbert space.
Definition: a mapping T is nonexpansive if ||T(x) - T(y)|| <= ||x - y|| for all x, y.
40. Hierarchical Convex Optimization
The fixed-point-set-characterized problem is solved by the Hybrid Steepest Descent Method (HSDM) [e.g., Yamada '01; Ogura & Yamada '03]: each iteration applies a nonexpansive mapping T whose fixed point set is the solution set, followed by a gradient step on the selector.
Q. What kinds of nonexpansive mappings T are available?
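A hedged toy instance of the HSDM iteration x_{n+1} = T(x_n) - lam_n * grad Theta(T(x_n)) (this update form is the standard one in the HSDM literature; the particular mapping and selector below are assumptions for illustration, not the thesis setting):

```python
import numpy as np

# HSDM toy: T is the (nonexpansive) projection onto the affine set
# {x : x1 + x2 = 1}, whose fixed point set is the whole line, and the
# selector is Theta(x) = 0.5*||x||^2. HSDM with vanishing steps lam_n
# should then pick the minimum-norm fixed point (0.5, 0.5).
def T(x):
    # projection onto {x : x1 + x2 = 1}
    return x - ((x[0] + x[1] - 1.0) / 2.0) * np.ones(2)

def grad_selector(x):
    # gradient of the selector Theta(x) = 0.5*||x||^2
    return x

x = np.array([2.0, -3.0])
for n in range(2000):
    lam = 1.0 / (n + 2)          # lam_n -> 0 with sum lam_n = infinity
    tx = T(x)
    x = tx - lam * grad_selector(tx)

print(x)   # should approach (0.5, 0.5)
```

Among the infinitely many fixed points of T, the vanishing gradient steps steer the iterates to the one minimizing the selector, which is the hierarchical optimization idea in miniature.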
41. Nonexpansive Mappings for HSDM
Definition: the proximity operator [Moreau '62] of a convex function f is prox_f(x) = argmin_y f(y) + (1/2)||x - y||^2.
Two characterizations underlying proximal splitting methods are given in [Yamada, Yukawa, Yamagishi '11]:
• Forward-Backward Splitting (FBS) method [Passty '79; Combettes+ '05]
• Douglas-Rachford Splitting (DRS) method [Lions+ '79; Combettes+ '07]
Q. Can we deal with a more flexible formulation?
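For example, for f = ||.||_1 the proximity operator prox_{gamma f} has the closed form of soft-thresholding; the brute-force grid search below checks this against the definition (the scalar inputs are arbitrary):

```python
import numpy as np

# prox_{gamma f}(x) = argmin_y f(y) + (1/(2*gamma))*||x - y||^2.
# For f = |.| the minimizer is the soft-threshold sign(x)*max(|x| - gamma, 0);
# we verify this numerically against a dense 1-D grid.
def prox_l1(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

x, gamma = 1.3, 0.5
grid = np.linspace(-3.0, 3.0, 60001)                 # step 1e-4
vals = np.abs(grid) + (grid - x) ** 2 / (2.0 * gamma)
brute = grid[np.argmin(vals)]
print(prox_l1(x, gamma), brute)   # both close to 0.8
```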
42. Nonexpansive Mappings for HSDM
Two characterizations underlying proximal splitting methods are given in [Yamada, Yukawa, Yamagishi '11]:
• Forward-Backward Splitting (FBS) method [Passty '79; Combettes+ '05]
• Douglas-Rachford Splitting (DRS) method [Lions+ '79; Combettes+ '07]
• Primal-Dual Splitting (PDS) method [Condat '13; Vu '13]
43. Contribution
Incorporate the Primal-Dual Splitting (PDS) method [Condat '13; Vu '13] into hierarchical convex optimization by HSDM:
• extract the operator-theoretic idea from [Condat '13]
• reformulate it in a certain product space
• reveal convergence properties
• modify the gradient computation
45. Outline
Reformulate in the canonical product space with the dual problem.
Extract & incorporate the fixed point set characterization from [Condat '13].
Install another inner product under which nonexpansivity holds, by [Condat '13].
Apply HSDM with the gradient computation modified w.r.t. this inner product.
46. Reformulation in The Canonical Product Space
Pair the solution set of the first stage problem (= primal problem) with the solution set of its dual problem, and work in the canonical product space of the two.
47. Incorporation of PDS Characterization
Extract the PDS fixed point characterization from [Condat '13].
48. Activation of Nonexpansivity
The PDS operator is nonexpansive NOT under the canonical inner product of the product space, BUT under another inner product [Condat '13] induced by a strongly positive bounded linear operator.
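As a hedged sketch (the symbols tau, sigma, L are assumed from the standard PDS setup and are not taken from the slides), the renormed inner product in [Condat '13]-style analyses is typically induced by a strongly positive operator V of the form:

```latex
\langle z, z' \rangle_V := \langle z, V z' \rangle,
\qquad
V = \begin{pmatrix} \tfrac{1}{\tau} I & -L^* \\ -L & \tfrac{1}{\sigma} I \end{pmatrix},
\qquad
\tau\sigma\|L\|^2 < 1 \;\Rightarrow\; V \text{ is strongly positive.}
```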
50. Convergence of HSDM with PDS
Under the stated assumptions, two convergence results are established for HSDM with the PDS-based nonexpansive mapping; both are stated in terms of the distance function.
Definition: the distance function to a set S is d(x, S) = inf{ ||x - y|| : y in S }.
51. Application to Signal Recovery
Observation model: the observation is a linearly degraded version of the unknown signal contaminated by Gaussian noise.
First stage problem: minimize a prior subject to a numerical-range constraint and a data-fidelity constraint; the objective is non-strictly convex, so the solution is not unique.
Hierarchical convex optimization problem: minimize another prior, as a selector over the first-stage solution set, to specify a better solution.
Definition: the indicator function of a set C takes the value 0 on C and +infinity outside C.
54. General Conclusion
We have developed novel priors and algorithms for signal recovery.
Priors: to model signal-specific properties.
Chap. 3 Image restoration with component-wise use of priors
Chap. 4 Blockwise low-rank prior for cartoon-texture image decomposition and restoration
Chap. 5 Priors for color artifact reduction in image restoration
Algorithms: to deal with involved constraints.
Chap. 6 A hierarchical convex optimization algorithm with primal-dual splitting
Chap. 7 An efficient algorithm for signal recovery with sophisticated data-fidelity constraints
55. Related Publications
# Journal Papers
[J1] S. Ono, T. Miyata, I. Yamada, and K. Yamaoka, "Image Recovery by Decomposition with Component-Wise Regularization," IEICE Trans. Fundamentals, vol. E95-A, no. 12, pp. 2470-2478, 2012. (Best Paper Award from IEICE)
[J2] S. Ono, T. Miyata, and I. Yamada, "Cartoon-Texture Image Decomposition Using Blockwise Low-Rank Texture Characterization," IEEE Trans. Image Process., vol. 23, no. 3, pp. 1028-1042, 2014.
[J3] S. Ono and I. Yamada, "Hierarchical Convex Optimization with Primal-Dual Splitting," submitted to IEEE Trans. Signal Process. (accepted conditionally in May 2014).
[J4] S. Ono and I. Yamada, "Signal Recovery Using Complicated Data-Fidelity Constraints," in preparation.
56. Related Publications
# Articles in Proceedings of International Conferences (reviewed)
[C1] S. Ono, T. Miyata, and K. Yamaoka, "Total Variation-Wavelet-Curvelet Regularized Optimization for Image Restoration," IEEE ICIP 2011.
[C2] S. Ono, T. Miyata, I. Yamada, and K. Yamaoka, "Missing Region Recovery by Promoting Blockwise Low-Rankness," IEEE ICASSP 2012.
[C3] S. Ono and I. Yamada, "A Hierarchical Convex Optimization Approach for High Fidelity Solution Selection in Image Recovery," APSIPA ASC 2012 (invited).
[C4] S. Ono and I. Yamada, "Poisson Image Restoration with Likelihood Constraint via Hybrid Steepest Descent Method," IEEE ICASSP 2013.
[C5] S. Ono, M. Yamagishi, and I. Yamada, "A Sparse System Identification by Using Adaptively-Weighted Total Variation via A Primal-Dual Splitting Approach," IEEE ICASSP 2013.
[C6] S. Ono and I. Yamada, "A Convex Regularizer for Reducing Color Artifact in Color Image Recovery," IEEE Conf. CVPR 2013.
[C7] I. Yamada and S. Ono, "Signal Recovery by Minimizing The Moreau Envelope over The Fixed Point Set of Nonexpansive Mappings," EUSIPCO 2013 (invited).
[C8] S. Ono and I. Yamada, "Second-Order Total Generalized Variation Constraint," IEEE ICASSP 2014.
[C9] S. Ono and I. Yamada, "Decorrelated Vectorial Total Variation," IEEE Conf. CVPR 2014 (to appear).
57. Other Publications
# Journal Papers
[J5] S. Ono, T. Miyata, and Y. Sakai, "Improvement of Colorization Based Coding by Using Redundancy of The Color Assignment Information and Correct Color Component," IEICE Trans. Information and Systems, vol. J93-D, no. 9, pp. 1638-1641, 2010 (in Japanese).
[J6] H. Kuroda, S. Ono, M. Yamagishi, and I. Yamada, "Exploiting Group Sparsity in Nonlinear Acoustic Echo Cancellation by Adaptive Proximal Forward-Backward Splitting," IEICE Trans. Fundamentals, vol. E96-A, no. 10, pp. 1918-1927, 2013.
[J7] T. Baba, R. Matsuoka, S. Ono, K. Shirai, and M. Okuda, "Image Composition Using A Pair of Flash/No-Flash Images by Convex Optimization," IEICE Trans. Information and Systems, 2014 (in Japanese, to appear).
58. Other Publications
# Articles in Proceedings of International Conferences (reviewed)
[C10] S. Ono, T. Miyata, and Y. Sakai, "Colorization-Based Coding by Focusing on Characteristics of Colorization Bases," PCS 2010.
[C11] M. Yamagishi, S. Ono, and I. Yamada, "Two Variants of Alternating Direction Method of Multipliers without Inner Iterations and Their Application to Image Super-Resolution," IEEE ICASSP 2012.
[C12] S. Ono and I. Yamada, "Optimized JPEG Image Decompression with Super-Resolution Interpolation Using Multi-Order Total Variation," IEEE ICIP 2013 (top 10% of all accepted papers).
[C13] K. Toyokawa, S. Ono, M. Yamagishi, and I. Yamada, "Detecting Edges of Reflections from a Single Image via Convex Optimization," IEEE ICASSP 2014.
[C14] T. Baba, R. Matsuoka, S. Ono, K. Shirai, and M. Okuda, "Flash/No-flash Image Integration Using Convex Optimization," IEEE ICASSP 2014.
* Many other articles in proceedings of domestic conferences.