The document discusses sparse regularization for inverse problems. It describes how tasks such as denoising, inpainting, and image separation can be posed as optimization problems that combine a data-fidelity term with an $\ell^1$ sparsity prior on the coefficients. Iterative soft thresholding is presented as an algorithm for solving the noisy sparse regularization problem, and examples show how sparse wavelet regularization can outperform other regularizers, such as Sobolev, for tasks like image deblurring.
6. Inverse Problems
Forward model: $y = K f_0 + w \in \mathbb{R}^P$, with $y$ the observations, $K : \mathbb{R}^Q \to \mathbb{R}^P$ the operator, $f_0$ the (unknown) input, and $w$ the noise.
Denoising: $K = \mathrm{Id}_Q$, $P = Q$.
Inpainting: $\Omega$ the set of missing pixels, $P = Q - |\Omega|$,
$$(K f)(x) = \begin{cases} 0 & \text{if } x \in \Omega, \\ f(x) & \text{if } x \notin \Omega. \end{cases}$$
Super-resolution: $K f = (f \star k) \downarrow_s$, $P = Q/s$.
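To make these operators concrete, a minimal numpy sketch (the mask `omega`, kernel `k`, and subsampling factor `s` are illustrative assumptions, not from the slides):

```python
import numpy as np

def denoise_K(f):
    # Denoising: K is the identity, the image is observed directly.
    return f

def inpaint_K(f, omega):
    # Inpainting: omega is a boolean mask of missing pixels;
    # observed pixels pass through, missing ones are zeroed.
    g = f.copy()
    g[omega] = 0.0
    return g

def superres_K(f, k, s):
    # Super-resolution: periodic blur by k (via FFT), then
    # subsampling by a factor s in each direction.
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(k, s=f.shape)))
    return blurred[::s, ::s]
```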
13. Inverse Problem Regularization
Noisy measurements: $y = K f_0 + w$.
Prior model: $J : \mathbb{R}^Q \to \mathbb{R}$ assigns a score to images,
$$f^\star \in \underset{f \in \mathbb{R}^Q}{\operatorname{argmin}} \; \frac{1}{2}\|y - K f\|^2 + \lambda J(f)$$
(data fidelity + regularity).
Choice of $\lambda$: tradeoff between the noise level $\|w\|$ and the regularity $J(f_0)$ of $f_0$.
No noise: $\lambda \to 0^+$, minimize
$$f^\star \in \underset{f \in \mathbb{R}^Q,\; K f = y}{\operatorname{argmin}} \; J(f).$$
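As a sanity check on the role of $\lambda$ (not on the slides; assuming the simplest case $K = \mathrm{Id}$ with the quadratic prior $J(f) = \frac{1}{2}\|f\|^2$), the minimizer is explicit:
$$\nabla_f \left( \frac{1}{2}\|y - f\|^2 + \frac{\lambda}{2}\|f\|^2 \right) = 0 \;\iff\; f^\star = \frac{y}{1 + \lambda},$$
so $\lambda \to 0^+$ reproduces the data exactly, while a large $\lambda$ shrinks the solution toward the images the prior favors.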
22. Redundant Dictionaries
Dictionary $\Phi = (\varphi_m)_m \in \mathbb{R}^{Q \times N}$, $N \geq Q$.
Fourier: $\varphi_m = e^{\mathrm{i} \langle \cdot,\, \omega_m \rangle}$, indexed by the frequency $\omega_m$.
Wavelets: $m = (j, \theta, n)$ for scale, orientation, and position, $\varphi_m = \psi(2^{-j} R_\theta (x - n))$.
DCT, curvelets, bandlets, ...
Synthesis: $f = \sum_m x_m \varphi_m = \Phi x$, mapping coefficients $x \in \mathbb{R}^N$ to the image $f = \Phi x \in \mathbb{R}^Q$.
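A toy synthesis sketch, assuming an overcomplete dictionary built by concatenating spikes with an orthonormal DCT basis (so $N = 2Q \geq Q$):

```python
import numpy as np
from scipy.fft import idct

Q = 64                                             # signal dimension
spikes = np.eye(Q)                                 # Dirac atoms
dct_atoms = idct(np.eye(Q), norm='ortho', axis=0)  # orthonormal DCT atoms
Phi = np.hstack([spikes, dct_atoms])               # dictionary, Q x N with N = 2Q

# Synthesis f = Phi x: a few active coefficients build the signal.
x = np.zeros(2 * Q)
x[3], x[Q + 5] = 1.0, 0.5                          # one spike + one DCT atom
f = Phi @ x
```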
26. Sparse Priors
Ideal sparsity: for most $m$, $x_m = 0$,
$$J_0(x) = \#\{m \;:\; x_m \neq 0\}.$$
Sparse approximation: $f = \Phi x^\star$ where
$$x^\star \in \underset{x \in \mathbb{R}^N}{\operatorname{argmin}} \; \|f_0 - \Phi x\|^2 + T^2 J_0(x).$$
Orthogonal $\Phi$: $\Phi \Phi^\star = \Phi^\star \Phi = \mathrm{Id}_N$, and the solution is explicit,
$$x^\star_m = \begin{cases} \langle f_0, \varphi_m \rangle & \text{if } |\langle f_0, \varphi_m \rangle| > T, \\ 0 & \text{otherwise}, \end{cases}$$
i.e. $f = S_T(f_0)$ with $S_T$ the hard thresholding operator.
Non-orthogonal $\Phi$: NP-hard.
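In the orthogonal case the solution is explicit, as stated above; a minimal sketch for a 1-D signal, assuming the orthonormal DCT as the basis:

```python
import numpy as np
from scipy.fft import dct, idct

def hard_threshold_approx(f0, T):
    # Analysis in an orthonormal basis (DCT here): c_m = <f0, phi_m>.
    c = dct(f0, norm='ortho')
    # Hard thresholding: keep only the coefficients with |c_m| > T.
    c[np.abs(c) <= T] = 0.0
    # Synthesis: f = S_T(f0).
    return idct(c, norm='ortho')
```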
34. L1 Regularization
Coefficients, image, observations:
$$x_0 \in \mathbb{R}^N \;\longmapsto\; f_0 = \Phi x_0 \in \mathbb{R}^Q \;\longmapsto\; y = K f_0 + w \in \mathbb{R}^P.$$
Combined operator: $\tilde\Phi = K \Phi \in \mathbb{R}^{P \times N}$.
Sparse recovery: $f^\star = \Phi x^\star$ where $x^\star$ solves
$$\min_{x \in \mathbb{R}^N} \; \frac{1}{2}\|y - \tilde\Phi x\|^2 + \lambda \|x\|_1$$
(fidelity + regularization).
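A toy sketch of forming the combined matrix $\tilde\Phi = K\Phi$ (the sizes, the random stand-in dictionary, and the inpainting-style mask are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, N, P = 64, 128, 48                            # image, coefficient, observation dims
Phi = rng.standard_normal((Q, N)) / np.sqrt(Q)   # stand-in for a dictionary
keep = rng.choice(Q, size=P, replace=False)      # pixels kept by the operator K
K = np.eye(Q)[keep]                              # P x Q selection (inpainting-like)
Phi_tilde = K @ Phi                              # P x N: coefficients -> observations

x0 = np.zeros(N)
x0[[3, 77]] = 1.0
y = Phi_tilde @ x0                               # y depends linearly on x0
```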
37. Noiseless Sparse Regularization
Noiseless measurements: $y = \Phi x_0$.
$$x^\star \in \underset{\Phi x = y}{\operatorname{argmin}} \; \sum_m |x_m| \qquad \text{vs.} \qquad x^\star \in \underset{\Phi x = y}{\operatorname{argmin}} \; \sum_m |x_m|^2$$
[Figure: the $\ell^1$ ball touches the constraint set $\{\Phi x = y\}$ at a sparse point; the $\ell^2$ ball does not.]
Convex linear program.
Interior points, cf. [Chen, Donoho, Saunders] "basis pursuit".
Douglas-Rachford splitting, see [Combettes, Pesquet].
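The linear program can be sketched with a generic LP solver: write $x = u - v$ with $u, v \geq 0$, so that $\|x\|_1 = \mathbf{1}^\top u + \mathbf{1}^\top v$ (a scipy sketch, not the interior-point or splitting codes cited above):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    # min ||x||_1  s.t.  Phi x = y, via the split x = u - v with u, v >= 0.
    N = Phi.shape[1]
    c = np.ones(2 * N)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])            # Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    u, v = res.x[:N], res.x[N:]
    return u - v
```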
40. Noisy Sparse Regularization
Noisy measurements: $y = \Phi x_0 + w$.
$$x^\star \in \underset{x \in \mathbb{R}^N}{\operatorname{argmin}} \; \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$$
(data fidelity + regularization), equivalent to the constrained form
$$x^\star \in \underset{\|\Phi x - y\| \leq \varepsilon}{\operatorname{argmin}} \; \|x\|_1.$$
Algorithms:
Iterative soft thresholding
Forward-backward splitting
(see [Daubechies et al], [Pesquet et al], etc.)
Nesterov multi-step schemes.
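Iterative soft thresholding is built on the entrywise shrinkage $S_\lambda$, the proximal map of $\lambda\|\cdot\|_1$; a minimal sketch (the full iteration appears after the slides below):

```python
import numpy as np

def soft_threshold(u, lam):
    # Proximal map of lam*||.||_1: shrink every entry toward 0 by lam,
    # zeroing entries whose magnitude is below lam.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

print(soft_threshold(np.array([-2.0, 0.3, 1.5]), 0.5))  # [-1.5  0.   1. ]
```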
44. Image De-blurring
[Figure: original $f_0$; observations $y = h \star f_0 + w$; Sobolev result, SNR = 22.7 dB; sparsity result, SNR = 24.7 dB.]
Sobolev regularization:
$$f^\star = \underset{f \in \mathbb{R}^N}{\operatorname{argmin}} \; \|f \star h - y\|^2 + \lambda \|\nabla f\|^2,$$
solved explicitly in Fourier:
$$\hat f^\star(\omega) = \frac{\hat h(\omega)^* \, \hat y(\omega)}{|\hat h(\omega)|^2 + \lambda |\omega|^2}.$$
Sparsity regularization: $\Phi$ = translation invariant wavelets,
$$f^\star = \Phi x^\star \quad \text{where} \quad x^\star \in \underset{x}{\operatorname{argmin}} \; \frac{1}{2}\|h \star (\Phi x) - y\|^2 + \lambda \|x\|_1.$$
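The Sobolev filter is a one-liner in Fourier; a minimal numpy sketch (assuming a periodic blur, a kernel with nonzero mean so the denominator does not vanish at $\omega = 0$, and fftfreq as the frequency grid):

```python
import numpy as np

def sobolev_deblur(y, h, lam):
    # f_hat = conj(h_hat) * y_hat / (|h_hat|^2 + lam * |omega|^2)
    h_hat = np.fft.fft2(h, s=y.shape)
    y_hat = np.fft.fft2(y)
    w1 = np.fft.fftfreq(y.shape[0])[:, None]
    w2 = np.fft.fftfreq(y.shape[1])[None, :]
    omega2 = w1 ** 2 + w2 ** 2               # |omega|^2 on the frequency grid
    f_hat = np.conj(h_hat) * y_hat / (np.abs(h_hat) ** 2 + lam * omega2)
    return np.real(np.fft.ifft2(f_hat))
```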
56. Surrogate Functionals
Sparse regularization:
$$x^\star \in \underset{x \in \mathbb{R}^N}{\operatorname{argmin}} \; E(x) = \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1.$$
Surrogate functional, for $\tau < 1/\|\Phi^\star \Phi\|$:
$$E(x, \tilde x) = E(x) - \frac{1}{2}\|\Phi(x - \tilde x)\|^2 + \frac{1}{2\tau}\|x - \tilde x\|^2.$$
Theorem:
$$\underset{x}{\operatorname{argmin}} \; E(x, \tilde x) = S_{\lambda \tau}(u) \quad \text{where} \quad u = \tilde x - \tau \Phi^\star(\Phi \tilde x - y).$$
[Figure: $E(\cdot)$ and its majorant $E(\cdot, \tilde x)$, whose minimizer is $S_{\lambda\tau}(u)$.]
Proof: $E(x, \tilde x) \propto \frac{1}{2}\|u - x\|^2 + \lambda \tau \|x\|_1 + \text{cst}.$
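Filling in the computation behind this proof: expanding the squares and dropping every term that does not depend on $x$ gives
$$E(x, \tilde x) = \frac{1}{2\tau}\|x\|^2 - \frac{1}{\tau}\big\langle x,\; \tilde x - \tau \Phi^\star(\Phi \tilde x - y)\big\rangle + \lambda\|x\|_1 + \text{cst} = \frac{1}{2\tau}\|x - u\|^2 + \lambda\|x\|_1 + \text{cst},$$
and the minimizer of $\frac{1}{2}\|x - u\|^2 + \lambda\tau\|x\|_1$ is exactly the entrywise soft thresholding $S_{\lambda\tau}(u)$.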
59. Iterative Thresholding
Algorithm: $x^{(\ell+1)} = \operatorname{argmin}_x E(x, x^{(\ell)})$.
Initialize $x^{(0)}$ and set $\ell = 0$; iterate
$$u^{(\ell)} = x^{(\ell)} - \tau \Phi^\star(\Phi x^{(\ell)} - y), \qquad x^{(\ell+1)} = S_{\lambda \tau}(u^{(\ell)}).$$
[Figure: the iterates $x^{(0)}, x^{(1)}, x^{(2)}, \ldots$ descend the energy $E(\cdot)$.]
Remark:
$x^{(\ell)} \mapsto u^{(\ell)}$ is a gradient descent step on $\frac{1}{2}\|\Phi x - y\|^2$.
$S_{\lambda\tau}$ is the proximal step associated to $\lambda \|x\|_1$.
Theorem: if $\tau < 2/\|\Phi^\star \Phi\|$, then $x^{(\ell)} \to x^\star$.
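A compact sketch of the full loop on a toy problem (the random matrix, sizes, noise level, and $\lambda$ are illustrative assumptions):

```python
import numpy as np

def ista(Phi, y, lam, n_iter=200):
    # Iterative soft thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
    # The theorem above needs tau < 2/||Phi^* Phi||; tau = 1/||Phi||^2 satisfies it.
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        # Gradient step on the fidelity 0.5*||Phi x - y||^2 ...
        u = x - tau * Phi.T @ (Phi @ x - y)
        # ... then the proximal step S_{lam*tau} associated to lam*||.||_1.
        x = np.sign(u) * np.maximum(np.abs(u) - lam * tau, 0.0)
    return x

# Toy usage: recover a sparse x0 from noisy underdetermined measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128)) / np.sqrt(64)
x0 = np.zeros(128)
x0[[5, 40, 90]] = [2.0, -1.5, 1.0]
y = Phi @ x0 + 0.01 * rng.standard_normal(64)
x_rec = ista(Phi, y, lam=0.05)
```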