Model Selection with Piecewise Regular Gauges
Gabriel Peyré (www.numerical-tours.com)
Joint work with: Samuel Vaiter, Charles Deledalle, Jalal Fadili, Joseph Salmon
Overview
• Inverse Problems
• Gauge Decomposition and Model Selection
• L2 Stability Performances
• Model Stability Performances
Inverse Problems
Recovering x0 ∈ R^N from noisy observations y = Φ x0 + w ∈ R^P.
Examples: inpainting, super-resolution, compressed sensing.
(figures: examples of an original image x0 and its degraded observations)
Estimators
Observations: y = Φ x0 + w ∈ R^P.
Regularized inversion:
    x(y) ∈ argmin_{x ∈ R^N}  (1/2) ||y − Φ x||^2 + λ J(x)
where the first term is the data fidelity and J is the regularity prior.
Goal: performance analysis, i.e. criteria on (x0, ||w||, λ) that ensure
  L2 error stability: ||x(y) − x0|| = O(||w||),
  and stability of the promoted subspace ("model").
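To make the estimator concrete, here is a minimal numerical sketch (not from the slides) of the case J = ||·||_1, solved by proximal gradient descent (ISTA); the random operator Phi, the noise level and the choice lambda ~ ||w|| are illustrative assumptions.

# Minimal sketch: solving x(y) in argmin_x 1/2 ||y - Phi x||^2 + lam ||x||_1 with ISTA.
import numpy as np

def ista_l1(Phi, y, lam, n_iter=500):
    """Proximal gradient descent for 1/2 ||y - Phi x||^2 + lam ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)         # gradient of the data fidelity
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)   # soft thresholding (prox of the l1 norm)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, P, s = 100, 40, 5                     # illustrative dimensions and sparsity
    Phi = rng.standard_normal((P, N)) / np.sqrt(P)
    x0 = np.zeros(N); x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
    w = 0.01 * rng.standard_normal(P)
    y = Phi @ x0 + w
    lam = 0.1 * np.linalg.norm(w)            # lambda ~ ||w||, as in the stability results below
    x_hat = ista_l1(Phi, y, lam)
    print("||x(y) - x0|| =", np.linalg.norm(x_hat - x0))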
Overview
• Inverse Problems
• Gauge Decomposition and Model Selection
• L2 Stability Performances
• Model Stability Performances
Union of Linear Models for Data Processing
Union of models: linear spaces T ∈ 𝒯.
  Synthesis sparsity (figure: sparse coefficients x and the corresponding image, with model subspace T).
  Structured sparsity (figure: block-sparse coefficients and image).
  Analysis sparsity (figure: image x and its gradient D^* x, for an analysis operator D).
  Low-rank, e.g. multi-spectral imaging: x_{i,·} = Σ_{j=1}^{r} A_{i,j} S_{j,·} (figure: source rows S_{1,·}, S_{2,·}, S_{3,·}).
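A tiny sketch (illustrative dimensions, not from the slides) of the low-rank model behind multi-spectral imaging: stacking the channels x_{i,·} = Σ_j A_{i,j} S_{j,·} gives a data matrix x = A S of rank at most r.

import numpy as np

rng = np.random.default_rng(4)
n_channels, n_pixels, r = 6, 100, 3
A = rng.standard_normal((n_channels, r))      # mixing weights A_{i,j}
S = rng.standard_normal((r, n_pixels))        # source rows S_{j,.}
x = A @ S                                     # x_{i,.} = sum_j A_{i,j} S_{j,.}
print(np.linalg.matrix_rank(x))               # at most r = 3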
Gauges for Union of Linear Models
Gauge: J : R^N → R^+ convex, with ∀ α ∈ R^+, J(α x) = α J(x).
Equivalently, J(x) = γ_C(x) = inf {ρ > 0 : x ∈ ρ C}, where C = {x : J(x) ⩽ 1} (assuming 0 ∈ C).
Piecewise regular ball C ⟺ union of linear models (T)_{T ∈ 𝒯}.
(figure: unit ball C = {J(x) ⩽ 1}, a point x and its model subspace T)
  J(x) = ||x||_1: T = sparse vectors (figure: x0 and its model T0).
  J(x) = |x_1| + ||x_{2,3}||: T = block sparse vectors (figure: x0, x_T and T0).
  J(x) = ||x||_*: T = low-rank matrices.
  J(x) = ||x||_∞: T = anti-sparse vectors.
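A short numerical sketch (the example vectors and matrix are arbitrary, not from the slides) of the gauges listed above and of the defining positive homogeneity property; the model subspace each one promotes is recalled in the comments.

import numpy as np

x = np.array([1.5, 0.0, -2.0, 0.0])              # a sparse vector
X = np.outer([1., 2.], [3., -1., 0.5])           # a rank-1 matrix

l1   = np.abs(x).sum()                           # J(x) = ||x||_1   -> T = sparse vectors
linf = np.abs(x).max()                           # J(x) = ||x||_inf -> T = anti-sparse vectors
grp  = abs(x[0]) + np.linalg.norm(x[1:3])        # J(x) = |x_1| + ||x_{2,3}|| -> block sparsity
nuc  = np.linalg.svd(X, compute_uv=False).sum()  # J(X) = ||X||_*   -> T = low-rank matrices

# positive homogeneity J(a x) = a J(x) for a >= 0, the defining property of a gauge
a = 3.0
assert np.isclose(np.abs(a * x).sum(), a * l1)

print(l1, linf, grp, nuc)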
Subdifferentials and Models
∂J(x) = {η ∈ R^N : ∀ y, J(y) ⩾ J(x) + ⟨η, y − x⟩}
(figure: ∂J(x) at a singular point x of J, and at a regular point where it reduces to the gradient)
Definition:  T_x = VectHull(∂J(x))^⊥  and  e_x = Proj_{T_x}(∂J(x)).
Example: J(x) = ||x||_1 with I = supp(x) = {i : x_i ≠ 0}:
  ∂||x||_1 = {η : η_I = sign(x_I), ∀ j ∉ I, |η_j| ⩽ 1},
  T_x = {η : supp(η) ⊂ I},   e_x = sign(x).
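A minimal sketch of the ℓ1 example above: computing the support I, the orthogonal projector onto the model subspace T_x, and the vector e_x = sign(x). The helper name l1_model and the test vector are illustrative choices.

import numpy as np

def l1_model(x, tol=1e-12):
    I = np.flatnonzero(np.abs(x) > tol)      # support I = {i : x_i != 0}
    P_T = np.zeros((x.size, x.size))         # orthogonal projector onto T_x = {supp ⊂ I}
    P_T[I, I] = 1.0
    e_x = np.sign(x)                         # e_x = Proj_{T_x}(∂J(x)) = sign(x)
    return I, P_T, e_x

x = np.array([0.0, 2.0, 0.0, -1.0])
I, P_T, e_x = l1_model(x)
print(I)                                     # [1 3]
print(P_T @ np.array([5., 6., 7., 8.]))      # projection keeps only the coordinates in I
print(e_x)                                   # [ 0.  1.  0. -1.]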
Examples
ℓ1 sparsity: J(x) = ||x||_1; e_x = sign(x); T_x = {z : supp(z) ⊂ supp(x)}.
Structured sparsity: J(x) = Σ_b ||x_b||; e_x = (N(x_b))_{b ∈ B} with N(a) = a/||a||; T_x = {z : supp(z) ⊂ supp(x)}.
Nuclear norm: J(x) = ||x||_* with SVD x = U Λ V^*; e_x = U V^*; T_x = {z : U_⊥^* z V_⊥ = 0}.
Anti-sparsity: J(x) = ||x||_∞ with I = {i : |x_i| = ||x||_∞}; e_x = |I|^{-1} sign(x); T_x = {y : y_I ∝ sign(x_I)}.
(figures: the point x0 and the subdifferential ∂J(x) for each regularizer)
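A quick numerical check (on a random rank-2 matrix, chosen for illustration) of the nuclear-norm row above: e_x = U V^* has unit operator norm and satisfies ⟨e_x, x⟩ = ||x||_*, which certifies that it belongs to ∂||x||_*.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # a rank-2 matrix
U, s, Vt = np.linalg.svd(x, full_matrices=False)
r = np.sum(s > 1e-10)
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]        # thin SVD restricted to the rank

e_x = U @ Vt
print(np.sum(s))                             # ||x||_*
print(np.sum(e_x * x))                       # <e_x, x>, equal to ||x||_*
print(np.linalg.norm(e_x, 2))                # operator norm of e_x, equal to 1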
Overview
• Inverse Problems
• Gauge Decomposition and Model Selection
• L2 Stability Performances
• Model Stability Performances
Dual Certificate and L2 Stability
Noiseless recovery:  min_{Φ x = Φ x0} J(x)   (P0)
(figure: the affine space Φ x = Φ x0, the solution x⋆, the subdifferential ∂J(x0) and a certificate η)
Dual certificates:        D = Im(Φ^*) ∩ ∂J(x0).
Tight dual certificates:  D̄ = Im(Φ^*) ∩ ri(∂J(x0)).
Proposition: ∃ η ∈ D ⟺ x0 is a solution of (P0).
Theorem [Fadili et al. 2013]: if ∃ η ∈ D̄ and ker(Φ) ∩ T_{x0} = {0},
  then for λ ∼ ||w|| one has ||x⋆ − x0|| = O(||w||).
→ The constants depend on N . . .
  [Grasmair 2012]: J(x⋆ − x0) = O(||w||).
  [Grasmair, Haltmeier, Scherzer 2010]: J = || · ||_1.
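An illustrative sketch of noiseless recovery (P0) for J = ||·||_1: the problem is solved as an equivalent linear program with SciPy on a random instance. The dimensions and the use of scipy.optimize.linprog are assumptions of this sketch, not part of the slides.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, P, s = 40, 20, 3
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N); x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

# min ||x||_1 s.t. Phi x = Phi x0, with the split x = xp - xm, xp >= 0, xm >= 0
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
b_eq = Phi @ x0
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
x_star = res.x[:N] - res.x[N:]
print("exact recovery:", np.allclose(x_star, x0, atol=1e-6))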
Overview
• Inverse Problems
• Gauge Decomposition and Model Selection
• L2 Stability Performances
• Model Stability Performances
Minimal-norm Certificate
We assume ker(Φ) ∩ T = {0} and J piecewise regular, with T = T_{x0} and e = e_{x0}.
η ∈ D ⟺ η = Φ^* q, η_T = e and J°(η) ⩽ 1.
Minimal-norm pre-certificate:  η0 = argmin_{η = Φ^* q, η_T = e} ||q||.
Proposition: One has η0 = Φ^* (Φ_T^+)^* e.
Theorem [Vaiter et al. 2013]: if η0 ∈ D̄, ||w|| = O(ν_{x0}) and λ ∼ ||w||,
  then the unique solution x⋆ of P_λ(y) for y = Φ x0 + w satisfies
  T_{x⋆} = T_{x0}  and  ||x⋆ − x0|| = O(||w||).
  [Fuchs 2004]: J = || · ||_1.
  [Bach 2008]: J = || · ||_{1,2} and J = || · ||_*.
  [Vaiter et al. 2011]: J = ||D^* · ||_1.
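A minimal sketch of the pre-certificate for J = ||·||_1, where T = {supp ⊂ I} and e = sign(x0): then η0 = Φ^* (Φ_I^+)^* sign(x0_I), and η0 ∈ D̄ amounts to the Fuchs condition ||η_{0,I^c}||_∞ < 1. The random Φ and the dimensions are illustrative.

import numpy as np

def l1_precertificate(Phi, x0, tol=1e-12):
    I = np.flatnonzero(np.abs(x0) > tol)
    Phi_I = Phi[:, I]                              # restriction of Phi to T = {supp ⊂ I}
    q0 = np.linalg.pinv(Phi_I).T @ np.sign(x0[I])  # minimal-norm q with (Phi^* q)_I = sign(x0_I)
    eta0 = Phi.T @ q0
    return eta0, I

rng = np.random.default_rng(3)
N, P, s = 50, 25, 4
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N); x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

eta0, I = l1_precertificate(Phi, x0)
Ic = np.setdiff1d(np.arange(N), I)
print("eta0_I == sign(x0_I):", np.allclose(eta0[I], np.sign(x0[I])))
print("Fuchs criterion ||eta0_Ic||_inf =", np.abs(eta0[Ic]).max(), "(< 1 gives model stability)")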
Example: 1-D Sparse Deconvolution
J(x) = ||x||_1,   Φ x = Σ_i x_i φ(· − Δ i).
Increasing Δ: reduces correlation, reduces resolution.
(figure: a sparse spike train x0 and its blurred observation Φ x0)
I = {j : x0(j) ≠ 0}.
η0 ∈ D̄(x0)  ⟺  ||η_{0,I^c}||_∞ < 1  ⟺  support recovery.
(figure: ||η_{0,I^c}||_∞ as a function of the spacing Δ, compared to the threshold 1)
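A rough numerical sketch of this example, assuming a Gaussian filter φ with unit-normalized translates (my choice; the slide does not specify the filter): it tracks the certificate criterion ||η_{0,I^c}||_∞ as the spacing Δ grows.

import numpy as np

def deconv_operator(n_grid, n_atoms, delta, width=3.0):
    t = np.arange(n_grid)[:, None]
    centers = delta * np.arange(n_atoms)[None, :]
    Phi = np.exp(-((t - centers) ** 2) / (2 * width**2))   # translates phi(. - Delta*i)
    return Phi / np.linalg.norm(Phi, axis=0)                # unit-norm atoms

def fuchs_criterion(Phi, I):
    eta0 = Phi.T @ np.linalg.pinv(Phi[:, I]).T @ np.ones(len(I))   # spikes with signs +1 here
    Ic = np.setdiff1d(np.arange(Phi.shape[1]), I)
    return np.abs(eta0[Ic]).max()

I = np.array([3, 7, 12])                                    # support of the spike train x0
for delta in [2.0, 5.0, 10.0]:
    Phi = deconv_operator(n_grid=200, n_atoms=16, delta=delta)
    print(f"Delta = {delta:5.1f}   ||eta0_Ic||_inf = {fuchs_criterion(Phi, I):.3f}")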
Example: 1-D TV Denoising
J(x) = ||∇x||_1 with (∇x)_i = x_i − x_{i−1},   Φ = Id,   I = {i : (∇x0)_i ≠ 0}.
Pre-certificate: η0 = div(α0) where ∀ j ∉ I, (Δ α0)_j = 0.
  ||α_{0,I^c}||_∞ < 1: support stability.
  ||α_{0,I^c}||_∞ = 1: only ℓ2 stability.
(figures: two signals x0 with their dual vectors α0, which stay strictly within ±1 on I^c in the first case and saturate in the second)
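A rough sketch of the TV pre-certificate above: α0 equals the jump signs on I and is discretely harmonic (hence linear) off I; the zero anchors at the domain boundary are an assumption of this sketch. The criterion ||α_{0,I^c}||_∞ separates the two regimes shown in the figures.

import numpy as np

def tv_precertificate_alpha(x0):
    g = np.diff(x0)                                  # finite differences of x0
    m = g.size
    I = np.flatnonzero(np.abs(g) > 1e-12)            # jump locations
    xp = np.concatenate(([-1], I, [m]))              # boundary anchors set to 0 (assumption)
    fp = np.concatenate(([0.0], np.sign(g[I]), [0.0]))
    alpha = np.interp(np.arange(m), xp, fp)          # linear, i.e. discretely harmonic, off I
    Ic = np.setdiff1d(np.arange(m), I)
    return alpha, (np.abs(alpha[Ic]).max() if Ic.size else 0.0)

up_down   = np.concatenate([np.zeros(10), 2 * np.ones(10), np.zeros(10)])   # jumps +, -
staircase = np.concatenate([np.zeros(10), np.ones(10), 2 * np.ones(10)])    # jumps +, +
for name, sig in [("up-down", up_down), ("staircase", staircase)]:
    _, crit = tv_precertificate_alpha(sig)
    print(f"{name:10s}  ||alpha0_Ic||_inf = {crit:.2f}")    # < 1: support stable; = 1: only l2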
Conclusion
Gauges: encode linear models as singular points.
Tight dual certificates: enable L2 stability.
Piecewise smooth gauges: enable model recovery T_{x⋆} = T_{x0}.
Open problems:
– Approximate model recovery T_{x⋆} ≈ T_{x0}.
– Infinite dimensional problems (measures, TV, etc.).