1/12
STOCHASTIC BLOCK-COORDINATE
FIXED POINT ALGORITHMS
Jean-Christophe Pesquet
Center for Visual Computing, CentraleSupélec, University Paris-Saclay
Joint work with Patrick Louis Combettes
SAMSI Workshop - March 2018
2/12
Motivation
FIXED POINT ALGORITHM
for n = 0, 1, . . .
    x_{n+1} = x_n + λ_n (T_n x_n − x_n),
where
• x_0 ∈ H, a separable real Hilbert space
• (∀n ∈ N) T_n : H → H
• (λ_n)_{n∈N} relaxation parameters in ]0, +∞[.
• widely used in optimization, game theory, inverse problems, machine learning, . . .
• convergence of (x_n)_{n∈N} to x ∈ F = ∩_{n∈N} Fix T_n, under suitable assumptions.
E. Picard (1856-1941)
In the context of high-dimensional problems, how can one limit the computational issues raised by memory requirements?
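As a concrete illustration (my own sketch, not from the slides), the relaxed iteration above can be run in a few lines, assuming a single affine operator T x = A x + b with ‖A‖ < 1, so that T is a contraction and Fix T = {(I − A)⁻¹ b}:

```python
# Relaxed fixed-point iteration x_{n+1} = x_n + lam * (T(x_n) - x_n) in R^2.
# Illustrative operator: T x = A x + b with spectral norm of A below 1.
import numpy as np

A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, -1.0])
T = lambda x: A @ x + b

x = np.zeros(2)
lam = 0.8                                  # relaxation parameter in ]0, 1]
for n in range(200):
    x = x + lam * (T(x) - x)

x_star = np.linalg.solve(np.eye(2) - A, b)  # the unique fixed point of T
print(np.allclose(x, x_star))
```

Here the iterates converge linearly to the fixed point; the slides' interest lies in the much weaker (quasinonexpansive, time-varying) conditions under which convergence still holds.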
3/12
Block-coordinate approach
x ∈ H decomposed into blocks x = (x_1, . . . , x_m), with x_i ∈ H_i for i = 1, . . . , m:
H = H_1 ⊕ · · · ⊕ H_m,
where H_1, . . . , H_m are real separable Hilbert spaces.
4/12
Block-coordinate algorithm
BLOCK-COORDINATE ALGORITHM
for n = 0, 1, . . .
    for i = 1, . . . , m
        x_{i,n+1} = x_{i,n} + ε_{i,n} λ_n (T_{i,n}(x_{1,n}, . . . , x_{m,n}) + a_{i,n} − x_{i,n}),
where
• (∀x ∈ H) T_n x = (T_{i,n} x)_{1 ≤ i ≤ m}, where, for every i ∈ {1, . . . , m}, T_{i,n} : H → H_i is measurable.
• (ε_n)_{n∈N} = ((ε_{i,n})_{1 ≤ i ≤ m})_{n∈N} identically distributed D-valued random variables, with D = {0, 1}^m \ {0}.
• λ_n ∈ ]0, 1].
• a_n = (a_{i,n})_{1 ≤ i ≤ m} H-valued random variable: possible error term.
a_n ≡ 0 and ε_n ≡ (1, . . . , 1) P-a.s. ⇔ deterministic algorithm with no error.
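A minimal numeric sketch of this iteration (my own illustration, not the authors' code), assuming a single contractive operator T x = A x + b, no error term (a_n ≡ 0), and independent Bernoulli activation of each block (resampling whenever all blocks are inactive, since D excludes the zero vector):

```python
# Stochastic block-coordinate iteration with m = 3 scalar blocks.
import numpy as np

rng = np.random.default_rng(0)
m = 3
A = 0.5 * np.eye(m)                    # T acts blockwise as a contraction
b = np.array([1.0, 2.0, 3.0])
T = lambda x: A @ x + b                # T_{i,n}(x) = i-th component of T x

x = np.zeros(m)
lam, p = 1.0, 0.7
for n in range(300):
    eps = rng.random(m) < p            # epsilon_n in {0, 1}^m
    if not eps.any():                  # D = {0,1}^m \ {0}: skip the
        continue                       # all-zero activation pattern
    Tx = T(x)
    x = np.where(eps, x + lam * (Tx - x), x)   # update active blocks only

x_star = np.linalg.solve(np.eye(m) - A, b)     # unique fixed point
print(np.allclose(x, x_star))
```

Only the active coordinates are touched at each iteration, which is the source of the memory and computation savings in the high-dimensional setting.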
5/12
Illustration of block activation strategy
Variable selection (∀n ∈ N): for i = 1, . . . , 6, block x_{i,n} is activated when ε_{i,n} = 1.
How to choose the random variable ε_n = (ε_{1,n}, . . . , ε_{6,n})? For instance:
P[ε_n = (1, 1, 0, 0, 0, 0)] = 0.1
P[ε_n = (1, 0, 1, 0, 0, 0)] = 0.2
P[ε_n = (1, 0, 0, 1, 1, 0)] = 0.2
P[ε_n = (0, 1, 1, 1, 1, 1)] = 0.5
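The activation law above can be sampled directly. The sketch below (using the slide's example values) draws ε_n from the four listed patterns and estimates the per-block activation probabilities p_i = P[ε_{i,0} = 1], which must all be positive:

```python
# Sampling epsilon_n from a categorical law over activation patterns.
import numpy as np

patterns = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 1, 1, 1, 1],
])
probs = np.array([0.1, 0.2, 0.2, 0.5])

rng = np.random.default_rng(1)
draws = patterns[rng.choice(4, size=100_000, p=probs)]

# empirical p_i; exact values are (0.5, 0.6, 0.7, 0.7, 0.7, 0.5)
p_hat = draws.mean(axis=0)
print(p_hat)
```

Note that the ε_{i,n} need not be independent across blocks: the law is specified jointly over patterns, and only p_i > 0 for every i is required.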
6/12
Convergence analysis
NOTATION
(F_n)_{n∈N} sequence of sigma-algebras such that
(∀n ∈ N) F_n ⊂ F and σ(x_0, . . . , x_n) ⊂ F_n ⊂ F_{n+1},
where σ(x_0, . . . , x_n) is the σ-algebra generated by (x_0, . . . , x_n).
ASSUMPTIONS
(i) F ≠ ∅.
(ii) inf_{n∈N} λ_n > 0.
(iii) There exists a sequence (α_n)_{n∈N} in [0, +∞[ such that Σ_{n∈N} √α_n < +∞ and (∀n ∈ N) E(‖a_n‖² | F_n) ≤ α_n.
(iv) For every n ∈ N, E_n = σ(ε_n) and F_n are independent.
(v) For every i ∈ {1, . . . , m}, p_i = P[ε_{i,0} = 1] > 0.
7/12
Convergence results
[Combettes, Pesquet, 2015]
Suppose that sup_{n∈N} λ_n < 1 and that, for every n ∈ N, T_n is quasinonexpansive, i.e.,
(∀z ∈ Fix T_n)(∀x ∈ H) ‖T_n x − z‖ ≤ ‖x − z‖.
Then
(i) (T_n x_n − x_n)_{n∈N} converges strongly P-a.s. to 0.
(ii) Suppose that, almost surely, every sequential cluster point of (x_n)_{n∈N} belongs to F. Then (x_n)_{n∈N} converges weakly P-a.s. to an F-valued random variable.
REMARK
These conditions are met by many algorithms for solving monotone inclusion problems, e.g., the forward-backward or the Douglas-Rachford algorithm.
8/12
Convergence results
[Combettes, Pesquet, 2017]
Assume that F = {x̄} = {(x̄_i)_{1 ≤ i ≤ m}} and that
(∀n ∈ N)(∀x = (x_i)_{1 ≤ i ≤ m} ∈ H) ‖T_n x − x̄‖² ≤ Σ_{i=1}^m τ_{i,n} ‖x_i − x̄_i‖²,
where {τ_{i,n} | 1 ≤ i ≤ m, n ∈ N} ⊂ ]0, +∞[. Then
(∀n ∈ N) E(‖x_{n+1} − x̄‖² | F_0) ≤ (max_{1≤i≤m} p_i / min_{1≤i≤m} p_i) (Π_{k=0}^n χ_k) ‖x_0 − x̄‖² + η_n,
with, for every n ∈ N,
    ξ_n = α_n / min_{1≤i≤m} p_i,
    µ_n = 1 − min_{1≤i≤m} p_i (1 − τ_{i,n}),
    χ_n = 1 − λ_n(1 − µ_n) + ξ_n λ_n (1 + λ_n √µ_n),
    η_n = Σ_{k=0}^n (Π_{ℓ=k+1}^n χ_ℓ) λ_k (1 + λ_k √µ_k + λ_k ξ_k) ξ_k.
If, in addition, (∀i ∈ {1, . . . , m}) sup_{n∈N} τ_{i,n} < 1 and x_0 ∈ L²(Ω, F, P; H), then (x_n)_{n∈N} converges to x̄ both in the mean-square sense and strongly P-a.s.
9/12
Behavior in the absence of errors
• Under the same assumptions, linear convergence rate.
• Comparison with deterministic case:
[Figure: ρ(p)/ρ(1) as a function of p, for χ ∈ {0.95, 0.8, 0.6, 0.4, 0.2, 0.1}]
ρ(p) = −ln(1 − (1 − χ)p)/p: convergence rate normalized by the computational cost when (∀i ∈ {1, . . . , m}) p_i = p;
χ: convergence factor in the deterministic case.
• Accuracy of upper bounds for a variational problem in multicomponent image recovery:
[Figure: E‖x_n − x̄‖²/E‖x_0 − x̄‖² (in dB) versus iteration number n, for p = 1, p = 0.8, p = 0.46; theoretical upper bounds in dashed lines]
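The normalized rate ρ(p) is straightforward to evaluate; a short sketch using the formula above (the grid of p values is my own choice):

```python
# Normalized linear convergence rate rho(p) = -ln(1 - (1 - chi) p) / p,
# for uniform activation probability p and deterministic factor chi.
import numpy as np

def rho(p, chi):
    return -np.log(1.0 - (1.0 - chi) * p) / p

p = np.linspace(0.05, 1.0, 20)
for chi in (0.95, 0.8, 0.6, 0.4, 0.2, 0.1):
    curve = rho(p, chi) / rho(1.0, chi)   # the quantity plotted on the slide
    print(chi, curve.min(), curve.max())
```

Since ρ(p) is increasing in p, the normalized curve ρ(p)/ρ(1) stays below 1, reflecting the cost/rate trade-off of partial activation under this bound.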
10/12
Influence of stochastic errors
Assume that α_n = O(n^{−θ}) with θ ∈ ]2, +∞[. Then
E‖x_n − x̄‖² = O(n^{−θ/2}):
the linear convergence is lost.
11/12
Open issue: deterministic block activation
Let
(∀x ∈ H) |||x|||² = Σ_{i=1}^m ω_i ‖x_i‖²,
where max_{1≤i≤m} ω_i p_i = 1. Assume that λ_n ≡ 1 and a_n ≡ 0. Then
(∀n ∈ N) E(|||x_{n+1} − x̄|||² | F_n)
    = Σ_{i=1}^m ω_i p_i ‖T_{i,n} x_n − x̄_i‖² + Σ_{i=1}^m ω_i (1 − p_i) ‖x_{i,n} − x̄_i‖²
    ≤ ‖T_n x_n − x̄‖² + |||x_n − x̄|||² − Σ_{i=1}^m ω_i p_i ‖x_{i,n} − x̄_i‖²
    ≤ |||x_n − x̄|||² + Σ_{i=1}^m (τ_{i,n} − ω_i p_i) ‖x_{i,n} − x̄_i‖², where τ_{i,n} − ω_i p_i ≤ 0
⇒ stochastic Fejér monotonicity [Combettes, Pesquet, 2015].
12/12
Open issue: more directional convergence conditions
Example:
minimize_{x∈H} f(x) = g(Σ_{i=1}^m L_i x_i) + (θ/2)‖x‖²,
where g : G → R is convex and differentiable with a 1-Lipschitzian gradient, G is a separable real Hilbert space, L_i is bounded and linear from H_i to G for every i ∈ {1, . . . , m}, and θ ∈ ]0, +∞[.
• stochastic approach: T_n = Id − γ_n ∇f
    ⇒ (∀i ∈ {1, . . . , m}) τ_{i,n} = 1 − γ_n θ, with γ_n < 2/(‖Σ_{i=1}^m L_i* L_i‖ + 2θ)
• deterministic approach (quasi-cyclic activation): γ_n < 2/(‖L_{i_n}‖² + 2θ)
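A small numeric check of the stochastic-approach operator T_n = Id − γ∇f (dimensions and data invented here for illustration), taking g = ½‖·‖² so that ∇g is 1-Lipschitzian and ∇f(x) = Lᵀ(Lx) + θx with L = [L_1 · · · L_m]:

```python
# Gradient-descent fixed-point operator T x = x - gamma * grad_f(x) for the
# strongly convex example objective; its unique fixed point is the minimizer 0.
import numpy as np

rng = np.random.default_rng(2)
m, d = 4, 3                              # blocks stacked into R^4, G = R^3
L = rng.standard_normal((d, m))
theta = 0.5

def grad_f(x):
    return L.T @ (L @ x) + theta * x

# step size strictly inside the bound gamma < 2 / (||L^T L|| + 2 theta)
gamma = 0.99 * 2.0 / (np.linalg.norm(L.T @ L, 2) + 2 * theta)

x = rng.standard_normal(m)
for n in range(500):
    x = x - gamma * grad_f(x)            # T_n x

print(np.allclose(x, 0.0, atol=1e-6))    # minimizer of f is x = 0
```

With scalar blocks this operator satisfies the slide-8 contraction assumption, which is what yields τ_{i,n} = 1 − γ_n θ in the stochastic analysis.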
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 

Recently uploaded (20)

How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 

QMC: Operator Splitting Workshop, Stochastic Block-Coordinate Fixed Point Algorithms - Jean-Christophe Pesquet, Mar 23, 2018

  • 1. 1/12 STOCHASTIC BLOCK-COORDINATE FIXED POINT ALGORITHMS. Jean-Christophe Pesquet, Center for Visual Computing, CentraleSupélec, University Paris-Saclay. Joint work with Patrick Louis Combettes. SAMSI Workshop - March 2018.
  • 2. 2/12 Motivation. FIXED POINT ALGORITHM: for n = 0, 1, . . . , x_{n+1} = x_n + λ_n (T_n x_n − x_n), where x_0 ∈ H, a separable real Hilbert space; (∀n ∈ N) T_n : H → H; (λ_n)_{n∈N} relaxation parameters in ]0, +∞[.
  • 3. 2/12 Motivation (continued). The fixed point algorithm is widely used in optimization, game theory, inverse problems, machine learning, ... Under suitable assumptions, (x_n)_{n∈N} converges to a point x ∈ F = ∩_{n∈N} Fix T_n. [Portrait of E. Picard (1856-1941)]
  • 4. 2/12 Motivation (continued). In the context of high-dimensional problems, how can the computational issues raised by memory requirements be limited?
  • 5. 3/12 Block-coordinate approach. A point x ∈ H is split as x = (x_1, . . . , x_m) with x_i ∈ H_i, where H = H_1 ⊕ · · · ⊕ H_m and H_1, . . . , H_m are real separable Hilbert spaces.
  • 6. 4/12 BLOCK-COORDINATE ALGORITHM: for n = 0, 1, . . . , for i = 1, . . . , m, x_{i,n+1} = x_{i,n} + ε_{i,n} λ_n (T_{i,n}(x_{1,n}, . . . , x_{m,n}) + a_{i,n} − x_{i,n}), where (∀x ∈ H) T_n x = (T_{i,n} x)_{1≤i≤m} and, for every i ∈ {1, . . . , m}, T_{i,n} : H → H_i is measurable.
  • 7. 4/12 Block-coordinate algorithm (continued): (ε_n)_{n∈N} = ((ε_{i,n})_{1≤i≤m})_{n∈N} are identically distributed D-valued random variables, with D = {0, 1}^m \ {0}.
  • 8. 4/12 Block-coordinate algorithm (continued): λ_n ∈ ]0, 1].
  • 9. 4/12 Block-coordinate algorithm (continued): a_n = (a_{i,n})_{1≤i≤m} is an H-valued random variable modeling a possible error term.
  • 10. 4/12 Block-coordinate algorithm (continued): a_n ≡ 0 and ε_n ≡ (1, . . . , 1) P-a.s. ⇔ deterministic algorithm with no error.
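Slides 6-10 fully specify the iteration, which can be sketched in a few lines. This is a minimal illustration (not the authors' code), on an assumed toy problem: T_n ≡ T with T x = x − γ(Ax − b), whose unique fixed point solves Ax = b, with one coordinate of R³ per block and independent Bernoulli activations; all constants are illustrative.

```python
import random

# Minimal sketch of the stochastic block-coordinate fixed point
# iteration, with one coordinate of R^3 per block.  Toy operator:
# T x = x - gamma * (A x - b); A is symmetric positive definite, so T
# is a contraction for this step size and Fix T = {A^{-1} b}.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
gamma = 0.2

def T(x):
    """Full operator; block i of T x is its i-th component T_i x."""
    return [x[i] - gamma * (sum(A[i][j] * x[j] for j in range(3)) - b[i])
            for i in range(3)]

random.seed(0)
x = [0.0, 0.0, 0.0]
lam = 1.0  # relaxation parameter lambda_n in ]0, 1]
for n in range(2000):
    # Random block activation: eps_n must lie in D = {0,1}^m \ {0}.
    eps = [random.random() < 0.5 for _ in range(3)]
    if not any(eps):
        continue
    Tx = T(x)  # evaluated at x_n; only activated blocks are updated
    x = [x[i] + eps[i] * lam * (Tx[i] - x[i]) for i in range(3)]

# Residual of the linear system solved by the fixed point.
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
```

Only the activated components x_{i,n} change at iteration n, which is what limits the per-iteration memory and computation in the high-dimensional setting motivating the talk.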
  • 11. 5/12 Illustration of block activation strategy. Variable selection (∀n ∈ N): for i = 1, . . . , 6, the block x_{i,n} is activated when ε_{i,n} = 1. How to choose the variable ε_n = (ε_{1,n}, . . . , ε_{6,n})?
  • 12. 5/12 Illustration (continued): P[ε_n = (1, 1, 0, 0, 0, 0)] = 0.1.
  • 13. 5/12 Illustration (continued): P[ε_n = (1, 0, 1, 0, 0, 0)] = 0.2.
  • 14. 5/12 Illustration (continued): P[ε_n = (1, 0, 0, 1, 1, 0)] = 0.2.
  • 15. 5/12 Illustration (continued): P[ε_n = (0, 1, 1, 1, 1, 1)] = 0.5.
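The activation distribution on these slides can be simulated directly. A small sketch (values taken from the slides, sampler code illustrative) that draws ε_n and recovers the marginal probabilities p_i = P[ε_{i,0} = 1]:

```python
import random

# Sampling the activation vector eps_n from the finite distribution on
# the slides, and checking that every marginal probability
# p_i = P[eps_{i,0} = 1] is positive.
patterns = [
    ((1, 1, 0, 0, 0, 0), 0.1),
    ((1, 0, 1, 0, 0, 0), 0.2),
    ((1, 0, 0, 1, 1, 0), 0.2),
    ((0, 1, 1, 1, 1, 1), 0.5),
]

def draw_eps(rng):
    """Draw one realization of eps_n by inverse-transform sampling."""
    u, acc = rng.random(), 0.0
    for pattern, prob in patterns:
        acc += prob
        if u < acc:
            return pattern
    return patterns[-1][0]

# Exact marginals: p_i sums the probabilities of patterns activating i.
p = [sum(prob for pat, prob in patterns if pat[i] == 1) for i in range(6)]

# Empirical check of the marginals.
rng = random.Random(1)
samples = [draw_eps(rng) for _ in range(10000)]
est = [sum(s[i] for s in samples) / 10000 for i in range(6)]
print([round(pi, 3) for pi in p])  # -> [0.5, 0.6, 0.7, 0.7, 0.7, 0.5]
```

Each block has a strictly positive activation probability even though no single pattern activates all blocks, so every coordinate is updated infinitely often almost surely.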
  • 16. 6/12 Convergence analysis. NOTATION: (F_n)_{n∈N} is a sequence of sigma-algebras such that (∀n ∈ N) F_n ⊂ F and σ(x_0, . . . , x_n) ⊂ F_n ⊂ F_{n+1}, where σ(x_0, . . . , x_n) is the smallest σ-algebra generated by (x_0, . . . , x_n).
  • 17. 6/12 Convergence analysis (continued). ASSUMPTIONS: (i) F ≠ ∅. (ii) inf_{n∈N} λ_n > 0. (iii) There exists a sequence (α_n)_{n∈N} in [0, +∞[ such that Σ_{n∈N} √α_n < +∞ and (∀n ∈ N) E(‖a_n‖² | F_n) ≤ α_n. (iv) For every n ∈ N, E_n = σ(ε_n) and F_n are independent. (v) For every i ∈ {1, . . . , m}, p_i = P[ε_{i,0} = 1] > 0.
  • 18. 7/12 Convergence results [Combettes, Pesquet, 2015]. Suppose that sup_{n∈N} λ_n < 1 and that, for every n ∈ N, T_n is quasinonexpansive, i.e., (∀z ∈ Fix T_n)(∀x ∈ H) ‖T_n x − z‖ ≤ ‖x − z‖. Then: (i) (T_n x_n − x_n)_{n∈N} converges strongly P-a.s. to 0; (ii) if, almost surely, every sequential cluster point of (x_n)_{n∈N} belongs to F, then (x_n)_{n∈N} converges weakly P-a.s. to an F-valued random variable. REMARK: these conditions are met by many algorithms for solving monotone inclusion problems, e.g., the forward-backward or the Douglas-Rachford algorithm.
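As a concrete, hypothetical instance of the remark (not the authors' experiment), a stochastic block-coordinate forward-backward iteration for a small Lasso problem, with T_n ≡ prox_{γμ|·|} ∘ (Id − γ∇h) for h(x) = ½‖Ax − b‖²; the data A, b and the parameters μ, γ are illustrative:

```python
import random

# Stochastic block-coordinate forward-backward sketch for
#   minimize (1/2)||A x - b||^2 + mu ||x||_1
# with one coordinate per block; gamma < 2 / ||A^T A|| here.
A = [[1.0, 0.2, 0.0],
     [0.2, 1.0, 0.2],
     [0.0, 0.2, 1.0]]
b = [1.0, 0.0, -1.0]
mu, gamma = 0.1, 0.5  # illustrative regularization and step size

def soft(v, t):
    """Soft-thresholding: the proximity operator of t|.|."""
    return max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0)

def grad(x):
    """Gradient of h(x) = (1/2)||A x - b||^2, i.e. A^T (A x - b)."""
    r = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
    return [sum(A[i][j] * r[i] for i in range(3)) for j in range(3)]

random.seed(2)
x = [0.0, 0.0, 0.0]
for n in range(5000):
    g = grad(x)  # evaluated at x_n, before any block is updated
    for i in range(3):
        if random.random() < 0.5:  # eps_{i,n} = 1
            x[i] = soft(x[i] - gamma * g[i], gamma * mu)

# Fixed point residual: x should now satisfy x = T x to high accuracy.
g = grad(x)
res = max(abs(x[i] - soft(x[i] - gamma * g[i], gamma * mu)) for i in range(3))
```

The forward-backward map is quasinonexpansive (indeed firmly nonexpansive up to relaxation) for this step size, so the result on this slide applies with randomly swept blocks.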
  • 19. 8/12 Convergence results [Combettes, Pesquet, 2017]. Assume that F = {x̄} = {(x̄_i)_{1≤i≤m}} and that (∀n ∈ N)(∀x = (x_i)_{1≤i≤m} ∈ H) ‖T_n x − x̄‖² ≤ Σ_{i=1}^m τ_{i,n} ‖x_i − x̄_i‖², where {τ_{i,n} | 1 ≤ i ≤ m, n ∈ N} ⊂ ]0, +∞[. Then (∀n ∈ N) E(‖x_{n+1} − x̄‖² | F_0) ≤ (max_{1≤i≤m} p_i / min_{1≤i≤m} p_i) (Π_{k=0}^n χ_k) ‖x_0 − x̄‖² + η_n, with, for every n ∈ N, ξ_n = α_n / min_{1≤i≤m} p_i, μ_n = 1 − min_{1≤i≤m} p_i (1 − τ_{i,n}), χ_n = 1 − λ_n(1 − μ_n) + ξ_n λ_n (1 + λ_n √μ_n), and η_n = Σ_{k=0}^n (Π_{ℓ=k+1}^n χ_ℓ) λ_k (1 + λ_k √μ_k + λ_k ξ_k) ξ_k.
  • 20. 8/12 Convergence results [Combettes, Pesquet, 2017] (continued). Under the same assumptions, suppose moreover that (∀i ∈ {1, . . . , m}) sup_{n∈N} τ_{i,n} < 1 and that x_0 ∈ L²(Ω, F, P; H). Then (x_n)_{n∈N} converges to x̄ both in the mean square sense and strongly P-a.s.
  • 21. 9/12 Behavior in the absence of errors. Under the same assumptions, the algorithm converges at a linear rate; comparison with the deterministic case. [Figure: the normalized rate −ln(1 − (1 − χ)p)/p, divided by its value at p = 1, plotted as a function of p ∈ ]0, 1] for χ ∈ {0.95, 0.8, 0.6, 0.4, 0.2, 0.1}. This is the convergence rate normalized by the computational cost when (∀i ∈ {1, . . . , m}) p_i = p; χ is the convergence factor in the deterministic case.]
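The quantity plotted on this slide can be evaluated directly; a short sketch of the normalized rate −ln(1 − (1 − χ)p)/p divided by its value at p = 1, for the same χ values as in the figure:

```python
import math

# Convergence rate normalized by computational cost when every block
# shares the activation probability p_i = p, relative to p = 1.
def normalized_rate(p, chi):
    rho_p = -math.log(1.0 - (1.0 - chi) * p) / p
    rho_1 = -math.log(chi)  # full activation, p = 1
    return rho_p / rho_1

for chi in (0.95, 0.8, 0.6, 0.4, 0.2, 0.1):
    vals = [normalized_rate(p, chi) for p in (0.25, 0.5, 0.75, 1.0)]
    # The normalized rate increases with p and equals 1 at p = 1.
    assert all(u <= v for u, v in zip(vals, vals[1:]))
    print(chi, [round(v, 3) for v in vals])
```

Since −ln(1 − cp) is convex in p with value 0 at p = 0, the ratio −ln(1 − cp)/p is nondecreasing in p, which is the monotonicity visible in the curves.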
  • 22. 9/12 Behavior in the absence of errors (continued). Accuracy of the upper bounds for a variational problem in multicomponent image recovery. [Figure: E‖x_n − x̄‖²/E‖x_0 − x̄‖² (in dB) versus the iteration number n, for p = 1, p = 0.8, and p = 0.46; the theoretical upper bounds are shown in dashed lines.]
  • 23. 10/12 Influence of stochastic errors. Assume that α_n = O(n^{−θ}) with θ ∈ ]2, +∞[. Then E‖x_n − x̄‖² = O(n^{−θ/2}): loss of the linear convergence.
  • 24. 11/12 Open issue: deterministic block activation. Let (∀x ∈ H) |||x|||² = Σ_{i=1}^m ω_i ‖x_i‖², where max_{1≤i≤m} ω_i p_i = 1, and assume that λ_n ≡ 1 and a_n ≡ 0. Then (∀n ∈ N) E(|||x_{n+1} − x̄|||² | F_n) = Σ_{i=1}^m ω_i p_i ‖T_{i,n} x_n − x̄_i‖² + Σ_{i=1}^m ω_i (1 − p_i) ‖x_{i,n} − x̄_i‖² ≤ ‖T_n x_n − x̄‖² + |||x_n − x̄|||² − Σ_{i=1}^m ω_i p_i ‖x_{i,n} − x̄_i‖² ≤ |||x_n − x̄|||² + Σ_{i=1}^m (τ_{i,n} − ω_i p_i) ‖x_{i,n} − x̄_i‖², where each term τ_{i,n} − ω_i p_i is ≤ 0: stochastic Fejér monotonicity [Combettes, Pesquet, 2015].
  • 25. 12/12 Open issue: more directional convergence conditions. Example: minimize over x ∈ H the function f(x) = g(Σ_{i=1}^m L_i x_i) + (θ/2)‖x‖², where g : G → R is convex and 1-Lipschitz differentiable, G is a separable real Hilbert space, each L_i is a bounded linear operator from H_i to G (i ∈ {1, . . . , m}), and θ ∈ ]0, +∞[. Stochastic approach: T_n = Id − γ_n ∇f ⇒ (∀i ∈ {1, . . . , m}) τ_{i,n} = 1 − γ_n θ, with γ_n < 2/(‖Σ_{i=1}^m L_i^* L_i‖ + 2θ). Deterministic approach (quasi-cyclic activation): γ_n < 2/(‖L_{i_n}‖² + 2θ).
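For intuition, the two step-size bounds can be compared in the toy case G = R with scalar operators L_i x_i = l_i x_i; the values l_i below are hypothetical, chosen only to illustrate the gap:

```python
# Toy comparison (G = R, scalar operators L_i x_i = l_i x_i) of the two
# step-size bounds on this slide; the l_i values are illustrative.
theta = 1.0
l = [3.0, 1.0, 0.5, 0.5]  # l_i = ||L_i||

# Stochastic approach: one global bound through ||sum_i L_i* L_i||,
# which here reduces to sum_i l_i^2.
gamma_stoch = 2.0 / (sum(li ** 2 for li in l) + 2.0 * theta)

# Deterministic quasi-cyclic activation: a per-block bound via
# ||L_{i_n}||^2, less restrictive when a single block is activated.
gamma_det = [2.0 / (li ** 2 + 2.0 * theta) for li in l]

print(round(gamma_stoch, 4), [round(g, 4) for g in gamma_det])
```

The deterministic sweep admits a larger step for every block than the single worst-case step required by the stochastic analysis, which is one reason more directional convergence conditions are worth seeking.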