Time-Series Analysis on Multiperiodic Conditional Correlation by Sparse Covariance Selection
Michael Lie
Prof. Suzuki Taiji Lab., Faculty of Science, Department of Information Science, Tokyo Institute of Technology, Japan
February 12, 2015
Agenda
To propose a new statistical model: Sparse Multiperiodic Covariance Selection (M-CovSel)
To propose an optimization method based on ADMM
Covariance Selection
Sparse Covariance Selection
$$Y_1, \dots, Y_n \overset{\text{i.i.d.}}{\sim} N_p(\mu, \Sigma), \qquad \operatorname*{argmin}_{X \succ 0} \; -\ln\det X + \operatorname{trace}(SX) + \lambda \|X\|_1.$$
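As a quick illustration, a minimal base-R sketch (the function name is hypothetical) of evaluating this objective for a candidate precision matrix X, sample covariance S, and penalty lambda:

```r
# Sparse covariance selection objective: -ln det X + trace(SX) + lambda * ||X||_1.
# Assumes X and S are symmetric p x p matrices.
covsel_objective <- function(X, S, lambda) {
  logdet <- as.numeric(determinant(X, logarithm = TRUE)$modulus)
  # For symmetric X, trace(SX) equals the elementwise sum of S * X.
  -logdet + sum(S * X) + lambda * sum(abs(X))
}
```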
Original idea: Dempster (1972)
Application to sparse and high-dimensional settings: Meinshausen and Bühlmann (2006)
Problem formulation: Banerjee, El Ghaoui and d'Aspremont (2008)
Solution via the graphical lasso: Friedman, Hastie and Tibshirani (2008)
Solution via ADMM: Boyd et al. (2011)
Application: Markowitz's Portfolio Selection
Portfolio Selection (Markowitz, 1952)

$$\min_{w} \; \sigma^2_{p,w} = w^\top S w \quad \text{s.t.} \quad w^\top \mathbf{1} = 1 \;\;\therefore\;\; w = \frac{S^{-1}\mathbf{1}}{\mathbf{1}^\top S^{-1}\mathbf{1}}.$$

Here, the inverse of the empirical covariance, $S^{-1}$, is needed!
Existing covariance selection works at a fixed time point
⇒ covariance selection over a time series is needed!
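As a small worked example, the closed-form minimum-variance weights follow directly from a precision-matrix estimate; a minimal R sketch, assuming a (possibly sparsified) estimate X_hat of $S^{-1}$:

```r
# Minimum-variance portfolio weights: w = S^{-1} 1 / (1' S^{-1} 1).
min_variance_weights <- function(X_hat) {
  w <- X_hat %*% rep(1, nrow(X_hat))  # S^{-1} 1
  as.numeric(w / sum(w))              # normalize so that w' 1 = 1
}
```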
Intuition
Figure: Existing Model
By estimating X, we can construct the portfolio.
Figure: Our Model
$$S_{ij} := \frac{1}{n} \sum_{k,l} (y_{k,i} - \hat{\mu}_i)(y_{l,j} - \hat{\mu}_j)^\top,$$
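A minimal R sketch of forming one cross-period covariance block, under the assumed layout that Y is an n × (T·p) matrix whose columns (i−1)p+1, …, ip hold the p variables of period i; it pairs the k-th observation in each period (the usual estimator), whereas the display above is written with a more general double sum:

```r
# Empirical cross-period covariance block S_ij (p x p).
block_cov <- function(Y, i, j, p) {
  cols_i <- ((i - 1) * p + 1):(i * p)
  cols_j <- ((j - 1) * p + 1):(j * p)
  Yi <- scale(Y[, cols_i], center = TRUE, scale = FALSE)  # y_{k,i} - mu_i
  Yj <- scale(Y[, cols_j], center = TRUE, scale = FALSE)  # y_{k,j} - mu_j
  crossprod(Yi, Yj) / nrow(Y)  # (1/n) * t(Yi) %*% Yj
}
```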
Problem Formulation
Consider a stationary time process such that the multiperiodic inverse covariance matrix X can be expressed as

$$X = \begin{pmatrix} X_{11} & X_{12} & X_{13} & \cdots & X_{1,T} \\ X_{12} & X_{22} & X_{23} & \cdots & X_{2,T} \\ X_{13} & X_{23} & X_{33} & \cdots & X_{3,T} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ X_{1,T} & X_{2,T} & X_{3,T} & \cdots & X_{T,T} \end{pmatrix} \quad (Tp \text{ rows} \times Tp \text{ columns}).$$

Assumption: X comes from a stationary time process, so that $X_{i,i+h} = X_{j,j+h}$ for all $i, j$.
Sparse Multiperiodic Covariance Selection (M-CovSel):

$$\operatorname*{argmin}_{X \succ 0} f(X) := \operatorname*{argmin}_{X \succ 0} \; -\ln\det X + \sum_{i,j} \operatorname{trace}(S_{ij} X_{ij}) + \lambda_1 \sum_{i,j} \|X_{ij}\|_1 + \lambda_2 \sum_{i,j} \sum_{k>i,\, l>j} \|X_{ij} - X_{kl}\|_2^2$$

subject to $X_{i,i+h} = X_{j,j+h}$ for all $i, j$.

Norms: $\|w\|_1 = \sum_i |w_i|$ and $\|w\|_F^2 = \sum_i |w_i|^2$.
We separate our model into two parts: $f(X) \equiv g(X) + h(X)$, where

$$g(X) = -\ln\det X + \sum_{i,j} \operatorname{trace}(S_{ij} X_{ij}),$$
$$h(X) = \lambda_1 \sum_{i,j} \|X_{ij}\|_1 + \lambda_2 \sum_{i,j} \sum_{k>i,\, l>j} \|X_{ij} - X_{kl}\|_F^2.$$
g(X): twice differentiable and strictly convex
h(X): convex but non-differentiable
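A minimal R sketch of evaluating the two parts, assuming X and S are stored as Tn × Tn lists of p × p blocks (X_blk[[i]][[j]]) alongside the assembled Tp × Tp matrix Xfull; all names are hypothetical:

```r
g_val <- function(Xfull, S_blk, X_blk, Tn) {
  tr <- 0
  for (i in 1:Tn) for (j in 1:Tn)   # trace(S_ij X_ij) = sum(S_ij * t(X_ij))
    tr <- tr + sum(S_blk[[i]][[j]] * t(X_blk[[i]][[j]]))
  -as.numeric(determinant(Xfull, logarithm = TRUE)$modulus) + tr
}

h_val <- function(X_blk, Tn, lambda1, lambda2) {
  l1 <- 0; fused <- 0
  for (i in 1:Tn) for (j in 1:Tn) {
    l1 <- l1 + sum(abs(X_blk[[i]][[j]]))          # ||X_ij||_1
    for (k in 1:Tn) for (l in 1:Tn)
      if (k > i && l > j)                         # ||X_ij - X_kl||_F^2
        fused <- fused + sum((X_blk[[i]][[j]] - X_blk[[k]][[l]])^2)
  }
  lambda1 * l1 + lambda2 * fused
}
```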
Auxiliary Variables
$$X = \begin{pmatrix} X_{11} & X_{12} & X_{13} & \cdots & X_{1,T} \\ X_{12} & X_{22} & X_{23} & \cdots & X_{2,T} \\ X_{13} & X_{23} & X_{33} & \cdots & X_{3,T} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ X_{1,T} & X_{2,T} & X_{3,T} & \cdots & X_{T,T} \end{pmatrix} \;\xrightarrow{\;\mathrm{bvec}\;}\; \vec{X} = \begin{pmatrix} X_{11} \\ \vdots \\ X_{1,T} \\ X_{22} \\ \vdots \\ X_{2,T} \\ \vdots \\ X_{T,T} \end{pmatrix},$$

where $\vec{X}$ has $p$ columns and $\mathrm{numX} \times p$ rows, numX being the number of distinct blocks, $T(T+1)/2$.
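A minimal R sketch of this bvec operation under the same assumed block layout:

```r
# Stack the Tn*(Tn+1)/2 distinct p x p blocks of a Tp x Tp matrix row-wise.
bvec <- function(X, Tn, p) {
  blocks <- list()
  for (i in 1:Tn) for (j in i:Tn) {   # upper-triangular blocks, row by row
    rows <- ((i - 1) * p + 1):(i * p)
    cols <- ((j - 1) * p + 1):(j * p)
    blocks[[length(blocks) + 1]] <- X[rows, cols]
  }
  do.call(rbind, blocks)              # (numX * p) x p
}
```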
H: the stationary-time constraint matrix (it encodes $X_{i,i+h} = X_{j,j+h}$ via $H\vec{X} = 0$)
All D: the full time-difference matrix, encoding every difference $X_{ij} - X_{kl}$ with $k>i$, $l>j$
Simplified D: a reduced variant of the time-difference matrix using only a subset of the differences
$$\begin{array}{ll} \text{minimize} & g(X) + h(\tilde{Z}) \\ \text{subject to} & \begin{cases} \vec{X} = Z_1 \\ D\vec{X} = Z_2 \\ H\vec{X} = 0 \end{cases} \iff \tilde{X} = \tilde{Z}, \end{array}$$

where

$$g(X) = -\ln\det X + \sum_{i,j} \operatorname{trace}(S_{ij} X_{ij}), \qquad h(\tilde{Z}) = \lambda_1 \sum_{i,j} \|(Z_1)_{ij}\|_1 + \lambda_2 \sum_{i,j} \|(Z_2)_{ij}\|_F^2,$$

$$\tilde{X} = \begin{pmatrix} \vec{X} \\ D\vec{X} \\ H\vec{X} \end{pmatrix}, \qquad \tilde{Z} = \begin{pmatrix} Z_1 \\ Z_2 \\ 0 \end{pmatrix}.$$
Alternating Direction Method of Multipliers (ADMM)
Solving Through ADMM
Algorithm 1: Overview of ADMM
1: for k = 0, 1, … do
2:   $\tilde{X}$-update:
3:     Compute $W^{(0)} = (X^{(0)})^{-1}$.
4:     for t = 1, 2, … do
5:       Compute the steepest-descent direction $d = -\nabla G(\tilde{X})$.
6:       Use an Armijo-rule step-size selection to get $\alpha$ such that $X^{(t+1)} = X^{(t)} + \alpha d^{(t)}$ is positive definite and the objective value sufficiently decreases.
7:       Update $\tilde{X}$.
8:     end for
9:   $\tilde{Z}$-update:
10:    Update $Z_1$: $Z_1^{(k+1)} = S_{\lambda_1/\rho}\big(\vec{X}^{(k+1)} + Y_1^{(k)}/\rho\big)$
11:    Update $Z_2$: $Z_2^{(k+1)} = \dfrac{\rho D\vec{X}^{(k+1)} + Y_2^{(k)}}{2\lambda_2 + \rho}$
12:  Y-update: $Y^{(k+1)} = Y^{(k)} + \rho\big(\tilde{X}^{(k+1)} - \tilde{Z}^{(k+1)}\big)$
13: end for
Recall the problem:

$$\text{minimize } g(X) + h(\tilde{Z}) \quad \text{subject to} \quad \begin{cases} \vec{X} = Z_1 \\ D\vec{X} = Z_2 \\ H\vec{X} = 0 \end{cases} \iff \tilde{X} = \tilde{Z}.$$

Its augmented Lagrangian (in scaled form) is

$$L_\rho(\tilde{X}, \tilde{Z}, Y) = g(\tilde{X}) + h(\tilde{Z}) + \frac{\rho}{2} \left\| \tilde{X} - \tilde{Z} + \frac{Y}{\rho} \right\|_F^2,$$

with $g$ and $h$ as defined above.
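For monitoring convergence (the primal and dual residual plots in the results below), a minimal sketch in the scaled form of Boyd et al. (2011):

```r
# Stopping quantities for ADMM: r^k = Xt - Zt, s^k = rho * (Zt - Zt_prev).
primal_residual <- function(Xt, Zt) norm(Xt - Zt, type = "F")
dual_residual   <- function(Zt, Zt_prev, rho) rho * norm(Zt - Zt_prev, type = "F")
```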
1. $\tilde{X}$-update:
$$\tilde{X}^{(k+1)} := \operatorname*{argmin}_{\tilde{X}} \; -\ln\det X + \sum_{i,j} \operatorname{trace}(S_{ij} X_{ij}) + \frac{\rho}{2} \left\| \tilde{X} - \tilde{Z}^{(k)} + \frac{Y^{(k)}}{\rho} \right\|_F^2,$$

2. $\tilde{Z}$-update:
$$\tilde{Z}^{(k+1)} := \operatorname*{argmin}_{\tilde{Z}} \; \lambda_1 \|Z_1\|_1 + \lambda_2 \|Z_2\|_F^2 + \frac{\rho}{2} \left\| \tilde{X}^{(k+1)} - \tilde{Z} + \frac{Y^{(k)}}{\rho} \right\|_F^2,$$

3. Y-update:
$$Y^{(k+1)} := Y^{(k)} + \rho \left( \tilde{X}^{(k+1)} - \tilde{Z}^{(k+1)} \right).$$
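Putting the three updates together, a high-level R sketch of the outer loop, with hypothetical helpers update_X() and update_Z() standing in for the subproblem solvers described next:

```r
admm_mcovsel <- function(Xt, Zt, Y, rho, max_iter = 100, tol = 1e-4) {
  for (k in seq_len(max_iter)) {
    Xt      <- update_X(Zt, Y, rho)   # inner steepest-descent solve
    Zt_prev <- Zt
    Zt      <- update_Z(Xt, Y, rho)   # soft-thresholding / shrinkage
    Y       <- Y + rho * (Xt - Zt)    # dual ascent step
    if (norm(Xt - Zt, "F") < tol &&
        rho * norm(Zt - Zt_prev, "F") < tol) break  # residual test
  }
  list(X = Xt, Z = Zt, Y = Y)
}
```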
$\tilde{X}$ Update

The subproblem

$$\tilde{X}^{(k+1)} := \operatorname*{argmin}_{\tilde{X}} \; -\ln\det X + \sum_{i,j} \operatorname{trace}(S_{ij} X_{ij}) + \frac{\rho}{2} \left\| \tilde{X} - \tilde{Z}^{(k)} + \frac{Y^{(k)}}{\rho} \right\|_F^2$$

is solved by steepest gradient descent, as given in lines 2-8 of Algorithm 1.
Algorithm 2: $\tilde{X}$ Update
1: Compute $W^{(0)} = (X^{(0)})^{-1}$.
2: for t = 1, 2, … do
3:   Compute the steepest-descent direction $d = -\nabla G(\tilde{X})$.
4:   Use an Armijo-rule step-size selection to get $\alpha$ such that $X^{(t+1)} = X^{(t)} + \alpha d^{(t)}$ is positive definite and the objective value sufficiently decreases.
5:   Update $\tilde{X}$.
6: end for
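A minimal R sketch of the Armijo backtracking step with the positive-definiteness check; f_obj() and grad() are hypothetical stand-ins for the subproblem objective and its gradient:

```r
armijo_step <- function(X, d, f_obj, grad, beta = 0.5, sigma = 1e-4) {
  alpha <- 1
  repeat {
    X_new <- X + alpha * d
    # Positive definiteness: the Cholesky factorization succeeds iff X_new is PD.
    pd <- !inherits(try(chol(X_new), silent = TRUE), "try-error")
    # Armijo sufficient-decrease condition along the descent direction d.
    if (pd && f_obj(X_new) <= f_obj(X) + sigma * alpha * sum(grad(X) * d))
      break
    alpha <- beta * alpha  # shrink the step and retry
  }
  X_new
}
```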
$\tilde{Z}$ Update

$$\tilde{Z}^{(k+1)} := \operatorname*{argmin}_{\tilde{Z}} \; \lambda_1 \|Z_1\|_1 + \lambda_2 \|Z_2\|_F^2 + \frac{\rho}{2} \left\| \tilde{X}^{(k+1)} - \tilde{Z} + \frac{Y^{(k)}}{\rho} \right\|_F^2.$$

This problem separates into two independent subproblems:

$$Z_1^{(k+1)} := \operatorname*{argmin}_{Z_1} \; \lambda_1 \|Z_1\|_1 + \frac{\rho}{2} \left\| \vec{X}^{(k+1)} - Z_1 + \frac{Y_1^{(k)}}{\rho} \right\|_F^2,$$

$$Z_2^{(k+1)} := \operatorname*{argmin}_{Z_2} \; \lambda_2 \|Z_2\|_F^2 + \frac{\rho}{2} \left\| D\vec{X}^{(k+1)} - Z_2 + \frac{Y_2^{(k)}}{\rho} \right\|_F^2.$$
Solution of the $\tilde{Z}$ Update

The first subproblem is solved by elementwise soft-thresholding:

$$Z_1^{(k+1)} = S_{\lambda_1/\rho}\!\left( \vec{X}^{(k+1)} + \frac{Y_1^{(k)}}{\rho} \right),$$

and the second has the closed-form solution

$$Z_2^{(k+1)} = \frac{\rho D\vec{X}^{(k+1)} + Y_2^{(k)}}{2\lambda_2 + \rho}.$$
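Both closed forms are one-liners in R; a minimal sketch:

```r
# Elementwise soft-thresholding operator S_kappa.
soft_threshold <- function(A, kappa) sign(A) * pmax(abs(A) - kappa, 0)

update_Z1 <- function(Xvec, Y1, rho, lambda1)
  soft_threshold(Xvec + Y1 / rho, lambda1 / rho)

update_Z2 <- function(DXvec, Y2, rho, lambda2)
  (rho * DXvec + Y2) / (2 * lambda2 + rho)
```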
Numerical Results
Execution environment:
Intel Core i7-4770 CPU @ 3.40 GHz (8 CPUs)
8 GB RAM
R ver. 3.3.65126.0
Windows 7 Professional 64-bit (6.1, build 7601)
We verify, on random data sets and on real data:
Convergence speed
Sparsity of the estimates
Figure: All D and Simplified D.
Figure: Runtime for n = 10, λ1 = 0.01, λ2 = 0.01.
Figure: (i) Objective values, (ii) primal residuals, and (iii) dual residuals for n = 10, T = 5, λ1 = 0.01, λ2 = 0.01.
Figure: The sparsity pattern of the estimates from the model with n = 10, T = 5, λ1 = 0.01, λ2 = 0.01.
Analysis on real data
Stock data of 50 randomly selected companies from NASDAQ
Period: 4 January 2011 to 31 December 2014
Ticker  Name  Sector
PDCO Patterson Companies, Inc. Health Care
OMER Omeros Corporation Health Care
HEAR Turtle Beach Corporation Consumer Durables
QBAK Qualstar Corporation Technology
UTHR United Therapeutics Corporation Health Care
PLCE The Children's Place Retail Stores, Inc. Consumer Services
SUSQ Susquehanna Bancshares, Inc. Finance
IDCC InterDigital, Inc. Miscellaneous
ELON Echelon Corporation Technology
BGCP BGC Partners, Inc. Finance
MRGE Merge Healthcare Incorporated. Technology
TISA Top Image Systems, Ltd. Technology
IPXL Impax Laboratories, Inc. Health Care
ROVI Rovi Corporation Miscellaneous
IBCP Independent Bank Corporation Finance
BABY Natus Medical Incorporated Health Care
HFFC HF Financial Corp. Finance
ISLE Isle of Capri Casinos, Inc. Consumer Services
ITIC Investors Title Company Finance
SLGN Silgan Holdings Inc. Consumer Durables
ZIOP ZIOPHARM Oncology Inc Health Care
MXIM Maxim Integrated Products, Inc. Technology
NEPT Neptune Technologies & Bioresources Inc Health Care
UTMD Utah Medical Products, Inc. Health Care
… (the remaining 26 of the 50 companies are omitted)
Figure: (i) Objective values, (ii) primal residuals, and (iii) dual residuals for T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data.
Figure: The sparsity pattern of the estimates from the model with T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data.
Figure: The covariance matrix plot of the estimates from the model with T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data.
Figure: Negative covariance values of the estimates from the model with T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data.
Figure: Negative covariance values of the estimates from the model with T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data (zoom on T = 1).
Figure: The weak positivity of the estimates from the model with T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data.
Figure: The weak positivity of the estimates from the model with T = 5, λ1 = 0.01, λ2 = 0.01 on real stock data (zoom on T = 1).
Conclusion and Discussion
Conclusions:
The ADMM algorithm with steepest gradient descent for the $\tilde{X}$ update minimized our objective function f(X).
Computation time grows considerably as T increases.
Discussion:
Use a Newton direction instead of steepest gradient descent (cf. QUIC).
Use block coordinate descent as in BIG & QUIC.
Introduce a decay constant in D.
References I
[De72] Dempster, A. P. (1972). Covariance selection. Biometrics 28 157-175.
[MB06] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. Annals of Statistics 34 1436-1462.
[BG08] Banerjee, O., El Ghaoui, L. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research 9 485-516.
[Ti08] Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432-441.
[Ma52] Markowitz, H. (1952). Portfolio selection. The Journal of Finance 7 77-91.
[Ti96] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B 58 267-288.
[Bo11] Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3 1-122.
References II
[Hs13] Hsieh, C. J., Sustik, M. A., Dhillon, I., Ravikumar, P. and Poldrack, R. (2013). BIG & QUIC: Sparse inverse covariance estimation for a million variables. In Advances in Neural Information Processing Systems 3165-3173.
[Bv11] Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer-Verlag, Berlin.
[WB12] Wahlberg, B., Boyd, S., Annergren, M. and Wang, Y. (2012). An ADMM algorithm for a class of total variation regularized estimation problems. arXiv:1203.1828.