Topics In Digital Contents Signals – Special Issue 01
The Least-Square Optimization and
Sparse Linear System Solver
Presented by Ji-yong Kwon
Visual Computing Lab.
1
Outline
• What is the least-square optimization?
– Optimization
– Least-square optimization
– Application to computer graphics
• Poisson image cloning
• What is the sparse linear system?
– Dense matrix vs. sparse matrix
– Steepest-descent approach
– Conjugate Gradient method
2
Reference
• Valuable reading materials
– Practical Least-Squares for Computer Graphics
• Pighin and Lewis
• ACM SIGGRAPH 2007 course note
• http://graphics.stanford.edu/~jplewis/lscourse/ls.pdf
– An Introduction to the Conjugate Gradient Method Without the
Agonizing Pain
• J. R. Shewchuk
• CMU tech. report
• http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
3
The Least-Square Optimization
4
Simple Example
• Example
– Line equation passing through two points on the a-b plane
• One unique solution
5
(Figure: two points $(a_1, b_1)$ and $(a_2, b_2)$ on the $a$-$b$ plane, with the line through them.)

Line equation: $ax + by + 1 = 0$

We know the two points, so:
$$a_1 x + b_1 y + 1 = 0, \qquad a_2 x + b_2 y + 1 = 0$$

In matrix form:
$$\begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, \qquad \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{a_1 b_2 - a_2 b_1} \begin{bmatrix} b_2 & -b_1 \\ -a_2 & a_1 \end{bmatrix} \begin{bmatrix} -1 \\ -1 \end{bmatrix}$$
Simple Example
• Example
– Line equation passing through three points on the a-b plane
• No exact solution
6
(Figure: three points $(a_1, b_1)$, $(a_2, b_2)$, $(a_3, b_3)$ on the $a$-$b$ plane.)

Line equation: $ax + by + 1 = 0$

We know the three points, so:
$$a_i x + b_i y + 1 = 0, \qquad i = 1, 2, 3$$

In matrix form (an over-determined system):
$$\begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$$
Why Optimization?
• Observation
– Unfortunately, many problems do not have a unique solution.
• Too many solutions, or
• No exact solution
– Concept of Optimization
• Find approximated solution
– Not exactly satisfy conditions,
– But satisfy conditions as much as possible.
• Strategy
– Set the objective (or energy) function
– Find a solution that minimizes (or maximizes) the objective
function.
7
Why Optimization?
• Objective function
– A.K.A. energy function
– Input: a set of variables that we want to know
– Output: a scalar value
– Output value is used for estimation of solution’s quality
• Generally, a small output value (small energy) → good solution
– A solution that minimizes the output value of the objective
function → optimized solution
– Designing a good objective function is the most
important task in optimization.
8
Simple Example
• Example again,
– Line equation passing three points on the a-b plane
• No exact solution,
• But we can compute the approximated solution.
9
(Figure: three points $(a_1, b_1)$, $(a_2, b_2)$, $(a_3, b_3)$ on the $a$-$b$ plane with a fitted line.)
– Passing through all points would be
impossible,
– Find the line that minimizes the
distances from all points
Objective Function
• Example again,
– How to compute ‘distances’?
• Setting the objective function
10
(Figure: three points and a candidate line on the $a$-$b$ plane.)

Point on the line: $ax + by + 1 = 0$

Point off the line:
$$a x + b y + 1 > 0 \quad\text{or}\quad a x + b y + 1 < 0$$

Objective function:
$$O(x, y) = \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2$$
Optimization Problem
• Problem description
– Find the line coefficients (x, y) that minimize a sum of squared
distances between the line and given points.
– Mathematically,
– More compact description
11
$$\text{minimize} \quad \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2$$

$$(x_o, y_o) = \arg\min_{x, y} \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2$$
Solution
• Solution of the example
– The objective function has
a parabolic shape
→ The objective function is minimized
where its gradient is zero
12

$$O(x, y) = \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2$$

(Figure: the paraboloid $O(x, y)$.)

$$\frac{\partial O}{\partial x} = \sum_{i=1}^{3} 2 a_i \left( a_i x + b_i y + 1 \right) = 2 \left( x \sum_{i=1}^{3} a_i^2 + y \sum_{i=1}^{3} a_i b_i + \sum_{i=1}^{3} a_i \right) = 0$$

$$\frac{\partial O}{\partial y} = \sum_{i=1}^{3} 2 b_i \left( a_i x + b_i y + 1 \right) = 2 \left( x \sum_{i=1}^{3} a_i b_i + y \sum_{i=1}^{3} b_i^2 + \sum_{i=1}^{3} b_i \right) = 0$$
Solution
• Solution of the example
13
(Figure: the paraboloid $O(x, y)$ with its minimum marked.)

$$x \sum_{i=1}^{3} a_i^2 + y \sum_{i=1}^{3} a_i b_i = -\sum_{i=1}^{3} a_i$$
$$x \sum_{i=1}^{3} a_i b_i + y \sum_{i=1}^{3} b_i^2 = -\sum_{i=1}^{3} b_i$$

In matrix form:
$$\begin{bmatrix} \sum a_i^2 & \sum a_i b_i \\ \sum a_i b_i & \sum b_i^2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -\sum a_i \\ -\sum b_i \end{bmatrix}, \qquad \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \sum a_i^2 & \sum a_i b_i \\ \sum a_i b_i & \sum b_i^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum a_i \\ -\sum b_i \end{bmatrix}$$
Squared Distance
• Why ‘squared’?
– Naïve sum
• Each distance would be positive or negative,
• Sum of signed distances does not estimate the quality of the
solution.
– Sum of the absolute distance
• Distances would be 0 or positive,
• But the minimum point cannot be computed easily
(Not differentiable at the minimum point)
– Sum of the square distance
• Distances would be 0 or positive,
• Differentiable at the minimum point,
• The shape of the square function would be parabolic.
14
Naïve sum:
$$O(x, y) = \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)$$

Sum of absolute distances:
$$O(x, y) = \sum_{i=1}^{3} \left| a_i x + b_i y + 1 \right|$$

Sum of squared distances:
$$O(x, y) = \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2$$
Another Solution
• Pseudo-inverse
– The inverse matrix can be computed only if the matrix is
square (and non-singular)
– The pseudo-inverse matrix
– From the example before…
– Solution computed by using pseudo-inverse method
= Solution computed by using least-square optimization
15

From the example before:
$$\mathbf{A} \mathbf{x} = \mathbf{b}, \qquad \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$$

The pseudo-inverse matrix:
$$\mathbf{A}^{+} = \left( \mathbf{A}^T \mathbf{A} \right)^{-1} \mathbf{A}^T \qquad \left( \mathbf{A}^{+} \mathbf{A} = \mathbf{I}, \ \text{cf. } \mathbf{A}^{-1} \mathbf{A} = \mathbf{I} \right)$$

$$\mathbf{x} = \left( \mathbf{A}^T \mathbf{A} \right)^{-1} \mathbf{A}^T \mathbf{b}$$
Background
• Background of matrix differentiation
– Very convenient technique for deriving matrix systems
– Reference
• A. M. Mathai, ‘Jacobians of Matrix Transformations and
Functions of Matrix Argument’, World Scientific Publishing, 1997
– Contents that would be covered in this lecture…
• A scalar value function of a vector
• A vector function of a vector
16
A scalar function of a vector:
$$\mathbf{x} = \left[ x_1, x_2, \dots, x_p \right]^T, \quad y = f(\mathbf{x}) \ \Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_p} \right]^T$$
Background
• Theorem 1
– Let $\mathbf{x} = \left[ x_1, x_2, \dots, x_p \right]^T$ be the vector of variables and
$\mathbf{a} = \left[ a_1, a_2, \dots, a_p \right]^T$ be a constant vector, then
$$y = \mathbf{a}^T \mathbf{x} \ \Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = \mathbf{a}$$
$$y = \mathbf{x}^T \mathbf{x} \ \Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = 2 \mathbf{x}$$
$$y = \mathbf{x}^T \mathbf{A} \mathbf{x} \ \Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = \left( \mathbf{A} + \mathbf{A}^T \right) \mathbf{x}$$
Background
• Proof of Theorem 1
With $\mathbf{x} = \left[ x_1, \dots, x_p \right]^T$ and $\mathbf{a} = \left[ a_1, \dots, a_p \right]^T$:

1) $y = \mathbf{a}^T \mathbf{x} = a_1 x_1 + a_2 x_2 + \cdots + a_p x_p$
$$\Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = \left[ \frac{\partial y}{\partial x_1}, \dots, \frac{\partial y}{\partial x_p} \right]^T = \left[ a_1, \dots, a_p \right]^T = \mathbf{a}$$

2) $y = \mathbf{x}^T \mathbf{x} = x_1^2 + x_2^2 + \cdots + x_p^2$
$$\Rightarrow\ \frac{\partial y}{\partial x_i} = 2 x_i \ \Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = \left[ 2 x_1, \dots, 2 x_p \right]^T = 2 \mathbf{x}$$

3) $y = \mathbf{x}^T \mathbf{A} \mathbf{x} = \sum_{i=1}^{p} \sum_{j=1}^{p} a_{ij} x_i x_j$
$$\Rightarrow\ \frac{\partial y}{\partial x_i} = \sum_{j=1}^{p} a_{ij} x_j + \sum_{j=1}^{p} a_{ji} x_j \ \Rightarrow\ \frac{\partial y}{\partial \mathbf{x}} = \left( \mathbf{A} + \mathbf{A}^T \right) \mathbf{x}$$
Background
• Theorem 2
– Let $\mathbf{x} = \left[ x_1, x_2, \dots, x_p \right]^T$ and $\mathbf{y} = \left[ y_1, y_2, \dots, y_p \right]^T$, then
19
$$\mathbf{J} = \frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \left[ \frac{\partial y_i}{\partial x_j} \right] = \begin{bmatrix} \frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \cdots & \frac{\partial y_1}{\partial x_p} \\ \vdots & & \vdots \\ \frac{\partial y_p}{\partial x_1} & \frac{\partial y_p}{\partial x_2} & \cdots & \frac{\partial y_p}{\partial x_p} \end{bmatrix}, \qquad d\mathbf{y} = \mathbf{J}\, d\mathbf{x}$$
Matrix Formulation
• Example again,
– Can be described as a matrix form
20
$$O(x, y) = \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2 = \left\| \begin{bmatrix} a_1 x + b_1 y + 1 \\ a_2 x + b_2 y + 1 \\ a_3 x + b_3 y + 1 \end{bmatrix} \right\|^2 = \left\| \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} \right\|^2$$

$$\Rightarrow\ O(\mathbf{x}) = \left\| \mathbf{A} \mathbf{x} - \mathbf{b} \right\|^2 = \left( \mathbf{A} \mathbf{x} - \mathbf{b} \right)^T \left( \mathbf{A} \mathbf{x} - \mathbf{b} \right)$$
Matrix Formulation
• Matrix formulation of the least square optimization
21
$$\frac{\partial O}{\partial \mathbf{x}} = \frac{\partial}{\partial \mathbf{x}} \left( \mathbf{A} \mathbf{x} - \mathbf{b} \right)^T \left( \mathbf{A} \mathbf{x} - \mathbf{b} \right) = \frac{\partial}{\partial \mathbf{x}} \left( \mathbf{x}^T \mathbf{A}^T \mathbf{A} \mathbf{x} - 2 \mathbf{b}^T \mathbf{A} \mathbf{x} + \mathbf{b}^T \mathbf{b} \right) = 2 \mathbf{A}^T \mathbf{A} \mathbf{x} - 2 \mathbf{A}^T \mathbf{b} = \mathbf{0}$$

$$\therefore\ \mathbf{A}^T \mathbf{A} \mathbf{x} = \mathbf{A}^T \mathbf{b}, \qquad \mathbf{x} = \left( \mathbf{A}^T \mathbf{A} \right)^{-1} \mathbf{A}^T \mathbf{b}$$

Can be solved by using the linear system solver.
Constraints
• Slightly different example
– Line that minimizes distance from the red points,
– One additional constraint
• This line should pass the white point
22
(Figure: three red points and one white constraint point $(a_c, b_c)$ on the $a$-$b$ plane.)

$$\text{minimize} \quad \sum_{i=1}^{3} \left( a_i x + b_i y + 1 \right)^2 \qquad \text{subject to} \quad a_c x + b_c y + 1 = 0$$
Constraints
• Constraints
– A.K.A. hard constraints
• cf. soft constraints → objective (energy) term
– Condition that must be satisfied
– Constrained optimization
• Optimization with some constraints
– (Linear / Non-linear) (equality / inequality) constraint
– This lecture only covers linear equality constraint
23
Constrained Optimization
• Lagrange multiplier
– Constrained optimization can be expressed as an
unconstrained optimization with a Lagrange multiplier
– Why is it possible?
• This lecture does not cover the theory of the Lagrange
multiplier.
• Reference
– http://en.wikipedia.org/wiki/Lagrange_multipliers
24
$$\text{minimize} \quad O(x, y) \qquad \text{subject to} \quad C(x, y) = c$$
$$\Downarrow$$
$$\text{minimize} \quad O(x, y) + \lambda \left( C(x, y) - c \right)$$
Constrained Optimization
• Solution of constrained optimization
– At the minimum point, the gradient of objective function
should be zero
– Also can be solved by using the linear system solver
25
$$\arg\min_{\mathbf{x}} \ \frac{1}{2} \left\| \mathbf{A} \mathbf{x} - \mathbf{b} \right\|^2 + \lambda \left( \mathbf{c}^T \mathbf{x} + 1 \right)$$

$$\frac{\partial O}{\partial \mathbf{x}} = \mathbf{A}^T \mathbf{A} \mathbf{x} - \mathbf{A}^T \mathbf{b} + \lambda \mathbf{c} = \mathbf{0}, \qquad \frac{\partial O}{\partial \lambda} = \mathbf{c}^T \mathbf{x} + 1 = 0$$

$$\begin{bmatrix} \mathbf{A}^T \mathbf{A} & \mathbf{c} \\ \mathbf{c}^T & 0 \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \lambda \end{bmatrix} = \begin{bmatrix} \mathbf{A}^T \mathbf{b} \\ -1 \end{bmatrix} \quad\Leftrightarrow\quad \hat{\mathbf{A}} \hat{\mathbf{x}} = \hat{\mathbf{b}}$$
Constrained Optimization
• Case for multiple constraints
– Multiple Lagrange multipliers
26
$$\arg\min_{\mathbf{x}} \ \frac{1}{2} \left\| \mathbf{A} \mathbf{x} - \mathbf{b} \right\|^2 + \boldsymbol{\lambda}^T \left( \mathbf{C} \mathbf{x} - \mathbf{c} \right)$$

$$\frac{\partial O}{\partial \mathbf{x}} = \mathbf{A}^T \mathbf{A} \mathbf{x} - \mathbf{A}^T \mathbf{b} + \mathbf{C}^T \boldsymbol{\lambda} = \mathbf{0}, \qquad \frac{\partial O}{\partial \boldsymbol{\lambda}} = \mathbf{C} \mathbf{x} - \mathbf{c} = \mathbf{0}$$

$$\begin{bmatrix} \mathbf{A}^T \mathbf{A} & \mathbf{C}^T \\ \mathbf{C} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} \mathbf{A}^T \mathbf{b} \\ \mathbf{c} \end{bmatrix} \quad\Leftrightarrow\quad \hat{\mathbf{A}} \hat{\mathbf{x}} = \hat{\mathbf{b}}$$
Implementation
• How to solve the linear system?
– Provided by many libraries.
– Using OpenCV,
• Data structure for storing a matrix
– CvMat *aMat = cvCreateMat(nRow, nCol, CV_32F);
– cvReleaseMat(&aMat);
• Set and get the element of a matrix
– cvmSet(aMat, m, n, 1.0f);
– float mn = cvmGet(aMat, m, n);
• Linear operation
– cvAdd(aMat, bMat, cMat);
– cvGEMM(aMat, aMat, 1.0f, NULL, 0.0f, ataMat, CV_GEMM_A_T);
• Solver
– cvSolve(aMat, bVect, xVect, CV_LU);
27
Practical Example
• Poisson image cloning
– An interesting way to composite images
– Paste the modified gradient of the source image while satisfying
the boundary colors
28
Poisson Image Cloning
• Problem description
– Source image pixel $s_{x,y}$
– Target image pixel $t_{x,y}$
– Unknown new image pixel $n_{x,y}$
– Objective
• Minimize difference of gradients between a new image and a
source image
– Constraint
• Pixel values at the boundary should be equal to those of the
target image
29
(Figure: source image $s$, target image $t$, and the composite $n$ pasted into $t$.)
Poisson Image Cloning
• Problem description
– Mathematical formulation
30
$$\text{minimize} \quad \sum_{(x,y),(x+1,y) \in \Omega} \left( \left( n_{x+1,y} - n_{x,y} \right) - \left( s_{x+1,y} - s_{x,y} \right) \right)^2 + \sum_{(x,y),(x,y+1) \in \Omega} \left( \left( n_{x,y+1} - n_{x,y} \right) - \left( s_{x,y+1} - s_{x,y} \right) \right)^2$$

$$\text{subject to} \quad n_{x,y} = t_{x,y} \ \text{ for } (x, y) \in \partial\Omega$$
Poisson Image Cloning
• Matrix formulation
31
$$\arg\min_{\mathbf{n}} \ \frac{1}{2} \left\| \mathbf{G} \mathbf{n} - \mathbf{G} \mathbf{s} \right\|^2 + \boldsymbol{\lambda}^T \left( \mathbf{B} \mathbf{n} - \mathbf{B} \mathbf{t} \right)$$

Each row of $\mathbf{G}$ computes one gradient, e.g. $[\, \cdots \ {-1} \ \cdots \ 1 \ \cdots ]$ for a pixel pair $(x, y)$, $(x+1, y)$ or $(x, y)$, $(x, y+1)$; each row of $\mathbf{B}$ selects one boundary pixel $(x, y) \in \partial\Omega$.

$$\frac{\partial O}{\partial \mathbf{n}} = \mathbf{G}^T \mathbf{G} \mathbf{n} - \mathbf{G}^T \mathbf{G} \mathbf{s} + \mathbf{B}^T \boldsymbol{\lambda} = \mathbf{0}, \qquad \frac{\partial O}{\partial \boldsymbol{\lambda}} = \mathbf{B} \mathbf{n} - \mathbf{B} \mathbf{t} = \mathbf{0}$$

$$\begin{bmatrix} \mathbf{G}^T \mathbf{G} & \mathbf{B}^T \\ \mathbf{B} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{n} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} \mathbf{G}^T \mathbf{G} \mathbf{s} \\ \mathbf{B} \mathbf{t} \end{bmatrix}$$
Poisson Image Cloning
• Why is it called ‘Poisson’?
32

(Figure: the 5-point stencil — weight $4$ at $(x, y)$ and $-1$ at $(x{-}1, y)$, $(x{+}1, y)$, $(x, y{-}1)$, $(x, y{+}1)$.)

$$\frac{\partial O}{\partial n_{x,y}} = \left( 4 n_{x,y} - n_{x-1,y} - n_{x+1,y} - n_{x,y-1} - n_{x,y+1} \right) - \left( 4 s_{x,y} - s_{x-1,y} - s_{x+1,y} - s_{x,y-1} - s_{x,y+1} \right) = 0$$

That is, the optimality condition is a discrete Poisson equation:
$$\frac{\partial^2 n}{\partial x^2} + \frac{\partial^2 n}{\partial y^2} = \Delta s$$
Poisson Image Cloning
• Implementation issues
– Computation of $\mathbf{G}^T \mathbf{G}$ can be expensive
→ Construct $\mathbf{L} = \mathbf{G}^T \mathbf{G}$ directly: a row of the Laplacian matrix $\mathbf{L}$ is
$$[\, \cdots \ {-1} \ \cdots \ {-1} \ \ 4 \ \ {-1} \ \cdots \ {-1} \ \cdots ]$$
– The number of neighbors is not always equal to four: a pixel with only
3 (or 2) neighbors inside the region gets a diagonal entry of 3 (or 2).
33
Sparse Linear System Solver
34
Dense vs. Sparse
• In previous example,
– Assume that the size of the composite image is 200 x 200
– 200 x 200 → 40,000 pixels → 40,000 unknowns
– We should solve the linear system
– Size of A: 40,000 x 40,000
→ 1,600,000,000 elements → at 4 bytes per float,
about 6.4 GB
– Computing the inverse of a (40,000 x 40,000) matrix is very, very
expensive
35
bAx =
Dense vs. Sparse
• Concept of the dense / sparse matrix
– Dense matrix
• The matrix that has a small number of zero elements
– Sparse matrix
• The matrix that has a large number of zero elements
• Storing the dense / sparse matrix
– Dense matrix
• Store all elements
• [1, 0, 0; 0, -1, 0; 0, 0, 2]
– Sparse matrix
• Store only non-zero elements
• [(1,1,1), (2,2,-1), (3,3,2)]
36

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$
Dense vs. Sparse
• Linear system of Poisson image cloning
– $\mathbf{G}^T \mathbf{G}$ has at most 5 non-zero elements per row.
– The number of non-zero elements of $\mathbf{B}$ is the same as the number of
boundary pixels.
– 200 x 200 images
→ about (40,000 x 5 + α) stored elements
– Efficient multiplication
37

$$\begin{bmatrix} \mathbf{G}^T \mathbf{G} & \mathbf{B}^T \\ \mathbf{B} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{n} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} \mathbf{G}^T \mathbf{G} \mathbf{s} \\ \mathbf{B} \mathbf{t} \end{bmatrix}$$
Steepest-Descent Method
• How to solve the sparse linear system?
– Computing the inverse is expensive
→ Find an optimized solution iteratively
– Strategy of the iterative method
• Set the initial solution
• Until the objective value converges,
– Compute the gradient of
the objective function
at the current solution
– Move the solution with
an inverse direction
of the gradient
38

(Figure: descent steps on the paraboloid $O(x, y)$.)

$$\mathbf{x}_{i+1} = \mathbf{x}_i - \alpha \left. \frac{\partial O}{\partial \mathbf{x}} \right|_{\mathbf{x}_i}$$
Steepest-Descent Method
• Steepest-descent method for linear system
– Gradient of the objective function (assume A is symmetric)
– Next iteration
– How to determine the step size $\alpha$?
39

$$\mathbf{A} \mathbf{x} = \mathbf{b} \quad\Leftrightarrow\quad \text{minimize } f(\mathbf{x}) = \frac{1}{2} \mathbf{x}^T \mathbf{A} \mathbf{x} - \mathbf{b}^T \mathbf{x} + c$$

$$f'(\mathbf{x}) = \mathbf{A} \mathbf{x} - \mathbf{b} \quad\Rightarrow\quad \mathbf{x}_{i+1} = \mathbf{x}_i + \alpha \left( \mathbf{b} - \mathbf{A} \mathbf{x}_i \right) = \mathbf{x}_i + \alpha \mathbf{r}_i$$
Steepest-Descent Method
• Determine the optimal step size $\alpha$
– $\alpha$ minimizes the objective function, so set its derivative along the step to zero
40

$$\frac{d}{d\alpha} f\!\left( \mathbf{x}_{i+1} \right) = f'\!\left( \mathbf{x}_{i+1} \right)^T \frac{d \mathbf{x}_{i+1}}{d\alpha} = 0$$

$$\left( \mathbf{b} - \mathbf{A} \mathbf{x}_{i+1} \right)^T \mathbf{r}_i = \left( \mathbf{b} - \mathbf{A} \left( \mathbf{x}_i + \alpha \mathbf{r}_i \right) \right)^T \mathbf{r}_i = \left( \mathbf{r}_i - \alpha \mathbf{A} \mathbf{r}_i \right)^T \mathbf{r}_i = \mathbf{r}_i^T \mathbf{r}_i - \alpha\, \mathbf{r}_i^T \mathbf{A} \mathbf{r}_i = 0$$

$$\therefore\ \alpha = \frac{\mathbf{r}_i^T \mathbf{r}_i}{\mathbf{r}_i^T \mathbf{A} \mathbf{r}_i}$$
Conjugate Gradient Method
• Performance of steepest-descent method
– The simplest iterative algorithm for solving a linear system,
– but slow convergence,
• especially near the optimum.
• Conjugate Gradient Method
– One of the most popular methods for solving sparse linear
systems
– Uses the gradient direction and its conjugate directions
to find the solution
– In exact arithmetic, the conjugate gradient method converges in at
most N iterations for an (N x N) matrix system
– This lecture does not cover the details of the CGM
41
Conjugate Gradient Method
• Many improved versions of CGM
– For more stable and faster convergence
– Preconditioned Conjugate Gradient Method
– Conjugate Gradient Squared Method
– Bi-Conjugate Gradient Method
– Bi-Conjugate Gradient Stabilized Method
– Reference
• J. R. Shewchuk , An Introduction to the Conjugate Gradient
Method Without the Agonizing Pain, CMU tech. report
• Wikipedia
42
Implementation
• Library for sparse linear solver
– TAUCS
• http://www.tau.ac.il/~stoledo/taucs/
– OpenNL
• http://alice.loria.fr/index.php/software/4-library/23-opennl.html
• My original library for CGM
– Made by Ji-yong Kwon
– Simple implementation for dense / sparse matrices
– Features
• 2 types of data structure: CDenseVect, CSparseMat
• Linear operations between data structures
• A set of sparse matrix solver (CGM, BiCGSTAB)
• Multi-core processing
43
Implementation
• Basic usage
– CSparseMat aMat;
CDenseVect bVect, xVect;
CSparseSolverBiCGSTAB solver;
– Memory allocation
• bVect.Init(nRow);
• aMat.Create(nRow, nCol, 32);
// maximum number of elements per row
• Automatic de-allocation
– Set/get element
• bVect.Set(row, 1.0f);
float value = bVect.Get(row);
• aMat.AddElement(row, col, 1.0f);
float value = aMat.GetElement(row, elementId);
44
Implementation
• Basic usage
– Solver initialization
• solver.InitSolver(aMat, bVect, xVect, 1.0f);
– This function initializes the solver’s state
– xVect should be initialized.
– Solve
• while(solver.CheckTermination())
solver.OneStep(aMat, bVect, xVect);
– Residual
• float residual = solver.GetResidual();
– Additional comment
• For ‘Visual Studio’, check project property → C/C++ →
Language → Provide OpenMP: Yes
• Release mode is much faster than debug mode.
45
Summary
• Concept of the least-square optimization
– Useful for converting a hard problem into an approximated
version
– Can be solved by using the linear system solver
• Although the problem has multiple linear equality constraints
• Concept of the sparse linear system
– Solving a large linear system with a dense matrix can be
very expensive
→ Use iterative methods to solve the sparse linear system
efficiently.
• The most important thing is to design the
objective function
46
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Mayapuri Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
 
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
 
Solution chemistry, Moral and Normal solutions
Solution chemistry, Moral and Normal solutionsSolution chemistry, Moral and Normal solutions
Solution chemistry, Moral and Normal solutions
 
Neurodevelopmental disorders according to the dsm 5 tr
Neurodevelopmental disorders according to the dsm 5 trNeurodevelopmental disorders according to the dsm 5 tr
Neurodevelopmental disorders according to the dsm 5 tr
 
Heredity: Inheritance and Variation of Traits
Heredity: Inheritance and Variation of TraitsHeredity: Inheritance and Variation of Traits
Heredity: Inheritance and Variation of Traits
 

Optimize Least-Square Methods and Sparse Linear Systems

  • 1. Topics In Digital Contents Signals – Special Issue 01: The Least-Square Optimization and Sparse Linear System Solver. Presented by Ji-yong Kwon, Visual Computing Lab.
  • 2. Outline • What is the least-square optimization? – Optimization – Least-square optimization – Application to computer graphics (Poisson image cloning) • What is the sparse linear system? – Dense matrix vs. sparse matrix – Steepest-descent approach – Conjugate Gradient method
  • 3. Reference • Valuable reading materials – Practical Least-Squares for Computer Graphics, Pighin and Lewis, ACM SIGGRAPH 2007 course notes: http://graphics.stanford.edu/~jplewis/lscourse/ls.pdf – An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, J. R. Shewchuk, CMU tech report: http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
  • 4. The Least-Square Optimization
  • 5. Simple Example • A line passing through two points on the a-b plane: one unique solution. Line equation: a x + b y + 1 = 0. Given (a_1, b_1) and (a_2, b_2): a_1 x + b_1 y + 1 = 0 and a_2 x + b_2 y + 1 = 0, i.e. [a_1 b_1; a_2 b_2][x; y] = [-1; -1], solved by inverting the 2x2 matrix.
  • 6. Simple Example • A line passing through three points on the a-b plane: no exact solution. The conditions a_i x + b_i y + 1 = 0 for i = 1, 2, 3 give the overdetermined system [a_1 b_1; a_2 b_2; a_3 b_3][x; y] = [-1; -1; -1].
  • 7. Why Optimization? • Observation – Unfortunately, many problems do not have a unique solution: too many solutions, or no exact solution. – Concept of optimization: find an approximate solution that does not satisfy the conditions exactly, but satisfies them as much as possible. • Strategy – Set an objective (or energy) function, then find the solution that minimizes (or maximizes) it.
  • 8. Why Optimization? • Objective function – a.k.a. energy function. – Input: the set of variables we want to know; output: a scalar value used to estimate the quality of a candidate solution. – Generally, a small output value (small energy) means a good solution, so the solution that minimizes the objective function is the optimized solution. – Designing a good objective function is the most important task in optimization.
  • 9. Simple Example • The three-point example again – No exact solution, but we can compute an approximate one: passing through all points is impossible, so find the line that minimizes the distances from all points.
  • 10. Objective Function • How do we compute 'distances'? A point on the line satisfies a x + b y + 1 = 0; a point off the line gives a x + b y + 1 > 0 or < 0. Objective function: O(x, y) = Σ_{i=1}^{3} (a_i x + b_i y + 1)^2.
  • 11. Optimization Problem • Problem description – Find the line coefficients (x, y) that minimize the sum of squared distances between the line and the given points: minimize Σ_{i=1}^{3} (a_i x + b_i y + 1)^2, or more compactly, (x_o, y_o) = argmin_{x,y} Σ_{i=1}^{3} (a_i x + b_i y + 1)^2.
  • 12. Solution • The objective function O(x, y) = Σ_{i=1}^{3} (a_i x + b_i y + 1)^2 has a parabolic shape, so it is minimized where its gradient is zero: ∂O/∂x = Σ 2 a_i (a_i x + b_i y + 1) = 2 (Σ a_i^2 x + Σ a_i b_i y + Σ a_i) = 0 and ∂O/∂y = Σ 2 b_i (a_i x + b_i y + 1) = 2 (Σ a_i b_i x + Σ b_i^2 y + Σ b_i) = 0.
  • 13. Solution • Rearranging the zero-gradient conditions: Σ a_i^2 x + Σ a_i b_i y = -Σ a_i and Σ a_i b_i x + Σ b_i^2 y = -Σ b_i, i.e. [Σ a_i^2, Σ a_i b_i; Σ a_i b_i, Σ b_i^2][x; y] = [-Σ a_i; -Σ b_i], so [x; y] is obtained by inverting this 2x2 matrix.
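The 2x2 normal-equation system above can be sketched in code. This is an illustrative example, not code from the slides; `fit_line` is a hypothetical name, and the 2x2 system is solved by Cramer's rule.

```python
def fit_line(points):
    # Accumulate the sums that form the normal equations
    saa = sum(a * a for a, b in points)
    sab = sum(a * b for a, b in points)
    sbb = sum(b * b for a, b in points)
    sa = sum(a for a, b in points)
    sb = sum(b for a, b in points)
    # Solve [saa sab; sab sbb][x; y] = [-sa; -sb] by Cramer's rule
    det = saa * sbb - sab * sab
    x = (-sa * sbb + sb * sab) / det
    y = (-sb * saa + sa * sab) / det
    return x, y
```

For points that happen to be collinear, the fitted coefficients reproduce the exact line; otherwise they minimize the sum of squared residuals.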
  • 14. Squared Distance • Why 'squared'? – Naive sum O = Σ (a_i x + b_i y + 1): each distance may be positive or negative, so a sum of signed distances does not estimate the quality of the solution. – Sum of absolute distances O = Σ |a_i x + b_i y + 1|: distances are non-negative, but the minimum cannot be computed easily (not differentiable at the minimum point). – Sum of squared distances O = Σ (a_i x + b_i y + 1)^2: distances are non-negative, the function is differentiable at the minimum, and its shape is parabolic.
  • 15. Another Solution • Pseudo-inverse – The inverse matrix can be computed if and only if the matrix is square. For the overdetermined system A x = b from the example (A the 3x2 matrix of point coordinates, b = [-1; -1; -1]), use the pseudo-inverse A^+ = (A^T A)^{-1} A^T, which satisfies A^+ A = I (the analogous form A^T (A A^T)^{-1} satisfies A A^+ = I). This gives x = (A^T A)^{-1} A^T b. – The solution computed with the pseudo-inverse equals the solution computed by least-square optimization.
  • 16. Background • Matrix differentiation – a very convenient technique for deriving matrix systems. Reference: A. M. Mathai, 'Jacobians of Matrix Transformations and Functions of Matrix Argument', World Scientific Publishing, 1997. – This lecture covers: a scalar-valued function of a vector, y = f(x) with x = [x_1, ..., x_p]^T, whose derivative is ∂y/∂x = [∂f/∂x_1, ..., ∂f/∂x_p]^T, and a vector-valued function of a vector.
  • 17. Background • Theorem 1 – Let x = [x_1, ..., x_p]^T be the vector of variables and a = [a_1, ..., a_p]^T a constant vector. Then: y = a^T x ⇒ ∂y/∂x = a; y = x^T x ⇒ ∂y/∂x = 2x; y = x^T A x ⇒ ∂y/∂x = (A + A^T) x.
  • 18. Background • Proof of Theorem 1 – (1) y = a^T x = a_1 x_1 + ... + a_p x_p, so ∂y/∂x_i = a_i and ∂y/∂x = a. (2) y = x^T x = x_1^2 + ... + x_p^2, so ∂y/∂x_i = 2 x_i and ∂y/∂x = 2x. (3) y = x^T A x = Σ_i Σ_j A_{ij} x_i x_j, so ∂y/∂x_i = Σ_j A_{ij} x_j + Σ_j A_{ji} x_j, i.e. ∂y/∂x = (A + A^T) x.
  • 19. Background • Theorem 2 – Let x = [x_1, ..., x_p]^T and y = [y_1, ..., y_p]^T. The Jacobian J = ∂y/∂x is the p x p matrix with entries ∂y_i/∂x_j, and dy = J dx.
  • 20. Matrix Formulation • The example again, described in matrix form: O(x, y) = Σ_{i=1}^{3} (a_i x + b_i y + 1)^2 = ||[a_1 b_1; a_2 b_2; a_3 b_3][x; y] - [-1; -1; -1]||^2, i.e. O(x) = ||A x - b||^2 = (A x - b)^T (A x - b).
  • 21. Matrix Formulation • Matrix formulation of the least-square optimization: ∂O/∂x = ∂/∂x (A x - b)^T (A x - b) = ∂/∂x (x^T A^T A x - 2 x^T A^T b + b^T b) = 2 A^T A x - 2 A^T b = 0, therefore A^T A x = A^T b and x = (A^T A)^{-1} A^T b. This can be solved by using a linear system solver.
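The gradient derived above, ∂O/∂x = 2 A^T A x - 2 A^T b = 2 A^T (A x - b), can be checked numerically against a finite difference of O(x) = ||A x - b||^2. This is an illustrative sketch (function names are not from the slides):

```python
def objective(A, b, x):
    # O(x) = ||A x - b||^2, with A a list of rows
    return sum((sum(Ai[j] * x[j] for j in range(len(x))) - bi) ** 2
               for Ai, bi in zip(A, b))

def gradient(A, b, x):
    # dO/dx = 2 A^T (A x - b), computed component by component
    m, n = len(A), len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    return [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
```

Since O is quadratic, a central finite difference of `objective` matches `gradient` essentially exactly, which is a quick sanity check when implementing such derivations.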
  • 22. Constraints • A slightly different example – Find the line that minimizes the distance from the red points, with one additional constraint: the line must pass through the white point (a_c, b_c). That is, minimize Σ_{i=1}^{3} (a_i x + b_i y + 1)^2 subject to a_c x + b_c y + 1 = 0.
  • 23. Constraints • Constraints – a.k.a. hard constraints (cf. soft constraints, i.e. objective/energy terms) – conditions that must be satisfied exactly. – Constrained optimization: optimization with constraints, which may be linear or non-linear, equality or inequality. This lecture covers only linear equality constraints.
  • 24. Constrained Optimization • Lagrange multiplier – A constrained optimization, minimize O(x, y) subject to C(x, y) = c, can be expressed as an unconstrained optimization with a Lagrange multiplier: minimize O(x, y) + λ (C(x, y) - c). – Why is this possible? This lecture does not cover the theory of the Lagrange multiplier; see http://en.wikipedia.org/wiki/Lagrange_multipliers
  • 25. Constrained Optimization • Solution of the constrained optimization argmin_x (1/2)||A x - b||^2 + λ (c^T x + 1) – At the minimum, the gradient must be zero: ∂O/∂x = A^T A x - A^T b + λ c = 0 and ∂O/∂λ = c^T x + 1 = 0, which gives the augmented system [A^T A, c; c^T, 0][x; λ] = [A^T b; -1], i.e. Â x̂ = b̂ – also solvable with a linear system solver.
  • 26. Constrained Optimization • Case of multiple constraints – with multiple Lagrange multipliers: argmin_x (1/2)||A x - b||^2 + λ^T (C x - c); ∂O/∂x = A^T A x - A^T b + C^T λ = 0 and ∂O/∂λ = C x - c = 0, giving [A^T A, C^T; C, 0][x; λ] = [A^T b; c], i.e. Â x̂ = b̂.
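The single-constraint block system [A^T A, c; c^T, 0][x; λ] = [A^T b; -1] can be assembled and solved directly. The sketch below is illustrative (names like `constrained_lstsq` are hypothetical), using plain Gaussian elimination with partial pivoting rather than a library solver:

```python
def solve(M, rhs):
    # Dense Gaussian elimination with partial pivoting; mutates M and rhs
    n = len(rhs)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n):
                M[r][c2] -= f * M[col][c2]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def constrained_lstsq(A, b, c):
    # Minimize ||A x - b||^2 subject to c^T x + 1 = 0 via the block system
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Augment with the constraint row and column
    M = [AtA[i] + [c[i]] for i in range(n)] + [list(c) + [0.0]]
    sol = solve(M, Atb + [-1.0])
    return sol[:n]  # drop the Lagrange multiplier
```

When the constraint is consistent with the unconstrained minimum, the multiplier λ comes out zero and the constrained and unconstrained solutions coincide.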
  • 27. Implementation • How to solve the linear system? – Provided by many libraries. Using OpenCV: • Data structure for storing a matrix – CvMat *aMat = cvCreateMat(nRow, nCol, CV_32F); cvReleaseMat(&aMat); • Set and get an element – cvmSet(aMat, m, n, 1.0f); float mn = cvmGet(aMat, m, n); • Linear operations – cvAdd(aMat, bMat, cMat); cvGEMM(aMat, aMat, 1.0f, NULL, 0.0f, ataMat, CV_GEMM_A_T); // computes A^T A • Solver – cvSolve(aMat, bVect, xVect, CV_LU);
  • 28. Practical Example • Poisson image cloning – An interesting solution for compositing images: paste the modified gradient of the source image while satisfying the boundary colors.
  • 29. Poisson Image Cloning • Problem description – Source image pixel s_{x,y}, target image pixel t_{x,y}, unknown new image pixel n_{x,y}. – Objective: minimize the difference of gradients between the new image and the source image. – Constraint: pixel values at the boundary must equal those of the target image.
  • 30. Poisson Image Cloning • Mathematical formulation: minimize Σ_{(x,y)∈Ω} ((n_{x+1,y} - n_{x,y}) - (s_{x+1,y} - s_{x,y}))^2 + Σ_{(x,y)∈Ω} ((n_{x,y+1} - n_{x,y}) - (s_{x,y+1} - s_{x,y}))^2 subject to n_{x,y} = t_{x,y} for (x,y) ∈ ∂Ω.
  • 31. Poisson Image Cloning • Matrix formulation: argmin_n (1/2)||G n - G s||^2 + λ^T (B n - B t), where each row of G computes one gradient difference (entries -1 and 1 at positions (x,y) and (x+1,y) or (x,y+1)) and each row of B selects one boundary pixel (x,y) ∈ ∂Ω. Setting ∂O/∂n = G^T G n - G^T G s + B^T λ = 0 and ∂O/∂λ = B n - B t = 0 gives [G^T G, B^T; B, 0][n; λ] = [G^T G s; B t].
  • 32. Poisson Image Cloning • Why is it called 'Poisson'? – ∂O/∂n_{x,y} = 4 n_{x,y} - n_{x-1,y} - n_{x+1,y} - n_{x,y-1} - n_{x,y+1} - (4 s_{x,y} - s_{x-1,y} - s_{x+1,y} - s_{x,y-1} - s_{x,y+1}); setting this to zero equates the discrete Laplacians of the new and source images, ∂²n/∂x² + ∂²n/∂y² = Δn = Δs. The stencil has weight 4 at the center pixel (x,y) and -1 at the four neighbors.
  • 33. Poisson Image Cloning • Implementation issues – Computing G^T G explicitly can be expensive, so construct L = G^T G directly: each row is the Laplacian stencil [..., -1, ..., -1, 4, -1, ..., -1, ...]. – The number of neighbors is not always four: near the domain boundary a pixel may have only 3 or 2 neighbors, and the diagonal entry changes accordingly.
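Assembling one row of L = G^T G per pixel, with the diagonal equal to the actual neighbor count, can be sketched as follows. This is an illustrative helper (the name `laplacian_row` and the row-major indexing p = y * width + x are assumptions, not from the slides), emitting (row, column, value) triplets:

```python
def laplacian_row(x, y, width, height):
    # One row of the discrete Laplacian matrix for pixel (x, y):
    # -1 for each in-bounds neighbor, and the neighbor count (4, 3, or 2)
    # on the diagonal.
    p = y * width + x
    triplets = []
    neighbors = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            neighbors += 1
            triplets.append((p, ny * width + nx, -1.0))
    triplets.append((p, p, float(neighbors)))
    return triplets
```

An interior pixel yields the full [-1, -1, 4, -1, -1] stencil, while a corner pixel yields only two off-diagonal entries and a diagonal of 2.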
  • 34. Sparse Linear System Solver
  • 35. Dense vs. Sparse • In the previous example – Assume the composite image is 200 x 200 → 40,000 pixels → 40,000 unknowns, so we must solve the linear system A x = b. – Size of A: 40,000 x 40,000 → 1,600,000,000 elements; stored as 4-byte floats, that is about 6.4 GB. – Computing the inverse of a 40,000 x 40,000 matrix is very, very expensive.
  • 36. Dense vs. Sparse • Concept of the dense/sparse matrix – Dense matrix: a matrix with few zero elements; store all elements, e.g. [1, 0, 0; 0, -1, 0; 0, 0, 2]. – Sparse matrix: a matrix with many zero elements; store only the non-zero elements as (row, column, value) triplets, e.g. [(1,1,1), (2,2,-1), (3,3,2)] for the same diagonal matrix.
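Triplet storage also makes the matrix-vector product cheap, since only stored entries are touched. A minimal sketch (illustrative names, 0-based indices rather than the slide's 1-based ones):

```python
def spmv(triplets, n_rows, x):
    # y = M x, accumulating only the stored non-zero entries of M
    y = [0.0] * n_rows
    for r, c, v in triplets:
        y[r] += v * x[c]
    return y

# The slide's 3x3 example diag(1, -1, 2) stored as triplets
mat = [(0, 0, 1.0), (1, 1, -1.0), (2, 2, 2.0)]
```

The cost is proportional to the number of non-zeros, not to n_rows squared, which is what makes the iterative solvers below practical for the 40,000-unknown Poisson system.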
  • 37. Dense vs. Sparse • The linear system of Poisson image cloning is sparse – G^T G has at most 5 non-zero elements per row, and the number of non-zero elements of B equals the number of boundary pixels. – For a 200 x 200 image: roughly 40,000 x 5 stored elements instead of 1.6 billion – and multiplication becomes efficient.
  • 38. Steepest-Descent Method • How to solve the sparse linear system? – Computing the inverse is expensive, so find an optimized solution iteratively. – Strategy of the iterative method: set an initial solution; until the objective value converges, compute the gradient of the objective function at the current solution and move the solution in the direction opposite the gradient: x_{i+1} = x_i - α ∂O/∂x |_{x_i}.
  • 39. Steepest-Descent Method • Steepest descent for the linear system A x = b (assume A is symmetric) – It minimizes f(x) = (1/2) x^T A x - b^T x + c, whose gradient is f'(x) = A x - b. – Next iteration: x_{i+1} = x_i + α (b - A x_i) = x_i + α r_i, where r_i = b - A x_i is the residual. – How do we determine the step size α?
  • 40. Steepest-Descent Method • Determine the optimal step α – α minimizes the objective along the step, so the directional derivative is zero: d/dα f(x_{i+1}) = f'(x_{i+1})^T r_i = 0, i.e. (b - A(x_i + α r_i))^T r_i = 0, so r_i^T r_i - α r_i^T A r_i = 0 and therefore α = r_i^T r_i / r_i^T A r_i.
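The steepest-descent iteration with the optimal step α = r^T r / r^T A r can be sketched directly. This is an illustrative example, not the author's library; A is a dense list of rows here for clarity, whereas in practice it would be sparse:

```python
def steepest_descent(A, b, x, iters=100):
    # Solve A x = b for symmetric positive-definite A by steepest descent
    n = len(b)
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = [b[i] - Ax[i] for i in range(n)]  # residual = -gradient of f
        rr = sum(ri * ri for ri in r)
        if rr < 1e-20:  # converged
            break
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(r[i] * Ar[i] for i in range(n))  # optimal step
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x
```

Each iteration costs one or two matrix-vector products; with sparse storage that is proportional to the number of non-zeros.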
  • 41. Conjugate Gradient Method • Performance of steepest descent – The simplest iterative algorithm for solving the linear system, but convergence is slow, especially near the optimized point. • Conjugate Gradient Method – One of the most popular methods for solving sparse linear systems. – Uses the gradient direction and its conjugate directions to find the solution. – Ideally, the conjugate gradient method converges within at most N iterations for an N x N system. – This lecture does not cover the details of the CGM.
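Although the slides skip the details, a plain (unpreconditioned) CG iteration can be sketched as follows. This is illustrative code, not the author's library, and again uses a dense A for clarity:

```python
def conjugate_gradient(A, b, x, iters=None):
    # Solve A x = b for symmetric positive-definite A; in exact
    # arithmetic CG converges in at most n iterations.
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]  # initial residual
    d = r[:]                                         # first search direction
    rr = sum(ri * ri for ri in r)
    for _ in range(iters or n):
        if rr < 1e-20:  # converged
            break
        Ad = matvec(d)
        alpha = rr / sum(d[i] * Ad[i] for i in range(n))
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rr_new = sum(ri * ri for ri in r)
        # New direction is A-conjugate to the previous ones
        d = [r[i] + (rr_new / rr) * d[i] for i in range(n)]
        rr = rr_new
    return x
```

Compared with steepest descent, the only change is that the step is taken along the conjugated direction d rather than the raw residual, which eliminates the zig-zagging that slows steepest descent near the minimum.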
  • 42. Conjugate Gradient Method • Many improved versions of CGM exist for stable and fast convergence – Preconditioned Conjugate Gradient Method – Conjugate Gradient Squared Method – Bi-Conjugate Gradient Method – Bi-Conjugate Gradient Stabilized Method – Reference: J. R. Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, CMU tech report; Wikipedia.
  • 43. Implementation • Libraries for sparse linear solvers – TAUCS: http://www.tau.ac.il/~stoledo/taucs/ – OpenNL: http://alice.loria.fr/index.php/software/4-library/23-opennl.html • My own library for CGM – Written by Ji-yong Kwon – Simple implementation for dense/sparse matrices – Features: two data structures (CDenseVect, CSparseMat), linear operations between them, a set of sparse matrix solvers (CGM, BiCGSTAB), and multi-core processing.
  • 44. Implementation • Basic usage – CSparseMat aMat; CDenseVect bVect, xVect; CSparseSolverBiCGSTAB solver; – Memory allocation: bVect.Init(nRow); aMat.Create(nRow, nCol, 32); // maximum number of elements per row – with automatic de-allocation. – Set/get an element: bVect.Set(row, 1.0f); float value = bVect.Get(row); aMat.AddElement(row, col, 1.0f); float value = aMat.GetElement(row, elementId);
  • 45. Implementation • Basic usage – Solver initialization: solver.InitSolver(aMat, bVect, xVect, 1.0f); // initializes the solver's state; xVect must be initialized beforehand. – Solve: while(solver.CheckTermination()) solver.OneStep(aMat, bVect, xVect); – Residual: float residual = solver.GetResidual(); – Additional comments: in Visual Studio, set project property → C/C++ → Language → OpenMP Support: Yes; Release mode is much faster than Debug mode.
  • 46. Summary • Concept of the least-square optimization – Useful for converting a hard problem into an approximate version that can be solved with a linear system solver, even when the problem has multiple linear equality constraints. • Concept of the sparse linear system – Solving a large linear system with a dense matrix is very expensive, so use iterative methods to solve the sparse linear system efficiently. • The most important thing is to design the objective function.