WSPAA’06 Session 5
More on Randomization
Semidefinite Programming and Derandomization
Abner Chih-Yi Huang1
June 24, 2006
1 Graduate student of the M.S. CS program, Algorithm and Biocomputing
Laboratory, National Tsing Hua University. BarrosH@gmail.com
Abner Chih-Yi Huang WSPAA’06 Ses
Outline
Derandomization: The Method of The Conditional
Probabilities;
Approximation Algorithms Based on Semidefinite
Programming
Introduction to Semidefinite Programming
Applications: the MaxCut and Weighted-Max2SAT problems
Derandomization
Why Do We Study Derandomization?
Why study derandomization, when randomized algorithms are so powerful?
Because independent random unbiased bits are hard to obtain.
Empirically a large number of randomized algorithms have been
implemented and seem to work just fine, even without access to
any source of true randomness. There are, essentially, two general
arguments to support the belief that BPP is “close” to P.
De-randomization
Removing randomization from randomized algorithms to build
equivalently powerful deterministic algorithms.
One general technique is the method of conditional
probabilities.
View a randomized algorithm A as a computation tree on
input x.
Assume A independently performs r(|x|) random choices, each
with two possible outcomes, denoted 0 and 1.
Each path from the root to a leaf corresponds to a possible
computation of A.
Computation Tree
Figure: Level i as i-th random choice of A .
Computation Tree
Figure: Assign each node u, of level i, a binary string σ(u) of length
i − 1 representing the random choices so far.
Computation Tree
We can assign each leaf l a measure ml,
and every inner node u the average measure E(u) of all leaf
measures in the subtree rooted at u.
If w, v are the children of u, then either E(v) ≥ E(u) or
E(w) ≥ E(u).
Computation Tree
Figure: There exists a path from the root to a leaf l s.t. ml ≥ E(root). This
path can be derived deterministically if we can efficiently determine which
of the children v and w has the larger average measure.
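The greedy walk just described can be sketched generically in Python. The conditional-expectation oracle and the toy measure below are illustrative assumptions, not from the slides; any problem with an efficiently computable E(u) can be plugged in.

```python
def derandomize(cond_exp, num_bits):
    """Walk the computation tree deterministically.

    cond_exp(prefix) must return E[measure | random bits start with prefix].
    At each level we keep the child whose conditional expectation is at
    least as large, so the average measure never decreases along the walk.
    """
    prefix = []
    for _ in range(num_bits):
        if cond_exp(prefix + [1]) >= cond_exp(prefix + [0]):
            prefix.append(1)
        else:
            prefix.append(0)
    return prefix  # a leaf whose measure is >= E(root)

# Toy oracle: the measure of a leaf b1..bn is the number of ones,
# so the conditional expectation is (#ones fixed) + (bits left)/2.
def toy_oracle(prefix, n=4):
    return sum(prefix) + (n - len(prefix)) / 2

print(derandomize(lambda p: toy_oracle(p), 4))  # → [1, 1, 1, 1]
```

Starting from the root, E(current prefix) ≥ E(root) holds at every level, so the leaf reached has measure at least the root's expectation.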
Example: Weighted MaxSAT
Weighted MaxSAT asks for the maximum weight which can be
satisfied by any assignment, given a set of weighted clauses.
Figure: Program 2.10
Recall the 3rd talk today.
Yu-Han Lyu, Approximation Techniques (II) –
Linear Programming and Randomization
Computation Tree for MAX Weighted SAT
To derandomize Program 2.10,
(1) At the i-th iteration, the random variable
mRWS(x|v1v2 · · · vi−1) denotes the measure of the solution on
input x when the first i − 1 variables have been fixed to the
values v1, . . . , vi−1.
(2) If E[mRWS(x|v1v2 · · · vi−1 0)] ≤ E[mRWS(x|v1v2 · · · vi−1 1)],
then vi is set to 1; otherwise it is set to 0.
(3) Eventually, we have mA(x) = E[mRWS(x|v1v2 · · · vn)].
Computation Tree for MAX Weighted SAT
Assume that x contains t clauses c1, . . . , ct. We have
E[mRWS(x|v1v2 · · · vi−1 1)] = Σ_{j=1}^{t} w(cj) · Pr{cj is satisfied | v1v2 · · · vi−1 1}
Computation Tree for MAX Weighted SAT
If vi occurs positively in cj, then
Pr{cj is satisfied | v1v2 · · · vi−1 1} = 1.
If vi occurs negatively in cj, or does not occur in it, then the
probability that a random assignment of values to the variables
vi+1, . . . , vn satisfies cj is
Pr{cj is satisfied | v1v2 · · · vi−1 1} = 1 − 1/2^{dj}
where dj is the number of variables occurring in cj that are
different from v1, . . . , vi.
Computation Tree for MAX Weighted SAT
We have
E[mRWS(x|v1v2 · · · vi−1 1)] = Wi + Σ_{cj : vi occurs positively} w(cj) · 1 + Σ_{cj : vi occurs negatively} w(cj)(1 − 1/2^{dj})
Clearly this can be computed in polynomial time. Hence we have
E[mRWS(x)] ≤ E[mRWS(x|v1)] ≤ E[mRWS(x|v1v2)] ≤ · · · ≤ E[mRWS(x|v1 · · · vn)] = mA(x)
By Corollary 2.20, mA(x) ≥ E[mRWS(x)] ≥ m∗(x)/2
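The whole derandomization of Program 2.10 can be sketched in Python. The clause encoding is an assumption of this sketch (not the book's): each clause is a (weight, literals) pair, where a literal is a signed 1-based variable index, negative for a negated variable.

```python
def cond_expectation(clauses, assignment):
    """E[weight satisfied | variables in `assignment` fixed], with the
    remaining variables set uniformly at random.  A clause with d free
    literals is satisfied with probability 1 - 2**(-d), unless some
    fixed literal already satisfies it."""
    total = 0.0
    for weight, literals in clauses:
        free = 0
        satisfied = False
        for lit in literals:
            var = abs(lit)
            if var in assignment:
                if assignment[var] == (lit > 0):
                    satisfied = True
                    break
            else:
                free += 1
        if satisfied:
            total += weight
        else:
            total += weight * (1 - 0.5 ** free)
    return total

def derandomized_maxsat(clauses, n):
    """Fix variables 1..n greedily, always keeping the branch with the
    larger conditional expectation (the method of conditional probabilities)."""
    assignment = {}
    for i in range(1, n + 1):
        assignment[i] = True
        e_true = cond_expectation(clauses, assignment)
        assignment[i] = False
        e_false = cond_expectation(clauses, assignment)
        assignment[i] = e_true >= e_false
    return assignment

# Toy instance: (v1 OR v2) with weight 3, (NOT v1) with weight 1.
clauses = [(3, [1, 2]), (1, [-1])]
print(derandomized_maxsat(clauses, 2))  # → {1: True, 2: True}
```

On this toy instance the greedy walk fixes v1 = TRUE and reaches an assignment of satisfied weight 3, which is at least the root expectation of 2.75.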
Semidefinite Programming
The Second Part: SDP
Figure: Linear programming as a systematic approach to designing
approximation algorithms
The Power of Linear Programming
Recall the 3rd talk today.
Yu-Han Lyu, Approximation
Techniques (II) – Linear
Programming and
Randomization
What’s semidefinite programming?
minimize c^T x
subject to G + Σ_{i=1}^{n} xi Fi ⪯ 0 (negative semidefinite)
Ax = b
where G, F1, . . . , Fn ∈ S^k and A ∈ R^{p×n}.
A semidefinite program is a convex optimization problem, since
its objective and constraints are convex:
in semidefinite programming one minimizes a linear function
subject to the constraint that an affine combination of
symmetric matrices is positive semidefinite.
We say an n × n matrix M is positive semidefinite if
x^T Mx ≥ 0, ∀x ∈ R^n
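The definition suggests a numerical spot check: sample random x and test x^T Mx ≥ 0. A failing sample disproves semidefiniteness; passing every sample is only evidence, not a certificate (an eigenvalue computation would certify it). A pure-Python sketch:

```python
import random

def quadratic_form(M, x):
    # x^T M x for a square matrix M given as a list of rows.
    n = len(M)
    return sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def looks_psd(M, trials=1000, seed=0):
    """Spot check x^T M x >= 0 on random vectors.
    Returning False disproves PSD; returning True is only evidence."""
    rng = random.Random(seed)
    n = len(M)
    for _ in range(trials):
        x = [rng.uniform(-1, 1) for _ in range(n)]
        if quadratic_form(M, x) < -1e-12:
            return False
    return True

print(looks_psd([[2, -1], [-1, 2]]))  # → True  (eigenvalues 1 and 3)
print(looks_psd([[1, 2], [2, 1]]))    # → False (has eigenvalue -1)
```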
What’s semidefinite programming?
Many convex optimization problems, e.g., linear programming
and (convex) quadratically constrained quadratic
programming, can be cast as semidefinite programs.
(Nesterov and Nemirovskii showed in 1988 that
interior-point methods for linear programming can, in
principle, be generalized to all convex optimization problems.)
Most importantly, however, semidefinite programs can be
solved very efficiently, both in theory and in practice.
In Theory and In Practice
In theory: for worst-case complexity, the number of iterations needed to
solve a semidefinite program to a given accuracy grows with
problem size as O(n^{1/2}).
For example, [Alizadeh 1995] adapts Ye's interior-point algorithm to
semidefinite programming; it performs O(√n (log Wtot + log(1/ε)))
iterations, and each iteration can be implemented in O(n^3) time
[Rendl et al. 1993].
In Theory
Therefore SDP is, for all practical purposes, in P:
O(n^3) × O(√n (log Wtot + log(1/ε)))
In Practice
In practice: the number of iterations required grows much
more slowly than n^{1/2}, perhaps like log(n) or n^{1/4}, and can often
be assumed to be almost constant (5 to 50 iterations).
It is now generally accepted that interior-point methods for LPs are
competitive with the simplex method, and even faster for problems
with more than 10,000 variables or constraints [Lustig et al. 1994].
Conclusion on SDP
From S. Boyd & L. Vandenberghe’s survey paper,
Our final conclusion is therefore: it is not much harder to
solve a rather wide class of nonlinear convex optimization
problems than it is to solve LPs.
The applications of SDP
SDP has applications in control theory, nonlinear programming,
geometry, etc. Here, however, we care most about its
applications to combinatorial optimization.
Integer 0/1 Programming problem
Stable set problem
Max-cut problem
Graph coloring problem
Shannon Capacity of a Graph
VLSI Layout
...
Approximation Algorithm based on SDP
Figure: M.X. Goemans
The first time that semidefinite programs were used in the design
and analysis of approximation algorithms was in M.X. Goemans and
D.P. Williamson, “Improved Approximation Algorithms for
Maximum Cut and Satisfiability Problems Using Semidefinite
Programming”, J. ACM, 42, 1115–1145, 1995.
The Systematic Approach based on SDP
Why SDP?
In combinatorial optimization, the importance of semidefinite
programming is that it leads to tighter relaxations than the
classical linear programming relaxations for many graph and
combinatorial problems.
Max. Weighted Cut
Figure: Pick weighted edges to divide the vertices into two parts.
Mathematical Programming Expressions for Max.
Weighted Cut
Express the Max. Weighted Cut problem as the integer quadratic
program IQP-CUT(x). Edge weights: wij = w(vi, vj) if (vi, vj) ∈ E,
and wij = 0 otherwise.
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} wij (1 − yi yj)
subject to yi ∈ {−1, 1}, 1 ≤ i ≤ n
Ref. figure 7, nodes a, b:
(1/2) wa,b (1 − ya yb) = (3/2) × (1 − (1 × −1)) = 3
nodes b, d:
(1/2) wb,d (1 − yb yd) = (1/2) × (1 − (1 × 1)) = 0
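The IQP-CUT objective can be evaluated directly for any ±1 labeling. A small sketch, using the edge weights implied by the slide's arithmetic (wa,b = 3, wb,d = 1) and mapping a, b, d to indices 0, 1, 2:

```python
def iqp_cut_value(weights, y):
    """Objective of IQP-CUT: (1/2) * sum_{i<j} w_ij * (1 - y_i * y_j).
    An edge contributes its full weight when its endpoints get opposite
    signs, and contributes 0 when they agree."""
    n = len(y)
    return 0.5 * sum(weights.get((i, j), 0) * (1 - y[i] * y[j])
                     for j in range(n) for i in range(j))

# Weights from the slide's example: w_{a,b} = 3, w_{b,d} = 1.
weights = {(0, 1): 3, (1, 2): 1}
print(iqp_cut_value(weights, [1, -1, -1]))  # edge (a,b) is cut → 3.0
```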
Mathematical Programming Expressions for Max.
Weighted Cut
We can relax it to 2-D vectors:
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} wij (1 − y⃗i · y⃗j)
subject to y⃗i ∈ R², 1 ≤ i ≤ n
where y⃗i · y⃗j denotes the inner product of the vectors, i.e.,
y⃗i · y⃗j = yi,1 yj,1 + yi,2 yj,2.
Simple Randomized Algorithm for Max. Weighted Cut
Simple Randomized Algorithm for Max. Weighted Cut, Program 5.3:
(1) Solve QP-CUT(x), obtaining an optimal set of vectors
(y⃗1*, . . . , y⃗n*);
(2) Randomly choose a vector r⃗ on the unit sphere Sn;
(3) Set V1 = {vi ∈ V | y⃗i* · r⃗ ≥ 0};
(4) V2 = V − V1.
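Steps (2)–(4), the random-hyperplane rounding, can be sketched in pure Python; the optimal vectors from step (1) are taken as given, since solving the program itself is outside this sketch. A spherically symmetric r⃗ is obtained by normalizing Gaussian samples:

```python
import math
import random

def random_unit_vector(n, rng):
    # Normalized Gaussian samples give a uniform direction on the sphere.
    v = [rng.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def round_by_hyperplane(vectors, seed=None):
    """Partition vertex indices by the sign of <y_i*, r> for a random r."""
    rng = random.Random(seed)
    n = len(vectors[0])
    r = random_unit_vector(n, rng)
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    V1 = [i for i, y in enumerate(vectors) if dot(y, r) >= 0]
    V2 = [i for i in range(len(vectors)) if i not in V1]
    return V1, V2

# Antipodal vectors land on opposite sides of (almost) every hyperplane.
V1, V2 = round_by_hyperplane([[1.0, 0.0], [-1.0, 0.0]], seed=1)
print(len(V1), len(V2))  # → 1 1
```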
V1 = {vi ∈ V | y⃗i* · r⃗ ≥ 0}
Figure: A⃗ · B⃗ = 0 if A⃗ ⊥ B⃗; for unit vectors, A⃗ · B⃗ = cos(θ).
Analysis of Algorithm
Denote by mRWC(x) the measure of the solution returned by
Program 5.3. The vector r⃗ divides the circle into two sides.
E[mRWC(x)] = Σ_{j=1}^{n} Σ_{i=1}^{j} wij Pr{y⃗i*, y⃗j* are on different sides}
The probability Pr{y⃗i*, y⃗j* are on different sides} is the fraction of
the circle covered by the two arcs determined by y⃗i*, y⃗j*:
2 cos⁻¹(y⃗i* · y⃗j*) / 2π = cos⁻¹(y⃗i* · y⃗j*) / π   (polar coordinates)
V1 = {vi ∈ V | y⃗i* · r⃗ ≥ 0}
Figure: Pr{y⃗i*, y⃗j* are on different sides}
Analysis of Algorithm
Compare
E[mRWC(x)] = Σ_{j=1}^{n} Σ_{i=1}^{j} wij cos⁻¹(y⃗i* · y⃗j*) / π
with
m∗_{QP−CUT}(x) = (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} wij (1 − y⃗i* · y⃗j*)
We have
E[mRWC(x)] = Σ_{j=1}^{n} Σ_{i=1}^{j} [ 2 cos⁻¹(y⃗i* · y⃗j*) / (π (1 − cos(cos⁻¹(y⃗i* · y⃗j*)))) ] · (1/2) wij (1 − y⃗i* · y⃗j*)
Analysis of Algorithm
Let β = min_{0<α≤π} 2α / (π(1 − cos(α))). Since QP-CUT(x) is a relaxation of
IQP-CUT(x), we have
E[mRWC(x)] ≥ β × m∗_{QP−CUT}(x) ≥ β × m∗_{IQP−CUT}(x) = β × m∗(x)
By the Lemma, β > 0.8785. Thus, this is a
1.139-approximation algorithm.
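The constant β can be reproduced numerically with a grid search over (0, π]; a rough sketch (the grid resolution is an arbitrary choice):

```python
import math

def ratio(alpha):
    # The term minimized in the lemma: 2*alpha / (pi * (1 - cos(alpha))).
    return 2 * alpha / (math.pi * (1 - math.cos(alpha)))

# Grid search over (0, pi]: the ratio blows up as alpha -> 0 and equals 1
# at alpha = pi; the minimum sits near alpha ≈ 2.33.
steps = 10**5
beta = min(ratio(k * math.pi / steps) for k in range(1, steps + 1))
print(round(beta, 4))  # → 0.8786
```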
Perfect Ending?
Unfortunately, it is unknown whether QP-CUT(x) is in P or not.
Therefore, we relax it to an n-dimensional QP program:
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} wij (1 − y⃗i · y⃗j)
subject to y⃗i ∈ Sn, 1 ≤ i ≤ n
Observe now that, given y⃗1, . . . , y⃗n ∈ Sn, the matrix M defined by
Mi,j = y⃗i · y⃗j is positive semidefinite.
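The observation holds because M is a Gram matrix: x^T Mx = ||Σi xi y⃗i||² ≥ 0, and Mi,i = 1 whenever the y⃗i are unit vectors. A numerical illustration with random unit vectors in R⁴:

```python
import math
import random

def gram(vectors):
    """Gram matrix M with M[i][j] = y_i . y_j."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    return [[dot(u, v) for v in vectors] for u in vectors]

# Four random unit vectors in R^4, as in the relaxation.
rng = random.Random(0)
ys = []
for _ in range(4):
    v = [rng.gauss(0, 1) for _ in range(4)]
    norm = math.sqrt(sum(c * c for c in v))
    ys.append([c / norm for c in v])

M = gram(ys)
# x^T M x equals ||sum_i x_i y_i||^2, hence is never negative,
# and M[i][i] = 1 because the y_i are unit vectors.
x = [rng.uniform(-1, 1) for _ in range(4)]
q = sum(M[i][j] * x[i] * x[j] for i in range(4) for j in range(4))
combo = [sum(x[i] * ys[i][k] for i in range(4)) for k in range(4)]
print(abs(q - sum(c * c for c in combo)) < 1e-9)  # → True
```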
Semidefinite Program
In other words, QP-CUT(x) is equivalent to the following
semidefinite program SDP-CUT(x):
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} wij (1 − Mi,j)
subject to M is positive semidefinite,
Mi,i = 1, 1 ≤ i ≤ n
It can be proven that, for any ε > 0, a solution of value at least
m∗_{SDP−CUT}(x) − ε can be found in time polynomial in |x| and
log(1/ε) (even for ε = 10⁻⁵).
Improved Algorithm for Weighted 2-SAT
INSTANCE: Set U of variables, collection C of disjunctive
weighted clauses of at most 2 literals, where a literal is a
variable or a negated variable in U.
SOLUTION: A truth assignment for U.
MEASURE: Total weight of the clauses satisfied by the truth
assignment.
Improved Algorithm for Weighted 2-SAT
We can model Max2SAT as
maximize Σ_{cj ∈ C} wj t(cj)
subject to yi ∈ {−1, 1}, i = 0, 1, . . . , n; where y0 = 1.
For a unit clause cj: if cj = vi,
t(cj) = (1 + yi y0)/2
otherwise (cj = ¬vi),
t(cj) = (1 − yi y0)/2
Improved Algorithm for Weighted 2-SAT
For example, let c1 = v1, c2 = ¬v2, c3 = v1 ∨ v2, and take
y1 = 1, y2 = −1. Then
t(c1) = (1 + y1y0)/2 = (1 + 1 × 1)/2 = 1
and
t(c2) = (1 − y2y0)/2 = (1 − (−1 × 1))/2 = 1
Improved Algorithm for Weighted 2-SAT
Observe that, for a two-literal clause cj = vi ∨ vk,
t(cj) = 1 − t(¬vi) t(¬vk)
= 1 − ((1 − yi y0)/2) ((1 − yk y0)/2)
= (1/4) [(1 + yi y0) + (1 + yk y0) + (1 − yi yk)]
Other cases are similar. For example, let c3 = v1 ∨ v2; if
y1 = 1, y2 = −1,
t(c3) = (1/4) [(1 + y1y0) + (1 + y2y0) + (1 − y1y2)]
= (1/4) [(1 + 1) + (1 + (−1)) + (1 − (−1))]
= 4/4 = 1
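The clause values t(·) can be computed directly; a small sketch (the function names are mine, and t_pair covers only the clause with two positive literals, as derived above):

```python
def t_unit(lit_sign, y, y0=1):
    """Value of a unit clause: (1 + y*y0)/2 for a positive literal
    (lit_sign = +1), (1 - y*y0)/2 for a negated one (lit_sign = -1)."""
    return (1 + lit_sign * y * y0) / 2

def t_pair(y1, y2, y0=1):
    """Value of v_i OR v_k with both literals positive:
    (1/4) * [(1 + y1*y0) + (1 + y2*y0) + (1 - y1*y2)]."""
    return ((1 + y1 * y0) + (1 + y2 * y0) + (1 - y1 * y2)) / 4

# The slide's example: y1 = 1, y2 = -1.
print(t_unit(+1, 1), t_unit(-1, -1), t_pair(1, -1))  # → 1.0 1.0 1.0
```

Note that t_pair is 0 only when both variables are false (y1 = y2 = −1 with y0 = 1), matching the truth table of the disjunction.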
Improved Algorithm for Weighted 2-SAT
It can be expressed as follows:
maximize Σ_{j=0}^{n} Σ_{i=0}^{j−1} [aij (1 − yi yj) + bij (1 + yi yj)]
subject to yi ∈ {−1, 1}, i = 0, 1, . . . , n
where y0 represents TRUE, i.e., variable vi is true iff yi = y0.
We can relax it to
maximize Σ_{j=0}^{n} Σ_{i=0}^{j−1} [aij (1 − v⃗i · v⃗j) + bij (1 + v⃗i · v⃗j)]
subject to v⃗i ∈ Sn, i = 0, 1, . . . , n
Improved Algorithm for Weighted 2-SAT
We have
E[V] = 2 Σ_{j=0}^{n} Σ_{i=0}^{j−1} aij Pr{v⃗i, v⃗j are on different sides} + 2 Σ_{j=0}^{n} Σ_{i=0}^{j−1} bij Pr{v⃗i, v⃗j are on the same side}
Recall the analysis of Max. Weighted Cut. By a similar argument,
the expected performance ratio is at most 1.139.
Computational Results of MaxCut on TSPLIB
More Computational Results
[Homer et al., 1997] have implemented our algorithm on a CM-5,
and have shown that it produces optimal or very nearly optimal
solutions to a number of MAX CUT instances derived from via
minimization problems.
More Computational Results
Figure: cutRG, cutSA, and cutGW are the cut sizes found by randomized
greedy, simulated annealing, and GW, respectively. The column tconv …
More Computational Results
The results for simulated annealing are the best cuts found
over 5 runs of 10⁷ annealing steps each.
The results for randomized greedy are the maximum cuts
found over 20,000 independent runs.
Column UB displays the upper bounds derived from the dual
solutions. The corresponding primal and dual approximations
of the optimum are within 0.05% of each other, and therefore
within 0.05% of the true upper bound.
Bibliography I
G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A.
Marchetti-Spaccamela, and M. Protasi, Complexity and
Approximation, Springer Verlag, 1999.
V. Kabanets, Derandomization: A Brief Overview, Electronic
Colloquium on Computational Complexity, 9, 2002.
S. Mahajan and H. Ramesh, Derandomizing Approximation
Algorithms Based on Semidefinite Programming, SIAM J.
Comput., 28, 1641–1663, 1999.
R. Impagliazzo, Hardness as Randomness: A Survey of Universal
Derandomization, Proceedings of the ICM, Beijing 2002, 3,
659–672, 2002.
Bibliography II
M.X. Goemans and D.P. Williamson, Improved Approximation
Algorithms for Maximum Cut and Satisfiability Problems Using
Semidefinite Programming, J. ACM, 42, 1115–1145, 1995.
Y. Nesterov and A. Nemirovskii, Self-Concordant Functions
and Polynomial Time Methods in Convex Programming,
Central Economic and Mathematical Institute, USSR
Academy of Science, Moscow, 1989.
F. Alizadeh, Interior Point Methods in Semidefinite
Programming with Applications to Combinatorial
Optimization, SIAM J. Optim., 5(1), 13–51, 1995.
F. Rendl, R. Vanderbei, and H. Wolkowicz, Interior Point
Methods for Max-Min Eigenvalue Problems, Report 264,
Technische Universität Graz, Graz, Austria, 1993.
Bibliography III
L. Vandenberghe and S. Boyd, Semidefinite Programming,
SIAM Review, 38, 49–95, 1996.
S. Boyd and L. Vandenberghe, Convex Optimization,
Cambridge University Press, 2003.
Lecture Notes of Randomized Algorithms, Prof. Hsueh-I Lu.
R. Motwani and P. Raghavan, Randomized Algorithms,
Cambridge University Press, 1995.
I.J. Lustig, R.E. Marsten, and D.F. Shanno, Interior Point
Methods for Linear Programming: Computational State of the
Art, ORSA Journal on Computing, 6, 1994.
S. Homer and M. Peinado, Design and Performance of Parallel
and Distributed Approximation Algorithms for Maxcut, Journal
of Parallel and Distributed Computing, 46(1), 48–61, 1997.
Session 5: More on Randomization
End! Thanks!
More Related Content

What's hot

accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
olli0601
 

What's hot (20)

Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
Delayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsDelayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithms
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
 
ABC workshop: 17w5025
ABC workshop: 17w5025ABC workshop: 17w5025
ABC workshop: 17w5025
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
Quantum Search and Quantum Learning
Quantum Search and Quantum Learning Quantum Search and Quantum Learning
Quantum Search and Quantum Learning
 
Introduction to logistic regression
Introduction to logistic regressionIntroduction to logistic regression
Introduction to logistic regression
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli...
 Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli... Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli...
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
Side 2019 #12
Side 2019 #12Side 2019 #12
Side 2019 #12
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
comments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle samplercomments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle sampler
 
Lecture10 - Naïve Bayes
Lecture10 - Naïve BayesLecture10 - Naïve Bayes
Lecture10 - Naïve Bayes
 
Lecture 12 orhogonality - 6.1 6.2 6.3
Lecture 12   orhogonality - 6.1 6.2 6.3Lecture 12   orhogonality - 6.1 6.2 6.3
Lecture 12 orhogonality - 6.1 6.2 6.3
 
Kolev skalna2018 article-exact_solutiontoa_parametricline
Kolev skalna2018 article-exact_solutiontoa_parametriclineKolev skalna2018 article-exact_solutiontoa_parametricline
Kolev skalna2018 article-exact_solutiontoa_parametricline
 

Similar to More on randomization semi-definite programming and derandomization

fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920
fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920
fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920
Karl Rudeen
 
Dynamic1
Dynamic1Dynamic1
Dynamic1
MyAlome
 

Similar to More on randomization semi-definite programming and derandomization (20)

Litvinenko nlbu2016
Litvinenko nlbu2016Litvinenko nlbu2016
Litvinenko nlbu2016
 
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficientsSolving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
 
A Multi-Objective Genetic Algorithm for Pruning Support Vector Machines
A Multi-Objective Genetic Algorithm for Pruning Support Vector MachinesA Multi-Objective Genetic Algorithm for Pruning Support Vector Machines
A Multi-Objective Genetic Algorithm for Pruning Support Vector Machines
 
STUDY OF Ε-SMOOTH SUPPORT VECTOR REGRESSION AND COMPARISON WITH Ε- SUPPORT ...
STUDY OF Ε-SMOOTH SUPPORT VECTOR  REGRESSION AND COMPARISON WITH Ε- SUPPORT  ...STUDY OF Ε-SMOOTH SUPPORT VECTOR  REGRESSION AND COMPARISON WITH Ε- SUPPORT  ...
STUDY OF Ε-SMOOTH SUPPORT VECTOR REGRESSION AND COMPARISON WITH Ε- SUPPORT ...
 
Talk iccf 19_ben_hammouda
Talk iccf 19_ben_hammoudaTalk iccf 19_ben_hammouda
Talk iccf 19_ben_hammouda
 
Intro to ABC
Intro to ABCIntro to ABC
Intro to ABC
 
An Efficient And Safe Framework For Solving Optimization Problems
An Efficient And Safe Framework For Solving Optimization ProblemsAn Efficient And Safe Framework For Solving Optimization Problems
An Efficient And Safe Framework For Solving Optimization Problems
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
 
Draft6
Draft6Draft6
Draft6
 
GDRR Opening Workshop - Bayesian Inference for Common Cause Failure Rate Base...
GDRR Opening Workshop - Bayesian Inference for Common Cause Failure Rate Base...GDRR Opening Workshop - Bayesian Inference for Common Cause Failure Rate Base...
GDRR Opening Workshop - Bayesian Inference for Common Cause Failure Rate Base...
 
linear SVM.ppt
linear SVM.pptlinear SVM.ppt
linear SVM.ppt
 
NICE Implementations of Variational Inference
NICE Implementations of Variational Inference NICE Implementations of Variational Inference
NICE Implementations of Variational Inference
 
NICE Research -Variational inference project
NICE Research -Variational inference projectNICE Research -Variational inference project
NICE Research -Variational inference project
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
 
fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920
fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920
fb69b412-97cb-4e8d-8a28-574c09557d35-160618025920
 
Project Paper
Project PaperProject Paper
Project Paper
 
Dynamic1
Dynamic1Dynamic1
Dynamic1
 
Mohammad Sabawi ICETS-2018 Presentation
Mohammad Sabawi ICETS-2018 PresentationMohammad Sabawi ICETS-2018 Presentation
Mohammad Sabawi ICETS-2018 Presentation
 

More from Abner Chih Yi Huang (11)

諾貝爾經濟學獎得主的獲利公式
諾貝爾經濟學獎得主的獲利公式諾貝爾經濟學獎得主的獲利公式
諾貝爾經濟學獎得主的獲利公式
 
Clip Tree Applications
Clip Tree ApplicationsClip Tree Applications
Clip Tree Applications
 
Introduction to Szemerédi regularity lemma
Introduction to Szemerédi regularity lemmaIntroduction to Szemerédi regularity lemma
Introduction to Szemerédi regularity lemma
 
Introduction to Algorithmic aspect of Market Equlibra
Introduction to Algorithmic aspect of Market EqulibraIntroduction to Algorithmic aspect of Market Equlibra
Introduction to Algorithmic aspect of Market Equlibra
 
An introduction to Google test framework
An introduction to Google test frameworkAn introduction to Google test framework
An introduction to Google test framework
 
SaaS: Science as a Service
SaaS: Science as a Service SaaS: Science as a Service
SaaS: Science as a Service
 
Alignment spaced seed
Alignment spaced seedAlignment spaced seed
Alignment spaced seed
 
A small debate of power of randomness
A small debate of power of randomnessA small debate of power of randomness
A small debate of power of randomness
 
Dominating set of fixed size in degenerated graph
Dominating set of fixed size in degenerated graphDominating set of fixed size in degenerated graph
Dominating set of fixed size in degenerated graph
 
Introduction to algorithmic aspect of auction theory
Introduction to algorithmic aspect of auction theoryIntroduction to algorithmic aspect of auction theory
Introduction to algorithmic aspect of auction theory
 
Refactoring Chapter11
Refactoring Chapter11Refactoring Chapter11
Refactoring Chapter11
 

Recently uploaded

%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
masabamasaba
 
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
VictoriaMetrics
 
AI Mastery 201: Elevating Your Workflow with Advanced LLM Techniques
AI Mastery 201: Elevating Your Workflow with Advanced LLM TechniquesAI Mastery 201: Elevating Your Workflow with Advanced LLM Techniques
AI Mastery 201: Elevating Your Workflow with Advanced LLM Techniques
VictorSzoltysek
 

Recently uploaded (20)

%in Soweto+277-882-255-28 abortion pills for sale in soweto
%in Soweto+277-882-255-28 abortion pills for sale in soweto%in Soweto+277-882-255-28 abortion pills for sale in soweto
%in Soweto+277-882-255-28 abortion pills for sale in soweto
 
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
%+27788225528 love spells in Colorado Springs Psychic Readings, Attraction sp...
 
Introducing Microsoft’s new Enterprise Work Management (EWM) Solution
Introducing Microsoft’s new Enterprise Work Management (EWM) SolutionIntroducing Microsoft’s new Enterprise Work Management (EWM) Solution
Introducing Microsoft’s new Enterprise Work Management (EWM) Solution
 
WSO2Con2024 - From Code To Cloud: Fast Track Your Cloud Native Journey with C...
WSO2Con2024 - From Code To Cloud: Fast Track Your Cloud Native Journey with C...WSO2Con2024 - From Code To Cloud: Fast Track Your Cloud Native Journey with C...
WSO2Con2024 - From Code To Cloud: Fast Track Your Cloud Native Journey with C...
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
 
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 
AI Mastery 201: Elevating Your Workflow with Advanced LLM Techniques
AI Mastery 201: Elevating Your Workflow with Advanced LLM TechniquesAI Mastery 201: Elevating Your Workflow with Advanced LLM Techniques
AI Mastery 201: Elevating Your Workflow with Advanced LLM Techniques
 
VTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learnVTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learn
 
MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...
MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...
MarTech Trend 2024 Book : Marketing Technology Trends (2024 Edition) How Data...
 
tonesoftg
tonesoftgtonesoftg
tonesoftg
 

More on Randomization: Semidefinite Programming and Derandomization

  • 1. WSPAA’06 Session 5 More on Randomization: Semidefinite Programming and Derandomization Abner Chih-Yi Huang, June 24, 2006. M.S. student, Algorithm and Biocomputing Laboratory, National Tsing Hua University. BarrosH@gmail.com Abner Chih-Yi Huang WSPAA’06 Ses
  • 2. Outline Derandomization: the method of conditional probabilities. Approximation algorithms based on semidefinite programming: introduction to semidefinite programming; applications: the MaxCut and Weighted Max2SAT problems.
  • 4. Why Do We Study Derandomization? Why study derandomization when randomized algorithms are so powerful? Because independent, unbiased random bits are hard to obtain. Empirically, a large number of randomized algorithms have been implemented and seem to work just fine, even without access to any source of true randomness. There are, essentially, two general arguments to support the belief that BPP is “close” to P.
  • 5. De-randomization Removing randomization from randomized algorithms to build equally powerful deterministic algorithms. One general technique is the method of conditional probabilities. View a randomized algorithm A as a computation tree on input x. Assume A independently performs r(|x|) random choices, each with two possible outcomes, denoted 0 and 1. Each path from the root to a leaf represents a possible computation of A.
  • 6. Computation Tree Figure: Level i corresponds to the i-th random choice of A.
  • 7. Computation Tree Figure: Assign each node u at level i a binary string σ(u) of length i − 1 representing the random choices made so far.
  • 8. Computation Tree We can assign each leaf l a measure m_l, and each inner node u the average measure E(u) of all leaf measures in the subtree rooted at u. If w and v are the children of u, then either E(v) ≥ E(u) or E(w) ≥ E(u).
  • 9. Computation Tree Figure: There exists a path from the root to a leaf l such that m_l ≥ E(root). This path can be derived deterministically if we can efficiently determine which of the two children has the larger average measure.
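As a toy illustration of this greedy descent (the tree, leaf measures, and helper names below are made up for illustration, not from the slides):

```python
# Sketch of the method of conditional probabilities on a depth-r
# computation tree. Leaves are indexed by bit strings; E(u) is the
# average of all leaf measures below node u.
from statistics import mean
from itertools import product

def subtree_average(measure, prefix, depth):
    """E(u): average measure over all leaves below the node `prefix`."""
    rest = depth - len(prefix)
    return mean(measure[prefix + "".join(bits)]
                for bits in product("01", repeat=rest))

def derandomized_path(measure, depth):
    """Greedily descend to the child with the larger average measure."""
    prefix = ""
    for _ in range(depth):
        if subtree_average(measure, prefix + "1", depth) >= \
           subtree_average(measure, prefix + "0", depth):
            prefix += "1"
        else:
            prefix += "0"
    return prefix

measure = {"00": 1, "01": 4, "10": 2, "11": 3}   # leaf measures m_l
leaf = derandomized_path(measure, 2)
# The leaf reached is guaranteed to satisfy m_leaf >= E(root).
assert measure[leaf] >= subtree_average(measure, "", 2)
```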
  • 10. Example: Weighted MaxSAT Weighted MaxSAT asks for the maximum weight that can be satisfied by any assignment, given a set of weighted clauses. Figure: Program 2.10. Recall the 3rd talk today: Yu-Han Lyu, Approximation Techniques (II) – Linear Programming and Randomization.
  • 11–14. Computation Tree for MAX Weighted SAT (figure-only slides)
  • 15. Computation Tree for MAX Weighted SAT To derandomize Program 2.10: (1) At the i-th iteration, the random variable m_RWS(x | v_1 v_2 ··· v_{i−1}) is the measure of the solution on input x given the values already decided for v_1, ..., v_{i−1}. (2) If E[m_RWS(x | v_1 v_2 ··· v_{i−1} 0)] ≤ E[m_RWS(x | v_1 v_2 ··· v_{i−1} 1)], then v_i is set to 1; otherwise it is set to 0. (3) Eventually, we have m_A(x) = E[m_RWS(x | v_1 v_2 ··· v_n)].
  • 16. Computation Tree for MAX Weighted SAT Assume that x contains t clauses c_1, ..., c_t. We have E[m_RWS(x | v_1 v_2 ··· v_{i−1} 1)] = Σ_{j=1}^{t} w(c_j) Pr{c_j is satisfied | v_1 v_2 ··· v_{i−1} 1}.
  • 17. Computation Tree for MAX Weighted SAT If v_i occurs positively in c_j, then Pr{c_j is satisfied | v_1 v_2 ··· v_{i−1} 1} = 1. If v_i does not occur in c_j, or occurs negated in c_j, then the probability that a random assignment of values to the variables v_{i+1}, ..., v_n satisfies c_j is Pr{c_j is satisfied | v_1 v_2 ··· v_{i−1} 1} = 1 − 1/2^{d_j}, where d_j is the number of variables occurring in c_j that are different from v_1, ..., v_i.
  • 18. Computation Tree for MAX Weighted SAT We have E[m_RWS(x | v_1 v_2 ··· v_{i−1} 1)] = W_i + Σ_{c_j : v_i occurs positively} w(c_j) · 1 + Σ_{c_j : v_i occurs negated} w(c_j)(1 − 1/2^{d_j}). Clearly this can be computed in polynomial time. Hence we have E[m_RWS(x)] ≤ E[m_RWS(x | v_1)] ≤ E[m_RWS(x | v_1 v_2)] ≤ ··· ≤ E[m_RWS(x | v_1 ··· v_n)] = m_A(x). By Corollary 2.20, m_A(x) ≥ E[m_RWS(x)] ≥ m*(x)/2.
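A minimal sketch of this derandomization for weighted MaxSAT, assuming clauses are encoded as lists of signed integers (+i for v_i, −i for its negation); the encoding and helper names are illustrative, not taken from Program 2.10:

```python
# Derandomizing the random-assignment algorithm for weighted MaxSAT
# by conditional expectations: fix each variable to whichever value
# keeps E[satisfied weight] at least as large.

def expected_weight(clauses, weights, assignment):
    """E[weight satisfied | the partial 0/1 `assignment` dict]."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        undecided = 0
        satisfied = False
        for lit in clause:
            v = abs(lit)
            if v not in assignment:
                undecided += 1
            elif assignment[v] == (lit > 0):
                satisfied = True
        if satisfied:
            total += w                           # Pr = 1
        else:
            total += w * (1 - 0.5 ** undecided)  # Pr = 1 - 1/2^d
    return total

def derandomized_maxsat(clauses, weights, n):
    assignment = {}
    for i in range(1, n + 1):
        e0 = expected_weight(clauses, weights, {**assignment, i: False})
        e1 = expected_weight(clauses, weights, {**assignment, i: True})
        assignment[i] = e1 >= e0                 # keep the larger expectation
    return assignment

clauses = [[1, 2], [-1], [-2, 3]]
weights = [3, 1, 2]
a = derandomized_maxsat(clauses, weights, 3)
value = expected_weight(clauses, weights, a)     # all vars decided: exact
# Guaranteed: value >= E[m_RWS(x)] >= m*(x)/2.
```

On this instance the greedy walk reaches weight 5 of a total 6, comfortably above the guaranteed half of the optimum.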
  • 20. The Second Part: SDP Figure: Linear programming as a systematic approach to designing approximation algorithms.
  • 21. The Power of Linear Programming Recall the 3rd talk today: Yu-Han Lyu, Approximation Techniques (II) – Linear Programming and Randomization.
  • 22. What’s semidefinite programming? minimize c^T x subject to G + Σ_{i=1}^{n} x_i F_i ⪯ 0, where G, F_1, ..., F_n ∈ S^k. A semidefinite program is a convex optimization problem, since its objective and constraint are convex: in semidefinite programming one minimizes a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite. We say an n × n matrix M is positive semidefinite if x^T M x ≥ 0 for all x ∈ R^n.
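A small self-contained check of the positive-semidefinite definition (the Gram-matrix construction is an illustrative aside, not from the slides): any matrix of inner products M[i][j] = ⟨b_i, b_j⟩ satisfies x^T M x = ‖Σ_i x_i b_i‖² ≥ 0.

```python
# Gram matrices are always positive semidefinite: verify the
# quadratic form numerically on random test vectors.
import random

def gram(vectors):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return [[dot(u, v) for v in vectors] for u in vectors]

def quadratic_form(M, x):
    return sum(x[i] * M[i][j] * x[j]
               for i in range(len(M)) for j in range(len(M)))

B = [(1.0, 2.0), (0.0, 1.0), (3.0, -1.0)]   # arbitrary vectors b_i
M = gram(B)
random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    assert quadratic_form(M, x) >= -1e-12   # x^T M x >= 0 up to rounding
```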
  • 23. What’s semidefinite programming? Many convex optimization problems, e.g., linear programming and (convex) quadratically constrained quadratic programming, can be cast as semidefinite programs. (Nesterov and Nemirovskii showed in 1988 that interior-point methods for linear programming can, in principle, be generalized to all convex optimization problems.) Most importantly, however, semidefinite programs can be solved very efficiently, both in theory and in practice.
  • 24. In Theory and In Practice In theory: for worst-case complexity, the number of iterations to solve a semidefinite program to a given accuracy grows with problem size as O(n^{1/2}). For example, [Alizadeh 1995] adapts Ye’s interior-point algorithm to semidefinite programming; it performs O(√n (log W_tot + log(1/ε))) iterations, and each iteration can be implemented in O(n^3) time [Rendl et al. 1993].
  • 25. In Theory Therefore SDP is almost exactly in P: O(n^3) × O(√n (log W_tot + log(1/ε))).
  • 26. In Practice In practice: the number of iterations required grows much more slowly than n^{1/2}, perhaps like log(n) or n^{1/4}, and can often be assumed to be almost constant (5 to 50 iterations). It is now generally accepted that interior-point methods for LPs are competitive with the simplex method, and even faster for problems with more than 10,000 variables or constraints [Lustig et al., 1994].
  • 27. Conclusion on SDP From S. Boyd & L. Vandenberghe’s survey paper: “Our final conclusion is therefore: it is not much harder to solve a rather wide class of nonlinear convex optimization problems than it is to solve LPs.”
  • 28. The applications of SDP SDP has applications in control theory, nonlinear programming, geometry, etc. Here, however, we care most about its applications in combinatorial optimization: the integer 0/1 programming problem, the stable set problem, the max-cut problem, the graph coloring problem, the Shannon capacity of a graph, VLSI layout, ...
  • 29. Approximation Algorithm based on SDP Figure: M.X. Goemans. The first time semidefinite programs were used in the design and analysis of approximation algorithms is M.X. Goemans and D.P. Williamson, “Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming”, J. ACM, 42, 1115–1145, 1995.
  • 30. The Systematic Approach based on SDP
  • 31. Why SDP? In combinatorial optimization, the importance of semidefinite programming is that it leads to tighter relaxations than the classical linear programming relaxations for many graph and combinatorial problems.
  • 32. Max. Weighted Cut Figure: Picks weighted edges to divide vertices into two partitions.
  • 33. Mathematical Programming Expressions for Max. Weighted Cut Express the Max. Weighted Cut problem as the integer quadratic program IQP-CUT(x). Edge weight w_ij = w(v_i, v_j) if (v_i, v_j) ∈ E, and w_ij = 0 otherwise. maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y_i y_j) subject to y_i ∈ {−1, 1}, 1 ≤ i ≤ n. Ref. Figure 7: for nodes a, b, (1/2) w_{a,b} (1 − y_a y_b) = (3/2) × (1 − (1 × −1)) = 3; for nodes b, d, (1/2) w_{b,d} (1 − y_b y_d) = (1/2) × (1 − (1 × 1)) = 0.
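The IQP-CUT objective and the worked edge examples can be checked with a short sketch; the weights for (a, b) and (b, d) follow the example above, while the (a, d) edge and its weight are made up for illustration:

```python
# Evaluate the IQP-CUT objective (1/2) * sum_{i<j} w_ij (1 - y_i y_j)
# for a concrete +1/-1 labeling of the vertices.

def cut_value(w, y):
    """w: dict {(i, j): weight} with i < j;  y: dict {node: +1 or -1}."""
    return 0.5 * sum(wij * (1 - y[i] * y[j]) for (i, j), wij in w.items())

w = {("a", "b"): 3, ("b", "d"): 1, ("a", "d"): 2}
y = {"a": 1, "b": -1, "d": 1}
# Edge (a,b) crosses the cut and contributes (1/2)*3*(1-(1*-1)) = 3;
# edge (b,d) crosses too; edge (a,d) stays inside one part.
print(cut_value(w, y))   # 4.0
```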
  • 34. Mathematical Programming Expressions for Max. Weighted Cut We can relax it to 2-D vectors: maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y_i · y_j) subject to y_i being a unit vector in R^2, 1 ≤ i ≤ n, where y_i · y_j denotes the inner product of the vectors, i.e., y_i · y_j = y_{i,1} y_{j,1} + y_{i,2} y_{j,2}.
  • 35. Simple Randomized Algorithm for Max. Weighted Cut Simple randomized algorithm for Max. Weighted Cut, Program 5.3: (1) Solve QP-CUT(x), obtaining an optimal set of vectors (y*_1, ..., y*_n); (2) randomly choose a vector r on the unit sphere S_n; (3) set V_1 = {v_i ∈ V | y*_i · r ≥ 0}; (4) V_2 = V − V_1.
  • 36. V_1 = {v_i ∈ V | y*_i · r ≥ 0} Figure: A · B = 0 if A ⊥ B; for unit vectors, A · B = cos(θ).
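The rounding step of Program 5.3 can be sketched as follows; the vectors in `y_star` are made-up stand-ins for a QP/SDP optimum, and the function name is illustrative:

```python
# Random-hyperplane rounding: draw a random unit direction r and put
# v_i in V1 iff y*_i . r >= 0.
import math
import random

def random_hyperplane_cut(y_star, rng):
    dim = len(next(iter(y_star.values())))
    # Independent Gaussian coordinates give a uniformly random direction.
    r = [rng.gauss(0, 1) for _ in range(dim)]
    dot = lambda u: sum(a * b for a, b in zip(u, r))
    V1 = {v for v, vec in y_star.items() if dot(vec) >= 0}
    V2 = set(y_star) - V1
    return V1, V2

y_star = {"a": (1.0, 0.0), "b": (-1.0, 0.0),
          "c": (0.0, 1.0), "d": (math.sqrt(0.5), math.sqrt(0.5))}
V1, V2 = random_hyperplane_cut(y_star, random.Random(2006))
assert V1 | V2 == set(y_star) and not (V1 & V2)
# Antipodal vectors land on opposite sides (with probability 1):
assert ("a" in V1) != ("b" in V1)
```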
  • 37. Analysis of Algorithm Let m_RWC(x) denote the measure of the solution returned by Program 5.3. The vector r divides the circle into two sides, so E[m_RWC(x)] = Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij Pr{y*_i, y*_j are on different sides}. The probability Pr{y*_i, y*_j are on different sides} is the fraction of the circle spanned by the arc between y*_i and y*_j: 2 cos^{−1}(y*_i · y*_j) / 2π = cos^{−1}(y*_i · y*_j) / π (polar coordinates).
  • 38. V_1 = {v_i ∈ V | y*_i · r ≥ 0} Figure: Pr{y*_i, y*_j are on different sides}.
  • 39. Analysis of Algorithm Compare E[m_RWC(x)] = Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij cos^{−1}(y*_i · y*_j) / π with m*_QP-CUT(x) = (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y*_i · y*_j). Term by term, w_ij cos^{−1}(y*_i · y*_j) / π = [2 cos^{−1}(y*_i · y*_j) / (π (1 − y*_i · y*_j))] · (1/2) w_ij (1 − y*_i · y*_j).
  • 40. Analysis of Algorithm Let β = min_{0 < α ≤ π} 2α / (π (1 − cos α)). Since QP-CUT(x) is a relaxation of IQP-CUT(x), we have E[m_RWC(x)] ≥ β × m*_QP-CUT(x) ≥ β × m*_IQP-CUT(x) = β × m*(x). By the Lemma, β > 0.8785. Thus, this algorithm is a 1.139-approximation algorithm.
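A quick numeric check of this constant (a grid search, illustrative rather than a proof):

```python
# beta = min over 0 < alpha <= pi of 2*alpha / (pi * (1 - cos(alpha))).
import math

def ratio(alpha):
    return 2 * alpha / (math.pi * (1 - math.cos(alpha)))

# Dense grid over (0, pi]; the minimum sits near alpha ~ 2.33 rad.
beta = min(ratio(k * 1e-4) for k in range(1, int(math.pi / 1e-4) + 1))
print(round(beta, 4))   # ~0.8786, so the performance ratio 1/beta < 1.139
```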
  • 41. Perfect Ending? Unfortunately, it is not known whether QP-CUT(x) is in P. Therefore, we relax it to an n-dimensional QP program: maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y_i · y_j) subject to y_i being a unit vector in R^n, 1 ≤ i ≤ n. Observe now that, given unit vectors y_1, ..., y_n ∈ S_n, the matrix M defined by M_{i,j} = y_i · y_j is positive semidefinite.
  • 42. Semidefinite Program In other words, QP-CUT(x) is equivalent to the following semidefinite program SDP-CUT(x): maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − M_{i,j}) subject to M being positive semidefinite, with M_{i,i} = 1 for 1 ≤ i ≤ n. It can be proven that, for any ε > 0, a solution within ε of m*_SDP-CUT(x) can be found in time polynomial in |x| and log(1/ε) (even for ε = 10^{−5}).
  • 43. Improved Algorithm for Weighted 2-SAT INSTANCE: Set U of variables, collection C of disjunctive weighted clauses of at most 2 literals, where a literal is a variable or a negated variable in U. SOLUTION: A truth assignment for U. MEASURE: Total weight of the clauses satisfied by the truth assignment.
  • 44. Improved Algorithm for Weighted 2-SAT We can model Max2SAT as: maximize Σ_{c_j ∈ C} w_j t(c_j) subject to y_i ∈ {−1, 1}, i = 0, 1, ..., n, where y_0 = 1. For a unit clause c_j: if c_j = v_i, then t(c_j) = (1 + y_i y_0)/2; otherwise (c_j = ¬v_i), t(c_j) = (1 − y_i y_0)/2.
  • 45. Improved Algorithm for Weighted 2-SAT For example, let c_1 = v_1, c_2 = ¬v_2, and c_3 = v_1 ∨ v_2. If y_1 = 1 and y_2 = −1, then t(c_1) = (1 + y_1 y_0)/2 = (1 + 1 × 1)/2 = 1, and t(c_2) = (1 − y_2 y_0)/2 = (1 − (−1 × 1))/2 = 1.
  • 46. Improved Algorithm for Weighted 2-SAT Observe that, for a two-literal clause, t(v_i ∨ v_k) = 1 − t(¬v_i) t(¬v_k) = 1 − ((1 − y_i y_0)/2)((1 − y_k y_0)/2) = (1/4)[(1 + y_i y_0) + (1 + y_k y_0) + (1 − y_i y_k)]. Other cases are similar. For example, let c_3 = v_1 ∨ v_2; if y_1 = 1 and y_2 = −1, then t(c_3) = (1/4)[(1 + y_1 y_0) + (1 + y_2 y_0) + (1 − y_1 y_2)] = (1/4)[(1 + 1) + (1 + (−1)) + (1 − (−1))] = 4/4 = 1.
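The clause-value formulas can be checked directly; the helper names below are illustrative, with y_0 = 1 encoding TRUE:

```python
# t(c) for unit and two-literal Max2SAT clauses in the +1/-1 encoding.

def t_unit(yi, y0=1, negated=False):
    return (1 - yi * y0) / 2 if negated else (1 + yi * y0) / 2

def t_or(yi, yk, y0=1):
    # t(v_i or v_k) = 1 - t(not v_i) * t(not v_k)
    return ((1 + yi * y0) + (1 + yk * y0) + (1 - yi * yk)) / 4

y1, y2 = 1, -1
assert t_unit(y1) == 1                 # c1 = v1 is satisfied
assert t_unit(y2, negated=True) == 1   # c2 = not v2 is satisfied
assert t_or(y1, y2) == 1               # c3 = v1 or v2 is satisfied
assert t_or(-1, -1) == 0               # both literals false
```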
  • 47. Improved Algorithm for Weighted 2-SAT The program can be expressed as: maximize Σ_{j=0}^{n} Σ_{i=0}^{j−1} [a_ij (1 − y_i y_j) + b_ij (1 + y_i y_j)] subject to y_i ∈ {−1, 1}, i = 0, 1, ..., n, where variable v_i is TRUE iff y_i = y_0. We can relax it to: maximize Σ_{j=0}^{n} Σ_{i=0}^{j−1} [a_ij (1 − v_i · v_j) + b_ij (1 + v_i · v_j)] subject to v_i ∈ S_n for all v_i ∈ V.
  • 48. Improved Algorithm for Weighted 2-SAT We have E[V] = Σ_{j=0}^{n} Σ_{i=0}^{j−1} [2 a_ij Pr{v_i, v_j are on different sides} + 2 b_ij Pr{v_i, v_j are on the same side}]. Recall the analysis of Max. Weighted Cut: a similar argument shows that the expected performance ratio is at most 1.139.
  • 49. Computational Results of MaxCut on TSPLIB
  • 50. More Computational Results [Homer et al., 1997] have implemented the algorithm on a CM-5 and have shown that it produces optimal or very nearly optimal solutions to a number of MAX CUT instances derived from via-minimization problems.
  • 51. More Computational Results Figure: cut_RG, cut_SA, and cut_GW are the cut sizes found by randomized greedy, simulated annealing, and GW, respectively. The column t_conv
  • 52. More Computational Results The results for simulated annealing are the best cuts found over 5 runs of 10^7 annealing steps each. The results for randomized greedy are the maximum cuts found over 20,000 independent runs. Column UB displays the upper bounds derived from the dual solutions. Our corresponding primal and dual approximations of the optimum are within 0.05% of each other, and therefore within 0.05% of the true upper bound.
  • 53. Bibliography I G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi, Complexity and Approximation, Springer Verlag, 1999. V. Kabanets, Derandomization: A Brief Overview, Electronic Colloquium on Computational Complexity, 2002, 9. S. Mahajan and H. Ramesh, Derandomizing Approximation Algorithms Based on Semidefinite Programming, SIAM J. Comput., 1999, 28, 1641–1663. R. Impagliazzo, Hardness as Randomness: A Survey of Universal Derandomization, Proceedings of the ICM, Beijing 2002, 3, 659–672.
  • 54. Bibliography II M.X. Goemans and D.P. Williamson, Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming, J. ACM, 1995, 42, 1115–1145. Y. Nesterov and A. Nemirovskii, Self-Concordant Functions and Polynomial Time Methods in Convex Programming, Central Economic and Mathematical Institute, USSR Academy of Science, Moscow, 1989. F. Alizadeh, Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization, SIAM J. Optim., vol. 5, no. 1, pp. 13–51, 1995. F. Rendl, R. Vanderbei, and H. Wolkowicz, Interior Point Methods for Max-Min Eigenvalue Problems, Report 264, Technische Universität Graz, Graz, Austria, 1993.
  • 55. Bibliography III L. Vandenberghe and S. Boyd, Semidefinite Programming, SIAM Review, 1996, 38, 49–95. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2003. Lecture Notes of Randomized Algorithms, Prof. Hsueh-I Lu. R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, August 25, 1995. I.J. Lustig, R.E. Marsten, and D.F. Shanno, Interior Point Methods for Linear Programming: Computational State of the Art, ORSA Journal on Computing, 6, 1994. S. Homer and M. Peinado, Design and Performance of Parallel and Distributed Approximation Algorithms for Maxcut, Journal of Parallel and Distributed Computing, vol. 46, issue 1, 10 October 1997, pp. 48–61.
  • 56. Session 5: More on Randomization End! Thanks!