1. WSPAA’06 Session 5
More on Randomization
Semidefinite Programming and Derandomization
Abner Chih-Yi Huang1
June 24, 2006
1
Graduate student, M.S. program in Computer Science, Algorithm and
Biocomputing Laboratory, National Tsing Hua University. BarrosH@gmail.com
Abner Chih-Yi Huang, WSPAA’06 Session 5
2. Outline
Derandomization: The Method of The Conditional
Probabilities;
Approximation Algorithms Based on Semidefinite
Programming
Introduction to Semidefinite Programming
Application : MaxCut, Weighted-Max2SAT problem
4. Why Do We Study Derandomization?
Why study derandomization, given that randomized
algorithms are so powerful?
Because independent, unbiased random bits are hard to obtain.
Empirically a large number of randomized algorithms have been
implemented and seem to work just fine, even without access to
any source of true randomness. There are, essentially, two general
arguments to support the belief that BPP is “close” to P.
5. De-randomization
Removing randomization from randomized algorithms to build
equally powerful deterministic algorithms.
One general technique: the method of conditional
probabilities.
View a randomized algorithm A as a computation tree on
input x.
Assume A independently performs r(|x|) random choices, each
with two possible outcomes, denoted 0 and 1.
Each path from the root to a leaf corresponds to a possible
computation of A.
7. Computation Tree
Figure: Assign each node u, of level i, a binary string σ(u) of length
i − 1 representing the random choices so far.
8. Computation Tree
We can assign each leaf l a measure m_l,
and every inner node u the average measure E(u) of all leaf
measures in the subtree rooted at u.
If v, w are the children of u, then either E(v) ≥ E(u) or
E(w) ≥ E(u).
9. Computation Tree
Figure: There exists a path from the root to a leaf l s.t. m_l ≥ E(root). This
path can be derived deterministically if we can efficiently determine which
of the children v and w has the greater measure.
10. Example: Weighted MaxSAT
Weighted MaxSAT asks for the maximum weight which can be
satisfied by any assignment, given a set of weighted clauses.
Figure: Program 2.10
Recall the 3rd talk today.
Yu-Han Lyu, Approximation Techniques (II) –
Linear Programming and Randomization
15. Computation Tree for MAX Weighted SAT
To derandomize Program 2.10:
(1) At the i-th iteration, the random variable
m_RWS(x|v_1 v_2 · · · v_{i−1}) denotes the measure of the solution on
input x, given the already-decided values of variables v_1, . . . , v_{i−1}.
(2) If E[m_RWS(x|v_1 v_2 · · · v_{i−1} 0)] ≤ E[m_RWS(x|v_1 v_2 · · · v_{i−1} 1)],
then v_i is set to 1; otherwise it is set to 0.
(3) Eventually, we have m_A(x) = E[m_RWS(x|v_1 v_2 · · · v_n)].
16. Computation Tree for MAX Weighted SAT
Assume that x contains t clauses c_1, . . . , c_t. We have
E[m_RWS(x|v_1 v_2 · · · v_{i−1} 1)] = Σ_{j=1}^{t} w(c_j) Pr{c_j is satisfied | v_1 v_2 · · · v_{i−1} 1}
17. Computation Tree for MAX Weighted SAT
If v_i occurs positively in c_j, then
Pr{c_j is satisfied | v_1 v_2 · · · v_{i−1} 1} = 1.
If v_i does not occur in c_j, or occurs only negated, then the
probability that a random assignment of values to the variables
v_{i+1}, . . . , v_n satisfies c_j is
Pr{c_j is satisfied | v_1 v_2 · · · v_{i−1} 1} = 1 − 1/2^{d_j}
where d_j is the number of variables occurring in c_j that are not
among v_1, . . . , v_i.
18. Computation Tree for MAX Weighted SAT
We have
E[m_RWS(x|v_1 v_2 · · · v_{i−1} 1)] =
W_i + Σ_{c_j s.t. v_i occurs positively} w(c_j) · 1
+ Σ_{c_j s.t. v_i occurs negated} w(c_j)(1 − 1/2^{d_j})
Clearly this can be computed in polynomial time. Hence we have
E[m_RWS(x)] ≤ E[m_RWS(x|v_1)]
≤ E[m_RWS(x|v_1 v_2)]
≤ · · ·
≤ E[m_RWS(x|v_1 · · · v_n)] = m_A(x)
By Corollary 2.20, m_A(x) ≥ E[m_RWS(x)] ≥ m∗(x)/2
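The whole derandomization above can be sketched in a few lines of Python. This is an illustrative sketch, not the book's Program 2.10: clauses are hypothetical lists of signed indices (a positive entry i means v_i, a negative entry means its negation), and cond_expectation evaluates E[m_RWS(x|v_1 · · · v_i)] exactly as in the formula above.

```python
def cond_expectation(clauses, weights, assignment):
    """E[satisfied weight | first len(assignment) variables fixed],
    with the remaining variables set uniformly at random."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        satisfied = False
        unset = 0
        for lit in clause:
            v = abs(lit)
            if v <= len(assignment):
                # fixed variable: literal is true iff its sign matches the value
                if (lit > 0) == (assignment[v - 1] == 1):
                    satisfied = True
            else:
                unset += 1  # still random
        # an unsatisfied clause is satisfied with probability 1 - 2^(-d_j)
        total += w if satisfied else w * (1.0 - 2.0 ** (-unset))
    return total

def derandomized_maxsat(clauses, weights, n):
    """Method of conditional probabilities: fix v_1, ..., v_n greedily,
    always moving to the child with the larger conditional expectation."""
    assignment = []
    for _ in range(n):
        e0 = cond_expectation(clauses, weights, assignment + [0])
        e1 = cond_expectation(clauses, weights, assignment + [1])
        assignment.append(1 if e1 >= e0 else 0)
    return assignment
```

On the toy instance clauses = [[1, 2], [-1], [-2, 3]] with weights [3, 1, 2], the greedy walk returns [1, 0, 1], whose weight 5 is at least the initial expectation 4.25, matching the guarantee m_A(x) ≥ E[m_RWS(x)].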
20. The Second Part: SDP
Figure: Linear programming as a systematic approach to designing
approximation algorithms
21. The Power of Linear Programming
Recall the 3rd talk today.
Yu-Han Lyu, Approximation
Techniques (II) – Linear
Programming and
Randomization
22. What’s semidefinite programming?
minimize c^T x
subject to G + Σ_{i=1}^{n} x_i F_i ⪯ 0
Ax = b
where G, F_1, . . . , F_n ∈ S^k (symmetric k × k matrices), and A ∈ R^{p×n}.
A semidefinite program is a convex optimization problem, since
its objective and constraint are convex:
in semidefinite programming one minimizes a linear function
subject to the constraint that an affine combination of
symmetric matrices is positive semidefinite.
We say an n × n matrix M is positive semidefinite if
x^T M x ≥ 0, ∀x ∈ R^n
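This definition is easy to exercise in code. The sketch below (illustrative, pure Python) evaluates x^T M x and builds a Gram matrix M_{i,j} = y_i · y_j, which is positive semidefinite by construction since x^T M x = ‖Σ_i x_i y_i‖² ≥ 0 — the same fact used later when MaxCut is relaxed to an SDP.

```python
def quad_form(M, x):
    """Evaluate x^T M x for a square matrix M (list of rows)."""
    n = len(x)
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

def gram(vectors):
    """Gram matrix M[i][j] = y_i . y_j; positive semidefinite by
    construction, because x^T M x = ||sum_i x_i y_i||^2 >= 0."""
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    return [[dot(u, v) for v in vectors] for u in vectors]
```

For example, M = gram([[1.0, 0.0], [0.6, 0.8]]) gives [[1.0, 0.6], [0.6, 1.0]] (up to rounding), and quad_form(M, x) is nonnegative for every x.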
23. What’s semidefinite programming?
Many convex optimization problems, e.g., linear programming
and (convex) quadratically constrained quadratic
programming, can be cast as semidefinite programs.
(In 1988, Nesterov and Nemirovskii showed that
interior-point methods for linear programming can, in
principle, be generalized to all convex optimization problems.)
Most importantly, however, semidefinite programs can be
solved very efficiently, both in theory and in practice.
24. In Theory and In Practice
In Theory: for worst-case complexity, the number of iterations needed
to solve a semidefinite program to a given accuracy grows with the
problem size as O(n^{1/2}).
For example, [Alizadeh 1995] adapts Ye’s interior-point algorithm to
semidefinite programming; it performs O(√n (log W_tot + log(1/ε)))
iterations, and each iteration can be implemented in O(n^3) time
[Rendl et al. 1993].
25. In Theory
Therefore SDP is almost exactly in P:
O(n^3) × O(√n (log W_tot + log(1/ε)))
26. In Practice
In Practice: the number of iterations required grows much
more slowly than n^{1/2}, perhaps like log(n) or n^{1/4}, and can often
be assumed to be almost constant (5 to 50 iterations).
It is now generally accepted that interior-point methods for LPs are
competitive with the simplex method and even faster for problems
with more than 10,000 variables or constraints.[Lustig, et. al.,
1994]
27. Conclusion on SDP
From S. Boyd & L. Vandenberghe’s survey paper,
Our final conclusion is therefore: it is not much harder to
solve a rather wide class of nonlinear convex optimization
problems than it is to solve LPs.
28. The applications of SDP
SDP has applications in control theory, nonlinear programming,
geometry, etc. Here, however, we care most about its applications
to combinatorial optimization.
Integer 0/1 Programming problem
Stable set problem
Max-cut problem
Graph coloring problem
Shannon Capacity of a Graph
VLSI Layout
...
29. Approximation Algorithm based on SDP
Figure: M.X. Goemans
The first time that semidefinite programs were used in the design
and analysis of approximation algorithms is M.X. Goemans and
D.P. Williamson, “Improved Approximation Algorithms for
Maximum Cut and Satisfiability Problems Using Semidefinite
Programming”, J. ACM, 42, 1115–1145, 1995.
31. Why SDP?
In combinatorial optimization, the importance of semidefinite
programming is that it leads to tighter relaxations than the
classical linear programming relaxations for many graph and
combinatorial problems.
32. Max. Weighted Cut
Figure: Picks weighted edges to divide vertices into two partitions.
33. Mathematical Programming Expressions for Max.
Weighted Cut
Express the Max. Weighted Cut problem as the integer quadratic program
IQP-CUT(x). Edge weight w_ij = w(v_i, v_j) if (v_i, v_j) ∈ E, w_ij = 0
otherwise.
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y_i y_j)
subject to y_i ∈ {−1, 1}, 1 ≤ i ≤ n
Ref. figure 7, nodes a, b:
(1/2) w_{a,b} (1 − y_a y_b) = (3/2) × (1 − (1 × −1)) = 3
nodes b, d:
(1/2) w_{b,d} (1 − y_b y_d) = (1/2) × (1 − (1 × 1)) = 0
34. Mathematical Programming Expressions for Max.
Weighted Cut
We can relax it to 2-D vectors:
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y⃗_i · y⃗_j)
subject to y⃗_i ∈ R^2, ‖y⃗_i‖ = 1, 1 ≤ i ≤ n
where y⃗_i · y⃗_j denotes the inner product of the vectors, i.e.,
y⃗_i · y⃗_j = y_{i,1} y_{j,1} + y_{i,2} y_{j,2}.
35. Simple Randomized Algorithm for Max. Weighted Cut
Simple Randomized Algorithm for Max. Weighted Cut, Program
5.3
(1) Solve QP-CUT(x), obtaining an optimal set of vectors
(y⃗*_1, . . . , y⃗*_n);
(2) Randomly choose a vector r⃗ on the unit sphere S_n;
(3) Set V1 = {v_i ∈ V | y⃗*_i · r⃗ ≥ 0};
(4) V2 = V − V1.
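Steps (2)–(4) can be sketched in Python. The solver step (1) is assumed here: the vectors y⃗*_i are taken as input. A uniformly random direction r⃗ is obtained by normalizing a vector of i.i.d. Gaussians, a standard trick.

```python
import random
import math

def random_unit_vector(dim):
    """Uniform random direction: normalize i.i.d. Gaussian coordinates."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def round_cut(vectors):
    """Random-hyperplane rounding: vertex i joins V1 iff y_i* . r >= 0."""
    r = random_unit_vector(len(vectors[0]))
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    V1 = [i for i, y in enumerate(vectors) if dot(y, r) >= 0]
    V2 = [i for i in range(len(vectors)) if i not in V1]
    return V1, V2
```

Note that antipodal vectors (which correspond to strongly "cut" pairs) land on different sides of the hyperplane almost surely.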
36. V1 = {v_i ∈ V | y⃗*_i · r⃗ ≥ 0}
Figure: A⃗ · B⃗ = 0 if A⃗ ⊥ B⃗; for unit vectors, A⃗ · B⃗ = cos(θ).
37. Analysis of Algorithm
Let m_RWC(x) denote the measure of the solution returned by
Program 5.3. The vector r⃗ divides the circle into two sides.
E[m_RWC(x)] = Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij Pr{y⃗*_i, y⃗*_j are on different sides}
The probability Pr{y⃗*_i, y⃗*_j are on different sides} is the fraction of
the circle covered by the arcs that y⃗*_i, y⃗*_j dominate:
2 cos^{−1}(y⃗*_i · y⃗*_j) / (2π) = cos^{−1}(y⃗*_i · y⃗*_j) / π
(polar coordinates)
38. V1 = {v_i ∈ V | y⃗*_i · r⃗ ≥ 0}
Figure: Pr{y⃗*_i, y⃗*_j are on different sides}
40. Analysis of Algorithm
Let β = min_{0<α≤π} 2α / (π(1 − cos(α))). Since QP-CUT(x) is a
relaxation of IQP-CUT(x), we have
E[m_RWC(x)] ≥ β × m∗_{QP-CUT}(x) ≥ β × m∗_{IQP-CUT}(x) = β × m∗(x)
By the Lemma, β > 0.8785. Thus, this is a
1.139-approximation algorithm.
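The constant β can be sanity-checked numerically with a coarse grid scan of 2α/(π(1 − cos α)) over (0, π]:

```python
import math

def ratio(alpha):
    # the function minimized in the definition of beta
    return 2.0 * alpha / (math.pi * (1.0 - math.cos(alpha)))

# grid scan over (0, pi]; the minimum is attained near alpha ≈ 2.3311
beta = min(ratio(k * math.pi / 100000) for k in range(1, 100001))
```

This yields β ≈ 0.8786, consistent with the bound β > 0.8785 and the 1.139 performance ratio above.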
41. Perfect Ending?
Unfortunately, it is unknown whether QP-CUT(x) is in P or not.
Therefore, we relax it to an n-dimensional vector program:
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − y⃗_i · y⃗_j)
subject to y⃗_i ∈ S_n, 1 ≤ i ≤ n
Observe now that, given y⃗_1, . . . , y⃗_n ∈ S_n, the matrix M defined as
M_{i,j} = y⃗_i · y⃗_j is positive semidefinite.
42. Semidefinite Program
In other words, QP-CUT(x) is equivalent to the following
semidefinite program SDP-CUT(x):
maximize (1/2) Σ_{j=1}^{n} Σ_{i=1}^{j} w_ij (1 − M_{i,j})
subject to M is positive semidefinite,
M_{i,i} = 1, 1 ≤ i ≤ n
It can be proven that, for any ε > 0, a solution of value at least
m∗_{SDP-CUT}(x) − ε can be found in time polynomial in |x| and
log(1/ε). (Even for ε = 10^{−5}.)
43. Improved Algorithm for Weighted 2-SAT
INSTANCE: Set U of variables, collection C of disjunctive
weighted clauses of at most 2 literals, where a literal is a
variable or a negated variable in U.
SOLUTION: A truth assignment for U.
MEASURE: Total weight of the clauses satisfied by the truth
assignment.
44. Improved Algorithm for Weighted 2-SAT
We can model Max2SAT as
maximize Σ_{c_j ∈ C} w_j t(c_j)
subject to y_i ∈ {−1, 1}, i = 0, 1, . . . , n; where y_0 = 1.
For a unit clause c_j: if c_j = v_i,
t(c_j) = (1 + y_i y_0) / 2
otherwise (c_j = ¬v_i),
t(c_j) = (1 − y_i y_0) / 2
46. Improved Algorithm for Weighted 2-SAT
Observe that, for a two-literal clause c_j = v_i ∨ v_k,
t(c_j) = 1 − t(¬v_i) t(¬v_k)
= 1 − ((1 − y_i y_0)/2) ((1 − y_k y_0)/2)
= (1/4) [(1 + y_i y_0) + (1 + y_k y_0) + (1 − y_i y_k)]
Other cases are similar. For example, let c_3 = v_1 ∨ v_2; if
y_1 = 1, y_2 = −1, then
t(c_3) = (1/4) [(1 + y_1 y_0) + (1 + y_2 y_0) + (1 − y_1 y_2)]
= (1/4) [(1 + 1) + (1 + (−1)) + (1 − (−1))]
= 4/4 = 1
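The identity can be verified exhaustively in Python: for every ±1 choice of y_i, y_k (with y_0 = 1), the quadratic expression equals 1 exactly when the clause v_i ∨ v_k is true, under the convention that v_i is TRUE iff y_i = y_0.

```python
def t_two_literal(yi, yk, y0=1):
    """t(v_i or v_k) = (1/4)[(1 + y_i y_0) + (1 + y_k y_0) + (1 - y_i y_k)]."""
    return ((1 + yi * y0) + (1 + yk * y0) + (1 - yi * yk)) / 4.0

# check all four +/-1 assignments against the clause's truth value
for yi in (-1, 1):
    for yk in (-1, 1):
        truth = (yi == 1) or (yk == 1)   # clause value, given y_0 = 1
        assert t_two_literal(yi, yk) == (1.0 if truth else 0.0)
```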
47. Improved Algorithm for Weighted 2-SAT
It can be expressed as follows:
maximize Σ_{j=0}^{n} Σ_{i=0}^{j−1} [a_ij (1 − y_i y_j) + b_ij (1 + y_i y_j)]
subject to y_i ∈ {−1, 1}, i = 0, 1, · · · , n
where y_0 stands for TRUE, i.e., variable v_i is TRUE iff y_i = y_0.
We can relax it to
maximize Σ_{j=0}^{n} Σ_{i=0}^{j−1} [a_ij (1 − v⃗_i · v⃗_j) + b_ij (1 + v⃗_i · v⃗_j)]
subject to v⃗_i ∈ S_n for all v_i ∈ V.
48. Improved Algorithm for Weighted 2-SAT
We have
E[V] = 2 Σ_{j=0}^{n} Σ_{i=0}^{j−1} a_ij Pr{v⃗_i, v⃗_j are on different sides}
+ 2 Σ_{j=0}^{n} Σ_{i=0}^{j−1} b_ij Pr{v⃗_i, v⃗_j are on the same side}
Recall the analysis of Max. Weighted Cut: by a similar method, the
expected performance ratio is at most 1.139.
50. More Computational Results
[Homer et al., 1997] implemented the Goemans–Williamson algorithm
on a CM-5, and showed that it produces optimal or very nearly optimal
solutions to a number of MAX CUT instances derived from via
minimization problems.
51. More Computational Results
Figure: cut_RG, cut_SA, and cut_GW are the cut sizes found by randomized
greedy, simulated annealing, and GW respectively. The column tconv
52. More Computational Results
The results for simulated annealing are the best cuts found
over 5 runs of 10^7 annealing steps each.
The results for randomized greedy are the maximum cuts
found over 20,000 independent runs.
Column UB displays the upper bounds which were derived
from the dual solutions. The corresponding primal and dual
approximations of the optimum are within 0.05% of each
other, and therefore within 0.05% of the true upper bound.
53. Bibliography I
G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A.
Marchetti-Spaccamela, and M. Protasi, Complexity and
Approximation, Springer Verlag, 1999.
V. Kabanets, “Derandomization: A Brief Overview”, Electronic
Colloquium on Computational Complexity, 9, 2002.
S. Mahajan and H. Ramesh, “Derandomizing Approximation
Algorithms Based on Semidefinite Programming”, SIAM J.
Comput., 28, 1641–1663, 1999.
R. Impagliazzo, “Hardness as randomness: a survey of universal
derandomization”, Proceedings of the ICM, Beijing 2002, 3,
659–672, 2002.
54. Bibliography II
M.X. Goemans and D.P. Williamson, “Improved approximation
algorithms for maximum cut and satisfiability problems using
semidefinite programming”, J. ACM, 42, 1115–1145, 1995.
Y. Nesterov and A. Nemirovskii, Self-Concordant Functions
and Polynomial Time Methods in Convex Programming,
Central Economic and Mathematical Institute, USSR
Academy of Science, Moscow, 1989.
F. Alizadeh, “Interior Point Methods in Semidefinite
Programming with Applications to Combinatorial
Optimization”, SIAM J. Optim., vol. 5, no. 1, pp. 13–51, 1995.
F. Rendl, R. Vanderbei, and H. Wolkowicz, “Interior point
methods for max-min eigenvalue problems”, Report 264,
Technische Universität Graz, Graz, Austria, 1993.
55. Bibliography III
L. Vandenberghe and S. Boyd, “Semidefinite programming”,
SIAM Review, 38, 49–95, 1996.
S. Boyd and L. Vandenberghe, Convex Optimization,
Cambridge University Press, 2003.
Lecture Notes on Randomized Algorithms, Prof. Hsueh-I Lu.
R. Motwani and P. Raghavan, Randomized Algorithms,
Cambridge University Press, 1995.
I.J. Lustig, R.E. Marsten, and D.F. Shanno, “Interior point
methods for linear programming: Computational state of the
art”, ORSA Journal on Computing, 6, 1994.
S. Homer and M. Peinado, “Design and Performance of
Parallel and Distributed Approximation Algorithms for
Maxcut”, Journal of Parallel and Distributed Computing,
46(1), 48–61, 1997.
56. Session 5: More on Randomization
End! Thanks!