Analysis of Algorithms
Dealing with NP Problems:
Intelligent Exponential Search and Approximation Algorithms
Andres Mendez-Vazquez
December 2, 2015
Outline
1 Introduction
The Dilemma
Branches Attacking the Problem
2 Intelligent Exhaustive Search
Backtracking
Example
Backtracking Algorithm
Branch-and-Bound
Algorithm Branch-and-Bound
Example
3 Approximation Algorithms
Introduction
Examples
Vertex Cover Problem
The Traveling-Salesman Problem
The Dilemma
We face the following
NP problems need solutions in real life!!!
We only know exponential-time algorithms for them!!!
What do we do?
Accuracy
Accuracy issues
NP problems are often optimization problems.
It’s hard to find the EXACT answer.
Maybe we just want to know if our answer is close to the
exact answer.
Near optimal solutions
Ways of dealing with NP problems
First, if the actual input is small, an algorithm with exponential running
time may be perfectly satisfactory.
Second, it may be possible to isolate special cases solvable in polynomial
time.
Third, it may be possible to use some form of intelligent search to
avoid the worst case almost all the time.
Fourth, it may still be possible to find near-optimal solutions
in polynomial time (either in the worst case or on average).
Three main branches for attacking NP problems
Intelligent Exponential Search
Procedures that take exponential time in the worst case but, with the right
design, can be very efficient on typical instances.
Approximation algorithms
Algorithms guaranteed to find a "near optimal" solution, within a
certain bound of the optimum, in polynomial time.
Heuristics
Useful algorithmic procedures that provide approximate solutions in
polynomial time.
They trade off optimality, completeness, accuracy and precision
against running time.
We have something like this
As sets
[Figure: a Venn-style diagram of NP problems, contrasting the worst-case
exponential region (exact solutions and intelligent exponential searches)
with the polynomial-time region (approximation algorithms and heuristics).]
Backtracking
Something Notable
Backtracking is based on the observation that it is often possible to reject a
solution by looking at just a small portion of it.
Example
If an instance of SAT contains the clause Ci = (x1 ∨ x2), then all
assignments with x1 = x2 = 0 can be instantly eliminated.
Example
Pruning Example
Given the possible values that you can assign to two literals:

x1  x2
 1   1
 1   0
 0   1
 0   0

It is possible to prune a quarter of the entire search space... Can
this be systematically exploited?
An example of exploiting this idea in SAT solvers
Consider the following Boolean formula φ (w, x, y, z)
(w ∨ x ∨ y ∨ z) ∧ (w ∨ ¬x) ∧ (x ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ ¬w) ∧ (¬w ∨ ¬z)
We start by branching on one variable; we can choose w.
Note: this choice by itself does not violate any of the clauses of
φ (w, x, y, z)
Now
The partial assignment w = 0, x = 1 violates the clause (w ∨ ¬x)
Now
Then, we prune that branch
In addition
What if w = 0, x = 0?
Then the literals ¬w = 1 and ¬x = 1, so every clause containing them is
satisfied. Thus, we have the following left:
Before: (w ∨ x ∨ y ∨ z) ∧ (w ∨ ¬x) ∧ (x ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ ¬w) ∧ (¬w ∨ ¬z)
After: (0 ∨ 0 ∨ y ∨ z) ∧ (0 ∨ 1) ∧ (0 ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ 1) ∧ (1 ∨ ¬z)
Finally
We are left with a reduced set of clauses
(y ∨ z) ∧ (1) ∧ (¬y) ∧ (y ∨ ¬z) ∧ (1) ∧ (1) ⇔ (y ∨ z) ∧ (¬y) ∧ (y ∨ ¬z)
What if w = 0, x = 1?
Before: (w ∨ x ∨ y ∨ z) ∧ (w ∨ ¬x) ∧ (x ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ ¬w) ∧ (¬w ∨ ¬z)
After: (1) ∧ (0) ∧ (1) ∧ (y ∨ ¬z) ∧ (1) ∧ (1)
Thus
We have something that is not satisfiable
(1) ∧ (0) ∧ (1) ∧ (y ∨ ¬z) ∧ (1) ∧ (1) ⇔ (), (y ∨ ¬z)
Clearly
We prune that part of the search tree.
Note: we use “() ≡ (0)” to denote an “empty clause”, which rules out
satisfiability.
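To make the pruning step above concrete, here is a minimal Python sketch of
simplifying a formula under a partial assignment. The signed-integer clause
encoding (+i for x_i, -i for ¬x_i) is an assumption for illustration, not
part of the slides.

def simplify(clauses, assignment):
    """Apply a partial assignment (variable -> 0/1) to a CNF formula.
    Returns the surviving clauses, or None if a clause became empty."""
    result = []
    for clause in clauses:
        new_clause, satisfied = [], False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:   # literal is true: clause satisfied
                    satisfied = True
                    break
                # literal is false: drop it from the clause
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:                    # empty clause: prune this branch
            return None
        result.append(new_clause)
    return result

# phi(w, x, y, z) with w=1, x=2, y=3, z=4:
phi = [[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]
print(simplify(phi, {1: 0, 2: 0}))  # [[3, 4], [-3], [3, -4]]: (y or z), (not y), (y or not z)
print(simplify(phi, {1: 0, 2: 1}))  # None: the clause (w or not x) became empty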
The decisions we need to make in backtracking
First
Which subproblem to expand next.
Second
Which branching variable to use.
Remark
The benefit of backtracking lies in its ability to eliminate portions of the
search space.
Choosing
Something Notable
A classic strategy:
You choose the subproblem that contains the smallest clause.
Then, you branch on a variable in that clause.
Then
If the clause is a singleton then at least one of the resulting branches will
be terminated.
The Backtracking Test
The test needs to look at the subproblem and quickly declare one of:
1 Failure: the subproblem has no solution.
2 Success: a solution to the subproblem is found.
3 Uncertainty.
What about SAT?
The test declares failure if there is an empty clause.
The test declares success if there are no clauses left.
It declares uncertainty otherwise.
Example
[Figure: the backtracking search tree for φ (w, x, y, z).]
Pseudo-code for Backtracking
We have
BACKTRACKING(P0)
1 Start with some problem P0
2 Let S = {P0}, the set of active subproblems
3 While S ≠ ∅
4 choose a subproblem P ∈ S and remove it from S
5 expand it into smaller subproblems P1, P2, ..., Pk
6 For each Pi
7 if test(Pi) succeeds: halt and return the branch solution
8 if test(Pi) fails: discard Pi
9 Otherwise: add Pi to S
10 return there is no solution.
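As a hedged illustration (not the slides' own code), this pseudo-code
specializes to SAT in a few lines of Python; it branches on a variable from
a smallest clause, as suggested earlier, and prunes on empty clauses. The
clause encoding is the same assumption as before.

def assign(clauses, var, value):
    """Fix one variable; return the surviving clauses, or None on an empty clause."""
    out = []
    for clause in clauses:
        if (var if value else -var) in clause:        # clause satisfied: drop it
            continue
        reduced = [lit for lit in clause if abs(lit) != var]
        if not reduced:                               # empty clause: failure
            return None
        out.append(reduced)
    return out

def backtracking_sat(clauses, assignment=None):
    """Return a satisfying assignment (dict) or None if unsatisfiable."""
    assignment = assignment or {}
    if not clauses:                                   # success: no clauses left
        return assignment
    var = abs(min(clauses, key=len)[0])               # variable from a smallest clause
    for value in (1, 0):
        sub = assign(clauses, var, value)
        if sub is not None:
            result = backtracking_sat(sub, {**assignment, var: value})
            if result is not None:
                return result
    return None                                       # both branches pruned

phi = [[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]
print(backtracking_sat(phi))  # None: phi is unsatisfiable, as the pruning above suggests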
Another Exponential Search
Branch-and-Bound
This is a search better suited to optimization problems.
How do we reject subproblems?
First, imagine a minimization problem
To reject a subproblem, its cost must exceed the best cost found so
far.
PROBLEM!!! The cost of the subproblem's best completion is unknown!!!
Thus, we use a quick lower bound on this cost.
Lower Bound
We use the information in the problem
To design an efficiently computable lower bound on the cost of completing
a partial solution.
Pseudo-code for Branch-and-Bound
We have
BRANCH-AND-BOUND(P0)
1 Start with some problem P0
2 Let S = {P0}, the set of active subproblems
3 bestsofar = ∞
4 While S ≠ ∅
5 choose a subproblem (Partial Solution) P ∈ S and remove it from S
6 expand it into smaller subproblems P1, P2, ..., Pk
7 For each Pi
8 if Pi is a complete solution:
9 update bestsofar
10 else
11 if lowerbound(Pi) < bestsofar: add Pi to S
12 return bestsofar
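A generic minimization skeleton of this pseudo-code in Python might look as
follows; the best-first (priority queue) expansion order and the callback
interface are assumptions layered on top of the slides, not part of them.

import heapq

def branch_and_bound(p0, expand, lowerbound, is_complete, cost):
    """expand(P) yields child subproblems; lowerbound(P) is an optimistic
    bound; is_complete(P) tests for full solutions; cost(P) is their cost."""
    bestsofar, best = float("inf"), None
    S = [(lowerbound(p0), 0, p0)]       # active subproblems, ordered by bound
    counter = 1                         # tie-breaker so heapq never compares subproblems
    while S:
        bound, _, P = heapq.heappop(S)
        if bound >= bestsofar:          # bound already beaten: prune
            continue
        for Pi in expand(P):
            if is_complete(Pi):
                if cost(Pi) < bestsofar:
                    bestsofar, best = cost(Pi), Pi
            elif lowerbound(Pi) < bestsofar:
                heapq.heappush(S, (lowerbound(Pi), counter, Pi))
                counter += 1
    return bestsofar, best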
Example: The Traveling Salesman Problem
A partial solution to the TSP
It is a simple path a ⇝ b passing through some vertices S ⊆ V, with
a, b ∈ S.
Denote this as [a, S, b]
The subproblem
It is necessary to find the best completion of the tour, i.e., the cheapest
complementary path b ⇝ a with intermediate vertices in V − S.
[Figure: the vertex set V with subset S, endpoint b, and the light edges
crossing into V − S.]
Thus, Given a ⇝ b
We extend a partial solution [a, S, b]
By a single edge (b, x) with x ∈ V − S
Leading to |V − S| subproblems
Of the form [a, S ∪ {x} , x]
How can we lower-bound the cost of completing a partial tour
[a, S, b]?
We use a simple strategy (sketched in code below): the lower bound is the
sum of the following quantities:
1 The lightest edge from a to V − S.
2 The lightest edge from b to V − S.
3 The cost of a minimum spanning tree of V − S.
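A sketch of this lower bound in Python, using Prim's algorithm for the MST
term; the dense cost-matrix representation w[u][v] is an assumption.

def mst_cost(vertices, w):
    """Cost of a minimum spanning tree over `vertices` (Prim's algorithm)."""
    vertices = list(vertices)
    if len(vertices) <= 1:
        return 0
    in_tree, total = {vertices[0]}, 0
    while len(in_tree) < len(vertices):
        c, v = min((w[u][x], x) for u in in_tree
                   for x in vertices if x not in in_tree)
        in_tree.add(v)
        total += c
    return total

def tsp_lower_bound(a, S, b, V, w):
    """Lower bound on the cost of completing the partial tour [a, S, b]."""
    rest = [v for v in V if v not in S]
    if not rest:
        return w[b][a]                    # nothing left: just close the tour
    return (min(w[a][v] for v in rest)    # lightest edge from a into V - S
            + min(w[b][v] for v in rest)  # lightest edge from b into V - S
            + mst_cost(rest, w))          # MST of the remaining vertices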
Example
[Figure: an eight-vertex weighted graph on A–H (edge weights between 1 and 5);
a brute-force search over all tours would examine 7! = 5,040 permutations.]
Example
[Figure: the optimal TSP tour of the example graph.]
Example
[Figure: the branch-and-bound search tree rooted at A; lower bounds
(ranging from 10 to 15) prune most of the branches.]
Approximation Algorithms
Remarks
In practice, near-optimality is often good enough.
An algorithm that returns near-optimal solutions is called an
approximation algorithm.
Definition of approximation algorithms
Framework
Suppose that we are working on an optimization problem in which each
potential solution has a positive cost, and we wish to find a near-optimal
solution.
Thus
An optimal solution has maximum cost for a maximization problem and
minimum cost for a minimization problem.
Then
Definition
We say that an algorithm for a problem has an approximation ratio of
ρ (n) if:
For any input of size n, the cost C of the solution produced by the
algorithm is within a factor of ρ (n) of the cost C∗ of an optimal
solution:
max (C/C∗, C∗/C) ≤ ρ (n)
Definition of approximation algorithms
Definition
If an algorithm achieves an approximation ratio of ρ (n), we call it a
ρ (n)-approximation algorithm.
Further
The definitions of the approximation ratio and of a ρ (n)-approximation
algorithm apply to both minimization and maximization problems.
The two possibilities
For a maximization problem
We have that 0 < C ≤ C∗; the ratio C∗/C gives the factor by which
the cost of an optimal solution is larger than the cost of the approximate
solution.
For a minimization problem
We have that 0 < C∗ ≤ C; the ratio C/C∗ gives the factor by which
the cost of the approximate solution is larger than the cost of an optimal
solution.
Properties
The approximation ratio of an approximation algorithm is never less than 1.
Some characteristics of approximation algorithms
Approximation algorithms have the following characteristics
They run in polynomial time.
They are not guaranteed to find optimal solutions.
However, they do guarantee a solution within some factor of the
optimum.
Further
They often use algorithms for related problems as subroutines.
We will look at...
Examples
Vertex Cover problem.
The Traveling Salesman Problem.
This barely scratches
The surface of the theory of approximation algorithms.
Vertex Cover Problem
Definition of a vertex cover
A vertex cover of an undirected graph G = (V, E) is a subset V′ ⊆ V
such that if (u, v) is an edge of G, then either u ∈ V′ or v ∈ V′ (or
both).
Thus
The size of a vertex cover is the number of vertices in it.
Vertex Cover Problem
Definition
The vertex-cover problem is to find a vertex cover of minimum size in a
given undirected graph.
Example of Vertex Cover Problem
Example
Determine the smallest subset of vertices that "covers" the graph.
[Figure: a graph on vertices 1–5.]
Vertex cover
Example
Determine the smallest subset of vertices that "covers" the graph.
[Figure: the same graph on vertices 1–5.]
ANSWER: {1, 3, 4}
Approximation vertex cover algorithm
Now, we have
APPROX-VERTEX-COVER(G)
1 C = ∅
2 E′ = G.E
3 while E′ ≠ ∅
4 let (u, v) be an arbitrary edge of E′
5 C = C ∪ {u, v}
6 remove from E′ every edge incident on either u or v
7 return C
Complexity: O(V + E)
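A direct Python transcription of APPROX-VERTEX-COVER, taking the graph as a
set of edges (this input format is an assumption):

def approx_vertex_cover(edges):
    """2-approximation: pick an arbitrary remaining edge (u, v), add both
    endpoints to the cover, and delete every edge incident on u or v."""
    C, E = set(), set(edges)
    while E:
        u, v = E.pop()                              # an arbitrary edge of E'
        C.update((u, v))
        E = {(x, y) for (x, y) in E
             if x not in (u, v) and y not in (u, v)}
    return C

# On the path 1-2-3-4 the optimum cover is {2, 3}; depending on which edge
# is picked first, the algorithm returns a cover of size 2 or 4 (<= 2 * OPT).
print(approx_vertex_cover({(1, 2), (2, 3), (3, 4)}))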
Example
[Figure: a run of APPROX-VERTEX-COVER on the example graph, compared with
an optimal solution.]
Theorem
Theorem 35.1
APPROX-VERTEX-COVER is a polynomial-time 2-approximation
algorithm.
Proof
The algorithm returns a vertex cover, since it loops until every edge
has been covered, and it runs in polynomial time.
Now, we only need to prove that APPROX-VERTEX-COVER
returns a vertex cover of at most twice the size of an optimal
cover.
Proof
First
Let C be the set of vertices returned by the approximation algorithm,
and let A be the set of edges picked in line 4 of the algorithm.
Now, given an optimal cover C∗:
C∗ must include at least one endpoint of each edge in A.
No two edges in A share an endpoint, thus no two edges of A are covered
by the same vertex from C∗:
|C∗| ≥ |A|
Proof
Second
Each execution of line 4 picks an edge for which neither endpoint is
already in C, and both endpoints are added; thus we have
|C| = 2|A| ≤ 2|C∗|
Proof
Finally, we have
|C| / |C∗| ≤ 2
Remarks
Did you notice the trick?
We do not know the size of C∗, but we obtain a lower bound for it.
Actually
The set A is a maximal matching in the graph G.
What is a Maximal Matching?
Definition
Given a graph G = (V , E), a matching M in G is a set of pairwise
non-adjacent edges; that is, no two edges share a common vertex.
Maximal Matching
Definition
A Maximal Matching is a matching M of a graph G with the property
that if any edge not in M is added to M, it is no longer a matching.
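A maximal matching can be built greedily, which is exactly what the edge
picks in APPROX-VERTEX-COVER do; a short sketch (the edge-list input is an
assumption):

def maximal_matching(edges):
    """Greedy maximal matching: keep each edge whose endpoints are both
    still unmatched."""
    matched, M = set(), []
    for (u, v) in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

print(maximal_matching([(1, 2), (2, 3), (3, 4), (4, 5)]))  # [(1, 2), (3, 4)]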
Example
[Figure: a bipartite graph on vertices A–F and 1–6, showing a matching on
the left and a maximal matching on the right.]
The Traveling-Salesman Problem
Given a complete undirected graph G = (V , E)
With a non-negative integer cost function c(u, v) associated with each
edge (u, v) ∈ E
We need to find a Hamiltonian cycle of G with minimum cost.
Extra notation
As an extension of our notation
Let c(A) denote the total cost of the edges in a subset A ⊆ E:
c(A) = Σ_{(u,v)∈A} c(u, v)
The extra assumption: The triangle inequality
We will assume that the cost function c satisfies the triangle inequality:
for all vertices u, v, w ∈ V,
c(u, w) ≤ c(u, v) + c(v, w)
The 2-approximation algorithm
Pseudocode
APPROX-TSP-TOUR(G, c)
1 select a vertex r ∈ G.V to be a “root” vertex
2 compute a minimum spanning tree T for G from root r using
MST-PRIM(G, c, r)
3 let H be a list of vertices, ordered according to when they are first
visited in a preorder tree walk of T
4 return the Hamiltonian cycle H
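A self-contained Python sketch of APPROX-TSP-TOUR: Prim's algorithm for the
MST, then a preorder walk; the cost-matrix input w[u][v] is again an
assumption.

def approx_tsp_tour(V, w, r):
    """2-approximation for metric TSP: MST rooted at r, then a preorder walk."""
    # MST-PRIM: grow the tree from r, recording each vertex's parent.
    parent, in_tree = {r: None}, {r}
    while len(in_tree) < len(V):
        c, u, v = min((w[u][v], u, v) for u in in_tree
                      for v in V if v not in in_tree)
        parent[v] = u
        in_tree.add(v)
    children = {u: [] for u in V}
    for v, u in parent.items():
        if u is not None:
            children[u].append(v)
    # Preorder walk: the order of first visits gives the Hamiltonian cycle.
    tour, stack = [], [r]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [r]                     # close the cycle back at the root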
Example
[Figure: a run of APPROX-TSP-TOUR with root a, compared with an optimal
tour.]
Theorem that supports the claim
Theorem 35.2
APPROX-TSP-TOUR is a polynomial-time 2-approximation algorithm for
the traveling-salesman problem with the triangle inequality.
Proof
First
Let H∗ denote an optimal tour for the given set of vertices; H∗ is a cycle.
If we erase one edge from H∗, it becomes a spanning tree.
Let T be the minimum spanning tree computed at line 2
Then:
c(T) ≤ c(H∗)
Full walk
A full walk W of T lists each vertex when it is first visited and also
whenever it is returned to after a visit to one of its subtrees.
Full walk
A full walk W = a, b, c, b, h, b, a, d, e, f, e, g, e, d, a
Then, we have
Since the full walk traverses every edge of T exactly twice
c(W) = 2c(T)
In addition
However, W is generally not a tour... What can we do?
Use triangle inequality
Something Notable
By the triangle inequality, however, we can delete a visit to any vertex
from W .
IMPORTANT
Using the triangle inequality, the cost does not increase.
Deletion of vertices
Delete a vertex v from W between visits to u and w.
The resulting ordering specifies going directly from u to w.
Thus
We get
a, b, c, h, d, e, f, g
Minimum Spanning Tree
[Figure: the minimum spanning tree T from our example.]
Notice the following
We have
The previous visiting order is exactly a preorder walk of the tree T.
Now
Let H be the cycle obtained by this preorder walk.
It is a Hamiltonian cycle, since every vertex is visited exactly once.
Thus
c(H) ≤ c(W) = 2c(T) ≤ 2c(H∗)
Finally
We have that
c(H) / c(H∗) ≤ 2
However
Given the General Traveling-Salesman Problem
Suppose we drop the assumption that the cost function c satisfies the
triangle inequality.
Theorem 35.3
If P ≠ NP, then for any constant ρ ≥ 1, there is no polynomial-time
approximation algorithm with approximation ratio ρ for the general
traveling-salesman problem.
Exercises
From Cormen’s book, solve:
35.1-1
35.1-2
35.1-4
35.2-1
35.2-2
35.2-3
35.2-5
MLconf
 
21 All Pairs Shortest Path
21 All Pairs Shortest Path21 All Pairs Shortest Path
21 All Pairs Shortest Path
Andres Mendez-Vazquez
 
Divide and Conquer Case Study
Divide and Conquer Case StudyDivide and Conquer Case Study
Divide and Conquer Case Study
KushagraChadha1
 
Power Full Exposition
Power Full ExpositionPower Full Exposition
Power Full ExpositionHeesu Hwang
 
Intro to Quant Trading Strategies (Lecture 10 of 10)
Intro to Quant Trading Strategies (Lecture 10 of 10)Intro to Quant Trading Strategies (Lecture 10 of 10)
Intro to Quant Trading Strategies (Lecture 10 of 10)
Adrian Aley
 
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...
CSCJournals
 
Optimization problems
Optimization problemsOptimization problems
Optimization problems
Ruchika Sinha
 
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
Review of Metaheuristics and Generalized Evolutionary Walk AlgorithmReview of Metaheuristics and Generalized Evolutionary Walk Algorithm
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
Xin-She Yang
 
Intro to Quantitative Investment (Lecture 1 of 6)
Intro to Quantitative Investment (Lecture 1 of 6)Intro to Quantitative Investment (Lecture 1 of 6)
Intro to Quantitative Investment (Lecture 1 of 6)
Adrian Aley
 

Similar to 28 Dealing with the NP Poblems: Exponential Search and Approximation Algorithms (20)

20130928 automated theorem_proving_harrison
20130928 automated theorem_proving_harrison20130928 automated theorem_proving_harrison
20130928 automated theorem_proving_harrison
 
Algoritmos aproximados
Algoritmos aproximadosAlgoritmos aproximados
Algoritmos aproximados
 
Comparative study of different algorithms
Comparative study of different algorithmsComparative study of different algorithms
Comparative study of different algorithms
 
Analysis of Algorithm
Analysis of AlgorithmAnalysis of Algorithm
Analysis of Algorithm
 
module3_Greedymethod_2022.pdf
module3_Greedymethod_2022.pdfmodule3_Greedymethod_2022.pdf
module3_Greedymethod_2022.pdf
 
Algorithms Design Patterns
Algorithms Design PatternsAlgorithms Design Patterns
Algorithms Design Patterns
 
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEMCOMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM
 
Research internship on optimal stochastic theory with financial application u...
Research internship on optimal stochastic theory with financial application u...Research internship on optimal stochastic theory with financial application u...
Research internship on optimal stochastic theory with financial application u...
 
Presentation on stochastic control problem with financial applications (Merto...
Presentation on stochastic control problem with financial applications (Merto...Presentation on stochastic control problem with financial applications (Merto...
Presentation on stochastic control problem with financial applications (Merto...
 
chap3.pdf
chap3.pdfchap3.pdf
chap3.pdf
 
N Queens problem
N Queens problemN Queens problem
N Queens problem
 
John Maxwell, Data Scientist, Nordstrom at MLconf Seattle 2017
John Maxwell, Data Scientist, Nordstrom at MLconf Seattle 2017 John Maxwell, Data Scientist, Nordstrom at MLconf Seattle 2017
John Maxwell, Data Scientist, Nordstrom at MLconf Seattle 2017
 
21 All Pairs Shortest Path
21 All Pairs Shortest Path21 All Pairs Shortest Path
21 All Pairs Shortest Path
 
Divide and Conquer Case Study
Divide and Conquer Case StudyDivide and Conquer Case Study
Divide and Conquer Case Study
 
Power Full Exposition
Power Full ExpositionPower Full Exposition
Power Full Exposition
 
Intro to Quant Trading Strategies (Lecture 10 of 10)
Intro to Quant Trading Strategies (Lecture 10 of 10)Intro to Quant Trading Strategies (Lecture 10 of 10)
Intro to Quant Trading Strategies (Lecture 10 of 10)
 
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi...
 
Optimization problems
Optimization problemsOptimization problems
Optimization problems
 
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
Review of Metaheuristics and Generalized Evolutionary Walk AlgorithmReview of Metaheuristics and Generalized Evolutionary Walk Algorithm
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm
 
Intro to Quantitative Investment (Lecture 1 of 6)
Intro to Quantitative Investment (Lecture 1 of 6)Intro to Quantitative Investment (Lecture 1 of 6)
Intro to Quantitative Investment (Lecture 1 of 6)
 

More from Andres Mendez-Vazquez

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
Andres Mendez-Vazquez
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
Andres Mendez-Vazquez
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
Andres Mendez-Vazquez
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
Andres Mendez-Vazquez
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
Andres Mendez-Vazquez
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
Andres Mendez-Vazquez
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
Andres Mendez-Vazquez
 
Zetta global
Zetta globalZetta global
Zetta global
Andres Mendez-Vazquez
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
Andres Mendez-Vazquez
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
Andres Mendez-Vazquez
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
Andres Mendez-Vazquez
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
Andres Mendez-Vazquez
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
Andres Mendez-Vazquez
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
Andres Mendez-Vazquez
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
Andres Mendez-Vazquez
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
Andres Mendez-Vazquez
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
Andres Mendez-Vazquez
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
Andres Mendez-Vazquez
 
Introduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems SyllabusIntroduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems Syllabus
Andres Mendez-Vazquez
 
Introduction Machine Learning Syllabus
Introduction Machine Learning SyllabusIntroduction Machine Learning Syllabus
Introduction Machine Learning Syllabus
Andres Mendez-Vazquez
 

More from Andres Mendez-Vazquez (20)

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
 
Zetta global
Zetta globalZetta global
Zetta global
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
 
Introduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems SyllabusIntroduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems Syllabus
 
Introduction Machine Learning Syllabus
Introduction Machine Learning SyllabusIntroduction Machine Learning Syllabus
Introduction Machine Learning Syllabus
 

Recently uploaded

block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
Divya Somashekar
 
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
Amil Baba Dawood bangali
 
Cosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdfCosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdf
Kamal Acharya
 
Unbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptxUnbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptx
ChristineTorrepenida1
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
VENKATESHvenky89705
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
bakpo1
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
Neometrix_Engineering_Pvt_Ltd
 
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
ssuser7dcef0
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
thanhdowork
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
WENKENLI1
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Sreedhar Chowdam
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
manasideore6
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Teleport Manpower Consultant
 
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSCW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
veerababupersonal22
 
ML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptxML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptx
Vijay Dialani, PhD
 
Recycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part IIIRecycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part III
Aditya Rajan Patra
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
MdTanvirMahtab2
 
Basic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparelBasic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparel
top1002
 
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdfHybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
fxintegritypublishin
 
Building Electrical System Design & Installation
Building Electrical System Design & InstallationBuilding Electrical System Design & Installation
Building Electrical System Design & Installation
symbo111
 

Recently uploaded (20)

block diagram and signal flow graph representation
block diagram and signal flow graph representationblock diagram and signal flow graph representation
block diagram and signal flow graph representation
 
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
 
Cosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdfCosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdf
 
Unbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptxUnbalanced Three Phase Systems and circuits.pptx
Unbalanced Three Phase Systems and circuits.pptx
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
 
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
 
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERSCW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
 
ML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptxML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptx
 
Recycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part IIIRecycled Concrete Aggregate in Construction Part III
Recycled Concrete Aggregate in Construction Part III
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
 
Basic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparelBasic Industrial Engineering terms for apparel
Basic Industrial Engineering terms for apparel
 
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdfHybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
 
Building Electrical System Design & Installation
Building Electrical System Design & InstallationBuilding Electrical System Design & Installation
Building Electrical System Design & Installation
 

28 Dealing with the NP Problems: Exponential Search and Approximation Algorithms

  • 1. Analysis of Algorithms Dealing with NP Problems: Intelligent Exponential Search and Approximation Algorithms Andres Mendez-Vazquez December 2, 2015 1 / 78
  • 2. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 2 / 78
  • 4. The Dilemma We face the following NP problems need solutions in real-life!!! We only know exponential algorithms for them!!! What do we do? 4 / 78
  • 7. Accuracy Accuracy issues NP problems are often optimization problems. It’s hard to find the EXACT answer. Maybe we just want to know if our answer is close to the exact answer. 5 / 78
  • 10. Near optimal solutions Ways of dealing with NP problems First, if the actual input is small, exponential running time may be perfectly satisfactory. Second, it may be possible to isolate special cases solvable in polynomial time. Ways of dealing with NP problems Third, it may be possible to use some form of intelligent search to avoid the worst case almost all the time. Fourth, it may still be possible to find near-optimal solutions in polynomial time (either in the worst case or on average). 6 / 78
  • 14. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 7 / 78
  • 15. Three main branches attacking the NP Problems Intelligent Exponential Search Procedures that are exponential time in the worst case, but with the right design can be very efficient on typical instances. Approximation algorithms Algorithms guaranteed to find a "near optimal" solution, under a certain bound, in polynomial time. Heuristics Useful algorithmic procedures that provide approximate solutions in polynomial time. They are a trade-off among optimality, completeness, accuracy, precision, and running time. 8 / 78
  • 19. We have something like this As sets [set diagram: NP Problems contains the Exact Solution region, which is worst-case exponential, together with Approximation Algorithms, Heuristics, and Intelligent exponential searches, which lie in the polynomial-time region] 9 / 78
  • 20. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 10 / 78
  • 21. Backtracking Something Notable Backtracking is based on the observation that it is often possible to reject a solution by looking at just a small portion of it. Example If an instance of SAT contains the clause Ci = (x1 ∨ x2), then all assignments with x1 = x2 = 0 can be instantly eliminated. 11 / 78
  • 23. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 12 / 78
  • 24. Example Pruning Example Given the possible values that you can give to two literals: (x1, x2) ∈ {(1, 1), (1, 0), (0, 1), (0, 0)}. With a clause (x1 ∨ x2) in the formula, the assignment x1 = x2 = 0 is ruled out, so it is possible to prune a quarter of the entire search space... Can this be systematically exploited? 13 / 78
  • 25. An example of exploiting this idea in SAT solvers Consider the following Boolean formula φ (w, x, y, z) (w ∨ x ∨ y ∨ z) ∧ (w ∨ ¬x) ∧ (x ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ ¬w) ∧ (¬w ∨ ¬z) We start branching in one variable, we can choose w Note: This selection does not violate any of the clauses of φ (w, x, y, z) 14 / 78
  • 27. Now The partial assignment w = 0, x = 1 violates the clause (w ∨ ¬x) [figure: the search tree rooted at the initial formula] 15 / 78
  • 28. Now Then, we prune that branch [figure: the search tree with that branch pruned] 16 / 78
  • 29. In addition What if w = 0, x = 0? Then the literals ¬w = 1 and ¬x = 1 become true, satisfying every clause that contains them. Thus, we have the following left: Before: (w ∨ x ∨ y ∨ z)∧(w ∨ ¬x)∧(x ∨ ¬y)∧(y ∨ ¬z)∧(z ∨ ¬w)∧(¬w ∨ ¬z) After: (0 ∨ 0 ∨ y ∨ z) ∧ (0 ∨ 1) ∧ (0 ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ 1) ∧ (1 ∨ ¬z) 17 / 78
  • 31. Finally We have the following reduced set of clauses (y ∨ z) , (1) , (¬y) , (y ∨ ¬z) , (1) , (1) ⇔ (y ∨ z) , (¬y) , (y ∨ ¬z) What if w = 0, x = 1? Before: (w ∨ x ∨ y ∨ z)∧(w ∨ ¬x)∧(x ∨ ¬y)∧(y ∨ ¬z)∧(z ∨ ¬w)∧(¬w ∨ ¬z) After: (1) ∧ (0) ∧ (1) ∧ (y ∨ ¬z) ∧ (1) ∧ (1) 18 / 78
  • 33. Thus We have something not satisfiable (1) ∧ (0) ∧ (1) ∧ (y ∨ ¬z) ∧ (1) ∧ (1) ⇔ (), (y ∨ ¬z) Clearly We prune that part of the search tree. Note: we use “() ≡ (0)” to denote an “empty clause”, ruling out satisfiability. 19 / 78
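The pruning steps above are easy to mechanize. Below is a minimal Python sketch (not from the slides; the clause encoding and names are illustrative) of simplifying a CNF formula under a partial assignment, reproducing the φ(w, x, y, z) computations: satisfied clauses disappear, false literals are dropped, and an empty clause signals that the branch can be pruned.

def simplify(clauses, assignment):
    """Apply a partial assignment {var: 0 or 1} to a CNF formula.

    A clause is a list of literals; literal +v means variable v, -v its
    negation. Satisfied clauses are removed, false literals dropped,
    and an empty clause [] in the result means the branch is dead.
    """
    result = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            value = assignment.get(abs(lit))
            if value is None:
                new_clause.append(lit)       # variable still unassigned
            elif (lit > 0) == (value == 1):
                satisfied = True             # literal is true: clause is gone
                break
            # otherwise the literal is false: drop it from the clause
        if not satisfied:
            result.append(new_clause)
    return result

# phi = (w v x v y v z)(w v ~x)(x v ~y)(y v ~z)(z v ~w)(~w v ~z)
# with w=1, x=2, y=3, z=4:
phi = [[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]
print(simplify(phi, {1: 0, 2: 0}))  # -> [[3, 4], [-3], [3, -4]]
print(simplify(phi, {1: 0, 2: 1}))  # -> [[], [3, -4]]: empty clause, prune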
  • 35. The decisions we need to make in backtracking First Which subproblem to expand next. Second Which branching variable to use. Remark The benefit of backtracking lies in its ability to eliminate portions of the search space. 20 / 78
  • 38. Choosing Something Notable A classic strategy: You choose the subproblem that contains the smallest clause. Then, you branch on a variable in that clause. Then If the clause is a singleton then at least one of the resulting branches will be terminated. 21 / 78
  • 42. The Backtracking Test The test needs to look at the subproblem to declare quickly if 1 Failure: the subproblem has no solution. 2 Success: a solution to the subproblem is found. 3 Uncertainty. What about SAT The test declares failure if there is an empty clause The test declares success if there are no clauses Uncertainty Otherwise. 22 / 78
  • 48. Example We have the following [figure, lost in extraction: the backtracking search tree for the example] 23 / 78
  • 49. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 24 / 78
  • 50. Pseudo-code for Backtracking We have BACKTRACKING(P0) 1 Start with some problem P0 2 Let S = {P0}, the set of active subproblems 3 While S ≠ ∅ 4 choose a subproblem P ∈ S and remove it from S 5 expand it into smaller subproblems P1, P2, ..., Pk 6 For each Pi 7 if test(Pi) succeeds: halt and return the branch solution 8 if test(Pi) fails: discard Pi 9 Otherwise: add Pi to S 10 return there is no solution. 25 / 78
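As a hedged illustration, this pseudo-code can be instantiated for SAT in a few lines of Python, reusing the simplify() helper and the formula phi from the sketch above. The test is implicit: an empty clause means failure, no remaining clauses means success; branching follows the smallest-clause strategy from the earlier slide.

def backtrack_sat(clauses, assignment=None):
    """Recursive backtracking SAT search; returns a satisfying
    assignment {var: 0 or 1}, or None if none exists."""
    assignment = dict(assignment or {})
    remaining = simplify(clauses, assignment)
    if any(len(c) == 0 for c in remaining):   # failure: empty clause
        return None
    if not remaining:                         # success: all clauses satisfied
        return assignment
    # branch on a variable of the smallest clause (the classic strategy)
    smallest = min(remaining, key=len)
    var = abs(smallest[0])
    for value in (0, 1):
        assignment[var] = value
        solution = backtrack_sat(clauses, assignment)
        if solution is not None:
            return solution
    return None

print(backtrack_sat(phi))  # -> None: phi is unsatisfiable, exactly as
                           #    the pruned search tree above shows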
  • 54. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 26 / 78
  • 55. Another Exponential Search Branch-and-Bound This is a search better fitted to optimization problems How do we reject subproblems? First, imagine a minimization problem To reject a subproblem, its cost must exceed the best cost found so far. PROBLEM!!! This is unknown!!! Thus, we use a quick lower bound on this cost. 27 / 78
  • 60. Lower Bound We use the information in the problem to design an efficient way to lower-bound the cost of completing a partial solution. 28 / 78
  • 61. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 29 / 78
  • 62. Pseudo-code for Branch-and-Bound We have BRANCH-AND-BOUND(P0) 1 Start with some problem P0 2 Let S = {P0}, the set of active subproblems 3 bestsofar = ∞ 4 While S ≠ ∅ 5 choose a subproblem (partial solution) P ∈ S and remove it from S 6 expand it into smaller subproblems P1, P2, ..., Pk 7 For each Pi 8 if Pi is a complete solution: 9 update bestsofar 10 else 11 if lowerbound(Pi) < bestsofar: add Pi to S 12 return bestsofar 30 / 78
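A generic Python sketch of this loop follows, with the subproblem of line 5 chosen best-first by its lower bound (one reasonable implementation choice, not mandated by the pseudo-code). expand, is_complete, cost, and lowerbound are problem-specific hooks supplied by the caller; all names are illustrative.

import heapq, itertools

def branch_and_bound(p0, expand, is_complete, cost, lowerbound):
    """Minimization by branch-and-bound over subproblems."""
    best_cost, best = float("inf"), None
    counter = itertools.count()               # tie-breaker for the heap
    active = [(lowerbound(p0), next(counter), p0)]
    while active:
        bound, _, p = heapq.heappop(active)
        if bound >= best_cost:                # stale entry: already beaten
            continue
        for child in expand(p):
            if is_complete(child):
                if cost(child) < best_cost:   # update bestsofar
                    best_cost, best = cost(child), child
            elif lowerbound(child) < best_cost:
                heapq.heappush(active, (lowerbound(child), next(counter), child))
    return best, best_cost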
  • 67. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 31 / 78
  • 68. Example: The Traveling Salesman Problem A partial solution to the TSP It is a simple path from a to b passing through some vertices S ⊆ V with a, b ∈ S. Denote this as [a, S, b] The subproblem It is necessary to find the best completion of the tour, i.e. the cheapest complementary path from b to a with intermediate vertices in V − S. 32 / 78
  • 70. Thus, Given a path from a to b We extend the partial solution [a, S, b] by a single edge (b, x) with x ∈ V − S, leading to |V − S| subproblems of the form [a, S ∪ {x} , x]. How can we lower-bound the cost of completing a partial tour [a, S, b]? We use a simple strategy: the lower bound is the sum of the following quantities: 1 The lightest edge from a to V − S. 2 The lightest edge from b to V − S. 3 The minimum spanning tree of V − S. 33 / 78
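A Python sketch of this lower bound, assuming the instance is a complete graph stored as a dict of dicts of edge weights (an illustrative representation, not from the slides). The MST is computed with a simple O(V^3) Prim's algorithm, which is enough for a sketch.

def mst_weight(graph, vertices):
    """Weight of a minimum spanning tree over `vertices` (Prim)."""
    vertices = set(vertices)
    if len(vertices) < 2:
        return 0
    in_tree, total = {next(iter(vertices))}, 0
    while in_tree != vertices:
        # cheapest edge crossing from the tree to the rest
        w, v = min((graph[u][x], x)
                   for u in in_tree for x in vertices - in_tree)
        in_tree.add(v)
        total += w
    return total

def completion_lower_bound(graph, a, S, b):
    """Lower bound on completing the partial tour [a, S, b], a, b in S."""
    rest = set(graph) - set(S)
    if not rest:                      # only the closing edge b -> a remains
        return graph[b][a]
    return (min(graph[a][x] for x in rest)     # lightest edge a -> V - S
            + min(graph[b][x] for x in rest)   # lightest edge b -> V - S
            + mst_weight(graph, rest))         # MST of V - S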
  • 77. Example A graph where a brute-force approach would examine 7! = 5,040 tours [figure: a weighted graph on the eight vertices A–H] 34 / 78
  • 78. Example TSP Solution [figure: the optimal tour on the same graph] 35 / 78
  • 79. Example Branch and Bound Solution [figure: the branch-and-bound search tree, with lower bounds between 10 and 15 at the nodes] 36 / 78
  • 80. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 37 / 78
  • 81. Approximation Algorithms Remarks In practice, near-optimality is often good enough. An algorithm that returns near-optimal solutions is called an approximation algorithm. 38 / 78
  • 83. Definition of approximation algorithms Framework Suppose that we are working on an optimization problem in which each potential solution has a positive cost, and we wish to find a near-optimal solution. Thus Max cost for a maximization problem. Min cost for a minimization problem. 39 / 78
  • 88. Then Definition We say that an algorithm for a problem has an approximation ratio of ρ (n) if: For any input of size n, the cost C of the solution produced by the algorithm is within a factor of ρ (n) of the cost C∗ of an optimal solution: max (C/C∗, C∗/C) ≤ ρ (n) 40 / 78
  • 91. Definition of approximation algorithms Definition If an algorithm achieves an approximation ratio of ρ (n), we call it a ρ (n)-approximation algorithm. Further The definitions of the approximation ratio and of a ρ (n)-approximation algorithm apply to both minimization and maximization problems. 41 / 78
  • 93. The two possibilities For a maximization problem We have that 0 < C ≤ C∗; the ratio C∗/C gives the factor by which the cost of an optimal solution is larger than the cost of the approximate solution. For a minimization problem We have that 0 < C∗ ≤ C; the ratio C/C∗ gives the factor by which the cost of the approximate solution is larger than the cost of an optimal solution. Properties The approximation ratio of an approximation algorithm is never less than 1. 42 / 78
  • 96. Some characteristics of approximation algorithms Approximation algorithms have the following characteristics They run in polynomial time. They are not guaranteed to obtain optimal solutions. However, they guarantee a solution within some factor of the optimum. Further They often use algorithms from related problems as subroutines. 43 / 78
  • 101. We will look at... Examples Vertex Cover problem. The Traveling Salesman Problem. This barely scratches the surface of the theory of approximation algorithms. 44 / 78
  • 104. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 45 / 78
  • 106. Vertex Cover Problem Definition of a vertex cover A vertex cover of an undirected graph G = (V, E) is a subset V′ ⊆ V such that if (u, v) is an edge of G, then either u ∈ V′ or v ∈ V′ (or both). Thus The size of a vertex cover is the number of vertices in it. 47 / 78
  • 108. Vertex Cover Problem Definition The vertex-cover problem is to find a vertex cover of minimum size in a given undirected graph. 48 / 78
  • 109. Example of Vertex Cover Problem Example Determine the smallest subset of vertices that "covers" the graph on the right. [Figure: a graph on vertices 1, 2, 3, 4, 5] 49 / 78
  • 110. Vertex cover Example Determine the smallest subset of vertices that "covers" the graph on the right. [Figure: the same graph on vertices 1–5] ANSWER: {1, 3, 4} 50 / 78
  • 111. Approximation vertex cover algorithm Now, we have APPROX-VERTEX-COVER(G) 1 C = ∅ 2 E′ = G.E 3 while E′ ≠ ∅ 4 let (u, v) be an arbitrary edge of E′ 5 C = C ∪ {u, v} 6 remove from E′ every edge incident on either u or v 7 return C Complexity O(V + E) 51 / 78
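As a concrete reference, here is a minimal Python sketch of APPROX-VERTEX-COVER, assuming the graph is given simply as a set of undirected edges (the function name and the input format are illustrative, not from the slides):

```python
def approx_vertex_cover(edges):
    """2-approximation for vertex cover: repeatedly pick an arbitrary
    remaining edge, add BOTH endpoints to the cover, and discard every
    edge incident on either endpoint."""
    cover = set()
    remaining = set(edges)            # E' = G.E
    while remaining:                  # while E' is not empty
        u, v = remaining.pop()        # an arbitrary edge of E'
        cover.update((u, v))          # C = C union {u, v}
        # remove from E' every edge incident on either u or v
        remaining = {(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)}
    return cover

# Hypothetical usage; with adjacency lists this runs in O(V + E),
# whereas this set-rebuilding sketch is O(E^2) in the worst case.
print(approx_vertex_cover({(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}))
```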
  • 113. Theorem Theorem 35.1 APPROX-VERTEX-COVER is a polynomial-time 2-approximation algorithm. Proof We know that the algorithm returns a vertex cover when it finishes, and that it runs in poly-time. Now, we only need to prove that APPROX-VERTEX-COVER returns a vertex cover of at most twice the size of the optimal cover. 53 / 78
  • 116. Proof First Let C be the set of vertices returned by the approximation algorithm, and let A be the set of edges picked at line 4. Now, given an optimal cover C∗: C∗ must include at least one endpoint of each edge in A. No two edges in A share an endpoint, thus no two edges of A are covered by the same vertex from C∗: |C∗| ≥ |A| 54 / 78
  • 121. Proof Second Now, each execution of line 4 picks an edge for which neither of its endpoints is already in C, and line 5 adds both endpoints, thus we have |C| = 2|A| ≤ 2|C∗| 55 / 78
  • 123. Remarks Did you notice the trick? We do not know the size of C∗, but we obtain a lower bound for it. Actually The set A is a maximal matching in the graph G. 57 / 78
  • 125. What is a Maximal Matching? Definition Given a graph G = (V , E), a matching M in G is a set of pairwise non-adjacent edges; that is, no two edges share a common vertex. 58 / 78
  • 126. Maximal Matching Definition A Maximal Matching is a matching M of a graph G with the property that if any edge not in M is added to M, it is no longer a matching. 59 / 78
  • 127. Example Here, we can see an example [Figure: three copies of a bipartite graph on vertices A–F and 1–6, showing a matching and a maximal matching] 60 / 78
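A maximal matching can be built greedily in a single pass; this sketch (using the same hypothetical edge-set representation as before) shows why the edge set A picked by APPROX-VERTEX-COVER is one:

```python
def greedy_maximal_matching(edges):
    """One pass over the edges: keep an edge whenever neither endpoint
    is matched yet. The result is maximal (no further edge can be
    added), though not necessarily maximum (largest possible)."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```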
  • 128. Outline 1 Introduction The Dilemma Branches Attacking the Problem 2 Intelligent Exhaustive Search Backtracking Example Backtracking Algorithm Branch-and-Bound Algorithm Branch-and-Bound Example 3 Approximation Algorithms Introduction Examples Vertex Cover Problem The Traveling-Salesman Problem 61 / 78
  • 129. The Traveling-Salesman Problem Given a complete undirected graph G = (V, E) With a non-negative integer cost function c(u, v) associated with each edge (u, v) ∈ E We need to find a Hamiltonian cycle of G with minimum cost. 62 / 78
  • 131. Extra notation As an extension of our notation Let c(A) denote the total cost of the edges in a subset A ⊆ E: c(A) = Σ_{(u,v)∈A} c(u, v) 63 / 78
  • 132. The extra assumption: The triangle inequality Assume We will assume that the cost function c satisfies the triangle inequality: for all vertices u, v, w ∈ V, c(u, w) ≤ c(u, v) + c(v, w) 64 / 78
  • 133. The 2-approximation algorithm Pseudocode APPROX-TSP-TOUR(G, c) 1 select a vertex r ∈ G.V to be a "root" vertex 2 compute a minimum spanning tree T for G from root r using MST-PRIM(G, c, r) 3 let H be a list of vertices, ordered according to when they are first visited in a preorder tree walk of T 4 return the Hamiltonian cycle H 65 / 78
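The following is a minimal Python sketch of APPROX-TSP-TOUR, assuming the input is a list of points in the plane (Euclidean distances satisfy the triangle inequality); Prim's algorithm is inlined here instead of a call to MST-PRIM, and all names are illustrative:

```python
import math

def approx_tsp_tour(points, r=0):
    """2-approximation for metric TSP: build an MST rooted at r, then
    output the vertices in preorder (this implicitly performs the
    triangle-inequality shortcutting of the full walk)."""
    n = len(points)
    c = lambda u, v: math.dist(points[u], points[v])

    # Line 2: minimum spanning tree from r (simple O(V^2) Prim).
    children = {u: [] for u in range(n)}
    best = {u: (c(r, u), r) for u in range(n) if u != r}
    while best:
        u = min(best, key=lambda w: best[w][0])   # cheapest crossing edge
        _, parent = best.pop(u)
        children[parent].append(u)
        for w in best:                            # relax edges out of u
            if c(u, w) < best[w][0]:
                best[w] = (c(u, w), u)

    # Line 3: preorder walk of T; each vertex appears exactly once.
    tour, stack = [], [r]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour   # the Hamiltonian cycle closes by returning to r

# Hypothetical coordinates:
print(approx_tsp_tour([(0, 0), (2, 0), (3, 2), (0, 3), (4, 4)]))
```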
  • 137. Example The root is a. [Figure: the tour computed from root a, next to the optimal solution] 66 / 78
  • 138. Theorem that supports the claim Theorem 35.2 APPROX-TSP-TOUR is a polynomial-time 2-approximation algorithm for the traveling salesman problem with the triangle inequality. 67 / 78
  • 139. Proof First Let H∗ denote an optimal tour for the given set of vertices; H∗ is a cycle. If we erase one edge from H∗, H∗ becomes a spanning tree. Let T be the minimum spanning tree computed at line 2 Then: c(T) ≤ c(H∗) 68 / 78
  • 144. Full walk A full walk W of T lists the vertices each time they are visited: when first encountered, and again whenever we return to them after visiting a subtree 69 / 78
  • 145. Full walk A full walk W = a, b, c, b, h, b, a, d, e, f, e, g, e, d, a 70 / 78
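To make the full walk concrete, here is a small Python sketch that reproduces the walk above; the children dictionary encodes the tree rooted at a and is reconstructed from the walk listed on the slide:

```python
def full_walk(children, r):
    """Euler-tour-style full walk: list a vertex when first visited,
    and again after returning from each of its subtrees."""
    walk = [r]
    for child in children.get(r, []):
        walk.extend(full_walk(children, child))
        walk.append(r)
    return walk

# Tree implied by the walk a, b, c, b, h, b, a, d, e, f, e, g, e, d, a:
children = {'a': ['b', 'd'], 'b': ['c', 'h'], 'd': ['e'], 'e': ['f', 'g']}
print(full_walk(children, 'a'))
# ['a','b','c','b','h','b','a','d','e','f','e','g','e','d','a']
```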
  • 146. Then, we have Since the full walk traverses every edge of T exactly twice c(W) = 2c(T) In addition However, W is not a tour... What can we do? 71 / 78
  • 148. Use triangle inequality Something Notable By the triangle inequality, however, we can delete a visit to any vertex from W. IMPORTANT Using the triangle inequality, the cost does not increase. Deletion of vertices Delete a vertex v from W between visits to u and w. The resulting ordering specifies going directly from u to w. 72 / 78
  • 152. Thus We get a, b, c, h, d, e, f, g 73 / 78
  • 153. Minimum Spanning Tree From our example [Figure: the minimum spanning tree T from the running example] 74 / 78
  • 154. Notice the following We have The previous visiting order is exactly a preorder walk of the previous tree T. Now Let H be the cycle obtained by this preorder walk. It is a Hamiltonian cycle, since every vertex is visited exactly once Thus c(H) ≤ c(W) = 2c(T) ≤ 2c(H∗) 75 / 78
  • 158. Finally We have that c(H)/c(H∗) ≤ 2 76 / 78
  • 159. However Given the General Traveling-Salesman Problem If we drop the assumption that the cost function c satisfies the triangle inequality. Theorem 35.3 If P ≠ NP, then for any constant ρ ≥ 1, there is no polynomial-time approximation algorithm with approximation ratio ρ for the general traveling-salesman problem. 77 / 78
  • 161. Exercises From Cormen's book, solve: 35.1-1 35.1-2 35.1-4 35.2-1 35.2-2 35.2-3 35.2-5 78 / 78