Analysis of Algorithms
All-Pairs Shortest Path
Andres Mendez-Vazquez
November 11, 2015
Outline
1 Introduction
   Definition of the Problem
   Assumptions
   Observations
2 Structure of a Shortest Path
   Introduction
3 The Solution
   The Recursive Solution
   The Iterative Version
   Extended-Shortest-Paths
   Looking at the Algorithm as Matrix Multiplication
   Example
   We want something faster
4 A different dynamic-programming algorithm
   The Shortest Path Structure
   The Bottom-Up Solution
   Floyd-Warshall Algorithm
   Example
5 Other Solutions
   Johnson's Algorithm
6 Exercises
   You can try them
Problem
Definition
Given u and v, find the shortest path between them.
Now, what if you want ALL PAIRS!!!
Use every element of V as a source.
Clearly!!! You can fall back on the old single-source algorithms!!!
What can we use?
Use Dijkstra's |V| times!!!
If all the weights are non-negative.
This has, using Fibonacci heaps, O(V² log V + VE) complexity.
Which equals O(V³) in the case E = O(V²), but with a large hidden constant c.
Use Bellman-Ford |V| times!!!
If negative weights are allowed.
Then, we have O(V²E).
Which equals O(V⁴) in the case E = O(V²).
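As a baseline, here is a minimal Python sketch of the "run Dijkstra from every vertex" approach, using a binary heap (so O(E lg V) per source, rather than the Fibonacci-heap bound quoted above). The adjacency-dict representation and function names are illustrative assumptions, not from the slides; vertex labels are assumed hashable and comparable (e.g. ints).

import heapq

INF = float('inf')

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs,
    # with all weights non-negative; every vertex must appear as a key.
    dist = {v: INF for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def all_pairs_via_dijkstra(graph):
    # |V| single-source runs, one per vertex.
    return {u: dijkstra(graph, u) for u in graph}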
This is not Good For Large Problems
Problems
Computer Network Systems.
Aircraft Networks (e.g. flying time, fares).
Railroad network tables of distances between all pairs of cities for a road atlas.
Etc.
For more on this...
Something Notable
As with many things in the history of the analysis of algorithms, the all-pairs shortest-path problem has a long history.
We have more from
"Studies in the Economics of Transportation" by Beckmann, McGuire, and Winsten (1956), where the matrix-multiplication-like notation that we use was first introduced.
In addition
G. Tarry, Le problème des labyrinthes, Nouvelles Annales de Mathématiques (3) 14 (1895) 187–190 [English translation in: N.L. Biggs, E.K. Lloyd, R.J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, pp. 18–20] (for the theory behind depth-first search techniques).
Chr. Wiener, Ueber eine Aufgabe aus der Geometria situs, Mathematische Annalen 6 (1873) 29–30.
Assumptions: Matrix Representation
Matrix Representation of a Graph
For this, each weight in the matrix has the following value:

w_ij = 0        if i = j
     = w(i, j)  if i ≠ j and (i, j) ∈ E
     = ∞        if i ≠ j and (i, j) ∉ E

Then, we have

W = ( w_11  w_12  ...  w_1,n−1  w_1n
      ...
      w_n1  w_n2  ...  w_n,n−1  w_nn )

Important
There are no negative-weight cycles.
Observations
Ah!!!
The next algorithm is a dynamic-programming algorithm for
The all-pairs shortest-paths problem on a directed graph G = (V, E).
At the end, the algorithm will have generated the following matrix

D = ( d_11  d_12  ...  d_1,n−1  d_1n
      ...
      d_n1  d_n2  ...  d_n,n−1  d_nn )

Each entry d_ij = δ(i, j).
Structure of a Shortest Path
Consider Lemma 24.1
Given a weighted, directed graph G = (V, E), let p = ⟨v1, v2, ..., vk⟩ be a shortest path (SP) from v1 to vk. Then,
p_ij = ⟨vi, vi+1, ..., vj⟩ is a shortest path from vi to vj, where 1 ≤ i ≤ j ≤ k.
We can do the following
Consider a shortest path p from vertex i to vertex j; p contains at most m edges.
Then, we can use the Corollary to make a decomposition:
i ⇝ k → j along p ⟹ δ(i, j) = δ(i, k) + w_kj
Structure of a Shortest Path
Idea of Using Matrix Multiplication
We define the following concept based on the decomposition Corollary!!!
l^(m)_ij = the minimum weight of any path from i to j that contains at most m edges, i.e.
l^(m)_ij could be min_k { l^(m−1)_ik + w_kj }
Graphical Interpretation
[Figure: Looking for the Shortest Path]
Recursive Solution
Thus, we have that for paths with ZERO edges
l^(0)_ij = 0  if i = j
         = ∞  if i ≠ j
Recursion, Our Great Friend
Consider the previous definition and decomposition. Thus
l^(m)_ij = min( l^(m−1)_ij , min_{1≤k≤n} { l^(m−1)_ik + w_kj } )
         = min_{1≤k≤n} { l^(m−1)_ik + w_kj }
Recursive Solution
Why? A simple notation problem: taking k = j in the inner minimum recovers the first term, since w_jj = 0:
l^(m)_ij = l^(m−1)_ij + 0 = l^(m−1)_ij + w_jj
Transforming it to an iterative one
What is δ(i, j)?
If you do not have negative-weight cycles, and δ(i, j) < ∞,
then the shortest path from vertex i to j has at most n − 1 edges:
δ(i, j) = l^(n−1)_ij = l^(n)_ij = l^(n+1)_ij = l^(n+2)_ij = ...
Back to Matrix Multiplication
We have the matrix L^(m) = ( l^(m)_ij ).
Then, we can compute first L^(1), then L^(2), all the way to L^(n−1), which contains the actual shortest-path weights.
What is L^(1)?
First, we have that L^(1) = W, since l^(1)_ij = w_ij.
Algorithm
Code
Extended-Shortest-Paths(L, W)
1  n = L.rows
2  let L′ = ( l′_ij ) be a new n × n matrix
3  for i = 1 to n
4      for j = 1 to n
5          l′_ij = ∞
6          for k = 1 to n
7              l′_ij = min( l′_ij , l_ik + w_kj )
8  return L′
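A minimal Python sketch of this extend step, representing the matrices as lists of lists with float('inf') for missing entries; the naming and representation are illustrative assumptions, not from the slides.

INF = float('inf')

def extend_shortest_paths(L, W):
    # One "min-plus product": Lp[i][j] = min over k of L[i][k] + W[k][j],
    # i.e. grow every shortest path by at most one more edge.
    n = len(L)
    Lp = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if L[i][k] + W[k][j] < Lp[i][j]:
                    Lp[i][j] = L[i][k] + W[k][j]
    return Lp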
Algorithm
Complexity
If |V| = n, one call costs Θ(n³).
Look-Alike Matrix Multiplication Operations
Mapping That Can Be Thought Of
L ⟹ A
W ⟹ B
L′ ⟹ C
min ⟹ +
+ ⟹ ·
∞ ⟹ 0
Look-Alike Matrix Multiplication Operations
Using the previous notation, we can rewrite our previous algorithm as
Square-Matrix-Multiply(A, B)
1  n = A.rows
2  let C be a new n × n matrix
3  for i = 1 to n
4      for j = 1 to n
5          c_ij = 0
6          for k = 1 to n
7              c_ij = c_ij + a_ik · b_kj
8  return C
Complexity
Thus
The complexity of Extended-Shortest-Paths is equal to O(n³).
Using the Analogy
Returning to the all-pairs shortest-paths problem
It is possible to compute the shortest paths by extending each path edge by edge.
Therefore
If we denote by A · B the "product" returned by Extended-Shortest-Paths(A, B), we obtain the following.
Using the Analogy
We have that
L^(1) = L^(0) · W = W
L^(2) = L^(1) · W = W²
...
L^(n−1) = L^(n−2) · W = W^(n−1)
The Final Algorithm
We have that
Slow-All-Pairs-Shortest-Paths(W)
1  n ← W.rows
2  L^(1) ← W
3  for m = 2 to n − 1
4      L^(m) ← Extended-Shortest-Paths(L^(m−1), W)
5  return L^(n−1)
With Complexity
Complexity
O(V⁴)  (1)
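A direct Python transcription of the slow algorithm, reusing the extend_shortest_paths sketch above (naming again illustrative):

def slow_all_pairs_shortest_paths(W):
    # After the m-th extension, L holds the weights of shortest
    # paths that use at most m edges; L^(n-1) is the answer.
    n = len(W)
    L = W
    for _ in range(2, n):  # m = 2, ..., n - 1
        L = extend_shortest_paths(L, W)
    return L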
Example
We have the following
[Figure: the example directed graph on vertices 1–5]
     1   2   3   4   5
1    0   3   8   ∞  −4
2    ∞   0   ∞   1   7
3    ∞   4   0   ∞   ∞
4    2   ∞  −5   0   ∞
5    ∞   ∞   ∞   6   0
L^(1) = L^(0) W
Example
We have the following
[Figure: the example directed graph on vertices 1–5]
     1   2   3   4   5
1    0   3   8   2  −4
2    3   0  −4   1   7
3    ∞   4   0   5  11
4    2  −1  −5   0  −2
5    8   ∞   1   6   0
L^(2) = L^(1) W
Here, we use the analogy of matrix multiplication
[Figure: the min-plus "product" of D^(1) and W]
Thus, the update of an element l_ij
Example (row 1 of L^(1) added entry-wise to column 4 of W)
l^(2)_14 = min{ ( 0  3  8  ∞  −4 ) + ( ∞  1  ∞  0  6 )ᵀ }
         = min( ∞, 4, ∞, ∞, 2 )
         = 2
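A quick check of this entry with the earlier sketch, using the example graph's weight matrix (0-based indices, so l^(2)_14 is L2[0][3]):

W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,   7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,   0]]
L2 = extend_shortest_paths(W, W)  # L^(2), since L^(1) = W
print(L2[0][3])                   # prints 2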
Example
We have the following
[Figure: the example directed graph on vertices 1–5]
     1   2   3   4   5
1    0   3  −3   2  −4
2    3   0  −4   1  −1
3    7   4   0   5  11
4    2  −1  −5   0  −2
5    8   5   1   6   0
L^(3) = L^(2) W
Example
We have the following
[Figure: the example directed graph on vertices 1–5]
     1   2   3   4   5
1    0   1  −3   2  −4
2    3   0  −4   1  −1
3    7   4   0   5   3
4    2  −1  −5   0  −2
5    8   5   1   6   0
L^(4) = L^(3) W
Recall the following
We are interested only
In the matrix L^(n−1).
In addition
Remember, we do not have negative-weight cycles!!!
Therefore, we have the equation
δ(i, j) = l^(n−1)_ij = l^(n)_ij = l^(n+1)_ij = ...  (2)
Thus
It implies
L^(m) = L^(n−1)  (3)
For all
m ≥ n − 1  (4)
Something Faster
We want something faster!!! Observation!!!
L^(1) = W
L^(2) = W · W = W²
L^(4) = W² · W² = W⁴
L^(8) = W⁴ · W⁴ = W⁸
...
L^(2^⌈lg(n−1)⌉) = W^(2^(⌈lg(n−1)⌉−1)) · W^(2^(⌈lg(n−1)⌉−1)) = W^(2^⌈lg(n−1)⌉)
Because
2^⌈lg(n−1)⌉ ≥ n − 1 ⟹ L^(2^⌈lg(n−1)⌉) = L^(n−1)
The Faster Algorithm
Using repeated squaring
Faster-All-Pairs-Shortest-Paths(W)
1  n ← W.rows
2  L^(1) ← W
3  m ← 1
4  while m < n − 1
5      L^(2m) ← Extended-Shortest-Paths(L^(m), L^(m))
6      m ← 2m
7  return L^(m)
Complexity
If n = |V|, we have O(V³ lg V).
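The same idea in Python, again reusing extend_shortest_paths from above (illustrative naming):

def faster_all_pairs_shortest_paths(W):
    # Repeated squaring: only ceil(lg(n - 1)) min-plus products,
    # since L^(m) = L^(n-1) for every m >= n - 1.
    n = len(W)
    L, m = W, 1
    while m < n - 1:
        L = extend_shortest_paths(L, L)
        m *= 2
    return L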
The Shortest Path Structure
Intermediate Vertex
For a path p = ⟨v1, v2, ..., vl⟩, an intermediate vertex is any vertex of p other than v1 or vl.
Define
d^(k)_ij = the weight of a shortest path between i and j with all intermediate vertices in the set {1, 2, ..., k}.
The Recursive Idea
Simply look at the following cases
Case I: k is not an intermediate vertex. Then a shortest path from i to j with all intermediate vertices in {1, ..., k − 1} is also a shortest path from i to j with intermediate vertices in {1, ..., k}.
⟹ d^(k)_ij = d^(k−1)_ij
Case II: k is an intermediate vertex. Then i ⇝(p1) k ⇝(p2) j, and we can make the following statements using Lemma 24.1:
p1 is a shortest path from i to k with all intermediate vertices in the set {1, ..., k − 1}.
p2 is a shortest path from k to j with all intermediate vertices in the set {1, ..., k − 1}.
⟹ d^(k)_ij = d^(k−1)_ik + d^(k−1)_kj
The Graphical Idea
Consider all possible intermediate vertices in {1, 2, ..., k}
Figure: The Recursive Idea
The Recursive Solution
The Recursion
d^(k)_ij = w_ij                                         if k = 0
         = min( d^(k−1)_ij , d^(k−1)_ik + d^(k−1)_kj )  if k ≥ 1
Final answer when k = n
We recursively calculate D^(n) = ( d^(n)_ij ), where d^(n)_ij = δ(i, j) for all i, j ∈ V.
Thus, we have the following
Recursive Version
Recursive-Floyd-Warshall(W)
1  let D^(n) be a new n × n matrix
2  for i = 1 to n
3      for j = 1 to n
4          D^(n)[i, j] = Recursive-Part(i, j, n, W)
5  return D^(n)
Thus, we have the following
The Recursive-Part
Recursive-Part(i, j, k, W)
1  if k == 0
2      return W[i, j]
3  if k ≥ 1
4      t1 = Recursive-Part(i, j, k − 1, W)
5      t2 = Recursive-Part(i, k, k − 1, W) + Recursive-Part(k, j, k − 1, W)
6      if t1 ≤ t2
7          return t1
8      else
9          return t2
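As written, this recursion recomputes the same subproblems exponentially often; memoizing it leaves only Θ(n³) distinct states. A Python sketch under that observation (0-based indices, illustrative naming):

from functools import lru_cache

def recursive_floyd_warshall(W):
    n = len(W)

    @lru_cache(maxsize=None)
    def d(i, j, k):
        # Weight of a shortest i -> j path whose intermediate
        # vertices all lie in {0, ..., k - 1}.
        if k == 0:
            return W[i][j]
        return min(d(i, j, k - 1),
                   d(i, k - 1, k - 1) + d(k - 1, j, k - 1))

    return [[d(i, j, n) for j in range(n)] for i in range(n)]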
Now
We want to use storage to eliminate the recursion
For this, we are going to use two matrices:
1 D^(k−1), the previous matrix.
2 D^(k), the new matrix, based on the previous one.
Something Notable
With D^(0) = W, i.e. the weights of all the edges that exist.
In addition, we want to rebuild the answer
For this, we have the predecessor matrix Π
Actually, we want to compute a sequence of matrices
Π^(0), Π^(1), ..., Π^(n)
Where
Π = Π^(n)
What are the elements in Π^(k)?
Each element in the matrix is as follows:
π^(k)_ij = the predecessor of vertex j on a shortest path from vertex i with all intermediate vertices in the set {1, 2, ..., k}
Thus, we have that
π^(0)_ij = NIL  if i = j or w_ij = ∞
         = i    if i ≠ j and w_ij < ∞
Then
We have the following
For k ≥ 1, take the path i ⇝ k ⇝ j, where k ≠ j.
Then, if d^(k−1)_ij > d^(k−1)_ik + d^(k−1)_kj:
As the predecessor of j, we choose the predecessor of j on a shortest path from k with all intermediate vertices in the set {1, 2, ..., k − 1}.
Otherwise, if d^(k−1)_ij ≤ d^(k−1)_ik + d^(k−1)_kj:
We choose the same predecessor of j that we chose on a shortest path from i with all intermediate vertices in the set {1, 2, ..., k − 1}.
Formally
We have then
π^(k)_ij = π^(k−1)_ij  if d^(k−1)_ij ≤ d^(k−1)_ik + d^(k−1)_kj
         = π^(k−1)_kj  if d^(k−1)_ij > d^(k−1)_ik + d^(k−1)_kj
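Once Π = Π^(n) is available, an actual shortest path can be rebuilt by walking the predecessors backward from j to i. A small illustrative Python helper, with None playing the role of NIL:

def reconstruct_path(Pi, i, j):
    # Pi[i][j] is the predecessor of j on a shortest i -> j path.
    # Returns the path as a list of vertices, or [] if unreachable.
    if i == j:
        return [i]
    if Pi[i][j] is None:
        return []
    prefix = reconstruct_path(Pi, i, Pi[i][j])
    return prefix + [j] if prefix else []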
Final Iterative Version of Floyd-Warshall (Correction by Diego - Class Tec 2015)
Floyd-Warshall(W)
1   n = W.rows
2   D^(0) = W
3   for k = 1 to n
4       let D^(k) = ( d^(k)_ij ) be a new n × n matrix
5       let Π^(k) be a new n × n predecessor matrix
    Given each k, we update using D^(k−1):
6       for i = 1 to n
7           for j = 1 to n
8               if d^(k−1)_ij ≤ d^(k−1)_ik + d^(k−1)_kj
9                   d^(k)_ij = d^(k−1)_ij
10                  π^(k)_ij = π^(k−1)_ij
11              else
12                  d^(k)_ij = d^(k−1)_ik + d^(k−1)_kj
13                  π^(k)_ij = π^(k−1)_kj
14  return D^(n) and Π^(n)
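A compact Python sketch of the same computation. It keeps a single distance matrix updated in place, a standard space optimization, valid because row k and column k do not change during iteration k when the diagonal is zero and there are no negative-weight cycles; the names and the float('inf') encoding are illustrative assumptions.

def floyd_warshall(W):
    # W: n x n list of lists, float('inf') for missing edges, 0 diagonal.
    n = len(W)
    D = [row[:] for row in W]
    # Pi[i][j]: predecessor of j on a shortest i -> j path (None = NIL).
    Pi = [[None if i == j or W[i][j] == float('inf') else i
           for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    Pi[i][j] = Pi[k][j]
    return D, Pi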
Explanation
Lines 1 and 2
Initialization of the variables n and D^(0).
Line 3
In the loop, we solve the smaller problems first, from k = 1 up to k = n.
Remember: here k bounds the set of allowed intermediate vertices, {1, ..., k}.
Lines 4 and 5
Instantiation of the new matrices D^(k) and Π^(k), holding the shortest paths whose intermediate vertices lie in {1, ..., k}.
Explanation
Lines 6 and 7
This is done to go through all the possible combinations of i's and j's.
Line 8
Deciding whether d^(k−1)_ij ≤ d^(k−1)_ik + d^(k−1)_kj.
Example
Graph
[Figure: the example directed graph on vertices 1–5]
Example
D^(0) and Π^(0)
[Figure: the example graph]

D^(0) =
   0    3    8    ∞   −4
   ∞    0    ∞    1    7
   ∞    4    0    ∞    ∞
   2    ∞   −5    0    ∞
   ∞    ∞    ∞    6    0

Π^(0) =
  NIL   1    1   NIL   1
  NIL  NIL  NIL   2    2
  NIL   3   NIL  NIL  NIL
   4   NIL   4   NIL  NIL
  NIL  NIL  NIL   5   NIL
Example
D^(1) and Π^(1)
[Figure: the example graph]

D^(1) =
   0    3    8    ∞   −4
   ∞    0    ∞    1    7
   ∞    4    0    ∞    ∞
   2    5   −5    0   −2
   ∞    ∞    ∞    6    0

Π^(1) =
  NIL   1    1   NIL   1
  NIL  NIL  NIL   2    2
  NIL   3   NIL  NIL  NIL
   4    1    4   NIL   1
  NIL  NIL  NIL   5   NIL
Example
D^(2) and Π^(2)
[Figure: the example graph]

D^(2) =
   0    3    8    4   −4
   ∞    0    ∞    1    7
   ∞    4    0    5   11
   2    5   −5    0   −2
   ∞    ∞    ∞    6    0

Π^(2) =
  NIL   1    1    2    1
  NIL  NIL  NIL   2    2
  NIL   3   NIL   2    2
   4    1    4   NIL   1
  NIL  NIL  NIL   5   NIL
Example
D^(3) and Π^(3)
[Figure: the example graph]

D^(3) =
   0    3    8    4   −4
   ∞    0    ∞    1    7
   ∞    4    0    5   11
   2   −1   −5    0   −2
   ∞    ∞    ∞    6    0

Π^(3) =
  NIL   1    1    2    1
  NIL  NIL  NIL   2    2
  NIL   3   NIL   2    2
   4    3    4   NIL   1
  NIL  NIL  NIL   5   NIL
Example
D^(4) and Π^(4)
[Figure: the example graph]

D^(4) =
   0    3   −1    4   −4
   3    0   −4    1   −1
   7    4    0    5    3
   2   −1   −5    0   −2
   8    5    1    6    0

Π^(4) =
  NIL   1    4    2    1
   4   NIL   4    2    1
   4    3   NIL   2    1
   4    3    4   NIL   1
   4    3    4    5   NIL
Example
D^(5) and Π^(5)
[Figure: the example graph]

D^(5) =
   0    1   −3    2   −4
   3    0   −4    1   −1
   7    4    0    5    3
   2   −1   −5    0   −2
   8    5    1    6    0

Π^(5) =
  NIL   3    4    5    1
   4   NIL   4    2    1
   4    3   NIL   2    1
   4    3    4   NIL   1
   4    3    4    5   NIL
Remarks
Something Notable
Because the comparison in line 8 takes O(1):
Complexity of Floyd-Warshall
Time complexity Θ(V³).
We need no elaborate data structures such as binary heaps or Fibonacci heaps!!!
The hidden constant is quite small:
Making the Floyd-Warshall algorithm practical even for moderate-sized graphs!!!
Johnson's Algorithm
Observations
Used to find all pairs in a sparse graph by using Dijkstra's algorithm.
It uses a re-weighting function to obtain non-negative edges from negative ones, so Dijkstra's can handle them.
It can detect negative-weight cycles.
Therefore
It uses something to deal with negative-weight cycles.
Could it be a Bellman-Ford detector, as before?
Maybe we need to transform the weights in order to use them.
What we require
A re-weighting function ŵ(u, v) such that:
A shortest path by ŵ is a shortest path by w.
All edges are non-negative under ŵ.
Proving Properties of Re-Weighting
Lemma 25.1
Given a weighted, directed graph G = (V, E) with weight function w : E → ℝ, let h : V → ℝ be any function mapping vertices to real numbers. For each edge (u, v) ∈ E, define
ŵ(u, v) = w(u, v) + h(u) − h(v)
Let p = ⟨v0, v1, ..., vk⟩ be any path from vertex v0 to vertex vk. Then:
1 p is a shortest path from v0 to vk with weight function w if and only if it is a shortest path with weight function ŵ. That is, w(p) = δ(v0, vk) if and only if ŵ(p) = δ̂(v0, vk).
2 Furthermore, G has a negative-weight cycle using weight function w if and only if G has a negative-weight cycle using weight function ŵ.
Using This Lemma
Select h
Such that ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0.
Then, we build a new graph G′
It has the following elements:
V′ = V ∪ {s}, where s is a new vertex.
E′ = E ∪ {(s, v) | v ∈ V}.
w(s, v) = 0 for all v ∈ V, in addition to all the other weights.
Select h
Simply select h(v) = δ(s, v).
Example
Graph G′ with the original weight function w, the new source s, and h(v) = δ(s, v) shown at each vertex
[Figure: G′ with the super-source s and the h-values at the vertices]
Proof of Claim
Claim
ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0  (5)
By the Triangle Inequality
δ(s, v) ≤ δ(s, u) + w(u, v)
Then, by the way we selected h, we have:
h(v) ≤ h(u) + w(u, v)  (6)
Finally
ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0  (7)
Example
The new graph G′ after re-weighting with ŵ
[Figure: G′ with the re-weighted, non-negative edge weights]
Final Algorithm
Pseudo-Code
1.  compute G′, where G′.V = G.V ∪ {s}, G′.E = G.E ∪ {(s, v) | v ∈ G.V}, and w(s, v) = 0 for all v ∈ G.V
2.  if Bellman-Ford(G′, w, s) == FALSE
3.      print "Graph contains a negative-weight cycle"
4.  else for each vertex v ∈ G′.V
5.      set h(v) = v.d computed by Bellman-Ford
6.  for each edge (u, v) ∈ G′.E
7.      ŵ(u, v) = w(u, v) + h(u) − h(v)
8.  let D = ( d_uv ) be a new n × n matrix
9.  for each vertex u ∈ G.V
10.     run Dijkstra(G, ŵ, u) to compute δ̂(u, v) for all v ∈ G.V
11.     for each vertex v ∈ G.V
12.         d_uv = δ̂(u, v) + h(v) − h(u)
13. return D
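An end-to-end Python sketch of this pipeline, reusing the dijkstra helper from the beginning of the section. The adjacency-dict representation, the names, and the sentinel super-source are illustrative assumptions; it returns None when a negative-weight cycle is detected.

def johnson(graph):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs;
    # every vertex must appear as a key. Returns a dict-of-dicts of
    # shortest-path weights, or None on a negative-weight cycle.
    s = object()  # fresh super-source, distinct from all vertices
    g2 = {**graph, s: [(v, 0) for v in graph]}

    # Bellman-Ford from s: h(v) = delta(s, v).
    h = {v: float('inf') for v in g2}
    h[s] = 0
    for _ in range(len(g2) - 1):
        for u in g2:
            for v, w in g2[u]:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
    for u in g2:
        for v, w in g2[u]:
            if h[u] + w < h[v]:
                return None  # negative-weight cycle

    # Re-weight every edge: w_hat(u, v) = w(u, v) + h(u) - h(v) >= 0.
    w_hat = {u: [(v, w + h[u] - h[v]) for v, w in graph[u]] for u in graph}

    # Dijkstra from every vertex under w_hat, then undo the re-weighting.
    return {u: {v: d + h[v] - h[u] for v, d in dijkstra(w_hat, u).items()}
            for u in graph}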
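A runnable sketch of the pseudo-code above (my own Python illustration, reusing compute_h from the earlier sketch; a binary-heap Dijkstra is used, so the per-source cost is O(E lg V) rather than the Fibonacci-heap bound quoted on the next slide):

import heapq
from math import inf

def dijkstra(adj, source):
    """adj: {u: [(v, w), ...]} with all w >= 0. Returns {v: delta(source, v)}."""
    dist = {v: inf for v in adj}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale heap entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def johnson(vertices, edges):
    """Returns D with D[u][v] = delta(u, v), or None on a negative cycle."""
    h = compute_h(vertices, edges)          # steps 1-5: Bellman-Ford from s
    if h is None:
        print("Graph contains a negative-weight cycle")
        return None
    adj = {u: [] for u in vertices}
    for u, v, w in edges:                   # steps 6-7: re-weight, w' >= 0
        adj[u].append((v, w + h[u] - h[v]))
    D = {}
    for u in vertices:                      # steps 9-12: one Dijkstra per source
        dhat = dijkstra(adj, u)             # delta'(u, v) under w'
        D[u] = {v: (dhat[v] + h[v] - h[u]) if dhat[v] < inf else inf
                for v in vertices}
    return D

On the example graph above, johnson(V, E)[1][3] should come out to −3, matching the δ(1, 3) entry from the deck's earlier matrix-multiplication example.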
Complexity
The Final Complexity
Times:
Θ(V + E) to compute G'
O(VE) to run Bellman-Ford
Θ(E) to compute w'
O(V² lg V + VE) to run Dijkstra's algorithm |V | times using Fibonacci Heaps
O(V²) to compute the D matrix
Total: O(V² lg V + VE)
If E = O(V²) =⇒ O(V³)
77 / 79
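A quick comparison (not on the slides): for a sparse graph with E = O(V), the total becomes O(V² lg V), which beats Floyd-Warshall's Θ(V³); the O(V³) bound above is the dense case E = O(V²), where Floyd-Warshall's simpler inner loop is usually preferable in practice.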
Outline
1 Introduction
Definition of the Problem
Assumptions
Observations
2 Structure of a Shortest Path
Introduction
3 The Solution
The Recursive Solution
The Iterative Version
Extended-Shortest-Paths
Looking at the Algorithm as Matrix Multiplication
Example
We want something faster
4 A different dynamic-programming algorithm
The Shortest Path Structure
The Bottom-Up Solution
Floyd-Warshall Algorithm
Example
5 Other Solutions
Johnson’s Algorithm
6 Exercises
You can try them
78 / 79
Exercises
25.1-4
25.1-8
25.1-9
25.2-4
25.2-6
25.2-9
25.3-3
79 / 79

More Related Content

What's hot

25 String Matching
25 String Matching25 String Matching
25 String Matching
Andres Mendez-Vazquez
 
Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14Naor Ami
 
20 Single Source Shorthest Path
20 Single Source Shorthest Path20 Single Source Shorthest Path
20 Single Source Shorthest Path
Andres Mendez-Vazquez
 
Floyd warshall-algorithm
Floyd warshall-algorithmFloyd warshall-algorithm
Floyd warshall-algorithm
Malinga Perera
 
Bellman ford algorithm
Bellman ford algorithmBellman ford algorithm
Bellman ford algorithm
Ruchika Sinha
 
Single source stortest path bellman ford and dijkstra
Single source stortest path bellman ford and dijkstraSingle source stortest path bellman ford and dijkstra
Single source stortest path bellman ford and dijkstra
Roshan Tailor
 
Bellmanford . montaser hamza.iraq
Bellmanford . montaser hamza.iraqBellmanford . montaser hamza.iraq
Bellmanford . montaser hamza.iraq
montaser185
 
19 Minimum Spanning Trees
19 Minimum Spanning Trees19 Minimum Spanning Trees
19 Minimum Spanning Trees
Andres Mendez-Vazquez
 
Daa chpater14
Daa chpater14Daa chpater14
Daa chpater14
B.Kirron Reddi
 
2.3 shortest path dijkstra’s
2.3 shortest path dijkstra’s 2.3 shortest path dijkstra’s
2.3 shortest path dijkstra’s
Krish_ver2
 
Bellman ford algorithm
Bellman ford algorithmBellman ford algorithm
Bellman ford algorithm
A. S. M. Shafi
 
A superglue for string comparison
A superglue for string comparisonA superglue for string comparison
A superglue for string comparison
BioinformaticsInstitute
 
My presentation all shortestpath
My presentation all shortestpathMy presentation all shortestpath
My presentation all shortestpathCarlostheran
 
Topological Sort
Topological SortTopological Sort
Topological Sort
Dr Sandeep Kumar Poonia
 
Algorithms of graph
Algorithms of graphAlgorithms of graph
Algorithms of graph
getacew
 
lecture 21
lecture 21lecture 21
lecture 21sajinsc
 
Matrix of linear transformation
Matrix of linear transformationMatrix of linear transformation
Matrix of linear transformation
beenishbeenish
 
Skiena algorithm 2007 lecture12 topological sort connectivity
Skiena algorithm 2007 lecture12 topological sort connectivitySkiena algorithm 2007 lecture12 topological sort connectivity
Skiena algorithm 2007 lecture12 topological sort connectivityzukun
 

What's hot (20)

25 String Matching
25 String Matching25 String Matching
25 String Matching
 
Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14
 
20 Single Source Shorthest Path
20 Single Source Shorthest Path20 Single Source Shorthest Path
20 Single Source Shorthest Path
 
Floyd warshall-algorithm
Floyd warshall-algorithmFloyd warshall-algorithm
Floyd warshall-algorithm
 
Bellman ford algorithm
Bellman ford algorithmBellman ford algorithm
Bellman ford algorithm
 
Single source stortest path bellman ford and dijkstra
Single source stortest path bellman ford and dijkstraSingle source stortest path bellman ford and dijkstra
Single source stortest path bellman ford and dijkstra
 
Bellmanford . montaser hamza.iraq
Bellmanford . montaser hamza.iraqBellmanford . montaser hamza.iraq
Bellmanford . montaser hamza.iraq
 
19 Minimum Spanning Trees
19 Minimum Spanning Trees19 Minimum Spanning Trees
19 Minimum Spanning Trees
 
Bellmanford
BellmanfordBellmanford
Bellmanford
 
Daa chpater14
Daa chpater14Daa chpater14
Daa chpater14
 
2.3 shortest path dijkstra’s
2.3 shortest path dijkstra’s 2.3 shortest path dijkstra’s
2.3 shortest path dijkstra’s
 
Bellman ford algorithm
Bellman ford algorithmBellman ford algorithm
Bellman ford algorithm
 
A superglue for string comparison
A superglue for string comparisonA superglue for string comparison
A superglue for string comparison
 
My presentation all shortestpath
My presentation all shortestpathMy presentation all shortestpath
My presentation all shortestpath
 
Topological Sort
Topological SortTopological Sort
Topological Sort
 
Algorithms of graph
Algorithms of graphAlgorithms of graph
Algorithms of graph
 
Data structure
Data structureData structure
Data structure
 
lecture 21
lecture 21lecture 21
lecture 21
 
Matrix of linear transformation
Matrix of linear transformationMatrix of linear transformation
Matrix of linear transformation
 
Skiena algorithm 2007 lecture12 topological sort connectivity
Skiena algorithm 2007 lecture12 topological sort connectivitySkiena algorithm 2007 lecture12 topological sort connectivity
Skiena algorithm 2007 lecture12 topological sort connectivity
 

Viewers also liked

My presentation all shortestpath
My presentation all shortestpathMy presentation all shortestpath
My presentation all shortestpathCarlostheran
 
All pairs shortest path algorithm
All pairs shortest path algorithmAll pairs shortest path algorithm
All pairs shortest path algorithmSrikrishnan Suresh
 
Multiplication algorithm
Multiplication algorithmMultiplication algorithm
Multiplication algorithm
Gaurav Subham
 
Biconnected components (13024116056)
Biconnected components (13024116056)Biconnected components (13024116056)
Biconnected components (13024116056)
Akshay soni
 
All pair shortest path--SDN
All pair shortest path--SDNAll pair shortest path--SDN
All pair shortest path--SDNSarat Prasad
 
Matrix chain multiplication 2
Matrix chain multiplication 2Matrix chain multiplication 2
Matrix chain multiplication 2
Maher Alshammari
 
Shortest path problem
Shortest path problemShortest path problem
Shortest path problem
Ifra Ilyas
 
Top-k shortest path
Top-k shortest pathTop-k shortest path
Top-k shortest path
redhatdb
 
Matrix multiplication
Matrix multiplicationMatrix multiplication
Matrix multiplication
International Islamic University
 
strassen matrix multiplication algorithm
strassen matrix multiplication algorithmstrassen matrix multiplication algorithm
strassen matrix multiplication algorithm
evil eye
 
Johnson's safety slogans version 1.0
Johnson's safety slogans version 1.0 Johnson's safety slogans version 1.0
Johnson's safety slogans version 1.0
Road Safety
 
(floyd's algm)
(floyd's algm)(floyd's algm)
(floyd's algm)
Jothi Lakshmi
 
Matrix chain multiplication
Matrix chain multiplicationMatrix chain multiplication
Matrix chain multiplication
Respa Peter
 

Viewers also liked (15)

My presentation all shortestpath
My presentation all shortestpathMy presentation all shortestpath
My presentation all shortestpath
 
All pairs shortest path algorithm
All pairs shortest path algorithmAll pairs shortest path algorithm
All pairs shortest path algorithm
 
Multiplication algorithm
Multiplication algorithmMultiplication algorithm
Multiplication algorithm
 
Biconnected components (13024116056)
Biconnected components (13024116056)Biconnected components (13024116056)
Biconnected components (13024116056)
 
All pair shortest path--SDN
All pair shortest path--SDNAll pair shortest path--SDN
All pair shortest path--SDN
 
Matrix chain multiplication 2
Matrix chain multiplication 2Matrix chain multiplication 2
Matrix chain multiplication 2
 
Shortest path problem
Shortest path problemShortest path problem
Shortest path problem
 
Top-k shortest path
Top-k shortest pathTop-k shortest path
Top-k shortest path
 
0 1 knapsack problem
0 1 knapsack problem0 1 knapsack problem
0 1 knapsack problem
 
Matrix multiplication
Matrix multiplicationMatrix multiplication
Matrix multiplication
 
strassen matrix multiplication algorithm
strassen matrix multiplication algorithmstrassen matrix multiplication algorithm
strassen matrix multiplication algorithm
 
Johnson's safety slogans version 1.0
Johnson's safety slogans version 1.0 Johnson's safety slogans version 1.0
Johnson's safety slogans version 1.0
 
(floyd's algm)
(floyd's algm)(floyd's algm)
(floyd's algm)
 
Parallel Algorithms
Parallel AlgorithmsParallel Algorithms
Parallel Algorithms
 
Matrix chain multiplication
Matrix chain multiplicationMatrix chain multiplication
Matrix chain multiplication
 

Similar to 21 All Pairs Shortest Path

DAA_Presentation - Copy.pptx
DAA_Presentation - Copy.pptxDAA_Presentation - Copy.pptx
DAA_Presentation - Copy.pptx
AndrewJohnson866415
 
DS & Algo 4 - Graph and Shortest Path Search
DS & Algo 4 - Graph and Shortest Path SearchDS & Algo 4 - Graph and Shortest Path Search
DS & Algo 4 - Graph and Shortest Path Search
Mohammad Imam Hossain
 
Lecture#9
Lecture#9Lecture#9
Lecture#9
Ali Shah
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
Andres Mendez-Vazquez
 
Introduction (1).pdf
Introduction (1).pdfIntroduction (1).pdf
Introduction (1).pdf
ShivareddyGangam
 
5.1 greedy
5.1 greedy5.1 greedy
5.1 greedy
Krish_ver2
 
The Max Cut Problem
The Max Cut ProblemThe Max Cut Problem
The Max Cut Problem
dnatapov
 
Ou3425912596
Ou3425912596Ou3425912596
Ou3425912596
IJERA Editor
 
Floyd’s and Warshal’s Algorithm ujjwal matoliya.pptx
Floyd’s and Warshal’s Algorithm ujjwal matoliya.pptxFloyd’s and Warshal’s Algorithm ujjwal matoliya.pptx
Floyd’s and Warshal’s Algorithm ujjwal matoliya.pptx
ujjwalmatoliya
 
CONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRA
CONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRACONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRA
CONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRA
IRJET Journal
 
Linear Programming- Leacture-16-lp1.pptx
Linear Programming- Leacture-16-lp1.pptxLinear Programming- Leacture-16-lp1.pptx
Linear Programming- Leacture-16-lp1.pptx
SarahKoech1
 
test pre
test pretest pre
test pre
farazch
 
Power Full Exposition
Power Full ExpositionPower Full Exposition
Power Full ExpositionHeesu Hwang
 
Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...
Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...
Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...
Animesh Chaturvedi
 
Shortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairsShortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairs
Benjamin Sach
 
Fine Grained Complexity
Fine Grained ComplexityFine Grained Complexity
Fine Grained Complexity
AkankshaAgrawal55
 
Mathematical logic
Mathematical logicMathematical logic
Mathematical logic
ble nature
 
Graph Methods for Generating Test Cases with Universal and Existential Constr...
Graph Methods for Generating Test Cases with Universal and Existential Constr...Graph Methods for Generating Test Cases with Universal and Existential Constr...
Graph Methods for Generating Test Cases with Universal and Existential Constr...
Sylvain Hallé
 

Similar to 21 All Pairs Shortest Path (20)

DAA_Presentation - Copy.pptx
DAA_Presentation - Copy.pptxDAA_Presentation - Copy.pptx
DAA_Presentation - Copy.pptx
 
MMath
MMathMMath
MMath
 
DS & Algo 4 - Graph and Shortest Path Search
DS & Algo 4 - Graph and Shortest Path SearchDS & Algo 4 - Graph and Shortest Path Search
DS & Algo 4 - Graph and Shortest Path Search
 
Lecture#9
Lecture#9Lecture#9
Lecture#9
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
 
Introduction (1).pdf
Introduction (1).pdfIntroduction (1).pdf
Introduction (1).pdf
 
5.1 greedy
5.1 greedy5.1 greedy
5.1 greedy
 
The Max Cut Problem
The Max Cut ProblemThe Max Cut Problem
The Max Cut Problem
 
Ou3425912596
Ou3425912596Ou3425912596
Ou3425912596
 
Floyd’s and Warshal’s Algorithm ujjwal matoliya.pptx
Floyd’s and Warshal’s Algorithm ujjwal matoliya.pptxFloyd’s and Warshal’s Algorithm ujjwal matoliya.pptx
Floyd’s and Warshal’s Algorithm ujjwal matoliya.pptx
 
CONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRA
CONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRACONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRA
CONSTRUCTION OF NONASSOCIATIVE GRASSMANN ALGEBRA
 
Linear Programming- Leacture-16-lp1.pptx
Linear Programming- Leacture-16-lp1.pptxLinear Programming- Leacture-16-lp1.pptx
Linear Programming- Leacture-16-lp1.pptx
 
test pre
test pretest pre
test pre
 
Power Full Exposition
Power Full ExpositionPower Full Exposition
Power Full Exposition
 
Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...
Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...
Shortest path, Bellman-Ford's algorithm, Dijkastra's algorithm, their Java co...
 
ProjectAndersSchreiber
ProjectAndersSchreiberProjectAndersSchreiber
ProjectAndersSchreiber
 
Shortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairsShortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairs
 
Fine Grained Complexity
Fine Grained ComplexityFine Grained Complexity
Fine Grained Complexity
 
Mathematical logic
Mathematical logicMathematical logic
Mathematical logic
 
Graph Methods for Generating Test Cases with Universal and Existential Constr...
Graph Methods for Generating Test Cases with Universal and Existential Constr...Graph Methods for Generating Test Cases with Universal and Existential Constr...
Graph Methods for Generating Test Cases with Universal and Existential Constr...
 

More from Andres Mendez-Vazquez

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
Andres Mendez-Vazquez
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
Andres Mendez-Vazquez
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
Andres Mendez-Vazquez
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
Andres Mendez-Vazquez
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
Andres Mendez-Vazquez
 
06 recurrent neural_networks
06 recurrent neural_networks06 recurrent neural_networks
06 recurrent neural_networks
Andres Mendez-Vazquez
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
Andres Mendez-Vazquez
 
Zetta global
Zetta globalZetta global
Zetta global
Andres Mendez-Vazquez
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
Andres Mendez-Vazquez
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
Andres Mendez-Vazquez
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
Andres Mendez-Vazquez
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
Andres Mendez-Vazquez
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
Andres Mendez-Vazquez
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
Andres Mendez-Vazquez
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
Andres Mendez-Vazquez
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
Andres Mendez-Vazquez
 
18.1 combining models
18.1 combining models18.1 combining models
18.1 combining models
Andres Mendez-Vazquez
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
Andres Mendez-Vazquez
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
Andres Mendez-Vazquez
 
Introduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems SyllabusIntroduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems Syllabus
Andres Mendez-Vazquez
 

More from Andres Mendez-Vazquez (20)

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
 
06 recurrent neural_networks
06 recurrent neural_networks06 recurrent neural_networks
06 recurrent neural_networks
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
 
Zetta global
Zetta globalZetta global
Zetta global
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
 
18.1 combining models
18.1 combining models18.1 combining models
18.1 combining models
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
 
Introduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems SyllabusIntroduction Mathematics Intelligent Systems Syllabus
Introduction Mathematics Intelligent Systems Syllabus
 

Recently uploaded

H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
H.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdfH.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdf
H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
MLILAB
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
VENKATESHvenky89705
 
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
Amil Baba Dawood bangali
 
HYDROPOWER - Hydroelectric power generation
HYDROPOWER - Hydroelectric power generationHYDROPOWER - Hydroelectric power generation
HYDROPOWER - Hydroelectric power generation
Robbie Edward Sayers
 
Cosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdfCosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdf
Kamal Acharya
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234
AafreenAbuthahir2
 
The Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdfThe Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdf
Pipe Restoration Solutions
 
space technology lecture notes on satellite
space technology lecture notes on satellitespace technology lecture notes on satellite
space technology lecture notes on satellite
ongomchris
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
bakpo1
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
AJAYKUMARPUND1
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
thanhdowork
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
Osamah Alsalih
 
CME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional ElectiveCME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional Elective
karthi keyan
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
ydteq
 
English lab ppt no titlespecENG PPTt.pdf
English lab ppt no titlespecENG PPTt.pdfEnglish lab ppt no titlespecENG PPTt.pdf
English lab ppt no titlespecENG PPTt.pdf
BrazilAccount1
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
Neometrix_Engineering_Pvt_Ltd
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
AmarGB2
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
Kerry Sado
 
Gen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdfGen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdf
gdsczhcet
 

Recently uploaded (20)

H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
H.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdfH.Seo,  ICLR 2024, MLILAB,  KAIST AI.pdf
H.Seo, ICLR 2024, MLILAB, KAIST AI.pdf
 
road safety engineering r s e unit 3.pdf
road safety engineering  r s e unit 3.pdfroad safety engineering  r s e unit 3.pdf
road safety engineering r s e unit 3.pdf
 
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
 
HYDROPOWER - Hydroelectric power generation
HYDROPOWER - Hydroelectric power generationHYDROPOWER - Hydroelectric power generation
HYDROPOWER - Hydroelectric power generation
 
Cosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdfCosmetic shop management system project report.pdf
Cosmetic shop management system project report.pdf
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234
 
The Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdfThe Benefits and Techniques of Trenchless Pipe Repair.pdf
The Benefits and Techniques of Trenchless Pipe Repair.pdf
 
space technology lecture notes on satellite
space technology lecture notes on satellitespace technology lecture notes on satellite
space technology lecture notes on satellite
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
 
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
Pile Foundation by Venkatesh Taduvai (Sub Geotechnical Engineering II)-conver...
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
 
CME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional ElectiveCME397 Surface Engineering- Professional Elective
CME397 Surface Engineering- Professional Elective
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
 
English lab ppt no titlespecENG PPTt.pdf
English lab ppt no titlespecENG PPTt.pdfEnglish lab ppt no titlespecENG PPTt.pdf
English lab ppt no titlespecENG PPTt.pdf
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
 
Gen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdfGen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdf
 

21 All Pairs Shortest Path

  • 1. Analysis of Algorithms All-Pairs Shortest Path Andres Mendez-Vazquez November 11, 2015 1 / 79
  • 2. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 2 / 79
  • 3. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 3 / 79
  • 4. Problem Definition Given u and v, find the shortest path. Now, what if you want ALL PAIRS!!! Use as a source all the elements in V . Clearly!!! you can fall back to the old algorithms!!! 4 / 79
  • 5. Problem Definition Given u and v, find the shortest path. Now, what if you want ALL PAIRS!!! Use as a source all the elements in V . Clearly!!! you can fall back to the old algorithms!!! 4 / 79
  • 6. Problem Definition Given u and v, find the shortest path. Now, what if you want ALL PAIRS!!! Use as a source all the elements in V . Clearly!!! you can fall back to the old algorithms!!! 4 / 79
  • 7. Problem Definition Given u and v, find the shortest path. Now, what if you want ALL PAIRS!!! Use as a source all the elements in V . Clearly!!! you can fall back to the old algorithms!!! 4 / 79
  • 8. What can we use? Use Dijkstra’s |V | times!!! If all the weights are non-negative. This has, using Fibonacci Heaps, O (V 2 log V + VE) complexity. Which is equal O (V 3 ) in the case of E = O(V 2 ), but with a hidden large constant c. Use Bellman-Ford |V | times!!! If negative weights are allowed. Then, we have O (V 2 E). Which is equal O (V 4 ) in the case of E = O(V 2 ). 5 / 79
  • 9. What can we use? Use Dijkstra’s |V | times!!! If all the weights are non-negative. This has, using Fibonacci Heaps, O (V 2 log V + VE) complexity. Which is equal O (V 3 ) in the case of E = O(V 2 ), but with a hidden large constant c. Use Bellman-Ford |V | times!!! If negative weights are allowed. Then, we have O (V 2 E). Which is equal O (V 4 ) in the case of E = O(V 2 ). 5 / 79
  • 10. What can we use? Use Dijkstra’s |V | times!!! If all the weights are non-negative. This has, using Fibonacci Heaps, O (V 2 log V + VE) complexity. Which is equal O (V 3 ) in the case of E = O(V 2 ), but with a hidden large constant c. Use Bellman-Ford |V | times!!! If negative weights are allowed. Then, we have O (V 2 E). Which is equal O (V 4 ) in the case of E = O(V 2 ). 5 / 79
  • 11. What can we use? Use Dijkstra’s |V | times!!! If all the weights are non-negative. This has, using Fibonacci Heaps, O (V 2 log V + VE) complexity. Which is equal O (V 3 ) in the case of E = O(V 2 ), but with a hidden large constant c. Use Bellman-Ford |V | times!!! If negative weights are allowed. Then, we have O (V 2 E). Which is equal O (V 4 ) in the case of E = O(V 2 ). 5 / 79
  • 12. What can we use? Use Dijkstra’s |V | times!!! If all the weights are non-negative. This has, using Fibonacci Heaps, O (V 2 log V + VE) complexity. Which is equal O (V 3 ) in the case of E = O(V 2 ), but with a hidden large constant c. Use Bellman-Ford |V | times!!! If negative weights are allowed. Then, we have O (V 2 E). Which is equal O (V 4 ) in the case of E = O(V 2 ). 5 / 79
  • 13. What can we use? Use Dijkstra’s |V | times!!! If all the weights are non-negative. This has, using Fibonacci Heaps, O (V 2 log V + VE) complexity. Which is equal O (V 3 ) in the case of E = O(V 2 ), but with a hidden large constant c. Use Bellman-Ford |V | times!!! If negative weights are allowed. Then, we have O (V 2 E). Which is equal O (V 4 ) in the case of E = O(V 2 ). 5 / 79
  • 14. This is not Good For Large Problems Problems Computer Network Systems. Aircraft Networks (e.g. flying time, fares). Railroad network tables of distances between all pairs of cites for a road atlas. Etc. 6 / 79
  • 15. This is not Good For Large Problems Problems Computer Network Systems. Aircraft Networks (e.g. flying time, fares). Railroad network tables of distances between all pairs of cites for a road atlas. Etc. 6 / 79
  • 16. This is not Good For Large Problems Problems Computer Network Systems. Aircraft Networks (e.g. flying time, fares). Railroad network tables of distances between all pairs of cites for a road atlas. Etc. 6 / 79
  • 17. This is not Good For Large Problems Problems Computer Network Systems. Aircraft Networks (e.g. flying time, fares). Railroad network tables of distances between all pairs of cites for a road atlas. Etc. 6 / 79
  • 18. For more on this... Something Notable As many things in the history of analysis of algorithms the all-pairs shortest path has a long history. We have more from “Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956) where the notation that we use for the matrix multiplication alike was first used. In addition G. Tarry, Le probleme des labyrinthes, Nouvelles Annales de Mathématiques (3) 14 (1895) 187–190 [English translation in: N.L. Biggs, E.K. Lloyd, R.J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, pp. 18–20] (For the theory behind depth-first search techniques). Chr. Wiener, Ueber eine Aufgabe aus der Geometria situs, Mathematische Annalen 6 (1873) 29–30, 1873. 7 / 79
  • 19. For more on this... Something Notable As many things in the history of analysis of algorithms the all-pairs shortest path has a long history. We have more from “Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956) where the notation that we use for the matrix multiplication alike was first used. In addition G. Tarry, Le probleme des labyrinthes, Nouvelles Annales de Mathématiques (3) 14 (1895) 187–190 [English translation in: N.L. Biggs, E.K. Lloyd, R.J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, pp. 18–20] (For the theory behind depth-first search techniques). Chr. Wiener, Ueber eine Aufgabe aus der Geometria situs, Mathematische Annalen 6 (1873) 29–30, 1873. 7 / 79
  • 20. For more on this... Something Notable As many things in the history of analysis of algorithms the all-pairs shortest path has a long history. We have more from “Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956) where the notation that we use for the matrix multiplication alike was first used. In addition G. Tarry, Le probleme des labyrinthes, Nouvelles Annales de Mathématiques (3) 14 (1895) 187–190 [English translation in: N.L. Biggs, E.K. Lloyd, R.J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, pp. 18–20] (For the theory behind depth-first search techniques). Chr. Wiener, Ueber eine Aufgabe aus der Geometria situs, Mathematische Annalen 6 (1873) 29–30, 1873. 7 / 79
  • 21. For more on this... Something Notable As many things in the history of analysis of algorithms the all-pairs shortest path has a long history. We have more from “Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956) where the notation that we use for the matrix multiplication alike was first used. In addition G. Tarry, Le probleme des labyrinthes, Nouvelles Annales de Mathématiques (3) 14 (1895) 187–190 [English translation in: N.L. Biggs, E.K. Lloyd, R.J. Wilson, Graph Theory 1736–1936, Clarendon Press, Oxford, 1976, pp. 18–20] (For the theory behind depth-first search techniques). Chr. Wiener, Ueber eine Aufgabe aus der Geometria situs, Mathematische Annalen 6 (1873) 29–30, 1873. 7 / 79
  • 22. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 8 / 79
  • 23. Assumptions Matrix Representation Matrix Representation of a Graph For this, we have that each weight in the matrix has the following values wij =    0 if i = j w(i, j) if i = j and (i, j) ∈ E ∞ if i = j and (i, j) /∈ E Then, we have W =        w11 w22 ... w1k−1 w1n . . . . . . . . . wn1 wn2 ... wnn−1 wnn        Important There are not negative weight cycles. 9 / 79
  • 24. Assumptions Matrix Representation Matrix Representation of a Graph For this, we have that each weight in the matrix has the following values wij =    0 if i = j w(i, j) if i = j and (i, j) ∈ E ∞ if i = j and (i, j) /∈ E Then, we have W =        w11 w22 ... w1k−1 w1n . . . . . . . . . wn1 wn2 ... wnn−1 wnn        Important There are not negative weight cycles. 9 / 79
  • 25. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 10 / 79
  • 26. Observations Ah!!! The next algorithm is a dynamic programming algorithm for The all-pairs shortest paths problem on a directed graph G = (V , E). At the end of the algorithm will generate the following matrix D =        d11 d22 ... d1k−1 d1n . . . . . . . . . dn1 dn2 ... dnn−1 dnn        Each entry dij = δ (i, j). 11 / 79
  • 27. Observations Ah!!! The next algorithm is a dynamic programming algorithm for The all-pairs shortest paths problem on a directed graph G = (V , E). At the end of the algorithm will generate the following matrix D =        d11 d22 ... d1k−1 d1n . . . . . . . . . dn1 dn2 ... dnn−1 dnn        Each entry dij = δ (i, j). 11 / 79
  • 28. Observations Ah!!! The next algorithm is a dynamic programming algorithm for The all-pairs shortest paths problem on a directed graph G = (V , E). At the end of the algorithm will generate the following matrix D =        d11 d22 ... d1k−1 d1n . . . . . . . . . dn1 dn2 ... dnn−1 dnn        Each entry dij = δ (i, j). 11 / 79
  • 29. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 12 / 79
  • 30. Structure of a Shortest Path Consider Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a SP from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path (SP) from vi to vj, where 1 ≤ i ≤ j ≤ k. We can do the following Consider the shortest path p from vertex i and j, p contains at most m edges. Then, we can use the Corollary to make a decomposition i p k → j =⇒ δ(i, j) = δ(i, k) + wkj 13 / 79
  • 31. Structure of a Shortest Path Consider Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a SP from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path (SP) from vi to vj, where 1 ≤ i ≤ j ≤ k. We can do the following Consider the shortest path p from vertex i and j, p contains at most m edges. Then, we can use the Corollary to make a decomposition i p k → j =⇒ δ(i, j) = δ(i, k) + wkj 13 / 79
  • 32. Structure of a Shortest Path Idea of Using Matrix Multiplication We define the following concept based in the decomposition Corollary!!! l (m) ij =minimum weight of any path from i to j, it contains at most m edges i.e. l (m) ij could be min k l (m−1) ik + wkj 14 / 79
  • 33. Structure of a Shortest Path Idea of Using Matrix Multiplication We define the following concept based in the decomposition Corollary!!! l (m) ij =minimum weight of any path from i to j, it contains at most m edges i.e. l (m) ij could be min k l (m−1) ik + wkj 14 / 79
  • 34. Graphical Interpretation Looking for the Shortest Path 15 / 79
  • 35. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 16 / 79
  • 36. Recursive Solution Thus, we have that for paths with ZERO edges l (0) ij = 0 if i = j ∞ if i = j Recursion Our Great Friend Consider the previous definition and decomposition. Thus l (m) ij = min l (m−1) ij , min 1≤k≤n l (m−1) ik + wkj = min 1≤k≤n l (m−1) ik + wkj 17 / 79
  • 37. Recursive Solution Thus, we have that for paths with ZERO edges l (0) ij = 0 if i = j ∞ if i = j Recursion Our Great Friend Consider the previous definition and decomposition. Thus l (m) ij = min l (m−1) ij , min 1≤k≤n l (m−1) ik + wkj = min 1≤k≤n l (m−1) ik + wkj 17 / 79
  • 38. Recursive Solution Thus, we have that for paths with ZERO edges l (0) ij = 0 if i = j ∞ if i = j Recursion Our Great Friend Consider the previous definition and decomposition. Thus l (m) ij = min l (m−1) ij , min 1≤k≤n l (m−1) ik + wkj = min 1≤k≤n l (m−1) ik + wkj 17 / 79
  • 39. Recursive Solution Thus, we have that for paths with ZERO edges l (0) ij = 0 if i = j ∞ if i = j Recursion Our Great Friend Consider the previous definition and decomposition. Thus l (m) ij = min l (m−1) ij , min 1≤k≤n l (m−1) ik + wkj = min 1≤k≤n l (m−1) ik + wkj 17 / 79
  • 40. Recursive Solution Why? A simple notation problem l (m) ij = l (m−1) ij + 0 = l (m−1) ij + wjj 18 / 79
  • 41. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 19 / 79
  • 42. Transforming it to a iterative one What is δ (i, j)? If you do not have negative-weight cycles, and δ (i, j) < ∞. Then, the shortest path from vertex i to j has at most n − 1 edges δ (i, j) = l (n−1) ij = l (n) ij = l (n+1) ij = l (n+2) ij = ... Back to Matrix Multiplication We have the matrix L(m) = l (m) ij . Then, we can compute first L(1) then compute L(2) all the way to L(n−1) which contains the actual shortest paths. What is L(1) ? First, we have that L(1) = W , since l (1) ij = wij. 20 / 79
  • 43. Transforming it to a iterative one What is δ (i, j)? If you do not have negative-weight cycles, and δ (i, j) < ∞. Then, the shortest path from vertex i to j has at most n − 1 edges δ (i, j) = l (n−1) ij = l (n) ij = l (n+1) ij = l (n+2) ij = ... Back to Matrix Multiplication We have the matrix L(m) = l (m) ij . Then, we can compute first L(1) then compute L(2) all the way to L(n−1) which contains the actual shortest paths. What is L(1) ? First, we have that L(1) = W , since l (1) ij = wij. 20 / 79
  • 44. Transforming it to a iterative one What is δ (i, j)? If you do not have negative-weight cycles, and δ (i, j) < ∞. Then, the shortest path from vertex i to j has at most n − 1 edges δ (i, j) = l (n−1) ij = l (n) ij = l (n+1) ij = l (n+2) ij = ... Back to Matrix Multiplication We have the matrix L(m) = l (m) ij . Then, we can compute first L(1) then compute L(2) all the way to L(n−1) which contains the actual shortest paths. What is L(1) ? First, we have that L(1) = W , since l (1) ij = wij. 20 / 79
  • 45. Transforming it to a iterative one What is δ (i, j)? If you do not have negative-weight cycles, and δ (i, j) < ∞. Then, the shortest path from vertex i to j has at most n − 1 edges δ (i, j) = l (n−1) ij = l (n) ij = l (n+1) ij = l (n+2) ij = ... Back to Matrix Multiplication We have the matrix L(m) = l (m) ij . Then, we can compute first L(1) then compute L(2) all the way to L(n−1) which contains the actual shortest paths. What is L(1) ? First, we have that L(1) = W , since l (1) ij = wij. 20 / 79
  • 46. Transforming it to a iterative one What is δ (i, j)? If you do not have negative-weight cycles, and δ (i, j) < ∞. Then, the shortest path from vertex i to j has at most n − 1 edges δ (i, j) = l (n−1) ij = l (n) ij = l (n+1) ij = l (n+2) ij = ... Back to Matrix Multiplication We have the matrix L(m) = l (m) ij . Then, we can compute first L(1) then compute L(2) all the way to L(n−1) which contains the actual shortest paths. What is L(1) ? First, we have that L(1) = W , since l (1) ij = wij. 20 / 79
  • 47. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 21 / 79
  • 48. Algorithm Code Extended-Shortest-Path(L, W ) 1 n = L.rows 2 let L = lij be a new n × n 3 for i = 1 to n 4 for j = 1 to n 5 lij = ∞ 6 for k = 1 to n 7 lij = min lij, lik + wkj 8 return L 22 / 79
  • 49. Algorithm Code Extended-Shortest-Path(L, W ) 1 n = L.rows 2 let L = lij be a new n × n 3 for i = 1 to n 4 for j = 1 to n 5 lij = ∞ 6 for k = 1 to n 7 lij = min lij, lik + wkj 8 return L 22 / 79
  • 50. Algorithm Code Extended-Shortest-Path(L, W ) 1 n = L.rows 2 let L = lij be a new n × n 3 for i = 1 to n 4 for j = 1 to n 5 lij = ∞ 6 for k = 1 to n 7 lij = min lij, lik + wkj 8 return L 22 / 79
  • 51. Algorithm Code Extended-Shortest-Path(L, W ) 1 n = L.rows 2 let L = lij be a new n × n 3 for i = 1 to n 4 for j = 1 to n 5 lij = ∞ 6 for k = 1 to n 7 lij = min lij, lik + wkj 8 return L 22 / 79
  • 52. Algorithm Complexity If |V | == n we have that Θ V 3 . 23 / 79
  • 53. Algorithm Complexity If |V | == n we have that Θ V 3 . 23 / 79
  • 54. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 24 / 79
  • 55. Look Alike Matrix Multiplication Operations Mapping That Can be Thought L =⇒ A W =⇒ B L =⇒ C min =⇒ + + =⇒ · ∞ =⇒ 0 25 / 79
  • 56. Look Alike Matrix Multiplication Operations Using the previous notation, we can rewrite our previous algorithm as Square-Matrix-Multiply(A, B) 1 n = A.rows 2 let C be a new n × n matrix 3 for i = 1 to n 4 for j = 1 to n 5 cij = 0 6 for k = 1 to n 7 cij = cij + aik · bkj 8 return C 26 / 79
  • 57. Complexity Thus The complexity of the Extended-Shortest-Path is equal to O n3 27 / 79
  • 58. Using the Analogy Returning to the all-pairs shortest-paths problem It is possible to compute the shortest path by extending such a path edge by edge. Therefore If we denote A · B as the “product” of the Extended-Shortest-Path 28 / 79
  • 59. Using the Analogy Returning to the all-pairs shortest-paths problem It is possible to compute the shortest path by extending such a path edge by edge. Therefore If we denote A · B as the “product” of the Extended-Shortest-Path 28 / 79
  • 60. Using the Analogy We have that L(1) = L(0) · W = W L(2) = L(1) · W = W 2 ... L(n−1) = L(n−2) · W = W n−1 29 / 79
  • 61. Using the Analogy We have that L(1) = L(0) · W = W L(2) = L(1) · W = W 2 ... L(n−1) = L(n−2) · W = W n−1 29 / 79
  • 62. Using the Analogy We have that L(1) = L(0) · W = W L(2) = L(1) · W = W 2 ... L(n−1) = L(n−2) · W = W n−1 29 / 79
  • 63. The Final Algorithm We have that Slow-All-Pairs-Shortest-Paths(W ) 1 n ← W .rows 2 L(1) ← W 3 for m = 2 to n − 1 4 L(m) ←EXTEND-SHORTEST-PATHS L(m−1) , W 5 return L(n−1) 30 / 79
  • 65. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 32 / 79
  • 66. Example We have the following 1 2 3 45 3 4 8 2 7 1 6 -4 -5 1 2 3 4 5 1 0 2 8 ∞ -4 2 ∞ 0 ∞ 1 7 3 ∞ 4 0 ∞ ∞ 4 2 ∞ -5 0 ∞ 5 ∞ ∞ ∞ 6 0 L(1) = L(0)W 33 / 79
  • 67. Example We have the following 1 2 3 45 3 4 8 2 7 1 6 -4 -5 1 2 3 4 5 1 0 2 8 2 -4 2 3 0 -4 1 7 3 ∞ 4 0 5 11 4 2 -1 -5 0 -2 5 8 ∞ 1 6 0 L(2) = L(1)W 34 / 79
  • 68. Here, we use the analogy of matrix multiplication D1 W 35 / 79
  • 69. Thus, the update of an element lij Example l (2) 14 = min    0 3 8 ∞ −4 +        ∞ 1 ∞ 0 6           = min ∞ 4 ∞ ∞ 2 = 2 36 / 79
  • 70. Example We have the following 1 2 3 45 3 4 8 2 7 1 6 -4 -5 1 2 3 4 5 1 0 3 -3 2 -4 2 3 0 -4 1 -1 3 7 4 0 5 11 4 2 -1 -5 0 -2 5 8 5 1 6 0 L(3) = L(2)W 37 / 79
  • 71. Example We have the following 1 2 3 45 3 4 8 2 7 1 6 -4 -5 1 2 3 4 5 1 0 1 -3 2 -4 2 3 0 -4 1 -1 3 7 4 0 5 3 4 2 -1 -5 0 -2 5 8 5 1 6 0 L(4) = L(3)W 38 / 79
  • 72. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shoertest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 39 / 79
  • 73. Recall the following We are interested only In matrix L^{(n−1)} In addition Remember, we do not have negative-weight cycles!!! Therefore, we have the equation δ(i, j) = l^{(n−1)}_{ij} = l^{(n)}_{ij} = l^{(n+1)}_{ij} = ... (2) 40 / 79
  • 76. Thus It implies L^{(m)} = L^{(n−1)} (3) for all m ≥ n − 1 (4) 41 / 79
  • 78. Something Faster We want something faster!!! Observation!!! L^{(1)} = W, L^{(2)} = W · W = W^2, L^{(4)} = W^2 · W^2 = W^4, L^{(8)} = W^4 · W^4 = W^8, ..., L^{(2^{\lceil \lg(n−1) \rceil})} = W^{2^{\lceil \lg(n−1) \rceil − 1}} · W^{2^{\lceil \lg(n−1) \rceil − 1}} = W^{2^{\lceil \lg(n−1) \rceil}} Because 2^{\lceil \lg(n−1) \rceil} ≥ n − 1 ⟹ L^{(2^{\lceil \lg(n−1) \rceil})} = L^{(n−1)} 42 / 79
  • 84. The Faster Algorithm Repeated squaring gives Faster-All-Pairs-Shortest-Paths(W) 1 n ← W.rows 2 L^{(1)} ← W 3 m ← 1 4 while m < n − 1 5 L^{(2m)} ← Extend-Shortest-Paths(L^{(m)}, L^{(m)}) 6 m ← 2m 7 return L^{(m)} Complexity If n = |V|, we have O(V^3 lg V). 43 / 79
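A Python sketch of the repeated-squaring version, under the same assumptions as the earlier sketches (our own names, 0-based indexing, math.inf for ∞):

    def faster_all_pairs_shortest_paths(W):
        """Repeated squaring: L(2m) = L(m) * L(m), so only ceil(lg(n-1))
        min-plus products are needed, O(n^3 lg n) overall."""
        n = len(W)
        L = W
        m = 1
        while m < n - 1:
            L = extend_shortest_paths(L, L)   # square the current matrix
            m *= 2
        return L   # L(m) = L(n-1) once m >= n - 1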
  • 86. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shortest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 44 / 79
  • 87. The Shortest Path Structure Intermediate Vertex For a path p = ⟨v1, v2, ..., vl⟩, an intermediate vertex is any vertex of p other than v1 or vl. Define d^{(k)}_{ij} = the weight of a shortest path from i to j with all intermediate vertices in the set {1, 2, ..., k}. 45 / 79
  • 89. The Recursive Idea Simply look at the following cases Case I If k is not an intermediate vertex of a shortest path from i to j with all intermediate vertices in {1, ..., k}, then all of that path’s intermediate vertices lie in {1, ..., k − 1}. ⟹ d^{(k)}_{ij} = d^{(k−1)}_{ij} Case II If k is an intermediate vertex, then the path decomposes as i ⇝(p1) k ⇝(p2) j, and we can make the following statements using Lemma 24.1: p1 is a shortest path from i to k with all intermediate vertices in the set {1, ..., k − 1}. p2 is a shortest path from k to j with all intermediate vertices in the set {1, ..., k − 1}. ⟹ d^{(k)}_{ij} = d^{(k−1)}_{ik} + d^{(k−1)}_{kj} 46 / 79
  • 95. The Graphical Idea Consider All possible intermediate vertices in {1, 2, ..., k} Figure: The Recursive Idea 47 / 79
  • 96. The Recursive Solution The Recursion d^{(k)}_{ij} = \begin{cases} w_{ij} & \text{if } k = 0 \\ \min\left(d^{(k-1)}_{ij},\; d^{(k-1)}_{ik} + d^{(k-1)}_{kj}\right) & \text{if } k \geq 1 \end{cases} Final answer when k = n We recursively calculate D^{(n)} = \left(d^{(n)}_{ij}\right), with d^{(n)}_{ij} = δ(i, j) for all i, j ∈ V. 48 / 79
  • 98. Thus, we have the following Recursive Version Recursive-Floyd-Warshall(W) 1 let D^{(n)} be a new n × n matrix 2 for i = 1 to n 3 for j = 1 to n 4 D^{(n)}[i, j] = Recursive-Part(i, j, n, W) 5 return D^{(n)} 49 / 79
  • 101. Thus, we have the following The Recursive-Part Recursive-Part(i, j, k, W) 1 if k = 0 2 return W[i, j] 3 if k ≥ 1 4 t1 = Recursive-Part(i, j, k − 1, W) 5 t2 = Recursive-Part(i, k, k − 1, W) + Recursive-Part(k, j, k − 1, W) 6 if t1 ≤ t2 7 return t1 8 else 9 return t2 50 / 79
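A Python sketch of this recursive version. Without memoization the recursion recomputes subproblems exponentially often, so the sketch caches each (i, j, k) triple with functools.lru_cache, recovering the O(n^3) behaviour of the iterative version. Indices are 0-based here, and d(i, j, k) allows the first k vertices as intermediates; these conventions are ours, not the deck’s.

    from functools import lru_cache

    def recursive_floyd_warshall(W):
        """Direct transcription of Recursive-Part, with memoization."""
        n = len(W)

        @lru_cache(maxsize=None)
        def d(i, j, k):
            if k == 0:
                return W[i][j]
            # Either skip vertex k-1 (0-based) or route through it.
            return min(d(i, j, k - 1),
                       d(i, k - 1, k - 1) + d(k - 1, j, k - 1))

        return [[d(i, j, n) for j in range(n)] for i in range(n)]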
  • 105. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shortest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 51 / 79
  • 106. Now We want to use storage to eliminate the recursion For this, we are going to use two matrices 1 D^{(k−1)}, the previous matrix. 2 D^{(k)}, the new matrix built from the previous one. Something Notable With D^{(0)} = W, i.e., the weights of the edges that exist (and ∞ elsewhere). 52 / 79
  • 110. In addition, we want to rebuild the answer For this, we have the predecessor matrix Π Actually, we want to compute a sequence of matrices Π^{(0)}, Π^{(1)}, ..., Π^{(n)} Where Π = Π^{(n)} 53 / 79
  • 112. What are the elements in Π^{(k)} Each element in the matrix is as follows π^{(k)}_{ij} = the predecessor of vertex j on a shortest path from vertex i with all intermediate vertices in the set {1, 2, ..., k} Thus, we have that π^{(0)}_{ij} = \begin{cases} \text{NIL} & \text{if } i = j \text{ or } w_{ij} = \infty \\ i & \text{if } i \neq j \text{ and } w_{ij} < \infty \end{cases} 54 / 79
  • 114. Then We have the following For k ≥ 1, consider the path i ⇝ k ⇝ j, where k ≠ j. Then, if d^{(k−1)}_{ij} > d^{(k−1)}_{ik} + d^{(k−1)}_{kj} For the predecessor of j, we choose the same predecessor of j that we chose on a shortest path from k with all intermediate vertices in the set {1, 2, ..., k − 1}. Otherwise, if d^{(k−1)}_{ij} ≤ d^{(k−1)}_{ik} + d^{(k−1)}_{kj} We choose the same predecessor of j that we chose on a shortest path from i with all intermediate vertices in the set {1, 2, ..., k − 1}. 55 / 79
  • 117. Formally We have then π^{(k)}_{ij} = \begin{cases} π^{(k-1)}_{ij} & \text{if } d^{(k-1)}_{ij} \leq d^{(k-1)}_{ik} + d^{(k-1)}_{kj} \\ π^{(k-1)}_{kj} & \text{if } d^{(k-1)}_{ij} > d^{(k-1)}_{ik} + d^{(k-1)}_{kj} \end{cases} 56 / 79
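Once the final predecessor matrix Π is available (the iterative algorithm on the next slides computes it), a shortest path can be read off by walking predecessors backwards from j to i. A small sketch under our usual assumptions (0-based indices, None playing the role of NIL):

    def reconstruct_path(Pi, i, j):
        """Return the vertices of a shortest i -> j path, or [] if none exists.
        Pi[i][j] is the predecessor of j on a shortest path from i."""
        if Pi[i][j] is None:
            return [i] if i == j else []   # trivial path, or no path at all
        path = [j]
        while j != i:
            j = Pi[i][j]                   # step back one predecessor
            path.append(j)
        return path[::-1]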
  • 118. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shortest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 57 / 79
  • 119. Final Iterative Version of Floyd-Warshall (Correction by Diego - Class Tec 2015) Floyd-Warshall(W) 1. n = W.rows 2. D^{(0)} = W 3. for k = 1 to n 4. let D^{(k)} = (d^{(k)}_{ij}) be a new n × n matrix 5. let Π^{(k)} be a new n × n predecessor matrix Given each k, we update using D^{(k−1)} 6. for i = 1 to n 7. for j = 1 to n 8. if d^{(k−1)}_{ij} ≤ d^{(k−1)}_{ik} + d^{(k−1)}_{kj} 9. d^{(k)}_{ij} = d^{(k−1)}_{ij} 10. π^{(k)}_{ij} = π^{(k−1)}_{ij} 11. else 12. d^{(k)}_{ij} = d^{(k−1)}_{ik} + d^{(k−1)}_{kj} 13. π^{(k)}_{ij} = π^{(k−1)}_{kj} 14. return D^{(n)} and Π^{(n)} 58 / 79
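A Python sketch of the iterative version. It follows the pseudocode above except for one standard simplification, which we adopt here as an assumption: during pass k, row k and column k of D never change, so a single matrix can be updated in place instead of keeping D^{(k−1)} and D^{(k)} separately.

    import math

    def floyd_warshall(W):
        """Bottom-up Floyd-Warshall with predecessor tracking (0-based)."""
        n = len(W)
        D = [row[:] for row in W]
        # Pi(0): predecessor is i wherever an edge exists, None (NIL) otherwise.
        Pi = [[None if i == j or W[i][j] == math.inf else i
               for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if D[i][k] + D[k][j] < D[i][j]:
                        D[i][j] = D[i][k] + D[k][j]
                        Pi[i][j] = Pi[k][j]   # route through k
        return D, Pi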
  • 123. Explanation Lines 1 and 2 Initialization of the variables n and D^{(0)} Line 3 In the loop, we solve the smaller subproblems first, for k = 1 up to k = n Remember: k bounds the set {1, ..., k} of allowed intermediate vertices Lines 4 and 5 Instantiation of the new matrices D^{(k)} and Π^{(k)}, which describe shortest paths whose intermediate vertices all lie in {1, ..., k}. 59 / 79
  • 127. Explanation Lines 6 and 7 These go through all the possible combinations of i’s and j’s Line 8 Deciding whether d^{(k−1)}_{ij} ≤ d^{(k−1)}_{ik} + d^{(k−1)}_{kj} 60 / 79
  • 130. Example D^{(0)} and Π^{(0)} (same 5-vertex example graph) D^{(0)} = \begin{pmatrix} 0 & 3 & 8 & \infty & -4 \\ \infty & 0 & \infty & 1 & 7 \\ \infty & 4 & 0 & \infty & \infty \\ 2 & \infty & -5 & 0 & \infty \\ \infty & \infty & \infty & 6 & 0 \end{pmatrix} Π^{(0)} = \begin{pmatrix} \text{NIL} & 1 & 1 & \text{NIL} & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 2 & 2 \\ \text{NIL} & 3 & \text{NIL} & \text{NIL} & \text{NIL} \\ 4 & \text{NIL} & 4 & \text{NIL} & \text{NIL} \\ \text{NIL} & \text{NIL} & \text{NIL} & 5 & \text{NIL} \end{pmatrix} 62 / 79
  • 131. Example D^{(1)} and Π^{(1)} D^{(1)} = \begin{pmatrix} 0 & 3 & 8 & \infty & -4 \\ \infty & 0 & \infty & 1 & 7 \\ \infty & 4 & 0 & \infty & \infty \\ 2 & 5 & -5 & 0 & -2 \\ \infty & \infty & \infty & 6 & 0 \end{pmatrix} Π^{(1)} = \begin{pmatrix} \text{NIL} & 1 & 1 & \text{NIL} & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 2 & 2 \\ \text{NIL} & 3 & \text{NIL} & \text{NIL} & \text{NIL} \\ 4 & 1 & 4 & \text{NIL} & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 5 & \text{NIL} \end{pmatrix} 63 / 79
  • 132. Example D^{(2)} and Π^{(2)} D^{(2)} = \begin{pmatrix} 0 & 3 & 8 & 4 & -4 \\ \infty & 0 & \infty & 1 & 7 \\ \infty & 4 & 0 & 5 & 11 \\ 2 & 5 & -5 & 0 & -2 \\ \infty & \infty & \infty & 6 & 0 \end{pmatrix} Π^{(2)} = \begin{pmatrix} \text{NIL} & 1 & 1 & 2 & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 2 & 2 \\ \text{NIL} & 3 & \text{NIL} & 2 & 2 \\ 4 & 1 & 4 & \text{NIL} & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 5 & \text{NIL} \end{pmatrix} 64 / 79
  • 133. Example D^{(3)} and Π^{(3)} D^{(3)} = \begin{pmatrix} 0 & 3 & 8 & 4 & -4 \\ \infty & 0 & \infty & 1 & 7 \\ \infty & 4 & 0 & 5 & 11 \\ 2 & -1 & -5 & 0 & -2 \\ \infty & \infty & \infty & 6 & 0 \end{pmatrix} Π^{(3)} = \begin{pmatrix} \text{NIL} & 1 & 1 & 2 & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 2 & 2 \\ \text{NIL} & 3 & \text{NIL} & 2 & 2 \\ 4 & 3 & 4 & \text{NIL} & 1 \\ \text{NIL} & \text{NIL} & \text{NIL} & 5 & \text{NIL} \end{pmatrix} 65 / 79
  • 134. Example D^{(4)} and Π^{(4)} D^{(4)} = \begin{pmatrix} 0 & 3 & -1 & 4 & -4 \\ 3 & 0 & -4 & 1 & -1 \\ 7 & 4 & 0 & 5 & 3 \\ 2 & -1 & -5 & 0 & -2 \\ 8 & 5 & 1 & 6 & 0 \end{pmatrix} Π^{(4)} = \begin{pmatrix} \text{NIL} & 1 & 4 & 2 & 1 \\ 4 & \text{NIL} & 4 & 2 & 1 \\ 4 & 3 & \text{NIL} & 2 & 1 \\ 4 & 3 & 4 & \text{NIL} & 1 \\ 4 & 3 & 4 & 5 & \text{NIL} \end{pmatrix} 66 / 79
  • 135. Example D^{(5)} and Π^{(5)} D^{(5)} = \begin{pmatrix} 0 & 1 & -3 & 2 & -4 \\ 3 & 0 & -4 & 1 & -1 \\ 7 & 4 & 0 & 5 & 3 \\ 2 & -1 & -5 & 0 & -2 \\ 8 & 5 & 1 & 6 & 0 \end{pmatrix} Π^{(5)} = \begin{pmatrix} \text{NIL} & 3 & 4 & 5 & 1 \\ 4 & \text{NIL} & 4 & 2 & 1 \\ 4 & 3 & \text{NIL} & 2 & 1 \\ 4 & 3 & 4 & \text{NIL} & 1 \\ 4 & 3 & 4 & 5 & \text{NIL} \end{pmatrix} 67 / 79
  • 136. Remarks Something Notable Because the comparison in line 8 takes O(1) time, the Complexity of Floyd-Warshall is Time Complexity Θ(V^3) We do not need elaborate data structures such as binary heaps or Fibonacci heaps!!! The hidden constant is quite small: Making the Floyd-Warshall algorithm practical even for moderate-sized graphs!!! 68 / 79
  • 140. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shortest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 69 / 79
  • 141. Johnson’s Algorithm Observations Used to find all-pairs shortest paths in sparse graphs by using Dijkstra’s algorithm. It uses a re-weighting function to obtain nonnegative edge weights from negative ones, so that Dijkstra can handle them. It can also detect negative-weight cycles. Therefore It needs something to deal with the negative-weight cycles. Could it be a Bellman-Ford detector as before? In addition, we need to transform the weights in order to use Dijkstra on them. What we require A re-weighting function ŵ(u, v) such that: A shortest path under w is a shortest path under ŵ. All edge weights are nonnegative under ŵ. 70 / 79
  • 150. Proving Properties of Re-Weighting Lemma 25.1 Given a weighted, directed graph G = (V, E) with weight function w : E → R, let h : V → R be any function mapping vertices to real numbers. For each edge (u, v) ∈ E, define ŵ(u, v) = w(u, v) + h(u) − h(v) Let p = ⟨v0, v1, ..., vk⟩ be any path from vertex v0 to vertex vk. Then: 1 p is a shortest path from v0 to vk with weight function w if and only if it is a shortest path with weight function ŵ. That is, w(p) = δ(v0, vk) if and only if ŵ(p) = δ̂(v0, vk). 2 Furthermore, G has a negative-weight cycle using weight function w if and only if G has a negative-weight cycle using weight function ŵ. 71 / 79
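The heart of part 1 is a telescoping sum over the edges of p, consistent with the definitions above:

    \hat{w}(p) = \sum_{i=1}^{k} \hat{w}(v_{i-1}, v_i)
               = \sum_{i=1}^{k} \left( w(v_{i-1}, v_i) + h(v_{i-1}) - h(v_i) \right)
               = w(p) + h(v_0) - h(v_k)

Since h(v0) − h(vk) does not depend on the path, every path from v0 to vk is shifted by the same constant, so the shortest paths coincide under w and ŵ; and a cycle has v0 = vk, so its weight is unchanged, which gives part 2.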
  • 155. Using This Lemma Select h Such that ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0. Then, we build a new graph G′ It has the following elements V′ = V ∪ {s}, where s is a new vertex. E′ = E ∪ {(s, v) | v ∈ V}. w(s, v) = 0 for all v ∈ V, in addition to all the other weights. Select h Simply select h(v) = δ(s, v), computed in G′. 72 / 79
  • 161. Example Figure: Graph G′ with the original weight function w, the new source s (with 0-weight edges to every vertex), and h(v) = δ(s, v) shown at each vertex 73 / 79
  • 162. Proof of Claim Claim ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0 (5) By the Triangle Inequality δ(s, v) ≤ δ(s, u) + w(u, v) Then, by the way we selected h, we have: h(v) ≤ h(u) + w(u, v) (6) Finally ŵ(u, v) = w(u, v) + h(u) − h(v) ≥ 0 (7) 74 / 79
  • 166. Example Figure: The graph G′ after re-weighting with ŵ; all edge weights are now nonnegative 75 / 79
  • 167. Final Algorithm Pseudo-Code Johnson(G, w) 1. Compute G′, where: G′.V = G.V ∪ {s}, G′.E = G.E ∪ {(s, v) | v ∈ G.V}, and w(s, v) = 0 for all v ∈ G.V 2. if Bellman-Ford(G′, w, s) == FALSE 3. print “the input graph contains a negative-weight cycle” 4. else for each vertex v ∈ G′.V 5. set h(v) = v.d computed by Bellman-Ford 6. for each edge (u, v) ∈ G′.E 7. ŵ(u, v) = w(u, v) + h(u) − h(v) 8. let D = (duv) be a new n × n matrix 9. for each vertex u ∈ G.V 10. run Dijkstra(G, ŵ, u) to compute δ̂(u, v) for all v ∈ G.V 11. for each vertex v ∈ G.V 12. duv = δ̂(u, v) + h(v) − h(u) 13. return D 76 / 79
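A self-contained Python sketch of the whole pipeline. The function name and the edge-list representation are our own; Bellman-Ford is run from a virtual source, and a binary heap stands in for the Fibonacci heap, so each Dijkstra call costs O(E lg V) here rather than O(V lg V + E).

    import heapq
    import math

    def johnson(n, edges):
        """All-pairs shortest paths on vertices 0..n-1, edges = [(u, v, w), ...].
        Returns the n x n distance matrix, or None on a negative-weight cycle."""
        # 1. Bellman-Ford from a virtual source s = n with 0-weight edges to all.
        h = [0.0] * (n + 1)
        aug = edges + [(n, v, 0.0) for v in range(n)]
        for _ in range(n):                      # |V'| - 1 relaxation passes
            for u, v, w in aug:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
        if any(h[u] + w < h[v] for u, v, w in aug):
            return None                         # negative-weight cycle detected
        # 2. Re-weight: every edge weight becomes nonnegative.
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            adj[u].append((v, w + h[u] - h[v]))
        # 3. Dijkstra from every vertex, then undo the re-weighting.
        D = [[math.inf] * n for _ in range(n)]
        for s in range(n):
            dist = [math.inf] * n
            dist[s] = 0.0
            pq = [(0.0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue                    # stale queue entry
                for v, w in adj[u]:
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
            for v in range(n):
                D[s][v] = dist[v] + h[v] - h[s] if dist[v] < math.inf else math.inf
        return D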
  • 175. Complexity The Final Complexity Times: Θ(V + E) to compute G′ O(VE) to run Bellman-Ford Θ(E) to compute ŵ O(V^2 lg V + VE) to run Dijkstra’s algorithm |V| times using Fibonacci heaps O(V^2) to compute the D matrix Total: O(V^2 lg V + VE) If E = O(V^2) ⟹ O(V^3) 77 / 79
  • 183. Outline 1 Introduction Definition of the Problem Assumptions Observations 2 Structure of a Shortest Path Introduction 3 The Solution The Recursive Solution The Iterative Version Extended-Shortest-Paths Looking at the Algorithm as Matrix Multiplication Example We want something faster 4 A different dynamic-programming algorithm The Shortest Path Structure The Bottom-Up Solution Floyd-Warshall Algorithm Example 5 Other Solutions The Johnson’s Algorithm 6 Exercises You can try them 78 / 79