Analysis of Algorithms
Single Source Shortest Path
Andres Mendez-Vazquez
November 9, 2015
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
Introduction
Problem description
Given a single source vertex in a weighted, directed graph, we want to compute a shortest path for each possible destination (similar to BFS).
Thus
The algorithm will compute a shortest-path tree (again, similar to BFS).
Similar Problems
Single destination shortest paths problem
Find a shortest path to a given destination vertex t from each vertex.
By reversing the direction of each edge in the graph, we can reduce this problem to a single-source problem.
Reverse Directions
[Figure: the same graph with the direction of every edge reversed]
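As a quick illustration of this reduction, here is a minimal Python sketch that reverses a weighted edge list; the edge-list and weight-dictionary representation is an illustrative assumption, not something the slides fix:

```python
def reverse_graph(edges, w):
    # Flip every edge (u, v) to (v, u), keeping its weight.
    rev_edges = [(v, u) for (u, v) in edges]
    rev_w = {(v, u): wt for (u, v), wt in w.items()}
    return rev_edges, rev_w
```

Running any single-source algorithm from t on the reversed graph yields, for every vertex, its distance to t in the original graph.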
Similar Problems
Single pair shortest path problem
Find a shortest path from u to v for given vertices u and v.
If we solve the single source problem with source vertex u, we also
solve this problem.
Similar Problems
All pairs shortest paths problem
Find a shortest path from u to v for every pair of vertices u and v.
Optimal Substructure Property
Lemma 24.1
Given a weighted, directed graph G = (V, E), let p = ⟨v1, v2, ..., vk⟩ be a shortest path from v1 to vk. Then, pij = ⟨vi, vi+1, ..., vj⟩ is a shortest path from vi to vj, where 1 ≤ i ≤ j ≤ k.
(If some subpath pij were not shortest, splicing a shorter path from vi to vj into p would yield a path from v1 to vk shorter than p, a contradiction.)
We have then
So, we have the optimal substructure property:
Bellman-Ford's algorithm uses dynamic programming.
Dijkstra's algorithm uses the greedy approach.
In addition, we have that
Let δ(u, v) = the weight of a shortest path from u to v.
Optimal Substructure Property
Corollary
Let p be a shortest path from s to v that decomposes as p = s ⇝ u → v = p1 ∪ {(u, v)}, where p1 is the subpath from s to u. Then δ(s, v) = δ(s, u) + w(u, v).
The Lower Bound Between Nodes
Lemma 24.10
Let s ∈ V . For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).
Now
Then
We have the basic concepts
Still
We need to define an important one.
The Predecessor Graph
This will facilitate the proof of several concepts
Predecessor Graph
Representing shortest paths
For this, we use the predecessor subgraph.
It is defined slightly differently from the one in Breadth-First Search.
Definition of a Predecessor Subgraph
The predecessor subgraph is Gπ = (Vπ, Eπ), where
Vπ = {v ∈ V | v.π ≠ NIL} ∪ {s}
Eπ = {(v.π, v) | v ∈ Vπ − {s}}
Properties
When G has no negative-weight cycles reachable from s, the predecessor subgraph Gπ forms a tree rooted at s (Lemma 24.16, below).
The edges in Eπ are called tree edges.
The Relaxation Concept
We are going to use certain functions for all the algorithms
Initialize
Here, the basic variables of the nodes in a graph are initialized:
v.d = the current estimate of the distance from the source s.
v.π = the predecessor node during the search for the shortest path.
Changing v.d
This is done in the Relaxation algorithm.
Initialize and Relaxation
The algorithms keep track of v.d and v.π. They are initialized as follows:
Initialize(G, s)
1 for each v ∈ V [G]
2   v.d = ∞
3   v.π = NIL
4 s.d = 0
These values are changed when an edge (u, v) is relaxed:
Relax(u, v, w)
1 if v.d > u.d + w(u, v)
2   v.d = u.d + w(u, v)
3   v.π = u
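To make this bookkeeping concrete, here is a minimal Python sketch of the two routines. The representation (an iterable of vertex names, with the weight passed per edge) is an illustrative assumption, not from the slides:

```python
import math

def initialize(vertices, s):
    # d[v]: current shortest-path estimate; pi[v]: current predecessor.
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w_uv, d, pi):
    # Tighten the estimate for v if the path through u is cheaper.
    if d[v] > d[u] + w_uv:
        d[v] = d[u] + w_uv
        pi[v] = u
```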
How are these functions used?
These functions are used to
1 Build a predecessor graph Gπ.
2 Integrate the shortest paths into that predecessor graph, using the field d (see the sketch below for recovering a path from the π fields).
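Once the π fields are set, a shortest path can be read off by walking predecessors back from the destination. A minimal sketch, assuming the pi dictionary produced by initialize/relax above:

```python
def extract_path(pi, s, v):
    # Walk predecessor pointers back from v to s; None means v is unreachable.
    path = [v]
    while v != s:
        v = pi[v]
        if v is None:
            return None
        path.append(v)
    return list(reversed(path))
```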
The Bellman-Ford Algorithm
Bellman-Ford can handle negative-weight edges. It will “detect” reachable negative-weight cycles.
Bellman-Ford(G, s, w)
1 Initialize(G, s)
2 for i = 1 to |V [G]| − 1
3   for each (u, v) ∈ E [G]
4     Relax(u, v, w)
5 for each (u, v) ∈ E [G]
6   if v.d > u.d + w (u, v)
7     return false
8 return true
Time Complexity
O (VE)
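A runnable Python sketch of the procedure, reusing the initialize and relax helpers above; the edge-list and weight-dictionary representation is an illustrative assumption:

```python
def bellman_ford(vertices, edges, w, s):
    # edges: list of (u, v) pairs; w: dict mapping (u, v) -> weight.
    d, pi = initialize(vertices, s)
    for _ in range(len(vertices) - 1):   # |V| - 1 relaxation passes
        for (u, v) in edges:
            relax(u, v, w[(u, v)], d, pi)
    for (u, v) in edges:                 # check for negative-weight cycles
        if d[v] > d[u] + w[(u, v)]:
            return False, d, pi
    return True, d, pi
```

The two nested loops over the |V| − 1 passes dominate the cost, which is where the O(VE) bound comes from.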
Properties of Relaxation
Some properties
v.d, if not ∞, is the length of some path from s to v.
v.d either stays the same or decreases with time.
Therefore
If v.d = δ(s, v) at any time, this holds thereafter.
Something nice
Note that v.d ≥ δ(s, v) always (Upper-Bound Property).
After i iterations of relaxing all edges (u, v), if the shortest path to v has i edges, then v.d = δ(s, v).
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.
Properties of Relaxation
Lemma 24.10 (Triangle inequality)
Let G = (V , E) be a weighted, directed graph with weight function
w : E → R and source vertex s. Then, for all edges (u, v) ∈ E, we have:
δ (s, v) ≤ δ (s, u) + w (u, v) (1)
Proof
1 Suppose that p is a shortest path from source s to vertex v.
2 Then, p has no more weight than any other path from s to vertex v.
3 In particular, p has no more weight than the particular path that follows a shortest path from s to u and then takes edge (u, v), whose weight is δ (s, u) + w (u, v).
Properties of Relaxation
Lemma 24.11 (Upper Bound Property)
Let G = (V , E) be a weighted, directed graph with weight function w : E → R. Consider any algorithm in which v.d and v.π are first initialized by calling Initialize(G, s) (s is the source), and are only changed by calling Relax.
Then, we have that v.d ≥ δ(s, v) for all v ∈ V [G], and this invariant is maintained over any sequence of relaxation steps on the edges of G.
Moreover, once v.d = δ(s, v), it never changes.
Proof of Lemma
Loop Invariant
The proof can be done by induction over the number of relaxation steps, using the loop invariant:
v.d ≥ δ (s, v) for all v ∈ V
For the Basis
v.d ≥ δ (s, v) is true after initialization, since:
v.d = ∞ makes v.d ≥ δ (s, v) for all v ∈ V − {s}.
For s, s.d = 0 ≥ δ (s, s).
For the inductive step, consider the relaxation of an edge (u, v)
By the inductive hypothesis, we have that x.d ≥ δ (s, x) for all x ∈ V prior to the relaxation.
Thus
If we call Relax(u, v, w), it may change v.d:
v.d = u.d + w(u, v)
    ≥ δ(s, u) + w(u, v)   (by the inductive hypothesis)
    ≥ δ(s, v)             (by the triangle inequality)
Thus, the invariant is maintained.
Properties of Relaxation
Proof of lemma 24.11 cont...
To prove that the value v.d never changes once v.d = δ (s, v):
Note the following: once v.d = δ (s, v), it cannot decrease because v.d ≥ δ (s, v), and Relaxation never increases d.
Next, we have
Corollary 24.12 (No-path property)
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.
Proof
By the upper-bound property, we always have ∞ = δ (s, v) ≤ v.d. Then,
v.d = ∞.
More Lemmas
Lemma 24.13
Let G = (V , E) be a weighted, directed graph with weight function
w : E → R. Then, immediately after relaxing edge (u, v) by calling
Relax(u, v, w) we have v.d ≤ u.d + w(u, v).
Proof
First
Consider the state just prior to relaxing edge (u, v).
Case 1: we have that v.d > u.d + w (u, v)
Then, v.d = u.d + w (u, v) after the relaxation.
Now, Case 2
If v.d ≤ u.d + w (u, v) just before the relaxation, then:
neither u.d nor v.d changes.
Thus, afterwards
v.d ≤ u.d + w (u, v)
More Lemmas
Lemma 24.14 (Convergence property)
Let p be a shortest path from s to v that decomposes as p = s ⇝ u → v = p1 ∪ {(u, v)}, where p1 is the subpath from s to u.
If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.
Proof:
By the upper-bound property, if u.d = δ (s, u) holds at some moment before relaxing edge (u, v), then it keeps holding afterwards.
Proof
Thus, after relaxing (u, v):
v.d ≤ u.d + w(u, v)       (by Lemma 24.13)
    = δ(s, u) + w(u, v)
    = δ(s, v)             (by the corollary of Lemma 24.1)
Now
By Lemma 24.11, v.d ≥ δ(s, v), so v.d = δ(s, v).
Predecessor Subgraph for Bellman
Lemma 24.16
Assume a given graph G has no negative-weight cycles reachable from s. Then, after the initialization, the predecessor subgraph Gπ is always a tree with root s, and any sequence of relaxation steps on edges of G maintains this property as an invariant.
Proof
It is necessary to prove two things in order to get a tree:
1 Gπ is acyclic.
2 There exists a unique path from source s to each vertex in Vπ.
Proof that Gπ is acyclic
First
Suppose there exists a cycle c = ⟨v0, v1, ..., vk⟩, where v0 = vk. We have vi.π = vi−1 for i = 1, 2, ..., k.
Second
Assume the relaxation of (vk−1, vk) created the cycle. We are going to show that the cycle has negative weight.
We claim that
The cycle must be reachable from s (Why?)
Proof
First
Each vertex on the cycle has a non-NIL predecessor, so each vertex on the cycle was assigned a finite shortest-path estimate when it was assigned its non-NIL π value.
Then
By the upper-bound property, each vertex on the cycle has a finite shortest-path weight,
Thus
making the cycle reachable from s.
Proof
Before the call to Relax(vk−1, vk, w):
vi.π = vi−1 for i = 1, ..., k − 1. (2)
Thus
This implies vi.d was last updated by
vi.d = vi−1.d + w(vi−1, vi) (3)
for i = 1, ..., k − 1 (because Relax updates π together with d).
This implies
vi.d ≥ vi−1.d + w(vi−1, vi) (4)
for i = 1, ..., k − 1, just before the relaxation of (vk−1, vk) (see Lemma 24.13 and the comments below).
Proof
Thus
Because vk.π is changed by the call to Relax, immediately before that call vk.d > vk−1.d + w(vk−1, vk). Summing around the cycle, we have that:

∑_{i=1}^{k} vi.d > ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi)) = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi)

We have finally that
Because ∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d, we have that ∑_{i=1}^{k} w(vi−1, vi) < 0, i.e., a negative weight cycle!!!
Some comments
Comments
vi.d ≥ vi−1.d + w(vi−1, vi) for i = 1, ..., k − 1 because when Relax(vi−1, vi, w) was called, there was equality, and vi−1.d may have gotten smaller by further calls to Relax.
vk.d > vk−1.d + w(vk−1, vk) held before the last call to Relax, because that last call changed vk.d.
Proof of existence of a unique path from source s
Let Gπ be the predecessor subgraph.
So, for any v in Vπ, the graph Gπ contains at least one path from s to v.
Assume now that you have two paths:
[Figure: two distinct s-to-v paths through u that split at vertices x and y and rejoin at a vertex z — Impossible!]
Two paths would require some vertex z to have two distinct predecessors x and y with x ≠ y; but z.π is a single value, so z.π = x = y!!!
Contradiction!!! Therefore, we have only one path, and Gπ is a tree.
Lemma 24.17
Lemma 24.17
Assume the same conditions as before, and that the algorithm calls Initialize and then repeatedly calls Relax until v.d = δ(s, v) for all v in V . Then Gπ is a shortest-path tree rooted at s.
Proof
For all v in V , there is a unique simple path p from s to v in Gπ (Lemma 24.16).
We want to prove that it is a shortest path from s to v in G.
Proof
Then
Let p = ⟨v0, v1, ..., vk⟩, where v0 = s and vk = v. Thus, we have vi.d = δ(s, vi).
And reasoning as before
vi.d ≥ vi−1.d + w(vi−1, vi) (5)
This implies that
w(vi−1, vi) ≤ δ(s, vi) − δ(s, vi−1) (6)
Proof
Then, we sum over all the weights:

w (p) = ∑_{i=1}^{k} w(vi−1, vi) ≤ ∑_{i=1}^{k} (δ(s, vi) − δ(s, vi−1)) = δ(s, vk) − δ(s, v0) = δ(s, vk)

(the sum telescopes, and δ(s, v0) = δ(s, s) = 0)
Finally
So, equality holds and p is a shortest path, because also δ (s, vk) ≤ w (p).
Again the Bellman-Ford Algorithm
Bellman-Ford can handle negative-weight edges. It will “detect” reachable negative-weight cycles.
Bellman-Ford(G, s, w)
1 Initialize(G, s)
2 for i = 1 to |V [G]| − 1
3   for each (u, v) ∈ E [G]
4     Relax(u, v, w)   ▷ the decision part of the dynamic programming for v.d and v.π
5 for each (u, v) ∈ E [G]
6   if v.d > u.d + w (u, v)
7     return false
8 return true
Observation
If Bellman-Ford has not converged after |V (G)| − 1 iterations, then there cannot be a shortest-path tree, so there must be a negative-weight cycle reachable from s.
Example
Red arrows represent v.π.
[Figure: weighted example graph with source s and vertices a–h]
Example
Here, whenever u.d = ∞, relaxing (u, v) makes no change.
[Figure: first iteration of relaxations on the example graph]
Example
Here, we keep updating in the second iteration.
[Figure: second iteration of relaxations]
Example
Here, during the iteration, we notice that e can be updated to a better value.
[Figure: e's estimate is improved]
Example
Here, we keep updating in the third iteration; d and g are also updated.
[Figure: third iteration of relaxations]
Example
Here, we keep updating in the fourth iteration.
[Figure: fourth iteration of relaxations]
Example
Here, f is updated during this iteration.
[Figure: f's estimate is improved]
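The figures above do not survive in this text version, but the bellman_ford sketch from earlier can be exercised on a tiny graph with a reachable negative-weight cycle; this 3-vertex graph is an illustrative assumption, not the slides' example:

```python
vertices = {'s', 'a', 'b'}
edges = [('s', 'a'), ('a', 'b'), ('b', 'a')]
w = {('s', 'a'): 5, ('a', 'b'): -2, ('b', 'a'): 1}

ok, d, pi = bellman_ford(vertices, edges, w, 's')
print(ok)  # False: the cycle a -> b -> a has weight -2 + 1 = -1 < 0
```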
v.d == δ(s, v) upon termination
Lemma 24.2
Assuming no negative-weight cycles are reachable from s, v.d == δ(s, v) holds upon termination for all vertices v reachable from s.
Proof
Consider a shortest path p = ⟨v0, v1, ..., vk⟩, where v0 = s and vk = v.
We know the following
Shortest paths are simple, so p has at most |V | − 1 edges; thus we have that k ≤ |V | − 1.
Proof
Something Notable
Claim: vi.d = δ(s, vi) holds after the i-th pass over the edges.
In the algorithm
Each of the |V | − 1 iterations of the for loop (Lines 2-4) relaxes all edges in E.
The proof follows by induction on i
Among the edges relaxed in the i-th iteration, for i = 1, 2, ..., k, is (vi−1, vi); by the convergence property, vi.d = δ(s, vi) holds afterwards.
By lemma 24.11, once vi.d = δ(s, vi) holds, it continues to hold.
Finding a path between s and v
Corollary 24.3
Let G = (V , E) be a weighted, directed graph with source vertex s and weight function w : E → R, and assume that G contains no negative-weight cycles that are reachable from s. Then, for each vertex v ∈ V , there is a path from s to v if and only if Bellman-Ford terminates with v.d < ∞ when it is run on G.
Proof
Left to you...
Correctness of Bellman-Ford
Claim: The algorithm returns the correct value
This is part of Theorem 24.4. The other parts of the theorem follow easily from earlier results.
Case 1: There is no reachable negative weight cycle.
Upon termination, we have for all edges (u, v):
v.d = δ(s, v)
by Lemma 24.2 (above) if v is reachable, or v.d = δ(s, v) = ∞ otherwise.
Correctness of Bellman-Ford
Then, we have that:
v.d = δ(s, v)
    ≤ δ(s, u) + w(u, v)   (triangle inequality)
    ≤ u.d + w(u, v)       (upper-bound property)
Remember:
5. for each (u, v) ∈ E [G]
6.   if v.d > u.d + w (u, v)
7.     return false
Thus
The test in line 6 never succeeds, so the algorithm returns true.
Correctness of Bellman-Ford
Case 2: There exists a reachable negative weight cycle c = ⟨v0, v1, ..., vk⟩, where v0 = vk.
Then, we have:
∑_{i=1}^{k} w(vi−1, vi) < 0. (7)
Suppose the algorithm returns true
Then vi.d ≤ vi−1.d + w(vi−1, vi) for i = 1, ..., k, because Relax did not change any vi.d.
Correctness of Bellman-Ford
Thus

∑_{i=1}^{k} vi.d ≤ ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi)) = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi)

Since v0 = vk
Each vertex in c appears exactly once in each of the summations ∑_{i=1}^{k} vi.d and ∑_{i=1}^{k} vi−1.d, thus
∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d (8)
Correctness of Bellman-Ford
By Corollary 24.3
vi.d is finite for i = 1, 2, ..., k, thus
0 ≤ ∑_{i=1}^{k} w(vi−1, vi).
Hence
This contradicts (Eq. 7). Thus, the algorithm returns false.
Another Example
Something Notable
By relaxing the edges of a weighted DAG G = (V , E) according to a topological sort of its vertices, we can compute shortest paths from a single source in Θ (V + E) time.
Why?
Shortest paths are always well defined in a DAG, since even if there are negative-weight edges, no negative-weight cycles can exist.
Single-source Shortest Paths in Directed Acyclic Graphs
In a DAG, we can do the following (Complexity Θ (V + E))
DAG-SHORTEST-PATHS(G, w, s)
1 Topologically sort the vertices of G
2 Initialize(G, s)
3 for each u ∈ V [G], taken in topologically sorted order
4   for each v ∈ Adj [u]
5     Relax(u, v, w)
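A minimal Python sketch of this procedure, reusing initialize and relax from above. The adjacency-dict representation (vertex -> list of (neighbor, weight) pairs) and the DFS-based topological sort are illustrative assumptions:

```python
def topological_sort(adj):
    # adj must contain every vertex as a key (sinks map to []).
    # Depth-first search; a vertex is appended after all its descendants.
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v, _ in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in adj:
        if u not in seen:
            dfs(u)
    return order[::-1]   # reverse finishing order = topological order

def dag_shortest_paths(adj, s):
    d, pi = initialize(adj.keys(), s)
    for u in topological_sort(adj):
        for v, w_uv in adj[u]:   # each edge is relaxed exactly once
            relax(u, v, w_uv, d, pi)
    return d, pi
```

Processing vertices in topological order guarantees that when u is relaxed, every edge on a shortest path into u has already been relaxed, which is why one pass suffices.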
It is based on the following theorem
Theorem 24.5
If a weighted, directed graph G = (V , E) has source vertex s and no cycles, then at the termination of the DAG-SHORTEST-PATHS procedure, v.d = δ (s, v) for all vertices v ∈ V , and the predecessor subgraph Gπ is a shortest-path tree.
Proof
Left to you...
Complexity
We have that
1 Line 1 takes Θ (V + E).
2 Line 2 takes Θ (V ).
3 Lines 3-5 make one iteration per vertex:
1 In addition, the for loop in lines 4-5 relaxes each edge exactly once
(remember the topological sorting).
2 This makes each iteration of the inner loop Θ (1).
Therefore
The total running time is Θ (V + E).
68 / 108
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
69 / 108
Example
After initialization; b is the source
[Figure: a weighted DAG on vertices a, b, c, d, e, f with edge weights 5, 2, 3, 7, 4, 2, −1, −2, 6, 1; only the source b has a finite estimate, b.d = 0]
70 / 108
Example
a is the first in the topological sort, but no update is done
[Figure: the same DAG; a.d = ∞, so relaxing the edges leaving a changes nothing]
71 / 108
Example
b is the next one
[Figure: the DAG after relaxing the edges leaving b]
72 / 108
Example
c is the next one
[Figure: the DAG after relaxing the edges leaving c]
73 / 108
Example
d is the next one
[Figure: the DAG after relaxing the edges leaving d]
74 / 108
Example
e is the next one
[Figure: the DAG after relaxing the edges leaving e]
75 / 108
Example
Finally, f is the next one
[Figure: the final DAG after processing f]
76 / 108
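The exact edges of the figures above are not recoverable from the extracted slides, so the following replays the same kind of walkthrough on a hypothetical DAG over the vertex set a, ..., f, again with b as the source, using the dag_shortest_paths sketch from earlier:

vertices = ["a", "b", "c", "d", "e", "f"]
edges = [("a", "b", 5), ("b", "c", 2), ("c", "d", 7),
         ("c", "e", -1), ("d", "f", -2), ("e", "f", 1)]
d, pi = dag_shortest_paths(vertices, edges, "b")
# a keeps d = infinity, since it precedes the source b in topological
# order, matching the "no update is done" step above.
print(d)   # {'a': inf, 'b': 0, 'c': 2, 'd': 9, 'e': 1, 'f': 2}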
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
77 / 108
Dijkstra’s Algorithm
It is a greedy-based method
Ideas?
Yes
We need to keep track of the greedy choice!!!
78 / 108
Dijkstra’s Algorithm
Assume no negative weight edges
1 Dijkstra’s algorithm maintains a set S of vertices whose final
shortest-path weight from s has already been determined.
2 It repeatedly selects u in V − S with minimum shortest path estimate
(greedy choice).
3 It stores V − S in a min-priority queue Q.
79 / 108
Dijkstra’s algorithm
DIJKSTRA(G, w, s)
1 INITIALIZE(G, s)
2 S = ∅
3 Q = V [G]
4 while Q ≠ ∅
5 u = Extract-Min(Q)
6 S = S ∪ {u}
7 for each vertex v ∈ Adj [u]
8 Relax(u, v, w)
80 / 108
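A minimal runnable sketch of this procedure in Python, using the standard-library heapq module as the min-priority queue. Since heapq has no Decrease-Key operation, the sketch pushes duplicate entries and skips stale ones, a common variant that computes the same distances; the adjacency-map input format and the function name are assumptions made for this example.

import heapq
import math

def dijkstra(vertices, adj, s):
    # vertices: iterable of labels; adj: dict mapping u to a list of
    # (v, w) pairs with w >= 0; s: the source vertex.
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    done = set()                      # the set S of finished vertices
    heap = [(0, s)]                   # the priority queue Q, keyed on d
    while heap:                       # while Q is not empty
        du, u = heapq.heappop(heap)   # u = Extract-Min(Q)
        if u in done:
            continue                  # stale entry; u is already in S
        done.add(u)                   # S = S ∪ {u}
        for v, w in adj.get(u, []):
            if du + w < d[v]:         # Relax(u, v, w)
                d[v] = du + w
                pi[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, pi

The first time a vertex is popped, its distance is final, which is the content of Theorem 24.6 below; with a binary heap this variant runs in O((V + E) log V), one of the bounds on the complexity slide at the end.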
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
81 / 108
Example
The Graph After Initialization
[Figure: a weighted graph on vertices s, a, b, c, d, e, f, g, h with edge weights 5, 4, 5, 3, 3, 6, 2, 1, 5, 2, 2, 7, 2, 10, 3]
82 / 108
Example
We use red edges to represent v.π and color black to represent the
set S
[Figure: the same graph, annotated with this color convention]
83 / 108
Example
s ← Extract-Min(Q) and update the elements adjacent to s
[Figure: s turns black with s.d = 0; the estimates of the vertices adjacent to s become 5, 5, and 6]
84 / 108
Example
a ← Extract-Min(Q) and update the elements adjacent to a
[Figure: a turns black; one adjacent estimate is updated to 9]
85 / 108
Example
e ← Extract-Min(Q) and update the elements adjacent to e
[Figure: e turns black; one adjacent estimate is updated to 7]
86 / 108
Example
b ← Extract-Min(Q) and update the elements adjacent to b
[Figure: b turns black; one adjacent estimate is updated to 16]
87 / 108
Example
h ← Extract-Min(Q) and update the elements adjacent to h
[Figure: h turns black; adjacent estimates are updated to 9 and 10]
88 / 108
Example
c ← Extract-Min(Q) and no-update
[Figure: c turns black; no estimates change]
89 / 108
Example
d ← Extract-Min(Q) and no-update
[Figure: d turns black; no estimates change]
90 / 108
Example
g ← Extract-Min(Q) and no-update
[Figure: g turns black; no estimates change]
91 / 108
Example
f ← Extract-Min(Q) and no-update
[Figure: f turns black; no estimates change, and the shortest-path tree is complete]
92 / 108
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
93 / 108
Correctness Dijkstra’s algorithm
Theorem 24.6
Upon termination, u.d = δ(s, u) for all u in V (assuming non-negative
weights).
Proof
By Lemma 24.11, once u.d = δ(s, u) holds, it continues to hold.
We are going to use the following loop invariant:
At the start of each iteration of the while loop of lines 4–8,
v.d = δ (s, v) for each vertex v ∈ S.
94 / 108
Proof
Thus
We are going to prove for each u in V , u.d = δ(s, u) when u is inserted in
S.
Initialization
Initially S = ∅, thus the invariant is trivially true.
Maintenance
We want to show that in each iteration u.d = δ (s, u) for the vertex added
to set S.
For this, note the following
Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s.
In addition, we have that S ≠ ∅ before u is added.
95 / 108
Proof
Use contradiction
Now, suppose not. Let u be the first vertex such that u.d ≠ δ (s, u) when
inserted in S.
Note the following
Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; thus S ≠ ∅ just
before u is inserted (in fact s ∈ S).
96 / 108
Proof
Now
Note that there exists a path from s to u, for otherwise
u.d = δ(s, u) = ∞ by Corollary 24.12, which would contradict u.d ≠ δ (s, u):
“If there is no path from s to v, then v.d = δ(s, v) = ∞ is an
invariant.”
Thus there exists a shortest path p
between s and u.
Observation
Prior to adding u to S, path p connects a vertex in S, namely s, to a
vertex in V − S, namely u.
97 / 108
Proof
Consider the following
Let y be the first vertex along p from s to u such that y ∈ V − S,
and let x ∈ S be y’s predecessor along p.
98 / 108
Proof
Proof (continuation)
Then, the shortest path from s to u looks like s ⇝ x → y ⇝ u, where p1
is the subpath s ⇝ x and p2 is the subpath y ⇝ u.
[Figure: the set S contains s and x; y and u lie outside S]
Remark: Either of paths p1 or p2 may have no edges.
99 / 108
Proof
We claim
y.d = δ(s, y) when u is added into S.
Proof of the claim
1 Observe that x ∈ S.
2 In addition, we know that u is the first vertex for which u.d ≠ δ (s, u)
when it is added to S.
100 / 108
Proof
Then
Since x was added to S before u, we had that x.d = δ(s, x) when x was
inserted into S.
Then, we relaxed the edge between x and y
Edge (x, y) was relaxed at that time!
101 / 108
Proof
Remember? Convergence property (Lemma 24.14)
Let p be a shortest path from s to v, where p is s ⇝ u → v, with p1 the
subpath s ⇝ u. If u.d = δ(s, u) holds at any time prior to calling
Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.
Then
Using this convergence property on the edge (x, y):
y.d = δ(s, y) = δ(s, x) + w(x, y) (9)
The claim is implied!!!
102 / 108
Proof
Now
1 We obtain a contradiction to prove that u.d = δ (s, u).
2 y appears before u on a shortest path from s to u.
3 In addition, all edges have non-negative weights.
4 Then, δ(s, y) ≤ δ(s, u), thus
y.d = δ (s, y) ≤ δ (s, u) ≤ u.d
The last inequality is due to the Upper-Bound Property (Lemma
24.11).
103 / 108
Proof
Then
But because both vertices u and y were in V − S when u was chosen in
line 5 ⇒ u.d ≤ y.d.
Thus
y.d = δ(s, y) = δ(s, u) = u.d
Consequently
We have that u.d = δ (s, u), which contradicts our choice of u.
Conclusion: u.d = δ (s, u) when u is added to S and the equality is
maintained afterwards.
104 / 108
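Collecting the last two slides into a single chain (a restatement of the argument, written in the slides' notation):

y.d = δ(s, y) ≤ δ(s, u) ≤ u.d ≤ y.d

The equality is the claim (Eq. 9); the first inequality holds because y precedes u on p and weights are non-negative; the second is the Upper-Bound Property; the last holds because Extract-Min chose u while y was still in V − S. The chain forces every relation into equality, so u.d = δ(s, u), contradicting the choice of u.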
Finally
Termination
At termination, Q = ∅.
Thus, V − S = ∅, or equivalently S = V .
Thus
u.d = δ (s, u) for all vertices u ∈ V !!!
105 / 108
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
106 / 108
Complexity
Running time is
O(V²) using a linear array for the priority queue.
O((V + E) log V ) using a binary heap.
O(V log V + E) using a Fibonacci heap.
107 / 108
Exercises
From Cormen’s book solve
24.1-1
24.1-3
24.1-4
24.3-1
24.3-3
24.3-4
24.3-6
24.3-7
24.3-8
24.3-10
108 / 108

More Related Content

What's hot

minimum spanning trees Algorithm
minimum spanning trees Algorithm minimum spanning trees Algorithm
minimum spanning trees Algorithm sachin varun
 
Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9Traian Rebedea
 
Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10Traian Rebedea
 
Prim's Algorithm on minimum spanning tree
Prim's Algorithm on minimum spanning treePrim's Algorithm on minimum spanning tree
Prim's Algorithm on minimum spanning treeoneous
 
Strong (Weak) Triple Connected Domination Number of a Fuzzy Graph
Strong (Weak) Triple Connected Domination Number of a Fuzzy GraphStrong (Weak) Triple Connected Domination Number of a Fuzzy Graph
Strong (Weak) Triple Connected Domination Number of a Fuzzy Graphijceronline
 
GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)
GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)
GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)Madhu Bala
 
Dijkstra s algorithm
Dijkstra s algorithmDijkstra s algorithm
Dijkstra s algorithmmansab MIRZA
 
Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14Naor Ami
 
Dijkstra’s algorithm
Dijkstra’s algorithmDijkstra’s algorithm
Dijkstra’s algorithmfaisal2204
 
Minimal spanning tree class 15
Minimal spanning tree class 15Minimal spanning tree class 15
Minimal spanning tree class 15Kumar
 
Algorithm Design and Complexity - Course 8
Algorithm Design and Complexity - Course 8Algorithm Design and Complexity - Course 8
Algorithm Design and Complexity - Course 8Traian Rebedea
 

What's hot (20)

minimum spanning trees Algorithm
minimum spanning trees Algorithm minimum spanning trees Algorithm
minimum spanning trees Algorithm
 
Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9
 
Chapter 26 aoa
Chapter 26 aoaChapter 26 aoa
Chapter 26 aoa
 
Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10
 
Chapter 23 aoa
Chapter 23 aoaChapter 23 aoa
Chapter 23 aoa
 
Chapter 25 aoa
Chapter 25 aoaChapter 25 aoa
Chapter 25 aoa
 
Shortest path
Shortest pathShortest path
Shortest path
 
Daa chpater14
Daa chpater14Daa chpater14
Daa chpater14
 
Prim's Algorithm on minimum spanning tree
Prim's Algorithm on minimum spanning treePrim's Algorithm on minimum spanning tree
Prim's Algorithm on minimum spanning tree
 
Biconnectivity
BiconnectivityBiconnectivity
Biconnectivity
 
Shortest Path in Graph
Shortest Path in GraphShortest Path in Graph
Shortest Path in Graph
 
Dijkstra c
Dijkstra cDijkstra c
Dijkstra c
 
Strong (Weak) Triple Connected Domination Number of a Fuzzy Graph
Strong (Weak) Triple Connected Domination Number of a Fuzzy GraphStrong (Weak) Triple Connected Domination Number of a Fuzzy Graph
Strong (Weak) Triple Connected Domination Number of a Fuzzy Graph
 
Weighted graphs
Weighted graphsWeighted graphs
Weighted graphs
 
GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)
GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)
GRAPH APPLICATION - MINIMUM SPANNING TREE (MST)
 
Dijkstra s algorithm
Dijkstra s algorithmDijkstra s algorithm
Dijkstra s algorithm
 
Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14Inroduction_To_Algorithms_Lect14
Inroduction_To_Algorithms_Lect14
 
Dijkstra’s algorithm
Dijkstra’s algorithmDijkstra’s algorithm
Dijkstra’s algorithm
 
Minimal spanning tree class 15
Minimal spanning tree class 15Minimal spanning tree class 15
Minimal spanning tree class 15
 
Algorithm Design and Complexity - Course 8
Algorithm Design and Complexity - Course 8Algorithm Design and Complexity - Course 8
Algorithm Design and Complexity - Course 8
 

Similar to 20 Single Source Shorthest Path

SINGLE SOURCE SHORTEST PATH.ppt
SINGLE SOURCE SHORTEST PATH.pptSINGLE SOURCE SHORTEST PATH.ppt
SINGLE SOURCE SHORTEST PATH.pptshanthishyam
 
Single sourceshortestpath by emad
Single sourceshortestpath by emadSingle sourceshortestpath by emad
Single sourceshortestpath by emadKazi Emad
 
Shortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairsShortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairsBenjamin Sach
 
Graphs: Finding shortest paths
Graphs: Finding shortest pathsGraphs: Finding shortest paths
Graphs: Finding shortest pathsFulvio Corno
 
Bellman ford Algorithm
Bellman ford AlgorithmBellman ford Algorithm
Bellman ford Algorithmtaimurkhan803
 
bellman-ford Theorem.ppt
bellman-ford Theorem.pptbellman-ford Theorem.ppt
bellman-ford Theorem.pptSaimaShaheen14
 
2.6 all pairsshortestpath
2.6 all pairsshortestpath2.6 all pairsshortestpath
2.6 all pairsshortestpathKrish_ver2
 
test pre
test pretest pre
test prefarazch
 
shortestpathalgorithm-180109112405 (1).pdf
shortestpathalgorithm-180109112405 (1).pdfshortestpathalgorithm-180109112405 (1).pdf
shortestpathalgorithm-180109112405 (1).pdfzefergaming
 
Shortest path algorithm
Shortest path algorithmShortest path algorithm
Shortest path algorithmsana younas
 
Floyd warshall algo {dynamic approach}
Floyd warshall algo {dynamic approach}Floyd warshall algo {dynamic approach}
Floyd warshall algo {dynamic approach}Shubham Shukla
 
Shortest Path Problem.docx
Shortest Path Problem.docxShortest Path Problem.docx
Shortest Path Problem.docxSeethaDinesh
 
04 greedyalgorithmsii 2x2
04 greedyalgorithmsii 2x204 greedyalgorithmsii 2x2
04 greedyalgorithmsii 2x2MuradAmn
 
Secure Domination in graphs
Secure Domination in graphsSecure Domination in graphs
Secure Domination in graphsMahesh Gadhwal
 
lecture 21
lecture 21lecture 21
lecture 21sajinsc
 
Minimum-Cost Bounded-Degree Spanning Trees
Minimum-Cost Bounded-Degree Spanning TreesMinimum-Cost Bounded-Degree Spanning Trees
Minimum-Cost Bounded-Degree Spanning TreesFeliciano Colella
 
Bellman ford algorithm
Bellman ford algorithmBellman ford algorithm
Bellman ford algorithmA. S. M. Shafi
 

Similar to 20 Single Source Shorthest Path (20)

SINGLE SOURCE SHORTEST PATH.ppt
SINGLE SOURCE SHORTEST PATH.pptSINGLE SOURCE SHORTEST PATH.ppt
SINGLE SOURCE SHORTEST PATH.ppt
 
Single sourceshortestpath by emad
Single sourceshortestpath by emadSingle sourceshortestpath by emad
Single sourceshortestpath by emad
 
Shortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairsShortest Paths Part 2: Negative Weights and All-pairs
Shortest Paths Part 2: Negative Weights and All-pairs
 
chapter24.ppt
chapter24.pptchapter24.ppt
chapter24.ppt
 
Graphs: Finding shortest paths
Graphs: Finding shortest pathsGraphs: Finding shortest paths
Graphs: Finding shortest paths
 
Bellman ford Algorithm
Bellman ford AlgorithmBellman ford Algorithm
Bellman ford Algorithm
 
DAA_Presentation - Copy.pptx
DAA_Presentation - Copy.pptxDAA_Presentation - Copy.pptx
DAA_Presentation - Copy.pptx
 
bellman-ford Theorem.ppt
bellman-ford Theorem.pptbellman-ford Theorem.ppt
bellman-ford Theorem.ppt
 
2.6 all pairsshortestpath
2.6 all pairsshortestpath2.6 all pairsshortestpath
2.6 all pairsshortestpath
 
test pre
test pretest pre
test pre
 
shortestpathalgorithm-180109112405 (1).pdf
shortestpathalgorithm-180109112405 (1).pdfshortestpathalgorithm-180109112405 (1).pdf
shortestpathalgorithm-180109112405 (1).pdf
 
Shortest path algorithm
Shortest path algorithmShortest path algorithm
Shortest path algorithm
 
Floyd warshall algo {dynamic approach}
Floyd warshall algo {dynamic approach}Floyd warshall algo {dynamic approach}
Floyd warshall algo {dynamic approach}
 
Shortest Path Problem.docx
Shortest Path Problem.docxShortest Path Problem.docx
Shortest Path Problem.docx
 
04 greedyalgorithmsii 2x2
04 greedyalgorithmsii 2x204 greedyalgorithmsii 2x2
04 greedyalgorithmsii 2x2
 
Secure Domination in graphs
Secure Domination in graphsSecure Domination in graphs
Secure Domination in graphs
 
lecture 21
lecture 21lecture 21
lecture 21
 
Graph 3
Graph 3Graph 3
Graph 3
 
Minimum-Cost Bounded-Degree Spanning Trees
Minimum-Cost Bounded-Degree Spanning TreesMinimum-Cost Bounded-Degree Spanning Trees
Minimum-Cost Bounded-Degree Spanning Trees
 
Bellman ford algorithm
Bellman ford algorithmBellman ford algorithm
Bellman ford algorithm
 

More from Andres Mendez-Vazquez

01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectorsAndres Mendez-Vazquez
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issuesAndres Mendez-Vazquez
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiationAndres Mendez-Vazquez
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep LearningAndres Mendez-Vazquez
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learningAndres Mendez-Vazquez
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusAndres Mendez-Vazquez
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusAndres Mendez-Vazquez
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesAndres Mendez-Vazquez
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variationsAndres Mendez-Vazquez
 

More from Andres Mendez-Vazquez (20)

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
 
06 recurrent neural_networks
06 recurrent neural_networks06 recurrent neural_networks
06 recurrent neural_networks
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
 
Zetta global
Zetta globalZetta global
Zetta global
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
 
18.1 combining models
18.1 combining models18.1 combining models
18.1 combining models
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
 

Recently uploaded

MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...ranjana rawat
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 

Recently uploaded (20)

MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 

20 Single Source Shorthest Path

  • 1. Analysis of Algorithms Single Source Shortest Path Andres Mendez-Vazquez November 9, 2015 1 / 108
  • 2. Outline 1 Introduction Introduction and Similar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 2 / 108
  • 3. Outline 1 Introduction Introduction and Similar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 3 / 108
  • 4. Introduction Problem description Given a single source vertex in a weighted, directed graph. We want to compute a shortest path for each possible destination (Similar to BFS). Thus The algorithm will compute a shortest path tree (again, similar to BFS). 4 / 108
  • 5. Introduction Problem description Given a single source vertex in a weighted, directed graph. We want to compute a shortest path for each possible destination (Similar to BFS). Thus The algorithm will compute a shortest path tree (again, similar to BFS). 4 / 108
  • 6. Introduction Problem description Given a single source vertex in a weighted, directed graph. We want to compute a shortest path for each possible destination (Similar to BFS). Thus The algorithm will compute a shortest path tree (again, similar to BFS). 4 / 108
  • 7. Similar Problems Single destination shortest paths problem Find a shortest path to a given destination vertex t from each vertex. By reversing the direction of each edge in the graph, we can reduce this problem to a single source problem. 5 / 108
  • 8. Similar Problems Single destination shortest paths problem Find a shortest path to a given destination vertex t from each vertex. By reversing the direction of each edge in the graph, we can reduce this problem to a single source problem. Reverse Directions 5 / 108
  • 9. Similar Problems Single pair shortest path problem Find a shortest path from u to v for given vertices u and v. If we solve the single source problem with source vertex u, we also solve this problem. 6 / 108
  • 10. Similar Problems Single pair shortest path problem Find a shortest path from u to v for given vertices u and v. If we solve the single source problem with source vertex u, we also solve this problem. 6 / 108
  • 11. Similar Problems All pairs shortest paths problem Find a shortest path from u to v for every pair of vertices u and v. 7 / 108
  • 12. Outline 1 Introduction Introduction and Similar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 8 / 108
  • 13. Optimal Substructure Property Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 14. Optimal Substructure Property Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 15. Optimal Substructure Property Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 16. Optimal Substructure Property Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 17. Optimal Substructure Property Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 18. Optimal Substructure Property Lemma 24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 19. Optimal Substructure Property Corollary Let p be a Shortest Path from s to v, where p = s p1 u → v = p1 ∪ {(u, v)}. Then δ(s, v) = δ(s, u) + w(u, v). 10 / 108
  • 20. The Lower Bound Between Nodes Lemma 24.10 Let s ∈ V . For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v). 11 / 108
  • 21. Now Then We have the basic concepts Still We need to define an important one. The Predecessor Graph This will facilitate the proof of several concepts 12 / 108
  • 22. Now Then We have the basic concepts Still We need to define an important one. The Predecessor Graph This will facilitate the proof of several concepts 12 / 108
  • 23. Now Then We have the basic concepts Still We need to define an important one. The Predecessor Graph This will facilitate the proof of several concepts 12 / 108
  • 24. Outline 1 Introduction Introduction and Similar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 13 / 108
  • 25. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 26. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 27. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 28. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 29. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 30. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 31. Predecessor Graph Representing shortest paths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 32. Outline 1 Introduction Introduction and Similar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 15 / 108
  • 33. The Relaxation Concept We are going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 34. The Relaxation Concept We are going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 35. The Relaxation Concept We are going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 36. The Relaxation Concept We are going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 37. The Relaxation Concept We are going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 38. Initialize and Relaxation The algorithms keep track of v.d and v.π, initialized as follows Initialize(G, s) 1 for each v ∈ V [G] 2 v.d = ∞ 3 v.π = NIL 4 s.d = 0 These values are changed when an edge (u, v) is relaxed. Relax(u, v, w) 1 if v.d > u.d + w(u, v) 2 v.d = u.d + w(u, v) 3 v.π = u 17 / 108
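The slides give only pseudocode; as a minimal concrete sketch (not from the slides), the two routines can be written in Python, representing the graph as a dict of adjacency lists and the weight function as a dict keyed by edge. The names graph, w, d, and pi are illustrative choices.

```python
import math

def initialize(graph, s):
    """Set every vertex's estimate d to infinity and its predecessor pi
    to None; the source s starts with distance estimate 0."""
    d = {v: math.inf for v in graph}
    pi = {v: None for v in graph}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """If going through u gives v a shorter path than its current
    estimate, lower v's estimate and record u as v's predecessor."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u
```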
  • 40. How are these functions used? These functions are used to: 1 Build a predecessor graph Gπ. 2 Integrate the shortest path into that predecessor graph, using the field d. 18 / 108
  • 43. The Bellman-Ford Algorithm Bellman-Ford allows negative weight edges, and it will “detect” reachable negative weight cycles. Bellman-Ford(G, s, w) 1 Initialize(G, s) 2 for i = 1 to |V [G]| − 1 3 for each (u, v) ∈ E [G] 4 Relax(u, v, w) 5 for each (u, v) ∈ E [G] 6 if v.d > u.d + w (u, v) 7 return false 8 return true Time Complexity O (VE) 20 / 108
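Again as a hedged sketch rather than the textbook's interface, the following Python version implements the same procedure in the dict-based representation used for the Relax sketch above; bellman_ford and its return convention are illustrative choices.

```python
import math

def bellman_ford(graph, w, s):
    """graph: dict mapping each vertex to a list of its out-neighbors.
    w: dict mapping each edge (u, v) to its (possibly negative) weight.
    Returns (True, d, pi) if no negative cycle is reachable from s,
    and (False, d, pi) otherwise."""
    d = {v: math.inf for v in graph}
    pi = {v: None for v in graph}
    d[s] = 0
    edges = [(u, v) for u in graph for v in graph[u]]
    # |V| - 1 passes over every edge: after the i-th pass, every vertex
    # whose shortest path from s uses at most i edges has converged.
    for _ in range(len(graph) - 1):
        for u, v in edges:
            if d[u] + w[(u, v)] < d[v]:   # Relax(u, v, w)
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    # Final pass: an edge that can still be relaxed witnesses a
    # reachable negative weight cycle.
    for u, v in edges:
        if d[u] + w[(u, v)] < d[v]:
            return False, d, pi
    return True, d, pi
```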
  • 48. Properties of Relaxation Some properties v.d, if not ∞, is the length of some path from s to v. v.d either stays the same or decreases with time. Therefore If v.d = δ(s, v) at any time, this holds thereafter. Something nice Note that v.d ≥ δ(s, v) always (Upper-Bound Property). After i iterations of relaxing all edges (u, v), if the shortest path to v has i edges, then v.d = δ(s, v). If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant. 22 / 108
  • 54. Properties of Relaxation Lemma 24.10 (Triangle inequality) Let G = (V , E) be a weighted, directed graph with weight function w : E → R and source vertex s. Then, for all edges (u, v) ∈ E, we have: δ (s, v) ≤ δ (s, u) + w (u, v) (1) Proof 1 Suppose that p is a shortest path from source s to vertex v. 2 Then, p has no more weight than any other path from s to vertex v. 3 In particular, p has no more weight than the path that follows a shortest path from s to u and then takes edge (u, v), whose weight is δ (s, u) + w (u, v). 23 / 108
  • 59. Properties of Relaxation Lemma 24.11 (Upper Bound Property) Let G = (V , E) be a weighted, directed graph with weight function w : E → R. Consider any algorithm in which v.d and v.π are first initialized by calling Initialize(G, s) (s is the source), and are only changed by calling Relax. Then, we have that v.d ≥ δ(s, v) for all v ∈ V [G], and this invariant is maintained over any sequence of relaxation steps on the edges of G. Moreover, once v.d = δ(s, v), it never changes. 24 / 108
  • 62. Proof of Lemma Loop Invariance The proof is by induction over the number of relaxation steps, with the loop invariant: v.d ≥ δ (s, v) for all v ∈ V Basis v.d ≥ δ (s, v) is true after initialization, since: v.d = ∞ makes v.d ≥ δ (s, v) for all v ∈ V − {s}. For s, s.d = 0 ≥ δ (s, s). Inductive step: consider the relaxation of an edge (u, v) By the inductive hypothesis, we have that x.d ≥ δ (s, x) for all x ∈ V prior to the relaxation. 25 / 108
  • 67. Thus If you call Relax(u, v, w), it may change v.d: v.d = u.d + w(u, v) ≥ δ(s, u) + w(u, v) (by the inductive hypothesis) ≥ δ (s, v) (by the triangle inequality) Thus, the invariant is maintained. 26 / 108
  • 71. Properties of Relaxation Proof of lemma 24.11 cont... To prove that the value v.d never changes once v.d = δ (s, v): Note the following: Once v.d = δ (s, v), it cannot decrease because v.d ≥ δ (s, v), and it cannot increase because Relaxation never increases d. 27 / 108
  • 73. Next, we have Corollary 24.12 (No-path property) If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant. Proof By the upper-bound property, we always have ∞ = δ (s, v) ≤ v.d. Then, v.d = ∞. 28 / 108
  • 75. More Lemmas Lemma 24.13 Let G = (V , E) be a weighted, directed graph with weight function w : E → R. Then, immediately after relaxing edge (u, v) by calling Relax(u, v, w), we have v.d ≤ u.d + w(u, v). 29 / 108
  • 76. Proof Case 1: just prior to relaxing edge (u, v), we have v.d > u.d + w (u, v) Then, v.d = u.d + w (u, v) after the relaxation. Case 2: just before the relaxation, v.d ≤ u.d + w (u, v) Then neither u.d nor v.d changes, so afterwards v.d ≤ u.d + w (u, v). 30 / 108
  • 82. More Lemmas Lemma 24.14 (Convergence property) Let p be a shortest path from s to v that decomposes as p = p1 ∪ {(u, v)}, where p1 is a shortest path from s to u followed by the edge (u, v). If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call. Proof: By the upper-bound property, if u.d = δ (s, u) holds at some moment before relaxing edge (u, v), then it keeps holding afterwards. 31 / 108
  • 85. Proof Thus, after relaxing (u, v): v.d ≤ u.d + w(u, v) (by Lemma 24.13) = δ (s, u) + w (u, v) = δ (s, v) (by the corollary to Lemma 24.1: subpaths of shortest paths are shortest paths) Now By Lemma 24.11, v.d ≥ δ(s, v), so v.d = δ(s, v). 32 / 108
  • 90. Predecessor Subgraph for Bellman Lemma 24.16 Assume a given graph G has no negative weight cycles reachable from s. Then, after the initialization, the predecessor subgraph Gπ is always a tree with root s, and any sequence of relaxation steps on edges of G maintains this property as an invariant. Proof It is necessary to prove two things in order to get a tree: 1 Gπ is acyclic. 2 There exists a unique path from source s to each vertex in Vπ. 34 / 108
  • 94. Proof of Gπ is acyclic First Suppose there exists a cycle c =< v0, v1, ..., vk >, where v0 = vk. We have vi.π = vi−1 for i = 1, 2, ..., k. Second Assume the relaxation of (vk−1, vk) created the cycle. We are going to show that the cycle has negative weight. We claim that The cycle must be reachable from s (Why?) 35 / 108
  • 97. Proof First Each vertex on the cycle has a non-NIL predecessor, and so each vertex on it was assigned a finite shortest-path estimate when it was assigned its non-NIL value. Then By the upper-bound property, each vertex on the cycle has a finite shortest-path weight, Thus Making the cycle reachable from s. 36 / 108
  • 100. Proof Before the call to Relax(vk−1, vk, w): vi.π = vi−1 for i = 1, ..., k − 1. (2) This implies that vi.d was last updated by vi.d = vi−1.d + w(vi−1, vi) (3) for i = 1, ..., k − 1 (because Relax updates π only when it updates d). This in turn implies vi.d ≥ vi−1.d + w(vi−1, vi) (4) for i = 1, ..., k − 1 (equality held at update time, and vi−1.d can only have decreased since then; cf. Lemma 24.13). 37 / 108
  • 103. Proof Thus Because vk.π is changed by the call to Relax, immediately before it we had vk.d > vk−1.d + w(vk−1, vk). Summing around the cycle, we have that: ∑_{i=1}^{k} vi.d > ∑_{i=1}^{k} (vi−1.d + w (vi−1, vi)) = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w (vi−1, vi) We have finally that Because ∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d, we have that ∑_{i=1}^{k} w(vi−1, vi) < 0, i.e., a negative weight cycle!!! 38 / 108
  • 107. Some comments Comments vi.d ≥ vi−1.d + w(vi−1, vi) for i = 1, ..., k − 1 because when Relax(vi−1, vi, w) was called, there was an equality, and vi−1.d may have gotten smaller by further calls to Relax. vk.d > vk−1.d + w(vk−1, vk) before the last call to Relax because that last call changed vk.d. 39 / 108
  • 109. Proof of existence of a unique path from source s Let Gπ be the predecessor subgraph. For any v in Vπ, the graph Gπ contains at least one path from s to v. Assume now that you have two paths from s to v: [Figure: two paths from s to v that split and then reconverge at a vertex z, reached via x on one path and via y on the other — impossible!] This would require some vertex z with two distinct predecessors x and y; but z.π stores a single value, so z.π = x = y, i.e., x = y. Contradiction!!! Therefore, we have only one path and Gπ is a tree. 40 / 108
  • 115. Lemma 24.17 Lemma 24.17 Assume the same conditions as before. Call Initialize and then repeatedly call Relax until v.d = δ(s, v) for all v in V . Then Gπ is a shortest-path tree rooted at s. Proof For all v in V , there is a unique simple path p from s to v in Gπ (Lemma 24.16). We want to prove that it is a shortest path from s to v in G. 42 / 108
  • 118. Proof Then Let p =< v0, v1, ..., vk >, where v0 = s and vk = v. Thus, we have vi.d = δ(s, vi). And reasoning as before vi.d ≥ vi−1.d + w(vi−1, vi) (5) This implies that w(vi−1, vi) ≤ δ(s, vi) − δ(s, vi−1) (6) 43 / 108
  • 121. Proof Then, we sum over all the weights: w (p) = ∑_{i=1}^{k} w (vi−1, vi) ≤ ∑_{i=1}^{k} (δ (s, vi) − δ (s, vi−1)) = δ (s, vk) − δ (s, v0) (the sum telescopes) = δ (s, vk) (since δ (s, v0) = δ (s, s) = 0) Finally So equality holds and p is a shortest path, because we also have δ (s, vk) ≤ w (p). 44 / 108
  • 127. Again the Bellman-Ford Algorithm Bellman-Ford allows negative weight edges and “detects” reachable negative weight cycles. Bellman-Ford(G, s, w) 1 Initialize(G, s) 2 for i = 1 to |V [G]| − 1 3 for each (u, v) ∈ E [G] 4 Relax(u, v, w) The decision part of the dynamic programming for u.d and u.π: 5 for each (u, v) ∈ E [G] 6 if v.d > u.d + w (u, v) 7 return false 8 return true Observation If Bellman-Ford has not converged after |V (G)| − 1 iterations, then there cannot be a shortest-path tree, so there must be a negative weight cycle. 46 / 108
  • 129. Example Red arrows represent v.π. [Figure: weighted digraph with source s and vertices a–h; some edge weights are negative] 47 / 108
  • 130. Example Whenever we have v.d = ∞ and u.d = ∞, no change is made; in the first iteration only edges leaving s produce finite estimates. [Figure: after the first iteration] 48 / 108
  • 131. Example We keep updating in the second iteration. [Figure: after the second iteration] 49 / 108
  • 132. Example During this iteration we notice that e can be updated to a better value. [Figure: e updated] 50 / 108
  • 133. Example We keep updating in the third iteration, and d and g are also updated. [Figure: after the third iteration] 51 / 108
  • 134. Example We keep updating in the fourth iteration. [Figure: after the fourth iteration] 52 / 108
  • 135. Example f is updated during this iteration. [Figure: f updated] 53 / 108
  • 137. v.d == δ(s, v) upon termination Lemma 24.2 Assuming no negative weight cycles reachable from s, v.d == δ(s, v) holds upon termination for all vertices v reachable from s. Proof Consider a shortest path p =< v0, v1, ..., vk >, where v0 = s and vk = v. We know the following Shortest paths are simple, so p has at most |V | − 1 edges; thus k ≤ |V | − 1. 55 / 108
  • 140. Proof Something Notable Claim: vi.d = δ(s, vi) holds after the ith pass over the edges. In the algorithm Each of the |V | − 1 iterations of the for loop (Lines 2–4) relaxes all edges in E. Proof follows by induction on i Among the edges relaxed in the ith iteration, for i = 1, 2, ..., k, is (vi−1, vi); so by the convergence property, vi.d = δ(s, vi) holds after the ith pass. By Lemma 24.11, once vi.d = δ(s, vi) holds, it continues to hold. 56 / 108
  • 144. Finding a path between s and v Corollary 24.3 Let G = (V , E) be a weighted, directed graph with source vertex s and weight function w : E → R, and assume that G contains no negative-weight cycles that are reachable from s. Then, for each vertex v ∈ V , there is a path from s to v if and only if Bellman-Ford terminates with v.d < ∞ when it is run on G. Proof Left to you... 57 / 108
  • 147. Correctness of Bellman-Ford Claim: the algorithm returns the correct value This is part of Theorem 24.4; the other parts of the theorem follow easily from earlier results. Case 1: There is no reachable negative weight cycle. Upon termination, we have for all (u, v): v.d = δ(s, v) by Lemma 24.2 (above) if v is reachable; v.d = δ(s, v) = ∞ otherwise. 59 / 108
  • 150. Correctness of Bellman-Ford Then, we have that v.d = δ(s, v) ≤ δ(s, u) + w(u, v) ≤ u.d + w (u, v) Remember: 5 for each (u, v) ∈ E [G] 6 if v.d > u.d + w (u, v) 7 return false Thus The test in line 6 never fires, so the algorithm returns true. 60 / 108
  • 155. Correctness of Bellman-Ford Case 2: There exists a reachable negative weight cycle c =< v0, v1, ..., vk >, where v0 = vk. Then, we have: ∑_{i=1}^{k} w(vi−1, vi) < 0. (7) Suppose the algorithm returns true Then vi.d ≤ vi−1.d + w(vi−1, vi) for i = 1, ..., k, because Relax did not change any vi.d. 61 / 108
  • 157. Correctness of Bellman-Ford Thus ∑_{i=1}^{k} vi.d ≤ ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi)) = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi) Since v0 = vk Each vertex in c appears exactly once in each of the summations ∑_{i=1}^{k} vi.d and ∑_{i=1}^{k} vi−1.d, thus ∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d (8) 62 / 108
  • 159. Correctness of Bellman-Ford By Corollary 24.3 vi.d is finite for i = 1, 2, ..., k; cancelling the equal sums on both sides leaves 0 ≤ ∑_{i=1}^{k} w(vi−1, vi). Hence This contradicts (Eq. 7). Thus, the algorithm returns false. 63 / 108
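To see Case 2 in action, here is a small hypothetical instance (not the slides' example) fed to the bellman_ford sketch from earlier; the cycle b → c → b has weight 2 + (−3) = −1 < 0, so the final pass finds a still-relaxable edge and the function returns false:

```python
# Hypothetical 4-vertex instance: the cycle b -> c -> b weighs -1.
graph = {'s': ['b'], 'b': ['c'], 'c': ['b', 't'], 't': []}
w = {('s', 'b'): 1, ('b', 'c'): 2, ('c', 'b'): -3, ('c', 't'): 1}
ok, d, pi = bellman_ford(graph, w, 's')
print(ok)  # False: the check in the final pass fires on edge (b, c)
```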
  • 162. Another Example Something Notable By relaxing the edges of a weighted DAG G = (V , E) according to a topological sort of its vertices, we can compute shortest paths from a single source in Θ (V + E) time. Why? Shortest paths are always well defined in a DAG: even if there are negative-weight edges, no negative-weight cycles can exist. 65 / 108
  • 164. Single-source Shortest Paths in Directed Acyclic Graphs In a DAG, we can do the following (Complexity Θ (V + E)) DAG-SHORTEST-PATHS(G, w, s) 1 Topologically sort the vertices of G 2 Initialize(G, s) 3 for each u ∈ V [G], taken in topologically sorted order 4 for each v ∈ Adj [u] 5 Relax(u, v, w) 66 / 108
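A minimal Python sketch of this procedure, under the same dict-based graph representation as before; topological_sort and dag_shortest_paths are illustrative names, and the recursive DFS is a simplification suitable for small graphs:

```python
import math

def topological_sort(graph):
    """Order the vertices of a DAG so that every edge goes left to
    right (reverse DFS finish order)."""
    order, visited = [], set()
    def visit(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                visit(v)
        order.append(u)          # u finishes after all its descendants
    for u in graph:
        if u not in visited:
            visit(u)
    return list(reversed(order))

def dag_shortest_paths(graph, w, s):
    """Single-source shortest paths on a DAG in O(V + E): relax the
    out-edges of each vertex in topological order."""
    d = {v: math.inf for v in graph}
    pi = {v: None for v in graph}
    d[s] = 0
    for u in topological_sort(graph):
        for v in graph[u]:       # each edge is relaxed exactly once
            if d[u] + w[(u, v)] < d[v]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    return d, pi
```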
  • 167. It is based in the following theorem Theorem 24.5 If a weighted, directed graph G = (V , E) has source vertex s and no cycles, then at the termination of the DAG-SHORTEST-PATHS procedure, v.d = δ (s, v) for all vertices v ∈ V , and the predecessor subgraph Gπ is a shortest-paths tree. Proof Left to you... 67 / 108
  • 169. Complexity We have that 1 Line 1 takes Θ (V + E). 2 Line 2 takes Θ (V ). 3 Lines 3–5 make one iteration per vertex: 1 In addition, the for loop in lines 4–5 relaxes each edge exactly once (remember the topological sorting). 2 Making each iteration of the inner loop Θ (1). Therefore The total running time is Θ (V + E). 68 / 108
  • 176. Example After initialization; b is the source. [Figure: DAG on vertices a–f with edge weights 5, 2, 3, 7, 4, 2, −1, −2, 6, 1] 70 / 108
  • 177. Example a is the first vertex in the topological sort, but no update is done since a.d = ∞. 71 / 108
  • 178. Example b is the next one. 72 / 108
  • 179. Example c is the next one. 73 / 108
  • 180. Example d is the next one. 74 / 108
  • 181. Example e is the next one. 75 / 108
  • 182. Example Finally, f is the last one. 76 / 108
  • 184. Dijkstra’s Algorithm It is a greedy based method Ideas? Yes We need to keep track of the greedy choice!!! 78 / 108
  • 186. Dijkstra’s Algorithm Assume no negative weight edges 1 Dijkstra’s algorithm maintains a set S of vertices whose shortest path from s has been determined. 2 It repeatedly selects u in V − S with the minimum shortest path estimate (greedy choice). 3 It stores V − S in a priority queue Q. 79 / 108
  • 189. Dijkstra’s algorithm DIJKSTRA(G, w, s) 1 Initialize(G, s) 2 S = ∅ 3 Q = V [G] 4 while Q ≠ ∅ 5 u = Extract-Min(Q) 6 S = S ∪ {u} 7 for each vertex v ∈ Adj [u] 8 Relax(u, v, w) 80 / 108
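A common concrete realization, sketched in Python under the same dict-based representation: the priority queue Q is a binary heap, and instead of a Decrease-Key operation the sketch pushes a fresh entry on every successful relaxation and lazily skips stale entries at extraction; the names are illustrative, not the textbook's interface.

```python
import heapq
import math

def dijkstra(graph, w, s):
    """graph: dict vertex -> list of out-neighbors; every weight in w
    must be non-negative. Returns the estimates d and predecessors pi."""
    d = {v: math.inf for v in graph}
    pi = {v: None for v in graph}
    d[s] = 0
    S = set()                     # vertices whose distance is final
    q = [(0, s)]
    while q:
        _, u = heapq.heappop(q)   # Extract-Min(Q)
        if u in S:
            continue              # stale entry, u already finished
        S.add(u)
        for v in graph[u]:        # relax every edge leaving u
            if d[u] + w[(u, v)] < d[v]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
                heapq.heappush(q, (d[v], v))
    return d, pi
```

With a binary heap this runs in O((V + E) log V); a Fibonacci heap would give the tighter O(V log V + E) bound discussed later in the complexity analysis.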
  • 193. Example The graph after initialization. [Figure: weighted digraph with source s and vertices a–h, all edge weights non-negative] 82 / 108
  • 194. Example We use red edges to represent v.π and black vertices to represent the set S. 83 / 108
  • 195. Example s ← Extract-Min(Q); update the elements adjacent to s. 84 / 108
  • 196. Example a ← Extract-Min(Q); update the elements adjacent to a. 85 / 108
  • 197. Example e ← Extract-Min(Q); update the elements adjacent to e. 86 / 108
  • 198. Example b ← Extract-Min(Q); update the elements adjacent to b. 87 / 108
  • 199. Example h ← Extract-Min(Q); update the elements adjacent to h. 88 / 108
  • 200. Example c ← Extract-Min(Q); no update. 89 / 108
  • 201. Example d ← Extract-Min(Q); no update. 90 / 108
  • 202. Example g ← Extract-Min(Q); no update. 91 / 108
  • 203. Example f ← Extract-Min(Q); no update. 92 / 108
  • 205. Correctness of Dijkstra’s algorithm Theorem 24.6 Upon termination, u.d = δ(s, u) for all u in V (assuming non-negative weights). Proof By Lemma 24.11, once u.d = δ(s, u) holds, it continues to hold. We are going to use the following loop invariant At the start of each iteration of the while loop of lines 4–8, v.d = δ (s, v) for each vertex v ∈ S. 94 / 108
  • 208. Proof Thus We are going to prove that, for each u in V , u.d = δ(s, u) when u is inserted in S. Initialization Initially S = ∅, thus the invariant is trivially true. Maintenance We want to show that in each iteration u.d = δ (s, u) for the vertex added to set S. For this, note the following Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; in addition, S ≠ ∅ before u is added. 95 / 108
  • 213. Proof Use contradiction Suppose not: let u be the first vertex such that u.d ≠ δ (s, u) when it is inserted in S. Note the following Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; thus S ≠ ∅ just before u is inserted (in fact s ∈ S). 96 / 108
  • 215. Proof Now Note that there exist a path from s to u, for otherwise u.d = δ(s, u) = ∞ by corollary 24.12. “If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.” Thus exist a shortest path p Between s and u. Observation Prior to adding u to S, path p connects a vertex in S, namely s, to a vertex in V − S, namely u. 97 / 108
Proof
Consider the following
Let y be the first vertex along p (going from s to u) such that y ∈ V − S, and let x ∈ S be y's predecessor along p.
98 / 108
Proof (continuation)
Then the shortest path from s to u decomposes as s ⇝ x → y ⇝ u, where p1 denotes the subpath s ⇝ x and p2 the subpath y ⇝ u.
[Figure: the set S contains s and x; y and u lie in V − S; p1 runs inside S from s to x, the single edge (x, y) crosses the boundary, and p2 runs from y to u.]
Remark: either of the paths p1 or p2 may have no edges.
99 / 108
Proof
Claim
y.d = δ(s, y) when u is added to S.
Proof of the claim
1 Observe that x ∈ S.
2 In addition, we know that u is the first vertex for which u.d ≠ δ(s, u) when it is added to S.
100 / 108
Proof
Then
In addition, because u is the first offending vertex, we had x.d = δ(s, x) when x was inserted into S.
Then, we relaxed the edge between x and y
Edge (x, y) was relaxed at that time!
101 / 108
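For reference, a minimal Python sketch of the relaxation step the argument relies on (the dictionary-based representation of d and π is an assumption of this sketch, not part of the slides):

def relax(u, v, w, d, pi):
    # d: current shortest-path estimates, pi: current predecessors,
    # w: dict mapping edges (u, v) to nonnegative weights.
    # If the path through u improves the estimate for v, update v.
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u

When relax(x, y, w, d, pi) runs with x.d already equal to δ(s, x), the convergence property on the next slide guarantees y.d = δ(s, y) from then on.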
Proof
Remember? Convergence property (Lemma 24.14)
Let p be a shortest path from s to v of the form s ⇝ u → v, where p1 is the subpath s ⇝ u. If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.
Then
Using this convergence property (with x and y in the roles of u and v):
y.d = δ(s, y) = δ(s, x) + w(x, y) (9)
The claim is implied!!!
102 / 108
Proof
Now
1 We obtain a contradiction, thereby proving that u.d = δ(s, u).
2 y appears before u on a shortest path from s to u.
3 In addition, all edge weights are nonnegative.
4 Then δ(s, y) ≤ δ(s, u), thus
y.d = δ(s, y) ≤ δ(s, u) ≤ u.d
The last inequality is due to the Upper-Bound Property (Lemma 24.11).
103 / 108
Proof
Then
But because both vertices u and y were in V − S when u was chosen in line 5, we have u.d ≤ y.d. Thus
y.d = δ(s, y) = δ(s, u) = u.d
Consequently
We have that u.d = δ(s, u), which contradicts our choice of u.
Conclusion: u.d = δ(s, u) when u is added to S, and the equality is maintained afterwards.
104 / 108
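Putting the last two slides together, the contradiction can be written as a single chain:

y.d = δ(s, y) ≤ δ(s, u) ≤ u.d ≤ y.d

Since the chain starts and ends at y.d, every inequality is forced to be an equality, so in particular u.d = δ(s, u).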
Finally
Termination
At termination, Q = ∅; thus V − S = ∅, or equivalently S = V.
Thus u.d = δ(s, u) for all vertices u ∈ V!!!
105 / 108
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
106 / 108
Complexity
Running time is
O(V²) using a linear array for the priority queue.
O((V + E) log V) using a binary heap.
O(V log V + E) using a Fibonacci heap.
107 / 108
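To make the binary-heap bound concrete, here is a minimal Python sketch (the adjacency-list representation and the names are assumptions of this sketch, not from the slides): heapq supplies the extract-min, each vertex is settled once, and each edge is relaxed at most once, giving O((V + E) log V) overall.

import heapq
import math

def dijkstra(graph, s):
    # graph: dict mapping each vertex to a list of (neighbor, weight)
    # pairs with nonnegative weights; returns shortest distances from s.
    d = {v: math.inf for v in graph}
    d[s] = 0
    pq = [(0, s)]          # min-heap of (estimate, vertex): the queue Q
    settled = set()        # the set S of the proof
    while pq:              # the while loop of lines 4-8
        du, u = heapq.heappop(pq)  # extract-min: line 5
        if u in settled:
            continue       # stale heap entry (lazy deletion)
        settled.add(u)
        for v, w in graph[u]:      # relax each edge leaving u
            if d[v] > du + w:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

# Example usage:
# dijkstra({'s': [('t', 10), ('y', 5)], 't': [('x', 1)],
#           'y': [('t', 3), ('x', 9)], 'x': []}, 's')

With lazy deletion the heap can hold up to E entries, so the bound is more precisely O((V + E) log E), which is O((V + E) log V) for simple graphs.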
Exercises
From Cormen’s book, solve:
24.1-1
24.1-3
24.1-4
23.3-1
23.3-3
23.3-4
23.3-6
23.3-7
23.3-8
23.3-10
108 / 108