Analysis of Algorithms
Single Source Shortest Path
Andres Mendez-Vazquez
November 9, 2015
Outline
1 Introduction
Introduction and Similar Problems
2 General Results
Optimal Substructure Properties
Predecessor Graph
The Relaxation Concept
The Bellman-Ford Algorithm
Properties of Relaxation
3 Bellman-Ford Algorithm
Predecessor Subgraph for Bellman
Shortest Path for Bellman
Example
Bellman-Ford finds the Shortest Path
Correctness of Bellman-Ford
4 Directed Acyclic Graphs (DAG)
Relaxing Edges
Example
5 Dijkstra’s Algorithm
Dijkstra’s Algorithm: A Greedy Method
Example
Correctness Dijkstra’s algorithm
Complexity of Dijkstra’s Algorithm
6 Exercises
Introduction
Problem description
Given a source vertex in a weighted, directed graph, we want to compute
a shortest path to each possible destination (similar to BFS).
Thus
The algorithm will compute a shortest-path tree (again, similar to BFS).
Similar Problems
Single destination shortest paths problem
Find a shortest path to a given destination vertex t from each vertex.
By reversing the direction of each edge in the graph, we can reduce
this problem to a single source problem.
Reverse Directions
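The reduction can be sketched in a few lines of Python; `reverse_graph` and the `(u, v, w)` triple representation are illustrative assumptions, not part of the slides:

```python
def reverse_graph(edges):
    """Reverse every directed edge, keeping its weight.

    A single-destination problem on the original graph becomes a
    single-source problem (from the old destination t) on the
    reversed graph.
    """
    return [(v, u, w) for (u, v, w) in edges]
```

For example, `reverse_graph([("a", "b", 3)])` yields `[("b", "a", 3)]`.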
Similar Problems
Single pair shortest path problem
Find a shortest path from u to v for given vertices u and v.
If we solve the single source problem with source vertex u, we also
solve this problem.
Similar Problems
All pairs shortest paths problem
Find a shortest path from u to v for every pair of vertices u and v.
Optimal Substructure Property
Lemma 24.1
Given a weighted, directed graph G = (V, E), let p = ⟨v1, v2, ..., vk⟩
be a shortest path from v1 to vk. Then,
pij = ⟨vi, vi+1, ..., vj⟩ is a shortest path from vi to vj, where
1 ≤ i ≤ j ≤ k.
We have then
So, we have the optimal substructure property.
Bellman-Ford’s algorithm uses dynamic programming.
Dijkstra’s algorithm uses the greedy approach.
In addition, we have that
Let δ(u, v) = the weight of a shortest path from u to v.
Optimal Substructure Property
Corollary
Let p be a shortest path from s to v that decomposes as
p = p1 ∪ {(u, v)}, where p1 is a shortest path from s to u.
Then δ(s, v) = δ(s, u) + w(u, v).
The Lower Bound Between Nodes
Lemma 24.10
Let s ∈ V . For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).
Now
Then
We have the basic concepts.
Still
We need to define an important one:
The Predecessor Graph
This will facilitate the proof of several concepts.
Predecessor Graph
Representing shortest paths
For this we use the predecessor subgraph.
It is defined slightly differently from the one in Breadth-First Search.
Definition of a Predecessor Subgraph
The predecessor subgraph is Gπ = (Vπ, Eπ), where
Vπ = {v ∈ V | v.π ≠ NIL} ∪ {s}
Eπ = {(v.π, v) | v ∈ Vπ − {s}}
Properties
The predecessor subgraph Gπ forms a tree rooted at s (a shortest-paths
tree once the d values have converged).
The edges in Eπ are called tree edges.
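Reading a shortest path out of the predecessor subgraph amounts to following the π pointers from v back to s. A minimal sketch in Python, assuming `pi` is a dict mapping each vertex to its predecessor (with `None` playing the role of NIL); the function name is illustrative:

```python
def build_path(pi, s, v):
    """Follow predecessor pointers from v back to s.

    Returns the vertex sequence s -> ... -> v, or None if v is
    unreachable (its predecessor chain never reaches s).
    """
    path = [v]
    while v != s:
        v = pi[v]
        if v is None:        # fell off the tree: no path was recorded
            return None
        path.append(v)
    return path[::-1]        # reverse: we collected it v -> ... -> s
```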
The Relaxation Concept
We are going to use certain functions in all the algorithms
Initialize
Here, the basic variables of the nodes in a graph are initialized:
v.d = the distance from the source s.
v.π = the predecessor node during the search for the shortest path.
Changing v.d
This is done in the Relaxation algorithm.
Initialize and Relaxation
The algorithms keep track of v.d and v.π, initialized as follows
Initialize(G, s)
1 for each v ∈ V [G]
2 v.d = ∞
3 v.π = NIL
4 s.d = 0
These values are changed when an edge (u, v) is relaxed.
Relax(u, v, w)
1 if v.d > u.d + w(u, v)
2 v.d = u.d + w(u, v)
3 v.π = u
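The two routines translate directly into Python. A minimal sketch, with `vertices` as a set of vertex names and `w` as a dict keyed by edge (both representations are illustrative assumptions):

```python
import math

def initialize(vertices, s):
    """Set d = infinity and pi = NIL (None) everywhere, then d[s] = 0."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """Lower d[v] if the path through u is shorter; record u as predecessor."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u
```

For example, after `d, pi = initialize({"s", "a"}, "s")`, a call `relax("s", "a", {("s", "a"): 5}, d, pi)` sets `d["a"]` to 5 and `pi["a"]` to `"s"`.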
How are these functions used?
These functions are used to
1 Build a predecessor graph Gπ.
2 Integrate the shortest paths into that predecessor graph, using the
field d.
The Bellman-Ford Algorithm
Bellman-Ford handles negative weight edges. It will “detect”
reachable negative weight cycles.
Bellman-Ford(G, s, w)
1 Initialize(G, s)
2 for i = 1 to |V [G]| − 1
3 for each (u, v) ∈ E [G]
4 Relax(u, v, w)
5 for each (u, v) ∈ E [G]
6 if v.d > u.d + w (u, v)
7 return false
8 return true
Time Complexity
O (VE)
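The pseudocode above maps line by line onto Python. A sketch, assuming `edges` is a list of (u, v) pairs and `w` a dict of edge weights (the names are illustrative):

```python
import math

def bellman_ford(vertices, edges, w, s):
    """Return (ok, d, pi): ok is False iff a negative-weight cycle
    is reachable from s; d holds distances, pi predecessors."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 passes over all edges
        for (u, v) in edges:
            if d[v] > d[u] + w[(u, v)]:     # Relax(u, v, w)
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    for (u, v) in edges:                    # any edge still relaxable?
        if d[v] > d[u] + w[(u, v)]:
            return False, d, pi             # reachable negative cycle
    return True, d, pi
```

Note that `math.inf + w` stays `math.inf` in Python, so unreachable vertices never trigger a spurious relaxation.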
Properties of Relaxation
Some properties
v.d, if not ∞, is the length of some path from s to v.
v.d either stays the same or decreases with time.
Therefore
If v.d = δ(s, v) at any time, this holds thereafter.
Something nice
Note that v.d ≥ δ(s, v) always (Upper-Bound Property).
After i iterations of relaxing all edges (u, v), if the shortest path to
v has i edges, then v.d = δ(s, v).
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.
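The claim that i rounds of relaxing every edge suffice for shortest paths with i edges can be checked directly. In the sketch below, the edges of a chain are listed in reverse (worst-case) order, so each round propagates the distance exactly one hop; `relax_rounds` and the vertex names are illustrative, not from the slides:

```python
import math

def relax_rounds(vertices, edges, w, s, rounds):
    """Run `rounds` full passes of relaxation and return the d values."""
    d = {v: math.inf for v in vertices}
    d[s] = 0
    for _ in range(rounds):
        for (u, v) in edges:
            if d[v] > d[u] + w[(u, v)]:
                d[v] = d[u] + w[(u, v)]
    return d
```

On the chain s → a → b → c with unit weights and edges listed as (b, c), (a, b), (s, a), two rounds leave d[c] = ∞, while three rounds give d[c] = 3 = δ(s, c), matching the property.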
Properties of Relaxation
Lemma 24.10 (Triangle inequality)
Let G = (V, E) be a weighted, directed graph with weight function
w : E → R and source vertex s. Then, for all edges (u, v) ∈ E, we have:
δ (s, v) ≤ δ (s, u) + w (u, v) (1)
Proof
1 Suppose that p is a shortest path from source s to vertex v.
2 Then, p has no more weight than any other path from s to vertex v.
3 In particular, p has no more weight than the path that follows a
shortest path from s to u and then takes edge (u, v).
Properties of Relaxation
Lemma 24.11 (Upper Bound Property)
Let G = (V , E) be a weighted, directed graph with weight function
w : E → R. Consider any algorithm in which v.d, and v.π are first
initialized by calling Initialize(G, s) (s is the source), and are only
changed by calling Relax.
Then, we have that v.d ≥ δ(s, v) ∀v ∈ V [G], and this invariant is
maintained over any sequence of relaxation steps on the edges of G.
Moreover, once v.d = δ(s, v), it never changes.
Proof of Lemma
Loop Invariant
The proof can be done by induction over the number of relaxation steps,
with the loop invariant:
v.d ≥ δ (s, v) for all v ∈ V
For the Basis
v.d ≥ δ (s, v) is true after initialization, since:
v.d = ∞, making v.d ≥ δ (s, v) for all v ∈ V − {s}.
For s, s.d = 0 ≥ δ (s, s).
For the inductive step, consider the relaxation of an edge (u, v)
By the inductive hypothesis, we have that x.d ≥ δ (s, x) for all x ∈ V prior
to relaxation.
Thus
If you call Relax(u, v, w), it may change v.d
v.d = u.d + w(u, v)
≥ δ(s, u) + w(u, v) by inductive hypothesis
≥ δ (s, v) by the triangle inequality
Thus, the invariant is maintained.
Properties of Relaxation
Proof of lemma 24.11 cont...
To prove that the value v.d never changes once v.d = δ (s, v):
Note the following: once v.d = δ (s, v), it cannot decrease, because
v.d ≥ δ (s, v) always holds, and it cannot increase, because relaxation
never increases d.
Next, we have
Corollary 24.12 (No-path property)
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.
Proof
By the upper-bound property, we always have ∞ = δ (s, v) ≤ v.d. Then,
v.d = ∞.
More Lemmas
Lemma 24.13
Let G = (V , E) be a weighted, directed graph with weight function
w : E → R. Then, immediately after relaxing edge (u, v) by calling
Relax(u, v, w) we have v.d ≤ u.d + w(u, v).
Proof
Consider the moment just prior to relaxing edge (u, v)
Case 1: if we have that v.d > u.d + w (u, v)
Then, v.d = u.d + w (u, v) after relaxation.
Now, Case 2
If v.d ≤ u.d + w (u, v) just before relaxation, then:
neither u.d nor v.d changes.
Thus, afterwards
v.d ≤ u.d + w (u, v)
More Lemmas
Lemma 24.14 (Convergence property)
Let p be a shortest path from s to v that decomposes as
p = p1 ∪ {(u, v)}, where p1 is a shortest path from s to u.
If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then
v.d = δ(s, v) holds at all times after the call.
Proof:
By the upper-bound property, if u.d = δ (s, u) holds at some moment before
relaxing edge (u, v), it continues to hold afterwards.
Proof
Thus, after relaxing (u, v)
v.d ≤ u.d + w(u, v) by Lemma 24.13
= δ (s, u) + w (u, v)
= δ (s, v) by the corollary of Lemma 24.1
Now
By Lemma 24.11, v.d ≥ δ(s, v), so v.d = δ(s, v).
Predecessor Subgraph for Bellman
Lemma 24.16
Assume a given graph G that has no negative-weight cycles reachable
from s. Then, after the initialization, the predecessor subgraph Gπ is
always a tree with root s, and any sequence of relaxation steps on edges
of G maintains this property as an invariant.
Proof
It is necessary to prove two things in order to get a tree:
1 Gπ is acyclic.
2 There exists a unique path from source s to each vertex in Vπ.
34 / 108
Proof of Gπ is acyclic
First
Suppose there exists a cycle c = ⟨v0, v1, ..., vk⟩, where v0 = vk. We have
vi.π = vi−1 for i = 1, 2, ..., k.
Second
Assume the relaxation of (vk−1, vk) created the cycle. We are going to show
that the cycle has a negative weight.
We claim that
The cycle must be reachable from s (Why?)
35 / 108
Proof
First
Each vertex on the cycle has a non-NIL predecessor, and so each vertex on
it was assigned a finite shortest-path estimate when it was assigned its
non-NIL value.
Then
By the upper-bound property, each vertex on the cycle has a finite
shortest-path weight.
Thus
The cycle is reachable from s.
36 / 108
Proof
Before the call to Relax(vk−1, vk, w):

vi.π = vi−1 for i = 1, ..., k − 1.   (2)

Thus
This implies vi.d was last updated by

vi.d = vi−1.d + w(vi−1, vi)   (3)

for i = 1, ..., k − 1 (because Relax updates π).
This implies

vi.d ≥ vi−1.d + w(vi−1, vi)   (4)

for i = 1, ..., k − 1 (before the relaxation, by Lemma 24.13).
37 / 108
Proof
Thus
Because vk.π is changed by the call to Relax (immediately before),
vk.d > vk−1.d + w(vk−1, vk), we have that:

∑_{i=1}^{k} vi.d > ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi))
                 = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi)

We have finally that
Because ∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d, we have that
∑_{i=1}^{k} w(vi−1, vi) < 0, i.e., a negative-weight cycle!!!
38 / 108
Some comments
Comments
vi.d ≥ vi−1.d + w(vi−1, vi) for i = 1, ..., k − 1 because when
Relax(vi−1, vi, w) was called, there was an equality, and vi−1.d may
have gotten smaller by further calls to Relax.
vk.d > vk−1.d + w(vk−1, vk) before the last call to Relax because
that last call changed vk.d.
39 / 108
Proof of existence of a unique path from source s
Let Gπ be the predecessor subgraph.
So, for any v in Vπ, the graph Gπ contains at least one path from s
to v.
Assume now that you have two paths:
[Figure: two paths from s through u that split at x and y and rejoin at
z before reaching v — impossible!]
This could only happen if some vertex z had two predecessors x and y with
x ≠ y; but z.π = x and z.π = y force x = y!!!
Contradiction!!! Therefore, we have only one path and Gπ is a tree.
40 / 108
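Because each vertex stores a single predecessor, following π pointers from any v recovers the one path in Gπ. A small sketch (the function name and the example π map are ours):

```python
def follow_pi(pi, s, v):
    """Walk predecessor pointers from v back to s.
    The path is unique because every vertex has at most one pi value."""
    path = [v]
    while path[-1] != s:
        path.append(pi[path[-1]])
    return list(reversed(path))

# A predecessor map that forms a tree rooted at s:
pi = {'s': None, 'u': 's', 'x': 'u', 'z': 'x', 'v': 'z'}
print(follow_pi(pi, 's', 'v'))  # ['s', 'u', 'x', 'z', 'v']
```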
Lemma 24.17
Lemma 24.17
Under the same conditions as before, suppose we call Initialize and then
repeatedly call Relax until v.d = δ(s, v) for all v in V . Then Gπ is a
shortest-path tree rooted at s.
Proof
For all v in V , there is a unique simple path p from s to v in Gπ
(Lemma 24.16).
We want to prove that it is a shortest path from s to v in G.
42 / 108
Proof
Then
Let p = ⟨v0, v1, ..., vk⟩, where v0 = s and vk = v. Thus, we have
vi.d = δ(s, vi).
And reasoning as before

vi.d ≥ vi−1.d + w(vi−1, vi)   (5)

This implies that

w(vi−1, vi) ≤ δ(s, vi) − δ(s, vi−1)   (6)

43 / 108
Proof
Then, we sum over all weights

w(p) = ∑_{i=1}^{k} w(vi−1, vi)
     ≤ ∑_{i=1}^{k} (δ(s, vi) − δ(s, vi−1))
     = δ(s, vk) − δ(s, v0)      (the sum telescopes)
     = δ(s, vk)                 (since δ(s, v0) = δ(s, s) = 0)

Finally
So, equality holds and p is a shortest path because δ(s, vk) ≤ w(p).
44 / 108
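The telescoping step in the sum above can be checked numerically; the δ values below are hypothetical, chosen only to illustrate the collapse:

```python
# Hypothetical delta(s, v_i) values along a path p = <v0, ..., vk>, v0 = s.
delta = [0, 2, 5, 6]

# The middle sum telescopes: the sum of (delta[i] - delta[i-1]) collapses
# to delta[k] - delta[0], which equals delta(s, v_k) because delta[0] = 0.
total = sum(delta[i] - delta[i - 1] for i in range(1, len(delta)))
print(total)  # 6
```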
Again the Bellman-Ford Algorithm
Bellman-Ford can handle negative-weight edges. It will “detect”
reachable negative-weight cycles.
Bellman-Ford(G, s, w)
1 Initialize(G, s)
2 for i = 1 to |V [G]| − 1
3    for each (u, v) ∈ E [G]
4       Relax(u, v, w)  // The decision part of the dynamic
                        // programming for v.d and v.π.
5 for each (u, v) ∈ E [G]
6    if v.d > u.d + w (u, v)
7       return false
8 return true
Observation
If Bellman-Ford has not converged after |V (G)| − 1 iterations, then
there cannot be a shortest-path tree, so there must be a negative-weight
cycle.
46 / 108
Example
Red arrows represent v.π
[Figure: the example graph with source s and vertices a–h; edge weights
include 5, −4, 5, 3, 3, 6, −2, 1, 5, −2, 2, 7, −2, 10, 3]
47 / 108
Example
Here, whenever v.d = ∞ and u.d = ∞, no change is done
[Figure: graph state after the first updates]
48 / 108
Example
Here, we keep updating in the second iteration
[Figure: graph state during the second iteration]
49 / 108
Example
Here, during this iteration we notice that e can be updated to a better value
[Figure: graph state after updating e]
50 / 108
Example
Here, we keep updating in the third iteration, and d and g are also updated
[Figure: graph state after the third iteration]
51 / 108
Example
Here, we keep updating in the fourth iteration
[Figure: graph state during the fourth iteration]
52 / 108
Example
Here, f is updated during this iteration
[Figure: final graph state with f updated]
53 / 108
v.d == δ(s, v) upon termination
Lemma 24.2
Assuming no negative-weight cycles reachable from s, v.d == δ(s, v)
holds upon termination for all vertices v reachable from s.
Proof
Consider a shortest path p = ⟨v0, v1, ..., vk⟩, where v0 = s and
vk = v.
We know the following
Shortest paths are simple, so p has at most |V | − 1 edges; thus we have
k ≤ |V | − 1.
55 / 108
Proof
Something Notable
Claim: vi.d = δ(s, vi) holds after the ith pass over the edges.
In the algorithm
Each of the |V | − 1 iterations of the for loop (Lines 2–4) relaxes all edges
in E.
Proof follows by induction on i
Among the edges relaxed in the ith iteration, for i = 1, 2, ..., k, is (vi−1, vi).
By lemma 24.11, once vi.d = δ(s, vi) holds, it continues to hold.
56 / 108
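The induction can be watched pass by pass. The sketch below (our setup) uses a path graph whose edges are deliberately listed in the worst order, so that exactly one new vertex settles per pass:

```python
import math

# Path s -> v1 -> v2 -> v3, edges listed in the worst possible order,
# so that pass i is what settles v_i.d = delta(s, v_i).
edges = [('v2', 'v3'), ('v1', 'v2'), ('s', 'v1')]
w = {('s', 'v1'): 1, ('v1', 'v2'): 1, ('v2', 'v3'): 1}
d = {'s': 0, 'v1': math.inf, 'v2': math.inf, 'v3': math.inf}

snapshots = []
for _ in range(3):                 # |V| - 1 passes
    for u, v in edges:
        if d[u] + w[(u, v)] < d[v]:
            d[v] = d[u] + w[(u, v)]
    snapshots.append(dict(d))

# After pass i, v_i.d has converged, and later passes preserve it.
print(snapshots[0]['v1'], snapshots[1]['v2'], snapshots[2]['v3'])  # 1 2 3
```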
Finding a path between s and v
Corollary 24.3
Let G = (V , E) be a weighted, directed graph with source vertex s and
weight function w : E → R, and assume that G contains no
negative-weight cycles that are reachable from s. Then, for each vertex
v ∈ V , there is a path from s to v if and only if Bellman-Ford terminates
with v.d < ∞ when it is run on G.
Proof
Left to you...
57 / 108
Correctness of Bellman-Ford
Claim: The algorithm returns the correct value
Part of Theorem 24.4. Other parts of the theorem follow easily from
earlier results.
Case 1: There is no reachable negative-weight cycle.
Upon termination, we have for every vertex v:
v.d = δ(s, v)
by lemma 24.2 (last slide) if v is reachable, or v.d = δ(s, v) = ∞ otherwise.
59 / 108
Correctness of Bellman-Ford
Then, we have that

v.d = δ(s, v)
    ≤ δ(s, u) + w(u, v)
    ≤ u.d + w(u, v)

Remember:
5. for each (u, v) ∈ E [G]
6.    if v.d > u.d + w (u, v)
7.       return false
Thus
The algorithm returns true.
60 / 108
Correctness of Bellman-Ford
Case 2: There exists a reachable negative-weight cycle
c = ⟨v0, v1, ..., vk⟩, where v0 = vk.
Then, we have:

∑_{i=1}^{k} w(vi−1, vi) < 0.   (7)

Suppose the algorithm returns true
Then vi.d ≤ vi−1.d + w(vi−1, vi) for i = 1, ..., k because Relax did not
change any vi.d.
61 / 108
Correctness of Bellman-Ford
Thus

∑_{i=1}^{k} vi.d ≤ ∑_{i=1}^{k} (vi−1.d + w(vi−1, vi))
                 = ∑_{i=1}^{k} vi−1.d + ∑_{i=1}^{k} w(vi−1, vi)

Since v0 = vk
Each vertex in c appears exactly once in each of the summations
∑_{i=1}^{k} vi.d and ∑_{i=1}^{k} vi−1.d, thus

∑_{i=1}^{k} vi.d = ∑_{i=1}^{k} vi−1.d   (8)

62 / 108
Correctness of Bellman-Ford
By Corollary 24.3
vi.d is finite for i = 1, 2, ..., k, thus

0 ≤ ∑_{i=1}^{k} w(vi−1, vi).

Hence
This contradicts (Eq. 7). Thus, the algorithm returns false.
63 / 108
Another Example
Something Notable
By relaxing the edges of a weighted DAG G = (V , E) according to a
topological sort of its vertices, we can compute shortest paths from a
single source in Θ (V + E) time.
Why?
Shortest paths are always well defined in a DAG, since even if there are
negative-weight edges, no negative-weight cycles can exist.
65 / 108
Single-source Shortest Paths in Directed Acyclic Graphs
In a DAG, we can do the following (Complexity Θ (V + E))
DAG-SHORTEST-PATHS(G, w, s)
1 Topologically sort the vertices of G
2 Initialize(G, s)
3 for each u in V [G] in topologically sorted order
4    for each v ∈ Adj [u]
5       Relax(u, v, w)
66 / 108
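A runnable sketch of DAG-SHORTEST-PATHS using the standard-library graphlib for the topological sort (Python 3.9+; the function name and graph representation are ours):

```python
import math
from graphlib import TopologicalSorter

def dag_shortest_paths(adj, w, s):
    """Relax each vertex's outgoing edges once, in topological order."""
    # graphlib expects a predecessor map, so invert the adjacency lists.
    preds = {u: set() for u in adj}
    for u in adj:
        for v in adj[u]:
            preds[v].add(u)
    order = TopologicalSorter(preds).static_order()
    d = {v: math.inf for v in adj}
    pi = {v: None for v in adj}
    d[s] = 0
    for u in order:
        for v in adj[u]:
            if d[u] + w[(u, v)] < d[v]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    return d, pi

# A DAG with a negative edge -- still fine, since no cycles can exist:
adj = {'s': ['a', 'b'], 'a': ['b'], 'b': []}
w = {('s', 'a'): 2, ('s', 'b'): 6, ('a', 'b'): -3}
d, _ = dag_shortest_paths(adj, w, 's')
print(d['b'])  # -1
```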
It is based on the following theorem
Theorem 24.5
If a weighted, directed graph G = (V , E) has source vertex s and no
cycles, then at the termination of the DAG-SHORTEST-PATHS
procedure, v.d = δ (s, v) for all vertices v ∈ V , and the predecessor
subgraph Gπ is a shortest-paths tree.
Proof
Left to you...
67 / 108
Complexity
We have that
1 Line 1 takes Θ (V + E).
2 Line 2 takes Θ (V ).
3 Lines 3–5 make one iteration per vertex:
1 In addition, the for loop in lines 4–5 relaxes each edge exactly once
(remember the sorting).
2 Making each iteration of the inner loop Θ (1).
Therefore
The total running time is Θ (V + E).
68 / 108
Example
After initialization; b is the source
[Figure: the DAG with vertices a, b, c, d, e, f in topological order; edge
weights include 5, 2, 3, 7, 4, 2, −1, −2, 6, 1]
70 / 108
Example
a is the first in the topological sort, but no update is done
[Figure: graph state after processing a]
71 / 108
Example
b is the next one
[Figure: graph state after processing b]
72 / 108
Example
c is the next one
[Figure: graph state after processing c]
73 / 108
Example
d is the next one
[Figure: graph state after processing d]
74 / 108
Example
e is the next one
[Figure: graph state after processing e]
75 / 108
Example
Finally, f is the next one
[Figure: final graph state after processing f]
76 / 108
Dijkstra’s Algorithm
It is a greedy-based method
Ideas?
Yes
We need to keep track of the greedy choice!!!
78 / 108
Dijkstra’s Algorithm
Assume no negative-weight edges
1 Dijkstra’s algorithm maintains a set S of vertices whose
shortest path from s has been determined.
2 It repeatedly selects u in V − S with the minimum shortest-path
estimate (greedy choice).
3 It stores V − S in a min-priority queue Q.
79 / 108
Dijkstra’s algorithm
DIJKSTRA(G, w, s)
1 INITIALIZE(G, s)
2 S = ∅
3 Q = V [G]
4 while Q ≠ ∅
5    u = Extract-Min(Q)
6    S = S ∪ {u}
7    for each vertex v ∈ Adj [u]
8       Relax(u, v, w)
80 / 108
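A runnable sketch of the pseudocode above using Python's heapq as the min-priority queue; instead of a decrease-key operation, stale heap entries are simply skipped (a common implementation choice; names are ours):

```python
import heapq
import math

def dijkstra(adj, w, s):
    """Dijkstra's algorithm with a binary heap; nonnegative weights."""
    d = {v: math.inf for v in adj}
    pi = {v: None for v in adj}
    d[s] = 0
    S = set()                       # vertices whose shortest path is final
    pq = [(0, s)]                   # the queue Q of (estimate, vertex)
    while pq:
        _, u = heapq.heappop(pq)    # Extract-Min
        if u in S:
            continue                # stale entry, skip it
        S.add(u)
        for v in adj[u]:            # Relax every edge leaving u
            if d[u] + w[(u, v)] < d[v]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi

adj = {'s': ['a', 'b'], 'a': ['b'], 'b': ['c'], 'c': []}
w = {('s', 'a'): 1, ('s', 'b'): 4, ('a', 'b'): 2, ('b', 'c'): 1}
d, _ = dijkstra(adj, w, 's')
print(d)  # {'s': 0, 'a': 1, 'b': 3, 'c': 4}
```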
Example
The Graph After Initialization
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
82 / 108
Example
We use red edges to represent v.π and color black to represent the
set S
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
83 / 108
Example
s ←Extract-Min(Q) and update the elements adjacent to s
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
0
5
5
6
84 / 108
Example
a←Extract-Min(Q) and update the elements adjacent to a
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
0
5
5
6
9
85 / 108
Example
e←Extract-Min(Q) and update the elements adjacent to e
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
0
5
5
6
9
7
86 / 108
Example
b←Extract-Min(Q) and update the elements adjacent to b
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
0
5
5
6
9
7
16
87 / 108
Example
h←Extract-Min(Q) and update the elements adjacent to h
s b
d
a
c
e h
g
f5
4
5
3
3
6
2
1
5
2
2
7
2
10
3
0
5
5
6
9
7
16
9 10
88 / 108
Example
c ← Extract-Min(Q), with no updates needed
[Figure: c joins S; no estimates change]
89 / 108
Example
d ← Extract-Min(Q), with no updates needed
[Figure: d joins S; no estimates change]
90 / 108
Example
g ← Extract-Min(Q), with no updates needed
[Figure: g joins S; no estimates change]
91 / 108
Example
f ← Extract-Min(Q), with no updates needed
[Figure: f joins S; no estimates change. All vertices are now in S, and the red edges form the shortest-path tree]
92 / 108
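The extraction order in the example (vertices leave Q in nondecreasing order of their final d values) can be reproduced in code. The graph below is a small hypothetical stand-in, since the figure's exact edge weights are not reproduced here; any nonnegative-weight adjacency dict works.

```python
import heapq

def extract_order(graph, s):
    """Return the order in which Dijkstra finalizes (Extract-Mins) vertices.

    graph: dict mapping each vertex to a list of (neighbor, weight)
    pairs with nonnegative weights.
    """
    d = {v: float('inf') for v in graph}
    d[s] = 0
    pq, order, done = [(0, s)], [], set()
    while pq:
        du, u = heapq.heappop(pq)
        if u in done:
            continue                    # stale heap entry
        done.add(u)
        order.append(u)                 # u <- Extract-Min(Q)
        for v, w in graph[u]:
            if d[v] > du + w:           # Relax(u, v, w)
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return order
```

Because extracted keys never decrease, the returned order is sorted by final distance, exactly as in the slides' step-by-step run.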
Correctness Dijkstra’s algorithm
Theorem 24.6
Upon termination, u.d = δ(s, u) for all u ∈ V (assuming nonnegative
weights).
Proof
By Lemma 24.11, once u.d = δ(s, u) holds, it continues to hold.
We are going to use the following loop invariant:
At the start of each iteration of the while loop of lines 4–8,
v.d = δ (s, v) for each vertex v ∈ S.
94 / 108
Proof
Thus
We are going to prove that u.d = δ(s, u) for each u ∈ V when u is inserted
into S.
Initialization
Initially S = ∅, thus the invariant is trivially true.
Maintenance
We want to show that in each iteration u.d = δ (s, u) for the vertex added
to set S.
For this, note the following
Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s.
In addition, we have that S ≠ ∅ before u is added.
95 / 108
Proof
Use contradiction
Now, suppose not. Let u be the first vertex such that u.d ≠ δ (s,u) when
inserted into S.
Note the following
Note that s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; thus S ≠ ∅ just
before u is inserted (in fact s ∈ S).
96 / 108
Proof
Now
Note that there must exist a path from s to u, for otherwise u.d = δ(s, u) = ∞
by Corollary 24.12:
“If there is no path from s to v, then v.d = δ(s, v) = ∞ is an
invariant.”
Thus there exists a shortest path p
Between s and u.
Observation
Prior to adding u to S, path p connects a vertex in S, namely s, to a
vertex in V − S, namely u.
97 / 108
Proof
Consider the following
Let y be the first vertex along p (going from s to u) such that y ∈ V − S.
And let x ∈ S be y’s predecessor along p.
98 / 108
Proof
Proof (continuation)
Then the shortest path from s to u, p = s ⇝ x → y ⇝ u (where p1 is the
subpath from s to x and p2 the subpath from y to u), looks like...
[Figure: the set S contains s and x; edge (x, y) crosses from S into V − S, and p2 continues from y to u]
Remark: Either of paths p1 or p2 may have no edges.
99 / 108
Proof
We claim
y.d = δ(s, y) when u is added into S.
Proof of the claim
1 Observe that x ∈ S.
2 In addition, we know that u is the first vertex for which u.d ≠ δ (s, u)
when it is added to S.
100 / 108
Proof
Then
In addition, we had that x.d = δ(s, x) when x was inserted into S.
Then, we relaxed the edge between x and y
Edge (x, y) was relaxed at that time!
101 / 108
Proof
Remember? Convergence property (Lemma 24.14)
Let p be a shortest path from s to v, where p = s ⇝ u → v (with p1 the
subpath from s to u). If u.d = δ(s, u) holds at any time prior to calling
Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.
Then
Then, using this convergence property:
y.d = δ(s, y) = δ(s, x) + w(x, y) (9)
The claim is implied!!!
102 / 108
Proof
Now
1 We obtain a contradiction to prove that u.d = δ (s, u).
2 y appears before u on a shortest path from s to u.
3 In addition, all edges have nonnegative weights.
4 Then, δ(s, y) ≤ δ(s, u), thus
y.d = δ (s, y) ≤ δ (s, u) ≤ u.d
The last inequality is due to the Upper-Bound Property (Lemma
24.11).
103 / 108
Proof
Then
But because both vertices u and y were in V − S when u was chosen in
line 5, we have u.d ≤ y.d.
Thus
y.d = δ(s, y) = δ(s, u) = u.d
Consequently
We have that u.d = δ (s, u), which contradicts our choice of u.
Conclusion: u.d = δ (s, u) when u is added to S, and the equality is
maintained afterwards.
104 / 108
Finally
Termination
At termination, Q = ∅.
Thus, V − S = ∅, or equivalently, S = V .
Thus
u.d = δ (s, u) for all vertices u ∈ V !!!
105 / 108
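Theorem 24.6 can also be checked empirically: on a random digraph with nonnegative weights, the d values from a heap-based Dijkstra (a sketch, not the lecture's pseudocode verbatim) coincide with the exact δ(s, u) obtained by |V| − 1 exhaustive rounds of relaxation, Bellman-Ford style. The graph generator below is an assumption for illustration.

```python
import heapq
import random

def dijkstra_d(graph, s):
    # Heap-based Dijkstra; returns the final d values only.
    d = {v: float('inf') for v in graph}
    d[s] = 0
    pq, done = [(0, s)], set()
    while pq:
        du, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in graph[u]:
            if d[v] > du + w:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

def delta_by_relaxation(graph, s):
    # |V| - 1 rounds of relaxing every edge give the exact distances.
    d = {v: float('inf') for v in graph}
    d[s] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
    return d

# Random nonnegative-weight digraph (hypothetical test instance).
random.seed(0)
verts = list(range(8))
graph = {u: [(v, random.randint(0, 9)) for v in verts
             if v != u and random.random() < 0.4] for u in verts}
assert dijkstra_d(graph, 0) == delta_by_relaxation(graph, 0)
```

The agreement holds precisely because of the nonnegative-weight assumption; with negative edges Dijkstra's greedy finalization of u can be wrong, while the relaxation rounds remain exact.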
Complexity
Running time is
O(V²) using a linear array for the priority queue.
O((V + E) log V ) using a binary heap.
O(V log V + E) using a Fibonacci heap.
107 / 108
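The O(V²) bound comes from implementing Extract-Min as a linear scan over the remaining vertices instead of a heap: V extractions at O(V) each, plus O(E) total relaxation work. A minimal sketch of that variant (the adjacency-dict representation is an assumption):

```python
def dijkstra_array(graph, s):
    """Array-scan Dijkstra: O(V^2 + E) = O(V^2) overall.

    graph: dict mapping each vertex to a list of (neighbor, weight)
    pairs with nonnegative weights.
    """
    d = {v: float('inf') for v in graph}
    d[s] = 0
    q = set(graph)                   # Q holds all unfinished vertices
    while q:
        u = min(q, key=d.get)        # linear-scan Extract-Min: O(V)
        q.remove(u)
        for v, w in graph[u]:        # Relax(u, v, w)
            if d[v] > d[u] + w:
                d[v] = d[u] + w
    return d
```

On dense graphs with E = Θ(V²) this simple variant is asymptotically as good as the binary-heap version, which is why the choice of priority queue matters mainly for sparse graphs.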
Exercises
From Cormen’s book, solve
24.1-1
24.1-3
24.1-4
23.3-1
23.3-3
23.3-4
23.3-6
23.3-7
23.3-8
23.3-10
108 / 108

20 Single Source Shorthest Path

  • 1.
    Analysis of Algorithms SingleSource Shortest Path Andres Mendez-Vazquez November 9, 2015 1 / 108
  • 2.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 2 / 108
  • 3.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 3 / 108
  • 4.
    Introduction Problem description Given asingle source vertex in a weighted, directed graph. We want to compute a shortest path for each possible destination (Similar to BFS). Thus The algorithm will compute a shortest path tree (again, similar to BFS). 4 / 108
  • 5.
    Introduction Problem description Given asingle source vertex in a weighted, directed graph. We want to compute a shortest path for each possible destination (Similar to BFS). Thus The algorithm will compute a shortest path tree (again, similar to BFS). 4 / 108
  • 6.
    Introduction Problem description Given asingle source vertex in a weighted, directed graph. We want to compute a shortest path for each possible destination (Similar to BFS). Thus The algorithm will compute a shortest path tree (again, similar to BFS). 4 / 108
  • 7.
    Similar Problems Single destinationshortest paths problem Find a shortest path to a given destination vertex t from each vertex. By reversing the direction of each edge in the graph, we can reduce this problem to a single source problem. 5 / 108
  • 8.
    Similar Problems Single destinationshortest paths problem Find a shortest path to a given destination vertex t from each vertex. By reversing the direction of each edge in the graph, we can reduce this problem to a single source problem. Reverse Directions 5 / 108
  • 9.
    Similar Problems Single pairshortest path problem Find a shortest path from u to v for given vertices u and v. If we solve the single source problem with source vertex u, we also solve this problem. 6 / 108
  • 10.
    Similar Problems Single pairshortest path problem Find a shortest path from u to v for given vertices u and v. If we solve the single source problem with source vertex u, we also solve this problem. 6 / 108
  • 11.
    Similar Problems All pairsshortest paths problem Find a shortest path from u to v for every pair of vertices u and v. 7 / 108
  • 12.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 8 / 108
  • 13.
    Optimal Substructure Property Lemma24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 14.
    Optimal Substructure Property Lemma24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 15.
    Optimal Substructure Property Lemma24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 16.
    Optimal Substructure Property Lemma24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 17.
    Optimal Substructure Property Lemma24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 18.
    Optimal Substructure Property Lemma24.1 Given a weighted, directed graph G = (V , E) with p =< v1, v2, ..., vk > be a Shortest Path from v1 to vk. Then, pij =< vi, vi+1, ..., vj > is a Shortest Path from vi to vj, where 1 ≤ i ≤ j ≤ k. We have then So, we have the optimal substructure property. Bellman-Ford’s algorithm uses dynamic programming. Dijkstra’s algorithm uses the greedy approach. In addition, we have that Let δ(u, v) = weight of Shortest Path from u to v. 9 / 108
  • 19.
    Optimal Substructure Property Corollary Letp be a Shortest Path from s to v, where p = s p1 u → v = p1 ∪ {(u, v)}. Then δ(s, v) = δ(s, u) + w(u, v). 10 / 108
  • 20.
    The Lower BoundBetween Nodes Lemma 24.10 Let s ∈ V . For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v). 11 / 108
  • 21.
    Now Then We have thebasic concepts Still We need to define an important one. The Predecessor Graph This will facilitate the proof of several concepts 12 / 108
  • 22.
    Now Then We have thebasic concepts Still We need to define an important one. The Predecessor Graph This will facilitate the proof of several concepts 12 / 108
  • 23.
    Now Then We have thebasic concepts Still We need to define an important one. The Predecessor Graph This will facilitate the proof of several concepts 12 / 108
  • 24.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 13 / 108
  • 25.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 26.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 27.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 28.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 29.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 30.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 31.
    Predecessor Graph Representing shortestpaths For this we use the predecessor subgraph It is defined slightly differently from that on Breadth-First-Search Definition of a Predecessor Subgraph The predecessor is a subgraph Gπ = (Vπ, Eπ) where Vπ = {v ∈ V |v.π = NIL} ∪ {s} Eπ = {(v.π, v)| v ∈ Vπ − {s}} Properties The predecessor subgraph Gπ forms a depth first forest composed of several depth first trees. The edges in Eπ are called tree edges. 14 / 108
  • 32.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 15 / 108
  • 33.
    The Relaxation Concept Weare going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 34.
    The Relaxation Concept Weare going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 35.
    The Relaxation Concept Weare going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 36.
    The Relaxation Concept Weare going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 37.
    The Relaxation Concept Weare going to use certain functions for all the algorithms Initialize Here, the basic variables of the nodes in a graph will be initialized v.d = the distance from the source s. v.π = the predecessor node during the search of the shortest path. Changing the v.d This will be done in the Relaxation algorithm. 16 / 108
  • 38.
    Initialize and Relaxation TheAlgorithms keep track of v.d, v.π. It is initialized as follows Initialize(G, s) 1 for each v ∈ V [G] 2 v.d = ∞ 3 v.π = NIL 4 s.d = 0 These values are changed when an edge (u, v) is relaxed. Relax(u, v, w) 1 if v.d > u.d + w(u, v) 2 v.d = u.d + w(u, v) 3 v.π = u 17 / 108
  • 39.
    Initialize and Relaxation TheAlgorithms keep track of v.d, v.π. It is initialized as follows Initialize(G, s) 1 for each v ∈ V [G] 2 v.d = ∞ 3 v.π = NIL 4 s.d = 0 These values are changed when an edge (u, v) is relaxed. Relax(u, v, w) 1 if v.d > u.d + w(u, v) 2 v.d = u.d + w(u, v) 3 v.π = u 17 / 108
  • 40.
    How are thesefunctions used? These functions are used 1 Build a predecesor graph Gπ. 2 Integrate the Shortest Path into that predecessor graph. 1 Using the field d. 18 / 108
  • 41.
    How are thesefunctions used? These functions are used 1 Build a predecesor graph Gπ. 2 Integrate the Shortest Path into that predecessor graph. 1 Using the field d. 18 / 108
  • 42.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 19 / 108
  • 43.
    The Bellman-Ford Algorithm Bellman-Fordcan have negative weight edges. It will “detect” reachable negative weight cycles. Bellman-Ford(G, s, w) 1 Initialize(G, s) 2 for i = 1 to |V [G]| − 1 3 for each (u, v) to E [G] 4 Relax(u, v, w) 5 for each (u, v) to E [G] 6 if v.d > u.d + w (u, v) 7 return false 8 return true Time Complexity O (VE) 20 / 108
  • 44.
    The Bellman-Ford Algorithm Bellman-Fordcan have negative weight edges. It will “detect” reachable negative weight cycles. Bellman-Ford(G, s, w) 1 Initialize(G, s) 2 for i = 1 to |V [G]| − 1 3 for each (u, v) to E [G] 4 Relax(u, v, w) 5 for each (u, v) to E [G] 6 if v.d > u.d + w (u, v) 7 return false 8 return true Time Complexity O (VE) 20 / 108
  • 45.
    The Bellman-Ford Algorithm Bellman-Fordcan have negative weight edges. It will “detect” reachable negative weight cycles. Bellman-Ford(G, s, w) 1 Initialize(G, s) 2 for i = 1 to |V [G]| − 1 3 for each (u, v) to E [G] 4 Relax(u, v, w) 5 for each (u, v) to E [G] 6 if v.d > u.d + w (u, v) 7 return false 8 return true Time Complexity O (VE) 20 / 108
  • 46.
    The Bellman-Ford Algorithm Bellman-Fordcan have negative weight edges. It will “detect” reachable negative weight cycles. Bellman-Ford(G, s, w) 1 Initialize(G, s) 2 for i = 1 to |V [G]| − 1 3 for each (u, v) to E [G] 4 Relax(u, v, w) 5 for each (u, v) to E [G] 6 if v.d > u.d + w (u, v) 7 return false 8 return true Time Complexity O (VE) 20 / 108
  • 47.
    Outline 1 Introduction Introduction andSimilar Problems 2 General Results Optimal Substructure Properties Predecessor Graph The Relaxation Concept The Bellman-Ford Algorithm Properties of Relaxation 3 Bellman-Ford Algorithm Predecessor Subgraph for Bellman Shortest Path for Bellman Example Bellman-Ford finds the Shortest Path Correctness of Bellman-Ford 4 Directed Acyclic Graphs (DAG) Relaxing Edges Example 5 Dijkstra’s Algorithm Dijkstra’s Algorithm: A Greedy Method Example Correctness Dijkstra’s algorithm Complexity of Dijkstra’s Algorithm 6 Exercises 21 / 108
  • 48.
    Properties of Relaxation Someproperties v.d, if not ∞, is the length of some path from s to v. v.d either stays the same or decreases with time. Therefore If v.d = δ(s, v) at any time, this holds thereafter. Something nice Note that v.d ≥ δ(s, v) always (Upper-Bound Property). After i iterations of relaxing an all (u, v), if the shortest path to v has i edges, then v.d = δ(s, v). If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant. 22 / 108
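The Initialize and Relax operations these properties refer to can be sketched in Python (a minimal sketch; the dictionary-based representation and the names d and pi, mirroring v.d and v.π, are assumptions):

```python
import math

def initialize(vertices, s):
    # Set every estimate v.d to infinity and every predecessor v.pi to None;
    # the source starts with d = 0.
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    # If the path to v through u is shorter than the current estimate,
    # lower v.d and record u as v's predecessor. d never increases.
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u
```

Note how relax only ever lowers d[v], which is exactly why v.d either stays the same or decreases with time.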
Properties of Relaxation
Lemma 24.10 (Triangle inequality)
Let G = (V, E) be a weighted, directed graph with weight function w : E → R and source vertex s. Then, for all edges (u, v) ∈ E, we have:
δ(s, v) ≤ δ(s, u) + w(u, v) (1)
Proof
1 Suppose that p is a shortest path from source s to vertex v.
2 Then, p has no more weight than any other path from s to vertex v.
3 In particular, p has no more weight than the path that follows a shortest path from s to u and then takes edge (u, v).
23 / 108
Properties of Relaxation
Lemma 24.11 (Upper Bound Property)
Let G = (V, E) be a weighted, directed graph with weight function w : E → R. Consider any algorithm in which v.d and v.π are first initialized by calling Initialize(G, s) (s is the source), and are only changed by calling Relax.
Then, we have that v.d ≥ δ(s, v) ∀v ∈ V[G], and this invariant is maintained over any sequence of relaxation steps on the edges of G.
Moreover, once v.d = δ(s, v), it never changes.
24 / 108
Proof of Lemma
Loop Invariant
The proof can be done by induction over the number of relaxation steps, using the loop invariant:
v.d ≥ δ(s, v) for all v ∈ V
For the Basis
v.d ≥ δ(s, v) is true after initialization, since:
v.d = ∞ makes v.d ≥ δ(s, v) for all v ∈ V − {s}.
For s, s.d = 0 ≥ δ(s, s).
For the inductive step, consider the relaxation of an edge (u, v)
By the inductive hypothesis, we have that x.d ≥ δ(s, x) for all x ∈ V prior to relaxation.
25 / 108
Thus
If you call Relax(u, v, w), it may change v.d:
v.d = u.d + w(u, v)
    ≥ δ(s, u) + w(u, v)   by the inductive hypothesis
    ≥ δ(s, v)             by the triangle inequality
Thus, the invariant is maintained.
26 / 108
Properties of Relaxation
Proof of Lemma 24.11 cont...
To prove that the value v.d never changes once v.d = δ(s, v):
Note the following:
Once v.d = δ(s, v), it cannot decrease because v.d ≥ δ(s, v), and Relaxation never increases d.
27 / 108
Next, we have
Corollary 24.12 (No-path property)
If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.
Proof
By the upper-bound property, we always have ∞ = δ(s, v) ≤ v.d. Then, v.d = ∞.
28 / 108
More Lemmas
Lemma 24.13
Let G = (V, E) be a weighted, directed graph with weight function w : E → R. Then, immediately after relaxing edge (u, v) by calling Relax(u, v, w), we have v.d ≤ u.d + w(u, v).
29 / 108
Proof
Consider the moment just prior to relaxing edge (u, v).
Case 1: v.d > u.d + w(u, v)
Then, v.d = u.d + w(u, v) after relaxation.
Case 2: v.d ≤ u.d + w(u, v) just before relaxation
Then neither u.d nor v.d changes.
Thus, afterwards v.d ≤ u.d + w(u, v).
30 / 108
More Lemmas
Lemma 24.14 (Convergence property)
Let p be a shortest path from s to v that decomposes as p = p1 ∪ {(u, v)}, where p1 is a shortest path from s to u, followed by edge (u, v). If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.
Proof:
By the upper-bound property, if u.d = δ(s, u) at some moment before relaxing edge (u, v), then it continues to hold afterwards.
31 / 108
Proof
Thus, after relaxing (u, v):
v.d ≤ u.d + w(u, v)        by Lemma 24.13
    = δ(s, u) + w(u, v)
    = δ(s, v)              by the corollary of Lemma 24.1
Now
By Lemma 24.11, v.d ≥ δ(s, v), so v.d = δ(s, v).
32 / 108
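Together with the upper-bound property, the convergence property yields the path-relaxation idea used later: if the edges of a shortest path are relaxed in order, the final estimate is exact. A small illustrative sketch (the three-vertex path is made up for the example):

```python
import math

def relax(u, v, weight, d):
    # Lower d[v] if the path through u is better; d never increases.
    d[v] = min(d[v], d[u] + weight)

# Shortest path s -> a -> b with weights 2 and 3, so δ(s, b) = 5.
d = {'s': 0, 'a': math.inf, 'b': math.inf}
relax('s', 'a', 2, d)  # a.d converges to δ(s, a) = 2
relax('a', 'b', 3, d)  # by the convergence property, b.d = δ(s, b) = 5
```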
Predecessor Subgraph for Bellman
Lemma 24.16
Assume a given graph G that has no negative-weight cycles reachable from s. Then, after the initialization, the predecessor subgraph Gπ is always a tree with root s, and any sequence of relaxation steps on edges of G maintains this property as an invariant.
Proof
It is necessary to prove two things in order to get a tree:
1 Gπ is acyclic.
2 There exists a unique path from source s to each vertex v ∈ Vπ.
34 / 108
Proof that Gπ is acyclic
First
Suppose there exists a cycle c = <v0, v1, ..., vk>, where v0 = vk.
We have vi.π = vi−1 for i = 1, 2, ..., k.
Second
Assume relaxation of (vk−1, vk) created the cycle.
We are going to show that the cycle has a negative weight.
We claim that
The cycle must be reachable from s (Why?)
35 / 108
Proof
First
Each vertex on the cycle has a non-NIL predecessor, and so each vertex on it was assigned a finite shortest-path estimate when it was assigned its non-NIL value.
Then
By the upper-bound property, each vertex on the cycle has a finite shortest-path weight,
Thus
Making the cycle reachable from s.
36 / 108
Proof
Before the call to Relax(vk−1, vk, w):
vi.π = vi−1 for i = 1, ..., k − 1. (2)
This implies that vi.d was last updated by
vi.d = vi−1.d + w(vi−1, vi) (3)
for i = 1, ..., k − 1 (because Relax updates π).
This implies
vi.d ≥ vi−1.d + w(vi−1, vi) (4)
for i = 1, ..., k − 1 (before relaxation, by Lemma 24.13).
37 / 108
Proof
Thus
Because vk.π is changed by the call to Relax, immediately before the call we have vk.d > vk−1.d + w(vk−1, vk). Summing around the cycle:
Σ_{i=1}^{k} vi.d > Σ_{i=1}^{k} (vi−1.d + w(vi−1, vi)) = Σ_{i=1}^{k} vi−1.d + Σ_{i=1}^{k} w(vi−1, vi)
We have finally that
Because each vertex of the cycle appears exactly once in each summation, Σ_{i=1}^{k} vi.d = Σ_{i=1}^{k} vi−1.d, so Σ_{i=1}^{k} w(vi−1, vi) < 0, i.e., a negative-weight cycle!!!
38 / 108
Some comments
Comments
vi.d ≥ vi−1.d + w(vi−1, vi) for i = 1, ..., k − 1 because when Relax(vi−1, vi, w) was called, there was an equality, and vi−1.d may have gotten smaller by further calls to Relax.
vk.d > vk−1.d + w(vk−1, vk) before the last call to Relax because that last call changed vk.d.
39 / 108
Proof of the existence of a unique path from source s
Let Gπ be the predecessor subgraph. So, for any v in Vπ, the graph Gπ contains at least one path from s to v.
Assume now that you have two paths from s to v. They would have to merge at some vertex z reached through two distinct predecessors x and y. But z.π is a single value, so z.π = x and z.π = y force x = y!!! Contradiction!!!
Therefore, we have only one path, and Gπ is a tree.
40 / 108
Lemma 24.17
Lemma 24.17
Assume the same conditions as before, and that the algorithm calls Initialize and repeatedly calls Relax until v.d = δ(s, v) for all v in V. Then Gπ is a shortest-path tree rooted at s.
Proof
For all v in V, there is a unique simple path p from s to v in Gπ (Lemma 24.16).
We want to prove that it is a shortest path from s to v in G.
42 / 108
Proof
Then
Let p = <v0, v1, ..., vk>, where v0 = s and vk = v. Thus, we have vi.d = δ(s, vi).
And reasoning as before
vi.d ≥ vi−1.d + w(vi−1, vi) (5)
This implies that
w(vi−1, vi) ≤ δ(s, vi) − δ(s, vi−1) (6)
43 / 108
Proof
Then, we sum over all weights:
w(p) = Σ_{i=1}^{k} w(vi−1, vi) ≤ Σ_{i=1}^{k} (δ(s, vi) − δ(s, vi−1)) = δ(s, vk) − δ(s, v0) = δ(s, vk)
(the sum telescopes, and δ(s, v0) = δ(s, s) = 0)
Finally
So, equality holds and p is a shortest path, because δ(s, vk) ≤ w(p).
44 / 108
Again the Bellman-Ford Algorithm
Bellman-Ford can handle negative-weight edges. It will “detect” reachable negative-weight cycles.
Bellman-Ford(G, s, w)
1 Initialize(G, s)
2 for i = 1 to |V[G]| − 1
3     for each (u, v) ∈ E[G]
4         Relax(u, v, w)
The decision part of the dynamic programming for u.d and u.π:
5 for each (u, v) ∈ E[G]
6     if v.d > u.d + w(u, v)
7         return false
8 return true
Observation
If Bellman-Ford has not converged after |V(G)| − 1 iterations, then there cannot be a shortest-path tree, so there must be a negative-weight cycle.
46 / 108
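The pseudocode above translates almost line for line into Python (a sketch; the edge-list representation is an assumption):

```python
import math

def bellman_ford(vertices, edges, s):
    """edges is a list of (u, v, w) triples. Returns (ok, d, pi), where
    ok is False iff a negative-weight cycle is reachable from s."""
    # Initialize(G, s)
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    # |V| - 1 passes over all edges (lines 2-4).
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:   # Relax(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    # Decision step (lines 5-8): a still-relaxable edge witnesses a
    # reachable negative-weight cycle.
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi
```

By Corollary 24.3 (below), after a true return, a vertex v is reachable from s exactly when d[v] < ∞.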
Example
Bellman-Ford run on an eight-vertex graph with source s; red arrows represent v.π. Only the figure captions are recoverable from the sequence of slides:
Initially, only s.d = 0; every other estimate is ∞.
First iteration: whenever u.d = ∞, relaxing (u, v) makes no change.
Second iteration: we keep updating; e is updated to a better value during this iteration.
Third iteration: we keep updating; d and g are also updated.
Fourth iteration: further updates; f is updated during this iteration.
47–53 / 108
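The pass-by-pass progression shown in the figures can be reproduced in miniature; the four-vertex graph below is illustrative, not the eight-vertex graph from the slides:

```python
import math

vertices = ['s', 'a', 'b', 'c']
edges = [('s', 'a', 6), ('s', 'b', 7), ('a', 'c', 5), ('b', 'c', -4), ('c', 'a', 2)]
d = {v: math.inf for v in vertices}
d['s'] = 0
for i in range(len(vertices) - 1):
    for u, v, w in edges:
        if d[u] + w < d[v]:
            d[v] = d[u] + w
    # Estimates only ever decrease from pass to pass (upper-bound property).
# In the first pass a.d is set to 6 and then improved to 5 via c within the
# same pass, because edge (c, a) is relaxed after (b, c) in this edge order.
```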
v.d == δ(s, v) upon termination
Lemma 24.2
Assuming no negative-weight cycles reachable from s, v.d == δ(s, v) holds upon termination for all vertices v reachable from s.
Proof
Consider a shortest path p = <v0, v1, ..., vk>, where v0 = s and vk = v.
We know the following
Shortest paths are simple, so p has at most |V| − 1 edges; thus we have that k ≤ |V| − 1.
55 / 108
Proof
Something Notable
Claim: vi.d = δ(s, vi) holds after the ith pass over the edges.
In the algorithm
Each of the |V| − 1 iterations of the for loop (Lines 2-4) relaxes all edges in E.
Proof follows by induction on i
Among the edges relaxed in the ith iteration, for i = 1, 2, ..., k, is (vi−1, vi).
By Lemma 24.11, once vi.d = δ(s, vi) holds, it continues to hold.
56 / 108
Finding a path between s and v
Corollary 24.3
Let G = (V, E) be a weighted, directed graph with source vertex s and weight function w : E → R, and assume that G contains no negative-weight cycles that are reachable from s. Then, for each vertex v ∈ V, there is a path from s to v if and only if Bellman-Ford terminates with v.d < ∞ when it is run on G.
Proof
Left to you...
57 / 108
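The corollary suggests a direct reachability test after the relaxation passes: v is reachable iff v.d < ∞. A minimal sketch (the tiny graph, with c deliberately unreachable, is made up for the example):

```python
import math

vertices = ['s', 'a', 'c']
edges = [('s', 'a', 1)]  # no edge ever leads to c
d = {v: math.inf for v in vertices}
d['s'] = 0
for _ in range(len(vertices) - 1):
    for u, v, w in edges:
        if d[u] + w < d[v]:
            d[v] = d[u] + w

# Reachable vertices are exactly those with a finite estimate.
reachable = {v for v in vertices if d[v] < math.inf}
```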
    Correctness of Bellman-Ford
    Claim: the algorithm returns the correct value. This is part of Theorem 24.4; the other parts of the theorem follow easily from earlier results.
    Case 1: There is no reachable negative-weight cycle.
    Upon termination, for every vertex v: v.d = δ(s, v) by Lemma 24.2 (last slide) if v is reachable, or v.d = δ(s, v) = ∞ otherwise. 59 / 108
    Correctness of Bellman-Ford
    Then, we have that
    v.d = δ(s, v) ≤ δ(s, u) + w(u, v) ≤ u.d + w(u, v)
    Remember the check loop:
    5. for each (u, v) ∈ E[G]
    6.   if v.d > u.d + w(u, v)
    7.     return false
    Thus, the test in line 6 never succeeds, and the algorithm returns true. 60 / 108
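    The full procedure, including the check loop in lines 5-7 discussed above, can be sketched in Python. This is a minimal illustration, not the text's pseudocode verbatim: the graph representation (an edge list of `(u, v, w)` triples over vertices `0..n-1`) and the function name are assumptions for the example.

```python
import math

def bellman_ford(n, edges, s):
    """Bellman-Ford sketch. Returns (d, pred, ok); ok is False iff a
    negative-weight cycle is reachable from s."""
    d = [math.inf] * n
    pred = [None] * n
    d[s] = 0
    for _ in range(n - 1):            # |V| - 1 passes (lines 2-4)
        for u, v, w in edges:         # relax every edge in E
            if d[u] + w < d[v]:       # Relax(u, v, w)
                d[v] = d[u] + w
                pred[v] = u
    for u, v, w in edges:             # check loop (lines 5-7)
        if d[u] + w < d[v]:
            return d, pred, False     # a reachable negative-weight cycle exists
    return d, pred, True
```

    On a graph with a negative edge but no negative cycle the check loop passes; adding a reachable negative-weight cycle makes it return false, matching Case 2 below.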
    Correctness of Bellman-Ford
    Case 2: There exists a reachable negative-weight cycle c = ⟨v0, v1, ..., vk⟩, where v0 = vk. Then, we have:
    Σ_{i=1}^{k} w(vi−1, vi) < 0.   (7)
    Suppose the algorithm returns true. Then vi.d ≤ vi−1.d + w(vi−1, vi) for i = 1, ..., k, because Relax did not change any vi.d. 61 / 108
    Correctness of Bellman-Ford
    Thus
    Σ_{i=1}^{k} vi.d ≤ Σ_{i=1}^{k} (vi−1.d + w(vi−1, vi)) = Σ_{i=1}^{k} vi−1.d + Σ_{i=1}^{k} w(vi−1, vi)
    Since v0 = vk, each vertex in c appears exactly once in each of the summations Σ_{i=1}^{k} vi.d and Σ_{i=1}^{k} vi−1.d, thus
    Σ_{i=1}^{k} vi.d = Σ_{i=1}^{k} vi−1.d   (8)
    62 / 108
    Correctness of Bellman-Ford
    By Corollary 24.3, vi.d is finite for i = 1, 2, ..., k; thus, cancelling the equal sums of (8), we get 0 ≤ Σ_{i=1}^{k} w(vi−1, vi).
    Hence, this contradicts (Eq. 7). Thus, the algorithm returns false. 63 / 108
    Another Example
    Something Notable: by relaxing the edges of a weighted DAG G = (V, E) according to a topological sort of its vertices, we can compute shortest paths from a single source in Θ(V + E) time.
    Why? Shortest paths are always well defined in a DAG: even if there are negative-weight edges, no negative-weight cycles can exist. 65 / 108
    Single-source Shortest Paths in Directed Acyclic Graphs
    In a DAG, we can do the following (complexity Θ(V + E)):
    DAG-SHORTEST-PATHS(G, w, s)
    1  Topologically sort the vertices of G
    2  Initialize(G, s)
    3  for each u ∈ V[G], taken in topologically sorted order
    4    for each v ∈ Adj[u]
    5      Relax(u, v, w)
    66 / 108
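    The pseudocode above can be sketched as a small Python function. This is a minimal sketch: Kahn's algorithm stands in for the unspecified topological-sort step, and the edge-list representation over vertices `0..n-1` is an assumption of the example.

```python
import math

def dag_shortest_paths(n, edges, s):
    """DAG-SHORTEST-PATHS sketch: n vertices 0..n-1, edges as (u, v, w)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Line 1: topological sort via Kahn's algorithm
    order, stack = [], [u for u in range(n) if indeg[u] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # Line 2: Initialize(G, s)
    d = [math.inf] * n
    pred = [None] * n
    d[s] = 0
    # Lines 3-5: relax each edge exactly once, in topological order
    for u in order:
        for v, w in adj[u]:
            if d[v] > d[u] + w:   # Relax(u, v, w)
                d[v] = d[u] + w
                pred[v] = u
    return d, pred
```

    Note that a negative edge such as (1, 3, −2) below is handled correctly: no negative-weight cycle can exist in a DAG, so relaxing in topological order suffices.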
    It is based on the following theorem
    Theorem 24.5. If a weighted, directed graph G = (V, E) has source vertex s and no cycles, then at the termination of the DAG-SHORTEST-PATHS procedure, v.d = δ(s, v) for all vertices v ∈ V, and the predecessor subgraph Gπ is a shortest-path tree.
    Proof: left to you... 67 / 108
    Complexity
    We have that:
    1 Line 1 takes Θ(V + E).
    2 Line 2 takes Θ(V).
    3 Lines 3-5 make one iteration per vertex; in addition, the for loop in lines 4-5 relaxes each edge exactly once (remember the topological sorting), making each iteration of the inner loop Θ(1).
    Therefore, the total running time is Θ(V + E). 68 / 108
    Example
    [Figures on slides 70-76: a DAG with vertices a, b, c, d, e, f and edge weights 5, 2, 3, 7, 4, 2, −1, −2, 6, 1; b is the source.]
    After initialization, b is the source. 70 / 108
    a is the first vertex in the topological sort, but no update is done. 71 / 108
    b is the next one. 72 / 108
    c is the next one. 73 / 108
    d is the next one. 74 / 108
    e is the next one. 75 / 108
    Finally, f is the next one. 76 / 108
    Dijkstra’s Algorithm
    It is a greedy method. Ideas? Yes: we need to keep track of the greedy choice! 78 / 108
    Dijkstra’s Algorithm
    Assume non-negative weight edges.
    1 Dijkstra’s algorithm maintains a set S of vertices whose shortest path from s has been determined.
    2 It repeatedly selects u ∈ V − S with the minimum shortest-path estimate (greedy choice).
    3 It stores V − S in a priority queue Q. 79 / 108
    Dijkstra’s algorithm
    DIJKSTRA(G, w, s)
    1  INITIALIZE(G, s)
    2  S = ∅
    3  Q = V[G]
    4  while Q ≠ ∅
    5    u = Extract-Min(Q)
    6    S = S ∪ {u}
    7    for each vertex v ∈ Adj[u]
    8      Relax(u, v, w)
    80 / 108
    Example
    [Figures on slides 82-92: a weighted graph with source s and vertices a, b, c, d, e, f, g, h; edge weights 5, 4, 5, 3, 3, 6, 2, 1, 5, 2, 2, 7, 2, 10, 3.]
    The graph after initialization. 82 / 108
    We use red edges to represent v.π and black to represent the set S. 83 / 108
    s ← Extract-Min(Q), and update the elements adjacent to s. 84 / 108
    a ← Extract-Min(Q), and update the elements adjacent to a. 85 / 108
    e ← Extract-Min(Q), and update the elements adjacent to e. 86 / 108
    b ← Extract-Min(Q), and update the elements adjacent to b. 87 / 108
    h ← Extract-Min(Q), and update the elements adjacent to h. 88 / 108
    c ← Extract-Min(Q), and no update. 89 / 108
    d ← Extract-Min(Q), and no update. 90 / 108
    g ← Extract-Min(Q), and no update. 91 / 108
    f ← Extract-Min(Q), and no update. 92 / 108
    Correctness of Dijkstra’s algorithm
    Theorem 24.6. Upon termination, u.d = δ(s, u) for all u ∈ V (assuming non-negative weights).
    Proof. By Lemma 24.11, once u.d = δ(s, u) holds, it continues to hold. We are going to use the following loop invariant:
    At the start of each iteration of the while loop of lines 4-8, v.d = δ(s, v) for each vertex v ∈ S. 94 / 108
    Proof
    Thus, we are going to prove that for each u ∈ V, u.d = δ(s, u) when u is inserted into S.
    Initialization: initially S = ∅, thus the invariant is trivially true.
    Maintenance: we want to show that in each iteration u.d = δ(s, u) for the vertex added to the set S. For this, note that s.d = δ(s, s) = 0 when s is inserted, so any counterexample u satisfies u ≠ s; in addition, S ≠ ∅ before such a u is added. 95 / 108
    Proof
    Use contradiction: now, suppose not. Let u be the first vertex such that u.d ≠ δ(s, u) when it is inserted into S.
    Note the following: s.d = δ(s, s) = 0 when s is inserted, so u ≠ s; thus S ≠ ∅ just before u is inserted (in fact s ∈ S). 96 / 108
    Proof
    Now, note that there exists a path from s to u, for otherwise u.d = δ(s, u) = ∞ by Corollary 24.12: “If there is no path from s to v, then v.d = δ(s, v) = ∞ is an invariant.”
    Thus, there exists a shortest path p between s and u.
    Observation: prior to adding u to S, path p connects a vertex in S, namely s, to a vertex in V − S, namely u. 97 / 108
    Proof
    Consider the following: let y be the first vertex along p from s to u such that y ∈ V − S, and let x ∈ S be y’s predecessor along p. 98 / 108
    Proof (continuation)
    Then, the shortest path from s to u decomposes as s ⇝(p1) x → y ⇝(p2) u.
    [Figure: the set S contains s and x; y and u lie outside S.]
    Remark: either of the paths p1 or p2 may have no edges. 99 / 108
    Proof
    We claim y.d = δ(s, y) when u is added into S.
    Proof of the claim:
    1 Observe that x ∈ S.
    2 In addition, we know that u is the first vertex for which u.d ≠ δ(s, u) when it is added to S. 100 / 108
    Proof
    Then, in addition, we had that x.d = δ(s, x) when x was inserted into S.
    Then, we relaxed the edge between x and y: edge (x, y) was relaxed at that time! 101 / 108
    Proof
    Remember? Convergence property (Lemma 24.14): let p be a shortest path from s to v that decomposes as s ⇝(p1) u → v. If u.d = δ(s, u) holds at any time prior to calling Relax(u, v, w), then v.d = δ(s, v) holds at all times after the call.
    Then, using this convergence property:
    y.d = δ(s, y) = δ(s, x) + w(x, y)   (9)
    The claim is implied! 102 / 108
    Proof
    Now:
    1 We obtain a contradiction to prove that u.d = δ(s, u).
    2 y appears before u on a shortest path from s to u.
    3 In addition, all edges have non-negative weights.
    4 Then, δ(s, y) ≤ δ(s, u), thus
    y.d = δ(s, y) ≤ δ(s, u) ≤ u.d
    The last inequality is due to the upper-bound property (Lemma 24.11). 103 / 108
    Proof
    Then, because both vertices u and y were in V − S when u was chosen in line 5, we have u.d ≤ y.d. Thus
    y.d = δ(s, y) = δ(s, u) = u.d
    Consequently, we have that u.d = δ(s, u), which contradicts our choice of u. Conclusion: u.d = δ(s, u) when u is added to S, and the equality is maintained afterwards. 104 / 108
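    The two directions of the argument above combine into a single chain; a compact restatement (same symbols as the slides):

```latex
% y.d <= u.d from the convergence property and the upper-bound property;
% u.d <= y.d because u was extracted before y in line 5.
y.d \;=\; \delta(s, y) \;\le\; \delta(s, u) \;\le\; u.d \;\le\; y.d
\quad\Longrightarrow\quad
y.d \;=\; \delta(s, y) \;=\; \delta(s, u) \;=\; u.d
```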
    Finally: Termination
    At termination, Q = ∅; thus, V − S = ∅ or, equivalently, S = V.
    Thus, u.d = δ(s, u) for all vertices u ∈ V! 105 / 108
    Complexity
    Running time is:
    O(V²) using a linear array for the priority queue.
    O((V + E) log V) using a binary heap.
    O(V log V + E) using a Fibonacci heap. 107 / 108
    Exercises
    From Cormen’s book, solve: 24.1-1, 24.1-3, 24.1-4, 23.3-1, 23.3-3, 23.3-4, 23.3-6, 23.3-7, 23.3-8, 23.3-10. 108 / 108