The document describes Johnson's algorithm for finding shortest paths between all pairs of vertices in a sparse graph. It discusses how the algorithm uses reweighting to compute new edge weights that preserve shortest paths while making all weights nonnegative. It shows how Dijkstra's algorithm can then be run on the reweighted graph to find shortest paths between all pairs of vertices. The key steps are: (1) adding a source node and zero-weight edges, (2) running Bellman-Ford to compute distances from the source, (3) using these distances to reweight the edges while preserving shortest paths, resulting in nonnegative weights.
Johnson’s Algorithm for Sparse Graphs
Dr. Kiran K
Assistant Professor
Department of CSE
UVCE
Bengaluru, India.
Introduction
• Johnson’s algorithm finds Shortest Paths between All Pairs of vertices in a graph.
• The algorithm either returns a matrix of shortest-path weights for all pairs of vertices or
reports that the input graph contains a negative-weight cycle.
• It uses as subroutines both Dijkstra’s algorithm and the Bellman-Ford algorithm.
• It uses a technique called Reweighting.
• If all edge weights w in a graph G = (V, E) are nonnegative, shortest paths between all pairs
of vertices are found by running Dijkstra’s algorithm once from each vertex.
• The algorithm computes a new set of nonnegative edge weights ŵ.
• ŵ must satisfy two important properties:
1. For all pairs of vertices u, v є V, a path p is a shortest path from u to v using weight
function w if and only if p is also a shortest path from u to v using weight function ŵ.
2. For all edges (u, v), the new weight ŵ (u, v) is nonnegative.
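The reweighting technique behind these two properties can be sketched in Python. This is an illustrative fragment, not from the slides: the three-vertex graph and the h values are invented for the example.

```python
# Sketch of reweighting: w_hat(u, v) = w(u, v) + h(u) - h(v).
# The tiny graph and the h values below are illustrative assumptions.
def reweight(edges, h):
    """edges maps (u, v) -> w(u, v); h maps each vertex to a real number."""
    return {(u, v): wt + h[u] - h[v] for (u, v), wt in edges.items()}

edges = {("a", "b"): -2, ("b", "c"): 3}
h = {"a": 0, "b": -2, "c": 0}   # shortest distances from a hypothetical added source
w_hat = reweight(edges, h)
print(w_hat)  # {('a', 'b'): 0, ('b', 'c'): 1} -- all weights now nonnegative
```

With a suitable h (computed later via Bellman-Ford), every reweighted edge becomes nonnegative while path comparisons are unchanged.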
Preserving Shortest Paths by Reweighting
δ : shortest-path weights derived from weight function w
δ̂ : shortest-path weights derived from weight function ŵ
Lemma: Reweighting Does not change Shortest Paths
Given a Weighted, Directed Graph G = (V, E) with weight function w : E → R,
Let h : V → R be any function mapping Vertices to Real numbers.
For each edge (u, v) є E, define ŵ (u, v) = w (u, v) + h (u) – h (v).
Let p = <v0, v1, . . , vk> be any path from vertex v0 to vertex vk. Then p is a shortest
path from v0 to vk with weight function w if and only if it is a shortest path with
weight function ŵ. That is, w (p) = δ (v0, vk) if and only if ŵ (p) = δ̂ (v0, vk).
Furthermore, G has a negative-weight cycle using weight function w if and only if G
has a negative-weight cycle using weight function ŵ.
Preserving Shortest Paths by Reweighting…
Proof:
(1) w (p) = δ (v0, vk) if and only if ŵ (p) = δ̂ (v0, vk).
For p = <v0, v1, . . . , vk>:
ŵ (p) = Σ i=1..k ŵ (vi-1, vi)
      = Σ i=1..k [w (vi-1, vi) + h (vi-1) – h (vi)]
      = Σ i=1..k w (vi-1, vi) + h (v0) – h (vk)   (the h terms telescope)
      = w (p) + h (v0) – h (vk)   (L1)
Preserving Shortest Paths by Reweighting…
(L1) → Any path p from v0 to vk has ŵ (p) = w (p) + h (v0) – h (vk) (L2)
h (v0) and h (vk) do not depend on the path. (L3)
(L2) and (L3) → if one path from v0 to vk is shorter than another using weight
function w, then it is also shorter using ŵ. (L4)
(L4) → w (p) = δ (v0, vk) if and only if ŵ (p) = δ̂ (v0, vk).
(2) G has a negative-weight cycle using weight function w if and only if G has a
negative-weight cycle using weight function ŵ.
Let c = <v0, v1, . . , vk > be a cycle where v0 = vk
(L2) → ŵ (c) = w (c) + h (v0) – h (vk) (L5)
v0 = vk in (L5) → ŵ (c) = w (c) (L6)
(L6) → c has negative weight using w if and only if it has negative weight using ŵ.
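Both parts of the lemma can be checked numerically. The sketch below uses a small graph and an h function that are both invented for illustration (any h works, per the lemma):

```python
# For any path p = <v0, ..., vk>: w_hat(p) = w(p) + h(v0) - h(vk),
# and for any cycle the h terms cancel, so w_hat(c) = w(c).
edges = {(1, 2): 3, (2, 3): -1, (3, 1): 5}   # illustrative weights
h = {1: 7, 2: -2, 3: 4}                      # an arbitrary vertex -> real function
w_hat = {(u, v): wt + h[u] - h[v] for (u, v), wt in edges.items()}

def weight(w, p):
    # total weight of the path p under weight function w
    return sum(w[(p[i], p[i + 1])] for i in range(len(p) - 1))

p = [1, 2, 3]
assert weight(w_hat, p) == weight(edges, p) + h[1] - h[3]
c = [1, 2, 3, 1]                             # a cycle (v0 = vk)
assert weight(w_hat, c) == weight(edges, c)  # negative cycles are preserved
```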
Producing Nonnegative Weights by Reweighting
• Given a Weighted, Directed Graph G = (V, E) with weight function w : E → R,
construct a new Graph G ʹ = (V ʹ, E ʹ) where V ʹ = V U {s} for some new vertex
s ∉ V and E ʹ = E U {(s, v) : v є V}.
• The nonnegative weights ŵ (u, v) are computed as follows:
1. Extend the weight function w so that w (s, v) = 0 for all v є V.
2. Compute h (v) = δ (s, v) for all v є V ʹ.
3. Compute ŵ (u, v) = w (u, v) + h (u) – h (v) ≥ 0.
h (v) ≤ h (u) + w (u, v) (Triangle Inequality) (3.1)
ŵ (u, v) = w (u, v) + h (u) – h (v) (Lemma) (3.2)
(3.1) and (3.2) → ŵ (u, v) = w (u, v) + h (u) – h (v) ≥ 0.
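The three steps can be verified on the graph used in the worked example later in these slides (its edge weights w and the h values of Fig (c)); a minimal sketch:

```python
# Edge weights w of the example graph G and h(v) = delta(s, v) from Fig (c).
w = {(1, 2): 3, (1, 3): 8, (1, 5): -4, (2, 4): 1, (2, 5): 7,
     (3, 2): 4, (4, 1): 2, (4, 3): -5, (5, 4): 6}
h = {1: 0, 2: -1, 3: -5, 4: 0, 5: -4}

# Step 3: w_hat(u, v) = w(u, v) + h(u) - h(v)
w_hat = {(u, v): wt + h[u] - h[v] for (u, v), wt in w.items()}
assert all(wt >= 0 for wt in w_hat.values())        # property 2 holds
print(w_hat[(1, 2)], w_hat[(1, 3)], w_hat[(2, 5)])  # 4 13 10
```

These are exactly the weights that appear on the reweighted graph Fig (e) in the example.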
Producing Nonnegative Weights by Reweighting…
2. h (v) = δ (s, v) for all v є V ʹ (J1)
δ (s, v) is computed using Bellman-Ford Algorithm
• The edges are relaxed in the following order:
(0, 1); (0, 2); (0, 3); (0, 4); (0, 5); (1, 2); (1, 3); (1, 5); (2, 4); (2, 5); (3, 2);
(4, 1); (4, 3); (5, 4);
• The Bellman-Ford Algorithm runs for 5 passes since there are 6 vertices in the
graph G ʹ
• At the end of the 5th pass, δ (s, v), the shortest-path weights from s to all the vertices
v є V ʹ, would be computed, provided G ʹ does not contain a negative-weight cycle.
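The Bellman-Ford step on G ʹ can be sketched as plain Python, with vertex 0 playing the role of the added source s and the edge weights taken from the slides’ example graph:

```python
# Bellman-Ford from the added source s = 0 on G' (the example graph plus
# zero-weight edges (0, v)), computing h(v) = delta(0, v).
INF = float("inf")

def bellman_ford(vertices, edges, s):
    d = {v: INF for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):          # |V'| - 1 = 5 passes here
        for (u, v), wt in edges.items():
            if d[u] + wt < d[v]:
                d[v] = d[u] + wt
    for (u, v), wt in edges.items():            # negative-cycle check
        if d[u] + wt < d[v]:
            return None
    return d

edges = {(1, 2): 3, (1, 3): 8, (1, 5): -4, (2, 4): 1, (2, 5): 7,
         (3, 2): 4, (4, 1): 2, (4, 3): -5, (5, 4): 6}
edges.update({(0, v): 0 for v in range(1, 6)})  # zero-weight edges from s = 0
h = bellman_ford(range(6), edges, 0)
print(h)  # {0: 0, 1: 0, 2: -1, 3: -5, 4: 0, 5: -4}
```

The resulting values match the h (v) table of Fig (c).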
Producing Nonnegative Weights by Reweighting…
Bellman-Ford, Pass 1:
Relax (0, 1): 1.d = min (1.d, 0.d + w (0, 1)) = (ꝏ, 0 + 0) = 0
Relax (0, 2): 2.d = min (2.d, 0.d + w (0, 2)) = (ꝏ, 0 + 0) = 0
Relax (0, 3): 3.d = min (3.d, 0.d + w (0, 3)) = (ꝏ, 0 + 0) = 0
Relax (0, 4): 4.d = min (4.d, 0.d + w (0, 4)) = (ꝏ, 0 + 0) = 0
Relax (0, 5): 5.d = min (5.d, 0.d + w (0, 5)) = (ꝏ, 0 + 0) = 0
Relax (1, 2): 2.d = min (2.d, 1.d + w (1, 2)) = (0, 0 + 3) = 0
Relax (1, 3): 3.d = min (3.d, 1.d + w (1, 3)) = (0, 0 + 8) = 0
Relax (1, 5): 5.d = min (5.d, 1.d + w (1, 5)) = (0, 0 + (-4)) = -4
Relax (2, 4): 4.d = min (4.d, 2.d + w (2, 4)) = (0, 0 + 1) = 0
Relax (2, 5): 5.d = min (5.d, 2.d + w (2, 5)) = (-4, 0 + 7) = -4
Relax (3, 2): 2.d = min (2.d, 3.d + w (3, 2)) = (0, 0 + 4) = 0
Relax (4, 1): 1.d = min (1.d, 4.d + w (4, 1)) = (0, 0 + 2) = 0
Relax (4, 3): 3.d = min (3.d, 4.d + w (4, 3)) = (0, 0 + (-5)) = -5
Relax (5, 4): 4.d = min (4.d, 5.d + w (5, 4)) = (0, (-4) + 6) = 0
After Pass 1: 1.d = 0, 2.d = 0, 3.d = -5, 4.d = 0, 5.d = -4.
[Figure: G ʹ with the d-value at each vertex]
Producing Nonnegative Weights by Reweighting…
Bellman-Ford, Pass 2:
Relax (0, 1): 1.d = min (1.d, 0.d + w (0, 1)) = (0, 0 + 0) = 0
Relax (0, 2): 2.d = min (2.d, 0.d + w (0, 2)) = (0, 0 + 0) = 0
Relax (0, 3): 3.d = min (3.d, 0.d + w (0, 3)) = (-5, 0 + 0) = -5
Relax (0, 4): 4.d = min (4.d, 0.d + w (0, 4)) = (0, 0 + 0) = 0
Relax (0, 5): 5.d = min (5.d, 0.d + w (0, 5)) = (-4, 0 + 0) = -4
Relax (1, 2): 2.d = min (2.d, 1.d + w (1, 2)) = (0, 0 + 3) = 0
Relax (1, 3): 3.d = min (3.d, 1.d + w (1, 3)) = (-5, 0 + 8) = -5
Relax (1, 5): 5.d = min (5.d, 1.d + w (1, 5)) = (-4, 0 + (-4)) = -4
Relax (2, 4): 4.d = min (4.d, 2.d + w (2, 4)) = (0, 0 + 1) = 0
Relax (2, 5): 5.d = min (5.d, 2.d + w (2, 5)) = (-4, 0 + 7) = -4
Relax (3, 2): 2.d = min (2.d, 3.d + w (3, 2)) = (0, (-5) + 4) = -1
Relax (4, 1): 1.d = min (1.d, 4.d + w (4, 1)) = (0, 0 + 2) = 0
Relax (4, 3): 3.d = min (3.d, 4.d + w (4, 3)) = (-5, 0 + (-5)) = -5
Relax (5, 4): 4.d = min (4.d, 5.d + w (5, 4)) = (0, (-4) + 6) = 0
After Pass 2: 1.d = 0, 2.d = -1, 3.d = -5, 4.d = 0, 5.d = -4.
[Figure: G ʹ with the d-value at each vertex]
Producing Nonnegative Weights by Reweighting…
Bellman-Ford, Pass 3:
Relax (0, 1): 1.d = min (1.d, 0.d + w (0, 1)) = (0, 0 + 0) = 0
Relax (0, 2): 2.d = min (2.d, 0.d + w (0, 2)) = (-1, 0 + 0) = -1
Relax (0, 3): 3.d = min (3.d, 0.d + w (0, 3)) = (-5, 0 + 0) = -5
Relax (0, 4): 4.d = min (4.d, 0.d + w (0, 4)) = (0, 0 + 0) = 0
Relax (0, 5): 5.d = min (5.d, 0.d + w (0, 5)) = (-4, 0 + 0) = -4
Relax (1, 2): 2.d = min (2.d, 1.d + w (1, 2)) = (-1, 0 + 3) = -1
Relax (1, 3): 3.d = min (3.d, 1.d + w (1, 3)) = (-5, 0 + 8) = -5
Relax (1, 5): 5.d = min (5.d, 1.d + w (1, 5)) = (-4, 0 + (-4)) = -4
Relax (2, 4): 4.d = min (4.d, 2.d + w (2, 4)) = (0, (-1) + 1) = 0
Relax (2, 5): 5.d = min (5.d, 2.d + w (2, 5)) = (-4, (-1) + 7) = -4
Relax (3, 2): 2.d = min (2.d, 3.d + w (3, 2)) = (-1, (-5) + 4) = -1
Relax (4, 1): 1.d = min (1.d, 4.d + w (4, 1)) = (0, 0 + 2) = 0
Relax (4, 3): 3.d = min (3.d, 4.d + w (4, 3)) = (-5, 0 + (-5)) = -5
Relax (5, 4): 4.d = min (4.d, 5.d + w (5, 4)) = (0, (-4) + 6) = 0
No d-value changes in this pass: the estimates have converged to
1.d = 0, 2.d = -1, 3.d = -5, 4.d = 0, 5.d = -4.
[Figure: G ʹ with the d-value at each vertex]
Producing Nonnegative Weights by Reweighting…
Bellman-Ford, Pass 4: every relaxation gives the same result as in Pass 3; no d-value
changes (1.d = 0, 2.d = -1, 3.d = -5, 4.d = 0, 5.d = -4).
Producing Nonnegative Weights by Reweighting…
Bellman-Ford, Pass 5: again no d-value changes (1.d = 0, 2.d = -1, 3.d = -5, 4.d = 0,
5.d = -4). At the end of the 5th pass, h (v) = δ (s, v) is read off these d-values.
Producing Nonnegative Weights by Reweighting…
Fig (c): h (v) computed for G ʹ of Fig (b)
Vertex v | h (v)
   1     |   0
   2     |  -1
   3     |  -5
   4     |   0
   5     |  -4
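These h values can be checked against the triangle inequality (3.1), which is exactly what makes every reweighted edge nonnegative; a small sketch:

```python
# Check that the h values of Fig (c) satisfy h(v) <= h(u) + w(u, v)
# on every edge of the example graph.
w = {(1, 2): 3, (1, 3): 8, (1, 5): -4, (2, 4): 1, (2, 5): 7,
     (3, 2): 4, (4, 1): 2, (4, 3): -5, (5, 4): 6}
h = {1: 0, 2: -1, 3: -5, 4: 0, 5: -4}
assert all(h[v] <= h[u] + wt for (u, v), wt in w.items())
```

Note that the inequality is tight on edges such as (3, 2) and (1, 5): those are edges that lie on shortest paths from s, and they get ŵ = 0.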
Computing All-Pairs Shortest Paths
• Dijkstra’s algorithm is used to find the shortest paths.
• The algorithm is run once with each vertex v є V as source.
• It takes the reweighted graph G ʹ as input and computes δ̂ (u, v) for all vertices v є V.
• Finally δ (u, v) is computed as: δ (u, v) = δ̂ (u, v) + h (v) – h (u)
• After |V| runs of Dijkstra’s algorithm, the shortest paths between all pairs of vertices
would be computed.
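These bullet steps can be sketched for source vertex 1 of the worked example: a minimal heap-based Dijkstra (not the slides’ pseudocode) on the reweighted weights, followed by the δ recovery formula:

```python
# Heap-based Dijkstra on the reweighted graph, then recovery of delta via
# delta(u, v) = delta_hat(u, v) + h(v) - h(u); shown here for source u = 1.
import heapq

def dijkstra(adj, s):
    d = {s: 0}
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d.get(u, float("inf")):
            continue                     # stale entry in the queue
        for v, wt in adj.get(u, []):
            if du + wt < d.get(v, float("inf")):
                d[v] = du + wt
                heapq.heappush(pq, (d[v], v))
    return d

# Reweighted weights w_hat of the example graph (Fig (e)).
w_hat = {(1, 2): 4, (1, 3): 13, (1, 5): 0, (2, 4): 0, (2, 5): 10,
         (3, 2): 0, (4, 1): 2, (4, 3): 0, (5, 4): 2}
adj = {}
for (u, v), wt in w_hat.items():
    adj.setdefault(u, []).append((v, wt))
h = {1: 0, 2: -1, 3: -5, 4: 0, 5: -4}   # from Fig (c)
d_hat = dijkstra(adj, 1)
delta = {v: d_hat[v] + h[v] - h[1] for v in sorted(d_hat) if v != 1}
print(delta)  # {2: 1, 3: -3, 4: 2, 5: -4}
```

The recovered values agree with the δ (1, ·) computations shown in the example slides.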
Algorithm
• Johnson’s algorithm uses as subroutines both the Bellman-Ford algorithm and
Dijkstra’s algorithm.
• At first, it extends the given weighted directed graph G to produce graph G ʹ.
• It runs the Bellman-Ford algorithm to compute h (v), and reports failure if the
graph G ʹ contains a negative-weight cycle.
• It reweights the edges of G ʹ to get the nonnegative weights ŵ.
• It runs Dijkstra’s algorithm |V| times (once per source vertex) to get the All-Pairs
Shortest Paths δ̂ of G ʹ, and correspondingly computes the shortest paths of the
graph G using: δ (u, v) = δ̂ (u, v) + h (v) – h (u)
• It returns a |V| x |V| matrix D = (duv), where duv = δ (u, v).
Running Time:
• If the min-priority queue in Dijkstra’s algorithm is implemented by a Fibonacci
heap, Johnson’s algorithm runs in O (V² lg V + VE) time.
• A binary min-heap implementation yields a running time of O (VE lg V).
Algorithm…
JOHNSON (G, w)
    Compute G ʹ, where G ʹ.V = G.V U {s}, G ʹ.E = G.E U {(s, v): v є G.V} and
        w (s, v) = 0 for all v є G.V
    If (BELLMAN-FORD (G ʹ, w, s) == FALSE)
        print “Graph contains Negative-Weight Cycle”
    Else
        For (Each Vertex v є G ʹ.V)
            Set h (v) to the value of δ (s, v) computed by BELLMAN-FORD
        For (Each Edge (u, v) є G ʹ.E)
            ŵ (u, v) = w (u, v) + h (u) – h (v)
        Let D = (duv) be a new |V| x |V| matrix
        For (Each Vertex u є G.V)
            Run DIJKSTRA (G, ŵ, u) to compute δ̂ (u, v) for all v є G.V
            For (Each Vertex v є G.V)
                duv = δ̂ (u, v) + h (v) – h (u)
        Return D
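The pseudocode can be turned into a compact executable sketch. This is one possible dictionary-based rendering (it assumes the weight function is given as a dict of edges, and uses a fresh sentinel object as the added source s):

```python
# Executable sketch of JOHNSON(G, w); graph given as a vertex list and an
# edge-weight dict. Returns None on a negative-weight cycle.
import heapq

def johnson(vertices, w):
    # Step 1: G' = G plus a new source s with zero-weight edges to every vertex.
    s = object()                      # a fresh vertex guaranteed not in V
    ew = dict(w)
    ew.update({(s, v): 0 for v in vertices})
    # Step 2: Bellman-Ford from s gives h(v) = delta(s, v).
    d = {v: float("inf") for v in vertices}
    d[s] = 0
    for _ in range(len(vertices)):    # |V'| - 1 passes
        for (u, v), wt in ew.items():
            if d[u] + wt < d[v]:
                d[v] = d[u] + wt
    if any(d[u] + wt < d[v] for (u, v), wt in ew.items()):
        return None                   # negative-weight cycle detected
    h = d
    # Step 3: reweight, then run Dijkstra once per source vertex.
    adj = {v: [] for v in vertices}
    for (u, v), wt in w.items():
        adj[u].append((v, wt + h[u] - h[v]))
    D = {}
    for src in vertices:
        dh = {src: 0}
        pq = [(0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dh.get(u, float("inf")):
                continue              # stale queue entry
            for v, wt in adj[u]:
                if du + wt < dh.get(v, float("inf")):
                    dh[v] = du + wt
                    heapq.heappush(pq, (dh[v], v))
        # Undo the reweighting: delta(src, v) = delta_hat(src, v) + h(v) - h(src).
        D[src] = {v: dv + h[v] - h[src] for v, dv in dh.items()}
    return D

w = {(1, 2): 3, (1, 3): 8, (1, 5): -4, (2, 4): 1, (2, 5): 7,
     (3, 2): 4, (4, 1): 2, (4, 3): -5, (5, 4): 6}
D = johnson([1, 2, 3, 4, 5], w)
print(D[4])  # {4: 0, 1: 2, 3: -5, 2: -1, 5: -2}
```

Run on the slides’ example graph, row D[4] reproduces the δ (4, ·) values computed in the example.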
Example
Fig (e): Reweighted Graph
[Figure: the reweighted graph with edge weights ŵ]
Dijkstra’s algorithm from Source 1:
S = Ø:            Q = {1, 2, 3, 4, 5}; Extract u = 1, Cost = 0
S = {1}:          Relax (1, 2): 2.d = min (2.d, 1.d + ŵ (1, 2)) = (ꝏ, 0 + 4) = 4
                  Relax (1, 3): 3.d = min (3.d, 1.d + ŵ (1, 3)) = (ꝏ, 0 + 13) = 13
                  Relax (1, 5): 5.d = min (5.d, 1.d + ŵ (1, 5)) = (ꝏ, 0 + 0) = 0
                  Q = {2, 3, 4, 5}; Extract u = 5, Cost = 0
S = {1, 5}:       Relax (5, 4): 4.d = min (4.d, 5.d + ŵ (5, 4)) = (ꝏ, 0 + 2) = 2
                  Q = {2, 3, 4}; Extract u = 4, Cost = 2
S = {1, 5, 4}:    Relax (4, 3): 3.d = min (3.d, 4.d + ŵ (4, 3)) = (13, 2 + 0) = 2
                  Q = {2, 3}; Extract u = 3, Cost = 2
S = {1, 5, 4, 3}: Relax (3, 2): 2.d = min (2.d, 3.d + ŵ (3, 2)) = (4, 2 + 0) = 2
                  Q = {2}; Extract u = 2, Cost = 2
Fig (f): δ̂ (u, v) for Graph in Fig (e) from Source 1 using Dijkstra’s Algorithm
δ̂ (u, v) from Source 1
Fig (f): δ̂ (u, v) from Source 1
Example…
Fig (g): δ (u, v) of Graph in Fig (f) from source 1
using duv = δ̂ (u, v) + h (v) – h (u)
(δ̂ (u, v), δ (u, v)) – Values within each vertex.
δ (1, 2) = δ̂ (1, 2) + h (2) – h (1) = 2 – 1 – 0 = 1
δ (1, 3) = δ̂ (1, 3) + h (3) – h (1) = 2 – 5 – 0 = -3
δ (1, 4) = δ̂ (1, 4) + h (4) – h (1) = 2 + 0 – 0 = 2
δ (1, 5) = δ̂ (1, 5) + h (5) – h (1) = 0 – 4 – 0 = -4
δ (u, v) from Source 1
Example…
Fig (e): Reweighted Graph
Dijkstra’s algorithm from Source 2:
S = Ø:            Q = {1, 2, 3, 4, 5}; Extract u = 2, Cost = 0
S = {2}:          Relax (2, 4): 4.d = min (4.d, 2.d + ŵ (2, 4)) = (ꝏ, 0 + 0) = 0
                  Relax (2, 5): 5.d = min (5.d, 2.d + ŵ (2, 5)) = (ꝏ, 0 + 10) = 10
                  Q = {1, 3, 4, 5}; Extract u = 4, Cost = 0
S = {2, 4}:       Relax (4, 1): 1.d = min (1.d, 4.d + ŵ (4, 1)) = (ꝏ, 0 + 2) = 2
                  Relax (4, 3): 3.d = min (3.d, 4.d + ŵ (4, 3)) = (ꝏ, 0 + 0) = 0
                  Q = {1, 3, 5}; Extract u = 3, Cost = 0
S = {2, 4, 3}:    (no d-values change)
                  Q = {1, 5}; Extract u = 1, Cost = 2
S = {2, 4, 3, 1}: Relax (1, 5): 5.d = min (5.d, 1.d + ŵ (1, 5)) = (10, 2 + 0) = 2
                  Q = {5}; Extract u = 5, Cost = 2
δ̂ (u, v) from Source 2
Fig (h): δ̂ (u, v) for Graph in Fig (e) from Source 2 using Dijkstra’s Algorithm
Fig (h): δ̂ (u, v) from Source 2
Example…
δ (2, 1) = δ̂ (2, 1) + h (1) – h (2) = 2 + 0 + 1 = 3
δ (2, 3) = δ̂ (2, 3) + h (3) – h (2) = 0 – 5 + 1 = -4
δ (2, 4) = δ̂ (2, 4) + h (4) – h (2) = 0 + 0 + 1 = 1
δ (2, 5) = δ̂ (2, 5) + h (5) – h (2) = 2 – 4 + 1 = -1
δ (u, v) from Source 2
Fig (i): δ (u, v) of Graph in Fig (f) from source 2
using duv = δ̂ (u, v) + h (v) – h (u)
(δ̂ (u, v), δ (u, v)) – Values within each vertex.
Example…
Fig (e): Reweighted Graph
Dijkstra’s algorithm from Source 3:
S = Ø:            Q = {1, 2, 3, 4, 5}; Extract u = 3, Cost = 0
S = {3}:          Relax (3, 2): 2.d = min (2.d, 3.d + ŵ (3, 2)) = (ꝏ, 0 + 0) = 0
                  Q = {1, 2, 4, 5}; Extract u = 2, Cost = 0
S = {3, 2}:       Relax (2, 4): 4.d = min (4.d, 2.d + ŵ (2, 4)) = (ꝏ, 0 + 0) = 0
                  Relax (2, 5): 5.d = min (5.d, 2.d + ŵ (2, 5)) = (ꝏ, 0 + 10) = 10
                  Q = {1, 4, 5}; Extract u = 4, Cost = 0
S = {3, 2, 4}:    Relax (4, 1): 1.d = min (1.d, 4.d + ŵ (4, 1)) = (ꝏ, 0 + 2) = 2
                  Q = {1, 5}; Extract u = 1, Cost = 2
S = {3, 2, 4, 1}: Relax (1, 5): 5.d = min (5.d, 1.d + ŵ (1, 5)) = (10, 2 + 0) = 2
                  Q = {5}; Extract u = 5, Cost = 2
δ̂ (u, v) from Source 3
Fig (j): δ̂ (u, v) for Graph in Fig (e) from Source 3 using Dijkstra’s Algorithm
Fig (j): δ̂ (u, v) from Source 3
Example…
δ (3, 1) = δ̂ (3, 1) + h (1) – h (3) = 2 + 0 + 5 = 7
δ (3, 2) = δ̂ (3, 2) + h (2) – h (3) = 0 – 1 + 5 = 4
δ (3, 4) = δ̂ (3, 4) + h (4) – h (3) = 0 + 0 + 5 = 5
δ (3, 5) = δ̂ (3, 5) + h (5) – h (3) = 2 – 4 + 5 = 3
δ (u, v) from Source 3
Fig (k): δ (u, v) of Graph in Fig (f) from source 3
using duv = δ̂ (u, v) + h (v) – h (u)
(δ̂ (u, v), δ (u, v)) – Values within each vertex.
Example…
Fig (e): Reweighted Graph
Dijkstra’s algorithm from Source 4:
S = Ø:            Q = {1, 2, 3, 4, 5}; Extract u = 4, Cost = 0
S = {4}:          Relax (4, 1): 1.d = min (1.d, 4.d + ŵ (4, 1)) = (ꝏ, 0 + 2) = 2
                  Relax (4, 3): 3.d = min (3.d, 4.d + ŵ (4, 3)) = (ꝏ, 0 + 0) = 0
                  Q = {1, 2, 3, 5}; Extract u = 3, Cost = 0
S = {4, 3}:       Relax (3, 2): 2.d = min (2.d, 3.d + ŵ (3, 2)) = (ꝏ, 0 + 0) = 0
                  Q = {1, 2, 5}; Extract u = 2, Cost = 0
S = {4, 3, 2}:    Relax (2, 5): 5.d = min (5.d, 2.d + ŵ (2, 5)) = (ꝏ, 0 + 10) = 10
                  Q = {1, 5}; Extract u = 1, Cost = 2
S = {4, 3, 2, 1}: Relax (1, 5): 5.d = min (5.d, 1.d + ŵ (1, 5)) = (10, 2 + 0) = 2
                  Q = {5}; Extract u = 5, Cost = 2
δ̂ (u, v) from Source 4
Fig (l): δ̂ (u, v) for Graph in Fig (e) from Source 4 using Dijkstra’s Algorithm
Fig (m): δ (u, v) of Graph in Fig (f) from source 4
using duv = δ̂ (u, v) + h (v) – h (u)
(δ̂ (u, v), δ (u, v)) – Values within each vertex.
Fig (l): δ̂ (u, v) from Source 4
Example…
δ (4, 1) = δ̂ (4, 1) + h (1) – h (4) = 2 + 0 – 0 = 2
δ (4, 2) = δ̂ (4, 2) + h (2) – h (4) = 0 – 1 – 0 = -1
δ (4, 3) = δ̂ (4, 3) + h (3) – h (4) = 0 – 5 – 0 = -5
δ (4, 5) = δ̂ (4, 5) + h (5) – h (4) = 2 – 4 – 0 = -2
δ (u, v) from Source 4
Example…
Fig (e): Reweighted Graph
Dijkstra’s algorithm from Source 5:
S = Ø:            Q = {1, 2, 3, 4, 5}; Extract u = 5, Cost = 0
S = {5}:          Relax (5, 4): 4.d = min (4.d, 5.d + ŵ (5, 4)) = (ꝏ, 0 + 2) = 2
                  Q = {1, 2, 3, 4}; Extract u = 4, Cost = 2
S = {5, 4}:       Relax (4, 1): 1.d = min (1.d, 4.d + ŵ (4, 1)) = (ꝏ, 2 + 2) = 4
                  Relax (4, 3): 3.d = min (3.d, 4.d + ŵ (4, 3)) = (ꝏ, 2 + 0) = 2
                  Q = {1, 2, 3}; Extract u = 3, Cost = 2
S = {5, 4, 3}:    Relax (3, 2): 2.d = min (2.d, 3.d + ŵ (3, 2)) = (ꝏ, 2 + 0) = 2
                  Q = {1, 2}; Extract u = 2, Cost = 2
S = {5, 4, 3, 2}: (no d-values change)
                  Q = {1}; Extract u = 1, Cost = 4
δ̂ (u, v) from Source 5
Fig (n): δ̂ (u, v) for Graph in Fig (e) from Source 5 using Dijkstra’s Algorithm
Fig (o): δ (u, v) of Graph in Fig (f) from source 5
using duv = δ̂ (u, v) + h (v) – h (u)
(δ̂ (u, v), δ (u, v)) – Values within each vertex.
Fig (n): δ̂ (u, v) from Source 5
Example…
δ (5, 1) = δ̂ (5, 1) + h (1) – h (5) = 4 + 0 + 4 = 8
δ (5, 2) = δ̂ (5, 2) + h (2) – h (5) = 2 – 1 + 4 = 5
δ (5, 3) = δ̂ (5, 3) + h (3) – h (5) = 2 – 5 + 4 = 1
δ (5, 4) = δ̂ (5, 4) + h (4) – h (5) = 2 + 0 + 4 = 6
δ (u, v) from Source 5
Appendix
Triangle inequality
For any edge (u, v) є E, δ (s, v) ≤ δ (s, u) + w (u, v).
Height function - Distance Function
Height of a Vertex - Distance Label. It is related to its distance from the sink t
(terminology from the push-relabel maximum-flow setting).
Relabel - Operation that increases the height of a vertex.
Telescoping series
For any sequence a0, a1, . . . , an, Σ i=1..n (ai – ai-1) = an – a0.
BELLMAN-FORD (G, w, s)
    INITIALIZE-SINGLE-SOURCE (G, s)
    For (i = 1 to | G.V | - 1)
        For (Each Edge (u, v) є G.E)
            RELAX (u, v, w)
    For (Each Edge (u, v) є G.E)
        If (v.d > u.d + w (u, v))
            Return FALSE
    Return TRUE
v.d: Shortest-Path Estimate from Source s to v.
Appendix…
INITIALIZE-SINGLE-SOURCE (G, s)
    For (Each Vertex v є G.V)
        v.d = ꝏ
        v.π = NIL
    s.d = 0

RELAX (u, v, w)
    If (v.d > u.d + w (u, v))
        v.d = u.d + w (u, v)
        v.π = u
v.π: Predecessor of v.
Appendix…
DIJKSTRA (G, w, s)
    INITIALIZE-SINGLE-SOURCE (G, s)
    S = Ø
    Q = G.V
    While (Q ≠ Ø)
        u = EXTRACT-MIN (Q)
        S = S U {u}
        For (Each Vertex v є G.Adj [u])
            RELAX (u, v, w)
S: vertices whose final shortest-path weights from the source s have already been determined.
EXTRACT-MIN (Q): Removes and returns the element of Q with the Smallest key.
References:
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein,
Introduction to Algorithms, Third Edition, The MIT Press, Cambridge,
Massachusetts / London, England.